# $\mathfrak{X}$-elements in multiplicative lattices - A generalization of $J$-ideals, $n$-ideals and $r$-ideals in rings

Sachin Sarode* and Vinayak Joshi**

*Department of Mathematics, Shri Muktanand College Gangapur, Dist. Aurangabad - 431 109, India. <EMAIL_ADDRESS>

**Department of Mathematics, Savitribai Phule Pune University, Pune-411 007, India. <EMAIL_ADDRESS> <EMAIL_ADDRESS>

(Date: January 17, 2021)

###### Abstract.

In this paper, we introduce the concept of an $\mathfrak{X}$-element with respect to an $M$-closed set $\mathfrak{X}$ in multiplicative lattices and study the properties of $\mathfrak{X}$-elements. For particular $M$-closed subsets $\mathfrak{X}$, we define the concepts of $r$-element, $n$-element and $J$-element. These elements generalize the notions of $r$-ideals, $n$-ideals and $J$-ideals of a commutative ring with unity to multiplicative lattices. In fact, we prove that an ideal $I$ of a commutative ring $R$ with unity is an $n$-ideal ($J$-ideal) of $R$ if and only if it is an $n$-element ($J$-element) of $Id(R)$, the ideal lattice of $R$.

###### 2020 Mathematics Subject Classification: Primary 13A15, 13C05, 06F10; Secondary 06A11

Keywords: Multiplicative lattice, prime element, $\mathfrak{X}$-element, $n$-element, $J$-element, $r$-element, commutative ring, $n$-ideal, $r$-ideal, $J$-ideal.

## 1. Introduction

The ideal theory of commutative rings with unity is very rich. Many researchers have defined different classes of ideals, ranging from prime ideals, maximal ideals and primary ideals to the recently introduced $r$-ideals, $n$-ideals and $J$-ideals. More details about $r$-ideals, $n$-ideals and $J$-ideals can be found in Mohamadian [11], Tekir et al. [12] and Khashan and Bani-Ata [9], respectively. Ward and Dilworth [13] introduced the concept of multiplicative lattices to generalize the ideal theory of commutative rings with unity. The concepts of prime, maximal and primary elements are defined analogously. The study of prime elements and their generalizations in a multiplicative lattice has been the main focus of many researchers. Different classes of elements and generalizations of a prime element in multiplicative lattices have been studied; see Burton [1], Joshi and Ballal [4], Jayaram [5], Jayaram and Johnson [6, 7], Jayaram et al. [8], Manjarekar and Bingi [10]. We observed a unifying pattern in the results on $J$-ideals, $n$-ideals and $r$-ideals of rings. This motivates us to introduce a new class of elements, namely $\mathfrak{X}$-elements, in multiplicative lattices. This study thus unifies many of the results proved for these ideals and generalizes them to the multiplicative lattice setting. Further, for particular $M$-closed subsets $\mathfrak{X}$, we define the concepts of $r$-element, $n$-element and $J$-element; this justifies the name $\mathfrak{X}$-element. These elements are generalizations of $r$-ideals, $n$-ideals and $J$-ideals of a commutative ring with unity. In fact, we prove that an ideal $I$ of a commutative ring $R$ with unity is an $n$-ideal ($J$-ideal) of $R$ if and only if it is an $n$-element ($J$-element) of $Id(R)$, the ideal lattice of $R$. Now, we begin with the necessary concepts and terminology.

###### Definition 1.1.

A nonempty subset $I$ of a lattice $L$ is a semi-ideal if $x\leq a\in I$ implies $x\in I$. A semi-ideal $I$ of $L$ is an ideal if $a\vee b\in I$ whenever $a,b\in I$. An ideal (semi-ideal) $I$ of a lattice $L$ is a proper ideal (semi-ideal) of $L$ if $I\neq L$.
A proper ideal (semi-ideal) $I$ is prime if $a\wedge b\in I$ implies $a\in I$ or $b\in I$, and it is minimal if it does not properly contain another prime ideal (prime semi-ideal). For $a\in L$, let $(a]=\{x\in L\colon x\leq a\}$. The set $(a]$ is the principal ideal generated by $a$. A lattice $L$ is complete if for any subset $S$ of $L$, we have $\bigvee S,\bigwedge S\in L$. The smallest element and the greatest element of a lattice $L$ are denoted by 0 and 1 respectively. The concept of multiplicative lattices was introduced by Ward and Dilworth [13] to study the abstract ideal theory of commutative rings. A complete lattice $L$ is a multiplicative lattice if there exists a binary operation "$\cdot$", called the multiplication on $L$, satisfying the following conditions:

1. (1) $a\cdot b=b\cdot a$, for all $a,b\in L$.
2. (2) $a\cdot(b\cdot c)=(a\cdot b)\cdot c$, for all $a,b,c\in L$.
3. (3) $a\cdot(\bigvee_{\alpha}b_{\alpha})=\bigvee_{\alpha}(a\cdot b_{\alpha})$, for all $a,b_{\alpha}\in L$, $\alpha\in\Lambda$ (an index set).
4. (4) $a\cdot 1=a$, for all $a\in L$.

Note that in a multiplicative lattice $L$, $a\cdot b\leq a\wedge b$ for $a,b\in L$. Indeed, $a=a\cdot 1=a\cdot(b\vee 1)=a\cdot b\vee a$, so $a\cdot b\leq a$. Similarly, $a\cdot b\leq b$. This proves that $a\cdot b\leq a\wedge b$. Moreover, if $a\leq b$ in $L$, then $a\cdot c\leq b\cdot c$ for every $c\in L$. Also, if $a\leq b$ and $c\leq d$, then $a\cdot c\leq b\cdot d$. An element $c$ of a complete lattice $L$ is compact if $c\leq\bigvee_{\alpha}a_{\alpha}$, $\alpha\in\Lambda$ ($\Lambda$ an index set), implies $c\leq\bigvee_{i=1}^{n}a_{\alpha_{i}}$ for some $n\in\mathbb{Z}^{+}$. The set of all compact elements of a lattice $L$ is denoted by $L_{*}$. A lattice $L$ is compactly generated or algebraic if for every $x\in L$ there exist $x_{\alpha}\in L_{*}$, $\alpha\in\Lambda$ (an index set), such that $x=\bigvee_{\alpha}x_{\alpha}$, that is, every element is a join of compact elements. Equivalently, if $L$ is a compactly generated lattice and $a\not\leq b$ for $a,b\in L$, then there exists a nonzero compact element $c\in L_{*}$ such that $c\leq a$ and $c\not\leq b$. A multiplicative lattice $L$ is $1$-compact if $1$ is a compact element of $L$. A multiplicative lattice $L$ is compact if every element of $L$ is a compact element. A multiplicative lattice $L$ is a $c$-lattice if $L$ is a 1-compact, compactly generated multiplicative lattice in which the product of two compact elements is compact. Note that the ideal lattice of a commutative ring $R$ with unity is always a $c$-lattice. An element $p$ of a multiplicative lattice $L$ with $p\neq 1$ is prime if $a\cdot b\leq p$ implies $a\leq p$ or $b\leq p$. It is not difficult to prove that an element $p$ (with $p\neq 1$) of a $c$-lattice $L$ is prime if $a\cdot b\leq p$ for $a,b\in L_{*}$ implies $a\leq p$ or $b\leq p$. An element $p$ is said to be a minimal prime element if there is no prime element $q$ such that $q<p$. An ideal $P$ of a commutative ring $R$ with unity is prime if and only if it is a prime element of $Id(R)$, the ideal lattice of $R$. Let $L$ be a $c$-lattice and $a\in L$. Then the radical of $a$ is denoted by $\sqrt{a}$ and given by $\sqrt{a}=\bigvee\{x\in L_{*}\mid x^{n}\leq a$ for some $n\in\mathbb{N}\}$. Note that if a compact element $c$ satisfies $c\leq\sqrt{a}$, then $c^{m}\leq a$ for some $m\in\mathbb{N}$. An element $a$ of a $c$-lattice is a radical element if $a=\sqrt{a}$. A $c$-lattice is called a domain if $0$ is a prime element of $L$.
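For a concrete illustration of these notions, consider the standard example $L=Id(\mathbb{Z})$ with the usual ideal product, where $a\vee b=a+b$, $a\wedge b=a\cap b$, and every ideal is principal, hence compact. The inequality $a\cdot b\leq a\wedge b$ may be strict: $(4\mathbb{Z})\cdot(6\mathbb{Z})=24\mathbb{Z}\leq 12\mathbb{Z}=(4\mathbb{Z})\wedge(6\mathbb{Z})$. For the radical, one gets, e.g., $\sqrt{12\mathbb{Z}}=6\mathbb{Z}$, since $x^{n}\in 12\mathbb{Z}$ for some $n\in\mathbb{N}$ exactly when every prime divisor of $12$ divides $x$.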
A proper element $i$ of a $c$-lattice $L$ is called a primary element if whenever $a\cdot b\leq i$ for some $a,b\in L$, then either $a\leq i$ or $b\leq\sqrt{i}$. A proper element $m$ of a multiplicative lattice is said to be maximal if $m\leq n<1$ implies $m=n$. The set of all maximal elements of $L$ is denoted by Max$(L)$. The Jacobson radical of $L$ is the element $J(L)=\bigwedge\{m\ |\ m\in\text{Max}(L)\}$. It is easy to observe that a maximal element of a $c$-lattice $L$ is prime. A $c$-lattice $L$ is said to be local if $L$ has a unique maximal element $m$. In this case, we write $(L;m)$. A non-empty subset $\mathfrak{X}$ of $L_{*}$ (the set of all compact elements) in a $c$-lattice $L$ is multiplicatively closed if $s_{1}\cdot s_{2}\in\mathfrak{X}$ whenever $s_{1},s_{2}\in\mathfrak{X}$. A non-empty subset $\mathfrak{X}$ of a multiplicative lattice $L$ is called $M$-closed if $a,b\in\mathfrak{X}$ implies $a\cdot b\in\mathfrak{X}$. From the definitions, it is clear that every multiplicatively closed subset of a $c$-lattice $L$ is an $M$-closed subset of $L$. The converse is not true, since an $M$-closed subset need not consist of compact elements. However, if $L$ is a compact lattice or finite, then $L=L_{*}$ and hence the two definitions coincide. Further, if $p$ is a prime element of a $c$-lattice $L$, then $L\setminus(p]$ is an $M$-closed subset of $L$. In a multiplicative lattice $L$, an element $a\in L$ is nilpotent if $a^{n}=0$ for some $n\in\mathbb{Z}^{+}$, and $L$ is reduced if the only nilpotent element is 0. The set of all nilpotent elements of a multiplicative lattice $L$ is denoted by Nil$(L)$. The set of zero-divisors of $L$ is $Z(L)=\{x\in L\ |\ x\cdot y=0\text{ for some }y\in L\setminus\{0\}\}$. Clearly, Nil$(L)\subseteq Z(L)$. Let $L$ be a multiplicative lattice and $a,b\in L$. Then $(a:b)=\bigvee\{x\ |\ x\cdot b\leq a\}$. Note that $x\cdot b\leq a\Leftrightarrow x\leq(a:b)$. Clearly, $a\leq(a:b)$ and $(a:b)\cdot b\leq a$ for $a,b\in L$. If $a\in L$, then $ann_{L}(a)=\bigvee\{x\in L\ |\ a\cdot x=0\}$. For undefined concepts in lattices, see Grätzer [3].

## 2. $\mathfrak{X}$-Elements in Multiplicative Lattices

We introduce the concept of an $\mathfrak{X}$-element in multiplicative lattices.

###### Definition 2.1.

Let $L$ be a multiplicative lattice and $\mathfrak{X}$ be an $M$-closed subset of $L$. A proper element $i$ of a multiplicative lattice $L$ is called an $\mathfrak{X}$-element if $a\cdot b\leq i$ with $a\notin\mathfrak{X}$ implies $b\leq i$ for $a,b\in L$.

###### Example 2.2.

Consider a lattice $K$ whose Hasse diagram is shown in Figure 1. On $K$, define the trivial multiplication $x\cdot y=0=y\cdot x$ for every $x,y\notin\{1\}$ and $x\cdot 1=x=1\cdot x$ for every $x\in K$. It is easy to see that $K$ is a multiplicative lattice. Moreover, $K$ is non-reduced. If we take $\mathfrak{X}=\{0,a,b,c,d\}$, then every proper element of $K$ is an $\mathfrak{X}$-element of $K$.

Figure 1. A multiplicative lattice on $K=\{0,a,b,c,d,1\}$ in which every proper element is an $\mathfrak{X}$-element
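To verify the claim in Example 2.2, observe that $x\notin\mathfrak{X}=\{0,a,b,c,d\}$ forces $x=1$; hence $x\cdot y\leq i$ gives $y=1\cdot y=x\cdot y\leq i$ directly, so the defining condition of an $\mathfrak{X}$-element holds for every proper element $i$ of $K$.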
###### Remark 2.3.

Whether a proper element of a multiplicative lattice $L$ is an $\mathfrak{X}$-element depends on the $M$-closed subset $\mathfrak{X}$ under consideration. If $x$ is an $\mathfrak{X}_{1}$-element with respect to an $M$-closed subset $\mathfrak{X}_{1}$, then $x$ may or may not be an $\mathfrak{X}_{2}$-element with respect to an $M$-closed subset $\mathfrak{X}_{2}$ different from $\mathfrak{X}_{1}$. Also, note that if $L$ is a multiplicative lattice and $\mathfrak{X}=\{1\}$ is an $M$-closed subset of $L$, then $L$ does not contain an $\mathfrak{X}$-element.

###### Lemma 2.4.

Let $L$ be a multiplicative lattice and $\mathfrak{X}$ be an $M$-closed subset of $L$. If $i$ is an $\mathfrak{X}$-element of $L$, then $(i]\subseteq\mathfrak{X}$. In particular, if $(i]=\mathfrak{X}$, then $i$ is an $\mathfrak{X}$-element of $L$ if and only if $i$ is a prime element of $L$.

###### Proof.

Suppose $i$ is an $\mathfrak{X}$-element of a multiplicative lattice $L$ and let $x\in(i]$. Suppose on the contrary that $x\notin\mathfrak{X}$. Clearly, $x\cdot 1\leq i$ with $x\notin\mathfrak{X}$. Since $i$ is an $\mathfrak{X}$-element, we get $1\leq i$, a contradiction to the fact that $i$ is a proper element of $L$. Therefore $x\in\mathfrak{X}$ and hence $(i]\subseteq\mathfrak{X}$. Now, we prove the "in particular" part. Suppose that $(i]=\mathfrak{X}$ and $i$ is an $\mathfrak{X}$-element of $L$. Let $a,b\in L$ such that $a\cdot b\leq i$ and $a\nleq i$, i.e., $a\notin\mathfrak{X}$. As $i$ is an $\mathfrak{X}$-element, $b\leq i$. So $i$ is a prime element of $L$. Conversely, suppose that $(i]=\mathfrak{X}$ and $i$ is a prime element of $L$. Let $a,b\in L$ such that $a\cdot b\leq i$ with $a\not\in\mathfrak{X}=(i]$. By the primeness of $i$, $b\leq i$. Thus $i$ is an $\mathfrak{X}$-element of $L$. ∎

###### Remark 2.5.

The converse of Lemma 2.4 need not be true in general, i.e., if $i$ is a proper element of a multiplicative lattice $L$ such that $i\in\mathfrak{X}$, then $i$ need not be an $\mathfrak{X}$-element of $L$. Consider the ideal lattice $L$ of the ring $\mathbb{Z}_{12}$. Clearly, $L$ is a non-reduced lattice. Put $\mathfrak{X}=\{(0),(6)\}$. Then $(0),(6)\in\mathfrak{X}$, but $(0)$ and $(6)$ are not $\mathfrak{X}$-elements of $L$.

###### Lemma 2.6.

Let $L$ be a multiplicative lattice and $\mathfrak{X}$ and $\mathfrak{X^{\prime}}$ be $M$-closed subsets of $L$ such that $\mathfrak{X}\subseteq\mathfrak{X^{\prime}}$. If $i$ is an $\mathfrak{X}$-element of $L$, then $i$ is an $\mathfrak{X^{\prime}}$-element of $L$.

###### Proof.

Follows from the definition of an $\mathfrak{X}$-element. ∎

###### Lemma 2.7.

Let $(L;m)$ be a local lattice. Then every proper element of $L$ is an $\mathfrak{X}$-element for $\mathfrak{X}=(m]$.

###### Proof.

Let $a\cdot b\leq i$ and $a\notin\mathfrak{X}=(m]$. Since $L$ is local, $a=1$. Hence $b=a\cdot b\leq i$. This proves that $i$ is an $\mathfrak{X}$-element. ∎

###### Lemma 2.8.

Assume that every proper element of a $c$-lattice $L$ is an $\mathfrak{X}$-element, where $\mathfrak{X}=(m]$ and $m\in L\setminus\{1\}$. Then $m$ is the unique maximal element of $L$.

###### Proof.

Let $i$ be a proper element of $L$, which is then an $\mathfrak{X}$-element. By Lemma 2.4, $i\leq m$. This is true for all proper elements $i$ of $L$; in particular, $m^{\prime}\leq m$ for every maximal element $m^{\prime}$ of $L$, and hence $m^{\prime}=m$ by maximality. Since $m$ itself is proper, this proves that $m$ is the unique maximal element of $L$. ∎

###### Lemma 2.9.

Let $L$ be a multiplicative lattice and $\mathfrak{X}$ be an $M$-closed subset of $L$. If $\{i_{j}\}$, where $j\in\Lambda$ (an index set), is a non-empty set of $\mathfrak{X}$-elements of $L$, then $\bigwedge_{j}i_{j}$ is also an $\mathfrak{X}$-element.

###### Proof.

Obvious. ∎

###### Remark 2.10.

The join of two $\mathfrak{X}$-elements is not necessarily an $\mathfrak{X}$-element.
Consider the ideal lattice $L$ of $\mathbb{Z}_{15}$ with $\mathfrak{X}=\{(0),(3),(5)\}$. Then $(3),(5)$ are $\mathfrak{X}$-elements of $L$, but $(3)\vee(5)=(1)$ is not an $\mathfrak{X}$-element.

###### Lemma 2.11.

Let $i$ be a proper element of a $c$-lattice $L$ and $\mathfrak{X}$ be an $M$-closed subset of $L$. Then $i$ is an $\mathfrak{X}$-element of $L$ if and only if $i=(i:a)$ for all $a\notin\mathfrak{X}$. In particular, if $i$ is an $\mathfrak{X}$-element of $L$, then $(i:a)$ is an $\mathfrak{X}$-element of $L$ for all $a\notin\mathfrak{X}$.

###### Proof.

Suppose $i$ is an $\mathfrak{X}$-element of $L$ and let $a\notin\mathfrak{X}$. We always have $i\leq(i:a)$. Let $x$ be any compact element such that $x\leq(i:a)$. Therefore $x\cdot a\leq i$. Since $i$ is an $\mathfrak{X}$-element and $a\notin\mathfrak{X}$, we get $x\leq i$. Hence $(i:a)\leq i$, as $L$ is a $c$-lattice. Therefore $i=(i:a)$ for all $a\notin\mathfrak{X}$. Conversely, suppose that $i=(i:a)$ for all $a\notin\mathfrak{X}$. Let $c,d\in L$ such that $c\cdot d\leq i$ with $c\notin\mathfrak{X}$. We claim that $d\leq i$. Since $c\cdot d\leq i$, we have $d\leq(i:c)$. As $c\notin\mathfrak{X}$, by the assumption $(i:c)=i$, we have $d\leq i$. Therefore, $i$ is an $\mathfrak{X}$-element of $L$. Further, the "in particular" part is easy to observe. ∎
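As an illustration of Lemma 2.11, take $L=Id(\mathbb{Z}_{12})$ and $\mathfrak{X}=\{(0),(6)\}$ as in Remark 2.5. For $i=(0)$ and $a=(2)\notin\mathfrak{X}$, a direct computation gives $\big((0):(2)\big)=(6)\neq(0)$, so the criterion $i=(i:a)$ fails, confirming that $(0)$ is not an $\mathfrak{X}$-element of $L$.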
###### Lemma 2.12.

Let $i$ be a proper element of a multiplicative lattice $L$ and $\mathfrak{X}$ be an $M$-closed subset of $L$. Then the following statements are equivalent.

1. (1) $i$ is an $\mathfrak{X}$-element of $L$.
2. (2) $(i:a)$ is an $\mathfrak{X}$-element of $L$ for every $a\nleq i$.
3. (3) $\bigl((i:a)\bigr]\subseteq\mathfrak{X}$ for all $a\nleq i$.

###### Proof.

$(1)\implies(2)$: Suppose that $i$ is an $\mathfrak{X}$-element of $L$ and let $j\in L$ with $j\nleq i$. Clearly, $(i:j)\not=1$, since otherwise $j=1\cdot j\leq i$. Let $a,b\in L$ such that $a\cdot b\leq(i:j)$ with $a\notin\mathfrak{X}$. So $a\cdot b\cdot j\leq i$. As $i$ is an $\mathfrak{X}$-element and $a\notin\mathfrak{X}$, we get $b\cdot j\leq i$, i.e., $b\leq(i:j)$. Therefore $(i:j)$ is an $\mathfrak{X}$-element of $L$.

$(2)\implies(3)$: follows from Lemma 2.4.

$(3)\implies(1)$: Suppose that $((i:a)]\subseteq\mathfrak{X}$ for all $a\nleq i$. Let $c,d\in L$ such that $c\cdot d\leq i$ with $c\notin\mathfrak{X}$. We claim that $d\leq i$. Suppose $d\nleq i$. So by the assumption and $c\leq(i:d)$, we have $c\in\mathfrak{X}$, a contradiction. Therefore $d\leq i$. Hence $i$ is an $\mathfrak{X}$-element of $L$. ∎

###### Lemma 2.13.

Let $L$ be a multiplicative lattice and $\mathfrak{X}$ be an $M$-closed subset of $L$. If $i$ is a maximal $\mathfrak{X}$-element of $L$, then $i$ is a prime element of $L$.

###### Proof.

Suppose $i$ is a maximal $\mathfrak{X}$-element of $L$. Let $a,b\in L$ such that $a\cdot b\leq i$ and $a\nleq i$. Since $i$ is an $\mathfrak{X}$-element and $a\nleq i$, by Lemma 2.12, $(i:a)$ is an $\mathfrak{X}$-element of $L$. As $i$ is a maximal $\mathfrak{X}$-element of $L$ and $i\leq(i:a)$, we get $(i:a)=i$. Therefore $b\leq i$. ∎

###### Lemma 2.14.

Let $j$ be a proper element of a $c$-lattice $L$ and $\mathfrak{X}=(j]$ be an $M$-closed subset of $L$. Then a proper element $i$ is an $\mathfrak{X}$-element if and only if the following condition $(*)$ holds:

$(*)$: for all $a,b\in L_{*}$ (the set of all compact elements), $a\cdot b\leq i$ with $a\notin\mathfrak{X}$ implies $b\leq i$.

###### Proof.

Assume that the condition $(*)$ holds. Let $a,b\in L$ such that $a\cdot b\leq i$ with $a\notin\mathfrak{X}$. As $L$ is a $c$-lattice and $a\nleq j$, there exists a nonzero $x\in L_{*}$ such that $x\leq a$ and $x\nleq j$. Now, let $y$ be a compact element such that $y\leq b$. As $x\cdot y\leq a\cdot b\leq i$ with $x\notin\mathfrak{X}=(j]$, the condition $(*)$ gives $y\leq i$. Thus every compact element below $b$ is below $i$; since $L$ is compactly generated, we get $b\leq i$. Hence $i$ is an $\mathfrak{X}$-element. The converse is obvious. ∎

###### Lemma 2.15.

Let $L$ be a $c$-lattice and $\mathfrak{X}=(j]$ be an $M$-closed subset of $L$. Then for a prime element $i$ of $L$ with $j\leq i$, $i$ is an $\mathfrak{X}$-element if and only if $i=j$.

###### Proof.

Assume that $i$ is a prime element which is also an $\mathfrak{X}$-element of $L$. By Lemma 2.4, we have $i\leq j$. This, together with $j\leq i$, gives $i=j$. Conversely, assume that $i=j$ and $i$ is prime. To prove that $i$ is an $\mathfrak{X}$-element, assume that $a\cdot b\leq i$ and $a\notin\mathfrak{X}$, i.e., $a\nleq j=i$. Then the primeness of $i$ gives $b\leq i$. ∎

###### Corollary 2.16.

Let $L$ be a $c$-lattice and $\mathfrak{X}=(j]$ be an $M$-closed subset of $L$. Then for a maximal element $i$ of $L$, $i$ is an $\mathfrak{X}$-element if and only if $i=j$.

###### Proof.

Assume that $i$ is a maximal element which is also an $\mathfrak{X}$-element of $L$. By Lemma 2.4, we have $i\leq j$. Thus by the maximality of $i$, we have $i=j$. Conversely, assume that $i=j$. Since $i$ is maximal, it is prime. Thus the result follows from Lemma 2.15. ∎

###### Theorem 2.17.

Let $L$ be a $c$-lattice and $\mathfrak{X}=(j]$ be an $M$-closed subset of $L$, where $j=\bigwedge P$ and $P$ is the set of all prime elements of $L$. Then the following statements are equivalent.

1. (1) There exists an $\mathfrak{X}$-element in $L$.
2. (2) $j$ is a prime element of $L$.

Moreover, if the set $Min(L)$ of all minimal prime elements of $L$ is finite, then the above conditions are equivalent to $(*)$: $|Min(L)|=1$.

###### Proof.

(1) $\Rightarrow$ (2): Suppose there exists an $\mathfrak{X}$-element $i$ in $L$. Let $\beta=\{x\ |\ x$ is an $\mathfrak{X}$-element in $L\}$. As $i\in\beta$, $\beta$ is a non-empty poset under the induced partial order of $L$. Let $\mathcal{C}=\{j_{\alpha}\ |\ \alpha\in\Lambda\}$ be a non-empty chain in $\beta$ (the empty chain is bounded above by $i$) and put $u=\bigvee_{\alpha}j_{\alpha}$. We claim that $u\in\beta$, i.e., that $u$ is an $\mathfrak{X}$-element of $L$. First, $u\neq 1$: since $L$ is $1$-compact, $1\leq u$ would give $1\leq j_{\alpha_{1}}\vee\cdots\vee j_{\alpha_{n}}$, which equals some $j_{\gamma}$ as $\mathcal{C}$ is a chain, contradicting the properness of $j_{\gamma}$. Now let $a,b\in L_{*}$ such that $a\cdot b\leq u$ and $a\notin\mathfrak{X}$. As $L$ is a $c$-lattice and $a,b\in L_{*}$, we get $a\cdot b\in L_{*}$. Therefore $a\cdot b\leq j_{\alpha_{1}}\vee j_{\alpha_{2}}\vee\cdots\vee j_{\alpha_{n}}$ for some $j_{\alpha_{1}},\ldots,j_{\alpha_{n}}\in\mathcal{C}$. Since $\mathcal{C}$ is a chain, we must have $j_{\alpha_{1}}\vee\cdots\vee j_{\alpha_{n}}=j_{\gamma}$ for some $\gamma$. Thus $a\cdot b\leq j_{\gamma}$ with $a\notin\mathfrak{X}$. Since $j_{\gamma}$ is an $\mathfrak{X}$-element of $L$, we have $b\leq j_{\gamma}\leq u$. Hence, by Lemma 2.14, $u$ is an $\mathfrak{X}$-element of $L$. By Zorn's Lemma, $\beta$ has a maximal element $w$, that is, $w$ is a maximal $\mathfrak{X}$-element. By Lemma 2.13, $w$ is a prime element of $L$, that is, $w\in P$. Hence $j\leq w$. Also, as $w$ is an $\mathfrak{X}$-element, Lemma 2.4 gives $w\leq j$. Thus $j=w$. Hence $j$ is prime.

(2) $\Rightarrow$ (1): Let $j$ be a prime element. By Lemma 2.15, $j$ is an $\mathfrak{X}$-element.

We now prove $(2)\iff(*)$. Assume that $j$ is a prime element. Since $j$ is the meet of all prime elements of $L$ and the set $Min(L)$ is finite, we may write $\displaystyle j=\bigwedge_{k=1}^{n}i_{k}$ with $i_{k}\in Min(L)$. By the primeness of $j$ and $j\leq i_{k}$ for all $k$, the minimality of each $i_{k}$ gives $j=i_{k}$ for all $k$. Thus $|Min(L)|=1$. Conversely, assume that $|Min(L)|=1$. Let $p$ be the only minimal prime element in $L$. Then $p\leq i_{k}$ for every prime element $i_{k}$, and hence $j=p$. This proves that $j$ is prime. ∎
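To see Theorem 2.17 at work, take $L=Id(\mathbb{Z}_{12})$. Its prime elements are $(2)$ and $(3)$, so $j=(2)\wedge(3)=(6)$ and $\mathfrak{X}=(j]=\{(0),(6)\}$. Here $j$ is not prime, since $(2)\cdot(3)=(6)\leq(6)$ while $(2)\nleq(6)$ and $(3)\nleq(6)$; accordingly, $|Min(L)|=2$ and, by Theorem 2.17, $L$ has no $\mathfrak{X}$-element, in agreement with Remark 2.5.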
###### Lemma 2.18 (L. Fuchs and R. Reis [2, Lemma 2.5]).

Let $L$ be a $c$-lattice and $a\in L$. Then the radical of $a$ is given by $\sqrt{a}=\bigwedge\{p\in L\mid p\text{ is a minimal prime element over }a\}$.

###### Lemma 2.19.

Let $L$ be a $c$-lattice and $\mathfrak{X}=(j]$ be an $M$-closed subset of $L$, where $j=\bigwedge\{i_{k}\ |\ i_{k}\text{ is a prime element of }L\}$. Then a proper element $i$ is an $\mathfrak{X}$-element of $L$ if and only if $i$ is a primary element of $L$ and $\sqrt{i}=j$.

###### Proof.

Suppose $i$ is an $\mathfrak{X}$-element of $L$. By Theorem 2.17, $j$ is a prime element of $L$; in particular, $\sqrt{j}=j$. By Lemma 2.4, $(i]\subseteq\mathfrak{X}$, hence $i\leq j$ and $\sqrt{i}\leq\sqrt{j}=j$. Also, by Lemma 2.18, $j\leq\sqrt{i}$, since $j$ is the meet of all prime elements while $\sqrt{i}$ is the meet of the minimal primes over $i$. Hence $j=\sqrt{i}$. Now let $a,b\in L$ such that $a\cdot b\leq i$ and $b\nleq\sqrt{i}=j$, that is, $b\notin\mathfrak{X}$. Since $i$ is an $\mathfrak{X}$-element and $b\cdot a\leq i$, we get $a\leq i$. Hence $i$ is a primary element of $L$. Conversely, suppose that $i$ is a primary element of $L$ and $\sqrt{i}=j$. Let $a,b\in L$ such that $a\cdot b\leq i$ and $a\notin\mathfrak{X}$, i.e., $a\nleq j=\sqrt{i}$. Applying the primary condition to $b\cdot a\leq i$, either $b\leq i$ or $a\leq\sqrt{i}$; the latter fails, so $b\leq i$. Thus $i$ is an $\mathfrak{X}$-element of $L$. ∎

###### Lemma 2.20.

Let $L$ be a $c$-lattice and $\mathfrak{X}=(j]$ be an $M$-closed subset of $L$, where $j=\bigwedge\{i_{k}\ |\ i_{k}\text{ is a maximal element of }L\}$. Then a proper element $i$ is an $\mathfrak{X}$-element of $L$ if and only if $i$ satisfies the following condition $(*)$: if $a\cdot b\leq i$, then $a\leq i$ or $b\leq m$, where $m=\bigwedge\{i_{k}\ |\ i_{k}\text{ is a maximal element with }i_{k}\geq i\}$ and $m=j$.

###### Proof.

Follows on similar lines as that of Lemma 2.19. ∎

###### Lemma 2.21.

Let $L$ be a $c$-lattice and $\mathfrak{X}$ be an $M$-closed subset of $L$. Let $k$ be an element of $L$ such that $k\notin\mathfrak{X}$. If $i_{1}$ and $i_{2}$ are $\mathfrak{X}$-elements with $i_{1}k=i_{2}k$, then $i_{1}=i_{2}$. Further, if $i$ is an element such that $ik$ is an $\mathfrak{X}$-element, then $ik=i$.

###### Proof.

Clearly, $i_{1}k=i_{2}k\leq i_{2}$ with $k\notin\mathfrak{X}$. Since $i_{2}$ is an $\mathfrak{X}$-element, we have $i_{1}\leq i_{2}$. On similar lines we can prove that $i_{2}\leq i_{1}$. Thus $i_{1}=i_{2}$. Now, we prove the "further" part. Since $ik$ is an $\mathfrak{X}$-element and $k\cdot i\leq ik$ with $k\notin\mathfrak{X}$, we have $i\leq ik$. The reverse inequality is always true. Hence $i=ik$. ∎

It is well-known that a proper ideal $P$ of a commutative ring $R$ with unity is prime if and only if $R\setminus P$ is a multiplicatively closed subset of $R$. Analogously, a proper element $p$ of a $c$-lattice $L$ is prime if and only if $L\setminus(p]$ is an $M$-closed subset of $L$. To characterize $\mathfrak{X}$-elements, we define $\mathfrak{X}$-multiplicatively closed subsets of $L$ as follows.

###### Definition 2.22.
Let $\mathfrak{X}$ be an $M$-closed subset of a $c$-lattice $L$. A non-empty subset $A$ of $L_{*}$ with $(L_{*}\setminus\mathfrak{X})\subseteq A$ is called an $\mathfrak{X}$-multiplicatively closed subset of $L$ if $a_{1}\in(L_{*}\setminus\mathfrak{X})$ and $a_{2}\in A$ imply $a_{1}\cdot a_{2}\in A$.

###### Remark 2.23.

If $A$ is an $\mathfrak{X}$-multiplicatively closed subset of $L$, then $A$ need not be a multiplicatively closed subset of $L$. Consider the multiplicative lattice $K$ given in Example 2.2. If we take $\mathfrak{X}=\{0,a,b,c,d\}$, then $A=\{1,c,d\}$ is an $\mathfrak{X}$-multiplicatively closed subset of $K$, but $A=\{1,c,d\}$ is not a multiplicatively closed subset of $K$, as $c,d\in A$ and $c\cdot d=0\notin A$. Also, if $A$ is a multiplicatively closed subset of $L$, then $A$ need not be an $\mathfrak{X}$-multiplicatively closed subset of $L$ for some $M$-closed subset $\mathfrak{X}$. Consider the ideal lattice $L$ of the ring $\mathbb{Z}_{12}$. Then $\{(1)\}$ is a multiplicatively closed subset of $L$, but $\{(1)\}$ is not an $\mathfrak{X}$-multiplicatively closed subset of $L$ for $\mathfrak{X}=\{(0),(6)\}$.

###### Lemma 2.24.

Let $i$ be a proper element of a $c$-lattice $L$ and $\mathfrak{X}$ be an $M$-closed subset of $L$. If $i$ is an $\mathfrak{X}$-element of $L$, then $L_{*}\setminus(i]$ is an $\mathfrak{X}$-multiplicatively closed subset of $L$. The converse is true if either $\mathfrak{X}=(j]$ is an $M$-closed subset of a $c$-lattice $L$ or $L$ is a compact lattice.

###### Proof.

Suppose that $i$ is an $\mathfrak{X}$-element of $L$. By Lemma 2.4, $(L_{*}\setminus\mathfrak{X})\subseteq(L_{*}\setminus(i])$. Let $a\in(L_{*}\setminus\mathfrak{X})$ and $b\in(L_{*}\setminus(i])$. We claim that $a\cdot b\in(L_{*}\setminus(i])$. Suppose on the contrary that $a\cdot b\notin(L_{*}\setminus(i])$. So $a\cdot b\leq i$. Since $i$ is an $\mathfrak{X}$-element of $L$ and $a\notin\mathfrak{X}$, we get $b\leq i$, a contradiction to $b\in(L_{*}\setminus(i])$. Thus $a\cdot b\in(L_{*}\setminus(i])$. Consequently, $L_{*}\setminus(i]$ is an $\mathfrak{X}$-multiplicatively closed subset of $L$. Conversely, suppose that $\mathfrak{X}=(j]$ is an $M$-closed subset of a $c$-lattice $L$ and $L_{*}\setminus(i]$ is an $\mathfrak{X}$-multiplicatively closed subset of $L$. Therefore $\bigl(L_{*}\setminus(j]\bigr)\subseteq\bigl(L_{*}\setminus(i]\bigr)$. In view of Lemma 2.14, to show that $i$ is an $\mathfrak{X}$-element of $L$, it is enough to show that for $a,b\in L_{*}$ such that $a\cdot b\leq i$ with $a\not\in\mathfrak{X}=(j]$, we have $b\leq i$. If $b\in(L_{*}\setminus(i])$, then, as $(L_{*}\setminus(i])$ is an $\mathfrak{X}$-multiplicatively closed subset of $L$, we get $a\cdot b\in(L_{*}\setminus(i])$, a contradiction to $a\cdot b\leq i$. Therefore $i$ is an $\mathfrak{X}$-element of $L$. If $L$ is a compact lattice and $\mathfrak{X}$ is an $M$-closed subset of $L$, then the converse follows similarly. ∎

###### Theorem 2.25.

Let $j$ be a proper element of a $c$-lattice $L$ and $\mathfrak{X}=(j]$ be an $M$-closed subset of $L$. Suppose $a\in L$ and $t\nleq a$ for all $t\in A$, where $A$ is an $\mathfrak{X}$-multiplicatively closed subset. Then there is an $\mathfrak{X}$-element $i$ of $L$ such that $a\leq i$ and $i$ is maximal with respect to $t\nleq i$ for all $t\in A$.

###### Proof.

Let $R=\{c\in L\ |\ a\leq c\text{ and }t\nleq c$ for all $t\in A\}$. Clearly, $a\in R$ and hence $R$ is a non-empty poset under the induced partial order of $L$.
Let $\mathcal{C}$ be a chain in $R$ and $w=\bigvee\{d\ |\ d\in\mathcal{C}\}$. We claim that $w\in R$. Suppose on the contrary that $w\notin R$, that is, $t\leq w$ for some $t\in A$. Since $t\in A\subseteq L_{*}$ is compact, we have $t\leq d_{1}\vee d_{2}\vee\dots\vee d_{n}$ for some $d_{1},d_{2},\cdots,d_{n}\in\mathcal{C}$. As $\mathcal{C}$ is a chain, we must have $d_{1}\vee d_{2}\vee\dots\vee d_{n}=d_{i}$ for some $i$, where $1\leq i\leq n$. Thus $t\leq d_{i}$, a contradiction. Thus $w\in R$. Hence by Zorn's lemma, there is a maximal element $i$ of $R$. Hence $a\leq i$ and $t\not\leq i$ for all $t\in A$. In view of Lemma 2.14, to prove that $i$ is an $\mathfrak{X}$-element of $L$, assume that $x,y\in L_{*}$ are such that $x\cdot y\leq i$ with $x\notin\mathfrak{X}=(j]$. Clearly, $y\leq(i:x)$ and, if $i=(i:x)$, then $y\leq i$ and we are done. Hence assume that $i<(i:x)$. Since $i$ is a maximal element of $R$ and $a\leq i\leq(i:x)$, we get $(i:x)\not\in R$. Hence, there exists a compact element $t_{1}\in A$ such that $t_{1}\leq(i:x)$, that is, $x\cdot t_{1}\leq i$. But $x\cdot t_{1}\in A$, as $A$ is an $\mathfrak{X}$-multiplicatively closed subset, $x\in(L_{*}\setminus(j])$ and $t_{1}\in A$. Thus there exists an element $t_{2}=x\cdot t_{1}\in A$ such that $t_{2}\leq i$, a contradiction to $i\in R$. Hence $i$ is an $\mathfrak{X}$-element of $L$. ∎

###### Theorem 2.26.

Let $L$ be a compact lattice and $\mathfrak{X}$ be an $M$-closed subset of $L$. Suppose $a\in L$ and $t\nleq a$ for all $t\in A$, where $A$ is an $\mathfrak{X}$-multiplicatively closed subset. Then there is an $\mathfrak{X}$-element $i$ of $L$ such that $a\leq i$ and $i$ is maximal with respect to $t\nleq i$ for all $t\in A$.

###### Proof.

The proof follows along the same lines as that of Theorem 2.25. ∎

## 3. Applications of $\mathfrak{X}$-Elements

As already mentioned in the introduction, there is a unifying pattern in the results on $J$-ideals, $n$-ideals and $r$-ideals of a commutative ring with unity. In this section, we prove these results for $J$-ideals, $n$-ideals and $r$-ideals by suitably choosing the set $\mathfrak{X}$ in multiplicative lattices. Hence most of the results of the papers [9], [11] and [12] become corollaries of our results. First, we quote the definitions of $r$-ideals, $n$-ideals and $J$-ideals using the sets $Z(R)$, $N(R)$ and $J(R)$ (respectively $Z(L)$, $N(L)$ and $J(L)$), that is, the set of zero-divisors, the nil-radical and the Jacobson radical of a commutative ring $R$ (of a multiplicative lattice $L$).

###### Definition 3.1.

A proper ideal $I$ of a commutative ring $R$ with unity is called:

* • an $r$-ideal, if $ab\in I$ with $ann_{R}(a)=(0)$ implies $b\in I$ for all $a,b\in R$ (see Mohamadian [11]).
* • an $n$-ideal, if $ab\in I$ with $a\not\in\sqrt{0}$ implies $b\in I$ for all $a,b\in R$ (see U. Tekir et al. [12]).
* • a $J$-ideal, if $ab\in I$ with $a\not\in J(R)$ implies $b\in I$ for all $a,b\in R$ (see Khashan and Bani-Ata [9]).

Analogously, we define the concepts of $r$-element, $n$-element and $J$-element in multiplicative lattices.

###### Definition 3.2.

A proper element $i$ of a multiplicative lattice $L$ is called:

* • an $r$-element, if $a\cdot b\leq i$ with $a\not\in Z(L)$ implies $b\leq i$ for all $a,b\in L_{*}$.
* • an $n$-element, if $a\cdot b\leq i$ with $a\notin\bigl(\sqrt{0}\,\bigr]$ implies $b\leq i$ for all $a,b\in L_{*}$.
* • a $J$-element, if $a\cdot b\leq i$ with $a\not\in\bigl(J(L)\bigr]$ implies $b\leq i$ for all $a,b\in L_{*}$.
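For instance, in $R=\mathbb{Z}_{8}$ one has $\sqrt{(0)}=J(R)=(2)$, and each proper ideal $(0)$, $(4)$, $(2)$ is an $n$-ideal (hence also a $J$-ideal): if $ab\in I$ with $a\notin(2)$, then $a$ is odd, hence a unit of $\mathbb{Z}_{8}$, and $b\in I$. Correspondingly, every proper element of the multiplicative lattice $Id(\mathbb{Z}_{8})$ is an $n$-element and a $J$-element, as the results below make precise.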
We quote the following three results to prove that a proper ideal $I$ of a commutative ring $R$ with unity is an $r$-ideal, $n$-ideal and $J$-ideal if and only if it is an $r$-element, $n$-element and $J$-element of the multiplicative lattice $Id(R)$, the set of all ideals of $R$, respectively.

###### Theorem 3.3 (Mohamadian [11, Lemma 2.5]).

Let $R$ be a commutative ring with unity and $I$ be a proper ideal of $R$. Then $I$ is an $r$-ideal if and only if whenever $J$ and $K$ are ideals of $R$ with $J\not\subseteq Z(R)$ and $JK\subseteq I$, then $K\subseteq I$.

###### Theorem 3.4 (U. Tekir et al. [12, Theorem 2.7]).

Let $R$ be a commutative ring with unity and $I$ a proper ideal of $R$. Then the following are equivalent:

1. (1) $I$ is an $n$-ideal of $R$.
2. (2) $I=(I:a)$ for every $a\notin\sqrt{0}$.
3. (3) For ideals $J$ and $K$ of $R$, $JK\subseteq I$ with $J\cap(R-\sqrt{0})\not=\emptyset$ implies $K\subseteq I$.

###### Theorem 3.5 (H. A. Khashan et al. [9, Proposition 2.10]).

Let $R$ be a commutative ring with unity and $I$ a proper ideal of $R$. Then the following are equivalent:

1. (1) $I$ is a $J$-ideal of $R$.
2. (2) $I=(I:a)$ for every $a\not\in J(R)$.
3. (3) For ideals $A$ and $B$ of $R$, $AB\subseteq I$ with $A\nsubseteq J(R)$ implies $B\subseteq I$.

###### Theorem 3.6.

Let $R$ be a Noetherian ring with unity. Then $I$ is an $r$-ideal of $R$ if and only if $I$ is an $r$-element of the multiplicative lattice $L=Id(R)$, where $Id(R)$ is the ideal lattice of $R$.

###### Proof.

Suppose that $I$ is an $r$-ideal of $R$. Let $J,K$ be any ideals of $R$ such that $J\cdot K\leq I$ in $L$ with $J\notin Z(L)$, that is, $ann_{L}(J)=0_{L}$, where $0_{L}=(0_{R})$ denotes the least element of $L$. We claim that $ann_{R}(J)=(0_{R})$. Suppose on the contrary that there exists $x\in ann_{R}(J)$ with $x\neq 0_{R}$. Then $(x)J=(0_{R})$, a contradiction to $ann_{L}(J)=0_{L}$. Hence $ann_{R}(J)=(0_{R})$. Now, we prove that $J\not\subseteq Z(R)$. Suppose on the contrary that $J\subseteq Z(R)$. Since $R$ is Noetherian, $J\subseteq Z(R)=\bigcup_{i=1}^{n}P_{i}$, where the $P_{i}$'s are associated primes. By the Prime Avoidance Theorem, $J\subseteq P_{k}$ for some $k$. Since $P_{k}$ is an associated prime, we have $P_{k}=(0:x)$ for some $x\in R$. But this contradicts the fact that $ann_{R}(J)=(0_{R})$. Hence $J\not\subseteq Z(R)$. By Theorem 3.3, $K\subseteq I$, i.e., $K\leq I$. Thus $I$ is an $r$-element of $L$. Conversely, suppose that $I$ is an $r$-element of $L$. Let $a,b\in R$ such that $a\cdot b\in I$ with $ann_{R}(a)=(0_{R})$. We claim that $b\in I$. Since $(a\cdot b)=(a)\cdot(b)\subseteq I$, we have $a^{\prime}\cdot b^{\prime}\leq I$ in $L$, where $a^{\prime}=(a),\ b^{\prime}=(b)\in L_{*}$. Clearly, $a^{\prime}\notin Z(L)$, since $ann_{R}(a)=(0_{R})$. Hence $b^{\prime}\leq I$, i.e., $b\in I$. Thus $I$ is an $r$-ideal of $R$. ∎

###### Remark 3.7.

From the proof of Theorem 3.6, it is clear that every $r$-element of $Id(R)$ is an $r$-ideal of $R$. However, for the converse we need the assumption that the ring is Noetherian. It should be noted that if we replace "Noetherian ring" by "ring satisfying the strong annihilator condition", the result is still true. By the strong annihilator condition we mean that for a given ideal $I$ of $R$, there exists $a\in I$ such that $ann_{R}(I)=ann_{R}(a)$.
Further, we are unable to find an example showing that the hypothesis that the ring is Noetherian or satisfies the strong annihilator condition is necessary in Theorem 3.6. Hence we raise the following question.

###### Question 3.8.

Let $I$ be an $r$-ideal of a commutative ring $R$ with unity. Is $I$ an $r$-element of $Id(R)$?

###### Theorem 3.9.

Let $R$ be a commutative ring with unity. Then $I$ is an $n$-ideal of $R$ if and only if $I$ is an $n$-element of the multiplicative lattice $L=Id(R)$, where $Id(R)$ is the ideal lattice of $R$.

###### Proof.

Suppose that $I$ is an $n$-ideal of $R$. Let $J,K$ be any finitely generated ideals of $R$ such that $J\cdot K\leq I$ with $J\nleq\sqrt{0_{L}}$ in $L$. It is known that the finitely generated ideals of $R$ are exactly the compact elements of $Id(R)$. Since $J\nleq\sqrt{0_{L}}$, we get $J^{n}\not=0_{L}=(0_{R})$ for every $n\in\mathbb{N}$, and hence $J\cap(R\setminus\sqrt{0_{R}})\not=\emptyset$, as $J$ is finitely generated. By Theorem 3.4, $K\subseteq I$, i.e., $K\leq I$. Therefore $I$ is an $n$-element of $L$. Conversely, suppose that $I$ is an $n$-element of $L$. Let $a,b\in R$ such that $a\cdot b\in I$ with $a\notin\sqrt{0_{R}}$. We claim that $b\in I$. Since $(a\cdot b)=(a)\cdot(b)\subseteq I$, we have $a^{\prime}\cdot b^{\prime}\leq I$ in $L$, where $a^{\prime}=(a),\ b^{\prime}=(b)\in L_{*}$. Clearly, $a^{\prime}\nleq\sqrt{0_{L}}$. Hence $b^{\prime}\leq I$, i.e., $b\in I$. Thus $I$ is an $n$-ideal of $R$. ∎

###### Theorem 3.10.

Let $R$ be a commutative ring with unity. Then $I$ is a $J$-ideal of $R$ if and only if $I$ is a $J$-element of the multiplicative lattice $L=Id(R)$, where $Id(R)$ is the ideal lattice of $R$.

###### Proof.

Suppose that $I$ is a $J$-ideal of $R$. Let $A,B$ be finitely generated ideals of $R$ (which are compact elements of $Id(R)$) such that $A\cdot B\leq I$ with $A\nleq J(L)$ in $L=Id(R)$. Since $A\nleq J(L)$, we get $A\nsubseteq J(R)$. By Theorem 3.5, $B\subseteq I$, i.e., $B\leq I$ in $L$. Hence $I$ is a $J$-element of $L$. Conversely, suppose that $I$ is a $J$-element of $L$. Let $a,b\in R$ such that $a\cdot b\in I$ with $a\notin J(R)$. We claim that $b\in I$. Since $(a\cdot b)=(a)\cdot(b)\subseteq I$, we have $a^{\prime}\cdot b^{\prime}\leq I$ in $L$, where $a^{\prime}=(a),\ b^{\prime}=(b)\in L_{*}$. Clearly, $a^{\prime}\nleq J(L)$. Hence $b^{\prime}\leq I$, i.e., $b\in I$. Hence $I$ is a $J$-ideal of $R$. ∎

Let $L$ be a multiplicative lattice. Then one can see that each of the sets $Z(L)$, $\bigl(\sqrt{0}\,\bigr]$ and $\bigl(J(L)\bigr]$ is an $M$-closed subset of $L$. So if we replace $\mathfrak{X}$ by these sets, then we obtain the results on $r$-elements, $n$-elements and $J$-elements respectively. We quote some of these results for ready reference. One can see that in a $c$-lattice $L$, $\bigl(L_{*}\setminus Z(L)\bigr)\subseteq\bigl(L_{*}\setminus\bigl(\sqrt{0}\,\bigr]\bigr)$ and $\bigl(\sqrt{0}\,\bigr]\subseteq\bigl(J(L)\bigr]$. For the first inclusion, let $x\in\bigl(L_{*}\setminus Z(L)\bigr)$ and suppose $x\in\bigl(\sqrt{0}\,\bigr]$. Then $x^{n}=0$ for some $n\in\mathbb{N}$. Thus $x\in Z(L)$, a contradiction. This proves the inclusion $\bigl(L_{*}\setminus Z(L)\bigr)\subseteq\bigl(L_{*}\setminus\bigl(\sqrt{0}\,\bigr]\bigr)$. Now, for the second inclusion, let $y$ be any compact element such that $y\in\bigl(\sqrt{0}\,\bigr]$. Then $y^{k}=0$ for some $k\in\mathbb{N}$. Let $m$ be a maximal element of $L$. Then it is prime. This together with $y^{k}=0\leq m$ implies that $y\leq m$.
This further yields that $y\leq J(L)=\bigwedge_{k\in\Lambda}m_{k}$. Since $L$ is a $c$-lattice and every compact element below $\sqrt{0}$ is below $J(L)$, we have $\sqrt{0}\leq J(L)$. Hence by Lemma 2.6, we have the following result.

###### Proposition 3.11.

Let $L$ be a $c$-lattice. Then every $n$-element of $L$ is an $r$-element as well as a $J$-element of $L$.

By Proposition 3.11 and Theorems 3.6, 3.9 and 3.10, we have:

###### Proposition 3.12.

Let $R$ be a commutative ring with unity. Then every $n$-ideal of $R$ is an $r$-ideal as well as a $J$-ideal.

From Lemma 2.4, we get Proposition 2.2 of [9] and Proposition 2.3 of [12]. Also, Proposition 2.4 of [12] follows from Lemma 2.9. It is easy to observe that Proposition 2.10 of [9] and Theorem 2.7 of [12] follow from Lemma 2.11. We observe that Proposition 2.13 of [9] follows from Lemmas 2.4 and 2.13. Note that Theorem 2.17 strengthens Theorem 2.12 of [12]. Lemmas 2.19 and 2.20 generalize the equivalence of $(i)$ and $(ii)$ in Corollary 2.13 of [12] and Proposition 2.20 of [9] respectively. One can see that Lemma 2.21 extends Proposition 2.16 of [12] and Proposition 2.21 of [9]. Lastly, Theorem 2.23 of [12] and Proposition 2.29 of [9] follow from Theorem 2.25. For the following result, we need a little more explanation.

###### Proposition 3.13 ([9, Proposition 2.3]).

Let $R$ be a commutative ring with unity. Then the following are equivalent.

1. (1) $R$ is a local ring.
2. (2) Every proper ideal of $R$ is a $J$-ideal.

###### Proof.

$(1)\implies(2):$ It is clear that the ideal lattice $Id(R)$ of $R$ is a local lattice. Further, $J(L)=m$, where $m$ is the unique maximal element of $Id(R)$. Hence by Lemma 2.7, every proper element of $L$ is an $\mathfrak{X}$-element, where $\mathfrak{X}=(m]=\bigl(J(L)\bigr]$. That is, every proper element of $L$ is a $J$-element. By Theorem 3.10, every proper ideal of $R$ is a $J$-ideal.

$(2)\implies(1):$ Follows from Lemma 2.8. ∎

Finally, the results on $n$-multiplicatively closed subsets and $J$-multiplicatively closed subsets can be obtained by using Lemma 2.24. Further, the results on $r$-ideals can be deduced from our results for Noetherian rings, since Theorem 3.6 is available in the Noetherian setting. If Question 3.8 has an affirmative answer, then our results will extend most of the results on $r$-ideals of a commutative ring with unity.

## References

* [1] R. G. Burton, Fractional elements in multiplicative lattices, Pacific J. Math. 56 (1) (1975), 35-49.
* [2] L. Fuchs and R. Reis, On lattice-ordered commutative semigroups, Algebra Universalis 50 (3) (2003), 341-357.
* [3] G. Grätzer, General Lattice Theory, Birkhäuser Basel (1998).
* [4] Vinayak Joshi and S. B. Ballal, A note on n-Baer multiplicative lattices, Southeast Asian Bull. Math. 39 (2015), 67-76.
* [5] C. Jayaram, Primary elements in Prüfer lattices, Czech. Math. J. 52 (127) (2002), 585-593.
* [6] C. Jayaram and E. W. Johnson, s-prime elements in multiplicative lattices, Period. Math. Hungar. 31 (1995), 201-208.
* [7] C. Jayaram and E. W. Johnson, Strong compact elements in multiplicative lattices, Czech. Math. J. 47 (122) (1997), 105-112.
* [8] C. Jayaram, U. Tekir and E. Yetkin, 2-absorbing and weakly 2-absorbing elements in multiplicative lattices, Communications in Algebra 42 (2014), 1-16.
* [9] H. A. Khashan and A. B. Bani-Ata, $J$-ideals of commutative rings, Int. Electron. J. Algebra 29 (2021), 148-164.
* [10] C. S. Manjarekar and A. V.
Bingi, $\phi$-prime and $\phi$-primary elements in multiplicative lattices, Algebra, Volume 2014, Article ID 890312, 7 pages http://dx.doi.org/10.1155/2014/890312. * [11] R. Mohamadian, $r$-ideals in commutative rings, Turk. J. Math. 39 (2015), 733-749. * [12] U. Tekir, S. Koc and K. H. Oral, $n$-ideals of commutative rings, Filomat 31 (10) (2017), 2933-2941. * [13] M. Ward and R. P. Dilworth, Residuated lattices, Trans. Amer. Math. Soc. 45 (1939), 335-354.
A Survey of the Valuation Algebra motivated by a Fundamental Application to Dissection Theory

Hery Randriamaro¹

¹ This research was funded by my mother

Lot II B 32 bis Faravohitra, 101 Antananarivo, Madagascar

<EMAIL_ADDRESS>

Abstract

A lattice $L$ is said to be lowly finite if the set $[\mathsf{0},a]$ is finite for every element $a$ of $L$. We mainly aim to provide a complete proof that, if $M$ is a subset of a complete lowly finite distributive lattice $L$ containing its join-irreducible elements, and $a$ an element of $M$ which is not join-irreducible, then $\displaystyle\sum_{b\in M\cap[\mathsf{0},a]}\mu_{M}(b,a)b$ belongs to the submodule $\langle a\wedge b+a\vee b-a-b\ |\ a,b\in L\rangle$ of $\mathbb{Z}L$. That property was originally established by Zaslavsky for finite distributive lattices. It is essential for proving the fundamental theorem of dissection theory, as will be seen. We finish with a concrete application of that theorem to face counting for submanifold arrangements.

Keywords: Lattice, Valuation, Möbius Algebra, Subspace Arrangement

MSC Number: 06A07, 06A11, 06B10, 57N80

## 1 Introduction

A distributive lattice is a partially ordered set with join and meet operations which distribute over each other. The prototypical examples of such a structure are lattices of sets, where join and meet are the usual union and intersection. Further examples include the Lindenbaum algebra of logics that support conjunction and disjunction, every Heyting algebra, and Young's lattice, whose join and meet of two partitions are given by the union and intersection of the corresponding Young diagrams. This survey mainly aims to provide a complete proof that, if $L$ is a complete lowly finite distributive lattice, $M$ a subset of $L$ containing its join-irreducible elements, $f:L\rightarrow G$ a valuation on $L$ to a module $G$, and $a$ an element of $M$ which is not join-irreducible, then

$\sum_{b\in[\mathsf{0},a]\cap M}\mu_{M}(b,a)f(b)=0.$ (1)

The proof is carried out in several stages. We first consider the general case of posets in Section 2; namely, a proof of Zorn's lemma and an introduction to the Möbius algebra $\mathrm{M\ddot{o}b}(L)$ of a lowly finite poset $L$ are provided. The lemma bearing his name was proved by Zorn in 1935 [19]. Although diverse proofs of that lemma can easily be found in the literature, new ones are still proposed from time to time, like that of Lewin in 1991 [11]. Ours is inspired by the lecture notes of Debussche [5, § 2.II]. The Möbius algebra was discovered in 1967 by Solomon, who defined it for finite posets [14, § 2]. We establish the Möbius inversion formula, and prove that $\displaystyle\bigg\{\sum_{b\in[\mathsf{0},a]}\mu_{L}(b,a)b\ \bigg|\ a\in L\bigg\}$ is a complete set of orthogonal idempotents in $\mathrm{M\ddot{o}b}(L)$. We study the special case of lattices in Section 3. After viewing some essential generalities, we focus on distributive lattices, and establish diverse properties, such as the fact that a lattice $L$ is distributive if and only if, for all $a,b,c\in L$, $c\vee a=c\vee b$ and $c\wedge a=c\wedge b$ imply $a=b$. Those properties are necessary for investigating the valuation algebra in Section 4. It is the central part of this survey, and is principally inspired by the articles of Geissinger [7], [8, § 3] and that of Zaslavsky [18, § 2].
In that section, we prove in particular that if $M$ is a subset of a complete lowly finite distributive lattice $L$ containing its join-irreducible elements, and $a$ an element of $M$ which is not join-irreducible, then $\displaystyle\sum_{b\in M\cap[\mathsf{0},a]}\mu_{M}(b,a)b$ belongs to the submodule $\langle a\wedge b+a\vee b-a-b\ |\ a,b\in L\rangle$ of $\mathbb{Z}L$. It is this property that allows us to deduce Equation 1. Thereafter, we use Equation 1 to deduce the fundamental theorem of dissection theory in Section 5, that is, if $\mathscr{A}$ is a subspace arrangement in a topological space $T$ with $|\chi(T)|<\infty$, and $L$ a meet-refinement of $L_{\mathscr{A}}$, then $\displaystyle\sum_{C\in C_{\mathscr{A}}}\chi(C)=\sum_{X\in L}\mu_{L}(X,T)\chi(X)$. In its original form of 1977 [18, Theorem 1.2], Zaslavsky expressed it for CW complexes. The number of chambers is consequently $\displaystyle\#C_{\mathscr{A}}=\frac{1}{c}\sum_{X\in L}\mu_{L}(X,T)\chi(X)$ if $c$ is the Euler characteristic of every chamber. Deshpande showed a similar result in 2014 for the special case of a submanifold arrangement with chambers having the same Euler characteristic $(-1)^{l}$ [6, Theorem 4.6]. We finally compute the $\mathrm{f}$-polynomial of submanifold arrangements from the dissection theorem of Zaslavsky in Section 6. Face counting of topological spaces doubtless has its origins in the formulas, established by Steiner in 1826 [15], for the numbers of $i$-dimensional faces when planes in $\mathbb{R}^{3}$ fall into $k$ parallel families in general position. About 150 years later, Alexanderson and Wetzel computed the same numbers for an arbitrary set of planes [1], and Zaslavsky did so for hyperplane arrangements in a Euclidean space of any dimension [17, Theorem A]. One of our formulas is a generalization of those results, as it considers a submanifold arrangement $\mathscr{A}$ such that $\chi(X)=(-1)^{\dim X}$ for every $X\in L_{\mathscr{A}}\cup F_{\mathscr{A}}$, and states that $\mathrm{f}_{\mathscr{A}}(x)=(-1)^{\mathrm{rk}\,\mathscr{A}}\mathrm{M}_{\mathscr{A}}(-x,-1)$, where $\mathrm{M}_{\mathscr{A}}$ is the Möbius polynomial of $\mathscr{A}$. Moreover, Pakula computed the number of chambers of a pseudosphere arrangement with simple complements in 2003 [12, Corollary 1]. Another formula is a generalization of his result, considering a submanifold arrangement $\mathscr{A}$ such that

$\forall C\in F_{\mathscr{A}}:\,\chi(C)=(-1)^{\dim C}\quad\text{and}\quad\forall X\in L_{\mathscr{A}}:\,\chi(X)=\begin{cases}2&\text{if}\ \dim X\equiv 0\mod 2\\ 0&\text{otherwise}\end{cases},$

and stating that

$\mathrm{f}_{\mathscr{A}}(x)=(-1)^{n-\mathrm{rk}\,\mathscr{A}}\big(\mathrm{M}_{\mathscr{A}}(x,-1)+\gamma_{n}\mathrm{M}_{\mathscr{A}}(-x,-1)\big)\ \text{with}\ \gamma_{n}:=\begin{cases}1&\text{if}\ n\equiv 0\mod 2\\ -1&\text{otherwise}\end{cases}.$

## 2 Poset

We begin with the general case of posets. Zorn's lemma is proved, and the Möbius algebra is described, as it plays a key role in this survey.

###### Definition 2.1.

A partial order is a binary relation $\preceq$ over a set $L$ such that, for $a,b,c\in L$,

* • $a\preceq a$,
* • if $a\preceq b$ and $a\succeq b$, then $a=b$,
* • if $a\preceq b$ and $b\preceq c$, then $a\preceq c$.

The set $L$ with a partial order is called a partially ordered set or poset, and two elements $a,b\in L$ are said to be comparable if $a\preceq b$ or $a\succeq b$.

###### Definition 2.2.

A poset $L$ has a greatest (resp. lowest) element $\mathsf{1}$ (resp. $\mathsf{0}$) in $L$ if, for every $a\in L$, one has $a\preceq\mathsf{1}$ (resp. $a\succeq\mathsf{0}$).
The poset is said to be complete if it has a greatest and a lowest element.

### 2.1 Zorn's Lemma

###### Definition 2.3.

A subset $C$ of a poset $P$ is a chain if any two elements in $C$ are comparable.

Denote by $\mathcal{C}_{L}$ the set formed by the chains of a poset $L$. A subset $S$ of $L$ has an upper (resp. lower) bound if there exists $u$ (resp. $l$) in $L$ such that $s\preceq u$ (resp. $l\preceq s$) for each $s\in S$. The upper (resp. lower) bound $u$ (resp. $l$) is said to be strict if $u$ (resp. $l$) does not belong to $S$.

###### Definition 2.4.

A poset $L$ is said to be inductive if every chain included in $L$ has an upper bound.

For an inductive poset $L$, and $C\in\mathcal{C}_{L}$, let $C_{\prec}$ be the set formed by the strict upper bounds of $C$, and denote by $\mathcal{E}_{L}$ the set $\{C\in\mathcal{C}_{L}\ |\ C_{\prec}\neq\emptyset\}$. The axiom of choice allows us to deduce the existence of a function $\mathrm{c}:2^{L}\setminus\{\emptyset\}\rightarrow L$ such that, for every $A\in 2^{L}\setminus\{\emptyset\}$, we have $\mathrm{c}(A)\in A$. Define the function $\mathrm{m}:\mathcal{E}_{L}\rightarrow L$, for $C\in\mathcal{E}_{L}$, by $\mathrm{m}(C):=\mathrm{c}(C_{\prec})$.

###### Definition 2.5.

Let $S,A$ be subsets of a poset $L$. The set $S$ is called a segment of $A$ if $S\subseteq A\quad\text{and}\quad\forall s\in S,\,\forall a\in A:\ s\succeq a\Rightarrow a\in S.$

###### Definition 2.6.

An upper (resp. lower) bound $u$ (resp. $l$) of the subset $S$ of a poset $L$ is called a join (resp. meet) if $u\preceq a$ (resp. $b\preceq l$) for each upper (resp. lower) bound $a$ (resp. $b$) of $S$.

###### Definition 2.7.

A chain $C$ of an inductive poset $L$ is called a good set if, for every segment $S$ of $C$ with $S\neq C$, we have $S_{\prec}\cap C\neq\emptyset$ and $\mathrm{m}(S)$ is the meet of $S_{\prec}\cap C$.

For elements $a,b$ of a poset, by $a\prec b$ we mean that $a\preceq b$ and $a\neq b$.

###### Lemma 2.8.

Let $A,B$ be nonempty good sets of an inductive poset $L$. Then, either $A$ is a segment of $B$ or vice versa.

###### Proof.

Note first that $\emptyset$ is a chain of $L$. As $L$ is inductive, $\emptyset$ then has an upper bound in $L$, which is necessarily a strict upper bound, hence $\emptyset\in\mathcal{E}_{L}$. Moreover, since $\emptyset$ is obviously a segment of both $A$ and $B$, which are good sets, then $\mathrm{m}(\emptyset)\in\emptyset_{\prec}\cap A\cap B$ and $A\cap B\neq\emptyset$. For $a\in A\cap B$, the sets $S_{a,A}:=\{s\in A\ |\ s\prec a\}$ and $S_{a,B}:=\{s\in B\ |\ s\prec a\}$ are clearly segments of $A$ and $B$ respectively. Set $C:=\{a\in A\cap B\ |\ S_{a,A}=S_{a,B}\}$, and let $b\in C$, $c\in A$, with $b\succ c$. We have $c\in S_{b,A}=S_{b,B}$, then $c\in B$, which implies $c\in A\cap B$. If $d\in S_{c,A}$, then $d\prec c\prec b$ implies $d\in S_{b,A}=S_{b,B}$, hence $d\in S_{c,B}$ and $S_{c,A}\subseteq S_{c,B}$. Similarly, we have $S_{c,B}\subseteq S_{c,A}$, then $c\in C$. Therefore, $C$ is a segment of $A$ and $B$. Suppose now that $C\neq A$ and $C\neq B$. As $A,B$ are good sets, then $\mathrm{m}(C)\in A\cap B$. Remark that $S_{\mathrm{m}(C),A}=C=S_{\mathrm{m}(C),B}$ (indeed, any $s\in A\setminus C$ is a strict upper bound of $C$, whence $\mathrm{m}(C)\preceq s$, and similarly for $B$), so $\mathrm{m}(C)\in C$, which is absurd since $\mathrm{m}(C)$ is a strict upper bound of $C$. Hence $C=A$ or $C=B$; in other words, $A$ is a segment of $B$ or vice versa. ∎

Denoting by $\mathcal{G}_{L}$ the set formed by the good sets of an inductive poset $L$, set $\displaystyle U_{L}:=\bigcup_{A\in\mathcal{G}_{L}}A$.

###### Theorem 2.9.
If $L$ is an inductive poset, then $U_{L}$ is a good set.

###### Proof.

For $a,b\in U_{L}$, there exist good sets $S_{a},S_{b}$ such that $a\in S_{a}$ and $b\in S_{b}$. Using Lemma 2.8, we get either $S_{a}\subseteq S_{b}$ or $S_{b}\subseteq S_{a}$; hence $a$ and $b$ lie in a common chain, so either $a\preceq b$ or $a\succeq b$, and $U_{L}$ is consequently a chain. Let $A\in\mathcal{G}_{L}$, $a\in A$, and $b\in U_{L}$ with $a\succeq b$. There is $B\in\mathcal{G}_{L}$ with $b\in B$. From Lemma 2.8,

* • if $A$ is a segment of $B$, then $a\in A$, $b\in B$ and $b\preceq a$ give $b\in A$,
* • if $B$ is a segment of $A$, then $B\subseteq A$ and $b\in A$.

In any case, we have $b\in A$, then $A$ is a segment of $U_{L}$. Consider a segment $S$ of $U_{L}$ such that $S\neq U_{L}$. Since $U_{L}$ is a chain, necessarily $U_{L}\setminus S\subseteq S_{\prec}$. Let $a\in U_{L}\setminus S$, and $A\in\mathcal{G}_{L}$ such that $a\in A$. As $A$ is a segment of $U_{L}$, then $S\varsubsetneq A$ and $S$ is a segment of $A$. Moreover, $\mathrm{m}(S)$ is the meet of $S_{\prec}\cap A$. If there existed $b\in S_{\prec}\cap U_{L}$ such that $b\prec\mathrm{m}(S)$, we would get $b\in A$, since $A$ is a segment of $U_{L}$; but then $b\in S_{\prec}\cap A$ with $b\prec\mathrm{m}(S)$, contradicting the fact that $\mathrm{m}(S)$ is the meet of $S_{\prec}\cap A$. Therefore, $\mathrm{m}(S)$ is the meet of $S_{\prec}\cap U_{L}$, and $U_{L}$ is a good set. ∎

###### Definition 2.10.

An element $a$ of a poset $L$ is said to be maximal if there does not exist an element $b\in L\setminus\{a\}$ such that $b\succ a$.

###### Corollary 2.11 (Zorn's Lemma).

Every inductive poset $L$ has a maximal element.

###### Proof.

Since $U_{L}$ is a chain, it possesses an upper bound. Suppose $U_{L\prec}\neq\emptyset$, and let $u\in U_{L\prec}$. Then $U_{L}\sqcup\{u\}$ is a good set, which is absurd since every good set is contained in $U_{L}$ while $u\notin U_{L}$. Hence, $U_{L}$ has a unique upper bound, contained in $U_{L}$, which is a maximal element of $L$. ∎

### 2.2 Möbius Algebra

For two elements $a,b$ of a poset $L$ such that $a\preceq b$, denote by $[a,b]$ the set $\{c\in L\ |\ a\preceq c\preceq b\}$.

###### Definition 2.12.

A poset $L$ is locally finite if, for all $a,b\in L$ such that $a\preceq b$, $[a,b]$ is finite.

###### Definition 2.13.

The incidence algebra $\mathrm{Inc}(L)$ of a locally finite poset $L$ is the module of functions $f:L^{2}\rightarrow\mathbb{Z}$, having the property $f(a,b)=0$ if $a\npreceq b$, with distributive multiplication $h=f\cdot g$ defined, for $f,g\in\mathrm{Inc}(L)$, by $h(a,b):=0\ \text{if}\ a\npreceq b\quad\text{and}\quad h(a,b):=\sum_{c\in[a,b]}f(a,c)g(c,b)\ \text{otherwise}.$ Its multiplicative identity is the Kronecker delta $\delta:L^{2}\rightarrow\mathbb{Z}$ with $\delta(a,b):=\begin{cases}1&\text{if}\ a=b,\\ 0&\text{otherwise}\end{cases}$.

###### Definition 2.14.

For a locally finite poset $L$, the zeta function $\zeta_{L}$ and the Möbius function $\mu_{L}$ in the incidence algebra $\mathrm{Inc}(L)$ are defined, for $a,b\in L$ with $a\preceq b$, by $\zeta_{L}(a,b):=1\quad\text{and}\quad\mu_{L}(a,b):=\begin{cases}1&\text{if}\ a=b\\ \displaystyle-\sum_{\begin{subarray}{c}c\in[a,b]\\ c\neq b\end{subarray}}\mu_{L}(a,c)=-\sum_{\begin{subarray}{c}c\in[a,b]\\ c\neq a\end{subarray}}\mu_{L}(c,b)&\text{otherwise}\end{cases}.$
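For instance, in the lattice of positive divisors of $12$ ordered by divisibility, Definition 2.14 gives $\mu_{L}(1,1)=1$, $\mu_{L}(1,2)=\mu_{L}(1,3)=-1$, $\mu_{L}(1,4)=-(1-1)=0$, $\mu_{L}(1,6)=-(1-1-1)=1$, and $\mu_{L}(1,12)=-(1-1-1+0+1)=0$, recovering the values of the number-theoretic Möbius function.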
###### Lemma 2.15.

For a locally finite poset $L$, the zeta function is the multiplicative inverse of the Möbius function in the incidence algebra $\mathrm{Inc}(L)$.

###### Proof.

For $a\in L$, we have $\zeta_{L}\cdot\mu_{L}(a,a)=\mu_{L}\cdot\zeta_{L}(a,a)=1=\delta(a,a)$; and for $a,b\in L$ with $a\prec b$, $\zeta_{L}\cdot\mu_{L}(a,b)=\sum_{c\in[a,b]}\mu_{L}(c,b)=0=\delta(a,b)\quad\text{and}\quad\mu_{L}\cdot\zeta_{L}(a,b)=\sum_{c\in[a,b]}\mu_{L}(a,c)=0=\delta(a,b).$ ∎

The proof of this proposition is inspired by the original proof of Rota [13, Proposition 2].

###### Proposition 2.16 (Möbius Inversion Formula).

Let $L$ be a locally finite poset, $a,b\in L$ with $a\preceq b$, and $f,g$ two functions from $L$ to a module $M$ over $\mathbb{Z}$. Then, $\forall x\in[a,b]:\,g(x)=\sum_{c\in[a,x]}f(c)\quad\Longleftrightarrow\quad\forall x\in[a,b]:\,f(x)=\sum_{c\in[a,x]}g(c)\mu_{L}(c,x).$

###### Proof.

Assume first that, for every $x\in[a,b]$, $\displaystyle g(x)=\sum_{c\in[a,x]}f(c)$. Using Lemma 2.15, we get $\displaystyle\sum_{c\in[a,x]}g(c)\mu_{L}(c,x)=\sum_{c\in[a,x]}\sum_{d\in[a,c]}f(d)\mu_{L}(c,x)=\sum_{c\in[a,x]}\sum_{d\in[a,c]}f(d)\zeta_{L}(d,c)\mu_{L}(c,x)=\sum_{d\in[a,x]}f(d)\sum_{c\in[d,x]}\zeta_{L}(d,c)\mu_{L}(c,x)=\sum_{d\in[a,x]}f(d)\,\zeta_{L}\cdot\mu_{L}(d,x)=\sum_{d\in[a,x]}f(d)\delta(d,x)=f(x).$ Similarly, if $\displaystyle f(x)=\sum_{c\in[a,x]}g(c)\mu_{L}(c,x)$ for every $x\in[a,b]$, we obtain $\displaystyle\sum_{c\in[a,x]}f(c)=\sum_{c\in[a,x]}\sum_{d\in[a,c]}g(d)\mu_{L}(d,c)=\sum_{c\in[a,x]}\sum_{d\in[a,c]}g(d)\mu_{L}(d,c)\zeta_{L}(c,x)=\sum_{d\in[a,x]}g(d)\sum_{c\in[d,x]}\mu_{L}(d,c)\zeta_{L}(c,x)=\sum_{d\in[a,x]}g(d)\,\mu_{L}\cdot\zeta_{L}(d,x)=\sum_{d\in[a,x]}g(d)\delta(d,x)=g(x).$ ∎

###### Definition 2.17.

We say that a poset $L$ is lowly finite if the set $\{b\in L\ |\ b\preceq a\}$ is finite for any $a\in L$.

For a lowly finite poset $L$ and $a\in L$, let $u_{L}(a)$ be the element $\displaystyle\sum_{\begin{subarray}{c}c\in L\\ c\preceq a\end{subarray}}\mu_{L}(c,a)c$ of the module $\mathbb{Z}L$.

###### Definition 2.18.

The Möbius algebra $\mathrm{M\ddot{o}b}(L)$ of a lowly finite poset $L$ is the module $\mathbb{Z}L$ with distributive multiplication defined, for $a,b\in L$, by $a\cdot b:=\sum_{\begin{subarray}{c}c\in L\\ c\preceq a,\,c\preceq b\end{subarray}}u_{L}(c).$

Remark that the Möbius algebra was initially defined for finite posets [14, § 2]. For a lowly finite poset $L$ with a lowest element, define the algebra $\mathrm{A}_{L}:=\langle\alpha_{a}\ |\ a\in L\rangle$ over $\mathbb{Z}$ with multiplication $\alpha_{a}\alpha_{b}:=\begin{cases}\alpha_{a}&\text{if}\ a=b,\\ 0&\text{otherwise}\end{cases}.$ To each $a\in L$, associate an element $a^{\prime}\in\mathrm{A}_{L}$ by setting $\displaystyle a^{\prime}:=\sum_{b\in[\mathsf{0},a]}\alpha_{b}$.

###### Lemma 2.19.

For a lowly finite poset $L$ with a lowest element, the set $\{a^{\prime}\ |\ a\in L\}$ forms a basis of the algebra $\mathrm{A}_{L}$.

###### Proof.

From the Möbius inversion formula, we get $\displaystyle\alpha_{a}=\sum_{b\in[\mathsf{0},a]}\mu_{L}(b,a)b^{\prime}$. The set $\{a^{\prime}\ |\ a\in L\}$ consequently generates $\mathrm{A}_{L}$. Suppose that there exist a finite set $I\subseteq L$ and an integer set $\{i_{a}\}_{a\in I}$ such that $\displaystyle\sum_{a\in I}i_{a}a^{\prime}=0$. If $b$ is a maximal element of $I$, then $\displaystyle\alpha_{b}\sum_{a\in I}i_{a}a^{\prime}=i_{b}\alpha_{b}=0$, hence $i_{b}=0$. Inductively, we deduce that $i_{a}=0$ for every $a\in I$. The set $\{a^{\prime}\ |\ a\in L\}$ is therefore independent. ∎
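As a small illustration, take the three-element chain $L=\{\mathsf{0}\prec a\prec\mathsf{1}\}$. Here $u_{L}(\mathsf{0})=\mathsf{0}$, $u_{L}(a)=a-\mathsf{0}$ and $u_{L}(\mathsf{1})=\mathsf{1}-a$, and Möbius inversion gives $\sum_{c\preceq x}u_{L}(c)=x$, so the product of Definition 2.18 reduces to $x\cdot y=x\wedge y$. One then checks directly, for instance, that $u_{L}(a)\cdot u_{L}(a)=(a-\mathsf{0})\cdot(a-\mathsf{0})=a-\mathsf{0}=u_{L}(a)$ and $u_{L}(a)\cdot u_{L}(\mathsf{1})=(a-\mathsf{0})\cdot(\mathsf{1}-a)=a-a-\mathsf{0}+\mathsf{0}=0$, in accordance with Corollary 2.21 below.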
###### Theorem 2.20. For a lowly finite poset $L$ with a lowest element, the map $\phi:L\rightarrow\mathrm{A}_{L},\,a\mapsto a^{\prime}$ extends to an algebra isomorphism from $\mathrm{M\ddot{o}b}(L)$ to $\mathrm{A}_{L}$. ###### Proof. The map $\phi$ clearly becomes a module homomorphism by linear extension, and an isomorphism by Lemma 2.19. Moreover, for $a,b\in L$, $\displaystyle\phi(a\cdot b)$ $\displaystyle=\phi\Big{(}\sum_{c\in[\mathsf{0},a]\cap[\mathsf{0},b]}u_{L}(c)\Big{)}=\sum_{c\in[\mathsf{0},a]\cap[\mathsf{0},b]}\phi\Big{(}\sum_{d\in[\mathsf{0},c]}\mu(d,c)d\Big{)}$ $\displaystyle=\sum_{c\in[\mathsf{0},a]\cap[\mathsf{0},b]}\sum_{d\in[\mathsf{0},c]}\mu(d,c)d^{\prime}=\sum_{c\in[\mathsf{0},a]\cap[\mathsf{0},b]}\alpha_{c},$ and $\displaystyle\phi(a)\phi(b)=a^{\prime}b^{\prime}=\sum_{c\in[\mathsf{0},a]}\alpha_{c}\times\sum_{d\in[\mathsf{0},b]}\alpha_{d}=\sum_{c\in[\mathsf{0},a]\cap[\mathsf{0},b]}\alpha_{c}$. Then $\phi(a\cdot b)=\phi(a)\phi(b)$, and $\phi$ is consequently an algebra isomorphism. ∎ ###### Corollary 2.21. For a lowly finite poset $L$ with a lowest element, the set $\\{u_{L}(a)\ |\ a\in L\\}$ is a complete set of orthogonal idempotents in $\mathrm{M\ddot{o}b}(L)$. ###### Proof. Since $\displaystyle\phi\big{(}u_{L}(a)\big{)}=\sum_{b\in[\mathsf{0},a]}\mu_{L}(b,a)b^{\prime}=\alpha_{a}$, the set $\\{u_{L}(a)\ |\ a\in L\\}$ is a basis of $\mathrm{M\ddot{o}b}(L)$. Moreover $\phi\big{(}u_{L}(a)\cdot u_{L}(a)\big{)}=\alpha_{a}=\phi\big{(}u_{L}(a)\big{)}$, so the $u_{L}(a)$’s are idempotents. Finally $\phi\big{(}u_{L}(a)\cdot u_{L}(b)\big{)}=\alpha_{a}\alpha_{b}=0$ if $a\neq b$, hence the $u_{L}(a)$’s are orthogonal. ∎ ###### Corollary 2.22. Let $L$ be a lowly finite poset with a lowest element, and $M$ a subset of $L$ containing $\mathsf{0}$. Then, the linear map $\mathrm{j}:\mathrm{M\ddot{o}b}(L)\rightarrow\mathrm{M\ddot{o}b}(M)$, which on the basis $\\{u_{L}(a)\ |\ a\in L\\}$ has the values $\mathrm{j}\big{(}u_{L}(a)\big{)}:=\begin{cases}u_{M}(a)&\text{if}\ a\in M,\\\ 0&\text{otherwise},\end{cases}$ is an algebra homomorphism. ###### Proof. Using Corollary 2.21, $\mathrm{j}\big{(}u_{L}(a)\cdot u_{L}(a)\big{)}=\mathrm{j}\big{(}u_{L}(a)\big{)}=u_{M}(a)=\mathrm{j}\big{(}u_{L}(a)\big{)}\cdot\mathrm{j}\big{(}u_{L}(a)\big{)}$ if $a\in M$. Otherwise, $\mathrm{j}\big{(}u_{L}(a)\cdot u_{L}(a)\big{)}=0=\mathrm{j}\big{(}u_{L}(a)\big{)}\cdot\mathrm{j}\big{(}u_{L}(a)\big{)}$. For $a,b\in L$ with $a\neq b$, $\mathrm{j}\big{(}u_{L}(a)\cdot u_{L}(b)\big{)}=0=\mathrm{j}\big{(}u_{L}(a)\big{)}\cdot\mathrm{j}\big{(}u_{L}(b)\big{)}$. ∎ ## 3 Lattice We study the special but important case of lattices. After reviewing some generalities, we focus on distributive lattices, and establish various properties which are necessary to investigate the valuation algebra in the next section. ###### Definition 3.1. A poset $L$ is a join-semilattice (resp. meet-semilattice) if each $2$-element subset $\\{a,b\\}\subseteq L$ has a join (resp. meet), denoted by $a\vee b$ (resp. $a\wedge b$). It is called a lattice if $L$ is both a join- and a meet-semilattice, in which case $\vee$ and $\wedge$ become binary operations on $L$. ###### Proposition 3.2. If a lattice $L$ is lowly finite, then it has a lowest element. ###### Proof. For any $a\in L$, the principal ideal $\mathrm{id}(a)$ is finite, so it has a lowest element, namely $\displaystyle\mathsf{0}_{a}:=\bigwedge_{x\in\mathrm{id}(a)}x$. Consider $b\in L\setminus\\{a\\}$ and the lowest element $\mathsf{0}_{b}$ of $\mathrm{id}(b)$.
If $\mathsf{0}_{a}\wedge\mathsf{0}_{b}\neq\mathsf{0}_{a}$, then $\mathsf{0}_{a}\wedge\mathsf{0}_{b}$ would be an element of $\mathrm{id}(a)$ strictly below $\mathsf{0}_{a}$, contradicting the fact that $\mathsf{0}_{a}$ is the lowest element of $\mathrm{id}(a)$. Thus $\mathsf{0}_{a}\preceq\mathsf{0}_{b}\preceq b$ for every $b\in L$, and $L$ has a lowest element $\mathsf{0}:=\mathsf{0}_{a}$. ∎ ### 3.1 Generalities on Lattice ###### Definition 3.3. A sublattice of a lattice $L$ is a nonempty subset $M\subseteq L$ such that, for all $a,b\in M$, we have $a\vee b\in M$ and $a\wedge b\in M$. ###### Definition 3.4. A lattice homomorphism is a function $\varphi:L_{1}\rightarrow L_{2}$ between two lattices $L_{1}$ and $L_{2}$ such that, for all $a,b\in L_{1}$, $\varphi(a\vee b)=\varphi(a)\vee\varphi(b)\quad\text{and}\quad\varphi(a\wedge b)=\varphi(a)\wedge\varphi(b).$ ###### Definition 3.5. An ideal of a lattice $L$ is a sublattice $I\subseteq L$ such that, for any $a\in I$ and $b\in L$, we have $a\wedge b\in I$. If in addition $I\neq L$ and, for any $a\wedge b\in I$, either $a\in I$ or $b\in I$, then $I$ is a prime ideal. ###### Definition 3.6. Dually, a filter of a lattice $L$ is a sublattice $F\subseteq L$ such that, for any $a\in F$ and $b\in L$, we have $a\vee b\in F$. If in addition $F\neq L$ and, for any $a\vee b\in F$, either $a\in F$ or $b\in F$, then $F$ is a prime filter. ###### Proposition 3.7. A subset $M$ of a lattice $L$ is a prime ideal if and only if the subset $L\setminus M$ is a prime filter. ###### Proof. Assume that $M$ is a prime ideal: * • If $a,b\in L\setminus M$, then $a\wedge b\in L\setminus M$ since $M$ is prime, and $a\vee b\in L\setminus M$ since $(a\vee b)\wedge b=b\in L\setminus M$ while $M$ is an ideal; hence $L\setminus M$ is a sublattice. * • If $a\in M$ and $b\in L\setminus M$, once again $a\vee b\in L\setminus M$ since $(a\vee b)\wedge b=b\in L\setminus M$; hence $L\setminus M$ is a filter. * • If $a\vee b\in L\setminus M$, then $a$ and $b$ cannot both belong to $M$, since $M$ is closed under joins; hence $L\setminus M$ is prime. One similarly proves that if $M$ is a prime filter, then $L\setminus M$ is a prime ideal. ∎ ###### Definition 3.8. Let $L$ be a lattice, and $a\in L$. The principal ideal generated by $a$ is the ideal $\mathrm{id}(a):=\\{b\in L\ |\ b\preceq a\\}$, dually the principal filter generated by $a$ is the filter $\mathrm{fil}(a):=\\{b\in L\ |\ b\succeq a\\}$. ###### Definition 3.9. An element $a$ of a lattice $L$ is join-irreducible if, for any subset $S\subseteq L$, $\displaystyle a=\bigvee_{b\in S}b$ implies $a\in S$. Denote by $\mathrm{ji}(L)$ the set formed by the join-irreducible elements of $L$. ###### Lemma 3.10. Let $L$ be a lattice, and $a\in L$. Then, $a\in\mathrm{ji}(L)$ if and only if $\displaystyle a\neq\bigvee_{\begin{subarray}{c}b\in L\\\ b\prec a\end{subarray}}b$. ###### Proof. If $a\in\mathrm{ji}(L)$, as $a\notin\\{b\in L\ |\ b\prec a\\}$, then $\displaystyle a\neq\bigvee_{\begin{subarray}{c}b\in L\\\ b\prec a\end{subarray}}b$. Assume now that $\displaystyle a\neq\bigvee_{\begin{subarray}{c}b\in L\\\ b\prec a\end{subarray}}b$, and let $S\subseteq L$ such that $\displaystyle a=\bigvee_{b\in S}b$. If $a\notin S$, then $b\prec a$ for every $b\in S$, whence $\displaystyle a=\bigvee_{b\in S}b\preceq\bigvee_{\begin{subarray}{c}b\in L\\\ b\prec a\end{subarray}}b\preceq a$, a contradiction. Hence $a\in S$, and consequently $a\in\mathrm{ji}(L)$. ∎ The proof of the following proposition takes inspiration from that of Bhatta and Ramananda [2, Proposition 2.2]. ###### Proposition 3.11. Let $L$ be a lowly finite lattice, and $a\in L$. Then, $\displaystyle a=\bigvee_{b\,\in\,\mathrm{id}(a)\cap\mathrm{ji}(L)}b$. ###### Proof. It is obvious if $a\in\mathrm{ji}(L)$. Now, assume that $a\in L\setminus\mathrm{ji}(L)$ and $\displaystyle a\neq\bigvee_{b\,\in\,\mathrm{id}(a)\cap\mathrm{ji}(L)}b$.
The set $\displaystyle S=\Big{\\{}x\in L\ \Big{|}\ x\neq\bigvee_{b\,\in\,\mathrm{id}(x)\cap\mathrm{ji}(L)}b\Big{\\}}$ is nonempty and has a minimal element $c$ as $L$ is lowly finite. Since $\displaystyle c\neq\bigvee_{b\,\in\,\mathrm{id}(c)\cap\mathrm{ji}(L)}b$, then $c\notin\mathrm{ji}(L)$, and it follows from Lemma 3.10 that $\displaystyle c=\bigvee_{\begin{subarray}{c}b\in L\\\ b\prec c\end{subarray}}b$. Clearly, $c$ is an upper bound of the set $\displaystyle X=\bigcup_{\begin{subarray}{c}b\in L\\\ b\prec c\end{subarray}}\mathrm{id}(b)\cap\mathrm{ji}(L)$. If $u$ is another upper bound of $X$, then $u$ is an upper bound of $\mathrm{id}(x)\cap\mathrm{ji}(L)$ for every $x\in L$ with $x\prec c$. As $c$ is minimal in $S$, we have $\displaystyle x=\bigvee_{b\,\in\,\mathrm{id}(x)\cap\mathrm{ji}(L)}b$ whenever $x\prec c$, hence $u$ is an upper bound of $\\{b\in L\ |\ b\prec c\\}$ implying $u\succeq c$. Observe that $\displaystyle X=\bigcup_{\begin{subarray}{c}b\in L\\\ b\prec c\end{subarray}}\big{(}\mathrm{id}(b)\cap\mathrm{ji}(L)\big{)}=\mathrm{ji}(L)\cap\bigcup_{\begin{subarray}{c}b\in L\\\ b\prec c\end{subarray}}\mathrm{id}(b)=\mathrm{ji}(L)\cap\mathrm{id}(c)$, using $c\notin\mathrm{ji}(L)$. Therefore, $c$ is the least upper bound of $\mathrm{id}(c)\cap\mathrm{ji}(L)$, that is, $\displaystyle c=\bigvee_{b\,\in\,\mathrm{id}(c)\cap\mathrm{ji}(L)}b$, which is a contradiction. ∎ For two elements $a,b$ of a lattice $L$, let $\mathrm{j}_{a,b}:[a\wedge b,\,b]\rightarrow[a,\,a\vee b]$ and $\mathrm{m}_{a,b}:[a,\,a\vee b]\rightarrow[a\wedge b,\,b]$ be functions respectively defined by $\mathrm{j}_{a,b}(x):=a\vee x\quad\text{and}\quad\mathrm{m}_{a,b}(x):=x\wedge b.$ ###### Definition 3.12. A lattice $L$ is modular if, for all $a,b\in L$, $x\in[a\wedge b,\,b]$, and $y\in[a,\,a\vee b]$, we have $x=\mathrm{m}_{a,b}\,\mathrm{j}_{a,b}(x)\quad\text{and}\quad y=\mathrm{j}_{a,b}\,\mathrm{m}_{a,b}(y).$ ###### Proposition 3.13. A lattice $L$ is modular if and only if, for all $a,b,z\in L$, we have $(a\vee z)\wedge(a\vee b)=a\vee\big{(}z\wedge(a\vee b)\big{)}\quad\text{and}\quad(a\wedge z)\vee(a\wedge b)=a\wedge\big{(}z\vee(a\wedge b)\big{)}.$ ###### Proof. Assume first that $L$ is modular. We have $a\preceq(a\vee z)\wedge(a\vee b)\preceq a\vee b$. Letting $u=(a\vee z)\wedge(a\vee b)$, we get $u=\mathrm{j}_{a,b}\,\mathrm{m}_{a,b}(u)=a\vee\big{(}(a\vee z)\wedge(a\vee b)\wedge b\big{)}=a\vee\big{(}(a\vee z)\wedge b\big{)}.$ Since it is true for all $a,b,z\in L$, interchanging $z$ and $b$, we obtain $u=a\vee\big{(}z\wedge(a\vee b)\big{)}$. Likewise, we have $a\wedge b\preceq(z\wedge b)\vee(a\wedge b)\preceq b$. Letting $v=(z\wedge b)\vee(a\wedge b)$, we get $v=\mathrm{m}_{a,b}\,\mathrm{j}_{a,b}(v)=b\wedge\big{(}(z\wedge b)\vee(a\wedge b)\vee a\big{)}=b\wedge\big{(}(b\wedge z)\vee a\big{)}.$ Since it is true for all $a,b,z\in L$, interchanging $z$ and $a$, we obtain $v=b\wedge\big{(}z\vee(a\wedge b)\big{)}$. Assume now that $(a\vee z)\wedge(a\vee b)=a\vee\big{(}z\wedge(a\vee b)\big{)}$ and $(a\wedge z)\vee(a\wedge b)=a\wedge\big{(}z\vee(a\wedge b)\big{)}$ for all $a,b,z\in L$. If $a\preceq z\preceq a\vee b$, then $z=(a\vee b)\wedge z=(a\vee b)\wedge(a\vee z)=a\vee\big{(}b\wedge(a\vee z)\big{)}.$ Since $a\vee z=z$, then $z=a\vee(z\wedge b)=\mathrm{j}_{a,b}\,\mathrm{m}_{a,b}(z)$. Likewise, if $a\wedge b\preceq z\preceq b$, then $z=(a\wedge b)\vee z=(b\wedge a)\vee(z\wedge b)=b\wedge\big{(}a\vee(z\wedge b)\big{)}.$ And since $z\wedge b=z$, then $z=b\wedge(a\vee z)=\mathrm{m}_{a,b}\,\mathrm{j}_{a,b}(z)$. ∎ ### 3.2 Distributive Lattice ###### Proposition 3.14. Let $L$ be a lattice.
The identity $a\wedge(b\vee c)=(a\wedge b)\vee(a\wedge c)$ holds for all $a,b,c\in L$ if and only if the identity $a\vee(b\wedge c)=(a\vee b)\wedge(a\vee c)$ holds for all $a,b,c\in L$. ###### Proof. Assume first that $a\wedge(b\vee c)=(a\wedge b)\vee(a\wedge c)$ for all $a,b,c\in L$. Then, $\displaystyle(a\vee b)\wedge(a\vee c)$ $\displaystyle=\big{(}(a\vee b)\wedge a\big{)}\vee\big{(}(a\vee b)\wedge c\big{)}$ $\displaystyle=a\vee\big{(}(a\vee b)\wedge c\big{)}$ $\displaystyle=a\vee(a\wedge c)\vee(b\wedge c)$ $\displaystyle=a\vee(b\wedge c).$ Similarly, if we assume that $a\vee(b\wedge c)=(a\vee b)\wedge(a\vee c)$ holds for all $a,b,c\in L$, then we obtain $(a\wedge b)\vee(a\wedge c)=\big{(}(a\wedge b)\vee a\big{)}\wedge\big{(}(a\wedge b)\vee c\big{)}=a\wedge(b\vee c).$ ∎ ###### Definition 3.15. A lattice $L$ is distributive if, for all $a,b,c\in L$, $a\wedge(b\vee c)=(a\wedge b)\vee(a\wedge c)$. Denote by $\mathcal{I}_{L}$ the poset formed by the ideals of a lattice $L$ with inclusion as partial order. It is a lattice such that, for $I,J\in\mathcal{I}_{L}$, $\displaystyle I\vee J:=\bigcap_{\begin{subarray}{c}K\in\mathcal{I}_{L}\\\ I,J\subseteq K\end{subarray}}K$ and $\displaystyle I\wedge J:=\bigcup_{\begin{subarray}{c}K\in\mathcal{I}_{L}\\\ K\subseteq I\cap J\end{subarray}}K$. ###### Theorem 3.16. Let $L$ be a distributive lattice, $I$ an ideal of $L$, and $F$ a filter of $L$ such that $I\cap F=\emptyset$. Then, there exists a prime ideal $P$ of $L$ such that $I\subseteq P$ and $P\cap F=\emptyset$. ###### Proof. Set $\displaystyle\mathcal{X}_{I,F}:=\\{M\in\mathcal{I}_{L}\ |\ I\subseteq M,\,M\cap F=\emptyset\\}$. It is a poset with inclusion as partial order, and is nonempty since $I\in\mathcal{X}_{I,F}$. Consider a chain $\mathcal{E}\in\mathcal{C}_{\mathcal{X}_{I,F}}$, and let $\displaystyle E=\bigcup_{C\in\mathcal{E}}C$. If $a,b\in E$, then $a\in A$ and $b\in B$ for some $A,B\in\mathcal{E}$. Since $\mathcal{E}$ is a chain, either $A\subseteq B$ or $A\supseteq B$ holds, so assume $A\subseteq B$. Then, $a\in B$, and $a\vee b\in B\subseteq E$, as $B$ is an ideal. Moreover, if $c\in L$, then $a\wedge c\in A\subseteq E$, as $A$ is also an ideal. We deduce that $E\in\mathcal{I}_{L}$. Besides, $I\subseteq E$ and $E\cap F=\emptyset$ obviously hold. Hence, $E$ is an upper bound of $\mathcal{E}$ in $\mathcal{X}_{I,F}$. Therefore, $\mathcal{X}_{I,F}$ is an inductive poset, and Zorn’s lemma shows that it has a maximal element $P$. Suppose that $P$ is not prime. Then, there exist $a,b\in L$ such that $a,b\notin P$ but $a\wedge b\in P$. The maximality of $P$ yields $\big{(}P\vee\mathrm{id}(a)\big{)}\cap F\neq\emptyset$ and $\big{(}P\vee\mathrm{id}(b)\big{)}\cap F\neq\emptyset$. Thus, there are $p,q\in P$ such that $p\vee a\in F$, $q\vee b\in F$, and $(p\vee a)\wedge(q\vee b)\in F$ since $F$ is a filter. Expanding by distributivity, we obtain $(p\vee a)\wedge(q\vee b)=\big{(}(p\vee a)\wedge q\big{)}\vee\big{(}(p\vee a)\wedge b\big{)}=(p\wedge q)\vee(a\wedge q)\vee(p\wedge b)\vee(a\wedge b)$ which belongs to $P$. That means $P\cap F\neq\emptyset$, a contradiction. ∎ ###### Corollary 3.17. Let $L$ be a distributive lattice, $I\in\mathcal{I}_{L}$, and $a\in L$ such that $a\notin I$. Then, there exists a prime ideal $P$ of $L$ such that $I\subseteq P$ and $a\notin P$. ###### Proof. Remark that $I\cap\mathrm{fil}(a)=\emptyset$: otherwise, if $b\in I\cap\mathrm{fil}(a)$, then $b\wedge a=a\in I$, which is absurd. Now, for the proof, we apply Theorem 3.16 to $I$ and $F=\mathrm{fil}(a)$. ∎ ###### Corollary 3.18. Let $L$ be a distributive lattice, and $a,b\in L$ such that $a\neq b$.
Then, $L$ has a prime ideal containing exactly one of $a$ and $b$. ###### Proof. If $a$ and $b$ are not comparable or $b\prec a$, then $a\notin\mathrm{id}(b)$, and it remains to apply Corollary 3.17 to $I=\mathrm{id}(b)$; the case $a\prec b$ is symmetric. ∎ ###### Theorem 3.19. A lattice $L$ is distributive if and only if, for all $a,b,c\in L$, $c\vee a=c\vee b$ and $c\wedge a=c\wedge b$ imply $a=b$. ###### Proof. Suppose first that $L$ is distributive and that there exist $a,b,c\in L$ such that $a\vee c=b\vee c$ and $a\wedge c=b\wedge c$. Then, $a=a\vee(a\wedge c)=a\vee(b\wedge c)=(a\vee b)\wedge(a\vee c)=(a\vee b)\wedge(b\vee c)=b\vee(a\wedge c),$ which implies $b\preceq a$, and similarly we have $a\preceq b$, hence $a=b$. Suppose now that $a\vee c=b\vee c$ and $a\wedge c=b\wedge c$ imply $a=b$. If $x\in[a\wedge b,\,b]$, * • as $x\preceq b\wedge(a\vee x)$, we get $a\vee x\preceq a\vee\big{(}b\wedge(a\vee x)\big{)}$; as $a\vee x\succeq b\wedge(a\vee x)$, we get $a\vee x\succeq a\vee\big{(}b\wedge(a\vee x)\big{)}$; hence $a\vee x=a\vee\big{(}b\wedge(a\vee x)\big{)}$ on one side, * • on the other side, $a\wedge x=a\wedge b\wedge x=a\wedge b\wedge(a\vee x)$. By canceling $a$, we obtain $x=\mathrm{m}_{a,b}\,\mathrm{j}_{a,b}(x)$. If $y\in[a,\,a\vee b]$, as $y\succeq a\vee(b\wedge y)$, we get $b\wedge y\succeq b\wedge\big{(}a\vee(b\wedge y)\big{)}$; as $b\wedge y\preceq a\vee(b\wedge y)$, we get $b\wedge y\preceq b\wedge\big{(}a\vee(b\wedge y)\big{)}$; hence $b\wedge y=b\wedge\big{(}a\vee(b\wedge y)\big{)}$ on one side, and $b\vee y=a\vee b\vee y=a\vee b\vee(b\wedge y)$ on the other side. By canceling $b$, we obtain $y=\mathrm{j}_{a,b}\,\mathrm{m}_{a,b}(y)$. Therefore, $L$ is modular. Let $a^{*}=a\wedge(b\vee c)$, $b^{*}=b\wedge(c\vee a)$, and $c^{*}=c\wedge(a\vee b)$. Then, $a^{*}\wedge b^{*}=a\wedge(c\vee a)\wedge b\wedge(b\vee c)=a\wedge b$, $a^{*}\wedge c^{*}=a\wedge c$, and $b^{*}\wedge c^{*}=b\wedge c$. Set $d=(a\vee b)\wedge(b\vee c)\wedge(c\vee a)$. Using twice Proposition 3.13, we get $\displaystyle a^{*}\vee b^{*}$ $\displaystyle=a^{*}\vee\big{(}b\wedge(a\vee c)\big{)}=(a^{*}\vee b)\wedge(a\vee c)$ $\displaystyle=\Big{(}\big{(}(b\vee c)\wedge a\big{)}\vee b\Big{)}\wedge(a\vee c)=(b\vee c)\wedge(a\vee b)\wedge(a\vee c)$ $\displaystyle=d.$ By symmetry, we also have $a^{*}\vee c^{*}=b^{*}\vee c^{*}=d$. Hence, * • $c^{*}\vee a^{*}\vee(b\wedge c)=c^{*}\vee b^{*}\vee(a\wedge c)=d$, * • and $c^{*}\wedge\big{(}a^{*}\vee(b\wedge c)\big{)}=(c^{*}\wedge a^{*})\vee(b\wedge c)=(c^{*}\wedge b^{*})\vee(a\wedge c)=c^{*}\wedge\big{(}b^{*}\vee(a\wedge c)\big{)}$. By canceling $c^{*}$, we obtain $a^{*}\vee(b\wedge c)=b^{*}\vee(a\wedge c)$, whence $a^{*}\vee(b\wedge c)=a^{*}\vee(b\wedge c)\vee b^{*}\vee(a\wedge c)=a^{*}\vee b^{*}=d.$ It follows that $(a\vee b)\wedge c=c^{*}=c^{*}\wedge d=c^{*}\wedge\big{(}a^{*}\vee(b\wedge c)\big{)}=(a\wedge c)\vee(b\wedge c)$, and $L$ is consequently distributive. ∎ ## 4 Valuation on Lattice This section is the central part of this survey. After defining the valuation algebra and showing some important properties, we prove that if $M$ is a subset of a complete lowly finite distributive lattice $L$ containing its join-irreducible elements, and $a$ an element of $M$ which is not join-irreducible, then $\displaystyle\sum_{b\in M\cap[\mathsf{0},a]}\mu_{M}(b,a)b$ belongs to the submodule $\langle a\wedge b+a\vee b-a-b\ |\ a,b\in L\rangle$ of $\mathbb{Z}L$. It would not have been possible to write the first two subsections without the articles of Geissinger [7], [8, § 3], and the third without that of Zaslavsky [18, § 2].
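Before defining valuations, we record a small computational aside on Theorem 3.19: the cancellation criterion can be tested mechanically on any finite lattice. The following minimal Python sketch (the encoding of a lattice by explicit join and meet functions is our own illustrative choice) applies the test to the diamond lattice $M_{3}$, a standard example of a non-distributive lattice.

```python
from itertools import product

# Minimal sketch of the cancellation criterion of Theorem 3.19: a lattice
# is distributive iff  c v a = c v b  and  c ^ a = c ^ b  imply  a = b.
def is_distributive(elements, join, meet):
    for a, b, c in product(elements, repeat=3):
        if a != b and join(c, a) == join(c, b) and meet(c, a) == meet(c, b):
            return False
    return True

# The diamond M3 = {0, x, y, z, 1}: x, y, z are pairwise incomparable,
# any two of them join to 1 and meet to 0.
M3 = ("0", "x", "y", "z", "1")

def join(a, b):
    if a == b:
        return a
    if a == "0":
        return b
    if b == "0":
        return a
    return "1"

def meet(a, b):
    if a == b:
        return a
    if a == "1":
        return b
    if b == "1":
        return a
    return "0"

# x v y = x v z = 1 and x ^ y = x ^ z = 0 with y != z, so the test fails.
print(is_distributive(M3, join, meet))  # False
```

By Theorem 3.19, any finite lattice passing this test on all triples is distributive.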
###### Definition 4.1. A valuation on a lattice $L$ is a function $f$ from $L$ to a module $G$ such that, for all $a,b\in L$, $f(a\wedge b)+f(a\vee b)=f(a)+f(b).$ ### 4.1 Valuation Module ###### Definition 4.2. The valuation module of a lattice $L$ is the module $\mathrm{Val}(L):=\mathbb{Z}L/\mathrm{N}(L)$, where $\mathrm{N}(L)$ is the submodule $\langle a\wedge b+a\vee b-a-b\ |\ a,b\in L\rangle$ of the module $\mathbb{Z}L$. ###### Proposition 4.3. Let $\mathrm{i}:L\rightarrow\mathrm{Val}(L)$ be the natural induced map for a lattice $L$. Then, $\mathrm{i}$ is a valuation, and, for every valuation $f:L\rightarrow G$, there exists a unique module homomorphism $h:\mathrm{Val}(L)\rightarrow G$ such that $f=h\,\mathrm{i}$. ###### Proof. It is clear that $\mathrm{i}$ is a valuation as $\mathrm{i}(a\wedge b)+\mathrm{i}(a\vee b)-\mathrm{i}(a)-\mathrm{i}(b)=a\wedge b+a\vee b-a-b=0$. Besides, since $f$ is a valuation, its linear extension to $\mathbb{Z}L$ vanishes on $\mathrm{N}(L)$, and therefore descends to a module homomorphism $h:\mathrm{Val}(L)\rightarrow G$ with $h(a)=f(a)$ for every $a\in L$; it is unique since the classes of the elements of $L$ generate $\mathrm{Val}(L)$. ∎ ###### Proposition 4.4. For lattices $L_{1},L_{2}$ with natural induced maps $\mathrm{i}_{1},\mathrm{i}_{2}$ respectively, a lattice homomorphism $\varphi:L_{1}\rightarrow L_{2}$ induces a unique module homomorphism $\psi:\mathrm{Val}(L_{1})\rightarrow\mathrm{Val}(L_{2})$ such that, for every $a\in L_{1}$, $\psi\mathrm{i}_{1}(a)=\mathrm{i}_{2}\varphi(a)$. ###### Proof. We obtain the homomorphism $\psi$ by setting $\psi(a):=\varphi(a)$ for every $a\in L_{1}$ and extending linearly; this is well defined since $a\mapsto\mathrm{i}_{2}\varphi(a)$ is a valuation on $L_{1}$, so that its linear extension vanishes on $\mathrm{N}(L_{1})$. ∎ ###### Proposition 4.5. For any prime ideal or prime filter $M$ of a lattice $L$ with natural induced map $\mathrm{i}$, each element of $\mathrm{i}(M)$ is linearly independent of those in $\mathrm{i}(L\setminus M)$ and vice versa. ###### Proof. Assume that $M$ is a prime ideal, and consider the indicator function $\mathrm{1}_{M}:L\rightarrow\mathbb{Z}$ defined as $\mathrm{1}_{M}(a):=\begin{cases}1&\text{if }a\in M\\\ 0&\text{otherwise}\end{cases}$. For $a,b\in L$, * • if $a,b\in M$, we clearly have $\mathrm{1}_{M}(a\wedge b)+\mathrm{1}_{M}(a\vee b)=\mathrm{1}_{M}(a)+\mathrm{1}_{M}(b)=2$, * • if $a\in M$ and $b\notin M$, since $(a\vee b)\wedge b=b\notin M$, then $a\vee b\notin M$ and $\mathrm{1}_{M}(a\wedge b)+\mathrm{1}_{M}(a\vee b)=\mathrm{1}_{M}(a)+\mathrm{1}_{M}(b)=1$, * • if $a,b\notin M$, then $a\wedge b\notin M$, the fact $(a\vee b)\wedge b=b\notin M$ implies $a\vee b\notin M$, and $\mathrm{1}_{M}(a\wedge b)+\mathrm{1}_{M}(a\vee b)=\mathrm{1}_{M}(a)+\mathrm{1}_{M}(b)=0$. Therefore, $\mathrm{1}_{M}$ is a valuation on $L$. One similarly proves that if $M$ is a prime filter, then $\mathrm{1}_{M}$ is also a valuation on $L$. We know from Proposition 4.3 that there exists a unique homomorphism $h:\mathrm{Val}(L)\rightarrow\mathbb{Z}$ such that $\mathrm{1}_{M}=h\,\mathrm{i}$. As $h\mathrm{i}(a)=1$, for every $a\in M$, and $\big{\langle}\mathrm{i}(b)\ \big{|}\ b\in L\setminus M\big{\rangle}\subseteq\ker h$, each element of $\mathrm{i}(M)$ is then linearly independent of those in $\mathrm{i}(L\setminus M)$. Likewise, Proposition 3.7 shows that $\mathrm{1}_{L\setminus M}$ is a valuation, and one similarly proves that each element of $\mathrm{i}(L\setminus M)$ is linearly independent of those in $\mathrm{i}(M)$.
∎ ###### Proposition 4.6. The natural induced map $\mathrm{i}:L\rightarrow\mathrm{Val}(L)$ of a lattice $L$ is an injection if and only if $L$ is distributive. ###### Proof. If $L$ is distributive, we know from Corollary 3.18 that any two different elements $a,b\in L$ can be separated by a prime ideal, hence Proposition 4.5 shows that $\mathrm{i}(a)$ and $\mathrm{i}(b)$ are linearly independent in $\mathrm{Val}(L)$; in particular, $\mathrm{i}(a)\neq\mathrm{i}(b)$. If $L$ is not distributive, then, by Theorem 3.19, it contains distinct elements $a,b,c$ with $c\vee a=c\vee b$ and $c\wedge a=c\wedge b$. Hence, $\mathrm{i}(a)+\mathrm{i}(c)=\mathrm{i}(c\vee a)+\mathrm{i}(c\wedge a)=\mathrm{i}(c\vee b)+\mathrm{i}(c\wedge b)=\mathrm{i}(b)+\mathrm{i}(c)$, and $\mathrm{i}(a)=\mathrm{i}(b)$. ∎ ###### Proposition 4.7. Let $L$ be a distributive lattice, and $a_{1},\dots,a_{n},b\in L$ with $\displaystyle b\notin\big{[}\bigwedge_{i\in[n]}a_{i},\,\bigvee_{i\in[n]}a_{i}\big{]}$. Then, $b$ is linearly independent of $\\{a_{1},\dots,a_{n}\\}$ in $\mathrm{Val}(L)$. ###### Proof. If $\displaystyle b\notin\mathrm{id}\Big{(}\bigvee_{i\in[n]}a_{i}\Big{)}$, then there exists a prime ideal $P$ such that $\\{a_{1},\dots,a_{n}\\}\subseteq P$ and $b\notin P$ by Corollary 3.17, and $b$ is linearly independent of $\\{a_{1},\dots,a_{n}\\}$ by Proposition 4.5. If $\displaystyle b\in\mathrm{id}\Big{(}\bigvee_{i\in[n]}a_{i}\Big{)}$, then $\displaystyle b\notin\mathrm{fil}\Big{(}\bigwedge_{i\in[n]}a_{i}\Big{)}$, otherwise $\displaystyle b\in\big{[}\bigwedge_{i\in[n]}a_{i},\,\bigvee_{i\in[n]}a_{i}\big{]}$ which is a contradiction. Hence, $\displaystyle\mathrm{id}(b)\cap\mathrm{fil}\Big{(}\bigwedge_{i\in[n]}a_{i}\Big{)}=\emptyset$, and there exists a prime ideal $P$ such that $\mathrm{id}(b)\subseteq P$ and $\displaystyle P\cap\mathrm{fil}\Big{(}\bigwedge_{i\in[n]}a_{i}\Big{)}=\emptyset$ by Theorem 3.16. As $\displaystyle\\{a_{1},\dots,a_{n}\\}\subseteq\mathrm{fil}\Big{(}\bigwedge_{i\in[n]}a_{i}\Big{)}$, we once again obtain the independence of $b$ by Proposition 4.5. ∎ As the lattice $L$ forms a semigroup under either of the operations $\vee$ and $\wedge$, the module $\mathbb{Z}L$ may consequently be considered as an algebra with either $\vee$ or $\wedge$ as multiplication. Besides, if $L$ is distributive, Proposition 4.6 allows us to identify $L$ with $\mathrm{i}(L)$. ###### Proposition 4.8. If $L$ is a distributive lattice, then $\mathrm{N}(L)$ is an ideal of the algebra $\mathbb{Z}L$ for both $\vee$ and $\wedge$ as multiplication. ###### Proof. For $a,b,c\in L$, we have $\displaystyle(a\wedge b+a\vee b-a-b)\wedge c$ $\displaystyle=(a\wedge b)\wedge c+(a\vee b)\wedge c-a\wedge c-b\wedge c$ $\displaystyle=(a\wedge c)\wedge(b\wedge c)+(a\wedge c)\vee(b\wedge c)-a\wedge c-b\wedge c$ which belongs to $\mathrm{N}(L)$. Then, by linear extension, we get $(a\wedge b+a\vee b-a-b)\wedge t\in\mathrm{N}(L)$ for any $t\in\mathbb{Z}L$. Similarly, we have $(a\wedge b+a\vee b-a-b)\vee c=(a\vee c)\wedge(b\vee c)+(a\vee c)\vee(b\vee c)-a\vee c-b\vee c\in\mathrm{N}(L).$ ∎ ### 4.2 Valuation Algebra If the lattice $L$ is distributive, Proposition 4.8 shows that the valuation module $\mathrm{Val}(L)$ becomes a commutative algebra for either $\vee$ or $\wedge$ as multiplication. ###### Definition 4.9. The valuation algebra is the algebra $\big{(}\mathrm{Val}(L),\vee\big{)}$ or $\big{(}\mathrm{Val}(L),\wedge\big{)}$ for a distributive lattice $L$. ###### Lemma 4.10.
Let $L$ be a complete distributive lattice, and define the map $\tau:\mathrm{Val}(L)\rightarrow\mathrm{Val}(L)$ by $\tau(x):=\mathsf{1}+\mathsf{0}-x$. Then, for $a,b\in L$, we have $\tau(a\vee b)=\tau(a)\wedge\tau(b)$. ###### Proof. We have $\mathsf{1}+\mathsf{0}-a\vee b=\mathsf{1}+\mathsf{0}+a\wedge b-a-b=(\mathsf{1}+\mathsf{0}-a)\wedge(\mathsf{1}+\mathsf{0}-b)$. ∎ ###### Proposition 4.11. Let $L$ be a complete distributive lattice, $n\in\mathbb{N}^{*}$, and $a_{1},\dots,a_{n}\in L$. Then, we have $\displaystyle\mathsf{1}-\bigvee_{i\in[n]}a_{i}=\bigwedge_{i\in[n]}(\mathsf{1}-a_{i})$, that is $\displaystyle\bigvee_{i\in[n]}a_{i}=\sum_{k=1}^{n}(-1)^{k-1}\sum_{\begin{subarray}{c}I\subseteq[n]\\\ \\#I=k\end{subarray}}\bigwedge_{i\in I}a_{i}.$ ###### Proof. Using Lemma 4.10 repeatedly and $\mathsf{0}\wedge(\mathsf{1}-a_{i})=\mathsf{0}$, we obtain $\tau\Big{(}\bigvee_{i\in[n]}a_{i}\Big{)}=\mathsf{0}+\mathsf{1}-\bigvee_{i\in[n]}a_{i}=\bigwedge_{i\in[n]}\tau(a_{i})=\bigwedge_{i\in[n]}(\mathsf{0}+\mathsf{1}-a_{i})=\mathsf{0}+\bigwedge_{i\in[n]}(\mathsf{1}-a_{i}).$ Canceling $\mathsf{0}$, we get $\displaystyle\mathsf{1}-\bigvee_{i\in[n]}a_{i}=\bigwedge_{i\in[n]}(\mathsf{1}-a_{i})=\mathsf{1}+\sum_{k=1}^{n}(-1)^{k}\sum_{\begin{subarray}{c}I\subseteq[n]\\\ \\#I=k\end{subarray}}\bigwedge_{i\in I}a_{i}$. ∎ ###### Corollary 4.12. Let $L$ be a complete distributive lattice, $n\in\mathbb{N}^{*}$, $a_{1},\dots,a_{n}\in L$, and $f$ a valuation on $L$. Then, $f\Big{(}\bigvee_{i\in[n]}a_{i}\Big{)}=\sum_{k=1}^{n}(-1)^{k-1}\sum_{\begin{subarray}{c}I\subseteq[n]\\\ \\#I=k\end{subarray}}f\Big{(}\bigwedge_{i\in I}a_{i}\Big{)}.$ ###### Proof. If $f$ is a valuation to a module $G$, we know from Proposition 4.3 that a unique module homomorphism $h:\mathrm{Val}(L)\rightarrow G$ such that $h\mathrm{i}=f$ exists. Then, using Proposition 4.11, we obtain $f\Big{(}\bigvee_{i\in[n]}a_{i}\Big{)}=h\Big{(}\bigvee_{i\in[n]}a_{i}\Big{)}=\sum_{k=1}^{n}(-1)^{k-1}\sum_{\begin{subarray}{c}I\subseteq[n]\\\ \\#I=k\end{subarray}}h\Big{(}\bigwedge_{i\in I}a_{i}\Big{)}=\sum_{k=1}^{n}(-1)^{k-1}\sum_{\begin{subarray}{c}I\subseteq[n]\\\ \\#I=k\end{subarray}}f\Big{(}\bigwedge_{i\in I}a_{i}\Big{)}.$ ∎ ###### Theorem 4.13. Let $L$ be a complete lowly finite distributive lattice. Then, $\mathrm{Val}(L)$ is equal to $\mathbb{Z}\mathrm{ji}(L)$ as modules. ###### Proof. We obviously have $\mathsf{0}\in\mathrm{ji}(L)$. Let $a\in L$, and assume that every $b\in L$ such that $a\succ b$ is a linear combination in $\mathrm{Val}(L)$ of a finite number of elements in $\mathrm{ji}(L)$. If $a\in\mathrm{ji}(L)$, there is nothing to prove. Otherwise, we know from Proposition 3.11 that there exists a subset $\\{b_{1},\dots,b_{n}\\}$ of $\mathrm{ji}(L)$ such that $\displaystyle a=\bigvee_{i\in[n]}b_{i}$, with $b_{i}\prec a$ for every $i\in[n]$. Using Proposition 4.11, we get $\displaystyle a=\sum_{k=1}^{n}(-1)^{k-1}\sum_{\begin{subarray}{c}I\subseteq[n]\\\ \\#I=k\end{subarray}}\bigwedge_{i\in I}b_{i}$ with $\displaystyle a\succ\bigwedge_{i\in I}b_{i}$ for each nonempty $I\subseteq[n]$. Thus $\mathrm{ji}(L)$ generates $\mathrm{Val}(L)$. Assume now that every subset with cardinality $n-1$ in $\mathrm{ji}(L)$ is independent, and consider a subset of $n$ distinct elements $\\{a_{1},\dots,a_{n}\\}\subseteq\mathrm{ji}(L)$. We can suppose that $a_{n}$ is a maximal element in that set. If $\displaystyle a_{n}\preceq\bigvee_{i\in[n-1]}a_{i}$, then $\displaystyle a_{n}=\bigvee_{i\in[n-1]}(a_{n}\wedge a_{i})$ by distributivity, and the join-irreducibility of $a_{n}$ would give $a_{n}=a_{n}\wedge a_{i}\preceq a_{i}$ for some $i\in[n-1]$, contradicting the maximality of $a_{n}$ among distinct elements. Hence $\displaystyle a_{n}\notin\big{[}\bigwedge_{i\in[n-1]}a_{i},\,\bigvee_{i\in[n-1]}a_{i}\big{]}$. We deduce from Proposition 4.7 that $\\{a_{1},\dots,a_{n}\\}$ is independent. Hence $\mathrm{ji}(L)$ is an independent set in $\mathrm{Val}(L)$.
∎ ###### Corollary 4.14. If $L$ is a complete lowly finite distributive lattice, then every valuation of $L$ is determined by its values on $\mathrm{ji}(L)$, which can be assigned arbitrarily. ###### Proof. If $f$ is a valuation to a module $G$, we know from Proposition 4.3 that a unique module homomorphism $h:\mathrm{Val}(L)\rightarrow G$ such that $h\mathrm{i}=f$ exists. We know from Theorem 4.13 that, if $a\in L$, there exist integers $\lambda_{1},\dots,\lambda_{n}$ and elements $a_{1},\dots,a_{n}\in\mathrm{ji}(L)$ such that $\displaystyle a=\sum_{i\in[n]}\lambda_{i}a_{i}$. Then, $\displaystyle f(a)=h(a)=h\Big{(}\sum_{i\in[n]}\lambda_{i}a_{i}\Big{)}=\sum_{i\in[n]}\lambda_{i}h(a_{i})=\sum_{i\in[n]}\lambda_{i}f(a_{i})$. ∎ For a poset $L$, and $a,b\in L$, we write $a\lessdot b$ if $a\prec b$ and $\\{c\in L\ |\ a\prec c\prec b\\}=\emptyset$. ###### Proposition 4.15. Let $L$ be a lowly finite distributive lattice, and $a\in\mathrm{ji}(L)$ such that $a$ is not minimal. Then, there exists a unique element $a^{*}\in L$ such that $a^{*}\lessdot a$. ###### Proof. Since $L$ is lowly finite and $a$ is not minimal, the set $\\{b\in L\ |\ b\prec a\\}$ is finite and nonempty, so it contains a maximal element $b$, and $b\lessdot a$. Suppose now that there exist two different elements $b,c\in L$ such that $b\lessdot a$ and $c\lessdot a$. Then, $b\vee c\succeq b$, $b\vee c\succeq c$, and $b\vee c\notin\\{b,c\\}$. The only possibility is $b\vee c=a$, which contradicts the join-irreducibility of $a$. ∎ Remark that, consequently, every $b\in L$ with $b\prec a$ satisfies $b\preceq a^{*}$: otherwise, $a^{*}\prec a^{*}\vee b\preceq a$ would force $a^{*}\vee b=a$, contradicting the join-irreducibility of $a$. Let $L$ be a lowly finite distributive lattice having a lowest element $\mathsf{0}$. Define $e_{\mathsf{0}}:=\mathsf{0}\in\mathrm{Val}(L)\quad\text{and}\quad e_{a}:=a-a^{*}\in\mathrm{Val}(L)\ \text{for each}\ a\in\mathrm{ji}(L)\setminus\\{\mathsf{0}\\}.$ ###### Theorem 4.16. Let $L$ be a complete lowly finite distributive lattice. Then, $\big{\\{}e_{a}\ |\ a\in\mathrm{ji}(L)\big{\\}}$ is an orthogonal idempotent basis of $\mathrm{Val}(L)$. ###### Proof. For $a,b\in\mathrm{ji}(L)\setminus\\{\mathsf{0}\\}$ with $a\neq b$, we have $e_{\mathsf{0}}\wedge e_{\mathsf{0}}=e_{\mathsf{0}}$ and $e_{a}\wedge e_{\mathsf{0}}=a\wedge\mathsf{0}-a^{*}\wedge\mathsf{0}=0$, $e_{a}\wedge e_{a}=a\wedge a-a\wedge a^{*}-a^{*}\wedge a+a^{*}\wedge a^{*}=a-a^{*}-a^{*}+a^{*}=e_{a}$, and $e_{a}\wedge e_{b}=a\wedge b-a\wedge b^{*}-a^{*}\wedge b+a^{*}\wedge b^{*}=0$. Indeed, if $a^{*}=b$ (the case $b^{*}=a$ being symmetric), the four meets are respectively $b$, $b^{*}$, $b$, and $b^{*}$; if $a$ and $b$ are incomparable, then $a\wedge b\prec a$ and $a\wedge b\prec b$, so $a\wedge b\preceq a^{*}$ and $a\wedge b\preceq b^{*}$ by the remark following Proposition 4.15, and the four meets are all equal to $a\wedge b$; if, say, $a\prec b$ with $b^{*}\neq a$, then $a\preceq b^{*}$, and the four meets are respectively $a$, $a$, $a^{*}$, and $a^{*}$. Then, $\big{\\{}e_{a}\ |\ a\in\mathrm{ji}(L)\big{\\}}$ is a set of orthogonal idempotents. Assume now that every subset with cardinality $n-1$ in $\big{\\{}e_{a}\ |\ a\in\mathrm{ji}(L)\big{\\}}$ is independent, and consider a subset of $n$ elements $\\{e_{a_{1}},\dots,e_{a_{n}}\\}$. We can suppose that $a_{n}$ is a maximal element in the set $\\{a_{1},\dots,a_{n}\\}$. If $\displaystyle a_{n}\preceq\bigvee_{i\in[n-1]}a_{i}\vee\bigvee_{i\in[n]}a_{i}^{*}$, then distributivity and the join-irreducibility of $a_{n}$ would give $a_{n}\preceq a_{i}$ for some $i\in[n-1]$, or $a_{n}\preceq a_{i}^{*}\preceq a_{i}$ for some $i\in[n]$; this contradicts either the maximality of $a_{n}$ among distinct elements or the fact that $a_{n}\not\preceq a_{n}^{*}$. Hence $\displaystyle a_{n}\notin\big{[}\bigwedge_{i\in[n-1]}a_{i}\wedge\bigwedge_{i\in[n]}a_{i}^{*},\,\bigvee_{i\in[n-1]}a_{i}\vee\bigvee_{i\in[n]}a_{i}^{*}\big{]}$. We deduce from Proposition 4.7 that $a_{n}$ is independent of $\\{a_{1},\dots,a_{n-1},a_{1}^{*},\dots,a_{n}^{*}\\}$. Hence $e_{a_{n}}$ is independent of $\\{e_{a_{1}},\dots,e_{a_{n-1}}\\}$, and $\\{e_{a_{1}},\dots,e_{a_{n}}\\}$ is consequently an independent set in $\mathrm{Val}(L)$. Finally, since there is a natural bijection $a\mapsto e_{a}$ between $\mathrm{ji}(L)$ and $\big{\\{}e_{a}\ |\ a\in\mathrm{ji}(L)\big{\\}}$, by Theorem 4.13 the latter is also a basis of $\mathrm{Val}(L)$.
∎ ### 4.3 Identities on Valuation Algebra ###### Theorem 4.17. Let $L$ be a complete lowly finite distributive lattice. Then, $\forall x\in L:\,x=\sum_{\begin{subarray}{c}a,b\in\mathrm{ji}(L)\\\ b\preceq a\preceq x\end{subarray}}\mu_{\mathrm{ji}(L)}(b,a)b.$ ###### Proof. We first show that $\displaystyle x=\sum_{\begin{subarray}{c}d\in\mathrm{ji}(L)\\\ d\preceq x\end{subarray}}e_{d}$ for every $x\in L$. If $x=\mathsf{0}$, this is $\mathsf{0}=e_{\mathsf{0}}$. If $x\in\mathrm{ji}(L)\setminus\\{\mathsf{0}\\}$, then $x=e_{x}+x^{*}$, every $d\prec x$ satisfies $d\preceq x^{*}$ by the remark following Proposition 4.15, and the claim for $x^{*}$ yields the claim for $x$. Now, consider any $x\in L\setminus\mathrm{ji}(L)$, and assume that, for every $b\in L$ such that $b\prec x$, we have $\displaystyle b=\sum_{\begin{subarray}{c}d\in\mathrm{ji}(L)\\\ d\preceq b\end{subarray}}e_{d}$. There exist $b,c\in L\setminus\\{x\\}$ such that $x=b\vee c$. Note that $\displaystyle b\wedge c=\sum_{\begin{subarray}{c}d\in\mathrm{ji}(L)\\\ d\preceq b\wedge c\end{subarray}}e_{d}$ as the $e_{d}$’s are orthogonal idempotents. Hence, $\displaystyle b\vee c=b+c-b\wedge c=\sum_{d\,\in\,\mathrm{ji}(L)\cap(\mathrm{id}(b)\cup\mathrm{id}(c))}e_{d}$. Besides, remark that, for any $y\in\mathrm{ji}(L)\cap\mathrm{id}(x)$, there exist $b,c\in L\setminus\\{x\\}$ such that $y\preceq b$ and $b\vee c=x$. Therefore, $\displaystyle x=\sum_{\begin{subarray}{c}d\in\mathrm{ji}(L)\\\ d\preceq x\end{subarray}}e_{d}$. Let $\mathrm{b}$ be the natural bijection $a\mapsto e_{a}$ between $\mathrm{ji}(L)$ and $\big{\\{}e_{a}\ |\ a\in\mathrm{ji}(L)\big{\\}}$. For $a\in\mathrm{ji}(L)$, we have $\displaystyle\mathrm{i}(a)=\sum_{d\,\in\,\mathrm{ji}(L)\cap[\mathsf{0},a]}\mathrm{b}(d)$. Then, using the Möbius inversion formula, we obtain $\displaystyle\mathrm{b}(a)=\sum_{d\,\in\,\mathrm{ji}(L)\cap[\mathsf{0},a]}\mu_{\mathrm{ji}(L)}(d,a)\mathrm{i}(d)\quad\text{or}\quad e_{a}=\sum_{\begin{subarray}{c}d\in\mathrm{ji}(L)\\\ d\preceq a\end{subarray}}\mu_{\mathrm{ji}(L)}(d,a)d.$ We obtain the result by combining $\displaystyle x=\sum_{\begin{subarray}{c}a\in\mathrm{ji}(L)\\\ a\preceq x\end{subarray}}e_{a}$ with $\displaystyle e_{a}=\sum_{\begin{subarray}{c}d\in\mathrm{ji}(L)\\\ d\preceq a\end{subarray}}\mu_{\mathrm{ji}(L)}(d,a)d$. ∎ ###### Lemma 4.18. If $L$ is a lowly finite distributive lattice, then $\big{(}\mathbb{Z}L,\wedge\big{)}$ is naturally isomorphic to the Möbius algebra $\big{(}\mathrm{M\ddot{o}b}(L),\cdot\big{)}$. ###### Proof. For $a\in L$, we have $\displaystyle u_{L}(a)=\sum_{c\in[\mathsf{0},a]}\mu_{L}(c,a)c$. The Möbius inversion formula consequently shows that $\displaystyle a=\sum_{c\in[\mathsf{0},a]}u_{L}(c)$. Then, for $a,b\in L$, we have $a\cdot b=\sum_{c\in[\mathsf{0},a]\cap[\mathsf{0},b]}u_{L}(c)=\sum_{c\in[\mathsf{0},a\wedge b]}u_{L}(c)=a\wedge b.$ ∎ ###### Lemma 4.19. If $L$ is a complete lowly finite distributive lattice, then $\big{(}\mathrm{M\ddot{o}b}(L)/\mathrm{N}(L),\cdot\big{)}$ is isomorphic to the Möbius algebra $\Big{(}\mathrm{M\ddot{o}b}\big{(}\mathrm{ji}(L)\big{)},\cdot\Big{)}$. ###### Proof. By Lemma 4.18, we get $\mathrm{M\ddot{o}b}(L)/\mathrm{N}(L)\simeq\mathbb{Z}L/\mathrm{N}(L)\simeq\mathrm{Val}(L)$. We know from Theorem 4.13 that $\mathrm{Val}(L)$ is isomorphic to $\mathbb{Z}\mathrm{ji}(L)$ as modules. Now, as algebras, $\big{(}\mathrm{Val}(L),\wedge\big{)}$ is naturally isomorphic to $\Big{(}\mathrm{M\ddot{o}b}\big{(}\mathrm{ji}(L)\big{)},\cdot\Big{)}$ since, for $a,b\in\mathrm{ji}(L)$, Theorem 4.17 shows that $a\cdot b=\sum_{c\in[\mathsf{0},a]\cap[\mathsf{0},b]\cap\mathrm{ji}(L)}u_{\mathrm{ji}(L)}(c)=\sum_{c\in[\mathsf{0},a\wedge b]\cap\mathrm{ji}(L)}u_{\mathrm{ji}(L)}(c)=a\wedge b.$ ∎ The following theorem is the main result of this survey.
Zaslavsky originally proved it for finite distributive lattices [18, Theorem 2.1]. ###### Theorem 4.20. Let $L$ be a complete lowly finite distributive lattice, and $M$ a subset of $L$ such that $\mathrm{ji}(L)\subseteq M$. If $a\in M\setminus\mathrm{ji}(L)$, then $u_{M}(a)\in\mathrm{N}(L).$ ###### Proof. Consider the linear maps $\mathrm{j}:\mathrm{M\ddot{o}b}(L)\rightarrow\mathrm{M\ddot{o}b}\big{(}\mathrm{ji}(L)\big{)}$, $\mathrm{j}_{1}:\mathrm{M\ddot{o}b}(L)\rightarrow\mathrm{M\ddot{o}b}(M)$, and $\mathrm{j}_{2}:\mathrm{M\ddot{o}b}(M)\rightarrow\mathrm{M\ddot{o}b}\big{(}\mathrm{ji}(L)\big{)}$ which on the basis $\\{u_{L}(a)\ |\ a\in L\\}$, and $\\{u_{M}(a)\ |\ a\in M\\}$ respectively have the values $\displaystyle\mathrm{j}\big{(}u_{L}(a)\big{)}:=\begin{cases}u_{\mathrm{ji}(L)}(a)&\text{if}\ a\in\mathrm{ji}(L),\\\ 0&\text{otherwise}\end{cases},\quad\mathrm{j}_{1}\big{(}u_{L}(a)\big{)}:=\begin{cases}u_{M}(a)&\text{if}\ a\in M,\\\ 0&\text{otherwise}\end{cases},$ $\displaystyle\text{and}\ \mathrm{j}_{2}\big{(}u_{M}(a)\big{)}:=\begin{cases}u_{\mathrm{ji}(L)}(a)&\text{if}\ a\in\mathrm{ji}(L),\\\ 0&\text{otherwise}\end{cases}.$ Then, $\mathrm{j}$, $\mathrm{j}_{1}$, and $\mathrm{j}_{2}$ are algebra homomorphisms by Corollary 2.22. Moreover, $\mathrm{j}=\mathrm{j}_{2}\,\mathrm{j}_{1}$, and $\mathrm{j}$ coincides with $\mathrm{j}_{2}$ on $\mathbb{Z}M\subseteq\mathbb{Z}L$: for $c\in M$, $c=\sum_{d\in[\mathsf{0},c]\cap M}u_{M}(d)$, hence $\mathrm{j}_{2}(c)=\sum_{d\in[\mathsf{0},c]\cap\mathrm{ji}(L)}u_{\mathrm{ji}(L)}(d)=\mathrm{j}(c)$. Thus $\mathrm{j}\big{(}u_{M}(a)\big{)}=\mathrm{j}_{2}\big{(}u_{M}(a)\big{)}=0$ if $a\in M\setminus\mathrm{ji}(L)$, that is, $u_{M}(a)\in\ker\mathrm{j}$. Finally, since $\mathrm{M\ddot{o}b}\big{(}\mathrm{ji}(L)\big{)}\simeq\mathrm{M\ddot{o}b}(L)/\ker\mathrm{j}$ [4, II-Theorem 6.12], we obtain $\ker\mathrm{j}=\mathrm{N}(L)$ using Lemma 4.19, and consequently $u_{M}(a)\in\mathrm{N}(L)$ if $a\in M\setminus\mathrm{ji}(L)$. ∎ ###### Corollary 4.21. Let $L$ be a complete lowly finite distributive lattice, $M$ a subset of $L$ such that $\mathrm{ji}(L)\subseteq M$, and $f:L\rightarrow G$ a valuation on $L$. If $a\in M\setminus\mathrm{ji}(L)$, then $\sum_{b\in[\mathsf{0},a]\cap M}\mu_{M}(b,a)f(b)=0.$ ###### Proof. Let $h:\mathrm{Val}(L)\rightarrow G$ be the module homomorphism associated to $f$ as in Proposition 4.3. We already know from Lemma 4.18 that $\mathrm{Val}(L)\simeq\mathrm{M\ddot{o}b}(L)/\mathrm{N}(L)$. By Theorem 4.20, we then obtain, in $\mathrm{Val}(L)$, $\displaystyle\sum_{b\in[\mathsf{0},a]\cap M}\mu_{M}(b,a)b$ $\displaystyle=0$ $\displaystyle h\Big{(}\sum_{b\in[\mathsf{0},a]\cap M}\mu_{M}(b,a)b\Big{)}$ $\displaystyle=h(0)$ $\displaystyle\sum_{b\in[\mathsf{0},a]\cap M}\mu_{M}(b,a)h(b)$ $\displaystyle=0$ $\displaystyle\sum_{b\in[\mathsf{0},a]\cap M}\mu_{M}(b,a)f(b)$ $\displaystyle=0.$ ∎ ## 5 Dissection Theory We use Corollary 4.21 to prove the fundamental theorem of dissection theory. ###### Definition 5.1. A subspace arrangement in a topological space $T$ is a finite set of subspaces of $T$. For a subspace arrangement $\mathscr{A}$ in $T$, let $\displaystyle L_{\mathscr{A}}:=\Big{\\{}\bigcap_{H\in\mathscr{B}}H\in 2^{T}\setminus\\{\emptyset\\}\ \Big{|}\ \mathscr{B}\subseteq\mathscr{A}\Big{\\}}$ be the poset with partial order $\preceq$ defined, for $A,B\in L_{\mathscr{A}}$, by $A\preceq B$ if and only if $A\subseteq B$. ###### Definition 5.2. Let $\mathscr{A}$ be a subspace arrangement in a topological space $T$.
A meet-refinement of $L_{\mathscr{A}}$ is a finite poset $L\subseteq 2^{T}\setminus\\{\emptyset\\}$, with the same partial order as that defined for $L_{\mathscr{A}}$, such that $T\in L$, $\displaystyle\bigcup_{X\in L\setminus\\{T\\}}X=\bigcup_{H\in\mathscr{A}}H$ and * • any element in $L_{\mathscr{A}}$ is a union of elements in $L$, * • any nonempty intersection of elements in $L$ is also a union of elements in $L$. Denote by $\mathrm{C}(X)$ the set formed by the connected components of a topological space $X$, and let $\mathscr{A}$ be a subspace arrangement of $T$. The set $\displaystyle L_{\mathscr{A}}^{c}:=L_{\mathscr{A}}\cup\bigcup_{\begin{subarray}{c}\mathscr{B}\subseteq\mathscr{A}\\\ \bigcap_{H\in\mathscr{B}}H\neq\emptyset\end{subarray}}\mathrm{C}\Big{(}\bigcap_{H\in\mathscr{B}}H\Big{)}$ is for instance a meet-refinement of $L_{\mathscr{A}}$. ###### Definition 5.3. Let $\mathscr{A}$ be a subspace arrangement in a topological space $T$, and denote by $C_{\mathscr{A}}$ the set formed by the connected components of $\displaystyle T\setminus\bigcup_{H\in\mathscr{A}}H$. An element of $C_{\mathscr{A}}$ is called a chamber of $\mathscr{A}$. Consider a subspace arrangement $\mathscr{A}$, and a meet-refinement $L$ of $L_{\mathscr{A}}$. Let $\mathrm{D}(L)$ be the finite distributive lattice of sets generated by $L\sqcup C_{\mathscr{A}}$ through unions and intersections, that is $\mathrm{D}(L):=\Big{\\{}\bigcup_{A\in M}A\sqcup\bigcup_{X\in D}X\ \Big{|}\ M\subseteq L,\,D\subseteq C_{\mathscr{A}}\Big{\\}}.$ In that case, for $A,B\in\mathrm{D}(L)$, we have $A\vee B=A\cup B$ and $A\wedge B=A\cap B$. ###### Lemma 5.4. Let $\mathscr{A}$ be a subspace arrangement in a topological space $T$, and $L$ a meet-refinement of $L_{\mathscr{A}}$. Then, $\mathrm{ji}\big{(}\mathrm{D}(L)\big{)}\subseteq\\{\emptyset\\}\sqcup L\sqcup C_{\mathscr{A}}$. ###### Proof. Every element of $\mathrm{D}(L)\setminus(\\{\emptyset\\}\sqcup L\sqcup C_{\mathscr{A}})$ is the union of at least two elements of $L\sqcup C_{\mathscr{A}}$. Then, none of them can be join-irreducible. ∎ ###### Theorem 5.5. Let $\mathscr{A}$ be a subspace arrangement in a topological space $T$, $L$ a meet-refinement of $L_{\mathscr{A}}$, and $f$ a valuation on $\mathrm{D}(L)$ such that $f(\emptyset)=0$. Then, $\sum_{C\in C_{\mathscr{A}}}f(C)=\sum_{X\in L\sqcup\\{\emptyset\\}}\mu_{L\sqcup\\{\emptyset\\}}(X,T)f(X).$ ###### Proof. Note first that $T\in L$ but $T\notin\mathrm{ji}\big{(}\mathrm{D}(L)\big{)}$ as $\displaystyle T=\bigcup_{H\in\mathscr{A}}H\sqcup\bigcup_{C\in C_{\mathscr{A}}}C$. From Corollary 4.21 and Lemma 5.4, we get $\sum_{A\in\\{\emptyset\\}\sqcup L\sqcup C_{\mathscr{A}}}\mu_{\\{\emptyset\\}\sqcup L\sqcup C_{\mathscr{A}}}(A,T)f(A)=0.$ The result is finally obtained after taking into account the following remarks: * • if $C\in C_{\mathscr{A}}$, then $[C,T]=\\{C,T\\}$, so $\mu_{\\{\emptyset\\}\sqcup L\sqcup C_{\mathscr{A}}}(C,T)=-\mu_{\\{\emptyset\\}\sqcup L\sqcup C_{\mathscr{A}}}(C,C)=-1$, * • if $X\in L$, then $[X,T]\cap C_{\mathscr{A}}=\emptyset$, hence $\mu_{\\{\emptyset\\}\sqcup L\sqcup C_{\mathscr{A}}}(X,T)=\mu_{\\{\emptyset\\}\sqcup L}(X,T)$, * • the terms indexed by $\emptyset$ vanish on both sides since $f(\emptyset)=0$. ∎ ###### Definition 5.6. Let $T$ be a topological space, and denote by $H_{n}(T)$ the $n^{\text{th}}$ singular homology group of $T$ for $n\in\mathbb{N}$. The Euler characteristic of $T$ is $\chi(T):=\sum_{n\in\mathbb{N}}(-1)^{n}\,\mathrm{rank}\,H_{n}(T).$ We can now state the fundamental theorem of dissection theory. ###### Corollary 5.7 (Fundamental Theorem of Dissection Theory).
Let $\mathscr{A}$ be a subspace arrangement in a topological space $T$ with $|\chi(T)|<\infty$, and $L$ a meet-refinement of $L_{\mathscr{A}}$. Then, $\sum_{C\in C_{\mathscr{A}}}\chi(C)=\sum_{X\in L}\mu_{L}(X,T)\chi(X).$ ###### Proof. It is known that $\chi(A)+\chi(B)=\chi(A\cup B)+\chi(A\cap B)$, for $A,B\subseteq T$, as stated at the end of [16, § 12.4]. The Euler characteristic is then a valuation on $\mathrm{D}(L)$. Moreover, $\chi(\emptyset)=0$ by definition. We consequently obtain the result by using Theorem 5.5 with $\chi$. ∎ ###### Example 1. Consider the arrangement $\mathscr{A}$ of parametric $1$-spheres $\displaystyle H_{1}:\begin{cases}x=\cos(\frac{\pi}{4})\\\ y=\sin(\frac{\pi}{4})\cos(t),\\\ z=\sin(\frac{\pi}{4})\sin(t)\end{cases}$ $\displaystyle H_{2}:\begin{cases}x=-\cos(\frac{\pi}{8})\\\ y=\sin(\frac{\pi}{8})\cos(t),\\\ z=\sin(\frac{\pi}{8})\sin(t)\end{cases}$ $\displaystyle H_{3}:\begin{cases}x=\cos(\frac{\pi}{6})\sin(t)\\\ y=\cos(\frac{\pi}{6})\cos(t),\\\ z=\sin(\frac{\pi}{6})\end{cases}$ $\displaystyle H_{4}:\begin{cases}x=\cos(\frac{\pi}{3})\sin(t)\\\ y=\cos(\frac{\pi}{3})\cos(t),\\\ z=-\sin(\frac{\pi}{3})\end{cases}$ where $t\in[0,2\pi]$, in $\mathbb{S}^{2}$, represented in Figure 1. On one side, $C_{\mathscr{A}}$ has $6$ chambers with Euler characteristic $1$ and one with Euler characteristic $0$, so that $\displaystyle\sum_{C\in C_{\mathscr{A}}}\chi(C)=6$. On the other side, $\displaystyle\sum_{X\in L_{\mathscr{A}}}\mu_{L_{\mathscr{A}}}(X,\mathbb{S}^{2})\chi(X)=\,$ $\displaystyle\mu_{L_{\mathscr{A}}}(\mathbb{S}^{2},\mathbb{S}^{2})\chi(\mathbb{S}^{2})+\sum_{i\in[4]}\mu_{L_{\mathscr{A}}}(H_{i},\mathbb{S}^{2})\chi(H_{i})$ $\displaystyle+\mu_{L_{\mathscr{A}}}(H_{1}\cap H_{3},\mathbb{S}^{2})\chi(H_{1}\cap H_{3})+\mu_{L_{\mathscr{A}}}(H_{2}\cap H_{3},\mathbb{S}^{2})\chi(H_{2}\cap H_{3})$ $\displaystyle=\,$ $\displaystyle 1\times 2+4\times(-1)\times 0+1\times 2+1\times 2$ $\displaystyle=\,$ $\displaystyle 6.$ Figure 1: $1$-Sphere Arrangement of Example 1 ###### Corollary 5.8. Let $\mathscr{A}$ be a subspace arrangement in a topological space $T$ with $|\chi(T)|<\infty$, and $L$ a meet-refinement of $L_{\mathscr{A}}$. Suppose that every chamber of $\mathscr{A}$ has the same Euler characteristic $c\neq 0$. Then, $\\#C_{\mathscr{A}}=\frac{1}{c}\sum_{X\in L}\mu_{L}(X,T)\chi(X).$ ###### Proof. It is an immediate consequence of the fundamental theorem of dissection theory where $\chi(C)=c$ for $C\in C_{\mathscr{A}}$. ∎ ## 6 Face Counting for Submanifold Arrangement We use the fundamental theorem of dissection theory to compute the $\mathrm{f}$-polynomial of submanifold arrangements having specific face properties. ###### Definition 6.1. Let $\mathscr{A}$ be a subspace arrangement in a topological space $T$, and $X\in L_{\mathscr{A}}$. The induced subspace arrangement on $X$ is the subspace arrangement in $X$ defined by $\mathscr{A}_{X}:=\big{\\{}H\cap X\ \big{|}\ H\in\mathscr{A},\,H\cap X\notin\\{\emptyset,X\\}\big{\\}}.$ Let $\displaystyle F_{\mathscr{A}}:=\bigsqcup_{X\in L_{\mathscr{A}}}C_{\mathscr{A}_{X}}$, and call an element of $F_{\mathscr{A}}$ a face of $\mathscr{A}$. ###### Definition 6.2. Recall that an $n$-dimensional manifold, or $n$-manifold, is a topological space with the property that each point has a neighborhood that is homeomorphic to $\mathbb{R}^{n}$, and a submanifold of an $n$-manifold $T$ is a $k$-manifold included in $T$ where $k\in[0,n]$. ###### Definition 6.3.
A submanifold arrangement in an $n$-manifold $T$ is a finite set $\mathscr{A}$ of submanifolds of $T$ such that every element of $L_{\mathscr{A}}\cup F_{\mathscr{A}}$ is a submanifold of $T$. ###### Example 2. Consider the arrangement $\mathscr{A}$ of $1$-manifolds $H_{1}:y=6\sin(x)$, $H_{2}:y=x+\cos(x)$, $\displaystyle H_{3}:\frac{x^{2}}{64}+\frac{y^{2}}{25}=1$ in $\mathbb{R}^{2}$, represented in Figure 2. We see that $\displaystyle\sum_{X\in L_{\mathscr{A}}}\mu_{L_{\mathscr{A}}}(X,\mathbb{R}^{2})\chi(X)=\,$ $\displaystyle\mu_{L_{\mathscr{A}}}(\mathbb{R}^{2},\mathbb{R}^{2})\chi(\mathbb{R}^{2})+\mu_{L_{\mathscr{A}}}(H_{1},\mathbb{R}^{2})\chi(H_{1})+\mu_{L_{\mathscr{A}}}(H_{2},\mathbb{R}^{2})\chi(H_{2})$ $\displaystyle+\mu_{L_{\mathscr{A}}}(H_{3},\mathbb{R}^{2})\chi(H_{3})+\mu_{L_{\mathscr{A}}}(H_{1}\cap H_{2},\mathbb{R}^{2})\chi(H_{1}\cap H_{2})$ $\displaystyle+\mu_{L_{\mathscr{A}}}(H_{1}\cap H_{3},\mathbb{R}^{2})\chi(H_{1}\cap H_{3})+\mu_{L_{\mathscr{A}}}(H_{2}\cap H_{3},\mathbb{R}^{2})\chi(H_{2}\cap H_{3})$ $\displaystyle=\,$ $\displaystyle 1\times 1+(-1)\times(-1)+(-1)\times(-1)+(-1)\times 0+1\times 3+1\times 10+1\times 2$ $\displaystyle=\,$ $\displaystyle 18$ is the number of chambers in $C_{\mathscr{A}}$. Figure 2: Submanifold Arrangement of Example 2 ###### Definition 6.4. Let $\mathscr{A}$ be a submanifold arrangement in an $n$-manifold $T$, and $x$ a variable. For $k\in[0,n]$, denote by $\mathrm{f}_{k}(\mathscr{A})$ the number of $k$-dimensional faces of $\mathscr{A}$. The $\mathrm{f}$-polynomial of $\mathscr{A}$ is $\mathrm{f}_{\mathscr{A}}(x):=\sum_{k\in[0,n]}\mathrm{f}_{k}(\mathscr{A})x^{n-k}.$ ###### Proposition 6.5. Let $\mathscr{A}$ be a submanifold arrangement in an $n$-manifold $T$ with $|\chi(T)|<\infty$. Suppose that $\displaystyle\forall k\in[0,n],\,\forall X\in L_{\mathscr{A}},\,\dim X=k:\ \chi(X)=l_{k},$ $\displaystyle\forall k\in[0,n],\,\forall C\in F_{\mathscr{A}},\,\dim C=k:\ \chi(C)=c_{k}\neq 0.$ Then, $\mathrm{f}_{\mathscr{A}}(x)=\sum_{i\in[0,n]}\sum_{\begin{subarray}{c}Y\in L_{\mathscr{A}}\\\ \dim Y=i\end{subarray}}\sum_{k\in[0,i]}\sum_{\begin{subarray}{c}X\in L_{\mathscr{A}_{Y}}\\\ \dim X=k\end{subarray}}\frac{l_{k}}{c_{i}}\mu_{L_{\mathscr{A}}}(X,Y)x^{n-k}.$ ###### Proof. Using the fundamental theorem of dissection theory, we get $\displaystyle\mathrm{f}_{i}(\mathscr{A})$ $\displaystyle=\sum_{\begin{subarray}{c}Y\in L_{\mathscr{A}}\\\ \dim Y=i\end{subarray}}\\#C_{\mathscr{A}_{Y}}$ $\displaystyle=\frac{1}{c_{i}}\sum_{\begin{subarray}{c}Y\in L_{\mathscr{A}}\\\ \dim Y=i\end{subarray}}\sum_{X\in L_{\mathscr{A}_{Y}}}\mu_{L_{\mathscr{A}_{Y}}}(X,Y)\chi(X)$ $\displaystyle=\sum_{\begin{subarray}{c}Y\in L_{\mathscr{A}}\\\ \dim Y=i\end{subarray}}\sum_{k\in[0,i]}\sum_{\begin{subarray}{c}X\in L_{\mathscr{A}_{Y}}\\\ \dim X=k\end{subarray}}\frac{l_{k}}{c_{i}}\mu_{L_{\mathscr{A}_{Y}}}(X,Y)$ $\displaystyle=\sum_{\begin{subarray}{c}Y\in L_{\mathscr{A}}\\\ \dim Y=i\end{subarray}}\sum_{k\in[0,i]}\sum_{\begin{subarray}{c}X\in L_{\mathscr{A}_{Y}}\\\ \dim X=k\end{subarray}}\frac{l_{k}}{c_{i}}\mu_{L_{\mathscr{A}}}(X,Y).$ Here the second equality follows from Corollary 5.8 applied to the induced arrangements $\mathscr{A}_{Y}$, and the last one from the identification of $L_{\mathscr{A}_{Y}}$ with the subposet $\\{Z\in L_{\mathscr{A}}\ |\ Z\subseteq Y\\}$ of $L_{\mathscr{A}}$, which gives $\mu_{L_{\mathscr{A}_{Y}}}(X,Y)=\mu_{L_{\mathscr{A}}}(X,Y)$. ∎ ###### Definition 6.6. Let $\mathscr{A}$ be a submanifold arrangement in an $n$-manifold $T$. The rank of $X\in L_{\mathscr{A}}$ is $\mathrm{rk}\,X:=n-\dim X$, and that of $\mathscr{A}$ is $\mathrm{rk}\,\mathscr{A}:=\max\\{\mathrm{rk}\,X\ |\ X\in L_{\mathscr{A}}\\}$. ###### Definition 6.7. Let $\mathscr{A}$ be a submanifold arrangement in an $n$-manifold $T$, and $x,y$ two variables.
The Möbius polynomial of $\mathscr{A}$ is $\mathrm{M}_{\mathscr{A}}(x,y):=\sum_{X,Y\in L_{\mathscr{A}}}\mu_{L_{\mathscr{A}}}(X,Y)x^{\mathrm{rk}\,X}y^{\mathrm{rk}\,\mathscr{A}-\mathrm{rk}\,Y}.$ ###### Corollary 6.8. Let $\mathscr{A}$ be a submanifold arrangement in an $n$-manifold $T$ with $|\chi(T)|<\infty$. Suppose that $\chi(X)=(-1)^{\dim X}$ for every $X\in L_{\mathscr{A}}\cup F_{\mathscr{A}}$. Then, $\mathrm{f}_{\mathscr{A}}(x)=(-1)^{\mathrm{rk}\,\mathscr{A}}\mathrm{M}_{\mathscr{A}}(-x,-1).$ ###### Proof. From Proposition 6.5, we obtain $\displaystyle\mathrm{f}_{\mathscr{A}}(x)$ $\displaystyle=\sum_{i\in[0,n]}\sum_{\begin{subarray}{c}Y\in L_{\mathscr{A}}\\\ \dim Y=i\end{subarray}}\sum_{k\in[0,i]}\sum_{\begin{subarray}{c}X\in L_{\mathscr{A}_{Y}}\\\ \dim X=k\end{subarray}}(-1)^{k-i}\mu_{L_{\mathscr{A}}}(X,Y)x^{n-k}$ $\displaystyle=\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}(-1)^{\dim X-\dim Y}\mu_{L_{\mathscr{A}}}(X,Y)x^{n-\dim X}$ $\displaystyle=\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}(-1)^{n-\dim Y}\mu_{L_{\mathscr{A}}}(X,Y)(-1)^{\dim X-n}x^{n-\dim X}$ $\displaystyle=\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}(-1)^{\mathrm{rk}\,Y}\mu_{L_{\mathscr{A}}}(X,Y)(-x)^{\mathrm{rk}\,X}$ $\displaystyle=(-1)^{\mathrm{rk}\,\mathscr{A}}\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}\mu_{L_{\mathscr{A}}}(X,Y)(-x)^{\mathrm{rk}\,X}(-1)^{\mathrm{rk}\,Y-\mathrm{rk}\,\mathscr{A}}$ $\displaystyle=(-1)^{\mathrm{rk}\,\mathscr{A}}\mathrm{M}_{\mathscr{A}}(-x,-1).$ ∎ ###### Corollary 6.9. Let $\mathscr{A}$ be a submanifold arrangement in an $n$-manifold $T$ with $|\chi(T)|<\infty$. Suppose that $\forall C\in F_{\mathscr{A}}:\,\chi(C)=(-1)^{\dim C}\quad\text{and}\quad\forall X\in L_{\mathscr{A}}:\,\chi(X)=\begin{cases}2&\text{if}\ \dim X\equiv 0\mod 2\\\ 0&\text{otherwise}\end{cases}.$ Moreover, define $\displaystyle\gamma_{n}:=\begin{cases}1&\text{if}\ n\equiv 0\mod 2\\\ -1&\text{otherwise}\end{cases}$, that is, $\gamma_{n}=(-1)^{n}$. Then, $\mathrm{f}_{\mathscr{A}}(x)=(-1)^{n-\mathrm{rk}\,\mathscr{A}}\big{(}\mathrm{M}_{\mathscr{A}}(x,-1)+\gamma_{n}\mathrm{M}_{\mathscr{A}}(-x,-1)\big{)}.$ ###### Proof.
From Proposition 6.5, we obtain $\displaystyle\mathrm{f}_{\mathscr{A}}(x)$ $\displaystyle=\sum_{i\in[0,n]}\sum_{\begin{subarray}{c}Y\in L_{\mathscr{A}}\\\ \dim Y=i\end{subarray}}\sum_{k\in[0,i]}\sum_{\begin{subarray}{c}X\in L_{\mathscr{A}_{Y}}\\\ \dim X=k\end{subarray}}(-1)^{-i}\chi(X)\mu_{L_{\mathscr{A}}}(X,Y)x^{n-k}$ $\displaystyle=\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}(-1)^{-\dim Y}\chi(X)\mu_{L_{\mathscr{A}}}(X,Y)x^{n-\dim X}$ $\displaystyle=(-1)^{n}\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}\chi(X)\mu_{L_{\mathscr{A}}}(X,Y)x^{\mathrm{rk}\,X}(-1)^{\mathrm{rk}\,Y}$ $\displaystyle=(-1)^{n-\mathrm{rk}\,\mathscr{A}}\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}\chi(X)\mu_{L_{\mathscr{A}}}(X,Y)x^{\mathrm{rk}\,X}(-1)^{\mathrm{rk}\,\mathscr{A}-\mathrm{rk}\,Y}$ $\displaystyle=(-1)^{n-\mathrm{rk}\,\mathscr{A}}\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}\mu_{L_{\mathscr{A}}}(X,Y)x^{\mathrm{rk}\,X}(-1)^{\mathrm{rk}\,\mathscr{A}-\mathrm{rk}\,Y}$ $\displaystyle\quad+(-1)^{n-\mathrm{rk}\,\mathscr{A}}\gamma_{n}\sum_{Y\in L_{\mathscr{A}}}\sum_{X\in L_{\mathscr{A}_{Y}}}\mu_{L_{\mathscr{A}}}(X,Y)(-x)^{\mathrm{rk}\,X}(-1)^{\mathrm{rk}\,\mathscr{A}-\mathrm{rk}\,Y}$ $\displaystyle=(-1)^{n-\mathrm{rk}\,\mathscr{A}}\mathrm{M}_{\mathscr{A}}(x,-1)+(-1)^{n-\mathrm{rk}\,\mathscr{A}}\gamma_{n}\mathrm{M}_{\mathscr{A}}(-x,-1).$ ∎ ## References * [1] G. Alexanderson, J. Wetzel, Arrangements of Planes in Space, Discrete Math. (34) (1981), 219–240. * [2] S. Bhatta, H. Ramananda, A Note on Irreducible Elements in a Finite Poset, Int. J. Algebra (4) (2010), 669–675. * [3] T. Blyth, Lattices and Ordered Algebraic Structures, Universitext, 2005. * [4] S. Burris, H. Sankappanavar, A Course in Universal Algebra, Grad. Texts in Math., 1981. * [5] A. Debussche, Compléments Mathématiques, Lecture Notes, ENS Rennes, 2011. * [6] P. Deshpande, On a Generalization of Zaslavsky’s Theorem for Hyperplane Arrangements, Ann. Comb. (18) (2014), 35–55. * [7] L. Geissinger, Valuations on Distributive Lattices I, Arch. Math. (Basel) (24) (1973), 230–239. * [8] L. Geissinger, Valuations on Distributive Lattices II, Arch. Math. (Basel) (24) (1973), 337–345. * [9] G. Grätzer, Lattice Theory: Foundation, Birkhäuser, 2011. * [10] C. Greene, On the Möbius Algebra of a Partially Ordered Set, Adv. Math. (10) (1973), 177–187. * [11] J. Lewin, A Simple Proof of Zorn’s Lemma, Amer. Math. Monthly (98) 4 (1991), 353–354. * [12] L. Pakula, Pseudosphere Arrangements with Simple Complements, Rocky Mountain J. Math. (33) 4 (2003), 1465–1477. * [13] G.-C. Rota, On the Foundations of Combinatorial Theory I. Theory of Möbius Functions, Z. Wahrscheinlichkeitstheorie (2) (1964), 340–368. * [14] L. Solomon, The Burnside Algebra of a Finite Group, J. Combin. Theory (2) 4 (1967), 603–615. * [15] J. Steiner, Einige Gesetze über die Theilung der Ebene und des Raumes, J. Reine Angew. Math. (1) (1826), 349–364. * [16] T. tom Dieck, Algebraic Topology, EMS Textbk. Math., 2008. * [17] T. Zaslavsky, Facing up to Arrangements: Face-Count Formulas for Partitions of Space by Hyperplanes, Mem. Amer. Math. Soc. 154 (1) 1 (1975). * [18] T. Zaslavsky, A Combinatorial Analysis of Topological Dissections, Adv. Math. (25) (1977), 267–285. * [19] M. Zorn, A Remark on Method in Transfinite Algebra, Bull. Amer. Math. Soc. (41) (1935), 667–670.
# Minimal Gaussian Curvature Surface Tom Gilat (Computer Science Department, Bar-Ilan University; <EMAIL_ADDRESS>) ###### Abstract This paper deals with finding surfaces in $\mathbb{R}^{3}$ which are as close as possible to being flat and span a given contour such that the contour is a geodesic on the sought surface. We look for a surface which minimizes the total Gaussian curvature squared. We show that by a change of coordinates the curvature of the optimal surface is controlled by a PDE which can be reduced to the biharmonic equation with an easy-to-define Dirichlet boundary condition and Neumann boundary condition zero. We then state a system of PDEs for the function whose graph is the optimal surface. ## 1 Introduction This paper deals with finding surfaces in $\mathbb{R}^{3}$ which are as close as possible to being flat and span a given contour such that the contour is a geodesic on the sought surface. Explicitly: given a smooth closed simple curve in $\mathbb{R}^{3}$, we will give a system of PDEs whose solution describes the surface, with minimum total Gaussian curvature squared, that spans the given contour. The boundary condition is that the given curve is a geodesic on the target surface (we give definitions in the following section). I wish to elaborate on the specific choice of energy used and on the geodesic boundary condition, as this is a new concept, which relates to challenges in computer graphics and in computer and human vision that appeared in previous publications (see [1] and references therein). I wanted to be able to recover "nice" surfaces from networks of geodesic curves in 3d space. I was looking for an energy on a surface that would allow me to translate the geodesic requirement for the contour into the boundary conditions for an Euler-Lagrange equation. I could then consider each "cell" in the network of curves separately. I had in mind a piece of leather that is stretched on a shoe last or on a baseball - one starts with a flat piece of leather, so it seemed reasonable to consider the square of the Gaussian curvature (the seam on a baseball is not a geodesic but may have low geodesic curvature). The material, stretchable to some extent but intrinsically flat, would tend to stay flat. I was able to show that the geodesic boundary condition is equivalent to taking Neumann boundary condition zero for the domain of the PDE for the conformal factor of the optimal metric. I prove this in this text. We use a novel concept where we consider the projection of an ambient open surface containing the unknown optimal surface onto an arbitrary affine plane to be the local coordinates, and we then find two coordinate transformation maps which relate the "projection coordinates" to isothermal coordinates. The isothermal coordinates are adjusted so that they identify with the projection coordinates when restricted to the contour. The affine plane can be arbitrary as long as the projection is one-to-one. The novelty is in having the two charts, related by transformations: the projection chart, which allows one to ask for a function whose graph is the surface, and the isothermal chart, where the Euler-Lagrange equation with its boundary conditions is computed and proved. (We assume that the local coordinates on the surface can be expressed by one isothermal coordinates patch. In addition, we assume that the optimal surface can be projected one-to-one on at least one affine plane.)
In numerical applications, we do not use the full Euler-Lagrange equation, but take its linear terms, which form a biharmonic operator applied to the conformal factor of the metric. A related question, considered in the past, is that of prescribing the Gaussian curvature of a surface. For an open set $V\subset\mathbb{R}^{2}$, a function $u:V\rightarrow\mathbb{R}$, and a prescribed Gaussian curvature $K:V\rightarrow\mathbb{R}$, the equation for $u$ having curvature $K$ is the following:

$$\frac{u_{xx}u_{yy}-u_{xy}^{2}}{(1+u_{x}^{2}+u_{y}^{2})^{2}}=K(x,y) \tag{1}$$

For a general smooth $K(x,y)$ defined in $V$, this equation poses difficulties even for proving the existence of a solution. See Kazdan [4] for more information. We remark that this is a Monge-Ampère equation, as it involves the determinant of the Hessian matrix. This work was done with applications in computer graphics in mind. I wanted to be able to recover a “flat” surface from a network of geodesic curves. However, I believe that this work and its generalization to scalar curvature and to a pseudo-Riemannian setting can be useful in physics, for example in general relativity. In [1] I show how to numerically compute approximations of the minimal Gaussian curvature surfaces. I am currently exploring applications of these surfaces in the areas of magnetic field computation and fluid mechanics, where the magnetic field or flow is normal to these surfaces.

## 2 Preliminaries

We give the following definitions.

###### Definition 1.

A surface is a dimension 2 regular submanifold of $\mathbb{R}^{3}$. A surface inherits a Riemannian metric from the Euclidean metric on $\mathbb{R}^{3}$. We will regard a surface as a Riemannian manifold with this metric. (See [7], [8].)

###### Definition 2.

We say that a surface $S$ has the graphicality property with respect to an affine plane $H$ if the projection of $S$ onto $H$ is one-to-one.

###### Definition 3.

A surface $S$ which can be covered with one local coordinates patch is homeomorphic to $\mathbb{R}^{2}$. Denote this homeomorphism by $\phi$. If $\Gamma$ is a simple closed curve on $S$, then we can use the Jordan curve theorem to define the interior of $S$ with respect to $\Gamma$ as the preimage under $\phi$ of the interior component of $\mathbb{R}^{2}$ cut by $\phi(\Gamma)$. We denote it by $\accentset{\circ}{S}_{\Gamma}$. We let $S_{\Gamma}=\accentset{\circ}{S}_{\Gamma}\cup\Gamma$.

Let $\Gamma$ be a smooth, simple, closed curve in $\mathbb{R}^{3}$. Throughout the text $\Gamma$ will denote such a curve. We sometimes regard $\Gamma$ as a subset of $\mathbb{R}^{3}$. We now make reasonable assumptions about $\Gamma$:

1. There exists an open surface $\widetilde{S}$ which can be covered with one isothermal coordinates patch with minimum total Gaussian curvature squared on the interior component of the surface with respect to $\Gamma$. The minimum is taken over all the two-dimensional Riemannian manifolds which can be covered with one isothermal coordinates patch, such that the image of the new chart contains, as a subset, the image under the chart of $\widetilde{S}$ of the interior component of $\widetilde{S}$ with respect to $\Gamma$.
In addition, we require that the preimage of the boundary of the original “interior component” under the new chart is a geodesic, and that the metric on the preimage (under this new chart) of the boundary of the “interior component” agrees with the metric induced on $\Gamma\subset\widetilde{S}$ by $\mathbb{R}^{3}$, under the natural correspondence induced by the two charts. The total Gaussian curvature is computed on the “interior component” of the two-dimensional manifold under consideration.

2. There exists a plane $\widetilde{H}$ such that $\widetilde{S}$ has the graphicality property with respect to $\widetilde{H}$.

From now on we assume that $\Gamma$ satisfies Properties (1) and (2).

###### Definition 4.

$\mathcal{E}_{\Gamma}(S)=\int_{S_{\Gamma}}K^{2}\,d\mathrm{vol}_{g}$, where $g$ is the Riemannian metric of $S$ (inherited from $\mathbb{R}^{3}$) and $K$ is the Gaussian curvature at each point of $S$.

If Property (1) holds for $\Gamma$, then $\mathcal{E}_{\Gamma}(\widetilde{S})\leq\mathcal{E}_{\Gamma}(S)$ for any surface $S$ which contains $\Gamma$ as a geodesic and can be covered by one isothermal coordinates patch (see the proof of Theorem 2).

## 3 Main Theorems

We will state the two main theorems shortly and prove them, but we first give an exposition of the strategy that we use. We assume that we are given $\Gamma$ for which Properties (1) and (2) hold. Let $\widetilde{H}$ be the affine plane as in the description of Property (2). The strategy is to use the expression for the Gaussian curvature squared in isothermal coordinates. Assume for a minute that we know the optimal surface $\widetilde{S}$, and we look at the local coordinates which are given by the projection of $\widetilde{S}$ to $\widetilde{H}$ (identified with $\mathbb{R}^{2}$). By a change of coordinates we can then assume that in local coordinates the Riemannian metric of the unknown surface is given by $g=e^{2f(x,y)}(dx^{2}+dy^{2})$, where $f$ is a smooth real-valued function. These coordinates are called isothermal coordinates. In these coordinates the Gaussian curvature is given by $K(x,y)=-e^{-2f(x,y)}\Delta f(x,y)$, $\Delta$ being the Laplacian operator (computed via the Theorema Egregium, see [8, p. 90]; this also appears in Prof. Xianfeng David Gu's lecture notes found online, and in a more general form in [2, Section 3.5: Yamabe equation]). For the existence of isothermal coordinates, see [5, p. 135–138] or [6, p. 376–378]. We then compute the Euler-Lagrange equation with the Gaussian curvature squared, given in the isothermal coordinates, as the energy. We are then able to reduce the Euler-Lagrange equation to the biharmonic equation for the function $f$ in the exponent, which relates the original metric to the flat metric. The following PDE is the result of the Euler-Lagrange computation:

$$-3f_{xx}^{2}-6f_{xx}f_{yy}-3f_{yy}^{2}+4f_{x}^{2}f_{xx}+4f_{x}^{2}f_{yy}+4f_{y}^{2}f_{xx}+4f_{y}^{2}f_{yy}-4f_{x}f_{xxx}-4f_{x}f_{xyy}-4f_{y}f_{yyy}-4f_{y}f_{xxy}+f_{xxxx}+2f_{xxyy}+f_{yyyy}=0. \tag{2}$$

This equation looks intimidating, but heuristically, in applications one can replace it with the biharmonic equation:

$$f_{xxxx}+2f_{xxyy}+f_{yyyy}=0, \tag{3}$$

as all other terms consist of two or more factors and are expected to be small. The biharmonic equation is a well-studied equation in the literature.
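Equation (2) can be checked symbolically. The sketch below (our own, assuming the SymPy library; it is not taken from the paper) computes the variational derivative of the energy density $e^{-2f}(f_{xx}+f_{yy})^{2}$ used in the proof of Theorem 1, and verifies that it equals $2e^{-2f}$ times the left-hand side of Equation (2); since this factor is nonvanishing, the two equations have the same solutions.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

# Energy density in isothermal coordinates: K^2 dvol = e^{-2f} (f_xx + f_yy)^2 dx dy
L = sp.exp(-2*f) * (sp.diff(f, x, 2) + sp.diff(f, y, 2))**2

# Variational derivative; SymPy's euler_equations handles higher-order Lagrangians
el = sp.euler_equations(L, f, (x, y))[0].lhs

# Left-hand side of Equation (2), written out term by term
fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fyy = sp.diff(f, x, 2), sp.diff(f, y, 2)
eq2 = (-3*fxx**2 - 6*fxx*fyy - 3*fyy**2
       + 4*fx**2*fxx + 4*fx**2*fyy + 4*fy**2*fxx + 4*fy**2*fyy
       - 4*fx*sp.diff(f, x, 3) - 4*fx*sp.diff(f, x, y, y)
       - 4*fy*sp.diff(f, y, 3) - 4*fy*sp.diff(f, x, x, y)
       + sp.diff(f, x, 4) + 2*sp.diff(f, x, 2, y, 2) + sp.diff(f, y, 4))

# The two should differ only by the overall factor 2 e^{-2f}
print(sp.simplify(el - 2*sp.exp(-2*f)*eq2))  # expected output: 0
```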
The boundary conditions for the PDE constraining the function $f$ are deduced by simple arguments using the fact that the metric is conformal, and that the boundary of the surface is a geodesic in the sense we defined it. Recall that $\Gamma$ is the original contour in $\mathbb{R}^{3}$, and let $\widetilde{S}$ and $\widetilde{H}$ be as in Properties (1) and (2). We denote by $\widetilde{\Gamma}$ the projection of $\Gamma$ on the affine plane $\widetilde{H}$. Let $\pi_{\widetilde{\Gamma}}:\Gamma\rightarrow\widetilde{\Gamma}$ be the projection map. Let $\gamma(t)$ be a parameterization (arc-length or not) of $\widetilde{\Gamma}$ with length $L$. In order to be able to take a derivative of $\gamma$ at 0, we regard its domain as $\mathbb{R}/L\mathbb{Z}$. We denote by $\phi_{\widetilde{H}}:\widetilde{S}\rightarrow\mathbb{R}^{2}$ the projection of $\widetilde{S}$ on $\widetilde{H}$ composed with an identification of $\widetilde{H}$ with $\mathbb{R}^{2}$. Lastly, let $\Omega=\phi_{\widetilde{H}}\left(\accentset{\circ}{\widetilde{S}}_{\Gamma}\right)$. We prove the following theorem.

###### Theorem 1.

Let $\Gamma$ be a smooth simple closed curve for which Properties (1) and (2) hold. Let $\widetilde{S}$ and $\widetilde{H}$ be as in the statements of Properties (1) and (2), with respect to $\Gamma$. If $g=e^{2f(x,y)}(dx^{2}+dy^{2})$ is a Riemannian metric on $\widetilde{S}$ for a coordinate chart which maps $\Gamma$ to $\widetilde{\Gamma}$ by projection, then $f$ satisfies the following PDE:

$$\begin{aligned}
&\text{Equation } (2) \quad \text{in } \Omega,\\
&f(\gamma(t))=\tfrac{1}{2}\log\left\|(\pi_{\widetilde{\Gamma}}^{-1}\circ\gamma)^{\prime}(t)\right\|,\quad t\in[0,L),\quad \text{on } \partial\Omega,\\
&\frac{\partial f}{\partial\overrightarrow{n}}=0\quad \text{on } \partial\Omega.
\end{aligned} \tag{4}$$

We give concise proofs for the PDE in $\Omega$, and for the Dirichlet and Neumann boundary conditions.

###### Proof (PDE for $f$ inside the domain).

The volume form for the surface $\widetilde{S}$ in the isothermal coordinates is given by $\sqrt{\mathrm{det}\ g}\ dxdy=e^{2f(x,y)}dxdy$; therefore the functional to be minimized is:

$$\int_{S_{\Gamma}}K^{2}\,d\mathrm{vol}_{g}=\int_{\Omega}e^{-2f(x,y)}(f_{xx}+f_{yy})^{2}\,dxdy.$$

Working out the Euler-Lagrange equation yields Equation (2). ∎

###### Proof (Dirichlet Boundary Condition).

For any two points $x,y\in\partial\Omega$, we know the length on $\widetilde{S}$ of the preimage of each of the two arcs connecting $x$ to $y$. It is the length of the segment of $\Gamma$ connecting $\pi_{\widetilde{\Gamma}}^{-1}(x)$ and $\pi_{\widetilde{\Gamma}}^{-1}(y)$. The Dirichlet boundary condition is set to agree with this observation. ∎

###### Proof (Neumann Boundary Condition).

Assume that there is an $x\in\partial\Omega$ for which $\frac{\partial f}{\partial\overrightarrow{n}}(x)\neq 0$. The function $f$ is smooth; therefore there exists a neighborhood of $x$ (in the image of the coordinate chart of $\widetilde{S}$) on which $\frac{\partial f}{\partial\overrightarrow{n}}\neq 0$. The neighborhood we choose can be arbitrarily small. This shows that there is a closed curve on $\widetilde{S}$ shorter than $\Gamma$ which is equal to $\Gamma$ except for the preimage of a curve segment in an arbitrarily small neighborhood of $x$ under the coordinate chart.
To see this: we can always make a detour around $x$ in the image of the coordinate chart of $\widetilde{S}$ which increases the factor $dx^{2}+dy^{2}$ in the metric along the new curve by as little as we wish, while $e^{2f}$ is bounded from above by $e^{2f(x)-\delta}$ for some $\delta>0$. This is a contradiction to the assumption that $\Gamma$ is a geodesic on $\widetilde{S}$. (See [3, Chapter 1] for details.) ∎

We now prove the following theorem, which can be viewed as a corollary of Theorem 1.

###### Theorem 2.

Let $\Gamma$ be a smooth simple closed curve for which Properties (1) and (2) hold. Let $\widetilde{S}$ and $\widetilde{H}$ be as in the statements of Properties (1) and (2), with respect to $\Gamma$. Assume $\widetilde{H}=\mathbb{R}^{2}$, and let $\Omega$ be the projection of $\accentset{\circ}{\widetilde{S}}_{\Gamma}$ on $\mathbb{R}^{2}$. Let $h:\overline{\Omega}\rightarrow\mathbb{R}$ be a function which is $\mathcal{C}^{\infty}$ on $\Omega$ and continuous on $\overline{\Omega}$, such that the graph of $h$ is $\widetilde{S}_{\Gamma}$. (By Properties (1) and (2) such a function exists.) Let $f$ be the solution of Equation (4) in Theorem 1. Let $\widetilde{K}(x,y)=-e^{-2f(x,y)}\Delta f(x,y)$ be a function on $\overline{\Omega}$. Let $E(u,v)=1+h_{u}^{2}(u,v)$, $F(u,v)=h_{u}(u,v)h_{v}(u,v)$ and $G(u,v)=1+h_{v}^{2}(u,v)$. Then there is a coordinate change $x(u,v),y(u,v)$, such that $h,x,y$ satisfy the following equations (for $(u,v)\in\Omega$ in the first three equations):

$$\begin{aligned}
&\frac{h_{uu}h_{vv}-h_{uv}^{2}}{(1+h_{u}^{2}+h_{v}^{2})^{2}}=\widetilde{K}(x(u,v),y(u,v)),\\
&\frac{\partial}{\partial v}\frac{Fx_{u}-Ex_{v}}{\sqrt{EG-F^{2}}}+\frac{\partial}{\partial u}\frac{Fx_{v}-Gx_{u}}{\sqrt{EG-F^{2}}}=0,\\
&\frac{\partial}{\partial v}\frac{Fy_{u}-Ey_{v}}{\sqrt{EG-F^{2}}}+\frac{\partial}{\partial u}\frac{Fy_{v}-Gy_{u}}{\sqrt{EG-F^{2}}}=0,\\
&x(u,v)=u\quad\text{on }\partial\Omega,\\
&y(u,v)=v\quad\text{on }\partial\Omega.
\end{aligned} \tag{5}$$

###### Proof.

If the isothermal coordinates covering $\widetilde{S}$, which exist by Property (1), are given by $(\xi,\eta)$, then a coordinate change to new isothermal coordinates $(x,y)$ is given by the following equations: $x_{\xi\xi}+x_{\eta\eta}=0,\ y_{\xi\xi}+y_{\eta\eta}=0$ (see [5, p. 135–138]). By prescribing Dirichlet boundary values agreeing with the conditions in Theorem 1 for these Laplace equations, we can obtain isothermal coordinates $(x,y)$. This shows that isothermal coordinates which agree with the conditions in Theorem 1 exist. We assume that $(u,v)$ are the coordinates on $\widetilde{S}$ obtained by the projection of the surface on $\mathbb{R}^{2}$. If $h$ is the inverse function of this projection, then $E,F,G$ are given by the identities stated. Then the maps $x(u,v),y(u,v)$ are the solutions of the two Laplace-Beltrami equations in the above system with the two Dirichlet boundary conditions. See [5, p. 135–138] for details. If $f(x,y)$ satisfies Equation (4), then $\widetilde{K}(x,y)=-e^{-2f(x,y)}\Delta f(x,y)$ is the optimal Gaussian curvature at $(x,y)$. The function $h$, whose graph is $\widetilde{S}_{\Gamma}$, is the solution of the Monge-Ampère equation stated. ∎

## References

* [1] T. Gilat, "Smooth Surfaces via Nets of Geodesics", arXiv preprint arXiv:2109.01429, 2021.
* [2] X.D. Gu, F. Luo, and S.-T. Yau, "Recent Advances in Computational Conformal Geometry", Communications in Information and Systems 9.2, International Press of Boston, 2009, pp. 163–196.
* [3] J. Jost, "Riemannian Geometry and Geometric Analysis", Universitext, Springer International Publishing, 2017.
* [4] J.L. Kazdan, "Prescribing the Curvature of a Riemannian Manifold", Regional Conference Series in Mathematics 57, American Mathematical Society, 1985.
* [5] W. Schlag, "A Course in Complex Analysis and Riemann Surfaces", Graduate Studies in Mathematics 154, American Mathematical Society, 2014.
* [6] M.E. Taylor, "Partial Differential Equations I: Basic Theory", Applied Mathematical Sciences 115, Springer-Verlag New York, 2011.
* [7] L.W. Tu, "An Introduction to Manifolds", Universitext, Springer International Publishing, 2011.
* [8] L.W. Tu, "Differential Geometry: Connections, Curvature, and Characteristic Classes", Graduate Texts in Mathematics 275, Springer International Publishing, 2017.
# Protocol for autonomous rearrangement of cold atoms into low-entropy configurations

Matthew A. Norcia, <EMAIL_ADDRESS>, Institut für Quantenoptik und Quanteninformation, Österreichische Akademie der Wissenschaften, Innsbruck, Austria

###### Abstract

The preparation of low-entropy starting conditions is a key requirement for many experiments involving neutral atoms. Here, we propose a method to autonomously assemble arbitrary spatial configurations of atoms within arrays of optical tweezers or lattice sites, enabled by a combination of tunneling and ground-state laser cooling. In contrast to previous methods, our protocol does not rely on either imaging or evaporative cooling. This circumvents limitations associated with imaging fidelity and loss, especially in systems with small spatial scales, while providing a substantial improvement in speed relative to evaporative approaches. These features may make it well-suited for preparing arbitrary initial conditions for Bose-Hubbard or Rydberg interacting systems.

## I Introduction

Microscopic control of neutral atoms has led to a recent explosion of interest, with applications in quantum simulation, quantum information processing, and quantum-enhanced metrology Gross and Bloch (2017); Browaeys and Lahaye (2020); Weiss and Saffman (2017); Gil _et al._ (2014). Achieving configurations of large numbers of atoms with well-defined positions and motional states is a key capability at the forefront of modern experiments. Currently, two main approaches are used to accomplish this: In the first, evaporative cooling is used to remove entropy from the atomic system, after which point the atoms may be adiabatically loaded into a desired potential landscape such as an optical lattice Gross and Bloch (2017). Entropy redistribution techniques can then be used to further reduce the entropy of a sub-region Chiu _et al._ (2018); Kantian _et al._ (2018); Yang _et al._ (2020). In the second approach, atoms are stochastically loaded into the tightly confining potential of optical tweezers or lattice sites, with light-assisted collisions leading to either zero or one atom in each site Schlosser _et al._ (2001). The atoms are then rearranged into the desired configuration based on information gained from imaging Weiss _et al._ (2004); Miroshnychenko _et al._ (2006); Kim _et al._ (2016); Barredo _et al._ (2016); Endres _et al._ (2016). Of particular relevance for this proposal are recent experiments where atoms are also laser-cooled to their motional ground states Kumar _et al._ (2018).

While highly successful, these two existing approaches each have limitations. Evaporation is typically slow, leading to long experimental cycle times. This can be problematic for quantum simulation experiments, where large numbers of experimental trials are required, and in the context of metrology, where long cycle times degrade the stability of atomic clocks or other sensors Dick (1987). Approaches based on measurement and rearrangement can provide a substantial advantage in terms of speed, as laser cooling replaces evaporation as the means of entropy removal. However, the resulting level of entropy can be limited by imaging fidelity and loss. Further, these approaches require an imaging system capable of localizing and manipulating atoms at the single-site level, which has so far prevented the application of rearrangement protocols to systems with sub-micron-scale lattices, desirable for tunneling.
In this work, we propose an alternate approach that does not require either evaporation or imaging. Like the imaging-based approach, our method starts with an array of stochastically loaded atoms to be rearranged into a desired configuration. However, rather than relying on imaging to determine the initial site occupations, atoms are rearranged in an autonomous manner. For clarity, we will leave further motivation and connections to related concepts such as Thouless pumping, algorithmic cooling and autonomous stabilization for later (Section V) and proceed immediately with a description of our proposed mechanism.

## II Protocol Overview

Figure 1: a. Conceptual overview of the method. An array of lattice sites, consisting of target (red circles) and reservoir (no circles) sites, is initially loaded in a stochastic manner. After subsequent cycles of directional tunneling into target sites and between reservoir sites, interleaved with dissipative cooling steps, target sites within the array are occupied. Atoms remaining in the reservoir sites are then removed, leaving a tailored low-entropy configuration. b. Basic building block of the implementation. i. An optical potential consisting of two neighboring wells is imbalanced to create a tunneling resonance between the ground state of the reservoir site (right well) and the first motionally excited state of the target site (left well). An atom occupying the reservoir site will then tunnel into an empty target site. ii. If an atom already occupies the target site, contact interactions shift the tunneling resonance, preventing double occupation of the target site. iii. After tunneling, ground-state laser cooling is applied to return atoms to the ground state of their respective sites, providing dissipation and preventing future tunneling away from the target site.

The goal of our protocol is to transfer atoms into specific “target” sites of an optical potential from stochastically loaded “reservoir” sites by repeated application of irreversible “shift” operations, after which point atoms remaining in reservoir sites are removed (Fig. 1a). The shift operation can be understood in a simple subsystem consisting of a single target site and a single reservoir site. If initially the target site is empty and the reservoir site is occupied by an atom, the atom is transferred from the reservoir site to the target site. For all other configurations, the state remains unchanged.

Our proposed implementation for the shift operation is shown in Fig. 1b. We assume that either zero or one atom initially occupies the motional ground state (denoted by $n=0$) of each of two optical potential wells, defining a reservoir and target site. Excited motional states are assumed to be empty. The $n=0$ state of the reservoir well is then brought onto resonance with the first excited motional state in the target well ($n=1$), allowing the atom to tunnel between the two wells at a rate that we label $J_{01}$. The potential could either be held in this resonant configuration for a time $\pi/J_{01}$, or the relative depth of the wells could be adiabatically ramped through resonance to gain insensitivity to offset errors at the expense of reduced speed. In either case, an atom occupying the reservoir site will tunnel into the target site. In order to prevent double occupation of the target site, the contact interaction $U_{01}$ associated with atoms in the $n=0$ and $n=1$ states of the same site must exceed the tunneling rate $J_{01}$.
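At the level of site occupations, the idealized shift operation therefore implements a simple, irreversible update rule. The following sketch (our own, abstracting away the tunneling dynamics entirely) records it as a truth table:

```python
def shift(target_occupied: bool, reservoir_occupied: bool) -> tuple:
    """One idealized shift operation on a target/reservoir pair.

    An atom moves from the reservoir to the target site only if the
    target is empty; the interaction shift blocks tunneling when both
    sites are filled, and ground-state cooling makes the outcome
    irreversible.
    """
    if reservoir_occupied and not target_occupied:
        return True, False  # atom transferred into the target site
    return target_occupied, reservoir_occupied  # all other cases unchanged
```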
This interaction shift ensures that tunneling from the reservoir to the target site is not resonant if both sites are initially occupied, so the atom in the reservoir well remains in place. To make the process non-reversible – to prevent the atom from tunneling out of the target site on a later cycle – the shift operation concludes by subjecting all atoms to ground-state cooling.

In general, a target site can be coupled to several reservoir sites. In this case, one of the atoms (if present) in any of the reservoir sites may tunnel into the target site. Unlike the two-site case, this process is inherently probabilistic, depending on the relative degeneracy of the configurations with occupied and unoccupied target sites (see appendix C for details). However, by sufficient repetition of the shift protocol, the target sites can still be loaded with high probability.

Figure 2: Applications to different geometries. a. Target sites can be defined within a two-dimensional lattice by applying local optical potentials to these sites, for example with optical tweezers. Shift operations transfer atoms into target sites from neighboring reservoir sites. b. The same concept can be applied to the filling of layers within a three-dimensional lattice. Here, shift operations are applied between layers, and tunneling within layers enables reservoir atoms to find empty target sites. c. A continuous region can be filled by repeatedly shifting atoms towards a potential barrier, represented by the gray region on the left of the array. In all protocols, atoms are removed from the reservoir sites after filling of the target sites or layers is complete (see appendix D).

Figure 3: Protocol for filling uniform regions without additional dimensions. By alternately shifting even and odd pairs of atoms, atoms are shuffled to one side, where they bunch up against a barrier (barrier not pictured). The shift operations are implemented in parallel by superimposing an optical lattice (red lines) with an auxiliary potential (blue lines) that both creates an imbalance between neighboring wells and pushes down the barrier in the combined potential (black lines), enabling rapid tunneling. The phase of the additional potential is shifted relative to the lattice to shift even (upper) or odd (lower) pairs. Right: The shuffle operation can easily be extended to two dimensions simply by applying shuffle operations to many rows in parallel. Shuffling of each row is independent of the others.

## III Specific implementations

Variations on this simple building block can be used in different geometrical configurations, for example to fill arbitrary sites within a two-dimensional plane, or to fill target planes within a three-dimensional lattice. It can also be used in more complex rearrangement algorithms, for example to fill contiguous regions within one- or two-dimensional lattices. In this section we briefly describe three such variations before turning to a more detailed analysis of their performance.

The conceptually simplest application of our rearrangement protocol can be used to fill arbitrary target sites of a two-dimensional lattice, as illustrated in Fig. 2a. In this approach, a background lattice defines a two-dimensional plane of sites among which the atoms can tunnel, and an auxiliary potential (generated using optical tweezers or superlattices) is applied to the target sites. All other sites are considered reservoir sites.
The shift operation is implemented by increasing the depth of the auxiliary potential, adiabatically ramping the energy of the $n=1$ state of the target sites across the $n=0$ states of the reservoir sites. Importantly, the reservoir sites are assumed to be degenerate in energy, so atoms can also tunnel between them. This serves the important function of mixing the reservoir atoms, so that if a given target site initially lacked neighboring atoms, it may attain them in future cycles. During the ground-state cooling phase, the depth of the background lattice is increased, localizing each atom on its lattice site, the auxiliary potential is decreased, and all atoms are cooled to $n=0$.

The same concept can be used to fill target planes instead of target sites, as shown in Fig. 2b. In this case, the background lattice is three-dimensional, and the auxiliary potential shifts the energy of entire planes. The plane shifted to the lowest energy becomes the target plane, and its neighbors the reservoir. The protocol then proceeds as before, with atoms repeatedly shifted into the target plane. Again, we assume that the reservoir sites in a single plane are degenerate, so tunneling within the planes serves to randomize the location of atoms, enabling high-probability loading of the target plane.

The two preceding variations rely on a high connectivity of target sites to reservoir sites, and of reservoir sites to each other. This can be limiting if one wishes to prepare a state that is uniformly filled without using additional lattice dimensions. To get around this limitation, a procedure of repeated directional shifts can be used to shuffle atoms into a target region of a lattice, as shown in Fig. 2c. This protocol fundamentally operates in a single dimension, but can be used to generate two- or three-dimensional regions through simultaneous application to all rows in the array. With this in mind, we consider a single row within a three-dimensional optical lattice, which is tightly confining in the directions orthogonal to its length. To create a uniformly filled section of the lattice, a potential barrier is applied to one end of a desired region (here, the left end), and shift operations are applied simultaneously to all non-overlapping pairs of lattice sites whose left well has an even index (even pairs), alternating with shift operations applied to all of the pairs whose left well has an odd index (odd pairs), as shown in Fig. 3. We label a pair of such shift operations a shuffle. Repeated shuffles cause the atoms to move to the left until they encounter either the barrier or other atoms, at which point further tunneling is suppressed either by the barrier or by the interaction potential. After this point, atoms beyond a certain distance from the barrier can be removed from the sample, leaving a uniformly filled region of the lattice (appendix D). Because the shuffling operations can be applied to all rows of a 2D array simultaneously and independently, we expect the characteristic time and error rates when filling an array of $M$ rows and $N$ columns to be the same as those for a single one-dimensional array of length $N$.
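The shuffle dynamics can be captured at the occupation level with a simple Monte Carlo, in the spirit of the classical simulations used for the performance estimates in the next section (a sketch of our own; function and parameter names are illustrative):

```python
import random

def shuffle_fill(n_sites, n_target, F=1.0, eps=0.0, p_init=0.5, cycles=100):
    """Occupation-level Monte Carlo of the 1D shuffle protocol.

    Sites are indexed left to right, with a hard barrier to the left of
    site 0. Each shuffle applies shift operations first to even pairs
    (left well at an even index), then to odd pairs: an atom moves one
    site left if the destination is empty, succeeding with probability F.
    Each atom is lost with probability eps per shuffle. Returns the
    filling fraction of the leftmost n_target sites.
    """
    occ = [random.random() < p_init for _ in range(n_sites)]
    for _ in range(cycles):
        for parity in (0, 1):  # even pairs, then odd pairs
            for left in range(parity, n_sites - 1, 2):
                if occ[left + 1] and not occ[left] and random.random() < F:
                    occ[left], occ[left + 1] = True, False
        occ = [a and random.random() >= eps for a in occ]  # per-cycle loss
    return sum(occ[:n_target]) / n_target

# With F = 1 and no loss, a length-8 target region typically fills within
# roughly N + log2(N) ~ 11 shuffles (see Section IV)
print(shuffle_fill(n_sites=40, n_target=8, cycles=11))
```

Because rows shuffle independently, the same routine describes each row of a two-dimensional array.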
This row-wise independence represents a key strength of this technique – while filling a single one-dimensional array takes a relatively large number of steps, because the atoms are only moved a single site at a time, the shift operations can be applied in parallel to all atoms at the same time, leading to the favorable scaling with respect to the total number of atoms in the array, especially when $M$ exceeds $N$.

## IV Protocol Performance

Figure 4: Effects of errors on the target site (left column) and layer filling (right column) protocols. Red (black) points represent 50% initial filling, with $F=1$ ($0.9$). Blue points represent 90% initial filling, with $F=1$. a. Filling fraction $1-\xi$ versus number of cycles. Vertical lines represent the number of steps $n_{99}$ required to reach 99% filling for each condition. Loss is not included. Inset: effect of tunneling fidelity on $n_{99}$. For moderate infidelities, the effect is minor. The gray line is the intuitive approximation discussed in the text. b. Minimum hole fraction $\xi$ versus per-cycle loss, for the same conditions as (a).

The effectiveness of our protocol can be quantified in different situations by the number of cycles required to produce a desired filling (which would dictate experimental cycle times), and by the error rate in the final configuration. We quantify the former in terms of $n_{99}$, the typical number of cycles to reach 99% filling, as such a filling would enable assembly of perfect arrays of sizes that can be difficult to simulate classically. We define the error rate $\xi$ as the minimum achievable fraction of empty target sites. Both $n_{99}$ and $\xi$ may be limited both by fundamental aspects of the protocol and by experimental imperfections. Fundamental limits include the probabilistic distribution of atoms in reservoir sites, and the probabilistic nature of tunneling into a target site that has more than one neighboring reservoir site. Experimental imperfections can be broken into two categories: fidelity errors and loss errors. Fidelity errors represent imperfect tunneling (for example due to imperfections in the optical potential or imperfect adiabatic transfers), leading to a probability of desired tunneling events reduced by a factor $F$ below its idealized value. Loss errors, quantified by the per-cycle atom loss probability $\epsilon$, represent atom loss from the system due to effects such as collisions with background gas due to imperfect vacuum, double occupancy of a lattice site, or imperfect ground state cooling. Detailed estimates for experimentally achievable fidelities and loss rates can be found in appendix B, indicating that fidelities $F\geq 0.9$ and loss $\epsilon\leq 0.01$ should be attainable.

For all variations of our protocol, we find through simulation that for small loss rates and infidelity, the minimum occupancy error $\xi\simeq\alpha\epsilon/F$. $\alpha$ then provides a figure of merit that can be used to assess the performance of a given protocol, target state, and initial filling fraction. Intuitively, $\alpha/F$ is related to the number of cycles required to repair a defect caused by loss. Because $\epsilon$ must be kept small for any useful application of our protocol, the effect of loss is minor in determining the timescale required to reach a quasi-equilibrium condition, and we compute $n_{99}$ (which is of course not a good metric for conditions where $\xi\geq 0.01$) in the absence of loss.
We estimate the performance of our protocols using a classical simulation of the whole system, with parameters informed by master-equation-based simulations performed on much smaller subsystems, details of which are provided in appendix C. We break each filling cycle into three steps: a filling step in which atoms may tunnel into an unoccupied target site from neighboring reservoir sites, a mixing step where the atoms are allowed to rearrange between reservoir sites (where applicable), and a loss step. In the experiment, filling and mixing could occur simultaneously, and be interleaved with ground state cooling (which is not represented in the simulation, as we explicitly track only site occupation, but we include the effects of imperfect cooling as effective loss).

To benchmark the performance of our first protocol variation, we consider a specific target state – a grid with quarter filling of the base lattice, as shown in Fig. 2a. When the filling fraction of the target state is much lower than the typical filling of stochastically loaded lattices, there is a high probability of having enough atoms to fill the target sites for systems of at least moderate size. The filling process for such a target state is shown in Fig. 4a. In the absence of imperfections, we find that $n_{99}=6$ for 50% initial filling. Very roughly, this indicates that on each cycle, the chance that a target site remains unoccupied is reduced by a factor of about 2 (though this factor changes slightly over the filling process as atoms are moved to the target sites, reducing the density of the reservoir). The effect of imperfect tunneling fidelity is to reduce this factor, leading to slower filling. For half-filling, we find that an intuitively derived formula, $n_{99}=\log_{1-P}(0.02)$, where $P=0.5F$ represents the chance of filling an unfilled target site on any cycle, provides good agreement with the results of our simulation.

The filling of layers is limited by factors conceptually similar to those that limit the filling of target sites. However, the number of neighbors connected to each target site, and the connectivity of the reservoir sites, are not the same for the two geometries, leading to quantitative differences. Further, because the performance of these protocols benefits from a high density of reservoir atoms, the layer filling configuration presents an opportunity: atoms can be brought into the reservoir planes that neighbor the target plane from farther away planes. This leads to a higher density of atoms in the reservoir planes that neighbor the target plane, allowing for more rapid filling of vacancies in the target plane. This both reduces the number of cycles $n_{99}$ required to fill the target plane, and allows errors associated with loss to be repaired more quickly, reducing the error rate $\xi$. As a specific example, we consider an implementation based on five layers, numbered one through five, that are initially stochastically loaded with half filling, with plane number three as the target plane. First, shift operations are applied simultaneously from plane one to two, and from five to four. This increases the density of atoms in planes two and four. After this, repeated shift operations are applied into plane three from both of its neighbors, until the desired filling fraction is reached. The filling performance of this protocol is shown in Fig. 4b, along with the same prediction for the effects of imperfect fidelity as for the target sites.
In this case, we find $n_{99}=7$ for perfect fidelity and 50% initial filling, and that for lower fidelities our intuitive prediction slightly underestimates the number of cycles required. This is because fewer atoms are transferred from planes one and five into the central three planes in the first cycle, reducing the density of available reservoir atoms. We note that this quantitative similarity with the target sites protocol is specific to the example configurations that we chose, and does not represent a general property of the two approaches.

For both benchmark cases described above (for target sites and target layers with 50% initial filling), we find from simulation that $\alpha\simeq 2.25$ (Fig. 4c, d, red points). This number roughly reflects the inverse of the probability that a defect is filled on a given filling cycle. It is influenced both by the availability of atoms in neighboring reservoir sites, and by the inherently probabilistic nature of filling when a target site is coupled to multiple reservoir sites (see appendix C). As mentioned above, these numbers can be dramatically improved by increasing the density of atoms in the reservoir states (Fig. 4c, d, blue points), either by shifting atoms into those sites, or through protocols that can lead to filling fractions significantly above 50% Grunzweig _et al._ (2010); Lester _et al._ (2015); Brown _et al._ (2019).

The filling process of a one-dimensional sub-array under our shuffling protocol is shown in Fig. 5. In the absence of errors (Fig. 5a, b), and starting from an initial randomly half-filled array long enough to contain at least $N$ atoms with high probability, we make an informed guess that the typical number of shuffles required to fill a sub-array of length $N$ is well approximated by $n_{99}=N+C(N)$, where $C(N)$ is a logarithmic correction factor. This guess is motivated by thinking of holes being shuffled out of the sub-array – each hole can move up to one site per cycle (either even or odd shuffle), which contributes the linear term. However, a hole cannot begin to move until there is an adjacent atom to fill it, so if we start with several adjacent holes, the one that must move the farthest does not begin to move until its neighbors have been filled. Because the typical longest run of adjacent holes within the sub-array should scale with $\log_{2}(N)$ Schilling (1990), we expect this to set the scale of $C(N)$, and find empirically that $C(N)=\log_{2}(N)$ indeed provides reasonable agreement with simulation (Fig. 5b).

In the shuffle protocol, the per-site error rate increases with the length of the region being filled. For a target region of $N$ by $M$ lattice sites, where shuffling is performed in the direction whose size is $N$, we find from simulation that the achievable error rate is given approximately by $\xi\simeq N\epsilon/2F$, corresponding to $\alpha=N/2$. Intuitively, this can be understood because a hole in the target region must be moved $N/2$ sites on average, requiring $N/2F$ shuffling cycles (Fig. 5c, d). Note again that while the performance scales unfavorably with $N$, the number of columns in the target region, it is independent of the number of rows $M$. As a benchmark case, given per-shift losses of $\epsilon=0.01$, one could expect to achieve perfect filling of a five-by-five array in approximately half of attempts.
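Both benchmark numbers quoted above can be reproduced directly from the stated formulas (a quick check of our own, using only quantities given in the text):

```python
from math import log

# Target-sites protocol: cycles to reach 99% filling when each cycle
# fills an empty target site with probability P = 0.5 F
F = 1.0
P = 0.5 * F
print(log(0.02) / log(1 - P))   # ~5.64, i.e. 6 cycles, matching n99 = 6

# Shuffle protocol: 5-by-5 target region with eps = 0.01 and F = 1
N, M, eps = 5, 5, 0.01
xi = N * eps / (2 * F)          # expected per-site hole fraction, ~0.025
print((1 - xi) ** (N * M))      # ~0.53: a perfect array in about half of attempts
```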
In experiments where perfect arrays are not needed (for example, for those targeting entanglement-enhanced metrology Van Damme _et al._ (2020)), the system size could be dramatically increased.

Figure 5: Shuffling performance in the idealized case and with imperfect fidelity and loss. a. Example of lattice site occupation versus number of shuffling cycles, with perfect tunneling and no loss. Each cycle consists of either an even or an odd shift. After an initial traffic jam, the atoms shuffle at a rate of one site per step until they are blocked by other atoms. b. Number of cycles required to reach 99% filling ($n_{99}$) for different target region lengths $N$. The dashed black line is the prediction from the text, and the dashed grey line is $N$. c. Example of lattice site occupation versus number of shuffling cycles in the presence of finite shift fidelity ($F=0.9$) and loss per cycle ($\epsilon=0.01$). d. Prevalence of holes in the final array versus loss $\epsilon$ and $N$.

## V Discussion and outlook

So far, we have described several ways in which low-entropy arrays of atoms may be autonomously assembled from stochastically populated initial conditions, potentially combining advantages of evaporative techniques and those based on measurement and rearrangement. Because entropy is removed through the cooling process, rather than imaging, our techniques are suitable for systems where imaging may have limited fidelity or be accompanied by loss, for example in atomic species with complex level structures or in systems with small lattice spacings. This may make these techniques especially suitable for systems where preparing atoms in the motional ground state of a short-wavelength optical lattice is already required, such as those seeking to explore Hubbard physics. Further, because the definition of the target sites can be chosen at will (up to certain constraints), this approach could be useful for preparing initial conditions for Hubbard-regime experiments that require or benefit from arbitrary, non-uniform starting conditions, such as sampling problems Muraleedharan _et al._ (2019) or quantum walk experiments Kempe (2003), or for simulators of spin models Browaeys and Lahaye (2020); Weiss and Saffman (2017), where interactions can be mediated by Rydberg interactions with range larger than the lattice spacing.

There are some similarities between our method and previously proposed and demonstrated methods for algorithmic cooling, in that both rely on contact interactions between atoms in a lattice to allow or prevent coherent transitions that ultimately enable the reduction of filling errors Rabl _et al._ (2003); Popp _et al._ (2006); Bakr _et al._ (2011); Tichy _et al._ (2012). However, algorithmic cooling relies on overfilling the lattice and then removing the excess atoms, so it typically requires an evaporatively prepared sample to begin with. In contrast, because our method works by irreversibly moving atoms towards the target sites, it is specifically suitable for the relatively sparse initial fillings typically associated with direct laser cooling into micron-scale optical potentials. The directional tunneling employed here is similar to the topological charge pumping employed in a Thouless pump, which has been demonstrated to be highly robust to experimental imperfections Romero-Isart and Garcia-Ripoll (2007); Lohse _et al._ (2016); Nakajima _et al._ (2016); Koepsell _et al._ (2020).
However, the addition of ground-state cooling makes the pumping irreversible, enabling the compression of particles in the sites of an optical lattice and the removal of spatial entropy. Our scheme also bears resemblance to autonomous quantum stabilizers for bosonic systems, for example those used to stabilize highly correlated photonic states in arrays of superconducting qubits Ma _et al._ (2017, 2019). With suitable modifications, the basic ideas behind our approach could also be used to stabilize analogous atomic systems against the effects of atom loss, and to provide a well-controlled means of environmental coupling. This will be a direction of future exploration, and may benefit from the use of even narrower transitions, such as the clock transitions in alkaline earth atoms. In a very recent proposal Sharma and Mueller (2021), a related mechanism is explored to autonomously assemble and stabilize atoms within optical lattices, with dissipation provided by interaction with an atomic bath, rather than laser cooling as is used here. If extended to molecules in tweezers or optical lattices Chotia _et al._ (2012); Anderegg _et al._ (2019); Cairncross _et al._ (2021), variations on the method proposed here could be highly advantageous for increasing the filling fractions and reducing disorder, as nondestructive imaging of molecules is challenging.

## VI Acknowledgements

M.A.N. thanks Adam Kaufman and Hannes Pichler for insightful discussions and feedback on the manuscript. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska‐Curie grant agreement No 801110 and the Austrian Federal Ministry of Education, Science and Research (BMBWF). It reflects only the author’s view; the EU Agency is not responsible for any use that may be made of the information it contains.

## References

* Gross and Bloch (2017) C. Gross and I. Bloch, “Quantum simulations with ultracold atoms in optical lattices,” Science 357, 995–1001 (2017).
* Browaeys and Lahaye (2020) A. Browaeys and T. Lahaye, “Many-body physics with individually controlled Rydberg atoms,” Nature Physics, 1–11 (2020).
* Weiss and Saffman (2017) D. Weiss and M. Saffman, “Quantum computing with neutral atoms,” Physics Today 70 (2017).
* Gil _et al._ (2014) L. I. R. Gil, R. Mukherjee, E. M. Bridge, M. P. A. Jones, and T. Pohl, “Spin squeezing in a Rydberg lattice clock,” Phys. Rev. Lett. 112, 103601 (2014).
* Chiu _et al._ (2018) C. Chiu, G. Ji, A. Mazurenko, D. Greif, and M. Greiner, “Quantum state engineering of a Hubbard system with ultracold fermions,” Phys. Rev. Lett. 120, 243201 (2018).
* Kantian _et al._ (2018) A. Kantian, S. Langer, and A. Daley, “Dynamical disentangling and cooling of atoms in bilayer optical lattices,” Phys. Rev. Lett. 120, 060401 (2018).
* Yang _et al._ (2020) B. Yang, H. Sun, C.-J. Huang, H.-Y. Wang, Y. Deng, H.-N. Dai, Z.-S. Yuan, and J.-W. Pan, “Cooling and entangling ultracold atoms in optical lattices,” Science (2020).
* Schlosser _et al._ (2001) N. Schlosser, G. Reymond, I. Protsenko, and P. Grangier, “Sub-Poissonian loading of single atoms in a microscopic dipole trap,” Nature 411, 1024–1027 (2001).
* Weiss _et al._ (2004) D. S. Weiss, J. Vala, A. V. Thapliyal, S. Myrgren, U. Vazirani, and K. B. Whaley, “Another way to approach zero entropy for a finite system of atoms,” Phys. Rev. A 70, 040302(R) (2004).
* Miroshnychenko _et al._ (2006) Y. Miroshnychenko, W. Alt, I. Dotsenko, L. Förster, M. Khudaverdyan, D. Meschede, D. Schrader, and A.
Rauschenbeutel, “An atom-sorting machine,” Nature 442, 151–151 (2006).
* Kim _et al._ (2016) H. Kim, W. Lee, H. Lee, H. Jo, Y. Song, and J. Ahn, “In situ single-atom array synthesis using dynamic holographic optical tweezers,” Nat. Comm. 7, 1–8 (2016).
* Barredo _et al._ (2016) D. Barredo, S. de Léséleuc, V. Lienhard, T. Lahaye, and A. Browaeys, “An atom-by-atom assembler of defect-free arbitrary two-dimensional atomic arrays,” Science 354, 1021–1023 (2016).
* Endres _et al._ (2016) M. Endres, H. Bernien, A. Keesling, H. Levine, E. R. Anschuetz, A. Krajenbrink, C. Senko, V. Vuletić, M. Greiner, and M. D. Lukin, “Atom-by-atom assembly of defect-free one-dimensional cold atom arrays,” Science 354, 1024–1027 (2016).
* Kumar _et al._ (2018) A. Kumar, T.-Y. Wu, F. Giraldo, and D. S. Weiss, “Sorting ultracold atoms in a three-dimensional optical lattice in a realization of Maxwell’s demon,” Nature 561, 83–87 (2018).
* Dick (1987) G. Dick, “Local oscillator induced instabilities in trapped ion frequency standards,” Proc. of Precise Time and Time Interval, 133–147 (1987).
* Grunzweig _et al._ (2010) T. Grunzweig, A. Hilliard, M. McGovern, and M. F. Andersen, “Near-deterministic preparation of a single atom in an optical microtrap,” Nat. Phys. 6, 951 (2010).
* Lester _et al._ (2015) B. J. Lester, N. Luick, A. M. Kaufman, C. M. Reynolds, and C. A. Regal, “Rapid production of uniformly filled arrays of neutral atoms,” Phys. Rev. Lett. 115, 073003 (2015).
* Brown _et al._ (2019) M. O. Brown, T. Thiele, C. Kiehl, T.-W. Hsu, and C. A. Regal, “Gray-molasses optical-tweezer loading: Controlling collisions for scaling atom-array assembly,” Phys. Rev. X 9, 011057 (2019).
* Schilling (1990) M. F. Schilling, “The longest run of heads,” The College Mathematics Journal 21, 196–207 (1990).
* Van Damme _et al._ (2020) J. Van Damme, X. Zheng, M. Saffman, M. G. Vavilov, and S. Kolkowitz, “Impacts of random filling on spin squeezing via Rydberg dressing in optical clocks,” arXiv preprint arXiv:2010.04776 (2020).
* Muraleedharan _et al._ (2019) G. Muraleedharan, A. Miyake, and I. H. Deutsch, “Quantum computational supremacy in the sampling of bosonic random walkers on a one-dimensional lattice,” New Journal of Physics 21, 055003 (2019).
* Kempe (2003) J. Kempe, “Quantum random walks: an introductory overview,” Contemporary Physics 44, 307–327 (2003).
* Rabl _et al._ (2003) P. Rabl, A. J. Daley, P. O. Fedichev, J. I. Cirac, and P. Zoller, “Defect-suppressed atomic crystals in an optical lattice,” Phys. Rev. Lett. 91, 110403 (2003).
* Popp _et al._ (2006) M. Popp, J.-J. Garcia-Ripoll, K. G. Vollbrecht, and J. I. Cirac, “Ground-state cooling of atoms in optical lattices,” Phys. Rev. A 74, 013622 (2006).
* Bakr _et al._ (2011) W. S. Bakr, P. M. Preiss, M. E. Tai, R. Ma, J. Simon, and M. Greiner, “Orbital excitation blockade and algorithmic cooling in quantum gases,” Nature 480, 500–503 (2011).
* Tichy _et al._ (2012) M. C. Tichy, K. Mølmer, and J. F. Sherson, “Shaking the entropy out of a lattice: Atomic filtering by vibrational excitations,” Phys. Rev. A 86, 033618 (2012).
* Romero-Isart and Garcia-Ripoll (2007) O. Romero-Isart and J. J. Garcia-Ripoll, “Quantum ratchets for quantum communication with optical superlattices,” Phys. Rev. A 76, 052304 (2007).
* Lohse _et al._ (2016) M. Lohse, C. Schweizer, O. Zilberberg, M. Aidelsburger, and I. Bloch, “A Thouless quantum pump with ultracold bosonic atoms in an optical superlattice,” Nat. Phys. 12, 350–354 (2016).
* Nakajima _et al._ (2016) S. Nakajima, T. Tomita, S. Taie, T. Ichinose, H. Ozawa, L. Wang, M. Troyer, and Y. Takahashi, “Topological Thouless pumping of ultracold fermions,” Nat. Phys. 12, 296–300 (2016).
* Koepsell _et al._ (2020) J. Koepsell, S. Hirthe, D. Bourgund, P. Sompet, J. Vijayan, G. Salomon, C. Gross, and I. Bloch, “Robust bilayer charge pumping for spin- and density-resolved quantum gas microscopy,” arXiv preprint arXiv:2002.07577 (2020).
* Ma _et al._ (2017) R. Ma, C. Owens, A. Houck, D. I. Schuster, and J. Simon, “Autonomous stabilizer for incompressible photon fluids and solids,” Phys. Rev. A 95, 043811 (2017).
* Ma _et al._ (2019) R. Ma, B. Saxberg, C. Owens, N. Leung, Y. Lu, J. Simon, and D. I. Schuster, “A dissipatively stabilized Mott insulator of photons,” Nature 566, 51–57 (2019).
* Sharma and Mueller (2021) V. Sharma and E. Mueller, “Driven-dissipative control of cold atoms in tilted optical lattices,” arXiv preprint arXiv:2101.00547 (2021).
* Chotia _et al._ (2012) A. Chotia, B. Neyenhuis, S. A. Moses, B. Yan, J. P. Covey, M. Foss-Feig, A. M. Rey, D. S. Jin, and J. Ye, “Long-lived dipolar molecules and Feshbach molecules in a 3D optical lattice,” Phys. Rev. Lett. 108, 080405 (2012).
* Anderegg _et al._ (2019) L. Anderegg, L. W. Cheuk, Y. Bao, S. Burchesky, W. Ketterle, K.-K. Ni, and J. M. Doyle, “An optical tweezer array of ultracold molecules,” Science 365, 1156–1158 (2019).
* Cairncross _et al._ (2021) W. B. Cairncross, J. T. Zhang, L. R. B. Picard, Y. Yu, K. Wang, and K.-K. Ni, “Assembly of a rovibrational ground state molecule in an optical tweezer,” arXiv preprint arXiv:2101.03168 (2021).
* Martinez de Escobar _et al._ (2008) Y. N. Martinez de Escobar, P. G. Mickelson, P. Pellegrini, S. B. Nagel, A. Traverso, M. Yan, R. Côté, and T. C. Killian, “Two-photon photoassociative spectroscopy of ultracold ${}^{88}\mathrm{Sr}$,” Phys. Rev. A 78, 062708 (2008).
* Norcia _et al._ (2018) M. A. Norcia, A. W. Young, and A. M. Kaufman, “Microscopic control and detection of ultracold strontium in optical-tweezer arrays,” Phys. Rev. X 8, 041054 (2018).
* Cooper _et al._ (2018) A. Cooper, J. P. Covey, I. S. Madjarov, S. G. Porsev, M. S. Safronova, and M. Endres, “Alkaline-earth atoms in optical tweezers,” Phys. Rev. X 8, 041055 (2018).
* Sebby-Strabley _et al._ (2006) J. Sebby-Strabley, M. Anderlini, P. S. Jessen, and J. V. Porto, “Lattice of double wells for manipulating pairs of cold atoms,” Phys. Rev. A 73, 033605 (2006).
* Bloch _et al._ (2008) I. Bloch, J. Dalibard, and W. Zwerger, “Many-body physics with ultracold gases,” Rev. Mod. Phys. 80, 885–964 (2008).
* Wineland _et al._ (1987) D. J. Wineland, W. M. Itano, J. C. Bergquist, and R. G. Hulet, “Laser-cooling limits and single-ion spectroscopy,” Phys. Rev. A 36, 2220 (1987).

## Appendix A Experimental Implementation

The success of this method relies on strong contact interactions between atoms to prevent multiple-occupancy errors, and a level structure compatible with high-fidelity ground-state laser cooling. One atomic species of interest that meets these criteria is $^{86}$Sr. $^{86}$Sr is a boson (for fermionic species, the contact interaction vanishes for atoms of the same spin state in different motional states of the same well, so our protocol would not be applicable without somehow using multiple spin states), and has an unusually high s-wave scattering length of roughly 800 times the Bohr radius Martinez de Escobar _et al._ (2008), leading to large interaction shifts $U_{01}$.
Strontium also has a narrow-linewidth optical transition with a linewidth of 7.5 kHz, suitable for ground-state sideband cooling Norcia _et al._ (2018); Cooper _et al._ (2018). Other bosonic species could be used as well, provided that strong enough interactions can be achieved, for example with magnetic Feshbach resonances, and that ground-state cooling is possible, for example using Raman sideband cooling.

In practice, the optical potentials could be implemented in several ways, with appropriate modifications depending on the variant of the protocol. In any case, we assume the atoms to be confined in a three-dimensional optical lattice potential, whose depth can be tuned independently in all three directions. This defines the “base” potential. The ability to tune the lattice depths independently enables one to increase the confinement along directions where tunneling is not required, which increases the interaction strength for two particles occupying the same site. For the different protocols, we assume that the appropriate number of planes within the base potential are initially loaded, or that atoms remaining in undesired planes can later be removed. On top of the base potential is superimposed an auxiliary potential, whose form depends on the variant of the protocol to be implemented.

For loading target sites within a two-dimensional plane, the auxiliary potential could simply consist of tightly focused optical tweezers, incident from a direction orthogonal to the plane and projected through a high-numerical-aperture optical system. These serve both to create the required offset between target sites and neighboring reservoir sites and, by choice of the tweezer spot size, to push down the potential barriers to enable faster tunneling.

For loading target planes, the role of the auxiliary potential is simply to shift the energy of the target plane relative to its neighbors, and potentially of those neighbors relative to their outward neighbors, if initial shift operations are applied to increase the reservoir density. Practically, this could be implemented by superimposing onto the base lattice a long-wavelength standing-wave auxiliary lattice formed from two beams intersecting at a small angle. The planes of the base lattice that experience the lowest energy potential from the auxiliary lattice define the target planes. Favorable initial conditions could be achieved by loading a single plane of a variable-wavelength “accordion” auxiliary lattice at long wavelength, then transferring these atoms into several planes of the base lattice prior to ground-state cooling and parity projection on individual sites. The ratio of the wavelengths of the auxiliary and base lattices then defines how many planes of atoms the target plane has available to draw from.

Finally, for the shuffle protocol, the auxiliary potential consists of a non-sinusoidal potential with twice the period of the base potential along the tunneling direction, and a variable relative phase with respect to the base potential. The auxiliary potential could be created either by projecting a pattern onto the lattice using a high-numerical-aperture optical system, or by combining two polarizations of light in a bowtie-configuration lattice Sebby-Strabley _et al._ (2006).
In either case, the wells of the auxiliary potential can be aligned to the base potential with a phase such that they both create an offset between selected neighboring wells and push down the potential barrier between the wells to facilitate tunneling Sebby-Strabley _et al._ (2006). In addition to these two periodic potentials, we assume an additional potential to be present that creates a wall on the left side of the array (for example by pushing a specific potential well off resonance), and a linear potential gradient applied along the lattice to avoid additional unwanted resonances.

## Appendix B Estimates of experimental parameters

Here, we provide estimates for the relevant parameters achievable in a realistic experimental system. As a benchmark case, we consider $^{86}$Sr atoms confined within a base lattice formed by retro-reflection of 813 nm light, as this is the “magic” wavelength that causes zero differential shift to the strontium clock transition. While this feature is not required for our protocol, it makes it a desirable wavelength for other applications of a strontium system.

Tunneling rate: The tunneling rate $J_{01}$ between the ground state of one well and the first excited state of its neighbor is a critical quantity for the success of this protocol. Analytical expressions exist for the tunneling rates within a given band of a uniform lattice, but to our knowledge not for our nonuniform inter-band situation. We thus calculate the tunneling rate numerically for our desired potential landscape by simply integrating the Schrödinger equation. We find the ratio of $J_{01}$ to $J_{00}$ to be approximately four for depths of the base lattice of at least 10 $E_{R}$, where $E_{R}=\hbar^{2}k^{2}/2m$. Here, $k$ is the wavenumber of the light used to form the lattice, and $m$ is the mass of the atoms. An analytical expression exists for $J_{00}$ Bloch _et al._ (2008):

$$J_{00}\simeq\frac{4}{\sqrt{\pi}}E_{R}\left(\frac{V}{E_{R}}\right)^{3/4}\mathrm{exp}\left[-2\left(\frac{V}{E_{R}}\right)^{1/2}\right] \tag{S1}$$

where $V$ is the depth of the lattice. For our benchmark system with a lattice depth of $V=13E_{R}$ (corresponding to a harmonic oscillator frequency of $\omega=2\pi\times 23$ kHz), we calculate a tunneling rate $J_{01}\simeq 2\pi\times 175$ Hz. This can easily be made slower by increasing the depth of the lattice. In principle, it could be made faster as well in a shallower lattice, but after applying the required shift to neighboring wells, the potential becomes substantially distorted at the energy of the occupied states, and our expressions are no longer valid.

Interaction shifts: The interaction shifts that prevent double occupation of sites in our protocols result from the contact interaction between atoms in the first motional excited state and the ground state. These shifts (under the assumption of unperturbed eigenstates) are approximated by:

$$U_{01}=\frac{1}{2}\sqrt{\frac{8}{\pi}}kaE_{R}\left(\frac{V_{x}}{E_{R}}\right)^{1/4}\left(\frac{V_{y}}{E_{R}}\right)^{1/4}\left(\frac{V_{z}}{E_{R}}\right)^{1/4} \tag{S2}$$

where $V_{i}$ represents the depth of the lattice in the $i$th direction. The factor of $1/2$ multiplying this expression comes from the finite overlap between the ground state and first excited state wavefunctions ($U_{00}$ is a factor of 2 larger). For our benchmark system ($a$ = 800 $a_{0}$), in cases where the tunneling occurs along a single direction (shuffle and layer protocols), depths of $V_{x},\ V_{y},\ V_{z}$ = 13, 30, 30 $E_{R}$ give an interaction shift $U_{01}=2\pi\times 10$ kHz.
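These benchmark values follow from Equations (S1) and (S2) with a few lines of arithmetic (a sketch of our own; the factor-of-four ratio of $J_{01}$ to $J_{00}$ is the numerical result quoted above, and small discrepancies from the quoted values reflect that approximation):

```python
import math

hbar = 1.0545718e-34              # J s
h = 6.62607015e-34                # J s
a0 = 5.29177e-11                  # Bohr radius, m
m = 85.909 * 1.66054e-27          # mass of 86Sr, kg
k = 2 * math.pi / 813e-9          # lattice wavenumber for 813 nm light, 1/m
ER = hbar**2 * k**2 / (2 * m)     # recoil energy, J
print("E_R/h  = %.2f kHz" % (ER / h / 1e3))    # ~3.5 kHz

# Ground-band tunneling, Eq. (S1), at V = 13 E_R
V = 13.0
J00 = (4 / math.sqrt(math.pi)) * ER * V**0.75 * math.exp(-2 * math.sqrt(V))
J01 = 4 * J00  # inter-band rate, using the ratio of ~4 from the numerics
print("J01/h  = %.0f Hz" % (J01 / h))          # ~160 Hz; text quotes 2*pi x 175 Hz

# Interaction shift, Eq. (S2), with a = 800 a0 and depths 13, 30, 30 E_R
a = 800 * a0
U01 = 0.5 * math.sqrt(8 / math.pi) * k * a * ER * (13 * 30 * 30) ** 0.25
print("U01/h  = %.1f kHz" % (U01 / h / 1e3))   # ~9.5 kHz; text quotes 2*pi x 10 kHz
```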
When simultaneous tunneling in two dimensions is desired, the same interaction shift can be achieved using $V_{x},V_{y},V_{z}=13,13,70\;E_{R}$. For systems with more typical scattering lengths of $a\simeq 100\,a_{0}$, the lower interaction shifts could be at least partially compensated by increasing the depth of the lattices in the directions orthogonal to tunneling.

Tunneling fidelities: In principle, our protocols could be implemented purely with resonant tunneling: the depths of neighboring wells could be quickly brought into the desired resonance condition, then held there for a time $\pi/J_{01}$. This approach would have the advantage of speed, but suffers from a high sensitivity to the relative depths of the neighboring wells and requires precisely timed operations. For total depths of the optical potential near 100 $E_{R}$ (accounting for the tight confinement required in non-tunneling directions in order to ensure large interaction shifts), this would require neighboring wells to be balanced at the part-per-thousand level in total lattice depth. Because most of the intensity is associated with the base potential, which can be created using retro-reflected lasers, this may not be unreasonable. Roughly 10% of the total potential in this scenario would be contributed by the auxiliary potential, so the auxiliary potential would have to be controlled at the 1% level. The protocols will be substantially more robust, however, if an adiabatic ramp through resonance is used, so we assume this condition in all subsequent analysis. We numerically calculate the probability of an atom tunneling either into an empty well, or into an occupied well (represented by a shift of $U_{01}=2\pi\times 10$ kHz, as estimated above). For a linear sweep over a range of $10J_{01}$, centered about resonance, and over a duration $2\pi\times 5/J_{01}$ (roughly 30 ms for a tunneling rate $J_{01}\simeq 2\pi\times 175$ Hz), we expect a transfer fraction of approximately 95% into the empty well, and below $2\times 10^{-4}$ for the occupied well. Such ramps only require control of the total potential at the 1% level, and of the auxiliary potential at the 10% level. To compare with prior work, tunneling fidelities at the 99% level or higher have been demonstrated in the ground band of systems with optical superlattices Lohse _et al._ (2016); Yang _et al._ (2020); Koepsell _et al._ (2020). Given the additional requirements of speed and tunneling between bands for this proposal, we think that fidelities above 90% would be readily achievable in practice and would not present a major performance limitation.

Figure S1: Tunneling properties within the nearest-neighbor subsystem. a., b. The motional excited state into which the atoms tunnel is two-fold degenerate, with a node either along the vertical (a.) or horizontal (b.) axis. Atoms tunneling into the target site populate one of these two states, depending on whether they tunneled horizontally or vertically. During the tunneling process, an atom is thus confined to its original row or column, but can tunnel through the target site to the reservoir site directly opposite (not shown). c. The probability of a target site being filled depends on the number and configuration of filled neighboring reservoir sites. Here we show the unique filling possibilities (omitting those that are equivalent up to reflection or rotation of the system), along with the associated probability for filling the target site, assuming a perfect adiabatic ramp of the target site depth.
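The empty-well transfer estimate can be reproduced with a minimal two-level sketch of the adiabatic sweep. The two-level reduction of the full potential landscape and the solver settings below are our assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-level model of the adiabatic sweep: reservoir state coupled to the
# target state with strength J01, detuning ramped linearly through resonance.
J01 = 2 * np.pi * 175.0      # tunneling rate, rad/s
span = 10 * J01              # sweep range, centered on resonance
T = 2 * np.pi * 5 / J01      # sweep duration (~30 ms)

def rhs(t, y):
    # y packs Re/Im of the two amplitudes; H = [[-d/2, J01], [J01, d/2]].
    c = y[:2] + 1j * y[2:]
    d = -span / 2 + span * t / T
    dc = -1j * np.array([-d / 2 * c[0] + J01 * c[1],
                         J01 * c[0] + d / 2 * c[1]])
    return np.concatenate([dc.real, dc.imag])

sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, 0.0, 0.0], rtol=1e-9, atol=1e-12)
p = sol.y[1, -1] ** 2 + sol.y[3, -1] ** 2
print(f"transfer into empty well ~ {p:.2f}")  # close to the ~95% quoted above

# An occupied target well is detuned by U01 = 2pi x 10 kHz; replacing d with
# d + U01 leaves the crossing outside the sweep, suppressing the transfer.
```

The residual few percent comes from the finite sweep range: at the endpoints the bare and adiabatic states are not perfectly aligned, consistent with the ~95% figure above.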
Atom loss: Atom loss is the core limitation for our protocol. Atom loss can occur either directly, through collisions with background gas, or as a result of our rearrangement protocol, through light-assisted collisions with other atoms. The latter occurs if two atoms occupy the same site, which could be caused either by imperfect suppression of tunneling by the interaction shift, or by errors in ground-state cooling. The effect of background gas collisions simply depends on the duration of the shift cycle relative to the vacuum lifetime in the experiment. For our predicted tunneling rates of several hundred Hz, and the associated ramp times of a few tens of ms, we expect each cycle of the rearrangement to take well below 100 ms, as ground-state cooling and the changes in trap configuration for the cooling should be possible on a timescale of several milliseconds. As atomic lifetimes of 100 seconds or more are now somewhat common, we expect this loss to be at the part-per-thousand level per cycle; it could in principle be lower if resonant tunneling is used instead of adiabatic ramps.

Imperfect ground-state cooling can affect our protocol in several ways. If an atom in a target site begins a shift cycle in the $n=1$ state, it may tunnel into a neighboring empty reservoir state, leading to an effect similar to loss of the atom. More concerning would be the possibility of an atom tunneling into a well occupied by a neighbor, enabled by an accidental resonance between higher motional states (whose spacing becomes rapidly nonlinear with increasing $n$ due to the relatively shallow potentials required for appreciable tunneling rates), causing a double occupancy. The tendency for this to happen would likely depend sensitively on the exact optical potential used, but we consider the somewhat pessimistic case that imperfect ground-state cooling leads to the loss of the non-cooled atom. The fundamental limit on the performance of ground-state cooling is given by $\bar{n}=5(\gamma/\omega)^{2}/16$, where $\bar{n}$ represents the average number of motional quanta along a given direction, $\gamma$ is the decay linewidth used for cooling, and $\omega$ is the trap frequency Wineland _et al._ (1987). This expression assumes cooling from three directions in an isotropic trap. For realistic trap frequencies of $\omega=2\pi\times 100$ kHz during cooling, this implies that motional excitation should be limited to the level of a few parts per thousand ($\bar{n}\approx 2\times 10^{-3}$ for the 7.5 kHz cooling transition).

As described above, we find from simulation that the chance of an atom tunneling onto an occupied lattice site in a single rearrangement cycle can be below $2\times 10^{-4}$. Because this would lead to the correlated loss of both atoms, the effect of this loss differs by protocol. In the case of the shuffling protocol, both lost atoms belong to the target array, so the effect on the final filling fraction is larger than for single-atom loss. However, because the loss is correlated, if we are only interested in the probability of preparing a perfect array, the effect is similar to the loss of a single atom. In the target sites and target planes protocols, only one of the atoms is from the target site, so the loss of the second atom is far less consequential. In principle, it may be possible to avoid the loss associated with double occupation of lattice sites, as loss occurs only as a result of subsequent light-assisted collisions.
However, this would likely lead to a permanent double occupation of the site in the final configuration, so for simplicity, we assume here that light-assisted collisions occur after each shuffling step (either through deliberate application of photoassociation light or as a byproduct of ground-state cooling) and cause the loss of both atoms. While the loss rates from double occupation may be acceptable for these parameters, they could be improved in several ways. First, because of the quadratic scaling of the double-occupation probability with tunneling rate, a relatively minor reduction in tunneling rate (a deeper lattice) could dramatically reduce these errors, though at the expense of longer cycle times and greater sensitivity to lattice inhomogeneity. Further, it may be possible to modify the light-assisted collision step to preferentially eject only a single atom, for example by using blue-detuned light Grunzweig _et al._ (2010); Lester _et al._ (2015); Brown _et al._ (2019). Adding these effects together, and acknowledging that they are rough estimates that will vary depending on the details of the experimental implementation, these predictions indicate that a loss per cycle of less than 1% should be possible, so we use this value as a benchmark for estimating the likely performance of our protocols.

## Appendix C Simulating rearrangement protocols

A fully quantum simulation of our protocols on a system of interesting size would be very challenging, due to the exponentially large Hilbert space. However, because a dissipative step is present between each cycle of tunneling, we do not expect long-range entanglement to play a significant role. Thus, to estimate the performance of our protocols, we perform simulations of the quantum dynamics for small sub-systems, from which we extract parameters that are used as inputs for classical simulations of larger systems. In the case of the shuffle protocol, the necessary parameters for the classical simulation are simply the tunneling fidelity $F$ and the loss rate per cycle; the means for estimating these are described above.

For the target sites protocol, the concept of the tunneling fidelity becomes a bit more complicated. Each target site has four nearest neighbors. (For simplicity, we neglect diagonal neighbors, as the tunneling rates to these sites would be much lower, and these sites may have different potential offsets from the nearest neighbors, depending on how the optical potential is generated.) Each of the four nearest neighbors can either be occupied or empty, which leads to six distinct configurations of atoms (assuming symmetry of the neighboring sites): for every atom number except two, there is a single unique configuration, while two neighboring atoms can be either across the target site from each other or at a diagonal. For each of these six configurations, we calculate the probability of the target site becoming occupied (tabulated in Fig. S1) by integrating the Schrödinger equation for the appropriate adiabatic ramp. These values are then used in the classical simulation as the base probability for filling a given target site, with the assumed imperfect fidelity multiplying this probability. We can gain intuition into these results using simple arguments. The excited states to which the atoms couple are anti-symmetric about their centers, and we assume that the two excitation directions are degenerate.
If an atom tunnels into the target site, the state it excites only allows it to tunnel back into its original site or into the site across the target site from its original location. Thus, atoms are confined to a given row or column. At the beginning of the adiabatic ramp, we assume that the first motional excited states of the target site are higher in energy than the reservoir sites and that interactions are repulsive, so that the initial state always consists of a superposition of degenerate ground states, where the atoms are delocalized within their respective rows or columns. After the ramp, the states involving one atom in the target site are lower in energy than those that do not. By comparing the degeneracy of the lowest-energy states at the beginning of the ramp to the degeneracy of the lowest-energy states to which these couple at the end of the ramp, we can find the probability of filling the target site. We confirm these results through explicit simulation of the relevant configurations.

With the relevant parameters available, our simulations proceed by initializing randomly filled arrays of appropriate geometry, and then proceeding in a step-wise manner according to the relevant protocol. Steps representing the coherent shift operations are interleaved with steps that mix the reservoir atoms (in the case of the target sites and layer filling protocols), and with steps representing atom loss. For the shuffle protocol, each cycle consists of a shift applied to either even or odd pairs. For the target sites protocol, we implement the reservoir mixing step in the following manner: the population of each reservoir site (iterated through sequentially) is swapped with one of its neighbors, chosen at random. For the target layer protocol, we simply randomize the positions of the atoms in each layer during each mixing step. These implementations are chosen for simplicity; we find that altering the details of the mixing step does not have a significant effect on the performance of the protocols, provided it is sufficiently randomizing. This is especially true for predicting the maximum filling fraction. When filling target sites within a two-dimensional plane, we assume for simplicity that each target site interacts independently with its neighbors: the probability of filling each site (randomly selected) is calculated in turn, with the atomic configuration updated in between. This ensures that reservoir atoms are not double-counted, while maintaining computational simplicity. If this situation is desired in practice, it could easily be achieved by offsetting the depths of the auxiliary potential applied to neighboring subsets of the target sites.

## Appendix D Removing excess atoms

All three of our protocols require the removal of excess atoms at the end of the filling cycles. There are several ways of accomplishing this, depending on the protocol. For the shuffling protocol, one option would be to use the same light that defined the first edge of the filled region to define the second, after shuffling is complete. If the wavelength or polarization of this light is chosen such that it creates a differential shift on the cooling transition, then the cooling effect in these sites could be turned into heating, and subsequent cooling light would then cause their removal from the trap.
Other options would include directing resonant light onto the region containing atoms to be removed, leading to heating or enabling future photo-ionization, or applying light with strong spatial gradients, whose modulation at multiples of the trapping frequencies could lead to heating. For the target sites protocol, one could simply leave on the auxiliary potential that defines the target sites while ramping off the lattice in the orthogonal direction (though retaining confinement perpendicular to the array), leaving the background atoms free to expand away. Slight heating or spatial potential gradients could assist this process. For the target layer protocol, one could take advantage of the fact that the target plane lies at a minimum of the auxiliary potential, and thus at a point of zero (or low) gradient. In this case, applying an intensity modulation at an odd harmonic of the trap frequency perpendicular to the target plane should selectively heat atoms in planes other than the target plane. These heated atoms could subsequently be removed by lowering the lattice depth. Alternatively, as in the case of the shuffle protocol, the auxiliary potential could be tuned (by a rotation of polarization, for example) to provide a differential shift to either the cooling transition or an optical clock transition, enabling selective heating or shelving of atoms by layer. Finally, charge-pumping techniques using optical superlattices Koepsell _et al._ (2020) could be adapted to move atoms away from the target plane, facilitating their later removal or making them irrelevant to future operations.
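To illustrate the selectivity of the modulation-based removal, a small sketch follows. It is our own construction, with an assumed auxiliary-lattice period of eight base-lattice planes and the target plane placed at the auxiliary minimum; it evaluates the first-order gradient coupling at each plane:

```python
import numpy as np

# Relative strength of the force modulation felt by each plane when the
# auxiliary-lattice intensity is modulated. Planes sit at z_n = n * d_base;
# the auxiliary potential V_aux(z) = V_a * sin^2(k_aux * z) has its minimum
# (zero gradient) at the target plane z = 0.
d_base = 813e-9 / 2                 # base-lattice plane spacing, m
k_aux = 2 * np.pi / (8 * 813e-9)    # assumed long-period auxiliary lattice

for n in range(-3, 4):
    # dV/dz ~ sin(2 * k_aux * z), up to an overall prefactor
    g = abs(np.sin(2 * k_aux * n * d_base))
    print(f"plane {n:+d}: relative modulation strength {g:.2f}")
# The target plane (n = 0) sees zero first-order modulation, so intensity
# modulation at a trap harmonic heats only the neighboring planes.
```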
# End-to-end Interpretable Neural Motion Planner

Wenyuan Zeng1,2, Wenjie Luo1,2*, Simon Suo1,2, Abbas Sadat1, Bin Yang1,2, Sergio Casas1,2, Raquel Urtasun1,2
1Uber Advanced Technologies Group, 2University of Toronto
<EMAIL_ADDRESS> <EMAIL_ADDRESS>
*denotes equal contribution.

###### Abstract

In this paper, we propose a neural motion planner (NMP) for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road users. Towards this goal, we design a holistic model that takes as input raw LiDAR data and an HD map and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost volume defining the goodness of each position that the self-driving car can take within the planning horizon. We then sample a set of diverse, physically possible trajectories and choose the one with the minimum learned cost. Importantly, our cost volume is able to naturally capture multi-modality. We demonstrate the effectiveness of our approach on real-world driving data captured in several cities in North America. Our experiments show that the learned cost volume can generate safer plans than all the baselines.

## 1 Introduction

Self-driving vehicles (SDVs) are going to revolutionize the way we live. Building reliable SDVs at scale is, however, not a solved problem. As is the case in many application domains, the field of autonomous driving has been transformed in the past few years by the success of deep learning. Existing approaches that leverage this technology can be characterized into two main frameworks: end-to-end driving and traditional engineering stacks.

Figure 1: Our end-to-end interpretable neural motion planner (NMP). The backbone network takes LiDAR data and maps as input and outputs bounding boxes of other actors for future timesteps (perception), as well as a cost volume for planning with $T$ filters. Next, for each trajectory proposal from the sampler, its cost is indexed from the different filters of the cost volume and summed together. The trajectory with the minimal cost is our final plan.

End-to-end driving approaches [3, 24] take the output of the sensors (e.g., LiDAR, images) and use it as input to a neural net that outputs control signals, e.g., steering command and acceleration. The main benefit of this framework is its simplicity, as only a few lines of code can build a model, and labeled training data can be easily obtained automatically by recording human driving with an SDV platform. In practice, this approach suffers from compounding errors due to the nature of self-driving control being a sequential decision problem, and it requires massive amounts of data to generalize. Furthermore, interpretability is difficult to obtain for analyzing the mistakes of the network. It is also hard to incorporate sophisticated prior knowledge about the scene, e.g., that vehicles should not collide.

In contrast, most self-driving car companies utilize a traditional engineering stack, where the problem is divided into subtasks: perception, prediction, motion planning and control. Perception is in charge of estimating all actors’ positions and motions, given the current and past evidence. This involves solving tasks such as 3D object detection and tracking.
Prediction (we use prediction and motion forecasting interchangeably), on the other hand, tackles the problem of estimating the future positions of all actors as well as their intentions (e.g., changing lanes, parking). Finally, motion planning takes the output from the previous stages and generates a safe trajectory for the SDV to execute via a control system. This framework has interpretable intermediate representations by construction, and prior knowledge can be easily exploited, for example in the form of high definition maps (HD maps). However, solving each of these sub-tasks is not only hard, but may also lead to sub-optimal overall system performance. Most self-driving companies have large engineering teams working on each sub-problem in isolation, and they train each sub-system with a task-specific objective. As a consequence, an advance in one sub-system does not easily translate to an overall system performance improvement. For instance, 3D detection tries to maximize AP, where each actor has the same weight. However, in a driving scenario, high-precision detection of near-range actors who may influence the SDV motion, e.g., through interactions (cutting in, sudden stopping), is more critical. In addition, uncertainty estimates are difficult to propagate and computation is not shared among different sub-systems. This leads to longer reaction times of the SDV and makes the overall system less reliable.

In this paper we bridge the gap between these two frameworks. Towards this goal, we propose the first end-to-end learnable and interpretable motion planner. Our model takes as input LiDAR point clouds and an HD map, and produces interpretable intermediate representations in the form of 3D detections and their future trajectories. Our final output representation is a space-time cost volume that represents the “goodness” of each location that the SDV can take within a planning horizon. Our planner then samples a set of diverse and feasible trajectories, and selects the one with the minimum learned cost for execution. Importantly, the non-parametric cost volume is able to capture the uncertainty and multi-modality in possible SDV trajectories, e.g., changing lanes vs. keeping the lane.

We demonstrate the effectiveness of our approach on real-world driving data captured in several cities in North America. Our experiments show that our model provides good interpretable representations and shows better performance. Specifically, for detection and motion forecasting, our model outperforms recent neural architectures specifically designed for these tasks. For motion planning, our model generates safer plans compared to the baselines.

## 2 Related Work

Imitation Learning: Imitation learning (IL) uses expert demonstrations to directly learn a policy that maps states to actions. IL for self-driving vehicles was introduced in the pioneering work of [24], where a direct mapping from the sensor data to steering angle and acceleration is learned. [3] follows a similar philosophy. In contrast, with the help of a high-end driving simulator [9], Codevilla _et al_. [8] exploit conditional models with additional high-level commands such as continue, turn left, turn right. Muller _et al_. [21] incorporate road segmentation as an intermediate representation, which is then converted into steering commands. In practice, IL approaches suffer from compounding errors due to the nature of self-driving control being a sequential decision problem.
Furthermore, these approaches require massive amounts of data and generalize poorly, e.g., to situations such as drifting out of lane.

RL & IRL: Reinforcement learning (RL) is a natural fit for sequential decision problems, as it considers the interactions between the environment and the agent (a self-driving car in this case). Following the success of AlphaGo [29], RL has been applied to self-driving in [15, 23]. On the other hand, inverse reinforcement learning (IRL) looks at learning the reward function for a given task. [31, 35] develop IRL algorithms to learn drivable regions for self-driving cars. [25] further infers possible trajectories with a symmetrical cross-entropy loss. However, all these approaches have only been tested on simulated datasets or small real-world datasets, and it is unclear if RL and IRL can scale to more realistic settings. Furthermore, these methods do not produce interpretable representations, which are desirable in safety-critical applications.

Optimization-Based Planners: Motion planning has long been treated as an independent task that uses the outputs of the perception and prediction modules to formulate an optimization problem, usually by manually engineering a cost function [4, 10, 20, 36]. The preferred trajectory is then generated by minimizing this cost function. In practice, to simplify the optimization problem, many approaches assume the objective to be quadratic [7], decompose lateral and longitudinal planning into two tasks [1, 10], or factor the search space into speed and path [11, 14]. In [1], A* is used to search the space of possible motions. Similarly, the Baidu motion planner [10] uses dynamic programming to find an approximate path and speed profile. In [36], the trajectory planning problem is formulated as continuous optimization and used in practice to demonstrate 100 km of autonomous driving. In sampling-based approaches, a set of trajectories is generated and evaluated against a predefined cost, among which the one with minimum cost is chosen [27, 30]. Such approaches are attractive since they are highly parallelizable [19]. The drawback of all these hand-engineered approaches is that they are not robust to real-world driving scenarios and thus require tremendous engineering effort to fine-tune.

Planning under uncertainty: Planning methods for robust and safe driving in the presence of uncertainty have also been explored [2, 12, 33]. Uncertainty in the intentions of other actors is the main focus of [2, 33]. In [12], possible future actions of other vehicles and collision probabilities are used to account for the uncertainty in obstacle positions. Compared to these approaches, our planner naturally handles uncertainty by learning a non-parametric cost function.

Holistic Models: These models provide interpretability. Chen _et al_. [6] propose to learn a mapping from the sensor data to affordances, such as the distance to the left boundary or to the leading vehicle. This is then fed into a controller that generates steering command and acceleration. Sauer _et al_. [26] further propose a variant conditioned on a direction command. On the other hand, Luo _et al_. [18] propose a joint model for perception and prediction from raw LiDAR data, and [5] extends it to predict each vehicle’s intention. All the methods above are trained for tasks that provide interpretable perception/prediction outputs to be used in motion planning. However, no feedback is back-propagated from the motion planning module.

Figure 2: Trajectory Representation.
We first sample a set of parameters of a Clothoid to determine the shape of a trajectory. We then sample a velocity profile to determine how fast the SDV goes along this trajectory. Combining these two, we obtain a space-time trajectory.

In this work, we take a holistic model approach and push it one step further by designing a single neural network that takes raw sensor and dynamic map data as input and predicts the cost map for planning. Compared with imitation learning approaches [3, 8, 24] that directly regress a steering angle (from the raw data), our approach provides interpretability and handles multimodality naturally. When compared with traditional planners, which use manually designed cost functions built on top of perception and prediction systems, our model has the advantage of being jointly trained and thus learns representations that are optimal for the end task. Furthermore, our model handles uncertainty naturally (as this is represented in the cost) and does not require costly parameter tuning.

## 3 Deep Structured Interpretable Planner

We propose an end-to-end learnable motion planner that generates accurate space-time trajectories over a planning horizon of a few seconds. Importantly, our model takes as input LiDAR point clouds and a high definition map and produces interpretable intermediate representations in the form of 3D detections and their future motion forecasted over the planning horizon. Our final output representation is a space-time cost volume that represents the “goodness” of each possible location that the SDV can take within the planning horizon. Our planner then scores a series of trajectory proposals using the learned cost volume and chooses the one with the minimum cost.

We train our model end-to-end with a multi-task objective. Our planning loss encourages the minimum-cost plan to be similar to the trajectory performed by human demonstrators. Note that this loss is sparse, as a ground-truth trajectory only occupies a small portion of the space. As a consequence, learning with this loss alone is slow and difficult. To mitigate this problem, we introduce another perception loss that encourages the intermediate representations to produce accurate 3D detections and motion forecasts. This ensures the interpretability of the intermediate representations and enables much faster learning.

### 3.1 Deep Structured Planning

More formally, let $\mathbf{s}=\{\mathbf{s}^{0},\mathbf{s}^{1},\cdots,\mathbf{s}^{T-1}\}$ be a trajectory spanning $T$ timesteps into the future, with $\mathbf{s}^{t}$ the location in bird’s eye view (BEV) at timestep $t$. We formulate the planning problem as a deep structured minimization problem as follows:

$\mathbf{s}^{*}=\arg\min_{\mathbf{s}}\sum_{t}c^{t}(\mathbf{s}^{t})$ (1)

where $c^{t}$ is our learned cost volume indexed at timestep $t$, which is a 2D tensor with the same size as our region of interest. This minimization is approximated by sampling a set of physically valid trajectories $\mathbf{s}$ and picking the one with minimum cost. Our model employs a convolutional network backbone to compute this cost volume. It first extracts features from both LiDAR and maps, and then feeds this feature map into two branches of convolution layers that output the 3D detections and motion forecasts as well as the planning cost volume, respectively. In this section we describe our input representation and network in detail.

#### Input representation:

Our approach takes as input raw point clouds captured by a LiDAR mounted on top of the SDV.
We employ $T^{\prime}=10$ consecutive sweeps as observations, in order to infer the motion of all actors. For those sweeps, we correct for ego-motion and bring the point clouds from the past 10 frames into the same coordinate system, centered at the SDV’s current location. To make the input data amenable to standard convolutions, we follow [5] and rasterize the space into a 3D occupancy grid, where each voxel has a binary value indicating whether it contains a LiDAR point. This results in a 3D tensor of size $H\times W\times(ZT^{\prime})$, where $Z$, $H$, $W$ represent the height and x-y spatial dimensions, respectively. Note that we have concatenated timesteps along the $Z$ dimension, thus avoiding 3D convolutions, which are memory and computation intensive.

Access to a map is also key for accurate motion planning, as we need to drive according to traffic rules (e.g., stop at a red light, follow the lane, change lanes only when allowed). Towards this goal, we exploit HD maps that contain information about the semantics of the scene, such as the location of lanes, their boundary type (e.g., solid, dashed) and the location of stop signs. Similar to [5], we rasterize the map to form an $M$-channel tensor, where each channel represents a different map element, including road, intersections, lanes, lane boundaries, traffic lights, etc. Our final input tensor is thus of size $H\times W\times(ZT^{\prime}+M)$.

#### Backbone:

Our backbone is adapted from the detection network of [32] and consists of five blocks. Each block has {2, 2, 3, 6, 5} Conv2D layers with filter numbers {32, 64, 128, 256, 256}, filter size 3x3 and stride 1. There are MaxPool layers after each of the first 3 blocks. A multi-scale feature map is generated after the first 4 blocks as follows. We resize the feature maps from each of the first 4 blocks to 1/4 of the input size and concatenate them together, similar to [34], in order to increase the effective receptive field [17]. These multi-scale features are then fed into the 5th block. The whole backbone has a downsampling rate of 4.

#### Perception Header:

The perception header has two components formed of convolution layers, one for classification and one for regression. To reduce the variance of the regression targets, we follow SSD [16] and employ multiple predefined anchor boxes $a_{i,j}^{k}$ at each feature map location, where the subscript $i,j$ denotes the location on the feature map and $k$ indexes over the anchors. In total, there are 12 anchors at each location, with different sizes, aspect ratios and orientations. The classification branch outputs a score $p_{i,j}^{k}$ for each anchor indicating the probability of a vehicle at that anchor’s location. The regression branch also outputs regression targets for each anchor $a_{i,j}^{k}$ at different timesteps. These include the localization offset $l_{x}^{t},l_{y}^{t}$, size $s_{w}^{t},s_{h}^{t}$ and heading angle $a_{sin}^{t},a_{cos}^{t}$. The superscript $t$ stands for the time frame, ranging from $0$ (present) to $T-1$ into the future. Regression is performed at every timestep, thus producing motion forecasts for each vehicle.

#### Cost Volume Head:

The cost volume head consists of several convolution and deconvolution layers. To produce a cost volume $c$ at the same resolution as our bird’s eye view (BEV) input, we apply two deconvolution layers on the backbone’s output with filter numbers {128, 64}, filter size 3x3 and stride 2. Each deconvolution layer is also followed by a convolution layer with filter number {128, 64}, filter size 3x3 and stride 1.
We then apply a final convolution layer with filter number $T$, which is our planning horizon. Each filter generates a cost volume $c^{t}$ for a future timestep $t$. This allows us to evaluate the cost of any trajectory $\mathbf{s}$ by simply indexing into the cost volume $c$. In our experiments, we also clip the cost volume values between -1000 and +1000 after the network. Applying such bounds prevents the cost values from shifting arbitrarily and makes tuning hyper-parameters easier. We next describe our output trajectory parameterization.

### 3.2 Efficient Inference

Given the input LiDAR sweeps and the HD map, we can compute the corresponding cost volume $c$ by feedforward convolutional operations as described above. The final trajectory can then be computed by minimizing Eq. (1). Note, however, that this optimization is NP-hard (we expect the output trajectory of our planner to be physically feasible, which introduces constraints on the solution set; under these physical constraints, the optimization is NP-hard). We thus rely on sampling to obtain a low-cost trajectory. Towards this goal, we sample a wide variety of trajectories that can be executed by the SDV and produce as final output the one with minimal cost according to our learned cost volume. In this section we describe how we efficiently sample physically possible trajectories during inference. Since the cost of a trajectory is computed by indexing into the cost volume, our planner is fast enough for real-time inference.

#### Output Parameterization:

A trajectory can be defined by the combination of the spatial path (a curve in the 2D plane) and the velocity profile (how fast we go along this path). Sampling a trajectory as a set of points in $(x,y)\in\Re^{2}$ space is not a good idea, as a vehicle cannot execute an arbitrary set of points in Cartesian space. This is due, for example, to the physical limits on speed, acceleration and turning angle. To respect these real-world constraints, we impose that the vehicle should follow a dynamical model. In this paper, we employ the bicycle model [22], which is widely used for planning in self-driving cars. This model implies that the curvature $\kappa$ of the vehicle’s path is approximately proportional to the steering angle $\phi$ (the angle between the front wheel and the vehicle): $\kappa=2\tan(\phi)/L\approx 2\phi/L$, where $L$ is the distance between the front and rear axles of the SDV. This is a good approximation, as $\phi$ is usually small. We then utilize a Clothoid curve, also known as an Euler spiral or Cornu spiral, to represent the 2D path of the SDV [28]. We refer the reader to Fig. 2 for an illustration. The curvature $\kappa$ of a point on this curve is proportional to its distance $\xi$ along the curve from the reference point, i.e., $\kappa(\xi)=\pi\xi$. Considering the bicycle model, this linear curvature characteristic corresponds to steering the front wheel angle with constant angular velocity. The canonical form of a Clothoid can be defined as

$\mathbf{s}(\xi)=\mathbf{s_{0}}+a\left[C\left(\frac{\xi}{a}\right)\mathbf{T_{0}}+S\left(\frac{\xi}{a}\right)\mathbf{N_{0}}\right]$ (2)

$S(\xi)=\int_{0}^{\xi}\sin\left(\frac{\pi u^{2}}{2}\right)du$ (3)

$C(\xi)=\int_{0}^{\xi}\cos\left(\frac{\pi u^{2}}{2}\right)du$ (4)

Here, $\mathbf{s}(\xi)$ defines a Clothoid curve on a 2D plane, indexed by the distance $\xi$ to the reference point $\mathbf{s_{0}}$; $a$ is a scaling factor, and $\mathbf{T_{0}}$ and $\mathbf{N_{0}}$ are the tangent and normal vectors of this curve at point $\mathbf{s_{0}}$.
$S(\xi)$ and $C(\xi)$ are called the Fresnel integrals, and can be computed efficiently. In order to fully define a trajectory, we also need a longitudinal velocity $\dot{\xi}$ (velocity profile) that specifies the SDV motion along the path $\mathbf{s}(\xi)$: $\dot{\xi}(t)=\ddot{\xi}t+\dot{\xi}_{0}$, where $\dot{\xi}_{0}$ is the initial velocity of the SDV and $\ddot{\xi}$ is a constant forward acceleration. Combining this and (2), we can obtain the trajectory points $\mathbf{s}$ in Eq. (1).

#### Sampling:

Since we utilize Clothoid curves, sampling a path corresponds to sampling the scaling factor $a$ in Eq. (2). Considering the city driving speed limit of 15 m/s, we sample $a$ uniformly from the range of 6 to 80 m. Once $a$ is sampled, the shape of the curve is fixed. (We also sample a binary random variable indicating whether the curve is a canonical Clothoid or its vertically flipped mirror image; these correspond to turning left or right, respectively.) We then use the SDV’s initial steering angle (curvature) to find the corresponding position on the curve. Note that Clothoid curves cannot handle circular and straight-line trajectories well, so we sample those separately. The probabilities of using straight lines, circles and Clothoid curves are 0.5, 0.25 and 0.25, respectively. Also, we use only a single Clothoid segment to specify the path of the SDV, which we consider sufficient for the short planning horizon. In addition, we sample constant accelerations $\ddot{\xi}$ ranging from $-5\,\mathrm{m/s^{2}}$ to $5\,\mathrm{m/s^{2}}$, which specify the SDV’s velocity profile. Combining the sampled curves and velocity profiles, we project the trajectories to discrete timesteps and obtain the corresponding waypoints (see Fig. 2) at which to evaluate the learned cost.

| Method | L2 (m) @1.0s | @2.0s | @3.0s | Collision (%) @0.5s | @1.0s | @1.5s | @2.0s | @2.5s | @3.0s | Lane viol. (%) @1.0s | @2.0s | @3.0s |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ego-motion | 0.281 | 0.900 | 2.025 | 0.00 | 0.01 | 0.20 | 0.54 | 1.04 | 1.81 | 0.51 | 2.72 | 6.73 |
| IL | 0.231 | 0.839 | 1.923 | 0.00 | 0.01 | 0.19 | 0.55 | 1.04 | 1.72 | 0.44 | 2.63 | 5.38 |
| ACC | 0.403 | 1.335 | 2.797 | 0.05 | 0.12 | 0.27 | 0.53 | 1.18 | 2.39 | 0.24 | 0.46 | 0.64 |
| Manual Cost | 0.402 | 1.432 | 2.990 | 0.00 | 0.02 | 0.09 | 0.22 | 0.79 | 2.21 | 0.39 | 2.73 | 5.02 |
| Ours-NMP | 0.314 | 1.087 | 2.353 | 0.00 | 0.01 | 0.04 | 0.09 | 0.33 | 0.78 | 0.35 | 0.77 | 2.99 |

Table 1: Planning metrics: L2 distance to the real trajectory (m), future potential collision rate (%), and lane violation rate (%) at different time horizons.

### 3.3 End-to-End Learning

Our ultimate goal is to plan a safe trajectory while following the rules of traffic. We want the model to understand where obstacles are and where they will be in the future in order to avoid collisions. Therefore, we use multi-task training with supervision from detection and motion forecasting as well as from human-driven trajectories for the ego-car. Note that we do not have supervision for the cost volume. We thus adopt a max-margin loss to push the network to learn to discriminate between good and bad trajectories. The overall loss function is then:

$\mathcal{L}=\mathcal{L}_{\text{perception}}+\beta\mathcal{L}_{\text{planning}}.$ (5)

This multi-task loss not only directs the network to extract useful features, but also makes the network output interpretable results. This is crucial for self-driving, as it helps understand failure cases and improve the system. In the following, we describe each loss in more detail.

#### Perception Loss:

Our perception loss includes a classification loss, for distinguishing a vehicle from the background, and a regression loss, for generating precise object bounding boxes.
For each predefined anchor box, the network outputs a classification score as well as several regression targets. The classification score $p_{i,j}^{k}$ indicates the probability of the existence of a vehicle at this anchor. We employ a cross-entropy loss for the classification, defined as

$\mathcal{L}_{cla}=-\sum_{i,j,k}\left(q_{i,j}^{k}\log p_{i,j}^{k}+(1-q_{i,j}^{k})\log(1-p_{i,j}^{k})\right),$ (6)

where $q_{i,j}^{k}$ is the class label for this anchor (i.e., $q_{i,j}^{k}=1$ for vehicle and $0$ for background). The regression outputs include information on position, shape and heading angle at each time frame $t$, namely

$l_{x}=\frac{x^{a}-x^{l}}{w^{a}}\quad l_{y}=\frac{y^{a}-y^{l}}{h^{a}},$

$s_{w}=\log\frac{w^{l}}{w^{a}}\quad s_{h}=\log\frac{h^{l}}{h^{a}},$

$a_{sin}=\sin(\theta^{a}-\theta^{l})\quad a_{cos}=\cos(\theta^{a}-\theta^{l}),$

where the superscript $a$ denotes the anchor and $l$ the label. We use a weighted smooth L1 loss over all these outputs. The overall perception loss is

$\mathcal{L}_{perception}=\sum\left(\mathcal{L}_{cla}+\alpha\sum_{t=0}^{T}\mathcal{L}^{t}_{reg}\right).$ (7)

Note that the regression loss is summed over all vehicle-associated anchors, from the current time frame to our prediction horizon $T$; it thus teaches the model to predict the position of vehicles at every time frame. To find the training label for each anchor, we associate it with its neighboring ground-truth bounding box, similar to [16, 18]. In particular, for each anchor, we find all the ground-truth boxes with intersection over union (IoU) higher than $0.4$; we associate the highest-overlapping one among them with this anchor, and compute the class label and regression targets accordingly. We also associate any non-assigned ground-truth boxes with their nearest anchors. The remaining anchors are treated as background and are not considered in the regression loss. Note that one ground-truth box may be associated with multiple anchors, but one anchor can be associated with at most one ground-truth box. During training, we also apply hard negative mining to overcome the imbalance between positive and negative samples.

#### Planning Loss:

Learning a reasonable cost volume is challenging, as we do not have ground truth. To overcome this difficulty, we minimize a max-margin loss where we use the ground-truth trajectory as a positive example and randomly sampled trajectories as negative examples. The intuition is to encourage the ground-truth trajectory to have the minimal cost, and others to have higher costs. More specifically, assume we have a ground-truth trajectory $\{(x^{t},y^{t})\}$ for the next $T$ time steps, where $(x^{t},y^{t})$ is the position of our vehicle at the $t$-th time step. Define the cost volume value at this point $(x^{t},y^{t})$ as $\hat{c}^{t}$. Then, we sample $N$ negative trajectories, the $i$-th of which is $\{(x_{i}^{t},y_{i}^{t})\}$, with cost volume values $c_{i}^{t}$ at these points. The sampling procedure for negative trajectories is similar to that described in Sec. 3.2, except that with probability 0.8 the negative sample does not obey the SDV’s initial state, e.g., we randomly sample a velocity to replace the SDV’s initial velocity. This provides easier negative examples for the model to start with.
The overall max-margin loss is defined as

$\mathcal{L}_{\text{planning}}=\sum_{\{(x^{t},y^{t})\}}\left(\max_{1\leq i\leq N}\left(\sum_{t=1}^{T}\left[\hat{c}^{t}-c_{i}^{t}+d_{i}^{t}+\gamma_{i}^{t}\right]_{+}\right)\right)$ (8)

The inner-most summation denotes the discrepancy between the ground-truth trajectory and one negative trajectory sample, which is a sum of per-timestep losses. $[\cdot]_{+}$ denotes the ReLU function. It is designed to be inside the summation rather than outside, as this prevents the cost volume at one timestep from dominating the whole loss. $d_{i}^{t}$ is the distance between the negative trajectory and the ground-truth trajectory, $\|(x^{t},y^{t})-(x_{i}^{t},y_{i}^{t})\|_{2}$, which is used to encourage negative trajectories far from the ground-truth trajectory to have much higher cost. $\gamma_{i}^{t}$ is the traffic-rule violation cost, which is a constant if and only if the negative trajectory $i$ violates traffic rules at time $t$, e.g., moving before a red light or colliding with other vehicles. It determines how ‘bad’ the negative samples are; as a result, rule-violating trajectories are penalized more severely, discouraging dangerous behaviors. After computing the discrepancy between the ground-truth trajectory and each negative sample, we only optimize the worst case via the $\max$ operation. This encourages the model to learn a cost volume that discriminates good trajectories from bad ones.

| Method | Along (m) @0s | @1s | @2s | @3s | Across (m) @0s | @1s | @2s | @3s | L1 (m) @0s | @1s | @2s | @3s | L2 (m) @0s | @1s | @2s | @3s |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FaF [18] | 0.29 | 0.49 | 0.87 | 1.52 | 0.16 | 0.23 | 0.39 | 0.58 | 0.45 | 0.72 | 1.31 | 2.14 | 0.37 | 0.60 | 1.11 | 1.82 |
| IntentNet [5] | 0.23 | 0.42 | 0.79 | 1.27 | 0.16 | 0.21 | 0.32 | 0.48 | 0.39 | 0.61 | 1.09 | 1.79 | 0.32 | 0.51 | 0.93 | 1.52 |
| Ours-NMP | 0.21 | 0.37 | 0.69 | 1.15 | 0.12 | 0.16 | 0.25 | 0.37 | 0.34 | 0.54 | 0.94 | 1.52 | 0.28 | 0.45 | 0.80 | 1.31 |

Table 2: Motion forecasting metrics: L2 error along and across the ground-truth trajectory, and L1/L2 errors to the ground-truth locations, at different time horizons.

## 4 Experiments

| ID | Det loss | Plan loss | Input 5 | Input 10 | Penalty | mAP@0.5 | mAP@0.7 | Pred. L2 @1s | @2s | @3s | Coll. (%) @1s | @2s | @3s | Viol. (%) @1s | @2s | @3s |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ✓ | | | ✓ | | 94.1 | 81.3 | 0.48 | 0.84 | 1.34 | - | - | - | - | - | - |
| 2 | | ✓ | | ✓ | | - | - | - | - | - | 0.01 | 0.23 | 1.42 | 0.37 | 1.06 | 3.85 |
| 3 | ✓ | ✓ | ✓ | | ✓ | 93.6 | 80.1 | 0.46 | 0.83 | 1.35 | 0.01 | 0.15 | 0.93 | 0.36 | 0.86 | 3.09 |
| 4 | ✓ | ✓ | | ✓ | | 94.2 | 81.1 | 0.45 | 0.80 | 1.30 | 0.01 | 0.29 | 1.40 | 0.36 | 1.02 | 3.26 |
| 5 | ✓ | ✓ | | ✓ | ✓ | 94.2 | 81.1 | 0.45 | 0.80 | 1.31 | 0.01 | 0.09 | 0.78 | 0.35 | 0.77 | 2.99 |

Table 3: Ablation study. We compare the effects of different supervision, different input horizons (5 vs. 10 frames) and different training losses. ID denotes the model id, which we use for clarity and brevity.

In this section, we evaluate our approach on a large-scale real-world driving dataset. The dataset was collected over multiple cities across North America. It consists of 6,500 scenarios with about 1.4 million frames; the training set consists of 5,000 scenarios, while validation and test have 500 and 1,000 scenarios, respectively. Our dataset has annotated 3D bounding boxes of vehicles for every 100 ms. For all experiments, we utilize the same spatial region, which is centered at the SDV, extending 70.4 meters both in front and back, 40 meters to the left and right, and in height from -2 meters to 3.4 meters.
This corresponds to a 704x400x27 tensor. Our input sequence is 10 frames at 10 Hz, and the output is 7 frames at 2 Hz, resulting in a planning horizon of 3 seconds. In the following, we first show a quantitative analysis of planning on a wide variety of metrics measuring collision, similarity to the human trajectory and traffic-rule violation. Next we demonstrate the interpretability of our approach, through a quantitative analysis of detection and motion forecasting, as well as visualization of the learned cost volume. Last, we provide an ablation study to show the effects of different loss functions and different temporal history lengths.

### 4.1 Planning Results

We evaluate a wide variety of planning metrics.

L2 Distance to Real Trajectory: This evaluates how far the planned trajectory is from the real executed trajectory. Note that the real trajectory is just one of the many possible trajectories that a human could take, and thus this metric is not perfect.

Future Potential Collision Rate: This is used to check whether the planned trajectory will overlap with other vehicles in the future. For a given timestep $t$, we compute the percentage of occurrences of collisions up to time $t$; a lower number is preferred.

Lane Violation: This metric counts the percentage of planned trajectories crossing a solid yellow line. Lower is better; here, crossing is counted if the SDV touches the line.

Figure 3: Cost volume across time. We show the planned trajectory in red and the ground truth in blue. We overlay the lower-cost regions for different timesteps in the same figure, using different colors (indicated by the legend). Detection and corresponding prediction results are shown in cyan. (Best viewed in color.)

We implement several baselines for comparison:

Ego-motion forecasting (Ego-motion): Ego-motion provides a strong cue of how the SDV would move in the future. This baseline takes only the SDV’s past positions as input and uses a 4-layer MLP to predict the future locations.

Imitation Learning (IL): We follow the imitation learning framework [3, 8, 24] and utilize a deep network to extract features from the raw LiDAR data and the rasterized map. For a fair comparison, we use the same backbone (Sec. 3.1) and the same input parameterization (Sec. 3.1) as our approach. In addition, the same MLP from the ego-motion forecasting baseline is used to extract features from the ego-motion. These two features are then concatenated and fed into a 3-layer MLP to compute the final prediction.

Adaptive Cruise Control (ACC): This baseline implements the simple behavior of following the leading vehicle. The vehicle follows the lane center-line while adaptively adjusting its speed to maintain a safe distance from the vehicle ahead. When there is no lead vehicle, a safe speed limit is followed. Traffic controls (traffic lights, stop signs) are treated as stationary obstacles, similar to a stopped lead vehicle.

Plan w/ Manual Cost (Manual): This baseline uses the same trajectory parameterization and sampling procedure as our approach. However, it utilizes a manually designed cost using the perception and motion forecasting outputs. In detail, we rasterize all possible roads the SDV can take going forward and set them to a low cost of 0; each detected object’s bounding box defines a high-cost area set to 255; the cost of any other area is set to a default value of 100. This baseline is designed to show the effectiveness of our learned cost volume, as it utilizes the same sampling procedure as our approach, just with a different cost volume.
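Since the Manual baseline differs from our planner only in the cost volume, the shared scoring step of Eq. (1) is easy to make concrete. The following is a minimal sketch; the function name, the nearest-neighbor indexing and the grid resolution are our assumptions, not the exact implementation:

```python
import numpy as np

def select_trajectory(cost_volume, trajectories, resolution=0.25):
    """Pick the minimum-cost trajectory, approximating Eq. (1).

    cost_volume:  (T, H, W) array, one BEV cost map per future timestep
    trajectories: (N, T, 2) sampled waypoints in meters, SDV-centered BEV
    resolution:   meters per grid cell (assumed value)
    """
    T, H, W = cost_volume.shape
    # Convert metric waypoints to grid indices; nearest-neighbor lookup
    # keeps the sketch simple (interpolation is an obvious refinement).
    rows = np.clip((trajectories[..., 1] / resolution + H / 2).astype(int), 0, H - 1)
    cols = np.clip((trajectories[..., 0] / resolution + W / 2).astype(int), 0, W - 1)
    costs = cost_volume[np.arange(T), rows, cols].sum(axis=1)  # (N,)
    best = int(np.argmin(costs))
    return trajectories[best], costs[best]
```

Swapping in the hand-crafted 0/100/255 cost map reproduces the Manual baseline, while the learned cost volume gives our planner.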
As shown in Tab. 1, our approach has a lower future collision rate at all timesteps by a large margin. Note that the Ego-motion and IL baselines give lower L2 numbers, as they optimize directly for this metric; however, they are not good from a planning perspective, as they have difficulty reasoning about other actors and collide frequently with them. Compared to the Manual cost baseline and ACC, we achieve both better regression numbers and better collision rates, showing the advantage of our learned cost volume over a manually designed cost. For lane violations, ACC is designed to follow the lane, and thus has roughly zero violations by definition. Compared to the other baselines, we achieve a much smaller violation rate, showing that our model is able to reason about and learn from the map.

### 4.2 Interpretability

Interpretability is crucial for self-driving, as it can help understand failure cases. We showcase the interpretability of our approach by presenting quantitative results on 3D detection and motion forecasting, and by visualizing our learned cost map for all timesteps into the future.

| Method | mAP@0.5 | mAP@0.6 | mAP@0.7 | mAP@0.8 | mAP@0.9 |
|---|---|---|---|---|---|
| MobileNet [13] | 86.1 | 78.3 | 60.4 | 27.5 | 1.1 |
| FaF [18] | 89.8 | 82.5 | 68.1 | 35.8 | 2.5 |
| IntentNet [5] | 94.4 | 89.4 | 75.4 | 43.5 | 3.9 |
| Pixor [32] | 93.4 | 89.4 | 78.8 | 52.2 | 7.6 |
| Ours-NMP | 94.2 | 90.8 | 81.1 | 53.7 | 7.1 |

Table 4: Detection mAP at different IoU thresholds (vehicles with at least 1 LiDAR point).

#### Detection:

We compare against several state-of-the-art real-time detectors, validating that our holistic model understands the environment. Our baselines include a MobileNet adapted from [13], FaF [18], IntentNet [5] and Pixor [32], which are specifically designed for LiDAR-based 3D object detection. The metric is mAP at different IoU thresholds, and vehicles without LiDAR points are not considered. As shown in Tab. 4, our model achieves the best results at the 0.7 IoU threshold, which is the metric of choice for self-driving. Qualitative results can also be found in Fig. 3.

#### Motion Forecasting:

Tab. 2 shows quantitative motion forecasting results, including the L1 and L2 distances to the ground-truth locations. We also provide the L2 distance from our predictions to the ground-truth position along and perpendicular to the ground-truth trajectory. These help explain whether the error is due to incorrect velocity or direction estimation. We use the baselines of [5, 18], which are designed for motion forecasting from raw LiDAR data. Our model performs better on all metrics and at all timesteps. Note that IntentNet uses high-level intentions as additional information for training. Qualitative results are shown in Fig. 3.

#### Cost Map Visualization:

In Fig. 3, we visualize a few different driving scenarios. Each figure gives a top-down view of the scene, showing the map, LiDAR point clouds, detection, motion forecasting and planning results, including the learned cost map. Each figure represents one example, where we overlay the cost maps from different timesteps. We use different colors to represent the lower-cost regions for different timesteps (indicated by the color legend). As can be seen, our model learns to produce a time-dependent cost map. In particular, the first column demonstrates multi-modality, the second column shows lane following in heavy traffic, and the last column shows collision avoidance.

### 4.3 Ablation Study

We conduct ablation studies and report the results in Table 3.
Our best model is Model 5. Compared to Model 1, which is optimized only for detection and motion forecasting, it achieves similar performance on detection and motion forecasting. Model 2 is trained with the planning loss only, without the supervision of object bounding boxes, and performs worse. Model 3 uses a different input length; the longer input sequence gives better results. Model 4 is trained without the traffic-rule penalty $\gamma$ in Eq. 8. It performs worse on planning, as it has no prior knowledge to avoid collisions.

## 5 Conclusion

We have proposed a neural motion planner that learns to drive safely while following traffic rules. We have designed a holistic model that takes LiDAR data and an HD map and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost map defining the goodness of each position that the self-driving car can take within the planning horizon. Our planner then samples a set of physically possible trajectories and chooses the one with the minimum learned cost. We have demonstrated the effectiveness of our approach in very complex real-world scenarios in several cities of North America and shown how we can learn to drive accurately.

## References

* [1] Zlatan Ajanovic, Bakir Lacevic, Barys Shyrokau, Michael Stolz, and Martin Horn. Search-based optimal motion planning for automated driving. arXiv, 2018.
* [2] Tirthankar Bandyopadhyay, Kok Sung Won, Emilio Frazzoli, David Hsu, Wee Sun Lee, and Daniela Rus. Intention-aware motion planning. In Algorithmic Foundations of Robotics X, 2013.
* [3] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv, 2016.
* [4] Martin Buehler, Karl Iagnemma, and Sanjiv Singh. The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. 2009.
* [5] Sergio Casas, Wenjie Luo, and Raquel Urtasun. IntentNet: Learning to predict intention from raw sensor data. In Proceedings of The 2nd Conference on Robot Learning, 2018.
* [6] Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. DeepDriving: Learning affordance for direct perception in autonomous driving. In ICCV, 2015.
* [7] J. Chen, W. Zhan, and M. Tomizuka. Constrained iterative LQR for on-road autonomous driving motion planning. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017.
* [8] Felipe Codevilla, Matthias Müller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning. In ICRA, 2018.
* [9] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. arXiv, 2017.
* [10] Haoyang Fan, Fan Zhu, Changchun Liu, Liangliang Zhang, Li Zhuang, Dong Li, Weicheng Zhu, Jiangtao Hu, Hongye Li, and Qi Kong. Baidu Apollo EM motion planner. arXiv, 2018.
* [11] Thierry Fraichard and Christian Laugier. Path-velocity decomposition revisited and applied to dynamic trajectory planning. In ICRA, 1993.
* [12] Jason Hardy and Mark Campbell. Contingency planning over probabilistic obstacle predictions for autonomous road vehicles. IEEE Transactions on Robotics, 2013.
* [13] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv, 2017.
# An Active Galactic Nucleus Recognition Model based on Deep Neural Network

Bo Han Chen,1 Tomotsugu Goto1,2, Seong Jin Kim1,2, Ting Wen Wang2 <EMAIL_ADDRESS>Daryl Joe D. Santos2, Simon C.-C. Ho2, Tetsuya Hashimoto1,3, Artem Poliszczuk4 Agnieszka Pollo4,5, Sascha Trippe6, Takamitsu Miyaji7,8 (on sabbatical leave from IA-UNAM-E at AIP), Yoshiki Toba9,10,11 Matthew Malkan12, Stephen Serjeant13, Chris Pearson14,15,16, Ho Seong Hwang17 Eunbin Kim17, Hyunjin Shim18, Ting-Yi Lu2, Tiger Y.-Y. Hsiao2, Ting-Chi Huang19,20 Martín Herrera-Endoqui7, Blanca Bravo-Navarro7,21 and Hideo Matsuhara19,20

1Department of Physics, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu City 30013, Taiwan
2Institute of Astronomy, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu City 30013, Taiwan
3Centre for Informatics and Computation in Astronomy (CICA), National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan
4National Centre for Nuclear Research, ul. Pasteura 7, 02-093 Warsaw, Poland
5Astronomical Observatory of the Jagiellonian University, ul. Orla 171, 30-244 Krakow, Poland
6Department of Physics and Astronomy, Seoul National University, 1, Gwanak Road, Seoul, 08826, Republic of Korea
7Instituto de Astronomía sede Ensenada, Universidad Nacional Autónoma de México (IA-UNAM-E), Km 107, Carret. Tij.-Ens., 22860, Ensenada, BC, Mexico
8Leibniz-Institut für Astrophysik (AIP), An der Sternwarte 16, 14482, Potsdam, Germany
9Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan
10Academia Sinica Institute of Astronomy and Astrophysics, 11F of Astronomy-Mathematics Building, AS/NTU, No.1, Section 4, Roosevelt Road, Taipei 10617, Taiwan
11Research Center for Space and Cosmic Evolution, Ehime University, 2-5 Bunkyo-cho, Matsuyama, Ehime 790-8577, Japan
12Department of Physics and Astronomy, UCLA, 475 Portola Plaza, Los Angeles, CA 90095-1547, USA
13School of Physical Sciences, The Open University, Milton Keynes, MK7 6AA, UK
14RAL Space, STFC Rutherford Appleton Laboratory, Didcot, Oxon, OX11 0QX, UK
15The Open University, Milton Keynes, MK7 6AA, UK
16University of Oxford, Keble Rd, Oxford, OX1 3RH, UK
17Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea
18Department of Earth Science Education, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
19Department of Space and Astronautical Science, Graduate University for Advanced Studies, SOKENDAI, Shonankokusaimura, Hayama, Miura District, Kanagawa 240-0193, Japan
20Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan
21Ingeniero Aeroespacial, Universidad Autónoma de Baja California, Blvd. Universitario 1000 Valle de Las Palmas, Tijuana, B.C. 22260, Mexico

(Accepted 2020 December 10. Received 2020 December 9; in original form 2020 September 14)

###### Abstract

To understand the cosmic accretion history of supermassive black holes, separating the radiation from active galactic nuclei (AGNs) and star-forming galaxies (SFGs) is critical. However, the problem of photometrically recognising AGNs remains unsolved. In this work, we present a novel AGN recognition method based on Deep Neural Network (Neural Net; NN).
The main goals of this work are (i) to test if the AGN recognition problem in the North Ecliptic Pole Wide (NEPW) field can be solved by a NN; (ii) to show that the NN exhibits improved performance compared with the traditional, standard spectral energy distribution (SED) fitting method on our testing samples; and (iii) to publicly release a reliable AGN/SFG catalogue to the astronomical community using the best available NEPW data, and to propose a better method that helps future researchers plan an advanced NEPW database. Finally, according to our experimental results, the NN recognition accuracy is around 80.29%-85.15%, with AGN completeness around 85.42%-88.53% and SFG completeness around 81.17%-85.09%.

###### keywords: galaxies: active – surveys – methods: data analysis – ultraviolet: galaxies – infrared: galaxies – submillimetre: galaxies

## 1 Introduction

An active galactic nucleus (AGN) is a compact region at the centre of a galaxy which is highly luminous due to processes not caused by star-forming activity. It is widely believed that AGNs are powered by accretion onto supermassive black holes (SMBHs) located at the centres of galaxies. Furthermore, it is found that the bulge masses of galaxies co-evolve with the masses of their black holes (e.g. Magorrian et al. 1998). Thus, studying AGNs can help us understand galaxy evolution. In order to reveal the cosmic accretion history of SMBHs, it is crucial to find AGNs in the universe. However, it has been notoriously difficult to identify AGNs from normal SFGs photometrically. The difficulty comes from two aspects. First, UV and X-ray observations usually suffer from the extinction by dust and the absorption by gas surrounding AGNs (e.g. Webster et al. 1995; Alexander et al. 2001; Richards et al. 2003). Second, though extinction-free observations in mid-infrared (MIR) bands are a promising alternative, the MIR includes both polycyclic aromatic hydrocarbon (PAH) emission from SFGs and power-law emission from AGNs. Thus, a definitive classification based on MIR data can only be performed with spectroscopic data, not photometric data, while the former are usually not available. Therefore, finding a way to separate AGNs from SFGs photometrically is important to advance the field. There are several photometric and spectroscopic methods proposed to select AGNs. One photometric method uses MIR colours from the Spitzer-WISE Survey (Lacy et al. 2004; Stern et al. 2005; Richards et al. 2006) or optical colours from the Baryon Oscillation Spectroscopic Survey (BOSS) (Ross et al. 2012). Another is the variability selection based on _ugriz_ optical bands in the Sloan Digital Sky Survey (SDSS) region (Palanque-Delabrouille et al. 2011). Another is via spectral energy distribution (SED) fitting, which covers the mid-IR wavelength gap and includes up to 36 band filters using the AKARI space telescope (Huang et al. 2017; Wang et al. 2020). In addition, a different study used a fuzzy support vector machine (FSVM), a machine learning-based method, and it provided a high-quality result on the North Ecliptic Pole Deep (NEPD) field using 8 filters, including 3 NIR and 5 MIR bands of AKARI (Poliszczuk et al. 2019). In terms of spectroscopic AGN selection methods, some selections of local AGNs are done by using the BPT diagnostic (Baldwin et al.
1981; Veilleux & Osterbrock 1987). For selections of high-redshift AGNs, Yan et al. (2011) selected AGNs by combining the [OIII]/H$\beta$ ratio with rest-frame $U-B$ colour. Juneau et al. (2011) and Juneau et al. (2013) developed the mass-excitation diagnostic to select AGNs with redshift > 0.3. Marocco et al. (2011) selected AGNs from the SDSS by using spectral classification. Finally, Zhang & Hao (2018) proposed a kinematics-excitation (KEx) diagram to select AGNs. Zhang et al. (2019) selected AGNs at intermediate redshift (z = 0.3-0.8) by using supervised machine learning classification algorithms.

Figure 1: The NN takes several photometric magnitudes ($m_{1}$, $m_{2}$, …) and errors ($e_{1}$, $e_{2}$, …) of a galaxy as input, and accurately states whether the inputted galaxy is an AGN or a SFG.

In this paper, we introduce a state-of-the-art technique, the Deep Neural Network (Neural Net; NN), to build a robust model that can recognise AGNs from star-forming galaxies (SFGs). A NN is a class of algorithms inspired by the biological neural networks that constitute animal brains. A NN imitates biological neural connections by performing linear matrix operations, and biological neurons by applying specific non-linear functions. We describe the details of our NN in Section 2.3. Our goal is to construct a NN that can take several photometric magnitudes and errors of a galaxy as input, and accurately state if the galaxy is an AGN or a SFG (Fig. 1). It is widely known that NNs are good at solving specific problems, such as image classification (Krizhevsky et al. 2017), young stellar object searches (Chiu et al. 2020), or even redshift estimation (Collister & Lahav 2004; De Wei & Yang 2019). A sufficiently large training set which includes the input data and the corresponding true answers (ground truth) is necessary to train the NN algorithm. In our work, the input data consist of at most 44 band magnitudes and errors, which include observations from Hyper Suprime-Cam (HSC), $AKARI$, Maidanak, the Canada-France-Hawaii Telescope (CFHT), Kitt Peak National Observatory (KPNO), the Wide-field Infrared Survey Explorer ($WISE$), $Spitzer$, and $Herschel$ (Kim et al. 2020). The ground truth is taken from X-ray (Krumpe et al. 2014) and spectroscopic (Shim et al. 2013) classifications. We describe the details of the data in Section 2.1. There are 1,870 galaxy classification ground truths in total; about 10% of the galaxies are assigned as validation samples, which means they do not participate in training and are used only to validate the accuracy of the model. In summary, the main points of this work are as follows.

* • A NN can be applied to solve the AGN recognition problem in the NEPW field.
* • We verify that the proposed NN method is superior to the popular SED fitting methodology on the testing samples from the NEPW field.
* • We publicly release a more reliable AGN/SFG catalogue using the best available NEPW data.

It is known through the universal approximation theorem that a NN can approximate any given multivariate polynomial and any generic non-linearity (Cybenko 1989; Hornik 1991; Lu et al. 2017; see also Lin et al. 2017b); therefore, a NN is expected to perform well in photometric classification problems in general. In addition, the performance of a NN is steadily reinforced as the amount of training data increases (Ng 2017). Hence, with the expected growth of the training sample and the upcoming observations of the NEPW field in the near future (e.g.
eROSITA, Subaru/PFS, etc.), we can look forward to steady advancement of this project based on our method. Our aim in this paper is not to compare other machine learning models against the NN and show that the NN is the most efficient one at the current stage, but rather to test whether a NN can be used in selecting AGNs. Once we verify that a NN can also be used for our NEPW data and performs better than the traditional SED fitting method, it could help the community invest more resources in enlarging the training set, consequently leading to steady development of the AGN recognition project. This work is organised as follows. We describe our sample selection and NN model in Section 2. Our AGN recognition results are described in Section 3. We present the discussion in Section 4. Our conclusions are in Section 5. Throughout this paper, we use the AB magnitude system unless otherwise mentioned.

## 2 Data And Model Structure

### 2.1 Sample selection

All galaxy samples involved in this work are based on a multi-wavelength catalogue in the NEPW field (Kim et al. 2020). The catalogue consists of various photometric data from the optical CFHT/$u$-band to the Herschel/SPIRE bands, obtained to support the AKARI NEPW survey ($5.4$ deg2) data, centred at ($\rm{RA}=18h00m00s,\rm{Dec}.=+66^{\circ}33^{\prime}38^{\prime\prime}$; Matsuhara et al. 2006; Lee et al. 2009; Kim et al. 2012). The procedure for data preprocessing is shown in Fig. 2. The catalogue contains 91,861 sources in total, and 2,026 of them have spectroscopic data. The spectroscopic data are provided by Miyaji (2019), Oi et al. (2017) and Shim et al. (2013). In our study, we excluded objects which have neither spectroscopic nor photometric redshift measurements. The photometric redshifts of our samples without spectroscopic redshifts are estimated using $LePhare$ (Ho et al. 2020), a set of FORTRAN commands to compute photometric redshifts and to perform SED fitting. Among the sources with spectroscopic data in the multi-wavelength catalogue (Kim et al. 2020), 1615 SFGs and 255 AGNs are already classified. The identification comes from two sources. The first one is the analysis of spectroscopic data, obtained by MMT/Hectospec and WIYN/Hydra. The observed spectra were classified via visual inspection and/or identification of emission-line diagnostics (Shim et al. 2013). The second source is the analysis of X-ray data. By cross-matching X-ray sources from the $Chandra$ North Ecliptic Pole Deep (NEPD) survey with the MIR-selected AGN candidates from the AKARI NEPW field survey, objects are confirmed as AGNs if the X-ray sources have X-ray luminosity $L_{x}>10^{41.5}\,\rm{erg\ s^{-1}}$ in a standard X-ray band (e.g. 2-10 keV or 0.5-2 keV) (Krumpe et al. 2014). 84% of the AGN samples are identified spectroscopically, and X-ray data identify 30% of them. Roughly 14% of the AGNs are consistently identified by both methods. A total of $1615+255=1870$ objects provides us with a foothold to train our model by supervised learning. We denote these identified objects as "Labelled Data"; on the other hand, the unidentified objects are denoted as "Unlabelled Data". We use the $LePhare$ classification to remove stars in the "Unlabelled Data". SEDs of stellar templates and galaxy templates are used here. When SED fitting is performed (Ho et al. 2020), a $\chi^{2}$ value is evaluated for both the galaxy templates (Ilbert et al. 2008) and stellar templates (Bohlin et al. 1995, Pickles 1998, Chabrier et al. 2000) for each source.
Then, the two $\chi^{2}$ values are compared to separate stars and galaxies. If $\chi^{2}_{gal}>\chi^{2}_{star}$, where $\chi^{2}_{gal}$ and $\chi^{2}_{star}$ are the minimum $\chi^{2}$ values obtained with the galaxy and stellar templates, respectively, the object is flagged as a star. Here 23795 stars are removed, and the remaining 65548 galaxy objects are classified as either AGN or SFG in Sec. 4.1.

For the input data of the NN, whether for training, testing, or mere inference, we use all available photometric bands in the multi-wavelength catalogue. We provide a summary of the photometric bands used in this study in Fig. 3. The observational details are described in the following subsections. In addition, a more detailed description can be found in Kim et al. (2020).

Figure 2: The flow chart of data preprocessing. The "Labelled Data" are randomly and equally divided into 10 groups; whenever the training is performed, one of the groups (i.e., 10% of the data) is excluded from the training to serve as validation data. The inference set contains galaxy data that will be classified with our trained NN.

#### 2.1.1 The 44 Band-Merged data

In our 44 band-merged catalogue, the data from UV to optical are provided by the $Subaru$ Hyper Suprime-Cam (HSC), Maidanak's Seoul National University 4K$\times$4K Camera (SNUCAM), the Canada-France-Hawaii Telescope (CFHT) MegaCam and MegaPrime, and the Galaxy Evolution Explorer (GALEX). Subaru is a telescope on the summit of Mauna Kea in Hawaii, and its HSC provides us with at most five-band photometries and their errors, including the _g, r, i, z_ and _y_ bands (Oi et al. 2020). The photometries have detection limits of 28.6, 27.3, 26.7, 26.0 and 25.6, respectively. SNUCAM is a charge-coupled device (CCD) camera located at the Maidanak observatory in Uzbekistan, providing _B, R, I_-band magnitudes and errors in our input (Jeon et al. 2010). The three detection limits are 23.4, 23.1 and 22.3, respectively. CFHT is a telescope also located atop the summit of Mauna Kea. MegaCam and MegaPrime are optical imaging facilities at CFHT. We use _u, g, r, i, z_-band data from MegaCam (Hwang et al. 2007; Oi et al. 2014) and u-band data from MegaPrime (Huang et al. 2020). The detection limits from MegaCam are 26.0, 26.1, 25.6, 24.8 and 24.0, respectively. The u-band detection limit from MegaPrime is 25.27. GALEX is a UV space telescope providing the shortest-wavelength data in our NN input. It provides near-UV and far-UV band magnitudes and errors, corresponding to wavelengths of 0.2310 and 0.1528 µm, respectively (Martin et al. 2005). The near-UV detection limit is 22.9, and the far-UV one is 22.2. For Near-Infrared (NIR) to Mid-Infrared (MIR) data, we use the data obtained by $Spitzer$, the Wide-field Infrared Survey Explorer ($WISE$), the $AKARI$ Infrared Camera (IRC), the Florida Multi-object Imaging Near-IR Grism Observational Spectrometer (FLAMINGOS) and CFHT WIRCam. $Spitzer$ is an IR space telescope. It provides us with IRAC 1, IRAC 2, IRAC 3, IRAC 4 and MIPS 24-band magnitudes and errors, which correspond to 3.6, 4.5, 5.8, 8.0 and 24 µm, respectively (Nayyeri et al. 2018). The detection limits of the five bands are 21.8, 22.4, 16.6, 15.4 and 20.6, respectively. $WISE$ is also an IR space telescope. Its observations include W1, W2, W3 and W4-band magnitudes and errors, which correspond to wavelengths of 3.4, 4.6, 12 and 22 µm, respectively (Jarrett et al. 2011). The detection limits from $WISE$ are 18.1, 17.2, 18.4 and 16.1, respectively.
Both $WISE$ and $Spitzer$ have a filter gap between 12 µm and 22 µm; in contrast, $AKARI$ provides data in this range. $AKARI$ is another IR space telescope, with continuous wavelength coverage from NIR to MIR, and thus provides us with important information for recognising AGNs. The IR camera of $AKARI$ includes _N2, N3, N4, S7, S9W, S11, L15, L18W,_ and _L24_-band magnitudes and errors, which correspond to 2.4, 3.2, 4.1, 7, 9, 11, 15, 18 and 24 µm, respectively (Kim et al. 2012). The detections carried out by the $AKARI$ camera have corresponding limits of 20.9, 21.1, 21.1, 19.5, 19.3, 19.0, 18.6, 18.7 and 17.8. The observations from FLAMINGOS, a wide-field IR imager and multi-slit spectrometer at the Kitt Peak National Observatory (KPNO), provide us with _J, H_-band magnitudes and errors (Jeon et al. 2014). The two detection limits are 21.6 and 21.3, respectively. CFHT also provides NIR data. The data from CFHT WIRCam, a NIR mosaic imager, including _Y, J, Ks_-band magnitudes and errors, are used in this work (Oi et al. 2014). The three observations from the imager have detection limits of 23.4, 23.0 and 22.7, respectively. In our data collection, the Far-Infrared (FIR) data are provided solely by $Herschel$, a FIR and sub-millimetre space telescope. Its two instruments, the Spectral and Photometric Imaging Receiver (SPIRE) and the Photodetector Array Camera and Spectrometer (PACS), are FIR imaging photometers. SPIRE has three bands, centred on 250, 350 and 500 µm, with detection limits of 14, 14.2 and 13.8, respectively. For PACS, two bands centred on 100 and 160 µm are included in this research, with detection limits of 14.7 and 14.1. In summary, $Herschel$ provides us with at most five-band photometries and their errors in the FIR range (Pearson et al. 2017; Pearson et al. 2018).

Figure 3: The schematic diagram of the input band data. Each band has a magnitude and an error. The whole diagram forms a $(44\times 2)$ array.

#### 2.1.2 The statistical information regarding the labelled data

The labelled data comprise 1870 objects, and these objects provide the foothold for NN training and validation. In order to further understand the basic composition of our sample, we plot the redshift and colour distributions of the labelled objects. The redshift distribution is obtained using the spectroscopic data mentioned in Sec. 2.1 and is shown in Fig. 4. The colour distributions are evaluated using the band-merged catalogue mentioned in Sec. 2.1.1. We separately plot u-g, g-i, N2-N4, S7-S9, S9-L18 and $250\mu m-500\mu m$, covering the UV, optical, NIR, MIR and FIR. The six colour distributions are shown in Fig. 5.

Figure 4: The spectroscopic redshift distribution of the labelled data.

Figure 5: The colour distributions of the labelled data. The upper left panel shows the distribution of u-g, which represents the UV colour. The upper middle panel shows the optical colour distribution using g-i. The upper right panel is the NIR colour evaluated by N2-N4. The lower left and lower middle panels are both colour distributions of the MIR, composed of S7-S9 and S9-L18. The lower right one is a FIR colour plot represented by $250\mu m-500\mu m$.

### 2.2 Data preprocessing

In order to measure the performance of the NN model correctly, we validate the model performance by K-fold cross validation (Bishop 2006) with K = 10.
The labelled data are equally divided into 10 groups; whenever the training is performed, one of the groups (that is, 10% of the data) is excluded from the training to serve as validation data. Sequential exclusion and training are repeated until every fold has served once as the validation data. We then take the average performance over all 10 trainings as our K-fold cross validation result. All sources have at most 44 available band observations, and each observation has a pair of magnitude and magnitude error. For each unavailable band observation, we fill in 0 instead. Moreover, we confirm that filling the missing data with the median value of the band, or the median value of the neighbouring filters, gives similar results. We then reshape the magnitudes and magnitude errors into a $(44\times 2)$ array (Fig. 3). To make it more convenient to refer to other machine learning papers, we trivially denote the array shape as $(44\times 2\times 1)$ in the following sections.

### 2.3 Model Architecture

We summarise the architecture of our NN model in Fig. 6. The NN has 5 learned layers, including 3 convolutional layers and 2 fully-connected layers. In the following subsections, we describe the overall architecture of the model and the specific techniques we used during training.

Figure 6: An illustration of the architecture of our NN. The network's input is $(44\times 2\times 1)$, and the output feature maps from the three convolutional layers have shapes $(44\times 2\times 16)$, $(44\times 2\times 32)$ and $(44\times 2\times 32)$. The last one is then flattened to a vector with 2816 entries, processed by two fully-connected layers with 64 and 16 neurons, and finally output as a single scalar.

#### 2.3.1 Convolutional layer

As described in Section 2.2, our input feature map is a $(44\times 2\times 1)$ array. Three convolutional layers are placed sequentially to capture the features of the photometry, and between the layers a Rectified Linear Unit (ReLU) function is used to perform a non-linear transform (Krizhevsky et al. 2017). All the kernels of the convolutional layers have size $(1\times 2)$. In each layer, respectively, 16, 32 and 32 kernels are used to capture the band features. In addition, padding is used to maintain the size of the feature maps. Thus, the output feature maps from the three layers have shapes $(44\times 2\times 16)$, $(44\times 2\times 32)$ and $(44\times 2\times 32)$, respectively. In addition, we apply batch normalisation (Ioffe & Szegedy 2015), a method to re-centre and re-scale the data in each layer, to enhance the training, and L2 regularisation (Cortes et al. 2012), a method which adds a penalty on the NN weights to the loss function, to avoid overfitting.

#### 2.3.2 Fully-connected layer

The final output feature map of the convolutional layers is a $(44\times 2\times 32)$ array. It is then flattened to a vector with 2816 entries and fed into fully-connected layers. Two fully-connected layers are placed, with 64 and 16 neurons, respectively. A ReLU function is also used between the layers. In addition, we apply L2 regularisation and dropout (Srivastava et al. 2014), a method which disables a portion of the units in the NN during training, to avoid overfitting. The output of the last layer is immediately summed and mapped by a sigmoid function. This operation ensures that the NN outputs a single scalar ranging from 0 to 1.

#### 2.3.3 Focal Loss

Usually, cross-entropy is applied as the loss function of a NN.
The algorithm optimises the trainable parameters based on the first-order derivative of this function. We denote by $y$ the ground truth of the sample, where 0 represents a SFG and 1 represents an AGN, and by $p$ the single scalar output of the NN, ranging from 0 to 1; then the cross-entropy loss function is written as:

$Loss_{\scriptscriptstyle CE}=-y\log{p}-(1-y)\log{(1-p)}$ (1)

Note that when the ground truth and the NN output are highly consistent (e.g. $(y,p)=(0,0.01)$ or $(y,p)=(1,0.99)$), Eq. (1) is very close to 0. On the other hand, if they are not consistent (e.g. $(y,p)=(0,0.95)$ or $(y,p)=(1,0.07)$), Eq. (1) is far from 0, making the loss large. The purpose of NN training is to decrease the value of this loss function so that the NN output is consistent with the ground truth. However, Eq. (1) performs poorly in our case. The reason is that in our training sample there are only roughly 10% AGNs. Such a fact causes an unavoidable bias in AGN recognition: the large population of SFGs in the training set leads the NN to be more likely to classify an AGN as a SFG, while our main purpose is to identify AGNs among SFGs. If we naïvely apply cross-entropy in training, the AGN completeness eventually falls below 50%. In order to avoid this problem, we instead use the focal loss (Lin et al. 2017a), a modified cross-entropy loss function that up-weights the hard training samples, written as

$Loss_{\scriptscriptstyle FL}=-y\alpha{(1-p)}^{\gamma}\log{p}-(1-y)(1-\alpha){p}^{\gamma}\log{(1-p)},$ (2)

where $\alpha\in[0,1]$ and $\gamma>0$. By choosing a larger $\alpha$, the weighting of missing an AGN is enlarged. Moreover, the base of $\gamma$ ($1-p$ in the first term and $p$ in the second term) is the difference between the NN scalar output and the true answer; thus, choosing a larger $\gamma$ gives the worse-performing cases an exponentially larger weighting.

#### 2.3.4 Training the Neural Network

The procedure of NN training is illustrated in Algorithm 1, and we make use of the deep learning framework Keras (https://keras.io/) to implement it. We use Adam optimisation (Kingma & Ba 2019), an adaptive learning rate optimisation algorithm, to improve the NN by cycling "Input training data – Evaluate loss – Adam optimisation". We denote one such cycle as an "epoch". The training comes to an end when a specific condition is satisfied. This termination condition can generally be written as

$\left\{\begin{array}{l}(monitor)_{t}-(monitor)_{t-M}<\delta\\ monitor=N\times(AGN\ completeness)+accuracy,\end{array}\right.$ (3)

where $AGN\ completeness$ and $accuracy$ are defined in Section 3.2, $t$ denotes the epoch, $N$ is the weighting in the termination condition, $M$ is the number of epochs to wait while the monitor is not improving, and $\delta$ is the minimum change of the $monitor$ that qualifies as improvement. Intuitively, Eq. (3) is a trick that traces the improvement of the NN performance and terminates the training automatically before overfitting occurs. It should be emphasised that the AGN completeness and accuracy are based on the validation set. The validation set data do not participate in training, but they help us decide when to stop training. The number of training epochs cannot be predetermined because the AGN completeness drops drastically if the training lasts too long. Thus, we need to set up this termination condition carefully. This trick assures the NN's performance under real-world conditions.
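To make the above description concrete, the following is a minimal Keras sketch of the network of Fig. 6 together with the focal loss of Eq. (2). The layer sizes, the $(1\times 2)$ kernels, and the loss itself follow the text; the regularisation strength, dropout rate, and the ordering of batch normalisation and ReLU are illustrative assumptions, not the paper's exact settings.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def focal_loss(alpha=0.99, gamma=2.0):
    """Focal loss of Eq. (2); alpha up-weights the rare AGN class (y = 1)."""
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        p = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)  # guard against log(0)
        return -tf.reduce_mean(
            y_true * alpha * (1.0 - p) ** gamma * tf.math.log(p)
            + (1.0 - y_true) * (1.0 - alpha) * p ** gamma * tf.math.log(1.0 - p))
    return loss

def build_model(l2=1e-4, dropout_rate=0.3):  # l2 and dropout_rate are assumed values
    """(44 x 2 x 1) input -> conv(16, 32, 32) -> dense(64, 16) -> sigmoid, as in Fig. 6."""
    x = inputs = keras.Input(shape=(44, 2, 1))            # 44 bands x (magnitude, error)
    for n_kernels in (16, 32, 32):                        # three conv layers, (1 x 2) kernels
        x = layers.Conv2D(n_kernels, (1, 2), padding="same",
                          kernel_regularizer=regularizers.l2(l2))(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    x = layers.Flatten()(x)                               # 44 * 2 * 32 = 2816 entries
    for n_units in (64, 16):                              # two fully-connected layers
        x = layers.Dense(n_units, activation="relu",
                         kernel_regularizer=regularizers.l2(l2))(x)
        x = layers.Dropout(dropout_rate)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)    # single scalar in [0, 1]
    return keras.Model(inputs, outputs)

model = build_model()
model.compile(optimizer=keras.optimizers.Adam(),
              loss=focal_loss(alpha=0.99, gamma=2.0))
```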
Algorithm 1 Training algorithm for the AGN recognition model
1: Input the training set and the validation set data. $\triangleright$ Fig. 2, lower left; Fig. 3; Section 2.1
2: Initialise the weights of the deep neural network. $\triangleright$ Fig. 6; Section 2.3
3: for $iteration=1,2,\ldots$ do
4:  Perform AGN/SFG classification on the training set with the deep neural network.
5:  Evaluate the focal loss of the last stage. $\triangleright$ Eq. 2
6:  Perform Adam optimisation on the deep neural network.
7:  Evaluate the $monitor$ value on the validation set and record it. $\triangleright$ Eq. 3, lower
8:  if the termination condition is satisfied then $\triangleright$ Eq. 3, upper
9:   break $\triangleright$ Training complete
10:  end if
11: end for
12: Save the weights of the deep neural network.

## 3 Empirical Result

Figure 7: An example of NN training history, including the evolution of the AGN completeness (upper left panel), SFG completeness (upper right panel), accuracy (lower left panel) and focal loss (lower right panel). The AGN completeness increases from 30% to almost 100% in only 300 training epochs, and slightly decreases after 1500 epochs. The accuracy and SFG completeness gradually rise until 3000 epochs. The focal loss descends in a stable manner before 1500 epochs and increases again because it is sensitive to the decline of the AGN completeness.

The result of the training is highly stochastic. A small difference in the hyperparameters (the parameters configuring the NN models) can lead to a totally different outcome. Even if the hyperparameters are the same, the outcome will not be the same between two runs, because the trainable parameters of the NN are initialised randomly. In our experiments, most of the time we need to repeat the training several times to obtain the best performance of a given set of hyperparameters. In the following sections, we provide some of the well-performing results and their corresponding hyperparameters. The hyperparameters are chosen by hand tuning, which means we start from an arbitrary set of hyperparameters and manually, sequentially adjust them.

### 3.1 The Neural-Network training history

Fig. 7 shows an example of the NN training history, including the evolution of the AGN completeness, accuracy and focal loss. We can see the AGN completeness increases from 50% to almost 100% in only 200 training epochs, however, at the cost of a very low accuracy and SFG completeness. This result is due to the large setting of $\alpha$ in (2), which is $\alpha=0.99$. This setting induces an effect as if each AGN in the training set had a weighting 100 times larger than each SFG; thus the NN is more likely to classify an object as an AGN. Fortunately, as the training progresses, the accuracy and SFG completeness gradually increase. The AGN completeness might slightly drop at this stage, and the focal loss responds sensitively to the decline of the AGN completeness, since the focal loss is dominated by the large setting of $\alpha$. Usually, this regime holds until the training reaches 1500-3000 epochs. At the end of this stage, we reach the optimal point of the training. If the hyperparameters are set properly, at this optimal point the accuracy is at least above 80% with the AGN completeness staying above 85%. If we keep training the NN, moving the training steps far away from the optimal point, the accuracy will still rise, but the AGN completeness on the validation set will drop drastically, making the training meaningless.
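In practice, the termination condition of Eq. (3) can be realised as a custom Keras callback that evaluates the monitor on the validation set after every epoch. The following is a hedged sketch: the callback structure, the 0.5 decision threshold, and the names used are our assumptions, while the default values of N, M and δ follow Model A of Table 1 below.

```python
import numpy as np
from tensorflow import keras

class MonitorEarlyStopping(keras.callbacks.Callback):
    """Stop when monitor_t - monitor_{t-M} < delta (Eq. 3), with
    monitor = N * (AGN completeness) + accuracy on the validation set."""
    def __init__(self, x_val, y_val, N=1.5, M=4, delta=-0.10):
        super().__init__()
        self.x_val = x_val
        self.y_val = np.asarray(y_val).astype(bool)  # True = AGN, False = SFG
        self.N, self.M, self.delta = N, M, delta
        self.monitor_history = []

    def on_epoch_end(self, epoch, logs=None):
        pred = self.model.predict(self.x_val, verbose=0).ravel() >= 0.5
        agn_completeness = (pred & self.y_val).sum() / max(self.y_val.sum(), 1)
        accuracy = (pred == self.y_val).mean()
        self.monitor_history.append(self.N * agn_completeness + accuracy)
        # Terminate once the monitor has dropped by more than |delta| over M epochs.
        if (len(self.monitor_history) > self.M and
                self.monitor_history[-1] - self.monitor_history[-1 - self.M] < self.delta):
            self.model.stop_training = True
```

Such a callback would simply be passed to training, e.g. `model.fit(x_train, y_train, callbacks=[MonitorEarlyStopping(x_val, y_val)])`.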
The termination condition mentioned in Section 2.3.4 thus helps us automatically stop the training near the optimal point.

Table 1: The hyperparameters (defined in Eq. 2 and Eq. 3) and the performance of the NN models under K-fold cross validation.

| | Hyperparameters | K-fold cross validation performance |
|---|---|---|
| Model A | $\alpha=0.99$, $\gamma=2$, $M=4$, $N=1.5$, $\delta=-0.10$ | AGN completeness $=85.42\%$, SFG completeness $=85.09\%$, Accuracy $=85.14\%$, ROC AUC $=88.38\%$ |
| Model B | $\alpha=0.99$, $\gamma=2$, $M=4$, $N=3$, $\delta=-0.15$ | AGN completeness $=85.83\%$, SFG completeness $=83.12\%$, Accuracy $=83.60\%$, ROC AUC $=88.89\%$ |
| Model C | $\alpha=0.99$, $\gamma=2$, $M=6$, $N=5$, $\delta=-0.20$ | AGN completeness $=86.43\%$, SFG completeness $=81.17\%$, Accuracy $=82.10\%$, ROC AUC $=88.56\%$ |

Figure 8: The ROC curves of the NN models referred to in Table 1. The results here also use K-fold cross validation.

Figure 9: The consistency of the inference results from the three models in Table 1. The areas indicate the numbers of objects, but are not drawn to scale. The upper panel shows the overlap of the AGN predictions; the lower panel shows the overlap of the SFG predictions. The SFG predictions have a higher consistency.

Figure 10: The bar graph of Fig. 9. 83.5% of the NEPW field objects receive consistent results from the three models. These consistent results are accumulated and presented on the left side.

### 3.2 The Neural-Network performance on AGN recognition

We save the best-performing NN models after the training and record their validation set performance. The performance is validated using K-fold cross validation (Bishop 2006), with a total of 10 folds (K=10). We use a total of four metrics to present the performance of the AGN recognition models in our work. These metrics are AGN completeness, SFG completeness, accuracy and the area under the curve of the receiver operating characteristic (ROC AUC). AGN completeness is defined as:

$AGN\ completeness=True\ positive\ rate=\frac{TP}{TP+FN},$ (4)

where $TP$ (true positive) denotes the number of AGNs correctly identified by the model, and $FN$ (false negative) denotes the number of AGNs incorrectly excluded by the model. SFG completeness is defined as:

$SFG\ completeness=True\ negative\ rate=\frac{TN}{TN+FP},$ (5)

where $TN$ (true negative) denotes the number of SFGs correctly identified by the model, and $FP$ (false positive) denotes the number of SFGs incorrectly excluded by the model. Accuracy is defined as:

$Accuracy=\frac{TP+TN}{Total},$ (6)

where $Total$ denotes the number of all objects in the validation set. ROC AUC (Bradley 1997) is defined as the area under the ROC curve, which is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The false positive rate is defined as:

$False\ positive\ rate=\frac{FP}{TN+FP}.$ (7)

The NN determines the AGN/SFG candidates by its single scalar output and a specific threshold. With the various TPR and FPR results from thresholds ranging from 0 to 1, we obtain the ROC curve of our NN model. The ROC curve provides a straightforward comparison between different classifiers. The larger the AUC is, the better the model is. Several hyperparameter sets and the K-fold cross validation results of these settings are shown in Table 1, and the corresponding ROC curves are shown in Fig. 8. In Table 1 we see that there is a trade-off between the accuracy, SFG completeness and AGN completeness.
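For concreteness, the four metrics of Eqs. (4)-(7) can be computed from the NN scalar output along the following lines. This is a hedged sketch in which the 0.5 decision threshold and the use of scikit-learn for the AUC are our assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def classification_metrics(y_true, p_out, threshold=0.5):
    """AGN/SFG metrics of Eqs. (4)-(7); y_true uses 1 for AGN and 0 for SFG."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(p_out) >= threshold
    tp = np.sum(y_pred & y_true)      # AGNs correctly identified
    fn = np.sum(~y_pred & y_true)     # AGNs incorrectly excluded
    tn = np.sum(~y_pred & ~y_true)    # SFGs correctly identified
    fp = np.sum(y_pred & ~y_true)     # SFGs incorrectly excluded
    return {
        "AGN completeness": tp / (tp + fn),       # Eq. (4), true positive rate
        "SFG completeness": tn / (tn + fp),       # Eq. (5), true negative rate
        "accuracy": (tp + tn) / y_true.size,      # Eq. (6)
        "ROC AUC": roc_auc_score(y_true, p_out),  # area under the TPR-FPR curve
    }
```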
If the NN achieves an AGN completeness of up to 86.43%, the accuracy and SFG completeness are about 82.10% and 81.17%, respectively; if the NN covers only about 85.42% of the AGNs, the accuracy reaches 85.14% and the SFG completeness comes to 85.09%. The results show that the NN model typically achieves an AGN recognition performance around the 85% level. Furthermore, we show the comparison with the traditional statistical analysis in Section 4.2, indicating that the NN provides a more reliable approach to the AGN recognition problem.

## 4 Discussions

### 4.1 The inference result on the whole NEP field

After the NN is well trained, we use it to classify arbitrary objects in the NEPW field. As shown in Fig. 2, there are 65548 objects in the NEPW field with spectroscopic and/or photometric redshift measurements but no classification result yet available. We apply all three models referred to in Table 1 (only one fold out of the total 10 folds here); the inference results of these NN models are shown in Table 2. Comparing these three models, we obtain estimates of the AGN fraction (the ratio of the number of AGNs to the total number of galaxies) between 25% and 34%. Note that these evaluations of the AGN fraction include the objects in the training data. The AGNs recognised by the model with the smallest AGN fraction (24.89%) are almost all covered by the remaining two models too (only 497 exceptions). The remaining two models give quite similar AGN fractions (34.01%-34.40%), but the identified AGNs have relatively larger differences (3198 and 2057 AGNs were identified only by models A and B, respectively). We also compare the inference results among these three models. We show the results in Fig. 9 and Fig. 10. The results show that 83.49% of the objects receive the same result from all three NN models, 8.78% of the objects are voted as AGNs by one NN model and as SFGs by two NN models, and 7.73% of the objects are voted as AGNs by two NN models and as SFGs by one NN model. The three NN models show high consistency in recognising SFGs, while the consistency is relatively low in recognising AGNs. This difference might result from the fact that the population of SFGs in the training data is larger than that of AGNs, so the SFG information provided to the NN model is comparatively sufficient.

Table 2: The NEP field inference results of the three NN models in Table 1. The estimates of the AGN fraction include the population of the training data.

| | Model A | Model B | Model C | Training data |
|---|---|---|---|---|
| AGN | 16853 | 16999 | 13172 | 255 |
| SFG | 48695 | 48549 | 52376 | 1615 |
| Total | 65548 | 65548 | 65548 | 1870 |
| AGN fraction | 34.01% | 34.40% | 24.89% | |

### 4.2 Comparison with SED fitting result

We compare the NN performance with the SED fitting performance (Wang et al. 2020) by presenting the ROC curves of the two methods. In this comparison, the NN metrics use the K-fold cross validation results from the validation sets with Model B of Table 1, while the SED fitting metrics are validated on the intersection of the labelled data shown in Fig. 2 and the candidates to which SED fitting is applicable. The fitting model is provided by CIGALE. In this SED fitting work, the IR luminosity contribution of the galactic nucleus ($f_{\rm AGN\_IR}$) is derived; a galaxy is identified as an AGN when $f_{\rm AGN\_IR}\geq 0.2$. Thus, a ROC curve of the SED fitting model can be produced by varying the $f_{\rm AGN\_IR}$ threshold. We plot the ROC curve of our result and the result from Wang et al. (2020) in Fig. 11.
The ROC AUCs of the NN model and of SED fitting are $89.91\%$ and $76.23\%$, respectively, indicating that the NN model provides a more accurate selection compared with SED fitting. Furthermore, the SED fitting method requires certain critical IR detections (e.g. AKARI L18W, Herschel SPIRE PSW or PACS in our case) and a well-fitted result. These constraints limit the applicable candidates to only 1671 objects in the whole NEP field; in contrast, the NN models provide 65548 object classifications in total, covering almost the whole NEPW field. Thus, based on our testing results on the NEPW field samples, we state that the NN is a better solution when it comes to AGN recognition. However, Wang et al. (2020) provided a more sophisticated investigation of the physical properties of AGNs. They derive the physical properties (e.g. AGN contribution, star formation rate, etc.) from CIGALE, which we cannot obtain from the NN model unless it is trained to provide them. Thus, additional properties could be obtained if we select AGNs using our NN model and perform the physical property analysis using the SED fitting technique. Another photometric, machine learning-based AGN recognition was performed by Poliszczuk et al. (2019). The algorithm they used is a fuzzy support vector machine (FSVM). However, instead of selecting sources in the NEPW field, they focused on the NEPD field. Compared with our NEPW field sources, the NEPD field covers a narrower area with fainter detections, so a comparison between their work and ours is not straightforward. In spite of the different data, an informal comparison (not shown in this paper) suggests that the FSVM has a similar ROC curve performance to our NN model.

Figure 11: The comparison between the NN model and SED fitting via the ROC curve. The SED fitting result uses CIGALE, provided by Wang et al. (2020). We show that the ROC AUC of the NN model ($89.91\%$) is larger than the SED fitting one ($76.23\%$), indicating that the NN model is a better classifier.

### 4.3 The contribution of different ranges of observations

In order to study the contribution of different ranges of observations in our training, we perform experiments in which the NN is trained under the constraint that a range of data points is removed. In total, 6 experiments are performed in this part. Besides one regular training, we experiment with removing the FIR data (100-500 ${\mu}m$, 6 data points), MIR data (5.8-24 ${\mu}m$, 11 data points), NIR data (0.8-4.6 ${\mu}m$, 18 data points), optical data (0.4-0.65 ${\mu}m$, 6 data points), and UV data (0.15-0.36 ${\mu}m$, 4 data points). We show the training results of each experiment in the form of ROC curves in Fig. 12. This set of experiments shows that removing the FIR and MIR observations leads to slightly worse results, but the performance does not decrease drastically. Thus, based on this result, we can infer that none of the FIR, MIR, NIR, optical or UV observations uniquely provides the key information for AGN recognition.

Figure 12: The ROC curves of the NN models with some portion of the bands removed during training. We remove 6 data points in the FIR, 11 in the MIR, 18 in the NIR, 6 in the optical and 4 in the UV band for each training. The result shows that removing the FIR and MIR observations leads to slightly worse results, but the performance does not decrease drastically.

## 5 Conclusions

A critical issue in the field of astrophysics is that although identifying AGNs among normal SFGs is essential, in the NEPW field most of the objects are merely photometrically surveyed.
Few X-ray or spectroscopic classifications are available in this respect, hence the AGNs in the NEPW field have not been well identified yet. In order to address this issue, we try a novel solution based on a NN. Eventually, our work results in three main conclusions:

* • We verify that a Deep Neural Network is applicable to recognising AGNs using photometric data in the NEPW field, and we give a feasible set of techniques. The recognition accuracy, AGN completeness and SFG completeness are recorded to be around $82.10\%-85.14\%$, $85.42\%-86.43\%$ and $81.17\%-85.09\%$, respectively.
* • We publicly release a high-quality AGN/SFG classification catalogue covering the whole NEPW field based on the Deep Neural Network. In this catalogue, $83.49\%$ of the galaxies have the same results from the three different Deep Neural Network models, which differ in their hyperparameters.
* • We show that the Deep Neural Network provides a more reliable classification with fewer prerequisites compared with the popular SED fitting method, according to our testing samples in the NEPW field. As shown by the ROC AUC values of the Deep Neural Network and the SED fitting method, the scores are $88.38\%-88.89\%$ and $76.23\%$, respectively.

In summary, we provide a high-quality AGN/SFG classification catalogue in the NEPW field for immediate scientific use. In addition, with the upcoming telescopes in the near future (e.g. JWST, Euclid, eROSITA, SPICA, etc.), more and more training samples and photometric bands will become available. We can consequently expect a further enhanced NN AGN recognition.

## Acknowledgements

We are very grateful to the anonymous referee for many insightful comments. This research is based on observations with $AKARI$, a JAXA project with the participation of ESA. This research was conducted under the agreement on scientific cooperation between the Polish Academy of Sciences and the Ministry of Science and Technology (MOST) of Taiwan through grant 109-2927-I-007-505. TG acknowledges the support by the MOST of Taiwan through grant 108-2628-M-007-004-MY3. TH is supported by the Centre for Informatics and Computation in Astronomy (CICA) at National Tsing Hua University (NTHU) through a grant from the Ministry of Education (MOE) of Taiwan. AP and AP are supported by the Polish National Science Centre grant UMO-2018/30/M/ST9/00757 and by Polish Ministry of Science and Higher Education grant DIR/WK/2018/12. TM is supported by UNAM-DGAPA PASPA and PAPIIT IN111319 as well as CONACyT 252531. This work used high-performance computing facilities operated by CICA at NTHU. This equipment was funded by the MOE of Taiwan, MOST of Taiwan, and NTHU.

## Data Availability

The data are available upon request.

## References

* Alexander et al. (2001) Alexander D. M., Brandt W. N., Hornschemeier A. E., Garmire G. P., Schneider D. P., Bauer F. E., Griffiths R. E., 2001, The Astronomical Journal, 122, 2156
* Baldwin et al. (1981) Baldwin J. A., Phillips M. M., Terlevich R., 1981, Publications of the Astronomical Society of the Pacific, 93, 5
* Bishop (2006) Bishop C. M., 2006, Pattern Recognition and Machine Learning. Springer-Verlag New York Inc.
* Bohlin et al. (1995) Bohlin R. C., Colina L., Finley D. S., 1995, The Astronomical Journal, 110, 1316
* Bradley (1997) Bradley A. P., 1997, Pattern Recognition, 30, 1145
* Chabrier et al.
(2000) Chabrier G., Baraffe I., Allard F., Hauschildt P., 2000, The Astrophysical Journal, 542, 464
* Chiu et al. (2020) Chiu Y.-L., Ho C.-T., Wang D.-W., Lai S.-P., 2020, arXiv preprint arXiv:2007.06235
* Collister & Lahav (2004) Collister A. A., Lahav O., 2004, Publications of the Astronomical Society of the Pacific, 116, 345
* Cortes et al. (2012) Cortes C., Mohri M., Rostamizadeh A., 2012, arXiv preprint arXiv:1205.2653
* Cybenko (1989) Cybenko G., 1989, Mathematics of control, signals and systems, 2, 303
* De Wei & Yang (2019) De Wei K. C., Yang A., 2019, EPJ Web of Conferences, 206, 09006
* Ho et al. (2020) Ho S. C.-C., et al., 2020, MNRAS, in press
* Hornik (1991) Hornik K., 1991, Neural networks, 4, 251
* Huang et al. (2017) Huang T.-C., Goto T., Hashimoto T., Oi N., Matsuhara H., 2017, Monthly Notices of the Royal Astronomical Society, 471, 4239
* Huang et al. (2020) Huang T.-C., et al., 2020, Monthly Notices of the Royal Astronomical Society, 498, 609
* Hwang et al. (2007) Hwang N., et al., 2007, The Astrophysical Journal Supplement Series, 172, 583
* Ilbert et al. (2008) Ilbert O., et al., 2008, The Astrophysical Journal, 690, 1236
* Ioffe & Szegedy (2015) Ioffe S., Szegedy C., 2015, arXiv preprint arXiv:1502.03167
* Jarrett et al. (2011) Jarrett T. H., et al., 2011, The Astrophysical Journal, 735, 112
* Jeon et al. (2010) Jeon Y., Im M., Ibrahimov M., Lee H. M., Lee I., Lee M. G., 2010, The Astrophysical Journal Supplement Series, 190, 166
* Jeon et al. (2014) Jeon Y., Im M., Kang E., Lee H. M., Matsuhara H., 2014, The Astrophysical Journal Supplement Series, 214, 20
* Juneau et al. (2011) Juneau S., Dickinson M., Alexander D. M., Salim S., 2011, The Astrophysical Journal, 736, 104
* Juneau et al. (2013) Juneau S., et al., 2013, The Astrophysical Journal, 764, 176
* Kim et al. (2012) Kim S. J., et al., 2012, Astronomy & Astrophysics, 548, A29
* Kim et al. (2020) Kim S. J., et al., 2020, Monthly Notices of the Royal Astronomical Society, 500, 4078
* Kingma & Ba (2019) Kingma D. P., Ba J. A., 2019, arXiv preprint arXiv:1412.6980, 434
* Krizhevsky et al. (2017) Krizhevsky A., Sutskever I., Hinton G. E., 2017, Communications of the ACM, 60, 84
* Krumpe et al. (2014) Krumpe M., et al., 2014, Monthly Notices of the Royal Astronomical Society, 446, 911
* Lacy et al. (2004) Lacy M., et al., 2004, The Astrophysical Journal Supplement Series, 154, 166
* Lee et al. (2009) Lee H. M., et al., 2009, Publications of the Astronomical Society of Japan, 61, 375
* Lin et al. (2017a) Lin T.-Y., Goyal P., Girshick R., He K., Dollar P., 2017a, in 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, doi:10.1109/iccv.2017.324
* Lin et al. (2017b) Lin H. W., Tegmark M., Rolnick D., 2017b, Journal of Statistical Physics, 168, 1223
* Lu et al. (2017) Lu Z., Pu H., Wang F., Hu Z., Wang L., 2017, Advances in neural information processing systems, 30, 6231
* Magorrian et al. (1998) Magorrian J., et al., 1998, The Astronomical Journal, 115, 2285
* Marocco et al. (2011) Marocco J., Hache E., Lamareille F., 2011, Astronomy & Astrophysics, 531, A71
* Martin et al. (2005) Martin D. C., et al., 2005, The Astrophysical Journal, 619, L1
* Matsuhara et al. (2006) Matsuhara H., et al., 2006, Publications of the Astronomical Society of Japan, 58, 673
* Nayyeri et al.
(2018) Nayyeri H., et al., 2018, The Astrophysical Journal Supplement Series, 234, 38
* Ng (2017) Ng A., 2017, Why is Deep Learning taking off?, https://www.coursera.org/lecture/neural-networks-deep-learning/why-is-deep-learning-taking-off-praGm
* Oi et al. (2014) Oi N., et al., 2014, Astronomy & Astrophysics, 566, A60
* Oi et al. (2017) Oi N., Goto T., Malkan M., Pearson C., Matsuhara H., 2017, Publications of the Astronomical Society of Japan, 69
* Oi et al. (2020) Oi N., et al., 2020, Monthly Notices of the Royal Astronomical Society
* Palanque-Delabrouille et al. (2011) Palanque-Delabrouille N., et al., 2011, Astronomy & Astrophysics, 530, A122
* Pearson et al. (2017) Pearson C., et al., 2017, Publications of The Korean Astronomical Society, 32, 219
* Pearson et al. (2018) Pearson C., et al., 2018, Publications of the Astronomical Society of Japan, 71
* Pickles (1998) Pickles A. J., 1998, Publications of the Astronomical Society of the Pacific, 110, 863
* Poliszczuk et al. (2019) Poliszczuk A., et al., 2019, Publications of the Astronomical Society of Japan, 71
* Richards et al. (2003) Richards G. T., et al., 2003, The Astronomical Journal, 126, 1131
* Richards et al. (2006) Richards G. T., et al., 2006, The Astrophysical Journal Supplement Series, 166, 470
* Ross et al. (2012) Ross N. P., et al., 2012, The Astrophysical Journal Supplement Series, 199, 3
* Shim et al. (2013) Shim H., et al., 2013, The Astrophysical Journal Supplement Series, 207, 37
* Srivastava et al. (2014) Srivastava N., Hinton G., Krizhevsky A., Sutskever I., Salakhutdinov R., 2014, Journal of Machine Learning Research, 15, 1929
* Stern et al. (2005) Stern D., et al., 2005, The Astrophysical Journal, 631, 163
* Veilleux & Osterbrock (1987) Veilleux S., Osterbrock D. E., 1987, The Astrophysical Journal Supplement Series, 63, 295
* Wang et al. (2020) Wang T.-W., et al., 2020, Monthly Notices of the Royal Astronomical Society, 499, 4068
* Webster et al. (1995) Webster R. L., Francis P. J., Peterson B. A., Drinkwater M. J., Masci F. J., 1995, Nature, 375, 469
* Yan et al. (2011) Yan R., et al., 2011, The Astrophysical Journal, 728, 38
* Zhang & Hao (2018) Zhang K., Hao L., 2018, The Astrophysical Journal, 856, 171
* Zhang et al. (2019) Zhang K., Schlegel D. J., Andrews B. H., Comparat J., Schäfer C., Mata J. A. V., Kneib J.-P., Yan R., 2019, The Astrophysical Journal, 883, 63
* Miyaji (2019) Miyaji T., 2019, Proceedings of the International Astronomical Union, 15, 172
# Deep Generative Models of Gravitational Waveforms via Conditional Autoencoder

Chung-Hao Liao <EMAIL_ADDRESS>Department of Physics, National Taiwan Normal University, Taipei 11677, Taiwan

Feng-Li Lin <EMAIL_ADDRESS>corresponding author. Department of Physics, National Taiwan Normal University, Taipei 11677, Taiwan Center of Astronomy and Gravitation, National Taiwan Normal University, Taipei 11677, Taiwan

###### Abstract

We construct a few deep generative models of gravitational waveforms based on the semi-supervised scheme of conditional autoencoders and their variational extensions. Once the training is done, we find that our best waveform model can generate the inspiral-merger waveforms of binary black hole coalescence with more than $97\%$ average overlap matched filtering accuracy for mass ratios between $1$ and $10$. Besides, the generation of a single waveform takes about one millisecond, which is about $10$ to $100$ times faster than the EOBNR algorithm running on the same computing facility. Moreover, these models can also help to explore the space of waveforms. That is, with mainly a low-mass-ratio training set, the resulting trained model is capable of generating a large amount of accurate high-mass-ratio waveforms. This result implies that our generative model can speed up the waveform generation for the low-latency search of gravitational wave events. With improved accuracy in future work, the generative waveform model may also help to speed up parameter estimation and can assist numerical relativity in generating waveforms of higher mass ratio by progressive self-training.

## I Introduction

LIGO/Virgo has detected about a hundred compact binary coalescence (CBC) events up to its O3 observations Abbott:2016blz ; LIGOScientific:2018mvr ; Abbott:2020niy . This is a remarkable achievement of modern science. Due to the limitation of LIGO/Virgo's sensitivity, these events are detected by the method of matched filtering schutz_1991 ; Owen_1996 ; Owen_1999 , which calculates the overlap between the whitened data and the theoretical gravitational waveform templates. Similarly, the source properties of these events are also extracted based on matched filtering to perform the Markov-Chain-Monte-Carlo (MCMC) Bayesian parameter estimation (PE) veitch2015parameter ; biwer2019pycbc . In both the detection and the PE of gravitational wave events, a huge number of theoretical waveform templates is required for matched filtering; therefore, the efficiency of evaluating waveform templates is crucial for detection and for accelerating the PE procedures. However, due to the nonlinear nature of Einstein gravity and the unavoidable strong-gravity regime of the mergers of two compact objects, it is notoriously difficult to calculate the CBC dynamics and the associated gravitational waveforms. For example, it is known Hinder:2010vn ; Pfeiffer_2012 to require about a hundred thousand CPU hours to obtain a state-of-the-art CBC waveform by solving numerical relativity. The required computing time increases by one or two orders of magnitude for higher mass-ratio CBC events, and is beyond what current computing facilities can afford. Thus, it is impractical to adopt such ab initio waveforms directly for performing either detection or PE. To accelerate the generation of theoretical waveforms for practical applications, some analytical waveform models are introduced with a few parameters to be fitted by the results of numerical relativity.
The well-known examples are the IMRPhenomP models Hannam:2013oca ; Khan:2018fmp , the synergy models Buonanno_2007 ; Pan_2011 ; Ajith_2011 that combine the post-Newtonian expansion Einstein:1938yz ; Blanchet:2013haa , the effective-one-body (EOB) formalism PhysRevD.59.084006 ; Buonanno_2000 , black hole perturbation theory Kokkotas_1999 ; Nollert:1999ji and numerical relativity Centrella:2010mx ; Loffler:2011ay , and the reduced-order or surrogate models Blackman_2015 ; P_rrer_2016 ; Williams_2020 , which span the generic waveforms with some orthonormal basis. However, it still takes a few hundredths to a few tenths of a second to evaluate a single waveform with the aforementioned analytical waveform models (this can be estimated by generating the waveforms from the template library in either PyCBC or GstLAL). At this speed of waveform generation, it usually takes weeks or even months to obtain state-of-the-art PE results for a single event based on the MCMC algorithm. One can then expect the overall computing cost or time span for PE to increase rapidly in the later observing runs of LIGO/Virgo/KAGRA such as O4 or O5, for which the number of detected CBC events will grow by an order of magnitude or more. Therefore, the speed-up of waveform generation becomes a pressing issue even in the near future. Besides, the aforementioned analytical waveform models are by nature interpolating models, fitting their parameters to a known set of waveforms. This implies that such a model could become more complicated and cumbersome when the range of the waveforms is extended, for example to higher mass ratio. The increase in complexity reduces the model's efficiency in generating real-time waveforms for detection or PE. Thus, it is crucial to have extrapolating models of waveform generation to resolve this conflict between the complexity and the efficiency of the traditional analytical waveform models. Figure 1: A generative model of gravitational waveforms: once the neural network model is well trained, it can generate gravitational waveforms with more than $95\%$ accuracy when provided just the source labels, such as the masses $(m_{1},m_{2})$ of the binary black holes. The accuracy rate of the waveforms is defined in (8) below. As a preliminary study for the proof-of-concept, in this work we mainly consider the inspiral-merger parts of the full waveforms. Motivated by the above discussion of the limitations of the known models of waveform generation, we turn to deep learning for a resolution. We aim to construct a deep learning neural network that generates high-accuracy CBC gravitational waveforms given the source parameters, such as the masses and spins of the binary compact objects, as schematically depicted in Fig. 1. Even though the training time increases as the training set is enlarged, the time for evaluating a new waveform with the trained machine does not increase much. This resolves the conflict between complexity and efficiency for real-time applications. Moreover, we also want this deep learning neural network to be generative, so that it can produce waveforms outside the source parameter ranges of the training set. For example, we can train the machine with waveforms of only low mass ratio (LMR), and then generate accurate waveforms of higher mass ratio (HMR). This could help to efficiently obtain the HMR waveforms, which are computationally costly in numerical relativity.
However, in this work we will not explore this scenario in full but only a toy version of it, in which the training set contains a small fraction of HMR waveforms to demonstrate the possibility. In view of the above target features, this deep learning machine should be supervised when training with the given source parameters and the associated waveforms. On the other hand, it should also be generative and partly unsupervised, so that it has the potential to turn into an extrapolating model for generating HMR waveforms. For this purpose, in this paper we adopt the conditional (variational) autoencoder (cAE or cVAE) Sohn2015LearningSO ; nguyen2017plug ; tonolini2020variational to construct various deep learning models for generating CBC gravitational waveforms (the cVAE framework has recently been adopted as a generative model of posteriors for the PE of CBC events, see gabbard2019bayesian ; Green:2020hst ; green2020complete ). This scheme belongs to so-called semi-supervised learning, combining features of both supervised and unsupervised learning (for basic discussions of supervised and unsupervised learning, see GoodBengCour16 ; Carleo_2019 ). It is built on a more basic scheme for unsupervised learning, the autoencoder (AE) 10.5555/2987189.2987190 , or its generative extension, the variational autoencoder (VAE) kingma2019introduction ; yu2020tutorial . We will introduce the basics of these neural networks in the next section. As a preliminary study for the proof-of-concept, in this work we mainly consider the inspiral-merger parts of the full waveforms, truncating the ringdown part. We find that our best generative models can produce waveforms with accuracy higher than $97\%$, even for the generation of HMR waveforms. Moreover, they can produce a single waveform within one millisecond, which is about 10 to 100 times faster than producing an EOB waveform on the same computing facility. To visualize the accuracy rate, in Fig. 2 we show some typical examples of the waveforms with different accuracy rates. Figure 2: Some typical examples of waveforms generated by well-trained cVAE models with different accuracy rates. Top-left: a generated inspiral-merger waveform of $99.50\%$ accuracy when compared with the corresponding EOB waveform by overlap match. Top-right: a generated inspiral-merger waveform of $92.37\%$ accuracy. Bottom: a generated full inspiral-merger-ringdown waveform of $99.74\%$ accuracy. The rest of the paper is organized as follows. In the next section, we briefly sketch the basics of the autoencoder and its extensions, including the VAE and the conditional versions. In section III we describe the tomography of our training data set and how we prepare our training waveforms. Besides, the fitting factor or faithfulness (FF), based on the overlap of matched filtering, is introduced to characterize the accuracy of the generative waveform models. In section IV we consider four waveform models based on the cAE scheme, and then summarize their accuracy and run-time in Tables 2 and 4, respectively. By comparing the accuracy, we pick out the best cAE waveform model and present its detailed information. Finally, we conclude this paper in section V. In the Appendix, we present the performance of the cVAE counterparts of the cAE waveform models considered in the main text. ## II Autoencoder and its extensions Our goal is to construct deep learning models of gravitational waveforms as depicted in Fig. 1.
The basic structure of this generative model is the so-called autoencoder (AE) 10.5555/2987189.2987190 or its extension, the variational autoencoder (VAE) kingma2019introduction ; yu2020tutorial . The basic structure of an AE or VAE is shown in Fig. 3; it contains two parts: the encoder and the decoder. The encoder (denoted by $q_{\phi}(z|x)$, with $\phi$ an abbreviation for the biases and weights of the encoder's neural network) compresses the input data $x$ into the latent layer $z$ of smaller dimension than that of $x$, and then the decoder (denoted by $p_{\theta}(\tilde{x}|z)$, with $\theta$ an abbreviation for the biases and weights of the decoder's neural network) decompresses the latent layer back into the final result $\tilde{x}$ of the same dimension as $x$. One then uses some distance measure, such as the mean-squared error (MSE) between $x$ and $\tilde{x}$, as the reconstruction loss. The goal is to minimize the reconstruction loss to optimize the biases and weights of the whole AE neural network. Since there is no label for the input data, this is unsupervised learning. Figure 3: Schematic structure of an AE or VAE. It contains two components. (i) An encoder $q_{\phi}(z|x)$ which transforms an input vector $x$ to a latent vector $z$; this is deterministic for the AE but stochastic for the VAE, i.e., $z=\mu_{\phi}(x)+\sigma_{\phi}(x){\cal N}(0,1)$, where ${\cal N}(0,1)$ is the unit normal distribution. (ii) A decoder $p_{\theta}(\tilde{x}|z)$ which transforms $z$ to an output $\tilde{x}$. The loss function of the AE is just the reconstruction loss, such as the mean-squared error (MSE) between $\tilde{x}$ and $x$. On the other hand, the loss function of the VAE contains two parts: the reconstruction loss and the Kullback-Leibler (KL) loss, as discussed in (1). Since the AE is a deterministic machine, it may lack the power of extrapolation and could fail to be generative. To remedy this drawback, the VAE is introduced by making the latent layer a stochastic one. This is done by generating the means and variances of Gaussian distributions as the output of the encoder, from which one can sample a latent layer as the input to the decoder, as shown in the middle of Fig. 3. The uncertainty of this layer enables the VAE to "think outside the box", thus making it a generative machine. However, besides the reconstruction loss one should also consider the regularization loss, which characterizes how much the stochastic latent layer deviates from ${\cal N}(0,1)$, i.e., the unit Gaussian with zero mean. This is measured by their Kullback-Leibler (KL) divergence. It turns out that the combined loss is an upper bound on the negative log likelihood of the input data distribution $p_{\theta}(x)$, i.e., $-\log p_{\theta}(x)\leq{\bf E}_{z\sim q_{\phi}(z|x)}[-\log p_{\theta}(\tilde{x}|z)]+{\bf D}_{KL}[q_{\phi}(z|x)||{\cal N}(0,1)]\;,$ (1) where the first term on the right-hand side is the reconstruction loss and the second term is the regularization loss. Figure 4: The schematic structure of a cAE or cVAE as a generative waveform model. Left panel: During the training period, it needs two encoders: (a) one for training the input data such as strains/waveforms, and (b) one for training the source labels associated with the input data, such as $(m_{1},m_{2})$.
Right panel: After the training, the encoder (a) is removed so that the network becomes a generative model, namely, it generates waveforms when provided only with the associated source labels. When training the waveform models, the input $x$ from the training data set is a theoretical waveform such as an EOB waveform, and we call it the strain for short. We choose the reconstruction loss to be the MSE between $x$ and $\tilde{x}$. The training process optimizes the biases and weights of the neural network by minimizing the reconstruction loss, such that the generated $\tilde{x}$ is as close to $x$ as possible. After the training, the decoder can be turned into a generative model of strains: given some input latent vector, the decoder will output some strain. However, this machine is not very useful for generating strains with specific source properties, because the latent space may not correspond to the required parameter space of the physical source properties, such as the masses $(m_{1},m_{2})$ of the binary black holes. For convenience we call the source parameters the labels. To make the AE or VAE useful for our purpose, we adopt semi-supervised training by also conditioning on the labels when training the machine. After the training we truncate the encoder associated with the strain input; the remaining network, with the label as input, then becomes a useful generative model of strains: given a label such as $(m_{1},m_{2})$, the machine generates the associated strain. The above scheme is called the conditional AE/VAE, abbreviated as cAE/cVAE Sohn2015LearningSO ; nguyen2017plug ; tonolini2020variational , and its basic structure is depicted in Fig. 4, where an additional encoder for the labels of the input data is introduced. Due to the additional encoder, we now have two latent vectors $z_{1}$ and $z_{2}$, as shown in Fig. 4. We can then introduce a latent loss to measure their difference. For the AE, the latent loss can be the MSE between the latent vectors, but for the VAE it is the KL divergence between the Gaussian distributions generated by the two encoders. On the other hand, the reconstruction loss is the MSE for both the AE and the VAE. ## III Waveform data preparation and overlap accuracy Once we construct the code for the cAE or cVAE, we prepare a set of strains to train the machine. A strain is the complex combination of the two polarization modes $h_{+}(t)$ and $h_{\times}(t)$, i.e., $h(t)=h_{+}(t)+ih_{\times}(t)\;.$ (2) To be specific, we consider the inspiral-merger part of the CBC strains of binary black holes with their masses $(m_{1},m_{2})$ as the only source parameters, i.e., without spin and precession. We further divide the set into two subsets, one called the low-mass-ratio (LMR) set for $q=m_{2}/m_{1}\leq 5$ and the other called the high-mass-ratio (HMR) set for $q>5$. These strains are obtained from EOBNR Buonanno_2007 of the PyCBC library biwer2019pycbc , and each strain is divided into $8192$ time segments. We train the machine mainly with the LMR set, combined with about $20\%$ of the HMR strains. The latter are used as a tutor seed to steer the machine toward a generative model for the other $80\%$ of the HMR strains. The tomography of our training and test sets is shown in Fig. 5, and in Table 1 we give more details of this tomography. Moreover, the fraction of the HMR templates in the total training set is only about $2.46\%$.
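Before proceeding, the combined training objectives described above (reconstruction loss plus latent loss, with the KL term of Eq. (1) generalized to the two-encoder setting) can be made concrete in a minimal sketch. This is our own illustration, not the authors' code: the framework (PyTorch), the absence of relative loss weights, and the diagonal-Gaussian KL form are all assumptions.

```python
import torch
import torch.nn.functional as F

def cae_loss(x, x_tilde, z1, z2):
    """cAE objective: reconstruction MSE plus an MSE latent loss between
    the strain-encoder latent z1 and the label-encoder latent z2."""
    return F.mse_loss(x_tilde, x) + F.mse_loss(z2, z1)

def cvae_loss(x, x_tilde, mu1, logvar1, mu2, logvar2):
    """cVAE objective: reconstruction MSE plus the KL divergence between
    the diagonal Gaussians produced by the two encoders (cf. Eq. (1),
    with the N(0,1) prior replaced by the label-encoder Gaussian)."""
    recon = F.mse_loss(x_tilde, x)
    var1, var2 = logvar1.exp(), logvar2.exp()
    # KL( N(mu1, var1) || N(mu2, var2) ) summed over latent dimensions
    kl = 0.5 * torch.sum(logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)
    return recon + kl
```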
This tiny fraction of HMR templates is chosen on purpose to mimic as closely as possible a true extrapolating model, for which the training set would contain no HMR waveform at all. Figure 5: Tomography of the data set for training, validation and test of a cAE model, with the numbers as listed in Table 1. Top: Overview of the tomography. Bottom: Enlarged view of the portion circled in the top figure, for a clearer visual estimate.

|  | $m_{1}$ | $m_{2}$ | $q$ | $\Delta m$ | train | valid | test
---|---|---|---|---|---|---|---
$q\leq 5$ | [5.0,75.0] | [5.0,75.0] | [1,5] | 0.25 | 24865 | 3552 | 7104
$q>5$ | [5.0,75.0] | [5.0,75.0] | [5,10] | 0.25 | 682 | 36 | 2873

Table 1: The ranges of the source parameters and the sizes of the data sets. Here the mass ratio is denoted by $q\equiv m_{2}/m_{1}$ with $1\leq q\leq 10$. The corresponding percentages of (training, validation, test) are (70%, 10%, 20%) for the LMR (low-mass-ratio) data ($q\leq 5$), and (19%, 1%, 80%) for the HMR (high-mass-ratio) data ($q>5$). Note that the fraction of HMR templates is only about $2.46\%$ of the total training data set, including both training and validation data. Figure 6: A typical generated waveform (blue) from a trained cAE or cVAE, obtained by inputting the inspiral-merger part of a CBC strain (orange) in time-series format. The model cannot capture both the phase and the amplitude at the same time. As a preliminary study to demonstrate that a generative model of gravitational waveforms is in principle possible, we do not consider the full CBC strain but truncate the ringdown part, which is far shorter than the rest of the strain. The truncated waveform is denoted as the inspiral-merger strain. The purpose of this truncation is to further reduce the complexity of the frequency/amplitude part of a strain caused by the sudden change at the merger, and it helps to train the machine well with less effort in tuning the hyper-parameters. Using the time-series form of the inspiral-merger strains to train a cAE or cVAE, the result turns out not to be good for a machine of reasonable size and training time; see Fig. 6 for a typical result. It implies that the model cannot capture the amplitude and the phase correctly at the same time. This suggests that this form of the strain is still too complicated for a cAE or cVAE of reasonable size to work properly. Motivated by this result, we decided to separate the amplitude and frequency parts of a strain and juxtapose them as the input of the cAE or cVAE. To be specific, from the two polarization modes we first obtain the instantaneous phase $\theta(t)=\tan^{-1}\Big{(}{h_{\times}(t)\over h_{+}(t)}\Big{)}\;,$ (3) and the instantaneous frequency and amplitude are then given respectively by $\omega\Big{(}t+{\delta t\over 2}\Big{)}={\theta(t+\delta t)-\theta(t)\over 2\pi\delta t}\;,$ (4) $A(t)=\sqrt{h^{2}_{+}(t)+h^{2}_{\times}(t)}\;.$ (5) A typical example of the above decomposition is given in Fig. 7. Figure 7: Decomposition of a time-series strain $h(t)$ (top) into frequency (bottom-left) and amplitude (bottom-right) by using (4) and (5). The amplitude has been multiplied by a factor of $10^{20}$ to have a scale comparable with the frequency. Even when using this frequency/amplitude-separated form of the strains to train the cAE or cVAE model, the result is still not good, because the magnitudes of the input data have not yet been rescaled to avoid too-small or too-large values.
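As a concrete illustration of the decomposition in Eqs. (3)-(5), a short NumPy sketch (ours, with the sampling step dt as an assumed input) might read:

```python
import numpy as np

def decompose(h_plus, h_cross, dt):
    # Eq. (3): instantaneous phase, unwrapped to remove 2*pi jumps
    theta = np.unwrap(np.arctan2(h_cross, h_plus))
    # Eq. (4): instantaneous frequency from a finite difference of the phase;
    # note there is one fewer sample than for the phase (8191 vs 8192 segments)
    omega = np.diff(theta) / (2.0 * np.pi * dt)
    # Eq. (5): instantaneous amplitude
    amp = np.sqrt(h_plus**2 + h_cross**2)
    return theta, omega, amp
```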
This rescaling problem can be solved, as in the usual deep learning workflow for neural networks, by normalizing the input data franoischollet2017learning . The normalization we adopt is as follows: $\hat{\omega}(t)={\omega(t)-\mu_{\omega}\over\sigma_{\omega}}\;,\qquad\hat{A}(t)={A(t)-\mu_{A}\over\sigma_{A}}$ (6) where the normalization parameters $(\mu_{\omega},\sigma_{\omega})$ are respectively the mean and standard deviation evaluated from the 8192 segments of $\omega(t)$ (due to its definition as a difference between two neighboring segments, there are in fact only $8191$ segments for $\omega(t)$), and similarly for $(\mu_{A},\sigma_{A})$. We call these four parameters the key (needed to reconstruct the associated un-normalized strain). Naively, we could juxtapose these four normalization parameters with the normalized strain vector $(\hat{\omega}(t),\hat{A}(t))$ of $2\times 8192$ segments to form the input of the cAE or cVAE. However, since their dimensions are wildly out of proportion, the juxtaposition could suppress the significance of the key during training, which would induce an unacceptable error when recovering the full strain via (6). Therefore, we need to find appropriate cAE or cVAE schemes that train both the normalized strains and the associated keys to the desired accuracy. Once the training of a waveform deep learning model is done, we need to evaluate its performance based on some criterion of accuracy, by comparing a machine-generated waveform $h_{\rm ML}(t)$ with the corresponding waveform $h_{\rm EOB}(t)$ obtained from EOBNR. To calculate the accuracy we adopt the conventional overlap method used in the gravitational waveform community. The overlap method is motivated by matched filtering schutz_1991 ; Owen_1996 ; Owen_1999 for signal detection or parameter estimation, in which the overlap between two waveforms $h_{1}(t)$ and $h_{2}(t)$ is defined by $\langle h_{1}|h_{2}\rangle=4\;{\textrm{Re}}\int_{0}^{\infty}\frac{\tilde{h}_{1}(f)\tilde{h}_{2}(f)^{*}}{S_{n}(f)}df$ (7) where $\tilde{h}_{i}(f)$ is the Fourier transform of $h_{i}(t)$ and $S_{n}(f)$ is the power spectral density (PSD) of the detector's noise. In practice, appropriate low- and high-frequency cutoffs are imposed when performing the integral. To evaluate the accuracy of a waveform model, the following fitting factor (FF) or faithfulness Babak:2006uv ; Buonanno_2007 ; Williams_2020 is adopted to compare the waveform $h_{\rm ML}(t)$ generated by our model with the standard EOB waveform $h_{\rm EOB}(t)$: $\textrm{FF}=\max_{t_{0},\phi_{0}}\left[\frac{\langle h_{\rm EOB}|h_{\rm ML}\rangle}{\sqrt{\langle h_{\rm EOB}|h_{\rm EOB}\rangle\langle h_{\rm ML}|h_{\rm ML}\rangle}}\right]$ (8) where $t_{0}$ and $\phi_{0}$ are respectively the initial time and initial phase of $h_{\rm EOB}(t)$. To avoid being biased by the detector noise, below we choose a flat PSD, i.e., $S_{n}(f)=1$, when evaluating the FF Williams_2020 . To characterize the performance, we evaluate the FF of each template in the test data set, i.e., $20\%$ of LMR and $80\%$ of HMR, and find the distribution of FFs, which can also be represented by its maximum, median and minimum. However, for simplicity we represent the accuracy simply by the average of the FFs over the test data set. This may not be precise enough, but it is more convenient when comparing the performances of different generative waveform models.
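For reference, a minimal sketch (ours) of the fitting factor (8) with the flat PSD $S_{n}(f)=1$ is given below. The maximization over the time shift $t_{0}$ uses the standard FFT cross-correlation identity, while the initial phase is not optimized, matching the simplification stated above; uniformly sampled, equal-length real waveforms are assumed.

```python
import numpy as np

def fitting_factor(h_eob, h_ml):
    """Overlap (7) with S_n(f) = 1, maximized over circular time shifts."""
    n = len(h_eob)
    H1, H2 = np.fft.fft(h_eob), np.fft.fft(h_ml)
    # <h_eob|h_ml>(t0) for all shifts at once: ifft of H1 * conj(H2)
    corr = np.real(np.fft.ifft(H1 * np.conj(H2)))
    norm1 = np.sqrt(np.sum(np.abs(H1) ** 2))
    norm2 = np.sqrt(np.sum(np.abs(H2) ** 2))
    return np.max(corr) * n / (norm1 * norm2)
```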
Later on, we will give the cumulative distribution function of the FFs and the associated maximum, median and minimum for the best model, selected by comparing the average of the FFs over the test data set. Moreover, at the current stage the initial phase is not optimized when evaluating the FF. Despite that, our best waveform models can be shown to achieve more than $97\%$ accuracy even without optimizing the initial phase. Once the initial phase is also optimized, the accuracy can be expected to be further enhanced. Note that the cVAE models turn out to yield comparable but lower accuracy than the cAE models. This is probably because the template data sets considered in this paper are parameterized by two mass parameters, which causes little degeneracy in mapping the parameter space to the template space. Degeneracy here means that different sets of parameters may yield quite similar waveforms. Thus, the variational feature of the latent space of cVAE models may not be needed for this kind of deterministic training set. Despite that, when considering more complicated template sets with more source parameters, the variational feature could be helpful for disentangling the degeneracy, which may then occur more often. In this respect, it is still interesting to consider the VAE-type models as a preliminary study for future work. To avoid digressing from the main theme of this work, from now on we focus on the various models based on the cAE scheme in the main text. For comparison, in the Appendix we present the performance of cVAE models with the same schematic structures as the cAE models. ## IV Conditional Autoencoder Waveform Models Based on the cAE scheme we can construct various waveform models through different arrangements of the encoders and decoders. Since the input data are separated into keys and normalized strains, their associated cAEs can be arranged to share a common decoder or not. In either case, we consider two waveform models which further differ by how the labels are conditioned. In total, we consider four waveform models and compare their performances. In this way, we can understand the relevance of the different arrangements to the performance, and such experience could be helpful for further constructions. Below we first consider the models in which the keys and normalized strains do not share a common decoder, and then the models that do. Figure 8: Schematic structure of the cAE+NN waveform model. Figure 9: Schematic structure of the 2cAE waveform model. The first cAE waveform model, shown in Fig. 8, is what we call the cAE+NN model, in which the strains are trained with a cAE and the associated keys are trained with a conventional supervised neural network (NN), because the dimension of the key is relatively small. After the training is done, we can drop the strain part from the model and turn the remaining network (rightmost part of Fig. 8) into a generative machine for waveforms. Since the keys and the normalized amplitude and frequency parts of the strains are trained separately, when generating a strain we need to combine them to get the un-normalized amplitude $A(t)$ and frequency $\omega(t)$. Finally, we need to integrate the frequency to get the phase $\theta(t)$ and then combine it with the amplitude to get the strain $h_{\rm ML}(t)$, as sketched below. With the output strain $h_{\rm ML}(t)$ we can use (8) to evaluate the FF for each waveform in the test data set, and then take the average to obtain the accuracy. The resultant performance is $85.73\%$ for LMR and $55.95\%$ for HMR.
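The post-processing just described (un-normalize with the key, integrate the frequency, rebuild the strain) can be sketched as follows; this is our own illustration with hypothetical variable names, inverting Eq. (6) and Eq. (4):

```python
import numpy as np

def rebuild_strain(omega_hat, amp_hat, key, dt):
    mu_w, sig_w, mu_a, sig_a = key           # the four normalization parameters
    omega = omega_hat * sig_w + mu_w          # invert Eq. (6)
    amp = amp_hat * sig_a + mu_a
    # integrate the instantaneous frequency (cf. Eq. (4)) to get the phase
    theta = 2.0 * np.pi * np.cumsum(omega) * dt
    # combine amplitude and phase into the complex strain of Eq. (2);
    # amp has one more sample than omega, so trim it (an implementation detail)
    h = amp[: len(theta)] * np.exp(1j * theta)
    return h.real, h.imag                     # h_plus, h_cross
```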
As expected, the accuracy for HMR is lower than that for LMR. This accuracy is not good enough for the purposes of data analysis, because the simple NN used for training the key is not as effective a scheme as an AE. Trusting the power of semi-supervised training, we can replace the NN by an AE to train the key; the new scheme is shown in Fig. 9. As expected, the performances for both LMR and HMR improve. The resultant accuracy is $89.92\%$ for LMR and $67.20\%$ for HMR. The LMR accuracy is now barely adequate for detection purposes, but not good enough for PE to extract the source properties. The HMR part is still not accurate enough for practical purposes. Figure 10: Schematic structure of the 1c2E1D waveform model. Figure 11: Schematic structure of the 2c2E1D waveform model. A common feature of the above two models is that they train the keys and strains separately, and the correlation between them is only through the common labels. Instead, we can feed the outputs of the different encoders into a common decoder and directly compare the decoder's output with the corresponding un-normalized frequency and amplitude of the input strain to obtain the reconstruction loss. Intuitively, the additional correlation may improve the accuracy of the generative model. We consider two such models. The first one is shown in Fig. 10, which we call 1c2E1D, i.e., one conditional encoder, two waveform encoders (one for the strain and one for the key), and one common decoder. In this model, the latent sizes of all the encoders must be the same. This may cause redundancy in the latent space for training the key, since the key's dimension is far smaller than the strain's. To remedy this, we introduce the second model, shown in Fig. 11. We call this model 2c2E1D; now we have two conditional encoders, so that the latent sizes of the encoders for the strain and the key can be different. Moreover, since the keys and the normalized strains now share a common decoder, we can in fact choose the target in Figs. 10 and 11 to be the un-normalized amplitude and frequency, i.e., $A(t)$ and $\omega(t)$, rather than the normalized $\hat{A}(t)$ and $\hat{\omega}(t)$, when evaluating the reconstruction loss. For simplicity, we choose the MSE for the reconstruction loss, so that minimizing the reconstruction loss makes the machine generate $\tilde{x}$ as close to the targets $A(t)$ and $\omega(t)$ as possible. In contrast to the cAE+NN and 2cAE waveform models, this saves the step of converting the normalized strains into their un-normalized counterparts.

|  | cAE+NN | 2cAE | 1c2E1D | 2c2E1D
---|---|---|---|---
accuracy (LMR) | 85.73% | 89.92% | 97.65% | 98.20%
accuracy (HMR) | 55.95% | 67.20% | 97.70% | 97.02%

Table 2: Summary of the accuracy of the low-mass-ratio (LMR) and high-mass-ratio (HMR) waveforms for each cAE waveform model considered in this paper. The accuracy is the average of the fitting factors (FFs) (see (8)) over all the test data. We see that both the 1c2E1D and 2c2E1D models reach accuracies of more than $97\%$ for both LMR and HMR.

FFs for 2c2E1D | LMR | HMR
---|---|---
Minimum FF | 82.49% | 74.13%
Median FF | 98.61% | 98.22%
Maximum FF | 100.0% | 99.99%

Table 3: Summary of the FFs of the LMR and HMR waveforms for the 2c2E1D cAE waveform model considered in this paper. The associated cumulative distribution function of the FFs is shown in Fig. 13. We see that the medians are comparable with the accuracies listed in Table 2.
|  | cAE+NN | 2cAE | 1c2E1D(cAE) | 2c2E1D(cAE)
---|---|---|---|---
Training time (strain) (sec) | 4042.5 | 3329.4 | 4536.6 | 4462.6
Training time (key) (sec) | 81.2 | 144.8 |  | 
Generations/epochs (strain) | 8000 | 8000 | 10000 | 10000
Generations/epochs (key) | 10000 | 10000 |  | 
Generation time per waveform (ms) | 0.8-1.0 (all models) |  |  | 

Table 4: Summary of the training time, the number of generations/epochs, and the generation time of a single waveform for each cAE waveform model considered in this paper. Note that it takes less than 1 millisecond to generate a single waveform, which is about 10 to 100 times faster than EOB running on the same computing facility. The resultant performances of the above two models are the following. For the 1c2E1D waveform model, the accuracy is $97.65\%$ for LMR and $97.70\%$ for HMR. For the 2c2E1D waveform model, the accuracy is $98.20\%$ for LMR and $97.02\%$ for HMR. Their performances are comparable and accurate enough for both LMR and HMR, i.e., greater than $97\%$. Note that we have not yet optimized the overlap accuracy over the initial phase; once this is done we expect an even higher overlap accuracy (our preliminary study shows that it can reach almost $99\%$ for LMR and $98\%$ for HMR). The high accuracy of the generated waveforms indicates that both models are suitable for low-latency detection and, with improved accuracy in future work, for PE of gravitational wave events. The high accuracy of the HMR part can also be exploited for progressive self-training to generate HMR waveforms. We summarize in Table 2 the accuracy of each cAE waveform model considered in this paper. The 1c2E1D and 2c2E1D models are the best in accuracy; overall, the 2c2E1D model is slightly superior. To characterize its detailed performance, we also give the minimum, median and maximum FF of this model in Table 3, from which we see that the median FF is comparable with the accuracy, i.e., the average of the FFs. Later we discuss this model in more detail. Note that the above models are all implemented based on the cAE scheme. We can also replace the cAE schemes in these models by cVAE ones and obtain the corresponding cVAE waveform models. In addition, there is one more model, called cVAE+cAE (see Fig. 15 in the Appendix), in which we use a cAE to train the keys and a cVAE to train the strains. The performances of these cVAE waveform models are listed in Table 6 of the Appendix. It turns out that the accuracies of the cVAE waveform models are comparable to those of their cAE counterparts. However, a closer examination suggests that the cAE models are superior to the cVAE ones, even for HMR. It is a bit surprising that the generative nature of the VAE does not help to improve the accuracy. Besides, we also summarize in Table 4 the training time and the number of generations/epochs of the waveform models, together with the generation time of a single waveform, for each cAE waveform model considered in this paper. The run-times of the cVAE waveform models are comparable with those of their cAE counterparts listed in Table 4 and are thus omitted for simplicity. We see that the training time is about $4000$ seconds for all the waveform models; this is quite modest and implies that the extension to full waveform models with more source parameters is manageable in the near future. Furthermore, the generation time of a single waveform is about one millisecond.
Compared to the typical generation time for an EOB waveform, which is about a few hundredths to a few tenths of a second on the same computing facility, the speed enhancement is about $10$ to $100$ times. As the 2c2E1D waveform model is the best among all the waveform models considered in this paper, we look into some details of this model. First, we list the hyperparameters of this model in Table 5, and the history of its training losses in Fig. 12. Based on this information one can reproduce the model quite easily. From Fig. 12 we see that the training and validation losses match well and stop decreasing around $8000$ generations/epochs. This implies that our training is not over-fitted and has stabilized.

|  | $E_{1}$ | $E_{2}$ | $E_{1}^{\prime}$ | $E_{2}^{\prime}$ | Decoder
---|---|---|---|---|---
Latent size | 8 | 8 | 3 | 3 | 8 & 3
CNN layers | None | 2 | None | None | 3
Filter size | None | [16,16] | None | None | [4,4,4]
conv features | None | [5,15] | None | None | [16,32,64]
Pool size | None | [4,4] | None | None | [4,4,4]
dilation rate | None | None | None | None | [1,2,2]
NN layers | 4 | 3 | 4 | 3 | 6
Neural size | 500 | 500 | 400 | 400 | 800

Table 5: Hyperparameters for the 2c2E1D model. Here "conv features" is an abbreviation for the convolution features. Figure 12: History of the training losses for the 2c2E1D cAE waveform model. The types of training/validation losses are denoted by (Loss/Eval\_loss, latent Loss(strain)/Eval latent Loss(strain), latent Loss(key)/Eval latent Loss(key)) in the graphic illustrations, which mean the total loss, the latent loss of the normalized strain, and the latent loss of the key, respectively. The match between the training and validation losses implies no over-fitting. The overall trends show that the training has stabilized around $10000$ generations/epochs. Figure 13: Cumulative distribution functions of the fitting factors (FFs) of the 2c2E1D cAE waveform model for the LMR (blue) and HMR (orange) generated waveforms. As expected, the HMR one has a broader tail. Overall, the outliers are rare. Figure 14: Distributions of the FFs of the 2c2E1D cAE waveform model as functions of the mass ratio (upper row) and the total mass (lower row) for the LMR (blue) and HMR (red) test data sets. Note that each dot represents one template in the test data set. Further, we can understand the tomography of the accuracy of the 2c2E1D cAE waveform model better by plotting the cumulative distribution function (CDF) of the FFs for both LMR and HMR. The results are shown in Fig. 13. As expected, the HMR one has a broader tail; however, overall the FFs are concentrated near the high-FF end around $100\%$. This implies that outliers are rare, and the generated waveforms can be reliably used for practical data analysis such as the detection and PE of gravitational wave events. Out of curiosity, it is also interesting to see how the FF changes with the mass ratio and the total mass of the binary black holes. We present the distributions for the 2c2E1D cAE waveform model in Fig. 14. We see that low FFs appear more often in the regime of lower mass ratio and smaller total mass for the LMR set. The latter could be due to the fact that the templates in this regime are less loud (small mass), but the cause of the former (lower mass ratio) is not clear. On the other hand, the FFs for HMR do not behave the same way, which could be due to insufficient training data. Finally, the typical neural network structure of the above cAE models is given in Fig. 16 in the Appendix.
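For readers who wish to rebuild the model from Table 5, the following sketch shows one possible reading of the strain encoder $E_{2}$ (the only encoder with CNN layers). The framework (PyTorch), the activations, the layer ordering, and the interpretation of the input as 2 channels of 8192 samples are our assumptions; Table 5 fixes only the sizes.

```python
import torch.nn as nn

class StrainEncoder(nn.Module):
    """Sketch of E2 from Table 5: 2 conv layers (filter 16, features [5,15],
    pool 4), then fully connected layers of width 500 down to latent size 8."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, 5, kernel_size=16, padding="same"), nn.ReLU(),
            nn.MaxPool1d(4),                 # 8192 -> 2048
            nn.Conv1d(5, 15, kernel_size=16, padding="same"), nn.ReLU(),
            nn.MaxPool1d(4),                 # 2048 -> 512
        )
        self.fc = nn.Sequential(
            nn.Flatten(),                    # 15 * 512 = 7680 features
            nn.Linear(7680, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 8),               # latent size 8
        )

    def forward(self, x):                    # x: (batch, 2, 8192)
        return self.fc(self.conv(x))
```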
## V Conclusion and Discussion In this paper we constructed various waveform models based on the neural networks of conditional autoencoders (cAE) and their variational extensions (cVAE). Their accuracy and run-time are summarized in Table 2 and Table 4 in the main text for the cAE models, and in Table 6 in the Appendix for the cVAE models, respectively. For simplicity, we represent the accuracy of these generative waveform models by the average fitting factor, which is based on the waveform overlap of matched filtering. Among these waveform models, the so-called 2c2E1D cAE model is the best, with more than $97\%$ accuracy for both low-mass-ratio (LMR) and high-mass-ratio (HMR) waveform generation. This demonstrates the viability of implementing our best waveform model in practical gravitational wave data analysis and parameter estimation (PE). In particular, the generation of a single waveform is $10$ to $100$ times faster than with the traditional EOBNR method, implying that waveform generation for low-latency detection can be accelerated by our waveform models. With improved accuracy in future work, a revised version of our generative waveform model may also help to speed up the parameter estimation. Moreover, the impressive accuracy of the HMR waveform generation is encouraging, because the fraction of HMR waveforms in the training and validation data sets is less than $3\%$. This implies that one may be able to generate higher mass-ratio waveforms by a series of self-training steps, with the generative outputs of the lower mass-ratio machine serving as the training data for the higher mass-ratio ones. This may open a new avenue to generate waveforms of intermediate mass ratio, say greater than $15$. Despite this, there is still ample room to improve our waveform models. As a proof-of-concept study, we only consider the inspiral-merger part of the full waveforms. Although the ringdown part is quite short, it contains the information of the quasi-normal modes. We are currently training waveform models for the full waveform based on a similar cAE or cVAE scheme, and will report our results in the near future. Moreover, to be more useful in practical data analysis tasks, we shall also include more source parameters such as spins, precession and tidal deformabilities. Once the above goals are achieved, we can incorporate our waveform models into the standard pipeline of detection and PE, and help to accelerate the data analysis tasks in the coming O4 and O5 observation runs of LIGO/Virgo/KAGRA. Added in proof: while finalizing the draft, an eprint Lee:2021isa with a similar goal appeared, in which an RNN framework is adopted to generate the merger-ringdown part of the waveform from the associated inspiral input. ## Acknowledgement We thank Kai-Feng Chen, Yao-Yu (Joshua) Lin and members of TGWG and CAG for discussions and comments. In particular, we also thank Han-Shiang Kuo for the discussions on cVAE structures, and Jie-Shiun Tsao for helping with the server implementation. CHL would like to thank Wei-Ren Xu for his guidance in machine learning. This work is supported by Taiwan's Ministry of Science and Technology (MoST) through Grant No. 109-2112-M-003-007-MY3. We also thank NCTS for partial financial support. ## Appendix: Structures and Results of cVAE waveform models In this Appendix, we summarize the performance of the cVAE counterparts of the cAE waveform models considered in the main text.
These counterpart models are obtained simply by replacing the cAE with a cVAE in the associated cAE waveform model. However, for the 2cAE model, we can in fact replace only the cAE for the strains by a cVAE and keep the cAE for the keys intact. In this way, we obtain a new model, called the cVAE+cAE model, as shown in Fig. 15. For all the cAE and cVAE waveform models considered in this work, the typical neural network structure is shown in Fig. 16, which serves as a guideline for readers who wish to implement the code. Figure 15: Schematic structure of the cVAE+cAE waveform model. Figure 16: A typical machine structure with details of the hyperparameters for the cAE or cVAE used in this work. Each neural network (NN) or convolutional neural network (CNN) is denoted by a box with its dimension specified. Top-left: the encoder for the strain. Top-right: the encoder for the label. Bottom: the decoder that reproduces the strain from the latent vector and the label. The accuracies of the cVAE waveform models are shown in Table 6; comparing with Table 2, one finds that the accuracies are comparable between the cVAE models and their cAE counterparts. Besides, the run-times of these cVAE models are comparable with those of their cAE counterparts, i.e., about $4000$ seconds for training and about $1$ millisecond to generate a single waveform, and are thus not listed here for simplicity. Finally, one more point: after the training is done, we simply take the mean value of the latent layer to generate the waveform, avoiding the stochastic feature of the VAE.

|  | cVAE+NN | cVAE+cAE | 2cVAE | 1c2E1D | 2c2E1D
---|---|---|---|---|---
accuracy (LMR) | 89.73% | 89.03% | 73.23% | 94.35% | 97.16%
accuracy (HMR) | 65.56% | 70.26% | 73.65% | 79.11% | 91.92%

Table 6: Summary of the accuracy for both LMR and HMR for the cVAE counterpart of each cAE waveform model considered in the main text. ## References * [1] B. Abbott et al. Observation of Gravitational Waves from a Binary Black Hole Merger. Phys. Rev. Lett., 116(6):061102, 2016. * [2] B. Abbott et al. GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs. Phys. Rev. X, 9(3):031040, 2019. * [3] R. Abbott et al. GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run. 10 2020. * [4] P. Ajith, M. Hannam, S. Husa, Y. Chen, B. Brügmann, N. Dorband, D. Müller, F. Ohme, D. Pollney, C. Reisswig, and et al. Inspiral-merger-ringdown waveforms for black-hole binaries with nonprecessing spins. Physical Review Letters, 106(24), Jun 2011. * [5] S. Babak, H. Fang, J. R. Gair, K. Glampedakis, and S. A. Hughes. 'Kludge' gravitational waveforms for a test-body orbiting a Kerr black hole. Phys. Rev. D, 75:024005, 2007. [Erratum: Phys.Rev.D 77, 04990 (2008)]. * [6] C. M. Biwer, C. D. Capano, S. De, M. Cabero, D. A. Brown, A. H. Nitz, and V. Raymond. PyCBC Inference: A Python-based parameter estimation toolkit for compact binary coalescence signals. Publications of the Astronomical Society of the Pacific, 131(996):024503, 2019. * [7] J. Blackman, S. E. Field, C. R. Galley, B. Szilágyi, M. A. Scheel, M. Tiglio, and D. A. Hemberger. Fast and accurate prediction of numerical relativity waveforms from binary black hole coalescences using surrogate models. Physical Review Letters, 115(12), Sep 2015. * [8] L. Blanchet. Gravitational Radiation from Post-Newtonian Sources and Inspiralling Compact Binaries. Living Rev. Rel., 17:2, 2014. * [9] A.
Buonanno and T. Damour. Effective one-body approach to general relativistic two-body dynamics. Phys. Rev. D, 59:084006, Mar 1999. * [10] A. Buonanno and T. Damour. Transition from inspiral to plunge in binary black hole coalescences. Physical Review D, 62(6), Aug 2000. * [11] A. Buonanno, Y. Pan, J. G. Baker, J. Centrella, B. J. Kelly, S. T. McWilliams, and J. R. van Meter. Approaching faithful templates for nonspinning binary black holes using the effective-one-body approach. Physical Review D, 76(10), Nov 2007. * [12] G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová. Machine learning and the physical sciences. Reviews of Modern Physics, 91(4), Dec 2019. * [13] J. Centrella, J. G. Baker, B. J. Kelly, and J. R. van Meter. Black-hole binaries, gravitational waves, and numerical relativity. Rev. Mod. Phys., 82:3069, 2010. * [14] F. Chollet. Deep Learning with Python. Manning, Nov. 2017. * [15] A. Einstein, L. Infeld, and B. Hoffmann. The Gravitational equations and the problem of motion. Annals Math., 39:65–100, 1938. * [16] H. Gabbard, C. Messenger, I. S. Heng, F. Tonolini, and R. Murray-Smith. Bayesian parameter estimation using conditional variational autoencoders for gravitational-wave astronomy. arXiv preprint arXiv:1909.06296, 2019. * [17] I. J. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, Cambridge, MA, USA, 2016. http://www.deeplearningbook.org. * [18] S. R. Green and J. Gair. Complete parameter inference for gw150914 using deep learning. arXiv preprint arXiv:2008.03312, 2020. * [19] S. R. Green, C. Simpson, and J. Gair. Gravitational-wave parameter estimation with autoregressive neural network flows. Phys. Rev. D, 102(10):104057, 2020. * [20] M. Hannam, P. Schmidt, A. Bohé, L. Haegel, S. Husa, F. Ohme, G. Pratten, and M. Pürrer. Simple Model of Complete Precessing Black-Hole-Binary Gravitational Waveforms. Phys. Rev. Lett., 113(15):151101, 2014. * [21] I. Hinder. The Current Status of Binary Black Hole Simulations in Numerical Relativity. Class. Quant. Grav., 27:114004, 2010. * [22] G. E. Hinton and R. S. Zemel. Autoencoders, minimum description length and helmholtz free energy. In Proceedings of the 6th International Conference on Neural Information Processing Systems, NIPS’93, page 3–10, San Francisco, CA, USA, 1993. Morgan Kaufmann Publishers Inc. * [23] S. Khan, K. Chatziioannou, M. Hannam, and F. Ohme. Phenomenological model for the gravitational-wave signal from precessing binary black holes with two-spin effects. Phys. Rev. D, 100(2):024059, 2019. * [24] D. P. Kingma and M. Welling. An introduction to variational autoencoders. arXiv preprint arXiv:1906.02691, 2019. * [25] K. D. Kokkotas and B. G. Schmidt. Quasi-normal modes of stars and black holes. Living Reviews in Relativity, 2(1), Sep 1999. * [26] J. Lee, S. H. Oh, K. Kim, G. Cho, J. J. Oh, E. J. Son, and H. M. Lee. Deep Learning Model on Gravitational Waveforms in Merging and Ringdown Phases of Binary Black Hole Coalescences. 1 2021. * [27] F. Loffler et al. The Einstein Toolkit: A Community Computational Infrastructure for Relativistic Astrophysics. Class. Quant. Grav., 29:115001, 2012. * [28] A. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy, and J. Yosinski. Plug & play generative networks: Conditional iterative generation of images in latent space, 2017. * [29] H.-P. Nollert. TOPICAL REVIEW: Quasinormal modes: the characteristic ‘sound’ of black holes and neutron stars. Class. Quant. Grav., 16:R159–R216, 1999. * [30] B. J. Owen. 
Search templates for gravitational waves from inspiraling binaries: Choice of template spacing. Physical Review D, 53(12):6749–6761, Jun 1996. * [31] B. J. Owen and B. S. Sathyaprakash. Matched filtering of gravitational waves from inspiraling compact binaries: Computational cost and template placement. Physical Review D, 60(2), Jun 1999. * [32] Y. Pan, A. Buonanno, M. Boyle, L. T. Buchman, L. E. Kidder, H. P. Pfeiffer, and M. A. Scheel. Inspiral-merger-ringdown multipolar waveforms of nonspinning black-hole binaries using the effective-one-body formalism. Physical Review D, 84(12), Dec 2011. * [33] H. P. Pfeiffer. Numerical simulations of compact object binaries. Classical and Quantum Gravity, 29(12):124004, Jun 2012. * [34] M. Pürrer. Frequency domain reduced order model of aligned-spin effective-one-body waveforms with generic mass ratios and spins. Physical Review D, 93(6), Mar 2016. * [35] B. F. Schutz. Data processing, analysis, and storage for interferometric antennas, pages 406–452. Cambridge University Press, 1991. * [36] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In NIPS, 2015. * [37] F. Tonolini, J. Radford, A. Turpin, D. Faccio, and R. Murray-Smith. Variational inference for computational imaging inverse problems, 2020. * [38] J. Veitch, V. Raymond, B. Farr, W. Farr, P. Graff, S. Vitale, B. Aylott, K. Blackburn, N. Christensen, M. Coughlin, et al. Parameter estimation for compact binaries with ground-based gravitational-wave observations using the LALInference software library. Physical Review D, 91(4):042003, 2015. * [39] D. Williams, I. Heng, J. Gair, J. Clark, and B. Khamesra. Precessing numerical relativity waveform surrogate model for binary black holes: A Gaussian process regression approach. Physical Review D, 101(6), Mar 2020. * [40] R. Yu. A tutorial on VAEs: From Bayes' rule to lossless compression, 2020.
# Studying the $\bar{D}_{1}K$ molecule in the Bethe-Salpeter equation approach Jing-Juan Qi (e-mail<EMAIL_ADDRESS>), Junior College, Zhejiang Wanli University, Zhejiang 315101, China; Zhen-Yang Wang (corresponding author, e-mail<EMAIL_ADDRESS>), Physics Department, Ningbo University, Zhejiang 315211, China; Zhu-Feng Zhang (e-mail<EMAIL_ADDRESS>), Physics Department, Ningbo University, Zhejiang 315211, China; Xin-Heng Guo (corresponding author, e-mail<EMAIL_ADDRESS>), College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China ###### Abstract We interpret the $X_{1}(2900)$ as an $S$-wave $\bar{D}_{1}K$ molecular state in the Bethe-Salpeter equation approach with the ladder and instantaneous approximations for the kernel. By solving the Bethe-Salpeter equation numerically with a kernel containing one-particle-exchange diagrams and introducing three different form factors (monopole, dipole, and exponential) at the vertices, we find that the bound state exists. We also study the width of the decay $X_{1}(2900)\rightarrow D^{-}K^{+}$. ## I Introduction Recently, two new open-flavor states were observed by the LHCb collaboration in the $D^{-}K^{+}$ invariant mass distribution of $B^{+}\rightarrow D^{+}D^{-}K^{+}$, with parameters determined to be Aaij:2020ypa $\begin{split}X_{0}(2900):\ M&=2.866\pm 0.007\pm 0.002\ \mathrm{GeV},\\ \Gamma&=57\pm 12\pm 4\ \mathrm{MeV},\\ X_{1}(2900):\ M&=2.904\pm 0.005\pm 0.001\ \mathrm{GeV},\\ \Gamma&=110\pm 11\pm 4\ \mathrm{MeV},\end{split}$ respectively. Since the resonances $X_{0}(2900)$ and $X_{1}(2900)$ are observed in the $D^{-}K^{+}$ channel, they should be manifestly exotic, with minimal quark content $\bar{c}du\bar{s}$. These two states are the first new fully open-flavor states after the discovery of the $X(5568)$ ($su\bar{b}\bar{d}$), which was reported by the D0 collaboration in the $B_{s}\pi$ invariant mass distribution in 2016. However, since then the LHCb, CMS, CDF, and ATLAS collaborations have not found evidence for the $X(5568)$ Aaij:2016iev ; Sirunyan:2017ofq ; Aaltonen:2017voc ; Aaboud:2018hgx . In the past decades, a growing number of good candidates for exotic states have been observed, many of them containing $c\bar{c}$ or $b\bar{b}$ quarks Guo:2017jvc ; Olsen:2017bmm . Thus, the discoveries of the $X_{0}(2900)$ and $X_{1}(2900)$ have drawn a lot of attention. The $X_{0}(2900)$ can be interpreted as a $cs\bar{u}\bar{d}$ compact tetraquark both in the universal quark mass picture Karliner:2020vsi and in the quark model Wang:2020prk , but not within an extended relativized quark model Lu:2020qmp . In Ref. He:2020jna , the authors used two-body chromomagnetic interactions to find that the $X_{0}(2900)$ can be interpreted as a radially excited tetraquark and the $X_{1}(2900)$ as an orbitally excited tetraquark. It was also suggested that the $X_{0}(2900)$ can be interpreted as an $S$-wave $D^{\ast-}K^{\ast+}$ molecular state and the $X_{1}(2900)$ as a $P$-wave $\bar{c}\bar{s}ud$ compact tetraquark state Chen:2020aos . In the chiral constituent quark model, it was shown that no candidate for the $X(2900)$ was found in the $IJ^{P}=00^{+}$ and $IJ^{P}=01^{+}$ $cs\bar{q}\bar{q}$ systems, while there were two states in the $P$-wave excited $cs\bar{q}\bar{q}$ system, $D_{1}\bar{K}$ and $D_{J}\bar{K}$, which could be candidates for the $X(2900)$ Tan:2020cpu .
In the QCD sum rules approach, the $X_{0}(2900)$ and $X_{1}(2900)$ were studied in the molecular and diquark-antidiquark tetraquark pictures, respectively, and the resulting masses are in good agreement with the observed masses Mutuk:2020igv . Investigations based on the one-boson-exchange model Liu:2020nil and the phenomenological Lagrangian approach Huang:2020ptc showed that the $X_{0}(2900)$ can be a $D^{\ast}\bar{K}^{\ast}$ molecule, but the $X_{1}(2900)$ cannot. In Ref. Xiao:2020ltm , the width of the decay $X_{0}(2900)\rightarrow\bar{D}K$ was found to agree with the experimental data in the $S$-wave $\bar{D}^{\ast}K^{\ast}$ scenario for the $X_{0}(2900)$ within the effective Lagrangian approach. The study in Ref. Dong:2020rgs showed that interpreting the $X_{1}(2900)$ as a $\bar{D}_{1}K$ molecule is disfavored within the meson-exchange model. In Ref. He:2020btl , using the quasipotential Bethe-Salpeter (BS) equation approach, the authors supported the assignment of the $X_{0}(2900)$ as a $D^{\ast}\bar{K}^{\ast}$ molecular state and the $X_{1}(2900)$ as a $\bar{D}_{1}K$ virtual state. Considering that the mass of the $X_{1}(2900)$ is about 10 MeV below the $\bar{D}_{1}K$ threshold, it is natural to explore the existence of an $S$-wave $\bar{D}_{1}K$ molecule. In this work, we focus on the $X_{1}(2900)$ in the BS equation approach, investigating whether the $X_{1}(2900)$ can be an $S$-wave $\bar{D}_{1}K$ bound state. We will also study the width of the decay $X_{1}(2900)\rightarrow D^{-}K^{+}$. In the rest of the manuscript we proceed as follows. In Sec. II, we establish the BS equation for the bound state of an axial-vector meson ($\bar{D}_{1}$) and a pseudoscalar meson ($K$). In Sec. III we discuss the interaction kernel of the BS equation and calculate numerical results for the Lorentz-scalar functions in the normalized BS wave function. In Sec. IV, the width of the decay of the $X_{1}(2900)$ to the $D^{-}K^{+}$ final state is calculated. In Sec. V, we present a summary of our results. ## II The BS formalism for the $\bar{D}_{1}K$ system For a molecule composed of an axial-vector meson ($\bar{D}_{1}$) and a pseudoscalar meson ($K$), the BS wave function is defined as $\chi^{\mu}\left(x_{1},x_{2},P\right)=\langle 0|T\bar{D}_{1}^{\mu}(x_{1})K(x_{2})|P\rangle,$ (1) where $\bar{D}_{1}(x_{1})$ and $K(x_{2})$ are the field operators of the axial-vector meson $\bar{D}_{1}$ and the pseudoscalar meson $K$ at the space coordinates $x_{1}$ and $x_{2}$, respectively, $P=Mv$ is the total momentum of the bound state, and $v$ is its velocity. Let $m_{\bar{D}_{1}}$ and $m_{K}$ be the masses of the $\bar{D}_{1}$ and $K$ mesons, respectively, let $p$ be the relative momentum of the two constituents, and define $\lambda_{1}=m_{\bar{D}_{1}}/(m_{\bar{D}_{1}}+m_{K})$ and $\lambda_{2}=m_{K}/(m_{\bar{D}_{1}}+m_{K})$. The BS wave function in momentum space is defined through $\chi^{\mu}_{P}(x_{1},x_{2},P)=e^{-iPX}\int\frac{d^{4}p}{(2\pi)^{4}}e^{-ipx}\chi^{\mu}_{P}(p),$ (2) where $X=\lambda_{1}x_{1}+\lambda_{2}x_{2}$ is the coordinate of the center of mass and $x=x_{1}-x_{2}$. The momentum of the $\bar{D}_{1}$ meson is $p_{1}=\lambda_{1}P+p$ and that of the $K$ meson is $p_{2}=\lambda_{2}P-p$.
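As a quick numerical check (ours, using approximate PDG masses) of the kinematics just introduced, the threshold, the binding energy and the weights $\lambda_{1,2}$ can be evaluated as follows:

```python
m_D1 = 2.4208   # GeV, approximate D1(2420) mass
m_K  = 0.4937   # GeV, approximate charged kaon mass
M_X1 = 2.904    # GeV, X1(2900) mass

threshold = m_D1 + m_K                  # ~ 2.914 GeV
E_b = M_X1 - threshold                  # ~ -0.010 GeV, i.e. about 10 MeV below threshold
lam1 = m_D1 / (m_D1 + m_K)              # ~ 0.831
lam2 = m_K / (m_D1 + m_K)               # ~ 0.169
print(threshold, E_b, lam1, lam2)
```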
It can be shown that the BS wave function of the $\bar{D}_{1}K$ system satisfies the following BS equation: $\chi^{\mu}_{P}(p)=S^{\mu\nu}_{\bar{D}_{1}}(p_{1})\int\frac{d^{4}q}{(2\pi)^{4}}K_{\nu\lambda}(P,p,q)\chi^{\lambda}_{P}(q)S_{K}(p_{2}),$ (3) where $S^{\mu\nu}_{\bar{D}_{1}}(p_{1})$ and $S_{K}(p_{2})$ are the propagators of the $\bar{D}_{1}$ and $K$ mesons, respectively, and $K_{\nu\lambda}(P,p,q)$ is the kernel, defined as the sum of all the two-particle-irreducible diagrams with respect to the $\bar{D}_{1}$ and $K$ mesons. For convenience, in the following we use the variables $p_{l}(=p\cdot v)$ and $p_{t}(=p-p_{l}v)$ as the longitudinal and transverse projections of the relative momentum $p$ along the bound state momentum $P$, respectively. Then, in the heavy quark limit the propagator of the $D_{1}$ is $S^{\mu\nu}_{D_{1}}(\lambda_{1}P+p)=\frac{-i\left(g^{\mu\nu}-v^{\mu}v^{\nu}\right)}{2\omega_{1}\left(\lambda_{1}M+p_{l}-\omega_{1}+i\epsilon\right)},$ (4) and the propagator of the $K$ meson is $S_{K}(\lambda_{2}P-p)=\frac{i}{\left(\lambda_{2}M-p_{l}\right)^{2}-\omega_{2}^{2}+i\epsilon},$ (5) where $\omega_{1(2)}=\sqrt{m_{\bar{D}_{1}(K)}^{2}+p_{t}^{2}}$ (we have defined $p_{t}^{2}=-p_{t}\cdot p_{t}$). In the BS equation approach, the interaction between the $\bar{D}_{1}$ and $K$ mesons arises from light vector-meson ($\rho$ and $\omega$) exchange. Based on heavy quark symmetry and chiral symmetry, the relevant effective Lagrangian used in this work is the following Ding:2008gr : $\begin{split}\mathcal{L}_{D_{1}D_{1}V}=&ig_{D_{1}D_{1}V}(D^{\nu}_{1b}\overleftrightarrow{\partial}_{\mu}D_{1a\nu}^{\dagger})V_{ba}^{\mu}+ig^{\prime}_{D_{1}D_{1}V}(D_{1b}^{\mu}D_{1a}^{\nu{\dagger}}-D_{1a}^{\mu{\dagger}}D_{1b}^{\nu})(\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu})_{ba}\\ &+ig_{\bar{D}_{1}\bar{D}_{1}V}(\bar{D}_{1b\nu}\overleftrightarrow{\partial}_{\mu}\bar{D}_{1a}^{\nu{\dagger}})V_{ab}^{\mu}+ig^{\prime}_{\bar{D}_{1}\bar{D}_{1}V}(\bar{D}_{1b}^{\mu}\bar{D}_{1a}^{\nu{\dagger}}-\bar{D}_{1a}^{\mu{\dagger}}\bar{D}_{1b}^{\nu})(\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu})_{ab},\\ \mathcal{L}_{KKV}=&ig_{KKV}(K_{b}\overleftrightarrow{\partial}_{\mu}K_{a}^{\dagger})V_{ba}^{\mu}+ig_{\bar{K}\bar{K}V}(\bar{K}_{b}\overleftrightarrow{\partial}_{\mu}\bar{K}_{a}^{\dagger})V_{ba}^{\mu},\end{split}$ (6) where $a$ and $b$ represent the light quark flavors ($u$ and $d$), and $V_{\mu}$ is a $3\times 3$ Hermitian matrix containing $\rho$, $\omega$, $K^{\ast}$, and $\phi$: $V=\left(\begin{array}{ccc}\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega}{\sqrt{2}}&\rho^{+}&K^{*+}\\ \rho^{-}&-\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega}{\sqrt{2}}&K^{*0}\\ K^{*-}&\bar{K}^{*0}&\phi\end{array}\right).$ (10) The coupling constants involved in Eq. (6) are related to each other as follows Ding:2008gr : $\begin{split}&g_{D_{1}D_{1}V}=-g_{\bar{D}_{1}\bar{D}_{1}V}=\frac{1}{\sqrt{2}}\beta_{2}g_{V},\\ &g^{\prime}_{D_{1}D_{1}V}=-g^{\prime}_{\bar{D}_{1}\bar{D}_{1}V}=\frac{5\lambda_{2}g_{V}}{3\sqrt{2}}m_{D_{1}},\\ &g_{KKV}=g_{V}/2,\end{split}$ (11) where the parameters $\beta_{2}g_{V}$ and $\lambda_{2}g_{V}$ are given by $2g_{\rho NN}$ and $\frac{3}{10m_{N}}(g_{\rho NN}+f_{\rho NN})$, respectively, with $g_{\rho NN}^{2}/4\pi=0.84$ and $f_{\rho NN}/g_{\rho NN}=6.10$ Wang:2019aoc . The parameter $g_{V}=5.8$ is determined by the Kawarabayashi-Suzuki-Riazuddin-Fayyazuddin relations Ding:2008gr .
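The numerical values of the couplings in Eq. (11) follow directly from the quoted inputs; a short evaluation (ours) reads:

```python
import math

g_rhoNN = math.sqrt(0.84 * 4.0 * math.pi)           # from g_rhoNN^2 / 4pi = 0.84, ~ 3.25
f_rhoNN = 6.10 * g_rhoNN                             # from f_rhoNN / g_rhoNN = 6.10
m_N = 0.939                                          # GeV, nucleon mass (assumed value)

beta2_gV = 2.0 * g_rhoNN                             # ~ 6.50
lam2_gV = 3.0 * (g_rhoNN + f_rhoNN) / (10.0 * m_N)   # ~ 7.4 GeV^-1

g_D1D1V = beta2_gV / math.sqrt(2.0)                  # Eq. (11), ~ 4.6
g_KKV = 5.8 / 2.0                                    # Eq. (11) with g_V = 5.8
print(g_D1D1V, g_KKV, lam2_gV)
```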
Figure 1: One-particle exchange diagrams induced by vector mesons $\rho$ and $\omega$. Then, at the tree level, in the $t$-channel the kernel for the BS equation of the $\bar{D}_{1}K$ system in the ladder approximation includes the following term (see Fig. 1): $\begin{split}K^{\tau\sigma}_{direct}(P,p,q;m_{V})=&-(2\pi)^{4}\delta^{4}(p^{\prime}_{1}+p^{\prime}_{2}-p_{1}-p_{2})c_{I}\Big{\\{}g_{\bar{D}_{1}\bar{D}_{1}V}g_{KKV}(p_{1}+q_{1})_{\gamma}(p_{2}+q_{2})_{\rho}g^{\tau\sigma}\\\ &\times\Delta^{\rho\gamma}(k,m_{V})+g^{\prime}_{\bar{D}_{1}\bar{D}_{1}V}g_{KKV}(p_{2}+q_{2})_{\rho}\left[k^{\tau}\Delta^{\rho\sigma}(k,m_{V})-k^{\sigma}\Delta^{\rho\tau}(k,m_{V})\right]\Big{\\}},\\\ \end{split}$ (12) where $m_{V}$ ($V=\rho,\omega$) represents the mass of the exchanged light vector meson $\rho$ or $\omega$, $c_{I}$ is the isospin coefficient, with $c_{0}=3,1$ and $c_{1}=-1,1$ for $\rho$ and $\omega$, respectively, and $\Delta^{\mu\nu}$ represents the propagator of the light vector meson. In order to describe real-world phenomena, we include a form factor at each interaction vertex to account for the finite-size effects of the hadrons. For the meson-exchange case, the form factor is assumed to take the following forms: $\begin{split}F_{M}(k)&=\frac{\Lambda_{M}^{2}-m^{2}}{\Lambda_{M}^{2}-k^{2}},\\\ F_{D}(k)&=\frac{(\Lambda_{D}^{2}-m^{2})^{2}}{(\Lambda_{D}^{2}-k^{2})^{2}},\\\ F_{E}(k)&=e^{(k^{2}-m^{2})/\Lambda_{E}^{2}},\\\ \end{split}$ (13) in the monopole ($M$), dipole ($D$), and exponential ($E$) models, respectively, where $\Lambda$, $m$ and $k$ represent the cutoff parameter, the mass of the exchanged meson and the momentum of the exchanged meson, respectively. The value of $\Lambda$ is near 1 GeV, which is the typical chiral symmetry breaking scale. In general, for a bound state of an axial-vector meson ($D_{1}$) and a pseudoscalar meson ($K$), the BS wave function $\chi_{P}^{\mu}(p)$ has the following form: $\chi_{P}^{\mu}(p)=f_{0}(p)p^{\mu}+f_{1}(p)P^{\mu}+f_{2}(p)\epsilon^{\mu}+f_{3}(p)\varepsilon^{\mu\nu\alpha\beta}p_{\alpha}P_{\beta}\epsilon_{\nu},$ (14) where $f_{i}(p)$ $(i=0,1,2,3)$ are Lorentz-scalar functions and $\epsilon^{\mu}$ represents the polarization vector of the bound state. After considering the constraints imposed by parity and Lorentz transformations, it is easy to prove that $\chi_{P}^{\mu}(p)$ can be simplified as $\chi_{P}^{\mu}(p)=f(p)\varepsilon^{\mu\nu\alpha\beta}p_{\alpha}P_{\beta}\epsilon_{\nu},$ (15) where the scalar function $f(p)$ contains all the dynamics. In the following derivation of the BS equation, we will apply the instantaneous approximation, in which the energy exchanged between the constituent particles of the binding system is neglected. In our calculation we choose the absolute value of the binding energy $E_{b}$ of the $\bar{D}_{1}K$ system (defined as $E_{b}=M-m_{D_{1}}-m_{K}$) to be less than 30 MeV, so that the energy exchanged between the constituent particles can safely be neglected. 
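The three form factor models of Eq. (13) are straightforward to code. The sketch below is our own minimal implementation (function and variable names are ours), evaluated in the covariant instantaneous approximation used later, where the exchanged momentum satisfies $k^{2}=-k_{t}^{2}$ with $k_{t}^{2}\geq 0$.

```python
import numpy as np

def form_factor(kt2, m_ex, Lam, model="M"):
    """Form factors of Eq. (13) in the covariant instantaneous approximation,
    where the exchanged momentum satisfies k^2 = -kt2 (kt2 >= 0).
    All arguments in consistent units (e.g. GeV^2 for kt2, GeV for m_ex, Lam)."""
    k2 = -kt2
    if model == "M":   # monopole
        return (Lam**2 - m_ex**2) / (Lam**2 - k2)
    if model == "D":   # dipole
        return (Lam**2 - m_ex**2)**2 / (Lam**2 - k2)**2
    if model == "E":   # exponential
        return np.exp((k2 - m_ex**2) / Lam**2)
    raise ValueError(f"unknown model: {model}")

# Example: rho exchange with the I=0 monopole cutoff found below (1.28 GeV).
print(form_factor(kt2=0.25, m_ex=0.77526, Lam=1.28, model="M"))
```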
Substituting Eqs. (4), (5), (12) and (13) into Eq. (3) and using the covariant instantaneous approximation in the kernel, $p_{l}=q_{l}$, one obtains the following expression: $\begin{split}f(p)=&\int\frac{d^{4}q}{(2\pi)^{4}}\frac{i}{6\omega_{1}(\lambda_{1}M+p_{l}-\omega_{1}+i\epsilon)[(\lambda_{2}M-p_{l})^{2}-\omega_{2}^{2}+i\epsilon][-(p_{t}-q_{t})^{2}-m_{V}^{2}]}\\\ &\Big{\\{}g_{\bar{D}_{1}\bar{D}_{1}V}g_{KKV}\left[4(\lambda_{1}M+p_{l})(\lambda_{2}M-p_{l})+(p_{t}+q_{t})^{2}+(p_{t}^{2}-q_{t}^{2})^{2}/m_{V}^{2}\right]\\\ &+g^{\prime}_{\bar{D}_{1}\bar{D}_{1}V}g_{KKV}\omega_{2}(p_{t}\cdot q_{t}-q_{t}^{2})/(\lambda_{2}M-\omega_{2})\Big{\\}}F^{2}(k_{t})f(q),\end{split}$ (16) where $k_{t}=p_{t}-q_{t}$ is the momentum of the exchanged meson in the covariant instantaneous approximation. In Eq. (16) there are poles in the complex $p_{l}$ plane at $-\lambda_{1}M+\omega_{1}-i\epsilon$, $\lambda_{2}M+\omega_{2}-i\epsilon$ and $\lambda_{2}M-\omega_{2}+i\epsilon$. By choosing an appropriate contour, we integrate over $p_{l}$ on both sides of Eq. (16) in the rest frame of the bound state and obtain the following equation: $\begin{split}\tilde{f}(p_{t})=&\int\frac{d^{3}q_{t}}{(2\pi)^{3}}\frac{1}{12\omega_{1}\omega_{2}(M-\omega_{1}-\omega_{2})\left[-(p_{t}-q_{t})^{2}-m_{V}^{2}\right]}\\\ &\times\Big{\\{}3g_{\bar{D}_{1}\bar{D}_{1}V}g_{KKV}\left[4\omega_{2}(M-\omega_{2})+(p_{t}+q_{t})^{2}+(p_{t}^{2}-q_{t}^{2})^{2}/m_{V}^{2}\right]\\\ &+2g^{\prime}_{\bar{D}_{1}\bar{D}_{1}V}g_{KKV}\omega_{2}(p_{t}\cdot q_{t}-q_{t}^{2})/(\lambda_{2}M-\omega_{2})\Big{\\}}F^{2}(k_{t})\tilde{f}(q_{t}),\end{split}$ (17) where $\tilde{f}(p_{t})\equiv\int dp_{l}f(p)$. Now, we can solve the BS equation numerically and study whether the $S$-wave $\bar{D}_{1}K$ bound state exists or not. It can be seen from Eq. (17) that there is only one free parameter in our model, the cutoff $\Lambda$, which enters through the phenomenological form factors in Eq. (13). It contains the information about the extended interaction due to the structures of the hadrons. The value of $\Lambda$ is of order 1 GeV, which is the typical scale of the nonperturbative QCD interaction. In this work, we shall treat $\Lambda$ as a parameter and vary it in a wide range, 0.8-4.8 GeV, while the binding energy $E_{b}$ is in the region from -5 to -30 MeV, to see if the BS equation has solutions. To find out the possible molecular bound states, one only needs to solve the homogeneous BS equation, and each numerical solution of the homogeneous BS equation corresponds to a possible bound state. The integration region in each integral is discretized into $n$ pieces, with $n$ being sufficiently large. In this way, the integral equation is converted into an $n\times n$ matrix equation, and the scalar wave function is now regarded as an $n$-dimensional vector. Then, the integral equation can be written as $\tilde{f}^{(n)}(p_{t})=A^{(n\times n)}(p_{t},q_{t})\tilde{f}^{(n)}(q_{t})$, where $\tilde{f}^{(n)}(p_{t})$ ($\tilde{f}^{(n)}(q_{t})$) is an $n$-dimensional vector, and $A^{(n\times n)}(p_{t},q_{t})$ is an $n\times n$ matrix, which corresponds to the matrix labeled by $p_{t}$ and $q_{t}$ in each integral equation. Generally, $p_{t}$ ($q_{t}$) varies from 0 to $+\infty$. 
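As an illustration of this discretization (a minimal sketch of ours, not the authors' code), the homogeneous equation can be solved as a matrix eigenvalue problem. The `kernel` argument below is a placeholder standing in for the full integrand of Eq. (17) (including the angular integration, the $q_{t}^{2}$ measure and the form factors), and the variable map anticipates Eq. (18) introduced next.

```python
import numpy as np

def solve_bs(kernel, n=64, mu=0.0, w=1.0, y=1.0):
    """Solve the homogeneous BS equation f = A f by discretizing the radial
    integral with n Gauss-Legendre points on t in (-1, 1), mapped to
    p_t in (0, +inf) via p_t = mu + w*log[1 + y*(1+t)/(1-t)] (Eq. (18)).
    `kernel(p, q)` must return the full integrand of Eq. (17) apart from
    f~(q_t).  Returns the largest eigenvalue and the associated wave function."""
    t, wt = np.polynomial.legendre.leggauss(n)             # nodes/weights
    p = mu + w * np.log(1.0 + y * (1.0 + t) / (1.0 - t))   # map of Eq. (18)
    jac = 2.0 * w * y / ((1.0 - t) * (1.0 - t + y * (1.0 + t)))  # dp/dt
    A = np.array([[kernel(pi, qj) for qj in p] for pi in p]) * (wt * jac)
    lam, vec = np.linalg.eig(A)
    i = np.argmax(lam.real)
    return lam[i].real, p, vec[:, i].real

# A bound state exists at a given cutoff Lambda when the largest eigenvalue
# equals 1; one scans Lambda (here 0.8-4.8 GeV) until this condition is met.
```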
In practice, $p_{t}$ ($q_{t}$) is transformed into a new variable $t$ that varies from $-1$ to 1, as required by the Gaussian integration method, $p_{t}=\mu+w\log\left[1+y\frac{1+t}{1-t}\right],$ (18) where $\mu$ is a parameter introduced to avoid divergence in the numerical calculations, and $w$ and $y$ are parameters used to control the slope of the wave functions and to find the proper solutions for these functions. Then one obtains the numerical results for the BS wave functions by requiring the eigenvalue of the resulting matrix equation to be 1.0. In our calculation, we choose to work in the rest frame of the bound state, in which $P=(M,0)$. We take the averaged masses of the mesons from the PDG Zyla:2020zbs , $m_{D_{1}}=2420.8$ MeV, $m_{K}=494.98$ MeV, $m_{\rho}=775.26$ MeV, and $m_{\omega}=782.65$ MeV. After searching for possible solutions of the $\bar{D}_{1}K$ system, we find that bound states exist. We list some values of $E_{b}$ and the corresponding $\Lambda$ for the three different form factor models in Table 1. Table 1: Values of $E_{b}$ and corresponding cutoff $\Lambda_{M}$, $\Lambda_{D}$, and $\Lambda_{E}$ for $I=0$ and $I=1$ $\bar{D}_{1}K$ bound states for the monopole, dipole, and exponential form factors, respectively.

| | $E_{b}$ (MeV) | -5 | -10 | -15 | -20 | -25 | -30 |
|---|---|---|---|---|---|---|---|
| $I=0$ | $\Lambda_{M}$ (MeV) | 1208 | 1261 | 1297 | 1327 | 1352 | 1375 |
| | $\Lambda_{D}$ (MeV) | 1668 | 1756 | 1817 | 1867 | 1910 | 1948 |
| | $\Lambda_{E}$ (MeV) | 1159 | 1231 | 1280 | 1321 | 1356 | 1386 |
| $I=1$ | $\Lambda_{M}$ (MeV) | 1541 | 1649 | 1723 | 1783 | 1835 | 1880 |
| | $\Lambda_{D}$ (MeV) | 1897 | 2371 | 2492 | 2589 | 2671 | 2744 |
| | $\Lambda_{E}$ (MeV) | 1571 | 1708 | 1804 | 1881 | 1947 | 2006 |
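As a simple consistency check of our own, linearly interpolating the $I=0$ rows of Table 1 at the physical binding energy $E_{b}\simeq-12.4$ MeV used in the next section approximately reproduces the cutoff values quoted there (the $I=1$ rows behave analogously).

```python
import numpy as np

Eb = [-5, -10, -15, -20, -25, -30]  # MeV, from Table 1
Lam_I0 = {"M": [1208, 1261, 1297, 1327, 1352, 1375],
          "D": [1668, 1756, 1817, 1867, 1910, 1948],
          "E": [1159, 1231, 1280, 1321, 1356, 1386]}

# np.interp needs increasing abscissae, so interpolate in -Eb.
for model, lams in Lam_I0.items():
    lam = np.interp(12.4, [-e for e in Eb], lams)
    print(model, round(lam))  # M: ~1278, D: ~1785, E: ~1255 MeV

# These agree with the quoted 1280, 1788, and 1257 MeV to within a few MeV;
# the small offsets reflect the mild nonlinearity of Lambda(E_b).
```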
## III The Normalization Condition of the BS wave function To find out whether the bound state of the $\bar{D}_{1}K$ system exists or not, one only needs to solve the homogeneous BS equation. However, when we want to calculate physical quantities such as the decay width, we have to face the problem of the normalization of the BS wave function. In the following, we will discuss the normalization of the BS wave function $\chi^{\mu}_{P}(p)$. In the heavy quark limit, the normalization of the BS wave function of the $\bar{D}_{1}K$ system can be written as Guo:2007qu $i\int\frac{d^{4}pd^{4}q}{(2\pi)^{8}}\bar{\chi}^{\mu}_{P}(p)\frac{\partial}{\partial P_{0}}[I_{P\mu\nu}(p,q)]\chi^{\nu}_{P}(q)=2E_{P},$ (19) where $I_{P\mu\nu}(p,q)=(2\pi)^{4}\delta^{4}(p-q)S^{-1}_{\mu\nu}(p_{1})S^{-1}(p_{2})$. In the rest frame, the normalization condition can be written in the following form: $\begin{split}-i\int\frac{d^{4}p}{(2\pi)^{4}}\Big{\\{}&4M^{2}p_{t}^{2}\big{[}\lambda_{1}^{2}(6\lambda_{2}^{2}M^{2}-6\lambda_{2}Mp_{l}+p_{l}^{2}-\omega_{2}^{2})\\\ &+2\lambda_{1}\lambda_{2}p_{l}(3\lambda_{2}M-2p_{l})+\lambda_{2}^{2}(p_{l}^{2}-\omega_{1}^{2})\big{]}\Big{\\}}f^{2}(p)=1.\end{split}$ (20) From Eqs. (16) and (17), we obtain $\begin{split}f(p)=&\frac{i\omega_{2}(M-\omega_{1}-\omega_{2})}{\pi(\lambda_{1}M+p_{l}-\omega_{1}+i\epsilon)(\lambda_{2}M-p_{l}+\omega_{2}-i\epsilon)(\lambda_{2}M-p_{l}-\omega_{2}+i\epsilon)}\tilde{f}(p_{t}).\\\ \end{split}$ (21) Then, one can recast the normalization condition for the BS wave function into the form $\begin{split}&-\int\frac{d^{3}p_{t}}{8\pi^{5}}\frac{M^{2}p_{t}^{2}\omega_{1}}{\omega_{2}^{2}(M-\omega_{1}-\omega_{2})^{2}}\Big{\\{}\lambda_{2}^{2}(p_{t}^{2}-\omega_{1}^{2})(\lambda_{2}M-\omega_{1}-3\omega_{2})+\lambda_{1}^{3}(\lambda_{2}^{2}M^{3}-2M\omega_{2}^{2})\\\ &+\lambda_{1}\lambda_{2}[2\lambda_{2}^{3}M^{3}+\lambda_{2}M(p_{t}^{2}-\omega_{1}^{2})-4\omega_{2}^{2}(\omega_{1}-\omega_{2})-2\lambda_{2}^{2}M^{2}(\omega_{1}+3\omega_{2})]\\\ &+\lambda_{1}^{2}[3\lambda_{2}^{3}M^{3}-6\lambda_{2}M\omega_{2}^{2}+2\omega_{2}^{2}(\omega_{1}+\omega_{2})-\lambda_{2}^{2}M^{2}(\omega_{1}+3\omega_{2})]\Big{\\}}\tilde{f}^{2}(p_{t})=1.\end{split}$ (22) The wave function obtained in the previous section (which is calculated numerically from Eq. (17)) can be normalized by Eq. (22). In our case, the binding energy is $E_{b}=M_{X_{1}(2900)}-(M_{\bar{D}_{1}}+M_{K})\simeq-12.4$ MeV, where we have used 2904 MeV for the mass of the $X_{1}(2900)$. From our calculations, we find that the $I=0$ $\bar{D}_{1}K$ system can be assigned as the $X_{1}(2900)$ state when the cutoff is $\Lambda$ = 1280 MeV, 1788 MeV, and 1257 MeV for the monopole, dipole, and exponential form factors, respectively, while the $I=1$ $\bar{D}_{1}K$ system can be the $X_{1}(2900)$ state when the cutoff is $\Lambda$ = 1688 MeV, 2434 MeV, and 1758 MeV for the monopole, dipole, and exponential form factors, respectively. The corresponding numerical results of the normalized Lorentz scalar function, $\tilde{f}(p_{t})$, are given in Figs. 2 and 3 for the $X_{1}(2900)$ states with $I=0$ and $I=1$ in the $\bar{D}_{1}K$ molecule picture for the monopole, dipole, and exponential form factors, respectively. Figure 2: Numerical results of the normalized Lorentz scalar function $\tilde{f}(p_{t})$ for the $X_{1}(2900)$ in the $I=0$ $\bar{D}_{1}K$ molecular picture with (a) the monopole form factor, (b) the dipole form factor, and (c) the exponential form factor. Figure 3: Numerical results of the normalized Lorentz scalar function $\tilde{f}(p_{t})$ for the $X_{1}(2900)$ in the $I=1$ $\bar{D}_{1}K$ molecular picture with (a) the monopole form factor, (b) the dipole form factor, and (c) the exponential form factor. ## IV The decay of $X_{1}(2900)\rightarrow D^{-}K^{+}$ Figure 4: The diagrams contributing to the $X_{1}(2900)\rightarrow D^{-}K^{+}$ decay process induced by $\rho$ and $\omega$. Besides investigating whether the bound state of the $\bar{D}_{1}K$ system can be the $X_{1}(2900)$ or not, we can also study the decay of the $X_{1}(2900)$ as the $S$-wave $\bar{D}_{1}K$ bound state. The $X_{1}(2900)$ can decay to $D^{-}K^{+}$ via the Feynman diagrams in Fig. 4, which are induced by the effective Lagrangians for the $D_{1}DV$ and $KKV$ vertices. The $KKV$ Lagrangian has been given in Eq. (6), and the $D_{1}DV$ Lagrangian reads Ding:2008gr : 
$\begin{split}\mathcal{L}_{DD_{1}V}=&g_{DD_{1}V}D_{1b}^{\mu}V_{\mu ba}D_{a}^{\dagger}+g^{\prime}_{DD_{1}V}(D_{1b}^{\mu}\overleftrightarrow{\partial}^{\nu}D_{a}^{\dagger})(\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu})_{ba}\\\ &+g_{\bar{D}\bar{D}_{1}V}\bar{D}_{a}^{\dagger}V_{\mu ba}\bar{D}_{1b}^{\mu}+g^{\prime}_{\bar{D}\bar{D}_{1}V}(\bar{D}_{1b}^{\mu}\overleftrightarrow{\partial}^{\nu}\bar{D}_{a}^{\dagger})(\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu})_{ba}+H.c.,\\\ \end{split}$ (23) where the coupling constants are given as $g_{DD_{1}V}=-g_{\bar{D}\bar{D}_{1}V}=-\frac{2}{\sqrt{3}}\zeta_{1}g_{V}\sqrt{m_{D}m_{D_{1}}}$ and $g^{\prime}_{DD_{1}V}=-g^{\prime}_{\bar{D}\bar{D}_{1}V}=\frac{1}{\sqrt{3}}\mu_{1}g_{V}$ Ding:2008gr . The information about the two parameters $\zeta_{1}$ and $\mu_{1}$ entering these coupling constants is very scarce, leaving them undetermined. However, in the heavy quark limit, we can roughly assume that the coupling constants $g_{DD_{1}V}$ and $g^{\prime}_{DD_{1}V}$ are equal to $g_{D^{\ast}D_{0}V}$ (=$\zeta g_{V}\sqrt{2m_{D^{\ast}}m_{D_{0}}}$) and $g^{\prime}_{D^{\ast}D_{0}V}$ (=$1/\sqrt{2}\mu g_{V}$), respectively. The parameters $\mu=0.1$ $\mathrm{GeV}^{-1}$ and $\zeta=0.1$ are taken from Ref. Casalbuoni:1996pg . According to the above interactions, we can write down the amplitude for the decay $X_{1}(2900)\rightarrow D^{-}K^{+}$ induced by light vector meson ($\rho$ and $\omega$) exchanges, as shown in Fig. 4, as follows: $\begin{split}\mathcal{M}=&g_{KKV}\sqrt{2E}\int\frac{d^{4}p}{(2\pi)^{4}}F^{2}(k)\big{[}g_{\bar{D}\bar{D}_{1}V}(p_{2}+p^{\prime}_{2})^{\alpha}\Delta_{\alpha\mu}(k,m_{V})\\\ &+g^{\prime}_{\bar{D}\bar{D}_{1}V}(p_{1}+p^{\prime}_{1})^{\nu}(p_{2}+p^{\prime}_{2})^{\alpha}\left(k_{\mu}\Delta_{\alpha\nu}(k,m_{V})-k_{\nu}\Delta_{\alpha\mu}(k,m_{V})\right)\big{]}\chi_{P}^{\mu}(p).\end{split}$ (24) In the rest frame, we define $p_{1}^{\prime}=(E_{1}^{\prime},-\mathbf{p}^{\prime}_{1})$ and $p_{2}^{\prime}=(E_{2}^{\prime},\mathbf{p}^{\prime}_{2})$ to be the momenta of the $D$ and $K$ mesons, respectively. According to the kinematics of the two-body decay of the initial state in the rest frame, one has $\begin{split}E_{1}^{\prime}&=\frac{M^{2}-m_{2}^{{}^{\prime}2}+m_{1}^{{}^{\prime}2}}{2M},\quad\quad E_{2}^{\prime}=\frac{M^{2}-m_{1}^{{}^{\prime}2}+m_{2}^{{}^{\prime}2}}{2M},\\\ |\mathbf{p}^{\prime}_{1}|&=|\mathbf{p}^{\prime}_{2}|=\frac{\sqrt{[M^{2}-(m^{\prime}_{1}+m^{\prime}_{2})^{2}][M^{2}-(m^{\prime}_{1}-m^{\prime}_{2})^{2}]}}{2M},\end{split}$ (25) and $d\Gamma=\frac{1}{32\pi^{2}}|\mathcal{M}|^{2}\frac{|\mathbf{p}^{\prime}|}{M^{2}}d\Omega,$ (26) where $|\mathbf{p}^{\prime}_{1}|$ and $|\mathbf{p}^{\prime}_{2}|$ are the norms of the 3-momenta of the final-state particles in the rest frame of the initial bound state and $\mathcal{M}$ is the Lorentz-invariant decay amplitude of the process. 
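To make the kinematics of Eqs. (25) and (26) concrete, the following sketch (ours) evaluates them for $M=2904$ MeV; the final-state masses are standard PDG values that we supply for illustration, since the text does not quote them.

```python
import math

M = 2904.0      # MeV, X1(2900) mass used in the text
m1 = 1869.66    # MeV, D^- mass (PDG value, our input)
m2 = 493.677    # MeV, K^+ mass (PDG value, our input)

# Two-body kinematics of Eq. (25) in the rest frame of the bound state.
E1 = (M**2 - m2**2 + m1**2) / (2 * M)
E2 = (M**2 - m1**2 + m2**2) / (2 * M)
p = math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)
print(E1, E2, p)  # E1' ~ 2012 MeV, E2' ~ 892 MeV, |p'| ~ 743 MeV

# For a constant |M|^2, the trivial angular integration of Eq. (26) would give
# Gamma = |M|^2 * p / (8 * pi * M**2).
```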
Substituting the normalized numerical solutions of the BS equation, with the cutoff $\Lambda$ taken to be 1280 MeV, 1788 MeV, and 1257 MeV for $I=0$ and 1688 MeV, 2434 MeV, and 1758 MeV for $I=1$ for the monopole, dipole, and exponential form factors, respectively, the decay widths of the $X_{1}(2900)$ to $D^{-}K^{+}$ are obtained as follows: $\Gamma_{X_{1}(2900)(I=0)\rightarrow D^{-}K^{+}}=\left\\{\begin{aligned} &70.73\ \mathrm{MeV}\ \mathrm{with\ monopole\ form\ factor},\\\ &98.75\ \mathrm{MeV}\ \mathrm{with\ dipole\ form\ factor},\\\ &60.38\ \mathrm{MeV}\ \mathrm{with\ exponential\ form\ factor},\end{aligned}\right.$ (27) and $\Gamma_{X_{1}(2900)(I=1)\rightarrow D^{-}K^{+}}=\left\\{\begin{aligned} &28.14\ \mathrm{MeV}\ \mathrm{with\ monopole\ form\ factor},\\\ &18.13\ \mathrm{MeV}\ \mathrm{with\ dipole\ form\ factor},\\\ &12.78\ \mathrm{MeV}\ \mathrm{with\ exponential\ form\ factor}.\end{aligned}\right.$ (28) From these results, we can see that the choice of form factor has a great influence on the decay width, and that different values of the cutoff $\Lambda$ for the same form factor also strongly affect it. ## V Summary In this paper, we studied the $X_{1}(2900)$ in the hadronic molecule interpretation by regarding it as a bound state of the $\bar{D}_{1}$ and $K$ mesons in the BS equation approach. In our model, we applied the ladder and instantaneous approximations to obtain the kernel containing one-particle-exchange diagrams and introduced three different form factors (the monopole form factor, the dipole form factor, and the exponential form factor) at the interaction vertices. From the numerical results we find that there exist bound states of the $\bar{D}_{1}K$ system. The binding energy depends on the value of the cutoff $\Lambda$. For the $I=0$ $\bar{D}_{1}K$ system, we find the cutoff regions in which the solutions (with the binding energy $E_{b}$ between $-5$ and $-30$ MeV) for the ground state of the BS equation can be found (in units of MeV): $\Lambda_{M}\sim$ (1208, 1375), $\Lambda_{D}\sim$ (1668, 1948), and $\Lambda_{E}\sim$ (1159, 1386) for the monopole form factor, the dipole form factor, and the exponential form factor, respectively. For the $I=1$ $\bar{D}_{1}K$ system, we find the regions (in units of MeV): $\Lambda_{M}\sim$ (1541, 1880), $\Lambda_{D}\sim$ (1897, 2744), and $\Lambda_{E}\sim$ (1571, 2006) in which the solutions of the BS equation can be found. Thus, we can confirm that the $X_{1}(2900)$ can be regarded as an $S$-wave $\bar{D}_{1}K$ molecule in our model. We applied the numerical solutions for the BS wave functions with the corresponding cutoff ($\Lambda$ = 1280 MeV, 1788 MeV, and 1257 MeV for $I=0$ and $\Lambda$ = 1688 MeV, 2434 MeV, and 1758 MeV for $I=1$ with the monopole, dipole, and exponential form factors, respectively) to calculate the decay widths of $X_{1}(2900)\rightarrow D^{-}K^{+}$ induced by $\rho$ and $\omega$ exchanges. We predict the decay widths to be 70.73, 98.75, and 60.38 MeV for the $X_{1}(2900)$ as an $I=0$ $\bar{D}_{1}K$ molecule and 28.14, 18.13, and 12.78 MeV for the $I=1$ case, with the corresponding cutoffs. From our study, the $X_{1}(2900)$ is more suitably interpreted as an $I=0$ $\bar{D}_{1}K$ molecular state. There are two sources of uncertainty in the calculation of the decay width: one is that the parameters $\zeta_{1}$ and $\mu_{1}$ have not been determined, since the information about them is very scarce; the other is that we cannot give a definite value of the cutoff $\Lambda$. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (Projects No. 11775024, No. 11605150 and No. 11947001), the Natural Science Foundation of Zhejiang Province (No. 
LQ21A050005), the Ningbo Natural Science Foundation (No. 2019A610067), the Fundamental Research Funds for the Provincial Universities of Zhejiang Province, and the K. C. Wong Magna Fund in Ningbo University. ## References * (1) R. Aaij et al. [LHCb], Phys. Rev. D 102, 112003 (2020). * (2) R. Aaij et al. [LHCb], Phys. Rev. Lett. 117, 152003 (2016). * (3) A. M. Sirunyan et al. [CMS], Phys. Rev. Lett. 120, 202005 (2018). * (4) T. Aaltonen et al. [CDF], Phys. Rev. Lett. 120, 202006 (2018). * (5) M. Aaboud et al. [ATLAS], Phys. Rev. Lett. 120, 202007 (2018). * (6) F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou, Rev. Mod. Phys. 90, 015004 (2018). * (7) S. L. Olsen, T. Skwarnicki and D. Zieminska, Rev. Mod. Phys. 90, 015003 (2018). * (8) M. Karliner and J. L. Rosner, Phys. Rev. D 102, 094016 (2020). * (9) G. J. Wang, L. Meng, L. Y. Xiao, M. Oka and S. L. Zhu, [arXiv:2010.09395 [hep-ph]]. * (10) Q. F. Lü, D. Y. Chen and Y. B. Dong, Phys. Rev. D 102, 074021 (2020). * (11) X. G. He, W. Wang and R. Zhu, Eur. Phys. J. C 80, 1026 (2020). * (12) H. X. Chen, W. Chen, R. R. Dong and N. Su, Chin. Phys. Lett. 37, 101201 (2020). * (13) Y. Tan and J. Ping, [arXiv:2010.04045 [hep-ph]]. * (14) H. Mutuk, [arXiv:2009.02492 [hep-ph]]. * (15) M. Z. Liu, J. J. Xie and L. S. Geng, Phys. Rev. D 102, 091502 (2020). * (16) Y. Huang, J. X. Lu, J. J. Xie and L. S. Geng, Eur. Phys. J. C 80, 973 (2020). * (17) C. J. Xiao, D. Y. Chen, Y. B. Dong and G. W. Meng, [arXiv:2009.14538 [hep-ph]]. * (18) X. K. Dong and B. S. Zou, [arXiv:2009.11619 [hep-ph]]. * (19) J. He and D. Y. Chen, [arXiv:2008.07782 [hep-ph]]. * (20) G. J. Ding, Phys. Rev. D 79, 014001 (2009). * (21) F. L. Wang, R. Chen, Z. W. Liu and X. Liu, Phys. Rev. D 99, 054021 (2019). * (22) P. A. Zyla et al. [Particle Data Group], PTEP 2020, no. 8, 083C01 (2020). * (23) X. H. Guo, K. W. Wei and X. H. Wu, Phys. Rev. D 77, 036003 (2008). * (24) R. Casalbuoni, A. Deandrea, N. Di Bartolomeo, R. Gatto, F. Feruglio and G. Nardulli, Phys. Rept. 281, 145-238 (1997).
# Hamiltonicity of graphs perturbed by a random regular graph Alberto Espuny Díaz <EMAIL_ADDRESS> and António Girão <EMAIL_ADDRESS>. Institut für Mathematik, Technische Universität Ilmenau, 98684 Ilmenau, Germany. Mathematical Institute, University of Oxford, Oxford OX2 6GG, United Kingdom. ###### Abstract. We study Hamiltonicity and pancyclicity in the graph obtained as the union of a deterministic $n$-vertex graph $H$ with $\delta(H)\geq\alpha n$ and a random $d$-regular graph $G$, for $d\in\\{1,2\\}$. When $G$ is a random $2$-regular graph, we prove that a.a.s. $H\cup G$ is pancyclic for all $\alpha\in(0,1]$, and also extend our result to a range of sublinear degrees. When $G$ is a random $1$-regular graph, we prove that a.a.s. $H\cup G$ is pancyclic for all $\alpha\in(\sqrt{2}-1,1]$, and this result is best possible. Furthermore, we show that this bound on $\delta(H)$ is only needed when $H$ is ‘far’ from containing a perfect matching, as otherwise we can show results analogous to those for random $2$-regular graphs. Our proofs provide polynomial-time algorithms to find cycles of any length. This project has received partial funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 786198, A. Espuny Díaz). The research leading to these results was also partially supported by the Carl Zeiss Foundation (A. Espuny Díaz), the EPSRC grant no. EP/N019504/1 (A. Girão) and the EPSRC grant no. EP/V007327/1 (A. Girão). ## 1\. Introduction Two classical areas of research in graph theory are that which deals with extremal results, and that which studies properties of random structures. In the former area, one often looks for sufficient conditions to guarantee that a graph satisfies a certain property. An example of such a result is the well-known Dirac’s theorem [16], which states that any graph $G$ on $n\geq 3$ vertices with minimum degree at least $n/2$ contains a Hamilton cycle (that is, a cycle which contains all vertices of $G$). In the latter, one considers a random structure, chosen according to some distribution, and studies whether it satisfies a given property with sufficiently high probability. For instance, the binomial random graph $G_{n,p}$ (where $G_{n,p}$ is obtained by considering a vertex set of size $n$ and adding each of the possible $\binom{n}{2}$ edges with probability $p$ and independently of each other) is known to contain a Hamilton cycle asymptotically almost surely (a.a.s.) whenever $p\geq(1+\epsilon)\log n/n$ [23], while if $p\leq(1-\epsilon)\log n/n$, then a.a.s. the graph is not even connected. A more recent area of research at the interface of extremal combinatorics and random graph theory is that of _randomly perturbed graphs_. The general setting is as follows. One considers an arbitrary graph $H$ from some class of graphs (usually, given by a minimum degree condition) and a random graph $G$, and asks whether the union of $H$ and $G$ satisfies a desired property a.a.s. This can be seen as a way to bridge between the two areas described above. Indeed, suppose that $H$ is any $n$-vertex graph with minimum degree $\delta(H)\geq\alpha n$ and $G=G_{n,p}$ is a binomial random graph (on the same vertex set as $H$), and consider the property of being Hamiltonian (that is, containing a Hamilton cycle). If $\alpha\geq 1/2$, then Dirac’s theorem guarantees that $H\cup G$ is Hamiltonian, for all values of $p$. 
If, on the other hand, $\alpha=0$, $H$ could be the empty graph, so we are left simply with the random graph $G$. The relevant question, then, is whether, for all $\alpha\in(0,1/2)$, any graph $H$ with minimum degree $\alpha n$ is sufficiently ‘close’ to Hamiltonicity that adding a random graph $G$ (which is itself not a.a.s. Hamiltonian) will yield Hamiltonicity. In particular, the goal is to determine whether the range of $p$ for which this holds is significantly larger than the range for $G_{n,p}$ itself. The randomly perturbed graph model was introduced by Bohman, Frieze and Martin [5], who studied precisely the problem of Hamiltonicity. They proved that, for any constant $\alpha>0$, if $H$ is an $n$-vertex graph with $\delta(H)\geq\alpha n$, then a.a.s. $H\cup G_{n,p}$ is Hamiltonian for all $p\geq C(\alpha)/n$. Note, in particular, that this increases the range of $p$ given by the random graph model. This result was very recently generalised by Hahn-Klimroth, Maesaka, Mogge, Mohr and Parczyk [19] to allow $\alpha$ to tend to $0$ with $n$ (that is, to consider graphs $H$ which are not dense), similarly improving the range of $p$. In a different direction, the results of Bohman, Frieze and Martin [5] about Hamiltonicity were generalised by Krivelevich, Kwan and Sudakov [24] to pancyclicity, that is, the property of containing cycles of all lengths between $3$ and $n$. Other properties that have been studied in the context of randomly perturbed graphs are, e.g., the existence of powers of Hamilton cycles [11, 8, 17, 2, 28], $F$-factors [3, 20, 9, 10], spanning bounded degree trees [25, 7] and (almost) unbounded degree trees [22], or general bounded degree spanning graphs [8]. The model of randomly perturbed graphs has also been extended to other settings. For instance, Hamiltonicity has been studied in randomly perturbed directed graphs [5, 24], hypergraphs [24, 27, 4, 21, 12] and subgraphs of the hypercube [13]. A common phenomenon in randomly perturbed graphs is that, by considering the union with a dense graph (i.e., with linear degrees), the threshold for the probabilities of different properties is significantly lower than that of the classical $G_{n,p}$ model. In this paper, we study the analogous problem where the random graph that is added to a deterministic graph is a random _regular_ graph (a graph is _regular_ when all its vertices have the same degree). To be more precise, we will consider a given $n$-vertex graph $H$ with $\delta(H)\geq\alpha n$, and we will study the Hamiltonicity and pancyclicity of $H\cup G$, where $G$ is a random $d$-regular graph, for some $d\in\mathbb{N}$. We write $G_{n,d}$ for a graph chosen uniformly at random from the set of all $d$-regular graphs on $n$ vertices (we implicitly assume that $nd$ is even throughout). The model of random regular graphs, though harder to analyse than the binomial model, has been studied thoroughly. In particular, it is a well-known fact that $G_{n,d}$ is a.a.s. Hamiltonian for all $3\leq d\leq n-1$ [29, 30, 15, 26]. It follows that, for all $d\geq 3$, $H\cup G_{n,d}$ is a.a.s. Hamiltonian independently of $H$, so the problem we consider only becomes relevant for $d\in\\{1,2\\}$. For more information about random regular graphs, we recommend the survey of Wormald [31]. It turns out that the behaviour of the problem is quite different in each of these two cases. 
When we consider $G=G_{n,2}$ (also referred to as a random $2$-factor), we obtain a result similar to those obtained in the binomial setting: for every constant $\alpha>0$, if $H$ is an $n$-vertex graph with $\delta(H)\geq\alpha n$, then a.a.s. $H\cup G$ is Hamiltonian. Even more, we can prove that, in this case, a.a.s. $H\cup G$ is pancyclic, and also extend the range of $\alpha$ for which this holds to some function of $n$ which tends to $0$. ###### Theorem 1.1. Let $\alpha=\omega((\log n/n)^{1/4})$, and let $H$ be an $n$-vertex graph with $\delta(H)\geq\alpha n$. Then, a.a.s. $H\cup G_{n,2}$ is pancyclic. In contrast to this, when we consider $G=G_{n,1}$ (that is, a random perfect matching), there is a particular value $\alpha^{*}<1/2$ such that, for all $\alpha>\alpha^{*}$, if $H$ is an $n$-vertex graph with $\delta(H)\geq\alpha n$, then a.a.s. $H\cup G$ is Hamiltonian. ###### Theorem 1.2. For all $\epsilon>0$, the following holds. Let $\alpha\coloneqq(1+\epsilon)(\sqrt{2}-1)$, and let $H$ be an $n$-vertex graph with $\delta(H)\geq\alpha n$. Then, a.a.s. $H\cup G_{n,1}$ is pancyclic. This result is best possible. Indeed, for every $\alpha<\sqrt{2}-1$, there exist graphs $H$ with $\delta(H)\geq\alpha n$ such that $H\cup G_{n,1}$ is not a.a.s. Hamiltonian. As we will discuss later (see Section 5), the main extremal construction of $H$ for the lower bound is a complete unbalanced bipartite graph. One key feature of this example is that $H$ does not contain a very large matching. Indeed, when we further impose that $H$ contains an (almost) perfect matching, we can obtain the following result analogous to Theorem 1.1. ###### Theorem 1.3. Let $\alpha=\omega((\log n/n)^{1/4})$, and let $H$ be an $n$-vertex graph with $\delta(H)\geq\alpha n$ which contains a matching $M$ which covers $n-o(\alpha^{2}n)$ vertices. Then, a.a.s. $H\cup G_{n,1}$ is pancyclic. The rest of the paper is organised as follows. In Section 2 we present our notation and some preliminary results which will be useful for us, and in Section 3 we prove some properties of $G_{n,1}$ and $G_{n,2}$ regarding their edge distribution and component structure which are crucial for our proofs. We devote Section 4 to proving Theorems 1.1 and 1.3, and defer the proof of Theorem 1.2 to Section 5. Finally, we discuss some possible extensions in Section 6. ## 2\. Preliminaries ### 2.1. Notation For any $n\in\mathbb{Z}$, we denote $[n]\coloneqq\\{i\in\mathbb{Z}:1\leq i\leq n\\}$ and $[n]_{0}\coloneqq\\{i\in\mathbb{Z}:0\leq i\leq n\\}$. In particular, $[0]=\varnothing$. Given any $a,b,c\in\mathbb{R}$, we write $c=a\pm b$ if $c\in[a-b,a+b]$. Whenever we use a hierarchy, the constants are chosen from right to left. To be more precise, when claiming that a statement holds for $0<a\ll b\leq 1$, we mean that it holds for all $0<b\leq 1$ and all $0<a\leq f(b)$, for some unspecified non-decreasing function $f$. For our asymptotic statements we often use the standard $\mathcal{O}$ notation; when doing so, we always understand that the functions are non-negative. Throughout this paper, the word _graph_ will refer to simple, undirected graphs. Whenever we consider graphs with loops or parallel edges, we will call them _multigraphs_. Whenever we consider a (multi)graph $G$ on $n$ vertices, we implicitly assume that $V(G)=[n]$ (in particular, all graphs are labelled). 
Given any (multi)graph $G=(V,E)$ and any two (not necessarily disjoint) sets $A,B\subseteq V$, we denote the (multi)set of edges of $G$ with both endpoints in $A$ by $E_{G}(A)$, and the (multi)set of edges with one endpoint in $A$ and one in $B$ by $E_{G}(A,B)$. If $A=\\{a\\}$ is a singleton, for simplicity we will write $E_{G}(a,B)\coloneqq E_{G}(\\{a\\},B)$, and similarly in the rest of the notation. We denote $e_{G}(A)\coloneqq|E_{G}(A)|$ and $e_{G}(A,B)\coloneqq|E_{G}(A,B)|$. We write $G[A]\coloneqq(A,E_{G}(A))$ for the subgraph of $G$ induced by $A$. If $A$ and $B$ are disjoint, we write $G[A,B]\coloneqq(A\cup B,E_{G}(A,B))$ for the bipartite subgraph of $G$ induced by $A$ and $B$. Given two (multi)graphs $G$ and $H$, we write $G\cup H\coloneqq(V(G)\cup V(H),E(G)\cup E(H))$, and if $V(H)\subseteq V(G)$, we write $G\setminus H\coloneqq(V(G),E(G)\setminus E(H))$. We denote $G-A\coloneqq G[V\setminus A]$. We will write $A\triangle B\coloneqq(A\cup B)\setminus(A\cap B)$ for the symmetric difference of $A$ and $B$. We will often abbreviate edges $e=\\{x,y\\}$ as $e=xy$; recall, however, that edges are sets of vertices, and will be used as such throughout. Given any vertex $x\in V$, we let $d_{G}(x)\coloneqq|\\{e\in E:x\in e\\}|+|\\{e\in E:e=xx\\}|$ denote its _degree_ in $G$ (that is, loops count twice towards the degree of a vertex). We write $\Delta(G)$ and $\delta(G)$ for the maximum and minimum vertex degrees in $G$, respectively. We write $N_{G}(x)$ for the neighbourhood of $x$ in $G$, that is, the set of vertices $y\in V$ such that $xy\in E$. Given any set $A\subseteq V$, we denote $N_{G}(A)\coloneqq\bigcup_{x\in A}N_{G}(x)$. Given any set of edges $E^{\prime}$, we write $V(E^{\prime})\coloneqq\bigcup_{e\in E^{\prime}}e$. A _path_ $P$ can be seen as a set of vertices which can be labelled in such a way that there is an edge between any two consecutive vertices. Equivalently, they can be defined by the corresponding set of edges. We will often consider isolated vertices as a degenerate case of paths. If the _endpoints_ of $P$ (that is, the first and last vertices, according to the labelling) are $x$ and $y$, we often refer to $P$ as an $(x,y)$-path. We refer to all vertices of a path $P$ which are not its endpoints as _internal_ vertices. The _length_ of a path is equal to the number of edges it contains (the same definition holds for cycles). The _distance_ between two vertices $x,y$ in a (multi)graph $G$, denoted by $\operatorname{dist}_{G}(x,y)$, is the length of the shortest $(x,y)$-path $P\subseteq G$ (if there is no such path, the distance is said to be infinite). Given any two sets of vertices $A,B\subseteq V(G)$, we define $\operatorname{dist}_{G}(A,B)\coloneqq\min_{x\in A,y\in B}\operatorname{dist}_{G}(x,y)$. Whenever we consider asymptotic statements, we will use the convention of abbreviating a sequence of graphs $\\{G_{i}\\}_{i\in\mathbb{N}}$ such that $|V(G_{i})|\to\infty$ by simply writing that $G$ is an $n$-vertex graph (and implicitly understanding that $n\to\infty$). In this setting, we say that an event $\mathcal{E}$ holds _asymptotically almost surely_ (a.a.s. for short) if $\mathbb{P}[\mathcal{E}]=1-o(1)$. We will ignore rounding issues whenever this does not affect the arguments. ### 2.2. The configuration model In order to study random regular graphs, one of the most useful tools is the _configuration model_ , first introduced by Bollobás [6], which gives a process that samples $d$-regular graphs uniformly at random. 
In its simplest form, the process works as follows. For each $i\in[n]$, consider a set of $d$ vertices $x_{i,1},\ldots,x_{i,d}$. Then, choose a uniformly random perfect matching $M$ covering the set $\\{x_{i,j}:i\in[n],j\in[d]\\}$. Now, generate a multigraph $G=G(M)=([n],E)$ where, for each edge $x_{i,j}x_{i^{\prime},j^{\prime}}\in M$, an edge $ii^{\prime}$ is added to $E$ (if $i=i^{\prime}$, this creates a loop). For any $i\in[n]$, we will refer to $\\{x_{i,j}:j\in[d]\\}$ as the _extended set_ of $i$. Similarly, for any $A\subseteq[n]$, we will refer to $\\{x_{i,j}:i\in A,j\in[d]\\}$ as the _extended set_ of $A$. Whenever we produce a (multi)graph $G$ following the process above, we will write $G\sim\mathcal{C}_{n,d}$. We will write $M\sim\mathcal{C}^{*}_{n,d}$ for the perfect matching $M$ such that $G=G(M)$. In order to easily distinguish both models, we will refer to $M\sim\mathcal{C}^{*}_{n,d}$ as a _configuration_. By abusing notation, we will also sometimes write $\mathcal{C}^{*}_{n,d}$ to denote the set of all configurations with parameters $n$ and $d$. Observe that, in particular, for every $n$-vertex $d$-regular multigraph $G$, there exists at least one configuration $M\in\mathcal{C}_{n,d}^{*}$ such that $G=G(M)$. Given any two configurations $M,M^{\prime}\in\mathcal{C}^{*}_{n,d}$, we write $M\sim M^{\prime}$ if there exist $u_{1}u_{2},v_{1}v_{2}\in M$ such that $M^{\prime}=(M\setminus\\{u_{1}u_{2},v_{1}v_{2}\\})\cup\\{u_{1}v_{1},u_{2}v_{2}\\}$. The following lemma bounds the probability that certain variables on configurations deviate from their expectation (see, e.g., [14, Lemma 4.3]). ###### Lemma 2.1. Let $d\in\mathbb{N}$ be fixed. Let $c>0$ and let $X$ be a random variable on $\mathcal{C}^{*}_{n,d}$ such that, for every pair of configurations $M\sim M^{\prime}$, we have $|X(M)-X(M^{\prime})|\leq c$. Then, for all $t\in\mathbb{N}$, $\mathbb{P}[|X-\mathbb{E}[X]|\geq t]\leq 2\mathrm{e}^{-\frac{t^{2}}{2ndc^{2}}}.$ Whenever sampling $G\sim\mathcal{C}_{n,d}$, the process described above may yield a multigraph with both loops and parallel edges. (Of course, this does not apply to the case $d=1$, where we simply generate a uniformly random perfect matching.) However, upon conditioning on the event that $G$ is a simple graph, such a graph is a uniformly random $d$-regular graph. The following is by now a well-established fact (see, e.g., [31, Section 2.2]): ###### Lemma 2.2. Let $d\in\mathbb{N}$ be fixed, and let $G\sim\mathcal{C}_{n,d}$. If $n$ is sufficiently large, then $\mathbb{P}[G\text{ is simple}]\geq\mathrm{e}^{-d^{2}/4}.$ 
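The sampling procedure just described can be summarised in a short sketch (our own illustration, not taken from the paper). By Lemma 2.2, rejection sampling on the event that the multigraph is simple succeeds within a constant expected number of rounds for any fixed $d$.

```python
import random
from collections import Counter

def configuration_model(n, d):
    """Sample a d-regular multigraph G ~ C_{n,d} (nd must be even): pair up the
    nd half-edges {(i, j) : i in [n], j in [d]} by a uniform perfect matching."""
    half_edges = [i for i in range(n) for _ in range(d)]
    random.shuffle(half_edges)                       # a uniform perfect matching
    return list(zip(half_edges[::2], half_edges[1::2]))

def random_regular(n, d):
    """Sample G_{n,d} by rejection: condition on the multigraph being simple
    (no loops, no parallel edges); Lemma 2.2 bounds the success probability."""
    while True:
        edges = configuration_model(n, d)
        if all(u != v for u, v in edges) and \
           max(Counter(frozenset(e) for e in edges).values()) == 1:
            return edges

print(random_regular(10, 2))  # e.g. a union of vertex-disjoint cycles on [10]
```

For $d=1$ the rejection never triggers, matching the remark above that this case is simply a uniformly random perfect matching.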
## 3\. Properties of random regular graphs of low degree We will need to have bounds on the number of components of random $2$-factors (note that every $2$-regular graph is a union of vertex-disjoint cycles), as well as the number of cycles in the union of a (not necessarily spanning) matching and a random perfect matching. ###### Lemma 3.1. The following statements hold. 1. $(\mathrm{i})$ A.a.s. the number of components of $G_{n,2}$ is at most $\log^{2}n$. 2. $(\mathrm{ii})$ Let $M$ be any (not necessarily perfect) matching on $[n]$. Then, a.a.s. $M\cup G_{n,1}$ contains at most $\log^{2}n$ cycles, and its number of components is at most $n/2-|M|+\log^{2}n$. ###### Proof. We are going to generate a multigraph $G\sim\mathcal{C}_{n,2}$ according to the configuration model (for the proof of $(\mathrm{ii})$, this will be conditioned on certain edges being present). For this, it is important to note that a uniformly random perfect matching on any set $A$ of $2n$ vertices (that is, a configuration $M\sim\mathcal{C}_{n,2}^{*}$) can be obtained iteratively by choosing $n$ edges in $n$ steps. For step $i$, let $B_{i}$ be the set of all vertices which are covered by the edges of the first $i-1$ steps. Then, choose any vertex $x_{i}\in A\setminus B_{i}$, and choose a neighbour $y_{i}\in A\setminus(B_{i}\cup\\{x_{i}\\})$ for $x_{i}$ uniformly at random. The choice of $x_{i}$ in each step of this process can be made arbitrarily. If we generate a random perfect matching on $2n$ vertices conditioned on $m$ given pairs being present in this matching, we can follow the process above starting at step $m+1$, assuming that the given $m$ edges were chosen in the first $m$ steps of the process. For our proofs, we are going to generate a uniformly random configuration following this process. For each $j\in[n]$, let $A_{j}$ be the extended set of $j$, and let $A\coloneqq\bigcup_{j\in[n]}A_{j}$. Consider first the proof of $(\mathrm{i})$. In the $i$-th step, the choice of the vertex $x_{i}$ will be made as follows: if $i\geq 2$ and for the unique $j\in[n]$ such that $y_{i-1}\in A_{j}$ we have that $|A_{j}\cap B_{i}|=1$, then let $x_{i}$ be the unique vertex in $A_{j}\setminus\\{y_{i-1}\\}$; otherwise, let $x_{i}\in A\setminus B_{i}$ be arbitrary. Roughly speaking, this choice of $x_{i}$ at each step means that we are revealing the edges of $G\sim\mathcal{C}_{n,2}$ in such a way that we reveal all edges of each connected component of $G$ before moving on to the next one. In particular, we only start a new component at step $i+1$ when the randomly chosen vertex $y_{i}$ lies in the unique $A_{j}$ where exactly one vertex had already been picked. Whenever this happens, we say that the process has _finished_ a component at step $i$. For each $i\in[n]$, let $X_{i}$ be an indicator random variable which takes value $1$ whenever the process finishes a component at step $i$, and $0$ otherwise. Observe that, since $y_{i}$ is chosen uniformly at random, for each $i\in[n]$ we have that $\mathbb{P}[X_{i}=1]=\frac{1}{2n-2i+1}.$ The total number of components $X$ of $G$ is then given by the sum of these indicator variables. In particular, for $n$ sufficiently large it follows that $\mathbb{E}[X]=\sum_{i=1}^{n}\frac{1}{2n-2i+1}\leq 2\log n.$ The claim now follows immediately by Markov’s inequality and Lemma 2.2. Consider now the proof of $(\mathrm{ii})$. First, extend the given matching $M$ into a perfect matching $M^{\prime}$ arbitrarily. For each $j\in[n]$, let $\sigma(j)$ be the unique $k\in[n]$ such that $\\{j,k\\}\in M^{\prime}$. Now, consider the extended set $A$ and, for each $\\{j,k\\}\in M^{\prime}$, add the pair $x_{j,1}x_{k,1}$ to a matching $\tilde{M}$ on $A$, so we have that $G(\tilde{M})=M^{\prime}$. We are going to obtain a configuration $M^{\prime\prime}\sim\mathcal{C}_{n,2}^{*}$ conditioned on containing this matching $\tilde{M}$. Observe that, since exactly one point in the extended set of each vertex has been covered by $\tilde{M}$, a random matching on the uncovered points corresponds to a random perfect matching on $[n]$. In particular, $G(M^{\prime\prime})$ may contain some parallel edges (which correspond to ‘isolated’ edges in $M^{\prime}\cup G_{n,1}$), but it cannot contain any loops. 
For each $i\in[n]\setminus[n/2]$, the choice of the vertex $x_{i}$ will be made as follows: if $i\geq n/2+2$ and for the unique $j\in[n]$ such that $y_{i-1}\in A_{j}$ we have that $|A_{\sigma(j)}\cap B_{i}|=1$, let $x_{i}$ be the unique vertex in $A_{\sigma(j)}\setminus B_{i}$; otherwise, choose $x_{i}$ arbitrarily. Similarly to the proof of $(\mathrm{i})$, this allows us to count the number of components of $G(M^{\prime\prime})$ (and, thus, of $M^{\prime}\cup G_{n,1}$). Note that we only start a new component at step $i+1$ if the randomly chosen $y_{i}$ lies in the unique (not completely covered) $A_{j}$ such that $A_{\sigma(j)}$ was covered before the choice of $y_{i}$. Whenever this happens, we say that the process has finished a component at step $i$. For each $i\in[n]\setminus[n/2]$, let $Y_{i}$ be a random variable which takes value $1$ whenever the process finishes a component at step $i$, and $0$ otherwise, and let $Y$ be the number of components of $G(M^{\prime\prime})$. As before, since $y_{i}$ is chosen uniformly at random, for each $i\in[n]\setminus[n/2]$ we have that $\mathbb{P}[Y_{i}=1]=\frac{1}{2n-2i+1},$ and it follows that a.a.s. $Y\leq\log^{2}n$. Each of these components is either a cycle or an isolated pair of parallel edges. Finally, $M\cup G_{n,1}$ can be obtained by contracting parallel edges into a single edge and then removing the edges of $M^{\prime}\setminus(M\cup G_{n,1})$ from $G(M^{\prime\prime})$, and this never creates any new cycles, so the bound on the number of cycles follows. In order to bound the number of components of $M\cup G_{n,1}$, note that the deletion of each edge of $M^{\prime}\setminus(M\cup G_{n,1})$ may create at most one new component. Thus, a.a.s. the number of components of $M\cup G_{n,1}$ is at most $\log^{2}n+(n/2-|M|)$. ∎ ###### Remark 3.2. The proof of Lemma 3.1$(\mathrm{ii})$ gives the following simple bound: if $M$ is any (not necessarily perfect) matching on $[n]$, then a.a.s. $|E(M)\cap E(G_{n,1})|\leq\log^{2}n$. The following edge-distribution property of $G_{n,1}$ and $G_{n,2}$ will also be useful for us. ###### Lemma 3.3. Let $\epsilon>0$ and $k=k(n)\leq n^{3}$. For each $i\in[k]$, let $\alpha_{i}=\alpha_{i}(n)$ and $\beta_{i}=\beta_{i}(n)$ be such that $\alpha_{i}\beta_{i}=\omega((\log n/n)^{1/2})$ and let $A_{i},B_{i}\subseteq[n]$ be two not necessarily disjoint sets of vertices with $|A_{i}|\geq\alpha_{i}n$ and $|B_{i}|\geq\beta_{i}n$. Let $G=G_{n,1}$ or $G=G_{n,2}$. Then, a.a.s., for every $i\in[k]$, $A_{i}$ contains at least $(1-\epsilon)\alpha_{i}\beta_{i}n$ vertices $z$ such that $N_{G}(z)\cap B_{i}\neq\varnothing$. ###### Proof. We will show how to prove the statement when $G=G_{n,2}$; the other case can be shown in exactly the same way, but avoiding the reference to Lemma 2.2. Let $G^{\prime}\sim\mathcal{C}_{n,2}$. Let $\mathcal{E}$ be the event that the statement holds for $G^{\prime}$. First, fix some $i\in[k]$, let $Z\coloneqq\\{z\in A_{i}:N_{G^{\prime}}(z)\cap B_{i}\neq\varnothing\\}$, and let $X\coloneqq|Z|$. By using the configuration model, for each $z\in A_{i}$ and $n$ sufficiently large we have that $\mathbb{P}[z\in Z]\geq\frac{2|B_{i}|-1}{2n-1}\geq(1-\epsilon/2)\beta_{i},$ which implies that $\mathbb{E}[X]\geq(1-\epsilon/2)\alpha_{i}\beta_{i}n$. Now observe that any random variable on $d$-regular multigraphs obtained according to the configuration model can also be seen as a random variable on uniformly random configurations. 
In particular, since any pair of configurations $M\sim M^{\prime}$ are equal except for their edges at four vertices, it follows that, given any two configurations $M\sim M^{\prime}$, we have $|X(M)-X(M^{\prime})|\leq 4$. Thus, by Lemma 2.1 we conclude that $\mathbb{P}[X<(1-\epsilon)\alpha_{i}\beta_{i}n]\leq\mathrm{e}^{-\Omega(\alpha_{i}^{2}\beta_{i}^{2}n)}$. By a union bound over all $i\in[k]$ and the bound on $\alpha_{i}\beta_{i}$ in the statement, we conclude that $\mathbb{P}[\mathcal{E}]\geq 1-\sum_{i=1}^{k}\mathrm{e}^{-\Omega(\alpha_{i}^{2}\beta_{i}^{2}n)}=1-o(1).$ Finally, by applying Lemma 2.2, we have that the statement holds a.a.s. for $G=G_{n,2}$. ∎ ## 4\. Randomly perturbing graphs by a $2$-factor We can now prove Theorems 1.1 and 1.3 simultaneously. We note that we believe the bound on $\delta(H)$ in the statements is far from optimal and, thus, we make no effort to optimise the constants throughout. We will let $G=G_{n,2}$ or $G=G_{n,1}$ and want to show that $H\cup G$ is pancyclic. Our general strategy will be to first construct a special path $P$ which can be used to ‘absorb’ vertices into a given cycle. To be more precise, we will set aside an arbitrary set of vertices of a suitable size and then construct a path $P$ which avoids this set of vertices. Furthermore, we will ensure that each of the vertices set aside forms a triangle with one of the edges of $P$ (and each of the vertices does this with a different edge). Then, by replacing the corresponding edge of $P$ by the path of length $2$ that forms the triangle, we can incorporate each of the vertices set aside into the path (thus ‘absorbing’ the vertex). This will allow us to have some control over the length of the cycles which we can produce. Once we have constructed the special path $P$, we will show that we can find an ‘almost’ spanning cycle $C$ with $P\subseteq C$ which avoids the vertices that can be absorbed with $P$. We can then use the cycle $C$, together with the set of vertices that we can absorb into $P$, to find cycles of all lengths larger than that of $C$. A similar strategy using $P$ will allow us to find all shorter cycles. ###### Proof of Theorems 1.1 and 1.3. Let $\alpha=\omega((\log n/n)^{1/4})$, and let $G=G_{n,2}$ or $G=G_{n,1}$, depending on which of the statements we want to prove. Condition on the event that the statements of Lemmas 3.1 and 3.3 hold, which occurs a.a.s., where we apply Lemma 3.3 to the pairs of sets $N_{H}(x)$ and $N_{H}(y)$ for each pair $(x,y)\in V(H)\times V(H)$ with $\epsilon=1/2$. Thus, for all $(x,y)\in V(H)\times V(H)$ we have that $N_{H}(x)$ contains at least $\alpha^{2}n/2$ vertices $z$ such that $N_{G}(z)\cap N_{H}(y)\neq\varnothing$. (4.1) Whenever necessary, for our claims we assume that $n$ is sufficiently large. For each pair of (not necessarily distinct) vertices $(x,y)\in V(H)\times V(H)$, consider the set $Z(x,y)\coloneqq\\{zz^{\prime}\in E(G):z\in N_{H}(x),z^{\prime}\in N_{H}(y)\\}.$ This is a set of ‘available’ edges for $(x,y)$. Throughout the proof, we will use these edges to make alterations on graphs; every time we do so, we will update this set of available edges (in particular, we will always restrict the list to edges of a ‘current’ graph; this will become clear later in the proof). 
For simplicity of notation, whenever we consider an edge $zz^{\prime}$ from some set $Z(x,y)$ (or any of their updated versions), we will implicitly assume that $z\in N_{H}(x)$ and $z^{\prime}\in N_{H}(y)$; note, however, that these edges are not really ‘oriented’ in the definition, and so $zz^{\prime}\in Z(y,x)$ too. By (4.1), for all $(x,y)\in V(H)\times V(H)$ we have $|Z(x,y)|\geq\alpha^{2}n/4.$ (4.2) Note further that, since $Z(x,y)\subseteq G$ (by abusing notation and identifying $Z(x,y)$ with a graph), $\Delta(Z(x,y))\leq 2.$ (4.3) Take an arbitrary set $U\subseteq V(H)$ of $m\coloneqq\alpha^{2}n/1000$ vertices and label them as $u_{1},\ldots,u_{m}$. For each $j\in[m]$, iteratively, choose an edge $e_{j}=z_{j}z_{j}^{\prime}\in Z(u_{j},u_{j})$ such that $e_{j}\cap U=\varnothing$ and $e_{j}\cap\bigcup_{k\in[j-1]}e_{k}=\varnothing$ (the existence of such edges is guaranteed in every step by (4.2) and (4.3)). Let $W\coloneqq\bigcup_{j\in[m]}e_{j}$. Now, for each $(x,y)\in V(H)\times V(H)$, we update the list of ‘available’ edges for $(x,y)$ by letting $Z^{\prime}(x,y)\coloneqq\\{zz^{\prime}\in E(G):z\in N_{H}(x)\setminus(U\cup W),z^{\prime}\in N_{H}(y)\setminus(U\cup W)\\},$ so it follows from (4.2) and (4.3) and since $|U\cup W|=3m$ that $|Z^{\prime}(x,y)|\geq\alpha^{2}n/4-3\alpha^{2}n/500\geq\alpha^{2}n/5.$ (4.4) Now, for each $j\in[m-1]$, iteratively, choose an edge $f_{j}=w_{j}w_{j}^{\prime}\in Z^{\prime}(z_{j}^{\prime},z_{j+1})$ satisfying that $f_{j}\cap\bigcup_{k\in[j-1]}f_{k}=\varnothing$. The existence of such edges in each step follows by (4.4) and (4.3). Consider the path $P\coloneqq z_{1}z_{1}^{\prime}w_{1}w_{1}^{\prime}z_{2}\ldots w_{m-1}^{\prime}z_{m}z_{m}^{\prime}$ (the fact that this is a path follows by the previous choices of edges). Note $|V(P)|\leq 4m$. Let $W^{\prime}\coloneqq V(P)\setminus\\{z_{1},z_{m}^{\prime}\\}$. Consider now the graph $G_{0}\coloneqq(G-(W^{\prime}\cup U))\cup P$ or $G_{0}\coloneqq((M\cup G)-(W^{\prime}\cup U))\cup P$, depending on the statement we are trying to prove. By Lemma 3.1, this is a union of at most $\log^{2}n$ cycles and at most $(1+o(1))\alpha^{2}n/200$ paths, all vertex-disjoint (where we might have degenerate cases where some paths have length $0$, that is, they are an isolated vertex). Indeed, note that Lemma 3.1 asserts that the number of components of $G_{n,2}$ or $M\cup G_{n,1}$ is $o(\alpha^{2}n)$. The removal of each of the vertices in $W^{\prime}\cup U$ may increase the number of components by at most one, and $|W^{\prime}\cup U|\leq 5m=\alpha^{2}n/200$, so the claim follows. Furthermore, after adding the path $P$ again, we have that $\Delta(G_{0})\leq 2$. Indeed, the fact that the endpoints of $P$ have degree at most $2$ in $G_{0}$ follows since the first and last edges of $P$ lie in $E(G)$. We will now use the edges of $H$ to iteratively combine these paths and cycles into a single cycle $C\subseteq(H\cup G)-U$ of length $n-m$ with $P\subseteq C$, which we will later use to obtain a Hamilton cycle (and, indeed, cycles of all lengths). Let $V\coloneqq V(H)\setminus U$. 
For each $(x,y)\in V^{2}$, we update the set of available edges by setting $Z_{0}(x,y)\coloneqq\\{zz^{\prime}\in E(G):z\in N_{H}(x)\setminus(V(P)\cup U),z^{\prime}\in N_{H}(y)\setminus(V(P)\cup U)\\}.$ It follows by (4.2) and (4.3) and since $|V(P)\cup U|\leq 5m$ that $|Z_{0}(x,y)|\geq\alpha^{2}n/4-\alpha^{2}n/100=6\alpha^{2}n/25.$ (4.5) Throughout the upcoming process, we will define graphs $G_{1},\ldots,G_{t}$, for some $t\in\mathbb{N}$, where each one is obtained from the previous by some ‘small’ alteration. All these graphs will be unions of vertex-disjoint paths and cycles spanning $V$. The process will end when $G_{t}=C$, that is, when we obtain the desired cycle spanning $V$. For each $i\in[t]$, the alteration that we perform on $G_{i-1}$ to construct $G_{i}$ will depend on its structure. We present here a sketch; the full details are given in cases 1 to 3 below. While the current graph $G_{i-1}$ contains at least two paths (case 1), we create a new graph $G_{i}$ with one fewer path and whose number of components does not increase by more than one. If $G_{i-1}$ contains exactly one path (case 2), then we either decrease the number of components or make sure that $G_{i}$ has no paths (while not increasing the number of components). Finally, if $G_{i-1}$ contains at least two cycles and no paths (case 3), then we create a graph $G_{i}$ with one fewer component, but we possibly create a path in the process. It follows from this description that the process must really end (assuming it can be carried out, which we prove later). Indeed, we only apply case 1 while at least two components are paths, and each time reduce the number of paths and do not increase the number of components by more than one. Since $G_{0}$ contains at most $\log^{2}n$ cycles and at most $(1+o(1))\alpha^{2}n/200$ paths, after some number of iterations we will have a graph containing only one path and at most $(1+o(1))\alpha^{2}n/100$ cycles. Furthermore, once there is only one path, there will never be more than one again, so we will never apply case 1 again. From this point on, we apply cases 2 or 3 intermittently, as needed, and we always either reduce the number of components of the graph, or make sure that we will reduce this number in the next iteration (while not increasing the number of components now). In particular, this guarantees that every two steps we decrease the number of components. Thus, at some point we will have a unique component. If it is not a cycle, an application of case 2 will yield the desired cycle. The alterations performed throughout the process will use some of the ‘available’ edges. For instance, in some cases we will pick two vertices $x$ and $y$ and an edge $zz^{\prime}\in Z_{0}(x,y)$, and use these to alter the graph. With this alteration, we will remove $zz^{\prime}$ from $G_{i-1}$ (and our process is correct only if $zz^{\prime}\in E(G_{i-1})$). Thus, for future iterations, we need to remove $zz^{\prime}$ from the set of ‘available’ edges $Z_{0}(x,y)$. For all $i\in[t]$ and all pairs of vertices $x,y$, we will denote by $Z_{i}(x,y)$ a subset of $Z_{0}(x,y)$ which is ‘available’ for the next iteration. For all $i\in[t]$, we will always define $G_{i}$ as $G_{i}\coloneqq(G_{i-1}\setminus E_{1})\cup E_{2}$, where $E_{1}$ and $E_{2}$ are sets of edges of $H\cup G$. Then, for all $(x,y)\in V^{2}$, we will set $Z_{i}(x,y)\coloneqq Z_{i-1}(x,y)\setminus E_{1}$. Thus, we will not write this definition explicitly in each case, as it will follow from the definition of $G_{i}$. 
Throughout the process, we will assume that some conditions hold. In particular, for all $i\in[t]_{0}$ we will have $\Delta(G_{i})\leq 2$ and $P\subseteq G_{i}\subseteq(H\cup G)\setminus U$ (note that these hold for $G_{0}$ by definition; for larger values of $i$, they will be a consequence of the specific process which we follow) and, for all $(x,y)\in V^{2}$, that $Z_{i}(x,y)\subseteq G_{i}$ and $Z_{i}(x,y)$ and $P$ are vertex-disjoint (both of which hold by definition). Note that the conditions that $Z_{i}(x,y)\subseteq G_{i}$ and $\Delta(G_{i})\leq 2$ imply (4.3) holds for each $Z_{i}(x,y)$ (which also holds simply by the definition of $Z_{i}(x,y)$); we will keep referring to (4.3) when we wish to use this fact. Furthermore, we will assume that in each step we have $|Z_{i-1}(x,y)|\geq\alpha^{2}n/8;$ (4.6) it will follow from our process that the number of steps $t$ that we perform and the number of edges that become ‘unavailable’ at each step are small, so that (4.6) holds. We now provide the details of the process. Let $i\in\mathbb{N}$ and assume we have already defined $G_{i-1}$. If $G_{i-1}$ is a cycle of length $n-m$ (which spans $V$), we are done. Otherwise, we will want to create a new graph $G_{i}$. For this, we will need to consider several cases. Case 1. Assume at least two of the components of $G_{i-1}$ are paths. Let $P^{\prime}$ be one of the path components of $G_{i-1}$, and let its endpoints be $x$ and $y$. Choose any edge $zz^{\prime}\in Z_{i-1}(x,y)$ such that $\operatorname{dist}_{G_{i-1}}(zz^{\prime},\\{x,y\\})>1$ and let $G_{i}\coloneqq(G_{i-1}\setminus\\{zz^{\prime}\\})\cup\\{xz,yz^{\prime}\\}$. Depending on the relative position of $zz^{\prime}$ with respect to $P^{\prime}$, the total number of components will either decrease by one, remain the same, or increase by one. Case 2. Assume exactly one component of $G_{i-1}$ is a path $P^{\prime}$. Let its endpoints be $x$ and $y$. Consider the following cases. 1. 2.1. Assume that $N_{H}(x)\cup N_{H}(y)\nsubseteq U\cup V(P)\cup V(P^{\prime})$. Suppose that there exists some $z\in N_{H}(x)\setminus(U\cup V(P)\cup V(P^{\prime}))$ (the other case is analogous). Then, choose a vertex $z^{\prime}\in N_{G_{i-1}}(z)$ (so $zz^{\prime}$ is an edge of a cycle of $G_{i-1}$) and let $G_{i}\coloneqq(G_{i-1}\setminus\\{zz^{\prime}\\})\cup\\{xz\\}$. This reduces the number of components and increases the length of the unique path $P^{\prime}$. 2. 2.2. Otherwise, we have that $Z_{i-1}(x,y)\subseteq E(P^{\prime})$. Suppose there is an edge $zz^{\prime}\in Z_{i-1}(x,y)$ such that $\operatorname{dist}_{P^{\prime}}(x,z^{\prime})<\operatorname{dist}_{P^{\prime}}(x,z)$. If so, let $G_{i}\coloneqq(G_{i-1}\setminus\\{zz^{\prime}\\})\cup\\{xz,yz^{\prime}\\}$. In this case, we turn $P^{\prime}$ into a cycle. 3. 2.3. Otherwise, every $zz^{\prime}\in Z_{i-1}(x,y)$ satisfies $z,z^{\prime}\in V(P^{\prime})$ and $\operatorname{dist}_{P^{\prime}}(x,z)<\operatorname{dist}_{P^{\prime}}(x,z^{\prime})$. By (4.3) and (4.6), we can find a set $Z_{i-1}^{\prime}(x,y)\subseteq Z_{i-1}(x,y)$ with $|Z_{i-1}^{\prime}(x,y)|\geq\alpha^{2}n/16$ (4.7) and such that any two $zz^{\prime},ww^{\prime}\in Z_{i-1}^{\prime}(x,y)$ are vertex-disjoint. Now choose two edges $zz^{\prime},ww^{\prime}\in Z_{i-1}^{\prime}(x,y)$ with $\operatorname{dist}_{P^{\prime}}(x,z)<\operatorname{dist}_{P^{\prime}}(x,w)$ which minimise $\operatorname{dist}_{P^{\prime}}(z,w)$ over all possible such pairs of edges. We now have two cases. 1. 2.3.1. 
If $\operatorname{dist}_{P^{\prime}}(z^{\prime},w)=1$, let $G_{i}\coloneqq(G_{i-1}\setminus\\{z^{\prime}w\\})\cup\\{xw,yz^{\prime}\\}$. This, again, turns $P^{\prime}$ into a cycle. 2. 2.3.2. Otherwise, let $z^{\prime\prime},w^{\prime\prime}\in V(P^{\prime})$ be such that $\operatorname{dist}_{P^{\prime}}(x,z^{\prime\prime})=\operatorname{dist}_{P^{\prime}}(x,z^{\prime})+1$ and $\operatorname{dist}_{P^{\prime}}(x,w^{\prime\prime})=\operatorname{dist}_{P^{\prime}}(x,w)-1$. Let $P^{\prime\prime}$ be the subpath of $P^{\prime}$ whose endpoints are $z^{\prime\prime}$ and $w^{\prime\prime}$, and let $\ell\coloneqq\operatorname{dist}_{P^{\prime}}(z^{\prime\prime},w^{\prime\prime})$ be the length of $P^{\prime\prime}$. By an averaging argument using (4.7), it follows that $\ell\leq 16|V(P^{\prime})|/(\alpha^{2}n)=\mathcal{O}(\alpha^{-2})=o(\alpha^{2}n).$ By (4.6), this guarantees that we can pick an edge $z^{3}w^{3}\in Z_{i-1}(z^{\prime\prime},w^{\prime\prime})$ such that $\operatorname{dist}_{G_{i-1}}(z^{3}w^{3},V(P^{\prime\prime}))\geq 1$. Now, let $G_{i}\coloneqq(G_{i-1}\setminus\\{z^{\prime}z^{\prime\prime},ww^{\prime\prime},z^{3}w^{3}\\})\cup\\{xw,yz^{\prime},z^{\prime\prime}z^{3},w^{\prime\prime}w^{3}\\}.$ In this case, if $z^{3}w^{3}\in E(P^{\prime})$, we turn $P^{\prime}$ into a cycle; otherwise, we combine the vertices of $P^{\prime}$ and the cycle containing $z^{3}w^{3}$ into two new cycles. Case 3. Assume all components of $G_{i-1}$ are cycles. We consider the following cases. 1. 3.1. For each edge $xy\in E(G_{i-1})$, let $C_{xy}$ be the cycle of $G_{i-1}$ which contains this edge. Assume that there exists some $xy\in E(G_{i-1})\setminus E(P)$ satisfying that $Z_{i-1}(x,y)\nsubseteq E(C_{xy})$. If so, let $zz^{\prime}\in Z_{i-1}(x,y)\setminus E(C_{xy})$ and let $G_{i}\coloneqq(G_{i-1}\setminus\\{xy,zz^{\prime}\\})\cup\\{xz,yz^{\prime}\\}$. This combines two cycles into one. 2. 3.2. Otherwise, from (4.6), it follows that each cycle of $G_{i-1}$ must contain at least $\alpha^{2}n/8$ vertices. Now let $x,y\in V\setminus V(P)$ be such that $x$ and $y$ lie in different cycles (note, in particular, that we may pick a vertex in any of the cycles since $|V(P)|\leq 4m<\alpha^{2}n/8$) and let $zz^{\prime}\in Z_{i-1}(x,y)$. Assume without loss of generality that $zz^{\prime}$ lies in a cycle other than the one containing $x$. By the definition of $Z_{0}(x,y)$, this means that $xz$ is an edge of $H$ joining two of the cycles of $G_{i-1}$. Now let $w\in N_{G_{i-1}}(x)$ and let $G_{i}\coloneqq(G_{i-1}\setminus\\{wx,zz^{\prime}\\})\cup\\{xz\\}$. This combines two cycles into a path. The fact that $\Delta(G_{i})\leq 2$ follows by the construction in each case, since we only add at most one edge incident to each vertex, and we only do this for vertices which had degree one in $G_{i-1}$ or for which we first delete an incident edge. The fact that $P\subseteq G_{i}$ follows since we have made sure not to delete any of the edges of $P$, and $G_{i}\subseteq(H\cup G)\setminus U$ since we only add edges of $H$. Now let $C=G_{t}$ be the graph resulting from the above process. The argument given above for the termination of the process shows that $t\leq(1+o(1))\alpha^{2}n/40$, and this implies that (4.6) holds (indeed, observe that in all cases the number of edges which are deleted from each $Z_{i}(x,y)$ is at most $3$, so at most $(3+o(1))\alpha^{2}n/40$ edges are removed, and the conclusion follows from (4.5)).
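Explicitly, for every $(x,y)\in V^{2}$ and every $i\in[t]$ we have $|Z_{i-1}(x,y)|\geq|Z_{0}(x,y)|-3t\geq\frac{6\alpha^{2}n}{25}-(3+o(1))\frac{\alpha^{2}n}{40}=\left(\frac{33}{200}-o(1)\right)\alpha^{2}n>\frac{25\alpha^{2}n}{200}=\frac{\alpha^{2}n}{8},$ which is precisely (4.6).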
By the iterative process above, we have proved the existence of a cycle $C$ of length $n-m$ such that $P\subseteq C$. We must now prove that there is a cycle of length $k$, for all $3\leq k\leq n$. Recall that $Z_{t}(x,y)\subseteq E(C)\setminus E(P)$ for all $(x,y)\in V^{2}$ and (4.6) holds. We split our analysis into three cases. Suppose first that $3\leq k\leq\alpha^{2}n/20$. In such a case, consider any subpath $P^{\prime}\subseteq C$ of length $k-3$, and let its endpoints be $x$ and $y$. Now choose any edge $zz^{\prime}\in Z_{t}(x,y)$ such that $z,z^{\prime}\notin V(P^{\prime})$ (the existence of such an edge follows by (4.6) and (4.3)). Then, the union of $P^{\prime}$ and the path $xzz^{\prime}y$ forms a cycle of length $k$. Assume next that $n-m\leq k\leq n$. Consider a set $J\subseteq[m]$ with $|J|=k+m-n$. In $C$, for each $j\in J$, replace the edge $e_{j}=z_{j}z_{j}^{\prime}$ by the path $z_{j}u_{j}z_{j}^{\prime}$. This yields a cycle of the desired length. Finally, assume $\alpha^{2}n/20<k<n-m$. Consider a subpath $P^{\prime}\subseteq C$ of length $k-3$ such that $P\subseteq P^{\prime}$. Let the endpoints of $P^{\prime}$ be $x$ and $y$, respectively, and consider the set $Z_{t}(x,y)$. We consider the following three cases. 1. 1. Assume that there exists $zz^{\prime}\in Z_{t}(x,y)$ such that $z,z^{\prime}\notin V(P^{\prime})$. Then, the union of $P^{\prime}$ and the path $xzz^{\prime}y$ forms a cycle of length $k$. 2. 2. Otherwise, let $Z_{t}^{\prime}(x,y)\coloneqq\\{e\in Z_{t}(x,y):e\subseteq V(P^{\prime})\\}$, so we have $Z_{t}^{\prime}(x,y)\subseteq E(P^{\prime})$ and $|Z_{t}^{\prime}(x,y)|\geq|Z_{t}(x,y)|-2>\alpha^{2}n/16$ by (4.6). Recall that $Z_{t}^{\prime}(x,y)$ and $P$ are vertex-disjoint. Suppose there is an edge $zz^{\prime}\in Z_{t}^{\prime}(x,y)$ such that $\operatorname{dist}_{P^{\prime}}(x,z^{\prime})<\operatorname{dist}_{P^{\prime}}(x,z)$. If so, $(P^{\prime}\setminus\\{zz^{\prime}\\})\cup\\{xz,yz^{\prime}\\}$ is a cycle of length $k-2$ which contains $P$. To obtain a cycle of length $k$, replace $e_{1}=z_{1}z_{1}^{\prime}$ and $e_{2}=z_{2}z_{2}^{\prime}$ by the paths $z_{1}u_{1}z_{1}^{\prime}$ and $z_{2}u_{2}z_{2}^{\prime}$, respectively. 3. 3. Otherwise, $Z_{t}^{\prime}(x,y)\subseteq E(P^{\prime})$ and all $zz^{\prime}\in Z_{t}^{\prime}(x,y)$ satisfy that $\operatorname{dist}_{P^{\prime}}(x,z)<\operatorname{dist}_{P^{\prime}}(x,z^{\prime})$. Recall, again, that $Z_{t}^{\prime}(x,y)$ and $P$ are vertex-disjoint. We now proceed similarly to case 2.3. First, by (4.3) and the current bound on $|Z_{t}^{\prime}(x,y)|$, we may restrict ourselves to a subset of available edges $Z_{t}^{\prime\prime}(x,y)\subseteq Z_{t}^{\prime}(x,y)$ such that $|Z_{t}^{\prime\prime}(x,y)|\geq\alpha^{2}n/32$ and any two edges $zz^{\prime},ww^{\prime}\in Z_{t}^{\prime\prime}(x,y)$ are vertex-disjoint. Now, choose two edges $zz^{\prime},ww^{\prime}\in Z_{t}^{\prime\prime}(x,y)$ with $\operatorname{dist}_{P^{\prime}}(x,z)<\operatorname{dist}_{P^{\prime}}(x,w)$ which minimise $\operatorname{dist}_{P^{\prime}}(z,w)$ over all possible pairs. Let $P^{\prime\prime}$ be the subpath of $P^{\prime}$ whose endpoints are $z^{\prime}$ and $w$, and let $\ell\coloneqq\operatorname{dist}_{P^{\prime}}(z,w)$. By an averaging argument using the fact that $|Z_{t}^{\prime\prime}(x,y)|\geq\alpha^{2}n/32$, it follows that $\ell\leq 32|V(P^{\prime})|/(\alpha^{2}n)<m$. Then, the graph $(P^{\prime}\setminus E(P^{\prime\prime}))\cup\\{xw,yz^{\prime}\\}$ is a cycle of length $k-\ell$ which contains $P$. 
In order to obtain a cycle of length $k$, for each $j\in[\ell]$ replace $e_{j}=z_{j}z_{j}^{\prime}$ by the path $z_{j}u_{j}z_{j}^{\prime}$.∎ ## 5\. Randomly perturbing graphs by a perfect matching Let $\alpha^{*}\coloneqq\sqrt{2}-1$. We first show that, for any constant $\alpha<\alpha^{*}$, there exist $n$-vertex graphs $H$ with $\delta(H)=\alpha n$ such that $H\cup G_{n,1}$ does not a.a.s. contain a Hamilton cycle. Indeed, let $H=(A,B,E)$ be a complete unbalanced bipartite graph, where $|A|=\alpha n$ and $|B|=(1-\alpha)n$ (so $d_{H}(v)=\alpha n$ for all $v\in B$), and let $G$ be a perfect matching on the same vertex set as $H$. It is easy to check that, for $H\cup G$ to contain a Hamilton cycle, the graph $G[B]$ must contain at least $(1-2\alpha)n$ edges. Now, in $G_{n,1}$, each edge appears with probability $1/(n-1)$, so $\mathbb{E}[e_{G_{n,1}}(B)]=\binom{(1-\alpha)n}{2}\frac{1}{n-1}\leq\frac{(1-\alpha)^{2}}{2}n<(1-2\alpha)n,$ where the last inequality holds since $\alpha<\sqrt{2}-1$ implies $(1-\alpha)^{2}<2(1-2\alpha)$. The conclusion then follows by Markov’s inequality. In order to prove Theorem 1.2, we first need the following lemma. ###### Lemma 5.1. Let $1/n\ll\beta<\alpha/2<1/4$. Let $H$ be an $n$-vertex graph with $\delta(H)\geq\alpha n$ which does not contain a matching of size greater than $(n-\sqrt{n})/2$. Let $M$ be a maximum matching in $H$. Then, the vertex set of $H$ can be partitioned into sets $A\,\dot\cup\,B_{1}\,\dot\cup\,B_{2}\,\dot\cup\,C_{1}\,\dot\cup\,C_{2}\,\dot\cup\,R$ in such a way that the following hold: 1. $(\mathrm{H1})$ $|A|\leq 12\beta^{-2}$; 2. $(\mathrm{H2})$ $n-2|M|-|A|\leq|R|\leq n-2|M|$; 3. $(\mathrm{H3})$ $|B_{1}|=|B_{2}|$ and $H[B_{1},B_{2}]$ contains a perfect matching; furthermore, for every $v\in B_{2}\cup R$ we have $e_{H}(v,B_{1})\geq(\alpha-2\beta)n$, and 4.
$(\mathrm{H4})$ $|C_{1}|=|C_{2}|$ and $H[C_{1},C_{2}]$ contains a perfect matching; furthermore, for all $v\in C_{2}$ we have $e_{H}(v,B_{2}\cup R)\leq\beta^{-1}+1$. ###### Proof. Let $M$ be a maximum matching in $H$. Let $V\coloneqq V(M)$ and $R^{\prime}\coloneqq V(H)\setminus V$. By the maximality of $M$ and the condition on $H$ in the statement, we have that $E(H[R^{\prime}])=\varnothing$ and $|R^{\prime}|\geq\sqrt{n}$. For each vertex $v\in V$, let $M(v)$ denote the unique vertex $w\in V$ such that $vw\in M$. For any set $S\subseteq V$, we define $M(S)\coloneqq\\{M(v):v\in S\\}$. To prove the statement we will follow an iterative process. We will inductively construct two sequences of sets $\\{B_{1}^{i}\\}_{i\geq 1}$ and $\\{B_{2}^{i}\\}_{i\geq 1}$ with $B_{1}^{i},B_{2}^{i}\subseteq V$ and use these to construct the vertex partition. First, for notational purposes we let $B_{2}^{-1}=B_{1}^{0}\coloneqq\varnothing$, and we define $B_{2}^{0}\coloneqq R^{\prime}$. Then, for each $i\geq 1$, we define $B_{1}^{i}\coloneqq\left\\{v\in V\setminus\left(\bigcup_{j=0}^{i-1}(B_{1}^{j}\cup B_{2}^{j})\right):e_{H}(v,B_{2}^{i-1})\geq 2\right\\}\quad\text{ and }\quad B_{2}^{i}\coloneqq M(B_{1}^{i}).$ (5.1) ###### Claim 1. For all $i\in[\beta n/2]_{0}$, the following properties hold. 1. $(\mathrm{i})$ $B_{1}^{i}$ does not span any edge from $M$. 2. $(\mathrm{ii})$ $B_{2}^{i}$ is an independent set. 3. $(\mathrm{iii})$ All but at most $4\beta^{-1}$ vertices $v\in B_{2}^{i-1}$ satisfy that $e_{H}\left(v,\bigcup_{j=1}^{i}B_{1}^{j}\right)\geq(\alpha-\beta)n.$ ###### Proof. We proceed by induction on $i$. As a base case, note that $(\mathrm{i})$ holds trivially for $B_{1}^{0}$, we have already established that $B_{2}^{0}=R^{\prime}$ is an independent set, and $(\mathrm{iii})$ is vacuously true. Now, for some $i\in[\beta n/2-1]_{0}$, assume that the properties in the statement hold for all $j\in[i]_{0}$; we want to show that they also hold for $i+1$. Note that (5.1) and property $(\mathrm{i})$ imply that for all $(j,\ell),(j^{\prime},\ell^{\prime})\in([i]_{0})\times[2]$ with $(j,\ell)\neq(j^{\prime},\ell^{\prime})$ we have $B_{\ell}^{j}\cap B_{\ell^{\prime}}^{j^{\prime}}=\varnothing$. (5.2) Observe that, if $|B_{1}^{i+1}|\leq 1$, then $(\mathrm{i})$ and $(\mathrm{ii})$ hold trivially, and $B_{1}^{i+j}=\varnothing$ for all $j\geq 2$. Therefore, for the proof of these two properties we may assume that $|B_{1}^{i+1}|\geq 2$ and, thus, $|B_{1}^{j}|\geq 2$ for all $j\in[i]$. In order to prove $(\mathrm{i})$, suppose for a contradiction that $B_{1}^{i+1}$ spans some edge $e=xy\in M$. Our aim is to construct a matching larger than $M$. To do so, first note that, by the definition of $B_{1}^{i+1}$ in (5.1), we may find two distinct vertices $x_{2}^{i},y_{2}^{i}\in B_{2}^{i}$ such that $xx_{2}^{i},yy_{2}^{i}\in E(H)$. We now proceed recursively for $j\in[i]$, starting with $j=i$ and decreasing its value, as follows: * • Let $x_{1}^{j}\coloneqq M(x_{2}^{j})$ and $y_{1}^{j}\coloneqq M(y_{2}^{j})$, so $x_{1}^{j},y_{1}^{j}\in B_{1}^{j}$. * • Choose two distinct vertices $x_{2}^{j-1},y_{2}^{j-1}\in B_{2}^{j-1}$ such that $x_{1}^{j}x_{2}^{j-1},y_{1}^{j}y_{2}^{j-1}\in E(H)$ (which can be done by the definition of $B_{1}^{j}$). Now consider the path $P\coloneqq x_{2}^{0}x_{1}^{1}x_{2}^{1}\ldots x_{1}^{i}x_{2}^{i}xyy_{2}^{i}y_{1}^{i}\ldots y_{2}^{1}y_{1}^{1}y_{2}^{0}$ (where (5.2) guarantees that this is indeed a path). We may then take $M_{1}\coloneqq M\triangle E(P)$.
It is easy to verify that $|M_{1}|=|M|+1$, which contradicts the maximality of $M$. In order to prove $(\mathrm{ii})$, we proceed similarly. Suppose for a contradiction that $B_{2}^{i+1}$ is not an independent set and let $e=v_{2}^{i+1}w_{2}^{i+1}\in E(H[B_{2}^{i+1}])$. We now proceed recursively for $j\in[i+1]$, starting with $j=i+1$ and decreasing its value, as follows: * • Let $v_{1}^{j}\coloneqq M(v_{2}^{j})$ and $w_{1}^{j}\coloneqq M(w_{2}^{j})$, so $v_{1}^{j},w_{1}^{j}\in B_{1}^{j}$. * • Choose two distinct vertices $v_{2}^{j-1},w_{2}^{j-1}\in B_{2}^{j-1}$ such that $v_{1}^{j}v_{2}^{j-1},w_{1}^{j}w_{2}^{j-1}\in E(H)$ (which can be done by the definition of $B_{1}^{j}$). Consider the path $P^{\prime}\coloneqq v_{2}^{0}v_{1}^{1}v_{2}^{1}\ldots v_{1}^{i}v_{2}^{i}v_{1}^{i+1}v_{2}^{i+1}w_{2}^{i+1}w_{1}^{i+1}w_{2}^{i}w_{1}^{i}\ldots w_{2}^{1}w_{1}^{1}w_{2}^{0}$ (in order to guarantee that this is indeed a path, we are using the fact that we have now proved that (5.2) also holds for $i+1$). We may now take $M_{1}^{\prime}\coloneqq M\triangle E(P^{\prime})$ and, again, verify that $|M_{1}^{\prime}|=|M|+1$. Finally, suppose there are $k\coloneqq\lceil 4\beta^{-1}\rceil$ distinct vertices $v_{1},\ldots,v_{k}\in B_{2}^{i}$ such that for all $\ell\in[k]$ we have $e_{H}\left(v_{\ell},\bigcup_{j=1}^{i+1}B_{1}^{j}\right)<(\alpha-\beta)n.$ (5.3) By (5.1), for each $j\in[i-1]_{0}$ and $\ell\in[k]$ we have that $e_{H}(v_{\ell},B_{2}^{j})\leq 1$. It follows from this, the fact that $E(H[B_{2}^{i}])=\varnothing$ (by $(\mathrm{ii})$), the assumption on the minimum degree of $H$ and (5.3) that for each $\ell\in[k]$ we have $e_{H}\left(v_{\ell},V\setminus\left(B_{1}^{i+1}\cup\bigcup_{j=1}^{i}(B_{1}^{j}\cup B_{2}^{j})\right)\right)>\beta n-i\geq\beta n/2.$ Furthermore, again by (5.1), we must have $e_{H}(B_{2}^{i},B_{2}^{i+1})\leq n$. Combining this with the previous bound, it follows that $e_{H}\left(B_{2}^{i},V\setminus\bigcup_{j=1}^{i+1}(B_{1}^{j}\cup B_{2}^{j})\right)>k\beta n/2-n\geq n,$ so there must exist some vertex $y\in V\setminus\bigcup_{j=1}^{i+1}(B_{1}^{j}\cup B_{2}^{j})$ with $e_{H}(y,B_{2}^{i})\geq 2$, which contradicts the definition of $B_{1}^{i+1}$, so $(\mathrm{iii})$ holds. ∎ Consider the smallest $j\geq 1$ such that $|B_{1}^{j}|<\beta n/2$ and let $i^{*}\coloneqq j-1$. Note that $i^{*}\leq\beta^{-1}<\beta n/2$ and that Claim 1$(\mathrm{iii})$ for $i=1$ guarantees that $i^{*}\geq 1$. ###### Claim 2. All but at most $4\beta^{-1}$ vertices $v\in B_{2}^{i^{*}}$ satisfy that $e_{H}\left(v,\bigcup_{j=1}^{i^{*}}B_{1}^{j}\right)\geq(\alpha-\beta)n.$ ###### Proof. Similarly to the proof of property $(\mathrm{iii})$ in Claim 1, suppose there is a set $S\subseteq B_{2}^{i^{*}}$ of size $k\coloneqq\lceil 4\beta^{-1}\rceil$ such that for every $v\in S$ we have $e_{H}(v,\bigcup_{j=1}^{i^{*}}B_{1}^{j})<(\alpha-\beta)n$. By (5.1), for each $j\in[i^{*}-1]_{0}$ and $v\in S$ we have that $e_{H}(v,B_{2}^{j})\leq 1$.
Combining these two bounds with the fact that $E(H[B_{2}^{i^{*}}])=\varnothing$ (by Claim 1$(\mathrm{ii})$) and the assumption on the minimum degree of $H$, it follows that for each $v\in S$ we have $e_{H}\left(v,V\setminus\bigcup_{j=1}^{i^{*}}(B_{1}^{j}\cup B_{2}^{j})\right)>\beta n-i^{*},$ so we conclude that $e_{H}\left(S,V\setminus\bigcup_{j=1}^{i^{*}}(B_{1}^{j}\cup B_{2}^{j})\right)>k(\beta n-i^{*}).$ On the other hand, by the definition in (5.1), we have that $e_{H}\left(S,V\setminus\bigcup_{j=1}^{i^{*}}(B_{1}^{j}\cup B_{2}^{j})\right)\leq n+k|B_{1}^{i^{*}+1}|.$ It follows that $|B_{1}^{i^{*}+1}|>\beta n-i^{*}-n/k\geq\beta n/2$, which contradicts the definition of $i^{*}$. ∎ Now, a partition of $V(H)$ as described in the statement can be obtained as follows. Let $W$ be the set of all vertices $v\in\bigcup_{i=1}^{i^{*}}B_{2}^{i}$ for which $e_{H}(v,\bigcup_{i=1}^{i^{*}}B_{1}^{i})<(\alpha-\beta)n$, and let $W^{\prime}$ be the set of all vertices $v\in B_{2}^{0}$ for which $e_{H}(v,B_{1}^{1})<(\alpha-\beta)n$. Let $A\coloneqq W^{\prime}\cup W\cup M(W)$. It follows by Claim 1$(\mathrm{iii})$ together with Claim 2 that $|A|\leq 4\beta^{-1}(2i^{*}+1)\leq 12\beta^{-2}$, so $(\mathrm{H1})$ holds. Then, for each $\ell\in[2]$, let $B_{\ell}\coloneqq\bigcup_{i=1}^{i^{*}}B_{\ell}^{i}\setminus A$, and let $R\coloneqq R^{\prime}\setminus A$ (so $(\mathrm{H2})$ holds). It follows that, for every $v\in R\cup B_{2}$, we have $e_{H}(v,B_{1})\geq(\alpha-\beta)n-|A|\geq(\alpha-2\beta)n$, so $(\mathrm{H3})$ holds. Finally, let $C\coloneqq V\setminus(A\cup B_{1}\cup B_{2})$. Note that, if $C$ is not empty, then it contains a perfect matching by construction. Consider a bipartition of $C$ into sets $C_{1}$ and $C_{2}$ such that $C_{2}=M(C_{1})$ and $B_{1}^{i^{*}+1}\subseteq C_{1}$. Then, $(\mathrm{H4})$ follows by (5.1) and the fact that $i^{*}\leq\beta^{-1}$. ∎ We will also use the following observations repeatedly. Remark 5.2 is a trivial observation, while Remarks 5.3 and 5.4 follow from elementary case analyses. ###### Remark 5.2. Let $G$ be a graph and consider a bipartition $V(G)=A\,\dot\cup\,B$. Let $P\subseteq G$ be a path of length at least $5$ which does not contain two consecutive edges in $A$ or in $B$ (that is, three consecutive vertices in $A$ or in $B$). Then, for each $X\in\\{A,B\\}$, the path $P$ contains two distinct vertices $x,y\in X$ with $\operatorname{dist}_{P}(x,y)\leq 3$. ###### Remark 5.3. Let $G$ be a union of vertex-disjoint paths and cycles, where there are $p$ paths and $c$ cycles. Let $P_{1},P_{2}\subseteq G$ be two non-degenerate vertex-disjoint subpaths of $G$. Let $x$ be an endpoint of $P_{1}$ and $y$ be an endpoint of $P_{2}$, and assume $e\coloneqq\\{x,y\\}\notin E(G)$. Then, $(G\setminus(P_{1}\cup P_{2}))\cup\\{e\\}$ is a union of vertex-disjoint paths and cycles, with at most $p+1$ non-degenerate paths and at most $c+1$ cycles. ###### Remark 5.4. Let $G$ be a union of vertex-disjoint paths and cycles. Let $x,y,z,z^{\prime}\in V(G)$ be such that $e_{1}\coloneqq\\{x,z\\},e_{2}\coloneqq\\{y,z^{\prime}\\}\notin E(G)$ and $x$ and $y$ lie in the same cycle $\mathcal{C}$ of $G$.
Let $P_{1},P_{2},P_{3}\subseteq G$ be three non-degenerate vertex-disjoint subpaths of $G$ such that $P_{1}$ has $z$ as an endpoint, $P_{2}$ has $z^{\prime}$ as an endpoint, and $P_{3}$ is an $(x,y)$-path. Then, $(G\setminus(P_{1}\cup P_{2}\cup P_{3}))\cup\\{e_{1},e_{2}\\}$ may contain a cycle $\mathcal{C}^{\prime}\nsubseteq G$ only if at least one of the following holds: in $G\setminus(P_{1}\cup P_{2}\cup P_{3})$, $x$ and $z$ lie in the same component, or $y$ and $z^{\prime}$ lie in the same component, or $z$ and $z^{\prime}$ lie in the same component. We are finally ready to prove Theorem 1.2. The general idea is similar to the proof of Theorems 1.1 and 1.3, though the details are more involved. We will use Lemma 5.1 to show that $H$ satisfies certain properties and combine these with the properties given by Lemmas 3.1 and 3.3. We will then consider the graph given by the union of $G_{n,1}$ and a large matching of $H$ provided by Lemma 5.1, and iteratively modify this graph to obtain cycles of all lengths. To achieve this, we will construct a special path $P$ which will be used to absorb other vertices, in a similar fashion to the proof of Theorems 1.1 and 1.3, and then find an almost spanning cycle $C$ containing $P$. A key difference with respect to that proof is that the iterative process by which we construct $C$ is more involved, since the graph we start with is more structured. Another key difference is that, while creating $C$, we will isolate some vertices which cannot be absorbed with $P$; we need to make sure that these can be absorbed by the cycle itself at a later stage, so that we may find a Hamilton cycle (but these vertices play no role when constructing cycles shorter than $C$). The properties given by Lemma 5.1 are crucial in proving that our construction works. ###### Proof of Theorem 1.2. By Theorem 1.3, if $H$ contains a matching covering all but $o(n)$ vertices, we are done, so we may assume that the largest matching in $H$ covers at most $n-\sqrt{n}$ vertices. Let $0<\eta\ll\epsilon\ll 1$. Throughout, we always assume $n$ is sufficiently large. Apply Lemma 5.1 with $\alpha\coloneqq(1+\epsilon)(\sqrt{2}-1)$ and $\beta=\eta$. (Note, in particular, that we may assume $\alpha<1/2$.)
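Spelling out this parenthetical: $(1+\epsilon)(\sqrt{2}-1)<1/2$ is equivalent to $\epsilon<\frac{1}{2(\sqrt{2}-1)}-1=\frac{\sqrt{2}-1}{2}\approx 0.207,$ which holds since $\epsilon\ll 1$.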
This yields a partition of $V(H)$ into $A\,\dot\cup\,B_{1}\,\dot\cup\,B_{2}\,\dot\cup\,C_{1}\,\dot\cup\,C_{2}\,\dot\cup\,R$ satisfying properties $(\mathrm{H1})$–$(\mathrm{H4})$ of the statement of the lemma. We define $\gamma_{1}\coloneqq|C_{1}|/n$ and $\gamma_{2}\coloneqq|B_{1}|/n$, so by $(\mathrm{H1})$ and $(\mathrm{H2})$ it follows that $|R|=(1-2\gamma_{1}-2\gamma_{2})n\pm 12\eta^{-2}\geq\sqrt{n}/2$ (5.4) (where the lower bound follows from the bound on the size of a maximum matching in $H$). Note that by $(\mathrm{H3})$ and since $\alpha>\sqrt{2}-1$ we have $\gamma_{2}\geq(1-6\eta)\alpha.$ (5.5) Observe that $\gamma_{1}\leq 1/2-\gamma_{2}$. It follows from this, (5.5), the bound on $\alpha$, and by taking $\eta$ sufficiently small that $\gamma_{1}\leq 86/1000.$ (5.6) It follows from (5.5), $(\mathrm{H1})$, $(\mathrm{H4})$, the fact that $e_{H}(v,C_{1}\cup C_{2})\leq|C_{1}\cup C_{2}|\leq(1-2\gamma_{2})n$, and the bounds on $\delta(H)$ and $\alpha$ that for all $v\in C_{2}$ we have $e_{H}(v,B_{1})\geq n/5$. (5.7) Indeed, for each $v\in C_{2}$ we have that $\displaystyle e_{H}(v,B_{1})$ $\displaystyle=e_{H}(v,V(H))-e_{H}(v,B_{2}\cup R)-e_{H}(v,C_{1}\cup C_{2})-e_{H}(v,A)$ $\displaystyle\geq\alpha n-\eta^{-1}-1-(1-2\gamma_{2})n-12\eta^{-2}$ $\displaystyle\geq(3\alpha-12\eta-1)n-12\eta^{-2}-\eta^{-1}-1\geq n/5.$ Furthermore, by $(\mathrm{H3})$, for any $x,y\in B_{2}\cup R$ we have $|N_{H}(x)\cap N_{H}(y)\cap B_{1}|\geq(2\alpha-\gamma_{2}-4\eta)n$. It then follows from the fact that $\gamma_{2}\leq 1/2$ and the bound on $\alpha$ that for all $x,y\in B_{2}\cup R$ we have $|N_{H}(x)\cap N_{H}(y)\cap B_{1}|\geq(1-20\eta)(2\alpha-\gamma_{2})n$. (5.8) Let $M$ be the union of a perfect matching in $H[C_{1},C_{2}]$ and a perfect matching in $H[B_{1},B_{2}]$. For any $v\in V(M)$, we let $M(v)$ be the unique vertex $w$ such that $vw\in M$.
Similarly, for any set $A\subseteq V(M)$, we let $M(A)\coloneqq\bigcup_{v\in A}M(v)$. Let $G=G_{n,1}$. Throughout the rest of the proof, we will use the fact that $d_{G}(x)=1$ for every $x\in V(H)$ repeatedly and without further mention. We now apply Lemma 3.3, with $\eta$ playing the role of $\epsilon$, to the pairs of sets $(A_{i},B_{i})=(N_{H}(x),N_{H}(y))$ for all $(x,y)\in V(H)\times V(H)$ (with $\alpha_{i}=\beta_{i}=\alpha$) and the pairs of sets $(A_{i},B_{i})=(N_{H}(x)\cap N_{H}(y)\cap B_{1},N_{H}(x)\cap N_{H}(y)\cap B_{1})$ for all pairs $(x,y)\in(B_{2}\cup R)\times(B_{2}\cup R)$ (with $\alpha_{i}=\beta_{i}=(1-20\eta)(2\alpha-\gamma_{2})$, see (5.8)). Condition on the event that the statement of Lemma 3.3 holds for these pairs of sets, and that the statements of Lemma 3.1$(\mathrm{ii})$ and Remark 3.2 hold, which occurs a.a.s. Let $G_{0}\coloneqq M\cup G$. We claim that the following properties hold: 1. (G1) For every $(x,y)\in V(H)\times V(H)$, we have $e_{G}(N_{H}(x),N_{H}(y))\geq(1-\eta)\alpha^{2}n/2$. 2. (G2) For every $(x,y)\in(B_{2}\cup R)\times(B_{2}\cup R)$, we have $e_{G}(N_{H}(x)\cap N_{H}(y)\cap B_{1})\geq(1-50\eta)(2\alpha-\gamma_{2})^{2}n/2.$ 3. (G3) $G_{0}$ is the union of at most $\log^{2}n$ cycles and $(1+o(1))|R|/2$ paths, all vertex-disjoint. 4. (G4) Each of the paths of $G_{0}$ either has both endpoints in $A\cup R$ or is comprised of a single edge in $E(M)\cap E(G)$. Furthermore, every vertex in $A\cup R$ is the endpoint of some path. 5. (G5) All subpaths of $G_{0}$ alternate between edges of $G$ and edges of $M$. 6. (G6) $|E(M)\cap E(G)|\leq\log^{2}n$. Indeed, both (G1) and (G2) follow from Lemma 3.3, (G3) follows from Lemma 3.1$(\mathrm{ii})$ together with $(\mathrm{H2})$ and (5.4), and (G6) holds by Remark 3.2. Finally, (G4) and (G5) must hold by definition. A simple algebraic manipulation with (5.4), (5.5) and the definition of $\alpha$ shows that $(1-50\eta)(2\alpha-\gamma_{2})^{2}n\geq(1+\eta)|R|.$ (5.9) Using (5.5) it is also easy to check that $(1-3\eta)\alpha^{2}n\geq(1-50\eta)(2\alpha-\gamma_{2})^{2}n$, hence $(1-3\eta)\alpha^{2}n\geq(1+\eta)|R|.$ (5.10) The bounds in (5.9) and (5.10) are crucial for our proof. Our goal is to show that we can find cycles of all possible lengths in $H\cup G_{0}=H\cup G$; to achieve this, we will need to modify $G_{0}$ through an algorithm, where we will delete some edges of $G_{0}$ and add some edges of $H$ for each subsequent alteration. The outcome of this process will be an ‘almost’ spanning cycle $C$, and we will make sure to satisfy certain properties which will allow us to obtain all the desired cycles. We begin with a high-level sketch of the process. At each step of the algorithm, we always think of the graph as a union of vertex-disjoint paths and cycles; the vertex set can become smaller at each step, though, as we sometimes ‘delete’ some vertices (which we will need to ‘absorb’ at the end of the process). We split the process into six steps (see Steps 1–6 below). In Step 1, we simply remove all ‘isolated’ edges of $G_{0}$, since these will not be useful for us. In Step 2 we create an absorbing path $P\subseteq H\cup G_{0}$ that will allow us to incorporate a suitable number of vertices into a cycle; this choice will be used at the end of the process to guarantee that we can construct cycles of all lengths. Crucially, we must make sure that $P$ is not modified through the remaining steps of the process. 
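The absorbing mechanism behind $P$ can be illustrated in a few lines of Python (again a toy illustration only, not part of the proof; all names are ours): each reservoir vertex $u_{j}$ is assigned an edge $x_{j}y_{j}$ of the final cycle with $x_{j},y_{j}\in N_{H}(u_{j})$, so that replacing $x_{j}y_{j}$ by the path $x_{j}u_{j}y_{j}$ absorbs $u_{j}$ and lengthens the cycle by one.

```python
# Minimal sketch of absorption: gadgets[j] = (x_j, y_j, u_j), where x_j y_j is
# an edge of the cycle and u_j is an isolated vertex whose H-neighbourhood
# contains both x_j and y_j.  Absorbing u_j replaces x_j y_j by x_j u_j y_j.
def absorb(cycle_edges, gadgets, J):
    edges = set(cycle_edges)
    for j in J:
        x, y, u = gadgets[j]
        edges.remove(frozenset((x, y)))
        edges |= {frozenset((x, u)), frozenset((u, y))}
    return edges

# A 4-cycle on 1, 2, 3, 4 absorbs the vertex 9 through the edge 12:
C4 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
C5 = absorb(C4, {0: (1, 2, 9)}, [0])
assert len(C5) == 5        # the 5-cycle 1 9 2 3 4
```

This is precisely the role that the edges $x_{i}y_{i}$ chosen for the vertices of $U$ in Step 2 below will play.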
At this point all paths in the graph will have their endpoints in $A\cup B_{2}\cup R$ (when constructing $P$ we may produce new paths, but we make sure that the new endpoints lie in $B_{2}$). We would like to have all their endpoints in $B_{2}\cup R$ so that we may use (G2) in the future, so through Step 3 we will make it so that all endpoints are in $B_{2}\cup R$. Step 4 is used to guarantee that $P$ does not lie in a cycle; this helps us avoid possible problems in Step 5, where we turn all cycles into paths (making sure that all the resulting endpoints lie in $B_{2}\cup R$). At this point, the graph we are considering is a union of vertex-disjoint paths with all their endpoints in $B_{2}\cup R$. Then, using the aforementioned (G2), in Step 6 we can iteratively combine all the paths into a single, ‘almost’ spanning path containing $P$, which we later turn into an almost spanning cycle $C$ containing $P$. The bound given in (5.9) is crucial to prove that the almost spanning path can be constructed. For simplicity of notation, we will always refer to the graph by the same name throughout each of the steps, but the graph is continuously updated in each step of the process. As already mentioned above, throughout the process we sometimes ‘delete’ some vertices from the graph. What this means is that they no longer play a role in this process and will not be vertices of the resulting cycle $C$. We will need to ensure that these vertices can later be ‘absorbed’ into the cycle (some via $P$, and the rest without help from $P$). We will denote this set of ‘deleted’ vertices by $S$. We think of these vertices simply as being isolated. Note, however, that not all vertices of $G_{0}$ that become isolated through the process are added to $S$, as we will still allow some degenerate paths of length $0$ to be part of the graph. Thus, we will always explicitly say which vertices are added to $S$ (whenever we do this, we mean that these vertices are removed from (the current version of) $G_{0}$, so we may keep thinking of $G_{0}$ as a union of paths and cycles with the desired properties). In particular, we will always think of the graph at any point throughout the process as a graph on vertex set $V(H)\setminus S$ (and, when choosing vertices for any purpose before the end of Step 6, we will always avoid $S$, even if not explicitly stated). When altering the graph throughout the process, we will often need to use some edges of $G$ which are ‘available’ to us. To keep track of these, we define a set $D$ of ‘unavailable’ edges. In particular, all edges deleted from $G_{0}$ are automatically added to $D$ (so we will not add them explicitly), but some other edges are added to $D$ to ensure that our process will work (in particular, to ‘protect’ $P$ throughout the process). Whenever we add edges to $D$ without removing them from the graph, we say so explicitly. Finally, for some of the alterations, given some vertex $v$, we will want to find a neighbour $b_{1}\in N_{H}(v)\cap B_{1}$. When using these neighbours, we will want to delete the edge of $M$ containing them (this will help us to guarantee that the resulting graph remains a union of disjoint paths and cycles). However, we cannot always delete this edge of $M$ (in particular, if it has already been deleted). Therefore, it will be important for us to keep track of those vertices $b_{1}\in B_{1}$ which we cannot choose at any given step (they are also ‘unavailable’). We will denote the set of these vertices by $K$. 
As with $D$, we will often update $K$ implicitly; in particular, whenever an edge $b_{1}b_{2}\in E(M)$ with $b_{1}\in B_{1}$ is added to $D$, implicitly or explicitly, $b_{1}$ is added to $K$. However, we will sometimes explicitly add extra vertices to this set (in particular, to ‘protect’ $P$). We remark that $K$ and $S$ need not be disjoint. Let us now describe the steps of our process. The fact that the sets $S$, $D$ and $K$ are updated immediately throughout the iterations in each of the steps is crucial in some of the choices we make below. We also remark that we continually use the fact that the graph we consider is a union of vertex-disjoint paths and cycles, often implicitly. Step 1. For each edge $e=xy\in E(M)\cap E(G)$, delete $e$ from $G_{0}$ and add $x$ and $y$ to $S$. Let the resulting graph be denoted by $G_{1}$. We claim that the following properties hold: 1. (A1) $|S|,|D|,|K|\leq 2\log^{2}n$. 2. (A2) $G_{1}$ is the union of at most $\log^{2}n$ cycles and $(1+o(1))|R|/2$ paths, all vertex-disjoint. 3. (A3) Each of the paths of $G_{1}$ has both endpoints in $A\cup R$. Furthermore, $d_{G_{1}}(x)=1$ for every $x\in A\cup R$. 4. (A4) All subpaths of $G_{1}$ alternate between edges of $G$ and edges of $M$. Indeed, (A1) follows immediately by (G6), (A3) follows by (G4) and the deletions in Step 1, and (A2) and (A4) follow from (G3) and (G5), respectively. In particular, note that (A4) and the fact that all edges of $M$ are contained in $B_{1}\cup B_{2}\cup C_{1}\cup C_{2}$ imply the following: 1. (A5) If $e\in E(G_{1}[A\cup R])$, then $e$ is an isolated edge in $G_{1}$. Step 2. Let $G_{1}^{\prime}\coloneqq G_{1}$; this is a copy of the ‘original’ $G_{1}$ which we will not update. We now wish to construct an absorbing path $P$. Let $U\subseteq R$ be an arbitrary set of $t\coloneqq\lceil 3\eta^{-1}\alpha^{-2}\rceil$ vertices (such a set must exist by (5.4) and (A1)). Fix a labelling of the vertices in $U$ as $u_{1},\ldots,u_{t}$. For each $i\in[t]$, 1. 1. choose an edge $x_{i}y_{i}\in E_{G}(N_{H}(u_{i})\cap B_{1})\setminus D$ and add it to $D$ (but do not remove it from $G_{1}$). These edges $x_{i}y_{i}$ will later be part of $P$, and they are added to $D$ in order to ‘protect’ $P$. Their existence, for each $i\in[t]$, follows from (G2), (5.9), (5.4), (A1), and the value of $t$. Let $U_{E}\coloneqq\\{x_{i},y_{i}:i\in[t]\\}$ and $U_{M}\coloneqq M(U_{E})\subseteq B_{2}$. Remove all the edges of $M$ incident to $U_{E}\setminus\\{x_{1},y_{t}\\}$ from $G_{1}$ (in particular, by (A4), now each edge $x_{i}y_{i}$ becomes an isolated edge, and the endpoints of the deleted edges in $U_{M}$ become endpoints of some paths in the resulting graph) and, in order to ‘protect’ $P$, add $x_{1}$ and $y_{t}$ to $K$, even if their respective edges in $M$ are not removed from the graph. Next we are going to connect the edges $x_{i}y_{i}$ we just chose into a path $P$. We will achieve this by using a $(y_{i},x_{i+1})$-path of length $3$ for each $i\in[t-1]$. We are going to follow a process to choose a new set of edges in $G$, which will be the central edges of the aforementioned paths. By the end of the process, we will have constructed a path $P$ satisfying the following properties: 1. (B1) $V(P)\cap U=\varnothing$. 2. (B2) $M(x_{1}),M(y_{t})\in B_{2}$ are the endpoints of $P$. 3. (B3) $P$ and $E(G)\setminus D$ are vertex-disjoint. Together with $P$, we will have a graph $G_{2}$ which satisfies the following properties: 1. (B4) $P\subseteq G_{2}$. 2.
(B5) $G_{2}$ is the union of at most $2\log^{2}n$ cycles and $(1+o(1))|R|/2$ paths, all vertex-disjoint. 3. (B6) Each of the paths of $G_{2}$ has both endpoints in $A\cup B_{2}\cup R$. 4. (B7) All subpaths of $G_{2}\setminus P$ have at most two consecutive vertices in $A\cup R\cup B_{2}\cup C_{2}$ or in $B_{1}\cup C_{1}$. 5. (B8) If $e\in E(G_{1}^{\prime}[A\cup R])$, then either $e\in E(P)$ or $e$ is an isolated edge in $G_{2}$. Let us now describe the process. We note that (A4) is crucial and will be used implicitly throughout. Let $W_{1}\coloneqq U_{E}\cup M(\\{x_{1},y_{t}\\})$. For each $i\in[t]$, we will have a set $W_{i}$ which will consist of all vertices of $P$ which have already been chosen. We claim that, at the start of each iteration of the process (that is, for each $i\in[t-1]$), we may assume that the following properties hold: 1. (B9) $|S|,|D|,|K|\leq 3\log^{2}n$. 2. (B10) $W_{1}\subseteq W_{i}$. 3. (B11) $|W_{i}|=2t+2i$. 4. (B12) $W_{i}\cap B_{1}\subseteq K$. 5. (B13) $M(W_{i}\cap B_{2})\subseteq K$. 6. (B14) $G_{1}$ is a union of vertex-disjoint paths and cycles. 7. (B15) All of the paths of $G_{1}$ have both endpoints in $A\cup B_{2}\cup R$, or have one endpoint in $A\cup B_{2}\cup R$ and the other is $y_{i}$, or have one endpoint in $A\cup B_{2}\cup R$ and the other is $x_{t}$, or consist of a unique edge $x_{j}y_{j}$ for some $j\in[t-1]\setminus[i]$. 8. (B16) For any vertex $v\in B_{2}$ which is an endpoint of some path in $G_{1}$ we have $M(v)\in K$. 9. (B17) All edges in $G_{1}\setminus G_{1}^{\prime}$ are contained in $W_{i}$ or have one endpoint in $B_{1}$ and the other in $C_{2}$. 10. (B18) All edges $e\in E(G_{1}^{\prime}\setminus G_{1})\cap M$ which are contained in $C_{1}\cup C_{2}$ are incident to $W_{i}$, or contained in $S$, or satisfy that $\operatorname{dist}_{G_{1}^{\prime}}(e,K)\leq 1$. 11. (B19) $N_{G_{1}}(A\cup U)\cap K\subseteq W_{i}$. Before starting the process (that is, for $i=1$), (B9) holds by (A1), the value of $t$ and the fact that, so far in this step, we have increased the sizes of $S$, $D$ and $K$ by at most $3t$; (B10) is trivial; (B11) and (B12) follow by the definition of $W_{1}$ and $K$; (B13) holds trivially since $W_{1}\cap B_{2}=M(\\{x_{1},y_{t}\\})$ and thus $M(W_{1}\cap B_{2})=\\{x_{1},y_{t}\\}\subseteq K$ by definition; (B14) holds by (A2), since so far in this step we have only deleted edges; (B15) and (B16) follow from (A3) and the fact that, so far, throughout this step we have only deleted edges of $M$ with one endpoint in $B_{2}$ (which then becomes an endpoint of some path) and the other endpoint in one of the edges $x_{i}y_{i}$ (but not the edges containing $x_{1}$ or $y_{t}$); (B17) is vacuously true since $E(G_{1}\setminus G_{1}^{\prime})=\varnothing$; (B18) is vacuously true since all edges of $G_{1}^{\prime}$ which have been deleted so far are contained in $B_{1}\cup B_{2}$, and (B19) holds by construction, since at this point we have $K\subseteq W_{1}\cup S$ and $N_{G_{1}}(A\cup U)\cap S=\varnothing$ by the definition of $S$. We will later show that these properties must indeed hold throughout. For each $i\in[t-1]$, we proceed as follows: 1. 2. Choose an edge $w_{i}z_{i}\in E_{G}(N_{H}(y_{i})\setminus U,N_{H}(x_{i+1})\setminus U)\setminus D$ such that 1. (B20) $\operatorname{dist}_{G_{1}}(\\{w_{i},z_{i}\\},W_{i})\geq 5$, 2. (B21) $\operatorname{dist}_{G_{1}^{\prime}}(\\{w_{i},z_{i}\\},W_{i})\geq 2$, and 3. (B22) $\operatorname{dist}_{G_{1}^{\prime}}(\\{w_{i},z_{i}\\},K)\geq 5$. Note that such an edge must exist by (G1), (B9), (B11), (B14), and the value of $t$.
In order to ‘protect’ $P$, add $w_{i}z_{i}$ to $D$ (but do not remove it from $G_{1}$). 2. 3. Now we make some modifications to ensure $G_{1}$ will remain a union of vertex-disjoint paths and cycles after adding $y_{i}w_{i}$ and $z_{i}x_{i+1}$. If the component of $G_{1}$ containing $w_{i}z_{i}$ is a cycle of length at most $8$, we remove all edges of this cycle (except $w_{i}z_{i}$) from $G_{1}$ and add all its vertices (except $w_{i}$ and $z_{i}$) to $S$. Otherwise, for each $x\in\\{w_{i},z_{i}\\}$, we consider several cases: 1. 3.1. If $x\in A\cup R$, do nothing. 2. 3.2. If $x\in B_{1}$, remove the edge of $M$ containing $x$ from $G_{1}$ (which must have belonged to $G_{1}$ by the definition of $K$ and 2.(B22)). Note that $M(x)\in B_{2}$ becomes an endpoint of a path in the resulting graph. 3. 3.3. If $x\in C_{1}$, consider the edge $xy\in E(M)$ (which must lie in $G_{1}$ by (B18), 2.(B20), 2.(B21) and 2.(B22)) and let $z^{*}$ be the other neighbour of $y\in C_{2}$ in $G_{1}$ (note that it must exist by (B14) and (B15)). Choose some vertex $z\in(N_{H}(y)\cap B_{1})\setminus(K\cup N_{G_{1}}(A\cup U)\cup\\{w_{i},z_{i},z^{*}\\})$, which must exist by (5.7), (B9), $(\mathrm{H1})$, and the value of $t$. Then, add $yz$ to $G_{1}$ and remove both $xy$ and the edge of $M$ containing $z$ (which must have belonged to $G_{1}$ by the definition of $K$). Now $M(z)\in B_{2}$ becomes an endpoint of a path in the resulting graph. 4. 3.4. If $x\in B_{2}\cup C_{2}$, consider the edge $xy\in E(M)$ (which must lie in $G_{1}$ by (B18), 2.(B20), 2.(B21) and 2.(B22)) and let $z$ be the other neighbour of $y$ in $G_{1}$ (which must exist by (B10), (B14), (B15) and 2.(B20)). Now consider the following cases: 1. 3.4.1. If $z\in A\cup R$, remove $xy$ and $yz$ from $G_{1}$ and add $y$ to $S$. We now think of $z$ as a degenerate path. 2. 3.4.2. If $z\in B_{2}$, let $z^{\prime}$ be the other neighbour of $z$ (which must exist by (B14), (B16), (B17) and 2.(B22)) and observe that, by (B17) and 2.(B22), we must have $zz^{\prime}\in M$. Remove $xy$ and $yz$ from $G_{1}$ and add $y$ to $S$. Furthermore, add $z^{\prime}$ to $K$ (but do not remove $zz^{\prime}$ from $G_{1}$). Then, $z\in B_{2}$ becomes the endpoint of a path in the resulting graph. 3. 3.4.3. If $z\in B_{1}$, let $zz^{\prime}\in E(M)$ (which must lie in $G_{1}$ by (B17), 2.(B22) and the definition of $K$), remove $xy$, $yz$ and $zz^{\prime}$ from $G_{1}$, and add $y$ and $z$ to $S$. Now $z^{\prime}\in B_{2}$ becomes an endpoint of a path. 4. 3.4.4. If $z\in C_{2}$, let $z^{*}$ be the other neighbour of $z$ in $G_{1}$ (which must exist by (B14) and (B15)), and choose some vertex $z^{\prime}\in(N_{H}(z)\cap B_{1})\setminus(K\cup N_{G_{1}}(A\cup U)\cup\\{w_{i},z_{i},z^{*}\\})$ (which must exist by (5.7), (B9), $(\mathrm{H1})$, and the value of $t$). Then, add $zz^{\prime}$ to $G_{1}$ and remove $xy$, $yz$ and the edge of $M$ containing $z^{\prime}$ (which must lie in $G_{1}$ by the definition of $K$) from $G_{1}$, and add $y$ to $S$. Now $M(z^{\prime})\in B_{2}$ becomes an endpoint of a path. 5. 3.4.5. If $z\in C_{1}$, let $zz^{\prime}\in E(M)$ (which must lie in $G_{1}$ by (B17), (B18), 2.(B20) and 2.(B22)), let $z^{*}$ be the other neighbour of $z^{\prime}\in C_{2}$ in $G_{1}$ (which must exist by (B14) and (B15)), and choose some vertex $z^{\prime\prime}\in(N_{H}(z^{\prime})\cap B_{1})\setminus(K\cup N_{G_{1}}(A\cup U)\cup\\{w_{i},z_{i},z^{*}\\})$ (which must exist by (5.7), (B9), $(\mathrm{H1})$, and the value of $t$). 
Then, add $z^{\prime}z^{\prime\prime}$ to $G_{1}$ and remove $xy$, $yz$, $zz^{\prime}$ and the edge of $M$ containing $z^{\prime\prime}$ (which must lie in $G_{1}$ by the definition of $K$) from $G_{1}$, and add $y$ and $z$ to $S$. The vertex $M(z^{\prime\prime})\in B_{2}$ becomes an endpoint of a path. 3. 4. Add $y_{i}w_{i}$ and $z_{i}x_{i+1}$ to $G_{1}$. Let $W_{i+1}\coloneqq W_{i}\cup\\{w_{i},z_{i}\\}$ and iterate. Note now that (B9) follows from (A1) and the fact that $S$, $K$ and $D$ increase their sizes by at most $8$ in each step of the process above. This, combined with the at most $3t$ increase before the process and the value of $t$, immediately yields the bound. (B10) holds trivially by definition. (B11) holds trivially by the definition of $W_{i+1}$, since $w_{i},z_{i}\notin W_{i}$ by 2.(B20) and are distinct. (B12) and (B13) hold because, throughout the process, if $x\in B_{1}\cup B_{2}$, we delete the edge of $M$ containing $x$, which implies the conditions. (B14) follows from the fact that, throughout, we guarantee that all vertices of the graph $G_{1}$ have degree at most $2$: indeed, we make sure to delete one edge incident to each of the vertices that lie in one of the edges which will be added; the only exception to this is case 3.1, in which there is no need to delete edges by (A3) and the fact that the different vertices $x$ throughout the process are always distinct. In order to prove (B15), note that all the newly created endpoints lie in $B_{2}$, as remarked throughout the process. Furthermore, the edges which are added to the graph in cases 3.3, 3.4.4 and 3.4.5 have one of their endpoints in $C_{2}$ and the other in $B_{1}\setminus K$, which by (B12) cannot belong to any of the vertices in $W_{i}$; this guarantees that the different paths consisting of a single edge $x_{j}y_{j}$ with $j\in[t-1]\setminus\\{1\\}$ do not become part of any longer paths or cycles until the iteration in which $w_{j-1}z_{j-1}$ is considered. Observe that avoiding the vertex $z^{*}$ in cases 3.3, 3.4.4 and 3.4.5 is crucial, as otherwise we would create an endpoint in $C_{2}$. (B16) follows directly from the process, since every time we create a new endpoint $v\in B_{2}$ we do so by deleting the edge of $M$ containing $v$, except in case 3.4.2, where we artificially add $M(v)$ to $K$. (B17) holds directly by construction: in the $i$-th iteration, we add two edges, $y_{i}w_{i}$ and $z_{i}x_{i+1}$, which are contained in $W_{i+1}$ by definition, and possibly some edges with an endpoint in $B_{1}$ and the other in $C_{2}$ (this happens in cases 3.3, 3.4.4 and 3.4.5). Now, consider (B18). In case 3.3, the deleted edge is incident to a vertex which is added to $W_{i+1}$; the same is true for the edge containing $x$ in all subcases of case 3.4; by (B17), we are guaranteed that the other edge $e$ deleted in case 3.4.5 now satisfies $\operatorname{dist}_{G_{1}^{\prime}}(e,K)=1$; the only other case in which one such edge may be deleted is when $w_{i}z_{i}$ is contained in a cycle $C$ of length at most $8$, but here all deleted edges are either incident to $W_{i+1}$ or contained in $S$. Finally, (B19) can be checked throughout the process by (B15), (B17) and the choices in cases 3.3, 3.4.4 and 3.4.5. Note also that (B9)–(B13) hold after the last iteration of the process (that is, for $i=t$). Similarly, (B17) and (B19) also hold after the process (that is, replacing $G_{1}$ by $G_{2}$ and with $i=t$). These will come in useful later. Let $G_{2}$ be the graph resulting from the process above. 
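Before defining $P$ explicitly, let us record a sanity check on the bookkeeping: $P$ will have vertex set $W_{t}$, and indeed $|U_{E}|+|M(\\{x_{1},y_{t}\\})|+|\\{w_{i},z_{i}:i\in[t-1]\\}|=2t+2+2(t-1)=4t=|W_{t}|,$ in agreement with (B11) for $i=t$.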
Let $P\coloneqq M(x_{1})x_{1}y_{1}w_{1}z_{1}x_{2}\ldots y_{t-1}w_{t-1}z_{t-1}x_{t}y_{t}M(y_{t})$ (recall that all these vertices are distinct, as follows from the choices throughout the process). Observe that (B1), (B2) and (B3) hold by definition. In order to prove that (B4) holds, note first that all the edges of $P$ must belong to $G_{1}$ at some point throughout the process, since $E(P)\cap E(G)\subseteq E(G_{1}^{\prime})$, $x_{1}M(x_{1}),y_{t}M(y_{t})\in E(G_{1}^{\prime})$ and all other edges of $P$ are added in the fourth step of the process throughout the different iterations. The fact that none of these edges are deleted throughout the process follows from (B12), (B13) and 2.(B20): indeed, throughout the process, all deleted edges either lie at distance at most $3$ from $w_{i}$ or $z_{i}$, so they cannot be incident to $W_{i}$ by 2.(B20), or they have an endpoint in $B_{1}\setminus K$ and belong to $M$ (where we further ensure that said endpoint cannot be either $w_{i}$ or $z_{i}$), and therefore cannot be incident to $W_{i}$ by (B12) and (B13). Note, furthermore, that the edges deleted in cases 3.2, 3.3 and 3.4 for $w_{i}$ cannot ‘interfere’ with those deleted for $z_{i}$, since in these cases we are guaranteed that $\operatorname{dist}_{G_{1}\setminus\\{w_{i}z_{i}\\}}(w_{i},z_{i})\geq 9$. (B5) holds by (A2), (B14) and since, by Remark 5.3, the number of paths and cycles does not increase too much throughout the process. Indeed, the number of paths increases by at most $2t$ due to the deletions before the process, and then by at most $2t$ due to the deletions throughout the process, and at most $2t$ new cycles are created overall (see Remark 5.3 for cases 3.3, 3.4.4 and 3.4.5). This, combined with the bounds in (A2), (5.4) and the value of $t$, guarantees that (B5) holds. (B6) is a direct consequence of (B15) together with the final iteration of the process. Now, by (A4) we know that all subpaths of $G_{1}^{\prime}$ have at most two consecutive vertices in $A\cup R\cup B_{2}\cup C_{2}$ or in $B_{1}\cup C_{1}$. By (B17), all edges added throughout the process are either contained in $W_{t}=V(P)$ or have one endpoint in $B_{1}$ and the other in $C_{2}$. This guarantees that all subpaths of $G_{2}$ (except possibly those that intersect $P$) must also have at most two consecutive vertices in $A\cup R\cup B_{2}\cup C_{2}$ or in $B_{1}\cup C_{1}$. That is, (B7) holds. Finally, (B8) follows directly from (A5) and (B17). Step 3. Our next goal is to obtain a graph $G_{3}$ which satisfies the following properties: 1. (C1) $P\subseteq G_{3}$. 2. (C2) $G_{3}$ is the union of at most $3\log^{2}n$ cycles and $(1+o(1))|R|/2$ paths, all vertex-disjoint. 3. (C3) Each of the paths of $G_{3}$ has both endpoints in $B_{2}\cup R$. 4. (C4) All subpaths of $G_{3}\setminus P$ have at most two consecutive vertices in $R\cup B_{2}\cup C_{2}$ or in $B_{1}\cup C_{1}$. 5. (C5) $U\cap V(G_{3})=\varnothing$. The goal of (C5) is to have all vertices in $U$ isolated, so that they can later be absorbed by $P$. In order to achieve this, roughly speaking, we are going to remove all edges incident to $(A\cup U)\setminus V(P)$ from $G_{2}$ and then perform a few more alterations on the graph to guarantee that (C3) holds. For these alterations, we are going to follow a process. We claim that, throughout, we may assume that the following properties hold: 1. (C6) $|S|,|D|,|K|\leq 4\log^{2}n$. 2. (C7) $G_{2}$ is a union of vertex-disjoint paths and cycles. 3.
(C8) Each of the paths of $G_{2}$ has both endpoints in $A\cup B_{2}\cup R$. 4. (C9) All edges in $G_{2}\setminus G_{1}^{\prime}$ are contained in $V(P)$ or have one endpoint in $B_{1}$ and the other in $C_{2}$. Note that, before we start the process, (C6), (C7), (C8) and (C9) are guaranteed by (B9), (B5), (B6) and (B17), respectively. We proceed as follows. For each edge $e=xy\in E(G_{2})$ with $x\in(A\cup U)\setminus V(P)$, we proceed as follows depending on the position of $y$. Note that, by (B5), we are guaranteed that $y\notin V(P)\setminus M(\\{x_{1},y_{t}\\})$. 1. 1. If $y\in A\cup U\cup B_{2}$, remove $xy$ from $G_{2}$. 2. 2. If $y\in R\setminus U$, remove $xy$ from $G_{2}$ and add $y$ to $S$. Note that, by (B8), this does not delete any other edges of $G_{2}$. 3. 3. If $y\in B_{1}$, remove $xy$ and the edge of $M$ containing $y$ from $G_{2}$ (note that this edge must have been in $G_{2}$ by (B19) and the definition of $K$, since $y\notin V(P)$; see (B2)) and add $y$ to $S$. Observe that $M(y)\in B_{2}$ becomes an endpoint in the resulting graph. 4. 4. If $y\in C_{2}$, let $z^{*}$ be the other neighbour of $y$ in $G_{2}$ (which must exist by (C7) and (C8)) and choose a vertex $z\in(N_{H}(y)\cap B_{1})\setminus(K\cup\\{z^{*}\\})$ (its existence follows from (5.7) and (C6)), add $yz$ to $G_{2}$, and remove $xy$ and the edge of $M$ containing $z$ from $G_{2}$. (C8) guarantees that $z$ does not become an endpoint in the resulting graph. Instead, $M(z)\in B_{2}$ becomes an endpoint. 5. 5. If $y\in C_{1}$, let $z$ be the other neighbour of $y$ in $G_{2}$ (which exists by (C7) and (C8)), and note that $yz\in E(M)$ (which follows from (C9)), so $z\in C_{2}$. Let $z^{*}$ be the other neighbour of $z$ in $G_{2}$ (which must exist, again, by (C7) and (C8)). Then, choose a vertex $z^{\prime}\in(N_{H}(z)\cap B_{1})\setminus(K\cup\\{z^{*}\\})$ (recall that its existence is guaranteed by (5.7) and (C6)), add $zz^{\prime}$ to $G_{2}$, remove $xy$, $yz$ and the edge of $M$ containing $z^{\prime}$ from $G_{2}$, and add $y$ to $S$. Again, by (C8), $z^{\prime}$ does not become an endpoint, and $M(z^{\prime})\in B_{2}$ does. Once this is done for all the desired edges, add $A\cup U$ to $S$. Note that, by construction, the sizes of $S$, $D$ and $K$ increase by at most $3$ for each edge $xy$ deleted following the previous process. It then follows by $(\mathrm{H1})$, the definition of $U$ in Step 2 and (B9) that (C6) holds throughout, and also after the last step of the process. The remaining properties follow similarly to Step 2. (C7) follows from the fact that, throughout, we guarantee that all vertices of $G_{2}$ have degree at most $2$, since we delete one edge incident to each of the vertices that lie in one of the edges which will be added. (C8) holds since all the newly created endpoints lie in $B_{2}$, as remarked throughout the process. Observe that avoiding $z^{*}$ in cases 4 and 5 ensures that we do not create a new endpoint in $C_{2}$. Finally, (C9) holds since the edges added to the graph in cases 4 and 5 have one endpoint in $B_{1}$ and the other in $C_{2}$ by construction. Note that (C9) also holds after the process, that is, replacing $G_{2}$ by $G_{3}$. Let $G_{3}$ be the graph resulting from the process. (C1) must hold by (B4) and since no edges incident to the internal vertices of $P$ are added or deleted throughout the process (which follows from (B12), (B13) and the fact that $y\notin V(P)\setminus M(\\{x_{1},y_{t}\\})$ throughout).
Furthermore, the process above may again increase the number of paths and cycles, but, by Remark 5.3, it creates at most one new cycle or path for each vertex $y$ above. This, combined with $(\mathrm{H1})$, the definition of $U$ in Step 2, (5.4) and (B5), ensures that (C2) holds. (C3) follows by (C8) since, at the end of the process, we have $A\cap V(G_{3})=\varnothing$; similarly, we have that (C5) holds too. Finally, as happened in Step 2, all edges added in cases 4 and 5 above have one endpoint in $B_{1}$ and the other in $C_{2}$ (see (C9)), so by (B7) we must have that (C4) holds. Step 4. We now want to make sure that $P$ does not lie in a cycle. If $P$ is not contained in a cycle, we are done, so assume it is contained in some cycle $\mathcal{C}$. If $|E(\mathcal{C})|\leq|E(P)|+14$, simply delete $E(\mathcal{C})\setminus E(P)$, and add $V(\mathcal{C})\setminus V(P)$ to $S$. Otherwise, for each $x\in\\{M(x_{1}),M(y_{t})\\}$ (recall these are the endpoints of $P$ and that they lie in $B_{2}$, see (B2)), we proceed as follows. Let $y$ be the first vertex of $\mathcal{C}$ which lies in $B_{2}\cup C_{2}$ when moving along $\mathcal{C}$ away from $P$ starting at $x$ (disregarding $x$ itself). Note that, by (C4) and Remark 5.2, we have that $\operatorname{dist}_{\mathcal{C}}(x,y)\leq 3$. Now, delete all edges of the shortest $(x,y)$-path of $\mathcal{C}$, and add all its internal vertices to $S$. If $y\in B_{2}$, it becomes an endpoint and we are done; otherwise, choose a vertex $z\in(N_{H}(y)\cap B_{1})\setminus(K\cup N_{G_{3}}(y))$ (which must exist by (5.7) and (C6)), add $yz$ to $G_{3}$, and delete the edge of $M$ containing $z$. Note that, in this last case, $M(z)\in B_{2}$ becomes an endpoint in the resulting graph. Let $G_{4}$ be the resulting graph. We claim that the following properties are satisfied: 1. (D1) $|S|,|D|,|K|\leq 5\log^{2}n$. 2. (D2) $P\subseteq G_{4}$. Furthermore, the component of $G_{4}$ containing $P$ is a path. 3. (D3) $G_{4}$ is the union of at most $4\log^{2}n$ cycles and $(1+o(1))|R|/2$ paths, all vertex-disjoint. 4. (D4) For each cycle $\mathcal{C}\subseteq G_{4}$ we have that $|V(\mathcal{C})\cap B_{1}|=|V(\mathcal{C})\cap B_{2}|\pm 5\log^{2}n$. 5. (D5) Each of the paths of $G_{4}$ has both endpoints in $B_{2}\cup R$. 6. (D6) All edges in $G_{4}\setminus G_{1}^{\prime}$ are contained in $V(P)$ or have one endpoint in $B_{1}$ and the other in $C_{2}$. 7. (D7) All subpaths of $G_{4}\setminus P$ have at most two consecutive vertices in $R\cup B_{2}\cup C_{2}$ or in $B_{1}\cup C_{1}$. Indeed, the sizes of $S$, $D$ and $K$ may increase by at most $14$ in this step, so (D1) follows from (C6). (D2) follows from the construction, since no edges incident to any vertex of $P$ have been removed (this follows by the choice of the vertices $z$, (B12) and (B13)). Note that $G_{4}$ is a union of paths and cycles since, again, we have made sure that every vertex has degree at most $2$. Furthermore, even though the number of paths and cycles may have increased with respect to $G_{3}$, by Remark 5.3 we have that they increase by at most $2$. Therefore, (D3) follows from (C2) and (5.4). (D4) follows by combining the bound on $|K|$ in (D1) with (A4). (D5) holds by construction and (C3). Note, in particular, that the choices of the vertices $z$ together with (C2) guarantee that we do not create any new endpoints in $C_{2}$. (D6) follows from (C9) and the fact that the only edges we may have added in this step have one endpoint in $B_{1}$ and the other in $C_{2}$.
This in turn, together with (C4), implies that (D7) holds.

Step 5. Now we want to obtain a graph $G_{5}$ which satisfies the following properties: 1. (E1) $P\subseteq G_{5}$. 2. (E2) $G_{5}$ is the union of at most $(1+o(1))|R|/2$ paths, all vertex-disjoint. 3. (E3) Each of the paths of $G_{5}$ has both endpoints in $B_{2}\cup R$. In order to achieve this, we must alter each cycle in $G_{4}$. Crucially, we will show that, with these alterations, we do not create any new cycles. We claim that, throughout the coming process, we may assume that the following properties hold: 1. (E4) $|S|,|D|,|K|\leq 105\log^{2}n$. 2. (E5) $P\subseteq G_{4}$. Furthermore, the component of $G_{4}$ containing $P$ is a path. 3. (E6) All subpaths of $G_{4}\setminus P$ have at most two consecutive vertices in $R\cup B_{2}\cup C_{2}$ or in $B_{1}\cup C_{1}$. Note that, before the process starts, (E4), (E5) and (E6) hold by (D1), (D2) and (D7), respectively. We proceed by iteratively choosing a cycle $\mathcal{C}\subseteq G_{4}$ and considering the following cases. 1. If $\mathcal{C}$ has length at most $25$, we delete all its edges and add all its vertices to $S$. 2. Otherwise, assume $\mathcal{C}$ contains two vertices $x,y\in B_{2}$ with $\operatorname{dist}_{\mathcal{C}}(x,y)\leq 11$ and let $Q\subseteq\mathcal{C}$ be an $(x,y)$-path of length $\operatorname{dist}_{\mathcal{C}}(x,y)$. Then, remove all edges of $Q$ from $G_{4}$ and add all its internal vertices to $S$. 3. Otherwise, since all cycles in $G_{4}$ are disjoint from $R$ (as follows from (A3), (D2), (D6) and the fact that no new cycles are created throughout this step), by (E6), we may apply Remark 5.2 to conclude that $\mathcal{C}$ must contain three vertices $x,y,z\in C_{2}$ such that $\operatorname{dist}_{\mathcal{C}}(x,y)\leq 3$ and $\operatorname{dist}_{\mathcal{C}}(y,z)\leq 3$.

###### Claim 3.

There exist distinct $v_{1},v_{2}\in\\{x,y,z\\}$ such that the following holds. Let $Q$ be the shortest $(v_{1},v_{2})$-path in $G_{4}$. For each $i\in[2]$, there exists $w_{i}\in(N_{H}(v_{i})\cap B_{1})\setminus(K\cup N_{\mathcal{C}}(v_{i}))$ (with $w_{1}\neq w_{2}$) such that, if we let $e_{i}$ be the edge of $M$ containing $w_{i}$, then $w_{1}$, $w_{2}$ and $v_{1}$ each lie in a different component of $G_{4}\setminus(Q\cup\\{e_{1},e_{2}\\})$.

###### Proof.

Recall that any two vertices $x^{\prime},y^{\prime}\in V(\mathcal{C})\cap B_{2}$ satisfy that $\operatorname{dist}_{\mathcal{C}}(x^{\prime},y^{\prime})\geq 12$. By (E6) and Remark 5.2, this means that $|V(\mathcal{C})\cap B_{2}|\leq|V(\mathcal{C})\cap C_{2}|/3$. Therefore, by (5.6) we have that $|V(\mathcal{C})\cap B_{2}|\leq\gamma_{1}n/3\leq 29n/1000$. Then, by (D4), we conclude that $|V(\mathcal{C})\cap B_{1}|\leq 3n/100.$ (5.11) Aiming for a contradiction, let us assume that the statement does not hold, that is, for each pair $v_{1},v_{2}\in\\{x,y,z\\}$ we have that, for every $w_{1}\in(N_{H}(v_{1})\cap B_{1})\setminus(K\cup N_{\mathcal{C}}(v_{1}))$ and every $w_{2}\in(N_{H}(v_{2})\cap B_{1})\setminus(K\cup N_{\mathcal{C}}(v_{2}))$ with $w_{2}\neq w_{1}$, if for each $i\in[2]$ we let $e_{i}$ be the edge of $M$ containing $w_{i}$ (which, recall, must lie in $G_{4}$ by the definition of $K$), then at least two of the vertices $w_{1}$, $w_{2}$ and $v_{1}$ lie in the same component of $G_{4}\setminus(Q\cup\\{e_{1},e_{2}\\})$.
Let $X\coloneqq(N_{H}(x)\cap B_{1})\setminus(K\cup V(\mathcal{C}))$, $Y\coloneqq(N_{H}(y)\cap B_{1})\setminus(K\cup V(\mathcal{C}))$ and $Z\coloneqq(N_{H}(z)\cap B_{1})\setminus(K\cup V(\mathcal{C}))$. In particular, every vertex in $X\cup Y\cup Z$ lies in an edge of $M$ in $G_{4}$. Consider $x$ and $y$, and let $x_{1}\in X$ and $y_{1}\in Y$ be distinct (such vertices exist by (5.7), (E4), and (5.11)). Let $Q_{xy}$ be the shortest $(x,y)$-path in $\mathcal{C}$, and let $e_{x}$ and $e_{y}$ be the edges of $M$ which contain $x_{1}$ and $y_{1}$, respectively. Our choice of $x_{1}$ and $y_{1}$ guarantees that, in $G_{4}\setminus(Q_{xy}\cup\\{e_{x},e_{y}\\})$, they lie in a different component than $x$ (and $y$), so $x_{1}$ and $y_{1}$ must lie in the same component. Let $F$ be the component of $G_{4}$ containing $x_{1}$ and $y_{1}$. Fixing $x_{1}$, any choice of $y_{2}\in Y\setminus\\{x_{1}\\}$ lying in a component of $G_{4}$ different from $F$ would yield a contradiction, so we must have that $Y\subseteq V(F)$, and similarly $X\subseteq V(F)$. Assume first that $F$ is a path, and let $u_{F}$ and $v_{F}$ be its endpoints. Given any pair of vertices $a,b\in V(F)$, we write that $a<_{F}b$ if a traversal of $F$ starting at $u_{F}$ reaches $a$ before $b$. Now assume that $x_{1},x_{2}\in X$ and $y_{1}\in Y$ satisfy that $x_{1}<_{F}y_{1}<_{F}x_{2}$. Then, upon deleting the edge of $M$ containing $y_{1}$, the vertex $y_{1}$ cannot lie in the same component as both $x_{1}$ and $x_{2}$, so we would reach a contradiction. Thus, we must have that $x_{1}<_{F}y_{1}$ for all $x_{1}\in X$ and $y_{1}\in Y\setminus\\{x_{1}\\}$, or $y_{1}<_{F}x_{1}$ for all $x_{1}\in X$ and $y_{1}\in Y\setminus\\{x_{1}\\}$. In particular, this implies that $|X\cap Y|\leq 1$. One can similarly show that, if $F$ is a cycle, then $|X\cap Y|\leq 2$. By following the same arguments as above when considering $y$ and $z$, and $x$ and $z$, we conclude that $Z\subseteq V(F)$ and that $|Y\cap Z|\leq 2$ and $|X\cap Z|\leq 2$. Combining this with (5.7), (E4), and (5.11), we have that $|X\cup Y\cup Z|\geq 101n/200$. But $X\cup Y\cup Z\subseteq B_{1}$ and $|B_{1}|\leq n/2$, which gives the final contradiction. ∎

Let $v_{1},v_{2},w_{1},w_{2},e_{1},e_{2}$ be given by 3. Delete $E(Q)$, $e_{1}$ and $e_{2}$ from $G_{4}$, add $v_{1}w_{1}$ and $v_{2}w_{2}$ to $G_{4}$, and add all internal vertices of $Q$ to $S$. Observe that, in cases 1 and 2, we trivially cannot create any new cycles, since no edges are added to the graph. By Remark 5.4, we are also guaranteed that we do not create a new cycle in case 3. This, together with (D3), means that the process described above is repeated at most $4\log^{2}n$ times. Since in each iteration we delete at most $25$ edges (with the bound coming from case 1), (E4) follows from (D1). The fact that $P\subseteq G_{4}$ holds since, by (B12), (B13), the fact that $P$ is not contained in a cycle and the choices throughout the process, no edges incident to $P$ are added nor deleted. The fact that no new cycles are created then implies that (E5) must hold throughout. Finally, (E6) follows from the fact that all added edges have one endpoint in $B_{1}$ and the other in $C_{2}$. All three properties must also hold after the process is finished. Let $G_{5}$ be the graph resulting from the process above. (E1) follows directly from (E5). (E2) follows from (D3), (5.4), (E4) and since the increase in the number of paths in each iteration of the above process is clearly bounded by $3$.
Finally, (E3) holds by (D5) and the construction (in particular, note that the choice of $w_{1}$ and $w_{2}$ guarantees that no endpoint is created in $C_{2}$).

Step 6. Recall that, together, the paths described in (E3) cover all vertices of $V(H)\setminus S$. We are now going to iteratively combine the paths which form $G_{5}$ into a single path with the same vertex set. We will later turn this path into a cycle and absorb all vertices of $S$ into it. For simplicity of notation, from now on we update $G_{5}$ as well as $D$ in each step; $S$ and $K$, however, are no longer updated. To be more precise, our aim is to obtain a graph $G_{6}$ which satisfies the following properties: 1. (F1) $P\subseteq G_{6}$. 2. (F2) $G_{6}$ consists of a unique path on vertex set $V(H)\setminus S$. 3. (F3) The endpoints of the path of $G_{6}$ lie in $B_{2}\cup R$. To achieve this, we will follow a process, each iteration of which reduces the number of components of $G_{5}$ by one. We claim that the following properties hold throughout: 1. (F4) $|D|\leq(1+o(1))|R|/2$. 2. (F5) $G_{5}$ is a union of vertex-disjoint paths. 3. (F6) The endpoints of all paths of $G_{5}$ lie in $B_{2}\cup R$. Observe that, before the process starts, (F4) follows from (E4) and (5.4), (F5) holds by (E2), and (F6) follows from (E3). We proceed as follows. While $G_{5}$ contains at least two paths, choose any such two paths $P_{1}$ and $P_{2}$, and let $x$ be an endpoint of $P_{1}$, and $y$ be an endpoint of $P_{2}$. In particular, by (F6), $x,y\in B_{2}\cup R$. Choose some edge $e=zz^{\prime}\in E_{G}(N_{H}(x)\cap N_{H}(y)\cap B_{1})\setminus D$ (which must exist by (G2), (5.9) and (F4)). * If $e\notin E(P_{1})\cup E(P_{2})$, add the edges $xz$ and $yz^{\prime}$ to $G_{5}$ and remove $zz^{\prime}$. * Otherwise, suppose $e\in E(P_{1})$ and that $\operatorname{dist}_{P_{1}}(x,z)<\operatorname{dist}_{P_{1}}(x,z^{\prime})$ (the other cases are similar). Then, remove $zz^{\prime}$ from $G_{5}$ and add $xz^{\prime}$ and $yz$. In both cases, we clearly reduce the number of paths by one. Furthermore, in each step we delete exactly one edge from $G_{5}$. This, together with (E2) and (E4), guarantees that (F4) holds throughout (and, in particular, this implies the process can indeed be carried out). (F5) follows by construction, as does (F6), since no new endpoints are created throughout. Let $G_{6}$ be the graph resulting from the process so far. As the process ends, (F2) and (F3) follow by construction. Finally, (F1) holds since, in all cases above, no edges incident to $V(P)$ are removed or added to the graph, as guaranteed by (B3).

We can now complete the proof. Let the endpoints of the unique component of $G_{6}$ (see (F2)) be $x$ and $y$. By (F3) we have $x,y\in B_{2}\cup R$, so by (G2), (5.9) and (F4) we can take some edge $e=zz^{\prime}\in E_{G}(N_{H}(x)\cap N_{H}(y)\cap B_{1})\setminus D$. Assume without loss of generality that $\operatorname{dist}_{G_{6}}(x,z)<\operatorname{dist}_{G_{6}}(x,z^{\prime})$. Then, by (B3) and (F1), removing $zz^{\prime}$ from $G_{6}$ and adding $xz^{\prime}$ and $yz$ results in a cycle $\mathcal{C}$ with the same vertex set and such that $P\subseteq\mathcal{C}$. Let $m\coloneqq|S|$, so $\mathcal{C}$ has length $n-m$. We must now prove that there is a cycle of length $k$, for all $3\leq k\leq n$. We split our analysis into three cases. Assume first that $n-m\leq k\leq n$. Consider a set $J\subseteq S$ with $|J|=k+m-n$.
For each $w\in J\setminus U$, choose a distinct edge $e_{w}=x_{w}y_{w}\in E_{G}(N_{H}(w))\setminus D$ (recall that for each $w\in U$ we have $w=u_{i}$ for some $i\in[t]$ and we already defined an edge $e_{w}\coloneqq x_{i}y_{i}$ in Step 2). Note that (G1), (5.10), (E4) and (F4) guarantee that there is a choice of edges as desired. Then, for each $w\in J$, replace $e_{w}$ by the path $x_{w}wy_{w}$. This clearly results in a cycle of the desired length. Suppose next that $3\leq k\leq\alpha^{2}n/10$. In such a case, consider any subpath $P^{\prime}\subseteq\mathcal{C}$ of length $k-3$, and let its endpoints be $x$ and $y$. Now choose any edge $zz^{\prime}\in E_{G}(N_{H}(x),N_{H}(y))$ such that $z,z^{\prime}\notin V(P^{\prime})$ (the existence of such an edge follows by (G1) and (E4)). Then, the union of $P^{\prime}$ and the path $xzz^{\prime}y$ forms a cycle of length $k$. Finally, assume $\alpha^{2}n/10<k<n-m$. Consider a subpath $P^{\prime}\subseteq\mathcal{C}$ of length $k-3$ such that $P\subseteq P^{\prime}$. Let the endpoints of $P^{\prime}$ be $x$ and $y$, respectively, and let $Z\coloneqq E_{G}(N_{H}(x),N_{H}(y))\setminus D$ (for notational purposes, when we write $zz^{\prime}\in Z$ we assume that $z\in N_{H}(x)$ and $z^{\prime}\in N_{H}(y)$). Note that (G1), (5.10) and (F4) imply that $|Z|\geq\eta\alpha^{2}n.$ (5.12) Recall that, by (B3), $Z$ and $P$ are vertex-disjoint. We consider the following three cases. 1. Assume that there exists $zz^{\prime}\in Z$ such that $z,z^{\prime}\notin V(P^{\prime})$. Then, the union of $P^{\prime}$ and the path $xzz^{\prime}y$ forms a cycle of length $k$. 2. Otherwise, let $Z^{\prime}\coloneqq\\{e\in Z:e\subseteq V(P^{\prime})\\}$, so we have that $Z^{\prime}\subseteq E(P^{\prime})$ and $|Z^{\prime}|\geq|Z|-2\geq\eta\alpha^{2}n/2$ (5.13) by (5.12). Suppose there is an edge $zz^{\prime}\in Z^{\prime}$ such that $\operatorname{dist}_{P^{\prime}}(x,z^{\prime})<\operatorname{dist}_{P^{\prime}}(x,z)$. If so, then $(P^{\prime}\setminus\\{zz^{\prime}\\})\cup\\{xz,yz^{\prime}\\}$ is a cycle of length $k-2$ which contains $P$. To obtain a cycle of length $k$, replace $x_{1}y_{1}$ and $x_{2}y_{2}$ by the paths $x_{1}u_{1}y_{1}$ and $x_{2}u_{2}y_{2}$, respectively. 3. Otherwise, $Z^{\prime}\subseteq E(P^{\prime})$ and all $zz^{\prime}\in Z^{\prime}$ satisfy that $\operatorname{dist}_{P^{\prime}}(x,z)<\operatorname{dist}_{P^{\prime}}(x,z^{\prime})$. Note that all edges in $Z^{\prime}$ are pairwise vertex-disjoint by definition. Choose two edges $zz^{\prime},ww^{\prime}\in Z^{\prime}$ with $\operatorname{dist}_{P^{\prime}}(x,z)<\operatorname{dist}_{P^{\prime}}(x,w)$ which minimise $\operatorname{dist}_{P^{\prime}}(z,w)$ over all possible pairs of edges. Let $P^{\prime\prime}$ be the $(z^{\prime},w)$-subpath of $P^{\prime}$, and let $\ell\coloneqq\operatorname{dist}_{P^{\prime}}(z,w)$. By an averaging argument using (5.13), it follows that $1\leq\ell\leq 2|V(P^{\prime})|/(\eta\alpha^{2}n)<2\eta^{-1}\alpha^{-2}<t$. In particular, this shows that $P\nsubseteq P^{\prime\prime}$, so we must have $P\subseteq P^{\prime}\setminus P^{\prime\prime}$. Then, the graph $(P^{\prime}\setminus E(P^{\prime\prime}))\cup\\{xw,yz^{\prime}\\}$ is a cycle of length $k-\ell$ which contains $P$. In order to obtain a cycle of length $k$, replace the edges $x_{1}y_{1},\ldots,x_{\ell}y_{\ell}$ by the paths $x_{1}u_{1}y_{1},\ldots,x_{\ell}u_{\ell}y_{\ell}$.∎

## 6\. Concluding remarks and open problems

Binomial random graphs and random regular graphs are perhaps the two most studied random graph models.
As we have mentioned in the introduction, many results about randomly perturbed graphs have been obtained for the binomial random graph; it thus seems natural to study analogous problems in random regular graphs. We believe it would be interesting to study graphs perturbed by a random graph with a fixed degree sequence as well. Very recently, the first author has also considered Hamiltonicity of graphs perturbed by a random geometric graph [18]. Of course, this study should also be extended to other graph properties. As we have observed, the behaviour of the graph $H$ perturbed by $G_{n,d}$ when $d=1$ and $d=2$ is quite different. When $d=1$, we have shown that, if $\delta(H)\geq\alpha n$ with $\alpha>\sqrt{2}-1$, then a.a.s. $H\cup G_{n,1}$ is Hamiltonian, but the same is not necessarily true if $\alpha<\sqrt{2}-1$. On the other hand, for $d=2$, we have shown that $H\cup G_{n,2}$ is a.a.s. Hamiltonian for far sparser graphs $H$ ($\delta(H)=\omega(n^{3/4}(\log n)^{1/4})$ suffices). We believe this is far from optimal and should be true for even sparser graphs $H$. We thus propose the following question.

###### Question 6.1. What is the minimum $f=f(n)$ such that, for every $n$-vertex graph $H$ with $\delta(H)\geq f$, a.a.s. $H\cup G_{n,2}$ is Hamiltonian?

The only lower bound we can provide for this question is of order $\log{n}$, which is very far from the upper bound given by Theorem 1.1. Indeed, consider an $n$-vertex complete unbalanced bipartite graph $H=(A,B,E)$ where $|A|=\log{n}/5$. It follows by a standard concentration argument (in the proof of Lemma 3.1$(\mathrm{i})$, one can see that the variables $X_{i}$ are actually independent, hence standard Chernoff bounds are applicable) that a.a.s. $G_{n,2}[B]$ contains at least $\log{n}/2$ cycles. Upon conditioning on this event, it is easy to check that $H\cup G_{n,2}$ does not contain a Hamilton cycle. From an algorithmic perspective, by retracing our proofs of Theorems 1.1, 1.2 and 1.3 as well as Lemma 5.1, it easily follows that, given an $n$-vertex graph $H$ with $\delta(H)\geq\alpha n$ and any $d$-regular graph $G$ which satisfies the statements of Lemmas 3.1 and 3.3 (the latter with respect to the sets defined in each of the proofs, which can be checked in polynomial time), there is a polynomial-time algorithm that finds cycles of any given length in $H\cup G$. One could consider a generalisation of the results we have obtained for random perfect matchings by considering random $F$-factors (where an $F$-factor is a union of vertex-disjoint copies of $F$ which together cover the vertex set), for some fixed graph $F$, assuming the necessary divisibility conditions. In general, we believe the behaviour here will be similar to that of random perfect matchings: if $G$ is a uniformly random $n$-vertex $F$-factor, there will exist a specific value $\alpha^{*}=\alpha^{*}(F)\in(0,1/2)$, independent of $n$, such that, for every $\epsilon>0$, the following hold: * for every $n$-vertex graph $H$ with $\delta(H)\geq(1+\epsilon)\alpha^{*}n$, a.a.s. $H\cup G$ is Hamiltonian, and * there exists some $n$-vertex graph $H$ with $\delta(H)\geq(1-\epsilon)\alpha^{*}n$ such that $H\cup G$ is not a.a.s. Hamiltonian. Theorem 1.2 asserts that $\alpha^{*}(K_{2})=\sqrt{2}-1$. We propose the following conjecture.

###### Conjecture 6.2. For all $r\geq 2$, we have that $\alpha^{*}(K_{r})$ is the unique real positive solution to the equation $x^{r}+rx-1=0$.
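Conjecture 6.2 is easy to probe numerically: the map $x\mapsto x^{r}+rx-1$ increases strictly on $[0,1]$ from $-1$ to $r$, so the positive root is unique and bisection suffices. The following minimal sketch (the helper `alpha_star` is ours, purely for illustration) recovers $\alpha^{*}(K_{2})=\sqrt{2}-1$ and tabulates the conjectured values for small $r$.

```python
import math

def alpha_star(r: int) -> float:
    """Unique positive root of x**r + r*x - 1 = 0, found by bisection.

    The left-hand side equals -1 at x = 0, equals r > 0 at x = 1, and is
    strictly increasing on [0, 1], so there is exactly one root in (0, 1).
    """
    lo, hi = 0.0, 1.0
    for _ in range(60):  # 60 bisections: interval width below 1e-18
        mid = 0.5 * (lo + hi)
        if mid**r + r * mid - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(alpha_star(2), math.sqrt(2) - 1)   # both approximately 0.414213562...
for r in range(2, 7):
    print(r, round(alpha_star(r), 6))    # conjectured alpha*(K_r), r = 2..6
```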
The lower bound for the conjectured value of $\alpha^{*}(K_{r})$ is given by the same extremal example as for perfect matchings, that is, a complete unbalanced bipartite graph. Indeed, consider a complete bipartite graph with parts $A$ and $B$, where $|A|=\alpha n$ and $|B|=(1-\alpha)n$, and let $G$ be a uniformly random $n$-vertex $K_{r}$-factor. We are going to estimate the number of cliques of each size in $G[B]$. For each $s\in[r]$, let $X_{s}$ be the number of components of $G[B]$ which are isomorphic to $K_{s}$. Fix a vertex $v\in B$. For each $s\in[r]$, the probability that the component containing $v$ in $G[B]$ is isomorphic to $K_{s}$ is (roughly) given by $\binom{(1-\alpha)n}{s-1}\binom{\alpha n}{r-s}\frac{1}{\binom{n}{r-1}}.$ Thus, we have that $\mathbb{E}[X_{s}]\approx(1-\alpha)n\binom{(1-\alpha)n}{s-1}\binom{\alpha n}{r-s}\frac{1}{\binom{n}{r-1}}\frac{1}{s}\approx\frac{n}{r}\binom{r}{s}(1-\alpha)^{s}\alpha^{r-s}.$ Now observe that, since $H$ is a complete bipartite graph, in building a longest cycle, each vertex of $A$ can ‘absorb’ each of the components of $G[B]$. The conclusion is that, if the number of such components is larger than the number of vertices in $A$, then a Hamilton cycle is impossible. That is, a necessary condition for a Hamilton cycle would be that $\sum_{s=1}^{r}X_{s}\leq\alpha n.$ If we consider the expectations (for the lower bound, Markov’s inequality provides sufficient concentration), we have that $\sum_{s=1}^{r}\mathbb{E}[X_{s}]\approx\frac{n}{r}\sum_{s=1}^{r}\binom{r}{s}(1-\alpha)^{s}\alpha^{r-s}=\frac{n}{r}(1-\alpha^{r}),$ so our necessary condition becomes $\frac{n}{r}(1-\alpha^{r})\leq\alpha n\iff\alpha^{r}+r\alpha-1\geq 0.$ We think the problem might be interesting for other instances of $F$ as well. In a different direction, we also believe the problem could be interesting for hypergraphs. In particular, following work of Altman, Greenhill, Isaev and Ramadurai [1], it is known that for every integer $r\geq 2$ there exists an (explicit) constant $\rho(r)$ such that a random $d$-regular $r$-uniform hypergraph $G_{n,d}^{(r)}$ a.a.s. contains a loose Hamilton cycle if $d>\rho(r)$, and a.a.s. does not contain such a cycle if $d\leq\rho(r)$. We propose the following question.

###### Question 6.3. Let $r\geq 3$ be an integer. For each $d\leq\rho(r)$, for which values of $\alpha$ (possibly as a function of $n$) is it true that for any $n$-vertex $r$-uniform hypergraph $H$ with $\delta(H)\geq\alpha n^{r-1}$ we have that a.a.s. $H\cup G_{n,d}^{(r)}$ contains a loose Hamilton cycle?

## Acknowledgement

We would like to thank Padraig Condon for nice discussions at an early stage of this project. We are also indebted to the anonymous referees for their helpful comments and suggestions.

## References

* Altman, Greenhill, Isaev and Ramadurai [2020] D. Altman, C. Greenhill, M. Isaev and R. Ramadurai, ‘A threshold result for loose Hamiltonicity in random regular uniform hypergraphs’. _J. Combin. Theory Ser. B_ 142 (2020), 307–373, doi: 10.1016/j.jctb.2019.11.001.
* Antoniuk, Dudek, Reiher, Ruciński and Schacht [2021] S. Antoniuk, A. Dudek, C. Reiher, A. Ruciński and M. Schacht, ‘High powers of Hamiltonian cycles in randomly augmented graphs’. _Journal of Graph Theory_ 98.2 (2021), 255–284, doi: 10.1002/jgt.22691. * Balogh, Treglown and Wagner [2019] J. Balogh, A. Treglown and A. Z. Wagner, ‘Tilings in randomly perturbed dense graphs’. _Combin. Probab. Comput._ 28 (2019), 159–176, doi: 10.1017/S0963548318000366. * Bedenknecht, Han, Kohayakawa and Mota [2019] W. Bedenknecht, J. Han, Y. Kohayakawa and G. O. Mota, ‘Powers of tight Hamilton cycles in randomly perturbed hypergraphs’. _Random Structures Algorithms_ 55 (2019), 795–807, doi: 10.1002/rsa.20885. * Bohman, Frieze and Martin [2003] T. Bohman, A. Frieze and R. Martin, ‘How many random edges make a dense graph Hamiltonian?’ _Random Structures Algorithms_ 22 (2003), 33–42, doi: 10.1002/rsa.10070. * Bollobás [1980] B. Bollobás, ‘A probabilistic proof of an asymptotic formula for the number of labelled regular graphs’. _European J. Combin._ 1 (1980), 311–316, doi: 10.1016/S0195-6698(80)80030-8. * Böttcher, Han, Kohayakawa, Montgomery, Parczyk and Person [2019] J. Böttcher, J. Han, Y. Kohayakawa, R. Montgomery, O. Parczyk and Y. Person, ‘Universality for bounded degree spanning trees in randomly perturbed graphs’. _Random Structures Algorithms_ 55 (2019), 854–864, doi: 10.1002/rsa.20850. * Böttcher, Montgomery, Parczyk and Person [2020] J. Böttcher, R. Montgomery, O. Parczyk and Y. Person, ‘Embedding spanning bounded degree graphs in randomly perturbed graphs’. _Mathematika_ 66 (2020), 422–447, doi: 10.1112/mtk.12005. * Böttcher, Parczyk, Sgueglia and Skokan [2020] J. Böttcher, O. Parczyk, A. Sgueglia and J. Skokan, ‘Triangles in randomly perturbed graphs’. _arXiv e-prints_ (2020). arXiv: 2011.07612. * Böttcher, Parczyk, Sgueglia and Skokan [2021] ———, ‘Cycle factors in randomly perturbed graphs’. _arXiv e-prints_ (2021). arXiv: 2103.06136. * Böttcher, Parczyk, Sgueglia and Skokan [2022] ———, ‘The square of a Hamilton cycle in randomly perturbed graphs’. _arXiv e-prints_ (2022). arXiv: 2202.05215. * Chang, Han and Thoma [2020] Y. Chang, J. Han and L. Thoma, ‘On powers of tight Hamilton cycles in randomly perturbed hypergraphs’. _arXiv e-prints_ (2020). arXiv: 2007.11775. * Condon, Espuny Díaz, Girão, Kühn and Osthus [2020] P. Condon, A. Espuny Díaz, A. Girão, D. Kühn and D. Osthus, ‘Hamiltonicity of random subgraphs of the hypercube’. _arXiv e-prints_ (2020). arXiv: 2007.02891. * Condon, Espuny Díaz, Girão, Kühn and Osthus [2021] ———, ‘Dirac’s theorem for random regular graphs’. _Combin. Probab. Comput._ 30 (2021), 17–36, doi: 10.1017/S0963548320000346. * Cooper, Frieze and Reed [2002] C. Cooper, A. Frieze and B. Reed, ‘Random regular graphs of non-constant degree: connectivity and Hamiltonicity’. _Combin. Probab. Comput._ 11 (2002), 249–261, doi: 10.1017/S0963548301005090. * Dirac [1952] G. A. Dirac, ‘Some theorems on abstract graphs’. _Proc. London Math. Soc._ 2 (1952), 69–81, doi: 10.1112/plms/s3-2.1.69. * Dudek, Reiher, Ruciński and Schacht [2020] A. Dudek, C. Reiher, A. Ruciński and M. Schacht, ‘Powers of Hamiltonian cycles in randomly augmented graphs’. _Random Structures Algorithms_ 56 (2020), 122–141, doi: 10.1002/rsa.20870. * Espuny Díaz [2021] A. Espuny Díaz, ‘Hamiltonicity of graphs perturbed by a random geometric graph’. _arXiv e-prints_ (2021). arXiv: 2102.02321. * Hahn-Klimroth, Maesaka, Mogge, Mohr and Parczyk [2021] M. Hahn-Klimroth, G. S. Maesaka, Y. Mogge, S. Mohr and O. 
Parczyk, ‘Random perturbation of sparse graphs’. _Electron. J. Comb._ 28.2 (2021), research paper p2.26, 12, doi: 10.37236/9510. * Han, Morris and Treglown [2020] J. Han, P. Morris and A. Treglown, ‘Tilings in randomly perturbed graphs: bridging the gap between Hajnal-Szemerédi and Johansson-Kahn-Vu’. _Random Structures Algorithms_ (2020), 1–37, doi: 10.1002/rsa.20981. * Han and Zhao [2020] J. Han and Y. Zhao, ‘Hamiltonicity in randomly perturbed hypergraphs’. _J. Combin. Theory Ser. B_ 144 (2020), 14–31, doi: 10.1016/j.jctb.2019.12.005. * Joos and Kim [2020] F. Joos and J. Kim, ‘Spanning trees in randomly perturbed graphs’. _Random Structures Algorithms_ 56 (2020), 169–219, doi: 10.1002/rsa.20886. * Koršunov [1977] A. D. Koršunov, ‘Solution of a problem of P. Erdős and A. Rényi on Hamiltonian cycles in undirected graphs’. _Metody Diskretn. Anal._ 31 (1977), 17–56. * Krivelevich, Kwan and Sudakov [2016] M. Krivelevich, M. Kwan and B. Sudakov, ‘Cycles and matchings in randomly perturbed digraphs and hypergraphs’. _Combin. Probab. Comput._ 25 (2016), 909–927, doi: 10.1017/S0963548316000079. * Krivelevich, Kwan and Sudakov [2017] ———, ‘Bounded-degree spanning trees in randomly perturbed graphs’. _SIAM J. Discrete Math._ 31 (2017), 155–171, doi: 10.1137/15M1032910. * Krivelevich, Sudakov, Vu and Wormald [2001] M. Krivelevich, B. Sudakov, V. H. Vu and N. C. Wormald, ‘Random regular graphs of high degree’. _Random Structures Algorithms_ 18 (2001), 346–363, doi: 10.1002/rsa.1013. * McDowell and Mycroft [2018] A. McDowell and R. Mycroft, ‘Hamilton $\ell$-cycles in randomly perturbed hypergraphs’. _Electron. J. Combin._ 25 (2018), Paper No. 4.36, 30, doi: 10.37236/7671. * Nenadov and Trujić [2021] R. Nenadov and M. Trujić, ‘Sprinkling a few random edges doubles the power’. _SIAM J. Discrete Math._ 35.2 (2021), 988–1004, doi: 10.1137/19M125412X. * Robinson and Wormald [1992] R. W. Robinson and N. C. Wormald, ‘Almost all cubic graphs are Hamiltonian’. _Random Structures Algorithms_ 3 (1992), 117–125, doi: 10.1002/rsa.3240030202. * Robinson and Wormald [1994] ———, ‘Almost all regular graphs are Hamiltonian’. _Random Structures Algorithms_ 5 (1994), 363–374, doi: 10.1002/rsa.3240050209. * Wormald [1999] N. C. Wormald, ‘Models of random regular graphs’. _Surveys in combinatorics, 1999 (Canterbury)_ , _London Math. Soc. Lecture Note Ser._ , vol. 267, 239–298, Cambridge Univ. Press, Cambridge (1999).
On uniqueness and reconstruction of a nonlinear diffusion term in a parabolic equation

Barbara Kaltenbacher, Department of Mathematics, Alpen-Adria-Universität Klagenfurt. William Rundell, Department of Mathematics, Texas A&M University, Texas 77843.

The problem of recovering coefficients in a diffusion equation is one of the basic inverse problems. Perhaps the most important term is the one that couples the length and time scales and is often referred to as the diffusion coefficient $a$ in $u_t - \nabla(a\nabla u) = f$. In this paper we seek the unknown $a$ assuming that $a=a(u)$ depends only on the value of the solution at a given point. Such diffusion models are the basis of a wide range of physical phenomena such as nonlinear heat conduction, chemical mixing and population dynamics. We shall look at two types of overposed data in order to effect recovery of $a(u)$: the value of a time trace $u(x_0,t)$ for some fixed point $x_0$ on the boundary of the region $\Omega$; or the value of $u$ on an interior curve $\Sigma$ lying within $\Omega$. As examples, these might represent a temperature measurement on the boundary or a census of the population in some subset of $\Omega$ taken at a fixed time $T>0$. In the latter case we shall show a uniqueness result that leads to a constructive method for recovery of $a$. Indeed, for both types of measured data we shall show reconstructions based on the iterative algorithms developed in the paper.

Keywords: Inverse problem, nonlinear diffusion, reconstruction algorithms

AMS classification: 35R30, 35K15, 35K58, 80A23.

§ INTRODUCTION

The setting is in a bounded, simply connected domain $\Omega\subset \mathbb{R}^d$ with smooth ($C^2$) boundary $\partial\Omega$ and the problem is to recover the conductivity coefficient $a(u)$ in the reaction diffusion equation \begin{equation}\label{eqn:u-nl} u_t-\nabla(a(u)\nabla u)= r(x,t,u) \quad t\in(0,T)\,, \quad u(0)=u_0 \end{equation} subject to (nonlinear) impedance or Dirichlet ($\gamma=\infty$) boundary conditions \begin{equation}\label{eqn:bndy} a(u)\partial_\nu u+\gamma u = b(x,t), \mbox{ or } u = d(x,t) \quad x\in\partial\Omega,\quad t\in(0,T) \end{equation} where we assume that the forcing term $r(x,t,u)$ is known. Since for a given $a(u)$ within a suitable class the pair (<ref>) and (<ref>) allows a unique determination of the solution $u$, we have to give additional (overposed) data in order to recover $a$. This will take the form of either observations along a curve $\omega\subset\Omega$ for some fixed time $T$ (final time measurements) \begin{equation}\label{eqn:fiti} g(x)=u(x,T), \quad x\in \omega\subseteq\Omega. \end{equation} or time trace observations \begin{equation}\label{eqn:titr} h(t)=u(x_0,t), \quad t\in (0,T) \end{equation} for some $x_0\in\overline{\Omega}\subseteq\R^d$. In the latter case, typically $x_0$ will be a boundary point on $\partial\Omega$.

There is a parallel equation to (<ref>) which in its simplest form is \begin{equation}\label{eqn:phiu} u_t- \triangle \phi(u) = 0 \quad t\in(0,T)\,, \end{equation} and thus looking at only the principal part of the operator gives the operator in (<ref>) with $a(u) = \phi'(u)$. In many applications this is the preferred form; perhaps the best known case is when $\phi(u) = u^m$, giving the Porous Medium Equation.

The heat equation can be derived from a basic random walk model whereby at fixed times $dt$ jumps of length $dx$ are made in a random direction; that is, simple Brownian motion.
The coefficient $a$ appears as the coupling constant between these length-time scales and may be a function of the position $x$, or a coefficient $a(u)$, meaning that in the random walk model the jump length depends on the value or density at the current location. It plays the role of a nonlinear thermal conductivity in heat conduction or of a nonlinear diffusion in a wide variety of areas including applications to population dynamics.

The conductivity of materials depends on temperature, and sometimes quite strongly, so that the simple assumption that it be constant is only valid over an often fairly narrow range. The thermal conductivity for pure metals depends on the motion of free electrons. The molecular vibrations increase with temperature, thus decreasing the mean free path of molecules and in turn obstructing the flow of free electrons, resulting in a reduction of conductivity. However, for most non-metals the opposite effect occurs and conductivity often rises with temperature. Alloys and composite materials can have more complex behaviour, and for most materials at extreme temperatures the graph of $a(u)$ may certainly not be monotonic.

Both the forms (<ref>) and (<ref>) occur frequently in the life sciences and many from ecology are based on the seminal paper of Skellam, [13]. For example, in [14] aggregative movement in populations was modelled, where $\phi(u)$ was a cubic and the associated $a(u)$ was a constant plus a logistic growth term. In [6, 1] the doubly nonlinear model with $r(x,t,u) = f(u)$ was used instead to model dispersive behaviour of populations. In [6], a similar nonlinear model was used where the reaction term $f(u)$ is based on the Fisher model of the logistic quadratic nonlinearity. Note that we assume that $r(x,t,u)$ is known, although in the case of an unknown $f(u)$, recovery of $f$ from over-posed data similar to that considered here was studied in [8].

The presence of $a$ in the boundary condition (<ref>) follows from the fact that we have imposed a condition on the thermal flux rather than on the gradient of $u$ itself. We will use this fact in a key way in our reconstruction algorithm for time trace data where the representation will lead naturally to a fixed point equation for $a(u)$. This latter problem was studied in one space dimension by Cannon and DuChateau [2] and DuChateau [4] using the transformation $A(s)=\int_0^s a(r)\, dr$, $b(s)=(a(A^{-1}(s)))^{-1}$, $v(x,t)=A(u(x,t))$, taking $u_t - \bigl(a(u)u_x\bigr)_x = 0$ into $b(v)v_t - v_{xx} = 0$, and then showing uniqueness using monotonicity arguments.

In the next section we will consider fixed point schemes for the iterative reconstruction of $a$ in these two cases of overposed data. In Section 3 we shall prove some results on the forwards problem for (<ref>) and (<ref>) that will be needed in the analysis of the inverse problems. In Section 4 we shall show that the map developed in Section 2 for final time data is contractive in a suitable setting and leads directly to a uniqueness result. The final section shows some reconstructions based on the above analysis.

§ THE PROBLEM SET UP

We will consider fixed point schemes for the iterative reconstruction of $a$ in these two cases of overposed data. As will be seen, the recovery process both numerically and analytically is quite different.
In some ways this is to be expected, for although space and time are intrinsically connected in a parabolic equation, the nonlinearity has its own effects that can play out in a different manner within the interior of $\Omega$ and on its boundary $\partial\Omega$. In both cases we must arrange the prescribed data to ensure that the range of $u$ over the measurement region contains the entire range of $u$ over $\Omega\times (0,T)$. This is usually unnecessary in the case of spatially dependent coefficients and is one significant factor which sets recovery within equations such as (<ref>) apart. What this means is that the ideal situation is the ability to set up an experiment allowing adherence to these conditions, using known properties of parabolic equations (such as the maximum principle) to show the viability. This of course limits the ability to recover $a(u)$ in, say, ecological models, where one usually has to measure just what is found.

§.§ Final time data

In (<ref>), the observation domain $\omega$ is supposed to be chosen such that there exists a curve $\Sigma\subseteq\mbox{int}(\omega)$ (so that by differentiation of the data we also know $\nabla g$, $\triangle g$ on $\Sigma$) that can be parametrized in such a way that \begin{equation}\label{eqn:Sigma} \Sigma =\\{\vec{x}(\sigma)\, : \sigma\in [0,1]\\}\,, \quad |(g\circ\vec{x})'(\sigma)| = |\nabla g(\vec{x}(\sigma))\cdot \vec{x}'(\sigma)|\geq\kappa>0\,. \end{equation} This allows us to define the inverse operator \begin{equation}\label{eqn:phi} \phi:= (g\circ\vec{x})^{-1}:g(\Sigma)\to[0,1]. \end{equation} In the 1-d case $\Omega=\Sigma=(0,1)$, (<ref>) simplifies to the strict monotonicity assumption $|g_x(x)|\geq\kappa>0$. Moreover we assume the range of $g$ to contain the range of $u_{act}$ \begin{equation}\label{eqn:range_fiti} J:=[\ul{u},\ol{u}]=g(\Sigma)\supseteq u_{act}(\Omega\times(0,T)) \end{equation} for an exact solution $(a_{act},u_{act})$ of the inverse problem (<ref>), (<ref>), (<ref>). Similarly to [9] we will then enforce $J$ as the domain of the reconstructions, which are set to constant values outside $J$. Projecting the pde in (<ref>) on the observation manifold $\Sigma\times\\{T\\}$ yields the fixed point iteration \begin{equation}\label{eqn:proj} \nabla(a_{k+1}(g)\nabla g) = D_t u(\cdot,T;a_k)-r(T) \mbox{ on }\Sigma \end{equation} and further inverting the curve parametrization, cf. (<ref>), we get \begin{equation}\label{eqn:ODEa+} a_{k+1}'(\tau)|\nabla g(\vec{x}(\phi(\tau)))|^2 + a_{k+1}(\tau)\triangle g(\vec{x}(\phi(\tau))) = D_t u(\vec{x}(\phi(\tau)),T;a_k)-r(\vec{x}(\phi(\tau)),T) \quad\mbox{for all }\tau\in[\ul{u},\ol{u}]. \end{equation} This ode, together with a given value of $a$ at some fixed point, allows us to uniquely determine the update $a_{k+1}$ inside $J$. We set $a_{k+1}(\tau)=a_{le}(\tau)$ for $\tau<\ul{u}$, $a_{k+1}(\tau)=a_{ri}(\tau)$ for $\tau>\ol{u}$, where $a_{le}$, $a_{ri}$ are the – assumed known – values of $a_{act}$ outside $J$. More precisely, to preserve $C^{2,\alpha}$ smoothness of the iterates over all of $\mathbb{R}$, we use the metric projection of the iterates onto $\\{a\in H^s(\mathbb{R})\, : \, a(\tau)=a_{le}(\tau)\mbox{ for }\tau<\ul{u}, \ a(\tau)=a_{ri}(\tau)\mbox{ for }\tau>\ol{u}\\}$ with $s>\frac{d}{2}+2+\alpha$ so that $H^s(\mathbb{R})$ continuously embeds into $C^{2,\alpha}(\mathbb{R})$, cf. [8, 10].
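Numerically, one sweep of this fixed point update amounts to solving the linear first-order ODE above along $\Sigma$. The following is a minimal sketch via the integrating factor $\exp\bigl(\int \triangle g/|\nabla g|^{2}\bigr)$, under the assumption that $|\nabla g|^{2}$, $\triangle g$ and $D_{t}u(\cdot,T;a_{k})-r(\cdot,T)$ have already been sampled on a grid $\tau_{0}<\dots<\tau_{N}$ in $J$; the routine names are ours, and simple trapezoidal quadrature is used throughout.

```python
import numpy as np

def cumtrapz0(f, tau):
    """Trapezoidal cumulative integral of f over the grid tau, starting at 0."""
    return np.concatenate([[0.0],
                           np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))])

def update_a(tau, grad_g_sq, lap_g, rhs, a0):
    """Solve a'(tau)*|grad g|^2 + a(tau)*(lap g) = rhs on the grid tau.

    grad_g_sq : |grad g(x(phi(tau)))|^2, bounded away from zero by assumption
    lap_g     : Laplacian of g along the curve, sampled on tau
    rhs       : D_t u(x(phi(tau)), T; a_k) - r(x(phi(tau)), T), sampled on tau
    a0        : value of the update at tau[0], fixed from the boundary flux
    """
    p = lap_g / grad_g_sq            # standard form a' + p*a = q
    q = rhs / grad_g_sq
    P = cumtrapz0(p, tau)
    mu = np.exp(P)                   # integrating factor exp(int p)
    return (a0 + cumtrapz0(mu * q, tau)) / mu   # a = e^{-P} (a0 + int e^{P} q)
```

Closed-form expressions for the solution of this ODE are derived next.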
In the 1-d case $\Omega=\Sigma=(0,1)$ this can simply be solved by integrating (<ref>) from $0$ to $x$ and dividing by $g_x$, then replacing $-a_{k+1}(g(0))g_x(0)$ by $b(0,T)-\gamma g(0)$ (cf. (<ref>)) \begin{equation}\label{eqn:a+1-d} a_{k+1}(g(x))=\frac{\int_0^x (D_t u(\xi,T;a_k)-r(\xi,T))\, d\xi - b(0,T) + \gamma g(0)}{g_x(x)}\,. \end{equation} In higher space dimensions, assuming that $x_0=\vec{x}(0)\in\Sigma\cap\partial\Omega$ and $\partial_\nu g(x_0)\not=0$ so that we can replace $a_{k+1}(0)$ by $a_{act}(g(x_0))=\frac{b(x_0,T)-\gamma g(x_0)}{\partial_\nu g(x_0)} =: a^0$ (cf. (<ref>)), we get from (<ref>) \begin{equation}\label{eqn:a+h-d} \begin{aligned} a_{k+1}(\tau)=&\exp\left(-\int_0^\tau\frac{\triangle g(\vec{x}(\phi(\sigma)))}{|\nabla g(\vec{x}(\phi(\sigma)))|^2}\, d\sigma\right)a^0\\ &+ \int_0^\tau \frac{D_t u(\vec{x}(\phi(\sigma)),T;a_k)-r(\vec{x}(\phi(\sigma)),T)}{|\nabla g(\vec{x}(\phi(\sigma)))|^2} \exp\left(-\int_\sigma^\tau\frac{\triangle g(\vec{x}(\phi(\rho)))}{|\nabla g(\vec{x}(\phi(\rho)))|^2}\, d\rho \right)\, d\sigma\,. \end{aligned} \end{equation}

§.§ Time trace data

The range condition becomes \begin{equation}\label{eqn:range_titr} [\ul{u},\ol{u}]=h(0,T)\supseteq u_{act}(\Omega\times(0,T)) \end{equation} and projection of the pde on the observation manifold $\\{x_0\\}\times(0,T)$ yields the iteration \begin{equation}\label{eqn:proj_titr} a_{k+1}'(h(t)) |\nabla u(x_0,t;a_k)|^2 + a_{k+1}(h(t)) \Delta u(x_0,t;a_k) = h'(t)-r(x_0,t) \,, \quad t\in(0,T) \end{equation} Here we again employ the ode solution formula as above. By abbreviating $u_k(x,t) = u(x,t;a_k)$ and using $a_{\rm act}(\tau_0)=a_{\rm act}(h(0))=\frac{b(x_0,0)-\gamma h(0)}{\partial_\nu u_0} =: a^0$ (or, if $\partial_\nu u_0=0$, assuming $a_{\rm act}(\tau_0)=a_{\rm act}(h(0)) =: a^0$ to be known) we obtain \begin{equation}\label{eqn:a+_titr_PDE} \begin{aligned} a_{k+1}(\tau)=&\exp\left(-\int_{\tau_0}^\tau\frac{\triangle u_k(x_0,h^{-1}(\sigma))}{|\nabla u_k(x_0,h^{-1}(\sigma))|^2}\, d\sigma\right)a^0\\ &+ \int_{\tau_0}^\tau \frac{h'(h^{-1}(\sigma))-r(x_0,h^{-1}(\sigma))}{|\nabla u_k(x_0,h^{-1}(\sigma))|^2} \exp\left(-\int_\sigma^\tau \frac{\triangle u_k(x_0,h^{-1}(\rho))}{|\nabla u_k(x_0,h^{-1}(\rho))|^2}\, d\rho \right)\, d\sigma\,. \end{aligned} \end{equation} Alternatively, with $x_0\in\partial\Omega$, we might slightly modify the iteration by inserting the nonlinear boundary conditions (<ref>), thus replacing the second term in (<ref>) to obtain \begin{equation}\label{eqn:proj_titr_alt} a_{k+1}'(h(t)) |\nabla u_k(x_0,t)|^2 = \frac{\gamma h(t)-b(x_0,t)}{\partial_\nu u_k(x_0,t)} \Delta u_k(x_0,t) + h'(t)-r(x_0,t) \,, \quad t\in(0,T). \end{equation} Then inverting $h$ and integrating with respect to $\tau$ yields \begin{equation}\label{eqn:a+_titr_PDE_bndy} a_{k+1}(\tau)=a^0+\int_{\tau_0}^\tau\frac{\frac{\gamma \sigma-b(x_0,h^{-1}(\sigma))}{\partial_\nu u_k(x_0,h^{-1}(\sigma))} \Delta u_k(x_0,h^{-1}(\sigma)) + h'(h^{-1}(\sigma))-r(x_0,h^{-1}(\sigma))}{|\nabla u_k(x_0,h^{-1}(\sigma))|^2}\,d\sigma\,.
\end{equation} A third scheme is obtained by relying exclusively on the nonlinear boundary conditions (<ref>) and defining \begin{equation}\label{eqn:a+_titr_bndy} a_{k+1}(\tau)=\frac{b(x_0,h^{-1}(\tau))-\gamma \tau}{\partial_\nu u^D_k(x_0,h^{-1}(\tau))} \end{equation} where $u^D_k$ solves (<ref>) with the nonlinear impedance boundary conditions replaced by Dirichlet boundary conditions \begin{equation}\label{eqn:bndyD} u^D_k(x,t) = h(x,t), \quad x\in\Gamma,\quad t\in(0,T) \end{equation} on part of the boundary $\Gamma\subset\partial\Omega$, where we assume that observations are available on $\Gamma\times(0,T)$. In the case of $d>1$ space dimensions, $\Gamma$ should be a set of positive $d-1$ dimensional measure, whereas in 1-d it suffices to use $\Gamma=\\{x_0\\}$. In all three cases, it is clearly crucial to have strict monotonicity of $h$ and boundedness away from zero of $|\nabla u_{act}|$, $\partial_\nu u_{act}$ (thus, by a perturbation argument, also of $|\nabla u_k|$, $\partial_\nu u_k$), analogously to the assumption on $g$ in (<ref>).

§ WELL-POSEDNESS OF THE FORWARD PROBLEM (<REF>), (<REF>)

Under the assumptions \begin{equation}\label{eq:PDEdata} a\in C^{1+\alpha}(\R)\,, \quad a(s)\geq\underline{a}>0\,, \quad s\in\R\,, \quad r\in C^{\alpha,\alpha/2}(Q_T)\,, \quad b\in C^{\alpha,\alpha/2}(\partial\Omega\times(0,T)) \end{equation} where $Q_T=\Omega\times(0,T)$ is the space-time cylinder, <cit.> implies existence of a solution $u(a)\in H^{2+\alpha,1+\alpha/2}(Q_T)$ of (<ref>), (<ref>), whose norm only depends on the norms of $a$, $r$, and $b$ in the mentioned spaces. To obtain Schauder estimates on $u=u(a)$, we first of all estimate the Schauder norm of the space- and time-dependent coefficient $a(u)$ \[ \begin{aligned} \|a(u)\|_{C^{\epsilon,\epsilon/2}(Q_T)}& =\|a(u)\|_{C(Q_T)}+\sup_{(x,t)\not=(\tilde{x},\tilde{t})\in Q_T} \frac{a(u(x,t))-a(u(\tilde{x},\tilde{t}))}{\sqrt{|x-\tilde{x}|^2+|t-\tilde{t}|}^\epsilon}\\ &=\|a(u)\|_{C(Q_T)}+\sup_{(x,t)\not=(\tilde{x},\tilde{t})\in Q_T} \frac{a(u(x,t))-a(u(\tilde{x},\tilde{t}))}{|u(x,t)-u(\tilde{x},\tilde{t})|} \frac{|u(x,t)-u(\tilde{x},\tilde{t})|}{\sqrt{|x-\tilde{x}|^2+|t-\tilde{t}|}^\epsilon}\\ &\leq\|a\|_{C(\R)}+ |a|_{C^{0+1}(\R)} |u|_{C^{\epsilon,\epsilon/2}(Q_T)}\\ &\leq\|a\|_{C(\R)}+ |a|_{C^{0+1}(\R)} C \|u\|_{H^{2+\alpha,1+\alpha/2}(Q_T)}\\ \end{aligned} \] for $0<\epsilon<2+\alpha-\frac{d+2}{2}$, by the embedding result <cit.>. Thus we can apply <cit.> to the transformed version of (<ref>), (<ref>) \[ \begin{aligned} &D_t w-a(u)\triangle w= a(u)r \mbox{ in }\Omega\times(0,T)\\ &w(0)=a(u_0) \mbox{ in }\Omega\\ &\partial_\nu w = b-\gamma u, \mbox{ on }\partial\Omega\times(0,T). \end{aligned} \] Note the advantage of only needing $a(u)\in C^\epsilon$ in this non-divergence form, which allows us to apply the results from [5], where \[ w(x,t)= A(u(x,t))\quad A(s)=\int_0^s a(r)\, dr\,, \] to obtain \[ \begin{aligned} &\|w\|_{C^{\epsilon,\epsilon/2}(Q_T)}+\|D_t w\|_{C^{\epsilon,\epsilon/2}(Q_T)} &\qquad\qquad\leq K \Bigl(\|a(u_0)\|_{C^{2+\epsilon}(\Omega)} + \|a(u)r\|_{C^{\epsilon,\epsilon/2}(Q_T)} +\|b-\gamma u\|_{C^{\epsilon,\epsilon/2}(\partial\Omega\times(0,T))} \Bigr)\,. \end{aligned} \] The second and third terms on the right hand side can be estimated by $\|a(u)\|_{C^{\epsilon,\epsilon/2}(Q_T)}\, \|r\|_{C^{\epsilon,\epsilon/2}(Q_T)}+\|b\|_{C^{\epsilon,\epsilon/2}(\partial\Omega\times(0,T))}+\gamma C \|u\|_{H^{2+\alpha,1+\alpha/2}(Q_T)}$, which for $\epsilon\leq\alpha$ is already covered by the above assumptions and estimates.
From this we get \begin{equation}\label{eqn:estuC1} \begin{aligned} \|u\|_{C^{1,1/2}(Q_T)} &\leq \|u\|_{C(Q_T)}+\sup_{(x,t)\not=(\tilde{x},\tilde{t})\in Q_T} \frac{u(x,t)-u(\tilde{x},\tilde{t})}{\sqrt{|x-\tilde{x}|^2+|t-\tilde{t}|}}\\ &\leq \|u\|_{C(Q_T)}+\sum_{i=1}^n\|D_{x_i}u\|_{C(Q_T)}+\|D_tu\|_{C(Q_T)}\\ &= \|u\|_{C(Q_T)}+\sum_{i=1}^n\|\frac{1}{a(u)}D_{x_i}w\|_{C(Q_T)}+\|\frac{1}{a(u)}D_tw\|_{C(Q_T)}\\ &\leq (1+\tfrac{1}{\underline{a}}) K \Bigl(\|a(u_0)\|_{C^{2+\epsilon}(\Omega)}+ \|a(u)r\|_{C^{\epsilon,\epsilon/2}(Q_T)}\\ &\qquad+\|b\|_{C^{\epsilon,\epsilon/2}(\partial\Omega\times(0,T))}+\gamma C \|u\|_{H^{2+\alpha,1+\alpha/2}(Q_T)}\Bigr) \ \leq C_{{\rm arb}\, u_0}\,. \end{aligned} \end{equation} Here $C_{{\rm arb}\, u_0}$ denotes a generic constant depending only on the quantities \begin{equation}\label{eq:Carb} \underline{a},\ \|a\|_{C^\alpha(\R)},\ \|r\|_{C^{\alpha,\alpha/2}(Q_T)},\ \|b\|_{C^{\alpha,\alpha/2}(\partial\Omega\times(0,T))},\ \|a(u_0)\|_{C^{2+\alpha}(\Omega)}. \end{equation} Thus the above estimate on $a(u)$ can be improved to \begin{equation}\label{eqn:estauCalpha} \begin{aligned} \|a(u)\|_{C^{\alpha,\alpha/2}(Q_T)} &=\|a(u)\|_{C(Q_T)}+\sup_{(x,t)\not=(\tilde{x},\tilde{t})\in Q_T} \frac{a(u(x,t))-a(u(\tilde{x},\tilde{t}))}{|u(x,t)-u(\tilde{x},\tilde{t})|^\alpha} \Bigl(\frac{|u(x,t)-u(\tilde{x},\tilde{t})|}{\sqrt{|x-\tilde{x}|^2+|t-\tilde{t}|}}\Bigr)^\alpha\\ &\leq\|a\|_{C^\alpha(\R)} \|u\|_{C^{1,1/2}(Q_T)}^\alpha \ \leq C_{{\rm arb}\, u_0}\,. \end{aligned} \end{equation} and we can apply the above result from [5] with $\alpha$ in place of $\epsilon$ to obtain \[ \begin{aligned} &\|w\|_{C^{\alpha,\alpha/2}(Q_T)}+\|D_t w\|_{C^{\alpha,\alpha/2}(Q_T)} &\quad \leq K \Bigl(\|a(u_0)\|_{C^{2+\alpha}(\Omega)} +\|a(u)\|_{C^{\alpha,\alpha/2}(Q_T)}\, \|r\|_{C^{\alpha,\alpha/2}(Q_T)} +\|b\|_{C^{\alpha,\alpha/2}(\partial\Omega\times(0,T))}+\gamma C \underbrace{\|u\|_{C^{\alpha,\alpha/2}(Q_T)}}_{\leq \|u\|_{C^{1,1/2}(Q_T)}} \Bigr) \end{aligned} \] In particular, we get, besides (<ref>), \begin{equation}\label{eqn:estuxCalpha} \|\nabla u\|_{C^{\alpha,\alpha/2}(Q_T)} = \|\frac{1}{a(u)} \nabla w\|_{C^{\alpha,\alpha/2}(Q_T)} \ \leq C_{{\rm arb}\, u_0} \end{equation} and \begin{equation}\label{eqn:estuxxCalpha} \|\triangle u\|_{C^{\alpha,\alpha/2}(Q_T)} = \|\frac{1}{a(u)} (\triangle w-\frac{a'(u)}{a(u)^2}|\nabla w|^2)\|_{C^{\alpha,\alpha/2}(Q_T)} \ \leq C_{{\rm arb}\, u_0} \end{equation} Summarizing, we have shown the following result. For $a$, $r$, $b$ as in (<ref>) and $u_0$ such that $a(u_0)\in C^{2+\alpha}(\Omega)$, there exists a unique solution $u$ of (<ref>), (<ref>), which satisfies \[ \|a(u)D_tu\|_{C^{\alpha,\alpha/2}(Q_T)} + \|\nabla u\|_{C^{\alpha,\alpha/2}(Q_T)} + \|\triangle u\|_{C^{\alpha,\alpha/2}(Q_T)} \leq C_{{\rm arb}\, u_0} \] with some constant $C_{{\rm arb}\, u_0}$ depending only on the quantities in (<ref>). Note that in case of constant initial data $u_0$, $a(u_0)\in C^{2+\alpha}(\Omega)$ does not imply any additional assumptions on the smoothness of $a$.
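While the analysis above is what guarantees that the iterates $u(a_k)$ are well defined and smooth, the numerical experiments in Section 5 require an actual direct solver. The following is a minimal 1-d sketch in flux (conservative) form; for transparency it uses an explicit scheme with the usual stability restriction $\Delta t\lesssim \Delta x^2/(2\max a)$ rather than the Crank-Nicolson scheme used for the actual computations later, boundary cells are treated with full width for simplicity, and all function and variable names here are ours.

```python
import numpy as np

def forward_solve(a, r, b_left, b_right, gamma, u0, a_max,
                  L=1.0, T=1.0, nx=100):
    """Explicit flux-form scheme for u_t = (a(u) u_x)_x + r(x,t) on (0,L)
    with impedance conditions a(u) d_nu u + gamma*u = b at x = 0 and x = L.

    a, r, u0        : vectorized callables a(u), r(x,t), u0(x)
    b_left, b_right : callables b(0,t), b(L,t)
    a_max           : upper bound for a(u), used for the CFL restriction
    """
    dx = L / nx
    nt = int(np.ceil(2.5 * a_max * T / dx**2))   # dt < dx^2 / (2 a_max)
    dt = T / nt
    x = np.linspace(0.0, L, nx + 1)
    u = u0(x).astype(float)
    for n in range(nt):
        t = n * dt
        am = a(0.5 * (u[1:] + u[:-1]))           # a(u) at cell interfaces
        F = am * np.diff(u) / dx                 # interior fluxes a(u) u_x
        # impedance: a(u) d_nu u = b - gamma*u, with d_nu = -d_x at x = 0
        # and d_nu = +d_x at x = L, so the one-sided values of a(u) u_x are:
        F0 = gamma * u[0] - b_left(t)
        FL = b_right(t) - gamma * u[-1]
        F = np.concatenate([[F0], F, [FL]])
        u = u + dt * (np.diff(F) / dx + r(x, t)) # u_t = (a(u) u_x)_x + r
    return x, u
```

This sketch is only meant to indicate how synthetic data can be produced; any unconditionally stable discretization, such as the Crank-Nicolson scheme of Section 5, would normally be preferred.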
In one spatial dimension (<ref>) yields \[ \begin{aligned} \hat{a}_+(g(x))&=\tfrac{1}{g_x(x)}\int_0^x D_t \hat{u}(\xi,T)\, d\xi \\ \hat{a}_+'(g(x))g_x(x)&= \tfrac{1}{g_x(x)} D_t \hat{u}(x,T) + (\tfrac{1}{g_x})_x(x)\int_0^x D_t \hat{u}(\xi,T)\, d\xi \end{aligned} \] In higher space dimensions, we get from (<ref>) and (<ref>) \[ \begin{aligned} \hat{a}_+(\tau)&=\int_0^\tau \frac{D_t \hat{u}(\vec{x}(\phi(\sigma)),T;a_k)}{|\nabla g(\vec{x}(\phi(\sigma)))|^2} \exp\left(-\int_\sigma^\tau\frac{\triangle g(\vec{x}(\phi(\rho)))}{|\nabla g(\vec{x}(\phi(\rho)))|^2}\, d\rho \right)\, d\sigma \\ \hat{a}_+'(\tau)&=\frac{1}{|\nabla g(\vec{x}(\phi(\tau)))|^2} \left( -\hat{a}_+(\tau)\triangle g(\vec{x}(\phi(\tau))) + D_t \hat{u}(\vec{x}(\phi(\tau)),T)\right)\,. \end{aligned} \] Hence, there exists a constant $C_{g\Sigma}=C(\kappa,\|g\|_{C^{2+\alpha}(\Omega)},\|\vec{x}\|_{C^{0+1}(0,1)})$ depending only on the data $g$ and the observation manifold cf. (<ref>), (<ref>) such that \begin{equation}\label{eqn:ahatDtuhat} \|a_{k+1}-a_{\rm act}\|_{C^{2+\alpha}(J)}= \|\hat{a}_+\|_{C^{2+\alpha}(J)} \leq C_{g\Sigma} \|D_t\hat{u}(T)\|_{C^{1+\alpha}(\Omega)}\,. \end{equation} Thus it remains to estimate the norm of $z(T)=D_t\hat{u}(T)$ by a small multiple of $\hat{a}$. To this end, observe that $\hat{u}=u-u_{\rm act}$ and $z=D_t\hat{u}$ solve \begin{equation}\label{eqn:PDEuhat} \begin{aligned} &D_t \hat{u} -\triangle (\bar{a} \hat{u}) = \triangle \Bigl(\int_0^{u}\hat{a}(s)\, ds\Bigr) = \nabla\cdot(\hat{a}(u)\nabla u) \mbox{ in }\Omega\times(0,T)\\ &\hat{u}(0)=0 \mbox{ in }\Omega\\ &\partial_\nu (\bar{a} \hat{u})+\gamma \hat{u}=-\hat{a}(u)\partial_\nu u, \mbox{ on }\partial\Omega\times(0,T) \end{aligned} \end{equation} \begin{equation}\label{eqn:PDEz} \begin{aligned} &D_t z -\triangle (\bar{a} z) = \triangle \Bigl(\hat{a}(u)(D_t u_{\rm act}+z) +(\bar{a}_{1,0} D_t u_{\rm act} + \bar{a}_{1,1} z) \hat{u}\Bigr) \mbox{ in }\Omega\times(0,T)\\ &z(0)=D_t\hat{u}(0)= \nabla\cdot(\hat{a}(u_0)\nabla u_0) \mbox{ in }\Omega\\ &\partial_\nu (\bar{a} z)+\gamma z = -\partial_\nu (\hat{a}(u)(D_t u_{\rm act}+z) + (\bar{a}_{1,0} D_t u_{\rm act} + \bar{a}_{1,1} z) \hat{u}\Bigr), \mbox{ on }\partial\Omega\times(0,T), \end{aligned} \end{equation} \begin{equation}\label{eqn:aij} \bar{a} = \bar{a}_{0,0} = \int_0^1 a_{\rm act}(u_{\rm act}+\theta\hat{u})\, d\theta\,, \qquad \bar{a}_{i,j} = \int_0^1 a_{\rm act}^{(i)}(u_{\rm act}+\theta\hat{u})\theta^j\, d\theta \end{equation} so that $D_t \bar{a} = \bar{a}_{1,0} D_t u_{\rm act} + \bar{a}_{1,1} z$. Let us assume now that $a_{\rm act}$ is positive and bounded away from zero by some constant $\underline{a}$ \[ a_{\rm act}(s)\geq\underline{a}>0 \] and we have Dirichlet boundary conditions in equation  (<ref>), so that the boundary conditions in (<ref>) simplify to homogeneous Dirichlet ones. Similarly to we can make use of the exponential decay of $z$ and $D_t u_{\rm act}$ in order to achieve contractivity. 
To this end we multiply (<ref>) with $e^{\mu t}$, which for $z_\mu(x,t)=e^{\mu t} z(x,t)$, $\tilde{u}_\mu(x,t)=e^{\mu t} D_tu_{\rm act}(x,t)$ yields \begin{equation}\label{eqn:PDEzmu} \begin{aligned} &D_t z_\mu - \triangle(\bar{a} z_\mu) = \triangle[y_1 z_\mu + y_2 \tilde{u}_\mu] \mbox{ in }\Omega\times(0,T)\\ &z_\mu(0)=\nabla\cdot(\hat{a}(u_0)\nabla u_0) \mbox{ in }\Omega\\ &z_\mu = 0 \mbox{ on }\partial\Omega\times(0,T) \end{aligned} \end{equation} with the multipliers \begin{equation}\label{eqn:y1y2} y_1 = \bar{a}_{1,1} \hat{u} + \hat{a}(u)\,, \qquad y_2 = \bar{a}_{1,0} \hat{u} + \hat{a}(u) \,, \end{equation} where we choose $\mu\in(0,\underline{a}\lambda_1)$ with $\lambda_1>0$ the smallest eigenvalue of $-\triangle$ (with Dirichlet boundary conditions) so that $-\triangle(\bar{a} \cdot)-\mu I$ is still elliptic. Then maximal parabolic regularity yields \begin{equation}\label{eqn:CAz} \begin{aligned} &\|z_\mu\|_{W^{1,p}(0,t;L^p(\Omega))}+\|z_\mu\|_{L^p(0,t;W^{2,p}(\Omega))} \\ &\qquad\leq C_{\underline{a},\mu,p} \Bigl(\|\triangle[y_1 z_\mu + y_2 \tilde{u}_\mu]\|_{L^p(\Omega\times(0,t))} +\|\nabla\cdot(\hat{a}(u_0)\nabla u_0)\|_{W^{2(1-1/p),p}(\Omega)}\Bigr) \end{aligned} \end{equation} with a constant $C_{\underline{a},\mu,p}$ independent of $t$ (cf., e.g., <cit.>, <cit.>). Here, on one hand, we can use continuity of the embeddings $W^{\theta,p}(0,t)\to C(0,t)$ and $W^{2-2\theta,p}(\Omega)\to C^{1,\alpha}(\Omega)$ for $\theta\in(0,1)$, $\theta>\frac{1}{p}$, $1-2\theta>\frac{d}{p}+\alpha$ (which can always be achieved by choosing $p\in [1,\infty)$ sufficiently large) and apply interpolation and the fact that the norm of the embedding $W^{\theta,p}(0,t)\to C(0,t)$ is independent of $t$ (by Morrey's inequality) to obtain \begin{equation}\label{eqn:estz} \begin{aligned} \|e^{\mu t} z(t)\|_{C^{1+\alpha}(\Omega)} & \leq \|z_\mu\|_{C(0,t;C^{1+\alpha}(\Omega))} \ \leq C_{W^{\theta,p},C}^{\mathbb{R^+}} C_{W^{2-2\theta,p},C^{1+\alpha}}^\Omega \|z_\mu\|_{W^{\theta,p}(0,t;W^{2-2\theta,p}(\Omega))}\\ &\leq C_{W^{\theta,p},C}^{\mathbb{R^+}} C_{W^{2-2\theta,p},C^{1+\alpha}}^\Omega(\|z_\mu\|_{W^{1,p}(0,t;L^p(\Omega))}+\|z_\mu\|_{L^p(0,t;W^{2,p}(\Omega))}). \end{aligned} \end{equation} On the other hand, we can estimate the right hand side in (<ref>) by \[ \|\triangle[y_1 z_\mu + y_2 \tilde{u}_\mu]\|_{L^p(\Omega\times(0,t))} \leq \sum_{i=1}^2\|\triangle[y_i w_i]\|_{L^p(\Omega\times(0,t))} \] where $w_1=z_\mu$, $w_2=\tilde{u}_\mu$ and both terms can be estimated in the same way: \begin{equation}\label{eqn:yiwi} \begin{aligned} &\|\triangle[y_i w_i]\|_{L^p(\Omega\times(0,t))} \ = \|\triangle y_i \, w_i + 2 \nabla y_i \cdot \nabla w_i + y_i \triangle w_i\|_{L^p(\Omega\times(0,t))}\\ &\leq \|\triangle y_i\|_{L^p(0,t;L^p(\Omega))} \|w_i\|_{L^\infty(0,t;L^\infty(\Omega))} + 2 \|\nabla y_i\|_{L^\infty(0,t;L^{qp/(q-p)}(\Omega))} \|\nabla w_i\|_{L^p(0,t;L^q(\Omega))} \\ &\qquad+ \|y_i\|_{L^\infty(\Omega\times(0,t))} \|\triangle w_i\|_{L^p(0,t;L^p(\Omega))}\\ &\leq \Bigl(C_{W^{\theta,p},C}^{\mathbb{R^+}} C_{W^{2-2\theta,p},C}^\Omega \|\triangle y_i\|_{L^p(0,t;L^p(\Omega))} + 2 C_{W^{2,p},W^{1,q}}^\Omega \|\nabla y_i\|_{L^\infty(0,t;L^{qp/(q-p)}(\Omega))} + \|y_i\|_{L^\infty(\Omega\times(0,t))} \Bigr)\\ &\qquad\qquad \cdot(\|w_i\|_{W^{1,p}(0,t;L^p(\Omega))}+\|w_i\|_{L^p(0,t;W^{2,p}(\Omega))}) \end{aligned} \end{equation} for exponents $p$, $q$ satisfying \begin{equation}\label{eqn:pq} 2-\frac{d}{p}>0\,, \quad 1-\frac{d}{p}>-\frac{d}{q}\,, \quad q>p\,.
\end{equation} The $y_i$-terms in (<ref>) can be estimated by (see (<ref>), (<ref>)) \[ \begin{aligned} \|y_1\|_{L^\infty(\Omega\times(0,t))} &\leq c_a \|\hat{u}\|_{L^\infty(\Omega\times(0,t))} + \|\hat{a}\|_{C(J)} \\[1ex] \|\nabla y_1\|_{L^\infty(0,t;L^{qp/(q-p)}(\Omega))} &=\|(\bar{a}_{2,1}\hat{u}+\hat{a}'(u))\nabla u_{\rm act} + (\bar{a}_{1,1}+\bar{a}_{2,2}\hat{u}+\hat{a}'(u))\nabla \hat{u} \|_{L^\infty(0,t;L^{qp/(q-p)}(\Omega))}\\ &\leq |\Omega|^{(q-p)/(qp)} \Bigl( (\tfrac12 c_a \|\hat{u}\|_{C(\Omega\times(0,t))} + \|\hat{a}'\|_{C(J)}) \|\nabla u_{\rm act}\|_{C(\Omega\times(0,t))}\\ &\qquad+ ((\tfrac12+\tfrac13\|\hat{u}\|_{C(\Omega\times(0,t))})c_a + \|\hat{a}'\|_{C(J)}) \|\nabla \hat{u}\|_{L^\infty(0,t;L^{qp/(q-p)}(\Omega))}\Bigr) \end{aligned} \] and \[ \begin{aligned} &\|\triangle y_1\|_{L^p(0,t;L^p(\Omega))} =\|(\bar{a}_{2,1}\hat{u}+\hat{a}'(u))\triangle u_{\rm act} + (\bar{a}_{1,1}+\bar{a}_{2,2}\hat{u}+\hat{a}'(u))\triangle \hat{u}\\ &\qquad\qquad\qquad\qquad + (\bar{a}_{3,1}\hat{u}+\hat{a}''(u))|\nabla u_{\rm act}|^2 + 2(\bar{a}_{2,1}+\bar{a}_{3,2}\hat{u}+\hat{a}''(u))\nabla u_{\rm act}\cdot\nabla \hat{u}\\ &\qquad\qquad\qquad\qquad + (2\bar{a}_{2,2}+\bar{a}_{3,3}\hat{u}+\hat{a}''(u))|\nabla \hat{u}|^2\|_{L^p(0,t;L^p(\Omega))}\\ &\leq \Bigl(\tfrac12 c_a \|\hat{u}\|_{C(\Omega\times(0,t))} + \|\hat{a}'\|_{C(J)} + C_{W^{2,p},W^{1,2p}}^\Omega \bigl((\tfrac12 + \tfrac56\|\hat{u}\|_{C(\Omega\times(0,t))}) c_a + 2\|\hat{a}''\|_{C(J)}\bigr) \Bigr) \\ &\hspace*{5cm}\cdot\|u_{\rm act}\|_{L^p(0,t;W^{2,p}(\Omega))}\\ &\qquad+\Bigl((\tfrac12+\tfrac13\|\hat{u}\|_{C(\Omega\times(0,t))})c_a + \|\hat{a}'\|_{C(J)} + C_{W^{2,p},W^{1,2p}}^\Omega \bigl((\tfrac76 + \tfrac{7}{12}\|\hat{u}\|_{C(\Omega\times(0,t))}) c_a + 2\|\hat{a}''\|_{C(J)}\bigr) \Bigr)\\ &\hspace*{5cm}\cdot\|\hat{u}\|_{L^p(0,t;W^{2,p}(\Omega))} \end{aligned} \] where we have used continuity of the embedding $W^{2,p}(\Omega)\to W^{1,2p}(\Omega)$ (cf. (<ref>)) and the fact that with $c_a:=\|a_{\rm act}\|_{C^3(J)}$ we have $\|\bar{a}_{i,j}\|_{L^\infty(\Omega\times(0,t))}\leq \frac{1}{j+1} c_a$. We can estimate $y_2$ analogously. For $\hat{u}$ defined in (<ref>) (but with homogeneous Dirichlet boundary conditions), by the same maximal parabolic regularity as well as embedding and interpolation estimates as above, we get that \[ \begin{aligned} \|\hat{u}\|_{C(\Omega\times(0,t))}+\|\hat{u}\|_{L^p(0,t;W^{2,p}(\Omega))} &\leq C_{\underline{a},p} \|\nabla\cdot(\hat{a}(u)\nabla u)\|_{L^p(\Omega\times(0,t))} \\ &\leq C_{\underline{a},p} \Bigl(\|\hat{a}\|_{C(J)}\|u\|_{L^p(0,t;W^{2,p}(\Omega))} + \|\hat{a}'\|_{C(J)} \|u\|_{L^{2p}(0,t;W^{1,2p}(\Omega))}^2\Bigr)\\ &\leq C_{\underline{a},p} \Bigl(C_{rdu_0} + \bigl(C_{W^{1,p}(L^p)\cap L^p(W^{2,p}),L^{2p}(W^{1,2p})}^{Q_T} C_{rdu_0}\bigr)^2\Bigr) \|\hat{a}\|_{C^1(J)} \,, \end{aligned} \] where \[ C_{rdu_0} = C_{\underline{a},p} (\|r\|_{L^p(\Omega\times(0,t))}+\|d\|_{W_{p,tr}} +\|u_0\|_{W^{2(1-1/p),p}(\Omega)}) \] with $W_{p,tr} = W^{1-1/(2p),p}(0,t;L_p(\partial\Omega))\cap L^p(0,t;W^{2-1/p}(\Omega))$.
For $w_2=\tilde{u}_\mu = e^{\mu t}D_t u_{\rm act}$ we get, with the same maximal parabolic regularity result and the fact that it solves (note that we are in the Dirichlet boundary setting $\gamma=\infty$ now) \begin{equation}\label{eqn:PDEutilmu} \begin{aligned} &D_t \tilde{u}_\mu -\triangle[a_{\rm act}(u_{\rm act})\tilde{u}_\mu] -\mu \tilde{u}_\mu= \tilde{r}_\mu \mbox{ in }\Omega\times(0,T)\\ &\tilde{u}_\mu(0)=\nabla\cdot(a_{\rm act}(u_0)\nabla u_0)+D_tr(0) \mbox{ in }\Omega\\ &\tilde{u}_\mu = D_td \mbox{ on }\partial\Omega\times(0,T) \end{aligned} \end{equation} the estimate \begin{equation}\label{eqn:CAutil} \begin{aligned} &\|\tilde{u}_\mu\|_{W^{1,p}(0,t;L^p(\Omega))}+\|\tilde{u}_\mu\|_{L^p(0,t;W^{2,p}(\Omega))} \\ &\qquad\leq C_{\underline{a},\mu,p} \Bigl(\|\tilde{r}_\mu\|_{L^p(\Omega\times(0,t))} + \|\tilde{d}_\mu\|_{W_{p,tr}} + \|\nabla\cdot(a_{\rm act}(u_0)\nabla u_0)+D_tr(0)\|_{W^{2(1-1/p),p}(\Omega)}\Bigr)\\ \end{aligned} \end{equation} where $\tilde{r}_\mu(x,t)=e^{\mu t} D_tr(x,t)$, $\tilde{d}_\mu(x,t)=e^{\mu t} D_td(x,t)$. The initial data terms in (<ref>), (<ref>) can be estimated by \[ \|\nabla\cdot(\tilde{a}(u_0)\nabla u_0)\|_{W^{2(1-1/p),p}(\Omega)}\leq C \|\tilde{a}\|_{C^{2+\alpha}(J)} \|u_0\|_{W^{4-2/p,p}(\Omega)} \] for $\tilde{a}\in\\{\hat{a},a_{\rm act}\\}$, $\alpha\geq 1-2/p$. Note that for spatial dimension $d=3$, according to (<ref>) we have to choose $p>\frac32$ and therefore are in the regime $1-\frac{1}{2p}>\frac{1}{p}$, where the compatibility conditions \begin{equation}\label{eqn:compat} \nabla\cdot(\hat{a}(u_0)\nabla u_0) = 0\,, \quad \nabla\cdot(a_{\rm act}(u_0)\nabla u_0)+D_tr(0) = D_t d(0) \mbox{ on }\partial\Omega \end{equation} need to be assumed. From the above we get an estimate of the form \[ \begin{aligned} &\|z_\mu\|_{W^{1,p}(0,t;L^p(\Omega))}+\|z_\mu\|_{L^p(0,t;W^{2,p}(\Omega))} \\ &\leq C \|\hat{a}\|_{C^{2+\alpha}(J)} \Bigl(\|z_\mu\|_{W^{1,p}(0,t;L^p(\Omega))}+\|z_\mu\|_{L^p(0,t;W^{2,p}(\Omega))} \\&\qquad \qquad +\|u_0\|_{W^{4-2/p,p}(\Omega)} + \|D_tr(0)\|_{W^{2(1-1/p),p}(\Omega)} + \|\tilde{r}_\mu\|_{L^p(\Omega\times(0,t))} + \|\tilde{d}_\mu\|_{W_{p,tr}}\Bigr) \end{aligned} \] with some constant $C$ independent of $t$, hence, for $\|\hat{a}\|_{C^{2+\alpha}(J)}\leq \frac{1}{2C}$ \begin{equation}\label{eqn:estz2} \begin{aligned} &\|z_\mu\|_{W^{1,p}(0,t;L^p(\Omega))}+\|z_\mu\|_{L^p(0,t;W^{2,p}(\Omega))} \\ &\leq C \|\hat{a}\|_{C^{2+\alpha}(J)} \Bigl(\|u_0\|_{W^{4-2/p,p}(\Omega)} + \|D_tr(0)\|_{W^{2(1-1/p),p}(\Omega)} + \|\tilde{r}_\mu\|_{L^p(\Omega\times(0,t))} + \|\tilde{d}_\mu\|_{W_{p,tr}}\Bigr)\,. \end{aligned} \end{equation} Thus, provided that for some $p>\frac{d}{2}$ (cf.
(<ref>)) $D_t r$, $D_t d$ decay exponentially,
\begin{equation}\label{eqn:Dtrdecay}
\begin{aligned}
&\|D_tr(t)\|_{L^p(\Omega)}\leq C_r e^{-\mu_r t} \,, \quad \|D_td(t)\|_{W^{2-1/p,p}(\partial\Omega)}\leq C_d e^{-\mu_d t} \,, \quad t>0 \,, \\
&\|e^{\mu_d\cdot}D_td\|_{W^{1-1/(2p),p}(0,\infty;L^p(\partial\Omega))}:=D_2 <\infty\,,
\end{aligned}
\end{equation}
for some $C_r, C_d, \mu_r, \mu_d >0$, so that
\begin{equation}
\|\tilde{r}_\mu\|_{L^p(\Omega\times(0,\infty))}
=\Bigl(\int_0^\infty e^{p\mu t}\|D_tr(t)\|_{L^p(\Omega)}^p\,dt\Bigr)^{1/p}
\leq \Bigl(\frac{C_r^p}{p(\mu_r-\mu)}\Bigr)^{1/p}=:R<\infty
\end{equation}
for any $\mu\in(0,\min\{\mu_r,\mu_d,\underline{a}\lambda_1\})$ [Note that the characterisation of general Sobolev spaces via fractional derivatives seems to still be open, and so a characterisation of the last condition in (<ref>) by exponential decay of some fractional derivative of $D_t d$ is not possible], we get from (<ref>), (<ref>)
\[
\|z(T)\|_{C^{1+\alpha}(\Omega)} \leq C e^{-\mu T} \Bigl(\|u_0\|_{W^{4-2/p,p}(\Omega)} + \|D_tr(0)\|_{W^{2(1-1/p),p}(\Omega)} + R + D_1 + D_2\Bigr) \ \|\hat{a}\|_{C^{2+\alpha}(J)}
\]
with $C$ independent of $T$. In view of (<ref>) we have thus proven the following contractivity result.
Assume $g\in C^{3+\alpha}(\Omega)$, (<ref>) with $\vec{x}\in C^2([0,1])$, $a_{\rm act}\in C^3(\mathbb{R})$, $a_{\rm act}(\tau)=a_{le}(\tau)$ for $\tau<\underline{u}$, $a_{\rm act}(\tau)=a_{ri}(\tau)$ for $\tau>\overline{u}$, $a_{\rm act}(\tau)\geq \underline{a}>0$, $\tau\in\mathbb{R}$, $u_0\in W^{4-2/p,p}(\Omega)\cap C^{2+\alpha}(\Omega)$, $D_tr(0)\in W^{2(1-1/p),p}(\Omega)$, (<ref>), for some $p>\frac{d}{2}$, $\alpha\geq 1-2/p$, and, in case $d=3$, (<ref>). Then for $T$ large enough and $\|a_0-a_{\rm act}\|_{C^{2+\alpha}(J)}$ small enough there exists a constant $q\in(0,1)$ such that for all $k\in\mathbb{N}$
\[
\|a_{k+1}-a_{\rm act}\|_{C^{2+\alpha}(J)} \leq q \|a_k-a_{\rm act}\|_{C^{2+\alpha}(J)}\,.
\]
Under the assumptions of Theorem th:contr_fiti, $a$ is uniquely determined.
In reality, instead of $g$ we have noisy data lying only in $L^p(\Omega)$, with a noise level $\delta$ that is also only given (if at all) with respect to the $L^p$ norm. Smoothing this data leads to an approximate version of $g$ in $C^{2+\alpha}$ with a (typically larger) noise level $\tilde{\delta}$ in the $C^{2+\alpha}$ norm. Due to contractivity of the scheme, this noise will propagate boundedly through the iteration, so that after sufficiently many – $O(\log(1/\tilde{\delta}))$ – iterations (but without having to stop early, in principle) we end up with an $O(\tilde{\delta})$ accuracy in the reconstruction. For details, see, e.g., <cit.>.
§ RECONSTRUCTIONS
In this section we will show the results of numerical experiments to recover the function $a(u)$ with the two different types of data measurements. The computations will be done in one space dimension.
The first of these is when we are only able to obtain “census-type” information and thus measure $g(x) := u(x,T)$ for some fixed time $T$ and $x\in \Sigma$, where $\Sigma$ is a curve in $\Omega$ whose endpoints lie at different points on $\partial\Omega$. Since our reconstructions will be based on one space variable we simply use $u(x,T)$ as the data and ensure that our imposed initial/boundary conditions allow the range condition to hold.
The second is when we are able to measure the solution $u$ at a fixed point $x_0\in\partial\Omega$. The range condition can be imposed by driving the boundary $x=x_0$, where we measure $u(x_0,t)$, with a sufficiently large supplied heat flux.
To procure data for the reconstructions, a direct solver based on a Crank–Nicolson scheme produced output values, and data values were obtained from these by sampling at a relatively small number $N_x$ and $N_t$ of points in the spatial and temporal directions. This sampled data was then interpolated back to a full working size to obtain data values $g(x)$ and $h(t)$ commensurate with the grid being used by the solver for the inverse problem; in addition, $g(x)$ was smoothed by an $H^2$ filter and $h(t)$ by an $H^1$ filter.
Figure fig:titi_titi shows reconstructions of a function $a(u)$ that goes well beyond a low-degree polynomial from time trace data, using the iteration (<ref>), which clearly outperformed the two schemes (<ref>), (<ref>) in all our experiments. The initial approximation was $a(u) = 1$. The leftmost figure shows the reconstructions of selected iterations; the final one shown was the effective numerical convergence. The noise level here was $0.01\%$. The rightmost figure shows the convergence of the $n^{\rm th}$ iterate, specifically $\|a_n - a_{\rm act}\|$ in both the $L^2$ and $L^\infty$ norms. As suggested by the left figure, the convergence is initially quite rapid but then slows down considerably. Certainly, at the scale of the graph the tenth and final iterations would be indistinguishable.
Recovery of $a(u)$ from time trace data
Figure fig:fiti_fiti shows reconstructions of the same function but using final time $g(x) = u(x,T;a)$ measurements at $T=1$. Again the initial approximation was $a(u) = 1$. The leftmost figure shows the reconstructions of selected iterations, and here the fifth one corresponded to effective numerical convergence. The noticeable difference is in both the speed and the far greater accuracy obtained from such final time data as opposed to boundary time trace data. This pattern was consistent across several numerical experiments with different $a(u)$ functions and different imposed boundary and forcing functions. Note that the small level of noise added to the data $g(x)$ (at 0.1%), to avoid any issue of an inverse crime, played a quite insignificant role. The rightmost figure shows the final reconstructions (again the fifth iteration) under increased noise levels; now at 1% and 5%.
Recovery of $a(u)$ from final time data
The thermal conductivity of most materials varies considerably over a range that encompasses a phase transition. An experiment set up to make conductivity measurements would most likely be conducted over a limited range sufficient to be restricted to a single phase, but we show in Figure fig:fiti_fiti_sq a simulation of an experiment where $a(u)$ has discontinuities.
Recovery of $a(u)$ from final time data
From an analysis viewpoint we need $a(u)$ to be in at least $C^{2+\alpha}$, and so we illustrate these boundaries with a mollified piecewise constant function as shown. This posed little difficulty in the case of final time data, as the figure to the right shows. Effective numerical convergence was attained essentially at the third iteration, and even the first iteration starting from $a_{\rm init} = 1$ captures much of the salient features of the actual $a(u)$. The noise level added to the data was set at $0.1\%$, but reconstructions of similar quality are obtained at higher noise levels than that shown in Figure fig:fiti_fiti for smoother actual $a(u)$. The reconstruction shown is slightly under-smoothed, as can be seen from the oscillations in the region $u\in (1,2)$.
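To make the data-procurement pipeline described at the start of this section concrete, the following is a minimal, purely illustrative sketch (our own, not the authors' code): a semi-implicit Crank–Nicolson forward solve for $u_t=(a(u)u_x)_x$ on $(0,1)$ with homogeneous Dirichlet data and $r\equiv 0$, followed by coarse sampling and multiplicative noise. The conductivity `a_act`, the initial state, and the grid sizes are assumptions chosen for illustration; the $H^2$/$H^1$ smoothing filters are omitted.

```python
import numpy as np

def a_act(u):
    # assumed "actual" nonlinear conductivity, for illustration only
    return 1.0 + 0.5 * np.tanh(u)

nx, nt, T_final = 201, 400, 1.0
x = np.linspace(0.0, 1.0, nx)
dx, dt = x[1] - x[0], T_final / nt
u = np.sin(np.pi * x)                      # assumed initial condition u0

for _ in range(nt):
    # face-centered diffusivities, lagged in time (semi-implicit CN step)
    af = a_act(0.5 * (u[:-1] + u[1:]))
    L = np.zeros((nx, nx))                 # discretization of (a(u) u_x)_x
    for i in range(1, nx - 1):
        L[i, i - 1] = af[i - 1] / dx**2
        L[i, i] = -(af[i - 1] + af[i]) / dx**2
        L[i, i + 1] = af[i] / dx**2
    I = np.eye(nx)
    u = np.linalg.solve(I - 0.5 * dt * L, (I + 0.5 * dt * L) @ u)
    u[0] = u[-1] = 0.0                     # homogeneous Dirichlet data

# sample g(x) = u(x, T) at a small number N_x of points, then add 0.1% noise
Nx = 20
g = u[:: nx // Nx]
rng = np.random.default_rng(0)
g_noisy = g * (1.0 + 1e-3 * rng.standard_normal(g.size))
```

In the actual experiments, the sampled values would then be interpolated back to the solver grid and smoothed as described above before being fed into the inversion iteration.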
The slight under-smoothing was deliberate, done to show how accurately the algorithm is able to detect and reconstruct sharp boundaries. By letting the variation in $a(u)$ reach such sharp bounds we see the efficiency of the numerical simulation based on the algorithm developed in section subsec:fiti and resulting in Theorem th:contr_fiti.
§ ACKNOWLEDGMENT
The work of the first author was supported by the Austrian Science Fund FWF under the grants P30054 and DOC 78. The work of the second author was supported in part by the National Science Foundation through award DMS-1620138.
[1] Alessandro Audrito and Juan Luis Vázquez. The Fisher-KPP problem with doubly nonlinear “fast” diffusion. Nonlinear Anal., 157:212–248, 2017.
[2] J. R. Cannon and Paul DuChateau. An inverse problem for a nonlinear diffusion equation. SIAM J. Appl. Math., 39(2):272–289, 1980.
[3] R. Denk, M. Hieber, and J. Prüss. Optimal $L^p$-$L^q$-estimates for parabolic boundary value problems with inhomogeneous data. Math. Z., 257:193–224, 2007.
[4] Paul DuChateau. Monotonicity and uniqueness results in identifying an unknown coefficient in a nonlinear diffusion equation. SIAM J. Appl. Math., 41(2):310–323, 1981.
[5] A. Friedman. Partial differential equations of parabolic type. Prentice-Hall, 1964.
[6] Morton E. Gurtin and Richard C. MacCamy. On the diffusion of biological populations. Math. Biosci., 33(1-2):35–49, 1977.
[7] Barbara Kaltenbacher and William Rundell. On an inverse potential problem for a fractional reaction-diffusion equation. Inverse Problems, 35:065004, 2019.
[8] Barbara Kaltenbacher and William Rundell. On the identification of a nonlinear term in a reaction-diffusion equation. Inverse Problems, 35:115007, 2019.
[9] Barbara Kaltenbacher and William Rundell. Recovery of multiple coefficients in a reaction-diffusion equation. Journal of Mathematical Analysis and Applications, 481:123475, 2020.
[10] Barbara Kaltenbacher and William Rundell. The inverse problem of reconstructing reaction-diffusion systems. Inverse Problems, 36:065011, 2020.
[11] O.A. Ladyzhenskaja, V.A. Solonnikov, and N.N. Ural'tseva. Linear and Quasi-linear Equations of Parabolic Type. Translations of Mathematical Monographs. American Mathematical Society, 1968.
[12] Yuri Latushkin, Jan Prüss, and Roland Schnaubelt. Stable and unstable manifolds for quasilinear parabolic systems with fully nonlinear boundary conditions. Journal of Evolution Equations, 6(4):537–576, Dec 2006.
[13] J. G. Skellam. Random dispersal in theoretical populations. Biometrika, 38(1/2):196–218, 1951.
[14] P. Turchin. Population consequences of aggregative movement. Journal of Animal Ecology, 58:75–100, 1989.
[15] Juan Luis Vázquez. The porous medium equation. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, Oxford, 2007. Mathematical theory.
# Superfluid Density in Conventional Superconductors: From Clean to Strongly Disordered
Surajit Dutta1, Pratap Raychaudhuri1, Sudhansu S. Mandal2, T.V.Ramakrishnan3 1Tata Institute of Fundamental Research, Mumbai 400005, India 2Department of Physics, Indian Institute of Technology, Kharagpur 721302, India 3Department of Physics, Indian Institute of Science, Bangalore 560012, India
###### Abstract
The highly convergent form of superfluid density in disordered conventional superconductors available in the literature and independently obtained by us following the approach of an earlier paper [Phys. Rev. B $\bm{102}$, 024514 (2020)] has been reformulated to separate out the generally used so-called ‘dirty-limit’ term and an additional term. We use this new expression for making an extensive comparison with previously published experimental data and show that the former, generally used, term is not sufficient for analyzing these results. We point out that consequently, there is a large regime (disordered superconductors with moderate to no disorder) where theoretical predictions need to be confronted with experiment.
## I Introduction
The additional free energy ${\cal F}$ of a superconductor depends on its nonzero superfluid velocity ${\rm\bf v}_{s}$ as ${\cal F}\sim(\rho_{s}/2)\int d\bm{r}\,{\rm\bf v}_{s}^{2}$, where $\rho_{s}$ is the superfluid stiffness or phase rigidity, analogous to the mass (see e.g. Ref. Coleman ). This gauge invariant superfluid velocity ${\rm\bf v}_{s}$ is related to the phase $\theta$ of the superconducting order parameter as ${\rm\bf v}_{s}=(1/m_{e})(\bm{\nabla}\theta-2e\bm{A})$; here $\bm{A}$ is the vector potential, $e$ and $m_{e}$ are the charge and mass of an electron respectively, and we set $\hbar=1$. Experimentally, one measures the magnetic penetration depth $\lambda$, which is related to the superfluid density $n_{s}$ as Tinkham $\lambda^{-2}=\mu_{0}e^{2}n_{s}/m_{e}$. The superfluid density $n_{s}$ is proportional to the superfluid stiffness; $n_{s}=(4/m_{e})\rho_{s}$. We use the above relation between the experimentally measured penetration depth $\lambda$ and the calculated $\rho_{s}$ to compare in detail theoretical results with experiment, and suggest that there is a large regime of disorder in relatively clean systems so that measurements are needed here, also to establish the clean London limiting value.
The solely diamagnetic response of the electron system to an external magnetic field leads to $n_{s}^{d}=n$, the electron density. This is the London value, which also follows for the ground state ($T=0$) from Galilean invariance, for a homogeneous continuum. However, the actual superfluid density is less than $n_{s}^{d}$ due to the paramagnetic response of the system: $n_{s}=n_{s}^{d}-n_{s}^{p}$, $n_{s}^{p}$ being the paramagnetic contribution to the superfluid density. For a clean conventional Bardeen-Cooper-Schrieffer (BCS) superconductor BCS , $n_{s}^{p}=0$ at zero temperature and is exponentially small at low temperatures because of the presence of the quasiparticle gap. However, $n_{s}^{p}$ grows with temperature and eventually becomes equal to $n_{s}^{d}$ at the superconducting critical temperature $T_{c}$, where $n_{s}$ vanishes. In disordered superconductors, $n_{s}^{p}\neq 0$ at zero temperature ($T=0$), and the resulting superfluid density is disorder dependent and is smaller A-G than the London limiting value at $T=0$. This, and the temperature dependence of $n_{s}$, have been discussed in the literature AG2 ; AGD ; Weiss ; Nam ; Kogan ; M-R .
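To fix the scales involved, a one-line numerical illustration (our addition) of the relation $\lambda^{-2}=\mu_{0}e^{2}n_{s}/m_{e}$: for $n_{s}=10^{28}\,$m${}^{-3}$ and $m^{\ast}=m_{e}$,
\[
\lambda=\sqrt{\frac{m_{e}}{\mu_{0}e^{2}n_{s}}}
\approx\sqrt{\frac{9.11\times10^{-31}}{(1.26\times10^{-6})\,(1.60\times10^{-19})^{2}\times10^{28}}}\ \mathrm{m}
\approx 53\ \mathrm{nm}\,,
\]
which is indeed the scale of $\lambda(0)$ reported for Pb and Sn in Table 1 below.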
The effect of static, short range nonmagnetic disorder on superconductors is most simply characterized by a broadening $\Gamma\ll\epsilon_{\rm F}$ of the electron spectral density (here $\epsilon_{\rm F}$ is the Fermi energy) Coleman ; A-G . Microscopic calculations generally use on site or zero range disorder with a Gaussian probability distribution of its strength related to this broadening. The effect of disorder on electrons is mostly implemented in the Born approximation, where it leads to a finite lifetime $\tau=(1/\Gamma)$ of electronic states. Such a treatment neglects Anderson localization effects Ma-Lee . In this approximation, it is well known that in the so called ‘dirty limit’, i.e. for $\Delta_{0}/\Gamma\ll 1$, $n_{s}$ at $T=0$ scales A-G with the dc conductivity $\sigma=ne^{2}\tau/m_{e}$ in the normal state, i.e., $n_{s}(T=0)=\sigma(\pi m_{e}\Delta_{0}/e^{2})=n\pi\Delta_{0}\tau$, where $\sigma$ is the electrical conductivity of the system, $n$ is the normal electron density, and $\Delta_{0}$ is the gap at $T=0$. We note that $\Delta_{0}$ is independent of disorder, according to Anderson’s theorem Anderson . A generalized form of this zero-temperature superfluid density at finite temperatures, namely
$n_{s}(T)=n\pi\tau\Delta(T)\,\tanh\left(\frac{\Delta(T)}{2k_{B}T}\right)\,$ (1)
is often used for analyzing experimental data Lemberger07 ; Mondal11 ; Mandal20 ; here $\Delta(T)$ is the gap at the temperature $T$. However, this expression has also been derived AG2 ; Kogan in the dirty limit. Clearly, $n_{s}(T)$ in Eq. (1) cannot be valid for all $\tau$, because for $\tau$ large enough such that $\Delta_{0}\tau>1/\pi$, the superfluid density $n_{s}(T=0)$ exceeds the maximum possible London limiting value $n$.
In this paper, we exhibit the superfluid density as a sum of the commonly used term (1) and another term in the following way. We reformulate an expression (2) of superfluid density AGD ; Nam ; Weiss ; Scheffler , which is a convergent sum over Matsubara frequencies only and which shows explicitly that $n_{s}$ vanishes when $\Delta$ vanishes. This frequency sum is converted into a contour integral over complex frequencies, and displays two simple poles at $\pm\Delta$ and branch cuts for the domains $(\Delta,\infty)$ and $(-\infty,-\Delta)$. The residues of the simple poles provide the contribution (1) generally used for the analysis of experimental data. We have derived an additional contribution arising from the branch cuts; this competes with the former as they are opposite in sign. We find that the contribution of the latter is insignificant if $\Delta_{0}\tau\lesssim 10^{-3}$; it begins to be relevant for $\Delta_{0}\tau\sim 5\times 10^{-3}$. Both contributions increase with $\Delta_{0}\tau$, and their difference asymptotically approaches the London limit at $T=0$ for $\Delta_{0}\tau\rightarrow\infty$. The contribution of the latter to the superfluid density, and thus to the measured absolute value of the penetration depth, provides a large regime, as yet unexplored, for experimental studies of the disorder-dependent superfluid density in relatively clean superconductors over a wide span in $\Delta_{0}\tau$, namely roughly from $10^{-3}$ to $10$, i.e., from the dirty limit to the clean limit. We also find that the temperature dependence of the scaled superfluid density $n_{s}(T)/n_{s}(0)$ is almost independent of disorder; this scaled density function is easily obtained in the dirty limit as well as in the pure limit, and is the same.
This fact has led to the belief that the dirty limit expression is appropriate for all disorder, including very weak disorder. Our finding suggests a disorder-dependent study with measurement of the absolute value of the superfluid density as a function of disorder, and provides explicit expressions for it at different temperatures and for different values of disorder. Unfortunately, not much data is available in the literature where an absolute measurement of $n_{s}$ has been performed, so that our results cannot be easily compared with experiment.
In Section III, we analyze some of the available experimental data in superconductors like Nb-doped SrTiO${}_{{}_{3}}$, Pb, Sn, Nb, NbN, and a-MoGe. The data for $T_{c}$ and $n$ have been obtained via transport measurements, and the dimensionless parameter $\delta=\Delta_{0}/(2k_{B}T_{c})$ is obtained from the measurement of $\Delta_{0}$ in tunneling experiments. We then have just one free parameter, $\Delta_{0}\tau$, which we extract by fitting the theoretical expression mentioned above, in which we have also explicitly shown the contributions of the two terms separately. The extracted values of $\Delta_{0}\tau$ range from about $5\times 10^{-5}$ to $0.5$. The ratio $\eta$ of the two contributions to $n_{s}(T)$ mentioned above is almost negligible for a-MoGe and NbN, for which $\Delta_{0}\tau$ is very small; it becomes recognizable for the Nb sample, more prominent for Pb and Sn, and for Nb-doped SrTiO3 it is the largest amongst all the samples analyzed here. Section IV is devoted to the outlook and discussion, where we point out that many more experiments need to be confronted with the theoretical prediction, as the highest value of $n_{s}/n$ that has been found in the earlier experiments is about $0.56$, whereas it can go up to $1.0$ in the pure limit, which may be attained for samples with $\Delta_{0}\tau\sim 10$. We also discuss here the physics that cannot be revealed from the theoretical prediction above. In Appendix A, we have estimated the superfluid density by utilizing the oscillator sum rule for the real part of the optical conductivity. We show that it reproduces the clean limit exactly and the dirty limit up to a numerical factor of order unity.
## II Reformulated Superfluid Density
A highly convergent expression AGD of superfluid density (see also Refs. Nam ; Weiss ; AG2 ; Kogan ) at finite temperatures for all disorder (excluding the localization regime) is given by
$n_{s}(T)=\frac{n\pi}{\beta}\sum_{\omega_{m}}\left[\frac{\tilde{\Delta}^{2}}{(\tilde{\Delta}^{2}+\tilde{\omega}_{m}^{2})^{3/2}}\right]$ (2)
which is obtained also by a series of successive integrations by parts, removing divergences in the approach of Ref. M-R , where the renormalized frequency $\tilde{\omega}_{m}$ and gap $\tilde{\Delta}$, in terms of the Matsubara frequency $\omega_{m}=\pi(2m+1)/\beta$ and the superconducting gap $\Delta$, can be expressed as
$\frac{\tilde{\omega}_{m}}{\omega_{m}}=\frac{\tilde{\Delta}}{\Delta}=1+\frac{1}{2\tau\sqrt{\Delta^{2}+\omega_{m}^{2}}}.$ (3)
Here $\beta=1/(k_{B}T)$, and the finite electronic lifetime $\tau$ is introduced A-G in the theory of disordered superconductors through the Nambu–Green’s functions for Bogoliubov quasiparticles. The superfluid density (2) is explicitly seen to vanish in the absence of an isotropic superconducting gap.
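As a quick numerical illustration of Eq. (2) (a sketch of ours, not taken from the paper; units $\hbar=k_{B}=1$, energies in units of the gap $\Delta$), one can evaluate the truncated Matsubara sum at low temperature and compare it against the dirty-limit value $\pi\Delta_{0}\tau$ of Eq. (1):

```python
import numpy as np

def ns_over_n(T, tau, Delta=1.0, M=200_000):
    """Truncated Matsubara sum of Eq. (2); renormalization as in Eq. (3)."""
    m = np.arange(M)
    wm = np.pi * (2 * m + 1) * T                  # Matsubara frequencies
    r = 1.0 + 1.0 / (2.0 * tau * np.sqrt(Delta**2 + wm**2))
    Dt, wt = Delta * r, wm * r
    # positive and negative frequencies contribute equally, hence the factor 2
    return 2.0 * np.pi * T * np.sum(Dt**2 / (Dt**2 + wt**2) ** 1.5)

for tau in (1e-3, 1e-1, 10.0):                    # values of Delta_0 * tau
    print(f"tau={tau:7.0e}  n_s/n={ns_over_n(0.01, tau):.4f}  "
          f"pi*tau={np.pi * tau:.4f}")
```

At strong disorder this reproduces $\pi\Delta_{0}\tau$, while for large $\Delta_{0}\tau$ it saturates below the London value $n$, in line with the discussion that follows.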
The expression (2) of $n_{s}(T)$ is applicable to both two and three dimensional superconductors, provided the localization effect of disorder does not set in at very strong disorder. However, the expression (2) has not been frequently used for analyzing experimental data because it involves a complicated sum over the Matsubara frequencies. Here we reformulate Eq. (2) below in terms of a simple term and a simple integral, and analyze available data from absolute measurements of the superfluid density in the next section.
Figure 1: (Color online) Zero temperature contributions to the expression (5) of the normalized superfluid density, $n_{s}/n$, as a function of the dimensionless disorder $\Delta_{0}\tau$: We show the first term, the second term, and their difference for several decades of $\Delta_{0}\tau$. The nonzero value of the difference is apparent on this log-log scale in the separate curve for it.
The frequency sum in Eq. (2) is evaluated in the usual way: we change it into a contour integration for complex $z$ such that the contour contains only the poles at $z=i\omega_{m}$,
$n_{s}(T)=n\pi\int_{C}\frac{dz}{2\pi i}\frac{\Delta^{2}}{(\Delta^{2}-z^{2})(\sqrt{\Delta^{2}-z^{2}}+\frac{1}{2\tau})}\frac{1}{e^{\beta z}+1}.$ (4)
We now deform the contour to exclude the non-analyticities on the real axis (energy $\epsilon$), namely the simple poles at $z=\pm\Delta$ as well as the branch cuts from $z=\Delta\to\infty$ and from $-\Delta\to-\infty$ (arising from the square root term), so that
$\frac{n_{s}}{n}=\pi\Delta\tau\tanh\left(\frac{\beta\Delta}{2}\right)-\Delta^{2}\int_{\Delta}^{\infty}d\epsilon\frac{\tanh\left(\frac{\beta\epsilon}{2}\right)}{\sqrt{\epsilon^{2}-\Delta^{2}}(\epsilon^{2}-\Delta^{2}+\frac{1}{4\tau^{2}})}\,.$ (5)
The first term is due to the contribution from the residues of the poles and the second term is due to the branch cuts. In the dirty limit $(\Delta_{0}\tau\ll 1)$ the latter is much smaller than the former and can therefore be neglected, i.e., the contribution of the branch cuts to the superfluid density is negligible. This fact leads to a considerable simplification of calculations in the dirty limit. We note that while the first term in Eq. (5) is present in the superfluid density expression in Ref. M-R as well, the second term differs. This finer difference makes the expression (5) consistent in temperature dependence at all disorder.
Figure 2: (Color online) Temperature dependence of $n_{s}$ scaled with the electron density for different levels of disorder: $\Delta_{0}\tau=10^{-3},\,10^{-2},\,10^{-1},\,1,\,10,\,10^{2}$ (in units of $\hbar$). Temperature is scaled with the BCS $T_{c}$. Inset: $n_{s}(T)$ is scaled with $n_{s0}=n_{s}(T=0)$. The temperature variation of $n_{s}/n_{s0}$ is almost independent of disorder, although $n_{s0}$ is strongly disorder dependent.
The zero temperature limit of Eq. (5) yields
$\frac{n_{s}}{n}(T=0)=\pi\Delta_{0}\tau-\begin{cases}\frac{(2\Delta_{0}\tau)^{2}}{\sqrt{(2\Delta_{0}\tau)^{2}-1}}\tan^{-1}\left(\sqrt{(2\Delta_{0}\tau)^{2}-1}\right)&\text{for}\ \ 2\Delta_{0}\tau>1\\[1ex] \frac{(2\Delta_{0}\tau)^{2}}{\sqrt{1-(2\Delta_{0}\tau)^{2}}}\tanh^{-1}\left(\sqrt{1-(2\Delta_{0}\tau)^{2}}\right)&\text{for}\ \ 2\Delta_{0}\tau\leq 1\end{cases}$ (9)
Though superficially different from the well known $T=0$ result A-G ; Nam ; Kogan ; M-R , this also has the right clean and dirty limits, namely $n$ and $n\pi\Delta_{0}\tau$.
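For completeness (our addition), these limits can be checked directly from (9). Writing $x\equiv 2\Delta_{0}\tau$, for $x\gg 1$,
\[
\frac{x^{2}}{\sqrt{x^{2}-1}}\tan^{-1}\!\sqrt{x^{2}-1}
=x\Bigl(1+O(x^{-2})\Bigr)\Bigl(\frac{\pi}{2}-\frac{1}{x}+O(x^{-2})\Bigr)
=\frac{\pi x}{2}-1+O(x^{-1})\,,
\]
so that $\frac{n_{s}}{n}(0)=\frac{\pi x}{2}-\bigl(\frac{\pi x}{2}-1\bigr)+O(x^{-1})\to 1$, the clean London value. For $x\ll 1$, using $\tanh^{-1}\!\sqrt{1-x^{2}}=\ln(2/x)+O(x^{2})$, the second term is $O(x^{2}\ln(1/x))$, which is subleading, and $\frac{n_{s}}{n}(0)=\frac{\pi x}{2}+O(x^{2}\ln(1/x))=\pi\Delta_{0}\tau+\dots$, the dirty-limit result.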
Although the first term in Eq. (5) is sufficient in the extreme dirty limit $(\Delta_{0}\tau\ll 1)$, as mentioned above, it alone is incomplete when $\Delta_{0}\tau\sim 1$, as it can exceed the London limit, namely the electron density $n$! We show the variations of the first and second terms of Eq. (5) and their difference, i.e., $n_{s}/n$, over several decades of $\Delta_{0}\tau$ at $T=0$ in Fig. 1. The contribution of the second term is negligible, being three orders of magnitude smaller than the first term, when $\Delta_{0}\tau=10^{-4}$. However, the role of the second term begins to be significant even for $\Delta_{0}\tau=5\times 10^{-3}$, when it is about $3$% of the first. While both terms increase with $\Delta_{0}\tau$, the difference between them asymptotically becomes unity, namely it approaches the disorder-free London limit. The zero temperature value of $n_{s}$ depends strongly on $\Delta_{0}\tau$ and attains the pure limit for $\Delta_{0}\tau\sim 10$, while it has the dirty limit value for $\Delta_{0}\tau\lesssim 0.005$.
The temperature dependence of $n_{s}$ is numerically calculated using a dimensionless form of the variables and parameters of Eq. (5), reinstating $\hbar$ as appropriate:
$\frac{n_{s}}{n}=\pi\tilde{\Delta}\left(\frac{\Delta_{0}\tau}{\hbar}\right)\tanh\left(\frac{\delta\tilde{\Delta}}{T/T_{c}}\right)-\tilde{\Delta}^{2}\int_{\tilde{\Delta}}^{\infty}d\tilde{\epsilon}\frac{\tanh\left(\frac{\delta\tilde{\epsilon}}{T/T_{c}}\right)}{\sqrt{\tilde{\epsilon}^{2}-\tilde{\Delta}^{2}}(\tilde{\epsilon}^{2}-\tilde{\Delta}^{2}+\frac{1}{4(\Delta_{0}\tau/\hbar)^{2}})}\,,$ (10)
where $\tilde{\Delta}=\Delta/\Delta_{0}$, $\tilde{\epsilon}=\epsilon/\Delta_{0}$, and $\delta=\Delta_{0}/(2k_{B}T_{c})$. Figure 2 shows the temperature dependence of $n_{s}/n$ for a wide range of $\Delta_{0}\tau$ (in units of $\hbar$), using the BCS value $\delta=0.882$. As expected, the temperature dependence of $n_{s}$ at low temperatures is exponentially weak due to the presence of the gap $\Delta_{0}$, but it depends strongly on $T$ beyond a threshold value $T_{\rm th}$ and eventually vanishes at $T=T_{c}$. The inset of Fig. 2 shows the temperature dependence of $n_{s}(T)$ scaled by its zero-temperature value $n_{s0}$ for a wide range of $\Delta_{0}\tau$ (temperature scaled by $T_{c}$). The barely recognizable variation of $n_{s}(T)/n_{s}(0)$ with disorder indicates that experimental techniques in which the absolute value (rather than the value relative to zero temperature) of $n_{s}(T)$ is measured are the only ones suitable for studying the disorder dependence of the superfluid density.
## III Comparison with Experiment
Table 1: Experimental data for $T_{c}$, $\Delta(0)$, $\lambda(0)$, $n$, the normal-state resistivity $\rho_{N}$, the mean free path $\ell$, and the effective mass $m^{\ast}$ of an electron, obtained from a number of experiments Devlin ; Megerle ; Wyckoff ; Kittel ; Rinderer83 ; Lemberger07 ; Sutton ; Karim ; Mondal11 ; Chand ; Mattheiss ; Chockalingam ; Mandal20 ; Dressel ; Lin in various samples.
Sample | $T_{c}$ (K) | $\Delta(0)$ (meV) | $\lambda(0)$ (nm) | $n$ ($10^{28}$ m${}^{-3}$) | $\rho_{N}$ ($\mu\Omega$-m) | $\ell$ (Å) | $m^{\ast}/m_{e}$
---|---|---|---|---|---|---|---
Sn | 3.72 Devlin | 0.555 Megerle | 42.5 Devlin | 14.8 Wyckoff | *** | *** | 1.26 Kittel
Pb | 7.2 Rinderer83 | 1.34 Megerle | 52.5 Rinderer83 | 13.2 Wyckoff | *** | *** | 1.97 Kittel
Nb (15.3 nm) | 8.17 Lemberger07 | 1.525 Sutton | 135.08 Lemberger07 | 5.56 Wyckoff | 0.135 Lemberger07 | 64.6 | 1.81 Karim
NbN-1$^{a}$ | 14.3 Mondal11 | 2.5 Chand | 358.4 Mondal11 | 16.85 Chand | 1.14 Chand | 3.65 | 1.0 Mattheiss ; Chockalingam
NbN-2$^{a}$ | 9.94 Mondal11 | 1.736 Chand | 583.9 Mondal11 | 11.6 Chand | 2.22 Chand | 2.41 | 1.0 Mattheiss ; Chockalingam
NbN-3$^{a}$ | 8.5 Mondal11 | 1.485 Chand | 759.1 Mondal11 | 11.76 Chand | 2.41 Chand | 2.2 | 1.0 Mattheiss ; Chockalingam
MoGe-1 (21 nm)$^{b}$ | 7.56 Mandal20 | 1.28 Mandal20 | 528 Mandal20 | 46 Mandal20 | 1.5 Mandal20 | 1.42 | 1.0
MoGe-2 (11 nm)$^{b}$ | 6.62 Mandal20 | 1.25 Mandal20 | 554.6 Mandal20 | 46 Mandal20 | 1.64 Mandal20 | 1.3 | 1.0
MoGe-3 (4.5 nm)$^{b}$ | 4.8 Mandal20 | 1.12 Mandal20 | 613.07 Mandal20 | 46 Mandal20 | 1.44 Mandal20 | 1.48 | 1.0
Nb-doped STO | 0.346 Scheffler | 0.052 Scheffler | 1349.5 Scheffler | 0.011 Scheffler | 0.52 Scheffler | *** | 4.0 Lin
$^{a}$ $n$ and $\rho_{N}$ of NbN are obtained by interpolation using the given data set of Ref. Chand . The three samples correspond to different levels of disorder.
$^{b}$ Three amorphous MoGe thin films with different thicknesses (in brackets). The carrier density is measured from the Hall effect for MoGe-1 and assumed to remain the same for the other thicknesses.
Figure 3: (Color online) (a) Experimental data (black dots) of $n_{s}/n$ vs. $T$ for Nb-doped SrTiO3 fitted with Eq. (10); blue and green curves respectively represent the contributions of the 1st and 2nd terms of Eq. (10). (b) Fit of the same data but with the dirty limit BCS formula, which is equivalent to taking only the first term of Eq. (10); the fit deviates at intermediate temperatures. (c)–(e) Experimental data for Pb, Sn crystal, and the 15.3-nm thick Nb film, respectively; red solid curves are the theoretical fits using Eq. (10). (f) and (g) respectively correspond to the temperature dependence of $n_{s}/n$ for NbN and MoGe films with various thicknesses; solid lines are the theoretical fits using Eq. (10); these fits are indistinguishable from the contribution of the 1st term of Eq. (10) alone. (h) The ratio $\eta$ of the contributions of the 2nd and 1st terms of Eq. (10) vs. the parameter $\Delta_{0}\tau$ (in units of $\hbar$) (solid line), and the same extracted from the fits mentioned above for various samples (dots).
In this section, we analyze some of the published experimental data for $n_{s}(T)$, which are extracted from the measured penetration depth using London's formula Tinkham :
$n_{s}=\frac{m^{\ast}}{\mu_{0}e^{2}}\lambda^{-2}=2.82\times 10^{13}\left(\frac{m^{\ast}}{m_{e}}\right)m^{-1}\lambda^{-2}$ (11)
in the light of the expression (10) derived here, where $m^{\ast}$ is the effective mass of an electron in the system. One difficulty in the comparison between theory and experiment is that in much of the literature on conventional superconductors, only the change of the penetration depth with respect to a given temperature, rather than the absolute value of $\lambda$, has been measured in bulk samples.
Absolute values have been measured for colloidal particles Shoenberg ; Lock and large area thin films on mica, but for those samples it is difficult to estimate other properties like resistivity and carrier density, which could significantly differ from the bulk and have not been reported. Nevertheless, researchers used indirect schemes to estimate $\lambda(0)$. For example, in Ref. Rinderer83 for Pb, $\Delta$ obtained from tunneling was used as an input parameter and $\lambda(0)$ was obtained by tuning it to the value that consistently reproduced the BCS temperature dependence of $\lambda(T)$ for a set of samples with different amounts of impurity. In some other cases, such as in pure Sn crystal Devlin , $\lambda(0)$ was estimated from the normal state properties. More recently, absolute measurements of $\lambda$ have been performed on a number of superconducting thin films using the two-coil mutual inductance technique Benfatto ; Kamlapure ; Bose and on some single crystals using microwave techniques Hafner . Here, we analyze the data of Nb-doped STO Scheffler and Sn crystal Devlin , polycrystalline Rinderer83 Pb and the 15.3 nm thick Nb film Lemberger07 , and relatively strongly disordered thin films Mondal11 ; Mandal20 of NbN and a-MoGe. Although Nb-doped SrTiO3 was initially thought to be a multi-band superconductor Bednorz , the recent data are in favor of a single-band Thiemann2 ; Hwang ; Eagles superconductor. Together these systems span a large range of disorder for which $n_{s}/n\sim 0.6$–$10^{-4}$. In Table 1, we summarize the properties of these materials. For Sn and Pb, the authors reported $\lambda(T)$ vs. $\left(1-(T/T_{c})^{4}\right)^{1/2}$; the data was digitized and converted into $\lambda^{-2}(T)$ vs. $T$. One important parameter in Table 1 is the effective mass of the electron. This value is taken either from the electronic specific heat (Sn, Pb, NbN) or quantum oscillations (Nb-doped STO and Nb). For a-MoGe, we did not find an independent estimate but used the electron mass, as has been done in the literature Tashiro .
In Fig. 3(a)–(g), we show the temperature variation of $n_{s}/n$ for different materials. We first focus on the Nb-doped SrTiO3 crystal, which is the cleanest sample analyzed here. In Fig. 3(a) we fit $n_{s}(T)/n$ using the full expression in Eq. (10), with the values of $\delta$ as shown in Table 2 and $\alpha$ as the only adjustable parameter. In the same panel we also separately plot the 1st and 2nd terms on the right hand side of Eq. (10). In Fig. 3(b), we try to fit the same data using only the first term, which is equivalent to the dirty limit expression in Eq. (1). As can be seen, the best fit curve deviates at high temperature, showing that at this level of cleanliness a small but discernible difference in the $T$-dependence emerges between the exact expression and the dirty-limit BCS expression. For Sn, Pb, and the Nb film (Fig. 3(c)–3(e)), as $n_{s}/n$ decreases, the contribution of the 2nd term to the overall expression progressively decreases. For the strongly disordered NbN and a-MoGe films (Fig. 3(f)–3(g)) the contribution of the 2nd term is negligible and the data can be fitted with the dirty limit BCS expression. The extracted parameters from the fits are also shown in Table 2. Wherever resistivity data is available, the values of $\tau$ extracted from the present fits, $\tau_{P}$, are consistent with those obtained from resistivity, $\tau_{T}$, using the Drude model. In Fig. 3(h), we show the ratio of the second term to the first term, $\eta$, as a function of $\Delta_{0}\tau$.
It is obvious from the graph that the cleanest superconductor analyzed here, Nb-doped STO, is far from the BCS clean limit, for which $n_{s}(0)/n\sim 1$ and $\Delta_{0}\tau\gg 1$. Most studies on pure elemental superconductors show $n_{s}/n=0.05$–$0.3$ Tai ; Pippard ; Dressel . Surprisingly, there is one report Strongin where $n_{s}/n$ values very close to one were reported for very pure polycrystalline Ta and Nb. However, in that paper the $\lambda(0)$ values were obtained from $\lambda(T)$ close to $T_{c}$; moreover, for the same sample, the low temperature variation of $\lambda(T)$ showed an unexpected, distinct deviation from the BCS variation, probably due to surface contamination. Similarly, it was suggested that Nb-doped SrTiO3 could be in the clean limit Collignon , but this has been contested from direct measurements of the penetration depth Scheffler . Therefore there is a need for further measurements on high purity single crystals to explore whether the BCS limit can indeed be realized.
## IV Outlook and Conclusion
Our analysis is based on the Born approximation for the disorder potential. We thus have not considered the localization effect, which plays a major role for strongly disordered superconductors when $k_{F}\ell\sim 1$ (where $k_{F}$ is the Fermi wavenumber and $\ell$ is the mean free path of an electron). The superfluid density presented here is without consideration of higher order effects due to phase fluctuations, which again play a role for relatively large disorder, when $\alpha=\Delta_{0}\tau/\hbar\lesssim 10^{-5}$; hence the physics of the pseudogap phase Pratap_jpcm has also been ignored. Our study reveals that the absolute measurement of the superfluid density at all temperatures, rather than the relative measurement with respect to a given $T$, is necessary for determining its dependence on disorder. This is because $n_{s}(T)/n_{s}(0)$ is weakly disorder dependent while both $n_{s}(T)$ and $n_{s}(0)$ are disorder dependent. This analysis is based on the assumption that $\Delta$ is disorder independent, as a consequence of Anderson’s theorem Anderson . We find that the relaxation times estimated from the resistivity data and from the fitted parameter $\alpha$ are in the same ballpark for all the samples that have been analyzed, except the purer samples Pb and Sn, for which resistivity data are not available for comparison.
One surprising finding in this study is that most samples on which the temperature dependence of the superfluid density has been investigated seem to be in the dirty limit, where $n_{s}(0)\ll n$. In fact, the paradigmatic BCS clean limit seems to be very rare. To achieve the clean BCS limit the superconductor needs to have a large electronic relaxation time, $\tau>\hbar/\Delta_{0}\sim 10^{-11}$–$10^{-12}$ s, which translates into an electronic mean free path, $\ell$, greater than tens of micrometers. Such a large $\ell$ is indeed very rare and has been realized in very high purity single crystals of noble metals like Ag and semimetals like Bi, on which electron focusing experiments Benistant83 ; Heil were performed. This requirement is even more stringent than the mean free path required in typical single crystals on which de Haas-van Alphen measurements are performed at fields of several Tesla. It will be instructive to try to synthesize superconductors with a comparable mean free path to experimentally verify the temperature variation of $n_{s}/n$ from the clean-limit BCS theory.
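An order-of-magnitude check of these numbers (our addition, with an assumed representative Fermi velocity $v_{F}\sim 10^{6}$ m/s): for a gap $\Delta_{0}\sim 1$ meV, the clean-limit condition $\Delta_{0}\tau/\hbar\sim 10$ requires
\[
\tau\sim\frac{10\,\hbar}{\Delta_{0}}\sim\frac{10\times 6.6\times 10^{-16}\ \mathrm{eV\,s}}{10^{-3}\ \mathrm{eV}}\sim 7\times 10^{-12}\ \mathrm{s}\,,
\qquad
\ell=v_{F}\tau\sim(10^{6}\ \mathrm{m/s})\times(7\times 10^{-12}\ \mathrm{s})\sim 7\ \mu\mathrm{m}\,,
\]
and smaller gaps push $\ell$ to tens of micrometers, consistent with the statement above.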
Table 2: Parameters calculated using or extracted from the experimental data shown in Table 1 for all the samples. The relaxation time calculated using the transport data, $\tau_{T}=m^{\ast}/(ne^{2}\rho_{N})$, and the same calculated using the parameter $\alpha$ extracted by fitting $n_{s}/n$ with Eq. (10), $\tau_{P}$, are in the same ballpark.
Sample | $n_{s}(0)$ ($10^{25}$ m${}^{-3}$) | $\frac{n_{s}(0)}{n}$ $(10^{-3})$ | $\delta=\frac{\Delta(0)}{2k_{B}T_{c}}$ | $\alpha=\frac{\Delta(0)\tau}{\hbar}$ $(10^{-3})$ | $\tau_{T}=\frac{m^{\ast}}{ne^{2}\rho_{N}}$ ($10^{-17}$ s) | $\tau_{P}=\frac{\alpha\hbar}{\Delta(0)}$ ($10^{-17}$ s)
---|---|---|---|---|---|---
Sn | 1967.17 | 132.92 | 0.865 | 48.5 | *** | 5751.8
Pb | 2015.56 | 152.69 | 1.082 | 63 | *** | 3094
Nb (15.3 nm) | 279.7 | 50.3 | 0.96 | 17.7 | 855 | 763.9
NbN-1 | 21.94 | 1.30 | 1.0135 | 0.415 | 18.4 | 10.9
NbN-2 | 8.27 | 0.713 | 1.0135 | 0.228 | 13.8 | 8.64
NbN-3 | 4.89 | 0.416 | 1.0135 | 0.134 | 12.5 | 5.93
MoGe-1 (21 nm) | 10.11 | 0.219 | 1.06 | 0.0694 | 5.14 | 3.57
MoGe-2 (11 nm) | 9.17 | 0.199 | 1.116 | 0.0638 | 4.7 | 3.36
MoGe-3 (4.5 nm) | 7.50 | 0.163 | 1.3 | 0.0518 | 5.35 | 3.04
Nb-doped STO | 6.2 | 563.1 | 0.875 | 500 | $2.5\times 10^{5}$ Scheffler | $6.3\times 10^{5}$
## Appendix A Sum Rule for the Suppression of Superfluid Density
While the clean BCS limit can only be reached in specially prepared very clean single crystals, the frequently available polycrystalline and thin film superconductors are in the opposite, i.e., dirty limit, where $\Delta_{0}\tau/\hbar\ll 1$. In such a situation, $n_{s}(0)\ll n$. $n_{s}/n$ can be intuitively estimated based on the oscillator sum rule Kubo ; Glover ; Ferrell , which gets the result correct within a factor of order unity; here we outline this derivation and compare with the accurate expression for $n_{s}$ that has already been derived microscopically in this paper (9) and originally by Abrikosov and Gorkov A-G in linear response theory. The optical conductivity of a metal in Drude theory is given by $\sigma(\omega)=\sigma^{\prime}(\omega)+i\sigma^{\prime\prime}(\omega)$ where
$\sigma^{\prime}(\omega)=\frac{\sigma_{0}}{1+(\omega\tau)^{2}}\,\,\,;\,\,\,\sigma^{\prime\prime}(\omega)=\frac{\sigma_{0}\omega\tau}{1+(\omega\tau)^{2}}$ (12)
with dc conductivity $\sigma_{0}=ne^{2}\tau/m_{e}$. The well known oscillator sum rule for $\sigma^{\prime}(\omega)$ is given by
$\int_{0}^{\infty}\sigma^{\prime}(\omega)\,d\omega=\frac{\pi ne^{2}}{2m_{e}}\,.$ (13)
The sum rule in Eq. (13) remains unaltered at finite temperature, in a magnetic field, in the presence of interactions between electrons, and even when the metallic system makes a phase transition into the superconducting state. However, the spectral weight in $\sigma^{\prime}(\omega)$ is redistributed, depending on the state of the system. When a metal goes into the superconducting state, a spectral gap opens for frequencies $\omega<2\Delta_{0}/\hbar$. At very high frequencies $(\omega\gg 2\Delta_{0}/\hbar)$, the distribution of spectral weight in the real part of the conductivity in the superconducting state, $\sigma_{s}^{\prime}(\omega)$, remains unaltered from its metallic counterpart. $\sigma_{s}^{\prime}(\omega)$ approaches zero as $\omega\to 2\Delta_{0}/\hbar$ from its higher values.
However, this depletion of spectral weight gets accumulated at zero frequency in the form of a Dirac delta function:
$\sigma^{\prime}_{s}(\omega)=\frac{\pi n_{s}e^{2}}{m_{e}}\delta(\omega)$ (14)
where the prefactor $\pi n_{s}e^{2}/m_{e}$ is known as the Drude weight of the conductivity, which is proportional to the superfluid density. The precise variation of $\sigma_{s}(\omega)$ for an s-wave superconductor may be obtained from Mattis–Bardeen theory Bardeen . However, for the purpose of an approximate estimation of $n_{s}$, we consider a discontinuous jump in $\sigma^{\prime}_{s}(\omega)$ at $\omega=2\Delta_{0}/\hbar$ from its zero value to the normal-metallic value. Following the sum rule (13), we thus write
$\int_{0}^{2\Delta_{0}/\hbar}\sigma^{\prime}(\omega)d\omega\approx\int_{0}^{2\Delta_{0}/\hbar}\sigma^{\prime}_{s}(\omega)d\omega$ (15)
which (using $\int_{0}^{W}\frac{\sigma_{0}\,d\omega}{1+(\omega\tau)^{2}}=\frac{\sigma_{0}}{\tau}\tan^{-1}(W\tau)$ on the left and the half-weight of the $\delta$-function at the origin on the right) yields
$\frac{n_{s}}{n}=\frac{2}{\pi}\tan^{-1}\left(\frac{2\Delta_{0}\tau}{\hbar}\right)$ (16)
reproducing the clean limit ($\Delta_{0}\tau\to\infty$), i.e., $n_{s}=n$. In the dirty limit ($\Delta_{0}\tau\to 0$), we find $n_{s}/n=4\Delta_{0}\tau/(\pi\hbar)$, which differs from the microscopic result only by a numerical factor $\pi^{2}/4$. It is instructive to write Eq. (16) in terms of measurable quantities such as the penetration depth and the normal state resistivity $\rho_{N}=1/\sigma_{0}$. Substituting $n_{s}$ by $(m_{e}/\mu_{0}e^{2})\lambda^{-2}(0)$ in Eq. (16) and reinstating the above mentioned factor $\pi^{2}/4$, we find
$\lambda^{-2}(0)=\frac{\pi\mu_{0}\Delta_{0}}{\hbar\rho_{N}}$ (17)
in the dirty limit. The relation (17) is particularly powerful as it relates three independently measurable quantities, $\lambda(0)$, $\Delta_{0}$ and $\rho_{N}$, without any adjustable parameters.
## Acknowledgments
PR would like to thank Mohit Randeria for valuable discussions in 2008 on the connection between the oscillator sum rule and superfluid density. We thank Thomas Lemberger and Marc Scheffler for sharing data on Nb and Nb-doped SrTiO3 respectively. We also thank Marc Scheffler and Peter Armitage for valuable online discussions and feedback after an early draft of this paper was circulated. We acknowledge financial support by the Department of Atomic Energy, Govt of India (Project No: RTI4003).
## References
* (1) Coleman P 2015 Introduction to Many-Body Physics (London: Cambridge University Press)
* (2) Tinkham M 1996 Introduction to Superconductivity (Singapore: McGrawHill)
* (3) Bardeen J, Cooper L N and Schrieffer J R 1957 Theory of Superconductivity Phys. Rev. 108 1175
* (4) Abrikosov A A and Gor’kov L P 1959 On the Theory of Superconducting Alloys; I. The Electrodynamics of Alloys at Absolute Zero Soviet Phys. JETP 35 1090
* (5) Abrikosov A A and Gor’kov L P 1959 Superconducting Alloys at Finite Temperatures JETP 36 319
* (6) Abrikosov A A, Gor’kov L P and Dzyaloshinskii I E 1963 Methods of Quantum Field Theory in Statistical Physics (New York: Dover)
* (7) Skalski S, Matibet O B and Weiss P R 1964 Properties of Superconducting Alloys Containing Paramagmetic Impurity Phys. Rev. 136 A1500
* (8) Nam S B 1967 Theory of Electromagnetic Properties of Superconducting and Normal Systems I Phys. Rev. 156 470
* (9) Kogan V G 2013 Homes Scaling and BCS Phys. Rev. B 87 220507(R)
* (10) Mandal S S and Ramakrishnan T V 2020 Microscopic free energy functional of superconductive amplitude and phase: Superfluid density in disordered superconductors Phys. Rev. B 102 024514
* (11) Ma M and Lee P A 1985 Localized superconductors Phys. Rev.
B 32 5658
* (12) Anderson P W 1959 Theory of dirty superconductors J. Phys. Chem. Solids 11 26
* (13) Lemberger T R, Hetel I, Knepper J and Yang F Y 2007 Penetration depth study of very thin superconducting Nb films Phys. Rev. B 76 094515
* (14) Mondal M, Kamlapure A, Chand M, Saraswat G, Kumar S, Jesudasan J, Benfatto L, Tripathi V and Raychaudhuri P 2011 Phase Fluctuations in a Strongly Disordered s-Wave NbN Superconductor Close to the Metal-Insulator Transition Phys. Rev. Lett. 106 047001
* (15) Mandal S, Dutta S, Basistha S, Roy I, Jesudasan J, Bagwe V, Benfatto L, Thamizhavel A and Raychaudhuri P 2020 Destruction of superconductivity through phase fluctuations in ultrathin a-MoGe films Phys. Rev. B 102 060501(R)
* (16) Thiemann M, Beutel M H, Dressel M, Lee-Hone N R, Broun D M, Fillis-Tsirakis E, Boschker H, Mannhart J and Scheffler M 2018 Single-Gap Superconductivity and Dome of Superfluid Density in Nb-Doped SrTiO3 Phys. Rev. Lett. 120 237002
* (17) Shoenberg D 1939 Superconducting Colloidal Mercury Nature (London) 143 434
* (18) Lock J M 1951 Penetration of magnetic fields into superconductors III. Measurements on thin films of tin, lead and indium Proc. Royal Soc. A 208 391
* (19) Egloff C, Raychaudhuri A K and Rinderer L 1983 Penetration of a Magnetic Field into Superconducting Lead and Lead-Indium Alloys J. Low Temp. Phys. 52 163
* (20) Schawlow A L and Devlin G E 1959 Effect of the energy gap on the penetration depth of superconductors Phys. Rev. 113 120
* (21) Yong J, Lemberger T R, Benfatto L, Ilin K and Siegel M 2013 Robustness of the Berezinskii-Kosterlitz-Thouless transition in ultrathin NbN films near the superconductor-insulator transition Phys. Rev. B 87 184505
* (22) Kamlapure A, Mondal M, Chand M, Mishra A, Jesudasan J, Bagwe V, Benfatto L, Tripathi V and Raychaudhuri P 2010 Measurement of magnetic penetration depth and superconducting energy gap in very thin epitaxial NbN films Appl. Phys. Lett. 96 072509
* (23) Gupta C, Parab P and Bose S 2020 Superfluid density from magnetic penetration depth measurements in Nb–Cu 3D nano-composite films Sci. Reports 10 18331
* (24) Hafner D, Dressel M and Scheffler M 2014 Surface-resistance measurements using superconducting stripline resonators Rev. of Sci. Instruments 85 014702
* (25) Binnig G, Baratoff A, Hoenig H E and Bednorz J G 1980 Two-Band Superconductivity in Nb-Doped SrTiO3 Phys. Rev. Lett. 45 1352
* (26) Thiemann M, Beutel M H, Dressel M, Lee-Hone N R, Broun D M, Fillis-Tsirakis E, Boschker H, Mannhart J, and Scheffler M 2018 Single-Gap Superconductivity and Dome of Superfluid Density in Nb-Doped SrTiO3 Phys. Rev. Lett. 120 237002
* (27) Swartz A G, Inoue H, Merz T A, Hikita Y, Raghu S, Devereaux T P, Johnston S, and Hwang H Y 2018 Polaronic behavior in a weak-coupling superconductor Proceedings of the National Academy of Sciences 115 1475
* (28) Eagles D M 2018 Published Tunneling Results of Binnig et al Interpreted as Related to Surface Superconductivity in SrTiO3 Journal of Superconductivity and Novel Magnetism 31 1021
* (29) Giaever I and Megerle K 1961 Study of Superconductors by Electron Tunneling Phys. Rev. 122 1101
* (30) Wyckoff R W G 1963 Crystal Structures (2nd Edition) (New York: Interscience)
* (31) Kittel C 2005 Introduction to Solid State Physics (8th Edition) (New Delhi: Wiley India)
* (32) Townsend P and Sutton J 1962 Investigation by Electron Tunneling of the Superconducting Energy Gaps in Nb, Ta, Sn, and Pb Phys. Rev.
128 591
* (33) Karim D P, Ketterson J B and Crabtree G 1978 A de Haas-van Alphen study of niobium: Fermi surface, cyclotron effective masses and magnetic breakdown effects J. Low. Temp. Phys. 30 389
* (34) Chand M Transport, magneto-transport and electron tunneling studies on disordered superconductors (Ph.D. thesis) (Mumbai: Tata Institute of Fundamental Research) (www.tiftr.res.in/$\sim$superconductivity/pdfs.madhavi.pdf).
* (35) Mattheiss L F 1972 Electronic Band Structure of Niobium Nitride Phys. Rev. B 5 315
* (36) Chockalingam S P, Madhavi C, Jesudasan J, Tripathi N and Raychaudhuri P 2008 Superconducting properties and Hall effect of epitaxial NbN thin films Phys. Rev. B 77 214503
* (37) Thiemann M, Dressel M and Scheffler M 2018 Complete electrodynamics of a BCS superconductor with $\mu$eV energy scales: Microwave spectroscopy on titanium at mK temperatures Phys. Rev. B 97 214516
* (38) Lin X, Bridoux G, Gourgout A, Seyfarth G, Krämer S, Nardone M, Fauqué B, and Behnia K 2014 Critical Doping for the Onset of a Two-Band Superconducting Ground State in SrTiO3-δ Phys. Rev. Lett. 112 207002
* (39) Tashiro H, Graybeal J M, Tanner D B, Nicol E J, Carbotte J P and Carr G L 2008 Unusual thickness dependence of the superconducting transition of $\alpha$-MoGe thin films Phys. Rev. B 78 014509
* (40) Tai P C L, Beasley M R and Tinkham M 1975 Anisotropy of the penetration depth in superconducting tin Phys. Rev. B 11 411
* (41) Faber T E and Pippard A B 1955 The penetration depth and high-frequency resistance of superconducting aluminium Proc. Royal Soc. A 231 178
* (42) Varmazis C and Strongin M 1974 Inductive transition of niobium and tantalum in the 10-MHz range. I. Zero-field superconducting penetration depth Phys. Rev. B 10 1885
* (43) Collignon C, Fauqué B, Cavanna A, Gennser U, Mailly D and Behnia K 2017 Superfluid density and carrier concentration across a superconducting dome: The case of strontium titanate Phys. Rev. B 96 224506
* (44) Raychaudhuri P and Dutta S 2022 Phase fluctuations in conventional superconductors J. Phys.: Condens. Matter 34 083001
* (45) Benistant P A M, van Kempen H and Wyder P 1983 Direct Observation of Andreev Reflection Phys. Rev. Lett. 51 817
* (46) Heil J, Böhm A, Gröger A, Primke M, Wyder P, Keppler P, Major J, Bender H, Schönherr E, Wendel H, Wolf B, Würz K U, Grill W, Herrnberger H, Knauth S and Lenzner J 2000 Electron focusing in metals and semimetals Physics Reports 323 387
* (47) Kubo R 1957 Statistical Mechanical Theory of Irreversible Processes I. J. Phys. Soc. Jpn. 12 570
* (48) Ferrell R A and Glover R E III 1958 Conductivity of Superconducting Films: A Sum Rule Phys. Rev. 109 1398
* (49) Tinkham M and Ferrell R A 1959 Determination of the Superconducting Skin Depth from the Energy Gap and Sum Rule Phys. Rev. Lett. 2 331
* (50) Mattis D C and Bardeen J 1958 Theory of the Anomalous Skin Effect in Normal and Superconducting Metals Phys. Rev. 111 412
# Symmetry operators for the conformal wave equation in rotating black hole spacetimes
Finnian Gray<EMAIL_ADDRESS>Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada, N2L 3G1 Tsuyoshi Houri<EMAIL_ADDRESS>National Institute of Technology, Maizuru College, Kyoto 625-8511, Japan David Kubizňák <EMAIL_ADDRESS>Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada, N2L 3G1 Yukinori Yasui<EMAIL_ADDRESS>Institute for Fundamental Sciences, Setsunan University, Osaka 572-8508, Japan (April 29, 2021)
###### Abstract
We present covariant symmetry operators for the conformal wave equation in the (off-shell) Kerr–NUT–AdS spacetimes. These operators, which are constructed from the principal Killing–Yano tensor, its ‘symmetry descendants’, and the curvature tensor, guarantee separability of the conformal wave equation in these spacetimes. We next discuss how these operators give rise to a full set of conformally invariant mutually commuting operators for the conformally rescaled spacetimes and underlie the $R$-separability of the conformal wave equation therein. Finally, by employing the WKB approximation we derive the associated Hamilton–Jacobi equation with a scalar curvature potential term and show its separability in the Kerr–NUT–AdS spacetimes.
## I Introduction
Symmetries, both explicit and hidden, play an important role in general relativity – in their presence one may be able to explicitly integrate the Einstein equations and/or significantly simplify the study of matter fields in a given curved spacetime. Perhaps one of the most remarkable symmetries is the hidden symmetry of the principal Killing–Yano tensor Frolov:2017kze . Such a symmetry appears for the Kerr family of spacetimes in all dimensions, or more precisely for all the so-called (off-shell) Kerr–NUT–AdS metrics Houri:2007xz ; Krtous:2008tb ; Houri:2008th , and underlies many of their remarkable properties. In particular, it stands behind the separability of the massless and massive scalar, spinor, and vector field equations in the Kerr–NUT–AdS backgrounds Frolov:2006pe ; Oota:2007vx ; Lunin:2017drx ; Frolov:2018ezx (see also Lunin:2019pwz for the separability of $p$-form fields).
Most recently, it has been demonstrated Gray:2020rtr that also the conformally coupled scalar wave equation
$\bigl{(}\Box-\eta R\bigr{)}\Phi=0\,,\quad\eta=\frac{1}{4}\frac{D-2}{D-1}\,,$ (1)
separates in the general off-shell Kerr–NUT–AdS spacetimes. Here, $D$ stands for the number of spacetime dimensions, $R$ is the Ricci scalar of the background metric ${\boldsymbol{g}}$, and the prefactor $\eta$ is chosen so that the equation enjoys conformal symmetry (see, e.g., appendix D of wald1984general ). Namely, a solution to this equation remains a solution in a conformally scaled spacetime
$\widetilde{{\boldsymbol{g}}}=\Omega^{2}{\boldsymbol{g}}\,,$ (2)
provided it also scales as $\widetilde{\Phi}=\Omega^{w}\Phi$, with the conformal weight $w=1-D/2$. Explicitly, one then has the standard identity (see, e.g., appendix D of wald1984general )
$\bigl(\widetilde{\Box}-\eta\widetilde{R}\bigr)\widetilde{\Phi}=\Omega^{w-2}\bigl(\Box-\eta R\bigr)\Phi\,.$
The wave equation (1) is of fundamental importance and has a number of applications, see e.g. the recent study of the asymptotic structure of the Kerr spacetime via conformal compactification Hennig:2020rns .
The purpose of the present paper is to further our understanding of the conformal wave equation (1) in the Kerr–NUT–AdS spacetime, filling some important gaps in the previous analysis.
In particular, we want to ‘intrinsically characterize’ the obtained separability by finding an explicit covariant form of the corresponding symmetry operators, which were found in Gray:2020rtr in a given coordinate basis. As we shall see, such operators can be written in terms of the principal Killing–Yano tensor, its symmetry descendants, and the curvature tensor. Moreover, following Michel:2013dfa , such operators can be ‘lifted up’ to conformal operators and guarantee $R$-separability of the conformal wave equation in any conformally related spacetime (2). Finally, by applying the WKB approximation we derive the Hamilton–Jacobi equation associated with (1), featuring a scalar curvature potential term,
$g^{ab}\partial_{a}S\,\partial_{b}S+\eta R=0\,,$ (3)
and demonstrate its separability in the Kerr–NUT–AdS spacetimes.
The equation (3) has a long history, going back at least to a paper by DeWitt DeWitt:1952js which considers quantum Hamiltonians arising from classical systems. Therein, couplings to geometrical objects can naturally arise. In a similar vein, the extra term we find in the Hamiltonian can arise due to ambiguities in operator ordering when quantizing non-linear systems Omote:1976fx . It has also found use when considering the quantum mechanics of the motion of a free particle constrained to a Riemannian surface Destri:1992sg ; Lian:2017qoh . Here we understand it as a purely classical equation that describes a certain modification of the free-particle motion in a curved space.
Our plan for the remainder of the paper is as follows. In the next section we review the Kerr–NUT–AdS spacetimes, their hidden symmetry of the principal Killing–Yano tensor, and its ‘symmetry descendants’. In Sec. III we construct the covariant form of the symmetry operators for the conformal wave equation in these spacetimes. The associated operators for the conformally rescaled metrics are studied in Sec. IV. In Sec. V we derive the Hamilton–Jacobi equation (3) and demonstrate its separability in Kerr–NUT–AdS spacetimes. Sec. VI is devoted to the final discussion. Technical results are summarized in Appendices A and B.
## II Principal Killing–Yano tensor and Kerr–NUT–AdS spacetimes
The principal Killing–Yano tensor ${\boldsymbol{h}}$ is a non-degenerate closed conformal Killing–Yano 2-form obeying the following equation:
$\nabla_{a}h_{bc}=g_{ab}\,\xi_{c}-g_{ac}\,\xi_{b}\,,$ (4)
where
$\xi^{a}=\frac{1}{D-1}\nabla_{b}h^{ba}\,$ (5)
is the associated primary Killing vector field Krtous:2008tb . Starting with a single principal Killing–Yano tensor ${\boldsymbol{h}}$, one can generate whole towers of explicit and hidden symmetries – the ‘symmetry descendants’ of ${\boldsymbol{h}}$. In brief, we can construct the following tower of closed conformal Killing–Yano tensors:
${\boldsymbol{h}}^{(j)}=\frac{1}{j!}\underbrace{{\boldsymbol{h}}\wedge\dots\wedge{\boldsymbol{h}}}_{j\ \mbox{\tiny times}}\,.$ (6)
Their Hodge duals ${\boldsymbol{f}}^{(j)}=\star{\boldsymbol{h}}^{(j)}$ are Killing–Yano tensors, and their squares give rise to a tower of rank-2 Killing tensors:
$k^{ab}_{(j)}=\frac{1}{(D-2j-1)!}f^{(j)a}{}_{c_{1}\dots c_{D-2j-1}}f^{(j)bc_{1}\dots c_{D-2j-1}}\,$ (7)
for $j=0,\dots,n-1$. In turn, these tensors give rise to the tower of Killing vectors:
$\boldsymbol{l}_{(j)}=\boldsymbol{k}_{(j)}\cdot\boldsymbol{\xi}^{\flat}\,.$ (8)
Note that the $j=0$ Killing tensor is just the inverse metric and the zeroth Killing vector is the primary Killing vector, $\boldsymbol{l}_{(0)}={\boldsymbol{\xi}}$.
We also have in odd dimensions an extra redundant Killing tensor ${\boldsymbol{k}}_{(n)}={\boldsymbol{l}}_{(n)}\otimes{\boldsymbol{l}}_{(n)}$. All of the above-constructed symmetries mutually Schouten–Nijenhuis commute $\displaystyle\left[\boldsymbol{l}_{(i)},\boldsymbol{k}_{(j)}\right]_{\mbox{\tiny SN}}$ $\displaystyle=$ $\displaystyle 0\;,\;\left[\boldsymbol{l}_{(i)},\boldsymbol{l}_{(j)}\right]_{\mbox{\tiny SN}}=0\;,$ $\displaystyle\left[\boldsymbol{k}_{(i)},\boldsymbol{k}_{(j)}\right]_{\mbox{\tiny SN}}$ $\displaystyle=$ $\displaystyle k^{(i)}_{e(a}\nabla^{e}k^{(j)}_{bc)}-k^{(j)}_{e(a}\nabla^{e}k^{(i)}_{bc)}=0\;\,.$ (9) In addition, the Killing tensors obey the following algebraic identity (i.e. they commute as matrices): $k^{a}_{(i)\,b}k^{b}_{(j)\,c}-k^{a}_{(j)\,b}k^{b}_{(i)\,c}=0\,,$ (10) see Frolov:2017kze for all the details and proofs of the above statements. The most general spacetime admitting the principal Killing–Yano tensor is the (off-shell) Kerr–NUT–AdS spacetime Houri:2007xz ; Krtous:2008tb (see also Houri:2008th ). Denoting by ${D=2n+\varepsilon}$ the total number of spacetime dimensions (with $\varepsilon=0$ in even and $\varepsilon=1$ in odd dimensions), the metric takes the following explicit form: $\displaystyle{\boldsymbol{g}}$ $\displaystyle=$ $\displaystyle\sum_{\mu=1}^{n}\;\biggl{[}\;\frac{U_{\mu}}{X_{\mu}}\,{{{\boldsymbol{d}}}x_{\mu}^{2}}+\,\frac{X_{\mu}}{U_{\mu}}\,\Bigl{(}\,\sum_{j=0}^{n-1}A^{\\!(j)}_{\mu}{{\boldsymbol{d}}}\psi_{j}\Bigr{)}^{\\!2}\;\biggr{]}$ (11) $\displaystyle\qquad+\frac{\varepsilon c}{A^{\\!(n)}}\Bigl{(}\sum_{k=0}^{n}A^{\\!(k)}{{\boldsymbol{d}}}\psi_{k}\\!\Bigr{)}^{\\!2}\;,$ while the principal Killing–Yano tensor reads ${\boldsymbol{h}}=\sum_{\mu=1}^{n}\,x_{\mu}\,{{\boldsymbol{d}}}x_{\mu}\wedge\left(\sum_{k=0}^{{{n}}-1}A^{\\!(k)}_{\mu}{{\boldsymbol{d}}}\psi_{k}\right)\;.$ (12) The employed coordinates $\\{x_{\mu},\psi_{k}\\}$ have a natural geometrical meaning associated with the principal Killing–Yano tensor. They split into (time and azimuthal angle) _Killing coordinates_ ${\psi_{k}}$ (${k}={0,\,\dots,\,{{n}}{-}1{+}\varepsilon}$) that correspond to the Killing vectors (8), $\boldsymbol{l}_{(k)}={{\boldsymbol{\partial}}}_{\psi_{k}}\,,$ (13) and the non-trivial (radial and longitudinal angle) coordinates ${x_{\mu}}$ ($\mu=1,\,\dots,\,{{n}}$) that represent the ‘eigenvalues’ of ${\boldsymbol{h}}$, see Frolov:2017kze . In the above, the functions ${A^{\\!(k)}}$, ${A^{\\!(j)}_{\mu}}$, and ${U_{\mu}}$ are ‘symmetric polynomials’ of the coordinates ${x_{\mu}}$, and are defined by: $\displaystyle A^{\\!(k)}$ $\displaystyle=\\!\\!\\!\\!\\!\sum_{\begin{subarray}{c}\nu_{1},\dots,\nu_{k}=1\\\ \nu_{1}<\dots<\nu_{k}\end{subarray}}^{{n}}\\!\\!\\!\\!\\!x^{2}_{\nu_{1}}\dots x^{2}_{\nu_{k}}\;,\>\>\>A^{\\!(j)}_{\mu}=\\!\\!\\!\\!\\!\sum_{\begin{subarray}{c}\nu_{1},\dots,\nu_{j}=1\\\ \nu_{1}<\dots<\nu_{j}\\\ \nu_{i}\neq\mu\end{subarray}}^{{n}}\\!\\!\\!\\!\\!x^{2}_{\nu_{1}}\dots x^{2}_{\nu_{j}}\;,$ $\displaystyle U_{\mu}$ $\displaystyle=\prod_{\begin{subarray}{c}\nu=1\\\ \nu\neq\mu\end{subarray}}^{{n}}(x_{\nu}^{2}-x_{\mu}^{2})\;,\;\>\>U=\prod_{\begin{subarray}{c}\mu,\nu=1\\\ \mu<\nu\end{subarray}}^{n}(x^{2}_{\mu}-x^{2}_{\nu})=\det\bigl(A^{\\!(j)}_{\mu}\bigr)\;,$ (14) where we have fixed $A^{\\!(0)}=1=A^{\\!(0)}_{\mu}$. Each metric function ${X_{\mu}}$ is an unspecified function of a single coordinate ${x_{\mu}}$: $X_{\mu}=X_{\mu}(x_{\mu})\,.$ (15) Lastly, the constant $c$, which only appears in odd dimensions, is a free parameter. 
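The algebra of these ‘symmetric polynomials’ can be checked directly. The following minimal sympy sketch (our own illustration, not part of the original paper) verifies, for $n=3$, the determinant identity in (14) together with the partial-fraction identity $1/A^{\\!(n)}=\sum_{\mu}1/(x_{\mu}^{2}U_{\mu})$ that is used later in Sec. V:

```python
# Sanity checks of the symmetric-polynomial identities (14) and (74), n = 3.
import sympy as sp
from itertools import combinations

n = 3
x = sp.symbols('x1:%d' % (n + 1))
y = [xi**2 for xi in x]                            # the x_mu^2

def A(k, exclude=None):
    """Elementary symmetric polynomial of degree k in the x_nu^2 (nu != exclude)."""
    vals = [s for i, s in enumerate(y) if i != exclude]
    return sum(sp.prod(c) for c in combinations(vals, k)) if k else sp.Integer(1)

U = [sp.prod([y[nu] - y[mu] for nu in range(n) if nu != mu]) for mu in range(n)]
Utot = sp.prod([y[mu] - y[nu] for mu, nu in combinations(range(n), 2)])

# U = det(A^(j)_mu), eq. (14): rows mu, columns j = 0, ..., n-1
M = sp.Matrix(n, n, lambda mu, j: A(j, exclude=mu))
assert sp.expand(M.det() - Utot) == 0

# 1/A^(n) = sum_mu 1/(x_mu^2 U_mu), the identity (74) used in Sec. V
assert sp.cancel(sum(1/(y[mu]*U[mu]) for mu in range(n)) - 1/A(n)) == 0
print("identities verified for n =", n)
```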
Despite the fact that the metric is rather complex, its Ricci scalar takes a fairly simple form Hamamoto:2006zf $R=\sum_{\mu=1}^{n}\frac{r_{\mu}}{U_{\mu}}\,,$ (16) where each function $r_{\mu}$ depends only on a single variable $x_{\mu}$: $r_{\mu}=-X_{\mu}^{\prime\prime}-\frac{2\varepsilon X_{\mu}^{\prime}}{x_{\mu}}-\frac{2\varepsilon c}{x_{\mu}^{4}}\,.$ (17) The determinant of the metric reads $\sqrt{|g|}=\bigl{(}cA^{\\!(n)}\bigr{)}^{\frac{\varepsilon}{2}}\,U\,.$ (18) Importantly for our purposes, the Killing tensors ${\boldsymbol{k}}_{(j)}$ take the following coordinate form: $\displaystyle{\boldsymbol{k}}_{(j)}\\!$ $\displaystyle=$ $\displaystyle\\!\sum_{\mu=1}^{n}\\!A^{\\!(j)}_{\mu}\\!\\!\left[\\!\frac{X_{\mu}}{U_{\mu}}\,{{{\boldsymbol{\partial}}}_{x_{\mu}}^{2}}\\!+\\!\frac{U_{\mu}}{X_{\mu}}\\!\left(\\!\sum_{k=0}^{n-1+\varepsilon}\\!{\frac{(-x_{\mu}^{2})^{n-1-k}}{U_{\mu}}}{{\boldsymbol{\partial}}}_{\psi_{k}}\right)^{\\!\\!2}\\!\right]$ (19) $\displaystyle+\varepsilon\,\frac{A^{\\!(j)}}{cA^{\\!(n)}}{{\boldsymbol{\partial}}}_{\psi_{n}}^{2}\,,$ where $j=0$ corresponds to the inverse metric, ${\boldsymbol{g}}^{-1}={\boldsymbol{k}}_{(0)}$. ## III Separability of the conformal wave equation and its intrinsic characterization Recently it was shown in Gray:2020rtr that a solution to the conformal wave equation (1) in the background (11) can be found in the multiplicatively separated form, $\Phi=\prod_{\mu=1}^{n}Z_{\mu}(x_{\mu})\prod_{k=0}^{n-1+\varepsilon}e^{i\Psi_{k}\psi_{k}}\,,$ (20) where $\Psi_{k}$ are the Killing vector separation constants, and each of the $Z_{\mu}$, which is a function of the single corresponding variable $x_{\mu}$, obeys the following ordinary differential equation: $\displaystyle{Z_{\mu}^{\prime\prime}}$ $\displaystyle+{Z_{\mu}^{\prime}}\Bigl{(}\frac{X_{\mu}^{\prime}}{X_{\mu}}+\frac{\varepsilon}{x_{\mu}}\Bigr{)}-\frac{Z_{\mu}}{X_{\mu}^{2}}\Bigl{(}\sum_{k=0}^{n-1+\varepsilon}{(-x_{\mu}^{2})^{n-1-k}}\Psi_{k}\Bigr{)}^{2}$ (21) $\displaystyle-\frac{Z_{\mu}}{X_{\mu}}\Bigl{(}\eta r_{\mu}+\frac{\varepsilon}{cx_{\mu}^{2}}\Psi_{n}^{2}+\sum_{k=0}^{n-1}C_{k}(-x_{\mu}^{2})^{n-1-k}\Bigr{)}=0\,,\quad\ \ $ where $C_{k}$ ($k=0,\dots,n-1$) are the (non-trivial) separation constants and we have set $C_{0}=0$. As also shown in Gray:2020rtr , underlying this separability is a complete set of symmetry operators $\\{{\cal K}_{(j)},{\cal L}_{(j)}\\}$, $\displaystyle{\cal K}_{(j)}$ $\displaystyle=$ $\displaystyle\nabla_{a}k_{(j)}^{ab}\nabla_{b}-\eta R_{(j)}\,,$ (22) $\displaystyle{\cal L}_{(j)}$ $\displaystyle=$ $\displaystyle-i\,l_{(j)}^{a}\nabla_{\\!a}\,,$ (23) which all mutually commute with one another, $\bigl{[}{\cal K}_{(k)},{\cal L}_{(l)}\bigr{]}=0\;,\;\bigl{[}{\cal L}_{(k)},{\cal L}_{(l)}\bigr{]}=0\;,\;\bigl{[}{\cal K}_{(k)},{\cal K}_{(l)}\bigr{]}=0\;,$ (24) one of which is the conformal wave operator. Namely, ${\cal K}_{(0)}\Psi=0$ (25) is the conformal wave equation (1). The fact that these commuting operators exist means that there exists a common eigenfunction $\Phi$ of these operators obeying $\displaystyle{\cal K}_{(j)}\Phi$ $\displaystyle=$ $\displaystyle C_{j}\Phi\,,$ (26) $\displaystyle{\cal L}_{(j)}\Phi$ $\displaystyle=$ $\displaystyle\Psi_{j}\Phi\,.$ (27) It is precisely this eigenfunction which is the separated solution (20). The operators ${\cal L}_{(j)}$ are the standard scalar operators that are generated from the Killing vectors $\boldsymbol{l}_{(j)}$. 
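Since the coordinate form (19) specifies all of the Killing tensors explicitly, the algebraic identity (10), which enters the operator commutation argument below, can be spot-checked numerically. A small numpy sketch (ours, with arbitrary values standing in for $x_{\mu}$ and $X_{\mu}$; not the authors' code), for $\varepsilon=0$ and $n=3$:

```python
# Numerical check of identity (10): the Killing tensors commute as matrices.
# We build k_(j)^{ab} from (19) (eps = 0, n = 3), lower one index with
# g_ab = (k_(0))^{-1}, and verify that K1 g K2 is symmetric, which is
# equivalent to k^a_(1)b k^b_(2)c = k^a_(2)b k^b_(1)c.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 3
y = rng.uniform(1.0, 4.0, n)                      # x_mu^2, assumed distinct
X = rng.uniform(0.5, 1.5, n)                      # metric functions X_mu

def A(j, mu=None):                                # A^(j) and A^(j)_mu of (14)
    vals = [y[nu] for nu in range(n) if nu != mu]
    return sum(np.prod(c) for c in combinations(vals, j)) if j else 1.0

U = np.array([np.prod([y[nu] - y[mu] for nu in range(n) if nu != mu])
              for mu in range(n)])

def K(j):                                         # k_(j)^{ab} of (19), coords (x, psi)
    k = np.zeros((2 * n, 2 * n))
    for mu in range(n):
        k[mu, mu] = A(j, mu) * X[mu] / U[mu]      # x-block
        v = np.array([(-y[mu]) ** (n - 1 - kk) for kk in range(n)])
        k[n:, n:] += A(j, mu) / (X[mu] * U[mu]) * np.outer(v, v)  # psi-block
    return k

g = np.linalg.inv(K(0))                           # k_(0)^{ab} is the inverse metric
M = K(1) @ g @ K(2)                               # mixed-index product of (10)
print("max asymmetry:", np.abs(M - M.T).max())    # ~ 1e-15: (10) holds
```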
On the other hand, the Killing tensor operators ${\cal K}_{(j)}$ pick up, in addition to the standard Killing tensor part $\nabla_{a}k_{(j)}^{ab}\nabla_{b}$, also an ‘anomalous conformal term’ $R_{(j)}$ which ensures the commutation with the conformal wave operator ${\cal K}_{(0)}$. In Gray:2020rtr an explicit coordinate expression for this term was found; it reads $R_{(j)}=\sum_{\mu=1}^{n}\frac{A_{\mu}^{(j)}}{U_{\mu}}r_{\mu}\,,$ (28) where $r_{\mu}$ are the ‘Ricci scalar functions’ (17). However, no covariant expression for $R_{(j)}$ has been given in Gray:2020rtr . Here we amend this situation. That is to say, we show in Appendix A that $R_{(j)}$ are given in terms of the principal Killing–Yano tensor, its symmetry descendants, and the curvature tensor by the following covariant formula: $\displaystyle R_{(j)}$ $\displaystyle=k^{ab}_{(j)}R_{ab}+\frac{D-4}{2(D-2)}\Box\Tr({\boldsymbol{k}}_{(j)})$ $\displaystyle\qquad+\alpha_{j}k_{(j-1)}^{ac}h_{c}{}^{b}\,(d\xi)_{ab}-\beta_{j}l_{(j-1)}^{a}\,\xi_{a}$ $\displaystyle=k^{ab}_{(j)}R_{ab}+\frac{D-4}{2(D-2)}\Box\Tr({\boldsymbol{k}}_{(j)})$ $\displaystyle\qquad- k_{(j-1)}^{ab}\Bigl{(}\alpha_{j}h_{a}{}^{c}\,(d\xi)_{cb}+\beta_{j}\xi_{a}\,\xi_{b}\Bigr{)}\;,$ (29) where ${\boldsymbol{\xi}}={\boldsymbol{l}}_{(0)}$ is the primary Killing vector (5), for $j=0$ we defined ${\boldsymbol{k}}_{(-1)}\equiv 0\equiv{\boldsymbol{l}}_{(-1)}$, and the constants $\alpha_{j}$ and $\beta_{j}$ are given by $\displaystyle\alpha_{j}$ $\displaystyle=\frac{(n-j+\frac{\varepsilon}{2})}{(n-1+\frac{\varepsilon}{2})}\,,\quad\beta_{j}=2\frac{(n-j+\frac{\varepsilon}{2})}{(n-1+\frac{\varepsilon}{2})}(2j-3)\,.$ (30) Interestingly, these objects can be understood as follows. Let us define the following 1-forms ${\boldsymbol{\kappa}}^{(j)}$: $\kappa^{(j)}_{a}=k_{(j)\,a}^{\;\;\;\;\;\;\;b}\nabla_{b}R\;.$ (31) Then the quantities $R_{(j)}$ can be understood as ‘potentials’ for the above 1-forms: ${\boldsymbol{\kappa}}^{(j)}={\boldsymbol{d}}R_{(j)}\;,$ (32) see Appendix A for the proof. In fact, it is this property which underlies the commutation of the operators (22). Given that $[\nabla_{a}k_{(i)}^{ab}\nabla_{b},\nabla_{c}k_{(j)}^{cd}\nabla_{d}]f=0$ Sergyeyev:2007gf ; Gray:2020rtr for any scalar function $f$, we have $\displaystyle\left[\mathcal{K}_{(i)},\mathcal{K}_{(j)}\right]f\\!\\!\\!\\!\\!\\!\\!\\!$ $\displaystyle=-\eta$ $\displaystyle\left(\left[\nabla_{a}k_{(i)}^{ab}\nabla_{b},R_{(j)}\right]f+\left[R_{(i)},\nabla_{a}k_{(j)}^{ab}\nabla_{b}\right]f\right)$ $\displaystyle=-\eta$ $\displaystyle\left\\{\nabla_{a}(k_{(i)}^{ab}\nabla_{b}(R_{(j)}f))-R_{(j)}\nabla_{a}(k_{(i)}^{ab}\nabla_{b}(f))\right.$ $\displaystyle\left.+R_{(i)}\nabla_{a}(k_{(j)}^{ab}\nabla_{b}(f))-\nabla_{a}(k_{(j)}^{ab}\nabla_{b}(R_{(i)}f))\right\\}$ $\displaystyle=-\eta$ $\displaystyle\left\\{\nabla_{a}\Big{(}f\,[k^{ab}_{(i)}\nabla_{b}R_{(j)}-k^{ab}_{(j)}\nabla_{b}R_{(i)}]\Big{)}\right.$ $\displaystyle+$ $\displaystyle\left.(\nabla_{a}f)\Big{(}k^{ab}_{(i)}\nabla_{b}R_{(j)}-k^{ab}_{(j)}\nabla_{b}R_{(i)}\Big{)}\right\\}$ $\displaystyle=-\eta$ $\displaystyle\left\\{\nabla_{a}\Big{(}f\,[k^{a}_{(i)\,b}k^{b}_{(j)\,c}-k^{a}_{(j)\,b}k^{b}_{(i)\,c}]\nabla^{c}R\Big{)}\right.$ $\displaystyle+$ $\displaystyle\left.(\nabla_{a}f)(k^{a}_{(i)\,b}k^{b}_{(j)\,c}-k^{a}_{(j)\,b}k^{b}_{(i)\,c})\nabla^{c}R\right\\}$ $\displaystyle=0\ \ $ $\displaystyle\,,$ (33) where in the final step we have used the algebraic identity (10). We note that this is a special case of the result presented in benenti2002 . 
Therein, it is shown that the commutation of any two operators $\Box+g$ and $\nabla_{a}K^{ab}\nabla_{b}+f$, where $f,g\in{\cal C}^{\infty}({\cal M})$ and $K^{ab}$ is a Killing tensor, is guaranteed provided $\nabla_{a}f=\tensor{K}{{}_{a}^{b}}\nabla_{b}g-\frac{1}{3}\nabla_{b}(\tensor{K}{{}_{a}^{c}}\tensor{R}{{}_{c}^{b}}-\tensor{R}{{}_{a}^{c}}\tensor{K}{{}_{c}^{b}})\,.$ (34) In the case of the off-shell Kerr–NUT–AdS metrics the final term on the right-hand side vanishes, as the Killing and Ricci tensors are diagonal in the same basis Frolov:2017kze ; Hamamoto:2006zf (see (82) and (84) in Appendix A). Thus, this equation reduces to the relationship between (31) and (32). ## IV Symmetry operators in conformally related spacetimes As mentioned in the introduction, the conformal wave equation (1) enjoys conformal symmetry. That is, provided we have a solution $\Phi$ in the spacetime ${\boldsymbol{g}}$, then $\widetilde{\Phi}=\Omega^{w}\Phi\,,\quad w=1-D/2\,$ (35) is a solution of the same equation in the conformally rescaled spacetime $\widetilde{{\boldsymbol{g}}}=\Omega^{2}{\boldsymbol{g}}\,.$ (36) In particular, this means that (35) with $\Phi$ given by (20) yields an $R$-separated solution of the conformal wave equation in any spacetime related to the off-shell Kerr–NUT–AdS metric by the conformal transformation (36). It is interesting to ask whether such $R$-separability can also be intrinsically characterized by some complete set of mutually commuting operators. In what follows we show that this is indeed the case – we explicitly construct such operators and discuss their properties. First, starting from the special conformal frame with $\Omega=1$, we scale the operators $\\{{\cal K}_{(j)},{\cal L}_{(j)}\\}$ to construct a complete set of mutually commuting operators for the metric $\widetilde{{\boldsymbol{g}}}$, (36). Second, following Michel:2013dfa , we show that such operators can in fact be lifted to conformally invariant operators, thus providing a complete set of conformally invariant mutually commuting operators for the conformal wave equation (1) in any spacetime related to the Kerr–NUT–AdS metric by a conformal transformation. 
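Before proceeding, we note that the potential property (31)–(32) that underlies the commutation argument above can be verified directly in the coordinates of Sec. II. The sympy sketch below (our own check, for $n=2$, $j=1$ and arbitrary ‘Ricci functions’ $r_{\mu}(x_{\mu})$) confirms $\partial_{\nu}R_{(1)}=A^{\\!(1)}_{\nu}\partial_{\nu}R$, the component form of ${\boldsymbol{\kappa}}^{(1)}={\boldsymbol{d}}R_{(1)}$:

```python
# Verify kappa^(1) = d R_(1), i.e. d/dx_nu R_(1) = A^(1)_nu d/dx_nu R, for n = 2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
r1, r2 = sp.Function('r1')(x1), sp.Function('r2')(x2)   # arbitrary r_mu(x_mu)
y1, y2 = x1**2, x2**2
U1, U2 = y2 - y1, y1 - y2                               # U_mu of eq. (14)

R  = r1/U1 + r2/U2                                      # Ricci scalar, eq. (16)
R1 = y2*r1/U1 + y1*r2/U2                                # R_(1) of eq. (28)

for xv, Aj in [(x1, y2), (x2, y1)]:                     # A^(1)_nu = x_mu^2, mu != nu
    assert sp.simplify(sp.diff(R1, xv) - Aj*sp.diff(R, xv)) == 0
print("kappa^(1) = d R_(1) verified for n = 2")
```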
### Mutually commuting operators Starting from the mutually commuting operators $\\{{\cal K}_{(j)},{\cal L}_{(j)}\\}$ in the special frame with $\Omega=1$, let us define new operators $\\{\widetilde{\mathcal{O}}_{(j)},\widetilde{\mathcal{P}}_{(j)}\\}$ for general $\Omega$ by: $\displaystyle\widetilde{\mathcal{O}}_{(j)}$ $\displaystyle\equiv$ $\displaystyle\Omega^{w}{\cal K}_{(j)}\Omega^{-w}\,,$ $\displaystyle\widetilde{\mathcal{P}}_{(j)}$ $\displaystyle\equiv$ $\displaystyle\Omega^{w}{\cal L}_{(j)}\Omega^{-w}\,.$ (37) By construction such operators mutually commute, as we have $\displaystyle\left[\widetilde{\mathcal{O}}_{(i)},\widetilde{\mathcal{O}}_{(j)}\right]$ $\displaystyle=\Omega^{w}\left[\mathcal{K}_{(i)},\mathcal{K}_{(j)}\right]\Omega^{-w}=0\;,$ (38) $\displaystyle\left[\widetilde{\mathcal{O}}_{(i)},\widetilde{\mathcal{{P}}}_{(j)}\right]$ $\displaystyle=\Omega^{w}\left[\mathcal{K}_{(i)},\mathcal{L}_{(j)}\right]\Omega^{-w}=0\,,$ (39) $\displaystyle\left[\widetilde{\mathcal{P}}_{(i)},\widetilde{\mathcal{P}}_{(j)}\right]$ $\displaystyle=\Omega^{w}\left[\mathcal{L}_{(i)},\mathcal{L}_{(j)}\right]\Omega^{-w}=0\,.$ (40) Moreover, it follows that when $\Phi$ satisfies the eigenvalue problem (26) in the spacetime ${\boldsymbol{g}}$, $\widetilde{\Phi}=\Omega^{w}\Phi$ given by (35) obeys the ‘associated’ eigenvalue problem: $\displaystyle\widetilde{\mathcal{O}}_{(j)}\widetilde{\Phi}$ $\displaystyle=$ $\displaystyle C_{j}\widetilde{\Phi}\,,$ $\displaystyle\widetilde{\mathcal{P}}_{(j)}\widetilde{\Phi}$ $\displaystyle=$ $\displaystyle\Psi_{j}\widetilde{\Phi}\,,$ (41) in the conformal spacetime $\widetilde{{\boldsymbol{g}}}$. In other words, the operators $\\{\widetilde{\mathcal{O}}_{(j)},\widetilde{\mathcal{P}}_{(j)}\\}$, (IV), intrinsically characterize the separability of the conformal wave equation in the conformal spacetime (36). The only ‘problem’ with (IV) is that the new operators $\\{\widetilde{\mathcal{O}}_{(j)},\widetilde{\mathcal{P}}_{(j)}\\}$ remain expressed in terms of the ‘old’ connection $\nabla_{a}$, the old Ricci tensor $R_{ab}$, and other objects associated with the metric ${\boldsymbol{g}}$ rather than the conformally rescaled metric $\widetilde{{\boldsymbol{g}}}$. However, using the well known transformation properties of the connection and curvature tensor, one can straightforwardly amend this situation. For example, let us define the following tilded objects:111We stress that these objects are not the conformal symmetries of the spacetime $\widetilde{{\boldsymbol{g}}}$, although it is possible to define such symmetries. Namely, the following objects: $k^{ab}_{(j>0)}\,,\quad\Omega^{3}h_{ab}\,,\quad l^{a}_{(j\geq 0)}\,,$ are the conformal Killing tensors, conformal Killing–Yano 2-form, and conformal Killing vectors of the spacetime $\widetilde{{\boldsymbol{g}}}$. Notice that in doing this, necessarily ${\boldsymbol{k}}_{(0)}={\boldsymbol{g}}$ transforms differently to the rest of the Killing tensors. One could, of course, use these objects to define the transformed operators, leading to different (seemingly more complex) expressions. We will adopt this strategy for the Killing tensors at least in the next section (IV). $\widetilde{k}^{ab}_{(j)}=\Omega^{-2}k^{ab}_{(j)}\,,\quad\widetilde{h}_{ab}=\Omega^{2}h_{ab}\,,\quad\widetilde{l}^{a}_{(j)}=\Omega^{-2}l^{a}_{(j)}\,,$ (42) and raise or lower their indices with the metric $\widetilde{{\boldsymbol{g}}}$ and its inverse. 
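The commutation relations (38)–(40) rely only on the elementary fact that conjugation preserves commutators. A toy numpy illustration (ours, purely schematic, with random matrices standing in for the operators and an invertible matrix playing the role of $\Omega^{w}$):

```python
# [S A S^{-1}, S B S^{-1}] = S [A, B] S^{-1}: the algebra behind (38)-(40).
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
S = rng.normal(size=(4, 4)) + 4 * np.eye(4)     # invertible 'conformal factor'
Si = np.linalg.inv(S)

lhs = (S @ A @ Si) @ (S @ B @ Si) - (S @ B @ Si) @ (S @ A @ Si)
rhs = S @ (A @ B - B @ A) @ Si
print(np.abs(lhs - rhs).max())                  # ~ 1e-14
```

In particular, if the untilded operators commute, so do their conjugates, which is all that (38)–(40) assert.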
We further denote by $\widetilde{\nabla}_{a}$ the covariant derivative in the spacetime $\widetilde{{\boldsymbol{g}}}$ and by $\widetilde{R}_{ab}$ its Ricci tensor. With these at hand, the operators (IV) can be expressed as follows (see Appendix B for details): $\displaystyle\widetilde{\mathcal{O}}_{(j)}:=$ $\displaystyle\Omega^{2}\left(\widetilde{\mathcal{K}}_{(j)}+\eta\left[\Bigl{(}\widetilde{\nabla}_{a}\widetilde{\nabla}_{b}\bigl{(}\widetilde{k}_{(j)}^{ab}+\frac{1}{2}\widetilde{k}^{c}_{(j)\,c}\widetilde{g}^{ab}\bigr{)}\Bigr{)}\right]\right)\,,$ (43) $\displaystyle\widetilde{\mathcal{P}}_{(j)}:=$ $\displaystyle\Omega^{2}\left(\widetilde{\mathcal{L}}_{(j)}-\frac{w}{D-2}\widetilde{\nabla}_{a}\widetilde{l}^{a}_{(j)}\right)\;,$ (44) where $\widetilde{\mathcal{K}}_{(j)}$ and $\widetilde{\mathcal{L}}_{(j)}$ are given by expressions (22), (23), and (III), with all the objects replaced by the tilded ones. Note that the quantities $\widetilde{\nabla}_{b}\left[\widetilde{k}_{(j)}^{ab}+\frac{1}{2}\widetilde{k}^{c}_{(j)\,c}\widetilde{g}^{ab}\right]$ and $\widetilde{\nabla}_{a}\widetilde{l}^{a}_{(j)}$ vanish identically when $\Omega=1$ due to the Killing tensor and Killing vector equations $\nabla\tensor{[}_{(a}]{k}{{}^{(j)}_{b}{}_{c)}}=0\,,\quad\nabla_{\\!(a}l^{(j)}_{b)}=0\,,$ (45) respectively. Moreover, $\widetilde{\mathcal{O}}_{(0)}$ is just a conformally rescaled $\widetilde{\mathcal{K}}_{(0)}$, $\widetilde{\mathcal{K}}_{(0)}=\Omega^{-2}\widetilde{\mathcal{O}}_{(0)}=\Omega^{w-2}\mathcal{K}_{(0)}\Omega^{-w}\,,$ (46) highlighting the conformal invariance of this operator. The other operators, however, take a more complicated form, as is to be expected from the privileged role of the conformal frame with $\Omega=1.$222This is the only frame where the spacetime admits full (not only conformal) Killing tensors and the Ricci tensor is diagonal in the natural orthonormal frame Kubiznak:2007kh . We shall return to this issue in the next subsection where we discuss the conformal form of these operators. ### Conformal symmetry operators Conformal symmetry operators for the conformal wave equation have been studied for many years; see, e.g., carter1977killing ; boyer1976symmetry ; kalnins1982intrinsic ; kamran1985separation2 ; duval1999conformally ; benenti2002 ; eastwood2005higher ; eastwood2008higher ; gover2012higher ; andersson2014second . This work culminated in ref. Michel:2013dfa where a complete and constructive theory was finally formulated. Our goal for the remainder of this section is to review this theory in a language more oriented toward the physics community, and briefly discuss how it applies to the problem at hand. To start with, we define a conformally invariant operator as an operator that preserves its form under a conformal transformation. More specifically, a conformally invariant operator of weights $s_{1}$ and $s_{2}$ obeys the following equality: $\widetilde{Q}_{s_{1},s_{2}}=\Omega^{s_{2}}Q_{s_{1},s_{2}}\Omega^{-s_{1}}\,,$ (47) under the conformal transformation (36). That is, $\widetilde{Q}_{s_{1},s_{2}}$ has exactly the ‘same form’ as ${Q}_{s_{1},s_{2}}$ but is constructed out of conformally scaled (tilded) tensors associated with the metric $\widetilde{{\boldsymbol{g}}}$ rather than ${\boldsymbol{g}}$. To give an example, the conformal wave operator $\mathcal{K}_{(0)}$ obeys the equation (46) and thence is a conformal operator with weights $s_{1}=w$ and $s_{2}=w-2$. In what follows, we are going to concentrate on conformal operators of equal weights, $s_{1}=s_{2}=s$. 
In particular, as shown in Michel:2013dfa , the most general second-order conformal operator with weight $s$ that is built out of a symmetric tensor $K^{ab}$ is given by $\displaystyle Q_{s}(K)=$ $\displaystyle\nabla_{a}K^{ab}\nabla_{b}+\Bigl{(}\gamma_{1}[\nabla_{a}K^{ab}]+\gamma_{2}[\nabla^{b}\Tr K]\Bigr{)}\nabla_{b}$ $\displaystyle+\gamma_{3}(\nabla_{a}\nabla_{b}K^{ab})+\gamma_{4}(\Box\Tr K)+\gamma_{5}\,R_{ab}K^{ab}$ $\displaystyle+\gamma_{6}\,R\,\Tr K+f\;.$ (48) Here $f$ is a function which does not scale under the conformal transformation, we assume $\widetilde{K}^{ab}=K^{ab}$, and the coefficients are $\displaystyle\gamma_{1}$ $\displaystyle=2\gamma_{2}=-\frac{(2s+D)}{D+2}\,,\,\gamma_{3}=\frac{(s-1)s}{(D+1)(D+2)}\,,$ $\displaystyle\gamma_{4}$ $\displaystyle=\frac{s(D+2s-1)}{2(D+1)(D+2)}\,,\,\gamma_{5}=\frac{s(D+s)}{(D-2)(D+1)}\,,$ $\displaystyle\gamma_{6}$ $\displaystyle=-\frac{2s(D+s)}{(D-2)(D-1)(D+1)(D+2)}\,.$ (49) Similarly, for a vector $L^{a}$, the corresponding conformal operator is given by $Q_{s}(L)=L^{a}\nabla_{\\!a}-\frac{s}{D}(\nabla_{\\!a}\,L^{a})\;.$ (50) In particular, we consider conformal operators of weight $w=1-D/2$, cf. (35), $\widetilde{Q}_{w}=\Omega^{w}Q_{w}\Omega^{-w}\,,$ (51) that are symmetry operators of the conformal wave operator $\mathcal{K}_{(0)}$, that is, they satisfy the following relation: $\mathcal{K}_{(0)}\circ Q_{w}={\cal D}\circ\mathcal{K}_{(0)}\,,$ (52) for some operator ${\cal D}$; in fact, it is easy to see that the conformal invariance implies ${\cal D}\equiv{\cal D}_{-2+w}$. Note that the equation (52) obviously preserves the kernel of $\mathcal{K}_{(0)}$. To find such symmetry operators we can use the following theorem Michel:2013dfa : Theorem 1. _Let $K^{ab}$ be a (special) Killing tensor of the metric ${\boldsymbol{g}}$, such that the following conformally invariant ‘geometric obstruction’ built from the Weyl tensor $C_{abcd}$:_ $\text{Obs}(K)_{a}=\frac{2(D-2)}{3(D+1)}\Bigl{(}\nabla_{b}K^{cd}\tensor{C}{{}^{b}_{c}{}_{d}{}_{a}}-\frac{3}{D-3}K^{cd}\nabla_{b}\tensor{C}{{}^{b}_{c}{}_{d}{}_{a}}\Bigr{)}$ (53) _is exact, that is,_ ${\bf Obs}(K)=-2{\boldsymbol{d}}f\,.$ (54) _Then ( IV) with $f$ given by (54) (up to a constant) is a symmetry operator for the conformal wave operator and in fact satisfies_ $\mathcal{K}_{(0)}\circ Q_{w}(K)=Q_{-2+w}(K)\circ\mathcal{K}_{(0)}\,.$ (55) When $K^{ab}$ is a Killing tensor we can simplify the operator (IV) via the Killing equation, $\nabla\tensor{[}_{(a}]{K}{{}_{b}{}_{c)}}=0\,,$ (56) however, this will only hold for a particular metric of the conformal class. For this particular metric, we then have $\displaystyle Q_{w}(K)$ $\displaystyle=$ $\displaystyle Q_{w-2}(K)$ (57) $\displaystyle=$ $\displaystyle\nabla_{a}K^{ab}\nabla_{b}-\frac{(D-2)}{8(D+1)}\left[\Box\Tr K\right]$ $\displaystyle-$ $\displaystyle\frac{(D+2)}{4(D+1)}R_{ab}K^{ab}+\frac{R\,\Tr K}{2(D+1)(D-1)}+f\;.\quad\quad$ In this case, therefore, the corresponding symmetry operator (52) actually commutes with the conformal wave operator, $\left[Q_{w},\mathcal{K}_{(0)}\right]=0\,,$ (58) and more generally, we have the conformal commutation $\left[\widetilde{Q}_{w},\,\Omega^{2}\,\widetilde{\mathcal{K}}_{(0)}\right]=0\,,$ (59) valid in any conformal frame. 
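The reduction of (48) with coefficients (49) to the simplified form (57) at $s=w=1-D/2$ is a small algebraic exercise: for a genuine Killing tensor one has $\nabla_{a}K^{ab}=-\frac{1}{2}\nabla^{b}\Tr K$ and $\nabla_{a}\nabla_{b}K^{ab}=-\frac{1}{2}\Box\Tr K$ (cf. (106) below). The following sympy sketch (ours, not from the paper) verifies the resulting coefficients:

```python
# Check that (48)+(49) collapse to (57) for s = w = 1 - D/2 and a Killing tensor.
import sympy as sp

D = sp.symbols('D', positive=True)
s = 1 - D/2                                       # conformal weight w

g1 = -(2*s + D)/(D + 2); g2 = g1/2
g3 = (s - 1)*s/((D + 1)*(D + 2))
g4 = s*(D + 2*s - 1)/(2*(D + 1)*(D + 2))
g5 = s*(D + s)/((D - 2)*(D + 1))
g6 = -2*s*(D + s)/((D - 2)*(D - 1)*(D + 1)*(D + 2))

# first-order terms cancel: nabla_a K^{ab} -> -(1/2) nabla^b Tr K
assert sp.simplify(-g1/2 + g2) == 0
# Box Tr K coefficient: g4 - g3/2 -> -(D-2)/(8(D+1))
assert sp.simplify(g4 - g3/2 + (D - 2)/(8*(D + 1))) == 0
# Ricci contraction and trace term of (57)
assert sp.simplify(g5 + (D + 2)/(4*(D + 1))) == 0
assert sp.simplify(g6 - 1/(2*(D + 1)*(D - 1))) == 0
print("coefficients of (57) recovered")
```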
In particular, taking the Killing tensors ${\boldsymbol{k}}_{(j)}$ ($j>0$) in the Kerr–NUT–AdS metric ${\boldsymbol{g}}$, we find that they satisfy the obstruction condition (54) with $f_{(j)}$ given by $\displaystyle f_{(j)}$ $\displaystyle=$ $\displaystyle\frac{1}{4(1-D^{2})}\Bigl{[}2D\,k^{ab}_{(j)}R_{ab}+3\Box\Tr k_{(j)}$ (60) $\displaystyle+(D+1)(D-2)k_{(j-1)}^{ab}\Bigl{(}\alpha_{j}h_{a}{}^{c}\,(d\xi)_{cb}+\beta_{j}\xi_{a}\,\xi_{b}\Bigr{)}$ $\displaystyle-2R\,\Tr k_{(j)}\Bigr{]}\,.$ It can then easily be checked that333Of course, the expression (53) is only defined in this coordinate invariant way in the $\Omega=1$ frame although its coordinate expression will be unchanged no matter the frame. the corresponding operators ${\cal K}_{w}^{(j)}\equiv Q_{w}(k_{(j)})\,,$ (61) (IV), coincide with the operators $\mathcal{K}_{(j)}$, (22), ${\cal K}_{w}^{(j)}=\mathcal{K}_{(j)}\,.$ (62) Since all of these operators commute with one another for $\Omega=1$, their conformal versions $\widetilde{{\cal K}}_{w}^{(j)}$, (51) also mutually commute in the spacetime $\widetilde{{\boldsymbol{g}}}$. Of course, these are nothing else than the operators $\widetilde{\mathcal{O}}_{(j)}$, (IV), this time, however, written in a conformally invariant way (IV).444Although the formulae (43) and (IV) look rather different, they represent the same operators, and in particular, the coordinate expressions for the operators $\widetilde{\mathcal{O}}_{(j)}$ and $\widetilde{{\cal K}}_{w}^{(j)}$ will coincide in any conformal frame. The apparent differences arise from how we choose to scale the Killing tensors. The remaining commutation relations are then guaranteed by (59), since we define for $j=0$ $\widetilde{{\cal K}}_{w}^{(0)}\equiv\widetilde{\mathcal{O}}_{(0)}=\Omega^{2}\widetilde{\mathcal{K}}_{(0)}\,,$ (63) reflecting the fact that the metric transforms differently from the other Killing tensors under the conformal transformation. Similarly, one can ‘lift’ the operators $\mathcal{L}_{(j)}$, (23), to the conformal ones (as in (50) and cf. (44) where the Killing vectors transform differently) $\mathcal{L}^{(j)}_{w}=-i\,l_{(j)}^{a}\nabla_{\\!a}+i\frac{w}{D}(\nabla_{\\!a}\,l_{(j)}^{a})\;,$ (64) where the second term identically vanishes in the frame $\Omega=1$ where $l_{(j)}^{a}$ are (full, _not_ conformal) Killing vectors. Of course, these will coincide with $\widetilde{\mathcal{P}}_{(j)}$, (44), in any coordinate system. To summarize, we have found a conformally invariant ‘generalization’ $\\{\mathcal{K}^{(j)}_{w},\mathcal{L}^{(j)}_{w}\\}$ of the symmetry operators (22) and (23), with the two being equal in the Kerr–NUT–AdS conformal frame ${\boldsymbol{g}}$. Writing $\widetilde{\Phi}=\Omega^{w}\Phi$ in any conformal frame $\widetilde{{\boldsymbol{g}}}$, these operators obey the following eigenvalue problem: $\displaystyle\widetilde{\mathcal{K}}^{(j)}_{w}\widetilde{\Phi}$ $\displaystyle=$ $\displaystyle C_{j}\widetilde{\Phi}\,,$ (65) $\displaystyle\widetilde{\mathcal{L}}^{(j)}_{w}\widetilde{\Phi}$ $\displaystyle=$ $\displaystyle\Psi_{j}\widetilde{\Phi}\,,$ (66) guaranteeing $R$-separability of $\widetilde{\Phi}$ in any of these frames. ## V Associated Hamilton–Jacobi equation and its separability We finally turn to study the natural extension of the Hamilton–Jacobi equation that arises from the geometric optics (WKB) approximation of the conformal wave equation. 
Consider the following ‘$\alpha$-modified conformal wave equation’: $\bigl{(}\alpha^{2}\Box-\eta R\bigr{)}\Phi=0\,.$ (67) Then, upon employing the geometric optics ansatz $\Phi=\Phi_{0}\exp\Bigl(\frac{i}{\alpha}S\Bigr)\,,$ (68) while taking the WKB limit $\alpha\to 0$, we arrive at the corresponding Hamilton–Jacobi equation: $g^{ab}\partial_{a}S\partial_{b}S+\eta R=0\,.$ (69) This equation is obviously not conformally invariant; however, it is consistent with the particle Hamiltonian, $H=g^{ab}p_{a}p_{b}+\eta R\,.$ (70) See e.g. DeWitt:1952js ; Omote:1976fx for how such a coupling to the Ricci scalar can arise from quantum corrections. The equations of motion for this Hamiltonian yield the following modified geodesic equation $\frac{Dp_{a}}{d\lambda}=-\eta\partial_{a}R\,.$ (71) Let us stress that the procedure of deriving (69) is similar to how one arrives at the massive Hamilton–Jacobi equation starting from the massive ($\alpha$-modified) Klein–Gordon one, e.g. Sergyeyev:2007gf . There is, however, a fundamental difference. Namely, the $\alpha$-modified equation (67) is not conformally invariant, unless $\alpha=1$. This is the reason why the WKB limit $\alpha\to 0$ does not produce a conformally invariant Hamilton–Jacobi equation. If, instead, one started with the conformal wave equation, setting $\alpha=1$ in (67), the WKB approximation would then yield the massless Hamilton–Jacobi equation, which of course is conformally invariant. In what follows we consider the Hamilton–Jacobi equation (69), which is of potential physical interest, and show its separability in the off-shell Kerr–NUT–AdS spacetimes. Using the form of the inverse metric given by (19) for $j=0$, the Hamilton–Jacobi equation (69) takes the following explicit form: $\displaystyle\sum_{\mu=1}^{n}\\!\left[\\!\frac{X_{\mu}}{U_{\mu}}{S_{\mu}^{\prime 2}}+\frac{1}{U_{\mu}X_{\mu}}\\!\left(\sum_{k=0}^{n-1+\varepsilon}(-x_{\mu}^{2})^{n-1-k}\\!\Psi_{k}\right)^{\\!\\!2}\right]$ $\displaystyle+\varepsilon\,\frac{1}{cA^{\\!(n)}}\,\Psi_{n}^{2}+\eta\sum_{\mu=1}^{n}\frac{r_{\mu}}{U_{\mu}}=0\;,$ (72) where we have used the additive separation ansatz: $S=\sum_{\mu=1}^{n}S_{\mu}(x_{\mu})+\sum_{k}\Psi_{k}\psi_{k}\,.$ (73) Next, using the following identity: $\frac{1}{A^{(n)}}=\sum_{\mu}\frac{1}{x_{\mu}^{2}U_{\mu}}\,,$ (74) we can rewrite the previous equation as $\sum_{\mu}\frac{G_{\mu}}{U_{\mu}}=0\,,$ (75) where $G_{\mu}=X_{\mu}S_{\mu}^{\prime 2}+\frac{1}{X_{\mu}}\\!\left(\sum_{k=0}^{n-1+\varepsilon}(-x_{\mu}^{2})^{n-1-k}\,\Psi_{k}\right)^{\\!\\!2}+\varepsilon\,\frac{\Psi_{n}^{2}}{cx_{\mu}^{2}}+\eta r_{\mu}\,.$ (76) To proceed, we use the separation lemma: The most general solution of $\sum_{\mu=1}^{n}\frac{f_{\mu}(x_{\mu})}{U_{\mu}}=0\,,$ (77) where $U_{\mu}$ is defined in (II), is given by $f_{\mu}=\sum_{k=1}^{n-1}C_{k}(-x_{\mu}^{2})^{n-1-k}\,,$ (78) where $C_{k}$ are arbitrary (separation) constants. This yields the following ordinary differential equations for the separated solution: $\displaystyle X_{\mu}S_{\mu}^{\prime 2}+\frac{1}{X_{\mu}}\\!\left(\sum_{k=0}^{n-1+\varepsilon}(-x_{\mu}^{2})^{n-1-k}\\!\Psi_{k}\right)^{\\!\\!2}+\varepsilon\,\frac{\Psi_{n}^{2}}{cx_{\mu}^{2}}+\eta r_{\mu}$ $\displaystyle=\sum_{k=1}^{n-1}C_{k}(-x_{\mu}^{2})^{n-1-k}\,.$ (79) Inverting this expression and identifying the canonical momenta ${\boldsymbol{p}}={\boldsymbol{d}}S$, the corresponding constants of motion of the modified geodesic equation (71) are given by $C_{j}=k^{(j)}_{ab}p^{a}p^{b}+\eta R_{(j)}\,,$ (80) where $R_{(j)}$ are given by (III). 
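The separation lemma amounts to the vanishing of the ‘Vandermonde’ sums $\sum_{\mu}(-x_{\mu}^{2})^{n-1-k}/U_{\mu}$ for $k=1,\dots,n-1$. A quick sympy sketch (ours, not part of the paper) confirming this for $n=3$:

```python
# Check the separation lemma (77)-(78) for n = 3: each Vandermonde sum vanishes.
import sympy as sp

n = 3
x = sp.symbols('x1:%d' % (n + 1))
y = [xi**2 for xi in x]
U = [sp.prod([y[nu] - y[mu] for nu in range(n) if nu != mu]) for mu in range(n)]

for k in range(1, n):
    total = sum((-y[mu])**(n - 1 - k)/U[mu] for mu in range(n))
    assert sp.cancel(total) == 0                 # identically zero
print("separation lemma verified for n =", n)
```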
It would be interesting to understand what these constants of motion represent physically, e.g. in a quantum system DeWitt:1952js ; Omote:1976fx , as this would give a natural interpretation for the functions $R_{(j)}$. ## VI Discussion In this paper we have built on the previous work Gray:2020rtr to find covariant forms of the symmetry operators (22) and (23) of the conformal wave equation in the Kerr–NUT–AdS background (11). These operators are built out of the principal Killing–Yano tensor, its symmetry descendants, and the curvature tensor. Moreover, their commutativity descends naturally from the commutation properties of the Killing tensors and the special character of the Ricci scalar functions $R_{(j)}$, (III). We then showed how to lift these to a full set of conformally invariant mutually commuting symmetry operators $\\{{\cal K}_{w}^{(j)},{\cal L}_{w}^{(j)}\\}$ that guarantee $R$-separability of the conformal wave equation in any conformally related spacetime $\widetilde{{\boldsymbol{g}}}$, thus providing a highly non-trivial example of the beautiful theory developed in Michel:2013dfa . The conformal wave equation (1) is characterized by a specific value of $\eta$. In principle one can consider more general wave equations, where $\eta$ takes any value. It is easy to see that all such equations still separate in the Kerr–NUT–AdS backgrounds; the operators (22) and (23) commute for any value of $\eta$. However, for general $\eta$ the corresponding wave equations are not conformally invariant and will not separate in a generic conformally related spacetime. In this case, one could use the conformal properties outlined in Appendix B to construct an equation which separates in the conformal spacetime; however, there is no clear physical interpretation for such an equation. We have also introduced a modified Hamilton–Jacobi equation for a single particle with a Ricci scalar potential term. This equation naturally arises from the WKB limit of the ‘$\alpha$-modified’ conformal wave equation. This limit breaks the conformal invariance and the resulting equation no longer enjoys conformal symmetry. We have shown that this equation also separates in the Kerr–NUT–AdS spacetimes – the corresponding non-trivial constants of motion are given by the Killing tensors and the scalar functions $R_{(j)}$, giving a natural setting for the interpretation of the latter. In the future, we would like to study the physical implications of the newly derived (non-minimal coupling) Hamilton–Jacobi equation. We also hope to extend the present results to understand the separability of conformal fields with higher spin. ## Acknowledgements We would like to thank T. Bäckdahl for pointing to us the extended mathematical literature on the conformal wave symmetry operators and in particular the ref. Michel:2013dfa . F.G. acknowledges support from NSERC via a Vanier Canada Graduate Scholarship. Y.Y. is supported by Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology, Japan No.19K03877. This work was also supported by the Perimeter Institute for Theoretical Physics and by the Natural Sciences and Engineering Research Council of Canada (NSERC). Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. 
Perimeter Institute and the University of Waterloo are situated on the Haldimand Tract, land that was promised to the Haudenosaunee of the Six Nations of the Grand River, and is within the territory of the Neutral, Anishnawbe, and Haudenosaunee peoples. ## Appendix A Covariant form of $R_{(j)}$ In this appendix we find the covariant form of $R_{(j)}$ in Kerr–NUT–AdS spacetimes by starting from the explicit expressions in canonical coordinates (11). To start with we need an expression for the Ricci tensor. It is rather simple since it is diagonal in the orthonormal basis of the metric $\displaystyle{\boldsymbol{e}}^{\mu}$ $\displaystyle=\frac{{\boldsymbol{d}}x_{\mu}}{\sqrt{Q_{\mu}}}\,,\quad\hat{{\boldsymbol{e}}}^{\mu}=\sqrt{Q_{\mu}}\sum_{k}A^{(k)}_{\mu}{\boldsymbol{d}}\psi_{k}\,,$ $\displaystyle{\boldsymbol{e}}^{0}$ $\displaystyle=\sqrt{\frac{c}{A^{\\!(n)}}}\sum_{k}A^{(k)}{\boldsymbol{d}}\psi_{k}\,,$ (81) where $Q_{\mu}=X_{\mu}/U_{\mu}$. In fact it is given by Hamamoto:2006zf ; Frolov:2017kze $\text{\bf Ric}=-\sum_{\mu}\hat{r}_{\mu}\left({\boldsymbol{e}}^{\mu}{\boldsymbol{e}}^{\mu}+\hat{{\boldsymbol{e}}}^{\mu}\hat{{\boldsymbol{e}}}^{\mu}\right)-\varepsilon\hat{r}_{0}{\boldsymbol{e}}^{0}{\boldsymbol{e}}^{0}\,,$ (82) where we have introduced $\displaystyle\hat{r}_{\mu}$ $\displaystyle=\frac{\hat{X}_{\mu}^{\prime\prime}+\frac{\varepsilon\hat{X}_{\mu}^{\prime}}{x_{\mu}}}{2U_{\mu}}+\sum_{\nu\neq\mu}\frac{x_{\nu}\hat{X}_{\nu}^{\prime}-x_{\mu}\hat{X}_{\mu}^{\prime}-(1-\varepsilon)(\hat{X}_{\nu}-\hat{X}_{\mu})}{\left(x_{\nu}^{2}-x_{\mu}^{2}\right)U_{\nu}}\;\;,\;\;$ $\displaystyle\hat{r}_{0}$ $\displaystyle=\sum_{\nu}\frac{\hat{X}^{\prime}_{\nu}}{x_{\nu}U_{\nu}}\;,\quad\hat{X}_{\mu}=X_{\mu}+\varepsilon c/x_{\mu}^{2}\,.$ (83) The Killing tensors are also diagonal in this basis, ${\boldsymbol{k}}_{(j)}=\\!\sum_{\mu=1}^{n}\\!A^{\\!(j)}_{\mu}\left[{\boldsymbol{e}}_{\mu}{\boldsymbol{e}}_{\mu}+\hat{{\boldsymbol{e}}}_{\mu}\hat{{\boldsymbol{e}}}_{\mu}\right]+\varepsilon A^{\\!(j)}{\boldsymbol{e}}_{0}{\boldsymbol{e}}_{0}\,.$ (84) Hence, using the identity $\sum_{\nu\neq\mu}\frac{A^{\\!(j)}_{\mu}-A^{\\!(j)}_{\nu}}{x^{2}_{\nu}-x^{2}_{\mu}}=(n-j)A^{\\!(j-1)}_{\mu}\;,$ (85) we have $\displaystyle R_{(j)}-k^{ab}_{(j)}R_{ab}$ $\displaystyle=\sum_{\mu}\left[\varepsilon\frac{A^{\\!(j-1)}_{\mu}x_{\mu}\hat{X}_{\mu}^{\prime}}{U_{\mu}}+2\sum_{\nu\neq\mu}\frac{A^{\\!(j)}_{\mu}}{x^{2}_{\nu}-x^{2}_{\mu}}\left(\frac{x_{\mu}\hat{X}^{\prime}_{\mu}-(1-\varepsilon)\hat{X}_{\mu}}{U_{\mu}}+\frac{x_{\nu}\hat{X}^{\prime}_{\nu}-(1-\varepsilon)\hat{X}_{\nu}}{U_{\nu}}\right)\right]$ $\displaystyle=\sum_{\mu}\left[\varepsilon\frac{A^{\\!(j-1)}_{\mu}x_{\mu}\hat{X}_{\mu}^{\prime}}{U_{\mu}}+2\frac{x_{\mu}\hat{X}^{\prime}_{\mu}-(1-\varepsilon)\hat{X}_{\mu}}{U_{\mu}}\sum_{\nu\neq\mu}\frac{A^{\\!(j)}_{\mu}-A^{\\!(j)}_{\nu}}{x^{2}_{\nu}-x^{2}_{\mu}}\right]$ $\displaystyle=2\sum_{\mu}\frac{A^{\\!(j-1)}_{\mu}}{U_{\mu}}\left([n-j+\varepsilon/2]x_{\mu}\hat{X}^{\prime}_{\mu}-(n-j)(1-\varepsilon)\hat{X}_{\mu}\right)\;.$ (86) Furthermore, since the Killing vectors satisfy $\nabla\tensor{[}_{(a}]{l}{{}^{(j)}_{b)}}=0$, all the information about their derivatives is contained in their exterior derivatives. 
In particular, since in the orthonormal basis (A) ${\boldsymbol{l}}^{(j)}=\sum_{\mu}A^{\\!(j)}_{\mu}\sqrt{Q_{\mu}}\hat{{\boldsymbol{e}}}^{\mu}+\varepsilon A^{\\!(j)}\sqrt{\frac{c}{A^{\\!(n)}}}{\boldsymbol{e}}^{0}\,,$ (87) we have that $\displaystyle{\boldsymbol{d}}{\boldsymbol{l}}^{(j)}=$ $\displaystyle\sum_{\mu}\left[\left(A^{\\!(j)}_{\mu}Q_{\mu}\frac{X_{\mu}^{\prime}}{X_{\mu}}-\varepsilon\frac{2}{x_{\mu}}\frac{cA^{\\!(j)}}{A^{\\!(n)}}+2x_{\mu}\sum_{\nu\neq\mu}\frac{Q_{\mu}A^{\\!(j)}_{\mu}+Q_{\nu}A^{\\!(j)}_{\nu}}{x^{2}_{\nu}-x^{2}_{\mu}}\right){\boldsymbol{e}}^{\mu}\wedge\hat{{\boldsymbol{e}}}^{\mu}\right.$ $\displaystyle\left.+\varepsilon 2x_{\mu}\sqrt{Q_{\mu}}\sqrt{\frac{c}{A^{\\!(n)}}}A^{\\!(j-1)}_{\mu}{\boldsymbol{e}}^{\mu}\wedge{\boldsymbol{e}}^{0}+\sum_{\nu\neq\mu}2x_{\nu}\sqrt{Q_{\mu}Q_{\nu}}\frac{A^{\\!(j)}_{\mu}-A^{\\!(j)}_{\nu}}{x^{2}_{\nu}-x^{2}_{\mu}}{\boldsymbol{e}}^{\nu}\wedge\hat{{\boldsymbol{e}}}^{\mu}\right]\;.$ (88) Moreover, introducing the Killing co-potential $(D-2j-1){\boldsymbol{\omega}}^{(j)}_{ab}:=k_{(j)\,a}^{\,\,\,n}h_{nb}=\sum_{\mu}A^{\\!(j)}_{\mu}x_{\mu}{\boldsymbol{e}}^{\mu}\wedge\hat{{\boldsymbol{e}}}^{\mu}\;,$ (89) which generates the Killing vectors Frolov:2017kze $l^{a}_{(j)}=\nabla_{b}\,\omega^{ba}_{(j)}\,,$ (90) we can calculate $k_{(j)n}^{\,\,\,a}h^{nb}\,dl_{ab}^{(k)}=2\sum_{\mu}\frac{1}{U_{\mu}}\left(A^{\\!(j)}_{\mu}A^{\\!(k)}_{\mu}x_{\mu}\hat{X}^{\prime}_{\mu}+\sum_{\nu\neq\mu}\frac{2\hat{X}_{\mu}(A^{\\!(j)}_{\mu}A^{\\!(k)}_{\mu}x^{2}_{\mu}-A^{\\!(j)}_{\nu}A^{\\!(k)}_{\nu}x^{2}_{\nu})-\varepsilon cA^{\\!(j)}_{\mu}\left(\frac{A^{\\!(k)}_{\nu}}{U_{\nu}}-\frac{A^{\\!(k)}_{\mu}}{U_{\mu}}\right)}{x^{2}_{\nu}-x^{2}_{\mu}}\right)\;.$ (91) Notice that the last term, proportional to $\varepsilon$, vanishes when $k=0$. Finally, let us calculate $\Box\Tr({\boldsymbol{k}}_{(j)})$. 
First, we have $\Tr({\boldsymbol{k}}_{(j)})=\varepsilon A^{\\!(j)}+\sum_{\mu}2A^{\\!(j)}_{\mu}=(2(n-j)+\varepsilon)A^{\\!(j)}\;.$ (92) Since this expression only depends on $x_{\mu}$ we can use the form of the wave operator (see (20) in Gray:2020rtr ) to write $\displaystyle\nabla_{a}(k_{(j)}^{ab}\nabla_{b}\Tr[{\boldsymbol{k}}_{(j)}])$ $\displaystyle=\sum_{\mu}\frac{A^{\\!(j)}_{\mu}}{U_{\mu}}\left[X_{\mu}\partial^{2}_{\mu}\Tr({\boldsymbol{k}}_{(j)})+\partial_{\mu}\Tr({\boldsymbol{k}}_{(j)})\left(X_{\mu}^{\prime}+\frac{\varepsilon}{x_{\mu}}X_{\mu}\right)\right]$ $\displaystyle=4\sum_{\mu}\frac{A^{\\!(j)}_{\mu}A^{\\!(j-1)}_{\mu}}{U_{\mu}}[n-j+\frac{\varepsilon}{2}]\left(x_{\mu}X^{\prime}_{\mu}+(1+\varepsilon)X_{\mu}\right)\,.$ (93) Putting this together we have $\displaystyle\alpha_{j}k_{(j-1)n}^{\,\,\,a}h^{nb}\,dl_{ab}^{(0)}-\beta_{j}l_{(j-1)}^{a}\,l_{a}^{(0)}+\frac{D-4}{2(D-2)}\Box\Tr({\boldsymbol{k}}_{(j)})$ $\displaystyle=2\sum_{\mu}\frac{1}{U_{\mu}}\left(A^{\\!(j-1)}_{\mu}\left(\left[\alpha_{j}+\frac{(D-4)(n-j+\frac{\varepsilon}{2})}{D-2}\right]x_{\mu}\hat{X}^{\prime}_{\mu}-\left[\frac{\beta_{j}}{2}-\frac{(D-4)(n-j+\frac{\varepsilon}{2})(1+\varepsilon)}{D-2}\right]\hat{X}_{\mu}\right)-2\alpha_{j}\hat{X}_{\mu}\sum_{\nu\neq\mu}\frac{A^{\\!(j)}_{\mu}-A^{\\!(j)}_{\nu}}{x^{2}_{\nu}-x^{2}_{\mu}}\right)$ $\displaystyle=2\sum_{\mu}\frac{A^{\\!(j-1)}_{\mu}}{U_{\mu}}\left(\left[\alpha_{j}+\frac{(D-4)(n-j+\frac{\varepsilon}{2})}{D-2}\right]x_{\mu}\hat{X}^{\prime}_{\mu}-\left[\frac{\beta_{j}}{2}+2(n-j)\alpha_{j}-\frac{(D-4)(n-j+\frac{\varepsilon}{2})(1+\varepsilon)}{D-2}\right]\hat{X}_{\mu}\right)\,.$ (94) Thus, using $\varepsilon=\\{0,1\\}$ we can choose the coefficients to be $\displaystyle\alpha_{j}$ $\displaystyle=\frac{2(n-j+\frac{\varepsilon}{2})}{D-2}\;,$ (95) $\displaystyle\beta_{j}$ $\displaystyle=\frac{4(n-j+\frac{\varepsilon}{2})}{D-2}(D-3-2(n-j+\frac{\varepsilon}{2}))\,.$ (96) Thence we obtain our covariant expression for $R_{(j)}$ $\displaystyle R_{(j)}=$ $\displaystyle k^{ab}_{(j)}R_{ab}+\frac{D-4}{2(D-2)}\Box\Tr({\boldsymbol{k}}_{(j)})$ $\displaystyle+\alpha_{j}k_{(j-1)n}^{\,\,\,a}h^{nb}\,dl_{ab}^{(0)}-\beta_{j}l_{(j-1)}^{a}\,l_{a}^{(0)}\;,$ (97) which matches the form in the text upon noting ${\boldsymbol{l}}_{0}={\boldsymbol{\xi}}$ and $D=2n+\varepsilon$. Moreover the derivative of $R_{(j)}$ is particularly nice. 
We can calculate $\displaystyle\nabla_{a}R_{(j)}\overset{a=\nu}{=}\sum_{\mu=1}^{n}\frac{\partial_{\nu}r_{\mu}\,A^{(j)}_{\mu}}{U_{\mu}}+2x_{\nu}A^{\\!(j)}_{\nu}\sum_{\mu\neq\nu}\frac{\frac{r_{\nu}}{U_{\nu}}+\frac{r_{\mu}}{U_{\mu}}}{x^{2}_{\mu}-x^{2}_{\nu}}$ $\displaystyle=\frac{r^{\prime}_{\nu}\,A^{(j)}_{\nu}}{U_{\nu}}+2x_{\nu}A^{\\!(j)}_{\nu}\sum_{\mu\neq\nu}\frac{\frac{r_{\nu}}{U_{\nu}}+\frac{r_{\mu}}{U_{\mu}}}{x^{2}_{\mu}-x^{2}_{\nu}}\;.$ (98) Notice that one can also construct $\displaystyle k_{(j)\,a}^{\;\;\;\;\;\;\;b}\nabla_{b}R\,\overset{a=\nu}{=}$ $\displaystyle\,A^{\\!(j)}_{\nu}\,\partial_{\nu}\sum_{\mu}\frac{r_{\mu}}{U_{\mu}}$ $\displaystyle=\frac{r^{\prime}_{\nu}\,A^{(j)}_{\nu}}{U_{\nu}}+2x_{\nu}A^{\\!(j)}_{\nu}\sum_{\mu\neq\nu}\frac{\frac{r_{\nu}}{U_{\nu}}+\frac{r_{\mu}}{U_{\mu}}}{x^{2}_{\mu}-x^{2}_{\nu}}$ $\displaystyle=\nabla_{a}R_{(j)}\;.$ (99) Thus we have found a covariant expression for the gradients of the functions $R_{(j)}$ entering our symmetry operators: $\kappa_{a}^{(j)}:=k_{(j)\,a}^{\;\;\;\;\;\;\;b}\nabla_{b}R=\nabla_{a}R_{(j)}\;.$ (100) Clearly ${\boldsymbol{\kappa}}^{(j)}$ is closed and also locally exact in all dimensions; thus we can say that our $R_{(j)}$ are the potentials for ${\boldsymbol{\kappa}}^{(j)}$, i.e. ${\boldsymbol{\kappa}}^{(j)}={\boldsymbol{d}}R_{(j)}\;.$ (101) ## Appendix B Conformal Transformations Given the spacetime $(\mathcal{M},{\boldsymbol{g}})$ we now consider a conformal transformation of the metric, Killing tensors, and scalar field (${\boldsymbol{k}}_{(j)}\rightarrow\Omega^{-2}{\boldsymbol{k}}_{(j)}$, $\Phi\rightarrow\Omega^{w}\Phi$ for $w=1-D/2$) to the conformal spacetime $(\mathcal{M},{\boldsymbol{g}},\Omega)$. The goal of this section is to find a conformally covariant form of our wave operators $\bigl{(}\hat{\mathcal{K}}_{(j)}-\eta R_{(j)}\bigr{)}\Phi\,,\quad\hat{\mathcal{K}}_{(j)}=\nabla_{a}k^{ab}_{(j)}\nabla_{b}\,,\quad\eta=\frac{1}{4}\frac{D-2}{D-1}\,.$ (102) Using the conformal properties of the Ricci tensor and covariant derivatives, we find the following transformations $\displaystyle\Omega^{2}\,\hat{\mathcal{K}}_{(j)}\Phi\rightarrow$ $\displaystyle\Omega^{w}\left(\hat{\mathcal{K}}_{(j)}+w\nabla_{a}(k^{ab}_{(j)}\nabla_{b}\log\Omega)\right.$ $\displaystyle\left.+w(w-2+D)\nabla_{a}\log\Omega\,k^{ab}_{(j)}\nabla_{b}\log\Omega\right)\Phi$ (103) and $\displaystyle\Omega^{2}\,k^{ab}_{(j)}R_{ab}\rightarrow$ $\displaystyle k^{ab}_{(j)}R_{ab}-\left[(D-2)k^{ab}_{(j)}+k^{c}_{(j)\,c}g^{ab}\right]\nabla_{a}\nabla_{b}\log\Omega$ $\displaystyle+(D-2)\left[k^{ab}_{(j)}-k^{c}_{(j)\,c}g^{ab}\right]\nabla_{a}\log\Omega\nabla_{b}\log\Omega\;.$ (104) Thence we have $\displaystyle\Omega^{2}\left(\hat{\mathcal{K}}_{(j)}\Phi-\eta k^{ab}_{(j)}R_{ab}\Phi\right)/\Phi\rightarrow$ $\displaystyle(\hat{\mathcal{K}}_{(j)}\Phi-\eta k^{ab}_{(j)}R_{ab}\Phi)/\Phi+w(\nabla_{a}k^{ab}_{(j)})\nabla_{b}\log\Omega+((w+\eta(D-2))k^{ab}_{(j)}{+}\eta k^{c}_{(j)c}g^{ab})\left[\nabla_{a}\nabla_{b}\log\Omega\right]$ $\displaystyle+((w(w-2+D)-(D-2)\eta)k^{ab}_{(j)}+(D-2)\eta k^{c}_{(j)c}g^{ab})\left[\nabla_{a}\log\Omega\nabla_{b}\log\Omega\right]$ $\displaystyle=\left(\hat{\mathcal{K}}_{(j)}\Phi-\eta k^{ab}_{(j)}R_{ab}\Phi\right)/\Phi+w(\nabla_{a}k^{ab}_{(j)})\nabla_{b}\log\Omega-\eta D\,\hat{k}^{ab}_{(j)}\left[(D-2)\nabla_{a}\log\Omega\nabla_{b}\log\Omega+\nabla_{a}\nabla_{b}\log\Omega\right]\,.$ (105) Here we have introduced the traceless Killing tensor $\hat{k}^{ab}_{(j)}=k^{ab}_{(j)}-k^{c}_{(j)\,c}g^{ab}/D$. Clearly this vanishes when $j=0$, so the first operator is conformally invariant. 
Notice that the last term contains two derivatives of the conformal factor, so consider the identically vanishing term (which follows from the Killing tensor equation) $\displaystyle\nabla_{a}\nabla_{b}\left(k_{(j)}^{ab}+\frac{1}{2}k^{c}_{(j)\,c}g^{ab}\right)\equiv 0\,.$ (106) Under the transformation ${\boldsymbol{k}}_{(j)}\rightarrow\Omega^{2}{\boldsymbol{k}}_{(j)}$ this becomes $\displaystyle\Omega^{2}\,\nabla_{a}\nabla_{b}\left(k_{(j)}^{ab}+\frac{1}{2}k^{c}_{(j)\,c}g^{ab}\right)$ $\displaystyle\rightarrow\nabla_{a}\nabla_{b}\left(k_{(j)}^{ab}+\frac{1}{2}k^{c}_{(j)\,c}g^{ab}\right)+(D+2)(\nabla_{a}k^{ab}_{(j)})\nabla_{b}\log\Omega$ $\displaystyle+D\,\hat{k}^{ab}_{(j)}\left[(D-2)\nabla_{a}\log\Omega\nabla_{b}\log\Omega+\nabla_{a}\nabla_{b}\log\Omega\right]\,.$ (107) So we have $\displaystyle\Omega^{2}\left(\hat{\mathcal{K}}_{(j)}\Phi-\eta\left[k^{ab}_{(j)}R_{ab}-\left\\{\nabla_{a}\nabla_{b}\left(k_{(j)}^{ab}+\frac{1}{2}k^{c}_{(j)\,c}g^{ab}\right)\right\\}\right]\Phi\right)/\Phi\rightarrow$ $\displaystyle\left(\hat{\mathcal{K}}_{(j)}\Phi-\eta\left[k^{ab}_{(j)}R_{ab}-\left\\{\nabla_{a}\nabla_{b}\left(k_{(j)}^{ab}+\frac{1}{2}k^{c}_{(j)\,c}g^{ab}\right)\right\\}+(D-4)(\nabla_{a}k^{ab}_{(j)})\nabla_{b}\log\Omega\right]\Phi\right)/\Phi\,.$ (108) Note that, as the covariant derivatives and Killing tensors in the second line are in the $\Omega=1$ frame, we have $(D-4)(\nabla_{a}k^{ab}_{(j)})\nabla_{b}\log\Omega=-(D-4)/2\,(\nabla^{b}k^{c}_{(j)\,c})\nabla_{b}\log\Omega$. Thus this term will be canceled by the transformation of $\Box\Tr({\boldsymbol{k}}_{(j)})$. That is, $\frac{D-4}{2(D-2)}\Box\Tr({\boldsymbol{k}}_{(j)})\rightarrow\Omega^{-2}\left(\frac{D-4}{2(D-2)}\Box\Tr({\boldsymbol{k}}_{(j)})+\frac{D-4}{2}\nabla_{a}\left[\Tr({\boldsymbol{k}}_{(j)})\right]\nabla^{a}\log\Omega\right)\,.$ (109) We now consider the conformal transformation of the final piece: ${\cal R}_{(j)}:=\alpha_{j}k_{(j-1)n}^{\,\,\,a}h^{nb}\,dl_{ab}^{(0)}-\beta_{j}l_{(j-1)}^{a}\,l_{a}^{(0)}\,.$ (110) Now, if ${\boldsymbol{k}}_{(j)}\rightarrow\Omega^{-2}{\boldsymbol{k}}_{(j)}$, consistency demands that ${\boldsymbol{h}}\rightarrow\Omega^{2}{\boldsymbol{h}}$ and that $l^{a}_{(j)}\rightarrow\Omega^{-2}l^{a}_{(j)}$. That is, one can show that on a $p$-form, $\star\rightarrow\Omega^{D-2p}\star$. Assuming $h\rightarrow\Omega^{r}h$, so that $h^{j}\rightarrow\Omega^{jr}h^{j}$, then $f^{(j)}=\star h^{j}\rightarrow\Omega^{D-4j+jr}f^{(j)}$. So $k^{(j)}_{ab}\propto f_{ac_{1}\dots c_{D-2j-1}}f_{b}^{c_{1}\dots c_{D-2j-1}}\rightarrow\Omega^{2(D-4j+jr)-2(D-2j-1)}k^{(j)}_{ab}=\Omega^{2+2j(-2+r)}k^{(j)}_{ab}\;.$ (111) Hence demanding for all $j$ that $k^{(j)}_{ab}\rightarrow\Omega^{2}k^{(j)}_{ab}$ fixes $r=2$. Then, we are left with ${\cal R}_{(j)}$ as a scalar of conformal weight $-2$: ${\cal R}_{(j)}\rightarrow\Omega^{-2}\,{\cal R}_{(j)}\;.$ (112) Thus, putting this all together, $\Omega^{2}\,\left[\left(\hat{\mathcal{K}}_{(j)}-\eta\left[R_{(j)}-\left\\{\nabla_{a}\nabla_{b}\left(k_{(j)}^{ab}+\frac{1}{2}k^{c}_{(j)\,c}g^{ab}\right)\right\\}\right]\right)\Phi\right]/\Phi\rightarrow\\\ \left[\left(\hat{\mathcal{K}}_{(j)}-\eta\left[R_{(j)}-\left\\{\nabla_{a}\nabla_{b}\left(k_{(j)}^{ab}+\frac{1}{2}k^{c}_{(j)\,c}g^{ab}\right)\right\\}\right]\right)\Phi\right]/\Phi\,,$ (113) which gives us the form we use in the main text. ## References * (1) V. Frolov, P. Krtous and D. Kubiznak, _Black holes, hidden symmetries, and complete integrability_ , _Living Rev. Rel._ 20 (2017) 6 [1705.05482]. * (2) T. Houri, T. Oota and Y. 
Yasui, _Closed conformal Killing-Yano tensor and Kerr-NUT-de Sitter spacetime uniqueness_ , _Phys. Lett. B_ 656 (2007) 214 [0708.1368]. * (3) P. Krtous, V. P. Frolov and D. Kubiznak, _Hidden Symmetries of Higher Dimensional Black Holes and Uniqueness of the Kerr-NUT-(A)dS spacetime_ , _Phys. Rev._ D78 (2008) 064022 [0804.4705]. * (4) T. Houri, T. Oota and Y. Yasui, _Generalized Kerr-NUT-de Sitter metrics in all dimensions_ , _Phys. Lett. B_ 666 (2008) 391 [0805.0838]. * (5) V. P. Frolov, P. Krtouš and D. Kubizňák, _Separability of Hamilton-Jacobi and Klein-Gordon equations in general Kerr-NUT-AdS spacetimes_ , _JHEP_ 0702 (2007) 005 [hep-th/0611245]. * (6) T. Oota and Y. Yasui, _Separability of Dirac equation in higher dimensional Kerr-NUT-de Sitter spacetime_ , _Phys. Lett._ B659 (2008) 688 [0711.0078]. * (7) O. Lunin, _Maxwell’s equations in the Myers-Perry geometry_ , _JHEP_ 12 (2017) 138 [1708.06766]. * (8) V. P. Frolov, P. Krtouš, D. Kubizňák and J. E. Santos, _Massive Vector Fields in Rotating Black-Hole Spacetimes: Separability and Quasinormal Modes_ , _Phys. Rev. Lett._ 120 (2018) 231103 [1804.00030]. * (9) O. Lunin, _Excitations of the Myers-Perry Black Holes_ , _JHEP_ 10 (2019) 030 [1907.03820]. * (10) F. Gray, I. Holst, D. Kubiznak, G. Odak, D. M. Pirvu and T. R. Perche, _Conformally Coupled Scalar in Rotating Black Hole Spacetimes_ , _Phys. Rev. D_ 101 (2020) 084031 [2002.05221]. * (11) R. M. Wald, _General Relativity_. Chicago Univ. Pr., Chicago, USA, 1984. * (12) J. Hennig and R. Panosso Macedo, _Fully pseudospectral solution of the conformally invariant wave equation on a Kerr background_ , 2012.02240. * (13) J.-P. Michel, F. Radoux and J. Šilhan, _Second order symmetries of the conformal Laplacian_ , _SIGMA_ 10 (2014) 016 [1308.1046]. * (14) B. S. DeWitt, _Point transformations in quantum mechanics_ , _Phys. Rev._ 85 (1952) 653. * (15) M. Omote, _Point Canonical Transformations and the Path Integral_ , _Nucl. Phys. B_ 120 (1977) 325. * (16) C. Destri, P. Maraner and E. Onofri, _On the definition of quantum free particle on curved manifolds_ , _Nuovo Cim. A_ 107 (1994) 237 [hep-th/9210027]. * (17) D. Lian, L. Hu and Q. Liu, _Identification of Geometric Potential from Quantum Conditions for a Particle on a Curved Surface_ , 1701.08370. * (18) N. Hamamoto, T. Houri, T. Oota and Y. Yasui, _Kerr-NUT-de Sitter curvature in all dimensions_ , _J. Phys. A_ 40 (2007) F177 [hep-th/0611285]. * (19) A. Sergyeyev and P. Krtous, _Complete Set of Commuting Symmetry Operators for Klein-Gordon Equation in Generalized Higher-Dimensional Kerr-NUT-(A)dS Spacetimes_ , _Phys. Rev. D_ 77 (2008) 044033 [0711.4623]. * (20) S. Benenti, C. Chanu and G. Rastelli, _Remarks on the connection between the additive separation of the hamilton–jacobi equation and the multiplicative separation of the schrödinger equation. i. the completeness and robertson conditions_ , _Journal of Mathematical Physics_ 43 (2002) 5183. * (21) D. Kubiznak and P. Krtous, _On conformal Killing-Yano tensors for Plebanski-Demianski family of solutions_ , _Phys. Rev. D_ 76 (2007) 084036 [0707.0409]. * (22) B. Carter, _Killing tensor quantum numbers and conserved currents in curved space_ , _Physical Review D_ 16 (1977) 3395. * (23) C. Boyer, E. Kalnins and W. Miller, _Symmetry and separation of variables for the helmholtz and laplace equations_ , _Nagoya Mathematical Journal_ 60 (1976) 35. * (24) E. G. Kalnins and W. 
Miller Jr, _Intrinsic characterisation of orthogonal r separation for laplace equations_ , _Journal of Physics A: Mathematical and General_ 15 (1982) 2699. * (25) N. Kamran and R. McLenaghan, _Separation of variables and symmetry operators for the conformally invariant klein-gordon equation on curved spacetime_ , _letters in mathematical physics_ 9 (1985) 65. * (26) C. Duval, P. Lecomte and V. Ovsienko, _Conformally equivariant quantization: existence and uniqueness_ , in _Annales de l’institut Fourier_ , vol. 49, 1999. * (27) M. Eastwood, _Higher symmetries of the laplacian_ , _Annals of mathematics_ (2005) 1645. * (28) M. Eastwood and T. Leistner, _Higher symmetries of the square of the laplacian_ , in _Symmetries and overdetermined systems of partial differential equations_ , pp. 319–338. Springer, 2008. * (29) A. R. Gover and J. Šilhan, _Higher symmetries of the conformal powers of the laplacian on conformally flat manifolds_ , _Journal of mathematical physics_ 53 (2012) 032301. * (30) L. Andersson, T. Bäckdahl and P. Blue, _Second order symmetry operators_ , _Classical and Quantum Gravity_ 31 (2014) 135015\.
# Coherent manipulation of an Andreev spin qubit M. Hays<EMAIL_ADDRESS>Department of Applied Physics, Yale University, New Haven, CT 06520, USA V. Fatemi<EMAIL_ADDRESS>Department of Applied Physics, Yale University, New Haven, CT 06520, USA D. Bouman QuTech and Delft University of Technology, 2600 GA Delft, The Netherlands Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands J. Cerrillo Área de Física Aplicada, Universidad Politécnica de Cartagena, E-30202 Cartagena, Spain Departamento de Física Teórica de la Materia Condensada C-V, Universidad Autónoma de Madrid, E-28049 Madrid, Spain S. Diamond Department of Applied Physics, Yale University, New Haven, CT 06520, USA K. Serniak Department of Applied Physics, Yale University, New Haven, CT 06520, USA Current Affiliation: MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02420, USA T. Connolly Department of Applied Physics, Yale University, New Haven, CT 06520, USA P. Krogstrup Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark J. Nygård Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark A. Levy Yeyati Departamento de Física Teórica de la Materia Condensada C-V, Universidad Autónoma de Madrid, E-28049 Madrid, Spain Condensed Matter Physics Center (IFIMAC) and Instituto Nicolás Cabrera, Universidad Autónoma de Madrid, E-28049 Madrid, Spain A. Geresdi QuTech and Delft University of Technology, 2600 GA Delft, The Netherlands Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands Quantum Device Physics Laboratory, Department of Microtechnology and Nanoscience, Chalmers University of Technology, SE 41296 Gothenburg, Sweden M. H. Devoret<EMAIL_ADDRESS>Department of Applied Physics, Yale University, New Haven, CT 06520, USA ###### Abstract Two promising architectures for solid-state quantum information processing are electron spins in semiconductor quantum dots and the collective electromagnetic modes of superconducting circuits. In some aspects, these two platforms are dual to one another: superconducting qubits are more easily coupled but are relatively large among quantum devices $(\sim\mathrm{mm})$, while electrostatically-confined electron spins are spatially compact ($\sim\mathrm{\mu}\mathrm{m}$) but more complex to link. Here we combine beneficial aspects of both platforms in the Andreev spin qubit: the spin degree of freedom of an electronic quasiparticle trapped in the supercurrent-carrying Andreev levels of a Josephson semiconductor nanowire. We demonstrate coherent spin manipulation by combining single-shot circuit-QED readout and spin-flipping Raman transitions, finding a spin-flip time $T_{\mathrm{s}}=17~{}\mathrm{\mu}\mathrm{s}$ and a spin coherence time $T_{2E}=52~{}\mathrm{ns}$. These results herald a new spin qubit with supercurrent-based circuit-QED integration and further our understanding and control of Andreev levels – the parent states of Majorana zero modes – in semiconductor-superconductor heterostructures. A weak link between two superconductors hosts discrete, fermionic modes known as Andreev levels Beenakker and Van Houten (1991); Furusaki and Tsukada (1991). They govern the physics of the weak link on the microscopic scale, ultimately giving rise to macroscopic phenomena such as the Josephson supercurrent. 
Superconducting quantum circuits crucially rely on the nonlinearity of the supercurrent in Josephson tunnel junctions, a manifestation of the ground-state properties of millions of Andreev levels acting in concert Clarke and Braginski (2004); Devoret and Schoelkopf (2013); Roy and Devoret (2016). While the vast majority of conduction electrons participate in the nonlinear bosonic oscillations of the superconducting condensate, each Andreev level is itself a fermionic degree of freedom, able to be populated by electronic excitations known as Bogoliubov quasiparticles. In 2003, it was proposed to store quantum information in the spin state of a quasiparticle trapped in a weak link possessing a spin-orbit coupling Chtchelkatchev and Nazarov (2003); Padurariu and Nazarov (2010); Reynoso et al. (2012); Park and Yeyati (2017). It was pointed out that this Andreev spin qubit would carry a state-dependent supercurrent, opening new paths for spin manipulation and measurement that are unavailable to electrostatically-confined spin qubits Hanson et al. (2007); Childress and Hanson (2013). In particular, such a state-dependent supercurrent could be used to achieve strong coupling with a superconducting microwave resonator, an area of active research in the spin qubit community Petersson et al. (2012); Samkharadze et al. (2018); Mi et al. (2018); Harvey et al. (2018); Landig et al. (2018); Cubaynes et al. (2019); Borjans et al. (2020). This supercurrent-based coupling has been used in such circuit quantum electrodynamics (cQED) architectures Blais et al. (2004); Wallraff et al. (2004) to detect and manipulate pairs of quasiparticles trapped in Andreev levels Janvier et al. (2015); Hays et al. (2018). However, because the Andreev levels of most weak links are paired into spin-degenerate doublets, quasiparticle spin manipulation has remained out of reach. The level structure of an Andreev doublet is determined by the geometric and material properties of the host weak link, as shown in weak links composed of superconductor-proximitized semiconductor nanowires, or “Josephson nanowires” for short Krogstrup et al. (2015); Chang et al. (2015). Thanks to recently-achieved atomic-scale perfection of the superconductor-semiconductor interface, it is now possible to observe the Andreev spectra of Josephson nanowires, revealing a rich interplay between electromagnetic field effects, device geometry, and spin-orbit coupling van Woerkom et al. (2017); Tosi et al. (2019). These properties of Andreev levels in superconductor-semiconductor nanowires have been employed to demonstrate gate-tunable weak links for superconducting qubits Larsen et al. (2015); De Lange et al. (2015), probe non-abelian Andreev levels known as Majorana zero modes Fu and Kane (2008); Lutchyn et al. (2010); Oreg et al. (2010); Mourik et al. (2012); Deng et al. (2016), and, importantly for this experiment, investigate spin-split doublets without a Zeeman field Tosi et al. (2019); Hays et al. (2020). In this letter, we demonstrate the first coherent manipulation of the spin of an individual quasiparticle excitation of a superconductor. The quasiparticle is trapped in the Andreev levels of a Josephson nanowire, where it resides predominantly in the two spin states of the lowest-energy Andreev doublet with roughly equal probability. First, we initialize this Andreev spin qubit in one of the two spin states by post-selecting on a single-shot cQED spin measurement, which we demonstrated in an earlier work Hays et al. (2020). 
We then achieve full coherent control of the Andreev spin qubit by driving Raman transitions in a natural $\Lambda$ system formed by the two spin states and an excited state. We observe spin lifetimes up to $T_{\mathrm{s}}=17~{}\mathrm{\mu}\mathrm{s}$ at the presented gate voltages [see Supplementary Information Fig. S9]. However, the much shorter spin coherence time $T_{2E}=52~{}\mathrm{ns}$ appears to be limited by a spinful bath.

Figure 1: Principle of the Andreev spin qubit. (a) Illustration of a semiconductor nanowire (white) coated with epitaxial superconductor (light blue). A quasiparticle is trapped in the exposed weak link by the pair potential $\Delta$ of the superconducting leads. Due to spin-orbit coupling, if the quasiparticle is in the spin-up state (upper panel) supercurrent flows to the right near zero phase bias ($\varphi\approx 0$), while in the spin-down state (lower panel) supercurrent flows to the left. Applying nonzero $\varphi$ thus breaks spin degeneracy. (b) Level structure of two Andreev doublets tuned to a $\Lambda$ configuration. Two microwave drives (frequencies $f_{\downarrow}$ and $f_{\uparrow}$) are equally detuned from $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$, inducing a coherent Raman process between the qubit states $|\\!\downarrow_{q}\,\rangle$ and $|\\!\uparrow_{q}\,\rangle$ via a virtual level (black dashed line). (c) Both microwave drives induce an rf electric field $E_{\mathrm{d}}$ between the superconducting leads. For a nanowire symmetric across the plane $M$ (i.e. only nanowire + substrate), drive-induced spin-flips would be forbidden. However, as depicted in the inset, here the mirror symmetry is broken by both the partial aluminum shell as well as the cutter (blue, bias $V_{\mathrm{g,c}}$) and plunger gates (black, bias $V_{\mathrm{g,p}}$). (d) The Josephson nanowire (light blue) is embedded in a superconducting loop (gray), which enables phase bias via an external flux $\varphi\approx 2\pi\Phi/\Phi_{0}$ as well as inductive coupling to a superconducting microwave resonator (maroon). The resonator reflection coefficient $\Gamma=I+iQ$ is probed with a tone near the resonator frequency $f_{\mathrm{r}}=$ 9.188 GHz. (e) Repeated $1.9~{}\mathrm{\mu}\mathrm{s}$ measurements of $I$ and $Q$ clustered into three distributions, corresponding to $|\\!\downarrow_{q}\,\rangle$, $|\\!\uparrow_{q}\,\rangle$ and $|g\rangle$ (standard deviation $\sigma$). The system state was assigned based on thresholds indicated by the black dotted lines.

Our realization of the Andreev spin qubit hinges on the interplay between spin-orbit coupling in the semiconductor nanowire and the superconducting phase bias $\varphi$ across the weak link [Fig. 1(a)] Chtchelkatchev and Nazarov (2003); Padurariu and Nazarov (2010); Reynoso et al. (2012); Park and Yeyati (2017); Tosi et al. (2019); Hays et al. (2020). In a conventional weak link, a trapped quasiparticle is restricted to spin-degenerate Andreev doublets and therefore the spin cannot be coherently manipulated. In a Josephson nanowire, however, an inter-subband spin-orbit interaction can cause spin to hybridize with translational degrees of freedom (this hybridized spin is sometimes known as pseudospin, though we will continue to refer to it as “spin” for simplicity).
Due to this interaction between spin and motion, the two spin states of an Andreev doublet carry equal and opposite supercurrent $\pm I_{\mathrm{s}}/2$ at $\varphi=0$, with $I_{\mathrm{s}}$ doublet-dependent. The doublet degeneracy can thus be lifted with a nonzero phase bias: perturbatively near $\varphi=0$, the spin splitting is given by $\epsilon_{\mathrm{s}}=I_{\mathrm{s}}\varphi\,\Phi_{0}/2\pi$. Microwave quantum optics techniques are well-suited to achieve quasiparticle spin manipulation, given the frequency selectivity brought about by such a flux-induced spin splitting. In this experiment, the two spin states $|\\!\downarrow_{q}\,\rangle,|\\!\uparrow_{q}\,\rangle$ of one Andreev doublet form the qubit basis [Fig. 1(b)], while a second, higher-energy doublet provides auxiliary states $|\\!\uparrow_{a}\,\rangle$, $|\\!\downarrow_{a}\,\rangle$ critical for both qubit control and measurement Hays et al. (2020). To manipulate the Andreev spin qubit, we use the qubit states $|\\!\downarrow_{q}\,\rangle,|\\!\uparrow_{q}\,\rangle$ in conjunction with $|\\!\uparrow_{a}\,\rangle$ as a $\Lambda$ system. We apply simultaneous microwave drives to both the $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition (drive frequency $f_{\uparrow}$) and $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ (drive frequency $f_{\downarrow}$). By equally detuning the two drives from their respective transitions, a Raman process is induced such that the $\\{|\\!\downarrow_{q}\,\rangle,|\\!\uparrow_{q}\,\rangle\\}$ manifold can be coherently manipulated while $|\\!\uparrow_{a}\,\rangle$ remains minimally populated.

The success of the Raman process is contingent on our ability to drive both the spin-conserving transition $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and the spin-flipping transition $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$. While spin-orbit hybridization is necessary to enable electric-field-induced spin-flips Nadj-Perge et al. (2010), in our situation a broken spatial symmetry of the Josephson nanowire is also required (see Supplementary Information for details). In this experiment, our hexagonal nanowire was made of [001] wurtzite indium arsenide grown by molecular beam epitaxy. Such a nanowire lying alone on a substrate would possess a transverse mirror symmetry [Fig. 1(c)]; this property would then be inherited by the levels of the nanowire such that one spin state of each doublet would be mirror-symmetric and the other anti-symmetric. Since we apply the microwave drive voltage across the weak link, the rf electric field respects the mirror symmetry (it points along the nanowire) and therefore cannot flip spin. In the device used in this work, the mirror symmetry is broken by both the superconducting leads and the electrostatic gates [Fig. 1(c)], as well as any symmetry-breaking disorder present in the nanowire. The superconducting leads consist of $10~{}\mathrm{nm}$-thick epitaxial aluminum, of which a $500~{}\mathrm{nm}$ length was removed to form the weak link. The aluminum only covers two of six nanowire facets, thereby breaking the mirror symmetry of the nanowire-substrate system. As the gates are fabricated on one side of the nanowire, they also break the mirror symmetry.
Both the cutter and plunger gates were used to tune the transparency of the weak link and were biased to $V_{\mathrm{g,c}}=-71.9~{}\mathrm{mV}$ and $V_{\mathrm{g,p}}=4.0~{}\mathrm{mV}$ respectively, unless otherwise noted (see Supplementary Information for system tune-up). With the mirror symmetry broken, the drive may induce spin flips and a Raman process can be used for coherent spin manipulation.

We detect the state of the Andreev spin qubit by embedding the Josephson nanowire in a cQED architecture [Fig. 1(d)]. As we previously demonstrated Hays et al. (2020), the effect of spin-orbit coupling on both the inter-doublet transition spectrum and supercurrent can be harnessed to achieve a spin-dependent dispersive shift of the frequency of a superconducting microwave resonator. Following that work, we detect the quasiparticle spin state by measuring the resonator response to a resonant probe tone [Fig. 1(e)]. The complex amplitude $\Gamma=I+iQ$ of the reflected tone clustered into three distributions, corresponding to the two spin states $|\\!\downarrow_{q}\,\rangle$, $|\\!\uparrow_{q}\,\rangle$ of a trapped quasiparticle as well as the ground state $|g\rangle$ of the junction where no quasiparticle was present. Throughout this work, we present data in terms of spin state occupation probabilities $P_{\uparrow},P_{\downarrow}$, which we compute based on the thresholds displayed in Fig. 1(e).

The Andreev spin qubit exists exclusively when a quasiparticle is stochastically trapped in the Josephson nanowire (see Supplementary Information for a quantum jumps trace). For the bias conditions presented in this work, we found that a trapped quasiparticle occupied the two spin states of the lowest doublet with roughly equal probability. Thus, under any coherent manipulation $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{q}\,\rangle$, the observed spin state populations would not change. Throughout this work, we overcame this problem by initializing the quasiparticle in $|\\!\uparrow_{q}\,\rangle$ via an initial readout pulse and post-selection (see Supplementary Information for the same measurements with $|\\!\downarrow_{q}\,\rangle$ post-selection). Our single-shot spin readout was thus critical to our observation of coherent population transfer between $|\\!\uparrow_{q}\,\rangle$ and $|\\!\downarrow_{q}\,\rangle$.
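To make the state-assignment step concrete, the sketch below maps simulated $(I,Q)$ records to the three outcomes with rectangular thresholds and applies the post-selection described above. The thresholds, cluster geometry, and random placeholder data are hypothetical stand-ins for the calibrated values; this is not the analysis code used in this work.

```python
import numpy as np

# Placeholder (I, Q) shots for a first (preparation) and final readout.
rng = np.random.default_rng(0)
shots_first = rng.normal(size=(10000, 2))
shots_final = rng.normal(size=(10000, 2))
i_th, q_th = 0.0, 0.0            # hypothetical thresholds (the dotted lines of Fig. 1(e))

def assign(iq):
    """Assign each (I, Q) pair to 'g', 'up', or 'down' by rectangular thresholds."""
    i, q = iq[:, 0], iq[:, 1]
    return np.where(q > q_th, 'g', np.where(i > i_th, 'up', 'down'))

first, final = assign(shots_first), assign(shots_final)
post_selected = final[first == 'up']         # keep shots prepared in |up_q>
P_down = np.mean(post_selected == 'down')    # spin-flip probability after the pulse
print(f"P_down after post-selection: {P_down:.3f}")
```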
Figure 2: Raman transitions of a trapped quasiparticle. (a) In a two-tone measurement, the Josephson nanowire was first driven by a $1~{}\mathrm{\mu}\mathrm{s}$ saturation pulse (gray) of variable carrier frequency $f_{d}$ before the quasiparticle state was determined with a readout pulse (maroon). A dip is observed in $P_{\uparrow}$ corresponding to the $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition and in $P_{\downarrow}$ corresponding to the $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition [see Fig. S5 for $\Phi$-dependence]. (b) The quasiparticle was first prepared in $|\\!\uparrow_{q}\,\rangle$ via an initial readout pulse and post-selection. Simultaneous Gaussian pulses ($235~{}\mathrm{ns}$ full width at half maximum, 30 dB more power than used in (a)) with variable frequencies $f_{\downarrow},f_{\uparrow}$ were then applied, followed by a final readout pulse. The observed peak in the final $|\\!\downarrow_{q}\,\rangle$ population lies along $f_{\downarrow}=f_{\uparrow}+609~{}\mathrm{MHz}$ (black dashed line). (c) Full $\Gamma$ histograms of the final readout pulse for the two subsets of measurements enclosed by the gray and black solid lines in (b). Data accrued in the region enclosed by the gray line shows little population transfer from the post-selected $|\\!\uparrow_{q}\,\rangle$ (left panel), while data in the region enclosed by the black line shows significant population transfer to $|\\!\downarrow_{q}\,\rangle$ (right panel).

The first step in driving the Raman process [Fig. 1(b)] was locating the two transitions that defined the $\Lambda$ system: $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$. After breaking spin degeneracy with $\Phi=-0.10\Phi_{0}$ and tuning the transitions to a local maximum in $V_{\mathrm{g,c}}$ to mitigate charge noise (see Supplementary Information), we measured the spectrum shown in Fig. 2(a) using two-tone spectroscopy, without spin initialization. The dip in $P_{\uparrow}$ at $13.000~{}\mathrm{GHz}$ corresponds to the drive coming into resonance with the $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition, resulting in population transfer out of $|\\!\uparrow_{q}\,\rangle$ and into $|\\!\uparrow_{a}\,\rangle$. Similarly, the dip in $P_{\downarrow}$ at $13.684~{}\mathrm{GHz}$ corresponds to the $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition (see Supplementary Information for gate voltage and $\Phi$ dependence). Taking the difference yields the spin splitting $\epsilon_{\mathrm{s}}/h=684~{}\mathrm{MHz}$.

Having characterized the $\Lambda$ system, we then investigated two-photon Raman transitions of the trapped quasiparticle. After initializing the quasiparticle in $|\\!\uparrow_{q}\,\rangle$, we applied two simultaneous Gaussian pulses with variable respective carrier frequencies $f_{\uparrow}$ and $f_{\downarrow}$ and then measured the final qubit spin state [Fig. 2(b)]. Throughout the main text, we present data with $f_{\uparrow}$ and $f_{\downarrow}$ blue-detuned from $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ respectively (see Supplementary Information for data over a wider frequency range). Along a line given by $f_{\downarrow}=f_{\uparrow}+609~{}\mathrm{MHz}$, we observe increased $|\\!\downarrow_{q}\,\rangle$ population that we attribute to the onset of a Raman process. As expected for Raman transitions, the slope of this line is equal to one, since a shift of one drive frequency must be compensated by an equal shift of the other. The discrepancy between the spin splitting $\epsilon_{\mathrm{s}}/h=684~{}\mathrm{MHz}$ and the $609~{}\mathrm{MHz}$ offset was due to an uncontrolled shift of the Andreev spectrum that occurred in between the measurements shown in Fig. 2(a) and Fig. 2(b) (see Supplementary Information).

To further illustrate the dynamics of the quasiparticle under the Raman transitions, we histogram $\Gamma$ for data points off/on resonance with the Raman process [Fig. 2(c)]. Off resonance, the quasiparticle was found predominantly in $|\\!\uparrow_{q}\,\rangle$, as expected from post-selection on the initial readout pulse. On resonance, there was significant population transfer to $|\\!\downarrow_{q}\,\rangle$ as desired, as well as a small population transfer to $|g\rangle$. This is due to drive-induced quasiparticle de-trapping, which we comment on further below.
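As a quick consistency check, the perturbative relation $\epsilon_{\mathrm{s}}=I_{\mathrm{s}}\varphi\,\Phi_{0}/2\pi$ quoted earlier can be inverted to estimate the spin-dependent supercurrent from the measured splitting. The sketch below does this with the numbers above; it is only a rough estimate, since the relation holds perturbatively near $\varphi=0$.

```python
import numpy as np
from scipy.constants import h, e

Phi_0 = h / (2 * e)                  # superconducting flux quantum
eps_s = h * 684e6                    # measured spin splitting eps_s/h = 684 MHz, in joules
phi = 2 * np.pi * 0.10               # phase-bias magnitude at Phi = -0.10 Phi_0

# Invert eps_s = I_s * phi * Phi_0 / (2*pi) for the spin-dependent supercurrent.
I_s = 2 * np.pi * eps_s / (phi * Phi_0)
print(f"I_s ~ {I_s * 1e9:.1f} nA")   # ~2.2 nA of spin-dependent supercurrent
```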
Figure 3: Coherent $\Lambda$-Rabi oscillations of the quasiparticle spin at $f_{\downarrow}=13.280~{}\mathrm{GHz}$ and $f_{\uparrow}=13.964~{}\mathrm{GHz}$. (a) Independently varying the amplitudes $A_{\uparrow}$ and $A_{\downarrow}$ of the simultaneous Gaussian drive pulses ($94~{}\mathrm{ns}$ full width at half maximum) resulted in coherent oscillations between $|\\!\uparrow_{q}\,\rangle$ and $|\\!\downarrow_{q}\,\rangle$ characteristic of a Raman process. The oscillations are only present away from $A_{\uparrow},A_{\downarrow}=0$, and are symmetric under sign flips of $A_{\downarrow},A_{\uparrow}$. (b) Simulated dynamics of the quasiparticle under the action of the drive pulse. The reduced contrast observed in (a) is taken into account using the measured readout fidelities.

Finally, having induced spin population transfer using Raman transitions, we demonstrate the first coherent manipulation of the spin of an individual quasiparticle. We first chose our Raman drive frequencies using the same measurement as shown in Fig. 2, but with shorter pulses ($94~{}\mathrm{ns}$ full width at half maximum): we detuned $f_{\downarrow}$ by $280~{}\mathrm{MHz}$ from $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and varied $f_{\uparrow}$ until we observed maximum spin population transfer. We then varied the amplitudes $A_{\uparrow},A_{\downarrow}$ of the two Gaussian pulses before determining the final quasiparticle state [Fig. 3(a)]. The observed oscillations in the population difference between the two spin states are characteristic of a coherent Raman process. Qualitatively, when either $A_{\downarrow}=0$ or $A_{\uparrow}=0$ there is no population transfer because both drives are required to induce the Raman process. As the amplitudes of both drives are increased (roughly along the diagonals $|A_{\downarrow}|\simeq|A_{\uparrow}|$), the spin population difference undergoes coherent oscillations. As expected, the data are symmetric under $A_{\downarrow}\rightarrow-A_{\downarrow}$ and $A_{\uparrow}\rightarrow-A_{\uparrow}$. Quantitatively, the data are well-represented by a simulation of the coherent quasiparticle dynamics under the action of the drive pulses [Fig. 3(b)]. Details of this numerical simulation can be found in the Supplementary Information.

Using a Lindblad master equation Johansson et al. (2013) we calculated the dynamics of the quasiparticle between the two Andreev doublets, with the inter-doublet transition frequencies and dephasing rates, spin dephasing rate, state-dependent readout fidelities, and pulse frequencies and envelopes fixed to values determined by independent measurements and instrument settings. We then fit the simulation to the measured data by varying the four inter-doublet transition matrix elements [Fig. 1(d)], as well as a slight detuning from the Raman resonance condition, which we found to be $5.5\pm 0.1~{}\mathrm{MHz}$. We also included a phenomenological drive-induced quasiparticle de-trapping rate ($10.8\pm 0.9~{}\mathrm{MHz}$ at $|A_{\downarrow}|=|A_{\uparrow}|=1$) to capture the measured increase of $|\\!\downarrow_{q}\,\rangle,|\\!\uparrow_{q}\,\rangle\rightarrow|g\rangle$ for larger drive powers. We were thus able to capture the measured Raman spin dynamics of a quasiparticle trapped in the Andreev levels of a Josephson nanowire.
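To reproduce the flavor of such a calculation, the sketch below evolves a minimal three-level $\Lambda$ system under two simultaneous Gaussian drives with QuTiP's `mesolve`. The rotating frame, the sign conventions, and the omission of the fourth level and of all dissipation are simplifying assumptions (parameter values are taken from the fit quoted in the Supplementary Information), so this is an illustration rather than the simulation behind Fig. 3(b).

```python
import numpy as np
import qutip as qt

# Basis: |down_q>, |up_q>, |up_a> (the Lambda system of Fig. 1(b)).
dn, up, aux = (qt.basis(3, i) for i in range(3))

Delta = 2 * np.pi * 290e6     # common one-photon detuning from |up_a> (sign convention assumed)
delta = 2 * np.pi * 5.5e6     # two-photon detuning from the Raman resonance
Om_dn = 2 * np.pi * 255e6     # peak Rabi rates: stand-ins for the fitted matrix
Om_up = 2 * np.pi * 232e6     # elements M_{down,up} and M_{up,up} at unit amplitude

sig = 40e-9                   # Gaussian envelope, 40 ns standard deviation
t = np.linspace(0.0, 8 * sig, 801)

def env(t, args):             # shared pulse envelope, centered at 4*sig
    return np.exp(-((t - 4 * sig) ** 2) / (2 * sig ** 2))

H = [
    Delta * aux * aux.dag() + delta * up * up.dag(),          # frame detunings
    [0.5 * Om_dn * (aux * dn.dag() + dn * aux.dag()), env],   # drive on down <-> aux
    [0.5 * Om_up * (aux * up.dag() + up * aux.dag()), env],   # drive on up   <-> aux
]

# Start post-selected in |up_q>; track the populations of all three levels.
out = qt.mesolve(H, up, t, c_ops=[], e_ops=[s * s.dag() for s in (dn, up, aux)])
print(f"final P_down = {out.expect[0][-1]:.2f}")
```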
Figure 4: Coherence decay of the quasiparticle spin ($V_{\mathrm{g,c}}=-59.1~{}\mathrm{mV},~{}V_{\mathrm{g,p}}=-33.3~{}\mathrm{mV},\Phi=-0.115\Phi_{0}$). Ramsey (a) and Hahn-echo (b) experiments reveal $T_{2R}=18\pm 1~{}\mathrm{ns}$ and $T_{2E}=52\pm 3~{}\mathrm{ns}$, respectively. Solid lines indicate fits to the data (see main text). Oscillations were introduced in both cases by adding a phase proportional to $\tau$ to the final Raman pulse.

With the ability to perform coherent spin manipulation in hand, we then characterized the coherence lifetime of an Andreev spin. A Ramsey measurement [Fig. 4(a)] revealed spin coherence decay with a timescale $T_{2R}=18\pm 1~{}\mathrm{ns}$, while a Hahn-echo pulse sequence [Fig. 4(b)] resulted in a slightly longer timescale $T_{2E}=52\pm 3~{}\mathrm{ns}$. Both measurements were well-described by a decay envelope $\exp[-(\tau/T_{2})^{1+\alpha}]$ with $\alpha=0.3\pm 0.1$, indicative of excess low-frequency components compared to a white noise spectrum, where $\alpha$ would be zero. The observed oscillations in $P_{\uparrow},P_{\downarrow}$ are centered about a lower value in Fig. 4(b) as compared to 4(a), which we attribute to additional quasiparticle de-trapping $|\\!\downarrow_{q}\,\rangle,|\\!\uparrow_{q}\,\rangle\rightarrow|g\rangle$ caused by the echo pulse.
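A least-squares fit of this decay envelope can be sketched as follows; the synthetic data, initial guesses, and bounds are placeholders rather than the measured records, and the oscillation term simply mimics the phase-swept final Raman pulse.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(tau, A, B, T2, alpha, f, phi0):
    """Damped oscillation with envelope exp[-(tau/T2)^(1+alpha)]; tau, T2 in ns."""
    env = np.exp(-(tau / T2) ** (1 + alpha))
    return A * env * np.cos(2 * np.pi * f * tau + phi0) + B

# Placeholder "data": synthetic points near the quoted echo values.
rng = np.random.default_rng(1)
tau_ns = np.linspace(0, 150, 76)
P_up = model(tau_ns, 0.4, 0.5, 52.0, 0.3, 0.05, 0.0)
P_up += 0.02 * rng.normal(size=tau_ns.size)

p0 = [0.4, 0.5, 40.0, 0.3, 0.05, 0.0]
bounds = ([0, 0, 1, -0.5, 0, -np.pi], [1, 1, 500, 2, 1, np.pi])
popt, pcov = curve_fit(model, tau_ns, P_up, p0=p0, bounds=bounds)
print(f"T2 = {popt[2]:.0f} ns, alpha = {popt[3]:.2f}")
```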
Both the observed Ramsey and Hahn-echo coherence times are comparable to that of the spin-orbit qubit Nadj-Perge et al. (2010); Petersson et al. (2012); Van den Berg et al. (2013), the closest cousin of the Andreev spin qubit in that it consists of the spin-orbit hybridized pseudospin of a single electron. However, because here the quasiparticle was trapped in Andreev levels, we possessed a different experimental lens with which to investigate the effects of the environment on the spin coherence. As the Andreev levels of a Josephson nanowire are tunable via both electrostatic voltages and flux, we first suspected charge or flux noise as the source limiting the Andreev spin qubit coherence. However, we found that neither $T_{2R}$ nor $T_{2E}$ varied with $V_{\mathrm{g,c}}$ around the sweet-spot bias point (see Supplementary Information). This indicated that the spin coherence was not limited by charge noise, consistent with the observed weak dependence of $\epsilon_{\mathrm{s}}$ on $V_{\mathrm{g,c}}$. By comparing to the charge-noise-limited inter-doublet transitions, we extracted a lower bound of $4.2~{}\mathrm{\mu}\mathrm{s}$ on the charge-noise-induced dephasing time of the quasiparticle spin (see Supplementary Information). Moreover, the spin coherence time was not limited by flux noise, as we found no measurable dependence on $\Phi$.

To better understand what was limiting the coherence of the Andreev levels, we additionally measured the coherence lifetimes of both inter-doublet transitions and so-called “pair transitions” at several gate bias points [Tab. 1]. The latter correspond to the excitation of two quasiparticles out of the condensate into both levels of a doublet Bretheau et al. (2013); Janvier et al. (2015); Hays et al. (2018); Tosi et al. (2019). We found that the pair transition coherence times were systematically an order of magnitude longer than inter-doublet transition coherence times. To first order, perturbations that couple to spin (such as a Zeeman field) result in equal and opposite energy shifts of the two doublet levels Tosi et al. (2019). As such, these perturbations do not change the frequency of the doublet pair transition, and therefore do not cause dephasing. However, such spin-specific perturbations do induce dephasing of both inter-doublet transitions and the Andreev spin qubit. It thus appears that the coherence lifetime of the Andreev spin qubit is limited by a spin-specific noise source such as hyperfine interactions with the spinful nuclei of indium and arsenic (though nuclear baths typically fluctuate at lower frequencies than the measured ratio $T_{2E}/T_{2R}=2.9$ and the decay envelope would indicate Malinowski et al. (2017)), phonon-induced fluctuations of the nanowire spin-orbit coupling, or noisy paramagnetic impurities on the surface of the nanowire Hanson et al. (2007).

Transition | $\boldsymbol{V_{\mathrm{g,c}}~{}(\mathrm{mV})}$ | $\boldsymbol{V_{\mathrm{g,p}}~{}(\mathrm{mV})}$ | $\boldsymbol{T_{2R}~{}(\mathrm{ns})}$ | $\boldsymbol{T_{2E}~{}(\mathrm{ns})}$
---|---|---|---|---
$|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ | -166.3 | -127.6 | n.m. | $39\pm 8$
pair | -164.9 | -127.6 | $38\pm 9$ | $257\pm 9$
$|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ | 144.5 | 24.9 | n.m. | $9\pm 1$
pair | 116.2 | 23.8 | $11\pm 1$ | $420\pm 20$
$|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ | 32.7 | -4.7 | n.m. | $11\pm 3$
pair | 30.0 | -4.7 | $14\pm 2$ | $490\pm 10$

TAB. 1: Coherence lifetimes of Andreev transitions at various bias points. Ramsey coherence times for the inter-doublet transitions were not measured (n.m.) as they were too short.

In this work, we have demonstrated the first coherent manipulation of an Andreev spin qubit by driving Raman transitions of a single quasiparticle spin. In future experiments, the fidelity of this process may be improved by the implementation of a resonant STIRAP protocol Cerrillo et al. (2020), which would reduce the necessary pulse amplitude and length, thereby mitigating the effects of dephasing and quasiparticle de-trapping. The parity dynamics (the stochastic trapping and untrapping of quasiparticles) may also be suppressed by improved filtering of high-frequency radiation Serniak et al. (2019). In addition, engineering of the nanowire mirror symmetry could result in larger spin-flip drive matrix elements without sacrificing the spin lifetime. Critical to the demonstration of spin manipulation was our ability to perform single-shot, cQED readout, which inherently relies on substantial coupling between the Josephson nanowire and a microwave resonator. In future experiments, this coupling could be used to achieve long-range interaction between separate qubits, a worthy goal in spin qubit research Petersson et al. (2012); Mi et al. (2018); Samkharadze et al. (2018); Landig et al. (2018); Borjans et al. (2020). The nature of the spinful bath limiting the quasiparticle spin coherence may be elucidated by the application of magnetic field Hanson et al. (2007), which could also push the Josephson nanowire through a topological phase transition Fu and Kane (2008); Lutchyn et al. (2010); Oreg et al. (2010); Mourik et al. (2012); Deng et al. (2016). In general, a detailed understanding of the effect of magnetic field on quasiparticle dynamics will be critical for further progress.

We thank Gijs de Lange for assistance with device design, and thank Nick Frattini and Vladimir Sivak for providing us with a SNAIL parametric amplifier. We are grateful to Marcelo Goffman, Cyril Metzger, Hugues Pothier, Leandro Tosi, and Cristián Urbina for sharing their experimental results and hypotheses. We acknowledge useful discussions with Nick Frattini, Luigi Frunzio, Leonid Glazman, Manuel Houzet, Pavel Kurilovich, Vlad Kurilovich, and Charles Marcus.
This research was supported by the US Office of Naval Research (N00014-16-1-2270) and by the US Army Research Office (W911NF-18-1-0020, W911NF-18-1-0212 and W911NF-16-1-0349). D.B. acknowledges support by Netherlands Organisation for Scientific Research (NWO) and Microsoft Corporation Station Q. J.C. acknowledges the support from MICINN (Spain) (“Beatriz Galindo” Fellowship BEAGAL18/00081). J.N. acknowledges support from the Danish National Research Foundation. Some of the authors acknowledge the European Union’s Horizon 2020 research and innovation programme for financial support: A.G. received funding from the European Research Council, grant no. 804988 (SiMS), and A.G., A.L.Y., J.C., and J.N. further acknowledge grant no. 828948 (AndQC) and QuantERA project no. 127900 (SuperTOP). A.L.Y. acknowledges support by Spanish MICINN through grants FIS2017-84860-R and through the “María de Maeztu” Programme for Units of Excellence in R&D (Grant No. MDM-2014-0377).

Contributions M.H., V.F., K.S., D.B., T.C., A.G., and M.D. designed the experimental setup. P.K. and J.N. developed the nanowire materials. D.B. and A.G. fabricated the device. M.H. and V.F. performed the measurements. V.F., M.H., J.C. and A.L.Y. developed the symmetry analysis and microscopic modeling. M.H., J.C., V.F., and A.L.Y. developed and performed the Raman simulations. M.H., V.F., K.S., S.D., and M.D. analyzed the data. M.H., V.F., and M.D. wrote the manuscript with feedback from all authors.

## References

* Beenakker and Van Houten (1991) C. Beenakker and H. Van Houten, Phys. Rev. Lett. 66, 3056 (1991).
* Furusaki and Tsukada (1991) A. Furusaki and M. Tsukada, Phys. Rev. B 43, 10164 (1991).
* Clarke and Braginski (2004) J. Clarke and A. I. Braginski, eds., _The SQUID Handbook_, vol. 1 (Wiley, Weinheim, 2004).
* Devoret and Schoelkopf (2013) M. H. Devoret and R. J. Schoelkopf, Science 339, 1169 (2013).
* Roy and Devoret (2016) A. Roy and M. Devoret, Comptes Rendus Physique 17, 740 (2016).
* Chtchelkatchev and Nazarov (2003) N. M. Chtchelkatchev and Y. V. Nazarov, Phys. Rev. Lett. 90, 226806 (2003).
* Padurariu and Nazarov (2010) C. Padurariu and Y. V. Nazarov, Phys. Rev. B 81, 144519 (2010).
* Reynoso et al. (2012) A. A. Reynoso, G. Usaj, C. A. Balseiro, D. Feinberg, and M. Avignon, Phys. Rev. B 86, 214519 (2012).
* Park and Yeyati (2017) S. Park and A. L. Yeyati, Phys. Rev. B 96, 125416 (2017).
* Hanson et al. (2007) R. Hanson, L. P. Kouwenhoven, J. R. Petta, S. Tarucha, and L. M. K. Vandersypen, Rev. Mod. Phys. 79, 1217 (2007).
* Childress and Hanson (2013) L. Childress and R. Hanson, MRS Bull. 38, 134 (2013).
* Petersson et al. (2012) K. D. Petersson, L. W. McFaul, M. D. Schroer, M. Jung, J. M. Taylor, A. A. Houck, and J. R. Petta, Nature 490, 380 (2012).
* Samkharadze et al. (2018) N. Samkharadze, G. Zheng, N. Kalhor, D. Brousse, A. Sammak, U. Mendes, A. Blais, G. Scappucci, and L. Vandersypen, Science 359, 1123 (2018).
* Mi et al. (2018) X. Mi, M. Benito, S. Putz, D. M. Zajac, J. M. Taylor, G. Burkard, and J. R. Petta, Nature 555, 599 (2018).
* Harvey et al. (2018) S. P. Harvey, C. G. L. Bøttcher, L. A. Orona, S. D. Bartlett, A. C. Doherty, and A. Yacoby, Phys. Rev. B 97, 235409 (2018).
* Landig et al. (2018) A. J. Landig, J. V. Koski, P. Scarlino, U. Mendes, A. Blais, C. Reichl, W. Wegscheider, A. Wallraff, K. Ensslin, and T. Ihn, Nature 560, 179 (2018).
* Cubaynes et al. (2019) T. Cubaynes, M. R. Delbecq, M. C. Dartiailh, R. Assouly, M. M. Desjardins, L. C. Contamin, L. E. Bruhat, Z. Leghtas, F. Mallet, A.
Cottet, et al., npj Quantum Inf. 5, 1 (2019).
* Borjans et al. (2020) F. Borjans, X. Croot, X. Mi, M. Gullans, and J. Petta, Nature 577, 195 (2020).
* Blais et al. (2004) A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Phys. Rev. A 69, 062320 (2004).
* Wallraff et al. (2004) A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S. Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J. Schoelkopf, Nature 431, 162 (2004).
* Janvier et al. (2015) C. Janvier, L. Tosi, L. Bretheau, Ç. Ö. Girit, M. Stern, P. Bertet, P. Joyez, D. Vion, D. Esteve, M. F. Goffman, et al., Science 349, 1199 (2015).
* Hays et al. (2018) M. Hays, G. de Lange, K. Serniak, D. van Woerkom, D. Bouman, P. Krogstrup, J. Nygård, A. Geresdi, and M. Devoret, Phys. Rev. Lett. 121, 047001 (2018).
* Krogstrup et al. (2015) P. Krogstrup, N. L. B. Ziino, W. Chang, S. M. Albrecht, M. H. Madsen, E. Johnson, J. Nygård, C. M. Marcus, and T. S. Jespersen, Nat. Mater. 14, 400 (2015).
* Chang et al. (2015) W. Chang, S. Albrecht, T. Jespersen, F. Kuemmeth, P. Krogstrup, J. Nygård, and C. M. Marcus, Nat. Nanotechnol. 10, 232 (2015).
* van Woerkom et al. (2017) D. J. van Woerkom, A. Proutski, B. van Heck, D. Bouman, J. I. Väyrynen, L. I. Glazman, P. Krogstrup, J. Nygård, L. P. Kouwenhoven, and A. Geresdi, Nat. Phys. 13, 876 (2017).
* Tosi et al. (2019) L. Tosi, C. Metzger, M. Goffman, C. Urbina, H. Pothier, S. Park, A. L. Yeyati, J. Nygård, and P. Krogstrup, Phys. Rev. X 9, 011010 (2019).
* Larsen et al. (2015) T. W. Larsen, K. D. Petersson, F. Kuemmeth, T. S. Jespersen, P. Krogstrup, J. Nygård, and C. M. Marcus, Phys. Rev. Lett. 115, 127001 (2015).
* De Lange et al. (2015) G. De Lange, B. Van Heck, A. Bruno, D. Van Woerkom, A. Geresdi, S. Plissard, E. Bakkers, A. Akhmerov, and L. DiCarlo, Phys. Rev. Lett. 115, 127002 (2015).
* Fu and Kane (2008) L. Fu and C. L. Kane, Phys. Rev. Lett. 100, 096407 (2008).
* Lutchyn et al. (2010) R. M. Lutchyn, J. D. Sau, and S. Das Sarma, Phys. Rev. Lett. 105, 077001 (2010).
* Oreg et al. (2010) Y. Oreg, G. Refael, and F. von Oppen, Phys. Rev. Lett. 105, 177002 (2010).
* Mourik et al. (2012) V. Mourik, K. Zuo, S. M. Frolov, S. R. Plissard, E. P. A. M. Bakkers, and L. P. Kouwenhoven, Science 336, 1003 (2012).
* Deng et al. (2016) M. Deng, S. Vaitiekėnas, E. B. Hansen, J. Danon, M. Leijnse, K. Flensberg, J. Nygård, P. Krogstrup, and C. M. Marcus, Science 354, 1557 (2016).
* Hays et al. (2020) M. Hays, V. Fatemi, K. Serniak, D. Bouman, S. Diamond, G. de Lange, P. Krogstrup, J. Nygård, A. Geresdi, and M. Devoret, Nat. Phys. 16, 1103 (2020).
* Nadj-Perge et al. (2010) S. Nadj-Perge, S. Frolov, E. Bakkers, and L. P. Kouwenhoven, Nature 468, 1084 (2010).
* Johansson et al. (2013) J. R. Johansson, P. D. Nation, and F. Nori, Computer Physics Communications 184, 1234 (2013).
* Van den Berg et al. (2013) J. Van den Berg, S. Nadj-Perge, V. Pribiag, S. Plissard, E. Bakkers, S. Frolov, and L. Kouwenhoven, Phys. Rev. Lett. 110, 066806 (2013).
* Bretheau et al. (2013) L. Bretheau, Ç. Girit, H. Pothier, D. Esteve, and C. Urbina, Nature 499, 312 (2013).
* Malinowski et al. (2017) F. K. Malinowski, F. Martins, P. D. Nissen, E. Barnes, Ł. Cywiński, M. S. Rudner, S. Fallahi, G. C. Gardner, M. J. Manfra, C. M. Marcus, et al., Nat. Nanotechnol. 12, 16 (2017).
* Cerrillo et al. (2020) J. Cerrillo, M. Hays, V. Fatemi, and A. Levy Yeyati, arXiv:2012.07132 (2020).
* Serniak et al. (2019) K. Serniak, S. Diamond, M. Hays, V. Fatemi, S. Shankar, L. Frunzio, R. Schoelkopf, and M. Devoret, Phys. Rev.
Applied 12, 014052 (2019).
* Frattini et al. (2018) N. E. Frattini, V. V. Sivak, A. Lingenfelter, S. Shankar, and M. H. Devoret, Phys. Rev. Applied 10, 054020 (2018).
* Governale and Zülicke (2002) M. Governale and U. Zülicke, Phys. Rev. B 66, 073311 (2002).
* Levenson-Falk et al. (2014) E. Levenson-Falk, F. Kos, R. Vijay, L. Glazman, and I. Siddiqi, Phys. Rev. Lett. 112, 047002 (2014).
* Martinis et al. (2003) J. M. Martinis, S. Nam, J. Aumentado, K. M. Lang, and C. Urbina, Phys. Rev. B 67, 094510 (2003).
* Houck et al. (2009) A. A. Houck, J. Koch, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Quantum Information Processing 8, 105 (2009).
* Press (2002) W. Press, _Numerical Recipes in C++: The Art of Scientific Computing_ (Cambridge University Press, 2002).

Supplementary Information

## 1 I. Symmetry Analysis Of Pseudospin-Flip Matrix Elements

In this section we motivate a simple physical picture to explain the qualitative features of the driven spin-flipping transitions. First, note that while an electron spin does in principle couple to the magnetic field of our microwave drive, this coupling is extremely weak. The electric field of the drive, on the other hand, only couples to motional degrees of freedom, so it cannot induce spin-flip transitions if spin is a good quantum number. Spin-orbit coupling is the conventional invocation to solve this problem since it hybridizes spin and translational wavefunction components into what is often referred to as pseudospin Nadj-Perge et al. (2010). We will use symmetry considerations to show that this conversion to pseudospin is necessary but insufficient to allow electric fields to flip the quasiparticle pseudospin in our system, and that an additional broken spatial symmetry is required. Following this we further validate these ideas by inspection of a tight-binding model that specifically incorporates the physics and energy scales of Andreev levels.

### 1.1 A. General Considerations

We are interested in how transitions between the Andreev levels of the Josephson nanowire are induced by our microwave drive voltage $V_{d}\cos\omega_{d}t$. Here $\omega_{d}$ is the drive frequency and $V_{d}$ the spatial profile. Initially, we will model the spatial profile as a purely longitudinal differential voltage along the Josephson nanowire $V_{d}(x)\approx-V_{d}(-x)$, which is a reasonable starting point given our highly symmetric device design [Fig. S3]. The transition rates will depend both on the matrix element of $V_{d}$ between the initial/final states, as well as on the mismatch between $\hbar\omega_{d}$ and the energy difference between the initial/final states. We first focus on matrix element considerations, initially assuming that $\Phi=0$ such that the system is time-reversal symmetric, before generalizing to any value of $\Phi$.

Let’s begin by imagining the Josephson nanowire as a quasi-1d system described by a Hamiltonian $H_{0}$. This Hamiltonian includes both the superconducting leads and the semiconductor nanowire, though for the moment we neglect spin-orbit coupling. Additionally, let us suppose that the system is rotationally invariant about the $x$-axis. As with any spin-1/2 system with time-reversal symmetry, the energy levels are paired into spin-degenerate doublets (Kramers theorem). In order to achieve the Raman process investigated in this experiment, it was necessary to simultaneously drive spin-conserving and spin-flipping inter-doublet transitions.
Under our current model of the system, while $V_{d}$ can induce spin-conserving inter-doublet transitions by coupling to the spatial character of the wavefunctions, it cannot induce spin-flipping transitions because both $H_{0}$ and $V_{d}$ are block-diagonal in spin (they are spin-rotation-symmetric). Spin-orbit coupling can help solve this problem by mixing spin and motional degrees of freedom. We can see this in the form of a Rashba interaction generated by a static electric field in the $z$ direction $H_{R}=iE_{z}\gamma(\sigma_{x}\partial_{y}-\sigma_{y}\partial_{x})$, where $\sigma_{i}$ are Pauli matrices of the spin and $\gamma$ is a material parameter. Thus, in the presence of spin-orbit interaction, the quasiparticle “spin” is actually a pseudospin. We stress that a hybridization like this must be present in our system, as it is critical to break the Andreev-doublet degeneracy Governale and Zülicke (2002); Reynoso et al. (2012); Tosi et al. (2019).

Upon including a Rashba interaction, the Hamiltonian $H=H_{0}+H_{R}$ is no longer block-diagonal in spin, and one might imagine that $V_{d}$ could flip pseudospin. However, a selection rule prevents this. While the static electric field in the $z$ direction associated with the Rashba effect breaks rotational symmetry, a transverse mirror symmetry in the $y$ direction remains [see main text Fig. 1(c)]. This mirror symmetry is described by the operator $M_{y}=-i\sigma_{y}\delta_{y,-y}$, where $\delta_{y,-y}$ sends $y$ to $-y$. Because $[H,M_{y}]=0$, the energy eigenstates are also mirror eigenstates. Moreover, as spin is flipped under time-reversal $T\sigma_{y}T^{-1}=-\sigma_{y}$, the mirror eigenvalue is also flipped. Therefore, the pair of pseudospins comprising each doublet have mirror eigenvalues $+i$ and $-i$ (this remains true in the presence of a time-reversal-symmetry-breaking phase bias, as explained below). As our longitudinal drive $V_{d}(x)\approx-V_{d}(-x)$ also obeys the mirror symmetry $[V_{d},M_{y}]=0$, it cannot induce transitions between states of different mirror eigenvalue (pseudospin flips).

To induce pseudospin-flip transitions, the mirror symmetry must be broken. In the real device, this symmetry is broken by the epitaxial aluminum shell which covers two of six nanowire facets, the presence of the side gates and their applied voltages [main text Fig. 1(c)], as well as any non-idealities of the device. As such, the symmetry may be broken in any/all of the Hamiltonian terms:

* $[H_{0},M_{y}]\neq 0$: asymmetric superconducting leads, gate-electrode perturbations to the transverse confinement
* $[H_{R},M_{y}]\neq 0$: modified Rashba interaction due to non-symmetric electric field profiles
* $[V_{d},M_{y}]\neq 0$: drive applied via asymmetric leads and perturbed by presence of the metallic gate

Under this broken mirror symmetry, both the pseudospin-conserving transition $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and the pseudospin-flipping transition $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ are allowed and therefore the Raman process can be driven. Thus far in this section, we have assumed $\Phi=0$ such that $THT^{-1}=H$. Because the drive also obeys time-reversal symmetry $TV_{d}T^{-1}=V_{d}$, the direct pseudospin transition $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{q}\,\rangle$ is thus forbidden for $\Phi=0$.
While this feature is ideal for the use of the Andreev doublets as a $\Lambda$ system, in this experiment it was necessary to break the doublet degeneracy such that $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ were frequency-resolved. We therefore performed most measurements at a nonzero flux bias ($\Phi\approx-0.1\Phi_{0}$) such that we could drive the inter-doublet transitions as well as detect the pseudospin state of the lower doublet. As we investigate numerically in the next section, we expect the direct spin-flip transition to remain suppressed even for nonzero $\Phi$. Note that as long as the additional time-reversal-breaking terms in $H$ respect the mirror symmetry, as is the case for a dc phase applied to mirror-symmetric leads, all the inter-doublet transition considerations presented above for $\Phi=0$ automatically generalize to the case of nonzero $\Phi$. In particular, one state of each doublet has mirror eigenvalue $+i$ and the other $-i$, independent of $\Phi$.

### 1.2 B. Numerical Tight-Binding Model

We now explore these symmetry considerations numerically using a tight-binding model. We begin by investigating spin-orbit hybridization and broken mirror symmetry in the band structure of an un-proximitized nanowire, before moving on to the Andreev levels themselves.

#### 1.2.1 1. Unconfined Normal Channels

Our model of the un-proximitized nanowire is graphically represented in Fig. S1(a). It consists of two coupled infinite parallel chains aligned along the $x$-direction, and with Rashba spin-orbit coupling:

$H=\sum_{i,\tau,\sigma}(\epsilon_{i,\tau}-\mu)c^{\dagger}_{i,\tau,\sigma}c_{i,\tau,\sigma}+t_{x}c^{\dagger}_{i,\tau,\sigma}c_{i+1,\tau,\sigma}+\sigma\alpha_{x}c^{\dagger}_{i,\tau,\sigma}c_{i+1,\tau,\bar{\sigma}}+\mathrm{H.c.}+\sum_{i,\sigma}t_{y}c^{\dagger}_{i,1,\sigma}c_{i,2,\sigma}+i\alpha_{y}c^{\dagger}_{i,1,\sigma}c_{i,2,\bar{\sigma}}+\mathrm{H.c.}$ (1)

Here the operators $c^{\dagger}_{i,\tau,\sigma}$ create an electron on the site $i$, within the channel $\tau=1,2$ and spin $\sigma$. The scalar hopping strengths $\\{t_{x},t_{y}\\}$ correspond to the {longitudinal, transverse} directions, while the Rashba hopping strengths are given by $\\{\alpha_{x},\alpha_{y}\\}$. The strengths are scaled by the discretization of a continuous Hamiltonian with a longitudinal lattice parameter $a=20$ nm and width $W=100$ nm, giving $t_{x}=\hbar^{2}/(m^{*}a^{2})$, $\alpha_{x}=\alpha/(2a)$, $t_{y}=\hbar^{2}/(m^{*}W^{2})$ and $\alpha_{y}=\alpha/(2W)$, where $m^{*}=0.028m_{e}$ is the effective mass for InAs and $\alpha=E_{z}\gamma=25~{}\mathrm{meV}\cdot\mathrm{nm}$ is the Rashba parameter. In the mirror-symmetric case, the on-site energies $\epsilon_{i,\tau}$ are given by $2t_{x}$. Momentarily, we will break this mirror symmetry by including an energy offset $V_{A,y}$ between the two chains such that $\epsilon_{i,1}\rightarrow\epsilon_{i,1}+V_{A,y}/2$ and $\epsilon_{i,2}\rightarrow\epsilon_{i,2}-V_{A,y}/2$. Below, we will express all energies as fractions of the superconducting gap of bulk aluminum $\Delta=0.185~{}\mathrm{meV}$.
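To make the model concrete, the sketch below assembles the corresponding $4\times 4$ Bloch Hamiltonian and checks the mirror selection rule numerically. The sign and gauge conventions are our own assumptions, so this is a qualitative illustration rather than the code behind Fig. S1.

```python
import numpy as np

# Basis ordering: (channel tau) ⊗ (spin sigma).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Parameters from the text: a = 20 nm, W = 100 nm, m* = 0.028 m_e,
# alpha = 25 meV.nm; hbar^2/m_e ~ 76.2 meV.nm^2. Energies in meV.
a, W, alpha, h2m = 20.0, 100.0, 25.0, 76.2
tx, ty = h2m / (0.028 * a**2), h2m / (0.028 * W**2)
ax, ay = alpha / (2 * a), alpha / (2 * W)

def H_bloch(ka, mu=0.0, V_Ay=0.0):
    """4x4 Bloch Hamiltonian at dimensionless momentum k*a (one convention assumed)."""
    hop = np.kron(s0, tx * s0 + 1j * ax * sy) * np.exp(1j * ka)  # t_x + longitudinal Rashba
    H = hop + hop.conj().T
    H += (2 * tx - mu) * np.eye(4)                               # on-site energy minus mu
    H += ty * np.kron(sx, s0)                                    # scalar inter-chain hopping
    soc = 1j * ay * np.kron(np.array([[0, 1], [0, 0]]), sx)      # transverse Rashba, chain 1 -> 2
    H += soc + soc.conj().T
    H += 0.5 * V_Ay * np.kron(np.diag([1.0, -1.0]), s0)          # mirror-breaking offset
    return H

# Mirror operator M_y = (chain swap) ⊗ (-i sigma_y); its eigenvalues are +/- i.
My = np.kron(sx, -1j * sy)
comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(H_bloch(0.7), My), 0))            # True:  [H, M_y] = 0
print(np.allclose(comm(H_bloch(0.7, V_Ay=2.0), My), 0))  # False: offset breaks M_y

ks = np.linspace(-np.pi, np.pi, 401)
bands = np.array([np.linalg.eigvalsh(H_bloch(k)) for k in ks])   # four bands, as in Fig. S1(b)
```

The two printed checks confirm that $[H,M_{y}]=0$ only when $V_{A,y}=0$, which is the content of the selection-rule argument above.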
Figure S1: (a) Visual representation of the tight-binding model of the un-proximitized infinite nanowire. (b) Band structure of the tight-binding model for $V_{A,y}=0$. Bands split by the longitudinal Rashba term (gray dashed lines) undergo avoided crossings due to the transverse Rashba term to give the final bands of the model (colored lines). Bands colored in blue are anti-symmetric eigenstates of $M_{y}$, and bands colored in red are symmetric eigenstates. The black star marks the slow, positive-momentum Fermi point, the transverse wavefunction components of which are plotted in (c) for the values of $\mu$ indicated by the gray strip. Only the anti-symmetric amplitudes $a_{-}$ and $b_{-}$ are nonzero. As the chemical potential is tuned through the avoided crossing between the bands, weight shifts from $|\Leftarrow,S\rangle$ to $|\Rightarrow,A\rangle$ (see Eq. (3)). (d) For $V_{A,y}=2\Delta$, the bands are qualitatively similar to (b), but are no longer mirror eigenstates. As shown in (e), the wavefunction of the slow, positive-momentum Fermi point has both symmetric and anti-symmetric components.

First, we consider the mirror-symmetric case $V_{A,y}=0$. The model has four bands [Fig. S1(b)]: the spatial character of the transverse wavefunction can either be symmetric $|S\rangle$ (lower energy) or anti-symmetric $|A\rangle$ (higher energy), while the spin can be either $|\Rightarrow\rangle$ or $|\Leftarrow\rangle$ in the $y$-direction. The longitudinal Rashba term $i\alpha_{x}c^{\dagger}_{i,\tau,\sigma}c_{i+1,\tau,\bar{\sigma}}+\mathrm{H.c.}$ generates momentum-split bands [gray dashed lines in Fig. S1(b)], but spin in the $y$-direction remains a good quantum number. However, the transverse part of the Rashba term $i\alpha\sigma_{x}\partial_{y}\rightarrow i\alpha_{y}c^{\dagger}_{i,1,\sigma}c_{i,2,\bar{\sigma}}+\mathrm{H.c.}$ generates an avoided crossing between the lower-energy, transverse-symmetric band and the higher-energy, transverse-anti-symmetric band. This inter-band mixing results in hybridization between spin and transverse motional degrees of freedom, so that the transverse character of the resultant low-energy band is given by

$|+\rangle=a_{+}|\Rightarrow,S\rangle+b_{+}|\Leftarrow,A\rangle$ (2)

$|-\rangle=a_{-}|\Leftarrow,S\rangle+b_{-}|\Rightarrow,A\rangle$ (3)

The numerically-calculated values of the $|-\rangle$ coefficients are shown in Fig. S1(c). As $\mu$ is swept through the avoided crossing, weight shifts from $|\Leftarrow,S\rangle$ to $|\Rightarrow,A\rangle$, with maximal hybridization occurring around $\mu\approx-0.05\Delta$. Critically, although there is hybridization, the new bands are still eigenstates of the mirror operator: $M_{y}|\pm\rangle=\pm i|\pm\rangle$. As such, a mirror-symmetric drive cannot induce transitions between them.

To model a broken mirror symmetry, we set a non-zero inter-chain potential difference $V_{A,y}=2\Delta$. The bands for this case are depicted in Fig. S1(d). While the change in the energy dispersion relation is subtle, we can see the effect of the broken mirror symmetry in the wavefunction character [Fig. S1(e)]. The band near the avoided crossing now has character of all four basis states $|\Rightarrow,S\rangle,|\Leftarrow,S\rangle,|\Rightarrow,A\rangle,|\Leftarrow,A\rangle$. There are thus no selection rules forbidding transitions induced by a mirror-symmetric drive.

#### 1.2.2 2. Andreev Levels

Now we confine the normal region between two superconducting leads with a pair potential $\Delta$ and a different chemical potential $\mu_{S}$ [Fig. S2(a)]. The total number of sites in each of the two chains is $N$.
The superconducting phase difference is applied within the gauge of the $t_{x}$ hopping elements in the middle of the chain, i.e. between sites $N/2$ and $N/2+1$. Symbolically the Hamiltonian is

$H=\sum_{i,\tau,\sigma}(\epsilon_{i,\tau}-\mu_{i})c^{\dagger}_{i,\tau,\sigma}c_{i,\tau,\sigma}+t_{x}c^{\dagger}_{i,\tau,\sigma}c_{i+1,\tau,\sigma}-\alpha_{x}c^{\dagger}_{i,\tau,\sigma}c_{i+1,\tau,\bar{\sigma}}+\mathrm{H.c.}+\sum_{i,\tau}\Delta_{i}c_{i,\tau,\downarrow}c_{i,\tau,\uparrow}+\mathrm{H.c.}+\sum_{i,\sigma}t_{y}c^{\dagger}_{i,1,\sigma}c_{i,2,\sigma}+i\alpha_{y}c^{\dagger}_{i,1,\sigma}c_{i,2,\bar{\sigma}}+\mathrm{H.c.}$ (4)

where $\mu_{i}=\mu_{S}$ for the sites in the superconducting leads and $\mu_{i}=\mu$ for the normal region as in the previous section. We simulate the Hamiltonian Eq. (4) in Nambu space, fixing $\mu_{S}=1.5\Delta$, $N=34$, and the number of sites in each lead $N_{\mathrm{leads}}=6$. We have found that while $N_{\mathrm{leads}}=6$ is large enough for the qualitative investigation we present here, more lead sites are necessary if quantitative accuracy is desired. We examine both the mirror-symmetric case $V_{A,y}=0$ and the non-symmetric case $V_{A,y}=\Delta$. We note that we apply $V_{A,y}$ only to the sites in the normal region $N_{\mathrm{leads}}<i<N-N_{\mathrm{leads}}$.

For both the mirror-symmetric and non-symmetric cases, we find Andreev levels with qualitatively similar energies [Fig. S2(b/c)] and therefore similar inter-doublet transition frequencies [Fig. S2(d/e)]. For $V_{A,y}=0$, the Andreev levels are eigenstates of the mirror operator, just like the states of the infinite nanowire. Next, we check the drive matrix elements. We model the drive voltage profile as a linear potential drop between the leads $V_{d}(x)=\frac{2}{L}V_{0}x$, choosing $V_{0}=2\Delta$. For the mirror-symmetric case, only mirror-preserving transitions can be driven regardless of flux [Fig. S2(f)] and chemical potential [Fig. S2(h)], as expected. For $V_{A,y}=\Delta$, we indeed find that all transitions are allowed. However, while the inter-doublet transitions are finite for all values of $\Phi$, the direct spin-flip matrix element goes to zero at $\Phi\rightarrow 0,\Phi_{0}/2$ as required by time-reversal symmetry. We also find that the matrix elements of pseudospin-preserving and pseudospin-flipping transitions become of similar magnitude for higher chemical potentials [Fig. S2(h)]. This is consistent with the stronger degree of spin-orbit mixing when the chemical potential is close to the next sub-band, as shown in Fig. S1(e).

Figure S2: (a) Visual representation of the tight-binding model, now including superconducting leads. (b-i) For $V_{A,y}=0,\Delta$, we compute the Andreev level energies (b,c), all transition frequencies (d,e), all $V_{d}$ matrix elements versus $\Phi$ (f,g), and $V_{d}$ matrix elements (drive amplitude $V_{0}=2\Delta$) versus $\mu$ involving the lowest energy level (h,i). For $V_{A,y}=0$, states are colored by their mirror eigenvalue as before: red for $+i$ and blue for $-i$. For $V_{A,y}=\Delta$, states are colored the same as in main text Fig. 1(a). Inter-doublet transition frequencies and matrix elements take the same color as the participating lower doublet state, while those for the intra-doublet transition are grey. Thick dashed lines correspond to pseudospin-conserving transitions, while thin dashed lines correspond to pseudospin-flipping transitions.
Vertical dashed black lines in (f)/(g) correspond to $\Phi$ for (h)/(i), while vertical dashed black lines in (h)/(i) correspond to $\mu$ for (f)/(g).

## 2 II. Experiment schematic

Figure S3: Cryogenic wiring diagram and device micrographs (see Extended Data of Ref. Hays et al. (2020) for original publication). Optical micrograph (e) is of the device on which the presented measurements were performed. Optical micrographs (b), (c), (d) and scanning electron micrograph (f) are of an extremely similar (unmeasured) device, the main difference being that the length of the weak link is $750~{}\mathrm{nm}$ instead of $500~{}\mathrm{nm}$. The microwave readout and drive tones pass through the depicted circuitry (a) before being routed through the $\Delta$ port of a $180^{\circ}$ hybrid resulting in differential microwave voltages at the device input. After reaching two coupling capacitors (c), the readout tone was reflected off the differential $\sim\lambda/4$ mode of the coplanar strip resonator (red, frequency $f_{\mathrm{r}}=9.18843~{}\mathrm{GHz}$, coupling $\kappa_{\mathrm{c}}=2\pi\times 1.23~{}\mathrm{MHz}$, internal loss $\kappa_{\mathrm{i}}=2\pi\times 1.00~{}\mathrm{MHz}$) and then routed through the depicted amplification chain (a), which was composed of a SNAIL parametric amplifier (SPA) Frattini et al. (2018), HEMT, and room-temperature amplifiers. In this circuit, the drive tone creates an ac phase drop across the nanowire (f), which is embedded in the superconducting $\Phi$-bias loop (green) at the end of the resonator (d,e). One edge of the loop connects the two strips of the resonator and thereby forms the shared inductance with the nanowire. We controlled the electrostatic potential in the nanowire weak link (f) with a dc gate (pink, voltage $V_{\mathrm{g,c}}$). Gates on the nanowire leads (orange) were used to gain additional electrostatic control (voltage $V_{\mathrm{nw}}$). To reference the resonator/nanowire island to ground, an additional strip runs between the resonator strips, and connects to a large finger capacitor (purple). This strip does not significantly perturb the resonator’s microwave properties because it resides at the zero voltage point with respect to the resonator’s differential mode.
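For intuition about the readout signal, the expected single-port reflection for the quoted resonator parameters can be sketched with the standard input-output expression $\Gamma(f)=1-\kappa_{\mathrm{c}}/[i2\pi(f-f_{\mathrm{r}})+\kappa/2]$; the reflection convention is an assumption, and this is not the calibration used in the experiment.

```python
import numpy as np

f_r = 9.18843e9                  # resonator frequency (Hz)
kappa_c = 2 * np.pi * 1.23e6     # coupling rate (rad/s)
kappa_i = 2 * np.pi * 1.00e6     # internal loss rate (rad/s)
kappa = kappa_c + kappa_i

f = np.linspace(f_r - 10e6, f_r + 10e6, 2001)
Gamma = 1 - kappa_c / (1j * 2 * np.pi * (f - f_r) + kappa / 2)
I, Q = Gamma.real, Gamma.imag    # the measured quadratures of Fig. 1(e)
print(abs(Gamma).min())          # shallow dip ~0.1 since kappa_c ~ kappa_i
```

A spin-dependent dispersive shift of $f_{\mathrm{r}}$ then moves $\Gamma$ between the three distributions of Fig. 1(e).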
## 3 III. Tuning up the device

In this experiment, we possessed three in-situ control knobs of the nanowire Andreev levels: a loop flux $\Phi$ [Fig. S3(d, e)], a main gate voltage $V_{\mathrm{g,c}}$ acting on the nanowire weak link [Fig. S3(f)], and an additional gate voltage $V_{\mathrm{g,p}}$ applied to two more gates positioned on either side of the main gate [Fig. S3(f)]. Upon cooling down the device, we observed $\Phi$ and gate voltage dependence of the readout resonance around $V_{\mathrm{g,c}},V_{\mathrm{g,p}}=0$, indicating that conduction channels in the nanowire link were transmitting. With $\Phi=-0.13\Phi_{0}$ and $V_{\mathrm{g,p}}=0$, we swept $V_{\mathrm{g,c}}$ while performing two-tone spectroscopy [Fig. S4(a)]. We observed several dispersing transitions, with a local maximum at $V_{\mathrm{g,c}}=-71.0~{}\mathrm{mV}$. Parking $V_{\mathrm{g,c}}$ at the local maximum to mitigate the effects of electrostatic noise on the Andreev level coherence (see below for further data and discussion), we then performed two-tone spectroscopy while sweeping $\Phi$ [Fig. S4(b)]. Four flux-dependent resonances were observed, which cross at $\Phi=0$. This is characteristic of inter-doublet transitions of a quasiparticle between spin-orbit split Andreev levels Tosi et al. (2019). In conjunction with the population transfer measurements shown in Fig. 2(a) of the main text, this characteristic spectrum allowed us to identify the two lowest-frequency transitions as $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$.

At certain $\Phi$ bias points, some of the transitions are not visible, or become significantly dimmer. For example, the $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition is barely visible at $\Phi\simeq-0.13\Phi_{0}$. This drop in signal occurs because the quasiparticle population of the relevant level of the lower doublet (either $|\\!\downarrow_{q}\,\rangle$ or $|\\!\uparrow_{q}\,\rangle$) decreases, often below 0.01. We attribute these population drops to evacuation of the quasiparticle into cold, dot-like levels in the nanowire that are brought into resonance with the Andreev doublets as $\Phi$, $V_{\mathrm{g,c}}$, and $V_{\mathrm{g,p}}$ are varied. While these features are not completely understood, we found they could be easily avoided with an appropriate choice of bias conditions. The effects of these population drops can be observed in Figs. S4 and S5.

Having identified the transitions that defined the $\Lambda$ system, we searched for a local maximum of the transitions in both gate voltages in order to mitigate electrostatic noise. We found such a bias point at $V_{\mathrm{g,c}}=-71.9~{}\mathrm{mV}$ and $V_{\mathrm{g,p}}=4.0~{}\mathrm{mV}$ (see Fig. S5 for $\Phi$-dependence at these gate voltages).

Figure S4: Gate and flux dependence of the inter-doublet transitions. (a) A local maximum (“sweet spot”) is observed in both $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ at $V_{\mathrm{g,c}}=-71.0~{}\mathrm{mV}$ (black dotted line). (b) Flux dependence of the four inter-doublet transitions at $V_{\mathrm{g,c}}=-71.0~{}\mathrm{mV}$, $V_{\mathrm{g,p}}=0.0~{}\mathrm{mV}$. Black dotted line indicates $\Phi$ bias for data shown in (a).

Figure S5: Flux dependence of $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ at the bias point for all data presented in the main text, excluding Fig. 4 ($V_{\mathrm{g,c}}=-71.9~{}\mathrm{mV}$, $V_{\mathrm{g,p}}=4.0~{}\mathrm{mV}$).

## 4 IV. Searching for Raman transitions

With the gate voltages set to the optimum values discussed above and $\Phi=-0.10\Phi_{0}$, we applied simultaneous Gaussian drive pulses of variable carrier frequency, as explained in the main text and Fig. 2(b). Here we present all measured transition probabilities, over a wider frequency range than in the main text [Fig. S6]. The drive powers used in this measurement were 30 dB larger than in the two-tone spectroscopy measurement of Fig. 2(a). The $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition is visible (though broadened) just above its low-power value, while the $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition is no longer visible. Other transitions that are constant in one drive frequency or the other are observed, perhaps due to evaporation of the quasiparticle into the same dot-like levels as discussed above. Several multi-photon transitions are observed that lie along $f_{\uparrow}=-f_{\downarrow}+c$.
The $-1$ slope indicates that the two drive frequencies are adding to reach a highly-excited state of the system, perhaps exciting the quasiparticle above the superconducting gap or into other dot-like states in the nanowire. Finally, the desired two-photon Raman process $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{q}\,\rangle$ occurs along the black dashed line, which has slope 1 and intersects with the crossing point of the two single-photon transitions of the $\Lambda$ system (unlike the data shown in Fig. 2(b) of the main text). This measurement was taken between the measurements displayed in Fig. 2(a) and Fig. 2(b) of the main text. We thus conclude that there was an uncontrolled shift of the Andreev levels between the measurement shown in Fig. S6 and Fig. 2(b), as referenced in the main text. Such jumps were not uncommon in this experiment, and occurred on a timescale of days to weeks.

Figure S6: All transition probabilities under the action of simultaneous drive pulses of variable carrier frequencies, as in Fig. 2(b) of the main text. Measured transition frequencies of $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ are indicated by pink and purple dashed lines, respectively. Black dashed lines have a slope of one, and run through the crossing point of the two transitions.

## 5 V. Coherent spin dynamics

Using an experiment similar to that shown in Fig. S6, we chose drive frequencies such that the desired Raman process $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{q}\,\rangle$ was being driven, but the drives were maximally detuned from undesired processes. We then varied the amplitudes $A_{\downarrow}$, $A_{\uparrow}$ to induce coherent oscillations of the quasiparticle spin [main text, Fig. 3(a)]. To better understand these coherent dynamics, we performed a simulation of our system using QuTiP Johansson et al. (2013) [main text, Fig. 3(b)]. Here we present transition probabilities out of the two spin states under the action of these variable amplitude drive pulses, both measured [Fig. S7(a)] and simulated [Fig. S7(b)] (measurements where the system started in $|g\rangle$ showed no features). The simulation of the coherent dynamics included all four Andreev levels $|\\!\downarrow_{q}\,\rangle$, $|\\!\uparrow_{q}\,\rangle$, $|\\!\uparrow_{a}\,\rangle$, and $|\\!\downarrow_{a}\,\rangle$ [Fig. S7(c)]. While there were only two drives applied in this experiment, because each drive could couple to each of the four inter-doublet transitions, we needed to account for a total of eight Hamiltonian terms. Four of these terms produced two Raman processes [dashed double-headed arrows in Fig. S7(c)]: one proceeded via $|\\!\uparrow_{a}\,\rangle$ as desired, and the other via $|\\!\downarrow_{a}\,\rangle$. The $|\\!\uparrow_{a}\,\rangle$ Raman process dominated the dynamics, as the detuning of the drives from $|\\!\uparrow_{a}\,\rangle$ was $\Delta_{\mathrm{R}}=-290~{}\mathrm{MHz}$, as compared to the detuning to $|\\!\downarrow_{a}\,\rangle$, which was $\Delta_{\mathrm{R}}^{\prime}=1.36~{}\mathrm{GHz}$. The other four Hamiltonian terms produced Stark shifts [thin solid double-headed arrows in Fig. S7(c)].
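A standard adiabatic-elimination estimate gives the scale of the dominant Raman rate, $\Omega_{\mathrm{eff}}\approx\Omega_{\downarrow}\Omega_{\uparrow}/(2\Delta_{\mathrm{R}})$. The sketch below evaluates it with the fitted matrix elements quoted in the next paragraph; it is a rough estimate that ignores the second Raman path and the Stark shifts.

```python
# All quantities expressed as frequencies, i.e. divided by 2*pi.
Om_dn, Om_up = 255e6, 232e6   # fitted matrix elements at unit drive amplitudes (Hz)
Delta_R = 290e6               # magnitude of the common detuning from |up_a> (Hz)

Om_eff = Om_dn * Om_up / (2 * Delta_R)
print(f"Omega_eff/(2*pi) ~ {Om_eff / 1e6:.0f} MHz")  # ~100 MHz at unit amplitudes
```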
The fixed parameters in the simulation were the four inter-doublet transition frequencies, the measured dephasing rates of both doublets, the pulse length and shape (Gaussian, $40~{}\mathrm{ns}$ standard deviation), and the detunings $\Delta_{\mathrm{R}}=-290~{}\mathrm{MHz}$, $\Delta_{\mathrm{R}}^{\prime}=1.36~{}\mathrm{GHz}$. We fit the simulation to the data using six free parameters: the four drive matrix elements associated with the four inter-doublet transitions ($M_{\downarrow,\uparrow}$, $M_{\uparrow,\uparrow}$, $M_{\downarrow,\downarrow}$, $M_{\uparrow,\downarrow}$), the detuning $\delta$ from the Raman resonance condition, and the ratio $\alpha$ of the $f_{\uparrow}$ drive amplitude to the $f_{\downarrow}$ drive amplitude. From the fit, we extract the following values and associated covariance matrix: $\begin{matrix}\delta/(2\pi)=5.5\\ \;\;\;M_{\uparrow,\uparrow}/(2\pi)=232~{}\mathrm{MHz}\\ \;\;\;M_{\downarrow,\uparrow}/(2\pi)=255~{}\mathrm{MHz}\\ \;\;\;M_{\downarrow,\downarrow}/(2\pi)=280~{}\mathrm{MHz}\\ \;M_{\uparrow,\downarrow}/(2\pi)=80~{}\mathrm{MHz}\\ \;\;\;\;\;\;\;\;\;\;\alpha=0.54\\ \end{matrix}\;\;\;\;\;\;\;\;C=\begin{pmatrix}+0.02&+0.01&-0.09&+0.2&+0.2&+0.0003\\ +0.01&+70&-50&-200&+200&+0.02\\ -0.09&-50&+30&+100&-100&-0.02\\ +0.2&-200&+100&+500&-40&-0.05\\ +0.2&+200&-100&-40&+400&+0.06\\ +0.0003&+0.02&-0.02&-0.05&+0.06&+0.00002\\ \end{pmatrix}$ (5)

Note that the extracted values of the matrix elements include the drive amplitudes at the device, which we estimate to be $\sim 400~{}\mathrm{nV}$ across the junction. We note that, in the experimental data, the oscillation minima/maxima do not lie perfectly along $|A_{\uparrow}|=|A_{\downarrow}|$; the inclusion of $\delta$ in the fit is necessary to reproduce this. The inclusion of the Stark shift terms, on the other hand, was not needed to reproduce the data, though in reality they must be present.

Figure S7: Coherent $\Lambda$-Rabi oscillations of the quasiparticle spin, as in Fig. 3 of the main text. (a) Measured final state probabilities after the application of simultaneous drive pulses, with the spin initialized in either $|\\!\downarrow_{q}\,\rangle$ or $|\\!\uparrow_{q}\,\rangle$. (b) Simulated probabilities. (c) Level diagram with simulation parameters.

Additionally, as can be seen in Fig. S7, we observed drive-induced quasiparticle evaporation: the $|g\rangle$ population grows as the drive amplitudes are increased. In simulation, we found that this quasiparticle evaporation was captured by including two dissipators on the lower doublet of the form $\sqrt{\Gamma_{\mathrm{de-trap}}}(|A_{\downarrow}|^{2}+|A_{\uparrow}|^{2}+|A_{\downarrow}||A_{\uparrow}|)|g\rangle\langle\downarrow_{q}|$, $\sqrt{\Gamma_{\mathrm{de-trap}}}(|A_{\downarrow}|^{2}+|A_{\uparrow}|^{2}+|A_{\downarrow}||A_{\uparrow}|)|g\rangle\langle\uparrow_{q}|$, with $\Gamma_{\mathrm{de-trap}}/(2\pi)=1.2\pm 0.1~{}\mathrm{MHz}$ as extracted from the $|g\rangle$ population data. While this is certainly an oversimplified model (no frequency dependence, no spin dependence, etc.), the scaling of the rate with the drive amplitudes indicates that the evaporation is likely due to multi-photon transitions of the trapped quasiparticle to excited states, either in the dot-like levels previously discussed or in the continuum above the superconducting energy gap of the leads. This differs from previous results on drive-induced quasiparticle evaporation, where a linear scaling of the rate with power was observed Levenson-Falk et al.
(2014), most likely because the Andreev levels studied in this work exist at lower energy. These results are also consistent with our observation that the undesired transitions seen in Fig. S7 only occur at high powers. Finally, we included readout errors in the simulation by extracting the transition probabilities between the outcomes of the first and second readout pulses for the zero-amplitude experimental data, and then applying the resultant transfer matrix to the simulated probabilities for all drive amplitudes. While this did not result in a qualitative change of the data, it was important for reproducing the experimental contrast. Note that this is why there are some Raman features visible in the simulated $|g\rangle$ data: the rate at which the quasiparticle spontaneously evacuates the junction during readout is slightly spin-dependent. Although we were unable to measure the upper doublet population directly because the dispersive shift of these states was too small at this bias point, the simulation indicates that it was below 20%.

## 6 VI. Analysis of the Andreev level coherence times

As discussed in the main text, we observed no dependence of the quasiparticle spin $T_{2}$ on any of the in-situ bias knobs ($\Phi,~{}V_{\mathrm{g,c}},~{}V_{\mathrm{g,p}}$). However, we did find that the coherence times of the inter-doublet transitions depended on $V_{\mathrm{g,c}}$ [Fig. S8(a)]. In particular, we found that the $T_{2}$ of the $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition was maximal around a $V_{\mathrm{g,c}}$ sweet spot at $-71.0~{}\mathrm{mV}$. Away from this sweet spot, first-order electrostatic noise contributed to dephasing, causing $T_{2}$ to drop. We model this behavior using the relation for exponential coherence decay $\frac{1}{T_{2}}=\big{(}2\pi V_{\mathrm{rms}}\frac{df}{dV_{\mathrm{g,c}}}\big{)}^{2}+\Gamma_{\mathrm{c}}$, where $V_{\mathrm{rms}}=0.24\pm 0.01~{}\mathrm{mV}$ is the effective root-mean-square voltage noise and $\Gamma_{\mathrm{c}}=0.012\pm 0.001~{}\mathrm{ns}^{-1}$ is a $V_{\mathrm{g,c}}$-independent dephasing rate Martinis et al. (2009). We note that the $T_{2}$ at the sweet spot is given by $1/\Gamma_{\mathrm{c}}$, as we found that second-order noise coupling to $\frac{d^{2}f}{dV_{\mathrm{g,c}}^{2}}$ was negligible Houck et al. (2009). To calculate the effect of this electrostatic noise on the quasiparticle spin $T_{2}$, we first extracted the $V_{\mathrm{g,c}}$ dependence of the $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{q}\,\rangle$ splitting $\epsilon_{\mathrm{s}}$, which is given by the difference between the frequencies of the $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ and $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transitions. We visualize this in Fig. S8(b) by plotting the same two-tone data as shown in Fig. S8(a), but with the fitted frequency $f_{\uparrow}$ of the $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition subtracted from the drive frequency $f_{d}$ at every $V_{\mathrm{g,c}}$ bias. The $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition thus lies along $f_{d}-f_{\uparrow}=0$ (pink horizontal line), while the $V_{\mathrm{g,c}}$ dependence of $\epsilon_{\mathrm{s}}/h$ is given by the behavior of the $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ trace (purple horizontal line).
We observe that $\epsilon_{\mathrm{s}}/h$ has no discernible slope with $V_{\mathrm{g,c}}$, consistent with the lack of a spin $T_{2}$ dependence on $V_{\mathrm{g,c}}$. However, using an upper bound on this slope $\frac{d\epsilon_{\mathrm{s}}/h}{dV_{\mathrm{g,c}}}<32~{}\mathrm{MHz}/\mathrm{mV}$ [yellow dashed line in Fig. S8(b)] and twice the extracted value of $V_{\mathrm{rms}}$ as an upper bound on the electrostatic noise [black dashed line in Fig. S8(a)], we find a lower bound on the spin dephasing time of $4.2~{}\mathrm{\mu}\mathrm{s}$.

Figure S8: Extracting a lower bound on the electrostatic-noise-induced dephasing time of the quasiparticle spin. (a) Same spectroscopy data as shown in Fig. S2(a). A local maximum is observed in both transitions at $V_{\mathrm{g,c}}=-71.0~{}\mathrm{mV}$. Measurements of the coherence time of $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ are shown in pink (right axis). White dashed line is a fit to the $T_{2}$ data assuming first-order noise in $V_{\mathrm{g,c}}$ plus a constant dephasing rate. Black dashed line corresponds to the expected $T_{2}$ given this same constant dephasing rate, but twice the $V_{\mathrm{g,c}}$ noise. (b) Same data as shown in (a), but with the fitted frequency of the $|\\!\uparrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition subtracted from $f_{\mathrm{d}}$ for each $V_{\mathrm{g,c}}$ bias. Both the purple and pink lines have no slope, and lie along the average values of the two transitions. The yellow dashed line is an upper bound on the slope of the $|\\!\downarrow_{q}\,\rangle\leftrightarrow|\\!\uparrow_{a}\,\rangle$ transition.

## 7 VII. Analysis of the Andreev level lifetimes

To extract the transition rates between $|g\rangle$, $|\\!\downarrow_{q}\,\rangle$, and $|\\!\uparrow_{q}\,\rangle$, we analyzed the quantum jumps of the system using a hidden Markov model algorithm [Fig. S9] Press (2002); Hays et al. (2018, 2020). As we reported in Ref. Hays et al. (2020), the spin lifetime $T_{\mathrm{s}}$ increased with $|\Phi/\Phi_{0}|$. Unlike in the data presented in Ref. Hays et al. (2020), at this gate bias point the parity lifetime did vary with $\Phi$ due to evacuation of the quasiparticle into the cold, fermionic modes discussed above. In Fig. S9, we plot quantum jumps for $\Phi=-0.14\Phi_{0}$, where $T_{\mathrm{s}}=17\pm 1~{}\mathrm{\mu}\mathrm{s}$ and $T_{\mathrm{parity}}=22\pm 1~{}\mathrm{\mu}\mathrm{s}$.

Figure S9: Quantum jumps of spin and parity at $\Phi=-0.14\Phi_{0}$. (a) $\Gamma$ histogram resulting from $10^{5}$ consecutive readout pulses. (b) Plotting $Q$ against time reveals quantum jumps between $|g\rangle$, $|\\!\downarrow_{q}\,\rangle$, and $|\\!\uparrow_{q}\,\rangle$ (only a subset of the time trace is shown). A hidden Markov algorithm was employed to perform state assignment (colored rectangles), as well as to extract all six transition rates between the three states. These are summarized by the parity lifetime $T_{\mathrm{parity}}=22\pm 1~{}\mathrm{\mu}\mathrm{s}$ and spin lifetime $T_{\mathrm{s}}=17\pm 1~{}\mathrm{\mu}\mathrm{s}$.
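As an illustration of the state-assignment step just described, the following is a minimal sketch using a three-state Gaussian hidden Markov model from the hmmlearn package, as a stand-in for the algorithm of Press (2002) actually employed; the file name, data format, and repetition time are hypothetical.

```python
# Minimal HMM sketch for assigning |g>, |down_q>, |up_q> to readout outcomes.
# hmmlearn stands in for the algorithm of Press (2002); file name, repetition
# time, and the I/Q data format are hypothetical.
import numpy as np
from hmmlearn import hmm

iq = np.load("readout_trace.npy")         # hypothetical complex I/Q record
X = np.column_stack([iq.real, iq.imag])   # one 2D point per readout pulse

model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=200)
model.fit(X)
states = model.predict(X)                 # most likely state sequence

dt = 2e-6                                 # repetition time in s (placeholder)
P = model.transmat_                       # per-step transition probabilities
rates = P / dt                            # off-diagonal entries roughly
                                          # approximate the six rates (1/s)
```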
# Fourier, Gabor, Morlet or Wigner: Comparison of Time-Frequency Transforms

Stefan Scholl <EMAIL_ADDRESS>

###### Abstract

In digital signal processing, time-frequency transforms are used to analyze time-varying signals with respect to their spectral content over time. Apart from the commonly used short-time Fourier transform, other methods exist in the literature, such as the Wavelet, Stockwell or Wigner-Ville transform. Consequently, engineers working on digital signal processing tasks are often faced with the question of which transform is appropriate for a specific application. To address this question, this paper first briefly introduces the different transforms. Then it compares them with respect to the achievable resolution in time and frequency and possible artifacts. Finally, the paper contains a gallery of time-frequency representations of numerous signals from different fields of application to allow for visual comparison.

## I Introduction

In many fields of engineering and science it is vital to analyze the spectral content of time-varying signals as it changes over time. This includes medical signals (EEG, ECG, ultrasonic), music and speech, animal voices, seismic activity, vibrations, fluctuations in power grids, radar and communication signals and many more. For that purpose many different types of time-frequency transforms have been introduced in the literature. Some of the transforms have a high practical impact, while others are mainly of theoretical interest. The available methods provide largely different properties, such as time resolution, frequency resolution, accuracy and the potential introduction of artifacts that do not correspond to actual signal components. This paper considers the STFT, Gabor, Wavelet and Stockwell (S) transform as well as the Wigner-Ville distribution (WVD) and its smoothed pseudo variant (SPWVD). Section II provides a brief introduction to these methods. Readers interested in more in-depth information and the theoretical background are referred to [1, 2, 3]. Section III compares the transforms with respect to resolution and artifacts. Finally, the transforms are applied to numerous different signals to form a gallery of time-frequency transforms. The purpose of this gallery is to show the properties, advantages and disadvantages of the transforms on actual signals in order to support engineers in selecting the right transform for their application.

## II Time-Frequency Transforms

### II-A Short-Time Fourier and Gabor Transform

The STFT is the most widely known and commonly used time-frequency transform. It is well understood, easy to interpret and fast implementations exist (FFT). Its drawbacks are the limited and fixed resolution in time and frequency. The idea of the STFT is to move a sliding window $w(t)$ over the signal $x(t)$ to be analyzed, such that a particular time span of the signal is selected. For each position of the window a Fourier transform is calculated that represents the frequency content of that time span. The STFT results in the two-dimensional time-frequency representation: $X(t,f)=\intop_{-\infty}^{\infty}x(t_{1})w^{*}(t_{1}-t)e^{-j2\pi ft_{1}}dt_{1}$ For spectral analysis the squared magnitude of the STFT is considered, which is called the spectrogram: $S_{x}(t,f)=\left|X(t,f)\right|^{2}$ Time and frequency resolution can be controlled by the window length, as shown in Figures 2 and 3: A short window captures only a short period of time and thus has precise time resolution. However, the frequency resolution is poor, because the windowed signal contains only a few time samples, resulting in only a few frequency bins. Conversely, a long window provides poor time resolution, but creates precise frequency information due to the larger number of samples. This phenomenon is known as the uncertainty principle, which states that the product of resolution in time and frequency is limited: $BT\geq\frac{1}{4\pi}$ (with $B$ being the bandwidth of one frequency bin) [2]. A special case of the STFT, where the uncertainty relation above is fulfilled with equality (i.e. having the best joint time and frequency resolution), is known as the Gabor transform. The Gabor transform is simply an STFT with the window being a Gaussian function $w(t)=e^{-\alpha t^{2}}$ where the parameter $\alpha$ controls the window length, i.e. the emphasis on time or frequency resolution.

Figure 1: Exemplary signal in the time domain

Figure 2: STFT (spectrogram) with short and long windows of the exemplary signal in Figure 1, showing the difference in frequency and time resolution

Figure 3: Resolution of the STFT with long (upper) and short window (lower)
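As a concrete illustration of this window-length trade-off, the following is a minimal Python sketch using SciPy; the test signal, sampling rate and window lengths are chosen for illustration only. Passing a Gaussian window, e.g. window=("gaussian", 64), instead of the Hann window turns the STFT into the Gabor transform discussed above.

```python
# Minimal sketch: spectrograms of a test signal with a short and a long window.
# A short window gives many time frames but few frequency bins, and vice versa.
import numpy as np
from scipy.signal import spectrogram

fs = 1000                                 # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * (100 + 80 * t) * t)

for nperseg in (32, 512):                 # short vs. long analysis window
    f, tt, Sxx = spectrogram(x, fs=fs, window="hann",
                             nperseg=nperseg, noverlap=nperseg // 2)
    print(f"nperseg={nperseg:3d}: {len(f):3d} frequency bins, "
          f"{len(tt):4d} time frames")
```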
### II-B Wavelet Transform

The result of the wavelet transform differs from the STFT in that its time-frequency resolution is not fixed and depends on the frequency (multi-scale property, see Fig. 5). In general, the wavelet transform represents lower frequency components with finer frequency resolution and coarser time resolution. For higher frequencies the reverse is true: frequency resolution is coarser and time resolution is finer. This variable resolution property of the wavelet transform is sometimes superior to the Fourier approach, because it may give clearer spectral information for certain applications, such as audio signal processing. The wavelet transform compares the time domain signal $x(t)$ with a short analysis function $\varPsi(t)$. $\varPsi(t)$ is called the wavelet and can take on many forms, as will be described below. During the calculation of the transform the wavelet is repeatedly moved over the signal (time-shifted), each pass with a different scale factor in time that dilates the wavelet to a different length (dilation or scale). This creates a two-dimensional representation of time (i.e. shift) and scale (which can be related to frequency). The time shift is denoted by $b$, the scale by $a$. The continuous wavelet transform (CWT) is defined as: $CWT(a,b)=\frac{1}{\sqrt{a}}\intop_{-\infty}^{\infty}x(t)\varPsi^{*}\left(\frac{t-b}{a}\right)dt$ Analogous to the spectrogram of the STFT, the scalogram of the wavelet transform is defined as $\left|CWT(a,b)\right|^{2}$ Different wavelet functions are available for analysis. For the CWT, Morlet wavelets (also called Gabor wavelets) are commonly used, which consist of a complex sine wave with a Gaussian envelope $\varPsi_{Morlet}(t)=e^{-\alpha t^{2}}e^{j2\pi f_{c}t}$ Here, the parameters “center frequency” $f_{c}$ and “width of Gaussian” $\alpha$ control the trade-off between time and frequency resolution and need to be selected before conducting the transform. Basically, wavelet functions need to be local in both time and frequency to provide adequate time and frequency resolution. Figure 4 shows an example of the scalogram of a CWT. Note that besides the CWT, the discrete wavelet transform (DWT) exists. Different from the CWT, the DWT preserves the complete information of the time domain signal and is thus invertible. Therefore the DWT is often used to create sparse signal representations that can be used for data compression (e.g. widely applied in image processing). Usually different types of wavelets are used for the DWT, such as the Haar or Daubechies wavelets. The DWT is often considered less suitable for time-frequency representation, because it creates less readable plots.

Figure 4: CWT (scalogram): note the varying resolution of frequency and time

Figure 5: Variable resolution property of the CWT and S transform
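The following minimal sketch computes a scalogram with a complex Morlet wavelet, assuming the PyWavelets package is available; the wavelet name "cmor1.5-1.0" encodes illustrative bandwidth and center-frequency parameters that set the time-frequency trade-off.

```python
# Minimal sketch: scalogram via the CWT with a complex Morlet wavelet (PyWavelets).
import numpy as np
import pywt

fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)  # two tones

scales = np.geomspace(4, 256, num=64)     # small scales probe high frequencies
coef, freqs = pywt.cwt(x, scales, "cmor1.5-1.0", sampling_period=1 / fs)
scalogram = np.abs(coef) ** 2             # |CWT(a, b)|^2
print(scalogram.shape)                    # (64, 2000): scales x time shifts
```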
### II-C Stockwell Transform

The Stockwell or S-transform [4] is basically an STFT with a Gaussian (Gabor) window whose length is frequency dependent. This results in a varying time-frequency resolution similar to the wavelet transform (Figure 5). The Stockwell transform is defined as $ST(t,f)=\intop_{-\infty}^{\infty}x(t_{1})\frac{\left|f\right|}{\sqrt{2\pi}}e^{\frac{-f^{2}(t_{1}-t)^{2}}{2}}e^{-j2\pi ft_{1}}dt_{1}$ It is also similar to the CWT with a Morlet wavelet. However, in contrast to the CWT, the Stockwell transform has absolutely referenced phase information. This means that the kernel functions, which are multiplied with the signal for analysis, have zero phase at $t=0$. Moreover, the Stockwell transform tends to emphasize higher frequency content due to the factor $|f|$ in its formula. Figure 6 shows an example.

Figure 6: S transform: note the varying resolution of frequency and time and the emphasis on the higher frequency component

### II-D Wigner-Ville Distribution

The Wigner-Ville distribution (WVD) [1] overcomes the limited resolution of the Fourier- and wavelet-based methods using an autocorrelation approach. The standard version of the autocorrelation function (ACF) considers the pointwise multiplication of a signal with a lagged version of itself and integrates the result over time. It is defined as $r_{xx}(\tau)=\intop_{-\infty}^{\infty}x(t)x^{*}(t+\tau)dt$ The standard ACF depends only on the lag $\tau$, because time is integrated out of the result. The WVD uses a variation of the ACF, called the instantaneous autocorrelation, which omits the integration step. Thus time remains in the result. The instantaneous autocorrelation is therefore a two-dimensional function, depending on $t$ and the lag $\tau$: $R_{xx}(t,\tau)=x(t+\frac{\tau}{2})x^{*}(t-\frac{\tau}{2})$ As an example, Figure 7 shows the instantaneous autocorrelation of a triangular shaped signal. The WVD calculates the frequency content for each time step $t$ by taking a Fourier transform of the instantaneous autocorrelation across the axis of the lag variable $\tau$ for that given $t$ (depicted in Figure 7): $W(t,f)=\intop_{-\infty}^{\infty}R_{xx}(t,\tau)e^{-j2\pi f\tau}d\tau=\intop_{-\infty}^{\infty}x(t+\frac{\tau}{2})x^{*}(t-\frac{\tau}{2})e^{-j2\pi f\tau}d\tau$ The result is real-valued. This way of calculation is related to the fact that the power spectrum of a signal equals the Fourier transform of its ACF. The WVD offers very high resolution in both time and frequency, much finer than that of the STFT.

Figure 7: Depiction of the instantaneous autocorrelation $R_{xx}(t,\tau)$ and the calculation of the WVD of a triangular signal. The slices represent the pointwise multiplication of the signal with a lagged version of itself for a specific lag value $\tau$. Calculating a Fourier transform across the $\tau$-axis for every value of $t$ creates the WVD. If one integrated over $t$ instead, the result would be the standard ACF.
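The following minimal sketch implements this construction for sampled signals: it builds the instantaneous autocorrelation of the analytic signal and applies an FFT along the lag axis. With the symmetric kernel used here, frequency bin $k$ corresponds to $k\cdot f_{s}/(2N)$.

```python
# Minimal sketch of a discrete WVD: form the instantaneous autocorrelation of
# the analytic signal and take an FFT over the lag axis for every time step.
import numpy as np
from scipy.signal import hilbert

def wvd(x):
    z = hilbert(np.asarray(x, dtype=float))  # analytic signal (no negative
    N = len(z)                               # frequencies, fewer cross terms)
    R = np.zeros((N, N), dtype=complex)      # R[n, m] = z[n+m] * conj(z[n-m])
    for n in range(N):
        m = np.arange(-min(n, N - 1 - n), min(n, N - 1 - n) + 1)
        R[n, m % N] = z[n + m] * np.conj(z[n - m])
    return np.fft.fft(R, axis=1).real        # real-valued by symmetry of R

fs = 1000
n = np.arange(256)
W = wvd(np.cos(2 * np.pi * 100 * n / fs))    # rows: time, columns: frequency
```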
An important disadvantage of the WVD is the occurrence of so-called cross terms. These are artifacts that appear in the result if the input signal contains a mixture of several signal components, see Figure 8. They stem from the fact that the WVD is a quadratic (and therefore non-linear) transform, due to the way the instantaneous autocorrelation is calculated. The WVD of the superposition of two signals is $W_{x_{1}+x_{2}}=W_{x_{1}}+W_{x_{2}}+2\Re\left\\{W_{x_{1},x_{2}}\right\\}$ and may be dominated by the cross term $W_{x_{1},x_{2}}$, which may have twice the amplitude of the auto terms $W_{x_{1}}$ and $W_{x_{2}}$. Unfortunately, the occurrence of these cross terms limits the usefulness for many practical signals. The WVD is usually calculated with the analytic version of the input signal, which does not contain negative frequency components. This avoids cross terms between positive and negative frequency content that may mask low frequency components in the WVD. The cross terms occur midway between the auto terms and often have an oscillatory (high-frequency) pattern. A method to reduce cross terms is to suppress these oscillating components by additional low-pass filtering in time and frequency. However, this suppression of cross terms comes at the expense of reduced resolution. This idea of additional cross term suppression leads to the more general formulation of time-frequency transforms called Cohen’s class [1, 3]. From Cohen’s class many different variants can be deduced, which basically differ in the way the low-pass filter is designed. A prominent one is the smoothed pseudo Wigner-Ville distribution (SPWVD) [3]. Other variants such as Choi-Williams, Margenau-Hill or Rihaczek can be found in the literature, but often provide very similar results to the SPWVD for practical signals. The SPWVD is defined as the WVD filtered by two separate kernels $g(t)$ and $H(f)$ (which need to be chosen prior to the transform) that smooth the WVD in time and frequency: $SPWVD(t,f)=\intop_{t_{1}}\intop_{f_{1}}g(t-t_{1})H(f-f_{1})W(t_{1},f_{1})dt_{1}df_{1}$ Figure 8 shows an example of the WVD and Figure 9 its smoothed version, the SPWVD.

Figure 8: Wigner-Ville distribution: Cross term artifacts occur with high amplitude and oscillating behavior

Figure 9: Smoothed pseudo WVD: cross terms are suppressed
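A minimal sketch of this smoothing step is given below; it reuses the wvd() function from the previous listing, and separable Gaussian kernels stand in for $g(t)$ and $H(f)$ with purely illustrative widths.

```python
# Minimal sketch: SPWVD as the WVD smoothed by separable Gaussian kernels in
# time (axis 0) and frequency (axis 1). Reuses wvd() from the listing above;
# sigma values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

fs = 1000
n = np.arange(256)
x = np.cos(2 * np.pi * 100 * n / fs) + np.cos(2 * np.pi * 300 * n / fs)

W = wvd(x)                                # oscillating cross term midway (200 Hz)
spwvd = gaussian_filter(W, sigma=(6, 3))  # suppresses the cross term, blurs a bit
```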
## III Comparison and Time-Frequency Gallery

The introduced time-frequency transforms are now compared with respect to their resolution in time and frequency. Figure 10 presents an overview of the schematic resolution grids. It visualizes the impact of the transforms and their parameters on the resolution. For completeness, the figure also shows the plain time domain and spectrum representations, which do not resolve frequency and time, respectively. Furthermore, the transforms are compared for various synthetic signals and signals from real sources that cover different applications. Synthetic signals include sweeps, pulses, frequency steps and short sine bursts; some signals are composed of several of these elements. The results for synthetic signals are shown in the gallery figures “Synthetic signals (1)” and “Synthetic signals (2)”. Real-world signals include audio signals from speech and music (gallery figure “Audio signals”), radio signals with different modulation types (gallery figure “Radio signals”) and signals from nature and medical applications such as ECG, ultrasonic, bat sound and earthquake recordings (gallery figure “Signals from nature and medical applications”). The considered purpose of a transform is to visually present discriminative time and frequency features. Therefore only the magnitudes of the results are shown. Some transforms require certain parameters to be tuned, e.g. the window length of the STFT or the center frequency of the Morlet mother wavelet. These parameters have been chosen such that the features of the signals become easily visible. According to common practice in the literature, all transforms are plotted on a linear frequency scale, except for the wavelet transform, which occurs in both logarithmic and linear scale. Finally, the choice of the transform depends on the signal properties and the further use or subsequent processing of the results. Important aspects to consider are the desired resolution in time and frequency as well as the tolerability of artifacts. Table I summarizes the transforms and provides recommendations when to use which transform.

| Transform | Time-Frequency Resolution | Artifacts | Application |
|---|---|---|---|
| Short-Time Fourier (STFT) / Gabor | poor | no | general purpose |
| Continuous Wavelet (CWT) | poor, frequency dependent | no | when variable time-frequency resolution is required, e.g. audio |
| Stockwell (S) | poor, frequency dependent | no | when variable time-frequency resolution with a fixed phase alignment is required; emphasizes higher frequencies |
| Smoothed Pseudo Wigner-Ville (SPWVD) | good | sometimes | general purpose, when high resolution is required and some artifacts can be tolerated |
| Wigner-Ville (WVD) | excellent | strong | for simple signals or when artifacts can be tolerated |

Table I: When to use which transform

Figure 10: Schematics of the resolutions of the transforms, including the time domain representation and the standard Fourier spectrum for comparison

## References

* [1] J. Semmlow and B. Griffel, _Biosignal and Medical Image Processing_. CRC Press, 2014.
* [2] M. Sandsten, “Time-frequency analysis of time-varying signals and non-stationary processes,” _Lund University_, 2018.
* [3] F. Hlawatsch and G. F. Boudreaux-Bartels, “Linear and quadratic time-frequency signal representations,” _IEEE Signal Processing Magazine_, vol. 9, no. 2, pp. 21–67, 1992.
* [4] R. Stockwell, L. Mansinha, and R. Lowe, “Localization of the complex spectrum: the S transform,” _IEEE Transactions on Signal Processing_, 1996.

Gallery figures: Synthetic signals (1); Synthetic signals (2); Audio signals; Radio signals; Signals from nature and medical applications.
# Generalized Leibniz rules and Lipschitzian stability for expected-integral mappings

Boris S. Mordukhovich
Department of Mathematics, Wayne State University, Detroit, Michigan 48202, USA
<EMAIL_ADDRESS>

Pedro Pérez-Aros
Instituto de Ciencias de la Ingeniería, Universidad de O’Higgins, Rancagua, Chile
<EMAIL_ADDRESS>

Research of the first author was partially supported by the USA National Science Foundation under grants DMS-1512846 and DMS-1808978, by the USA Air Force Office of Scientific Research under grant #15RT04, and by the Australian Research Council under Discovery Project DP-190100555. Research of the second author was partially supported by ANID grants Fondecyt Regular 1200283 and Fondecyt Regular 1190110.

###### Abstract

This paper is devoted to the study of the expected-integral multifunctions given in the form $\mathrm{E}_{\Phi}(x):=\int_{T}\Phi_{t}(x)d\mu,$ where $\Phi\colon T\times{\mathbb{R}^{n}}\rightrightarrows\mathbb{R}^{m}$ is a set-valued mapping on a measure space $(T,\mathcal{A},\mu)$. Such multifunctions appear in applications to stochastic programming, which requires developing efficient calculus rules of generalized differentiation. Major calculus rules are developed in this paper for coderivatives of multifunctions $\mathrm{E}_{\Phi}$ and second-order subdifferentials of the corresponding expected-integral functionals with applications to constraint systems arising in stochastic programming. The paper is self-contained, presenting in the preliminaries some needed results on sequential first-order subdifferential calculus of expected-integral functionals taken from the first paper of this series.

###### Keywords: Stochastic programming Generalized differentiation Integral multifunctions Leibniz rules Lipschitzian stability

###### MSC: Primary: 49J53, 90C15, 90C34 Secondary: 49J52

## 1 Introduction

Stochastic programming has been highly recognized as an important area of optimization theory with a variety of practical applications; see, e.g., the book sdr and the references therein. Although advanced methods of variational analysis and generalized differentiation have been used in the study and applications of stochastic programming (see, e.g., ah ; ap ; burke ; chp20 ; dent-rusz ; hhp ; hr ; mp20 ; mor-sag18 ; mor-sag19 among other publications), the number of results currently achieved in this vein for stochastic problems does not compare with the broad implementation of the aforementioned methods in deterministic optimization. The primary motivation for our study is to narrow the gap between deterministic and stochastic applications of variational analysis and generalized differentiation to optimization and related problems. The underlying feature of stochastic problems is the presence of integration with respect to probability measures over extended-real-valued variable integrands as well as over vector-valued and set-valued mappings. Efficient calculus rules for their generalized differentiation are therefore in strong demand.
In our preceding paper mp20a we established various sequential versions of the generalized Leibniz rule (subdifferentiation under the integral sign) in terms of regular subgradients of the expected-integral functionals that are defined by $\mathrm{E}_{\varphi}(x,\mathpzc{y}):=\int_{T}\varphi_{t}\big{(}x,\mathpzc{y}(t)\big{)}d\mu,$ (1.1) where $x\in\mathbb{R}^{n}$, $\mathpzc{y}\in\textnormal{L}^{1}(T;\mathbb{R}^{m})$, $\varphi_{t}(x,y):=\varphi(t,x,y)$, and $\varphi\colon T\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\overline{\mathbb{R}}:=[-\infty,\infty]$ is an extended-real-valued function on a complete finite measure space $(T,\mathcal{A},\mu)$, which is assumed below. The goal of this paper is to proceed much further and to study expected-integral multifunctions (set-valued mappings) given in the form $\mathrm{E}_{\Phi}(x):=\int_{T}\Phi_{t}(x)d\mu,$ (1.2) where $\Phi\colon T\times{\mathbb{R}^{n}}\rightrightarrows\mathbb{R}^{m}$ is a set-valued (in particular, single-valued $\Phi\colon T\times{\mathbb{R}^{n}}\to\mathbb{R}^{m}$) mapping defined on a measure space. The reader may consult the survey paper hess for an overview of the integration of random set-valued mappings and set-valued probability theory. We also refer the reader to the recent publications burke ; dent-rusz and the bibliographies therein for new developments and applications. The work presented in this paper is mostly theoretical, while the intended stochastic applications are discussed in the concluding section below. A natural extension of subdifferentials to the case of (single-valued and set-valued) mappings is provided by coderivatives, and thus we focus here on deriving coderivative versions of Leibniz’s rule for expected-integral multifunctions of type (1.2). To the best of our knowledge, this has never been done in the literature. In contrast to mp20a , our major results are obtained in the exact/pointwise form, i.e., they are formulated exactly at the points in question via the limiting constructions. Such results are clearly more convenient for the theory and applications than sequential/fuzzy ones, and they provide new rules even for expected-integral functionals (1.1) in comparison with mp20a . An important role in deriving pointwise coderivative Leibniz-type rules is played by the new integrable quasi-Lipschitz property for set-valued random normal integrands $\Phi_{t}(x)$ in (1.2). For deterministic multifunctions, the introduced property is equivalent to the (Aubin) Lipschitz-like property due to the (Mordukhovich) coderivative criterion of variational analysis (see m93 ; m06 ; rw ), while these two Lipschitzian properties are essentially different in stochastic frameworks. This happens, in particular, in spaces with nonatomic measures, where the integrable quasi-Lipschitz property turns out to be equivalent to the integrable counterpart of the (Hausdorff) locally Lipschitzian property for random multifunctions. All of this leads us to valuable conclusions about Lipschitz stability of random feasible solution mappings in problems of stochastic programming. They include, in particular, efficient conditions for quantitative continuity of parametric sets of feasible solutions to stochastic programs with inequality constraints.
Along with the study of expected-integral multifunctions (1.2) defined by arbitrary integrand mappings $\Phi_{t}(x)$, we consider structural multifunctions of the type $\Phi_{t}(x)=F\big{(}t,g_{t}(x)\big{)}\;\mbox{ for all }\;x\in U\;\mbox{ and a.e.\ }\;t\in T$ (1.3) defined as compositions of set-valued mappings $F$ and single-valued mappings $g$ between finite-dimensional spaces. It is shown that such multifunctions (1.2) exhibit integrable Lipschitz stability in the case of convex outer mappings $F$ and smooth inner mappings $g$ in (1.3). Then we introduce a new class of integrable amenable set-valued compositions and use them to establish chain rules of coderivative integration for the corresponding expected-integral multifunctions. The obtained results are specified for the case where the set-valued mappings $F$ in (1.3) are given by inequality constraints, which is a typical case for feasible solution mappings in constrained stochastic programming. Finally, in this paper we establish, for the first time in the literature, Leibniz-type rules for second-order subdifferentials of expected-integral functionals given in the form $\mathrm{E}_{\varphi}(x):=\int_{T}\varphi_{t}\big{(}x\big{)}d\mu,$ (1.4) generated by extended-real-valued functions $\varphi\colon T\times{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$. The obtained results are specified for the case where $\varphi$ in (1.4) is given as a maximum function, which is important for applications to problems of stochastic programming. The rest of the paper is organized as follows. Section 2 recalls and discusses the basic constructions of generalized differentiation in variational analysis that are systematically employed in the subsequent material. In Section 3 we briefly review some notions of measurability and integration for multifunctions and then present the sequential Leibniz rules for subdifferentiation of expected-integral functionals (1.1) taken from mp20 and used in what follows. Section 4 is devoted to the study of Lipschitzian properties of random multifunctions and presents various characterizations of all three properties mentioned above, together with relationships between them in general measure spaces as well as in spaces with purely atomic and nonatomic measures. In Section 5 we obtain several results of a new type labeled as coderivative Leibniz rules, which evaluate both regular and limiting coderivatives of expected-integral multifunctions (1.2) via the integration of the random integrands $\Phi_{t}$ therein. The results obtained include sequential Leibniz rules for regular coderivatives and pointwise ones for the limiting coderivative construction. As a consequence of the coderivative Leibniz rules of the latter type and the aforementioned coderivative characterizations of Lipschitzian properties, we establish efficient conditions for Lipschitz stability of expected-integral multifunctions in terms of coderivatives of their integrands $\Phi_{t}(x)$. Section 6 addresses the setting where the integrand $\Phi_{t}(x)$ is given in the composition form (1.3) defined by integrable amenable mappings. The obtained conditions for Lipschitz stability and composite Leibniz rules are specified here for random sets of feasible solutions in constrained stochastic programming.
Considering in Section 7 expected functionals (1.4), we derive sequential and pointwise second-order Leibniz rules in terms of second-order subdifferentials of two types, which are well known in variational analysis but have never been used in the study of expected-integral functionals and applications to stochastic programming. The concluding Section 8 summarizes the main achievements of this paper and discusses some directions of our future research and applications. In this paper we use the standard notation from variational analysis, generalized differentiation, and stochastic programming; see, e.g., m06 ; rw ; sdr . Recall that the extended real line is denoted by $\overline{\mathbb{R}}:=[-\infty,\infty]$ with the conventions that $(\pm\infty)\cdot 0=0\cdot(\pm\infty)=0$ and $\infty-\infty=-\infty+\infty=\infty$. The symbol $\mathbb{B}_{r}(x)$ stands for the closed ball centered at $x$ with radius $r>0$, while the unit closed ball of the space in question is denoted simply by $\mathbb{B}$. Given a nonempty set $\Omega\subset{\mathbb{R}^{n}}$, its indicator function $\delta_{\Omega}$ is defined by $\delta_{\Omega}(x):=0$ if $x\in\Omega$ and $\delta_{\Omega}(x):=\infty$ otherwise, while the characteristic function $\operatorname{\mathds{1}}_{\Omega}$ is defined by $\operatorname{\mathds{1}}_{\Omega}(x):=1$ for $x\in\Omega$ and $\operatorname{\mathds{1}}_{\Omega}(x):=0$ for $x\notin\Omega$. The symbol $x\stackrel{{\scriptstyle\Omega}}{{\to}}\bar{x}$ means that $x\to\bar{x}$ with $x\in\Omega$, and $\Omega^{c}$ denotes the complement of $\Omega$. Finally, $\mathbb{N}:=\\{1,2,\ldots\\}$ and $\mathbb{R}_{+}:=\\{\alpha\in\mathbb{R}\;|\;\alpha>0\\}$.

## 2 Preliminaries from Generalized Differentiation

In this section we recall some basic constructions of generalized differentiation in variational analysis that are broadly used in what follows. Let $\Phi\colon{\mathbb{R}^{n}}\rightrightarrows\mathbb{R}^{m}$ be a set-valued mapping with the domain and graph given by, respectively, $\mbox{\rm dom}\,\Phi:=\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;\Phi(x)\neq\emptyset\big{\\}}\;\mbox{ and }\;\operatorname{gph}\Phi:=\big{\\{}(x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{m}\;\big{|}\;y\in\Phi(x)\big{\\}}.$ The Painlevé-Kuratowski outer limit of $\Phi$ as $x\to\bar{x}$ is defined by $\mathop{{\rm Lim}\,{\rm sup}}_{x\to\bar{x}}\Phi(x):=\big{\\{}v\in\mathbb{R}^{m}\;\big{|}\;\exists\,\mbox{ seqs. }\;x_{k}\to\bar{x},\;v_{k}\to v\;\mbox{ s.t. }\;v_{k}\in\Phi(x_{k})\big{\\}}.$ (2.1) The regular/Fréchet normal cone to $\Omega$ at $\bar{x}\in\Omega$ is $\widehat{N}(\bar{x};\Omega):=\Big{\\{}x^{*}\in\mathbb{R}^{n}\;\Big{|}\;\limsup_{x\stackrel{{\scriptstyle\Omega}}{{\to}}\bar{x}}\frac{\langle x^{*},x-\bar{x}\rangle}{\|x-\bar{x}\|}\leq 0\Big{\\}}$ (2.2) with $\widehat{N}(\bar{x};\Omega):=\emptyset$ if $\bar{x}\notin\Omega$. The limiting/Mordukhovich normal cone to $\Omega$ at $\bar{x}$ is defined via (2.1) by $N(\bar{x};\Omega):=\mathop{{\rm Lim}\,{\rm sup}}_{x\to\bar{x}}\widehat{N}(x;\Omega).$ (2.3) Note that, in contrast to (2.2), the limiting normal cone (2.3) and the associated coderivative and subdifferential constructions (see below) are robust, meaning that they are closed-graph multifunctions with respect to perturbations of the initial points.
Based on the normal cones (2.2) and (2.3) to the graph of $\Phi\colon{\mathbb{R}^{n}}\rightrightarrows\mathbb{R}^{m}$ at $(\bar{x},\bar{y})\in\operatorname{gph}\Phi$, we define the corresponding regular and basic/limiting coderivatives of $\Phi$ at $(\bar{x},\bar{y})$ for all $y^{*}\in\mathbb{R}^{m}$ by, respectively, $\widehat{D}^{*}\Phi(\bar{x},\bar{y})(y^{*}):=\big{\\{}x^{*}\in{\mathbb{R}^{n}}\;\big{|}\;(x^{*},-y^{*})\in\widehat{N}\big{(}(\bar{x},\bar{y});\operatorname{gph}\Phi\big{)}\big{\\}},$ (2.4) $D^{*}\Phi(\bar{x},\bar{y})(y^{*}):=\big{\\{}x^{*}\in{\mathbb{R}^{n}}\;\big{|}\;(x^{*},-y^{*})\in N\big{(}(\bar{x},\bar{y});\operatorname{gph}\Phi\big{)}\big{\\}},$ (2.5) where we omit $\bar{y}$ in the coderivative notation of $\Phi$ if it is a singleton $\\{\Phi(\bar{x})\\}$. If $\Phi\colon{\mathbb{R}^{n}}\to\mathbb{R}^{m}$ is ${\cal C}^{1}$-smooth around $\bar{x}$ (in fact, merely strictly differentiable at this point), then both coderivatives above reduce to the adjoint (transpose) Jacobian matrix linearly applied to any $y^{*}\in\mathbb{R}^{m}$: $\widehat{D}^{*}\Phi(\bar{x})(y^{*})=D^{*}\Phi(\bar{x})(y^{*})=\big{\\{}\nabla\Phi(\bar{x})^{*}y^{*}\big{\\}}.$ (2.6) In general, both coderivatives (2.4) and (2.5) are positively homogeneous multifunctions, where (2.4) is always convex-valued, while (2.5) may fail to be convex-valued even for very simple convex functions such as $\Phi(x):=|x|$ at $\bar{x}=0\in\mathbb{R}$. Nevertheless, the limiting coderivative (2.5), together with the normal cone (2.3) and the associated first-order and second-order subdifferentials of extended-real-valued functions presented below, enjoy full pointwise calculus based on variational and extremal principles of variational analysis; see the books m06 ; m18 ; rw for more details and references. Unfortunately, this is not the case for the corresponding regular constructions, for which only “fuzzy” results are available. On the other hand, it is convenient to have the following representation: $D^{*}\Phi(\bar{x},\bar{y})(y^{\ast})=\mathop{{\rm Lim}\,{\rm sup}}_{(x,y)\stackrel{{\scriptstyle\operatorname{gph}\Phi}}{{\to}}(\bar{x},\bar{y}),\,v^{*}\to y^{*}}\widehat{D}^{*}\Phi(x,y)(v^{*})$ (2.7) of (2.5) at $(\bar{x},\bar{y})$ as the outer limit of (2.4) at points nearby.
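To illustrate the aforementioned lack of convexity (a standard computation included here only for illustration), consider $\Phi(x):=|x|$ at $(\bar{x},\bar{y})=(0,0)$. Since $\operatorname{gph}\Phi$ is the union of the two rays spanned by $(1,1)$ and $(-1,1)$, definitions (2.2) and (2.3) give us $\widehat{N}\big{(}(0,0);\operatorname{gph}\Phi\big{)}=\big{\\{}(a,b)\;\big{|}\;b\leq-|a|\big{\\}}\;\mbox{ and }\;N\big{(}(0,0);\operatorname{gph}\Phi\big{)}=\big{\\{}(a,b)\;\big{|}\;b\leq-|a|\big{\\}}\cup\mathbb{R}(1,-1)\cup\mathbb{R}(1,1),$ where the last two lines arise as limits of regular normals along the right and left branches of the graph, respectively. Hence (2.4) and (2.5) yield $\widehat{D}^{*}\Phi(0)(-1)=\emptyset$ while $D^{*}\Phi(0)(-1)=\\{-1,1\\}$, which is a nonconvex set.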
Next we consider an extended-real-valued function $\varphi\colon\mathbb{R}^{n}\to\overline{\mathbb{R}}$ with its domain and epigraph that are defined, respectively, by $\mbox{\rm dom}\,\varphi:=\big{\\{}x\in\mathbb{R}^{n}\;\big{|}\;\varphi(x)<\infty\big{\\}}\;\mbox{ and }\;\mbox{\rm epi}\,\varphi:=\big{\\{}(x,\alpha)\in\mathbb{R}^{n+1}\;\big{|}\;\alpha\geq\varphi(x)\big{\\}}.$ The properness of $\varphi$, which we assume from now on, means that $\mbox{\rm dom}\,\varphi\neq\emptyset$ and $\varphi(x)>-\infty$ for all $x\in{\mathbb{R}^{n}}$. Applying the normal cones (2.2) and (2.3) to the epigraph of $\varphi$ at $(\bar{x},\varphi(\bar{x}))$ with $|\varphi(\bar{x})|\neq\infty$ gives us the corresponding (geometric) definitions of the regular and limiting/basic subdifferentials $\widehat{\partial}\varphi(\bar{x}):=\big{\\{}x^{*}\in\mathbb{R}^{n}\;\big{|}\;(x^{*},-1)\in\widehat{N}\big{(}(\bar{x},\varphi(\bar{x}));\mbox{\rm epi}\,\varphi\big{)}\big{\\}},$ (2.8) $\partial\varphi(\bar{x}):=\big{\\{}x^{*}\in\mathbb{R}^{n}\;\big{|}\;(x^{*},-1)\in N\big{(}(\bar{x},\varphi(\bar{x}));\mbox{\rm epi}\,\varphi\big{)}\big{\\}},$ (2.9) while the reader is referred to m06 ; m18 ; rw for equivalent analytic descriptions, various properties, and applications. We say that the function $\varphi$ is lower regular at $\bar{x}$ if $\varphi$ is finite at $\bar{x}$ and $\operatorname{\widehat{\partial}}\varphi(\bar{x})=\operatorname{\partial}\varphi(\bar{x})$. In the finite-dimensional setting under consideration, the regular subdifferential (2.8) admits the following useful representation: $\widehat{\partial}\varphi(\bar{x})=\big{\\{}x^{\ast}\in{\mathbb{R}^{n}}\;\big{|}\;\langle x^{\ast},w\rangle\leq d{\varphi}(\bar{x})(w)\;\text{ for all }\;w\in{\mathbb{R}^{n}}\big{\\}}$ (2.10) in terms of the (Dini-Hadamard) subderivative of $\varphi$ at $\bar{x}$ with respect to the direction $w$ defined by $d{\varphi}(\bar{x})(w)=\liminf\limits_{t\downarrow 0,\,u\to w}\frac{\varphi(\bar{x}+tu)-\varphi(\bar{x})}{t}.$ (2.11) Finally in this section, we recall two notions of second-order subdifferentials of extended-real-valued functions that are obtained by the scheme of m92 as coderivatives of subgradient mappings; see m06 ; m18 for more details. Given $\varphi\colon{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$ finite at $\bar{x}$ and $\bar{x}^{\ast}\in\partial\varphi(\bar{x})$, the basic second-order subdifferential of $\varphi$ at $\bar{x}$ relative to $\bar{x}^{\ast}$ is defined as the set-valued mapping $\partial^{2}\varphi(\bar{x},\bar{x}^{\ast})\colon{\mathbb{R}^{n}}\rightrightarrows{\mathbb{R}^{n}}$ with the values $\partial^{2}\varphi(\bar{x},\bar{x}^{\ast})(v^{\ast}):=\big{(}D^{\ast}\partial\varphi\big{)}(\bar{x},\bar{x}^{\ast})(v^{\ast})\;\mbox{ whenever }\;v^{\ast}\in{\mathbb{R}^{n}}.$ (2.12) The combined second-order subdifferential of $\varphi$ at $\bar{x}$ with respect to $\bar{x}^{\ast}$ is defined similarly to (2.12) by replacing the basic coderivative (2.5) in (2.12) with the regular one (2.4) as $\breve{\partial}^{2}\varphi(\bar{x},\bar{x}^{\ast})(v^{\ast}):=\big{(}\widehat{D}^{\ast}\partial\varphi\big{)}(\bar{x},\bar{x}^{\ast})(v^{\ast})\;\mbox{ for all }\;v^{\ast}\in{\mathbb{R}^{n}}.$ (2.13) The indication of $\bar{x}^{*}$ is dropped in the notation of (2.12) and (2.13) when $\partial\varphi(\bar{x})=\\{\nabla\varphi(\bar{x})\\}$. Note that for ${\cal C}^{2}$-smooth functions $\varphi$ we have $\partial^{2}\varphi(\bar{x})(v^{\ast})=\breve{\partial}^{2}\varphi(\bar{x})(v^{\ast})=\big{\\{}\nabla^{2}\varphi(\bar{x})v^{*}\big{\\}}\;\mbox{ for any }\;v^{*}\in{\mathbb{R}^{n}}$ via the (symmetric) Hessian matrix. Calculus rules for the second-order subdifferentials and their computations for remarkable classes of functions can be found in m06 ; m18 ; mr and the references therein.
## 3 Subdifferentiation of Expected-Integral Functionals

In this section we review the needed notions of measurability and integration for set-valued mappings and then present some results on sequential Leibniz rules for expected-integral functionals obtained in mp20 . Throughout the paper, $(T,\mathcal{A},\mu)$ is a complete finite measure space as mentioned in Section 1. To avoid confusion, we use the special font (as, e.g., $\mathpzc{v},\mathpzc{w},\mathpzc{x},\mathpzc{y},\mathpzc{z}$, etc.) to denote vector functions defined on $T$. When $p\in[1,\infty]$, the notation $\textnormal{L}^{p}({T},\mathbb{R}^{n})$ stands for the space of all the (equivalence classes, by the relation of equality almost everywhere, of) measurable functions $\mathpzc{x}$ such that the scalar function $\|\mathpzc{x}(\cdot)\|^{p}$ is integrable for $p\in[1,\infty)$ and $\|\mathpzc{x}(\cdot)\|$ is essentially bounded for $p=\infty$. The norm in $\textnormal{L}^{p}(T,\mathbb{R}^{n})$ is denoted by $\|\cdot\|_{p}$, and the points in ${\mathbb{R}^{n}}$ are identified with constant functions in $\textnormal{L}^{p}(T,{\mathbb{R}^{n}})$. Thus for $x\in{\mathbb{R}^{n}}$ and $\mathpzc{x}\in\textnormal{L}^{p}(T,{\mathbb{R}^{n}})$ we have the expressions $\|x-\mathpzc{x}\|_{p}:=\left(\int_{T}\|x-\mathpzc{x}(t)\|^{p}d\mu\right)^{1/p}\;\mbox{ as }\;p\in[1,\infty)\;\mbox{ and }\;\|x-\mathpzc{x}\|_{\infty}:=\operatorname*{ess\,sup}\limits_{t\in T}\|x-\mathpzc{x}(t)\|.$ Recall that a set-valued mapping $F\colon T\rightrightarrows\mathbb{R}^{n}$ is measurable if for every open set $U\subset\mathbb{R}^{n}$ the inverse image $F^{-1}(U):=\\{t\in T\;|\;F(t)\cap U\neq\emptyset\\}$ is measurable, i.e., $F^{-1}(U)\in\mathcal{A}$. The mapping $F$ is said to be graph measurable if $\operatorname{gph}F\in\mathcal{A}\otimes\mathcal{B}(\mathbb{R}^{n})$, where $\mathcal{B}(\mathbb{R}^{n})$ is the Borel $\sigma$-algebra, i.e., the $\sigma$-algebra generated by the open subsets of $\mathbb{R}^{n}$. It is easy to see, due to the completeness of the measure space $(T,\mathcal{A},\mu)$, that a multifunction $F$ with closed values is measurable if and only if it is graph measurable. The Aumann integral of $F\colon T\rightrightarrows\mathbb{R}^{n}$ over a measurable set $S\in\mathcal{A}$ is defined by $\int_{S}F(t)d\mu:=\bigg{\\{}\int_{S}\mathpzc{x}^{*}(t)d\mu\;\bigg{|}\;\mathpzc{x}^{*}\in{\textnormal{L}}^{1}(T,{\mathbb{R}^{n}})\textnormal{ and }\mathpzc{x}^{*}(t)\in F(t)\text{ a.e.}\bigg{\\}}.$ (3.1) The fundamental Lyapunov convexity theorem says that if the measure $\mu$ is nonatomic on $T$ (i.e., $\mathcal{A}$ contains no atom: a set $A\in{\cal A}$ with $\mu(A)>0$ such that every $B\subset A$ with $B\in{\cal A}$ and $\mu(B)<\mu(A)$ satisfies $\mu(B)=0$), then the integral set in (3.1) is closed and convex in $\mathbb{R}^{n}$ provided that the multifunction $F$ is measurable and uniformly bounded by a function summable on $T$. The reader is referred to the book du and its bibliography for the Lyapunov convexity theorem and its infinite-dimensional extensions; see also mor-sag18 for the most recent results in this direction.
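To illustrate the Lyapunov convexity theorem, consider the standard example where $T=[0,1]$ is endowed with the (nonatomic) Lebesgue measure and $F(t):=\\{-1,1\\}$ for all $t\in T$. Every integrable selection of $F$ has the form $\mathpzc{x}^{*}=\operatorname{\mathds{1}}_{A}-\operatorname{\mathds{1}}_{A^{c}}$ for some measurable set $A\subset T$, and hence $\int_{T}\mathpzc{x}^{*}(t)d\mu=2\mu(A)-1$. Since the Lebesgue measure is nonatomic, $\mu(A)$ attains every value in $[0,1]$, and therefore $\int_{T}F(t)d\mu=[-1,1]$ is closed and convex in spite of the nonconvexity of the values $F(t)$.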
Together with (3.1), in this paper we often use the following notion for extended-real-valued functions of two variables. A function $\varphi\colon T\times\mathbb{R}^{n}\to\overline{\mathbb{R}}$ is called a normal integrand if the multifunction $t\mapsto\mbox{\rm epi}\,\varphi_{t}$ is measurable with closed values. By the completeness of $(T,\mathcal{A},\mu)$, this is equivalent to saying that $\varphi$ is $\mathcal{A}\otimes\mathcal{B}(\mathbb{R}^{n})$-measurable, and that for every $t\in T$ the function $\varphi_{t}:=\varphi(t,\cdot)$ is lower semicontinuous (l.s.c.); see, e.g., (rw, Corollary 14.34). The normal integrand $\varphi$ is proper if the function $\varphi_{t}$ is proper for each $t\in T$. If furthermore $\varphi_{t}$ is convex for all $t\in T$, then $\varphi$ is called a convex normal integrand. Motivated by the above definition of normal integrands for extended-real-valued functions, we now introduce its counterpart for set-valued mappings.

###### Definition 1 (set-valued normal integrands)

We say that a mapping $\Phi\colon T\times{\mathbb{R}^{n}}\rightrightarrows\mathbb{R}^{m}$ is a set-valued normal integrand on a measure space $(T,\mathcal{A},\mu)$ if for all $t\in T$ the multifunction $\Phi_{t}:=\Phi(t,\cdot)$ has closed graph and the graph of $\Phi$ belongs to $\mathcal{A}\otimes\mathcal{B}({\mathbb{R}^{n}}\times\mathbb{R}^{m})$. If in addition the set $\operatorname{gph}\Phi_{t}$ is convex for a.e. $t\in T$, then we say that $\Phi$ is a set-valued convex normal integrand.

Based on our previous discussions and the completeness of the measure space $(T,\mathcal{A},\mu)$, we conclude that Definition 1 of set-valued normal integrands amounts to saying that $t\mapsto\operatorname{gph}\Phi_{t}$ is a measurable multifunction with closed values. Next we present some results on measurable multifunctions and normal integrands that are broadly used in what follows. The first proposition concerns graph measurability of subgradient mappings generated by extended-real-valued normal integrands as well as of coderivatives associated with set-valued normal integrand mappings.

###### Proposition 1 (graph measurability of subgradient and coderivative mappings)

Let $\varphi\colon T\times{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$ be a proper normal integrand, and let $\Phi\colon T\times{\mathbb{R}^{n}}\rightrightarrows\mathbb{R}^{m}$ be a proper set-valued normal integrand. Then the following multifunctions are graph measurable:

1. (i) $t\mapsto\operatorname{gph}\widehat{\partial}\varphi_{t}:=\big{\\{}(x,x^{\ast})\in\mathbb{R}^{2n}\;\big{|}\;x^{\ast}\in\widehat{\partial}\varphi_{t}(x)\big{\\}}$.
2. (ii) $t\mapsto\operatorname{gph}\partial\varphi_{t}:=\big{\\{}(x,x^{\ast})\in\mathbb{R}^{2n}\;\big{|}\;x^{\ast}\in\partial\varphi_{t}(x)\big{\\}}$.
3. (iii) $t\mapsto\operatorname{gph}\widehat{D}^{\ast}\Phi_{t}:=\big{\\{}(x,y,x^{\ast},y^{\ast})\in\mathbb{R}^{2(n+m)}\;\big{|}\;x^{\ast}\in\widehat{D}^{\ast}\Phi_{t}(x,y)(y^{\ast})\big{\\}}$.
4. (iv) $t\mapsto\operatorname{gph}D^{\ast}\Phi_{t}:=\big{\\{}(x,y,x^{\ast},y^{\ast})\in\mathbb{R}^{2(n+m)}\;\big{|}\;x^{\ast}\in D^{\ast}\Phi_{t}(x,y)(y^{\ast})\big{\\}}$.

Proof. Item (i) is proved in (mp20, Theorem 3.2), and item (ii) follows from (i), but it can also be derived from (rw, Theorems 14.26 and 14.60). Finally, items (iii) and (iv) follow from (i) and (ii), respectively, by applying them to the normal integrand $(t,x,y)\mapsto\delta_{\operatorname{gph}\Phi_{t}}(x,y)$.
$\hfill\square$

The following result is classical in the theory of measurable multifunctions. It asserts that a graph measurable multifunction with nonempty, while not necessarily closed, values admits a measurable (single-valued) selection; see, e.g., the book (cv, Theorem III.22) and its references.

###### Proposition 2 (measurable selections)

Let $F\colon T\rightrightarrows{\mathbb{R}^{n}}$ be a graph measurable multifunction with nonempty values on a measurable space $(T,\mathcal{A},\mu)$. Then there exists a measurable single-valued mapping $\mathpzc{x}\colon T\to{\mathbb{R}^{n}}$ such that $\mathpzc{x}(t)\in F(t)$ for a.e. $t\in T$.

Finally in this section, we recall the notion of expected-integral functionals introduced in our preceding paper mp20 and present two sequential Leibniz-type rules in terms of regular subgradients that were established therein. Let $\varphi\colon T\times{\mathbb{R}^{n}}\times\mathbb{R}^{m}\to\overline{\mathbb{R}}$ be a proper normal integrand such that there exists $\nu\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ ensuring the uniform boundedness from below condition $\displaystyle\varphi_{t}(v,w)\geq-\nu(t)\;\text{ for all }\;v\in{\mathbb{R}^{n}},\;w\in\mathbb{R}^{m},\text{ and all }t\in T.$ (3.2) As defined in (1.1), the expected-integral functional $\mathrm{E}_{\varphi}\colon{\mathbb{R}^{n}}\times\textnormal{L}^{1}(T,\mathbb{R}^{m})\to\overline{\mathbb{R}}$ generated by a normal integrand $\varphi$ is given by the formula $\displaystyle\mathrm{E}_{\varphi}(x,\mathpzc{y}):=\int_{T}\varphi_{t}(x,\mathpzc{y}(t))d\mu.$ It is well known in measure theory (see, e.g., bog ) that in a finite measure space $(T,\mathcal{A},\mu)$ there exist measurable disjoint sets $T_{pa}$ and $T_{na}$ such that $\mu_{pa}(\cdot):=\mu(\cdot\cap T_{pa})$ is purely atomic and $\mu_{na}(\cdot):=\mu(\cdot\cap T_{na})$ is nonatomic. Moreover, $T_{pa}$ is a countable union of disjoint atoms. In the following results we assume that for a given point of interest $\bar{x}$ there exists $\rho>0$ such that $\varphi_{t}(v,\cdot)\;\text{ is convex for all }\;v\in\mathbb{B}_{\rho}(\bar{x})\;\text{ and }\;t\in T_{na}.$ (3.3) Now we are ready to present two sequential subdifferential Leibniz rules for expected-integral functionals taken, respectively, from Theorem 5.2 and Theorem 5.4 of our preceding paper mp20 . Note that, although these results are not written in the expected Leibniz form, they certainly can be treated as approximate versions. Furthermore, such results will lead us to the desired generalized Leibniz rules by employing limiting procedures under appropriate qualification conditions.

###### Theorem 3.1 (subdifferential Leibniz rule, I)

Let $\varphi$ be a proper normal integrand on $(T,\mathcal{A},\mu)$ satisfying (3.2) and (3.3) around some $\bar{x}\in{\mathbb{R}^{n}}$. Pick any $p,q\in(1,\infty)$ with $1/p+1/q=1$ and suppose that $(\bar{x}^{*},\bar{\mathpzc{y}}^{\ast})\in\widehat{\partial}\mathrm{E}_{\varphi}(\bar{x},\bar{\mathpzc{y}})$ is such that the function $t\mapsto\inf_{{\mathbb{R}^{n}}\times\mathbb{R}^{m}}\big{\\{}\varphi_{t}(\cdot,\cdot)-\langle\bar{\mathpzc{y}}^{\ast}(t),\cdot\rangle\big{\\}}$ is integrable on $T$.
Then there exist sequences $\\{x_{k}\\}\subset{\mathbb{R}^{n}}$, $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{p}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{*}\\}\subset{\textnormal{L}}^{q}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,\mathbb{R}^{m})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ satisfying the assertions:

1. (i) $\big{(}\mathpzc{x}_{k}^{*}(t),\mathpzc{y}_{k}^{\ast}(t)\big{)}\in\widehat{\partial}\varphi_{t}\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t)\big{)}$ for a.e. $t\in T$ and all $k\in\mathbb{N}$.
2. (ii) $\|\bar{x}-x_{k}\|\to 0$, $\|\bar{x}-\mathpzc{x}_{k}\|_{p}\to 0$, and $\|\bar{\mathpzc{y}}-\mathpzc{y}_{k}\|_{1}\to 0$ as $k\to\infty$.
3. (iii) $\displaystyle\bigg{\|}\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu-\bar{x}^{\ast}\bigg{\|}\to 0$ and $\|\mathpzc{x}_{k}^{*}\|_{q}\|\mathpzc{x}_{k}-x_{k}\|_{p}\to 0$ as $k\to\infty$.
4. (iv) $\displaystyle\int_{T}\Big{|}\varphi_{t}\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t)\big{)}-\varphi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}\Big{|}d\mu\to 0$ as $k\to\infty$.
5. (v) $\|\mathpzc{y}_{k}^{\ast}-\bar{\mathpzc{y}}^{\ast}\|_{\infty}\to 0$ as $k\to\infty$.

###### Theorem 3.2 (subdifferential Leibniz rule, II)

Let $\varphi$ be a proper normal integrand on $(T,\mathcal{A},\mu)$ satisfying (3.2) and (3.3) at $\bar{x}\in{\mathbb{R}^{n}}$, and let $(\bar{x}^{*},\bar{\mathpzc{y}}^{\ast})\in\widehat{\partial}\mathrm{E}_{\varphi}(\bar{x},\bar{\mathpzc{y}})$ be chosen so that the function $t\mapsto\inf_{\mathbb{B}_{\hat{\rho}}(\bar{x})\times\mathbb{R}^{m}}\big{\\{}\varphi_{t}(\cdot,\cdot)-\langle\bar{\mathpzc{y}}^{\ast}(t),\cdot\rangle\big{\\}}$ is integrable on $T$ for some $\hat{\rho}>0$. Then there exist sequences $\\{x_{k}\\}\subset{\mathbb{R}^{n}}$, $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{\infty}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{\ast}\\}\subset{\textnormal{L}}^{1}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,\mathbb{R}^{m})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ such that:

1. (i) $\big{(}\mathpzc{x}_{k}^{*}(t),\mathpzc{y}_{k}^{\ast}(t)\big{)}\in\widehat{\partial}\varphi_{t}\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t)\big{)}$ for a.e. $t\in T$ and all $k\in\mathbb{N}$.
2. (ii) $\|\bar{x}-x_{k}\|\to 0$, $\|\bar{x}-\mathpzc{x}_{k}\|_{\infty}\to 0$, and $\|\bar{\mathpzc{y}}-\mathpzc{y}_{k}\|_{1}\to 0$ as $k\to\infty$.
3. (iii) $\displaystyle\bigg{\|}\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu-\bar{x}^{\ast}\bigg{\|}\to 0$ and $\displaystyle\int_{T}\Big{\|}\mathpzc{x}_{k}^{*}(t)\Big{\|}\cdot\Big{\|}\mathpzc{x}_{k}(t)-x_{k}\Big{\|}d\mu\to 0$ as $k\to\infty$.
4. (iv) $\displaystyle\int_{T}\Big{|}\varphi_{t}\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t)\big{)}-\varphi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}\Big{|}d\mu\to 0$ as $k\to\infty$.
5. (v) $\|\mathpzc{y}_{k}^{\ast}-\bar{\mathpzc{y}}^{\ast}\|_{\infty}\to 0$ as $k\to\infty$.

## 4 Lipschitzian Properties of Random Multifunctions

This section is devoted to the study of new Lipschitzian properties for random multifunctions defined on finite measure spaces. Our main attention is paid to the three major properties of such multifunctions that we label as the integrably local Lipschitzian, integrably quasi-Lipschitzian, and integrably Lipschitz-like ones. We reveal relationships between these properties in various classes of measure spaces and compare them with the corresponding properties of deterministic multifunctions.
On the one hand, the new Lipschitzian properties are important for establishing stability of random sets of feasible solutions in stochastic programming, while on the other hand they play a crucial role in deriving the calculus rules for evaluating coderivatives of expected-integral multifunctions that are given in the subsequent sections. Starting with the local Lipschitzian property of deterministic multifunctions, recall that $\Phi\colon{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ is (Hausdorff) locally Lipschitzian around $\bar{x}\in\mbox{\rm dom}\,\Phi$ if there are numbers $\eta>0$ and $\ell\geq 0$ such that $\Phi(x)\subset\Phi(x^{\prime})+\ell\|x-x^{\prime}\|\mathbb{B}\;\text{ for all }\;x,x^{\prime}\in\mathbb{B}_{\eta}(\bar{x}).$ (4.1) Having (4.1) in mind, we now define its random version for set-valued normal integrands on the general measure spaces under consideration. ###### Definition 2 (integrable local Lipschitzian property of random multifunctions) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand on a complete finite measure space $(T,\mathcal{A},\mu)$, and let $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$ be given for $\mathrm{E}_{\Phi}$ taken from (1.2). We say that $\Phi$ is integrably locally Lipschitzian around $\bar{x}$ if there exist $\eta>0$, $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$, and $\widehat{T}\in\mathcal{A}$ with $\mu(T\backslash\widehat{T})=0$ such that $\Phi_{t}(x)\subset\Phi_{t}(x^{\prime})+\ell(t)\|x-x^{\prime}\|\mathbb{B}\;\text{ for all }\;t\in\widehat{T}\;\text{ and }\;x,x^{\prime}\in\mathbb{B}_{\eta}(\bar{x}).$ (4.2) Let us present useful characterizations of integrably locally Lipschitzian multifunctions in terms of distance functions. Recall that the Pompeiu-Hausdorff distance between sets $\Omega_{1},\Omega_{2}\subset{\mathbb{R}^{n}}$ is given by $\displaystyle{\rm haus}(\Omega_{1},\Omega_{2})$ $\displaystyle:=\max\left\\{\sup\limits_{x\in\Omega_{1}}{\rm dist}(x;\Omega_{2}),\sup\limits_{x\in\Omega_{2}}{\rm dist}(x;\Omega_{1})\right\\}$ (4.3) $\displaystyle=\sup\limits_{x\in{\mathbb{R}^{n}}}\big{|}{\rm dist}(x;\Omega_{1})-{\rm dist}(x;\Omega_{2})\big{|},$ where ${\rm dist}(x;\Omega)$ is the standard distance function in ${\mathbb{R}^{n}}$ with the convention that ${\rm dist}(x;\emptyset):=\infty$. Here are the equivalent descriptions of the integrable local Lipschitzian property from (4.2). ###### Proposition 3 (distance descriptions of integrably locally Lipschitzian multifunctions) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand, and let $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$ for $\mathrm{E}_{\Phi}$ taken from (1.2). Then the following assertions are equivalent: 1. (i) There exist $\widehat{T}\in\mathcal{A}$ with $\mu(T\backslash\widehat{T})=0$, $\eta>0$, and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that for all $t\in\widehat{T}$ and all $x,x^{\prime}\in\mathbb{B}_{\eta}(\bar{x})$ we have the inclusion in (4.2). 2.
(ii) There exist $\widehat{T}\in\mathcal{A}$ with $\mu(T\backslash\widehat{T})=0$, $\eta>0$, and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that $\displaystyle{\rm haus}\big{(}\Phi_{t}(x),\Phi_{t}(x^{\prime})\big{)}\leq\ell(t)\|x-x^{\prime}\|\;\text{ for all }\;x,x^{\prime}\in\mathbb{B}_{\eta}(\bar{x}),\text{ }t\in\widehat{T}.$ (4.4) 3. (iii) In the setting of (i) we have that the function $x\mapsto{\rm dist}(y;\Phi_{t}(x))$ is $\ell(t)$-Lipschitz continuous on $\mathbb{B}_{\eta}(\bar{x})$ for every $y\in\mathbb{R}^{m}$ and all $t\in\widehat{T}$. 4. (iv) In the setting of (ii) we have the estimate ${\rm dist}\big{(}y;\Phi_{t}(x)\big{)}\leq\ell(t){\rm dist}\big{(}x;\Phi_{t}^{-1}(y)\cap\mathbb{B}_{\eta}(\bar{x})\big{)}\;\text{ for all }\;x\in\mathbb{B}_{\eta}(\bar{x}),\quad y\in\mathbb{R}^{m},\;\text{ and }\;t\in\widehat{T}.$ (4.5) Proof. Note that assertion (i) means that $\Phi$ is integrably locally Lipschitzian around $\bar{x}$ by Definition 2. The equivalence between (i) and (ii) easily follows from (4.2) and definition (4.3) of the Pompeiu-Hausdorff distance. The representation of the latter distance presented in (4.3) ensures the equivalence between (ii) and (iii). To finish the proof, it remains to verify the equivalence between (ii) and (iv). Observe first that we can always suppose that $\mathbb{B}_{\eta}(\bar{x})\subset\mbox{\rm dom}\,\Phi_{t}$ for all $t\in\widehat{T}$. Indeed, the choice of $\bar{x}$ tells us that $\bar{x}\in\mbox{\rm dom}\,\Phi_{t}$ holds for a.e. $t\in T$. Hence it follows from (ii) that ${\rm haus}\big{(}\Phi_{t}(x),\Phi_{t}(\bar{x})\big{)}$ is finite for all $x\in\mathbb{B}_{\eta}(\bar{x})$ and $t\in\widehat{T}$, which means that $\mathbb{B}_{\eta}(\bar{x})\subset\mbox{\rm dom}\,\Phi_{t}$ whenever $t\in\widehat{T}$. On the other hand, if (iv) is assumed, then for any $t\in\widehat{T}$, $y_{t}\in\Phi_{t}(\bar{x})$, and $x\in\mathbb{B}_{\eta}(\bar{x})$ estimate (4.5) yields ${\rm dist}\big{(}y_{t};\Phi_{t}(x)\big{)}\leq\ell(t)\|x-\bar{x}\|<\infty$, which again gives us $\mathbb{B}_{\eta}(\bar{x})\subset\mbox{\rm dom}\,\Phi_{t}$. The equivalence between (ii) and (iv) follows now from (dr, Proposition 3C.1), which therefore completes the proof of this proposition. The following example illustrates how the distance characterizations of Proposition 3 allow us to easily check the fulfillment of the integrable local Lipschitzian property of random multifunctions. ###### Example 1 $($checking the integrable locally Lipschitzian property of multifunctions$)$ Let $F\colon T\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be an integrably bounded measurable multifunction, i.e., there exists $\lambda\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that $F(t)\subset\lambda(t)\mathbb{B}$ for a.e. $t\in T$, and let $b\colon T\times{\mathbb{R}^{n}}\to\mathbb{R}^{m}$ and $A\colon T\times{\mathbb{R}^{n}}\to\mathbb{R}^{m\times m}$ be two measurable mappings. Take $\bar{x}\in{\mathbb{R}^{n}}$ for which there are $\eta>0$, $\ell_{1}\in\mathbb{R}_{+}$, and $\ell_{2}\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ ensuring that $\|A(t,x)-A(t,x^{\prime})\|\leq\ell_{1}\|x-x^{\prime}\|\;\mbox{ and }\;\|b(t,x)-b(t,x^{\prime})\|\leq\ell_{2}(t)\|x-x^{\prime}\|$ for all $x,x^{\prime}\in\mathbb{B}_{\eta}(\bar{x}),\;t\in T$, and that $A(t,x)$ is nonsingular for such $t,x$. Then the mapping $\Phi_{t}(x):=A(t,x)F(t)+b(t,x)$ is integrably locally Lipschitzian around $\bar{x}$. Indeed, it follows from (rw, Example 9.32) that for a.e.
$t\in T$ the modulus of Lipschitz continuity of $\Phi_{t}$ on $\mathbb{B}_{\eta}(\bar{x})$ can be bounded by $\lambda(t)\ell_{1}+\ell_{2}(t)$. Thus we conclude from the distance characterization (4.4) of Proposition 3 that the above multifunction $\Phi$ enjoys the claimed Lipschitzian property. Our next goal is to establish coderivative characterizations of integrably locally Lipschitzian and related Lipschitzian properties of random multifunctions. Recall first that for deterministic set-valued mappings $\Phi\colon{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ the local Lipschitzian property (4.1) can be written in form (4.2), where the measure space is a single atom. The latter property is characterized by $\sup\big{\\{}\|x^{*}\|\;\big{|}\;x^{*}\in D^{*}\Phi(\bar{x},\bar{y})(y^{*})\big{\\}}\leq\ell\|y^{*}\|\;\mbox{ for all }\;\bar{y}\in\Phi(\bar{x}),\;y^{*}\in\mathbb{R}^{m},$ (4.6) provided that $\Phi$ is closed-graph and uniformly bounded around this point; see (m93, Theorem 5.11). Due to the robustness of the coderivative (2.5) and its representation (2.7), the pointwise characterization in (4.6) can be equivalently written in the following forms: there exist $\eta>0$ and $\ell\geq 0$ such that $\sup\big{\\{}\|x^{*}\|\;\big{|}\;x^{*}\in D^{*}\Phi(x,y)(y^{*})\big{\\}}\leq\ell\|y^{*}\|\;\mbox{ and }\;\sup\big{\\{}\|x^{*}\|\;\big{|}\;x^{*}\in\widehat{D}^{*}\Phi(x,y)(y^{*})\big{\\}}\leq\ell\|y^{*}\|$ (4.7) for all $x\in\mathbb{B}_{\eta}(\bar{x})$, $y\in\Phi(x)$, and $y^{*}\in\mathbb{R}^{m}$. Observe further that a graphical localization of the locally Lipschitzian property (4.1) of deterministic multifunctions $\Phi\colon{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ is known as the Lipschitz-like (pseudo-Lipschitz, Aubin) property of $\Phi$ around $(\bar{x},\bar{y})\in\operatorname{gph}\Phi$ defined as follows: there exist $\eta>0$ and $\ell\geq 0$ such that $\Phi(x)\cap\mathbb{B}_{\eta}(\bar{y})\subset\Phi(x^{\prime})+\ell\|x-x^{\prime}\|\mathbb{B}\;\text{ for all }\;x,x^{\prime}\in\mathbb{B}_{\eta}(\bar{x}).$ (4.8) As well recognized (see, e.g., (m06, Theorem 1.42)), the locally Lipschitzian property of $\Phi$ around $\bar{x}\in\mbox{\rm dom}\,\Phi$ is equivalent to the Lipschitz-like property of $\Phi$ around $(\bar{x},\bar{y})$ for all $\bar{y}\in\Phi(\bar{x})$, provided that $\Phi$ is closed-graph and uniformly bounded around $\bar{x}$. Due to this fact, the above characterization (4.6) of the locally Lipschitzian property of deterministic multifunctions is a consequence of the following characterization of the Lipschitz-like property of $\Phi$ around $(\bar{x},\bar{y})$ given by $\sup\big{\\{}\|x^{*}\|\;\big{|}\;x^{*}\in D^{*}\Phi(\bar{x},\bar{y})(y^{*})\big{\\}}\leq\ell\|y^{*}\|\;\mbox{ for all }\;y^{*}\in\mathbb{R}^{m},$ (4.9) which is known as the coderivative/Mordukhovich criterion; see (m93, Theorem 5.7) and (rw, Theorem 9.40). Similarly to the case of locally Lipschitzian multifunctions, we can equivalently reformulate (4.9) via the neighborhood estimates in (4.7), valid now for all $x\in\mathbb{B}_{\eta}(\bar{x})$, $y\in\Phi(x)\cap\mathbb{B}_{\eta}(\bar{y})$, and $y^{*}\in\mathbb{R}^{m}$, without the uniform boundedness assumption on $\Phi$ around $\bar{x}$.
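The following one-dimensional computation, which we add here purely for illustration, shows that the Lipschitz-like property (4.8) is strictly weaker than the locally Lipschitzian one (4.1) and demonstrates the criterion (4.9) in action. Consider $\Phi(x):=\\{|x|\\}$ for $x\neq 0$ and $\Phi(0):=\\{0,1\\}$ with $(\bar{x},\bar{y})=(0,0)$. For all $x,x^{\prime}\in\mathbb{B}_{1/4}(0)$ we have $\Phi(x)\cap\mathbb{B}_{1/4}(0)=\\{|x|\\}\subset\\{|x^{\prime}|\\}+\big{|}|x|-|x^{\prime}|\big{|}\mathbb{B}\subset\Phi(x^{\prime})+\|x-x^{\prime}\|\mathbb{B},$ so (4.8) holds with $\eta=1/4$ and $\ell=1$. On the other hand, ${\rm haus}\big{(}\Phi(0),\Phi(x)\big{)}\geq{\rm dist}\big{(}1;\Phi(x)\big{)}=1-|x|\geq 3/4$ for all $0<|x|\leq 1/4$, so no estimate of type (4.1) is possible around $\bar{x}=0$. The criterion (4.9) confirms the first conclusion: near $(0,0)$ the graph of $\Phi$ coincides with the graph of $|\cdot|$, every limiting normal $(x^{*},-y^{*})$ to which satisfies $\|x^{*}\|\leq\|y^{*}\|$, and hence $\sup\big{\\{}\|x^{*}\|\;\big{|}\;x^{*}\in D^{*}\Phi(0,0)(y^{*})\big{\\}}\leq\|y^{*}\|$ for all $y^{*}\in\mathbb{R}$. A mapping of exactly this type is employed in the proof of Theorem 4.3 below.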
The situation with random multifunctions is essentially more involved in comparison with the deterministic case, being rather similar in some aspects while significantly different in others; this can be seen precisely from the results given below. Let us start with a coderivative characterization of integrably locally Lipschitzian multifunctions (4.2) defined on general measure spaces. The following theorem not only extends the coderivative conditions in (4.7) to random mappings, but also provides a new result in the deterministic case by dropping the uniform boundedness assumption. For brevity, we prove a regular coderivative characterization similar to the second estimate in (4.7). The one in the form of the first estimate therein can be derived similarly to the proof of Proposition 4 given below. ###### Theorem 4.1 (coderivative characterization of integrably locally Lipschitzian multifunctions) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand defined on a complete finite measure space $(T,\mathcal{A},\mu)$, and let $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$. Then $\Phi$ is integrably locally Lipschitzian around $\bar{x}$ if and only if there exist $\eta>0$, $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$, and $\widehat{T}\in\mathcal{A}$ with $\mu(T\backslash\widehat{T})=0$ such that $\sup\big{\\{}\|x^{*}\|\;\big{|}\;x^{*}\in\widehat{D}^{*}\Phi_{t}(x,y)(y^{*})\big{\\}}\leq\ell(t)\|y^{*}\|\;\mbox{ for all }\;y\in\Phi_{t}(x)\;\mbox{ and }\;t\in\widehat{T}$ (4.10) whenever $x\in\mathbb{B}_{\eta}(\bar{x})$ and $y^{*}\in\mathbb{R}^{m}$. Proof. First we show that the fulfillment of condition (4.10) for all $x\in\mathbb{B}_{\eta}(\bar{x})$ and $y^{*}\in\mathbb{R}^{m}$ yields the integrable local Lipschitzian property of $\Phi$ around $\bar{x}$. Take $\eta>0$, $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$, and $\widehat{T}\in\mathcal{A}$ with $\mu(T\backslash\widehat{T})=0$ for which (4.10) holds, and then fix $t\in\widehat{T}$. Picking $x_{0},x_{1}\in\mathbb{B}_{\eta/2}(\bar{x})$ and $y\in\Phi_{t}(x_{0})$, we claim that $y\in\Phi_{t}(x_{1})+\ell(t)\|x_{0}-x_{1}\|\mathbb{B}.$ (4.11) To proceed, define $x_{\lambda}:=(1-\lambda)x_{0}+\lambda x_{1}$ with $\lambda\in[0,1]$ and denote $\bar{\lambda}:=\sup\big{\\{}\lambda>0\;\big{|}\;y\in\Phi_{t}(x_{\lambda})+\ell(t)\lambda\|x_{0}-x_{1}\|\mathbb{B}\big{\\}}.$ (4.12) Since $t$ is fixed, we can consider $\Phi_{t}$ as a deterministic multifunction and thus deduce from the aforementioned version of (4.7), which characterizes the Lipschitz-like property of deterministic multifunctions by (m06, Theorem 4.7), that $\Phi_{t}$ is Lipschitz-like around $(x_{0},y)$. This gives us $\eta_{x_{0},y}>0$ such that $y\in\Phi_{t}(x_{0})\cap\mathbb{B}_{\eta_{x_{0},y}}(y)\subset\Phi_{t}(u)+\ell(t)\|x_{0}-u\|\mathbb{B}\;\text{ for all }\;u\in\mathbb{B}_{\eta_{x_{0},y}}(x_{0}).$ Using the above inclusion for $u=x_{\lambda}$ with a sufficiently small number $\lambda>0$ shows that $\bar{\lambda}>0$.
Let us show that the supremum in (4.12) is attained, i.e., we have $\exists y_{\bar{\lambda}}\in\Phi_{t}(x_{\bar{\lambda}}),\;\exists v_{\bar{\lambda}}\in\mathbb{B}\;\text{ such that }\;y=y_{\bar{\lambda}}+\ell(t)\bar{\lambda}\|x_{0}-x_{1}\|v_{\bar{\lambda}}.$ (4.13) Indeed, consider a sequence $\lambda_{k}\uparrow\bar{\lambda}$ together with the points $y_{\lambda_{k}}\in\Phi_{t}(x_{\lambda_{k}})$ and $v_{k}\in\mathbb{B}$ satisfying $y=y_{\lambda_{k}}+\ell(t){\lambda_{k}}\|x_{0}-x_{1}\|v_{k},\quad k\in\mathbb{N}.$ (4.14) It follows that $x_{\lambda_{k}}\to x_{\bar{\lambda}}$ as $k\to\infty$, and that $v_{k}\to v_{\bar{\lambda}}\in\mathbb{B}$ along a subsequence. Thus we deduce from (4.14) that a subsequence of $\\{y_{\lambda_{k}}\\}$ converges to some $y_{\bar{\lambda}}$, which yields (4.13) by the closedness of $\operatorname{gph}\Phi_{t}$. Applying now (4.10) and the characterization of (m06, Theorem 4.7) tells us that $\Phi_{t}$ is Lipschitz-like around $(x_{\bar{\lambda}},y_{\bar{\lambda}})$, i.e., there exists $\eta_{\bar{\lambda}}>0$ ensuring $y_{\bar{\lambda}}\in\Phi_{t}(x_{\bar{\lambda}})\cap\mathbb{B}_{\eta_{\bar{\lambda}}}(y_{\bar{\lambda}})\subset\Phi_{t}(w)+\ell(t)\|w-x_{\bar{\lambda}}\|\mathbb{B}\;\mbox{ for all }\;w\in\mathbb{B}_{\eta_{\bar{\lambda}}}(x_{\bar{\lambda}}).$ The latter allows us to find $\lambda^{\prime}>\bar{\lambda}$ with $x_{\lambda^{\prime}}\in\mathbb{B}_{\eta_{\bar{\lambda}}}(x_{\bar{\lambda}})$ together with $y_{\lambda^{\prime}}\in\Phi_{t}(x_{\lambda^{\prime}})$ and $v_{\lambda^{\prime}}\in\mathbb{B}$ such that $y_{\bar{\lambda}}=y_{\lambda^{\prime}}+\ell(t)\|x_{\lambda^{\prime}}-x_{\bar{\lambda}}\|v_{\lambda^{\prime}}=y_{\lambda^{\prime}}+\ell(t)(\lambda^{\prime}-\bar{\lambda})\|x_{1}-x_{0}\|v_{\lambda^{\prime}}.$ (4.15) It readily follows from (4.13) and (4.15) that $\displaystyle y$ $\displaystyle=y_{\bar{\lambda}}+\ell(t)\bar{\lambda}\|x_{0}-x_{1}\|v_{\bar{\lambda}}$ $\displaystyle=y_{\lambda^{\prime}}+\ell(t)(\lambda^{\prime}-\bar{\lambda})\|x_{1}-x_{0}\|v_{\lambda^{\prime}}+\ell(t)\bar{\lambda}\|x_{0}-x_{1}\|v_{\bar{\lambda}}$ $\displaystyle=y_{\lambda^{\prime}}+\ell(t)\lambda^{\prime}\|x_{1}-x_{0}\|\left(\frac{\lambda^{\prime}-\bar{\lambda}}{\lambda^{\prime}}v_{\lambda^{\prime}}+\frac{\bar{\lambda}}{\lambda^{\prime}}v_{\bar{\lambda}}\right)$ $\displaystyle\in\Phi_{t}(x_{\lambda^{\prime}})+\ell(t)\lambda^{\prime}\|x_{0}-x_{1}\|\mathbb{B},$ which contradicts the choice of $\bar{\lambda}$ as the supremum in (4.12). This gives us (4.11) and verifies therefore that $\Phi$ is integrably locally Lipschitzian around $\bar{x}$. Conversely, suppose that $\Phi$ is integrably locally Lipschitzian around $\bar{x}$ and thus find $\eta>0$, $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$, and $\widehat{T}\in\mathcal{A}$ with $\mu(T\backslash\widehat{T})=0$ such that (4.2) is satisfied. Taking $x\in\mathbb{B}_{\eta/2}(\bar{x})$, $t\in\widehat{T}$, and $y\in\Phi_{t}(x)$, we deduce from (4.2) and the characterization of (m06, Theorem 4.7) that $\sup\big{\\{}\|x^{\ast}\|\;\big{|}\;x^{\ast}\in\widehat{D}^{\ast}\Phi_{t}(x,y)(y^{\ast})\big{\\}}\leq\ell(t)\|y^{\ast}\|\;\mbox{ for all }\;y^{\ast}\in\mathbb{R}^{m},$ which concludes the proof of the theorem. $\hfill\square$ Now we are ready to introduce two graphically localized Lipschitzian properties of multifunctions defined on finite measure spaces.
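Before introducing them, let us illustrate the characterization of Theorem 4.1 by a model computation; the smooth mapping $g$ and the moduli below are our own illustrative assumptions. Let $\Phi_{t}(x):=g(t,x)+\lambda(t)\mathbb{B}$, where $g(t,\cdot)$ is $C^{1}$ with $\|\nabla_{x}g(t,x)\|\leq\ell(t)$ on $\mathbb{B}_{\eta}(\bar{x})$ for some $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$, and where $\lambda(t)>0$. Then $\operatorname{gph}\Phi_{t}=\big{\\{}(x,y)\;\big{|}\;\|y-g(t,x)\|^{2}\leq\lambda(t)^{2}\big{\\}}$. At interior points of the graph we have $\widehat{N}_{\operatorname{gph}\Phi_{t}}(x,y)=\\{0\\}$, and so (4.10) holds trivially there. At boundary points, where $u:=y-g(t,x)$ satisfies $\|u\|=\lambda(t)$, the constraint gradient $\big{(}-2\nabla_{x}g(t,x)^{T}u,2u\big{)}$ is nonzero, and $\widehat{N}_{\operatorname{gph}\Phi_{t}}(x,y)$ is the ray it generates. Hence any $(x^{*},-y^{*})\in\widehat{N}_{\operatorname{gph}\Phi_{t}}(x,y)$ satisfies $x^{*}=\nabla_{x}g(t,x)^{T}y^{*}$, which yields $\|x^{*}\|\leq\ell(t)\|y^{*}\|$. Thus (4.10) is fulfilled, and Theorem 4.1 confirms that this $\Phi$ is integrably locally Lipschitzian around $\bar{x}$, in accordance with Example 1.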
Both of these properties may be considered as extensions to random multifunctions of the Lipschitz-like property (4.8) and its coderivative characterizations for deterministic multifunctions, while being generally different from each other in the stochastic case. To proceed, we associate with a given set-valued integrand $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ the multifunction $\mathcal{S}_{\Phi}\colon{\mathbb{R}^{n}}\times\mathbb{R}^{m}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\textnormal{L}^{1}(T,\mathbb{R}^{m})$ defined by $\mathcal{S}_{\Phi}(x,y):=\Big{\\{}\mathpzc{y}\in\textnormal{L}^{1}(T,\mathbb{R}^{m})\;\Big{|}\;\int_{T}\mathpzc{y}(t)d\mu=y\;\text{ and }\;\mathpzc{y}(t)\in\Phi_{t}(x)\;\text{ for a.e. }\;t\in T\Big{\\}}.$ (4.16) ###### Definition 3 (graphically localized random Lipschitzian multifunctions) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand on a complete finite measure space $(T,\mathcal{A},\mu)$, let $(\bar{x},\bar{y})\in\operatorname{gph}\mathrm{E}_{\Phi}$ for $\mathrm{E}_{\Phi}$ taken from (1.2), and let $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$ for $\mathcal{S}_{\Phi}$ taken from (4.16). We say that: (i) $\Phi$ is integrably Lipschitz-like around $(\bar{x},\bar{\mathpzc{y}})$ if there exist $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$, strictly positive measurable functions $\eta$ and $\gamma$, and a measurable set $\widehat{T}\in\mathcal{A}$ with $\mu(T\backslash\widehat{T})=0$ such that $\Phi_{t}(x)\cap\mathbb{B}_{\gamma(t)}\big{(}\bar{\mathpzc{y}}(t)\big{)}\subset\Phi_{t}(x^{\prime})+\ell(t)\|x-x^{\prime}\|\mathbb{B}\;\text{ for all }\;t\in\widehat{T}\;\text{ and }\;x,x^{\prime}\in\mathbb{B}_{\eta(t)}(\bar{x}).$ (4.17) (ii) $\Phi$ is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$ if there exist $\eta>0$ and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that $\sup\big{\\{}\|x^{\ast}\|\;\big{|}\;x^{\ast}\in{D}^{\ast}\Phi_{t}\big{(}\mathpzc{x}(t),\mathpzc{y}(t)\big{)}\big{(}\mathpzc{y}^{\ast}(t)\big{)}\big{\\}}\leq\ell(t)\|\mathpzc{y}^{\ast}(t)\|\text{ for a.e. }\;t\in T$ (4.18) whenever $\mathpzc{x}\in\mathbb{B}_{\eta}(\bar{x})$, $\mathpzc{y}\in\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\cap\Phi(\mathpzc{x})$, and $\mathpzc{y}^{\ast}\in\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ with $\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\cap\Phi(\mathpzc{x}):=\big{\\{}\mathpzc{y}\in\textnormal{L}^{1}(T,\mathbb{R}^{m})\;\big{|}\;\mathpzc{y}\in\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\;\text{ and }\;\mathpzc{y}(t)\in\Phi_{t}\big{(}\mathpzc{x}(t)\big{)}\;\text{ a.e.}\big{\\}}.$ First we show that the integrable quasi-Lipschitzian property can be equivalently reformulated in terms of the regular coderivative (2.4) replacing the basic one in (4.18). ###### Proposition 4 (equivalent description of the integrable quasi-Lipschitzian property) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand with $(\bar{x},\bar{y})\in\operatorname{gph}\mathrm{E}_{\Phi}$ and $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$.
Then $\Phi$ is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$ if and only if there exist $\eta>0$ and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that $\sup\big{\\{}\|x^{\ast}\|\;\big{|}\;x^{\ast}\in\widehat{D}^{\ast}\Phi_{t}\big{(}\mathpzc{x}(t),\mathpzc{y}(t)\big{)}\big{(}\mathpzc{y}^{\ast}(t)\big{)}\big{\\}}\leq\ell(t)\|\mathpzc{y}^{\ast}(t)\|\;\text{ for a.e. }\;t\in T$ (4.19) whenever $\mathpzc{x}\in\mathbb{B}_{\eta}(\bar{x})$, $\mathpzc{y}\in\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\cap\Phi(\mathpzc{x})$, and $\mathpzc{y}^{\ast}\in\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$. Proof. We obviously have that (4.18) yields (4.19). To verify the opposite implication, suppose without loss of generality that $\mu(T)=1$ and take $\eta>0$ and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ from (4.19). Fix further $\gamma\in(0,\eta)$ and pick $\mathpzc{x}\in\mathbb{B}_{\gamma}(\bar{x})$, $\mathpzc{y}\in\mathbb{B}_{\gamma}(\bar{\mathpzc{y}})\cap\Phi(\mathpzc{x})$, and $\mathpzc{y}^{\ast}\in\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$. Proposition 1 tells us that the set-valued mapping $F(t):={D}^{\ast}\Phi_{t}(\mathpzc{x}(t),\mathpzc{y}(t))(\mathpzc{y}^{\ast}(t))$ is measurable with closed values on $T$. By the Castaing representation of the measurable multifunction $F$ (see, e.g., (rw, Theorem 14.5)) we find a sequence of measurable selections $\mathpzc{x}_{k}^{\ast}(t)\in F(t)$ for a.e. $t\in\widehat{T}:=\mbox{\rm dom}\,F$ such that $\sup\big{\\{}\|\mathpzc{x}_{k}^{\ast}(t)\|\;\big{|}\;k\in\mathbb{N}\big{\\}}=\sup\big{\\{}\|x^{\ast}\|\;\big{|}\;x^{\ast}\in{D}^{\ast}\Phi_{t}\big{(}\mathpzc{x}(t),\mathpzc{y}(t)\big{)}\big{(}\mathpzc{y}^{\ast}(t)\big{)}\big{\\}}\;\text{ for a.e. }\;t\in\widehat{T}.$ Fixing any $\varepsilon\in(0,(\eta-\gamma)/2)$ and $k\in\mathbb{N}$, define the multifunction $F^{k}_{\varepsilon}\colon\widehat{T}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{2(n+m)}$ by $(u,u^{\ast},v,v^{\ast})\in F^{k}_{\varepsilon}(t)\Longleftrightarrow\left\\{\begin{array}[]{ll}u^{\ast}\in\widehat{D}^{\ast}\Phi_{t}(u,v)(v^{\ast}),\;u\in\mathbb{B}_{\gamma}\big{(}\mathpzc{x}(t)\big{)},\;v\in\mathbb{B}_{\gamma}\big{(}\mathpzc{y}(t)\big{)},\\\ \|u^{\ast}-\mathpzc{x}_{k}^{\ast}(t)\|\leq\varepsilon,\;\|v^{\ast}-\mathpzc{y}^{\ast}(t)\|\leq\varepsilon/(1+\ell(t)).\end{array}\right.$ It follows from Proposition 1 and the measurability of the mappings involved that the multifunctions $F^{k}_{\varepsilon}$ are graph measurable. Applying the coderivative representation (2.7) to $F_{\varepsilon}^{k}(\cdot)$ ensures that the sets $F_{\varepsilon}^{k}(t)$ are nonempty for a.e. $t\in\widehat{T}$. Then Proposition 2 gives us a measurable selection $(\mathpzc{u}(t),\mathpzc{u}^{\ast}(t),\mathpzc{v}(t),\mathpzc{v}^{\ast}(t))\in F_{\varepsilon}^{k}(t)$ for a.e. $t\in\widehat{T}$ and $\varepsilon,k$ fixed above. Thus $\mathpzc{u}\in\mathbb{B}_{\eta}(\bar{x})$, $\mathpzc{v}\in\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\cap\Phi(\mathpzc{u})$, and $\mathpzc{v}^{\ast}\in\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$. Employing the regular coderivative estimate (4.19) ensures the relationships $\displaystyle\|\mathpzc{x}_{k}^{\ast}(t)\|$ $\displaystyle\leq\|\mathpzc{u}^{\ast}(t)\|+\varepsilon\leq\ell(t)\|\mathpzc{v}^{\ast}(t)\|+\varepsilon\leq\ell(t)\|\mathpzc{y}^{\ast}(t)\|+\ell(t)\|\mathpzc{v}^{\ast}(t)-\mathpzc{y}^{\ast}(t)\|+\varepsilon$ $\displaystyle\leq\ell(t)\|\mathpzc{y}^{\ast}(t)\|+2\varepsilon\;\mbox{ a.e. }\;t\in\widehat{T}.$
Since $\varepsilon\in(0,(\eta-\gamma)/2)$ and $k\in\mathbb{N}$ were chosen arbitrarily, we arrive at the integrable quasi-Lipschitzian property (4.18) and thus complete the proof of the proposition. $\hfill\square$ Next we establish close relationships (in fact, the equivalence) between the graphically localized Lipschitzian properties of random multifunctions from Definition 3 in the case of purely atomic spaces with countably many atoms. The following theorem can be treated as a stochastic extension of the coderivative characterization of the Lipschitz-like property for deterministic multifunctions. ###### Theorem 4.2 (relationships between graphically localized integrably Lipschitzian properties) Let $(T,\mathcal{A},\mu)$ be a purely atomic measure space consisting of a countable disjoint family of atoms $(T_{k})_{k\in\mathbb{N}}$. Consider a set-valued normal integrand $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ with $(\bar{x},\bar{y})\in\operatorname{gph}\mathrm{E}_{\Phi}$ and $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$. If $\Phi$ is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$, then there exist $\gamma>0$ and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that for all $k\in\mathbb{N}$ we have the integrable Lipschitz-like property in the form $\Phi_{t}(x)\cap\mathbb{B}_{\frac{\gamma}{\mu(T_{k})}}\big{(}\bar{\mathpzc{y}}(t)\big{)}\subset\Phi_{t}(x^{\prime})+\ell(t)\|x-x^{\prime}\|\mathbb{B}\;\text{ for all }\;x,x^{\prime}\in\mathbb{B}_{\frac{\gamma}{(\ell(t)+1)}}(\bar{x})\;\text{ and }\;t\in T_{k}.$ (4.20) Conversely, the fulfillment of the integrable Lipschitz-like property (4.20) with some $\ell\in\textnormal{L}^{\infty}(T,\mathbb{R}_{+})$ implies that $\Phi$ is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$. Proof. Suppose that $\Phi$ is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$ with the given data $\eta>0$ and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ for which (4.18) holds. Since the measure is purely atomic, each measurable function or multifunction must be constant a.e. on each atom. Hence we can assume that all the involved mappings are constant over the atoms, i.e., $\Phi_{t}=\Phi_{k}$, $\bar{\mathpzc{y}}(t)=\bar{y}_{k}$, and $\ell(t)=\ell_{k}$ on $T_{k}$. Furthermore, observe that the coderivative condition (4.19) implies that for every $k\in\mathbb{N}$ we have $\displaystyle\sup\big{\\{}\|x^{\ast}\|\;\big{|}\;x^{\ast}\in\widehat{D}^{\ast}\Phi_{k}(x,y)(y^{\ast})\big{\\}}\leq\ell_{k}\|y^{\ast}\|$ (4.21) whenever $x\in\mathbb{B}_{\eta}(\bar{x})$, $y\in\mathbb{B}_{\frac{\eta}{\mu(T_{k})}}(\bar{y}_{k})\cap\Phi_{k}(x)$, and $y^{\ast}\in\mathbb{R}^{m}$. Denote $\gamma:=\frac{\eta}{3(\mu(T)+1)}$ and fix $k\in\mathbb{N}$. Then consider $x_{0},x_{1}\in\mathbb{B}_{\frac{\gamma}{(\ell_{k}+1)}}(\bar{x})$ and $y_{k}\in\Phi_{k}(x_{0})\cap\mathbb{B}_{\frac{\gamma}{\mu(T_{k})}}(\bar{y}_{k})$. We claim that $y_{k}\in\Phi_{k}(x_{1})+\ell_{k}\|x_{0}-x_{1}\|\mathbb{B}.$ (4.22) Indeed, define $x_{\lambda}:=(1-\lambda)x_{0}+\lambda x_{1}$ for $\lambda\in[0,1]$ and $\bar{\lambda}:=\sup\big{\\{}\lambda>0\;\big{|}\;y_{k}\in\Phi_{k}(x_{\lambda})+\ell_{k}\lambda\|x_{0}-x_{1}\|\mathbb{B}\big{\\}}.$ Applying the coderivative criterion (4.21) for the deterministic multifunctions $\Phi_{k}$, $k\in\mathbb{N}$, we have by (m06, Theorem 4.7) that $\Phi_{k}$ is Lipschitz-like around $(x_{0},y_{k})$.
This gives us $\eta_{k}>0$ such that $y_{k}\in\Phi_{k}(x_{0})\cap\mathbb{B}_{\eta_{k}}(y_{k})\subset\Phi_{k}(u)+\ell_{k}\|x_{0}-u\|\mathbb{B}\;\text{ for all }\;u\in\mathbb{B}_{\eta_{k}}(x_{0}).$ (4.23) Plugging in $u=x_{\lambda}$ for sufficiently small $\lambda>0$ ensures that $\bar{\lambda}>0$. Arguing similarly to the proof of Theorem 4.1 verifies that the supremum in the definition of $\bar{\lambda}$ is realized, i.e., $\exists y_{\bar{\lambda}}\in\Phi_{k}(x_{\bar{\lambda}}),\;\exists v_{\bar{\lambda}}\in\mathbb{B}\;\text{ such that }\;y_{k}=y_{\bar{\lambda}}+\ell_{k}\bar{\lambda}\|x_{0}-x_{1}\|v_{\bar{\lambda}}.$ (4.24) Moreover, we have the following estimate of the distance between $\bar{y}_{k}$ and $y_{\bar{\lambda}}$ from (4.24): $\|\bar{y}_{k}-y_{\bar{\lambda}}\|\leq\frac{2}{3\mu(T_{k})}\eta.$ Using (4.21) and applying again (m06, Theorem 4.7) tell us that each multifunction $\Phi_{k}$ is Lipschitz-like around $(x_{\bar{\lambda}},y_{\bar{\lambda}})$. Thus there exists $\eta_{\bar{\lambda}}>0$ ensuring the inclusions $y_{\bar{\lambda}}\in\Phi_{k}(x_{\bar{\lambda}})\cap\mathbb{B}_{\eta_{\bar{\lambda}}}(y_{\bar{\lambda}})\subset\Phi_{k}(w)+\ell_{k}\|w-x_{\bar{\lambda}}\|\mathbb{B}\;\text{ for all }\;w\in\mathbb{B}_{\eta_{\bar{\lambda}}}(x_{\bar{\lambda}}).$ (4.25) The latter yields the existence of $\lambda^{\prime}>\bar{\lambda}$ such that $x_{\lambda^{\prime}}\in\mathbb{B}_{\eta_{\bar{\lambda}}}(x_{\bar{\lambda}})$, $y_{\lambda^{\prime}}\in\Phi_{k}(x_{\lambda^{\prime}})$, and $v_{\lambda^{\prime}}\in\mathbb{B}$ for which $y_{\bar{\lambda}}=y_{\lambda^{\prime}}+\ell_{k}\|x_{\lambda^{\prime}}-x_{\bar{\lambda}}\|v_{\lambda^{\prime}}=y_{\lambda^{\prime}}+\ell_{k}(\lambda^{\prime}-\bar{\lambda})\|x_{1}-x_{0}\|v_{\lambda^{\prime}}.$ Using then (4.24) and (4.25) implies that $\displaystyle y_{k}$ $\displaystyle=y_{\bar{\lambda}}+\ell_{k}\bar{\lambda}\|x_{0}-x_{1}\|v_{\bar{\lambda}}=y_{\lambda^{\prime}}+\ell_{k}(\lambda^{\prime}-\bar{\lambda})\|x_{1}-x_{0}\|v_{\lambda^{\prime}}+\ell_{k}\bar{\lambda}\|x_{0}-x_{1}\|v_{\bar{\lambda}}$ $\displaystyle=y_{\lambda^{\prime}}+\ell_{k}\lambda^{\prime}\|x_{1}-x_{0}\|\left(\frac{\lambda^{\prime}-\bar{\lambda}}{\lambda^{\prime}}v_{\lambda^{\prime}}+\frac{\bar{\lambda}}{\lambda^{\prime}}v_{\bar{\lambda}}\right)\in\Phi_{k}(x_{\lambda^{\prime}})+\ell_{k}\lambda^{\prime}\|x_{0}-x_{1}\|\mathbb{B},$ which contradicts the above choice of $\bar{\lambda}$. This tells us that (4.22) holds, and thus $\Phi$ satisfies the integrable Lipschitz-like property (4.20). Conversely, assume that $\Phi$ satisfies (4.20) with some $\ell\in\textnormal{L}^{\infty}(T,\mathbb{R}_{+})$. A close look at the proof of (m06, Theorem 1.43) tells us that (4.20) implies that $\sup\big{\\{}\|x^{\ast}\|\;\big{|}\;x^{\ast}\in\widehat{D}^{\ast}\Phi_{t}(x,y)(y^{\ast})\big{\\}}\leq\ell_{k}\|y^{\ast}\|\;\text{ on }\;T_{k}$ (4.26) whenever $x\in\mathbb{B}_{\frac{\gamma}{2(\ell_{k}+1)}}(\bar{x})$, $y\in\mathbb{B}_{\frac{\gamma}{2\mu(T_{k})}}(\bar{\mathpzc{y}}(t))\cap\Phi_{t}(x)$, and $y^{\ast}\in\mathbb{R}^{m}$. Picking now any number $0<\eta<\frac{\gamma}{2(\|\ell\|_{\infty}+1)}$ together with measurable functions $\mathpzc{x}\in\mathbb{B}_{\eta}(\bar{x})$, $\mathpzc{y}\in\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\cap\Phi(\mathpzc{x})$, and $\mathpzc{y}^{\ast}\in\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ yields $\|\mathpzc{y}(t)-\bar{\mathpzc{y}}(t)\|\leq\frac{\gamma}{2\mu(T_{k})}\;\mbox{ for a.e. }\;t\in T_{k}.$
Then the application of (4.26) verifies the integrable quasi-Lipschitzian property of $\Phi$ around $(\bar{x},\bar{\mathpzc{y}})$. $\hfill\square$ Now we discover a remarkable phenomenon concerning Lipschitzian properties of random multifunctions, which does not have any analogs in the deterministic framework. Namely, it is revealed that in spaces with nonatomic measures the integrable quasi-Lipschitzian and local Lipschitzian properties of random multifunctions agree. Furthermore, the measure nonatomicity is also necessary for the fulfillment of this phenomenon as far as arbitrary multifunctions with these properties are addressed. ###### Theorem 4.3 (integrably Lipschitzian multifunctions on nonatomic spaces) Let $(T,\mathcal{A},\mu)$ be a complete finite measure space, and let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand. Assume that the measure $\mu$ is nonatomic. Then given any $(\bar{x},\bar{y})\in\operatorname{gph}\mathrm{E}_{\Phi}$ and $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$, we have that $\Phi$ is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$ if and only if it is integrably locally Lipschitzian around $\bar{x}$. Conversely, the presence of at least one atom in $T$ yields the existence of a multifunction $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$, which is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$ but not integrably locally Lipschitzian around $\bar{x}$. Proof. It follows from the equivalent descriptions of the integrably locally Lipschitzian and quasi-Lipschitzian properties in Theorem 4.1 and Proposition 4, respectively, that the former property implies the latter one without the nonatomicity assumption. Assume now that $\Phi$ is integrably quasi-Lipschitzian around the point in question. Arguing by contraposition, suppose that $\Phi$ is not integrably locally Lipschitzian around $\bar{x}$, i.e., the coderivative condition (4.10) of Theorem 4.1 fails. This gives us a set $A\in\mathcal{A}$ with $\mu(A)>0$ such that for all $t\in A$ there exist points $x_{t}\in\mathbb{B}_{\eta}(\bar{x})$, $y_{t}^{\ast}\in\mathbb{R}^{m}$, $y_{t}\in\Phi_{t}(x_{t})$, and $x_{t}^{\ast}\in\widehat{D}^{\ast}\Phi_{t}(x_{t},y_{t})(y_{t}^{\ast})$ with $\|x^{\ast}_{t}\|>\ell(t)\|y_{t}^{\ast}\|$, where $\eta>0$ and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ are taken from the integrable quasi-Lipschitzian property of $\Phi$. Now define the multifunction $F\colon A\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{2(n+m)}$ by $\displaystyle\operatorname{gph}F:=\left\\{(t,x,y,x^{\ast},y^{\ast})\in T\times\mathbb{R}^{2(n+m)}\;\Bigg{|}\;\begin{array}[]{ll}x\in\mathbb{B}_{\eta}(\bar{x}),\;(t,x,y)\in\operatorname{gph}\Phi\\\ (x^{\ast},-y^{\ast})\in\widehat{N}_{\operatorname{gph}\Phi_{t}}(x,y),\\\ \text{and }\;\|x^{\ast}\|>\ell(t)\|y^{\ast}\|\end{array}\right\\}.$ Observe that $F$ is graph measurable with nonempty (not necessarily closed) values.
The measurable selection theorem presented in Proposition 2 gives us measurable functions $\mathpzc{x},\mathpzc{x}^{\ast}\colon T\to{\mathbb{R}^{n}}$ and $\mathpzc{y},\mathpzc{y}^{\ast}\colon T\to\mathbb{R}^{m}$ such that for almost all $t\in A$ we get $\mathpzc{x}(t)\in\mathbb{B}_{\eta}(\bar{x}),\;\mathpzc{x}^{\ast}(t)\in\widehat{D}^{\ast}\Phi_{t}\big{(}\mathpzc{x}(t),\mathpzc{y}(t)\big{)}\big{(}\mathpzc{y}^{\ast}(t)\big{)}\;\mbox{ and }\;\|\mathpzc{x}^{\ast}(t)\|>\ell(t)\|\mathpzc{y}^{\ast}(t)\|.$ (4.27) The nonatomicity of $\mu$ allows us to find a measurable set $B\subset A$ with $\mu(B)>0$ and $\int_{B}\|\mathpzc{y}(t)-\bar{\mathpzc{y}}(t)\|d\mu\leq\eta$. Hence the measurable functions $\mathpzc{v}:=\mathpzc{x}\operatorname{\mathds{1}}_{B}+\bar{x}\operatorname{\mathds{1}}_{B^{c}}$ and $\mathpzc{w}:=\mathpzc{y}\operatorname{\mathds{1}}_{B}+\bar{\mathpzc{y}}\operatorname{\mathds{1}}_{B^{c}}$ belong to the balls $\mathbb{B}_{\eta}(\bar{x})$ and $\mathbb{B}_{\eta}(\bar{\mathpzc{y}})$, respectively. Employing now (4.27) and taking into account Proposition 4, we arrive at a contradiction with (4.18) and thus verify the assertion of the theorem in nonatomic spaces. To justify the converse assertion, suppose that $(T,\mathcal{A},\mu)$ contains at least one atom, say $T_{0}\in\mathcal{A}$. Taking into account the representation of the measure space as the union of purely atomic and nonatomic parts as discussed in Section 3, we assume without loss of generality that the space $T\backslash T_{0}$ is nonatomic; see also the proof of Proposition 5 for more details in similar arguments. Consider two arbitrary deterministic multifunctions $F_{1},F_{2}\colon{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ such that $F_{1}$ is locally Lipschitzian around some $\bar{x}$, while $F_{2}$ is Lipschitz-like around $(\bar{x},\bar{y})$ with some $\bar{y}\in F_{2}(\bar{x})$ but not locally Lipschitzian around $\bar{x}$. Specifically, $F_{2}$ can be constructed as $F_{2}(x):=\\{|x|\\}$ on $\mathbb{R}$ with the additional value $1\in F_{2}(0)$, while $(\bar{x},\bar{y})=(0,0)$; cf. the illustration given after (4.9). Define now the random multifunction $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ by $\displaystyle\Phi_{t}(x):=\left\\{\begin{array}[]{ll}F_{1}(x)&\text{ if }\;t\in T\backslash T_{0},\\\ F_{2}(x)&\text{ if }\;t\in T_{0}\\\ \end{array}\right.$ (4.30) and construct a measurable selection $\bar{\mathpzc{y}}(t)\in\Phi_{t}(\bar{x})$ by $\bar{\mathpzc{y}}(t):=y_{1}\operatorname{\mathds{1}}_{T\backslash T_{0}}(t)+\bar{y}\operatorname{\mathds{1}}_{T_{0}}(t)$ with some $y_{1}\in F_{1}(\bar{x})$. It follows from Theorem 4.1 and from the first part of this theorem applied on the nonatomic part $T\backslash T_{0}$ that the multifunction $\Phi$ from (4.30) is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$ when $t\in T\backslash T_{0}$. This property of $\Phi$ on the purely atomic part $T_{0}$ follows from Theorem 4.2 since $F_{2}$ is a deterministic Lipschitz-like multifunction. However, the random multifunction $\Phi$ cannot be integrably locally Lipschitzian around $\bar{x}$ since $F_{2}$ in (4.30) was chosen to be merely Lipschitz-like around the point in question. This therefore completes the proof of the theorem. $\hfill\square$ It is easy to deduce from Theorem 4.3 that the integrably Lipschitz-like and quasi-Lipschitzian properties are different in any nonatomic space.
Indeed, Theorem 4.3 tells us that in such spaces the latter property is equivalent to its locally Lipschitzian counterpart defined by inclusion (4.2), which may clearly be different from the integrable Lipschitz-like property (4.17) whenever the sets $\Phi_{t}(x)$ are unbounded. The following simple example demonstrates the difference between these two Lipschitzian properties of random set-valued mappings even in the case of bounded multifunctions. ###### Example 2 $($quasi-Lipschitzian versus Lipschitz-like random multifunctions$)$ Consider any nonatomic probability space $(T,\mathcal{A},\mu)$ and the mapping $\Phi\colon T\times\mathbb{R}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}_{+}$, independent of $t$, defined by $\Phi(t,x):=[0,\sqrt{|x|}+1]$. Then we have $\mathrm{E}_{\Phi}(x)=[0,\sqrt{|x|}+1]$ with $(\bar{x},\bar{y})=(0,0)\in\operatorname{gph}\mathrm{E}_{\Phi}$, and the constant function $\bar{\mathpzc{y}}(t)=0$ belongs to $\mathcal{S}_{\Phi}(\bar{x},\bar{y})$ from (4.16). It is easy to see that $\mathrm{E}_{\Phi}$ is Lipschitz-like around $(\bar{x},\bar{y})$. However, $\Phi$ cannot be integrably quasi-Lipschitzian around this point, because otherwise it would follow from Theorems 4.1 and 4.3 that $\Phi$ satisfies (4.2). The latter leads us to a contradiction, since it would imply, in particular, that the function $\varphi(x):=\sqrt{|x|}$ is Lipschitz continuous around $\bar{x}=0$. Similarly to the deterministic case, the integrable Lipschitz-like property brings us to conclusions about Lipschitz stability of feasible solution sets in stochastic programming, while the integrable quasi-Lipschitzian property plays a crucial role in deriving the most useful pointwise Leibniz-type rules for coderivatives and second-order subdifferentials that are established in the subsequent sections. The final result of this section provides efficient conditions ensuring the integrable quasi-Lipschitzian property of set-valued normal integrands defined on general measure spaces, which are important, in particular, for checking the qualification conditions in the calculus rules derived below. ###### Proposition 5 (sufficient conditions for integrably quasi-Lipschitzian multifunctions) Let $(T,\mathcal{A},\mu)$ be a complete finite measure space, and let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand. Take $(\bar{x},\bar{y})\in\operatorname{gph}\mathrm{E}_{\Phi}$ and $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$ and suppose that there exist $\varepsilon,\ell_{pa}>0$ as well as $\gamma_{pa}>0$ and $\ell_{na}\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that the following inclusions hold for all $x,x^{\prime}\in\mathbb{B}_{\varepsilon}(\bar{x})$: $\begin{array}[]{ll}\Phi_{t}(x)\cap\mathbb{B}_{\frac{\gamma_{pa}}{\mu(T^{k}_{pa})}}\big{(}\bar{\mathpzc{y}}(t)\big{)}\subset\Phi_{t}(x^{\prime})+\ell_{pa}\|x-x^{\prime}\|\mathbb{B}\;\text{ if }\;t\in T^{k}_{pa},\\\ \Phi_{t}(x)\subset\Phi_{t}(x^{\prime})+\ell_{na}(t)\|x-x^{\prime}\|\mathbb{B}\;\text{ if }\;t\in T_{na},\end{array}$ (4.31) where $T_{pa}$ and $T_{na}$ form a disjoint decomposition of $T$ into purely atomic and nonatomic parts with $\\{T_{pa}^{k}\\}_{k\in\mathbb{N}}$ being a countable disjoint family of atoms. Then $\Phi$ is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$. Proof.
Let $\mu_{pa}(\cdot)$ and $\mu_{na}(\cdot)$ be the purely atomic and nonatomic parts of $\mu$, i.e., $\mu_{pa}(\cdot):=\mu(\cdot\cap T_{pa})$ and $\mu_{na}(\cdot):=\mu(\cdot\cap T_{na})$, respectively. Using the second inclusion in (4.31), we apply Theorems 4.1 and 4.3 to the multifunction $\Phi$ on the nonatomic measure space $(T_{na},\mathcal{A},\mu_{na})$ and thus find $\eta_{na}>0$ and $\ell_{na}\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that (4.18) holds. On the other hand, the first inclusion in (4.31) allows us to apply Theorem 4.2 to $\Phi$ on the purely atomic measure space $(T_{pa},\mathcal{A},\mu_{pa})$ and hence get $\eta_{pa}>0$ and $\ell_{pa}\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that (4.18) holds. Now we claim that $\Phi$ satisfies (4.18) with $\eta:=\min\\{\eta_{na},\eta_{pa}\\}$ and $\ell(t):=\ell_{na}(t)\operatorname{\mathds{1}}_{T_{na}}(t)+\ell_{pa}(t)\operatorname{\mathds{1}}_{T_{pa}}(t)$ on the entire measure space $(T,\mathcal{A},\mu)$. Indeed, picking any $\mathpzc{x}\in\mathbb{B}_{\eta}(\bar{x})$, $\mathpzc{y}\in\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\cap\Phi(\mathpzc{x})$, and $\mathpzc{y}^{\ast}\in\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ leads us to the estimate $\int_{T}\|\mathpzc{y}(t)-\bar{\mathpzc{y}}(t)\|d\mu_{na}+\int_{T}\|\mathpzc{y}(t)-\bar{\mathpzc{y}}(t)\|d\mu_{pa}=\int_{T}\|\mathpzc{y}(t)-\bar{\mathpzc{y}}(t)\|d\mu\leq\eta,$ which yields (4.18) for the selected triple $(\mathpzc{x},\mathpzc{y},\mathpzc{y}^{\ast})$ and thus completes the proof of the proposition. $\hfill\square$ ## 5 Coderivative Leibniz Rules for Expected-Integral Mappings In this section we establish several calculus rules to evaluate coderivatives of the expected-integral multifunctions $\mathrm{E}_{\Phi}\colon{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ given by $\mathrm{E}_{\Phi}(x):=\int_{T}\Phi_{t}(x)d\mu\;\mbox{ for all }\;x\in{\mathbb{R}^{n}},$ (5.1) where $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ is a set-valued normal integrand defined on a complete finite measure space $(T,\mathcal{A},\mu)$. Various results, unified under the name of coderivative Leibniz rules, are obtained to evaluate the regular and limiting coderivatives of (5.1) via the corresponding coderivatives of the integrand multifunctions $\Phi_{t}$. The most efficient and useful pointwise rules will be obtained to evaluate the limiting coderivative of $\mathrm{E}_{\Phi}$ at the given point $(\bar{x},\bar{y})$ in terms of the limiting coderivative of $\Phi_{t}$ under the integrable quasi-Lipschitzian property of the integrand multifunctions around the corresponding points. Throughout this section we assume that at the given point of interest $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$ there exist a positive number $\rho$ and a function $\kappa\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that the following conditions are satisfied: $\displaystyle\Phi_{t}(x)$ $\displaystyle\text{ is convex for all }\;x\in\mathbb{B}_{\rho}(\bar{x})\;\text{ and a.e. }\;t\in T_{na},$ (5.2) $\displaystyle\Phi_{t}(x)$ $\displaystyle\subset\kappa(t)\mathbb{B}\;\text{ for all }\;x\in\mathbb{B}_{\rho}(\bar{x})\;\text{ and a.e. }\;t\in T,$ where the second condition is known as integrable boundedness.
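To make the construction (5.1) more tangible, here is a minimal numerical sketch (our own illustration, not part of the original development), assuming $T=[0,1]$ with Lebesgue measure, $m=1$, and the hypothetical interval-valued integrand $\Phi_{t}(x)=[g(t,x)-\kappa(t),\,g(t,x)+\kappa(t)]$ with $g(t,x)=t\sin x$ and $\kappa(t)=1+t$. Such values are convex and integrably bounded, so conditions (5.2) hold around every point, and the Aumann integral in (5.1) reduces to the interval of the endpoint integrals.

import numpy as np

# A minimal numerical sketch of the expected-integral multifunction (5.1)
# for the hypothetical interval-valued integrand
#     Phi_t(x) = [g(t,x) - kappa(t), g(t,x) + kappa(t)],  m = 1,
# on T = [0,1] with Lebesgue measure (so mu(T) = 1). The values are convex
# and integrably bounded, hence (5.2) holds and the Aumann integral equals
# the interval of the endpoint integrals.

def g(t, x):
    return t * np.sin(x)       # illustrative choice, C^1 in x

def kappa(t):
    return 1.0 + t             # integrable bound on the values

def expected_integral(x, n=100_000):
    t = (np.arange(n) + 0.5) / n            # midpoints of a uniform grid on [0,1]
    lower = np.mean(g(t, x) - kappa(t))     # Riemann sum of the lower endpoints
    upper = np.mean(g(t, x) + kappa(t))     # Riemann sum of the upper endpoints
    return lower, upper                     # E_Phi(x) = [lower, upper]

if __name__ == "__main__":
    for x in (0.0, 0.5, 1.0):
        lo, hi = expected_integral(x)
        print(f"E_Phi({x}) ~ [{lo:.4f}, {hi:.4f}]")   # exact value: sin(x)/2 -/+ 3/2

Since the endpoint functions above are $t$-Lipschitz in $x$, this particular $\Phi$ is also integrably locally Lipschitzian with $\ell(t)=t$, so the Leibniz rules developed below are applicable to it.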
We start with deriving sequential Leibniz-type rules to estimate the regular coderivative of $\mathrm{E}_{\Phi}$ via sequences of regular coderivatives of $\Phi_{t}$ at points nearby. These results are of the same flavor as the sequential subdifferential Leibniz rules for expected-integral functionals (1.1) that are presented in Theorems 3.1 and 3.2 and are used in the proofs below. Since the proofs of the following two propositions are similar to each other, we prove only the second one. ###### Proposition 6 (sequential coderivative Leibniz rule, I) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand defined on a complete finite measure space $(T,\mathcal{A},\mu)$, let $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$ satisfy the conditions in (5.2), let $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$ be taken from (4.16) for some $\bar{y}\in\mathrm{E}_{\Phi}(\bar{x})$, and let $\bar{x}^{\ast}\in\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(\bar{y}^{\ast})$ with $\bar{y}^{\ast}\in\mathbb{R}^{m}$, where below $\bar{\mathpzc{y}}^{\ast}(t):=\bar{y}^{\ast}\operatorname{\mathds{1}}_{T}(t)$. Then for any fixed $p,q\in(1,\infty)$ with $1/p+1/q=1$ there exist sequences $\\{x_{k}\\}\subset{\mathbb{R}^{n}}$, $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{p}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{*}\\}\subset{\textnormal{L}}^{q}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,\mathbb{R}^{m})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ such that we have the assertions: 1. (i) $\mathpzc{x}_{k}^{*}(t)\in\widehat{D}^{\ast}\Phi_{t}\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t)\big{)}\big{(}\mathpzc{y}_{k}^{\ast}(t)\big{)}$ for a.e. $t\in T$ and all $k\in\mathbb{N}$. 2. (ii) $\|\bar{x}-x_{k}\|\to 0$, $\|\bar{x}-\mathpzc{x}_{k}\|_{p}\to 0$, and $\|\bar{\mathpzc{y}}-\mathpzc{y}_{k}\|_{1}\to 0$ as $k\to\infty$. 3. (iii) $\|\mathpzc{y}_{k}^{\ast}-\bar{\mathpzc{y}}^{\ast}\|_{\infty}\to 0$, $\|\mathpzc{x}_{k}^{*}\|_{q}\|\mathpzc{x}_{k}-x_{k}\|_{p}\to 0$, and $\displaystyle\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu\to\bar{x}^{\ast}$ as $k\to\infty$. ###### Proposition 7 (sequential coderivative Leibniz rule, II) Let $\bar{x}^{\ast}\in\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(\bar{y}^{\ast})$ in the setting of Proposition 6. Then there exist sequences $\\{x_{k}\\}\subset{\mathbb{R}^{n}}$, $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{\infty}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{\ast}\\}\subset{\textnormal{L}}^{1}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,\mathbb{R}^{m})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ such that the following assertions hold: 1. (i) $\mathpzc{x}_{k}^{*}(t)\in\widehat{D}^{\ast}\Phi_{t}\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t)\big{)}\big{(}\mathpzc{y}_{k}^{\ast}(t)\big{)}$ for a.e. $t\in T$ and all $k\in\mathbb{N}$. 2. (ii) $\|\bar{x}-x_{k}\|\to 0$, $\|\bar{x}-\mathpzc{x}_{k}\|_{\infty}\to 0$, $\displaystyle\int_{T}\|\bar{\mathpzc{y}}(t)-\mathpzc{y}_{k}(t)\|d\mu\to 0$, and $\|\mathpzc{y}_{k}^{\ast}-\bar{y}^{\ast}\|_{\infty}\to 0$ as $k\to\infty$. 3. (iii) $\displaystyle\int_{T}\|\mathpzc{x}_{k}^{*}(t)\|\cdot\|\mathpzc{x}_{k}(t)-x_{k}\|d\mu\to 0$ and $\displaystyle\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu\to\bar{x}^{\ast}$ as $k\to\infty$. Proof. Considering the extended-real-valued function $\varphi(t,v,w):=\delta_{\operatorname{gph}\Phi_{t}}(v,w)$, observe that $\varphi$ is a normal integrand. Furthermore, it is easy to see that the imposed assumptions in (5.2) yield the fulfillment of (3.3) for $\varphi$.
Picking any $\bar{x}^{\ast}\in\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(\bar{y}^{\ast})$ gives us by the regular coderivative definition (2.4) that $(\bar{x}^{\ast},\bar{y}^{\ast})\in\widehat{\partial}\delta_{\operatorname{gph}\mathrm{E}_{\Phi}}(\bar{x},\bar{y})$. Now we use the smooth variational description of regular subgradients taken from (m06, Theorem 1.88(i)), which tells us that there exist $\eta>0$ and a function $\vartheta\colon\mathbb{B}_{\eta}(\bar{x},\bar{y})\to\mathbb{R}$ that is Fréchet differentiable at $(\bar{x},\bar{y})$ with $\nabla\vartheta(\bar{x},\bar{y})=(\bar{x}^{\ast},\bar{y}^{\ast})$ and such that the difference $\delta_{\operatorname{gph}\mathrm{E}_{\Phi}}-\vartheta$ attains its minimum at $(\bar{x},\bar{y})$ on $\mathbb{B}_{\eta}(\bar{x},\bar{y})$. Observe that $\displaystyle\delta_{\operatorname{gph}\mathrm{E}_{\Phi}}\bigg{(}u,\int_{T}\bar{\mathpzc{y}}(t)d\mu\bigg{)}$ $\displaystyle=\int_{T}\varphi_{t}\big{(}u,\bar{\mathpzc{y}}(t)\big{)}d\mu\;\mbox{ and}$ $\displaystyle\delta_{\operatorname{gph}\mathrm{E}_{\Phi}}\bigg{(}u,\int_{T}\mathpzc{w}(t)d\mu\bigg{)}$ $\displaystyle\leq\int_{T}\varphi_{t}\big{(}u,\mathpzc{w}(t)\big{)}d\mu\;\text{ for all }\;(u,\mathpzc{w})\in{\mathbb{R}^{n}}\times\textnormal{L}^{1}(T,\mathbb{R}^{m}).$ Consider further the extended-real-valued function ${\mathbb{R}^{n}}\times\textnormal{L}^{1}(T,\mathbb{R}^{m})\ni(u,\mathpzc{w})\mapsto\mathrm{E}_{\varphi}(u,\mathpzc{w})-\psi(u,\mathpzc{w})\;\mbox{ with }\;\psi(u,\mathpzc{w}):=\vartheta\bigg{(}u,\int_{T}\mathpzc{w}(t)d\mu\bigg{)},$ which attains a local minimum at $(\bar{x},\bar{\mathpzc{y}})$. Hence we have by the elementary version of the subdifferential Fermat rule (see, e.g., (m06, Proposition 1.114)) that $(0,0)\in\widehat{\partial}(\mathrm{E}_{\varphi}-\psi)(\bar{x},\bar{\mathpzc{y}})$. Observe that $\psi$ is clearly Fréchet differentiable at $(\bar{x},\bar{\mathpzc{y}})$ with $\nabla\psi(\bar{x},\bar{\mathpzc{y}})=(\bar{x}^{\ast},\bar{\mathpzc{y}}^{\ast})$, where $\bar{\mathpzc{y}}^{\ast}$ is the constant function $\bar{\mathpzc{y}}^{\ast}(t):=\bar{y}^{\ast}\operatorname{\mathds{1}}_{T}(t)$. Employing now the sum rule from (m06, Proposition 1.107) yields $(\bar{x}^{\ast},\bar{\mathpzc{y}}^{\ast})\in\widehat{\partial}\mathrm{E}_{\varphi}(\bar{x},\bar{\mathpzc{y}})$. Furthermore, it follows from (5.2) that the function $t\mapsto\inf_{\mathbb{B}_{\rho}(\bar{x})\times\mathbb{R}^{m}}\big{\\{}\varphi_{t}(\cdot,\cdot)-\langle\bar{\mathpzc{y}}^{\ast}(t),\cdot\rangle\big{\\}}$ is integrable on $T$. Finally, the application of Theorem 3.2 completes the proof of the proposition. $\hfill\square$ The subsequent results of this section establish pointwise versions of coderivative Leibniz rules under the integrable quasi-Lipschitzian property of set-valued integrands $\Phi_{t}$. The first theorem gives us an upper estimate of the regular coderivative of $\mathrm{E}_{\Phi}$ via the integral of the limiting coderivative of $\Phi_{t}$. ###### Theorem 5.1 (pointwise estimate for regular coderivatives of expected-integral mappings) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand on a complete finite measure space $(T,\mathcal{A},\mu)$, let $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$ satisfy the conditions in (5.2), and let $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$ for some $\bar{y}\in\mathrm{E}_{\Phi}(\bar{x})$.
Assume in addition that the integrable quasi-Lipschitzian property holds for $\Phi$ around $(\bar{x},\bar{\mathpzc{y}})$. Then we have the inclusion $\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})\subset\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu\;\mbox{ for all }\;y^{\ast}\in\mathbb{R}^{m}.$ (5.3) Proof. Pick any $\bar{x}^{\ast}\in\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(\bar{y}^{\ast})$ with $\bar{y}^{\ast}\in\mathbb{R}^{m}$. Proposition 7 allows us to find sequences $\\{x_{k}\\}\subset{\mathbb{R}^{n}}$, $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{\infty}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{\ast}\\}\subset{\textnormal{L}}^{1}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,\mathbb{R}^{m})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ satisfying the conditions $\mathpzc{x}_{k}^{*}(t)\in\widehat{D}^{\ast}\Phi_{t}(\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t))(\mathpzc{y}_{k}^{\ast}(t))$ for a.e. $t\in T$, $\|\bar{x}-\mathpzc{x}_{k}\|_{\infty}\to 0$, $\|\mathpzc{y}_{k}^{\ast}-\bar{y}^{\ast}\|_{\infty}\to 0$, as well as $\int_{T}\big{\|}\bar{\mathpzc{y}}(t)-\mathpzc{y}_{k}(t)\big{\|}d\mu\to 0\;\mbox{ and }\;\bigg{\|}\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu-\bar{x}^{\ast}\bigg{\|}\to 0\;\mbox{ when }\;k\to\infty.$ Passing to a subsequence if necessary, we get that $(\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t),\mathpzc{y}_{k}^{\ast}(t))\to(\bar{x},\bar{\mathpzc{y}}(t),\bar{y}^{\ast})$ as $k\to\infty$ for a.e. $t\in T$, which implies by using the limiting coderivative representation (2.7) that $\mathop{{\rm Lim}\,{\rm sup}}_{k\to\infty}\big{\\{}\mathpzc{x}_{k}^{*}(t)\big{\\}}\subset D^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(\bar{y}^{\ast})\;\mbox{ for a.e. }\;t\in T.$ (5.4) Employing now the imposed integrable quasi-Lipschitzian property of $\Phi$ around $(\bar{x},\bar{\mathpzc{y}})$ yields $\|\mathpzc{x}_{k}^{\ast}(t)\|\leq\ell(t)\|\mathpzc{y}_{k}^{\ast}(t)\|\leq\ell(t)M\;\text{ for a.e. }\;t\in T\;\mbox{ and large }\;k\in\mathbb{N},$ where $M:=\sup\big{\\{}\|\mathpzc{y}_{k}^{\ast}\|_{\infty}\;\big{|}\;k\in\mathbb{N}\big{\\}}<\infty$. Then Fatou’s lemma for multifunctions taken from (bs, Corollary 4.1), combined with (5.4), tells us that $\displaystyle\bar{x}^{\ast}\in\mathop{{\rm Lim}\,{\rm sup}}_{k\to\infty}\left\\{\displaystyle\int_{{T}}\mathpzc{x}_{k}^{*}(t)d\mu\right\\}$ $\displaystyle\subset\displaystyle\int_{{T}}\mathop{{\rm Lim}\,{\rm sup}}_{k\to\infty}\big{\\{}\mathpzc{x}_{k}^{*}(t)\big{\\}}d\mu$ $\displaystyle\subset\displaystyle\int_{{T}}D^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(\bar{y}^{\ast})d\mu.$ Since $\bar{y}^{\ast}\in\mathbb{R}^{m}$ was chosen arbitrarily, this verifies (5.3). $\hfill\square$ To proceed further with deriving an efficient pointwise upper estimate of the limiting coderivative of $\mathrm{E}_{\Phi}$ via the limiting coderivative of the integrand $\Phi_{t}$, we need to invoke an additional property of the set-valued mapping $\mathcal{S}_{\Phi}$ defined in (4.16). This property of multifunctions, known as inner semicompactness, is formulated and discussed in (m06, Definition 1.63(ii)). The reader can find in (m06, Definition 1.63(i)) a parallel inner semicontinuity property of multifunctions, which could also be used to establish complementary coderivative Leibniz rules, although we do not pursue this aim in the present paper.
Recall that $\mathcal{S}_{\Phi}$ is inner semicompact at $(\bar{x},\bar{y})$ if for every sequence $(x_{k},y_{k})\to(\bar{x},\bar{y})$ there exists a sequence $\mathpzc{y}_{k}\in\mathcal{S}_{\Phi}(x_{k},y_{k})$ that contains an $\textnormal{L}^{1}({T},\mathbb{R}^{m})$-norm convergent subsequence as $k\to\infty$. ###### Theorem 5.2 (coderivative Leibniz rule for integrably quasi-Lipschitzian multifunctions) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand on a complete finite measure space $(T,\mathcal{A},\mu)$, and let $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$ satisfy the conditions in (5.2). Take $\bar{y}\in\mathrm{E}_{\Phi}(\bar{x})$ such that the mapping $\mathcal{S}_{\Phi}$ is inner semicompact at $(\bar{x},\bar{y})$ and assume in addition that $\Phi$ is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$ for all $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$. Then we have the limiting coderivative Leibniz rule ${D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})\subset\bigcup\limits_{\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})}\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu\;\mbox{ whenever }\;y^{\ast}\in\mathbb{R}^{m}.$ (5.5) Proof. Pick any $\bar{x}^{\ast}\in{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(\bar{y}^{\ast})$. It follows from representation (2.7) of the limiting coderivative that there exist quadruples $(x_{k},y_{k},x_{k}^{\ast},y_{k}^{\ast})\to(\bar{x},\bar{y},\bar{x}^{\ast},\bar{y}^{\ast})$ with $x_{k}^{\ast}\in\widehat{D}^{\ast}\mathrm{E}_{\Phi}(x_{k},y_{k})(y_{k}^{\ast})$ for all $k\in\mathbb{N}$. Employing the imposed inner semicompactness of $\mathcal{S}_{\Phi}$ and passing to a subsequence if necessary gives us $\mathpzc{y}_{k}\in\mathcal{S}_{\Phi}(x_{k},y_{k})$ such that $\mathpzc{y}_{k}\to\bar{\mathpzc{y}}$ in the norm topology of $\textnormal{L}^{1}({T},\mathbb{R}^{m})$. Remembering that the graph of $\Phi_{t}$ is closed yields $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$. Applying further Proposition 7 to each quadruple $(x_{k},y_{k},x_{k}^{\ast},y_{k}^{\ast})$ and employing the diagonal process ensures the existence of sequences $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{\infty}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{\ast}\\}\subset{\textnormal{L}}^{1}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,\mathbb{R}^{m})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ satisfying the following conditions: $\mathpzc{x}_{k}^{*}(t)\in\widehat{D}^{\ast}\Phi_{t}(\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t))(\mathpzc{y}_{k}^{\ast}(t))$ for a.e. $t\in T$ and all $k\in\mathbb{N}$ together with the norm convergence $\|\bar{x}-\mathpzc{x}_{k}\|_{\infty}\to 0$, $\|\bar{\mathpzc{y}}-\mathpzc{y}_{k}\|_{1}\to 0$, and $\|\mathpzc{y}_{k}^{\ast}-\bar{y}^{\ast}\|_{\infty}\to 0$ as well as $\bigg{\|}\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu-\bar{x}^{\ast}\bigg{\|}\to 0\;\mbox{ and }\;\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t),\mathpzc{y}_{k}^{\ast}(t)\big{)}\to\big{(}\bar{x},\bar{\mathpzc{y}}(t),\bar{y}^{\ast}\big{)}\;\mbox{ for a.e. }\;t\in T\;\mbox{ as }\;k\to\infty.$ Furthermore, we have the limiting inclusion (5.4) as obtained in the proof of Theorem 5.1.
Using now the assumed integrable quasi-Lipschitzian property of $\Phi$ around $(\bar{x},\bar{\mathpzc{y}})$ and remembering the ${\textnormal{L}}^{1}$-norm convergence $\mathpzc{y}_{k}\to\bar{\mathpzc{y}}$ as $k\to\infty$ allow us to find $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ and $k_{0}\in\mathbb{N}$ such that $\|\mathpzc{x}_{k}^{\ast}(t)\|\leq\ell(t)\|\mathpzc{y}_{k}^{\ast}(t)\|\leq\ell(t)M\;\mbox{ for almost all }\;t\in T\;\mbox{ and all }\;k\geq k_{0},$ where $M:=\sup\\{\|\mathpzc{y}_{k}^{\ast}\|_{\infty}\;|\;k\in\mathbb{N}\\}$. Hence the functions $\mathpzc{x}_{k}^{\ast}$ are uniformly integrable on $T$. It follows from (5.4) and the aforementioned Fatou’s lemma for multifunctions that $\displaystyle\bar{x}^{\ast}\in\displaystyle\int_{{T}}D^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(\bar{y}^{\ast})d\mu\subset\bigcup\limits_{\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})}\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(\bar{y}^{\ast})d\mu.$ Since $\bar{y}^{\ast}\in\mathbb{R}^{m}$ was chosen arbitrarily, this verifies (5.5) and completes the proof of the theorem. $\hfill\square$ The following result provides an alternative Leibniz rule for the coderivative of expected-integral multifunctions when the mapping $\mathcal{S}_{\Phi}$ is not inner semicompact at the reference point, but the mapping $\Phi$ is integrably locally Lipschitzian. ###### Theorem 5.3 (coderivative Leibniz rule for integrably Lipschitzian multifunctions) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued normal integrand on a complete finite measure space $(T,\mathcal{A},\mu)$, and let $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$. Suppose that $\Phi$ is integrably locally Lipschitzian around $\bar{x}$. Then for every $\bar{y}\in\mathrm{E}_{\Phi}(\bar{x})$ we have the limiting coderivative Leibniz rule ${D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})\subset\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x},\Phi_{t}(\bar{x})\big{)}(y^{\ast})d\mu\;\mbox{ whenever }\;y^{\ast}\in\mathbb{R}^{m},$ (5.6) where ${D}^{\ast}\Phi_{t}\big{(}\bar{x},\Phi_{t}(\bar{x})\big{)}(y^{\ast}):=\bigcup_{y\in\Phi_{t}(\bar{x})}{D}^{\ast}\Phi_{t}\big{(}\bar{x},y\big{)}(y^{\ast})$. ###### Proof Following the arguments in the proof of Theorem 5.2, we have the existence of sequences $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{\infty}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{\ast}\\}\subset{\textnormal{L}}^{1}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,\mathbb{R}^{m})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,\mathbb{R}^{m})$ satisfying the following conditions: $\mathpzc{x}_{k}^{*}(t)\in\widehat{D}^{\ast}\Phi_{t}(\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t))(\mathpzc{y}_{k}^{\ast}(t))$ for a.e. $t\in T$ and all $k\in\mathbb{N}$ together with the norm convergence $\|\bar{x}-\mathpzc{x}_{k}\|_{\infty}\to 0$ and $\|\mathpzc{y}_{k}^{\ast}-\bar{y}^{\ast}\|_{\infty}\to 0$ as well as $\|\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu-\bar{x}^{\ast}\|\to 0$ and $\|\mathpzc{x}_{k}^{\ast}(t)\|\leq\ell(t)M$ for a.e. $t\in T$, where $M:=\sup\\{\|\mathpzc{y}_{k}^{\ast}\|_{\infty}\;|\;k\in\mathbb{N}\\}$.
Now, by Fatou’s lemma for multifunctions we get that $\displaystyle\bar{x}^{\ast}\in\mathop{{\rm Lim}\,{\rm sup}}_{k\to\infty}\left\\{\displaystyle\int_{{T}}\mathpzc{x}_{k}^{*}(t)d\mu\right\\}$ $\displaystyle\subset\displaystyle\int_{{T}}\mathop{{\rm Lim}\,{\rm sup}}_{k\to\infty}\big{\\{}\mathpzc{x}_{k}^{*}(t)\big{\\}}d\mu.$ To conclude the proof, let us show the inclusion $\mathop{{\rm Lim}\,{\rm sup}}_{k\to\infty}\\{\mathpzc{x}_{k}^{*}(t)\\}\subset{D}^{\ast}\Phi_{t}\big{(}\bar{x},\Phi_{t}(\bar{x})\big{)}(\bar{y}^{\ast})$ for almost all $t\in T$. Indeed, consider a set $\hat{T}$ of full measure on which $\mathpzc{x}_{k}(t)\to\bar{x}$ and $\mathpzc{y}_{k}^{\ast}(t)\to\bar{y}^{\ast}$ as $k\to\infty$. Fix $t\in\hat{T}$ and $u^{\ast}\in\mathop{{\rm Lim}\,{\rm sup}}_{k\to\infty}\\{\mathpzc{x}_{k}^{*}(t)\\}$. Hence, by definition there exists a subsequence $\mathpzc{x}^{\ast}_{k_{j}}(t)\to u^{\ast}$. Since $\|\mathpzc{y}_{k_{j}}(t)\|\leq\kappa(t)$ (recall (5.2)), we can assume that $\mathpzc{y}_{k_{j}}(t)\to y$ for some $y\in\Phi_{t}(\bar{x})$ (recall that the graph of $\Phi_{t}$ is closed). Therefore, by the limiting coderivative representation (2.7), we get that $u^{\ast}\in\mathop{{\rm Lim}\,{\rm sup}}_{j\to\infty}\widehat{D}^{\ast}\Phi_{t}(\mathpzc{x}_{k_{j}}(t),\mathpzc{y}_{k_{j}}(t))(\mathpzc{y}_{k_{j}}^{\ast}(t))\subset{D}^{\ast}\Phi_{t}\big{(}\bar{x},y\big{)}(\bar{y}^{\ast})$, and that concludes the proof. Next we present a simple consequence of Theorem 5.2 that is used in what follows. ###### Corollary 1 (coderivative Leibniz rule for single-valued expected-integral mappings) Under the general assumptions of Theorem 5.2, suppose in addition that $\mathrm{E}_{\Phi}(\bar{x})$ is a singleton, and that $\Phi$ is integrably locally Lipschitzian around $\bar{x}$. Then the mapping $\mathcal{S}_{\Phi}$ is inner semicompact at $(\bar{x},\mathrm{E}_{\Phi}(\bar{x}))$ and we have the coderivative Leibniz rule ${D}^{\ast}\mathrm{E}_{\Phi}(\bar{x})(y^{\ast})\subset\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x}\big{)}(y^{\ast})d\mu\;\mbox{ for all }\;y^{\ast}\in\mathbb{R}^{m}.$ (5.7) Proof. Denote $\bar{y}:=\mathrm{E}_{\Phi}(\bar{x})$ and take a measurable selection $\bar{\mathpzc{y}}(t)\in\Phi_{t}(\bar{x})$ for a.e. $t\in T$. To verify that $\mathcal{S}_{\Phi}$ is inner semicompact at $(\bar{x},\bar{y})$, let $(x_{k},y_{k})\to(\bar{x},\bar{y})$ as $k\to\infty$, and let $\mathpzc{y}_{k}\in\mathcal{S}_{\Phi}(x_{k},y_{k})$ for all $k\in\mathbb{N}$. Employing the integrable local Lipschitzian property (4.2) of $\Phi$ around $\bar{x}$ tells us that $\displaystyle\|\mathpzc{y}_{k}(t)-\bar{\mathpzc{y}}(t)\|\leq\ell(t)\|x_{k}-\bar{x}\|\;\text{ for a.e. }\;t\in T\;\mbox{ and all large }\;k\in\mathbb{N}.$ Then it follows from Lebesgue’s dominated convergence theorem that $\mathpzc{y}_{k}\to\bar{\mathpzc{y}}$ in $\textnormal{L}^{1}(T,\mathbb{R}^{m})$ as $k\to\infty$, which verifies the inner semicompactness property of $\mathcal{S}_{\Phi}$ at $(\bar{x},\bar{y})$. Applying finally Theorem 5.2, we arrive at (5.7) and thus complete the proof of the corollary. $\hfill\square$ As we see, the previous versions of the coderivative Leibniz rule provided just upper estimates of the regular and limiting coderivatives of the expected-integral mappings. Although calculus rules in the form of inclusions of this type suffice for many applications, it is important to establish efficient conditions ensuring the fulfillment of such results as equalities. The following theorem does the job.
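To fix ideas about what such an equality statement should yield, note first the smooth single-valued specialization; this is a sketch added for orientation, not part of the original development. If $\Phi_{t}(x)=\\{f_{t}(x)\\}$ with $f_{t}$ continuously differentiable around $\bar{x}$ and integrably Lipschitzian there, then $\mathrm{E}_{\Phi}$ is single-valued, all coderivatives reduce to adjoint Jacobians, and the expected equality $D^{\ast}\mathrm{E}_{\Phi}(\bar{x})(y^{\ast})=\big{\\{}\nabla\mathrm{E}_{\Phi}(\bar{x})^{\ast}y^{\ast}\big{\\}}=\int_{T}\nabla f_{t}(\bar{x})^{\ast}y^{\ast}d\mu$ is nothing else than the classical differentiation under the integral sign. The theorem below identifies regularity assumptions under which such an equality survives in the genuinely set-valued case.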
We say that a set-valued mapping $F$ is coderivative regular at $(\bar{x},\bar{y})\in\operatorname{gph}F$ for $y^{\ast}\in\mathbb{R}^{m}$ if $D^{\ast}F(\bar{x},\bar{y})(y^{\ast})=\widehat{D}^{\ast}F(\bar{x},\bar{y})(y^{\ast})$. ###### Theorem 5.4 (coderivative Leibniz rule as equality) In the setting of Corollary 1, suppose that for a.e. $t\in T$ we have that $\Phi_{t}(\bar{x})=\\{\bar{\mathpzc{y}}(t)\\}$ and that $\Phi_{t}$ is coderivative regular at $(\bar{x},\bar{\mathpzc{y}}(t))$ for $y^{\ast}\in\mathbb{R}^{m}$. Then $\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x})(y^{\ast})={D}^{\ast}\mathrm{E}_{\Phi}(\bar{x})(y^{\ast})=\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x}\big{)}(y^{\ast})d\mu.$ (5.8) ###### Proof Taking into account the inclusion of Theorem 5.2, it is sufficient to show that $\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu\subset\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x})(y^{\ast}).$ (5.9) First we check that for all $(u,v)\in{\mathbb{R}^{n}}\times\mathbb{R}^{m}$ there exists $\mathpzc{v}\in\textnormal{L}^{1}(T,\mathbb{R}^{m})$ with $v=\int_{T}\mathpzc{v}(t)d\mu$ such that $\displaystyle\int_{T}\overline{\mbox{\rm co}\,}d_{\operatorname{gph}\Phi_{t}}(\bar{x},\bar{\mathpzc{y}}(t))(u,\mathpzc{v}(t))d\mu\leq d_{\operatorname{gph}\mathrm{E}_{\Phi}}(\bar{x},\bar{y})(u,v),$ (5.10) where $d_{\operatorname{gph}\mathrm{E}_{\Phi}}$ denotes the subderivative (2.11) of the indicator function of $\operatorname{gph}\mathrm{E}_{\Phi}$, and where $\overline{\mbox{\rm co}\,}d_{\operatorname{gph}\Phi_{t}}$ is the convex closure of the corresponding subderivative associated with $\operatorname{gph}\Phi_{t}$. This claim is obvious if the right-hand side of (5.10) is $\infty$. Supposing now that $d_{\operatorname{gph}\mathrm{E}_{\Phi}}(\bar{x},\bar{y})(u,v)<\infty$, we find sequences $s_{k}\downarrow 0$ and $(u_{k},v_{k})\to(u,v)$ as $k\to\infty$ such that $\displaystyle d_{\operatorname{gph}\mathrm{E}_{\Phi}}(\bar{x},\bar{y})(u,v)=\lim\limits_{k\to\infty}\frac{\delta_{\operatorname{gph}\mathrm{E}_{\Phi}}(\bar{x}+s_{k}u_{k},\bar{y}+s_{k}v_{k})}{s_{k}}.$ The above equality implies that whenever $k\in\mathbb{N}$ is sufficiently large there exists $\mathpzc{w}_{k}(t)\in\Phi_{t}(\bar{x}+s_{k}u_{k})$ for a.e. $t\in T$ such that $\int_{T}\mathpzc{w}_{k}(t)d\mu=\bar{y}+s_{k}v_{k}$. By the integrable Lipschitz continuity of $\Phi$, it can be assumed that for a.e. $t\in T$ and all $k\in\mathbb{N}$ we have the representation $\displaystyle\mathpzc{w}_{k}(t)=\bar{\mathpzc{y}}(t)+s_{k}\mathpzc{v}_{k}(t)\;\text{ with some measurable function }\;\mathpzc{v}_{k}(t)\in\|u_{k}\|\ell(t)\mathbb{B},$ which tells us that $\|\mathpzc{v}_{k}(t)\|\leq\sup\big{\\{}\|u_{k}\|\;\big{|}\;k\in\mathbb{N}\big{\\}}\ell(t)$. This implies that a subsequence of $\\{\mathpzc{v}_{k}\\}$ converges weakly to some $\mathpzc{v}\in\textnormal{L}^{1}(T,\mathbb{R}^{m})$. Thus $v_{k}=\int_{T}\mathpzc{v}_{k}(t)d\mu\to\int_{T}\mathpzc{v}(t)d\mu$ along this subsequence, and therefore we arrive at $v=\int_{T}\mathpzc{v}(t)d\mu$.
Consider further the integrand $\Psi\colon T\times[0,\infty]\times{\mathbb{R}^{n}}\times\mathbb{R}^{m}\to\overline{\mathbb{R}}$ defined by $\displaystyle\Psi(t,r,a,b):=\left\\{\begin{array}[]{cc}\displaystyle\frac{\delta_{\operatorname{gph}\Phi_{t}}(\bar{x}+ra,\bar{\mathpzc{y}}(t)+rb)}{r}&\text{ if }r>0,\\\ &\\\ \overline{\mbox{\rm co}\,}d_{\operatorname{gph}\Phi_{t}}(\bar{x},\bar{\mathpzc{y}}(t))(a,b)&\text{ if }r=0.\end{array}\right.$ (5.14) It is easy to see that $\Psi(t,\cdot)$ is lower semicontinuous for all $t\in T$, and also that for every $(t,r,a)\in T_{na}\times[0,\infty)\times{\mathbb{R}^{n}}$ the function $\Psi(t,r,a,\cdot)$ is convex. Applying (bal, Theorem 2.1) on $T_{na}$, we get that $\displaystyle\int_{T_{na}}\overline{\mbox{\rm co}\,}d_{\operatorname{gph}\Phi_{t}}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}\big{(}u,\mathpzc{v}(t)\big{)}d\mu$ $\displaystyle=\int_{T_{na}}\Psi\big{(}t,0,u,\mathpzc{v}(t)\big{)}d\mu\leq\liminf\limits_{k\to\infty}\int_{T_{na}}\Psi\big{(}t,s_{k},u_{k},\mathpzc{v}_{k}(t)\big{)}d\mu=0.$ On the other hand, noticing that $\mathpzc{v}_{k}$ converges pointwise to $\mathpzc{v}$ on $T_{pa}$, we deduce from Fatou’s lemma and the lower semicontinuity of $\Psi$ that $\displaystyle\int_{T_{pa}}\overline{\mbox{\rm co}\,}d_{\operatorname{gph}\Phi_{t}}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}\big{(}u,\mathpzc{v}(t)\big{)}d\mu$ $\displaystyle=\int_{T_{pa}}\Psi\big{(}t,0,u,\mathpzc{v}(t)\big{)}d\mu\leq\int_{T_{pa}}\liminf\limits_{k\to\infty}\Psi\big{(}t,s_{k},u_{k},\mathpzc{v}_{k}(t)\big{)}d\mu$ $\displaystyle\leq\liminf\limits_{k\to\infty}\int_{T_{pa}}\Psi\big{(}t,s_{k},u_{k},\mathpzc{v}_{k}(t)\big{)}d\mu=0,$ which therefore verifies the estimate in (5.10). To proceed, pick $x^{\ast}\in\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu$ and then find, by using the assumed coderivative regularity of the integrand $\Phi_{t}$, a measurable selection $\mathpzc{x}^{\ast}(t)\in{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})=\widehat{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})$ such that $x^{\ast}=\int_{T}\mathpzc{x}^{\ast}(t)d\mu$. This implies due to (2.10) that $\displaystyle\langle\mathpzc{x}^{\ast}(t),u\rangle-\langle y^{\ast},v\rangle\leq d_{\operatorname{gph}\Phi_{t}}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(u,v)\;\text{ for all }\;(u,v)\in{\mathbb{R}^{n}}\times\mathbb{R}^{m},$ which ensures by definition of the convex closure the estimate $\displaystyle\langle\mathpzc{x}^{\ast}(t),u\rangle-\langle y^{\ast},v\rangle\leq\overline{\mbox{\rm co}\,}d_{\operatorname{gph}\Phi_{t}}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(u,v)\;\text{ for all }\;(u,v)\in{\mathbb{R}^{n}}\times\mathbb{R}^{m}.$ Fixing now $(u,v)\in{\mathbb{R}^{n}}\times\mathbb{R}^{m}$, we find $\mathpzc{v}$ satisfying (5.10), and so $\displaystyle\langle x^{\ast},u\rangle-\langle y^{\ast},v\rangle$ $\displaystyle=\int_{{T}}\left(\langle\mathpzc{x}^{\ast}(t),u\rangle-\langle y^{\ast},\mathpzc{v}(t)\rangle\right)d\mu$ $\displaystyle\leq\int_{T}\overline{\mbox{\rm co}\,}d_{\operatorname{gph}\Phi_{t}}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}\big{(}u,\mathpzc{v}(t)\big{)}d\mu$ $\displaystyle\leq d_{\operatorname{gph}\mathrm{E}_{\Phi}}(\bar{x},\bar{y})(u,v).$ Invoking now (2.10) yields $x^{\ast}\in\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})$, which verifies (5.9) and thus completes the proof.
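The coderivative regularity assumption of Theorem 5.4 cannot be dropped; the following one-dimensional sketch is added here for illustration and is not contained in the original development. Let $T=[0,1]$ with the Lebesgue measure, and let $\Phi_{t}(x):=\\{-|x|\\}$ for all $t\in T$, so that $\mathrm{E}_{\Phi}(x)=\\{-|x|\\}$, the value $\Phi_{t}(\bar{x})$ is a singleton at $\bar{x}=0$, and $\Phi$ is integrably locally Lipschitzian with $\ell(t)\equiv 1$. A direct computation of the normal cones to $\operatorname{gph}\Phi_{t}=\\{(x,-|x|)\\}$ at the origin gives $\widehat{D}^{\ast}\Phi_{t}(0,0)(1)=\emptyset\;\mbox{ while }\;D^{\ast}\Phi_{t}(0,0)(1)=\\{-1,1\\},$ so coderivative regularity fails at $y^{\ast}=1$. Accordingly, $\widehat{D}^{\ast}\mathrm{E}_{\Phi}(0)(1)=\emptyset$, $D^{\ast}\mathrm{E}_{\Phi}(0)(1)=\\{-1,1\\}$, and $\int_{T}D^{\ast}\Phi_{t}(0)(1)d\mu=[-1,1]$ by the Lyapunov convexity theorem on the nonatomic space $T$, so both equalities in (5.8) break down.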
The coderivative Leibniz rules obtained above in terms of the limiting coderivative and the coderivative characterizations of Lipschitzian properties of deterministic multifunctions discussed in Section 4 allow us to establish efficient conditions for Lipschitz stability of expected-integral mappings. We present here the following result ensuring the Lipschitz-like property of the multifunction $\mathrm{E}_{\Phi}$. ###### Proposition 8 (Lipschitz-like property of expected-integral multifunctions) In the setting of Theorem 5.2, assume in addition that $\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(0)d\mu=\big{\\{}0\big{\\}}\;\mbox{ for all }\;\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y}).$ (5.15) Then the expected-integral multifunction $\mathrm{E}_{\Phi}$ from (1.2) is Lipschitz-like around $(\bar{x},\bar{y})$. Proof. We readily deduce this statement from the coderivative Leibniz rule (5.5) and the coderivative criterion (4.9) for the Lipschitz-like property of deterministic set-valued mappings. $\hfill\square$ We conclude this section with a direct consequence of the obtained coderivative Leibniz rules of the inclusion and equality types to derive the corresponding subdifferential Leibniz rule for expected-integral functionals. The results in this vein can be found in different settings in chp19; mor-sag18; chp20; chp192. ###### Proposition 9 Let $\varphi\colon T\times\mathbb{R}^{n}\to\overline{\mathbb{R}}$ be a normal integrand. Take $\bar{x}\in{\mathbb{R}^{n}}$ and assume that there exist a positive integrable function $\ell\colon T\to(0,\infty)$ and a number $\varepsilon>0$ such that $\displaystyle|\varphi(t,x)-\varphi(t,y)|\leq\ell(t)\|x-y\|\;\text{ for all }\;x,y\in\mathbb{B}_{\varepsilon}(\bar{x})\;\mbox{ and a.e. }\;t\in T.$ Then we have the limiting subdifferential Leibniz rule $\partial\mathrm{E}_{\varphi}(\bar{x})\subset\int_{T}\partial\varphi_{t}(\bar{x})d\mu,$ (5.16) where $\varphi_{t}(x):=\varphi(t,x)$. Furthermore, (5.16) holds as equality if $\varphi_{t}$ is lower regular at $\bar{x}$ for a.e. $t\in T$. Proof. The inclusion result in (5.16) follows directly from Corollary 1. To verify the equality therein, observe that $D^{\ast}\varphi(\bar{x})(1)=\partial\varphi(\bar{x})$ for real-valued locally Lipschitzian functions (see (m18, Theorem 1.23)), and that we always have $\widehat{D}^{\ast}\varphi(\bar{x})(1)\supset\widehat{\partial}\varphi(\bar{x})$. Invoking finally the assumed lower regularity verifies the equality in (5.16) and thus completes the proof of the proposition. $\hfill\square$ ## 6 Lipschitz Stability and Coderivatives of Random Mappings with Composite Integrands This section is devoted to the study of expected-integral multifunctions $\mathrm{E}_{\Phi}$ from (1.2) with $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ generated by composite set-valued normal integrands of the type $\Phi_{t}(x)=F\big{(}t,g_{t}(x)\big{)}\;\text{ for all }\;x\in U\;\text{ and a.e. }\;t\in T$ (6.1) near some point $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$, where $F\colon T\times\mathbb{R}^{q}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ and $g\colon T\times U\to\mathbb{R}^{q}$. First we reveal general conditions on $F$ and $g$ ensuring that $\Phi$ is integrably locally Lipschitzian in the sense of (4.2) around the reference point.
Then we introduce a new notion of integrable amenable multifunctions, which expands to (both random and deterministic) set-valued mappings the fundamental notion of amenable extended-real-valued deterministic functions that was comprehensively studied in rw. The Leibniz-type rules obtained for such compositions give us pointwise upper estimates of both regular and limiting coderivatives of (1.2) via compositions of limiting coderivatives under the integral sign. Finally, we apply the obtained results to evaluate coderivatives of expected-integral multifunctions generated by feasible set mappings in constrained stochastic programming. Let us start with establishing Lipschitz stability of expected-integral multifunctions with composite integrands (6.1). The following lemma for deterministic set-valued mappings with convex graphs and its very simple proof are of their own interest. This result can be treated as a quantitative counterpart (without the closed-graph assumption) of (m93, Theorems 5.9 and 5.12). Recall from r that a set-valued mapping $G\colon{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ is sub-Lipschitzian around $\bar{x}\in\mbox{\rm dom}\,G$ if for every compact set $V\subset\mathbb{R}^{m}$ there exist a neighborhood $U$ of $\bar{x}$ and a number $\ell\geq 0$ such that we have the inclusion $G(x)\cap V\subset G(x^{\prime})+\ell\|x-x^{\prime}\|\mathbb{B}\;\mbox{ for all }\;x,x^{\prime}\in U.$ (6.2) The coderivative characterization from (m93, Theorem 5.9(b)) of the sub-Lipschitzian property (6.2) for (locally) closed-graph multifunctions reads as follows: $G$ is sub-Lipschitzian around $\bar{x}$ if and only if for any compact set $V\subset\mathbb{R}^{m}$ there exist $\eta>0$ and $\ell\geq 0$ such that $\sup\big{\\{}\|x^{*}\|\;\big{|}\;x^{*}\in D^{*}G(x,y)(y^{*})\big{\\}}\leq\ell\|y^{*}\|\;\mbox{ whenever }\;y^{*}\in\mathbb{R}^{m}$ (6.3) for all $x\in\mathbb{B}_{\eta}(\bar{x})$ and $y\in G(x)\cap V$. Here is our quantitative version for convex-graph multifunctions. ###### Lemma 1 (coderivative estimate for convex-graph multifunctions) Let $G\colon{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a convex-graph multifunction with $\bar{x}\in\mbox{\rm dom}\,G$ for which there exist positive numbers $M$ and $\eta$ such that ${\rm dist}\big{(}0;G(x)\big{)}\leq M\;\text{ whenever }\;x\in\mathbb{B}_{2\eta}(\bar{x}).$ (6.4) Then for all $x\in\mathbb{B}_{\eta}(\bar{x})$, all $y\in G(x)$, and all $y^{*}\in\mathbb{R}^{m}$ we have the coderivative estimate $\sup\big{\\{}\|x^{\ast}\|\;\big{|}\;x^{\ast}\in D^{\ast}G(x,y)(y^{\ast})\big{\\}}\leq\left(\frac{M+\|y\|}{\eta}\right)\|y^{\ast}\|.$ (6.5) If in addition the graph of $G$ is locally closed, then this multifunction is sub-Lipschitzian around $\bar{x}$. Proof. Pick any $x\in\mathbb{B}_{\eta}(\bar{x})$, $y\in G(x)$, and $x^{\ast}\in D^{\ast}G(x,y)(y^{\ast})$ with $y^{\ast}\in\mathbb{R}^{m}$. Since the graph of $G$ is convex and (6.4) holds, we get the inequalities (see, e.g., (m18, Proposition 1.7)) $\langle x^{\ast},h\rangle\leq\langle y^{\ast},y_{h}-y\rangle\leq\|y^{\ast}\|(M+\|y\|)\;\mbox{ for all }\;h\in\mathbb{B}_{\eta}(0),$ where $y_{h}\in G(x+h)$ is such that $\|y_{h}\|={\rm dist}(0;G(x+h))$. This readily yields (6.5). Finally, the sub-Lipschitzian property of $G$ around $\bar{x}$ follows from (6.3) and (6.5) with $\ell:=\big{(}M+\sup\big{\\{}\|y\|\;\big{|}\;y\in V\big{\\}}\big{)}/\eta$. $\hfill\square$
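As a quick check of the estimate (6.5), sketched here for illustration and not taken from the original text, consider $G(x):=\\{y\in\mathbb{R}\;|\;y\geq|x|\\}$, whose graph is the (convex) epigraph of the absolute value function. With $\bar{x}=0$ and any $\eta>0$ we may take $M:=2\eta$ in (6.4), since ${\rm dist}(0;G(x))=|x|$. At a boundary point $(x,|x|)$ with $x>0$ we have $D^{\ast}G(x,|x|)(y^{\ast})=\\{y^{\ast}\\}$ for $y^{\ast}\geq 0$ and $D^{\ast}G(x,|x|)(y^{\ast})=\emptyset$ otherwise, while $D^{\ast}G(0,0)(y^{\ast})=[-y^{\ast},y^{\ast}]$ for $y^{\ast}\geq 0$. In all cases $\|x^{\ast}\|\leq y^{\ast}\leq\big{(}(M+\|y\|)/\eta\big{)}\|y^{\ast}\|$, in accordance with (6.5).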
The second lemma is more technical while being used below in the proofs of both theorems in this section. To proceed, we need the following assumption: for all $h\in\eta\mathbb{B}$ the function $t\mapsto{\rm dist}\big{(}0;F_{t}(g_{t}(\bar{x})+h)\big{)}\;\mbox{ is integrable on }\;T.$ (6.6) ###### Lemma 2 (perturbed distance estimate) Let $F\colon T\times\mathbb{R}^{q}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a set-valued convex normal integrand satisfying condition (6.6). Then there exist $\kappa\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ and $\eta_{1}\in(0,\eta)$ such that $\mbox{\rm dist}\,\big{(}0;F_{t}(g_{t}(\bar{x})+h)\big{)}\leq\kappa(t)\;\text{ for all }\;h\in\eta_{1}\mathbb{B}\;\text{ and a.e. }\;t\in T.$ (6.7) Proof. Take finitely many vectors $e_{i}\in\eta\mathbb{B}$ for $i\in I$ such that $0\in\mbox{\rm int}\,\Delta$, where $\Delta$ is the simplex generated by the vectors $\\{e_{i}\\}_{i\in I}$, i.e., given by $\Delta:=\bigg{\\{}\sum_{i\in I}\lambda_{i}e_{i}\;\bigg{|}\;\lambda_{i}\geq 0,\;\sum_{i\in I}\lambda_{i}=1\bigg{\\}}.$ Define the function $\kappa(t):=\max\\{{\rm dist}(0;F_{t}(g_{t}(\bar{x})+e_{i}))\;|\;i\in I\\}$, which is integrable on $T$ by assumption (6.6), and then pick any $h\in\Delta$ written in the form $h=\sum_{i\in I}\lambda_{i}e_{i}$. Recalling that the graph of $F_{t}$ is convex for a.e. $t\in T$ ensures the fulfillment of the estimates ${\rm dist}\big{(}0;F_{t}(g_{t}(\bar{x})+h)\big{)}\leq\sum_{i\in I}\lambda_{i}{\rm dist}\big{(}0;F_{t}(g_{t}(\bar{x})+e_{i})\big{)}\leq\kappa(t)\;\mbox{ for a.e. }\;t\in T.$ Choosing finally $\eta_{1}>0$ with $\eta_{1}\mathbb{B}\subset\Delta$ verifies (6.7) and thus completes the proof of the lemma. $\hfill\square$ Having Lemmas 1 and 2 in hand, we now establish the integrable local Lipschitzian property of convex composite normal integrands (6.1) under some additional assumptions. ###### Theorem 6.1 (integrable local Lipschitzian property of composite integrands) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be a normal integrand represented in the composite form (6.1) around $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\Phi}$ on a complete finite measure space $(T,{\cal A},\mu)$, where $F\colon T\times\mathbb{R}^{q}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ is a set-valued convex normal integrand that is integrably bounded as in (5.2), and where $g_{t}(x)$ is continuously differentiable around $\bar{x}$ with the uniformly bounded gradients $\nabla g_{t}(x)$ for all $x\in\mathbb{B}_{\eta}(\bar{x})\subset U$ and for a.e. $t\in T$ under the fulfillment of (6.6). Then the multifunction $\Phi$ is integrably locally Lipschitzian around $\bar{x}$. Proof. Select a set $\widehat{T}\subset T$ of full measure, and then take $\eta>0$ and $\kappa\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that $F$ satisfies (6.7) for all $t\in\widehat{T}$, and that $\Phi$ satisfies the integrable boundedness condition from (5.2) over $\mathbb{B}_{\eta}(\bar{x})$ on $\widehat{T}$ with this function $\kappa$. Now choose $\varepsilon\in(0,\eta)$ ensuring the inequalities $\sup\big{\\{}\|\nabla g_{t}(x)\|\;\big{|}\;(t,x)\in\widehat{T}\times\mathbb{B}_{\varepsilon}(\bar{x})\big{\\}}\leq\ell\;\mbox{ and}$ $\|g_{t}(x)-g_{t}(u)\|\leq\ell\|x-u\|\;\mbox{ whenever}\;x,u\in\mathbb{B}_{\varepsilon}(\bar{x}),\;t\in\widehat{T}$ for some $\ell>0$.
Suppose without loss of generality that $\varepsilon\ell<\eta$. Now taking $t\in\widehat{T}$, $x\in\mathbb{B}_{\varepsilon/2}(\bar{x})$, $y\in\Phi_{t}(x)$, and $x^{\ast}\in D^{\ast}\Phi_{t}(x,y)(y^{\ast})$ with some $y^{\ast}\in\mathbb{R}^{m}$, we observe that $g_{t}(x)\in\mathbb{B}_{\eta/2}(g_{t}(\bar{x}))$. Applying Lemma 1 to the mapping $G:=F_{t}$ at the point $g_{t}(\bar{x})$ gives us the coderivative estimate (6.5), which ensures by the coderivative criterion (4.9) that $F_{t}$ is Lipschitz-like around $(g_{t}(x),y)$. Then applying the coderivative chain rule for deterministic multifunctions from (m18, Theorem 3.11(iii)) to the composition in (6.1) yields the representation $x^{\ast}=\nabla g_{t}(x)^{\ast}\circ z^{\ast}\;\mbox{ with some }\;z^{\ast}\in D^{\ast}F_{t}\big{(}g_{t}(x),y\big{)}(y^{\ast}).$ Employing again the coderivative estimate (6.5) leads us to the inequality $\|z^{\ast}\|\leq\frac{4\kappa(t)}{\eta}\|y^{\ast}\|,$ where we use that $\|y\|\leq\kappa(t)$ due to the choice of $y\in\Phi_{t}(x)$. This tells us that $\|x^{\ast}\|=\|\nabla g_{t}(x)^{\ast}\circ z^{\ast}\|\leq\|\nabla g_{t}(x)^{\ast}\|\cdot\|z^{\ast}\|\leq\frac{4\ell\kappa(t)}{\eta}\|y^{\ast}\|.$ Since the obtained estimate holds for all $t\in\widehat{T}$, $x\in\mathbb{B}_{\varepsilon/2}(\bar{x})$, and $y\in\Phi_{t}(x)$, we arrive at the coderivative condition (4.10), which ensures by Theorem 4.1 that $\Phi$ is integrably locally Lipschitzian around $\bar{x}$. $\hfill\square$ Next we introduce a new property of random multifunctions that plays a crucial role in deriving composite coderivative Leibniz rules in what follows. ###### Definition 4 (integrably amenable multifunctions) A set-valued normal integrand $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ defined on a complete finite measure space $(T,\mathcal{A},\mu)$ is integrably amenable at $(\bar{x},\bar{y})\in\operatorname{gph}\mathrm{E}_{\Phi}$ if there exist a neighborhood $U$ of $\bar{x}$, a set-valued convex normal integrand $F\colon T\times\mathbb{R}^{q}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$, and a function $g\colon T\times U\to\mathbb{R}^{q}$, which is measurable with respect to $t$ on $T$ and continuously differentiable with respect to $x$ around $\bar{x}$, such that the composite representation (6.1) holds, and that the following qualification condition is satisfied for a.e. $t\in T$: $D^{\ast}F_{t}\big{(}g_{t}(\bar{x}),\bar{\mathpzc{y}}(t)\big{)}(0)\cap\operatorname{Ker}\nabla g_{t}(\bar{x})^{*}=\\{0\\}\;\mbox{ whenever }\;\bar{\mathpzc{y}}\in{\cal S}_{\Phi}(\bar{x},\bar{y}).$ (6.8) It follows from Definition 3(ii) that the qualification condition (6.8) is automatically fulfilled if the outer mapping $F$ in (6.1) is integrably quasi-Lipschitzian around $(\bar{x},\bar{\mathpzc{y}})$ for each $\bar{\mathpzc{y}}\in{\cal S}_{\Phi}(\bar{x},\bar{y})$. Recall that Theorem 4.2 ensures the fulfillment of (6.8) when $F$ is integrably Lipschitz-like around $(\bar{x},\bar{\mathpzc{y}})$ for each $\bar{\mathpzc{y}}\in{\cal S}_{\Phi}(\bar{x},\bar{y})$, provided that $(T,\mathcal{A},\mu)$ is a purely atomic measure space consisting of a countable disjoint family of atoms. In the alternative setting of nonatomic spaces, we deduce from Theorem 4.3 that the integrable quasi-Lipschitzian property of $F$ to ensure (6.8) can be equivalently replaced by its locally Lipschitzian counterpart.
On the other hand, observe from (6.8) that this condition is satisfied, independently of the Lipschitzian properties of $F$, if the Jacobian $\nabla g_{t}(\bar{x})$ is of full rank for a.e. $t\in T$. Now we are ready to derive the pointwise coderivative Leibniz rules for both regular and limiting coderivatives of expected-integral multifunctions with amenable set-valued integrands. ###### Theorem 6.2 (coderivative Leibniz rules for integral multifunctions with amenable integrands) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be an integrably amenable multifunction at $(\bar{x},\bar{y})\in\operatorname{gph}\mathrm{E}_{\Phi}$, which is assumed to be integrably bounded as in (5.2) with the uniformly bounded gradients $\nabla g_{t}(x)$ while satisfying the integrability condition (6.6) for the outer mapping $F$ in (6.1). Then for every $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$ and every $y^{\ast}\in\mathbb{R}^{m}$ the regular coderivative upper estimate $\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})\subset\int_{T}\nabla g_{t}(\bar{x})^{\ast}\circ D^{\ast}F_{t}\big{(}g_{t}(\bar{x}),\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu$ (6.9) holds. If in addition the multifunction $\mathcal{S}_{\Phi}$ is inner semicompact at $(\bar{x},\bar{y})$, then we have the following upper estimate of the limiting coderivative of $\mathrm{E}_{\Phi}$ at $(\bar{x},\bar{y})$: $D^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})\subset\bigcup\limits_{\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})}\int_{T}\nabla g_{t}(\bar{x})^{\ast}\circ{D}^{\ast}F_{t}\big{(}g_{t}(\bar{x}),\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu.$ (6.10) Proof. It follows from Theorem 6.1 that $\Phi$ is integrably locally Lipschitzian around $\bar{x}$. Then we apply Theorem 5.1 to observe that the coderivative Leibniz rule (6.9) holds with the integral of the set-valued mapping $D^{\ast}\Phi_{t}(\bar{x},\bar{\mathpzc{y}}(t))$ on the right-hand side. Applying now the coderivative chain rule for deterministic multifunctions from (m18, Theorem 3.11(iii)) under the amenability assumption of Definition 4, we arrive at the claimed assertion (6.9). Finally, the limiting coderivative inclusion (6.10) is verified similarly to the above proof by involving the corresponding arguments of Theorem 5.2. $\hfill\square$ The concluding part of this section provides an efficient specification of the main Theorem 6.2 for the case of random inequality constraint systems described by smooth functions. In the deterministic framework, such constraint systems appear in nonlinear programming, while in the random setting under consideration they address parametric sets of feasible solutions in problems of stochastic programming. Let us formalize this as follows. Given a finite family of convex normal integrands $\varphi^{i}\colon T\times\mathbb{R}^{q}\times\mathbb{R}^{m}\to\mathbb{R}$ with $i\in I$, assume that $\varphi^{i}_{t}(z,y)$ are continuously differentiable with respect to $(z,y)$ around the reference points for a.e.
$t\in T$ and then define the set-valued convex normal integrand $F\colon T\times\mathbb{R}^{q}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ by $F(t,z):=\big{\\{}y\in\mathbb{R}^{m}\;\big{|}\;\varphi^{i}_{t}(z,y)\leq 0\;\text{ for all }\;i\in I\big{\\}}.$ (6.11) For brevity, we skip here establishing specifications of Theorem 6.1 on Lipschitz stability for the case of integrands (6.11), while concentrating on deriving coderivative Leibniz rules for such systems. To proceed, let us introduce two constraint qualification conditions. The first qualification condition can be treated as a random uniform version of the classical Slater constraint qualification in deterministic convex programming. The second one requires the solution triviality for a certain adjoint system generated by the derivatives of $\varphi^{i}_{t}$ and $g_{t}$ at the reference points. ###### Definition 5 (integrable constraint qualifications) Consider the random constraint system (6.11). (i) Given a measurable mapping $\mathpzc{z}\colon T\to\mathbb{R}^{q}$, we say that the constraint system (6.11) satisfies the integrable Slater constraint qualification at $\mathpzc{z}$ if there is a number $\eta>0$ such that for all $h\in\eta\mathbb{B}$ there exists $\mathpzc{y}\in\textnormal{L}^{1}(T,\mathbb{R}^{m})$ ensuring the strict inequality $\varphi^{i}_{t}\big{(}\mathpzc{z}(t)+h,\mathpzc{y}(t)\big{)}<0\;\text{ whenever }\;i\in I\;\text{ for a.e. }\;t\in T.$ (6.12) (ii) Fix $x\in{\mathbb{R}^{n}}$, $t\in T$, and $y\in F_{t}(g_{t}(x))$, and then define the adjoint system at $(t,x,y)$ by $(z^{\ast},0)=\sum_{i\in I_{t}(x,y)}\lambda_{i}\nabla\varphi^{i}_{t}\big{(}g_{t}(x),y\big{)},\;\lambda_{i}\geq 0,\;\nabla g_{t}(x)^{\ast}(z^{\ast})=0,$ (6.13) where $I_{t}(x,y):=\\{i\in I\;|\;\varphi^{i}_{t}(g_{t}(x),y)=0\\}$. We say that the integral triviality qualification condition (ITQC) holds at $\bar{x}$ if there exists $\eta>0$ such that for all $x\in\mathbb{B}_{\eta}(\bar{x})$, a.e. $t\in T$, and all $y\in F_{t}(g_{t}(x))$ the adjoint system (6.13) admits only the trivial solution $z^{\ast}=0$. The following consequence of Theorem 6.2 establishes pointwise coderivative Leibniz rules for expected-integral multifunctions $\mathrm{E}_{\Phi}$ with composite integrands (6.1), where $F$ is represented in the constraint form (6.11). The obtained coderivative estimates are given entirely in terms of the initial constraint data. ###### Corollary 2 (coderivative Leibniz rules over random constraint systems) Let $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ be an integrably bounded normal integrand on a complete finite measure space $(T,{\cal A},\mu)$ given in the composite form (6.1) around $\bar{x}$ with some fixed pair $(\bar{x},\bar{y})\in\operatorname{gph}\mathrm{E}_{\Phi}$, where $F\colon T\times\mathbb{R}^{q}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\mathbb{R}^{m}$ is the constraint convex-graph multifunction defined in (6.11), and where $g\colon T\times U\to\mathbb{R}^{q}$ is measurable in $t$ and continuously differentiable in $x$ while satisfying $\sup\big{\\{}\|\nabla g_{t}(x)\|\;\big{|}\;(t,x)\in T\times U\big{\\}}<\infty.$ (6.14) Assume further that the ITQC holds at $\bar{x}$, and that the integrable Slater constraint qualification is satisfied at $\mathpzc{z}(t):=g_{t}(\bar{x})$.
Then for every $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})$ and every $y^{\ast}\in\mathbb{R}^{m}$ we have the inclusion $\widehat{D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})\subset\int_{T}\nabla g_{t}(\bar{x})^{\ast}\circ D^{\ast}F_{t}\big{(}g_{t}(\bar{x}),\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu,$ (6.15) where the limiting coderivative $D^{\ast}F_{t}(g_{t}(\bar{x}),\bar{\mathpzc{y}}(t))$ is computed by $\displaystyle D^{\ast}F_{t}\big{(}g_{t}(\bar{x}),\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})=\left\\{z^{\ast}\in\mathbb{R}^{q}\;\Bigg{|}\;\begin{array}[]{c}(z^{\ast},-y^{\ast})=\displaystyle\sum\limits_{i\in I_{t}(\bar{x},\bar{\mathpzc{y}}(t))}\lambda_{i}\nabla\varphi^{i}_{t}\big{(}g_{t}(\bar{x}),\bar{\mathpzc{y}}(t)\big{)}\\\ \vspace{-0.3cm}\hfil\\\ \text{ for some }\;\lambda_{i}\geq 0\;\text{ with }\;i\in I_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}\end{array}\right\\}.$ (6.19) If in addition the multifunction $\mathcal{S}_{\Phi}$ is inner semicompact at $(\bar{x},\bar{y})$, then we have $D^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})\subset\bigcup\limits_{\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})}\int_{T}\nabla g_{t}(\bar{x})^{\ast}\circ{D}^{\ast}F_{t}\big{(}g_{t}(\bar{x}),\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu,$ (6.20) where $D^{\ast}F_{t}\big{(}g_{t}(\bar{x}),\bar{\mathpzc{y}}(t)\big{)}$ is computed in (6.19). Proof. First we check that all the assumptions of Theorem 6.2 are fulfilled under the assumptions made in this corollary. Indeed, the required (local) uniform boundedness of the gradients $\nabla g_{t}(x)$ is formalized in (6.14), while the imposed integrable Slater constraint qualification (6.12) readily ensures the existence of $\eta_{1}>0$ for which condition (6.6) holds whenever $h\in\eta_{1}\mathbb{B}$. Further, choose $\eta_{2}>0$ such that for all $x\in\mathbb{B}_{\eta_{2}}(\bar{x})$, a.e. $t\in T$, and all $y\in F_{t}(g_{t}(x))$ we have that the adjoint system (6.13) admits only the trivial solution $z^{\ast}=0$. Denoting $\eta:=\min\\{\eta_{1}/(\ell+1),\eta_{2}\\}$ and fixing the triple $(x,y,t)$ from the above ensure that $g_{t}(x)-g_{t}(\bar{x})\in\eta_{1}\mathbb{B}$. Employing again the integrable Slater condition together with (m06, Corollary 4.35) (by taking into account that the Slater condition yields in this setting the Mangasarian-Fromovitz constraint qualification assumed in (m06, Corollary 4.35(b))), we arrive at the coderivative calculation $\displaystyle D^{\ast}F_{t}\big{(}g_{t}(x),y\big{)}(v^{\ast})=\left\\{z^{\ast}\in\mathbb{R}^{q}\;\Bigg{|}\;\begin{array}[]{c}(z^{\ast},-v^{\ast})=\displaystyle\sum\limits_{i\in I_{t}(x,y)}\lambda_{i}\nabla\varphi^{i}_{t}\big{(}g_{t}(x),y\big{)}\\\ \vspace{-0.3cm}\hfil\\\ \text{ for some }\;\lambda_{i}\geq 0\;\text{ with }\;i\in I_{t}(x,y)\end{array}\right\\}$ whenever $v^{*}\in\mathbb{R}^{m}$; this clearly implies (6.19). It now follows from the ITQC that the qualification condition (6.8) is satisfied, and thus $\Phi$ is an integrably amenable multifunction at $(\bar{x},\bar{y})$. Applying the coderivative Leibniz rules from Theorem 6.2 yields both inclusions in (6.15) and (6.20).$\hfill\square$ To conclude this section, observe that the obtained basic coderivative Leibniz rules in (6.10) and (6.20) readily imply, similarly to the proof of Proposition 8 by using the coderivative criterion (4.9), the Lipschitz-like property (4.8) of expected-integral multifunctions $\mathrm{E}_{\Phi}$ with composite integrands (6.1), as well as their specifications for random constraint systems (6.11).
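As an elementary illustration of the coderivative formula (6.19), added here as a sketch and setting aside the integrable boundedness requirement of (5.2), which fails for the unbounded-valued mapping below, take $q=m=1$, $I=\\{1\\}$, $g_{t}(x):=x$, and $\varphi^{1}_{t}(z,y):=c(t)z-y$ with $c\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$, so that $F_{t}(z)=[c(t)z,\infty)$. At an active point $\bar{\mathpzc{y}}(t)=c(t)\bar{x}$ we have $\nabla\varphi^{1}_{t}=(c(t),-1)$, and (6.19) reads $D^{\ast}F_{t}\big{(}\bar{x},c(t)\bar{x}\big{)}(y^{\ast})=\\{c(t)y^{\ast}\\}\;\mbox{ for }\;y^{\ast}\geq 0\;\mbox{ and }\;D^{\ast}F_{t}\big{(}\bar{x},c(t)\bar{x}\big{)}(y^{\ast})=\emptyset\;\mbox{ otherwise},$ while the adjoint system (6.13) forces $\lambda_{1}=0$ and hence $z^{\ast}=0$, so the ITQC holds automatically in this simple setting.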
## 7 Second-Order Subdifferentials of Expected-Integral Functionals In this section we study expected functionals $\mathrm{E}_{\varphi}(x)$ of type (1.4) generated by extended-real-valued normal integrands $\varphi\colon T\times{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$ on complete finite measure spaces $(T,{\cal A},\mu)$. We refer the reader to chp19; chp192; chp20; mor-sag18 and the bibliographies therein for more results concerning first-order subdifferential calculus rules of convex and nonconvex expected-integral functionals of type (1.4). Our main focus here is on sequential and pointwise second-order Leibniz rules for $\mathrm{E}_{\varphi}(x)$ derived in terms of both basic and combined second-order subdifferentials that are defined in (2.12) and (2.13), respectively. To simplify our analysis, we postulate that the first-order subdifferential Leibniz rule holds as an equality, which means that for the reference point $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\varphi}$ there exists $\rho>0$ such that $\partial\mathrm{E}_{\varphi}(x)=\int_{T}\partial\varphi_{t}(x)d\mu\;\text{ whenever }\;x\in\mathbb{B}_{\rho}(\bar{x}).$ (7.1) The following proposition reveals some important cases where condition (7.1) is satisfied. ###### Proposition 10 (first-order subdifferential Leibniz rule as equality) Let $\varphi$ be a normal integrand on a complete finite measure space $(T,{\cal A},\mu)$, and let $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\varphi}$. Then we have (7.1) provided that either the assumptions in (i) or those in (ii) hold: 1. (i) $\varphi_{t}$ is convex for almost all $t\in T$, and $\bar{x}$ is an interior point of $\mbox{\rm dom}\,\mathrm{E}_{\varphi}$. 2. (ii) There exist $\widehat{T}\in\mathcal{A}$ with $\mu(T\backslash\widehat{T})=0$, $\kappa\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ and a neighborhood $U$ of $\bar{x}$ such that $|\varphi(t,u)-\varphi(t,v)|\leq\kappa(t)\|u-v\|\;\text{ for all }\;u,v\in U\;\text{ and }\;t\in\widehat{T},$ (7.2) and for all $x\in U$ the function $\varphi_{t}$ is lower regular at $x$ for a.e. $t\in T$. Furthermore, in both these cases there exists $\rho>0$ such that $\widehat{\partial}\mathrm{E}_{\varphi}(x)=\partial\mathrm{E}_{\varphi}(x)=\int_{T}\widehat{\partial}\varphi_{t}(x)d\mu=\int_{T}\partial\varphi_{t}(x)d\mu\;\text{ for all }\;x\in\mathbb{B}_{\rho}(\bar{x}).$ (7.3) Proof. The claimed result in case (i) with the formulas in (7.3) can be found, e.g., in chp19; chp192. The verification of (7.3), and hence of (7.1), in case (ii) follows from Proposition 9. $\hfill\square$ To proceed further, we assume from now on that $\begin{array}[]{ll}\varphi_{t}\;\text{ is lower regular at every }\;x\in\mathbb{B}_{\rho}(\bar{x})\;\text{ for all }\;t\in T_{na},\\\ \partial\varphi_{t}(x)\subset\kappa(t)\mathbb{B}\;\text{ for all }\;x\in\mathbb{B}_{\rho}(\bar{x})\;\text{ and a.e. }\;t\in T,\end{array}$ (7.4) where a constant $\rho>0$ and an integrable function $\kappa\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ are fixed in what follows. Similarly to (4.16) consider the multifunction $\mathcal{S}_{\varphi}\colon{\mathbb{R}^{n}}\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;\textnormal{L}^{1}(T,{\mathbb{R}^{n}})$ defined by $\mathcal{S}_{\varphi}(x,y):=\Big{\\{}\mathpzc{y}\in\textnormal{L}^{1}(T,\mathbb{R}^{n})\;\Big{|}\;\int_{T}\mathpzc{y}(t)d\mu=y\;\text{ and }\;\mathpzc{y}(t)\in\partial\varphi_{t}(x)\;\text{ for a.e.
}\;t\in T\Big{\\}}.$ (7.5) Let us start the derivation of second-order Leibniz rules with sequential results involving the combined second-order subdifferential mappings (2.13). These results are induced by the corresponding sequential Leibniz rules for coderivatives obtained in Section 5. Here are the two statements in this direction. For brevity we prove only the second one, taking into account that the proof of the first proposition is quite similar with the usage of Proposition 6 instead of Proposition 7. ###### Proposition 11 (sequential second-order subdifferential Leibniz rule, I) Let $\varphi\colon T\times{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$ be a normal integrand with $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\varphi}$ and $\bar{y}\in\partial\mathrm{E}_{\varphi}(\bar{x})$. Suppose that $\varphi$ satisfies conditions (7.1) and (7.4) around $\bar{x}$, pick $\bar{\mathpzc{y}}\in\mathcal{S}_{\varphi}(\bar{x},\bar{y})$ such that $\bar{y}=\int_{T}\bar{\mathpzc{y}}(t)d\mu$ and $\bar{\mathpzc{y}}(t)\in\partial\varphi_{t}(\bar{x})$ for a.e. $t\in T$, and take any $\bar{x}^{\ast}\in\breve{\partial}^{2}\mathrm{E}_{\varphi}(\bar{x},\bar{y})(\bar{y}^{\ast})$ with $\bar{y}^{\ast}\in{\mathbb{R}^{n}}$. Then for every $p,q\in(1,\infty)$ with $1/p+1/q=1$ there exist sequences $\\{x_{k}\\}\subset{\mathbb{R}^{n}}$, $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{p}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{*}\\}\subset{\textnormal{L}}^{q}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,{\mathbb{R}^{n}})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,{\mathbb{R}^{n}})$ for which the following hold: 1. (i) $\mathpzc{x}_{k}^{*}(t)\in\breve{\partial}^{2}\varphi_{t}\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t)\big{)}\big{(}\mathpzc{y}_{k}^{\ast}(t)\big{)}$ for a.e. $t\in T$ and all $k\in\mathbb{N}$. 2. (ii) $\|\bar{x}-x_{k}\|\to 0$, $\|\bar{x}-\mathpzc{x}_{k}\|_{p}\to 0$, and $\|\bar{\mathpzc{y}}-\mathpzc{y}_{k}\|_{1}\to 0$ as $k\to\infty$. 3. (iii) $\|\mathpzc{y}_{k}^{\ast}-\bar{y}^{\ast}\|_{\infty}\to 0$, $\|\mathpzc{x}_{k}^{*}\|_{q}\|\mathpzc{x}_{k}-x_{k}\|_{p}\to 0$, and $\displaystyle\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu\to\bar{x}^{\ast}$ as $k\to\infty$. ###### Proposition 12 (sequential second-order subdifferential Leibniz rule, II) In the general setting of Proposition 11 $($without the specification of $p$ and $q$$)$, take any $\bar{x}^{\ast}\in\breve{\partial}^{2}\mathrm{E}_{\varphi}(\bar{x},\bar{y})(\bar{y}^{\ast})$ with $\bar{y}^{\ast}\in{\mathbb{R}^{n}}$. Then there exist sequences $\\{x_{k}\\}\subset{\mathbb{R}^{n}}$, $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{\infty}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{\ast}\\}\subset{\textnormal{L}}^{1}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,{\mathbb{R}^{n}})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,{\mathbb{R}^{n}})$ satisfying the following conditions: 1. (i) $\mathpzc{x}_{k}^{*}(t)\in\breve{\partial}^{2}\varphi_{t}\big{(}\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t)\big{)}\big{(}\mathpzc{y}_{k}^{\ast}(t)\big{)}$ for a.e. $t\in T$ and all $k\in\mathbb{N}$. 2. (ii) $\|\bar{x}-x_{k}\|\to 0$, $\|\bar{x}-\mathpzc{x}_{k}\|_{\infty}\to 0$, $\|\bar{\mathpzc{y}}-\mathpzc{y}_{k}\|_{1}\to 0$, and $\|\mathpzc{y}_{k}^{\ast}-\bar{y}^{\ast}\|_{\infty}\to 0$ as $k\to\infty$. 3. (iii) $\displaystyle\int_{T}\|\mathpzc{x}_{k}^{*}(t)\|\cdot\|\mathpzc{x}_{k}(t)-x_{k}\|d\mu\to 0$ and $\displaystyle\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu\to\bar{x}^{\ast}$ as $k\to\infty$. Proof.
Define the set-valued normal integrand $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;{\mathbb{R}^{n}}$ by $\Phi_{t}(x):=\partial\varphi_{t}(x)$. It follows from (7.4) that $\Phi$ satisfies the conditions in (5.2). Furthermore, by (7.1) we have that $\partial\mathrm{E}_{\varphi}(x)=\mathrm{E}_{\Phi}(x)\;\text{ for all }\;x\in\mathbb{B}_{\rho}(\bar{x}).$ Applying now Proposition 7 gives us sequences $\\{x_{k}\\}\subset{\mathbb{R}^{n}}$, $\\{\mathpzc{x}_{k}\\}\subset\textnormal{L}^{\infty}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{x}_{k}^{\ast}\\}\subset{\textnormal{L}}^{1}({T},{\mathbb{R}^{n}})$, $\\{\mathpzc{y}_{k}\\}\subset\textnormal{L}^{1}(T,\mathbb{R}^{n})$, and $\\{\mathpzc{y}_{k}^{\ast}\\}\subset\textnormal{L}^{\infty}(T,\mathbb{R}^{n})$ satisfying the inclusions $\mathpzc{x}_{k}^{*}(t)\in\widehat{D}^{\ast}\Phi_{t}(\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t))(\mathpzc{y}_{k}^{\ast}(t))$ for a.e. $t\in T$ together with the convergence $\|\bar{x}-x_{k}\|\to 0$, $\|\bar{x}-\mathpzc{x}_{k}\|_{\infty}\to 0$, $\|\mathpzc{y}_{k}^{\ast}-\bar{y}^{\ast}\|_{\infty}\to 0$, $\displaystyle\int_{T}\|\bar{\mathpzc{y}}(t)-\mathpzc{y}_{k}(t)\|d\mu\to 0,\;\int_{T}\|\mathpzc{x}_{k}^{*}(t)\|\cdot\|\mathpzc{x}_{k}(t)-x_{k}\|d\mu\to 0,\;\mbox{ and }\;\int_{T}\mathpzc{x}_{k}^{*}(t)d\mu\to\bar{x}^{\ast}\;\mbox{ as }\;k\to\infty.$ Remembering that $\widehat{D}^{\ast}\Phi_{t}(\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t))(\mathpzc{y}_{k}^{\ast}(t))=\breve{\partial}^{2}\varphi_{t}(\mathpzc{x}_{k}(t),\mathpzc{y}_{k}(t))(\mathpzc{y}_{k}^{\ast}(t))$, we conclude the proof.$\hfill\square$ Our major goal in this section is to obtain pointwise second-order subdifferential Leibniz rules. To proceed, let us first specify the integrable quasi-Lipschitzian property of random multifunctions from Definition 3(ii) for the case of basic second-order subdifferential mappings (2.12). Given a normal integrand $\varphi\colon T\times{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$, pick $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\varphi}$, $\bar{y}\in\partial\mathrm{E}_{\varphi}(\bar{x})$, and $\bar{\mathpzc{y}}\in\textnormal{L}^{1}(T,\mathbb{R}^{n})$ with $\bar{y}=\int_{T}\bar{\mathpzc{y}}(t)d\mu$ and $\bar{\mathpzc{y}}(t)\in\partial\varphi_{t}(\bar{x})$ for a.e. $t\in T$. We say that $\varphi$ enjoys the second-order integrable quasi-Lipschitzian property around $(\bar{x},\bar{\mathpzc{y}})$ if there exist $\eta>0$ and $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that $\sup\big{\\{}\|x^{\ast}\|\;\big{|}\;x^{\ast}\in\partial^{2}\varphi_{t}\big{(}x,\mathpzc{y}(t)\big{)}\big{(}\mathpzc{y}^{\ast}(t)\big{)}\big{\\}}\leq\ell(t)\|\mathpzc{y}^{\ast}(t)\|\;\text{ for a.e. }\;t\in T$ (7.6) whenever $x\in\mathbb{B}_{\eta}(\bar{x})$, $\mathpzc{y}\in\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\cap\partial\varphi(x)$, and $\mathpzc{y}^{\ast}\in\textnormal{L}^{\infty}(T,\mathbb{R}^{n})$ with $\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\cap\partial\varphi(x):=\big{\\{}\mathpzc{y}\in\textnormal{L}^{1}(T,\mathbb{R}^{n})\;\big{|}\;\mathpzc{y}\in\mathbb{B}_{\eta}(\bar{\mathpzc{y}})\;\text{ and }\;\mathpzc{y}(t)\in\partial\varphi_{t}(x)\;\text{ for a.e. }\;t\in T\big{\\}}.$ The first theorem provides a pointwise Leibniz-type estimate of the combined second-order subdifferential of $\mathrm{E}_{\varphi}$ in terms of the basic second-order subdifferential of the integrand.
###### Theorem 7.1 (pointwise Leibniz-type estimate of the combined second-order subdifferential) Let $\varphi\colon T\times{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$ be a normal integrand defined on a complete finite measure space $(T,{\cal A},\mu)$ with $\bar{x}\in\mbox{\rm dom}\,\mathrm{E}_{\varphi}$ and $\bar{y}\in\partial\mathrm{E}_{\varphi}(\bar{x})$. Suppose that $\varphi$ satisfies conditions (7.1) and (7.4) and enjoys the second-order integrable quasi-Lipschitzian property (7.6) around $(\bar{x},\bar{\mathpzc{y}})$ for a given $\bar{\mathpzc{y}}\in\mathcal{S}_{\varphi}(\bar{x},\bar{y})$. Then for every $y^{\ast}\in{\mathbb{R}^{n}}$ we have $\breve{\partial}^{2}\mathrm{E}_{\varphi}(\bar{x},\bar{y})(y^{\ast})\subset\int_{T}{\partial}^{2}\varphi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu.$ (7.7) Proof. Consider again the set-valued normal integrand $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;{\mathbb{R}^{n}}$ given by $\Phi_{t}(x):=\partial\varphi_{t}(x)$. Observe that by (7.4) and (7.1) the multifunction $\Phi$ satisfies the conditions in (5.2) and $\partial\mathrm{E}_{\varphi}=\mathrm{E}_{\Phi}$, respectively, around $\bar{x}$. Furthermore, it follows from (7.6) that $\Phi$ enjoys the integrable quasi-Lipschitzian property around $(\bar{x},\bar{\mathpzc{y}})$. Applying Theorem 5.1, we verify the claimed inclusion (7.7). $\hfill\square$ The next theorem is the main result of this section. It establishes the pointwise second-order subdifferential Leibniz rule for the robust basic second-order construction (2.12). ###### Theorem 7.2 (basic second-order subdifferential Leibniz rule) In the setting of Theorem 7.1, assume in addition that the mapping ${\cal S}_{\varphi}$ from (7.5) is inner semicompact at $(\bar{x},\bar{y})$ and that $\varphi$ enjoys the second-order integrable quasi-Lipschitzian property (7.6) around $(\bar{x},\bar{\mathpzc{y}})$ for every $\bar{\mathpzc{y}}\in\mathcal{S}_{\varphi}(\bar{x},\bar{y})$. Then we have $\partial^{2}\mathrm{E}_{\varphi}(\bar{x},\bar{y})(y^{\ast})\subset\bigcup\limits_{\bar{\mathpzc{y}}\in\mathcal{S}_{\varphi}(\bar{x},\bar{y})}\int_{T}\partial^{2}\varphi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu.$ (7.8) Proof. Having $\Phi\colon T\times{\mathbb{R}^{n}}\;{\lower 1.0pt\hbox{$\rightarrow$}}\kern-10.0pt\hbox{\raise 2.0pt\hbox{$\rightarrow$}}\;{\mathbb{R}^{n}}$ with $\Phi_{t}(x):=\partial\varphi_{t}(x)$, observe as in the proof of Theorem 7.1 that $\Phi$ satisfies the conditions in (5.2) and $\partial\mathrm{E}_{\varphi}=\mathrm{E}_{\Phi}$ around $\bar{x}$, and that $\Phi$ enjoys the integrable quasi-Lipschitzian property around $(\bar{x},\bar{\mathpzc{y}})$ for all $\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})=\mathcal{S}_{\varphi}(\bar{x},\bar{y})$. Since $\mathcal{S}_{\varphi}$ is inner semicompact at $(\bar{x},\bar{y})$, we get that $\mathcal{S}_{\Phi}$ is also inner semicompact at this point. Applying Theorem 5.2 tells us that $\displaystyle\partial^{2}\mathrm{E}_{\varphi}(\bar{x},\bar{y})(y^{\ast})$ $\displaystyle={D}^{\ast}\mathrm{E}_{\Phi}(\bar{x},\bar{y})(y^{\ast})\subset\bigcup\limits_{\bar{\mathpzc{y}}\in\mathcal{S}_{\Phi}(\bar{x},\bar{y})}\int_{T}{D}^{\ast}\Phi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu$ $\displaystyle=\bigcup\limits_{\bar{\mathpzc{y}}\in\mathcal{S}_{\varphi}(\bar{x},\bar{y})}\int_{T}\partial^{2}\varphi_{t}\big{(}\bar{x},\bar{\mathpzc{y}}(t)\big{)}(y^{\ast})d\mu,$ which verifies (7.8) and thus completes the proof of this theorem. $\hfill\square$
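The simplest smooth quadratic case, sketched here for illustration and not contained in the original text, shows that (7.8) may hold as an equality. Let $n=1$ and $\varphi(t,x):=q(t)x^{2}/2$ with $q\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$. Then $\partial\varphi_{t}(x)=\\{q(t)x\\}$, so $\mathcal{S}_{\varphi}(\bar{x},\bar{y})$ reduces to the single selection $\bar{\mathpzc{y}}(t)=q(t)\bar{x}$ and is trivially inner semicompact, while (7.6) holds with $\ell(t):=q(t)$. Since $\mathrm{E}_{\varphi}(x)=Qx^{2}/2$ with $Q:=\int_{T}q(t)d\mu$, both sides of (7.8) reduce to $\partial^{2}\mathrm{E}_{\varphi}(\bar{x},Q\bar{x})(y^{\ast})=\\{Qy^{\ast}\\}=\int_{T}\\{q(t)y^{\ast}\\}d\mu=\int_{T}\partial^{2}\varphi_{t}\big{(}\bar{x},q(t)\bar{x}\big{)}(y^{\ast})d\mu.$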
The final result of this section provides a second-order subdifferential Leibniz rule for normal integrands in (1.1), which are represented in the form of maximum functions $\varphi(t,x):=\max\big{\\{}\psi_{i}(t,x)\;\big{|}\;i=1,\ldots,s\big{\\}}\;\mbox{ for }\;t\in T\;\mbox{ and }\;x\in{\mathbb{R}^{n}}.$ (7.9) The second-order Leibniz rule, which is obtained in the next statement, evaluates the (basic) second-order subdifferential of $\mathrm{E}_{\varphi}$ with $\varphi$ from (7.9) in terms of the second-order subdifferential of $\varphi_{t}$. The latter construction is constructively calculated in eh; ms entirely via the given functions $\psi_{i}$ for various types of maximum functions. For brevity we do not present the precise formulas here while referring the reader to the aforementioned papers. ###### Corollary 3 (second-order subdifferential Leibniz rule with normal integrands as maximum functions) Let $\psi_{i}\colon T\times{\mathbb{R}^{n}}\to\overline{\mathbb{R}}$, $i=1,\ldots,s$, be a family of normal integrand functions on a complete finite measure space $(T,{\cal A},\mu)$, and let $U$ be an open subset of ${\mathbb{R}^{n}}$ where the functions $\psi_{i}$ satisfy the following assumptions: 1. (i) When $x\in U$ and $i=1,\ldots,s$, the functions $\psi_{i}(t,\cdot)$ are continuously differentiable around $x$ for a.e. $t\in T$. 2. (ii) There exists $\ell\in\textnormal{L}^{1}(T,\mathbb{R}_{+})$ such that for all $i=1,\dots,s$, all $u_{1},u_{2}\in U$, and a.e. $t\in T$ we have $|\psi_{i}(t,u_{1})-\psi_{i}(t,u_{2})|+\|\nabla_{x}\psi_{i}(t,u_{1})-\nabla_{x}\psi_{i}(t,u_{2})\|\leq\ell(t)\|u_{1}-u_{2}\|.$ Consider the maximum function (7.9) and assume that for some $\bar{x}\in U$ and $\bar{y}\in\partial\mathrm{E}_{\varphi}(\bar{x})$ the multifunction $\mathcal{S}_{\varphi}$ is inner semicompact at $(\bar{x},\bar{y})$ and that $\varphi$ enjoys the second-order integrable quasi-Lipschitzian property (7.6) around $(\bar{x},\bar{\mathpzc{y}})$. Then for all $y^{\ast}\in{\mathbb{R}^{n}}$ we have the second-order subdifferential Leibniz rule (7.8), where the second-order subdifferentials $\partial^{2}\varphi_{t}$, $t\in T$, are computed in eh; ms. Proof. It is well known (see, e.g., (m18, Theorem 4.10)) that $\partial\varphi_{t}(x)={\rm co}\big{\\{}\nabla_{x}\psi_{i}(t,x)\;\big{|}\;\psi_{i}(t,x)=\varphi(t,x)\big{\\}}\;\mbox{ for all }\;x\in U.$ (7.10) Thus $\varphi_{t}$ satisfies the conditions (7.4) around $\bar{x}$. It follows from Proposition 10(ii) that (7.1) holds at every $x\in U$. Consequently, we get the equality $\partial\mathrm{E}_{\varphi}(x)=\mathrm{E}_{\Phi}(x)\;\text{ for all }\;x\in U\;\mbox{ with }\;\Phi(t,u):=\partial\varphi_{t}(u).$ Employing finally Theorem 4.1 and Proposition 4 tells us that the maximum function $\varphi$ satisfies the second-order integrable quasi-Lipschitzian assumption required in Theorem 7.2. Thus applying Theorem 7.2 we arrive at (7.8) and complete the proof of the corollary. $\hfill\square$ ## 8 Concluding Remarks The paper establishes general results on first-order and second-order generalized differentiation of random set-valued and single-valued mappings. We also introduce new Lipschitzian properties of such multifunctions and derive their generalized differential characterizations. The obtained results provide the foundation for broad applications of variational analysis and generalized differentiation to stochastic optimization and related topics at essentially the same level of development as for their deterministic counterparts.
Among our future research topics, we mention applications to first-order and second-order optimality conditions in stochastic programming, sensitivity analysis in parametric stochastic optimization, tilt and full stability of random optimal solutions, stochastic variational inequalities, and stochastic numerical algorithms. Classes of problems in stochastic optimization of our special interest include two-stage stochastic programs, probabilistic programs, stochastic bilevel programs, etc. We are positive that the theory developed in this paper will be highly instrumental in applications to such classes of stochastic problems. Acknowledgements. The authors thank the Handling Associate Editor and two anonymous referees for their useful suggestions and remarks, which helped us to improve the original presentation. ## References * (1) W. Van Ackooij and R. Henrion, (Sub-) Gradient formulae for probability functions of random inequality systems under Gaussian distribution, SIAM/ASA J. Uncertainty Quantification, 5 (2017), pp. 63–87. * (2) W. Van Ackooij and P. Pérez-Aros, Generalized differentiation of probability functions acting on an infinite system of constraints, SIAM J. Optim., 29 (2020), pp. 2179–2210. * (3) E. J. Balder, Necessary and sufficient conditions for $L_{1}$-strong-weak lower semicontinuity of integral functionals, Nonlinear Anal. 11 (1987), 1399–1404. * (4) E. J. Balder and A. R. Sambucini, Fatou’s lemma for multifunctions with unbounded values in a dual space, J. Convex Anal., 12 (2005), pp. 383–395. * (5) V. I. Bogachev, Measure Theory, Vols. I and II, Springer, Berlin, 2007. * (6) J. M. Burke, X. Chen and H. Sun, The subdifferential of measurable composite max integrands and smoothing approximation, Math. Program., 181 (2020), pp. 229–264. * (7) C. Castaing and M. Valadier, Convex Analysis and Measurable Multifunctions, Springer, Berlin, 1977. * (8) R. Correa, A. Hantoute and P. Pérez-Aros, Characterizations of the subdifferential of convex integral functions under qualification conditions, J. Func. Anal., 277 (2019), pp. 227–254. * (9) R. Correa, A. Hantoute and P. Pérez-Aros, Qualification conditions-free characterizations of the $\varepsilon$-subdifferential of convex integral functions, Appl. Math. Optim. (2020), DOI:10.1007/s00245-019-09604-y. * (10) R. Correa, A. Hantoute and P. Pérez-Aros, Subdifferential calculus rules for possibly nonconvex integral functions, SIAM J. Control Optim., 58 (2020), pp. 462–484. * (11) D. Dentcheva and A. Ruszczyński, Subregular recourse in nonlinear multistage stochastic optimization, Math. Program. (2021), DOI:10.1007/s10107-020-01612-z. * (12) J. Diestel and J. J. Uhl, Jr., Vector Measures, AMS, Providence, RI, 1977. * (13) A. L. Dontchev and R. T. Rockafellar, Implicit Functions and Solution Mappings: A View from Variational Analysis, 2nd edition, Springer, New York, 2014. * (14) K. Emich and R. Henrion, A simple formula for the second-order subdifferential of maximum functions, Vietnam J. Math., 42 (2014), pp. 467–478. * (15) A. Hantoute, R. Henrion and P. Pérez-Aros, Subdifferential characterization of probability functions under Gaussian distribution, Math. Program., 174 (2019), pp. 167–194. * (16) R. Henrion and W. Römisch, On M-stationary points for a stochastic equilibrium problem under equilibrium constraints in electricity spot market modeling, Appl. Math., 52 (2007), pp. 473–494. * (17) C. Hess, Set-valued integration and set-valued probability theory: an overview, in Handbook of Measure Theory (E.
Pap, ed.), Chapter 14, North Holland/Elsevier, Amsterdam, 2002. * (18) B. S. Mordukhovich, Sensitivity analysis in nonsmooth optimization, in Theoretical Aspects of Industrial Design (D. A. Field and V. Komkov, eds.), SIAM Proc. Appl. Math. 58, pp. 32–46, Philadelphia, PA, 1992. * (19) B. S. Mordukhovich, Complete characterizations of openness, metric regularity, and Lipschitzian properties of multifunctions, Trans. Amer. Math. Soc., 340 (1993), pp. 1–35. * (20) B. S. Mordukhovich, Variational Analysis and Generalized Differentiation, I: Basic Theory; II: Applications, Springer, Berlin, 2006. * (21) B. S. Mordukhovich, Variational Analysis and Applications, Springer, Cham, Switzerland, 2018. * (22) B. S. Mordukhovich and P. Pérez-Aros, New extremal principles with applications to stochastic and semi-infinite programming, Math. Program. (2020), DOI: 10.1007/s10107-020-01548-4. * (23) B. S. Mordukhovich and P. Pérez-Aros, Generalized sequential differential calculus for expected-integral functionals, to appear in Set-Valued Var. Anal. (2021), DOI: 10.1007/s11228-021-00590-4. * (24) B. S. Mordukhovich and R. T. Rockafellar, Second-order subdifferential calculus with application to tilt stability in optimization, SIAM J. Optim., 22 (2012), pp. 953–986. * (25) B. S. Mordukhovich and N. Sagara, Subdifferentials of nonconvex integral functionals in Banach spaces with applications to stochastic dynamic programming, J. Convex Anal., 25 (2018), pp. 643–673. * (26) B. S. Mordukhovich and N. Sagara, Subdifferentials of value functions in nonconvex dynamic programming for nonstationary stochastic processes, Comm. Stoch. Anal., 13 (2019), pp. 1–18, DOI:10.31390/cosa.13.3.05. * (27) B. S. Mordukhovich and M. E. Sarabi, Variational analysis and full stability of optimal solutions to constrained and minimax problems, Nonlinear Anal., 121 (2015), pp. 36–53. * (28) A. Shapiro, D. Dentcheva and A. Ruszczyński, Lectures on Stochastic Programming, SIAM, Philadelphia, PA, 2009. * (29) R. T. Rockafellar, Lipschitzian properties of multifunctions, Nonlinear Anal., 9 (1985), pp. 867–885. * (30) R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, Berlin, 1998.
On the inversion of Riordan arrays Paul Barry School of Science Waterford Institute of Technology Ireland <EMAIL_ADDRESS> ###### Abstract Many Riordan arrays play a significant role in algebraic combinatorics. We explore the inversion of Riordan arrays in this context. We give a general construct for the inversion of a Riordan array, and study this in the case of various subgroups of the Riordan group. For instance, we show that the inversion of an ordinary Bell matrix is an exponential Riordan array in the associated subgroup. Examples from combinatorics and algebraic combinatorics illustrate the usefulness of such inversions. We end with a brief look at the inversion of exponential Riordan arrays. A final example places Airey’s converging factor in the context of a simple exponential Riordan array. ## 1 Preliminaries We let $\mathcal{F}=\\{a_{0}+a_{1}x+a_{2}x^{2}+\cdots\,|a_{i}\in\mathbf{R}\\}$ be the set of formal power series with coefficients $a_{i}$ drawn from the ring $\mathbf{R}$. This ring can be any ring over which the operations we will carry out make sense, but for concreteness it can be assumed to be $\mathbb{Q}$. For combinatorial problems, the ring $\mathbf{R}$ is often the ring of integers $\mathbb{Z}$. It will be seen that most of the matrices we deal with have integer entries. We shall use two distinguished subsets of $\mathcal{F}$, namely $\mathcal{F}_{0}=\\{a_{0}+a_{1}x+a_{2}x^{2}+\cdots\,|a_{i}\in\mathbf{R},a_{0}\neq 0\\},$ and $\mathcal{F}_{1}=\\{a_{1}x+a_{2}x^{2}+\cdots\,|a_{i}\in\mathbf{R},a_{1}\neq 0\\}.$ Elements of $\mathcal{F}_{1}$ are composable and possess compositional inverses. If $f(x)\in\mathcal{F}_{1}$, we shall denote by $\bar{f}(x)$ or $\operatorname{Rev}(f)(x)$ its compositional inverse. Thus we have $\bar{f}(f(x))=x$ and $f(\bar{f}(x))=x$. Throughout our exposition, we will stipulate that $g(x),u(x)\in\mathcal{F}_{0}$ and $f(x),v(x)\in\mathcal{F}_{1}$. Without much loss of generality, we will also assume that $g(0)=u(0)=v^{\prime}(0)=1$. A Riordan array [3, 14] may be defined by a pair $(u(x),v(x))\in\mathcal{F}_{0}\times\mathcal{F}_{1}$ of power series, represented by the invertible lower-triangular matrix $\left(t_{n,k}\right)$ where $t_{n,k}=[x^{n}]u(x)v(x)^{k}.$ Here, the functional $[x^{n}]$ acts on elements of $\mathcal{F}$ by returning the coefficient of $x^{n}$ of the power series in question [10]. We shall sometimes write $(g,f)_{n,k}$ for $t_{n,k}$. The product of two Riordan arrays is again a Riordan array, defined by $(g,f)\cdot(u,v)=(g\cdot u(f),v(f)).$ In the matrix representation, this corresponds to the usual multiplication of matrices. The inverse of a Riordan array is a Riordan array, given by $(g,f)^{-1}=\left(\frac{1}{g(\bar{f})},\bar{f}\right).$ In the matrix representation, this coincides with the normal inverse of a matrix. The identity element is $(1,x)$. Note that all matrices considered will be lower-triangular matrices indexed by $(n,k)$ where $0\leq n,k<\infty$. We use suitable truncations of such matrices in the text. Many examples of Riordan arrays can be found in the On-Line Encyclopedia of Integer Sequences (OEIS) [15, 16], along with many of the sequences occurring in this note. Such sequences are referred to by their OEIS $Annnnnn$ numbers. For instance, the Catalan numbers $C_{n}=\frac{1}{n+1}\binom{2n}{n}$ have the OEIS number A000108. All the matrices in this note are assumed to be lower-triangular, and most are invertible.
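To make the matrix representation concrete, the following short computational sketch (ours, not part of the original exposition; the helper name `riordan_matrix` is our own) builds a truncation of a Riordan array directly from the defining formula $t_{n,k}=[x^{n}]u(x)v(x)^{k}$, using Python with the sympy library.

```python
# Illustrative sketch: truncate a Riordan array (u, v) to a finite
# lower-triangular matrix via t_{n,k} = [x^n] u(x) v(x)^k.
import sympy as sp

x = sp.symbols('x')

def riordan_matrix(u, v, size=6):
    """Return the size x size truncation of the Riordan array (u, v)."""
    mat = sp.zeros(size, size)
    for k in range(size):
        # Taylor-expand u(x) * v(x)^k and read off the coefficients of x^n.
        poly = sp.series(u * v**k, x, 0, size).removeO()
        for n in range(size):
            mat[n, k] = poly.coeff(x, n)
    return mat

# The array (1/(1+x), -x) of Example 1 below:
print(riordan_matrix(1/(1 + x), -x))
```

Running this reproduces the alternating triangle displayed in Example 1 below; any truncation shown in this note can be regenerated by changing the pair $(u,v)$.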
The Fundamental Theorem of Riordan arrays (FTRA) specifies how a Riordan array $(g,f)$ operates on a generating function. Thus we have $(g(x),f(x))\cdot h(x)=g(x)h(f(x)).$ In the matrix representation, this corresponds to the matrix $(t_{n,k})$ multiplying the vector whose elements are $(h_{0},h_{1},h_{2},\ldots)$ where $h(x)=\sum_{n=0}^{\infty}h_{n}x^{n}$. The ability to switch between the power series view of Riordan arrays and the FTRA on the one hand, and the linear algebra view involving matrix multiplication and matrix inverses on the other, makes Riordan arrays a powerful tool in many of the settings of algebraic combinatorics. To each Riordan array $(g(x),f(x))$ we associate the bivariate generating function $G(x,y)=\frac{g(x)}{1-yf(x)}.$ Thus we have that $G(x,y)=\sum_{n,k\geq 0}t_{n,k}x^{n}y^{k}.$ Fixing $y$, we can then seek the $x$-inversion $\operatorname{Rev}_{x}(xG(x,y))$ of the generating function $xG(x,y)$. By the _inversion of the Riordan array $(g(x),f(x))$_ we shall mean the array given by the expansion of the generating function $\frac{1}{x}\operatorname{Rev}_{x}(xG(x,y)).$ Note that this is often called the “REVERT” transform of $G$. Where no confusion will be caused, we will continue to call it the inversion of $G$, or of the corresponding Riordan array. We shall denote this array by $(g(x),f(x))^{!}$. It is a Riordan array only in special cases. We give two combinatorially important examples. ###### Example 1. We consider the Riordan array $\left(\frac{1}{1+x},-x\right)$ which begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -1&-1&0&0&0&0\\\ 1&1&1&0&0&0\\\ -1&-1&-1&-1&0&0\\\ 1&1&1&1&1&0\\\ -1&-1&-1&-1&-1&-1\\\ \end{array}\right).$ To obtain its inversion, we solve the equation $\frac{\frac{u}{1+u}}{1-y(-u)}=x,$ or $\frac{u}{(1+u)(1+yu)}=x$ for $u$. We obtain $u=\frac{1-(1+y)x-\sqrt{1-2(1+y)x+(1-y)^{2}x^{2}}}{2xy}.$ Thus the inversion of $G(x,y)$ in this case is given by $\frac{u}{x}=\frac{1-(1+y)x-\sqrt{1-2(1+y)x+(1-y)^{2}x^{2}}}{2x^{2}y}.$ This expands to give the number triangle that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&3&1&0&0&0\\\ 1&6&6&1&0&0\\\ 1&10&20&10&1&0\\\ 1&15&50&50&15&1\\\ \end{array}\right).$ This is the triangle of Narayana numbers $\left(\frac{1}{k+1}\binom{n}{k}\binom{n+1}{k}\right)$ A001263. Thus we have $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -1&-1&0&0&0&0\\\ 1&1&1&0&0&0\\\ -1&-1&-1&-1&0&0\\\ 1&1&1&1&1&0\\\ -1&-1&-1&-1&-1&-1\\\ \end{array}\right)^{!}=\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&3&1&0&0&0\\\ 1&6&6&1&0&0\\\ 1&10&20&10&1&0\\\ 1&15&50&50&15&1\\\ \end{array}\right).$ ###### Example 2. In this second example, we consider the Riordan array $\left(\frac{1}{(1+x)^{2}},\frac{-x}{1+x}\right)$, which begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -2&-1&0&0&0&0\\\ 3&3&1&0&0&0\\\ -4&-6&-4&-1&0&0\\\ 5&10&10&5&1&0\\\ -6&-15&-20&-15&-6&-1\\\ \end{array}\right).$ Apart from signs, this matrix enumerates faces of the $n$-simplex.
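Before carrying out the inversion, the displayed truncation can be checked computationally; the following self-contained sympy snippet (ours, purely illustrative) regenerates it.

```python
# Illustrative check: the truncation of (1/(1+x)^2, -x/(1+x)) in Example 2.
import sympy as sp

x = sp.symbols('x')
u, v = 1/(1 + x)**2, -x/(1 + x)
size = 6
M = sp.Matrix(size, size,
              lambda n, k: sp.series(u * v**k, x, 0, size).removeO().coeff(x, n))
print(M)  # rows 1; -2,-1; 3,3,1; ... matching the display above
```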
In order to find its inversion, we solve for $u$ the equation $\frac{\frac{u}{(1+u)^{2}}}{1-y\frac{-u}{1+u}}=x,$ or $\frac{u}{(1+u)(1+(1+y)u)}=x.$ Choosing the solution $u$ with $u(0)=0$, we obtain that $\frac{u}{x}=\frac{1-(y+2)x-\sqrt{1-2(y+2)x+x^{2}y^{2}}}{2(1+y)x^{2}}.$ This is the generating function of the number triangle A126216 that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 2&1&0&0&0&0\\\ 5&5&1&0&0&0\\\ 14&21&9&1&0&0\\\ 42&84&56&14&1&0\\\ 132&330&300&120&20&1\\\ \end{array}\right).$ Among many combinatorial interpretations, this matrix enumerates faces of the $n$-associahedron. In the theory of Koszul operads [8], this result follows from the fact that $Trias^{!}=Tridend.$ ## 2 Generalities In this section, we gather some facts that will be important in later sections. We begin by recalling the definitions of two transforms of importance. The _binomial matrix_ is the matrix $\mathbf{B}=\left(\binom{n}{k}\right)_{0\leq k,n<\infty}$ A007318. This is the Riordan array $\left(\frac{1}{1-x},\frac{x}{1-x}\right)$. It begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&2&1&0&0&0\\\ 1&3&3&1&0&0\\\ 1&4&6&4&1&0\\\ 1&5&10&10&5&1\\\ \end{array}\right).$ The _binomial transform_ of a sequence (regarded as an infinite vector) or an array is the result of multiplying that vector or matrix by the binomial matrix $\mathbf{B}$. In terms of generating functions, the Fundamental Theorem of Riordan arrays tells us that if such a sequence has generating function $g(x)$, then its binomial transform has generating function $\frac{1}{1-x}g\left(\frac{x}{1-x}\right)$. Similarly for an array with bivariate generating function $G(x,y)$, its ($x$-)binomial transform will have generating function $\frac{1}{1-x}G\left(\frac{x}{1-x},y\right)$. The _inverse binomial transform_ is obtained by multiplying by $\mathbf{B}^{-1}$, which corresponds to the Riordan array $\left(\frac{1}{1+x},\frac{x}{1+x}\right)$. In terms of generating functions, this means that the inverse binomial transform of the sequence with generating function $g(x)$ has generating function $\frac{1}{1+x}g\left(\frac{x}{1+x}\right)$. The _invert transform_ of a sequence with generating function $g(x)\in\mathcal{F}_{0}$ is the sequence with generating function $\frac{g(x)}{1-xg(x)}$. More generally, we shall say that $\frac{g(x)}{1-\alpha xg(x)}$ is the invert$(\alpha)$ transform of $g(x)$. These two transforms are related by the process of inversion. ###### Proposition 3. The revert transform of the invert transform of $g(x)\in\mathcal{F}_{0}$ is the inverse binomial transform of the revert transform of $g(x)$. ###### Proof. We recall that the revert transform of $g(x)$ is given by $\frac{1}{x}\operatorname{Rev}(xg(x)).$ We let $v(x)=xg(x)$ and we write $\bar{v}(x)=\operatorname{Rev}(xg(x))$.
Then the inverse binomial transform of the revert transform of $g(x)$ is given by $\frac{1}{1+x}\left(\frac{1}{x}\bar{v}\right)\left(\frac{x}{1+x}\right).$ We then have $\displaystyle\frac{1}{1+x}\left(\frac{1}{x}\bar{v}\right)\left(\frac{x}{1+x}\right)$ $\displaystyle=\frac{1}{1+x}\frac{1}{\frac{x}{1+x}}\bar{v}\left(\frac{x}{1+x}\right)$ $\displaystyle=\frac{1}{x}\bar{v}\left(\frac{x}{1+x}\right)$ $\displaystyle=\frac{1}{x}\bar{v}\circ\left(\frac{x}{1+x}\right)(x)$ $\displaystyle=\frac{1}{x}\bar{v}\circ\overline{\frac{x}{1-x}}(x)$ $\displaystyle=\frac{1}{x}\overline{\frac{x}{1-x}\circ v}(x)$ $\displaystyle=\frac{1}{x}\operatorname{Rev}\left(\frac{v(x)}{1-v(x)}\right)$ $\displaystyle=\frac{1}{x}\operatorname{Rev}\left(\frac{xg(x)}{1-xg(x)}\right).$ ∎ We can extend the notion of invert transform to Riordan arrays by operating on the $x$-variable. Thus the invert transform of the Riordan array is the array with generating function $\frac{G(x,y)}{1-xG(x,y)},\quad\text{where}\quad G(x,y)=\frac{g(x)}{1-yf(x)}.$ We have the following result. ###### Proposition 4. The invert transform of the Riordan array $(g(x),f(x))$ is the Riordan array $\left(\frac{g(x)}{1-xg(x)},\frac{f(x)}{1-xg(x)}\right).$ ###### Proof. The Riordan array $(g(x),f(x))$ has bivariate generating function $\frac{g(x)}{1-yf(x)}$. The ($x$-)invert transform of this is given by $\frac{\frac{g(x)}{1-yf(x)}}{1-x\frac{g(x)}{1-yf(x)}}=\frac{g(x)}{1-xg(x)-yf(x)}.$ The Riordan array $\left(\frac{g(x)}{1-xg(x)},\frac{f(x)}{1-xg(x)}\right)$ has a bivariate generating function given by $\frac{\frac{g(x)}{1-xg(x)}}{1-y\frac{f(x)}{1-xg(x)}}=\frac{g(x)}{1-xg(x)-yf(x)}.$ ∎ More generally, the invert$(\alpha)$ transform of the Riordan array $(g(x),f(x))$ is the Riordan array $\left(\frac{g(x)}{1-\alpha xg(x)},\frac{f(x)}{1-\alpha xg(x)}\right).$ ###### Example 5. We consider the Riordan array $\left(\frac{1}{1+x},-x\right)$ of our first example. The invert transform of this array is given by $\left(\frac{\frac{1}{1+x}}{1-x\frac{1}{1+x}},\frac{-x}{1-x\frac{1}{1+x}}\right)=(1,-x(1+x)).$ This is the array $\left((-1)^{k}\binom{k}{n-k}\right)$, which begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&-1&0&0&0&0\\\ 0&-1&1&0&0&0\\\ 0&0&2&-1&0&0\\\ 0&0&1&-3&1&0\\\ 0&0&0&-3&4&-1\\\ \end{array}\right).$ To obtain its inversion, we solve the equation $\frac{u}{1+yu(1+u)}=x.$ We obtain that $\frac{u}{x}=\frac{1-xy-\sqrt{1-2xy+(y-4)x^{2}y}}{2x^{2}y}.$ This expands to give the array that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\ 0&1&1&0&0&0\\\ 0&0&3&1&0&0\\\ 0&0&2&6&1&0\\\ 0&0&0&10&10&1\\\ \end{array}\right).$ This is the array $\left(\binom{n}{2(n-k)}C_{n-k}\right)$. We now note that we have $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&2&1&0&0&0\\\ 1&3&3&1&0&0\\\ 1&4&6&4&1&0\\\ 1&5&10&10&5&1\\\ \end{array}\right)\cdot\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\ 0&1&1&0&0&0\\\ 0&0&3&1&0&0\\\ 0&0&2&6&1&0\\\ 0&0&0&10&10&1\\\ \end{array}\right)=\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&3&1&0&0&0\\\ 1&6&6&1&0&0\\\ 1&10&20&10&1&0\\\ 1&15&50&50&15&1\\\ \end{array}\right).$ The generating function of the reversal of an array with generating function $G(x,y)$ is given by $G\left(xy,\frac{1}{y}\right)$. We can deduce from this that the inversion of the reversal of an array is the reversal of the inversion of the original array. 
Thus the inversion of the array that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -1&-2&0&0&0&0\\\ 1&3&3&0&0&0\\\ -1&-4&-6&-4&0&0\\\ 1&5&10&10&5&0\\\ -1&-6&-15&-20&-15&-6\\\ \end{array}\right)$ is given by the array that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&2&0&0&0&0\\\ 1&5&5&0&0&0\\\ 1&9&21&14&0&0\\\ 1&14&56&84&42&0\\\ 1&20&120&300&330&132\\\ \end{array}\right).$ The initial column of the array with generating function $G(x,y)$ has generating function $G(x,0)$. The initial column of the inversion of the array is then the revert transform of this original initial column. Likewise, the row sums of the array with generating function $G(x,y)$ have generating function $G(x,1)$. We then have that the row sums of the inversion of an array are given by the revert transform of the row sums of the original array. ###### Example 6. The revert transform of the sequence $1,-2,3,-4,5,-6,\ldots$ is the sequence $1,2,5,14,42,132,\ldots.$ The row sums of the matrix that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -2&-1&0&0&0&0\\\ 3&3&1&0&0&0\\\ -4&-6&-4&-1&0&0\\\ 5&10&10&5&1&0\\\ -6&-15&-20&-15&-6&-1\\\ \end{array}\right)$ begin $1,-3,7,-15,31,-63,127,\ldots.$ The revert transform of this sequence, which begins $1,3,11,45,197,903,\ldots$ is then given by the row sums of the array that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 2&1&0&0&0&0\\\ 5&5&1&0&0&0\\\ 14&21&9&1&0&0\\\ 42&84&56&14&1&0\\\ 132&330&300&120&20&1\\\ \end{array}\right).$ As the inversion process is involutive, we note that the inversion of the inversion of an array is the original array. Of importance in the sequel will be the technique of Lagrange inversion [5, 11]. The form that we shall use is the following. We have $[x^{n}]H(\bar{f})=\frac{1}{n}[x^{n-1}]H^{\prime}(x)\left(\frac{x}{f}\right)^{n},$ where $H(x)\in\mathbf{R}[[x]]$. ## 3 Main results We first find a general expression for the $(n,k)$-th term of the inversion $(g(x),f(x))^{!}$ of the Riordan array $(g(x),f(x))$. ###### Lemma 7. The $(n,k)$-th term $\hat{t}_{n,k}$ of the inversion of the Riordan array $(g(x),f(x))$ is given by $\hat{t}_{n,k}=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}.$ ###### Proof. Using Lagrange inversion, we find that $\displaystyle[x^{n}y^{k}]\frac{1}{x}\operatorname{Rev}\left(\frac{xg(x)}{1-yf(x)}\right)$ $\displaystyle=[x^{n+1}y^{k}]\operatorname{Rev}\left(\frac{xg(x)}{1-yf(x)}\right)$ $\displaystyle=\frac{1}{n+1}[x^{n}y^{k}]\left(\frac{1-yf(x)}{g(x)}\right)^{n+1}$ $\displaystyle=\frac{1}{n+1}[x^{n}y^{k}]\sum_{j=0}^{n+1}\binom{n+1}{j}(-1)^{j}y^{j}f(x)^{j}\left(\frac{1}{g(x)}\right)^{n+1}$ $\displaystyle=\frac{(-1)^{k}}{n+1}[x^{n}]\binom{n+1}{k}f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}.$ ∎ ###### Proposition 8. The $(n,k)$-th term $\hat{t}_{n,k}$ of the inversion of the Riordan array $(g(x),f(x))$ is given by $\hat{t}_{n,k}=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}\left(\left(\operatorname{Rev}(xg(x))\right)^{\prime},f(\operatorname{Rev}(xg(x)))\right)_{n,k}.$ ###### Proof. Using Lagrange inversion, we have the following chain of equalities.
$\displaystyle[x^{n-1}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n}$ $\displaystyle=n\cdot\frac{1}{n}[x^{n-1}]f(x)^{k}\left(\frac{x}{xg(x)}\right)^{n}$ $\displaystyle=n\cdot\frac{1}{n}[x^{n-1}]H^{\prime}(x)\left(\frac{x}{xg(x)}\right)^{n}\quad\text{where }\,H^{\prime}(x)=f(x)^{k}$ $\displaystyle=n[x^{n}]H(\operatorname{Rev}(xg(x)))$ $\displaystyle=[x^{n-1}]\frac{d}{dx}H(\operatorname{Rev}(xg(x)))$ $\displaystyle=[x^{n-1}]H^{\prime}(\operatorname{Rev}(xg(x)))\cdot(\operatorname{Rev}(xg(x)))^{\prime}$ $\displaystyle=[x^{n-1}](f(\operatorname{Rev}(xg(x))))^{k}\cdot(\operatorname{Rev}(xg(x)))^{\prime}$ Thus we have $[x^{n}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}=[x^{n}](\operatorname{Rev}(xg(x)))^{\prime}\cdot(f(\operatorname{Rev}(xg(x))))^{k}=\left((\operatorname{Rev}(xg(x)))^{\prime},f(\operatorname{Rev}(xg(x)))\right)_{n,k}.$ ∎ Notice that we have the factorization $\left((\operatorname{Rev}(xg(x)))^{\prime},f(\operatorname{Rev}(xg(x)))\right)=\left((\operatorname{Rev}(xg(x)))^{\prime},\operatorname{Rev}(xg(x))\right)\cdot(1,f(x)).$ We now look at the consequences of this result for three particular subgroups of the Riordan group. These are 1. 1. The _Appell subgroup_ of Riordan arrays of the form $(g(x),x)$. 2. 2. The _associated or Lagrange subgroup_ of Riordan arrays of the form $(1,f(x))$. 3. 3. The _Bell subgroup_ of Riordan arrays of the form $(g(x),xg(x))$. We note that Riordan arrays of the form $(f^{\prime},f)$ constitute the _derivative subgroup_ of the Riordan group. Thus the Riordan array $\left((\operatorname{Rev}(xg(x)))^{\prime},f(\operatorname{Rev}(xg(x)))\right)$ is a product of an element of the derivative subgroup times an element of the associated subgroup. ###### Corollary 9. The inversion of the Appell array $(g(x),x)$ has its general $(n,k)$-th term given by $\hat{t}_{n,k}=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}\left(\left(\operatorname{Rev}(xg(x))\right)^{\prime},\operatorname{Rev}(xg(x))\right)_{n,k}.$ We have $\left(\left(\operatorname{Rev}(xg(x))\right)^{\prime},\operatorname{Rev}(xg(x))\right)=\left((xg(x))^{\prime},xg(x)\right)^{-1}.$ ###### Corollary 10. The $(n,k)$-th element of the inversion of the Lagrange array $(1,f(x))=(t_{n,k})$ is given by $\hat{t}_{n,k}=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}t_{n,k}.$ This follows since $g(x)=1$ means that the array $((xg(x))^{\prime},xg(x))=(1,x)$ and so its inverse matrix is also the identity matrix. In order to characterize the inversion of a Bell matrix, we need to review the definition of an exponential Riordan array. Such an array $[u,v]$ is given by two power series $(u,v)$ drawn respectively from $\mathcal{F}_{0}^{e}=\\{u_{0}+u_{1}\frac{x}{1!}+u_{2}\frac{x^{2}}{2!}+\cdots\,|u_{i}\in\mathbf{R},u_{0}\neq 0\\},$ and $\mathcal{F}_{1}^{e}=\\{v_{1}\frac{x}{1!}+v_{2}\frac{x^{2}}{2!}+\cdots\,|v_{i}\in\mathbf{R},v_{1}\neq 0\\}.$ The $(n,k)$-th element of the exponential Riordan array $[u,v]$ is then given by $\frac{n!}{k!}[x^{n}]u(x)v(x)^{k}.$ Given an ordinary power series $g(x)\in\mathcal{F}_{0}$, with $g(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots$, we shall write $g_{e}(x)=a_{0}+a_{1}\frac{x}{1!}+a_{2}\frac{x^{2}}{2!}+\cdots.$ Thus $g_{e}(x)\in\mathcal{F}_{0}^{e}$. (We have $g_{e}(t)=\mathcal{L}^{-1}\left\\{\frac{1}{s}g\left(\frac{1}{s}\right)\right\\}(t)$, where $\mathcal{L}$ is the Laplace transform). ###### Proposition 11.
The inversion of the Bell matrix $(g(x),xg(x))$ is given by the exponential Riordan array $\left[\left(\frac{1}{x}\operatorname{Rev}(xg(x))\right)_{e},-x\right].$ The expression $\frac{1}{x}\operatorname{Rev}(xg(x))$ is the revert transform of $g(x)$, and hence we can also write this as $\left[\left(\operatorname{revert}(g(x))\right)_{e},-x\right].$ ###### Proof. The $(n,k)$-th element of the exponential Riordan array $\left[\left(\frac{1}{x}\operatorname{Rev}(xg(x))\right)_{e},-x\right]$ is given by $\frac{n!}{k!}[x^{n}]\left(\frac{1}{x}\operatorname{Rev}(xg)\right)_{e}(-x)^{k}.$ We then have $\displaystyle\frac{n!}{k!}[x^{n}]\left(\frac{1}{x}\operatorname{Rev}(xg)\right)_{e}(-x)^{k}$ $\displaystyle=(-1)^{k}\frac{n!}{k!}[x^{n-k}]\left(\frac{1}{x}\operatorname{Rev}(xg)\right)_{e}$ $\displaystyle=(-1)^{k}\frac{n!}{k!}[x^{n-k}]\sum_{j=0}^{\infty}[x^{j}]\left(\frac{1}{x}\operatorname{Rev}(xg)\right)\frac{x^{j}}{j!}$ $\displaystyle=(-1)^{k}\frac{n!}{k!}\frac{1}{(n-k)!}[x^{n-k}]\left(\frac{1}{x}\operatorname{Rev}(xg)\right)$ $\displaystyle=(-1)^{k}\binom{n}{k}[x^{n-k}]\left(\frac{1}{x}\operatorname{Rev}(xg)\right)$ $\displaystyle=(-1)^{k}\binom{n}{k}[x^{n-k+1}]\operatorname{Rev}(xg)$ $\displaystyle=(-1)^{k}\binom{n}{k}\frac{1}{n-k+1}[x^{n-k}]\left(\frac{1}{g(x)}\right)^{n-k+1}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}]x^{k}g(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}](xg(x))^{k}\left(\frac{1}{g(x)}\right)^{n+1}.$ This last expression is the $(n,k)$-th element of the inversion of $(g(x),xg(x))$. ∎ This is an interesting example where an ordinary Riordan array is directly linked to an exponential Riordan array. ###### Example 12. Chebyshev polynomials of the second kind. The coefficient array of the scaled Chebyshev polynomials of the second kind $U_{n}(x/2)$ is given by the Bell matrix $\left(\frac{1}{1+x^{2}},\frac{x}{1+x^{2}}\right)$ [2]. This matrix A049310 begins $\left(\begin{array}[]{ccccccc}1&0&0&0&0&0&0\\\ 0&1&0&0&0&0&0\\\ -1&0&1&0&0&0&0\\\ 0&-2&0&1&0&0&0\\\ 1&0&-3&0&1&0&0\\\ 0&3&0&-4&0&1&0\\\ -1&0&6&0&-5&0&1\\\ \end{array}\right).$ The inverse of this matrix is the matrix A053121 that begins $\left(\begin{array}[]{ccccccc}1&0&0&0&0&0&0\\\ 0&1&0&0&0&0&0\\\ 1&0&1&0&0&0&0\\\ 0&2&0&1&0&0&0\\\ 2&0&3&0&1&0&0\\\ 0&5&0&4&0&1&0\\\ 5&0&9&0&5&0&1\\\ \end{array}\right).$ By the result above, we have $\left(\frac{1}{1+x^{2}},\frac{x}{1+x^{2}}\right)^{!}=\left[\frac{I_{1}(2x)}{x},-x\right].$ This inversion matrix begins $\left(\begin{array}[]{ccccccc}1&0&0&0&0&0&0\\\ 0&-1&0&0&0&0&0\\\ 1&0&1&0&0&0&0\\\ 0&-3&0&-1&0&0&0\\\ 2&0&6&0&1&0&0\\\ 0&-10&0&-10&0&-1&0\\\ 5&0&30&0&15&0&1\\\ \end{array}\right).$ In absolute value, this matrix is A097610, which counts Motzkin paths of length $n$ with $k$ horizontal steps. Because of the reversibility of the inversion process, we can also start with an exponential Riordan matrix of the form $[g_{e}(x),-x]$ and derive the (ordinary) Bell matrix for which it is the inversion. ###### Example 13. We start with the exponential Riordan array $\left[\frac{1}{1-x},-x\right]$. This matrix begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&-1&0&0&0&0\\\ 2&-2&1&0&0&0\\\ 6&-6&3&-1&0&0\\\ 24&-24&12&-4&1&0\\\ 120&-120&60&-20&5&-1\\\ \end{array}\right).$ This is a signed version of A094587, which counts permutations on $n$ letters with exactly $k+1$ cycles and with the first $k+1$ letters in separate cycles.
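The exponential analogue of the earlier matrix sketch is equally short; run on $[1/(1-x),-x]$ it regenerates the signed triangle just displayed (again an illustrative computation of ours; the helper name `exp_riordan_matrix` is our own).

```python
# Illustrative sketch: truncate an exponential Riordan array [u, v]
# via the formula (n!/k!) [x^n] u(x) v(x)^k.
import sympy as sp

x = sp.symbols('x')

def exp_riordan_matrix(u, v, size=6):
    return sp.Matrix(size, size,
                     lambda n, k: sp.factorial(n) / sp.factorial(k)
                     * sp.series(u * v**k, x, 0, size).removeO().coeff(x, n))

# The array [1/(1-x), -x] of Example 13:
print(exp_riordan_matrix(1/(1 - x), -x))
```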
Reversing our steps from above, we find that the Riordan array for which this is the inversion is the Riordan array $\left(\sum_{n=0}^{\infty}n!x^{n},x\sum_{n=0}^{\infty}n!x^{n}\right)^{-1}.$ This array begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -1&1&0&0&0&0\\\ 0&-2&1&0&0&0\\\ -1&1&-3&1&0&0\\\ -4&-2&3&-4&1&0\\\ -22&-6&-4&6&-5&1\\\ \end{array}\right).$ Its inverse $\left(\sum_{n=0}^{\infty}n!x^{n},x\sum_{n=0}^{\infty}n!x^{n}\right)$ begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 2&2&1&0&0&0\\\ 6&5&3&1&0&0\\\ 24&16&9&4&1&0\\\ 120&64&31&14&5&1\\\ \end{array}\right).$ The closely related Riordan array $\left(1,x\sum_{n=0}^{\infty}n!x^{n}\right)$ is A084938. The unsigned sequence $1,1,0,1,4,22,144,1089,9308,88562,\ldots$ counts the number of generators in arity $n$ of the operad _Lie_ , when considered as a free non-symmetric operad [13]. ## 4 Duality and Riordan involutions We say that a Riordan array $(g(x),f(x))$ is self-dual if we have $(g(x),f(x))^{!}=(g(x),f(x)).$ ###### Example 14. We consider the Riordan array $\left(-\frac{1}{1+x},\frac{x}{1+x}\right)$ which begins $\left(\begin{array}[]{cccccc}-1&0&0&0&0&0\\\ 1&-1&0&0&0&0\\\ -1&2&-1&0&0&0\\\ 1&-3&3&-1&0&0\\\ -1&4&-6&4&-1&0\\\ 1&-5&10&-10&5&-1\\\ \end{array}\right).$ Its generating function is given by $\frac{-\frac{1}{1+x}}{1-y\frac{x}{1+x}}=\frac{-1}{1-x(y-1)}.$ In order to find the required inversion, we must therefore solve the equation $\frac{-u}{1-u(y-1)}=x.$ The solution gives us $\frac{u}{x}=\frac{-1}{1-x(y-1)}.$ Thus we have $\left(-\frac{1}{1+x},\frac{x}{1+x}\right)^{!}=\left(-\frac{1}{1+x},\frac{x}{1+x}\right).$ The Riordan array $\left(-\frac{1}{1+x},\frac{x}{1+x}\right)$ is an example of a Riordan array that is self-dual. ###### Example 15. Our next example concerns the _cubical trialgebra operad_ [8]. The generating series of this operad is the generating series of the family of cubes $\frac{-x}{1+(y+2)x}$. This is $x$ times the generating function of the Riordan array $\left(\frac{-1}{1+2x},\frac{-x}{1+2x}\right)$. In order to find its inversion, we solve the equation $\frac{-u}{1+(y+2)u}=x,$ to find that $u=\frac{-x}{1+(y+2)x}.$ Thus we have that $\left(\frac{-1}{1+2x},\frac{-x}{1+2x}\right)^{!}=\left(\frac{-1}{1+2x},\frac{-x}{1+2x}\right).$ That is, the Riordan array $\left(\frac{-1}{1+2x},\frac{-x}{1+2x}\right)$ is self-dual. It is also an involution in the group of Riordan arrays: we have $\left(\frac{-1}{1+2x},\frac{-x}{1+2x}\right)^{2}=I=(1,x).$ In general, we have $\left(\frac{-1}{1+rx},\frac{-x}{1+rx}\right)^{!}=\left(\frac{-1}{1+rx},\frac{-x}{1+rx}\right),$ since the solution of $\frac{-u}{1+(y+r)u}=x,$ with $u(0)=0$ is given by $u=\frac{-x}{1+(y+r)x}.$ These matrices are again involutions in the group of Riordan arrays: $\left(\frac{-1}{1+rx},\frac{-x}{1+rx}\right)^{2}=I.$ ## 5 Inversions of one-parameter families In this section we investigate the inversions of elements of one-parameter families of Riordan arrays. ###### Example 16. The Riordan arrays $\mathbf{(1+rx,x)}$. 
The general element of this family begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ r&1&0&0&0&0\\\ 0&r&1&0&0&0\\\ 0&0&r&1&0&0\\\ 0&0&0&r&1&0\\\ 0&0&0&0&r&1\\\ \end{array}\right).$ This array has generating function $\frac{(1+rx)}{1-xy}$, from which we deduce that the generating function of the inversion is given by $\frac{\sqrt{1+2(y+2r)x+x^{2}y^{2}}-xy-1}{2rx}.$ This expands to give the arrays that begin $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -r&-1&0&0&0&0\\\ 2r^{2}&3r&1&0&0&0\\\ -5r^{3}&-10r^{2}&-6r&-1&0&0\\\ 14r^{4}&35r^{3}&30r^{2}&10r&1&0\\\ -42r^{5}&-126r^{4}&-140r^{3}&-70r^{2}&-15r&-1\\\ \end{array}\right).$ In order to find the general $(n,k)$-th element of this matrix, we take $g(x)=1+rx$ and $f(x)=x$ in the formula $\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}.$ We have $\displaystyle[x^{n}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}$ $\displaystyle=[x^{n}]x^{k}\left(\frac{1}{1+rx}\right)^{n+1}$ $\displaystyle=[x^{n-k}]\sum_{j=0}^{\infty}\binom{-(n+1)}{j}r^{j}x^{j}$ $\displaystyle=[x^{n-k}]\sum_{j=0}^{\infty}\binom{n+j}{j}(-r)^{j}x^{j}$ $\displaystyle=\binom{n+n-k}{n-k}(-r)^{n-k}$ $\displaystyle=\binom{2n-k}{n-k}(-r)^{n-k}.$ Thus we obtain $\hat{t}_{n,k}=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}\binom{2n-k}{n-k}(-r)^{n-k}.$ We therefore have $(1+rx,x)^{!}=\left(\frac{(-1)^{k}}{n+1}\binom{n+1}{k}\binom{2n-k}{n-k}(-r)^{n-k}\right).$ The matrix $\left((\operatorname{Rev}(xg(x)))^{\prime},\operatorname{Rev}(xg(x))\right)$ in this case is the matrix $\left(\frac{1}{\sqrt{1+4rx}},\frac{\sqrt{1+4rx}-1}{2r}\right).$ This therefore has general term $\binom{2n-k}{n-k}(-r)^{n-k}$ and begins $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ -2r&1&0&0&0\\\ 6r^{2}&-3r&1&0&0\\\ -20r^{3}&10r^{2}&-4r&1&0\\\ 70r^{4}&-35r^{3}&15r^{2}&-5r&1\\\ \end{array}\right).$ The row sums of the inversion $\sum_{k=0}^{n}\frac{(-1)^{k}}{n+1}\binom{n+1}{k}\binom{2n-k}{n-k}(-r)^{n-k}$, which begin $1,-r-1,2r^{2}+3r+1,-5r^{3}-10r^{2}-6r-1,14r^{4}+35r^{3}+30r^{2}+10r+1,\ldots$ are the revert transform of the row sums of $(1+rx,x)$, or $1,r+1,r+1,r+1,\ldots.$ The generating function of the inversion may be expressed as the continued fraction $\cfrac{1}{1+(y+r)x-\cfrac{r(y+r)x^{2}}{1+(y+2r)x-\cfrac{r(y+r)x^{2}}{1+(y+2r)x-\cdots}}}.$ The unsigned triangle $\frac{1}{n+1}\binom{n+1}{k}\binom{2n-k}{n-k}$, which begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 2&3&1&0&0&0\\\ 5&10&6&1&0&0\\\ 14&35&30&10&1&0\\\ 42&126&140&70&15&1\\\ \end{array}\right),$ counts Schroeder paths from $(0,0)$ to $(2n,0)$ having $k$ peaks. This is A060693. It is the inversion of the Riordan array $(1-x,-x)$ which begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -1&-1&0&0&0&0\\\ 0&1&1&0&0&0\\\ 0&0&-1&-1&0&0\\\ 0&0&0&1&1&0\\\ 0&0&0&0&-1&-1\\\ \end{array}\right).$ It follows that the matrix A088617 $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&3&2&0&0&0\\\ 1&6&10&5&0&0\\\ 1&10&30&35&14&0\\\ 1&15&70&140&126&42\\\ \end{array}\right),$ which counts Schroeder paths from $(0,0)$ to $(2n,0)$ with $k$ up-steps $U=(1,1)$ is the inversion of the triangle that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -1&-1&0&0&0&0\\\ 1&1&0&0&0&0\\\ -1&-1&0&0&0&0\\\ 1&1&0&0&0&0\\\ -1&-1&0&0&0&0\\\ \end{array}\right).$ ###### Example 17.
We now turn to examine the inversions of the matrices of the form $\left(1-\frac{rx}{1+x},-x\right)=\left(\frac{1-x(r-1)}{1+x},-x\right)$, which begin $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -r&-1&0&0&0&0\\\ r&r&1&0&0&0\\\ -r&-r&-r&-1&0&0\\\ r&r&r&r&1&0\\\ -r&-r&-r&-r&-r&-1\\\ \end{array}\right).$ The inversion may be found by solving the equation $\frac{u(1-u(r-1))}{(1+u)(1+uy)}=x$ to get $\frac{u}{x}=\frac{1-(1+y)x-\sqrt{1-2(y+2r-1)x+(1-y)^{2}x^{2}}}{2x(xy+r-1)}.$ We obtain the family of matrices that begin $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ r&1&0&0&0\\\ r(2r-1)&3r&1&0&0\\\ r\left(5r^{2}-5r+1\right)&2r(5r-2)&6r&1&0\\\ r\left(14r^{3}-21r^{2}+9r-1\right)&5r\left(7r^{2}-6r+1\right)&10r(3r-1)&10r&1\\\ \end{array}\right).$ For $r=-2,\ldots,2$ we obtain the matrices $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ -2&1&0&0&0\\\ 10&-6&1&0&0\\\ -62&48&-12&1&0\\\ 430&-410&140&-20&1\\\ \end{array}\right),\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ -1&1&0&0&0\\\ 3&-3&1&0&0\\\ -11&14&-6&1&0\\\ 45&-70&40&-10&1\\\ \end{array}\right),\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 0&1&0&0&0\\\ 0&0&1&0&0\\\ 0&0&0&1&0\\\ 0&0&0&0&1\\\ \end{array}\right),$ $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 1&1&0&0&0\\\ 1&3&1&0&0\\\ 1&6&6&1&0\\\ 1&10&20&10&1\\\ \end{array}\right),\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 2&1&0&0&0\\\ 6&6&1&0&0\\\ 22&32&12&1&0\\\ 90&170&100&20&1\\\ \end{array}\right).$ We find that the general $(n,k)$-th term of the inversion matrix is given by $\hat{t}_{n,k}=\frac{1}{n+1}\binom{n+1}{k}\sum_{j=0}^{n+1}\binom{n+1}{j}\binom{2n-k-j}{n-k-j}(r-1)^{n-k-j}.$ The bivariate generating function of the inversion matrix may be expressed as the continued fraction $\cfrac{1}{1-(y+r)x-\cfrac{r(y+r-1)x^{2}}{1-(y+2r-1)x-\cfrac{r(y+r-1)x^{2}}{1-(y+2r-1)x-\cdots}}}.$ The generating function of the initial column of the inversion matrix is given by $\frac{\sqrt{(1+x)^{2}-4rx}+x-1}{2(1-r)x}=\frac{1}{1-x}c\left(\frac{(r-1)x}{(1-x)^{2}}\right),$ where $c(x)=\frac{1-\sqrt{1-4x}}{2x}$ is the generating function of the Catalan numbers $C_{n}=\frac{1}{n+1}\binom{2n}{n}$ A000108. The generating function of the row sums of the inversion is given by $\frac{1-2x-\sqrt{1-4rx}}{2(x+r-1)x}.$ This expands to give the sequence with general term $\sum_{k=0}^{n}\frac{n-k+1}{n+1}\binom{n+k}{k}r^{k}.$ The coefficient array $\frac{n-k+1}{n+1}\binom{n+k}{k}$ is the array that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&2&2&0&0&0\\\ 1&3&5&5&0&0\\\ 1&4&9&14&14&0\\\ 1&5&14&28&42&42\\\ \end{array}\right).$ This is A009766, which has many combinatorial interpretations. The matrix $\left((\operatorname{Rev}(xg(x)))^{\prime},\operatorname{Rev}(xg(x))\right)$ is the matrix $\left(\frac{1-2r+x+\sqrt{1+2x(1-2r)+x^{2}}}{2(1-r)\sqrt{1+2x(1-2r)+x^{2}}},\frac{\sqrt{1+2(1-2r)x+x^{2}}+x-1}{2(1-r)}\right),$ with general term $\sum_{j=0}^{n+1}\binom{n+1}{j}\binom{2n-k-j}{n-k-j}(r-1)^{n-k-j}.$ When $r=2$, the row sums of this matrix give the sequence that begins $1,5,25,129,681,3653,19825,108545,598417,\ldots.$ This counts the number of peaks in Schroeder paths (A002002). ###### Example 18. The Riordan array $\mathbf{\left(\frac{1}{(1-x)^{m}},x\right)}$. In this example we calculate $\left(\frac{1}{(1-x)^{m}},x\right)^{!}$.
We have $\displaystyle\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}]x^{k}((1-x)^{m})^{n+1}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n-k}](1-x)^{m(n+1)}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n-k}]\sum_{j=0}^{(n+1)m}\binom{(n+1)m}{j}(-1)^{j}x^{j}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}\binom{(n+1)m}{n-k}(-1)^{n-k}$ $\displaystyle=\frac{(-1)^{n}}{n+1}\binom{n+1}{k}\binom{(n+1)m}{n-k}.$ Thus we have $\left(\frac{1}{(1-x)^{m}},x\right)^{!}=\left(\frac{(-1)^{n}}{n+1}\binom{n+1}{k}\binom{(n+1)m}{n-k}\right).$ The numbers $\frac{1}{n+1}\binom{n+1}{k}\binom{(n+1)m}{n-k}$ are the Fuss–Narayana numbers, coefficients of the Fuss–Narayana polynomials [1, 6]. The previous examples involved Riordan arrays for which $f(x)=x$ or $f(x)=-x$. We now look at a case where $f(x)$ is non-trivial. ###### Example 19. In this example, we consider the family of Pascal-like triangles $\left(\frac{1}{1-x},\frac{x(1+rx)}{1-x}\right)$. For instance, when $r=0$ we obtain the binomial matrix, while for $r=1$ we obtain the Delannoy triangle. We will actually work with the variant $\left(\frac{1}{1+x},-\frac{x(1+rx)}{1+x}\right)$. In order to find the inversion of these matrices, we must therefore solve the equation $\frac{u}{1+(1+y)u+ryu^{2}}=x.$ We thus obtain the generating function of the inversion as $\frac{u}{x}=\frac{1-(1+y)x-\sqrt{1-2(1+y)x+(1+2y(1-2r)+y^{2})x^{2}}}{2rx^{2}y}.$ This can be equivalently represented by the continued fraction $\cfrac{1}{1-(1+y)x-\cfrac{ryx^{2}}{1-(1+y)x-\cfrac{ryx^{2}}{1-(1+y)x-\cdots}}}.$ This generating function expands to give the inversion triangles of this family, which begin $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&r+2&1&0&0&0\\\ 1&3(r+1)&3(r+1)&1&0&0\\\ 1&2(3r+2)&2\left(r^{2}+6r+3\right)&2(3r+2)&1&0\\\ 1&5(2r+1)&10\left(r^{2}+3r+1\right)&10\left(r^{2}+3r+1\right)&5(2r+1)&1\\\ \end{array}\right).$ For $r=-2,\ldots,2$ we obtain the number triangles that begin as follows. $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&0&1&0&0&0\\\ 1&-3&-3&1&0&0\\\ 1&-8&-10&-8&1&0\\\ 1&-15&-10&-10&-15&1\\\ \end{array}\right),\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&1&1&0&0&0\\\ 1&0&0&1&0&0\\\ 1&-2&-4&-2&1&0\\\ 1&-5&-10&-10&-5&1\\\ \end{array}\right),\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&2&1&0&0&0\\\ 1&3&3&1&0&0\\\ 1&4&6&4&1&0\\\ 1&5&10&10&5&1\\\ \end{array}\right),$ $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&3&1&0&0&0\\\ 1&6&6&1&0&0\\\ 1&10&20&10&1&0\\\ 1&15&50&50&15&1\\\ \end{array}\right),\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 1&1&0&0&0&0\\\ 1&4&1&0&0&0\\\ 1&9&9&1&0&0\\\ 1&16&38&16&1&0\\\ 1&25&110&110&25&1\\\ \end{array}\right).$ We can regard these arrays as Pascal-like generalizations of the Narayana triangle, which is the case $r=1$. It is interesting to see that the binomial matrix belongs to this one-parameter family. The row sums, which begin $1,2,r+4,6r+8,2r^{2}+24r+16,20r^{2}+80r+32,5r^{3}+120r^{2}+240r+64,\ldots,$ have exponential generating function $\frac{e^{2x}I_{1}(2\sqrt{r}x)}{\sqrt{r}x}.$ For $r=1$, we get $C_{n+1}$. For $r=5$, the sequence begins $1,2,9,38,186,932,4889,\ldots.$ This is A249925. Interestingly, this is equal to $\sum_{k=0}^{n}C_{k}C_{n-k}F_{k+1}F_{n-k+1}$, where $F_{n}$ are the Fibonacci numbers.
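This last identity is easy to verify numerically; the following short check (ours, purely illustrative) uses the terms of A249925 quoted above.

```python
# Illustrative numeric check: the terms 1, 2, 9, 38, 186, 932, 4889 of A249925
# quoted above agree with sum_{k} C_k C_{n-k} F_{k+1} F_{n-k+1}.
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

fib = [0, 1]                     # F_0 = 0, F_1 = 1
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])

a249925 = [1, 2, 9, 38, 186, 932, 4889]
check = [sum(catalan(k) * catalan(n - k) * fib[k + 1] * fib[n - k + 1]
             for k in range(n + 1)) for n in range(7)]
assert check == a249925
print("identity verified for n = 0, ..., 6")
```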
In general, the row sums are given by $\sum_{k=0}^{\lfloor\frac{n}{2}\rfloor}\binom{n}{2k}2^{n-2k}C_{k}r^{k}.$ The generating function of the row sums can be expressed as the following continued fraction. $\cfrac{1}{1-2x-\cfrac{rx^{2}}{1-2x-\cfrac{rx^{2}}{1-2x-\cdots}}}.$ In order to find an expression for the general $(n,k)$-th term of the inversion triangle, we calculate the following. $\displaystyle[x^{n}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}$ $\displaystyle=[x^{n}]\left(\frac{-x(1+rx)}{1+x}\right)^{k}(1+x)^{n+1}$ $\displaystyle=(-1)^{k}[x^{n-k}](1+rx)^{k}(1+x)^{n-k+1}$ $\displaystyle=(-1)^{k}[x^{n-k}]\sum_{j=0}^{k}\binom{k}{j}r^{j}x^{j}\sum_{i=0}^{n-k+1}\binom{n-k+1}{i}x^{i}$ $\displaystyle=(-1)^{k}\sum_{j=0}^{k}\binom{k}{j}r^{j}\binom{n-k+1}{n-k-j}.$ We thus obtain the general $(n,k)$-th element of the inversion to be $\hat{t}_{n,k}=\frac{1}{n+1}\binom{n+1}{k}\sum_{j=0}^{k}\binom{k}{j}r^{j}\binom{n-k+1}{n-k-j}.$ The inverse binomial transform of the inversion matrix is the matrix that begins $\left(\begin{array}[]{ccccccc}1&0&0&0&0&0&0\\\ 0&1&0&0&0&0&0\\\ 0&r&1&0&0&0&0\\\ 0&0&3r&1&0&0&0\\\ 0&0&2r^{2}&6r&1&0&0\\\ 0&0&0&10r^{2}&10r&1&0\\\ 0&0&0&5r^{3}&30r^{2}&15r&1\\\ \end{array}\right),$ with general term $\frac{r^{n-k}}{k+1}\binom{n}{k}\binom{k+1}{n-k+1}.$ The row sums of the second inverse binomial transform of the inversion matrix begin $1,0,r,0,2r^{2},0,5r^{3},0,14r^{4},0,42r^{5},\ldots$ making the link to the Catalan numbers explicit. ###### Example 20. Our last example of this section calculates the inversions $(\hat{t}_{n,k})$ of the Riordan arrays $\left(\frac{1}{(1-x)^{m}},\frac{x}{1-x}\right)=\left(\binom{n+m-1}{n-k}\right).$ We have $\displaystyle\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}]f(x)^{k}\left(\frac{1}{g(x)}\right)^{n+1}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n}]\frac{x^{k}}{(1-x)^{k}}((1-x)^{m})^{n+1}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n-k}](1-x)^{m(n+1)-k}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}[x^{n-k}]\sum_{j=0}^{\infty}\binom{m(n+1)-k}{j}(-1)^{j}x^{j}$ $\displaystyle=\frac{(-1)^{k}}{n+1}\binom{n+1}{k}\binom{m(n+1)-k}{n-k}(-1)^{n-k}$ $\displaystyle=\frac{(-1)^{n}}{n+1}\binom{n+1}{k}\binom{m(n+1)-k}{n-k}.$ Thus we have $\left(\frac{1}{(1-x)^{m}},\frac{x}{1-x}\right)^{!}=\left(\binom{n+m-1}{n-k}\right)^{!}=\left(\frac{(-1)^{n}}{n+1}\binom{n+1}{k}\binom{m(n+1)-k}{n-k}\right).$ We can also deduce that $\left(\binom{m(n+1)-k}{n-k}\right)=\left(\left(\operatorname{Rev}\left(\frac{x}{(1-x)^{m}}\right)\right)^{\prime},\frac{\operatorname{Rev}\left(\frac{x}{(1-x)^{m}}\right)}{1-\operatorname{Rev}\left(\frac{x}{(1-x)^{m}}\right)}\right).$ In particular, we obtain $\binom{m(n+1)}{n}=[x^{n}]\left(\operatorname{Rev}\left(\frac{x}{(1-x)^{m}}\right)\right)^{\prime}=(n+1)[x^{n+1}]\operatorname{Rev}\left(\frac{x}{(1-x)^{m}}\right),$ or $\frac{1}{n+1}\binom{m(n+1)}{n}=[x^{n+1}]\operatorname{Rev}\left(\frac{x}{(1-x)^{m}}\right).$ The following table documents these inversions for $m=-1,\ldots,4$ [12].
$m$ | $\left(\binom{n+m-1}{n-k}\right)$ | $\left(\binom{n+m-1}{n-k}\right)^{!}=\left(\frac{(-1)^{n}}{n+1}\binom{n+1}{k}\binom{m(n+1)-k}{n-k}\right)$ ---|---|--- $-1$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ -1&1&0&0&0\\\ 0&0&1&0&0\\\ 0&0&1&1&0\\\ 0&0&1&2&1\\\ \end{array}\right)$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 1&-1&0&0&0\\\ 2&-4&1&0&0\\\ 5&-15&9&-1&0\\\ 14&-56&56&-16&1\\\ \end{array}\right)$ $0$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 0&1&0&0&0\\\ 0&1&1&0&0\\\ 0&1&2&1&0\\\ 0&1&3&3&1\\\ \end{array}\right)$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 0&-1&0&0&0\\\ 0&-1&1&0&0\\\ 0&-1&3&-1&0\\\ 0&-1&6&-6&1\\\ \end{array}\right)$ $1$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 1&1&0&0&0\\\ 1&2&1&0&0\\\ 1&3&3&1&0\\\ 1&4&6&4&1\\\ \end{array}\right)$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ -1&-1&0&0&0\\\ 1&2&1&0&0\\\ -1&-3&-3&-1&0\\\ 1&4&6&4&1\\\ \end{array}\right)$ $2$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 2&1&0&0&0\\\ 3&3&1&0&0\\\ 4&6&4&1&0\\\ 5&10&10&5&1\\\ \end{array}\right)$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ -2&-1&0&0&0\\\ 5&5&1&0&0\\\ -14&-21&-9&-1&0\\\ 42&84&56&14&1\\\ \end{array}\right)$ $3$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 3&1&0&0&0\\\ 6&4&1&0&0\\\ 10&10&5&1&0\\\ 15&20&15&6&1\\\ \end{array}\right)$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ -3&-1&0&0&0\\\ 12&8&1&0&0\\\ -55&-55&-15&-1&0\\\ 273&364&156&24&1\\\ \end{array}\right)$ $4$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 4&1&0&0&0\\\ 10&5&1&0&0\\\ 20&15&6&1&0\\\ 35&35&21&7&1\\\ \end{array}\right)$ | $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ -4&-1&0&0&0\\\ 22&11&1&0&0\\\ -140&-105&-21&-1&0\\\ 969&969&306&34&1\\\ \end{array}\right)$ The last two triangles on the right are signed versions of A243662 and A243663 [12]. ## 6 The inversion of exponential Riordan arrays In this section we give a brief introduction to the inversion of exponential Riordan arrays. We begin the section by looking at the example of the binomial matrix, but this time, regarded as the exponential Riordan array $[e^{x},x]$. ###### Example 21. We recall that the exponential Riordan array $[e^{x},x]$ has general element given by $\binom{n}{k}$. This is so because we have $\displaystyle\frac{n!}{k!}[x^{n}]e^{x}x^{k}$ $\displaystyle=\frac{n!}{k!}[x^{n-k}]\sum_{i=0}^{\infty}\frac{x^{i}}{i!}$ $\displaystyle=\frac{n!}{k!}\frac{1}{(n-k)!}$ $\displaystyle=\binom{n}{k}.$ By the theory of exponential Riordan arrays, the bivariate generating function of this array is given by $G_{e}(x,y)=e^{x}e^{xy}=e^{x+xy}=e^{x(1+y)}.$ The revert transform (in $x$) of $G_{e}(x,y)$ is given by $\frac{d}{dx}\operatorname{Rev}\left(\int_{0}^{x}G_{e}(t,y)\,dt\right)$. We have that $\int_{0}^{x}e^{t(1+y)}\,dt=\frac{e^{x(1+y)}-1}{1+y}.$ To carry out the inversion, we must then solve the equation $\frac{e^{u(1+y)}-1}{1+y}=x$ to find the solution $u(x)$ that satisfies $u(0)=0$. We find that $u=\frac{\ln(1+x(1+y))}{1+y}.$ We then differentiate this (with respect to $x$) to get the revert transform of $G_{e}(x,y)$. We find that $\hat{G}_{e}(x,y)=\frac{1}{1+x(1+y)}.$ This is the generating function of the array that begins $\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ -1&-1&0&0&0&0\\\ 2&4&2&0&0&0\\\ -6&-18&-18&-6&0&0\\\ 24&96&144&96&24&0\\\ -120&-600&-1200&-1200&-600&-120\\\ \end{array}\right).$ This is the inversion of the exponential Riordan array $[e^{x},x]$. The general element of this matrix is $(-1)^{n}n!\binom{n}{k}$. The triangle is a signed version of A196347. ###### Example 22.
We consider the inversion of the exponential Riordan array $[\cosh(x),x]$. The array $[\cosh(x),x]$ A119467 begins $\left(\begin{array}[]{ccccccc}1&0&0&0&0&0&0\\\ 0&1&0&0&0&0&0\\\ 1&0&1&0&0&0&0\\\ 0&3&0&1&0&0&0\\\ 1&0&6&0&1&0&0\\\ 0&5&0&10&0&1&0\\\ 1&0&15&0&15&0&1\\\ \end{array}\right).$ We have $G_{e}(x,y)=\cosh(x)e^{xy}$, and $\int_{0}^{x}G_{e}(t,y)\,dt=e^{xy}\left(\frac{e^{x}}{2(1+y)}-\frac{e^{-x}}{2(1-y)}\right)+\frac{y}{1-y^{2}}.$ Unfortunately, there is no closed-form solution to the equation $e^{uy}\left(\frac{e^{u}}{2(1+y)}-\frac{e^{-u}}{2(1-y)}\right)+\frac{y}{1-y^{2}}=x.$ Nevertheless, we can calculate as many rows of the inversion of the above matrix as we like by calculating the first column of the inverse of the exponential Riordan array $\left[1,e^{xy}\left(\frac{e^{x}}{2(1+y)}-\frac{e^{-x}}{2(1-y)}\right)+\frac{y}{1-y^{2}}\right].$ For instance, we have $\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 0&1&0&0&0\\\ 0&y&1&0&0\\\ 0&y^{2}+1&3y&1&0\\\ 0&y\left(y^{2}+3\right)&7y^{2}+4&6y&1\\\ \end{array}\right)^{-1}=\left(\begin{array}[]{ccccc}1&0&0&0&0\\\ 0&1&0&0&0\\\ 0&-y&1&0&0\\\ 0&2y^{2}-1&-3y&1&0\\\ 0&y\left(7-6y^{2}\right)&11y^{2}-4&-6y&1\\\ \end{array}\right).$ Thus we expand the polynomial sequence $1,-y,2y^{2}-1,y(7-6y^{2}),\ldots$ to obtain the rows of the inversion of $[\cosh(x),x]$. Thus the matrix $[\cosh(x),x]^{!}$ begins $\left(\begin{array}[]{ccccccc}1&0&0&0&0&0&0\\\ 0&-1&0&0&0&0&0\\\ -1&0&2&0&0&0&0\\\ 0&7&0&-6&0&0&0\\\ 9&0&-46&0&24&0&0\\\ 0&-159&0&326&0&-120&0\\\ -225&0&2134&0&-2556&0&720\\\ \end{array}\right).$ The first column $1,0,-1,0,9,0,-225,\ldots$ is then the exponential revert transform of the sequence $1,0,1,0,1,0,1,0,\ldots$ which is the expansion of $\cosh(x)$. The numbers occurring in this reversion are the squares of the double factorial numbers. The row sums of the inversion, which begin $1,-1,1,1,-13,47,73,\ldots,$ are the exponential revert transform of the row sums $1,1,2,4,8,16,\ldots$ of $[\cosh(x),x]$. This latter sequence has exponential generating function $e^{x}\cosh(x)$. To find the revert transform of this, we proceed as follows. First, we calculate $\int_{0}^{x}e^{t}\cosh(t)\,dt=\frac{1}{4}\left(e^{2x}+2x-1\right).$ We then solve the equation $\frac{1}{4}\left(e^{2u}+2u-1\right)=x$ to obtain the solution $u(x)$ such that $u(0)=0$. We obtain $u(x)=\frac{1}{2}\left(-W\left(e^{4x+1}\right)+4x+1\right).$ The revert transform we seek is then the derivative of this. Thus the row sums $1,-1,1,1,-13,47,73,-2447,16811,15551,-1726511,\ldots$ of the inversion of $[\cosh(x),x]$ have generating function $\frac{1}{2}\left(4-\frac{4W\left(e^{4x+1}\right)}{W\left(e^{4x+1}\right)+1}\right).$ These terms coincide with the coefficients of Airey’s converging factor A001662 [4]. Sergei N. Gladkovskii gives the following continued fraction expression for the generating function of the revert transform of $e^{x}\cosh(x)$: $\cfrac{1}{1+\cfrac{x}{1-2x+\cfrac{2x}{1-4x+\cfrac{3x}{1-6x+\cfrac{4x}{1-8x+\cdots}}}}}.$ ## 7 Conclusions In this note, we have defined the notion of the inversion of a Riordan array, along with methods for its construction. Examples show that the process of going from a Riordan array to its inversion leads to many number triangles and sequences of combinatorial importance. In the reverse direction, it is probable that many number triangles of combinatorial significance are the inversions of Riordan arrays, or arrays closely related to Riordan arrays.
It is hoped that this interplay between Riordan arrays and their inversions will provide extra insight in the context of algebraic combinatorics and related fields. ## References * [1] D. Armstrong, Generalized noncrossing partitions and combinatorics of Coxeter groups, Thesis, Cornell University, 2006, https://ecommons.cornell.edu/bitstream/handle/1813/3206/thesis.pdf. * [2] P. Barry and A. M. Mwafise, Classical and semi-classical orthogonal polynomials defined by Riordan arrays, and their moment sequences, _J. Integer Seq._ , 21 (2018), Article 18.1.5. * [3] P. Barry, _Riordan Arrays: a Primer_ , Logic Press, 2017. * [4] M. Bernstein and N. J. A. Sloane, Some canonical sequences of integers, Linear Alg. Applications, 226–228 (1995), 57–72; erratum 320 (2000), 210; arXiv:math/0205301 [math.CO], 2002. * [5] P. Henrici, An algebraic proof of the Lagrange-Bürmann formula, _J. Math. Anal. Appl._ , 8 (1964), 218–224. * [6] A. N. Kirillov, On Some Quadratic Algebras I $\frac{1}{2}$: Combinatorics of Dunkl and Gaudin Elements, Schubert, Grothendieck, Fuss–Catalan, Universal Tutte and Reduced Polynomials, _Sigma_ 12 (2016), 002, 172 pages. * [7] C. W. Lee, The Associahedron and triangulations of the $n$-gon, _Europ. J. Combinatorics_ , 10 (1989), 551–560. * [8] J.-L. Loday and M. O. Ronco, Trialgebras and families of polytopes, https://arxiv.org/abs/math/0205043. * [9] J.-L. Loday, The multiple facets of the associahedron, preprint (2005), _Proc. Academy Colloquium Series_ , Clay Mathematics Institute Publication, 2005. * [10] D. Merlini, R. Sprugnoli, and M. C. Verri, The method of coefficients, _Amer. Math. Monthly_ , 114 (2007), 40–57. * [11] D. Merlini, R. Sprugnoli, and M. C. Verri, Lagrange inversion: when and how, _Acta Appl. Math._ , 94 (2006), 233–249. * [12] J.-C. Novelli and J.-Y. Thibon, Hopf Algebras of $m$-permutations, $(m+1)$-ary trees, and $m$-parking functions, https://arxiv.org/abs/1403.5962. * [13] P. Salvatore and R. Tauraso, The Operad Lie is Free, _Journal of Pure and Applied Algebra_ , 213 (2009), 224–230. * [14] L. W. Shapiro, S. Getu, W. J. Woan, and L. C. Woodson, The Riordan group, _Discr. Appl. Math._ , 34 (1991), 229–239. * [15] N. J. A. Sloane, _The On-Line Encyclopedia of Integer Sequences_. Published electronically at https://oeis.org/, 2021. * [16] N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences, _Notices Amer. Math. Soc._ , 50 (2003), 912–915. 2010 Mathematics Subject Classification: Primary 15A30; Secondary 15B36, 05A10, 13F25. _Keywords:_ Riordan array, Riordan group, invert transform, revert transform, inversion, Lagrange inversion, associahedron, Narayana numbers, Fuss-Narayana polynomials. (Concerned with sequences A000108, A001263, A001662, A002002, A007318, A009766, A049310, A053121, A060693, A084938, A088617, A094587, A097610, A119467, A126216, A196347, A243662, A243663, and A249925.)
# Coherent seeding of the dynamics of a spinor Bose-Einstein condensate: from quantum to classical behavior Bertrand Evrard, An Qu, Jean Dalibard and Fabrice Gerbier Laboratoire Kastler Brossel, Collège de France, CNRS, ENS-PSL Research University, Sorbonne Université, 11 Place Marcelin Berthelot, 75005 Paris, France ###### Abstract We present experiments revealing the competing effect of quantum fluctuations and of a coherent seed in the dynamics of a spin-1 Bose-Einstein condensate, and discuss the relevance of a mean-field description of our system. We first explore a near-equilibrium situation, where the mean-field equations can be linearized around a fixed point corresponding to all atoms in the same Zeeman state $m=0$. Preparing the system at this classical fixed point, we observe a reversible dynamics triggered by quantum fluctuations, which cannot be understood within a classical framework. We demonstrate that the classical description becomes accurate provided a coherent seed of a few atoms only is present in the other Zeeman states $m=\pm 1$. In a second regime characterized by a strong non-linearity of the mean-field equations, we observe a collapse dynamics driven by quantum fluctuations. This behavior cannot be accounted for by a classical description and persists for a large range of initial states. We show that all our experimental results can be explained with a semi-classical description (truncated Wigner approximation), using stochastic classical variables to model the quantum noise. ## I Introduction The mean-field approximation is an essential tool of many-body physics. In this approach, the interaction of a single body with the rest of the system is treated in an averaged way, neglecting fluctuations around the mean and erasing any spatial correlations. The original many-body problem is then reduced to a much simpler one-body problem, a tremendous simplification enabling a basic analysis of the problem at hand. The accuracy of the averaging improves with the number of particles in direct interaction. Consequently, the mean-field treatment is well suited for highly connected systems, while important deviations are common for systems with short range interactions in reduced dimensions. When applied to bosonic quantum systems, a mean-field approach often entails another important approximation where intrinsic quantum fluctuations (and the correlations they induce) are neglected. Since quantum fluctuations are reflected in the non-commutativity of observables, field operators in the second-quantization formalism are replaced by commuting $c$-numbers. A possible improvement consists in replacing the field operators by classical stochastic fields Gardiner and Zoller (2004); Polkovnikov (2010); Steel et al. (1998); Sinatra et al. (2002); Mathew and Tiesinga (2017), with a statistics properly chosen to be as close as possible to the original quantum problem. Such a semi-classical approach allows one to account quantitatively for quantum fluctuations, while keeping the inherent simplicity of the mean-field equations. In this Letter, we study the role of quantum fluctuations and the emergence of mean-field behavior in a quantum spinor Bose-Einstein condensate Kawaguchi and Ueda (2012). The atoms are condensed in the same spatial mode and interact all-to-all. The mean-field approach is thus well suited to study the dynamics in the spin sector, and has indeed been successfully used to describe several situations, either at Zhang et al. (2003); Jacob et al.
(2012) or out-of Zhang et al. (2005); Chang et al. (2005); Kronjäger et al. (2005, 2006); Black et al. (2007); Liu et al. (2009) equilibrium. More recently, several experiments addressed the dynamics of a condensate prepared in an unstable configuration, achieving a high sensitivity to both classical and quantum fluctuations Klempt et al. (2010); Bookjans et al. (2011); Lücke et al. (2011); Hamley et al. (2012); Lücke et al. (2014); Linnemann et al. (2016, 2017); Kunkel et al. (2018); Fadel et al. (2018); Lange et al. (2018); Tian et al. (2020); Yang et al. (2019); Qu et al. (2020); Mias et al. (2008); Cui et al. (2008); Wrubel et al. (2018); Evrard et al. (2021). Here, our goal is twofold. First, we reveal the effect of quantum fluctuations in two different dynamical regimes, corresponding to persistent oscillations or relaxation to a stationary state Evrard et al. (2021). Second, we address the relevance of a classical field description by comparing our experimental results systematically with three theoretical approaches. In the fully classical picture (C), we derive mean-field equations of motion and solve them for well-defined initial conditions, possibly including a coherent seed. In the semi-classical picture (SC), we keep the same mean-field equations of motion but for fluctuating initial conditions, with a probability distribution designed to model the quantum noise of the initial state. Finally, we perform a fully quantum treatment (Q), consisting in a numerical resolution of the many-body Schrödinger equation. ## II Spinor Bose-Einstein condensates We work with Bose-Einstein condensates of $N$ spin-1 sodium atoms in a tight optical trap. Due to the strong confinement, all atoms share the same spatial wave function $\psi({\boldsymbol{r}})$ Yi et al. (2002), such that the spin is the only relevant degree of freedom. In this regime, the Hamiltonian describing the spin-spin interaction is (up to an additive constant) Ohmi and Machida (1998); Ho (1998); Yi et al. (2002); Kawaguchi and Ueda (2012) $\displaystyle\hat{H}_{\rm int}=\frac{U_{s}}{2N}\sum_{i,j=1}^{N}\hat{\boldsymbol{s}}_{i}\cdot\hat{\boldsymbol{s}}_{j}=\frac{U_{s}}{2N}\hat{\boldsymbol{S}}^{2}\,.$ (1) Here $\hat{\boldsymbol{s}}_{i}$ denotes the spin of atom $i$, $\hat{\boldsymbol{S}}=\sum_{i}\hat{\boldsymbol{s}}_{i}$ the total spin, and $U_{s}$ the spin-spin interaction energy. In the single-mode limit, the spin-spin interaction is given by $U_{s}=(4\pi\hbar^{2}a_{s}N/M)\int d^{3}{\boldsymbol{r}}\,|\psi({\boldsymbol{r}})|^{4}$, where $a_{s}$ is a spin-dependent scattering length, $M$ is the mass of a sodium atom, and the spin-independent spatial mode $\psi$ is the lowest energy solution of the time-independent Gross-Pitaevskii equation Dalfovo et al. (1999). Note that technical fluctuations of the atom number $N$ translate into fluctuations of $U_{s}$ (other factors, such as fluctuations of the trap geometry, can also contribute to the latter). As will be discussed in more detail in Section IV, these technical fluctuations add to the intrinsic relaxation due to quantum fluctuations and thereby play a significant role in the interpretation of the experiments. We use a magnetic field ${\boldsymbol{B}}$ aligned along the $z$ axis to shift the energies of the individual Zeeman states $|m\rangle$, the eigenstates of $\hat{s}_{z}$ with eigenvalues $m=0,\pm 1$.
Up to second order in $B$, the Zeeman Hamiltonian is $\hat{H}_{Z}=\sum_{i=1}^{N}\left(p\hat{s}_{zi}+q\hat{s}_{zi}^{2}\right)\,$, where $p\propto B$ and $q\propto B^{2}$ are the linear and quadratic Zeeman shifts, respectively. Since $[\hat{S}_{z},\hat{H}_{\rm int}]=0\,$, the first term in $\hat{H}_{Z}$ is a constant of motion that can be removed by a unitary transformation. The total Hamiltonian thus reads Kawaguchi and Ueda (2012) $\displaystyle\hat{H}=\hat{H}_{\rm int}+\hat{H}_{Z}=\frac{U_{s}}{2N}\hat{\boldsymbol{S}}^{2}+q\left(\hat{N}_{+1}+\hat{N}_{-1}\right)\,,$ (2) where $\hat{N}_{m}$ is the number of atoms in $|m\rangle$. Under a mean-field approximation, the annihilation operators $\hat{a}_{m}$ are replaced by the $c$-numbers $\sqrt{N}\zeta_{m}=\sqrt{N_{m}}\exp(i\phi_{m})$. By convention we set $\phi_{0}=0$, and we focus on the situation $S_{z}=0$. We define the mean number of $(+1,-1)$ pairs $N_{\rm p}=({N}_{+1}+N_{-1})/2$, and take its normalized value $n_{\rm p}=N_{\rm p}/N$ and the conjugate phase $\theta=\phi_{+1}+\phi_{-1}$ as dynamical variables. In terms of these variables, the mean-field equations of motion are Zhang et al. (2005) $\displaystyle\hbar\dot{n}_{\rm p}$ $\displaystyle=-2U_{s}n_{\rm p}(1-2n_{\rm p})\sin\theta\,,$ (3) $\displaystyle\hbar\dot{\theta}$ $\displaystyle=-2q+2U_{s}(4n_{\rm p}-1)(1+\cos\theta)\,.$ (4) At $t=0$, the BEC is prepared in a generalized coherent spin state $|\psi_{\mathrm{ini}}\rangle=\left(\sum_{m}\zeta_{\mathrm{ini},m}|m\rangle\right)^{\otimes N}$, with $\displaystyle{\boldsymbol{\zeta}}_{\mathrm{ini}}=\begin{pmatrix}\sqrt{n_{\rm seed}}\,\mathrm{e}^{i\frac{\theta_{\mathrm{ini}}+\eta_{\mathrm{ini}}}{2}}\\\ \sqrt{1-2n_{\rm seed}}\\\ \sqrt{n_{\rm seed}}\,\mathrm{e}^{i\frac{\theta_{\mathrm{ini}}-\eta_{\mathrm{ini}}}{2}}\\\ \end{pmatrix}\,,$ (5) where $n_{\rm seed}=N_{\rm seed}/N$ and $N_{\rm seed}$ is the mean number of atoms in each of the $m=\pm 1$ states. The Larmor phase $\eta=\phi_{+1}-\phi_{-1}$ evolves as $\eta(t)=\eta_{\mathrm{ini}}-2pt/\hbar$ and does not play any important role in the following. We focus on the behavior of $N_{\rm p}(t)$ as a function of time. We notice that the state with all atoms in $m=0$ (i.e. $N_{\rm seed}=0$ and hence $n_{\rm p}=0$) is stationary according to Eqs. (3,4). However, this state is not an eigenstate of $\hat{H}_{\rm int}$ and thus not a stationary state of the quantum equation of motion. In the absence of a seed, we identified in Ref. Evrard et al. (2021) two different regimes for the ensuing non-classical dynamics: * • For $U_{s}/N\ll q$, the dynamics is reversible: The number of pairs $N_{\rm p}(t)$ oscillates with a small amplitude. * • For $q\ll U_{s}/N$, the dynamics is strongly damped and $N_{\rm p}(t)$ relaxes to a stationary value. Here, we revisit these experiments to investigate the effect of coherent seeding of the $m=\pm 1$ modes. ## III Reversible dynamics #### Theoretical predictions We focus first on the situation where $U_{s}/N\ll q\ll U_{s}$ and $n_{\rm seed}\ll 1$. In this case, the reduced number of pairs $n_{\rm p}$ remains small at all times. Linearizing the mean-field Eqs. (3,4), we obtain SM $\displaystyle N_{\rm p}^{\rm(C)}(t)\approx\frac{2U_{s}}{q}\sin^{2}(\omega t)\cos^{2}\left(\frac{\theta_{\mathrm{ini}}}{2}\right)N_{\rm seed}\,,$ (6) where $\omega\approx\sqrt{2qU_{s}}$. Note that the oscillation frequency $\omega$ is independent of the initial conditions $\theta_{\mathrm{ini}}$ and $N_{\rm seed}$. In Sec.
IV, we investigate a regime where the frequency of the classical solution increases with $N_{\rm seed}$, with dramatic consequences for the semi-classical dynamics. To improve the prediction (6) and account for quantum fluctuations, we use a semi-classical approach, the truncated Wigner approximation Polkovnikov (2010); Steel et al. (1998); Sinatra et al. (2002); Mathew and Tiesinga (2017); Wrubel et al. (2018). The probability amplitudes $\zeta_{\mathrm{ini},m}$ are treated as complex random variables which sample the Wigner distribution of the initial state at $t=0$. The amplitudes are then propagated according to the mean-field equations of motion. Averaging the mean-field predictions over the fluctuations of $\bm{\zeta}_{\rm ini}$, we find SM; Wrubel et al. (2018) $\displaystyle N_{\rm p}^{\rm(SC)}(t)\approx\frac{U_{s}}{2q}\sin^{2}(\omega t)\left[4\cos^{2}\left(\frac{\theta_{\mathrm{ini}}}{2}\right)N_{\rm seed}+1\right]\,.$ (7) In analogy with quantum optics, the term $\propto N_{\rm seed}$ in Eqs. (6,7) describes “stimulated emission” from the mode $m=0$ to the modes $m=\pm 1$, while the additional term in Eq. (7) can be interpreted as “spontaneous emission”. We have verified numerically that the SC results are in good agreement with a fully quantum treatment. Moreover, comparing equations (6) and (7), we notice that unless the initial phase is chosen such that $\theta_{\mathrm{ini}}\approx\pi$, a large seed $N_{\rm seed}\gg 1$ makes the C and SC treatments almost identical, irrespective of the precise value of $N$. In fact, seeding with a few atoms $N_{\rm seed}\approx 2-3$ and with $\theta_{\mathrm{ini}}=0$ is sufficient to reach a 90% agreement between the two approaches.
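The SC procedure just described is straightforward to reproduce numerically. The sketch below is a minimal illustration (not the authors' analysis code, which is not public): it samples the coherent-state Wigner noise of the $m=\pm 1$ modes around the seed, propagates the single-mode spinor mean-field equations written for the amplitudes $\alpha_{m}$ (obtained from $i\hbar\dot{\alpha}_{m}=\partial H_{\rm cl}/\partial\alpha_{m}^{*}$, equivalent to Eqs. (3,4) in this sector), and averages. The parameters mimic Fig. 1(b); the trajectory number and integration time are arbitrary choices.

```python
# Minimal truncated-Wigner sketch for the single-mode spin-1 BEC (hbar = 1,
# energies in rad/s). Assumption: only the m = +/-1 modes are sampled, as in
# the small-depletion approximation of the SM.
import numpy as np
from scipy.integrate import solve_ivp

N = 2000
Us, q = 2 * np.pi * 9.9, 2 * np.pi * 0.22      # U_s/h = 9.9 Hz, q/h = 0.22 Hz
Nseed, theta_ini = 0.25, 0.0
ntraj, tmax = 200, 0.6                          # trajectories, evolution time (s)

def eom(t, y):
    """i d(alpha_m)/dt = dH_cl/d(alpha_m*), with (a_+1, a_0, a_-1)."""
    a = y[:3] + 1j * y[3:]
    Sz = abs(a[0]) ** 2 - abs(a[2]) ** 2
    Sp = np.sqrt(2) * (np.conj(a[0]) * a[1] + np.conj(a[1]) * a[2])
    g = Us / (2 * N)
    da = -1j * np.array([
        g * (2 * Sz * a[0] + np.sqrt(2) * np.conj(Sp) * a[1]) + q * a[0],
        g * np.sqrt(2) * (np.conj(Sp) * a[2] + Sp * a[0]),
        g * (-2 * Sz * a[2] + np.sqrt(2) * Sp * a[1]) + q * a[2]])
    return np.concatenate([da.real, da.imag])

ts = np.linspace(0.0, tmax, 150)
Np = np.zeros_like(ts)
rng = np.random.default_rng(0)
seed_amp = np.sqrt(Nseed) * np.exp(1j * theta_ini / 2)
for _ in range(ntraj):
    a0 = np.array([seed_amp, np.sqrt(N - 2 * Nseed), seed_amp], dtype=complex)
    # coherent-state Wigner noise: half a vacuum quantum per quadrature
    a0[[0, 2]] += (rng.normal(size=2) + 1j * rng.normal(size=2)) / 2
    sol = solve_ivp(eom, (0.0, tmax), np.concatenate([a0.real, a0.imag]),
                    t_eval=ts, rtol=1e-8, atol=1e-8)
    ap, am = sol.y[0] + 1j * sol.y[3], sol.y[2] + 1j * sol.y[5]
    Np += (np.abs(ap) ** 2 + np.abs(am) ** 2) / 2
Np = Np / ntraj - 0.5          # remove the symmetric-ordering offset of 1/2
print(Np.max())                # compare with (Us/2q)(4 Nseed + 1) ~ 45, Eq. (7)
```

With $\theta_{\mathrm{ini}}=0$ the printed maximum should lie close to $(U_{s}/2q)(4N_{\rm seed}+1)\approx 45$ pairs, exhibiting the “stimulated plus spontaneous” structure of Eq. (7).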
#### Experimental sequence We prepare a BEC in the state $m=0$ using evaporative cooling in a crossed laser trap with a large magnetic field $B=1\,$G ($q\gg U_{s}$). After evaporation, the BEC contains $N\approx 2000$ atoms in the state $m=0$, with $N_{\rm p}\approx 100$ residual thermal atoms in $m=\pm 1$. We then turn on a strong magnetic field gradient to pull the $m=\pm 1$ atoms out of the trap. After this purification step, we measure $N_{\rm p}\ll 1$ Qu et al. (2020). We add a coherent seed using a combination of magnetic field ramps and resonant radio frequency (rf) pulses. In a first step, an rf pulse is used to prepare the atoms in a coherent superposition with a probability $n_{\rm seed}$ to be in a given $m=\pm 1$ state. In a second step, the BEC is held in a large magnetic field, such that $q\gg U_{s}$ and $\theta_{\mathrm{ini}}$ can be tuned keeping $n_{\rm p}=n_{\rm seed}$ (see the Supplemental Material (SM) for more details). In this way, we are able to prepare any coherent spin state given by Eq. (5), up to the phase $\eta_{\mathrm{ini}}$ which is irrelevant for the experiments described here. The main imperfection in the preparation originates from the fluctuations of the total atom number $\delta N\approx 0.1\,N$, which induce $\approx 10\%$ relative fluctuations on $N_{\rm seed}$. The magnetic field is then quenched to the desired value, and we let the system evolve for a time $t$ before measuring the population of each Zeeman state using a combination of Stern-Gerlach separation and fluorescence imaging with a detection sensitivity around $1.6$ atoms per spin component Qu et al. (2020). #### Experimental results In Fig. 1, we show the time evolution of $N_{\rm p}(t)$ for various initial states. In Fig. 1(a), we do not seed the dynamics. We observe an oscillation of $N_{\rm p}(t)$, not captured by the classical description of Eq. (6), but in good agreement with the semi-classical predictions (7) or with the numerical resolution of the Schrödinger equation. In Fig. 1(b), we prepare a seed with $N_{\rm seed}\approx 0.25\pm 0.03$ (inferred from a calibration of the rf power) and $\theta_{\mathrm{ini}}\approx 0$. Compared to (a), the amplitude of the oscillations is doubled, in good agreement with (7). In Fig. 1(c), we set $N_{\rm seed}\approx 1.8\pm 0.2$ and $\theta_{\mathrm{ini}}\approx 0$. The amplitude of the oscillations is further increased, and now also well reproduced by the fully classical treatment (6). In all cases (a,b,c), the condition $N_{\rm p}(t)\ll N$ remains fulfilled at all times. This confirms the validity of Eqs. (6,7) and the independence of the oscillation frequency from $N_{\rm seed}$ (as can be seen in Fig. 1). Figure 1: Evolution of the number of $(+1,-1)$ pairs $N_{\rm p}$ (circles) for $q/h\approx 0.22\pm 0.03\,$Hz, $N\approx 1880\pm 190$ atoms and various seed sizes: $N_{\rm seed}\approx 0;\,0.25;\,1.8$ from (a) to (c). The initial phase is always set to $\theta_{\mathrm{ini}}\approx 0$. The solid lines are numerical solutions of the Schrödinger equation with the many-body Hamiltonian in Eq. (2) using $U_{s}/h=9.9\,$Hz. The red dashed lines correspond to the classical prediction (6). Here and in the following, error bars show the statistical error corresponding to two standard errors. We investigate the role of the initial phase $\theta_{\mathrm{ini}}$ in Fig. 2. In Fig. 2(a), we plot the variation of $N_{\rm p}(T/2)$, with $T=\pi/\omega$ the period of oscillations, against $N_{\rm seed}$ for three values of $\theta_{\mathrm{ini}}$. For $N_{\rm seed}\ll 1$, we observe a saturation of $N_{\rm p}(T/2)$ at a value independent of $\theta_{\mathrm{ini}}$, consistent with the SC prediction (7). For such small seeds, the dynamics is triggered by quantum fluctuations. For larger seeds, unless the anti-phase-matching condition $\theta_{\mathrm{ini}}\approx\pi$ is fulfilled (red curves), stimulated emission becomes dominant and the fully classical description is accurate. We observe a linear increase of $N_{\rm p}(T/2)$ until the small-depletion approximation used to derive Eqs. (6,7) becomes inconsistent. For our data, this occurs for the point $N_{\rm seed}\approx 100$, $\theta_{\mathrm{ini}}\approx 0$. In this case, an exact solution of the mean-field equations (3,4) provides accurate results. In Fig. 2(b), we set $N_{\rm seed}\approx 6.0$ and scan the phase $\theta_{\mathrm{ini}}$. We measure oscillations of $N_{\rm p}(T/2)$ in good agreement with Eqs. (6,7). Figure 2: (a) Number of pairs produced after half a period of evolution versus $N_{\rm seed}$ for $q/h\approx 0.33\pm 0.03\,$Hz and $N\approx 2920\pm 280$. The blue diamonds, green circles and red squares correspond to initial phases $\theta_{\mathrm{ini}}\approx 0$; $2.2$; and $3.3$ rad, respectively. For the three smallest seeds, $N_{\rm seed}$ is inferred from the calibration of the rf power. The solid lines are the semi-classical predictions given by Eq. (7) with $U_{s}/h\approx 12\,$Hz, assuming $N_{\rm p}\ll N$. For large $N_{\rm seed}$, this approximation breaks down, but a numerical solution of the non-linear classical mean-field Eqs. (3,4) with fixed initial conditions becomes relevant. This fully classical treatment is shown as dashed lines.
(b) Scan of the initial phase $\theta_{\mathrm{ini}}$ after half a period of evolution for $N_{\rm seed}\approx 6.0$. ## IV Relaxation dynamics Figure 3: Evolution of the fraction of $(+1,-1)$ pairs $n_{\rm p}=N_{\rm p}/N$ in a negligible magnetic field, for $N\approx 124\pm 12$ atoms and various seed sizes: $N_{\rm seed}=0;\,0.54;\,2.1;\,4.9;\,12.8$ from (a) to (e). The initial phase is always set to $\theta_{\mathrm{ini}}\approx 0$. The solid lines are numerical solutions of the Schrödinger equation for $U_{s}/h=24.5\,$Hz. In (e), the red dashed line is the classical prediction from Eqs. (3,4). #### Theoretical prediction We now investigate the relaxation dynamics in a very small magnetic field, such that $q\ll U_{s}/N$. In this regime, the quadratic Zeeman shift $q$ is negligible and we set it to zero for the calculation. However, the assumption $n_{\rm p}\ll 1$ used to derive Eq. (6) is not valid and the mean-field equations (3,4) cannot be linearized. For $q=0$, the mean-field equations of motion can be solved directly. Taking for simplicity $\theta_{\mathrm{ini}}=0$, we find SM $\displaystyle n_{\rm p}^{\rm(C)}(t)=\frac{1}{4}-\frac{1-4n_{\rm seed}}{4}\,\cos(\Omega t)\,,$ (8) with an oscillation frequency $\displaystyle\Omega=\frac{4U_{s}}{\hbar}\,\sqrt{2n_{\rm seed}(1-2n_{\rm seed})}\,.$ (9) The non-linear dependence of $\Omega$ on $n_{\rm seed}$ reflects the non-linearity of the mean-field equations, and has dramatic consequences when one takes into account quantum fluctuations. The seeds spontaneously created from the vacuum of pairs induce random shifts of the oscillation frequency around its mean-field value. Averaging over many realizations therefore results in an intrinsic dephasing of the oscillations predicted in Eq. (8). More precisely, for the generalized coherent spin state prepared in our experiment, the initial number of atoms in the $m=\pm 1$ modes $N_{+1,\mathrm{ini}}+N_{-1,\mathrm{ini}}=\Sigma$ follows a binomial distribution of mean $2N_{\rm seed}$ (quantum partition noise). We use the random variable $\Sigma$ as an initial condition to solve the mean-field equations (3,4), i.e. substituting $n_{\rm seed}$ in Eq. (8) with $\Sigma/(2N)$. After averaging over the partition noise, we obtain for $N_{\rm seed}\gg 1$ SM $\displaystyle n_{\rm p}^{\rm(SC)}(t)\approx\frac{1}{4}-\frac{1-4n_{\rm seed}}{4}\cos(\Omega t)\mathrm{e}^{-\frac{1}{2}(\gamma_{\rm c}t)^{2}}\,,$ (10) with a collapse rate $\displaystyle\gamma_{\rm c}$ $\displaystyle=\frac{2U_{s}}{\sqrt{N}\hbar}\,|1-4n_{\rm seed}|\,.$ (11) The analytic formula (10) agrees very well with the numerical solution of the many-body Schrödinger equation for $N_{\rm seed}\gtrsim 1$. The case $N_{\rm seed}\ll 1$ can be treated using the truncated Wigner approximation Mathew and Tiesinga (2017) or an exact diagonalization of the interaction Hamiltonian (1) Evrard et al. (2021); Law et al. (1998). The dynamics also displays a relaxation of $n_{\rm p}$ to $1/4$, but with a different asymptotic behavior, $n_{\rm p}-1/4\propto 1/t$. In a related work [law1998b], it was shown that Poissonian fluctuations of the atom number in each mode of a two-component BEC caused a Gaussian decay of the two-time correlation function. For the spin-1 and two-component cases, a similar mechanism is at work: the combination of non-linearities due to interactions and of quantum partition noise leads to dephasing and relaxation. In an actual experiment, the relaxation of $N_{\rm p}$ is also enhanced by purely classical noise sources of technical origin.
In our case, we identify shot-to-shot fluctuations of $U_{s}$ (see Section II) as a significant additional mechanism contributing to the blurring of the oscillations. To account for this phenomenon, we average Eq. (10) over a Gaussian distribution of $U_{s}$ with variance $\delta U_{s}^{2}$. The resulting $n_{\rm p}(t)$ has the same functional form as in Eq. (10) with the replacement $\displaystyle\gamma_{{\rm c}}\to\Gamma=\sqrt{\gamma_{\rm c}^{2}+\gamma_{\rm t}^{2}},$ (12) with a technical blurring rate $\displaystyle\gamma_{\rm t}=\frac{4\,\delta U_{s}}{\hbar}\,\sqrt{2n_{\rm seed}(1-2n_{\rm seed})}\,.$ (13) For small enough seeds $n_{\rm seed}\ll 1/4$, the total dephasing rate can be written $\displaystyle\Gamma$ $\displaystyle\approx\gamma_{\rm c}\sqrt{1+2\left(\frac{2\delta U_{s}}{U_{s}}\right)^{2}N_{\rm seed}}.$ (14) This indicates a crossover from quantum to classical dephasing for seed sizes $N^{\ast}\approx U_{s}^{2}/(2\delta U_{s})^{2}$. #### Experimental considerations In order to achieve the “zero field” regime $Nq\ll U_{s}$ experimentally, the best option is to reduce the atom number. Indeed, the density, and therefore $U_{s}$, cannot be arbitrarily increased due to undesired inelastic processes. Reducing the applied magnetic field further is not feasible due to ambient stray fields and environment-induced fluctuations (at the sub-mG level in our experiment). Therefore, we lower $N$ by more than one order of magnitude with respect to the previous sections and prepare mesoscopic BECs of $N\approx 124\pm 12$ atoms. We also slightly tighten the trap in order to achieve $U_{s}/h\approx 24.5$ Hz. In this case, the central spatial density remains low enough to avoid inelastic collisions (more details in the Supplementary Material). Figure 4: Frequency (a) and relaxation rate (b) of the spin-mixing dynamics in a negligible magnetic field. The circles are obtained from a fit to the data of Fig. 3, with the error bars indicating the $95\%$ confidence interval. In (a), the red dashed line corresponds to the frequency $\Omega$ predicted by the mean-field treatment. In (b), the dash-dotted blue line corresponds to the rate $\gamma_{\rm c}$ of the collapse driven by quantum fluctuations, the red dashed line is the damping rate $\gamma_{\rm t}$ due to technical fluctuations, and the solid purple line corresponds to the total damping rate $\Gamma=[\gamma_{{\rm c}}^{2}+\gamma_{\rm t}^{2}]^{1/2}$. We use the value $\delta U_{s}/U_{s}=0.13\pm 0.04$, obtained from a fit to the data. #### Experimental results We show in Fig. 3 the relaxation dynamics of $n_{\rm p}$ for various seed sizes $n_{\rm seed}$. We observe an acceleration of the initial dynamics for increasing $n_{\rm seed}$ and the emergence of rapidly damped oscillations. Eventually, $n_{\rm p}$ relaxes to the stationary value $\approx 1/4$ in all cases. Numerical simulations with $U_{s}$ taken as a fit parameter are overall in good agreement with the data, although they slightly underestimate the damping rate for the largest seed $N_{\rm seed}=12.8$. To compare these experiments with the theoretical predictions, we fit a function of the form (10) to the data of Fig. 3, leaving $\Omega$ and $\Gamma$ as free parameters. We report in Fig. 4(a,b) the fitted frequency and relaxation rate. The frequency is essentially insensitive to quantum or classical fluctuations, and the measured values agree well with the C or SC predictions. The relaxation rate varies little with $N_{\rm seed}$ in the range we have explored experimentally.
This observation is explained by the SC theory including technical fluctuations. Indeed, the slow decrease of $\gamma_{\rm c}$ with $N_{\rm seed}$ is compensated by the increase of $\gamma_{\rm t}$. Using $\delta U_{s}/U_{s}\approx 0.13$ as determined in Fig. 4, we find a “quantum-classical crossover” for seed sizes around $N^{\ast}\approx 15$, close to the largest value we explored experimentally. For small seeds $N_{\rm seed}\lesssim 5$, our measurements are consistent with a collapse driven primarily by quantum fluctuations. On the contrary, for the largest $N_{\rm seed}\approx 12.8$, classical technical dephasing is the dominant damping mechanism. ## V Conclusion We investigated the dynamics of a spin-1 BEC prepared with a majority of atoms in the Zeeman state $m=0$ and possibly small coherent seeds in the $m=\pm 1$ modes. For a small but non-negligible magnetic field, we observe oscillations of the spin populations. This dynamics is triggered by quantum fluctuations in the absence of a seed, and cannot be captured in a completely classical approach. The effect of adding a coherent seed is phase-sensitive Wrubel et al. (2018). In general it leads to a dramatic increase of the oscillation amplitude, and the classical predictions become accurate as soon as a few atoms (typically $N_{\rm seed}\gtrsim 2$) are used to seed the dynamics. We also studied the dynamics in a negligible magnetic field. In this second regime, the combination of non-linear mean-field equations and quantum noise leads to the relaxation of the spin populations. When the size of the seed increases, the intrinsic damping rate $\gamma_{\rm c}$ decreases and the mean-field picture becomes more and more relevant. However, it eventually fails for sufficiently long times. Experimentally, technical noise sources provide additional dephasing mechanisms of purely classical origin that can be completely described in the mean-field approach. In our experiment, we identify the fluctuations of the total atom number as the leading blurring mechanism when the seed size exceeds a dozen atoms. All the experiments presented in this Letter are well captured by a semi-classical theory, where quantum fluctuations are modeled using stochastic classical variables. An interesting direction for future work would be to test experimentally the validity of such a semi-classical description in other contexts, in particular in a chaotic regime Evrard et al. (2021); Rautenberg and Gärttner (2020); Tomkovič et al. (2017). ## References * Gardiner and Zoller (2004) C. Gardiner and P. Zoller, _Quantum noise_ (Springer Science, 2004). * Polkovnikov (2010) A. Polkovnikov, Annals of Physics 325, 1790 (2010). * Steel et al. (1998) M. J. Steel, M. K. Olsen, L. I. Plimak, P. D. Drummond, S. M. Tan, M. J. Collett, D. F. Walls, and R. Graham, Phys. Rev. A 58, 4824 (1998). * Sinatra et al. (2002) A. Sinatra, C. Lobo, and Y. Castin, Journal of Physics B: Atomic, Molecular and Optical Physics 35, 3599 (2002). * Mathew and Tiesinga (2017) R. Mathew and E. Tiesinga, Phys. Rev. A 96, 013604 (2017). * Kawaguchi and Ueda (2012) Y. Kawaguchi and M. Ueda, Physics Reports 520, 253 (2012). * Zhang et al. (2003) W. Zhang, S. Yi, and L. You, New Journal of Physics 5, 77 (2003). * Jacob et al. (2012) D. Jacob, L. Shao, V. Corre, T. Zibold, L. De Sarlo, E. Mimoun, J. Dalibard, and F. Gerbier, Phys. Rev. A 86, 061601 (2012). * Zhang et al. (2005) W. Zhang, D. L. Zhou, M.-S. Chang, M. S. Chapman, and L. You, Phys. Rev. A 72, 013602 (2005). * Chang et al. (2005) M.-S. Chang, Q.
Qin, W. Zhang, L. You, and M. S. Chapman, Nature Physics 1, 111 (2005). * Kronjäger et al. (2005) J. Kronjäger, C. Becker, M. Brinkmann, R. Walser, P. Navez, K. Bongs, and K. Sengstock, Phys. Rev. A 72, 063619 (2005). * Kronjäger et al. (2006) J. Kronjäger, C. Becker, P. Navez, K. Bongs, and K. Sengstock, Phys. Rev. Lett. 97, 110404 (2006). * Black et al. (2007) A. T. Black, E. Gomez, L. D. Turner, S. Jung, and P. D. Lett, Phys. Rev. Lett. 99, 070403 (2007). * Liu et al. (2009) Y. Liu, E. Gomez, S. E. Maxwell, L. D. Turner, E. Tiesinga, and P. D. Lett, Phys. Rev. Lett. 102, 225301 (2009). * Klempt et al. (2010) C. Klempt, G. Gebreyesus, M. Scherer, T. Henninger, P. Hyllus, W. Ertmer, L. Santos, and J. J. Arlt, Phys. Rev. Lett. 104, 195303 (2010). * Bookjans et al. (2011) E. M. Bookjans, C. D. Hamley, and M. S. Chapman, Phys. Rev. Lett. 107, 210406 (2011). * Lücke et al. (2011) B. Lücke, M. Scherer, J. Kruse, L. Pezze, F. Deuretzbacher, P. Hyllus, J. Peise, W. Ertmer, J. Arlt, L. Santos, et al., Science 334, 773 (2011). * Hamley et al. (2012) C. D. Hamley, C. Gerving, T. Hoang, E. Bookjans, and M. S. Chapman, Nat. Phys. 8, 305 (2012). * Lücke et al. (2014) B. Lücke, J. Peise, G. Vitagliano, J. Arlt, L. Santos, G. Tóth, and C. Klempt, Phys. Rev. Lett. 112, 155304 (2014). * Linnemann et al. (2016) D. Linnemann, H. Strobel, W. Muessel, J. Schulz, R. J. Lewis-Swan, K. V. Kheruntsyan, and M. K. Oberthaler, Phys. Rev. Lett. 117, 013001 (2016). * Linnemann et al. (2017) D. Linnemann, J. Schulz, W. Muessel, P. Kunkel, M. Prüfer, A. Frölian, H. Strobel, and M. Oberthaler, Quantum Science and Technology 2, 044009 (2017). * Kunkel et al. (2018) P. Kunkel, M. Prüfer, H. Strobel, D. Linnemann, A. Frölian, T. Gasenzer, M. Gärttner, and M. K. Oberthaler, Science 360, 413 (2018). * Fadel et al. (2018) M. Fadel, T. Zibold, B. Décamps, and P. Treutlein, Science 360, 409 (2018). * Lange et al. (2018) K. Lange, J. Peise, B. Lücke, I. Kruse, G. Vitagliano, I. Apellaniz, M. Kleinmann, G. Tóth, and C. Klempt, Science 360, 416 (2018). * Tian et al. (2020) T. Tian, H.-X. Yang, L.-Y. Qiu, H.-Y. Liang, Y.-B. Yang, Y. Xu, and L.-M. Duan, Phys. Rev. Lett. 124, 043001 (2020). * Yang et al. (2019) H.-X. Yang, T. Tian, Y.-B. Yang, L.-Y. Qiu, H.-Y. Liang, A.-J. Chu, C. B. Dağ, Y. Xu, Y. Liu, and L.-M. Duan, Phys. Rev. A 100, 013622 (2019). * Qu et al. (2020) A. Qu, B. Evrard, J. Dalibard, and F. Gerbier, Phys. Rev. Lett. 125, 033401 (2020). * Mias et al. (2008) G. I. Mias, N. R. Cooper, and S. Girvin, Phys. Rev. A 77, 023616 (2008). * Cui et al. (2008) X. Cui, Y. Wang, and F. Zhou, Phys. Rev. A 78, 050701 (2008). * Wrubel et al. (2018) J. P. Wrubel, A. Schwettmann, D. P. Fahey, Z. Glassman, H. Pechkis, P. Griffin, R. Barnett, E. Tiesinga, and P. Lett, Phys. Rev. A 98, 023620 (2018). * Evrard et al. (2021) B. Evrard, A. Qu, J. Dalibard, and F. Gerbier, Phys. Rev. Lett. 126, 063401 (2021). * Yi et al. (2002) S. Yi, Ö. Müstecaplıoğlu, C.-P. Sun, and L. You, Phys. Rev. A 66, 011601 (2002). * Ohmi and Machida (1998) T. Ohmi and K. Machida, Journal of the Physical Society of Japan 67, 1822 (1998). * Ho (1998) T.-L. Ho, Phys. Rev. Lett. 81, 742 (1998). * Dalfovo et al. (1999) F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Reviews of Modern Physics 71, 463 (1999). * (36) For more details see the Supplemental Material, which includes the reference Uchino et al. (2010). * Law et al. (1998) C. K. Law, H. Pu, and N. P. Bigelow, Phys. Rev. Lett. 81, 5257 (1998). * Rautenberg and Gärttner (2020) M. Rautenberg and M.
Gärttner, Phys. Rev. A 101, 053604 (2020). * Tomkovič et al. (2017) J. Tomkovič, W. Muessel, H. Strobel, S. Löck, P. Schlagheck, R. Ketzmerick, and M. K. Oberthaler, Phys. Rev. A 95, 011602 (2017). * Uchino et al. (2010) S. Uchino, M. Kobayashi, and M. Ueda, Phys. Rev. A 81, 063632 (2010). Supplemental Material: Coherent seeding of the dynamics of a spinor Bose-Einstein condensate: from quantum to classical behavior ## I Initial state preparation ### I.1 Oscillating regime We prepare the spinor BEC at $t=0$ in a generalized coherent spin state $|\psi_{\mathrm{ini}}\rangle=\left(\sum_{m}\zeta_{\mathrm{ini},m}|m\rangle\right)^{\otimes N}$, $\displaystyle{\boldsymbol{\zeta}}_{\mathrm{ini}}=\begin{pmatrix}\sqrt{n_{\rm seed}}\mathrm{e}^{i\frac{\theta_{\mathrm{ini}}+\eta_{\mathrm{ini}}}{2}}\\\ \sqrt{1-2n_{\rm seed}}\\\ \sqrt{n_{\rm seed}}\mathrm{e}^{i\frac{\theta_{\mathrm{ini}}-\eta_{\mathrm{ini}}}{2}}\\\ \end{pmatrix}\,.$ (S1) We prepare this state starting from $|m=0\rangle$ using a combination of magnetic field ramps and resonant radio-frequency (rf) pulses. In detail, we first pulse an rf field resonant with the Zeeman splitting to populate the $m=\pm 1$ modes with a fraction $n_{\rm seed}=\sin^{2}(\Omega_{\rm rf}t_{1})/2$ of the atoms. Here, $\Omega_{\rm rf}$ is the rf Rabi frequency and $t_{1}$ the pulse duration. At this stage, we have prepared a coherent spin state of the form (S1) with $\theta_{\mathrm{ini}}\approx\pi$. To change $\theta_{\mathrm{ini}}$, we let the system evolve in a field $B=0.5\,$G ($q/h\approx 70\,$Hz) for a time $t_{2}<h/(2q)$, before quenching the magnetic field down to $28\pm 2\,$mG ($q/h\approx 0.22\,$Hz) in $t_{3}=4\,$ms to achieve the desired regime $U_{s}/N\ll q\ll U_{s}$. Interactions are negligible ($U_{s}/h\approx 10\,$Hz, hence $U_{s}t_{2,3}/h\ll 1$), and the system simply acquires a phase shift $\Delta\theta_{2}=-2qt_{2}/\hbar$ while the magnetic field is held constant, and $\Delta\theta_{3}=-2\int q(t)dt/\hbar$ during the quench. This results in an initial phase $\theta_{\rm ini}=\pi-2qt_{2}/\hbar+\Delta\theta_{3}$ that is fully tunable from 0 to $2\pi$ by varying $t_{2}$. ### I.2 Relaxing regime We prepare mesoscopic BECs of $N\approx 124$ atoms in the same initial spin state as before. We lower the magnetic field down to $B=4.2\pm 1.5$ mG ($q/h\approx 5\,$mHz) in $t_{3}=20\,$ms. The ramp time corresponds to the time needed for the damping of eddy currents in the vacuum chamber. Because of the small atom number, the effects of the spin-dependent interactions are negligible over the ramp ($U_{s}/h\approx 4\,$Hz, such that $U_{s}t_{3}/h\ll 1$) and the evolution of the state is essentially another phase shift of $\theta$, which can be compensated for by varying $t_{2}$. For these experiments, we always choose $t_{2}$ such that $\theta_{\mathrm{ini}}\approx 0$. Finally, we trigger the dynamics by recompressing the trap in $6\,$ms ($U_{s}/h\approx 4\to 24\,$Hz). By performing numerical simulations of the sequence with the many-body Schrödinger equation, we have checked that the ramp can be considered instantaneous to a good approximation.
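As a quick arithmetic check of the phase control described in Sec. I.1 (a sketch using only the values quoted above), the hold time needed for a full $2\pi$ scan of $\theta_{\mathrm{ini}}$ follows directly from $\Delta\theta_{2}=-2qt_{2}/\hbar$:

```python
# Phase accumulated during the hold at B = 0.5 G: Delta_theta2 = -2 q t2 / hbar,
# so a full 2*pi scan of theta_ini requires t2 = h/(2q).
q_over_h = 70.0                         # Hz, quoted value at B = 0.5 G
t2_full = 1.0 / (2.0 * q_over_h)        # s
Us_over_h = 10.0                        # Hz, quoted interaction strength
print(t2_full * 1e3, "ms")              # ~7.1 ms, consistent with t2 < h/(2q)
print(Us_over_h * t2_full)              # U_s t2 / h ~ 0.07 << 1, as stated
```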
## II Classical and semi-classical dynamics We detail here the calculations of the dynamics of $N_{\rm p}(t)$ given in the main text. We use a classical (C) approach based on the mean-field approximation and a semi-classical (SC) approach inspired by the truncated Wigner approximation (TWA). In both frameworks, the annihilation operators $\hat{a}_{m}$ are replaced by $c$-numbers $\alpha_{m}=\sqrt{N}\zeta_{m}$, with $N$ the number of condensed atoms and $\bm{\zeta}$ a spin-1 wavefunction (normalized to unity) parameterized as $\displaystyle{\boldsymbol{\zeta}}=\begin{pmatrix}\sqrt{n_{\rm p}}\mathrm{e}^{i\frac{\theta+\eta}{2}}\\\ \sqrt{1-2n_{\rm p}}\\\ \sqrt{n_{\rm p}}\mathrm{e}^{i\frac{\theta-\eta}{2}}\\\ \end{pmatrix}\,.$ (S2) Here $n_{\rm p}=(N_{+1}+N_{-1})/(2N)$ denotes the mean number of $(+1,-1)$ pairs normalized to the total atom number ($N_{\rm p}=Nn_{\rm p}$), and we have restricted ourselves to the situation $N_{+1}=N_{-1}$. We have also chosen $\zeta_{0}$ real without loss of generality. The mean-field equations of motion for a spin-1 condensate in the single-mode regime take the form Zhang et al. (2005); Kawaguchi and Ueda (2012) $\displaystyle\hbar\dot{n}_{\rm p}$ $\displaystyle=-2U_{s}n_{\rm p}(1-2n_{\rm p})\sin\theta\,$ (S3) $\displaystyle\hbar\dot{\theta}$ $\displaystyle=-2q+2U_{s}(4n_{\rm p}-1)(1+\cos\theta)\,.$ (S4) The mean-field energy per atom is given by $\displaystyle\mathcal{E}_{s}=2U_{s}n_{\rm p}(1-2n_{\rm p})(1+\cos\theta)+2qn_{\rm p}\,.$ (S5) The energy $\mathcal{E}_{s}$ is a constant of motion, a fact that we will use repeatedly in the following. ### II.1 Dynamics in the oscillating regime In this section we derive the evolution of $N_{\rm p}(t)$ for the oscillating regime $q\gg U_{s}/N$. We assume $N_{\rm seed}\ll N$, i.e. the situation where quantum fluctuations may play a significant role. For $N_{\rm seed}\sim N$, a fully classical treatment is accurate. #### Classical solution: Assuming $n_{\rm p}\ll 1$, we linearize Eqs. (S3) and (S5), $\displaystyle\hbar\dot{n}_{\rm p}$ $\displaystyle\approx-2U_{s}n_{\rm p}\sin\theta\,$ (S6) $\displaystyle\mathcal{E}_{s}$ $\displaystyle\approx\Big{(}2U_{s}(1+\cos\theta)+2q\Big{)}n_{\rm p}\,.$ (S7) We use the second equation to express $\cos\theta$ as a function of $n_{\rm p}$ and of the constants $q,U_{s},\mathcal{E}_{s}$. Substituting in the first equation, we obtain a differential equation for $n_{\rm p}$ only, $\dot{n}_{\rm p}^{2}=-4\omega^{2}\left[n_{\rm p}-\alpha\right]^{2}+A$, where $\displaystyle\hbar\omega=\sqrt{q(q+2U_{s})},\hskip 14.22636pt\alpha=\frac{\mathcal{E}_{s}(q+U_{s})}{2(\hbar\omega)^{2}},$ (S8) and where $A$ is a constant. Differentiating one more time, we find that either $n_{\rm p}$ is constant or it obeys the harmonic equation $\ddot{n}_{\rm p}+4\omega^{2}\left(n_{\rm p}-\alpha\right)=0$. The evolution is thus a harmonic motion at frequency $2\omega$, $\displaystyle n_{\rm p}(t)\approx n_{\rm seed}+2(\alpha-n_{\rm seed})\sin^{2}(\omega t)\,,$ (S9) with the initial conditions $n_{\rm p}(0)=n_{\rm seed}$ and $\theta(0)=\theta_{\mathrm{ini}}$. If we also assume (as in the experiments we performed) that $q\ll U_{s}$, we have $\mathcal{E}_{s}\approx 4U_{s}n_{\rm seed}\cos^{2}(\theta_{\mathrm{ini}}/2)\gg q$, and $\alpha\approx\mathcal{E}_{s}/(4q)\gg 1$. Eq. (S9) then reduces to $\displaystyle n_{\rm p}(t)\approx n_{\rm seed}+\frac{2U_{s}n_{\rm seed}}{q}\,\cos^{2}(\theta_{\mathrm{ini}}/2)\,\sin^{2}(\omega t)\,,$ i.e. to Eq. (6) in the main text.
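The passage from Eqs. (S3,S4) to the harmonic solution (S9) is easy to verify numerically. The following sketch (our illustration with $\hbar=1$ and arbitrary small parameters, not part of the original analysis) integrates the full non-linear equations and compares with (S9):

```python
# Check of the linearized solution (S9) against a direct integration of the
# non-linear mean-field equations (S3,S4); hbar = 1, illustrative parameters.
import numpy as np
from scipy.integrate import solve_ivp

Us, q, nseed, th0 = 1.0, 0.1, 1e-3, 0.0       # q << Us and n_p << 1 throughout
def rhs(t, y):
    n, th = y
    return [-2 * Us * n * (1 - 2 * n) * np.sin(th),
            -2 * q + 2 * Us * (4 * n - 1) * (1 + np.cos(th))]

t = np.linspace(0.0, 30.0, 600)
sol = solve_ivp(rhs, (0.0, 30.0), [nseed, th0], t_eval=t, rtol=1e-10, atol=1e-12)
w = np.sqrt(q * (q + 2 * Us))                           # Eq. (S8)
Es = (2 * Us * (1 + np.cos(th0)) + 2 * q) * nseed       # linearized energy (S7)
alpha = Es * (q + Us) / (2 * w ** 2)
n_lin = nseed + 2 * (alpha - nseed) * np.sin(w * t) ** 2   # Eq. (S9)
print(np.max(np.abs(sol.y[0] - n_lin)))   # small compared to 2*(alpha - nseed)
```

The printed residual should remain a small fraction of the oscillation amplitude as long as $n_{\rm p}\ll 1$, which is the condition under which (S6,S7) were derived.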
#### Semi-classical picture: We now consider the effect of quantum fluctuations within the TWA Polkovnikov (2010); Steel et al. (1998); Sinatra et al. (2002); Mathew and Tiesinga (2017); Wrubel et al. (2018). In this method, the $c$-numbers $\alpha_{m}$ used instead of the annihilation operators $\hat{a}_{m}$ in the mean-field approximation are treated as complex random variables. At $t=0$, these variables sample the Wigner distribution of the initial state $|\psi_{\mathrm{ini}}\rangle$. Their mean values are given by $\displaystyle\bar{\boldsymbol{\alpha}}_{\mathrm{ini}}=\sqrt{N}\begin{pmatrix}\sqrt{n_{\rm seed}}\,\mathrm{e}^{i\frac{\theta_{\mathrm{ini}}+\eta_{\mathrm{ini}}}{2}}\\\ \sqrt{1-2n_{\rm seed}}\\\ \sqrt{n_{\rm seed}}\,\mathrm{e}^{i\frac{\theta_{\mathrm{ini}}-\eta_{\mathrm{ini}}}{2}}\\\ \end{pmatrix}\,.$ (S10) In the limit $N_{\rm seed}\ll N$, the calculation can be simplified by neglecting the depletion of the mode $m=0$. For the $m=\pm 1$ modes, this approximation amounts to replacing coherent spin states by harmonic oscillator coherent states, which are considerably easier to handle. The initial quantum state is thus taken to be $\displaystyle|\psi_{\mathrm{ini}}\rangle\approx\frac{1}{\sqrt{N!}}\prod_{m=\pm 1}\mathrm{e}^{\bar{\alpha}_{m,\mathrm{ini}}\hat{a}_{m}^{\dagger}-\bar{\alpha}_{m,\mathrm{ini}}^{*}\hat{a}_{m}}\hat{a}_{0}^{\dagger N}|{\rm vac}\rangle\,.$ (S11) For $t>0$, the equations of evolution (S3,S4) remain valid in the TWA. The solution for initial conditions $\alpha_{\pm 1,\mathrm{ini}}$ is thus given by Eq. (S9) with the substitution $4N_{\rm seed}\cos^{2}(\theta_{\mathrm{ini}}/2)\to|\alpha_{+1,\mathrm{ini}}+\alpha_{-1,\mathrm{ini}}^{*}|^{2}$. To average over the initial distribution of $\alpha_{\pm 1,\mathrm{ini}}$, we recall that the Wigner distribution average $\langle\mathcal{O}(\alpha_{m},\alpha_{m}^{*})\rangle_{\rm Wig}$ of an operator $\mathcal{O}$ is equal to the expectation value $\langle\mathcal{O}^{\rm sym}(\hat{a}_{m},\hat{a}_{m}^{\dagger})\rangle$ of the corresponding symmetrically ordered operator $\mathcal{O}^{\rm sym}$ Polkovnikov (2010). We obtain $\displaystyle\langle\alpha_{+1,\mathrm{ini}}\alpha_{-1,\mathrm{ini}}^{\ast}\rangle_{\rm Wig}=\langle\hat{a}_{+1}\hat{a}_{-1}^{\dagger}\rangle=\bar{\alpha}_{+1,\mathrm{ini}}\bar{\alpha}_{-1,\mathrm{ini}}^{\ast}\,,$ (S12) $\displaystyle\langle|\alpha_{m,\mathrm{ini}}|^{2}\rangle_{\rm Wig}=\frac{1}{2}\langle\hat{a}^{\dagger}_{m}\hat{a}_{m}+\hat{a}_{m}\hat{a}_{m}^{\dagger}\rangle=|\bar{\alpha}_{m,\mathrm{ini}}|^{2}+\frac{1}{2}\,.$ (S13) This leads to $\displaystyle\langle N_{\rm p}(t)\rangle\approx\frac{U_{s}}{2q}\sin^{2}(\omega t)\,\left(|\bar{\alpha}_{+1,\mathrm{ini}}+\bar{\alpha}_{-1,\mathrm{ini}}^{\ast}|^{2}+1\right)\,,$ which gives Eq. (7) in the main text. As a final remark, we note that the Bogoliubov method is also well suited to study the regime that we investigated here, and leads to the same result Mias et al. (2008); Uchino et al. (2010); Evrard et al. (2020). ### II.2 Relaxation dynamics We now discuss the regime $q\ll U_{s}/N$, in which we observe a relaxation of the number of pairs $N_{\rm p}$ to a stationary value. In this regime, the quantum fluctuations play an important role even for $N_{\rm seed}\gg 1$. We will thus consider that $N_{\rm seed}\gg 1$ and $N-N_{\rm seed}\gg 1$. For simplicity, we will focus on the situation $\theta_{\mathrm{ini}}=0$, for which the effect of the seed is maximal. The case with no seed has been treated using an exact diagonalization of the Hamiltonian Evrard et al. (2020) or the TWA Mathew and Tiesinga (2017).
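For orientation, such an exact diagonalization is simple to set up. The sketch below is our own illustration (assumed parameters, cf. Law et al. (1998)): it builds the tridiagonal matrix of $\hat{H}$, Eq. (2) of the main text, in the $S_{z}=0$ pair basis $|k\rangle=|N_{+1}{=}k,N_{0}{=}N{-}2k,N_{-1}{=}k\rangle$ and propagates the unseeded state. The quoted matrix elements can be checked by elementary bosonic algebra applied to $\hat{\boldsymbol{S}}^{2}$.

```python
# Exact diagonalization of H = Us/(2N) S^2 + q (N_{+1}+N_{-1}) in the S_z = 0
# pair basis |k>; hbar = 1, so times below are in units of hbar/Us.
import numpy as np

N, Us, q = 124, 1.0, 0.0                  # q = 0: the relaxation regime
k = np.arange(N // 2 + 1)
# <k|H|k> and <k+1|H|k> for spin-1 bosons in a single spatial mode
diag = (Us / N) * (k * (N - 2 * k + 1) + (N - 2 * k) * (k + 1)) + 2 * q * k
off = (Us / N) * (k[:-1] + 1) * np.sqrt((N - 2 * k[:-1]) * (N - 2 * k[:-1] - 1.0))
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
E, V = np.linalg.eigh(H)

psi0 = np.zeros(k.size); psi0[0] = 1.0    # no seed: all N atoms in m = 0
c = V.T @ psi0
for t in (0.0, 2.0, 20.0):
    psi = V @ (np.exp(-1j * E * t) * c)
    print(t, np.sum(k * np.abs(psi) ** 2) / N)   # n_p relaxes towards ~ 1/4
```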
#### Classical solution: In order to simplify the calculation, we completely neglect the quadratic Zeeman shift. In this regime $q\ll U_{s}/N$, the Zeeman term indeed plays no significant role even for the fully quantum model. Introducing the auxiliary variable $x=4n_{\rm p}-1$, the equations of motion and the energy become $\displaystyle\hbar\dot{x}$ $\displaystyle=-U_{s}(1-x^{2})\sin\theta\,,$ (S14) $\displaystyle\hbar\dot{\theta}$ $\displaystyle=2U_{s}x(1+\cos\theta)\,,$ (S15) $\displaystyle\mathcal{E}_{s}$ $\displaystyle=\frac{U_{s}}{4}(1-x^{2})(1+\cos\theta)=\mathrm{cst}\,.$ (S16) We combine the first and last equations to obtain $\displaystyle\dot{x}$ $\displaystyle=-\frac{4\mathcal{E}_{s}}{\hbar}\frac{\sin\theta}{1+\cos\theta}.$ (S17) Differentiating this equation, we eliminate the phase $\theta$ and obtain a simple harmonic equation, $\ddot{x}=-\Omega^{2}x$, with an oscillation frequency $\hbar\Omega=\sqrt{8U_{s}\mathcal{E}_{s}}$. For the initial conditions $n_{\rm p}(0)=n_{\rm seed}$ and $\theta(0)=0$, we have $\hbar\Omega=2U_{s}\sqrt{1-x_{0}^{2}}$ and $x(t)=x_{0}\cos(\Omega t)$ with $x_{0}=4n_{\rm seed}-1$. This corresponds to the results announced in Eqs. (8,9) of the main text. #### Quantum partition noise: The initial state $\displaystyle|\psi_{\mathrm{ini}}\rangle=\frac{1}{\sqrt{N!}}\left[\sum_{m=0,\pm 1}\zeta_{m}\,\hat{a}^{\dagger}_{m}\right]^{N}|{\rm vac}\rangle\,,$ is characterized by fluctuations of the number of $\pm 1$ atoms. We consider again the states with $|\zeta_{+1}|=|\zeta_{-1}|=\sqrt{n_{\rm seed}}$ and $\theta_{\mathrm{ini}}=0$. We introduce the sum $\Sigma=N_{+1}+N_{-1}$, its relative value $s=\Sigma/N$ and the difference $\Delta=N_{+1}-N_{-1}$. The components of $\bm{\zeta}$ are related to the average $\bar{\Sigma}$ of $\Sigma$ by $\displaystyle|\zeta_{\pm 1}|^{2}=\frac{\bar{s}}{2},\hskip 14.22636pt|\zeta_{0}|^{2}=1-\bar{s},\hskip 14.22636pt\bar{s}=\frac{\bar{\Sigma}}{N}.$ (S18) The joint distribution of $\Sigma$ and $\Delta$ in the initial coherent spin state is $\displaystyle\mathcal{P}(\Sigma,\Delta)$ $\displaystyle=\frac{N!}{\left(\frac{\Sigma+\Delta}{2}\right)!\left(\frac{\Sigma-\Delta}{2}\right)!(N-\Sigma)!}\left(\frac{\bar{s}}{2}\right)^{\Sigma}(1-\bar{s})^{N-\Sigma}.$ (S19) We deduce from Eq. (S19) the distribution of $\Sigma$, $\displaystyle\mathcal{P}(\Sigma)$ $\displaystyle=\frac{N!}{\Sigma!(N-\Sigma)!}\bar{s}^{\Sigma}(1-\bar{s})^{N-\Sigma}.$ (S20) with $\Sigma\in[0,N]$. The normalization follows from the binomial formula. For large $N$ and $\Sigma$ away from the extreme values $0,N$, the binomial distribution is well approximated by a continuous Gaussian distribution $\displaystyle\mathcal{P}(\Sigma)$ $\displaystyle\approx\frac{1}{N}\frac{1}{\sqrt{2\pi}\sigma}\mathrm{e}^{-\frac{(s-\bar{s})^{2}}{2\sigma^{2}}}=p(s)ds.$ (S21) with a step size $ds=1/N$ and a standard deviation $\displaystyle\sigma$ $\displaystyle=\sqrt{\frac{\bar{s}(1-\bar{s})}{N}}=\sqrt{\frac{2n_{\rm seed}(1-2n_{\rm seed})}{N}}.$ (S22) One can check the normalization of both distributions, $\displaystyle\sum_{\Sigma=0}^{N}\mathcal{P}(\Sigma)$ $\displaystyle\to\int_{0}^{1}p(s)ds\approx\int_{-\infty}^{+\infty}\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-\frac{u^{2}}{2}}du=1.$ To extend the lower boundary to $-\infty$, we require $\bar{s}/\sigma=\sqrt{N}\times\sqrt{\bar{s}/(1-\bar{s})}\gg 1$, or $N\bar{s}=2N_{\rm seed}\gg 1$.
#### Semi-classical picture of the dynamics: Similarly to what we have done in Sec. II.1, we average the mean-field solution (S3,S4) with $2n_{\rm seed}\to s$ over the probability distribution $p(s)$ in Eq. (S21). This amounts to computing the integral $\displaystyle I=\frac{1}{2}\int_{0}^{1}\,s\,\cos[\Omega(s)t]\,p(s)\,ds.$ (S23) We use the fact that $p(s)$ is sharply peaked around $\bar{s}$, with a width $\sigma\sim 1/\sqrt{N}$ much narrower than the scale of variation of the rest of the integrand $s\cos[\Omega(s)t]$. As a result, we extend the integral boundaries to $\pm\infty$, set $s\approx\bar{s}$ and expand the frequency $\Omega(s)$ to first order, $\displaystyle\Omega(s)\approx\bar{\Omega}+\bar{\Omega}^{\prime}(s-\bar{s})+\mathcal{O}\big{(}(s-\bar{s})^{2}\big{)}\,,$ (S24) where $\bar{\Omega}=\Omega(\bar{s})$ and $\bar{\Omega}^{\prime}=\Omega^{\prime}(\bar{s})=(2U_{s}/\hbar)\times(1-2\bar{s})/\sqrt{\bar{s}(1-\bar{s})}$. With straightforward manipulations, we cast $I$ in the form of the Fourier transform of a Gaussian function, which is readily calculated. We find $\displaystyle I$ $\displaystyle=\frac{1}{2}\bar{s}\cos[\bar{\Omega}t]\,\mathrm{e}^{-\frac{1}{2}(\gamma_{\rm c}t)^{2}},$ (S25) with a damping rate $\displaystyle\gamma_{\rm c}=|\bar{\Omega}^{\prime}\sigma|=\,\frac{2U_{s}}{\sqrt{N}\hbar}\,|1-2\bar{s}|.$ (S26) Using $\bar{s}=2n_{\rm seed}$, this gives Eq. (11) in the main text. #### Classical fluctuations of $\Omega$: In addition to the intrinsic dephasing originating from quantum fluctuations, any technical fluctuations of $\Omega$ will also contribute to the observed relaxation. We consider here the dominant source of classical blurring in our experiment, namely fluctuations of the interaction strength $U_{s}$ mainly due to shot-to-shot atom number fluctuations. We model these fluctuations by considering a fluctuating interaction strength $U_{s}^{\prime}=U_{s}+\delta U_{s}\,\xi$, with $U_{s}$ the average value, $\delta U_{s}$ the standard deviation of the noise, and $\xi$ a centered Gaussian random variable of unit variance. This leads to a fluctuating oscillation frequency $\Omega(\xi)=\bar{\Omega}(1+\xi\cdot\delta U_{s}/U_{s})$. We neglect the fluctuations of $\gamma_{\rm c}$, which is legitimate for $N_{\rm seed}\gg 1$ and hence $\gamma_{\rm c}\ll\bar{\Omega}$. Averaging over the Gaussian probability distribution $p(\xi)$, we find that $\displaystyle I_{2}$ $\displaystyle=\left\langle\,\cos[\Omega(\xi)t]\mathrm{e}^{-\frac{1}{2}(\gamma_{\rm c}t)^{2}}\,\right\rangle_{\xi}=\cos[\bar{\Omega}t]e^{-\frac{1}{2}(\gamma_{\rm t}t)^{2}-\frac{1}{2}(\gamma_{\rm c}t)^{2}},$ (S27) with a classical (technical) damping rate given by $\displaystyle\gamma_{\rm t}$ $\displaystyle=\bar{\Omega}\frac{\delta U_{s}}{U_{s}}.$ (S28) From Eqs. (S27,S28) we obtain Eqs. (12,13) given in the main text.
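The Gaussian collapse (S25) can be cross-checked by a direct Monte-Carlo average over the binomial distribution (S20). The sketch below (our illustration, with $\hbar=U_{s}=1$ and arbitrary $N$ and $n_{\rm seed}$) requires neither the Gaussian approximation nor the linearization (S24):

```python
# Monte-Carlo check of the collapse law (S25): average x(t) = (2s-1) cos(Omega(s) t)
# over the binomial partition noise (S20), with Omega(s) = 4 sqrt(s(1-s)) (Us = 1).
import numpy as np

N, nseed = 124, 0.05
sbar = 2 * nseed
rng = np.random.default_rng(1)
s = rng.binomial(N, sbar, size=40000) / N       # s = Sigma/N for each shot
t = np.linspace(0.0, 8.0, 160)                  # time in units of hbar/Us
Omega = 4.0 * np.sqrt(s * (1 - s))
x_mc = np.mean((2 * s[:, None] - 1) * np.cos(Omega[:, None] * t), axis=0)
# analytic prediction: damped cosine with the rate gamma_c of Eq. (S26)
gc = (2.0 / np.sqrt(N)) * abs(1 - 2 * sbar)
Obar = 4.0 * np.sqrt(sbar * (1 - sbar))
x_an = (2 * sbar - 1) * np.cos(Obar * t) * np.exp(-0.5 * (gc * t) ** 2)
print(np.max(np.abs(x_mc - x_an)))              # approximate agreement, 2 Nseed >> 1
```

The agreement should improve as $2N_{\rm seed}$ grows, in line with the validity condition stated below Eq. (S22).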
## References * Zhang et al. (2005) W. Zhang, D. L. Zhou, M.-S. Chang, M. S. Chapman, and L. You, Phys. Rev. A 72, 013602 (2005). * Kawaguchi and Ueda (2012) Y. Kawaguchi and M. Ueda, Physics Reports 520, 253 (2012). * Polkovnikov (2010) A. Polkovnikov, Annals of Physics 325, 1790 (2010). * Steel et al. (1998) M. J. Steel, M. K. Olsen, L. I. Plimak, P. D. Drummond, S. M. Tan, M. J. Collett, D. F. Walls, and R. Graham, Phys. Rev. A 58, 4824 (1998). * Sinatra et al. (2002) A. Sinatra, C. Lobo, and Y. Castin, Journal of Physics B: Atomic, Molecular and Optical Physics 35, 3599 (2002). * Mathew and Tiesinga (2017) R. Mathew and E. Tiesinga, Phys. Rev. A 96, 013604 (2017). * Wrubel et al. (2018) J. P. Wrubel, A. Schwettmann, D. P. Fahey, Z. Glassman, H. Pechkis, P. Griffin, R. Barnett, E. Tiesinga, and P. Lett, Phys. Rev. A 98, 023620 (2018). * Mias et al. (2008) G. I. Mias, N. R. Cooper, and S. Girvin, Phys. Rev. A 77, 023616 (2008). * Uchino et al. (2010) S. Uchino, M. Kobayashi, and M. Ueda, Phys. Rev. A 81, 063632 (2010). * Evrard et al. (2020) B. Evrard, A. Qu, J. Dalibard, and F. Gerbier, arXiv:2010.13832 (2020).
# Journey to the Bound States (© The Author, under exclusive license to Springer Nature Switzerland AG 2021. P. Hoyer, Journey to the Bound States, SpringerBriefs in Physics. Expanded version of lectures presented at the University of Pavia in January 2020. Slides are available at https://www.mv.helsinki.fi/home/hoyer/Talks.html) Paul Hoyer Department of Physics, POB 64, FIN-00014 University of Helsinki, Finland<EMAIL_ADDRESS> ###### Abstract Guided by the observed properties of hadrons I formulate a perturbative bound state method for QED and QCD. The expansion starts with valence Fock states ($e^{+}e^{-},\ q\bar{q},\ qqq,\ gg$) bound by the instantaneous interaction of temporal gauge ($A^{0}=0$). The method is tested on Positronium atoms at rest and in motion, including hyperfine splitting at ${\cal O}\left(\alpha^{4}\right)$, electromagnetic form factors and deep inelastic scattering. Relativistic binding is studied for QED in $D=1+1$ dimensions, demonstrating the frame independence of the DIS electron distribution and its sea for ${x_{Bj}}\to 0$. In QCD a homogeneous solution of Gauss’ constraint in $D=3+1$ implies ${\cal O}\left(\alpha_{s}^{0}\right)$ confining potentials for $q\bar{q},\ q\bar{q}g,\ qqq$ and $gg$ states, whereas $q\bar{q}\,q\bar{q}$ is unconfined. Meson states lie on linear Regge trajectories and have the required frame dependence. A scalar bound state with vanishing four-momentum causes spontaneous chiral symmetry breaking when mixed with the vacuum. These lecture notes assume knowledge of field theory methods, but not of bound states. Brief reviews of existing bound state methods and Dirac electron states are included. Solutions to the exercises are given in the Appendix. ###### Contents 1. I Motivations and Outline 1. I.1 Motivations 2. I.2 Outline 2. II Features of hadrons 1. II.1 Quarkonia 2. II.2 Regge behavior and duality 3. II.3 DIS: Deep Inelastic Scattering 4. II.4 The QCD coupling at low scales 3. III Brief survey of present QED approaches to atoms 1. III.1 Recall: The Hydrogen atom in introductory Quantum Mechanics 2. III.2 The Schrödinger equation from Feynman diagrams (rest frame) 1. III.2.1 Bound states vs. Feynman diagrams 2. III.2.2 Forming an integral equation 3. III.3 The Bethe-Salpeter equation 4. III.4 Non-relativistic QED 5. III.5 Effective theories for heavy quarks 4. IV Dirac bound states 1. IV.1 Weak vs. strong binding 2. IV.2 The Dirac equation 3. IV.3 Dirac states 4. IV.4 * Dirac wave functions for central $A^{0}$ potentials 5. IV.5 * Coulomb potential $V(r)=-\alpha/r$ 6. IV.6 * Linear potential $V(r)=V^{\prime}r$ 5. V Fock expansion of bound states in temporal ($A^{0}=0$) gauge 1. V.1 Definition of the bound state method 1. V.1.1 Considerations 2. V.1.2 Choice of approach 2. V.2 Quantization in QED 1. V.2.1 Functional integral method 2. V.2.2 Canonical quantization 3. V.2.3 Temporal gauge in QED 3. V.3 Temporal gauge in QCD 1. V.3.1 Canonical quantization 2. V.3.2 Specification of temporal gauge in QCD 6. VI Applications to Positronium atoms 1. VI.1 The $\left|{e^{-}e^{+}}\right\rangle$ Fock states of Para- and Orthopositronium atoms 1. VI.1.1 Definition of the Fock states 2. VI.1.2 Translations 3. VI.1.3 Rotations 4. VI.1.4 Parity $\eta_{P}$ 5. VI.1.5 Charge conjugation $\eta_{C}$ 6. VI.1.6 Wave functions of Para- and Orthopositronium 2. VI.2 The Schrödinger equation for Positronium at ${\boldsymbol{P}}=0$ 3. VI.3 Positronium with momentum ${\boldsymbol{P}}$ 1. VI.3.1 Kinetic and potential energy 2.
VI.3.2 The transverse photon Fock state 3. VI.3.3 The bound state condition 4. VI.4 * Hyperfine splitting of Positronium at ${\boldsymbol{P}}=0$ 1. VI.4.1 The transverse photon Fock state $\left|{e^{-}e^{+}\gamma}\right\rangle$ contribution 2. VI.4.2 Hyperfine splitting from annihilation: $e^{-}e^{+}\to\gamma\to e^{-}e^{+}$ 5. VI.5 * Electromagnetic form factor of Positronium atoms in an arbitrary frame 1. VI.5.1 Parapositronium form factor 2. VI.5.2 Positronium transition form factor 6. VI.6 * Deep inelastic scattering on Parapositronium in a general frame 7. VII QED in $D=1+1$ dimensions 1. VII.1 QED2 bound states in $A^{0}=0$ gauge 1. VII.1.1 Temporal gauge in $D=1+1$ 2. VII.1.2 States and wave functions of QED2 3. VII.1.3 Rest frame and non-relativistic limit 4. VII.1.4 Solution for any $M$ and $P$ 5. VII.1.5 Weak coupling limit 6. VII.1.6 Large separations between $e^{-}$ and $e^{+}$ 7. VII.1.7 Bound state masses and duality 2. VII.2 * Bound state form factors in QED2 1. VII.2.1 Form factor definition and symmetry under parity 2. VII.2.2 Gauge invariance 3. VII.2.3 Lorentz covariance 3. VII.3 * Deep Inelastic Scattering in D = 1+1 1. VII.3.1 The Bj limit of the form factor 2. VII.3.2 Numerical evaluation of the electron distribution 3. VII.3.3 ${x_{Bj}}\to 0$ limit of the electron distribution 8. VIII Applications to QCD bound states 1. VIII.1 The instantaneous potential of various Fock states 1. VIII.1.1 The $q\bar{q}$ potential 2. VIII.1.2 The $qqq$ potential 3. VIII.1.3 The $gg$ potential 4. VIII.1.4 The $q\bar{q}g$ potential 5. VIII.1.5 Limiting values of the $qqq$ and $q\bar{q}g$ potentials 6. VIII.1.6 Single quark or gluon Fock states 7. VIII.1.7 The potential of $q\bar{q}\,q\bar{q}$ Fock states 2. VIII.2 Rest frame wave functions of $q\bar{q}$ bound states 1. VIII.2.1 Bound state equation for the meson wave function $\Phi_{\alpha\beta}({\boldsymbol{x}})$ 2. VIII.2.2 Separation of radial and angular variables 3. VIII.2.3 The $0^{-+}$ trajectory: $\eta_{P}=(-1)^{j+1},\hskip 8.5359pt\eta_{C}=(-1)^{j}$ 4. VIII.2.4 The $0^{--}$ trajectory: $\eta_{P}=(-1)^{j+1},\hskip 8.5359pt\eta_{C}=(-1)^{j+1}$ 5. VIII.2.5 The $0^{++}$ trajectory: $\eta_{P}=(-1)^{j},\hskip 8.5359pt\eta_{C}=(-1)^{j}$ 3. VIII.3 * $q\bar{q}$ bound states in motion 1. VIII.3.1 The bound state equation 2. VIII.3.2 Solution of the $P\neq 0$ bound state equation for $V({\boldsymbol{x}})=0$ 3. VIII.3.3 Boost of the state $\left|{M,P}\right\rangle$ for $V({\boldsymbol{x}})=0$ 4. VIII.3.4 Solution of the $P\neq 0$ bound state equation at $\boldsymbol{x}_{\perp}=0$ 4. VIII.4 Properties of the $q\bar{q}$ bound states 1. VIII.4.1 String breaking and duality 2. VIII.4.2 Properties of the wave function at large separations $r$ 3. VIII.4.3 Discrete mass spectrum 4. VIII.4.4 Parton picture for $M\gg V(r)$ 5. VIII.5 * Glueballs in the rest frame 6. VIII.6 Spontaneous breaking of chiral symmetry 1. VIII.6.1 $M=0$ states with vanishing quark mass $m=0$ 2. VIII.6.2 Finite quark mass $m_{u}=m_{d}=m\neq 0$ 9. IX Bound state epilogue 10. A Solutions to exercises 1. A.1 Order of box diagram 2. A.2 Contribution of the diagrams in Fig. 11(b,c) 3. A.3 Derivation of (47) 4. A.4 Derivation of (64) 5. A.5 The expressions (65) for vacuum state 6. A.6 Derivation of the identities () 7. A.7 Derivation of (127) 8. A.8 Gauge transformations generated by Gauss operator 9. A.9 Derive (172). 10. A.10 Derivation of (181) 11. A.11 Verify (186). 12. A.12 Derive the expression for $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ in (). 13. 
A.13 Derive the expression (309). 14. A.14 Derive the expression (). 15. A.15 Do the $x$-integral in () numerically for the parameters in Fig. 16, and compare. 16. A.16 Derive the $q\bar{q}g$ potential () 17. A.17 Derive the expression for $\mathcal{H}_{V}^{(0)}\left|{8\otimes 8}\right\rangle$ in () 18. A.18 Verify that the expression () for $\Phi_{-+}({\boldsymbol{x}})$ satisfies the bound state equation (405) given the radial equation (417). 19. A.19 Derive the coupled equations () from the bound state equation (456). 20. A.20 Derive the frame dependence (459) of $\Phi^{(P)}_{V=0}({\boldsymbol{x}})$ using the boost generator $\mathcal{K}_{0}^{z}$. 21. A.21 Show that $\Phi^{(P)}(\tau)$ given by (471) satisfies the BSE (457) at $\boldsymbol{x}_{\perp}=0$. 22. A.22 Prove the orthogonality relation (473) for states with wave functions satisfying the BSE (456). 23. A.23 Verify the expression (476) for the global norm of $\Phi_{-+}({\boldsymbol{x}})$ in terms of $F_{1}(r)$. 24. A.24 Verify the expressions () for radial functions $H_{1}(r),H_{2}(r)$ and $H_{3}(r)$. ## I Motivations and Outline ### I.1 Motivations Hadrons differ qualitatively from atoms due to their relativistic binding, confinement and spontaneous chiral symmetry breaking. Deep inelastic scattering shows that hadrons have significant sea quark and gluon constituents. Nevertheless, the hadron spectrum has atomic features, with quantum numbers determined by the valence quarks only. Nuclei are multiquark states analogous to molecules, being comparatively loosely bound states of nucleons. The more recently discovered heavy multi-quark ($X,Y,Z$) states tend to be associated with hadron thresholds, as would be expected for weakly bound states of hadrons. These properties should emerge in a correct description of hadrons as QCD bound states. Many aspects have indeed been confirmed by numerical studies, see the reviews of lattice QCD by the Particle Data Group Zyla _et al._ (2020) and FLAG Aoki _et al._ (2020). Valuable insights have been obtained also through studies of models, especially the quark model Zyla _et al._ (2020). The QCD2019 Workshop Summary Brodsky _et al._ (2020) gives an overview of the experimental and theoretical status of hadron physics. We still lack even a qualitative understanding of main aspects, e.g., why the degrees of freedom of the non-valence constituents are not manifest in the hadron spectrum. The contrast to the dense excitation spectrum of nuclei due to rotational and vibrational modes is striking. The observed, puzzling similarities between hadrons and atoms allows us to benefit from the understanding of QED bound states, which gradually emerged since the beginnings of quantum field theory. Unfortunately, the fields of QED and QCD bound states have grown apart Blum (2017). Modern textbooks on the applications of QFT to particle physics hardly mention bound states. There are seemingly solid reasons to believe that QED methods are irrelevant for hadrons. My lectures are motivated by a concern that this conclusion might be premature. Let me briefly indicate why I do not find some of the common arguments completely conclusive. ##### Hadrons are non-perturbative, whereas QED is perturbative. In their review of QED bound state calculations Bodwin et al. Bodwin _et al._ (1985) remark that “precision bound-state calculations are essentially nonperturbative”. Perturbation theory for atoms needs to expand around an approximate bound state, whose wave function is necessarily non-polynomial in $\alpha$. 
This means that there is no unique perturbative expansion for bound states, since a polynomial in $\alpha$ may be shifted between the initial wave function and the higher order terms. Measurable quantities, such as the binding energy, have nevertheless unique expansions, being independent of the initial wave function. For example, the hyperfine splitting between Orthopositronium ($J^{PC}=1^{--}$) and Parapositronium ($0^{-+}$) is impressively known up to ${\cal O}\left(\alpha^{7}\right)$ corrections, and is in agreement with accurate data Murota (1988); Penin (2014); Adkins (2015, 2018) $\displaystyle\frac{\Delta E_{b}}{m_{e}}=$ $\displaystyle\frac{7}{12}\alpha^{4}-\Big{(}\frac{8}{9}+\frac{\ln 2}{2}\Big{)}\frac{\alpha^{5}}{\pi}-\frac{5}{24}\alpha^{6}\ln\alpha+\bigg{[}\frac{1367}{648}-\frac{5197}{3456}\pi^{2}+\Big{(}\frac{221}{144}\pi^{2}+\frac{1}{2}\Big{)}\ln 2-\frac{53}{32}\zeta(3)\bigg{]}\frac{\alpha^{6}}{\pi^{2}}$ $\displaystyle-\frac{7\alpha^{7}}{8\pi}\ln^{2}\alpha+\Big{(}\frac{17}{3}\ln 2-\frac{217}{90}\Big{)}\frac{\alpha^{7}}{\pi}\ln\alpha+{\cal O}\left(\alpha^{7}\right)$ (1) The freedom of choice of the initial wave function was used in the evaluation of the higher order terms, by expanding around states given by the Schrödinger equation. Analogously, hadrons may have a perturbative expansion based on an initial state that incorporates the relevant features, including confinement. ##### Confinement requires a scale $\Lambda_{QCD}$, which can arise only from renormalization. The form of the classical atomic potential, $V(r)=-\alpha/r$, follows from $\alpha$ being dimensionless ($\hbar=c=1$). A confining potential requires a parameter with dimension (the confinement scale), but the QCD action has no such parameter. The properties of heavy quarkonia are well described by the Schrödinger equation with the “Cornell potential” Eichten _et al._ (1980, 2008), $\displaystyle V(r)=V^{\prime}r-\frac{4}{3}\frac{{\alpha_{s}}}{r}\ \ \ \text{with}\ \ V^{\prime}\simeq 0.18\ \text{GeV}^{2},\ \ {\alpha_{s}}\simeq 0.39$ (2) This suggests that $V^{\prime}r$, similarly to the $1/r$ potential, should be determined by Gauss’ law. In section V.3.2 I consider a homogeneous solution of Gauss’ law that has a spatially constant color field energy density. It gives the classical $q\bar{q}$ potential (2), with $V^{\prime}$ determined by the energy density. This and the corresponding instantaneous potentials for $qqq,\ q\bar{q}g$ and other Fock states are derived in section VIII.1. ##### The QCD coupling ${\alpha_{s}}(Q)$ is large at low scales $Q$, excluding a perturbative expansion. Standard perturbative determinations of ${\alpha_{s}}(Q)$ are restricted to $Q\gtrsim m_{\tau}=1.78$ GeV, with $\alpha_{s}^{\overline{MS}}(m_{\tau})\simeq 0.33$ Zyla _et al._ (2020). Since ${\alpha_{s}}$ is not directly measurable, its value at low $Q$ depends on the theoretical framework. A dispersive approach indicates that ${\alpha_{s}}(0)\simeq 0.5$ (Gehrmann _et al._ (2013), section II.4). Due to confinement no low-momentum (IR) singularities are expected in loop integrals. Thus ${\alpha_{s}}$ may freeze at low scales, allowing a perturbative expansion. Strong binding is then due to the confining potential $V^{\prime}r$ in (2), not to the Coulomb potential $\propto{\alpha_{s}}$.
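As a concrete illustration of how the potential (2) is used in practice, a minimal numerical solution of the corresponding radial Schrödinger equation can be sketched as follows. This is my own illustration: the charm quark mass $m_{c}=1.5$ GeV is an assumed input (quoted in section II.1), and the grid parameters are arbitrary choices.

```python
# Radial Schrodinger equation -u''/(2 mu) + V(r) u = E u with the Cornell
# potential (2) for S-wave charmonium; hbar = c = 1, energies in GeV.
import numpy as np
from scipy.linalg import eigh_tridiagonal

mc, Vp, als = 1.5, 0.18, 0.39           # assumed m_c (GeV), V' (GeV^2), alpha_s
mu = mc / 2                              # reduced mass of the c-cbar system
n = 3000
r = np.linspace(12.0 / n, 12.0, n)       # radial grid in GeV^-1, u(0) = u(R) = 0
h = r[1] - r[0]
V = Vp * r - (4.0 / 3.0) * als / r       # Cornell potential, l = 0
# second-order finite differences give a symmetric tridiagonal Hamiltonian
E = eigh_tridiagonal(1.0 / (mu * h ** 2) + V,
                     np.full(n - 1, -1.0 / (2 * mu * h ** 2)),
                     eigvals_only=True, select='i', select_range=(0, 1))
print(E[1] - E[0])                       # 2S - 1S splitting in GeV
```

With these parameters the printed $2S$–$1S$ splitting should come out in the vicinity of the observed $\simeq 0.6$ GeV separation between the $\psi(2S)$ and the $J/\psi$; the absolute level positions depend strongly on the assumed $m_{c}$.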
The above observations, together with other theoretical and phenomenological arguments ’t Hooft (2003); Dokshitzer (2003); Dokshitzer and Kharzeev (2004); Dokshitzer (2010) prompt me to consider the possibility of a perturbative expansion for QCD bound states. Perturbation theory is our main analytic tool in the Standard Model, and merits careful consideration. Bound states are interesting in their own right, providing insights into the structure of QFT which are complementary to those of scattering. QED methods for atoms have been developed over a long time, and may now be close to optimal. A perturbative approach to hadrons raises issues which so far have received little attention. ### I.2 Outline To help the reader navigate through these fairly extensive lectures I provide here brief characterizations of the various chapters and sections. Those marked with a star * may be skipped in a first reading. Students are welcome to try the exercises, whose solutions are given in Appendix A. Chapter II summarizes features of hadron dynamics which motivate the bound state approach of these lectures. Heavy quarkonia have atomic characteristics, are nearly non-relativistic, yet display confinement (II.1). Regge behavior and duality reveal a close connection between high energy scattering and bound states (II.2). The $Q^{2}$-dependence of Deep Inelastic Scattering distinguishes partons that are intrinsic to the hadron from those that are created by the hard scattering (II.3). Data is consistent with a QCD coupling ${\alpha_{s}}(Q^{2})$ which “freezes” at low scales (II.4). Chapter III is a brief survey of established QED bound state methods. The reduction of a non-relativistic two-particle bound state to one particle in a central potential is recalled (III.1). The Schrödinger equation is derived by summing Feynman ladder diagrams (III.2). The derivation of the Bethe-Salpeter bound state equation is sketched, and its non-uniqueness noted (III.3). The non-relativistic effective field theory method NRQED is introduced (III.4). The corresponding heavy quark effective theories HQET, NRQCD and pNRQCD are noted (III.5). Chapter IV covers aspects of the Dirac bound states (IV.1). Time-ordered Feynman $Z$-diagrams give rise to virtual pairs (IV.2). A Bogoliubov transformation of the free creation and annihilation operators allows one to define the Dirac state as a single fermion state (IV.3). The Dirac wave functions for central $A^{0}(r)$ potentials are given in terms of radial and angular functions (IV.4), and the explicit example of the Coulomb potential is worked out (IV.5). The case of a linear potential, for which the spectrum is continuous, is considered in section IV.6. Chapter V motivates and defines the approach to bound states used here: Quantization at equal time in temporal $(A^{0}=0)$ gauge. A perturbative expansion around valence Fock states, bound by the instantaneous gauge potential (V.1). Comparison of quantization procedures using covariant, Coulomb and temporal gauges in QED (V.2). The vanishing of the color octet electric field ${\boldsymbol{E}}_{L}^{a}$ for color singlet states allows one to include a homogeneous solution of Gauss’ constraint for each color component of the state. The boundary condition introduces a universal constant $\Lambda$ (V.3). Chapter VI applies the bound state method to Positronium atoms. The states and wave functions are defined (VI.1). The Schrödinger equation is derived for atoms at rest (VI.2) and in a general frame (VI.3).
The ${\cal O}\left(\alpha^{4}\right)$ hyperfine splitting between Ortho- and Parapositronia is evaluated in section VI.4. The Poincaré covariance of the Positronium form factor is demonstrated (VI.5), as well as that of deep inelastic scattering on Parapositronium (VI.6).

Chapter VII considers $e^{-}e^{+}$ bound states in $D=1+1$ dimensions (QED2). The bound state equation is solved analytically in a general frame (VII.1). The gauge and Lorentz invariance of electromagnetic form factors is verified (VII.2). The electron distribution given by deep inelastic scattering is numerically evaluated in the rest frame of the target and shown to agree with an earlier result in the Breit frame. DIS has a sea contribution for ${x_{Bj}}\to 0$ (VII.3).

Chapter VIII applies the bound state method to QCD hadrons. The instantaneous ${\cal O}\left(\alpha_{s}^{0}\right)$ potential due to the homogeneous solution of Gauss’ constraint is evaluated for several Fock states ($q\bar{q},\ qqq,\ gg,\ q\bar{q}g$ and $q\bar{q}\,q\bar{q}$) (VIII.1). The wave functions of $q\bar{q}$ states in the rest frame are determined for all $J^{PC}$ quantum numbers (VIII.2). The bound state equation for states with general momentum ${\boldsymbol{P}}\neq 0$ is formulated (VIII.3). The states lie on nearly linear Regge trajectories with parallel daughter trajectories. Highly excited states have a non-vanishing overlap with multi-hadron states, with features that are consistent with the parton model, string breaking and duality (VIII.4). The glueball ($gg$) spectrum has features similar to that of $q\bar{q}$ mesons (VIII.5). There is a massless $0^{++}$ $q\bar{q}$ state which has vanishing four-momentum in all frames. It may mix with the vacuum without violating Poincaré invariance, giving rise to a spontaneous breaking of chiral symmetry (VIII.6).

Chapter IX is a recapitulation and discussion of the principles followed in these lectures. Experienced readers may profit from reading this chapter before the more technical parts.

## II Features of hadrons

The approach to QCD bound states presented here is guided by experimental information and its interpretation in models. Hadrons have properties that could not have been anticipated by our experience with QED. The quark and gluon constituents are strongly bound into color singlets. Colored states apparently have infinite excitation energies. An abundance of sea quarks and gluons in the nucleon has been revealed by deep inelastic scattering. An approximation scheme for QCD bound states should, even at lowest order, be compatible with the general features of hadron dynamics, including confinement, linear Regge trajectories and duality. Crucially, Nature has provided us with heavy (charm and bottom) quarks whose bound states (quarkonia) are approximately non-relativistic. Quarkonia reveal features of confinement without the added complication of relativistic binding. In this chapter I briefly review some central features of hadrons and the descriptive models they have inspired.

### II.1 Quarkonia

I refer to Eichten _et al._ (2008) for a review of quarkonium phenomenology. The charm quark mass $m_{c}\sim 1.5$ GeV is larger than the confinement scale indicated by the nucleon radius, $1\ \rm{fm}^{-1}\simeq 200$ MeV. Charmonium ($c\bar{c}$) bound states are nearly non-relativistic, with mean squared constituent velocities $\langle{v^{2}}\rangle\simeq 0.24$. As seen in Fig. 1
the spectrum is qualitatively similar to that of Positronium ($e^{-}e^{+}$) atoms, although with mass splittings differing in scale by up to $10^{11}$. This motivated studies of charmonia based on the Schrödinger equation. The short distance potential was expected to be given by single gluon exchange, $V_{1}(r)=-{\textstyle\frac{4}{3}}{\alpha_{s}}/r$. The data constrained the confining part of the potential (in the relevant range of $r$) to be close to linear, $V_{0}(r)\simeq V^{\prime}r$. This led to the Cornell potential $V(r)=V_{0}(r)+V_{1}(r)$ (2). The phenomenology based on the Cornell potential turned out to be successful. Not only the mass splittings but also the many transitions (electromagnetic via photons as well as strong via gluons) are fairly described. The early hope that “the $J/\psi$ is the Hydrogen atom of QCD” has to a large extent been fulfilled.

Figure 1: Comparison of Positronium and Charmonium $n^{\,2S+1}L_{J}$ states and transitions. Notice the ${\cal O}\left(10^{11}\right)$ difference in the hyperfine mass splitting $M(1^{\,3}S_{1})-M(1^{\,1}S_{0})$.

The charmonium phenomenology faced a non-trivial test when applied to bottomonium ($b\bar{b}$) states (Fig. 2(a)). Due to the larger bottom quark mass $m_{b}\simeq 4.9$ GeV the non-relativistic approximation is better justified, with mean squared velocities $\langle{v^{2}}\rangle\simeq 0.08$. Since the QCD interactions are flavor blind the same potential (2) (probed at lower $r$) should describe the bottomonium spectrum and transitions. This was indeed found to be the case. The linear part of the potential is essential, contributing 50% for charmonia and 35% for bottomonia Godfrey and Isgur (1985). Moreover, the phenomenological potential (2) closely agrees (Fig. 2(b)) with that calculated between static (infinitely heavy) quarks using lattice QCD in the quenched approximation Bali (2001). In a calculation with dynamical quarks the creation of a light quark pair (“string breaking”) is expected to terminate the linear rise of the potential at large $r$. See Bali _et al._ (2005, 2006) for a lattice QCD study of string breaking.

Figure 2: (a) The bottomonium spectrum with some transitions indicated. From Rosner (2011). (b) The static potential between heavy quarks calculated using quenched lattice QCD ($r_{0}\simeq 0.5$ fm) compared to the phenomenological potential (2). From Bali (2001).

It is reasonable to assume that the description of Positronia in QED and Quarkonia in QCD, based on the Schrödinger equation, should follow from analogous approximations of the underlying gauge theory. Yet this raises the question of how the confining potential can appear in QCD. The Coulomb $-\alpha/r$ potential of QED is a solution of Gauss’ law for $A^{0}$ without loop corrections. The same potential is given by the classical Maxwell equations, and its dependence $\propto 1/r$ is mandated by dimensional analysis. The linear potential in the quarkonium potential (2) has a parameter $V^{\prime}$ with dimension ${{\mathrm{GeV}}}^{2}$, which does not appear in the QCD action. The QCD scale $\Lambda_{QCD}$ is thought to arise via “dimensional transmutation” Coleman and Weinberg (1973), related to the renormalization of loop integrals. The scale is not expected at the classical (no loop) level. It should be possible to settle this issue by scrutinizing the derivation of the Schrödinger equation from the QED action, and considering its applicability for QCD. This is a main motivation of the present study.
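To see how well the Schrödinger equation with the potential (2) reproduces the quarkonium level spacings, one can discretize the S-wave radial equation. The following sketch is illustrative only: it assumes $m_{c}=1.5$ GeV (as quoted above), takes the parameters from (2), and treats the $-1/r$ singularity crudely on a uniform grid:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Radial S-wave Schrodinger equation with the Cornell potential (2):
#   -u''/(2 mu) + V(r) u = E u,  u(0) = u(rmax) = 0,  V(r) = V'r - (4/3) alpha_s/r
# Units: GeV, with r in GeV^-1. Assumed quark mass m_c = 1.5 GeV (section II.1).
Vp, alpha_s, m_c = 0.18, 0.39, 1.5
mu = m_c / 2                        # reduced mass of the c cbar pair

N, rmax = 4000, 30.0                # grid extends to 30 GeV^-1 ~ 6 fm
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]
V = Vp * r - (4.0 / 3.0) * alpha_s / r

diag = 1.0 / (mu * h**2) + V        # finite-difference form of -u''/(2 mu) + V
off = -np.ones(N - 1) / (2 * mu * h**2)
E, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 2))

for n, En in enumerate(E, start=1):
    print(f"{n}S: E = {En:+.3f} GeV,  mass ~ {2 * m_c + En:.2f} GeV")
```

With these inputs the 2S-1S spacing should come out near 0.6 GeV, roughly the observed $\psi(2S)-J/\psi$ splitting, while the absolute masses depend strongly on the assumed $m_{c}$.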
It is not quite as simple as it sounds – bound state perturbation theory is viewed as something of an “art” even in QED Itzykson and Zuber (1980); Bodwin _et al._ (1985). A confinement scale in the solution of Gauss’ law can (at the classical level) arise only due to a boundary condition. I shall argue that this possibility may exist for color singlet states in QCD, and study its consequences. Including a homogeneous solution of Gauss’ law implies a departure from standard methods. Feynman diagrams are based on free propagators and vanishing gauge fields at spatial infinity. Dyson-Schwinger equations are derived without boundary contributions to the functional integral of a total derivative Itzykson and Zuber (1980).

The notion of a non-vanishing vacuum gluon field has been around since the beginnings of QCD. The MIT Bag Model Chodos _et al._ (1974) describes hadrons as free quarks in a bubble of perturbative vacuum, confined by a QCD vacuum pressure $B^{1/4}\simeq 200$ MeV (as illustrated in Fig. 3). The present approach, described below in section V.3, agrees in spirit with the Bag Model but differs in its realization. There is no perturbative vacuum bubble, instead the quarks interact with the vacuum gluon field in the whole volume of the bound state. The universal energy density arises from a boundary condition on Gauss’ law which in temporal gauge concerns only the longitudinal gluon field.

Figure 3: Sketch of the MIT Bag Model Chodos _et al._ (1974). The kinetic pressure of the quarks balances the pressure $B$ of the color field in the QCD vacuum.

### II.2 Regge behavior and duality

The main features of hadron scattering amplitudes were uncovered already in the 1960-70’s, see Eden (1971); Phillips and Roy (1974); Melnitchouk _et al._ (2005); Kopeliovich and Rezaeian (2009) for reviews of Regge behavior and duality. Hadron-hadron scattering $a+b\to c+d$ is described by two variables, often taken to be the Lorentz invariants $s=E_{CM}^{2}=(p_{a}+p_{b})^{2}\geq(m_{a}+m_{b})^{2}$ and $t=(p_{a}-p_{c})^{2}\lesssim 0$. With increasing $s$ the scattering amplitude $A(s,t)$ tends to peak in the forward direction, $t\simeq 0$. This is described by “Regge exchange”, $\displaystyle A(s\to\infty,\ t\lesssim 0\ {\rm fixed})\simeq\beta(t)s^{\alpha(t)}\hskip 56.9055pt\alpha(t)=\alpha_{0}+\alpha^{\prime}t$ (3) The exchanged “Reggeon” may be viewed as an off-shell ($t\leq 0$) hadron. Data shows that Regge trajectories are approximately linear, with a universal slope $\alpha^{\prime}\simeq 0.9\ {{\mathrm{GeV}}}^{-2}$. Regge exchange is illustrated in Fig. 4(a) for $\pi^{+}\pi^{-}\to\pi^{+}\pi^{-}$, to which the $\rho$ trajectory $\alpha_{\rho}(t)\simeq.5+.9\,t/{{\mathrm{GeV}}}^{2}$ contributes.

In a Chew-Frautschi plot the spin $J$ of hadrons is plotted versus their squared masses $M^{2}$. Remarkably, the hadrons lie on the linear Regge trajectories determined by scattering data for $t\leq 0$, i.e., $\alpha(M^{2})=J$. This is shown for the $\rho$ trajectory states in Fig. 4(b). Other hadrons with light ($u,d,s$) valence quarks such as nucleons and hyperons similarly lie on linear Regge trajectories. The reason for this is not understood, but it has inspired string-like models of hadrons, with the valence (di)quarks connected by a color flux tube Greensite; Selem and Wilczek.

Figure 4: (a) Scattering amplitude for $\pi^{+}\pi^{-}\to\pi^{+}\pi^{-}$ with $\rho$ Regge exchange at high energies. (b) Chew-Frautschi plot of hadron spins $J$ vs. their $M^{2}$, and the Regge trajectory $\alpha_{\rho}(t)$.
Plot from Desgrolard _et al._ (2001). Duality is a pervasive feature of hadron dynamics. In hadron scattering duality implies that $s$-channel resonances build (the imaginary part of) $t$-channel Regge exchange. This is illustrated by the flow of valence quarks in the dual diagrams Harari (1969); Rosner (1969); Zweig (2015) of Fig. 5(a). These diagrams may be “stretched” to emphasize either the $s$-channel resonances or the equivalent $t$-channel exchanges. Duality requires that the high energy Regge exchange amplitude (3) averages the resonance contributions when extrapolated to low energy, as shown for the $t=0$ $\pi N$ amplitude in Fig. 5(b). Figure 5: (a) Diagrams illustrating the duality between $s$-channel resonances and $t$-channel Regge exchange. (b) Data on the forward $\pi N\to\pi N$ amplitude compared to $\rho$ Regge exchange extrapolated to low energy. Plot from Melnitchouk _et al._ (2005) and W. Melnitchouk, private communication. Dual models Schwarz (1973); Veneziano (1974) provide a mathematical illustration of duality. The amplitude for $\pi^{+}\pi^{-}\to\pi^{+}\pi^{-}$ is Lovelace (1968); Shapiro (1969), $\displaystyle A(s,t)=\frac{\Gamma[1-\alpha(s)]\Gamma[1-\alpha(t)]}{\Gamma[1-\alpha(s)-\alpha(t)]}$ (4) Here $\Gamma(x)$ is the Euler Gamma function and the $\rho$ trajectory $\alpha(s)={\textstyle\frac{1}{2}}+s$ (the scale is set by $\alpha^{\prime}=1$). The asymptotic behavior of the $\Gamma$-function for large argument ensures the Regge behavior (3). Taking first $s\to-\infty$ and then $s\to s\,e^{-i\pi}$ to reach the positive real $s$-axis from above gives $\displaystyle\lim_{s\to\infty+i\varepsilon}A(s,t)=\frac{\pi}{\Gamma[\alpha(t)]}\,\frac{e^{-i\pi\alpha(t)}}{\sin[\pi\alpha(t)]}\,s^{\alpha(t)}$ (5) The poles of $\Gamma[1-\alpha(s)]$ at $\alpha(s)=n,\ n=1,2,\ldots$ represent (zero-width) $s$-channel resonances, contributing $\delta$-functions to the imaginary part of the amplitude, $\displaystyle\lim_{\alpha(s)\to n+i\varepsilon}{\rm Im}\,A(s,t)=\lim_{\alpha(s)\to n+i\varepsilon}{\rm Im}\Big{[}\frac{1}{\alpha(s)-n}\Big{]}\,\frac{\Gamma[\alpha(t)+n]}{\Gamma(n)\Gamma[\alpha(t)]}\equiv-\pi\delta\big{[}\alpha(s)-n\big{]}R_{n}(t)$ (6) In Fig. 6(a) the imaginary part of the Regge amplitude (5) is seen to agree with the resonance contributions (6) at $t=0$. The $\delta$-functions are smeared over $n-{\textstyle\frac{1}{2}}<\alpha(s)<n+{\textstyle\frac{1}{2}}$. This demonstrates semilocal duality. The residue $R_{n}(t)$ of the pole at $\alpha(s)=n$ is an $n$th order polynomial in $t$. Expanding the residue into a sum of Legendre polynomials $P_{J}(\cos\theta)$, where $\theta$ is the CM scattering angle, shows that each pole is a superposition of resonances with spins $J=0,\ldots n$. Their coherent sum builds the $t$-dependence of the Regge exchange. This is demonstrated in Fig. 6(b) for the residue at $\alpha(s)=5$. The Regge and resonance contributions are practically indistinguishable in the forward peak, $\cos\theta\gtrsim 0.8$. Figure 6: (a) Comparison of the smeared resonance contributions (6) with the imaginary part of the Regge behavior (5) at $t=0$ (a common factor of $-\pi$ is omitted). (b) Comparison of the $t$-dependence of the residue $R_{5}(t)$ in (6) with that of the Regge behavior. Here $t=-{\textstyle\frac{1}{2}}s\,(1-\cos\theta)$ (the pion mass is neglected, $M_{\pi}=0$). Both figures are from my lectures at the 2015 International Summer Schools on Reaction Theory, http://cgl.soic.indiana.edu/jpac/schools.html . 
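The Regge limit (5) of the amplitude (4) is easy to verify numerically, approaching the real axis from above to avoid the resonance poles. A small check, using scipy's complex Gamma function and the trajectory $\alpha(s)={\textstyle\frac{1}{2}}+s$ as above:

```python
import numpy as np
from scipy.special import gamma

# Veneziano amplitude (4) with the rho trajectory alpha(s) = 1/2 + s
# (alpha' = 1), compared to its Regge asymptotics (5).
def alpha(x):
    return 0.5 + x

def A(s, t):
    return gamma(1 - alpha(s)) * gamma(1 - alpha(t)) / gamma(1 - alpha(s) - alpha(t))

t = -0.3
for s_r in (20, 50, 100):
    s = s_r + 0.5j                 # stay above the poles on the real s-axis
    regge = (np.pi / gamma(alpha(t))
             * np.exp(-1j * np.pi * alpha(t)) / np.sin(np.pi * alpha(t))
             * s ** alpha(t))
    print(f"s = {s_r:5.0f}: |A/Regge| = {abs(A(s, t) / regge):.4f}")
```

The ratio tends to 1 as $s$ grows, confirming (5); the residual deviation reflects the ${\cal O}\left(1/s\right)$ difference between $\alpha(s)$ and $s$.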
All resonance contributions to the elastic $\pi^{+}\pi^{-}\to\pi^{+}\pi^{-}$ amplitude must be positive at $\cos\theta=1$, since they are proportional to the square of their coupling to $\pi^{+}\pi^{-}$. It so happens that (with $M_{\pi}=0$) at least the first 230 coefficients of the Legendre polynomials for $J\leq 20$ are positive. I do not know of a general proof, but see Shapiro (1969) for a discussion.

Soon after the first dual model with four external hadrons was discovered Veneziano (1968), corresponding $N$-point amplitudes were found. It was realized that these amplitudes describe string-like states Nielsen (2009), and that they could be relevant in a totally different context, including gravity Scherk and Schwarz (1974). The further developments of string theory were not connected to hadron physics.

### II.3 DIS: Deep Inelastic Scattering

Deep Inelastic Scattering of leptons $\ell$ on nucleons $N$ (DIS, $\ell N\to\ell^{\prime}X$) Abramowicz and Caldwell (1999); Cooper-Sarkar (2012) probes the quark and gluon structure of the target. At large momentum transfers $-(\ell-\ell^{\prime})^{2}=Q^{2}\gg M_{N}^{2}$ the exchanged virtual photon has a resolution of $\sim 1/Q$ in the direction transverse to the beam momentum $\ell$. This ensures that the lepton scatters (coherently) on a single target constituent, up to “higher twist” corrections of ${\cal O}\left(1/Q^{2}\right)$. At ${\cal O}\left(\alpha_{s}^{0}\right)$ the “inclusive” cross section, summed over all states $X$, determines the fraction $x$ of the nucleon momentum carried by the struck quark (in a frame where the nucleon momentum is large). This should be independent of the probe and hence of $Q^{2}$, which is referred to as “Bjorken scaling” and is approximately satisfied by the data.

Figure 7: Quark and gluon distributions determined from HERA data Cooper-Sarkar (2012) at (a) $Q^{2}=1.9\ {{\mathrm{GeV}}}^{2}$ and (b) at $Q^{2}=10\ {{\mathrm{GeV}}}^{2}$.

DIS has ${\cal O}\left({\alpha_{s}}\right)$ contributions due to gluons of momentum $k$ emitted by the quarks. Ever harder emissions with $|k^{2}|\lesssim Q^{2}$ are resolved with increasing $Q^{2}$. This gives rise to calculable “scaling violations”, which have a logarithmic dependence on $Q^{2}$ and were found to agree with data. This (together with numerous other predictions for hard scattering processes) has established QCD as the theory of the strong interactions. It also implies that DIS provides a reliable measurement of quark and gluon parton distributions in the nucleon, $q(x,Q^{2})$ and $g(x,Q^{2})$. The data agrees with the leading twist $Q^{2}$-dependence down to remarkably low values of $Q^{2}\simeq 2\ {{\mathrm{GeV}}}^{2}$.

The large gluon contribution, and its steep increase for $x\to 0$, is a striking feature of DIS at high $Q^{2}$, as shown by Fig. 7(b) for $Q^{2}=10\ {{\mathrm{GeV}}}^{2}$. Even when multiplied by $x$ the gluon distribution $xg(x)$ dominates at low $x$, and is many times larger than the valence quark distributions $xu_{V}(x)$ and $xd_{V}(x)$. Sea quarks can arise from gluon splitting, $g\to q\bar{q}$. Hence $xS(x)$ is expected to follow the trend of $xg(x)$, as is confirmed in Fig. 7(b). The valence quark distributions hardly change as $Q^{2}$ decreases to $1.9\ {{\mathrm{GeV}}}^{2}$ (Fig. 7(a)). Photons of lower virtuality resolve fewer gluons, so the gluon distribution decreases quickly as $Q^{2}$ is lowered. The sea quark distribution on the other hand evolves more slowly and maintains its rise at low $x$ down to $Q^{2}=1.9\ {{\mathrm{GeV}}}^{2}$.
The trend of the parton distributions with decreasing $Q^{2}$ indicates which hadron constituents are intrinsic to the bound state, and which may be associated with the hard scattering vertex. DIS suggests that most (low-$x$) gluons are created by the interaction with the virtual photon. Hadrons may thus have no valence gluons, which is consistent with their observed quantum numbers. However, sea quarks seem to be present even at the hadronic scale Cooper-Sarkar (2009).

Figure 8: Lower panel: A global fit of parton distributions at leading twist (LT) is shown to agree with DIS data for $F_{2}^{p}(x,Q^{2}=25\ {{\mathrm{GeV}}}^{2})$. Upper panel: The same fit evaluated at $Q^{2}=1\ {{\mathrm{GeV}}}^{2}$ (dot-dashed curve) is compared to $ep\to eX$ data. The solid line includes kinematic target mass corrections (TMC). Figure from Melnitchouk _et al._ (2005); Melnitchouk (2011).

DIS experiments uncovered a surprising new form of duality, first noted by Bloom and Gilman in 1970 Bloom and Gilman (1970). The relation between $Q^{2}$, the momentum fraction $x$ and the mass squared of the inclusive system, $W^{2}=M_{X}^{2}$, is $\displaystyle x=1-\frac{W^{2}-M_{N}^{2}}{Q^{2}+W^{2}-M_{N}^{2}}$ (7) $W$ decreases with decreasing $Q$ when $x$ is fixed, reaching $W\sim M_{N^{*}}$ at low $Q$. The lower panel of Fig. 8 demonstrates that a global fit to DIS data agrees with measurements of the $F_{2}^{p}$ structure function at $Q^{2}=25\ {{\mathrm{GeV}}}^{2}$. In the upper panel the same fit, evolved to $Q^{2}=1\ {{\mathrm{GeV}}}^{2}$, is compared to data of $F_{2}^{p}$ at this lower value of $Q^{2}$. The inclusive system $X$ is now in the resonance region, as seen from the contributions of the $\Delta(1232)$ at $x\simeq 0.62$ and the $S_{11}(1535)$ (between the vertical dashed lines). The fit determined from data at high $Q^{2}$ averages the resonance contributions at low $Q^{2}$. This “Bloom-Gilman duality” implies an unexpected relation between the parton distributions and the transition form factors $\gamma^{*}p\to N^{*}$. Analogous features of duality have been observed in other aspects of lepton scattering Melnitchouk _et al._ (2005), in $e^{+}e^{-}$ annihilation to hadrons and in hard hadron-hadron collisions Fantoni _et al._ (2006); Dokshitzer (2010). Duality reflects a basic principle of hadron dynamics, which relates bound states to high energy scattering.

### II.4 The QCD coupling at low scales

The coupling $g$ in the quark and gluon interaction terms of the QCD action is not a well-defined parameter. Its higher order corrections involve divergent loop integrals which need to be regularized. In renormalizable theories (such as QCD) the divergences arise from infinitely large loop momenta and are universal, i.e., the same for all physical processes. Removing the common divergence in $g$ leaves, however, a renormalization scale $\mu$ dependence in the coupling ${\alpha_{s}}(\mu^{2})=g^{2}(\mu)/4\pi$. This scale may intuitively be thought of as the momentum at which the loop integrals are cut off. Results summed to all orders in ${\alpha_{s}}(\mu)$ are independent of the choice of $\mu$. Processes with momentum transfers $Q\gg\mu$ probe the part of the loop integrals that were not included in the definition of ${\alpha_{s}}(\mu)$. This gives rise to factors of $\log(Q^{2}/\mu^{2})$ which enhance higher order contributions. These factors may be absorbed in the coupling by making it $Q^{2}$-dependent Zyla _et al._ (2020); Dokshitzer (1998); Deur _et al._ (2016).
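The kinematics behind Fig. 8 follows directly from (7), which inverts to $W^{2}=M_{N}^{2}+Q^{2}(1-x)/x$. A minimal check of where a fixed $x$ lands as $Q^{2}$ is lowered:

```python
import math

# Kinematics (7): at fixed x, W drops into the resonance region at low Q^2.
# W^2 = M_N^2 + Q^2 (1 - x) / x follows from inverting (7).
M_N = 0.938  # GeV

def W(x, Q2):
    return math.sqrt(M_N**2 + Q2 * (1 - x) / x)

for Q2 in (25.0, 10.0, 1.0):
    print(f"Q^2 = {Q2:5.1f} GeV^2:  W(x = 0.62) = {W(0.62, Q2):.2f} GeV")
# At Q^2 = 1 GeV^2 and x = 0.62, W ~ 1.22 GeV: the Delta(1232) region,
# as seen in the upper panel of Fig. 8.
```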
Since QCD is “asymptotically free” its running coupling (for $n_{f}$ flavors) decreases logarithmically with $Q^{2}$, $\displaystyle{\alpha_{s}}(Q^{2})=\frac{12\pi}{(33-2n_{f})\log(Q^{2}/\Lambda_{QCD}^{2})}+{\cal O}\left(\frac{\log\log(Q^{2})}{\log(Q^{2})}\right)$ (8) As shown in Fig. 9(a) data on a variety of processes involving large scales $Q^{2}$ agree on the value of the QCD coupling and verify its predicted $Q^{2}$-dependence. Figure 9: (a) Measurements of the QCD coupling ${\alpha_{s}}(Q^{2})$ confirm its expected $Q^{2}$-dependence. From Zyla _et al._ (2020). (b) Results for the effective low energy coupling $\alpha_{0}(2\ {{\mathrm{GeV}}})$ (9) and for ${\alpha_{s}}(M_{Z}^{2})$ obtained from a fit to event shapes in $e^{+}e^{-}$ annihilations. The solutions labeled “old” were obtained with an incorrect analysis, see Dokshitzer (1998). From Dokshitzer _et al._ (1999). The perturbative analysis of ${\alpha_{s}}(Q^{2})$ in Fig. 9(a) is restricted to $Q\geq m_{\tau}=1.78$ GeV, with ${\alpha_{s}}(m_{\tau}^{2})\simeq 0.33$. It is remarkable that the expression (8) for ${\alpha_{s}}(Q^{2})$ (with higher order perturbative corrections) works down to $Q\simeq 2~{}{{\mathrm{GeV}}}$. Perturbative results for DIS and other hard processes are found to be valid down to similar values of $Q$, and to join smoothly with the distributions at lower $Q$ Abt _et al._ (2017); Dokshitzer (2010). There is no abrupt “phase transition” to non-perturbative physics. There are many studies (reviewed in Deur _et al._ (2016)) of the value of ${\alpha_{s}}$ in soft processes, where confinement effects dominate. Since ${\alpha_{s}}$ is not a physically measurable quantity the answer depends on the theoretical framework. A fairly model-independent result has been obtained using a dispersive approach Dokshitzer _et al._ (1996); Dokshitzer and Webber (1997); Dokshitzer _et al._ (1999). The observed $1/Q$ power corrections to event shapes in $e^{+}e^{-}$ annihilations determine an average low energy coupling, $\displaystyle\alpha_{0}(\mu_{I})\equiv\frac{1}{\mu_{I}}\int_{0}^{\mu_{I}}dk\,{\alpha_{s}}(k^{2})$ (9) This coupling should be universal, i.e., independent of the shape parameter considered. Data on several shape measures give consistent values, see Fig. 9(b). An analysis of the Thrust distribution at higher order gave Gehrmann _et al._ (2013), $\displaystyle\alpha_{0}(2\ {{\mathrm{GeV}}})=0.538\;{+0.102\atop-0.047}$ (10) Hadron data is compatible with a framework where the coupling stays perturbative down to $Q=0$ Dokshitzer (1998). Imposing a boundary condition on Gauss’ law gives an ${\cal O}\left(\alpha_{s}^{0}\right)$ confining potential which for $q\bar{q}$ Fock states agrees with (2), determined by quarkonium phenomenology and lattice QCD (section VIII.1.1). The states bound by this potential may serve as the basis for a perturbative expansion in ${\alpha_{s}}$, progressively including higher Fock states. In this scenario the QCD coupling is renormalized at an initial scale $\mu\simeq 2\ {{\mathrm{GeV}}}$. Loop corrections (higher Fock state fluctuations) with momenta $|k|<\mu$ are, due to the confining potential, free of infrared singularities. Thus the coupling is fixed for $Q<\mu$, at a value compatible with (10). Its running for $Q>\mu$ is essentially perturbative, being insensitive to the confining potential. 
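The scenario sketched above is easy to quantify at one loop. The snippet below evaluates (8) with $n_{f}=3$ and an assumed $\Lambda_{QCD}\simeq 0.21$ GeV (tuned so that ${\alpha_{s}}(m_{\tau}^{2})\simeq 0.33$ at this order), and freezes the coupling below $\mu=2$ GeV. The corresponding infrared average (9) is then simply the frozen value; it is smaller than the dispersive result (10), which is defined in a different scheme, so only the order of magnitude can be compared:

```python
import numpy as np

# One-loop running coupling (8) with n_f = 3. Lambda_QCD = 0.21 GeV is an
# assumed value, tuned so that alpha_s(m_tau^2) ~ 0.33 at this order.
n_f, Lambda, mu = 3, 0.21, 2.0           # GeV

def alpha_s(Q):                          # perturbative one-loop running
    return 12 * np.pi / ((33 - 2 * n_f) * np.log(Q**2 / Lambda**2))

def alpha_frozen(Q):                     # frozen below mu, as described above
    return alpha_s(max(Q, mu))

print(f"alpha_s(m_tau) = {alpha_s(1.78):.3f}")     # ~0.33
print(f"alpha_s(M_Z)   = {alpha_s(91.19):.3f}")    # indicative only (fixed n_f)
for Q in (0.0, 0.5, 1.0, 2.0):
    print(f"alpha_frozen({Q:3.1f} GeV) = {alpha_frozen(Q):.3f}")
# With a constant coupling below mu_I = mu, the average (9) reduces to
# alpha_0(mu) = alpha_frozen(0) ~ 0.31.
```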
## III Brief survey of present QED approaches to atoms

### III.1 Recall: The Hydrogen atom in introductory Quantum Mechanics

In Introductory Quantum Mechanics we write the Hamiltonian for the Hydrogen atom as a sum of the electron and proton kinetic energies, plus the potential energy. Since the atom is stationary in time the wave function may be expressed as $\exp(-iEt)\Phi({\boldsymbol{x}}_{e},{\boldsymbol{x}}_{p})$. The Schrödinger equation (including the mass contributions) is then $\displaystyle\Big{[}m_{e}+m_{p}-\frac{\boldsymbol{\nabla}_{e}^{2}}{2m_{e}}-\frac{\boldsymbol{\nabla}_{p}^{2}}{2m_{p}}+V({\boldsymbol{x}}_{e}-{\boldsymbol{x}}_{p})\Big{]}\Phi({\boldsymbol{x}}_{e},{\boldsymbol{x}}_{p})=E\Phi({\boldsymbol{x}}_{e},{\boldsymbol{x}}_{p})$ (11) We then transform to CM and relative coordinates, $\displaystyle{\boldsymbol{R}}=\frac{m_{e}{\boldsymbol{r}}_{e}+m_{p}{\boldsymbol{r}}_{p}}{m_{e}+m_{p}}\hskip 56.9055pt{\boldsymbol{r}}={\boldsymbol{r}}_{e}-{\boldsymbol{r}}_{p}$ (12) and get, $\displaystyle\Big{[}m_{e}+m_{p}-\frac{\boldsymbol{\nabla}_{\boldsymbol{R}}^{2}}{2(m_{e}+m_{p})}-\frac{\boldsymbol{\nabla}_{\boldsymbol{r}}^{2}}{2\mu}+V({\boldsymbol{r}})\Big{]}\Phi({\boldsymbol{R}},{\boldsymbol{r}})=E\Phi({\boldsymbol{R}},{\boldsymbol{r}})\hskip 56.9055pt\mu=\frac{m_{e}m_{p}}{m_{e}+m_{p}}$ (13) The dependence of the wave function on ${\boldsymbol{R}}$ and ${\boldsymbol{r}}$ may now be separated. Denoting the CM momentum of the bound state by ${\boldsymbol{P}}$ we have $\Phi({\boldsymbol{R}},{\boldsymbol{r}})=\exp(i{\boldsymbol{P}}\cdot{\boldsymbol{R}})\Phi({\boldsymbol{r}})$. The total energy is given by the electron and proton masses, the kinetic energy of the CM motion and an ${\cal O}\left(\alpha^{2}\right)$ binding energy, $E=m_{e}+m_{p}+{\boldsymbol{P}}^{2}/2(m_{e}+m_{p})+E_{b}$ with, $\displaystyle\Big{[}-\frac{\boldsymbol{\nabla}_{\boldsymbol{r}}^{2}}{2\mu}+V({\boldsymbol{r}})\Big{]}\Phi({\boldsymbol{r}})=E_{b}\Phi({\boldsymbol{r}})$ (14) The transformation has reduced the dynamics to that of one particle in an external potential, and the bound states are determined by the Schrödinger equation (14).

The two-to-one particle reduction is possible only for non-relativistic kinematics. For relativistic motion one needs to transform the times together with the positions of the electron and proton in (12). The wave function of a Hydrogen atom with large CM momentum $|{\boldsymbol{P}}|\gtrsim m_{e}+m_{p}$ can nevertheless be determined. The energy is $E=\sqrt{{\boldsymbol{P}}^{2}+(m_{e}+m_{p}+E_{b})^{2}}$ and the wave function $\Phi({\boldsymbol{r}})$ depends on ${\boldsymbol{P}}$ (Lorentz contraction), as we shall see in section VI.3.

### III.2 The Schrödinger equation from Feynman diagrams (rest frame)

#### III.2.1 Bound states vs. Feynman diagrams

Bound states are (by definition) stationary in time and thus eigenstates of the Hamiltonian. The eigenstate condition gives the Schrödinger equation (14). On general grounds, bound states also appear as poles in scattering amplitudes at $E_{CM}=M-i\Gamma$, where $M$ is the bound state mass and $\Gamma$ its width. E.g., the $e^{-}p\to e^{-}p$ scattering amplitude has poles at the masses of the ground and all excited states of the Hydrogen atom. Since the binding energy $E_{b}<0$ the poles are below the threshold for scattering, $M<m_{e}+m_{p}$. QED scattering amplitudes can be calculated perturbatively, in terms of Feynman diagrams.
The expansion is defined by the perturbative $S$-matrix, $\displaystyle S_{fi}={}_{out}\langle{f,\,t\to\infty}|$ $\displaystyle\left\\{{\rm T}\exp\Big{[}-i\int_{-\infty}^{\infty}dt\,H_{I}(t)\Big{]}\right\\}\left|{i,\,t\to-\infty}\right\rangle_{in}$ $\displaystyle H_{I}=e\int d{\boldsymbol{x}}\,\bar{\psi}\,{\not{A}}\,\psi\ \ \ \ \mbox{(in\ QED)}$ (15) where the $in$ and $out$ states at $t=\pm\infty$ are free. Feynman diagrams of any finite order in the coupling $e$ for the process $i\to f$ are generated by expanding the time ordered exponential of the interaction Hamiltonian $H_{I}$. The interaction vertices are connected by free propagators.

Unfortunately the $S$-matrix boundary condition of free states excludes bound states, which are bound due to interactions. There is no overlap between, say, an $e^{+}e^{-}$ Positronium atom, which has finite size, and a free $e^{+}e^{-}$ state, which has infinite size. As a consequence, there are no Positronium poles in any Feynman diagram for the $e^{+}e^{-}\to e^{+}e^{-}$ scattering amplitude. This is why (as I mentioned above) atoms may be called “non-perturbative”, and their perturbative expansion differs from that of the $S$-matrix. Nevertheless, it turns out that we can generate Positronium poles by (implicitly or explicitly) summing an infinite set of Feynman diagrams. The poles then arise through the divergence of the sum. The simplest set of diagrams to sum are the so-called “ladder diagrams” shown in Fig. 10.

Figure 10: Ladder diagram expansion of the $e^{-}e^{+}\to e^{-}e^{+}$ scattering amplitude. Momenta are shown in the fermion direction, e.g., the positron energy $p_{2}^{0}>0$.

At first sight it seems curious that all the ladder diagrams of Fig. 10 can be of the same order in $\alpha$, allowing the series to diverge at any value of the coupling. This is indeed true only for the special kinematics of bound states. In the rest frame all 3-momenta are of the order of the Bohr momentum, e.g., $|{\boldsymbol{p}}_{1}|$ is of ${\cal O}\left(\alpha m\right)$, and its kinetic energy $E_{p_{1}}=\sqrt{{\boldsymbol{p}}_{1}^{2}+m^{2}}\simeq m+{\boldsymbol{p}}_{1}^{2}/2m$ differs from $m$ by ${\cal O}\left(\alpha^{2}m\right)$. The exchanged momentum is similar, $(q_{1}-p_{1})^{0}\sim\alpha^{2}m$, $|{\boldsymbol{q}}_{1}-{\boldsymbol{p}}_{1}|\sim\alpha m$. Each propagator contributes a factor of ${\cal O}\left(1/\alpha^{2}\right)$, making the diagram with a single ladder of ${\cal O}\left(1/\alpha\right)$. In fact all ladder diagrams are of ${\cal O}\left(1/\alpha\right)$ for bound state kinematics, whereas all non-ladder diagrams are of higher order in $\alpha$.

Exercise A.1: Convince yourself that the diagram with two ladders in Fig. 10 is of ${\cal O}\left(1/\alpha\right)$, like the single ladder diagram. Hint: The relevant loop momenta are commensurate with bound state kinematics.

In processes where the momenta are even lower than in bound states the propagators are further enhanced and the ladder series in Fig. 10 diverges more strongly. This is the kinematic region where classical fields dominate, and Feynman diagrams give non-leading contributions. Bound states are at the borderline between quantum and classical physics.

#### III.2.2 Forming an integral equation

The expression for the sum of all ladder diagrams at leading ${\cal O}\left(1/\alpha\right)$ may be formulated as an integral equation.
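Before turning to the integral equation, note that the power counting can be checked by inserting the bound state scales into a single rung of the ladder: the loop measure $d^{3}\ell\sim(\alpha m)^{3}$, the two-fermion propagator $1/(P^{0}-2E_{\ell})\sim 1/(\alpha^{2}m)$ and the exchange $e^{2}/({\boldsymbol{q}}-{\boldsymbol{\ell}})^{2}\sim\alpha/(\alpha m)^{2}$ combine to ${\cal O}\left(1\right)$, so each additional rung costs no power of $\alpha$. A toy numerical version, with all ${\cal O}\left(1\right)$ factors dropped:

```python
import numpy as np

# Bound-state power counting for one ladder rung (O(1) factors dropped).
m = 1.0
for alpha in (1 / 137.036, 1 / 274.072, 1 / 1000):
    p = alpha * m                             # Bohr-momentum scale
    E_p = np.sqrt(p**2 + m**2)
    loop = (alpha * m) ** 3                   # d^3 l
    prop = 1 / (alpha**2 * m)                 # 1/(P0 - 2 E_l)
    V = alpha / (alpha * m) ** 2              # e^2/(q - l)^2
    print(f"alpha = {alpha:.2e}:  (E_p - m)/m = {(E_p - m) / m:.2e}"
          f" ~ alpha^2/2,  rung = {loop * prop * V:.3f}")
```

The rung estimate is independent of $\alpha$, as the counting requires.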
Bound state poles are just below threshold, $2m-M\sim\alpha^{2}m$, so also the initial and final $e^{\pm}$ must be off-shell by ${\cal O}\left(\alpha^{2}\right)$. Their propagators may be expressed using $\displaystyle\frac{\not{p}+m}{p^{2}-m^{2}+i\varepsilon}=\frac{1}{2E_{p}}\sum_{\lambda}\left[\frac{u({\boldsymbol{p}},\lambda)\bar{u}({\boldsymbol{p}},\lambda)}{p^{0}-E_{p}+i\varepsilon}+\frac{v(-{\boldsymbol{p}},\lambda)\bar{v}(-{\boldsymbol{p}},\lambda)}{p^{0}+E_{p}-i\varepsilon}\right]\hskip 56.9055ptE_{p}=\sqrt{{\boldsymbol{p}}^{2}+m^{2}}$ (16) At leading order in $\alpha$ we need only retain the $e^{-}$ pole in the electron propagator and the $e^{+}$ pole for the positron, e.g., $1/(p_{1}^{0}-E_{p_{1}}+i\varepsilon)\propto\alpha^{-2}$ and $1/(-p_{2}^{0}+E_{p_{2}}-i\varepsilon)\propto\alpha^{-2}$. In the following I show for conciseness only the spinors of the external propagators, e.g., $u({\boldsymbol{p}}_{1},\lambda_{1})$ for the incoming electron. The analysis is done in the rest frame, ${\boldsymbol{p}}_{1}+{\boldsymbol{p}}_{2}=0$. For bound state kinematics the spinors are trivial at leading order in $\alpha$, $\displaystyle u({\boldsymbol{p}},\lambda)$ $\displaystyle\equiv\frac{{\not{p}}+m}{\sqrt{E_{p}+m}}\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)=\sqrt{2m}\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)+{\cal O}\left(\alpha\right)$ (21) $\displaystyle v({\boldsymbol{p}},\lambda)$ $\displaystyle\equiv\frac{-{\not{p}}+m}{\sqrt{E_{p}+m}}\left(\begin{array}[]{c}0\\\\[5.69054pt] \bar{\chi}_{\lambda}\end{array}\right)=\sqrt{2m}\left(\begin{array}[]{c}0\\\\[5.69054pt] \bar{\chi}_{\lambda}\end{array}\right)+{\cal O}\left(\alpha\right)\hskip 28.45274pt\bar{\chi}_{\lambda}=i\sigma_{2}\chi_{\lambda}$ (26) The relation between $\bar{\chi}_{\lambda}$ and $\chi_{\lambda}$ follows from charge conjugation, see (179) below. In the single ladder diagram of Fig. 11(a) the Dirac structure of the electron line is $\bar{u}({\boldsymbol{q}}_{1},\lambda_{1}^{\prime})\gamma^{\mu}u({\boldsymbol{p}}_{1},\lambda_{1})=2m\delta_{\lambda_{1},\lambda_{1}^{\prime}}\delta^{\mu,0}+{\cal O}\left(\alpha\right)$. The positron line gives a similar result. In the photon propagator $(q_{1}-p_{1})^{2}=(q_{1}^{0}-p_{1}^{0})^{2}-({\boldsymbol{q}}_{1}-{\boldsymbol{p}}_{1})^{2}=-({\boldsymbol{q}}_{1}-{\boldsymbol{p}}_{1})^{2}+{\cal O}\left(\alpha^{4}m^{2}\right)$. The amplitude for this diagram is then, at lowest order in $\alpha$, denoting ${\boldsymbol{p}}\equiv{\boldsymbol{p}}_{1},\ {\boldsymbol{q}}\equiv{\boldsymbol{q}}_{1}$ and suppressing the conserved helicities, $\displaystyle A_{1}({\boldsymbol{p}},{\boldsymbol{q}})=(2m)^{2}\frac{-e^{2}}{({\boldsymbol{q}}-{\boldsymbol{p}})^{2}}\equiv(2m)^{2}\,V({\boldsymbol{q}}-{\boldsymbol{p}})$ (27) where the notation indicates that $V({\boldsymbol{p}}-{\boldsymbol{q}})$ is the single photon exchange potential in momentum space. The factor $(2m)^{2}$ is due to my normalization of the spinors in (21). Figure 11: (a) Single ladder diagram for the $e^{-}(p_{1})e^{+}(p_{2})\to e^{-}(q_{1})e^{+}(q_{2})$ scattering amplitude. (b) Double ladder diagram. (c) A diagram with crossed exchanges. Momenta are shown in the fermion (arrow) direction. A similar calculation of the double ladder amplitude in Fig. 
11(b) gives, with $P^{0}\equiv p_{1}^{0}+p_{2}^{0}$ the CM energy, $\displaystyle A_{2}({\boldsymbol{p}},{\boldsymbol{q}})=\int\frac{d^{3}{\boldsymbol{\ell}}}{(2\pi)^{3}}\,A_{1}({\boldsymbol{p}},{\boldsymbol{\ell}})\frac{1}{P^{0}-2E_{\boldsymbol{\ell}}+i\varepsilon}\,V({\boldsymbol{q}}-{\boldsymbol{\ell}})$ (28)

Exercise A.2: Derive the expression (28) for $A_{2}({\boldsymbol{p}},{\boldsymbol{q}})$. Why does diagram 11(c) contribute only at a higher order in $\alpha$?

It is now straightforward to see that the amplitude with $n$ ladders may be expressed as $\displaystyle A_{n}({\boldsymbol{p}},{\boldsymbol{q}})=\int\frac{d^{3}{\boldsymbol{\ell}}}{(2\pi)^{3}}\,A_{n-1}({\boldsymbol{p}},{\boldsymbol{\ell}})\frac{1}{P^{0}-2E_{\ell}+i\varepsilon}\,V({\boldsymbol{q}}-{\boldsymbol{\ell}})\equiv A_{n-1}({\boldsymbol{p}},{\boldsymbol{\ell}})\,S({\boldsymbol{\ell}})\,V({\boldsymbol{q}}-{\boldsymbol{\ell}})$ (29) where a convolution over ${\boldsymbol{\ell}}$ is understood in the last expression. Summing over all ladder diagrams we get $\displaystyle A({\boldsymbol{p}},{\boldsymbol{q}})=\sum_{n=1}^{\infty}A_{n}({\boldsymbol{p}},{\boldsymbol{q}})=A_{1}({\boldsymbol{p}},{\boldsymbol{q}})+A({\boldsymbol{p}},{\boldsymbol{\ell}})\,S({\boldsymbol{\ell}})\,V({\boldsymbol{q}}-{\boldsymbol{\ell}})$ (30) which has the form of a Dyson-Schwinger equation Itzykson and Zuber (1980). A bound state pole in the $e^{-}e^{+}\to e^{-}e^{+}$ amplitude has in the rest frame the structure $\displaystyle A({\boldsymbol{p}},{\boldsymbol{q}})=\frac{\Phi^{\dagger}({\boldsymbol{p}})\Phi({\boldsymbol{q}})}{P^{0}-M}+\ldots$ (31) where $M$ is the bound state mass. $\Phi^{\dagger}({\boldsymbol{p}})$ and $\Phi({\boldsymbol{q}})$ are the bound state wave functions, expressing the coupling to the initial and final states with relative momenta ${\boldsymbol{p}}$ and ${\boldsymbol{q}}$. Eq. (30) gives a bound state equation for the wave function since $A_{1}({\boldsymbol{p}},{\boldsymbol{q}})$ has no pole. Cancelling the factor $\Phi^{\dagger}({\boldsymbol{p}})/(P^{0}-M)$ on both sides and extracting a factor $P^{0}-2E_{{\boldsymbol{q}}}$ from the wave function (which gives the “truncated” wave function) we have (at $P^{0}=M$) $\displaystyle\Phi({\boldsymbol{q}})(M-2E_{q})=\int\frac{d^{3}{\boldsymbol{\ell}}}{(2\pi)^{3}}\,\Phi({\boldsymbol{\ell}})(M-2E_{\ell})\,\frac{1}{M-2E_{\ell}+i\varepsilon}\,\frac{-e^{2}}{({\boldsymbol{q}}-{\boldsymbol{\ell}})^{2}}=\int\frac{d^{3}{\boldsymbol{\ell}}}{(2\pi)^{3}}\,\Phi({\boldsymbol{\ell}})\,\frac{-e^{2}}{({\boldsymbol{q}}-{\boldsymbol{\ell}})^{2}}$ (32) where I used the explicit expression of the potential from (27). This is the Schrödinger equation in momentum space. We can go to coordinate space using $\displaystyle\frac{1}{({\boldsymbol{\ell}}-{\boldsymbol{q}})^{2}}$ $\displaystyle=\int d^{3}{\boldsymbol{x}}\,\frac{e^{i({\boldsymbol{q}}-{\boldsymbol{\ell}})\cdot{\boldsymbol{x}}}}{4\pi|{\boldsymbol{x}}|}$ $\displaystyle\Phi({\boldsymbol{q}})$ $\displaystyle\equiv\int d^{3}{\boldsymbol{x}}\,\Phi({\boldsymbol{x}})\,e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}$ (33) Defining the binding energy $E_{b}$ by $M=2m+E_{b}$ and expanding $E_{q}\simeq m+{\boldsymbol{q}}^{2}/2m$ on the lhs.
of (32) we have, as in (14), $\displaystyle\Big{(}E_{b}+\frac{\boldsymbol{\nabla}^{2}}{m}\Big{)}\Phi({\boldsymbol{x}})=V({\boldsymbol{x}})\Phi({\boldsymbol{x}})\hskip 56.9055ptV({\boldsymbol{x}})=-\frac{\alpha}{|{\boldsymbol{x}}|}$ (34)

### III.3 The Bethe-Salpeter equation

The Bethe-Salpeter equation Salpeter and Bethe (1951); Itzykson and Zuber (1980); Silagadze (1998) is a generalization of the integral equation (32), obtained by considering all Feynman diagrams (not just the ladder ones), and without assuming non-relativistic kinematics. It is thus a formally exact framework for bound states with explicit Poincaré covariance, and so far is the only bound state equation which applies in any frame. Boost covariance requires that the relative time of the constituents in the wave function is frame dependent. It is possible to project on constituents at equal time in any one frame. I give a brief summary here, following Lepage (1978). A comprehensive review may be found in Nakanishi (1969, 1988).

Let $G_{T}$ be a truncated Green function (i.e., without external propagators) for a $2\to 2$ process. Denote by $K$ a $2\to 2$ truncated “kernel” and by $S$ a 2-particle propagator. Then if $K$ is 2-particle irreducible, i.e., does not have two parts that are only connected by $S$, we have the Dyson-Schwinger identity $\displaystyle G_{T}=K+G_{T}\,S\,K$ (35) By construction, any Feynman diagram on the lhs. is either contained in $K$ or has the structure of $G_{T}\,S\,K$. Eq. (35) may be regarded as an exact equation for $G_{T}$ since it holds for the complete sum of Feynman diagrams. The product $G_{T}\,S\,K$ implies a convolution over the momenta and helicities of the two particles in the propagator $S$. If $G_{T}$ has a bound state pole it must have the form (31). Identifying the residues on both sides of (35) gives the Bethe-Salpeter equation for the truncated wave function shown in Fig. 12(a), $\displaystyle\Phi(P,q)=\int\frac{d^{4}\ell}{(2\pi)^{4}}\Phi(P,\ell)\,S(\ell)\,K(q-\ell)$ (36)

Figure 12: (a) The exact Bethe-Salpeter equation (36) for a bound state of momentum $P$. The black dots on the fermion propagators in $S$ represent full self-energy corrections, and the irreducible kernel $K$ has contributions of all orders in $\alpha$. (b) The Bethe-Salpeter equation with free propagators and a single photon exchange kernel.

The bound state momentum $P=(P^{0},{\boldsymbol{P}})$ satisfies $P^{0}=\sqrt{{\boldsymbol{P}}^{2}+M^{2}}$ by Poincaré invariance. The wave function $\Phi(P,q)$ depends on the relative energy $q^{0}$ of the constituents or, equivalently, on the difference in time $x^{0}$ of the constituents in coordinate space, $\displaystyle\Phi(P,x)=\int\frac{d^{4}q}{(2\pi)^{4}}\Phi(P,q)\,e^{-iq\cdot x}$ (37) The B-S wave function for the $e^{-}e^{+}$ component can be expressed as a matrix element of electron field operators between the bound state $\left|{P}\right\rangle$ and the vacuum, $\displaystyle\langle{0}|\mathrm{T}\\{\bar{\psi}_{\beta}(x_{2})\psi_{\alpha}(x_{1})\\}\left|{P}\right\rangle\equiv e^{-iP\cdot(x_{1}+x_{2})/2}\Phi_{\alpha\beta}(P,x_{1}-x_{2})$ (38) It thus describes a component of the bound state which has an electron at time $x_{1}^{0}$ at position ${\boldsymbol{x}}_{1}$, and a positron at time $x_{2}^{0}$ at position ${\boldsymbol{x}}_{2}$. For $x_{1}^{0}=x_{2}^{0}$ the B-S wave function describes an $e^{-}e^{+}$ Fock state, belonging to the Hilbert space of states defined at equal time and expressed in the free field basis.
A Lorentz transformation $\Lambda P=(P^{\prime\,0},{\boldsymbol{P}}^{\prime})$ transforms the electron field as $\psi^{\prime}(x^{\prime})=S^{-1}(\Lambda)\psi(\Lambda x)$, where the $4\times 4$ matrix $S(\Lambda)$ is the Dirac spinor representation of the transformation, familiar from the Dirac equation Itzykson and Zuber (1980). Hence the B-S wave function transforms as $\displaystyle\Phi^{\prime}(P^{\prime},x^{\prime}_{1}-x^{\prime}_{2})=S(\Lambda)\Phi(P,x_{1}-x_{2})S^{-1}(\Lambda)$ (39) Poincaré covariance thus allows the constituents to be taken at equal time ($x_{1}^{0}=x_{2}^{0}$) in at most one frame. The B-S equation has “abnormal” solutions Nakanishi (1969); Karmanov _et al._ (2020); Carbonell _et al._ (2021), which vanish in the non-relativistic limit and seem related to the dependence on relative time. Their physical significance is not fully understood.

Expanding the propagator $S$ and the kernel $K$ in powers of $\alpha$ allows the B-S equation to be solved perturbatively. There are many formally equivalent expansions. The Dyson-Schwinger equation (35) determines $K$ in terms of the truncated Green function $G_{T}$ and $S$, $\displaystyle K=\frac{1}{1+G_{T}\,S}\,G_{T}=G_{T}-G_{T}\,S\,G_{T}+\ldots$ (40) The choice of $S$ together with the standard perturbative expansion of $G_{T}$ fixes the expansion of the kernel. My remark in section I.1 that the perturbative expansion for bound states is not unique refers to this more precise statement.

The B-S equation is difficult to solve when the kernel $K(\ell,q)$ depends on $(\ell-q)^{0}$, which implies retarded interactions. Already the single photon exchange kernel has the denominator $(\ell^{0}-q^{0})^{2}-({\boldsymbol{\ell}}-{\boldsymbol{q}})^{2}$. The $(\ell-q)^{0}$ dependence arises from the propagation of transversely polarized photons, which create intermediate $e^{-}e^{+}\gamma$ states that affect the $e^{-}e^{+}$ B-S wave function. No analytic solution of the B-S equation is known even for single photon exchange with free propagators, illustrated in Fig. 12(b). Caswell and Lepage Caswell and Lepage (1978) noted that $S$ may be chosen so that the kernel $K(\ell,q)$ is static (independent of $\ell^{\,0}$ and $q^{0}$) at lowest order. This reduced the B-S equation to a Schrödinger equation which has an analytic solution, simplifying the calculation of higher order corrections in the rest frame.

Bound states with arbitrary CM momenta are needed for scattering processes, e.g., form factors. It is then non-trivial to take into account the frame dependence of the wave function Brodsky and Primack (1969). Model-dependent assumptions are often made, but there are few studies based on field theory. The B-S framework was used in Järvinen (2005) to determine the frame dependence of equal-time Positronium wave functions. It showed the importance of the $\left|{e^{+}e^{-}\gamma}\right\rangle$ intermediate state, and demonstrated (apparently for the first time) that the standard Lorentz contraction of the $\left|{e^{+}e^{-}}\right\rangle$ Fock component holds at lowest order. I verify this result using a Fock state expansion in section VI.3.

### III.4 Non-relativistic QED

The realization that there are many formally equivalent versions of the Bethe-Salpeter equation underlined the need for physical judgement in the choice of perturbative expansion. The most accurate data for atoms relates to their binding energies. These may be calculated in the rest frame, where the constituents have mean velocities of ${\cal O}\left(\alpha\right)$.
It has been estimated that the probability for Positronium electrons to have 3-momenta $|{\boldsymbol{p}}|\gtrsim m$ is of order $\alpha^{5}\sim 10^{-11}$ Kinoshita and Lepage (1990). It is thus well motivated to expand the QED action in powers of $|{\boldsymbol{p}}|/m$. This defines the effective theory of non-relativistic QED (NRQED) Caswell and Lepage (1986); Kinoshita (1998). The constraints of gauge and rotational invariance allow only a limited number of terms at each order of $|{\boldsymbol{p}}|/m$ in the Lagrangian. The expansion begins as, $\displaystyle\mathcal{L}_{NRQED}=$ $\displaystyle-{\textstyle\frac{1}{4}}F_{\mu\nu}F^{\mu\nu}+\chi^{\dagger}\Big{\\{}i\partial_{t}-eA^{0}+\frac{\boldsymbol{D}^{2}}{2m}+\frac{\boldsymbol{D}^{4}}{8m^{3}}+c_{1}\frac{e}{2m}\,\boldsymbol{\sigma}\cdot\boldsymbol{B}+c_{2}\frac{e}{8m^{2}}\,\boldsymbol{\nabla}\cdot\boldsymbol{E}$ $\displaystyle+c_{3}\,\frac{ie}{8m^{2}}\,\boldsymbol{\sigma}\cdot(\boldsymbol{D}\times\boldsymbol{E}-\boldsymbol{E}\times\boldsymbol{D})\Big{\\}}\chi+\frac{d_{1}}{m^{2}}\,(\chi^{\dagger}\chi)^{2}+\frac{d_{2}}{m^{2}}\,(\chi^{\dagger}\boldsymbol{\sigma}\chi)^{2}+\ldots$ $\displaystyle+\mbox{~{}positron and positron-electron terms}.$ (41) The photon action $-F_{\mu\nu}F^{\mu\nu}/4$ is as in QED, since photons are relativistic. The field $\chi$ is a two-component Pauli spinor, representing the electron part (upper components) of the QED Dirac field. There are further terms involving the lower (positron) components, as well as terms mixing the positron and electron fields. $\boldsymbol{D}=\boldsymbol{\nabla}-ie{\boldsymbol{A}}$ is the covariant derivative, ${\boldsymbol{E}}$ and $\boldsymbol{B}$ are the electric and magnetic field operators.

The NRQED action implies a finite momentum cutoff $\Lambda\sim m$. Contributions of momenta $|{\boldsymbol{p}}|\gtrsim\Lambda$ to low energy dynamics are included in the UV-divergent terms in $\mathcal{L}_{NRQED}$. Their coefficients $c_{i}$ and $d_{i}$ are process-independent and may thus be determined (as expansions in powers of $\alpha$) by comparing the results of QED and NRQED for selected processes, such as a scattering amplitude close to threshold. Since both theories are gauge invariant, one may use different gauges in their calculations. Coulomb gauge $\boldsymbol{\nabla}\cdot{\boldsymbol{A}}=0$ has been found to be convenient for bound state calculations in NRQED, while covariant gauges (e.g., Feynman gauge) are efficient for scattering amplitudes.

The expansion in powers of $|{\boldsymbol{p}}|/m$ shows that the Coulomb field $A^{0}$ is the dominant interaction. In (III.4) the vector potential ${\boldsymbol{A}}$, although contributing at the same order in $\alpha$ as $A^{0}$, is suppressed by a power of $m$. The choice of initial bound state approximation is then evident: The lowest order terms in (III.4) give the familiar non-relativistic Hamiltonian of the Hydrogen atom in Quantum Mechanics. The Schrödinger equation with the $A^{0}$ potential is solved exactly, and the terms of higher orders in $|{\boldsymbol{p}}|/m$ are included using Rayleigh-Schrödinger perturbation theory. NRQED has turned out to be an efficient calculational method for the binding energies of atoms. It has, in particular, allowed the impressive expression (I.1) for the hyperfine splitting of Positronium.
The evaluation of the higher order corrections is discussed in Caswell and Lepage (1986); Kinoshita and Lepage (1990); Kinoshita and Nio (1996); Pachucki (1997); Czarnecki _et al._ (1999); Adkins (2018); Haidar _et al._ (2020). The NRQED approach is limited to the rest (or non-relativistic) frames of weakly bound states.

### III.5 Effective theories for heavy quarks

The large masses $m_{Q}\gg\Lambda_{QCD}$ of the charm and bottom quarks allow the formulation of effective theories for QCD that are analogous to NRQED. Heavy Quark Effective Theory (HQET, reviewed in Neubert (1994)) expands the heavy quark contribution to the QCD action in powers of $1/m_{Q}$. In a heavy-light bound state the heavy quark velocity is (in the $m_{Q}\to\infty$ limit) unaffected by soft, ${\cal O}\left(\Lambda_{QCD}\right)$ hadronic interactions. The light quark and gluon dynamics is in turn independent of the heavy quark flavor and spin. This implies mass degeneracies in the spectrum, such as between the pseudoscalar and vector mesons ($D$ and $D^{*}$). In semileptonic decays $B\to D\ell\nu$ the light system does not feel the sudden change of heavy quark flavor, constraining the decay form factor in the recoilless limit. HQET provides many tests and constraints on the dynamics of heavy hadrons.

Charmonia and bottomonia ($c\bar{c}$ and $b\bar{b}$) resemble Positronia, being nearly non-relativistic, compact bound states. This indicates that the coupling ${\alpha_{s}}$ is perturbative and the Bohr momentum is small, ${\alpha_{s}}m_{Q}\sim v\,m_{Q}\ll m_{Q}$. The QCD action can then be expanded in powers of $1/m_{Q}$ as in NRQED. This defines the effective theory of Non-Relativistic QCD (NRQCD, reviewed in Brambilla _et al._ (2005); Pineda (2012)). The interactions of NRQCD are determined by matching with QCD at the cut-off scale $m_{Q}$. NRQCD has light quarks and gluons with momenta of ${\cal O}\left({\alpha_{s}}\,m_{Q}\right)$, but also “ultrasoft” fields at the binding energy scale ${\cal O}\left(\alpha_{s}^{2}\,m_{Q}\right)$. In order to further reduce the number of scales the ${\cal O}\left(v\,m_{Q}\right)$ interactions of NRQCD may be integrated out, defining “potential NRQCD” (pNRQCD) Brambilla _et al._ (2005); Pineda (2012) at the ${\cal O}\left(v^{2}\,m_{Q}\right)$ scale.

Confinement effects of ${\cal O}\left(\Lambda_{QCD}\right)$ do not appear in the perturbative framework and their relative importance is unclear. If one assumes that ${\alpha_{s}}\,m_{Q}\gg\Lambda_{QCD}$ the matching between NRQCD and pNRQCD can be made perturbatively at ${\cal O}\left({\alpha_{s}}\,m_{Q}\right)$. The pNRQCD action has thus been determined, including non-leading orders in ${\alpha_{s}}$ and $1/{\alpha_{s}}\,m_{Q}$. The resulting heavy quark potential is found to agree with the one calculated using lattice methods at short distances ($\lesssim 0.25$ fm). Quantitative applications to quarkonia suffer from uncertainties concerning the influence of confinement.

## IV Dirac bound states

### IV.1 Weak vs. strong binding

The QED atoms discussed above were weakly coupled ($\alpha\ll 1$). We have only a limited understanding of the dynamics of strong binding in QFT. Some features are known in $D=1+1$ dimensions (QED2), where the dimensionless parameter is $e/m$ Schwinger (1962); Coleman _et al._ (1975); Coleman (1976). For $e/m\ll 1$ the $e^{+}e^{-}$ states are weakly bound and approximately described by the Schrödinger equation. For $e/m\gg 1$ on the other hand the spectrum is that of weakly interacting bosons.
This may be qualitatively understood since the large coupling locks the fermion degrees of freedom into compact neutral bound states. In the limit of $e/m\to\infty$ (the massless Schwinger model) QED2 has only a pointlike, non-interacting massive ($M=e/\sqrt{\pi}$) boson. The physical hadron spectrum does not resemble the strong binding limit of QED2. Solving the relativistic Bethe-Salpeter equation is complicated by the dependence of the kernel on the relative time of the constituents (section III.3). The time dependence is due to the exchange of transversely polarized photons. In chapter V I take this into account through a Fock expansion of the bound state, keeping the instantaneous (Coulomb) part of the interaction within each Fock state. The Dirac equation has no retardation effects since the potential $A^{\mu}$ is external, i.e., fixed. A space-dependent potential $A^{\mu}({\boldsymbol{x}})$ breaks translation invariance, so there are no eigenstates of 3-momentum. Nevertheless, Dirac solutions with large potentials give insights into relativistic binding. For a linear potential $eA^{0}({\boldsymbol{x}})=V^{\prime}|{\boldsymbol{x}}|$ it has long been known Plesset (1932) (but is rarely mentioned) that the Dirac spectrum is continuous. I discuss this case in section IV.6. Klein’s paradox Itzykson and Zuber (1980); Klein (1929); Hansen and Ravndal (1981) signals an essential difference between the Schrödinger and Dirac equations. For potentials of the order of the electron mass (i.e., relativistic binding) the Dirac wave function does not describe a single electron. The state has $e^{+}e^{-}$ pairs which are not constituents in the usual (non-relativistic) sense. As noted in Weinberg (2005) the Dirac wave function should (when possible) be normalized to unity, regardless of the number of pairs. The Dirac pairs do not add degrees of freedom to the Dirac spectrum, which corresponds to that of a single electron. This motivates the study of the states described by the Dirac wave functions in section IV.3. ### IV.2 The Dirac equation The Dirac equation $\displaystyle(i\not{\partial}-m-e{\not{A}})\psi(x)=0$ (42) should be distinguished from the operator equation of motion for the electron field, given by $\delta\mathcal{S}_{QED}/\delta\bar{\psi}(x)=0$. The $c$-numbered equation (42) studied by Dirac in 1928 Dirac (1928a, b) is a relativistic version of the Schrödinger equation, where $A^{\mu}(x)$ is an external, classical field. The condition (42) implies that propagation in the field $A^{\mu}(x)$ is singular for electrons with wave function $\psi(x)$. Scattering in the field is explicit in a perturbative expansion, $\displaystyle\frac{i}{i\not{\partial}-m-e{\not{A}}}=\frac{i}{i\not{\partial}-m}-\frac{i}{i\not{\partial}-m}ie{\not{A}}\frac{i}{i\not{\partial}-m}+\ldots$ (43) For time-independent potentials $A^{\mu}({\boldsymbol{x}})$ the static solutions $\psi(t,{\boldsymbol{x}})=\exp(-itM)\Psi({\boldsymbol{x}})$ have both positive and negative energy eigenvalues $M$. The corresponding wave functions $\Psi$ and $\overline{\Psi}$ satisfy $\displaystyle\big{[}-i\boldsymbol{\nabla}\cdot\boldsymbol{\gamma}+m+e{\not{A}}({\boldsymbol{x}})\big{]}\Psi_{n}({\boldsymbol{x}})$ $\displaystyle=M_{n}\gamma^{0}\Psi_{n}({\boldsymbol{x}})$ (44) $\displaystyle\big{[}-i\boldsymbol{\nabla}\cdot\boldsymbol{\gamma}+m+e{\not{A}}({\boldsymbol{x}})\big{]}\overline{\Psi}_{n}({\boldsymbol{x}})$ $\displaystyle=-\overline{M}_{n}\gamma^{0}\overline{\Psi}_{n}({\boldsymbol{x}})$ (45) where $M_{n},\ \overline{M}_{n}\geq 0$. 
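For orientation, the Coulomb case of (44) (worked out in section IV.5) has the well-known closed-form spectrum $M_{n\kappa}=m\big[1+(Z\alpha)^{2}/\big(n-|\kappa|+\sqrt{\kappa^{2}-(Z\alpha)^{2}}\big)^{2}\big]^{-1/2}$. The sketch below, purely as a point of reference and under the stated conventions, compares its binding energies with the Schrödinger values $-(Z\alpha)^{2}m/2n^{2}$:

```python
import math

# Exact Dirac-Coulomb eigenvalues of (44) for eA^0 = -Z*alpha/r, with
# kappa = -1 for nS_1/2 states, compared with the Schrodinger binding.
alpha = 1 / 137.035999
m = 1.0

def M_dirac(n, kappa, Z):
    za = Z * alpha
    d = n - abs(kappa) + math.sqrt(kappa**2 - za**2)
    return m / math.sqrt(1 + (za / d) ** 2)

for Z in (1, 80):
    for n in (1, 2):
        Eb_dirac = M_dirac(n, -1, Z) - m
        Eb_schr = -(Z * alpha) ** 2 * m / (2 * n**2)
        print(f"Z = {Z:3d}, n = {n}: Dirac {Eb_dirac:+.6e}, Schrodinger {Eb_schr:+.6e}")
```

For $Z=1$ the two agree up to corrections of ${\cal O}\left(\alpha^{4}\right)$, while for $Z\alpha={\cal O}\left(1\right)$ the binding is relativistic: the regime where the pair contributions discussed below become important.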
The free ($A^{\mu}=0$) solutions are given by the spinors (21) as $\psi(x)=e^{-itp^{0}}\Psi_{{\boldsymbol{p}}\lambda}({\boldsymbol{x}})=u({\boldsymbol{p}},\lambda)e^{-ip\cdot x}$ and $\psi(x)=e^{itp^{0}}\overline{\Psi}_{{\boldsymbol{p}}\lambda}({\boldsymbol{x}})=v({\boldsymbol{p}},\lambda)e^{ip\cdot x}$ with $M_{p}=\overline{M}_{p}=p^{0}=\sqrt{{\boldsymbol{p}}^{2}+m^{2}}$. The solutions with negative kinetic energy $-\overline{M}_{p}$ are related to positrons. For potentials $A^{\mu}\gtrsim m$ the wave function $\psi(x)$ has both positive and negative energy components, due to contributions of $e^{-}e^{+}$ pairs (section IV.3).

The Dirac equation with a Coulomb potential can be obtained from a sum of Feynman ladder diagrams, analogously to the Schrödinger equation Brodsky (1010); Gross (1982); Neghabian and Gloeckle (1983). There are some instructive differences, however. Relativistic two-particle dynamics cannot be reduced to that of a single particle in an external field, as in (12). We must therefore consider a limit where the mass of one particle goes to infinity. The recoil of the heavy particle may then be neglected. The heavy particle gives rise to a static potential in its rest frame.

Consider again the diagrams in Fig. 11. Let the mass of the lower (antifermion) line be $m_{T}$ and its charge be $eZ$. We take $m_{T}\to\infty$ keeping the electron (fermion) momenta ${\boldsymbol{p}}_{1},{\boldsymbol{q}}_{1}$ fixed. The initial momentum of the antifermion is $p_{2}=(m_{T},\boldsymbol{0})$. Since ${\boldsymbol{q}}_{2}={\boldsymbol{p}}_{1}-{\boldsymbol{q}}_{1}$ is fixed as $m_{T}\to\infty$ the energy $q_{2}^{0}=\sqrt{m_{T}^{2}+{\boldsymbol{q}}_{2}^{2}}=m_{T}+{\cal O}\left(1/m_{T}\right)$. Thus kinematics ensures that no energy is transferred from the heavy target to the electron, i.e., $p_{1}^{0}=q_{1}^{0}$ up to ${\cal O}\left(1/m_{T}\right)$. In the diagrams of Fig. 11(b,c) the loop integral converges even without the antifermion propagator. Hence the limit $m_{T}\to\infty$ can be taken in the integrand. The antifermion spinors are non-relativistic so $\bar{v}({\boldsymbol{p}}_{2},\lambda_{2})(-ieZ)\gamma^{\mu}v({\boldsymbol{q}}_{2},\lambda_{2}^{\prime})\simeq -ieZ\,2m_{T}\delta^{\mu,0}\delta_{\lambda_{2},\lambda_{2}^{\prime}}$. The Born diagram of Fig. 11(a) is then, for large $m_{T}$ and relativistic electron momenta, $\displaystyle A_{1}({\boldsymbol{p}}_{1},{\boldsymbol{q}}_{1})=-ieZ\,2m_{T}\bar{u}({\boldsymbol{q}}_{1},\lambda^{\prime}_{1})\frac{-ie\gamma^{0}}{({\boldsymbol{q}}_{1}-{\boldsymbol{p}}_{1})^{2}}u({\boldsymbol{p}}_{1},\lambda_{1})$ (46) This corresponds to single scattering in the field of the heavy particle with charge $-eZ$.

When the electron is non-relativistic the positive energy pole of its propagator (16) dominates. Then the diagram of Fig. 11(c) with crossed photons is suppressed compared to the uncrossed diagram of Fig. 11(b). For relativistic electron momenta, however, the crossed diagram does contribute and is required to get the result $\displaystyle A_{2}({\boldsymbol{p}}_{1},{\boldsymbol{q}}_{1})=i(eZ)^{2}2m_{T}\int\frac{d^{3}{\boldsymbol{\ell}}}{(2\pi)^{3}}\,\bar{u}({\boldsymbol{q}}_{1},\lambda^{\prime}_{1})(-ie\gamma^{0})\frac{1}{({\boldsymbol{\ell}}-{\boldsymbol{p}}_{1})^{2}}\frac{i({\not{\ell}}+m)}{\ell^{2}-m^{2}+i\varepsilon}\frac{1}{({\boldsymbol{q}}_{1}-{\boldsymbol{\ell}})^{2}}(-ie\gamma^{0})u({\boldsymbol{p}}_{1},\lambda_{1})$ (47) corresponding to double scattering in the external potential.
Exercise A.3: Derive (47), and convince yourself that the exchange of three photons also reduces to scattering in an external potential. Hint: You need only consider the antifermion line, since the upper part is the same for the uncrossed and crossed diagrams. For ladders with $n$ exchanges all $n!$ diagrams with arbitrary crossings of the photons contribute. This means that the Bethe-Salpeter equation (36) reduces to the Dirac equation as $m_{T}\to\infty$ only for kernels of infinite degree in $\alpha$ (containing arbitrarily many crossed photons). The B-S equation can, however, be modified so that it does reduce to the Dirac equation even for finite kernels Gross (1982). In full QED a large charge $eZ$ is screened by the creation of $e^{+}e^{-}$ pairs. The $2\to 2$ ladder diagrams that give the Dirac equation do not describe true pair production. The $e^{+}e^{-}$ pairs in the Dirac state which are implied by Klein’s paradox Itzykson and Zuber (1980); Klein (1929); Hansen and Ravndal (1981) must therefore be virtual. The pairs only arise when the diagrams are time ordered, which is required to determine a state at an instant of time. Time ordering the electron propagator (16) gives a positive and a negative energy part, $\displaystyle S(t,{\boldsymbol{p}})\equiv\int\frac{dp^{0}}{2\pi}\,i\frac{({\not{p}}+m)e^{-ip^{0}t}}{p^{2}-m^{2}+i\varepsilon}=\frac{1}{2E_{p}}\sum_{\lambda}\Big{[}\theta(t)\,u({\boldsymbol{p}},\lambda)\bar{u}({\boldsymbol{p}},\lambda)\,e^{-itE_{p}}-\theta(-t)\,v(-{\boldsymbol{p}},\lambda)\bar{v}(-{\boldsymbol{p}},\lambda)\,e^{itE_{p}}\Big{]}$ (48) In strong potentials the electron can scatter into a negative energy state which evolves backward in time. This corresponds to an intermediate $e^{-}e^{+}e^{-}$ state, as illustrated in Fig. 13(b). In weakly coupled bound states, described by the Schrödinger equation, such higher Fock components are suppressed. Figure 13: Time ordered diagrams for the double scattering (47) in an external, static potential. (a) The intermediate electron has positive energy. (b) The intermediate electron has negative energy, corresponding to the creation and subsequent annihilation of an $e^{+}e^{-}$ pair. This is often referred to as a “$Z$-diagram”. Multiple scattering gives rise to Fock states with any number of intermediate $e^{+}e^{-}$ pairs. Despite its apparent one-particle nature the Dirac wave function describes many pairs in the free Fock state basis. In order to see this more explicitly we need to define the Dirac states in terms of field operators. The following study is based on work with Blaizot and was previously published in Hoyer (2016).
### IV.3 Dirac states The Dirac wave functions define eigenstates of the Dirac Hamiltonian, $H_{D}(t)=\int d{\boldsymbol{x}}\,\bar{\psi}(t,{\boldsymbol{x}})\big{[}-i{\overset{\rightarrow}{\boldsymbol{\nabla}}}\cdot\boldsymbol{\gamma}+m+e\not{A}({\boldsymbol{x}})\big{]}\psi(t,{\boldsymbol{x}})$ (49) where $\psi(t,{\boldsymbol{x}})$ now is the electron field with the canonical anticommutation relation $\left\\{{\psi^{\dagger}_{\alpha}(t,{\boldsymbol{x}})},{\psi_{\beta}(t,{\boldsymbol{y}})}\right\\}=\delta_{\alpha,\beta}\,\delta^{3}({\boldsymbol{x}}-{\boldsymbol{y}})$ (50) This field may be expanded in the standard operator basis, which creates/annihilates free $e^{\pm}$ states, $\displaystyle\psi_{\alpha}(t=0,{\boldsymbol{x}})$ $\displaystyle=$ $\displaystyle\int\frac{d{\boldsymbol{k}}}{(2\pi)^{3}2E_{k}}\sum_{\lambda}\Big{[}u_{\alpha}({\boldsymbol{k}},\lambda)e^{i{\boldsymbol{k}}\cdot{\boldsymbol{x}}}b_{{\boldsymbol{k}},\lambda}+v_{\alpha}({\boldsymbol{k}},\lambda)e^{-i{\boldsymbol{k}}\cdot{\boldsymbol{x}}}d^{\dagger}_{{\boldsymbol{k}},\lambda}\Big{]}$ (51) $\displaystyle\left\\{{b_{{\boldsymbol{p}},\lambda}},{b^{\dagger}_{{\boldsymbol{q}},\lambda^{\prime}}}\right\\}$ $\displaystyle=$ $\displaystyle\left\\{{d_{{\boldsymbol{p}},\lambda}},{d^{\dagger}_{{\boldsymbol{q}},\lambda^{\prime}}}\right\\}=2E_{p}\,(2\pi)^{3}\delta^{3}({\boldsymbol{p}}-{\boldsymbol{q}})\delta_{\lambda,\lambda^{\prime}}$ (52) I take the classical, $c$-numbered potential $A^{\mu}({\boldsymbol{x}})$ to be time independent. There are no physical (propagating) photons. Since the Hamiltonian is quadratic in the fermion fields it can be diagonalized Blaizot and Ripka (1985). The positive (44) and negative (45) energy Dirac wave functions determine $e^{-}$ and $e^{+}$ states defined at $t=0$ by $\displaystyle\left|{M_{n}}\right\rangle$ $\displaystyle=$ $\displaystyle\int d{\boldsymbol{x}}\sum_{\alpha}\psi^{\dagger}_{\alpha}({\boldsymbol{x}})\Psi_{n,\alpha}({\boldsymbol{x}})\left|{\Omega}\right\rangle\equiv c_{n}^{\dagger}\left|{\Omega}\right\rangle$ (53) $\displaystyle\left|{\overline{M}_{n}}\right\rangle$ $\displaystyle=$ $\displaystyle\int d{\boldsymbol{x}}\sum_{\alpha}{\overline{\Psi}}_{n,\alpha}^{\,{\dagger}}({\boldsymbol{x}})\psi_{\alpha}({\boldsymbol{x}})\left|{\Omega}\right\rangle\equiv\bar{c}_{n}^{\dagger}\left|{\Omega}\right\rangle$ (54) Charge conjugation transforms the electron field as $\displaystyle\mathcal{C}\psi^{\dagger}(t,{\boldsymbol{x}})\mathcal{C}^{\dagger}=i\psi^{T}(t,{\boldsymbol{x}})\gamma^{2}$ (55) Hence $\displaystyle\mathcal{C}\left|{M}\right\rangle=\int d{\boldsymbol{x}}\,\psi^{T}({\boldsymbol{x}})i\gamma^{2}\Psi({\boldsymbol{x}})\left|{\Omega}\right\rangle=\int d{\boldsymbol{x}}\,\Psi^{T}({\boldsymbol{x}})i\gamma^{2}\psi({\boldsymbol{x}})\left|{\Omega}\right\rangle$ (56) has the form of $\left|{\overline{M}}\right\rangle$ in (54), with wave function $\overline{\Psi}({\boldsymbol{x}})=i\gamma^{2}\Psi^{*}({\boldsymbol{x}})$. This wave function satisfies the Dirac equation (44) with $M\to-M$ and $eA^{\mu}\to-eA^{\mu}$, as expected for a positron. The vacuum state $\left|{\Omega}\right\rangle$ is an eigenstate of the Hamiltonian with eigenvalue taken to be zero, $H_{D}\left|{\Omega}\right\rangle=0$ (57) Two equivalent expressions for $\left|{\Omega}\right\rangle$ are given in (65) below. 
Using $\left[{H_{D}},{\psi^{\dagger}({\boldsymbol{x}})}\right]=\psi^{\dagger}({\boldsymbol{x}})\gamma^{0}(i{\overset{\leftarrow}{\boldsymbol{\nabla}}}\cdot\boldsymbol{\gamma}+m+e{\not{A}})\hskip 56.9055pt\left[{H_{D}},{\psi({\boldsymbol{x}})}\right]=-\gamma^{0}(-i{\overset{\rightarrow}{\boldsymbol{\nabla}}}\cdot\boldsymbol{\gamma}+m+e{\not{A}})\psi({\boldsymbol{x}})$ (58) we see that both states (53) and (54) are eigenstates of the Dirac Hamiltonian with positive eigenvalues, $\displaystyle H_{D}\left|{M_{n}}\right\rangle$ $\displaystyle=$ $\displaystyle M_{n}\left|{M_{n}}\right\rangle\hskip 28.45274ptM_{n}>0$ $\displaystyle H_{D}\left|{\overline{M}_{n}}\right\rangle$ $\displaystyle=$ $\displaystyle\overline{M}_{n}\left|{\overline{M}_{n}}\right\rangle\hskip 28.45274pt\overline{M}_{n}>0$ (59) In terms of the wave functions in momentum space, $\Psi_{n}({\boldsymbol{x}})=\int\frac{d{\boldsymbol{p}}}{(2\pi)^{3}}\,\Psi_{n}({\boldsymbol{p}})e^{i{\boldsymbol{p}}\cdot{\boldsymbol{x}}}\hskip 56.9055pt\overline{\Psi}_{n}({\boldsymbol{x}})=\int\frac{d{\boldsymbol{p}}}{(2\pi)^{3}}\,\overline{\Psi}_{n}({\boldsymbol{p}})e^{i{\boldsymbol{p}}\cdot{\boldsymbol{x}}}$ (60) the eigenstate operators defined in (53) and (54) can be expressed as $\displaystyle c_{n}$ $\displaystyle=$ $\displaystyle\sum_{\boldsymbol{p}}\Psi_{n}^{\dagger}({\boldsymbol{p}})\big{[}u({\boldsymbol{p}},\lambda)b_{{\boldsymbol{p}},\lambda}+v(-{\boldsymbol{p}},\lambda)d_{-{\boldsymbol{p}},\lambda}^{\dagger}\big{]}\equiv B_{np}b_{p}+D_{np}d^{\dagger}_{p}$ $\displaystyle\hskip 284.52756pt\sum_{\boldsymbol{p}}\equiv\int\frac{d{\boldsymbol{p}}}{(2\pi)^{3}2E_{p}}\sum_{\lambda}$ $\displaystyle\bar{c}_{n}$ $\displaystyle=$ $\displaystyle\sum_{\boldsymbol{p}}\big{[}b_{{\boldsymbol{p}},\lambda}^{\dagger}u^{\dagger}({\boldsymbol{p}},\lambda)+d_{-{\boldsymbol{p}},\lambda}v^{\dagger}(-{\boldsymbol{p}},\lambda)\big{]}\overline{\Psi}_{n}({\boldsymbol{p}})\equiv\overline{B}_{np}b_{p}^{\dagger}+\overline{D}_{np}d_{p}$ (62) In the second expressions on the rhs. a sum over the repeated index $p\equiv({\boldsymbol{p}},\lambda)$ is implied. In the weak binding limit ($|{\boldsymbol{p}}|\ll m$) the positive energy spinor wave function $\Psi_{n}$ has only upper components, whereas $\overline{\Psi}_{n}$ has only lower components. Then $\left|{M_{n}}\right\rangle$ is a single electron state, whereas $\left|{\overline{M}_{n}}\right\rangle$ is a single positron state. The operators $c_{n}$ and $\bar{c}_{n}$ are related to $b,d$ via the Bogoliubov transformations (IV.3) and (62). 
Using the commutation relations (52) and the orthonormality of the Dirac wave functions we see that they obey standard anticommutation relations, $\displaystyle\left\\{{c_{m}},{c_{n}^{\dagger}}\right\\}$ $\displaystyle=$ $\displaystyle\sum_{\boldsymbol{p}}\Psi_{m,\alpha}^{\dagger}({\boldsymbol{p}})\big{[}u_{\alpha}({\boldsymbol{p}},\lambda)u_{\beta}^{\dagger}({\boldsymbol{p}},\lambda)+v_{\alpha}(-{\boldsymbol{p}},\lambda)v_{\beta}^{\dagger}(-{\boldsymbol{p}},\lambda)\big{]}\Psi_{n,\beta}({\boldsymbol{p}})=\int\frac{d{\boldsymbol{p}}}{(2\pi)^{3}}\Psi_{m,\alpha}^{\dagger}({\boldsymbol{p}})\Psi_{n,\alpha}({\boldsymbol{p}})=\delta_{mn}$ $\displaystyle\left\\{{\bar{c}_{m}},{c_{n}^{\dagger}}\right\\}$ $\displaystyle=$ $\displaystyle 0$ $\displaystyle\left\\{{\bar{c}_{m}},{\bar{c}_{n}^{\dagger}}\right\\}$ $\displaystyle=$ $\displaystyle\int\frac{d{\boldsymbol{p}}}{(2\pi)^{3}}\bar{\Psi}_{m,\alpha}^{\dagger}({\boldsymbol{p}})\bar{\Psi}_{n,\alpha}({\boldsymbol{p}})=\delta_{mn}$ (63) Inserting the completeness condition for the Dirac wave functions into the Dirac Hamiltonian (49) gives, $\displaystyle H_{D}$ $\displaystyle=$ $\displaystyle\sum_{n}\big{[}M_{n}c_{n}^{\dagger}c_{n}+\bar{M}_{n}\bar{c}_{n}^{\dagger}\bar{c}_{n}\big{]}$ (64) Exercise A.4: Derive (64). The expression for the vacuum state may be found using the methods in Blaizot and Ripka (1985). $H_{D}\left|{\Omega}\right\rangle=0$ when, in terms of the $B$ and $D$ coefficients defined in (IV.3) and (62), $\left|{\Omega}\right\rangle=N_{0}\exp\Big{[}-b_{q}^{\dagger}\big{(}B^{-1}\big{)}_{qn}D_{nr}d_{r}^{\dagger}\Big{]}\left|{0}\right\rangle=N_{0}\exp\Big{[}-d_{r}^{\dagger}\big{(}{\overline{D}}^{\,-1}\big{)}_{rn}{\overline{B}}_{nq}b_{q}^{\dagger}\Big{]}\left|{0}\right\rangle$ (65) Sums over the repeated indices $q,n,r$ are implied in the exponents, and $N_{0}$ is a normalization constant. The perturbative vacuum satisfies $b_{p}\left|{0}\right\rangle=d_{p}\left|{0}\right\rangle=0$. The vacuum state $\left|{\Omega}\right\rangle$ describes the distribution of the $e^{+}e^{-}$ pairs that arise through perturbative contributions such as Fig. 13(b). It is a formal expression, involving a sum over all states $n$ and the inverted matrices $\big{(}B^{-1}\big{)}_{qn}$ and $\big{(}{\overline{D}}^{\,-1}\big{)}_{rn}$. In the weak binding limit $D_{nr}\to 0,\ \overline{B}_{nq}\to 0$ and $\left|{\Omega}\right\rangle\to\left|{0}\right\rangle$. The vacuum is “empty” in the bound state basis: $c_{n}\left|{\Omega}\right\rangle=\bar{c}_{n}\left|{\Omega}\right\rangle=0$. The pairs appear only in bases which do not diagonalize the Hamiltonian, such as the free basis generated by the $b^{\dagger}$ and $d^{\dagger}$ operators. Exercise A.5: (a) Show the equivalence of the two expressions for $\left|{\Omega}\right\rangle$ in (65). Hint: Prove that $B_{mp}\overline{B}_{np}+D_{mp}\overline{D}_{np}=0$. (b) Prove that $H_{D}\left|{\Omega}\right\rangle=0$. Hint: Note that $b_{p}$ essentially differentiates the exponents in (65).
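The structure of (65) is easy to check in a toy model. The sketch below (my illustration, not part of the original text) keeps a single $b$-mode and a single $d$-mode, so the Fock space is four-dimensional and the exponential in (65) truncates after the linear term; the Bogoliubov coefficients $B,D$ are arbitrary real numbers with $B^{2}+D^{2}=1$. It verifies numerically that $\{c,c^{\dagger}\}=1$ and that $c\left|{\Omega}\right\rangle=0$, i.e., that the vacuum is “empty” in the bound state basis.

```python
import numpy as np

# Toy version of (65): ONE b-mode and ONE d-mode, so the Fock space is 4-dimensional
# and exp(-(D/B) b^dag d^dag) truncates after the linear term (fermion statistics).
lower = np.array([[0, 1], [0, 0]], dtype=complex)   # fermionic lowering operator
Zs    = np.diag([1., -1.]).astype(complex)          # Jordan-Wigner string factor
I2    = np.eye(2, dtype=complex)

b  = np.kron(lower, I2)      # annihilates the b-mode
d  = np.kron(Zs, lower)      # annihilates the d-mode
bd, dd = b.conj().T, d.conj().T
vac = np.zeros(4, dtype=complex); vac[0] = 1.0      # perturbative vacuum |0>

B, D = 0.8, 0.6              # Bogoliubov coefficients, B^2 + D^2 = 1
c = B*b + D*dd               # cf. c_n = B_np b_p + D_np d_p^dag in (IV.3)
assert np.allclose(c @ c.conj().T + c.conj().T @ c, np.eye(4))   # {c, c^dag} = 1

Omega = (np.eye(4) - (D/B) * (bd @ dd)) @ vac       # |Omega> as in (65)
Omega /= np.linalg.norm(Omega)
print(np.linalg.norm(c @ Omega))                    # -> 0: c|Omega> = 0
```

The same cancellation, mode by mode, underlies Exercise A.5(b).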
The Dirac bound states (53) may be expressed in terms of their electron and positron (“hole”) distributions, $\displaystyle\left|{M_{n}}\right\rangle$ $\displaystyle=\int\frac{d{\boldsymbol{p}}}{(2\pi)^{3}2E_{p}}\sum_{s}\big{[}e_{n}^{-}({\boldsymbol{p}},s)b^{\dagger}_{{\boldsymbol{p}}s}+e_{n}^{+}({\boldsymbol{p}},s)d_{-{\boldsymbol{p}}s}\big{]}\left|{\Omega}\right\rangle$ $\displaystyle e_{n}^{-}({\boldsymbol{p}},s)$ $\displaystyle=u^{\dagger}({\boldsymbol{p}},s)\Psi_{n}({\boldsymbol{p}})\hskip 56.9055pte_{n}^{+}({\boldsymbol{p}},s)=v^{\dagger}(-{\boldsymbol{p}},s)\Psi_{n}({\boldsymbol{p}})$ (66) with momentum space wave functions $\Psi_{n}({\boldsymbol{p}})$ defined as in (60). The corresponding electron and positron densities $\displaystyle\rho_{n}(e^{\mp},p)$ $\displaystyle\equiv\int\frac{d\Omega_{p}\,p^{2}}{(2\pi)^{3}2E_{p}}\sum_{s}|e_{n}^{\mp}({\boldsymbol{p}},s)|^{2}$ (67) are normalized so that $\displaystyle\int_{0}^{\infty}dp\big{[}\rho_{n}(e^{-},p)+\rho_{n}(e^{+},p)\big{]}=1$ (68) ### IV.4 * Dirac wave functions for central $A^{0}$ potentials The wave functions $\Psi({\boldsymbol{x}})$ of Dirac bound states in rotationally symmetric potentials $eA^{0}({\boldsymbol{x}})=V(r)$ with ${\boldsymbol{A}}({\boldsymbol{x}})=0$ satisfy (${\boldsymbol{\alpha}}\equiv\gamma^{0}\boldsymbol{\gamma}$) $\displaystyle(-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}+m\gamma^{0})\Psi({\boldsymbol{x}})=\big{[}M-V(r)\big{]}\Psi({\boldsymbol{x}})$ (69) The states may be characterized by their mass $M$, angular momentum $j,\,j^{z}\equiv\lambda$ and parity $\eta_{P}=\pm 1$. The angular momentum operator in the fermion representation is $\displaystyle\boldsymbol{\mathcal{J}}$ $\displaystyle=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\,\boldsymbol{J}\,\psi({\boldsymbol{x}})$ (70) where ${\boldsymbol{J}}$ is the sum of the orbital ${\boldsymbol{L}}$ and spin ${\boldsymbol{S}}$ angular momenta (which are not separately conserved), $\displaystyle\boldsymbol{J}={\boldsymbol{L}}+{\boldsymbol{S}}={\boldsymbol{x}}\times(-i\boldsymbol{\nabla})+{\textstyle\frac{1}{2}}\gamma_{5}{\boldsymbol{\alpha}}$ (71) Operating on the states $\left|{M,j\lambda}\right\rangle$ in (53) we get $\displaystyle\boldsymbol{\mathcal{J}}\left|{M,j\lambda}\right\rangle$ $\displaystyle=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\,\boldsymbol{J}\,\Psi_{j\lambda}({\boldsymbol{x}})\left|{\Omega}\right\rangle\hskip 56.9055pt\boldsymbol{\mathcal{J}}^{2}\left|{M,j\lambda}\right\rangle=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\,\boldsymbol{J}^{2}\,\Psi_{j\lambda}({\boldsymbol{x}})\left|{\Omega}\right\rangle$ (72) The Dirac 4-spinor wave functions are thus required to satisfy $\displaystyle{\boldsymbol{J}}^{2}\Psi_{j\lambda}=j(j+1)\Psi_{j\lambda}\hskip 56.9055ptJ^{z}\Psi_{j\lambda}=\lambda\Psi_{j\lambda}$ (73) The parity operator is defined by $\displaystyle\mathbb{P}\psi^{\dagger}(t,{\boldsymbol{x}})\mathbb{P}^{\dagger}$ $\displaystyle=\psi^{\dagger}(t,-{\boldsymbol{x}})\gamma^{0}$ $\displaystyle\mathbb{P}\left|{M,j\lambda}\right\rangle$ $\displaystyle=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\,\gamma^{0}\,\Psi_{j\lambda}(-{\boldsymbol{x}})\left|{\Omega}\right\rangle=\eta_{P}\left|{M,j\lambda}\right\rangle$ (74) Hence the Dirac wave functions of states with parity $\eta_{P}$ should satisfy $\displaystyle\gamma^{0}\,\Psi_{j\lambda}(-{\boldsymbol{x}})=\eta_{P}\Psi_{j\lambda}({\boldsymbol{x}})$ (75) Denoting ${\boldsymbol{x}}=r(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)$ and
$\hat{\boldsymbol{x}}\equiv{\boldsymbol{x}}/r$, the angular dependence of $\Psi_{j\lambda}({\boldsymbol{x}})$ may be expressed using the orthonormalized 2-spinors Itzykson and Zuber (1980), $\displaystyle\phi_{j\lambda+}(\theta,\varphi)=\frac{1}{\sqrt{2j}}$ $\displaystyle\left(\begin{array}[]{c}\sqrt{j+\lambda}\,Y_{j-{\textstyle\frac{1}{2}}}^{\lambda-{\textstyle\frac{1}{2}}}(\theta,\varphi)\\\\[11.38109pt] \sqrt{j-\lambda}\,Y_{j-{\textstyle\frac{1}{2}}}^{\lambda+{\textstyle\frac{1}{2}}}(\theta,\varphi)\end{array}\right)$ (78) $\displaystyle\phi_{j\lambda-}(\theta,\varphi)=\frac{1}{\sqrt{2(j+1)}}$ $\displaystyle\left(\begin{array}[]{c}\sqrt{j-\lambda+1}\,Y_{j+{\textstyle\frac{1}{2}}}^{\lambda-{\textstyle\frac{1}{2}}}(\theta,\varphi)\\\\[11.38109pt] -\sqrt{j+\lambda+1}\,Y_{j+{\textstyle\frac{1}{2}}}^{\lambda+{\textstyle\frac{1}{2}}}(\theta,\varphi)\end{array}\right)=\boldsymbol{\sigma}\cdot\hat{\boldsymbol{x}}\,\phi_{j\lambda+}(\theta,\varphi)$ (81) The $\pm$ notation refers to $j=\ell\pm{\textstyle\frac{1}{2}}$, where $\ell$ is the order of the spherical harmonic function $Y_{\ell}^{m}(\theta,\varphi)$, which becomes the conserved orbital angular momentum in the non-relativistic limit. In the standard notation $J^{\pm}=J^{x}\pm iJ^{y}$, $\displaystyle\boldsymbol{\sigma}\cdot{\boldsymbol{L}}$ $\displaystyle={\textstyle\frac{1}{2}}\sigma^{+}L^{-}+{\textstyle\frac{1}{2}}\sigma^{-}L^{+}+\sigma^{z}L^{z}$ $\displaystyle L^{\pm}\left|{\ell,\lambda}\right\rangle$ $\displaystyle=\sqrt{(\ell\mp\lambda)(\ell\pm\lambda+1)}\left|{\ell,\lambda\pm 1}\right\rangle$ (82) it is straightforward to verify that $\displaystyle\boldsymbol{\sigma}\cdot{\boldsymbol{L}}\,\phi_{j\lambda\pm}$ $\displaystyle=c_{j\pm}\phi_{j\lambda\pm}\hskip 56.9055pt\left\\{\begin{array}[]{l}c_{j+}=j-{\textstyle\frac{1}{2}}\\\\[5.69054pt] c_{j-}=-(j+{\textstyle\frac{3}{2}})\end{array}\right.$ (85) $\displaystyle({\boldsymbol{L}}+{\textstyle\frac{1}{2}}\boldsymbol{\sigma})^{2}\phi_{j\lambda\pm}$ $\displaystyle=j(j+1)\phi_{j\lambda\pm}$ (86) The 4-spinor Dirac wave functions $\Psi_{j\lambda\pm}({\boldsymbol{x}})$ describing the states $\left|{M,j\lambda\pm}\right\rangle$ of (53) with $\eta_{P}=(-1)^{j\mp{\textstyle\frac{1}{2}}}$ may now be defined in terms of two radial functions, $\displaystyle\Psi_{j\lambda\pm}({\boldsymbol{x}})$ $\displaystyle=\big{[}F_{j\pm}(r)+i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,G_{j\pm}(r)\big{]}\left(\begin{array}[]{c}\phi_{j\lambda\pm}(\theta,\varphi)\\\\[5.69054pt] 0\end{array}\right)$ (89) Since $\left[{{\boldsymbol{J}}},{{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}}\right]=0$ the angular momentum quantum numbers (73) are ensured by (85). The parity $\eta_{P}=(-1)^{j\mp 1/2}$ follows from $\gamma^{0}{\boldsymbol{\alpha}}\cdot(-\hat{\boldsymbol{x}})={\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\gamma^{0}$ and $\phi_{j\lambda\pm}(\pi-\theta,\varphi+\pi)=(-1)^{j\mp 1/2}\phi_{j\lambda\pm}(\theta,\varphi)$. The eigenvalue condition (IV.3) determines the bound state equation for $\Psi_{j\lambda\pm}({\boldsymbol{x}})$. 
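The relation $\phi_{j\lambda-}=\boldsymbol{\sigma}\cdot\hat{\boldsymbol{x}}\,\phi_{j\lambda+}$ in (81) can be spot-checked numerically. The sketch below is my illustration (with the $\ell\leq 1$ spherical harmonics written out by hand in the Condon-Shortley convention) for $j={\textstyle\frac{1}{2}}$, $\lambda={\textstyle\frac{1}{2}}$ at an arbitrary angle:

```python
import numpy as np

# Spherical harmonics for l <= 1 (Condon-Shortley phases), written out by hand
Y00 = lambda th, ph: 1/np.sqrt(4*np.pi) + 0j
Y10 = lambda th, ph: np.sqrt(3/(4*np.pi)) * np.cos(th) + 0j
Y11 = lambda th, ph: -np.sqrt(3/(8*np.pi)) * np.sin(th) * np.exp(1j*ph)

j, lam = 0.5, 0.5
th, ph = 0.7, 1.9     # arbitrary test angles

# (78): phi_{j lam +} built from Y_{j-1/2}; the sqrt(j-lam) component vanishes here
phi_plus  = np.array([np.sqrt(j + lam)*Y00(th, ph), 0.0]) / np.sqrt(2*j)
# (78): phi_{j lam -} built from Y_{j+1/2}
phi_minus = np.array([np.sqrt(j - lam + 1)*Y10(th, ph),
                      -np.sqrt(j + lam + 1)*Y11(th, ph)]) / np.sqrt(2*(j + 1))

sigma_xhat = np.array([[np.cos(th),               np.sin(th)*np.exp(-1j*ph)],
                       [np.sin(th)*np.exp(1j*ph), -np.cos(th)              ]])
print(np.allclose(sigma_xhat @ phi_plus, phi_minus))   # True: eq. (81)
```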
For the bound state equation we need the relations $\displaystyle-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}$ $\displaystyle=-i({\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}})\,\partial_{r}-\frac{1}{r}\,{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\times\boldsymbol{L}$ $\displaystyle-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}(i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}})$ $\displaystyle=\frac{2}{r}+\partial_{r}+\frac{1}{r}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}$ (90) Exercise A.6: Derive the identities (IV.4). Hint: Use $\alpha^{i}\alpha^{j}=\delta^{ij}+i\gamma_{5}\epsilon^{ijk}\alpha^{k}$. We may furthermore use $\displaystyle{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\times\boldsymbol{L}\left(\begin{array}[]{c}\phi_{j\lambda\pm}\\\\[5.69054pt] 0\end{array}\right)=\left(\begin{array}[]{c}0\\\\[5.69054pt] \boldsymbol{\sigma}\cdot\hat{\boldsymbol{x}}\times\boldsymbol{L}\,\phi_{j\lambda\pm}\end{array}\right)=-i\left(\begin{array}[]{c}0\\\\[5.69054pt] (\boldsymbol{\sigma}\cdot\hat{\boldsymbol{x}})(\boldsymbol{\sigma}\cdot{\boldsymbol{L}})\phi_{j\lambda\pm}\end{array}\right)=-c_{j\pm}\,i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\left(\begin{array}[]{c}\phi_{j\lambda\pm}\\\\[5.69054pt] 0\end{array}\right)$ (99) with $c_{j\pm}$ given in (85), while $\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}=2{\boldsymbol{S}}\cdot{\boldsymbol{L}}$ contributes $c_{j\pm}$ with unit Dirac matrix. The $m\gamma^{0}$ term in $H_{D}$ gives $m(F_{j\pm}-i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,G_{j\pm})$. Identifying the coefficients in $H_{D}\left|{M}\right\rangle=M\left|{M}\right\rangle$ of the two Dirac structures in (89) we get $\displaystyle\Big{(}\frac{j+3/2}{r}+\partial_{r}\Big{)}G_{j+}=(M_{j+}-V-m)F_{j+}$ $\displaystyle\Big{(}\frac{j-1/2}{r}-\partial_{r}\Big{)}F_{j+}=(M_{j+}-V+m)G_{j+}$ (100) $\displaystyle-\Big{(}\frac{j+3/2}{r}+\partial_{r}\Big{)}F_{j-}=(M_{j-}-V+m)G_{j-}$ $\displaystyle-\Big{(}\frac{j-1/2}{r}-\partial_{r}\Big{)}G_{j-}=(M_{j-}-V-m)F_{j-}$ (101) These reduce to second order equations for $F$ and $G$ separately. Suppressing the subscripts $j\pm$, $\displaystyle F^{\prime\prime}+\Big{(}\frac{2}{r}+\frac{V^{\prime}}{M-V+m}\Big{)}F^{\prime}+\Big{[}(M-V)^{2}-m^{2}-\frac{c(c+1)}{r^{2}}-\frac{c\,V^{\prime}}{r(M-V+m)}\Big{]}F=0$ $\displaystyle G^{\prime\prime}+\Big{(}\frac{2}{r}+\frac{V^{\prime}}{M-V-m}\Big{)}G^{\prime}+\Big{[}(M-V)^{2}-m^{2}-\frac{(c+1)(c+2)}{r^{2}}+\frac{(c+2)V^{\prime}}{r(M-V-m)}\Big{]}G=0$ (102) At the potentially singular points $M-V\pm m=0$ the solutions behave as $(M-V\pm m)^{\beta}$, with $\beta=0$ or $\beta=2$, and are thus locally normalizable there. If $\Psi_{j\lambda+}({\boldsymbol{x}})$ solves the Dirac equation (69) then $\widetilde{\Psi}_{j\lambda+}({\boldsymbol{x}})\equiv\gamma_{5}\Psi_{j\lambda+}({\boldsymbol{x}})$ solves this equation with $m\to-m$ and the same eigenvalue $M_{j+}$. This shows up as a symmetry of the bound state equations.
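The elimination of $G$ that leads from the coupled first-order system (100) to the second-order equation (102) for $F$ is mechanical but error-prone. A symbolic check along the following lines (my sketch, using sympy, with $c=c_{j+}=j-{\textstyle\frac{1}{2}}$ and an arbitrary potential $V(r)$) confirms it:

```python
import sympy as sp

r, M, m, c = sp.symbols('r M m c', positive=True)
V = sp.Function('V')(r)       # arbitrary central potential
F = sp.Function('F')(r)

# Second equation of (100), with c = j - 1/2, solved for G:
G = (c*F/r - F.diff(r)) / (M - V + m)
# First equation of (100): ((c+2)/r + d/dr) G - (M - V - m) F = 0
residual = ((c + 2)/r)*G + G.diff(r) - (M - V - m)*F
# Second-order equation (102) for F:
eq102 = (F.diff(r, 2) + (2/r + V.diff(r)/(M - V + m))*F.diff(r)
         + ((M - V)**2 - m**2 - c*(c + 1)/r**2
            - c*V.diff(r)/(r*(M - V + m)))*F)
print(sp.simplify(residual + eq102))   # -> 0, i.e. residual = -eq102
```

The same route, with $m\to-m$, makes the symmetry just mentioned explicit.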
Using $\phi_{j\lambda-}=\boldsymbol{\sigma}\cdot\hat{\boldsymbol{x}}\,\phi_{j\lambda+}$, $\displaystyle\widetilde{\Psi}_{j\lambda+}({\boldsymbol{x}})$ $\displaystyle=\big{[}F_{j+}(r)+i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,G_{j+}(r)\big{]}\left(\begin{array}[]{c}0\\\\[5.69054pt] \phi_{j\lambda+}\end{array}\right)=\big{[}{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,F_{j+}(r)+i\,G_{j+}(r)\big{]}\left(\begin{array}[]{c}\boldsymbol{\sigma}\cdot\hat{\boldsymbol{x}}\,\phi_{j\lambda+}\\\\[5.69054pt] 0\end{array}\right)$ (107) $\displaystyle=i\big{[}G_{j+}(r)-i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,F_{j+}(r)\big{]}\left(\begin{array}[]{c}\phi_{j\lambda-}\\\\[5.69054pt] 0\end{array}\right)=\Psi_{j\lambda-}({\boldsymbol{x}})\big{[}F_{j-}\to iG_{j+},\,G_{j-}\to-iF_{j+}\big{]}$ (110) Eq. (101) is indeed seen to transform into (100) when $m\to-m$, $M_{j-}\to M_{j+}$ and the $j-$ radial wave functions are replaced with the $j+$ functions as indicated in (107). This means that the solution of (101), if allowed by the quantum numbers, is given by $F_{j-}(r,m)=G_{j+}(r,-m),\ G_{j-}(r,m)=-F_{j+}(r,-m)$ with the same eigenvalue $M_{j-}=M_{j+}$. “Squaring” the Dirac equation (69) by multiplying it with $-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}+m\gamma^{0}$ gives $\displaystyle\big{(}-\boldsymbol{\nabla}^{2}+m^{2}\big{)}\Psi({\boldsymbol{x}})=\big{[}(M-V)^{2}+i{\boldsymbol{\alpha}}\cdot(\boldsymbol{\nabla}V)\big{]}\Psi({\boldsymbol{x}})$ (111) Since this equation depends on $m$ only via $m^{2}$ the eigenvalue $M$ is independent of the sign of $m$. The degeneracy $M_{j-}=M_{j+}$ is familiar in the case of a Coulomb potential, to which I turn next. ### IV.5 * Coulomb potential $V(r)=-\alpha/r$ There is a standard and elegant method Itzykson and Zuber (1980) for finding the Dirac spectrum in the case of a Coulomb potential $V(r)=-\alpha/r$. One starts from the squared Dirac equation (111), determining the eigenvalues of the Dirac matrix $i{\boldsymbol{\alpha}}\cdot(\boldsymbol{\nabla}V)$. For $V(r)=-\alpha/r$ all terms in (111) may then be formally identified, based on their $r$-dependence, with those of the Schrödinger equation, $\displaystyle\Big{(}-\frac{1}{2m}\boldsymbol{\nabla}^{2}-\frac{\alpha}{r}\Big{)}\Psi({\boldsymbol{x}})=E_{b}\Psi({\boldsymbol{x}})$ (112) The known solution of this equation allows one to determine the masses $M$ of the Dirac states, $\displaystyle M_{nj}=m\left[1+\Bigg{(}\frac{\alpha}{n-(j+{\textstyle\frac{1}{2}})+\sqrt{(j+{\textstyle\frac{1}{2}})^{2}-\alpha^{2}}}\Bigg{)}^{2}\,\right]^{-{\textstyle\frac{1}{2}}}$ (113) The principal quantum number $n=1,2,3,\ldots$ and $j\leq n-{\textstyle\frac{1}{2}}$. There are two states for each mass, $M_{nj+}=M_{nj-}$, except only $M_{nj+}$ for $j=n-{\textstyle\frac{1}{2}}$. For $\alpha\to 0$ we recover the non-relativistic Schrödinger result which depends only on $n$, $M_{nj}=m(1-\alpha^{2}/2n^{2})$. For $r\to\infty$ (IV.4) reduces to $F^{\prime\prime}+(M^{2}-m^{2})F=0$, implying $F(r\to\infty)\sim\exp(-r\sqrt{m^{2}-M^{2}})$. Hence $M<m$ for normalizable solutions, as in (113). I illustrate using the radial wave functions $F(r)$ and $G(r)$ of the states with maximal spin $j=n-{\textstyle\frac{1}{2}}$, and of the first radial excitation with $j=n-{\textstyle\frac{3}{2}}$. The wave functions $\Psi_{nj\lambda\pm}({\boldsymbol{x}})$ are expressed in terms of the radial functions $F_{nj\pm}$ and $G_{nj\pm}$ as in (89), with the angular functions $\phi_{j\lambda\pm}$ given in (78).
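Before listing the explicit radial solutions, (113) is easy to evaluate numerically. The sketch below (mine; units $m=1$) shows the fine-structure pattern: the binding energies approach the Schrödinger values $-\alpha^{2}m/2n^{2}$ for small $\alpha$, and $\gamma$ becomes imaginary (hence $M$ complex) for $\alpha>j+{\textstyle\frac{1}{2}}$:

```python
import numpy as np

m, alpha = 1.0, 1/137.035999

def M_dirac(n, j, alpha=alpha, m=m):
    """Dirac-Coulomb mass, eq. (113); complex for alpha > j + 1/2."""
    g = np.sqrt(complex((j + 0.5)**2 - alpha**2))
    return m / np.sqrt(1 + (alpha / (n - (j + 0.5) + g))**2)

for n in (1, 2):
    for j in (0.5, 1.5):
        if j <= n - 0.5:
            Eb = M_dirac(n, j).real - m
            print(f"n={n}, j={j}: E_b = {Eb:.12f}"
                  f"  (Schroedinger {-alpha**2/(2*n**2):.12f})")

print(M_dirac(1, 0.5, alpha=1.2))   # complex: no real eigenvalue for alpha > 1
```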
Maximal spin, $j=n-{\textstyle\frac{1}{2}}$ $\displaystyle M_{j+\frac{1}{2},j,+}$ $\displaystyle=\frac{m\gamma}{j+{\textstyle\frac{1}{2}}}$ $\displaystyle\gamma\equiv\sqrt{(j+{\textstyle\frac{1}{2}})^{2}-\alpha^{2}}$ $\displaystyle F_{j+\frac{1}{2},j,+}(r)$ $\displaystyle=N_{1}\,r^{\gamma-1}\exp(-\mu r)$ $\displaystyle\mu\equiv\sqrt{m^{2}-M^{2}}=\frac{\alpha m}{j+{\textstyle\frac{1}{2}}}$ $\displaystyle G_{j+\frac{1}{2},j,+}(r)$ $\displaystyle=N_{1}\,\frac{\mu}{M+m}\,r^{\gamma-1}\exp(-\mu r)$ $\displaystyle N_{1}^{2}\equiv\frac{(2\mu)^{1+2\gamma}}{\Gamma(1+2\gamma)}\Big{[}1+\frac{\mu^{2}}{(M+m)^{2}}\Big{]}^{-1}$ (114) This state is not degenerate, i.e., there are no radial functions $F_{j+1/2,j,-},\ G_{j+1/2,j,-}$. In momentum space (60) the wave functions of the $n=1$ ground state $\left|{M,n=1,j={\textstyle\frac{1}{2}},\lambda,+}\right\rangle$ are, with $\chi_{\frac{1}{2}}=(1\ 0)^{\rm T}$ and $\chi_{-\frac{1}{2}}=(0\ 1)^{\rm T}$, $\displaystyle\Psi_{1,1/2,\lambda,+}({\boldsymbol{p}})$ $\displaystyle=\big{[}f(p)+{\boldsymbol{\alpha}}\cdot\hat{{\boldsymbol{p}}}\,g(p)\big{]}\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)$ (117) $\displaystyle f(p)$ $\displaystyle=\sqrt{4\pi}\,N_{1}\,\Gamma(1+\gamma)\,\frac{\sin[\delta(1+\gamma)/2]}{p(\alpha^{2}m^{2}+p^{2})^{(1+\gamma)/2}}\hskip 56.9055pt\exp(i\delta)\equiv\frac{\alpha m+ip}{\alpha m-ip}$ $\displaystyle g(p)$ $\displaystyle=-\frac{\sqrt{4\pi}\,\alpha}{1+\gamma}\,N_{1}\,\Gamma(\gamma)\,\frac{\partial}{\partial p}\Big{[}\frac{\sin(\delta\gamma/2)}{p(\alpha^{2}m^{2}+p^{2})^{\gamma/2}}\Big{]}$ (118) The electron and positron density distributions (67) are then $\displaystyle\rho(e^{\mp},p)=\frac{p^{2}}{4\pi^{2}\,E_{p}}\big{[}E_{p}(f^{2}+g^{2})\pm m(f^{2}-g^{2})\pm 2p\,f\,g\big{]}$ (119) The electron density $\rho(e^{-},p)$ is strongly dominant. For $\alpha=1/137$ the contribution of the positron density to the state normalization is a mere $3.2\cdot 10^{-12}$. Even for $\alpha=0.999$ (see Fig. 14) the positron contributes only 3% to the normalization. The $j={\textstyle\frac{1}{2}}$ eigenvalue $M_{1,\frac{1}{2},+}$ is complex for $\alpha>1$. Figure 14: Electron and positron densities (119) in the $\left|{M,1,{\textstyle\frac{1}{2}},\lambda,+}\right\rangle$ Dirac state (117) with $V(r)=-\alpha/r$ and $\alpha=0.999$. The positron density is multiplied by a factor 10. 
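The normalization constant $N_{1}$ in (114) may be checked by direct quadrature; a sketch (mine, using scipy, with $m=1$ and $\alpha=0.999$ as in Fig. 14) verifies $\int_{0}^{\infty}dr\,r^{2}\big[F^{2}(r)+G^{2}(r)\big]=1$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

m, alpha, j = 1.0, 0.999, 0.5
gam = np.sqrt((j + 0.5)**2 - alpha**2)
M   = m*gam/(j + 0.5)
mu  = alpha*m/(j + 0.5)                    # = sqrt(m^2 - M^2)
N1sq = (2*mu)**(1 + 2*gam) / Gamma(1 + 2*gam) / (1 + mu**2/(M + m)**2)

F = lambda r: np.sqrt(N1sq) * r**(gam - 1) * np.exp(-mu*r)   # eq. (114)
G = lambda r: mu/(M + m) * F(r)
norm, err = quad(lambda r: r**2*(F(r)**2 + G(r)**2), 0, np.inf)
print(norm)    # -> 1.0
```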
First radial excitation, $j=n-{\textstyle\frac{3}{2}}$ $\displaystyle M_{j+\frac{3}{2},j,\pm}$ $\displaystyle=\frac{m(1+\gamma)}{\sqrt{(j+{\textstyle\frac{1}{2}})^{2}+1+2\gamma}}$ $\displaystyle\gamma\equiv\sqrt{(j+{\textstyle\frac{1}{2}})^{2}-\alpha^{2}}$ $\displaystyle F_{j+\frac{3}{2},j,+}(r)$ $\displaystyle=N_{2+}\,r^{\gamma-1}\exp(-\mu r)\Big{[}r-\frac{\alpha(1+2\gamma)}{(M+m)(j+{\textstyle\frac{1}{2}}-\gamma)+\alpha\mu}\Big{]}$ $\displaystyle\mu\equiv\sqrt{m^{2}-M^{2}}=\frac{\alpha M}{1+\gamma}$ $\displaystyle G_{j+\frac{3}{2},j,+}(r)$ $\displaystyle=N_{2+}\,r^{\gamma-1}\exp(-\mu r)\Big{[}\frac{\mu}{M+m}\,r-\frac{2j+1-2\gamma}{2\alpha}\,\frac{\alpha(1+2\gamma)}{(M+m)(j+{\textstyle\frac{1}{2}}-\gamma)+\alpha\mu}\Big{]}$ (120) The degenerate state with the same mass and spin but opposite parity has, as argued at the end of the previous section, radial functions with $F_{-}(m)=G_{+}(-m)$ and $G_{-}(m)=-F_{+}(-m)$ (up to the normalizations $N_{2\pm}$), $\displaystyle F_{j+\frac{3}{2},j,-}(r)$ $\displaystyle=N_{2-}\,r^{\gamma-1}\exp(-\mu r)\Big{[}\frac{\mu}{M-m}\,r-\frac{2j+1-2\gamma}{2\alpha}\frac{\alpha(1+2\gamma)}{(M-m)(j+{\textstyle\frac{1}{2}}-\gamma)+\alpha\mu}\Big{]}$ $\displaystyle G_{j+\frac{3}{2},j,-}(r)$ $\displaystyle=-N_{2-}\,r^{\gamma-1}\exp(-\mu r)\Big{[}r-\frac{\alpha(1+2\gamma)}{(M-m)(j+{\textstyle\frac{1}{2}}-\gamma)+\alpha\mu}\Big{]}$ (121) Unbound states, $M^{2}-m^{2}>0$ States with masses $M>m$ are unbound. The radial equations (100) imply for all $\left|{M,nj\lambda,+}\right\rangle$ states, $\displaystyle F_{nj+}(r\to\infty)$ $\displaystyle=N\,r^{\beta}\exp(\pm i\mu r)$ $\displaystyle\mu=\sqrt{M^{2}-m^{2}}$ $\displaystyle G_{nj+}(r\to\infty)$ $\displaystyle=N\,\frac{\mp i\mu}{M+m}r^{\beta}\exp(\pm i\mu r)$ $\displaystyle\beta=\mp\frac{i\alpha}{\sqrt{1-m^{2}/M^{2}}}-1$ (122) In the absence of a normalization condition the mass spectrum is continuous. At large $r$ (where $V\to 0$) the solution is a spherical wave with momentum $p=\pm\mu$, modulated by the phase factor $r^{\beta+1}$. The norm $r^{2}|\Psi|^{2}$ tends to a constant at large $r$. ### IV.6 * Linear potential $V(r)=V^{\prime}r$ Hadron phenomenology, and particularly the description Eichten _et al._ (1980, 2008) of quarkonia using the Schrödinger equation with the Cornell potential (2), motivates studying Dirac states with a linear potential, $eA^{0}({\boldsymbol{x}})=V(|{\boldsymbol{x}}|)=V^{\prime}r$, ${\boldsymbol{A}}=0$. The solutions of the Dirac equation for polynomial potentials have been known since the 1930s Plesset (1932) to be quite different from those of the Schrödinger equation. I first recall the solutions of the Schrödinger equation (112) for a linear potential. The $\ell=0$ wave function $\phi(r)$ satisfies $\displaystyle\Big{[}-\frac{1}{2m}\Big{(}\partial_{r}^{2}+\frac{2}{r}\partial_{r}\Big{)}+V^{\prime}r\Big{]}\phi(r)=E_{b}\phi(r)$ (123) The normalizable solutions are given by an Airy function, $\displaystyle\phi(r)=\frac{N}{r}\,\textrm{Ai}\big{[}(2mV^{\prime})^{1/3}(r-E_{b}/V^{\prime})\big{]}$ (124) The discrete values of the binding energy $E_{b}$ are determined by requiring $\phi(r=0)$ to be regular, which implies $\textrm{Ai}\big{[}-(2mV^{\prime})^{1/3}E_{b}/V^{\prime}\big{]}=0$. Since the potential grows linearly with $r$ all states are bound (confined), and their wave functions vanish exponentially for $r\to\infty$.
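The Schrödinger levels thus follow directly from the zeros $a_{k}<0$ of the Airy function, $E_{b}^{(k)}=-a_{k}\,(V^{\prime 2}/2m)^{1/3}$. A minimal numerical sketch (mine, using scipy; $m=1$, $V^{\prime}=0.2$ are arbitrary illustration values):

```python
import numpy as np
from scipy.special import ai_zeros

m, Vp = 1.0, 0.2                       # illustration values for the mass and V'
a_k = ai_zeros(5)[0]                   # first five (negative) zeros of Ai
E_b = -a_k * (Vp**2/(2*m))**(1/3)      # from Ai[-(2 m V')^{1/3} E_b/V'] = 0
print(E_b)                             # discrete, positive binding energies
```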
The Dirac radial functions on the other hand are oscillatory at large $r$, as seen from (100) and (IV.4), $\displaystyle F(r\to\infty)$ $\displaystyle\simeq N\,r^{\beta}\exp\big{[}i(M-V)^{2}/2V^{\prime}\big{]}$ $\displaystyle\beta=-\frac{im^{2}}{2V^{\prime}}-1$ $\displaystyle G(r\to\infty)$ $\displaystyle\simeq i\,F(r\to\infty)$ (125) This result (and its complex conjugate) is independent of the quantum numbers $n,j,\pm$. I retained some non-leading terms in the exponent for ease of notation. The essential feature is that $\displaystyle F(r\to\infty)$ $\displaystyle\sim-iG(r\to\infty)\sim Nr^{\beta}\exp\big{[}iV^{\prime}r^{2}/2\big{]}$ $\displaystyle r^{2}|F(r\to\infty)|^{2}=r^{2}|G(r\to\infty)|^{2}=N^{2}$ (126) Thus the normalization integral diverges even though the potential is confining. In the absence of a normalization constraint the mass spectrum is continuous for all $M$, in contrast to the discrete spectrum of the Schrödinger equation. A Dirac electron state (53) has Fock components with positrons in the vacuum $\left|{\Omega}\right\rangle$ (65). This is seen perturbatively in time ordered $Z$-diagrams such as in Fig. 13(b). The distribution of the positrons is traced by the $d$-operator in the state creation operator $c_{n}^{\dagger}$ (IV.3), motivating the definitions of the $e^{\mp}({\boldsymbol{p}},s)$ probabilities in (IV.3). A linear potential confines electrons, limiting their distribution to distances where $V^{\prime}r\lesssim M-m$. The same potential repels positrons, pushing them to large distances with kinetic energy large enough to cancel their negative potential, $p-V^{\prime}r\sim M+m$. The exponent $\exp(ir\,V^{\prime}r/2)$ of $F(r\to\infty)$ in (126) implies momenta increasing with $r$ as $p\sim V^{\prime}r/2$. The relation between the $F$ and $G$ radial functions allows one to verify that the $e^{+}$ distribution indeed dominates at large momenta (equivalent to large $r$), $\displaystyle\lim_{|{\boldsymbol{p}}|\to\infty}\,\frac{e^{-}({\boldsymbol{p}},s)}{e^{+}({\boldsymbol{p}},s)}=\lim_{|{\boldsymbol{p}}|\to\infty}\,\frac{u^{\dagger}({\boldsymbol{p}},s)\Psi({\boldsymbol{p}})}{v^{\dagger}(-{\boldsymbol{p}},s)\Psi({\boldsymbol{p}})}=0$ (127) Exercise A.7: Derive (127) for a state with $j=1/2$ and parity $\eta_{P}=+1$. Hint: Calculate the momentum space wave function (60) for $|{\boldsymbol{p}}|\to\infty$ using the stationary phase approximation. An equivalent interpretation is that the wave function is a superposition of electrons, confined to low $r$, and accelerating/decelerating positrons at large $r$, whose negative kinetic energy balances the positive potential. The spectrum is continuous because the positron energies are continuous. The tunneling of the $e^{+}$ to $r\simeq 0$ is exponentially suppressed with growing fermion mass $m$. Hence if the initial condition $G(r=0)/F(r=0)$ of the radial equations (100) is such as to include a positron contribution (beyond the tunneling rate) the wave function will grow rapidly with $r$ and start oscillating with an amplitude which is exponentially large in $m$. The precise values of $G(r=0)/F(r=0)$ which suppress the positrons at $r=0$ correspond to the discrete bound state masses $M$ of the normalizable solutions of the Schrödinger equation. All other values of $M$ give, in the $m\to\infty$ limit, wave functions which grow exponentially with $r$.
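The continuous spectrum is easy to exhibit numerically. The sketch below (my illustration; the values $m=1$, $V^{\prime}=0.2$ and the chosen $M$ are arbitrary) integrates the $j={\textstyle\frac{1}{2}}$, positive-parity radial equations (100) with $V=V^{\prime}r$ from the regular behavior at $r=0$, and shows $r^{2}\big[F^{2}(r)+G^{2}(r)\big]$ approaching a constant at large $r$ as in (126), so no choice of $M$ is excluded by normalizability:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, Vp, M = 1.0, 0.2, 2.3          # ANY M gives a solution: continuous spectrum
V = lambda r: Vp * r

def rhs(r, y):                    # eqs. (100) for j = 1/2, parity +
    F, G = y
    dF = -(M - V(r) + m) * G      # from (0/r - d/dr) F = (M - V + m) G
    dG = (M - V(r) - m) * F - 2*G/r
    return [dF, dG]

eps = 1e-6                        # regular behavior at r -> 0: F ~ 1, G ~ (M-m) r/3
sol = solve_ivp(rhs, (eps, 60.0), [1.0, (M - m)*eps/3],
                rtol=1e-10, atol=1e-12, dense_output=True)
for r in (20.0, 30.0, 40.0, 50.0, 60.0):
    F, G = sol.sol(r)
    print(r, r**2 * (F**2 + G**2))   # tends to a constant: not normalizable
```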
These properties were confirmed quantitatively in $D=1+1$ dimensions, using the analytic expression of the wave function in terms of confluent hypergeometric functions Dietrich _et al._ (2013); Hoyer (2016). ## V Fock expansion of bound states in temporal ($A^{0}=0$) gauge ### V.1 Definition of the bound state method #### V.1.1 Considerations Perturbative expansions depend on the choice of a lowest order approximation. The perturbative $S$-matrix expands around free states, which works well for scattering amplitudes. Bound states are stationary in time and thus, in a sense, the very opposites of scattering amplitudes. QED approaches to atoms have been thoroughly considered, with conceptual milestones such as the Bethe-Salpeter equation Salpeter and Bethe (1951) (1951), the realization that it is not unique Caswell and Lepage (1978) (1978) and NRQED Caswell and Lepage (1986) (1986). Even in a first approximation atoms are described by wave functions that are non-polynomial in $\alpha$. NRQED expands around states defined by the Schrödinger equation. Poincaré symmetry can be explicitly realized only for generators that mutually commute. Equal-time bound states are defined as eigenstates of the Hamiltonian, which in their rest frame have explicit (kinematic) symmetry under space translations and rotations. The frame dependence of these states is defined by boosts. It is not trivial to determine the boost generators of atoms, which are spatially extended. Alternatively, states with general CM momentum ${\boldsymbol{P}}$ may be found as eigenstates of the Hamiltonian. Full rotational invariance is lost for ${\boldsymbol{P}}\neq 0$, but the requirement of a correct ${\boldsymbol{P}}$-dependence of the energy, $E({\boldsymbol{P}})=\sqrt{{\boldsymbol{P}}^{2}+M^{2}}\,$, is a strong constraint. Field theory ensures covariance, as emphasized by Weinberg in the preface of Weinberg (2005): “The point of view of this book is that quantum field theory is the way it is because (aside from theories like string theory that have an infinite number of particle types) it is the only way to reconcile the principles of quantum mechanics (including the cluster decomposition property) with those of special relativity.” The examples below will illustrate how subtly Poincaré covariance is realized for bound states. Much remains to be understood in this regard. Using “relativistic wave equations” is not sufficient, as demonstrated in Artru (1984). There are many formally equivalent approaches to bound states. In the following I briefly motivate and define my choice, guided by the properties of atoms and hadrons. Some further comments are given in chapter IX. #### V.1.2 Choice of approach Hamiltonian eigenstates Bound states can be identified in two equivalent but distinct ways: As poles in Green functions or as eigenstates of the Hamiltonian. The former involves propagation in time and space, allowing for explicit Poincaré invariance as in the Dyson-Schwinger framework. The propagation of bound state constituents is complicated by their state-dependent, mutual interactions. A Hamiltonian framework distinguishes time from space. The eigenstate condition involves no propagation in time, and Poincaré invariance emerges dynamically. I shall use the method of Hamiltonian eigenstates, akin to traditional quantum mechanics and NRQED.
Instant time quantization Quantum states are traditionally defined at an instant of time $t$ (IT), but relativistic states are also commonly defined at equal Light-Front (LF) time $t+z$ Burkardt (1996); Brodsky _et al._ (1998). The latter is natural in the description of hard collisions, where a single probe (virtual photon or gluon) interacts with the target at a fixed LF time. LF states are described by boost-invariant wave functions, whereas IT wave functions transform dynamically under boosts. On the other hand, the LF choice of $z$-direction breaks rotational invariance, making angular momentum (other than $J^{z}$) dynamic even in the rest frame. The so-called “zero modes” require special attention in LF quantization Collins (2018); Ji (2020); Mannheim _et al._ (2021). A perturbative approach allows one to study the frame dependence of IT wave functions at each order of the expansion. Rest frame states can be characterized by their angular momentum ${\boldsymbol{J}}^{2}$ and $J^{z}$. Quantization is simpler at equal ordinary time. For these reasons I choose IT quantization. Temporal ($\boldsymbol{A^{0}=0}$) gauge Gauge theories have a local action, but the gauge may be fixed in all of space at an instant of time. The gauge dependent fields $A^{0}$ and ${\boldsymbol{A}}_{L}$ then give rise to an instantaneous potential, such as the Coulomb potential $V(r)=-\alpha/r$. The potential allows one to define an initial bound state without the complications of retardation. In temporal gauge ($A^{0}=0$) the longitudinal electric field ${\boldsymbol{E}}_{L}=-\partial_{t}{\boldsymbol{A}}_{L}$ is given by a constraint for each physical state. This clearly separates the instantaneous from the propagating fields. Fock expansion States are conventionally defined by their expansion in a complete basis of Fock states. In temporal gauge the Fock state constituents are fermions and transversely polarized photons or gluons. The gauge constraint (Gauss’ law) determines the longitudinal electric field within each Fock state. For strong potentials the number of Fock constituents depends on the basis. The Dirac state (53) has an infinite number of constituents (65) in the free state basis due to $Z$-diagrams, Fig. 13(b). In the basis of the $c_{n}$ operators defined by the Bogoliubov transform (IV.3) the same Dirac state has a single constituent $c_{n}^{\dagger}\left|{0}\right\rangle$. I shall define a fermion Fock state as $\psi^{\dagger}(t,{\boldsymbol{x}})\left|{0}\right\rangle$, without specifying the expansion of the field in creation and annihilation operators. Initial state I take the valence Fock state as the initial bound state of the perturbative expansion. For Positronium this means $\left|{e^{-}e^{+}}\right\rangle$ bound by the $-\alpha/r$ potential. Hadron ($\left|{q\bar{q}}\right\rangle,\ \left|{qqq}\right\rangle$) quantum numbers correspond to their valence quarks. Higher order corrections in $\alpha$ will involve Fock states with a correspondingly larger number of constituents, as well as loop corrections to Fock states with fewer constituents. At each order of $\alpha$ the usual cancellation of collinear singularities between states with different numbers of constituents should thus be ensured. ### V.2 Quantization in QED #### V.2.1 Functional integral method Relativistic field theory is commonly defined using functional methods. Green functions are given by a functional integral over the fields weighted by the exponent of the action, $\exp(i\mathcal{S}/\hbar)$.
In QED the photon propagator is thus $\displaystyle D^{\mu\nu}(x_{1},x_{2})$ $\displaystyle=\int\mathcal{D}(A^{\rho})\mathcal{D}(\bar{\psi},\psi)\,e^{i\mathcal{S}_{QED}/\hbar}\,A^{\mu}(x_{1})A^{\nu}(x_{2})$ $\displaystyle\mathcal{S}_{QED}$ $\displaystyle=\int d^{4}x\big{[}-{\textstyle\frac{1}{4}}F_{\mu\nu}F^{\mu\nu}+\bar{\psi}(i\not{\partial}-m-e\not{A})\psi\big{]}$ $\displaystyle F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ (128) A gauge fixing term $\mathcal{S}_{GF}$ must be added to the action $\mathcal{S}_{QED}$ for the integral to be well defined. Explicit Poincaré invariance is maintained with $\displaystyle\mathcal{S}_{GF}=-{\textstyle\frac{1}{2}}\lambda\int d^{4}x\,(\partial_{\mu}A^{\mu})^{2}$ (129) Expanding around free (hence Poincaré covariant) states gives the standard perturbative expansion of Green functions in terms of Feynman diagrams. This approach is well suited for scattering amplitudes. It is less convenient for bound states, for which free states are a poor approximation. As a consequence, the perturbative $S$-matrix (III.2.1) lacks bound state poles at any finite order. The poles can be generated through the divergence of an infinite sum of Feynman diagrams (or through an equivalent integral equation), as discussed in section III.2. However, it seems unlikely that confinement will be recovered in an expansion starting with free quarks and gluons. The covariant gauge fixing (129) introduces a time derivative $\partial_{0}A^{0}$ for the $A^{0}$ field, which is absent from $\mathcal{S}_{QED}$. This makes $A^{0}$ propagate in time like the transverse components ${\boldsymbol{A}}_{T}$, at the price of introducing a time-dependent kernel in the Bethe-Salpeter equation (section III.3). The $\partial_{0}A^{0}$ term is avoided in Coulomb gauge, $\boldsymbol{\nabla}\cdot{\boldsymbol{A}}=0$. The field equation for $A^{0}$ (Gauss’ law) in Coulomb gauge, $\displaystyle\frac{\delta\mathcal{S}_{QED}}{\delta A^{0}(x)}=-\boldsymbol{\nabla}^{2}A^{0}(x)-e\psi^{\dagger}\psi(x)=0$ (130) defines $A^{0}$ non-locally in terms of the electron field operator. For Positronium this gives the Coulomb potential $eA^{0}=-\alpha/r$, which allows an analytic solution of the Schrödinger equation and is the leading order interaction. The evaluation of higher order corrections in Coulomb gauge is rather complicated, especially for QCD. See Feinberg (1978) for a study of non-relativistic quarkonia based on the Bethe-Salpeter equation in Coulomb gauge.
#### V.2.2 Canonical quantization The conjugate fields $\pi_{\alpha}$ of the fields $\varphi_{\alpha}$ in the Lagrangian density $\mathcal{L}(\varphi,\partial\varphi)$ are defined by $\displaystyle\pi_{\alpha}(t,{\boldsymbol{x}})=\frac{\partial\mathcal{L}(\varphi,\partial\varphi)}{\partial[\partial_{0}\varphi_{\alpha}(t,{\boldsymbol{x}})]}$ (131) Equal time (anti)commutation relations are imposed on the (fermion) boson fields, $\displaystyle\left[{\varphi_{\alpha}(t,{\boldsymbol{x}})},{\pi_{\beta}(t,{\boldsymbol{y}})}\right]_{\pm}=i\delta_{\alpha\beta}\delta^{3}({\boldsymbol{x}}-{\boldsymbol{y}})\hskip 56.9055pt\left[{\varphi_{\alpha}(t,{\boldsymbol{x}})},{\varphi_{\beta}(t,{\boldsymbol{y}})}\right]_{\pm}=\left[{\pi_{\alpha}(t,{\boldsymbol{x}})},{\pi_{\beta}(t,{\boldsymbol{y}})}\right]_{\pm}=0$ (132) and the Hamiltonian is given by $\displaystyle H(t)=\int d^{3}{\boldsymbol{x}}\Big{[}\sum_{\alpha}\pi_{\alpha}\partial_{0}\varphi_{\alpha}(t,{\boldsymbol{x}})-\mathcal{L}(\varphi,\partial\varphi)\Big{]}$ (133) In gauge theories the conjugate field of $A^{0}$ vanishes since $\mathcal{L}$ is independent of $\partial_{0}A^{0}$. The covariant gauge fixing term (129) adds $\partial_{0}A^{0}$, giving the conjugate fields $\displaystyle\pi^{0}$ $\displaystyle=-\lambda\,\partial_{\mu}A^{\mu}$ (134) $\displaystyle\pi^{i}$ $\displaystyle=-F^{0i}$ (135) This allows one to define covariant commutation relations for the gauge field, the non-vanishing ones being $\displaystyle\left[{A_{\mu}(t,{\boldsymbol{x}})},{\pi_{\nu}(t,{\boldsymbol{y}})}\right]=i\,g_{\mu\nu}\delta^{3}({\boldsymbol{x}}-{\boldsymbol{y}})$ (136) The unphysical (gauge) degrees of freedom are removed by constraining physical states not to involve photons with time-like or longitudinal polarizations (Gupta-Bleuler method, see Itzykson and Zuber (1980) for details). Canonical quantization can be carried out also in Coulomb gauge, $\boldsymbol{\nabla}\cdot{\boldsymbol{A}}=0$. Due to the lack of a conjugate $A^{0}$ field this requires constraints which modify the commutation relations, see Weinberg (2005) for QED. The generalization to QCD is discussed in Christ and Lee (1980), demonstrating how terms related to Faddeev-Popov ghosts arise. The same study also addresses temporal gauge ($A^{0}=0$), which is an axial gauge without ghosts. Temporal gauge simplifies canonical quantization since the absence of both $A^{0}$ and its conjugate allows standard commutation relations for the spatial gauge field components ${\boldsymbol{A}}$. The gauge condition preserves rotational invariance and, most importantly for the present application, Gauss’ law is implemented as a constraint on physical states which determines ${\boldsymbol{E}}_{L}$, not as an operator relation like (130). The constraint is trivially satisfied for the vacuum (${\boldsymbol{E}}_{L}=0$), whereas in Coulomb gauge $A^{0}\left|{0}\right\rangle$ would have an overlap with $\left|{e^{-}e^{+}}\right\rangle$. I next discuss canonical quantization in temporal gauge for QED, and consider QCD in section V.3. #### V.2.3 Temporal gauge in QED The canonical quantization of QED in temporal gauge ($A^{0}=0$) is described in Willemsen (1978); Bjorken (1979); Christ and Lee (1980); Leibbrandt (1987); Strocchi (2013). The action (V.2.1) determines the electric field $E^{i}=F^{i0}=-\partial_{0}A^{i}$ to be conjugate (131) to $A_{i}\ (i=1,2,3)$, and $i\psi^{\dagger}$ to be conjugate to $\psi$.
This gives the canonical commutation relations without constraints, $\displaystyle\left[{E^{i}(t,{\boldsymbol{x}})},{A^{j}(t,{\boldsymbol{y}})}\right]$ $\displaystyle=i\delta^{ij}\delta({\boldsymbol{x}}-{\boldsymbol{y}})$ $\displaystyle\left\\{{\psi^{\dagger}_{\alpha}(t,{\boldsymbol{x}})},{\psi_{\beta}(t,{\boldsymbol{y}})}\right\\}=\delta_{\alpha\beta}\,\delta({\boldsymbol{x}}-{\boldsymbol{y}})$ (137) All other (anti)commutators vanish. The Hamiltonian in temporal gauge is $\displaystyle\mathcal{H}(t)=\int d{\boldsymbol{x}}\big{[}E^{i}\partial_{0}A_{i}+i\psi^{\dagger}\partial_{0}\psi-\mathcal{L}\big{]}=\int d{\boldsymbol{x}}\big{[}{\textstyle\frac{1}{2}}E^{i}E^{i}+{\textstyle\frac{1}{4}}F^{ij}F^{ij}+\psi^{\dagger}(-i\alpha^{i}\partial_{i}-e\alpha^{i}A^{i}+m\gamma^{0})\psi\big{]}$ (138) Gauss’ operator is defined as usual by the derivative of the action wrt. $A^{0}$, $\displaystyle G(x)\equiv\frac{\delta\mathcal{S}_{QED}}{\delta{A^{0}(x)}}=\partial_{i}E^{i}(x)-e\psi^{\dagger}\psi(x)$ (139) but $G(x)=0$ (Gauss’ law) is not an operator relation, since $A^{0}=0$ is fixed by the gauge condition. The operator relation $\partial_{i}E^{i}(x)=e\psi^{\dagger}\psi(x)$ would not even be compatible with the commutation relations (137). The condition $A^{0}=0$ does not completely fix the gauge, since it allows time independent gauge transformations parametrized by $\Lambda({\boldsymbol{x}})$: ${\boldsymbol{A}}\to{\boldsymbol{A}}+\boldsymbol{\nabla}\Lambda({\boldsymbol{x}})$. Gauss’ operator $G(x)$ turns out to generate such transformations. An infinitesimal, time independent gauge transformation $\delta\Lambda({\boldsymbol{x}})$ is represented by the unitary operator, $\displaystyle U(t)=1+i\int d{\boldsymbol{y}}\,G(t,{\boldsymbol{y}})\delta\Lambda({\boldsymbol{y}})$ (140) Exercise A.8: Show using the commutation relations (137) that $U(t)$ (140) transforms the ${\boldsymbol{A}}(t,{\boldsymbol{x}})$ and $\psi(t,{\boldsymbol{x}})$ fields as required for a time-independent infinitesimal gauge transformation. Constraining the physical states to satisfy $\displaystyle G(x)\left|{phys}\right\rangle=\big{[}\boldsymbol{\nabla}\cdot{\boldsymbol{E}}(x)-e\psi^{\dagger}\psi(x)\big{]}\left|{phys}\right\rangle=0$ (141) ensures that they are invariant under time-independent gauge transformations. A physical state remains physical under time evolution since, as may be verified, $G(x)$ commutes with the Hamiltonian (138), $\displaystyle\left[{G(t,{\boldsymbol{x}})},{\mathcal{H}(t)}\right]=0$ (142) The electric field can be separated into its transverse and longitudinal parts, ${\boldsymbol{E}}={\boldsymbol{E}}_{T}+{\boldsymbol{E}}_{L}$, with $\boldsymbol{\nabla}\cdot{\boldsymbol{E}}_{T}=0$. The Gauss constraint (141) then allows one to solve for ${\boldsymbol{E}}_{L}$, $\displaystyle{\boldsymbol{E}}_{L}(t,{\boldsymbol{x}})\left|{phys}\right\rangle$ $\displaystyle=-\boldsymbol{\nabla}_{x}\int d{\boldsymbol{y}}\,\frac{e}{4\pi|{\boldsymbol{x}}-{\boldsymbol{y}}|}\,\psi^{\dagger}\psi(t,{\boldsymbol{y}})\left|{phys}\right\rangle$ (143) This resembles the instantaneous electric field $-\boldsymbol{\nabla}A^{0}$ in Coulomb gauge, $\boldsymbol{\nabla}\cdot{\boldsymbol{A}}=0$. The difference is that Gauss’ law is an operator equation in Coulomb gauge, whereas here it is a constraint on the physical states. The constraint specifies ${\boldsymbol{E}}_{L}(t,{\boldsymbol{x}})$ for each state $\left|{phys}\right\rangle$ at all positions ${\boldsymbol{x}}$ at a given time $t$.
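The Coulomb kernel in (143) implies that the field energy ${\textstyle\frac{1}{2}}\int d{\boldsymbol{x}}\,{\boldsymbol{E}}_{L}^{2}$ of an $e^{-}e^{+}$ pair contains the interaction $-\alpha/|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|$, as made explicit below. A small numerical sketch (mine; I smear the charges into Gaussians of width $\sigma$ to regulate the self-energies, which is my regularization choice, not the text's) recovers this:

```python
import numpy as np
from scipy.special import erf

# Opposite unit charges smeared into Gaussians of width sigma, separated by d.
# A normalized Gaussian charge has potential phi(r) = erf(r/(sigma*sqrt(2)))/(4 pi r),
# so the e- e+ cross term of (1/2) int E_L^2 is analytic (convolution width sigma*sqrt(2)):
alpha = 1/137.035999
e2 = 4*np.pi*alpha
sigma = 0.05
for d in (0.2, 0.5, 1.0, 2.0):
    E_int = -e2 * erf(d/(2*sigma)) / (4*np.pi*d)
    print(d, E_int, -alpha/d)     # agree once d >> sigma
```

The $d$-independent Gaussian self-energies are dropped here.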
The electric field of the physical vacuum vanishes in temporal gauge, $\displaystyle E_{L}^{i}({\boldsymbol{x}})\left|{0}\right\rangle=-\partial_{i}^{x}\int d{\boldsymbol{y}}\frac{e}{4\pi|{\boldsymbol{x}}-{\boldsymbol{y}}|}\psi^{\dagger}\psi(t,{\boldsymbol{y}})\left|{0}\right\rangle=0$ (144) since the vacuum state has no net charge at any position. The Hamiltonian (138) has an instantaneous part determined by Gauss’ constraint, $\displaystyle\mathcal{H}_{V}^{QED}\left|{phys}\right\rangle\equiv{\textstyle\frac{1}{2}}\int d{\boldsymbol{x}}\,{\boldsymbol{E}}_{L}^{2}({\boldsymbol{x}})\left|{phys}\right\rangle$ $\displaystyle={\textstyle\frac{1}{2}}\int d{\boldsymbol{x}}d{\boldsymbol{y}}d{\boldsymbol{z}}\Big{[}\partial_{i}^{x}\frac{e}{4\pi|{\boldsymbol{x}}-{\boldsymbol{y}}|}\psi^{\dagger}\psi({\boldsymbol{y}})\Big{]}\Big{[}\partial_{i}^{x}\frac{e}{4\pi|{\boldsymbol{x}}-{\boldsymbol{z}}|}\psi^{\dagger}\psi({\boldsymbol{z}})\Big{]}\left|{phys}\right\rangle$ $\displaystyle={\textstyle\frac{1}{2}}\int d{\boldsymbol{x}}d{\boldsymbol{y}}\,\frac{e^{2}}{4\pi|{\boldsymbol{x}}-{\boldsymbol{y}}|}\big{[}\psi^{\dagger}\psi({\boldsymbol{x}})\big{]}\big{[}\psi^{\dagger}\psi({\boldsymbol{y}})\big{]}\left|{phys}\right\rangle$ (145) $\mathcal{H}_{V}\left|{phys}\right\rangle$ contributes a potential which depends only on the instantaneous positions of the electrons and positrons, regardless of their momenta (which may be relativistic). The other terms of $\mathcal{H}$ determine the propagation of the transverse photons and fermions in time, as well as the transitions between them. This method can be applied to atoms in any frame. Given that non-valence Fock states are suppressed by powers of $\alpha$, calculations with a given degree of precision require including only a limited number of terms in the Fock expansion. In section VI I illustrate the method by considering several aspects of Positronium at rest and in motion. In section VII I study the strongly bound states of QED in $D=1+1$ dimensions. ### V.3 Temporal gauge in QCD #### V.3.1 Canonical quantization The canonical quantization of QCD in temporal gauge $A^{0}_{a}=0$ proceeds as in QED Willemsen (1978); Bjorken (1979); Christ and Lee (1980); Leibbrandt (1987); Strocchi (2013). The QCD action is $\displaystyle\mathcal{S}_{QCD}$ $\displaystyle=\int d^{4}x\big{[}-{\textstyle\frac{1}{4}}F_{\mu\nu}^{a}F^{\mu\nu}_{a}+\bar{\psi}(i\not{\partial}-m-g\not{A}_{a}T^{a})\psi\big{]}$ $\displaystyle F_{\mu\nu}^{a}=\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a}-gf_{abc}A_{\mu}^{b}A_{\nu}^{c}$ (146) The electric field $E_{a}^{i}=F_{a}^{i0}=-\partial_{0}A_{a}^{i}$ is conjugate to $A_{i}^{a}=-A_{a}^{i}$, giving the equal-time commutation relations $\displaystyle\left[{E_{a}^{i}(t,{\boldsymbol{x}})},{A_{b}^{j}(t,{\boldsymbol{y}})}\right]$ $\displaystyle=i\delta_{ab}\delta^{ij}\delta({\boldsymbol{x}}-{\boldsymbol{y}})$ $\displaystyle\left\\{{\psi^{A\,{\dagger}}_{\alpha}(t,{\boldsymbol{x}})},{\psi_{\beta}^{B}(t,{\boldsymbol{y}})}\right\\}=\delta^{AB}\delta_{\alpha\beta}\,\delta({\boldsymbol{x}}-{\boldsymbol{y}})$ (147) The $a,b$ ($A,B$) are color indices in the adjoint (fundamental) representation of SU(3).
The Hamiltonian is $\displaystyle\mathcal{H}_{QCD}$ $\displaystyle=\int d{\boldsymbol{x}}\big{[}E_{a}^{i}\partial_{0}A_{i}^{a}+i\psi_{A}^{\dagger}\partial_{0}\psi_{A}-\mathcal{L}_{QCD}\big{]}=\int d{\boldsymbol{x}}\big{[}{\textstyle\frac{1}{2}}E_{a}^{i}E_{a}^{i}+{\textstyle\frac{1}{4}}F_{a}^{ij}F_{a}^{ij}+\psi^{\dagger}(-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}+m\gamma^{0}-g{\boldsymbol{\alpha}}\cdot{\boldsymbol{A}}_{a}T^{a})\psi\big{]}$ (148) where $\displaystyle\int d{\boldsymbol{x}}\,{\textstyle\frac{1}{4}}F_{a}^{ij}F_{a}^{ij}=\int d{\boldsymbol{x}}\big{[}{\textstyle\frac{1}{2}}A_{a}^{i}(-\delta_{ij}\boldsymbol{\nabla}^{2}+\partial_{i}\partial_{j})A_{a}^{j}+gf_{abc}(\partial_{i}A_{a}^{j})A_{b}^{i}A_{c}^{j}+{\textstyle\frac{1}{4}}g^{2}f_{abc}f_{ade}A_{b}^{i}A_{c}^{j}A_{d}^{i}A_{e}^{j}\big{]}$ (149) has both longitudinal and transverse gluon fields. Gauss’ operator $\displaystyle G_{a}(x)\equiv\frac{\delta\mathcal{S}_{QCD}}{\delta{A_{a}^{0}(x)}}=\partial_{i}E_{a}^{i}(x)+gf_{abc}A_{b}^{i}E_{c}^{i}-g\psi^{\dagger}T^{a}\psi(x)$ (150) generates time-independent gauge transformations similarly as in QED (140), which leave the gauge condition $A^{0}_{a}=0$ invariant. The longitudinal electric field ${\boldsymbol{E}}_{L}^{a}$ is fixed by constraining physical states to be invariant under the gauge transformations generated by $G_{a}(x)$, $\displaystyle G_{a}(x)\left|{phys}\right\rangle=0$ (151) This constraint is independent of time since Gauss’ operator commutes with the Hamiltonian, $\left[{G_{a}(t,{\boldsymbol{x}})},{\mathcal{H}(t)}\right]=0$. It constrains the longitudinal electric field for physical states, $\displaystyle\boldsymbol{\nabla}\cdot E_{L}^{a}({\boldsymbol{x}})\left|{phys}\right\rangle=g\big{[}-f_{abc}A_{b}^{i}E_{c}^{i}+\psi^{\dagger}T^{a}\psi({\boldsymbol{x}})\big{]}\left|{phys}\right\rangle$ (152) We may solve for ${\boldsymbol{E}}_{L}^{a}$ analogously as for QED in section V.2.3 (at higher orders in $g$ one needs to take into account the contribution of ${\boldsymbol{E}}_{L}$ on the rhs. of (152); for large gauge fields this leads to the issue of Gribov copies Gribov (1978), but they do not appear in a perturbative expansion), $\displaystyle{\boldsymbol{E}}_{L}^{a}({\boldsymbol{x}})\left|{phys}\right\rangle$ $\displaystyle=-\boldsymbol{\nabla}_{x}\int d{\boldsymbol{y}}\,\frac{g}{4\pi|{\boldsymbol{x}}-{\boldsymbol{y}}|}\,\mathcal{E}_{a}({\boldsymbol{y}})\left|{phys}\right\rangle$ $\displaystyle\mathcal{E}_{a}({\boldsymbol{y}})$ $\displaystyle=-f_{abc}A_{b}^{i}E_{c}^{i}({\boldsymbol{y}})+\psi^{\dagger}T^{a}\psi({\boldsymbol{y}})$ (153) The contribution of the longitudinal electric field to the QCD Hamiltonian (148) is then $\displaystyle\mathcal{H}_{V}^{QCD}\left|{phys}\right\rangle$ $\displaystyle\equiv{\textstyle\frac{1}{2}}\int d{\boldsymbol{x}}\,\big{(}{\boldsymbol{E}}_{L}^{a}\big{)}^{2}\left|{phys}\right\rangle={\textstyle\frac{1}{2}}\int d{\boldsymbol{y}}d{\boldsymbol{z}}\,\frac{{\alpha_{s}}}{|{\boldsymbol{y}}-{\boldsymbol{z}}|}\,\mathcal{E}_{a}({\boldsymbol{y}})\mathcal{E}_{a}({\boldsymbol{z}})\left|{phys}\right\rangle$ (154) #### V.3.2 Specification of temporal gauge in QCD There is a relevant difference between QED and QCD which needs to be considered when determining the longitudinal electric field from the QCD gauge constraint (152).
To illustrate, compare the expectation value of the field in an $e^{-}e^{+}$ Fock component of Positronium and in an analogous color singlet $q\bar{q}$ component of a meson at $t=0$, $\displaystyle\left|{e^{-}e^{+}}\right\rangle$ $\displaystyle\equiv\bar{\psi}_{\alpha}({\boldsymbol{x}}_{1})\psi_{\beta}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (155) $\displaystyle\left|{q\,\bar{q}}\right\rangle$ $\displaystyle\equiv\sum_{A}\bar{\psi}_{\alpha}^{A}({\boldsymbol{x}}_{1})\psi_{\beta}^{A}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (156) The Dirac components $\alpha,\beta$ are irrelevant here and will be suppressed. Repeated color indices are summed. Note that “color singlet” refers to global gauge transformations (the global SU(3) transformations should not be regarded as a subgroup of the local ones, see Ch. 7 of Strocchi (2013)), the local temporal gauge being fixed by (143) and (V.3.1). The expectation values of the QED (143) and QCD (V.3.1) longitudinal electric fields in these states are, using the canonical commutation relations for the fermions and recalling that ${\boldsymbol{E}}_{L}\left|{0}\right\rangle=0$, $\displaystyle\langle{e^{-}e^{+}}|{\boldsymbol{E}}_{L}({\boldsymbol{x}})\left|{e^{-}e^{+}}\right\rangle$ $\displaystyle=-\boldsymbol{\nabla}_{x}\Big{(}\frac{e}{4\pi|{\boldsymbol{x}}-{\boldsymbol{x}}_{1}|}-\frac{e}{4\pi|{\boldsymbol{x}}-{\boldsymbol{x}}_{2}|}\Big{)}\langle{e^{-}e^{+}}|e^{-}e^{+}\rangle$ (157) $\displaystyle\langle{q\,\bar{q}}|{\boldsymbol{E}}_{L}^{a}({\boldsymbol{x}})\left|{q\,\bar{q}}\right\rangle$ $\displaystyle=-\boldsymbol{\nabla}_{x}\Big{(}\frac{g}{4\pi|{\boldsymbol{x}}-{\boldsymbol{x}}_{1}|}-\frac{g}{4\pi|{\boldsymbol{x}}-{\boldsymbol{x}}_{2}|}\Big{)}\langle{q\,\bar{q}}|\bar{\psi}_{A}({\boldsymbol{x}}_{1})T_{AB}^{a}\psi_{B}({\boldsymbol{x}}_{2})\left|{0}\right\rangle\propto\mathrm{Tr}\,T^{a}=0$ (158) In QED the charges of $e^{-}$ and $e^{+}$ give rise to the expected dipole electric field, while in QCD the expectation value of an octet field in a singlet state vanishes everywhere. Comparing similarly the instantaneous potentials $\mathcal{H}_{V}^{QED}$ (V.2.3) and $\mathcal{H}_{V}^{QCD}$ (154) (the singular “self-energy” contributions $\propto 1/0$ are independent of ${\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2}$ and are subtracted), $\displaystyle\langle{e^{-}e^{+}}|\mathcal{H}_{V}^{QED}\left|{e^{-}e^{+}}\right\rangle$ $\displaystyle=-\frac{\alpha}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}\,\langle{e^{-}e^{+}}|e^{-}e^{+}\rangle$ (159) $\displaystyle\langle{q\,\bar{q}}|\mathcal{H}_{V}^{QCD}\left|{q\,\bar{q}}\right\rangle$ $\displaystyle=-\frac{\alpha_{s}}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}\,\langle{q\,\bar{q}}|\bar{\psi}_{A}({\boldsymbol{x}}_{1})T_{AB}^{a}T_{BC}^{a}\psi_{C}({\boldsymbol{x}}_{2})\left|{0}\right\rangle=-C_{F}\frac{\alpha_{s}}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}\,\langle{q\,\bar{q}}|q\,\bar{q}\rangle$ (160) These are the Coulomb potentials of QED and QCD, again as expected. The electron feels only the positron field, and each quark of a given color interacts with its antiquark of opposite color. The sum over the potential energies of all color-anticolor components $A$ in (156) gives the Casimir $C_{F}=(N_{c}^{2}-1)/2N_{c}$ of the fundamental representation. The solution of the QED gauge constraint (141), the longitudinal electric field (143), is determined using the physical boundary condition that the electric field vanishes at spatial infinity.
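Both group-theory statements above are elementary to verify: the octet expectation value (158) vanishes because $\mathrm{Tr}\,T^{a}=0$, and the singlet Coulomb strength in (160) is the Casimir $\sum_{a}T^{a}T^{a}=C_{F}\,\mathbb{1}$ with $C_{F}=4/3$ for $N_{c}=3$. A sketch (mine, with the Gell-Mann matrices written out):

```python
import numpy as np

# Gell-Mann matrices; T^a = lambda^a / 2
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0,1] = l[0][1,0] = 1
l[1][0,1] = -1j; l[1][1,0] = 1j
l[2][0,0] = 1;   l[2][1,1] = -1
l[3][0,2] = l[3][2,0] = 1
l[4][0,2] = -1j; l[4][2,0] = 1j
l[5][1,2] = l[5][2,1] = 1
l[6][1,2] = -1j; l[6][2,1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = l / 2

print(np.round([np.trace(t) for t in T], 12))   # Tr T^a = 0: (158) vanishes
print(np.round(sum(t @ t for t in T), 12))      # sum_a T^a T^a = (4/3) * identity
```

The vanishing of (158) for color singlets is precisely what makes the boundary condition on ${\boldsymbol{E}}_{L}^{a}$ at spatial infinity nontrivial, as discussed next.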
This boundary condition is no longer evident for QCD, since the expectation value (158) of the color electric field in any case vanishes at all ${\boldsymbol{x}}$, due to the sum over quark colors. There seems to be no compelling reason to require that the gauge field of each color-anticolor component $A$ of the state (156) should vanish at spatial infinity. The gauge constraint (152) fully determines ${\boldsymbol{E}}_{L}^{a}$ only given a boundary condition at spatial infinity. ${\boldsymbol{E}}_{L}^{a}$ may be specified by the particular solution (V.3.1) and a homogeneous solution ${\boldsymbol{E}}_{H}^{a}$ which satisfies $\displaystyle\boldsymbol{\nabla}\cdot{\boldsymbol{E}}_{H}^{a}\left|{phys}\right\rangle=0$ (161) There is apparently only one homogeneous solution which is invariant under translations and rotations, $\displaystyle{\boldsymbol{E}}^{a}_{H}({\boldsymbol{x}})\left|{phys}\right\rangle$ $\displaystyle=-\kappa\,\boldsymbol{\nabla}_{x}\int d{\boldsymbol{y}}\,{\boldsymbol{x}}\cdot{\boldsymbol{y}}\,\mathcal{E}_{a}({\boldsymbol{y}})\left|{phys}\right\rangle$ (162) where $\mathcal{E}_{a}({\boldsymbol{y}})$ is defined in (V.3.1) and the normalization $\kappa$ is independent of ${\boldsymbol{x}}$, but may depend on the state $\left|{phys}\right\rangle$. The complete longitudinal electric field is then $\displaystyle{\boldsymbol{E}}_{L}^{a}({\boldsymbol{x}})\left|{phys}\right\rangle$ $\displaystyle=-\boldsymbol{\nabla}_{x}\int d{\boldsymbol{y}}\Big{[}\kappa\,{\boldsymbol{x}}\cdot{\boldsymbol{y}}+\frac{g}{4\pi|{\boldsymbol{x}}-{\boldsymbol{y}}|}\Big{]}\mathcal{E}_{a}({\boldsymbol{y}})\left|{phys}\right\rangle$ (163) and its contribution to the Hamiltonian (148) is $\displaystyle\mathcal{H}_{V}$ $\displaystyle\equiv{\textstyle\frac{1}{2}}\int d{\boldsymbol{x}}\,E_{a,L}^{i}E_{a,L}^{i}={\textstyle\frac{1}{2}}\int d{\boldsymbol{x}}\Big{\\{}\partial_{i}^{x}\int d{\boldsymbol{y}}\Big{[}\kappa\,{\boldsymbol{x}}\cdot{\boldsymbol{y}}+\frac{g}{4\pi|{\boldsymbol{x}}-{\boldsymbol{y}}|}\Big{]}\mathcal{E}_{a}({\boldsymbol{y}})\Big{\\}}\Big{\\{}\partial_{i}^{x}\int d{\boldsymbol{z}}\Big{[}\kappa\,{\boldsymbol{x}}\cdot{\boldsymbol{z}}+\frac{g}{4\pi|{\boldsymbol{x}}-{\boldsymbol{z}}|}\Big{]}\mathcal{E}_{a}({\boldsymbol{z}})\Big{\\}}$ $\displaystyle=\int d{\boldsymbol{y}}d{\boldsymbol{z}}\Big{\\{}\,{\boldsymbol{y}}\cdot{\boldsymbol{z}}\Big{[}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\Big{]}+{\textstyle\frac{1}{2}}\frac{{\alpha_{s}}}{|{\boldsymbol{y}}-{\boldsymbol{z}}|}\Big{\\}}\mathcal{E}_{a}({\boldsymbol{y}})\mathcal{E}_{a}({\boldsymbol{z}})$ (164) where the terms of ${\cal O}\left(g\kappa,g^{2}\right)$ were integrated by parts. The term of ${\cal O}\left(\kappa^{2}\right)$ is due to an ${\boldsymbol{x}}$-independent field energy density. It is $\propto\int d{\boldsymbol{x}}$ but irrelevant provided it is universal, i.e., the same for all Fock components of all bound states. This determines the normalization $\kappa$ in (163) for each state $\left|{phys}\right\rangle$, up to a universal scale $\Lambda$. The scale $\Lambda$ is unrelated to the coupling $g$, so the $g\kappa$ term in (V.3.2) may be viewed as an instantaneous ${\cal O}\left(\alpha_{s}^{0}\right)$ potential. All relevant symmetries, in particular exact Poincaré invariance, must appear at each order of ${\alpha_{s}}$. The boost covariance of Positronia in QED is ensured by a combination of the Coulomb potential and ${\cal O}\left(\alpha\right)$ transverse photon exchange (section VI). 
The boost covariance of QCD bound states must at ${\cal O}\left(\alpha_{s}^{0}\right)$ be achieved by the instantaneous potential alone, akin to QED in $D=1+1$ (section VII). This appears to be satisfied (section VIII.3).

## VI Applications to Positronium atoms

I now illustrate the approach to QED bound states described above with several applications to Positronium atoms. The expansion starts with valence Fock states, here $\left|{e^{-}e^{+}}\right\rangle$, with higher Fock states included perturbatively. The first task is then to define the valence Fock states and determine the constraints on their wave functions imposed by the symmetries of translations and rotations, as well as parity and charge conjugation (section VI.1). I express the valence Fock states using field operators (here the electron field $\psi(t,{\boldsymbol{x}})$), as in the representations (53), (54) of the Dirac bound states. In section VI.2 I determine the wave functions of Para- and Orthopositronium atoms at lowest ${\cal O}\left(\alpha^{2}\right)$ in their binding energy $E_{b}$. The rest frame wave functions then satisfy the Schrödinger equation. For atomic CM momentum ${\boldsymbol{P}}\neq 0$ one needs to include the Fock state with one transverse photon $\left|{e^{-}e^{+}\gamma}\right\rangle$ (VI.3). The hyperfine splitting between Ortho- and Parapositronium is calculated at ${\cal O}\left(\alpha^{4}\right)$ in the rest frame (VI.4), taking into account the transverse photon state $\left|{e^{-}e^{+}\gamma}\right\rangle$ and Orthopositronium annihilation into a virtual photon, $e^{-}e^{+}\to\gamma\to e^{-}e^{+}$. Finally I calculate the electromagnetic form factor of Positronium (VI.5) and deep inelastic scattering on Positronia (VI.6) in a general frame.

### VI.1 The $\left|{e^{-}e^{+}}\right\rangle$ Fock states of Para- and Orthopositronium atoms

#### VI.1.1 Definition of the Fock states

The $\left|{e^{-}e^{+}}\right\rangle$ Fock states of Parapositronium ($J^{PC}=0^{-+}$) and Orthopositronium ($J^{PC}=1^{--}$) atoms, jointly denoted by $\mathcal{B}$, may be expressed in terms of two electron fields, $\displaystyle\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle\equiv\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (165) where $\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}$ is a $4\times 4$ wave function of the atom $\mathcal{B}$ with CM momentum ${\boldsymbol{P}}$. The $\Lambda_{\pm}$ are Dirac projection operators, $\displaystyle{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})$ $\displaystyle\equiv\frac{1}{2E_{1}}\Big{[}E_{1}-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}+\gamma^{0}m\Big{]}=\big{[}{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\big{]}^{2}$ $\displaystyle{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})$ $\displaystyle\equiv\frac{1}{2E_{2}}\Big{[}E_{2}+i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}-\gamma^{0}m\Big{]}=\big{[}{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\big{]}^{2}$ (166) where $E=\sqrt{-\boldsymbol{\nabla}^{2}+m^{2}}$.
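The projector property asserted in (166) is verified directly in momentum space, where the derivatives act on plane waves, $-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}\to\pm{\boldsymbol{\alpha}}\cdot{\boldsymbol{k}}$, and $({\boldsymbol{\alpha}}\cdot{\boldsymbol{k}}+\gamma^{0}m)^{2}={\boldsymbol{k}}^{2}+m^{2}=E_{k}^{2}$ (the sign of ${\boldsymbol{k}}$ is immaterial): $\displaystyle\Lambda_{+}^{2}({\boldsymbol{k}})=\frac{1}{4E_{k}^{2}}\big{[}E_{k}+{\boldsymbol{\alpha}}\cdot{\boldsymbol{k}}+\gamma^{0}m\big{]}^{2}=\frac{1}{4E_{k}^{2}}\big{[}2E_{k}^{2}+2E_{k}({\boldsymbol{\alpha}}\cdot{\boldsymbol{k}}+\gamma^{0}m)\big{]}=\Lambda_{+}({\boldsymbol{k}})$ and similarly for $\Lambda_{-}$.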
The projectors select the $b^{\dagger}$ operator in $\bar{\psi}({\boldsymbol{x}}_{1})$ and $d^{\dagger}$ in $\psi({\boldsymbol{x}}_{2})$, defined as in (51), $\displaystyle\bar{\psi}({\boldsymbol{x}}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}})$ $\displaystyle=\int\frac{d{\boldsymbol{k}}}{(2\pi)^{3}2E_{k}}\sum_{\lambda}\bar{u}({\boldsymbol{k}},\lambda)\,e^{-i{\boldsymbol{k}}\cdot{\boldsymbol{x}}}\,b_{{\boldsymbol{k}},\lambda}^{\dagger}\hskip 56.9055pt{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}})\psi({\boldsymbol{x}})=\int\frac{d{\boldsymbol{k}}}{(2\pi)^{3}2E_{k}}\sum_{\lambda}v({\boldsymbol{k}},\lambda)\,e^{-i{\boldsymbol{k}}\cdot{\boldsymbol{x}}}\,d_{{\boldsymbol{k}},\lambda}^{\dagger}$ (167) Since $b\left|{0}\right\rangle=d\left|{0}\right\rangle=0$ in (165) the projectors actually have no effect. However, in operations on the states they allow one to use the anticommutation relations $\left\\{{\psi^{\dagger}_{\alpha}(t,{\boldsymbol{x}})},{\psi_{\beta}(t,{\boldsymbol{y}})}\right\\}=\delta_{\alpha,\beta}\,\delta^{3}({\boldsymbol{x}}-{\boldsymbol{y}})$, to which the $b$ and $d$ operators contribute. The projectors ensure that the coefficients of $b$ and $d$ in (165) vanish, so that no spurious contributions arise. I assume the normalization $\displaystyle\langle{e^{-}e^{+};\mathcal{B}^{\prime},{\boldsymbol{P}}^{\prime}}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle=2E_{P}(2\pi)^{3}\delta({\boldsymbol{P}}-{\boldsymbol{P}}^{\prime})\delta_{\mathcal{B},\mathcal{B}^{\prime}}\hskip 56.9055ptE_{P}=\sqrt{{\boldsymbol{P}}^{2}+4m^{2}}$ (168) where $E_{P}$ is the energy of the atom at ${\cal O}\left(\alpha^{0}\right)$. The Hamiltonian (138) is symmetric under translations, rotations, parity and charge conjugation. The states may be classified by their transformation under those symmetries, giving constraints on the wave functions $\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}})$.

#### VI.1.2 Translations

Under space translations ${\boldsymbol{x}}\to{\boldsymbol{x}}+{\boldsymbol{\ell}}$ the electron field is transformed by the operator $U(\boldsymbol{\ell})=\exp[-i{\boldsymbol{\ell}}\cdot\boldsymbol{\mathcal{P}}]\hskip 28.45274pt{\rm where}\hskip 28.45274pt\boldsymbol{\mathcal{P}}=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})(-i{\overset{\rightarrow}{\boldsymbol{\nabla}}})\psi({\boldsymbol{x}})$ (169) The momentum operator satisfies $\displaystyle\left[{\boldsymbol{\mathcal{P}}},{\psi({\boldsymbol{x}})}\right]=i{\overset{\rightarrow}{\boldsymbol{\nabla}}}\psi({\boldsymbol{x}})\hskip 56.9055pt\left[{\boldsymbol{\mathcal{P}}},{\bar{\psi}({\boldsymbol{x}})}\right]=\bar{\psi}({\boldsymbol{x}})i{\overset{\leftarrow}{\boldsymbol{\nabla}}}$ (170) With $\boldsymbol{\mathcal{P}}\left|{0}\right\rangle=0$ we have $\boldsymbol{\mathcal{P}}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle={\boldsymbol{P}}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$: commuting $\boldsymbol{\mathcal{P}}$ through the fields using (170) and partially integrating puts $-i(\boldsymbol{\nabla}_{1}+\boldsymbol{\nabla}_{2})$ on the envelope in (165), where the derivatives of $\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ cancel and the phase gives $-i(\boldsymbol{\nabla}_{1}+\boldsymbol{\nabla}_{2})\,e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}={\boldsymbol{P}}\,e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}$.

#### VI.1.3 Rotations

Rotations are generated by the angular momentum operator $\boldsymbol{\mathcal{J}}$, which was already defined in (70) for the Dirac equation.
With ${\boldsymbol{\alpha}}\equiv\gamma^{0}\boldsymbol{\gamma}$, $\displaystyle\boldsymbol{\mathcal{J}}$ $\displaystyle=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\,\boldsymbol{J}\,\psi({\boldsymbol{x}})\hskip 82.51282pt\boldsymbol{J}={\boldsymbol{L}}+{\boldsymbol{S}}={\boldsymbol{x}}\times(-i\boldsymbol{\nabla})+{\textstyle\frac{1}{2}}\gamma_{5}{\boldsymbol{\alpha}}$ (171) Rest frame states are taken to be eigenstates of $\boldsymbol{\mathcal{J}}^{2}$ and $\mathcal{J}^{z}$. For a ${\boldsymbol{P}}=0$ Positronium state expressed as in (165), $\displaystyle\boldsymbol{\mathcal{J}}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\left[{{\boldsymbol{J}}},{\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}\right]{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (172) Exercise A.9: Derive (172). For the state to have total angular momentum $j$ and $j^{z}=\lambda$ in the rest frame, $\displaystyle\boldsymbol{\mathcal{J}}^{2}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle=j(j+1)\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle\hskip 56.9055pt\mathcal{J}^{z}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle=\lambda\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle$ (173) the wave function should satisfy $\displaystyle\left[{J^{i}},{\left[{J^{i}},{\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})}\right]}\right]=j(j+1)\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})\hskip 56.9055pt\left[{J^{z}},{\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})}\right]=\lambda\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})$ (174) #### VI.1.4 Parity $\eta_{P}$ The parity operator $\mathbb{P}$ transforms the electron field as $\mathbb{P}\psi(t,{\boldsymbol{x}})\mathbb{P}^{\dagger}=\gamma^{0}\psi(t,-{\boldsymbol{x}})\hskip 56.9055pt\mathbb{P}\bar{\psi}(t,{\boldsymbol{x}})\mathbb{P}^{\dagger}=\bar{\psi}(t,-{\boldsymbol{x}})\gamma^{0}$ (175) Changing the integration variables ${\boldsymbol{x}}_{1,2}\to-{\boldsymbol{x}}_{1,2}$ in (165) and noting that $\gamma^{0}\Lambda_{\pm}({\boldsymbol{x}})=\Lambda_{\pm}(-{\boldsymbol{x}})\gamma^{0}$, $\mathbb{P}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\gamma^{0}e^{-i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}(-{\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})\gamma^{0}{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})=\eta_{P}\left|{e^{-}e^{+};\mathcal{B},-{\boldsymbol{P}}}\right\rangle$ (176) if the wave function satisfies $\gamma^{0}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}(-{\boldsymbol{x}})\gamma^{0}=\eta_{P}\Phi_{\mathcal{B}}^{(-{\boldsymbol{P}})}({\boldsymbol{x}})\hskip 56.9055pt(\eta_{P}=\pm 1)$ (177) Note that parity reverses the CM momentum ${\boldsymbol{P}}$ of the state. #### VI.1.5 Charge conjugation $\eta_{C}$ The charge conjugation operator $\mathbb{C}$ transforms particles into antiparticles. 
$\mathbb{C}b({\boldsymbol{p}},\lambda)\mathbb{C}^{\dagger}=d({\boldsymbol{p}},\lambda)\hskip 56.9055pt\mathbb{C}d({\boldsymbol{p}},\lambda)\mathbb{C}^{\dagger}=b({\boldsymbol{p}},\lambda)$ (178) In the Dirac representation of the $\gamma$ matrices (here $T$ indicates transpose and ${\alpha_{2}}\equiv\gamma^{0}\gamma^{2}$) $\mathbb{C}\psi(t,{\boldsymbol{x}})\mathbb{C}^{\dagger}=-i\gamma^{2}\psi^{*}(t,{\boldsymbol{x}})=i{\alpha_{2}}\bar{\psi}^{T}(t,{\boldsymbol{x}})\hskip 56.9055pt\mathbb{C}\bar{\psi}(t,{\boldsymbol{x}})\mathbb{C}^{\dagger}=i\psi^{T}(t,{\boldsymbol{x}}){\alpha_{2}}$ (179) This implies $v({\boldsymbol{k}},\lambda)=-i\gamma^{2}u^{*}({\boldsymbol{k}},\lambda)$ and thus $\bar{\chi}_{\lambda}=i\sigma_{2}\chi_{\lambda}$ in (21). For a Positronium state to be an eigenstate of charge conjugation, $\mathbb{C}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle=\eta_{C}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$ (180) its wave function should satisfy ${\alpha_{2}}\big{[}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}(-{\boldsymbol{x}})\big{]}^{T}{\alpha_{2}}=\eta_{C}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}})\hskip 56.9055pt(\eta_{C}=\pm 1)$ (181) Exercise A.10: Derive (181).

#### VI.1.6 Wave functions of Para- and Orthopositronium

Non-relativistic Para- and Orthopositronium have zero orbital angular momentum in the rest frame, $\big{[}{{\boldsymbol{L}}},{\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})}\big{]}=0$. Hence their wave functions have no angular dependence. The radial dependence factorizes from the Dirac structure since the spin, parity and charge conjugation constraints are independent of the radial coordinate $r=|{\boldsymbol{x}}|$, $\displaystyle\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})=N_{\mathcal{B}}^{(0)}\,\Gamma_{\mathcal{B}}\,F^{(0)}(r)$ (182) where $\Gamma_{\mathcal{B}}$ is an ${\boldsymbol{x}}$-independent $4\times 4$ Dirac matrix. Para- and Orthopositronia have the same radial function $F(r)$ and the same binding energy $E_{b}$ at ${\cal O}\left(\alpha^{2}\right)$. The energy degeneracy holds for all ${\boldsymbol{P}}$, indicating that the factorization (182) holds in any frame, $\displaystyle\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}})=N_{\mathcal{B}}^{(P)}\,\Gamma_{\mathcal{B}}\,F^{({\boldsymbol{P}})}({\boldsymbol{x}})\hskip 56.9055pt\int d{\boldsymbol{x}}\,|F^{({\boldsymbol{P}})}({\boldsymbol{x}})|^{2}=E_{P}$ (183) I verify this in section VI.3. The boosted radial function $F^{({\boldsymbol{P}})}({\boldsymbol{x}})$ is angular dependent for ${\boldsymbol{P}}\neq 0$ due to Lorentz contraction in the ${\boldsymbol{P}}$-direction. With its normalization fixed as in (183) the constants $N_{\mathcal{B}}^{(P)}$ are determined by the normalization (168) of the state. In the following I take ${\boldsymbol{P}}=(0,0,P)$ in the $z$-direction, and consider Orthopositronium with $j^{z}=\lambda$. The following Dirac structures $\Gamma_{\mathcal{B}}$ in (182) give the correct $J^{PC}$ quantum numbers of the Positronia: $\displaystyle\textbf{Parapositronium:}\ J^{PC}=0^{-+}\hskip 28.45274pt\Gamma_{Para}=\gamma_{5}$ (184)
* • Spin: $\left[{{\boldsymbol{S}}},{\gamma_{5}}\right]=\left[{{\textstyle\frac{1}{2}}\gamma_{5}{\boldsymbol{\alpha}}},{\gamma_{5}}\right]=0$, hence $j=s=0$.
* • Parity: $\gamma^{0}\gamma_{5}\gamma^{0}=-\gamma_{5}$, hence $\eta_{P}=-1$.
* • Charge conjugation: ${\alpha_{2}}\gamma_{5}^{T}{\alpha_{2}}=\gamma_{5}$, hence $\eta_{C}=+1$.
$\displaystyle\textbf{Orthopositronium:}\ J^{PC}=1^{--}\hskip 28.45274pt\Gamma_{Ortho}^{\lambda}={\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}\hskip 28.45274pt{\boldsymbol{e}}_{\pm 1}=-\frac{1}{\sqrt{2}}(\pm 1,i,0)\hskip 28.45274pt{\boldsymbol{e}}_{0}=(0,0,1)$ (185) * • Spin: $\left[{S^{z}},{{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}}\right]=\lambda\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}$, hence $j^{z}=\lambda$, and $\sum_{i}\left[{S^{i}},{\left[{S^{i}},{{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}}\right]}\right]=2\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}$, hence $j=1$. * • Parity: $\gamma^{0}\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}\,\gamma^{0}=-{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}$, hence $\eta_{P}=-1$. * • Charge conjugation: ${\alpha_{2}}\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}^{T}\,{\alpha_{2}}=-{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}$, hence $\eta_{C}=-1$. The constants $N_{\mathcal{B}}^{(P)}$ of (183) are then determined by the state normalization (168) to be, at ${\cal O}\left(\alpha^{0}\right)$, $\displaystyle N_{Para}^{({\boldsymbol{P}})}=N_{Ortho}^{({\boldsymbol{P}})}(\lambda=0)=\frac{E_{P}}{2m}\hskip 56.9055ptN_{Ortho}^{({\boldsymbol{P}})}(\lambda=\pm 1)=1$ (186) Exercise A.11: Verify (186). ### VI.2 The Schrödinger equation for Positronium at ${\boldsymbol{P}}=0$ The Schrödinger equation for the rest frame wave function follows from the condition that the (Para- or Ortho-) Positronium state (165) be an eigenstate of the Hamiltonian (138), $\displaystyle\mathcal{H}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle=\big{[}\mathcal{H}_{0}(f)+\mathcal{H}_{V}\big{]}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle=(2m+E_{b})\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle$ (187) Transverse photons do not contribute to $E_{b}$ at ${\cal O}\left(\alpha^{2}\right)$. Their coupling to electrons is proportional to the 3-momentum of the electron, which in the rest frame is of ${\cal O}\left(\alpha m\right)$. 
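Before deriving the Schrödinger equation, the $\Gamma_{\mathcal{B}}$ spin assignments listed above, as well as the product identity $\alpha_{i}\alpha_{j}=\delta_{ij}+i\varepsilon_{ijk}\alpha_{k}\gamma_{5}$ used for the hfs in section VI.4, can be checked numerically. A minimal sketch (Dirac representation assumed; the commutators are representation independent):

```python
import numpy as np

# Dirac representation of the gamma matrices
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sig]   # alpha^i = gamma^0 gamma^i
g5 = np.block([[Z2, I2], [I2, Z2]])
S = [0.5 * g5 @ a for a in alpha]                     # spin S = (1/2) gamma_5 alpha, eq. (171)

comm = lambda a, b: a @ b - b @ a

# Parapositronium, Gamma = gamma_5: [S^i, gamma_5] = 0, hence j = s = 0
print(all(np.allclose(comm(Si, g5), 0) for Si in S))  # True

# Orthopositronium, Gamma = e_0 . alpha = alpha^z: sum_i [S^i,[S^i,alpha^z]] = 2 alpha^z, hence j = 1
az = alpha[2]
print(np.allclose(sum(comm(Si, comm(Si, az)) for Si in S), 2 * az))  # True

# alpha^i alpha^j = delta_ij + i eps_ijk alpha^k gamma_5, used in (222)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
ok = all(np.allclose(alpha[i] @ alpha[j],
                     (i == j) * np.eye(4) + 1j * sum(eps[i, j, k] * alpha[k] @ g5 for k in range(3)))
         for i in range(3) for j in range(3))
print(ok)                                             # True
```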
The free fermion Hamiltonian acting on the electron fields gives (note that $\left\\{{\psi},{\bar{\psi}}\right\\}$ leaves a $\gamma^{0}$, and ${\boldsymbol{\alpha}}\gamma^{0}=-\gamma^{0}{\boldsymbol{\alpha}}$) $\displaystyle\big{[}{\mathcal{H}_{0}(f)},{\bar{\psi}({\boldsymbol{x}}_{1})}\big{]}$ $\displaystyle=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})(-i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})\big{\\{}{\psi({\boldsymbol{x}})},{\bar{\psi}({\boldsymbol{x}}_{1})}\big{\\}}=\bar{\psi}({\boldsymbol{x}}_{1})(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}+m\gamma^{0})$ $\displaystyle\big{[}{\mathcal{H}_{0}(f)},{\psi({\boldsymbol{x}}_{2})}\big{]}$ $\displaystyle=-(-i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}+m\gamma^{0})\psi({\boldsymbol{x}}_{2})$ (188) Together with the projection operators $\Lambda_{\pm}$ in $\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle$ these give energies $E=\sqrt{-\boldsymbol{\nabla}^{2}+m^{2}}$, $\displaystyle(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}+m\gamma^{0})\frac{1}{2E_{1}}(E_{1}-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}+m\gamma^{0})$ $\displaystyle=\frac{1}{2E_{1}}\big{[}E_{1}(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}+m\gamma^{0})-{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}^{2}+m^{2}\big{]}=\Lambda_{+}(i{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1})E_{1}$ $\displaystyle-\frac{1}{2E_{2}}(E_{2}+i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}-m\gamma^{0})(-i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}+m\gamma^{0})$ $\displaystyle=\frac{1}{2E_{2}}\big{[}E_{2}(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}-m\gamma^{0})-{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}^{2}+m^{2}\big{]}=E_{2}\Lambda_{-}(i{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2})$ (189) Through a partial integration the $\boldsymbol{\nabla}^{2}$ in the energies acts on the wave function. At ${\cal O}\left(\alpha^{2}\right)$ we have $E\simeq m-\boldsymbol{\nabla}^{2}/2m$, giving the kinetic contribution of the Schrödinger equation with reduced mass $m/2$, $\displaystyle\Big{(}2m-\frac{\boldsymbol{\nabla}_{1}^{2}}{m}\Big{)}\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ (190) The potential energy arises from the instantaneous part (V.2.3) of the Hamiltonian, $\displaystyle\mathcal{H}_{V}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle$ $\displaystyle=\frac{1}{2}\int d{\boldsymbol{x}}d{\boldsymbol{y}}\,\frac{e^{2}}{4\pi|{\boldsymbol{x}}-{\boldsymbol{y}}|}\big{[}\psi^{\dagger}\psi({\boldsymbol{x}})\big{]}\big{[}\psi^{\dagger}\psi({\boldsymbol{y}})\big{]}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle$ (191) Because Gauss’ law in temporal gauge is imposed as a constraint on the physical states we have $\psi^{\dagger}\psi\left|{0}\right\rangle=0$ as in (144). 
The effect of $\mathcal{H}_{V}$ is then to multiply the wave function by the Coulomb potential, $\displaystyle-\frac{\alpha}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}\,\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ (192) Combining (190) and (192) the eigenstate condition (187) implies the Schrödinger equation for the wave function, $\displaystyle\Big{(}2m-\frac{\boldsymbol{\nabla}^{2}}{m}-\frac{\alpha}{|{\boldsymbol{x}}|}\Big{)}\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})=(2m+E_{b})\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})$ (193) Since the Schrödinger equation is a Dirac scalar it gives the usual $\ell=0$ radial equation for $F^{(0)}(r)$ in (182), $\displaystyle\frac{1}{m}\Big{[}{F^{(0)}}^{\prime\prime}(r)+\frac{2}{r}F^{(0)^{\prime}}(r)\Big{]}+\Big{[}\frac{\alpha}{r}+E_{b}\Big{]}F^{(0)}(r)=0\ \ \ \ \ {\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\Longrightarrow}\ \ \ \ \ F^{(0)}(r)=\frac{\alpha^{3/2}\,m^{2}}{\sqrt{4\pi}}\exp(-\alpha mr/2)\hskip 28.45274ptE_{b}=-{\textstyle\frac{1}{4}}m\alpha^{2}$ (194) where the normalization of $F^{(0)}(r)$ is determined by (183) ($E_{P=0}=2m$). The solution may be verified by direct substitution: for $F^{(0)}\propto e^{-\alpha mr/2}$ one has ${F^{(0)}}^{\prime\prime}+\frac{2}{r}F^{(0)^{\prime}}=\big{[}{\textstyle\frac{1}{4}}\alpha^{2}m^{2}-\frac{\alpha m}{r}\big{]}F^{(0)}$, so the $1/r$ terms cancel in (194) and the constant terms fix $E_{b}=-{\textstyle\frac{1}{4}}m\alpha^{2}$.

### VI.3 Positronium with momentum ${\boldsymbol{P}}$

Fock states quantized at equal time transform non-covariantly under boosts since the definition of time is frame dependent. The Poincaré invariance of the QED action nevertheless guarantees that observables will be Lorentz covariant. In this section I demonstrate this for the binding energy of Positronia at ${\cal O}\left(\alpha^{2}\right)$. Lorentz covariance determines the momentum dependence of the binding, $\displaystyle\Delta E({\boldsymbol{P}})\equiv\sqrt{{\boldsymbol{P}}^{2}+(2m+E_{b})^{2}}-\sqrt{{\boldsymbol{P}}^{2}+4m^{2}}=\frac{2mE_{b}}{E_{P}}+{\cal O}\left(\alpha^{4}\right)\hskip 56.9055pt\Delta E({\boldsymbol{P}}=0)=E_{b}$ (195) where $E_{P}=\sqrt{{\boldsymbol{P}}^{2}+4m^{2}}$. In section VI.4 I evaluate the hyperfine splitting between Ortho- and Parapositronia at ${\cal O}\left(\alpha^{4}\right)$ for ${\boldsymbol{P}}=0$. In sections VI.5 and VI.6 I consider the covariance of form factors and deep inelastic scattering. The importance of properly taking into account the momentum dependence of bound state wave functions was emphasized in Brodsky and Primack (1969). The frame dependence of atomic wave functions is of general interest, since it shows how the classical concept of Lorentz contraction is realized for quantum bound states. Surprisingly, there appears to be only one study Järvinen (2005) of atoms with general CM momenta ${\boldsymbol{P}}$, even at leading order. The following analysis is equivalent to that one, but is formulated in terms of Fock states in temporal gauge. In atoms with CM momentum ${\boldsymbol{P}}$ transverse photons contribute at leading order to the binding energy $E_{b}$, since they couple at ${\cal O}\left(\alpha^{0}\right)$ to electrons whose momenta $\propto{\boldsymbol{P}}$. I consider first the kinetic and potential energies of the $\left|{e^{-}e^{+}}\right\rangle$ Fock state (165), and then determine the wave function of the $\left|{e^{-}e^{+}\gamma}\right\rangle$ state in Positronium. Only terms which contribute to $E_{b}$ at ${\cal O}\left(\alpha^{2}\right)$ are retained.

#### VI.3.1 Kinetic and potential energy

The above relations (VI.2) and (VI.2) are valid for all ${\boldsymbol{P}}$.
After a partial integration the derivatives in $E_{1}$ and $E_{2}$ operate in (165) on $\exp\big{[}i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2\big{]}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$. Let ${\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}={\textstyle\frac{1}{2}}({\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}+{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2})+{\textstyle\frac{1}{2}}({\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}-{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2})$, and similarly for ${\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2}$. Then the first term gives $i{\boldsymbol{P}}/2$ while the second, denoted $\boldsymbol{\nabla}\equiv{\textstyle\frac{1}{2}}({\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}-{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2})$, gives ${\cal O}\left(\alpha\right)$ derivatives of $\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}})$, $\displaystyle E_{1}=\sqrt{({\textstyle\frac{1}{2}}{\boldsymbol{P}}-i\boldsymbol{\nabla})^{2}+m^{2}}\hskip 56.9055ptE_{2}=\sqrt{({\textstyle\frac{1}{2}}{\boldsymbol{P}}+i\boldsymbol{\nabla})^{2}+m^{2}}$ (196) At ${\cal O}\left(\alpha^{2}\right)$ we need to keep two powers of $\boldsymbol{\nabla}$. Using $\sqrt{1+x}=1+{\textstyle\frac{1}{2}}x-\frac{1}{8}x^{2}+{\cal O}\left(x^{3}\right)$ and denoting $\displaystyle E_{P}\equiv\sqrt{{\boldsymbol{P}}^{2}+4m^{2}}\hskip 56.9055pt\gamma\equiv\frac{E_{P}}{2m}\hskip 56.9055pt\beta\equiv\frac{P}{E_{P}}$ (197) we get $\displaystyle\mathcal{H}_{0}(f)\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle:\hskip 28.45274ptE_{1}+E_{2}\simeq E_{P}-\frac{2}{E_{P}}\Big{(}\boldsymbol{\nabla}_{\perp}^{2}+\frac{1}{\gamma^{2}}\boldsymbol{\nabla}_{\|}^{2}\Big{)}=E_{P}-\frac{1}{m\gamma}\Big{(}\boldsymbol{\nabla}_{\perp}^{2}+\frac{1}{\gamma^{2}}\boldsymbol{\nabla}_{\|}^{2}\Big{)}$ (198) where $\perp$ and $\|$ refer to the ${\boldsymbol{P}}$-direction, here taken to be along the $z$-axis. Equivalently, in momentum space $\sqrt{({\textstyle\frac{1}{2}}{\boldsymbol{P}}+{\boldsymbol{q}})^{2}+m^{2}}+\sqrt{({\textstyle\frac{1}{2}}{\boldsymbol{P}}-{\boldsymbol{q}})^{2}+m^{2}}=E_{P}+\frac{2}{E_{P}}\big{(}{\boldsymbol{q}}_{\perp}^{2}+q_{\|}^{2}/\gamma^{2}\big{)}+{\cal O}\left(q^{4}\right)$ with ${\boldsymbol{q}}\to-i\boldsymbol{\nabla}$; at ${\boldsymbol{P}}=0$, where $\gamma=1$, this reduces to the kinetic term of (190). The potential energy depends only on the instantaneous positions of the fermions and thus gives the same result as for ${\boldsymbol{P}}=0$ in (192): the wave function is multiplied by $\displaystyle\mathcal{H}_{V}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle:\hskip 42.67912pt-\frac{\alpha}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}=-e^{2}\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}}\,\frac{1}{{\boldsymbol{q}}^{2}}e^{-i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}$ (199) This already signals the importance of transverse photon exchange, since the potential energy should be commensurate with $\Delta E({\boldsymbol{P}})$ in (195), which is ${\boldsymbol{P}}$-dependent.

#### VI.3.2 The transverse photon Fock state

For ${\boldsymbol{P}}\neq 0$ the transverse photon vertices $\propto{\boldsymbol{P}}$ are of ${\cal O}\left(\alpha^{0}\right)$, so transverse and Coulomb photon exchanges contribute at the same order in $\alpha$.
The transverse photon and its conjugate electric field in temporal gauge $(A^{0}=0)$ may at $t=0$ be expanded in photon creation and annihilation operators $a^{\dagger}$ and $a$, with polarization 3-vectors ${\boldsymbol{\varepsilon}}_{s}$, $s=\pm 1$, $\displaystyle{\boldsymbol{A}}_{T}({\boldsymbol{x}})=\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s=\pm 1}\Big{[}{\boldsymbol{\varepsilon}}_{s}({\boldsymbol{q}})\,e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}a({\boldsymbol{q}},s)+{\boldsymbol{\varepsilon}}_{s}^{*}({\boldsymbol{q}})\,e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}a^{\dagger}({\boldsymbol{q}},s)\Big{]}$ $\displaystyle{\boldsymbol{E}}_{T}({\boldsymbol{x}})=i\int\frac{d{\boldsymbol{q}}}{2(2\pi)^{3}}\sum_{s=\pm 1}\Big{[}{\boldsymbol{\varepsilon}}_{s}({\boldsymbol{q}})\,e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}a({\boldsymbol{q}},s)-{\boldsymbol{\varepsilon}}_{s}^{*}({\boldsymbol{q}})\,e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}a^{\dagger}({\boldsymbol{q}},s)\Big{]}$ (200) $\displaystyle\left[{a({\boldsymbol{q}},s)},{a^{\dagger}({\boldsymbol{q}}^{\prime},s^{\prime})}\right]=(2\pi)^{3}2|{\boldsymbol{q}}|\delta({\boldsymbol{q}}-{\boldsymbol{q}}^{\prime})\delta_{s,s^{\prime}}$ $\displaystyle{\boldsymbol{q}}\cdot{\boldsymbol{\varepsilon}}_{s}({\boldsymbol{q}})=0\hskip 28.45274pt{\boldsymbol{\varepsilon}}_{s}^{*}({\boldsymbol{q}})\cdot{\boldsymbol{\varepsilon}}_{s^{\prime}}({\boldsymbol{q}})=\delta_{s,s^{\prime}}\hskip 28.45274pt\sum_{s=\pm 1}\varepsilon_{s}^{i}({\boldsymbol{q}}){\varepsilon_{s}^{j}}^{*}({\boldsymbol{q}})=\delta^{ij}-\frac{q^{i}q^{j}}{{\boldsymbol{q}}^{2}}$ The interaction between the electron and the transverse photon fields in the Hamiltonian (138) is given by $\displaystyle\mathcal{H}_{int}({\boldsymbol{A}}_{T})=-e\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{A}}_{T}({\boldsymbol{x}})\psi({\boldsymbol{x}})$ (201) This creates $\left|{e^{-}e^{+}\gamma}\right\rangle$ states with a photon of momentum ${\boldsymbol{q}}$ and polarization $s$. At leading order, $\displaystyle\mathcal{H}_{int}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle=e\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})$ $\displaystyle\times\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s}\frac{1}{E_{P}}{\boldsymbol{P}}\cdot\varepsilon_{s}^{*}({\boldsymbol{q}})\,a^{\dagger}({\boldsymbol{q}},s)\big{(}-e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{1}}+e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{2}}\big{)}\left|{0}\right\rangle\equiv\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s}\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ (202) Exercise A.12: Derive the expression for $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ in (VI.3.2). 
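An identity that is used in (205) below, and again for the hyperfine splitting in section VI.4, follows from the polarization sum in (200): for any fixed vector ${\boldsymbol{P}}$, $\displaystyle\sum_{s=\pm 1}\big{[}{\boldsymbol{P}}\cdot{\boldsymbol{\varepsilon}}_{s}^{*}({\boldsymbol{q}})\big{]}\big{[}{\boldsymbol{P}}\cdot{\boldsymbol{\varepsilon}}_{s}({\boldsymbol{q}})\big{]}=P^{i}P^{j}\Big{(}\delta^{ij}-\frac{q^{i}q^{j}}{{\boldsymbol{q}}^{2}}\Big{)}={\boldsymbol{P}}^{2}-\frac{({\boldsymbol{P}}\cdot{\boldsymbol{q}})^{2}}{{\boldsymbol{q}}^{2}}={\boldsymbol{P}}^{2}\,\frac{{\boldsymbol{q}}_{\perp}^{2}}{{\boldsymbol{q}}^{2}}$ where ${\boldsymbol{q}}_{\perp}$ is the component of ${\boldsymbol{q}}$ orthogonal to ${\boldsymbol{P}}$. Divided by $E_{P}^{2}$ this gives the factor $\beta^{2}{\boldsymbol{q}}_{\perp}^{2}/{\boldsymbol{q}}^{2}$ encountered below.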
Operating with $\mathcal{H}_{int}$ a second time and retaining only the terms where the transverse photon is absorbed, giving an $\left|{e^{-}e^{+}}\right\rangle$ Fock state, $\displaystyle\mathcal{H}_{int}^{2}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$ $\displaystyle=e^{2}\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})$ $\displaystyle\times\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s}\frac{1}{E_{P}^{2}}\big{[}{\boldsymbol{P}}\cdot\varepsilon_{s}^{*}({\boldsymbol{q}})\big{]}\big{[}{\boldsymbol{P}}\cdot\varepsilon_{s}({\boldsymbol{q}})\big{]}\big{|}-e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{1}}+e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{2}}\big{|}^{2}\left|{0}\right\rangle$ (203) $\displaystyle\mbox{where}\ \ \ \big{|}-e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{1}}+e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{2}}\big{|}^{2}=2-e^{-i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}-e^{i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}$ (204) The ${\boldsymbol{x}}_{1,2}$-independent term in (204) corresponds to the absorption of the photon on the same fermion from which it was emitted. This loop contribution gives a multiplicative renormalization of the state and does not contribute to the eigenstate condition at lowest order. Neglecting this term and summing over the photon polarization $s$ using (VI.3.2) we have $\displaystyle\mathcal{H}_{int}^{2}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$ $\displaystyle=-e^{2}\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})$ $\displaystyle\times\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\,\beta^{2}\,\frac{{\boldsymbol{q}}_{\perp}^{2}}{{\boldsymbol{q}}^{2}}\,\big{[}e^{-i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}+e^{i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}\big{]}\left|{0}\right\rangle$ (205) where $\beta=|{\boldsymbol{P}}|/E_{P}$ and ${\boldsymbol{q}}_{\perp}$ is the component of ${\boldsymbol{q}}$ orthogonal to ${\boldsymbol{P}}$ (i.e., to the $z$-axis). #### VI.3.3 The bound state condition The Positronium state, including the $\left|{e^{-}e^{+}\gamma}\right\rangle$ Fock component, can be expressed as the superposition $\displaystyle\left|{\mathcal{B},{\boldsymbol{P}}}\right\rangle=\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle+\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s}C_{\gamma}({\boldsymbol{q}})\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ (206) where $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ is defined in (VI.3.2) and I anticipated that its relative weight $C_{\gamma}({\boldsymbol{q}})$ is independent of $s$. 
$C_{\gamma}({\boldsymbol{q}})$ should be determined so that $\left|{\mathcal{B},{\boldsymbol{P}}}\right\rangle$ is an eigenstate of $\mathcal{H}$, $\displaystyle\mathcal{H}\left|{\mathcal{B},{\boldsymbol{P}}}\right\rangle$ $\displaystyle=(\mathcal{H}_{0}(f)+\mathcal{H}_{V})\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle+\mathcal{H}_{int}\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s}C_{\gamma}({\boldsymbol{q}})\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ $\displaystyle+\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s}\Big{[}1+\big{(}\mathcal{H}_{0}(f)+\mathcal{H}_{0}(A)\big{)}C_{\gamma}({\boldsymbol{q}})\Big{]}\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle=E_{\mathcal{B}}^{({\boldsymbol{P}})}\left|{\mathcal{B},{\boldsymbol{P}}}\right\rangle$ (207) $\mathcal{H}_{0}(f)+\mathcal{H}_{V}$ modify $\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}$ by the factors in (198) and (199). The $\mathcal{H}_{int}$ term is given by (VI.3.2), adding the factor $C_{\gamma}({\boldsymbol{q}})$ to the integrand of the ${\boldsymbol{q}}$-integration. The first contribution to $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ follows from (VI.3.2). The action of $\mathcal{H}_{0}(f)$ is similar to that on $\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$ in (198), but now the additional ${\boldsymbol{x}}_{1,2}$ dependence in $\exp(-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{1,2})$ leaves an ${\cal O}\left(\alpha\right)$ contribution. Since $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ is already suppressed by a factor $e$ we may here neglect the ${\cal O}\left(\alpha^{2}\right)$ terms, including its potential energy ($\mathcal{H}_{V}$). Recalling that $E_{i}=\sqrt{-\boldsymbol{\nabla}_{i}^{2}+m^{2}}$ and separating the ${\cal O}\left(\alpha^{0}\right)$ contribution through, $\displaystyle(E_{1}+E_{2})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}$ $\displaystyle\simeq e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Big{(}\sqrt{{\textstyle\frac{1}{4}}E_{P}^{2}-i{\boldsymbol{P}}\cdot\boldsymbol{\nabla}_{1}}+\sqrt{{\textstyle\frac{1}{4}}E_{P}^{2}-i{\boldsymbol{P}}\cdot\boldsymbol{\nabla}_{2}}\,\Big{)}$ $\displaystyle\simeq e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Big{[}E_{P}-\frac{i}{E_{P}}\,{\boldsymbol{P}}\cdot(\boldsymbol{\nabla}_{1}+\boldsymbol{\nabla}_{2})\Big{]}$ (208) gives the sum of the fermion kinetic energies as $\displaystyle\mathcal{H}_{0}(f)\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle:\hskip 28.45274ptE_{1}+E_{2}=E_{P}-\beta q_{\|}+{\cal O}\left(\alpha^{2}\right)$ (209) where $\beta=P/E_{P}$ and ${\boldsymbol{P}}\cdot{\boldsymbol{q}}=Pq_{\|}$.
The kinetic energy of the photon follows from $\mathcal{H}_{0}{(A)}=\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}\,2|{\boldsymbol{q}}|}|{\boldsymbol{q}}|\sum_{s=\pm 1}a^{\dagger}({\boldsymbol{q}},s)a({\boldsymbol{q}},s)$, $\displaystyle\mathcal{H}_{0}(A)\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle:\hskip 28.45274ptE_{\gamma}=|{\boldsymbol{q}}|$ (210) Comparing the coefficients of $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ in the eigenstate condition (VI.3.3) gives, since $E_{\mathcal{B}}^{({\boldsymbol{P}})}=E_{P}+{\cal O}\left(\alpha^{2}\right)$, $\displaystyle 1+C_{\gamma}({\boldsymbol{q}})(E_{P}+|{\boldsymbol{q}}|-\beta q_{\|})=C_{\gamma}({\boldsymbol{q}})E_{P}+{\cal O}\left(\alpha^{2}\right)\hskip 28.45274pt{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\Longrightarrow}\hskip 28.45274ptC_{\gamma}({\boldsymbol{q}})=-\frac{1}{|{\boldsymbol{q}}|-\beta q_{\|}}=-\frac{|{\boldsymbol{q}}|+\beta q_{\|}}{{\boldsymbol{q}}_{\perp}^{2}+q_{\|}^{2}/\gamma^{2}}$ (211) Including $C_{\gamma}({\boldsymbol{q}})$ in (VI.3.2) we have in (VI.3.3), $\displaystyle\mathcal{H}_{int}\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s}C_{\gamma}({\boldsymbol{q}})\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ $\displaystyle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})$ $\displaystyle\times e^{2}\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\beta^{2}\,\frac{{\boldsymbol{q}}_{\perp}^{2}}{{\boldsymbol{q}}^{2}}\frac{|{\boldsymbol{q}}|+\beta q_{\|}}{{\boldsymbol{q}}_{\perp}^{2}+q_{\|}^{2}/\gamma^{2}}\,\big{[}e^{-i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}+e^{i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}\big{]}\left|{0}\right\rangle$ (212) The $\beta q_{\|}$ term in the numerator does not contribute since the remaining integrand is symmetric under ${\boldsymbol{q}}\to-{\boldsymbol{q}}$. We may then use this symmetry to set $\exp[i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})]\to\exp[-i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})]$ and cancel a factor $2|{\boldsymbol{q}}|$. 
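The last equality in (211) may be made explicit: with $1-\beta^{2}=1/\gamma^{2}$, $\displaystyle\big{(}|{\boldsymbol{q}}|-\beta q_{\|}\big{)}\big{(}|{\boldsymbol{q}}|+\beta q_{\|}\big{)}={\boldsymbol{q}}^{2}-\beta^{2}q_{\|}^{2}={\boldsymbol{q}}_{\perp}^{2}+(1-\beta^{2})\,q_{\|}^{2}={\boldsymbol{q}}_{\perp}^{2}+\frac{q_{\|}^{2}}{\gamma^{2}}$ which produces the denominator that combines with the Coulomb contribution in the next step.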
Combining this transverse photon contribution with the Coulomb one (199) the integral over ${\boldsymbol{q}}$ becomes (with ${\boldsymbol{x}}={\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}$), $\displaystyle-e^{2}$ $\displaystyle\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}}\,\frac{1}{{\boldsymbol{q}}^{2}}\Big{[}1-\frac{\beta^{2}q_{\perp}^{2}}{{\boldsymbol{q}}_{\perp}^{2}+q_{\|}^{2}/\gamma^{2}}=\frac{{\boldsymbol{q}}^{2}/\gamma^{2}}{{\boldsymbol{q}}_{\perp}^{2}+q_{\|}^{2}/\gamma^{2}}\Big{]}e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}=-\frac{e^{2}}{\gamma^{2}}\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}}\,\frac{e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}}{{\boldsymbol{q}}_{\perp}^{2}+q_{\|}^{2}/\gamma^{2}}=-\frac{\alpha}{\gamma\sqrt{{\boldsymbol{x}}_{\perp}^{2}+\gamma^{2}x_{\|}^{2}}}$ (213) The last equality follows from the scaling $q_{\|}=\gamma k_{\|}$, which maps the integral onto the standard Coulomb transform $\int\frac{d{\boldsymbol{k}}}{(2\pi)^{3}}\,\frac{e^{-i{\boldsymbol{k}}\cdot{\boldsymbol{r}}}}{{\boldsymbol{k}}^{2}}=\frac{1}{4\pi|{\boldsymbol{r}}|}$ with ${\boldsymbol{r}}=({\boldsymbol{x}}_{\perp},\gamma x_{\|})$. Adding the kinetic energy (198) the eigenstate condition (VI.3.3) imposes the bound state condition on $\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}$, with the required eigenvalue (195), $\displaystyle\Big{[}E_{P}-\frac{1}{m\gamma}\Big{(}\boldsymbol{\nabla}_{\perp}^{2}+\frac{1}{\gamma^{2}}\boldsymbol{\nabla}_{\|}^{2}\Big{)}-\frac{\alpha}{\gamma\sqrt{{\boldsymbol{x}}_{\perp}^{2}+\gamma^{2}x_{\|}^{2}}}\Big{]}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}})=\Big{(}E_{P}+\frac{1}{\gamma}\,E_{b}\Big{)}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}})$ (214) A comparison with the ${\boldsymbol{P}}=0$ equation (193) shows that up to a normalization we have $\displaystyle\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}})=\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{\perp},\gamma x_{\|})$ (215) i.e., standard Lorentz contraction. The contraction is the same for all Dirac components, justifying the factorization (183). According to (194) the contracted radial function is $\displaystyle F^{({\boldsymbol{P}})}({\boldsymbol{x}})=\gamma\,F^{(0)}(r_{P})=\gamma\,\frac{\alpha^{3/2}m^{2}}{\sqrt{4\pi}}\,e^{-\alpha mr_{P}/2}\hskip 56.9055ptr_{P}\equiv\sqrt{{\boldsymbol{x}}_{\perp}^{2}+\gamma^{2}x_{\|}^{2}}\hskip 28.45274pt\gamma=\frac{E_{P}}{2m}=\frac{\sqrt{{\boldsymbol{P}}^{2}+4m^{2}}}{2m}$ (216) The Lorentz contraction of $F^{({\boldsymbol{P}})}({\boldsymbol{x}})$ agrees with the classical result. Recall, however, that the $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ Fock component (206) also contributes. The kinetic energies of its constituents (209), (210) are of ${\cal O}\left(\alpha\right)$, i.e., large compared to the ${\cal O}\left(\alpha^{2}\right)$ binding energy of $\left|{\mathcal{B},{\boldsymbol{P}}}\right\rangle$. By the uncertainty principle $\left|{\mathcal{B},{\boldsymbol{P}}}\right\rangle$ fluctuates into $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ only a fraction $\alpha$ of the time. Equivalently, the norm of the $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ Fock component is of ${\cal O}\left(\alpha\right)$.

### VI.4 * Hyperfine splitting of Positronium at ${\boldsymbol{P}}=0$

Hyperfine splitting (hfs) is defined as the difference between the Ortho- and Parapositronium binding energies, $E_{hfs}=E_{b}(Ortho)-E_{b}(Para)$. The hfs (I.1) is known to high accuracy, with current work addressing the ${\cal O}\left(\alpha^{7}m\right)$ contribution using the methods of NRQED. Here I shall illustrate the Fock state method in temporal gauge by evaluating the ${\cal O}\left(\alpha^{4}m\right)$ contribution.
At this order the hfs arises from transverse photon exchange between the $e^{-}$ and $e^{+}$, as well as from the annihilation contribution $e^{-}e^{+}\to\gamma\to e^{-}e^{+}$ (for Orthopositronium only). In the Fock state approach this means considering the $\left|{e^{-}e^{+}\gamma}\right\rangle$ and $\left|{\gamma}\right\rangle$ Fock states, respectively.

#### VI.4.1 The transverse photon Fock state $\left|{e^{-}e^{+}\gamma}\right\rangle$ contribution

In section VI.3 I evaluated the transverse photon exchange contribution to Positronium with ${\boldsymbol{P}}\neq 0$ at leading order. Here I consider transverse photon exchange for Positronium at rest (${\boldsymbol{P}}=0$). The two electron-photon vertices are then proportional to ${\cal O}\left(\alpha\right)$ Bohr momenta, so the transverse photon contribution is of ${\cal O}\left(\alpha^{4}\right)$ in the rest frame. I discuss only photon emission from the $e^{-}$ and absorption by the $e^{+}$. The converse contribution is identical and is taken into account by a factor 2 in the final result. Photons both emitted and absorbed on the $e^{-}$ do not contribute to the spin correlation (hfs) between the $e^{-}$ and $e^{+}$. The Positronium state including the Fock state with a transverse photon is (206) $\displaystyle\left|{\mathcal{B}}\right\rangle=\left|{e^{-}e^{+};\mathcal{B}}\right\rangle+\left|{e^{-}e^{+}\gamma;\mathcal{B}}\right\rangle$ (217) where $\left|{e^{-}e^{+};\mathcal{B}}\right\rangle$ is defined in (165) with ${\boldsymbol{P}}=0$ and the wave function $\Phi_{\mathcal{B}}({\boldsymbol{x}})=\Gamma_{\mathcal{B}}\,F(r)$ according to (182) and (186). Its Dirac structures are $\Gamma_{Para}=\gamma_{5}$, $\Gamma_{Ortho}={\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}$ as in (184), (185) and the radial function $F(r)$ is given in (194). The transverse photon state is as in (206) with $C_{\gamma}=-1/|{\boldsymbol{q}}|$ from (211) with ${\boldsymbol{P}}=0$, $\displaystyle\left|{e^{-}e^{+}\gamma;\mathcal{B}}\right\rangle=\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2|{\boldsymbol{q}}|}\sum_{s}\frac{-1}{|{\boldsymbol{q}}|}\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ (218) The $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ state created by $\mathcal{H}_{int}({\boldsymbol{A}}_{T})$ (201) acting on $\left|{e^{-}e^{+};\mathcal{B}}\right\rangle$ with $e^{-}\to e^{-}\gamma$ is given in (A.591). When the Hamiltonian acts on its eigenstate $\left|{\mathcal{B}}\right\rangle$ the energy eigenvalue may be read off from the coefficient of the $\left|{e^{-}e^{+};\mathcal{B}}\right\rangle$ Fock state. Projecting on this state, $\displaystyle\mathcal{H}\left|{\mathcal{B}}\right\rangle=\Big{[}2m-{\textstyle\frac{1}{4}}m\alpha^{2}+{\cal O}\left(\alpha^{4}\right)+\frac{\langle{e^{-}e^{+};\mathcal{B}}|\mathcal{H}_{int}\left|{e^{-}e^{+}\gamma;\mathcal{B}}\right\rangle}{\langle{e^{-}e^{+};\mathcal{B}}|e^{-}e^{+};\mathcal{B}\rangle}\Big{]}\left|{e^{-}e^{+};\mathcal{B}}\right\rangle+\ldots$ (219) The ${\cal O}\left(\alpha^{4}\right)$ term represents higher order corrections to the eigenvalue from $(\mathcal{H}_{0}+\mathcal{H}_{V})\left|{e^{-}e^{+};\mathcal{B}}\right\rangle$, e.g., due to the Taylor expansion of the energies $E_{i}$ in (VI.2). They are the same for Para- and Orthopositronium and do not affect the hfs. I keep only terms which are spin-, i.e., $\Gamma_{\mathcal{B}}$-dependent.
The absorption of the photon on the $e^{+}$ is given by $\left[{\mathcal{H}_{int}},{\psi({\boldsymbol{x}}_{2})}\right]$ analogously to the emission from $e^{-}$ in (A.591), $\displaystyle\mathcal{H}_{int}\left|{e^{-}e^{+}\gamma;\mathcal{B}}\right\rangle=-e^{2}\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}2{\boldsymbol{q}}^{2}}\sum_{s}\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1})$ $\displaystyle{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1}){\boldsymbol{\alpha}}\cdot{\boldsymbol{\varepsilon}}_{s}^{*}({\boldsymbol{q}})e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{1}}{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\Gamma_{\mathcal{B}}F(r)$ $\displaystyle\times{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2}){\boldsymbol{\alpha}}\cdot{\boldsymbol{\varepsilon}}_{s}({\boldsymbol{q}})e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{2}}{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (220) where $r=|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|$ and the extra pair of $\Lambda_{\pm}$ project $b^{\dagger}$ from $\bar{\psi}({\boldsymbol{x}}_{1})$ and $d^{\dagger}$ from $\psi({\boldsymbol{x}}_{2})$ (167). At ${\cal O}\left(\alpha^{4}\right)$ two momenta can contribute, one each from the photon vertices. The identities in (A.12) and (A.12) show that ${\boldsymbol{\alpha}}$ bracketed by two projectors becomes a derivative. This contribution reduces the Dirac structure of (VI.4.1) so that the overlap in (219) is independent of $\Gamma_{\mathcal{B}}$, i.e., it does not contribute to the hfs. Hence we need only consider the contributions $\displaystyle e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{1}}{\overset{\leftarrow}{\Lambda}}({\boldsymbol{x}}_{1})\to-\frac{1}{2m}{\boldsymbol{\alpha}}\cdot{\boldsymbol{q}}\,e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{1}}\hskip 56.9055pt{\overset{\rightarrow}{\Lambda}}({\boldsymbol{x}}_{2})e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{2}}\to-\frac{1}{2m}{\boldsymbol{\alpha}}\cdot{\boldsymbol{q}}\,e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{2}}$ (221) At ${\cal O}\left(\alpha^{4}\right)$ the remaining projectors can be set to lowest order, $\Lambda_{\pm}=(1\pm\gamma^{0})/2$. The products of ${\boldsymbol{\alpha}}$-matrices may be reduced through $\alpha_{i}\alpha_{j}=\delta_{ij}+i\varepsilon_{ijk}\alpha_{k}\gamma_{5}$. The $\delta_{ij}$ term does not contribute here since ${\boldsymbol{q}}\cdot{\boldsymbol{\varepsilon}}_{s}({\boldsymbol{q}})=0$, $\displaystyle{\varepsilon_{s}^{i}}^{*}q^{j}\alpha_{i}\alpha_{j}=i\varepsilon_{ijk}{\varepsilon_{s}^{i}}^{*}q^{j}\alpha_{k}\gamma_{5}\hskip 56.9055ptq^{\ell}\varepsilon_{s}^{m}\alpha_{\ell}\alpha_{m}=i\varepsilon_{\ell mn}q^{\ell}\varepsilon_{s}^{m}\alpha_{n}\gamma_{5}$ (222) Since $\left[{\gamma_{5}},{\Gamma_{\mathcal{B}}}\right]=0$ the $\gamma_{5}$’s cancel, $\gamma_{5}^{2}=1$. The sum over photon polarizations (VI.3.2) gives $\sum_{s}{\varepsilon_{s}^{i}}^{*}({\boldsymbol{q}})\varepsilon_{s}^{m}({\boldsymbol{q}})\to\delta^{im}$, since the $-q^{i}q^{m}/{\boldsymbol{q}}^{2}$ term vanishes. Then $i^{2}\varepsilon_{ijk}\varepsilon_{in\ell}q^{j}q^{\ell}={\boldsymbol{q}}^{2}\delta^{kn}-q^{k}q^{n}$. Writing $\alpha_{k}\Gamma_{\mathcal{B}}=\left[{\alpha_{k}},{\Gamma_{\mathcal{B}}}\right]+\Gamma_{\mathcal{B}}\alpha_{k}$ the second term does not give an hfs, while the commutator vanishes for $\Gamma_{Para}=\gamma_{5}$.
Hence it suffices to consider the Orthopositronium $(j^{z}=\lambda)$ contribution, $\left[{\alpha_{k}},{{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}}\right]=-2i\varepsilon_{jkp}e_{\lambda}^{j}\alpha_{p}\gamma_{5}$. This is multiplied by the $\alpha_{n}$ in (222), giving $-2i\varepsilon_{jkp}e_{\lambda}^{j}\alpha_{p}\alpha_{n}\gamma_{5}=2\varepsilon_{jkp}\varepsilon_{nip}e_{\lambda}^{j}\alpha_{i}=2(e_{\lambda}^{n}\alpha_{k}-\delta^{kn}{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}})$. Combined with the ${\boldsymbol{q}}$-dependence found above we have $\displaystyle({\boldsymbol{q}}^{2}\delta^{kn}-q^{k}q^{n})2(e_{\lambda}^{n}\alpha_{k}-\delta^{kn}{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}})=-2\big{[}{\boldsymbol{q}}^{2}{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}+({\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{q}})({\boldsymbol{\alpha}}\cdot{\boldsymbol{q}})\big{]}$ (223) The contribution to (VI.4.1) that is relevant for the hfs is thus $\displaystyle\mathcal{H}_{int}\left|{e^{-}e^{+}\gamma;Ortho}\right\rangle=\frac{e^{2}}{4m^{2}}\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}{\boldsymbol{q}}^{2}}\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1})\Lambda_{+}F(r)e^{-i{\boldsymbol{q}}\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}\big{[}{\boldsymbol{q}}^{2}{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}+({\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{q}})({\boldsymbol{\alpha}}\cdot{\boldsymbol{q}})\big{]}\Lambda_{-}\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (224) where $\Lambda_{\pm}=(1\pm\gamma^{0})/2$. In the matrix elements of (219) both electron fields are annihilated and the integral $\int d({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2=(2\pi)^{3}\delta(\boldsymbol{0})$ cancels with the norm (168) in the denominator up to a factor $4m$. Recalling that we only considered photon emission from $e^{-}$ and absorption on $e^{+}$ and so are getting half of the hfs, $\displaystyle{\textstyle\frac{1}{2}}E_{hfs}^{T}$ $\displaystyle=\frac{\langle{e^{-}e^{+};\mathcal{B}}|\mathcal{H}_{int}\left|{e^{-}e^{+}\gamma;\mathcal{B}}\right\rangle}{\langle{e^{-}e^{+};\mathcal{B}}|e^{-}e^{+};\mathcal{B}\rangle}$ $\displaystyle=\frac{e^{2}}{16m^{3}}\int d{\boldsymbol{x}}\,\frac{d{\boldsymbol{q}}}{(2\pi)^{3}{\boldsymbol{q}}^{2}}\,|F(r)|^{2}e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}\mathrm{Tr}\,\big{\\{}{\textstyle\frac{1}{2}}(1-\gamma^{0}){\boldsymbol{e}}_{\lambda}^{*}\cdot{\boldsymbol{\alpha}}{\textstyle\frac{1}{2}}(1+\gamma^{0})\big{[}{\boldsymbol{q}}^{2}{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}+{\boldsymbol{e}}_{\lambda}^{i}\alpha^{j}q^{i}q^{j}\big{]}\big{\\}}$ (225) The factor multiplying $q^{i}q^{j}$ in the integrand is symmetric under $q^{i}\to-q^{i},\ x^{i}\to-x^{i}$, allowing $q^{i}q^{j}\to{\textstyle\frac{1}{3}}{\boldsymbol{q}}^{2}\delta^{ij}$. The trace factor becomes ${\textstyle\frac{2}{3}}{\boldsymbol{q}}^{2}\mathrm{Tr}\,\big{\\{}({\boldsymbol{e}}_{\lambda}^{*}\cdot{\boldsymbol{\alpha}})({\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}})\big{\\}}={\textstyle\frac{8}{3}}{\boldsymbol{q}}^{2}$.
With $F(r)$ given by (194), $\displaystyle{\textstyle\frac{1}{2}}E_{hfs}^{T}=\frac{e^{2}}{6m^{3}}\int d{\boldsymbol{x}}\,|F(r)|^{2}\int\frac{d{\boldsymbol{q}}}{(2\pi)^{3}}e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}}=\frac{4\pi\alpha}{6m^{3}}|F(0)|^{2}=\frac{1}{6}m\alpha^{4}$ (226) The contribution to the hfs from transverse photon exchange is thus as expected Penin (2014), $\displaystyle E_{hfs}^{T}=\frac{1}{3}m\alpha^{4}$ (227) #### VI.4.2 Hyperfine splitting from annihilation: $e^{-}e^{+}\to\gamma\to e^{-}e^{+}$ The $\left|{e^{-}e^{+}}\right\rangle\to\left|{\gamma}\right\rangle\to\left|{e^{-}e^{+}}\right\rangle$ transition is proportional to the square $|\Phi_{\mathcal{B}}(0)|^{2}$ of the ${\boldsymbol{P}}=0$ Positronium (165) wave function at the origin. $\Phi_{\mathcal{B}}({\boldsymbol{x}})=\Gamma_{\mathcal{B}}F(r)$ where $\Gamma_{\mathcal{B}}$ for Para- and Orthopositronium are in (184), (185) and their common radial function $F(r)$ is in (194). Counting also the vertex couplings the transition is $\propto e^{2}\,|F(0)|^{2}\propto\alpha^{4}$. Hence we may neglect the ${\cal O}\left(\alpha m\right)$ relative (Bohr) momenta in evaluating the hfs at ${\cal O}\left(\alpha^{4}\right)$. The projectors $\Lambda_{\pm}$ in the state (165) may then be replaced with ${\textstyle\frac{1}{2}}(1\pm\gamma^{0})$. Annihilating both the $e^{-}$ and $e^{+}$ in the state (165) by $\mathcal{H}_{int}$ gives $\displaystyle\mathcal{H}_{int}\left|{e^{-}e^{+};\mathcal{B}}\right\rangle$ $\displaystyle=-e\int d{\boldsymbol{x}}\,\bar{\psi}({\boldsymbol{x}}){\boldsymbol{\alpha}}\cdot{\boldsymbol{A}}({\boldsymbol{x}})\psi({\boldsymbol{x}})\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\textstyle\frac{1}{2}}(1+\gamma^{0})\Phi_{\mathcal{B}}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\textstyle\frac{1}{2}}(1-\gamma^{0})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (228) $\displaystyle=-e\int d{\boldsymbol{x}}\,\mathrm{Tr}\,\big{\\{}{\boldsymbol{\alpha}}\cdot{\boldsymbol{A}}({\boldsymbol{x}})\gamma^{0}{\textstyle\frac{1}{2}}(1+\gamma^{0})\Gamma_{\mathcal{B}}F(0){\textstyle\frac{1}{2}}(1-\gamma^{0})\big{\\}}\left|{0}\right\rangle=-{\textstyle\frac{1}{2}}eF(0)\int d{\boldsymbol{x}}\,\mathrm{Tr}\,[\alpha^{i}\Gamma_{\mathcal{B}}]\,A^{i}({\boldsymbol{x}})\left|{0}\right\rangle$ As expected due to charge conjugation invariance, this vanishes for Parapositronium, $\Gamma_{Para}=\gamma_{5}$. 
Hence the annihilation contribution to the hfs arises only from Orthopositronium, $\Gamma_{Ortho}^{\lambda}={\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}$ for states with $j^{z}=\lambda$: $\displaystyle\mathcal{H}_{int}\left|{e^{-}e^{+};\mathcal{O}_{\lambda}}\right\rangle$ $\displaystyle=-2eF(0)\int d{\boldsymbol{x}}\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{A}}({\boldsymbol{x}})\left|{0}\right\rangle\equiv-2eF(0)\left|{{\boldsymbol{A}},\lambda}\right\rangle$ (229) The relevant action of the Hamiltonian (138) on this state is given by the canonical commutation relations (137), $\displaystyle\mathcal{H}\left|{{\boldsymbol{A}},\lambda}\right\rangle\to\int d{\boldsymbol{y}}\,{\textstyle\frac{1}{2}}{\boldsymbol{E}}^{2}({\boldsymbol{y}})\int d{\boldsymbol{x}}\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{A}}({\boldsymbol{x}})\left|{0}\right\rangle=i\int d{\boldsymbol{x}}\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{E}}({\boldsymbol{x}})\left|{0}\right\rangle\equiv i\left|{{\boldsymbol{E}},\lambda}\right\rangle$ (230) $\mathcal{H}\left|{{\boldsymbol{E}},\lambda}\right\rangle$ has an overlap $C_{\mathcal{O}}$ with Orthopositronium. Neglecting the other states which do not contribute here, $\displaystyle\mathcal{H}_{int}\left|{{\boldsymbol{E}},\lambda}\right\rangle=-e\int d{\boldsymbol{y}}\,\psi^{\dagger}({\boldsymbol{y}})\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{A}}({\boldsymbol{y}})\psi({\boldsymbol{y}})\int d{\boldsymbol{x}}\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{E}}({\boldsymbol{x}})\left|{0}\right\rangle=ie\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}\,\psi({\boldsymbol{x}})\left|{0}\right\rangle=C_{\mathcal{O}}\left|{e^{-}e^{+};\mathcal{O}_{\lambda}}\right\rangle+\ldots$ (231) With the normalization (168) the overlap $C_{\mathcal{O}}$ is $\displaystyle C_{\mathcal{O}}=\frac{\langle{e^{-}e^{+};\mathcal{O}_{\lambda}}|\mathcal{H}\left|{{\boldsymbol{E}},\lambda}\right\rangle}{\langle{e^{-}e^{+};\mathcal{O}_{\lambda}}|e^{-}e^{+};\mathcal{O}_{\lambda}\rangle}$ $\displaystyle=\frac{1}{4m(2\pi)^{3}\delta^{3}(0)}\langle{0}|\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\psi^{\dagger}({\boldsymbol{x}}_{2})\,{\boldsymbol{e}}_{\lambda}^{*}\cdot{\boldsymbol{\alpha}}\,F^{*}(|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|){\textstyle\frac{1}{2}}(1+\gamma^{0})\gamma^{0}\psi({\boldsymbol{x}}_{1})$ $\displaystyle\times ie\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\,{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}\,\psi({\boldsymbol{x}})\left|{0}\right\rangle=\frac{ie}{2m}\,F^{*}(0)$ (232) The Orthopositronium state may at ${\cal O}\left(\alpha^{4}\right)$ be considered as a superposition of the three states involved, $\displaystyle\left|{\mathcal{O}_{\lambda}}\right\rangle=\left|{e^{-}e^{+};\mathcal{O}_{\lambda}}\right\rangle+C_{A}\left|{{\boldsymbol{A}},\lambda}\right\rangle+C_{E}\left|{{\boldsymbol{E}},\lambda}\right\rangle$ (233) and should be an eigenstate of $\mathcal{H}$ with eigenvalue $E_{\mathcal{O}}$.
Using (229), (230) and (VI.4.2), $\displaystyle\mathcal{H}\left|{\mathcal{O}_{\lambda}}\right\rangle$ $\displaystyle=(2m-{\textstyle\frac{1}{4}}m\alpha^{2}+C_{\mathcal{O}}C_{E})\left|{e^{-}e^{+};\mathcal{O}_{\lambda}}\right\rangle+C_{A}\,i\left|{{\boldsymbol{E}},\lambda}\right\rangle-2eF(0)\left|{{\boldsymbol{A}},\lambda}\right\rangle$ $\displaystyle=E_{\mathcal{O}}\,\Big{[}\left|{e^{-}e^{+};\mathcal{O}_{\lambda}}\right\rangle-\frac{2eF(0)}{E_{\mathcal{O}}}\left|{{\boldsymbol{A}},\lambda}\right\rangle+\frac{iC_{A}}{E_{\mathcal{O}}}\left|{{\boldsymbol{E}},\lambda}\right\rangle\Big{]}=E_{\mathcal{O}}\left|{\mathcal{O}_{\lambda}}\right\rangle$ (234) The eigenstate constraint requires (at leading order) $C_{A}=-2eF(0)/2m$ and $C_{E}=iC_{A}/2m=-2ieF(0)/4m^{2}$. With the value of $F(0)$ in (194) this gives the hfs term in $E_{\mathcal{O}}$ as $\displaystyle C_{\mathcal{O}}C_{E}=\frac{ie}{2m}\,\frac{-2ie}{4m^{2}}|F(0)|^{2}={\textstyle\frac{1}{4}}m\alpha^{4}$ (235) as quoted in Penin (2014). ### VI.5 * Electromagnetic form factor of Positronium atoms in an arbitrary frame In this section I evaluate the electromagnetic form factors of Positronium. The elastic form factor is evaluated with leading order wave functions in an arbitrary frame, demonstrating the Lorentz covariance of the result. The transition form factor from Para- to Orthopositronium is calculated in the rest frame only. The electromagnetic current $j^{\mu}(z)$ may be translated to the origin ($z=0$) using the four-momentum operator $\hat{P}$, $\displaystyle j^{\mu}(z)=\bar{\psi}(z)\gamma^{\mu}\psi(z)=e^{i\hat{P}\cdot z}j^{\mu}(0)e^{-i\hat{P}\cdot z}$ (236) The EM form factor $F_{AB}^{\mu}$ is the expectation value of the current between atoms $A,B$ of three-momenta ${\boldsymbol{P}_{A}},{\boldsymbol{P}_{B}}$ whose four-momenta satisfy $P_{A}^{2}=M_{A}^{2}$, $P_{B}^{2}=M_{B}^{2}$, $\displaystyle F_{AB}^{\mu}(z)$ $\displaystyle=\langle{B,{\boldsymbol{P}_{B}}}|j^{\mu}(z)\left|{A,{\boldsymbol{P}_{A}}}\right\rangle=e^{i(P_{B}-P_{A})\cdot z}\langle{B,{\boldsymbol{P}_{B}}}|j^{\mu}(0)\left|{A,{\boldsymbol{P}_{A}}}\right\rangle$ $\displaystyle F_{AB}^{\mu}(q)$ $\displaystyle=\int d^{4}z\,e^{-iq\cdot z}\,F^{\mu}_{AB}(z)=(2\pi)^{4}\delta^{4}(P_{B}-P_{A}-q)G^{\mu}_{AB}(q)$ (237) In the following I consider $G^{\mu}_{AB}(q)$, keeping in mind the four-momentum constraint. The Positronium state is defined in (165). With a short-hand notation $\Psi_{A}$ for the wave function, the incoming state is $\displaystyle\left|{A,{\boldsymbol{P}}}\right\rangle$ $\displaystyle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1})\Psi_{A}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ $\displaystyle\Psi_{A}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})$ $\displaystyle={\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}_{A}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi^{({\boldsymbol{P}}_{A})}_{A}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})$ (238) where the projectors $\Lambda_{\pm}$ are defined in (VI.1.1). 
The same notation for the final state $\langle{B,{\boldsymbol{P}}}|$ gives $\displaystyle G_{AB}^{\mu}(q)=\int d{\boldsymbol{y}}_{1}d{\boldsymbol{y}}_{2}\,d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\langle{0}|\psi^{\dagger}({\boldsymbol{y}}_{2})\Psi_{B}^{\dagger}({\boldsymbol{y}}_{1},{\boldsymbol{y}}_{2})\gamma^{0}\psi({\boldsymbol{y}}_{1})\,\bar{\psi}(0)\gamma^{\mu}\psi(0)\,\bar{\psi}({\boldsymbol{x}}_{1})\Psi_{A}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (239) The contraction of $\psi(0)$ with $\bar{\psi}({\boldsymbol{x}}_{1})$ corresponds to $j^{\mu}$ interacting with the $e^{-}$. This sets ${\boldsymbol{x}}_{1}={\boldsymbol{y}}_{1}=0$ and ${\boldsymbol{y}}_{2}={\boldsymbol{x}}_{2}$. I denote this contribution $G_{AB}^{\mu}(q,e^{-})$. Interaction with $e^{+}$ corresponds to $\psi(0)$ contracting with $\psi^{\dagger}({\boldsymbol{y}}_{2})$ and is denoted $G_{AB}^{\mu}(q,e^{+})$. It has a minus sign due to anticommutation and sets ${\boldsymbol{x}}_{2}={\boldsymbol{y}}_{2}=0$ and ${\boldsymbol{y}}_{1}={\boldsymbol{x}}_{1}$. Thus $\displaystyle G_{AB}^{\mu}(q,e^{-})$ $\displaystyle=\int d{\boldsymbol{x}}\,\mathrm{Tr}\,\big{\\{}\Psi_{A}(0,-{\boldsymbol{x}})\Psi_{B}^{\dagger}(0,-{\boldsymbol{x}})\gamma^{\mu}\gamma^{0}\big{\\}}$ (240) $\displaystyle G_{AB}^{\mu}(q,e^{+})$ $\displaystyle=-\int d{\boldsymbol{x}}\,\mathrm{Tr}\,\big{\\{}\Psi_{B}^{\dagger}(-{\boldsymbol{x}},0)\Psi_{A}(-{\boldsymbol{x}},0)\gamma^{0}\gamma^{\mu}\big{\\}}$ The two contributions are related by charge conjugation. Using (181) and recalling that $\alpha_{2}\gamma^{\mu}\alpha_{2}=-(\gamma^{\mu})^{T}$ and $\alpha_{2}\,{\boldsymbol{\alpha}}\,\alpha_{2}=-{\boldsymbol{\alpha}}^{T}$ we have $\displaystyle{\alpha_{2}}\Psi^{T}({\boldsymbol{x}}_{2},{\boldsymbol{x}}_{1}){\alpha_{2}}=\eta_{C}\,\Psi({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})$ (241) Multiplying the argument of the $\mathrm{Tr}\,$ in $G_{AB}^{\mu}(q,e^{-})$ by ${\alpha_{2}}$ from the left and right and taking its transpose shows that $\displaystyle G_{AB}^{\mu}(q,e^{+})=-\eta^{A}_{C}\eta^{B}_{C}G_{AB}^{\mu}(q,e^{-})$ (242) As expected the photon ($\eta_{C}^{\gamma}=-1$) requires $\eta^{A}_{C}=-\eta^{B}_{C}$ when $A$ and $B$ are eigenstates of charge conjugation, $\displaystyle G_{AB}^{\mu}(q)=(1-\eta^{A}_{C}\eta^{B}_{C})\int d{\boldsymbol{x}}\,\mathrm{Tr}\,\big{\\{}\Psi_{A}(0,-{\boldsymbol{x}})\Psi_{B}^{\dagger}(0,-{\boldsymbol{x}})\gamma^{\mu}\gamma^{0}\big{\\}}$ (243) #### VI.5.1 Parapositronium form factor I take both $A$ and $B$ to be Parapositronium and consider only $G_{AB}^{\mu}(q,e^{-})$. This is relevant for states which are not eigenstates of charge conjugation, e.g., a hypothetical $\mu^{-}e^{+}$ atom where the muon and electron have the same mass. Even for standard Positronium $G_{AB}^{\mu}(q,e^{-})\neq 0$ and should have the form required by Lorentz covariance, $\displaystyle G^{\mu}(q,e^{-})=(P_{A}+P_{B})^{\mu}F(q^{2})$ (244) After partial integrations in the state (VI.5) we need consider only the projector derivatives acting on the ${\cal O}\left(\alpha^{0}\right)$ phase $\exp[i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2]$, since $\boldsymbol{\nabla}\Phi^{(P)}_{A,B}({\boldsymbol{x}})$ is of ${\cal O}\left(\alpha\right)$. 
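The charge conjugation identities used above are also easy to verify numerically. A minimal sketch (Python with numpy; the Dirac representation is an assumption, the identities hold in any representation) of $\alpha_{2}\gamma^{\mu}\alpha_{2}=-(\gamma^{\mu})^{T}$ and $\alpha_{2}\,{\boldsymbol{\alpha}}\,\alpha_{2}=-{\boldsymbol{\alpha}}^{T}$:

```python
import numpy as np

# Dirac matrices in the Dirac representation (illustrative choice).
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, si], [-si, Z2]]) for si in s]  # gamma^0 ... gamma^3
alp = [g0 @ gam[i] for i in (1, 2, 3)]                      # alpha^1, alpha^2, alpha^3
a2 = alp[1]                                                 # alpha_2

for G in gam + alp:
    assert np.allclose(a2 @ G @ a2, -G.T)
print("alpha_2 G alpha_2 = -G^T holds for all gamma^mu and alpha^i")
```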
The projectors then become, $\displaystyle\Lambda_{\pm}({\boldsymbol{P}})\equiv\frac{1}{2E_{P}}\big{(}E_{P}\mp{\boldsymbol{\alpha}}\cdot{\boldsymbol{P}}\pm M\gamma^{0}\big{)}=\Lambda_{\pm}^{\dagger}({\boldsymbol{P}})=\big{[}\Lambda_{\pm}({\boldsymbol{P}})\big{]}^{2}\hskip 56.9055pt\Lambda_{+}({\boldsymbol{P}})\Lambda_{-}({\boldsymbol{P}})=0$ (245) where $E_{P}=\sqrt{{\boldsymbol{P}}^{2}+M^{2}}$ and $M=2m$ at leading order. The Parapositronium states are relativistically normalized (168), with the Lorentz contracted wave function $\Phi^{(P)}({\boldsymbol{x}})$ given in (183), (184), (186) and (216). Taking ${\boldsymbol{P}}=(0,0,P)=(0,0,M\sinh\xi)$ along the $z$-axis, $\displaystyle\Phi^{(P)}({\boldsymbol{x}})=\frac{E_{P}}{M}\,\gamma_{5}F^{({\boldsymbol{P}})}({\boldsymbol{x}})\hskip 56.9055ptF^{({\boldsymbol{P}})}({\boldsymbol{x}})$ $\displaystyle=\frac{E_{P}}{M}\,F^{(0)}(r_{P})=\frac{E_{P}}{M}\,\frac{\alpha^{3/2}m^{2}}{\sqrt{4\pi}}\,e^{-\alpha mr_{P}/2}$ (246) $\displaystyle r_{P}$ $\displaystyle\equiv\sqrt{x^{2}+y^{2}+z^{2}\cosh^{2}\xi}$ (247) The photon momentum ${\boldsymbol{q}}$ must be of ${\cal O}\left(\alpha\right)$ for a leading order overlap between $\Psi_{A}$ and $\Psi_{B}$. Hence we may set ${\boldsymbol{P}}_{B}={\boldsymbol{P}}_{A}+{\boldsymbol{q}}={\boldsymbol{P}}+{\cal O}\left(\alpha\right)$ in $G_{AB}^{\mu}(q,e^{-})$ (240). However, we need to retain the ${\boldsymbol{q}}$-dependence of the $\Psi_{A}(0,-{\boldsymbol{x}})\Psi_{B}^{\dagger}(0,-{\boldsymbol{x}})$ phase factor $\exp[i({\boldsymbol{P}}_{B}-{\boldsymbol{P}}_{A})\cdot{\boldsymbol{x}}/2]=\exp(i{\boldsymbol{q}}\cdot{\boldsymbol{x}}/2)$, which reflects the photon wave function. The expression for $G_{AB}^{\mu}(q,e^{-})$ is then $\displaystyle G^{\mu}(q,e^{-})$ $\displaystyle=\Big{(}\frac{E_{P}}{M}\Big{)}^{2}\int d{\boldsymbol{x}}\,|F^{({\boldsymbol{P}})}({\boldsymbol{x}})|^{2}e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}/2}\mathrm{Tr}\,\big{\\{}\Lambda_{+}({\boldsymbol{P}})\gamma_{5}\Lambda_{-}({\boldsymbol{P}})\Lambda_{-}({\boldsymbol{P}})\gamma_{5}\Lambda_{+}({\boldsymbol{P}})\gamma^{\mu}\gamma^{0}\big{\\}}$ (248) From the definitions (245) of $\Lambda_{\pm}({\boldsymbol{P}})$ it follows that $\displaystyle\Lambda_{+}({\boldsymbol{P}})\gamma_{5}\Lambda_{-}({\boldsymbol{P}})=\frac{M}{E_{P}}\,\Lambda_{+}({\boldsymbol{P}})\gamma^{0}\gamma_{5}$ (249) Using this, the trace in (248) becomes $\displaystyle\mathrm{Tr}\,^{\mu}=\Big{(}\frac{M}{E_{P}}\Big{)}^{2}\mathrm{Tr}\,\big{\\{}\Lambda_{+}({\boldsymbol{P}})\gamma^{\mu}\gamma^{0}\big{\\}}=\Big{(}\frac{M}{E_{P}}\Big{)}^{2}\,\frac{2P^{\mu}}{E_{P}}$ (250) so that $\displaystyle G^{\mu}(q,e^{-})$ $\displaystyle=\frac{2P^{\mu}}{E_{P}}\,\int d{\boldsymbol{x}}\,|F^{({\boldsymbol{P}})}({\boldsymbol{x}})|^{2}e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}/2}$ (251) Changing the integration variable to ${\boldsymbol{x}}_{R}\equiv(x,y,z\cosh\xi)$ gives $d{\boldsymbol{x}}=d{\boldsymbol{x}}_{R}/\cosh\xi$ and $r_{P}=|{\boldsymbol{x}}_{R}|$ in (246). The photon four-momentum $q$ is constrained by kinematics, $\displaystyle M^{2}=(P+q)^{2}=M^{2}+2P\cdot q+{\cal O}\left(\alpha^{2}\right)$ (252) which at ${\cal O}\left(\alpha\right)$ implies $P\cdot q=E_{P}q^{0}-{\boldsymbol{P}}\cdot{\boldsymbol{q}}=0$. In the Positronium rest frame (${\boldsymbol{P}}=0$) this means $q^{0}_{R}=0$. Thus $q^{z}=q^{z}_{R}\cosh\xi$, where $q^{z}_{R}$ is the $z$-component of the photon momentum in the rest frame. Hence ${\boldsymbol{q}}\cdot{\boldsymbol{x}}={\boldsymbol{q}}_{R}\cdot{\boldsymbol{x}}_{R}$. 
Recalling that $\cosh\xi=E_{P}/M$ and using the expression for $F^{({\boldsymbol{P}})}({\boldsymbol{x}})$ in (246) gives $\displaystyle G^{\mu}(q,e^{-})$ $\displaystyle=\frac{2P^{\mu}}{E_{P}}\,\frac{E_{P}}{M}\,\frac{\alpha^{3}m^{4}}{4\pi}\int d{\boldsymbol{x}}_{R}\,\exp\big{[}(-\alpha M|{\boldsymbol{x}}_{R}|+i{\boldsymbol{q}}_{R}\cdot{\boldsymbol{x}}_{R})/2\big{]}$ (253) The integral is as in the rest frame, i.e., it is ${\boldsymbol{P}}$-independent, so the result is covariant and agrees with (244), $\displaystyle G^{\mu}(q,e^{-})$ $\displaystyle=2P^{\mu}\,\frac{(\alpha M)^{4}}{(Q^{2}+\alpha^{2}M^{2})^{2}}$ (254) where $2P^{\mu}=(P_{A}+P_{B})^{\mu}+{\cal O}\left(\alpha\right)$ and $Q^{2}={\boldsymbol{q}}_{R}^{2}=-q^{2}$. #### VI.5.2 Positronium transition form factor The $\gamma^{*}(q)$ + Parapositronium $\to$ Orthopositronium transition electromagnetic form factor has the structure $\displaystyle G_{\lambda}^{\mu}(q)=i\varepsilon^{\mu\nu\rho\sigma}P_{\nu}q_{\rho}e_{\sigma}^{\lambda}F(q^{2})$ (255) where $P$ is the four-momentum of one of the Positronia, $e^{\lambda}$ is the polarization vector (185) of the Orthopositronium (with $e^{\lambda}_{\sigma=0}=0$) and $q$ is the photon momentum. Symmetries and gauge invariance force the kinematic factor to be of ${\cal O}\left(q\right)$, i.e., of ${\cal O}\left(\alpha\right)$. This reflects the spin flip, from $S=0$ for Parapositronium to $S=1$ for Orthopositronium. In section VI.3 I demonstrated that transverse photon exchange contributes to the binding of Positronium at leading order for ${\boldsymbol{P}}\neq 0$. The photon exchange may at ${\cal O}\left(q\right)$ involve a spin flip at one of its vertices, whereupon the transition to Orthopositronium proceeds at ${\cal O}\left(\alpha^{0}\right)$. I shall not here work out the ${\cal O}\left(\alpha\right)$ corrections to the wave function of Positronium in motion, but limit myself to evaluating the transition form factor in the rest frame of the target, ${\boldsymbol{P}}_{A}=0$. 
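Before working out the transition case, the dipole form of the elastic result (254) can be cross-checked numerically: in the rest frame, $\int d{\boldsymbol{x}}\,|F^{(0)}(r)|^{2}\,e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}/2}$ should reproduce $M\,(\alpha M)^{4}/(Q^{2}+\alpha^{2}M^{2})^{2}$. A sketch (Python with scipy; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

alpha, m = 1 / 137.036, 1.0
M = 2 * m

def I_num(Q):
    # |F(0)(r)|^2 = alpha^3 m^4/(4 pi) exp(-alpha m r); the angular average of
    # exp(i q.x/2) is sin(Qr/2)/(Qr/2), written below via np.sinc.
    dens = lambda r: alpha**3 * m**4 / (4 * np.pi) * np.exp(-alpha * m * r)
    f = lambda r: 4 * np.pi * r**2 * dens(r) * np.sinc(Q * r / (2 * np.pi))
    return quad(f, 0, np.inf, limit=200)[0]

for Q in [0.2 * alpha * M, alpha * M, 5 * alpha * M]:
    closed = M * (alpha * M)**4 / (Q**2 + alpha**2 * M**2)**2
    print(Q, I_num(Q), closed)   # the two columns agree
```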
Using the expressions (245) for the $\Lambda_{\pm}({\boldsymbol{P}})$ projectors (with ${\boldsymbol{P}}_{A}=0,\ {\boldsymbol{P}}_{B}={\boldsymbol{q}},\ E_{B}\simeq M$), the Positronium wave functions given in (183), (184), (185) and (186) are, $\displaystyle\Psi_{A}(0,-{\boldsymbol{x}})$ $\displaystyle={\textstyle\frac{1}{2}}(1+\gamma^{0})\gamma_{5}\,F(r)$ $\displaystyle\Psi_{B}(0,-{\boldsymbol{x}})$ $\displaystyle=N_{\lambda}\Lambda_{+}({\boldsymbol{q}}){\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}\Lambda_{-}({\boldsymbol{q}})F(r)e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}/2}$ (256) The expression for $\Psi_{B}(0,-{\boldsymbol{x}})$ may be simplified using $\Lambda_{+}({\boldsymbol{q}})\Lambda_{-}({\boldsymbol{q}})=0$, $\displaystyle\Lambda_{+}({\boldsymbol{q}}){\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}\,\Lambda_{-}({\boldsymbol{q}})=\left\\{{\Lambda_{+}({\boldsymbol{q}})},{{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}}\right\\}\Lambda_{-}({\boldsymbol{q}})=\big{(}{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{\alpha}}-\frac{1}{M}{\boldsymbol{e}}_{\lambda}\cdot{\boldsymbol{q}}\big{)}\Lambda_{-}({\boldsymbol{q}})$ (257) The form factor (243) is then in the target rest frame, $\displaystyle G_{\lambda}^{\mu}(q)$ $\displaystyle=\frac{{N_{\lambda}}^{*}}{2M}\int d{\boldsymbol{x}}\,|F(r)|^{2}e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}/2}\mathrm{Tr}\,_{\lambda}^{\mu}$ $\displaystyle\mathrm{Tr}\,_{\lambda}^{\mu}$ $\displaystyle=\mathrm{Tr}\,\big{\\{}(1+\gamma^{0})\gamma_{5}(M+{\boldsymbol{\alpha}}\cdot{\boldsymbol{q}}-M\gamma^{0})\big{(}{\boldsymbol{e}}_{\lambda}^{*}\cdot{\boldsymbol{\alpha}}-\frac{1}{M}{\boldsymbol{e}}_{\lambda}^{*}\cdot{\boldsymbol{q}}\big{)}\gamma^{\mu}\gamma^{0}\big{\\}}=\mathrm{Tr}\,\big{\\{}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{q}}\,{\boldsymbol{e}}_{\lambda}^{*}\cdot{\boldsymbol{\alpha}}\,\gamma^{\mu}\gamma^{0}\big{\\}}$ $\displaystyle=-4i\,\delta^{\mu,i}\varepsilon_{ijk}q^{j}{e_{\lambda}^{k}}^{*}$ (258) This agrees with the kinematic factor in (255) for ${\boldsymbol{P}}=0$. The Orthopositronium is transversely polarized because ${\boldsymbol{e}}_{\lambda=0}\parallel{\boldsymbol{P}}={\boldsymbol{q}}$ gives $\mathrm{Tr}\,_{0}^{\mu}=0$. The normalization is $N_{\lambda=\pm 1}=1$ (186). The invariant form factor has the same integral as in (253), $\displaystyle F(q^{2})=\frac{2}{M^{2}}\int d{\boldsymbol{x}}\,|F(r)|^{2}e^{i{\boldsymbol{q}}\cdot{\boldsymbol{x}}/2}=\frac{2}{M}\,\frac{(\alpha M)^{4}}{(Q^{2}+\alpha^{2}M^{2})^{2}}$ (259) where $Q^{2}=-q^{2}$. I leave it as an exercise (without a worked-out solution) to show that the transition form factor agrees with (255) in a general frame. ### VI.6 * Deep inelastic scattering on Parapositronium in a general frame The target $A$ vertex of Deep Inelastic Scattering (DIS), $\gamma^{*}(q)+A(P_{A})\to X$, is as in a transition form factor (VI.5) for each final state $X$, with the Parapositronium state $A$ defined in (VI.5). Now the photon is taken to have an asymptotically large momentum $q$, and the squared vertex is summed over the final states $X$. In the absence of radiative effects we may describe $X$ in the basis of free $e^{-}e^{+}$ states, $\displaystyle\left|{X}\right\rangle=b^{\dagger}_{{\boldsymbol{k}}_{1},\lambda_{1}}\,d^{\dagger}_{{\boldsymbol{k}}_{2},\lambda_{2}}\left|{0}\right\rangle$ (260) constrained by momentum conservation, $q+P_{A}=k_{1}+k_{2}$. In the Bjorken limit $\displaystyle{x_{Bj}}=\frac{Q^{2}}{2P_{A}\cdot q}$ (261) is fixed as $q\to\infty$, with $Q^{2}=-q^{2}>0$. 
The frame is defined by keeping the target 4-momentum $P_{A}^{\mu}$ fixed in the Bj limit, with the three-momenta of $q^{\mu}=(q^{0},0,0,-|{\boldsymbol{q}}|)$ and $P_{A}^{\mu}=(E_{A},0,0,P_{A})$ along the $z$-axis. The target mass $M=2m+{\cal O}\left(\alpha^{2}\right)$. The amplitude for $\gamma^{*}A\to X$ corresponding to (239) is, suppressing the momentum conserving $\delta$-function as in (VI.5), $\displaystyle G^{\mu}_{AX}(q)=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\langle{0}|d_{{\boldsymbol{k}}_{2},\lambda_{2}}b_{{\boldsymbol{k}}_{1},\lambda_{1}}\,\bar{\psi}(0)\gamma^{\mu}\psi(0)\bar{\psi}({\boldsymbol{x}}_{1})\Psi_{A}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (262) I consider only scattering from the electron, hence the contractions $\displaystyle\left\\{{b_{{\boldsymbol{k}}_{1},\lambda_{1}}},{\bar{\psi}(0)}\right\\}$ $\displaystyle=\bar{u}({\boldsymbol{k}}_{1},\lambda_{1})$ $\displaystyle\left\\{{d_{{\boldsymbol{k}}_{2},\lambda_{2}}},{\psi({\boldsymbol{x}}_{2})}\right\\}$ $\displaystyle=e^{-i{\boldsymbol{k}}_{2}\cdot{\boldsymbol{x}}_{2}}v({\boldsymbol{k}}_{2},\lambda_{2})$ (263) resulting in $\displaystyle G^{\mu}_{AX}(q)$ $\displaystyle=\int d{\boldsymbol{x}}_{2}\,\bar{u}({\boldsymbol{k}}_{1},\lambda_{1})\gamma^{\mu}\gamma^{0}\Psi_{A}(0,{\boldsymbol{x}}_{2})v({\boldsymbol{k}}_{2},\lambda_{2})e^{-i{\boldsymbol{k}}_{2}\cdot{\boldsymbol{x}}_{2}}$ (264) According to (167) the $\Lambda_{-}$ projector in (VI.5) reduces to unity when acting on $e^{-i{\boldsymbol{k}}_{2}\cdot{\boldsymbol{x}}_{2}}v({\boldsymbol{k}}_{2},\lambda_{2})$. After a partial integration of the $\gamma^{0}\Lambda_{+}$ projector in the target state (VI.5) it acts on $\Psi_{A}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})$ and gives $(\not{P}_{A}+M)$ when the ${\cal O}\left(\alpha\right)$ contribution from differentiating the radial function is neglected. The radial function $F(r)$ is as in (246) evaluated with Lorentz contracted argument. Thus $\displaystyle\gamma^{0}\Psi_{A}(0,{\boldsymbol{x}}_{2})$ $\displaystyle=\frac{E_{A}}{2M^{2}}(\not{P}_{A}+M)e^{i{\boldsymbol{P}_{A}}\cdot{\boldsymbol{x}}_{2}/2}\gamma_{5}\,F(|{\boldsymbol{x}}_{2P}|)$ $\displaystyle{\boldsymbol{x}}_{2P}^{\perp}$ $\displaystyle={\boldsymbol{x}}_{2}^{\perp}\hskip 56.9055ptz_{2P}=z_{2}\cosh\xi=z_{2}\,E_{A}/M$ (265) The integral over ${\boldsymbol{x}}_{2}$ in $G^{\mu}_{AX}(q)$ can be done by a change of integration variable, ${\boldsymbol{x}}_{2}\to{\boldsymbol{x}}_{2P}$, giving $\displaystyle G^{\mu}_{AX}(q)$ $\displaystyle=\frac{\sqrt{\pi}\,\alpha^{5/2}M^{2}}{8\big{(}\alpha^{2}M^{2}/16+{\boldsymbol{P}}_{2}^{2}\big{)}^{2}}\,\bar{u}({\boldsymbol{k}}_{1},\lambda_{1})\gamma^{\mu}(\not{P}_{A}+M)\gamma_{5}\,v({\boldsymbol{k}}_{2},\lambda_{2})$ $\displaystyle{\boldsymbol{P}}_{2}$ $\displaystyle\equiv\big{(}-{\boldsymbol{k}}_{2}^{\perp},({\textstyle\frac{1}{2}}P_{A}^{z}-k_{2}^{z})/\cosh\xi\big{)}$ (266) The denominator of $G^{\mu}_{AX}(q)$ shows that ${\boldsymbol{P}}_{2}$, i.e., ${\boldsymbol{k}}_{2}^{\perp}$ and ${\textstyle\frac{1}{2}}P_{A}^{z}-k_{2}^{z}$ must be of ${\cal O}\left(\alpha\right)$ for a leading order contribution. 
Squaring the form factor and summing over the helicities $\lambda_{1},\lambda_{2}$, $\displaystyle\sum_{\lambda_{1},\lambda_{2}}G^{\mu}_{AX}(q){G^{\nu}_{AX}}^{\dagger}(q)$ $\displaystyle=\frac{\pi\alpha^{5}M^{4}}{64\big{(}\alpha^{2}M^{2}/16+{\boldsymbol{P}}_{2}^{2}\big{)}^{4}}\,\mathrm{Tr}\,^{\mu\nu}$ $\displaystyle\mathrm{Tr}\,^{\mu\nu}$ $\displaystyle=\mathrm{Tr}\,\big{\\{}\gamma^{\mu}(\not{P}_{A}+M)(\not{k}_{2}+m)(\not{P}_{A}+M)\gamma^{\nu}(\not{k}_{1}+m)\big{\\}}$ $\displaystyle=2M^{2}\mathrm{Tr}\,\big{\\{}\gamma^{\mu}(\not{P}_{A}+M)\gamma^{\nu}(\not{k}_{1}+m)\big{\\}}=8M^{2}\big{[}P_{A}^{\mu}k_{1}^{\nu}+P_{A}^{\nu}k_{1}^{\mu}-g^{\mu\nu}(P_{A}\cdot k_{1}-{\textstyle\frac{1}{2}}M^{2})\big{]}$ (267) where I used $k_{2}={\textstyle\frac{1}{2}}P_{A}+{\cal O}\left(\alpha\right)$. We may check gauge invariance at leading order, $\displaystyle q_{\mu}\mathrm{Tr}\,^{\mu\nu}$ $\displaystyle=2M^{2}\mathrm{Tr}\,\big{\\{}(\not{k}_{1}+\not{k}_{2}-\not{P}_{A})(\not{P}_{A}+M)\gamma^{\nu}(\not{k_{1}}+m)\big{\\}}=2M^{2}\mathrm{Tr}\,\big{\\{}(m-{\textstyle\frac{1}{2}}\not{P}_{A})(\not{P}_{A}+M)\gamma^{\nu}(\not{k_{1}}+m)\big{\\}}=0$ (268) In the notation of Peskin and Schroeder (1995) $\displaystyle\textrm{Im}\,W^{\mu\nu}(P_{A},q)$ $\displaystyle=\frac{1}{2}\int\frac{d{\boldsymbol{k}}_{1}d{\boldsymbol{k}}_{2}}{(2\pi)^{6}4E_{1}E_{2}}\,(2\pi)^{4}\delta^{4}(q+P_{A}-k_{1}-k_{2})G^{\mu}(q){G^{\nu}}^{\dagger}(q)$ $\displaystyle W^{\mu\nu}$ $\displaystyle=\Big{(}-g^{\mu\nu}+\frac{q^{\mu}q^{\nu}}{q^{2}}\Big{)}W_{1}+\Big{(}P_{A}^{\mu}-q^{\mu}\frac{P_{A}\cdot q}{q^{2}}\Big{)}\Big{(}P_{A}^{\nu}-q^{\nu}\frac{P_{A}\cdot q}{q^{2}}\Big{)}W_{2}$ (269) Identifying $W_{1}$ from the coefficient of $-g^{\mu\nu}$ in (VI.6) ($P_{A}\cdot k_{1}\gg{\textstyle\frac{1}{2}}M^{2}$) we get for the electron distribution, $\displaystyle f_{e/A}({x_{Bj}})$ $\displaystyle=\frac{1}{\pi}\textrm{Im}\,W_{1}=\int\frac{d{\boldsymbol{k}}_{2}}{(2\pi)^{2}4E_{1}E_{2}}\delta(q^{0}+E_{A}-E_{1}-E_{2})\frac{\alpha^{5}M^{6}\,P_{A}\cdot k_{1}}{16\big{(}\alpha^{2}M^{2}/16+{\boldsymbol{P}}_{2}^{2}\big{)}^{4}}$ (270) As $q=(q^{0},0,0,-|{\boldsymbol{q}}|)\to\infty$ at fixed ${x_{Bj}}$ and $P_{A}$, $\displaystyle E_{1}=\sqrt{({\boldsymbol{q}}+{\boldsymbol{P}}_{A}-{\boldsymbol{k}}_{2})^{2}+m^{2}}\simeq\sqrt{{\boldsymbol{q}}^{2}-2|{\boldsymbol{q}}|(P_{A}^{z}-k_{2}^{z})}\simeq|{\boldsymbol{q}}|-(P_{A}-k_{2})^{z}$ (271) Defining the light-front notation by $q^{\pm}\equiv q^{0}\pm q^{z}$ we have, $\displaystyle Q^{2}=-q^{+}q^{-}=2{x_{Bj}}P_{A}\cdot q\simeq{x_{Bj}}P_{A}^{+}q^{-}\ \ \ \ \Longrightarrow\ \ q^{+}=-{x_{Bj}}P_{A}^{+}$ (272) The energy constraint in (270) becomes $\displaystyle q^{0}+E_{A}-E_{1}-E_{2}$ $\displaystyle=q^{0}-|{\boldsymbol{q}}|+(P_{A}-k_{2})^{z}+E_{A}-E_{2}=q^{+}+P_{A}^{+}-k_{2}^{+}=P_{A}^{+}(1-{x_{Bj}})-k_{2}^{+}=0$ (273) Recalling that $k_{2}={\textstyle\frac{1}{2}}P_{A}$ at ${\cal O}\left(\alpha^{0}\right)$ I denote the ${\cal O}\left(\alpha\right)$ difference ${x_{Bj}}-{\textstyle\frac{1}{2}}$ as $\displaystyle{x_{Bj}}={\textstyle\frac{1}{2}}(1+\alpha\,\tilde{x}_{B})\hskip 56.9055ptk_{2}^{+}={\textstyle\frac{1}{2}}P_{A}^{+}(1-\alpha\tilde{x}_{B})=m\,e^{\xi}(1-\alpha\tilde{x}_{B})$ (274) Neglecting terms of ${\cal O}\left(\alpha^{2}\right)$ we have in ${\boldsymbol{P}}_{2}$ (VI.6), $\displaystyle 
k_{2}^{z}-{\textstyle\frac{1}{2}}P_{A}^{z}={\textstyle\frac{1}{2}}(k_{2}^{+}-k_{2}^{-})-{\textstyle\frac{1}{2}}P_{A}^{z}={\textstyle\frac{1}{2}}me^{\xi}(1-\alpha\tilde{x}_{B})-\frac{m^{2}e^{-\xi}}{2m(1-\alpha\tilde{x}_{B})}-m\sinh\xi=-m\alpha\tilde{x}_{B}\cosh\xi$ (275) With the change of variables $dk_{2}^{z}/E_{2}=dk_{2}^{+}/k_{2}^{+}$ the $\delta$-function in (270) may be integrated using (273). Substituting $P_{A}\cdot k_{1}/E_{1}k_{2}^{+}\simeq 2$ and the result (275) we have the frame independent result $\displaystyle f_{e/A}({x_{Bj}})$ $\displaystyle=\frac{\alpha^{5}M^{6}}{32}\int\frac{d{\boldsymbol{k}}_{2}^{\perp}}{(2\pi)^{2}}\frac{1}{\big{(}{{\boldsymbol{k}}_{2}^{\perp}}^{2}+\alpha^{2}M^{2}/16+\alpha^{2}M^{2}{\tilde{x}_{B}}^{2}/4\big{)}^{4}}=\frac{1}{6\pi\,\alpha}\,\frac{1}{({\tilde{x}_{B}}^{2}+{\textstyle\frac{1}{4}})^{3}}\hskip 56.9055pt\tilde{x}_{B}=\frac{2}{\alpha}({x_{Bj}}-{\textstyle\frac{1}{2}})$ (276) We may check that there is a single $e^{-}$ in the bound state, $\displaystyle\int_{0}^{1}d{x_{Bj}}\,f_{e/A}({x_{Bj}})={\textstyle\frac{1}{2}}\alpha\int_{-\infty}^{\infty}\frac{d\tilde{x}_{B}}{6\pi\alpha({\tilde{x}_{B}}^{2}+{\textstyle\frac{1}{4}})^{3}}=1$ (277) ## VII QED in $D=1+1$ dimensions In this section I apply the perturbative bound state method described in chapter V to QED in $D=1+1$ dimensions (QED2), also known as the “Massive Schwinger model” Schwinger (1962); Coleman _et al._ (1975); Coleman (1976). QED (and QCD) in two dimensions is often considered a model for confinement, since the Coulomb potential is linear. The coupling $e$ has dimension of mass, so the dimensionless parameter relevant for dynamics is its ratio $e/m$ to the electron mass. For $e/m\ll 1$ the fermions are weakly bound and their wave function satisfies the Schrödinger equation. For $e/m\gg 1$ the spectrum is that of weakly interacting bosons: The strong coupling locks the fermion degrees of freedom into compact neutral bound states. In the limit of the massless Schwinger model ($e/m\to\infty$) QED2 reduces to a free theory, with only a pointlike, non-interacting massive ($M=e/\sqrt{\pi}$) boson field. A perturbative approach requires the coupling to be small, $e/m<1$. Highly excited states are nevertheless strongly bound due to the linear potential. Several features are similar to those of the Dirac equation in a linear potential discussed in section IV.6. There are also important differences, first of all because translation invariance allows one to define the bound state momentum. The peculiar feature of a constant (local) norm for wave functions at large separations of the charges occurs here as well, but now it does not imply a continuous spectrum. Highly excited states have features of duality similar to those observed for hadrons. In section VI we saw that transverse photon exchange contributes to the binding of Positronium atoms even at leading order for non-vanishing atomic momentum $P$. In $D=1+1$ there are no transverse photons. Boost covariance is realized differently, and requires a linear potential. I shall verify that form factors and deep inelastic scattering are frame independent. ### VII.1 QED2 bound states in $A^{0}=0$ gauge #### VII.1.1 Temporal gauge in $D=1+1$ Quantization in temporal gauge proceeds as in section V.2.3, adapted to $D=1+1$. 
The QED2 action is $\displaystyle\mathcal{S}=\int d^{2}x\big{[}-{\textstyle\frac{1}{2}}F_{10}F^{10}+\bar{\psi}(\not{\partial}-m-e{\not{A}})\psi\big{]}$ (278) The electric field $E^{1}=F^{10}=-\partial_{0}A^{1}$ is conjugate to the photon field $A_{1}$, hence $\displaystyle\left[{E^{1}(t,x)},{A^{1}(t,y)}\right]=i\delta(x-y)$ (279) The Hamiltonian is $\displaystyle\mathcal{H}$ $\displaystyle=\int dx\big{[}E^{1}\partial_{0}A_{1}+i\psi^{\dagger}\partial_{0}\psi-\mathcal{L}\big{]}=\int dx\big{[}{\textstyle\frac{1}{2}}(E^{1})^{2}+\psi^{\dagger}(-i\alpha^{1}\partial_{1}+m\gamma^{0}-e\alpha^{1}A^{1})\psi\big{]}\equiv\mathcal{H}_{V}+\mathcal{H}_{0}+\mathcal{H}_{int}$ (280) $\displaystyle\mathcal{H}_{V}(t)$ $\displaystyle=\int dx\,{\textstyle\frac{1}{2}}\big{[}E^{1}(t,x)\big{]}^{2}\hskip 28.45274pt\mathcal{H}_{0}(t)=\int dx\,\big{[}\psi^{\dagger}(-i\alpha^{1}\partial_{1}+m\gamma^{0})\psi\big{]}\hskip 28.45274pt\mathcal{H}_{int}(t)=-e\int dx\big{[}\psi^{\dagger}\,\alpha^{1}A^{1}(t,x)\psi\big{]}$ Gauss’ operator is $\displaystyle G(t,x)\equiv\frac{\delta\mathcal{S}}{\delta{A^{0}(t,x)}}=\partial_{1}E^{1}(t,x)-e\psi^{\dagger}\psi(t,x)$ (281) $G(t,x)=0$ (Gauss’ law) is imposed as a constraint on physical states, fixing the remaining gauge degrees of freedom and defining the value of $E^{1}$ for those states, $\displaystyle G(t,x)\left|{phys}\right\rangle$ $\displaystyle=\big{[}\partial_{1}E^{1}(x)-e\psi^{\dagger}\psi(x)\big{]}\left|{phys}\right\rangle=0$ (282) Solving for $E^{1}$, using $\partial_{x}^{2}|x-y|=2\delta(x-y)$, $\displaystyle E^{1}(t,x)\left|{phys}\right\rangle$ $\displaystyle=\partial_{x}\int dy\,{\textstyle\frac{1}{2}}e|x-y|\psi^{\dagger}\psi(t,y)\left|{phys}\right\rangle$ (283) The vacuum $\left|{0}\right\rangle$ is a physical state with locally vanishing charge distribution, $\displaystyle E^{1}(t,x)\left|{0}\right\rangle=0$ (284) The $\mathcal{H}_{V}$ part of the Hamiltonian (280) generates an instantaneous linear potential, $\displaystyle\mathcal{H}_{V}\left|{phys}\right\rangle$ $\displaystyle=\frac{1}{2}\int dx\,\big{[}E^{1}(x)\big{]}^{2}\left|{phys}\right\rangle=\frac{e^{2}}{8}\int dxdydz\big{[}\partial_{x}|x-y|\psi^{\dagger}\psi(y)\big{]}\big{[}\partial_{x}|x-z|\psi^{\dagger}\psi(z)\big{]}\left|{phys}\right\rangle$ $\displaystyle=-\frac{e^{2}}{4}\int dxdy\,\psi^{\dagger}\psi(x)|x-y|\psi^{\dagger}\psi(y)\left|{phys}\right\rangle$ (285) #### VII.1.2 States and wave functions of QED2 An $e^{-}e^{+}$ valence Fock state with CM momentum $P$ is defined analogously to Positronium (165), $\displaystyle\left|{M,P}\right\rangle=\int dx_{1}dx_{2}\,\bar{\psi}(x_{1})e^{iP(x_{1}+x_{2})/2}\Phi^{(P)}(x_{1}-x_{2})\psi(x_{2})\left|{0}\right\rangle$ (286) When bound by $\mathcal{H}_{V}$ (VII.1.1) this is taken as the lowest order contribution of a bound state expansion, where higher orders are perturbatively generated by $\mathcal{H}_{int}$. Hence the lowest order wave functions $\Phi^{(P)}(x_{1}-x_{2})$ and energy eigenvalues $E(P)$ are determined by the eigenstate condition $\displaystyle(\mathcal{H}_{0}+\mathcal{H}_{V})\left|{M,P}\right\rangle=E(P)\left|{M,P}\right\rangle$ (287) Each order of the perturbative expansion should be Poincaré covariant, which implies $E(P)=\sqrt{P^{2}+M^{2}}$. This will be seen to be satisfied, and the $P$-dependence of the wave function $\Phi^{(P)}(x)$ determined. I do not here consider the higher order corrections defined by $\mathcal{H}_{int}$ (280). 
At large values of the linear potential generated by $\mathcal{H}_{V}$ the state $\left|{M,P}\right\rangle$ has contributions from virtual $e^{\pm}$ pairs, as in (IV.3) and (65) for the Dirac case. In terms of Feynman diagrams these effects are due to $Z$-diagrams (Fig. 13(b)), and they give rise to negative energy components of the wave function. The virtual pairs are implicitly included by omitting from (286) the energy projectors $\Lambda_{\pm}$ (VI.1.1) used for Positronium in (165). Applying the free Hamiltonian to the state (286), $\displaystyle\mathcal{H}_{0}\left|{M,P}\right\rangle=\int dx_{1}dx_{2}\big{[}$ $\displaystyle\bar{\psi}(x_{1})(-i\alpha^{1}{\buildrel\leftarrow\over{\partial}}_{1}+m\gamma^{0})e^{iP(x_{1}+x_{2})/2}\Phi^{(P)}(x_{1}-x_{2})\psi(x_{2})$ $\displaystyle-\bar{\psi}(x_{1})e^{iP(x_{1}+x_{2})/2}\Phi^{(P)}(x_{1}-x_{2})(-i\alpha^{1}{\buildrel\rightarrow\over{\partial}}_{2}+m\gamma^{0})\psi(x_{2})\big{]}\left|{0}\right\rangle$ (288) Partially integrating the derivatives, so that they act on the wave function instead of on the electron fields, $\displaystyle\mathcal{H}_{0}\left|{M,P}\right\rangle=\int dx_{1}dx_{2}\big{[}$ $\displaystyle\bar{\psi}(x_{1})e^{iP(x_{1}+x_{2})/2}(i\alpha^{1}{\buildrel\rightarrow\over{\partial}}_{1}-{\textstyle\frac{1}{2}}\alpha^{1}P+m\gamma^{0})\Phi^{(P)}(x_{1}-x_{2})\psi(x_{2})$ $\displaystyle+\bar{\psi}(x_{1})\Phi^{(P)}(x_{1}-x_{2})(-i\alpha^{1}{\buildrel\leftarrow\over{\partial}}_{2}+{\textstyle\frac{1}{2}}\alpha^{1}P-m\gamma^{0})e^{iP(x_{1}+x_{2})/2}\psi(x_{2})\big{]}\left|{0}\right\rangle$ (289) The instantaneous potential generated by $\mathcal{H}_{V}$ (VII.1.1) is seen from $\displaystyle\mathcal{H}_{V}\left|{M,P}\right\rangle=\int dx_{1}dx_{2}\,\bar{\psi}(x_{1})e^{iP(x_{1}+x_{2})/2}{\textstyle\frac{1}{2}}e^{2}|x_{1}-x_{2}|\Phi^{(P)}(x_{1}-x_{2})\psi(x_{2})\left|{0}\right\rangle$ (290) In $D=1+1$ the Dirac matrices can be represented by the $2\times 2$ Pauli matrices. I shall use $\displaystyle\gamma^{0}=\sigma_{3}\hskip 56.9055pt\gamma^{1}=i\sigma_{2}\hskip 56.9055pt\alpha^{1}=\alpha_{1}=\gamma^{0}\gamma^{1}=\sigma_{1}$ (291) With this notation the bound state condition (287) implies for the wave function $\displaystyle i\partial_{x}\big{\\{}{\sigma_{1}},{\Phi^{(P)}(x)}\big{\\}}-{\textstyle\frac{1}{2}}P\big{[}{\sigma_{1}},{\Phi^{(P)}(x)}\big{]}+m\big{[}{\sigma_{3}},{\Phi^{(P)}(x)}\big{]}$ $\displaystyle=\big{[}E-V(x)\big{]}\Phi^{(P)}(x)$ $\displaystyle V(x)={\textstyle\frac{1}{2}}e^{2}\,|x|\equiv V^{\prime}|x|=V^{\prime}x\ \ \ (x\geq 0)$ (292) In the following I assume that $x\geq 0$. 
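The representation (291) is verified in a few lines. A sketch (Python with numpy) checking the $D=2$ Clifford algebra $\left\{{\gamma^{\mu}},{\gamma^{\nu}}\right\}=2g^{\mu\nu}$ and $\alpha^{1}=\gamma^{0}\gamma^{1}=\sigma_{1}$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [s3, 1j * s2]          # gamma^0 = sigma_3, gamma^1 = i sigma_2, eq. (291)
g = np.diag([1, -1])           # metric in D = 1+1
for mu in range(2):
    for nu in range(2):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * g[mu, nu] * np.eye(2))
assert np.allclose(gamma[0] @ gamma[1], s1)   # alpha^1 = sigma_1
print("representation (291) satisfies the Clifford algebra")
```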
The wave function for $x<0$ is then determined by its parity $\eta_{P}$ as in (177), $\sigma_{3}\,\Phi^{(P)}(-x)\sigma_{3}=\eta_{P}\Phi^{(-P)}(x)\hskip 56.9055pt(\eta_{P}=\pm 1)$ (293) It can be shown (see Exercise A.19 for the derivation in $D=3+1$) that (VII.1.2) is equivalent to the two coupled equations $\displaystyle\Big{[}\frac{2}{E-V}\big{(}i\sigma_{1}{\buildrel\rightarrow\over{\partial}}_{x}+m\sigma_{3}-{\textstyle\frac{1}{2}}\sigma_{1}P\big{)}-1\Big{]}\Phi^{(P)}$ $\displaystyle=-\frac{2i}{(E-V)^{2}}P\partial_{x}\Phi^{(P)}+\frac{iV^{\prime}}{(E-V)^{2}}\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}$ $\displaystyle\Phi^{(P)}\Big{[}\big{(}i\sigma_{1}{\buildrel\leftarrow\over{\partial}}_{x}-m\sigma_{3}+{\textstyle\frac{1}{2}}\sigma_{1}P\big{)}\frac{2}{E-V}-1\Big{]}$ $\displaystyle=\frac{2i}{(E-V)^{2}}P\partial_{x}\Phi^{(P)}-\frac{iV^{\prime}}{(E-V)^{2}}\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}$ (294) The $2\times 2$ wave function may be expanded in Pauli matrices, $\displaystyle\Phi^{(P)}(x)=\phi^{\scriptscriptstyle{(P)}}_{0}(x)\,I+\phi^{\scriptscriptstyle{(P)}}_{1}(x)\,\sigma_{1}+\phi^{\scriptscriptstyle{(P)}}_{2}(x)\,i\sigma_{2}+\phi^{\scriptscriptstyle{(P)}}_{3}(x)\,\sigma_{3}$ (295) where $I$ stands for the unit $2\times 2$ matrix. The coefficients of the Pauli matrices in the bound state equation (VII.1.2) give four conditions, $\displaystyle I$ $\displaystyle:\hskip 28.45274pt2i\partial_{x}\phi^{\scriptscriptstyle{(P)}}_{1}(x)=(E-V)\phi^{\scriptscriptstyle{(P)}}_{0}(x)$ $\displaystyle\sigma_{1}$ $\displaystyle:\hskip 28.45274pt2i\partial_{x}\phi^{\scriptscriptstyle{(P)}}_{0}(x)+2m\phi^{\scriptscriptstyle{(P)}}_{2}(x)=(E-V)\phi^{\scriptscriptstyle{(P)}}_{1}(x)$ $\displaystyle i\sigma_{2}$ $\displaystyle:\hskip 28.45274ptP\phi^{\scriptscriptstyle{(P)}}_{3}(x)+2m\phi^{\scriptscriptstyle{(P)}}_{1}(x)=(E-V)\phi^{\scriptscriptstyle{(P)}}_{2}(x)$ $\displaystyle\sigma_{3}$ $\displaystyle:\hskip 28.45274ptP\phi^{\scriptscriptstyle{(P)}}_{2}(x)=(E-V)\phi^{\scriptscriptstyle{(P)}}_{3}(x)$ (296) #### VII.1.3 Rest frame and non-relativistic limit Consider first the rest frame, $P=0,\ E=M$. The conditions (VII.1.2) give $\displaystyle\phi^{\scriptscriptstyle(0)}_{0}(x)=\frac{2i}{M-V}\,\partial_{x}\phi^{\scriptscriptstyle(0)}_{1}(x)$ $\displaystyle\hskip 56.9055pt\phi^{\scriptscriptstyle(0)}_{2}(x)=\frac{2m}{M-V}\,\phi^{\scriptscriptstyle(0)}_{1}(x)\hskip 56.9055pt\phi^{\scriptscriptstyle(0)}_{3}(x)=0$ $\displaystyle\partial_{x}^{2}\phi^{\scriptscriptstyle(0)}_{1}(x)$ $\displaystyle+\frac{V^{\prime}}{M-V}\,\partial_{x}\phi^{\scriptscriptstyle(0)}_{1}(x)+\big{[}{\textstyle\frac{1}{4}}(M-V)^{2}-m^{2}\big{]}\phi^{\scriptscriptstyle(0)}_{1}(x)=0$ (297) In the NR limit, with $e\ll m$ and $V(x)\ll m$, the equation for $\phi^{\scriptscriptstyle(0)}_{1}(x)$ reduces to the Schrödinger equation with binding energy $E_{b}=M-2m$, $\displaystyle\Big{[}-\frac{1}{m}\partial_{x}^{2}+V(x)\Big{]}\phi^{\scriptscriptstyle(0)}_{1}(x)=E_{b}\phi^{\scriptscriptstyle(0)}_{1}(x)$ (298) The normalizable solution is given by the Airy function, $\displaystyle\phi^{\scriptscriptstyle(0)}_{1}(x)=N\mbox{Ai}\big{[}m(V-E_{b})/(mV^{\prime})^{2/3}\big{]}\hskip 56.9055pt(x\geq 0)$ (299) The coefficient $N$ may be chosen to be real, with size fixed by the normalization (168) of the state. The energy eigenvalues are determined by continuity at $x=0$. 
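The NR eigenvalues can be made explicit: continuity at $x=0$ (via the parity conditions quoted next) forces Ai or its derivative to vanish at $-mE_{b}/(mV^{\prime})^{2/3}$, so $E_{b,n}=-a_{n}\,(V^{\prime 2}/m)^{1/3}$ with $a_{n}$ a (negative) zero of Ai or Ai$^{\prime}$. A sketch (Python with scipy; the parameter values are arbitrary weak-coupling choices):

```python
import numpy as np
from scipy.special import ai_zeros

# phi_1(x) = N Ai[m(V'x - E_b)/(m V')^(2/3)] for x >= 0, eq. (299).
m, Vp = 10.0, 0.5                  # V' = e^2/2, so e/m = 0.1 (weak coupling)
a, ap, _, _ = ai_zeros(4)          # first zeros of Ai and of Ai'
scale = (Vp**2 / m)**(1 / 3)
print("E_b (eta_P = -1):", -ap * scale)   # Ai'(.) = 0: ground state tower
print("E_b (eta_P = +1):", -a * scale)    # Ai(.)  = 0
```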
From (293) and (295) it follows that $\phi^{\scriptscriptstyle(0)}_{1}(-x)=-\eta_{P}\phi^{\scriptscriptstyle(0)}_{1}(x)$, so that $\displaystyle\phi^{\scriptscriptstyle(0)}_{1}(x=0)=0\ \ \ (\eta_{P}=+1)\hskip 56.9055pt\partial_{x}\phi^{\scriptscriptstyle(0)}_{1}(x=0)=0\ \ \ (\eta_{P}=-1)$ (300) The relations (VII.1.3) reduce in the NR limit to $\phi^{\scriptscriptstyle(0)}_{2}(x)=\phi^{\scriptscriptstyle(0)}_{1}(x)$, $\phi^{\scriptscriptstyle(0)}_{0}(x)=\phi^{\scriptscriptstyle(0)}_{3}(x)=0$. Hence the $2\times 2$ wave function has the structure $\displaystyle\Phi^{(0)}_{NR}(x)=(\sigma_{1}+i\sigma_{2})\phi^{\scriptscriptstyle(0)}_{1}(x)$ (301) The projectors (VI.1.1) in the NR limit are $\Lambda_{\pm}={\textstyle\frac{1}{2}}(1\pm\sigma_{3})$. The wave function satisfies $\Lambda_{+}\Phi^{(0)}_{NR}(x)=\Phi^{(0)}_{NR}(x)\Lambda_{-}=\Phi^{(0)}_{NR}(x)$, showing that it has no negative energy components. Hence there are no virtual $e^{-}e^{+}$ pairs in the NR bound state (286). #### VII.1.4 Solution for any $M$ and $P$ Consider now the bound state conditions (VII.1.2) for arbitrary momenta $P$, without assuming $V\ll E$. The last two relations allow one to express $\phi^{\scriptscriptstyle{(P)}}_{2}(x)$ and $\phi^{\scriptscriptstyle{(P)}}_{3}(x)$ in terms of $\phi^{\scriptscriptstyle{(P)}}_{1}(x)$, $\displaystyle\phi^{\scriptscriptstyle{(P)}}_{2}(x)=\frac{E-V}{(E-V)^{2}-P^{2}}\,2m\phi^{\scriptscriptstyle{(P)}}_{1}(x)\hskip 56.9055pt\phi^{\scriptscriptstyle{(P)}}_{3}(x)=\frac{P}{(E-V)^{2}-P^{2}}\,2m\phi^{\scriptscriptstyle{(P)}}_{1}(x)$ (302) The denominators are the square of the kinetic 2-momentum $\Pi(x)\equiv(E-V,P)$. This motivates changing the variable $x$ into the “Lorentz invariant” $\tau(x)$, defined as $\displaystyle\tau(x)\equiv\big{[}(E-V)^{2}-P^{2}\big{]}/V^{\prime}\hskip 56.9055pt\partial_{x}=-2(E-V)\partial_{\tau}$ (303) The relation between $\partial_{x}$ and $\partial_{\tau}$ is crucial in the following, and is valid only for a linear potential, $V(x)=V^{\prime}x$ ($x\geq 0$). When the equations (VII.1.2) for $\phi^{\scriptscriptstyle{(P)}}_{0}(x)$ and $\phi^{\scriptscriptstyle{(P)}}_{1}(x)$ are expressed in terms of $\tau$ rather than $x$ they turn out to be frame independent, i.e., no factors of $E$ or $P$ appear Hoyer (1986). With the shorthand notation $\phi_{0,1}(\tau)\equiv\phi^{\scriptscriptstyle{(P)}}_{0,1}\big{[}x(\tau)\big{]}$, $\displaystyle\partial_{\tau}\phi_{1}(\tau)=\frac{i}{4}\,\phi_{0}(\tau)\hskip 56.9055pt\partial_{\tau}\phi_{0}(\tau)=\frac{i}{4}\Big{(}1-\frac{4m^{2}}{V^{\prime}\tau}\Big{)}\phi_{1}(\tau)$ (304) The superscript $(P)$ on $\phi_{0,1}(\tau)$ is omitted since as functions of $\tau$ they are the same in all frames. The $P$-dependence of $\phi^{\scriptscriptstyle{(P)}}_{0,1}\big{[}x(\tau)\big{]}$ as functions of $x$ arises only from the mapping $x(\tau)$ defined by (303). The equivalence of this with the $P$-dependence induced by actually boosting the state was verified in Dietrich _et al._ (2012) (in $A^{1}=0$ gauge). The parity constraint (293) on $\Phi^{(P)}(x)$ implies, in view of the expansion (295), the relations $\displaystyle\phi^{\scriptscriptstyle{(P)}}_{0,3}(x)=\eta_{P}\phi^{\scriptscriptstyle{(-P)}}_{0,3}(-x)\hskip 56.9055pt\phi^{\scriptscriptstyle{(P)}}_{1,2}(x)=-\eta_{P}\phi^{\scriptscriptstyle{(-P)}}_{1,2}(-x)$ (305) Consider first $\phi_{0}$ and $\phi_{1}$, which for $x\geq 0$ are functions only of $\tau$ in (304). Since $\tau(x)$ is invariant under $P\to-P$ we need not be concerned with the sign change of $P$ under parity. 
Continuity at $x=0$ requires for $\eta_{P}=+1$ that $\phi_{1}\big{[}\tau(x=0)\big{]}=0$ and for $\eta_{P}=-1$ that $\phi_{0}\big{[}\tau(x=0)\big{]}=0$. The relations (304) ensure that $\partial_{\tau}\phi_{1}(\tau)=0$ when $\phi_{0}(\tau)=0$ and vice versa, as required by the opposite parities of $\phi_{0}$ and $\phi_{1}$. For $P=0$ (303) gives $V^{\prime}\tau(x=0)=M^{2}$. Hence the condition $\phi_{1}(\tau=M^{2}/V^{\prime})=0$ determines the masses $M$ of the bound states with $\eta_{P}=+1$. Similarly, the zeros of $\phi_{0}(\tau)$ determine the $\eta_{P}=-1$ masses. When $P\neq 0$ we have $V^{\prime}\tau(x=0)=E^{2}-P^{2}$, whereas the zeros of the functions $\phi_{0,1}(\tau)$ are independent of $P$. Satisfying the parity constraint for all $P$ then requires the energies $E$ to satisfy $E^{2}-P^{2}=M^{2}$, as expected from Lorentz covariance. This allows one to express $\tau(x)$ in (303) as $\tau(x)=\big{[}M^{2}-2EV+V^{2}\big{]}/V^{\prime}$. The function $\phi^{\scriptscriptstyle{(P)}}_{2}(x)$ given by (302) has the same $x\to-x$ symmetry as $\phi^{\scriptscriptstyle{(P)}}_{1}(x)$ as required by (305). $\phi^{\scriptscriptstyle{(P)}}_{3}(x)$ is likewise related to $\phi^{\scriptscriptstyle{(P)}}_{1}(x)$ by a coefficient which is symmetric under $x\to-x$, but antisymmetric under $P\to-P$. Hence $\phi^{\scriptscriptstyle{(P)}}_{3}(x)$ has the opposite parity constraint compared to $\phi^{\scriptscriptstyle{(P)}}_{1}(x)$, which is again consistent with (305). Defining the $x$-dependent “boost parameter” $\zeta(x)$ by $\displaystyle\cosh\zeta=\frac{E-V}{\sqrt{V^{\prime}\tau}}\hskip 56.9055pt\sinh\zeta=\frac{P}{\sqrt{V^{\prime}\tau}}$ (306) the full wave function (295) may be expressed using (302) as $\displaystyle\Phi^{(P)}=\phi_{0}+\phi_{1}\Big{[}\sigma_{1}+\frac{2m(E-V)}{V^{\prime}\tau}\,i\sigma_{2}+\frac{2mP}{V^{\prime}\tau}\,\sigma_{3}\Big{]}=e^{-\sigma_{1}\zeta/2}\Big{(}\phi_{0}+\phi_{1}\sigma_{1}+\frac{2m}{\sqrt{V^{\prime}\tau}}\phi_{1}\,i\sigma_{2}\Big{)}e^{\sigma_{1}\zeta/2}$ (307) In the latter expression the term in ( ) depends on $\tau$ only, whereas $\zeta$ depends also explicitly on $P$. In the weak coupling limit $(V\ll m)$ $\zeta(x)$ reduces to the standard boost parameter $\xi$, $\displaystyle\cosh\xi=\frac{E}{M}\hskip 56.9055pt\sinh\xi=\frac{P}{M}$ (308) The expression (307) allows one to determine the frame dependence of $\Phi^{(P)}(x)$ at constant $x$, $\displaystyle\left.\frac{\partial\Phi^{(P)}(x)}{\partial\xi}\right|_{x}=\left.\frac{xP}{E-V}\partial_{x}\Phi^{(P)}\right|_{\xi}-\frac{E}{2(E-V)}\,\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}$ (309) where the $x$-derivative on the rhs. is taken at constant $P$. Exercise A.13: Derive the expression (309). Hint: You may start from $e^{\sigma_{1}\zeta/2}\Phi^{(P)}e^{-\sigma_{1}\zeta/2}$, which is a function only of $\tau$. Eliminating $\phi_{0}(\tau)$ in (304) gives $\displaystyle\partial_{\tau}^{2}\phi_{1}(\tau)+\frac{1}{16}\Big{(}1-\frac{4m^{2}}{V^{\prime}\tau}\Big{)}\phi_{1}(\tau)=0$ (310) The regular, arbitrarily normalized analytic solutions for $\phi_{0,1}(\tau)$ are given by Dietrich _et al._ (2013) as follows. (The factor $\sqrt{V^{\prime}}$ in the wave function gives it the correct dimension, corresponding to a relativistically normalized state in $D=1+1$. Analytic solutions are given in Dietrich _et al._ (2013) also for bound states of fermions with unequal masses.) 
$\displaystyle\phi_{1}(\tau)$ $\displaystyle=\sqrt{V^{\prime}}\,\tau\,\exp(-i\tau/4){\,{{}_{1}}F_{1}}(1-im^{2}/2V^{\prime},2,i\tau/2)=\phi_{1}^{*}(\tau)$ $\displaystyle\phi_{0}(\tau)$ $\displaystyle=-\phi_{1}(\tau)-4i\sqrt{V^{\prime}}\,\exp(-i\tau/4){\,{{}_{1}}F_{1}}(1-im^{2}/2V^{\prime},1,i\tau/2)=-\phi_{0}^{*}(\tau)$ (311) #### VII.1.5 Weak coupling limit The Positronium states of QED4 were in (165) defined with the $\Lambda_{\pm}$ projectors (VI.1.1), and the ${\boldsymbol{x}}$-dependence of the $\left|{e^{-}e^{+}}\right\rangle$ Fock state wave function $\Phi^{(P)}({\boldsymbol{x}})$ (183) was found to Lorentz contract (216). Here I verify that the QED2 wave functions have analogous properties in the weak coupling limit, although there is no transverse photon contribution and the states (286) are defined without projecting on the lowest Fock state. According to (VII.1.2) the Dirac structure (295) of the QED2 wave function reduces for $V\ll m$ and $M\simeq 2m$ to $\displaystyle\Phi^{(P)}_{NR}(x)=\Big{(}\sigma_{1}+\frac{E}{M}\,i\sigma_{2}+\frac{P}{M}\,\sigma_{3}\Big{)}\phi^{\scriptscriptstyle{(P)}}_{1,NR}(x)$ (312) and the variable $\tau$ of (303) simplifies to $\displaystyle V^{\prime}\tau_{NR}(x)=M^{2}-2EV(x)=M(M-2V^{\prime}x\cosh\xi)\hskip 56.9055pt\cosh\xi=\frac{E}{M}$ (313) The dependence on $x\cosh\xi$ means that $\phi_{1}\big{[}\tau_{NR}(x)\big{]}$ Lorentz contracts similarly to $F^{({\boldsymbol{P}})}({\boldsymbol{x}})$ in (216). The leading order expression (A.587) of the projectors $\Lambda_{\pm}$ is in $D=1+1$ $\displaystyle\Lambda_{\pm}(P)=\frac{1}{2E}(E\mp\sigma_{1}P\pm M\sigma_{3})$ (314) The Dirac structure of the QED2 wave function is $\sigma_{1}+i\sigma_{2}$ for $P=0$ (301). The analogy with QED4 (215) suggests that $\Phi^{(P)}_{NR}(x)$ should for general $P$ be a $2\times 2$ matrix proportional to $\displaystyle\Lambda_{+}(P)(\sigma_{1}+i\sigma_{2})\Lambda_{-}(P)=\frac{M(E+M)}{2E^{2}}\Big{(}\sigma_{1}+\frac{E}{M}\,i\sigma_{2}+\frac{P}{M}\,\sigma_{3}\Big{)}$ (315) which agrees with (312): The properties of the weakly bound states in QED2 and QED4 are analogous at all $P$. The non-relativistic limit of $\phi_{1}(\tau)$ may be determined from its analytic expression (VII.1.4). The scaling of the coordinate $x$ in the limit $m\to\infty$ at fixed $V^{\prime}={\textstyle\frac{1}{2}}e^{2}$ is given by the Schrödinger equation (298) for $P=0$: $\partial_{x}^{2}\propto mV^{\prime}x$, i.e., $x\propto(mV^{\prime})^{-1/3}$, and $E_{b}=M-2m\propto(mV^{\prime})^{2/3}/m$. In this limit (Dietrich _et al._ (2013); Hoyer (2014)) $\displaystyle\phi_{1,NR}^{\scriptscriptstyle{(0)}}(x)=\lim_{m\to\infty}\phi_{1}(\tau)=4\sqrt{V^{\prime}}\Big{(}\frac{V^{\prime}}{m^{2}}\Big{)}^{1/3}e^{\pi m^{2}/2V^{\prime}}\,\mbox{Ai}\big{[}m(V-E_{b})/(mV^{\prime})^{2/3}\big{]}\Big{[}1+{\cal{O}}\Big{(}(m^{2}/V^{\prime})^{-2/3}\Big{)}\Big{]}$ (316) which relates the normalization of the NR solution (299) to that of the general solution (VII.1.4). #### VII.1.6 Large separations between $e^{-}$ and $e^{+}$ The variable $\tau(x)$ (303) grows with the separation $x$ of the fermions. 
For $|\tau|\to\infty$ the wave function $\phi_{1}(\tau)$ (VII.1.4) oscillates with constant amplitude, up to corrections of ${\cal O}\left(1/|\tau|\right)$: $\displaystyle\phi_{1}(|\tau|\to\infty)=\frac{4V^{\prime}}{\sqrt{\pi}\,m}\sqrt{\exp(\pi m^{2}/V^{\prime})-1}\;e^{-\theta(-\tau)\pi m^{2}/2V^{\prime}}\cos\Big{[}{\textstyle\frac{1}{4}}\tau-(m^{2}/2V^{\prime})\log({\textstyle\frac{1}{2}}|\tau|)+\arg\Gamma(1+im^{2}/2V^{\prime})-\pi/2\Big{]}$ (317) where $\theta(x)=1\ (0)$ for $x>0\ (x<0)$ is the step function. From (304) we have $\phi_{0}(\tau)=-4i\partial_{\tau}\phi_{1}(\tau)$, so that $\displaystyle\lim_{|\tau|\to\infty}\big{[}\phi_{1}(\tau)+\phi_{0}(\tau)\big{]}=\lim_{|\tau|\to\infty}\big{[}\phi_{1}(\tau)-\phi_{0}(\tau)\big{]}^{*}$ $\displaystyle=N\exp\big{[}i\tau/4-i(m^{2}/2V^{\prime})\log(|\tau|/2)+i\arg\Gamma(1+im^{2}/2V^{\prime})-i\pi/2\big{]}$ $\displaystyle N$ $\displaystyle=\frac{4V^{\prime}}{\sqrt{\pi}\,m}\sqrt{\exp(\pi m^{2}/V^{\prime})-1}\;e^{-\theta(-\tau)\pi m^{2}/2V^{\prime}}\hskip 56.9055pt$ (318) has an $x$-independent local norm $N^{2}$. This $x$-dependence of the wave function is made possible by modes with large negative kinetic energy, which balance the linear potential to give a fixed energy eigenvalue. The asymptotic wave function thus describes virtual $e^{-}e^{+}$ pairs, illustrated by time-ordered $Z$-diagrams such as in Fig. 13 b. The negative energy components created by the $b\,d$ operators in the state (286) dominate for $x\to\infty$ Dietrich _et al._ (2013); Hoyer (2014). The pairs give rise to a sea distribution for ${x_{Bj}}\to 0$ in deep inelastic scattering (see section VII.3 and Fig. 17). The Dirac radial functions (126) similarly have a constant local norm at large values of $r$. #### VII.1.7 Bound state masses and duality The wave function $\phi_{1}(\tau)$ in (VII.1.4) was chosen to satisfy $\phi_{1}(\tau=0)=0$, ensuring that $\phi_{2}(0)$ and $\phi_{3}(0)$ (302) are finite. The general solution of the differential equation (310) has $\phi_{1}(\tau=0)\neq 0$, giving singular $\phi_{2}$ and $\phi_{3}$. The requirement that the wave function is regular at $\tau=0$ implies a discrete QED2 spectrum. This criterion of local normalizability is the relativistic generalization of the requirement of a finite global norm for Schrödinger wave functions. In fact, the non-relativistic limit of the general solution for $\phi_{1}(\tau)$ (with singular $\phi_{2,3}(\tau=0)$) adds an Airy Bi function to (316), which increases exponentially at large $x$ Dietrich _et al._ (2013). The bound state masses $M$ of locally normalizable solutions are determined by the parity constraint (305) at $x=0$, which requires $\phi_{1}(\tau=M^{2}/V^{\prime})=0$ for $\eta_{P}=+1$, and $\partial_{\tau}\phi_{1}(\tau=M^{2}/V^{\prime})=0$ for $\eta_{P}=-1$. At high masses $M$ we may use the asymptotic expression (317) for $\phi_{1}(\tau\to\infty)$. The squared eigenvalues $M_{n}^{2}$ are then given by integers $n$ and lie on asymptotically linear “Regge trajectories”, $\displaystyle M_{n}^{2}=n\,2\pi V^{\prime}+{\cal O}\left(m^{2}\log n\right)\hskip 56.9055pt\eta_{P}=(-1)^{n}$ (319) For small electron masses $m$ the trajectory is linear down to low excitations Dietrich _et al._ (2013). In the range of $x$ where $V(x)\ll M$ the state (286) is dominated by positive energy $b^{\dagger}\,d^{\dagger}$ contributions, as would be expected for parton-hadron duality Dietrich _et al._ (2013); Hoyer (2014). 
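The trajectory (319) can be illustrated directly from the analytic solution (VII.1.4): the $\eta_{P}=+1$ masses follow from zeros of $\phi_{1}(\tau)$ and the $\eta_{P}=-1$ masses from zeros of $\partial_{\tau}\phi_{1}\propto\phi_{0}$, evaluated at $\tau=M^{2}/V^{\prime}$. A numerical sketch (Python with mpmath; the units $V^{\prime}=1$ and the small electron mass are arbitrary choices):

```python
import mpmath as mp

Vp, m = 1.0, 0.3

def phi1(tau):
    # Analytic solution (311); real for real tau.
    z = mp.mpf(tau)
    return mp.re(mp.sqrt(Vp) * z * mp.exp(-0.25j * z)
                 * mp.hyp1f1(1 - 0.5j * m**2 / Vp, 2, 0.5j * z))

dphi1 = lambda tau: mp.diff(phi1, tau)   # proportional to phi_0 by (304)

# For m -> 0, phi1 ~ sin(tau/4): its zeros lie near 4 pi k (eta_P = +1) and the
# zeros of its derivative near 2 pi (2k+1) (eta_P = -1); use these as guesses.
for f, guesses, eta in [(phi1, (12.6, 25.1, 37.7), +1),
                        (dphi1, (6.3, 18.8, 31.4), -1)]:
    for t0 in guesses:
        tau_n = mp.findroot(f, t0)
        print(eta, float(Vp * tau_n / (2 * mp.pi)))   # ~ integer n of (319)
```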
The states have an overlap with multiple bound states generated via string breaking as shown in Fig. 19 a. I return to these issues in the context of QCD in $D=3+1$ dimensions (section VIII.4). Consider next a ground state of mass $M$. With increasing $x$ the first virtual $e^{-}e^{+}$ pair (string breaking) in the $P=0$ wave function is expected when the potential has reached twice the mass of the bound state, i.e., at $V(x)=2M$. This energetically allows a (virtual) bound state pair to appear (as depicted in Fig. 19 b below). I illustrate this with a numerical example in Fig. 15(a), where $e/m=0.71$ and $M=4.86\sqrt{V^{\prime}}$. The dynamics is nearly non-relativistic at low $x$, as evidenced by the agreement of the blue (exact, (VII.1.4)) and red dashed (Schrödinger, (316)) wave functions. Both are exponentially suppressed with increasing $x$. In the relativistic range, $V(x)\gtrsim M$, the exact wave function (blue line) begins to increase, reaching a maximum at $V(x)=2M$. The wave function is symmetric around $V^{\prime}x=M$ because $\tau(x)$ in (303) satisfies $\tau(x)=\tau(2M/V^{\prime}-x)$. (There is no symmetry for $V^{\prime}x>2M$ since the wave function is defined by parity for $x<0$.) For $P=M\sinh\xi>0$ the virtual pair is expected at $V(x)=2E$, since each bound state has increased energy due to the boost: $M\cosh\xi=E$. This is verified in Fig. 15(b), and is again due to the symmetry of $\tau(x)$. Figure 15: The wave function $\phi_{1}\big{[}\tau(x)\big{]}$ of the $\eta_{P}=-1$ ground state for $m=2\sqrt{V^{\prime}}$, i.e., $e/m=1/\sqrt{2}$. Blue line: Relativistic expression (VII.1.4), implying a bound state mass $M=4.86\sqrt{V^{\prime}}$. Dashed red line: Non-relativistic Airy function (316), requiring $E_{b}=0.81\sqrt{V^{\prime}}$. Both functions are normalized to unity at $x=0$. (a) Rest frame, $P=0$. (b) $P=5\sqrt{V^{\prime}}$, with the non-relativistic (dashed red) line Lorentz contracted, $x\to xE/M$ as in (313). ### VII.2 * Bound state form factors in QED2 #### VII.2.1 Form factor definition and symmetry under parity The electromagnetic form factor is defined as for Positronium in $D=3+1$, see section VI.5. The form factor for $\gamma^{*}+A\to B$ is as in (VI.5) and (240), $\displaystyle F_{AB}^{\mu}(q)$ $\displaystyle=\int d^{2}z\,e^{-iq\cdot z}\langle{M_{B},P_{B}}|\bar{\psi}(z)\gamma^{\mu}\psi(z)\left|{M_{A},P_{A}}\right\rangle=(2\pi)^{2}\delta^{2}(P_{B}-P_{A}-q)G_{AB}^{\mu}(q)$ $\displaystyle G_{AB}^{\mu}(q)$ $\displaystyle=\int_{-\infty}^{\infty}dx\,e^{i(P_{B}-P_{A})x/2}\,\mathrm{Tr}\,\big{[}\Phi_{B}^{\dagger}(x)\gamma^{\mu}\gamma^{0}\Phi_{A}(x)-\Phi_{B}^{\dagger}(-x)\Phi_{A}(-x)\gamma^{0}\gamma^{\mu}\big{]}$ (320) where the two contributions to $G_{AB}^{\mu}(q)$ arise from scattering on $e^{-}$ and $e^{+}$, respectively. The wave function $\Phi_{A}(x)\equiv\Phi_{A}^{(P_{A})}(x)$ determines the state as in (286), and $\gamma^{0},\,\gamma^{1}$ are defined in (291). The $e^{-}$ and $e^{+}$ contributions can be related using an analogy to charge conjugation in $D=3+1$ (181). 
According to the relations (302) the parity relations (305) become, when the sign of $P$ is not reversed, $\displaystyle\phi^{\scriptscriptstyle{(P)}}_{0}(x)=\eta_{P}\phi^{\scriptscriptstyle{(P)}}_{0}(-x)\hskip 56.9055pt\phi^{\scriptscriptstyle{(P)}}_{1,2,3}(x)=-\eta_{P}\phi^{\scriptscriptstyle{(P)}}_{1,2,3}(-x)$ (321) This implies for the wave function expressed as in (295), in analogy to charge conjugation $\displaystyle\sigma_{2}\big{[}\Phi^{(P)}(-x)\big{]}^{T}\sigma_{2}=\eta_{P}{\Phi^{(P)}}(x)$ (322) Bracketing the trace of the second term in (VII.2.1) with $\sigma_{2}$ and transposing it, $\displaystyle-\mathrm{Tr}\,\big{[}\sigma_{2}\Phi_{B}^{\dagger}(-x)\Phi_{A}(-x)\gamma^{0}\gamma^{\mu}\sigma_{2}\big{]}^{T}=-\eta_{P}^{A}\eta_{P}^{B}\mathrm{Tr}\,\big{[}\gamma^{\mu}\gamma^{0}\Phi_{A}(x)\Phi_{B}^{\dagger}(x)\big{]}$ (323) where $\sigma_{2}(\gamma^{0}\gamma^{\mu})^{T}\sigma_{2}=\gamma^{\mu}\gamma^{0}$ for $\mu=0,\,1$. Hence the $e^{-}$ and $e^{+}$ contributions are related as in (242) and $\displaystyle G_{AB}^{\mu}(q)$ $\displaystyle=(1-\eta_{P}^{A}\eta_{P}^{B})\int_{-\infty}^{\infty}dx\,e^{i(P_{B}-P_{A})x/2}\,\mathrm{Tr}\,\big{[}\Phi_{B}^{\dagger}(x)\gamma^{\mu}\gamma^{0}\Phi_{A}(x)\big{]}$ (324) vanishes unless $\eta_{P}^{A}=-\eta_{P}^{B}$. #### VII.2.2 Gauge invariance I follow the derivation in Dietrich _et al._ (2013) and consider only scattering from $e^{-}$, i.e., leave out the factor $1-\eta_{P}^{A}\eta_{P}^{B}$ of (324). Denoting $E_{A,B}=P_{A,B}^{\,0}$ and $P_{A,B}=P_{A,B}^{1}$, gauge invariance requires that $\displaystyle q_{\mu}G^{\mu}_{AB}=(P_{B}-P_{A})_{\mu}G^{\mu}_{AB}=\int_{-\infty}^{\infty}dx\,e^{i(P_{B}-P_{A})x/2}\,\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}\big{[}(E_{B}-E_{A})+(P_{B}-P_{A})\sigma_{1}\big{]}\Phi_{A}\big{\\}}=0$ (325) The bound state equations (VII.1.2) for $\Phi_{A}$ and $\Phi_{B}^{\dagger}$ are, with $V=V^{\prime}x$ and $x\geq 0$, $\displaystyle(E_{A}-V)\Phi_{A}$ $\displaystyle=i\partial_{x}\left\\{{\sigma_{1}},{\Phi_{A}}\right\\}-{\textstyle\frac{1}{2}}P_{A}\left[{\sigma_{1}},{\Phi_{A}}\right]+m\left[{\sigma_{3}},{\Phi_{A}}\right]$ $\displaystyle\Big{|}\ -\Phi_{B}^{\dagger}\,\times$ $\displaystyle\Phi_{B}^{\dagger}(E_{B}-V)$ $\displaystyle=-i\partial_{x}\big{\\{}{\sigma_{1}},{\Phi_{B}^{\dagger}}\big{\\}}+{\textstyle\frac{1}{2}}P_{B}\big{[}{\sigma_{1}},{\Phi_{B}^{\dagger}}\big{]}-m\big{[}{\sigma_{3}},{\Phi_{B}^{\dagger}}\big{]}$ $\displaystyle\Big{|}\ \times\,\Phi_{A}\hskip 8.5359pt$ (326) Multiplying the equations as indicated in the margin, their sum becomes $\displaystyle\Phi_{B}^{\dagger}(E_{B}-E_{A})\Phi_{A}=$ $\displaystyle-i\Phi_{B}^{\dagger}\partial_{x}\left\\{{\sigma_{1}},{\Phi_{A}}\right\\}-i\partial_{x}\big{\\{}{\sigma_{1}},{\Phi_{B}^{\dagger}}\big{\\}}\Phi_{A}+{\textstyle\frac{1}{2}}P_{B}\big{[}{\sigma_{1}},{\Phi_{B}^{\dagger}}\big{]}\Phi_{A}+{\textstyle\frac{1}{2}}P_{A}\Phi_{B}^{\dagger}\left[{\sigma_{1}},{\Phi_{A}}\right]$ $\displaystyle-m\left[{\sigma_{3}},{\Phi_{B}^{\dagger}}\right]\Phi_{A}-m\Phi_{B}^{\dagger}\left[{\sigma_{3}},{\Phi_{A}}\right]$ (327) When the trace is taken the terms with $\partial_{x}$ form a total derivative, $\displaystyle-i\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}(\sigma_{1}\partial_{x}\Phi_{A}+\partial_{x}\Phi_{A}\sigma_{1})+(\sigma_{1}\partial_{x}\Phi_{B}^{\dagger}+\partial_{x}\Phi_{B}^{\dagger}\sigma_{1})\Phi_{A}\big{\\}}=-i\mathrm{Tr}\,\big{\\{}\partial_{x}\big{(}\Phi_{B}^{\dagger}\sigma_{1}\Phi_{A}+\Phi_{B}^{\dagger}\Phi_{A}\sigma_{1}\big{)}\big{\\}}$ (328) Partially integrating this term in (325), it 
becomes $\displaystyle-{\textstyle\frac{1}{2}}(P_{B}-P_{A})\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}\sigma_{1}\Phi_{A}+\Phi_{B}^{\dagger}\Phi_{A}\sigma_{1}\big{\\}}$ (329) The ${\cal O}\left(P\right)$ terms in (VII.2.2) may in the trace be expressed as $\displaystyle{\textstyle\frac{1}{2}}P_{B}\mathrm{Tr}\,\big{\\{}(\sigma_{1}\Phi_{B}^{\dagger}-\Phi_{B}^{\dagger}\sigma_{1})\Phi_{A}\big{\\}}+{\textstyle\frac{1}{2}}P_{A}\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}(\sigma_{1}\Phi_{A}-\Phi_{A}\sigma_{1})\big{\\}}={\textstyle\frac{1}{2}}(P_{B}-P_{A})\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}\Phi_{A}\sigma_{1}-\Phi_{B}^{\dagger}\sigma_{1}\Phi_{A}\big{\\}}$ (330) The ${\cal O}\left(m\right)$ term vanishes when traced, $\displaystyle-m\mathrm{Tr}\,\big{\\{}(\sigma_{3}\Phi_{B}^{\dagger}-\Phi_{B}^{\dagger}\sigma_{3})\Phi_{A}+\Phi_{B}^{\dagger}(\sigma_{3}\Phi_{A}-\Phi_{A}\sigma_{3})\big{\\}}=0$ (331) The sum of (329) and (330) gives $\displaystyle-(P_{B}-P_{A})\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}\sigma_{1}\Phi_{A}\big{\\}}$ (332) This cancels the second term in (325), thus ensuring gauge invariance. #### VII.2.3 Lorentz covariance Lorentz covariance requires that the form factor can be written, given the gauge invariance (325), $\displaystyle G^{\mu}_{AB}(q)$ $\displaystyle=\epsilon^{\mu\nu}(P_{B}-P_{A})_{\nu}G_{AB}(q^{2})$ $\displaystyle\epsilon^{01}=-\epsilon^{10}=1$ (333) With $E_{A,B}=M_{A,B}\cosh\xi$ and $P_{A,B}=M_{A,B}\sinh\xi$ we have $\displaystyle\frac{\delta E_{A,B}}{\delta\xi}=P_{A,B}\hskip 85.35826pt\frac{\delta P_{A,B}}{\delta\xi}=E_{A,B}$ (334) Recalling that $P_{0}=P^{0}$ whereas $P_{1}=-P^{1}$ we should have $\displaystyle\frac{\delta G_{AB}^{0}(q)}{\delta\xi}=G_{AB}^{1}(q)\hskip 85.35826pt\frac{\delta G_{AB}^{1}(q)}{\delta\xi}=G_{AB}^{0}(q)$ (335) Subtracting the coupled bound state equations in (VII.1.2) gives an expression for $\displaystyle\frac{P}{E-V}\partial_{x}\Phi^{(P)}=-{\textstyle\frac{1}{2}}\partial_{x}\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}-{\textstyle\frac{1}{4}}iP\big{\\{}{\sigma_{1}},{\Phi^{(P)}}\big{\\}}+{\textstyle\frac{1}{2}}im\big{\\{}{\sigma_{3}},{\Phi^{(P)}}\big{\\}}+\frac{V^{\prime}}{2(E-V)}\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}$ (336) Using this in (309) gives $\displaystyle\frac{\partial\Phi^{(P)}}{\partial\xi}$ $\displaystyle=-{\textstyle\frac{1}{2}}\partial_{x}\Big{(}x\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}\Big{)}+{\textstyle\frac{1}{2}}ix\Big{(}-{\textstyle\frac{1}{2}}P\big{\\{}{\sigma_{1}},{\Phi^{(P)}}\big{\\}}+m\big{\\{}{\sigma_{3}},{\Phi^{(P)}}\big{\\}}\Big{)}$ $\displaystyle\frac{\partial\Phi^{(P){\dagger}}}{\partial\xi}$ $\displaystyle={\textstyle\frac{1}{2}}\partial_{x}\Big{(}x\big{[}{\sigma_{1}},{\Phi^{(P){\dagger}}}\big{]}\Big{)}+{\textstyle\frac{1}{2}}ix\Big{(}{\textstyle\frac{1}{2}}P\big{\\{}{\sigma_{1}},{\Phi^{(P){\dagger}}}\big{\\}}-m\big{\\{}{\sigma_{3}},{\Phi^{(P){\dagger}}}\big{\\}}\Big{)}$ (337) According to the expression (324) for $G_{AB}^{\mu}$ (without the factor $1-\eta_{P}^{A}\eta_{P}^{B}$), $\displaystyle\frac{\partial G_{AB}^{\mu}}{\partial\xi}=\int_{-\infty}^{\infty}dx\,e^{i(P_{B}-P_{A})x/2}\Big{(}{\textstyle\frac{1}{2}}ix(E_{B}-E_{A})\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}(x)\gamma^{\mu}\gamma^{0}\Phi_{A}(x)\big{\\}}+\frac{\partial}{\partial\xi}\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}(x)\gamma^{\mu}\gamma^{0}\Phi_{A}(x)\big{\\}}\Big{)}$ (338) The second term has the contributions $\displaystyle\mathrm{Tr}\,\Big{\\{}\frac{\partial\Phi_{B}^{\dagger}}{\partial\xi}\gamma^{\mu}\gamma^{0}\Phi_{A}\Big{\\}}$ 
$\displaystyle={\textstyle\frac{1}{2}}\mathrm{Tr}\,\Big{\\{}\big{[}{\sigma_{1}},{\partial_{x}(x\Phi_{B}^{\dagger})}\big{]}\gamma^{\mu}\gamma^{0}\Phi_{A}+ix\Big{(}{\textstyle\frac{1}{2}}P_{B}\big{\\{}{\sigma_{1}},{\Phi_{B}^{\dagger}}\big{\\}}-m\big{\\{}{\sigma_{3}},{\Phi_{B}^{\dagger}}\big{\\}}\Big{)}\gamma^{\mu}\gamma^{0}\Phi_{A}\Big{\\}}$ $\displaystyle\mathrm{Tr}\,\Big{\\{}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\frac{\partial\Phi_{A}}{\partial\xi}\Big{\\}}$ $\displaystyle={\textstyle\frac{1}{2}}\mathrm{Tr}\,\Big{\\{}-\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\left[{\sigma_{1}},{\partial_{x}(x\Phi_{A})}\right]+ix\Big{(}-{\textstyle\frac{1}{2}}P_{A}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\left\\{{\sigma_{1}},{\Phi_{A}}\right\\}+m\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\left\\{{\sigma_{3}},{\Phi_{A}}\right\\}\Big{)}\Big{\\}}$ (339) The ${\cal O}\left(P\right)$ terms may be expressed as $\displaystyle{\textstyle\frac{1}{2}}ix\mathrm{Tr}\,\big{\\{}$ $\displaystyle{\textstyle\frac{1}{2}}P_{B}(\sigma_{1}\Phi_{B}^{\dagger}+\Phi_{B}^{\dagger}\sigma_{1})\gamma^{\mu}\gamma^{0}\Phi_{A}-{\textstyle\frac{1}{2}}P_{A}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}(\sigma_{1}\Phi_{A}+\Phi_{A}\sigma_{1})\big{\\}}$ $\displaystyle={\textstyle\frac{1}{2}}ix\mathrm{Tr}\,\big{\\{}-{\textstyle\frac{1}{2}}P_{B}\big{[}{\sigma_{1}},{\Phi_{B}^{\dagger}}\big{]}\gamma^{\mu}\gamma^{0}\Phi_{A}-{\textstyle\frac{1}{2}}P_{A}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\left[{\sigma_{1}},{\Phi_{A}}\right]\big{\\}}+{\textstyle\frac{1}{2}}ix\mathrm{Tr}\,\big{\\{}(P_{B}-P_{A})\sigma_{1}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\Phi_{A}\big{\\}}$ (340) The ${\cal O}\left(m\right)$ terms are $\displaystyle{\textstyle\frac{1}{2}}ixm\mathrm{Tr}\,\big{\\{}$ $\displaystyle\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}(\sigma_{3}\Phi_{A}+\Phi_{A}\sigma_{3})-(\sigma_{3}\Phi_{B}^{\dagger}+\Phi_{B}^{\dagger}\sigma_{3})\gamma^{\mu}\gamma^{0}\Phi_{A}\big{\\}}={\textstyle\frac{1}{2}}ixm\mathrm{Tr}\,\big{\\{}\big{[}{\sigma_{3}},{\Phi_{B}^{\dagger}}\big{]}\gamma^{\mu}\gamma^{0}\Phi_{A}+\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\big{[}{\sigma_{3}},{\Phi_{A}}\big{]}\big{\\}}$ (341) In (338) write $E_{B}-E_{A}=(E_{B}-V)-(E_{A}-V)$ and add the ${\cal O}\left(P\right)$ (VII.2.3) and ${\cal O}\left(m\right)$ (341) contributions to the respective terms, so as to be able to make use of the bound state equations (VII.2.2) for $\Phi_{A}$ and $\Phi_{B}^{\dagger}$, $\displaystyle\frac{\partial G_{AB}^{\mu}}{\partial\xi}=$ $\displaystyle\int_{-\infty}^{\infty}dx\,e^{i(P_{B}-P_{A})x/2}\Big{(}{\textstyle\frac{1}{2}}\mathrm{Tr}\,\big{\\{}\big{[}{\sigma_{1}},{\partial_{x}(x\Phi_{B}^{\dagger})}\big{]}\gamma^{\mu}\gamma^{0}\Phi_{A}-\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\left[{\sigma_{1}},{\partial_{x}(x\Phi_{A})}\right]\big{\\}}$ $\displaystyle+{\textstyle\frac{1}{2}}ix\mathrm{Tr}\,\big{\\{}\big{(}\Phi_{B}^{\dagger}(E_{B}-V)-{\textstyle\frac{1}{2}}P_{B}\big{[}{\sigma_{1}},{\Phi_{B}^{\dagger}}\big{]}+m\big{[}{\sigma_{3}},{\Phi_{B}^{\dagger}}\big{]}\big{)}\gamma^{\mu}\gamma^{0}\Phi_{A}\big{\\}}$ $\displaystyle-{\textstyle\frac{1}{2}}ix\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\big{(}(E_{A}-V)\Phi_{A}+{\textstyle\frac{1}{2}}P_{A}\big{[}{\sigma_{1}},{\Phi_{A}}\big{]}-m\big{[}{\sigma_{3}},{\Phi_{A}}\big{]}\big{)}\big{\\}}$ $\displaystyle+{\textstyle\frac{1}{2}}ix\mathrm{Tr}\,\big{\\{}(P_{B}-P_{A})\sigma_{1}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\Phi_{A}\big{\\}}\Big{)}$ (342) According to the BSE (VII.2.2) the expression in parentheses on the
second line equals $-i\partial_{x}\big{\\{}{\sigma_{1}},{\Phi_{B}^{\dagger}}\big{\\}}$ and that on the third line equals $i\partial_{x}\left\\{{\sigma_{1}},{\Phi_{A}}\right\\}$. Combined with the expression on the first line, and noting also the $\partial_{x}x=1$ contribution, we have $\displaystyle\frac{\partial G_{AB}^{\mu}}{\partial\xi}$ $\displaystyle=\int_{-\infty}^{\infty}dx\,e^{i(P_{B}-P_{A})x/2}\Big{(}{\textstyle\frac{1}{2}}\mathrm{Tr}\,\big{\\{}\big{(}\sigma_{1}\Phi_{B}^{\dagger}-\Phi_{B}^{\dagger}\sigma_{1}\big{)}\gamma^{\mu}\gamma^{0}\Phi_{A}-\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\big{(}\sigma_{1}\Phi_{A}-\Phi_{A}\sigma_{1}\big{)}\big{\\}}$ $\displaystyle\hskip 85.35826pt+x\partial_{x}\mathrm{Tr}\,\big{\\{}\sigma_{1}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\Phi_{A}\big{\\}}+{\textstyle\frac{1}{2}}ix(P_{B}-P_{A})\mathrm{Tr}\,\big{\\{}\sigma_{1}\Phi_{B}^{\dagger}\gamma^{\mu}\gamma^{0}\Phi_{A}\big{\\}}\Big{)}$ $\displaystyle=-\int_{-\infty}^{\infty}dx\,e^{i(P_{B}-P_{A})x/2}\,\mathrm{Tr}\,\big{\\{}\Phi_{B}^{\dagger}\sigma_{1}\gamma^{\mu}\gamma^{0}\Phi_{A}\big{\\}}$ (343) I partially integrated the first term on the second line and noted that $\big{[}{\gamma^{\mu}\gamma^{0}},{\sigma_{1}}\big{]}=0$. Comparing with the expression (324) for $G_{AB}^{\mu}$ verifies the Lorentz covariance condition (335). ### VII.3 * Deep Inelastic Scattering in D = 1+1 #### VII.3.1 The Bj limit of the form factor I considered Deep Inelastic Scattering (DIS) $e^{-}+A\to e^{-}+X$ on Positronium atoms $A$ of QED4 in section VI.6, demonstrating its frame invariance. In that case the final state was taken to be a free $e^{-}e^{+}$ pair. In QED2 there are no free electrons, and bound states can have arbitrarily high mass. The target vertex is then described by the form factor $\gamma^{*}+A\to B$, where $B$ is a bound state whose mass is $\propto Q$ in the Bj limit (261). In $D=1+1$ the mass selects a unique $B$, which represents the inclusive system $X$. The present approach to DIS in QED2 was previously considered in the Breit frame in Dietrich _et al._ (2013), where $q^{0}=E_{B}-E_{A}=0$ while $q^{1}=-Q\to-\infty$ in the Bj limit. This is a standard frame for QCD4, which allows the target to be described in terms of a parton distribution. It is instructive to repeat the QED2 calculation in a frame where the target momentum is kept fixed in the Bj limit, and to verify that the parton distribution is indeed boost invariant. The parton distribution was defined in Eq. (A22) of Dietrich _et al._ (2013) in terms of the invariant form factor $G_{AB}(q^{2})$ (333) as $\displaystyle f({x_{Bj}})=\frac{1}{16\pi V^{\prime}m^{2}}\,\frac{1}{{x_{Bj}}}|Q^{2}G_{AB}(q^{2})|^{2}$ (344) I consider the Bj limit where the photon momentum $q^{1}\to-\infty$ at fixed ${x_{Bj}}$, in a frame where the target 2-momentum $P_{A}$ is fixed.
Since $q=P_{B}-P_{A}$, $\displaystyle{x_{Bj}}=\frac{Q^{2}}{2P_{A}\cdot q}=-\frac{(P_{B}-P_{A})^{2}}{2P_{A}\cdot(P_{B}-P_{A})}\simeq\frac{2P_{A}\cdot P_{B}-M_{B}^{2}}{2P_{A}\cdot P_{B}}=1-\frac{M_{B}^{2}}{2P_{A}\cdot P_{B}}\simeq 1-\frac{M_{B}^{2}}{2P_{A}^{+}E_{B}}$ (345) where I used $2P_{A}\cdot P_{B}\simeq 2(E_{A}+P_{A}^{1})E_{B}\equiv 2P_{A}^{+}E_{B}$, neglecting the finite difference between $P_{B}^{0}\equiv E_{B}$ and $-P_{B}^{1}$, $\displaystyle E_{B}=\sqrt{M_{B}^{2}+(P_{B}^{1})^{2}}\simeq|P_{B}^{1}|+\frac{M_{B}^{2}}{2|P_{B}^{1}|}=-P_{B}^{1}+P_{A}^{+}(1-{x_{Bj}})\hskip 14.22636pti.e.\hskip 14.22636ptP_{B}^{+}=P_{A}^{+}(1-{x_{Bj}})$ (346) Defining $\gamma^{\pm}=\gamma^{0}\pm\gamma^{1}=\sigma_{3}\pm i\sigma_{2}$ and with $V^{\prime}\tau=[(E-V)+P^{1}][(E-V)-P^{1}]$, the expression (307) for the bound state wave functions becomes $\displaystyle\Phi=\phi_{0}+\phi_{1}\Big{[}\sigma_{1}+\frac{2m(E-V)}{V^{\prime}\tau}\,i\sigma_{2}+\frac{2mP^{1}}{V^{\prime}\tau}\,\sigma_{3}\Big{]}=\phi_{0}+\phi_{1}\Big{[}\sigma_{1}+\frac{m\gamma^{+}}{E-V-P^{1}}-\frac{m\gamma^{-}}{E-V+P^{1}}\Big{]}$ (347) For $P_{B}^{1}\to-\infty$ this gives, using (346), $\displaystyle\Phi_{B}=\phi_{B0}+\phi_{B1}\Big{[}\sigma_{1}-\frac{m\gamma^{-}}{P_{A}^{+}(1-{x_{Bj}})-V}\Big{]}$ (348) The invariant form factor $G_{AB}(q^{2})$ (333) may be expressed in terms of $G_{AB}^{0}(q)$ (324). Using $\gamma^{+}\gamma^{+}=0$ and $\gamma^{+}\gamma^{-}=2(1-\sigma_{1})$ as well as (322), we get for bound states of opposite parities ($\eta_{A}\eta_{B}=-1$), $\displaystyle G_{AB}(q^{2})=$ $\displaystyle-\frac{1}{q^{1}}G_{AB}^{0}(q)\simeq\frac{2}{E_{B}}\int_{-\infty}^{\infty}dx\,e^{iq^{1}x/2}\mathrm{Tr}\,\big{[}\Phi_{B}^{\dagger}(x)\Phi_{A}(x)\big{]}=\frac{4i}{E_{B}}\int_{0}^{\infty}dx\,\sin\big{(}{\textstyle\frac{1}{2}}q^{1}x\big{)}\mathrm{Tr}\,\big{[}\Phi_{B}^{\dagger}(x)\Phi_{A}(x)\big{]}$ $\displaystyle{\textstyle\frac{1}{2}}\mathrm{Tr}\,\big{[}\Phi_{B}^{\dagger}(x)\Phi_{A}(x)\big{]}=\phi_{B0}^{*}(\tau_{B})\phi_{A0}(\tau_{A})+\phi_{B1}^{*}(\tau_{B})\phi_{A1}(\tau_{A})\Big{[}1+\frac{2m^{2}}{\big{[}P_{A}^{+}(1-{x_{Bj}})-V\big{]}(P_{A}^{+}-V)}\Big{]}$ (349) The arguments of the wave functions are, with $V=V^{\prime}x$: $\displaystyle V^{\prime}\tau_{A}$ $\displaystyle=M_{A}^{2}-2E_{A}V+V^{2}$ $\displaystyle V^{\prime}\tau_{B}$ $\displaystyle=2E_{B}\Big{(}\frac{M_{B}^{2}}{2E_{B}}-V\Big{)}+V^{2}=2E_{B}\big{[}P_{A}^{+}(1-{x_{Bj}})-V\big{]}+V^{2}$ (350) At fixed $x$, $\tau_{B}\to\pm\infty$ for $P_{A}^{+}(1-{x_{Bj}})\gtrless V$. We may thus use the asymptotic expressions (317) and (VII.1.6) for the $\phi_{B}$ wave functions, see also (A.14). For $\eta_{B}=+1$ and with $n$ an integer, $\displaystyle G_{AB}(q^{2})=$ $\displaystyle(-1)^{n}\frac{16iV^{\prime}}{\sqrt{\pi}mE_{B}}\,\sqrt{e^{\pi m^{2}/V^{\prime}}-1}\int_{0}^{\infty}dx\,\exp\big{[}-\theta(-\tau_{B})\pi m^{2}/2V^{\prime}\big{]}$ $\displaystyle\times\Big{\\{}i\sin\varphi_{B}(x)\phi_{A0}(\tau_{A})+\cos\varphi_{B}(x)\phi_{A1}(\tau_{A})\Big{[}1+\frac{2m^{2}}{\big{[}P_{A}^{+}(1-{x_{Bj}})-V\big{]}(P_{A}^{+}-V)}\Big{]}\Big{\\}}$ $\displaystyle\varphi_{B}(x)$ $\displaystyle\equiv{\textstyle\frac{1}{2}}\big{[}P_{A}^{+}(1-{x_{Bj}})-P_{A}^{1}\big{]}x+\frac{m^{2}}{2V^{\prime}}\log\Big{|}1-\frac{V^{\prime}x}{P_{A}^{+}(1-{x_{Bj}})}\Big{|}-{\textstyle\frac{1}{4}}V^{\prime}x^{2}$ (351) Exercise A.14: Derive the expression (VII.3.1). So far I arbitrarily normalized the wave functions by adopting the solutions (VII.1.4).
Since $M_{B}\propto Q$ we need to know the relative normalization of the wave functions $\Phi_{B}(\tau_{B})$ in the Bj limit. This may be determined using duality, as shown in Dietrich _et al._ (2013). At large $M_{B}$ and for $V(x)\ll M_{B}$ the bound state wave functions have the form of free $e^{-}e^{+}$ states. The normalization of $\Phi_{B}(x=0)$ can thus be chosen to agree with that of free $e^{-}e^{+}$ (“partonic”) states created by a pointlike current. The result is that the normalization factor is independent of $M_{B}$ and a function of the electron mass $m$ only, see Eq. (4.16) of Dietrich _et al._ (2013). Hence the Bj limit and the ${x_{Bj}}$-dependence of $f({x_{Bj}})$ at fixed $m$ are not affected by this normalization of $\Phi_{B}$. I do not here consider the normalization of $\Phi_{A}$, which affects the magnitude of $f({x_{Bj}})$. Using $Q^{2}=2{x_{Bj}}P_{A}\cdot P_{B}$ from (345), the electron distribution (344) becomes $\displaystyle f({x_{Bj}})=\frac{(P_{A}\cdot P_{B})^{2}{x_{Bj}}}{4\pi V^{\prime}m^{2}}|G_{AB}(q^{2})|^{2}$ (352) From the boost invariance of $G_{AB}(q^{2})$ shown in section VII.2.3 it follows that $f({x_{Bj}})$ is also invariant. The expression (352) is finite in the Bj limit, given that $2P_{A}\cdot P_{B}\simeq P_{A}^{+}P_{B}^{-}\simeq 2P_{A}^{+}E_{B}$ and that $E_{B}\,G_{AB}(q^{2})$ (VII.3.1) is finite. I next discuss the numerical evaluation of the $x$-integral in (VII.3.1). #### VII.3.2 Numerical evaluation of the electron distribution The integrand of $G_{AB}(q^{2})$ in (VII.3.1) is regular at $P_{A}^{+}-V^{\prime}x=0$ since $\phi_{A1}(\tau_{A}=0)=0$. Similarly $\phi_{B1}(\tau_{B}=0)=0$. In the Bj limit $\tau_{B}=0$ implies $P_{A}^{+}(1-{x_{Bj}})-V^{\prime}x=0$ (VII.3.1), i.e., $x=x_{0}$ with $\displaystyle x_{0}=P_{A}^{+}(1-{x_{Bj}})/V^{\prime}$ (353) The $\tau_{B}\to\infty$ limit of $\Phi_{B}$ assumed in (VII.3.1) fails at $x=x_{0}$. In the asymptotic expression for $\Phi_{B}$ the $1/(x-x_{0})$ singularity is regulated by the phase $\varphi_{B}(x)\propto\log|x-x_{0}|$ in $\cos[\varphi_{B}(x)]$. This requires attention in a numerical evaluation of the integral. A Principal Value prescription cannot be used due to the step function $\theta(-\tau_{B})\simeq\theta(x-x_{0})$. In the following exercise I outline a method for the numerical evaluation of the $x$-integral. Exercise A.15: Do the $x$-integral in (VII.3.1) numerically for the parameters in Fig. 16, and compare the results. A check of the boost invariance of $f({x_{Bj}})$ (352) is shown in Fig. 16. The electron distributions obtained in the rest frame and in a frame with boost parameter $\xi=1$ closely agree. Figure 16: QED2 electron distributions (352) for $m=0.5\sqrt{V^{\prime}}$ and $0.05<{x_{Bj}}<0.95$ of target $A$ ground state $(\eta_{A}=-1,\ M_{A}=2.674\sqrt{V^{\prime}})$ evaluated in the rest frame $(\xi_{A}=0$, blue) and at $\xi_{A}=1$ ($E_{A}=M_{A}\cosh\xi_{A}$, thick red dashed line). The two curves agree at ${\cal O}\left(10^{-5}\right)$, which indicates the accuracy of the numerical evaluation. The overall normalization is arbitrary. The electron distribution (352) in the rest frame ($P_{A}^{1}=0$) is compared with the one in the Breit frame ($P_{A}^{1}=Q/2{x_{Bj}}\to\infty$ in the Bj limit) in Fig. 17. The agreement shows the equivalence of the Breit and target rest frames. Figure 17: QED2 electron distribution (352) for $m=0.14\sqrt{V^{\prime}}$ ($M_{A}=2.52\sqrt{V^{\prime}}$) in the rest frame ($P_{A}=0$, red curves) compared with the Breit frame result in Fig.
8 of Dietrich _et al._ (2013), blue dots ($m=0.1\,e=0.14\,\sqrt{V^{\prime}}$, see (VII.1.2)). The normalization of the red curve was treated as a free parameter. #### VII.3.3 ${x_{Bj}}\to 0$ limit of the electron distribution The electron distribution in Fig. 17 increases for ${x_{Bj}}\to 0$, analogously to the sea quark distribution for hadrons. In Eq. (6.17) of Dietrich _et al._ (2013) the leading ${x_{Bj}}$-dependence was found to be (with scale $e^{2}=2V^{\prime}$), $\displaystyle{x_{Bj}}f({x_{Bj}})\sim\cos^{2}\big{[}\big{(}m^{2}\log{x_{Bj}}+{\textstyle\frac{1}{2}}M_{A}^{2}\big{)}/e^{2}\big{]}$ (354) States of the form (286) appear to have just a single $e^{-}e^{+}$ pair, created by the electron fields $\bar{\psi}$ and $\psi$. However, a strong electric field creates virtual $e^{-}e^{+}$ pairs. In time-ordered Feynman diagrams they show up in “$Z$”-diagrams like Fig. 13(b), where an electron scatters into a negative energy state, creating an intermediate state with an additional $e^{-}e^{+}$ pair. The mixing of the $b^{\dagger}$ and $d$ operators is explicit for the Dirac states created by the $c_{n}^{\dagger}$ operator (IV.3), and the Dirac ground state $\Omega$ (65) has an indefinite number of pairs in the free state basis. The constant norm of the bound state wave functions at large $x$ (VII.1.6) apparently reflects the virtual pairs created by the linear potential $V^{\prime}x$. The dominant contribution to the form factor (VII.3.1) for ${x_{Bj}}\to 0$ comes from the large $x$ part of the integrand, namely from $I_{1}$ in (A.606). More precisely, the leading behavior is due to $I_{1c}$ (A.15), for which the angle $\varphi_{C}$ (A.15) depends on $x$ mainly through ${\textstyle\frac{1}{2}}P_{A}^{+}{x_{Bj}}\,x$. This allows the integration over $u=x-x_{1}$ in $I_{1c}$ (A.611) to contribute over a range $\propto 1/{x_{Bj}}$. To leading order in the ${x_{Bj}}\to 0$ limit we may set $x_{1}=0$. The logarithmic terms in $\varphi_{C}$ are, at leading order in the $x\to\infty$ limit, $\displaystyle\log\Big{[}\frac{x_{0}\tau_{A}}{2(x-x_{0})}\Big{]}=\log({\textstyle\frac{1}{2}}P_{A}^{+}x)\,\big{[}1+{\cal O}\left(x^{-1}\right)\big{]}$ (355) Hence $\displaystyle I_{1c}\simeq-\frac{4V^{\prime}e^{-\pi m^{2}/4V^{\prime}}}{\sqrt{\pi}\,m}\sqrt{2\sinh(\pi m^{2}/2V^{\prime})}\,{\rm Im}\int_{0}^{i\infty}dx\,\exp\Big{[}{\textstyle\frac{1}{2}}iP_{A}^{+}{x_{Bj}}x+\frac{im^{2}}{2V^{\prime}}\log({\textstyle\frac{1}{2}}P_{A}^{+}x)-i\frac{M_{A}^{2}}{4V^{\prime}}-i\arg\Gamma\Big{(}1+\frac{im^{2}}{2V^{\prime}}\Big{)}\Big{]}$ (356) Defining $v=-{\textstyle\frac{1}{2}}iP_{A}^{+}{x_{Bj}}x$, we have $dx=2idv/(P_{A}^{+}{x_{Bj}})$, ${\rm Im}\to{\rm Re}$ and $\log({\textstyle\frac{1}{2}}P_{A}^{+}x)=\log v-\log{x_{Bj}}+{\textstyle\frac{1}{2}}i\pi$. The $v$-integral becomes $\displaystyle\int_{0}^{\infty}dvv^{im^{2}/2V^{\prime}}e^{-v}=\Gamma(1+im^{2}/2V^{\prime})=\frac{\sqrt{\pi}\,m}{\sqrt{2V^{\prime}\sinh(\pi m^{2}/2V^{\prime})}}\,e^{i\arg\Gamma(1+im^{2}/2V^{\prime})}$ (357) whose modulus (using $|\Gamma(1+iy)|^{2}=\pi y/\sinh(\pi y)$) and phase cancel the corresponding terms in (356). This leaves $\displaystyle I_{1}\simeq-\frac{8V^{\prime}}{{x_{Bj}}P_{A}^{+}}\,e^{-\pi m^{2}/2V^{\prime}}\cos\big{[}\big{(}m^{2}\log{x_{Bj}}+{\textstyle\frac{1}{2}}M_{A}^{2}\big{)}/2V^{\prime}\big{]}\hskip 56.9055pt\mbox{for}\ \ {x_{Bj}}\to 0$ (358) Since $I_{2}$ and $I_{3}$ give non-leading contributions in the ${x_{Bj}}\to 0$ limit, this defines, via (A.606), the frame independent parton distribution (352).
It agrees with the analytic result (354) of Dietrich _et al._ (2013), which was evaluated in the Breit frame. The analytic approximation given by (358) for $f({x_{Bj}}\to 0)$ is compared with the numerical evaluation in Fig. 18 for ${x_{Bj}}\leq 0.1$. Figure 18: Red line: Numerical evaluation of parton distribution (352) using (A.606) ($P_{A}=0$ and $m=0.14\,\sqrt{V^{\prime}}$). Dashed blue line: Analytic approximation for ${x_{Bj}}\to 0$ given by (358). ## VIII Applications to QCD bound states ### VIII.1 The instantaneous potential of various Fock states I consider color singlet QCD bound states in temporal gauge $A^{0}_{a}=0$, as described in section V.3. The scale $\Lambda$ required for confinement is introduced via a boundary condition on the solutions of Gauss’ constraint (151), in terms of the homogeneous solution (162). This affects the longitudinal electric field ${\boldsymbol{E}}_{L}^{a}$ for each color component of a Fock state, whereas the full color singlet state does not generate a color octet field. The gauge invariance condition of electromagnetic form factors is satisfied, and the $q\bar{q}$ bound state energies have the correct frame dependence. The longitudinal electric field (163) determines the field energy $\mathcal{H}_{V}$ (V.3.2), which defines an instantaneous potential for each Fock state, $\displaystyle\mathcal{H}_{V}(t=0)$ $\displaystyle=\int d{\boldsymbol{y}}d{\boldsymbol{z}}\Big{[}\,{\boldsymbol{y}}\cdot{\boldsymbol{z}}\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}+{\textstyle\frac{1}{2}}\frac{{\alpha_{s}}}{|{\boldsymbol{y}}-{\boldsymbol{z}}|}\Big{]}\mathcal{E}_{a}({\boldsymbol{y}})\mathcal{E}_{a}({\boldsymbol{z}})\equiv\mathcal{H}_{V}^{(0)}+\mathcal{H}_{V}^{(1)}$ $\displaystyle\mathcal{E}_{a}({\boldsymbol{y}})$ $\displaystyle=-f_{abc}A_{b}^{i}E_{c}^{i}({\boldsymbol{y}})+\psi_{A}^{\dagger}T_{AB}^{a}\psi_{B}({\boldsymbol{y}})$ (359) where a sum over repeated indices is understood. $\mathcal{H}_{V}^{(0)}$ is due to the homogeneous solution (162) of Gauss’ constraint and generates an ${\cal O}\left(\alpha_{s}^{0}\right)$ potential, while $\mathcal{H}_{V}^{(1)}$ gives the standard ${\cal O}\left({\alpha_{s}}\right)$ Coulomb potential. Recall that $\mathcal{E}_{a}({\boldsymbol{x}})\left|{0}\right\rangle=0$ (162) since Gauss’ constraint (151) is not an operator condition in temporal gauge. The potentials are independent of the quark Dirac index $\alpha$ and of the gluon Lorentz index $i=1,2,3$. I consider color singlet $q\bar{q}$ (meson), $qqq$ (baryon), $gg$ (glueball), $q\bar{q}g$ (higher Fock state of a meson) and $q\bar{q}\,q\bar{q}$ (molecular or tetraquark) Fock states. #### VIII.1.1 The $q\bar{q}$ potential A $q\bar{q}$ Fock state with quarks at ${\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2}$, summed over the colors $A$, is invariant under global color transformations, $\displaystyle\left|{q\bar{q}}\right\rangle=\bar{\psi}_{A}^{\alpha}({\boldsymbol{x}}_{1})\psi_{A}^{\beta}({\boldsymbol{x}}_{2})\left|{0}\right\rangle\equiv\bar{\psi}_{A}({\boldsymbol{x}}_{1})\psi_{A}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (360) I suppress the irrelevant Dirac indices and $t=0$ is understood.
The canonical commutation relations (147) of the fields in $\mathcal{E}_{a}({\boldsymbol{x}})$ (VIII.1) give $\displaystyle\big{[}{\mathcal{E}_{a}({\boldsymbol{x}})},{\bar{\psi}_{A}({\boldsymbol{x}}_{1})}\big{]}$ $\displaystyle=\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})T_{A^{\prime}A}^{a}\delta({\boldsymbol{x}}-{\boldsymbol{x}}_{1})$ $\displaystyle\big{[}{\mathcal{E}_{a}({\boldsymbol{x}})},{\psi_{A}({\boldsymbol{x}}_{2})}\big{]}$ $\displaystyle=-T_{AA^{\prime}}^{a}\psi_{A^{\prime}}({\boldsymbol{x}}_{2})\delta({\boldsymbol{x}}-{\boldsymbol{x}}_{2})$ (361) where $T^{a}$ is the SU(3) generator in the fundamental representation. I shall make use of the following relations for the SU($N_{c}$) generators (useful properties of the SU($N_{c}$) generators may be found in Haber (2021)) $\displaystyle\left[{T^{a}},{T^{b}}\right]$ $\displaystyle=if_{abc}T^{c}\hskip 56.9055pt\mathrm{Tr}\,\big{\\{}T^{a}T^{b}\big{\\}}={\textstyle\frac{1}{2}}\delta^{ab}\hskip 56.9055ptT^{a}T^{a}=C_{F}\,I=\frac{N_{c}^{2}-1}{2N_{c}}\,I$ $\displaystyle T_{AB}^{a}T_{CD}^{a}$ $\displaystyle=\frac{1}{2}\Big{(}\delta_{AD}\delta_{BC}-\frac{1}{N_{c}}\delta_{AB}\delta_{CD}\Big{)}\hskip 99.58464ptT^{a}T^{b}T^{a}=-\frac{1}{2N_{c}}\,T^{b}$ $\displaystyle f_{abc}f_{abd}$ $\displaystyle=N_{c}\delta_{cd}\hskip 184.9429ptf_{abd}T^{a}T^{b}={\textstyle\frac{1}{2}}iN_{c}T^{d}$ (362) The weight ${\boldsymbol{y}}\cdot{\boldsymbol{z}}$ of $\mathcal{H}_{V}^{(0)}$ is ${\boldsymbol{x}}_{1}^{2},\,{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{2}$ or ${\boldsymbol{x}}_{2}^{2}$ depending on the quark field on which $\mathcal{E}_{a}({\boldsymbol{y}})$ and $\mathcal{E}_{a}({\boldsymbol{z}})$ act, $\displaystyle\mathcal{H}_{V}^{(0)}\left|{q\bar{q}}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\int d{\boldsymbol{y}}d{\boldsymbol{z}}\;{\boldsymbol{y}}\cdot{\boldsymbol{z}}\,\mathcal{E}_{a}({\boldsymbol{y}})\mathcal{E}_{a}({\boldsymbol{z}})\bar{\psi}_{A}({\boldsymbol{x}}_{1})\psi_{A}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}({\boldsymbol{x}}_{1}^{2}-2{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{2}+{\boldsymbol{x}}_{2}^{2})\bar{\psi}_{A}({\boldsymbol{x}}_{1})\,T^{a}_{AB}T^{a}_{BC}\psi_{C}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}C_{F}\,({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2}\,\left|{q\bar{q}}\right\rangle$ (363) The term ${\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}$ is the contribution of an ${\boldsymbol{x}}$-independent field energy density $E_{\Lambda}$. Its integral is proportional to the volume of space and irrelevant only if universal, i.e., $E_{\Lambda}$ must be the same for all Fock states. In particular, $E_{\Lambda}$ cannot depend on ${\boldsymbol{x}}_{1}$ or ${\boldsymbol{x}}_{2}$. This requires choosing the normalization $\kappa$ of the homogeneous solution for this Fock state as $\displaystyle\kappa_{q\bar{q}}=\frac{\Lambda^{2}}{gC_{F}}\frac{1}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}$ (364) which also serves to define the universal constant $\Lambda$.
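As an aside (not part of the original text), the generator relations (362), which underlie all color-factor computations below, are easy to verify numerically. The following is a minimal NumPy sketch, assuming the standard Gell-Mann representation $T^{a}=\lambda^{a}/2$:

```python
import numpy as np

# Gell-Mann matrices lambda^a; generators T^a = lambda^a / 2
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1; lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2
Nc = 3
CF = (Nc**2 - 1) / (2 * Nc)
d = np.eye(3)

# Tr{T^a T^b} = delta^ab / 2
assert np.allclose(np.einsum('aij,bji->ab', T, T), np.eye(8) / 2)

# T^a T^a = C_F * 1
assert np.allclose(np.einsum('aij,ajk->ik', T, T), CF * d)

# Fierz identity: T^a_{AB} T^a_{CD} = (delta_AD delta_BC - delta_AB delta_CD / Nc) / 2
assert np.allclose(np.einsum('aAB,aCD->ABCD', T, T),
                   (np.einsum('AD,BC->ABCD', d, d)
                    - np.einsum('AB,CD->ABCD', d, d) / Nc) / 2)

# T^a T^b T^a = -T^b / (2 Nc)
assert all(np.allclose(sum(T[a] @ T[b] @ T[a] for a in range(8)),
                       -T[b] / (2 * Nc)) for b in range(8))

# f_abc = -2i Tr{[T^a, T^b] T^c}, then f_abc f_abd = Nc delta_cd
comm = np.einsum('aij,bjk->abik', T, T) - np.einsum('bij,ajk->abik', T, T)
f = (-2j * np.einsum('abij,cji->abc', comm, T)).real
assert np.allclose(np.einsum('abc,abd->cd', f, f), Nc * np.eye(8))
```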
The field energy density is then $\displaystyle E_{\Lambda}=\frac{\Lambda^{4}}{2g^{2}C_{F}}$ (365) This value of $E_{\Lambda}$ must be imposed on all types of Fock states, e.g., $\left|{qqq}\right\rangle$, and in each case will determine the normalization of the corresponding homogeneous solution. Subtracting $E_{\Lambda}{\textstyle\int}d{\boldsymbol{x}}$ in (VIII.1.1), the remaining $g\kappa$ term gives $\displaystyle\mathcal{H}_{V}^{(0)}\left|{q\bar{q}}\right\rangle\equiv V_{q\bar{q}}^{(0)}\left|{q\bar{q}}\right\rangle\hskip 56.9055ptV_{q\bar{q}}^{(0)}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})=g\kappa_{q\bar{q}}\,C_{F}\,({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2}=\Lambda^{2}|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|$ (366) The gluon exchange potential due to $\mathcal{H}_{V}^{(1)}$ is similarly obtained using (VIII.1.1). The commutators of $\mathcal{E}_{a}({\boldsymbol{y}})$ and $\mathcal{E}_{a}({\boldsymbol{z}})$ with the same quark now give an infinite, $\sim 1/0$ contribution. This “self-energy” is independent of ${\boldsymbol{x}}_{1}$ and ${\boldsymbol{x}}_{2}$ and can be subtracted. Altogether, $\displaystyle\mathcal{H}_{V}\left|{q\bar{q}}\right\rangle$ $\displaystyle=\big{[}V_{q\bar{q}}^{(0)}+V_{q\bar{q}}^{(1)}\big{]}\left|{q\bar{q}}\right\rangle\hskip 56.9055ptV_{q\bar{q}}^{(1)}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})=-C_{F}\frac{{\alpha_{s}}}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}$ (367) $V_{q\bar{q}}^{(0)}+V_{q\bar{q}}^{(1)}$ agrees with the Cornell potential (2) of Eichten _et al._ (1980, 2008). The first term of the Fock expansion thus gives a good approximation for heavy quarkonia. #### VIII.1.2 The $qqq$ potential An SU(3) color singlet $qqq$ Fock state has the form (suppressing the Dirac indices) $\displaystyle\left|{qqq}\right\rangle=\epsilon_{ABC}\,\psi_{A}^{\dagger}({\boldsymbol{x}}_{1})\psi_{B}^{\dagger}({\boldsymbol{x}}_{2})\psi_{C}^{\dagger}({\boldsymbol{x}}_{3})\left|{0}\right\rangle$ (368) where $\epsilon_{ABC}$ is the fully antisymmetric tensor with $\epsilon_{123}=1$. Note that this state is a color singlet of SU($N_{c}$) only for $N_{c}=3$. In a global transformation $\psi_{A}^{\dagger}({\boldsymbol{x}})\to\psi_{A^{\prime}}^{\dagger}({\boldsymbol{x}})U_{A^{\prime}A}^{\dagger}$ the state is invariant: $\epsilon_{ABC}U_{A^{\prime}A}^{\dagger}U_{B^{\prime}B}^{\dagger}U_{C^{\prime}C}^{\dagger}=\epsilon_{A^{\prime}B^{\prime}C^{\prime}}\det(U)=\epsilon_{A^{\prime}B^{\prime}C^{\prime}}$ provided $U$ is a $3\times 3$ matrix with unit determinant. When $\mathcal{H}_{V}^{(0)}$ (VIII.1) operates on $\left|{qqq}\right\rangle$ the factor ${\boldsymbol{y}}\cdot{\boldsymbol{z}}$ is ${\boldsymbol{x}}_{i}\cdot{\boldsymbol{x}}_{j}$ for the commutator (VIII.1.1) of $\mathcal{E}_{a}({\boldsymbol{y}})$ with $\psi^{\dagger}({\boldsymbol{x}}_{i})$ and of $\mathcal{E}_{a}({\boldsymbol{z}})$ with $\psi^{\dagger}({\boldsymbol{x}}_{j})$, or vice versa.
It suffices to consider the two generic cases, $\displaystyle{\boldsymbol{x}}_{1}^{2}:\hskip 28.45274pt\epsilon_{ABC}\,\psi_{A^{\prime\prime}}^{\dagger}({\boldsymbol{x}}_{1})\psi_{B}^{\dagger}({\boldsymbol{x}}_{2})\psi_{C}^{\dagger}({\boldsymbol{x}}_{3})\left|{0}\right\rangle T_{A^{\prime\prime}A^{\prime}}^{a}T_{A^{\prime}A}^{a}=C_{F}\left|{qqq}\right\rangle=\frac{4}{3}\left|{qqq}\right\rangle$ $\displaystyle{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{2}:\hskip 8.5359pt2\epsilon_{ABC}\,\psi_{A^{\prime}}^{\dagger}({\boldsymbol{x}}_{1})\psi_{B^{\prime}}^{\dagger}({\boldsymbol{x}}_{2})\psi_{C}^{\dagger}({\boldsymbol{x}}_{3})\left|{0}\right\rangle T_{A^{\prime}A}^{a}T_{B^{\prime}B}^{a}=\big{(}\epsilon_{B^{\prime}A^{\prime}C}-{\textstyle\frac{1}{N_{c}}}\epsilon_{A^{\prime}B^{\prime}C}\big{)}\,\psi_{A^{\prime}}^{\dagger}({\boldsymbol{x}}_{1})\psi_{B^{\prime}}^{\dagger}({\boldsymbol{x}}_{2})\psi_{C}^{\dagger}({\boldsymbol{x}}_{3})\left|{0}\right\rangle$ $\displaystyle\hskip 239.00298pt=-\Big{(}1+\frac{1}{N_{c}}\Big{)}\left|{qqq}\right\rangle=-\frac{4}{3}\left|{qqq}\right\rangle$ (369) where I used (VIII.1.1). The two eigenvalues are equal and opposite for $N_{c}=3$, which ensures translation invariance, $\displaystyle\mathcal{H}_{V}^{(0)}\left|{qqq}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\frac{4}{3}\big{[}d_{qqq}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{3})\big{]}^{2}\left|{qqq}\right\rangle$ $\displaystyle d_{qqq}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{3})$ $\displaystyle\equiv\frac{1}{\sqrt{2}}\sqrt{({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2}+({\boldsymbol{x}}_{2}-{\boldsymbol{x}}_{3})^{2}+({\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{1})^{2}}$ (370) To ensure the universal value of $E_{\Lambda}$ in (365), i.e., the universality of the spatially constant energy density, the homogeneous solution in (163) should be normalized for the $\left|{qqq}\right\rangle$ Fock state (368) as $\displaystyle\kappa_{qqq}=\frac{\Lambda^{2}}{gC_{F}}\,\frac{1}{d_{qqq}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{3})}$ (371) This gives the ${\cal O}\left(\alpha_{s}^{0}\right)$ potential $\displaystyle V_{qqq}^{(0)}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{3})=g\kappa_{qqq}\,{\textstyle\frac{4}{3}}\,\big{[}d_{qqq}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{3})\big{]}^{2}=\Lambda^{2}\,d_{qqq}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{3})$ (372) The ${\cal O}\left({\alpha_{s}}\right)$ gluon exchange potential given by $\mathcal{H}_{V}^{(1)}$ in (VIII.1) is determined by the eigenvalue of the ${\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{2}$ term in (VIII.1.2), $\displaystyle V_{qqq}^{(1)}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{3})=-\frac{2}{3}\,{\alpha_{s}}\Big{(}\frac{1}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}+\frac{1}{|{\boldsymbol{x}}_{2}-{\boldsymbol{x}}_{3}|}+\frac{1}{|{\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{1}|}\Big{)}$ (373) #### VIII.1.3 The $gg$ potential A $gg$ Fock state which is invariant under global color SU($N_{c}$) transformations is expressed in terms of the gluon field in temporal gauge as $\displaystyle\left|{gg}\right\rangle=A_{a}^{i}({\boldsymbol{x}}_{1})A_{a}^{j}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (374) To find the action of $\mathcal{H}_{V}$ (VIII.1) on this state we may use the canonical commutator in (147), 
$\displaystyle\left[{\mathcal{E}_{a}({\boldsymbol{y}})},{A_{b}^{i}({\boldsymbol{x}}_{1})}\right]$ $\displaystyle=\left[{-f_{acd}A_{c}^{k}({\boldsymbol{y}})E_{d}^{k}({\boldsymbol{y}})},{A_{b}^{i}({\boldsymbol{x}}_{1})}\right]=if_{abc}A_{c}^{i}({\boldsymbol{x}}_{1})\delta({\boldsymbol{y}}-{\boldsymbol{x}}_{1})$ $\displaystyle\left[{\mathcal{E}_{a}({\boldsymbol{y}})},{A_{b}^{i}({\boldsymbol{x}}_{1})A_{b}^{j}({\boldsymbol{x}}_{2})}\right]$ $\displaystyle=if_{abc}A_{b}^{i}({\boldsymbol{x}}_{1})A_{c}^{j}({\boldsymbol{x}}_{2})\big{[}-\delta({\boldsymbol{y}}-{\boldsymbol{x}}_{1})+\delta({\boldsymbol{y}}-{\boldsymbol{x}}_{2})\big{]}$ $\displaystyle\left[{\mathcal{E}_{a}({\boldsymbol{y}})},{\left[{\mathcal{E}_{a}({\boldsymbol{z}})},{A_{b}^{i}({\boldsymbol{x}}_{1})A_{b}^{j}({\boldsymbol{x}}_{2})}\right]}\right]$ $\displaystyle=N_{c}\,A_{a}^{i}({\boldsymbol{x}}_{1})A_{a}^{j}({\boldsymbol{x}}_{2})\big{[}\delta({\boldsymbol{y}}-{\boldsymbol{x}}_{1})-\delta({\boldsymbol{y}}-{\boldsymbol{x}}_{2})\big{]}\big{[}\delta({\boldsymbol{z}}-{\boldsymbol{x}}_{1})-\delta({\boldsymbol{z}}-{\boldsymbol{x}}_{2})\big{]}$ (375) Hence $\displaystyle\mathcal{H}_{V}^{(0)}\left|{gg}\right\rangle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}N_{c}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2}\left|{gg}\right\rangle$ (376) and to ensure the universal value of $E_{\Lambda}$ in (365), $\displaystyle\kappa_{gg}=\frac{\Lambda^{2}}{g\sqrt{C_{F}N_{c}}}\,\frac{1}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}$ (377) This gives $\displaystyle\mathcal{H}_{V}\left|{gg}\right\rangle=\big{(}V_{gg}^{(0)}+V_{gg}^{(1)}\big{)}\left|{gg}\right\rangle=\Big{(}\sqrt{\frac{N_{c}}{C_{F}}}\,\Lambda^{2}|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|-N_{c}\,\frac{{\alpha_{s}}}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}\Big{)}\left|{gg}\right\rangle$ (378) #### VIII.1.4 The $q\bar{q}g$ potential The Hamiltonian (148) creates ${\cal O}\left(g\right)$ color singlet $\left|{q\bar{q}g}\right\rangle$ Fock states from $\left|{q\bar{q}}\right\rangle$ (360). I consider the instantaneous potential generated by $\mathcal{H}_{V}$ (VIII.1) for states of the form $\displaystyle\left|{q\bar{q}g}\right\rangle\equiv\bar{\psi}_{A}({\boldsymbol{x}}_{1})\,A_{b}^{i}({\boldsymbol{x}}_{g})T^{b}_{AB}\psi_{B}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (379) Proceeding as above gives the potential $\displaystyle V_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})$ $\displaystyle=\frac{\Lambda^{2}}{\sqrt{C_{F}}}\,d_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})+{\textstyle\frac{1}{2}}\,{\alpha_{s}}\Big{[}\frac{1}{N_{c}}\,\frac{1}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}-N_{c}\Big{(}\frac{1}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{g}|}+\frac{1}{|{\boldsymbol{x}}_{2}-{\boldsymbol{x}}_{g}|}\Big{)}\Big{]}$ $\displaystyle d_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})$ $\displaystyle\equiv\sqrt{{\textstyle\frac{1}{4}}(N_{c}-{\textstyle\frac{2}{N_{c}}})({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2}+N_{c}({\boldsymbol{x}}_{g}-{\textstyle\frac{1}{2}}{\boldsymbol{x}}_{1}-{\textstyle\frac{1}{2}}{\boldsymbol{x}}_{2})^{2}}$ (380) $V_{q\bar{q}g}$ is a confining potential, as it restricts both $|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|$ and the distance of ${\boldsymbol{x}}_{g}$ from the average ${\textstyle\frac{1}{2}}({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})$. Exercise A.16: Derive the $q\bar{q}g$ potential (VIII.1.4).
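As a further numerical check (an addition to the text), the color factors entering the $qqq$ eigenvalue in (369) and the $gg$ double commutator (375) can be verified along the same lines. A minimal sketch; the generator basis below is only assumed orthonormal, which is all these identities require:

```python
import numpy as np

# SU(3) generators T^a = lambda^a/2 in an orthonormal basis, Tr{T^a T^b} = delta^ab/2
# (the ordering of the basis is immaterial for these checks)
lam = np.zeros((8, 3, 3), dtype=complex)
for a, (i, j) in enumerate([(0, 1), (0, 2), (1, 2)]):
    lam[2 * a][i, j] = lam[2 * a][j, i] = 1
    lam[2 * a + 1][i, j] = -1j
    lam[2 * a + 1][j, i] = 1j
lam[6] = np.diag([1, -1, 0])
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2
Nc = 3

# totally antisymmetric eps_ABC of the qqq state (368)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1

# x1.x2 weight in (369): 2 eps_ABC T^a_{A'A} T^a_{B'B} = -(1 + 1/Nc) eps_{A'B'C}
assert np.allclose(2 * np.einsum('ABC,aXA,aYB->XYC', eps, T, T),
                   -(1 + 1 / Nc) * eps)   # = -4/3 for Nc = 3

# gg state (374): the double commutator (375) carries f_abc f_abd = Nc delta_cd
comm = np.einsum('aij,bjk->abik', T, T) - np.einsum('bij,ajk->abik', T, T)
f = (-2j * np.einsum('abij,cji->abc', comm, T)).real
assert np.allclose(np.einsum('abc,abd->cd', f, f), Nc * np.eye(8))
```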
#### VIII.1.5 Limiting values of the $qqq$ and $q\bar{q}g$ potentials When two of the quarks in the baryon $\left|{qqq}\right\rangle$ Fock state are close to each other, the potential should (for $N_{c}=3$) reduce to the color $3\otimes\bar{3}$ meson potential (366), (367). Setting ${\boldsymbol{x}}_{2}={\boldsymbol{x}}_{3}$ in $V_{qqq}$ (372) and (373) (and subtracting the infinite Coulomb energy) indeed gives $\displaystyle V_{qqq}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{2})=\Lambda^{2}|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|-\frac{4}{3}\frac{{\alpha_{s}}}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}=V_{q\bar{q}}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})$ (381) Similarly the potential of the $q\bar{q}g$ Fock state should reduce to the $3\otimes\bar{3}$ meson potential when a quark and gluon coincide. Setting ${\boldsymbol{x}}_{g}={\boldsymbol{x}}_{2}$ in $V_{q\bar{q}g}$ (VIII.1.4) gives $\displaystyle V_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{2})=\Lambda^{2}|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|-C_{F}\frac{{\alpha_{s}}}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}=V_{q\bar{q}}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})$ (382) On the other hand, when the quarks in $q\bar{q}g$ coincide the potential (VIII.1.4) should become the $8\otimes 8$ glueball potential (378). This is also fulfilled, $\displaystyle V_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{1},{\boldsymbol{x}}_{g})=\frac{\Lambda^{2}\sqrt{N_{c}}}{\sqrt{C_{F}}}\,|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{g}|-N_{c}\,\frac{{\alpha_{s}}}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{g}|}=V_{gg}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{g})$ (383) #### VIII.1.6 Single quark or gluon Fock states The above examples indicate that only color singlet Fock states have translation invariant potentials. For the record I confirm this for single quark and gluon states. In (VIII.1.1) we already saw that $\displaystyle\mathcal{H}_{V}^{(0)}\left|{q}\right\rangle\equiv\mathcal{H}_{V}^{(0)}\bar{\psi}_{A}({\boldsymbol{x}}_{q})\left|{0}\right\rangle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}C_{F}\,{\boldsymbol{x}}_{q}^{2}\,\left|{q}\right\rangle$ (384) For the ${\cal O}\left(\kappa^{2}\right)$ term to give $E_{\Lambda}$ (365) we need $\displaystyle\kappa_{q}=\frac{\Lambda^{2}}{gC_{F}}\,\frac{1}{|{\boldsymbol{x}}_{q}|}$ (385) This gives the potential $\displaystyle V_{q}^{(0)}=\Lambda^{2}|{\boldsymbol{x}}_{q}|$ (386) which is not invariant under translations. The case of a single gluon is similar. Using our previous result (376), $\displaystyle\mathcal{H}_{V}^{(0)}\left|{g}\right\rangle\equiv\mathcal{H}_{V}^{(0)}A_{a}^{i}({\boldsymbol{x}}_{g})\left|{0}\right\rangle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}N_{c}{\boldsymbol{x}}_{g}^{2}\left|{g}\right\rangle$ (387) This requires $\displaystyle\kappa_{g}=\frac{\Lambda^{2}}{g\sqrt{C_{F}N_{c}}}\,\frac{1}{|{\boldsymbol{x}}_{g}|}$ (388) so that $\displaystyle V_{g}^{(0)}=\frac{\Lambda^{2}\sqrt{N_{c}}}{\sqrt{C_{F}}}\,|{\boldsymbol{x}}_{g}|$ (389) I conclude, without further proof, that only color singlet Fock states are compatible with Poincaré invariance. #### VIII.1.7 The potential of $q\bar{q}\,q\bar{q}$ Fock states As the number of quarks and gluons in a Fock state increases, a subset of them may form a color singlet, and thus not be confined.
In this final example I show that this is the case for color singlet states $\left|{q\bar{q}\,q\bar{q}}\right\rangle$, with two quarks and two antiquarks. There are two ways to combine the four quarks into a color singlet. In a $q\bar{q}$ basis we can have $\displaystyle\left|{1\otimes 1}\right\rangle\equiv\left|{(q_{1}\bar{q}_{2})_{1}(q_{3}\bar{q}_{4})_{1}}\right\rangle\hskip 28.45274pt\mbox{and}\hskip 28.45274pt\left|{8\otimes 8}\right\rangle\equiv\left|{(q_{1}\bar{q}_{2})_{8}(q_{3}\bar{q}_{4})_{8}}\right\rangle$ (390) where $q_{i}\equiv q({\boldsymbol{x}}_{i})$. The two independent configurations in a diquark basis $\left|{(q_{1}q_{3})_{\bar{3}}(\bar{q}_{2}\bar{q}_{4})_{3}}\right\rangle$ and $\left|{(q_{1}q_{3})_{6}(\bar{q}_{2}\bar{q}_{4})_{\bar{6}}}\right\rangle$ can be expressed in terms of these. I shall use the $q\bar{q}$ basis (390) here. Then $\displaystyle\left|{1\otimes 1}\right\rangle$ $\displaystyle=\bar{\psi}_{A}({\boldsymbol{x}}_{1})\psi_{A}({\boldsymbol{x}}_{2})\,\bar{\psi}_{B}({\boldsymbol{x}}_{3})\psi_{B}({\boldsymbol{x}}_{4})\left|{0}\right\rangle$ (391) $\displaystyle\left|{8\otimes 8}\right\rangle$ $\displaystyle=\bar{\psi}_{A}({\boldsymbol{x}}_{1})T_{AB}^{a}\psi_{B}({\boldsymbol{x}}_{2})\,\bar{\psi}_{C}({\boldsymbol{x}}_{3})T_{CD}^{a}\psi_{D}({\boldsymbol{x}}_{4})\left|{0}\right\rangle$ $\displaystyle={\textstyle\frac{1}{2}}\bar{\psi}_{A}({\boldsymbol{x}}_{1})\psi_{B}({\boldsymbol{x}}_{2})\,\bar{\psi}_{B}({\boldsymbol{x}}_{3})\psi_{A}({\boldsymbol{x}}_{4})\left|{0}\right\rangle-{\textstyle\frac{1}{2N_{c}}}\left|{1\otimes 1}\right\rangle$ The coefficients of ${\boldsymbol{y}}\cdot{\boldsymbol{z}}$ in $\mathcal{H}_{V}^{(0)}\left|{1\otimes 1}\right\rangle$ are, apart from the common factor $\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}$: $\displaystyle({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2},\ ({\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{4})^{2}:$ $\displaystyle\hskip 28.45274ptC_{F}\left|{1\otimes 1}\right\rangle$ $\displaystyle{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{3},\ -{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{4},\ -{\boldsymbol{x}}_{2}\cdot{\boldsymbol{x}}_{3},\ {\boldsymbol{x}}_{2}\cdot{\boldsymbol{x}}_{4}:$ $\displaystyle\hskip 28.45274pt2\bar{\psi}_{A}({\boldsymbol{x}}_{1})T_{AB}^{a}\psi_{B}({\boldsymbol{x}}_{2})\bar{\psi}_{C}({\boldsymbol{x}}_{3})T_{CD}^{a}\psi_{D}({\boldsymbol{x}}_{4})=2\left|{8\otimes 8}\right\rangle$ (392) In terms of the relative separations $\displaystyle{\boldsymbol{d}}_{12}={\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}\hskip 56.9055pt{\boldsymbol{d}}_{34}={\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{4}\hskip 56.9055pt{\boldsymbol{d}}={\textstyle\frac{1}{2}}({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2}-{\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{4})$ (393) and with a similar analysis for $\mathcal{H}_{V}^{(0)}\left|{8\otimes 8}\right\rangle$, $\displaystyle\mathcal{H}_{V}^{(0)}\left|{1\otimes 1}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\big{\\{}\big{[}C_{F}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2}+C_{F}({\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{4})^{2}\big{]}\left|{1\otimes 1}\right\rangle+2({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\cdot({\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{4})\left|{8\otimes 8}\right\rangle\big{\\}}$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\big{[}C_{F}({\boldsymbol{d}}_{12}^{2}+{\boldsymbol{d}}_{34}^{2})\left|{1\otimes 
1}\right\rangle+2{\boldsymbol{d}}_{12}\cdot{\boldsymbol{d}}_{34}\left|{8\otimes 8}\right\rangle\big{]}$ $\displaystyle\mathcal{H}_{V}^{(0)}\left|{8\otimes 8}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\big{\\{}\big{[}N_{c}{\boldsymbol{d}}^{2}+{\textstyle\frac{N_{c}^{2}-2}{4N_{c}}}({\boldsymbol{d}}_{12}^{2}+{\boldsymbol{d}}_{34}^{2})+{\textstyle\frac{N_{c}^{2}-4}{2N_{c}}}{\boldsymbol{d}}_{12}\cdot{\boldsymbol{d}}_{34}\big{]}\left|{8\otimes 8}\right\rangle+{\textstyle\frac{C_{F}}{N_{c}}}{\boldsymbol{d}}_{12}\cdot{\boldsymbol{d}}_{34}\left|{1\otimes 1}\right\rangle\big{\\}}$ (394) Exercise A.17: Derive the expression for $\mathcal{H}_{V}^{(0)}\left|{8\otimes 8}\right\rangle$ in (VIII.1.7). The expression (VIII.1) for $\mathcal{H}_{V}$ shows that the color structure of $\mathcal{H}_{V}^{(0)}$ and $\mathcal{H}_{V}^{(1)}$ is the same. Hence $\mathcal{H}_{V}^{(1)}$ is given by the coefficients of ${\boldsymbol{x}}_{i}\cdot{\boldsymbol{x}}_{j}$, multiplied by ${\textstyle\frac{1}{2}}{\alpha_{s}}/|{\boldsymbol{x}}_{i}-{\boldsymbol{x}}_{j}|$ (for $i\neq j$). The coefficients of the $\left|{1\otimes 1}\right\rangle$ and $\left|{8\otimes 8}\right\rangle$ states on the rhs. of (VIII.1.7) allow one to determine the eigenstates of $\mathcal{H}_{V}^{(0)}$ and thus the normalizations $\kappa$ of the corresponding homogeneous solutions. The coefficients, and thus the eigenstates, depend on the separations ${\boldsymbol{d}},\,{\boldsymbol{d}}_{12}$ and ${\boldsymbol{d}}_{34}$. At large separations of the $q_{1}\bar{q}_{2}$ and $q_{3}\bar{q}_{4}$ pairs, i.e., for $|{\boldsymbol{d}}|\gg|{\boldsymbol{d}}_{12}|,|{\boldsymbol{d}}_{34}|$, $\mathcal{H}_{V}\left|{8\otimes 8}\right\rangle\sim N_{c}{\boldsymbol{d}}^{2}\left|{8\otimes 8}\right\rangle$. The other eigenstate then approaches $\left|{1\otimes 1}\right\rangle$ with the smaller eigenvalue $\sim C_{F}({\boldsymbol{d}}_{12}^{2}+{\boldsymbol{d}}_{34}^{2})$. Since the separation of color octet charges gives a large potential energy it may be expected that eigenstates of the full Hamiltonian are dominated by unconfined $\left|{1\otimes 1}\right\rangle$ color configurations at large separations. ### VIII.2 Rest frame wave functions of $q\bar{q}$ bound states In this section I determine the ${\cal O}\left(\alpha_{s}^{0}\right)$ meson eigenstates of the QCD Hamiltonian (148) in the rest frame. The valence quark state of the meson is (at $t=0$) defined by its wave function $\Phi_{\alpha\beta}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$, $\displaystyle\left|{M}\right\rangle=\frac{1}{\sqrt{N_{c}}}\sum_{A,B;\alpha,\beta}\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}_{\alpha}^{A}({\boldsymbol{x}}_{1})\delta^{AB}\Phi_{\alpha\beta}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\psi_{\beta}^{B}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (395) where $N_{c}=3$ in QCD and $A,B$ are quark color indices. This is a singlet state under global color SU($N_{c}$) transformations. Contributions of higher orders in ${\alpha_{s}}$ and of higher Fock states are neglected here (the hyperfine splitting of Positronium, section VI.4, gives an example of how higher Fock states are taken into account).
#### VIII.2.1 Bound state equation for the meson wave function $\Phi_{\alpha\beta}({\boldsymbol{x}})$ The bound state condition for the meson state (395) of mass $M$ is at ${\cal O}\left(\alpha_{s}^{0}\right)$, $\displaystyle\big{(}\mathcal{H}_{0}^{(q)}+\mathcal{H}_{V}^{(0)}\big{)}\left|{M}\right\rangle=M\left|{M}\right\rangle$ (396) The quark kinetic energy operator in the Hamiltonian (148) is $\displaystyle\mathcal{H}_{0}^{(q)}=\int d{\boldsymbol{x}}\,\psi_{A}^{\dagger}({\boldsymbol{x}})(-i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})\psi_{A}({\boldsymbol{x}})$ (397) $\displaystyle\left[{\mathcal{H}_{0}^{(q)}},{\bar{\psi}({\boldsymbol{x}}_{1})}\right]=\bar{\psi}({\boldsymbol{x}}_{1})(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}+m\gamma^{0})\hskip 56.9055pt\left[{\mathcal{H}_{0}^{(q)}},{\psi({\boldsymbol{x}}_{2})}\right]=(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}-m\gamma^{0})\psi({\boldsymbol{x}}_{2})$ (398) The meson is bound by the instantaneous potential generated by $\mathcal{H}_{V}^{(0)}$ (VIII.1). Its action on color singlet $q\bar{q}$ Fock states is given in (366), $\displaystyle\left[{\mathcal{H}_{V}^{(0)}},{\bar{\psi}_{\alpha}^{A}({\boldsymbol{x}}_{1})\psi_{\beta}^{A}({\boldsymbol{x}}_{2})}\right]=V^{\prime}\,|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|\,\bar{\psi}_{\alpha}^{A}({\boldsymbol{x}}_{1})\psi_{\beta}^{A}({\boldsymbol{x}}_{2})\hskip 56.9055ptV({\boldsymbol{x}})=V^{\prime}|{\boldsymbol{x}}|=\Lambda^{2}|{\boldsymbol{x}}|$ (399) where a sum over the quark colors $A$ is implied. I neglect the ${\cal O}\left({\alpha_{s}}\right)$ Coulomb energy (367). Shifting the derivatives in (398) from the fields onto the wave function by partial integration in (396), the coefficients of each Fock component give the bound state equation for $\Phi({\boldsymbol{x}})$ (${\boldsymbol{x}}={\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}$), $\displaystyle\big{(}i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0}\big{)}\Phi({\boldsymbol{x}})+\Phi({\boldsymbol{x}})\big{(}i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}-m\gamma^{0}\big{)}=\big{[}M-V({\boldsymbol{x}})\big{]}\Phi({\boldsymbol{x}})$ (400) where $V({\boldsymbol{x}})=V^{\prime}|{\boldsymbol{x}}|$.
Equivalent forms of this BSE are $\displaystyle i\boldsymbol{\nabla}\cdot\left\\{{{\boldsymbol{\alpha}}},{\Phi({\boldsymbol{x}})}\right\\}+m\left[{\gamma^{0}},{\Phi({\boldsymbol{x}})}\right]=\big{[}M-V({\boldsymbol{x}})\big{]}\Phi({\boldsymbol{x}})$ (401) $\displaystyle\Big{[}\frac{2}{M-V}(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})-1\Big{]}\Phi({\boldsymbol{x}})+\Phi({\boldsymbol{x}})\Big{[}(i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}-m\gamma^{0})\frac{2}{M-V}-1\Big{]}=0$ (402) Introducing the notation $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{\pm}\equiv\frac{2}{M-V}(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})\pm 1$ $\displaystyle{\overset{\leftarrow}{\mathfrak{h}}}_{\pm}\equiv(i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}-m\gamma^{0})\frac{2}{M-V}\pm 1$ (403) which satisfy $(r=|{\boldsymbol{x}}|)$ $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{-}{\overset{\rightarrow}{\mathfrak{h}}}_{+}$ $\displaystyle=\frac{4}{(M-V)^{2}}(-{\overset{\rightarrow}{\boldsymbol{\nabla}}}^{2}+m^{2})-1+\frac{4iV^{\prime}}{r(M-V)^{3}}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})$ $\displaystyle{\overset{\leftarrow}{\mathfrak{h}}}_{+}{\overset{\leftarrow}{\mathfrak{h}}}_{-}$ $\displaystyle=(-{\overset{\leftarrow}{\boldsymbol{\nabla}}}^{2}+m^{2})\frac{4}{(M-V)^{2}}-1+(i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}-m\gamma^{0})\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,\frac{4iV^{\prime}}{r(M-V)^{3}}$ (404) the bound state equation (402) is $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi({\boldsymbol{x}})+\Phi({\boldsymbol{x}}){\overset{\leftarrow}{\mathfrak{h}}}_{-}=0$ (405) The meson states (395) are relativistic (strongly bound) when $m\lesssim\Lambda$, and for high excitations at any $m$ due to the linear potential. The example of Dirac states (53) shows that for strong fields the vacuum (65) has $b^{\dagger}d^{\dagger}$ pairs in a free fermion basis, and fermion eigenstates are created by a superposition of the $b^{\dagger}$ and $d$ operators (IV.3). Analogously, the meson state (395) is a two-particle Fock state only in a Bogoliubov rotated operator basis. In the free basis we may view the pairs as arising from the $Z$-diagrams (Fig. 13b) of a time-ordered perturbative expansion. For DIS in QED2 the pairs give rise to a sea-like distribution of electrons $\propto 1/{x_{Bj}}$, as discussed in section VII.3.3 and shown in Fig. 17. I return to this issue in section VIII.4.2. #### VIII.2.2 Separation of radial and angular variables The $4\times 4$ wave function $\Phi_{\alpha\beta}({\boldsymbol{x}})$ may be expressed as a sum of terms with distinct Dirac structures $\Gamma_{\alpha\beta}^{(i)}({\boldsymbol{x}})$, radial functions $F_{i}(r)$ and angular dependence given by the spherical harmonics $Y_{j\lambda}(\hat{\boldsymbol{x}})$: $\displaystyle\Phi_{\alpha\beta}^{j\lambda}({\boldsymbol{x}})=\sum_{i}\Gamma_{\alpha\beta}^{(i)}F_{i}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})$ (406) where $r=|{\boldsymbol{x}}|$ and $\hat{\boldsymbol{x}}={\boldsymbol{x}}/r$. The 16 independent Dirac structures $\Gamma^{(i)}$ should be rotationally invariant to allow a simple classification of the states according to their angular momentum $j$ and $j^{z}=\lambda$.
As discussed in section VI.1 for Positronium the generator of rotations for the quark fields is $\displaystyle\boldsymbol{\mathcal{J}}$ $\displaystyle=\int d{\boldsymbol{x}}\,\psi_{A}^{\dagger}({\boldsymbol{x}})\,\boldsymbol{J}\,\psi_{A}({\boldsymbol{x}})\hskip 82.51282pt\boldsymbol{J}={\boldsymbol{L}}+{\boldsymbol{S}}={\boldsymbol{x}}\times(-i\boldsymbol{\nabla})+{\textstyle\frac{1}{2}}\gamma_{5}{\boldsymbol{\alpha}}$ $\displaystyle\boldsymbol{\mathcal{J}}\left|{M}\right\rangle$ $\displaystyle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}_{A}({\boldsymbol{x}}_{1})\left[{{\boldsymbol{J}}},{\Phi({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}\right]\psi_{A}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (407) For example, the rotationally invariant Dirac structure ${\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}$ satisfies $\left[{{\boldsymbol{J}}},{{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}}\right]=0$ as shown in (A.581) and (A.582). When $\left[{{\boldsymbol{J}}},{\Gamma^{(i)}({\boldsymbol{x}})}\right]=0$ we get, since $\left[{{\boldsymbol{L}}},{F_{i}(r)}\right]=\left[{{\boldsymbol{S}}},{F_{i}(r)}\right]=\left[{{\boldsymbol{S}}},{Y_{j\lambda}(\hat{\boldsymbol{x}})}\right]=0$, $\displaystyle\left[{{\boldsymbol{J}}},{\Gamma^{(i)}F_{i}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})}\right]=\Gamma^{(i)}F_{i}(r)\left[{{\boldsymbol{L}}},{Y_{j\lambda}(\hat{\boldsymbol{x}})}\right]$ (408) Because $Y_{j\lambda}(\hat{\boldsymbol{x}})$ is an eigenfunction of ${\boldsymbol{L}}^{2}$ and $L^{z}$ this ensures that $\displaystyle\boldsymbol{\mathcal{J}}^{2}\left|{M}\right\rangle=j(j+1)\left|{M}\right\rangle\hskip 56.9055pt\mathcal{J}^{z}\left|{M}\right\rangle=\lambda\left|{M}\right\rangle$ (409) The $\Gamma^{(i)}({\boldsymbol{x}})$ need contain at most one power of the Dirac vector ${\boldsymbol{\alpha}}=\gamma^{0}\boldsymbol{\gamma}$ since higher powers may be reduced using $\displaystyle\alpha_{i}\alpha_{j}=\delta_{ij}+i\epsilon_{ijk}\alpha_{k}\gamma_{5}$ (410) Rotational invariance requires that ${\boldsymbol{\alpha}}$ be dotted into a vector. I shall use the three orthogonal vectors ${\boldsymbol{x}},\ {\boldsymbol{L}}={\boldsymbol{x}}\times(-i\boldsymbol{\nabla})$ and ${\boldsymbol{x}}\times{\boldsymbol{L}}$. Each of the four Dirac structures $1,\ {\boldsymbol{\alpha}}\cdot{\boldsymbol{x}},\ {\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}$ and ${\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}$ can be multiplied by the rotationally invariant Dirac matrices $\gamma^{0}$ and/or $\gamma_{5}$. This gives altogether $4\times 2\times 2=16$ possible $\Gamma^{(i)}({\boldsymbol{x}})$. 
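As an aside (not part of the original text), the reduction formula (410) used in this counting is easy to check numerically. A minimal sketch; the Dirac representation below is an assumption made for concreteness, the identity itself being representation independent:

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)
Z2 = np.zeros((2, 2))

# Dirac representation: alpha_i = ((0, sigma_i), (sigma_i, 0)),
# gamma5 = ((0, 1), (1, 0))
alpha = [np.block([[Z2, si], [si, Z2]]) for si in s]
gamma5 = np.block([[Z2, I2], [I2, Z2]])

# totally antisymmetric eps_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1

# alpha_i alpha_j = delta_ij + i eps_ijk alpha_k gamma5   -- relation (410)
for i in range(3):
    for j in range(3):
        rhs = (i == j) * np.eye(4) \
              + 1j * sum(eps[i, j, k] * alpha[k] @ gamma5 for k in range(3))
        assert np.allclose(alpha[i] @ alpha[j], rhs)
```

This confirms the reduction used to limit the basis to the 16 structures above.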
Other invariants may be expressed in terms of these, e.g., $\displaystyle i\,{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}$ $\displaystyle=({\boldsymbol{\alpha}}\cdot{\boldsymbol{x}})\,\frac{1}{r}i\,\partial_{r}+\frac{1}{r^{2}}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times\boldsymbol{L}$ $\displaystyle({\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla})({\boldsymbol{\alpha}}\cdot{\boldsymbol{x}})$ $\displaystyle=3+r\partial_{r}+\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}$ (411) The $\Gamma^{(i)}({\boldsymbol{x}})$ may be grouped according to the parity $\eta_{P}$ (177) and charge conjugation $\eta_{C}$ (181) quantum numbers that they imply for the wave function, $\displaystyle\gamma^{0}\Phi(-{\boldsymbol{x}})\gamma^{0}=\eta_{P}\Phi({\boldsymbol{x}})\hskip 56.9055pt{\alpha_{2}}\big{[}\Phi(-{\boldsymbol{x}})\big{]}^{T}{\alpha_{2}}=\eta_{C}\Phi({\boldsymbol{x}})$ (412) Since $Y_{j\lambda}(-\hat{\boldsymbol{x}})=(-1)^{j}Y_{j\lambda}(\hat{\boldsymbol{x}})$ states of spin $j$ can belong to one of four “trajectories”, here denoted by the parity and charge conjugation quantum numbers of their $j=0$ member: $\begin{array}[]{llcl}0^{-+}\ \mbox{trajectory}&[s=0,\ \ell=j]:&-\eta_{P}=\eta_{C}=(-1)^{j}&\gamma_{5},\ \gamma^{0}\gamma_{5},\ \gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}},\ \gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}\\\\[5.69054pt] 0^{--}\ \mbox{trajectory}&[s=1,\ \ell=j]:&\eta_{P}=\eta_{C}=-(-1)^{j}&\gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}},\ \gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}},\ {\boldsymbol{\alpha}}\cdot{\boldsymbol{L}},\ \gamma^{0}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}\\\\[5.69054pt] 0^{++}\ \mbox{trajectory}&[s=1,\ \ell=j\pm 1]:&\eta_{P}=\eta_{C}=+(-1)^{j}&1,\ {\boldsymbol{\alpha}}\cdot{\boldsymbol{x}},\ \gamma^{0}{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}},\ {\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}},\ \gamma^{0}{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}},\ \gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}\\\\[5.69054pt] 0^{+-}\ \mbox{trajectory}&[\mbox{exotic}]:&\eta_{P}=-\eta_{C}=(-1)^{j}&\gamma^{0},\ \gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}\vspace{-.4cm}}\end{array}$ (413) The non-relativistic spin $s$ and orbital angular momentum $\ell$ are indicated in brackets. Relativistic effects mix the $\ell=j\pm 1$ states on the $0^{++}$ trajectory, resulting in a pair of coupled radial equations. The $j=0$ state on the $0^{--}$ trajectory and the entire $0^{+-}$ trajectory are incompatible with the $s,\ell$ assignments and thus exotic in the quark model. They turn out to be missing also in the relativistic case. 
The bound state equation (401) has no solutions for states on the $0^{+-}$ trajectory ($\Gamma^{(i)}=\gamma^{0}$ or $\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}$) since $\displaystyle i\boldsymbol{\nabla}\cdot\left\\{{{\boldsymbol{\alpha}}},{\gamma^{0}}\right\\}=i\boldsymbol{\nabla}\cdot\left\\{{{\boldsymbol{\alpha}}},{\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}}\right\\}=m\left[{\gamma^{0}},{\gamma^{0}}\right]=m\left[{\gamma^{0}},{\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}}\right]=0$ (414) #### VIII.2.3 The $0^{-+}$ trajectory: $\eta_{P}=(-1)^{j+1},\hskip 8.5359pt\eta_{C}=(-1)^{j}$ According to the classification (413) we expand the wave function $\Phi_{-+}({\boldsymbol{x}})$ of the $0^{-+}$ trajectory states as $\displaystyle\Phi_{-+}({\boldsymbol{x}})=\Big{[}F_{1}(r)+i\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,F_{2}(r)+{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}\,F_{3}(r)+\gamma^{0}\,F_{4}(r)\Big{]}\gamma_{5}\,Y_{j\lambda}(\hat{\boldsymbol{x}})$ (415) Using this in the bound state equation (400), noting that $i\boldsymbol{\nabla}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}={\boldsymbol{L}}^{2}$ and collecting terms with the same Dirac structure we get the conditions: $\displaystyle\gamma_{5}:\hskip 14.22636pt$ $\displaystyle-(3+r\partial_{r})F_{2}+j(j+1)F_{3}+mF_{4}={\textstyle\frac{1}{2}}(M-V)F_{1}$ $\displaystyle\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}:\hskip 14.22636pt$ $\displaystyle\frac{1}{r}\partial_{r}F_{1}={\textstyle\frac{1}{2}}(M-V)F_{2}$ $\displaystyle\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}:\hskip 14.22636pt$ $\displaystyle\frac{1}{r^{2}}F_{1}={\textstyle\frac{1}{2}}(M-V)F_{3}$ $\displaystyle\gamma^{0}\gamma_{5}:\hskip 14.22636pt$ $\displaystyle mF_{1}={\textstyle\frac{1}{2}}(M-V)F_{4}$ (416) Expressing $F_{2},\ F_{3}$ and $F_{4}$ in terms of $F_{1}$ we find the radial equation (denoting $F_{1}^{\prime}\equiv\partial_{r}F_{1}$) $\displaystyle F_{1}^{\prime\prime}+\Big{(}\frac{2}{r}+\frac{V^{\prime}}{M-V}\Big{)}F_{1}^{\prime}+\Big{[}{\textstyle\frac{1}{4}}(M-V)^{2}-m^{2}-\frac{j(j+1)}{r^{2}}\Big{]}F_{1}=0$ (417) in agreement with the corresponding result in Eq. (2.24) of Geffen and Suura (1977). The wave function (415) may be expressed as $\displaystyle\Phi_{-+}({\boldsymbol{x}})$ $\displaystyle=\Big{[}\frac{2}{M-V}(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})+1\Big{]}\gamma_{5}\,F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})=F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})\,\gamma_{5}\Big{[}(i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}-m\gamma^{0})\frac{2}{M-V}+1\Big{]}$ $\displaystyle={\overset{\rightarrow}{\mathfrak{h}}}_{+}\gamma_{5}\,F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})=F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})\,\gamma_{5}{\overset{\leftarrow}{\mathfrak{h}}}_{+}$ (418) Exercise A.18: Verify that the expression (VIII.2.3) for $\Phi_{-+}({\boldsymbol{x}})$ satisfies the bound state equation (405) given the radial equation (417). Hint: The identities (VIII.2.1) are useful. Both the quark and antiquark contributions to the BSE have a spin-dependent (${\boldsymbol{S}}={\textstyle\frac{1}{2}}\gamma_{5}{\boldsymbol{\alpha}}$) interaction which cancels in their sum. 
The contribution from the quark term is, taking into account the radial equation, $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi_{-+}({\boldsymbol{x}})=\frac{8V^{\prime}}{r(M-V)^{3}}\,{\boldsymbol{S}}\cdot({\overset{\rightarrow}{\boldsymbol{L}}}\,\gamma_{5}-im\,{\boldsymbol{x}}\,\gamma^{0})\,F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})$ (419) #### Non-relativistic limit of the $0^{-+}$ trajectory wave functions The non-relativistic (NR) limit in the rest frame is defined by $\displaystyle\frac{V}{m}\to 0\hskip 56.9055pt\frac{\partial}{\partial r}\sim\frac{1}{r}\sim\sqrt{m\,V}$ (420) The binding energy $E_{b}\sim V$ is defined by $M=2m+E_{b}$. In the radial equation (417) we have $\displaystyle\frac{V^{\prime}}{M-V}=\frac{V}{r(M-V)}\ll\frac{1}{r}\hskip 56.9055pt{\textstyle\frac{1}{4}}(M-V)^{2}-m^{2}\simeq m(E_{b}-V)$ (421) so it becomes the radial Schrödinger equation in the NR limit, $\displaystyle F_{1,NR}^{\prime\prime}+\frac{2}{r}F_{1,NR}^{\prime}+\Big{[}m(E_{b}-V)-\frac{j(j+1)}{r^{2}}\Big{]}F_{1,NR}=0$ (422) In the wave function (VIII.2.3) we have at leading order $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{+}=\frac{2}{M-V}(i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}+m\gamma^{0})+1\simeq 1+\gamma^{0}$ (423) giving $\displaystyle\Phi_{-+}^{NR}=(1+\gamma^{0})\gamma_{5}\,F_{1,NR}(r){Y_{j\lambda}}(\Omega)$ (424) #### VIII.2.4 The $0^{--}$ trajectory: $\eta_{P}=(-1)^{j+1},\hskip 8.5359pt\eta_{C}=(-1)^{j+1}$ According to the classification (413) we expand the wave function $\Phi_{--}({\boldsymbol{x}})$ of the $0^{--}$ trajectory states as $\displaystyle\Phi_{--}({\boldsymbol{x}})=\Big{[}\gamma^{0}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}\,G_{1}(r)+i\,\gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,G_{2}(r)+\gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}\,G_{3}(r)+m{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}\,G_{4}(r)\Big{]}Y_{j\lambda}(\hat{\boldsymbol{x}})$ (425) Collecting terms with distinct Dirac structures in the bound state equation (401), $\displaystyle\gamma^{0}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}:\hskip 14.22636pt$ $\displaystyle G_{2}-(2+r\partial_{r})G_{3}+m^{2}G_{4}={\textstyle\frac{1}{2}}(M-V)G_{1}$ $\displaystyle\gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}:\hskip 14.22636pt$ $\displaystyle\frac{j(j+1)}{r^{2}}G_{1}={\textstyle\frac{1}{2}}(M-V)G_{2}$ $\displaystyle\gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}:\hskip 14.22636pt$ $\displaystyle\frac{1}{r^{2}}(1+r\partial_{r})G_{1}={\textstyle\frac{1}{2}}(M-V)G_{3}$ $\displaystyle m\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}:\hskip 14.22636pt$ $\displaystyle G_{1}={\textstyle\frac{1}{2}}(M-V)G_{4}$ (426) Expressing $G_{2},\ G_{3}$ and $G_{4}$ in terms of $G_{1}$ we find the radial equation for the $0^{--}$ trajectory, $\displaystyle G_{1}^{\prime\prime}+\Big{(}\frac{2}{r}+\frac{V^{\prime}}{M-V}\Big{)}G_{1}^{\prime}+\Big{[}{\textstyle\frac{1}{4}}(M-V)^{2}-m^{2}-\frac{j(j+1)}{r^{2}}+\frac{V^{\prime}}{r(M-V)}\Big{]}G_{1}=0$ (427) in agreement with the corresponding result in Eq. (2.38) of Geffen and Suura (1977). The $0^{--}$ radial equation differs from the $0^{-+}$ one (417) only by the term $\propto\,V^{\prime}/r(M-V)$. 
Using $\displaystyle i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}\,\boldsymbol{\gamma}\cdot{\boldsymbol{L}}=\gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,\frac{i{\boldsymbol{L}}^{2}}{r^{2}}+\gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}\frac{1}{r^{2}}(1+r\partial_{r})$ (428) allows the wave function to be expressed in terms of the projector $\mathfrak{h}_{+}$ of (403) as, $\displaystyle\Phi_{--}({\boldsymbol{x}})$ $\displaystyle={\overset{\rightarrow}{\mathfrak{h}}}_{+}\,\boldsymbol{\gamma}\cdot{\overset{\rightarrow}{\boldsymbol{L}}}\,G_{1}(r)\,Y_{j\lambda}(\hat{\boldsymbol{x}})=G_{1}(r)\,Y_{j\lambda}(\hat{\boldsymbol{x}})\,\boldsymbol{\gamma}\cdot{\overset{\leftarrow}{\boldsymbol{L}}}\,{\overset{\leftarrow}{\mathfrak{h}}}_{+}$ (429) where ${\overset{\leftarrow}{L}}^{i}=-i{\overset{\leftarrow}{\partial}}_{k}x^{j}\varepsilon_{ijk}$. The $j=0$ state on the $0^{--}$ trajectory is missing since ${\boldsymbol{L}}\,Y_{00}(\hat{\boldsymbol{x}})=0$. The quark contribution to the bound state equation (405) is, with $\boldsymbol{S}={\textstyle\frac{1}{2}}\gamma_{5}{\boldsymbol{\alpha}}$, $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi_{--}({\boldsymbol{x}})=\frac{4V^{\prime}}{r(M-V)^{3}}\big{[}{\overset{\rightarrow}{\boldsymbol{L}}}^{2}\gamma^{0}\gamma_{5}-2m\,\boldsymbol{S}\cdot{\boldsymbol{x}}\times{\overset{\rightarrow}{\boldsymbol{L}}}\big{]}\,G_{1}(r)\,Y_{j\lambda}(\hat{\boldsymbol{x}})$ (430) #### Non-relativistic limit of the $0^{--}$ trajectory wave functions The NR limit of the radial equation (427) reduces as in the $0^{-+}$ case to $\displaystyle G_{1,NR}^{\prime\prime}+\frac{2}{r}G_{1,NR}^{\prime}+\Big{[}m(E_{b}-V)-\frac{j(j+1)}{r^{2}}\Big{]}G_{1,NR}=0$ (431) The equality of the $0^{-+}$ and $0^{--}$ eigenvalues reflects the spin $s$ independence of the NR limit, since $\ell=j$ for both. 
The wave function is $\displaystyle\Phi_{--}^{NR}=(1+\gamma^{0}){\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}\,G_{1,NR}(r){Y_{j\lambda}}(\Omega)$ (432) #### VIII.2.5 The $0^{++}$ trajectory: $\eta_{P}=(-1)^{j},\hskip 8.5359pt\eta_{C}=(-1)^{j}$ According to the classification (413) we expand the wave function $\Phi_{++}({\boldsymbol{x}})$ of the $0^{++}$ trajectory states in terms of six Dirac structures (the radial functions $F_{i}$ and $G_{i}$ are unrelated to those in sections VIII.2.3 and VIII.2.4), $\displaystyle\Phi_{++}({\boldsymbol{x}})=\left\\{\Big{[}F_{1}(r)+i\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,F_{2}(r)+{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}\,F_{3}(r)\Big{]}+\gamma^{0}\Big{[}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}\,G_{1}(r)+i\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,G_{2}(r)+{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}\,G_{3}(r)\Big{]}\right\\}Y_{j\lambda}(\hat{\boldsymbol{x}})$ (433) Collecting terms with distinct Dirac structures in the bound state equation (401), $\displaystyle 1:\hskip 14.22636pt$ $\displaystyle-(3+r\partial_{r})F_{2}+j(j+1)F_{3}={\textstyle\frac{1}{2}}(M-V)F_{1}$ $\displaystyle{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}:\hskip 14.22636pt$ $\displaystyle\frac{1}{r}\partial_{r}F_{1}+mG_{2}={\textstyle\frac{1}{2}}(M-V)F_{2}$ $\displaystyle{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}:\hskip 14.22636pt$ $\displaystyle\frac{1}{r^{2}}F_{1}+mG_{3}={\textstyle\frac{1}{2}}(M-V)F_{3}$ $\displaystyle\gamma^{0}\gamma_{5}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}:\hskip 14.22636pt$ $\displaystyle G_{2}-(2+r\partial_{r})G_{3}={\textstyle\frac{1}{2}}(M-V)G_{1}$ $\displaystyle\gamma^{0}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}:\hskip 14.22636pt$ $\displaystyle\frac{1}{r^{2}}\,j(j+1)G_{1}+mF_{2}={\textstyle\frac{1}{2}}(M-V)G_{2}$ $\displaystyle\gamma^{0}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}:\hskip 14.22636pt$ $\displaystyle\frac{1}{r^{2}}(1+r\partial_{r})G_{1}+mF_{3}={\textstyle\frac{1}{2}}(M-V)G_{3}$ (434) It turns out to be convenient to express the above radial functions in terms of two new ones, $H_{1}(r)$ and $H_{2}(r)$: $\displaystyle F_{1}$ $\displaystyle=-\frac{2}{(M-V)^{2}}\big{[}{\textstyle\frac{1}{4}}(M-V)^{2}-m^{2}\big{]}H_{1}-\frac{4m}{M-V}\partial_{r}(rH_{2})$ $\displaystyle F_{2}$ $\displaystyle=-\frac{1}{r(M-V)}\partial_{r}H_{1}+2mH_{2}$ $\displaystyle F_{3}$ $\displaystyle=-\frac{1}{r^{2}(M-V)}H_{1}$ $\displaystyle G_{1}$ $\displaystyle=2H_{2}$ $\displaystyle G_{2}$ $\displaystyle=\frac{2}{r}\partial_{r}\Big{[}-\frac{m}{(M-V)^{2}}H_{1}+\frac{2}{M-V}\partial_{r}(rH_{2})\Big{]}+(M-V)H_{2}$ $\displaystyle G_{3}$ $\displaystyle=\frac{2}{r^{2}}\Big{[}-\frac{m}{(M-V)^{2}}H_{1}+\frac{2}{M-V}\partial_{r}(rH_{2})\Big{]}$ (435) The bound state conditions (VIII.2.5) are satisfied provided $H_{1,2}$ satisfy the coupled radial equations, $\displaystyle H_{1}^{\prime\prime}+\Big{(}\frac{2}{r}+\frac{V^{\prime}}{M-V}\Big{)}H_{1}^{\prime}+\Big{[}{\textstyle\frac{1}{4}}(M-V)^{2}-m^{2}-\frac{j(j+1)}{r^{2}}\Big{]}H_{1}$ $\displaystyle=4m(M-V)H_{2}$ (436) $\displaystyle H_{2}^{\prime\prime}+\Big{(}\frac{2}{r}+\frac{V^{\prime}}{M-V}\Big{)}H_{2}^{\prime}+\Big{[}{\textstyle\frac{1}{4}}(M-V)^{2}-m^{2}-\frac{j(j+1)}{r^{2}}+\frac{V^{\prime}}{r(M-V)}\Big{]}H_{2}$ $\displaystyle=\frac{mV^{\prime}}{r(M-V)^{2}}H_{1}$ (437) These agree with Eqs.
(2.48) and (2.49) for $F_{2}^{GS}$ and $G_{1}^{GS}$ of Geffen and Suura (1977), when $H_{1}=(M-V)F_{2}^{GS}$ and $H_{2}=-i\,G_{1}^{GS}/(M-V)$. The wave function $\Phi_{++}({\boldsymbol{x}})$ (433) can be expressed in terms of the $H_{1,2}(r)$ radial functions and the $\mathfrak{h}_{+}$ operators (403) as $\displaystyle\Phi_{++}({\boldsymbol{x}})$ $\displaystyle={\overset{\rightarrow}{\mathfrak{h}}}_{+}\big{[}-{\textstyle\frac{1}{2}}H_{1}+2\,\boldsymbol{\gamma}\cdot{\overset{\rightarrow}{\boldsymbol{L}}}\,\gamma_{5}H_{2}+2im\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,H_{2}\big{]}Y_{j\lambda}(\hat{\boldsymbol{x}})+\frac{m}{M-V}\big{[}{\overset{\rightarrow}{\mathfrak{h}}}_{+}\gamma^{0}H_{1}+8H_{2}\big{]}Y_{j\lambda}(\hat{\boldsymbol{x}})$ (438) $\displaystyle=Y_{j\lambda}(\hat{\boldsymbol{x}})\big{[}-{\textstyle\frac{1}{2}}H_{1}-2H_{2}\gamma_{5}\,\boldsymbol{\gamma}\cdot{\overset{\leftarrow}{\boldsymbol{L}}}\,+2imH_{2}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,\big{]}{\overset{\leftarrow}{\mathfrak{h}}}_{+}-Y_{j\lambda}(\hat{\boldsymbol{x}})\big{[}H_{1}\gamma^{0}{\overset{\leftarrow}{\mathfrak{h}}}_{+}-8H_{2}\big{]}\frac{m}{M-V}$ The quark contribution to the bound state equation (405) is, with $\boldsymbol{S}={\textstyle\frac{1}{2}}\gamma_{5}{\boldsymbol{\alpha}}$, $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi_{++}({\boldsymbol{x}})$ $\displaystyle=-\frac{4V^{\prime}}{r(M-V)^{3}}\Big{[}\boldsymbol{S}\cdot{\overset{\rightarrow}{\boldsymbol{L}}}+\frac{m}{M-V}\gamma^{0}r\partial_{r}\Big{]}H_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})+\frac{8V^{\prime}}{r(M-V)^{3}}\big{[}{\overset{\rightarrow}{\boldsymbol{L}}}^{2}+m^{2}r^{2}\big{]}\gamma^{0}\,H_{2}(r)Y_{j\lambda}$ (439) When $m=0$ chiral symmetry implies that $\Phi({\boldsymbol{x}})$ and $\gamma_{5}\Phi({\boldsymbol{x}})$ define bound states with the same mass $M$, as is apparent from the bound state equation (400). The radial equations (436) and (437) in fact decouple and coincide with the radial equations of the $0^{-+}$ (417) and $0^{--}$ (427) trajectories, respectively. The $\Phi_{++}$ wave functions correspondingly reduce to $\gamma_{5}\Phi_{-+}$ and $\gamma_{5}\Phi_{--}$. I discuss the case of spontaneously broken chiral symmetry in section VIII.6. #### Non-relativistic limit of the $0^{++}$ trajectory wave functions States with the same $j$ on the $0^{-+}$ and $0^{--}$ trajectories are degenerate in the NR limit since both have $\ell=j$. States with the same $j$ on the $0^{++}$ trajectory have $\ell=j\pm 1$ and thus unequal binding energies. The radial $0^{++}$ functions $H_{1}$ (436) and $H_{2}$ (437) remain coupled in the NR limit, $\displaystyle H_{1,NR}^{\prime\prime}+\frac{2}{r}H_{1,NR}^{\prime}+\Big{[}m(E_{b}-V)-\frac{j(j+1)}{r^{2}}\Big{]}H_{1,NR}$ $\displaystyle=8m^{2}H_{2,NR}$ (440) $\displaystyle H_{2,NR}^{\prime\prime}+\frac{2}{r}H_{2,NR}^{\prime}+\Big{[}m(E_{b}-V)-\frac{j(j+1)}{r^{2}}\Big{]}H_{2,NR}$ $\displaystyle=\frac{V}{4mr^{2}}H_{1,NR}$ (441) The lhs. of both equations scale as $1/r^{2}\sim mV$, implying the ratio $\displaystyle\frac{H_{2,NR}}{H_{1,NR}}\sim\frac{V}{m}$ (442) In the expression (438) for $\Phi_{++}$ the leading contribution $\propto H_{1}$ vanishes for $\mathfrak{h}_{+}\simeq 1+\gamma^{0}$ (423).
This requires retaining the ${\cal O}\left(\sqrt{V/m}\right)$ term in $\mathfrak{h}_{+}$, $\displaystyle\mathfrak{h}_{+}\simeq 1+\gamma^{0}+\frac{i}{m}{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}$ (443) Then the contribution $\sim\sqrt{V/m}\,H_{1}\sim\sqrt{m/V}\,H_{2}$ matches the leading $H_{2}$ contribution $m{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,H_{2}\sim\sqrt{m/V}\,H_{2}$. The $2\boldsymbol{\gamma}\cdot{\boldsymbol{L}}\,\gamma_{5}H_{2}$ term is subdominant, as are the ${\cal O}\left(V/m\right)$ corrections in $\mathfrak{h}_{+}$. This gives $\displaystyle\Phi_{++}^{NR}=\frac{i}{2m}(1+\gamma^{0})\big{[}-{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}H_{1,NR}(r)+4m^{2}{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}H_{2,NR}(r)\big{]}{Y_{j\lambda}}(\Omega)$ (444) Orbital angular momentum is conserved in the NR limit, implying $\displaystyle\big{[}{{\boldsymbol{L}}^{2}},{\Phi_{++}^{NR}}\big{]}=\ell(\ell+1)\Phi_{++}^{NR}\hskip 56.9055pt\ell=j\pm 1$ (445) Using $\displaystyle\big{[}{{\overset{\rightarrow}{\boldsymbol{L}}}^{2}},{{\boldsymbol{x}}}\big{]}$ $\displaystyle=2\big{(}-\boldsymbol{\nabla}\,r^{2}+{\boldsymbol{x}}\,r\partial_{r}+3{\boldsymbol{x}}\big{)}$ (446) $\displaystyle\big{[}{{\overset{\rightarrow}{\boldsymbol{L}}}^{2}},{\boldsymbol{\nabla}}\big{]}$ $\displaystyle=2\big{(}{\boldsymbol{x}}\,\boldsymbol{\nabla}^{2}-\boldsymbol{\nabla}\,r\partial_{r}\big{)}$ (447) gives $\displaystyle\big{[}{{\boldsymbol{L}}^{2}},{\Phi_{++}^{NR}}\big{]}$ $\displaystyle=i(1+\gamma^{0})\,{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}\,\frac{1}{2m}\big{[}2rH_{1,NR}^{\prime}-j(j+1)H_{1,NR}-8m^{2}r^{2}H_{2,NR}\Big{]}{Y_{j\lambda}}$ $\displaystyle+i(1+\gamma^{0})\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,2m\Big{[}\frac{1}{2m}(E_{b}-V)H_{1,NR}+2rH_{2,NR}^{\prime}+2H_{2,NR}+j(j+1)H_{2,NR}\Big{]}{Y_{j\lambda}}$ (448) Comparing with the Dirac structures in (444) and (445) gives two conditions, $\displaystyle 8m^{2}H_{2,NR}$ $\displaystyle=\frac{2}{r}H_{1,NR}^{\prime}+\frac{1}{r^{2}}\big{[}\ell(\ell+1)-j(j+1)\big{]}H_{1,NR}=\frac{2}{r}H_{1,NR}^{\prime}+\frac{1}{r^{2}}\big{[}\pm(2j+1)+1\big{]}H_{1,NR}$ (449) $\displaystyle m(E_{b}-V)H_{1,NR}$ $\displaystyle=-4m^{2}rH_{2,NR}^{\prime}+\big{[}\pm(2j+1)-1\big{]}2m^{2}H_{2,NR}\hskip 85.35826pt\mbox{for}\ \ \ell=j\pm 1$ (450) Using the expression (449) for $8m^{2}H_{2,NR}$ in the radial equation (440) gives the expected NR radial equation, $\displaystyle H_{1,NR}^{\prime\prime}+\Big{[}m(E_{b}-V)-\frac{\ell(\ell+1)}{r^{2}}\Big{]}H_{1,NR}$ $\displaystyle=0$ (451) To check the self-consistency of (449) with (450) we may use (449) to express $H_{2,NR}$ and $H_{2,NR}^{\prime}$ in terms of $H_{1,NR},\,H_{1,NR}^{\prime}$ and $H_{1,NR}^{\prime\prime}$ and use this in (450). The result agrees with (451).
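This self-consistency check is easily automated; a sympy sketch covering both signs $\ell=j\pm 1$ (my notation):

```python
import sympy as sp

r, m, j, Eb = sp.symbols("r m j E_b", positive=True)
V = sp.Function("V")(r)
H1 = sp.Function("H1")(r)

for sign, ell in [(+1, j + 1), (-1, j - 1)]:
    c = sign * (2 * j + 1) + 1
    # (449): 8 m^2 H2 = (2/r) H1' + c H1 / r^2
    H2 = (2 * sp.diff(H1, r) / r + c * H1 / r**2) / (8 * m**2)
    # (450): m(E_b - V) H1 = -4 m^2 r H2' + [c - 2] 2 m^2 H2
    res = m * (Eb - V) * H1 + 4 * m**2 * r * sp.diff(H2, r) - (c - 2) * 2 * m**2 * H2
    # impose the NR radial equation (451): H1'' = -[m(E_b - V) - l(l+1)/r^2] H1
    H1pp = -(m * (Eb - V) - ell * (ell + 1) / r**2) * H1
    print(sp.simplify(res.subs(sp.diff(H1, r, 2), H1pp)))   # -> 0 for both signs
```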
Using the expression (449) for $H_{2,NR}$ in the wave function (444) we have $\displaystyle\Phi_{++}^{NR}=-\frac{i}{2m}(1+\gamma^{0})\Big{\\{}{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}H_{1,NR}(r)-{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\Big{[}\frac{1}{r}H_{1,NR}^{\prime}+\frac{1}{2r^{2}}\big{[}\pm(2j+1)+1\big{]}H_{1,NR}\Big{]}\Big{\\}}{Y_{j\lambda}}$ (452) Separating $\boldsymbol{\nabla}$ into its radial and angular derivatives, $\displaystyle{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}$ $\displaystyle=({\boldsymbol{\alpha}}\cdot{\boldsymbol{x}})\,\frac{1}{r}\partial_{r}-i\frac{1}{r^{2}}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times\boldsymbol{L}$ (453) the radial derivative of $H_{1}$ cancels, so that the $\ell=j\pm 1$ NR wave functions are, $\displaystyle\Phi_{++}^{NR}=\frac{i}{2mr^{2}}(1+\gamma^{0})\Big{\\{}{\textstyle\frac{1}{2}}{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\big{[}\pm(2j+1)+1\big{]}+i{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\times{\boldsymbol{L}}\Big{\\}}H_{1,NR}{Y_{j\lambda}}$ (454) ### VIII.3 * $q\bar{q}$ bound states in motion A perturbative expansion for bound states should at each order respect Poincaré invariance. The spatial extent and mutual interactions of the constituents make this non-trivial. The bound state energy needs to have the correct dependence on the momentum, $E({\boldsymbol{P}})=\sqrt{M^{2}+{\boldsymbol{P}}^{2}}$, and scattering amplitudes (form factors) should transform covariantly under rotations and boosts. For Positronium this requires a frame dependent combination of Coulomb and transverse photon exchange, as discussed in section VI.3. However, the ${\cal O}\left(\alpha_{s}^{0}\right)$ instantaneous potential arising from the homogeneous solution of Gauss’ law in QCD (section VIII.1) must ensure Poincaré invariance on its own, without assistance from ${\cal O}\left({\alpha_{s}}\right)$ gluon exchange. This is analogous to $e^{+}e^{-}$ bound states in QED2, due to the absence of transverse photons in $D=1+1$ dimensions. In that case it is essential that the potential is linear (section VII.1.4). The correct frame dependence of the energy $E({\boldsymbol{P}})$ turns out to be similarly ensured for $q\bar{q}$ states, due to the linearity of the potential (366). I have not considered $qqq$ states, which have a different potential (372). #### VIII.3.1 The bound state equation In a general frame the ${\boldsymbol{P}}=0$ state (395) becomes, at $t=0$, $\displaystyle\left|{M,P}\right\rangle=\frac{1}{\sqrt{N_{c}}}\sum_{A,B}\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}^{A}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\delta^{AB}\Phi^{(P)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\psi^{B}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (455) which is an eigenstate of the momentum operator $\bm{\mathcal{P}}$ (169) with eigenvalue ${\boldsymbol{P}}$. In the following I take ${\boldsymbol{P}}=(0,0,P)$ along the $z$-axis. 
After the partial integration the derivatives in $\mathcal{H}_{0}^{(q)}$ (398) act also on $\exp[i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2]$, giving rise to a new term in the bound state equation (401), $\displaystyle i\boldsymbol{\nabla}\cdot\big{\\{}{{\boldsymbol{\alpha}}},{\Phi^{(P)}({\boldsymbol{x}})}\big{\\}}-{\textstyle\frac{1}{2}}{\boldsymbol{P}}\cdot\big{[}{{\boldsymbol{\alpha}}},{\Phi^{(P)}({\boldsymbol{x}})}\big{]}+m\big{[}{\gamma^{0}},{\Phi^{(P)}({\boldsymbol{x}})}\big{]}$ $\displaystyle=\big{[}E-V({\boldsymbol{x}})\big{]}\Phi^{(P)}({\boldsymbol{x}})$ (456) The potential $V({\boldsymbol{x}})=V^{\prime}|{\boldsymbol{x}}|$ is independent of the bound state momentum ${\boldsymbol{P}}$, being determined by the instantaneous positions ${\boldsymbol{x}}_{1,2}$ of the quarks. An alternative form of this BSE is $\displaystyle\Big{[}i{\overset{\rightarrow}{\boldsymbol{\nabla}}}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}\big{(}E-V+P{\alpha_{3}}\big{)}+m\gamma^{0}\Big{]}\Phi^{(P)}({\boldsymbol{x}})+\Phi^{(P)}({\boldsymbol{x}})\Big{[}i{\overset{\leftarrow}{\boldsymbol{\nabla}}}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}\big{(}E-V-P{\alpha_{3}}\big{)}-m\gamma^{0}\Big{]}=0$ (457) It is possible to express the BSE equivalently as two coupled equations, $\displaystyle\Big{[}\frac{2}{E-V}\big{(}i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}+m\gamma^{0}-{\textstyle\frac{1}{2}}{\boldsymbol{\alpha}}\cdot{\boldsymbol{P}}\big{)}-1\Big{]}\Phi^{(P)}$ $\displaystyle=-\frac{2i}{(E-V)^{2}}{\boldsymbol{P}}\cdot\boldsymbol{\nabla}\Phi^{(P)}+\frac{V^{\prime}}{r(E-V)^{2}}\left[{i{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}},{\Phi^{(P)}}\right]$ $\displaystyle\Phi^{(P)}\Big{[}\big{(}i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}-m\gamma^{0}+{\textstyle\frac{1}{2}}{\boldsymbol{\alpha}}\cdot{\boldsymbol{P}}\big{)}\frac{2}{E-V}-1\Big{]}$ $\displaystyle=\frac{2i}{(E-V)^{2}}{\boldsymbol{P}}\cdot\boldsymbol{\nabla}\Phi^{(P)}-\frac{V^{\prime}}{r(E-V)^{2}}\left[{i{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}},{\Phi^{(P)}}\right]$ (458) Exercise A.19: Derive the coupled equations (VIII.3.1) from the bound state equation (456). Note: Can you find a simpler derivation than the one presented in A.19? Since ${\boldsymbol{P}}$ breaks rotational symmetry in (456) (except for rotations around the $z$-axis) the radial and angular variables do not separate as in (406). This makes a solution of the BSE more challenging. In QED2 $\Phi^{(P)}(x)$ can be expressed (307) in terms of the rest frame wave function evaluated at a “boost invariant” variable $\tau(x)$ (303). A similar relation works here as well, but only at $\boldsymbol{x}_{\perp}=(x,y)=0$. $\Phi^{(P)}(0,0,z)$ then serves as a boundary condition on the BSE (456). Before turning to the expression for $\Phi^{(P)}(0,0,z)$ I consider the case of a vanishing potential. The exact solution can be found for $V=0$, at any $P$ and ${\boldsymbol{x}}$. This provides a boundary condition for the $V\neq 0$ BSE in the limit $r\to 0$, in which $E-V^{\prime}r\to E$ on the rhs. of the BSE (456). Solving the partial differential equation with boundary conditions at $\boldsymbol{x}_{\perp}=0$ and $r\to 0$ should determine the wave function for all ${\boldsymbol{x}}$ (but this remains to be demonstrated).
#### VIII.3.2 Solution of the $P\neq 0$ bound state equation for $V({\boldsymbol{x}})=0$ The free solution of (456) is, for ${\boldsymbol{P}}=(0,0,P)$, $\displaystyle\Phi^{(P)}_{V=0}({\boldsymbol{x}})$ $\displaystyle=\exp(-{\textstyle\frac{1}{2}}\xi{\alpha_{3}})\,\Phi_{V=0}^{(0)}({\boldsymbol{x}}_{R})\exp({\textstyle\frac{1}{2}}\xi{\alpha_{3}})$ (459) $\displaystyle{\boldsymbol{x}}_{R}=(x,y,z\cosh\xi)$ $\displaystyle\hskip 56.9055ptE=M\cosh\xi\hskip 56.9055ptP=M\sinh\xi$ where $\Phi_{V=0}^{(0)}({\boldsymbol{x}})$ is the solution of the BSE with $V=0$ in the rest frame. Its relation to $\Phi^{(P)}_{V=0}({\boldsymbol{x}})$ corresponds to standard Lorentz contraction, with the $j=1/2$ boost representations $\exp(\pm{\textstyle\frac{1}{2}}\xi{\alpha_{3}})$ familiar from the Dirac equation. I denote by $B({\boldsymbol{x}})$ the lhs. of the BSE (457) with $V=0$ and $\Phi^{(P)}({\boldsymbol{x}})=\Phi^{(P)}_{V=0}({\boldsymbol{x}})$ given by (459). Thus $B=0$ is required for (459) to be a solution. Multiplying by $e^{\xi{\alpha_{3}}/2}$ from the left and $e^{-\xi{\alpha_{3}}/2}$ from the right, $\displaystyle e^{\xi{\alpha_{3}}/2}B({\boldsymbol{x}})e^{-\xi{\alpha_{3}}/2}$ $\displaystyle=e^{\xi{\alpha_{3}}/2}\big{[}i{\overset{\rightarrow}{\boldsymbol{\nabla}}}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}\big{(}E+P{\alpha_{3}}\big{)}+m\gamma^{0}\big{]}e^{-\xi{\alpha_{3}}/2}\,\Phi_{V=0}^{(0)}({\boldsymbol{x}}_{R})$ $\displaystyle+\Phi_{V=0}^{(0)}({\boldsymbol{x}}_{R})e^{\xi{\alpha_{3}}/2}\big{[}i{\overset{\leftarrow}{\boldsymbol{\nabla}}}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}\big{(}E-P{\alpha_{3}}\big{)}-m\gamma^{0}\big{]}e^{-\xi{\alpha_{3}}/2}$ (460) Since $z_{R}=z\cosh\xi$ we have $\partial_{z}=\cosh\xi\,\partial_{z_{R}}$, and $\displaystyle i{\alpha_{3}}{\buildrel\rightarrow\over{\partial}}_{z}$ $\displaystyle=e^{\xi{\alpha_{3}}}{\alpha_{3}}i{\buildrel\rightarrow\over{\partial}}_{z_{R}}-\sinh\xi\,i{\buildrel\rightarrow\over{\partial}}_{z_{R}}$ $\displaystyle i{\alpha_{3}}{\buildrel\leftarrow\over{\partial}}_{z}$ $\displaystyle=i{\buildrel\leftarrow\over{\partial}}_{z_{R}}\,{\alpha_{3}}e^{-\xi{\alpha_{3}}}+i{\buildrel\leftarrow\over{\partial}}_{z_{R}}\sinh\xi$ (461) The terms $\propto\sinh\xi$ give Dirac scalar contributions to (VIII.3.2), and cancel each other. Using $E\pm P{\alpha_{3}}=M\exp(\pm\xi{\alpha_{3}})$, $i\boldsymbol{\nabla}_{\perp}\cdot{\boldsymbol{\alpha}_{\perp}}\,\exp(-\xi{\alpha_{3}}/2)=\exp(\xi{\alpha_{3}}/2)i\boldsymbol{\nabla}_{\perp}\cdot{\boldsymbol{\alpha}_{\perp}}$ and similarly for the $m\gamma^{0}$ terms we get, $\displaystyle e^{\xi{\alpha_{3}}/2}B({\boldsymbol{x}})e^{-\xi{\alpha_{3}}/2}=e^{\xi{\alpha_{3}}}\big{[}i{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{R}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}M+m\gamma^{0}\big{]}\Phi_{V=0}^{(0)}({\boldsymbol{x}}_{R})+\Phi_{V=0}^{(0)}({\boldsymbol{x}}_{R})\big{[}i{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{R}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}M-m\gamma^{0}\big{]}e^{-\xi{\alpha_{3}}}$ (462) Expressing $\exp(\pm\xi{\alpha_{3}})=\cosh\xi\pm{\alpha_{3}}\sinh\xi$ the coefficient of $\cosh\xi$ is the rest frame BSE at ${\boldsymbol{x}}={\boldsymbol{x}}_{R}$, which $\Phi_{V=0}^{(0)}$ satisfies by definition.
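The matrix identities used in this step, $E\pm P{\alpha_{3}}=M\exp(\pm\xi{\alpha_{3}})$ and the flipping of the boost factors through ${\boldsymbol{\alpha}}_{\perp}$ and $\gamma^{0}$, follow from ${\alpha_{3}}^{2}=1$ and the anticommutation relations. A quick numerical sketch (numpy/scipy, standard Dirac representation):

```python
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
O, I2 = np.zeros((2, 2), complex), np.eye(2, dtype=complex)
a1 = np.block([[O, s1], [s1, O]])            # alpha_1 (a transverse alpha)
a3 = np.block([[O, s3], [s3, O]])            # alpha_3
g0 = np.block([[I2, O], [O, -I2]])           # gamma^0

M, xi = 1.3, 0.7                             # arbitrary test values
E, P = M * np.cosh(xi), M * np.sinh(xi)

# E +- P alpha_3 = M exp(+- xi alpha_3), since alpha_3^2 = 1
assert np.allclose(E * np.eye(4) + P * a3, M * expm(xi * a3))
assert np.allclose(E * np.eye(4) - P * a3, M * expm(-xi * a3))

# alpha_perp and gamma^0 anticommute with alpha_3, hence
# X exp(-xi alpha_3/2) = exp(+xi alpha_3/2) X for X = alpha_1, gamma^0
for X in (a1, g0):
    assert np.allclose(X @ expm(-xi * a3 / 2), expm(xi * a3 / 2) @ X)
print("boost identities verified")
```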
The BSE allows the coefficients of ${\alpha_{3}}\sinh\xi$ to be related, leaving the anticommutator with ${\alpha_{3}}$, $\displaystyle e^{\xi{\alpha_{3}}/2}B({\boldsymbol{x}})e^{-\xi{\alpha_{3}}/2}=\sinh\xi\big{\\{}{{\alpha_{3}}},{\big{(}i{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{R}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}M+m\gamma^{0}\big{)}\Phi_{V=0}^{(0)}({\boldsymbol{x}}_{R})}\big{\\}}={\textstyle\frac{1}{2}}M\sinh\xi\big{\\{}{{\alpha_{3}}},{{\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi_{V=0}^{(0)}}\big{\\}}$ (463) where ${\overset{\rightarrow}{\mathfrak{h}}}_{-}$ is defined in (403) and evaluated at $V=0$. The explicit expressions for ${\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi({\boldsymbol{x}})$ in (419), (430) and (439) are all $\propto V^{\prime}$ and thus vanish for $V=0$. Hence $B({\boldsymbol{x}})=0$ and $\Phi^{(P)}_{V=0}({\boldsymbol{x}})$ of (459) solves the BSE for all $P$. #### VIII.3.3 Boost of the state $\left|{M,P}\right\rangle$ for $V({\boldsymbol{x}})=0$ Instead of solving the BSE at a finite momentum ${\boldsymbol{P}}$ we may boost the rest frame state. This is feasible for $V=0$ using the boost generator of free quarks. Suppressing the irrelevant color indices, $\displaystyle\boldsymbol{\mathcal{K}_{0}}(t)=t\boldsymbol{\mathcal{P}}+\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\big{[}-{\boldsymbol{x}}H_{0}+{\textstyle\frac{1}{2}}i{\boldsymbol{\alpha}}\big{]}\psi({\boldsymbol{x}})$ (464) The expressions for the generators of translations $\boldsymbol{\mathcal{P}}$ (169) in space and $\mathcal{H}_{0}$ (397) in time (the free Hamiltonian) are, $\displaystyle\boldsymbol{\mathcal{P}}$ $\displaystyle=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})(-i\boldsymbol{\nabla})\psi({\boldsymbol{x}})$ $\displaystyle\mathcal{H}_{0}$ $\displaystyle=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})H_{0}\psi({\boldsymbol{x}})\equiv\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})(-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}+m\gamma^{0})\psi({\boldsymbol{x}})$ (465) These operators satisfy the Lie algebra of the Poincaré group (I do not consider rotations here, and set $t=0$).
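The only non-trivial matrix identity needed to close this algebra is ${\textstyle\frac{1}{2}}\left\\{{\alpha_{i}},{H_{0}}\right\\}=P^{i}$, used in (467) below. In momentum space, where $H_{0}({\boldsymbol{k}})={\boldsymbol{\alpha}}\cdot{\boldsymbol{k}}+m\gamma^{0}$, it reads ${\textstyle\frac{1}{2}}\left\\{{\alpha_{i}},{H_{0}({\boldsymbol{k}})}\right\\}=k_{i}$ and is quickly checked numerically (numpy sketch):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
O, I2 = np.zeros((2, 2), complex), np.eye(2, dtype=complex)
alpha = [np.block([[O, s], [s, O]]) for s in sig]
g0 = np.block([[I2, O], [O, -I2]])

k, m = np.array([0.3, -1.1, 0.5]), 0.4       # arbitrary momentum and mass
H0 = sum(alpha[j] * k[j] for j in range(3)) + m * g0

# {alpha_i, alpha_j} = 2 delta_ij and {alpha_i, gamma^0} = 0 give
# (1/2){alpha_i, H0(k)} = k_i
for i in range(3):
    assert np.allclose((alpha[i] @ H0 + H0 @ alpha[i]) / 2, k[i] * np.eye(4))
print("1/2 {alpha_i, H0} = k_i verified")
```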
The commutators of local operators $\mathcal{O}=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})O({\boldsymbol{x}})\psi({\boldsymbol{x}})$ satisfy $\displaystyle\left[{\mathcal{O}_{i}},{\mathcal{O}_{j}}\right]=\int d{\boldsymbol{x}}\,\psi^{\dagger}({\boldsymbol{x}})\left[{O_{i}},{O_{j}}\right]\psi({\boldsymbol{x}})$ (466) This allows the Lie algebra to be verified in terms of the structures $O_{i}$ (here $P^{i}=-i\partial_{i}$): $\displaystyle\left[{P^{i}},{P^{j}}\right]$ $\displaystyle=0\hskip 56.9055pt\mbox{since}\ \partial_{i}\partial_{j}=\partial_{j}\partial_{i}$ $\displaystyle\left[{P^{i}},{K_{0}^{j}}\right]$ $\displaystyle=\left[{-i\partial_{i}},{-x^{j}(-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}+m\gamma^{0})+{\textstyle\frac{1}{2}}i\alpha_{j}}\right]=i\delta^{ij}H_{0}$ $\displaystyle\left[{H_{0}},{K_{0}^{i}}\right]$ $\displaystyle=\left[{-i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}+m\gamma^{0}},{-x^{i}H_{0}+{\textstyle\frac{1}{2}}i\alpha_{i}}\right]=i\alpha_{i}H_{0}+{\textstyle\frac{1}{2}}i\left[{H_{0}},{\alpha_{i}}\right]={\textstyle\frac{1}{2}}i\left\\{{\alpha_{i}},{H_{0}}\right\\}=iP^{i}$ (467) An infinitesimal boost in the $z$-direction of the (non-interacting) state $\left|{M,P}\right\rangle$ with ${\boldsymbol{P}}=(0,0,P)$ is generated by the operator $1-id\xi\mathcal{K}_{0}^{z}$, as verified by the eigenvalues, $\displaystyle\mathcal{P}^{z}(1-id\xi\mathcal{K}_{0}^{z})\left|{M,P}\right\rangle$ $\displaystyle=(1-id\xi\mathcal{K}_{0}^{z})\mathcal{P}^{z}\left|{M,P}\right\rangle- id\xi\left[{\mathcal{P}^{z}},{\mathcal{K}_{0}^{z}}\right]\left|{M,P}\right\rangle=(P+d\xi E)(1-id\xi\mathcal{K}_{0}^{z})\left|{M,P}\right\rangle$ $\displaystyle\mathcal{H}_{0}(1-id\xi\mathcal{K}_{0}^{z})\left|{M,P}\right\rangle$ $\displaystyle=(1-id\xi\mathcal{K}_{0}^{z})\mathcal{H}_{0}\left|{M,P}\right\rangle- id\xi\left[{\mathcal{H}_{0}},{\mathcal{K}_{0}^{z}}\right]\left|{M,P}\right\rangle=(E+d\xi P)(1-id\xi\mathcal{K}_{0}^{z})\left|{M,P}\right\rangle$ (468) The expression (455) for $\left|{M,P}\right\rangle$ in terms of the wave function $\Phi^{(P)}$ allows the wave function for $(1-id\xi\mathcal{K}_{0}^{z})\left|{M,P}\right\rangle$ to be determined, and thus its frame dependence (459) to be deduced. Exercise A.20: Derive the frame dependence (459) of $\Phi^{(P)}_{V=0}({\boldsymbol{x}})$ using the boost generator $\mathcal{K}_{0}^{z}$. Hint: Use the bound state equation in the form of (VIII.3.1) (with $V=0$). The boost demonstrates that the relative normalizations of wave functions with different momenta $P$ are correctly given by (459). This applies also to the interacting case ($V\neq 0$) considered next, since the $P$-dependence of the component $\Phi^{(P)}({\boldsymbol{x}}=0)$ is given by $V=0$. #### VIII.3.4 Solution of the $P\neq 0$ bound state equation at $\boldsymbol{x}_{\perp}=0$ Apart from $\boldsymbol{x}_{\perp}=0$ the following requires $E=\sqrt{M^{2}+P^{2}}$ and a linear potential $V=V^{\prime}z$ with $z>0$. The wave function for $z<0$ may be determined using parity or charge conjugation (412). As in the $D=1+1$ case (section VII.1.4) the coordinate $z$ is transformed into the variable $\tau(z)$ $\displaystyle\tau(z)\equiv\big{[}(E-V)^{2}-P^{2}\big{]}/V^{\prime}=(M^{2}-2EV+V^{2})/V^{\prime}\hskip 56.9055pt({\boldsymbol{x}}_{\perp}=0)$ (469) Since $\tau(z)$ depends on $E$ the transformation $z\to\tau$ is different for the rest frame wave function $\Phi^{(0)}(0,0,z)$ compared to that for $\Phi^{(P)}(0,0,z)$.
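For orientation, the map (469) from $z$ to $\tau$, together with the variable $\zeta$ defined in (470) below, is easily tabulated; a small Python helper (the function name is mine) with the hyperbolic consistency $\cosh^{2}\zeta-\sinh^{2}\zeta=1$ built in as a check:

```python
import numpy as np

def tau_zeta(z, M, P, Vp):
    """tau(z) of (469) and zeta(z) of (470) for V = Vp*z at x_perp = 0, z > 0."""
    E = np.sqrt(M**2 + P**2)
    tau = ((E - Vp * z)**2 - P**2) / Vp
    if Vp * tau <= 0:
        raise ValueError("V' tau <= 0: zeta is complex in this range of z")
    cosh_zeta = (E - Vp * z) / np.sqrt(Vp * tau)
    sinh_zeta = P / np.sqrt(Vp * tau)
    assert np.isclose(cosh_zeta**2 - sinh_zeta**2, 1.0)
    return tau, np.arcsinh(sinh_zeta)

# rest frame (P = 0): tau = (M - V)^2 / V' >= 0 and zeta = 0
print(tau_zeta(z=0.1, M=2.0, P=0.0, Vp=0.2))
# moving frame: the same tau corresponds to a different z
print(tau_zeta(z=0.1, M=2.0, P=1.5, Vp=0.2))
```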
These two wave functions will be related at the same value of $\tau$, and therefore at different values of $z$. For $V\ll E$ (weak binding) $\tau(z)\simeq M^{2}/V^{\prime}-2Mz\cosh\xi$ and the transformation is equivalent to $z\to z_{R}$ as in (459) (standard Lorentz contraction). I shall somewhat sloppily denote the wave functions expressed in terms of $\tau$ using the same symbols, $\Phi^{(0)}(\tau)$ and $\Phi^{(P)}(\tau)$. It should be kept in mind that these are related to the original wave functions at ${\boldsymbol{x}}=(0,0,z)$ through the $P$-dependent transformation (469). The variable $\zeta(z)$ takes the place of the boost parameter $\xi$, $\displaystyle\cosh\zeta=\frac{E-V}{\sqrt{V^{\prime}\tau}}=\sqrt{1+\frac{P^{2}}{V^{\prime}\tau}}\hskip 56.9055pt\sinh\zeta=\frac{P}{\sqrt{V^{\prime}\tau}}$ (470) $\zeta(z)$ depends on $P$ as well as $\tau$. The definition (469) shows that $V^{\prime}\tau\geq-P^{2}$ for real values of $z$. Hence when $P\neq 0$ there is a range of $z$ for which $V^{\prime}\tau<0$. To avoid considering complex values of $\zeta$ I shall assume values of $z$ and $P$ such that $\tau>0$. I discuss below how to determine the $\boldsymbol{x}_{\perp}=0$ wave function in the range where $\tau<0$. The solution of the BSE (457) at $\boldsymbol{x}_{\perp}=0$ is related to the rest frame wave function through $\displaystyle\Phi^{(P)}(\tau)$ $\displaystyle=\exp(-{\textstyle\frac{1}{2}}\zeta{\alpha_{3}})\,\Phi^{(0)}(\tau)\exp({\textstyle\frac{1}{2}}\zeta{\alpha_{3}})$ (471) The same relation holds also for $\boldsymbol{\nabla}_{\perp}\Phi^{(P)}(\tau)$. This requires $\boldsymbol{\nabla}_{\perp}\zeta=0$, which follows from $\boldsymbol{\nabla}_{\perp}V({\boldsymbol{x}})=0$ at $\boldsymbol{x}_{\perp}=0$. By construction, $\Phi^{(0)}(\tau)$ depends only on $\tau$, whereas $\Phi^{(P)}(\tau)$ has an explicit $P$-dependence through $\zeta$ (470). Exercise A.21: Show that $\Phi^{(P)}(\tau)$ given by (471) satisfies the BSE (457) at $\boldsymbol{x}_{\perp}=0$. Hint: Follow the proof of section VIII.3.2 for the $V=0$ case. Pay attention to derivatives of $\zeta$. As seen in section VIII.2 the wave functions of all rest frame $q\bar{q}$ states are found by solving radial equations, which are ordinary differential equations in $r$. The relation (471) then determines $\Phi^{(P)}(0,0,z)$ and $\boldsymbol{\nabla}_{\perp}\Phi^{(P)}(0,0,z)$ in all frames, when the $q\bar{q}$ pairs are aligned with ${\boldsymbol{P}}$. This boundary condition on the BSE (together with the one for $r\to 0$ based on (459)) should allow $\Phi^{(P)}({\boldsymbol{x}})$ to be determined at all ${\boldsymbol{x}}$ by solving the partial differential equation (456) in $(x_{\perp},z)$. This remains to be demonstrated. For a rest frame wave function $\tau=(M-V^{\prime}z)^{2}/V^{\prime}\geq 0$ (469), whereas in general $V^{\prime}\tau\geq-P^{2}$. This leaves a gap $-P^{2}\leq V^{\prime}\tau<0$ in the boundary condition (471). In the $D=1+1$ case the analytic functions (VII.1.4) determine the solution for all $\tau$. Here we may use the analog of the expression (309), $\displaystyle\left.(E-V)\frac{\partial\Phi^{(P)}({\boldsymbol{x}})}{\partial P}\right|_{z}=\left.\frac{zP}{E}\partial_{z}\Phi^{(P)}({\boldsymbol{x}})\right|_{P}-{\textstyle\frac{1}{2}}\,\big{[}{{\alpha_{3}}},{\Phi^{(P)}({\boldsymbol{x}})}\big{]}\hskip 28.45274pt\mbox{at}\ \ \ {\boldsymbol{x}}=(0,0,z)$ (472) which is a consequence of (471). On the lhs. the $|_{z}$ indicates that the $P$-derivative is to be taken at fixed $z$, while on the rhs. the $z$-derivative is at fixed $P$.
The derivation is the same as in the $D=1+1$ case, see Exercise A.13. This equation determines $\Phi^{(P)}(0,0,z)$ for all $P$ and $z$, with the rest frame wave function $\Phi^{(0)}(0,0,z)$ serving as boundary condition at $P=0$. In particular, the solution covers the gap $-P^{2}\leq V^{\prime}\tau<0$. ### VIII.4 Properties of the $q\bar{q}$ bound states The $q\bar{q}$ wave functions have novel features at large values of the linear potential (366). There is little ab initio knowledge of strongly bound states since they are usually associated with large values of the coupling, i.e., non-perturbative dynamics. Here the confining potential is of ${\cal O}\left(\alpha_{s}^{0}\right)$, so relativistic binding is compatible with a perturbative expansion in ${\alpha_{s}}$. It is essential that the lowest order bound states, determined by the relativistic solutions $\Phi^{(P)}({\boldsymbol{x}})$ of the BSE (456), provide a reasonable approximation of the true states. #### VIII.4.1 String breaking and duality A first issue is why we should trust the linear potential $V(r)=V^{\prime}r$ at large $r$. “String breaking” will prevent the potential from reaching large values. I touched upon this already in section VII.1.7, for QED in $D=1+1$ dimensions. The key appears to be quark-hadron duality, which is a pervasive feature of hadron data and poorly understood theoretically; see the review Melnitchouk _et al._ (2005) and the conference “First Workshop on Quark-Hadron Duality and the Transition to pQCD”, http://www.lnf.infn.it/conference/duality05/. Bound state solutions of the BSE with different eigenvalues $E_{A}\neq E_{B}$ are orthogonal, $\displaystyle\langle{M_{B},{\boldsymbol{P}}_{B}}|M_{A},{\boldsymbol{P}}_{A}\rangle\propto\delta({\boldsymbol{P}}_{A}-{\boldsymbol{P}}_{B})\delta_{A,B}$ (473) Exercise A.22: Prove (473) for states with wave functions satisfying the BSE (456). Hint: The proof is analogous to the standard one for non-relativistic systems. However, the $q\bar{q}$ states are not orthogonal to $q\bar{q}\,q\bar{q}$ states. Contracting the quark fields as in Fig. 19(a) gives $\displaystyle\langle{B,C}|A\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{N_{c}}}\int\Big{[}\prod_{k=A,B,C}d{\boldsymbol{x}}_{1k}d{\boldsymbol{x}}_{2k}\Big{]}\exp\big{\\{}i{\textstyle\frac{1}{2}}\big{[}({\boldsymbol{x}}_{1A}+{\boldsymbol{x}}_{2A})\cdot{\boldsymbol{P}}_{A}-({\boldsymbol{x}}_{1B}+{\boldsymbol{x}}_{2B})\cdot{\boldsymbol{P}}_{B}-({\boldsymbol{x}}_{1C}+{\boldsymbol{x}}_{2C})\cdot{\boldsymbol{P}}_{C}\big{]}\big{\\}}$ $\displaystyle\times$ $\displaystyle\langle{0}|\big{[}\psi^{\dagger}({\boldsymbol{x}}_{2B})\Phi_{B}^{\dagger}\gamma^{0}\psi({\boldsymbol{x}}_{1B})\big{]}\big{[}\psi^{\dagger}({\boldsymbol{x}}_{2C})\Phi_{C}^{\dagger}\gamma^{0}\psi({\boldsymbol{x}}_{1C})\big{]}\big{[}\bar{\psi}({\boldsymbol{x}}_{1A})\Phi_{A}\psi({\boldsymbol{x}}_{2A})\big{]}\left|{0}\right\rangle$ $\displaystyle=$ $\displaystyle-\frac{(2\pi)^{3}}{\sqrt{N_{c}}}\,\delta^{3}({\boldsymbol{P}}_{A}-{\boldsymbol{P}}_{B}-{\boldsymbol{P}}_{C})\int d\boldsymbol{\delta}_{1}d\boldsymbol{\delta}_{2}\,e^{i\boldsymbol{\delta}_{1}\cdot{\boldsymbol{P}}_{C}/2-i\boldsymbol{\delta}_{2}\cdot{\boldsymbol{P}}_{B}/2}\mathrm{Tr}\,\big{[}\gamma^{0}\Phi_{B}^{\dagger}(\boldsymbol{\delta}_{1})\Phi_{A}(\boldsymbol{\delta}_{1}+\boldsymbol{\delta}_{2})\Phi_{C}^{\dagger}(\boldsymbol{\delta}_{2})\big{]}$ where $\boldsymbol{\delta}_{1}={\boldsymbol{x}}_{1B}-{\boldsymbol{x}}_{2B}$ and $\boldsymbol{\delta}_{2}={\boldsymbol{x}}_{1C}-{\boldsymbol{x}}_{2C}$.
Note that the process $A\to B+C$ is not mediated by a Hamiltonian interaction. It expresses that state $A$ has an overlap with $B+C$ (I thank Yiannis Makris for a helpful comment concerning this). For example, Parapositronium has no overlap with $\left|{\gamma\gamma}\right\rangle$, but decays into this state do occur through the action of $\mathcal{H}_{int}$. A bound state can overlap two bound states only for relativistic binding, so this phenomenon has no precedent for atoms. Quark-hadron duality in $e^{+}e^{-}\to hadrons$ means that the final state is described, in an average sense, by the $q\bar{q}$ state first created by the virtual photon. This holds also for individual resonances in the direct channel, and is consistent with an overlap between the $q\bar{q}$ and final hadron states. I show below (VIII.4.4) how the wave functions of highly excited bound states indeed reduce to those of free $q\bar{q}$ states. Another example of duality is the observation that the inclusive momentum distribution of hadrons produced in hard processes agrees with the perturbatively calculated gluon distribution Dokshitzer (2010). This “Local Parton Hadron Duality” works down to low momenta. It shows that the transition from parton to hadron occurs with a minimal change of momentum, in the spirit of an overlap between states. I shall assume that highly excited bound states give a gross description of the final multi-hadron states, much as high energy quarks and gluons describe hadron jets. This can and should be quantified by evaluating matrix elements like (VIII.4.1). Figure 19: (a) The overlap $\langle{B,C}|A\rangle$ arises through the tunneling of a $q\bar{q}$ pair in the instantaneous field, “string breaking”. Since the potential is linear the field energy is nearly the same before and after the split. (b) Hadron loop correction through the creation and annihilation of a $q\bar{q}$ pair, which may be important for unitarity. #### VIII.4.2 Properties of the wave function at large separations $r$ Non-relativistic (Schrödinger) wave functions describe the distribution of a fixed number of bound state constituents. The single particle probability constraint $\int d{\boldsymbol{x}}\,|\Phi|^{2}=1$ (global norm) determines the energy eigenvalues. The Dirac equation also describes a single electron, but in a strong potential ($V\gtrsim m$) the wave function has negative energy components. In time-ordered dynamics these $E<0$ components correspond to $e^{+}e^{-}$ pairs in the wave function. This is related to the Klein paradox Klein (1929) and illustrated by the perturbative $Z$-diagram of Fig. 13(b). For a linear potential the local norm of the Dirac wave function approaches a constant at large $r$ (126) Plesset (1932), where it is dominated by the $E<0$ components (127). This may be interpreted as positrons, which due to their positive charge are repelled by the potential, and accelerated to high momenta at large $r$. In section III.2 we saw that the crossed ladder diagram of Fig. 11(c) does not contribute to atomic bound states at lowest order in $\alpha$, i.e., to non-relativistic dynamics. When time ordered this Feynman diagram is the same as the pair-producing $Z$-diagram of Fig. 13(b) (after the addition of the antifermion line). In QCD the uncrossed and crossed diagrams are distinguished also by their dependence on the number $N_{c}$ of colors. This is illustrated in Fig. 20 for $q\bar{q}\to q\bar{q}$ with single and double gluon exchange.
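The color factors worked out in (475) below can be verified by brute force for any $N_{c}$; a Python sketch constructing the fundamental SU($N_{c}$) generators (my helper, normalized to $\mathrm{Tr}\,(T^{a}T^{b})=\delta^{ab}/2$):

```python
import numpy as np
from itertools import combinations

def su_n_generators(N):
    """Fundamental SU(N) generators T^a with Tr(T^a T^b) = delta^{ab}/2."""
    Ts = []
    for i, j in combinations(range(N), 2):     # off-diagonal generators
        S = np.zeros((N, N), complex); S[i, j] = S[j, i] = 0.5
        A = np.zeros((N, N), complex); A[i, j] = -0.5j; A[j, i] = 0.5j
        Ts += [S, A]
    for l in range(1, N):                      # diagonal (Cartan) generators
        D = np.zeros((N, N), complex)
        D[:l, :l] = np.eye(l); D[l, l] = -l
        Ts.append(D / np.sqrt(2 * l * (l + 1)))
    return Ts

Nc = 3
T = su_n_generators(Nc)
Ca = sum(np.trace(t @ t) for t in T).real / Nc                           # C(a)/g^2
Cb = sum(np.trace(tb @ ta @ ta @ tb) for ta in T for tb in T).real / Nc  # C(b)/g^4
Cc = sum(np.trace(tb @ ta @ tb @ ta) for ta in T for tb in T).real / Nc  # C(c)/g^4

CF = (Nc**2 - 1) / (2 * Nc)
assert np.isclose(Ca, CF)                      # (Nc^2-1)/(2Nc) = 4/3
assert np.isclose(Cb, CF**2)                   # = 16/9
assert np.isclose(Cc, -CF / (2 * Nc))          # -(Nc^2-1)/(4Nc^2) = -2/9
print(Ca, Cb, Cc)
```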
The initial and final states are taken to be SU($N_{c}$) singlets, implying a sum over the colors $A$ and $B$, each normalized by $1/\sqrt{N_{c}}$. Figure 20: Color structure of QCD Feynman diagrams. (a) Single gluon exchange. Planar (b) and non-planar (c) two-gluon exchange. The initial and final states are color singlets, implying a sum over the quark colors $A$ and $B$. The color factors are thus, including also their behavior for $N_{c}\to\infty$: $\displaystyle C(a)$ $\displaystyle=\frac{g^{2}}{(\sqrt{N_{c}})^{2}}\sum_{A,B}\sum_{a}T_{BA}^{a}T_{AB}^{a}=\frac{g^{2}}{N_{c}}\mathrm{Tr}\,(T^{a}T^{a})=g^{2}\,\frac{N_{c}^{2}-1}{2N_{c}}\sim{\textstyle\frac{1}{2}}\,g^{2}N_{c}$ $\displaystyle C(b)$ $\displaystyle=\frac{g^{4}}{N_{c}}\mathrm{Tr}\,(T^{b}T^{a}T^{a}T^{b})=g^{4}\,\Big{(}\frac{N_{c}^{2}-1}{2N_{c}}\Big{)}^{2}\sim{\textstyle\frac{1}{4}}\,g^{4}N_{c}^{2}$ $\displaystyle C(c)$ $\displaystyle=\frac{g^{4}}{N_{c}}\mathrm{Tr}\,(T^{b}T^{a}T^{b}T^{a})=-\frac{g^{4}}{2N_{c}^{2}}\mathrm{Tr}\,(T^{a}T^{a})=-g^{4}\,\frac{N_{c}^{2}-1}{4N_{c}^{2}}\sim-{\textstyle\frac{1}{4}}\,g^{4}$ (475) The suppression of $C(c)$ compared to $C(b)$ for $N_{c}\to\infty$ is a general feature of non-planar diagrams ’t Hooft (1974a); Witten (1980); Coleman (1980). Due to the relation of diagram (c) in Fig. 20 with the $Z$-diagram of Fig. 13(b) there are no such pair contributions to $q\bar{q}$ states in the $N_{c}\to\infty$ limit, despite the relativistic binding. In particular, there are no sea quarks in the ’t Hooft model of QCD2 ’t Hooft (1974b), where $N_{c}\to\infty$. The connection between the sea quark distribution at low ${x_{Bj}}$ and the behavior of the wave function at large quark separations (the virtual pairs) is apparent for QED2 (section VII.3.3). The normalizing integral of the $0^{-+}$ trajectory wave functions (VIII.2.3) can for all angular momenta $j$ be expressed as, $\displaystyle\int d{\boldsymbol{x}}\,\mathrm{Tr}\,\Big{[}\Phi_{-+}^{\dagger}({\boldsymbol{x}})\Phi_{-+}({\boldsymbol{x}})\Big{]}=8\int_{0}^{\infty}dr\,r^{2}F_{1}^{*}(r)\Big{[}1-\frac{2V^{\prime}}{(M-V)^{3}}\partial_{r}\Big{]}F_{1}(r)$ (476) Exercise A.23: Verify the expression (476) for the global norm of $\Phi_{-+}({\boldsymbol{x}})$ in terms of $F_{1}(r)$. For $r\to\infty$ (at fixed $j$) the asymptotic radial wave function $F_{1}(r)$, $\displaystyle F_{1}(r)\simeq\frac{1}{r}\,\exp\big{[}i(M-V)^{2}/4V^{\prime}\big{]}\ \ \mbox{and}\ \ c.c.\ \ \ (r\to\infty)$ (477) satisfies the radial wave equation (417) up to terms of ${\cal O}\left(r^{0}\right)$. The integrand (local norm) in (476) tends to $r^{2}|F_{1}(r)|^{2}$ for large $r$, and is thus independent of $r$. This feature is common to states of all quantum numbers. The probability density similarly tends to a constant also in lower spatial dimensions ($D=1+1$ (VII.1.6) and $D=2+1$), as well as in the Dirac equation for a linear potential (126). At large $r$ the radial derivative dominates in the expression (VIII.2.3) for $\Phi_{-+}({\boldsymbol{x}})$: ${\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}\simeq{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}}\,\partial_{r}$ where ${\boldsymbol{\hat{{\boldsymbol{x}}}}}={\boldsymbol{x}}/r$. 
Using also $F^{\prime}_{1}(r\to\infty)\simeq{\textstyle\frac{1}{2}}iV^{\prime}r\,F_{1}(r\to\infty)$, $\displaystyle\Phi_{-+}(r\to\infty)$ $\displaystyle\simeq\Big{[}-\frac{2}{V^{\prime}r}(i{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}}\,\partial_{r}+m\gamma^{0})+1\Big{]}\gamma_{5}\,F_{1}(r)Y_{j\lambda}(\Omega)=(1+{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}})\gamma_{5}\,F_{1}(r)Y_{j\lambda}({\boldsymbol{\hat{{\boldsymbol{x}}}}})$ (478) The projector $\Lambda_{+}$ which selects the $b^{\dagger}$ operator in $\bar{\psi}({\boldsymbol{x}})$ is as in (167), $\displaystyle\bar{\psi}({\boldsymbol{x}}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}})$ $\displaystyle=\int\frac{d{\boldsymbol{k}}}{(2\pi)^{3}2E_{k}}\sum_{\lambda}\bar{u}({\boldsymbol{k}},\lambda)\,e^{-i{\boldsymbol{k}}\cdot{\boldsymbol{x}}}\,b_{{\boldsymbol{k}},\lambda}^{\dagger}\hskip 56.9055pt{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}})\equiv\frac{1}{2E}\Big{[}E-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}+\gamma^{0}m\Big{]}$ (479) A partial integration in the $b_{{\boldsymbol{k}},\lambda}^{\dagger}$ component of the state (395) makes $\Lambda_{+}$ operate on the wave function. Noting that $\displaystyle\boldsymbol{\nabla}^{2}\Phi_{-+}(r\to\infty)\simeq\partial_{r}^{2}\Phi_{-+}\simeq-{\textstyle\frac{1}{4}}(V^{\prime}r)^{2}\Phi_{-+}\hskip 56.9055ptE\,\Phi_{-+}\equiv\sqrt{-\boldsymbol{\nabla}^{2}+m^{2}}\,\Phi_{-+}\simeq{\textstyle\frac{1}{2}}V^{\prime}r\,\Phi_{-+}$ (480) we see that ${\overset{\rightarrow}{\Lambda}}_{+}$ annihilates the asymptotic wave function, $\displaystyle{\overset{\rightarrow}{\Lambda}}_{+}\Phi_{-+}(r\to\infty)$ $\displaystyle=\frac{1}{2E}\Big{[}E+i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+\gamma^{0}m\Big{]}\Phi_{-+}\simeq\frac{1}{2}\Big{[}1+\frac{i{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}}\,\partial_{r}}{E}\Big{]}\Phi_{-+}$ $\displaystyle\simeq{\textstyle\frac{1}{2}}(1-{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}})(1+{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}})\gamma_{5}\,F_{1}(r\to\infty)Y_{j\lambda}({\boldsymbol{\hat{{\boldsymbol{x}}}}})=0$ (481) Consequently the state has no $b^{\dagger}$ contribution at large $r$. Similarly $d^{\dagger}$ does not contribute, leaving only the negative energy component $d\,b$. This is characteristic of the pair (sea quark) contributions from $Z$-diagrams. The negative kinetic energy is cancelled by the positive potential energy at large $r$, leaving a finite eigenvalue $M$. My tentative interpretation of this is as follows. The delicate cancellation between large negative kinetic and positive potential energies means that bound state components with $V(r)\gg M$ will not affect physical processes with resolution below those energies. Hadron loop corrections like in Fig. 19(b) may change the wave function considerably. Nevertheless, since such corrections are due to overlaps the original wave function still approximately describes physical processes. Deep inelastic processes reveal the ${x_{Bj}}\to 0$ sea quark distribution as far as its resolution permits. #### VIII.4.3 Discrete mass spectrum Due to the infinite sea one cannot impose a global normalization condition on the bound state wave functions. There is, however, a new local normalization condition.
The solutions of the $q\bar{q}$ bound state equation (400) are generally singular at $M-V(r)=0$, as indicated by the coefficients $\propto 1/(M-V)$ in the radial equations (417), (427) and (436)-(437). A physical wave function should be locally normalizable at $M-V=0$, in line with the standard requirement of local normalizability at $r=0$. The radial equation (417) of the $0^{-+}$ trajectory allows $F_{1}(r)\sim(M-V)^{\gamma}$ with $\gamma=0$ and $\gamma=2$ as $M-V(r)\to 0$. The integrand (local norm) in (476) is finite at $M-V=0$ only if $\gamma=2$. For $r\to 0$ we have as usual $F_{1}(r)\sim r^{\beta}$, with $\beta=j$ or $\beta=-j-1$. Only $\beta=j$ makes the integrand in (476) finite at $r=0$. The two constraints, at $M-V(r)=0$ and $r=0$, determine the bound state masses, but leave the magnitude of the wave function unconstrained. This is a general feature, valid for states of all quantum numbers. The vanishing of the radial wave functions at $M-V(r)=0$ generalizes the vanishing for $r\to\infty$ of non-relativistic wave functions, which are defined only for $V(r)\ll M$. In the limit of non-relativistic dynamics ($m\to\infty$) the wave functions which are regular at $M-V=0$ become globally normalizable Dietrich _et al._ (2013). The Dirac equation may be viewed as the limit of a two-particle equation where the mass $m_{2}$ of one particle tends to infinity, turning it into a static source. The point $V(r)=M$ (where $M$ includes $m_{2}$) recedes to $r=\infty$ as $m_{2}\to\infty$. Hence there is no condition on the Dirac wave function at $M-V=0$, and the spectrum is continuous Plesset (1932). The radial equation (417) can readily be solved numerically, subject to the boundary conditions $F_{1}(r\to 0)\sim r^{j}$ and $F_{1}(r\to M/V^{\prime})\sim(M-V)^{2}$. As seen in Fig. 21, for the linear potential $V(r)=V^{\prime}r$ and quark mass $m=0$ the states lie on linear Regge trajectories and their parallel daughter trajectories. The mass spectra of the $0^{--}$ and $0^{++}$ trajectories are similar Hoyer (2016). Figure 21: (a) Masses $M$ of the mesons on the $0^{-+}$ trajectory for $m=0$, in units of $\sqrt{V^{\prime}}$. (b) Plot of the spin $j$ vs. $M^{2}/V^{\prime}$ for the states listed in (a). Figure taken from Hoyer (2016). #### VIII.4.4 Parton picture for $M\gg V(r)$ Duality in $e^{+}e^{-}\to hadrons$ implies that the perturbative process $e^{+}e^{-}\to q\bar{q}$ gives an average of the direct channel resonances Melnitchouk _et al._ (2005). More generally, PQCD describes inclusive cross sections and hadron distributions (jets) at high energies in terms of unconfined quarks and gluons. Confinement (hadronization) is expected to become important as the color charges separate beyond the hadronic scale. Duality implies that the wave functions of highly excited bound states $(M\gg 2m)$ should agree with the parton model, i.e., be compatible with the perturbative quark and gluon distribution for $V=V^{\prime}r\ll M$. This is already indicated by the form of the bound state equation (400), where $M$ and $V$ appear only in the combination $M-V$. When $V\ll M$ this is a free particle equation, with no negative energy ($d,\,b$) contributions from $Z$-diagrams. It is instructive to see how this limit emerges from the wave functions. I shall again use the $0^{-+}$ trajectory states to illustrate the emergence of the parton picture. 
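Returning to the numerical determination of the spectrum (Fig. 21): with the boundary conditions $F_{1}(r\to 0)\sim r^{j}$ and $F_{1}\sim(M-V)^{2}$ at $M-V=0$, the radial equation (417) is a standard shooting problem. A minimal sketch (scipy assumed; units $V^{\prime}=1$ with $m=0$, $j=0$; the scan range and tolerances are illustrative, not tuned):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Vp, m, j = 1.0, 0.0, 0                 # V' = 1 units; massless quarks, j = 0

def mismatch(M, eps=1e-4, delta=1e-3):
    """Integrate (417) from r ~ 0 (F1 ~ r^j) towards M - V = 0; the regular
    solution behaves as (M - V)^2 there, so eigenvalues are zeros of F1."""
    def rhs(r, y):
        F, dF = y
        W = M - Vp * r
        return [dF, -(2 / r + Vp / W) * dF
                    - (W**2 / 4 - m**2 - j * (j + 1) / r**2) * F]
    y0 = [eps**j, j * eps**(j - 1) if j else 0.0]
    sol = solve_ivp(rhs, [eps, M / Vp - delta], y0, rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

Ms = np.linspace(2.0, 8.0, 61)         # scan M in units of sqrt(V')
vals = [mismatch(M) for M in Ms]
roots = [brentq(mismatch, a, b)
         for a, b, fa, fb in zip(Ms[:-1], Ms[1:], vals[:-1], vals[1:])
         if fa * fb < 0]
print(np.round(roots, 3))              # bound state masses (cf. Fig. 21)
```

The same scheme applies to (427) and, with a coupled system, to (436)–(437).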
The asymptotic expression (477) for $F_{1}(r)$ at large $r$ satisfies the radial equation also when $M\to\infty$ at finite $r$, up to terms of ${\cal O}\left(M^{0}\right)$. (In $D=1+1$ the $x\to\infty$ and $M\to\infty$ limits are similarly related: the QED2 wave functions $\phi_{0}$ and $\phi_{1}$ (VII.1.4) depend on $x$ and $M$ only through the variable $\tau$ (303), and $\tau\to\infty$ in both limits.) In the expression (VIII.2.3) for the wave function the radial derivative then dominates, $F_{1}^{\prime}\simeq-i{\textstyle\frac{1}{2}}MF_{1}$, so $\displaystyle\Phi_{-+}(M\to\infty)\simeq\Big{[}\frac{2}{M}(i{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}}\,\partial_{r}+m\gamma^{0})+1\Big{]}\gamma_{5}\,F_{1}(r)Y_{j\lambda}(\Omega)\simeq(1+{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}})\gamma_{5}\,F_{1}Y_{j\lambda}$ (482) Consider again the projector $\Lambda_{+}$ (479) which projects out the $b^{\dagger}$ term in the $\bar{\psi}({\boldsymbol{x}}_{1})$ field of the state (395). After a partial integration it operates on $\Phi_{-+}({\boldsymbol{x}})$, with the quark energy now giving $\displaystyle E\,\Phi_{-+}(M\to\infty)\equiv\sqrt{-\boldsymbol{\nabla}^{2}+m^{2}}\,\Phi_{-+}\simeq{\textstyle\frac{1}{2}}M\,\Phi_{-+}$ (483) The dominance of the radial derivative in ${\overset{\rightarrow}{\Lambda}}_{+}$ implies, $\displaystyle{\overset{\rightarrow}{\Lambda}}_{+}\Phi_{-+}(M\to\infty)\simeq\frac{1}{2}\Big{(}1+\frac{i{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}}\,\partial_{r}+\gamma^{0}m}{E}\Big{)}\Phi_{-+}\simeq{\textstyle\frac{1}{2}}(1+{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}})(1+{\boldsymbol{\alpha}}\cdot{\boldsymbol{\hat{{\boldsymbol{x}}}}})\gamma_{5}\,F_{1}(r)Y_{j\lambda}=\Phi_{-+}$ (484) The result is opposite to that of (VIII.4.2): Now only the valence quark operators $b^{\dagger},\,d^{\dagger}$ contribute to the state, $\displaystyle\left|{M}\right\rangle_{V\ll M}\simeq\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\int\frac{d{\boldsymbol{k}}_{1}d{\boldsymbol{k}}_{2}}{(2\pi)^{6}4E_{1}E_{2}}\sum_{\lambda_{1},\lambda_{2}}e^{-i({\boldsymbol{k}}_{1}\cdot{\boldsymbol{x}}_{1}+{\boldsymbol{k}}_{2}\cdot{\boldsymbol{x}}_{2})}\,F_{1}Y_{j\lambda}\big{[}\bar{u}({\boldsymbol{k}}_{1},\lambda_{1})\gamma_{5}v({\boldsymbol{k}}_{2},\lambda_{2})\big{]}\,b_{{\boldsymbol{k}}_{1},\lambda_{1}}^{\dagger}\,d_{{\boldsymbol{k}}_{2},\lambda_{2}}^{\dagger}\left|{0}\right\rangle$ (485) Since $F_{1}Y_{j\lambda}$ depends only on ${\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}$, and $\displaystyle{\boldsymbol{k}}_{1}\cdot{\boldsymbol{x}}_{1}+{\boldsymbol{k}}_{2}\cdot{\boldsymbol{x}}_{2}={\textstyle\frac{1}{2}}({\boldsymbol{k}}_{1}+{\boldsymbol{k}}_{2})\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})+{\textstyle\frac{1}{2}}({\boldsymbol{k}}_{1}-{\boldsymbol{k}}_{2})\cdot({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ (486) the integral over $({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2$ sets ${\boldsymbol{k}}_{1}=-{\boldsymbol{k}}_{2}\equiv{\boldsymbol{k}}$.
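The stationary-phase values used next, the extrema $r_{s}$ and $k_{s}$ of the phase $\phi(r)=\pm kr+(M-V)^{2}/4V^{\prime}$ quoted in (489)–(490) below, follow from elementary calculus and can be checked symbolically (sympy sketch, upper sign):

```python
import sympy as sp

r, k, M, Vp = sp.symbols("r k M V_prime", positive=True)

phi = k * r + (M - Vp * r)**2 / (4 * Vp)           # phase, upper sign in (489)
rs = sp.solve(sp.diff(phi, r), r)[0]
assert sp.simplify(rs - (M - 2 * k) / Vp) == 0     # V' r_s = M - 2k

phase = sp.expand(phi.subs(r, rs))
assert sp.simplify(phase - k * (M - k) / Vp) == 0  # phi(r_s) = k(M - k)/V'

ks = sp.solve(sp.diff(phase, k), k)[0]
assert sp.simplify(ks - M / 2) == 0                # k_s = M/2, as in (490)
print(rs, phase, ks)
```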
Denoting ${\boldsymbol{x}}\equiv{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}$, $\displaystyle\left|{M}\right\rangle_{V\ll M}\simeq\int d{\boldsymbol{x}}\int\frac{d{\boldsymbol{k}}}{(2\pi)^{3}4E_{k}^{2}}\sum_{\lambda_{1},\lambda_{2}}e^{-i{\boldsymbol{k}}\cdot{\boldsymbol{x}}}\big{[}\bar{u}({\boldsymbol{k}},\lambda_{1})\gamma_{5}v(-{\boldsymbol{k}},\lambda_{2})\big{]}\,F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})\,b_{{\boldsymbol{k}},\lambda_{1}}^{\dagger}\,d_{-{\boldsymbol{k}},\lambda_{2}}^{\dagger}\left|{0}\right\rangle$ (487) For $j=0$ the angular integral over ${\boldsymbol{\hat{{\boldsymbol{x}}}}}$ gives, with ${\boldsymbol{x}}\cdot{\boldsymbol{k}}=kr\cos\theta$, $\displaystyle\int_{-1}^{1}d\cos\theta\int_{0}^{2\pi}d\varphi\,e^{-ikr\cos\theta}\frac{1}{\sqrt{4\pi}}=-\frac{i\sqrt{\pi}}{kr}\big{(}e^{ikr}-e^{-ikr}\big{)}$ (488) At large $M\gg V$ the phase $\phi(r)$ of $F_{1}(r)$ (477) changes quickly with $r$, so the $r$-integral can be estimated using the method of stationary phase, $\displaystyle\phi(r)=\pm kr+(M-V)^{2}/4V^{\prime}\hskip 28.45274pt\partial_{r}\phi(r_{s})=0\hskip 28.45274ptV^{\prime}r_{s}=M-2k\hskip 28.45274pt\phi(r_{s})=k(M-k)/V^{\prime}$ (489) The fact that only the $\exp(+ikr)$ term in (488) contributes sets $\cos\theta=-1$, i.e., ${\boldsymbol{x}}$ and ${\boldsymbol{k}}$ are antiparallel (with my choice of asymptotic solution in (477)). The stationary phase method may again be used in the $k$-integration, since $\phi(r_{s})$ depends sensitively on $k$, $\displaystyle\partial_{k}\phi(r_{s})=0\hskip 56.9055ptk_{s}={\textstyle\frac{1}{2}}M$ (490) Thus the quark and antiquark momenta are each half the resonance mass, as expected by energy conservation. With $k=M/2$ in (489) we have $V^{\prime}r_{s}\ll M$, i.e., the stationary value $r_{s}$ is compatible with the limit assumed in (482). A similar result will be obtained for any (fixed) angular momentum $j$, since each term in the angular integral corresponding to (488) will be $\propto\exp(\pm ikr)$, multiplied by powers of $kr$ which do not affect the stationary phase approximation. Thus the wave functions of highly excited states on the $0^{-+}$ trajectory (413) take the form $\displaystyle\left|{M}\right\rangle_{V\ll M}\propto\int d\hat{\boldsymbol{k}}\sum_{\lambda_{1},\lambda_{2}}\big{[}\bar{u}({\boldsymbol{k}},\lambda_{1})\gamma_{5}v(-{\boldsymbol{k}},\lambda_{2})\big{]}\,Y_{j\lambda}(\hat{\boldsymbol{k}})\,b_{{\boldsymbol{k}},\lambda_{1}}^{\dagger}\,d_{-{\boldsymbol{k}},\lambda_{2}}^{\dagger}\left|{0}\right\rangle\hskip 56.9055pt|{\boldsymbol{k}}|={\textstyle\frac{1}{2}}M$ (491) The bound state wave function thus agrees with that of free $q\bar{q}$ states perturbatively produced by a current with corresponding $J^{PC}$ quantum numbers, as expected by duality. This may be used to determine the absolute normalization of highly excited bound states, as discussed in section IV of Dietrich _et al._ (2013). ### VIII.5 * Glueballs in the rest frame I consider states of two transversely polarized gluons $\left|{gg}\right\rangle$, bound by the instantaneous linear potential $V_{gg}^{(0)}$ (378), $\displaystyle V_{gg}(r)=\sqrt{\frac{N}{C_{F}}}\,\Lambda^{2}\,r=\frac{3}{2}\,\Lambda^{2}\,r\equiv V_{g}^{\prime}r$ (492) The ${\cal O}\left({\alpha_{s}}\right)$ instantaneous gluon exchange $V_{gg}^{(1)}$ in (378) as well as higher Fock components ($\left|{ggg}\right\rangle$, $\left|{ggq\bar{q}}\right\rangle\ldots$) are ignored.
Hence the Hamiltonian (148) is approximated as $\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{V}$, where $\mathcal{H}_{V}$ (V.3.2) generates the linear potential and the kinetic term in (148), $\displaystyle\mathcal{H}_{0}$ $\displaystyle=\int d{\boldsymbol{x}}\big{[}{\textstyle\frac{1}{2}}E_{a,T}^{i}E_{a,T}^{i}+{\textstyle\frac{1}{2}}A_{a,T}^{i}(-\boldsymbol{\nabla}^{2})A_{a,T}^{i}\big{]}$ (493) involves only transversely polarized gluons $A^{i}_{a,T}$, which satisfy $\boldsymbol{\nabla}\cdot{\boldsymbol{A}}_{a,T}=0$, and their conjugate electric fields $-E_{a,T}^{i}$. The canonical commutation relations (147) imply $\displaystyle\left[{\mathcal{H}_{0}},{A^{i}_{a,T}({\boldsymbol{x}})}\right]$ $\displaystyle=iE^{i}_{a,T}({\boldsymbol{x}})$ $\displaystyle\left[{\mathcal{H}_{0}},{E^{i}_{a,T}({\boldsymbol{x}})}\right]=i\boldsymbol{\nabla}^{2}A^{i}_{a,T}({\boldsymbol{x}})$ (494) Consequently the bound state condition $\displaystyle(\mathcal{H}_{0}+\mathcal{H}_{V})\left|{gg}\right\rangle=M\left|{gg}\right\rangle$ (495) requires $\left|{gg}\right\rangle$ to have both $A$ and $E$ components. In terms of the wave functions $\Phi^{ij}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$, $\displaystyle\left|{gg}\right\rangle$ $\displaystyle\equiv\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\big{[}A^{i}_{a,T}({\boldsymbol{x}}_{1})A^{j}_{a,T}({\boldsymbol{x}}_{2})\Phi^{ij}_{AA}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})+A^{i}_{a,T}E^{j}_{a,T}\Phi^{ij}_{AE}+E^{i}_{a,T}A^{j}_{a,T}\Phi^{ij}_{EA}+E^{i}_{a,T}E^{j}_{a,T}\Phi^{ij}_{EE}\big{]}\left|{0}\right\rangle$ (496) where sums over the color $a$ and 3-vector indices $i,j$ are understood (here $A$ is not a color index!). The constituent ${\boldsymbol{A}}$ and ${\boldsymbol{E}}$ fields are assumed to be normal ordered (their mutual commutators are subtracted). As shown in section VIII.1.3 the action of $\mathcal{H}_{V}$ on $A^{i}_{a,T}({\boldsymbol{x}}_{1})A^{j}_{a,T}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ gives the potential (492). Since $\mathcal{E}_{a}({\boldsymbol{y}})$ (V.3.1) has similar commutators with the $A$ and $E$ fields, $\displaystyle\left[{\mathcal{E}_{a}({\boldsymbol{y}})},{A_{d}^{i}({\boldsymbol{x}})}\right]$ $\displaystyle=-i\,f_{abd}A_{b}^{i}({\boldsymbol{x}})\delta({\boldsymbol{x}}-{\boldsymbol{y}})$ $\displaystyle\left[{\mathcal{E}_{a}({\boldsymbol{y}})},{E_{d}^{i}({\boldsymbol{x}})}\right]$ $\displaystyle=-i\,f_{abd}E_{b}^{i}({\boldsymbol{x}})\delta({\boldsymbol{x}}-{\boldsymbol{y}})$ (497) the same potential (492) is obtained for all four components of $\left|{gg}\right\rangle$ in (496), $\displaystyle\mathcal{H}_{V}\left|{gg}\right\rangle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,V_{gg}(|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|)\big{[}A_{a}({\boldsymbol{x}}_{1})A_{a}({\boldsymbol{x}}_{2})\Phi_{AA}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})+A_{a}E_{a}\Phi_{AE}+E_{a}A_{a}\Phi_{EA}+E_{a}E_{a}\Phi_{EE}\big{]}\left|{0}\right\rangle$ (498) where I suppressed the 3-vector indices $i,j$ and the label $T$ of the transverse fields, which are unaffected by $\mathcal{H}_{0}$ and $\mathcal{H}_{V}$. 
Using the commutation relations (494), $\displaystyle\mathcal{H}_{0}\left|{gg}\right\rangle=i\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\Big{\\{}$ $\displaystyle\big{[}E_{a}({\boldsymbol{x}}_{1})A_{a}({\boldsymbol{x}}_{2})+A_{a}({\boldsymbol{x}}_{1})E_{a}({\boldsymbol{x}}_{2})\big{]}\Phi_{AA}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})+\big{[}E_{a}E_{a}+A_{a}A_{a}\boldsymbol{\nabla}^{2}\big{]}\Phi_{AE}$ $\displaystyle+\big{[}A_{a}A_{a}\boldsymbol{\nabla}^{2}+E_{a}E_{a}\big{]}\Phi_{EA}+\big{[}A_{a}E_{a}+E_{a}A_{a}\big{]}\boldsymbol{\nabla}^{2}\Phi_{EE}\Big{\\}}\left|{0}\right\rangle$ (499) where $\boldsymbol{\nabla}$ differentiates $\Phi({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ wrt. ${\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}$. The stationarity condition (495) implies the following relations between the wave functions $\Phi({\boldsymbol{x}})$: $\displaystyle\boldsymbol{\nabla}^{2}(\Phi_{AE}+\Phi_{EA})$ $\displaystyle=-i(M-V)\Phi_{AA}$ $\displaystyle\Phi_{AA}+\boldsymbol{\nabla}^{2}\Phi_{EE}$ $\displaystyle=-i(M-V)\Phi_{AE}$ $\displaystyle\Phi_{AA}+\boldsymbol{\nabla}^{2}\Phi_{EE}$ $\displaystyle=-i(M-V)\Phi_{EA}$ $\displaystyle\Phi_{AE}+\Phi_{EA}$ $\displaystyle=-i(M-V)\Phi_{EE}$ (500) where $V=V_{g}^{\prime}|{\boldsymbol{x}}|=V^{\prime}_{g}r$ as in (492). This implies $\displaystyle\Phi_{AE}$ $\displaystyle=\Phi_{EA}=-{\textstyle\frac{1}{2}}i(M-V)\Phi_{EE}$ $\displaystyle\Phi_{AA}$ $\displaystyle=\frac{1}{M-V}\,\boldsymbol{\nabla}^{2}\big{[}(M-V)\Phi_{EE}\big{]}$ $\displaystyle\frac{1}{M-V}\,\boldsymbol{\nabla}^{2}$ $\displaystyle\big{[}(M-V)\Phi_{EE}\big{]}+\boldsymbol{\nabla}^{2}\Phi_{EE}=-{\textstyle\frac{1}{2}}(M-V)^{2}\Phi_{EE}$ (501) The last equation is $\displaystyle\boldsymbol{\nabla}^{2}\Phi_{EE}({\boldsymbol{x}})-\frac{V_{g}^{\prime}}{M-V}\partial_{r}\Phi_{EE}({\boldsymbol{x}})-\frac{V_{g}^{\prime}}{r(M-V)}\Phi_{EE}({\boldsymbol{x}})+{\textstyle\frac{1}{4}}(M-V)^{2}\Phi_{EE}({\boldsymbol{x}})=0$ (502) Separating the radial and angular dependence according to $\displaystyle\Phi_{EE}({\boldsymbol{x}})=F(r)Y_{\ell\lambda}(\Omega)$ (503) where $Y_{\ell\lambda}$ is the standard spherical harmonic function, the radial equation becomes $\displaystyle F^{\prime\prime}(r)+\Big{(}\frac{2}{r}-\frac{V_{g}^{\prime}}{M-V}\Big{)}F^{\prime}(r)+\Big{[}{\textstyle\frac{1}{4}}(M-V)^{2}-\frac{V_{g}^{\prime}}{r(M-V)}-\frac{\ell(\ell+1)}{r^{2}}\Big{]}F(r)=0$ (504) There is a single dimensionful parameter $V_{g}^{\prime}$. Scaling $r=R/\sqrt{V_{g}^{\prime}}$ and $M=\mathcal{M}\sqrt{V_{g}^{\prime}}$, the bound state equation in terms of the dimensionless variables $R,\mathcal{M}$ becomes $\displaystyle\partial_{R}^{2}F(R)+\Big{(}\frac{2}{R}-\frac{1}{\mathcal{M}-R}\Big{)}\partial_{R}F(R)+\Big{[}{\textstyle\frac{1}{4}}(\mathcal{M}-R)^{2}-\frac{1}{R(\mathcal{M}-R)}-\frac{\ell(\ell+1)}{R^{2}}\Big{]}F(R)=0$ (505) Figure 22: (a) Glueball spectrum: Orbital angular momentum $\ell$ versus $M^{2}/V^{\prime}_{g}$. (b) Glueball masses $\mathcal{M}=M/\sqrt{V^{\prime}_{g}}$ of the radial equation (505). For $R\to 0$ we have the standard behaviors $F\sim R^{\alpha}$, with $\alpha=\ell$ or $\alpha=-\ell-1$. Since $\Phi_{AA}\sim\partial_{R}^{2}\Phi_{EE}$, only the $\alpha=\ell$ solution gives a locally finite norm at $R=0$. For $\mathcal{M}-R\to 0$ with $F\sim(\mathcal{M}-R)^{\beta}$ we have $\beta=0$, and a second solution $F\sim\log(\mathcal{M}-R)$. Only the $\beta=0$ solution gives a locally finite norm at $\mathcal{M}-R=0$.
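As a numerical illustration (my own sketch, not code from the text) of how these boundary behaviors select the masses: integrate (505) from $R\approx 0$ with the regular behavior $F\sim R^{\ell}$ and tune $\mathcal{M}$ so that the coefficient of the $\log(\mathcal{M}-R)$ solution vanishes, detected through $(\mathcal{M}-R)\,\partial_{R}F\to 0$ at $R\to\mathcal{M}$. All names and parameter choices below are mine.

```python
# Shooting sketch for the dimensionless glueball equation (505); assumptions:
# regular start F ~ R^ell at R -> 0, absence of the log(M - R) solution at
# R = M (the beta = 0 condition), detected via (M - R) F'(R) -> 0 there.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(R, y, M, ell):
    F, dF = y
    d2F = -(2.0/R - 1.0/(M - R))*dF \
          - (0.25*(M - R)**2 - 1.0/(R*(M - R)) - ell*(ell + 1)/R**2)*F
    return [dF, d2F]

def log_coeff(M, ell, eps=1e-4):
    # crude series start F ~ R^ell (higher-order terms neglected)
    y0 = [eps**ell, ell*eps**(ell - 1) if ell > 0 else 0.0]
    sol = solve_ivp(rhs, [eps, M - eps], y0, args=(M, ell), rtol=1e-10, atol=1e-12)
    # F ~ a + b*log(M - R) near R = M gives (M - R) F' -> -b
    return (M - sol.t[-1])*sol.y[1, -1]

ell = 0
Ms = np.linspace(2.0, 12.0, 200)
vals = [log_coeff(M, ell) for M in Ms]
roots = [brentq(log_coeff, Ms[i], Ms[i + 1], args=(ell,))
         for i in range(len(Ms) - 1) if vals[i]*vals[i + 1] < 0]
print("lowest ell = 0 eigenvalues  M/sqrt(V_g') =", np.round(roots, 3))
```

Sign changes of the shooting function bracket the eigenvalues; repeating the scan for several $\ell$ should trace out the trajectories of Fig. 22.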
These constraints on the solutions of (505) at $R=0$ and $R=\mathcal{M}$ determine the allowed masses $\mathcal{M}$. The glueball states lie on approximately linear Regge and daughter trajectories (Fig. 22). At ${\cal O}\left(\alpha_{s}^{0}\right)$ the spectrum is independent of the vector indices $i,j$ of the wave function in (496). ### VIII.6 Spontaneous breaking of chiral symmetry Let us now return to the bound state equation (401) of a meson ($q\bar{q}$) state in the rest frame, $\displaystyle i\boldsymbol{\nabla}\cdot\left\\{{{\boldsymbol{\alpha}}},{\Phi({\boldsymbol{x}})}\right\\}+m\left[{\gamma^{0}},{\Phi({\boldsymbol{x}})}\right]=\big{[}M-V({\boldsymbol{x}})\big{]}\Phi({\boldsymbol{x}})$ (506) where the ${\cal O}\left(\alpha_{s}^{0}\right)$ potential is linear, $V=V^{\prime}|{\boldsymbol{x}}|$ (366). In section VIII.4.3 we saw that the bound state mass spectrum is determined by the requirement that $|\Phi({\boldsymbol{x}})|^{2}$ is integrable at $r=|{\boldsymbol{x}}|=0$ and at $M-V^{\prime}r=0$. The latter condition is the generalization of the standard condition that the solutions of the Schrödinger equation are normalizable. So far I have not discussed the special case of $M=0$, in which the constraints at $r=0$ and $M-V^{\prime}r=0$ coincide Dietrich _et al._ (2013); Hoyer (2018). Such solutions have vanishing four-momentum in all frames. This allows the $J^{PC}=0^{++}$ state to mix with the ground state (vacuum) without violating Poincaré invariance. The chiral symmetry of the QCD action for massless quarks is then not manifest in the states, as in a spontaneous breakdown of chiral invariance. In this preliminary study I consider two degenerate flavors, $m_{u}=m_{d}=m$. Suppressing color and Dirac indices the states (395) are $\displaystyle\left|{M,i}\right\rangle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{q}({\boldsymbol{x}}_{1})\Phi({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\tau^{i}\,q({\boldsymbol{x}}_{2})\left|{0}\right\rangle\hskip 56.9055ptq=\left(\begin{array}[]{c}u\\\\[5.69054pt] d\end{array}\right)$ (509) where the $\tau^{i}$ are Pauli matrices for isospin $I=1$ states and $\tau^{i}\to 1$ for $I=0$. #### VIII.6.1 $M=0$ states with vanishing quark mass $m=0$ The $J^{PC}=0^{++}$, $I=0$ “$\sigma$” wave function may be expressed in the general form (406), with three Dirac structures allowed by (413) for $j=0$ (the present radial functions $H_{i}$, $i=1,2,3$, are distinct from the functions $H_{1},H_{2}$ in (436) and (437)), $\displaystyle\Phi_{\sigma}({\boldsymbol{x}})=H_{1}(r)+i\,{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,H_{2}(r)+i\,\gamma^{0}{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,H_{3}(r)$ (510) where $\hat{\boldsymbol{x}}={\boldsymbol{x}}/r$. For $M=m=0$ the BSE (506) is $i\boldsymbol{\nabla}\cdot\left\\{{{\boldsymbol{\alpha}}},{\Phi_{\sigma}}\right\\}+V^{\prime}r\,\Phi_{\sigma}=0$, and requires (a more detailed derivation is given in Exercise A.24 for $m\neq 0$)
$\displaystyle i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}H_{1}^{\prime}-\frac{2}{r}H_{2}-H_{2}^{\prime}+{\textstyle\frac{1}{2}}V^{\prime}r\,\Phi_{\sigma}=0$ (511) The coefficients of the three Dirac structures impose $H_{2}=-(2/V^{\prime}r)H_{1}^{\prime},\ H_{3}=0$ and $\displaystyle H_{1}^{\prime\prime}(r)+\frac{1}{r}H_{1}^{\prime}(r)+{\textstyle\frac{1}{4}}(V^{\prime}r)^{2}H_{1}(r)=0$ (512) This differential equation has an analytic solution, giving the wave function $\displaystyle\Phi_{\sigma}({\boldsymbol{x}})$ $\displaystyle=N\big{[}J_{0}({\textstyle\frac{1}{4}}V^{\prime}r^{2})+i\,{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,J_{1}({\textstyle\frac{1}{4}}V^{\prime}r^{2})\big{]}\hskip 56.9055pt(m=M=0)$ (513) where $N$ is a normalization constant and $J_{0},\,J_{1}$ are Bessel functions. The $\sigma$ state (509), $\displaystyle\left|{\sigma}\right\rangle=\hat{\sigma}\left|{0}\right\rangle\hskip 56.9055pt\hat{\sigma}=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{q}({\boldsymbol{x}}_{1})\Phi_{\sigma}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})q({\boldsymbol{x}}_{2})$ (514) has $M={\boldsymbol{P}}=0$, i.e., vanishing 4-momentum in all frames. When mixed with the vacuum it causes a spontaneous breaking of chiral symmetry. The $I=1$ (non-anomalous) chiral transformations are generated by $\displaystyle Q_{5i}=\int d{\boldsymbol{x}}q^{\dagger}({\boldsymbol{x}})\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q({\boldsymbol{x}})$ (515) which transform the $\sigma$ state into a Goldstone boson (pion) $\displaystyle i\left[{Q_{5i}},{\hat{\sigma}}\right]$ $\displaystyle=\hat{\pi}_{i}\hskip 76.82234pt\hat{\pi}_{i}=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{q}({\boldsymbol{x}}_{1})\Phi_{\pi}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\tau^{i}q({\boldsymbol{x}}_{2})$ (516) $\displaystyle i\left[{Q_{5i}},{\hat{\pi}_{j}}\right]$ $\displaystyle=-\delta_{ij}\,\hat{\sigma}\hskip 56.9055pt\Phi_{\pi}({\boldsymbol{x}})=-i\Phi_{\sigma}({\boldsymbol{x}})\gamma_{5}$ Like $\hat{\sigma}\left|{0}\right\rangle$, the state $\hat{\pi}_{i}\left|{0}\right\rangle$ is an eigenstate of the ${\cal O}\left(\alpha_{s}^{0}\right)$ Hamiltonian with vanishing 4-momentum in all frames. Chiral transformations thus transform a vacuum state with a $\sigma$ condensate into another one with a mixture of zero-mass pions. It is common to describe the pion in terms of a local field $\varphi_{\pi,i}$, which is a good approximation at low momentum transfers. Then, recalling that $\left|{\pi_{j}}\right\rangle=\hat{\pi}_{j}\left|{0}\right\rangle$ is time independent and normalized as in (513) and (516), $\displaystyle\langle{0}|\varphi_{\pi,i}(x)\left|{\pi_{j}}\right\rangle$ $\displaystyle=\delta_{ij}$ $\displaystyle\varphi_{\pi,i}(x)$ $\displaystyle=-\frac{i}{4N}\bar{q}(x)\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q(x)$ (517) Spontaneous chiral symmetry breaking implies that the Goldstone bosons $\left|{\pi_{j}}\right\rangle$ are annihilated by the axial vector current $j^{\mu}_{5i}(x)=\bar{q}(x)\gamma^{\mu}\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q(x)$ and by its divergence $\partial_{\mu}j_{5i}^{\mu}(x)=2im\,\bar{q}(x)\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q(x)$. 
With $f_{\pi}\simeq 93$ MeV, $\displaystyle\langle{0}|\bar{q}(x)\gamma^{\mu}\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q(x)\,\left|{\pi_{j},P}\right\rangle$ $\displaystyle=i\,\delta_{ij}P^{\mu}\,f_{\pi}\,e^{-iP\cdot x}$ (518) $\displaystyle\langle{0}|\bar{q}(x)\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q(x)\,\left|{\pi_{j},P}\right\rangle$ $\displaystyle=-i\,\delta_{ij}\frac{M_{\pi}^{2}}{2m}\,f_{\pi}\,e^{-iP\cdot x}$ (519) The two relations are mutually consistent: applying $\partial_{\mu}$ to (518) and using $\partial_{\mu}j_{5i}^{\mu}=2im\,\bar{q}\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q$ reproduces (519). Since the Goldstone pion has $P^{\mu}=0$ in all reference frames, the rhs. of (518) vanishes. The lhs. also vanishes: $-iN\mathrm{Tr}\,(\gamma^{\mu}\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}\gamma^{0}J_{0}(0)\gamma_{5}\tau^{j}\gamma^{0})=iN\delta^{ij}\mathrm{Tr}\,(\gamma^{\mu})=0$. The rhs. of (519) is finite in the $m\to 0$ limit, since $M_{\pi}^{2}\propto m$ Gell-Mann _et al._ (1968). We have then $\displaystyle\langle{0}|\bar{q}(x)\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q(x)\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{q}({\boldsymbol{x}}_{1})\Phi_{\pi}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\tau^{j}q({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ $\displaystyle=-iN\mathrm{Tr}\,(\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}\gamma^{0}\gamma_{5}\tau^{j}\gamma^{0})=4iN\,\delta^{ij}=-i\frac{M_{\pi}^{2}}{2m}\,f_{\pi}\,\delta^{ij}$ This determines the normalization of the pion wave function, $\displaystyle\Phi_{\pi}({\boldsymbol{x}})=i\,\frac{M_{\pi}^{2}}{8m}\,f_{\pi}\big{[}J_{0}({\textstyle\frac{1}{4}}V^{\prime}r^{2})+i\,{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,J_{1}({\textstyle\frac{1}{4}}V^{\prime}r^{2})\big{]}\gamma_{5}$ (520) #### VIII.6.2 Finite quark mass $m_{u}=m_{d}=m\neq 0$ The pion becomes massive ($M_{\pi}>0$) through the explicit breaking of chiral symmetry by a non-vanishing quark mass $m\neq 0$. The $0^{++}$ $\sigma$ must remain massless ($M_{\sigma}=0$) to ensure that its mixing with the vacuum does not break Poincaré invariance. The wave function $\Phi_{\sigma}({\boldsymbol{x}})$ which solves the BSE (506) with $M=0$ but $m\neq 0$ has the structure of (510) with $\displaystyle H_{1}(r)$ $\displaystyle=Ne^{-iV^{\prime}r^{2}/4}\Big{\\{}\Big{[}1-\frac{2m^{2}}{(V^{\prime}r)^{2}}\Big{]}L_{{\textstyle\frac{im^{2}}{2V^{\prime}}}-{\textstyle\frac{1}{2}}}\big{(}{\textstyle\frac{1}{2}}iV^{\prime}r^{2}\big{)}+\frac{2m^{2}}{(V^{\prime}r)^{2}}L_{{\textstyle\frac{im^{2}}{2V^{\prime}}}+{\textstyle\frac{1}{2}}}\big{(}{\textstyle\frac{1}{2}}iV^{\prime}r^{2}\big{)}\Big{\\}}=N\big{[}J_{0}({\textstyle\frac{1}{4}}V^{\prime}r^{2})+{\cal O}\left(m^{2}\right)\big{]}$ $\displaystyle H_{2}(r)$ $\displaystyle=Ne^{-iV^{\prime}r^{2}/4}\Big{\\{}\Big{[}\frac{2}{V^{\prime}r^{2}}-i\Big{]}L_{{\textstyle\frac{im^{2}}{2V^{\prime}}}-{\textstyle\frac{1}{2}}}\big{(}{\textstyle\frac{1}{2}}iV^{\prime}r^{2}\big{)}-\frac{2}{V^{\prime}r^{2}}L_{{\textstyle\frac{im^{2}}{2V^{\prime}}}+{\textstyle\frac{1}{2}}}\big{(}{\textstyle\frac{1}{2}}iV^{\prime}r^{2}\big{)}\Big{\\}}=N\big{[}J_{1}({\textstyle\frac{1}{4}}V^{\prime}r^{2})+{\cal O}\left(m^{2}\right)\big{]}$ $\displaystyle H_{3}(r)$ $\displaystyle=-\frac{2m}{V^{\prime}r}\,H_{2}(r)$ (521) where the $L_{n}(x)$ are Laguerre functions. Exercise A.24: Verify the expressions (VIII.6.2) for the radial functions $H_{1}(r),H_{2}(r)$ and $H_{3}(r)$.
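A numerical spot-check is simpler than the full verification: in the chiral limit the expressions reduce to the Bessel functions of (513), and finite differences confirm that $J_{0}({\textstyle\frac{1}{4}}V^{\prime}r^{2})$ solves (512). This is my own sketch (with the arbitrary choice $V^{\prime}=1$), not code from the text:

```python
# Verify by central differences that H1(r) = J0(V'r^2/4) satisfies (512):
# H1'' + H1'/r + (V' r)^2 H1 / 4 = 0.
import numpy as np
from scipy.special import jv

Vp, h = 1.0, 1e-5
r = np.linspace(0.3, 5.0, 9)
H1 = lambda x: jv(0, Vp*x**2/4)
d1 = (H1(r + h) - H1(r - h))/(2*h)
d2 = (H1(r + h) - 2*H1(r) + H1(r - h))/h**2
print(np.max(np.abs(d2 + d1/r + 0.25*(Vp*r)**2*H1(r))))  # ~1e-5: zero within FD accuracy
```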
The pion state in the rest frame is $\displaystyle\left|{\pi,i}\right\rangle=\hat{\pi}_{i}\left|{0}\right\rangle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{q}({\boldsymbol{x}}_{1})\Phi_{\pi}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\tau^{i}q({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (522) The pion wave function has the form given in (415), where $F_{3}=0$ for $j=0$. With the notational change $F_{2}\to F_{2}/r$ this means $\displaystyle\Phi_{\pi}({\boldsymbol{x}})=\Big{[}F_{1}(r)+i\,{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,F_{2}(r)+\gamma^{0}\,F_{4}(r)\Big{]}\gamma_{5}$ (523) Using this in the bound state equation (506) and collecting terms with the same Dirac structure, we get the conditions: $\displaystyle\gamma_{5}:\hskip 14.22636pt$ $\displaystyle-\frac{2}{r}F_{2}-F_{2}^{\prime}+mF_{4}={\textstyle\frac{1}{2}}(M_{\pi}-V)F_{1}$ $\displaystyle i\,{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,\gamma_{5}:\hskip 14.22636pt$ $\displaystyle F_{1}^{\prime}={\textstyle\frac{1}{2}}(M_{\pi}-V)F_{2}$ $\displaystyle\gamma^{0}\gamma_{5}:\hskip 14.22636pt$ $\displaystyle mF_{1}={\textstyle\frac{1}{2}}(M_{\pi}-V)F_{4}$ (524) Eliminating $F_{2}$ and $F_{4}$ gives the radial equation $\displaystyle F_{1}^{\prime\prime}+\Big{(}\frac{2}{r}+\frac{V^{\prime}}{M_{\pi}-V}\Big{)}F_{1}^{\prime}+\big{[}{\textstyle\frac{1}{4}}(M_{\pi}-V)^{2}-m^{2}\big{]}F_{1}=0$ (525) For the regular solution $F_{1}(r\to 0)\sim r^{0}[1+{\cal O}\left(r^{2}\right)]$, $F_{2}(r\to 0)\sim r$ and $F_{4}(0)=F_{1}(0)\,2m/M_{\pi}$. Thus $\displaystyle\Phi_{\pi}^{(0)}({\boldsymbol{x}}=0)=F_{1}(0)\Big{(}1+\frac{2m}{M_{\pi}}\gamma^{0}\Big{)}\gamma_{5}$ (526) The superscript on $\Phi_{\pi}$ indicates that this is the rest frame wave function. In a frame with ${\boldsymbol{P}}\neq 0$ the pion state takes the form (455) at $t=0$. For a general time, suppressing color and Dirac indices, $\displaystyle\left|{M_{\pi},i,{\boldsymbol{P}},t}\right\rangle=e^{-iP^{0}t}\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{q}(t,{\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\pi}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\tau^{i}\,q(t,{\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (527) where $P^{0}=\sqrt{{\boldsymbol{P}}^{2}+M_{\pi}^{2}}$. The axial current identities (518) and (519) probe the pion state at $t=x^{0}$ and ${\boldsymbol{x}}_{1}={\boldsymbol{x}}_{2}={\boldsymbol{x}}$, involving $\Phi_{\pi}^{({\boldsymbol{P}})}(0)$. Since $V^{\prime}r=0$ at $r=0$, the frame dependence of the wave function at the origin is that of a non-interacting state, given by (459).
The wave function of a pion with momentum ${\boldsymbol{P}}={\boldsymbol{\hat{\xi}}}\,M_{\pi}\sinh|\boldsymbol{\xi}|$ is then, at the origin, $\displaystyle\Phi_{\pi}^{({\boldsymbol{P}})}({\boldsymbol{x}}=0)$ $\displaystyle=e^{-\boldsymbol{\xi}\cdot{\boldsymbol{\alpha}}/2}\Phi_{\pi}^{(0)}({\boldsymbol{x}}=0)e^{\boldsymbol{\xi}\cdot{\boldsymbol{\alpha}}/2}=F_{1}(0)\Big{(}1+\frac{2m}{M_{\pi}}\gamma^{0}e^{\boldsymbol{\xi}\cdot{\boldsymbol{\alpha}}}\Big{)}\gamma_{5}=F_{1}(0)\Big{[}1+\frac{2m}{M_{\pi}^{2}}(\gamma^{0}P^{0}+{\boldsymbol{P}}\cdot\boldsymbol{\gamma})\Big{]}\gamma_{5}$ $\displaystyle\gamma^{0}\Phi_{\pi}^{({\boldsymbol{P}})}(0)\gamma^{0}$ $\displaystyle=-F_{1}(0)\Big{[}1+\frac{2m}{M_{\pi}^{2}}\not{P}\Big{]}\gamma_{5}$ (528) The CSB matrix elements become $\displaystyle\langle{0}|\bar{q}(x)\gamma^{\mu}\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q(x)\,\left|{M_{\pi},j,{\boldsymbol{P}},x^{0}}\right\rangle$ $\displaystyle=\delta_{ij}e^{-iP\cdot x}\,\mathrm{Tr}\,\big{\\{}\gamma^{\mu}\gamma_{5}\gamma^{0}\Phi_{\pi}^{({\boldsymbol{P}})}(0)\gamma^{0}\big{\\}}=\delta_{ij}e^{-iP\cdot x}F_{1}(0)\,\frac{2m}{M_{\pi}^{2}}\,4P^{\mu}$ $\displaystyle\langle{0}|\bar{q}(x)\gamma_{5}{\textstyle\frac{1}{2}}\tau^{i}q(x)\,\left|{M_{\pi},j,{\boldsymbol{P}},x^{0}}\right\rangle$ $\displaystyle=\delta_{ij}e^{-iP\cdot x}\,\mathrm{Tr}\,\big{\\{}\gamma_{5}\gamma^{0}\Phi_{\pi}^{({\boldsymbol{P}})}(0)\gamma^{0}\big{\\}}=\delta_{ij}e^{-iP\cdot x}(-4)F_{1}(0)$ (529) Comparing with the rhs. of relations (518) and (519) gives in both cases $\displaystyle F_{1}(0)=i\,\frac{M_{\pi}^{2}}{8m}\,f_{\pi}$ (530) in agreement with our previous result (520) for $m=0$. I leave to the future a more comprehensive study of spontaneous chiral symmetry breaking in the present context. ## IX Bound state epilogue I conclude with several remarks on bound states. The more subjective ones may serve to stimulate further discussion on these topics. Confinement is an essential aspect of hadrons, and has been demonstrated in lattice QCD. Analytic approaches to confinement are often formulated in terms of quark and gluon Green functions, aiming to show that colored states are unphysical. This deals directly with the fundamental fields of the theory, and benefits from the accumulated experience with (non-)perturbative methods for local fields. A drawback is that one needs to prove that something does not exist, with little guidance from data. Here I try to approach confinement from the opposite direction, in terms of the color singlet bound states that are the asymptotic states of QCD. A main challenge is that bound states are extended objects, and more difficult to deal with than the pointlike quarks and gluons. The experience gained from QED atoms is valuable, even though it does not address confinement. Experts regard atoms as “non-perturbative” Bodwin _et al._ (1985), yet use PQED for precise evaluations of binding energies. This cautions against rushing to judgement based on the non-perturbative nature of hadrons. The hadron spectrum has surprising “atomic” features, despite the large binding energies. Why can hadrons be classified by their valence quarks and $J^{PC}$ quantum numbers only Zyla _et al._ (2020)? Their gluon and sea quark constituents do not feature in the spectrum as clearly as they do in deep inelastic scattering. Why do hadron decays obey the OZI rule Okubo (1963); Zweig (1964); Iizuka (1966), e.g., favor $\phi(1020)\to K\bar{K}$ over $\phi(1020)\to\pi\pi\pi$?
What causes quark-hadron duality, which in various guises pervades hadron dynamics? These features are pictured by dual diagrams (Harari (1969); Rosner (1969); Zweig (2015) and Fig. 19 a) which show only valence quarks, no gluons. We lack an understanding based on QCD. Simple features are precious: experience shows that they often have correspondingly simple explanations. Taking the data at face value limits the options in choosing the approach. An explanation based on QCD needs to use perturbation theory, which is our only general analytic method. Perturbative methods are successful for the atomic spectrum as well as for hard scattering in QCD. Addressing bound states in motion with a perturbative expansion is possible in QED and conceivable in QCD. Imposing restrictive conditions on an explicit yet formally exact method may reveal the QCD solution, or the theoretical inconsistency of such conditions. Quarks and gluons do not move faster than light. This implies retarded interactions between bound state constituents. In a Fock state expansion the retardation is described by higher Fock states, with gluons “on their way” between valence quarks. For the non-relativistic constituents of atoms, photon exchange is almost instantaneous, so retardation is a higher-order, relativistic correction. For light, relativistic quarks retarded interactions would be expected to be prominent and higher Fock states significant. Yet data suggests that most hadrons may be viewed as $q\bar{q}$ and $qqq$ states. Is it conceivable that the valence quark Fock states dominate also for light hadrons? Gauge theories can, depending on the choice of gauge, have instantaneous interactions. The absence of the $\partial_{t}A^{0}$ and $\boldsymbol{\nabla}\cdot{\boldsymbol{A}}_{L}$ terms in the action means that $A^{0}$ and the longitudinal ${\boldsymbol{A}}_{L}$ fields do not propagate in time and space. Their values are determined by the gauge, which can be fixed over all space at an instant of time. This is illustrated by a comparison of the $A^{0}$ propagator in the Coulomb and Landau gauges, $\displaystyle D_{C}^{00}(q^{0},{\boldsymbol{q}})$ $\displaystyle=i\,\frac{1}{{\boldsymbol{q}}^{2}}\hskip 73.97733pt\mbox{Coulomb gauge:}\ \boldsymbol{\nabla}\cdot{\boldsymbol{A}}=0$ $\displaystyle D_{L}^{00}(q^{0},{\boldsymbol{q}})$ $\displaystyle=-i\frac{1-(q^{0})^{2}/q^{2}}{q^{2}+i\varepsilon}\hskip 28.45274pt\mbox{Landau gauge:}\ \ \partial_{\mu}A^{\mu}=0$ (531) The Coulomb gauge propagator is independent of $q^{0}$ and thus $\propto\delta(t)$ after a Fourier transform $q^{0}\to t$. The gauge fixing term $\propto(\partial_{\mu}A^{\mu})^{2}$ of Landau gauge adds the missing derivatives $\partial_{t}A^{0}$ and $\boldsymbol{\nabla}\cdot{\boldsymbol{A}}_{L}$ to the action, allowing all components of $A^{\mu}$ to propagate. The free field boundary condition of the perturbative $S$-matrix at $t=\pm\infty$ is also covariant, resulting in an explicitly Poincaré invariant expansion of scattering amplitudes in terms of Feynman diagrams in Landau gauge. Bound states defined at an instant of time retain explicit symmetry only under space translations and rotations (in the rest frame).
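To make the instantaneity explicit (a one-line step of my own, not in the text): Fourier transforming the Coulomb gauge propagator in (531) to time gives $\displaystyle\int\frac{dq^{0}}{2\pi}\,e^{-iq^{0}t}\,D_{C}^{00}(q^{0},{\boldsymbol{q}})=\frac{i}{{\boldsymbol{q}}^{2}}\,\delta(t)$ whereas $D_{L}^{00}$ depends on $q^{0}$, so Landau gauge $A^{0}$ exchange is spread out in time.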
Coulomb gauge maintains these symmetries, sets ${\boldsymbol{A}}_{L}=0$ and determines $A^{0}$ through Gauss’ law, $\displaystyle\frac{\delta\mathcal{S}_{QED}}{\delta A^{0}(x)}=-\boldsymbol{\nabla}^{2}A^{0}(t,{\boldsymbol{x}})-e\psi^{\dagger}\psi(t,{\boldsymbol{x}})=0$ (532) The positions of the charges at any time $t$ determine $A^{0}$ instantaneously at all positions ${\boldsymbol{x}}$. This gives the Coulomb potential $V(r)=-\alpha/r$ for Positronium, which is the dominant interaction in atomic rest frames. The $e^{-}e^{+}$ Fock state wave function is determined by the Schrödinger equation, and other Fock states are suppressed by powers of $\alpha$. These features make Coulomb gauge a common choice for bound state calculations. Quantization in Coulomb gauge is complicated by the absence of a conjugate field for $A^{0}$, see Feinberg (1978); Christ and Lee (1980); Weinberg (2005). Gauss’ law (532) is an operator relation which defines $A^{0}$ in terms of $\psi^{\dagger}\psi$ as a non-local quantum field. The classical potential of the Schrödinger equation given by $A^{0}$ in Coulomb gauge is more simply realized in temporal gauge, $A^{0}=0$. Quantization is straightforward in temporal gauge, since ${\boldsymbol{A}}_{L}$ does have a conjugate field, ${\boldsymbol{E}}_{L}$. Gauss’ law is no longer an operator equation of motion since $A^{0}$ is fixed. Physical Fock states are defined to be invariant under time independent gauge transformations (which preserve $A^{0}=0$). This constrains the value of ${\boldsymbol{E}}_{L}$ for each physical state to be such that Gauss’ law is satisfied (Willemsen (1978); Bjorken (1979); Christ and Lee (1980); Leibbrandt (1987); Strocchi (2013) and section V), giving rise to the classical potential. In a perturbative approach the classical potential is weak, being proportional to $\alpha$ or ${\alpha_{s}}$. The QCD action has no parameter like $\Lambda_{QCD}\sim 1$ fm-1 for the confinement scale. However, Gauss’ law determines ${\boldsymbol{E}}_{L}^{a}$ only up to a boundary condition. In section V.3.2 I consider adding a homogeneous solution (162) to the gluon exchange term. This gives rise to a spatially constant gluon field energy density (365) for each color component of a Fock state. Being a color octet, ${\boldsymbol{E}}_{L}^{a}$ cancels in the sum over the color components of singlet Fock states, avoiding long range effects. The homogeneous solution generates a confining potential of ${\cal O}\left(\alpha_{s}^{0}\right)$. In section VIII I determined the potential for various Fock states and made first checks of the ${\cal O}\left(\alpha_{s}^{0}\right)$ dynamics. Essential and non-trivial features include the gauge invariance of electromagnetic form factors and the correct dependence of the bound state energies on the CM momentum, $E({\boldsymbol{P}})$. String breaking and duality can arise through an overlap between single and multiple bound states as suggested by dual diagrams (Fig. 19 a). There is a $J^{PC}=0^{++}$ solution with $P^{\mu}=0$ in all frames whose mixing with the vacuum allows solutions with spontaneously broken chiral symmetry. A non-vanishing field energy density would explain why confinement is not seen in Feynman diagrams, which expand around free states. However, the homogeneous solution (162) represents a major departure from previous experience. It requires much further study, both of theoretical consistency and phenomenological relevance. ###### Acknowledgements. 
During my work on the topics presented here I have benefited from the collaboration and advice of many colleagues. Particular thanks are due to Jean-Paul Blaizot, Stan Brodsky, Dennis D. Dietrich, Matti Järvinen, Stephane Peigné and Johan Rathsman. I am grateful for the hospitality of the University of Pavia, and the stimulating response to my lectures there in early 2020. During earlier stages of this project I have enjoyed visits of a month or more to ECT* (Trento), CERN-TH (Geneva), CP3 (Odense), NIKHEF (Amsterdam), IPhT (Saclay) and GSI (Darmstadt). I have the privileges of Professor Emeritus at the Physics Department of Helsinki University. Travel grants from the Magnus Ehrnrooth Foundation have allowed me to maintain contacts and present my research to colleagues. ## Appendix A Solutions to exercises ### A.1 Order of box diagram The leading contribution comes from the range of the loop integral $\int d^{4}\ell$ where $\ell^{0}\sim\alpha^{2}m$ and $|{\boldsymbol{\ell}}|\sim\alpha m$. Thus $\int d^{4}\ell\sim\alpha^{5}$. The fermions are off-shell by $E_{p_{1}}-m\sim\alpha^{2}$, as are the photons, $q^{2}=(q^{0})^{2}-{\boldsymbol{q}}^{2}\simeq-{\boldsymbol{q}}^{2}\sim\alpha^{2}$. Considering also the factor $e^{4}\sim\alpha^{2}$ from the vertices, the power of $\alpha$ is altogether $5+4\times(-2)+2=-1$ (four propagators, each $\sim\alpha^{-2}$). ### A.2 Contribution of the diagrams in Fig. 11(b,c) As in Ex. A.1, the leading contribution comes from the range of the loop integral $\int d^{4}\ell$ where $\ell$ is of the order of the bound state momenta. Then the Dirac structures simplify as in $A_{1}$ of (27), giving $(2m)^{4}$. The two photon propagators reduce to the $\ell^{0}$-independent Coulomb exchanges contained in $A_{1}({\boldsymbol{p}},{\boldsymbol{\ell}})$ and $V({\boldsymbol{\ell}}-{\boldsymbol{q}})$ of (28). The fermion propagators with momenta $\ell$ and $\ell-p_{1}-p_{2}\equiv\ell-P$ may be expressed as in (16), keeping only the terms with the electron and positron poles. The relevant part of the $\ell^{0}$ integral is then $\displaystyle\int\frac{d\ell^{0}}{2\pi}\,\frac{1}{\ell^{0}-E_{\ell}+i\varepsilon}\,\frac{1}{\ell^{0}-P^{0}+E_{\ell}-i\varepsilon}=\frac{i}{P^{0}-2E_{\ell}+i\varepsilon}$ (A.533) Noting that the factor of $1/2E_{\ell}$ in each fermion propagator (16) gives $1/(2m)^{2}$, we arrive at (28). In the crossed diagram (c) of Fig. 11 the second factor in (A.533) is $1/(q_{1}^{0}-p_{2}^{0}-\ell^{0}+E-i\varepsilon)$. The pole in $\ell^{0}$ now has ${\rm Im}\,\ell^{0}<0$ as in the first factor of (A.533). This allows closing the contour in the ${\rm Im}\,\ell^{0}>0$ hemisphere, giving no leading-order contribution. In fact only the ladder diagrams shown in Fig. 10 contribute at leading ${\cal O}\left(1/\alpha\right)$. They generate the classical field of the bound state. ### A.3 Derivation of (47) For any (finite) momentum exchange the antifermion energy in the loops of Fig. 11(b,c) is $\bar{E}=m_{T}+{\cal O}\left(1/m_{T}\right)$. Since $p_{2}^{0}=m_{T}$ we need only retain the negative energy pole term in the antifermion propagators (cf. (16)), in which $-p_{2}^{0}+\bar{E}$ is of ${\cal O}\left(1/m_{T}\right)$. Putting the electron lines on-shell requires a non-vanishing energy transfer $\ell^{0}-p_{1}^{0}\neq 0$, which causes the antifermion propagator to be off-shell by ${\cal O}\left(m_{T}\right)$. The photon propagators are thus independent of $\ell^{0}$, e.g., $D^{00}(\ell-p_{1})=i/({\boldsymbol{\ell}}-{\boldsymbol{p}}_{1})^{2}$.
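As an aside, a quick numerical illustration of the pole integral (A.533) of Ex. A.2 (my own sketch; the parameter values are arbitrary assumptions):

```python
# Check (A.533) at small but finite epsilon: the two fermion poles lie on
# opposite sides of the real axis, and the integral -> i/(P0 - 2E + i*eps).
import numpy as np
from scipy.integrate import quad

E, P0, eps = 0.9, 2.1, 1e-3
f = lambda w: 1.0/((w - E + 1j*eps)*(w - P0 + E - 1j*eps))/(2*np.pi)
re = quad(lambda w: f(w).real, -500, 500, points=[E, P0 - E], limit=400)[0]
im = quad(lambda w: f(w).imag, -500, 500, points=[E, P0 - E], limit=400)[0]
print(re + 1j*im, 1j/(P0 - 2*E + 1j*eps))   # the two values agree
```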
The only relevant poles of the $\ell^{0}$ integration are in the antifermion propagators, $\displaystyle\int\frac{d\ell^{0}}{2\pi}\Big{[}\frac{i}{\ell^{0}-p_{1}^{0}-i\varepsilon}+\frac{i}{q_{1}^{0}-\ell^{0}-i\varepsilon}\Big{]}=i\int\frac{d\ell^{0}}{2\pi}\,2\pi i\delta(\ell^{0}-p_{1}^{0})=-1$ (A.534) The factor $-i$ at the antifermion vertices cancels with the $i$ of the Coulomb photon propagators. The standard rules for the electron line then give (47). In the diagram with three uncrossed photon exchanges with momenta $\ell_{1}-p_{1},\ \ell_{2}-\ell_{1}$ and $q_{1}-\ell_{2}$, the two antifermion propagators similarly give $\displaystyle\int\frac{d\ell_{1}^{0}\,d\ell_{2}^{0}}{(2\pi)^{2}}\,\frac{i^{2}}{(\ell_{1}^{0}-p_{1}^{0}-i\varepsilon)(\ell_{2}^{0}-q_{1}^{0}-i\varepsilon)}$ (A.535) The five other diagrams with crossed photons ensure the convergence of the integrals as in (A.534), and do not contribute when the integration contours of $\ell_{1}^{0}$ and $\ell_{2}^{0}$ are closed in the upper half plane. The full result is then due to (A.535), which equals 1 (given the convergence). If the contours are chosen differently, then the same result will come from diagrams with crossed photons. The factors associated with the electron line are again given by the standard rules, making the three-photon contribution equal to scattering from an external potential. ### A.4 Derivation of (64) Inserting the completeness condition for the Dirac wave functions, $\sum_{n}\big{[}\Psi_{n,\alpha}({\boldsymbol{x}})\Psi_{n,\beta}^{\dagger}({\boldsymbol{y}})+\overline{\Psi}_{n,\alpha}({\boldsymbol{x}})\overline{\Psi}_{n,\beta}^{\dagger}({\boldsymbol{y}})\big{]}=\delta_{\alpha\beta}\delta^{3}({\boldsymbol{x}}-{\boldsymbol{y}})$ (A.536) into the Dirac Hamiltonian (49) gives, recalling that the wave functions satisfy (44) and (45), $\displaystyle H_{D}$ $\displaystyle=$ $\displaystyle\int d{\boldsymbol{x}}\,d{\boldsymbol{y}}\,\bar{\psi}_{\alpha^{\prime}}({\boldsymbol{x}})\big{[}-i\boldsymbol{\nabla}_{\boldsymbol{x}}\cdot\boldsymbol{\gamma}+m+e\not{A}({\boldsymbol{x}})\big{]}_{\alpha^{\prime}\alpha}\sum_{n}\big{[}\Psi_{n,\alpha}({\boldsymbol{x}})\Psi_{n,\beta}^{\dagger}({\boldsymbol{y}})+\overline{\Psi}_{n,\alpha}({\boldsymbol{x}})\overline{\Psi}_{n,\beta}^{\dagger}({\boldsymbol{y}})\big{]}\psi_{\beta}({\boldsymbol{y}})$ (A.537) $\displaystyle=$ $\displaystyle\sum_{n}\int d{\boldsymbol{x}}\,d{\boldsymbol{y}}\,\psi_{\alpha}^{\dagger}({\boldsymbol{x}})\big{[}M_{n}\Psi_{n,\alpha}({\boldsymbol{x}})\Psi_{n,\beta}^{\dagger}({\boldsymbol{y}})-\bar{M}_{n}\overline{\Psi}_{n,\alpha}({\boldsymbol{x}})\overline{\Psi}_{n,\beta}^{\dagger}({\boldsymbol{y}})\big{]}\psi_{\beta}({\boldsymbol{y}})$ $\displaystyle=$ $\displaystyle\sum_{n}\big{[}M_{n}c_{n}^{\dagger}c_{n}-\bar{M}_{n}\bar{c}_{n}\bar{c}_{n}^{\dagger}\big{]}\to\sum_{n}\big{[}M_{n}c_{n}^{\dagger}c_{n}+\bar{M}_{n}\bar{c}_{n}^{\dagger}\bar{c}_{n}\big{]}$ In the last step I normal-ordered the operators, neglecting the zero-point energies according to (57).
### A.5 The expressions (65) for the vacuum state (a) The $B$ and $D$ coefficients defined in (IV.3) and (62) satisfy $B_{mp}\overline{B}_{np}+D_{mp}\overline{D}_{np}=\sum_{\boldsymbol{p}}\Psi_{m,\alpha}^{\dagger}({\boldsymbol{p}})\big{[}u_{\alpha}({\boldsymbol{p}},\lambda)u_{\beta}^{\dagger}({\boldsymbol{p}},\lambda)+v_{\alpha}(-{\boldsymbol{p}},\lambda)v_{\beta}^{\dagger}(-{\boldsymbol{p}},\lambda)\big{]}\overline{\Psi}_{n,\beta}({\boldsymbol{p}})=\int\frac{d{\boldsymbol{p}}}{(2\pi)^{3}}\Psi_{m,\alpha}^{\dagger}({\boldsymbol{p}})\overline{\Psi}_{n,\alpha}({\boldsymbol{p}})=0$ (A.538) Multiplying by $(B^{-1})_{qm}({\overline{D}}^{\,-1})_{rn}$ and summing over $m,n$ gives $-(B^{-1})_{qm}D_{mr}=({\overline{D}}^{\,-1})_{rn}{\overline{B}}_{nq}$ (A.539) Using also $b^{\dagger}_{q}d^{\dagger}_{r}=-d^{\dagger}_{r}b^{\dagger}_{q}$ shows the equivalence of the two expressions for the vacuum state in (65). (b) In order to verify that $c_{n}\left|{\Omega}\right\rangle=0$ we note that since $b_{p}$ essentially differentiates the exponent in $\left|{\Omega}\right\rangle$, $B_{nq}b_{q}\left|{\Omega}\right\rangle=-B_{nq}\big{(}B^{-1}\big{)}_{qm}D_{mr}d_{r}^{\dagger}\left|{\Omega}\right\rangle=-D_{nr}d_{r}^{\dagger}\left|{\Omega}\right\rangle$ (A.540) This cancels the contribution of the second term in the definition (IV.3) of $c_{n}$. The demonstration that $\bar{c}_{n}$ annihilates the vacuum is similar. Thus $c_{n}\left|{\Omega}\right\rangle=\bar{c}_{n}\left|{\Omega}\right\rangle=H_{D}\left|{\Omega}\right\rangle=0$ (A.541) ### A.6 Derivation of the identities (IV.4) We may start by evaluating (here I make no distinction between lower and upper indices) $\displaystyle(\hat{\boldsymbol{x}}\times{\boldsymbol{L}})^{i}=\epsilon_{ijk}\hat{x}^{j}\epsilon_{kln}x^{l}(-i\partial_{n})=(\delta_{il}\delta_{jn}-\delta_{in}\delta_{jl})\hat{x}^{j}x^{l}(-i\partial_{n})=-i\hat{x}^{i}r\partial_{r}+ir\partial_{i}$ (A.542) Multiplying by $\alpha^{i}/r$, we have the first relation, $\displaystyle\frac{1}{r}{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\times{\boldsymbol{L}}=-i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,\partial_{r}+i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}$ (A.543) The second identity: $\displaystyle-i({\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla})i({\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}})=\alpha^{i}\alpha^{j}\partial_{i}\frac{x^{j}}{r}=(\delta_{ij}+i\gamma_{5}\epsilon_{ijk}\alpha^{k})\Big{(}\frac{\delta^{ij}}{r}-\frac{x^{i}x^{j}}{r^{3}}+\frac{x^{j}}{r}\partial_{i}\Big{)}=\frac{2}{r}+\partial_{r}+\frac{1}{r}\gamma_{5}{\boldsymbol{\alpha}}\cdot{\boldsymbol{L}}$ (A.544) ### A.7 Derivation of (127) The momentum space wave function (60) for $j={\textstyle\frac{1}{2}}$ and positive parity is $\displaystyle\Psi_{1/2,\lambda,+}=2\pi\int_{0}^{\infty}dr\int_{-1}^{1}d\cos\theta\,r^{2}e^{-ipr\cos\theta}\big{[}F(r)+iG(r)\,\alpha\cdot\hat{\boldsymbol{p}}\,\cos\theta\big{]}\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)$ (A.547) I chose the $z$-axis of the ${\boldsymbol{x}}$-integration along ${\boldsymbol{p}}$. The integration over the azimuthal angle $\varphi$ leaves only the $\alpha_{z}$ component of ${\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}$.
Expressing the factor $\cos\theta$ as a derivative of $\exp(-ipr\cos\theta)$ and using $G=iF$ (which holds at large $r$, where the stationary point is located for large $p$), we have $\displaystyle\Psi_{1/2,\lambda,+}=2\pi\int_{0}^{\infty}dr\,r^{2}\,F(r)\big{(}1-\frac{i}{r}\,\alpha\cdot\hat{\boldsymbol{p}}\,\partial_{p}\big{)}\frac{i}{pr}\big{(}e^{-ipr}-e^{ipr}\big{)}\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)$ (A.550) For $p\to\infty$ the phase of $e^{\pm ipr}$ is rapidly oscillating, so the leading contribution comes from the region of the $r$-integration where the phase of the integrand is stationary. The stationary phase approximation is $\displaystyle\int dr\,f(r)e^{i\varphi(r)}\simeq e^{\varepsilon(\varphi^{\prime\prime}(r_{s}))i\pi/4}\,f(r_{s})\,e^{i\varphi(r_{s})}$ (A.551) The function $f(r)$ is assumed to be varying slowly compared to the phase $\exp[i\varphi(r)]$. The phase is stationary at $\varphi^{\prime}(r_{s})=0$ and $\varepsilon(x)=1\ (-1)$ for $x>0\ (x<0)$. (The prefactor $\sqrt{2\pi/|\varphi^{\prime\prime}(r_{s})|}$ of the stationary phase formula is omitted; below $\varphi^{\prime\prime}(r_{s})=V^{\prime}$ is independent of $p$, so it only affects the overall normalization.) According to (126) we have in the integral of (A.550) $\varphi(r)=V^{\prime}r^{2}/2\mp pr$. There is a stationary phase for $r>0$ only with the $\exp(-ipr)$ term, giving $r_{s}=p/V^{\prime}$, $\varphi(r_{s})=-p^{2}/2V^{\prime}$ and $\varphi^{\prime\prime}(r_{s})>0$. From (126) the contribution proportional to the unit Dirac matrix in (A.550) is then $\displaystyle\big{(}\Psi_{1/2,\lambda,+}\big{)}_{F}=\frac{2\pi iN}{p}\Big{(}\frac{p}{V^{\prime}}\Big{)}^{\beta+1}\,e^{-ip^{2}/2V^{\prime}}\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)$ (A.554) The leading term in the contribution $\propto\alpha\cdot\hat{\boldsymbol{p}}$ has $\partial_{p}\to-ip/V^{\prime}=-ir_{s}$, giving the full result $\displaystyle\Psi_{1/2,\lambda,+}=\frac{2\pi iN}{p}\Big{(}\frac{p}{V^{\prime}}\Big{)}^{\beta+1}(1-\alpha\cdot\hat{\boldsymbol{p}})\,e^{-ip^{2}/2V^{\prime}}\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)$ (A.557) Consider now the expressions for the $u$ and $v$ spinors for $|{\boldsymbol{p}}|\to\infty$, $\displaystyle u({\boldsymbol{p}},\lambda)$ $\displaystyle\equiv\frac{{\not{p}}+m}{\sqrt{E_{p}+m}}\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)\simeq\sqrt{|{\boldsymbol{p}}|}\,(1+\alpha\cdot\hat{\boldsymbol{p}})\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)\ \ \ (|{\boldsymbol{p}}|\to\infty)$ (A.562) $\displaystyle v({\boldsymbol{p}},\lambda)$ $\displaystyle\equiv\frac{-{\not{p}}+m}{\sqrt{E_{p}+m}}\left(\begin{array}[]{c}0\\\\[5.69054pt] \bar{\chi}_{\lambda}\end{array}\right)\simeq\sqrt{|{\boldsymbol{p}}|}\,(1+\alpha\cdot\hat{\boldsymbol{p}})\left(\begin{array}[]{c}0\\\\[5.69054pt] \bar{\chi}_{\lambda}\end{array}\right)\ \ \ (|{\boldsymbol{p}}|\to\infty)$ (A.567) Consequently, omitting a common factor in the distributions $e^{\pm}({\boldsymbol{p}},s)$ (IV.3), $\displaystyle e^{-}(p,s)$ $\displaystyle=u^{\dagger}({\boldsymbol{p}},s)\Psi_{1/2,\lambda,+}\sim\left(\begin{array}[]{cc}\chi_{s}^{\dagger}&0\end{array}\right)(1+\alpha\cdot\hat{\boldsymbol{p}})(1-\alpha\cdot\hat{\boldsymbol{p}})\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt] 0\end{array}\right)=0$ (A.571) $\displaystyle e^{+}(p,s)$ $\displaystyle=v^{\dagger}(-{\boldsymbol{p}},s)\Psi_{1/2,\lambda,+}\sim\left(\begin{array}[]{cc}0&\bar{\chi}_{s}^{\dagger}\end{array}\right)(1-\alpha\cdot\hat{\boldsymbol{p}})(1-\alpha\cdot\hat{\boldsymbol{p}})\left(\begin{array}[]{c}\chi_{\lambda}\\\\[5.69054pt]
0\end{array}\right)=-2\bar{\chi}_{s}^{\dagger}\,\boldsymbol{\sigma}\cdot\hat{\boldsymbol{p}}\,\chi_{\lambda}$ (A.575) which establishes (127). ### A.8 Gauge transformations generated by Gauss operator The unitary operator defined in (140), $\displaystyle U(t)=1+i\int d{\boldsymbol{y}}\,G(t,{\boldsymbol{y}})\delta\Lambda({\boldsymbol{y}})=1+i\int d{\boldsymbol{y}}\,\big{[}\partial_{i}E^{i}(t,{\boldsymbol{y}})-e\psi^{\dagger}\psi(t,{\boldsymbol{y}})\big{]}\delta\Lambda({\boldsymbol{y}})$ (A.576) transforms $A^{j}(t,{\boldsymbol{x}})$ as, $\displaystyle\delta A^{j}(t,{\boldsymbol{x}})$ $\displaystyle\equiv U(t)A^{j}(t,{\boldsymbol{x}})U^{-1}(t)-A^{j}(t,{\boldsymbol{x}})=i\left[{\int d{\boldsymbol{y}}\,\big{[}\partial_{i}E^{i}(t,{\boldsymbol{y}})-e\psi^{\dagger}\psi(t,{\boldsymbol{y}})\big{]}\delta\Lambda({\boldsymbol{y}})},{A^{j}(t,{\boldsymbol{x}})}\right]$ $\displaystyle=-i\left[{\int d{\boldsymbol{y}}\,E^{i}(t,{\boldsymbol{y}})\partial_{i}^{y}\delta\Lambda({\boldsymbol{y}})},{A^{j}(t,{\boldsymbol{x}})}\right]=\partial_{j}\delta\Lambda({\boldsymbol{x}})$ Similarly for the electron field, $\displaystyle\delta\psi(t,{\boldsymbol{x}})$ $\displaystyle\equiv U(t)\psi(t,{\boldsymbol{x}})U^{-1}(t)-\psi(t,{\boldsymbol{x}})=-ie\left[{\int d{\boldsymbol{y}}\,\psi^{\dagger}\psi(t,{\boldsymbol{y}})\delta\Lambda({\boldsymbol{y}})},{\psi(t,{\boldsymbol{x}})}\right]=ie\,\delta\Lambda({\boldsymbol{x}})\psi(t,{\boldsymbol{x}})$ (A.578) ### A.9 Derive (172). The anticommutation relation of the electron fields gives $\displaystyle\boldsymbol{\mathcal{J}}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}=0}\right\rangle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1})\Big{[}{\boldsymbol{J}}{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})-{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2}){\boldsymbol{J}}\Big{]}\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (A.579) ${\boldsymbol{J}}$ commutes with the $\Lambda_{\pm}$ projectors (VI.1.1), which have a rotationally invariant form. It is instructive to see how this works out explicitly. The commutator $\big{[}{{\overset{\rightarrow}{\Lambda}}_{-}},{{\boldsymbol{L}}+{\boldsymbol{S}}}\big{]}$ gets contributions from the derivatives in ${\overset{\rightarrow}{\Lambda}}_{-}$ differentiating ${\boldsymbol{x}}$ in ${\boldsymbol{L}}=-i{\boldsymbol{x}}\times\boldsymbol{\nabla}$, and from the commutators of the Dirac matrices in ${\overset{\rightarrow}{\Lambda}}_{-}$ with ${\boldsymbol{S}}={\textstyle\frac{1}{2}}\gamma_{5}{\boldsymbol{\alpha}}$. The $\boldsymbol{\nabla}^{2}$ in $E=\sqrt{-\boldsymbol{\nabla}^{2}+m^{2}}$ of ${\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}})$ commutes with $L^{i}$, $\displaystyle\left[{\partial_{j}\partial_{j}},{\varepsilon_{ik\ell}x^{k}\partial_{\ell}}\right]=\partial_{j}\varepsilon_{ij\ell}\partial_{\ell}+\varepsilon_{ij\ell}\partial_{\ell}\partial_{j}=0$ (A.580) since $\partial_{j}\partial_{\ell}=\partial_{\ell}\partial_{j}$, whereas $\varepsilon_{ij\ell}=-\varepsilon_{i\ell j}$. 
The commutator of $i{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}$ with $L^{i}$ (in my notation $\alpha_{j}=\alpha^{j}$), $\displaystyle\left[{i\alpha_{j}\partial_{j}},{-i\varepsilon_{ik\ell}x^{k}\partial_{\ell}}\right]=\varepsilon_{ij\ell}\alpha_{j}\partial_{\ell}$ (A.581) is cancelled by its commutator with $S^{i}$. Using $\alpha_{i}\alpha_{j}=\delta_{ij}+i\varepsilon_{ijk}\alpha_{k}\gamma_{5}$, $\displaystyle\left[{i\alpha_{j}\partial_{j}},{{\textstyle\frac{1}{2}}\gamma_{5}\alpha_{i}}\right]=-\varepsilon_{jik}\alpha_{k}\partial_{j}$ (A.582) Finally, $\left[{\gamma^{0}},{{\boldsymbol{S}}}\right]=0$. We see that the commutator $\big{[}{{\overset{\rightarrow}{\Lambda}}_{-}},{{\boldsymbol{J}}}\big{]}=0$ requires contributions from both ${\boldsymbol{L}}$ and ${\boldsymbol{S}}$. Similarly ${\boldsymbol{J}}$ commutes with other rotationally invariant structures. Bringing ${\boldsymbol{J}}$ through the $\Lambda_{\pm}$ projectors gives (172). The commutator with the orbital angular momentum ${\boldsymbol{L}}$ in (172) arises from $\displaystyle{\boldsymbol{x}}_{1}\times(-i{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1})\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})-\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\boldsymbol{x}}_{2}\times(i{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2})=({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\times(-i{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1})\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ (A.583) Hence in $\left[{{\boldsymbol{L}}},{\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})}\right]={\boldsymbol{x}}\times[-i\boldsymbol{\nabla}\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})]$ the ${\boldsymbol{x}}$-derivatives of ${\boldsymbol{L}}$ apply only to $\Phi_{\mathcal{B}}^{(0)}({\boldsymbol{x}})$. 
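A quick numerical check (my own, in the Dirac representation) of the identity $\alpha_{i}\alpha_{j}=\delta_{ij}+i\varepsilon_{ijk}\alpha_{k}\gamma_{5}$ used in (A.544) and (A.582):

```python
# Verify alpha_i alpha_j = delta_ij + i eps_ijk alpha_k gamma5 with explicit
# 4x4 matrices in the Dirac representation (gamma5 commutes with alpha_k).
import numpy as np

sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
Z, I = np.zeros((2, 2)), np.eye(2)
alpha = [np.block([[Z, s], [s, Z]]) for s in sig]
g5 = np.block([[Z, I], [I, Z]])
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
for i in range(3):
    for j in range(3):
        rhs = (i == j)*np.eye(4) + 1j*sum(eps[i, j, k]*alpha[k] @ g5 for k in range(3))
        assert np.allclose(alpha[i] @ alpha[j], rhs)
print("identity verified")
```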
### A.10 Derivation of (181) The transformation (179) of the fields under charge conjugation gives $\displaystyle\mathcal{C}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle=-\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\psi({\boldsymbol{x}}_{1})^{T}{\alpha_{2}}{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2}){\alpha_{2}}\bar{\psi}^{T}({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (A.584) Take the transpose on the rhs., recalling that the anticommutation of the fields gives a minus sign, $\displaystyle\mathcal{C}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$ $\displaystyle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{2}){\alpha_{2}}^{T}\frac{1}{2E_{2}}(E_{2}+i{\boldsymbol{\alpha}}^{T}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2}-\gamma^{0}m){\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}}^{T}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ $\displaystyle\times\frac{1}{2E_{1}}(E_{1}-i{\boldsymbol{\alpha}}^{T}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}+\gamma^{0}m){\alpha_{2}}^{T}\psi({\boldsymbol{x}}_{1})\left|{0}\right\rangle$ (A.585) Recalling that ${\alpha_{2}}{\boldsymbol{\alpha}}^{T}{\alpha_{2}}=-{\boldsymbol{\alpha}}$ and changing integration variables ${\boldsymbol{x}}_{1}\leftrightarrow{\boldsymbol{x}}_{2}$ we get $\displaystyle\mathcal{C}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1}){\alpha_{2}}{\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}}^{T}({\boldsymbol{x}}_{2}-{\boldsymbol{x}}_{1}){\alpha_{2}}{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (A.586) Comparing with the definition (165) of $\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$ we see that (180) implies (181). ### A.11 Verify (186). 
At lowest order in $\alpha$ the projectors $\Lambda_{\pm}$ in the definition (165) of the Positronium state may be expressed in terms of the CM momentum ${\boldsymbol{P}}$, by partial integration and ignoring the ${\cal O}\left(\alpha\right)$ contributions from differentiating $\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$, $\displaystyle\Lambda_{\pm}({\boldsymbol{P}})=\frac{1}{2E_{P}}\big{(}E_{P}\mp{\boldsymbol{\alpha}}\cdot{\boldsymbol{P}}\pm 2m\gamma^{0}\big{)}=\Lambda_{\pm}^{\dagger}({\boldsymbol{P}})=\big{[}\Lambda_{\pm}({\boldsymbol{P}})\big{]}^{2}\hskip 56.9055pt\Lambda_{+}({\boldsymbol{P}})\Lambda_{-}({\boldsymbol{P}})=0$ (A.587) Anticommuting the fields in $\langle{e^{-}e^{+};\mathcal{B}^{\prime},{\boldsymbol{P}}^{\prime}}|$ with those in $\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$ gives, using (183), $\displaystyle\langle{e^{-}e^{+};\mathcal{B}^{\prime},{\boldsymbol{P}}^{\prime}}\left|{e^{-}e^{+};\mathcal{B},{\boldsymbol{P}}}\right\rangle$ $\displaystyle=N_{\mathcal{B}^{\prime}}^{({\boldsymbol{P}}^{\prime})*}N_{\mathcal{B}}^{(P)}\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,e^{i({\boldsymbol{P}}-{\boldsymbol{P}}^{\prime})\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}{F^{({\boldsymbol{P}}^{\prime})}}^{*}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})F^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\mathrm{Tr}\,_{\mathcal{B},\mathcal{B}^{\prime}}$ $\displaystyle=\big{|}N_{\mathcal{B}}^{(P)}\big{|}^{2}(2\pi)^{3}\delta({\boldsymbol{P}}-{\boldsymbol{P}}^{\prime})\int d{\boldsymbol{x}}\,|F^{({\boldsymbol{P}})}({\boldsymbol{x}})|^{2}\,\mathrm{Tr}\,_{\mathcal{B},\mathcal{B}^{\prime}}$ (A.588) $\displaystyle\mathrm{Tr}\,_{\mathcal{B},\mathcal{B}^{\prime}}$ $\displaystyle=\mathrm{Tr}\,\big{\\{}\Gamma_{\mathcal{B}^{\prime}}^{\dagger}\Lambda_{+}({\boldsymbol{P}})\Gamma_{\mathcal{B}}\Lambda_{-}({\boldsymbol{P}})\big{\\}}$ Commuting $\Lambda_{-}({\boldsymbol{P}})$ through $\Gamma_{\mathcal{B}}$ gives for $\Gamma_{\mathcal{B}}=\gamma_{5}$ (Parapositronium) and $\Gamma_{\mathcal{B}}={\alpha_{3}}$ (Orthopositronium with $\lambda=0$), $\displaystyle\mathrm{Tr}\,_{\mathcal{B},\mathcal{B}^{\prime}}=\frac{2m}{E_{P}}\,\mathrm{Tr}\,\big{\\{}\Gamma_{\mathcal{B}^{\prime}}^{\dagger}\Lambda_{+}({\boldsymbol{P}})\gamma^{0}\Gamma_{\mathcal{B}}\big{\\}}=\frac{8m^{2}}{E_{P}^{2}}\delta_{\mathcal{B},\mathcal{B}^{\prime}}$ (A.589) while for $\Gamma_{\mathcal{B}}={\boldsymbol{e}}_{\pm}\cdot{\boldsymbol{\alpha}}$ (Orthopositronium with $\lambda=\pm 1$), $\displaystyle\mathrm{Tr}\,_{\mathcal{B},\mathcal{B}^{\prime}}=\mathrm{Tr}\,\big{\\{}\Gamma_{\mathcal{B}^{\prime}}^{\dagger}\Lambda_{+}({\boldsymbol{P}})\Gamma_{\mathcal{B}}\big{\\}}=2\delta_{\mathcal{B},\mathcal{B}^{\prime}}$ (A.590) Using these expressions for $\mathrm{Tr}\,_{\mathcal{B},\mathcal{B}^{\prime}}$ and the normalization (183) of $F^{({\boldsymbol{P}})}({\boldsymbol{x}})$ in (A.11), the normalization (168) of the state implies (186). ### A.12 Derive the expression for $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ in (VI.3.2).
The contribution to $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ from $\left[{\mathcal{H}_{int}},{\bar{\psi}({\boldsymbol{x}}_{1})}\right]$ is $\displaystyle e\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1}){\boldsymbol{\alpha}}\cdot{\boldsymbol{\varepsilon}}_{s}^{*}({\boldsymbol{q}})e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{1}}a^{\dagger}({\boldsymbol{q}},s){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (A.591) where I inserted ${\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})$ to select the $b^{\dagger}$ contribution in $\bar{\psi}({\boldsymbol{x}}_{1})$ as in (167). We have then $\displaystyle{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\alpha^{j}{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})$ $\displaystyle={\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\frac{1}{2E_{1}}\Big{[}(E_{1}+i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}-m\gamma^{0})\alpha^{j}+\big{\\{}{\alpha^{j}},{-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}}\big{\\}}\Big{]}={\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\frac{-2i{\buildrel\leftarrow\over{\partial}}_{1,j}}{2E_{1}}$ $\displaystyle\to{\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})\frac{-P^{j}}{E_{P}}$ (A.592) The first term in the square bracket vanishes when multiplied by ${\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})$. The final result follows after partial integration, with the leading order term due to $i{\buildrel\rightarrow\over{\partial}}_{1,j}\exp[i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2]$ in (A.591). 
The contribution to $\left|{e^{-}e^{+}\gamma;{\boldsymbol{q}},s}\right\rangle$ from $\left[{\mathcal{H}_{int}},{\psi({\boldsymbol{x}}_{2})}\right]$ is similarly $\displaystyle e\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1}){\overset{\leftarrow}{\Lambda}}_{+}({\boldsymbol{x}}_{1})e^{i{\boldsymbol{P}}\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\Phi_{\mathcal{B}}^{({\boldsymbol{P}})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2}){\boldsymbol{\alpha}}\cdot{\boldsymbol{\varepsilon}}_{s}^{*}({\boldsymbol{q}})e^{-i{\boldsymbol{q}}\cdot{\boldsymbol{x}}_{2}}a^{\dagger}({\boldsymbol{q}},s){\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (A.593) where now $\displaystyle{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})\alpha^{j}{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})$ $\displaystyle=\alpha^{j}\frac{1}{2E_{2}}\big{[}(E_{2}-i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}+m\gamma^{0})+\big{\\{}{\alpha^{j}},{i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}}\big{\\}}\big{]}{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})=\frac{2i{\buildrel\rightarrow\over{\partial}}_{2,j}}{2E_{2}}{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})$ $\displaystyle\to\frac{P^{j}}{E_{P}}{\overset{\rightarrow}{\Lambda}}_{-}({\boldsymbol{x}}_{2})$ (A.594) Using (A.12) in (A.591) and (A.12) in (A.593) and adding the two contributions gives (VI.3.2). ### A.13 Derive the expression (309). The frame dependence of functions like $\phi_{0}(\tau)$ and $\phi_{1}(\tau)$ that do not explicitly depend on $P$ or $E$ arises only due to the $P$-dependence of $\tau(x)$.
$\displaystyle\left.\frac{\partial\tau}{\partial P}\right|_{x}=\frac{\partial}{\partial P}\big{[}M^{2}-2EV+V^{2}\big{]}/V^{\prime}=-\frac{2xP}{E}$ (A.595) Recalling also that for functions that depend on $x$ only via $\tau$, $\displaystyle\left.\frac{\partial}{\partial x}\right|_{P}=\left.\frac{\partial\tau}{\partial x}\right|_{P}\frac{\partial}{\partial\tau}=-2(E-V)\frac{\partial}{\partial\tau}$ (A.596) we have $\displaystyle\left.\frac{\partial\phi_{0,1}}{\partial\xi}\right|_{x}=\left.E\frac{\partial\phi_{0,1}}{\partial P}\right|_{x}=\left.\frac{xp}{E-V}\partial_{x}\phi_{0,1}\right|_{P}$ (A.597) Applying this to $e^{\sigma_{1}\zeta/2}\Phi^{(P)}e^{-\sigma_{1}\zeta/2}$ gives $\displaystyle\left.\frac{\partial}{\partial\xi}\Big{[}e^{\sigma_{1}\zeta/2}\Phi^{(P)}e^{-\sigma_{1}\zeta/2}\Big{]}\right|_{x}$ $\displaystyle=\left.e^{\sigma_{1}\zeta/2}\bigg{\\{}\frac{1}{2}\,\frac{\partial\zeta}{\partial\xi}\right|_{x}\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}+\left.\frac{\partial\Phi^{(P)}}{\partial\xi}\right|_{x}\bigg{\\}}e^{-\sigma_{1}\zeta/2}$ (A.598) $\displaystyle=\frac{xp}{E-V}\,\partial_{x}\Big{[}e^{\sigma_{1}\zeta/2}\Phi^{(P)}e^{-\sigma_{1}\zeta/2}\Big{]}\Big{|}_{\xi}=\left.\frac{xp}{E-V}\,e^{\sigma_{1}\zeta/2}\bigg{\\{}\frac{1}{2}\,\frac{\partial\zeta}{\partial x}\right|_{\xi}\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}+\left.\frac{\partial\Phi^{(P)}}{\partial x}\right|_{\xi}\bigg{\\}}e^{-\sigma_{1}\zeta/2}$ which implies $\displaystyle\frac{\partial\Phi^{(P)}}{\partial\xi}\Big{|}_{x}=\frac{xp}{E-V}\,\frac{\partial\Phi^{(P)}}{\partial x}\Big{|}_{\xi}+\frac{1}{2}\Big{(}\frac{xp}{E-V}\,\frac{\partial\zeta}{\partial x}\Big{|}_{\xi}-\frac{\partial\zeta}{\partial\xi}\Big{|}_{x}\Big{)}\big{[}{\sigma_{1}},{\Phi^{(P)}}\big{]}$ (A.599) It remains to work out the derivatives of $\zeta$. From its definition (306), $\displaystyle\partial_{x}(\sinh\zeta)\big{|}_{\xi}$ $\displaystyle=\partial_{x}\zeta\big{|}_{\xi}\,\cosh\zeta=\frac{V^{\prime}P(E-V)}{(V^{\prime}\tau)^{3/2}}\hskip 61.17325pt\Longrightarrow\hskip 28.45274pt\frac{\partial\zeta}{\partial x}\Big{|}_{\xi}=\frac{P}{\tau}$ $\displaystyle\partial_{\xi}(\sinh\zeta)\big{|}_{x}$ $\displaystyle=\partial_{\xi}\zeta\big{|}_{x}\,\cosh\zeta\,=\frac{(E-V)(M^{2}-EV)}{(V^{\prime}\tau)^{3/2}}\hskip 28.45274pt\Longrightarrow\hskip 28.45274pt\frac{\partial\zeta}{\partial\xi}\Big{|}_{x}=\frac{M^{2}-EV}{V^{\prime}\tau}$ (A.600) Using these in (A.599) gives (309). ### A.14 Derive the expression (VII.3.1). The expressions (317) for $\phi_{1}(\tau)$ and $\phi_{0}(\tau)$ at large $\tau$ are, $\displaystyle\phi_{1}(|\tau|\to\infty)$ $\displaystyle=\frac{4V^{\prime}}{\sqrt{\pi}\,m}\,\sqrt{e^{\pi m^{2}/V^{\prime}}-1}\,e^{-\theta(-\tau)\pi m^{2}/2V^{\prime}}\sin\big{[}{\textstyle\frac{1}{4}}\tau-{\textstyle\frac{1}{2}}m^{2}/V^{\prime}\log({\textstyle\frac{1}{2}}|\tau|)+\arg\Gamma(1+im^{2}/2V^{\prime})\big{]}$ $\displaystyle\phi_{0}(|\tau|\to\infty)$ $\displaystyle=-i\frac{4V^{\prime}}{\sqrt{\pi}\,m}\,\sqrt{e^{\pi m^{2}/V^{\prime}}-1}\,e^{-\theta(-\tau)\pi m^{2}/2V^{\prime}}\cos\big{[}{\textstyle\frac{1}{4}}\tau-{\textstyle\frac{1}{2}}m^{2}/V^{\prime}\log({\textstyle\frac{1}{2}}|\tau|)+\arg\Gamma(1+im^{2}/2V^{\prime})\big{]}$ (A.601) Since state $B$ has positive parity $\phi_{B1}[\tau_{B}(x=0)]=\phi_{B1}(\tau_{B}=M_{B}^{2}/V^{\prime})=0$ according to (305). 
This determines the masses $M_{Bn}$ in the Bj limit, $\displaystyle{\textstyle\frac{1}{4}}M_{Bn}^{2}/V^{\prime}-{\textstyle\frac{1}{2}}m^{2}/V^{\prime}\log({\textstyle\frac{1}{2}}M_{Bn}^{2}/V^{\prime})+\arg\Gamma(1+im^{2}/2V^{\prime})=n\cdot\pi$ (A.602) where $n$ is a large positive integer. Subtracting the lhs. from the arguments of the sin and cos functions in (A.14) gives a sign $(-1)^{n}$ to $\phi_{B0}$ and $\phi_{B1}$. Using also $-2E_{B}\,x=2P_{B}^{1}\,x-2P_{A}^{+}(1-{x_{Bj}})x$ from (346) gives $\displaystyle\phi_{B1}(|\tau_{B}|\to\infty)=$ $\displaystyle\frac{4V^{\prime}(-1)^{n}}{\sqrt{\pi}\,m}\,\sqrt{e^{\pi m^{2}/V^{\prime}}-1}\,\exp\big{[}-\theta(-\tau_{B})\pi m^{2}/2V^{\prime}\big{]}$ $\displaystyle\times\sin\Big{[}{\textstyle\frac{1}{2}}P_{B}^{1}x-{\textstyle\frac{1}{2}}P_{A}^{+}(1-{x_{Bj}})x+{\textstyle\frac{1}{4}}V^{\prime}x^{2}-\frac{m^{2}}{2V^{\prime}}\log\Big{|}1-\frac{V^{\prime}x}{P_{A}^{+}(1-{x_{Bj}})}\Big{|}\Big{]}$ $\displaystyle\phi_{B0}(|\tau_{B}|\to\infty)=$ $\displaystyle-i\frac{4V^{\prime}(-1)^{n}}{\sqrt{\pi}\,m}\,\sqrt{e^{\pi m^{2}/V^{\prime}}-1}\,\exp\big{[}-\theta(-\tau_{B})\pi m^{2}/2V^{\prime}\big{]}$ $\displaystyle\times\cos\Big{[}{\textstyle\frac{1}{2}}P_{B}^{1}x-{\textstyle\frac{1}{2}}P_{A}^{+}(1-{x_{Bj}})x+{\textstyle\frac{1}{4}}V^{\prime}x^{2}-\frac{m^{2}}{2V^{\prime}}\log\Big{|}1-\frac{V^{\prime}x}{P_{A}^{+}(1-{x_{Bj}})}\Big{|}\Big{]}$ (A.603) The asymptotically large phase ${\textstyle\frac{1}{2}}P_{B}^{1}x$ must cancel the one in the factor $\sin\big{(}{\textstyle\frac{1}{2}}q^{1}x\big{)}=\sin\big{[}{\textstyle\frac{1}{2}}(P_{B}^{1}-P_{A}^{1})x\big{]}$ of (VII.3.1) to give a non-vanishing result. Using $\displaystyle\sin\alpha\sin\beta$ $\displaystyle={\textstyle\frac{1}{2}}\big{[}\cos(\alpha-\beta)-\cos(\alpha+\beta)\big{]}$ $\displaystyle\sin\alpha\cos\beta$ $\displaystyle={\textstyle\frac{1}{2}}\big{[}\sin(\alpha+\beta)+\sin(\alpha-\beta)\big{]}$ (A.604) and defining the angle $\displaystyle\varphi_{B}(x)\equiv{\textstyle\frac{1}{2}}\big{[}P_{A}^{+}(1-{x_{Bj}})-P_{A}^{1}\big{]}x+\frac{m^{2}}{2V^{\prime}}\log\Big{|}1-\frac{V^{\prime}x}{P_{A}^{+}(1-{x_{Bj}})}\Big{|}-{\textstyle\frac{1}{4}}V^{\prime}x^{2}$ (A.605) the expression (VII.3.1) for $G_{AB}$ gives (VII.3.1) when terms with asymptotically large phases are neglected. ### A.15 Do the $x$-integral in (VII.3.1) numerically for the parameters in Fig. 16, and compare. Separate the integral in the form factor (VII.3.1) into three parts, $\displaystyle E_{B}G(q^{2})$ $\displaystyle=(-1)^{n}\frac{16iV^{\prime}}{\sqrt{\pi}m}\,\sqrt{e^{\pi m^{2}/V^{\prime}}-1}\,(I_{1}+I_{2}+I_{3})$ (A.606) $\displaystyle I_{1}$ $\displaystyle=\int_{0}^{\infty}dx\big{[}i\sin\varphi_{B}\,\phi_{A0}(\tau_{A})+\cos\varphi_{B}\,\phi_{A1}(\tau_{A})\big{]}e^{-\theta(x-x_{0})\pi m^{2}/2V^{\prime}}$ $\displaystyle I_{2}$ $\displaystyle=\int_{0}^{x_{0}}dx\,\cos\varphi_{B}\,\phi_{A1}(\tau_{A})\,\frac{2m^{2}}{V^{\prime}(x_{0}-x)(P_{A}^{+}-V^{\prime}x)}$ $\displaystyle I_{3}$ $\displaystyle=-\int_{x_{0}}^{\infty}dx\,\cos\varphi_{B}\,\phi_{A1}(\tau_{A})\,\frac{2m^{2}}{V^{\prime}(x-x_{0})(P_{A}^{+}-V^{\prime}x)}e^{-\pi m^{2}/2V^{\prime}}$ (A.607) The $I_{1}$ integrand oscillates with constant amplitude at large $x$, where the approximation (A.14) for $\phi_{A}(\tau_{A}\to\infty)$ applies. $I_{1}$ is further divided into three parts. In $I_{1a}$ the range is $0<x<x_{1}$, where $x_{1}>x_{0}$ and $\tau_{A}(x_{1})>0$, ensuring that $\theta(-\tau_{A})=0,\ \theta(-\tau_{B})=1$ for $x>x_{1}$.
$I_{1b}$ integrates over $x_{1}\leq x\leq\infty$ with $\phi_{A0}$ and $\phi_{A1}$ replaced by the difference with their large $x$ approximations, $\phi_{A}-\phi_{A}^{as}$. Finally $I_{1c}$ integrates $\phi_{A}^{as}$ over $x_{1}\leq x\leq\infty$: $\displaystyle I_{1a}$ $\displaystyle=\int_{0}^{x_{1}}dx\big{[}i\sin\varphi_{B}\,\phi_{A0}(\tau_{A})+\cos\varphi_{B}\,\phi_{A1}(\tau_{A})\big{]}e^{-\theta(x-x_{0})\pi m^{2}/2V^{\prime}}$ $\displaystyle I_{1b}$ $\displaystyle=\int_{x_{1}}^{\infty}dx\big{\\{}i\sin\varphi_{B}[\phi_{A0}(\tau_{A})-\phi_{A0}^{as}(\tau_{A})]+\cos\varphi_{B}[\phi_{A1}(\tau_{A})-\phi_{A1}^{as}(\tau_{A})]\big{\\}}e^{-\pi m^{2}/2V^{\prime}}$ (A.608) $\displaystyle I_{1c}$ $\displaystyle=\int_{x_{1}}^{\infty}dx\big{\\{}i\sin\varphi_{B}\,\phi_{A0}^{as}(\tau_{A})+\cos\varphi_{B}\,\phi_{A1}^{as}(\tau_{A})\big{\\}}e^{-\pi m^{2}/2V^{\prime}}$ The oscillations in $I_{1b}$ at large $x$ are damped, allowing a numerical integration. The phase in $\phi_{A}^{as}$ (A.14) is $\displaystyle\varphi_{A}={\textstyle\frac{1}{4}}\tau_{A}-{\textstyle\frac{1}{2}}m^{2}/V^{\prime}\log({\textstyle\frac{1}{2}}|\tau_{A}|)+\arg\Gamma(1+im^{2}/2V^{\prime})$ (A.609) The $I_{1c}$ integral reduces to $\displaystyle I_{1c}$ $\displaystyle=-\frac{4V^{\prime}}{\sqrt{\pi}\,m}\sqrt{1-e^{-\pi m^{2}/V^{\prime}}}\int_{x_{1}}^{\infty}dx\,\sin\varphi_{C}$ $\displaystyle\varphi_{C}$ $\displaystyle\equiv-(\varphi_{A}+\varphi_{B})={\textstyle\frac{1}{2}}P_{A}^{+}{x_{Bj}}x-{\textstyle\frac{1}{4}}M_{A}^{2}/V^{\prime}-\arg\Gamma(1+im^{2}/2V^{\prime})+\frac{m^{2}}{2V^{\prime}}\big{[}\log({\textstyle\frac{1}{2}}\tau_{A})-\log(x/x_{0}-1)\big{]}$ (A.610) This integral is evaluated by rotating the contour in $u\equiv x-x_{1}$ by $\pi/2$ to ensure exponential convergence, $\displaystyle\int_{x_{1}}^{\infty}dx\,\sin\varphi_{C}={\rm Im}\int_{x_{1}}^{\infty}dx\,e^{i\varphi_{C}(x)}={\rm Im}\int_{0}^{i\infty}du\,e^{i\varphi_{C}(u+x_{1})}$ (A.611) In $I_{2}$ the range $0<x<x_{0}$ is transformed into $0<y<\infty$ through $\displaystyle y=-\log(1-x/x_{0})\hskip 56.9055ptx=x_{0}(1-e^{-y})\hskip 56.9055pt\frac{dx}{x_{0}-x}=dy$ (A.612) The $y$-integration is further split into $0<y<y_{2}$ and $y_{2}<y<\infty$. The first path is finite and readily integrated numerically. The latter path is rotated by $\pi/2$, giving exponential convergence in $y_{2}<y<y_{2}+i\infty$ when using $\cos\varphi_{B}={\rm Re}\exp(-i\varphi_{B})$, due to the term $ym^{2}/2V^{\prime}$ in $-\varphi_{B}$ (VII.3.1). $x\simeq x_{0}$ is constant on the complex path when $y_{2}$ is large, allowing the integral to be evaluated analytically. The result for $I_{2}$ should be independent of $y_{2}$. In $I_{3}$ the integration contour is split into $x_{0}<x<2x_{0}$ and $2x_{0}<x<\infty$. In the first range the integration variable is changed to $\displaystyle y=-\log(x/x_{0}-1)\hskip 56.9055ptx=x_{0}(1+e^{-y})\hskip 56.9055pt\frac{dx}{x-x_{0}}=-dy$ (A.613) The $y$-range $0<y<\infty$ is further split into $0<y<y_{3}$ and $y_{3}<y<\infty$. The first is integrated numerically, and in the second the path is rotated by $\pi/2$, ranging over $y_{3}<y<y_{3}+i\infty$. For large $y_{3}$ the value of $x\simeq x_{0}$ is constant on the complex contour, allowing an analytic integration. The integration over $2x_{0}<x<\infty$ is numerically stable, as the oscillations at large $x$ are damped.
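The contour rotation (A.611) that regulates the oscillatory tails is easy to reproduce numerically; a minimal sketch with a toy phase of the same structure as $\varphi_{C}$ in (A.610) — a linear term plus a slowly varying log — with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad

def tail_sin_integral(phi, x1):
    """Regularized int_{x1}^infty sin(phi(x)) dx via the rotation x = x1 + i*u
    of (A.611); requires the rotated integrand to decay, i.e. phi'(x) > 0."""
    def integrand(u):
        return (1j * np.exp(1j * phi(x1 + 1j * u))).imag
    val, _ = quad(integrand, 0.0, np.inf)
    return val

k, c, x1 = 1.3, 0.4, 5.0   # illustrative stand-ins for the scales in (A.610)

# Check against the regularized analytic value for a pure linear phase:
# int_{x1}^infty sin(k x) dx -> cos(k x1)/k
print(tail_sin_integral(lambda x: k * x, x1), np.cos(k * x1) / k)

# Same routine with the slowly varying log term, as in varphi_C:
print(tail_sin_integral(lambda x: k * x + c * np.log(x), x1))
```

The rotated integrand decays like $e^{-ku}$, so standard quadrature applies; the same rotation underlies the treatment of $I_{2}$ and $I_{3}$ above.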
### A.16 Derive the $q\bar{q}g$ potential (VIII.1.4) Using the commutators in (VIII.1.1) and (VIII.1.3) the operation of $\mathcal{E}_{a}({\boldsymbol{y}})$ (VIII.1) on the $\left|{q\bar{q}g}\right\rangle$ state (379) gives $\displaystyle\mathcal{E}_{a}({\boldsymbol{y}})\left|{q\bar{q}g}\right\rangle$ $\displaystyle=\big{\\{}\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})T_{A^{\prime}A}^{a}A_{b}^{i}({\boldsymbol{x}}_{g})T_{AB}^{b}\psi_{B}({\boldsymbol{x}}_{2})\delta({\boldsymbol{y}}-{\boldsymbol{x}}_{1})$ $\displaystyle+\bar{\psi}_{A}({\boldsymbol{x}}_{1})if_{abc}A_{c}^{i}({\boldsymbol{x}}_{g})T_{AB}^{b}\psi_{B}({\boldsymbol{x}}_{2})\delta({\boldsymbol{y}}-{\boldsymbol{x}}_{g})$ $\displaystyle-\bar{\psi}_{A}({\boldsymbol{x}}_{1})A_{b}^{i}({\boldsymbol{x}}_{g})T_{AB}^{b}T_{BB^{\prime}}^{a}\psi_{B^{\prime}}({\boldsymbol{x}}_{2})\delta({\boldsymbol{y}}-{\boldsymbol{x}}_{2})\big{\\}}\left|{0}\right\rangle$ (A.614) When $\mathcal{E}_{a}({\boldsymbol{y}})$ and $\mathcal{E}_{a}({\boldsymbol{z}})$ in $\mathcal{H}_{V}^{(0)}$ (V.3.2) act on the same (quark or gluon) constituent we may use the previous results (VIII.1.1) and (376) showing that the coefficients of ${\boldsymbol{x}}_{1}^{2}$ and ${\boldsymbol{x}}_{2}^{2}$ are $C_{F}$ while that of ${\boldsymbol{x}}_{g}^{2}$ is $N_{c}$, multiplied by the common factor $\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}$. The new contributions are $\displaystyle{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{2}:$ $\displaystyle\hskip 28.45274pt-2\,\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})T^{a}_{A^{\prime}A}\,A_{b}^{i}({\boldsymbol{x}}_{g})T^{b}_{AB}\,T^{a}_{BB^{\prime}}\psi_{B^{\prime}}({\boldsymbol{x}}_{2})\left|{0}\right\rangle=\frac{1}{N_{c}}\left|{q\bar{q}g}\right\rangle$ $\displaystyle{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{g}:$ $\displaystyle\hskip 28.45274pt2\,\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})T^{a}_{A^{\prime}A}\,if_{abc}A_{c}^{i}({\boldsymbol{x}}_{g})\,T^{b}_{AB}\psi_{B}({\boldsymbol{x}}_{2})\left|{0}\right\rangle=-N_{c}\left|{qg\bar{q}}\right\rangle$ $\displaystyle{\boldsymbol{x}}_{2}\cdot{\boldsymbol{x}}_{g}:$ $\displaystyle\hskip 28.45274pt-2\bar{\psi}_{A}({\boldsymbol{x}}_{1})\,if_{abc}A_{c}^{i}({\boldsymbol{x}}_{g})\,T^{b}_{AB}\,T^{a}_{BB^{\prime}}\psi_{B^{\prime}}({\boldsymbol{x}}_{2})\left|{0}\right\rangle=-N_{c}\left|{qg\bar{q}}\right\rangle$ (A.615) Altogether, $\displaystyle\mathcal{H}_{V}^{(0)}\left|{q\bar{q}g}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\big{[}C_{F}({\boldsymbol{x}}_{1}^{2}+{\boldsymbol{x}}_{2}^{2})+N_{c}{\boldsymbol{x}}_{g}^{2}-N_{c}({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})\cdot{\boldsymbol{x}}_{g}+{\textstyle\frac{1}{N_{c}}}{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{2}\big{]}\left|{q\bar{q}g}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\big{[}d_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})\big{]}^{2}\left|{q\bar{q}g}\right\rangle$ $\displaystyle d_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})$ $\displaystyle\equiv\sqrt{{\textstyle\frac{1}{4}}(N_{c}-{\textstyle\frac{2}{N_{c}}})({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2}+N_{c}({\boldsymbol{x}}_{g}-{\textstyle\frac{1}{2}}{\boldsymbol{x}}_{1}-{\textstyle\frac{1}{2}}{\boldsymbol{x}}_{2})^{2}}$ (A.616) For the ${\cal O}\left(\kappa^{2}\right)$ term to give the universal energy $E_{\Lambda}$ (365) we need to choose 
the normalization of the homogeneous solution as $\displaystyle\kappa_{q\bar{q}g}=\frac{\Lambda^{2}}{g\sqrt{C_{F}}}\,\frac{1}{d_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{g},{\boldsymbol{x}}_{2})}$ (A.617) The ${\cal O}\left(g\kappa\right)$ contribution to $\mathcal{H}_{V}$ gives the potential, $\displaystyle V_{q\bar{q}g}^{(0)}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})=g\kappa_{q\bar{q}g}\,\big{[}d_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})\big{]}^{2}=\frac{\Lambda^{2}}{\sqrt{C_{F}}}\,d_{q\bar{q}g}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})$ (A.618) When the self-energies are subtracted $\mathcal{H}_{V}^{(1)}$ has contributions only from the three terms in (A.16), $\displaystyle V_{q\bar{q}g}^{(1)}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2},{\boldsymbol{x}}_{g})={\textstyle\frac{1}{2}}\,{\alpha_{s}}\Big{[}\frac{1}{N_{c}}\,\frac{1}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2}|}-N_{c}\Big{(}\frac{1}{|{\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{g}|}+\frac{1}{|{\boldsymbol{x}}_{2}-{\boldsymbol{x}}_{g}|}\Big{)}\Big{]}$ (A.619) ### A.17 Derive the expression for $\mathcal{H}_{V}^{(0)}\left|{8\otimes 8}\right\rangle$ in (VIII.1.7) Recall from (391), $\displaystyle\left|{8\otimes 8}\right\rangle=\bar{\psi}_{A}({\boldsymbol{x}}_{1})T_{AB}^{b}\psi_{B}({\boldsymbol{x}}_{2})\,\bar{\psi}_{C}({\boldsymbol{x}}_{3})T_{CD}^{b}\psi_{D}({\boldsymbol{x}}_{4})\left|{0}\right\rangle$ $\displaystyle\bar{\psi}_{A}({\boldsymbol{x}}_{1})\psi_{B}({\boldsymbol{x}}_{2})\,\bar{\psi}_{B}({\boldsymbol{x}}_{3})\psi_{A}({\boldsymbol{x}}_{4})\left|{0}\right\rangle=2\left|{8\otimes 8}\right\rangle+{\textstyle\frac{1}{N_{c}}}\left|{1\otimes 1}\right\rangle$ (A.620) In the following I leave out the common factor $\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}$ in $\mathcal{H}_{V}^{(0)}$ (VIII.1), $\displaystyle\mathcal{H}_{V}^{(0)}$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\int d{\boldsymbol{y}}\,d{\boldsymbol{z}}\,{\boldsymbol{y}}\cdot{\boldsymbol{z}}\,\mathcal{E}_{a}({\boldsymbol{y}})\mathcal{E}_{a}({\boldsymbol{z}})$ (A.621) and make use of the commutation relations (VIII.1.1), $\displaystyle\big{[}{\mathcal{E}_{a}({\boldsymbol{x}})},{\bar{\psi}_{A}({\boldsymbol{x}}_{1})}\big{]}=\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})T_{A^{\prime}A}^{a}\delta({\boldsymbol{x}}-{\boldsymbol{x}}_{1})\hskip 56.9055pt\big{[}{\mathcal{E}_{a}({\boldsymbol{x}})},{\psi_{A}({\boldsymbol{x}}_{2})}\big{]}=-T_{AA^{\prime}}^{a}\psi_{A^{\prime}}({\boldsymbol{x}}_{2})\delta({\boldsymbol{x}}-{\boldsymbol{x}}_{2})$ (A.622) and of the SU($N_{c}$) generator relations (VIII.1.1). The commutators of $\mathcal{E}_{a}({\boldsymbol{y}})$ and $\mathcal{E}_{a}({\boldsymbol{z}})$ with the same quark at ${\boldsymbol{x}}_{i}\ (i=1,\ldots 4)$ in $\left|{8\otimes 8}\right\rangle$ gives ${\boldsymbol{y}}\cdot{\boldsymbol{z}}={\boldsymbol{x}}_{i}^{2}$, color factor $T^{a}T^{a}=C_{F}\,I$ and state $\left|{8\otimes 8}\right\rangle$. 
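Both SU($N_{c}$) inputs of this computation — the Casimir relation $T^{a}T^{a}=C_{F}\,I$ just used, and the Fierz-type identity $T^{a}_{AB}T^{a}_{CD}={\textstyle\frac{1}{2}}(\delta_{AD}\delta_{BC}-{\textstyle\frac{1}{N_{c}}}\delta_{AB}\delta_{CD})$ employed in (A.625) and (A.628) below — are easy to confirm numerically. A minimal sketch for $N_{c}=3$ with the standard Gell-Mann matrices (the code is an illustration, not part of the derivation):

```python
import numpy as np

Nc = 3
l = lambda rows: np.array(rows, dtype=complex)
lam = [
    l([[0, 1, 0], [1, 0, 0], [0, 0, 0]]),
    l([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]),
    l([[1, 0, 0], [0, -1, 0], [0, 0, 0]]),
    l([[0, 0, 1], [0, 0, 0], [1, 0, 0]]),
    l([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]),
    l([[0, 0, 0], [0, 0, 1], [0, 1, 0]]),
    l([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]),
    l([[1, 0, 0], [0, 1, 0], [0, 0, -2]]) / np.sqrt(3),
]
T = [m / 2 for m in lam]                       # generators T^a = lambda^a / 2

# Casimir: sum_a T^a T^a = C_F * 1 with C_F = (Nc^2 - 1)/(2 Nc) = 4/3
CF = (Nc**2 - 1) / (2 * Nc)
assert np.allclose(sum(t @ t for t in T), CF * np.eye(Nc))

# Fierz identity used in (A.625) and (A.628):
# sum_a T^a_{AB} T^a_{CD} = (delta_AD delta_BC - delta_AB delta_CD / Nc) / 2
lhs = sum(np.einsum('ab,cd->abcd', t, t) for t in T)
d = np.eye(Nc)
rhs = 0.5 * (np.einsum('ad,bc->abcd', d, d) - np.einsum('ab,cd->abcd', d, d) / Nc)
assert np.allclose(lhs, rhs)
print("SU(3) Casimir and Fierz identities verified")
```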
The commutators with $\bar{\psi}({\boldsymbol{x}}_{1})$ and $\psi({\boldsymbol{x}}_{2})$ gives $\displaystyle{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{2}:$ $\displaystyle\hskip 28.45274pt-2\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})T_{A^{\prime}A}^{a}T_{AB}^{b}T_{BB^{\prime}}^{a}\psi_{B^{\prime}}({\boldsymbol{x}}_{2})\bar{\psi}_{C}({\boldsymbol{x}}_{3})T_{CD}^{b}\psi_{D}({\boldsymbol{x}}_{4})\left|{0}\right\rangle={\textstyle\frac{1}{N_{c}}}\left|{8\otimes 8}\right\rangle$ (A.623) The commutators with $\bar{\psi}({\boldsymbol{x}}_{1})$ and $\bar{\psi}({\boldsymbol{x}}_{3})$ give $\displaystyle{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{3}:$ $\displaystyle\hskip 28.45274pt2\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})T_{A^{\prime}A}^{a}T_{AB}^{b}\psi_{B}({\boldsymbol{x}}_{2})\bar{\psi}_{C^{\prime}}({\boldsymbol{x}}_{3})T_{C^{\prime}C}^{a}T_{CD}^{b}\psi_{D}({\boldsymbol{x}}_{4})\left|{0}\right\rangle$ (A.624) The color factors $\displaystyle{\textstyle\frac{1}{2}}(\delta_{A^{\prime}C}\delta_{AC^{\prime}}-{\textstyle\frac{1}{N_{c}}}\delta_{A^{\prime}A}\delta_{C^{\prime}C})T_{AB}^{b}T_{CD}^{b}={\textstyle\frac{1}{4}}\delta_{A^{\prime}C}\delta_{AC^{\prime}}(\delta_{AD}\delta_{BC}-{\textstyle\frac{1}{N_{c}}}\delta_{AB}\delta_{CD})-{\textstyle\frac{1}{2N_{c}}}\delta_{A^{\prime}A}\delta_{C^{\prime}C}T_{AB}^{b}T_{CD}^{b}$ (A.625) give for the coefficient of ${\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{3}$ in (A.624), $\displaystyle\big{[}{\textstyle\frac{1}{2}}\bar{\psi}_{B}({\boldsymbol{x}}_{1})\psi_{B}({\boldsymbol{x}}_{2})\bar{\psi}_{D}({\boldsymbol{x}}_{3})\psi_{D}({\boldsymbol{x}}_{4})\left|{0}\right\rangle-{\textstyle\frac{1}{2N_{c}}}\bar{\psi}_{D}({\boldsymbol{x}}_{1})\psi_{B}({\boldsymbol{x}}_{2})\bar{\psi}_{B}({\boldsymbol{x}}_{3})\psi_{D}({\boldsymbol{x}}_{4})\big{]}\left|{0}\right\rangle-{\textstyle\frac{1}{N_{c}}}\left|{8\otimes 8}\right\rangle={\textstyle\frac{C_{F}}{N_{c}}}\left|{1\otimes 1}\right\rangle-{\textstyle\frac{2}{N_{c}}}\left|{8\otimes 8}\right\rangle$ (A.626) The commutators with $\bar{\psi}({\boldsymbol{x}}_{1})$ and $\bar{\psi}({\boldsymbol{x}}_{4})$ give $\displaystyle{\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{4}:$ $\displaystyle\hskip 28.45274pt-2\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})T_{A^{\prime}A}^{a}T_{AB}^{b}\psi_{B}({\boldsymbol{x}}_{2})\bar{\psi}_{C}({\boldsymbol{x}}_{3})T_{CD}^{b}T_{DD^{\prime}}^{a}\psi_{D^{\prime}}({\boldsymbol{x}}_{4})\left|{0}\right\rangle$ (A.627) Now the color factors $\displaystyle{\textstyle\frac{1}{2}}(\delta_{A^{\prime}D^{\prime}}\delta_{AD}-{\textstyle\frac{1}{N_{c}}}\delta_{A^{\prime}A}\delta_{D^{\prime}D})T_{AB}^{b}T_{CD}^{b}={\textstyle\frac{1}{4}}\delta_{A^{\prime}D^{\prime}}\delta_{AD}(\delta_{AD}\delta_{BC}-{\textstyle\frac{1}{N_{c}}}\delta_{AB}\delta_{CD})-{\textstyle\frac{1}{2N_{c}}}\delta_{A^{\prime}A}\delta_{DD^{\prime}}T_{AB}^{b}T_{CD}^{b}$ (A.628) give for the coefficient of ${\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{4}$ in (A.627), $\displaystyle\big{(}-{\textstyle\frac{N_{c}}{2}}+{\textstyle\frac{1}{2N_{c}}}\big{)}\bar{\psi}_{A^{\prime}}({\boldsymbol{x}}_{1})\psi_{B}({\boldsymbol{x}}_{2})\bar{\psi}_{B}({\boldsymbol{x}}_{3})\psi_{A^{\prime}}({\boldsymbol{x}}_{4})\left|{0}\right\rangle+{\textstyle\frac{1}{N_{c}}}\left|{8\otimes 8}\right\rangle=-{\textstyle\frac{C_{F}}{N_{c}}}\left|{1\otimes 1}\right\rangle-{\textstyle\frac{N_{c}^{2}-2}{N_{c}}}\left|{8\otimes 8}\right\rangle$ (A.629) The coefficients of ${\boldsymbol{x}}_{2}\cdot{\boldsymbol{x}}_{3},\ {\boldsymbol{x}}_{2}\cdot{\boldsymbol{x}}_{4}$ and 
${\boldsymbol{x}}_{3}\cdot{\boldsymbol{x}}_{4}$ are the same as those of ${\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{4},\ {\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{3}$ and ${\boldsymbol{x}}_{1}\cdot{\boldsymbol{x}}_{2}$, respectively. Altogether, $\displaystyle\mathcal{H}_{V}^{(0)}\left|{8\otimes 8}\right\rangle$ $\displaystyle=\big{(}{\textstyle\frac{1}{2}}\kappa^{2}{\textstyle\int}d{\boldsymbol{x}}+g\kappa\big{)}\Big{\\{}\big{[}{\textstyle\frac{1}{2}}N_{c}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{4})^{2}+{\textstyle\frac{1}{2}}N_{c}({\boldsymbol{x}}_{2}-{\boldsymbol{x}}_{3})^{2}-{\textstyle\frac{1}{2N_{c}}}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})^{2}$ $\displaystyle-{\textstyle\frac{1}{2N_{c}}}({\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{4})^{2}-{\textstyle\frac{2}{N_{c}}}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\cdot({\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{4})\big{]}\left|{8\otimes 8}\right\rangle+{\textstyle\frac{C_{F}}{N_{c}}}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\cdot({\boldsymbol{x}}_{3}-{\boldsymbol{x}}_{4})\left|{1\otimes 1}\right\rangle\Big{\\}}$ (A.630) When expressed in terms of the separations (393) this gives (VIII.1.7). ### A.18 Verify that the expression (VIII.2.3) for $\Phi_{-+}({\boldsymbol{x}})$ satisfies the bound state equation (405) given the radial equation (417). The BSE as in (405) applied to $\Phi_{-+}({\boldsymbol{x}})$ in the alternative forms of (VIII.2.3), $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi({\boldsymbol{x}})+\Phi({\boldsymbol{x}}){\overset{\leftarrow}{\mathfrak{h}}}_{-}=0$ $\displaystyle\Phi_{-+}({\boldsymbol{x}})={\overset{\rightarrow}{\mathfrak{h}}}_{+}\gamma_{5}\,F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})=F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})\,\gamma_{5}{\overset{\leftarrow}{\mathfrak{h}}}_{+}$ (A.631) allows the use of $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{-}{\overset{\rightarrow}{\mathfrak{h}}}_{+}$ $\displaystyle=\frac{4}{(M-V)^{2}}(-{\overset{\rightarrow}{\boldsymbol{\nabla}}}^{2}+m^{2})-1+\frac{4iV^{\prime}}{r(M-V)^{3}}\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})$ $\displaystyle{\overset{\leftarrow}{\mathfrak{h}}}_{+}{\overset{\leftarrow}{\mathfrak{h}}}_{-}$ $\displaystyle=(-{\overset{\leftarrow}{\boldsymbol{\nabla}}}^{2}+m^{2})\frac{4}{(M-V)^{2}}-1+(i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}-m\gamma^{0})\,{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}\,\frac{4iV^{\prime}}{r(M-V)^{3}}$ (A.632) Moving the $\gamma_{5}$ to the right in the BSE, $\displaystyle{\overset{\rightarrow}{\mathfrak{h}}}_{-}{\overset{\rightarrow}{\mathfrak{h}}}_{+}\gamma_{5}F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})$ $\displaystyle+F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})\gamma_{5}{\overset{\leftarrow}{\mathfrak{h}}}_{+}{\overset{\leftarrow}{\mathfrak{h}}}_{-}$ $\displaystyle=\Big{[}\frac{8}{(M-V)^{2}}(-\boldsymbol{\nabla}^{2}+m^{2})-2+\frac{4iV^{\prime}x^{j}}{r(M-V)^{3}}\big{\\{}{\alpha_{j}},{i{\buildrel\rightarrow\over{\partial}}_{k}\alpha_{k}+m\gamma^{0}}\big{\\}}\Big{]}F_{1}(r)Y_{j\lambda}(\hat{\boldsymbol{x}})\gamma_{5}$ (A.633) Using $\left\\{{\alpha_{j}},{\alpha_{k}}\right\\}=2\delta_{jk}$, $\left\\{{\alpha_{j}},{\gamma^{0}}\right\\}=0$, $x^{j}{\buildrel\rightarrow\over{\partial}}_{j}=r{\overset{\rightarrow}{\partial}_{r}}$ and $\boldsymbol{\nabla}^{2}=(1/r^{2})\partial_{r}(r^{2}\partial_{r})-{\boldsymbol{L}}^{2}/r^{2}$ with 
${\boldsymbol{L}}^{2}Y_{j\lambda}(\hat{\boldsymbol{x}})=j(j+1)Y_{j\lambda}(\hat{\boldsymbol{x}})$ gives the radial equation (417). ### A.19 Derive the coupled equations (VIII.3.1) from the bound state equation (456). I make use of commutator identities such as, $\displaystyle\left[{A},{BC}\right]$ $\displaystyle=$ $\displaystyle\left[{A},{B}\right]C+B\left[{A},{C}\right]$ (A.634) $\displaystyle\left\\{{A},{BC}\right\\}$ $\displaystyle=$ $\displaystyle\left[{A},{B}\right]C+B\left\\{{A},{C}\right\\}=\left\\{{A},{B}\right\\}C-B\left[{A},{C}\right]$ (A.635) $\displaystyle\left\\{{A},{\left\\{{B},{C}\right\\}}\right\\}$ $\displaystyle=$ $\displaystyle-\left[{B},{\left[{A},{C}\right]}\right]\hskip 32.72049pt{\rm when}\ \ \big{\\{}{A},{B}\big{\\}}=0$ (A.636) $\displaystyle\left\\{{A},{\left[{B},{C}\right]}\right\\}$ $\displaystyle=$ $\displaystyle-\left\\{{B},{\left[{A},{C}\right]}\right\\}\hskip 28.45274pt{\rm when}\ \ \big{\\{}{A},{B}\big{\\}}=0$ (A.637) $\displaystyle\left[{A},{\left\\{{B},{C}\right\\}}\right]$ $\displaystyle=$ $\displaystyle-\left[{B},{\left\\{{A},{C}\right\\}}\right]\hskip 28.45274pt{\rm when}\ \ \big{\\{}{A},{B}\big{\\}}=0$ (A.638) $\displaystyle\left\\{{A},{\left[{A},{C}\right]}\right\\}$ $\displaystyle=$ $\displaystyle\left[{A},{\left\\{{A},{C}\right\\}}\right]=\left[{A^{2}},{C}\right]$ (A.639) $\displaystyle\left\\{{A},{\left\\{{A},{C}\right\\}}\right\\}$ $\displaystyle=$ $\displaystyle 2A\left\\{{A},{C}\right\\}\hskip 42.67912pt{\rm when}\ \ A^{2}=1$ (A.640) $\displaystyle\left[{A},{\left[{A},{C}\right]}\right]$ $\displaystyle=$ $\displaystyle 2A\left[{A},{C}\right]\hskip 46.94687pt{\rm when}\ \ A^{2}=1$ (A.641) Taking the commutator $i\boldsymbol{\nabla}\cdot\left[{{\boldsymbol{\alpha}}},{\rm BSE}\right]$ of the bound state equation (456) gives $\displaystyle\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{(E-V)\Phi^{(P)}}\right]$ $\displaystyle=\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\big{\\{}{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\big{\\}}}\right]-{\textstyle\frac{1}{2}}\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\big{[}{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\big{]}}\right]+m\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\big{[}{\gamma^{0}},{\Phi^{(P)}}\big{]}}\right]$ (A.642) The first term on the rhs. vanishes due to the commutator identity (A.639), when we recall that $\boldsymbol{\nabla}$ in the BSE always operates on $\Phi^{(P)}$. The identity (A.636) implies for the third term on the rhs. of (A.642), $\displaystyle m\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\big{[}{\gamma^{0}},{\Phi^{(P)}}\big{]}}\right]=-m\left\\{{\gamma^{0}},{\big{\\{}{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\big{\\}}}\right\\}$ (A.643) Using the original BSE (456) on the rhs. 
of (A.643) we get $\displaystyle m\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\big{[}{\gamma^{0}},{\Phi^{(P)}}\big{]}}\right]$ $\displaystyle=-m\left\\{{\gamma^{0}},{{\textstyle\frac{1}{2}}\left[{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\right]}\right\\}+m^{2}\left\\{{\gamma^{0}},{\left[{\gamma^{0}},{\Phi^{(P)}}\right]}\right\\}-m\left\\{{\gamma^{0}},{(E-V)\Phi^{(P)}}\right\\}$ $\displaystyle={\textstyle\frac{1}{2}}m\left\\{{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\left[{\gamma^{0}},{\Phi^{(P)}}\right]}\right\\}-m(E-V)\left\\{{\gamma^{0}},{\Phi^{(P)}}\right\\}$ $\displaystyle={\textstyle\frac{1}{2}}\left\\{{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{-\big{\\{}{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\big{\\}}+{\textstyle\frac{1}{2}}\big{[}{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\big{]}+(E-V)\Phi^{(P)}}\right\\}-m(E-V)\left\\{{\gamma^{0}},{\Phi^{(P)}}\right\\}$ (A.644) where I used (A.637), (A.639) and in the last step expressed $m\left[{\gamma^{0}},{\Phi_{P}}\right]$ using the BSE (456). The second term on the rhs. of (A.19) vanishes according to (A.639). Inserting this result in (A.642) we have $\displaystyle\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{(E-V)\Phi^{(P)}}\right]$ $\displaystyle=$ $\displaystyle-{\textstyle\frac{1}{2}}\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\left[{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\right]}\right]-{\textstyle\frac{1}{2}}\left\\{{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\left\\{{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\right\\}}\right\\}+{\textstyle\frac{1}{2}}(E-V)\left[{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\right]-m(E-V)\left\\{{\gamma^{0}},{\Phi^{(P)}}\right\\}$ (A.645) The sum of the first two terms on the rhs. simplifies. With $\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}=\alpha^{i}\partial_{i}$ and ${\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}=P^{j}\alpha^{j}$, $\displaystyle\left[{\alpha^{i}},{\left[{\alpha^{j}},{\partial_{i}\Phi^{(P)}}\right]}\right]$ $\displaystyle=\alpha^{i}(\alpha^{j}\partial_{i}\Phi^{(P)}-\partial_{i}\Phi^{(P)}\alpha^{j})-(\alpha^{j}\partial_{i}\Phi^{(P)}-\partial_{i}\Phi^{(P)}\alpha^{j})\alpha^{i}$ $\displaystyle\left\\{{\alpha^{j}},{\left\\{{\alpha^{i}},{\partial_{i}\Phi^{(P)}}\right\\}}\right\\}$ $\displaystyle=\alpha^{j}(\alpha^{i}\partial_{i}\Phi^{(P)}+\partial_{i}\Phi^{(P)}\alpha^{i})+(\alpha^{i}\partial_{i}\Phi^{(P)}+\partial_{i}\Phi^{(P)}\alpha^{i})\alpha^{j}$ (A.646) so that $\displaystyle\left[{\alpha^{i}},{\left[{\alpha^{j}},{\partial_{i}\Phi^{(P)}}\right]}\right]+\left\\{{\alpha^{j}},{\left\\{{\alpha^{i}},{\partial_{i}\Phi^{(P)}}\right\\}}\right\\}$ $\displaystyle=(\alpha^{i}\alpha^{j}+\alpha^{j}\alpha^{i})\partial_{i}\Phi^{(P)}+\partial_{i}\Phi^{(P)}(\alpha^{j}\alpha^{i}+\alpha^{i}\alpha^{j})=4\partial_{j}\Phi^{(P)}$ (A.647) Using this in (A.19) and dividing by $E-V$ gives $\displaystyle\frac{1}{E-V}\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{(E-V)\Phi^{(P)}}\right]-{\textstyle\frac{1}{2}}\left\\{{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\right\\}+m\left\\{{\gamma^{0}},{\Phi^{(P)}}\right\\}=-\frac{2i}{E-V}{\boldsymbol{P}}\cdot\boldsymbol{\nabla}\Phi^{(P)}$ (A.648) For a linear potential $i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}\,V^{\prime}|{\boldsymbol{x}}|=iV^{\prime}{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}/r$, where $r=|{\boldsymbol{x}}|$. Bringing this derivative to the rhs. 
in (A.648), $\displaystyle\left[{i\boldsymbol{\nabla}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\right]-{\textstyle\frac{1}{2}}\left\\{{{\boldsymbol{P}}\cdot{\boldsymbol{\alpha}}},{\Phi^{(P)}}\right\\}+m\left\\{{\gamma^{0}},{\Phi^{(P)}}\right\\}=\frac{1}{E-V}\Big{(}-2i{\boldsymbol{P}}\cdot\boldsymbol{\nabla}\Phi^{(P)}+\frac{V^{\prime}}{r}\left[{i{\boldsymbol{\alpha}}\cdot{\boldsymbol{x}}},{\Phi^{(P)}}\right]\Big{)}$ (A.649) The lhs. is now the same as in the original BSE (456), with commutators and anticommutators interchanged. Adding and subtracting the two equations and dividing by $E-V$ we get equations (VIII.3.1). ### A.20 Derive the frame dependence (459) of $\Phi^{(P)}_{V=0}({\boldsymbol{x}})$ using the boost generator $\mathcal{K}_{0}^{z}$. The action of $\mathcal{K}_{0}^{z}(t=0)$ (464) on the state (455) with ${\boldsymbol{P}}=(0,0,P)$ and $V=0$, $\displaystyle\left|{M,P}\right\rangle_{0}=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1})e^{iP(z_{1}+z_{2})/2}\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (A.650) is determined by $\displaystyle\left[{\mathcal{K}_{0}^{z}},{\bar{\psi}({\boldsymbol{x}}_{1})}\right]$ $\displaystyle=\psi^{\dagger}({\boldsymbol{x}}_{1})\big{[}z_{1}(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}-m\gamma^{0})+{\textstyle\frac{1}{2}}i{\alpha_{3}}\big{]}\gamma^{0}=\bar{\psi}({\boldsymbol{x}}_{1})\big{[}z_{1}(i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{1}-m\gamma^{0})-{\textstyle\frac{1}{2}}i{\alpha_{3}}\big{]}$ $\displaystyle\left[{\mathcal{K}_{0}^{z}},{\psi({\boldsymbol{x}}_{2})}\right]$ $\displaystyle=-\big{[}z_{2}(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{2}-m\gamma^{0})+{\textstyle\frac{1}{2}}i{\alpha_{3}}\big{]}\psi({\boldsymbol{x}}_{2})$ (A.651) Making the derivatives act on the wave function through partial integration, $\displaystyle\mathcal{K}^{z}_{0}\left|{M,P}\right\rangle_{0}=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1})\Big{\\{}$ $\displaystyle\big{[}z_{1}(-i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}-m\gamma^{0})-{\textstyle\frac{1}{2}}i{\alpha_{3}}\big{]}e^{iP(z_{1}+z_{2})/2}\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ $\displaystyle-e^{iP(z_{1}+z_{2})/2}\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\big{[}z_{2}(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2}-m\gamma^{0})+{\textstyle\frac{1}{2}}i{\alpha_{3}}\big{]}\Big{\\}}\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (A.652) Using $z_{2}(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2})+{\textstyle\frac{1}{2}}i{\alpha_{3}}=(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2})z_{2}-{\textstyle\frac{1}{2}}i{\alpha_{3}}$ and then expressing $z_{1,2}={\textstyle\frac{1}{2}}(z_{1}+z_{2})\pm{\textstyle\frac{1}{2}}(z_{1}-z_{2})$, the BSE (457) satisfied by $\Phi^{(P)}_{V=0}$, $\displaystyle\big{(}i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}-{\textstyle\frac{1}{2}}P{\alpha_{3}}+m\gamma^{0}\big{)}\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})+\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\big{(}-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2}+{\textstyle\frac{1}{2}}P{\alpha_{3}}-m\gamma^{0}\big{)}=E\,\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ (A.653)
reduces the coefficient of ${\textstyle\frac{1}{2}}(z_{1}+z_{2})$ to $-E$, with $E=M\cosh\xi=\sqrt{P^{2}+M^{2}}$. We have then $\displaystyle\mathcal{K}^{z}_{0}\left|{M,P}\right\rangle_{0}=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1})e^{iP(z_{1}+z_{2})/2}\Big{\\{}-{\textstyle\frac{1}{2}}(z_{1}+z_{2})E\,\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})-{\textstyle\frac{1}{2}}i\big{[}{{\alpha_{3}}},{\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}\big{]}$ (A.654) $\displaystyle-{\textstyle\frac{1}{2}}(z_{1}-z_{2})\big{[}(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}-{\textstyle\frac{1}{2}}P{\alpha_{3}}+m\gamma^{0})\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})+\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})(i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2}-{\textstyle\frac{1}{2}}P{\alpha_{3}}+m\gamma^{0})\big{]}\Big{\\}}\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ where ${\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}$ and ${\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2}$ only differentiate $\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$. Subtracting the two BSE equations (VIII.3.1) gives $\displaystyle\big{(}i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{1}-{\textstyle\frac{1}{2}}P{\alpha_{3}}+m\gamma^{0}\big{)}\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})+\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\big{(}i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{2}-{\textstyle\frac{1}{2}}P{\alpha_{3}}+m\gamma^{0}\big{)}=-2i\frac{P}{E}\,{\buildrel\rightarrow\over{\partial}}_{z_{1}}\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ (A.655) Thus $\displaystyle-id\xi\mathcal{K}^{z}_{0}\left|{M,P}\right\rangle_{0}$ $\displaystyle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,\bar{\psi}({\boldsymbol{x}}_{1})e^{iP(z_{1}+z_{2})/2}\Big{\\{}{\textstyle\frac{1}{2}}id\xi\,E(z_{1}+z_{2})\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})-{\textstyle\frac{1}{2}}d\xi\big{[}{{\alpha_{3}}},{\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})}\big{]}$ $\displaystyle+d\xi(z_{1}-z_{2})\frac{P}{E}\,\partial_{z_{1}}\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\Big{\\}}\psi({\boldsymbol{x}}_{2})\left|{0}\right\rangle$ (A.656) Let us now assume that the frame dependence (459) holds, and show that it agrees with the change in the wave function (A.20) caused by the infinitesimal boost.
According to (459), $\displaystyle\Phi^{(P+dP)}_{V=0}({\boldsymbol{x}})$ $\displaystyle=e^{-(\xi+d\xi){\alpha_{3}}/2}\Phi^{(P=0)}_{V=0}({\boldsymbol{x}}_{R})e^{(\xi+d\xi){\alpha_{3}}/2}$ $\displaystyle{\boldsymbol{x}}_{R}$ $\displaystyle=(x,y,z\cosh(\xi+d\xi))=(x,y,z\cosh\xi)+(0,0,d\xi z\sinh\xi)$ (A.657) The first term within the $\\{\ \\}$ of (A.20) reflects the change in the plane wave phase of $\left|{M,P}\right\rangle_{0}$, $\displaystyle e^{i(P+d\xi E)(z_{1}+z_{2})/2}=e^{iP(z_{1}+z_{2})/2}\big{[}1+{\textstyle\frac{1}{2}}i\,d\xi E(z_{1}+z_{2})\big{]}$ (A.658) The second term is due to the $\exp[\mp(\xi+d\xi){\alpha_{3}}/2]$ factors in $\Phi^{(P+dP)}_{V=0}({\boldsymbol{x}})$, $\displaystyle\exp[-(\xi+d\xi){\alpha_{3}}/2]\Phi^{(P=0)}_{V=0}({\boldsymbol{x}}_{1R}-{\boldsymbol{x}}_{2R})=(1-{\textstyle\frac{1}{2}}d\xi{\alpha_{3}})\exp(-\xi{\alpha_{3}}/2)\Phi^{(P=0)}_{V=0}({\boldsymbol{x}}_{1R}-{\boldsymbol{x}}_{2R})$ (A.659) and similarly for $\Phi^{(P=0)}_{V=0}({\boldsymbol{x}}_{1R}-{\boldsymbol{x}}_{2R})\exp[(\xi+d\xi){\alpha_{3}}/2]$. The third term in (A.20) relates to the Lorentz contraction, i.e., the change in ${\boldsymbol{x}}_{R}$ (A.20), $\displaystyle e^{-\xi{\alpha_{3}}/2}d\xi(z_{1}-z_{2})\sinh\xi\frac{\partial}{\partial z_{1R}}\Phi^{(P=0)}_{V=0}({\boldsymbol{x}}_{1R}-{\boldsymbol{x}}_{2R})e^{\xi{\alpha_{3}}/2}=d\xi(z_{1}-z_{2})\frac{\sinh\xi}{\cosh\xi}\,\frac{\partial}{\partial z_{1}}\Phi^{(P)}_{V=0}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ (A.660) This confirms that the frame dependence of the state $\left|{M,P}\right\rangle_{0}$ (A.650) implied by (A.20) agrees with the transformation of a boost. ### A.21 Show that $\Phi^{(P)}(\tau)$ given by (471) satisfies the BSE (457) at $\boldsymbol{x}_{\perp}=0$. Denoting by $B$ the lhs. of (457) at $\boldsymbol{x}_{\perp}=0$ when the wave function $\Phi^{(P)}(\tau)$ is given by (471), $\displaystyle e^{\zeta{\alpha_{3}}/2}\,B\,e^{-\zeta{\alpha_{3}}/2}$ $\displaystyle=e^{\zeta{\alpha_{3}}/2}\big{[}i{\overset{\rightarrow}{\boldsymbol{\nabla}}}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}(E-V+P{\alpha_{3}})+m\gamma^{0}\big{]}e^{-\zeta{\alpha_{3}}/2}\Phi^{(0)}(\tau)$ $\displaystyle+\Phi^{(0)}(\tau)e^{\zeta{\alpha_{3}}/2}\big{[}i{\overset{\leftarrow}{\boldsymbol{\nabla}}}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}(E-V-P{\alpha_{3}})-m\gamma^{0}\big{]}e^{-\zeta{\alpha_{3}}/2}$ (A.661) We need to show that $B=0$. For the $i\partial_{z}$ terms, $\displaystyle e^{\zeta{\alpha_{3}}/2}i{\buildrel\rightarrow\over{\partial}}_{z}{\alpha_{3}}e^{-\zeta{\alpha_{3}}/2}$ $\displaystyle=i{\buildrel\rightarrow\over{\partial}}_{z}{\alpha_{3}}-{\textstyle\frac{1}{2}}i({\buildrel\rightarrow\over{\partial}}_{z}\zeta)$ $\displaystyle e^{\zeta{\alpha_{3}}/2}i{\buildrel\leftarrow\over{\partial}}_{z}{\alpha_{3}}e^{-\zeta{\alpha_{3}}/2}$ $\displaystyle=i{\buildrel\leftarrow\over{\partial}}_{z}{\alpha_{3}}+{\textstyle\frac{1}{2}}i({\buildrel\rightarrow\over{\partial}}_{z}\zeta)$ (A.662) The contributions $\propto\partial_{z}\zeta$ cancel in (A.21).
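The matrix identities underlying these manipulations — that ${\alpha_{3}}$ commutes with $e^{\zeta{\alpha_{3}}/2}$, and the expansion $\exp(\pm\zeta{\alpha_{3}})=\cosh\zeta\pm{\alpha_{3}}\sinh\zeta$ used below — can be confirmed with an explicit Dirac representation; a minimal numerical sketch (the representation ${\alpha_{3}}=\gamma^{0}\gamma^{3}$ and the value of $\zeta$ are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Dirac representation: alpha_3 = gamma^0 gamma^3
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
alpha3 = np.block([[Z2, s3], [s3, Z2]])
assert np.allclose(alpha3 @ alpha3, np.eye(4))      # alpha_3^2 = 1

zeta = 0.7                                          # illustrative rapidity
U, Uinv = expm(zeta * alpha3 / 2), expm(-zeta * alpha3 / 2)

# alpha_3 commutes with exp(zeta alpha_3 / 2): conjugation leaves it unchanged,
# so the extra -i(d_z zeta)/2 term in (A.662) comes solely from d_z acting on zeta(z)
assert np.allclose(U @ alpha3 @ Uinv, alpha3)

# exp(+-zeta alpha_3) = cosh(zeta) +- alpha_3 sinh(zeta), used after (A.663)
for s in (+1, -1):
    assert np.allclose(expm(s * zeta * alpha3),
                       np.cosh(zeta) * np.eye(4) + s * np.sinh(zeta) * alpha3)
print("boost-matrix identities verified")
```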
Transforming $\partial_{z}=-2(E-V)\partial_{\tau}=-2\sqrt{V^{\prime}\tau}\cosh\zeta\,\partial_{\tau}$ (which requires both a linear potential and $\boldsymbol{x}_{\perp}=0$), $\displaystyle i{\buildrel\rightarrow\over{\partial}}_{z}{\alpha_{3}}$ $\displaystyle=-2\sqrt{V^{\prime}\tau}e^{\zeta{\alpha_{3}}}{\alpha_{3}}\,i{\buildrel\rightarrow\over{\partial}}_{\tau}+2\sqrt{V^{\prime}\tau}\sinh\zeta\,i{\buildrel\rightarrow\over{\partial}}_{\tau}$ $\displaystyle i{\buildrel\leftarrow\over{\partial}}_{z}{\alpha_{3}}$ $\displaystyle=-2i{\buildrel\leftarrow\over{\partial}}_{\tau}{\alpha_{3}}\sqrt{V^{\prime}\tau}e^{-\zeta{\alpha_{3}}}-2i{\buildrel\leftarrow\over{\partial}}_{\tau}\sqrt{V^{\prime}\tau}\sinh\zeta$ (A.663) The terms $\propto\sinh\zeta$ cancel in (A.21). Expressing $E-V\pm P{\alpha_{3}}=\sqrt{V^{\prime}\tau}\exp(\pm\zeta{\alpha_{3}})$, commuting the $\exp(\pm\zeta{\alpha_{3}}/2)$ factors using $i\boldsymbol{\nabla}_{\perp}\cdot{\boldsymbol{\alpha}_{\perp}}\,\exp(-\zeta{\alpha_{3}}/2)=\exp(\zeta{\alpha_{3}}/2)i\boldsymbol{\nabla}_{\perp}\cdot{\boldsymbol{\alpha}_{\perp}}$ (since $\boldsymbol{\nabla}_{\perp}\zeta=0$) and similarly for the $m\gamma^{0}$ terms we get, $\displaystyle e^{\zeta{\alpha_{3}}/2}\,B\,e^{-\zeta{\alpha_{3}}/2}$ $\displaystyle=e^{\zeta{\alpha_{3}}}\big{[}-2\sqrt{V^{\prime}\tau}\,i{\buildrel\rightarrow\over{\partial}}_{\tau}{\alpha_{3}}+i{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{\perp}\cdot{\boldsymbol{\alpha}_{\perp}}-{\textstyle\frac{1}{2}}\sqrt{V^{\prime}\tau}+m\gamma^{0}\big{]}\Phi^{(0)}(\tau)$ $\displaystyle+\Phi^{(0)}(\tau)\big{[}-2\,i{\buildrel\leftarrow\over{\partial}}_{\tau}{\alpha_{3}}\sqrt{V^{\prime}\tau}+i{\overset{\leftarrow}{\boldsymbol{\nabla}}}_{\perp}\cdot{\boldsymbol{\alpha}_{\perp}}-{\textstyle\frac{1}{2}}\sqrt{V^{\prime}\tau}-m\gamma^{0}\big{]}e^{-\zeta{\alpha_{3}}}$ (A.664) The terms in [ ] depend only on $\tau$, i.e., they are as in the $\zeta=0$ BSE of $\Phi^{(0)}(\tau)$. Expressing $\exp(\pm\zeta{\alpha_{3}})=\cosh\zeta\pm{\alpha_{3}}\sinh\zeta$ the coefficient of $\cosh\zeta$ is the rest frame BSE, which $\Phi^{(0)}(\tau)$ satisfies by definition. Consequently the two terms in [ ] give equal and opposite contributions to the $\zeta=0$ BSE. Using this leaves an anticommutator with ${\alpha_{3}}$. Transforming back $\tau\to z$ at $\zeta=0$ allows one to identify the ${\overset{\rightarrow}{\mathfrak{h}}}_{-}$ operator (403), $\displaystyle e^{\zeta{\alpha_{3}}/2}\,B\,e^{-\zeta{\alpha_{3}}/2}$ $\displaystyle=\sinh\zeta\big{\\{}{{\alpha_{3}}},{\big{(}-2\sqrt{V^{\prime}\tau}\,i{\buildrel\rightarrow\over{\partial}}_{\tau}{\alpha_{3}}+i{\overset{\rightarrow}{\boldsymbol{\nabla}}}_{\perp}\cdot{\boldsymbol{\alpha}_{\perp}}-{\textstyle\frac{1}{2}}\sqrt{V^{\prime}\tau}+m\gamma^{0}\big{)}\Phi^{(0)}(\tau)}\big{\\}}$ (A.665) $\displaystyle=\sinh\zeta\big{\\{}{{\alpha_{3}}},{\Big{[}i{\overset{\rightarrow}{\boldsymbol{\nabla}}}\cdot{\boldsymbol{\alpha}}-{\textstyle\frac{1}{2}}\big{(}M-V\big{)}+m\gamma^{0}\Big{]}\Phi^{(0)}(0,0,z)}\big{\\}}={\textstyle\frac{1}{2}}(M-V)\sinh\zeta\big{\\{}{{\alpha_{3}}},{{\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi^{(0)}(0,0,z)}\big{\\}}$ The expressions in (419), (430) and (439) show that $\big{\\{}{{\alpha_{3}}},{{\overset{\rightarrow}{\mathfrak{h}}}_{-}\Phi^{(0)}(0,0,z)}\big{\\}}=0$ for all wave functions. Hence $B=0$ and $\Phi^{(P)}(\tau)$ given by (471) solves the BSE for all $P$ at $\boldsymbol{x}_{\perp}=0$. ### A.22 Prove the orthogonality relation (473) for states with wave functions satisfying the BSE (456).
I follow the proof presented in Dietrich _et al._ (2013). From the expression (455) for the states in terms of their wave functions, $\displaystyle\langle{M_{B},{\boldsymbol{P}}_{B}}|M_{A},{\boldsymbol{P}}_{A}\rangle$ $\displaystyle=\int d{\boldsymbol{x}}_{1B}d{\boldsymbol{x}}_{2B}d{\boldsymbol{x}}_{1A}d{\boldsymbol{x}}_{2A}\langle{0}|\psi^{\dagger}({\boldsymbol{x}}_{2B})e^{-i{\boldsymbol{P}}_{B}\cdot({\boldsymbol{x}}_{1B}+{\boldsymbol{x}}_{2B})/2}\Phi_{B}^{({\boldsymbol{P}}_{B}){\dagger}}({\boldsymbol{x}}_{1B}-{\boldsymbol{x}}_{2B})\gamma^{0}\psi({\boldsymbol{x}}_{1B})$ $\displaystyle\times\bar{\psi}({\boldsymbol{x}}_{1A})e^{i{\boldsymbol{P}}_{A}\cdot({\boldsymbol{x}}_{1A}+{\boldsymbol{x}}_{2A})/2}\Phi_{A}^{({\boldsymbol{P}}_{A})}({\boldsymbol{x}}_{1A}-{\boldsymbol{x}}_{2A})\psi({\boldsymbol{x}}_{2A})\left|{0}\right\rangle$ (A.666) The field contractions set ${\boldsymbol{x}}_{1A}={\boldsymbol{x}}_{1B}\equiv{\boldsymbol{x}}_{1}$ and ${\boldsymbol{x}}_{2A}={\boldsymbol{x}}_{2B}\equiv{\boldsymbol{x}}_{2}$. Then $\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}=\int d[({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2]d({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})$ and the integral over ${\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2}$ sets ${\boldsymbol{P}}_{A}={\boldsymbol{P}}_{B}\equiv{\boldsymbol{P}}$, $\displaystyle\langle{M_{B},{\boldsymbol{P}}_{B}}|M_{A},{\boldsymbol{P}}_{A}\rangle$ $\displaystyle=\int d{\boldsymbol{x}}_{1}d{\boldsymbol{x}}_{2}\,e^{i({\boldsymbol{P}}_{A}-{\boldsymbol{P}}_{B})\cdot({\boldsymbol{x}}_{1}+{\boldsymbol{x}}_{2})/2}\mathrm{Tr}\,\Big{[}\Phi_{B}^{({\boldsymbol{P}}_{B}){\dagger}}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\Phi_{A}^{({\boldsymbol{P}}_{A})}({\boldsymbol{x}}_{1}-{\boldsymbol{x}}_{2})\Big{]}$ $\displaystyle=(2\pi)^{3}\delta^{3}({\boldsymbol{P}}_{A}-{\boldsymbol{P}}_{B})\int d{\boldsymbol{x}}\,\mathrm{Tr}\,\Big{[}\Phi_{B}^{({\boldsymbol{P}}){\dagger}}({\boldsymbol{x}})\Phi_{A}^{({\boldsymbol{P}})}({\boldsymbol{x}})\Big{]}$ (A.667) The BSE (456) for $\Phi_{A}^{({\boldsymbol{P}})}$ and $\Phi_{B}^{({\boldsymbol{P}}){\dagger}}$ are $\displaystyle i\boldsymbol{\nabla}\cdot\big{\\{}{{\boldsymbol{\alpha}}},{\Phi_{A}^{({\boldsymbol{P}})}({\boldsymbol{x}})}\big{\\}}-{\textstyle\frac{1}{2}}{\boldsymbol{P}}\cdot\big{[}{{\boldsymbol{\alpha}}},{\Phi_{A}^{({\boldsymbol{P}})}({\boldsymbol{x}})}\big{]}+m\big{[}{\gamma^{0}},{\Phi_{A}^{({\boldsymbol{P}})}({\boldsymbol{x}})}\big{]}$ $\displaystyle=\big{[}E_{A}-V({\boldsymbol{x}})\big{]}\Phi_{A}^{({\boldsymbol{P}})}({\boldsymbol{x}})$ $\displaystyle-i\boldsymbol{\nabla}\cdot\big{\\{}{{\boldsymbol{\alpha}}},{\Phi_{B}^{({\boldsymbol{P}}){\dagger}}({\boldsymbol{x}})}\big{\\}}+{\textstyle\frac{1}{2}}{\boldsymbol{P}}\cdot\big{[}{{\boldsymbol{\alpha}}},{\Phi_{B}^{({\boldsymbol{P}}){\dagger}}({\boldsymbol{x}})}\big{]}-m\big{[}{\gamma^{0}},{\Phi_{B}^{({\boldsymbol{P}}){\dagger}}({\boldsymbol{x}})}\big{]}$ $\displaystyle=\big{[}E_{B}-V({\boldsymbol{x}})\big{]}\Phi_{B}^{({\boldsymbol{P}}){\dagger}}({\boldsymbol{x}})$ (A.668) Multiplying the first equation by $\Phi_{B}^{(P){\dagger}}({\boldsymbol{x}})$ from the left, the second by $\Phi^{(P)}_{A}({\boldsymbol{x}})$ from the right and taking the trace of their difference, the terms $\propto{\textstyle\frac{1}{2}}{\boldsymbol{P}}$, $m$ and $V$ cancel, giving $\displaystyle
2i\mathrm{Tr}\,\Big{[}{\boldsymbol{\alpha}}\cdot\boldsymbol{\nabla}\big{\\{}{\Phi_{B}^{({\boldsymbol{P}}){\dagger}}({\boldsymbol{x}})},{\Phi_{A}^{({\boldsymbol{P}})}({\boldsymbol{x}})}\big{\\}}\Big{]}=\big{(}E_{A}-E_{B}\big{)}\mathrm{Tr}\,\Big{[}\Phi_{B}^{({\boldsymbol{P}}){\dagger}}({\boldsymbol{x}})\Phi_{A}^{({\boldsymbol{P}})}({\boldsymbol{x}})\Big{]}$ (A.669) Integrating both sides $\int_{-\infty}^{\infty}d{\boldsymbol{x}}$ the lhs. vanishes due to the substitution at (one component of) ${\boldsymbol{x}}=\pm\infty$. The vanishing of the rhs. implies (for $M_{A}\neq M_{B}$) the orthogonality of the states according to (A.22). ### A.23 Verify the expression (476) for the global norm of $\Phi_{-+}({\boldsymbol{x}})$ in terms of $F_{1}(r)$. According to (A.22) the bound state norm is proportional to the trace on the lhs. of (476). The expression (VIII.2.3) of $\Phi_{-+}({\boldsymbol{x}})$ implies $\displaystyle N$ $\displaystyle\equiv\int d{\boldsymbol{x}}\,\mathrm{Tr}\,\big{\\{}\Phi_{-+}^{\dagger}({\boldsymbol{x}})\Phi_{-+}({\boldsymbol{x}})\big{\\}}$ $\displaystyle=\int dr\,r^{2}d\Omega\,\mathrm{Tr}\,\Big{\\{}Y_{j\lambda}^{*}(\Omega)F_{1}^{*}(r)\gamma_{5}\Big{[}(-i{\boldsymbol{\alpha}}\cdot{\overset{\leftarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})\frac{2}{M-V}+1\Big{]}\Big{[}\frac{2}{M-V}(i{\boldsymbol{\alpha}}\cdot{\overset{\rightarrow}{\boldsymbol{\nabla}}}+m\gamma^{0})+1\Big{]}\gamma_{5}F_{1}(r)Y_{j\lambda}(\Omega)\Big{\\}}$ $\displaystyle=4\int dr\,r^{2}d\Omega\,Y_{j\lambda}^{*}F_{1}^{*}\Big{[}{\buildrel\leftarrow\over{\partial}}_{j}\frac{4}{(M-V)^{2}}{\buildrel\rightarrow\over{\partial}}_{j}+\frac{4m^{2}}{(M-V)^{2}}+1\Big{]}F_{1}Y_{j\lambda}$ $\displaystyle=4\int dr\,r^{2}d\Omega\,Y_{j\lambda}^{*}F_{1}^{*}\Big{[}-\frac{4}{(M-V)^{2}}{\overset{\rightarrow}{\boldsymbol{\nabla}}}^{2}-\frac{8V^{\prime}}{(M-V)^{3}}\partial_{r}+\frac{4m^{2}}{(M-V)^{2}}+1\Big{]}F_{1}Y_{j\lambda}$ (A.670) Expressing ${\overset{\rightarrow}{\boldsymbol{\nabla}}}^{2}$ in spherical coordinates and using the radial equation (417), $\displaystyle{\overset{\rightarrow}{\boldsymbol{\nabla}}}^{2}F_{1}(r)Y_{j\lambda}(\Omega)=\Big{[}F^{\prime\prime}_{1}+\frac{2}{r}F^{\prime}_{1}-\frac{j(j+1)}{r^{2}}F_{1}\Big{]}Y_{j\lambda}=\Big{[}-\frac{V^{\prime}}{M-V}F^{\prime}_{1}-{\textstyle\frac{1}{4}}(M-V)^{2}F_{1}+m^{2}F_{1}\Big{]}Y_{j\lambda}$ (A.671) Substituting this into (A.23) and using $\int d\Omega\,|Y_{j\lambda}(\Omega)|^{2}=1$ gives (476). ### A.24 Verify the expressions (VIII.6.2) for radial functions $H_{1}(r),H_{2}(r)$ and $H_{3}(r)$. The structure (510) of the $J^{PC}=0^{++}$ wave function, $\displaystyle\Phi_{\sigma}({\boldsymbol{x}})=H_{1}(r)+i\,{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,H_{2}(r)+i\,\gamma^{0}{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}\,H_{3}(r)$ (A.672) follows from its parity and charge conjugation quantum numbers, as listed in (413). The three Dirac structures involving the orbital angular momentum operator ${\boldsymbol{L}}$ do not contribute for $j=0$, since ${\boldsymbol{L}}$ acts on $Y_{00}$ in (406), which has no angular dependence.
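The (anti)commutators of $\Phi_{\sigma}$ with ${\boldsymbol{\alpha}}$ and $\gamma^{0}$, evaluated in the next step, can be cross-checked numerically for the structure (A.672); a minimal sketch in the Dirac representation, with random illustrative values for $\hat{\boldsymbol{x}}$ and $H_{1,2,3}$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sig = [np.array(m, dtype=complex) for m in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sig]   # Dirac representation
g0 = np.block([[I2, Z2], [Z2, -I2]])
g5 = np.block([[Z2, I2], [I2, Z2]])

rng = np.random.default_rng(0)
xhat = rng.normal(size=3); xhat /= np.linalg.norm(xhat)
H1, H2, H3 = rng.normal(size=3)            # radial functions at one point

ax = sum(x * a for x, a in zip(xhat, alpha))               # alpha . xhat
Phi = H1 * np.eye(4) + 1j * ax * H2 + 1j * (g0 @ ax) * H3  # structure (A.672)

eps = np.zeros((3, 3, 3))                  # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0
cross = [sum(eps[j, k, l] * xhat[k] * alpha[l]
             for k in range(3) for l in range(3)) for j in range(3)]

# 1/2 {alpha_j, Phi} = alpha_j H1 + i xhat_j H2 + gamma^0 (xhat x alpha)_j gamma_5 H3
for j in range(3):
    lhs = 0.5 * (alpha[j] @ Phi + Phi @ alpha[j])
    rhs = alpha[j]*H1 + 1j*xhat[j]*np.eye(4)*H2 + (g0 @ cross[j] @ g5)*H3
    assert np.allclose(lhs, rhs)

# 1/2 [gamma^0, Phi] = i gamma^0 (alpha.xhat) H2 + i (alpha.xhat) H3
assert np.allclose(0.5 * (g0 @ Phi - Phi @ g0), 1j*(g0 @ ax)*H2 + 1j*ax*H3)
print("Dirac decompositions verified")
```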
The BSE (506) involves the (anti)commutators $\displaystyle{\textstyle\frac{1}{2}}\left\\{{{\boldsymbol{\alpha}}},{\Phi_{\sigma}({\boldsymbol{x}})}\right\\}$ $\displaystyle={\boldsymbol{\alpha}}H_{1}(r)+i\hat{\boldsymbol{x}}H_{2}(r)+\gamma^{0}\hat{\boldsymbol{x}}\times{\boldsymbol{\alpha}}\,\gamma_{5}H_{3}(r)$ $\displaystyle{\textstyle\frac{1}{2}}\left[{\gamma^{0}},{\Phi_{\sigma}({\boldsymbol{x}})}\right]$ $\displaystyle=i\gamma^{0}{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}H_{2}(r)+i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}H_{3}(r)$ (A.673) Inserting the expression (A.672) into the BSE (506) with $M=0$ gives, using $\boldsymbol{\nabla}f(r)=\hat{\boldsymbol{x}}f^{\prime}(r)$, $\partial_{i}x^{j}=\delta_{ij}$ and $\boldsymbol{\nabla}\cdot\hat{\boldsymbol{x}}\times{\boldsymbol{\alpha}}\,f(r)=0$, $\displaystyle i\,{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}H_{1}^{\prime}-\frac{2}{r}H_{2}-H_{2}^{\prime}+im\gamma^{0}{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}H_{2}+im{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}H_{3}+{\textstyle\frac{1}{2}}V^{\prime}r\,\Phi_{\sigma}({\boldsymbol{x}})=0$ (A.674) The coefficients of the Dirac structures $1,\ i{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}$ and $i\gamma^{0}{\boldsymbol{\alpha}}\cdot\hat{\boldsymbol{x}}$ impose $\displaystyle-\frac{2}{r}H_{2}-H_{2}^{\prime}+{\textstyle\frac{1}{2}}V^{\prime}r\,H_{1}$ $\displaystyle=0$ $\displaystyle H_{1}^{\prime}+mH_{3}+{\textstyle\frac{1}{2}}V^{\prime}r\,H_{2}$ $\displaystyle=0$ $\displaystyle mH_{2}+{\textstyle\frac{1}{2}}V^{\prime}r\,H_{3}$ $\displaystyle=0$ (A.675) The first and third equations allow to express $H_{1}$ and $H_{3}$, respectively, in terms of $H_{2}$. Substituting them into the second equation gives the differential equation $\displaystyle H_{2}^{\prime\prime}+\frac{1}{r}H_{2}^{\prime}+\Big{[}\frac{1}{4}(V^{\prime}r)^{2}-m^{2}-\frac{4}{r^{2}}\Big{]}H_{2}=0$ (A.676) It is straightforward to verify that the expressions for $H_{i}(r)$ given in (VIII.6.2) satisfy the above equations. Their properties $H_{1}(r\to 0)\sim r^{0}$, $H_{2}(r\to 0)\sim r^{2}$ and $H_{3}(r\to 0)\sim r^{1}$ ensure that the wave function is locally normalizable at $r=0$. ## References * Zyla _et al._ (2020) P. Zyla _et al._ (Particle Data Group), PTEP 2020, 083C01 (2020). * Aoki _et al._ (2020) S. Aoki _et al._ (Flavour Lattice Averaging Group), Eur. Phys. J. C 80, 113 (2020), arXiv:1902.08191 [hep-lat] . * Brodsky _et al._ (2020) S. J. Brodsky, V. D. Burkert, D. S. Carman, J. P. Chen, Z. F. Cui, M. Döring, H. G. Dosch, J. P. Draayer, L. Elouadrhiri, D. I. Glazier, A. N. H. Blin, T. Horn, K. Joo, H. C. Kim, V. Kubarovsky, S. E. Kuhn, Y. Lu, W. Melnitchouk, C. Mezrag, V. I. Mokeev, J. W. Qiu, M. Radici, D. Richards, C. D. Roberts, J. Rodríguez-Quintero, J. Segovia, A. P. Szczepaniak, G. F. de Téramond, and D. Winney, Int. J. Mod. Phys. E 29, 2030006 (2020), arXiv:2006.06802 [hep-ph] . * Blum (2017) A. S. Blum, Stud. Hist. Phil. Sci. B 60, 46 (2017), arXiv:2011.05908 [physics.hist-ph] . * Bodwin _et al._ (1985) G. T. Bodwin, D. R. Yennie, and M. A. Gregorio, Rev. Mod. Phys. 57, 723 (1985). * Murota (1988) T. Murota, Prog. Theor. Phys. Suppl. 95, 46 (1988). * Penin (2014) A. A. Penin, _Proceedings, 12th DESY Workshop on Elementary Particle Physics: Loops and Legs in Quantum Field Theory (LL2014): Weimar, Germany, April 27-May 2, 2014_ , PoS LL2014, 074 (2014). * Adkins (2015) G. S. Adkins, Hyperfine Interact. 233, 59 (2015). * Adkins (2018) G. Adkins, J. Phys. Conf. Ser. 1138, 012005 (2018). 
* Eichten _et al._ (1980) E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane, and T.-M. Yan, Phys. Rev. D21, 203 (1980). * Eichten _et al._ (2008) E. Eichten, S. Godfrey, H. Mahlke, and J. L. Rosner, Rev. Mod. Phys. 80, 1161 (2008), arXiv:hep-ph/0701208 [hep-ph] . * Gehrmann _et al._ (2013) T. Gehrmann, G. Luisoni, and P. F. Monni, Eur. Phys. J. C 73, 2265 (2013), arXiv:1210.6945 [hep-ph] . * ’t Hooft (2003) G. ’t Hooft, Nucl. Phys. B Proc. Suppl. 121, 333 (2003), arXiv:hep-th/0207179 . * Dokshitzer (2003) Y. L. Dokshitzer, in _2002 European School of high-energy physics, Pylos, Greece, 25 Aug-7 Sep 2002: Proceedings_ (2003) pp. 1–33, arXiv:hep-ph/0306287 [hep-ph] . * Dokshitzer and Kharzeev (2004) Y. L. Dokshitzer and D. E. Kharzeev, Ann. Rev. Nucl. Part. Sci. 54, 487 (2004), arXiv:hep-ph/0404216 . * Dokshitzer (2010) Y. Dokshitzer, _Proceedings, Workshop on Critical examination of RHIC paradigms (CERP 2010): Austin, USA, April 14-17, 2010_ , PoS CERP2010, 001 (2010). * Godfrey and Isgur (1985) S. Godfrey and N. Isgur, Phys. Rev. D 32, 189 (1985). * Bali (2001) G. S. Bali, Phys. Rept. 343, 1 (2001), arXiv:hep-ph/0001312 [hep-ph] . * Bali _et al._ (2005) G. S. Bali, H. Neff, T. Duessel, T. Lippert, and K. Schilling (SESAM), Phys. Rev. D 71, 114513 (2005), arXiv:hep-lat/0505012 . * Bali _et al._ (2006) G. S. Bali, T. Dussel, T. Lippert, H. Neff, Z. Prkacin, and K. Schilling, Nucl. Phys. B Proc. Suppl. 153, 9 (2006), arXiv:hep-lat/0512018 . * Rosner (2011) J. L. Rosner, in _9th Conference on Flavor Physics and CP Violation_ (2011) arXiv:1107.1273 [hep-ph] . * Coleman and Weinberg (1973) S. R. Coleman and E. J. Weinberg, Phys. Rev. D 7, 1888 (1973). * Itzykson and Zuber (1980) C. Itzykson and J. Zuber, _Quantum Field Theory_ , International Series In Pure and Applied Physics (McGraw-Hill, New York, 1980). * Chodos _et al._ (1974) A. Chodos, R. Jaffe, K. Johnson, C. B. Thorn, and V. Weisskopf, Phys. Rev. D 9, 3471 (1974). * Eden (1971) R. J. Eden, Rept. Prog. Phys. 34, 995 (1971). * Phillips and Roy (1974) R. J. N. Phillips and D. P. Roy, Rept. Prog. Phys. 37, 1035 (1974). * Melnitchouk _et al._ (2005) W. Melnitchouk, R. Ent, and C. Keppel, Phys. Rept. 406, 127 (2005), arXiv:hep-ph/0501217 . * Kopeliovich and Rezaeian (2009) B. Z. Kopeliovich and A. H. Rezaeian, Int. J. Mod. Phys. E 18, 1629 (2009), arXiv:0811.2024 [hep-ph] . * (29) J. Greensite, _An introduction to the confinement problem_, Vol. 821, Lecture Notes in Physics (2011). * (30) A. Selem and F. Wilczek, in _Ringberg Workshop on New Trends in HERA Physics 2005_, arXiv:hep-ph/0602128 . * Desgrolard _et al._ (2001) P. Desgrolard, M. Giffon, E. Martynov, and E. Predazzi, Eur. Phys. J. C 18, 555 (2001), arXiv:hep-ph/0006244 . * Harari (1969) H. Harari, Phys. Rev. Lett. 22, 562 (1969). * Rosner (1969) J. L. Rosner, Phys. Rev. Lett. 22, 689 (1969). * Zweig (2015) G. Zweig, Int. J. Mod. Phys. A 30, 1430073 (2015). * Schwarz (1973) J. H. Schwarz, Phys. Rept. 8, 269 (1973). * Veneziano (1974) G. Veneziano, Phys. Rept. 9, 199 (1974). * Lovelace (1968) C. Lovelace, Phys. Lett. B 28, 264 (1968). * Shapiro (1969) J. A. Shapiro, Phys. Rev. 179, 1345 (1969). * Veneziano (1968) G. Veneziano, Nuovo Cim. A 57, 190 (1968). * Nielsen (2009) H. B. Nielsen, in _The Birth of String Theory_ (2009) arXiv:0904.4221 [hep-ph] . * Scherk and Schwarz (1974) J. Scherk and J. H. Schwarz, Nucl. Phys. B 81, 118 (1974). * Abramowicz and Caldwell (1999) H. Abramowicz and A. Caldwell, Rev. Mod. Phys. 71, 1275 (1999), arXiv:hep-ex/9903037 . * Cooper-Sarkar (2012) A. 
Cooper-Sarkar, J. Phys. G 39, 093001 (2012), arXiv:1206.0894 [hep-ph] . * Cooper-Sarkar (2009) A. M. Cooper-Sarkar, in _38th International Symposium on Multiparticle Dynamics_ (2009) arXiv:0901.4001 [hep-ph] . * Melnitchouk (2011) W. Melnitchouk, in _3rd International Workshop on Nucleon Structure at Large Bjorken x, AIP Conference Proceedings_, Vol. 1369 (2011) pp. 172–179. * Bloom and Gilman (1970) E. D. Bloom and F. J. Gilman, Phys. Rev. Lett. 25, 1140 (1970). * Fantoni _et al._ (2006) A. Fantoni, S. Liuti, and O. A. Rondon-Aramayo, eds., _Quark-hadron duality and the transition to pQCD. Proceedings, 1st Workshop, Frascati, Italy, June 6-8, 2005_ (2006). * Dokshitzer (1998) Y. L. Dokshitzer, in _High-energy physics. Proceedings, 29th International Conference, ICHEP’98, Vancouver, Canada, July 23-29, 1998. Vol. 1, 2_ (1998) pp. 305–324, arXiv:hep-ph/9812252 [hep-ph] . * Deur _et al._ (2016) A. Deur, S. J. Brodsky, and G. F. de Teramond, Nucl. Phys. 90, 1 (2016), arXiv:1604.08082 [hep-ph] . * Dokshitzer _et al._ (1999) Y. L. Dokshitzer, G. Marchesini, and G. P. Salam, Eur. Phys. J. direct C3, 1 (1999), arXiv:hep-ph/9812487 . * Abt _et al._ (2017) I. Abt, A. M. Cooper-Sarkar, B. Foster, V. Myronenko, K. Wichmann, and M. Wing, Phys. Rev. D 96, 014001 (2017), arXiv:1704.03187 [hep-ex] . * Dokshitzer _et al._ (1996) Y. L. Dokshitzer, G. Marchesini, and B. R. Webber, Nucl. Phys. B 469, 93 (1996), arXiv:hep-ph/9512336 . * Dokshitzer and Webber (1997) Y. L. Dokshitzer and B. R. Webber, Phys. Lett. B 404, 321 (1997), arXiv:hep-ph/9704298 . * Salpeter and Bethe (1951) E. E. Salpeter and H. A. Bethe, Phys. Rev. 84, 1232 (1951). * Silagadze (1998) Z. K. Silagadze, (1998), arXiv:hep-ph/9803307 . * Lepage (1978) G. P. Lepage, _Two-body Bound States in Quantum Electrodynamics_ , Ph.D. thesis, SLAC-R-0212 (1978). * Nakanishi (1969) N. Nakanishi, Prog. Theor. Phys. Suppl. 43, 1 (1969). * Nakanishi (1988) N. Nakanishi, Prog. Theor. Phys. Suppl. 95, 1 (1988). * Karmanov _et al._ (2020) V. Karmanov, J. Carbonell, and H. Sazdjian, PoS LC2019, 050 (2020), arXiv:2001.00401 [hep-ph] . * Carbonell _et al._ (2021) J. Carbonell, V. A. Karmanov, and H. Sazdjian, Eur. Phys. J. C 81, 50 (2021), arXiv:2101.03566 [hep-ph] . * Caswell and Lepage (1978) W. E. Caswell and G. P. Lepage, Phys. Rev. A18, 810 (1978). * Brodsky and Primack (1969) S. J. Brodsky and J. R. Primack, Annals Phys. 52, 315 (1969). * Järvinen (2005) M. Järvinen, Phys. Rev. D71, 085006 (2005), arXiv:hep-ph/0411208 [hep-ph] . * Kinoshita and Lepage (1990) T. Kinoshita and G. P. Lepage, Adv. Ser. Direct. High Energy Phys. 7, 81 (1990). * Caswell and Lepage (1986) W. E. Caswell and G. P. Lepage, Phys. Lett. 167B, 437 (1986). * Kinoshita (1998) T. Kinoshita, in _International Workshop on Hadronic Atoms and Positronium in the Standard Model_ (1998) arXiv:hep-ph/9808351 . * Kinoshita and Nio (1996) T. Kinoshita and M. Nio, Phys. Rev. D53, 4909 (1996), arXiv:hep-ph/9512327 [hep-ph] . * Pachucki (1997) K. Pachucki, Phys. Rev. A 56, 297 (1997). * Czarnecki _et al._ (1999) A. Czarnecki, K. Melnikov, and A. Yelkhovsky, Phys. Rev. A 59, 4316 (1999), arXiv:hep-ph/9901394 . * Haidar _et al._ (2020) M. Haidar, Z.-X. Zhong, V. Korobov, and J.-P. Karr, Phys. Rev. A 101, 022501 (2020), arXiv:1911.03235 [physics.atom-ph] . * Neubert (1994) M. Neubert, Phys. Rept. 245, 259 (1994), arXiv:hep-ph/9306320 . * Brambilla _et al._ (2005) N. Brambilla, A. Pineda, J. Soto, and A. Vairo, Rev. Mod. Phys. 77, 1423 (2005), arXiv:hep-ph/0410047 . * Pineda (2012) A. Pineda, Prog. Part. 
Nucl. Phys. 67, 735 (2012), arXiv:1111.0165 [hep-ph] . * Schwinger (1962) J. S. Schwinger, Phys. Rev. 128, 2425 (1962). * Coleman _et al._ (1975) S. R. Coleman, R. Jackiw, and L. Susskind, Annals Phys. 93, 267 (1975). * Coleman (1976) S. R. Coleman, Annals Phys. 101, 239 (1976). * Plesset (1932) M. S. Plesset, Phys. Rev. 41, 278 (1932). * Klein (1929) O. Klein, Z. Phys. 53, 157 (1929). * Hansen and Ravndal (1981) A. Hansen and F. Ravndal, Phys. Scripta 23, 1036 (1981). * Weinberg (2005) S. Weinberg, _The Quantum theory of fields. Vol. 1: Foundations_ (Cambridge University Press, 2005). * Dirac (1928a) P. A. Dirac, Proc. Roy. Soc. Lond. A A117, 610 (1928a). * Dirac (1928b) P. Dirac, Proc. Roy. Soc. Lond. A A118, 351 (1928b). * Brodsky (1971) S. J. Brodsky, Atomic physics and astrophysics. Vol. 1. Brandeis University Summer Institute in Theoretical Physics, 95 (1971), (SLAC-PUB-1010). * Gross (1982) F. Gross, Phys. Rev. C 26, 2203 (1982). * Neghabian and Gloeckle (1983) A. Neghabian and W. Gloeckle, Can. J. Phys. 61, 85 (1983). * Blaizot and Hoyer (2014) J.-P. Blaizot and P. Hoyer, unpublished (2014). * Hoyer (2016) P. Hoyer (2016) arXiv:1605.01532 [hep-ph] . * Blaizot and Ripka (1985) J.-P. Blaizot and G. Ripka, _Quantum theory of finite systems_ (The MIT Press, 1985). * Dietrich _et al._ (2013) D. D. Dietrich, P. Hoyer, and M. Järvinen, Phys. Rev. D87, 065021 (2013), arXiv:1212.4747 [hep-ph] . * Artru (1984) X. Artru, Phys. Rev. D 29, 1279 (1984). * Burkardt (1996) M. Burkardt, Adv. Nucl. Phys. 23, 1 (1996), arXiv:hep-ph/9505259 . * Brodsky _et al._ (1998) S. J. Brodsky, H.-C. Pauli, and S. S. Pinsky, Phys. Rept. 301, 299 (1998), arXiv:hep-ph/9705477 . * Collins (2018) J. Collins, (2018), arXiv:1801.03960 [hep-ph] . * Ji (2020) X. Ji, Nucl. Phys. B, 115181 (2020), arXiv:2003.04478 [hep-ph] . * Mannheim _et al._ (2021) P. D. Mannheim, P. Lowdon, and S. J. Brodsky, Phys. Rept. 891, 1 (2021), arXiv:2005.00109 [hep-ph] . * Feinberg (1978) F. L. Feinberg, Phys. Rev. D17, 2659 (1978). * Christ and Lee (1980) N. H. Christ and T. D. Lee, Phys. Rev. D22, 939 (1980). * Willemsen (1978) J. F. Willemsen, Phys. Rev. D17, 574 (1978). * Bjorken (1979) J. D. Bjorken, in _Quantum chromodynamics: Proceedings, 7th SLAC Summer Institute on Particle Physics (SSI 79), Stanford, Calif., 9-20 Jul 1979_ (1979) p. 219. * Leibbrandt (1987) G. Leibbrandt, Rev. Mod. Phys. 59, 1067 (1987). * Strocchi (2013) F. Strocchi, Int. Ser. Monogr. Phys. 158, 1 (2013). * Gribov (1978) V. N. Gribov, Nucl. Phys. B139, 1 (1978). * Peskin and Schroeder (1995) M. E. Peskin and D. V. Schroeder, _An Introduction to quantum field theory_ (Addison-Wesley, Reading, USA, 1995). * Hoyer (1986) P. Hoyer, Phys. Lett. B172, 101 (1986). * Dietrich _et al._ (2012) D. D. Dietrich, P. Hoyer, and M. Järvinen, Phys. Rev. D85, 105016 (2012), arXiv:1202.0826 [hep-ph] . * Hoyer (2014) P. Hoyer (2014) arXiv:1402.5005 [hep-ph] . * Haber (2021) H. E. Haber, SciPost Phys. Lect. Notes 21, 1 (2021), arXiv:1912.13302 [math-ph] . * Geffen and Suura (1977) D. A. Geffen and H. Suura, Phys. Rev. D16, 3305 (1977). * ’t Hooft (1974a) G. ’t Hooft, Nucl. Phys. B 72, 461 (1974a). * Witten (1980) E. Witten, NATO Sci. Ser. B 59, 403 (1980). * Coleman (1980) S. R. Coleman, in _17th International School of Subnuclear Physics: Pointlike Structures Inside and Outside Hadrons_ (1980) p. 0011. * ’t Hooft (1974b) G. ’t Hooft, Nucl. Phys. B 75, 461 (1974b). * Hoyer (2018) P. Hoyer, (2018), arXiv:1807.05598v2 [hep-ph] . * Gell-Mann _et al._ (1968) M. Gell-Mann, R. Oakes, and B. Renner, Phys. Rev.
175, 2195 (1968). * Okubo (1963) S. Okubo, Phys. Lett. 5, 165 (1963). * Zweig (1964) G. Zweig, CERN-TH-412 (1964). * Iizuka (1966) J. Iizuka, Prog. Theor. Phys. Suppl. 37, 21 (1966).
# A construction for bipartite Turán numbers Ivan Livinsky ###### Abstract We consider in detail the well-known family of graphs $G(q,t)$ that establish an asymptotic lower bound for Turán numbers $\mathrm{ex}(n,K_{2,t+1})$. We prove that $G(q,t)$ for some specific $q$ and $t$ also gives an asymptotic bound for $K_{3,3}$ and for some higher complete bipartite graphs as well. The asymptotic bounds we prove are the same as provided by the well-known Norm-graphs. ## 1 Introduction In his 1996 paper [1], for a prime power $q$ and $t\mid q-1$, Füredi introduced the graphs $G(q,t)$ defined as follows. Let $\mathbf{F}=\mathrm{GF}(q)$ be a finite field having $q$ elements. Let $H\subseteq\mathbf{F}^{\ast}$ be a subgroup of the multiplicative group containing $t$ elements. Define $V(G(q,t))=(\mathbf{F}\times\mathbf{F}\setminus\\{(0,0)\\})/\sim$ where two pairs $(a_{1},b_{1})$, $(a_{2},b_{2})$ are equivalent iff there exists $h\in H$ such that $a_{1}=ha_{2}$ and $b_{1}=hb_{2}$. Hence, $G(q,t)$ has $(q^{2}-1)/t$ vertices. We write $\langle a,b\rangle$ for the equivalence class containing the pair $(a,b)$. Two vertices $\langle a,b\rangle$ and $\langle x,y\rangle$ are joined by an edge iff $ax+by\in H.$ It is easy to see that this correctly defines a simple graph: the adjacency condition does not depend on the choice of representatives, since replacing $(a,b)$ by $(ha,hb)$ and $(x,y)$ by $(h^{\prime}x,h^{\prime}y)$ multiplies $ax+by$ by $hh^{\prime}\in H$. For a fixed pair $(a,b)$ and $h\in H$, the equation $ax+by=h$ defines a line containing $q$ points in $\mathbf{F}^{2}$, and any two points on this line are pairwise non-equivalent. Therefore, the degree of each vertex is either $q$ or $q-1$. Thus, $G(q,t)$ contains at least $\frac{1}{2t}(q^{2}-1)(q-1)$ edges. ###### Theorem 1.1 ([1]). For arbitrary prime power $q$ and $t\mid q-1$ the graph $G(q,t)$ is $K_{2,t+1}$-free. We reproduce Füredi’s proof here since we will use some of its steps later. ###### Proof. If a vertex $\langle x,y\rangle$ is attached to two distinct vertices $\langle a_{1},b_{1}\rangle$ and $\langle a_{2},b_{2}\rangle$ then there exist $h_{1},h_{2}\in H$ such that the following system of linear equations $\displaystyle a_{1}x+b_{1}y$ $\displaystyle=h_{1},$ $\displaystyle a_{2}x+b_{2}y$ $\displaystyle=h_{2}$ holds. First, we show that the matrix of the system $\left(\begin{array}[]{cc}a_{1}&b_{1}\\\ a_{2}&b_{2}\\\ \end{array}\right)$ has full rank. Indeed, both rows are nonzero, and if $a_{1}=ca_{2}$ and $b_{1}=cb_{2}$ for some $c\in\mathbf{F}^{\ast}$ then we also have $h_{1}=ch_{2}$, implying that $c=h_{1}h_{2}^{-1}\in H$. Thus, $\langle a_{1},b_{1}\rangle=\langle a_{2},b_{2}\rangle$ and we arrive at a contradiction. Therefore, for arbitrary $h_{1}$ and $h_{2}$ there exists a unique solution $(x,y)$. We have $t^{2}$ choices for $h_{1}$ and $h_{2}$ in total, and all the solutions are divided into $t$ classes. Therefore, there are at most $t$ vertices attached to both $\langle a_{1},b_{1}\rangle$ and $\langle a_{2},b_{2}\rangle$. ∎ We show that for some specific choices of $q$ and $t$ the graphs $G(q,t)$ in a similar way give lower bounds for larger bipartite graphs as well. In fact, using them we can obtain the same asymptotic bounds as given by the well-known Norm-graphs [4]. ## 2 Construction for $K_{3,3}$ Let $q$ be a prime power. Consider the graph $G=G(q^{2},q+1)$. Let $\mathbf{F}=\mathrm{GF}(q^{2})$. ###### Theorem 2.1. For arbitrary prime power $q$ the graph $G(q^{2},q+1)$ is $K_{3,3}$-free. We will need an auxiliary lemma. ###### Lemma 2.2. Let $H\subset\mathbf{F}^{\ast}$, $|H|=q+1$, be a subgroup. Let $a,b\in\mathbf{F}$, $a\neq 0$, $b\neq 0$.
## 2 Construction for $K_{3,3}$

Let $q$ be a prime power. Consider the graph $G=G(q^{2},q+1)$. Let $\mathbf{F}=\mathrm{GF}(q^{2})$. ###### Theorem 2.1. For arbitrary prime power $q$ the graph $G(q^{2},q+1)$ is $K_{3,3}$-free. We will need an auxiliary lemma. ###### Lemma 2.2. Let $H\subset\mathbf{F}^{\ast}$, $|H|=q+1$, be a subgroup. Let $a,b\in\mathbf{F}$, $a\neq 0$, $b\neq 0$. Then the equation $ax+by=1$ has at most two solutions $(x,y)$ with $x,y\in H$. ###### Proof. Assume that $(x,y)$ is a solution. Then $by=1-ax$, $x^{q+1}=y^{q+1}=1$ and $\displaystyle b^{q+1}$ $\displaystyle=(by)^{q+1}=(1-ax)^{q+1}=(1-ax)^{q}(1-ax)$ $\displaystyle=(1-a^{q}x^{q})(1-ax)=(1-\tfrac{a^{q}}{x})(1-ax)=1-ax-\frac{a^{q}}{x}+a^{q+1}.$ Therefore, $x$ is a solution to the quadratic equation $ax^{2}-(a^{q+1}-b^{q+1}+1)x+a^{q}=0$ (a genuine quadratic, since $a\neq 0$), and $y=\frac{1-ax}{b}$. Therefore, there are at most two possible pairs $(x,y)$. ∎ ###### Proof of Theorem 2.1. Consider three distinct vertices $\langle a_{1},b_{1}\rangle$, $\langle a_{2},b_{2}\rangle$, $\langle a_{3},b_{3}\rangle$ and assume that another vertex $\langle x,y\rangle$ is attached to all of them. We have a system of equations $\displaystyle a_{1}x+b_{1}y=h_{1},$ $\displaystyle a_{2}x+b_{2}y=h_{2},$ $\displaystyle a_{3}x+b_{3}y=h_{3}.$ We know from the proof of Theorem 1.1 that the matrix $\left(\begin{array}[]{cc}a_{1}&b_{1}\\ a_{2}&b_{2}\\ \end{array}\right)$ has rank two. Therefore, the third equation must be a linear combination of the first two. That is, there must exist uniquely defined coefficients $\alpha,\beta\in\mathbf{F}$ such that $\displaystyle\alpha a_{1}+\beta a_{2}$ $\displaystyle=a_{3},$ $\displaystyle\alpha b_{1}+\beta b_{2}$ $\displaystyle=b_{3},$ $\displaystyle\alpha h_{1}+\beta h_{2}$ $\displaystyle=h_{3}.$ Moreover, $\alpha\neq 0$ and $\beta\neq 0$ since otherwise we would have equality between the initial vertices. Consider the last equation. Let $r=h_{1}h_{3}^{-1}$ and $s=h_{2}h_{3}^{-1}$. Then $\alpha r+\beta s=1.$ However, according to Lemma 2.2 there are at most two solutions $(r,s)$ to this equation. Therefore, there are at most $2(q+1)$ triples $(h_{1},h_{2},h_{3})$ such that the original system has a solution. Since triples that differ by a common factor $h\in H$ yield $H$-equivalent solutions $(x,y)$, these triples define at most two vertices $\langle x,y\rangle$. Therefore, $G$ is $K_{3,3}$-free. ∎ We have that $|V(G)|=\frac{q^{4}-1}{q+1}=(q^{2}+1)(q-1)=q^{3}-q^{2}+q-1$. We can also give an exact formula for the number of edges. ###### Theorem 2.3. Let $G=G(q^{2},q+1)$. If $q$ is odd then $|E(G)|=\frac{1}{2}(q^{5}-q^{4}+q^{3}-2q^{2}+1)$. If $q=2^{k}$ then $|E(G)|=\frac{1}{2}(q^{5}-q^{4}+q^{3}-2q^{2})$. ###### Proof. If $q$ is odd then $q^{2}\equiv 1\ (\mathrm{mod}\ 4)$ and $-1$ is a square in $\mathbf{F}$. Therefore, the equation $x^{2}+y^{2}=c$ has exactly $|\mathbf{F}|-1=q^{2}-1$ solutions for all $c\neq 0$. If $q=2^{k}$ then this equation defines a line in $\mathbf{F}^{2}$ and has $|\mathbf{F}|=q^{2}$ solutions. A vertex $\langle x,y\rangle$ has degree $q^{2}-1$ in $G$ iff $x^{2}+y^{2}\in H$, according to the construction. Therefore, the number of vertices of degree $q^{2}-1$ is equal to $q^{2}-1$ for $q$ odd, and to $q^{2}$ for $q$ even. Thus, for $q$ odd $\displaystyle|E(G)|$ $\displaystyle=\frac{1}{2}\left((q^{3}-2q^{2}+q)q^{2}+(q^{2}-1)^{2}\right)$ $\displaystyle=\frac{1}{2}(q^{5}-q^{4}+q^{3}-2q^{2}+1).$ And for $q$ even $\displaystyle|E(G)|$ $\displaystyle=\frac{1}{2}\left((q^{3}-2q^{2}+q-1)q^{2}+q^{2}(q^{2}-1)\right)$ $\displaystyle=\frac{1}{2}(q^{5}-q^{4}+q^{3}-2q^{2}).$ ∎ Therefore, for $n=q^{3}-q^{2}+q-1$ we have a $K_{3,3}$-free graph with $\frac{1}{2}n^{\frac{5}{3}}+\frac{1}{3}n^{\frac{4}{3}}+\mathrm{O}(n)$ edges. Together with the upper bound [2] for $K_{3,3}$ this gives the asymptotic formula $\mathrm{ex}(n,K_{3,3})=\frac{1}{2}n^{\frac{5}{3}}(1+o(1)).$
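As a quick sanity check on the exponents (a routine expansion that we include for convenience), write $n=q^{3}\left(1-q^{-1}+q^{-2}-q^{-3}\right)$, so that $n^{\frac{5}{3}}=q^{5}\left(1-\tfrac{5}{3}q^{-1}+\mathrm{O}(q^{-2})\right)$ and hence $|E(G)|-\tfrac{1}{2}n^{\frac{5}{3}}=\tfrac{1}{2}\left(q^{5}-q^{4}\right)-\tfrac{1}{2}q^{5}+\tfrac{5}{6}q^{4}+\mathrm{O}(q^{3})=\tfrac{1}{3}q^{4}+\mathrm{O}(q^{3})=\tfrac{1}{3}n^{\frac{4}{3}}+\mathrm{O}(n),$ in agreement with the edge count of Theorem 2.3.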
We can also obtain a lower bound for the graphs $K_{3,2t^{2}+1}$. ###### Theorem 2.4. For arbitrary prime power $q$ and $t\mid q-1$ the graph $G(q^{2},t(q+1))$ is $K_{3,2t^{2}+1}$-free. ###### Proof. We follow the same steps as in the proof of Theorem 2.1. We arrive at the equation $\alpha r+\beta s=1,$ where now we assume that $r,s$ belong to the subgroup $H$ of order $t(q+1)$. Let $H^{\prime}$ be a subgroup of order $q+1$. Then we can choose $t$ coset representatives $g_{1},\ldots,g_{t}$ of $H/H^{\prime}$. We have that $r=g_{i}r^{\prime}$, $s=g_{j}s^{\prime}$ for some $r^{\prime},s^{\prime}\in H^{\prime}$ and $\alpha^{\prime}r^{\prime}+\beta^{\prime}s^{\prime}=1,$ where $\alpha^{\prime}=g_{i}\alpha$, $\beta^{\prime}=g_{j}\beta$. According to Lemma 2.2 this equation has at most two solutions. Since the choice of $g_{i},g_{j}$ is arbitrary, there are at most $2t^{2}$ pairs $(r,s)$. Therefore, $G$ is $K_{3,2t^{2}+1}$-free. ∎ This gives an asymptotic bound of the form $\mathrm{ex}(n,K_{3,2t^{2}+1})\geq\frac{1}{2}t^{\frac{2}{3}}n^{\frac{5}{3}}(1+o(1)).$ This asymptotic bound was also proved by Montágh in his PhD thesis [5] using a factorization of the Brown graph [6]. ## 3 General case Alon, Kollár, Rónyai, and Szabó introduced the Norm-graphs in the papers [3, 4]. They constructed a family of graphs that were $K_{r,(r-1)!+1}$-free and had $n$ vertices and $\frac{1}{2}n^{2-\frac{1}{r}}(1+o(1))$ edges. Their construction depended heavily on the following algebro-geometric lemma. ###### Lemma 3.1 ([3]). Let $q$ be a prime power, let $\mathbf{F}=\mathrm{GF}(q^{r})$, and let $N:\mathbf{F}\rightarrow\mathrm{GF}(q),\quad x\mapsto x^{1+q+\cdots+q^{r-1}}$ be the norm map of $\mathbf{F}$ over $\mathrm{GF}(q)$. Let $c_{1},\ldots,c_{r},d_{1},\ldots,d_{r}\in\mathbf{F}$ be some elements. If $d_{i}\neq d_{j}$ for $i\neq j$ then the system of equations $\displaystyle N(x+d_{1})$ $\displaystyle=c_{1},$ $\displaystyle N(x+d_{2})$ $\displaystyle=c_{2},$ $\displaystyle\ldots$ $\displaystyle N(x+d_{r})$ $\displaystyle=c_{r},$ has at most $r!$ solutions for $x\in\mathbf{F}$. Using Lemma 3.1, we establish the same asymptotic bound with the graphs $G(q,t)$. ###### Theorem 3.2. For arbitrary prime power $q$ the graph $G(q^{r-1},q^{r-2}+\cdots+q+1)$ is $K_{r,(r-1)!+1}$-free. ###### Proof. Let $\mathbf{F}=\mathrm{GF}(q^{r-1})$. Let $H\subset\mathbf{F}^{\ast}$ be a subgroup of order $q^{r-2}+\cdots+q+1$. Consider $r$ distinct vertices $\langle a_{1},b_{1}\rangle$, $\ldots$, $\langle a_{r},b_{r}\rangle$ and assume that another vertex $\langle x,y\rangle$ is attached to all of them. We obtain the system of linear equations $\displaystyle a_{1}x+b_{1}y$ $\displaystyle=h_{1},$ $\displaystyle a_{2}x+b_{2}y$ $\displaystyle=h_{2},$ $\displaystyle\ldots$ $\displaystyle a_{r}x+b_{r}y$ $\displaystyle=h_{r}.$ As before, all the equations from the third to the last are linear combinations of the first two. That is, for every $j=3,\ldots,r$ there exist uniquely defined nonzero elements $\alpha_{j},\beta_{j}\in\mathbf{F}$ such that $\displaystyle\alpha_{j}a_{1}+\beta_{j}a_{2}$ $\displaystyle=a_{j},$ $\displaystyle\alpha_{j}b_{1}+\beta_{j}b_{2}$ $\displaystyle=b_{j},$ $\displaystyle\alpha_{j}h_{1}+\beta_{j}h_{2}$ $\displaystyle=h_{j}.$ Therefore, we have a system of $r-2$ equations $\displaystyle\alpha_{3}h_{1}+\beta_{3}h_{2}$ $\displaystyle=h_{3},$ $\displaystyle\ldots$ $\displaystyle\alpha_{r}h_{1}+\beta_{r}h_{2}$ $\displaystyle=h_{r}.$ Note that $N(h)=1$ for all $h\in H$, where now $N$ denotes the norm map of $\mathbf{F}=\mathrm{GF}(q^{r-1})$ over $\mathrm{GF}(q)$: indeed, $N$ maps $\mathbf{F}^{\ast}$ onto $\mathrm{GF}(q)^{\ast}$, so $\ker N$ has order $(q^{r-1}-1)/(q-1)=q^{r-2}+\cdots+q+1=|H|$, and since the cyclic group $\mathbf{F}^{\ast}$ has a unique subgroup of each order, $H=\ker N$.
Therefore, for each $j=3,\ldots,r$, we have that $N\left(\frac{\alpha_{j}}{\beta_{j}}+\frac{h_{2}}{h_{1}}\right)=N\left(\frac{h_{j}}{h_{1}\beta_{j}}\right)=N(\beta_{j})^{-1}.$ Moreover, we also have that $N\left(\frac{h_{2}}{h_{1}}\right)=1.$ Note that $\frac{\alpha_{j}}{\beta_{j}}\neq 0$, and $\frac{\alpha_{i}}{\beta_{i}}\neq\frac{\alpha_{j}}{\beta_{j}}$ when $i\neq j$, since otherwise we would get $\langle a_{i},b_{i}\rangle=\langle a_{j},b_{j}\rangle$. We have a system of $r-1$ equations that satisfies the conditions of Lemma 3.1 (applied with $r-1$ in place of $r$). Thus, it has at most $(r-1)!$ solutions for $h_{2}/h_{1}$. Each solution defines a unique vertex $\langle x,y\rangle$ attached to all $\langle a_{i},b_{i}\rangle$. Therefore, $G$ is $K_{r,(r-1)!+1}$-free. ∎ We have that $G=G(q^{r-1},q^{r-2}+\cdots+q+1)$ has $(q^{r-1}+1)(q-1)$ vertices and at least $\frac{1}{2}(q^{2r-2}-1)(q-1)$ edges. Therefore, it achieves the asymptotic lower bound of the form $\mathrm{ex}(n,K_{r,(r-1)!+1})\geq\frac{1}{2}n^{2-\frac{1}{r}}(1+o(1)).$ Finally, we can prove a lower bound for the graphs $K_{r,t^{r-1}(r-1)!+1}$. ###### Theorem 3.3. For arbitrary prime power $q$ and $t\mid q-1$ the graph $G(q^{r-1},t(q^{r-2}+\cdots+q+1))$ is $K_{r,t^{r-1}(r-1)!+1}$-free. ###### Proof. Let $H$ and $H^{\prime}$ be subgroups of $\mathbf{F}^{\ast}$ of orders $t(q^{r-2}+\cdots+q+1)$ and $q^{r-2}+\cdots+q+1$, respectively. Choose $t$ coset representatives $g_{1},\ldots,g_{t}$ of $H/H^{\prime}$. We follow the same steps as in the proof of Theorem 3.2. The only difference is that in the final system we obtain equations of the form $N\left(\frac{\alpha_{j}}{\beta_{j}}+\frac{h_{2}}{h_{1}}\right)=N\left(\frac{h_{j}}{h_{1}}\right)N(\beta_{j})^{-1},$ but $N\left(\frac{h_{j}}{h_{1}}\right)$ for $j=2,\ldots,r$ can only be one of the $t$ elements $N(g_{1}),\ldots,N(g_{t})$. Therefore, in this case there are at most $t^{r-1}(r-1)!$ solutions for $h_{2}/h_{1}$. Again, each of these solutions uniquely defines a vertex $\langle x,y\rangle$ attached to all $\langle a_{i},b_{i}\rangle$. ∎ We obtain an asymptotic bound $\mathrm{ex}(n,K_{r,t^{r-1}(r-1)!+1})\geq\frac{1}{2}t^{\frac{r-1}{r}}n^{2-\frac{1}{r}}(1+o(1)).$ Therefore, our construction achieves the same asymptotic lower bounds for bipartite Turán numbers as the Norm-graphs do. ## References * [1] Z. Füredi, _New asymptotics for bipartite Turán numbers_, J. Combin. Theory Ser. A 75 (1996), no. 1, 141–144. * [2] Z. Füredi, _An upper bound on Zarankiewicz' problem_, Combin. Probab. Comput. 5 (1996), no. 1, 29–33. * [3] J. Kollár, L. Rónyai, T. Szabó, _Norm-graphs and bipartite Turán numbers_, Combinatorica 16 (1996), 399–406. * [4] N. Alon, L. Rónyai, T. Szabó, _Norm-graphs: variations and applications_, J. Combin. Theory Ser. B 76 (1999), no. 2, 280–290. * [5] B. Montágh, _Unavoidable substructures_, PhD thesis, University of Memphis, May 2005. * [6] W. G. Brown, _On graphs that do not contain a Thomsen graph_, Canad. Math. Bull. 9 (1966), 281–285.
# On the detection and identification of edge disconnections in a multi-agent consensus network

Gianfranco Parlangeli and Maria Elena Valcher

G. Parlangeli is with the Dipartimento di Ingegneria dell’Innovazione, Università del Salento, Via per Monteroni, 73100 Lecce, Italy, e-mail: <EMAIL_ADDRESS>. M.E. Valcher is with the Dipartimento di Ingegneria dell’Informazione, Università di Padova, via Gradenigo 6B, 35131 Padova, Italy, e-mail: <EMAIL_ADDRESS>

###### Abstract

In this paper we investigate the problem of the sudden disconnection of an edge in a discrete-time multi-agent consensus network. If the graph remains strongly connected, the multi-agent system still achieves consensus, but in general, unless the information exchange between each pair of agents is symmetric, the agents’ states converge to a value drifted from the original consensus value. Consequently the edge disconnection can go unnoticed. In this paper the problems of detecting an edge disconnection and of identifying in a finite number of steps the exact edge that got disconnected are investigated. Necessary and sufficient conditions for both problems to be solvable are presented, both in the case when all the agents’ states are available and in the case when only a subset of the agents’ states is measured. Finally, an example of a network of 7 agents is provided, to illustrate some of the theoretical results derived in the paper.

Keywords$-$ Multi-agent system, consensus, strongly connected network, Laplacian, detection and identification, edge disconnection.

## I Introduction

The technology advances of the last decades brought brand new opportunities in several engineering areas and stimulated a significant thrust of research in communications, information processing and control theory [1, 17, 45]. The availability of miniaturized low-cost processing and integrated communication devices allowing peer-to-peer communication spurred a renewed interest in the analysis and design of distributed heterogeneous sensor networks and in the cooperative control of networks of autonomous agents [3, 8]. For a network of agents, _consensus_ is a fundamental target, useful for coordination, and a key tool to achieve decentralized architectures [22, 33, 35]. _Consensus_ refers to the situation when agents achieve a common decision on a local variable, which is updated by performing local computation and exchanging local information [22]. Under the assumption that each agent follows a prescribed protocol, the use of such a local variable makes the system behave as if the agents were fully connected (i.e. as if the communication graph were complete) [23], and this condition enables a variety of applications in a wide range of fields [6], [19], [34], [36], [42]. However, the adherence of each agent to the agreed protocol may fail for a number of reasons, and several research directions have been explored to achieve robust consensus in the presence of intermittent transmissions, transmission errors, faults or noise [20], [21], [30, 44]. Malfunctions in a network may be temporary or intermittent, casual or intentional (for instance, they may be the result of a cyber-attack [28]), and a long stream of research has been devoted to addressing fault-detection and identification (FDI) problems or fault-tolerant control strategies for multi-agent systems, see [30, Section II-B] for an extensive literature review. FDI algorithms for multi-agent systems split into two classes: centralized and distributed ones.
In the first case [9, 27, 32], a central unit gathers all the system information to perform the fault diagnosis algorithm. In the distributed case [11, 27, 38], all agents run some local fault detection algorithm, based on local information and on the signals received from neighbouring agents, and they coordinate to decide whether and where the fault occurred. Most of the literature on FDI for multi-agent systems, however, assumes that faults act additively either on the state-update equations or on the signals exchanged by pairs of agents [11, 26, 27, 38]. Even if an edge disconnection can be modelled in this way, this set-up is too general and it does not exploit the correlation between the fault signal and the state evolution that characterizes faults resulting from edge disconnections. As a result, FDI conditions deduced in the general set-up do not exploit the special nature of these faults, thus leading to conservative conditions for a successful and prompt diagnosis when dealing with edge disconnections. On the other hand, in general, the problem of detecting an edge disconnection has been investigated for multi-agent systems that are not necessarily consensus networks. For instance, in [4] and [29] the possibility of detecting an edge or a node disconnection in a multi-agent system is investigated, and the concepts of discernibility from the states or the outputs are introduced. In [43] the problem of detecting an edge disconnection is addressed for a diffusive network, by resorting to an impulsive input applied at one specific node. Simple graph conditions allow one to determine whether the problem is solvable. The residual signal is generated at the specific node by assuming that the whole network topology is known, and it depends on the whole state trajectory. The authors of [9] use stochastic techniques and propose an algorithm based on the full state knowledge to promptly detect abrupt topological changes of the network such as a link failure, creation, or degradation. In [12, 13, 39] the detection of a link disconnection in a network from noisy measurements at a single node is investigated. By making use of a Maximum A Posteriori Probability technique, conditions for asymptotic detection, in terms of the network spectrum and graph, are derived. If the detector does not know the whole state of the system, perfect detection is not possible. Note, also, that none of the previous references explicitly addresses the identification problem. Few contributions have specifically addressed the problem of detecting and/or identifying a link disconnection in a consensus network, by making use of the network properties and of the very specific change of the network description that results from this fault. Specifically, in [31] a multi-agent system subjected to multiple communication link failures is considered, with the goal of deriving useful design guidelines for reliable and fault-tolerant multi-agent networks. The possibility of detecting multiple link failures corresponding to at least some initial conditions is characterised in terms of the communication graph properties. In [25] an algorithm for diffusion of information among nodes reaching a weighted agreement is proposed, which is informative of the topology of the network and can be applied to the detection (but in general not the identification) of a link failure.
In [32] a method to detect and identify link failures in a network of continuous-time homogeneous agents, with a weighted and directed communication graph, is proposed, assuming that only the output responses of a subset of nodes are available. Jump discontinuities in the output derivatives are used to detect link failures. The order of the derivative at which the discontinuity is observed depends on the relative degree of the agents’ transfer matrix, and on the distance of the observation point from the disconnected edge. A graph-based algorithm to identify the link is presented. It is worth remarking that all the previous references adopt a centralised approach to the problem solution. This paper focuses on the problem of detecting and identifying an edge removal within a network of agents reaching consensus. This type of fault may result in a disconnected network, for which consensus is no longer achievable; in this case the fault cannot go unnoticed, but it can be detected significantly later than it occurred. Alternatively, the communication network may remain connected, but a different algorithm is performed with respect to the original one, and this leads the agents to agree on a final value which is different from the one the original network would have achieved. For this reason it is fundamental to detect and possibly identify disconnected edges, so that they can be promptly restored. We assume, as it was done in the large majority of the aforementioned references, that the agents represent devices with limited functionalities. In general they do not have the capability of detecting an edge disconnection, not even when they are directly affected by it (meaning that they are either the transmitter or the receiver at the extremes of the disconnected edge), and hence they cannot send an alarm signal. This assumption is extremely realistic when dealing with large networks of cheap sensors, for instance, whose software is configured to perform very basic tasks. Accordingly, we assume that the detection and identification of a broken edge cannot be performed at the local level by means of a distributed algorithm that involves a subset of the agents, and we propose centralised algorithms that are able to identify a link disconnection from the knowledge of either the entire state evolution or the state evolution of a subset of its agents. In fact, a distributed FDI is typically possible only under the condition that a selected group of agents is able both to generate a residual signal and to communicate with each other to reach a decision about the occurrence of a fault. This requires additional features with respect to the basic ones that are imposed by the consensus algorithm. In addition to the fact that for certain multi-agent systems the physical nature of the devices does not offer alternatives, the choice of adopting a centralised approach has several motivations. First of all, what can be obtained in a centralised way always represents a benchmark that distributed solutions try to approach as much as they can, and since a clear analysis of this problem is not yet available in the literature, we believe that this is the first goal to achieve. Secondly, a centralised solution requires weaker properties in terms of observability/reconstructibility than the ones that a distributed solution would impose on the single agents.
It is often the case that a wise choice of a small set of agents whose states need to be monitored in a centralised way allows an effective detection and identification, but those same agents would not be able to independently perform FDI. This aspect will be better clarified at the end of the paper. Finally, several distributed FDI algorithms rely on the estimate of the overall state vector, and this is extremely demanding from a computational point of view, as well as not realistic, since it presumes that single agents may have a complete knowledge of the overall system structure. Solutions that require the selected agents to estimate only portions of the state vector typically impose very strong conditions on the structural properties of the overall system, and they are strongly dependent on the specific system structure. The paper is structured as follows. Section II presents the class of discrete-time multi-agent systems and the general setting of the problem. In Section III the effects of the disconnection of a single link are explored. The problem of detecting an edge disconnection is tackled in Section IV, which is split into two subsections. The discernibility of two networks, one of them obtained from the other as a result of the disconnection of a link, is studied in Subsection IV.A from a theoretical standpoint, assuming that the whole state vector is available for measurement. Some special situations that prevent discernibility are discussed. In Subsection IV.B an algorithm for link failure detection and isolation is introduced and it is shown that, if the discernibility conditions are satisfied, the algorithm can detect each fault and isolate it under an additional condition. Section V is devoted to the discernibility problem when only a subset of nodes is available, and in Subsection V.A an algorithm for detecting the edge disconnection under these conditions is provided. Also in this case, if the discernibility conditions are satisfied, the algorithm is able to detect each fault, while fault isolation is ensured under an additional condition. In Section VI the case of a network of 7 nodes with only three states available for measurement is considered, and the results derived in the paper are illustrated for different edge disconnections. The results provided in this paper have been inspired by [4], where the concept of discernibility of a multi-agent system from the (faulty) one resulting from an edge or a node disconnection was first investigated. Compared to [4], we tailor our analysis to a consensus network and hence account for the fact that discernibility in a consensus network does not reduce to the observability of a special matrix pair associated with the healthy and the faulty systems. Indeed, the dominant eigenvalue/eigenvector pair that ensures consensus does not change after the edge disconnection and needs to be separately accounted for. In addition we have explicitly addressed the identification problem and proposed an algorithm to identify which specific link of the system got disconnected. To achieve this goal we have proposed a residual generator that is based on a (full-order) dead-beat observer. This solution has the great advantage of zeroing the effects of the initial conditions in a finite number of steps, thus making it possible to promptly detect an edge disconnection in the minimal number of steps, even when the effects of such a disconnection are small and hence may be erroneously interpreted as the effect of disturbances.
This manuscript extends the edge failure analysis first performed in [40] and then summarized in [41] (this latter paper presents some preliminary results about node disconnection and compares such results with those obtained in [40] for the edge disconnection) in the following way. First of all, in this paper we address the case of a directed communication graph, rather than an undirected one. While Sections II and III represent slightly modified versions of the original Sections II and III in [40], the analysis in the following sections is significantly improved and extended compared to the one presented in [40]. The proof of Proposition 3 is new. The whole part of subsection IV.A after Remark 6 (in particular, Proposition 7), and subsection IV.B (in particular, Proposition 8) are original. Since this last part deals with the problem of providing a method to detect and identify the specific edge that got disconnected, we believe that there is a significant added value with respect to the original conference paper [40]. Also, the proof of Proposition 9 in Section V is original, and the whole Subsection V.A, providing conditions for fault detection and identification of an edge disconnection when only a subset of the states is available, is original. Finally, Section VI provides a useful illustrative example and is new. We believe that the current results provide a first meaningful step toward the final goal of first determining necessary and sufficient conditions for detecting and identifying all kinds of edge/node faults, for general classes of homogeneous multi-agent systems, and then designing practical algorithms to perform their detection and identification. Notation. ${\mathbb{Z}}_{+}$ and $\mathbb{R}_{+}$ denote the set of nonnegative integer and real numbers, respectively. Given $k,n\in{\mathbb{Z}}_{+},k<n,$ we denote by $[k,n]$ the set of integers $\{k,k+1,\dots,n\}$. We let $\mathbf{e}_{i}$ denote the $i$-th element of the canonical basis in ${\mathbb{R}}^{k}$ ($k$ being clear from the context), with all entries equal to zero except for the $i$-th one, which is unitary. ${\bf 1}_{k}$ and ${\bf 0}_{k}$ denote the $k$-dimensional real vectors whose entries are all $1$ or all $0$, respectively. Given a real matrix $A$, the $(i,j)$-th entry of $A$ is denoted either by $a_{ij}$ or by $[A]_{ij}$, and its transpose by $A^{\top}$. Given a vector ${\bf v}$, the $i$-th entry of ${\bf v}$ is denoted by $v_{i}$ or by $[{\bf v}]_{i}$. The _spectrum_ of $A\in{\mathbb{R}}^{n\times n}$, denoted by $\sigma(A)$, is the set of its eigenvalues, and the _spectral radius_ of $A$, denoted by $\rho_{A}$, is the maximum modulus of the elements of $\sigma(A)$. For a nonnegative matrix $A\in{\mathbb{R}}_{+}^{n\times n}$, i.e., a matrix whose entries are nonnegative real numbers, the spectral radius is always an eigenvalue. A nonnegative and nonzero matrix is called positive, while a matrix whose entries are all positive is called strictly positive. Nonnegative, positive and strictly positive vectors are analogously defined. A positive matrix $A\in\mathbb{R}_{+}^{n\times n},n>1,$ is irreducible if no permutation matrix $P$ can be found such that $P^{\top}AP=\begin{bmatrix}A_{11}&A_{12}\cr 0&A_{22}\end{bmatrix},$ where $A_{11}$ and $A_{22}$ are square (non-vacuous) matrices. By the Perron-Frobenius theorem [5, 15, 22], for an irreducible positive matrix $A$ the spectral radius $\rho_{A}$ is a simple real dominant eigenvalue, and the corresponding left and right eigenvectors are strictly positive.
Positive eigenvectors of a positive irreducible matrix necessarily correspond to the spectral radius. A directed weighted graph ${\mathcal{G}}$ is a triple $({\mathcal{V}},{\mathcal{E}},{\mathcal{W}})$, where ${\mathcal{V}}=[1,N]$ is the set of vertices, ${\mathcal{E}}\subseteq{\mathcal{V}}\times{\mathcal{V}}$ is the set of arcs, and ${\mathcal{W}}$ is the matrix of the weights of ${\mathcal{G}}$. ${\mathcal{W}}$ is called the adjacency matrix of the graph. The $(i,j)$-th entry of ${\mathcal{W}}$, $[{\mathcal{W}}]_{ij}$, is nonzero if and only if the arc $(j,i)$ belongs to ${\mathcal{E}}$. We assume that $[{\mathcal{W}}]_{ii}=0,$ for all $i\in[1,N]$, namely there are no self-loops. A weighted graph ${\mathcal{G}}=({\mathcal{V}},{\mathcal{E}},{\mathcal{W}})$ is undirected if the arcs are bidirectional, namely $(j,i)\in{\mathcal{E}}$ if and only if $(i,j)\in{\mathcal{E}}$ and $[{\mathcal{W}}]_{ij}=[{\mathcal{W}}]_{ji}.$ Therefore for undirected graphs ${\mathcal{W}}={\mathcal{W}}^{\top}$. A path connecting $j$ and $i$ is an ordered sequence of arcs $(j,i_{1}),(i_{1},i_{2}),\dots,(i_{k-1},i_{k}),(i_{k},i)\in{\mathcal{E}}$. A directed (resp. undirected) graph ${\mathcal{G}}$ is strongly connected (connected) if, for every pair of vertices $j$ and $i$, there is a path connecting them. ${\mathcal{G}}$ is strongly connected (connected) if and only if its (symmetric) adjacency matrix ${\mathcal{W}}$ is irreducible. The Laplacian matrix [16] associated with the adjacency matrix ${\mathcal{W}}$ is defined as ${\mathcal{L}}:={\mathcal{C}}-{\mathcal{W}},$ where ${\mathcal{C}}$ is the (diagonal) connectivity matrix, whose diagonal entries are the sums of the corresponding row entries of ${\mathcal{W}}$, namely $[{\mathcal{C}}]_{ii}=\sum_{j=1}^{N}[{\mathcal{W}}]_{ij},\forall i\in[1,N]$. Clearly, by the way the Laplacian has been defined, ${\mathcal{L}}{\bf 1}_{N}={\bf 0}_{N}$. Also, ${\mathcal{L}}$ is irreducible if and only if ${\mathcal{W}}$ is irreducible. If ${\mathcal{G}}$ is undirected then the associated Laplacian ${\mathcal{L}}$ is symmetric. A family $\pi=\{{\mathcal{V}}_{1},..,{\mathcal{V}}_{k}\}$ of non-empty subsets of ${\mathcal{V}}$ such that $\cup_{i=1}^{k}{\mathcal{V}}_{i}={\mathcal{V}}$ and ${\mathcal{V}}_{i}\cap{\mathcal{V}}_{j}=\emptyset$, $\forall i\neq j$, is called a partition of the vertex set ${\mathcal{V}}$. When so, ${\mathcal{V}}_{i}$ is called the $i$-th _cell_ of the partition $\pi$, and the vector ${\mathbf{x}}\in\mathbb{R}^{N}$ satisfying $x_{\ell}=1$ if $\ell\in{\mathcal{V}}_{i}$ and $x_{\ell}=0$ if $\ell\notin{\mathcal{V}}_{i}$, i.e., ${\bf x}=\sum_{\ell\in{\mathcal{V}}_{i}}{\bf e}_{\ell}$, is called the _characteristic vector_ of the $i$-th cell ${\mathcal{V}}_{i}$. Finally, the matrix $P_{\pi}\in\mathbb{R}^{N\times k}$, whose $i$-th column is the characteristic vector of the subset ${\mathcal{V}}_{i}$, is called the _characteristic matrix_ of $\pi$. ## II Problem setup Consider a multi-agent system consisting of $N>2$ agents, each of them indexed in the integer set $[1,N]$.
The state of the $i$-th agent is described by the scalar variable $x_{i}$ that updates according to the following discrete-time linear state-space model [23]: $x_{i}(t+1)=x_{i}(t)+v_{i}(t),\qquad t\in\mathbb{Z}_{+},$ where $v_{i}$ is the input of the $i$-th agent. The communication among the $N$ agents is described by a fixed directed graph ${\mathcal{G}}$ with adjacency matrix ${\mathcal{W}}\in{\mathbb{R}}^{N\times N}$. The $(i,j)$-th entry, $i\neq j$, of ${\mathcal{W}}$ is positive, i.e., $[{\mathcal{W}}]_{ij}>0$, if there is information flowing from agent $j$ to agent $i$, and $[{\mathcal{W}}]_{ij}=0$ otherwise. Each agent adopts the (nearest neighbor linear) consensus protocol [23], which amounts to saying that the input $v_{i}$ takes the form: $v_{i}(t)=\kappa\sum_{j=1}^{N}[{\mathcal{W}}]_{ij}(x_{j}(t)-x_{i}(t)),$ (1) where $\kappa>0$ is a given real parameter known as the coupling strength (if we regard the discrete-time system describing the $i$-th agent dynamics as the discretized version of the continuous-time equation $\dot{x}_{i}(t)=v_{i}(t)$, then $\kappa$ represents the sampling time). If we stack the states of the agents in a single state vector ${\bf x}\in\mathbb{R}^{N}$, the overall multi-agent system becomes ${\bf x}(t+1)=(I_{N}-\kappa{\mathcal{L}}){\bf x}(t)=:A{\bf x}(t),$ (2) where ${\mathcal{L}}=[\ell_{ij}]\in{\mathbb{R}}^{N\times N}$ is the Laplacian associated with the adjacency matrix ${\mathcal{W}}$. It is easy to deduce the relationship between the eigenvalues $\lambda_{A}\in\sigma(A)$ and $\lambda_{\mathcal{L}}\in\sigma({\mathcal{L}})$, namely $\lambda_{A}=1-\kappa\lambda_{\mathcal{L}}.$ (3) In particular, ${\mathcal{L}}{\bf 1}_{N}={\bf 0}_{N}$ implies that $A{\bf 1}_{N}={\bf 1}_{N}$. System (2) can be used to describe a wide variety of situations where each agent/node shares information with its neighbours with the final goal of converging to a common decision. If so, we refer to the multi-agent system as a consensus network. More formally, system (2) is a consensus network if for every initial state ${\bf x}(0)$ there exists $\alpha\in\mathbb{R}$ such that $\lim_{t\to+\infty}{\bf x}(t)=\alpha\mathbf{1}_{N}.$ (4) The constant $\alpha$ is called the _consensus value_ [22] for system (2), corresponding to the given initial state. If the agents’ communication graph is strongly connected, namely the Laplacian ${\mathcal{L}}$ is irreducible [22], and the coupling strength $\kappa$ satisfies the following constraint: $0<\kappa<\frac{1}{\max_{i\in[1,N]}\ell_{ii}},$ (5) $\ell_{ii}$ being the $i$-th diagonal entry of ${\mathcal{L}}$, system (2) is a consensus network (see Theorem 2 in [22]). Moreover, the consensus value is $\alpha={\bf w}_{A}^{\top}{\bf x}(0),$ (6) where ${\bf w}_{A}$ is the left eigenvector of $A$ corresponding to $1$ and satisfying ${\bf w}_{A}^{\top}{\bf 1}_{N}=1$. In the special case when the graph is undirected and hence ${\mathcal{L}}$ and $A$ are symmetric, ${\bf w}_{A}=\frac{1}{N}{\bf 1}_{N}$ and hence the consensus value is the average value of the agents’ initial conditions. Assumption 1. In the following we steadily assume that ${\mathcal{L}}$ is irreducible and $\kappa$ satisfies the inequalities in (5). Consequently, $A=I_{N}-\kappa{\mathcal{L}}$ is a positive irreducible matrix. The Perron-Frobenius theorem and the condition $A{\bf 1}_{N}={\bf 1}_{N}$ ensure that $1$ is a simple dominant eigenvalue of $A$. The eigenspace associated with the unitary eigenvalue is $\langle{\bf 1}_{N}\rangle$, and the positive eigenvectors of $A$ necessarily correspond to $\lambda=1$ and hence belong to $\langle{\bf 1}_{N}\rangle$. In this paper we investigate the effects of an edge disconnection on a consensus network, and the possibility of detecting and identifying such a failure.
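Before analysing edge disconnections, the following minimal numerical sketch (our own illustration; the digraph, its weights and the value of $\kappa$ are arbitrary choices satisfying Assumption 1, not data from the paper) simulates system (2) and checks the consensus value (6):

```python
import numpy as np

# Weighted adjacency matrix of a strongly connected digraph on N = 4 nodes
# (an arbitrary example): [W]_{ij} > 0 means information flows from j to i.
W = np.array([[0., 1., 2., 0.],
              [1., 0., 0., 2.],
              [3., 0., 0., 1.],
              [0., 3., 1., 0.]])
N = W.shape[0]
L = np.diag(W.sum(axis=1)) - W        # Laplacian; L 1_N = 0_N
kappa = 0.9 / L.diagonal().max()      # satisfies the constraint (5)
A = np.eye(N) - kappa * L             # state-update matrix of system (2)

# Left eigenvector of A for the eigenvalue 1, normalized so that w_A^T 1_N = 1.
lam, V = np.linalg.eig(A.T)
w_A = np.real(V[:, np.argmin(np.abs(lam - 1))])
w_A = w_A / w_A.sum()

x = np.array([1., -2., 3., 4.])       # arbitrary initial state
alpha = w_A @ x                       # predicted consensus value, eq. (6)
for _ in range(200):
    x = A @ x
print(alpha, x)                       # every entry of x approaches alpha
```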
## III Consensus after an edge disconnection

If the communication from agent $r$ to agent $h$ is interrupted, namely the arc $(r,h),r\neq h,$ is disconnected, then the Laplacian $\bar{\mathcal{L}}$ of the new digraph $\bar{\mathcal{G}}$ (in order not to make the notation heavy, in this part of the paper we denote by $\bar{\mathcal{G}}$ the new digraph, by $\bar{\mathcal{L}}$ its Laplacian and by $\bar{A}$ the new system matrix, without highlighting in the notation the specific link that gets disconnected; later on, we will modify the notation to distinguish the effects of different edge disconnections) is related to the Laplacian ${\mathcal{L}}=[\ell_{ij}]$ of the original digraph ${\mathcal{G}}$ by the relationship $\bar{\mathcal{L}}={\mathcal{L}}+\ell_{hr}{\bf e}_{h}{\bf e}_{h}^{\top}-\ell_{hr}{\bf e}_{h}{\bf e}_{r}^{\top}={\mathcal{L}}+\ell_{hr}{\bf e}_{h}[{\bf e}_{h}-{\bf e}_{r}]^{\top},$ where $\ell_{hr}=-[{\mathcal{W}}]_{hr}<0$. Consequently, the new state-update matrix of the multi-agent system becomes $\bar{A}:=I_{N}-\kappa\bar{\mathcal{L}}=A-\kappa\ell_{hr}{\bf e}_{h}[{\bf e}_{h}-{\bf e}_{r}]^{\top}.$ (7) In the specific case when the graph is undirected, the disconnection of the arc $(r,h)$ implies also the disconnection of the arc $(h,r)$. Consequently $\displaystyle\bar{\mathcal{L}}$ $\displaystyle=$ $\displaystyle{\mathcal{L}}+\ell_{hr}[{\bf e}_{h}-{\bf e}_{r}][{\bf e}_{h}-{\bf e}_{r}]^{\top},$ $\displaystyle\bar{A}$ $\displaystyle=$ $\displaystyle A-\kappa\ell_{hr}[{\bf e}_{h}-{\bf e}_{r}][{\bf e}_{h}-{\bf e}_{r}]^{\top}.$ (8) If the edge disconnection compromises the agents’ mutual exchange of information, to the extent of destroying the graph connectivity, then consensus will not be reached, and the effects of the fault on the network will eventually make fault detection possible. On the other hand, an edge disconnection that does not affect the graph connectivity will allow the system to still reach consensus, but on a different value from the original one, and the fault may hence go unnoticed and seriously affect the system functioning. For these reasons, in this paper we will investigate the effects of a single edge disconnection by assuming that the strong connectedness of the communication graph is preserved after the failure. The following proposition, which makes use of some results derived in [23] for consensus networks with switching topologies, highlights that under this assumption we still have a consensus network, but in general the consensus value is preserved after the disconnection only if the communication graph is undirected. ###### Proposition 1. Let ${\mathcal{L}}$ be the Laplacian of a strongly connected graph ${\mathcal{G}}$, and set $A:=I_{N}-\kappa{\mathcal{L}}$, where $\kappa>0$ is a fixed coupling strength, chosen in order to ensure that system (2) is a consensus network. Let $\bar{\mathcal{L}}$ be the Laplacian of the graph $\bar{\mathcal{G}}$, obtained from ${\mathcal{G}}$ by removing the arc $(r,h)$, and set $\bar{A}:=I_{N}-\kappa\bar{\mathcal{L}}$.
If $\bar{\mathcal{G}}$ is still strongly connected, then * i) the system ${\bf x}(t+1)=(I_{N}-\kappa\bar{\mathcal{L}}){\bf x}(t)=\bar{A}{\bf x}(t)$ is still a consensus network; * ii) if the graph ${\mathcal{G}}$ is undirected, then for every choice of ${\bf x}(0)$ and every time $\tau\geq 0$ at which the edge disconnection may occur, the new network converges to the same consensus value to which the original network would have converged before the disconnection; * iii) if the graph ${\mathcal{G}}$ is directed, then for every $\tau\geq 0$ there are initial states ${\bf x}(0)$ corresponding to which the consensus value obtained by the new network, after the disconnection at $t=\tau$, differs from the original one (6). ###### Proof. i) Since the original network is a consensus network, ${\mathcal{L}}$ is irreducible and $\kappa$ satisfies the constraint (5). On the other hand, by assumption, the Laplacian $\bar{\mathcal{L}}$ is still irreducible and if we denote by $\bar{\ell}_{ij}$ the $(i,j)$-th entry of $\bar{\mathcal{L}}$, then $\max_{i\in[1,N]}\bar{\ell}_{ii}\leq\max_{i\in[1,N]}\ell_{ii},$ thus ensuring that $0<\kappa<\frac{1}{\max_{i\in[1,N]}\bar{\ell}_{ii}}.$ This is the case for both directed and undirected graphs. Consequently, the new network is also a consensus network. ii) and iii) Follow from the results obtained in [23] (see for instance Theorems 4 and 9) for the consensus of continuous-time systems described as integrators and with switching communication topologies. ∎ ## IV Detecting an edge disconnection In the rest of the paper we will focus on the case when the communication graph is directed, and investigate in detail under what conditions we can detect the edge disconnection. We will also steadily make the following assumption. Assumption 2. The graph $\bar{\mathcal{G}}$, describing the communication network after the link failure, is strongly connected, and hence $\bar{A}$ is still a positive irreducible matrix having $1$ as dominant eigenvalue and ${\bf 1}_{N}$ as dominant eigenvector. We distinguish the case when we can observe the states of all the agents and the case when we can observe the states of a subset of the agents. We also adjust the definitions introduced in [4] (for the continuous-time case), to take into account that if the system has already reached consensus, which is an equilibrium point for both the original network and the faulty one, then no edge disconnection will alter the consensus status, and hence the resulting fault is necessarily undetectable. Consequently, we introduce the following definitions. ###### Definition 1. Consider the multi-agent consensus network (2), and the network obtained from (2) upon disconnection of the edge from agent $r$ to agent $h$: ${\bf x}(t+1)=\bar{A}{\bf x}(t),$ (9) with $\bar{A}$ described as in (7). The two networks are said to be discernible if for every fault time $\tau\geq 0$ and every state ${\bf x}(\tau)\not\in\langle{\bf 1}_{N}\rangle$, there exists $t>\tau$ such that the state trajectory of the faulty system (9) at time $t$, ${\bf x}(t)=\bar{A}^{t-\tau}{\bf x}(\tau)$, is different from the state trajectory of the original system at time $t$.
If only the states of $p<N$ agents are available, and we assume without loss of generality that they are the first $p$ agents, we say that the two networks are discernible from the observation of the first $p$ agents if for every fault time $\tau\geq 0$ and every state ${\bf x}(\tau)\not\in\langle{\bf 1}_{N}\rangle$, the first $p$ entries of any state trajectory of the faulty system (9), ${\bf x}(t)=\bar{A}^{t-\tau}\bar{\bf x}_{\tau}$, differ from the first $p$ entries of the state trajectory of the original system for at least one time instant $t\geq\tau$, namely for every $\bar{\bf x}_{\tau}\in{\mathbb{R}}^{N}$ there exists $t\geq\tau$ such that $\begin{bmatrix}I_{p}&0\end{bmatrix}\bar{A}^{t-\tau}\bar{\bf x}_{\tau}\neq\begin{bmatrix}I_{p}&0\end{bmatrix}A^{t-\tau}{\bf x}(\tau).$ (10) ###### Remark 2. If the edge fault is the outcome of an external attack, it is immediate to realise that the concept of discernibility is in perfect agreement with the property, explored in [28], of a cyberphysical system of not being subject to undetectable attacks. Note that the concept of discernibility adopted here, suitably adapted from the one given in [4], is different from the concept of “detectability” (of an edge) adopted in [31], which only requires that the state trajectories of the healthy and the faulty systems differ for at least one choice of the initial condition. Finally, discernibility is introduced here as a system theoretic property with an exact definition and mathematical characterisation. It is clear that in a real-life environment we need to account for disturbances and modeling errors, which require modifying the previous theoretical condition (10) by introducing a minimal threshold below which the disagreement of the measured output with respect to the expected one is not interpreted as the effect of a fault. ### IV-A Discernibility after edge disconnection In order to characterize discernibility, we may exploit the analysis in [4], which refers to the matrices $\Delta:=\begin{bmatrix}A&0\cr 0&\bar{A}\end{bmatrix}=I_{2N}-\kappa\begin{bmatrix}{\mathcal{L}}&0\cr 0&\bar{\mathcal{L}}\end{bmatrix},\quad\Gamma_{N}:=\begin{bmatrix}I_{N}&-I_{N}\end{bmatrix}.$ (11) It is worth observing, however, that the discernibility analysis in [4] is carried out for homogeneous multi-agent systems, whose agents are generically described by the same linear state-space model, and it is defined in such a way that it coincides with the observability property of the pair $(\Delta,\Gamma_{N})$. Observability, however, is impossible to guarantee when the matrices $A$ and $\bar{A}$ are expressed in terms of the Laplacian as in (2) and (8). This is quite reasonable, since the lack of observability is related to the fact that if the disconnection happens when the network is already in its steady state, then the fault cannot be detected, the constant trajectory $\alpha{\bf 1}_{N}$ being compatible both with the original network and with the faulty one. We have modified the two definitions of discernibility just to rule out this case, which is unavoidable and cannot be regarded as a sign of bad performance. Clearly, this will lead to different characterizations of the two discernibility properties. We can now provide the following result, which adjusts and extends the one given in Theorem 1 of [4]. ###### Proposition 3. Given the networks (2) and (9), the latter obtained from the former after the disconnection of the edge $(r,h)$, assume that Assumptions 1 and 2 hold.
Then the following facts are equivalent: * i) the networks (2) and (9) are discernible; * ii) the unobservable states of the pair $(\Delta,\Gamma_{N})$ are those in $\langle{\bf 1}_{2N}\rangle$ and they correspond to the unitary eigenvalue; * iii) the unobservable states of the pair $(A,[{\bf e}_{r}-{\bf e}_{h}]^{\top})$ are those in $\langle{\bf 1}_{N}\rangle$ and they correspond to the unitary eigenvalue; * iv) there is no eigenvalue-eigenvector pair $(\lambda,{\bf v})$, with $\lambda\in{\mathbb{C}}$ and ${\bf v}\neq 0$, except for $\lambda=1$ and ${\bf v}\in\langle{\bf 1}_{N}\rangle$, such that $A{\bf v}=\lambda{\bf v}\quad{\rm and}\quad[{\bf v}]_{r}=[{\bf v}]_{h}.$ (12) * v) there is no eigenvalue-eigenvector pair $(\lambda,{\bf v})$, with $\lambda\in{\mathbb{C}}$ and ${\bf v}\neq 0$, common to $A$ and $\bar{A}$, except for $\lambda=1$ and ${\bf v}\in\langle{\bf 1}_{N}\rangle$. ###### Proof. i) $\Leftrightarrow$ ii) Suppose that the networks (2) and (9) are not discernible. Then there exists ${\bf x}(0)\not\in\langle{\bf 1}_{N}\rangle$ such that $\bar{A}^{t}{\bf x}(0)=A^{t}{\bf x}(0)$ for every $t\geq 0$. This is equivalent to saying that $\begin{bmatrix}{\bf x}(0)\cr{\bf x}(0)\end{bmatrix}$ is an unobservable state for the pair $(\Delta,\Gamma_{N})$ and it does not belong to $\langle{\bf 1}_{2N}\rangle$. Conversely, suppose that there exists an unobservable state of the pair $(\Delta,\Gamma_{N})$, ${\bf v}=\begin{bmatrix}{\bf v}_{1}\cr{\bf v}_{2}\end{bmatrix}\not\in\langle{\bf 1}_{2N}\rangle$. Clearly, ${\bf v}_{1}$ must coincide with ${\bf v}_{2}$ (since $\Gamma_{N}{\bf v}=0$), and hence the condition ${\bf v}\not\in\langle{\bf 1}_{2N}\rangle$ implies ${\bf v}_{1}\not\in\langle{\bf 1}_{N}\rangle$. This implies that if at some $\tau\geq 0$ the original network gets disconnected when ${\bf x}(\tau)={\bf v}_{1}$ then $\bar{A}^{t-\tau}{\bf x}(\tau)=A^{t-\tau}{\bf x}(\tau)$ for every $t\geq\tau$, thus ruling out discernibility. ii) $\Leftrightarrow$ iii) Condition ii) is easily seen to be equivalent to the following condition, expressed in terms of the PBH observability matrix: if there exist $\lambda\in{\mathbb{C}}$ and $\begin{bmatrix}{\bf v}\cr\bar{\bf v}\end{bmatrix}\neq 0$ such that $\begin{bmatrix}\lambda I_{N}-A&0\cr 0&\lambda I_{N}-\bar{A}\cr I_{N}&-I_{N}\end{bmatrix}\begin{bmatrix}{\bf v}\cr\bar{\bf v}\end{bmatrix}=0,$ (13) then $\lambda=1$ and $\begin{bmatrix}{\bf v}\cr\bar{\bf v}\end{bmatrix}\in\langle{\bf 1}_{2N}\rangle$. Similarly, condition iii) is equivalent to saying that if $\lambda\in{\mathbb{C}}$ and ${\bf v}\neq 0$ exist such that $\begin{bmatrix}\lambda I_{N}-A\cr[{\bf e}_{h}-{\bf e}_{r}]^{\top}\end{bmatrix}{\bf v}=0,$ (14) then $\lambda=1$ and ${\bf v}\in\langle{\bf 1}_{N}\rangle$. On the other hand, it is easily seen that (13) is equivalent to $\begin{bmatrix}\lambda I_{N}-A&0\cr 0&\lambda I_{N}-\bar{A}\end{bmatrix}\begin{bmatrix}{\bf v}\cr{\bf v}\end{bmatrix}=0,$ (15) namely to $\displaystyle A{\bf v}$ $\displaystyle=$ $\displaystyle\lambda{\bf v},$ $\displaystyle\bar{A}{\bf v}$ $\displaystyle=$ $\displaystyle\lambda{\bf v},$ and due to the relation between $A$ and $\bar{A}$, the previous two identities are, in turn, equivalent to $\displaystyle A{\bf v}$ $\displaystyle=$ $\displaystyle\lambda{\bf v},$ $\displaystyle[{\bf e}_{h}-{\bf e}_{r}]^{\top}{\bf v}$ $\displaystyle=$ $\displaystyle 0,$ which can be expressed in terms of the PBH observability criterion as in (14). This proves ii) $\Leftrightarrow$ iii). iii) $\Leftrightarrow$ iv) Obvious. iv) $\Leftrightarrow$ v) It is easily seen that (12) holds if and only if $A{\bf v}=\lambda{\bf v}=\bar{A}{\bf v}.$ So, the equivalence immediately follows. ∎
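Condition iv) lends itself to a direct numerical test. The following sketch (our own illustration; it assumes $A$ diagonalizable, so that algebraic and geometric multiplicities coincide, and uses 0-based indices) checks whether the disconnection of the edge $(r,h)$ yields a discernible pair of networks:

```python
import numpy as np

def edge_disconnection_discernible(A, r, h, tol=1e-9):
    """Numerical check of condition iv) of Proposition 3 (a sketch that
    assumes A is diagonalizable; r, h are 0-based indices)."""
    lam, V = np.linalg.eig(A)
    n = A.shape[0]
    for k in range(n):
        if abs(lam[k] - 1) < tol:
            continue  # lambda = 1 with v in <1_N> is allowed by condition iv)
        # For diagonalizable A, a repeated non-unitary eigenvalue already
        # rules out discernibility (Lemma 5 below).
        if np.sum(np.abs(lam - lam[k]) < tol) > 1:
            return False
        # Simple eigenvalue: the eigenvector is unique up to scaling, so
        # condition (12) can be tested entrywise.
        if abs(V[r, k] - V[h, k]) < tol * np.linalg.norm(V[:, k]):
            return False
    return True
```

Since the test relies on a numerically computed spectrum, it should be read as an illustration of condition iv) rather than as a robust decision procedure.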
###### Remark 4. Conditions ii), iii) and iv) in Proposition 3 could be equivalently expressed in terms of the matrices ${\mathcal{L}}$ and $\bar{\mathcal{L}}$ (instead of $A$ and $\bar{A}$), and by replacing $\lambda_{A}=1$ with $\lambda_{\mathcal{L}}=0$. In the following, we consider some special cases of matrices $A$ (or, equivalently, graph Laplacians ${\mathcal{L}}$) for which condition iv) in Proposition 3 is violated. These situations rule out discernibility in advance. If there exists $\lambda\in\sigma(A),\lambda\neq 1,$ of geometric multiplicity greater than $1$, then an eigenvector of $A$ corresponding to $\lambda$ can be found such that condition (12) is satisfied, thus making the old network and the new network not discernible. So, a necessary condition for discernibility is that all the eigenvalues of $A$ have unitary geometric multiplicity ($A$ is cyclic [37] or, equivalently, non-derogatory [10]). ###### Lemma 5. If $A$ has an eigenvalue $\lambda\neq 1$ of geometric multiplicity greater than $1$, then there exists an eigenvector ${\bf v}$ corresponding to $\lambda$ such that condition (12) holds. ###### Proof. If $\lambda\in\sigma(A),\lambda\neq 1,$ has geometric multiplicity greater than $1$, then there exist two linearly independent eigenvectors, say ${\bf v}_{1}$ and ${\bf v}_{2}$, corresponding to $\lambda$. Suppose that neither of these eigenvectors has coinciding $r$-th and $h$-th entries. Introduce the $2\times 2$ matrix $M_{r,h}:=\begin{bmatrix}[{\bf v}_{1}]_{r}&[{\bf v}_{2}]_{r}\cr[{\bf v}_{1}]_{h}&[{\bf v}_{2}]_{h}\end{bmatrix}.$ If $M_{r,h}$ is nonsingular, there exist $a_{1},a_{2}\in{\mathbb{R}}\setminus\{0\}$ such that $\begin{bmatrix}1\cr 1\end{bmatrix}=M_{r,h}\begin{bmatrix}a_{1}\cr a_{2}\end{bmatrix}.$ If $M_{r,h}$ is singular, there exist $a_{1},a_{2}\in{\mathbb{R}}\setminus\{0\}$ such that $\begin{bmatrix}0\cr 0\end{bmatrix}=M_{r,h}\begin{bmatrix}a_{1}\cr a_{2}\end{bmatrix}.$ In both cases the eigenvector ${\bf v}:=a_{1}{\bf v}_{1}+a_{2}{\bf v}_{2}$ satisfies $[{\bf v}]_{r}=[{\bf v}]_{h}.$ ∎ ###### Remark 6. If the graph ${\mathcal{G}}$ is undirected, $A$ is symmetric and hence diagonalizable. Therefore algebraic multiplicities and geometric multiplicities coincide. So, a necessary condition for discernibility is that the $N$ eigenvalues of $A$ are all distinct. We now further explore condition (12) of Proposition 3 and connect it to a topological condition on ${\mathcal{G}}$. To this end, we need to introduce the concept of a nontrivial almost equitable partition for a directed weighted graph ${\mathcal{G}}$, by extending the analogous notion given for undirected unweighted graphs in [14]. Given a directed weighted graph ${\mathcal{G}}=({\mathcal{V}},{\mathcal{E}},{\mathcal{W}})$, a partition $\pi=\{{\mathcal{V}}_{1},..,{\mathcal{V}}_{k}\}$ of the set of vertices ${\mathcal{V}}=[1,N]$ is said to be an _equitable partition_ for ${\mathcal{G}}$ if for every pair of cells ${\mathcal{V}}_{i},{\mathcal{V}}_{j},i,j\in[1,k]$, and every node $v$ of ${\mathcal{V}}_{i}$, the sum of the weights of all edges from the nodes in ${\mathcal{V}}_{j}$ to the node $v$ is a constant value that depends only on $i$ and $j$, not on $v$.
The partition $\pi$ is said to be an _almost_ (or _relaxed_) equitable partition if the above condition holds for every pair $(i,j),i,j\in[1,k]$, with $j\neq i$. In formal terms, a partition $\pi$ is an almost equitable partition for the directed weighted graph ${\mathcal{G}}$ if for every ${\mathcal{V}}_{i},{\mathcal{V}}_{j},i,j\in[1,k],i\neq j$, and every pair of nodes $v_{1},v_{2}$ of ${\mathcal{V}}_{i}$, $\sum_{u\in{\mathcal{V}}_{j}}[{\mathcal{W}}]_{v_{1}u}=\sum_{u\in{\mathcal{V}}_{j}}[{\mathcal{W}}]_{v_{2}u},$ or, equivalently, by using the Laplacian entries $\sum_{u\in{\mathcal{V}}_{j}}\ell_{v_{1}u}=\sum_{u\in{\mathcal{V}}_{j}}\ell_{v_{2}u}.$ (16) This amounts to saying that $\sum_{u\in{\mathcal{V}}_{j}}\ell_{vu}=d_{ij}$ for every $v\in{\mathcal{V}}_{i}$ and every $j\neq i$. Clearly, $d_{ij}\leq 0$. Note that for any directed weighted graph ${\mathcal{G}}$, two trivial almost equitable partitions always exist, namely (1) the one corresponding to $k=1$ and ${\mathcal{V}}_{1}={\mathcal{V}}$, and (2) the one corresponding to $k=N$ and each ${\mathcal{V}}_{i}$ consisting of a single distinct node. In the following, when talking about almost equitable partitions, we will always rule out the two trivial ones. The fact that $\sum_{u\in{\mathcal{V}}_{j}}\ell_{vu}=d_{ij}$ for any $v\in{\mathcal{V}}_{i}$ and $j\neq i$ suggests that it is possible to define (an adjacency matrix and hence) a Laplacian ${\mathcal{L}}_{\pi}\in{\mathbb{R}}^{k\times k}$ for ${\mathcal{G}}$, associated with $\pi$, as follows [7]: $[{\mathcal{L}}_{\pi}]_{ij}=\left\{\begin{array}[]{ccc}d_{ij},&{\rm if}&i\neq j;\\ -\sum_{h\in[1,k]\atop h\neq i}d_{ih},&{\rm if}&i=j.\end{array}\right.$ (17) We now provide the following result, which extends the analogous one for undirected unweighted graphs derived in [7]. ###### Proposition 7. Given a directed weighted graph ${\mathcal{G}}=({\mathcal{V}},{\mathcal{E}},{\mathcal{W}})$, let $\pi=\{{\mathcal{V}}_{1},..,{\mathcal{V}}_{k}\}$ be a partition of the set of vertices ${\mathcal{V}}$ and let $P_{\pi}$ be the characteristic matrix of $\pi$ (see Notation in Section I). If $\pi$ is a (nontrivial) almost equitable partition, then: * i) ${\mathcal{L}}P_{\pi}=P_{\pi}{\mathcal{L}}_{\pi};$ * ii) the spectra of ${\mathcal{L}}_{\pi}$ and ${\mathcal{L}}$ satisfy $\sigma({\mathcal{L}}_{\pi})\subset\sigma({\mathcal{L}})$ and the associated eigenvectors are related as follows ${\mathbf{u}}\in{\rm ker}(\lambda I_{k}-{\mathcal{L}}_{\pi})\Rightarrow P_{\pi}{\mathbf{u}}\in{\rm ker}(\lambda I_{N}-{\mathcal{L}});$ (18) * iii) $\forall\lambda\in\sigma({\mathcal{L}}_{\pi})\subset\sigma({\mathcal{L}})$, $\exists{\mathbf{v}}\in{\rm ker}(\lambda I_{N}-{\mathcal{L}})$ such that $\forall j\in[1,k]$ $[{\mathbf{v}}]_{r}=[{\mathbf{v}}]_{s}\qquad\forall r,s\in{\mathcal{V}}_{j}.$ (19) ###### Proof. i) Set $n_{i}:=|{\mathcal{V}}_{i}|,i\in[1,k].$ There is no loss of generality in assuming that ${\mathcal{V}}_{1}=[1,n_{1}]$ and ${\mathcal{V}}_{i}=[\sum_{h=1}^{i-1}n_{h}+1,\sum_{h=1}^{i-1}n_{h}+n_{i}]$ for $i\in[2,k]$.
Consequently, $P_{\pi}=\begin{bmatrix}{\bf 1}_{n_{1}}&&&\cr&{\bf 1}_{n_{2}}&&\cr&&\ddots&\cr&&&{\bf 1}_{n_{k}}\end{bmatrix}.$ It is easily seen that ${\mathcal{L}}P_{\pi}=\begin{bmatrix}-\sum_{j\neq 1}d_{1j}{\bf 1}_{n_{1}}&d_{12}{\bf 1}_{n_{1}}&\dots&d_{1k}{\bf 1}_{n_{1}}\cr d_{21}{\bf 1}_{n_{2}}&-\sum_{j\neq 2}d_{2j}{\bf 1}_{n_{2}}&\dots&d_{2k}{\bf 1}_{n_{2}}\cr\vdots&\vdots&\ddots&\vdots\cr d_{k1}{\bf 1}_{n_{k}}&d_{k2}{\bf 1}_{n_{k}}&\dots&-\sum_{j\neq k}d_{kj}{\bf 1}_{n_{k}}\end{bmatrix}$ where we used the fact that if $v\in{\mathcal{V}}_{i}$ then $0={\bf e}_{v}^{\top}{\mathcal{L}}{\bf 1}_{N}=\sum_{u\in{\mathcal{V}}_{i}}\ell_{vu}+\sum_{j\neq i}d_{ij}.$ It is then immediate to see that i) holds. ii) Let $\lambda$ be arbitrary in $\sigma({\mathcal{L}}_{\pi})$. If ${\bf u}$ is an eigenvector of ${\mathcal{L}}_{\pi}$ corresponding to $\lambda$, then ${\mathcal{L}}_{\pi}{\bf u}=\lambda{\bf u}$. So, by making use of point i), we get ${\mathcal{L}}P_{\pi}{\bf u}=P_{\pi}{\mathcal{L}}_{\pi}{\bf u}=\lambda P_{\pi}{\bf u}$. This shows that $\lambda\in\sigma({\mathcal{L}})$ (and hence $\sigma({\mathcal{L}}_{\pi})\subset\sigma({\mathcal{L}})$), and that $P_{\pi}{\bf u}$ is an eigenvector of ${\mathcal{L}}$ corresponding to $\lambda$. iii) By point ii) it is immediate to see that for every eigenvalue $\lambda$ of ${\mathcal{L}}_{\pi}$ there exists an eigenvector ${\bf v}$ of ${\mathcal{L}}$ corresponding to $\lambda$, taking the form ${\bf v}=P_{\pi}{\bf u}$. Such an eigenvector clearly satisfies (19). ∎ To highlight the impact of this condition on our objectives, consider Fig. 1. As a consequence of Proposition 7, the multi-agent system whose communication graph is depicted in Fig. 1 is subject to undetectable edge failures, which are highlighted in red. Indeed, according to Proposition 3, point iv), the failure of every link $(r,h)$ such that the matrix $A$ has an eigenvector (corresponding to some non-unitary eigenvalue) whose $r$-th and $h$-th entries coincide is not detectable, because it produces a faulty network which is not discernible from the original one. If $r$ and $h$ belong to the same $i$-th cell of an almost equitable partition, then it follows directly from Proposition 7 that for every non-unitary eigenvalue of $A$, related through (3) to a nonzero eigenvalue of ${\mathcal{L}}_{\pi}$, there exists an eigenvector of $A$ whose $r$-th and $h$-th entries coincide. Some of these undetectable links are critical and their failure may significantly change the network structure, as for example edge $(1,2)$, whose failure affects the network strong connectivity. Figure 1: An example of a graph with a nontrivial equitable partition. Failures of red edges are undetectable.
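As a numerical companion to Proposition 7, consider again the example digraph used in the sketch of Section II, which admits the almost equitable partition $\pi=\{\{1,2\},\{3,4\}\}$ (every node of ${\mathcal{V}}_{1}$ receives total weight $2$ from ${\mathcal{V}}_{2}$, and every node of ${\mathcal{V}}_{2}$ receives total weight $3$ from ${\mathcal{V}}_{1}$):

```python
import numpy as np

W = np.array([[0., 1., 2., 0.],       # same digraph as in the Section II sketch
              [1., 0., 0., 2.],
              [3., 0., 0., 1.],
              [0., 3., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
P = np.array([[1., 0.],               # characteristic matrix of the partition
              [1., 0.],               # pi = {{1, 2}, {3, 4}}
              [0., 1.],
              [0., 1.]])
L_pi = np.array([[2., -2.],           # quotient Laplacian (17): d_12 = -2,
                 [-3., 3.]])          # d_21 = -3

print(np.allclose(L @ P, P @ L_pi))   # point i) of Proposition 7: True

# Point ii): L_pi has the eigenvalue 5 with eigenvector u = (2, -3), so 5 is
# also an eigenvalue of L, with lifted eigenvector P u constant on each cell.
u = np.array([2., -3.])
print(np.allclose(L @ (P @ u), 5 * (P @ u)))   # True
```

Consistently with the discussion above, the lifted eigenvector $P_{\pi}{\bf u}=(2,2,-3,-3)^{\top}$ has coinciding entries on each cell, so by Proposition 3, point iv), the disconnection of an arc joining two nodes of the same cell of this network would not be discernible.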
Note that we do not assume that $\lambda_{i}\neq\lambda_{j}$ for $i\neq j$, but each $\lambda_{i}$ corresponds to a specific chain of generalised eigenvectors ${\bf v}_{i}^{(k)},k\in[1,k_{i,max}]$, where ${\bf v}_{i}^{(k)}$ is a generalised eigenvector of order $k$ of $A$ corresponding to $\lambda_{i}$. Let $T\in{\mathbb{R}}^{N\times N}$ be the nonsingular transformation matrix with columns $T=\begin{bmatrix}{\bf 1}_{N}&{\bf v}_{2}^{(1)}&{\bf v}_{2}^{(2)}&\dots&{\bf v}_{2}^{(k_{2,max})}&\dots&{\bf v}_{n}^{(k_{n,max})}\end{bmatrix}.$ (21) Then $T^{-1}AT=J_{A}$ and hence $T^{-1}A=J_{A}T^{-1}$. Set $W:=\begin{bmatrix}{\bf 0}_{N-1}&I_{N-1}\end{bmatrix}T^{-1}\in{\mathbb{R}}^{(N-1)\times N}.$ (22) Considering (20) and (22), it is easy to verify that $WA-\tilde{J}_{A}W=0$. Consequently, as long as the system is functioning correctly, namely ${\bf x}(t)=A{\bf x}(t-1)$, the residual signal ${\bf r}(t)=W{\bf x}(t)-\tilde{J}_{A}W{\bf x}(t-1),\quad t\geq 1,$ is identically zero. In the case where the matrix $A$ also has complex conjugate eigenvalues, we can follow the same procedure and reasoning as above, by replacing the Jordan form with the real Jordan form, and hence pairing together pairs of complex conjugate eigenvalues and replacing pairs of complex generalised eigenvectors of some order with equivalent pairs of real generalised eigenvectors of the same order (see e.g. [18], Section 3.4.1). The details are slightly more involved from a notational viewpoint, but the substance of the result does not change. For this reason, we omit them. We now want to show that, under the discernibility assumption, unless ${\bf x}(\tau)\in\langle{\bf 1}_{N}\rangle$, namely the disconnection of the edge $(r,h)$ takes place at a time $t=\tau$ when the multi-agent system has already reached consensus, it is not possible that ${\bf r}(t)$ is identically zero for $t>\tau$. This allows one to detect the disconnection of the edge $(r,h)$. Additionally, we propose conditions ensuring that the edge disconnection is not only detected but also identified, namely that the specific broken edge can be identified from the sequence of residual vectors. To this end, it is convenient to replace the original notation $\bar{A}$ for the state-space matrix after disconnection with a more specific notation that indicates which specific edge got disconnected. Accordingly, we introduce the following notation: $\bar{A}_{ij}:=A-\kappa\ell_{ij}{\bf e}_{i}[{\bf e}_{i}-{\bf e}_{j}]^{\top},$ which is the state matrix of the system once the edge $(j,i)$ gets disconnected, for some $i,j\in[1,N],i\neq j$ (in particular, for $(j,i)=(r,h)$).

###### Proposition 8. Consider the networks (2) and (9), the latter obtained from the former after the disconnection of the edge $(r,h)$ at some time $t=\tau$. If Assumptions 1-2 hold, and the networks (2) and (9) are discernible (i.e., one of the equivalent conditions of Proposition 3 holds), then, unless ${\bf x}(\tau)\in\langle{\bf 1}_{N}\rangle$, namely the network has already reached consensus, there exists $t\in[\tau+1,\tau+N]$ such that ${\bf r}(t)\neq 0$. Moreover, if for every $j\in[1,N]\setminus\\{r,h\\}$ (and not only for $j=r$) the following conditions hold: (i) the faulty network obtained by disconnecting $(j,h)$ is still strongly connected and discernible from the original network, (ii) $\sigma(\bar{A}_{hr})\cap\sigma(\bar{A}_{hj})=\\{1\\},$ (23) then it is possible to identify from the residual signal the edge $(r,h)$ that got disconnected.

###### Proof.
If the disconnection of edge $(r,h)$ takes place at $t=\tau$, then for every $k\geq 1$ $\displaystyle{\bf r}(\tau+k)$ $\displaystyle=$ $\displaystyle W\bar{A}_{hr}{\bf x}(\tau+k-1)-\tilde{J}_{A}W{\bf x}(\tau+k-1)$ (24) $\displaystyle=$ $\displaystyle[W\bar{A}_{hr}-\tilde{J}_{A}W]\bar{A}_{hr}^{k-1}{\bf x}(\tau)$ $\displaystyle=$ $\displaystyle-\kappa\ell_{hr}W{\bf e}_{h}[{\bf e}_{h}-{\bf e}_{r}]^{\top}\bar{A}_{hr}^{k-1}{\bf x}(\tau),$ where we used the identities $\bar{A}_{hr}=A-\kappa\ell_{hr}{\bf e}_{h}[{\bf e}_{h}-{\bf e}_{r}]^{\top}$ and $WA-\tilde{J}_{A}W=0$. Therefore ${\bf r}(\tau+k)=0$ for every $k\geq 1$ if and only if ${\bf x}(\tau)\in{\rm ker}{\mathcal{W}}_{hr},$ where ${\mathcal{W}}_{hr}:=\begin{bmatrix}-\kappa\ell_{hr}W{\bf e}_{h}({\bf e}_{h}-{\bf e}_{r})^{\top}\cr-\kappa\ell_{hr}W{\bf e}_{h}({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}\cr\vdots\cr-\kappa\ell_{hr}W{\bf e}_{h}({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}^{N-1}\end{bmatrix}.$ (25) This amounts to saying that there exist $\lambda\in{\mathbb{C}}$ and ${\bf v}\neq 0$ such that the PBH observability matrix satisfies $\begin{bmatrix}\lambda I_{N}-\bar{A}_{hr}\cr W{\bf e}_{h}[{\bf e}_{h}-{\bf e}_{r}]^{\top}\end{bmatrix}{\bf v}=0,$ but this is easily seen to be equivalent to the existence of $\lambda\in{\mathbb{C}}$ and ${\bf v}\neq 0$ such that $\left\\{\begin{matrix}\bar{A}_{hr}{\bf v}=\lambda{\bf v}\cr[{\bf e}_{h}-{\bf e}_{r}]^{\top}{\bf v}=0,\end{matrix}\right.$ and therefore to the existence of $\lambda\in{\mathbb{C}}$ and ${\bf v}\neq 0$ such that $\left\\{\begin{matrix}A{\bf v}=\lambda{\bf v}\cr[{\bf v}]_{h}=[{\bf v}]_{r}.\end{matrix}\right.$ The discernibility assumption rules out the possibility that the previous condition holds unless $\lambda=1$ and ${\bf v}\in\langle{\bf 1}_{N}\rangle$. On the other hand, if the previous condition holds only for $\lambda=1$ and ${\bf v}\in\langle{\bf 1}_{N}\rangle$, this means that ${\bf x}(\tau)\in\langle{\bf 1}_{N}\rangle$, namely the disconnection had taken place after the network had reached consensus, a situation in which detection is not possible. This proves that there exists $k\in[1,N]$ such that ${\bf r}(\tau+k)\neq 0$. Now we want to prove that under the assumptions that (i) the disconnection of any edge $(j,h)$ results in a new strongly connected network, discernible from the original one, and (ii) condition (23) holds, it is possible to uniquely identify the broken link from the residuals. By the previous part of the proof, if the disconnection takes place at $t=\tau$ and ${\bf x}(\tau)\not\in\langle{\bf 1}_{N}\rangle$, then at least one of the values ${\bf r}(\tau+k),k=1,2,\dots,N,$ must be nonzero. Set $k^{*}:=\min\\{k\geq 1:{\bf r}(\tau+k)\neq 0\\}.$ (26) Then ${\bf r}(\tau+k^{*})=c_{k^{*}}\cdot W{\bf e}_{h},$ where $\displaystyle c_{k^{*}}$ $\displaystyle:=$ $\displaystyle-\kappa\ell_{hr}({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}^{k^{*}-1}{\bf x}(\tau)$ $\displaystyle=$ $\displaystyle-\kappa\ell_{hr}({\bf e}_{h}-{\bf e}_{r})^{\top}{\bf x}(\tau+k^{*}-1)$ $\displaystyle=$ $\displaystyle-\kappa\ell_{hr}\left([{\bf x}(\tau+k^{*}-1)]_{h}-[{\bf x}(\tau+k^{*}-1)]_{r}\right)\neq 0.$ By Lemma 12 in the Appendix, we can claim that this vector uniquely identifies the index $h$, namely one of the extremes of the edge that got disconnected.
We now note that $\begin{bmatrix}{\bf r}(\tau+k^{*})\cr{\bf r}(\tau+k^{*}+1)\cr\vdots\cr{\bf r}(\tau+k^{*}+2N-1)\end{bmatrix}=\begin{bmatrix}W{\bf e}_{h}&&&\cr&W{\bf e}_{h}&&\cr&&\ddots&\cr&&&W{\bf e}_{h}\end{bmatrix}$ $\cdot\begin{bmatrix}({\bf e}_{h}-{\bf e}_{r})^{\top}\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}\cr\vdots\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}^{2N-1}\end{bmatrix}(-\kappa\ell_{hr}){\bf x}(\tau+k^{*}-1).$ We have just proved that we can uniquely identify $W{\bf e}_{h}$ from the first nonzero residual. Moreover, the block diagonal matrix having $W{\bf e}_{h}$ as diagonal block is clearly of full column rank, and hence the vector ${\bf Y}\neq 0$ such that $\begin{bmatrix}{\bf r}(\tau+k^{*})\cr{\bf r}(\tau+k^{*}+1)\cr\vdots\cr{\bf r}(\tau+k^{*}+2N-1)\end{bmatrix}=\begin{bmatrix}W{\bf e}_{h}&&&\cr&W{\bf e}_{h}&&\cr&&\ddots&\cr&&&W{\bf e}_{h}\end{bmatrix}{\bf Y}$ is uniquely determined. Now, we want to show that under assumptions (i) and (ii) we can uniquely identify the index $r$ such that ${\bf Y}\in{\rm Im}\begin{bmatrix}({\bf e}_{h}-{\bf e}_{r})^{\top}\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}\cr\vdots\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}^{2N-1}\end{bmatrix}.$ If this were not the case, then there would be another index $j\neq r$ (and $j\neq h$) such that ${\bf Y}\in{\rm Im}\begin{bmatrix}({\bf e}_{h}-{\bf e}_{r})^{\top}\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}\cr\vdots\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}^{2N-1}\end{bmatrix}\cap{\rm Im}\begin{bmatrix}({\bf e}_{h}-{\bf e}_{j})^{\top}\cr({\bf e}_{h}-{\bf e}_{j})^{\top}\bar{A}_{hj}\cr\vdots\cr({\bf e}_{h}-{\bf e}_{j})^{\top}\bar{A}_{hj}^{2N-1}\end{bmatrix},$ and hence there would be two nonzero vectors ${\bf z}_{r}$ and ${\bf z}_{j}$ such that ${\bf Y}=\begin{bmatrix}({\bf e}_{h}-{\bf e}_{r})^{\top}\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}\cr\vdots\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}^{2N-1}\end{bmatrix}{\bf z}_{r}=-\begin{bmatrix}({\bf e}_{h}-{\bf e}_{j})^{\top}\cr({\bf e}_{h}-{\bf e}_{j})^{\top}\bar{A}_{hj}\cr\vdots\cr({\bf e}_{h}-{\bf e}_{j})^{\top}\bar{A}_{hj}^{2N-1}\end{bmatrix}{\bf z}_{j}.$ Clearly, neither ${\bf z}_{r}$ nor ${\bf z}_{j}$ can belong to $\langle{\bf 1}_{N}\rangle$, otherwise ${\bf Y}$ would be zero. Condition $0=\begin{bmatrix}({\bf e}_{h}-{\bf e}_{r})^{\top}&({\bf e}_{h}-{\bf e}_{j})^{\top}\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}&({\bf e}_{h}-{\bf e}_{j})^{\top}\bar{A}_{hj}\cr\vdots&\vdots\cr({\bf e}_{h}-{\bf e}_{r})^{\top}\bar{A}_{hr}^{2N-1}&({\bf e}_{h}-{\bf e}_{j})^{\top}\bar{A}_{hj}^{2N-1}\end{bmatrix}\begin{bmatrix}{\bf z}_{r}\cr{\bf z}_{j}\end{bmatrix}$ corresponds to saying that the unobservable subspace of the matrix pair $\left(\begin{bmatrix}\bar{A}_{hr}&0\cr 0&\bar{A}_{hj}\end{bmatrix},\begin{bmatrix}({\bf e}_{h}-{\bf e}_{r})^{\top}&({\bf e}_{h}-{\bf e}_{j})^{\top}\end{bmatrix}\right)$ includes the vector $\begin{bmatrix}{\bf z}_{r}\cr{\bf z}_{j}\end{bmatrix}\not\in\langle{\bf 1}_{2N}\rangle.$ Clearly, by the irreducibility assumption on $\bar{A}_{hr}$ and $\bar{A}_{hj}$, this cannot be an eigenvector corresponding to $\lambda=1$. On the other hand, the fact that ${\bf z}_{r}$ and ${\bf z}_{j}$ are both nonzero implies that there is an eigenvalue $\lambda\neq 1$ common to $\sigma(\bar{A}_{hr})$ and $\sigma(\bar{A}_{hj})$. But this contradicts assumption (ii), and hence $r$ is uniquely determined. ∎
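To fix ideas, the detection mechanism established in Proposition 8 can be illustrated numerically. The following Python fragment is a minimal sketch under simplifying assumptions, not the paper's implementation: it uses a hypothetical four-node graph with symmetric weights (so that $A$ is diagonalizable with real eigenvalues, the simple case treated in Section IV-B), builds $T$ and $W$ as in (21)-(22), verifies $WA-\tilde{J}_{A}W=0$, and shows the residual becoming nonzero after the disconnection of an edge $(r,h)$ modeled through $\bar{A}_{hr}=A-\kappa\ell_{hr}{\bf e}_{h}[{\bf e}_{h}-{\bf e}_{r}]^{\top}$.

```python
import numpy as np

# Toy example (not from the paper): 4 nodes, symmetric weights, kappa = 0.25,
# so that A = I - kappa*L is diagonalizable with real eigenvalues.
Wadj = np.array([[0., 1., 1., 0.],
                 [1., 0., 1., 1.],
                 [1., 1., 0., 1.],
                 [0., 1., 1., 0.]])
Lap = np.diag(Wadj.sum(axis=1)) - Wadj
kappa, N = 0.25, 4
A = np.eye(N) - kappa * Lap

# T = [1_N  v_2 ... v_N] as in (21); W = [0  I] T^{-1} as in (22).
lam, V = np.linalg.eig(A)
order = np.argsort(-lam.real)               # unitary eigenvalue first
lam = lam[order].real
T = np.column_stack([np.ones(N), V[:, order].real[:, 1:]])
W = np.linalg.inv(T)[1:, :]
J_tilde = np.diag(lam[1:])
assert np.allclose(W @ A - J_tilde @ W, 0)  # the key identity

# Disconnect edge (r, h) = (1, 3) at t = tau via the rank-one update.
r, h, tau = 1, 3, 5
e = np.eye(N)
A_bar = A - kappa * Lap[h, r] * np.outer(e[h], e[h] - e[r])

x = np.array([1., -2., 0.5, 3.])            # x(0), not a consensus state
prev = x.copy()
for t in range(1, 12):
    x = (A if t <= tau else A_bar) @ prev
    res = W @ x - J_tilde @ W @ prev        # residual r(t)
    print(t, np.linalg.norm(res))           # ~0 for t <= tau, nonzero after
    prev = x
```

In this toy run the printed residual norm is numerically zero up to $t=\tau$ and becomes nonzero right after the disconnection (generically at $t=\tau+1$), in accordance with Proposition 8.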
We now sketch an algorithm to identify the edge $(r,h)$ that got disconnected. Suppose that at $t=\tau$ the edge $(r,h)$ gets disconnected and that the first nonzero residual after $t=\tau$ is ${\bf r}(\tau+k^{*})=c_{k^{*}}\cdot W{\bf e}_{h}$, with $k^{*}>0$ and $c_{k^{*}}\neq 0$ defined as in (26) and in the proof of Proposition 8, respectively. By the previous reasoning, we can claim that there exists a unique value of $h\in[1,N]$ such that ${\bf r}(\tau+k^{*})\in\langle W{\bf e}_{h}\rangle$, and this allows us to uniquely identify $h$ and hence the coefficient $c_{k^{*}}$. From the knowledge of $h$ and $c_{k^{*}}$ one can infer the identity of $r$ by comparing the state value of each in-neighbour of $h$ with the state value that would produce the residual ${\bf r}(\tau+k^{*})$. Since it must hold that $[{\bf x}(\tau+k^{*}-1)]_{r}=\dfrac{c_{k^{*}}}{\kappa\ell_{hr}}+[{\bf x}(\tau+k^{*}-1)]_{h},$ (28) $r$ must belong to the following set: $r\in{\mathcal{R}}_{k^{*}}:=\\{i\in[1,N],i\neq h:\ell_{hi}\neq 0\ {\rm and}\ [{\bf x}(\tau+k^{*}-1)]_{i}=\dfrac{c_{k^{*}}}{\kappa\ell_{hi}}+[{\bf x}(\tau+k^{*}-1)]_{h}\\}.$ If $|{\mathcal{R}}_{k^{*}}|=1$, then $r$ is identified at the first step. If not, one can evaluate the set ${\mathcal{R}}_{k^{*}+1}$ and then the intersection ${\mathcal{R}}_{k^{*}}\cap{\mathcal{R}}_{k^{*}+1}$. By proceeding in this way, based on the previous proof, this procedure identifies in a finite number of steps the value of the index $r$ that represents the first extreme of the edge that got disconnected, since there exists $0\leq d\leq 2N-1$ such that the set $\cap_{i=0}^{d}{\mathcal{R}}_{k^{*}+i}$ consists of a single element. We now explore the more interesting case of discernibility from the observation of the first $p$ agents.

## V Discernibility from the observation of the first $p$ agents after edge disconnection

By referring to the second part of Definition 1, it is easily seen that discernibility of the two systems from the observation of the first $p$ agents requires the observability of the original system. If not, condition (10) would be contradicted for any unobservable state ${\bf x}(\tau)$ and ${\bf x}_{\tau}=0$. On the other hand, the lack of observability of the faulty system could lead to some pathological situations, since the output measurements could lead one to believe that the faulty network has already reached the consensus to some constant value, while it is still evolving. So, in the following we will assume: Assumption 3. Both the original system and the faulty one are observable from the first $p$ agents, namely both $(A,\begin{bmatrix}I_{p}&0\end{bmatrix})$ and $(\bar{A}_{hr},\begin{bmatrix}I_{p}&0\end{bmatrix})$ are observable. Under this assumption, we will characterise discernibility from the observation of the first $p$ agents in terms of the matrix pair $(\Delta,\Gamma_{p})$, with $\Delta:=\begin{bmatrix}A&0\cr 0&\bar{A}_{hr}\end{bmatrix}\qquad\Gamma_{p}:=\begin{bmatrix}I_{p}&0&-I_{p}&0\end{bmatrix}.$ (29) It is worth noticing that since $A$ is a positive irreducible matrix, having ${\bf 1}_{N}$ as dominant eigenvector corresponding to the unitary eigenvalue, $1$ is always an observable eigenvalue of the pair $(A,\begin{bmatrix}I_{p}&0\end{bmatrix})$, and hence if the pair were not observable, the eigenvalues of the unobservable subsystem would necessarily have modulus smaller than $1$.
The same reasoning applies to $\bar{A}_{hr}$, as long as it remains irreducible. Finally, the irreducibility assumption on both $A$ and $\bar{A}_{hr}$ ensures that the eigenspace of both $A$ and $\bar{A}_{hr}$ corresponding to $\lambda=1$ is $\langle{\bf 1}_{N}\rangle$. So, the only unobservable eigenvectors of $(\Delta,\Gamma_{p})$ corresponding to the unitary eigenvalue are those belonging to $\langle{\bf 1}_{2N}\rangle$. Assumption 3 and the previous comments are fundamental to derive the following result, which extends Proposition 3 in [4].

###### Proposition 9. Consider the networks (2) and (9), the latter obtained from the former after the disconnection of the edge $(r,h)$, and assume that Assumptions 1, 2 and 3 hold. The following facts are equivalent: * i) the networks (2) and (9) are discernible from the observation of the first $p$ agents; * ii) the unobservable states of the pair $(\Delta,\Gamma_{p})$ are those in $\langle{\bf 1}_{2N}\rangle$ and they correspond to the unitary eigenvalue; * iii) for every $\lambda\in\sigma(A)\cap\sigma(\bar{A}_{hr}),\lambda\neq 1$, ${\rm rank}\begin{bmatrix}\lambda I_{N}-A&0\cr 0&\lambda I_{N}-\bar{A}_{hr}\cr I_{p}\ \ \ 0&-I_{p}\ \ \ 0\end{bmatrix}=2N;$ * iv) there are no $\lambda\in{\mathbb{C}}$ and nonzero vectors ${\bf v},\bar{\bf v}$, except for $\lambda=1$ and ${\bf v}=\bar{\bf v}\in\langle{\bf 1}_{N}\rangle$, such that $\left\\{\begin{array}[]{rcl}A{\bf v}=\lambda{\bf v},&&\bar{A}_{hr}\bar{\bf v}=\lambda\bar{\bf v}\\\ \begin{bmatrix}I_{p}&0\end{bmatrix}{\bf v}&=&\begin{bmatrix}I_{p}&0\end{bmatrix}\bar{\bf v}.\end{array}\right.$ (30)

###### Proof. i) $\Leftrightarrow$ ii) Suppose that the networks (2) and (9) are not discernible from the observation of the first $p$ agents. Then there exist ${\bf x}(0)\not\in\langle{\bf 1}_{N}\rangle$ and $\bar{\bf x}_{0}\in{\mathbb{R}}^{N}$ such that $\begin{bmatrix}I_{p}&0\end{bmatrix}\bar{A}_{hr}^{t}\bar{\bf x}_{0}=\begin{bmatrix}I_{p}&0\end{bmatrix}A^{t}{\bf x}(0)$ for every $t\geq 0$. This is equivalent to saying that $\begin{bmatrix}{\bf x}(0)\cr\bar{\bf x}_{0}\end{bmatrix}$, which does not belong to $\langle{\bf 1}_{2N}\rangle$, is not observable for the pair $(\Delta,\Gamma_{p})$. Conversely, suppose that there exists an unobservable state of the pair $(\Delta,\Gamma_{p})$, ${\bf x}\not\in\langle{\bf 1}_{2N}\rangle$. Since the only eigenvectors of $\Delta$ corresponding to the unitary eigenvalue and belonging to the unobservable subspace are those in $\langle{\bf 1}_{2N}\rangle$, this implies that there exists an eigenvector ${\bf v}=\begin{bmatrix}{\bf v}_{1}\cr{\bf v}_{2}\end{bmatrix}\not\in\langle{\bf 1}_{2N}\rangle$ of $\Delta$ corresponding to some $\lambda\neq 1$ and satisfying $\begin{bmatrix}I_{p}&0\end{bmatrix}{\bf v}_{1}=\begin{bmatrix}I_{p}&0\end{bmatrix}{\bf v}_{2}.$ Note that Assumption 3 ensures that both ${\bf v}_{1}$ and ${\bf v}_{2}$ are nonzero vectors. Since ${\bf v}_{1}\not\in\langle{\bf 1}_{N}\rangle$, if $\lambda$ is real then we have found a state that contradicts discernibility from the observation of the first $p$ agents. If $\lambda$ is complex, then we can simply use the real parts of ${\bf v}_{1}$ and ${\bf v}_{2}$ to disprove discernibility from the observation of the first $p$ agents.
ii) $\Leftrightarrow$ iii) Condition ii) is easily seen to be equivalent to the following condition, expressed in terms of the PBH observability matrix: if there exist $\lambda\in{\mathbb{C}}$ and $\begin{bmatrix}{\bf v}\cr\bar{\bf v}\end{bmatrix}\neq 0$ such that $\begin{bmatrix}\lambda I_{N}-A&0\cr 0&\lambda I_{N}-\bar{A}_{hr}\cr I_{p}\ \ \ 0&-I_{p}\ \ \ 0\end{bmatrix}\begin{bmatrix}{\bf v}\cr\bar{\bf v}\end{bmatrix}=0,$ (31) then $\lambda=1$ and $\begin{bmatrix}{\bf v}\cr\bar{\bf v}\end{bmatrix}\in\langle{\bf 1}_{2N}\rangle$. Clearly any such $\lambda$ must be in $\sigma(\Delta)$. On the other hand, if $\lambda$ were not a common eigenvalue of $A$ and $\bar{A}_{hr}$, then either ${\bf v}$ or $\bar{\bf v}$ would be zero, and this would mean that either $(A,\begin{bmatrix}I_{p}&0\end{bmatrix})$ or $(\bar{A}_{hr},\begin{bmatrix}I_{p}&0\end{bmatrix})$ is not observable. This would contradict Assumption 3. Therefore we have proved that ii) is equivalent to iii). iii) $\Leftrightarrow$ iv) Obvious. ∎

###### Remark 10. If the networks (2) and (9) are discernible from the observation of the first $p$ agents, they are discernible. If not, a state ${\bf x}\not\in\langle{\bf 1}_{N}\rangle$ could be found such that $\bar{A}_{hr}^{t}{\bf x}=A^{t}{\bf x}$ for every $t\geq 0$, and hence a fortiori $\begin{bmatrix}I_{p}&0\end{bmatrix}\bar{A}_{hr}^{t}{\bf x}=\begin{bmatrix}I_{p}&0\end{bmatrix}A^{t}{\bf x}$ for every $t\geq 0$. This implies that a necessary condition for discernibility from the observation of the first $p$ agents is that all the nonunitary eigenvalues of $A$ and $\bar{A}_{hr}$ have unitary geometric multiplicity.

### V-A How to detect and identify an edge disconnection when the states of the first $p$ agents are available

Also in this case we may detect an edge disconnection by making use of the measurements of the states of the first $p$ agents. Since the pair $(A,\begin{bmatrix}I_{p}&0\end{bmatrix})$ is observable, let $L$ be a matrix in ${\mathbb{R}}^{N\times p}$ such that $A+L\begin{bmatrix}I_{p}&0\end{bmatrix}$ is nilpotent. We can construct the closed-loop dead-beat observer of the state of the multi-agent system [24] as $\hat{\bf x}(t+1)=A\hat{\bf x}(t)-L[{\bf y}(t)-\begin{bmatrix}I_{p}&0\end{bmatrix}\hat{\bf x}(t)].$ (32) Clearly, after a finite number of steps $\tau_{0}$, which depends on the nilpotency index of $A+L\begin{bmatrix}I_{p}&0\end{bmatrix}$ (by exploiting the observability of the pair $(A,\begin{bmatrix}I_{p}&0\end{bmatrix})$ and Rosenbrock's theorem, we can claim that the minimum $\tau_{0}$ ranges in the interval $\left[\lceil\frac{N}{p}\rceil,N-p+1\right]$), we have $\hat{\bf x}(t)={\bf x}(t)$, and hence the residual signal ${\bf r}(t)=\begin{bmatrix}I_{p}&0\end{bmatrix}\hat{\bf x}(t)-{\bf y}(t)=\begin{bmatrix}I_{p}&0\end{bmatrix}[\hat{\bf x}(t)-{\bf x}(t)],$ is identically zero from $t=\tau_{0}$ onward until a fault occurs. Now suppose that at $t=\tau\geq\tau_{0}$ the disconnection of the edge $(r,h)$ takes place, and hence the multi-agent state updates according to (9). (In the transient phase, until the estimation error goes to zero, the residual signal may be nonzero; as a result, it is not possible to detect an edge disconnection in a reliable way. This is the reason why a dead-beat observer is preferable to an asymptotic observer, since this transient phase lasts a finite number of time instants.)
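The observer-based residual generator just described can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes that a gain $L$ making $A+L\begin{bmatrix}I_{p}&0\end{bmatrix}$ nilpotent has already been designed (e.g., through the observability canonical form procedure used in Section VI), and simply runs recursion (32) on the measured outputs, returning the residual sequence.

```python
import numpy as np

def deadbeat_residuals(A, L, p, y):
    """Run the dead-beat observer (32) on an output sequence y and return
    the residual sequence r(t) = [I_p 0] x_hat(t) - y(t).

    A : (N, N) consensus matrix; y : (T, p) states of the first p agents;
    L : (N, p) gain, assumed here to make A + L [I_p 0] nilpotent
    (e.g., designed via the observability canonical form, Section VI)."""
    N = A.shape[0]
    C = np.hstack([np.eye(p), np.zeros((p, N - p))])
    # sanity check: dead-beat convergence needs (A + L C)^N = 0
    assert np.allclose(np.linalg.matrix_power(A + L @ C, N), 0)
    x_hat = np.zeros(N)
    res = []
    for t in range(y.shape[0]):
        res.append(C @ x_hat - y[t])
        # recursion (32): x_hat(t+1) = A x_hat(t) - L [y(t) - C x_hat(t)]
        x_hat = A @ x_hat - L @ (y[t] - C @ x_hat)
    return np.array(res)
```

During fault-free operation the returned residuals vanish from $t=\tau_{0}$ onward; under the discernibility conditions of Proposition 9 they cannot remain zero after an edge disconnection, unless consensus had already been reached.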
We want to show that, unless the multi-agent system has already reached consensus, under Assumptions 1, 2 and 3 and any of the equivalent conditions of Proposition 9, the residual signal will necessarily become nonzero at some time instant $t>\tau$. To this end, it is sufficient to show that for the system obtained by putting together (9) and (32), namely $\begin{bmatrix}\hat{\bf x}(t+1)\cr{\bf x}(t+1)\end{bmatrix}=\begin{bmatrix}A+L\begin{bmatrix}I_{p}&0\end{bmatrix}&-L\begin{bmatrix}I_{p}&0\end{bmatrix}\cr 0&\bar{A}_{hr}\end{bmatrix}\begin{bmatrix}\hat{\bf x}(t)\cr{\bf x}(t)\end{bmatrix}$ (33) ${\bf r}(t)=\begin{bmatrix}\begin{bmatrix}I_{p}&0\end{bmatrix}&\begin{bmatrix}-I_{p}&0\end{bmatrix}\end{bmatrix}\begin{bmatrix}\hat{\bf x}(t)\cr{\bf x}(t)\end{bmatrix}$ (34) the only unobservable states are those in $\langle{\bf 1}_{2N}\rangle.$ Consider the PBH observability matrix $\begin{bmatrix}\lambda I_{N}-A-L\begin{bmatrix}I_{p}&0\end{bmatrix}&L\begin{bmatrix}I_{p}&0\end{bmatrix}\cr 0&\lambda I_{N}-\bar{A}_{hr}\cr\hline\cr\begin{bmatrix}I_{p}&0\end{bmatrix}&\begin{bmatrix}-I_{p}&0\end{bmatrix}\end{bmatrix}.$ (35) By resorting to elementary operations on the rows of the matrix, it is easily seen that the previous matrix is of full column rank for $\lambda\in{\mathbb{C}}$ if and only if the PBH observability matrix $\begin{bmatrix}\lambda I_{N}-A&0\cr 0&\lambda I_{N}-\bar{A}_{hr}\cr\hline\cr\begin{bmatrix}I_{p}&0\end{bmatrix}&\begin{bmatrix}-I_{p}&0\end{bmatrix}\end{bmatrix}$ (36) is of full column rank for that $\lambda$. Moreover, when neither PBH matrix has full column rank, the two matrices have the same kernel. By the assumption that (2) and (9) are discernible from the observation of the first $p$ agents, it follows (see condition iii) of Proposition 9) that (36) is of full column rank for every $\lambda\neq 1$, and that for $\lambda=1$ its kernel is $\langle{\bf 1}_{2N}\rangle.$ Therefore, the only unobservable states of the overall system are those in $\langle{\bf 1}_{2N}\rangle$, and this implies that the residual ${\bf r}(t)$ cannot remain zero after an edge disconnection. We now want to show that, under suitable assumptions, one can deduce from the residual the exact information about which edge got disconnected.

###### Proposition 11. Consider the networks (2) and (9), the latter obtained from the former after the disconnection of the edge $(r,h)$ at some time $t=\tau\geq\tau_{0}$. Assume that for every edge $(j,i),\ i,j\in[1,N],j\neq i$, (and not only for $(j,i)=(r,h)$) (i) the faulty network obtained by disconnecting $(j,i)$ is strongly connected and discernible from the original network based on the observation of the first $p$ states, (ii) Assumption 3 holds and hence $(A,\begin{bmatrix}I_{p}&0\end{bmatrix})$ and $(\bar{A}_{ij},\begin{bmatrix}I_{p}&0\end{bmatrix})$ are observable, and (iii) $\sigma(\bar{A}_{hr})\cap\sigma(\bar{A}_{ij})=\\{1\\}.$ (37) Then, unless ${\bf x}(\tau)\in\langle{\bf 1}_{N}\rangle$, namely the network has already reached consensus, it is possible to identify from the residual signal ${\bf r}(t),t\geq\tau$, generated by (34), the edge $(r,h)$ that got disconnected.

###### Proof.
To prove that the previous observer-based residual generator produces distinct residual sequences corresponding to different faulty systems (provided that ${\bf x}(\tau)$, the state of the multi-agent system at the time of edge disconnection, has not yet reached the equilibrium, namely it is not a multiple of ${\bf 1}_{N}$), it is sufficient to prove that if $(r,h)\neq(j,i)$ then the two systems $\left({\mathbb{A}}_{hr},\begin{bmatrix}I_{p}&0_{p\times(N-p)}&-I_{p}&0_{p\times(N-p)}\end{bmatrix}\right)$ $\left({\mathbb{A}}_{ij},\begin{bmatrix}I_{p}&0_{p\times(N-p)}&-I_{p}&0_{p\times(N-p)}\end{bmatrix}\right),$ with ${\mathbb{A}}_{hr}:=\begin{bmatrix}A_{L}&-L\begin{bmatrix}I_{p}&0_{p\times(N-p)}\end{bmatrix}\cr 0&{\bar{A}}_{hr}\end{bmatrix},$ ${\mathbb{A}}_{ij}:=\begin{bmatrix}A_{L}&-L\begin{bmatrix}I_{p}&0_{p\times(N-p)}\end{bmatrix}\cr 0&{\bar{A}}_{ij}\end{bmatrix},$ where $A_{L}:=A+L\begin{bmatrix}I_{p}&0_{p\times(N-p)}\end{bmatrix}$ denotes the nilpotent closed-loop observer matrix, generate distinct residual trajectories, provided that neither of them has already reached the equilibrium at the time the disconnection occurs. This amounts to saying that the only unobservable states of the system $\left(\begin{bmatrix}{\mathbb{A}}_{hr}&\vline&0\cr\hline\cr 0&\vline&{\mathbb{A}}_{ij}\end{bmatrix},\begin{bmatrix}I_{p}&0&-I_{p}&0&\vline\ -I_{p}&0&I_{p}&0\end{bmatrix}\right),$ (38) taking the form $\begin{bmatrix}{\bf v}_{hr}\cr{\bf v}_{hr}\cr{\bf v}_{ij}\cr{\bf v}_{ij}\end{bmatrix}$ are those belonging to $\langle\begin{bmatrix}{\bf 1}_{2N}\cr 0\end{bmatrix},\begin{bmatrix}0\cr{\bf 1}_{2N}\end{bmatrix}\rangle.$ By making use of the PBH observability matrix $\begin{bmatrix}\lambda I_{2N}-{\mathbb{A}}_{hr}&0\cr 0&\lambda I_{2N}-{\mathbb{A}}_{ij}\cr\hline\cr[I_{p}\ 0\ -I_{p}\ 0]&[-I_{p}\ 0\ I_{p}\ 0]\end{bmatrix}$ (39) it is easily seen that a vector with the previous block structure belongs to the kernel of (39) for $\lambda=1$ if and only if ${\bf v}_{hr}$ is a common eigenvector (corresponding to $\lambda=1$) of $A$ and $\bar{A}_{hr}$ and ${\bf v}_{ij}$ is a common eigenvector (corresponding to $\lambda=1$) of $A$ and $\bar{A}_{ij}$. Therefore the overall vector belongs to $\langle\begin{bmatrix}{\bf 1}_{2N}\cr 0\end{bmatrix},\begin{bmatrix}0\cr{\bf 1}_{2N}\end{bmatrix}\rangle.$ On the other hand, if $\lambda\in\sigma(\bar{A}_{hr}),\lambda\neq 1,$ then it is easy to see that under hypothesis (37), and by the observability of $(A_{L},\begin{bmatrix}I_{p}&0\end{bmatrix})$, we have $\lambda\not\in\sigma({\mathbb{A}}_{ij})$, and therefore it must be ${\bf v}_{ij}=0$. But this means that $\begin{bmatrix}{\bf v}_{hr}\cr{\bf v}_{hr}\end{bmatrix}$ should belong to the kernel of (35), whereas for $\lambda\neq 1$ the matrix (35) is of full column rank. Analogous reasoning holds if $\lambda\in\sigma(\bar{A}_{ij}),\lambda\neq 1.$ ∎

## VI An illustrative example

We now apply the previous results to the case of a network of $7$ agents, whose communication graph is depicted in Figure 2, assuming that each agent is described as a discrete-time integrator and runs the algorithm (1) with $\kappa=0.25$. All the weights of the graph are equal to $1$. The set of the observed nodes is $\\{1,2,3\\}$, and we apply the strategy described in Section V. The resulting system matrix is $A=\begin{bmatrix}0.75&0&0&0.25&0&0&0\\\ 0&0.75&0&0&0.25&0&0\\\ 0&0&0.75&0&0&0.25&0\\\ 0&0&0&0.75&0&0&0.25\\\ 0.25&0&0&0&0.5&0.25&0\\\ 0&0.25&0&0&0&0.75&0\\\ 0&0&0.25&0&0.25&0&0.5\\\ \end{bmatrix}$ (40) and it is easily verified that the pair $(A,\begin{bmatrix}I_{3}&0\end{bmatrix})$ is observable.
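As a quick numerical check (a NumPy sketch, not part of the original paper), one can build $A$ from (40) and verify that the Kalman observability matrix of the pair $(A,\begin{bmatrix}I_{3}&0\end{bmatrix})$ has full rank $N=7$:

```python
import numpy as np

# A from (40): 7 agents, kappa = 0.25, observed nodes {1, 2, 3}.
A = np.array([
    [0.75, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00],
    [0.00, 0.75, 0.00, 0.00, 0.25, 0.00, 0.00],
    [0.00, 0.00, 0.75, 0.00, 0.00, 0.25, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.00, 0.00, 0.25],
    [0.25, 0.00, 0.00, 0.00, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.00, 0.00, 0.00, 0.75, 0.00],
    [0.00, 0.00, 0.25, 0.00, 0.25, 0.00, 0.50],
])
N, p = 7, 3
C = np.hstack([np.eye(p), np.zeros((p, N - p))])   # [I_3  0]

# Kalman observability matrix O = [C; CA; ...; CA^{N-1}]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(N)])
print(np.linalg.matrix_rank(O))                    # expected: 7
```

Running this snippet should print 7, in agreement with the observability claim above.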
A dead-beat state observer has been derived following three steps: (1) the pair $(A,[I_{3}\ 0])$, with $A$ as in (40), is reduced to multi-output observability canonical form $(A_{o},H_{o})$ by resorting to a suitable transformation matrix $T$ (see e.g. [2]). (2) The matrix $L_{o}$ that makes $A_{o}+L_{o}H_{o}$ nilpotent with minimum possible nilpotency index is trivially obtained by imposing that $A_{o}+L_{o}H_{o}$ is block-diagonal, with diagonal blocks that are (single-output) observability canonical forms with zero coefficients in the last column. (3) The desired observer gain matrix is then $L=T^{-1}L_{o}$. It is worth noticing that there is a transient phase, due to the presence of an estimation error, consisting of $3$ time steps, after which the residual becomes zero (i.e., $\tau_{0}=3$).

Figure 2: A sketch of the graph of the illustrative example of Section VI.

Two simulation results are plotted in Figure 3. The three curves represent, in both cases, the multi-agent system outputs, namely the states of the first three agents. On the other hand, instead of reporting the values of the residual ${\bf r}(t)$, we have chosen to represent with black circles the _detection signal_ $d(t)$, which is unitary if ${\bf r}(t)\neq 0$ and $0$ if ${\bf r}(t)=0$. In both simulations we have disconnected the edge $(6,5)$ in the interval $[10,14]$, and the edge $(5,7)$ in the interval $[20,24]$. It is worth noticing that $A$ and $\bar{A}_{7,5}$ do not have common eigenvalues (apart from $1$), so the original and the faulty networks are discernible, while $A$ and $\bar{A}_{5,6}$ have $\lambda=0.5$ and the corresponding eigenvector in common, so this case fails to satisfy condition (v) of Proposition 3 and, in turn, condition (iv) of Proposition 9 and condition (i) of Proposition 11. In the first simulation, corresponding to the upper plot of Figure 3, it is assumed that ${\bf x}(0)=[10\ -1\ 1\ 8\ 5\ 5\ 12]^{\top}$ and $\hat{\bf x}(0)=0.$ After an initial transient of 3 steps, the estimated state $\hat{\bf x}(3)$ is equal to the real state ${\bf x}(3)$ (while ${\bf x}(t)\neq\hat{\bf x}(t)$ for $t<3$, which is why the detection signal is nonzero during the transient), and the detection signal is zero up to $t=11$, when the disconnection of the edge $(6,5)$ is detected. Then the link is restored, and the disconnection of the edge $(5,7)$ is detected at time $t=22$. The second simulation shows what may happen if the conditions of Proposition 9 and Proposition 11 are not met. It is assumed that ${\bf x}(0)=[-5\ 5\ 5\ -5\ -5\ 5\ -5]^{\top}$ and $\hat{\bf x}(0)=0$. After an initial transient phase, $\hat{\bf x}(3)={\bf x}(3)$; however, the detection signal is zero up to time $t=22$, when the second edge disconnection is detected. This shows that the first link disconnection remains undetected, because of the special structure of the graph topology and the specific value of the system state at the time of the disconnection.

Figure 3: Simulation Results: output evolution and detection signal of system (40) under different initial conditions. In the bottom plot, the first link disconnection cannot be detected from the system evolution.

## VII Conclusions

In this paper we have addressed the problem of detecting and identifying an edge disconnection in a discrete-time consensus network, by assuming that the link failure does not compromise the strong connectedness of the underlying directed communication network.
The cases when the states of all the agents are available and when only a proper subset of them is available are both considered, and sufficient conditions ensuring that the problem is solvable are provided. An example concludes the paper, illustrating both the case when detection from the measurement of the states of 3 of the 7 agents is possible and the case when it is not. It is worth noticing that we have solved the discernibility problem from the first $p$ states by resorting to a full-order dead-beat observer, but due to the structure of the state-to-output matrix the use of a reduced-order dead-beat observer would be straightforward, and it would ensure the same performance in terms of nilpotency index. Future research efforts will aim at finding an algorithm to efficiently identify the disconnected edge when only $p$ of the $N$ states are available, as has been done here in the case when all the agents are measured. Also, the case of noisy measurements and/or modelling errors needs to be addressed. As mentioned in the Introduction, distributed fault detection and identification algorithms have been proposed by assuming that faults are additive. It would be of extreme interest to adapt such algorithms to the specific case when the fault is an edge disconnection, without losing the information about the specific nature and structure of the fault.

## Appendix: A technical lemma

###### Lemma 12. Consider the positive irreducible matrix $A=I_{N}-\kappa{\mathcal{L}}\in{\mathbb{R}}^{N\times N}$ and let $J_{A}$ be its (real) Jordan form. Let $T\in{\mathbb{R}}^{N\times N}$ be the nonsingular transformation matrix such that $T^{-1}AT=J_{A}=\begin{bmatrix}1&0\cr 0&\tilde{J}_{A}\end{bmatrix}=\begin{bmatrix}1&0&\dots&0\cr 0&J_{2}&\dots&0\cr\vdots&&\ddots&\vdots\cr 0&0&\dots&J_{n}\end{bmatrix},$ where $J_{i}$ is a Jordan block corresponding to $\lambda_{i}$ and $\lambda_{i}\neq 1$ for every $i\in[2,n]$. Define $W$ as in (22), namely as $W:=\begin{bmatrix}{\bf 0}_{N-1}&I_{N-1}\end{bmatrix}T^{-1}.$ Then for every pair of distinct nodes $h,i\in{\mathcal{V}}=[1,N],i\neq h,$ $\langle W{\bf e}_{i}\rangle\neq\langle W{\bf e}_{h}\rangle.$

###### Proof. Suppose, by contradiction, that there exist $h,i\in{\mathcal{V}}=[1,N],i\neq h,$ and nonzero $\alpha,\beta\in{\mathbb{R}}$, such that $0=W[\alpha{\bf e}_{i}+\beta{\bf e}_{h}]=\begin{bmatrix}{\bf 0}_{N-1}&I_{N-1}\end{bmatrix}T^{-1}[\alpha{\bf e}_{i}+\beta{\bf e}_{h}].$ Since $T$ is a nonsingular matrix, there exists a vector ${\bf c}\neq 0$ such that $\alpha{\bf e}_{i}+\beta{\bf e}_{h}=T{\bf c}.$ By replacing this expression in the previous identity we obtain $0=\begin{bmatrix}{\bf 0}_{N-1}&I_{N-1}\end{bmatrix}T^{-1}T{\bf c}=\begin{bmatrix}{\bf 0}_{N-1}&I_{N-1}\end{bmatrix}{\bf c}.$ This amounts to saying that ${\bf c}=\gamma{\bf e}_{1}$ for some $\gamma\neq 0$ and $\alpha{\bf e}_{i}+\beta{\bf e}_{h}=T\gamma{\bf e}_{1}=\gamma{\bf 1}_{N}.$ But this is clearly not possible, since the vector on the left-hand side has exactly two nonzero entries, while the one on the right-hand side has all nonzero (and identical) entries. ∎

## References

* [1] P. Antsaklis and J. Baillieul. Special issue on technology of networked control systems. Proc. IEEE, 95, no. 1:5–8, 2007. * [2] P. J. Antsaklis and A. N. Michel. Linear Systems. Birkhäuser, Boston, MA, 2006. * [3] J. Baillieul and P.J. Antsaklis. Control and communication challenges in networked real-time systems. Proc. IEEE, 95(1):9–28, 2007. * [4] G. Battistelli and P. Tesi.
Detecting topology variations in networks of linear dynamical systems. IEEE Trans. Control Network Systems, 5, no. 3:1287–1299, 2018. * [5] A. Berman and R.J. Plemmons. Nonnegative matrices in the mathematical sciences. Academic Press, New York, 1979. * [6] F. Bullo, J. Cortés, and S. Martínez. Distributed Control of Robotic Networks. Princeton University Press, 2009. * [7] D. M. Cardoso, C. Delorme, and P. Rama. Laplacian eigenvectors and eigenvalues and almost equitable partitions. European J. Combinatorics, 28(3):665–673, 2007. * [8] C.-Y. Chong and S.P. Kumar. Sensor networks: evolution, opportunities, and challenges. Proc. IEEE, 91(8):1247–1256, 2003. * [9] J. Costanzo, D. Materassi, and B. Sinopoli. Using Viterbi and Kalman to detect topological changes in dynamic networks. In Proc. 2017 American Control Conference, pages 5410–5415. IEEE, 2017. * [10] C.G. Cullen. Matrices and Linear Transformations. Dover Publications, 2nd edition, 1990. * [11] M.R. Davoodi, K. Khorasani, H.A. Talebi, and H.R. Momeni. Distributed fault detection and isolation filter design for a network of heterogeneous multiagent systems. IEEE Trans. Control Systems Technology, 22(3):1061–1069, 2014. * [12] R. Dhal, J. A. Torres, and S. Roy. Link-failure detection in network synchronization processes. In Proc. 2013 IEEE Global Conference on Signal and Information Processing, pages 779–782, 2013. * [13] R. Dhal, J. Abad Torres, and S. Roy. Detecting link failures in complex network processes using remote monitoring. Physica A: Statistical Mechanics and its Applications, 437:36–54, 2015. * [14] M. Egerstedt, S. Martini, M. Cao, K. Camlibel, and A. Bicchi. Interacting with networks: How does structure relate to controllability in single-leader, consensus networks? IEEE Control Systems Magazine, 32(4):66–73, 2012. * [15] L. Farina and S. Rinaldi. Positive linear systems: theory and applications. Wiley-Interscience, Series on Pure and Applied Mathematics, New York, 2000. * [16] M. Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical J., 23:298–305, 1973. * [17] H. Gharavi and S.P. Kumar. Special issue on sensor networks and applications. Proc. IEEE, 91(8):1151–1153, 2003. * [18] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge Univ. Press, Cambridge (GB), 1985. * [19] N.E. Leonard, D.A. Paley, F. Lekien, R. Sepulchre, D.M. Fratantoni, and R.E. Davis. Collective motion, sensor networks, and ocean sampling. Proc. IEEE, 95(1):48–74, 2007. * [20] Z. Li and J. Chen. Robust consensus of linear feedback protocols over uncertain network graphs. IEEE Trans. Automatic Control, 62(8):4251–4258, 2017. * [21] S. Miah, B. Nguyen, A. Bourque, and D. Spinello. Nonuniform coverage control with stochastic intermittent communication. IEEE Trans. Automatic Control, 60(7):1981–1986, 2014. * [22] R. Olfati-Saber, J.A. Fax, and R.M. Murray. Consensus and cooperation in networked multi-agent systems. Proc. IEEE, 95, no. 1:215–233, 2007. * [23] R. Olfati-Saber and R.M. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Automatic Control, 49, no. 9:1520–1533, 2004. * [24] J. O'Reilly. Observers for Linear Systems. Academic Press, 1983. * [25] P. K. Pandey, B. Adhikari, and S. Chakraborty. A diffusion protocol for detection of link failure and utilization of resources in multi-agent systems. IEEE Trans. Network Science and Engineering, 2019. * [26] F. Pasqualetti, R. Carli, A. Bicchi, and F. Bullo. Distributed estimation and detection under local information. In Proc.
2nd IFAC Workshop on Distributed Estimation and Control in Networked Systems, pages 263–268, Annecy, France, 2010. * [27] F. Pasqualetti, F. Dorfler, and F. Bullo. Attack detection and identification in cyber-physical systems. IEEE Trans. Automatic Control, 58, no. 11:2715–2729, 2013. * [28] F. Pasqualetti, F. Dorfler, and F. Bullo. Control-theoretic methods for cyberphysical security: Geometric principles for optimal cross-layer resilient control systems. IEEE Control Systems Magazine, 35, no. 1:110–127, 2015. * [29] D. Patil, P. Tesi, and S. Trenn. Indiscernible topological variations in DAE networks. Automatica, 101:280–289, 2019. * [30] J. Qin, Q. Ma, Y. Shi, and L. Wang. Recent advances in consensus of multi-agent systems: A brief survey. IEEE Trans. Industr. Electronics, 64(6):4972–4983, 2017. * [31] M. Rahimian, A. Ajorlou, and A. Aghdam. Detectability of multiple link failures in multi-agent systems under the agreement protocol. In Proc. 2012 IEEE Conf. Decision Control, pages 118–123, Maui, HI, USA, 2012. * [32] M. A. Rahimian and V.M. Preciado. Failure detection and isolation in integrator networks. In Proc. 2015 American Control Conference, pages 677–682, 2015. * [33] W. Ren, R. W. Beard, and E. M. Atkins. A survey of consensus problems in multi-agent coordination. In Proc. 2005 American Control Conference, pages 1859–1864, 2005. * [34] W. Ren and R.W. Beard. Distributed Consensus in Multi-vehicle Cooperative Control: Theory And Applications. Springer, 2007. * [35] W. Ren, R.W. Beard, and E.M. Atkins. Information consensus in multivehicle cooperative control. IEEE Control Systems Magazine, 27(2):71–82, 2007. * [36] W. Ren and Y. Cao. Distributed coordination of multi-agent networks: emergent problems, models, and issues. Springer Science & Business Media, 2010. * [37] E.D. Sontag. Mathematical Control Theory. Deterministic Finite Dimensional Systems. Springer-Verlag, New York, 2nd edition, 1998. * [38] A. Teixeira, I. Shames, H. Sandberg, and K.H. Johansson. Distributed fault detection and isolation resilient to network model uncertainties. IEEE Trans. Cybernetics, 44(11):2024–2037, 2014. * [39] J. A. Torres, R. Dhal, and S. Roy. Detecting link failures in complex network processes using remote monitoring. In Proc. 2015 American Control Conference, pages 189–194, 2015. * [40] M.E. Valcher. Consensus in the presence of communication faults. In Proc. 2019 European Control Conf., pages 1062–1067, Napoli, Italy, 2019. * [41] M.E. Valcher and G. Parlangeli. On the effects of communication failures in a multi-agent consensus network. In Proc. 23rd International Conference on System Theory, Control and Computing, pages 709–720, Sinaia, Romania, 2019. * [42] C.W. Wu. Synchronization in complex networks of nonlinear dynamical systems. World Scientific, 2007. * [43] M. Xue. Designing local inputs to identify link failures in a diffusive network: A graph perspective. In Proc. 58th Conference on Decision and Control, pages 5525–5530, Nice, France, 2019. * [44] K. You, Z. Li, and L. Xie. Consensus condition for linear multi-agent systems over randomly switching topologies. Automatica, 49(10):3125–3132, 2013. * [45] F. Zhao and L. Guibas. Wireless sensor networks: An information processing approach. Morgan Kaufmann, 2004.
# Profiling Software Developers with Process Mining and N-Gram Language Models

João Caldeira <EMAIL_ADDRESS>, Fernando Brito e Abreu <EMAIL_ADDRESS>, Jorge Cardoso <EMAIL_ADDRESS>, Ricardo Ribeiro <ricardo.ribeiro@iscte-iul.pt>, Claudia Werner <EMAIL_ADDRESS>

ISTAR, Iscte - Instituto Universitário de Lisboa, Portugal; DCTI, Iscte - Instituto Universitário de Lisboa, Portugal; CISUC, University of Coimbra, Portugal, and Huawei Munich Research Center, Germany; COPPE, Federal University of Rio de Janeiro, Brazil; INESC-ID Lisboa, Portugal

###### Abstract

Context: Profiling developers is challenging since many factors, such as their skills, experience, development environment and behaviors, may influence a detailed analysis and the delivery of coherent interpretations. Objective: We aim at profiling software developers by mining their software development process. To do so, we performed a controlled experiment where, in the realm of a Python programming contest, a group of developers had the same well-defined set of requirements specifications and a well-defined sprint schedule. Events were collected from the PyCharm IDE, and from the Mooshak automatic judge, where subjects checked in their code. Method: We used n-gram language models and text mining to characterize developers' profiles, and process mining algorithms to discover their overall workflows and extract the corresponding metrics for further evaluation. Results: Findings show that we can clearly characterize most developers with a coherent rationale, and distinguish the top performers from the ones with more challenging behaviors. This approach may ultimately lead to the creation of a catalog of software development process smells. Conclusions: The profile of a developer provides a software project manager with a clue for the selection of the appropriate tasks he/she should be assigned. With the increasing usage of low-code and no-code platforms, where code is automatically generated from an upper abstraction layer, mining developers' actions in the development platforms is a promising approach not only to detect behaviors early, but also to assess project complexity and model effort.

###### keywords: Multiplicative lattice-free keywords replaced: Software Development, Process Mining, Developer's Profile, N-Gram Language Models, Software Development Process Smells

Journal: Journal of Systems and Software

## 1 Introduction

### 1.1 Motivation

Software development can be characterized as a socio-technical phenomenon [1]. Understanding the actual dependencies between development tasks and teams' behaviors to fulfill them is a serious challenge for most software project managers concerned with the allocation and coordination of resources [2]. Being able to group developers with similar behaviors, for instance, based on the time they spent on each activity or working on a specific artifact, is a step forward in that understanding. This requires analyzing developers' traces (i.e. executed actions/commands) within the IDE. Traditional process mining techniques come to the rescue of such concerns. However, within the software development context, they usually assume a structured and noise-free input and produce spaghetti-like processes. As such, a lot of variance may mislead the results and the corresponding interpretation [3]. Process variant analysis is a research stream within the process mining domain. In the last decade, several novel approaches to effectively mine process variants have been proposed [4, 5, 6].
The latter evolved to detect the existence of similarities and differences in behaviors within a common business process, which can be considered as "fingerprints" left by process instances [7, 8]. Applying process mining algorithms on large event logs, containing a significant number of cases and events, usually requires the use of powerful computational systems and, even then, may lead to long processing times. Process variant comparison techniques, in particular due to massive manipulation of vectors and matrices, are computationally heavy. Software development event logs, generated from IDE usage, often have events in the thousands, hundreds of different activities, hierarchical states, and many different resources associated with events. Therefore, the aforementioned performance problem is usually noticeable. Natural language techniques can mitigate it by performing initial filtering and aggregation of events, and by finding local regularities. Even if event aggregation is not desired in mining processes, the trade-off between practical aspects and the accuracy of certain algorithms should be carefully evaluated in the software development realm. In this paper, we propose an approach to profile developers using a stack of text mining, to express developers' fingerprints, and process mining, to discover and model their workflows and assist in hypothesis evaluation regarding them. We used events collected from the IDE during development sessions as input for the unsupervised learning techniques and process mining algorithms. Additionally, since the process of coding can be represented as a grammar with a specific semantics [9], we find it useful to assess how similar this grammar is to a natural language, and to find optimal parameters for the text mining algorithms. A development session executed by one developer at his/her IDE can be considered an instance of a process, where the goal is to produce a software product or maintain an existing one. Its workflow of activities depends on many factors, such as the development methodology, program design or individual experience [10]. Furthermore, developers are usually free to produce code without a referential model or guidelines on how to execute the coding tasks and, most often, without any intelligent guidance from traditional development tools. This poses challenges when one wants to detect similar or deviating programming profiles to assess productivity and optimize resource planning. To validate our approach for profiling developers, while controlling for spurious effects, we performed a controlled experiment where, in the realm of a Python programming contest, a group of developers had the same well-defined set of requirements specifications and a well-defined sprint schedule. Events were collected from the PyCharm IDE, and from the Mooshak automatic judge where subjects checked in their code stepwise.

### 1.2 Using students as surrogates for professional software developers

In this study we use students as surrogates for professional software developers. Therefore, it is worth reviewing the discussion in the literature on using students as surrogates for professionals. Almost half a century ago, the practice of using students in research was already widespread, due to the convenience of their availability and usual willingness to participate in experiments. For instance, in consumer behavior (marketing) studies, researchers tested whether students could be used as consumer surrogates, but results were inconclusive [11, 12, 13].
Also since the seventies, as reported in [14], students have been used as surrogates for managers on decision support systems (DSS). The same study reports that undergraduate students were more used than graduate students, which could be a validity hindrance in that case, since graduate students are closer to managers in age, maturity and education. Students have also been extensively used as surrogates in Software Engineering studies. For instance, a study carried out with students on detection methods for software requirements inspections [15] was replicated with similar results using professionals as subjects [16]. Another study on lead-time impact assessment for software development projects did not find significant differences between students and professionals [17]. In [18], the performance in Personal Software Process (PSP) improvement tasks was compared between freshmen students, graduate students, and industry people, and again no significant differences were found between the three groups. Two separate studies in Requirements Engineering provided somehow complementary conclusions. While in [19] definitive conclusions about the suitability of students in projects could not be drawn, in [20] the authors argue that it may be possible to influence students to provide answers that are in line with industrial practice, although it was not clear under which conditions that influence could be exerted in empirical investigations. A systematic literature review on using students as surrogates for professionals can be found in [21]. The author concludes that many factors influence the results of experimental studies, such as the number of subjects, the nature of tasks and previous experience with them, the motivation levels of subjects, the training provided, and the incentives for participation in the experiment. In other words, the appropriateness of students as surrogates for professionals depends on current study conditions. In section 6.3, we argue why this may hold in our study.

### 1.3 Contributions

The main objectives for this work are the following: * • To evaluate if software development sessions can be mined as any natural language; * • To assess if coherent development fingerprints can be discovered from an event log containing developer's IDE interactions and submission of answers to several coding problems; * • To appraise the impact of individual behaviors on the outcome of a programming task given a group of developers. The remainder of this paper is organized as follows: section 2 provides background related to the research area and emphasizes the need for the proposed approach. Subsequent sections outline the related work in section 3, detail the methodology and experiment setup in section 4, and present the results, their corresponding analysis and implications in section 5. Threats to validity are presented in section 6 and the concluding comments and future work in section 7.

## 2 Background

### 2.1 Language models

Natural languages (e.g., English, Portuguese, etc.) possess a rich vocabulary and therefore are complex and powerful. A programming language or a sequence of development actions in plain English, as seen in Figure 1 (a word cloud where the size of each word is proportional to its relative frequency, generated from data collected during the validation experiment of our proposed approach), is an artificial language but is expected to follow the same principles as a natural language.
The rationale is that although a given piece of software is written with an artificial language, it is a natural product of the human mind, just as prose or poetry are in a natural language [9]. As such, it is also amenable to statistical analysis like the ones performed in the area known as "text mining", where natural language processing (NLP) algorithms and analytical methods are used. We argue that development sessions, viewed as a sequence of actions like those in Figure 1 and represented by a well-defined vocabulary, can be regarded from the same perspective. In this paper, we describe a novel method to detect different developers' profiles based on models built from development interactions using n-gram probabilistic language models [22]. Furthermore, we combine these unsupervised learning models, which are a good fit for capturing local regularities in text data, with process mining algorithms, which are known to perform well in the modelling of complex business processes.

Figure 1: Word cloud with examples of frequent activities from interactions in PyCharm and submissions to Mooshak

### 2.2 Topic modeling

Understanding unstructured data is a major challenge in software development, and having a predefined data model is not a common scenario when dealing with such type of data. Moreover, those data are typically text-heavy. As such, topic modeling has become one of the most used methods to mine software repositories [23]. Topic modeling is a method for unsupervised classification of documents, by modeling each document as a mixture of topics and each topic as a mixture of words [24]. Despite some limitations, such as disregarding the order of the documents, it is frequently used to build models from unstructured textual data, as it presents an effective means of data mining in which the analyzed documents may even be textual representations of actions executed in certain contexts [25]. Among the most prevalent methods used to mine software repositories, we find algorithms such as LDA (Latent Dirichlet Allocation) and many of its variations, LSI (Latent Semantic Indexing), LSA (Latent Semantic Analysis), PLSI (Probabilistic Latent Semantic Indexing), and ICA (Independent Component Analysis) [26, 27]. These algorithms are used to cluster documents, identify features, derive source code metrics for bug prediction, assess code evolution, trace links between pairs of artifacts and detect code clones, among other things [28, 23].

### 2.3 Software development process mining

Process modeling is a persistent topic in the research literature concerned with software development practices. The analysis of fingerprints in event logs [6], the discovery of deviating cases using trace clustering [3] and the mining of sequences of developers' interactions [29] are examples of topics covered by researchers to overcome or mitigate recurrent problems. However, often the suggested solutions are complex and difficult to automate in a coherent software development process mining pipeline. These constraints led researchers to highlight that software analytics does not need to be hard and, on the contrary, it can and should be simplified [30, 31, 32]. In Table 1, we present a comparison of typical text mining characteristics and purposes, along with how we view topic modeling applied in software development process mining. Software developers execute a stream of actions/commands when using their IDE. Those commands, seen as a work session, can also be represented textually as a narrative along the timeline.
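To make this representation concrete, the following Python sketch (illustrative only; the session contents and command names are hypothetical, whereas in our setting they would come from the PyCharm event log) renders development sessions as sentences over the IDE command vocabulary and fits a bigram language model of the kind introduced in Section 2.1, with add-one smoothing, which can then score the probability of a new session:

```python
from collections import Counter

# Hypothetical development sessions rendered as "sentences" of IDE commands.
sessions = [
    "Open Edit Run Debug Edit Run Submit".split(),
    "Open Edit Edit Run Edit Run Run Submit".split(),
    "Open Copy Paste Edit Run Submit".split(),
]

unigrams, bigrams = Counter(), Counter()
for s in sessions:
    toks = ["<s>"] + s + ["</s>"]
    unigrams.update(toks[:-1])                 # bigram contexts
    bigrams.update(zip(toks[:-1], toks[1:]))

vocab = {t for s in sessions for t in s} | {"</s>"}

def bigram_prob(prev, tok):
    # add-one (Laplace) smoothed estimate of P(tok | prev)
    return (bigrams[(prev, tok)] + 1) / (unigrams[prev] + len(vocab))

def session_prob(s):
    # chain rule: P(w) = P(t0 | <s>) P(t1 | t0) ... P(</s> | tn)
    toks = ["<s>"] + s + ["</s>"]
    p = 1.0
    for prev, tok in zip(toks[:-1], toks[1:]):
        p *= bigram_prob(prev, tok)
    return p

print(session_prob("Open Edit Run Submit".split()))
```

Sessions whose command streams resemble the training corpus receive higher probabilities, which is one simple way in which such models can expose similar or deviating developer behaviors.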
We expect that stream to contain the semantics required to identify different developer profiles. Therefore, logs containing a sequence of IDE commands/actions can be mined with topic modeling as any other document would be in searching for different topics. In this context, we are searching for different behaviors, such as programming styles or patterns of IDE usage.

Table 1: Traditional Text Mining (TM) vs. Software Development Process Mining (SDPM)

| | TM | SDPM |
|---|---|---|
| _Inputs_ | | |
| Corpus | Documents/Articles | Development Work Sessions |
| Document | Mixture of Topics | Mixture of Behaviours |
| Topic | Frequent Words/Terms | Frequent Actions/Commands |
| _Outputs_ | | |
| Discovers | Distinct Subjects/Topics | Development Patterns |
| Usefulness | Identify Social Trends | Optimize Resource Allocation |
| | Frame Research Interests | Detect Practices Deviations |
| | Sentiment Analysis | Forensic Project Analysis |

#### 2.3.1 Preliminary definitions

To justify the usefulness of collecting IDE events, and provide context to our proposal, we introduce in this section some preliminary definitions required to understand concepts such as development actions, development sessions, development actions repository and development profiles.

Definition 1. Development action * • A development action is an event defined as a tuple $(a,c,t,(p_{1},v_{1}),\ldots,(p_{n},v_{n}))$ where $a$ is the command action or process activity name, $c$ is a development session or case id, $t$ is the timestamp and the set $(p_{1},v_{1}),\ldots,(p_{n},v_{n})$ (where $n\geqslant 0$) contains the event or case properties/attributes and corresponding values, such as developer location, operating system or IDE type. * • A development action is defined by a token $t$ included in the development session vocabulary $V$, formed by the set containing all the possible IDE commands: $V=(t_{0},t_{1},\ldots,t_{n})$ : $\forall t\in V$, $t=\langle$ide_command_or_activity$\rangle$

Definition 2. Development session * • A development session is a trace, defined by a non-empty sequence $\sigma=e_{1},\ldots,e_{n}$ of command actions such that $\forall i,j\in[1..n]$, $e_{i}.c=e_{j}.c$. * • A development session is defined by a sentence formed by a set of tokens from vocabulary $V$, denoted by: $\omega=(t_{0},t_{1},\ldots,t_{n})$ : $\forall t\in\omega$, $t\in V$

Definition 3. Development actions repository * • A repository of actions or event log is a set of development actions mapped to a variable number of development sessions, defined as $L=\sigma_{1},\ldots,\sigma_{n}$. * • We consider an event log to be a set of sequential tokens $t$ from the vocabulary $V$, where tokens can be repeated.

Definition 4. Development profile * • An event log $L$ can be partitioned into a finite set of groups called process variants or, in our case, profiles or fingerprints, $\varsigma_{1},\varsigma_{2},\ldots,\varsigma_{n}$, where $\exists p$ such that $\forall\varsigma_{k}$ and $\forall\sigma_{i},\sigma_{j}\in\varsigma_{k}$, $\sigma_{i}.p=\sigma_{j}.p$. The definition of process variant emphasizes that process executions in the same group must have the same value for a given attribute, and each process execution belongs uniquely to a process variant. In our approach, the same value for a given attribute will be dynamically computed and concatenated into the original dataset. The algorithms to model processes will then be based on this clustering action.

Definition 5. N-gram language models * • A language model is a statistical model that allows computing the probability of a sentence, or predicting the next word in a sentence, for a given language [33].
## 3 Related Work

### 3.1 Natural language models

#### 3.1.1 Language modeling

Natural language models were used to recommend analogical libraries based on a knowledge base mined from the tags of millions of Stack Overflow questions [35]. This approach combined a word embedding technique with domain-specific relational and categorical knowledge mined from Stack Overflow. Evidence showed that accurate recommendation of analogical libraries is not only possible but also desirable. A system that assists developers in API usage with code completion recommendations, using an n-gram probabilistic language model supported by API sentences extracted from source code corpora, is described in [34].

#### 3.1.2 Topic modeling

A survey on the use of topic models when mining software repositories is presented in [26]. The authors found that only a limited number of software engineering tasks were being targeted and that researchers use topic models as black boxes without fully evaluating their fundamental assumptions. Finally, they provided guidelines on how to apply topic models to specific software engineering tasks.

With the goal of predicting future developer behavior in the IDE and making better recommendations to developers, [36] used topic models, specifically applying the Temporal Latent Dirichlet Allocation algorithm to two large interaction datasets for two different IDEs, Microsoft Visual Studio and ABB Robot Studio. The authors concluded that the approach was promising both for predicting future IDE commands and for producing empirically interpretable observations.

An approach to detect duplicate bug reports, using information retrieval and topic modeling, namely LDA, was presented in [24]. It revealed an improvement of up to 20% in accuracy when compared to other state-of-the-art approaches.

A study of software logging using topic models, aimed at understanding the relationship between the topics of a code snippet and the likelihood of it being logged (i.e., containing a logging statement), is described in [37]. The findings highlight topics containing valuable information that can help guide and drive developers’ logging decisions. A similar approach is presented in [38], based on the structure and dynamics of the knowledge network in domain-specific Q&A sites, particularly Stack Overflow.

A large-scale study of security-related questions on Stack Overflow was presented in [39]. Two heuristics were used to extract security-related questions from the dataset based on the posts’ tags. Later, to cluster different security-related questions based on their texts, the authors used LDA tuned with a Genetic Algorithm (GA).

### 3.2 Mining software repositories

An application of mining three software repositories, namely the team wiki (used during requirements engineering), the version control system (development and maintenance), and the issue tracking system (corrective and adaptive maintenance), in the context of an undergraduate Software Engineering course was presented in [40].
Visualizations and metrics provided insights into the practices and procedures followed during various phases of a software development life-cycle, granting a multi-faceted view to the instructor and serving as a feedback tool on the development process and the quality of students’ work. Examples of insights produced by mining software repositories include understanding and assessing: (i) the degree of individual contributions in a team, (ii) the quality of commit messages, (iii) the intensity and consistency of commit activities, (iv) the trend and quality of the bug fixing process, (v) the component and developer entropy, and (vi) process compliance and verification. Experimentation revealed that not only product quality but also process quality varies significantly among student teams, and that mining process aspects can help the instructor give directed and specific feedback.

#### 3.2.1 Mining developers’ behavior

An investigation of how developers spend their time, based on a fine-grained dataset of IDE interaction events, is presented in [41]. Its authors propose an inference model of development activities to precisely measure the time spent editing, navigating and searching for artifacts, interacting with the UI, and performing corollary activities, such as inspection and debugging.

In [42], the authors present an empirical study in which app stores were mined to find out whether developers update third-party libraries in mobile apps and to identify update patterns. The evidence unveiled that mobile developers rarely update the libraries used in their apps and, when they do, they mainly update GUI-related ones.

The measurement of the time developers spend in program comprehension activities, beyond their IDE interactions, is described in [43] in a field study with professionals. Findings showed that, on average, developers spend 58% of their time on program comprehension activities and that they frequently use web browsers and document editors to perform them. Regarding the impact of programming languages, developers’ experience, and project phase on the time spent on program comprehension, the evidence showed that senior developers spend a significantly smaller percentage of their time on program comprehension than junior developers.

The assessment of development behaviors and testing practices in real-world projects is reported in [44]. The authors performed a study involving thousands of developers whose development activities were closely monitored while using four different IDEs. Results demonstrated that half of the developer population does not test programs and rarely runs tests in the IDE. Regarding behaviors and beliefs towards Test-Driven Development (TDD), findings show that TDD is an infrequent practice and that software developers spend only a quarter of their time engineering tests, although they believe they spend half of their time testing.

#### 3.2.2 Mining end-users’ behavior

Guidelines for the analysis of data collected during software operation (i.e., when a software product is used by its end-users) are presented in [45]. The authors adopted techniques for extracting knowledge from software operation data, such as user profiling, clickstream analysis, and classification analysis.

### 3.3 Wrap-up

The aforementioned approaches use a series of n-gram models, topic modeling, and process mining methods, mainly to assist programmers in their most basic daily duties and to discover how end-users operate software products.
Our work uses similar methods; however, it focuses on finding developers’ fingerprints with the aim of understanding and profiling programmers’ behaviors. This approach may provide professors with a way to assess students’ performance in class tasks. Software and project managers, at an enterprise level, may use it to improve task assignment strategies depending on project characteristics, to devise adequate replacements in turnover situations, and to balance the constitution of software teams. As for process quality monitoring and enhancement, it can help in finding the good and bad processes followed by a development team or organization.

## 4 Study Setup

In a controlled experiment whose main objective is analysing programmers’ behavior, there is an obvious main source of variability that should be blocked: the nature of the programming task itself. In other words, the optimal setting is to have several programmers (as many as possible, to achieve statistical significance) performing the same task. Other sources of variability are the programming language, the IDE, the working conditions, and the available schedule. Being able to “recruit” participants in industry for such an experiment, while blocking all the aforementioned factors, is not feasible in a professional context. However, we were able to do so during an academic event dubbed Pythacon (a twisted contraction of Python + Hackathon: https://sites.google.com/iscte-iul.pt/pythacon). In this event, the same well-defined software development tasks were performed individually by many participants. Pythacon’s first phase consisted of taking an in-class Python MOOC [46]. The second phase consisted of a programming contest with six problems of increasing difficulty. The Mooshak automatic judge (available from its home page at http://www.ncc.up.pt/mooshak) was used to assess participants’ performance in their quest to produce solutions in Python for the aforementioned problems [47].

The subjects of this experiment were undergraduate students from three 1st cycle Bologna degrees, namely LEI (Computer Engineering), ETI (Telecommunications and Computer Engineering) and IGE (Computer Science and Business Management), plus LCD (Data Science), at Iscte, a public university in Lisbon, Portugal. LCD students did not attend the first phase because their syllabus already included two courses on Python. As such, they acted as the control group regarding the “treatment” of taking an in-class MOOC. All subjects acted as Python developers while trying to build solutions to the proposed problems, using the PyCharm IDE (https://www.jetbrains.com/pycharm/), in the same premises (a large open space where each participant had an individual table, a portable computer and good natural light) and with an equal sprint duration (4 hours). As such, the aforementioned confounding factors were blocked.

We developed a PyCharm plugin that captures all relevant IDE events, such as navigational, editing and debugging actions. Each Pythacon participant installed it in their IDE right after reading and signing an informed consent. When starting the IDE for the first time after plugin installation, participants were requested to provide their student id number, which was added to the events log. As for the Mooshak automatic judge, which somewhat mimics a continuous integration pipeline with an acceptance test battery, it has built-in login and logging mechanisms that allow identifying each participant and their events (problem submissions and corresponding outcomes).
### 4.1 Development sessions extraction and storage

Interaction events collected with our PyCharm plugin were stored in a JSON file on each subject’s computer. A sample event instance is presented in Listing 1; the field tags are self-explanatory. By the end of Pythacon’s programming contest, all event files were uploaded to a central server. Data were then stored in a MySQL database table, where the username and event timestamp formed a unique composite key used to purge duplicated data. The BPMN model in Figure 2 presents the complete schema for the data collection workflow.

Listing 1: Sample PyCharm Event Instance

```json
{
  "session": "c51973e3-562a-4b65-b6df-49f4c37792e1",
  "timestamp_begin": "2020-09-18T09:00:06.054Z",
  "username": "87788",
  "graduation": "IGE",
  "projectname": "PythaconResolution",
  "filename": "P4.py",
  "extension": "py",
  "categoryName": "NavBarToolbar",
  "commandName": "Run",
  "platform": "JetBrains s.r.o. / PyCharmCore",
  "platform_branch": "PyCharm",
  "platform_version": "2020.2.1",
  "java": "11.0.8+10-b944.31",
  "os": "Mac OS X 10.15.6",
  "os_arch": "x86_64",
  "country": "Portugal",
  "city": "Lisbon",
  ....
  "hash": "0000a3a2cf78485419f15d7913789b16" // to detect event tampering
}
```

Figure 2: Data Collection Workflow

### 4.2 Data analysis

The complete workflow followed in data pre-processing, aggregation and analysis is presented in Figure 3.

Figure 3: Study Computation and Analysis Process

#### 4.2.1 N-gram language models evaluation

Documents containing natural language, software code or development sessions are often repetitive and highly predictable. A good language model should capture the regularities in the corpus. If carefully produced from a representative corpus, it will predict, with high confidence, the contents of a new document drawn from the same population. In other words, the model will not find a new document particularly surprising. In natural language processing (NLP), this idea is captured by a measure called perplexity, or its log-transformed version, cross-entropy [9].

Given a document containing the textual representation of a development session within the IDE, $s=a_{1},\dots,a_{n}$, where terms represent development commands or activities, and a language model $M$, we assume that the probability of the document estimated by the model is $p_{M}(s)$. We can write the cross-entropy measure as:

$H_{M}(s)=-\frac{1}{n}\log p_{M}(a_{1},\dots,a_{n})$ (1)

and, by the chain-rule formulation presented earlier:

$H_{M}(s)=-\frac{1}{n}\sum_{i=1}^{n}\log p_{M}(a_{i}\mid a_{1},\dots,a_{i-1})$ (2)

This measures how “surprised” a model is when looking at an unseen document. A model with low entropy for target documents is expected to be a good model: it assigns higher probabilities (closer to 1, and thus lower absolute log values) to more frequent words, and lower probabilities to rare ones. If a hypothetical optimal model were deployed to predict developers’ actions, it would be possible to guess what the next action will be and, at the same time, characterize developers’ behaviors.

To shed light on how regular development sessions are, we performed a series of experiments with both natural language and development session corpora, first comparing the “naturalness” (using cross-entropy) of IDE actions in development sessions with English texts, and then comparing various session corpora to each other to gain further insight into their similarities and differences.
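As a minimal sketch of Eq. (2), the function below computes the per-token cross-entropy of a session under any conditional probability function `prob`, such as the smoothed n-gram estimator sketched in Section 2.3.1; the padding token and n-gram order are the only assumptions made here.

```python
from math import log2

def cross_entropy(session, prob, n):
    """Per-token cross-entropy H_M(s) of a development session, as in
    Eq. (2). `prob` maps an n-gram tuple to the model's estimate of
    P(a_i | preceding n-1 actions); `n` is the n-gram order."""
    padded = ["<s>"] * (n - 1) + session
    log_p = sum(log2(prob(tuple(padded[i:i + n])))
                for i in range(len(session)))
    return -log_p / len(session)

def perplexity(session, prob, n):
    # Perplexity is the exponentiated cross-entropy: the lower it is,
    # the less "surprised" the model is by the session.
    return 2 ** cross_entropy(session, prob, n)
```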
Our natural language studies were based on an R package with widely used corpora from Jane Austen’s novels (https://cran.r-project.org/web/packages/janeaustenr/index.html). To compute the models’ perplexity and obtain the corresponding cross-entropy, we used the SRILM toolkit (http://www.speech.sri.com/projects/srilm). All models were evaluated using a 5-fold cross-validation strategy: the corpus was randomly divided into 5 parts and, in each fold, 80% was used as the training set and 20% as the test set, the process being repeated 5 times.

#### 4.2.2 Topic models evaluation

To determine the optimal number of topics to model developers’ sessions, we used the R package ldatuning (https://cran.r-project.org/web/packages/ldatuning/ldatuning.pdf), which applies an empirical approach rather than intuition. Metrics such as CaoJuan2009 [48] and Arun2010 [49] are to be minimized (tend to 0), whilst metrics like Deveaud2014 [50] and Griffiths2004 [51] are expected to be maximized (tend to 1). The smaller the distances to these objective values, the better the model; the optimal number of topics is found at that particular point.

#### 4.2.3 Process models evaluation

Process mining is now a mature discipline, with validated techniques producing accurate outcomes in several business domains [52]. Discovery is the ability to construct a process model by capturing the behavior of a process based on an event log [53]. Following model discovery, conformance checking stands for the confrontation of a process model with the “reality” represented by the events logged during the actual execution of the corresponding deployed process. Conformance checking can be used to detect deviations from prescribed processes, determine differences and/or similarities between process variants, or verify the accuracy of documented processes [53]. It can also be used to calculate the efficiency or measure the quality of a process model. Quality is normally assessed considering four metrics:

* • Fitness. Represents how much behavior in a log is correctly captured (or can be reproduced) by a discovered model [54].
* • Precision. Refers to how much more behavior is captured in the model than what was observed in the log. It deals with avoiding overly underfitted models [55].
* • Generalization. Focuses on avoiding overly precise models, based on the assumption that logs are by nature incomplete, meaning that, to a certain extent, a model should be able to reproduce behavior not yet seen in the log [56].
* • Simplicity. Alludes to the rule that the simplest model that can describe the behavior found in a log is indeed the best model. Model complexity, the opposite of simplicity, depends on the number of nodes and arcs in the underlying graph [57].

To calculate these metrics, we used the Process Mining library for Python (PM4Py, https://pm4py.fit.fraunhofer.de/documentation#discovery), as illustrated in the sketch below.
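The following is a minimal sketch of how these four metrics can be obtained, assuming a pm4py 2.x API and a hypothetical XES export of one participant's sessions. PM4Py computes the quality metrics over a Petri net, so a net is discovered first (the inductive miner is used here for illustration, although a Directly-Follows Graph can also be discovered).

```python
import pm4py
from pm4py.algo.evaluation.generalization import algorithm as generalization
from pm4py.algo.evaluation.simplicity import algorithm as simplicity

# Hypothetical export of one participant's development sessions;
# depending on the pm4py version, a conversion to an event log
# object may be needed after reading.
log = pm4py.read_xes("participant_sessions.xes")

# Quality metrics are computed over a Petri net, so discover one.
net, im, fm = pm4py.discover_petri_net_inductive(log)

fitness = pm4py.fitness_token_based_replay(log, net, im, fm)
precision = pm4py.precision_token_based_replay(log, net, im, fm)
gen = generalization.apply(log, net, im, fm)
simp = simplicity.apply(net)  # inverse arc degree of the net

print(fitness["log_fitness"], precision, gen, simp)
```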
### 4.3 Research questions

The research questions for this work are the following:

* • RQ1: Do n-gram language models capture local regularities in software development sessions? Methods used: computation of n-gram language model perplexity/cross-entropy using SRILM, and LDA with n-gram windows.
* • RQ2: Can we coherently characterize development sessions in terms of fingerprints? Methods used: topic modeling using the LDA algorithm with n-gram window tuning.
* • RQ3: Are there any significant variations in session simplicity and interaction magnitude between distinct participants? Methods used: process model discovery using the Directly-Follows Graph mining algorithm, and hypothesis testing.

## 5 Study Results

In this section, we present the results of our experiment with regard to its research questions.

Table 2: Participants Statistics

| | LEI | ETI | IGE | LCD | Total |
|---|---|---|---|---|---|
| Participants | 12 | 9 | 7 | 9 | 37 |
| Attended MOOC | Yes | Yes | Yes | No | 28 |

### 5.1 RQ1. Do n-gram language models capture local regularities in software development sessions?

#### 5.1.1 Local syntactic structure

To answer this question, we estimated n-gram models for a plain English corpus and for the development sessions, both at the level of IDE commands and of their categories. From Figure 4, we observe that, although English has a higher level of cross-entropy across all n-gram models, it declines rapidly, saturating around trigrams or 4-grams. The same happens with our development session models, which generally have lower cross-entropy for unigram models and also saturate around trigram models. This indicates, as expected, that the repetitive nature of development sessions can also be captured by language models.

Figure 4: Plain English vs. Python Development Sessions Cross-Entropies using n-gram models

We find that a typical development session is far more regular than English, with entropies starting at 4.2 bits and declining to 2.7 bits for IDE commands, and starting at 2.2 bits and saturating around 0.7 bits for command categories. Our findings may have implications for the way we manage developers’ activities. They provide further and more detailed evidence for what was already suggested by [36]: the possibility of designing and building even more optimized recommendation systems to help and guide developers in the activities they are executing or should execute next. Moreover, they shed light on the optimal n-gram order to use, thus avoiding wasted computing resources, and at the same time they provide further evidence for the usefulness of text mining techniques in detecting and monitoring developers’ behaviors.

#### 5.1.2 Semantics

In the context of IDE usage, each development session may have its own semantics. Whilst we used n-gram language models to capture the local syntactic structure of a language, we used LDA to assess the semantics of the development sessions. Figure 5 shows the cross-entropy of the semantics analysis for the development sessions. It consists of finding the entropy of n-gram models, each having $k$ topics, where $k$ varies from 2 to 10, and where each combination of n-grams and $k$ topics was calculated with a 5-fold cross-validation strategy.

Figure 5: Development sessions modeling using LDA with $k$ topics and n-grams

As one can easily observe, when using LDA to assess the number of topics, entropy grows with the order of the n-grams: the larger the n-gram window, the less the model is able to predict future cases, because perplexity is higher. Regarding the number of topics in each n-gram model, we can confirm the expected behavior: as the number of topics increases, independently of the n-gram model, the entropy tends to decline. Concerning the interpretation of the n-gram results, they show that, given the variability of the IDE actions performed by developers, increasing the n-gram order used to characterize a session decreases LDA’s ability to find similar sessions. As for the number of topics, the more topics the model has, the better it can detect similar sessions.
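A minimal sketch of the kind of analysis behind Figure 5, assuming the gensim library: sessions are re-tokenized into n-gram "words" before fitting LDA with $k$ topics. The session tokens are illustrative, and the 5-fold cross-validation used in the study is omitted for brevity.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy development sessions as command streams (illustrative names).
sessions = [
    ["Edit", "Run", "Edit", "Run", "Submit"],
    ["Edit", "Debug", "Debug", "Edit", "Run"],
    ["Edit", "Edit", "Run", "Submit", "Submit"],
]

def to_ngram_tokens(session, n):
    """Re-tokenize a session so each 'word' is an n-gram of commands."""
    return ["_".join(session[i:i + n]) for i in range(len(session) - n + 1)]

n = 2  # bigram window
docs = [to_ngram_tokens(s, n) for s in sessions]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

for k in range(2, 11):  # k topics, as in Figure 5
    lda = LdaModel(corpus, num_topics=k, id2word=dictionary,
                   random_state=0, passes=10)
    # log_perplexity returns a per-word likelihood bound; entropy and
    # perplexity are derived from it (lower perplexity = less surprise).
    print(k, lda.log_perplexity(corpus))
```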
Based on Figure 5, we argue that, when using LDA to detect similarities within development sessions, the use of more than trigram or 4-gram models should be evaluated carefully. On the one hand, we know that the higher the n-gram order, the higher the computational resources needed. On the other hand, we have evidence that 5-gram models have an entropy of around 5 bits, which is in itself a high value.

### 5.2 RQ2. Can we coherently characterize development sessions in terms of fingerprints?

Figure 1 in Section 2.1 provided a small sample of the activities aggregating the commands issued by developers. Those were defined according to the method used by [41], with extra activities added to reflect the results of the submission events. Regarding IDE interactions, the commands were recoded into activities such as: Editing, Navigating, Debugging, Refactoring, Executing and Spurious. As for the submission actions, we used their native identifiers: Accepted_Answer, Wrong_Answer, Compile_Time_Error, Invalid_Submission, Runtime_Error and Time_Limit_Exceeded. From these, we computed the optimal number of patterns (topics) by assessing the probabilistic coherence of multiple topics, using the metrics described in Section 4.2.2 and unigram, bigram and trigram models only.

Figure 6: Assessing the optimal number of distinct patterns to search

#### 5.2.1 Optimal number of patterns

To decide on the optimal number of patterns, we took the highest number of topics at which any of the metrics is close to its objective value, whether minimized or maximized, and therefore we picked 37. This number equals the size of the population, which in fact confirms that there is a great deal of variance between sessions. When applying the LDA algorithm with k=37, based on the average probabilities of an activity belonging to a session and of a participant belonging to a specific session pattern, LDA placed the participants in only 19 distinct patterns. Figure 6 shows the evaluation of the optimal number of topics.

#### 5.2.2 High performers

Figure 7 shows the topics or, in our case, the referred fingerprints, identified to characterize what we call the high performers (eight participants were in this condition), that is, the ones with good process smells. In this group we have those who ranked above quartile 3, meaning that they answered at least four exercises correctly. From the six fingerprints found for those participants, we can observe the following:

* • Fingerprint 19. Cautious coder. Aggressive executor. Contains a development profile centered on frequent editing actions, mixed with constant execution of the code to validate the result before submitting it to Mooshak. This pattern was the fingerprint of participant D.
* • Fingerprint 20. Cautious coder. Reveals a similar pattern, however with a lower prevalence of program execution and therefore fewer testing actions. Exercise submission actions are very rare. With this fingerprint we find 2 participants, G and H.
* • Fingerprint 24. Cautious coder. Test skipper. Unsurprisingly, editing is also the most frequent action in this fingerprint. However, the next most common action is not program execution but submission for validation. This pattern was the fingerprint of participant C.
* • Fingerprint 26. Insecure. Testers. Participants in this group explicitly followed a pattern of constant program execution, followed by editing activities.
They submitted their answers infrequently, meaning that they probably tested their work well before any submission. This pattern was the fingerprint of participant F.

* • Fingerprint 30. Insecure. Debuggers. This pattern reveals participants more focused on debugging activities, followed by a mix of editing and navigational actions. In a certain sense, it looks as if they replaced their program execution tests with fine-grained debugging practices. This pattern was the fingerprint of participant E.
* • Fingerprint 35. Balanced coders. Confident. Provides evidence for a pattern of high-frequency editing, followed by a balanced persistence of program execution practices and navigational activities. There were, however, no frequent activities related to the submission of code to answer the exercises. This suggests these participants only submitted their answers after a careful review of their code and without the need for deeper debugging tasks. With this fingerprint we find the top 2 participants, A and B.

Figure 7: Development fingerprints for the top (8) performers

#### 5.2.3 Low performers

Figure 8 represents the characteristics of those who had more difficulties in executing the tasks and whom we may consider as having bad process smells. It plots the unique fingerprints of the last eight participants, the ones with zero or just one correct answer.

* • Fingerprint 7. General coding limitations. Reveals a practice focused almost exclusively on editing actions, with a very small prevalence (near zero) of navigational, program execution or exercise submission activities. This pattern was the fingerprint of participant V.
* • Fingerprint 18. General coding limitations. Shows a profile similar to fingerprint 7 regarding editing practices. However, program executions and answer submissions appear more often across the complete work sessions, yet with a low frequency (near zero) compared with editing. This pattern was the fingerprint of participant U.
* • Fingerprint 25. Limited Python/algorithmic skills. It characterizes a practice where editing is also the prevalent action, frequently combined with debugging and program execution activities. Answer submission is, however, infrequent, as it does not appear among the most common actions. This pattern was the fingerprint of 5 participants: S, T, W, X and Y. None of them attended the MOOC training sessions.
* • Fingerprint 33. Limited Python/algorithmic skills. This practice is characterized, as usual, by frequent editing actions, followed by a decreasing and balanced mix of editing, navigational and refactoring actions. Submission actions, however, are absent. This pattern was the fingerprint of participant Z.

Figure 8: Development fingerprints for the bottom (8) performers

Figure 9: Development fingerprints characterizing all participants

It is striking to observe that the fingerprints characterizing the high performers are significantly different, both in the top activities and in their probabilities, from the ones describing the low performers. Figure 9 presents the distinct fingerprints detected to characterize all participants. Based on these findings, one may argue that the variation in the participants’ scores was due only to the quality of the code they produced and, moreover, that the variation in the fingerprints was due to their own programming skills.
Additionally, one may suggest that the top performers are explained by the knowledge acquired in the course they belong to, by their MOOC training attendance, or by the magnitude of their coding interactions during the contest, rather than by their coding behaviors. That is a possibility we cannot reject immediately. However, we may assess this hypothesis by mining, from a different perspective, the overall process for each participant, for each course (as a group of participants with the same background), according to MOOC training participation and, finally, according to performance. If the reason for the higher performance is related to the magnitude of their interactions or the quality of their code, we expect to see no significant variance in process simplicity amongst different participants. On the contrary, if variation in the process exists between groups, that may be an indication that the quality of the outcome is indeed related to the development workflow.

Following the above rationale, we mined the corresponding processes using a mining algorithm appropriate for processes with thousands of events, where fuzzy or spaghetti-like process behavior is expected to exist. We used the Directly-Follows Graph algorithm from the Process Mining library for Python mentioned earlier, and assessed the models produced using the quality metrics described in Section 4.2.3. Table 3 summarizes the fingerprint results for the referred participants, along with the metrics used to evaluate the quality of the process models discovered for each of them. The hypothesis we later tested was as follows: are there significant differences in process complexity or development interactions between the different graduation courses, or between the top, the bottom and the rest of the participants?
Table 3: Process Models Evaluation

| | Course | Fingerprint | Interactions | Fitness | Precision | Generalization | Simplicity | Average | Duration |
|---|---|---|---|---|---|---|---|---|---|
| By Course | | | | | | | | | |
| LEI | – | – | 33011 | 0.017 | 1 | 0.142 | 0.432 | 0.398 | 00:07:09 |
| ETI | – | – | 26557 | 0.818 | 1 | 0.153 | 0.439 | 0.602 | 00:05:01 |
| IGE | – | – | 16635 | 0.199 | 1 | 0.143 | 0.438 | 0.445 | 00:02:19 |
| LCD | – | – | 30057 | 0.198 | 1 | 0.126 | 0.426 | 0.438 | 00:06:47 |
| MOOC Training | | | | | | | | | |
| MOOC | – | – | 76203 | 0.039 | 1 | 0.124 | 0.411 | 0.394 | 00:27:27 |
| NO_MOOC | – | – | 30057 | 0.198 | 1 | 0.126 | 0.426 | 0.438 | 00:06:18 |
| Performance Type | | | | | | | | | |
| High (Top 8) | – | – | 23351 | 0.009 | 1 | 0.140 | 0.437 | 0.396 | 00:04:07 |
| Low (Bottom 8) | – | – | 25869 | 0.957 | 1 | 0.153 | 0.447 | 0.639 | 00:04:10 |
| Middle | – | – | 57040 | 0.037 | 1 | 0.121 | 0.410 | 0.392 | 00:19:58 |
| High Performers | | | | | | | | | |
| A | ETI | 35 | 4447 | 0.089 | 1 | 0.158 | 0.489 | 0.434 | 00:00:32 |
| B | IGE | 35 | 3554 | 0.123 | 1 | 0.181 | 0.538 | 0.460 | 00:00:21 |
| C | LEI | 24 | 1515 | 0.908 | 1 | 0.192 | 0.507 | 0.652 | 00:00:05 |
| D | LEI | 19 | 2194 | 0.118 | 1 | 0.202 | 0.524 | 0.461 | 00:00:09 |
| E | IGE | 30 | 4809 | 0.353 | 1 | 0.162 | 0.472 | 0.497 | 00:00:35 |
| F | IGE | 26 | 2495 | 0.228 | 1 | 0.173 | 0.543 | 0.486 | 00:00:12 |
| G | LEI | 20 | 2481 | 0.978 | 1 | 0.181 | 0.529 | 0.672 | 00:00:10 |
| H | LEI | 20 | 1856 | 0.170 | 1 | 0.174 | 0.506 | 0.462 | 00:00:07 |
| Low Performers | | | | | | | | | |
| S* | LCD | 25 | 3520 | 0.042 | 1 | 0.179 | 0.514 | 0.434 | 00:00:22 |
| T* | LCD | 25 | 4615 | 0.204 | 1 | 0.186 | 0.524 | 0.478 | 00:00:35 |
| U | LEI | 18 | 2200 | 0.021 | 1 | 0.173 | 0.521 | 0.429 | 00:00:10 |
| V | IGE | 7 | 1064 | 0.987 | 1 | 0.189 | 0.617 | 0.698 | 00:00:03 |
| W* | LCD | 25 | 2599 | 0.128 | 1 | 0.188 | 0.544 | 0.465 | 00:00:12 |
| X* | LCD | 25 | 4393 | 0.336 | 1 | 0.239 | 0.527 | 0.526 | 00:00:32 |
| Y* | LCD | 25 | 1964 | 0.145 | 1 | 0.203 | 0.585 | 0.483 | 00:00:07 |
| Z | ETI | 33 | 2781 | 0.831 | 1 | 0.251 | 0.565 | 0.662 | 00:00:13 |

Interactions: actions within the IDE. Duration: time to build/compute the process model. *: participant did not attend the MOOC.

### 5.3 RQ3. Are there any significant variations in session simplicity and interaction magnitude between distinct participants?

Simplicity is one of the dimensions used to analyze a process model; to calculate it, PM4Py takes into account only a Petri net model. The criterion adopted for calculating simplicity is the inverse arc degree, as described in [55]. Since we mined individual processes, they represent the behavioral simplicity of each participant in the programming exercise. Interaction magnitude refers to the number of command actions executed in the IDE plus the submissions of answers to the Mooshak platform. In other words, interactions are represented by the events each individual generated during the programming exercise. The objective of this test is to assess whether performance is related to session simplicity or interaction magnitude across different sets of participants. For this purpose, we used analysis of variance (ANOVA).

ANOVA test. Tests whether there are statistically significant differences between groups of participants, that is, it helps decide whether to accept or reject the null hypothesis. A one-way ANOVA compares the means of independent (unrelated) groups using the F-distribution. The null hypothesis for the test is that the group means are equal. Therefore, a significant result means that the means are unequal. ANOVA can tell that at least two groups differ from each other; however, it will not tell which groups differ, nor by what magnitude. If the test returns a significant F-statistic, one may need to run a post hoc test (e.g., Tukey HSD) to learn exactly which groups had a difference in means.

Tukey HSD (“honestly significant difference” or “honest significant difference”). A statistical tool used to determine whether the relationship between two sets of data is statistically significant, that is, whether there is a strong chance that an observed numerical change in one value is causally related to an observed change in another value. In other words, the Tukey test is a way to test an experimental hypothesis.
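A minimal sketch of this testing pipeline, using SciPy and statsmodels; the simplicity values and group labels below are illustrative placeholders, not the study's data.

```python
import pandas as pd
from scipy.stats import f_oneway, shapiro
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-participant data: simplicity of each discovered
# process model and the performance group of its participant.
df = pd.DataFrame({
    "simplicity": [0.49, 0.54, 0.51, 0.51, 0.52, 0.57, 0.41, 0.43, 0.42],
    "group": ["Top5", "Top5", "Top5",
              "Bottom5", "Bottom5", "Bottom5",
              "Others", "Others", "Others"],
})

# Normality check (Shapiro-Wilk), as discussed in the next subsection.
print(shapiro(df["simplicity"]))

# One-way ANOVA across the performance groups (F-distribution).
samples = [g["simplicity"].values for _, g in df.groupby("group")]
print(f_oneway(*samples))

# Post hoc Tukey HSD to learn which pairs of groups differ in means.
print(pairwise_tukeyhsd(df["simplicity"], df["group"], alpha=0.05))
```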
#### 5.3.1 Variables Assumptions

The use of ANOVA carries several assumptions: i) the dependent variable should be measured on a continuous or absolute scale; ii) the independent variables should define at least two categorical treatments, corresponding to the groups to which the participants belong; iii) there should be no significant outliers in the groups, since they can have a negative effect on ANOVA; iv) the dependent variables should be as normally distributed as possible. With all other conditions satisfied, we assessed normality.

#### 5.3.2 Normality Tests

To test normality, two well-known tests can be used, namely the Kolmogorov-Smirnov and the Shapiro-Wilk tests. We considered only the Shapiro-Wilk test, since it is more appropriate for small sample sizes ($<$ 50 samples). The results are presented in Table 4; from them, we cannot reject the null hypothesis and therefore accept that both Simplicity and Interactions are normally distributed, justifying the use of ANOVA.

Table 4: Normality Tests

| Factor | Shapiro-Wilk Statistic (W) | Sig.* |
|---|---|---|
| Simplicity | 0.94802 | 0.08334 |
| Interactions | 0.95760 | 0.16940 |

*Statistically significant if Sig. $<$ 0.05

#### 5.3.3 Findings

From Table 5, we can confirm the significant variance between the participants with less quality in their code (Bottom5) and the rest (Top5 and Others); this difference is largest between the top five and bottom five performers. Figures 10 and 11 show the process models discovered for both, respectively. These results provide evidence that the differences in proficiency between certain participants are not related only to their coding skills. The complexity of the process followed by each developer may also influence the outcome or, at least, may be used as a valid indicator for comparing quality between developers. However, when these groups are configured to contain the top and bottom eight participants in terms of score, that significance is no longer visible at the same confidence level ($\alpha<$ 0.05). This reinforces the idea that, if significant differences exist, they most likely lie in the individual behaviors. As for the variance between different courses, we found no significant differences, either in development session simplicity or in the magnitude of interactions.

As mentioned earlier, we have assembled a method to capture local regularities and the overall structure of development processes, the so-called fingerprints.
Based on the data we obtained, namely the probabilities of activities in the development sessions and the metrics of the discovered processes, we may classify those fingerprints as good or bad process smells and start creating a catalog of software development process smells. Later on, models can be built to automatically evaluate whether a coding session follows a good or a bad practice and to suggest guidance actions to developers.

Figure 10: Process Model characterizing Top5 Participants

Figure 11: Process Model characterizing Bottom5 Participants

Table 5: One-Way ANOVA Results

Analysis of Variance Test: Top5 / Bottom5 / Others

| | Factor | Df. | Sum Sq. | Mean Sq. | F-value | p-value |
|---|---|---|---|---|---|---|
| Simplicity | Performance | 2 | 0.01150 | 0.005750 | 5.984 | 0.00594* |
| | Residuals | 34 | 0.03267 | 0.000961 | | |
| Interactions | Performance | 2 | 1431864 | 715932 | 0.555 | 0.579 |
| | Residuals | 34 | 43821404 | 1288865 | | |

Post Hoc Test (Tukey HSD) on Simplicity

| Treatments | Diff | Lower | Upper | p-adj |
|---|---|---|---|---|
| Others-Top5 | 0.01418519 | -0.02279834 | 0.05116871 | 0.6192293 |
| Bottom5-Top5 | 0.06160000 | 0.01355700 | 0.10964300 | 0.0094695* |
| Bottom5-Others | 0.04741481 | 0.01043129 | 0.08439834 | 0.0094774* |

Analysis of Variance Test: $>$ Q3 (Top 8) / Q1 (Bottom 15) / Others

| | Factor | Df. | Sum Sq. | Mean Sq. | F-value | p-value |
|---|---|---|---|---|---|---|
| Simplicity | Performance | 2 | 0.00335 | 0.001676 | 1.396 | 0.262 |
| | Residuals | 34 | 0.04082 | 0.001201 | | |
| Interactions | Course | 2 | 165352 | 82676 | 0.062 | 0.94 |
| | Residuals | 34 | 45087916 | 1326115 | | |

Analysis of Variance Test: Top8 / Bottom8 / Others

| | Factor | Df. | Sum Sq. | Mean Sq. | F-value | p-value |
|---|---|---|---|---|---|---|
| Simplicity | Performance | 2 | 0.00656 | 0.003279 | 2.963 | 0.0651 |
| | Residuals | 34 | 0.03762 | 0.001106 | | |
| Interactions | Performance | 2 | 34612 | 17306 | 0.013 | 0.987 |
| | Residuals | 34 | 45218656 | 1329960 | | |

Analysis of Variance Test: LEI / ETI / LCD / IGE

| | Factor | Df. | Sum Sq. | Mean Sq. | F-value | p-value |
|---|---|---|---|---|---|---|
| Simplicity | Course | 3 | 0.00371 | 0.001237 | 1.009 | 0.401 |
| | Residuals | 33 | 0.04046 | 0.001226 | | |
| Interactions | Course | 3 | 3919333 | 1306444 | 1.043 | 0.386 |
| | Residuals | 33 | 41333934 | 1252543 | | |

*Statistically significant if p-value $<$ 0.05. Df.: degrees of freedom; Sum Sq.: sum of squares; Mean Sq.: mean square.

## 6 Threats to Validity

The following types of validity issues were considered when interpreting the results of this article.

### 6.1 Construct Validity

Construct validity refers to the degree to which inferences can legitimately be made from the operationalizations in a study to the theoretical constructs on which those operationalizations were based. To operationalize the assessment of language models, we used metrics such as perplexity and cross-entropy, plus CaoJuan2009, Arun2010, Deveaud2014 and Griffiths2004 to evaluate, from an empirical perspective, the optimal number of topics, and we validated their values from multiple perspectives. Other metrics could have been used for the same purpose, such as topic coherence, which might lead to recommending a different optimal number of topics.

Since our sample was not very large, we had to use it for both training and test purposes. To strengthen significance, models were trained using 5-fold cross-validation. We are aware that process model metrics such as Precision and Generalization are far from being usable in a more generic process mining context. However, in this study we focused only on process simplicity and, for that purpose, the mining and tests are valid, since we used the same algorithm for all participants.

Each participant used their student number to activate the PyCharm events collector plugin. This approach served as an identification method.
Events collected and stored in JSON files on developers’ devices could have been manually changed. We mitigated this data-tampering threat by applying a hash function to each event at the moment of its creation. As such, each event contains not only information about the IDE activities but also a hash code, introduced as a new property in the event, for later comparison with the original event data.

### 6.2 Internal Validity

Internal validity refers to the degree of confidence that the causal relationship being tested is trustworthy and not influenced by other factors or variables. One typical threat to internal validity relates to how subjects are selected. In our case, the population from which our sample was taken corresponds to all undergraduate students in computer science areas at our university who had attended at least two programming courses. That population was invited to participate by email. The sampling was the result of free will: students who spontaneously decided to participate enrolled online. As such, we do not consider this to be a significant validity threat.

Another recurrent internal validity threat is the existence of spurious factors affecting the outcome of the experiment. In mitigation, the programming contest in our study allowed us to block possible confounding factors, since they were constant for all subjects: the programming language (Python), the IDE (PyCharm), problem complexity (same requirements specification), sprint schedule (4 hours), environment conditions (a large shared open space with individual tables), and external interference (no contacts were allowed). Once again, we believe this threat is not significant either.

### 6.3 External Validity

External validity refers to the extent to which results from a study can be applied (generalized) to other situations, groups or events. To fully claim that undergraduate students are surrogates for professional programmers, a representative sample of both groups should be assigned the same requirements specification for a Python program, to measure the difference in their outcomes. We are not aware of such a study having been published. Nevertheless, there is a likelihood that our students are at least good surrogates for novice professional Python developers, because:

1. (i) Python is easy to learn, based on our experience and corroborated by [58], so the proficiency level of a professional Python programmer seems to be achievable quickly;
2. (ii) our students had successfully attended, on average, two Python courses;
3. (iii) a questionnaire filled in during enrollment showed that participating students, albeit having gone through similar academic paths, had different maturity and skills, as we would expect among professional programmers; that difference will most probably not fade within the one or two years it will take for the vast majority of these students to become professionals.

### 6.4 Conclusion Validity

Conclusion validity describes our ability to draw statistically correct conclusions based on the measurements. A common threat here is the sample size but, in our case, we were able to recruit a sufficiently large number of subjects to grant statistical significance. We carefully evaluated the model perplexities computed to answer RQ2 and the assumption tests justifying the statistical tests used in answering RQ3; however, we also have to accept that our sample is not of large proportions.
We performed an experiment using data from 37 software developers executing well-defined, identical programming tasks. Since this is a moderate population size for this type of analysis, we agree it may be a threat to generalizing conclusions or making bold assertions. Nevertheless, to the best of our knowledge, this is the first study involving development sessions and the use of language models, text mining and process mining to detect developers’ fingerprints during programming tasks. As such, researchers can start from our initial findings and try to falsify our current results and corresponding conclusions.

## 7 Conclusion

### 7.1 Main conclusions

In this work, we tried to understand whether fingerprints of development sessions could be extracted from IDE and automatic judge platform interactions. Furthermore, we assessed whether those sessions could be mined with text mining methods. We mined the PyCharm and Mooshak events of a group of developers during a Python programming contest aimed at solving six different exercises. Our research on development interactions shows that they can be mined like a natural language, using text mining methods, with trigrams or 4-grams being the optimal order for such a task. Coherent development fingerprints were discovered and evaluated using process mining methods and the corresponding quality metrics. We confirm a significant difference in process simplicity between the top performers and the ones with unsatisfactory outcomes on the programming exercises.

The results provide evidence that achieving software of good quality requires more than developers with the right skills and consistent knowledge of the languages and tools used in their daily tasks. It is also desirable that developers follow consistent practices during development sessions; otherwise, their behaviors may impact the final outcome. Our approach can be particularly relevant in cases where educators want to assess development profiles within a group of students, before and after classes are given. It can also be a valid approach to measure and monitor productivity within and between software teams. As we showed, by analyzing development session fingerprints and complexity, inefficient developers can easily be detected.

Latest-generation IDEs provide a plethora of functionalities, such as code completion, automated packaging and optimized continuous integration features, to assist programmers in their daily activities. However, these IDEs do not guide developers in their coding practices. The fingerprints we detected, classified as either good or bad practices (we have already called them software development process smells), may be a trigger for IDE vendors to evaluate the possibility of including additional intelligence in their tools, such as monitoring tasks/workflows, suggesting program runs, identifying testing slots, and recommending appropriate debugging and refactoring actions along a development session.

### 7.2 Future work

#### 7.2.1 Automation

We are still scratching the surface of mining developers’ activities. Existing mining tools are not yet ready to automate the complete flow of collecting and pre-processing data, discovering process behaviors, computing metrics, and exporting results.
Further work is required to set up a pipeline capable of providing just-in-time feedback, both to software developers, giving them self-awareness of their performance and behavior, and to software project managers, since the profiles of team members allow a more informed resource allocation.

#### 7.2.2 Low and No Code paradigms

Novel software development paradigms, such as low code and no code, shift the focus from textual programming to the visual artifacts and components from which modern applications are built. Our work fits well in cases where textual programming is absent, giving rise to the so-called citizen developers and, most likely, to development processes and coding behaviors distinct from conventional programming practices. We plan to perform additional experiments using low or no code platforms and to assess developers’ process fingerprints and overall behaviors.

## Acknowledgement

This work was partially funded by the Portuguese Foundation for Science and Technology, under ISTAR-Iscte projects UIDB/04466/2020 and UIDP/04466/2020, and INESC-ID project UIDB/50021/2020.

## References

* [1] A. Fuggetta, E. Di Nitto, Software process, in: Proceedings of the on Future of Software Engineering - FOSE 2014, ACM Press, New York, NY, USA, 2014, pp. 1–12. doi:10.1145/2593882.2593883.
* [2] J. Herbsleb, Building a Socio-Technical Theory of Coordination: Why and How (Outstanding Research Award), in: Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2016, Association for Computing Machinery, New York, NY, USA, 2016, pp. 2–10. doi:10.1145/2950290.2994160.
* [3] B. F. A. Hompes, J. C. A. M. Buijs, W. M. P. van der Aalst, P. M. Dixit, J. Buurman, Discovering Deviating Cases and Process Variants Using Trace Clustering, Tech. rep., Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands (2015).
* [4] A. Bolt, W. M. van der Aalst, M. de Leoni, Finding process variants in event logs (Short Paper), in: Lecture Notes in Computer Science, Vol. 10573 LNCS, Springer Verlag, 2017, pp. 45–52. doi:10.1007/978-3-319-69462-7_4.
* [5] A. Bolt, M. de Leoni, W. M. van der Aalst, Process variant comparison: Using event logs to detect differences in behavior and business rules, Information Systems 74 (1) (2018) 53–66. doi:10.1016/j.is.2017.12.006.
* [6] F. Taymouri, M. La Rosa, J. Carmona, Business Process Variant Analysis Based on Mutual Fingerprints of Event Logs, in: Lecture Notes in Computer Science, Vol. 12127 LNCS, Springer, 2020, pp. 299–318. doi:10.1007/978-3-030-49435-3_19.
* [7] W. Van Der Aalst, T. Weijters, L. Maruster, Workflow mining: Discovering process models from event logs, IEEE Transactions on Knowledge and Data Engineering 16 (9) (2004) 1128–1142. doi:10.1109/TKDE.2004.47.
* [8] J. E. Cook, A. L. Wolf, Discovering Models of Software Processes from Event-Based Data, in: ACM Transactions on Software Engineering and Methodology, Vol. 7, ACM, 1996, pp. 215–249. doi:10.1145/287000.287001.
* [9] A. Hindle, E. T. Barr, Z. Su, M. Gabel, P. Devanbu, On the Naturalness of Software, in: Proceedings of the 34th International Conference on Software Engineering, IEEE Press, Zurich, Switzerland, 2012, pp. 837–847.
* [10] J.
Caldeira, F. Brito e Abreu, J. Reis, J. Cardoso, Assessing Software Development Teams’ Efficiency using Process Mining, in: 2019 International Conference on Process Mining (ICPM), IEEE, 2019, pp. 65–72. doi:10.1109/icpm.2019.00020.
* [11] B. M. Enis, K. K. Cox, J. E. Stafford, Students as subjects in consumer behavior experiments, Journal of Marketing Research 9 (1) (1972) 72–74.
* [12] F. K. Shuptrine, On the validity of using students as subjects in consumer behavior investigations, The Journal of Business 48 (3) (1975) 383–390.
* [13] G. M. Hampton, Students as subjects in international behavioral studies, Journal of International Business Studies (1979) 94–96.
* [14] W. Remus, Using students as subjects in experiments on decision support systems, in: Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences. Volume III: Decision Support and Knowledge Based Systems Track, Vol. 3, IEEE Computer Society, 1989, pp. 176–177.
* [15] A. A. Porter, L. G. Votta, V. R. Basili, Comparing detection methods for software requirements inspections: A replicated experiment, IEEE Transactions on Software Engineering 21 (6) (1995) 563–575.
* [16] A. Porter, L. Votta, Comparing detection methods for software requirements inspections: A replication using professional subjects, Empirical Software Engineering 3 (4) (1998) 355–379.
* [17] M. Höst, B. Regnell, C. Wohlin, Using students as subjects - a comparative study of students and professionals in lead-time impact assessment, Empirical Software Engineering 5 (3) (2000) 201–214.
* [18] P. Runeson, Using students as experiment subjects - an analysis on graduate and freshmen student data, in: Proceedings of the 7th International Conference on Empirical Assessment in Software Engineering, Citeseer, 2003, pp. 95–102.
* [19] P. Berander, Using students as subjects in requirements prioritization, in: Proceedings of the 2004 International Symposium on Empirical Software Engineering, ISESE’04, IEEE, 2004, pp. 167–176.
* [20] M. Svahnberg, A. Aurum, C. Wohlin, Using students as subjects - an empirical evaluation, in: Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, 2008, pp. 288–290.
* [21] S. C. Kotakonda, R. Engu, Are students good proxies for studying professionals: A systematic literature review (2012).
* [22] D. Jurafsky, J. H. Martin, Speech and Language Processing, 3rd Edition, Pearson Prentice Hall, 2020.
* [23] L. Chen, M. A. Babar, A systematic review of evaluation of variability management approaches in software product lines, Information and Software Technology 53 (4) (2011) 344–362.
* [24] A. T. Nguyen, T. Nguyen, T. N. Nguyen, D. Lo, C. Sun, Duplicate Bug Report Detection with a Combination of Information Retrieval and Topic Modeling, in: 2012 27th IEEE/ACM International Conference on Automated Software Engineering, ASE 2012 - Proceedings, ACM Press, New York, NY, USA, 2012, pp. 70–79.
* [25] A. Agrawal, W. Fu, T. Menzies, What is Wrong with Topic Modeling? (and How to Fix it Using Search-based Software Engineering), Information and Software Technology. doi:10.1016/j.infsof.2018.02.005.
* [26] T. H. Chen, S. W. Thomas, A. E. Hassan, A survey on the use of topic models when mining software repositories, Empirical Software Engineering 21 (5) (2016) 1843–1919. doi:10.1007/s10664-015-9402-8.
* [27] D. M. Blei, A. Y. Ng, M. I.
Jordan, Latent Dirichlet Allocation, Journal of Machine Learning Research 3 (2003) 993–1022.
* [28] T. Menzies, L. Minku, F. Peters, The Art and Science of Analyzing Software Data; Quantitative Methods, in: Proceedings - International Conference on Software Engineering, Vol. 2, IEEE Computer Society, 2015, pp. 959–960. doi:10.1109/ICSE.2015.306.
* [29] K. Damevski, D. C. Shepherd, J. Schneider, L. Pollock, Mining Sequences of Developer Interactions in Visual Studio for Usage Smells, IEEE Transactions on Software Engineering 43 (4) (2017) 359–371. doi:10.1109/TSE.2016.2592905.
* [30] W. Fu, T. Menzies, Easy over hard: A case study on deep learning, in: Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering, Vol. Part F130154, Association for Computing Machinery, 2017, pp. 49–60. doi:10.1145/3106237.3106256.
* [31] A. Agrawal, W. Fu, D. Chen, X. Shen, T. Menzies, How to “DODGE” Complex Software Analytics?, IEEE Transactions on Software Engineering.
* [32] W. Fu, T. Menzies, X. Shen, Tuning for software analytics: Is it really necessary?, Information and Software Technology 76 (2016) 135–146. doi:10.1016/j.infsof.2016.04.017.
* [33] P. F. Brown, P. V. deSouza, R. L. Mercer, V. J. D. Pietra, J. C. Lai, Class-Based N-Gram Models of Natural Language, Computational Linguistics 18 (4) (1992) 467–479.
* [34] A. L. Santos, G. Prendi, H. Sousa, R. Ribeiro, Stepwise API usage assistance using n-gram language models, Journal of Systems and Software 131 (2017) 461–474. doi:10.1016/j.jss.2016.06.063.
* [35] C. Chen, Z. Xing, Y. Liu, What’s Spain’s Paris? Mining analogical libraries from Q&A discussions, Empirical Software Engineering 24 (3) (2019) 1155–1194. doi:10.1007/s10664-018-9657-y.
* [36] K. Damevski, H. Chen, D. C. Shepherd, N. A. Kraft, L. Pollock, Predicting future developer behavior in the IDE using topic models, IEEE Transactions on Software Engineering 44 (11) (2018) 1100–1111. doi:10.1109/TSE.2017.2748134.
* [37] H. Li, T. H. P. Chen, W. Shang, A. E. Hassan, Studying software logging using topic models, Empirical Software Engineering 23 (5) (2018) 2655–2694. doi:10.1007/s10664-018-9595-8.
* [38] D. Ye, Z. Xing, N. Kapre, The structure and dynamics of knowledge network in domain-specific Q&A sites: a case study of stack overflow, Empirical Software Engineering 22 (1) (2017) 375–406. doi:10.1007/s10664-016-9430-z.
* [39] X.-L. Yang, D. Lo, X. Xia, Z.-Y. Wan, J.-L. Sun, What Security Questions Do Developers Ask? A Large-Scale Study of Stack Overflow Posts, Journal of Computer Science and Technology 31 (5) (2016) 910–924. doi:10.1007/s11390-016-1672-0.
* [40] M. Mittal, A. Sureka, Process mining software repositories from student projects in an undergraduate software engineering course, in: 36th International Conference on Software Engineering, ICSE Companion 2014 - Proceedings, Association for Computing Machinery, 2014, pp. 344–353. doi:10.1145/2591062.2591152.
* [41] R. Minelli, A. Mocci, M. Lanza, I Know What You Did Last Summer - An Investigation of How Developers Spend Their Time, in: 23rd International Conference on Program Comprehension, IEEE, 2015, pp. 25–35. doi:10.1109/ICPC.2015.12.
* [42] P. Salza, F. Palomba, D. D. Nucci, C. D’uva, A. De Lucia, F. Ferrucci, Do Developers Update Third-Party Libraries in Mobile Apps?, in: Proceedings of the 26th Conference on Program Comprehension, Association for Computing Machinery, 2018, pp. 255–265.
* [43] X. Xia, L. Bao, D. Lo, Z. Xing, A. E. Hassan, S.
Li, Measuring Program Comprehension: A Large-Scale Field Study with Professionals, IEEE Transactions on Software Engineering 44 (10) (2018) 951–976. doi:10.1109/TSE.2017.2734091. * [44] M. Beller, G. Gousios, A. Panichella, S. Proksch, S. Amann, A. Zaidman, Developer Testing in the IDE: Patterns, Beliefs, and Behavior, IEEE Transactions on Software Engineering 45 (3) (2019) 261–284. doi:10.1109/TSE.2017.2776152. * [45] S. Pachidi, M. Spruit, I. Van De Weerd, Understanding users’ behavior with software operation data mining, Computers in Human Behavior 30 (2014) 583–594. doi:10.1016/j.chb.2013.07.049. * [46] J. P. G. Gomes, Learning to code in class with moocs: process, factors and outcomes, Master in computer science and business management, Instituto Universitário de Lisboa (Iscte), Lisbon, Portugal, supervision: Cláudia Werner (COPPE/UFRJ) and Fernando Brito e Abreu (ISTAR/Iscte (dec 2020). * [47] J. P. Leal, F. Silva, Mooshak: A Web-based multi-site programming contest system, Software - Practice and Experience 33 (6) (2003) 567–581. doi:10.1002/spe.522. * [48] J. Cao, T. Xia, J. Li, Y. Zhang, S. Tang, A density-based method for adaptive LDA model selection, Neurocomputing 72 (7-9) (2009) 1775–1781. doi:10.1016/j.neucom.2008.06.011. * [49] R. Arun, V. Suresh, C. E. Madhavan, M. N. Murty, On finding the natural number of topics with Latent Dirichlet Allocation: Some observations, in: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 6118 LNAI, 2010, pp. 391–402. doi:10.1007/978-3-642-13657-3{\\_}43. * [50] R. Deveaud, E. Sanjuan, P. Bellot, E. SanJuan, Accurate and Effective Latent Concept Modeling for Ad Hoc Information Retrieval, Document Numérique, Lavoisier, (2014) 61–84doi:10.3166/DN.17.1.61. * [51] T. L. Griffiths, M. Steyvers, Finding scientific topics, Proceedings of the National Academy of Sciences of the United States of America 101 (SUPPL. 1) (2004) 5228–5235. doi:10.1073/pnas.0307752101. * [52] W. Poncin, A. Serebrenik, M. V. D. Brand, Process Mining Software Repositories, 2011 15th European Conference on Software Maintenance and Reengineering (2011) 5–14doi:10.1109/CSMR.2011.5. * [53] W. Van Der Aalst, Process Mining: Data Science in Action, 2nd Edition, Springer-Verlag Berlin Heidelberg, 2016. doi:10.1007/978-3-662-49851-4. * [54] A. Berti, W. van der Aalst, A Novel Token-Based Replay Technique to Speed Up Conformance Checking and Process Enhancement, Tech. rep., Process and Data Science group, Lehrstuhl fur Informatik 9 52074 , RWTH Aachen University, Germany, Aachen (7 2020). * [55] J. Munoz-Gama, J. Carmona, A Fresh Look at Precision in Process Conformance, in: R. Hull, J. Mendling, S. Tai (Eds.), Business Process Management - 8th International Conference, BPM2010, Hoboken, NJ, USA, September 13-16, 2010, Vol. 6336 of Lecture Notes in Computer Science, Springer, 2010, pp. 211–226. doi:10.1007/978-3-642-15618-2{\\_}16. * [56] J. C. Buijs, B. F. Van Dongen, W. M. Van Der Aalst, Quality dimensions in process discovery: The importance of fitness, precision, generalization and simplicity, International Journal of Cooperative Information Systems 23 (1). doi:10.1142/S0218843014400012. * [57] F. Rojas Blum, Metrics in process discovery, Tech. rep., Computer Science Department, Universidad de Chile, Chile (2015). * [58] A. Nagpal, G. 
Gabrani, Python for data analytics, scientific and technical applications, in: 2019 Amity international conference on artificial intelligence (AICAI), IEEE, 2019, pp. 140–145.
Discrete Double Fibrations

Michael Lambert

Presheaves on a small category are well-known to correspond via a category of elements construction to ordinary discrete fibrations over that same small category. Work of R. Paré proposes that presheaves on a small double category are certain lax functors valued in the double category of sets with spans. This paper isolates the discrete fibration concept corresponding to this presheaf notion and shows that the category of elements construction introduced by Paré leads to an equivalence of virtual double categories.

§ INTRODUCTION

A discrete fibration is a functor $F\colon\F\to\C$ such that for each arrow $f\colon C\to FY$ in $\C$, there is a unique arrow $f^*Y\to Y$ in $\F$ whose image under $F$ is $f$. Such functors correspond to ordinary presheaves
\[ \dfib(\C) \simeq [\C^{op},\set] \]
via a well-known “category of elements" construction <cit.>. This is a “representation theorem" in the sense that discrete fibrations over $\C$ correspond to set-valued representations of $\C$.

Investigation of higher-dimensional structures leads to the question of analogues of such well-established lower-dimensional developments. In the case of fibrations, for example, ordinary fibrations have their analogues in “2-fibrations" over a fixed base 2-category <cit.>. Discrete fibrations over a fixed base 2-category have their 2-dimensional version in “discrete 2-fibrations" <cit.>. The justification that the notion of (discrete) fibration is the correct one in each case consists in the existence of a representation theorem taking the form of an equivalence with certain representations of the base structure via a category of elements construction.

In the double-categorical world, R. Paré proposes <cit.> that certain span-valued, lax functors on a double category $\mathbb B$ are presheaves on $\mathbb B$. This paper aims to isolate the notion of “discrete double fibration" corresponding to this notion of presheaf. As justification of the proposed definition, Paré's double category of elements will be used to exhibit a representation theorem on the pattern of those reviewed above. The technically interesting part of the results is that ultimately an equivalence of virtual double categories is achieved. For this the language of monoids and modules in virtual double categories as in <cit.> will be used.

§.§ Overview and Motivation

To give away the answer completely, a discrete double fibration should be a category object in the category of discrete fibrations. This follows the approach to double category theorizing that a “double ___" is a category object in the category of ___'s. The point of the paper is thus to show that this expected definition comports with Paré's notion of presheaf by exhibiting the representation theorem as discussed above. This will be proved directly for all the 1-categorical structure involved. However, lax functors and their transformations are fragments of a higher double-dimensional structure. That is, modules and their multimodulations make lax functors into a virtual double category. The question, then, arises as to the corresponding virtual double category structure on the discrete fibration side of the desired equivalence.

It turns out that there is already well-established language for these structures from <cit.>. For a category with finite limits $\C$, the category of monoids in the bicategory of spans in $\C$ is equivalent to the category of category objects in $\C$.
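To make the correspondence concrete in the basic case $\C = \set$: a monoid in $\Span(\set)$ is a span equipped with multiplication and unit maps
\[
C_0 \xleftarrow{\;s\;} C_1 \xrightarrow{\;t\;} C_0,
\qquad
\mu\colon C_1\times_{C_0} C_1 \longrightarrow C_1,
\qquad
\eta\colon C_0 \longrightarrow C_1,
\]
and the monoid axioms assert exactly that $\mu$ is an associative composition on $C_1$ for which the elements $\eta(x)$ act as two-sided identities. In other words, a monoid in $\Span(\set)$ is precisely a small category with object set $C_0$ and arrow set $C_1$.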
In fact the correspondence is much stronger than this, since – under the assumption of a fixed universe of concrete sets – the data on either side of the equivalence is the same modulo rearrangement of tuples. This general result leads to the special case for $\C = \dfib$, yielding the equivalence
\[ \mon(\Span(\dfib)) \simeq \cat(\dfib). \]
This is the main clue leading to a natural candidate for virtual double category structures on discrete double fibrations. For monoids in the bicategory part of any double category $\mathbb D$ form the 1-categorical part of the virtual double category of monoids and modules in $\mathbb D$. That is, there is a virtual double category $\dblmod(\mathbb D)$ of modules whose underlying 1-category is precisely monoids in the bicategory part of $\mathbb D$. This leads to the natural candidate for virtual double category structure on discrete double fibrations as $\dblmod(\spn(\dfib))$. The ultimate objective of the paper is thus to show that an appropriate slice of such modules is equivalent to the virtual double category of lax presheaves via the double category of elements construction.

§.§ Organization and Results

Section 2 reviews lax functors, their transformations and the elements construction from Paré's <cit.>. The main definition of “discrete double fibration" is given as a double functor $P\colon\mathbb E\to\mathbb B$ for which both $P_0$ and $P_1$ are discrete fibrations. This is equivalently a category object in the category of discrete fibrations. The section culminates in the first main contribution of the paper, proved as Theorem <ref> below. This result says that for any strict double category $\mathbb B$, there is an equivalence of 1-categories
\[ \dfib(\mathbb B) \simeq \lax(\mathbb B^{op},\spn) \]
between the category of discrete double fibrations over $\mathbb B$ and span-valued lax functors on $\mathbb B$, induced by the double category of elements construction. This is achieved by constructing a pseudo-inverse for the double category of elements.

Section 3 introduces the virtual double category structure on lax span-valued functors as in <cit.> and <cit.> with the goal of extending the theorem above to an equivalence of virtual double categories. It turns out that a convenient language for setting up the virtual double category structure on discrete double fibrations is that of monoids and modules in virtual double categories as in <cit.> and later <cit.>. Section 4 extends the elements construction and its pseudo-inverse to functors of virtual double categories and culminates in the second result, proved as Theorem <ref>. That is, for any strict double category $\mathbb B$, there is an equivalence of virtual double categories
\[ \dblfib(\mathbb B) \simeq \dbllax(\mathbb B^{op},\spn) \]
induced by the elements functor.

§.§ Conventions and Notation

Double categories go back to Ehresmann <cit.>. Other references include <cit.>, <cit.>, and <cit.>. By the phrase “double category" we will always mean a strict double category, which is a category object in categories. Adaptations for the pseudo-case in some cases are easy to make but tedious; in other cases these are more subtle and require separate treatment. One quirk of our presentation is that, owing to different conventions concerning which arrows are “vertical" and which are “horizontal," we prefer to use the more descriptive language of “arrows," “proarrows," and “cells." Arrows are of course the ordinary arrows between objects (the horizontal arrows in <cit.> and <cit.> vs.
the vertical ones in <cit.> and <cit.>); whereas the proarrows are the objects of the bicategory part (that is, the vertical ones of <cit.> and <cit.> vs. the horizontal ones of <cit.> and <cit.>). In this language, the arrows of the canonical example of categories and profunctors would be the ordinary functors while the proarrows are the profunctors. This choice is inspired by the language of a “proarrow equipment" in <cit.> and <cit.>, which sought to axiomatize this very situation as a 2-category “equipped with proarrows."

Throughout, we use blackboard letters $\mathbb A$, $\mathbb B$, $\mathbb C$, $\mathbb D$ for double categories. The 0-part of such $\mathbb D$ is the category of objects and arrows, denoted by `$\mathbb D_0$'. The 1-part is the category of proarrows and cells, denoted by `$\mathbb D_1$'. External composition is written with a tensor `$\otimes$'. The external unit is denoted with $u\colon \mathbb D_0\to\mathbb D_1$. External source and target functors are $\src, \tgt\colon \mathbb D_1\rightrightarrows \mathbb D_0$. Internal structure is denoted by juxtaposition in the usual order. The terms “domain" and “codomain" refer to internal structure, whereas “source" and “target" refer to external structure. One result requires the notion of the “transpose" of a double category $\mathbb D$, denoted here by $\mathbb D^\dagger$. When $\mathbb D$ is strict, $\mathbb D^\dagger$ is a strict double category and transposing extends to a functor $(-)^\dagger\colon \mathbf{Dbl} \to \mathbf{Dbl}$ on the 1-category of strict double categories and double functors.

Script letters $\C$, $\D$, $\X$ denote ordinary categories with object sets $\C_0$ and arrow sets $\C_1$. Well-known 1-, 2-, and bi-categories referred to throughout are given in boldface such as $\set$, $\cat$, $\mathbf{Span}$, and $\mathbf{Rel}$. We always refer to the highest level of structure commonly associated with the objects of the category. Thus, $\cat$ is the 2-category of categories, unless specified otherwise. The main exception is that $\dfib$ is the ordinary category of discrete fibrations. Throughout we take for granted the notions of monoid and category internal to a given category with finite limits. Some 2-categorical concepts are referenced but not in a crucial way. Common double categories are presented in mixed typeface and named by their proarrows. In particular $\prof$ is the double category of profunctors; $\mathbb R\mathbf{el}$ is the double category of sets and relations; $\spn$ (rather than $\dblset$) is the double category of sets and spans.

§ LAX FUNCTORS, ELEMENTS, AND DISCRETE DOUBLE FIBRATIONS

The main objects of study are lax functors between double categories. The definitions are recalled in this section. The double category of elements is recalled, and it is seen how its construction leads naturally to the definition of a discrete double fibration.

§.§ Lax Functors

As groundwork, recall here the definitions of lax functors and their natural transformations. [See e.g.
7.2 of <cit.> or 1.1.2 of <cit.>] A lax functor between (pseudo-) double categories $F\colon\mathbb A\to\mathbb B$ consists of assignments \[ A\mapsto FA\qquad f\mapsto Ff\qquad m\mapsto Fm\qquad \alpha\mapsto F\alpha \] on objects, arrows, proarrows and cells, respecting domains, codomains, sources and targets; and laxity cells \ar@{} [dr]|{\phi_A} FA \ar@{=}[d] \ar[r]^{u_{FA}}|-@{|} & FA \ar@{=}[d] & & \ar@{} [drr]|{\phi_{n,m}} FA \ar@{=}[d] \ar[r]^{Fm}|-@{|} & FB \ar[r]^{Fn}|-@{|} & FC \ar@{=}[d] \\ FA \ar[r]_{Fu_A}|-@{|} & FA & & FA \ar[rr]_{F(n\otimes m)}|-@{|} & & FC for each object $A$ and composable proarrows $m\colon A\slashedrightarrow B$ and $n\colon B\slashedrightarrow C$, all subject to the following axioms. * [Internal Functoriality] The object and arrows assignments $A\mapsto FA$, $f\mapsto Ff$ above define an ordinary functor $F_0\colon \mathbb A_0\to\mathbb B_0$; and the proarrow and cell assignments define an ordinary functor $F_1\colon \mathbb A_1\to\mathbb B_1$ with respect to internal identities and composition of cells. * [Naturality] For any $f\colon A\to B$, there is an equality $$\xymatrix{ \ar@{} [dr]|{u_{Ff}} \cdot \ar[d]_{Ff} \ar[r]^{u_{FA}}|-@{|} & \cdot \ar[d]^{Ff} & & \ar@{} [dr]|{\phi_A} \cdot \ar@{=}[d] \ar[r]^{u_{FA}}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{} [dr]|{\phi_B} \cdot \ar@{=}[d] \ar[r]|-@{|} & \cdot \ar@{=}[d] & = & \ar@{}[dr]|{Fu_f} \cdot \ar[d]_{Ff} \ar[r]|-@{|} & \cdot \ar[d]^{Ff} \\ \cdot \ar[r]_{Fu_B}|-@{|} & \cdot & & \cdot \ar[r]_{Fu_B}|-@{|} & \cdot and for any externally composable cells $\alpha\colon m\Rightarrow p$ and $\beta\colon n\Rightarrow q$, there is an equality $$\xymatrix{ \ar@{}[dr]|{F\alpha} \cdot \ar[d]_{Ff} \ar[r]^{Fm}|-@{|} & \ar@{}[dr]|{F\beta} \cdot \ar[d]^{} \ar[r]^{Fn}|-@{|} & \cdot \ar[d]^{Fh} && \ar@{}[drr]|{\phi_{n,m}} \cdot \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \cdot \ar[r]^{Fn}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{}[drr]|{\phi_{q,p}} \cdot \ar@{=}[d] \ar[r]_{Fp}|-@{|} & \cdot \ar[r]_{Fq}|-@{|} & \cdot \ar@{=}[d] & = & \ar@{}[drr]|{F(\beta\otimes\alpha)} \cdot \ar[d]_{Ff} \ar[rr]|{F(n\otimes m)} & & \cdot \ar[d]^{Ff} \\ \cdot \ar[rr]_{F(q\otimes p)}|-@{|} & & \cdot & & \cdot \ar[rr]_{F(q\otimes p)}|-@{|} & & \cdot of composite cells. 
* [Unit and Associativity] Given a proarrow $m\colon A\slashedrightarrow B$, the unit laxity cells satisfy $$\xymatrix{ \ar@{}[dr]|{\phi_A} \cdot \ar@{=}[d] \ar[r]^{u_{FA}}|-@{|} & \ar@{}[dr]|{Fu_m} \cdot \ar[d]^{} \ar[r]^{Fm}|-@{|} & \cdot \ar@{=}[d] && \ar@{}[drr]|{\cong} \cdot \ar@{=}[d] \ar[r]^{u_{FA}}|-@{|} & \cdot \ar[r]^{Fm}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{}[drr]|{\phi_{m, u_A}} \cdot \ar@{=}[d] \ar[r]_{Fu_A}|-@{|} & \cdot \ar[r]_{Fm}|-@{|} & \cdot \ar@{=}[d] & = & \ar@{}[drr]|{\cong} \cdot \ar@{=}[d] \ar[rr]|{Fm} & & \cdot \ar@{=}[d] \\ \cdot \ar[rr]_{F(m\otimes u_A)}|-@{|} & & \cdot & & \cdot \ar[rr]_{F(m\otimes u_A)}|-@{|} & & \cdot $$\xymatrix{ \ar@{}[dr]|{Fu_m} \cdot \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \ar@{}[dr]|{\phi_B} \cdot \ar[d]^{} \ar[r]^{u_{FB}}|-@{|} & \cdot \ar@{=}[d] && \ar@{}[drr]|{\cong} \cdot \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \cdot \ar[r]^{u_{FB}}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{}[drr]|{\phi_{u_B, m}} \cdot \ar@{=}[d] \ar[r]_{Fm}|-@{|} & \cdot \ar[r]_{Fu_B}|-@{|} & \cdot \ar@{=}[d] & = & \ar@{}[drr]|{\cong} \cdot \ar@{=}[d] \ar[rr]|{Fm} & & \cdot \ar@{=}[d] \\ \cdot \ar[rr]_{F(u_B\otimes m)}|-@{|} & & \cdot & & \cdot \ar[rr]_{F(u_B\otimes m)}|-@{|} & & \cdot and for any three composable proarrows $m\colon A\slashedrightarrow B$, $n\colon B\slashedrightarrow C$, and $p\colon C\slashedrightarrow D$, the laxity cells are associative in the sense that $$\xymatrix{ \ar@{}[dr]|{Fu_m}\cdot \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \ar@{}[drr]|{\phi_{p,n}} \cdot \ar@{=}[d] \ar[r]^{Fn}|-@{|} & \cdot \ar[r]^{Fp}|-@{|} & \cdot \ar@{=}[d] & & & & \ar@{}[drr]|{\phi_{n,m}}\cdot \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \cdot \ar[r]^{Fn}|-@{|} & \cdot \ar@{}[dr]|{Fu_p} \ar@{=}[d] \ar[r]^{Fp}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{}[drrr]|{\phi_{p\otimes n, m}} \cdot \ar@{=}[d] \ar[r]|-@{|} & \cdot \ar[rr]|-@{|} & & \cdot \ar@{=}[d] & & = & & \ar@{}[drrr]|{\phi_{p,n\otimes m}} \cdot \ar@{=}[d] \ar[rr]|-@{|} & & \cdot \ar[r]|-@{|} & \cdot \ar@{=}[d]\\ \cdot \ar[rrr]_{F((p\otimes n)\otimes m)}|-@{|} & & & \cdot & & & & \cdot \ar[rrr]_{F(p\otimes (n\otimes m))}|-@{|}& & & \cdot\\ are equal, modulo composition with the image under $F$ of the associativity iso cell \[(p\otimes n)\otimes m\cong p\otimes (n\otimes m) \] given with the structure of $\mathbb A$. Notice that the structural isomorphism reduce to strict identities when $\mathbb A$ and $\mathbb B$ are strict double categories. A pseudo double functor is a lax double functor where the laxity cells are isomorphisms; a strict double functor is one whose laxity cells are identities. Generally speaking, use lower-case Greek letters for the laxity cells corresponding to the Latin capital letter for the functor. Thus, $F$ is a lax functor with laxity cells denoted by `$\phi$' with subscripts; `$\gamma$' is used for a lax functor $G$. [Cf. 1.2 of <cit.>] The usual object functor $\ob(-)\colon \cat\to\set$ extends to a lax double functor $\Ob(-)\colon \prof\to \spn$ in the following way. On a category $\C$ take the object set $\C_0$ to be the image as usual; likewise take the object part $F_0\colon \C_0\to\D_0$ to be the image of a given functor $F\colon \C\to\D$. On a profunctor $P\colon \C\slashedrightarrow \D$ – that is, a functor $P\colon \C^{op}\times\D\to\set$ – take the image to be the disjoint union \[ \Ob(P):=\coprod_{C,D}P(C,D). \] The assignment on a given transformation of profunctors is induced by the universal property of the coproduct. This is a bona fide lax functor [Cf. 
1.1.5 <cit.>] Let $F, G\colon\mathbb A\rightrightarrows\mathbb B$ denote lax functors with laxity cells $\phi$ and $\gamma$. A natural transformation $\tau\colon F \to G$ assigns to each object $A$ an arrow $\tau_A\colon FA\to GA$ and to each proarrow $m\colon A\slashedrightarrow B$ a cell $$\xymatrix{ \ar @{} [dr] |{\tau_m} FA \ar[r]^{Fm}|-@{|} \ar[d]_{\tau_A} & FB \ar[d]^{\tau_B} \\ GA \ar[r]_{Gm}|-@{|} & GB in such a way that the following axioms are satisfied. * [Naturality] Given any arrow $f\colon A\to B$, the usual naturality square $$\xymatrix{ FA \ar[r]^{Ff} \ar[d]_{\tau_A} & FB \ar[d]^{\tau_B} \\ GA \ar[r]_{Gf} & GB commutes; and given any cell $\alpha\colon m\Rightarrow n$, the corresponding naturality square commutes in the sense that the compositions $$\xymatrix{ \ar@{} [dr]|{F\alpha} \cdot \ar[d]_{} \ar[r]^{Fm}|-@{|} & \cdot \ar[d]^{} & & \ar@{} [dr]|{\tau_m} \cdot \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{} [dr]|{\tau_n} \cdot \ar@{=}[d] \ar[r]|-@{|} & \cdot \ar@{=}[d] & = & \ar@{}[dr]|{G\alpha} \cdot \ar[d]_{} \ar[r]|-@{|} & \cdot \ar[d]^{} \\ \cdot \ar[r]_{Gn}|-@{|} & \cdot & & \cdot \ar[r]_{Gn}|-@{|} & \cdot are equal. * [Functoriality] For any object $A$, the compositions $$\xymatrix{ \ar@{} [dr]|{u_{\tau_A}} \cdot \ar[d]_{} \ar[r]^{u_{FA}}|-@{|} & \cdot \ar[d]^{} & & \ar@{} [dr]|{\phi_A} \cdot \ar@{=}[d] \ar[r]^{u_{FA}}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{} [dr]|{\gamma_A} \cdot \ar@{=}[d] \ar[r]|-@{|} & \cdot \ar@{=}[d] & = & \ar@{}[dr]|{\tau_{u_A}} \cdot \ar[d]_{} \ar[r]|-@{|} & \cdot \ar[d]^{} \\ \cdot \ar[r]_{Gu_A}|-@{|} & \cdot & & \cdot \ar[r]_{Gu_A}|-@{|} & \cdot are equal; and for any composable proarrows $m\colon A\slashedrightarrow B$ and $n\colon B\slashedrightarrow C$, the composites $$\xymatrix{ \cdot\ar@{}[dr]|{\tau_m}\ar[d]_{\tau_A}\ar[r]^{Fm}|-@{|} &\cdot\ar@{}[dr]|{\tau_n}\ar[d] \ar[r]^{Fn}|-@{|} &\cdot\ar[d]^{\tau_C} && \cdot\ar@{}[drr]|{\phi_{n,m}} \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \cdot \ar[r]^{Fn}|-@{|} & \cdot \ar@{=}[d] \\ \cdot\ar@{}[drr]|{\gamma_{n,m}}\ar@{=}[d]\ar[r]_{Gm}|-@{|}&\cdot\ar[r]_{Gn}|-@{|}&\cdot\ar@{=}[d]&=&\cdot\ar@{}[drr]|{\tau_{n\otimes m}}\ar[d]_{\tau_A} \ar[rr]|-@{|} && \cdot\ar[d]^{\tau_C}\\ \cdot\ar[rr]_{G(n\otimes m)}|-@{|} &&\cdot &&\cdot\ar[rr]_{G(n\otimes m)}|-@{|}&&\cdot are equal. Let $\lax(\mathbb A,\mathbb B)$ denote the category of lax functors $\mathbb A\to\mathbb B$ and their transformations. The composition is “component-wise" and identity transformations are those with identity morphisms in each of their components. In a few simple cases, this category can be described in terms of well-known structures. These equivalences are given by the “elements" construction at the beginning of the next subsection. A lax functor $\mathbf 1\to\spn$ is essentially a small category. In fact, taking elements induces an equivalence \[ \mathbf{Lax}(\mathbf 1,\spn)\simeq \cat. \] In terms of monoids and their morphisms (i.e. a 1-category of monoids and their morphisms in a bicategory), this means that \[ \mathbf{Lax}(\mathbf 1,\spn)\simeq \mathbf{Mon}(\Span(\set)). \] as 1-categories. This is a primordial example or special case of our main results, Theorem <ref> and Theorem <ref>. [See 3.13 of <cit.>] If $\C$ is an ordinary category, let $\mathcal V\C$ denote the “vertical double category" formed from the objects of $\C$ and using the morphisms of $\C$ as the proarrows with only identity arrows and cells. 
There is then an equivalence
\[ \mathbf{Lax}(\mathcal V\C,\spn)\simeq \cat/\C \]
of ordinary categories given by taking elements.

§.§ Discrete Double Fibrations

The fibration properties of the double category of elements construction lead to the axiomatization of our notion of “discrete double fibration." Let us recall the details of this double category and its associated projection functor.

[3.7 <cit.>] Let $F\colon \mathbb B^{op}\to\spn$ denote a lax functor with laxity cells $\phi$. The double category of elements $\dblelt(F)$ has

* as objects those pairs $(B,x)$ with $x\in FB$;
* as morphisms $f\colon (B,x)\to (C,y)$ those arrows $f\colon B\to C$ of $\mathbb B$ such that $f^*y=x$ holds under the action of the transition function $f^*\colon FC\to FB$;
* as proarrows $(B,x)\slashedrightarrow (C,y)$ those pairs $(m,s)$ consisting of a proarrow $m\colon B\slashedrightarrow C$ and an element $s\in Fm$ such that $s_0=x$ and $s_1=y$ both hold; and
* as cells $(m,s)\Rightarrow (n,t)$ those cells $\theta$ of $\mathbb B$ as at left below for which the associated morphism of spans on the right

\ar@{}[dr]|{\theta}A \ar[d]_{f} \ar[r]^{m}|-@{|} & B \ar[d]^g & & FC \ar[d]_{f^*} & \ar[l]_{(-)_0} Fn \ar[d]^{\theta^*} \ar[r]^{(-)_1} & FD \ar[d]^{g^*} \\ C \ar[r]_n|-@{|} & D & & FA & \ar[l]^{(-)_0} Fm \ar[r]_{(-)_1} & FB

satisfies $\theta^*(t)=s$.

The 1-category structure on objects and morphisms is the same as the ordinary category of elements. Internal composition of cells uses the internal composition in $\mathbb B$ and the strict equalities of the form $\theta^*(t)=s$ in the definition. External composition uses the given laxity morphisms. For example, given composable proarrows $(m,s)\colon (B,x)\slashedrightarrow (C,y)$ and $(n,t)\colon (C,y)\slashedrightarrow (D,z)$, the composite is defined as
\[ (n,t) \otimes (m,s):= (n\otimes m,\phi_{n,m}(s,t)). \]
Of course this is a well-defined proarrow. Identities and external composition of cells are defined similarly. This makes $\dblelt(F)$ a double category. It is only as strict as $\mathbb B$. There is a projection double functor $\Pi\colon \dblelt(F)\to\mathbb B$ taking the indexing objects, morphisms, proarrows and cells to $\mathbb B$. This is strict even if $\mathbb B$ is pseudo.

It is remarked in <cit.> that taking elements extends to a functor. Here are the details.

[Morphism from a Transformation of Lax Functors] Let $\tau\colon F\to G$ denote a transformation of lax functors $F,G\colon\mathbb B^{op}\rightrightarrows \spn$ as in Definition <ref>. Define what will be a strict functor of double categories $\dblelt(\tau)\colon\dblelt(F)\to\dblelt(G)$ over $\mathbb B$. On objects and arrows, take
\[ (D,x) \mapsto (D, \tau_Dx) \qquad\qquad f\mapsto f \]
The arrow assignment is well-defined because $f^*\tau_D = \tau_Cf^*$ holds by the strict naturality condition in the definition. This assignment is functorial by construction. Now, for proarrows, send
\[ (m,s)\mapsto (m,\tau_ms)\qquad\qquad\alpha\mapsto \alpha \]
similarly to the object and arrow assignments. Well-definedness again follows from naturality. Functoriality is by construction. The rest of the important content is in the following result.

Given a transformation $\tau\colon F\to G$ of span-valued lax functors, the assignments above yield a morphism from $\Pi\colon\dblelt(F)\to\mathbb B$ to $\Pi\colon\dblelt(G)\to\mathbb B$ over $\mathbb B$.
This association is functorial, meaning that the elements functor \[ \dblelt(-)\colon \lax(\mathbb B^{op},\spn) \longrightarrow \mathbf{Dbl}/\mathbb B \] valued in the slice of double categories over $\mathbb B$ is well-defined. It is left to see that $\dblelt(\tau)\colon \dblelt(F)\to\dblelt(G)$ preserves external composites and units. The definition of external composition in the elements construction incorporates the laxity morphisms coming with $F$ and $G$. So, the preservation of external composition reduces to the statement that these laxity morphisms interact in the proper way with the components of $\tau$ used in the definition. But this is precisely what the “Functoriality" condition of Definition <ref> axiomatizes. Unit preservation follows similarly. As above, the elements functor is valued in the slice of the category of double categories over $\mathbb B$. However, those double functors strictly in its image possess certain fibration properties leading to the definition of a “discrete double fibration." First the properties: Let $F\colon \mathbb B^{op}\to\spn$ denote a lax double functor. The projection functors $\Pi_0$ and $\Pi_1$ underlying the canonical projection $\Pi\colon \mathbb E\mathbf{lt}(F)\to \mathbb B$ are discrete fibrations. Here are the required lifts. Given $(D,x)$ and $f\colon C\to D$, the lift is \[ f\colon (C,f^*x)\to (D,x) \] where $f^*\colon FD\to FC$ is the transition function. Similarly, given $(n,s)$ for a proarrow $n$ and $s\in Fn$ and a cell $\alpha\colon m\Rightarrow n$, the cartesian cell above $\alpha$ with codomain $(n,s)$ is (A, f^*x)\ar@{}[dr]|{\alpha} \ar[d]_{f} \ar[r]^{(m,\alpha^*s)}|-@{|} & (B,g^*y) \ar[d]^{g}\\ (C, x) \ar[r]_{(n, s)}|-@{|} & (D,y) where $\alpha^*\colon Fn\to Fm$ is the transition function between the vertices of the spans. Notice that by the fact that $\alpha^*$ induces a morphism of spans, the source and target of $(m, \alpha^*s)$ are well-defined. Such functors are equivalently characterized with a distinctly double categorical flavor. A double functor $P\colon \mathbb E\to\mathbb B$ between strict double categories for which $P_0$ and $P_1$ are discrete fibrations is equivalently a category object in $\dfib$, and thus equivalently a monoid in the bicategory of spans in $\dfib$. In the first place $\dfib$ has strict pullbacks, so the statement itself makes sense. On the one hand, a category object in any subcategory of an arrow category closed under finite limits is an internal functor. Thus, a category in $\dfib$ is a double functor. Its object and arrow parts must be objects of $\dfib$, that is, discrete fibrations. On the other hand, a double functor is a category object in the arrow category of $\cat$. But if $P_0$ and $P_1$ are discrete fibrations, then such $P$ lives in $\cat(\dfib)$. A monoid in spans in categories is equivalently a double category; similarly a monoid in spans in the arrow category of $\cat$ is equivalently a double functor. Thus, the components of such a double functor are discrete fibrations if and only if the double functor is in fact a monoid in spans in $\dfib$. A discrete double fibration is a category object in $\dfib$. A morphism of discrete double fibrations is a pair of double functors $(H,K)$ making a commutative square. Take $\cat(\dfib)$ to be the category of discrete double fibrations. Let $\dfib(\mathbb B)$ denote the subcategory of discrete double fibrations with fixed target $\mathbb B$ and morphisms with $K = 1_\mathbb B$. 
This is equivalently the fiber of the codomain projection $\cod\colon \cat(\dfib) \to \mathbf{Dbl}$ over $\mathbb B$. Although the official definition has a succinct and philosophically appropriate phrasing, any of the equivalent characterizations of discrete double fibrations in Proposition <ref> may be used throughout, as convenient.

The name used is “discrete double fibration" because this is a discretization of a more general concept of “double fibration." This will be a double functor $P\colon \mathbb E\to\mathbb B$ whose underlying functors $P_0$ and $P_1$ are fibrations satisfying some further compatibility conditions.

The results so far mean that by fiat the elements functor is valued in discrete double fibrations
\[ \dblelt(-)\colon \lax(\mathbb B^{op},\spn) \longrightarrow \dfib(\mathbb B) \]
over $\mathbb B$. The purpose of this section is now to exhibit the pseudo-inverse yielding an equivalence of categories. We first end this subsection with some general results.

[Domain Projection] For any double category, the domain projection functor
\[ \dom\colon \mathbb B/X\longrightarrow \mathbb B \]
from the double slice $\mathbb B/X$ is a discrete double fibration. This is the image under the elements functor $\dblelt(-)\colon \mathbf{Lax}(\mathbb B^{op},\spn)\to \dfib(\mathbb B)$ of the canonical representable functor on $X$.

A double functor $P\colon \mathbb E\to\mathbb B$ is a discrete double fibration over a strict double category $\mathbb B$ if, and only if, the transpose square

\mathbb E^\dagger_1 \ar[d]_{P_1^\dagger}\ar[r]^{\cod} & \mathbb E^\dagger_0 \ar[d]^{P_0^\dagger} \\ \mathbb B^\dagger_1 \ar[r]_{\cod}& \mathbb B^\dagger_0

is a pullback in $\cat$. Straightforward verification.

Lemma <ref> is the analogue of the characterization of ordinary discrete fibrations, saying that a functor $F\colon \F\to\C$ is a discrete fibration if, and only if, the square

\F_1 \ar[d]_{F_1} \ar[r]^{d_1} & \F_0 \ar[d]^{F_0} \\ \C_1 \ar[r]_{d_1} & \C_0

is a pullback in $\set$. This characterization will be important in the monadicity developments in forthcoming work.

§.§ Pseudo-Inverse on Objects and Horizontal Morphisms

The present goal is to construct a pseudo-inverse for the elements functor. This will be a functor from discrete double fibrations back to lax span-valued double functors. For a discrete double fibration $P\colon \mathbb E\to\mathbb B$, we begin the correspondences leading to a lax double functor $F_P\colon \mathbb B^{op}\to\spn$ in the following way.

[Transition Morphisms] On objects, take $B\mapsto \mathbb E_B$, the set of objects of $\mathbb E_0$ over $B\in\mathbb B_0$ via $P_0$. Call this the “fiber" over $B$. Since $P_0$ is a discrete fibration, for every horizontal morphism $f\colon B\to C$, there is a corresponding transition function $f^*\colon \mathbb E_C\to\mathbb E_B$ given on $x\in \mathbb E_C$ by taking the domain $f^*x$ of the unique horizontal morphism over $f$ with codomain $x$. Given a proarrow $m\colon A\slashedrightarrow B$, required is a span from $\mathbb E_A$ to $\mathbb E_B$. For this, let $\mathbb E_m$ denote the fiber of $P_1$ over $m$, and send $m$ to the span
\[ \mathbb E_A \xleftarrow{\src} \mathbb E_m \xrightarrow{\tgt} \mathbb E_B. \]
Finally, associated to a cell $\alpha$ of $\mathbb B$ of the form

A \ar@{}[dr]|{\alpha} \ar[d]_{f} \ar[r]^{m}|-@{|} & B \ar[d]^{g} \\ C \ar[r]_{n}|-@{|} & D

will be a cell of $\spn$, that is, a morphism of the spans associated to $m$ and $n$.
In light of the definitions so far, required is a function $\mathbb E_n\to \mathbb E_m$ making the diagram \mathbb E_C \ar[d] & \ar[l] \mathbb E_n \ar@{-->}[d]^{\alpha^*} \ar[r] & \mathbb E_D \ar[d] \\ \mathbb E_A & \ar[l] \mathbb E_m \ar[r] & \mathbb E_B commute. But such a function $\mathbb E_n\to \mathbb E_m$ is given by the fact that $P_1$ is a discrete fibration. That is, given $u\in \mathbb E_n$, there is a unique cell of $\mathbb E$ with codomain $u$ over $\alpha$, which, by the fact that $P_0$ is a discrete fibration, must be of the form f^*x \ar@{}[dr]|{\Downarrow} \ar[d] \ar[r]|-@{|} & g^*y \ar[d] \\ x \ar[r]|-@{|}_u & y with source and target the unique lifts over $f$ and $g$ respectively. Thus, send $u$ to the vertical source of this lifted cell, which, by construction, is over $m$, hence well-defined. The cell diagram in $\spn$ displayed above then commutes by construction of $\mathbb E_n\to \mathbb E_m$. [Laxity Cells I] Let $D$ denote an object of $\mathbb B$. The laxity cell for the corresponding unit proarrow is of the form \mathbb E_D \ar@{=}[d] & \ar[l]_{1} \mathbb E_D \ar@{-->}[d] \ar[r]^{1} & \mathbb E_D \ar@{=}[d] \\ \mathbb E_D & \ar[l]^{\src} \mathbb E_{u_D} \ar[r]_{\tgt} & \mathbb E_D The functor between vertices $\lambda_D\colon\mathbb E_D \to \mathbb E_{u_D}$ is given by $X \mapsto u_X$. Notice that the unit condition of Definition <ref> is satisfied because the external composition for $\mathbb E$ is strict. [Laxity Cells II] Let $m\colon A\slashedrightarrow B$ and $n\colon B\slashedrightarrow C$ denote proarrows of $\mathbb B$. Required for $F_P$ are laxity cells for such $m$ and $n$ of the form \ar @{} [drrrr] |{\phi_{n,m}} \mathbb E_A \ar@{=}[d] & \ar[l]_{\src} \mathbb E_m\ar[r]^{\tgt} & \mathbb E_B & \mathbb E_n \ar[l]_{\src} \ar[r]^{\tgt} & \mathbb E_C \ar@{=}[d] \\ \mathbb E_A & & \ar[ll]^{\src} \mathbb E_{n\otimes m}\ar[rr]_{\tgt} & & \mathbb E_C between spans. The composed span in the domain of the cell has vertex given by the pullback. The cell amounts to a functor between vertices respecting the source and target functors. This is given by external composition of proarrows and cells over $m$ and $n$ respectively: $$\xymatrix{ \mathbb E_m\times_{\mathbb E_B}\ar[d]_{\phi_{n,m}}\mathbb E_n & & u\colon x\slashedrightarrow y, v\colon y\slashedrightarrow z \ar@{|->}[d] \\ \mathbb E_{n\otimes m} & & v\otimes u This is strictly functorial and a well-defined span morphism by the assumptions on $P\colon\mathbb E\to\mathbb B$. The laxity coherence law follows from the fact that external composition for $\mathbb E$ is associative. Let $P\colon\mathbb E\to\mathbb B$ denote a double fibration. The assignments \[ D\mapsto \mathbb E_D \qquad f\mapsto f^*\qquad m\mapsto \mathbb E_m\qquad \alpha\mapsto \alpha^* \] with unit and laxity cells as above define a lax functor $F_P\colon \mathbb B^{op}\to\spn$. What remains to check is the naturality conditions in Definition <ref>. Take cells $\alpha\colon m\Rightarrow p$ and $\beta\colon n\Rightarrow q$ of $\mathbb B$. The first condition amounts to the commutativity of the square made by the functors between vertices: \mathbb E_p\times_{\mathbb E_N}\mathbb E_q \ar[r]^{\;\;\;\phi} \ar[d]_{\alpha^*\times\beta^*} & \mathbb E_{q\otimes p} \ar[d]^{(\beta\otimes \alpha)^*} \\ \mathbb E_m\times_{\mathbb E_B}\mathbb E_n \ar[r]_{\;\;\;\phi} & \mathbb E_{n\otimes m} But this follows by uniqueness assumptions. Naturality for the unit cells follows similarly. 
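It may help to keep in mind the 1-dimensional version of this recipe: for an ordinary discrete fibration $F\colon\F\to\C$, the corresponding presheaf is given fiberwise by
\[
C \mapsto \F_C = \{\,x\in\F : Fx = C\,\},
\qquad
f^*\colon \F_D\to\F_C,\quad f^*y = \dom(\tilde f_y),
\]
where $\tilde f_y$ is the unique arrow of $\F$ over $f\colon C\to D$ with codomain $y$. The constructions above run this recipe simultaneously at the level of $P_0$ and $P_1$, with the laxity cells recording the external units and composition of $\mathbb E$.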
[Pseudo-Inverse on Morphisms of Discrete Double Fibrations] Start with a morphism $H\colon P\to Q$ of discrete double fibrations $P\colon \mathbb E\to\mathbb B$ and $Q\colon \mathbb G\to\mathbb B$. Define correspondences for what will be a transformation $F_H\colon F_P\to F_Q$ between lax double functors $F_P$ and $F_Q$ arising as in Lemma <ref>. On objects $B$, define $(F_H)_B$ to be the restriction of $H$ to the fibers $H_0\colon \mathbb E_B\to\mathbb G_B$. This is well-defined since $QH=P$ holds. Similarly, for a proarrow $m\colon B\slashedrightarrow C$, take the component proarrow $(F_H)_m$ in $\spn$ to be the span

\mathbb E_B \ar[d]_{H_0} & \ar[l] \mathbb E_m \ar[d]^{H_1} \ar[r] & \mathbb E_C \ar[d]^{H_0} \\ \mathbb G_B & \ar[l] \mathbb G_m \ar[r] & \mathbb G_C

Again this is well-defined since $QH=P$ holds.

The assignments in Construction <ref> make a transformation $F_H\colon F_P\to F_Q$ of lax functors as in Definition <ref>. These assignments result in a functor
\[ F_{(-)}\colon \dfib(\mathbb B)\longrightarrow \lax(\mathbb B^{op},\spn). \]
The first naturality condition holds because $H$ commutes with $P$ and $Q$ and because $Q_0$ is a discrete fibration; the second naturality condition holds again because $QH=P$ is true and because $Q_1$ is also a discrete fibration. Proarrow functoriality follows from the fact that $H$ is a strict double functor.

§.§ An Equivalence of 1-Categories

The pseudo-inverse construction of the last subsection in fact induces an equivalence of categories, leading to the first representation theorem. To define a natural isomorphism $\eta\colon 1\cong F_{\dblelt(-)}$ between functors, needed are component transformations of lax functors indexed by lax functors $H\colon \mathbb B^{op}\to\spn$. For such a lax functor $H$, the required transformation of lax functors $\eta_H\colon H\to F_{\dblelt(H)}$ is given by
\[ \eta_{H,A}\colon HA\longrightarrow \dblelt(H)_A\qquad x\mapsto (A,x) \]
on objects $A\in |\mathbb B_0|$ and by
\[ \eta_{H,m}\colon Hm\longrightarrow \dblelt(H)_m\qquad s\mapsto (m,s) \]
for a proarrow $m\colon B\slashedrightarrow C$, giving a morphism of spans

HB \ar[d]_{\eta} & Hm \ar[l] \ar[d]^{\eta} \ar[r] & HC \ar[d]^{\eta} \\ \dblelt(H)_B & \ar[l] \dblelt(H)_m \ar[r] & \dblelt(H)_C.

Since the maps $\eta_{H,A}$ and $\eta_{H,m}$ just add in indices, they are both bijections of sets. Moreover, these components result in a transformation of lax functors as in Definition <ref> by construction. Thus, such $\eta_H$ is the vertical component of a supposed transformation of functors of virtual double categories. Naturality in $H$ is proved in the theorem below.

On the other hand, a natural isomorphism $\epsilon\colon\dblelt(F_{(-)})\cong 1$ is given by components $\epsilon_P\colon \dblelt(F_P) \longrightarrow \mathbb E$ where $P\colon\mathbb E\to\mathbb B$ is a discrete double fibration. On objects and arrows take
\[ (D,x)\mapsto x\qquad (C,x)\xrightarrow{f} (D,y) \mapsto f^*y\xrightarrow{!} y \]
where $f^*y\to y$ is the unique arrow above $f$ with codomain $y$. This is well-defined because $f^*y=x$ holds. These are bijections by uniqueness of these lifts; they are functorial and respect the fibering over $\mathbb B$. On proarrows and cells, take
\[(m,s)\mapsto s \qquad \alpha\;\mapsto\; \alpha^*s\Rightarrow s \]
where $\alpha^*s\Rightarrow s$ is the unique lift in $\mathbb E$ of $\alpha$ with codomain $s$. Again these are bijections, functorial and fiber-respecting. Naturality in $P$ is proved in the following result.
For any strict double category $\mathbb B$, there is an equivalence of categories
\[ \dfib(\mathbb B) \simeq \lax(\mathbb B^{op},\spn) \]
induced by the double category of elements construction.

It remains to check the naturality of the isomorphisms in Constructions <ref> and <ref>. If $H$ and $G$ are lax functors with a transformation $\tau\colon H\to G$, the diagram

H \ar[d]_\tau \ar[r]^{\eta} & F_{\dblelt(H)}\ar[d]^{F_{\dblelt(\tau)}} \\ G \ar[r]_{\eta} & F_{\dblelt(G)}

commutes because indexing commutes with applying the components of $\tau$.

Naturality in $P$ also follows. Let $H\colon P\to Q$ be a morphism of double fibrations $P\colon\mathbb E\to\mathbb B$ and $Q\colon\mathbb G\to\mathbb B$. For naturality, it is required that the square

\dblelt(F_P) \ar[d]_{\dblelt(F_H)} \ar[r]^{\;\;\epsilon} & \mathbb E \ar[d]^{H} \\ \dblelt(F_Q) \ar[r]_{\;\;\epsilon} & \mathbb G

commutes. Chasing an object $(D,x)$ or a proarrow $(m,s)$ around each side of the square, the result either way is $Hx$ in the former case and is $Hs$ in the latter case. Thus, it remains to check the arrows and cells. Given an arrow $f\colon (C,x)\to (D,y)$, the counter-clockwise direction gives the left arrow below whereas the clockwise direction gives the one on the right:
\[ !\colon f^*Hy \to Hy\qquad\qquad H(!)\colon Hf^*y\to Hy \]
These are strictly equal, however, by uniqueness of lifts. A formally similar argument works to show that the square commutes also at the level of cells.

§ MONOIDS, MODULES AND MULTIMODULATIONS

The main representation theorem of the paper asserts an equivalence of certain virtual double categories. To define these, we first recall the definition, presented here in the form of <cit.>. Virtual double categories have been known under the name “$\mathbf{f.c.}$-multicategories," that is, as an example of a “generalized multicategory," in this case relative to the free-category monad in <cit.>.

§.§ Virtual Double Categories

[See 2.1 of <cit.>] A virtual double category $\mathbb D$ consists of an underlying category $\mathbb D_0$, giving the objects and morphisms of $\mathbb D$, together with, for any two objects $C$ and $D$, a class of proarrows $v\colon C\slashedrightarrow D$ and for any “arity" $k$ multicells of the form

A_0 \ar@{}[drrr]|{\mu} \ar[d]_f \ar[r]^{m_1}|-@{|} & A_1 \ar[r]^{m_2}|-@{|} & \cdots \ar[r]^{m_k}|-@{|} & A_k \ar[d]^g \\ B_0 \ar[rrr]_{n}|-@{|} & & & B_1

all subject to the unit, composition and associativity axioms, as detailed in the reference. The list of proarrows $(m_1, m_2, \dots, m_k)$ is a “$k$-ary multisource." The definition allows nullary multisources with $k=0$.

A functor of virtual double categories $F\colon\mathbb C\to \mathbb D$ sends objects to objects, arrows to arrows, proarrows to proarrows and multicells to multicells in such a way as to preserve domains, codomains, multisources (so arities in particular), targets, identities and compositions.

Every double category is a virtual double category by forgetting external composition. In particular, for $\C$ with pullbacks $\Span(\C)$ is a virtual double category.

[Cf. 5.1 <cit.>] A unit proarrow for an object $D\in\mathbb D$ in a virtual double category is a proarrow $u_D\colon D\slashedrightarrow D$ and a nullary opcartesian cell

\ar@{}[dr]|{\Downarrow} D \ar@{=}[r] \ar@{=}[d]& D \ar@{=}[d] \\ D \ar[r]_{u_D}|-@{|} & D.

A virtual double category has units if it is equipped with a choice of unit for each object. The unit proarrow of any honest double category is a unit in this sense.
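For example, in the virtual double category $\Span(\C)$ of spans in a category $\C$ with pullbacks, the unit proarrow on an object $D$ may be taken to be the identity span
\[
u_D \;=\; \bigl(\, D \xleftarrow{\;1_D\;} D \xrightarrow{\;1_D\;} D \,\bigr),
\]
and the opcartesian property says that multicells out of a path containing $u_D$ correspond bijectively to multicells out of the path with that occurrence of $u_D$ deleted.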
Existence of units is part of the requirements for a virtual equipment as described in <cit.>. The full structure will not be needed here. All the interesting examples possess all units, so without further ado all virtual double categories will be assumed to have units. Functors of virtual double categories will be assumed to be normal in the sense that they preserve nullary opcartesian cells.

The universal property of any unit $u_D$ on a given object $D$ implies that $\mathbb D$ possesses generic multicells

\ar@{}[drrr]|{\Downarrow}\cdot \ar[r]^{u_D}|-@{|} \ar@{=}[d] & \cdot \ar[r]^{u_D}|-@{|} & \cdots\ar[r]^{u_D}|-@{|} & \cdot \ar@{=}[d] \\ \cdot\ar[rrr]_{u_D}|-@{|} & & & \cdot

of any arity $k$. These are given by the unique factorization. Moreover, by uniqueness of these factorizations, a correct multicomposite of any such cells gives the generic multicell of the proper arity determined in this way.

[Terminal Object] The terminal object $\mathbf 1$ in $\vdbl$ is peculiar and is needed later. It has a single object $\bullet$, an identity arrow $1_\bullet$ and an “identity proarrow" $u_\bullet$. Whereas one might expect there to be only a single multicell $u_\bullet \Rightarrow u_\bullet$, in fact required are generic multicells with multisources of all arities $k$

\bullet \ar@{}[drrr]|{\mu_k} \ar[d]_{1_\bullet} \ar[r]^{u_\bullet}|-@{|} & \bullet \ar[r]^{u_\bullet}|-@{|} & \cdots \ar[r]^{u_\bullet}|-@{|} & \bullet \ar[d]^{1_\bullet}\\ \bullet \ar[rrr]_{u_\bullet}|-@{|} & & & \bullet

as otherwise the natural definitions on objects, arrows and proarrows will not extend to a unique functor of virtual double categories $\mathbb D\to\mathbf 1$. Multicomposition is defined to give the generic multicell with the appropriate arity. Notice, then, that in particular $\mathbf 1$ has units by fiat.

A point of a virtual double category is a (normalized) functor $D\colon\mathbf 1\to\mathbb D$. A point thus consists of an object $D$, its identity arrow $1_D$, its unit proarrow $u_D$ and the corresponding generic multicells.

A notion of transformation gives the 2-categorical structure on virtual double categories. Let $F,G\colon\mathbb C\rightrightarrows \mathbb D$ denote functors of virtual double categories. A transformation $\tau\colon F\to G$ assigns to each object $C$ of $\mathbb C$ an arrow $\tau_C\colon FC\to GC$ and to each proarrow $m\colon C\slashedrightarrow D$ a cell

FC \ar@{}[dr]|{\tau_m} \ar[d]_{\tau_C} \ar[r]^{Fm}|-@{|} & FD \ar[d]^{\tau_D} \\ GC \ar[r]_{Gm}|-@{|} & GD

in such a way that

* [Arrow Naturality] for each arrow $f\colon C\to D$, the square

FC \ar[d]_{\tau_C} \ar[r]^{Ff} & FD \ar[d]^{\tau_D} \\ GC \ar[r]_{Gf} & GD

commutes; and

* [Cell Naturality] for each multicell

\cdot \ar@{}[drrr]|{\mu} \ar[d]_f \ar[r]^{m_1}|-@{|} & \cdot \ar[r]^{m_2}|-@{|} & \cdots \ar[r]^{m_k}|-@{|} & \cdot\ar[d]^g\\ \cdot \ar[rrr]_n|-@{|} & & & \cdot

the composed multicells on either side of

\cdot \ar@{}[dr]|{\tau_{m_1}} \ar[d] \ar[r]^{Fm_1}|-@{|} & \cdot \ar@{}[dr]|{\tau_{m_2}} \ar[d] \ar[r]^{Fm_2}|-@{|} & \cdots\ar@{}[dr]|{\tau_{m_k}} \ar[r]^{Fm_k}|-@{|} & \cdot \ar[d] & & \cdot \ar@{}[drrr]|{F\mu} \ar[d] \ar[r]^{Fm_1}|-@{|} & \cdot \ar[r]^{Fm_2}|-@{|} & \cdots \ar[r]^{Fm_k}|-@{|} & \cdot \ar[d] \\ \cdot \ar@{}[drrr]|{G\mu} \ar[d] \ar[r]_{Gm_1}|-@{|} & \cdot \ar[r]_{Gm_2}|-@{|} & \cdots \ar[r]_{Gm_k}|-@{|} & \cdot \ar[d] & = & \cdot \ar@{}[drrr]|{\tau_n} \ar[d] \ar[rrr]_{Fn}|-@{|} & & & \cdot \ar[d] \\ \cdot \ar[rrr]_{Gn}|-@{|} & & & \cdot & & \cdot \ar[rrr]_{Gn}|-@{|} & & & \cdot

are equal.
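Note that in the unary case $k=1$ the cell-naturality axiom is the equation
\[
\tau_n\,(F\mu) \;=\; (G\mu)\,\tau_m
\]
of internal composites, the direct analogue of the cell-naturality condition for transformations of lax functors in Definition <ref>.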
Denote the 2-category of virtual double categories, their functors and transformations by $\vdbl$.

$\vdbl$ has (strict 2-)pullbacks. Given two functors $F\colon\mathbb A\to\mathbb C$ and $G\colon\mathbb B\to\mathbb C$, the pullback has as its underlying category
\[ (\mathbb A\times_{\mathbb C}\mathbb B)_0 = \mathbb A_0\times_{\mathbb C_0}\mathbb B_0. \]
Proarrows are pairs $(m,n)$ for a proarrow $m$ of $\mathbb A$ and one $n$ of $\mathbb B$ satisfying $Fm=Gn$. Similarly, multicells are pairs $(\mu,\nu)$ with $\mu$ in $\mathbb A$ and $\nu$ in $\mathbb B$ such that $F\mu=G\nu$. Composition uses composition in $\mathbb A$ and $\mathbb B$. This is a virtual double category fitting into a commutative square

\mathbb A\times_{\mathbb C}\mathbb B \ar[d]_{d_0} \ar[r]^{\;\;\;\;d_1} & \mathbb B \ar[d]^G \\ \mathbb A \ar[r]_F & \mathbb C

with the expected 2-categorical universal property <cit.>.

Further limits of a 2-categorical variety abound in $\vdbl$. Comma objects are of particular interest. These are well-known in $\cat$ (e.g. I.6 <cit.>). The formal abstraction and elementary phrasing of its universal property in an arbitrary 2-category appear in 1 of <cit.>.

The 2-category $\vdbl$ has comma objects. Given functors $F\colon\mathbb A\to\mathbb C$ and $G\colon\mathbb B\to\mathbb C$, the (purported) comma $F/G$ has its underlying 1-category as $(F/G)_0 = F_0/G_0$. Proarrows are triples $(m, \alpha, n)$ with $m$ and $n$ proarrows of $\mathbb A$ and $\mathbb B$, respectively, and $\alpha$ a cell $\alpha\colon Fm \Rightarrow Gn$ of $\mathbb C$. A multicell $(m_1,\alpha_1,n_1),\dots,(m_k,\alpha_k,n_k)\Rightarrow (p,\beta,q)$ is thus a pair of multicells

\cdot \ar@{}[drrr]|{\mu} \ar[d] \ar[r]^{m_1}|-@{|} & \cdot \ar[r]^{m_2}|-@{|} & \cdots \ar[r]^{m_k}|-@{|} & \cdot\ar[d] && \cdot \ar@{}[drrr]|{\nu} \ar[d] \ar[r]^{n_1}|-@{|} & \cdot \ar[r]^{n_2}|-@{|} & \cdots \ar[r]^{n_k}|-@{|} & \cdot\ar[d]\\ \cdot \ar[rrr]_p|-@{|} & & & \cdot && \cdot \ar[rrr]_q|-@{|} & & & \cdot

from $\mathbb A$ and $\mathbb B$, respectively, of the same arity and satisfying the equation

\cdot \ar@{}[dr]|{\alpha_1} \ar[d] \ar[r]^{Fm_1}|-@{|} & \cdot \ar@{}[dr]|{\alpha_2} \ar[d] \ar[r]^{Fm_2}|-@{|} & \cdots\ar@{}[dr]|{\alpha_k} \ar[r]^{Fm_k}|-@{|} & \cdot \ar[d] & & \cdot \ar@{}[drrr]|{F\mu} \ar[d] \ar[r]^{Fm_1}|-@{|} & \cdot \ar[r]^{Fm_2}|-@{|} & \cdots \ar[r]^{Fm_k}|-@{|} & \cdot \ar[d] \\ \cdot \ar@{}[drrr]|{G\nu} \ar[d] \ar[r]_{Gn_1}|-@{|} & \cdot \ar[r]_{Gn_2}|-@{|} & \cdots \ar[r]_{Gn_k}|-@{|} & \cdot \ar[d] & = & \cdot \ar@{}[drrr]|{\beta} \ar[d] \ar[rrr]_{Fp}|-@{|} & & & \cdot \ar[d] \\ \cdot \ar[rrr]_{Gq}|-@{|} & & & \cdot & & \cdot \ar[rrr]_{Gq}|-@{|} & & & \cdot

where $\beta\colon Fp\Rightarrow Gq$ is the cell of the target proarrow $(p,\beta,q)$. Composition is given by that in $\mathbb A$ and $\mathbb B$. So defined, $F/G$ comes with evident projection functors to $\mathbb A$ and $\mathbb B$. The expected transformation

\ar@{}[dr]|{\Rightarrow} F/G\ar[d]_{d_0} \ar[r]^{d_1} & \mathbb B \ar[d]^G \\ \mathbb A \ar[r]_F & \mathbb C

has components $f$ for objects $(A, f, B)$ and $\alpha$ for proarrows $(m,\alpha, n)$. It satisfies the required 2-categorical universal property in the reference by construction. This makes $\vdbl$ into a “representable 2-category" <cit.> and <cit.>.

[Slice Virtual Double Category] Let $D$ denote an object in a virtual double category $\mathbb D$. The slice virtual double category over $D$ is defined as the comma $1/D$

\ar@{}[dr]|{\Rightarrow} 1/D\ar[d] \ar[r]^{d_1} & \mathbf 1 \ar[d]^D \\ \mathbb D \ar[r]_1 & \mathbb D

Denote this as usual by $\mathbb D/D$. The coslice is defined analogously.
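Unwinding this definition together with the description of points above, the slice $\mathbb D/D$ admits an elementary description: an object is an arrow $a\colon A\to D$ of $\mathbb D$, and a proarrow $(m,\alpha)\colon a\slashedrightarrow b$ is a proarrow $m\colon A\slashedrightarrow B$ of $\mathbb D$ together with a cell

A \ar@{}[dr]|{\alpha} \ar[d]_{a} \ar[r]^{m}|-@{|} & B \ar[d]^{b} \\ D \ar[r]_{u_D}|-@{|} & D

of $\mathbb D$. A multicell $(m_1,\alpha_1),\dots,(m_k,\alpha_k)\Rightarrow (n,\beta)$ is a multicell $\mu$ of $\mathbb D$ for which pasting the cells $\alpha_i$ with the arity-$k$ generic multicell of $u_D$ agrees with the pasting of $\beta$ onto $\mu$.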
§.§ Virtual Double Category Structure on Double Presheaves

The virtual double category structure on $\mathbf{Lax}(\mathbb B^{op},\spn)$ will be given by taking so-called “modules" as proarrows and “multimodulations" as the multicells. Here we revisit the definitions for lax functors between arbitrary double categories.

A path of proarrows is a sequence $\mathbf m = (m_1, \dots, m_k)$ such that, reading left to right, the target of one proarrow is the source of the next. The external composite is denoted by $[\mathbf m]$.

[Cf. 3.2 <cit.>] A module between lax functors $M\colon F\slashedrightarrow G\colon \mathbb A\rightrightarrows \mathbb B$ of double categories assigns

* to each proarrow $m\colon A \slashedrightarrow B$ of $\mathbb A$, a proarrow $Mm\colon FA\slashedrightarrow GB$ of $\mathbb B$;
* to each cell of $\mathbb A$ as at left, one of $\mathbb B$ as at right

$$\xymatrix{ \ar @{} [dr] |{\theta} A \ar[r]^{m}|-@{|} \ar[d]_{f} & B \ar[d]^{g} & & & & \ar @{} [dr] |{M\theta} FA \ar[r]^{Mm}|-@{|} \ar[d]_{Ff} & GB \ar[d]^{Gg} \\ C \ar[r]_{n}|-@{|} & D & & & & FC \ar[r]_{Mn}|-@{|} & GD \\

* for each pair of proarrows $m\colon A\slashedrightarrow B$ and $n\colon B\slashedrightarrow C$, action multicells of $\mathbb B$

$$\xymatrix{ \ar @{} [drr] |{\lambda_{m,n}} \cdot \ar[r]^{Fm}|-@{|} \ar@{=}[d]_{}& \cdot \ar[r]^{Mn}|-@{|} & \cdot \ar@{=}[d] & & & \ar @{} [drr] |{\rho_{m,n}} \cdot \ar[r]^{Mm}|-@{|} \ar@{=}[d]_{}& \cdot \ar[r]^{Gn}|-@{|} & \cdot \ar@{=}[d] \\ \cdot \ar[rr]_{M(n\otimes m)}|-@{|}& & \cdot & & & \cdot \ar[rr]_{M(n\otimes m)}|-@{|}& & \cdot \\

in such a way that the following axioms hold.

* [Functoriality] For any (internal) composite of monocells $\beta\alpha$ and any proarrow $m$, the equations $M\beta M\alpha = M(\beta\alpha)$ and $M1_m= 1_{Mm}$ hold;
* [Naturality] For any external composite $\beta\otimes \alpha$, the equalities

$$\xymatrix{ \ar@{}[drr]|{\lambda} \cdot \ar[r]^{Fm}|-@{|} \ar@{=}[d]& \cdot \ar[r]^{Mn}|-@{|}& \cdot \ar@{=}[d]& & & & \ar@{}[dr]|{F\alpha} \cdot \ar[r]^{Fm}|-@{|} \ar[d] & \ar@{}[dr]|{M\beta} \cdot \ar[r]^{Mn}|-@{|} \ar[d]& \cdot\ar[d] \\ \ar@{}[drr]|{M(\beta\otimes \alpha)}\cdot \ar[rr]_{}|-@{|} \ar[d] & & \cdot \ar[d] & & =& & \ar@{}[drr]|{\lambda}\cdot \ar[r]|-@{|} \ar@{=}[d]& \cdot \ar[r]|-@{|}& \cdot \ar@{=}[d] \\ \cdot \ar[rr]_{M(q\otimes p)}|-@{|} & & \cdot & & & & \cdot \ar[rr]_{M(q\otimes p)}|-@{|} & & \cdot

$$\xymatrix{ \ar@{}[drr]|{\rho} \cdot \ar[r]^{Mm}|-@{|} \ar@{=}[d]& \cdot \ar[r]^{Gn}|-@{|}& \cdot \ar@{=}[d]& & & & \ar@{}[dr]|{M\alpha} \cdot \ar[r]^{Mm}|-@{|} \ar[d] & \ar@{}[dr]|{G\beta} \cdot \ar[r]^{Gn}|-@{|} \ar[d]& \cdot\ar[d] \\ \ar@{}[drr]|{M(\beta\otimes \alpha)}\cdot \ar[rr]_{}|-@{|} \ar[d] & & \cdot \ar[d] & & =& & \ar@{}[drr]|{\rho}\cdot \ar[r] \ar@{=}[d]& \cdot \ar[r]& \cdot \ar@{=}[d] \\ \cdot \ar[rr]_{M(q\otimes p)}|-@{|} & & \cdot & & & & \cdot \ar[rr]_{M(q\otimes p)}|-@{|} & & \cdot

both hold.
* [Associativity] For any composable sequence of proarrows $(m,n,p)$ of $\mathbb A$, the equalities (modulo suppressed associativity isomorphisms)

$$\xymatrix{ \ar@{}[dr]|{1}\cdot \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \ar@{}[drr]|{\lambda} \cdot \ar@{=}[d] \ar[r]^{Fn}|-@{|} & \cdot \ar[r]^{Mp}|-@{|} & \cdot \ar@{=}[d] & & & & \ar@{}[drr]|{F\gamma}\cdot \ar@{=}[d] \ar[r]^{Fm}|-@{|} & \cdot \ar[r]^{Fn}|-@{|} & \cdot \ar@{}[dr]|{1} \ar@{=}[d] \ar[r]^{Mp}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{}[drrr]|{\lambda} \cdot \ar@{=}[d] \ar[r]|-@{|} & \cdot \ar[rr]|-@{|} & & \cdot \ar@{=}[d] & & = & & \ar@{}[drrr]|{\lambda} \cdot \ar@{=}[d] \ar[rr]|-@{|} & & \cdot \ar[r]|-@{|} & \cdot \ar@{=}[d]\\ \cdot \ar[rrr]_{M(p\otimes (n\otimes m))}|-@{|} & & & \cdot & & & & \cdot \ar[rrr]_{M((p\otimes n)\otimes m)}|-@{|}& & & \cdot\\

$$\xymatrix{ \ar@{}[drr]|{\rho} \cdot \ar[r]^{Mm}|-@{|} \ar@{=}[d] & \cdot \ar[r]^{Gn}|-@{|} & \ar@{}[dr]|{1} \cdot \ar[r]^{Gp}|-@{|} \ar@{=}[d] & \cdot \ar@{=}[d] & & & & \ar@{}[dr]|{1} \cdot \ar[r]^{Mm}|-@{|} \ar@{=}[d] & \ar@{}[drr]|{G\gamma} \cdot \ar[r]^{Gn}|-@{|} \ar@{=}[d] & \cdot \ar[r]^{Gp}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{}[drrr]|{\rho} \cdot \ar@{=}[d] \ar[rr]|-@{|} & & \cdot \ar[r]|-@{|} & \cdot \ar@{=}[d] & & = & & \ar@{}[drrr]|{\rho}\cdot \ar@{=}[d] \ar[r]|-@{|} & \cdot \ar[rr]|-@{|} & & \cdot \ar@{=}[d]\\ \cdot \ar[rrr]_{M(p\otimes (n\otimes m))}|-@{|} & & & \cdot & & & & \cdot \ar[rrr]_{M((p\otimes n)\otimes m)}|-@{|} & & & \cdot\\

are valid.

* [Compatibility] The composites

$$\xymatrix{ \ar@{}[drr]|{\lambda} \cdot \ar[r]^{Fm}|-@{|} \ar@{=}[d] & \cdot \ar[r]^{Mn}|-@{|} & \ar@{}[dr]|{1} \cdot \ar[r]^{Gp}|-@{|} \ar@{=}[d] & \cdot \ar@{=}[d] & & & & \ar@{}[dr]|{1} \cdot \ar[r]^{Fm}|-@{|} \ar@{=}[d] & \ar@{}[drr]|{\rho} \cdot \ar[r]^{Mn}|-@{|} \ar@{=}[d] & \cdot \ar[r]^{Gp}|-@{|} & \cdot \ar@{=}[d] \\ \ar@{}[drrr]|{\rho} \cdot \ar@{=}[d] \ar[rr]|-@{|} & & \cdot \ar[r]|-@{|} & \cdot \ar@{=}[d] & & = & & \ar@{}[drrr]|{\lambda}\cdot \ar@{=}[d] \ar[r]|-@{|} & \cdot \ar[rr]|-@{|} & & \cdot \ar@{=}[d]\\ \cdot \ar[rrr]_{M(p\otimes (n\otimes m))}|-@{|} & & & \cdot & & & & \cdot \ar[rrr]_{M((p\otimes n)\otimes m)}|-@{|} & & & \cdot\\

are equal.

* [Unit] For any proarrow $m\colon A\slashedrightarrow B$ of $\mathbb A$, the composites

$$\xymatrix{ \ar@{}[dr]|{\gamma} \cdot \ar[r]^{u_{FA}}|-@{|} \ar@{=}[d]& \ar@{}[dr]|{1} \cdot \ar[r]^{Mm}|-@{|} \ar@{=}[d] & \cdot \ar@{=}[d] & & & & \ar@{}[dr]|{1} \cdot \ar[r]^{Mm}|-@{|} \ar[d] & \ar@{}[dr]|{\gamma} \cdot \ar[r]^{u_{GB}}|-@{|} \ar[d]& \cdot\ar[d] \\ \ar@{}[drr]|{\lambda} \cdot \ar[r]_{Fu_A}|-@{|} \ar[d] & \cdot \ar[r]|-@{|} & \cdot \ar[d] & & \text{and}& & \ar@{}[drr]|{\rho}\cdot \ar[r]|-@{|} \ar@{=}[d]& \cdot \ar[r]_{Gu_B}|-@{|}& \cdot \ar@{=}[d] \\ \cdot \ar[rr]_{M(m\otimes u_A)}|-@{|} & & \cdot & & & & \cdot \ar[rr]_{M(u_B\otimes m)}|-@{|} & & \cdot

are equal to the respective canonical composition multicells for $(u_{FA},Mm)$ and $(Mm,u_{GB})$.

Multicells are given by the notion of a “multimodulation," recalled next. A path of cells $\theta = (\theta_1,\dots, \theta_k)$ is a sequence of cells $\theta_1\colon m_1\Rightarrow n_1,\dots, \theta_k\colon m_k\Rightarrow n_k$ in which the target of a given cell is equal to the source of the next. Externally composable sequences of modules $M_1,\dots, M_k$ between lax functors will be thought of as proarrows $F^i\slashedrightarrow F^{i+1}$ from a lax functor $F^i$ to one $F^{i+1}$, notated with superscripts so as not to confuse these with the components of the functors.
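As a guiding example before the multicells are defined, consider modules in the simplest case $\mathbb A = \mathbf 1$ of Example <ref>, where lax functors $F, G\colon \mathbf 1\rightrightarrows\spn$ are small categories $\C$ and $\D$. A module $M\colon F\slashedrightarrow G$ then amounts to a span together with action maps
\[
\C_0 \longleftarrow M \longrightarrow \D_0,
\qquad
\lambda\colon \C_1\times_{\C_0} M \longrightarrow M,
\qquad
\rho\colon M\times_{\D_0} \D_1 \longrightarrow M,
\]
compatible with composition and identities; that is, a profunctor between $\C$ and $\D$ (with variance depending on the orientation conventions in play). This is the pattern behind the identification $\prof(\C) = \dblmod(\Span(\C))$ recalled in the next subsection.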
[See 4.1 of <cit.> and 1.2.3 of <cit.>] A multimodulation \ar@{}[drrr]|{\mu}\cdot\ar[d]_{\tau} \ar[r]^{M_1}|-@{|} & \cdot\ar[r]^{M_2}|-@{|} &\cdots \ar[r]^{M_k}|-@{|} & \cdot \ar[d]^{\sigma} \\ \cdot \ar[rrr]_{N}|-@{|}&&& \cdot from modules $M_i$ to $N$ with source $\tau$ and target $\sigma$ assigns to each path $\mathbf m = (m_1, \dots, m_k)$ of proarrows of $\mathbb A$, a multicell \ar@{}[drrr]|{\mu_{\mathbf m}}\cdot\ar[d]_{\tau} \ar[r]^{M_1m_1}|-@{|} & \cdot\ar[r]^{M_2m_2}|-@{|} &\cdots \ar[r]^{M_km_k}|-@{|} & \cdot \ar[d]^{\sigma} \\ \cdot \ar[rrr]_{N{[\mathbf m]}}|-@{|}&&& \cdot in such a way that the following axioms are satisfied. * [Naturality] for any path of cells $\theta_1\colon m_1\Rightarrow n_1,\dots, \theta_k\colon m_k\Rightarrow n_k$, the two composites \cdot \ar[d] \ar@{}[dr]|{M_1\theta_1} \ar[r]^{M_1m_1}|-@{|} & \cdot \ar[d]\ar@{}[dr]|{M_2\theta_2} \ar[r]^{M_2m_2}|-@{|} & \cdots \ar@{}[dr]|{M_k\theta_k} \ar[r]^{M_km_k} &\ar[d] \cdot && \cdot \ar@{}[drrr]|{\mu_{\mathbf m}} \ar[d] \ar[r]^{M_1m_1}|-@{|} & \cdot \ar[r]^{M_2m_2}|-@{|} & \cdots \ar[r]^{M_km_k} &\ar[d] \cdot\\ \cdot \ar[d] \ar@{}[drrr]|{\mu_{\mathbf n}} \ar[r]_{M_1n_1}|-@{|} & \cdot \ar[r]_{M_2n_2} & \cdots \ar[r]_{M_kn_k} &\cdot \ar[d] & = & \cdot\ar[d] \ar@{}[drrr]|{N[\theta]} \ar[rrr]_{N[\mathbf m]}|-@{|} &&& \cdot\ar[d]\\ \cdot \ar[rrr]_{N[\mathbf n]} &&&\cdot & & \cdot \ar[rrr]_{N[\mathbf n]}|-@{|} &&&\cdot are equal. * The following equivariance axioms are satisfied: * [Left and Right Equivariance] For any paths of proarrows $y\mathbf m = (m_1,\dots, m_k,y)$ and $\mathbf mx = (x, m_1, \dots, m_k)$, the composites on either side of \ar@{}[dr]|{\tau_x}\cdot \ar[r]^{F^0x}|-@{|} \ar[d]_{} & \ar@{}[drrr]|{\mu_{[\mathbf m]}}\cdot \ar[r]^{M_1m_1}|-@{|} \ar[d]^{} & \cdot \ar[r]^{M_2m_2}|-@{|} &\cdots \ar[r]^{M_km_k}|-@{|}& \cdot \ar[d]^{} & & \ar@{}[drr]|{\lambda} \cdot \ar@{=}[d] \ar[r]^{F^0x}|-@{|} & \cdot \ar[r]^{M_1m_1}|-@{|} & \ar@{}[dr]|{1} \cdot \ar@{=}[d] \ar[r]^{M_2m_2}|-@{|} & \ar@{}[dr]|{1}\cdots \ar[r]^{M_km_k}|-@{|} & \cdot\ar@{=}[d] \\ \ar@{}[drrrr]|{\lambda}\cdot \ar[r]_{G^0x}|-@{|} \ar@{=}[d] &\cdot \ar[rrr]_{N[\mathbf m]}|-@{|} & & & \cdot \ar@{=}[d] & = &\ar@{}[drrrr]|{\mu_{[\mathbf mx]}} \cdot \ar[rr]_{M_1(m_1\otimes x)}|-@{|} \ar[d]_{} & & \cdot \ar[r]_{M_2m_2}|-@{|} & \cdots \ar[r]_{M_km_k}|-@{|}& \cdot \ar[d]^{} \\ \cdot \ar[rrrr]_{N[\mathbf mx]}|-@{|} & && & \cdot &&\cdot \ar[rrrr]_{N_{[\mathbf mx]}}|-@{|} & &&&\cdot \ar@{}[drrr]|{\mu_{[\mathbf m]}}\cdot \ar[d]_{} \ar[r]^{M_1m_1}|-@{|}&\cdots \ar[r]^{}|-@{|} & \cdot \ar[r]^{M_km_k}|-@{|} & \ar@{}[dr]|{\tau_y} \cdot \ar[d] \ar[r]^{F^ny}|-@{|} & \cdot \ar[d]^{} && \ar@{}[dr]|{1}\cdot \ar[r]^{M_1m_1}|-@{|} \ar@{=}[d]& \ar@{}[dr]|{1} \cdots \ar[r]^{}|-@{|} & \ar@{}[drr]|{\rho}\cdot\ar@{=}[d] \ar[r]^{M_km_k}|-@{|} &\cdot\ar[r]^{F^ny}|-@{|}&\cdot \ar@{=}[d] \\ \ar@{}[drrrr]|{\rho}\cdot \ar@{=}[d] \ar[rrr]_{N[\mathbf m]}|-@{|} && & \cdot \ar[r]_{G^ny}|-@{|} &\cdot \ar@{=}[d] & = & \ar@{}[drrrr]|{\mu[y\mathbf m]}\cdot \ar[d]_{} \ar[r]_{M_1m_1}|-@{|} &\cdots \ar[r]_{}|-@{|} &\cdot \ar[rr]_{M_k(y\otimes m_k)}|-@{|} && \cdot\ar[d]^{}\\ \cdot \ar[rrrr]_{N[y\mathbf m]}|-@{|} &&& & &&\cdot \ar[rrrr]_{N[y\mathbf m]}|-@{|} && & &\cdot are equal. 
* [Inner Equivariance] For any path of proarrows $(m_1,\dots, m_i,x,m_{i+1},\dots, m_k)$, the composite $$\xymatrix{ \ar@{}[dr]|{1}\cdot \ar@{=}[d] \ar[r]^{M_1m_1}|-@{|} &\ar@{}[dr]|{1}\cdots\ar[r]^{}|-@{|} &\ar@{}[drr]|{\rho}\cdot\ar@{=}[d]\ar[r]^{M_im_i}|-@{|}&\cdot\ar[r]^{F^ix}|-@{|}&\ar@{}[dr]|{1}\cdot\ar@{=}[d]\ar[r]^{}|-@{|}&\ar@{}[dr]|{1}\cdots\ar[r]^{M_km_k}|-@{|}&\cdot \ar@{=}[d]\\ \ar@{}[drrrrrr]|{\mu_{[\mathbf m]}}\cdot\ar[d]_{}\ar[r]^{}|-@{|}&\cdots\ar[r]^{}|-@{|}&\cdot\ar[rr]^{}|-@{|}&&\cdot\ar[r]^{}|-@{|}&\cdots\ar[r]^{}|-@{|}&\cdot \ar[d]^{} \\ \cdot \ar[rrrrrr]_{N[\mathbf m]}|-@{|} &&&&&& \cdot is equal to $$\xymatrix{ \ar@{}[dr]|{1}\cdot \ar@{=}[d] \ar[r]^{M_1m_1}|-@{|} &\ar@{}[dr]|{1}\cdots\ar[r]^{}|-@{|} &\ar@{}[drr]|{\lambda}\cdot\ar@{=}[d]\ar[r]^{F^ix}|-@{|}&\cdot\ar[r]^{M_{i+1}m_{i+1}}|-@{|}&\ar@{}[dr]|{1}\cdot\ar@{=}[d]\ar[r]^{}|-@{|}&\ar@{}[dr]|{1}\cdots\ar[r]^{M_km_k}|-@{|}&\cdot \ar@{=}[d]\\ \ar@{}[drrrrrr]|{\mu_{[\mathbf m]}}\cdot\ar[d]_{}\ar[r]^{}|-@{|}&\cdots\ar[r]^{}|-@{|}&\cdot\ar[rr]^{}|-@{|}&&\cdot\ar[r]^{}|-@{|}&\cdots\ar[r]^{}|-@{|}&\cdot \ar[d]^{} \\ \cdot \ar[rrrrrr]_{N[\mathbf m]}|-@{|} &&&&&& \cdot \\ for $i=1,\dots, k-1$. [Paré] Lax double functors $F\colon \mathbb A\to\mathbb B$ and horizontal transformations, together with modules and their multimodulations giving the proarrows and multicells, comprise a virtual double category denoted by $\dbllax(\mathbb A,\mathbb B)$. See the lead-up to Theorem 1.2.5 of <cit.>. Our main interest is of course in the virtual double category $\dbllax(\mathbb B^{op},\spn)$. By the Theorem, in general, this is not a genuine double category. This is because composition of proarrows need not exist. The paper <cit.> is a dedicated study of this issue. §.§ Monoids and Modules The additional structure on $\dfib(\mathbb B)$ making it a virtual double category goes by well-known terminology from another context. We already know that $\dfib(\mathbb B)$ could have been defined as $\cat(\dfib)/\mathbb B$. Another way of looking at $\cat(\dfib)$ is that it is the category $\mon(\Span(\dfib))$ of monoids in spans in discrete fibrations. In general $\mon(\Span(\C))$ for a category $\C$ with finite limits is the underlying category of the virtual double category $\prof(\C) = \dblmod(\Span(\C))$ of modules in spans in $\C$ as in <cit.> or <cit.>. So, we take the modules and their multicells from this context as the virtual double category structure on $\dfib(\mathbb B)$. Here are the definitions. [Cf. 5.3.1 <cit.> or 2.8 of <cit.>] Let $\mathbb D$ denote a virtual double category. Define its virtual double category of monoids and modules, denoted by $\dblmod(\mathbb D)$, by taking * objects: monoids, namely, triples $(r,\mu,\eta)$ consisting of a proarrow $r\colon A\slashedrightarrow A$ and cells $$\xymatrix{ \ar@{}[drr]|{\mu} \ar@{=}[d] \ar[r]^{r}|-@{|} A & A \ar[r]^{r}|-@{|} & A \ar@{=}[d] & \ar@{}[dr]|{\eta} A \ar@{=}[r] \ar@{=}[d] & A \ar@{=}[d] \\ A \ar[rr]_{r}|-@{|} & & A & A \ar[r]_{r}|-@{|} & A \\ satisfying the usual axioms for a monoid, namely, the multiplication law $\mu(1,\mu) = \mu(\mu,1)$ and the unit laws $\mu(1,\eta)=1$ and $\mu(\eta,1)=1$; * arrows: monoid homomorphisms $(r,\mu,\eta) \to (s,\nu,\epsilon)$, namely, those pairs $(f,\phi)$ consisting of an arrow $f\colon A\to B$ and a cell $$\xymatrix{ \ar@{}[dr]|{\phi} A \ar[d]_{f} \ar[r]^{r}|-@{|} & A\ar[d]^{f}\\ B \ar[r]_{s}|-@{|}& B\\ satisfying the unit axiom $\phi\eta = \epsilon f$ and multiplication axiom $\nu(\phi,\phi) = \phi\mu$.
* proarrows: so-called modules $(r,\mu,\eta) \slashedrightarrow (s,\nu, \epsilon)$, namely, triples $(m,\lambda,\rho)$ with $m\colon A\slashedrightarrow B$ a proarrow and $\lambda$, $\rho$ left and right action cells $$\xymatrix{ \ar@{}[drr]|{\lambda} A \ar[r]^{r}|-@{|} \ar@{=}[d] & A \ar[r]^{m}|-@{|} & B \ar@{=}[d] & \ar@{}[drr]|{\rho} A \ar@{=}[d] \ar[r]^{m}|-@{|} & B \ar[r]^{s}|-@{|} & B \ar@{=}[d] \\ A \ar[rr]_{m}|-@{|} & & B & A \ar[rr]_{m}|-@{|} & & B \\ satisfying the module axioms $\lambda(\mu, 1) = \lambda(1,\lambda)$ and $\rho(1,\mu) = \rho(\rho,1)$ for the multiplication and $\lambda(\eta,1) = 1$ and $\rho(1,\eta) = 1$ for the units; a sequence of modules consists of finitely many modules $(m_i,\lambda_i,\rho_i)$ for which $\src\,m_{i+1} = \tgt\,m_{i}$ and $s_{i+1} = r_{i}$ both hold; * multicells from a sequence of modules $(m_i, \lambda_i,\rho_i)$ to one $(n, \lambda, \rho)$ consist of those multicells in $\mathbb A$ $$\xymatrix{ \ar@{}[drr]|{\gamma} \cdot \ar[d]_{f} \ar[r]^{m_1}|-@{|} & \cdots \ar[r]^{m_p}|-@{|} & \cdot \ar[d]^{g} \\ \cdot \ar[rr]_{n}|-@{|} && \cdot \\ satisfying the equivariance axioms expressed by the equalities of composite cells: * [Left] \ar@{}[dr]|{\phi}\cdot \ar[r]^{r_1}|-@{|} \ar[d]_{f} & \ar@{}[drrr]|{\gamma}\cdot \ar[r]^{m_1}|-@{|} \ar[d]^{f} & \cdot \ar[r]^{m_2}|-@{|} &\cdots \ar[r]^{m_p}|-@{|}& \cdot \ar[d]^{g} & & \ar@{}[drr]|{\lambda} \cdot \ar@{=}[d] \ar[r]^{r_1}|-@{|} & \cdot \ar[r]^{m_1}|-@{|} & \ar@{}[dr]|{1} \cdot \ar@{=}[d] \ar[r]^{m_2}|-@{|} & \ar@{}[dr]|{1}\cdots \ar[r]^{m_p}|-@{|} & \cdot\ar@{=}[d] \\ \ar@{}[drrrr]|{\lambda}\cdot \ar[r]_{s_1}|-@{|} \ar@{=}[d] &\cdot \ar[rrr]_{n}|-@{|} & & & \cdot \ar@{=}[d] & = &\ar@{}[drrrr]|{\gamma} \cdot \ar[rr]_{m_1}|-@{|} \ar[d]_{f} & & \cdot \ar[r]_{m_2}|-@{|} & \cdots \ar[r]_{m_p}|-@{|}& \cdot \ar[d]^{g} \\ \cdot \ar[rrrr]_{n}|-@{|} & && & \cdot &&\cdot \ar[rrrr]_{n}|-@{|} & &&&\cdot * [Right] \ar@{}[drrr]|{\gamma}\cdot \ar[d]_{f} \ar[r]^{m_1}|-@{|}&\cdots \ar[r]^{m_{p-1}}|-@{|} & \cdot \ar[r]^{m_p}|-@{|} & \ar@{}[dr]|{\psi} \cdot \ar[d]_{g} \ar[r]^{r_p}|-@{|} & \cdot \ar[d]^{g} && \ar@{}[dr]|{1}\cdot \ar[r]^{m_1}|-@{|} \ar@{=}[d]& \ar@{}[dr]|{1} \cdots \ar[r]^{m_{p-1}}|-@{|} & \ar@{}[drr]|{\rho}\cdot\ar@{=}[d] \ar[r]^{m_p}|-@{|} &\cdot\ar[r]^{r_p}|-@{|}&\cdot \ar@{=}[d] \\ \ar@{}[drrrr]|{\rho}\cdot \ar@{=}[d] \ar[rrr]_{n}|-@{|} && & \cdot \ar[r]_{s_q}|-@{|} &\cdot \ar@{=}[d] &=&\ar@{}[drrrr]|{\gamma}\cdot \ar[d]_{f} \ar[r]_{m_1}|-@{|} &\cdots \ar[r]_{m_{p-1}}|-@{|} &\cdot \ar[rr]_{m_p}|-@{|} && \cdot\ar[d]^{g}\\ \cdot \ar[rrrr]_{n}|-@{|} &&& & &&\cdot \ar[rrrr]_{n}|-@{|} && & &\cdot * [Inner] $$\xymatrix{ \ar@{}[dr]|{1}\cdot \ar@{=}[d] \ar[r]^{m_1}|-@{|} &\ar@{}[dr]|{1}\cdots\ar[r]^{m_{i-1}}|-@{|} &\ar@{}[drr]|{\rho}\cdot\ar@{=}[d]\ar[r]^{m_i}|-@{|}&\cdot\ar[r]^{r_i}|-@{|}&\ar@{}[dr]|{1}\cdot\ar@{=}[d]\ar[r]^{m_{i+1}}|-@{|}&\ar@{}[dr]|{1}\cdots\ar[r]^{m_p}|-@{|}&\cdot \ar@{=}[d]\\ \ar@{}[drrrrrr]|{\gamma}\cdot\ar[d]_{f}\ar[r]^{}|-@{|}&\cdots\ar[r]^{}|-@{|}&\cdot\ar[rr]^{}|-@{|}&&\cdot\ar[r]^{}|-@{|}&\cdots\ar[r]^{}|-@{|}&\cdot \ar[d]^{g} \\ \cdot \ar[rrrrrr]_{n}|-@{|} &&&&&& \cdot is equal to $$\xymatrix{ \ar@{}[dr]|{1}\cdot \ar@{=}[d] \ar[r]^{m_1}|-@{|} &\ar@{}[dr]|{1}\cdots\ar[r]^{m_{i}}|-@{|} &\ar@{}[drr]|{\lambda}\cdot\ar@{=}[d]\ar[r]^{r_i}|-@{|}&\cdot\ar[r]^{m_{i+1}}|-@{|}&\ar@{}[dr]|{1}\cdot\ar@{=}[d]\ar[r]^{m_{i+2}}|-@{|}&\ar@{}[dr]|{1}\cdots\ar[r]^{m_p}|-@{|}&\cdot \ar@{=}[d]\\ 
\ar@{}[drrrrrr]|{\gamma}\cdot\ar[d]_{f}\ar[r]^{}|-@{|}&\cdots\ar[r]^{}|-@{|}&\cdot\ar[rr]^{}|-@{|}&&\cdot\ar[r]^{}|-@{|}&\cdots\ar[r]^{}|-@{|}&\cdot \ar[d]^{g} \\ \cdot \ar[rrrrrr]_{n}|-@{|} &&&&&& \cdot \\ for $i=1,\dots, p-1$. Compositions and identities are given by those in $\mathbb D$. For any virtual double category $\mathbb D$, $\dblmod(\mathbb D)$ has units. Any object in $\dblmod(\mathbb D)$ is a monoid. Its equipped multiplication gives the unit proarrow. For more see 5.5 of <cit.>. The definition above omits most of the diagrams and states just the equations out of space considerations. However, upon writing down all the diagrams, one might notice a formal similarity between these axioms and those for modules and multimodulations in $\dbllax(\mathbb B^{op},\spn)$. It is the point of the next section to show that this is in fact an equivalence of virtual double categories. First, however, let us consider some examples. §.§ Examples Let $\mathscr C$ denote a category with finite limits. Then $\Span(\mathscr C)$ is a double category. As in <cit.>, denote $\dblmod(\Span(\mathscr C))$ by $\prof(\mathscr C)$. Several choices of $\C$ are of interest. Modules in $\prof(\C)$ are already known by well-established terminology. [Cf. 2.41 of <cit.>] Let $\mathscr C$ denote a category with finite limits; and let $\mathbb C$ and $\mathbb D$ denote internal categories. An internal profunctor $M\colon \mathbb C\slashedrightarrow\mathbb D$ is a module, i.e. a proarrow in $\prof(\mathscr C)$. A multicell of internal profunctors is thus a multicell as above. The virtual double category $\prof(\set)$ is $\prof$. That is, a monoid in $\Span(\set)$ is a category. A unit proarrow for such a $\C$ is thus the span $\C_0 \leftarrow \C_1 \to \C_0$ formed from the domain and codomain maps with actions given by composition. Letting $\C = \cat$ as a 1-category, $\prof(\cat)$ consists of usual double categories and double functors as the objects and arrows. Internal profunctors $M\colon\mathbb A\slashedrightarrow \mathbb B$ between double categories consist of a span $\mathbb A_0 \xleftarrow{\partial_0} \mathscr M \xrightarrow{\partial_1} \mathbb B_0$ and left and right action functors \[ L\colon \mathbb A_1\times_{\mathbb A_0} \mathscr M \longrightarrow \mathscr M \qquad\qquad R\colon \mathscr M\times_{\mathbb B_0} \mathbb B_1 \longrightarrow \mathscr M \] satisfying the axioms above. A multicell of internal profunctors $(M_1,\dots, M_k) \Rightarrow N$ thus consists of a functor \[ m\colon \mathscr M^1\times_{\mathbb A^1_0} \cdots \times_{\mathbb A^k_0}\mathscr M^k \longrightarrow \mathscr N \] from the vertex of the composite of $(M_1,\dots, M_k)$, making a morphism of spans, and satisfying the various equivariance requirements as in the definition. Notice that owing to the peculiarities of the cell structure of $\mathbf 1$ as in Example <ref>, a point $\mathbb D \colon \mathbf 1\to\prof(\cat)$ is a double category $\mathbb D$, with the identity double functor $1\colon \mathbb D\to\mathbb D$, the unit proarrow $u\colon\mathbb D\slashedrightarrow \mathbb D$, namely, the span formed by the external source and target functors with actions given by external composition, and finally multicells of all arities given by iterated external composition. This can all be generalized to $\cat(\C)$ for arbitrary $\C$ with finite limits. Let $\C = \cat^{\mathbf 2}$, the “arrow category" of $\cat$ and consider $\prof(\cat^{\mathbf 2})$. A monoid is then a double functor, a morphism is a commutative square of double functors.
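Before unpacking the arrow-category example further, the observation above that a monoid in $\Span(\set)$ is precisely a small category can be made concrete. A minimal Python sketch with made-up two-object data (all names ours):

```python
# A monoid in Span(Set) -- a span C0 <- C1 -> C0 with unit and multiplication
# satisfying the monoid axioms -- is exactly a small category. Toy data:
C0 = {"X", "Y"}                                               # objects
C1 = {"idX": ("X", "X"), "idY": ("Y", "Y"), "f": ("X", "Y")}  # arrow: (src, tgt)
src, tgt = (lambda a: C1[a][0]), (lambda a: C1[a][1])

eta = {"X": "idX", "Y": "idY"}     # unit C0 -> C1 picks out identity arrows

def mu(a, b):
    """Multiplication, defined on the pullback of composable pairs: composition."""
    assert tgt(a) == src(b), "not a composable pair"
    if a == eta[src(a)]:
        return b                    # left unit law
    if b == eta[tgt(b)]:
        return a                    # right unit law
    raise ValueError("composite not specified in this toy category")

assert mu("idX", "f") == "f" and mu("f", "idY") == "f"
```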
A module consists of two modules in the former sense – one between the domains of the two double categories and one between the codomains; the vertices of these modules are related by a functor making a morphism of spans. Multicells have a similar “two-tiered" structure. Letting $\C=\dfib$, the virtual double category $\prof(\dfib)$ is the sub-virtual double category of the previous example where all the objects are not just double functors but are instead discrete double fibrations. There is a codomain functor \[ \cod\colon\prof(\dfib) \longrightarrow \prof(\cat) \] taking an object $P\colon \mathbb E\to\mathbb B$ to its codomain $\mathbb B$ and every proarrow $M\colon P\slashedrightarrow Q$ to the module between double categories giving the codomains of $M$. Take $\prof(\dfib)/\mathbb B$ to be the pullback of $\cod$ in $\vdbl$ along the point $\mathbb B \colon \mathbf 1 \to \prof(\cat)$. The virtual double category of discrete double fibrations over a double category $\mathbb B$ is $\prof(\dfib)/\mathbb B$. Denote this by $\dblfib(\mathbb B)$. A module between discrete double fibrations $M\colon P\slashedrightarrow Q$ thus consists of a discrete fibration $M\colon\mathscr M\to\mathbb B_1$ and a morphism of spans \mathbb E_0 \ar[d]_{P_0} & \mathscr M \ar[l]_{\partial_0} \ar[r]^{\partial_1} \ar[d]^{M} & \mathbb G_0 \ar[d]^{Q_0} \\ \mathbb B_0 & \mathbb B_1 \ar[l]^{\src} \ar[r]_{\tgt} & \mathbb B_0 and left and right action functors making commutative squares $$\xymatrix{ \mathbb E_1\times_{\mathbb E_0} \mathscr M \ar[r]^{\;\;\;\;L} \ar[d]_{P_1\times M} & \mathscr M \ar[d]^{M} & & \mathscr M\times_{\mathbb G_0}\mathbb G_1 \ar[r]^{\;\;\;\;R} \ar[d]_{M\times Q_1} & \mathscr M \ar[d]^M \\ \mathbb B_1\times_{\mathbb B_0}\mathbb B_1 \ar[r]_{\;\;\;\;\;-\otimes -} & \mathbb B_1 & & \mathbb B_1\times_{\mathbb B_0}\mathbb B_1 \ar[r]_{\;\;\;\;\;-\otimes -} & \mathbb B_1 that satisfy the action requirements as in the definition. A multicell $\mu\colon (M^1,\dots, M^k) \Rightarrow N$ between such modules consists of a functor $\mu$ making a commutative square $$\xymatrix{ \mathscr M^1\times_{\mathbb E_0^1}\cdots \times_{\mathbb E_0^k}\mathscr M^k \ar[d] \ar[rr]^{\qquad\mu} & & \mathscr N \ar[d]^N \\ \mathbb B_1\times_{\mathbb B_0} \cdots \times_{\mathbb B_0}\mathbb B_1 \ar[rr]_{\qquad-\otimes -\cdots -\otimes -} & & \mathbb B_1 satisfying the equivariance requirements above. As in 3.9 of <cit.>, the mod-construction $\dblmod(-)$ defines an endo-2-functor $\dblmod(-)\colon\vdbl\to\vdbl$. Another way to look at the codomain functor of the previous example is that it is induced from the codomain functor $\cod \colon\dfib \to\cat$, passing first through $\Span(-)$ and then $\dblmod(-)$. § THE FULL REPRESENTATION THEOREM This section extends the result of Theorem <ref>, culminating in a proof that the elements construction extends to an equivalence of virtual double categories \[ \dblfib(\mathbb B) \simeq \dbllax(\mathbb B^{op},\spn) \] This appears below as Theorem <ref>. §.§ Extending the Elements Construction The elements functor of Lemma <ref> extends to one between virtual double categories. Needed are assignments on modules and multimodulations. [Elements from a Module] Let $M\colon F\slashedrightarrow G$ denote a module between lax double functors as in Definition <ref>. Construct a category $\dblelt(M)$ in the following way. Objects are pairs $(m,s)$ with $m\colon B\slashedrightarrow C$ a proarrow of $\mathbb B$ and $s\in Mm$.
A morphism $(m,s)\to (n,t)$ is a cell $\alpha$ with source $m$ and target $n$ for which the equation $M\alpha(t)=s$ holds. So defined, $\dblelt(M)$ is a category since $M$ is strictly functorial on cells. Notice that there are thus projection functors \[\dblelt(F)_0 \xleftarrow{\partial_0} \dblelt(M) \xrightarrow{\partial_1} \dblelt(G)_0 \] taking an object $(m,s)$ to $\partial_0(m,s)=(B,\partial_0s)$ and $\partial_1(m,s)=(C,\partial_1s)$ and extended to morphisms as follows. Given a cell A \ar@{}[dr]|{\alpha} \ar[d]_{f} \ar[r]^{m}|-@{|} & B \ar[d]^{g} \\ C \ar[r]_n|-@{|} & D take $\partial_0\alpha$ to be the morphism $f\colon (A,\partial_0s) \to (C,\partial_0t)$ and analogously for $\partial_1\alpha$. These are well-defined by the commutativity conditions coming with the morphism of spans $M\alpha$. The assignments are then functorial by that assumed for $M$. [Actions] Form the pullback of $\partial_0\colon \dblelt(M)\to\dblelt(F)_0$ along the target projection $\dblelt(F)_1\to\dblelt(F)_0$ and give assignments \[ L\colon \dblelt(F)_1\times_{\dblelt(F)_0}\dblelt(M)\to\dblelt(M) \] that will amount to an action. Summarize these assignments on objects and arrows at once by the picture: (A,x)\ar@{}[dr]|{\alpha} \ar[d]_f \ar[r]^{(m,u)}|-@{|} & (B, y)\ar[d]^g & (p,r)\ar[d]^{\beta} \ar@{}[drr]|{\mapsto} & & (p\otimes m, \lambda(u,r)) \ar[d]^{\beta\otimes\alpha} \\ (C,z) \ar[r]_{(n,v)}|-@{|} & (D,w) & (q,s) & & (q\otimes n,\lambda(v,s)) where $\lambda$ is the action cell coming with $M$. Of course $\alpha$ and $\beta$ are composable by the construction of the pullback, but it needs to be seen that the composite $\beta\otimes\alpha$ does give a morphism of $\dblelt(M)$. But this is equivalent to the validity of the equation \[ M(\beta\otimes\alpha)(\lambda(v,s))= \lambda(u,r) \] But this holds by the naturality condition for $\lambda$ in Definition <ref>, since $u= F\alpha(v)$ and $M\beta(s)=r$ both hold by the construction of morphisms in $\dblelt(M)$. So defined, $L$ is a functor by the strict interchange law in $\mathbb B$; by the fact that $M$ is strictly horizontally functorial; and by the normalization hypothesis for units. A functor $R$ for a right action of $\dblelt(G)_1$ on $\dblelt(M)$ is constructed analogously. It remains to see that the action axioms are satisfied and that they are suitably compatible, yielding an internal profunctor. The assignments of Construction <ref> are well-defined functors yielding an internal profunctor between discrete double fibrations $\dblelt(M) \colon\dblelt(F)\slashedrightarrow \dblelt(G)$. The action functors $L$ and $R$ are unital by the normalization assumption for vertical composition with units in $\mathbb B$. Required is the commutativity of action squares such as \dblelt(F)_1\times_{\dblelt(F)_0}\dblelt(F)_1\times_{\dblelt(F)_0}\dblelt(M)\ar[d]_{\otimes\times 1}\ar[rr]^{\qquad1\times L} && \dblelt(F)_1\times_{\dblelt(F)_0}\dblelt(M) \ar[d]^{L}\\ \dblelt(F)_1\times_{\dblelt(F)_0}\dblelt(M)\ar[rr]_L && \dblelt(M) and similarly for $R$. But chasing an object of the domain around either of the squares above and comparing, commutativity is given by associativity of proarrow composition in $\mathbb B$. Lastly, the actions $L$ and $R$ should be compatible in the sense that \dblelt(F)_1\times_{\dblelt(F)_0}\dblelt(M)\times_{\dblelt(G)_0}\dblelt(G)_1 \ar[d]_{L\times 1}\ar[rr]^{\qquad 1\times R} & & \dblelt(F)_1\times_{\dblelt(F)_0}\dblelt(M) \ar[d]^L \\ \dblelt(M)\times_{\dblelt(G)_0}\dblelt(G)_1 \ar[rr]_R & & \dblelt(M) commutes.
But again chasing objects and arrows around each side of the square shows that commutativity follows from the compatibility assumption in Definition <ref>. [Elements from a Multimodulation] Start with a multimodulation of contravariant lax $\spn$-valued functors F^0 \ar@{}[drrr]|{\mu} \ar[d]_\tau \ar[r]^{M_1}|-@{|} & F^1 \ar[r]^{M_2}|-@{|} & \cdots \ar[r]^{M_k}|-@{|} & F^k \ar[d]^{\sigma} \\ G^0 \ar[rrr]_N|-@{|} & & & G^1 as in Definition <ref>. This means that there are projection spans $\dblelt(F^{i-1})_0\leftarrow \dblelt(M_i) \to\dblelt(F^{i})_0$ and one for $\mathscr N$, each with appropriate left and right actions as in Construction <ref>. Define what will be a functor \[ \dblelt(\mu)\colon\dblelt(M_1) \times_{\dblelt(F^1)_0}\cdots \times_{\dblelt(F^{k-1})_0}\dblelt(M_k)\longrightarrow \dblelt(N) \] in the following way. On objects take \[ ((m_1,s_1),\dots, (m_k,s_k))\mapsto ([\mathbf m],\mu_{\mathbf m}(s)) \] where $\mu_{\mathbf m}$ is the given function coming with $\mu$ and $s=(s_1,\dots, s_k)$. An arrow of the supposed source is a sequence of externally composable cells $\theta_i\colon (m_i,s_i) \to (n_i,t_i)$. Assign to such a sequence the morphism of $\mathscr N$ represented by their composite \[ (\theta_1,\dots, \theta_k)\mapsto \theta_k\otimes\theta_{k-1}\otimes\cdots\otimes \theta_1. \] This does define a morphism $([\mathbf m],\mu_{\mathbf m}(s))\to ([\mathbf n],\mu_{\mathbf n}(t))$ of $\dblelt(N)$ by the strict composition for the module $N$ as in Definition <ref>. This functor has several naturality and equivariance properties, coming from the assumed properties of the original multimodulation $\mu$. For example, notice that $\dblelt(\mu)$ commutes with the projections and the $0$-level of the induced double functors $\dblelt(\tau)_0$ and $\dblelt(\sigma)_0$ by construction. Further properties are summarized in the next result. The functor $\dblelt(\mu)$ of Construction <ref> defines a multicell between internal profunctors of the form \dblelt(F^0) \ar@{}[drrr]|{\dblelt(\mu)} \ar[d] \ar[r]^{\dblelt(M_1)}|-@{|} &\dblelt(F^1)\ar[r]^{\;\;\;\dblelt(M_2)}|-@{|} &\cdots \ar[r]^{\dblelt(M_k)\;\;\;\;}|-@{|}& \dblelt(F^k)\ar[d] \\ \dblelt(G^0) \ar[rrr]_{\dblelt(N)}|-@{|} & & & \dblelt(G^1) This completes the assignments for the elements functor $\dblelt(-)\colon \dbllax(\mathbb B^{op},\spn)\to \dblfib(\mathbb B)$ between virtual double categories. The appropriate commutativity at the $0$-level was observed above. That the action is left equivariant is the statement that the square \dblelt(F^0)_1\times_{\dblelt(F^0)_0}\dblelt(M_1)\times_{\dblelt(F^1)_0}\cdots \times_{\dblelt(F^{k-1})_0}\dblelt(M_k) \ar[rrr]^{\qquad\qquad\qquad \dblelt(\tau)\times\dblelt(\mu)} \ar[d]_{L\times 1} & & & \dblelt(G^0)\times_{\dblelt(G^0)_0}\dblelt(N) \ar[d]^{L}\\ \dblelt(M_1)\times_{\dblelt(F^1)_0}\cdots \times_{\dblelt(F^{k-1})_0}\dblelt(M_k) \ar[rrr]_{\qquad\dblelt(\mu)} & & & \dblelt(N) commutes. But chasing an object of the upper left corner around both sides of the square reveals that commutativity at the object level is precisely the left equivariance condition in Definition <ref>. Right and inner equivariance follow by the same type of argument. The assignments are already known to be well-defined on objects and morphisms. It follows easily that these assignments are suitably functorial in the sense of virtual double categories. §.§ Extending the Pseudo-Inverse Likewise, the pseudo-inverse of Lemma <ref> extends to a functor of virtual double categories.
[Pseudo-Inverse for Modules] Let $M\colon P\slashedrightarrow Q$ denote an internal profunctor between discrete double fibrations $P\colon \mathbb E\to\mathbb B$ and $Q\colon \mathbb G\to\mathbb B$ as in Remark <ref>. Construct what will be a module $F_M\colon F_P\slashedrightarrow F_Q$ between the associated lax functors $F_P$ and $F_Q$ from Lemma <ref> in the following way. Let $\mathscr M_m$ denote the inverse image of the proarrow $m\colon B\slashedrightarrow C$ of $\mathbb B$ under the discrete fibration $\Pi\colon \mathscr M\to\mathbb B_1$ coming with $M$. To each such proarrow $m$ assign the span of sets \[ \mathbb E_B \xleftarrow{\partial_0} \mathscr M_m \xrightarrow{\partial_1} \mathbb G_C \] Note that this is well-defined by the first assumed commutativity condition for $M$. To each cell of $\mathbb B$, assign the morphism of spans A\ar@{}[dr]|{\theta}\ar[d]_f \ar[r]^{m}|-@{|} & B \ar[d]^g & \ar@{}[drr]|{\mapsto} & & & \mathbb E_C \ar[d]_{f^*} & \ar[l] \mathscr M_n \ar[d]^{\theta^*} \ar[r] & \mathbb G_D \ar[d]^{g^*} \\ C \ar[r]_n|-@{|} & D & & & & \mathbb E_A & \ar[l] \mathscr M_m \ar[r] & \mathbb G_B with $\theta^*\colon \mathscr M_n\to\mathscr M_m$ given by taking an object of $\mathscr M$ over $n$ to the domain of the unique morphism of $\mathscr M$ above $\theta$ via $\Pi\colon \mathscr M\to\mathbb B_1$. This is well-defined and makes a span morphism. To complete the data, start with composable proarrows $m\colon A\slashedrightarrow B$ and $n\colon B\slashedrightarrow C$ and give assignments $\lambda$ and $\rho$ by using the given actions $L$ and $R$, taking \[ \lambda_{m,n}\colon \mathbb E_m\times_{\mathbb E_B}\mathscr M_n \to \mathscr M_{n\otimes m} \qquad (\tilde m, \tilde n)\mapsto L(\tilde m, \tilde n) \] and similarly for $\rho$. These are well-defined by the second row of commutativity conditions in Definition <ref>. Additionally, these maps commute with the projections, in the sense that the diagrams \mathbb E_A \ar@{=}[d] &\ar[l] \mathscr M_m\times_{\mathbb G_B}\mathbb G_n \ar[d]^{\rho} \ar[r] & \mathbb G_C \ar@{=}[d] & & & \mathbb E_A \ar@{=}[d] &\ar[l] \mathbb E_m\times_{\mathbb E_B}\mathscr M_n \ar[d]^\lambda \ar[r] & \mathbb G_C \ar@{=}[d] \\ \mathbb E_A & \ar[l] \mathscr M_{n\otimes m} \ar[r] & \mathbb G_C & & & \mathbb E_A & \ar[l] \mathscr M_{n\otimes m} \ar[r] & \mathbb G_C both commute. The assignments of Construction <ref> make $F_M\colon F_P\slashedrightarrow F_Q$ a module in the sense of Definition <ref>. All of the requirements for $F_M$ to be a module between lax functors are met by the corresponding properties of the original module $M$, together with the fact that $\Pi\colon \mathscr M\to\mathbb B_1$ is a discrete fibration. [Pseudo-Inverse Assignment on Modulations] Start with a modulation $U$ in $\dblfib(\mathbb B)$ as in Remark <ref>. Thus, in particular, we have a functor \[ U\colon \mathscr M^1\times_{\mathbb E^1_0} \cdots \times_{\mathbb E^{k-1}_0} \mathscr M^k\longrightarrow \mathscr N \] commuting with the projections to the end factors and commuting with the $(k-1)$-fold proarrow composition on $\mathbb B$.
Required is a multimodulation F_{P^0} \ar@{}[drrr]|{F_U} \ar[r]^{F_{M_1}}|-@{|} \ar[d]_{F_H} & F_{P^1} \ar[r]^{F_{M_2}}|-@{|} & \cdots \ar[r]^{F_{M_k}}|-@{|} & F_{P^k} \ar[d]^{F_K} \\ F_{Q^0} \ar[rrr]_{F_N}|-@{|} & & & F_{Q^1} Unpacking the constructions at a path of proarrows $\mathbf m=(m_1,\dots, m_k)$, this is just to ask for a corresponding set function \[ \mathscr M^1_{m_1}\times_{\mathbb E^1_{A_1}}\cdots \times_{\mathbb E^{k-1}_{A_{k-1}}}\mathscr M^k_{m_k}\longrightarrow \mathscr N_{[\mathbf m]} \] which is given by the arrow part of $U$, namely, $U_1$. Well-definedness follows from the fact that $U$ commutes with the projections to $\mathbb B$ and the $(k-1)$-fold iterated proarrow composition of $\mathbb B$. The choice of $(F_U)_{\mathbf m}= U_1$ in Construction <ref> results in a multimodulation of modules between lax functors as in Definition <ref>. This extends the functor $F_{(-)}$ to a functor of virtual double categories \[ F_{(-)} \colon\dblfib(\mathbb B) \longrightarrow \dbllax(\mathbb B^{op},\spn). \] The horizontal naturality condition holds by construction of the transition functions corresponding to cells $\theta$ and because $\Pi_1\colon \mathscr N\to\mathbb B_1$ is a discrete fibration. Right, left and inner equivariance then follow from the corresponding properties of the original functor $U$. §.§ The Equivalence of Virtual Double Categories The extended elements construction and the purported pseudo-inverse induce an equivalence of virtual double categories, leading to the full representation theorem, namely, Theorem <ref> below. Extend the isomorphism of Construction <ref> to an isomorphism of functors of virtual double categories. The required multimodulation for the cell-components is straightforward to produce. Given $M\colon H\slashedrightarrow G$ and a proarrow $m\colon B\slashedrightarrow C$ of $\mathbb B$, define a function \[ \eta_{M,m}\colon Mm\longrightarrow \dblelt(M)_m\qquad s\mapsto (m,s) \] again just adding in an index. This is a bijection fitting into a morphism of spans HB \ar[d]_{\eta_{H,B}} & \ar[l] Mm \ar[d]^{\eta_{M,m}} \ar[r] & GC \ar[d]^{\eta_{G,C}} \\ \dblelt(H)_B & \ar[l] \dblelt(M)_m \ar[r] & \dblelt(G)_C that defines the required invertible modulation \ar@{}[dr]|{\eta_M} H \ar[d]_{\eta_H} \ar[r]^{M}|-@{|} & G \ar[d]^{\eta_G} \\ \dblelt(F_H) \ar[r]_{\dblelt(M)}|-@{|} & \dblelt(F_G) as in Definition <ref> by construction of the elements functor and its purported pseudo-inverse. This is easy to check from the definitions. The assignments in Construction <ref> yield a natural isomorphism of functors of virtual double categories $\eta\colon 1\cong \dblelt(F_{(-)})$. As discussed above, the components of the purported transformation are all well-defined, so it remains only to check the “cell naturality" condition of Definition <ref>. Start with a generic multimodulation \ar@{}[drrr]|{\mu} H^0 \ar[d]_\tau \ar[r]^{M_1}|-@{|} & H^1 \ar[r]^{M_2}|-@{|} & \cdots \ar[r]^{M_k} & H^k \ar[d]^\sigma \\ G^0 \ar[rrr]_{N}|-@{|} & & & G^1 in $\dbllax(\mathbb B^{op},\spn)$. By construction, the composite on the right side of the condition sends a $k$-tuple $s=(s_1,\dots, s_k)$ to $((m_1,s_1), \dots, (m_k,s_k))$ and then to $([\mathbf m],\mu_{[\mathbf m]}(s))$; whereas that on the left side sends the same element to $\mu_{[\mathbf m]}(s)$ first and then to $([\mathbf m],\mu_{[\mathbf m]}(s))$. The point is that by construction evaluating and indexing commute. In any case, the two sides are equal, and $\eta$ so defined is a natural isomorphism as claimed.
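A toy illustration of why each component $\eta_{M,m}$ is invertible: passing from fibers to elements and back only adds and then strips an index. A minimal Python sketch, with hypothetical fiber data of our own choosing:

```python
# Toy fiber data for a module M between Span-valued lax functors:
# to each proarrow (represented by a name), the set Mm of its elements.
M = {"m": {"s1", "s2"}, "n": {"t1"}}

# Elements construction on proarrows: objects of elt(M) over m are pairs (m, s).
elt_M = {(m, s) for m, elems in M.items() for s in elems}

# Pseudo-inverse direction: the fiber over m of the evident projection.
def fiber(m):
    return {p for p in elt_M if p[0] == m}

# The unit eta_{M,m}: s |-> (m, s) is a bijection onto that fiber.
for m, elems in M.items():
    assert fiber(m) == {(m, s) for s in elems}
```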
Extend the natural isomorphism of Construction <ref> to one of functors of virtual double categories $\epsilon \colon \dblelt(F_{(-)})\cong 1$. Take an internal profunctor $M\colon P \slashedrightarrow Q$ of discrete double fibrations $P\colon\mathbb E\to\mathbb B$ and $Q\colon\mathbb G\to\mathbb B$. The required span morphism \mathbb E_0 \ar@{=}[d] & \ar[l] \dblelt(F_M) \ar@{-->}[d]^{\epsilon_M} \ar[r] & \mathbb G_0 \ar@{=}[d] & & (m,s) \xrightarrow{\alpha} (n,t) \ar@{}[d]|{\rotatebox[origin=c]{270}{$\mapsto$}} \\ \mathbb E_0 & \ar[l] \mathscr M \ar[r] & \mathbb G_0 & & s\xrightarrow{!} t is given by the fact that $M\colon \mathscr M \to\mathbb B_1$ is a discrete fibration. It is a functor and an isomorphism by uniqueness, and it is equivariant by construction, making it a morphism of internal profunctors. The assignments in Construction <ref> are a natural isomorphism of functors of virtual double categories. The one condition to check is the “Cell Naturality" of Definition <ref>. Taking a multicell between internal profunctors \ar@{}[drrr]|{\mu} P^0 \ar[d]_H \ar[r]^{M_1}|-@{|} & P^1 \ar[r]^{M_2}|-@{|} & \cdots \ar[r]^{M_k} & P^k \ar[d]^K \\ Q^0 \ar[rrr]_{N}|-@{|} & & & Q^1 the statement of the condition reduces to checking that the equation \[ \mu\circ (\epsilon_{M_1}\times \cdots \epsilon_{M_k}) = \epsilon_N\circ \dblelt(F_\mu) \] holds. But this is true by definition of $\dblelt(F_\mu)$ and the components of $\epsilon$. For on the one hand, a $k$-tuple $((m_1,s_1),\dots, (m_k,s_k))$ is sent to $s= (s_1,\dots, s_k)$ and then to $\mu(s)$. On the other hand, the same $k$-tuple is sent to $([\mathbf m],\mu(s))$ by $\dblelt(F_\mu)$ and then to $\mu(s)$ by $\epsilon_N$. The same kind of check works at the level of arrows. Thus, $\epsilon$ is a natural isomorphism. There is an equivalence of virtual double categories \[ \dbllax(\mathbb B^{op},\spn)\simeq \dblfib(\mathbb B) \] for any double category $\mathbb B$ induced by the elements functor $\dblelt(-)$. This is proved by Propositions <ref> and <ref>. § PROSPECTUS Let us close with a preview of forthcoming work relating to the present results. §.§ Monadicity It is well-known that ordinary discrete fibrations over a fixed base are monadic over a slice of the category of sets. This fact is of central importance in the elementary axiomatization of results relating to presheaves and sheaves in the language of an elementary topos <cit.>, <cit.>. In this development, “base-valued functors" (i.e. presheaves) are axiomatized as certain algebras for a monad on a slice of the ambient topos. Any parallel development in a double categorical setting of these presheaf results will require an analogous monadicity result. Forthcoming work will establish that discrete double fibrations over a fixed base double category are monadic over a certain slice of the double category of categories. Pursuing some notion of “double topos" as a forum for formal category theory, this will give a setting for elementary axiomatization of elements of presheaf and Yoneda theory for double categories. §.§ Double Fibrations The main definition of the paper anticipates the natural question of whether there is a more general notion of a “double fibration" of which a discrete double fibration is a special case. For recall that each ordinary discrete fibration $F\colon\F\to\C$ between 1-categories is a (split) fibration in a more general sense.
Split fibrations of course have lifting properties with respect to certain compatibly chosen “cartesian arrows" and correspond via a category of elements construction to contravariant category-valued 2-functors on the base category. The question of the double-categorical analogue is the subject of forthcoming work with G. Cruttwell, D. Pronk and M. Szyld. The evidence of the correctness of the proposed definition will be a representation theorem like Theorem <ref> in the present paper, but suitably upping the dimension of the representing structure. § ACKNOWLEDGMENTS Thanks to Dr. Dorette Pronk for supervising the author's thesis where some of the ideas for this paper were first conceived. Thanks also to Geoff Cruttwell, Dorette Pronk and Martin Szyld for a number of helpful conversations, comments, and suggestions on the material in this project and related research. Thanks in particular to Geoff Cruttwell for clarifying some questions on virtual double categories. Special thanks are due to Bob Paré for his encouragement when the author was just getting into double-categorical Yoneda theory. [Buc14] M. Buckley. Fibred 2-categories and bicategories. Journal of Pure and Applied Algebra, 218(6):1034-1074, 2014. [CS10]FrameworkGenMulticatz G.S.H. Cruttwell and M.A. Shulman. A unified framework for generalized multicategories. Theory and Applications of Categories, 24(21):580-655, 2010. [Dia73]DiaconescuThesis R. Diaconescu. Change of base for some toposes. PhD Thesis, Dalhousie University, 1973. [Dia75]DiaconescuChangeOfBase R. Diaconescu. Change of base for toposes with generators. Journal of Pure and Applied Algebra, 6(3):191-218, 1975. [Ehr63]Ehresmann C. Ehresmann. Catégories et structures. Dunod, Paris, 1963. [GP99]GP M. Grandis and R. Paré. Limits in Double Categories. Cahiers de Topologie et Géométrie Différentielle Catégoriques, 40(3):162-220, 1999. [GP04]GP2 M. Grandis and R. Paré. Adjoints for Double Categories. Cahiers de Topologie et Géométrie Différentielle Catégoriques, 45(3):193-240, 2004. [Gray74]GrayFormalCats J. Gray. Formal Category Theory: Adjointness for 2-Categories. Volume 391 of Lecture Notes in Mathematics, Springer, Berlin, 1974. [Joh14]TT P.T. Johnstone. Topos Theory. Dover, Minneapolis, 2014. [Lam20]MeFibrations M. Lambert. Discrete 2-fibrations. Preprint: <https://arxiv.org/abs/2001.11477>, 2020. [Lei04]Operads T. Leinster. Higher Operads, Higher Categories. Volume 298 of London Mathematical Society Lecture Notes Series, Cambridge University Press, Cambridge, 2004. [Mac98]MacLane S. Mac Lane. Categories for the Working Mathematician. Volume 5 of Graduate Texts in Mathematics, Springer, Berlin, 1998. [Paré11]YonedaThyDblCatz R. Paré. Yoneda theory for double categories. Theory and Applications of Categories, 25(17):436-489, 2011. [Paré13]ModuleComposition R. Paré. Composition of modules for lax functors. Theory and Applications of Categories, 27(16):393-444, 2013. [Shul08]FramedBicats M. Shulman. Framed bicategories and monoidal fibrations. Theory and Applications of Categories, 20(18):650-738, 2008. [Str74]StreetFibrations R. Street. Fibrations and Yoneda's lemma in a 2-category. In G.M. Kelly ed. Category Seminar: Proceedings Sydney Category Theory Seminar 1972/1973, Volume 420 of Lecture Notes in Mathematics, Springer, Berlin, pp. 104-133, 1974. [Str76]StreetLimitsIndexed R. Street. Limits indexed by category-valued 2-functors. Journal of Pure and Applied Algebra, 8(2):149-181, 1976. [Wood82]Pro1 R. Wood. Abstract proarrows I.
Cahiers de Topologie et Géométrie Différentielle Catégoriques, 23(3): 279-290, 1982. [Wood85]Pro2 R. Wood. Proarrows II. Cahiers de Topologie et Géométrie Différentielle Catégoriques, 26(2): 135-168, 1985.
Department of Physics and Astrophysics, University of Delhi, Delhi 110007, India # CP violation with GeV-scale Majorana neutrino in $\Lambda_{b}\to(\Lambda_{c}^{+},p^{+})\pi^{+}\mu^{-}\mu^{-}$ decays Diganta Das, Jaydeb Das<EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract We explore the possibility of CP violation in baryonic $\Lambda_{b}\to(\Lambda_{c}^{+},p^{+})\pi^{+}\mu^{-}\mu^{-}$ decays, which are mediated by two sterile Majorana neutrinos and are $|\Delta L|=2$ lepton number violating processes. Appreciable CP asymmetry can be obtained if there are two on-shell Majorana neutrinos that are quasi-degenerate in mass, with a mass difference of the order of the average decay width. We find that given the present constraints on the heavy to light mixing element $|V_{\mu N}|$, the $\Lambda_{b}\to p^{+}\pi^{+}\mu^{-}\mu^{-}$ and $\Lambda_{b}\to\Lambda_{c}^{+}\pi^{+}\mu^{-}\mu^{-}$ decay rates are suppressed but could be within the experimental reach at the LHC. If searches for these modes are performed, then experimental limits on the rates can be translated to constraints on the Majorana neutrino mass $m_{N}$ and the heavy to light mixing element squared $|V_{\mu N}|^{2}$. We show that the constraints on the $(m_{N},|V_{\mu N}|^{2})$ parameter space coming from the $|\Delta L|=2$ baryonic decays are complementary to the bounds coming from other processes. ###### Keywords: Baryon Decays, $|\Delta L|=2$ process, CP violation, Majorana neutrino ## 1 Introduction Neutrino oscillation experiments confirm that at least two of the three active light neutrinos are massive Fukuda:1998mi ; Wendell:2010md ; Ambrosio:2003yz . This opens up the possibility of CP violation in the leptonic interactions, which can be searched for in neutrino oscillation experiments Cabibbo:1977nk . Leptonic CP violation can arise in the same manner as in the quark sector, namely through complex phases in the leptonic mixing matrix. CP violation is expected whether the neutrinos are of the Dirac or the Majorana type, but two additional CP-violating phases arise if the neutrinos are Majorana rather than Dirac. The Majorana character plays an important role as far as the origin of the smallness of the active neutrino masses is concerned. If $N_{R}$ is a Standard Model right-handed gauge-singlet (and hence sterile) neutrino, then the Standard Model allows both a Dirac mass term of the type $m_{D}(\overline{\nu}_{L}N_{R}+{\rm h.c.})$ and a Majorana mass term of the type $m_{N}N_{R}N_{R}$. Then, via the ‘see-saw’ mechanism one can have a small active neutrino mass $m_{\nu}\sim m_{D}^{2}/m_{N}$ if $m_{D}$ is at the electroweak scale or lower Mohapatra:1979ia ; Schechter:1980gr ; Schechter:1981cv ; Minkowski:1977sc ; Yanagida:1979as ; Ramond:1979py ; Levy:1980ws . In the simplest version of the mechanism, the so-called type-I see-saw, heavy electroweak singlets $N_{R}$ of a few TeV are introduced that give rise to light eigenstates with $m_{\nu}\lesssim 1$ eV. However, low-energy see-saw mechanisms, where the sterile states $N_{R}$ are in the range of a few hundred MeV to a few GeV, have also been proposed Buchmuller:1991ce ; Asaka:2005an ; delAguila:2007ap ; He:2009ua ; Kersten:2007vk ; Ibarra:2010xw ; Nemevsek:2012cd .
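As a quick numeric check of the see-saw scales just described (all values are illustrative, not taken from the text):

```python
# See-saw relation m_nu ~ m_D^2 / m_N; equivalently m_D ~ sqrt(m_nu * m_N).
# Masses in GeV throughout; the 0.1 eV target is an assumed benchmark.
GeV, eV = 1.0, 1e-9
m_nu_target = 0.1 * eV

for m_N in (3e3 * GeV, 1.0 * GeV):        # TeV-scale vs GeV-scale N_R
    m_D = (m_nu_target * m_N) ** 0.5      # Dirac mass needed for m_nu ~ 0.1 eV
    print(f"m_N = {m_N:g} GeV  ->  m_D ~ {m_D:.1e} GeV")
# m_N = 3000 GeV -> m_D ~ 5.5e-04 GeV (sub-MeV)
# m_N = 1 GeV    -> m_D ~ 1.0e-05 GeV (~10 keV), well below the electroweak scale
```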
These so-called GeV-scale sterile neutrinos have several advantages: they could simultaneously explain the baryon asymmetry of the universe Asaka:2005an ; Asaka:2005pn ; Canetti:2014dka ; Canetti:2012kh ; Shuve:2014zua , and can be experimentally searched for both at the intensity and the energy frontiers. An important distinguishing feature between a Dirac and a Majorana sterile neutrino is that the latter participates in $|\Delta L|=2$ lepton number violating (LNV) decays. For a light Majorana exchange, the neutrino-less double beta decay ($0\nu\beta\beta$) Pas:2015eia ; Rodejohann:2011mu ; DellOro:2016tmg ; GomezCadenas:2011it is one of the most sensitive probes of lepton number violation. But it was recently pointed out that with the exchange of a heavy Majorana neutrino at the GeV scale, this rate can be enhanced Drewes:2016lqo ; Asaka:2016zib . Unfortunately, the $0\nu\beta\beta$ process is yet to be experimentally verified, and the best limits on the half-lives of different isotopes (${}^{76}{\rm Ge}$, ${}^{136}{\rm Xe}$, ${}^{130}{\rm Te}$) come from several different experiments Aalseth:2017btx ; Agostini:2018tnm ; Alduino:2017ehq ; Albert:2017owj ; KamLAND-Zen:2016pfg . Due to the lack of evidence of LNV decay so far, it is imperative to pursue complementary search strategies. This is further reinforced by the fact that observation of $0\nu\beta\beta$ only confirms lepton number violation in the first family of neutrinos; to observe the same in other families, alternative processes must be investigated. Lepton number violating rare decays of mesons and baryons that are mediated by a Majorana neutrino are important in this regard. For light or heavy Majorana neutrino exchange, the decay rates are too suppressed to be accessed by current experiments. But if the Majorana mass is in the range of a few hundred MeV to a few GeV, the decay rates can be within the sensitivity reach of future experiments Helo:2010cw ; Cvetic:2010rw . Due to ongoing searches of LNV processes at flavor factories including the LHC and Belle-II, there has been theoretical interest in the LNV decays of hadrons Helo:2010cw ; Cvetic:2010rw ; Atre:2005eb ; Dib:2000wm ; Ali:2001gsa ; Zhang:2010um ; Yuan:2013yba ; Godbole:2020doo ; Cvetic:2020lyh ; Shuve:2016muy ; Chun:2019nwi ; Mandal:2017tab ; Abada:2017jjx ; Abada:2019bac ; Mejia-Guisao:2017nzx ; Zhang:2021wjj ; Cvetic:2019shl ; Barbero:2013fc ; Mandal:2016hpr ; Zamora-Saa:2016qlk ; Cvetic:2015ura ; Cvetic:2015naa ; Cvetic:2014nla ; Cvetic:2013eza ; Kim:2018uht ; Kim:2019xqj ; Milanes:2018aku ; Mejia-Guisao:2017gqp ; Milanes:2016rzr ; Castro:2013jsn ; Quintero:2011yh ; Littenberg:1991rd ; Barbero:2002wm , $\tau$-lepton decays Castro:2012gi ; Dib:2011hc ; Yuan:2017xdp ; Zamora-Saa:2016ito ; Kim:2017pra , and in different scattering processes Das:2017zjc ; Das:2017rsu ; Das:2017nvm ; Cvetic:2019rms ; Cvetic:2018elt ; Fuks:2020zbm ; Fuks:2020att ; Cai:2017mow ; Ruiz:2020cjx ; Najafi:2020dkp . The LHCb has searched for the process $B^{-}\to\pi^{+}\mu^{-}\mu^{-}$ Aaij:2014aba and the NA48/2 has searched for $K^{-}\to\pi^{+}\mu^{-}\mu^{-}$ CERNNA48/2:2016tdo , and these experiments provide stringent constraints on the heavy to light mixing matrix elements. With the large integrated luminosity coming from Belle-II as well as the upgrade of the LHCb, the sensitivity to $|\Delta L|=2$ processes in mesons and baryons is expected to increase.
In this paper we study the lepton number violating four-body $\mathcal{B}_{1}\to\mathcal{B}_{2}^{\mp}\pi^{\mp}\ell_{1}^{\pm}\ell_{2}^{\pm}$ decays, where $\mathcal{B}_{1}^{0}$ is $\Lambda_{b}$ and $\mathcal{B}_{2}^{+}$ is either a $\Lambda_{c}^{+}$ or $p^{+}$, and $\ell_{1}$ and $\ell_{2}$ can in general be of different flavors. Previously, in Ref. Mejia-Guisao:2017nzx these decays were considered in a model involving a single on-shell Majorana neutrino at the GeV scale. We are interested in a scenario where the decays are mediated by the exchange of two almost degenerate Majorana neutrinos with masses in the range of a few hundred MeV to a few GeV, so that they can be on-shell. An interesting consequence of the exchange of two Majorana neutrinos is the possibility of CP violation. We show that the CP violation can be appreciable if the two Majoranas are almost degenerate, with a mass difference of the order of the decay widths, $\Delta m_{N}\sim\Gamma_{N}$. There are well-motivated models where quasi-degenerate Majorana neutrinos in the range of a few hundred MeV to a few GeV are predicted Dib:2014pga . We calculate the branching ratios for $\Delta m_{N}\sim\Gamma_{N}$ and find that for the present experimental bound on $|V_{\mu N}|^{2}$, the $\Lambda_{b}\to(\Lambda_{c}^{+},p^{+})\pi^{+}\mu\mu$ rates might be within the reach of the LHC in the future. Even if the modes are not immediately seen, experimental limits on the decay rates can be used to obtain constraints on the neutrino mass $m_{N}$ and the neutrino mixing matrix elements $|V_{\mu N}|^{2}$. The paper is organized as follows. In section 2 we work out the formalism for a generic $\mathcal{B}_{1}\to\mathcal{B}_{2}^{\mp}\pi^{\mp}\ell_{1}^{\pm}\ell_{2}^{\pm}$ decay mediated by on-shell Majorana neutrinos. In section 3 we perform a numerical analysis of the CP asymmetry for $\Lambda_{b}\to(\Lambda_{c},p)\pi\mu\mu$, and discuss the constraint on the $(m_{N},|V_{\mu N}|^{2})$ parameter space assuming experimental upper limits. We summarize our results in section 4. Some details of our derivations are given in the appendices. ## 2 $\mathcal{B}_{1}\to\mathcal{B}_{2}^{\mp}\pi^{\mp}\ell_{1}^{\pm}\ell_{2}^{\pm}$ formalism We consider a model scenario where in addition to the components $\nu_{\ell L}$ of the left-handed $SU(2)_{L}$ doublets of the Standard Model, there are two right-handed singlet sterile neutrinos denoted by $N_{1},N_{2}$. The flavor eigenstates $\nu_{\ell L}$ can be written in terms of the mass eigenstates as $\nu_{\ell L}=\sum_{i=1}^{3}U_{\ell i}\nu_{iL}+V_{\ell N_{1}}N_{1}+V_{\ell N_{2}}N_{2}\,,$ (1) where $\nu_{iL}$ are the light mass eigenstates. We assume that the heavy to light mixing elements $V_{\ell N_{1}}$ and $V_{\ell N_{2}}$ are free parameters and can be constrained by experiments. They in general can be complex $V_{\ell N_{j}}=|V_{\ell N_{j}}|e^{i\phi_{\ell j}}\,,\quad(j=1,2)\,,$ (2) where $\phi_{\ell j}$ is a CP-odd phase. According to our convention, $V_{\ell N}$ is the mixing element between the negatively charged lepton $\ell$ and the Majorana neutrino $N$. We are interested in calculating the decay widths of $\mathcal{B}_{1}({p_{\mathcal{B}_{{1}}}})\to\mathcal{B}_{2}({p_{\mathcal{B}_{{2}}}})\pi^{+}(p_{\pi})\ell_{1}^{-}(p_{1})\ell_{2}^{-}(p_{2})$ and its CP conjugate mode $\bar{\mathcal{B}}_{1}({p_{\mathcal{B}_{{1}}}})\to\bar{\mathcal{B}}_{2}({p_{\mathcal{B}_{{2}}}})\pi^{-}(p_{\pi})\ell_{1}^{+}(p_{1})\ell_{2}^{+}(p_{2})$ in this model.
The decays can be viewed as two-step processes: first, the $\mathcal{B}_{1}$ decays via a charged current interaction $\mathcal{B}_{1}\to\mathcal{B}_{2}^{\mp}N_{j}\ell_{1}^{\pm}$, followed by the decay of the heavy neutrino $N_{j}\to\ell_{2}^{\pm}\pi^{\mp}$. For these processes there are two dominant “s-channel" topologies, the direct channel ($D$) and the crossed channel ($C$), as shown in figure 1. The “t-channel" topologies are expected to be suppressed and are neglected. Appreciable decay rates can be obtained if the neutrinos have kinematically allowed mass $m_{\pi}+m_{\ell_{1}}<m_{N_{j}}<({m_{\mathcal{B}_{{1}}}}-{m_{\mathcal{B}_{{2}}}}-m_{\ell_{2}})\,,\quad\text{or/and}\quad m_{\pi}+m_{\ell_{2}}<m_{N_{j}}<({m_{\mathcal{B}_{{1}}}}-{m_{\mathcal{B}_{{2}}}}-m_{\ell_{1}})\,.$ (3) Figure 1: The direct ($D$) and cross ($C$) channel Feynman diagrams for $\mathcal{B}_{1}\to\mathcal{B}_{2}^{\mp}\pi^{\mp}\ell_{1}^{\pm}\ell_{2}^{\pm}$ decay. We denote the momentum of the heavy neutrino in the $D$ channel by $p_{N}={p_{\mathcal{B}_{{1}}}}-{p_{\mathcal{B}_{{2}}}}-p_{1}$, and for the $C$ channel by $p^{\prime}_{N}={p_{\mathcal{B}_{{1}}}}-{p_{\mathcal{B}_{{2}}}}-p_{2}$. Defining $\Gamma_{\mathcal{B}_{1}}\equiv\Gamma(\mathcal{B}_{1}\to\mathcal{B}_{2}\pi^{+}\ell_{1}^{-}\ell_{2}^{-})$ and $\Gamma_{\overline{\mathcal{B}}_{1}}\equiv\Gamma(\bar{\mathcal{B}}_{1}\to\bar{\mathcal{B}}_{2}\pi^{-}\ell_{1}^{+}\ell_{2}^{+})$, the decay widths can be written as $\Gamma_{\mathcal{B}_{1}(\overline{\mathcal{B}}_{1})}=(2-\delta_{\ell_{1}\ell_{2}})\frac{1}{2!}\frac{1}{2{m_{\mathcal{B}_{{1}}}}}\int d\Phi_{4}\bigg{|}\overline{\mathcal{M}}_{\rm tot}^{+(-)}\bigg{|}^{2}\,.$ (4) The symmetry factor $1/2!$ arises because the two charged leptons can be identical. The $|\overline{\mathcal{M}}_{\rm tot}^{+}|^{2}$ ($|\overline{\mathcal{M}}_{\rm tot}^{-}|^{2}$) is the total matrix element mod-squared of $\mathcal{B}_{1}\to\mathcal{B}_{2}\pi^{+}\ell_{1}^{-}\ell_{2}^{-}$ $(\bar{\mathcal{B}}_{1}\to\bar{\mathcal{B}}_{2}\pi^{-}\ell_{1}^{+}\ell_{2}^{+})$ after averaging over the initial spin and summing over the final spins $\displaystyle|\overline{\mathcal{M}}_{\rm tot}^{\pm}|^{2}$ $\displaystyle=\frac{1}{2}\sum_{spins}\bigg{|}\mathcal{M}_{\rm tot}^{\pm}\bigg{|}^{2}\,$ $\displaystyle=\frac{1}{2}\sum_{spins}\bigg{|}\sum_{j=1}^{2}\big{(}\mathcal{M}_{D_{j}}^{\pm}+\mathcal{M}_{C_{j}}^{\pm}\big{)}\bigg{|}^{2}\,,$ $\displaystyle=\frac{1}{2}\sum_{spins}\bigg{[}\sum_{i,j=1}^{2}\mathcal{M}_{D_{i}}^{\pm}(\mathcal{M}_{D_{j}}^{\pm})^{\ast}+\sum_{i,j=1}^{2}\mathcal{M}_{C_{i}}^{\pm}(\mathcal{M}_{C_{j}}^{\pm})^{\ast}+\sum_{i,j=1}^{2}\mathcal{M}_{D_{i}}^{\pm}(\mathcal{M}_{C_{j}}^{\pm})^{\ast}+\sum_{i,j=1}^{2}\mathcal{M}_{C_{i}}^{\pm}(\mathcal{M}_{D_{j}}^{\pm})^{\ast}\bigg{]}\,,$ $\displaystyle=\mathcal{N}\bigg{[}\sum_{i,j=1}^{2}v_{i}^{\pm}(v_{j}^{\pm})^{\ast}m_{N_{i}}m_{N_{j}}P_{D_{i}}P_{D_{j}}^{\ast}T_{\pm}(DD^{\ast})+\sum_{i,j=1}^{2}v_{i}^{\pm}(v_{j}^{\pm})^{\ast}m_{N_{i}}m_{N_{j}}P_{D_{i}}P_{C_{j}}^{\ast}T_{\pm}(DC^{\ast})\,$ $\displaystyle\quad+(D\leftrightarrow C)\bigg{]}\,.$ (5) In the second line of (5), the suffix $D_{j}(C_{j})$ stands for the direct (cross) channel with $j^{\rm th}$ neutrino exchange, and in the last line we have introduced the following notations $\displaystyle\mathcal{N}=\frac{1}{2}G_{F}^{4}|V_{ud}|^{2}|V_{qb}|^{2}f_{\pi}^{2},\,\,\,\ v_{i}^{+}=V_{\ell_{1}N_{i}}V_{\ell_{2}N_{i}},\,\,\,\ v_{i}^{-}=(v_{i}^{+})^{\ast}\,,$ (6) where $V_{qb}=V_{ub}$ for $\Lambda_{b}\to p^{+}\pi^{+}\ell^{-}\ell^{-}$, $V_{qb}=V_{cb}$ for
$\Lambda_{b}\to\Lambda_{c}^{+}\pi^{+}\ell^{-}\ell^{-}$, and $f_{\pi}$ is the pion decay constant. In the last line of (5), the spin-summed and averaged matrix element mod squared splits into universal functions $T_{\pm}(XY^{\ast})$, where $X(Y)=D,C$, and functions $P_{X_{j}}$ that depend on the masses $m_{N_{1}},m_{N_{2}}$ and decay widths $\Gamma_{N_{1}},\Gamma_{N_{2}}$ of the exchanged neutrinos $\displaystyle P_{D_{j}}=\frac{1}{(p_{N}^{2}-m_{N_{j}}^{2})+i\Gamma_{N_{j}}m_{N_{j}}},\,\,P_{C_{j}}=\frac{1}{(p_{N}^{\prime 2}-m_{N_{j}}^{2})+i\Gamma_{N_{j}}m_{N_{j}}}\,.$ (7) Using (5), the total decay widths can be conveniently written as $\displaystyle\Gamma_{\mathcal{B}_{1}}$ $\displaystyle=(2-\delta_{\ell_{1}\ell_{2}})\sum_{i,j=1}^{2}v_{i}^{+}(v_{j}^{+})^{\ast}\Big{(}\widehat{\Gamma}(DD^{\ast})_{ij}+\widehat{\Gamma}(CC^{\ast})_{ij}+\widehat{\Gamma}_{+}(DC^{\ast})_{ij}+\widehat{\Gamma}_{+}(D^{\ast}C)_{ij}\Big{)}\,,$ (8) $\displaystyle\Gamma_{\overline{\mathcal{B}}_{1}}$ $\displaystyle=(2-\delta_{\ell_{1}\ell_{2}})\sum_{i,j=1}^{2}v_{i}^{-}(v_{j}^{-})^{\ast}\Big{(}\widehat{\Gamma}(DD^{\ast})_{ij}+\widehat{\Gamma}(CC^{\ast})_{ij}+\widehat{\Gamma}_{-}(DC^{\ast})_{ij}+\widehat{\Gamma}_{-}(D^{\ast}C)_{ij}\Big{)}\,,$ (9) where the quantities $\widehat{\Gamma}_{\pm}$ are $\displaystyle\widehat{\Gamma}_{\pm}(XY^{\ast})_{ij}$ $\displaystyle=\frac{\mathcal{N}}{2{m_{\mathcal{B}_{{1}}}}2!}\int m_{N_{i}}m_{N_{j}}P_{X_{i}}P_{Y_{j}}^{\ast}T_{\pm}(XY^{\ast})d\Phi_{4},\,\,\,X,Y=C,D\,.$ (10) The expressions of $T_{\pm}(XY^{\ast})$ and the requisite kinematics to evaluate them are given in appendices A and B, respectively, and the four-body phase space $d\Phi_{4}$ is given in appendix C. In equations (8)-(9), using the relation $T_{+}(XX^{\ast})=T_{-}(XX^{\ast})$ (see appendix A) we have defined $\displaystyle\widehat{\Gamma}(XX^{\ast})_{ij}\equiv\widehat{\Gamma}_{+}(XX^{\ast})_{ij}=\widehat{\Gamma}_{-}(XX^{\ast})_{ij}\,,\quad X=D,C\,.$ (11) To physically interpret the terms: $\widehat{\Gamma}(XX^{\ast})_{ij}$ is the contribution of $N_{i}$ exchange in the $X$-channel amplitude interfering with the conjugate of the $N_{j}$-exchange $X$-channel amplitude. The interference terms $\widehat{\Gamma}(XY^{\ast})_{ij}$ are the contributions of $N_{i}$ exchange in the $X$ channel interfering with the conjugate of $N_{j}$ exchange in the $Y$ channel. Numerically, the $D$-$C$ channel interference contributions $\widehat{\Gamma}(XY^{\ast})_{ij}$ for $X\neq Y$ are insignificant compared to $\widehat{\Gamma}(XX^{\ast})_{ij}$ and are ignored in our calculations.
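Since the on-shell region dominates, it is worth checking numerically that the propagator factors of (7) reduce, for $\Gamma_{N}\ll m_{N}$, to the familiar narrow-width normalization $\int |P_{D_j}|^{2}\,dp_{N}^{2}=\pi/(m_{N_j}\Gamma_{N_j})$ used in the next section. A small Python sketch, with illustrative mass and width values of our own choosing:

```python
import numpy as np

# Breit-Wigner propagator factor P_{D_j} of eq. (7), written in x = p_N^2 - m_N^2
# to avoid floating-point cancellation near the resonance.
def P(x, mN, GammaN):
    return 1.0 / (x + 1j * GammaN * mN)

mN, GammaN = 2.0, 6.6e-14          # GeV; Gamma ~ hbar/tau for tau ~ 10 ps
x = np.linspace(-5e3 * GammaN * mN, 5e3 * GammaN * mN, 2_000_001)
dx = x[1] - x[0]

# Integrate |P|^2 over p_N^2 across many resonance widths.
integral = np.sum(np.abs(P(x, mN, GammaN))**2) * dx
print(integral, np.pi / (mN * GammaN))   # agree at the per-mille level
```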
In addition to the decay rates, the quantities of interest are their sum and difference $\displaystyle\Gamma_{\mathcal{B}_{1}}+\Gamma_{\overline{\mathcal{B}}_{1}}$ $\displaystyle=2(2-\delta_{\ell_{1}\ell_{2}})\Bigg{[}|V_{\ell_{1}N_{1}}|^{2}|V_{\ell_{2}N_{1}}|^{2}\bigg{(}\widehat{\Gamma}(DD^{\ast})_{11}+\widehat{\Gamma}(CC^{\ast})_{11}\bigg{)}\,$ $\displaystyle+|V_{\ell_{1}N_{2}}|^{2}|V_{\ell_{2}N_{2}}|^{2}\bigg{(}\widehat{\Gamma}(DD^{\ast})_{22}+\widehat{\Gamma}(CC^{\ast})_{22}\bigg{)}\,$ $\displaystyle+2\cos(\theta_{21})|V_{\ell_{1}N_{1}}||V_{\ell_{2}N_{1}}||V_{\ell_{1}N_{2}}||V_{\ell_{2}N_{2}}|\bigg{(}{\rm Re}\widehat{\Gamma}(DD^{\ast})_{12}+{\rm Re}\widehat{\Gamma}(CC^{\ast})_{12}\bigg{)}\Bigg{]}\,,$ (12) $\displaystyle\Gamma_{\mathcal{B}_{1}}-\Gamma_{\overline{\mathcal{B}}_{1}}$ $\displaystyle=4(2-\delta_{\ell_{1}\ell_{2}})|V_{\ell_{1}N_{1}}||V_{\ell_{2}N_{1}}||V_{\ell_{1}N_{2}}||V_{\ell_{2}N_{2}}|\bigg{[}\sin(\theta_{21})\bigg{(}{\rm Im}\widehat{\Gamma}(DD^{\ast})_{12}+{\rm Im}\widehat{\Gamma}(CC^{\ast})_{12}\bigg{)}\bigg{]}\,,$ (13) where the CP-odd phase, based on the convention adopted in (2), is $\displaystyle\begin{split}\theta_{ij}&=\arg(V_{\ell_{1}N_{i}})+\arg(V_{\ell_{2}N_{i}})-\arg(V_{\ell_{1}N_{j}})-\arg(V_{\ell_{2}N_{j}})\,,\\\ &=(\phi_{1i}+\phi_{2i}-\phi_{1j}-\phi_{2j}),\,\,\,\ i,j=1,2\,.\end{split}$ (14) A CP-even phase $\Delta\xi=\xi_{1}-\xi_{2}$, essential for CP violation, is also present in the interference of the $N_{1}$ and $N_{2}$ contributions $\displaystyle{\rm Re}\widehat{\Gamma}(XX^{\ast})_{12}=\frac{\mathcal{N}}{2{m_{\mathcal{B}_{{1}}}}2!}\int m_{N_{1}}m_{N_{2}}|P_{X_{1}}||P_{X_{2}}|\cos(\Delta\xi)T(XX^{\ast})d\Phi_{4},\,\,\,X=C,D\,,$ (15) $\displaystyle{\rm Im}\widehat{\Gamma}(XX^{\ast})_{12}=\frac{\mathcal{N}}{2{m_{\mathcal{B}_{{1}}}}2!}\int m_{N_{1}}m_{N_{2}}|P_{X_{1}}||P_{X_{2}}|\sin(\Delta\xi)T(XX^{\ast})d\Phi_{4},\,\,\,X=D,C\,,$ (16) where $\xi_{1,2}$ are given as $\displaystyle\tan\xi_{1}=\frac{m_{N_{1}}\Gamma_{N_{1}}}{k_{N}^{2}-m_{N_{1}}^{2}},\,\,\,\,\,\ \tan\xi_{2}=\frac{m_{N_{2}}\Gamma_{N_{2}}}{k_{N}^{2}-m_{N_{2}}^{2}}\,,$ (17) and $k_{N}^{2}=p_{N}^{2}$ for the $D$-channel and $k_{N}^{2}=(p^{\prime}_{N})^{2}$ for the $C$-channel. ## 3 Results Following the formalism in the previous section, we turn to numerical analysis with specific decay modes. At the LHC, about 5% of the total $b$-hadrons produced are $\Lambda_{b}$ baryons, and both at the LHCb and CMS the muon reconstruction efficiency is higher than that for the other charged leptons. We therefore focus on the $\Lambda_{b}\to\Lambda_{c}\pi\mu\mu$ and $\Lambda_{b}\to p\pi\mu\mu$ channels. Since $\ell_{1}=\ell_{2}=\mu$, the CP-odd phase is $\theta_{21}=2(\phi_{\mu 2}-\phi_{\mu 1})$. For the numerical analysis, the form factors parameterizing the $\Lambda_{b}^{0}\to\Lambda_{c}^{+}$ and $\Lambda_{b}^{0}\to p^{+}$ hadronic matrix elements are taken from the lattice QCD calculations Detmold:2015aaa , and we take the pion decay constant $f_{\pi}=130.2(0.8)$ MeV from Aoki:2019cca . We also need to know the total decay widths of the heavy neutrinos $\Gamma_{N_{1,2}}$ as a function of their masses. For Majorana neutrino mass between $m_{\pi}+m_{\mu}<m_{N}<({m_{\mathcal{B}_{{1}}}}-{m_{\mathcal{B}_{{2}}}}-m_{\mu})$, both purely leptonic as well as semi-hadronic decays may be relevant. For $m_{N}<1$ GeV, the decays to leptonic modes as well as to light pseudo-scalar and vector mesons have been calculated in Atre:2009rg .
For higher values of $m_{N}$ (above $\sim 1$ GeV), decays to semi-hadronic modes are increasingly difficult to calculate due to the limited knowledge of the resonances. An inclusive approach based on quark-hadron duality was adopted in Ref. Helo:2010cw ; Gribanov:2001vv to calculate the widths of the semi-hadronic channels. For this analysis, we leave the decay width as a phenomenological parameter that can be measured by experiments. Following the analysis of Mejia-Guisao:2017nzx we take the neutrino lifetimes $\tau_{N}=\hbar/\Gamma_{N}=[10,100,1000]$ ps for numerical illustration. We are interested in the signal of leptonic CP asymmetry $\mathcal{A}_{\rm CP}=\frac{\Gamma_{\mathcal{B}_{1}}-\Gamma_{\overline{\mathcal{B}}_{1}}}{\Gamma_{\mathcal{B}_{1}}+\Gamma_{\overline{\mathcal{B}}_{1}}}\,.$ (18) The reason why this asymmetry will be present in the decay can be understood as follows. There are two interfering amplitudes coming from the two intermediate neutrinos $N_{1}$ and $N_{2}$. The interfering amplitudes have a CP-odd phase $\theta_{21}$ that changes sign for the conjugate process. A CP-even phase $\Delta\xi$ comes from an absorptive part that is generated by the interference of the two neutrino contributions and does not change sign in the conjugate process. In general, $\theta_{21}$ can take any value, but a maximal $\mathcal{A}_{\rm CP}$ is obtained for $\theta_{21}=\pi/2$, as can be seen from equation (13). To understand the behavior of $\mathcal{A}_{\rm CP}$ with the neutrino mass, we note that ${\rm Im}[\widehat{\Gamma}(DD^{\ast})_{ij}]\propto{\rm Im}[P_{D_{i}}P_{D_{j}}^{\ast}]$. For our choices of the neutrino lifetime $\tau_{N}$ and the kinematically allowed neutrino mass $m_{N}$, the approximation $\Gamma_{N_{j}}\ll m_{N_{j}}$ is always valid, so that $\displaystyle|P_{D_{j}}|^{2}=\frac{\pi}{m_{N_{j}}\Gamma_{N_{j}}}\delta(p_{N}^{2}-m_{N_{j}}^{2})\,,\quad|P_{C_{j}}|^{2}=\frac{\pi}{m_{N_{j}}\Gamma_{N_{j}}}\delta(p^{\prime 2}_{N}-m_{N_{j}}^{2})\,,$ (19) which yields $\frac{\widehat{\Gamma}({XX^{\ast}})_{ii}}{\widehat{\Gamma}({XX^{\ast}})_{jj}}=\frac{\Gamma_{N_{j}}}{\Gamma_{N_{i}}}\,.$ (20) When the mass difference between the neutrinos is such that $\Gamma_{N_{j}}\ll\Delta m_{N}$, we can write $\displaystyle{\rm Im}[P_{D_{1}}P_{D_{2}}^{\ast}]\bigg{|}_{\Gamma_{N_{j}}\ll\Delta m_{N}}$ $\displaystyle=$ $\displaystyle\mathcal{P}\Big{(}\frac{1}{p_{N}^{2}-m_{N_{1}}^{2}}\Big{)}\pi\delta(p_{N}^{2}-m_{N_{2}}^{2})-\mathcal{P}\Big{(}\frac{1}{p_{N}^{2}-m_{N_{2}}^{2}}\Big{)}\pi\delta(p_{N}^{2}-m_{N_{1}}^{2})$ (21) $\displaystyle=$ $\displaystyle\frac{\pi}{m_{N_{2}}^{2}-m_{N_{1}}^{2}}\Big{(}\delta(p_{N}^{2}-m_{N_{1}}^{2})+\delta(p_{N}^{2}-m_{N_{2}}^{2})\Big{)}\,,$ $\displaystyle=$ $\displaystyle\frac{1}{y}\frac{2\pi}{(m_{N_{1}}+m_{N_{2}})(\Gamma_{N_{1}}+\Gamma_{N_{2}})}\Big{(}\delta(p_{N}^{2}-m_{N_{1}}^{2})+\delta(p_{N}^{2}-m_{N_{2}}^{2})\Big{)}\,,$ where $y=\Delta m_{N}/\Gamma_{N}$ and $\Gamma_{N}=(\Gamma_{N_{1}}+\Gamma_{N_{2}})/2$.
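The pole-splitting relation (21) can be checked numerically. The sketch below is our own check, with illustrative values deep in the $\Gamma_{N}\ll\Delta m_{N}$ regime; it integrates ${\rm Im}[P_{D_{1}}P_{D_{2}}^{\ast}]$ over $p_{N}^{2}$ and compares with the $2\pi/(m_{N_{2}}^{2}-m_{N_{1}}^{2})$ that the two $\delta$-functions in (21) produce:

```python
import numpy as np
from scipy.integrate import quad

m1, m2 = 2.0, 2.001          # GeV; Delta m_N = 1e-3
G1 = G2 = 1e-6               # GeV; Gamma << Delta m_N, i.e. y >> 1
P = lambda p2, m, G: 1.0 / ((p2 - m**2) + 1j * G * m)     # eq. (7)
im12 = lambda p2: (P(p2, m1, G1) * np.conj(P(p2, m2, G2))).imag

val, _ = quad(im12, m1**2 - 0.05, m2**2 + 0.05,
              points=[m1**2, m2**2], limit=500)
print(val, 2 * np.pi / (m2**2 - m1**2))   # agree in the narrow-width limit
```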
Combining (19) and (21), this yields $\displaystyle\frac{{\rm Im}\widehat{\Gamma}({XX^{\ast}})_{12}}{\widehat{\Gamma}({XX^{\ast}})_{jj}}$ $\displaystyle=\frac{{\rm Im}\widehat{\Gamma}({XX^{\ast}})_{12}\big{|}_{\Gamma_{N_{j}}\ll\Delta m_{N}}}{\widehat{\Gamma}({XX^{\ast}})_{jj}}\frac{{\rm Im}\widehat{\Gamma}({XX^{\ast}})_{12}}{{\rm Im}\widehat{\Gamma}({XX^{\ast}})_{12}\big{|}_{\Gamma_{N_{j}}\ll\Delta m_{N}}}\,,$ $\displaystyle=\frac{1}{y}\frac{4\pi}{(m_{N_{1}}+m_{N_{2}})(\Gamma_{N_{1}}+\Gamma_{N_{2}})}\frac{m_{N_{j}}\Gamma_{N_{j}}}{\pi}\eta\,,$ (22) where the suppression factor $\eta\equiv\frac{{\rm Im}\widehat{\Gamma}({XX^{\ast}})_{12}}{{\rm Im}\widehat{\Gamma}({XX^{\ast}})_{12}\big{|}_{\Gamma_{N_{j}}\ll\Delta m_{N}}}\,,$ (23) accounts for the departure from the approximation $\Gamma_{N_{j}}\ll\Delta m_{N}$ in the term ${\rm Im}\widehat{\Gamma}({XX^{\ast}})$. Assuming the neutrinos to be almost degenerate, $m_{N_{1}}\sim m_{N_{2}}=m_{N}$, we obtain $\displaystyle\frac{{\rm Im}\widehat{\Gamma}({XX^{\ast}})_{12}}{\widehat{\Gamma}({XX^{\ast}})_{jj}}$ $\displaystyle=\frac{2\Gamma_{N_{j}}}{(\Gamma_{N_{1}}+\Gamma_{N_{2}})}\frac{\eta(y)}{y}\,.$ (24) We define another factor $\displaystyle\delta_{j}(y)$ $\displaystyle=$ $\displaystyle\frac{{\rm Re}\widehat{\Gamma}({XX^{\ast}})_{12}}{\widehat{\Gamma}({XX^{\ast}})_{jj}}\,,$ (25) which measures the interference of the two neutrinos in the real part of $\widehat{\Gamma}(XX^{\ast})$. Following (20) we get $\frac{\delta_{1}}{\delta_{2}}=\frac{\Gamma_{N_{1}}}{\Gamma_{N_{2}}}\,.$ (26) Since we are considering a same-sign di-muon final state, $\eta$ and $\delta_{j}$ are the same for the $D$ and $C$-channels, which follows from the fact that $\displaystyle\widehat{\Gamma}({DD^{\ast}})_{jj}=\widehat{\Gamma}({CC^{\ast}})_{jj},\,\ {\rm Re}\widehat{\Gamma}({DD^{\ast}})_{12}={\rm Re}\widehat{\Gamma}({CC^{\ast}})_{12},\,$ $\displaystyle{\rm Im}\widehat{\Gamma}({DD^{\ast}})_{12}={\rm Im}\widehat{\Gamma}({CC^{\ast}})_{12}\,.$ (27) The CP asymmetry can now be written in a convenient form as $\displaystyle\mathcal{A}_{\rm CP}=\frac{4\sin\theta_{21}}{\frac{|V_{\ell N_{1}}||V_{\ell N_{1}}|}{|V_{\ell N_{2}}||V_{\ell N_{2}}|}\Big{(}1+\frac{\Gamma_{N_{2}}}{\Gamma_{N_{1}}}\Big{)}+\frac{|V_{\ell N_{2}}||V_{\ell N_{2}}|}{|V_{\ell N_{1}}||V_{\ell N_{1}}|}\Big{(}1+\frac{\Gamma_{N_{1}}}{\Gamma_{N_{2}}}\Big{)}+4\delta(y)\cos\theta_{21}}\frac{\eta(y)}{y}\,,$ (28) where we define $\delta(y)=\frac{\delta_{1}(y)+\delta_{2}(y)}{2}\,.$ (29) For nearly degenerate neutrinos, it is natural to assume $|V_{\mu N_{1}}|\sim|V_{\mu N_{2}}|=|V_{\mu N}|$. This further simplifies the expression for $\mathcal{A}_{\rm CP}$ $\displaystyle\mathcal{A}_{\rm CP}=\frac{4\sin\theta_{21}}{\Big{(}1+\frac{\Gamma_{N_{2}}}{\Gamma_{N_{1}}}\Big{)}+\Big{(}1+\frac{\Gamma_{N_{1}}}{\Gamma_{N_{2}}}\Big{)}+4\delta(y)\cos\theta_{21}}\frac{\eta(y)}{y}\,.$ (30) Figure 2: The factors $\delta(y)$ and $\eta(y)/y$ as a function of $y=\Delta m_{N}/\Gamma_{N}$. The CP asymmetry observable $\mathcal{A}_{\rm CP}$ for $\Lambda_{b}\to\Lambda_{c}\pi\mu\mu$ is shown as a function of $y$ for different values of the weak phase. An identical plot is obtained for $\Lambda_{b}\to p\pi\mu\mu$. In these plots we have taken $|V_{\mu N_{1}}|^{2}=|V_{\mu N_{2}}|^{2}=1$. Figure 3: The branching ratios $\mathcal{B}(\Lambda_{b}\to\Lambda_{c}\pi\mu\mu)$ and $\mathcal{B}(\Lambda_{b}\to p\pi\mu\mu)$ for $|V_{\mu N_{1}}|^{2}\sim|V_{\mu N_{2}}|^{2}=|V_{\mu N}|^{2}=10^{-5}$, the weak phase $\theta_{21}=\pi/4$, the mass difference $\Delta m_{N}=10^{-15}$ GeV, and the neutrino lifetimes $\tau_{N}=[100,1000]$ ps.
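Equation (30) is straightforward to evaluate once $\eta(y)$ and $\delta(y)$ are known (in this work they come from the phase-space integrals and are shown in figure 2). A minimal sketch of our own, with `eta` and `delta` supplied by the user, e.g. as interpolations of those results:

```python
import numpy as np

def A_CP(theta21, y, eta, delta, G1=1.0, G2=1.0):
    """CP asymmetry of eq. (30) for |V_{mu N1}| ~ |V_{mu N2}|.

    theta21 : CP-odd phase; y = Delta m_N / Gamma_N;
    eta, delta : callables for the factors eta(y), delta(y) of figure 2;
    G1, G2 : widths Gamma_{N1}, Gamma_{N2} (only their ratio matters).
    """
    denom = (1 + G2 / G1) + (1 + G1 / G2) + 4 * delta(y) * np.cos(theta21)
    return 4 * np.sin(theta21) / denom * eta(y) / y

# For equal widths and theta21 = pi/2 this reduces to eta(y)/y, which is
# why the asymmetry in figure 2 peaks near y ~ 1.
```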
In figure 2 we show the suppression factor $\eta(y)/y$ and $\delta(y)$ as a function of $y$. This figure demonstrates that $\mathcal{A}_{\rm CP}$ is maximal for $y\sim 1$, i.e., when $\Delta m_{N}\sim\Gamma_{N}$. In figure 2 we also show $\mathcal{A}_{\rm CP}$ for the $\Lambda_{b}\to\Lambda_{c}\pi\mu\mu$ mode for different values of $\theta_{21}$ and as a function of $y$. An identical plot is obtained for $\Lambda_{b}\to p\pi\mu\mu$. For a particular mode, the possibility of observing $\mathcal{A}_{\rm CP}$ depends not only on its size but also on the decay rates. In figure 3 we show the CP-averaged branching ratios $\displaystyle\mathcal{B}r(\mathcal{B}_{1}\rightarrow\mathcal{B}_{2}\pi\mu\mu)=\frac{1}{2}\bigg{(}\mathcal{B}r(\mathcal{B}_{1}\to\mathcal{B}_{2}\pi^{+}\mu^{-}\mu^{-})+\mathcal{B}r(\overline{\mathcal{B}}_{1}\to\overline{\mathcal{B}}_{2}\pi^{-}\mu^{+}\mu^{+})\bigg{)}\,,$ (31) of $\Lambda_{b}\to\Lambda_{c}\pi\mu\mu$ and $\Lambda_{b}\to p\pi\mu\mu$ as a function of the sterile neutrino mass $m_{N_{1}}$ for neutrino lifetimes $\tau_{N_{1}}\sim\tau_{N_{2}}\sim 100$ ps and 1000 ps, $|V_{\mu N}|^{2}\sim 10^{-5}$, $\theta_{21}=\pi/4$, and the neutrino mass difference $\Delta m_{N}=10^{-15}$ GeV. We find that the branching ratio of $\Lambda_{b}\to\Lambda_{c}\pi\mu\mu$ can be within the $10^{-10}-10^{-9}$ range, whereas $\mathcal{B}(\Lambda_{b}\to p\pi\mu\mu)\sim 10^{-12}-10^{-11}$ is suppressed due to the small CKM element $V_{ub}$. These rates are within the reach of the future LHC sensitivity. For a detailed discussion of the number of events expected at LHCb and CMS, see Ref. Mejia-Guisao:2017nzx . Figure 4: Exclusion regions on the $(m_{N_{1}},|V_{\mu N}|^{2})$ parameter space for $\mathcal{B}r(\Lambda_{b}\to p\pi\mu\mu)<10^{-8},10^{-9}$ and $\mathcal{B}r(\Lambda_{b}\to\Lambda_{c}\pi\mu\mu)<10^{-7},10^{-8}$ for different values of $\tau_{N}$, $\theta_{21}=\pi/4$, and $\Delta m_{N}=10^{-15}$ GeV. Even if the decays are not observed, upper limits can be translated to bounds on the $m_{N}$ vs $|V_{\mu N}|^{2}$ parameter space. In figure 4 we show the exclusion region in the ($m_{N}$,$|V_{\mu N}|^{2}$) plane obtained by assuming upper bounds $\mathcal{B}r(\Lambda_{b}\to p\pi\mu\mu)<10^{-8},10^{-9}$ and $\mathcal{B}r(\Lambda_{b}\to\Lambda_{c}\pi\mu\mu)<10^{-7},10^{-8}$ for different choices of the heavy neutrino lifetimes. The regions shown in brown, light-green, and light-red correspond to the exclusions obtained for $\tau_{N}=10\,{\rm ps},100\,{\rm ps}$ and $1000$ ps, respectively. To compare our bounds, in figure 4 we also show the exclusion limits from LHCb Shuve:2016muy ; Aaij:2014aba , Belle Liventsev:2013zz , L3 Adriani:1992pq , Delphi Abreu:1996pa , NA3 Badier:1985wg , CHARM Vilain:1994vg , NuTeV Vaitaitis:1999wq and NA48 CERNNA48/2:2016tdo experiments. These comparisons show that the LNV modes $\Lambda_{b}\to\Lambda_{c}\pi\mu\mu$ and $\Lambda_{b}\to p\pi\mu\mu$ can give complementary bounds on the sterile neutrino parameters. Together with the possibility of observing CP asymmetry, this makes these modes well worth searching for at the LHC. ## 4 Summary In this paper, we have studied the lepton number violating baryonic decays $\Lambda_{b}\to\Lambda_{c}\pi\mu\mu$ and $\Lambda_{b}\to p\pi\mu\mu$ that are mediated by on-shell sterile Majorana neutrinos. The decays are studied in a model with two Majorana neutrinos. An interesting consequence of considering two Majorana neutrinos is that it gives rise to the possibility of CP violation in these modes.
We find that appreciable CP asymmetry can be achieved if the neutrinos are quasi-degenerate and their mass difference is of the order of the decay widths. We have shown that, in the absence of observation, upper limits on the branching ratios can give limits on the $m_{N}$ vs $|V_{\mu N}|^{2}$ parameter space that are comparable to the limits obtained by other methods. ## Acknowledgements The authors would like to thank Debajyoti Choudhury for fruitful discussions. DD acknowledges the DST, Govt. of India for the INSPIRE Faculty Fellowship (grant number IFA16-PH170). JD acknowledges the Council of Scientific and Industrial Research (CSIR), Govt. of India for the JRF fellowship grant with File No. 09/045(1511)/2017-EMR-I. ## Appendix A $\mathcal{B}_{1}\to\mathcal{B}_{2}^{\mp}\pi^{\mp}\ell_{1}^{\pm}\ell_{2}^{\pm}$ amplitudes The effective Hamiltonians for the $\mathcal{B}_{1}\to\mathcal{B}_{2}\ell N$ decay and the subsequent decay of the intermediate neutrino $N\to\ell\pi$ are $\displaystyle\mathcal{H}_{\rm eff}^{b\to q\ell N}=\frac{G_{F}}{\sqrt{2}}V_{qb}\bar{q}\gamma_{\mu}(1-\gamma_{5})b\Bigg{(}\sum_{\ell=e}^{\tau}\sum_{i=1}^{3}U_{\ell i}\bar{\nu}_{i}\gamma^{\mu}(1-\gamma_{5})\ell+\sum_{\ell=e}^{\tau}\sum_{j=1}^{n}V_{\ell N_{j}}\bar{N_{j}^{c}}\gamma^{\mu}(1-\gamma_{5})\ell\Bigg{)}+{\rm h.c}\,,$ (32) $\displaystyle\mathcal{H}_{\rm eff}^{N_{j}\to\ell\pi}=\frac{G_{F}}{\sqrt{2}}V_{ud}\bar{d}\gamma_{\mu}(1-\gamma_{5})u\sum_{\ell=e}^{\tau}\sum_{j=1}^{n}V_{\ell N_{j}}\bar{\ell}\gamma^{\mu}(1-\gamma_{5})N_{j}^{c}+{\rm h.c}\,.$ (33) The $\mathcal{B}_{1}\to\mathcal{B}_{2}\pi\ell\ell$ amplitudes for the $D$ and $C$ channel diagrams are $\displaystyle\mathcal{M}_{D_{j}}^{\pm}$ $\displaystyle=(G_{F}^{2}V_{qb}m_{N_{j}})\big{(}V_{\ell_{1}N_{j}}V_{\ell_{2}N_{j}}\big{)}P_{D_{j}}H_{\nu}^{\pm}L^{\nu\alpha\pm}_{D}J_{\alpha}^{\pm}\,,$ (34) $\displaystyle\mathcal{M}_{C_{j}}^{\pm}$ $\displaystyle=(G_{F}^{2}V_{qb}m_{N_{j}})\big{(}V_{\ell_{1}N_{j}}V_{\ell_{2}N_{j}}\big{)}P_{C_{j}}H_{\nu}^{\pm}L^{\alpha\nu\pm}_{C}J_{\alpha}^{\pm}\,,$ (35) where the ‘+’ corresponds to the modes $\Lambda_{b}^{0}\to(\Lambda_{c}^{+},p^{+})\pi^{+}\ell_{1}^{-}\ell_{2}^{-}$ and the ‘-’ corresponds to the conjugate modes. According to our convention, $H_{\nu}^{+}=H_{\nu}$ and $H_{\nu}^{-}=H_{\nu}^{\ast}$. The CKM elements are $V_{qb}=V_{ub}$ for the $\Lambda_{b}^{0}\to p^{+}\pi\ell\ell$ and $V_{qb}=V_{cb}$ for the $\Lambda_{b}^{0}\to\Lambda_{c}^{+}\pi\ell\ell$ modes. The leptonic parts of the amplitudes are $\displaystyle L^{\nu\alpha\pm}_{D}=\bar{u}_{\ell_{1}}(p_{1})\gamma^{\nu}\gamma^{\alpha}(1\pm\gamma_{5})v_{\ell_{2}}(p_{2})\,,\quad L^{\alpha\nu\pm}_{C}=\bar{u}_{\ell_{1}}(p_{1})\gamma^{\alpha}\gamma^{\nu}(1\pm\gamma_{5})v_{\ell_{2}}(p_{2})\,.$ (36) The hadronic amplitudes $H^{\mu}$ are calculated using the form factor parametrization of the $\mathcal{B}_{1}\to\mathcal{B}_{2}$ transitions from Ref.
Detmold:2015aaa $\displaystyle\langle{\mathcal{B}_{{2}}}(k,s_{k})|\bar{q}\gamma^{\mu}b|{\mathcal{B}_{{1}}}(p,s_{p})\rangle=$ $\displaystyle\bar{u}(k,s_{k})\Bigg{[}f^{V}_{t}(q^{2})(m_{{\mathcal{B}_{{1}}}}-m_{{\mathcal{B}_{{2}}}})\frac{q^{\mu}}{q^{2}}$ $\displaystyle+$ $\displaystyle f^{V}_{0}(q^{2})\frac{m_{{\mathcal{B}_{{1}}}}+m_{{\mathcal{B}_{{2}}}}}{s_{+}}\\{p^{\mu}+k^{\mu}-\frac{q^{\mu}}{q^{2}}(m_{{\mathcal{B}_{{1}}}}^{2}-m_{{\mathcal{B}_{{2}}}}^{2})\\}$ $\displaystyle+$ $\displaystyle f^{V}_{\perp}(q^{2})\\{\gamma^{\mu}-\frac{2m_{{\mathcal{B}_{{2}}}}}{s_{+}}p^{\mu}-\frac{2m_{{\mathcal{B}_{{1}}}}}{s_{+}}k^{\mu}\\}\Bigg{]}u(p,s_{p})\,,$ (37) $\displaystyle\langle{\mathcal{B}_{{2}}}(k,s_{k})|\bar{q}\gamma^{\mu}\gamma_{5}b|{\mathcal{B}_{{1}}}(p,s_{p})\rangle=$ $\displaystyle-\bar{u}(k,s_{k})\gamma_{5}\Bigg{[}f_{t}^{A}(q^{2})(m_{{\mathcal{B}_{{1}}}}+m_{{\mathcal{B}_{{2}}}})\frac{q^{\mu}}{q^{2}}$ $\displaystyle+$ $\displaystyle f_{0}^{A}(q^{2})\frac{m_{{\mathcal{B}_{{1}}}}-m_{{\mathcal{B}_{{2}}}}}{s_{-}}\\{p^{\mu}+k^{\mu}-\frac{q^{\mu}}{q^{2}}(m_{{\mathcal{B}_{{1}}}}^{2}-m_{{\mathcal{B}_{{2}}}}^{2})\\}$ $\displaystyle+$ $\displaystyle f_{\perp}^{A}(q^{2})\\{\gamma^{\mu}+\frac{2m_{{\mathcal{B}_{{2}}}}}{s_{-}}p^{\mu}-\frac{2m_{{\mathcal{B}_{{1}}}}}{s_{-}}k^{\mu}\\}\Bigg{]}u(p,s_{p})\,.$ (38) Using (37) and (38) we can write the expression for $H^{\mu}$ as $\displaystyle H^{\mu}=\langle{\mathcal{B}_{{2}}}(k,s_{k})|\bar{q}\gamma^{\mu}(1-\gamma_{5})b|{\mathcal{B}_{{1}}}(p,s_{p})\rangle$ $\displaystyle=\bar{u}(k,s_{k})\Big{(}A_{1}q^{\mu}+A_{2}k^{\mu}+A_{3}\gamma^{\mu}+\gamma_{5}\big{\\{}A_{4}q^{\mu}+A_{5}k^{\mu}+A_{6}\gamma^{\mu}\big{\\}}\Big{)}u(p,s_{p})\,,$ (39) where the $q^{2}$-dependent functions $A_{i}$ can be written in terms of the form factors as $\displaystyle A_{1}=f_{t}^{V}\frac{{m_{\mathcal{B}_{{1}}}}-{m_{\mathcal{B}_{{2}}}}}{q^{2}}+f_{0}^{V}\frac{{m_{\mathcal{B}_{{1}}}}+{m_{\mathcal{B}_{{2}}}}}{s_{+}}\bigg{(}1-\frac{{m^{2}_{\mathcal{B}_{{1}}}}-{m^{2}_{\mathcal{B}_{{2}}}}}{q^{2}}\bigg{)}-f_{\perp}^{V}\frac{2{m_{\mathcal{B}_{{2}}}}}{s_{+}}\,,$ (40) $\displaystyle A_{2}=2f_{0}^{V}\frac{{m_{\mathcal{B}_{{1}}}}+{m_{\mathcal{B}_{{2}}}}}{s_{+}}-f_{\perp}^{V}\bigg{(}\frac{2{m_{\mathcal{B}_{{2}}}}}{s_{+}}+\frac{2{m_{\mathcal{B}_{{1}}}}}{s_{+}}\bigg{)}\,,$ (41) $\displaystyle A_{3}=f_{\perp}^{V}\,,$ (42) $\displaystyle A_{4}=f_{t}^{A}\frac{{m_{\mathcal{B}_{{1}}}}+{m_{\mathcal{B}_{{2}}}}}{q^{2}}+f_{0}^{A}\frac{{m_{\mathcal{B}_{{1}}}}-{m_{\mathcal{B}_{{2}}}}}{s_{-}}\bigg{(}1-\frac{{m^{2}_{\mathcal{B}_{{1}}}}-{m^{2}_{\mathcal{B}_{{2}}}}}{q^{2}}\bigg{)}+f_{\perp}^{A}\frac{2{m_{\mathcal{B}_{{2}}}}}{s_{-}}\,,$ (43) $\displaystyle A_{5}=2f_{0}^{A}\frac{{m_{\mathcal{B}_{{1}}}}-{m_{\mathcal{B}_{{2}}}}}{s_{-}}+f_{\perp}^{A}\bigg{(}\frac{2{m_{\mathcal{B}_{{2}}}}}{s_{-}}-\frac{2{m_{\mathcal{B}_{{1}}}}}{s_{-}}\bigg{)}\,,$ (44) $\displaystyle A_{6}=f_{\perp}^{A}\,.$ (45) Finally, the amplitude for the pion production is $\displaystyle J_{\alpha}^{+}=\braket{\pi^{+}(k_{\pi})}{\bar{u}\gamma_{\alpha}(1-\gamma_{5})d}{0}=ik_{\pi\alpha}V_{ud}f_{\pi}\,,\quad J_{\alpha}^{-}=-ik_{\pi\alpha}V_{ud}^{\ast}f_{\pi}=(J_{\alpha}^{+})^{\dagger}\,.$ (46) The matrix element mod-squared after summing over the final spins and averaging over the initial spin is given in equation (2).
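For implementation, (40)-(45) translate directly into code. The sketch below is our own; it assumes the standard convention of Detmold:2015aaa , $s_{\pm}=(m_{\mathcal{B}_{1}}\pm m_{\mathcal{B}_{2}})^{2}-q^{2}$ (not restated in the text above), with the six lattice form factors evaluated at the given $q^{2}$:

```python
def helicity_to_A(q2, m1, m2, fVt, fV0, fVp, fAt, fA0, fAp):
    """A_1..A_6 of eqs. (40)-(45) from the helicity form factors at q2.

    m1, m2 : parent/daughter baryon masses; fV*, fA* : f_t, f_0, f_perp
    for the vector and axial currents. Assumes s_pm = (m1 +- m2)^2 - q2.
    """
    sp = (m1 + m2)**2 - q2
    sm = (m1 - m2)**2 - q2
    A1 = (fVt * (m1 - m2) / q2
          + fV0 * (m1 + m2) / sp * (1 - (m1**2 - m2**2) / q2)
          - fVp * 2 * m2 / sp)
    A2 = 2 * fV0 * (m1 + m2) / sp - fVp * (2 * m2 + 2 * m1) / sp
    A3 = fVp
    A4 = (fAt * (m1 + m2) / q2
          + fA0 * (m1 - m2) / sm * (1 - (m1**2 - m2**2) / q2)
          + fAp * 2 * m2 / sm)
    A5 = 2 * fA0 * (m1 - m2) / sm + fAp * (2 * m2 - 2 * m1) / sm
    A6 = fAp
    return A1, A2, A3, A4, A5, A6
```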
The quadratic terms $T_{\pm}(XY^{\ast})$ and $T(XY^{\ast})$ given in (2) are $\displaystyle T_{\pm}(DD^{\ast})$ $\displaystyle=\sum_{spins}[H_{\nu}^{\pm}(H_{\rho}^{\pm})^{\ast}]\sum_{spins}[L_{D}^{\nu\alpha\pm}(L_{D}^{\rho\beta\pm})^{\ast}]k_{\pi\alpha}k_{\pi\beta}$ (47) $\displaystyle T_{\pm}(CC^{\ast})$ $\displaystyle=\sum_{spins}[H_{\nu}^{\pm}(H_{\rho}^{\pm})^{\ast}]\sum_{spins}[L_{C}^{\alpha\nu\pm}(L_{C}^{\beta\rho\pm})^{\ast}]k_{\pi\alpha}k_{\pi\beta}$ (48) $\displaystyle T_{\pm}(DC^{\ast})$ $\displaystyle=\sum_{spins}[H_{\nu}^{\pm}(H_{\rho}^{\pm})^{\ast}]\sum_{spins}[L_{D}^{\nu\alpha\pm}(L_{C}^{\beta\rho\pm})^{\ast}]k_{\pi\alpha}k_{\pi\beta}$ (49) $\displaystyle T_{\pm}(D^{\ast}C)$ $\displaystyle=\sum_{spins}[(H_{\nu}^{\pm})^{\ast}H_{\rho}^{\pm}]\sum_{spins}[(L_{D}^{\nu\alpha\pm})^{\ast}L_{C}^{\beta\rho\pm}]k_{\pi\alpha}k_{\pi\beta}$ (50) $\displaystyle T(DD^{\ast})$ $\displaystyle\equiv T_{+}(DD^{\ast})=T_{-}(DD^{\ast}),\,\,\,T(CC^{\ast})\equiv T_{+}(CC^{\ast})=T_{-}(CC^{\ast})\,,$ (51) $\displaystyle T_{+}(DC^{\ast})$ $\displaystyle=T_{-}(D^{\ast}C),\,\,\,T_{-}(DC^{\ast})=T_{+}(D^{\ast}C).$ (52) In appendix B we describe the kinematics needed to calculate the momentum dot products that enter the quadratic terms $T_{\pm}(XY^{\ast})$. ## Appendix B Kinematics As mentioned in the text, the contributions coming from the interference of the direct and cross channel diagrams are negligibly small and are neglected in our calculations. Therefore, the kinematics of the direct and cross channels can be evaluated independently. In this section, we work out the kinematics for the direct channel; the cross channel can be obtained trivially from the results presented here. Referring to the diagrams in figure 1, we work out the kinematics of the $\mathcal{B}^{0}_{1}({p_{\mathcal{B}_{{1}}}})\to\mathcal{B}_{2}({p_{\mathcal{B}_{{2}}}})\pi^{+}(p_{\pi})\ell_{1}^{-}(p_{1})\ell_{2}^{-}(p_{2})$ decay in the ${\mathcal{B}_{{1}}}(p_{{\mathcal{B}_{{1}}}})$ rest frame (${\mathcal{B}_{{1}}}$-RF). In this frame the four-momenta of ${\mathcal{B}_{{2}}}(p_{{\mathcal{B}_{{2}}}})$ and $W_{1}(q)$ are $\displaystyle p^{{\mathcal{B}_{{1}}}\text{-RF}}_{{\mathcal{B}_{{2}}}}\equiv(m_{{\mathcal{B}_{{1}}}}-E^{{\mathcal{B}_{{1}}}\text{-RF}}_{q},0,0,{\bf p}^{{\mathcal{B}_{{1}}}\text{-RF}}_{{\mathcal{B}_{{2}}}})\,,$ (53) $\displaystyle q^{{\mathcal{B}_{{1}}}\text{-RF}}\equiv(E^{{\mathcal{B}_{{1}}}\text{-RF}}_{q},0,0,-{\bf p}^{{\mathcal{B}_{{1}}}\text{-RF}}_{{\mathcal{B}_{{2}}}})\,,$ (54) where $E^{{\mathcal{B}_{{1}}}\text{-RF}}_{q}$ and ${\bf p}^{{\mathcal{B}_{{1}}}\text{-RF}}_{{\mathcal{B}_{{2}}}}$ are $\displaystyle E^{{\mathcal{B}_{{1}}}\text{-RF}}_{q}=\frac{{m^{2}_{\mathcal{B}_{{1}}}}+q^{2}-{m^{2}_{\mathcal{B}_{{2}}}}}{2{m_{\mathcal{B}_{{1}}}}}\,,\quad|{\bf p}^{{\mathcal{B}_{{1}}}\text{-RF}}_{{\mathcal{B}_{{2}}}}|=\frac{\sqrt{\lambda({m^{2}_{\mathcal{B}_{{1}}}},{m^{2}_{\mathcal{B}_{{2}}}},q^{2})}}{2{m_{\mathcal{B}_{{1}}}}}\,,$ (55) and $\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2(ab+bc+ca)$. In the $W^{-}$-RF, we define $\theta_{1}$ as the angle made by $\ell_{1}$ with respect to the ${\mathcal{B}_{{2}}}$ direction, i.e., the $+\hat{z}$ direction.
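The Källén function and the two-body energy/momentum relations of (55) recur throughout this appendix (e.g. in (58) and (65) below), so a small helper is useful; a sketch of our own:

```python
import numpy as np

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c) = a^2 + b^2 + c^2 - 2(ab+bc+ca)."""
    return a**2 + b**2 + c**2 - 2 * (a * b + b * c + c * a)

def two_body(M2, m_a2, m_b2):
    """Energy of subsystem 'a' and the common 3-momentum magnitude in the
    rest frame of a system of invariant mass squared M2 splitting into
    a + b, cf. eqs. (55) and (58)."""
    Ea = (M2 + m_a2 - m_b2) / (2 * np.sqrt(M2))
    p = np.sqrt(max(kallen(M2, m_a2, m_b2), 0.0)) / (2 * np.sqrt(M2))
    return Ea, p

# e.g. eq. (55): E_q, |p_B2| = two_body(mB1**2, q2, mB2**2)
```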
The four-momenta of $\ell_{1}(p_{1})$ and $N(p_{N})$ read $\displaystyle p_{1}^{W^{-}\text{-RF}}=(E_{1}^{W^{-}\text{-RF}},|{\bf p}_{1}^{W^{-}\text{-RF}}|\sin\theta_{1},0,|{\bf p}_{1}^{W^{-}\text{-RF}}|\cos\theta_{1})\,,$ (56) $\displaystyle p_{N}^{W^{-}\text{-RF}}=(\sqrt{q^{2}}-E_{1}^{W^{-}\text{-RF}},-|{\bf p}_{1}^{W^{-}\text{-RF}}|\sin\theta_{1},0,-|{\bf p}_{1}^{W^{-}\text{-RF}}|\cos\theta_{1})\,,$ (57) where $E_{1}^{W^{-}\text{-RF}}$ and ${\bf p}_{1}^{W^{-}\text{-RF}}$ are given as $\displaystyle E_{1}^{W^{-}\text{-RF}}=\frac{q^{2}+m_{1}^{2}-p_{N}^{2}}{2\sqrt{q^{2}}}\,,\quad|{\bf p}_{1}^{W^{-}\text{-RF}}|=\frac{\sqrt{\lambda(q^{2},m_{1}^{2},p_{N}^{2})}}{2\sqrt{q^{2}}}\,.$ (58) The Lorentz boost matrix to transform four-vectors from the $W^{-}\text{-RF}$ to the ${\mathcal{B}_{{1}}}\text{-RF}$ reads $\Lambda_{W^{-}\to{\mathcal{B}_{{1}}}}=\begin{pmatrix}\gamma_{1}&-\gamma_{1}\beta_{1x}&-\gamma_{1}\beta_{1y}&-\gamma_{1}\beta_{1z}\\\ -\gamma_{1}\beta_{1x}&1+(\gamma_{1}-1)\frac{\beta^{2}_{1x}}{\vec{\beta}_{1}^{2}}&(\gamma_{1}-1)\frac{\beta_{1x}\beta_{1y}}{\vec{\beta}_{1}^{2}}&(\gamma_{1}-1)\frac{\beta_{1x}\beta_{1z}}{\vec{\beta}_{1}^{2}}\\\ -\gamma_{1}\beta_{1y}&(\gamma_{1}-1)\frac{\beta_{1x}\beta_{1y}}{\vec{\beta}_{1}^{2}}&1+(\gamma_{1}-1)\frac{\beta^{2}_{1y}}{\vec{\beta}_{1}^{2}}&(\gamma_{1}-1)\frac{\beta_{1y}\beta_{1z}}{\vec{\beta}_{1}^{2}}\\\ -\gamma_{1}\beta_{1z}&(\gamma_{1}-1)\frac{\beta_{1x}\beta_{1z}}{\vec{\beta}_{1}^{2}}&(\gamma_{1}-1)\frac{\beta_{1y}\beta_{1z}}{\vec{\beta}_{1}^{2}}&1+(\gamma_{1}-1)\frac{\beta^{2}_{1z}}{\vec{\beta}_{1}^{2}}\\\ \end{pmatrix}\,,$ (59) where $\vec{\beta}_{1}$ is the velocity of the $W^{-}(q)$ as seen in the ${\mathcal{B}_{{1}}}\text{-RF}$, and $\displaystyle\gamma_{1}=\frac{1}{\sqrt{1-\vec{\beta}_{1}^{2}}}\,,\quad\beta_{1x}=0\,,\quad\beta_{1y}=0\,,\quad\beta_{1z}=\frac{|{\bf p}^{{\mathcal{B}_{{1}}}\text{-RF}}_{{\mathcal{B}_{{2}}}}|}{E^{{\mathcal{B}_{{1}}}\text{-RF}}_{q}}\,.$ (60) In the heavy neutrino rest frame ($N\text{-RF}$), the four-momenta of $\ell_{2}(p_{2})$ and the $W^{+}(p_{\pi})$ are given as $\displaystyle p_{2}^{N{\rm-RF}}\equiv(E_{2}^{N{\rm-RF}},|{\bf p}_{2}^{N{\rm-RF}}|\sin\theta_{2}\cos\phi,|{\bf p}_{2}^{N{\rm-RF}}|\sin\theta_{2}\sin\phi,|{\bf p}_{2}^{N{\rm-RF}}|\cos\theta_{2})\,,$ (61) $\displaystyle p_{\pi}^{N{\rm-RF}}\equiv(E_{\pi}^{N{\rm-RF}},-|{\bf p}_{2}^{N{\rm-RF}}|\sin\theta_{2}\cos\phi,-|{\bf p}_{2}^{N{\rm-RF}}|\sin\theta_{2}\sin\phi,-|{\bf p}_{2}^{N{\rm-RF}}|\cos\theta_{2})\,,$ (62) where $\theta_{2}$ is the angle made by the lepton $\ell_{2}$ with respect to the $+\hat{z}$ direction.
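The boost matrices (59) and the analogous (66) below have the same generic form, so a single helper that builds the $4\times 4$ boost from a velocity vector suffices; a minimal sketch of our own:

```python
import numpy as np

def boost_matrix(beta):
    """4x4 Lorentz boost of eqs. (59)/(66) for a velocity 3-vector beta.

    Acts on contravariant four-vectors (E, px, py, pz); reduces to the
    identity as beta -> 0.
    """
    beta = np.asarray(beta, dtype=float)
    b2 = beta @ beta
    if b2 == 0.0:
        return np.eye(4)
    g = 1.0 / np.sqrt(1.0 - b2)
    L = np.eye(4)
    L[0, 0] = g
    L[0, 1:] = L[1:, 0] = -g * beta
    L[1:, 1:] += (g - 1.0) * np.outer(beta, beta) / b2
    return L

# e.g. eq. (60): Lam = boost_matrix([0.0, 0.0, pB2_B1RF / Eq_B1RF])
```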
The angle $\phi$ is made by the plane containing $\ell_{2}$ and $W^{+}$ with respect to the plane containing $\ell_{1}$ and $N$ as seen in the ${{\mathcal{B}_{{1}}}{\rm-RF}}$, and is defined as $\displaystyle\cos\phi=(\hat{p}_{1}^{{\mathcal{B}_{{1}}}{\rm-RF}}\times\hat{p}_{N}^{{\mathcal{B}_{{1}}}{\rm-RF}})\cdot(\hat{p}_{2}^{{\mathcal{B}_{{1}}}{\rm-RF}}\times\hat{p}_{W^{+}}^{{\mathcal{B}_{{1}}}{\rm-RF}})\,,$ (63) $\displaystyle\sin\phi=-\bigg{[}(\hat{p}_{1}^{{\mathcal{B}_{{1}}}{\rm-RF}}\times\hat{p}_{N}^{{\mathcal{B}_{{1}}}{\rm-RF}})\times(\hat{p}_{2}^{{\mathcal{B}_{{1}}}{\rm-RF}}\times\hat{p}_{W^{+}}^{{\mathcal{B}_{{1}}}{\rm-RF}})\bigg{]}\cdot\hat{p}_{N}^{{\mathcal{B}_{{1}}}{\rm-RF}}\,.$ (64) The energies $E_{2}^{N{\rm-RF}},E_{\pi}^{N{\rm-RF}}$ and the three-momentum $|{\bf p}_{2}^{N{\rm-RF}}|$ are $E_{2}^{N{\rm-RF}}=\frac{p_{N}^{2}+m_{2}^{2}-p_{\pi}^{2}}{2\sqrt{p_{N}^{2}}}\,,\quad E_{\pi}^{N{\rm-RF}}=\sqrt{p_{N}^{2}}-E_{2}^{N{\rm-RF}},\quad|{\bf p}_{2}^{N{\rm-RF}}|=\frac{\sqrt{\lambda(p_{N}^{2},m_{2}^{2},p_{\pi}^{2})}}{2\sqrt{p_{N}^{2}}}\,.$ (65) The Lorentz boost required to go from the heavy neutrino rest frame to the $W^{-}$ rest frame is given as $\Lambda_{N\to W^{-}}=\begin{pmatrix}\gamma_{2}&-\gamma_{2}\beta_{2x}&-\gamma_{2}\beta_{2y}&-\gamma_{2}\beta_{2z}\\\ -\gamma_{2}\beta_{2x}&1+(\gamma_{2}-1)\frac{\beta^{2}_{2x}}{\vec{\beta}_{2}^{2}}&(\gamma_{2}-1)\frac{\beta_{2x}\beta_{2y}}{\vec{\beta}_{2}^{2}}&(\gamma_{2}-1)\frac{\beta_{2x}\beta_{2z}}{\vec{\beta}_{2}^{2}}\\\ -\gamma_{2}\beta_{2y}&(\gamma_{2}-1)\frac{\beta_{2x}\beta_{2y}}{\vec{\beta}_{2}^{2}}&1+(\gamma_{2}-1)\frac{\beta^{2}_{2y}}{\vec{\beta}_{2}^{2}}&(\gamma_{2}-1)\frac{\beta_{2y}\beta_{2z}}{\vec{\beta}_{2}^{2}}\\\ -\gamma_{2}\beta_{2z}&(\gamma_{2}-1)\frac{\beta_{2x}\beta_{2z}}{\vec{\beta}_{2}^{2}}&(\gamma_{2}-1)\frac{\beta_{2y}\beta_{2z}}{\vec{\beta}_{2}^{2}}&1+(\gamma_{2}-1)\frac{\beta^{2}_{2z}}{\vec{\beta}_{2}^{2}}\\\ \end{pmatrix}\,,$ (66) where $\vec{\beta}_{2}$ is the velocity of $N$ as seen in the $W^{-}$-RF: $\displaystyle\gamma_{2}=\frac{1}{\sqrt{1-\vec{\beta}_{2}^{2}}}\,,\quad\beta_{2x}=\frac{|{\bf p}^{W^{-}\rm-RF}_{N}|}{E^{W^{-}\rm-RF}_{N}}\sin\theta_{1}\,,\quad\beta_{2y}=0\,,\quad\beta_{2z}=\frac{|{\bf p}^{W^{-}\rm-RF}_{N}|}{E^{W^{-}\rm-RF}_{N}}\cos\theta_{1}\,.$ (67) ## Appendix C Phase space The differential width for the four-body final state is $d\Gamma=\frac{1}{2{m_{\mathcal{B}_{{1}}}}}|\mathcal{M}|^{2}d\Phi_{4}\bigg{(}{\mathcal{B}_{{1}}}\to{\mathcal{B}_{{2}}}\ell_{1}\ell_{2}\pi\bigg{)}\,.$ (68) The four-body phase space can be split up as $\displaystyle d\Phi_{4}({\mathcal{B}_{{1}}}\to{\mathcal{B}_{{2}}}\ell_{1}\ell_{2}\pi)$ $\displaystyle=d\Phi_{3}({\mathcal{B}_{{1}}}\to{\mathcal{B}_{{2}}}\ell_{1}N)\frac{dp_{N}^{2}}{2\pi}d\Phi_{2}(N\to\ell_{2}\pi)\,,\quad\text{for $D$-channel}\,,$ (69) $\displaystyle=d\Phi_{3}({\mathcal{B}_{{1}}}\to{\mathcal{B}_{{2}}}\ell_{2}N)\frac{dp_{N}^{2}}{2\pi}d\Phi_{2}(N\to\ell_{1}\pi)\,,\quad\text{for $C$-channel}\,.$ (70) The expressions for the different phase-space factors are given below $\displaystyle d\Phi_{3}({\mathcal{B}_{{1}}}\to{\mathcal{B}_{{2}}}\ell_{1}N)=\frac{\sqrt{\lambda(1,{m^{2}_{\mathcal{B}_{{2}}}}/{m^{2}_{\mathcal{B}_{{1}}}},q^{2}/{m^{2}_{\mathcal{B}_{{1}}}})}}{4\pi}\frac{\sqrt{\lambda(1,m_{\ell_{1}}^{2}/q^{2},p_{N}^{2}/q^{2})}}{(8\pi)^{2}}\int dq^{2}d\cos\theta_{1}\,,$ (71) $\displaystyle d\Phi_{2}(N\to\ell_{2}\pi)=\frac{\sqrt{\lambda(1,m_{\ell_{2}}^{2}/p_{N}^{2},p_{\pi}^{2}/p_{N}^{2})}}{8\pi}\int\frac{d\cos\theta_{2}}{2}\frac{d\phi}{2\pi}\,.$ (72) The limits of the
integration of the angles are as follows $\displaystyle-1\leq\cos\theta_{1}\leq 1\,,\quad-1\leq\cos\theta_{2}\leq 1\,,\quad 0\leq\phi\leq 2\pi\,.\quad$ (73) ## References * (1) Y. Fukuda et al. [Super-Kamiokande], _Evidence for oscillation of atmospheric neutrinos_ , Phys. Rev. Lett. 81, 1562-1567 (1998), [arXiv:hep-ex/9807003 [hep-ex]]. * (2) R. Wendell et al. [Super-Kamiokande], _Atmospheric neutrino oscillation analysis with sub-leading effects in Super-Kamiokande I, II, and III_ , Phys. Rev. D 81, 092004 (2010), [arXiv:1002.3471 [hep-ex]]. * (3) M. Ambrosio et al. [MACRO], _Atmospheric neutrino oscillations from upward through going muon multiple scattering in MACRO_ , Phys. Lett. B 566, 35-44 (2003), [arXiv:hep-ex/0304037 [hep-ex]]. * (4) N. Cabibbo, _Time Reversal Violation in Neutrino Oscillation_ , Phys. Lett. B 72, 333-335 (1978), * (5) R. N. Mohapatra and G. Senjanovic, _Neutrino Mass and Spontaneous Parity Nonconservation_ , Phys. Rev. Lett. 44, 912 (1980), * (6) J. Schechter and J. W. F. Valle, _Neutrino Masses in $SU(2)\times U(1)$ Theories_, Phys. Rev. D 22, 2227 (1980), * (7) J. Schechter and J. W. F. Valle, _Neutrino Decay and Spontaneous Violation of Lepton Number_ , Phys. Rev. D 25, 774 (1982), * (8) P. Minkowski, _$\mu\to e\gamma$ at a Rate of One Out of $10^{9}$ Muon Decays?_, Phys. Lett. B 67, 421-428 (1977), * (9) T. Yanagida, _Horizontal gauge symmetry and masses of neutrinos_ , Conf. Proc. C 7902131, 95-99 (1979) KEK-79-18-95. * (10) P. Ramond, _The Family Group in Grand Unified Theories_ , [arXiv:hep-ph/9809459 [hep-ph]]. * (11) S. L. Glashow, in: M. Levy et al. (Eds.), _QUARKS AND LEPTONS. PROCEEDINGS, SUMMER INSTITUTE, CARGESE, FRANCE, NATO Sci. Ser. B 61, p.707 (1980)_, * (12) W. Buchmuller, C. Greub and P. Minkowski, _Neutrino masses, neutral vector bosons and the scale of B-L breaking_ , Phys. Lett. B 267, 395-399 (1991), * (13) T. Asaka, S. Blanchet and M. Shaposhnikov, _The nuMSM, dark matter and neutrino masses_ , Phys. Lett. B 631, 151-156 (2005), [arXiv:hep-ph/0503065 [hep-ph]]. * (14) F. del Aguila, J. A. Aguilar-Saavedra, J. de Blas and M. Zralek, _Looking for signals beyond the neutrino Standard Model_ , Acta Phys. Polon. B 38, 3339-3348 (2007) [arXiv:0710.2923 [hep-ph]]. * (15) X. G. He, S. Oh, J. Tandean and C. C. Wen, _Large Mixing of Light and Heavy Neutrinos in Seesaw Models and the LHC_ , Phys. Rev. D 80, 073012 (2009), [arXiv:0907.1607 [hep-ph]]. * (16) J. Kersten and A. Y. Smirnov, _Right-Handed Neutrinos at CERN LHC and the Mechanism of Neutrino Mass Generation_ , Phys. Rev. D 76, 073005 (2007), [arXiv:0705.3221 [hep-ph]]. * (17) A. Ibarra, E. Molinaro and S. T. Petcov, _TeV Scale See-Saw Mechanisms of Neutrino Mass Generation, the Majorana Nature of the Heavy Singlet Neutrinos and $(\beta\beta)_{0\nu}$-Decay_, JHEP 09, 108 (2010), [arXiv:1007.2378 [hep-ph]]. * (18) M. Nemevsek, G. Senjanovic and Y. Zhang, _Warm Dark Matter in Low Scale Left-Right Theory_ , JCAP 07, 006 (2012), [arXiv:1205.0844 [hep-ph]]. * (19) T. Asaka and M. Shaposhnikov, _The $\nu$MSM, dark matter and baryon asymmetry of the universe_, Phys. Lett. B 620, 17-26 (2005), [arXiv:hep-ph/0505013 [hep-ph]]. * (20) L. Canetti, M. Drewes, T. Frossard and M. Shaposhnikov, _Dark Matter, Baryogenesis and Neutrino Oscillations from Right Handed Neutrinos_ , Phys. Rev. D 87, 093006 (2013), [arXiv:1208.4607 [hep-ph]]. * (21) L. Canetti, M. Drewes and B. Garbrecht, _Probing leptogenesis with GeV-scale sterile neutrinos at LHCb and Belle II_ , Phys. Rev. 
D 90, no.12, 125005 (2014), [arXiv:1404.7114 [hep-ph]]. * (22) B. Shuve and I. Yavin, _Baryogenesis through Neutrino Oscillations: A Unified Perspective_ , Phys. Rev. D 89, no.7, 075014 (2014), [arXiv:1401.2459 [hep-ph]]. * (23) H. Päs and W. Rodejohann, _Neutrinoless Double Beta Decay_ , New J. Phys. 17, no.11, 115010 (2015), [arXiv:1507.00170 [hep-ph]]. * (24) W. Rodejohann, _Neutrino-less Double Beta Decay and Particle Physics_ , Int. J. Mod. Phys. E 20, 1833-1930 (2011), [arXiv:1106.1334 [hep-ph]]. * (25) S. Dell’Oro, S. Marcocci, M. Viel and F. Vissani, _Neutrinoless double beta decay: 2015 review_ , Adv. High Energy Phys. 2016, 2162659 (2016), [arXiv:1601.07512 [hep-ph]]. * (26) J. J. Gomez-Cadenas, J. Martin-Albo, M. Mezzetto, F. Monrabal and M. Sorel, _The Search for neutrinoless double beta decay_ , Riv. Nuovo Cim. 35, no.2, 29-98 (2012), [arXiv:1109.5515 [hep-ex]]. * (27) M. Drewes and S. Eijima, _Neutrinoless double $\beta$ decay and low scale leptogenesis_, Phys. Lett. B 763, 72-79 (2016), [arXiv:1606.06221 [hep-ph]]. * (28) T. Asaka, S. Eijima and H. Ishida, _On neutrinoless double beta decay in the $\nu$MSM_, Phys. Lett. B 762, 371-375 (2016), [arXiv:1606.06686 [hep-ph]]. * (29) C. E. Aalseth et al. [Majorana], _Search for Neutrinoless Double- $\beta$ Decay in 76Ge with the Majorana Demonstrator_, Phys. Rev. Lett. 120, no.13, 132502 (2018), [arXiv:1710.11608 [nucl-ex]]. * (30) M. Agostini et al. [GERDA], _Improved Limit on Neutrinoless Double- $\beta$ Decay of 76Ge from GERDA Phase II_, Phys. Rev. Lett. 120, no.13, 132503 (2018), [arXiv:1803.11100 [nucl-ex]]. * (31) C. Alduino et al. [CUORE], _First Results from CUORE: A Search for Lepton Number Violation via $0\nu\beta\beta$ Decay of 130Te_, Phys. Rev. Lett. 120, no.13, 132501 (2018), [arXiv:1710.07988 [nucl-ex]]. * (32) J. B. Albert et al. [EXO], _Search for Neutrinoless Double-Beta Decay with the Upgraded EXO-200 Detector_ , Phys. Rev. Lett. 120, no.7, 072701 (2018), [arXiv:1707.08707 [hep-ex]]. * (33) A. Gando et al. [KamLAND-Zen], _Search for Majorana Neutrinos near the Inverted Mass Hierarchy Region with KamLAND-Zen_ , Phys. Rev. Lett. 117, no.8, 082503 (2016), [arXiv:1605.02889 [hep-ex]]. * (34) G. Cvetic, C. Dib, S. K. Kang and C. S. Kim, _Probing Majorana neutrinos in rare $K$ and $D,D_{s},B,B_{c}$ meson decays_, Phys. Rev. D 82, 053010 (2010) [arXiv:1005.4282 [hep-ph]]. * (35) J. C. Helo, S. Kovalenko and I. Schmidt, _Sterile neutrinos in lepton number and lepton flavor violating decays_ , Nucl. Phys. B 853, 80-104 (2011), [arXiv:1005.1607 [hep-ph]]. * (36) A. Atre, V. Barger and T. Han, _Upper bounds on lepton-number violating processes_ , Phys. Rev. D 71, 113014 (2005), [arXiv:hep-ph/0502163 [hep-ph]]. * (37) C. Dib, V. Gribanov, S. Kovalenko and I. Schmidt, _$K$ meson neutrinoless double muon decay as a probe of neutrino masses and mixings_ Phys. Lett. B 493, 82-87 (2000), [arXiv:hep-ph/0006277 [hep-ph]]. * (38) A. Ali, A. V. Borisov and N. B. Zamorin, _Majorana neutrinos and same sign dilepton production at LHC and in rare meson decays_ , Eur. Phys. J. C 21, 123-132 (2001), [arXiv:hep-ph/0104123 [hep-ph]]. * (39) J. M. Zhang and G. L. Wang, _Lepton-Number Violating Decays of Heavy Mesons_ , Eur. Phys. J. C 71, 1715 (2011), [arXiv:1003.5570 [hep-ph]]. * (40) H. Yuan, T. Wang, G. L. Wang, W. L. Ju and J. M. Zhang, _Lepton-number violating four-body decays of heavy mesons_ , JHEP 08, 066 (2013), [arXiv:1304.3810 [hep-ph]]. * (41) R. M. Godbole, S. P. Maharathy, S. Mandal, M. Mitra and N. 
Sinha, _Interference Effect in LNV and LNC Meson Decays for Left Right Symmetric Model_ , [arXiv:2008.05467 [hep-ph]]. * (42) G. Cvetic, C. S. Kim, S. Mendizabal and J. Zamora-Saa, _Exploring CP-violation, via heavy neutrino oscillations, in rare B meson decays at Belle II_ , Eur. Phys. J. C 80, no.11, 1052 (2020), [arXiv:2007.04115 [hep-ph]]. * (43) B. Shuve and M. E. Peskin, _Revision of the LHCb Limit on Majorana Neutrinos_ Phys. Rev. D 94, no.11, 113007 (2016), [arXiv:1607.04258 [hep-ph]]. * (44) E. J. Chun, A. Das, S. Mandal, M. Mitra and N. Sinha, _Sensitivity of Lepton Number Violating Meson Decays in Different Experiments_ , Phys. Rev. D 100, no.9, 095022 (2019), [arXiv:1908.09562 [hep-ph]]. * (45) S. Mandal, M. Mitra and N. Sinha, _Constraining the right-handed gauge boson mass from lepton number violating meson decays in a low scale left-right model_ , Phys. Rev. D 96, no.3, 035023 (2017), [arXiv:1705.01932 [hep-ph]]. * (46) A. Abada, V. De Romeri, M. Lucente, A. M. Teixeira and T. Toma, _Effective Majorana mass matrix from tau and pseudoscalar meson lepton number violating decays_ , JHEP 02, 169 (2018), [arXiv:1712.03984 [hep-ph]]. * (47) A. Abada, C. Hati, X. Marcano and A. M. Teixeira, _Interference effects in LNV and LFV semileptonic decays: the Majorana hypothesis_ , JHEP 09, 017 (2019), [arXiv:1904.05367 [hep-ph]]. * (48) J. Mejia-Guisao, D. Milanes, N. Quintero and J. D. Ruiz-Alvarez, _Exploring GeV-scale Majorana neutrinos in lepton-number-violating $\Lambda_{b}^{0}$ baryon decays_, Phys. Rev. D 96, no.1, 015039 (2017), [arXiv:1705.10606 [hep-ph]]. * (49) G. Zhang and B. Q. Ma, _Searching for lepton number violating $\Lambda$ baryon decays mediated by GeV-scale Majorana neutrino with LHCb_, [arXiv:2101.05566 [hep-ph]]. * (50) G. Cvetič and C. S. Kim, _Sensitivity bounds on heavy neutrino mixing $|U_{\mu N}|^{2}$ and $|U_{\tau N}|^{2}$ from LHCb upgrade_, Phys. Rev. D 100, no.1, 015014 (2019), [arXiv:1904.12858 [hep-ph]]. * (51) C. Barbero, L. F. Li, G. López Castro and A. Mariano, _Matrix elements of four-quark operators and $\Delta$L = 2 hyperon decays_, Phys. Rev. D 87, no.3, 036010 (2013), [arXiv:1301.3448 [hep-ph]]. * (52) S. Mandal and N. Sinha, _Favoured $B_{c}$ Decay modes to search for a Majorana neutrino_, Phys. Rev. D 94, no.3, 033001 (2016), [arXiv:1602.09112 [hep-ph]]. * (53) G. Moreno and J. Zamora-Saa, _Rare meson decays with three pairs of quasi-degenerate heavy neutrinos_ , Phys. Rev. D 94, no.9, 093005 (2016), [arXiv:1606.08820 [hep-ph]]. * (54) G. Cvetic, C. S. Kim, R. Kogerler and J. Zamora-Saa, _Oscillation of heavy sterile neutrino in decay of $B\to\mu e\pi$_, Phys. Rev. D 92, 013015 (2015), [arXiv:1505.04749 [hep-ph]]. * (55) G. Cvetic, C. Dib, C. S. Kim and J. Zamora-Saa, _Probing the Majorana neutrinos and their CP violation in decays of charged scalar mesons $\pi,K,D,D_{s},B,B_{c}$_, Symmetry 7, 726-773 (2015), [arXiv:1503.01358 [hep-ph]]. * (56) G. Cvetič, C. S. Kim and J. Zamora-Saá, _CP violation in lepton number violating semihadronic decays of $K,D,D_{s},B,B_{c}$_, Phys. Rev. D 89, no.9, 093012 (2014), [arXiv:1403.2555 [hep-ph]]. * (57) G. Cvetič, C. S. Kim and J. Zamora-Saá, _CP violations in $\pi^{\pm}$ Meson Decay_, J. Phys. G 41, 075004 (2014), [arXiv:1311.7554 [hep-ph]]. * (58) C. S. Kim, G. López Castro and D. Sahoo, _Constraints on a sub-eV scale sterile neutrino from nonoscillation measurements_ , Phys. Rev. D 98, no.11, 115021 (2018), [arXiv:1809.02265 [hep-ph]]. * (59) C. S. Kim, Y. Kwon, D. Lee, S. Oh and D. 
Sahoo, _Probing sterile neutrinos in $B(D)$ meson decays at Belle II (BESIII)_, Eur. Phys. J. C 80, no.8, 730 (2020), [arXiv:1908.00376 [hep-ph]]. * (60) D. Milanés and N. Quintero, _Search for lepton-number-violating signals in the charm sector_ , Phys. Rev. D 98, no.9, 096004 (2018), [arXiv:1808.06017 [hep-ph]]. * (61) J. Mejia-Guisao, D. Milanés, N. Quintero and J. D. Ruiz-Alvarez, _Lepton number violation in $B_{s}$ meson decays induced by an on-shell Majorana neutrino_, Phys. Rev. D 97, no.7, 075018 (2018), [arXiv:1708.01516 [hep-ph]]. * (62) D. Milanes, N. Quintero and C. E. Vera, _Sensitivity to Majorana neutrinos in $\Delta L=2$ decays of $B_{c}$ meson at LHCb_, Phys. Rev. D 93, no.9, 094026 (2016), [arXiv:1604.03177 [hep-ph]]. * (63) G. L. Castro and N. Quintero, _Bounding resonant Majorana neutrinos from four-body B and D decays_ , Phys. Rev. D 87, 077901 (2013), [arXiv:1302.1504 [hep-ph]]. * (64) N. Quintero, G. Lopez Castro and D. Delepine, _Lepton number violation in top quark and neutral B meson decays_ , Phys. Rev. D 84, 096011 (2011) [erratum: Phys. Rev. D 86, 079905 (2012)], [arXiv:1108.6009 [hep-ph]]. * (65) L. S. Littenberg and R. E. Shrock, _Upper bounds on Delta L = 2 decays of baryons_ , Phys. Rev. D 46, 892-894 (1992), * (66) C. Barbero, G. Lopez Castro and A. Mariano, _Double beta decay of Sigma- hyperons_ , Phys. Lett. B 566, 98-107 (2003), [arXiv:nucl-th/0212083 [nucl-th]]. * (67) G. Lopez Castro and N. Quintero, _Lepton number violating four-body tau lepton decays_ , Phys. Rev. D 85, 076006 (2012) [erratum: Phys. Rev. D 86, 079904 (2012)], [arXiv:1203.0537 [hep-ph]]. * (68) C. Dib, J. C. Helo, M. Hirsch, S. Kovalenko and I. Schmidt, _Heavy Sterile Neutrinos in Tau Decays and the MiniBooNE Anomaly_ , Phys. Rev. D 85, 011301 (2012), [arXiv:1110.5400 [hep-ph]]. * (69) H. Yuan, Y. Jiang, T. h. Wang, Q. Li and G. L. Wang, _Testing the nature of neutrinos from four-body $\tau$ decays_, J. Phys. G 44, no.11, 115002 (2017), [arXiv:1702.04555 [hep-ph]]. * (70) J. Zamora-Saa, _Resonant $CP$ violation in rare $\tau^{\pm}$ decays_, JHEP 05, 110 (2017), [arXiv:1612.07656 [hep-ph]]. * (71) C. S. Kim, G. López Castro and D. Sahoo, _Discovering intermediate mass sterile neutrinos through $\tau^{-}\to\pi^{-}\mu^{-}e^{+}\nu$ (or $\bar{\nu}$) decay_, Phys. Rev. D 96, no.7, 075016 (2017), [arXiv:1708.00802 [hep-ph]]. * (72) A. Das, P. S. B. Dev and C. S. Kim, _Constraining Sterile Neutrinos from Precision Higgs Data_ , Phys. Rev. D 95, no.11, 115013 (2017), [arXiv:1704.00880 [hep-ph]]. * (73) A. Das, Y. Gao and T. Kamon, _Heavy neutrino search via semileptonic Higgs decay at the LHC_ , Eur. Phys. J. C 79, no.5, 424 (2019), [arXiv:1704.00881 [hep-ph]]. * (74) A. Das and N. Okada, _Bounds on heavy Majorana neutrinos in type-I seesaw and implications for collider searches_ , Phys. Lett. B 774, 32-40 (2017), [arXiv:1702.04668 [hep-ph]]. * (75) G. Cvetič, A. Das, S. Tapia and J. Zamora-Saá, _Measuring the heavy neutrino oscillations in rare W boson decays at the Large Hadron Collider_ , J. Phys. G 47, no.1, 015001 (2020), [arXiv:1905.03097 [hep-ph]]. * (76) G. Cvetič, A. Das and J. Zamora-Saá, _Probing heavy neutrino oscillations in rare $W$ boson decays_, J. Phys. G 46, 075002 (2019), [arXiv:1805.00070 [hep-ph]]. * (77) B. Fuks, J. Neundorf, K. Peters, R. Ruiz and M. Saimpert, _Probing the Weinberg Operator at Colliders_ , [arXiv:2012.09882 [hep-ph]]. * (78) B. Fuks, J. Neundorf, K. Peters, R. Ruiz and M. 
Saimpert, _Majorana Neutrinos in Same-Sign $W^{\pm}W^{\pm}$ Scattering at the LHC: Breaking the TeV Barrier_, [arXiv:2011.02547 [hep-ph]]. * (79) Y. Cai, T. Han, T. Li and R. Ruiz, _Lepton Number Violation: Seesaw Models and Their Collider Tests_ , Front. in Phys. 6, 40 (2018), [arXiv:1711.02180 [hep-ph]]. * (80) R. Ruiz, _A quantitative study on helicity inversion in Majorana neutrino decays at the LHC_ , Phys. Rev. D 103, no.1, 015022 (2021), [arXiv:2008.01092 [hep-ph]]. * (81) F. Najafi, J. Kumar and D. London, _CP Violation in Rare Lepton-Number-Violating $W$ Decays at the LHC_, [arXiv:2011.03686 [hep-ph]]. * (82) R. Aaij et al. [LHCb], _Search for Majorana neutrinos in $B^{-}\to\pi^{+}\mu^{-}\mu^{-}$ decays_, Phys. Rev. Lett. 112, no.13, 131802 (2014), [arXiv:1401.5361 [hep-ex]]. * (83) J. R. Batley et al. [NA48/2], _Searches for lepton number violation and resonances in $K^{\pm}\to\pi\mu\mu$ decays_, Phys. Lett. B 769, 67-76 (2017), [arXiv:1612.04723 [hep-ex]]. * (84) C. O. Dib, M. Campos and C. S. Kim, _CP Violation with Majorana neutrinos in K Meson Decays_ , JHEP 02, 108 (2015), [arXiv:1403.8009 [hep-ph]]. * (85) W. Detmold, C. Lehner and S. Meinel, _$\Lambda_{b}\to p\ell^{-}\bar{\nu}_{\ell}$ and $\Lambda_{b}\to\Lambda_{c}\ell^{-}\bar{\nu}_{\ell}$ form factors from lattice QCD with relativistic heavy quarks_, Phys. Rev. D 92, no.3, 034503 (2015), [arXiv:1503.01421 [hep-lat]]. * (86) S. Aoki et al. [Flavour Lattice Averaging Group], _FLAG Review 2019: Flavour Lattice Averaging Group (FLAG)_ , Eur. Phys. J. C 80, no.2, 113 (2020), [arXiv:1902.08191 [hep-lat]]. * (87) A. Atre, T. Han, S. Pascoli and B. Zhang, _The Search for Heavy Majorana Neutrinos_ , JHEP 05, 030 (2009), [arXiv:0901.3589 [hep-ph]]. * (88) V. Gribanov, S. Kovalenko and I. Schmidt, _Sterile neutrinos in tau lepton decays_ , Nucl. Phys. B 607, 355-368 (2001), [arXiv:hep-ph/0102155 [hep-ph]]. * (89) D. Liventsev et al. [Belle], _Search for heavy neutrinos at Belle_ , Phys. Rev. D 87, no.7, 071102 (2013) [erratum: Phys. Rev. D 95, no.9, 099903 (2017)], [arXiv:1301.1105 [hep-ex]]. * (90) O. Adriani et al. [L3], _Search for isosinglet neutral heavy leptons in Z0 decays_ , Phys. Lett. B 295, 371-382 (1992), * (91) P. Abreu et al. [DELPHI], _Search for neutral heavy leptons produced in Z decays_ , Z. Phys. C 74, 57-71 (1997) [erratum: Z. Phys. C 75, 580 (1997)], * (92) J. Badier et al. [NA3], _Direct Photon Production From Pions and Protons at 200- GeV/$c$_, Z. Phys. C 31, 341 (1986), * (93) P. Vilain et al. [CHARM II], _Search for heavy isosinglet neutrinos_ , Phys. Lett. B 343, 453-458 (1995), * (94) A. Vaitaitis et al. [NuTeV and E815], _Search for neutral heavy leptons in a high-energy neutrino beam_ , Phys. Rev. Lett. 83, 4943-4946 (1999), [arXiv:hep-ex/9908011 [hep-ex]].
# Bohm potential is real and its effects are measurable Sergio A. Hojman<EMAIL_ADDRESS>Departamento de Ciencias, Facultad de Artes Liberales, Universidad Adolfo Ibáñez, Santiago 7491169, Chile. Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago 7800003, Chile. Centro de Recursos Educativos Avanzados, CREA, Santiago 7500018, Chile. Felipe A. Asenjo<EMAIL_ADDRESS>Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago 7491169, Chile. Héctor M. Moya-Cessa<EMAIL_ADDRESS>Instituto Nacional de Astrofísica Óptica y Electrónica Calle Luis Enrique Erro No. 1, Santa María Tonantzintla, Pue., 72840, Mexico. Francisco Soto–Eguibar<EMAIL_ADDRESS>Instituto Nacional de Astrofísica Óptica y Electrónica Calle Luis Enrique Erro No. 1, Santa María Tonantzintla, Pue., 72840, Mexico. ###### Abstract We analyze Bohm’s potential effects both in the realms of Quantum Mechanics and Optics, as well as in the study of other physical phenomena described in terms of classical and quantum wave equations. We approach this subject by using theoretical arguments as well as experimental evidence. We find that the effects produced by Bohm’s potential are theoretically responsible for the early success of Quantum Mechanics in correctly describing atomic and nuclear phenomena and that, more recently, they have been confirmed experimentally, for instance in the surprising accelerating behavior of free waves and particles. ## I Introduction A recently published article umul deals with the reality of Bohm’s potential. In the last section of the paper, the sentence “As a result, our analysis put forth that the term, named as the quantum potential, must be equal to zero.” is, at least, misleading, if not outright wrong. We believe it is important to remark that this conclusion is erroneous, as we demonstrate in the following sections of our manuscript, by considering theoretical as well as experimental results. In order to do this, we discuss some of the numerous findings published in articles related to the study of the Bohm potential and its effects, which may be traced back almost a century to the seminal works of Madelung and of Bohm published in 1927 and 1952, respectively madel ; bohm . Numerous authors published theoretical as well as experimental articles dealing with Bohm’s potential, touching upon different subjects including quantum mechanics, optics, field theory and general relativity slep ; skro ; pleb ; dewitt ; vz1 ; mas1 ; har ; mas2 ; mas3 ; sahthesis ; hor ; sah1 ; berry ; sivichis2 ; sivichis ; bloch ; riv1 ; ah1 ; ah2 ; rivka ; asenjoHojmanclasquandsper ; sahfaz20201 ; sahfaz20202 ; impens ; uaiinaoe ; KGHojmanAsenjo ; phol ; holland ; wyatt . Most of the authors did not mention (or did not realize) that it is the non-vanishing character of Bohm’s potential which produces the new (and sometimes counter-intuitive or surprising) phenomena. A crucial issue about the Bohm potential is that some of the wave equation solutions have properties that depart from their classical (or point-like) counterparts. The new effects appear because waves are extended objects, in contradistinction to the strictly local character of particles. The difference between classical (point-like) and quantum (wave-like) behavior is illustrated by comparing the WKB (approximate) treatment of a quantum problem to the full quantum behavior of exact solutions of the Schrödinger equation or, equivalently, by the difference between the ray (eikonal) approximation and wave optics treatments, for instance.
One simple way in which the wave-like behavior produces a striking effect that is impossible to get in the point-like (classical) limit was found by Berry and Balazs in 1979 berry . They showed that an accelerating Airy beam solves exactly the full quantum Schrödinger equation for a free (vanishing external potential) particle. This surprising result was later confirmed experimentally using light beams sivichis in 2007, with acceleration control achieved Cerda in 2011, and using electron beams bloch in 2013. These phenomena could not have taken place for vanishing Bohm potential, as we show below. Thus, the Bohm potential for Airy beams is non-vanishing. Similar to that solution, several others share this feature, producing a series of new and different phenomena. Below we show the theoretical foundations for the existence of a non-vanishing Bohm potential, to later discuss the physical effects of its presence, and how it produces new solutions for different wave equations. ## II The Madelung–Bohm formulation of Quantum Mechanics Let us consider the usual way to represent the one-dimensional Schrödinger equations for a particle moving in a real external potential $V({x},t)$ in terms of complex wavefunctions $\psi=\psi({x},t)$ and $\psi^{*}=\psi^{*}({x},t)$, $-\frac{{\hbar}^{2}}{2m}\psi^{\prime\prime}+V\psi-i\hbar\dot{\psi}=0\,,$ (1) and $-\frac{{\hbar}^{2}}{2m}\psi^{*^{\prime\prime}}+V\psi^{*}+i\hbar\dot{\psi}^{*}=0\,,$ (2) where ${}^{\prime}\equiv\partial_{x}$ and $\dot{\ }\equiv\partial_{t}$. The Madelung–Bohm madel ; bohm version of quantum mechanics is equivalent to Schrödinger’s. It is written in terms of the polar decomposition ${{\psi}}=A\exp\left(iS/\hbar\right)$, where the amplitude $A({x},t)$ and the phase $S({x},t)$ are real functions. The two real Schrödinger equations, written in terms of these two new functions, are madel ; bohm $\displaystyle\frac{1}{2m}\left(S^{\prime}\right)^{2}+V_{B}+V+\dot{S}$ $\displaystyle=$ $\displaystyle 0\,,$ (3) $\displaystyle\frac{1}{m}\left(A^{2}\,S^{\prime}\right)^{\prime}+\partial_{t}\left({A^{2}}\right)$ $\displaystyle=$ $\displaystyle 0\,,$ (4) where the Bohm potential $V_{B}(x,t)$ is defined by $V_{B}\equiv-\frac{\hbar^{2}}{2m}\frac{A^{\prime\prime}}{A}\ .$ (5) The first equation (3) is sometimes called the Quantum Hamilton–Jacobi (QHJ) equation for the (external) potential $V$. The quantum modification consists in the addition of the Bohm potential to the classical Hamilton–Jacobi equation. The second equation (4) is the continuity (probability conservation) equation. Eqs. (3) and (4) can be straightforwardly written in three-dimensional space. Interestingly, in one dimension, the continuity equation (4) is identically solved by introducing an arbitrary wavefunction potential function ${f}={f}({x},t)$ such that $f^{\prime}=A^{2}\,,\qquad\dot{f}=-\frac{A^{2}}{m}S^{\prime}\ .$ (6) ## III Theoretical arguments and Experimental results In this section we analyze different theoretical topics and experimental findings that establish that the Bohm potential may be different from zero. First of all, note that Planck’s constant $\hbar$ appears only in the Bohm potential in the Madelung–Bohm formulation of Quantum Mechanics, which is, of course, equivalent to the usual formulation. If the Bohm potential vanished identically, $\hbar$ would disappear from this formulation of Quantum Mechanics.
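Definition (5) is easy to evaluate numerically for any amplitude sampled on a grid. The following minimal sketch (our own, in units $\hbar=m=1$) uses second-order central differences and checks itself against a Gaussian amplitude, for which $V_{B}=(1-x^{2})/2$:

```python
import numpy as np

def bohm_potential(A, dx, hbar=1.0, m=1.0):
    """Bohm potential V_B = -(hbar^2 / 2m) A'' / A of eq. (5),
    with A'' computed by second-order central finite differences."""
    App = (np.roll(A, -1) - 2 * A + np.roll(A, 1)) / dx**2
    VB = -(hbar**2 / (2 * m)) * App / A
    VB[0] = VB[-1] = np.nan   # edge values from np.roll wrap-around are invalid
    return VB

# Sanity check: A = exp(-x^2/2) gives V_B = (1 - x^2)/2 exactly.
x = np.linspace(-5, 5, 2001)
VB = bohm_potential(np.exp(-x**2 / 2), x[1] - x[0])
print(np.nanmax(np.abs(VB - (1 - x**2) / 2)))   # small discretization error
```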
Even though the observation about $\hbar$ could, by itself, debunk the notion that in Quantum Mechanics Bohm’s potential can vanish identically, we will proceed to deepen our analysis further. There are many results, found independently, that point to the fact that classical and quantum dispersion relations are not equivalent slep ; skro ; pleb ; dewitt ; vz1 ; mas1 ; har ; mas2 ; mas3 ; sahthesis ; hor ; sah1 ; berry ; sivichis2 ; sivichis ; bloch ; riv1 ; ah1 ; ah2 ; rivka ; asenjoHojmanclasquandsper ; sahfaz20201 ; sahfaz20202 ; impens ; uaiinaoe ; KGHojmanAsenjo ; phol ; holland ; wyatt . It was recently found asenjoHojmanclasquandsper that all of these results have a common feature: the classical and quantum dispersion relations do not coincide because, in all of the cases considered (and in many other examples), the Bohm potentials do not vanish. Furthermore, for a given external potential $V(x,t)$, the expressions for the Bohm potential $V_{B}(x,t)$ depend on the wavefunction amplitude $A(x,t)=\sqrt{\psi(x,t)\psi^{*}(x,t)}$. Therefore, one quantum problem, say the free particle, which is solved by infinitely many different wavefunctions, does not have a unique Bohm potential. As a matter of fact, in a recent article sahfaz20201 , it is proved that not all external potentials $V(x,t)$ are compatible with vanishing Bohm potentials. No wavefunction solutions to the Schrödinger equations for the Morse potential (or the Pöschl–Teller potential, for instance) produce vanishing Bohm potentials. The necessary and sufficient condition for an external potential to accommodate some (but not all) wavefunction solutions which have vanishing Bohm potential can be given in terms of the wavefunction potential function $f(x,t)$, defined through relations (6). In the one-dimensional case, for any system with vanishing Bohm potential, the potential function $f$ is given by $f(x,t)=\frac{a(t)^{2}}{3}x^{3}+a(t)b(t)x^{2}+b(t)^{2}x+c(t),$ (7) for arbitrary functions $a(t)$, $b(t)$ and $c(t)$. In this case, the external potential $V=V(x,t)$ is given by $\displaystyle V=\ -\ \left(\frac{1}{2}\frac{m{\dot{f}}^{2}}{f^{\prime 2}}+m\int_{\tilde{x}=0}^{\tilde{x}=x}\left(\frac{\dot{f}\dot{f}^{\prime}}{f^{\prime 2}}-\frac{\ddot{f}}{f^{\prime}}\right)d\tilde{x}+\dot{\mu}\right),$ (8) where $\mu(t)$ is an arbitrary function of time only, and it produces a force $F=F(x,t)=\ -\ V^{\prime}(x,t)$, $\displaystyle F=\left(\frac{1}{2}\frac{m{\dot{f}}^{2}}{f^{\prime 2}}\right)^{\prime}+m\left(\frac{\dot{f}\dot{f}^{\prime}}{f^{\prime 2}}-\frac{\ddot{f}}{f^{\prime}}\right)\ .$ (9) The free particle and the attractive and repulsive harmonic oscillators belong to this family, for instance. It is important to stress that this means that some (but not all) of the solutions to the aforementioned problems produce vanishing Bohm potentials. Moreover, any wavefunction potential function $f$ that does not fulfill (7) produces a non-vanishing Bohm potential for some external potential. Details are given in Ref. sahfaz20201 . In a different scenario, there are other well-known quantum solutions with non-vanishing Bohm potential.
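Before turning to those examples, we note that condition (7) can be verified symbolically: at fixed $t$, (7) gives $f^{\prime}=(a\,x+b)^{2}$, so $A=\sqrt{f^{\prime}}$ is linear in $x$, $A^{\prime\prime}=0$, and hence $V_{B}=0$ by (5). A quick sketch of this check (ours) using sympy:

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c', positive=True)
f = a**2 * x**3 / 3 + a * b * x**2 + b**2 * x + c    # eq. (7) at fixed t
A = sp.sqrt(sp.diff(f, x))                           # A^2 = f' from eq. (6)

print(sp.simplify(sp.diff(f, x) - (a * x + b)**2))   # 0: f' is a perfect square
print(sp.simplify(sp.diff(A, x, 2)))                 # 0: A'' = 0, hence V_B = 0
```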
The usual solutions for the $n$-th bound state of the harmonic oscillator with mass $m$ and angular frequency $\omega$ may be written as a wavefunction with amplitude $A_{n}(x,t)=\frac{1}{\sqrt{2^{n}\ n!}}\cdot\left(\frac{m\omega}{\pi\hbar}\right)^{\frac{1}{4}}e^{-\frac{m\omega x^{2}}{2\hbar}}H_{n}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)\ ,$ (10) and phase $S_{n}(t)=-\left(n+{1}/{2}\right)\hbar\omega t$, where $H_{n}$ are the Hermite polynomials. The Bohm potentials associated with these solutions are clearly different from zero and depend on $n$. These solutions (with non-vanishing Bohm potential) are crucial in the construction of the well-known and successful “Shell Model” in nuclear physics. One cannot, therefore, claim that solutions with non-vanishing Bohm potential are in some sense “unphysical”. Something similar happens with the well-known solution of the quantum Coulomb problem, which accurately predicts the atomic spectra and transition probabilities and gives rise to the understanding of spectroscopy and the Periodic Table of the Elements. On the other hand, a free solution may also produce a non-vanishing Bohm potential. The well-known plane wave free particle solution $\psi_{1}(x,t)=A\exp(ikx-i\omega t)$ produces a vanishing Bohm potential, but there are other, not so well known, Airy function free particle solutions which produce non-vanishing Bohm potentials. Let us consider the Berry and Balazs solution berry given by $\displaystyle A(x,t)=$ $\displaystyle\mathrm{Ai}\left(\frac{\beta}{\hbar^{2/3}}\left(x-\frac{\beta^{3}}{4m^{2}}t^{2}\right)\right),$ (11a) $\displaystyle S(x,t)$ $\displaystyle=\frac{\beta^{3}t}{2m}\left(x-\frac{\beta^{3}}{6m^{2}}t^{2}\right)\,,$ (11b) with a non-zero constant $\beta$. This solution of the Schrödinger equation for free particles, with $V=0$, has a non-vanishing Bohm potential, depending on both space $x$ and time $t$, given by $V_{B}(x,t)=-\frac{\beta^{3}}{2m}\left(x-\frac{\beta^{3}}{4m^{2}}t^{2}\right)\,.$ (12) This Bohm potential produces the constant acceleration $a_{\textrm{Airy}}$ experienced by the Airy wave packet, $a_{\textrm{Airy}}=-\frac{V_{B}^{\prime}}{m}=\frac{\beta^{3}}{2m^{2}}\,.$ (13) This result gives rise to the velocity of the Airy packet $v_{\textrm{Airy}}=p_{\textrm{Airy}}/m$, which may be computed from the momentum $p_{\textrm{Airy}}=\partial S(x,t)/\partial x$. Therefore, there is a solution to the free Schrödinger equation which has the constant acceleration given by (13) in spite of the vanishing (external) force. Of course, the most amazing feature of the Berry–Balazs result is that the acceleration of free beams has been experimentally detected, in 2007 using light beams sivichis and in 2013 using electron beams bloch . Finally, it has been shown in Ref. sahfaz20202 that non-vanishing Bohm potentials may cancel some external potentials, allowing for quantum solutions in which particles behave as free classical particles even though they are interacting with an external potential. ## IV Discussion We have clearly established, by using both theoretical and experimental arguments and examples, that the Bohm potential and its effects are real and measurable (even though often unknown or misunderstood almost a century after its definition). The examples discussed above (and many others in the references) show that a non-vanishing Bohm potential has different kinds of effects in quantum mechanics, optics and wave propagation in general.
It is our belief that the recognition of the role it plays in wave dynamics can bring new and interesting insights to these different fields. ## References * (1) Y. Z. Umul, Optik 224, 165729 (2020), https://doi.org/10.1016/j.ijleo.2020.165729. * (2) E. Madelung, Z. Physik 40, 322 (1927). * (3) D. Bohm, Phys. Rev. 85, 166 (1952). * (4) J. Slepian, Elec. Eng. 68, 1080 (1949). * (5) G. V. Skrotskii, Soviet Phys. Doklady 2, 226 (1957). * (6) J. Plebanski, Phys. Rev. 118, 1396 (1960). * (7) B. S. DeWitt and R. W. Brehme, Ann. Phys. (N.Y.) 9, 220 (1960). * (8) G. Velo and D. Zwanziger, Phys. Rev. 186, 1337 (1969); 188, 2218 (1969). * (9) B. Mashhoon, Phys. Rev. D 8, 4297 (1973). * (10) A. J. Hanson and T. Regge, Annals of Physics 87, 498 (1974). * (11) B. Mashhoon, Phys. Rev. D 11, 2679 (1975). * (12) B. Mashhoon, Annals of Physics 89, 254 (1975). * (13) S. A. Hojman, “Electromagnetic and Gravitational Interactions of a Spherical Relativistic Top”, Ph.D. thesis, Princeton University, 1975 (unpublished). * (14) S. Hojman and T. Regge, Studies in Mathematical Physics, Essays in Honor of Valentin Bargmann, Ed. E. H. Lieb, B. Simon and A. S. Wightman, Princeton University Press, pg. 195 (1976). * (15) S. Hojman, Phys. Rev. D 18, 2741 (1978). * (16) M. V. Berry and N. L. Balazs, Am. J. Phys. 47, 264 (1979). * (17) D. N. Christodoulides, N. K. Efremidis, P. Di Trapani and B. A. Malomed, Opt. Lett. 29, 1446 (2004). * (18) G. A. Siviloglou, J. Broky, A. Dogariu and D. N. Christodoulides, Phys. Rev. Lett. 99, 213901 (2007). * (19) N. Voloch-Bloch, Y. Lereah, Y. Lilach, A. Gover and A. Arie, Nature 494, 331 (2013). * (20) R. Bekenstein, J. Nemirovsky, I. Kaminer and M. Segev, Phys. Rev. X 4, 011038 (2014). * (21) F. A. Asenjo and S. A. Hojman, Class. Quantum Grav. 34, 205011 (2017). * (22) F. A. Asenjo and S. A. Hojman, Phys. Rev. D 96, 044021 (2017). * (23) A. Patsyk, M. A. Bandres, R. Bekenstein and M. Segev, Phys. Rev. X 8, 011001 (2018). * (24) S. A. Hojman and F. A. Asenjo, Phys. Scr. 95, 085001 (2020). * (25) S. A. Hojman and F. A. Asenjo, Phys. Lett. A 384, 126263 (2020). * (26) S. A. Hojman and F. A. Asenjo, Phys. Rev. A 102, 052211 (2020). * (27) F. Impens, R. Duboscq and D. Guéry-Odelin, Phys. Rev. Lett. 124, 250403 (2020). * (28) F. A. Asenjo, S. A. Hojman, H. M. Moya-Cessa and F. Soto-Eguibar, arXiv:2011.02381. * (29) F. A. Asenjo and S. A. Hojman, Eur. Phys. J. C 77, 732 (2017). * (30) P. Holland, Nuovo Cimento B 116, 1043 (2001). * (31) P. R. Holland, The Quantum Theory of Motion: an account of the de Broglie-Bohm causal interpretation of quantum mechanics (Cambridge University Press, 1993). * (32) R. E. Wyatt, Quantum Dynamics with Trajectories: introduction to quantum hydrodynamics (Springer, 2005). * (33) S. Chávez-Cerda, U. Ruiz, V. Arrizón and H. M. Moya-Cessa, Opt. Exp. 19, 16448 (2011).
# Hodge theory on Alexander invariants – a survey

Eva Elduque, Department of Mathematics, University of Michigan-Ann Arbor, 530 Church St, Ann Arbor, MI 48109, USA<EMAIL_ADDRESS>http://www-personal.umich.edu/~elduque, Christian Geske, Department of Mathematics, Northwestern University, 2033 Sheridan Rd, Evanston, IL 60208, USA<EMAIL_ADDRESS>Moisés Herradón Cueto, Department of Mathematics, Louisiana State University, 303 Lockett Hall, Baton Rouge, LA 70803, USA<EMAIL_ADDRESS>http://www.math.lsu.edu/~moises, Laurenţiu Maxim, Department of Mathematics, University of Wisconsin-Madison, 480 Lincoln Drive, Madison WI 53706-1388, USA<EMAIL_ADDRESS>https://www.math.wisc.edu/~maxim/, and Botong Wang, Department of Mathematics, University of Wisconsin-Madison, 480 Lincoln Drive, Madison WI 53706-1388, USA<EMAIL_ADDRESS>http://www.math.wisc.edu/~wang/

Dedicated to the memory of Prof. Ştefan Papadima

###### Abstract. We survey recent developments in the study of Hodge theoretic aspects of Alexander-type invariants associated with smooth complex algebraic varieties.

###### Key words and phrases: infinite cyclic cover, Alexander module, mixed Hodge structure, thickened complex, limit mixed Hodge structure, semisimplicity

###### 2010 Mathematics Subject Classification: 14C30, 14D07, 14F45, 32S30, 32S35, 32S40, 55N30

## 1\. Introduction

In this expository note, we survey recent developments in the study of Hodge theoretic aspects of Alexander-type invariants of complex algebraic manifolds. Our main goal is to provide the reader with a down-to-earth introduction and multiple access points to the circle of ideas discussed in detail in our paper [10]. Let $U$ be a connected topological space of finite homotopy type, let $\displaystyle\xi\colon\pi_{1}(U)\twoheadrightarrow\mathbb{Z}$ be an epimorphism, and denote by $U^{\xi}$ the infinite cyclic cover of $U$ corresponding to $\ker\xi$. Let $k$ be $\mathbb{Q}$ or $\mathbb{R}$, and denote by $R=k[t^{\pm 1}]$ the ring of Laurent polynomials in the variable $t$ with $k$-coefficients. The group $\mathbb{Z}$ of covering transformations of $U^{\xi}$ induces an $R$-module structure on each group $H_{i}(U^{\xi};k)$, classically referred to as the $i$-th (homology) $k$-Alexander module of the pair $(U,\xi)$. Since $\xi\colon\pi_{1}(U)\to\mathbb{Z}$ is represented by a homotopy class of continuous maps $U\to S^{1}$, whenever such a representative $f\colon U\to S^{1}$ for $\xi$ is fixed (that is, $\xi=f_{*}$), it is also customary to use the notation $U^{f}$ for the corresponding infinite cyclic cover of $U$. Since $U$ is homotopy equivalent to a finite CW-complex, $H_{i}(U^{\xi};k)$ is a finitely generated $R$-module, for each integer $i$.

As a motivating example, let us consider the case of a fiber bundle $f\colon U\to S^{1}$ with connected fiber $F$ a finite CW-complex. Then $\xi=f_{*}\colon\pi_{1}(U)\to\pi_{1}(S^{1})=\mathbb{Z}$ is surjective, and the corresponding infinite cyclic cover $U^{f}$ is homeomorphic to $F\times\mathbb{R}$ and hence homotopy equivalent to $F$. The deck group action on $H_{i}(U^{f};k)$ is isomorphic (up to a choice of orientation on $S^{1}$) to the monodromy action on $H_{i}(F;k)$, which endows the latter vector spaces with $R$-module structures. Therefore $H_{i}(U^{f};k)\cong H_{i}(F;k)$ is a torsion $R$-module for all $i\geq 0$. This applies in particular to the case of the Milnor fibration $f\colon U\to S^{1}$ associated to a reduced complex hypersurface singularity germ, with $F$ the corresponding Milnor fiber [19].
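For readers who would like a hands-on instance of such torsion modules, here is a minimal computation (ours, not from the survey, and assuming the standard Seifert-matrix convention): for the trefoil knot — whose complement fibers over $S^{1}$ — the first Alexander module is presented over $R$ by $V-tV^{T}$, where $V$ is a Seifert matrix, and the order of this torsion module is the Alexander polynomial.

```python
import sympy as sp

t = sp.symbols('t')
V = sp.Matrix([[-1, 1], [0, -1]])  # a Seifert matrix for the trefoil knot (assumed convention)
P = V - t * V.T                    # presentation matrix of the first Alexander module over k[t, 1/t]
print(sp.expand(P.det()))          # t**2 - t + 1: the Alexander polynomial of the trefoil
```

Its roots are primitive sixth roots of unity, consistent with the finite order of the monodromy of the corresponding Milnor fibration.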
However, the Alexander modules $H_{i}(U^{\xi};k)$ are not torsion $R$-modules in general, since $U^{\xi}$ may not be of finite type; e.g., if $U=S^{1}\vee S^{2}$ and $\xi=id_{\mathbb{Z}}:\pi_{1}(U)\cong\mathbb{Z}\to\mathbb{Z}$, then $U^{\xi}$ is a bouquet of $S^{2}$’s, one for each integer, and hence $H_{2}(U^{\xi};k)\cong R$. One then considers the torsion part $A_{i}(U^{\xi};k)\coloneqq\operatorname{Tors}_{R}H_{i}(U^{\xi};k)$ of the $R$-module $H_{i}(U^{\xi};k)$. This is a $k$-vector space of finite dimension on which a generating covering transformation (i.e., $t$-multiplication) acts as a linear automorphism.

Alexander invariants were first introduced in classical knot theory, where it was noted that, in order to study a knot $K\subset S^{3}$, it is useful to consider the topology of its complement $U=S^{3}\setminus K$ through the lens of the infinite cyclic cover induced by the abelianization map $\xi:\pi_{1}(U)\twoheadrightarrow H_{1}(U;\mathbb{Z})\cong\mathbb{Z}$. Such invariants were quickly adopted in singularity theory, for investigating the topology of Milnor fibers associated to complex hypersurface singularity germs, see, e.g., [5]. By analogy with knot theory, Libgober, Dimca, Némethi and others considered Alexander modules for the study of the topology of complements of complex affine hypersurfaces, see, e.g., [8], [9], [14], [16], [17]. In this context, it was shown that the Alexander modules depend on the position of singularities of a hypersurface. A more general setup was considered in [2], where Alexander modules were associated with any complex quasi-projective manifold $U$ endowed with an epimorphism $\xi\colon\pi_{1}(U)\twoheadrightarrow\mathbb{Z}$. It was shown in loc. cit. that all eigenvalues of the $t$-action on $A_{i}(U^{\xi};k)$ are roots of unity, for any integer $i$, and upper bounds on the sizes of Jordan blocks of the $t$-action on each $A_{i}(U^{\xi};k)$ were obtained.

In [10], we investigated a question of S. Papadima about the existence of mixed Hodge structures on the torsion parts $A_{i}(U^{\xi};k)$ of the Alexander modules of a complex algebraic manifold $U$ endowed with an epimorphism $\xi\colon\pi_{1}(U)\twoheadrightarrow\mathbb{Z}$. Note that the infinite cyclic cover $U^{\xi}$ is not in general a complex algebraic variety, so Deligne’s classical mixed Hodge theory does not apply. We answered Papadima’s question positively in the case when the epimorphism $\xi$ is induced by an algebraic morphism $f\colon U\to\mathbb{C}^{*}$. More precisely, we proved the following result (see [10, Theorem 1.0.2]):

###### Theorem 1.1. Let $U$ be a smooth connected complex algebraic variety, with an algebraic map $f\colon U\to\mathbb{C}^{*}$. Assume that $\xi=f_{*}\colon\pi_{1}(U)\to\mathbb{Z}$ is an epimorphism, and denote by $U^{f}=\\{(x,z)\in U\times\mathbb{C}\mid f(x)=e^{z}\\}$ the corresponding infinite cyclic cover. Then the torsion part $A_{i}(U^{f};\mathbb{Q})$ of the $\mathbb{Q}[t^{\pm 1}]$-module $H_{i}(U^{f};\mathbb{Q})$ carries a canonical $\mathbb{Q}$-mixed Hodge structure for any $i\geq 0$.

In this algebraic context, partial results have been previously obtained in the following special situations:

(1) When $H_{i}(U^{\xi};\mathbb{Q})$ is $\mathbb{Q}[t^{\pm 1}]$-torsion for all $i\geq 0$ and the $t$-action is unipotent, see [12];
(2) When $f\colon U=\mathbb{C}^{n}\setminus\\{f=0\\}\to\mathbb{C}^{*}$ is induced by a reduced complex polynomial $f\colon\mathbb{C}^{n}\to\mathbb{C}$ which is transversal at infinity (i.e., the hyperplane at infinity in $\mathbb{C}P^{n}$ is transversal in the stratified sense to the projectivization of $\\{f=0\\}$), for $\xi=f_{*}$ and $i<n$; see [8, 16]. In this case it was shown in [17] that the Alexander modules $H_{i}(U^{\xi};\mathbb{Q})$ are torsion $\mathbb{Q}[t^{\pm 1}]$-modules for $i<n$, while $H_{n}(U^{\xi};\mathbb{Q})$ is free and $H_{i}(U^{\xi};\mathbb{Q})=0$ for $i>n$. Furthermore, the $t$-action on $H_{i}(U^{\xi};\mathbb{C})$ is semisimple for $i<n$, and the corresponding eigenvalues are roots of unity of order $d=\deg(f)$.

(3) When $f\colon U=\mathbb{C}^{n}\setminus\\{f=0\\}\to\mathbb{C}^{*}$ is induced by a complex polynomial $f\colon\mathbb{C}^{n}\to\mathbb{C}$ which has at most isolated singularities, including at infinity, in the sense that both the projectivization of $\\{f=0\\}$ and its intersection with the hyperplane at infinity have at most isolated singularities. In this case, and with $\xi=f_{*}$, there is only one interesting Alexander module, $H_{n-1}(U^{\xi};\mathbb{Q})$, which is torsion (see [14, Theorem 4.3, Remark 4.4]), and a mixed Hodge structure on it was constructed in [15]; see also [13] for the case of plane curves under some extra conditions.

Moreover, we showed in [10] that, if $U$ and $f$ are as in case (2) above, the mixed Hodge structure of Theorem 1.1 recovers the mixed Hodge structure obtained by different means in both [8] and [16]. In this note, we indicate the main steps in the proof of Theorem 1.1, and describe several geometric applications. For complete details, the interested reader may consult our paper [10]. We also rely heavily on terminology and constructions from [20].

Acknowledgements. E. Elduque is partially supported by an AMS-Simons Travel Grant. L. Maxim is partially supported by the Simons Foundation Collaboration Grant #567077. B. Wang is partially supported by a Sloan Fellowship and a WARF research grant.

## 2\. Preliminaries

### 2.1. Setup. Notations. Definitions. Examples

Let $k$ be either $\mathbb{Q}$ or $\mathbb{R}$. Let $R$ denote the ring $k[t^{\pm 1}]$ of Laurent polynomials in the variable $t$ over the field $k$. Let $U$ be a smooth connected complex algebraic variety, and let $f\colon U\rightarrow\mathbb{C}^{*}$ be an algebraic map inducing an epimorphism $f_{*}\colon\pi_{1}(U)\twoheadrightarrow\mathbb{Z}$ on fundamental groups. Let $\exp\colon\mathbb{C}\to\mathbb{C}^{*}$ be the infinite cyclic cover, and let $U^{f}$ be the following fiber product:

(1) $\begin{array}[]{ccc}U^{f}\subset U\times\mathbb{C}&\xrightarrow{\;f_{\infty}\;}&\mathbb{C}\\\ {\scriptstyle\pi}\big\downarrow&{\lrcorner}&\big\downarrow{\scriptstyle\exp}\\\ U&\xrightarrow{\;f\;}&\mathbb{C}^{*}.\end{array}$

Hence $U^{f}$ is embedded in $U\times\mathbb{C}$ as $U^{f}=\\{(x,z)\in U\times\mathbb{C}\mid f(x)=e^{z}\\},$ and we let $f_{\infty}$ be the restriction to $U^{f}$ of the projection $U\times\mathbb{C}\to\mathbb{C}$. Since $\exp$ is an infinite cyclic cover, $\pi\colon U^{f}\to U$ is the infinite cyclic cover induced by $\ker f_{*}$. The group of covering transformations of $U^{f}$ is isomorphic to $\mathbb{Z}$, and it induces an $R$-module structure on each group $H_{i}(U^{f};k)$, with $1\in\mathbb{Z}$ corresponding to $t\in R=k[t^{\pm 1}]$. We will also say $t$ acts on $U^{f}$ as the deck transformation $(x,z)\mapsto(x,z+2\pi i)$.

###### Definition 2.1.
The $i$-th homological Alexander module of $U$ associated to the algebraic map $f\colon U\rightarrow\mathbb{C}^{*}$ is the $R$-module $H_{i}(U^{f};k).$ Since $U$ has the homotopy type of a finite CW complex, the homology Alexander modules $H_{*}(U^{f};k)$ of the pair $(U,f)$ are finitely generated $R$-modules.

###### Example 2.2. Let $f\colon\mathbb{C}^{n}\to\mathbb{C}$ be a weighted homogeneous polynomial. Then $f\colon U=\mathbb{C}^{n}\setminus\\{f=0\\}\to\mathbb{C}^{*}$ is a locally trivial topological fibration (called the global Milnor fibration of $f$), whose fiber $F$ has the homotopy type of a finite CW complex. Assume that $F$ is connected (e.g., the $\gcd$ of the exponents of the distinct irreducible factors of $f$ is $1$). As already mentioned in the introduction, in this case we have that $H_{i}(U^{f};k)\cong H_{i}(F;k)$ is a torsion $R$-module for all $i\geq 0$. Moreover, the $t$-action on $H_{i}(U^{f};k)$ is semisimple for all $i\geq 0$.

###### Example 2.3. Let $f\colon\mathbb{C}^{n}\to\mathbb{C}$ be a complex polynomial with $V:=\\{f=0\\}$, and consider the induced map $f\colon U=\mathbb{C}^{n}\setminus V\to\mathbb{C}^{*}$. Assume $f$ has an irreducible decomposition $f=f_{1}^{n_{1}}\cdots f_{r}^{n_{r}}$ with $\gcd(n_{1},\ldots,n_{r})=1$. Then $H_{1}(U;\mathbb{Z})\cong\mathbb{Z}^{r}$ is generated by homology classes of positively oriented meridians $\gamma_{i}$ about the (regular parts of the) irreducible components of (the underlying reduced hypersurface of) $V$. Moreover, $f_{*}:\pi_{1}(U)\to\mathbb{Z}$ is the epimorphism given by assigning the integer $n_{i}$ to each meridian $\gamma_{i}$, $i=1,\ldots,r$. Assume moreover that $V$ is “in general position at infinity”, that is, the hyperplane at infinity in $\mathbb{C}P^{n}$ is transversal (in the stratified sense) to the underlying reduced hypersurface of the projectivization of $V$. Then it was shown in [18] (see also [17, 8, 16] in the case when $f$ is reduced) that the Alexander modules $H_{i}(U^{f};\mathbb{Q})$ are torsion $\mathbb{Q}[t^{\pm 1}]$-modules for $i<n$, while $H_{n}(U^{f};\mathbb{Q})$ is free and $H_{i}(U^{f};\mathbb{Q})=0$ for $i>n$. Furthermore, the $t$-action on $H_{i}(U^{f};\mathbb{C})$ is semisimple for $i<n$, and the corresponding eigenvalues are roots of unity of order $d=\deg(f)$.

###### Example 2.4. Let ${\mathcal{A}}$ be an essential hyperplane arrangement in $\mathbb{C}^{n}$, $n\geq 2$, defined by the zero set of a polynomial $f=f_{1}^{n_{1}}\cdots f_{r}^{n_{r}}:\mathbb{C}^{n}\to\mathbb{C}$ with $\gcd(n_{1},\ldots,n_{r})=1$. Let $U=\mathbb{C}^{n}\setminus{\mathcal{A}}$ be the corresponding arrangement complement, and $U^{f}$ the infinite cyclic cover induced by $f\colon U\rightarrow\mathbb{C}^{*}$. By [11, Theorem 4], we have that $H_{j}(U^{f};\mathbb{Q})$ is a torsion $R$-module for all $j<n$, a free $R$-module for $j=n$, and $0$ for $j>n$.

We next describe how the homology Alexander modules can be realized as homology groups of a certain local system on $U$. Let $\underline{k}_{U^{f}}$ be the constant $k$-sheaf on $U^{f}$, and define $\mathcal{L}=\pi_{!}\underline{k}_{U^{f}}.$ The action of $t$ on $U^{f}$ as deck transformations induces an automorphism of $\mathcal{L}$, making ${\mathcal{L}}$ into a local system of rank $1$ free $R$-modules.
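A minimal example (ours, not taken from [10]) may help fix ideas: take $U=\mathbb{C}^{*}$ and $f=\mathrm{id}$. Then $U^{f}=\\{(x,z)\in\mathbb{C}^{*}\times\mathbb{C}\mid x=e^{z}\\}\cong\mathbb{C}$ is contractible, and the deck transformation $(x,z)\mapsto(x,z+2\pi i)$ preserves the single connected component, so $H_{0}(U^{f};k)\cong R/(t-1)$ as an $R$-module and all higher Alexander modules vanish. In particular, all Alexander modules are torsion, in agreement with Example 2.2 applied to $f(x)=x$ in one variable. In this case ${\mathcal{L}}$ is the rank-one local system of free $R$-modules on $\mathbb{C}^{*}$ whose monodromy around the origin is multiplication by $t$.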
It can be easily seen that ${\mathcal{L}}$ can be described by the monodromy representation $\begin{array}[]{ccc}\pi_{1}(U)&\longrightarrow&\operatorname{Aut}_{R}(R)\\\ \gamma&\mapsto&\left(1\mapsto t^{f_{*}(\gamma)}\right)\end{array}$ and, moreover, there are natural isomorphisms of $R$-modules for all $i$: $H_{i}(U;\mathcal{L})\cong H_{i}(U^{f};k).$ For our purposes, it is more convenient to work with the cohomological version of the Alexander modules. ###### Definition 2.5. The $i$-th cohomology Alexander module of $U$ associated to the algebraic map $f$ is the $R$-module $H^{i}(U;\overline{{\mathcal{L}}}),$ where $\overline{{\mathcal{L}}}:={\mathcal{L}}\otimes_{t\mapsto t^{-1}}R$ is the conjugate (and dual) local system of ${\mathcal{L}}$. In general, the cohomological Alexander modules are not isomorphic as $R$-modules to the cohomology of the corresponding infinite cyclic cover $U^{f}$. Indeed, $H^{i}(U;\overline{{\mathcal{L}}})$ is a finitely generated $R$-module for all $i$, whereas, if $H_{i}(U^{f};k)$ is not a finite dimensional $k$-vector space, then $H^{i}(U^{f};k)$ is not a finitely generated $R$-module. Alternatively, if $\pi\colon U^{f}\to U$ is the covering map, then $H^{*}(U^{f};k)$ and $H^{*}(U;{\mathcal{L}})$ can be computed as the cohomology of $U$ with coefficients in $R\pi_{*}\pi^{-1}\underline{k}_{U}=\pi_{*}\underline{k}_{U^{f}}$ and ${\mathcal{L}}=\pi_{!}\underline{k}_{U^{f}}$ respectively, so they need not be isomorphic. ### 2.2. Universal coefficient theorem and duality The two types of Alexander modules defined in Section 2.1 are related by the Universal Coefficient Theorem, in the sense that there is a natural short exact sequence of $R$-modules $0\to\operatorname{Ext}^{1}_{R}(H_{i-1}(U;\mathcal{L}),R)\to H^{i}(U;\overline{{\mathcal{L}}})\to\operatorname{Hom}_{R}(H_{i}(U;\mathcal{L}),R)\to 0.$ The relation between the cohomology Alexander modules and the corresponding infinite cyclic cover can be made more precise as follows. ###### Proposition 2.6. [10, Proposition 2.4.1] There is a natural $R$-module isomorphism $\operatorname{Tors}_{R}H^{i}(U;\overline{\mathcal{L}})\cong(\operatorname{Tors}_{R}H_{i-1}(U^{f};k))^{\vee_{k}},$ where ${}^{\vee_{k}}$ denotes the dual as a $k$-vector space. Moreover, this isomorphism is functorial for the pair $(U,f)$. As a consequence of Proposition 2.6, we have the following result which applies, in particular, to the situations considered in Examples 2.2, 2.3 and 2.4. ###### Corollary 2.7. Assume that $H_{i}(U^{f};k)$ is a torsion $R$-module for some $i\geq 0$. Then, there exists a canonical isomorphism $\operatorname{Tors}_{R}H^{i+1}(U;\overline{\mathcal{L}})\cong H^{i}(U^{f};k).$ Moreover, if $H_{i+1}(U^{f};k)$ is also a torsion $R$-module, then, so is $H^{i+1}(U;\overline{\mathcal{L}})$. Hence, in that case, $H^{i+1}(U;\overline{\mathcal{L}})$ and $H^{i}(U^{f};k)$ are naturally isomorphic. ## 3\. Mixed Hodge structures on Alexander modules In this section, we indicate the main steps in the proof of Theorem 1.1. ### 3.1. Construction A standard procedure for obtaining a mixed Hodge structure is to identify the correct mixed Hodge complex of sheaves (see, e.g., [20, Definition 3.13]), whose hypercohomology groups will automatically carry the desired mixed Hodge structures (see, e.g., [20, Theorem 3.18II]). This is roughly the approach we use in [10]. The proof of Theorem 1.1 makes use of a sequence of reductions, and it relies on the construction of a suitable thickening of the Hodge-de Rham complex. 
We will describe these reductions and the relevant constructions in the following subsections. We begin by noting that, since the dual of a mixed Hodge structure is again a mixed Hodge structure, the identification of Proposition 2.6 allows us to reduce the proof of Theorem 1.1 to the construction of mixed Hodge structures on the torsion $R$-modules $\operatorname{Tors}_{R}H^{i}(U;\overline{\mathcal{L}})$.

#### 3.1.1. Reduction to the case of unipotent $t$-action

Let $k=\mathbb{C}$, let $U$ be a smooth connected complex algebraic variety, and let $f\colon U\rightarrow\mathbb{C}^{*}$ be an algebraic map with associated local system $\mathcal{L}$ of $R$-modules as in Section 2.1. Since $R$ is a PID, we have the primary decomposition $A_{i}(U^{f};\mathbb{C}):=\operatorname{Tors}_{R}H_{i}(U;\mathcal{L})\cong\bigoplus_{j=1}^{r}R/\big{(}(t-\lambda_{j})^{p_{j}}\big{)}$ with $p_{j}\geq 1$ for all $j=1,\ldots,r$. The set $\\{\lambda_{j}\in\mathbb{C}\mid j=1,\ldots,r\\}$ is uniquely determined by $A_{i}(U^{f};\mathbb{C})$. The following result was essentially proved in [2, Proposition 1.4], but see also [10, Proposition 2.6.1] for a slight generalization to the current setup.

###### Proposition 3.1. Every $\lambda_{j}$ defined above is a root of unity.

In particular, using Proposition 2.6, one has the following.

###### Corollary 3.2. Let $k=\mathbb{C}$. The eigenvalues of the action of $t$ on $\operatorname{Tors}_{R}H^{*}(U;\overline{\mathcal{L}})$ are all roots of unity.

Corollary 3.2 implies that we can choose $N\in\mathbb{N}$ such that $t^{N}-1$ acts nilpotently on $\operatorname{Tors}_{R}H^{i}(U;\overline{\mathcal{L}})$ for all $i$. Consider the following pull-back diagram:

$\begin{array}[]{ccc}U_{N}=\\{(x,z)\in U\times\mathbb{C}^{*}\mid f(x)=z^{N}\\}&\xrightarrow{\;f_{N}\;}&\mathbb{C}^{*}\\\ {\scriptstyle p}\big\downarrow&{\lrcorner}&\big\downarrow{\scriptstyle z\mapsto z^{N}}\\\ U&\xrightarrow{\;f\;}&\mathbb{C}^{*}\end{array}$

where $p$ is an $N$-sheeted cyclic cover, and note that all maps involved in this diagram are algebraic and $U_{N}$ is a smooth algebraic variety. We can then define, as in Section 2.1, $(U_{N})^{f_{N}}$, $(f_{N})_{\infty}$, $\pi_{N}$ and ${\mathcal{L}}_{N}$ for the map $f_{N}\colon U_{N}\to\mathbb{C}^{*}$. We also define: $\begin{array}[]{rcl}\theta_{N}\colon U^{f}&\longrightarrow&U_{N}^{f_{N}}\\\ U\times\mathbb{C}\ni(x,z)&\longmapsto&(x,e^{z/N},z/N)\in U_{N}\times\mathbb{C}\subset U\times\mathbb{C}^{*}\times\mathbb{C},\\\ \end{array}$ which fits into the following commutative diagram (so that, under $\theta_{N}$, the covering map $\pi$ corresponds to $p\circ\pi_{N}$):

(2) $\begin{array}[]{ccccccc}U^{f}&\overset{\theta_{N}}{\underset{\sim}{\longrightarrow}}&U^{f_{N}}_{N}&\xrightarrow{\;\pi_{N}\;}&U_{N}&\xrightarrow{\;p\;}&U\\\ {\scriptstyle f_{\infty}}\big\downarrow&&\big\downarrow{\scriptstyle(f_{N})_{\infty}}&{\lrcorner}&\big\downarrow{\scriptstyle f_{N}}&{\lrcorner}&\big\downarrow{\scriptstyle f}\\\ \mathbb{C}&\xrightarrow{\;z\mapsto\frac{z}{N}\;}&\mathbb{C}&\xrightarrow{\;\exp\;}&\mathbb{C}^{*}&\xrightarrow{\;z\mapsto z^{N}\;}&\mathbb{C}^{*}.\end{array}$

The map $\theta_{N}$ allows us to identify $U^{f}$ with $U_{N}^{f_{N}}$ in a canonical way, which we will do from now on. In particular, we can also identify the constant sheaves $\underline{k}_{U_{N}^{f_{N}}}$ and $\underline{k}_{U^{f}}$ canonically. Let $R(N)\coloneqq k[t^{N},t^{-N}]$. Since the deck group of the infinite cyclic cover $\pi_{N}$ is generated by $t^{N}$, the corresponding Alexander modules $H_{i}(U_{N}^{f_{N}};k)$ are finitely generated $R(N)$-modules. Since $R$ is a rank $N$ free $R(N)$-module, we can also consider $\mathcal{L}$ as a local system of rank $N$ free $R(N)$-modules on $U$.
Moreover, $\theta_{N}$ induces the canonical isomorphism $p_{*}{\mathcal{L}}_{N}={\mathcal{L}}$ of local systems of $R(N)$-modules, which can be further used to prove the following (cf. [10, Proposition 2.6.3]).

###### Lemma 3.3. In the above notations, $\theta_{N}$ induces the following canonical isomorphisms of $R(N)$-modules: $\operatorname{Tors}_{R}H^{i}(U;\overline{{\mathcal{L}}})\cong\operatorname{Tors}_{R(N)}H^{i}(U_{N};\overline{{\mathcal{L}}}_{N})$ for any integer $i\geq 0$.

###### Remark 3.4. Notice that the only eigenvalue of the action of $t^{N}$ in $\operatorname{Tors}_{R(N)}H^{*}(U_{N};\overline{\mathcal{L}}_{N})$ is $1$. So Lemma 3.3 allows us to reduce the problem of constructing a mixed Hodge structure on $\operatorname{Tors}_{R}H^{*}(U;\overline{\mathcal{L}})$ to the case when the only eigenvalue is $1$.

#### 3.1.2. Isolating the torsion

As seen in Section 3.1.1, upon passing to a finite cover $U_{N}$ of $U$ (and replacing $f$ by $f_{N}$, ${\mathcal{L}}$ by ${\mathcal{L}}_{N}$ and $t$ by $t^{N}$), we may assume that $t-1$ acts nilpotently on $\operatorname{Tors}_{R}H^{i}(U;\overline{\mathcal{L}})$, for any integer $i\geq 0$. We will assume this is the case from now on. In this section, we explain how $\operatorname{Tors}_{R}H^{i}(U;\overline{\mathcal{L}})$ can be understood in terms of the cohomology of certain $k$-local systems of finite rank on $U$. Let $s=t-1$. For $m\in\mathbb{N}$, we set $R_{m}:=R/s^{m}R$ and let $\overline{\mathcal{L}}_{m}:=\overline{\mathcal{L}}\otimes_{R}R_{m}$ be the corresponding rank one local system of $R_{m}$-modules. Since $s$ acts nilpotently on $\operatorname{Tors}_{R}H^{*}(U;\overline{\mathcal{L}})$, there exists an integer $m\geq 0$ such that $s^{m}$ annihilates $\operatorname{Tors}_{R}H^{i}(U;\overline{\mathcal{L}})$, for all $i\geq 0$. With the above notations, it can be seen (cf. [10, Lemma 3.1.8, Corollary 3.1.9]) that the maps of sheaves $\overline{\mathcal{L}}\twoheadrightarrow\overline{\mathcal{L}}_{m}\overset{\cdot s^{m}}{\hookrightarrow}\overline{\mathcal{L}}_{2m}$ induce an exact sequence $0\to\operatorname{Tors}_{R}H^{*}(U;\overline{\mathcal{L}})\to H^{*}(U;\overline{\mathcal{L}}_{m})\overset{\cdot s^{m}}{\to}H^{*}(U;\overline{\mathcal{L}}_{2m}).$ Hence,

(3) $\operatorname{Tors}_{R}H^{*}(U;\overline{\mathcal{L}})\cong\ker\left(H^{*}(U;\overline{\mathcal{L}}_{m})\overset{\cdot s^{m}}{\to}H^{*}(U;\overline{\mathcal{L}}_{2m})\right).$

Our next goal is to endow each $H^{i}(U;\overline{\mathcal{L}}_{m})$ with a canonical mixed Hodge structure for all $m\geq 1$, such that the map $H^{*}(U;\overline{\mathcal{L}}_{m})\overset{\cdot s^{m}}{\to}H^{*}(U;\overline{\mathcal{L}}_{2m})(-m)$ is a morphism of mixed Hodge structures (where $(-m)$ denotes the $-m$th Tate twist). This will be achieved by resolving $\overline{\mathcal{L}}_{m}$ by a certain mixed Hodge complex. Note that a mixed Hodge structure on $H^{*}(U;\overline{\mathcal{L}}_{m})$ induces by (3) a canonical mixed Hodge structure on $\operatorname{Tors}_{R}H^{*}(U;\overline{\mathcal{L}})$.

#### 3.1.3. Thickening Deligne’s Hodge-de Rham mixed Hodge complex

Assume $k=\mathbb{R}$, and let as before $s=t-1$. Let $(X,D)$ be a good compactification of the smooth variety $U$, and let $j:U\hookrightarrow X$ be the inclusion map. Let $\mathcal{E}^{\bullet}_{U}$ be the real de Rham complex, and $\Omega^{\bullet}_{X}(\log D)$ the log de Rham complex (see, e.g., [20, Section 4.1]).
It is known by work of Deligne that $(j_{*}\mathcal{E}^{\bullet}_{U},\Omega^{\bullet}_{X}(\log D))$ form the real and, resp., complex part of the Hodge-de Rham mixed Hodge complex $\mathcal{H}dg^{\bullet}(X\,\log D),$ whose hypercohomology endows $H^{*}(U;\mathbb{R})$ with a canonical mixed Hodge structure. To resolve $\overline{\mathcal{L}}_{m}:=\overline{\mathcal{L}}\otimes_{R}R_{m}$ by a mixed Hodge complex, we will perform a thickening of the Hodge-de Rham mixed Hodge complex, a procedure already used in [2]. Recall that for any $m\geq 1$, an $m$-thickening of a $k$-cdga $(A,d,\wedge)$ in the direction $\eta\in A^{1}\cap\ker d$ is the cochain complex of $R_{m}$-modules denoted by $\displaystyle A(\eta,m)=(A{\otimes_{k}}R_{m},d_{\eta})$ and described by:

* (i) for $p\in\mathbb{Z}$, the $p$-th graded component of $A(\eta,m)$ is $A^{p}{\otimes_{k}}R_{m}$.
* (ii) for $\omega\in A$ and $\phi\in R_{m}$, we set $d_{\eta}(\omega\otimes\phi)=d\omega\otimes\phi+(\eta\wedge\omega)\otimes{s\phi}$.

If $(\mathcal{A},\wedge,d)$ is a sheaf of cdgas on $X$ and $\eta\in\Gamma(X,\mathcal{A}^{1})\cap\ker d$ is a closed global section, then we define similarly the $m$-thickening of $\mathcal{A}$ in direction $\eta$, denoted by $\mathcal{A}(\eta,m)$. In the above notations, we have the following result (see [10, Section 5.2]).

###### Lemma 3.5. Let $k=\mathbb{R}$ and let $\mathcal{E}^{\bullet}_{U}$ be the real de Rham complex on $U$. Then a canonical resolution of $\overline{\mathcal{L}}_{m}$ as a sheaf of $R_{m}$-modules is given by the thickened complex $\mathcal{E}^{\bullet}_{U}(\Im\frac{df}{f},m)$ with a modified $R_{m}$-module structure, the action of $s$ becoming multiplication by $-\frac{\log t}{2\pi}$ (expressed as a power series in $s=t-1$). Here, $\Im$ denotes the imaginary part.

We can similarly thicken the log de Rham complex $\Omega^{\bullet}_{X}(\log D)$ by the logarithmic form $\frac{1}{i}\frac{df}{f}$ (which is cohomologous to $\Im\frac{df}{f}$). Putting everything together, this leads to a thickening $\mathcal{H}dg^{\bullet}(X\,\log D)\left(\frac{1}{i}\frac{df}{f},m\right)$ of the Hodge-de Rham complex, whose hypercohomology computes $H^{*}(U;\overline{\mathcal{L}}_{m})$. Moreover, since by [10, Theorem 4.2.1] the thickened complex of a mixed Hodge complex of sheaves is again a mixed Hodge complex of sheaves (assuming $\eta\in W_{1}\cap F^{1}$), the thickened Hodge-de Rham complex $\mathcal{H}dg^{\bullet}(X\,\log D)\left(\frac{1}{i}\frac{df}{f},m\right)$ is an $\mathbb{R}$-mixed Hodge complex of sheaves on $X$ (cf. [10, Theorem 5.4.3]). This then yields:

###### Corollary 3.6. For all $i$ and $m$, $H^{i}(U;\overline{\mathcal{L}}_{m})\cong\mathbb{H}^{i}\left(X,j_{*}\mathcal{E}^{\bullet}_{U}\left(\Im\frac{df}{f},m\right)\right)$ has a canonical $\mathbb{R}$-mixed Hodge structure.

While we omit here the discussion on filtrations (roughly speaking, these are made of the filtrations on the Hodge-de Rham complex and powers of $s$), let us just say that these are defined so that multiplication by $s$ induces a mixed Hodge complex morphism into the $-1$st Tate twist. In view of (3) and Lemma 3.5 this then yields:

###### Corollary 3.7. Suppose that the action of $t$ on $\operatorname{Tors}_{R}H^{*}(U;\overline{\mathcal{L}})$ is unipotent.
The $\mathbb{R}$-vector spaces $\operatorname{Tors}_{R}H^{*}(U;\overline{\mathcal{L}})$ admit natural $\mathbb{R}$-mixed Hodge structures for which multiplication by $\log t$ (seen as a power series in $s=t-1$) determines a morphism of mixed Hodge structures into the $-1$st Tate twist.

###### Remark 3.8. If the $t$-action on $\mathrm{Tors}_{R}\,H^{*}(U;\overline{\mathcal{L}})$ is not unipotent, then $t$ gets replaced by $t^{N}$ which acts unipotently, and then $\log t^{N}$ is a morphism of mixed Hodge structures into the $-1$st Tate twist.

Finally, we have the following result (cf. [10, Theorem 5.4.10]) which, in view of Proposition 2.6, completes the justification of Theorem 1.1:

###### Theorem 3.9 ($\mathbb{Q}$-MHS). The mixed Hodge structure on $\operatorname{Tors}_{R}H^{*}(U;\overline{{\mathcal{L}}})$ defined above for $k=\mathbb{R}$ comes from a (necessarily unique) mixed Hodge structure defined for $k=\mathbb{Q}$.

### 3.2. Properties

Let $U$ be a smooth connected complex algebraic variety, and let $f\colon U\rightarrow\mathbb{C}^{*}$ be an algebraic map inducing an epimorphism $\pi_{1}(U)\twoheadrightarrow\mathbb{Z}$, with corresponding infinite cyclic cover $U^{f}$. There are several choices made in the construction of our mixed Hodge structure on the torsion part $A_{*}(U^{f};\mathbb{Q})$ of the Alexander modules, e.g., a good compactification $(X,D)$ of $U$, a finite cover $U_{N}$ of $U$ on which the monodromy is unipotent, a positive integer $m$ such that $(t-1)^{m}$ annihilates $\operatorname{Tors}_{R}H^{*}(U;\overline{{\mathcal{L}}})$. In [10], we showed that the mixed Hodge structure we constructed on $A_{*}(U^{f};\mathbb{Q})$ is independent of all these choices, see [10, Theorem 5.4.7] (independence of compactification), [10, Theorem 5.4.8] (independence of $U_{N}$), [10, Corollary 5.4.4] (independence of nilpotence index $m$). Furthermore, the mixed Hodge structure on $A_{*}(U^{f};\mathbb{Q})$ behaves functorially with respect to algebraic maps over $\mathbb{C}^{*}$, see [10, Theorem 5.4.9]. In view of Proposition 2.6, it suffices to prove the above statements for the mixed Hodge structure on $\operatorname{Tors}_{R}H^{*}(U;\overline{{\mathcal{L}}})$, in which case the assertions follow, roughly speaking, by constructing appropriate morphisms of mixed Hodge complexes of sheaves and taking hypercohomology.

## 4\. Relation with other mixed Hodge structures and applications

In this section, we relate our mixed Hodge structures on the torsion parts of the Alexander modules with other known mixed Hodge structures, and indicate several geometric consequences.

### 4.1. Hodge theory of the infinite cyclic covering map. Applications

The infinite cyclic covering map $U^{f}\to U$ induces a natural map of vector spaces $A_{i}(U^{f};\mathbb{Q})\to H_{i}(U;\mathbb{Q})$. In [10, Theorem 6.0.1], we prove the following result.

###### Theorem 4.1. In the setting of Theorem 1.1, the vector space homomorphism $H_{i}(\pi)\colon A_{i}(U^{f};\mathbb{Q})\to H_{i}(U;\mathbb{Q})$ induced by the covering $\pi\colon U^{f}\rightarrow U$ is a morphism of mixed Hodge structures for all $i\geq 0$, where $H_{i}(U;\mathbb{Q})$ is equipped with (the dual of) Deligne’s mixed Hodge structure.

Using the faithful flatness of $\mathbb{R}$ over $\mathbb{Q}$, it suffices to work with $k=\mathbb{R}$ and prove the result in cohomology.
Once again, this amounts to constructing an appropriate morphism of mixed Hodge complexes of sheaves which, upon taking hypercohomology, gives a mixed Hodge structure morphism $H^{i}(\pi)\colon H^{i}(U;\mathbb{R})\to\operatorname{Tors}_{R}H^{i+1}(U;\overline{{\mathcal{L}}})$. The geometric meaning of the morphism $H^{i}(\pi)$ is obtained by carefully tracing arrows in the derived category.

###### Remark 4.2. The notation $H^{i}(\pi)$ of the previous paragraph is justified by the fact that, under the assumption that $H_{i}(U^{f};\mathbb{R})$ is a torsion $R$-module, the above $H^{i}(\pi)$ coincides with the map $H^{i}(U;\mathbb{R})\to H^{i}(U^{f};\mathbb{R})$ induced in cohomology by the covering map $\pi\colon U^{f}\rightarrow U$ (the dual of the map $H_{i}(\pi)$ induced in homology); compare with Corollary 2.7.

In the following subsection we indicate several applications of Theorem 4.1.

#### 4.1.1. Bounding the weights. Size of Jordan blocks

Theorem 4.1 and our construction can be used to obtain a bound on the weight filtrations of the mixed Hodge structures on the torsion parts of the Alexander modules (see [10, Theorem 7.4.1]). This bound coincides with the known bound for the homology of smooth algebraic varieties of the same dimension as the generic fiber of $f$ (cf. [4, Corollaire 3.2.15]). Specifically, we get the following.

###### Theorem 4.3. Assume the setting of Theorem 1.1. Let $i\geq 0$. If $\ell\notin[i,2i]\cap[i,2\dim_{\mathbb{C}}(U)-2]$, then $\mathrm{Gr}^{W}_{-\ell}A_{i}(U^{f};\mathbb{Q})=0$, where $\mathrm{Gr}^{W}_{-\ell}$ denotes the $-\ell$th graded piece of the weight filtration.

This theorem is proved by first using Lemma 3.3 to reduce to the case where $t$ acts unipotently on $A_{i}(U^{f};\mathbb{Q})$ for all $i$. The main step in the proof amounts to constructing the following exact sequence of mixed Hodge structures: $H^{i}(U;\mathbb{Q})\xrightarrow{H^{i}(\pi)}\operatorname{Tors}_{R}H^{i+1}(U;\overline{{\mathcal{L}}})\xrightarrow{\cdot\log(t)}\operatorname{Tors}_{R}H^{i+1}(U;\overline{{\mathcal{L}}})(-1)\to H^{i+1}(U;\mathbb{Q}).$ Given this exact sequence, the assertion of Theorem 4.3 follows by using the bounds for the nonzero weights on the cohomology of the smooth variety $U$ together with the finite dimensionality of the mixed Hodge structures.

Other consequences of our construction and Theorem 4.1 are related to the $t$-action on the torsion parts of the Alexander modules. For example, we apply it to determine bounds on the size of the Jordan blocks of this $t$-action (see [10, Corollary 7.4.2]), which nearly cut in half the existing bounds from [2, Proposition 1.10]. Specifically, we prove:

###### Corollary 4.4. Every Jordan block of the action of $t$ on $A_{i}(U^{f};\mathbb{C})$ has size at most $\min\\{\lceil(i+1)/2\rceil,n-\lfloor(i+1)/2\rfloor\\}.$ In particular, $A_{1}(U^{f};\mathbb{C})$ is a semisimple $R$-module.

To prove the above corollary, one simply needs to observe that, by Remark 3.8, applying $\log(t^{N})$ decreases the weight by 2, and use Theorem 4.3. This implies that $(\log(t^{N}))^{m}=0$, where $m$ is the bound in the corollary. Finally, one observes that this implies that $(t^{N}-1)^{m}=0$.

###### Remark 4.5. If $f\colon\mathbb{C}^{2}\rightarrow\mathbb{C}$ is a polynomial function such that $f^{-1}(0)$ is reduced and connected, let $U=\mathbb{C}^{2}\setminus f^{-1}(0)$, and consider the induced map $f\colon U\rightarrow\mathbb{C}^{*}$. By [8, Corollary 1.7], the action of $t$ on $H_{1}(U^{f};\mathbb{C})$ is semisimple.
The last part of the above corollary generalizes this result, not only to algebraic varieties $U$ that are not affine connected curve complements, but also to connected curve complements in which the corresponding $f$ is given by a non-reduced polynomial, or even a rational function.

#### 4.1.2. Semisimplicity and its consequences

A very natural question is to understand under which conditions the $t$-action on $A_{i}(U^{f};\mathbb{Q})$ is a mixed Hodge structure morphism. We prove the following result (see [10, Corollary 7.0.4 and Proposition 7.0.5]):

###### Theorem 4.6. Assume the setting of Theorem 1.1. Let $i\geq 0$. The $t$-action on $A_{i}(U^{f};\mathbb{Q})$ is a mixed Hodge structure morphism if and only if it is semisimple.

The ‘only if’ portion of the above theorem is a consequence of a simple fact about mixed Hodge structures: if a mixed Hodge structure $V$ is endowed with an endomorphism $t$ such that $\log(t^{N})$ is a morphism $V\to V(-1)$ (Remark 3.8), then necessarily $t^{N}=\mathrm{id}_{V}$. For the ‘if’ direction, it suffices to note that $t^{N}$ is unipotent for some $N$ (Proposition 3.1), so the hypothesis implies that $t^{N}=\mathrm{id}$. The Milnor long exact sequence then shows that $A_{i}(U^{f};\mathbb{Q})$ is a quotient of $H_{i}(U_{N};\mathbb{Q})$. It only remains to note that $t$ acts on the latter group by deck transformations on $U_{N}$, an algebraic map, which makes the induced map in homology a mixed Hodge structure morphism.

When the $t$-action is semisimple, we show that the mixed Hodge structure on the torsion parts $A_{i}(U^{f};\mathbb{Q})$ of the Alexander modules can be constructed directly using a finite cyclic cover, which, unlike an infinite cyclic cover, is always a complex algebraic variety. This bypasses our rather abstract general construction of the mixed Hodge structure. In [10], we present two different viewpoints. In the first, we utilize cap product with the pullback of a generator of $H^{1}(\mathbb{C}^{*};\mathbb{Q})$. In the second, we utilize a generic fiber of the algebraic map, which is always a complex algebraic variety. For the following result see [10, Corollary 7.1.3 and Corollary 7.2.1].

###### Theorem 4.7. Assume the setting of Theorem 1.1. Let $i\geq 0$ and assume that the $t$-action on $A_{i}(U^{f};\mathbb{Q})$ is semisimple. Let $N$ be such that the action of $t^{N}$ on $A_{i}(U^{f};\mathbb{Q})$ is the identity, and let $U_{N}=\\{(x,z)\in U\times\mathbb{C}^{*}\mid f(x)=z^{N}\\}$ denote the corresponding $N$-fold cyclic cover. Equip the rational homology of $U_{N}$ with the (dual of) Deligne’s mixed Hodge structure.

(A) Let $f_{N}\colon U_{N}\rightarrow\mathbb{C}^{*}$ denote the algebraic map induced by projection onto the second component, and let $\operatorname{gen}\in H^{1}(\mathbb{C}^{*};\mathbb{Q})$ be a generator. Then $A_{i}(U^{f};\mathbb{Q})$ is isomorphic as a mixed Hodge structure to the image of the mixed Hodge structure morphism induced by cap product with $f^{*}_{N}(\operatorname{gen})$ $(-)\frown f^{*}_{N}(\operatorname{gen})\colon H_{i+1}(U_{N};\mathbb{Q})(-1)\rightarrow H_{i}(U_{N};\mathbb{Q}),$ where $(-1)$ denotes the $-1$st Tate twist of a mixed Hodge structure.

(B) Let $F\hookrightarrow U$ be the inclusion of any generic fiber of $f$ and let $F\hookrightarrow U_{N}$ be any lift of this inclusion.
Then $A_{i}(U^{f};\mathbb{Q})$ is isomorphic as a mixed Hodge structure to the image of the mixed Hodge structure morphism $H_{i}(F;\mathbb{Q})\rightarrow H_{i}(U_{N};\mathbb{Q})$ induced by the inclusion, where $H_{i}(F;\mathbb{Q})$ is equipped with (the dual of) Deligne’s mixed Hodge structure.

The first viewpoint granted by semisimplicity, in terms of cap products (Theorem 4.7A), is suggested by the thickened complexes that play the central role in our construction. As for the second viewpoint, note that the homologies of different choices of generic fibers in the same degree may have different mixed Hodge structures, but any choice is allowed in Theorem 4.7B. This shows that the mixed Hodge structures on the torsion parts of the Alexander modules are common quotients of the homologies of all generic fibers, when semisimplicity holds. In fact, under the assumptions of Theorem 4.7, the inclusion $F\hookrightarrow U$ of a generic fiber of $f\colon U\to\mathbb{C}^{*}$ lifts to $U_{N}$ and $U^{f}$ via maps $i_{N}$ and $i_{\infty}$, making the following diagram commutative, where the vertical arrows $\pi_{N}\colon U^{f}\to U_{N}$ and $p\colon U_{N}\to U$ (with composition $\pi\colon U^{f}\to U$) are covering space maps:

$\begin{array}[]{ccc}&&U^{f}\\\ &{\scriptstyle i_{\infty}}\nearrow&\big\downarrow{\scriptstyle\pi_{N}}\\\ F&\xrightarrow{\;i_{N}\;}&U_{N}\\\ &{\scriptstyle i}\searrow&\big\downarrow{\scriptstyle p}\\\ &&U\end{array}$

Note that, in homology, the composition $i_{N}=\pi_{N}\circ i_{\infty}$ factors through $A_{*}(U^{f};\mathbb{Q})$, hence we get a diagram:

(4) $H_{j}(F;\mathbb{Q})\xrightarrow{\;H_{j}(i_{\infty})\;}A_{j}(U^{f};\mathbb{Q})\xrightarrow{\;H_{j}(\pi_{N})\;}H_{j}(U_{N};\mathbb{Q}),$ whose composition is $H_{j}(i_{N})$.

Moreover, it can be shown by topological arguments that $H_{j}(i_{\infty})$ is surjective (see [10, Proposition 2.5.3]), and the semisimplicity assumption on the $t$-action yields that $H_{j}(\pi_{N})$ is injective (see [10, Corollary 7.0.2]). Since $i_{N}$ is an algebraic map, it follows that $H_{j}(i_{N})$ is a morphism of mixed Hodge structures, and the same is true for $H_{j}(\pi_{N})$ by Theorem 4.1. Putting everything together, we get the following more general consequence of Theorem 4.1 (see [10, Corollary 7.2.1]), from which Theorem 4.7B follows readily.

###### Corollary 4.8. Let $N$ be such that the action of $t^{N}$ on $A_{j}(U^{f};\mathbb{Q})$ is unipotent. Suppose that the $t$-action on $A_{j}(U^{f};\mathbb{Q})$ is semisimple. Then, we have the following commutative diagram, where all the arrows are morphisms of mixed Hodge structures: $H_{j}(F;\mathbb{Q})\xrightarrow{\;H_{j}(i_{\infty})\;}A_{j}(U^{f};\mathbb{Q})\xrightarrow{\;H_{j}(\pi_{N})\;}H_{j}(U_{N};\mathbb{Q}),$ with $H_{j}(i_{N})$ as the composition.

Making use of Theorem 4.6, we also have the following (see [10, Corollary 7.2.3]).

###### Corollary 4.9. The $t$-action on $A_{j}(U^{f};\mathbb{Q})$ is semisimple if and only if for any generic fiber $F\subset U^{f}$, the induced map in homology $H_{j}(i_{\infty})\colon H_{j}(F;\mathbb{Q})\to A_{j}(U^{f};\mathbb{Q})$ is a mixed Hodge structure morphism.

###### Example 4.10 (Global Milnor fiber). Let $f\in\mathbb{C}[x_{1},\ldots,x_{n}]$ be a weighted homogeneous polynomial, and let $U=\mathbb{C}^{n}\setminus\\{f=0\\}$. As already mentioned in Example 2.2, we have a global Milnor fibration $f\colon U\to\mathbb{C}^{*}$. Assume that $f\colon U\to\mathbb{C}^{*}$ induces an epimorphism on fundamental groups, i.e., the greatest common divisor of the exponents of the distinct irreducible factors of $f$ is $1$.
Let $F$ be a fiber of $f\colon U\to\mathbb{C}^{*}$, and let as before $i_{\infty}\colon F\hookrightarrow U^{f}$ be a lift of the inclusion $i\colon F\hookrightarrow U$. Since $f\colon U\rightarrow\mathbb{C}^{*}$ is a fibration, we have that $i_{\infty}$ is a homotopy equivalence, so it induces isomorphisms $H_{j}(F;\mathbb{Q})\rightarrow H_{j}(U^{f};\mathbb{Q})$ for all $j$, which are compatible with the $t$-action (see [10, Lemma 2.5.2]). Since the $t$-action on $F$ comes from an algebraic map $F\rightarrow F$ of finite order, it follows that the $t$-action on $H_{j}(F;\mathbb{Q})$ is semisimple. Applying Corollary 4.8, we see that the Alexander modules recover in this case the Deligne mixed Hodge structure on the global Milnor fiber. Specifically, the map $H_{j}(F;\mathbb{Q})\rightarrow H_{j}(U^{f};\mathbb{Q})$ induced by $i_{\infty}$ is a mixed Hodge structure morphism, where $H_{j}(U^{f};\mathbb{Q})$ is endowed with the (dual) Deligne mixed Hodge structure.

###### Remark 4.11. Recall from Example 2.3 that if $f:\mathbb{C}^{n}\to\mathbb{C}$ is a reduced complex polynomial so that $V=\\{f=0\\}$ is in general position at infinity, then the Alexander modules $H_{i}(U^{f};\mathbb{Q})$ of $U=\mathbb{C}^{n}\setminus V$ and $f\colon U\to\mathbb{C}^{*}$ are torsion $\mathbb{Q}[t^{\pm 1}]$-modules for $i<n$, while $H_{n}(U^{f};\mathbb{Q})$ is free and $H_{i}(U^{f};\mathbb{Q})=0$ for $i>n$. Moreover, the $t$-action on $H_{i}(U^{f};\mathbb{C})$ is semisimple for $i<n$, and the corresponding eigenvalues are roots of unity of order $d=\deg(f)$. A mixed Hodge structure on $H_{i}(U^{f};\mathbb{Q})$, for $i<n$, has been constructed by Dimca-Libgober [8] and Liu [16]. Corollary 4.8 can be used directly to show that our mixed Hodge structure on $A_{*}(U^{f};\mathbb{Q})$ coincides in this case with those constructed by Dimca-Libgober and Liu, see [10, Corollary 7.3.6].

Theorem 4.7, when it applies, reinforces the significance of semisimplicity. Our results from [10] show that, in fact, semisimplicity is not a rare occurrence (we have already encountered such instances in Examples 2.2 and 2.3). For instance, when $f$ is proper, we have the following (see [10, Corollary 8.0.2]):

###### Theorem 4.12. Let $U$ be a smooth complex algebraic variety, and let $f\colon U\to\mathbb{C}^{*}$ be a proper algebraic map. Then the torsion part $A_{i}(U^{f};\mathbb{Q})$ of the homology Alexander module $H_{i}(U^{f};\mathbb{Q})$ is a semisimple $R$-module, for all $i\geq 0$.

If the map $f\colon U\to\mathbb{C}^{*}$ of Theorem 4.12 is a projective submersion, then $f$ is a fibration; let $F$ denote its fiber. The semisimplicity of $A_{i}(U^{f};\mathbb{Q})\cong H_{i}(U^{f};\mathbb{Q})\cong H_{i}(F;\mathbb{Q})$ is in this case a direct consequence of Deligne’s decomposition theorem [3, 4]. In the general case, Theorem 4.12 is proved by using the decomposition theorem of Beĭlinson–Bernstein–Deligne [1]. In view of Corollary 4.8, a nice application of the semisimplicity statement of Theorem 4.12 is the following purity result (see [10, Corollary 8.0.6]).

###### Theorem 4.13. If $f\colon U\to\mathbb{C}^{*}$ is a proper algebraic map, then $A_{i}(U^{f};\mathbb{Q})$ carries a pure Hodge structure of weight $-i$.

###### Remark 4.14. In fact, we do not know of any example where semisimplicity does not hold.
This lack of examples is mainly due to the fact that higher Alexander modules are harder to compute than the first (which, as seen in Corollary 4.4, is always semisimple, and can be computed from a presentation of the fundamental group).

### 4.2. Relation with the limit mixed Hodge structure

The mixed Hodge structure on the torsion part $A_{*}(U^{f};\mathbb{Q})$ of the Alexander modules of the pair $(U,f)$ can be regarded as a global version of the limit mixed Hodge structure on the generic fiber of $f$, in the following sense. Let $f\colon U\to\mathbb{C}^{*}$ be an algebraic map inducing an epimorphism on fundamental groups, and let $U^{f}$ denote as before the corresponding infinite cyclic cover of $U$. Let $D^{*}$ be a sufficiently small punctured disk centered at $0$ in $\mathbb{C}$, such that $f\colon f^{-1}(D^{*})\rightarrow D^{*}$ is a fibration, and let $T^{*}=f^{-1}(D^{*})$. The infinite cyclic cover $(T^{*})^{f}$ is homotopy equivalent to $F$, where $F$ denotes any fiber of the form $f^{-1}(c)$, for $c\in D^{*}$. In fact, $(T^{*})^{f}$ can be regarded as the canonical fiber of $f\colon f^{-1}(D^{*})\rightarrow D^{*}$. If $f$ is proper, $H_{i}((T^{*})^{f};\mathbb{Q})$ is also endowed with a limit mixed Hodge structure, which can be compared with the one we constructed on $A_{i}(U^{f};\mathbb{Q})$ via the following result:

###### Theorem 4.15. In the setup of Theorem 1.1, assume moreover that $f\colon U\to\mathbb{C}^{*}$ is proper. Then, in the above notations, the inclusion $(T^{*})^{f}\subset U^{f}$ induces for all $i\geq 0$ an epimorphism of $\mathbb{Q}$-mixed Hodge structures (5) $\displaystyle H_{i}((T^{*})^{f};\mathbb{Q})\twoheadrightarrow A_{i}(U^{f};\mathbb{Q}),$ where $H_{i}((T^{*})^{f};\mathbb{Q})$ is endowed with its limit mixed Hodge structure. If, moreover, $f$ is a fibration, then the two mixed Hodge structures are isomorphic.

The mixed Hodge structure morphism of Theorem 4.15 is realized, upon taking hypercohomology and $\mathbb{Q}$-duals, by a suitable morphism of mixed Hodge complexes of sheaves. In more detail, let $(X,D)$ be a good compactification of $U$ by a simple normal crossing divisor $D=X\setminus U$ such that $f\colon U\rightarrow\mathbb{C}^{*}$ extends to an algebraic map $\bar{f}\colon X\rightarrow\mathbb{C}P^{1}$. By replacing $f\colon U\rightarrow\mathbb{C}^{*}$ with a finite cyclic cover $f_{N}\colon U_{N}\rightarrow\mathbb{C}^{*}$ if necessary, we may assume that $E:=\bar{f}^{-1}(0)$ is reduced and $X\setminus U=\bar{f}^{-1}(\\{0,\infty\\})$. Let $i\colon E\hookrightarrow X$ be the inclusion. By restricting $\bar{f}$ above a sufficiently small punctured disk $D^{*}$ centered at $0\in\mathbb{C}$, one can define the nearby cycle functor $\psi_{\bar{f}}$ of Deligne, and there is a vector space isomorphism $\mathbb{H}^{*}(E;\psi_{\bar{f}}\underline{\mathbb{Q}})\cong H^{*}(F;\mathbb{Q}),$ where $F$ is any fiber of $f$ over $D^{*}$. A clockwise loop in $D^{*}$ determines a monodromy homeomorphism from $F$ to itself and so equips $\mathbb{H}^{*}(E;\psi_{\bar{f}}\underline{\mathbb{Q}})$ with the structure of a torsion module over $\mathbb{Q}[t^{\pm 1}]$. The limit mixed Hodge structure on $H^{*}(F;\mathbb{Q})$ is realized by a mixed Hodge complex $\psi_{\bar{f}}^{\textnormal{Hdg}}$ of sheaves on $E$, assigned to $\psi_{\bar{f}}\underline{\mathbb{Q}}$; see [20, Theorem 11.22], and also [10, Theorem 2.11.1].
The mixed Hodge structure morphism (5) is then induced by a morphism of mixed Hodge complexes from the thickened Hodge-de Rham complex (shifted by $[1]$ and with an appropriate twisting of the $R$-module structure) to $i_{*}\psi_{\bar{f}}^{\textnormal{Hdg}}$. For complete details and a geometric interpretation of this morphism of mixed Hodge complexes, see [10, Section 9].

## 5\. Examples. Hyperplane arrangements

Let $n\geq 2$. Let $f_{1},\ldots,f_{d}$ be degree $1$ polynomials in $\mathbb{C}[x_{1},\ldots,x_{n}]$ defining $d$ distinct hyperplanes and let $f=f_{1}\cdot\ldots\cdot f_{d}$. The zeros of $f$ define a hyperplane arrangement ${\mathcal{A}}$ of $d$ hyperplanes in $\mathbb{C}^{n}$. Let $U\subset\mathbb{C}^{n}$ be the corresponding arrangement complement, with induced map $f\colon U\to\mathbb{C}^{*}$. For the purpose of studying Alexander invariants of the pair $(U,f)$, it suffices to assume that ${\mathcal{A}}$ is essential, that is, the intersection of some subset of hyperplanes of ${\mathcal{A}}$ is a point (see, e.g., [10, Remark 10.1.2]). As already indicated in Example 2.4, if ${\mathcal{A}}$ is essential then $H_{j}(U^{f};\mathbb{Q})$ is a torsion $R$-module for all $j<n$, a free $R$-module for $j=n$, and $0$ for $j>n$. In particular, by Theorem 1.1, we can endow $H_{j}(U^{f};\mathbb{Q})$ and $H^{j}(U^{f};\mathbb{Q})$ with canonical mixed Hodge structures, for $0\leq j\leq n-1$.

By Corollary 4.4, the $t$-action on $H^{1}(U^{f};\mathbb{Q})$ is semisimple. If $N$ is chosen such that $t^{N}=1$ on $H^{1}(U^{f};\mathbb{Q})$, let $H^{1}(U^{f};\mathbb{Q})_{1}:=\ker\left(H^{1}(U^{f};\mathbb{Q})\xrightarrow{\cdot(t-1)}H^{1}(U^{f};\mathbb{Q})\right)$ and $H^{1}(U^{f};\mathbb{Q})_{\neq 1}:=\ker\left(H^{1}(U^{f};\mathbb{Q})\xrightarrow{\cdot(t^{N-1}+\ldots+t+1)}H^{1}(U^{f};\mathbb{Q})\right).$ Then we have an isomorphism of mixed Hodge structures $H^{1}(U^{f};\mathbb{Q})\cong H^{1}(U^{f};\mathbb{Q})_{1}\oplus H^{1}(U^{f};\mathbb{Q})_{\neq 1},$ and, moreover, the following result holds (see [10, Theorem 10.1.5]):

###### Theorem 5.1. Let ${\mathcal{A}}$ be an essential arrangement of $d$ hyperplanes in $\mathbb{C}^{n}$ defined by the zeros of a reduced polynomial $f$ of degree $d$, for $n\geq 2$. Then,

(i) $H^{1}(U^{f};\mathbb{Q})_{1}$ is a pure Hodge structure of type $(1,1)$, and has dimension $d-1$.

(ii) $H^{1}(U^{f};\mathbb{Q})_{\neq 1}$ is a pure Hodge structure of weight $1$.

To prove the above result, one first notices that, by a Lefschetz-type argument, we can assume that ${\mathcal{A}}$ is an essential line arrangement in $\mathbb{C}^{2}$. By the cohomological version of Theorem 4.1, the map $H^{1}(\pi)\colon H^{1}(U;\mathbb{Q})\to H^{1}(U^{f};\mathbb{Q})$ is a mixed Hodge structure morphism, and Milnor’s long exact sequence for $\pi\colon U^{f}\to U$ yields that ${\rm Image}(H^{1}(\pi))=H^{1}(U^{f};\mathbb{Q})_{1}$. Part (i) of Theorem 5.1 is then a consequence of the classical fact that $H^{1}(U;\mathbb{Q})$ is a pure Hodge structure of type $(1,1)$, see [21]. For part (ii), we use the cohomological version of Corollary 4.8, which yields a monomorphism of mixed Hodge structures $H^{1}(U^{f};\mathbb{Q})\hookrightarrow H^{1}(F;\mathbb{Q}),$ where $F$ is the generic fiber of $f\colon U\to\mathbb{C}^{*}$. The assertion follows by a careful analysis of the dimensions of the weight filtration on $H^{1}(F;\mathbb{Q})$ (the only possible weights being $1$ and $2$).
In fact, by [6, Theorem 2.1] (and the discussion following it), the generic fiber of $f$ is connected, and one can show by direct computation that $\dim\mathrm{Gr}^{W}_{2}H^{1}(F;\mathbb{Q})=d-1$ (see [10, Lemma 10.1.8]).

###### Remark 5.2. If ${\mathcal{A}}$ is a central hyperplane arrangement ($f$ is a homogeneous polynomial), then $f$ determines a global Milnor fibration with fiber $F$, so $H^{j}(U^{f};\mathbb{Q})\cong H^{j}(F;\mathbb{Q})$ is an isomorphism of mixed Hodge structures for all $j$ (see Example 4.10). Moreover, in this case the $t$-action is semisimple. Theorem 5.1 provides a generalization (for $j=1$) of a similar result for central arrangements (see, e.g., [7, Theorem 7.7] and the references therein).

## References

* [1] A. A. Beĭlinson, J. Bernstein, and P. Deligne. Faisceaux pervers. In Analysis and topology on singular spaces, I (Luminy, 1981), volume 100 of Astérisque, pages 5–171. Soc. Math. France, Paris, 1982.
* [2] N. Budur, Y. Liu, and B. Wang. The monodromy theorem for compact Kähler manifolds and smooth quasi-projective varieties. Math. Ann., 371(3-4):1069–1086, 2018.
* [3] P. Deligne. Théorème de Lefschetz et critères de dégénérescence de suites spectrales. Inst. Hautes Études Sci. Publ. Math., (35):259–278, 1968.
* [4] P. Deligne. Théorie de Hodge. II. Inst. Hautes Études Sci. Publ. Math., (40):5–57, 1971.
* [5] A. Dimca. Singularities and topology of hypersurfaces. Universitext. Springer-Verlag, New York, 1992.
* [6] A. Dimca. Hyperplane arrangements, $M$-tame polynomials and twisted cohomology. In Commutative algebra, singularities and computer algebra (Sinaia, 2002), volume 115 of NATO Sci. Ser. II Math. Phys. Chem., pages 113–126. Kluwer Acad. Publ., Dordrecht, 2003.
* [7] A. Dimca. Hyperplane arrangements. An introduction. Universitext. Springer, Cham, 2017.
* [8] A. Dimca and A. Libgober. Regular functions transversal at infinity. Tohoku Math. J. (2), 58(4):549–564, 2006.
* [9] A. Dimca and A. Némethi. On the monodromy of complex polynomials. Duke Math. J., 108(2):199–209, 2001.
* [10] E. Elduque, C. Geske, M. Herradón Cueto, L. Maxim, and B. Wang. Mixed Hodge structures on Alexander modules. arXiv:2002.01589.
* [11] E. Elduque. Twisted Alexander modules of hyperplane arrangement complements. arXiv:1702.06267, 2017.
* [12] R. Hain. The de Rham homotopy theory of complex algebraic varieties I. K-theory, 1(3):271–324, 1987.
* [13] Vik. S. Kulikov and V. S. Kulikov. On the monodromy and mixed Hodge structure in the cohomology of an infinite cyclic covering of the complement to a plane curve. Izv. Ross. Akad. Nauk Ser. Mat., 59(2):143–162, 1995.
* [14] A. Libgober. Homotopy groups of the complements to singular hypersurfaces. II. Ann. of Math. (2), 139(1):117–144, 1994.
* [15] A. Libgober. Position of singularities of hypersurfaces and the topology of their complements. J. Math. Sci., 82(1):3194–3210, 1996.
* [16] Y. Liu. Nearby cycles and Alexander modules of hypersurface complements. Adv. Math., 291:330–361, 2016.
* [17] L. Maxim. Intersection homology and Alexander modules of hypersurface complements. Comment. Math. Helv., 81(1):123–155, 2006.
* [18] L. Maxim and K. Wong. Twisted Alexander invariants of complex hypersurface complements. Proc. Roy. Soc. Edinburgh Sect. A, 148(5):1049–1073, 2018.
* [19] J. W. Milnor. Singular points of complex hypersurfaces. Annals of Mathematics Studies, No. 61. Princeton University Press, Princeton, NJ; University of Tokyo Press, Tokyo, 1968.
* [20] C. Peters and J. Steenbrink. Mixed Hodge structures, volume 52.
Springer Science & Business Media, 2008. * [21] B. Z. Shapiro. The mixed Hodge structure of the complement to an arbitrary arrangement of affine complex hyperplanes is pure. Proc. Amer. Math. Soc., 117(4):931–933, 1993.
# Energy-based Dropout in Restricted Boltzmann Machines: Why not go random

Mateus Roder, Gustavo H. de Rosa, Victor Hugo C. de Albuquerque, André L. D. Rossi, and João P. Papa. Mateus Roder, Gustavo H. de Rosa, André L. D. Rossi, and João P. Papa are with the São Paulo State University, Brazil, and Victor Hugo C. de Albuquerque is with ARMTEC Tecnologia em Robótica, Fortaleza/CE, Brazil. (email: {mateus.roder, gustavo.rosa, andre.rossi, <EMAIL_ADDRESS>[email protected]). Manuscript received xx/xx/xxxx.

###### Abstract

Deep learning architectures have been widely fostered throughout the last years, being used in a wide range of applications, such as object recognition, image reconstruction, and signal processing. Nevertheless, such models suffer from a common problem known as overfitting, which prevents the network from predicting unseen data effectively. Regularization approaches arise in an attempt to address such a shortcoming. Among them, one can refer to the well-known Dropout, which tackles the problem by randomly shutting down a set of neurons and their connections according to a certain probability. Therefore, this approach does not consider any additional knowledge to decide which units should be disconnected. In this paper, we propose an energy-based Dropout (E-Dropout) that makes conscious decisions about whether a neuron should be dropped or not. Specifically, we design this regularization method by using the correlation between neurons and the model’s energy as an importance level, and we apply it to energy-based models, such as Restricted Boltzmann Machines (RBMs). The experimental results over several benchmark datasets revealed the proposed approach’s suitability compared to the traditional Dropout and the standard RBMs.

###### Index Terms: Machine learning, Restricted Boltzmann Machines, Regularization, Dropout, Energy-based Dropout

## I Introduction

Machine learning (ML) techniques have been broadly investigated to create authentic representations of the real world. Recently, deep learning has emerged as a significant area in ML [1], since its techniques have achieved outstanding results and established several hallmarks in a wide range of applications, such as image classification, object detection, and speech recognition, to cite a few.

Restricted Boltzmann Machines (RBMs) [2] have attracted considerable attention in the past years, mainly due to their simplicity, high-level parallelism, and comprehensive representation capacity. Such models are stochastic neural networks based on energy principles and guided by physical laws. Usually, these networks learn in an unsupervised fashion [3] and are applied to various problems, e.g., image reconstruction, collaborative filtering, and feature extraction.

Machine learning algorithms are commonly trained according to an error metric called the loss function (training error). Nevertheless, their biggest challenge lies in achieving a low generalization error (testing error). Whenever there is a high discrepancy between training and testing errors, the model tends to “memorize” the training data, losing its generalization capacity and leading to reduced recognition rates when confronted with new data. Such a problem is known as overfitting. Numerous attempts have been made to lessen the overfitting problem in classification tasks, such as early-stopping the training or introducing regularization methods such as soft-weight sharing [4], L1 [5], L2 [6], and DropConnect [7], among others.
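To make the penalty-based methods just mentioned concrete, here is a minimal sketch (ours, with arbitrary coefficient names) of how L1 and L2 terms are typically added to a training loss:

```python
import numpy as np

def penalized_loss(base_loss, W, lam_l1=0.0, lam_l2=0.0):
    """Return the base training loss augmented with L1 and L2 (weight decay) penalties."""
    return base_loss + lam_l1 * np.abs(W).sum() + lam_l2 * (W ** 2).sum()
```

Larger penalty coefficients shrink the weights more aggressively, trading training accuracy for better generalization.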
Alternatively, the best way to employ a regularization method would be to average the predictions of all possible parameter configurations, weighting each possibility and checking which would perform better. Nevertheless, such a methodology demands cumbersome computational effort and is only feasible for small or non-complex models [8]. Some years ago, a regularization approach known as Dropout was proposed by Srivastava et al. [9], which turns off learning neurons by sampling from a Bernoulli distribution. In other words, neurons and their outgoing and incoming connections are temporarily removed from the network according to a probability, allowing the evaluation of distinct sub-architectures and providing more robust training knowledge.

Although it seems a straightforward method, the problem is that neurons are randomly dropped based only on a probability value ($p$), not taking advantage of valuable information related to the model itself. Also, the $p$ value has to be carefully chosen, since high probabilities of shutting off neurons may negatively impact the learning process. Therefore, we aim to address such a problem through an energy-based Dropout, which creates a relationship between the system’s neurons and its energy, removing standard Dropout’s hyper-parameter ($p$) and its random behavior while feeding in more robust information about the learning process itself.

In a nutshell, the main contributions of this paper are threefold: (i) to introduce a new type of regularization based on the model’s energy, (ii) to introduce an energy-based Dropout in the context of RBMs, and (iii) to fill the lack of research regarding Dropout-based regularizations in RBMs.

The remainder of this paper is organized as follows. Section II presents some studies and theoretical background concerning Dropout. Section III explains the energy-based Dropout, while Section IV presents the central concepts of RBM, Dropout RBM, and energy-based Dropout RBM. Section V discusses the experimental setup employed in this work, while Section VI presents the experimental results. Finally, Section VII states conclusions and future works.

## II Background and Related Work

Dropout is a probability-based method [9] that decides whether a set of neurons should be dropped or not. This section presents the main concepts regarding such an approach and reviews studies concerning this regularization method.

### II-A Related Works

Only a few recent studies have addressed the RBMs’ overfitting problem with Dropout-based regularization. For instance, Wang et al. [10] introduced a fast version of Dropout, though not with RBMs as its primary focus. The proposed approach is employed in classification and regression tasks and works by sampling from a Gaussian approximation instead of applying the Monte Carlo “optimization”. Ba et al. [11] proposed an adaptive Dropout for training deep neural networks, which is achieved by computing local expectations of binary dropout variables and by calculating derivatives using backpropagation and stochastic gradient descent. The experiments showed that the method achieved low misclassification rates on the MNIST and NORB datasets, highly competitive with CNNs. Su et al. [12] introduced a Dropout-based RBM considering field-programmable gate arrays, enabling improved implementation and hardware efficiency. Additionally, Wang et al. [13] presented an extensive review of different regularization methods in the context of RBMs, such as weight decay, network pruning, and DropConnect.
Although all these methods have obtained state-of-the-art results in some applications, their main drawback concerns parameter setup. Tomczak [14] employed different regularization methods for RBMs to improve their classification and generalization performance. In the experiments, the application of the considered regularization techniques did not result in any improvement. Nevertheless, when combining the information-theoretic regularization and the reconstruction cost, the proposed approach improved the log-probabilities.

In summary, RBM-related works show that when the main task is classification, such techniques take little advantage of the Dropout regularizer. On the other hand, it may boost unsupervised learning, increasing the log-probabilities and providing more robust data reconstruction. Considering that an RBM has a simple architecture, connections can quickly saturate, thus forcing the latent space to learn only the more prominent features from the data, which causes difficulties in data generalization and generation. It is therefore interesting to employ an advanced regularization method, such as the proposed approach, in which the energy associated with the latent representation indicates which hidden neurons should be turned off to encourage the others to learn more.

### II-B Dropout Regularization

Dropout is a robust regularization method with a low computational cost that evaluates countless sub-architectures by randomly dropping out some neurons along the training process. Such a heuristic inhibits units from learning their neighbors’ mistakes or “memorizing” the input data, being widely employed for classification tasks. Figure 1 illustrates examples of both standard and Dropout network architectures.

Figure 1: Examples of: (a) a standard network architecture and (b) a Dropout network architecture.

Furthermore, it is straightforward to elucidate the mathematical foundations of Dropout. Let $\bm{r}$ be a vector of $n$ neurons of a specific layer $L$, where each variable $r_{i}$, $i=\{1,2,\dots,n\}$, assumes the value 1 with probability $p$ (the unit is retained), regardless of the other variables $r_{j}$, $j=\{1,2,\dots,n\}$, where $i\neq j$. If $r_{i}=0$, the $i^{th}$ unit from the layer $L$ is temporarily switched off along with its connections, while the unit is held when $r_{i}=1$. Notice that each $r_{i}$ is sampled directly from a Bernoulli distribution [9], as follows:

$r_{i}\sim Bernoulli(p),\forall i=\{1,2,\dots,n\}.$ (1)

Besides, the mask is re-sampled for every batch during training. Let $\gamma$ be the network activation function and $\bm{W}^{L}\in\Re^{m\times n}$ the weight matrix in a specific layer $L$. The activation vector $\bm{y}^{L}\in\Re^{n}$ can be formulated as follows:

$\bm{y}^{L}=\gamma(\bm{W}^{L}\bm{x}^{L}),$ (2)

where $\bm{x}^{L}\in\Re^{m}$ is the input from layer $L$. In order to consider the dropout of neurons in this layer, the previous equation can be extended to the following:

$\bm{y}^{L}=\bm{r}\ast\gamma(\bm{W}^{L}\bm{x}^{L}),$ (3)

where $\ast$ stands for the point-wise operator. Notably, the Dropout regularization provides training based on all possible $2^{n}$ sub-networks, as neurons are randomly shut down according to a probability $p$.
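To make the mechanics concrete, the following minimal NumPy sketch implements the training-time masking of Equations 1–3 and the inference-time re-scaling presented next (Equation 4); the function names and the sigmoid choice for $\gamma$ are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_layer_train(x, W, p=0.5):
    """Training-time Dropout for one layer (Eqs. 1-3).

    x : (m,) layer input, W : (n, m) weight matrix,
    p : probability of retaining each unit.
    """
    y = 1.0 / (1.0 + np.exp(-(W @ x)))    # gamma(Wx), sigmoid here
    r = rng.binomial(1, p, size=y.shape)  # r_i ~ Bernoulli(p), Eq. (1)
    return r * y                          # point-wise mask, Eq. (3)

def dropout_layer_test(x, W, p=0.5):
    """Inference: no mask; weights re-scaled by p instead (Eq. 4)."""
    return 1.0 / (1.0 + np.exp(-((p * W) @ x)))
```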
Nevertheless, at inference time (testing step), the weight matrix $\bm{W}^{L}$ needs to be re-scaled with $p$ in order to consider all possible sub-networks, as follows:

$\bm{\tilde{W}}^{L}=p\bm{W}^{L}.$ (4)

## III Energy-based Dropout

In this section, we present the proposed approach, denoted as energy-based Dropout (E-Dropout), which establishes a straightforward relationship between hidden neurons and the system’s energy, hereinafter denoted the “Importance Level” (${\cal I}$). The idea is to take advantage of the model’s behavior for further enabling a more conscious decision about whether a set of neurons should be dropped or not.

Let ${\cal{I}}^{L}\in\Re^{n}$ be the Importance Level of the hidden neurons at a specific layer $L$, which directly correlates the hidden probabilities with the RBM total energy. One can define ${\cal{I}}^{L}$ as follows:

${\cal{I}}^{L}=\dfrac{\bigg{(}\dfrac{P_{tr}(\bm{x}^{L}=1)}{P_{i}(\bm{x}^{L}=1)}\bigg{)}}{|\Delta E|},$ (5)

where $P_{tr}(\bm{x}^{L}=1)$ represents the probability of activating hidden neurons in layer $L$ after the training procedure, and $P_{i}(\bm{x}^{L}=1)$ stands for the activation probability of the hidden neurons in layer $L$ given the input data $\bm{x}$ only, i.e., before training. Finally, $|\Delta E|$ represents the absolute value of the system’s energy variation, i.e., the energy after training minus the initially measured energy.

The main intuition behind such a relationship derives from the RBM’s energy, in which the hidden configuration participates directly in the total energy, as shown in Equation 8 in the next section. The idea is to represent a gain or loss in information by taking the ratio between the pre- and post-training neuron activations. Looking at Equation 5, one can observe an innovative way to model the relationship between neuron probability and the system’s energy. In short, the relevance of a hidden neuron to the model is proportional to its importance level.

After computing ${\cal{I}}^{L}$ for each hidden neuron, it is possible to obtain the Dropout mask $\bm{s}$ by comparing it with a uniformly distributed random vector as follows:

$\bm{s}=\begin{cases}1,&\text{if ${\cal{I}}^{L}<\bm{u}$}\\ 0,&\text{otherwise,}\end{cases}$ (6)

where $\bm{u}\in\Re^{n}$ is a uniformly distributed random vector, i.e., $\bm{u}\in[0,1)$. Furthermore, one can calculate the activation vector $\bm{y}^{L}$ as follows:

$\bm{y}^{L}=\bm{s}\ast\gamma(\bm{W}^{L}\bm{x}^{L}).$ (7)

It is crucial to highlight that neurons tend to increase or decrease their importance level during the learning process based on the information acquired from the data distribution, where a neuron is less likely to be dropped out when its importance assumes a higher value. Additionally, when the system’s energy is close to zero (more accurate data distribution learning), the energy-based Dropout allows a continuous dropping out of neurons so that the remaining ones learn additional information. Finally, during the inference phase, it is unnecessary to re-scale the weight matrix.

## IV Restricted Boltzmann Machines

Restricted Boltzmann Machines [15] are stochastic neural networks that deal with unlabeled data efficiently. In other words, RBMs are a suitable approach for unsupervised problems such as image reconstruction, feature extraction, pre-training deep networks, and collaborative filtering. Such networks are modeled as bipartite graphs and parametrized by physical concepts like energy and entropy.
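Before detailing the RBM formulation, a minimal NumPy sketch of the importance-level computation (Equations 5 and 6, with the unit-interval re-scaling of Equation 20 introduced in Section IV-B) may help; the function name is an illustrative assumption, and the mask here keeps a unit when its importance exceeds the uniform draw, following the stated intuition that more important neurons are less likely to be dropped.

```python
import numpy as np

rng = np.random.default_rng(0)

def edropout_mask(p_train, p_init, delta_energy):
    """Energy-based Dropout mask -- a sketch of Eqs. (5), (6) and (20).

    p_train : (n,) hidden activation probabilities after training updates
    p_init  : (n,) hidden activation probabilities given the data only
    delta_energy : scalar energy variation (after minus before training)
    """
    I = (p_train / p_init) / abs(delta_energy)  # importance level, Eq. (5)
    I = I / I.max()                             # re-scale to [0, 1], Eq. (20)
    u = rng.uniform(size=I.shape)               # u ~ U[0, 1)
    # keep units whose importance exceeds the uniform draw, so that
    # higher importance implies a lower chance of being dropped
    return (I > u).astype(float)                # mask s used in Eq. (7)
```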
An RBM has a simple architecture with two binary-valued layers: the visible layer $\bm{v}$ with $m$ units, and the hidden layer $\bm{h}$ with $n$ units. Each connection between a visible unit $v_{i}$ and a hidden unit $h_{j}$ is weighted by $w_{ij}$. The weight matrix $\bm{W}_{m\times n}$ retains the knowledge of the network (since RBMs have only one hidden layer, we omit the layer index $L$). Figure 2 shows the standard architecture of an RBM.

Figure 2: The standard RBM architecture.

While the visible layer handles the data, the hidden layer performs the feature extraction by detecting patterns and learning the data distribution in a probabilistic manner. Equation 8 describes the energy function of an RBM, where $\bm{a}\in\Re^{m}$ and $\bm{b}\in\Re^{n}$ stand for the biases of visible and hidden units, respectively:

$E(\bm{v},\bm{h})=-\sum_{i=1}^{m}a_{i}v_{i}-\sum_{j=1}^{n}b_{j}h_{j}-\sum_{i=1}^{m}\sum_{j=1}^{n}v_{i}h_{j}w_{ij}.$ (8)

In addition, the joint probability of an arrangement $(\bm{v},\bm{h})$ can be modeled as follows:

$P(\bm{v},\bm{h})=\frac{e^{-E(\bm{v},\bm{h})}}{Z},$ (9)

where $Z$ is the partition function, which is a normalization term for the probability over all possible visible and hidden states. Moreover, the marginal probability of an input vector is represented as follows:

$P(\bm{v})=\frac{\displaystyle\sum_{\bm{h}}e^{-E(\bm{v},\bm{h})}}{Z}.$ (10)

Since the RBM is a bipartite, undirected model, the activations of the visible and hidden units are mutually conditionally independent. Therefore, the formulation of their conditional probabilities is straightforward, being defined as follows:

$P(\bm{v}|\bm{h})=\prod_{i=1}^{m}P(v_{i}|\bm{h}),$ (11)

and

$P(\bm{h}|\bm{v})=\prod_{j=1}^{n}P(h_{j}|\bm{v}),$ (12)

where $P(\bm{v}|\bm{h})$ and $P(\bm{h}|\bm{v})$ represent the probability of the visible layer given the hidden states and the probability of the hidden layer given the visible states, respectively.

From Equations 11 and 12, we can derive the probability of a single active visible neuron $i$ given the hidden states, and the probability of a single active hidden neuron $j$ given the visible states, as follows:

$P(v_{i}=1|\bm{h})=\sigma\left(\sum_{j=1}^{n}w_{ij}h_{j}+a_{i}\right),$ (13)

and

$P(h_{j}=1|\bm{v})=\sigma\left(\sum_{i=1}^{m}w_{ij}v_{i}+b_{j}\right),$ (14)

where $\sigma(\cdot)$ stands for the logistic-sigmoid function.

Essentially, an RBM learns a set of parameters $\theta=(\bm{W},\bm{a},\bm{b})$ during the training process. Such a task can be modeled as an optimization problem that maximizes the product of data probabilities over the training set ${\cal V}$, as follows:

$\arg\max_{\Theta}\prod_{\bm{v}\in{\cal V}}P(\bm{v}).$ (15)

Such a problem is commonly treated by applying the negative of the logarithm function, known as the Negative Log-Likelihood (NLL), which measures how well the reconstructed data approximates the original data distribution. Therefore, it is possible to take the partial derivatives of $\bm{W}$, $\bm{a}$ and $\bm{b}$ at iteration $t$.
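As a concrete illustration of Equations 8, 13, and 14, the following NumPy sketch computes the energy and the conditional activation probabilities (function names are illustrative assumptions); Bernoulli draws from these probabilities give the Gibbs sampling step used by the training procedure described next.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def energy(v, h, W, a, b):
    """RBM energy function, Eq. (8). v:(m,), h:(n,), W:(m,n)."""
    return -(a @ v) - (b @ h) - (v @ W @ h)

def p_h_given_v(v, W, b):
    """P(h_j = 1 | v) for every hidden unit, Eq. (14)."""
    return sigmoid(W.T @ v + b)

def p_v_given_h(h, W, a):
    """P(v_i = 1 | h) for every visible unit, Eq. (13)."""
    return sigmoid(W @ h + a)

# One Gibbs step: h ~ Bernoulli(p_h_given_v(v)), then
# v_tilde ~ Bernoulli(p_v_given_h(h)), as used by Contrastive Divergence.
```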
Equations 16, 17 and 18 describe the update rules for this set of parameters:

$\bm{W}^{(t+1)}=\bm{W}^{(t)}+\eta(\bm{v}P(\bm{h}|\bm{{v}})-\bm{\tilde{v}}P(\bm{\tilde{h}}|{\bm{\tilde{v}}})),$ (16)

$\bm{a}^{(t+1)}=\bm{a}^{(t)}+(\bm{v}-\bm{\tilde{v}}),$ (17)

and

$\bm{b}^{(t+1)}=\bm{b}^{(t)}+(P(\bm{h}|\bm{v})-P(\bm{\tilde{h}}|\bm{\tilde{v}})),$ (18)

where $\eta$ is the learning rate, $\bm{\tilde{v}}$ stands for the reconstructed input data given $\bm{h}$, and $\bm{\tilde{h}}$ represents an estimation of the hidden vector $\bm{h}$ given $\bm{\tilde{v}}$. Hinton [15] proposed one of the most efficient ways to train an RBM and estimate the visible and hidden layers, known as Contrastive Divergence (CD). Such an approach uses Gibbs sampling to infer the neurons’ states, initializing the visible units with the training data.

### IV-A Dropout RBMs

Considering the concepts mentioned above, a Dropout RBM can be formulated as a simple RBM extended with one binary random vector $\bm{r}\in\{0,1\}^{n}$. In this new formulation, $\bm{r}$ stands for the activation or dropout of the neurons in the hidden layer, where each variable $r_{i}$ determines whether the neuron $h_{i}$ is going to be dropped out or not. Figure 3 illustrates such an idea, in which the hidden unit $h_{2}$ is shut off.

Figure 3: The Dropout-based RBM architecture.

Notice that $\bm{r}$ is re-sampled for every mini-batch during learning. As units may be dropped from the hidden layer, Equation 14 can be rewritten as follows:

$P(h_{j}=1|\bm{r},\bm{v})=\begin{cases}0,&\text{if}\ r_{j}=0\\ \sigma\left(\sum_{i=1}^{m}W_{ij}v_{i}+b_{j}\right),&\text{otherwise}.\end{cases}$ (19)

Therefore, a Dropout RBM can be understood as a blend of several RBMs, each one using a different subset of its hidden layer. As we are training the model with different subsets, the weight matrix $\bm{W}$ needs to be scaled at testing time, being multiplied by $p$ in order to adjust its weights (Equation 4).

### IV-B E-Dropout RBMs

As aforementioned in Section III, one can use Equation 5 to calculate the importance level ${\cal{I}}$ of the hidden neurons. Nevertheless, when dealing with an E-Dropout RBM, as the system’s energy approaches zero, ${\cal{I}}$ tends to blow up to large values. Therefore, it is necessary to re-scale ${\cal{I}}$ between $0$ and $1$ as follows:

${\cal{I}}=\dfrac{{\cal{I}}}{\max\{{\cal{I}}\}}.$ (20)

After computing ${\cal{I}}$, one can use Equation 6 to calculate the Dropout mask $\bm{s}$. Therefore, Equation 14 can be rewritten as follows:

$P(h_{j}=1|\bm{s},\bm{v})=\begin{cases}0,&\text{if}\ s_{j}=0\\ \sigma\left(\sum_{i=1}^{m}W_{ij}v_{i}+b_{j}\right),&\text{otherwise}.\end{cases}$ (21)

Furthermore, it is worth using mini-batches while training the network, which can be accomplished by calculating Equation 20 for every sample in the mini-batch and averaging the results.

## V Experiments

In this section, we present the methodological setup used to evaluate the E-Dropout considering RBMs in the task of binary image reconstruction (RBMs, Dropout-RBMs, Weight-RBMs, and Energy-Dropout RBMs are available in the Learnergy library [16]). Besides, we compare the proposed method against the standard Dropout, RBMs without Dropout, and others, and describe the employed datasets and the experimental setup.

### V-A Modeling E-Dropout RBMs

As aforementioned in Section III, the energy-based Dropout uses Equation 5 to calculate an importance level ${\cal I}$ for each neuron. Additionally, it computes the dropout mask $\bm{s}$ using Equation 6.
Finally, it uses $\bm{s}$ in the same way as the standard Dropout method. Note that we consider the very same fundamental concepts presented in Section IV.

### V-B Datasets

Three well-known image datasets were employed throughout the experiments:

* MNIST [17] (http://yann.lecun.com/exdb/mnist): a set of $28\times 28$ grayscale images of handwritten digits (0-9), i.e., 10 classes. The original version contains a training set with $60,000$ images from digits ‘0’-‘9’, as well as a test set with $10,000$ images;
* Fashion-MNIST [18] (https://github.com/zalandoresearch/fashion-mnist): a set of $28\times 28$ grayscale images of clothing objects. The original version contains a training set with $60,000$ images from $10$ distinct objects (t-shirt, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot), and a test set with $10,000$ images;
* Kuzushiji-MNIST [19] (https://github.com/rois-codh/kmnist): a set of $28\times 28$ grayscale images of hiragana characters. The original version contains a training set with $60,000$ images from $10$ previously selected hiragana characters, and a test set with $10,000$ images.

### V-C Experimental Setup

Concerning the experimental setup, we employed five different RBM architectures, in which the main difference lies in the regularization method. In this case, the RBM does not employ Dropout, the “Weight” RBM (W-RBM) uses L2 regularization with a penalty of $5\cdot 10^{-3}$ (the mean value of the range proposed by Hinton [2]), the “standard-dropout” RBM (D-RBM) uses the traditional Dropout, the “DropConnect” RBM (DC-RBM) uses the traditional DropConnect, and the “E-Dropout” RBM (E-RBM) employs the proposed energy-based Dropout. Additionally, when considering the “standard-dropout” and the “DropConnect”, we used $p=0.5$ as stated by Srivastava et al. [9] and Wan et al. [7], respectively.

Since the learning rate and the number of hidden neurons are important hyperparameters of an RBM, we fixed each RBM according to Table I, in which four different models have been considered, i.e., $M_{a}$, $M_{b}$, $M_{c}$, $M_{d}$. To provide further evidence of the E-Dropout’s suitability, we employed four distinct architectures, differing only in the number of hidden neurons and learning rates. Notice that three out of four architectures have $1,024$ hidden neurons. The reason is that RBMs with more feature detector units have more chances to learn unimportant information from the data distribution. Moreover, we decreased the learning rate to verify the E-Dropout’s ability to improve significantly when the network learns slowly. Furthermore, we have considered $50$ epochs for the RBM learning procedure with mini-batches of size $256$, while all RBMs were trained using the Contrastive Divergence algorithm with $k=1$ (CD-1).

TABLE I: RBM hyperparameter configuration.

Parameter | $\mathbf{M_{a}}$ | $\mathbf{M_{b}}$ | $\mathbf{M_{c}}$ | $\mathbf{M_{d}}$
---|---|---|---|---
$n$ (hidden neurons) | 512 | 1,024 | 1,024 | 1,024
$\eta$ (learning rate) | 0.1 | 0.1 | 0.03 | 0.01

Two distinct metrics assessed the performance of the models on the test set, i.e., the Mean Squared Error (MSE) and the Structural Similarity Index (SSIM) [20]. The former is often known as the reconstruction error, which encodes the quality of the pixels reconstructed by the RBMs. In contrast, the latter provides a more efficient analysis of the image structure itself, comparing the quality between the original and reconstructed images.
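For reference, a minimal evaluation sketch of the two metrics is shown below, assuming images normalized to $[0,1]$ and using scikit-image’s SSIM implementation; the function name and shapes are illustrative assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(originals, reconstructions):
    """Mean MSE and mean SSIM over a test set.

    originals, reconstructions : arrays of shape (N, 28, 28) in [0, 1].
    """
    mse = ((originals - reconstructions) ** 2).mean(axis=(1, 2))
    ssim = [structural_similarity(o, r, data_range=1.0)
            for o, r in zip(originals, reconstructions)]
    return float(mse.mean()), float(np.mean(ssim))
```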
To provide robust statistical analysis, and acknowledging that the experiments’ results are independent and continuous over a particular dependent variable (e.g., number of observations), we verified that the Wilcoxon signed-rank test [21] satisfied our requirements. It is a non-parametric hypothesis test used to compare two or more related observations (in our case, repeated measurements of the MSE and SSIM values) to assess whether there are statistically significant differences between them or not. Therefore, we evaluated the different RBM models with distinct Dropout methods ten times for every dataset and architecture to mitigate the RBMs’ stochastic nature. Notice the statistical evaluation considers each model at once.

Finally, all the experiments were run on a desktop computer with 16 GB of RAM ($2,400$ MHz clock), a six-core AMD processor clocked at 3 GHz, and a GTX 1060 GPU with 6 GB of memory.

## VI Experimental Results

This section presents the experimental results concerning the E-Dropout RBM, D-RBM, W-RBM, and RBM, considering three well-known literature datasets.

### VI-A MNIST

Considering the MNIST dataset, Table II exhibits the mean reconstruction errors and their respective standard deviations over the testing set, where the best results are in bold according to the Wilcoxon signed-rank test. Considering models $M_{b}$ and $M_{c}$, the E-RBM could not obtain better results than the RBM, while for models $M_{a}$ and $M_{d}$, we can highlight that the E-RBM was more accurate than the RBM, W-RBM, D-RBM, and DC-RBM. Furthermore, these achievements evidence that E-Dropout is less sensitive to different learning rates.

TABLE II: Mean reconstruction errors and their respective standard deviation on the MNIST testing set.

Technique | $\mathbf{M_{a}}$ | $\mathbf{M_{b}}$ | $\mathbf{M_{c}}$ | $\mathbf{M_{d}}$
---|---|---|---|---
RBM [15] | 20.968 $\pm$ 0.059 | 17.728 $\pm$ 0.040 | 20.644 $\pm$ 0.039 | 28.214 $\pm$ 0.068
W-RBM [2] | 35.460 $\pm$ 0.126 | 31.753 $\pm$ 0.125 | 33.235 $\pm$ 0.062 | 39.583 $\pm$ 0.046
D-RBM [9] | 47.757 $\pm$ 0.242 | 47.968 $\pm$ 0.267 | 67.572 $\pm$ 0.339 | 80.155 $\pm$ 0.244
DC-RBM [7] | 26.470 $\pm$ 0.134 | 25.595 $\pm$ 0.235 | 28.437 $\pm$ 0.142 | 35.856 $\pm$ 0.146
E-RBM | 20.511 $\pm$ 0.061 | 18.369 $\pm$ 0.072 | 21.277 $\pm$ 0.050 | 26.14 $\pm$ 0.174

Table III exhibits the mean SSIM values and their respective standard deviations over all experiments, where the best results are in bold according to the Wilcoxon signed-rank test. Considering models $M_{b}$ and $M_{c}$, the E-RBM obtained, statistically, the same results as the RBM, while for models $M_{a}$ and $M_{d}$, the E-RBM was significantly better than the RBM, W-RBM, D-RBM, and DC-RBM. It is interesting to note that the proposed approach surpasses the other regularization methods for all models; moreover, for the architecture with $1,024$ hidden neurons and the lowest learning rate, the E-Dropout provided a $2.5\%$ SSIM improvement over the RBM (the second-best model).

TABLE III: Mean SSIM values and their respective standard deviation on the MNIST testing set.
Technique | $\mathbf{M_{a}}$ | $\mathbf{M_{b}}$ | $\mathbf{M_{c}}$ | $\mathbf{M_{d}}$
---|---|---|---|---
RBM [15] | 0.8170 $\pm$ 0.001 | 0.8410 $\pm$ 0.001 | 0.8150 $\pm$ 0.000 | 0.7490 $\pm$ 0.001
W-RBM [2] | 0.7243 $\pm$ 0.001 | 0.7500 $\pm$ 0.001 | 0.7367 $\pm$ 0.001 | 0.6884 $\pm$ 0.001
D-RBM [9] | 0.5690 $\pm$ 0.002 | 0.5460 $\pm$ 0.003 | 0.3750 $\pm$ 0.002 | 0.2950 $\pm$ 0.001
DC-RBM [7] | 0.7684 $\pm$ 0.001 | 0.7659 $\pm$ 0.002 | 0.7468 $\pm$ 0.001 | 0.6934 $\pm$ 0.001
E-RBM | 0.8240 $\pm$ 0.000 | 0.8410 $\pm$ 0.000 | 0.8160 $\pm$ 0.001 | 0.7740 $\pm$ 0.002

In summary, one can notice that the E-RBM performed better than all the baselines with regularization. Additionally, Figure 4 depicts the mean reconstruction error over the training set only for the models that employ Dropout regularization and the naive version (RBM), since such a comparison is the focus of this work and additional curves would make the plots visually cluttered. One can observe that most of the RBM models achieved better results than the D-RBM considering the same number of hidden neurons and learning rate. Nevertheless, for models $M_{a}$ and $M_{b}$, the E-RBM achieved better reconstruction errors, besides converging faster in the first iterations.

Figure 4: Mean reconstruction error over the MNIST training set.

Figure 5 depicts the mean SSIM over the testing set for both Dropout methods and the naive RBM regarding all models. One crucial point to highlight is that all RBM and E-RBM models achieved better results than the D-RBM ones, probably due to the latter’s constant neuron shutdown. Moreover, the E-Dropout achieved the best SSIM considering models $M_{a}$, $M_{b}$, and $M_{c}$, thus supporting the proposed regularization technique.

Figure 5: Mean structural similarity index over the MNIST testing set.

### VI-B Fashion-MNIST

Regarding the Fashion-MNIST dataset, Table IV exhibits the mean reconstruction errors and their respective standard deviations over all experiments. The superiority of the E-RBM over the baselines is clear, as it achieved the lowest errors over all RBM architectures. We can highlight the performance on model $M_{d}$, which was $5.77\%$ better than the standard RBM.

TABLE IV: Mean reconstruction errors and their respective standard deviation on the Fashion-MNIST testing set.

Technique | $\mathbf{M_{a}}$ | $\mathbf{M_{b}}$ | $\mathbf{M_{c}}$ | $\mathbf{M_{d}}$
---|---|---|---|---
RBM [15] | 55.258 $\pm$ 0.097 | 53.204 $\pm$ 0.075 | 58.077 $\pm$ 0.066 | 67.293 $\pm$ 0.070
W-RBM [2] | 66.195 $\pm$ 0.238 | 59.660 $\pm$ 0.152 | 62.769 $\pm$ 0.115 | 73.732 $\pm$ 0.114
D-RBM [9] | 127.76 $\pm$ 0.694 | 104.78 $\pm$ 0.722 | 118.52 $\pm$ 0.782 | 119.95 $\pm$ 0.497
DC-RBM [7] | 71.161 $\pm$ 0.105 | 68.983 $\pm$ 0.146 | 73.101 $\pm$ 0.190 | 80.538 $\pm$ 0.205
E-RBM | 53.858 $\pm$ 0.180 | 52.288 $\pm$ 0.095 | 55.064 $\pm$ 0.091 | 61.52 $\pm$ 0.085

Table V exhibits the results concerning the SSIM measure, with the best ones in bold according to the Wilcoxon signed-rank test. We can observe that the E-RBM achieved better results than the other baselines for models $M_{b}$ and $M_{c}$ (similar to the DC-RBM). Surprisingly, the DC-RBM achieved better results regarding models $M_{a}$, $M_{c}$, and $M_{d}$, which was unexpected since such behavior was not observed in Table IV.

TABLE V: Mean SSIM values and their respective standard deviation on the Fashion-MNIST testing set.
Technique | $\mathbf{M_{a}}$ | $\mathbf{M_{b}}$ | $\mathbf{M_{c}}$ | $\mathbf{M_{d}}$
---|---|---|---|---
RBM [15] | 0.5630 $\pm$ 0.001 | 0.5940 $\pm$ 0.000 | 0.5410 $\pm$ 0.000 | 0.4760 $\pm$ 0.000
W-RBM [2] | 0.5295 $\pm$ 0.001 | 0.5608 $\pm$ 0.001 | 0.5463 $\pm$ 0.001 | 0.4913 $\pm$ 0.001
D-RBM [9] | 0.2010 $\pm$ 0.002 | 0.2570 $\pm$ 0.002 | 0.2220 $\pm$ 0.002 | 0.2130 $\pm$ 0.001
DC-RBM [7] | 0.5791 $\pm$ 0.001 | 0.5856 $\pm$ 0.001 | 0.5729 $\pm$ 0.001 | 0.5405 $\pm$ 0.001
E-RBM | 0.5760 $\pm$ 0.001 | 0.6130 $\pm$ 0.001 | 0.5710 $\pm$ 0.001 | 0.5150 $\pm$ 0.001

Additionally, Figure 6 depicts the mean reconstruction error for all architectures that employ Dropout and the naive version (RBM). One can note that the standard Dropout technique disturbed the RBM learning step. Moreover, the E-RBM achieved the best results in all models and the lowest reconstruction errors.

Figure 6: Mean reconstruction error over the Fashion-MNIST training set.

In the same manner, but assessing the models’ performance by the SSIM measure (Figure 7), we can confirm that the D-RBM could not be competitive against the energy-based Dropout and the RBM. Besides, the E-RBM obtained better results than the RBMs without Dropout, strengthening its capability to increase the network’s learning power.

Figure 7: Mean structural similarity index over the Fashion-MNIST testing set.

### VI-C Kuzushiji-MNIST

Considering the Kuzushiji-MNIST dataset, from Table VI one can see that the E-RBM achieved the lowest mean reconstruction errors for settings $M_{a}$ and $M_{d}$. For model $M_{b}$, the RBM performed best among the employed techniques, while for $M_{c}$, the difference between the E-RBM and the RBM was not significant. On the other hand, for model $M_{d}$, the E-RBM had a performance improvement of $3.32\%$ compared to the RBM.

TABLE VI: Mean reconstruction errors and their respective standard deviation on Kuzushiji-MNIST.

Technique | $\mathbf{M_{a}}$ | $\mathbf{M_{b}}$ | $\mathbf{M_{c}}$ | $\mathbf{M_{d}}$
---|---|---|---|---
RBM [15] | 46.470 $\pm$ 0.121 | 37.587 $\pm$ 0.064 | 43.38 $\pm$ 0.070 | 58.262 $\pm$ 0.062
W-RBM [2] | 76.839 $\pm$ 0.139 | 67.470 $\pm$ 0.139 | 70.537 $\pm$ 0.059 | 83.548 $\pm$ 0.061
D-RBM [9] | 89.810 $\pm$ 0.249 | 80.475 $\pm$ 0.207 | 93.291 $\pm$ 0.189 | 109.436 $\pm$ 0.085
DC-RBM [7] | 60.703 $\pm$ 0.149 | 53.330 $\pm$ 0.219 | 58.538 $\pm$ 0.141 | 74.056 $\pm$ 0.186
E-RBM | 44.853 $\pm$ 0.133 | 38.511 $\pm$ 0.086 | 43.544 $\pm$ 0.074 | 54.937 $\pm$ 0.143

Additionally, Table VII exhibits the results for the SSIM metric. One can see that the E-RBM kept the same behavior as before, meaning that for models $M_{a}$ and $M_{d}$ it achieved better results than the other baselines, while for model $M_{b}$, the RBM and the E-RBM show no statistical difference.

TABLE VII: Mean SSIM values and their respective standard deviation on Kuzushiji-MNIST.
Technique | $\mathbf{M_{a}}$ | $\mathbf{M_{b}}$ | $\mathbf{M_{c}}$ | $\mathbf{M_{d}}$
---|---|---|---|---
RBM [15] | 0.6910 $\pm$ 0.001 | 0.7480 $\pm$ 0.001 | 0.7040 $\pm$ 0.000 | 0.6060 $\pm$ 0.001
W-RBM [2] | 0.5675 $\pm$ 0.001 | 0.6135 $\pm$ 0.001 | 0.5948 $\pm$ 0.001 | 0.5262 $\pm$ 0.001
D-RBM [9] | 0.3890 $\pm$ 0.002 | 0.4290 $\pm$ 0.002 | 0.2970 $\pm$ 0.002 | 0.2040 $\pm$ 0.000
DC-RBM [7] | 0.6561 $\pm$ 0.001 | 0.6703 $\pm$ 0.001 | 0.6577 $\pm$ 0.001 | 0.5933 $\pm$ 0.001
E-RBM | 0.7040 $\pm$ 0.001 | 0.7480 $\pm$ 0.000 | 0.7150 $\pm$ 0.000 | 0.6370 $\pm$ 0.000

Moreover, Figure 8 depicts the mean reconstruction error over the training set for all the Dropout-based models and the naive version, respectively. In this particular dataset, the MSE was considerably lower than Fashion-MNIST’s, even though its characters seem more complex and have fewer details than Fashion-MNIST objects.

Figure 8: Mean reconstruction error over Kuzushiji-MNIST’s training set.

Furthermore, Figure 9 exhibits the mean SSIM over the testing set, following the same approach as Figure 8. In this particular dataset, the E-RBM achieved almost the same performance as the RBM for all configurations of $n$ (number of hidden neurons) and $\eta$ (learning rate).

Figure 9: Mean structural similarity index over Kuzushiji-MNIST’s testing set.

### VI-D Dropout Overall Discussion

In addition to the reconstruction error and the visual quality of the reconstructed images assessed by SSIM, in this section we analyze the behavior of E-Dropout with respect to the number of neurons dropped out over the training epochs and the weights learned by the E-RBM, D-RBM, and RBM models, since these are the only ones employing neuron deactivation, in addition to the original RBM.

Architecture $M_{a}$ is used here to illustrate how the E-Dropout affects the number of neurons turned off in the training process for the three datasets, as shown in Figures 10, 11 and 12. For clarity, the D-RBM tends to turn off about $60{,}000$ neurons per epoch, i.e., $n\cdot p\cdot(60{,}000\ \text{images}/\text{mini-batch size})$; with $n=512$, $p=0.5$, and a mini-batch size of $256$, this gives $512\cdot 0.5\cdot(60{,}000/256)=60{,}000$. The E-RBM, in turn, considers the neuron activations and the system energy batch-by-batch, and therefore has no such “mean value”.

Figure 10: Mean number of neurons dropped over MNIST’s training set.

Figure 11: Mean number of neurons dropped over Fashion-MNIST’s training set.

Figure 12: Mean number of neurons dropped over Kuzushiji-MNIST’s training set.

These results indicate that the E-Dropout behavior depends on the dataset, since it considers the relationship between neuron activations and the system’s energy derived from the data itself. Considering the MNIST dataset, the E-Dropout starts by turning off almost the same number of neurons as the standard Dropout and slowly decreases this value over the epochs. For the Fashion-MNIST dataset, the E-Dropout starts with almost all neurons and then drops them out following a sigmoid-like curve. On the other hand, considering the Kuzushiji-MNIST dataset, the E-Dropout starts by turning off approximately $37,000$ neurons and rapidly decreases this value, increasing the number of dropped-out neurons again in the last epochs.

Figure 13: $M_{c}$ - MNIST subset of learned weights: (I-A) RBM, (II-A) D-RBM and (III-A) E-RBM. Fashion-MNIST subset of learned weights: (I-B) RBM, (II-B) D-RBM and (III-B) E-RBM. Kuzushiji-MNIST subset of learned weights: (I-C) RBM, (II-C) D-RBM and (III-C) E-RBM.
Regarding the weights learned by the models, Figure 13 depicts a subset of model $M_{c}$’s weights for the MNIST (I-A, II-A, III-A), Fashion-MNIST (I-B, II-B, III-B), and Kuzushiji-MNIST (I-C, II-C, III-C) datasets, respectively. Overall, the D-RBM provides some sparsity but less “clear” weights, barely representing any image details. The RBM provides a fair representation of these details, even though, in some cases, its weights are clearly less informative. Finally, the E-RBM portrays a more accurate image representation, mainly of the high-frequency details, such as the inner drawings.

Additionally, one can establish a parallel with the temperature regularization effect shown by [22] and [23], in which low temperatures force the connections toward small values, providing network sparsity while improving the lower bound in the learning process. Such behavior is interesting since the E-RBM was encouraged to prevent co-adaptations selectively.

Furthermore, Figures 14, 15 and 16 depict a subset of model $M_{c}$’s reconstructed images over the MNIST, Fashion-MNIST, and Kuzushiji-MNIST datasets, respectively.

Figure 14: $M_{c}$ - MNIST subset of reconstructed images: (I) RBM, (II) D-RBM, (III) E-RBM and (IV) Original.

Figure 15: $M_{c}$ - Fashion-MNIST subset of reconstructed images: (I) RBM, (II) D-RBM, (III) E-RBM and (IV) Original.

Figure 16: $M_{c}$ - Kuzushiji-MNIST subset of reconstructed images: (I) RBM, (II) D-RBM, (III) E-RBM and (IV) Original.

Finally, Table VIII shows the computational burden of all methods and architectures over $50$ epochs of training. It is essential to highlight that all datasets impose almost the same computational load due to their shared characteristics, so the results are summarized in one general table. The mean and standard deviation are computed over the ten repetitions of the experiments.

TABLE VIII: Mean time in seconds and their respective standard deviation.

Technique | $\mathbf{M_{a}}$ | $\mathbf{M_{b}}$ | $\mathbf{M_{c}}$ | $\mathbf{M_{d}}$
---|---|---|---|---
RBM [15] | 250 $\pm$ 3 | 250 $\pm$ 3 | 250 $\pm$ 3 | 250 $\pm$ 3
W-RBM [2] | 250 $\pm$ 3 | 250 $\pm$ 3 | 250 $\pm$ 3 | 250 $\pm$ 3
D-RBM [9] | 250 $\pm$ 3 | 250 $\pm$ 3 | 250 $\pm$ 3 | 250 $\pm$ 3
DC-RBM [7] | 1050 $\pm$ 3 | 1100 $\pm$ 3 | 1100 $\pm$ 3 | 1100 $\pm$ 3
E-RBM | 300 $\pm$ 3 | 300 $\pm$ 4 | 300 $\pm$ 4 | 300 $\pm$ 4

Table VIII shows that the E-RBM has a slightly higher computational load than the RBM, W-RBM, and D-RBM, even considering a high number of training epochs. On the other hand, the DC-RBM was the most computationally expensive model, since DropConnect needs to sample a weight mask for every instance in the mini-batch. In summary, the improvement achieved by the E-RBM in the image reconstruction task depicted in the previous sections outweighs its slightly worse processing time compared with the simpler baselines.

## VII Conclusion

This article proposed a new regularization method, known as energy-based Dropout, an enhanced parameterless version of the traditional Dropout. Based on physical principles, it creates a direct correlation between the system’s energy and its hidden neurons, denoted as the Importance Level (${\cal{I}}$). Furthermore, as Restricted Boltzmann Machines are also physics-based neural networks, they were considered the perfect architecture to validate the proposed approach.
The energy-based Dropout was validated in Restricted Boltzmann Machines through a binary image reconstruction task. Three well-known literature datasets, MNIST, Fashion-MNIST, and Kuzushiji-MNIST, were employed to validate the proposed approach. Considering the experimental results discussed in the paper, one can observe that the energy-based Dropout proved to be a suitable regularization technique, obtaining significantly better SSIM rates than its counterpart Dropout in all three datasets. Additionally, when comparing the energy-based Dropout to the standard RBM, it outperformed the latter in two out of three datasets, being only slightly worse in the remaining one. Moreover, it is possible to perceive that the weights learned by the energy-Dropout approach were able to recognize different patterns and high-frequency details, besides presenting less sharp edges when compared to the standard RBM and the Dropout-based RBM.

When comparing all the employed techniques, more demanding tasks benefit more from the energy-based Dropout than easier ones, i.e., tasks with higher reconstruction errors seem to achieve the best results when using the energy-based Dropout. Moreover, when comparing the proposed method and the standard one, the proposed regularization obtained significantly better results, reinforcing its capacity to improve RBMs’ learning procedure.

Regarding future works, we aim at expanding some concepts of the energy-Dropout regularization technique to the classification task and other suitable machine learning algorithms, such as Deep Belief Networks (DBNs) and Deep Boltzmann Machines (DBMs).

## Acknowledgements

The authors are grateful to São Paulo Research Foundation (FAPESP) grants #14/12236-1, #2019/07825-1 and #2019/02205-5. Also, VHCA received support from the Brazilian National Council for Research and Development (CNPq), grants #304315/2017-6 and #430274/2018-1. JPP also acknowledges CNPq grants #307066/2017-7 and #427968/2018-6.

## References

* [1] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 35, no. 8, pp. 1798–1828, 2013.
* [2] G. E. Hinton, “A practical guide to training restricted boltzmann machines,” in _Neural Networks: Tricks of the Trade_, ser. Lecture Notes in Computer Science, G. Montavon, G. Orr, and K.-R. Müller, Eds. Springer Berlin Heidelberg, 2012, vol. 7700, pp. 599–619.
* [3] Y. Bengio, “Learning deep architectures for AI,” _Foundations and Trends in Machine Learning_, vol. 2, no. 1, pp. 1–127, 2009.
* [4] S. J. Nowlan and G. E. Hinton, “Simplifying neural networks by soft weight-sharing,” _Neural Computation_, vol. 4, no. 4, pp. 473–493, July 1992.
* [5] R. Tibshirani, “Regression shrinkage and selection via the lasso,” _Journal of the Royal Statistical Society: Series B (Methodological)_, vol. 58, no. 1, pp. 267–288, 1996.
* [6] A. E. Hoerl and R. W. Kennard, “Ridge regression: Biased estimation for nonorthogonal problems,” _Technometrics_, vol. 12, no. 1, pp. 55–67, 1970.
* [7] L. Wan, M. Zeiler, S. Zhang, Y. Le Cun, and R. Fergus, “Regularization of neural networks using dropconnect,” in _International Conference on Machine Learning_, 2013, pp. 1058–1066.
* [8] H. Y. Xiong, Y. Barash, and B. J. Frey, “Bayesian prediction of tissue-regulated splicing using rna sequence and cellular context,” _Bioinformatics_, vol. 27, no. 18, pp. 2554–2562, Sep. 2011.
* [9] N. Srivastava, G. E. Hinton, A. Krizhevsky, I.
Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” _The Journal of Machine Learning Research_, vol. 15, no. 1, pp. 1929–1958, Jan. 2014.
* [10] S. Wang and C. Manning, “Fast dropout training,” in _Proceedings of the 30th International Conference on Machine Learning_, 2013, pp. 118–126.
* [11] L. J. Ba and B. Frey, “Adaptive dropout for training deep neural networks,” in _Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2_, ser. NIPS’13. Curran Associates Inc., 2013, pp. 3084–3092.
* [12] J. Su, D. B. Thomas, and P. Y. K. Cheung, “Increasing network size and training throughput of fpga restricted boltzmann machines using dropout,” in _2016 IEEE 24th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)_, May 2016, pp. 48–51.
* [13] B. Wang and D. Klabjan, “Regularization for unsupervised deep neural nets,” in _Thirty-First AAAI Conference on Artificial Intelligence_, 2017.
* [14] J. M. Tomczak, “Learning informative features from restricted boltzmann machines,” _Neural Processing Letters_, vol. 44, no. 3, pp. 735–750, 2016.
* [15] G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” _Neural Computation_, vol. 14, no. 8, pp. 1771–1800, 2002.
* [16] M. Roder, G. H. de Rosa, and J. P. Papa, “Learnergy: Energy-based machine learners,” 2020.
* [17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” _Proceedings of the IEEE_, vol. 86, no. 11, pp. 2278–2324, 1998.
* [18] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms,” 2017.
* [19] T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha, “Deep learning for classical japanese literature,” 2018.
* [20] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” _Trans. Img. Proc._, vol. 13, no. 4, pp. 600–612, 2004. [Online]. Available: http://dx.doi.org/10.1109/TIP.2003.819861
* [21] F. Wilcoxon, “Individual comparisons by ranking methods,” _Biometrics Bulletin_, vol. 1, no. 6, pp. 80–83, 1945.
* [22] L. A. Passos and J. P. Papa, “Temperature-based deep boltzmann machines,” _Neural Processing Letters_, vol. 48, no. 1, pp. 95–107, 2018.
* [23] G. Li, L. Deng, Y. Xu, C. Wen, W. Wang, J. Pei, and L. Shi, “Temperature based restricted boltzmann machines,” _Scientific Reports_, vol. 6, p. 19133, 2016.

Mateus Roder holds a Bachelor’s degree in Manufacturing Engineering from São Paulo State University (UNESP), Itapeva-SP (2018), and is a former Scientific Initiation FAPESP scholarship holder, focusing on image processing and classification, machine learning algorithms, and meta-heuristic optimization. Currently, he is a Master’s student in Computer Science at São Paulo State University, FC/Bauru, and a FAPESP scholarship holder, focusing on restricted Boltzmann machines and deep learning. Email: <EMAIL_ADDRESS>

Gustavo Henrique de Rosa holds a Bachelor’s degree in Computer Science from São Paulo State University (UNESP), FC/Bauru (2016), and is a former Scientific Initiation FAPESP scholarship holder with an internship at Harvard University, focusing on image processing formulations, pattern recognition, pattern classification, machine learning algorithms, and meta-heuristic optimization.
He received his Master of Science in Computer Science from São Paulo State University, IBILCE/Rio Preto (2018), and is a former Master of Science FAPESP scholarship holder with an internship at the University of Virginia, focusing on deep learning and meta-heuristic optimization. Currently, he is a Ph.D. student in Computer Science at São Paulo State University, FC/Bauru, and a Ph.D. FAPESP scholarship holder, focusing on natural language processing and adversarial learning. Email: <EMAIL_ADDRESS>

Victor Hugo C. de Albuquerque [M’17, SM’19] is a professor and senior researcher at ARMTEC Tecnologia em Robótica, Brazil. He has a Ph.D. in Mechanical Engineering from the Federal University of Paraíba (UFPB, 2010), an M.Sc. in Teleinformatics Engineering from the Federal University of Ceará (UFC, 2007), and he graduated in Mechatronics Engineering at the Federal Center of Technological Education of Ceará (CEFETCE, 2006). He is a specialist mainly in IoT, machine/deep learning, pattern recognition, and robotics.

André L. D. Rossi received his B.Sc. degree in Computer Science from the Universidade Estadual de Londrina (UEL), Brazil, and his M.Sc. and Ph.D. degrees in Computer Science from the University of São Paulo (USP), Brazil. He is currently an Associate Professor at São Paulo State University (UNESP), Brazil. His main research interests are Machine Learning and Computer Vision. Email: <EMAIL_ADDRESS>

João P. Papa [SM’17] received his B.Sc. in Information Systems from São Paulo State University (UNESP), SP, Brazil. In 2005, he received his M.Sc. in Computer Science from the Federal University of São Carlos, SP, Brazil. In 2008, he received his Ph.D. in Computer Science from the University of Campinas, SP, Brazil. During 2008-2009, he worked as a post-doctoral researcher at the same institute, and during 2014-2015 he worked as a visiting scholar at Harvard University. He has been a Professor at the Computer Science Department, São Paulo State University, since 2009. He was also the recipient of the Alexander von Humboldt research fellowship in 2017. Email: <EMAIL_ADDRESS>
# Deep Parametric Continuous Convolutional Neural Networks

Shenlong Wang1,3,* Simon Suo2,3,* Wei-Chiu Ma3 Andrei Pokrovsky3 Raquel Urtasun1,3

1University of Toronto, 2University of Waterloo, 3Uber Advanced Technologies Group

{slwang, suo, weichiu, andrei<EMAIL_ADDRESS>

###### Abstract

Standard convolutional neural networks assume a grid-structured input is available and exploit discrete convolutions as their fundamental building blocks. This limits their applicability to many real-world applications. In this paper we propose Parametric Continuous Convolution, a new learnable operator that operates over non-grid-structured data. The key idea is to exploit parameterized kernel functions that span the full continuous vector space. This generalization allows us to learn over arbitrary data structures as long as their support relationship is computable. Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes, and lidar motion estimation of driving scenes.

## 1 Introduction

Discrete convolutions are the most fundamental building block of modern deep learning architectures. Their efficiency and effectiveness rely on the fact that the data appears naturally in a dense grid structure (e.g., a 2D grid for images, a 3D grid for videos). However, many real-world applications such as visual perception from 3D point clouds, mesh registration, and non-rigid shape correspondence rely on making statistical predictions from non-grid-structured data. Unfortunately, standard convolutional operators cannot be directly applied in these cases.

Multiple approaches have been proposed to handle non-grid-structured data. The simplest approach is to voxelize the space to form a grid where standard discrete convolutions can be performed [30, 24]. However, most of the volume is typically empty, and thus this results in both memory inefficiency and wasted computation. Geometric deep learning [3, 15] and graph neural network approaches [25, 16] exploit the graph structure of the data and model the relationship between nodes. Information is then propagated through the graph edges. However, they either have difficulties generalizing well or require strong feature representations as input to perform competitively. End-to-end learning is typically performed via back-propagation through time, but it is difficult to learn very deep networks due to the memory limitations of modern GPUs.

In contrast to the aforementioned approaches, in this paper we propose a new learnable operator, which we call parametric continuous convolution. The key idea is a parameterized kernel function that spans the full continuous vector space. In this way, it can handle arbitrary data structures as long as their support relationship is computable. This is a natural extension since objects in the real world, such as point clouds captured from 3D sensors, are distributed unevenly in a continuous domain. Based upon this we build a new family of deep neural networks that can be applied to generic non-grid-structured data. The proposed networks are both expressive and memory efficient. We demonstrate the effectiveness of our approach in both semantic labeling and motion estimation of point clouds. Most importantly, we show that very deep networks can be learned over raw point clouds in an end-to-end manner.
Our experiments show that the proposed approach outperforms the state-of-the-art by a large margin in both outdoor and indoor 3D point cloud segmentation tasks, as well as lidar motion estimation in driving scenes. Importantly, our outdoor semantic labeling and lidar flow experiments are conducted on a very large scale dataset, containing 223 billion points captured by a 3D sensor mounted on the roof of a self-driving car. To our knowledge, this is 2 orders of magnitude larger than any existing benchmark.

## 2 Related Work

#### Deep Learning for 3D Geometry:

Deep learning approaches that exploit 3D geometric data have recently become popular in the computer vision community. Early approaches convert the 3D data into a two-dimensional RGB + depth image [17, 10] and exploit conventional convolutional neural networks (CNNs). Unfortunately, this representation does not capture the true geometric relationships between 3D points (i.e., neighboring pixels could be potentially far away geometrically). Another popular approach is to conduct 3D convolutions over volumetric representations [30, 21, 24, 9, 18]. Voxelization is employed to convert point clouds into a 3D grid that encodes the geometric information. These approaches have been popular in medical imaging and indoor scene understanding, where the volume is relatively small. However, typical voxelization approaches sacrifice precision, and the 3D volumetric representation is not memory efficient. Sparse convolutions [9] and advanced data structures such as oct-trees [24] have been used to overcome these difficulties. Learning directly over point clouds has only been studied very recently. The pioneering work of PointNet [20] learns an MLP over individual points and aggregates global information using pooling. Its follow-up, PointNet++ [22], improves the ability to capture local structures through a multi-scale grouping strategy.

#### Graph Neural Networks:

Graph neural networks (GNNs) [25] are generalizations of neural networks to graph-structured data. Early approaches apply neural networks either over the hidden representation of each node or over the messages passed between adjacent nodes in the graph, and use back-propagation through time to conduct learning. Gated graph neural networks (GGNNs) [16] exploit gated recurrent units along with modern optimization techniques, resulting in improved performance. In [23], GGNNs are applied to point cloud segmentation, achieving significant improvements over the state-of-the-art. One of the major difficulties of graph neural networks is that propagation is conducted in a synchronous manner, and thus it is hard to scale up to graphs with millions of nodes. Inference in graphical models as well as recurrent neural networks can be seen as special cases of graph neural networks.

#### Graph Convolution Networks:

An alternative formulation is to learn convolution operations over graphs. These methods can be categorized into spectral and spatial approaches depending on which domain the convolutions are applied to. For spectral methods, convolutions are converted to multiplication by computing the graph Laplacian in the Fourier domain [4, 2, 31]. Parameterized spectral filters can be incorporated to reduce overfitting [4]. These methods are not feasible for large-scale data due to the expensive computation, since there is no FFT-like trick over generic graphs. Spatial approaches directly propagate information along the node neighborhoods in the graph.
This can be implemented either through low-order approximations of spectral filtering [6, 15, 7], or through diffusion in a support domain [19, 2, 27, 31, 26]. Our approach generalizes spatial approaches in two ways: first, we use more expressive convolutional kernel functions; second, the output of the convolution can be any point in the whole continuous domain.

Figure 1: Unlike grid convolution, parametric continuous convolution uses kernel functions that are defined for arbitrary points in the continuous support domain. As a result, it is possible to output features at points not seen in the input.

#### Other Approaches:

Edge-conditioned filter networks [27] use a weighting network [13] to communicate between adjacent nodes on the graph, conditioned on edge labels, which are primarily formulated as relative point locations. In contrast, our approach is not constrained to a fixed graph structure, and has the flexibility to output features at arbitrary points over the continuous domain. In a concurrent work, [26] uses a similar parametric function form $f(\mathbf{x}_{i}-\mathbf{x}_{j})$ to aggregate information between points. However, they only use shallow isotropic Gaussian kernels to represent the weights, while we use expressive deep networks to parameterize the continuous filters.

Figure 2: Detailed Computation Block for the Parametric Continuous Convolution Layer.

## 3 Deep Parametric Continuous CNNs

### 3.1 Parametric Continuous Convolutions

Standard CNNs use discrete convolutions (i.e., convolutions defined over a discrete domain) as basic operations:

$h[n]=(f\ast g)[n]=\sum_{m=-M}^{M}f[n-m]g[m]$

where $f:\mathcal{G}\rightarrow\mathbb{R}$ and $g:\mathcal{S}\rightarrow\mathbb{R}$ are functions defined over the support domains of the finite integer sets $\mathcal{G}=\mathcal{Z}^{D}$ and $\mathcal{S}=\{-M,-M+1,...,M-1,M\}^{D}$, respectively. In contrast, continuous convolutions can be defined as

$h(\mathbf{x})=(f\ast g)(\mathbf{x})=\int_{-\infty}^{\infty}f(\mathbf{y})g(\mathbf{x}-\mathbf{y})d\mathbf{y}$ (1)

where both the kernel $g:\mathcal{S}\rightarrow\mathbb{R}$ and the feature $f:\mathcal{G}\rightarrow\mathbb{R}$ are defined as continuous functions over the support domains $\mathcal{G}=\mathbb{R}^{D}$ and $\mathcal{S}=\mathbb{R}^{D}$, respectively.

Continuous convolutions require the integration in Eq. (1) to be analytically tractable. Unfortunately, this is not possible for real-world applications, where the input features are complicated and non-parametric, and the observations are sparse points sampled over the continuous domain. Motivated by Monte Carlo integration [5], we derive our continuous convolution operator. In particular, given continuous functions $f$ and $g$ with a finite number of input points $\mathbf{y}_{i}$ sampled from the domain, the convolution at an arbitrary point $\mathbf{x}$ can be approximated as:

$h(\mathbf{x})=\int_{-\infty}^{\infty}f(\mathbf{y})g(\mathbf{x}-\mathbf{y})d\mathbf{y}\approx\sum_{i}^{N}\frac{1}{N}f(\mathbf{y}_{i})g(\mathbf{x}-\mathbf{y}_{i})$

The next challenge we need to solve is constructing the continuous convolutional kernel function $g$. Conventional 2D and 3D discrete convolution kernels are parameterized in a way that each point in the support domain is assigned a value (i.e., the kernel weight). Such a parameterization is infeasible for continuous convolutions, since the kernel function $g$ is defined over an infinite number of points (i.e., has infinite support).
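The following minimal NumPy sketch instantiates the Monte Carlo approximation above, using a tiny learnable network for $g$ (the parameterization adopted in the next paragraph); all names, shapes, and the single-hidden-layer kernel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 3, 16                              # support-domain dim, hidden width
W1, W2 = rng.normal(size=(D, H)), rng.normal(size=(H, 1))

def kernel_g(z):
    """Tiny parametric kernel g(z; theta): a one-hidden-layer MLP."""
    return (np.tanh(z @ W1) @ W2)[:, 0]   # one kernel value per offset

def continuous_conv(x, ys, f):
    """Monte Carlo estimate h(x) = (1/N) sum_i f(y_i) g(x - y_i).

    x : (D,) query point, ys : (N, D) input points, f : (N,) features.
    """
    return (f * kernel_g(x[None, :] - ys)).mean()

ys = rng.normal(size=(100, D))            # sampled input locations
f = rng.normal(size=100)                  # scalar feature per input point
h = continuous_conv(np.zeros(D), ys, f)   # output at an arbitrary point x
```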
Instead, in this paper we propose to use parametric continuous functions to model $g$. We name our approach Parametric Continuous Convolutions. In particular, we use a multi-layer perceptron (MLP) as the approximator. By the universal approximation theorem [12], MLPs are expressive and capable of approximating continuous functions over $\mathbb{R}^{n}$. Thus we define:

$g(\mathbf{z};\theta)=MLP(\mathbf{z};\theta)$

The kernel function $g(\mathbf{z};\theta):\mathbb{R}^{D}\rightarrow\mathbb{R}$ spans the full continuous support domain while remaining parameterizable by a finite number of parameters. Note that other choices such as polynomials are possible; however, low-order polynomials are not expressive, whereas learning high-order polynomials can be numerically unstable for back-propagation.

Figure 3: Architecture of the Deep Parametric Continuous CNNs for the Semantic Labeling Task.

### 3.2 From Convolutions to Deep Networks

In this section, we first design a new convolution layer based on the parametric continuous convolutions derived in the previous subsection. We then propose a deep learning architecture using this new convolution layer.

#### Parametric Continuous Convolution Layer:

Note that, unlike standard discrete convolutions, which are conducted over the same point set, the input and output points of our parametric continuous convolution layer can be different. This is important for many practical applications, where we want to make dense predictions based on partial observations. Furthermore, this allows us to abstract information from redundant input points (i.e., pooling). As a consequence, the input of each convolution layer contains three parts: the input feature vectors $\mathcal{F}=\{\mathbf{f}_{\mathrm{in},j}\in\mathbb{R}^{F}\}$, the associated locations in the support domain $\mathcal{S}=\{\mathbf{y}_{j}\}$, as well as the output domain locations $\mathcal{O}=\{\mathbf{x}_{i}\}$. For each layer, we first evaluate the kernel function $g_{d,k}(\mathbf{x}_{i}-\mathbf{y}_{j};\theta)$ for all $\mathbf{y}_{j}\in\mathcal{S}$ and all $\mathbf{x}_{i}\in\mathcal{O}$, given the parameters $\theta$. Each element of the output feature vector is then computed as:

${h}_{k,i}=\sum_{d}^{F}\sum_{j}^{N}g_{d,k}(\mathbf{x}_{i}-\mathbf{y}_{j})f_{d,j}$

Let $N$ be the number of input points, $M$ be the number of output points, and $D$ the dimensionality of the support domain. Let $F$ and $O$ be the predefined input and output feature dimensions, respectively. Note that these are hyperparameters of the continuous convolution layer, analogous to the input and output feature dimensions in standard grid convolution layers. Fig. 1 depicts our parametric continuous convolutions in comparison with conventional grid convolution. Two major differences are highlighted: 1) the kernel function is continuous given the relative location in the support domain; 2) the input/output points can be any points in the continuous domain, and the two point sets can differ.

Figure 4: Semantic Segmentation Results on the Stanford Indoor3D Dataset (panels: Input, Ground Truth, Ours PCCN).

#### Deep Parametric Continuous CNNs:

Using the parametric continuous convolution layers as building blocks, we can construct a new family of deep networks which operate on unstructured data defined over a topological group under addition.
#### Deep Parametric Continuous CNNs:

Using the parametric continuous convolution layers as building blocks, we can construct a new family of deep networks that operate on unstructured data defined over a topological group under addition. In the following discussion, we focus on multi-dimensional Euclidean space, which is a special case. The network takes the input features and their associated positions in the support domain as input. The hidden representations are then generated by successive parametric continuous convolution layers. Following standard CNN architectures, we can add batch normalization, non-linearities, and residual connections between layers. Pooling can also be employed over the support domain to aggregate information. In practice, we find that adding residual connections between parametric continuous convolution layers is critical for convergence. Please refer to Fig. 2 for an example of the computation graph of a single layer, and to Fig. 3 for an example of the network architecture employed for our indoor semantic segmentation task.

#### Learning:

All of our building blocks are differentiable, thus our networks can be learned through back-propagation:

$\frac{\partial h}{\partial\theta}=\frac{\partial h}{\partial g}\cdot\frac{\partial g}{\partial\theta}=\sum_{d=1}^{F}\sum_{j=1}^{N}f_{d,j}\cdot\frac{\partial g}{\partial\theta}$

Figure 5: Semantic Segmentation Results on Driving Scene Dataset (panel columns: Ground Truth, 3D-FCN, Ours PCCN, Ours 3D-FCN+PCCN); colored: correct prediction; white: wrong prediction.

### 3.3 Discussions

#### Locality Enforcing Continuous Convolution:

Standard grid convolutions are computed over a limited kernel size $M$ to preserve locality. Similarly, locality can be enforced in our parametric continuous convolutions by constraining the influence of the function $g$ to points close to $\mathbf{x}$, i.e.,

$g(\mathbf{z})=MLP(\mathbf{z})w(\mathbf{z})$

where $w(\cdot)$ is a modulating window function. This can be achieved in different ways. First, we can constrain the cardinality of the local support domain and keep non-zero kernel values only for the K nearest neighbors: $w(\mathbf{z})=\mathbf{1}_{\mathbf{z}\in\mathrm{KNN}(\mathcal{S},\mathbf{x})}$. Alternatively, we can keep non-zero kernel values only for points within a fixed radius $r$: $w(\mathbf{z})=\mathbf{1}_{||\mathbf{z}||_{2}<r}$.

#### Efficient Continuous Convolution:

For each continuous convolution layer, the kernel function is evaluated $N\times|\mathcal{S}|\times F\times O$ times, where $|\mathcal{S}|$ is the cardinality of the support domain, and the intermediate weight tensor is stored for backpropagation. This is expensive in practice, especially when both the number of points and the feature dimensions are large. With the locality-enforcing formulation, we can constrain the cardinality of $\mathcal{S}$. Furthermore, motivated by the idea of separable filters, we use the fact that this computation can be factorized if the kernel function value is shared across the input feature dimensions. That is, we can decompose the weight tensor $W\in\mathbb{R}^{N\times|\mathcal{S}|\times F\times O}$ into two tensors $W_{1}\in\mathbb{R}^{F\times O}$ and $W_{2}\in\mathbb{R}^{N\times|\mathcal{S}|\times O}$, where $W_{1}$ is a linear weight matrix and $W_{2}$ is evaluated through the MLP. With this optimization, only $N\times|\mathcal{S}|\times O$ kernel evaluations need to be computed and stored. Lastly, at inference time, merging the batch-normalization and fully-connected operations inside the MLP yields a 3x speedup. A sketch of the locality window and this factorization follows.
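The hedged PyTorch sketch below combines the two ideas above: a K-nearest-neighbor window restricts the support, and the separable factorization lets the MLP emit only $O$ kernel values per neighbor (playing the role of $W_{2}$), while a shared linear map plays the role of $W_{1}$. The brute-force KNN via `torch.cdist` is our simplification of the KD-tree lookup used in the paper.

```python
import torch
import torch.nn as nn

def knn_window(support_xyz, output_xyz, k):
    """Indicator window w(z): keep only the k nearest inputs per output point."""
    d = torch.cdist(output_xyz, support_xyz)       # (M, N) pairwise distances
    return d.topk(k, largest=False).indices        # (M, K) neighbor indices

class FactorizedContinuousConv(nn.Module):
    """Separable variant: the MLP predicts one scalar per (neighbor, output
    channel) (W2); a shared F x O linear layer (W1) mixes input channels."""

    def __init__(self, dim_d, f_in, f_out, k=16, hidden=32):
        super().__init__()
        self.k = k
        self.w1 = nn.Linear(f_in, f_out, bias=False)              # W1 in R^{F x O}
        self.mlp = nn.Sequential(nn.Linear(dim_d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, f_out))        # W2 values

    def forward(self, feats, support_xyz, output_xyz):
        idx = knn_window(support_xyz, output_xyz, self.k)         # (M, K)
        rel = support_xyz[idx] - output_xyz[:, None, :]           # (M, K, D)
        w2 = self.mlp(rel)                                        # (M, K, O)
        mixed = self.w1(feats)[idx]                               # (M, K, O)
        return (w2 * mixed).sum(dim=1)                            # (M, O)

layer = FactorizedContinuousConv(dim_d=3, f_in=8, f_out=16, k=4)
out = layer(torch.randn(100, 8), torch.randn(100, 3), torch.randn(40, 3))
```

Only $M\times K\times O$ kernel evaluations are performed here, matching the stated reduction once locality bounds the support cardinality.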
#### Special Cases:

Many previous convolutional layers are special cases of our approach. For instance, if the points are sampled over a finite 2D grid, we recover conventional 2D convolutions. If the support domain is defined as the concatenation of the spatial and feature vectors with a Gaussian kernel $g(\cdot)$, we recover the bilateral filter. If the support domain is defined as the neighboring vertices of a node, we recover the first-order spatial graph convolution [15].

Figure 6: Semantic Labeling on KITTI Dataset without Retraining.

## 4 Experimental Evaluation

We demonstrate the effectiveness of our approach on the tasks of semantic labeling and motion estimation of 3D point clouds, and show state-of-the-art performance. We conduct point-wise semantic labeling experiments over two datasets: a very large-scale outdoor lidar semantic segmentation dataset that we collected and labeled in house, and a large indoor semantic labeling dataset. To our knowledge, these are the largest real-world outdoor and indoor datasets available for this task. The datasets are fully labeled and contain 137 billion and 629 million points, respectively. The lidar flow experiment is also conducted on this dataset, with a ground-truth 3D motion label for each point.

| Method | mIOU | mAcc | ceiling | floor | wall | beam | column | window | door | chair | table | bookcase | sofa | board | clutter |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PointNet [20] | 41.09 | 48.98 | 88.80 | 97.33 | 69.80 | 0.05 | 3.92 | 46.26 | 10.76 | 52.61 | 58.93 | 40.28 | 5.85 | 26.38 | 33.22 |
| 3D-FCN-TI [29] | 47.46 | 54.91 | 90.17 | 96.48 | 70.16 | 0.00 | 11.40 | 33.36 | 21.12 | 76.12 | 70.07 | 57.89 | 37.46 | 11.16 | 41.61 |
| SEGCloud [29] | 48.92 | 57.35 | 90.06 | 96.05 | 69.86 | 0.00 | 18.37 | 38.35 | 23.12 | 75.89 | 70.40 | 58.42 | 40.88 | 12.96 | 41.60 |
| Ours PCCN | 58.27 | 67.01 | 92.26 | 96.20 | 75.89 | 0.27 | 5.98 | 69.49 | 63.45 | 66.87 | 65.63 | 47.28 | 68.91 | 59.10 | 46.22 |

Table 1: Semantic Segmentation Results on Stanford Large-Scale 3D Indoor Scene Dataset

### 4.1 Semantic Segmentation of Indoor Scenes

#### Dataset:

We use the Stanford large-scale 3D indoor scene dataset [1] and follow the training and testing procedure used in [29]. We report the same metrics, i.e., mean IOU, mean class accuracy (TP / (TP + FN)), and class-wise IOU. The input is six-dimensional, composed of the xyz coordinates and RGB color intensities. Each point is labeled with one of the 13 classes shown in Tab. 1.

#### Competing Algorithms:

We compare our approach to PointNet [20] and SegCloud [29]. We evaluate the proposed end-to-end continuous convnet with eight continuous convolution layers (Ours PCCN). The kernels are defined over the continuous support domain of 3D Euclidean space. Each intermediate layer except the last has 32-dimensional hidden features followed by batch normalization and a ReLU non-linearity; the dimension of the last layer is 128. We observe that the distribution of semantic labels within a room is highly correlated with the room type (e.g., office, hallway, conference room, etc.). Motivated by this, we apply max pooling over all the points in the last layer to obtain a global feature, which is then concatenated to the output feature of each point in the last layer, resulting in a 256-dimensional feature (see the sketch below). A fully connected layer with softmax activation produces the final logits. Our network is trained end-to-end with a cross-entropy loss, using the Adam optimizer.
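A short sketch of the room-level head described above, assuming the per-point 128-dimensional features from the last continuous convolution layer are already computed; module names and usage are illustrative.

```python
import torch
import torch.nn as nn

fc = nn.Linear(256, 13)   # 13 Stanford classes; softmax is folded into the loss

def segmentation_head(point_feats):
    """point_feats: (N, 128) last-layer features for all points of one room."""
    global_feat = point_feats.max(dim=0).values            # room-level descriptor, (128,)
    fused = torch.cat([point_feats,
                       global_feat.expand(point_feats.shape[0], -1)],
                      dim=1)                               # (N, 256) per-point + global
    return fc(fused)                                       # per-point class logits

logits = segmentation_head(torch.randn(4096, 128))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 13, (4096,)))
```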
#### Results:

As shown in Tab. 1, our approach outperforms the state of the art by 9.3% mIOU and 9.6% mAcc. Fig. 4 shows qualitative results. Despite the diversity of geometric structures, our approach works very well. Confusion mainly occurs between columns and walls, and between windows and bookcases. It is also worth noting that our approach captures visual information encoded in the RGB channels. The last row shows two failure cases. In the first, a door in the washroom is labeled as clutter in the ground truth, whereas our algorithm predicts door. In the second, the board on the right has a window-like texture, which leads the algorithm to predict the wrong label.

Figure 7: Flow field (left) and overlay of target and warped source (right): purple shows the target frame, yellow shows the source frame warped to the target frame using the ground-truth flow.

### 4.2 Semantic Segmentation of Driving Scenes

#### Dataset:

We first conduct experiments on the task of point cloud segmentation in the context of autonomous driving. Each point cloud is produced by a full sweep of a roof-mounted Velodyne-64 lidar sensor driving in several cities in North America. The dataset is composed of snippets, each having 300 consecutive frames. The training and validation set contains 11,337 snippets in total, while the test set contains 1,644 snippets. We report metrics on a subset of the test set, generated by sampling 10 frames from each snippet to avoid bias due to scenes where the ego-car is static (e.g., waiting at a traffic light). Each point is labeled with one of the seven classes defined in Tab. 2. We adopt mean intersection-over-union (meanIOU) and point-wise accuracy (pointAcc) as our evaluation metrics.

| Method | pACC | mIOU | vehicle | bicyclist | pedestrian | motorcycle | animal | background | road | params size |
|---|---|---|---|---|---|---|---|---|---|---|
| PointNet [20] | 91.96 | 38.05 | 76.73 | 2.85 | 6.62 | 8.02 | 0.0 | 89.83 | 91.96 | 20.34MB |
| 3D-FCN [11] | 94.31 | 49.28 | 86.74 | 22.30 | 38.26 | 17.22 | 0.98 | 86.91 | 92.56 | 74.66MB |
| Ours PCCN | 94.56 | 46.35 | 86.62 | 8.31 | 41.84 | 7.24 | 0.00 | 87.27 | 93.20 | 9.34MB |
| Ours 3D-FCN+PCCN | 95.45 | 58.06 | 91.83 | 40.23 | 47.74 | 42.91 | 1.25 | 89.27 | 93.18 | 74.67MB |

Table 2: Semantic Segmentation Results on Driving Scenes Dataset

#### Baselines:

We compare our approach to the point cloud segmentation network (PointNet) [20] and a 3D fully convolutional network (3D-FCN) operating on a 3D occupancy grid. We use a resolution of 0.2 m per voxel over a 160 m x 80 m x 6.4 m range, which results in an occupancy grid encoded as a tensor of size 800 x 400 x 32. We define a voxel to be occupied if it contains at least one point (a sketch of this encoding is given below). We use ResNet-50 as the backbone and replace the last average pooling and fully connected layer with two fully convolutional layers and a trilinear upsampling layer to obtain dense voxel predictions. The model is trained from scratch with the Adam optimizer [14] to minimize a class-reweighted cross-entropy loss. Finally, the voxel-wise predictions are mapped back to the original points, and the metrics are computed over points. We adapted the open-sourced PointNet model to our dataset and trained it from scratch. The architecture and loss function remain the same as in the original paper, except that we removed the point rotation layer, since it negatively impacted validation performance on this dataset.
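A hedged numpy sketch of the baseline's occupancy-grid encoding: 0.2 m voxels over a 160 m x 80 m x 6.4 m range give an 800 x 400 x 32 tensor. The placement of the range around the ego-car is our assumption; the paper states only the overall extent.

```python
import numpy as np

def occupancy_grid(points, res=0.2,
                   x_range=(-80.0, 80.0), y_range=(-40.0, 40.0), z_range=(-3.2, 3.2)):
    """points: (N, 3) lidar points in the ego frame.
    A voxel is marked occupied if it contains at least one point."""
    grid = np.zeros((800, 400, 32), dtype=np.float32)     # 160/0.2, 80/0.2, 6.4/0.2
    ix = ((points[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int)
    iz = ((points[:, 2] - z_range[0]) / res).astype(int)
    keep = ((0 <= ix) & (ix < 800) & (0 <= iy) & (iy < 400) & (0 <= iz) & (iz < 32))
    grid[ix[keep], iy[keep], iz[keep]] = 1.0
    return grid

grid = occupancy_grid(np.random.uniform(-40, 40, size=(100_000, 3)))
print(grid.shape, int(grid.sum()))
```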
Figure 8: Lidar Flow Results on Driving Scene Dataset (panel columns: Ground Truth, Ours 3D-FCN+PCCN).

#### Our Approaches:

We evaluate two versions of our approach. The first conducts continuous convolutions directly over the raw xyz-intensity lidar points (Ours PCCN). The second (Ours 3D-FCN+PCCN) performs continuous convolutions over the features extracted by the 3D-FCN. Ours PCCN has 16 continuous convolution layers with residual connections, batch normalization, and ReLU non-linearities. We use the spatial support in $\mathbb{R}^{3}$ to define our kernel. We train the network with a point-wise cross-entropy loss and the Adam [14] optimizer. In contrast, the Ours 3D-FCN+PCCN model places 7 residual continuous convolution layers on top of the trained 3D-FCN model and performs end-to-end fine-tuning using the Adam optimizer.

#### Results:

As shown in Tab. 2, by exploiting sophisticated features via 3D convolutions, 3D-FCN+PCCN achieves the best performance. Fig. 5 shows a qualitative comparison between models. As shown in the figure, all models produce good results; performance differences often arise in ambiguous regions. In particular, we can see that the 3D-FCN model over-segments the scene: it mislabels a background pole as vehicle (red above the ego-car), nearby spurious points as bicyclist (green above the ego-car), and a wall as pedestrian (purple near the left edge). This is reflected in the confidence map (as bright regions). We observe a significant improvement with our 3D-FCN+PCCN model, which corrects all of the above with high confidence. For more results and videos, please refer to the supplementary material.

#### Model Sizes:

We also compare the model sizes of the competing algorithms in Tab. 2. Compared to the 3D-FCN approach, the end-to-end continuous convolution network is eight times smaller, while achieving comparable results. Moreover, 3D-FCN+PCCN is just 0.01 MB larger than 3D-FCN, yet improves performance by a large margin in terms of mean IOU.

#### Complexity and Runtime:

We benchmark the proposed model's runtime on a GTX 1080 Ti GPU and a Xeon E5-2687W CPU with 32 GB of memory. The forward pass of an 8-layer PCCN model (32 feature dimensions per layer, with 50 neighbours) takes 33 ms; the KD-tree neighbour search takes 28 ms; the end-to-end computation takes 61 ms. Each layer requires 1.32 GFLOPs.

#### Generalization:

To demonstrate the generalization ability of our approach, we evaluate our model, trained only on North American scenes, on the KITTI dataset [8], which was captured in Europe. As shown in Fig. 6, the model achieves good results, with well-segmented dynamic objects such as vehicles and pedestrians.

### 4.3 Lidar Flow

#### Dataset:

We also validate our method on the task of lidar-based motion estimation, referred to as lidar flow. In this task, the input is two consecutive lidar sweeps, and the goal is to estimate the 3D motion field for each point in the first frame, undoing both the ego-motion and the motion of dynamic objects. The ground-truth ego-motion is computed by a comprehensive filter that takes GPS, IMU, and ICP-based lidar alignment against pre-scanned 3D geometry of the scene as input. The ground-truth 6-DOF motion of dynamic objects is estimated from temporally coherent 3D object tracklets labeled by in-house annotators. Combining both, we obtain the ground-truth motion field (a sketch of the static-point case follows; see Appendix B for details).
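For the static part of the scene, the ground-truth flow reduces to undoing the ego-pose change, as formalized in Appendix B. A minimal numpy sketch follows; the yaw-only ego-rotation and translation values are illustrative.

```python
import numpy as np

def static_flow(x0, R_ego, t_ego):
    """Ground-truth flow for static points (Appendix B):
    f^(0) = R_ego^T (x^(0) - t_ego) - x^(0), with x0 of shape (N, 3).
    Row-vector convention: applying R^T to rows is `v @ R`."""
    return (x0 - t_ego) @ R_ego - x0

theta = np.deg2rad(2.0)                  # small ego yaw between frames (illustrative)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
f = static_flow(np.random.randn(5, 3), R, np.array([1.5, 0.0, 0.0]))
print(f)                                 # per-point motion that undoes the ego-motion
```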
Fig. 7 shows the color-mapped flow field and the overlay between two frames after undoing the per-point motion. This task is crucial for many applications, such as multi-rigid transform alignment, object tracking, global pose estimation, etc. The training and validation set contains 11,337 snippets, while the test set contains 1,644 snippets. We use 110k frame pairs for training and validation, and 16,440 frame pairs for testing. End-point error and the outlier percentages at 10 cm and 20 cm are used as metrics.

#### Competing Algorithms:

We compare against the 3D-FCN baseline using the same architecture and volumetric representation as in Sec. 4.2. We also adopt a similar 3D-FCN+PCCN architecture, with 7 residual continuous convolution layers added as a polishing network. In this task, we remove the ReLU non-linearity and supervise the PCCN layers with an MSE loss at every layer. The training objective is the mean squared error between the ground-truth flow vector and the prediction.

#### Results:

Tab. 3 reports the quantitative results. As shown in the table, our 3D-FCN+PCCN model outperforms 3D-FCN by 0.351 cm in end-point error and reduces the outliers by approximately $20\%$. Fig. 8 shows sample flow predictions compared with the ground-truth labels. As shown in the figure, our algorithm is able to capture both the global motion of the ego-car, including self-rotation, and the motion of each dynamic object in the scene. For more results, please refer to our supplementary material.

| Method | EPE (cm) | Outlier $\%_{10}$ | Outlier $\%_{20}$ |
|---|---|---|---|
| 3D-FCN | 8.161 | 25.92% | 7.12% |
| Ours 3D-FCN+PCCN | 7.810 | 19.84% | 5.97% |

Table 3: Lidar Flow Results on Driving Scenes Dataset

## 5 Conclusions

We have presented a new learnable convolution layer that operates over non-grid structured data. Our convolution kernel function is parameterized by multi-layer perceptrons and spans the full continuous domain. This allows us to design a new deep learning architecture that can be applied to arbitrarily structured data, as long as the support relationships between elements are computable. We validate the performance on point cloud segmentation and motion estimation tasks, over very large-scale datasets with up to 200 billion points. The proposed network achieves state-of-the-art performance on all tasks and datasets.

## References

* [1] I. Armeni, O. Sener, A. R. Zamir, H. Jiang, I. Brilakis, M. Fischer, and S. Savarese. 3d semantic parsing of large-scale indoor spaces. In CVPR, 2016.
* [2] D. Boscaini, J. Masci, E. Rodolà, and M. Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In NIPS, 2016.
* [3] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 2017.
* [4] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. ICLR, 2014.
* [5] R. E. Caflisch. Monte carlo and quasi-monte carlo methods. Acta numerica, 1998.
* [6] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, 2016.
* [7] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In NIPS, 2015.
* [8] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, 2012.
* [9] B. Graham and L. van der Maaten. Submanifold sparse convolutional networks. arXiv, 2017.
* [10] S. Gupta, R. Girshick, P. Arbeláez, and J. Malik. Learning rich features from rgb-d images for object detection and segmentation. In ECCV, 2014.
* [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
* [12] K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 1991.
* [13] X. Jia, B. De Brabandere, T. Tuytelaars, and L. V. Gool. Dynamic filter networks. In NIPS, 2016.
* [14] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
* [15] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv, 2016.
* [16] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. arXiv, 2015.
* [17] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
* [18] D. Maturana and S. Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In IROS, 2015.
* [19] F. Monti, D. Boscaini, J. Masci, E. Rodolà, J. Svoboda, and M. M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. CVPR, 2017.
* [20] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. 2016.
* [21] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas. Volumetric and multi-view cnns for object classification on 3d data. In CVPR, 2016.
* [22] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. 2017.
* [23] X. Qi, R. Liao, J. Jia, S. Fidler, and R. Urtasun. 3d graph neural networks for rgbd semantic segmentation. In CVPR, 2017.
* [24] G. Riegler, A. O. Ulusoys, and A. Geiger. Octnet: Learning deep 3d representations at high resolutions. 2017.
* [25] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. TNN, 2009.
* [26] K. T. Schütt, P. Kindermans, H. Sauceda, S. Chmiela, A. Tkatchenko, and K. Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. arXiv, 2017.
* [27] M. Simonovsky and N. Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. CVPR, 2017.
* [28] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In ICCV, 2015.
* [29] L. P. Tchapmi, C. B. Choy, I. Armeni, J. Gwak, and S. Savarese. Segcloud: Semantic segmentation of 3d point clouds. arXiv preprint arXiv:1710.07563, 2017.
* [30] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In CVPR, 2015.
* [31] L. Yi, H. Su, X. Guo, and L. Guibas. Syncspeccnn: Synchronized spectral cnn for 3d shape segmentation. CVPR, 2017.

## Appendix

In this supplementary material, we first validate our method's generalization ability by testing the proposed models on the KITTI dataset without re-training. We then provide more details about the lidar flow task. We also show additional results for all tasks, with an analysis of failure modes. Lastly, we conduct a point cloud classification experiment using the deep parametric continuous convolutional networks. In the supplementary video, we show our semantic segmentation and lidar flow results over sequential data.
We also show the generalization ability of the proposed method by training on our dataset and testing it on KITTI as well as on a truck sequence.

## Appendix A Generalization

We show the generalization ability of our proposed model by training on one dataset and testing on another in our supplementary video. Specifically, we use the following configurations:

* Train our semantic labeling network on the driving scene data (several North American cities), and test it on KITTI (Europe).
* Train our semantic labeling network on the driving scene data (non-highway roads), and test it on a lidar sequence mounted on top of a truck driving on the highway.
* Train our lidar flow network on the driving scene data (several North American cities), and test it on KITTI (Europe).

Figure 9: Lidar Flow Results on KITTI Dataset.

Figure 10: Semantic Segmentation on KITTI Dataset.

Under all settings, our algorithm generalizes well. Fig. 9 shows our lidar flow model's performance on KITTI; the model generalizes well to this unseen dataset. From left to right, the figures show the most common scenarios, with a moving, turning, and stationary ego-car. The model produces convincing flow predictions in all three cases. Fig. 10 includes additional segmentation results on KITTI. For more results over several sequences, please refer to our supplementary video.

## Appendix B Lidar Flow Data and Analysis

#### Ground-truth Generation:

Here we describe in detail how we generate the ground-truth lidar flow data. For each pair of consecutive frames, we first obtain the global vehicle pose transform from frame 0 to frame 1, $\mathbf{R}_{\mathrm{ego}},\mathbf{t}_{\mathrm{ego}}$, with the help of additional sensors and prior information. This global vehicle pose transform represents how far the vehicle moves and how it turns. The localization accuracy is at centimeter scale. Therefore, the motion of each static point is:

$\mathbf{f}_{\mathrm{static\text{-}gt}}^{(0)}=\mathbf{R}_{\mathrm{ego}}^{T}(\mathbf{x}^{(0)}_{\mathrm{static}}-\mathbf{t}_{\mathrm{ego}})-\mathbf{x}^{(0)}_{\mathrm{static}}$

where $\mathbf{f}_{\mathrm{gt}}^{(k)}$ is the ground-truth flow at frame $k$ in the ego-car-centered coordinate frame, and $\mathbf{x}^{(k)}$ is the point's location at frame $k$ in the same frame. For dynamic objects in the scene, e.g., other vehicles and pedestrians, the motion between lidar frames in the vehicle coordinate frame is not due to the self-driving car's ego-motion alone: the movement of the dynamic objects themselves also contributes. In this project, we assume rigid motion for all objects. Labeling the dynamic objects involves two steps. First, using the global pose, we visualize the point clouds of 3D objects from the two frames in the same reference coordinate frame and label the pose change $\mathbf{R}_{\mathrm{obj}},\mathbf{t}_{\mathrm{obj}}$ between the object instances at the two times. Second, both the ego-motion and the object motion are combined to generate the ground-truth flow vector:

$\mathbf{f}_{\mathrm{dynamic\text{-}gt}}^{(0)}=\mathbf{R}_{\mathrm{obj}}^{T}(\mathbf{R}_{\mathrm{ego}}^{T}(\mathbf{x}^{(0)}_{\mathrm{dynamic}}-\mathbf{t}_{\mathrm{ego}})-\mathbf{t}_{\mathrm{obj}})-\mathbf{x}^{(0)}_{\mathrm{dynamic}}$

A sketch implementing this composition follows. Please refer to Fig. 11 for an illustration.

Figure 11: Flow data generation. The source of motion comes from two components: the motion of the ego-car and the motion of the dynamic objects.
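Extending the static case sketched earlier, the composition above can be scripted directly; the sketch below follows the same row-vector convention, in which applying $R^{T}$ to rows is `v @ R`. The pose values are illustrative.

```python
import numpy as np

def dynamic_flow(x0, R_ego, t_ego, R_obj, t_obj):
    """f^(0) = R_obj^T ( R_ego^T (x^(0) - t_ego) - t_obj ) - x^(0)."""
    ego_comp = (x0 - t_ego) @ R_ego          # R_ego^T (x - t_ego): undo ego-motion
    return (ego_comp - t_obj) @ R_obj - x0   # compose with the labeled object pose

x0 = np.random.randn(10, 3)                  # points on one dynamic object (frame 0)
I = np.eye(3)
f = dynamic_flow(x0, I, np.zeros(3), I, np.array([0.8, 0.1, 0.0]))
print(f[0])                                  # per-point ground-truth motion vector
```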
#### Ground-truth Motion Analysis:

We also analyze the ground-truth motion distribution. In Fig. 12 we show the 2D histograms of the ground-truth 3D translation component along the $x$ and $y$ axes, respectively. We also show the motion distribution across different object types, e.g., static background, vehicle, and pedestrian. As we can see, different semantic types have different motion patterns, and the density is heaviest along the y-axis, which suggests that forward motion is the dominant motion pattern of our ego-car.

Figure 12: Ground-truth Motion Distribution of the Lidar Flow Dataset (unit in meters).

#### Ground-truth Validation and Visualization:

We validate the quality of our ground-truth motion labels by overlaying the target frame points ($\mathbf{x}^{(1)}$) and the source frame points warped with the ground-truth motion ($\mathbf{x}^{(0)}+\mathbf{f}_{\mathrm{gt}}^{(0)}$). Fig. 13 shows overlays of entire scenes, and Fig. 14 shows overlays of individual dynamic objects. For vehicles, the points align perfectly across frames. For pedestrians, the correspondence is also near perfect: the only discrepancy is caused by non-rigid motion (e.g., changing posture).

Figure 13: Flow Ground Truth Overlay of Entire Scene. Yellow: target frame, purple: warped source frame.

Figure 14: Flow Ground Truth Overlay of Individual Dynamic Objects. Green: target frame, red: warped source frame.

## Appendix C More Results

In this section, we show additional qualitative results of the proposed algorithm over all tasks.

### C.1 Semantic Segmentation for Indoor Scenes

Fig. 15 and Fig. 16 show more qualitative results on the Stanford dataset. As shown in the figures, in most cases our model predicts the semantic labels correctly.

Figure 15: Semantic Segmentation Results on Stanford Indoor Dataset (panel columns: Input, GT, Ours).

Figure 16: Semantic Segmentation Results on Stanford Indoor Dataset (panel columns: Input, GT, Ours).

### C.2 Semantic Segmentation for Driving Scenes

Fig. 17 shows additional results for semantic labeling in driving scenes. As shown, the results capture very small dynamic objects, e.g., pedestrians and bicyclists. This suggests our model's potential for object detection and tracking. The model is also able to distinguish between road and non-road through the lidar intensity and subtle geometric structure such as road curbs, which validates our model's potential for map automation. More specifically, we see that most errors occur on road boundaries (bright curves in the error map).

Figure 17: Semantic Segmentation Results on Driving Scenes Dataset (panel columns: Ground Truth, Ours, Error Map).

### C.3 Lidar Flow

We show additional results on lidar flow estimation in Fig. 18. Unlike the visualization in the main submission, we visualize the colored vectors in order to better depict the magnitudes of the motion vectors. As shown in the figure, our model captures the majority of the flow field. Most of the error occurs at object boundaries, which suggests that a better support domain, including both spatial and intensity features, could potentially be used to boost performance.
Figure 18: Lidar Flow Results on Driving Scenes Dataset (panel columns: Ground Truth, Ours, Error Map).

## Appendix D Activations

We visualize the activation maps of the trained PCCN segmentation network over a single lidar frame from the driving scene dataset. Fig. 19 depicts the activation map at layer 1 of PCCN. As we can see, the early convolution layers mainly capture low-level geometric details, e.g., the z coordinate, intensity peaks, etc. Fig. 20 shows the activation map at layer 8 of PCCN: here the convolution layers begin to capture information with more semantic meaning, e.g., road curbs and dynamic objects.

Figure 19: Activation Map of PCCN at Layer 1.

Figure 20: Activation Map of PCCN at Layer 8.

## Appendix E Point Cloud Classification

To verify the applicability of the proposed parametric continuous convolution to global prediction tasks, we conduct a simple point cloud classification experiment on the ModelNet40 benchmark. This dataset contains CAD models from 40 categories. We compare against the state-of-the-art and most representative algorithms evaluated on ModelNet40 [28]. We randomly sample 2,048 points from the 3D meshes for each training and testing sample and feed the point cloud into our neural network. The architecture contains 6 continuous convolution layers with 32-dimensional hidden features, followed by two layers with 128 and 512 dimensions, respectively. The output of the last continuous convolution layer is fed into a max pooling layer to generate a global 512-dimensional feature, followed by two fully connected layers that output the final logits (a sketch of this head follows Tab. 4). Tab. 4 reports the classification performance. As the table shows, the performance is comparable with PointNet and slightly below PointNet++. Here we use a naive global max pooling to aggregate global information; we expect better results with more comprehensive, hierarchical pooling strategies.

Table 4: ModelNet40 Point Cloud Classification

| Method | Input | Accuracy |
|---|---|---|
| MVCNN [28] | Multi-view Image | 90.1% |
| 3DShapeNet [30] | Volume | 84.7% |
| VoxNet [18] | Volume | 85.9% |
| Subvolume [21] | Volume | 89.2% |
| ECC [27] | Point | 87.4% |
| PointNet vanilla [20] | Point | 87.2% |
| PointNet [20] | Point | 89.2% |
| PointNet++ [22] | Point | 91.9% |
| Ours | Point | 88.9% |
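A hedged PyTorch sketch of this classification head, assuming the 512-dimensional per-point features from the last continuous convolution layer are given; the hidden width of the first fully connected layer is our choice, as the paper does not state it.

```python
import torch
import torch.nn as nn

class PCCNClassifier(nn.Module):
    """Appendix E head: per-point features -> global max pooling ->
    two fully connected layers -> 40-way logits. The continuous-convolution
    feature extractor is abstracted away here."""

    def __init__(self, feat_dim=512, num_classes=40):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),   # hidden width 256 is illustrative
            nn.Linear(256, num_classes),
        )

    def forward(self, point_feats):                # (N, 512) last-layer features
        global_feat = point_feats.max(dim=0).values  # order-invariant pooling
        return self.fc(global_feat)                  # (40,) class logits

logits = PCCNClassifier()(torch.randn(2048, 512))  # 2,048 sampled points per shape
```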
# Intestinal Parasites Classification Using Deep Belief Networks

Mateus Roder (School of Sciences, São Paulo State University, Bauru, Brazil; ORCID 0000-0002-3112-5290), Leandro A. Passos (School of Sciences, São Paulo State University, Bauru, Brazil; ORCID 0000-0003-3529-3109), Luiz Carlos Felix Ribeiro (School of Sciences, São Paulo State University, Bauru, Brazil; ORCID 0000-0003-1265-0273), Barbara Caroline Benato (Institute of Computing, University of Campinas, Campinas, Brazil; ORCID 0000-0003-0806-3607), Alexandre Xavier Falcão (Institute of Computing, University of Campinas, Campinas, Brazil; ORCID 0000-0002-2914-5380), João Paulo Papa (School of Sciences, São Paulo State University, Bauru, Brazil; ORCID 0000-0002-6494-7514)

{mateus.roder, leandro.passos<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

Currently, approximately $4$ billion people are infected by intestinal parasites worldwide. Diseases caused by such infections constitute a public health problem in most tropical countries, leading to physical and mental disorders and even death among children and immunodeficient individuals. Although subject to high error rates, human visual inspection is still in charge of the vast majority of clinical diagnoses. In the past years, some works have addressed intelligent computer-aided intestinal parasite classification, but they usually suffer from misclassification due to similarities between parasites and fecal impurities. In this paper, we introduce Deep Belief Networks to the context of automatic intestinal parasite classification. Experiments conducted over three datasets composed of eggs, larvae, and protozoa provided promising results, even considering unbalanced classes and the presence of fecal impurities.

###### Keywords: Intestinal Parasites · Deep Belief Networks · Restricted Boltzmann Machines · Data Augmentation

## 1 Introduction

Estimates reveal that around $4$ billion people in the world are infected with some intestinal parasite [21]. Human intestinal parasitism is a public health problem, especially in tropical countries [9], where such infections can lead children and immunodeficient adults to death. The detection and diagnosis of human intestinal parasitosis depend mostly on the visual analysis of optical microscopy images obtained from fecal samples. However, the manual analysis of those images is time-consuming and error-prone. To circumvent this problem, Suzuki et al. [19] proposed a fully automated enteroparasitosis diagnosis system via image analysis, which addressed the $15$ most common species of protozoa and helminths in Brazil. The proposed approach comprises three main steps: (i) image segmentation, (ii) object delineation, and (iii) classification. Previous works have also investigated protozoa and helminth parasite classification. Suzuki et al. [20], for instance, introduced the Optimum-Path Forest [11, 10] classifier for such a task, with results that outperformed Support Vector Machines and Artificial Neural Networks. Later on, Peixinho et al. [14] explored Convolutional Neural Networks (CNNs) in this context. Further, Peixinho et al. [13] proposed generating synthetic samples to increase the number of images for under-represented classes by adding points onto a 2D projection space. Furthermore, Benato et al. [1] investigated an approach to cope with the lack of supervised data by interactively propagating labels to reduce the user effort in data annotation. Finally, Castelo et al.
[3] used a bag of visual words to extract key points from superpixel-segmented images and further build a visual dictionary to automatically classify intestinal parasites. Apart from the techniques mentioned earlier, Restricted Boltzmann Machines (RBMs) [17] have received notable attention due to their promising results in a wide variety of tasks, such as data reconstruction [12], exudate identification in retinal images [7], and collaborative filtering [16], to cite a few. Moreover, RBMs can be used as the building block for more complex and deep models, such as Deep Belief Networks (DBNs) [6] and Deep Boltzmann Machines (DBMs) [15]. However, as far as we are concerned, no work has employed RBM-based models in the task of intestinal parasite classification to date. Therefore, the main contributions of this work are threefold: (i) to propose an effective method for parasite classification using RBMs and DBNs; (ii) to evaluate the ability of Restricted Boltzmann Machines for data augmentation; and (iii) to foster the scientific literature concerning both RBM-based applications and intestinal parasite identification. The remainder of this paper is organized as follows: Section 2 introduces the theoretical background concerning RBMs and DBNs, while Sections 3 and 4 present the methodology and the experimental results, respectively. Finally, Section 5 states conclusions and future works.

## 2 Theoretical Background

In this section, we provide a brief description of the main concepts regarding RBM and DBN formulations, as well as their discriminative variant to deal with classification problems.

### 2.1 Restricted Boltzmann Machines

Restricted Boltzmann Machines stand for energy-based neural networks that can learn the probability distribution over a set of input vectors. Such models are named after the Boltzmann distribution, a measure that uses the system's energy to obtain the probability of a given state. Energy-based models are inspired by physics, since they assign a scalar energy value to each variable configuration and learn by adjusting their parameters to minimize the energy of the system. Moreover, they are modeled as a bipartite graph, i.e., there are no connections between units from the same layer. Such a technique assumes binary-valued nodes, although there are extensions to real- and even complex-valued inputs [18, 8]. Given an initial configuration $(\bm{v},\bm{h})$, the energy of the system can be computed as follows:

$E(\bm{v},\bm{h})=-\sum_{i=1}^{m}b_{i}v_{i}-\sum_{j=1}^{n}c_{j}h_{j}-\sum_{i=1}^{m}\sum_{j=1}^{n}W_{ij}v_{i}h_{j},$ (1)

where $\bm{v}\in\Re^{m}$ and $\bm{h}\in\Re^{n}$ stand for the visible and hidden layers, respectively, and $\bm{b}\in\Re^{m}$ and $\bm{c}\in\Re^{n}$ denote their bias vectors. Additionally, $\bm{W}_{m\times n}$ corresponds to the weight matrix of the connections between layers $\bm{v}$ and $\bm{h}$. The learning procedure aims at finding $\bm{W}$, $\bm{b}$, and $\bm{c}$ such that Equation 1 is minimized. However, calculating the joint probability of the model analytically is intractable, since it requires computing every possible configuration of the system.
Nevertheless, one can estimate the conditional probabilities using alternated iterations over a Monte Carlo Markov Chain (MCMC) approach, where the probabilities of both visible and hidden units are computed as follows:

$p(h_{j}=1|\bm{v})=\sigma\left(c_{j}+\sum_{i=1}^{m}W_{ij}v_{i}\right),$ (2)

and

$p(v_{i}=1|\bm{h})=\sigma\left(b_{i}+\sum_{j=1}^{n}W_{ij}h_{j}\right),$ (3)

where $\sigma$ stands for the logistic-sigmoid function. Since the visible and hidden units are conditionally independent, one can train the network using the MCMC algorithm with Gibbs sampling through Contrastive Divergence (CD) [5].

### 2.2 Deep Belief Networks

Restricted Boltzmann Machines can also be employed to compose more complex models. They are commonly used as building blocks to generate the so-called Deep Belief Networks [6], which are composed of a visible layer and a set of $L$ hidden layers. In this model, each layer is connected to the next through a weight matrix $\textbf{W}^{(l)}$, $l\in[1,L]$. In short, DBNs consider each set of two subsequent layers as an RBM trained in a greedy fashion, where the hidden layer of the bottommost RBM feeds the next RBM's visible layer. For classification purposes, a Softmax layer is appended to the model. Afterwards, the model is fine-tuned using the backpropagation algorithm, as depicted in Figure 1. Notice that $\textbf{h}^{(l)}$ stands for the $l$-th hidden layer.

Figure 1: DBN architecture with two hidden layers for classification purposes.

## 3 Methodology

In this section, we introduce the dataset employed in this work, as well as the technical details concerning the experimental setup.

### 3.1 Dataset

The experiments consider datasets from human intestinal parasites divided into three groups: (i) Helminth eggs (i.e., Eggs) with $12,691$ images, (ii) Helminth larvae (i.e., Larvae) with $1,598$ images, and (iii) Protozoan cysts (i.e., Protozoa) with $37,372$ images. Notice that all datasets contain fecal impurities, a diverse class that looks similar to some parasites. Each dataset comprises the following categories, with their respective labels in parentheses:

* Helminth eggs: _H.nana_ (1), _H.diminuta_ (2), _Ancilostomideo_ (3), _E.vermicularis_ (4), _A.lumbricoides_ (5), _T.trichiura_ (6), _S.mansoni_ (7), _Taenia_ (8), and impurities (9).
* Helminth larvae: larvae (1) and impurities (2); and
* Protozoan cysts: _E.coli_ (1), _E.histolytica_ (2), _E.nana_ (3), _Giardia_ (4), _I.butschlii_ (5), _B.hominis_ (6), and impurities (7).

These are the most common species of human intestinal parasites in Brazil, and they are also responsible for public health problems in most tropical countries [19]. Notice that all datasets are unbalanced, with considerably more impurity samples. The objects of interest were first segmented from the background, converted to grayscale, and further resized to $50\times 50$ pixels. Table 2(a) presents the distribution of samples per class.

### 3.2 Data augmentation

In this paper, we propose two different synthetic data generation approaches to overcome the class imbalance problem: (i) an Autoencoder (AE) and (ii) an additional RBM for image reconstruction purposes (a minimal training sketch is given below). In all cases, the models were trained only with examples of the class to be oversampled. Further, to allow a fair comparison, both the RBM and the AE have similar architectures.
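A minimal numpy sketch of binary-RBM training with CD-1, matching Eqs. (1)-(3) and the augmentation RBM's settings (500 hidden units, $\eta=10^{-4}$, mini-batches of 8, $50\times 50$ binary inputs); it is illustrative and not the authors' implementation. Reconstructions of minority-class samples produced this way act as the synthetic data.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

m, n, lr = 2500, 500, 1e-4                    # 50x50 inputs, 500 hidden units
W = rng.normal(0.0, 0.01, (m, n))             # weight matrix
b, c = np.zeros(m), np.zeros(n)               # visible / hidden biases

v0 = (rng.random((8, m)) < 0.5).astype(float)  # dummy mini-batch of binary inputs

for _ in range(100):                           # CD-1: one Gibbs pass per update
    ph0 = sigmoid(c + v0 @ W)                  # p(h=1|v), Eq. (2)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(b + h0 @ W.T)                # p(v=1|h), Eq. (3)
    ph1 = sigmoid(c + pv1 @ W)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)  # positive - negative phase
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

synthetic = sigmoid(b + h0 @ W.T)              # reconstructions: the synthetic samples
```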
Table 1 presents the hyperparameters employed while training the models for data augmentation.

| Model | Hyper-parameter | Search interval | Best value |
|---|---|---|---|
| AE | $\eta$ | $[10^{-5},10^{-2}]$ | $10^{-3}$ |
| AE | $p_{drop}$ | $[0,0.4]$ | $0.2$ |
| AE | Hidden dim | $\{250,500,2000\}$ | $500$ |
| AE | Batch size | $\{16,32,128\}$ | $32$ |
| RBM | $\eta$ | $[10^{-5},10^{-2}]$ | $10^{-4}$ |
| RBM | Hidden dim | $\{500,2000\}$ | $500$ |
| RBM | Batch size | $\{4,8,16\}$ | $8$ |

Table 1: Hyperparameter setup.

Regarding synthetic data generation, our policy is to oversample the minority classes so that the total number of generated samples, over all classes, does not surpass approximately $50\%$ of the majority class (impurities). Table 2(b) presents the augmentation results.

(a) Original:

| Class | Eggs | Larvae | Protozoa |
|---|---|---|---|
| 1 | 500 | 246 | 868 |
| 2 | 83 | 1,352 | 659 |
| 3 | 286 | – | 1,783 |
| 4 | 103 | – | 1,931 |
| 5 | 835 | – | 3,297 |
| 6 | 435 | – | 309 |
| 7 | 254 | – | 28,525 |
| 8 | 379 | – | – |
| 9 | 9,816 | – | – |
| Total | 12,691 | 1,598 | 37,372 |

(b) Augmented:

| Class | Eggs | Larvae | Protozoa |
|---|---|---|---|
| 1 | 1,000 (500) | 738 (492) | 868 |
| 2 | 415 (332) | 1,352 | 1,977 (1,318) |
| 3 | 572 (286) | – | 1,783 |
| 4 | 412 (309) | – | 1,931 |
| 5 | 835 | – | 3,297 |
| 6 | 870 (435) | – | 1,236 (927) |
| 7 | 2,508 (2,254) | – | 28,525 |
| 8 | 379 | – | – |
| 9 | 9,816 | – | – |
| Total | 14,807 (2,116) | 2,090 (492) | 39,619 (2,245) |

Table 2: Class frequency for the (a) original and (b) augmented datasets. Values in parentheses stand for the number of samples generated artificially.

### 3.3 Experimental Setup

Three different models were considered in this paper: one RBM with $500$ hidden neurons, and two DBNs, i.e., one with two hidden layers (DBN-2) containing $500$ neurons each, and another comprising three hidden layers (DBN-3) with $2,000$ neurons in the first two levels and $500$ neurons in the uppermost layer (in case of acceptance, we shall provide a link to the source code). All models were trained for $100$ epochs for each RBM stack, with a learning rate $\eta=10^{-5}$ and mini-batches of $64$ samples. Further, the networks were fine-tuned for an additional $100$ epochs with mini-batches of size $128$.

### 3.4 Evaluation procedure

Since we have unbalanced datasets, the standard accuracy (ACC) may not be suitable to evaluate the proposed models, since it favors classifiers biased towards the most common classes. To address such an issue, we considered the Balanced Accuracy score (BAC) [2] implemented in sklearn (available at https://scikit-learn.org). Additionally, Cohen's kappa coefficient [4] is employed to assess the degree of agreement between the classifier and the ground-truth labels. Such a value lies in the interval $[-1,1]$, where the lower and upper boundaries represent complete disagreement and complete agreement, respectively. Finally, we employed the Wilcoxon signed-rank test [22] with a significance of $5\%$ to evaluate the statistical similarity among the best results. A toy illustration of these metrics follows.
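The evaluation described above maps directly onto the cited sklearn and scipy implementations; the label vectors and per-run scores below are toy values.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, cohen_kappa_score
from scipy.stats import wilcoxon

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 2])   # toy unbalanced labels
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0])

print(accuracy_score(y_true, y_pred))            # ACC favors the majority class
print(balanced_accuracy_score(y_true, y_pred))   # BAC: mean per-class recall
print(cohen_kappa_score(y_true, y_pred))         # agreement beyond chance, in [-1, 1]

# Wilcoxon signed-rank test over paired per-run scores of two models (alpha = 0.05)
runs_a = [0.95, 0.94, 0.95, 0.96, 0.94]          # illustrative values
runs_b = [0.90, 0.91, 0.90, 0.92, 0.89]
print(wilcoxon(runs_a, runs_b).pvalue)
```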
## 4 Experimental Results

In this section, we present the experimental results concerning automatic human parasite classification.

### 4.1 Classification results

Table 3 presents the mean results concerning the standard accuracy, the balanced accuracy, and the kappa value for the Larvae dataset. Results are presented for the RBM, DBN-2, and DBN-3 techniques using three distinct configurations, i.e., the original dataset and its augmented versions using an RBM (Aug-RBM) and an AE (Aug-AE). The best results are determined via the Wilcoxon test.

The results confirm the robustness of the proposed approaches, since all models with the RBM augmentor achieved more than $94\%$ BAC. One can highlight the DBN-2 results using Aug-RBM, with $95\%$ mean accuracy and a $0.901$ kappa value. Such results provide good evidence of the relevance of data augmentation with respect to the baseline, since Aug-RBM supported an improvement of around $5.6\%$ in standard accuracy, $17.3\%$ in BAC, and $38\%$ in the kappa value. Although Aug-AE provided some improvements, the RBM figures as the most accurate approach for this task.

| Metric | RBM Aug-RBM | RBM Aug-AE | RBM Baseline | DBN-2 Aug-RBM | DBN-2 Aug-AE | DBN-2 Baseline | DBN-3 Aug-RBM | DBN-3 Aug-AE | DBN-3 Baseline |
|---|---|---|---|---|---|---|---|---|---|
| ACC | 94.03±0.30 | 77.03±1.85 | 90.14±0.14 | 95.05±0.34 | 90.66±0.87 | 90.53±0.23 | 94.85±0.33 | 92.15±0.65 | 89.61±1.26 |
| BAC | 94.07±0.28 | 69.71±2.95 | 80.19±0.38 | 95.09±0.33 | 90.63±0.75 | 81.24±0.41 | 94.87±0.34 | 91.40±0.79 | 80.99±2.29 |
| Kappa | 0.880±0.005 | 0.445±0.053 | 0.637±0.006 | 0.901±0.007 | 0.804±0.018 | 0.653±0.007 | 0.897±0.007 | 0.832±0.014 | 0.630±0.041 |

Table 3: Effectiveness over the Larvae dataset using the proposed approaches.

Table 4 presents the results for the Eggs dataset. In this scenario, DBN-3 obtained the best results concerning the ACC and kappa values, while the standard RBM performed better on the BAC measure. This behavior is surprising, since both kappa and BAC were proposed to cope with unbalanced data evaluation and were thus expected to behave similarly.

| Metric | RBM Aug-RBM | RBM Aug-AE | RBM Baseline | DBN-2 Aug-RBM | DBN-2 Aug-AE | DBN-2 Baseline | DBN-3 Aug-RBM | DBN-3 Aug-AE | DBN-3 Baseline |
|---|---|---|---|---|---|---|---|---|---|
| ACC | 93.54±0.37 | 84.25±1.13 | 90.30±0.052 | 94.03±0.19 | 92.13±0.99 | 91.91±0.45 | 94.41±0.32 | 94.01±0.19 | 93.08±0.31 |
| BAC | 92.09±0.68 | 67.15±2.54 | 79.94±0.55 | 90.98±0.77 | 88.36±1.77 | 78.34±1.33 | 91.06±0.62 | 90.39±0.30 | 78.67±1.75 |
| Kappa | 0.884±0.006 | 0.685±0.025 | 0.769±0.009 | 0.891±0.004 | 0.857±0.015 | 0.794±0.009 | 0.897±0.006 | 0.890±0.003 | 0.820±0.009 |

Table 4: Effectiveness over the Eggs dataset using the proposed approaches.

The behavior observed in the Protozoa dataset, presented in Table 5, highlights an interesting scenario. One of the best ACC ($87.51\%$) and kappa ($0.736$) results was achieved with the simplest model, i.e., an RBM using Aug-RBM. Such behavior points out that, for this dataset, we can compress the input data into a small latent space, thus extracting useful and representative features with only $500$ units, while the performance remains remarkable even with unbalanced classes. Moreover, concerning BAC values, one can observe that DBN-2 and DBN-3 with data augmentation by Restricted Boltzmann Machines, as well as DBN-3 using the AE for synthetic data generation, obtained similar results.
| Metric | RBM Aug-RBM | RBM Aug-AE | RBM Baseline | DBN-2 Aug-RBM | DBN-2 Aug-AE | DBN-2 Baseline | DBN-3 Aug-RBM | DBN-3 Aug-AE | DBN-3 Baseline |
|---|---|---|---|---|---|---|---|---|---|
| ACC | 87.51±0.14 | 75.85±0.13 | 86.21±0.30 | 86.97±0.31 | 87.01±0.22 | 85.97±0.50 | 85.97±0.59 | 87.29±0.37 | 84.73±0.94 |
| BAC | 77.84±0.82 | 43.85±0.84 | 63.77±1.15 | 78.84±1.22 | 73.83±0.74 | 62.97±2.88 | 77.66±1.88 | 77.87±1.58 | 60.55±2.85 |
| Kappa | 0.736±0.004 | 0.368±0.009 | 0.662±0.006 | 0.731±0.007 | 0.710±0.005 | 0.659±0.012 | 0.711±0.010 | 0.724±0.009 | 0.615±0.023 |

Table 5: Effectiveness over the Protozoa dataset using the proposed approaches.

### 4.2 Training Analysis

Regarding the training analysis, we considered the datasets augmented with RBMs only, since these models outperformed the ones using Autoencoders. Figure 2 depicts the evolution of the kappa values over the testing set during training. One can notice that: (i) data augmentation provided a considerable improvement in the results; (ii) training with data augmentation led to more stable results (Figures 2a and 2b); and (iii) differently from the other two datasets, techniques over Protozoa kept learning up to $80$ epochs (Figure 2c). Such behavior is somewhat expected, since the Protozoa dataset poses a more challenging scenario. The stable results provided by data augmentation may allow us to apply criteria for convergence analysis during training, such as early stopping.

Figure 2: Average kappa values over the testing set concerning the (a) Larvae, (b) Eggs, and (c) Protozoa datasets.

### 4.3 Data Augmentation Analysis

Figure 3 shows some synthetic data generated by RBMs using $500$ hidden neurons. One can observe that the RBMs were able to generate useful samples, which corroborates the aforementioned results, i.e., such a process improved parasite classification. Besides, the least accurate results concern the Larvae dataset, since we have a small subset of samples and their shape changes considerably among the parasites.

Figure 3: Data augmentation analysis: (a) real and (b) synthetic Larvae samples, (c) real and (d) synthetic Eggs samples, and (e) real and (f) synthetic Protozoa samples.

## 5 Conclusions and Future Works

This paper dealt with the problem of human intestinal parasite classification through RBM and DBN approaches. Experiments conducted over three distinct scenarios composed of Larvae, Eggs, and Protozoa, which are also partially surrounded by fecal impurities, confirmed the robustness of the models for classification purposes. Additionally, the performance of RBMs was compared against Autoencoders for data augmentation, since the datasets are highly unbalanced. Regarding future works, we intend to analyze the behavior of the models over a broader spectrum using colored images, as well as to employ other RBM-based models, such as Infinite RBMs (iRBMs) and DBMs, in the task of human intestinal parasite classification.

## Acknowledgments

The authors are grateful to FAPESP grants #2013/07375-0, #2014/12236-1, #2017/25908-6, #2019/07825-1, and #2019/07665-4, as well as CNPq grants #307066/2017-7, and #427968/2018-6. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001.

## References

* [1] Benato, B.C., Telea, A.C., Falcão, A.X.: Semi-supervised learning with interactive label propagation guided by feature space projections.
In: 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). pp. 392–399. IEEE (2018)
* [2] Brodersen, K.H., Ong, C.S., Stephan, K.E., Buhmann, J.M.: The balanced accuracy and its posterior distribution. In: Proceedings of the 2010 20th International Conference on Pattern Recognition. pp. 3121–3124. ICPR ’10, IEEE Computer Society, Washington, DC, USA (2010). https://doi.org/10.1109/ICPR.2010.764
* [3] Castelo-Fernández, C., Falcão, A.X.: Learning visual dictionaries from class-specific superpixel segmentation. In: Vento, M., Percannella, G. (eds.) Computer Analysis of Images and Patterns. pp. 171–182. Springer International Publishing, Cham (2019)
* [4] Fleiss, J.L., Cohen, J.: The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement 33(3), 613–619 (1973)
* [5] Hinton, G.E.: Training products of experts by minimizing contrastive divergence. Neural Computation 14(8), 1771–1800 (2002)
* [6] Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Computation 18(7), 1527–1554 (2006)
* [7] Khojasteh, P., Passos, L.A., Carvalho, T., Rezende, E., Aliahmad, B., Papa, J.P., Kumar, D.K.: Exudate detection in fundus images using deeply-learnable features. Computers in biology and medicine 104, 62–69 (2019)
* [8] Nakashika, T., Takaki, S., Yamagishi, J.: Complex-valued restricted boltzmann machine for direct learning of frequency spectra. In: Interspeech. pp. 4021–4025 (2017)
* [9] World Health Organization: Working to Overcome the Global Impact of Neglected Tropical Diseases. First WHO Report on Neglected Tropical Diseases (2010)
* [10] Papa, J.P., Falcão, A.X., Albuquerque, V.H.C., Tavares, J.M.R.S.: Efficient supervised optimum-path forest classification for large datasets. Pattern Recognition 45(1), 512–520 (2012)
* [11] Papa, J.P., Falcão, A.X., Suzuki, C.T.N.: Supervised pattern classification based on optimum-path forest. International Journal of Imaging Systems and Technology 19(2), 120–131 (2009)
* [12] Passos, L.A., Santana, M.C., Moreira, T., Papa, J.P.: $\kappa$-entropy based restricted boltzmann machines. In: The 2019 International Joint Conference on Neural Networks (IJCNN). pp. 1–8. IEEE (2019)
* [13] Peixinho, A.Z., Benato, B.C., Nonato, L.G., Falcão, A.X.: Delaunay triangulation data augmentation guided by visual analytics for deep learning. In: 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). pp. 384–391. IEEE (2018)
* [14] Peixinho, A.Z., Martins, S.B., Vargas, J.E., Falcão, A.X., Gomes, J.F., Suzuki, C.T.N.: Diagnosis of human intestinal parasites by deep learning. In: Computational Vision and Medical Image Processing V: Proceedings of the 5th Eccomas Thematic Conference (VipIMAGE) (2015). https://doi.org/10.1201/b19241
* [15] Salakhutdinov, R., Hinton, G.E.: Deep boltzmann machines. In: AISTATS. vol. 1, p. 3 (2009)
* [16] Salakhutdinov, R., Mnih, A., Hinton, G.: Restricted boltzmann machines for collaborative filtering. In: Proceedings of the 24th international conference on Machine learning. pp. 791–798. ACM (2007)
* [17] Smolensky, P.: Parallel distributed processing: Explorations in the microstructure of cognition. vol. 1, chap. Information Processing in Dynamical Systems: Foundations of Harmony Theory, pp. 194–281. MIT Press, Cambridge, MA, USA (1986)
* [18] Srivastava, N., Salakhutdinov, R.R.: Multimodal learning with deep boltzmann machines.
In: Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 25, pp. 2222–2230. Curran Associates, Inc. (2012)
* [19] Suzuki, C.T.N., Gomes, J.F., Falcão, A.X., Papa, J.P., Hoshino-Shimizu, S.: Automatic segmentation and classification of human intestinal parasites from microscopy images. IEEE Transactions on Biomedical Engineering 60(3), 803–812 (March 2013). https://doi.org/10.1109/TBME.2012.2187204
* [20] Suzuki, C.T.N., Gomes, J.F., Falcão, A.X., Shimizu, S.H., Papa, J.P.: Automated diagnosis of human intestinal parasites using optical microscopy images. In: 2013 IEEE 10th International Symposium on Biomedical Imaging. pp. 460–463 (April 2013). https://doi.org/10.1109/ISBI.2013.6556511
* [21] (WHO), P.A.H.O.P.H.O.: French-Speaking Caribbean: Towards World Health Assembly Resolution 54.19 (May 2007)
* [22] Wilcoxon, F.: Individual comparisons by ranking methods. Biometrics Bulletin 1(6), 80–83 (1945)
Institutes: (1) São Paulo State University - UNESP, Bauru, Brazil ({mateus.roder, clayton.pereira, leandro.passos, luiz.felix, <EMAIL_ADDRESS>; https://www.fc.unesp.br/); (2) Recogna Laboratory - www.recogna.tech

# A Layer-Wise Information Reinforcement Approach to Improve Learning in Deep Belief Networks

Mateus Roder (1,2; ORCID 0000-0002-3112-5290), Leandro A. Passos (1,2; ORCID 0000-0003-3529-3109), Luiz Carlos Felix Ribeiro (1,2; ORCID 0000-0003-1265-0273), Clayton Pereira (1,2; ORCID 0000-0002-0427-4880), João Paulo Papa (1,2; ORCID 0000-0002-6494-7514)

###### Abstract

With the advent of deep learning, the number of works proposing new methods or improving existing ones has grown exponentially in recent years. In this scenario, "very deep" models emerged, since they were expected to extract more intrinsic and abstract features while supporting better performance. However, such models suffer from the vanishing gradient problem, i.e., backpropagation values become too close to zero in their shallower layers, ultimately causing learning to stagnate. Such an issue was overcome in the context of convolutional neural networks by creating "shortcut connections" between layers, in a so-called deep residual learning framework. Nonetheless, a very popular deep learning technique called Deep Belief Network still suffers from vanishing gradients when dealing with discriminative tasks. Therefore, this paper proposes the Residual Deep Belief Network, which considers layer-by-layer information reinforcement to improve feature extraction and knowledge retention, thus supporting better discriminative performance. Experiments conducted over three public datasets demonstrate its robustness concerning the task of binary image classification.

###### Keywords: Deep Belief Networks · Residual Networks · Restricted Boltzmann Machines

## 1 Introduction

Machine learning-based approaches have been massively studied and applied to daily tasks in the last decades, mostly due to the remarkable accomplishments achieved by deep learning models. Despite the success attained by these techniques, they still suffer from a well-known drawback of backpropagation-based learning: the vanishing gradient. This problem becomes more prominent in deeper models, since the gradient vanishes and is not propagated adequately to the former layers, preventing a proper parameter update. To tackle such an issue, He et al. [4] proposed ResNet, a framework in which the layers learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. In short, the idea is to map a set of stacked layers to a residual mapping, which combines the set's input and output, and then map it back to the desired underlying mapping. The model quickly achieved popularity, being applied in a wide range of applications, such as traffic surveillance [7], medicine [12, 8], and action recognition [2], to cite a few. Moreover, many works proposed different approaches using the idea of residual functions. Lin et al. [11], for instance, proposed RetinaNet, a pyramid-shaped network that employs residual stages to deal with one-shot small object detection over unbalanced datasets. Meanwhile, Szegedy et al. [17] proposed Inception-ResNet for object recognition. Later, Santos et al. [16] proposed the Cascade Residual Convolutional Neural Network for video segmentation.
In the context of deep neural networks, there exists another class of methods composed of Restricted Boltzmann Machines (RBMs) [6], a stochastic approach represented by a bipartite graph whose training is given by the minimization of the energy between a visible and a latent layer. Among these methods, Deep Belief Networks (DBNs) [5] and Deep Boltzmann Machines [15, 13] have achieved considerable popularity in recent years due to satisfactory results over a wide variety of applications [14, 3, 18]. However, to the best of our knowledge, no work has addressed the concept of reinforcing the feature extraction of those models in a layer-by-layer fashion. Therefore, the main contributions of this paper are twofold: (i) to propose the Residual Deep Belief Network (Res-DBN), a novel approach that combines each layer's input and output to reinforce the information conveyed through it, and (ii) to support the literature concerning both DBNs and residual-based models.

The remainder of this paper is organized as follows: Section 2 introduces the main concepts regarding RBMs and DBNs, while Section 3 proposes the Residual Deep Belief Network. Further, Section 4 describes the methodology and datasets employed in this work. Finally, Sections 5 and 6 provide the experimental results and conclusions, respectively.

## 2 Theoretical Background
This section introduces a brief theoretical background regarding Restricted Boltzmann Machines and Deep Belief Networks.

### 2.1 Restricted Boltzmann Machines
A Restricted Boltzmann Machine is a stochastic, physics-inspired computational model capable of learning the intrinsic patterns of a data distribution. The model is represented as a bipartite graph where the data composes a visible input layer $\bm{v}$, while a latent $n$-dimensional vector $\bm{h}$ comprises the set of hidden neurons onto which the model tries to map the inputs. The model’s training procedure relies on the minimization of the system’s energy, given as follows:

$E(\bm{v},\bm{h})=-\sum_{i=1}^{m}b_{i}v_{i}-\sum_{j=1}^{n}c_{j}h_{j}-\sum_{i=1}^{m}\sum_{j=1}^{n}w_{ij}v_{i}h_{j},$ (1)

where $m$ and $n$ stand for the dimensions of the visible and hidden layers, respectively, while $b$ and $c$ denote their respective bias vectors. Further, $\textbf{W}$ corresponds to the weight matrix connecting both layers, in which $w_{ij}$ stands for the connection between visible unit $i$ and hidden unit $j$. Notice the model is restricted, meaning no connections are allowed among neurons of the same layer. Ideally, the model would be solved by computing the joint probability of the visible and hidden neurons analytically. However, such an approach is intractable since it requires computing the partition function, i.e., evaluating every possible configuration of the system. Therefore, Hinton proposed Contrastive Divergence (CD) [6], an alternative method to estimate the conditional probabilities of the visible and hidden neurons using Gibbs sampling over a Markov Chain Monte Carlo (MCMC) procedure. Hence, the probabilities of both input and hidden units are computed as follows:

$p(h_{j}=1|\bm{v})=\sigma\left(c_{j}+\sum_{i=1}^{m}w_{ij}v_{i}\right),$ (2)

and

$p(v_{i}=1|\bm{h})=\sigma\left(b_{i}+\sum_{j=1}^{n}w_{ij}h_{j}\right),$ (3)

where $\sigma$ stands for the logistic-sigmoid function.
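For illustration, the sketch below implements Eqs. (2)-(3) and a single CD-1 update in NumPy. The function and variable names are ours, and the learning rate is illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_h_given_v(v, W, c):
    # Eq. (2): p(h_j = 1 | v) = sigma(c_j + sum_i w_ij v_i)
    return sigmoid(c + v @ W)

def p_v_given_h(h, W, b):
    # Eq. (3): p(v_i = 1 | h) = sigma(b_i + sum_j w_ij h_j)
    return sigmoid(b + h @ W.T)

def cd1_step(v0, W, b, c, lr=0.1):
    """One Contrastive Divergence (CD-1) parameter update on a mini-batch v0."""
    ph0 = p_h_given_v(v0, W, c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # Gibbs-sample h ~ p(h|v0)
    pv1 = p_v_given_h(h0, W, b)                       # one-step reconstruction
    ph1 = p_h_given_v(pv1, W, c)
    # Positive minus negative phase statistics approximate the log-likelihood gradient.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
```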
### 2.2 Deep Belief Networks
Conceptually, Deep Belief Networks are graph-based generative models composed of a visible layer and a set of hidden layers connected by weight matrices, with no connections between neurons in the same layer. In practice, the model comprises a set of stacked RBMs whose hidden layers greedily feed the subsequent RBM's visible layer. Finally, a softmax layer is attached at the top of the model, and the weights are fine-tuned using backpropagation for classification purposes. Figure 1 depicts the model. Notice that $\textbf{W}^{(l)}$, $l\in[1,L]$, stands for the weight matrix at layer $l$, where $L$ denotes the number of hidden layers. Moreover, $\bm{v}$ stands for the visible layer, while $\bm{h}^{(l)}$ represents the $l^{th}$ hidden layer.

Figure 1: DBN architecture with two hidden layers for classification purposes.

## 3 Information Reinforcement in DBNs
In this section, we present the proposed approach concerning the layer-by-layer residual reinforcement in Deep Belief Networks, from now on called Res-DBN. Since such a network is a hybrid model between sigmoid belief networks and binary RBMs [5], it is important to highlight some “tricks” to make use of the information provided layer-by-layer. As aforementioned, DBNs can be viewed as hybrid networks that model the data’s prior distribution in a layer-by-layer fashion to improve the lower bound of the model distribution. This fact motivated us to make use of the information learned by each stacked RBM for reinforcement, since the greedy layer-wise pre-training uses the activation of the latent binary variables as the input of the next visible layer. Generally speaking, such activation is defined by Eq. 2, and its pre-activation vector $\bm{a}^{(l)}$ is given as follows:

$a^{(l)}_{j}=c^{(l)}_{j}+\sum_{i=1}^{m}w^{(l)}_{ij}x^{(l-1)}_{i},$ (4)

where $c^{(l)}_{j}$ stands for the bias of hidden layer $l$, $m$ is the number of units in the previous layer, $w^{(l)}_{ij}$ represents the weight matrix of layer $l$, and $x^{(l-1)}_{i}$ stands for the input data from layer $l-1$, with $x^{0}_{i}=v_{i}$. Therefore, it is possible to use the “reinforcement pre-activation” vector, denoted $\hat{\bm{a}}^{(l)}$, of layer $l$, $\forall$ $l>1$. Since the standard RBM post-activation output (provided by Eq. 2) lies in the $[0,1]$ interval, it is necessary to limit the reinforcement term of the proposed approach as follows:

$\hat{\bm{a}}^{(l)}=\dfrac{\delta(\bm{a}^{(l-1)})}{\max\{\delta(a^{(l-1)}_{j})\}},$ (5)

where $\delta$ stands for the rectifier function, $\delta(z)=\max(0,z)$, while $\max$ returns the maximum value of the $\delta$ output vector for normalization purposes. Then, the new input data and the information aggregation for layer $l$ are defined by adding the values obtained from Eq. 5 to the post-activation, i.e., applying $\sigma(\bm{a}^{(l-1)})$, as follows:

$x^{(l-1)}_{i}=\sigma(a^{(l-1)}_{j})+\hat{a}^{(l)}_{j},$ (6)

where $x^{(l-1)}_{i}$ stands for the new input data to layer $l$, $\forall$ $l>1$, and its normalized, vectorized form can be obtained as follows:

$\bm{x}^{(l-1)}=\dfrac{\bm{x}^{(l-1)}}{\max\{x^{(l-1)}_{i}\}}.$ (7)

It is important to highlight that, in Eq. 5, we only use the positive pre-activations to retrieve and propagate the signal that is meaningful for neuron excitation, i.e., values greater than 0, which yield probabilities above $50\%$ after applying the sigmoid activation.

Figure 2: Res-DBN architecture with 3 hidden layers.
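To make the layer-wise reinforcement concrete, here is a minimal NumPy sketch of Eqs. (4)-(7) under our reading of the indexing (the reinforcement is computed from the previous layer's pre-activation); all names and the usage values are ours, not code from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reinforced_input(x_prev, W, c):
    """Reinforced input to layer l, given the input x_prev to layer l-1 and
    the weights W and hidden bias c of layer l-1 (our reading of Eqs. 4-7)."""
    a = c + x_prev @ W                      # Eq. (4): pre-activation a^(l-1)
    relu = np.maximum(a, 0.0)               # delta(z) = max(0, z), keep positive signal
    a_hat = relu / max(relu.max(), 1e-12)   # Eq. (5): normalized reinforcement term
    x = sigmoid(a) + a_hat                  # Eq. (6): post-activation plus reinforcement
    return x / max(x.max(), 1e-12)          # Eq. (7): renormalize to [0, 1]

# Hypothetical usage: a 784-unit visible layer feeding 500 hidden units.
rng = np.random.default_rng(42)
v = rng.random(784)
W1, c1 = 0.01 * rng.standard_normal((784, 500)), np.zeros(500)
x1 = reinforced_input(v, W1, c1)            # input forwarded to the second RBM
```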
Figure 2 depicts the Res-DBN architecture, with hidden layers connected by the weights $\textbf{W}^{(l)}$. The dashed connections stand for the reinforcement approach, with the information aggregation occurring as described by Eqs. 4 to 7, from a generic hidden layer to the next one ($\bm{h}^{(1)}\rightarrow\bm{h}^{(2)}$, for instance).

## 4 Methodology
In this section, we present details regarding the datasets employed in our experiments, as well as the experimental setup applied in this paper.

### 4.1 Datasets
Three well-known image datasets were employed throughout the experiments:

* $\bullet$ MNIST (http://yann.lecun.com/exdb/mnist) [10]: set of $28\times 28$ binary images of handwritten digits (0-9), i.e., 10 classes. The original version contains a training set with $60,000$ images from digits ‘0’-‘9’, as well as a test set with $10,000$ images.
* $\bullet$ Fashion-MNIST (https://github.com/zalandoresearch/fashion-mnist) [20]: set of $28\times 28$ binary images of clothing objects. The original version contains a training set with $60,000$ images from $10$ distinct objects (t-shirt, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot), and a test set with $10,000$ images.
* $\bullet$ Kuzushiji-MNIST (https://github.com/rois-codh/kmnist) [1]: set of $28\times 28$ binary images of hiragana characters. The original version contains a training set with $60,000$ images from $10$ previously selected hiragana characters, and a test set with $10,000$ images.

### 4.2 Experimental Setup
Concerning the experiments, we employed the concepts presented in Section 3, considering two main phases: (i) the DBN pre-training and (ii) the discriminative fine-tuning. Regarding the former, it is important to highlight that the information reinforcement is performed during the greedy layer-wise process, in which the hidden layers ($l=1,2,\dots,L$) receive the positive “residual” information. This process uses a mini-batch of size $128$, a learning rate of $0.1$, $50$ epochs for the bottommost RBM convergence, and $25$ epochs for the intermediate and top layers (half the initial value, to evaluate the earlier convergence of Res-DBN). Moreover, regarding the classification phase, a softmax layer was attached at the top of the model after the DBN pre-training, and the fine-tuning process was performed for $20$ epochs through backpropagation using the well-known Adam [9] optimizer with a learning rate of $10^{-3}$ for all layers. Furthermore, $15$ independent executions were performed for each model to provide a statistical analysis. To assess the robustness of the proposed approach, we employed seven different DBN architectures, varying the number of hidden neurons and layers, as denoted in Table 1.

Model | Res-DBN | DBN
---|---|---
(a) | $i$:500:500:10 | $i$:500:500:10
(b) | $i$:500:500:500:10 | $i$:500:500:500:10
(c) | $i$:500:500:500:500:10 | $i$:500:500:500:500:10
(d) | $i$:1000:1000:10 | $i$:1000:1000:10
(e) | $i$:1000:1000:1000:10 | $i$:1000:1000:1000:10
(f) | $i$:1000:1000:1000:1000:10 | $i$:1000:1000:1000:1000:10
(g) | $i$:2000:2000:2000:2000:10 | $i$:2000:2000:2000:2000:10

Table 1: Different setups, where $i$ stands for the number of neurons on the input layer.

## 5 Experiments
In this section, we present the experimental results concerning seven distinct DBN architectures, i.e., (a), (b), (c), (d), (e), (f), and (g), over the aforementioned datasets.
Table 2 provides the average accuracies and standard deviations for each configuration over $15$ trials, where the proposed approach is compared against the standard DBN formulation on each dataset for each configuration. Further, results in bold represent the best values according to the Wilcoxon signed-rank test [19] with significance $p\leq 0.05$ concerning each model configuration. On the other hand, underlined values represent the best results over all models regarding each dataset without a statistical difference, i.e., results similar to the best one achieved.

Experiment | MNIST Res-DBN | MNIST DBN | Fashion MNIST Res-DBN | Fashion MNIST DBN | Kuzushiji MNIST Res-DBN | Kuzushiji MNIST DBN
---|---|---|---|---|---|---
(a) | $\bm{97.39\pm 0.08}$ | $97.23\pm 0.09$ | $81.13\pm 0.33$ | $\bm{81.52\pm 0.27}$ | $\bm{86.49\pm 0.18}$ | $84.78\pm 0.29$
(b) | $\bm{97.61\pm 0.07}$ | $97.44\pm 0.11$ | $81.49\pm 0.50$ | $81.41\pm 0.57$ | $\bm{87.75\pm 0.20}$ | $85.81\pm 0.18$
(c) | $97.59\pm 0.10$ | $97.57\pm 0.09$ | $81.66\pm 0.33$ | $81.51\pm 0.60$ | $\bm{88.21\pm 0.18}$ | $86.97\pm 0.30$
(d) | $\bm{97.66\pm 0.10}$ | $97.40\pm 0.10$ | $81.55\pm 0.35$ | $81.15\pm 0.64$ | $\bm{87.67\pm 0.19}$ | $86.24\pm 0.21$
(e) | $\underline{\bm{97.85\pm 0.06}}$ | $97.48\pm 0.12$ | $\bm{82.05\pm 0.48}$ | $81.59\pm 0.51$ | $\bm{88.95\pm 0.16}$ | $87.57\pm 0.20$
(f) | $\underline{97.80\pm 0.37}$ | $97.68\pm 0.29$ | $82.16\pm 0.50$ | $82.19\pm 0.46$ | $\underline{\bm{89.63\pm 0.23}}$ | $88.81\pm 0.40$
(g) | $\underline{\bm{97.88\pm 0.19}}$ | $97.51\pm 0.30$ | $\underline{82.73\pm 0.53}$ | $\underline{82.63\pm 0.36}$ | $\underline{\bm{89.45\pm 0.78}}$ | $88.70\pm 0.60$

Table 2: Experimental results on the different datasets.

Regarding the original MNIST dataset, the preeminence of the proposed model over the standard DBN is evident, since the best results were obtained exclusively by Res-DBN and, among these, five out of seven scenarios presented statistical significance. Such behavior is even more pronounced on the Kuzushiji MNIST dataset, where the best results were obtained solely by Res-DBN over every possible configuration. The similarity of the results between these datasets is somewhat expected, since both are composed of handwritten digits or characters. The Fashion MNIST dataset presents the single experimental scenario, i.e., model (a), where the proposed model was outperformed by the traditional DBN, although by a small margin. In all other cases Res-DBN presented results superior or equal to the traditional formulation, which favors the use of Res-DBN over standard DBNs. Finally, one can observe that the best results overall were obtained using a more complex model, i.e., with a higher number of layers and neurons, as denoted by the underlined values. Additionally, the proposed model outperformed, or is at least equivalent to, the standard DBN in virtually all scenarios, except one concerning the Fashion-MNIST dataset.

### 5.1 Training Evaluation
Figures 3, 4, and 5 depict the models’ learning curves over the test sets of MNIST, Fashion MNIST, and Kuzushiji MNIST, respectively. In Figure 3, one can observe that Res-DBN(e) converged faster than the remaining approaches, obtaining reasonably good results after seven iterations. At the end of the process, Res-DBN(f) and (g) boosted and outperformed Res-DBN(e), as well as all standard DBN approaches, depicted as dashed lines.

Figure 3: Accuracy on MNIST test set.
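As an aside, the per-configuration statistical comparison reported in Table 2 can be reproduced along the following lines; the accuracy arrays below are hypothetical stand-ins for the 15 per-trial scores of one configuration, not values from the paper:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-trial test accuracies (15 runs each) for one configuration;
# these numbers are illustrative stand-ins, not values from Table 2.
res_dbn = np.array([97.39, 97.45, 97.31, 97.40, 97.52, 97.28, 97.36,
                    97.44, 97.33, 97.47, 97.38, 97.41, 97.29, 97.50, 97.35])
dbn = np.array([97.23, 97.30, 97.15, 97.25, 97.34, 97.10, 97.21,
                97.28, 97.18, 97.32, 97.24, 97.26, 97.12, 97.35, 97.20])

stat, p = wilcoxon(res_dbn, dbn)  # paired signed-rank test over the 15 trials
print(f"statistic={stat:.1f}, p={p:.4f}, significant at 0.05: {p <= 0.05}")
```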
Regarding Fashion MNIST, it can be observed in Figure 4 that Res-DBN(e) was once again the fastest technique to converge, obtaining acceptable results after five iterations. However, after iteration number five, all models seem to overfit, explaining the performance decrease observed over the testing samples. Finally, after $14$ iterations, the results start increasing once again, with Res-DBN(g) being the most accurate technique after $20$ iterations.

Figure 4: Accuracy on Fashion MNIST test set.

Finally, the Kuzushiji learning curve, depicted in Figure 5, displays a behavior similar to that of the MNIST dataset. Moreover, it shows that Res-DBN provided better results than its traditional variant in all cases right from the beginning of the training, in some cases with a margin greater than $2\%$, indicating a promising improvement.

Figure 5: Accuracy on Kuzushiji MNIST test set.

## 6 Conclusions
In this paper, we proposed a novel approach based on reinforcing the DBN's layer-by-layer feature extraction in a residual fashion, the so-called Residual Deep Belief Network. Experiments conducted over three public datasets confirm the robustness of the model. Moreover, it is important to highlight the faster convergence achieved by Res-DBN compared to DBN: only half the epochs were employed for pre-training the upper hidden layers, and the results still outperformed the latter model. Regarding future work, we intend to investigate the model in the video domain, applying it to classification and recognition tasks, as well as to propose a similar approach for Deep Boltzmann Machines.

## Acknowledgments
The authors are grateful to FAPESP grants #2013/07375-0, #2014/12236-1, #2017/25908-6, #2019/07825-1, and #2019/07665-4, as well as CNPq grants #307066/2017-7 and #427968/2018-6.

## References
* [1] Clanuwat, T., Bober-Irizar, M., Kitamoto, A., Lamb, A., Yamamoto, K., Ha, D.: Deep learning for classical japanese literature. arXiv preprint arXiv:1812.01718 (2018)
* [2] Feichtenhofer, C., Pinz, A., Wildes, R.: Spatiotemporal residual networks for video action recognition. In: Advances in Neural Information Processing Systems. pp. 3468–3476 (2016)
* [3] Hassan, M.M., Alam, M.G.R., Uddin, M.Z., Huda, S., Almogren, A., Fortino, G.: Human emotion recognition using deep belief network architecture. Information Fusion 51, 10–18 (2019)
* [4] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE CVPR. pp. 770–778 (2016)
* [5] Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Computation 18(7), 1527–1554 (2006)
* [6] Hinton, G.: Training products of experts by minimizing contrastive divergence. Neural Computation 14(8), 1771–1800 (2002)
* [7] Jung, H., Choi, M.K., Jung, J., Lee, J.H., Kwon, S., Young Jung, W.: Resnet-based vehicle classification and localization in traffic surveillance systems. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 61–67 (2017)
* [8] Khojasteh, P., Passos, L.A., Carvalho, T., Rezende, E., Aliahmad, B., Papa, J.P., Kumar, D.K.: Exudate detection in fundus images using deeply-learnable features. Computers in Biology and Medicine 104, 62–69 (2019)
* [9] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
* [10] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition.
Proceedings of the IEEE 86(11), 2278–2324 (1998) * [11] Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision. pp. 2980–2988 (2017) * [12] Passos, L.A., Pereira, C.R., Rezende, E.R., Carvalho, T.J., Weber, S.A., Hook, C., Papa, J.P.: Parkinson disease identification using residual networks and optimum-path forest. In: 2018 IEEE 12th International Symposium on Applied Computational Intelligence and Informatics (SACI). pp. 000325–000330. IEEE (2018) * [13] Passos, L.A., Papa, J.P.: A metaheuristic-driven approach to fine-tune deep boltzmann machines. Applied Soft Computing p. 105717 (2019) * [14] Pereira, C.R., Passos, L.A., Lopes, R.R., Weber, S.A., Hook, C., Papa, J.P.: Parkinson’s disease identification using restricted boltzmann machines. In: International Conference on Computer Analysis of Images and Patterns. pp. 70–80. Springer (2017) * [15] Salakhutdinov, R., Hinton, G.E.: Deep boltzmann machines. In: AISTATS. vol. 1, p. 3 (2009) * [16] Santos, D.F., Pires, R.G., Colombo, D., Papa, J.P.: Video segmentation learning using cascade residual convolutional neural network. In: 2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). pp. 1–7. IEEE (2019) * [17] Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Thirty-First AAAI Conference on Artificial Intelligence (2017) * [18] Wang, J., Wang, K., Wang, Y., Huang, Z., Xue, R.: Deep boltzmann machine based condition prediction for smart manufacturing. Journal of Ambient Intelligence and Humanized Computing 10(3), 851–861 (2019) * [19] Wilcoxon, F.: Individual comparisons by ranking methods. Biometrics Bulletin 1(6), 80–83 (1945) * [20] Xiao, H., Rasul, K., Vollgraf, R.: Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 (2017)
Accepted by J. Renew. Sustain. Energy

# Effect of low-level jet height on wind farm performance

Srinidhi N. Gadde <EMAIL_ADDRESS>and Richard J. A. M. Stevens <EMAIL_ADDRESS>, Physics of Fluids Group, Max Planck Center Twente for Complex Fluid Dynamics, J. M. Burgers Center for Fluid Dynamics and MESA+ Research Institute, University of Twente, P. O. Box 217, 7500 AE Enschede, The Netherlands

###### Abstract
Low-level jets (LLJs) are wind maxima in the lowest 50 to 1000 m of atmospheric boundary layers. Due to their significant influence on the power production of wind farms, it is crucial to understand the interaction between LLJs and wind farms. In the presence of a LLJ, there are positive and negative shear regions in the velocity profile. The positive shear regions of LLJs are continuously turbulent, while the negative shear regions have limited turbulence. We present large eddy simulations of wind farms in which the LLJ is above, below, or in the middle of the turbine rotor swept area. We find that the wakes recover relatively fast when the LLJ is above the turbines. This is due to the high turbulence below the LLJ and the downward vertical entrainment created by the momentum deficit due to the wind farm power production. This harvests the jet’s energy and aids wake recovery. However, when the LLJ is below the turbine rotor swept area, the wake recovery is very slow due to the low atmospheric turbulence above the LLJ. The energy budget analysis reveals that the entrainment fluxes are maximum and minimum when the LLJ is above and in the middle of the turbine rotor swept area, respectively. Surprisingly, we find that the negative shear creates a significant upward entrainment flux when the LLJ is below the turbine rotor swept area. This facilitates energy extraction from the jet, which is beneficial for the performance of downwind turbines.

Low-level jet; Wind farm; Large eddy simulation; Stable boundary layer; Energy entrainment

## I Introduction
A low-level jet (LLJ) is a maximum in the wind velocity profile of the atmospheric boundary layer (ABL). When the wind in the residual layer Brutsaert (1982) is decoupled from the surface friction and subjected to inertial oscillations, the flow in the residual layer accelerates to super-geostrophic magnitudes and forms a LLJ Blackadar (1957). These jets are observed in the lowest 50 to 1000 m of the ABL (Smedman, Högström, and Bergström, 1996) and are most pronounced in weak to moderately stable ABLs Baas _et al._ (2009); Banta (2008). Figure 1 shows a sketch of the velocity, potential temperature, and turbulence flux profiles in a stable ABL. LLJs Kelley _et al._ (2004); Banta _et al._ (2002); Smedman, Tjernström, and Högström (1993) have been reported all over the world, with frequent occurrences in India Prabha _et al._ (2011), China Liu _et al._ (2014), the Great Plains of the United States Arritt _et al._ (1997), and the North Sea region of Europe Kalverla _et al._ (2019); Wagner _et al._ (2019). Field observations show that in the IJmuiden region of the North Sea, LLJs are observed with a frequency of 7.56% in summer and 6.61% in spring Duncan (2018); Kalverla _et al._ (2017). LLJs in the North Sea are associated with shallow boundary-layer heights Duncan (2018); Baas _et al._ (2009), i.e., these jets can influence wind farm power production.
It has been reported that LLJs can increase capacity factors by 60% under nocturnal conditions Wilczak _et al._ (2015), and measurements in Western Oklahoma Greene _et al._ (2009) indicate that LLJs increase the power production compared to the case without jets. As a result, the importance, relevance, and urgency of research into LLJs for wind farm applications have been outlined by van Kuik _et al._ (2016) in their long-term European Research Agenda and in a recent review by Porté-Agel, Bastankhah, and Shamsoddin (2020). It is well established in the wind energy community that LLJs affect the performance of wind turbines Sisterson and Frenzen (1978). Below the jet height ($z_{\text{jet}}$) the velocity profile has a positive shear, and above the jet height there is a negative shear. The top panel of Fig. 1 shows a turbine with hub-height ($z_{h}$) lower than the jet height, i.e. $z_{\text{jet}}>z_{h}$, operating in the positive shear region. The bottom panel shows a turbine with hub-height higher than the jet height, i.e. $z_{\text{jet}}<z_{h}$, operating in the negative shear region. The potential temperature profile shows a significant surface inversion with a residual layer above. Above the surface inversion the boundary layer has negligible turbulence, and this region is associated with the negative shear, see Fig. 1. As noted above, LLJs generally form at the top of stable surface inversions Baas _et al._ (2009), above which the turbulence is negligible Blackadar (1957). During a LLJ event, the turbulence intensity and turbulence kinetic energy are lower than for unstable conditions Gutierrez _et al._ (2016). The effect of LLJs on wind turbine and wind farm performance has been studied before. Lu and Porté-Agel (2011) performed large eddy simulations (LES) of an ‘infinite’ wind farm in the stable boundary layer, and they report the formation of non-axisymmetric wakes and a decrease in the LLJ strength due to the energy extraction by the turbines. LLJ elimination due to wind turbine momentum extraction has also been reported in similar LES studies Abkar, Sharifi, and Porté-Agel (2016); Bhaganagar and Debnath (2015); Sharma, Parlange, and Calaf (2017); Ali _et al._ (2017). Also, mesoscale simulations in which wind farms are modeled as localized roughness elements show that LLJs are eliminated by wind farms Fitch, Olson, and Lundquist (2013). Furthermore, due to the velocity maximum and strong shear, both the power production and the fatigue loads on wind turbines are affected by the LLJ Gutierrez _et al._ (2017). Figure 1: Problem definition. The top figure shows turbines operating in positive shear when the LLJ is above the turbine rotor swept area ($z_{\text{jet}}>z_{h}$). The bottom figure shows turbines operating in negative shear when the LLJ is below the turbine rotor swept area ($z_{\text{jet}}<z_{h}$). We study whether the negative shear increases the energy extraction from the jet by creating an upward flux. The figure also shows that the turbulent momentum flux is negligible above the LLJ. The potential temperature $\theta$ profile reveals that the boundary layer is stably stratified and shows a residual layer with a constant temperature above the LLJ. In a fully developed wind farm boundary layer, the power production depends on the vertical entrainment fluxes from above, which are created by the momentum deficit inside the wind farm.
Wind turbines operating below the LLJ are subjected to positive shear and continuous turbulence. In that case, the downward entrainment fluxes are enhanced, due to which the energy of the LLJ is harvested Lu and Porté-Agel (2011); Na _et al._ (2018); Gadde and Stevens (2020). Recently, Doosttalab _et al._ (2020) conducted experiments in which they studied the interaction between a wind farm and a synthetic jet; they report that entrainment fluxes are enhanced due to the shear layer associated with the LLJ. However, situations might arise when the turbines have to operate in the negative shear region with reduced turbulence. Aeroelastic simulations of the interaction between a LLJ and a wind turbine show that the loading on a wind turbine decreases when it operates in the jet's negative shear region. Based on these simulations, Gutierrez _et al._ (2016) suggest installing turbines at heights where negative shear occurs. However, the region where negative shear occurs is also a region of reduced turbulence, which negatively affects wake recovery and hence the power production of downwind turbines Stevens and Meneveau (2017); Porté-Agel, Bastankhah, and Shamsoddin (2020); Ali _et al._ (2018). Previous studies have mostly focused on wind turbines and wind farms operating in the jet's positive shear region. The presence of negative shear and negligible turbulence above the LLJ leads to scenarios in which the wind farm and LLJ interaction is not straightforward. As mentioned before, when the LLJ is above the turbines, the momentum deficit creates a downward entrainment flux, which facilitates the extraction of the LLJ's energy, see the top schematic in Fig. 1. However, it is not yet explored whether the negative shear creates an upward entrainment flux when the LLJ is below the turbine rotor swept area, see the bottom schematic in Fig. 1. Therefore, the objective of the study is to understand how changing the LLJ height relative to the turbine hub-height affects wake recovery and power production of downstream turbines. This can be done in two ways: by keeping the jet height constant and changing the turbine height, or by simulating jets of different heights, which involves multiple spin-up simulations with different boundary layer properties. Changing the jet height is complicated and computationally expensive; therefore, we follow the most straightforward route and change the ratio $z_{\text{jet}}/z_{h}$ by changing the turbine height. In this work, we consider three different scenarios: * 1. the LLJ above the turbine rotor swept area, i.e. $z_{\text{jet}}>z_{h}$. * 2. the LLJ in the middle of the turbine rotor swept area, i.e. $z_{\text{jet}}\approx z_{h}$. * 3. the LLJ below the turbine rotor swept area, i.e. $z_{\text{jet}}<z_{h}$. In section II, the simulation methodology and the wind farm configuration are described. In section III the main observations are discussed, and in section IV the major conclusions are detailed. ## II Simulation methodology ### II.1 Governing equations We numerically integrate the filtered Navier-Stokes equations coupled with the Boussinesq approximation to model buoyancy.
The governing equations are: $\displaystyle\partial_{\mathit{i}}\widetilde{u}_{\mathit{i}}$ $\displaystyle=0,$ (1) $\displaystyle\begin{split}\partial_{\mathit{t}}\widetilde{u}_{\mathit{i}}+\partial_{\mathit{j}}\left(\widetilde{u}_{\mathit{i}}\widetilde{u}_{\mathit{j}}\right)&=-\partial_{\mathit{i}}\widetilde{p}-\partial_{\mathit{j}}\tau_{\mathit{ij}}+g\beta(\widetilde{\theta}-\widetilde{\theta}_{\mathit{0}})\delta_{\mathit{i3}}\\ &+f_{c}(U_{g}-\widetilde{u})\delta_{i2}-f_{c}(V_{g}-\widetilde{v})\delta_{i1}+\widetilde{f}_{x}\delta_{i1}+\widetilde{f}_{y}\delta_{i2},\end{split}$ (2) $\displaystyle\partial_{\mathit{t}}\widetilde{\theta}+\widetilde{u}_{\mathit{j}}\partial_{\mathit{j}}\widetilde{\theta}$ $\displaystyle=-\partial_{\mathit{j}}q_{\mathit{j}},$ (3) Here the tilde represents a spectral cut-off filter of size $\Delta$, $\widetilde{u}_{\mathit{i}}=\left(\widetilde{u},\widetilde{v},\widetilde{w}\right)$ and $\widetilde{\theta}$ are the filtered velocity and potential temperature, respectively, $g$ is the gravitational acceleration, $\beta=1/\theta_{\mathit{0}}$ is the buoyancy parameter with respect to the reference potential temperature $\theta_{\mathit{0}}$, $\delta_{\mathit{ij}}$ is the Kronecker delta, and $f_{c}$ is the Coriolis parameter. The ABL is forced by a mean pressure $p_{\infty}$, represented by the geostrophic wind with components $U_{g}=-\frac{1}{{\rho}f_{c}}\frac{\partial{p_{\infty}}}{\partial{y}}$ and $V_{g}=\frac{1}{{\rho}f_{c}}\frac{\partial{p_{\infty}}}{\partial{x}}$. $\widetilde{p}=\widetilde{p}^{*}/\rho+\sigma_{kk}/3$ is the modified pressure, which is the sum of the trace of the SGS stress, $\sigma_{kk}/3$, and the kinematic pressure $\widetilde{p}^{*}/\rho$, where $\rho$ is the density of the fluid. It is well established that the actuator disk model can capture the wake dynamics starting from $1$ to $2$ diameters downstream of the turbine sufficiently accurately Stevens and Meneveau (2017); Stevens, Martínez-Tossas, and Meneveau (2018); Wu and Porté-Agel (2011). Therefore, the actuator disk model can be used to study the large-scale flow phenomena in a wind farm, on which we focus here. We note that the actuator disk model cannot capture the vortex structures near the turbine due to the absence of the turbine blades Sørensen (2011); Troldborg, Sørensen, and Mikkelsen (2010); Stevens and Meneveau (2017). To capture such vortex structures, very high-resolution actuator line model simulations are required, which is not feasible for large wind farms Stevens, Martínez-Tossas, and Meneveau (2018). Therefore, we use a well-validated actuator disk model Jimenez _et al._ (2007, 2008); Calaf, Meneveau, and Meyers (2010); Stevens, Graham, and Meneveau (2014); Stevens, Gayme, and Meneveau (2016); Zhang, Arendshorst, and Stevens (2019); Gadde and Stevens (2019) in this study. The turbine forces $\widetilde{f}_{x}$ and $\widetilde{f}_{y}$ in equation (2) are modeled using the turbine force $F_{t}=-\frac{1}{2}\rho{C_{T}}{U^{2}_{\infty}}\frac{\pi}{4}D^{2},$ (4) where $C_{T}$ is the thrust coefficient and $U_{\infty}$ is the upstream undisturbed reference velocity. Equation (4) is only applicable to isolated turbines Jimenez _et al._ (2007, 2008). In wind farm simulations the upstream velocity $U_{\infty}$ cannot be readily specified.
Consequently, it is common practice Calaf, Meneveau, and Meyers (2010); Calaf, Parlange, and Meneveau (2011) to use actuator disk theory to relate $U_{\infty}$ to the rotor disk velocity $U_{d}$, $U_{\infty}=\frac{U_{d}}{\left(1-a\right)}$ (5) where $a$ is the axial induction factor. The turbine forces are calculated by substituting equation (5) in equation (4). For a detailed description and validation of the employed actuator disk model we refer the reader to Refs. Calaf, Meneveau, and Meyers (2010); Calaf, Parlange, and Meneveau (2011); Stevens, Martínez-Tossas, and Meneveau (2018). The terms involving molecular viscosity are neglected due to the high Reynolds number of the ABL flow. $\tau_{\mathit{ij}}=\widetilde{u_{\mathit{i}}u_{\mathit{j}}}-\widetilde{u}_{\mathit{i}}\widetilde{u}_{\mathit{j}}$ is the traceless part of the SGS stress tensor and $q_{\mathit{j}}=\widetilde{u_{\mathit{j}}\theta}-\widetilde{u}_{\mathit{j}}\widetilde{\theta}$ is the SGS heat flux tensor. The SGS stresses and heat fluxes are modeled as $\displaystyle\tau_{\mathit{ij}}$ $\displaystyle=\widetilde{u_{\mathit{i}}u_{\mathit{j}}}-\widetilde{u}_{\mathit{i}}\widetilde{u}_{\mathit{j}}=-2\nu_{T}\widetilde{S}_{ij}=-2(C_{s}\Delta)^{2}|\widetilde{S}|\widetilde{S}_{ij},$ (6) $\displaystyle q_{\mathit{j}}$ $\displaystyle=\widetilde{u_{\mathit{j}}\theta}-\widetilde{u}_{\mathit{j}}\widetilde{\theta}=-\nu_{\theta}\partial_{j}\widetilde{\theta}=-(D_{s}\Delta)^{2}|\widetilde{S}|\partial_{j}\widetilde{\theta},$ (7) where $\widetilde{S}_{ij}=\frac{1}{2}\left(\partial_{j}{\widetilde{u}_{i}}+\partial_{i}{\widetilde{u}_{j}}\right)$ is the grid-scale strain rate tensor, $\nu_{T}$ is the eddy viscosity, $C_{s}$ is the Smagorinsky coefficient for the SGS stresses, $\Delta$ is the grid size, $\nu_{\theta}$ is the eddy heat diffusivity, $D_{s}$ is the Smagorinsky coefficient for the SGS heat flux, and $|\widetilde{S}|=\sqrt{2\widetilde{S}_{ij}\widetilde{S}_{ij}}$. To model the SGS stresses without any ad-hoc modifications, we use a tuning-free, scale-dependent, dynamic model based on Lagrangian averaging of the coefficients Bou-Zeid, Meneveau, and Parlange (2005); Stoll and Porté-Agel (2006, 2008). The model has been found to be highly suitable for inhomogeneous flows such as the flow inside wind farms Stevens, Gayme, and Meneveau (2016). For further details and validation of the code we refer to Gadde, Stieren, and Stevens (2020). ### II.2 Numerical method We use a standard pseudo-spectral method to calculate the derivatives in the horizontal directions and a second-order central difference scheme to calculate the gradients in the vertical direction. A second-order Adams-Bashforth scheme is employed to advance the solution in time. The aliasing errors in the non-linear terms are removed by the 3/2 anti-aliasing method Canuto _et al._ (1988). The advective terms in the governing equations are written in the rotational form Ferziger and Perić (2002). We discretize the horizontal directions uniformly with $n_{x}$, $n_{y}$ grid points in the streamwise and spanwise directions, respectively. This results in grid sizes of $\Delta_{x}=L_{x}/n_{x}$ and $\Delta_{y}=L_{y}/n_{y}$ in the horizontal directions. In the vertical direction, we use a uniform grid up to a certain height, above which we use a stretched grid. The vertical grid size in the uniform region of the computational domain is represented by $\Delta_{z}$.
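As a side note, the actuator disk forcing of Eqs. (4) and (5) above can be sketched as follows; the thrust coefficient, induction factor, and air density below are illustrative assumptions, since the text does not fix their values:

```python
import numpy as np

def actuator_disk_force(u_disk, D=80.0, ct=0.75, a=0.25, rho=1.225):
    """Turbine thrust from the disk-averaged velocity U_d (Eqs. 4 and 5).

    ct, a, and rho are illustrative values; the paper does not specify them here.
    """
    u_inf = u_disk / (1.0 - a)                 # Eq. (5): undisturbed upstream velocity
    area = 0.25 * np.pi * D**2                 # rotor swept area, pi/4 * D^2
    return -0.5 * rho * ct * u_inf**2 * area   # Eq. (4): force opposes the flow

# Hypothetical usage: a disk-averaged velocity of 6 m/s gives the total thrust in newtons.
print(actuator_disk_force(6.0))
```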
The horizontal and vertical computational planes are staggered, such that for the horizontal velocity components, the first vertical grid point above the ground is located at $\Delta_{z}/2$. No-slip and free-slip boundary conditions are imposed at the lowest and the topmost computational plane, respectively. We use the Monin-Obukhov similarity theory Moeng (1984) to model the instantaneous stress and heat flux at the wall, using the velocity and temperature at the first grid point above the wall: $\displaystyle\tau_{i3|w}=-{u_{*}^{2}}\frac{\widetilde{u}_{i}}{\widetilde{u}_{r}}=-\Bigg(\frac{\widetilde{u}_{r}\kappa}{\text{ln}(\Delta{z}/2z_{o})-\psi_{M}}\Bigg)^{2}\frac{\widetilde{u}_{i}}{\widetilde{u}_{r}},$ (8) and $\displaystyle q_{*}$ $\displaystyle=\frac{u_{*}\kappa(\theta_{s}-\widetilde{\theta})}{\text{ln}(\Delta{z}/2z_{os})-\psi_{H}}.$ (9) In the above equations, $\widetilde{u}_{i}$ and $\widetilde{\theta}$ represent the filtered grid-scale velocities and potential temperature at the first grid point above the ground, $u_{*}$ is the friction velocity, $z_{o}$ is the roughness height for momentum, $z_{os}$ is the roughness height for the heat flux, $\kappa=0.4$ is the von Kármán constant, $\widetilde{u}_{r}=\sqrt{\widetilde{u}^{2}+\widetilde{v}^{2}}$ is the resolved velocity magnitude, and $\theta_{s}$ is the grid-scale potential temperature at the surface. $\psi_{M}$ and $\psi_{H}$ are the stability corrections for momentum and heat flux, respectively. We use the stability corrections of Beare _et al._ (2006) to simulate the stable boundary layer, i.e. $\psi_{M}=-4.8z/L$ and $\psi_{H}=-7.8z/L$, where $L=-({u_{*}}^{3}\theta_{0})/({\kappa}gq_{*})$ is the surface Obukhov length. We note that, for convenience, the tildes representing filtered LES quantities are omitted in the remainder of the paper. ### II.3 Boundary layer characteristics Table 1: The table gives the size of the computational domain and the grid resolution used in the streamwise ($n_{x}$), spanwise ($n_{y}$), and vertical ($n_{z}$) direction, respectively. $C_{r}$ is the surface cooling rate, $z_{i}$ is the boundary layer height, $z_{\text{jet}}$ is the jet height, $u_{*}$ is the friction velocity, $u_{\text{jet}}/G$ is the non-dimensionalized jet velocity, and $z_{i}/L$ represents the stability parameter.

Domain size | $n_{x}\times n_{y}\times n_{z}$ | $C_{r}$ [$\text{K}\cdot\text{h}^{-1}$] | $z_{i}$ [m] | $z_{\text{jet}}$ [m] | $u_{*}$ [m$\,\text{s}^{-1}$] | $u_{\text{jet}}/G$ | $z_{i}/L$
---|---|---|---|---|---|---|---
$11.52$ km $\times$ $4.6$ km $\times$ $3.84$ km | $1280\times 512\times 384$ | 0.50 | 131.6 | 125 | 0.192 | 1.21 | 2.95

Figure 2: (a) Horizontally averaged wind magnitude $u_{\text{mag}}/G$, (b) the vertical momentum flux, and (c) the temperature profiles plotted as a function of height. Height is normalized with the jet height. We consider a continuously turbulent, moderately stable ABL with a capping inversion at approximately 1000 m. The temperature profile is a slightly modified form of the one used in the LES of the second Global Energy and Water Cycle Experiment (GEWEX) ABL study (GABLS-2) single-column intercomparison setup Kumar _et al._ (2010). The boundary layer is initialized with a constant temperature of 286 K below 1000 m, a capping inversion of strength 6 K between 1000 m and 1150 m, followed by a constant temperature gradient of 5 $\text{K}\cdot\text{km}^{-1}$ above. The initial temperature profile is shown by the blue dashed lines in Fig. 2(c).
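For illustration, the wall model of Eqs. (8) and (9) with the stability corrections of Beare _et al._ (2006) can be sketched as follows; the function and variable names, and the usage values, are ours:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def wall_fluxes(u_r, theta1, theta_s, dz, z0, z0s, L):
    """Kinematic wall stress and surface heat flux from first-grid-point
    values (Eqs. 8 and 9); names are ours.

    u_r    : resolved horizontal velocity magnitude at z = dz/2
    theta1 : potential temperature at z = dz/2
    theta_s: surface potential temperature
    L      : surface Obukhov length
    """
    z = 0.5 * dz
    psi_m = -4.8 * z / L  # stability correction for momentum (Beare et al., 2006)
    psi_h = -7.8 * z / L  # stability correction for heat
    u_star = u_r * KAPPA / (np.log(z / z0) - psi_m)
    tau_w = -u_star**2    # magnitude; distributed over u, v as -u_star^2 * u_i / u_r
    q_star = u_star * KAPPA * (theta_s - theta1) / (np.log(z / z0s) - psi_h)
    return tau_w, q_star

# Hypothetical usage with the paper's surface parameters (z0 = 0.002 m, z0s = 0.0002 m).
print(wall_fluxes(u_r=5.0, theta1=286.0, theta_s=285.5, dz=5.0,
                  z0=0.002, z0s=0.0002, L=45.0))
```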
The roughness height is $z_{o}=0.002$ m for momentum, corresponding to offshore conditions Dörenkämper _et al._ (2015), and $z_{os}=z_{o}/10=0.0002$ m Brutsaert (1982) for modeling the heat flux. The surface is cooled at a constant rate of 0.5 $\text{K}\cdot\text{hour}^{-1}$. The geostrophic forcing is set to $G=(U_{g},V_{g})=(8.0,0.0)$ $\text{m}\,\text{s}^{-1}$ and the Coriolis parameter is set to $f_{c}=1.159\times 10^{-4}$ $\text{s}^{-1}$, corresponding to a latitude of $52.8^{\circ}$, which is representative for the Dutch North Sea. The velocity is initialized with the geostrophic velocity, and uniform random perturbations are added to the initial velocities and temperature up to a height of 500 m to trigger turbulence. We note that the boundary layer reaches a quasi-steady state at the end of the $8^{\text{th}}$ hour. The boundary layer characteristics relevant to this study are given in table 1. The jet height $z_{\text{jet}}$ is approximately $125$ m. The boundary layer height, defined as the height where the shear stress reaches 5% of its surface value Beare _et al._ (2006), is 131.6 m. The ratio of the boundary layer height to the surface Obukhov length is $z_{i}/L=2.95$. The scaling regimes reported by Holtslag and Nieuwstadt (1986) show that for $z_{i}/L<3$, stable boundary layers show negligible intermittency throughout the boundary layer. This confirms that the boundary layer is moderately stable Holtslag and Nieuwstadt (1986). The jet velocity is 9.68 m/s. It is worth mentioning here that LLJs with heights between 80 and 200 m and jet velocities of $8$ to $10$ m/s are frequently observed in the Dutch North Sea region Baas _et al._ (2009). Figure 2(a) shows the variation of the horizontally averaged velocity magnitude $u_{\text{mag}}=\left<\sqrt{\overline{u}^{2}+\overline{v}^{2}}\right>$ with height, and Fig. 2(b) the corresponding horizontally averaged vertical turbulent momentum flux $\tau=\left<\sqrt{(\overline{u^{\prime}w^{\prime}})^{2}+(\overline{v^{\prime}w^{\prime}})^{2}}\right>$, where $\overline{u^{\prime}w^{\prime}}=\left(\overline{{uw}}+\overline{\tau_{xz}}\right)-\overline{{u}}~{}\overline{{w}}$ and $\overline{v^{\prime}w^{\prime}}=\left(\overline{{vw}}+\overline{\tau_{yz}}\right)-\overline{{v}}~{}\overline{{w}}$. This figure reveals that there is negligible turbulence above the jet. Figure 2(c) presents the horizontally averaged potential temperature, with the surface inversion top at approximately 140 m. The inversion height is defined as the height at which the temperature gradient is highest. The inversion top acts as a lid separating the turbulent and non-turbulent regions of the boundary layer. The temperature profile shows a prominent residual layer above the LLJ. ### II.4 Computational domain and wind farm layout Figure 3: Schematic of the wind farm layout with the damping layer to prevent gravity waves and the fringe layer used in the concurrent precursor method. The black circles denote the wind turbine locations. All the flow statistics are sampled from the shaded region with dimensions $70D\times{20D}\times{D}$ centered around the wind farm; see also figure 6. The computational domain is $11.52$ km $\times$ $4.6$ km $\times$ $3.84$ km, which is discretized by $1280\times 512\times 384$ grid points. The grid points are uniformly distributed in the horizontal directions. This leads to a uniform grid resolution of 9 m in both the streamwise and spanwise directions.
In the vertical direction a grid spacing of 5 m is used up to 1500 m, above which the grid is slowly stretched. Here we emphasize that our simulations benefit from using an advanced Lagrangian dynamic SGS model, which has been shown to capture the dynamics of stable boundary layers very well Stoll and Porté-Agel (2008); Gadde and Stevens (2019). It is worth mentioning that, in the LES intercomparison of the most widely studied stable boundary layer, Beare _et al._ (2006) report that a grid size of 6.25 m produces reasonably acceptable results compared to high-resolution LES of stable boundary layers. To provide perspective: recently, Allaerts and Meyers (2018), in their simulation of wind farms in a stable boundary layer, used a horizontal resolution of 12.5 m and a vertical resolution of 5 m. Furthermore, Ali _et al._ (2017), in their simulations of wind farms in diurnal cycles, which also include stable boundary layers, used a horizontal resolution of 24.5 m and a vertical resolution of 7.8 m. So the resolution employed here is relatively high for simulations of such large wind farms. We employ the concurrent precursor technique Stevens, Graham, and Meneveau (2014) to introduce the inflow conditions sampled from the precursor simulation into the wind farm domain. LLJs generally occur over small regions and have limited spanwise width. Therefore, we use fringe layers in both the streamwise and spanwise directions to remove the effect of periodicity. A Rayleigh damping layer Klemp and Lilly (1978) with a damping constant of 0.016 $\text{s}^{-1}$ is used in the top 25$\%$ of the domain to damp out the gravity waves triggered by the wind farm. We consider a wind farm with 40 turbines distributed in 4 columns and 10 rows, see Fig. 3. We choose turbines of diameter $D=80$ m, and the turbines are separated by a distance of $7D$ and $5D$ in the streamwise and spanwise directions, respectively. The objective of our study is to investigate the effect of $z_{\text{jet}}/z_{h}$ on wake recovery and wind farm power production. To achieve that, we vary the turbine hub-height such that $z_{\text{jet}}>z_{h}$, $z_{\text{jet}}{\approx}z_{h}$, and $z_{\text{jet}}<z_{h}$; the corresponding hub-heights are $z_{h}=0.5z_{\text{jet}}$, $z_{\text{jet}}$, and $1.5z_{\text{jet}}$. The three cases represent the scenarios in which the LLJ is above, in the middle of, and below the turbine rotor swept area, respectively. We perform two additional simulations with $z_{h}/D=0.75$ and turbine diameters 160 m and 240 m to study the effect of the turbine diameter on wind farm performance. In these simulations, the turbines are separated by $720$ m in the streamwise direction. In the spanwise direction, for the cases with turbine diameters $160$ m and $240$ m, the turbines are separated by $480$ m and $720$ m, respectively. Calaf, Meneveau, and Meyers (2010) and Meyers and Meneveau (2013) showed that a resolution of $25$ m $\leq\Delta{x}\leq$ $50$ m in the streamwise direction and of $10$ m $\leq\Delta{y}\leq$ $25$ m in the spanwise direction is sufficient when an actuator disk method is used to model the turbines. Furthermore, Wu and Porté-Agel (2011) showed that one needs $8$ points along the diameter in the vertical direction and $5$ points along the diameter in the spanwise direction. Clearly, our simulations satisfy these criteria, as we use a $5$ m resolution in the vertical and a $9$ m resolution in the horizontal directions.
This means the turbine disk is discretized by $16$ points in the vertical direction and $9$ points in the spanwise direction. We use a proportional-integral (PI) controller Allaerts and Meyers (2015) to ensure that the mean wind direction at hub-height is from West to East. The wind angle controller has been successfully used in our previous study of wind farms in neutral and stable boundary layers Gadde and Stevens (2019) and ensures that the wind farm geometry is the same for all considered cases. Yaw misalignment due to local changes in the wind angle is prevented by rotating the actuator disks such that the disks are always perpendicular to the local wind angle. Figure 3 shows the wind farm layout and the dimensions of the different regions in the computational domain. ## III Results & discussions The simulations were carried out in two stages. In the first stage, only the boundary layer in the precursor domain is simulated. After quasi-steady conditions are reached, the turbines are introduced at the end of the $8^{\text{th}}$ hour. In this second stage, the simulations are continued for two more hours, and the statistics are collected in the last hour. Each simulation costs about 0.3 million CPU hours. In section III.1 the flow structures are analyzed, followed by a discussion of the power production, momentum flux, and wake recovery in section III.2. In section III.3, an energy budget analysis is presented in which we discuss the various processes affecting wind farm performance in the presence of a LLJ. ### III.1 Flow structures Figure 4: Normalized instantaneous velocity $u_{\text{mag}}/G$ at hub-height for the three cases, i.e. (a) $z_{\text{jet}}>z_{h}$, (b) $z_{\text{jet}}\approx z_{h}$, and (c) $z_{\text{jet}}<z_{h}$. Panels (d), (e), and (f) present the corresponding time-averaged turbulence intensity $\sigma_{u}/u_{\text{hub}}$, where $\sigma_{u}=\sqrt{2k/3}$ and $k$ is the turbulent kinetic energy. (g) Side view of the instantaneous streamwise velocity in an x-z plane through the second turbine column for the different cases. A visualization of the instantaneous velocity at hub-height is presented in Figs. 4(a), (b), and (c). Figures 4(d), (e), and (f) show the time-averaged turbulence intensity for all three cases. The turbulence intensity is calculated as $\sigma_{u}/u_{\text{hub}}$, where $\sigma_{u}=\sqrt{2k/3}$, $k=0.5(\overline{u^{\prime 2}}+\overline{v^{\prime 2}}+\overline{w^{\prime 2}})$ is the resolved turbulent kinetic energy, and $u_{\text{hub}}=\sqrt{\overline{{u}}^{2}+\overline{{v}}^{2}+\overline{{w}}^{2}}$ is the velocity at hub-height at the inlet. When the LLJ is above the turbines, small-scale structures are visible in the entrance region in front of the wind farm, see Fig. 4(a). In this case, the turbines operate in a completely turbulent region, and the wakes show significant turbulence towards the end of the wind farm, see Fig. 4(d). The wakes recover relatively fast due to the high atmospheric turbulence and the additional wake-generated turbulence. In contrast, Figs. 4(b) and 4(e) show less turbulence in the entrance region of the wind farm and behind the first turbine row for the $z_{h}\approx z_{\text{jet}}$ case. However, towards the rear of the wind farm, we observe significant turbulence. This effect is most prominent when the LLJ is below the turbines ($z_{\text{jet}}<z_{h}$), where we observe only marginal wake turbulence behind the first turbine row, see Figs. 4(c) and 4(f).
This will affect the wake recovery and consequently the power production of the second turbine row. The limited turbulence at hub-height at the farm entrance is due to the strong thermal stratification associated with the surface inversion top. However, after the first couple of rows, we observe significant turbulence created by the wakes. It is widely accepted that turbine wake meandering and the corresponding wake turbulence are related to the atmospheric turbulence Mao and Sørensen (2018); Larsen _et al._ (2008), and in the absence of atmospheric turbulence, the wake turbulence is also limited. In essence, the wake recovery is affected when turbines operate in the negative shear region above the LLJ. Figure 5: (a) The row-averaged power normalized with the power production of the first row. Results from the additional simulations with $D=160$ m and $D=240$ m are also included. (b) Planar-averaged streamwise vertical momentum flux versus height. (c) Spanwise-averaged streamwise velocity, normalized with the upstream velocity at hub-height, as a function of the streamwise location. (d) Streamwise variation of the turbulence intensity at hub-height for the three different cases. Figure 4(g) shows the side view of the wind farm in an x-z plane passing through the second turbine column. The top panel in Fig. 4(g) shows the turbines operating in a turbulent region. It is worth noting that the turbine wakes show significant wake turbulence, which aids the extraction of momentum from the LLJ. When $z_{\text{jet}}\approx z_{h}$, the turbines in the first row extract the energy in the jet, and the turbines operate in a well-mixed region after the second turbine row. Moreover, the LLJ reduces in strength after the first turbine row, and therefore rows that are further downstream cannot benefit from the jet anymore. However, when the LLJ is below the turbines ($z_{\text{jet}}<z_{h}$), the first couple of turbine rows are in a non-turbulent region, and therefore the turbulence after the first turbine row is limited. Furthermore, we observe no transverse wake meandering behind the first turbine row due to the low atmospheric turbulence in the thermally stratified region above the jet. Towards the rear of the wind farm, the jet's strength is reduced by the positive entrainment flux created from below due to the energy extraction by the turbines. This will be discussed in detail in the next section. ### III.2 Power production and wake recovery The row-averaged power normalized by the first row's power production is presented in Fig. 5(a). The turbine power is averaged over the $10^{\text{th}}$ hour of the simulation. Results from the additional simulations with diameters $160$ m and $240$ m are also included in the figure. When the LLJ is above the turbines ($z_{\text{jet}}>z_{h}$), we observe that the relative power production is higher, which means that the velocity recovers faster than in the other cases, see the plot of the wake recovery in Fig. 5(c). When $z_{h}\approx z_{\text{jet}}$, the power production continuously reduces towards the rear of the wind farm, see Fig. 5(a). The corresponding wake recovery shows that the velocity continuously drops in the downstream direction, which indicates that the wake recovery is negligible; see the dashed line in Fig. 5(c). Interestingly, when the jet is below the turbines ($z_{\text{jet}}<z_{h}$), the power production of the second row is severely affected due to the absence of turbulence in the wake of the first turbine row.
However, the power production increases further downstream due to wake-generated turbulence, and it shows an upward trend towards the back of the wind farm. For this case, the wake turbulence becomes significant for $x/D>30$, behind the second turbine row, and subsequently the turbines entrain high-momentum wind from the LLJ and the wakes recover significantly. For the additional cases with larger turbine diameters of 160 m and 240 m, we find that the overall trends in the normalized power production as a function of the downstream position remain the same, even though the streamwise turbine spacing is smaller. This confirms that the results presented in this study capture the relevant physics of the different scenarios, i.e. when the LLJ is below, in the middle of, or above the turbine rotor swept area. To understand the wake recovery and the associated power production of downstream turbines, the planar-averaged vertical turbulent flux of streamwise momentum $\left<\overline{u^{\prime}w^{\prime}}\right>$, normalized by its value at the wall, is plotted in Fig. 5(b). When the LLJ is above the turbine rotor swept area ($z_{\text{jet}}>z_{h}$), there is a significant negative (downward) momentum flux, which extracts the jet's momentum and eliminates the jet towards the rear of the wind farm. However, when the LLJ is below the turbines ($z_{\text{jet}}<z_{h}$), the turbines operate in the negative shear region, and a significant positive entrainment flux is created. As a result, the jet's energy is entrained towards the turbines, and the power production shows an upward trend towards the end of the wind farm, see Fig. 5(a). In essence, when the LLJ is below the turbine rotor swept area, the momentum deficit created by the turbines generates a significant positive turbulent flux from below due to the negative shear. This enhances the wake recovery further downstream, as elucidated below. For continuous production of turbulence, the product $\overline{u^{\prime}w^{\prime}}\frac{\partial{\overline{u}}}{\partial{z}}$ should be negative. Therefore, in the presence of positive shear ($\frac{\partial{\overline{u}}}{\partial{z}}$ positive), $\overline{u^{\prime}w^{\prime}}$ should be negative to produce turbulence. However, when the shear is negative ($\frac{\partial{\overline{u}}}{\partial{z}}$ negative), $\overline{u^{\prime}w^{\prime}}$ should be positive to sustain turbulence. The tendency of the velocity deficit in the turbine wakes is to create a positive entrainment flux below hub-height and a negative entrainment flux above hub-height. This leads to the following two scenarios: * 1. When the LLJ is above the turbine rotor swept area ($z_{\text{jet}}>z_{h}$), the LLJ energy is pulled towards the turbines by the momentum deficit created by the turbines. There is a significant downward entrainment flux, which is utilized by the turbines for power production, due to which the LLJ strength is reduced. In this case, $\overline{u^{\prime}w^{\prime}}$ is negative, the horizontally averaged $\frac{\partial{\overline{u}}}{\partial{z}}$ is positive, and there is a net negative vertical flux towards the turbines. * 2. When the LLJ is below the turbine rotor swept area ($z_{\text{jet}}<z_{h}$), the high-momentum LLJ, with the positive entrainment flux from below, aids power production. The turbines extract the LLJ energy transported by the positive entrainment fluxes.
To quantify the turbulence produced by the wakes, the streamwise variation of the horizontally averaged turbulence intensity at hub-height is plotted in Fig. 5(d). When the LLJ is above the turbine rotor ($z_{\text{jet}}>z_{h}$), the turbulence intensity upstream of the farm is 1.97%, while it is 1.0% and 0.46% for $z_{\text{jet}}\approx z_{h}$ and $z_{\text{jet}}<z_{h}$, respectively. We observe negligible wake turbulence behind the first turbine row in Fig. 4(d) when the LLJ is below the turbine rotor swept area ($z_{\text{jet}}<z_{h}$). However, after turbulence is created by the wakes, the turbulence intensity increases to approximately $4.4\%$ further downstream. It is clear from the above data that there is limited upstream turbulence when the LLJ is below the turbine rotor swept area. Consequently, there is negligible wake recovery until wake-generated turbulence develops. In essence, the negative shear above the jet creates a positive entrainment flux, which increases the turbulence intensity when the LLJ is below the turbine rotor swept area. This accelerates the wake recovery and allows the turbines to extract energy from the jet. The turbulence intensity for the $z_{\text{jet}}\approx z_{h}$ case develops in a very similar way to the ${z_{\text{jet}}}<z_{h}$ case, as in both cases it is mostly determined by the wake-added turbulence. However, when the LLJ is above the turbine rotor swept area, the turbulence intensity inside the wind farm is higher as the atmospheric turbulence interacts with the wind turbine wakes.

### III.3 Energy budget analysis

To further understand the different processes involved in the power production of a wind farm in the presence of a LLJ, we perform an energy budget analysis. The analysis is similar to the budget analysis performed by Allaerts and Meyers (Allaerts and Meyers, 2017) for wind farms in conventionally neutral boundary layers.
The steady-state, time-averaged energy equation is obtained by multiplying equation (2) with $\widetilde{u}_{i}$ Allaerts and Meyers (2017); Sagaut (2006) and performing time-averaging, which results in:

$\centering\begin{split}&\overbrace{\overline{u}_{j}\partial_{j}\left({\frac{1}{2}\overline{u}_{i}\overline{u}_{i}}+\frac{1}{2}\overline{u^{\prime}_{i}u^{\prime}_{i}}\right)}^{\text{Kinetic energy flux}}+\overbrace{\partial_{j}\left(\frac{1}{2}{\overline{u^{\prime}_{j}u^{\prime}_{i}u^{\prime}_{i}}}+\overline{u}_{i}\overline{u^{\prime}_{i}u^{\prime}_{j}}\right)}^{\text{ Turbulent transport}}+\overbrace{\partial_{j}\left(\overline{u_{i}\tau_{ij}}\right)}^{\text{SGS transport}}\\\ &=\overbrace{-\partial_{i}{\left(\overline{pu_{i}}\right)}}^{\text{Flow work}}+\overbrace{g\beta(\overline{u_{i}\theta}-\overline{u}_{i}\theta_{0})\delta_{i3}}^{\text{Buoyancy}}+\overbrace{f_{c}\left(\overline{u}_{i}U_{g}\right)\delta_{i2}-f_{c}\left(\overline{u}_{i}V_{g}\right)\delta_{i1}}^{\text{Geostrophic forcing}}\\\ &+\overbrace{\overline{f_{i}u_{i}}}^{\text{Turbine power}}+\overbrace{\overline{\tau_{ij}S_{ij}}}^{\text{Dissipation}},\end{split}\@add@centering$ (10)

where the time-averaging is represented by the overline, and $\overline{u^{\prime}_{i}u^{\prime}_{j}}=\left(\overline{{u_{i}u_{j}}}\right)-\overline{{u}_{i}}~{}\overline{{u}_{j}}$ indicates the momentum fluxes. To obtain the total power produced by each row, we numerically integrate each term in equation (10) over a control volume surrounding each row. The control volume is chosen such that it encloses a row of the wind farm, see Fig. 6. We note here that the fringe layers are not included in the control volume. Performing the integration and rearranging equation (10) gives

$\centering\begin{split}\overbrace{\int_{\forall}\overline{f_{i}u_{i}}d\forall}^{\text{$\mathbb{P}$, Turbine power}}&=\overbrace{\int_{\forall}\overline{u}_{j}\partial_{j}\left({\frac{1}{2}\overline{u}_{i}\overline{u}_{i}}+\frac{1}{2}\overline{u^{\prime}_{i}u^{\prime}_{i}}\right)d\forall}^{\text{$\mathbb{E}_{k}$, Kinetic energy flux}}\\\ &+\overbrace{\int_{\forall}\partial_{j}\left(\frac{1}{2}{\overline{u^{\prime}_{j}u^{\prime}_{i}u^{\prime}_{i}}}+\overline{u}_{i}\overline{u^{\prime}_{i}u^{\prime}_{j}}\right)d\forall}^{\text{$\mathbb{T}_{\text{t}}$, Turbulent transport}}+\overbrace{\int_{\forall}\partial_{j}\left(\overline{u_{i}\tau_{ij}}\right)d\forall}^{\text{$\mathbb{T_{\text{sgs}}}$, SGS transport}}\\\ &+\overbrace{\int_{\forall}\partial_{i}{\left(\overline{pu_{i}}\right)}d\forall}^{\text{$\mathbb{F}$, Flow work}}-\overbrace{\int_{\forall}g\beta(\overline{u_{i}\theta}-\overline{u}_{i}\theta_{0})\delta_{i3}d\forall}^{\text{$\mathbb{B}$, Buoyancy}}\\\ &-\overbrace{\int_{\forall}f_{c}\left(\overline{u}_{i}U_{g}\right)\delta_{i2}-f_{c}\left(\overline{u}_{i}V_{g}\right)\delta_{i1}d\forall}^{\text{$\mathbb{G}$, Geostrophic forcing}}-\overbrace{\int_{\forall}\overline{\tau_{ij}S_{ij}}d\forall}^{\text{$\mathbb{D}$, Dissipation}},\end{split}\@add@centering$ (11)

where $\mathbb{P}$ is the power produced by a turbine row, $\mathbb{E}_{k}$ is the kinetic energy flux containing the resolved kinetic energy, $\mathbb{T}_{t}$ is the turbulent transport, which involves the transport of mean flow energy by turbulence Tennekes and Lumley (1972) and higher-order turbulence terms, $\mathbb{T}_{\text{sgs}}$ is the mean energy transport by SGS stresses, $\mathbb{F}$ is the flow work, which is the pressure drop across the turbines, $\mathbb{B}$ is the turbulence destruction caused by buoyancy under stable stratification, $\mathbb{G}$ represents the geostrophic forcing driving the flow, and $\mathbb{D}$ is the SGS dissipation.

Figure 6: The shaded region shows the control volume used in the energy budget analysis. The control volume around each turbine row has dimensions of $7\text{D}\times{20\text{D}}\times{\text{D}}$ in the streamwise, spanwise, and vertical directions, respectively. In the vertical direction the control volume starts at a height of $z_{h}-D/2$.

Figure 7: Energy budget for (a) $z_{\text{jet}}>z_{h}$, (b) $z_{\text{jet}}\approx z_{h}$, and (c) $z_{\text{jet}}<z_{h}$.
(d) Integrated entrainment flux over the top and bottom planes of the control volume. Dashed lines with filled symbols and solid lines with open symbols represent $\mathbb{T}_{t}$ on the top and bottom plane, respectively. (e) Normalized net entrainment for the different cases.

Figure 7(a), (b), and (c) present the energy budget analysis for the cases when the LLJ is above ($z_{\text{jet}}>z_{h}$), in the middle of ($z_{\text{jet}}\approx z_{h}$), or below ($z_{\text{jet}}<z_{h}$) the turbine rotor swept area, respectively. All the terms are normalized by the absolute value of the power produced by the first turbine row. This normalization provides insight into the effect of wake recovery on power production. The SGS transport term $\mathbb{T}_{\text{sgs}}$ and the buoyancy term $\mathbb{B}$ are negligible and left out of the plots for brevity. In all panels, the energy sources are positive, and sinks are negative. Both turbine power and dissipation act as energy sinks in the boundary layer.

Fig. 7(a) shows that when the LLJ is above the turbine rotor swept area ($z_{\text{jet}}>z_{h}$), the kinetic energy $\mathbb{E}_{k}$ continuously decreases in the downstream direction. This reduction in the mean kinetic energy is compensated by the turbulent transport term $\mathbb{T}_{t}$. The turbulent transport slightly reduces after the sixth turbine row due to the reduction in the strength of the LLJ. In this case, the downward entrainment of the fluxes compensates for the decrease in mean kinetic energy. In a fully developed wind farm boundary layer, the power production is completely balanced by the turbulent entrainment from above Calaf, Meneveau, and Meyers (2010); Cal _et al._ (2010). When the turbines operate in the positive shear region, the entrainment from above replenishes the energy extracted by the turbines. In addition to entrainment, the geostrophic forcing $\mathbb{G}$ and the pressure drop $\mathbb{F}$ act as additional energy sources. In contrast, turbulence destruction by buoyancy $\mathbb{B}$ and dissipation $\mathbb{D}$ remove energy from the control volume. When the LLJ is above the turbine rotor swept area ($z_{\text{jet}}>z_{h}$), there is positive shear in the boundary layer, due to which there is significant entrainment and wake recovery.

When the LLJ is in the middle of the turbine rotor swept area ($z_{\text{jet}}\approx z_{h}$), while the kinetic energy $\mathbb{E}_{k}$ continuously decreases, the entrainment $\mathbb{T}_{t}$ is reduced, as the energy in the LLJ is extracted by the upwind turbines, reducing the entrainment for the rest of the turbines. Furthermore, the turbulent entrainment $\mathbb{T}_{t}$ is nearly equal to the turbulence dissipation $\mathbb{D}$, which limits the contribution of turbulence to power production.
When the LLJ is below the turbine rotor swept area ($z_{\text{jet}}<z_{h}$), the contribution of the kinetic energy $\mathbb{E}_{k}$ decreases continuously with downstream position in the wind farm, and the turbulent transport $\mathbb{T}_{t}$ is smaller than for the case when the LLJ is above the turbines (Fig. 7(c)). The turbulent transport $\mathbb{T}_{t}$ is created entirely by the wake turbulence and the momentum deficit created by the turbines. The power production is mainly due to the mean flow energy extraction $\mathbb{E}_{k}$ and the entrainment due to the positive entrainment flux. In essence, the wake recovery and the entrainment due to turbulent transport are affected when the LLJ is below the turbine rotor swept area.

To further elucidate the effect of entrainment on power production, we plot the integrated vertical entrainment flux on the top and bottom planes of the control volume in Fig. 7(d); a minimal sketch of how such plane-integrated fluxes can be evaluated is given below. Open symbols represent the integrated flux on the bottom plane, and filled symbols represent the integrated flux on the top plane of the control volume. When the LLJ is above the turbine rotor swept area ($z_{\text{jet}}>z_{h}$), the negative flux from the top is dominant. However, when the LLJ is below the turbine rotor swept area ($z_{\text{jet}}<z_{h}$), there is a significant positive flux in the bottom plane, indicating positive entrainment from below. This clearly shows that there is a significant positive entrainment flux towards the turbine rotors when the LLJ is below the turbine rotor swept area. This is beneficial for the power production of turbines further downstream.
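The plane-integrated fluxes shown in Fig. 7(d) can be approximated from time-averaged LES fields in a straightforward way. The following minimal Python sketch illustrates this; the array names, shapes, grid spacings, and vertical indices are assumptions made for illustration, not the study's actual post-processing code.

```python
# Minimal sketch: plane-integrated vertical entrainment flux u_bar * u'w'
# on the top and bottom faces of a control volume (cf. the turbulent
# transport term and Fig. 7(d)). Array names, shapes (nx, ny, nz), and
# grid spacings are illustrative assumptions.
import numpy as np

def plane_flux(u_mean, uw_flux, dx, dy, k):
    """Integrate u_bar * u'w' over the horizontal plane at vertical index k."""
    integrand = u_mean[:, :, k] * uw_flux[:, :, k]
    return integrand.sum() * dx * dy   # uniform-grid approximation

def net_entrainment(u_mean, uw_flux, dx, dy, k_bot, k_top):
    """Net vertical flux into the volume bounded by the two planes."""
    return (plane_flux(u_mean, uw_flux, dx, dy, k_bot)
            - plane_flux(u_mean, uw_flux, dx, dy, k_top))

# Example with synthetic fields: a positive u'w' on the bottom plane, as in
# the z_jet < z_h case, yields a positive net flux into the volume.
nx, ny, nz = 64, 32, 48
u_mean = np.full((nx, ny, nz), 8.0)       # mean streamwise velocity [m/s]
uw_flux = np.zeros((nx, ny, nz))
uw_flux[:, :, 10] = +0.05                 # bottom plane: positive flux
uw_flux[:, :, 40] = -0.01                 # top plane: weak negative flux
print(net_entrainment(u_mean, uw_flux, dx=15.0, dy=15.0, k_bot=10, k_top=40))
```

With a positive flux imposed on the bottom plane, as in the $z_{\text{jet}}<z_{h}$ case, the net vertical flux into the volume is positive, consistent with the entrainment from below discussed above.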
Figure 7(e) provides a comparison of the net entrainment $|\mathbb{T}_{t}|$ for all three cases. The figure shows that the entrainment is strongest when the LLJ is above the turbine rotor swept area ($z_{\text{jet}}>z_{h}$) and weakest when $z_{\text{jet}}\approx z_{h}$. When the LLJ is in the middle of the turbine rotor swept area ($z_{\text{jet}}\approx z_{h}$), the entrainment is affected as the LLJ energy is mostly extracted by the turbines in the first couple of rows. When the LLJ is below the turbine rotor swept area ($z_{\text{jet}}<z_{h}$), there is increased entrainment due to the positive entrainment flux. This creates a stronger turbulent transport $\mathbb{T}_{t}$ than for the $z_{\text{jet}}\approx z_{h}$ case, but not as strong as for the case when the LLJ is above the turbine rotor swept area.

## IV Conclusions

We performed LES of wind farms to study the effect of the LLJ height relative to the turbine height on the interaction between LLJs and large wind farms, see Fig. 1. We considered three scenarios, wherein the LLJ is above, below, and in the middle of the turbine rotor swept area. We find that the relative power production of the turbines further downstream in the wind farm depends on the jet height relative to the hub-height. The power production relative to the first-row power is maximum when the LLJ is above the turbine rotor swept area due to the higher turbulence intensity below the LLJ, wherein the atmospheric turbulence adds to the wake-generated turbulence and leads to a faster wake recovery. However, when the LLJ is below the turbine rotor swept area, the turbines operate in the negative shear region of the LLJ, in which the atmospheric turbulence is limited and the thermal stability is strong. In the absence of atmospheric turbulence, the wakes are very stable Mao and Sørensen (2018); Keck _et al._ (2014), and the wake recovery is slow. However, after the first two turbine rows, the wakes generate sufficient turbulence to promote the wake recovery further downstream.

The energy budget analysis reveals that the vertical entrainment dominates the power production when the LLJ is above the turbine rotor swept area. In contrast, when the LLJ is in the middle of the turbine rotor swept area, the jet’s energy is extracted by the first turbine row, and the rest of the rows do not directly benefit from the jet. Interestingly, when the LLJ is below the turbine rotor swept area, the mean negative shear and the shear created by the wakes create a positive entrainment flux from below, which helps turbines further downstream to harvest the jet’s energy. Although the negative shear above the LLJ creates a positive turbulent entrainment flux, the turbulence production it creates is limited due to the high thermal stratification above the jet, i.e. the flux that is created is smaller than when the LLJ is above the turbines. Gutierrez _et al._ (2017) report a reduction in the turbine loads due to the negative shear in the LLJ and therefore suggest installing turbines such that they operate in this region. Our results show that the wake recovery is affected when the turbines operate in the negative shear region, and therefore, this placement might not be beneficial in terms of wake recovery. Here, we emphasize again that we used a generalized LLJ to study the physical phenomena that result from the interaction of a LLJ with a large wind farm. However, further work will be required to investigate the effects of stronger thermal stratification, complex terrain, the strength and direction of the geostrophic wind, and the transition from land to sea on the LLJ characteristics, and how these affect the performance of wind farms.

## Author’s contributions

All authors contributed equally to the manuscript.

###### Acknowledgements. We thank the anonymous referees whose comments have been invaluable in improving the quality of the manuscript. This work is part of the Shell-NWO/FOM-initiative Computational sciences for energy research of Shell and Chemical Sciences, Earth and Life Sciences, Physical Sciences, FOM, and STW. This work was carried out on the national e-infrastructure of SURFsara, a subsidiary of SURF corporation, the collaborative ICT organization for Dutch education and research, and was supported by an STW VIDI grant (No. 14868).

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## References

* Abkar, Sharifi, and Porté-Agel (2016) Abkar, M., Sharifi, A., and Porté-Agel, F., “Wake flow in a wind farm during a diurnal cycle,” J. Turb. 625, 012031 (2016). * Ali _et al._ (2017) Ali, N., Cortina, G., Hamilton, N., Calaf, M., and Cal, R. B., “Turbulence characteristics of a thermally stratified wind turbine array boundary layer via proper orthogonal decomposition,” J. Fluid Mech. 828, 175–95 (2017). * Ali _et al._ (2018) Ali, N., Hamilton, N., Cortina, G., and Calaf, M., “Anisotropy stress invariants of thermally stratified wind turbine array boundary layers using large eddy simulations,” J. Renew. Sustain. Energy 10, 013301 (2018). * Allaerts and Meyers (2015) Allaerts, D. and Meyers, J., “Large eddy simulation of a large wind-turbine array in a conventionally neutral atmospheric boundary layer,” Phys. Fluids 27, 065108 (2015). * Allaerts and Meyers (2017) Allaerts, D.
and Meyers, J., “Boundary-layer development and gravity waves in conventionally neutral wind farms,” J. Fluid Mech. 814, 95–130 (2017). * Allaerts and Meyers (2018) Allaerts, D. and Meyers, J., “Gravity waves and wind-farm efficiency in neutral and stable conditions,” Boundary-Layer Meteorol. 166, 269 (2018). * Arritt _et al._ (1997) Arritt, R. W., Rink, T. D., Segal, M., Todey, D. P., Clark, C. A., Mitchell, M. J., and Labas, K. M., “The Great Plains low-level jet during the warm season of 1993,” Mon. Weather Rev. 125, 2176–2192 (1997). * Baas _et al._ (2009) Baas, P., Bosveld, F. C., Baltink, H. K., and Holtslag, A. A. M., “A climatology of nocturnal low-level jets at Cabauw,” J. Appl. Meteor. Climatol. 48, 1627–1642 (2009). * Banta (2008) Banta, R. M., “Stable-boundary-layer regimes from the perspective of the low-level jet,” Acta Geophysica 56, 58–87 (2008). * Banta _et al._ (2002) Banta, R. M., Newsom, R. K., Lundquist, J. K., Pichugina, Y. L., Coulter, R. L., and Mahrt, L., “Nocturnal low-level jet characteristics over Kansas during CASES-99,” Boundary-Layer Meteorol. 105, 221–252 (2002). * Beare _et al._ (2006) Beare, R. J., Macvean, M. K., Holtslag, A. A. M., Cuxart, J., Esau, I., Golaz, J.-C., Jimenez, M. A., Khairoutdinov, M., Kosovic, B., Lewellen, D., Lund, T. S., Lundquist, J. K., Mccabe, A., Moene, A. F., Noh, Y., Raasch, S., and Sullivan, P., “An intercomparison of large eddy simulations of the stable boundary layer,” Boundary-Layer Meteorol. 118, 247–272 (2006). * Bhaganagar and Debnath (2015) Bhaganagar, K. and Debnath, M., “The effects of mean atmospheric forcings of the stable atmospheric boundary layer on wind turbine wake,” J. Renew. Sustain. Energy 7, 013124 (2015). * Blackadar (1957) Blackadar, A. K., “Boundary layer wind maxima and their significance for the growth of nocturnal inversions,” Bull. Am. Meteorol. Soc. 38, 283–290 (1957). * Bou-Zeid, Meneveau, and Parlange (2005) Bou-Zeid, E., Meneveau, C., and Parlange, M. B., “A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows,” Phys. Fluids 17, 025105 (2005). * Brutsaert (1982) Brutsaert, W., _Evaporation into the atmosphere: theory, history and applications_ , Vol. 1 (1982). * Cal _et al._ (2010) Cal, R. B., Lebrón, J., Castillo, L., Kang, H. S., and Meneveau, C., “Experimental study of the horizontally averaged flow structure in a model wind-turbine array boundary layer,” J. Renew. Sustain. Energy 2, 013106 (2010). * Calaf, Meneveau, and Meyers (2010) Calaf, M., Meneveau, C., and Meyers, J., “Large eddy simulations of fully developed wind-turbine array boundary layers,” Phys. Fluids 22, 015110 (2010). * Calaf, Parlange, and Meneveau (2011) Calaf, M., Parlange, M. B., and Meneveau, C., “Large eddy simulation study of scalar transport in fully developed wind-turbine array boundary layers,” Phys. Fluids 23, 126603 (2011). * Canuto _et al._ (1988) Canuto, C., Hussaini, M. Y., Quarteroni, A., and Zang, T. A., _Spectral Methods in Fluid Dynamics_ (Springer, Berlin, 1988). * Doosttalab _et al._ (2020) Doosttalab, A., Siguenza-Alvarado, D., Pulletikurthi, V., Jin, Y., Evans, H. B., Chamorro, L. P., and Castillo, L., “Interaction of low-level jets with wind turbines: On the basic mechanisms for enhanced performance,” J. Renew. Sustain. Energy 12, 053301 (2020). * Dörenkämper _et al._ (2015) Dörenkämper, M., Witha, B., Steinfeld, G., Heinemann, D., and Kühn, M., “The impact of stable atmospheric boundary layers on wind-turbine wakes within offshore wind farms,” J. Wind Eng. Ind. 
Aerodyn. 144, 146–153 (2015). * Duncan (2018) Duncan, J. B., _Observational analyses of the North Sea low-level jet_ (Petten: TNO, 2018). * Ferziger and Perić (2002) Ferziger, J. H. and Perić, M., _Computational methods for fluid dynamics_ (Springer, 2002). * Fitch, Olson, and Lundquist (2013) Fitch, A. C., Olson, J. B., and Lundquist, J. K., “Parameterization of wind farms in climate models,” J. Wind Eng. Ind. Aerodyn. 26, 6439–6458 (2013). * Gadde and Stevens (2019) Gadde, S. N. and Stevens, R. J. A. M., “Effect of Coriolis force on a wind farm wake,” J. Phys. Conf. Ser. 1256, 012026 (2019). * Gadde and Stevens (2020) Gadde, S. N. and Stevens, R. J. A. M., “Interaction between low-level jets and wind farms in a stable atmospheric boundary layer,” submitted (2020). * Gadde, Stieren, and Stevens (2020) Gadde, S. N., Stieren, A., and Stevens, R. J. A. M., “Large-eddy simulations of stratified atmospheric boundary layers: Comparison of different subgrid models,” Boundary-Layer Meteorol. , 1–20 (2020). * Greene _et al._ (2009) Greene, S., McNabb, K., Zwilling, R., Morrissey, M., and Stadler, S., “Analysis of vertical wind shear in the southern great plains and potential impacts on estimation of wind energy production,” Int. J. Global Energy 32, 191–211 (2009). * Gutierrez _et al._ (2016) Gutierrez, W., Araya, G., Kiliyanpilakkil, P., Ruiz-Columbie, A., Tutkun, M., and Castillo, L., “Structural impact assessment of low level jets over wind turbines,” J. Renew. Sustain. Ener. 8, 023308 (2016). * Gutierrez _et al._ (2017) Gutierrez, W., Ruiz-Columbie, A., Tutkun, M., and Castillo, L., “Impacts of the low-level jet’s negative wind shear on the wind turbine,” Wind Energy Science 2, 533–545 (2017). * Holtslag and Nieuwstadt (1986) Holtslag, A. A. M. and Nieuwstadt, F. T. M., “Scaling the atmospheric boundary layer,” Boundary-Layer Meteorol. 36, 201–209 (1986). * Jimenez _et al._ (2007) Jimenez, A., Crespo, A., Migoya, E., and Garcia, J., “Advances in large-eddy simulation of a wind turbine wake,” J. Phys. Conf. Ser. 75, 012041 (2007). * Jimenez _et al._ (2008) Jimenez, A., Crespo, A., Migoya, E., and Garcia, J., “Large-eddy simulation of spectral coherence in a wind turbine wake,” Environ. Res. Lett. 3, 015004 (2008). * Kalverla _et al._ (2019) Kalverla, P. C., Duncan Jr., J. B., Steeneveld, G.-J., and Holtslag, A. A. M., “Low-level jets over the North Sea based on ERA5 and observations: together they do better,” Wind Energy Science 4, 193–209 (2019). * Kalverla _et al._ (2017) Kalverla, P. C., Steeneveld, G.-J., Ronda, R. J., and Holtslag, A. A. M., “An observational climatology of anomalous wind events at offshore meteomast IJmuiden (North Sea),” J. Wind. Eng. Ind. Aerodyn. 165, 86–99 (2017). * Keck _et al._ (2014) Keck, R. E., de Maré, M., Churchfield, M. J., Lee, S., Larsen, G., and Madsen, H. A., “On atmospheric stability in the dynamic wake meandering model,” Wind Energy 17, 1689–1710 (2014). * Kelley _et al._ (2004) Kelley, N., Shirazi, M., Jager, D., Wilde, S., Adams, J., Buhl, M., Sullivan, P., and Patton, E., “Lamar low-level jet project interim report,” National Renewable Energy Laboratory, National Wind Technology Center, Golden, CO, Technical Paper No. NREL/TP-500-34593 (2004). * Klemp and Lilly (1978) Klemp, J. B. and Lilly, D. K., “Numerical simulation of hydrostatic mountain waves,” J. Atmos. Sci. 68, 46–50 (1978). * van Kuik _et al._ (2016) van Kuik, G. A. M., Peinke, J., Nijssen, R., Lekou, D., Mann, J., Sørensen, J. N., Ferreira, C., van Wingerden, J. 
W., Schlipf, D., Gebraad, P., Polinder, H., Abrahamsen, A., van Bussel, G. J. W., Sørensen, J. D., Tavner, P., Bottasso, C. L., Muskulus, M., Matha, D., Lindeboom, H. J., Degraer, S., Kramer, O., Lehnhoff, S., Sonnenschein, M., Sørensen, P. E., Künneke, R. W., Morthorst, P. E., and Skytte, K., “Long-term research challenges in wind energy - a research agenda by the European Academy of Wind Energy,” Wind Energy Science 1, 1–39 (2016). * Kumar _et al._ (2010) Kumar, V., Svensson, G., Holtslag, A. A. M., Meneveau, C., and Parlange, M. B., “Impact of surface flux formulations and geostrophic forcing on large-eddy simulations of diurnal atmospheric boundary layer flow,” J. Appl. Meteorol. Climatol. 49, 1496–1516 (2010). * Larsen _et al._ (2008) Larsen, G. C., Madsen, H. A., Thomsen, K., and Larsen, T. J., “Wake meandering: A pragmatic approach,” Wind Energy 11, 377–395 (2008). * Liu _et al._ (2014) Liu, H., He, M., Wang, B., and Zhang, Q., “Advances in low-level jet research and future prospects,” J. Meteorol. Res. 28, 57–75 (2014). * Lu and Porté-Agel (2011) Lu, H. and Porté-Agel, F., “Large-eddy simulation of a very large wind farm in a stable atmospheric boundary layer,” Phys. Fluids 23, 065101 (2011). * Mao and Sørensen (2018) Mao, X. and Sørensen, J. N., “Far-wake meandering induced by atmospheric eddies in flow past a wind turbine,” J. Fluid Mech. 846, 190–209 (2018). * Meyers and Meneveau (2013) Meyers, J. and Meneveau, C., “Flow visualization using momentum and energy transport tubes and applications to turbulent flow in wind farms,” J. Fluid Mech. 715, 335–358 (2013). * Moeng (1984) Moeng, C.-H., “A large-eddy simulation model for the study of planetary boundary-layer turbulence,” J. Atmos. Sci. 41, 2052–2062 (1984). * Na _et al._ (2018) Na, J. S., Koo, E., Jin, E. K., Linn, R., Ko, S. C., Muñoz-Esparza, D., and Lee, J. S., “Large-eddy simulations of wind-farm wake characteristics associated with a low-level jet,” Wind Energy 21, 163–173 (2018). * Porté-Agel, Bastankhah, and Shamsoddin (2020) Porté-Agel, F., Bastankhah, M., and Shamsoddin, S., “Wind-turbine and wind-farm flows: A review,” Boundary-Layer Meteorol. 74, 1–59 (2020). * Prabha _et al._ (2011) Prabha, T. V., Goswami, B. N., Murthy, B. S., and Kulkarni, J. R., “Nocturnal low-level jet and ‘atmospheric streams’ over the rain shadow region of the Indian Western Ghats,” Q. J. R. Meteorol. Soc. 137, 1273–1287 (2011). * Sagaut (2006) Sagaut, P., _Large eddy simulation for incompressible flows: an introduction_ (Springer Science & Business Media, 2006). * Sharma, Parlange, and Calaf (2017) Sharma, V., Parlange, M. B., and Calaf, M., “Perturbations to the spatial and temporal characteristics of the diurnally-varying atmospheric boundary layer due to an extensive wind farm,” Boundary-Layer Meteorol. 162, 255–282 (2017). * Sisterson and Frenzen (1978) Sisterson, D. L. and Frenzen, P., “Nocturnal boundary-layer wind maxima and the problem of wind power assessment,” Environ. Sci. Technol. 12, 218–221 (1978). * Smedman, Högström, and Bergström (1996) Smedman, A., Högström, U., and Bergström, H., “Low level jets–a decisive factor for off-shore wind energy siting in the Baltic Sea,” Wind Engineering 20, 137–147 (1996). * Smedman, Tjernström, and Högström (1993) Smedman, A.-S., Tjernström, M., and Högström, U., “Analysis of the turbulence structure of a marine low-level jet,” Boundary-Layer Meteorol. 66, 105–126 (1993). * Sørensen (2011) Sørensen, J. N., “Aerodynamic aspects of wind energy conversion,” Annu. Rev. Fluid Mech.
43, 427–448 (2011). * Stevens, Gayme, and Meneveau (2016) Stevens, R. J. A. M., Gayme, D. F., and Meneveau, C., “Generalized coupled wake boundary layer model: applications and comparisons with field and LES data for two real wind farms,” Wind Energy 19, 2023–2040 (2016). * Stevens, Graham, and Meneveau (2014) Stevens, R. J. A. M., Graham, J., and Meneveau, C., “A concurrent precursor inflow method for large eddy simulations and applications to finite length wind farms,” Renewable Energy 68, 46–50 (2014). * Stevens, Martínez-Tossas, and Meneveau (2018) Stevens, R. J. A. M., Martínez-Tossas, L. A., and Meneveau, C., “Comparison of wind farm large eddy simulations using actuator disk and actuator line models with wind tunnel experiments,” Renewable Energy 116, 470–478 (2018). * Stevens and Meneveau (2017) Stevens, R. J. A. M. and Meneveau, C., “Flow structure and turbulence in wind farms,” Annu. Rev. Fluid Mech. 49, 311–339 (2017). * Stoll and Porté-Agel (2006) Stoll, R. and Porté-Agel, F., “Effects of roughness on surface boundary conditions for large-eddy simulation,” Boundary-Layer Meteorol. 118, 169–187 (2006). * Stoll and Porté-Agel (2008) Stoll, R. and Porté-Agel, F., “Large-eddy simulation of the stable atmospheric boundary layer using dynamic models with different averaging schemes,” Boundary-Layer Meteorol. 126, 1–28 (2008). * Tennekes and Lumley (1972) Tennekes, H. and Lumley, J. L., _A first course in turbulence_ (MIT press, 1972). * Troldborg, Sørensen, and Mikkelsen (2010) Troldborg, N., Sørensen, J. N., and Mikkelsen, R., “Numerical simulations of wake characteristics of a wind turbine in uniform inflow,” Wind Energy 13, 86–99 (2010). * Wagner _et al._ (2019) Wagner, D., Steinfeld, G., Witha, B., Wurps, H., and Reuder, J., “Low level jets over the Southern North Sea,” Meteorol. Zeitschrift 28, 389–415 (2019). * Wilczak _et al._ (2015) Wilczak, J., Finley, C., Freedman, J., Cline, J., Bianco, L., Olson, J., Djalalova, I., Sheridan, L., Ahlstrom, M., Manobianco, J., Zack, J., Carley, J. R., Benjamin, S., Coulter, R., Berg, L. K., Mirocha, J., Clawson, K., Natenberg, E., and Marquis, M., “The Wind Forecast Improvement Project (WFIP): A public–private partnership addressing wind energy forecast needs,” Bull. Am. Meteorol. Soc. 96, 1699–1718 (2015). * Wu and Porté-Agel (2011) Wu, Y.-T. and Porté-Agel, F., “Large-eddy simulation of wind-turbine wakes: Evaluation of turbine parametrisations,” Boundary-Layer Meteorol. 138, 345–366 (2011). * Zhang, Arendshorst, and Stevens (2019) Zhang, M., Arendshorst, M. G., and Stevens, R. J. A. M., “Large eddy simulations of the effect of vertical staggering in extended wind farms,” Wind Energy 22, 189–204 (2019).
# The KM3NeT Open Science System

Jutta Schnabel1, Tamas Gal1, and Zineb Aly2

###### Abstract The KM3NeT neutrino detectors are currently under construction at two locations in the Mediterranean Sea, aiming to detect the Cherenkov light generated by high-energy relativistic charged particles in sea water. The KM3NeT collaboration will produce scientific data valuable both for the astrophysics and neutrino physics communities as well as for the Earth and Sea science community. An Open Science Portal and infrastructure are under development to provide public access to open KM3NeT data, software and services. In this contribution, the current architecture, interfaces and usage examples are presented.

1Erlangen Centre for Astroparticle Physics (ECAP), Erlangen, Germany; <EMAIL_ADDRESS><EMAIL_ADDRESS>2 Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France

## 1 Introduction

The KM3NeT collaboration has committed to support open science in its research. In order to facilitate the sharing of FAIR data, the demonstrator of an open science system has been developed by the collaboration. It includes not only a platform to provide science-ready data, but also offers open software and tutorials and integrates the offered products with widely-used repositories and platforms. For this and further developments, test data sets and example analysis workflows are crucial to shape data formats and access options according to the needs of the scientists. Therefore, example use cases have been developed to provide KM3NeT research results to the fields of astrophysics and neutrino physics and across domain boundaries.

## 2 The KM3NeT detectors

The Cubic Kilometre Neutrino Telescope, KM3NeT, is a European neutrino research infrastructure located in the Mediterranean Sea. KM3NeT operates water-Cherenkov detectors from two locations: Toulon, in France, hosts the Oscillation Research with Cosmics in the Abyss (ORCA) detector at a depth of 2475m, while Portopalo di Capo Passero, Sicily, in Italy, hosts the Astroparticle Research with Cosmics in the Abyss (ARCA) detector at a depth of 3400m. ARCA is characterized by the wider spacing of its 3D array of Digital Optical Modules, DOMs, optimised for the detection of high-energy cosmic neutrinos in the TeV to PeV energy range. Its foreseen 230 strings are 650m long, spaced 90m apart. ORCA, on the other hand, is more compact to optimise the detection of atmospheric neutrinos in the GeV range. ORCA will consist of 115 strings in a 20m triangular grid, with a 9m vertical spacing between the DOMs. Currently, there are 6 Detection Units, DUs, operational for ORCA and one DU for ARCA. At the shore stations of both ORCA and ARCA, computer farms perform the first data filtering and event triggering to search for the signal of neutrinos, prior to streaming data to central KM3NeT data centres for storage and further analysis by KM3NeT scientists.

### 2.1 Scientific goal

Beyond the detection of astrophysical neutrinos and neutrino oscillation studies as primary objectives, the KM3NeT detectors have a wide range of capabilities. In fact, KM3NeT data can be used for Supernova burst studies, searches for physics beyond the standard model, acoustic neutrino detection and for Earth and Sea science applications such as bioluminescence activity, whale songs or dolphin clicks. More details on the motivations behind these objectives can be found in (Adrián-Martínez & al. 2016).
### 2.2 Data sets and data formats

The basic data structure produced from the neutrino detectors is an event, which can be reduced to an array of values per event including particle arrival time, direction, energy, classification and processing parameters as well as uncertainties on derived quantities. However, as the format definitions for public data sets are guided by community standards or are developed in exchange with the prospective users, the format varies depending on the scientific context. For astrophysics data, the available standards set by the IVOA are applied, and data from a point-source search with the ANTARES detector (Illuminati & the ANTARES Collaboration 2019) is offered as a test data set through a KM3NeT-hosted server running the DaCHS software (Demleitner et al. 2014). This allows one to perform, e.g., a Simple Cone Search, or to retrieve the data set as a VOTable. For particle events with high significance for multi-messenger analysis, the data is offered as VOEvents through the KM3NeT alert system, see (Lincetto 2021). However, for event data to be used beyond the Virtual Observatory environment, the KM3NeT Open Data Centre (ODC) has been set up, a platform providing event tables as HDF5 or FITS files and the associated metadata through a REST-API. As a test data sample, all events detected by the ORCA detector with 4 DUs within one week were processed and made available as a reference point for further format developments. The ODC also offers additional high-level science data, such as event distributions from simulations, to support the interpretation of the event tables. It also serves as an interface to acoustic data samples measured with a hydrophone hosted in the ORCA detector, as an example of environmental data usable for sea science.

### 2.3 Analysis examples

Analysis examples employing a Python interface software and the test data samples are served in the form of Jupyter notebooks; the basic access pattern is sketched below. As an example of a search for neutrino point sources, the ANTARES neutrino sample is combined with supplementary services providing a background event estimate and the detector acceptance. For the ORCA test data set, four analysis examples are offered. In the first three examples, the continuity of data-taking and the event distribution in local coordinates are analysed, the reconstructed direction and quality parameters are used to select events with increased probability to be of astrophysical origin, and the event sample is converted to galactic coordinates. In the fourth example, the galactic coordinates are used to search for events coincident in space and time with a Gravitational Wave event provided by the Gravitational-wave Candidate Event Database of LIGO. In order to promote a quick understanding of how to analyse KM3NeT data, an education portal offering self-paced online courses has been set up. It offers information on KM3NeT science and a detailed introduction to the use of the software analysis tools, facilitating the integration of KM3NeT data in user-defined analyses.
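As a rough illustration of the access pattern described above, the following Python sketch retrieves an event table from the ODC REST-API and reads it with pandas. The base URL, endpoint path, and JSON field names are hypothetical placeholders, not the actual ODC interface; the openkm3 package described in Sect. 3.1 wraps such requests in a higher-level interface.

```python
# Minimal sketch of retrieving a public ODC event table over the REST-API.
# The base URL, endpoint paths, and JSON/column names below are hypothetical
# placeholders; consult the ODC documentation for the actual interface.
import requests
import pandas as pd

BASE = "https://open-data.km3net.example/api"   # placeholder URL

# List the available public collections (hypothetical endpoint).
collections = requests.get(f"{BASE}/collections", timeout=30).json()

# Download one event-table resource as an HDF5 file.
url = collections[0]["download_url"]            # hypothetical field
with open("orca_events.h5", "wb") as fh:
    fh.write(requests.get(url, timeout=60).content)

# Read the event table; columns such as arrival time, direction, and
# energy are described in the associated metadata.
events = pd.read_hdf("orca_events.h5")
print(events.head())
```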
## 3 Data access and sharing

### 3.1 The KM3NeT software landscape

KM3NeT uses a self-hosted GitLab instance as the main platform to develop software. The continuous integration (CI) feature is a powerful automation tool and is utilised to generate consistently up-to-date test reports, documentation and software releases for a large variety of software and firmware, including scientific analyses and even documents like research papers. The CI utilises Docker containers, and KM3NeT adopts this solution for operating-system-level virtualisation and provides Docker and Singularity containers of all major software to facilitate easy software use in versatile contexts. KM3NeT develops open-source software for accessing and working with data taken by the detector, produced in simulations or in other analysis pipelines, e.g. event reconstructions. All data produced by the detector and consecutive processing pipelines which are presented to the public are readable using Python-based tools, which are released each time a new version is tagged. The choice of Python is mainly motivated by its ease of use and its massively increased popularity in a wide range of scientific research areas during the past years. Each release is uploaded to the Python Package Index (PyPI) and can be installed via the pip package installer. As examples, the km3pipe and km3io packages allow handling of the various data formats used in the KM3NeT experiment and include a provenance demonstrator for high-level data processing. Public data from the ODC can be accessed using the openkm3 package. It interlinks with the ODC REST-API and allows querying the metadata of the resources and collections. All projects which are declared as open source and are hosted on the KM3NeT GitLab server are visible to the public. However, only KM3NeT members are able to log in and create content on the GitLab server. To enable the interaction with users outside the KM3NeT collaboration, open source software is mirrored to GitHub. The GitHub organisation “KM3NeT” has been created to collect all the related projects. It is also planned to link KM3NeT software to the ESCAPE Open Science and Software Repository.

### 3.2 Interlinking to repositories and registries

Registering the KM3NeT open science products with well-used platforms and assigning global identifiers such as DOIs is key to the findability of the data. In the VO, the KM3NeT server is registered as a registry of resources, making KM3NeT data fully findable within the Virtual Observatory, with each resource being identifiable through the naming of the individual endpoint of the service within the registry. Although the VO does support the integration of a DOI in its resource metadata, it does not provide an authority to assign DOIs. To this end, Zenodo was chosen as a well-established data repository in the physics community to obtain a DOI through mirroring the data to the repository. For this demonstrator, the event sample from the KM3NeT ORCA use case will be registered with Zenodo.

## 4 Linking to the EOSC

The current setup foremost serves as a starting point for further developments to improve data sharing with observatories and scientists. Being involved in the ESCAPE project, the KM3NeT collaboration pursues a deepened integration and technology sharing of the open science system with the wider scientific community in both astroparticle and particle physics. This includes efforts to enhance the understanding of how to integrate neutrino data in the Virtual Observatory environment to increase its use in multi-messenger analyses, especially regarding the necessity to include theoretical predictions for neutrino measurements with the detectors for neutrino source models. Sharing of, e.g., detector sensitivity estimates for the full KM3NeT detector is aimed for not only in the astrophysics context, but also in the area of particle physics, and the specifications to make this simulation-derived data FAIR as well are currently being investigated.
The ESCAPE project also offers the platform to further pursue the standardization of data processing, metadata assignment and common format development. Regarding data processing, a sound provenance scheme for the complex event processing workflow is being developed, drawing on the experience of and in exchange with ESCAPE partners, and the standardization of data formats beyond the VO, as well as the common simulation of air showers, especially in cooperation with the CTA Observatory, is pursued. The development of the EOSC services will also help to pick up these solutions for data storage, sharing, software development and workflow integration in the KM3NeT context.

## 5 Summary

With the KM3NeT Open Data Centre, Virtual Observatory server, GitLab software development platform and Open Science and Education Portals in place, the KM3NeT collaboration has developed an architecture covering all major requirements for open science and data sharing and is dedicated to improving open science by building on the architecture presented here. All relevant links can be found at http://openscience.km3net.de.

## References

* Adrián-Martínez & al. (2016) Adrián-Martínez, S., & al. 2016, Journal of Physics G: Nuclear and Particle Physics, 43, 084001. URL https://doi.org/10.1088%2F0954-3899%2F43%2F8%2F084001 * Demleitner et al. (2014) Demleitner, M., Neves, M. C., Rothmaier, F., & Wambsganss, J. 2014, Astronomy and Computing, 7, 27. 1408.5733 * Illuminati & the ANTARES Collaboration (2019) Illuminati, G., & the ANTARES Collaboration 2019, PoS(ICRC2019)920 * Lincetto (2021) Lincetto, M. 2021, in ADASS XXX, edited by J.-E. Ruiz, & F. Pierfederici (San Francisco: ASP), vol. TBD of ASP Conf. Ser., 999 TBD
# Are in-person lectures beneficial for all students? A Study of a Large Statistics Class

Ellen S. Fireman, Zachary S. Donnini, Daniel J. Eck, and Michael B. Weissman

Ellen S. Fireman: Department of Statistics, University of Illinois Urbana-Champaign, 725 S. Wright St. room 101, Champaign, IL 61820<EMAIL_ADDRESS>Zachary S. Donnini: Yale University<EMAIL_ADDRESS>Daniel J. Eck (corresponding author): Department of Statistics, University of Illinois Urbana-Champaign, 725 S. Wright St. room 101, Champaign, IL 61820<EMAIL_ADDRESS>Michael B. Weissman: Department of Physics, University of Illinois Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801<EMAIL_ADDRESS>

Acknowledgements: We would like to thank Yuk-Tung Liu for essential technical assistance, Kurt Tuohy from the UIUC Atlas team for crucial timely help in obtaining anonymized covariates, and Karle Flanagan for informative conversations about similar versions of another course. IRB approval was obtained for the project.

###### Abstract Over 1000 students over four semesters were given the option of taking an introductory statistics class either by in-person attendance in lectures, augmented by online recorded lectures, or by taking the same class without the in-person lectures. The all-online students did slightly better on computer-graded exams. The causal effect of choosing only online lectures was estimated by adjusting for potential confounders using four methods. The four nearly identical point estimates remained positive but were small and not statistically significant. No statistically significant differences were found in preliminary comparisons of effects on females/males, U.S./non-U.S. citizens, freshmen/non-freshmen, and lower-scoring/higher-scoring math ACT groups. Keywords: online vs in-person lectures, statistics education, large online lectures, average treatment effect, causal inference

## 1 Introduction

Interest in using the increasing availability of good Internet access to expand the use of online education has accelerated greatly due to the Covid-19 epidemic. A common impression is that online education is somewhat inferior to in-person education (Loeb, 2020), largely based on secondary education results, e.g. Heppen et al. (2017). On the other hand, a meta-analysis based largely on college-level courses concluded that online components mixed with in-person components offered some advantages over in-person controls (Means et al., 2009). The circumstances favoring one or the other method have not yet been well mapped out. It is not obvious a priori whether to expect online or in-person lectures to be more effective. Some reasons one might expect online lectures to be more effective, especially in any cumulative course requiring understanding of each step before moving on to the next, include that students can:

1. replay the parts with which they have difficulty,
2. fast-forward through parts that they find unnecessary,
3. listen when they’re in the mood,
4. take breaks if they have trouble concentrating for 50 or 80 minutes,
5. take just as much time as they need on in-lecture exercises,
6. make up lectures missed due to emergencies, and
7. use closed-captions if they prefer them to spoken English.

The disadvantages of online lectures include:

1. distractions in a non-classroom environment,
2. loss of the direct sense of personal involvement and interaction with other students, and
3. inability to ask questions during lecture.
The balance of factors is likely to depend on course content, structure, student characteristics, and lecture style. Simple comparisons of outcomes for online and in-person students do not give the causal effect of the teaching mode, since without random assignment the students in each mode may systematically differ. Adjustments for differences between the students enrolled in different modes should be able to approximately correct for those differences if they are not too large, as is routinely done, e.g. in Coates et al. (2004). There are a few well-controlled studies that check under what conditions each lecture mode is best, including a very few randomized controlled trials. Throughout this paper we will follow the convention of labelling as “insignificant” any effects for which a 95% confidence interval includes zero, to avoid the distraction of considering effects of unknown sign. We do not, however, mean to imply a non-zero prior probability for any null hypothesis.

Most randomized studies have been conducted on economics classes. One (with a good review of previous work) found that in-person material had a significant positive effect on some but not all outcome measures, when compared to specially prepared online material that did not include videos of the in-person lectures (Arias et al., 2018). Another found that eliminating in-person material had negative effects on exam scores (Alpert et al., 2016). A blended online/in-person method had statistically insignificant negative effects compared to the in-person version (Alpert et al., 2016). Another study found small but statistically significant benefits of having more in-person classes (Joyce et al., 2014). Small, statistically insignificant benefits of using in-person lectures, with indications that those benefits were concentrated in some subgroups, were found in another (Figlio et al., 2013). One particularly relevant randomized study found no significant effect on test scores in a beginning statistics class, with the treatment difference consisting of reducing in-person lecture hours by about a factor of three (Bowen et al., 2014). The authors note that although students were randomized, they were unable to randomize instructors (Bowen et al., 2014). The authors emphasized that the success of their “hybrid” treatment, with reduced in-person components, may have depended on the use of interactive online materials (Bowen et al., 2014).

Of the non-randomized studies with fairly stringent controls, one (Bettinger et al., 2017) used instrumental variables (especially the intermittent availability of the in-person version of a class) to try to extract the causal effect of using a completely online version instead of a largely in-person version of a course taught at a large for-profit university. It found that the online effect on grades and subsequent performance in other courses was noticeably negative overall, especially for weaker students. Since in-person classes were only taught when teachers specifically volunteered, while online classes were offered even when no teacher volunteered to be involved, there may have been systematic differences between the treatment and control teachers (Bettinger et al., 2017). The generalizability of the mixed conclusions of these varied studies to courses with different characteristics is unknown.
Some of the key variables that they have suggested might moderate the online treatment effect are the nature of the treatment itself, which is far from uniform in the different studies, and the strength of the students. Here we report results from four semesters of a large beginning statistics course at a large selective public university. The course had been taught in approximately its current form for several semesters using in-person lectures only and online homework and pre-lectures. It became convenient to offer a version that was almost identical to the standard in-person version but with students allowed to sign up to watch videos of the same lectures online rather than in person. This paper describes an estimated causal effect on objective exam scores of using the new online version vs. the in-person version. It was not possible to randomly assign students to the two versions, but fortunately most of the measured student characteristics in the two treatment groups were not dramatically different. We find that, although online students did slightly better than in-person students on objective exams, once adjusted for known prior differences between the student groups there was no statistically significant estimated causal difference between the exam performance of students in the online and in-person versions, with narrow enough 95% confidence intervals to exclude differences with much practical importance.

The possibility of important missing confounders is typically the biggest potential issue for a non-randomized study attempting to estimate causal effects. One obvious unmeasured confounder is prior commitment to this particular course, as opposed to more general conscientiousness. We suspected that students who chose to allot a scarce schedule slot to this course would be more likely to put effort into it than would students who avoided any constraints on their other courses by choosing the flexible online option. If so, our estimates of the average treatment effect (ATE) would tend to favor the in-person version, compared to the true ATE. Indications of such unmeasured confounding leading to overestimates of the advantages of attending in-person lectures have been seen in at least one other study, using a difference-in-differences method comparing randomized and non-randomized treatment and control groups, adjusting for covariates similar to the ones we used (Joyce et al., 2014). While we suspect that the effect would have the same sign in our observations, skipping lectures for which one has signed up is sufficiently different from choosing not to sign up for them that we cannot use the prior result (Joyce et al., 2014) to make a quantitative estimate of the effect. Instead, we used a comparison of our main outcome, objective exam scores, with a more effort-weighted outcome, homework scores, to try to estimate whether any effect of different commitments in the two treatment groups was likely to be important.

With regard to prior preparation for statistics, routine anonymous online surveys with greater than 80% response rates in the Spring and Fall of 2019 showed very small differences between the fractions of online students (80%) and in-person students (77%) self-reporting having taken prior courses. Even neglecting the likely collinearity between this covariate and others used, no plausible effect magnitude of this covariate could have an important effect on our estimated ATE, so we did not attempt to analyze transcripts to sort out such effects.
Our intent at this stage was only to estimate the ATE over the whole collection of students. Our data were underpowered for a determination of different effects on most subgroups. Nevertheless, due to increased interest in and concern about using online methods, together with the impracticality of getting more data on in-person lectures in the near future, we include a first look at interactions of the online treatment with several covariates for which we heard specific concerns: freshman status, ACT scores, gender, and U.S. citizenship. We found no statistically significant effects.

## 2 Treatment Methods

The treatment studied here did not consist of simply substituting online components for in-person components. Experience with online lectures in a larger statistics course with a similar structure had indicated that students enrolled in an online version did at least as well as those enrolled for in-person lectures. (Unfortunately we do not have sufficiently uniform comparative data on outcomes in that course to use in a research publication.) Therefore, once we started making the lectures available online, we considered it unfair not to allow all of the students full access to them. Furthermore, in these large courses the number of students who miss lectures for a variety of reasons and then request access to the online versions is so large that it was easiest just to open up the online lectures to all. Therefore the treatment under study consists of removing enrollment in an in-person lecture while keeping the other course components, including online lectures, unchanged. This resembles one previous study (Joyce et al., 2014), but contrasts with most previous work, e.g. Alpert et al. (2016) and Figlio et al. (2013), in which access to online resources was restricted for in-person students, or in which substantially different presentations were used for the online group (Coates et al., 2004).

Almost all elements of the course were shared between the two treatment groups. These include an incomplete-notes workbook/text filled out during lectures, very frequent automated randomized homework exercises (24 in a 15-week course) linked to an online discussion board, practice exams, prelecture videos, access to a user-friendly statistical computing program (http://www.istics.net/DataProgram/), and little-used in-person office hours. The versions shared a website and received the same email notifications. The lectures were recorded at the in-person class and then posted online, usually within several hours, with occasional very minor edits. The course material was reorganized between the first two-semester batch and the second to improve the logical flow and to let us drop an unenforced recommendation that students had taken a previous statistics course. In between these two two-semester batches, there was a semester with different instructors in charge of the online and in-person versions, so we did not analyze data from that semester.

Other than the lecture delivery, the one difference between the online and in-person versions was that a small amount of bonus credit was available to the in-person students who answered questions in lecture using an i-clicker, except for the first semester of the new version, Spring 2019, when new i-clicker exercises were not yet ready. Although both groups received bonus points for completing their lecture note workbooks, more were given to the online students to approximately balance the i-clicker points.
Thus those students who registered for in-person lectures had some extra motivation to attend frequently. The algorithm for counting bonus points raised final grades by an amount approximately proportional to how far the non-bonus grade was from 100, so the bonus points primarily served to motivate students who were not otherwise doing well in the course. To offer students maximum flexibility, online students were given the option of arranging to go to lectures and obtain i-clicker points by individually registering their i-clickers. Several expressed interest, but none followed through. Few if any students without i-clickers were observed in lecture. Therefore dilution of the treatment effect by crossover from online to in-person treatment appeared to be negligible. Data collection stopped in Spring 2020 since all students were transferred to the online version when Covid-19 hit, and we had to switch to an unproctored open-book exam format. This forced switch would have provided an opportunity to estimate unmeasured confounding effects, if it had happened at the start of the semester and with reliable exams, but neither of those conditions held.

## 3 Evaluation Methods

The only evaluation we looked at for this project was performance on computer-graded exams, to avoid any subconscious bias on the part of graders. These exams also are our best measure of learning outcomes, as opposed to effort. Almost all the students took the exams in person in mixed groups of in-person and online registrants. A small number of the online students (roughly 3%) and occasional in-person students took the same exams remotely, proctored via a commercial service using webcam monitoring. The evaluation method changed between the two-semester batches. The first batch used hand-graded midterm exams, which we did not look at for the purposes of this study, and cumulative computer-graded finals, the results used here. The second batch used three equal-weighted computer-graded exams, with the semester average used here. We did separate analyses of these two batches. Despite the different forms of exam scores used, the standard deviations for all four semesters were very similar, ranging from 8.8 to 10.5. (All scores presented here are on a 100-point scale, with a difference of 10 representing one grade point.) There was more variation of exam means and standard deviations within two-semester batches than between them. To reduce statistical uncertainty, we combined all the data for an overall comparison of whether switching to all-online lectures affected objective exam scores in these two closely related batches of the same course.

We used four methods to adjust for differences between the students in the two versions in order to estimate the causal ATE of dropping in-person lectures. These were multiple linear regression (MLR), stabilized inverse propensity weighting (IPW) (Lunceford and Davidian, 2004; Austin and Stuart, 2015), doubly-robust (DR) estimation (Robins et al., 1994), and a nonparametric outcome highly-adaptive lasso (OHAL) method (Ju et al., 2020), as described below. We included as covariates those relevant predictors to which we had access and whose values were set before the treatment started. The campus data support service supplied anonymized data files with the relevant covariates. These were student year in school (treated categorically), semester when the course was taken (also categorical), ACTmath score (including SAT equivalents), Gender, U.S./non-U.S.
citizenship, overall ACT score, and the approximate median ACT score of the major in which the student was enrolled, obtained by averaging the scores of the 25th and 75th percentiles, available from a university web site for prospective students. (This ACTmajor score was initially used as a proxy for ACT scores before those were available to us, but to our surprise remained a significant predictor even after individual ACT scores were included.) High-school grade-point averages (HSGPA) were obtainable for 675 of the 1105 students. The remainder of the students were from systematically different major groups, especially transfer students and most international students, whom it was also important to include in the ATE estimate. Therefore we ran a full analysis omitting the HSGPA on the larger sample, which also had the advantage of reducing statistical error. To estimate any correction for systematic differences in the traits measured by HSGPA, we tested how much inclusion of HSGPA changed the ATE on the subsample for which HSGPA was available.

The initial exploration was via MLR with least-squares fitting, using the same point-and-click program used in the class. The MLR method gives unbiased estimates if the linear effects model is correctly specified. Since the MLR residuals were non-Gaussian, with variance that was not constant across the predictor space, unsurprising given the constraints on the ObjectiveExam conditional distribution (Faraway, 2016, page 281), we checked the confidence intervals using nonparametric bootstrap methods, and checked the ATE estimate using other methods.

The second method was a standard stabilized IPW analysis (Lunceford and Davidian, 2004; Austin and Stuart, 2015). This method gives unbiased estimates of the ATE if the main-effect logistic regression model for the propensity of different types of students to take each version is correctly specified. Our propensity score model used logistic regression on the same covariates as the MLR model. We checked that the important predictive covariates were fairly well balanced in the pseudo-sample generated by the IPW method (Austin and Stuart, 2015). The third method, DR, corrects for any imbalances in the IPW pseudo-sample by using MLR estimates, giving unbiased estimates if either the linear effect model or the logistic propensity model is correct (Robins et al., 1994). Finally, we checked the key bottom-line ATE estimates with the OHAL targeted minimum loss method, which avoids potentially problematic parametric misspecification (Kang et al., 2007; Ju et al., 2020).

Confidence intervals for the MLR, IPW, and DR estimates of the ATE were obtained using standard bootstrap methods (DiCiccio and Efron, 1996). Confidence intervals on the small adjustment for inclusion of HSGPA were determined by a paired bootstrap analysis, in which the change of the ATE estimate from including HSGPA in the model was determined for each bootstrap sample. That allows the resulting small HSGPA adjustment to be made with little increase in the confidence interval width for the ATE. Confidence intervals for the OHAL procedure were obtained using cross-validated standard errors (Ju et al., 2020, page 115). Full descriptions including the R code and full results for these methods are presented in the Supplementary Appendix. Since all methods gave nearly the same point estimates and confidence intervals, we focus here on the MLR results, whose coefficients have simple intuitive interpretations.
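To make these estimators concrete, here is a minimal R sketch of the stabilized IPW and DR (AIPW) calculations with a percentile bootstrap. The data frame `d` and its column names (`exam`, `online`, and the covariates) are illustrative placeholders, not the actual analysis code from the Supplementary Appendix.

```r
# Minimal sketch, assuming a data frame d with outcome `exam`,
# a 0/1 treatment indicator `online`, and pre-treatment covariates.
ate_estimates <- function(d) {
  # Main-effect logistic propensity model (same covariates as the MLR model)
  ps_fit <- glm(online ~ year + semester + ACTmath + ACT + ACTmajor + female + intl,
                family = binomial, data = d)
  ps <- fitted(ps_fit)                      # estimated propensity scores
  p  <- mean(d$online)                      # marginal treatment probability
  w  <- ifelse(d$online == 1, p / ps, (1 - p) / (1 - ps))  # stabilized weights

  ate_ipw <- weighted.mean(d$exam[d$online == 1], w[d$online == 1]) -
             weighted.mean(d$exam[d$online == 0], w[d$online == 0])

  # Outcome model for the doubly-robust (AIPW) correction
  out_fit <- lm(exam ~ online + year + semester + ACTmath + ACT + ACTmajor + female + intl,
                data = d)
  mu1 <- predict(out_fit, newdata = transform(d, online = 1))
  mu0 <- predict(out_fit, newdata = transform(d, online = 0))
  ate_dr <- mean(d$online * (d$exam - mu1) / ps + mu1) -
            mean((1 - d$online) * (d$exam - mu0) / (1 - ps) + mu0)

  c(ipw = ate_ipw, dr = ate_dr)
}

# Percentile-bootstrap 95% confidence intervals
set.seed(1)
boots <- replicate(2000, ate_estimates(d[sample(nrow(d), replace = TRUE), ]))
apply(boots, 1, quantile, probs = c(0.025, 0.975))
```

The paired bootstrap for the HSGPA adjustment works the same way: within each bootstrap sample, the ATE is computed with and without HSGPA and the difference recorded, so the small adjustment inherits a correspondingly narrow interval.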
We included all 1177 students for whom we had final exam grades in an initial calculation of the raw point difference between online and in-person. Of those, we dropped 66 students for whom no admission-test scores were available, as well as 6 students not enrolled for undergraduate degrees, before doing a full analysis on the remaining 1105 students. This analysis sample included 91% of the in-person students for whom we had final grades and 93% of the online students with final grades. The overall raw score average difference between the online and in-person groups was very similar in the full sample (1.19) and the analysis sample (1.38).

## 4 Results

Of the students on whom we have records (those registered at the official drop date), 4 of 506 (0.8%) in-person students either dropped out by petition or otherwise failed to take the final exam, while 11 of 708 (1.6%) online students did so. The difference is not large enough to reject the null hypothesis of random dropouts at a 95% confidence level. At any rate, the dropout rate in both groups was quite low. The raw exam scores and their SDs for each semester are given in Table 1. The score scale is 0-100, with 10 points corresponding to one grade point.

Table 1: Raw exam scores, their standard deviations, and additional summary statistics for each semester.

Semester | $n$ | Mean score | SD | $n_{\text{OL}}$ | $n_{\text{IP}}$ | $\text{Mean}_{\text{OL}}$ | $\text{Mean}_{\text{IP}}$ | $\text{Mean}_{\text{OL}}-\text{Mean}_{\text{IP}}$ | SE(diff)
---|---|---|---|---|---|---|---|---|---
F17 | 271 | 87.27 | 8.82 | 135 | 136 | 86.52 | 88.01 | -1.49 | 1.07
Sp18 | 274 | 85.36 | 10.31 | 158 | 116 | 86.88 | 83.29 | 3.59 | 1.27
Sp19 | 267 | 83.30 | 10.84 | 201 | 66 | 84.64 | 79.19 | 5.45 | 1.69
F19 | 293 | 86.65 | 10.16 | 153 | 140 | 87.44 | 85.78 | 1.66 | 1.18

Although the online students usually did better, one cannot draw any conclusions about the treatment effect without adjusting for differences between the student groups. Table 2 compares the values of the covariates in the two treatment groups. The US/international and male/female proportions were not far from balanced between the groups. The ACT scores and HSGPA were rather well matched, with differences between the group means always less than 20% of the SD for the individuals. The ACTmajor scores, however, differed significantly more than would be expected by random assignment. The enrollment by college year was very substantially different. (Students said that advisors to incoming freshmen discouraged them from taking the online version.)

Table 2: Summary information for the collected covariates taken across the online and in-person groups. The abbreviations OL and IP are, respectively, shorthand for online and in-person. The random null p-value assesses the significance of differences in covariate composition between the online and in-person groups using a permutation test.
Trait | $n$ | $n_{\text{OL}}$ | $n_{\text{IP}}$ | Mean | SD | $\text{Mean}_{\text{OL}}-\text{Mean}_{\text{IP}}$ | SE(diff) | Random null p-value
---|---|---|---|---|---|---|---|---
ACT | 1105 | 647 | 458 | 30.5 | 3.5 | 0.38 | 0.21 | 0.065
ACTmath | 1105 | 647 | 458 | 32.2 | 4.0 | 0.45 | 0.25 | 0.064
ACTverbal | 1105 | 647 | 458 | 58.8 | 8.0 | 0.60 | 0.49 | 0.216
ACTmajor | 1105 | 647 | 458 | 30.2 | 2.6 | 0.48 | 0.16 | 0.003
HSGPA | 675 | 377 | 298 | 3.5 | 0.3 | 0.04 | 0.03 | 0.183
 | | | | | | % of OL | % of IP | 
International | 435 | 270 | 165 | NA | NA | 0.42 | 0.36 | 0.048
Female | 474 | 290 | 184 | NA | NA | 0.45 | 0.40 | 0.119
Freshmen | 155 | 40 | 115 | NA | NA | 0.06 | 0.25 | $\approx 0.000$
Sophomore | 407 | 212 | 195 | NA | NA | 0.33 | 0.43 | 0.001
Junior | 312 | 226 | 86 | NA | NA | 0.35 | 0.19 | $\approx 0.000$
Senior | 231 | 169 | 62 | NA | NA | 0.26 | 0.14 | $\approx 0.000$

Table 3 gives the multiple regression, stabilized IPW, and doubly-robust estimates of the ATE for the two batches of two semesters and for the combination of the four semesters, along with the 95% confidence intervals. All the covariates were used except ACTverbal, which was predictable with $R^{2}=0.89$ from the other covariates, and whose inclusion would have a tiny effect (+0.02) on the $\text{ATE}_{\text{MLR}}$. The four overall ATE estimates are nearly identical. Although overall the point estimate for the online effect is positive, it is very small for practical purposes and not statistically significant at the 95% confidence level in this sample. There are some small variations between the semesters, e.g. between Fall and Spring semesters, but we lack sufficient power to see whether these are systematic, much less to track down possible systematic causes. For example, omission of the Sp19 data, for which there was no i-clicker bonus incentive to attend lectures, slightly lowers the $\text{ATE}_{\text{MLR}}$ point estimate, from 0.64 to 0.35, too small an effect to draw any conclusions. It is also possible that the differences between the treatment groups differed between Falls, where adviser guidance was important in course choice, and Springs, where peer advice probably played a bigger role. The difference between the Fall-average and Spring-average IPW ATEs was just short of conventional statistical significance, leaving a weak hint that unmeasured confounders should be considered.

Table 3: Estimates and 95% confidence intervals for the ATEs of online learning across semesters. Confidence intervals for $\text{ATE}_{\text{MLR}}$, $\text{ATE}_{\text{IPW}}$, and $\text{ATE}_{\text{DR}}$ are obtained from the percentiles of a nonparametric bootstrap.

Semesters | $\text{ATE}_{\text{MLR}}$ | $\text{ATE}_{\text{IPW}}$ | $\text{ATE}_{\text{DR}}$ | $\text{ATE}_{\text{OHAL}}$
---|---|---|---|---
F17+Sp18 | -0.01 (-1.42, 1.43) | 0.16 (-1.21, 1.58) | 0.05 (-1.35, 1.49) | 
Sp19+F19 | 1.54 (-0.09, 3.21) | 1.66 (-0.07, 3.45) | 1.36 (-0.28, 3.01) | 
All | 0.63 (-0.44, 1.71) | 0.75 (-0.34, 1.86) | 0.58 (-0.44, 1.63) | 0.63 (-0.50, 1.75)

Table 4 shows the multiple regression coefficients for the covariates of the MLR model, for which $R^{2}=0.34$. With few exceptions, the ATE was not very sensitive to omission of any of these covariates. Removal of all the ACT scores increases the ATE by about 0.5. Removal of the college class increases the ATE by about 0.3. Other covariates had even smaller effects on the ATE.
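As a sketch of these omission checks (with hypothetical variable names; the actual code is in the Supplementary Appendix):

```r
# Refit the MLR model with groups of covariates dropped and record
# how the coefficient on `online` (the ATE estimate) moves.
full <- exam ~ online + ACTmajor + ACT + ACTmath + year + semester + female + intl
ate_without <- function(drop_terms) {
  f <- update(full, as.formula(paste(". ~ . -", paste(drop_terms, collapse = " - "))))
  coef(lm(f, data = d))["online"]
}
ate_without(c("ACT", "ACTmath", "ACTmajor"))  # no ACT scores: ATE rises by ~0.5
ate_without("year")                           # no college class: ATE rises by ~0.3
```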
Table 4: The least-squares point estimates for the predictive coefficients in the multiple linear regression model based on the 1105-student sample. The 95% confidence intervals are calculated by a nonparametric bootstrap method, and are close to those obtained using a t distribution. The intercept represents a very hypothetical domestic male senior in the in-person F19 class with 0's for all ACT scores.

Variable | Slope | 95% Confidence Interval
---|---|---
Intercept | 29.48 | (22.52, 36.05)
Online | 0.63 | (-0.36, 1.68)
Gender | 0.41 | (-0.63, 1.43)
International | 0.86 | (-0.26, 2.01)
F17 | 0.05 | (-1.28, 1.33)
S18 | -1.09 | (-2.47, 0.22)
S19 | -2.60 | (-4.02, -1.17)
FR | -1.90 | (-3.66, -0.10)
SO | -0.27 | (-1.45, 1.21)
JR | -0.50 | (-1.79, 0.81)
ACTMajor | 0.52 | (0.31, 0.74)
ACT | 0.22 | (-0.04, 0.49)
ACTMath | 1.07 | (0.80, 1.32)

Inclusion of HSGPA in the model on the subsample of 675 students for which it was available increased $R^{2}$ from 0.32 to 0.38. It increased the ATE in this subsample by 0.11, 0.09 and 0.15 for the MLR, IPW and DR methods, respectively, each with 95% confidence intervals of about $\pm 0.4$. Assuming that the traits measured by HSGPA have roughly similar effects and similar group differences in the 39% of the sample for which HSGPA was not available, the ATE estimates would then be 0.83, 0.73, and 0.73 for IPW, DR, and MLR respectively, each with 95% CIs of $\pm 1.2$, assuming the errors add in quadrature. All these estimates of the ATE in the overall sample remain insignificantly positive.

We explored interaction effects of the treatment with the variables suspected of being relevant to the effectiveness of the online treatment (gender, citizenship, freshman status and ACTmath). Adding these effects, either individually or all simultaneously, gave no significant interaction term. Stratifying by the same variables gave insignificantly positive online $\text{ATE}_{\text{MLR}}$ for both US and non-US citizens, both freshmen and non-freshmen, and both the upper and lower halves of the ACTmath distribution. The stratified estimated $\text{ATE}_{\text{MLR}}$ was insignificantly negative for females and nominally significantly positive (p=0.035) for males. The difference between the stratified results for the genders was not significant. Given the multiple comparisons, the effect for males should not be considered confidently established as positive. Although the ACTmath SD was 4.1, enough range to easily see its predictive power, less than 1% of the students scored below 20, the approximate national median. Thus we have essentially no evidence on how well the online treatment would work in a course like this for students with ACTmath scores below 20.

Since all the methods of estimating the ATE from our covariates gave nearly the same results, the one serious remaining issue is possible unmeasured confounders. Given the very close balance in self-reported prior statistics courses taken, prior coursework would be an implausible confounder. Students' unmeasured commitment (UC) to the course seems the most obvious variable likely to affect learning outcomes, to differ between groups choosing different versions, and not to show up in more general covariates. That commitment should affect how much effort students put into doing the homework.
In fact homework (HW) was less predictable than ObjectiveExam from the covariates to which we had access ($R^{2}=0.11$ for HW in the HSGPA group, in contrast to 0.37 for ObjectiveExam), suggesting that HW may reflect causes not picked up by our standard covariates. In contrast to ObjectiveExam, HW was much better predicted by HSGPA than by ACT scores, consistent with our intuition that HSGPA and HW might show relatively large effort-dependent contributions.

Including HW as a covariate would bias the ATE estimate for three reasons. Most importantly, HW could be on the causal path from OL to ObjectiveExam, so including it would remove part of the treatment effect, since students using exclusively online lectures may spend more time going back and forth between lecture segments and related homework problems, improving their understanding. It would also introduce some slight M-bias (Greenland, 2003), probably negative, since HW is a descendant of both UC and the other covariates. It could also bias any treatment effect toward zero simply by serving as a marker of overall treatment effects. Nevertheless, HW should pick up any dramatic imbalance of UC between the treatment groups. Including HW reduced $\text{ATE}_{\text{MLR}}$ from 0.64 to 0.34 in the overall sample and from 0.82 to 0.41 in the HSGPA-available subset. These small differences, in the direction of the expected bias, show no sign of any motivational confounding issue.

## 5 Discussion

Before discussing the online effect, we note in passing some observations about test score predictors, not directly relevant to our research questions but perhaps useful in other contexts. The ACT scores (particularly ACTmath) were the most important predictors of scores among the covariates to which we had access. It is interesting that ACTmajor, the approximate median ACT of the student's major, remains a highly significant predictor even when ACTmath is included. The effect is strongest among non-freshmen, suggesting that it arises largely from the effects of motivation, ability, and interests on how students sort into majors after they start taking college classes.

Since the treatment under study consisted simply of removing in-person enrollment from otherwise fixed course components, it might seem surprising that a negative effect was not found. The general presumption is that removing resources will usually have at least a small negative effect (Coates et al., 2004). Furthermore, most previous studies have found somewhat negative effects of switching to all-online lectures (Coates et al., 2004). There are several potential explanations.

First, although the point estimate of the effect was weakly positive, the statistical CIs are large enough that we cannot rule out a small negative effect from dropping in-person enrollment. The bottom of the overall CI range, however, is not more negative than -0.05 grade points, and thus would be of little practical significance.

Second, although we have done the best we could to adjust for relevant confounders, the possibility of important missing confounders always remains. Although we suspected that prior commitment to this particular course might slant our estimates to favor the in-person version, inclusion of a commitment-related proxy variable (HW score) showed no evidence of any major effects of this sort. Adjustments for the covariates to which we had access, which provided fairly good predictors, had a small effect on the ATE, so we doubt that others would qualitatively change the results.
Third, attending in-person lectures probably reduced the effort spent following the same lectures online. Thus the true treatment effect of dropping the in-person lectures could indeed have been close to zero or even weakly positive, as in our point estimate.

The absence of significant interaction effects with freshman status or with ACTmath scores may seem more surprising, since previous work has generally indicated that better-prepared or stronger students do better online (Bettinger et al., 2017). Most of the students taking this class were already comfortable with some mathematics. For the larger introductory statistics course, for which informal results were qualitatively similar, students had a much broader range of math preparation. Nevertheless, almost all students even in that course had been selected into a highly-ranked public university, so we do not know how well these methods would work for students who could not get into such a university. Given the restricted range of our sample, the lack of an interaction term with ACTmath is entirely compatible with previous results (Bettinger et al., 2017). We were also somewhat surprised that there was no indication that non-US students did particularly well online.

Although our results were underpowered for looking at interaction effects of the treatment even with measured covariates, presumably there are some differences among students in the relative value of online and in-person lectures. If students tended to take the version most suited to produce good test results for themselves, the estimated ATE would not give an accurate prediction of the net effect of requiring all students to take one version or the other: either required version would give lower results than estimated here. If the students' choices tended to be mistaken, the effect of requiring a single version for all students would be higher than our estimate. After Covid struck, all students were in fact required to take the online version, but historical controls on expected test scores would have been too imprecise to estimate any such effects reliably even if drastic changes in the testing protocol had not also been necessary.

Since the "online" version here still offered in-person office hours, it is not exactly equivalent to a fully remote online version. These office hours were not used very much, however. Summer sessions of the course, taken remotely, function well without them. After Covid hit, they were replaced with Zoom versions during the semester. We think that this change is not very important.

Concerning the generalizability of the results, some specific features of this statistics class may lead to better online results than would be found for many other classes:

1. This is a cumulative logical/mathematical course, which may make some online benefits especially relevant, especially the ability to go back over difficult steps.

2. The prior work to develop effective beginning statistics courses without in-person discussion sections had already created important online course elements: very frequent homework with randomized numbers and supplementary prelectures.

3. The incentive to fill out the incomplete notebook may have reduced the effect of environmental distractions by keeping students actively engaged in lecture.

4. It's possible that nearly all of the students in this course were well enough prepared and web-savvy to avoid some detrimental effects of missing in-person lectures.
5. Unlike in most previous work on online delivery (Coates et al., 2004), we made no effort to develop anything special for the online class beyond the online materials already available for the in-person class. Perhaps the videos of the traditional in-person lectures were more useful than most newly developed online material, for which optimization is based on less experience.

6. The results pertain to only a single lecturer. Other lecturers with different lecture styles may get a different balance of online and in-person outcomes.

7. Although we found no evidence that the students needed the in-person experience, we did not test whether the lecturer needed it. The course was developed over several semesters with live feedback, and the lectures were delivered to in-person classes. The lecturer here (ESF) subjectively feels that live feedback was important for the course development. Anecdotally, many other lecturers tell us that they need the in-person experience to lecture well.

Although we suspect that for many other STEM courses online lecture delivery will be at least as useful as in-person delivery, so long as the lecturer can stay motivated and aware of what students are finding easy or difficult, we reiterate that we are not claiming that in-person course components add little value to online components as a general rule. For example, we make no claims about the ability to replace course components such as labs with online materials. We also are not sure how well online exam-taking systems will hold up under massive use. We have made no effort to compare with a conventional purely in-person version, with no online lecture backup, but we suspect that it would be inferior to either of the versions we examined. One key aspect that should be investigated in future studies, when they become possible, is the indirect effect of lecture mode on learning via its effect on the instructors rather than via direct effects on the students.

## References

* Alpert, W. T., K. A. Couch, and O. R. Harmon (2016). A randomized assessment of online learning. American Economic Review 106(5), 378–82.
* Arias, J., J. Swinton, and K. Anderson (2018). Online vs. face-to-face: A comparison of student outcomes with random assignment. e-Journal of Business Education and Scholarship of Teaching 12(2), 1–23.
* Austin, P. C. and E. A. Stuart (2015). Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies. Statistics in Medicine 34(28), 3661–3679.
* Bettinger, E. P., L. Fox, S. Loeb, and E. S. Taylor (2017). Virtual classrooms: How online college courses affect student success. American Economic Review 107(9), 2855–75.
* Bowen, W. G., M. M. Chingos, K. A. Lack, and T. I. Nygren (2014). Interactive learning online at public universities: Evidence from a six-campus randomized trial. Journal of Policy Analysis and Management 33(1), 94–111.
* Coates, D., B. R. Humphreys, J. Kane, and M. A. Vachris (2004). "No significant distance" between face-to-face and online instruction: Evidence from principles of economics. Economics of Education Review 23(5), 533–546.
* DiCiccio, T. J. and B. Efron (1996). Bootstrap confidence intervals. Statistical Science, 189–212.
* Faraway, J. J. (2016).
Extending the Linear Model with R: Generalized Linear, Mixed Effects and Nonparametric Regression Models. CRC Press.
* Figlio, D., M. Rush, and L. Yin (2013). Is it live or is it internet? Experimental estimates of the effects of online instruction on student learning. Journal of Labor Economics 31(4), 763–784.
* Greenland, S. (2003). Quantifying biases in causal models: Classical confounding vs collider-stratification bias. Epidemiology 14(3), 300–306.
* Heppen, J. B., N. Sorensen, E. Allensworth, K. Walters, J. Rickles, S. S. Taylor, and V. Michelman (2017). The struggle to pass algebra: Online vs. face-to-face credit recovery for at-risk urban students. Journal of Research on Educational Effectiveness 10(2), 272–296.
* Joyce, T. J., S. Crockett, D. A. Jaeger, O. Altindag, and S. D. O'Connell (2014). Does classroom time matter? A randomized field experiment of hybrid and traditional lecture formats in economics. Technical report, National Bureau of Economic Research.
* Ju, C., D. Benkeser, and M. J. van der Laan (2020). Robust inference on the average treatment effect using the outcome highly adaptive lasso. Biometrics 76(1), 109–118.
* Kang, J. D. and J. L. Schafer (2007). Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical Science 22(4), 523–539.
* Loeb, S. (2020). How effective is online learning? What the research does and doesn't tell us. Education Week.
* Lunceford, J. K. and M. Davidian (2004). Stratification and weighting via the propensity score in estimation of causal treatment effects: A comparative study. Statistics in Medicine 23(19), 2937–2960.
* Means, B., Y. Toyama, R. Murphy, M. Bakia, and K. Jones (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies.
* Robins, J. M., A. Rotnitzky, and L. P. Zhao (1994). Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association 89(427), 846–866.
# Higher Order Automatic Differentiation of Higher Order Functions

Mathieu Huot (University of Oxford), Sam Staton (University of Oxford), and Matthijs Vákár (Utrecht University)

###### Abstract.

We present semantic correctness proofs of automatic differentiation (AD). We consider a forward-mode AD method on a higher order language with algebraic data types, and we characterise it as the unique structure-preserving macro given a choice of derivatives for basic operations. We describe a rich semantics for differentiable programming, based on diffeological spaces. We show that it interprets our language, and we phrase what it means for the AD method to be correct with respect to this semantics. We show that our characterisation of AD gives rise to an elegant semantic proof of its correctness based on a gluing construction on diffeological spaces. We explain how this is, in essence, a logical relations argument. Throughout, we show how the analysis extends to AD methods for computing higher order derivatives using a Taylor approximation.

###### Key words and phrases: automatic differentiation, software correctness, denotational semantics

The authors contributed equally to this work.

## 1. Introduction

Automatic differentiation (AD), loosely speaking, is the process of taking a program describing a function, and constructing the derivative of that function by applying the chain rule across the program code. As gradients play a central role in many aspects of machine learning, so too do automatic differentiation systems such as TensorFlow [AAB+16], PyTorch [PGC+17] or Stan [CHB+15].

[Figure 1. Overview of semantics/correctness of AD: a square diagram in which automatic differentiation maps Programs to Programs, mathematical differentiation maps Differential geometry to Differential geometry, and denotational semantics maps Programs to Differential geometry on both sides.]

Differentiation has a well-developed mathematical theory in terms of differential geometry. The aim of this paper is to formalize this connection between differential geometry and the syntactic operations of AD, particularly for AD methods that calculate higher order derivatives. In this way we achieve two things: (1) a compositional, denotational understanding of differentiable programming and AD; (2) an explanation of the correctness of AD. This intuitive correspondence (summarized in Fig. 1) is in fact rather complicated. In this paper, we focus on resolving the following problem: higher order functions play a key role in programming, and yet they have no counterpart in traditional differential geometry. Moreover, we resolve this problem while retaining the compositionality of denotational semantics.

#### 1.0.1. Higher order functions and differentiation.

A major application of higher order functions is to support disciplined code reuse. The need for code reuse is particularly acute in machine learning. For example, a multi-layer neural network might be built of millions of near-identical neurons, as follows.
$$\begin{aligned}
&\mathrm{neuron}_{n}:(\mathbf{real}^{n}\mathop{*}(\mathbf{real}^{n}\mathop{*}\mathbf{real}))\to\mathbf{real}\\
&\mathrm{neuron}_{n}\stackrel{\mathrm{def}}{=}\lambda\langle x,\langle w,b\rangle\rangle.\,\varsigma(w\cdot x+b)\\
&\mathrm{layer}_{n}:((\tau_{1}\mathop{*}P)\to\tau_{2})\to(\tau_{1}\mathop{*}P^{n})\to\tau_{2}^{n}\\
&\mathrm{layer}_{n}\stackrel{\mathrm{def}}{=}\lambda f.\,\lambda\langle x,\langle p_{1},\dots,p_{n}\rangle\rangle.\,\langle f\langle x,p_{1}\rangle,\dots,f\langle x,p_{n}\rangle\rangle\\
&\mathrm{comp}:(((\tau_{1}\mathop{*}P)\to\tau_{2})\mathop{*}((\tau_{2}\mathop{*}Q)\to\tau_{3}))\to(\tau_{1}\mathop{*}(P\mathop{*}Q))\to\tau_{3}\\
&\mathrm{comp}\stackrel{\mathrm{def}}{=}\lambda\langle f,g\rangle.\,\lambda\langle x,\langle p,q\rangle\rangle.\,g\langle f\langle x,p\rangle,q\rangle
\end{aligned}$$

[Plot alongside the definitions: the graph of the activation function $\varsigma$, a sigmoid rising from 0 to 1 over the interval $[-5,5]$.]
}\pgfsys@endscope\pgfsys@beginscope\pgfsys@invoke{ }{} {}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@beginscope\pgfsys@invoke{ }{} \pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@beginscope\pgfsys@invoke{ }{} {}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@beginscope\pgfsys@invoke{ }{} {}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ } \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope \pgfsys@beginscope\pgfsys@invoke{ }{} {} {{{ {{ }}{{{}}} {}}}}{{}{ {}{}{}{}{}}{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{33.19095pt}{-21.12946pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{\small{$\mathit{x}$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\pgfsys@beginscope\pgfsys@invoke{ }{} {} {{{ {{ }}{{{}}} {}}}}{{ {}{}{}{}{}}{}{}}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{ {}{}} {{}{{}}}{{}{}}{}{{}{}}{}{}{}{}{} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{ }\pgfsys@transformcm{0.0}{1.0}{-1.0}{0.0}{-24.83789pt}{23.06683pt}\pgfsys@invoke{ }\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{ }\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{\small{$\varsigma(x)$}} }}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope {}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}} }\end{array}$ (Here $\varsigma(x)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\frac{1}{1+e^{-x}}$ is the sigmoid function, as illustrated.) We can use these functions to build a network as follows (see also Fig. 2): $\mathrm{comp}\langle\mathrm{layer}_{m}(\mathrm{neuron}_{k}),\mathrm{comp}\langle\mathrm{layer}_{n}(\mathrm{neuron}_{m}),\mathrm{neuron}_{n}\rangle\rangle:\boldsymbol{(}\mathbf{real}^{k}\boldsymbol{\mathop{*}}P\boldsymbol{)}\to\mathbf{real}$ (1) $\cdots$$\cdots$$\cdots$$1$$2$$3$$k$$1$$2$$m$$1$$2$$n$ Figure 2. The network in (1) with $k$ inputs and two hidden layers. Here $P\cong\mathbf{real}^{p}$ with $p=(m(k{+}1){+}n(m{+}1){+}n{+}1)$. This program (1) describes a smooth (infinitely differentiable) function. The goal of automatic differentiation is to find its derivative. If we $\beta$-reduce all the $\lambda$’s, we end up with a very long function expression just built from the sigmoid function and linear algebra. We can then find a program for calculating its derivative by applying the chain rule. However, automatic differentiation can also be expressed without first $\beta$-reducing, in a compositional way, by explaining how higher order functions like $(\mathrm{layer})$ and $(\mathrm{comp})$ propagate derivatives. This paper is a semantic analysis of this compositional approach. The general idea of denotational semantics is to interpret types as spaces and programs as functions between the spaces. 
In this paper, we propose to use diffeological spaces and smooth functions [Sou80, IZ13] to this end. These satisfy the following three desiderata:

* • $\mathbb{R}$ is a space, and the smooth functions $\mathbb{R}\to\mathbb{R}$ are exactly the functions that are infinitely differentiable;
* • The set of smooth functions $X\to Y$ between spaces again forms a space, so we can interpret function types;
* • The disjoint union of a sequence of spaces again forms a space, and this enables us to interpret variant types and inductive types, e.g. lists of reals form the space $\biguplus_{i=0}^{\infty}\mathbb{R}^{i}$.

We emphasise that the most standard formulation of differential geometry, using manifolds, does not support spaces of functions. Diffeological spaces seem to us the simplest notion of space that satisfies these conditions, but there are other candidates [BH11, Sta11]. A diffeological space is in particular a set $X$ equipped with a chosen set of curves $C_{X}\subseteq X^{\mathbb{R}}$, and a smooth map $f:X\to Y$ must be such that if $\gamma\in C_{X}$ then $\gamma;f\in C_{Y}$. This is reminiscent of the method of logical relations.

#### 1.0.2. From smoothness to automatic derivatives at higher types.

Our denotational semantics in diffeological spaces guarantees that all definable functions are smooth. But we need more than just to know that a definable function happens to have a mathematical derivative: we need to be able to find that derivative. In this paper we focus on forward mode automatic differentiation methods for computing higher derivatives, which are macro translations on syntax (called $\overrightarrow{\mathcal{D}}$ in Section 3). We are able to show that they are correct, using our denotational semantics. Here there is one subtle point that is central to our development. Although differential geometry provides established derivatives for first order functions (such as $\mathrm{neuron}$ above), there is no canonical notion of derivative for higher order functions (such as $\mathrm{layer}$ and $\mathrm{comp}$) in the theory of diffeological spaces (e.g. [CW14]). We propose a new way to resolve this, by interpreting types as triples $(X,X^{\prime},S)$ where, intuitively, $X$ is a space of inhabitants of the type, $X^{\prime}$ is a space serving as a chosen bundle of tangents (or jets, in the case of higher order derivatives) over $X$, and $S\subseteq X^{\mathbb{R}}\times X^{\prime\mathbb{R}}$ is a binary relation between curves, informally relating curves in $X$ with their tangent (resp. jet) curves in $X^{\prime}$. This new model gives a denotational semantics for higher order automatic differentiation on a language with higher order functions. In Section 4 we boil this new approach down to a straightforward and elementary logical relations argument for the correctness of higher order automatic differentiation. The approach is explained in detail in Section 6. We explore some subtleties of non-uniqueness of derivatives of higher order functions in Section 7.

#### 1.0.3. Related work and context.

AD has a long history and has many implementations. AD was perhaps first phrased in a functional setting in [PS08], and there are now a number of teams working on AD in the functional setting (e.g. [WWE+19, SFVPJ19, Ell18]), some providing efficient implementations. Although that work does not involve formal semantics, it is inspired by intuitions from differential geometry and category theory. This paper adds to a very recent body of work on verified automatic differentiation.
In the first order setting, there are recent accounts based on denotational semantics in manifolds [FST19, LYRY20] and based on synthetic differential geometry [CGM19], work making a categorical abstraction [CCG+20] and work connecting operational semantics with denotational semantics [AP20, Plo18], as well as work focussing on how to correctly differentiate programs that operate on tensors [BML+20] and programs that make use of quantum computing [ZHCW20]. Recently there has also been significant progress at higher types. Brunel et al. [BMP20] and Mazza and Pagani [MP21] give formal correctness proofs for reverse-mode derivatives on a linear $\lambda$-calculus with a particular operational semantics. The work of Barthe et al. [BCLG20] provides a general discussion of some new syntactic logical relations arguments, including one very similar to our syntactic proof of Theorem 3. Sherman et al. [SMC20] discuss a differential programming technique that works at higher types, based on exact real arithmetic, and relate it to a computable semantics. We understand that the authors of [CGM19] are working on higher types. Vákár and coauthors [Vák21, VS21, LNV21] phrase and prove correct a reverse mode AD technique on a higher order language based on a similar gluing technique. Vákár [Vák20] extends a standard $\lambda$-calculus with type recursion, and proves correct a forward-mode AD on such a higher-order language, also using a gluing argument. The differential $\lambda$-calculus [ER03] is related to AD, and explicit connections are made in [MO20, Man12]. One difference is that the differential $\lambda$-calculus allows the addition of terms at all types, and hence vector space models are suitable to interpret all types. This choice would appear peculiar with the variant and inductive types that we consider here, as the dimension of a disjoint union of spaces is only defined locally. This paper builds on our previous work [HSV20a, Vák20] in which we gave denotational correctness proofs for forward mode AD algorithms for computing first derivatives. Here, we explain how these techniques extend to methods that calculate higher derivatives. The Faà di Bruno construction has also been investigated [CS11] in the context of Cartesian differential categories. The idea of directly calculating higher order derivatives by using automatic differentiation methods that work with Taylor approximations (also known as jets in differential geometry) is well-known [GUW00] and it has recently gained renewed interest [Bet18, BJD19]. So far, such “Taylor-mode AD” methods have only been applied to first order functional languages, however. This paper shows how to extend these higher order AD methods to languages with support for higher order functions and algebraic data types. The two main methods for implementing AD are operator overloading and, the method used in this paper, source code transformation [VMBBL18]. Taylor-mode AD has been seen to be significantly faster than iterated AD in the context of operator overloading [BJD19] in Jax [FJL18]. There are other notable implementations of forward Taylor-mode [BS96, BS97, Kar01, PS07, WGP16]. Some of them are implemented in a functional language [Kar01, PS07]. Taylor-mode implementations use the rich algebraic structure of derivatives to avoid many of the redundant computations that occur with iterated first order methods, and to share common subcomputations.
Perhaps the simplest example to see this is with the sin function, whose iterated derivatives only involve sin, cos, and negation. Importantly, most AD tools have the right complexity up to a constant factor, but this constant is quite important in practice and Taylor-mode helps achieve better performance. Another striking result was achieved with a version of Taylor-mode in [LMG18], where a performance gain of up to two orders of magnitude was obtained for computing certain Hessian-vector products using Ricci calculus. In essence, the algorithm used is a mixed-mode algorithm derived via jets in [Bet18]. This is further improved in [LMG20]. Taylor-mode can also be useful for ODE solvers and hence will be important for neural differential equations [CRBD18]. Finally, we emphasise that we have chosen the neural network (1) as our running example mainly for its simplicity. Indeed, one would typically use reverse-mode AD to train neural networks in practice. There are many other examples of AD outside the neural networks literature: AD is useful whenever derivatives need to be calculated on high dimensional spaces. This includes optimization problems more generally, where the derivative is passed to a gradient descent method (e.g. [RM51, KW+52, Qia99, KB14, DHS11, LN89]). Optimization problems involving higher order functions naturally show up in the calculus of variations and its applications in physics, where one typically looks for a function minimizing a certain integral [GSS00]. Other applications of AD are in advanced _integration_ methods, since derivatives play a role in Hamiltonian Monte Carlo [Nea11, HG14] and variational inference [KTR+17]. Second order methods for gradient descent have also been extensively studied. As the basic second order Newton method requires inverting a high-dimensional Hessian matrix, several alternatives and approximations have been studied. Some of them still require Taylor-like modes of differentiation and require a matrix-vector product where the matrix resembles the Hessian or inverse Hessian [KK04, Mar10, Ama12].

#### 1.0.4. Summary of contributions.

We have provided a semantic analysis of higher order automatic differentiation. Our syntactic starting point is a family of higher order forward-mode AD macros on a typed higher order language that extend their well-known first order equivalents (e.g. [SFVPJ19, WWE+19, HSV20a]). We present these in Section 3 for function types, and in Section 5 we extend them to inductive types and variants. The main contributions of this paper are as follows.

* • We give a denotational semantics for the language in diffeological spaces, showing that every definable expression is smooth (Section 4).
* • We show correctness of the higher order AD macros by a logical relations argument (Th. 3).
* • We give a categorical analysis of this correctness argument with two parts: a universal property satisfied by the macro in terms of syntactic categories, and a new notion of glued space that abstracts the logical relation (Section 6).
* • We then use this analysis to state and prove a correctness argument at all first order types (Th. 8).

#### Relation to previous work

This paper extends and develops the paper [HSV20a] presented at the 23rd International Conference on Foundations of Software Science and Computation Structure (FoSSaCS 2020).
This version includes numerous elaborations, notably the extension of the definition, semantics and correctness of automatic differentiation methods for computing higher order derivatives (introduced in Sections 2.2–2.4) and a novel discussion about derivatives of higher-order functions (Section 7).

## 2\. Rudiments of differentiation: how to calculate with dual numbers and Taylor approximations

### 2.1. First order differentiation: the chain rule and dual numbers.

We will now recall the definition of the gradient of a differentiable function, the goal of AD, and what it means for AD to be correct. Recall that the derivative of a function $f:\mathbb{R}\to\mathbb{R}$, if it exists, is a function $\nabla f:\mathbb{R}\to\mathbb{R}$ such that for all $a$, $\nabla f(a)$ is the gradient of $f$ at $a$ in the sense that the function $x\mapsto f(a)+\nabla f(a)\cdot(x-a)$ gives the best linear approximation of $f$ at $a$. (The gradient $\nabla f(a)$ is often written $\frac{\mathop{}\\!\mathrm{d}f(x)}{\mathop{}\\!\mathrm{d}x}(a)$.) The chain rule for differentiation tells us that we can calculate $\nabla(f;g)(a)=\nabla f(a)\cdot\nabla g(f(a))$. In that sense, the chain rule tells us how linear approximations to a function transform under post-composition with another function. To find $\nabla f$ in a compositional way, using the chain rule, two generalizations are reasonable:

* • We need both $f$ and $\nabla f$ when calculating $\nabla(f;g)$ of a composition $f;g$, using the chain rule, so we are really interested in the pair $(f,\nabla f):\mathbb{R}\to\mathbb{R}\times\mathbb{R}$;
* • In building $f$ we will need to consider functions of multiple arguments, such as $+:\mathbb{R}^{2}\to\mathbb{R}$, and these functions should propagate derivatives.

Thus we are more generally interested in transforming a function $g:\mathbb{R}^{n}\to\mathbb{R}$ into a function $h:(\mathbb{R}\times\mathbb{R})^{n}\to\mathbb{R}\times\mathbb{R}$ in such a way that for any $f_{1},\dots,f_{n}:\mathbb{R}\to\mathbb{R}$, $(f_{1},\nabla f_{1},\dots,f_{n},\nabla f_{n});h=((f_{1},\dots,f_{n});g,\nabla((f_{1},\dots,f_{n});g))\text{.}$ (2) Computing automatically the program representing $h$, given a program representing $g$, is the goal of automatic differentiation. An intuition for $h$ is often given in terms of dual numbers. The transformed function operates on pairs of numbers, $(x,x^{\prime})$, and it is common to think of such a pair as $x+x^{\prime}\epsilon$ for an ‘infinitesimal’ $\epsilon$. But while this is a helpful intuition, the formalization of infinitesimals can be intricate, and the development in this paper is focussed on the elementary formulation in (2). A function $h$ satisfying (2) encodes all the partial derivatives of $g$. For example, if $g\colon\mathbb{R}^{2}\to\mathbb{R}$, then with $f_{1}(x)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}x$ and $f_{2}(x)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}x_{2}$, by applying (2) to $x_{1}$ we obtain $h(x_{1},1,x_{2},0)\\!=\\!(g(x_{1},x_{2}),\frac{\partial g(x,x_{2})}{\partial x}(x_{1}))$ and similarly $h(x_{1},0,x_{2},1)\\!=\\!(g(x_{1},x_{2}),\frac{\partial g(x_{1},x)}{\partial x}(x_{2}))$.
And conversely, if $g$ is differentiable in each argument, then a unique $h$ satisfying (2) can be found by taking linear combinations of partial derivatives, for example: $\textstyle h(x_{1},x_{1}^{\prime},x_{2},x_{2}^{\prime})=(g(x_{1},x_{2}),x_{1}^{\prime}\cdot\frac{\partial g(x,x_{2})}{\partial x}(x_{1})+x_{2}^{\prime}\cdot\frac{\partial g(x_{1},x)}{\partial x}(x_{2}))\text{.}$ (Here, recall that the partial derivative $\frac{\partial g(x,x_{2})}{\partial x}(x_{1})$ is a particular notation for the gradient $\nabla(g(-,x_{2}))(x_{1})$, i.e. with $x_{2}$ fixed.) In summary, the idea of differentiation with dual numbers is to transform a differentiable function $g:\mathbb{R}^{n}\to\mathbb{R}$ to a function $h:\mathbb{R}^{2n}\to\mathbb{R}^{2}$ which captures $g$ and all its partial derivatives. We packaged this up in (2) as an invariant which is useful for building derivatives of compound functions $\mathbb{R}\to\mathbb{R}$ in a compositional way. The idea of (first order) forward mode automatic differentiation is to perform this transformation at the source code level. We say that a macro for AD is correct if, given a semantic model $\llbracket-\rrbracket$, the program $P$ representing $g=\llbracket P\rrbracket$ is transformed by the macro to a program $P^{\prime}$ representing $h=\llbracket P^{\prime}\rrbracket$. This means in particular that $P^{\prime}$ computes correct partial derivatives of (the function represented by) $P$.

#### Smooth functions.

In what follows we will often speak of _smooth_ functions $\mathbb{R}^{k}\to\mathbb{R}$, which are functions that are continuous and differentiable, such that their derivatives are also continuous and differentiable, and so on.

### 2.2. Higher order differentiation: the Faà di Bruno formula and Taylor approximations.

We now generalize the above in two directions:

* • We look for the best local approximations to $f$ with polynomials of some order $R$, generalizing the above use of linear functions ($R=1$).
* • We can work directly with multivariate functions $\mathbb{R}^{k}\to\mathbb{R}$ instead of functions of one variable $\mathbb{R}\to\mathbb{R}$ ($k=1$).

To make this precise, we recall that, given a smooth function $f:\mathbb{R}^{k}\to\mathbb{R}$ and a natural number $R\geq 0$, the _$R$-th order Taylor approximation of $f$ at $a\in\mathbb{R}^{k}$_ is defined in terms of the partial derivatives of $f$: $\displaystyle\mathbb{R}^{k}\;\;\quad$ $\displaystyle\to\qquad\mathbb{R}$ $\displaystyle{x}\qquad$ $\displaystyle\mapsto\sum_{\left\\{(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{N}^{k}\mid\alpha_{1}+\ldots+\alpha_{k}\leq R\right\\}}{\frac{1}{\alpha_{1}!\cdot\ldots\cdot\alpha_{k}!}\frac{\partial^{\alpha_{1}+\ldots+\alpha_{k}}f(x)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}(a)}\cdot(x_{1}-a_{1})^{\alpha_{1}}\cdot\ldots\cdot(x_{k}-a_{k})^{\alpha_{k}}.$ This is an $R$-th order polynomial. Similarly to the case of first order derivatives, we can recover the partial derivatives of $f$ up to the $R$-th order from its Taylor approximation by evaluating the series at basis vectors. See Section 2.3 below for an example. Recall that the ordering of partial derivatives does not matter for smooth functions (Schwarz/Clairaut’s theorem). So there will be $\binom{R+k-1}{k-1}$ $R$-th order partial derivatives, and altogether there are $\binom{R+k}{k}$ summands in the $R$-th order Taylor approximation. (This can be seen by a ‘stars-and-bars’ argument.)
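Returning for a moment to the first order case, the dual-numbers transformation $h$ of (2) is easy to make concrete. The following is a minimal runnable sketch in Python; the names (`Dual`, `sigmoid`) and the example function are our own illustrative choices, not part of the formal development. Each primitive is replaced by its transformed version acting on pairs $(x,x^{\prime})$, and composition then propagates derivatives by the chain rule, exactly as in the identities above.

```python
# A sketch of (1,1) forward AD with dual numbers (illustrative only).
import math

class Dual:
    """A pair (x, dx), thought of as x + dx*eps with eps^2 = 0."""
    def __init__(self, x, dx=0.0):
        self.x, self.dx = x, dx

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.x + other.x, self.dx + other.dx)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (f*g)' = f'*g + f*g'
        return Dual(self.x * other.x, self.dx * other.x + self.x * other.dx)

def sigmoid(d):
    y = 1.0 / (1.0 + math.exp(-d.x))
    return Dual(y, d.dx * y * (1.0 - y))  # chain rule, sigma' = sigma*(1 - sigma)

# g(x1, x2) = sigma(x1 * x2); seeding dx = 1 in the x1 slot gives dg/dx1:
out = sigmoid(Dual(2.0, 1.0) * Dual(3.0, 0.0))
print(out.x, out.dx)  # g(2,3) and (dg/dx1)(2,3) = 3 * sigma'(6)
```

Seeding the pairs $(x_{1},1)$ and $(x_{2},0)$ corresponds precisely to the evaluation $h(x_{1},1,x_{2},0)$ above.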
Since there are ${\binom{R+k}{k}}$ partial derivatives of $f_{i}$ of order $\leq R$, we can store them in the Euclidean space $\mathbb{R}^{\binom{R+k}{k}}$, which can also be regarded as the space of $k$-variate polynomials of degree $\leq R$. We use a convention of coordinates $\left(y_{\alpha_{1}...\alpha_{k}}\in\mathbb{R}\right)_{(\alpha_{1},\ldots,\alpha_{k})\in\left\\{(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{N}^{k}\mid 0\leq\alpha_{1}+\ldots+\alpha_{k}\leq R\right\\}}$ where $y_{\alpha_{1}\ldots\alpha_{k}}$ is intended to represent a partial derivative $\frac{\partial^{\alpha_{1}+...+\alpha_{k}}f}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}(a)$ for some function $f:\mathbb{R}^{k}\to\mathbb{R}$. We will choose these coordinates in lexicographic order of the multi-indices $(\alpha_{1},\ldots,\alpha_{k})$, that is, the indices in the Euclidean space $\mathbb{R}^{\binom{R+k}{k}}$ will typically range from $(0,\ldots,0)$ to $(R,0,\ldots,0)$. The _$(k,R)$-Taylor representation_ of a function $g:\mathbb{R}^{n}\to\mathbb{R}$ is a function $h:\left(\mathbb{R}^{\binom{R+k}{k}}\right)^{n}\to\mathbb{R}^{\binom{R+k}{k}}$ that transforms the partial derivatives of $f:\mathbb{R}^{k}\to\mathbb{R}^{n}$ of order $\leq R$ under postcomposition with $g$: $\displaystyle{\left({\left(\frac{\partial^{\alpha_{1}+\ldots+\alpha_{k}}f_{j}(x)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}\right)}_{(\alpha_{1},...,\alpha_{k})=(0,...,0)}^{(R,0,...,0)}\right)}_{j=1}^{n};h=$ $\displaystyle\quad\quad{\left(\frac{\partial^{\alpha_{1}+\ldots+\alpha_{k}}((f_{1},\ldots,f_{n});g)(x)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}\right)}_{(\alpha_{1},...,\alpha_{k})=(0,...,0)}^{(R,0,...,0)}\text{.}$ (3) Thus the Taylor representation generalizes the dual numbers representation ($R=k=1$). To explicitly calculate the Taylor representation for a smooth function, we recall a generalization of the chain rule to higher derivatives. The chain rule tells us how the coefficients of linear approximations transform under composition of the functions. The _Faà di Bruno formula_ [Sav06, EM03, CS96] tells us how coefficients of Taylor approximations — that is, higher derivatives — transform under composition. We recall the multivariate form from [Sav06, Theorem 2.1].
Given functions $f=(f_{1},\ldots,f_{l}):\mathbb{R}^{k}\to\mathbb{R}^{l}$ and $g:\mathbb{R}^{l}\to\mathbb{R}$, for $\alpha_{1}+\ldots+\alpha_{k}>0$, $\displaystyle\frac{\partial^{\alpha_{1}+\ldots+\alpha_{k}}(f;g)(x)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}(a)$ $\displaystyle=\alpha_{1}!\cdot\ldots\cdot\alpha_{k}!\cdot\sum_{\left\\{(\beta_{1},\ldots,\beta_{l})\in\mathbb{N}^{l}\mid 1\leq\beta_{1}+\ldots+\beta_{l}\leq\alpha_{1}+\ldots+\alpha_{k}\right\\}}\frac{\partial^{\beta_{1}+\ldots+\beta_{l}}g(y)}{\partial y_{1}^{\beta_{1}}\cdots\partial y_{l}^{\beta_{l}}}(f(a))\cdot$ $\displaystyle\sum_{\left\\{((e^{1}_{1},\ldots,e^{1}_{l}),\ldots,(e^{q}_{1},\ldots,e^{q}_{l}))\in(\mathbb{N}^{l})^{q}\mid e_{j}^{1}+\ldots+e_{j}^{q}=\beta_{j},(e^{1}_{1}+\ldots+e^{1}_{l})\cdot\alpha^{1}_{i}+\ldots+(e^{q}_{1}+\ldots+e^{q}_{l})\cdot\alpha^{q}_{i}=\alpha_{i}\right\\}}$ $\displaystyle\prod_{r=1}^{q}\prod_{j=1}^{l}\frac{1}{e^{r}_{j}!}\left(\frac{1}{\alpha^{r}_{1}!\cdot\ldots\cdot\alpha^{r}_{k}!}\frac{\partial^{\alpha^{r}_{1}+\cdots+\alpha^{r}_{k}}f_{j}(x)}{\partial x_{1}^{\alpha^{r}_{1}}\cdots\partial x_{k}^{\alpha^{r}_{k}}}(a)\right)^{e^{r}_{j}},$ where $(\alpha^{1}_{1},\ldots,\alpha^{1}_{k}),\ldots,(\alpha^{q}_{1},\ldots,\alpha^{q}_{k})\in\mathbb{N}^{k}$ are an enumeration of all the vectors $(\alpha^{r}_{1},\ldots,\alpha^{r}_{k})$ of $k$ natural numbers such that $\alpha^{r}_{j}\leq\alpha_{j}$ and $\alpha^{r}_{1}+\ldots+\alpha^{r}_{k}>0$, and we write $q$ for the number of such vectors. The details of this formula reflect the complicated combinatorics that arise from repeated applications of the chain and product rules for differentiation that one uses to prove it. Conceptually, however, it is rather straightforward: it tells us that the coefficients of the $R$-th order Taylor approximation of $f;g$ can be expressed exclusively in terms of those of $f$ and $g$. Thus the Faà di Bruno formula uniquely determines the Taylor representation $h:\left(\mathbb{R}^{\binom{R+k}{k}}\right)^{n}\to\mathbb{R}^{\binom{R+k}{k}}$ in terms of the derivatives of $g:\mathbb{R}^{n}\to\mathbb{R}$ of order $\leq R$, and we can also recover all such derivatives from $h$.

### 2.3. Example: a two-dimensional second order Taylor series

As an example, we can specialize the Faà di Bruno formula above to the second order Taylor series of a function $f:\mathbb{R}^{2}\to\mathbb{R}^{l}$ and its behaviour under postcomposition with a smooth function $g:\mathbb{R}^{l}\to\mathbb{R}$: $\displaystyle\frac{\partial^{2}(f;g)(x)}{\partial x_{i}\partial x_{i^{\prime}}}(a)$ $\displaystyle=\sum_{j=1}^{l}\frac{\partial g(y)}{\partial y_{j}}(f(a))\frac{\partial^{2}f_{j}(x)}{\partial x_{i}\partial x_{i^{\prime}}}(a)+\sum_{j,j^{\prime}=1}^{l}\frac{\partial^{2}g(y)}{\partial y_{j}\partial y_{j^{\prime}}}(f(a))\frac{\partial f_{j^{\prime}}(x)}{\partial x_{i}}(a)\frac{\partial f_{j}(x)}{\partial x_{i^{\prime}}}(a),$ where $i,i^{\prime}\in\left\\{1,2\right\\}$ might either coincide or be distinct.
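This specialized instance of the formula can also be checked mechanically. Below is a small sanity-check sketch, assuming Python with sympy is available; the concrete choices of $f$ and $g$ are ours, and we compare both sides for $i=1$, $i^{\prime}=2$ and $l=2$.

```python
# Symbolic check of the second order chain rule displayed above (a sketch).
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
f1, f2 = sp.sin(x1 * x2), x1 + x2**2   # f = (f1, f2) : R^2 -> R^2 (our choice)
g = sp.exp(y1) * y2                    # g : R^2 -> R (our choice)

ys, fs = [y1, y2], [f1, f2]
subs = {y1: f1, y2: f2}

lhs = sp.diff(g.subs(subs), x1, x2)    # mixed second derivative of (f;g)

rhs = sum(sp.diff(g, ys[j]).subs(subs) * sp.diff(fs[j], x1, x2)
          for j in range(2))
rhs += sum(sp.diff(g, ys[j], ys[jp]).subs(subs)
           * sp.diff(fs[jp], x1) * sp.diff(fs[j], x2)
           for j in range(2) for jp in range(2))

print(sp.simplify(lhs - rhs))          # 0
```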
Rather than working with the full $(2,2)$-Taylor representation of $g$, we ignore the non-mixed second order derivatives $y_{02}^{j}=\frac{\partial^{2}f_{j}(x)}{\partial x_{2}^{2}}$ and $y_{20}^{j}=\frac{\partial^{2}f_{j}(x)}{\partial x_{1}^{2}}$ for the moment, and we represent the derivatives of order $\leq 2$ of $f_{j}:\mathbb{R}^{2}\to\mathbb{R}$ (at some point $a$) as the numbers $(y_{00}^{j},y_{01}^{j},y_{10}^{j},y_{11}^{j})=\left(f_{j}(a),\frac{\partial f_{j}(x)}{\partial x_{2}}(a),\frac{\partial f_{j}(x)}{\partial x_{1}}(a),\frac{\partial^{2}f_{j}(x)}{\partial x_{1}\partial x_{2}}(a)\right)\in\mathbb{R}^{4}$ and we can choose a similar representation for the derivatives of $(f;g)$. Then, we observe that the Faà di Bruno formula induces the function $h:(\mathbb{R}^{4})^{l}\to\mathbb{R}^{4}$ $\displaystyle h((y_{00}^{1},y_{01}^{1},y_{10}^{1},y_{11}^{1}),\ldots,(y_{00}^{l},y_{01}^{l},y_{10}^{l},y_{11}^{l}))=$ $\displaystyle\left(\begin{array}[]{l}g(y_{00}^{1},\ldots,y_{00}^{l})\\\ \sum_{j=1}^{l}\frac{\partial g(y^{1},\ldots,y^{l})}{\partial y_{j}}(y_{00}^{1},\ldots,y_{00}^{l})\cdot y_{01}^{j}\\\ \sum_{j=1}^{l}\frac{\partial g(y^{1},\ldots,y^{l})}{\partial y_{j}}(y_{00}^{1},\ldots,y_{00}^{l})\cdot y_{10}^{j}\\\ \sum_{j=1}^{l}\frac{\partial g(y^{1},\ldots,y^{l})}{\partial y_{j}}(y_{00}^{1},\ldots,y_{00}^{l})\cdot y_{11}^{j}+\sum_{j,j^{\prime}=1}^{l}\frac{\partial^{2}g(y^{1},\ldots,y^{l})}{\partial y_{j}\partial y_{j^{\prime}}}(y_{00}^{1},\ldots,y_{00}^{l})\cdot y_{10}^{j}\cdot y_{01}^{j^{\prime}}\end{array}\right).$ In particular, we can note that $\displaystyle h((y_{00}^{1},y_{01}^{1},y_{10}^{1},0),\ldots,(y_{00}^{l},y_{01}^{l},y_{10}^{l},0))=\left(\begin{array}[]{l}g(y_{00}^{1},\ldots,y_{00}^{l})\\\ \sum_{j=1}^{l}\frac{\partial g(y^{1},\ldots,y^{l})}{\partial y_{j}}(y_{00}^{1},\ldots,y_{00}^{l})\cdot y_{01}^{j}\\\ \sum_{j=1}^{l}\frac{\partial g(y^{1},\ldots,y^{l})}{\partial y_{j}}(y_{00}^{1},\ldots,y_{00}^{l})\cdot y_{10}^{j}\\\ \sum_{j,j^{\prime}=1}^{l}\frac{\partial^{2}g(y^{1},\ldots,y^{l})}{\partial y_{j}\partial y_{j^{\prime}}}(y_{00}^{1},\ldots,y_{00}^{l})\cdot y_{10}^{j}\cdot y_{01}^{j^{\prime}}\end{array}\right).$ We can use this method to calculate any directional first and second order derivative of $g$ in one pass. For example, if $l=3$, so $g:\mathbb{R}^{3}\to\mathbb{R}$, then the last component of $h((x,x^{\prime},x^{\prime\prime},0),(y,y^{\prime},y^{\prime\prime},0),(z,z^{\prime},z^{\prime\prime},0))$ is the result of taking the first derivative in direction $(x^{\prime},y^{\prime},z^{\prime})$ and the second derivative in direction $(x^{\prime\prime},y^{\prime\prime},z^{\prime\prime})$, and evaluating at $(x,y,z)$. In the proper Taylor representation we explicitly include the non-mixed second order derivatives as inputs and outputs, leading to a function $h^{\prime}:(\mathbb{R}^{6})^{l}\to\mathbb{R}^{6}$. Above we have followed a common trick to avoid some unnecessary storage and computation, since these extra inputs and outputs are not required for computing the second order derivatives of $g$. For instance, if $l=2$ then the last component of $h((x,1,1,0),(y,0,0,0))$ computes $\frac{\partial^{2}g(x,y)}{\partial x^{2}}(x,y)$.

### 2.4. Example: a one-dimensional second order Taylor series

As opposed to (2,2)-AD, (1,2)-AD computes the first and second order derivatives in the same direction. For example, if $g:\mathbb{R}^{2}\to\mathbb{R}$ is a smooth function, then $h:(\mathbb{R}^{3})^{2}\to\mathbb{R}^{3}$.
An intuition for $h$ can be given in terms of triple numbers. The transformed function operates on triples of numbers, $(x,x^{\prime},x^{\prime\prime})$, and it is common to think of such a triple as $x+x^{\prime}\epsilon+x^{\prime\prime}\epsilon^{2}$ for an ‘infinitesimal’ $\epsilon$ which has the property that $\epsilon^{3}=0$. For instance, we have $\displaystyle h((x_{1},1,0),(x_{2},0,0))=(g(x_{1},x_{2}),\frac{\partial g(x,x_{2})}{\partial x}(x_{1}),\frac{\partial^{2}g(x,x_{2})}{\partial x^{2}}(x_{1}))$ $\displaystyle h((x_{1},0,0),(x_{2},1,0))=(g(x_{1},x_{2}),\frac{\partial g(x_{1},x)}{\partial x}(x_{2}),\frac{\partial^{2}g(x_{1},x)}{\partial x^{2}}(x_{2}))$ $\displaystyle h((x_{1},1,0),(x_{2},1,0))=$ $\displaystyle(g(x_{1},x_{2}),\frac{\partial g(x,x_{2})}{\partial x}(x_{1})+\frac{\partial g(x_{1},x)}{\partial x}(x_{2}),\frac{\partial^{2}g(x,x_{2})}{\partial x^{2}}(x_{1})+\frac{\partial^{2}g(x_{1},x)}{\partial x^{2}}(x_{2})+2\frac{\partial^{2}g(x,y)}{\partial x\partial y}(x_{1},x_{2}))$ We see that we directly get the non-mixed second-order partial derivatives but not the mixed ones. We can recover $\frac{\partial^{2}g(x,y)}{\partial x\partial y}(x_{1},x_{2})$ as $\frac{1}{2}(h_{3}((x_{1},1,0),(x_{2},1,0))-h_{3}((x_{1},1,0),(x_{2},0,0))-h_{3}((x_{1},0,0),(x_{2},1,0)))$, where $h_{3}$ denotes the last component of $h$. More generally, if $g:\mathbb{R}^{l}\to\mathbb{R}$, then $h:(\mathbb{R}^{3})^{l}\to\mathbb{R}^{3}$ satisfies: $\displaystyle h((x_{1},x^{\prime}_{1},0),\ldots,(x_{l},x^{\prime}_{l},0))=\left(\begin{array}[]{l}g(x_{1},\ldots,x_{l})\\\ \sum_{i=1}^{l}\frac{\partial g(x_{1},\ldots,x_{l})}{\partial x_{i}}(x_{1},\ldots,x_{l})\cdot x^{\prime}_{i}\\\ \sum_{i,j=1}^{l}\frac{\partial^{2}g(x_{1},\ldots,x_{l})}{\partial x_{i}\partial x_{j}}(x_{1},\ldots,x_{l})\cdot x^{\prime}_{i}\cdot x^{\prime}_{j}\end{array}\right).$ We can always recover the mixed second order partial derivatives from this, but this requires several computations involving $h$. This is thus different from the (2,2) method, which was more direct.

### 2.5. Remark

In the rest of this article, we study forward-mode $(k,R)$-automatic differentiation for a language with higher-order functions. The reader may like to fix $k=R=1$ for a standard automatic differentiation with first-order derivatives, based on dual numbers. This is the approach taken in the conference version of this paper [HSV20b]. But the generalization to higher-order derivatives with arbitrary $k$ and $R$ flows straightforwardly through the whole narrative.

## 3\. A Higher Order Forward-Mode AD Translation

### 3.1. A simple language of smooth functions.

We consider a standard higher order typed language with a first order type $\mathbf{real}$ of real numbers. The types $({\tau},{\sigma})$ and terms $({t},{s})$ are as follows.
$\begin{array}[t]{l@{\quad\\!\\!}*3{l@{}}@{\,}l}{\tau},{\sigma},{\rho}&::=&&\mspace{-25.0mu}\qquad\text{types}\\\ &\mathrel{\lvert}&\mathbf{real}&\qquad\text{real numbers}\\\ &\mathrel{\lvert}&\boldsymbol{(}{\tau}_{1}\boldsymbol{\mathop{*}}\dots\boldsymbol{\mathop{*}}{\tau}_{n}\boldsymbol{)}&\qquad\text{finite product}\\\ &\mathrel{\lvert}&{\tau}\to{\sigma}&\qquad\text{function}\\\\[6.0pt] {t},{s},{r}&::=&&\mspace{-25.0mu}\qquad\text{terms}\\\ &&{x}&\qquad\text{variable}\\\ &\mathrel{\lvert}&\mathsf{op}({t}_{1},\ldots,{t}_{n})&\qquad\text{operations (including constants)}\\\ &\mathrel{\lvert}&\langle{t}_{1},\dots,{t}_{n}\rangle\ \mathrel{\lvert}\mathbf{case}\,{t}\,\mathbf{of}\,\langle{x}_{1},\dots,{x}_{n}\rangle\to{s}&\qquad\text{tuples/pattern matching}\\\ &\mathrel{\lvert}&\lambda{x}.{t}\ \mathrel{\lvert}{t}\,{s}&\qquad\text{function abstraction/application}\\\ \end{array}$ The typing rules are in Figure 3. We have included some abstract basic $n$-ary operations $\mathsf{op}\in\mathsf{Op}_{n}$ for every $n\in\mathbb{N}$. These are intended to include the usual (smooth) mathematical operations that are used in programs to which automatic differentiation is applied. For example,

* • for any real constant $c\in\mathbb{R}$, we typically include a constant $\underline{c}\in\mathsf{Op}_{0}$; we slightly abuse notation and will simply write $\underline{c}$ for $\underline{c}()$ in our examples;
* • we include some unary operations such as $\varsigma\in\mathsf{Op}_{1}$ which we intend to stand for the usual sigmoid function, $\varsigma(x)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\frac{1}{1+e^{-x}}$;
* • we include some binary operations such as addition and multiplication $(+),(*)\in\mathsf{Op}_{2}$.

We add some simple syntactic sugar $t-u\stackrel{{\scriptstyle\mathrm{def}}}{{=}}t+\underline{(-1)}*u$ and, for some natural number $n$, $n\cdot{t}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\overbrace{{t}+...+{t}}^{\text{$n$ times}}\qquad\text{and}\qquad{t}^{n}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\overbrace{{t}*...*{t}}^{\text{$n$ times}}$ Similarly, we will frequently denote repeated sums and products using $\sum$\- and $\prod$-signs, respectively: for example, we write ${t}_{1}+...+{t}_{n}$ as $\sum_{i\in\\{1,...,n\\}}{t}_{i}$ and ${t}_{1}*...*{t}_{n}$ as $\prod_{i\in\left\\{1,...,n\right\\}}{t}_{i}$. This is in addition to programming sugar such as $\mathbf{let}\,{x}=\,{t}\,\mathbf{in}\,{s}$ for $(\lambda{x}.{{s}})\,{t}$ and $\lambda\langle{x}_{1},\ldots,{x}_{n}\rangle.{{t}}$ for $\lambda{x}.{\mathbf{case}\,{x}\,\mathbf{of}\,\langle{x}_{1},\ldots,{x}_{n}\rangle\to{t}}$.
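To fix ideas before giving the typing rules, the term syntax above can be transcribed directly as a datatype; a minimal sketch in Python follows (the constructor names are our own, and the paper itself only works with the syntax abstractly).

```python
# A sketch of the term syntax as a datatype (illustrative names).
from dataclasses import dataclass
from typing import List

class Term: pass

@dataclass
class Var(Term):
    name: str

@dataclass
class Op(Term):            # op(t1, ..., tn); constants are nullary operations
    op: str                # e.g. "+", "*", "sigmoid", "c"
    args: List[Term]

@dataclass
class Tup(Term):           # <t1, ..., tn>
    items: List[Term]

@dataclass
class Case(Term):          # case t of <x1, ..., xn> -> s
    scrutinee: Term
    binders: List[str]
    body: Term

@dataclass
class Lam(Term):           # lambda x. t
    var: str
    body: Term

@dataclass
class App(Term):           # t s
    fun: Term
    arg: Term

# x * x + c, using the sugar conventions above:
t = Op("+", [Op("*", [Var("x"), Var("x")]), Op("c", [])])
```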
$\begin{array}[]{c}\inferrule{\Gamma\vdash{t}_{1}:\mathbf{real}\;\;\dots\;\;\Gamma\vdash{t}_{n}:\mathbf{real}}{\Gamma\vdash\mathsf{op}({t}_{1},\ldots,{t}_{n}):\mathbf{real}}(\mathsf{op}\in\mathsf{Op}_{n})\\\\[12.0pt] \\\ \inferrule{\Gamma\vdash{t}_{1}:{\tau}_{1}\;\;\dots\;\;\Gamma\vdash{t}_{n}:{\tau}_{n}}{\Gamma\vdash\langle{t}_{1},\dots,{t}_{n}\rangle:\boldsymbol{(}{\tau}_{1}\boldsymbol{\mathop{*}}\dots\boldsymbol{\mathop{*}}{\tau}_{n}\boldsymbol{)}}\qquad\inferrule{\Gamma\vdash{t}:\boldsymbol{(}{\sigma}_{1}\boldsymbol{\mathop{*}}\dots\boldsymbol{\mathop{*}}{\sigma}_{n}\boldsymbol{)}\;\;\Gamma,{{x}_{1}\colon{\sigma}_{1},{.}{.}{.},{x}_{n}\colon{\sigma}_{n}}\vdash{s}:{\tau}}{\Gamma\vdash\mathbf{case}\,{t}\,\mathbf{of}\,\langle{x}_{1},\dots,{x}_{n}\rangle\to{s}:{\tau}}\\\\[12.0pt] \\\ \inferrule{~{}}{\Gamma\vdash{x}:{\tau}}(({x}:{\tau})\in\Gamma)\qquad\inferrule{\Gamma,{x}:{\tau}\vdash{t}:{\sigma}}{\Gamma\vdash\lambda{x}:{\tau}.{t}:{\tau}\to{\sigma}}\qquad\inferrule{\Gamma\vdash{t}:{\sigma}\to{\tau}\\\ \Gamma\vdash{s}:{\sigma}}{\Gamma\vdash{t}\,{s}:{\tau}}\end{array}$ Figure 3. Typing rules for the simple language. ### 3.2. Syntactic automatic differentiation: a functorial macro. The aim of higher order forward mode AD is to find the $(k,R)$-Taylor representation of a function by syntactic manipulations, for some choice of $(k,R)$ that we fix. For our simple language, we implement this as the following inductively defined macro $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}$ on both types and terms (see also [WWE+19, SFVPJ19]). For the sake of legibility, we simply write $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}$ as $\overrightarrow{\mathcal{D}}$ here and leave the dimension $k$ and order $R$ of the Taylor representation implicit. The following definition is for general $k$ and $R$, but we treat specific cases afterwards in Example 3.2. 
$\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}\to{\sigma})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau})\to\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\sigma})\qquad\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}_{1}\boldsymbol{\mathop{*}}...\boldsymbol{\mathop{*}}{\tau}_{n})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}_{1})}\boldsymbol{\mathop{*}}...\boldsymbol{\mathop{*}}{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}_{n})}$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\mathbf{real})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}{\mathbf{real}}^{\binom{R+k}{k}}\quad\text{(i.e.,~{}the type of tuples of reals of length $\textstyle\binom{R+k}{k}$)}$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({x})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}{x}\hskip 80.0pt\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\underline{c})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\langle\underline{c},\underline{0},\ldots,\underline{0}\rangle$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\lambda{x}.{t})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\lambda{x}.{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t})}\hskip 12.0pt\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}\,{s})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t})\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({s})\hskip 12.0pt\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\langle{t}_{1},\dots,{t}_{n}\rangle)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\langle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}_{1}),\dots,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}_{n})\rangle$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\mathbf{case}\,{t}\,\mathbf{of}\,\langle{x}_{1},\dots,{x}_{n}\rangle\to{s}})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t})\,\mathbf{of}\,\langle{x}_{1},\dots,{x}_{n}\rangle\to\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({s})$ $\displaystyle\begin{array}[]{ll}\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\mathsf{op}({t}_{1},\ldots,{t}_{n}))\stackrel{{\scriptstyle\mathrm{def}}}{{=}}{}&\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}_{1})\,\mathbf{of}\,\langle{x}_{0...0}^{1},...,{x}_{R,0...0}^{1}\rangle\to\\\ &\vdots\\\ &{\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}_{n})\,\mathbf{of}\,\langle{x}_{0...0}^{n},...,{x}_{R,0...0}^{n}\rangle\to}\\\ &\begin{array}[]{lll}\langle&D^{0...0}\mathsf{op}(x_{0...0}^{1},...,x_{R,0...0}^{1},...,x_{0...0}^{n},...,x_{R,0...0}^{n}),\\\ &\cdots,\\\ &D^{R...0}\mathsf{op}(x_{0...0}^{1},...,x_{R,0...0}^{1},...,x_{0...0}^{n},...,x_{R,0...0}^{n})&\rangle\end{array}\end{array}$ where $\displaystyle D^{0...0}\mathsf{op}(x_{0...0}^{1},...,x_{R,0...0}^{1},...,x_{0...0}^{n},...,x_{R,0...0}^{n})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathsf{op}(x_{0...0}^{1},...,x_{0...0}^{n})$ $\displaystyle\begin{array}[]{@{}l}D^{\alpha_{1}...\alpha_{k}}\mathsf{op}(x_{0...0}^{1},...,x_{R,0...0}^{1},...,x_{0...0}^{n},...,x_{R,0...0}^{n})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}~{}\qquad\text{\scriptsize(for $\alpha_{1}+...+\alpha_{k}>0$)}\\\ \begin{array}[]{c}{\alpha_{1}!\cdot\ldots\cdot\alpha_{k}!}\cdot\sum_{\left\\{(\beta_{1},\ldots,\beta_{n})\in\mathbb{N}^{n}\mid 1\leq\beta_{1}+\ldots+\beta_{n}\leq\alpha_{1}+\ldots+\alpha_{k}\right\\}}\partial_{\beta_{1}\cdots\beta_{n}}\mathsf{op}({x}_{0...0}^{1},\ldots,{x}_{0...0}^{n})*\\\ \sum_{\left\\{((e^{1}_{1},\ldots,e^{1}_{n}),\ldots,(e^{q}_{1},\ldots,e^{q}_{n}))\in(\mathbb{N}^{n})^{q}\mid e_{j}^{1}+\ldots+e_{j}^{q}=\beta_{j},(e^{1}_{1}+\ldots+e^{1}_{n})\cdot\alpha^{1}_{i}+\ldots+(e^{q}_{1}+\ldots+e^{q}_{n})\cdot\alpha^{q}_{i}=\alpha_{i}\right\\}}\\\ \prod_{r=1}^{q}\prod_{j=1}^{n}{\frac{1}{e^{r}_{j}!}}\cdot\left({\frac{1}{\alpha^{r}_{1}!\cdot\ldots\cdot\alpha^{r}_{k}!}}\cdot{x}_{\alpha^{r}_{1}\cdots\alpha^{r}_{k}}^{j}\right)^{e^{r}_{j}}\text{.}\end{array}\end{array}$ Here, $(\partial_{\beta_{1}\cdots\beta_{n}}\mathsf{op})(x_{1},\ldots,x_{n})$ are some chosen terms of type $\mathbf{real}$ in the language with free variables from $x_{1},\ldots,x_{n}$. We think of these terms as implementing the partial derivative $\frac{\partial^{\beta_{1}+...+\beta_{n}}\llbracket\mathsf{op}\rrbracket(x_{1},...,x_{n})}{\partial x_{1}^{\beta_{1}}\cdots\partial x_{n}^{\beta_{n}}}$ of the smooth function $\llbracket\mathsf{op}\rrbracket:\mathbb{R}^{n}\to\mathbb{R}$ that $\mathsf{op}$ implements. For example, we could choose the following representations of derivatives of order $\leq 2$ of our example operations $\begin{array}[]{ll}\partial_{01}(+)(x_{1},x_{2})=\underline{1}&\partial_{02}(+)(x_{1},x_{2})=\underline{0}\\\ \partial_{10}(+)(x_{1},x_{2})=\underline{1}&\partial_{11}(+)(x_{1},x_{2})=\underline{0}\\\ \partial_{20}(+)(x_{1},x_{2})=\underline{0}\\\\[6.0pt] \partial_{01}(*)(x_{1},x_{2})=x_{1}&\partial_{02}(*)(x_{1},x_{2})=\underline{0}\\\ \partial_{10}(*)(x_{1},x_{2})=x_{2}&\partial_{11}(*)(x_{1},x_{2})=\underline{1}\\\ \partial_{20}(*)(x_{1},x_{2})=\underline{0}\\\\[6.0pt] \partial_{1}(\varsigma)({x})=\mathbf{let}\,{y}=\,\varsigma({x})\,\mathbf{in}\,{y}*(\underline{1}-{y}){}&\partial_{2}(\varsigma)({x})=\mathbf{let}\,{y}=\,\varsigma({x})\,\mathbf{in}\\\ &\phantom{\partial_{2}(\varsigma)({x})=\;}\mathbf{let}\,{z}=\,{y}*(\underline{1}-{y})\,\mathbf{in}\,{z}*(\underline{1}-\underline{2}*{y})\end{array}$ Note that our rules, in particular, imply that $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\underline{c})=\langle\underline{c},\underline{0},\ldots,\underline{0}\rangle$. [$(1,1)$- and $(2,2)$-AD] Our choices of partial derivatives of the example operations are sufficient to implement $(k,R)$-Taylor forward AD with $R\leq 2$.
To be explicit, the distinctive formulas for $(1,1)$- and $(2,2)$-AD methods (specializing our abstract definition of $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}$ above) are $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}(\mathbf{real})=\boldsymbol{(}\mathbf{real}\boldsymbol{\mathop{*}}\mathbf{real}\boldsymbol{)}$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}(\mathsf{op}({t}_{1},\ldots,{t}_{n}))=$ $\displaystyle\qquad\begin{array}[]{l}\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}({t}_{1})\,\mathbf{of}\,\langle{x}^{1}_{0},{x}^{1}_{1}\rangle\to\ldots\to\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}({t}_{n})\,\mathbf{of}\,\langle{x}_{0}^{n},{x}_{1}^{n}\rangle\to\\\ \langle\mathsf{op}({x}_{0}^{1},\ldots,{x}_{0}^{n}),\sum_{i=1}^{n}{x}^{i}_{1}*\partial_{i}\mathsf{op}({x}_{0}^{1},\ldots,{x}_{0}^{n})\rangle\end{array}$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)}(\mathbf{real})=\mathbf{real}^{6}$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)}(\mathsf{op}({t}_{1},...,{t}_{n}))=$ $\displaystyle\qquad\begin{array}[]{l}\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)}({t}_{1})\,\mathbf{of}\,\langle{x}^{1}_{00},{x}^{1}_{01},{x}^{1}_{02},{x}^{1}_{10},{x}^{1}_{11},{x}^{1}_{20}\rangle\to\\\ \vdots\\\ \mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)}({t}_{n})\,\mathbf{of}\,\langle{x}^{n}_{00},{x}^{n}_{01},{x}^{n}_{02},{x}^{n}_{10},{x}^{n}_{11},{x}^{n}_{20}\rangle\to\\\ \qquad\begin{array}[]{l}\langle\mathsf{op}({x}^{1}_{00},\ldots,{x}^{n}_{00}),\\\ \sum_{i=1}^{n}{x}^{i}_{01}*\partial_{\hat{i}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n}),\\\ \sum_{i=1}^{n}{x}^{i}_{02}*\partial_{{\hat{i}}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n})+\sum_{i,j=1}^{n}{x}^{i}_{01}*{x}^{j}_{01}*\partial_{\widehat{{i,j}}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n}),\\\ \sum_{i=1}^{n}{x}^{i}_{10}*\partial_{\hat{i}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n}),\\\ \sum_{i=1}^{n}{x}^{i}_{11}*\partial_{{\hat{i}}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n})+\sum_{i,j=1}^{n}{x}^{i}_{10}*{x}^{j}_{01}*\partial_{\widehat{{i,j}}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n}),\\\ \sum_{i=1}^{n}{x}^{i}_{20}*\partial_{{\hat{i}}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n})+\sum_{i,j=1}^{n}{x}^{i}_{10}*{x}^{j}_{10}*\partial_{\widehat{{i,j}}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n})\rangle\end{array}\end{array}$ where we informally write $\hat{i}$ for the one-hot encoding of $i$ (the sequence of length $n$ consisting exclusively of zeros except in position $i$ where it has a $1$) and $\widehat{i,j}$ for the two-hot encoding of $i$ and $j$ (the sequence of length $n$ consisting exclusively of zeros except in positions $i$ and $j$ where it has a $1$ if $i\neq j$ and a $2$ if $i=j$). As noted in Section 2, it is often unnecessary to include all components of the $(2,2)$-algorithm, for example when computing a second order directional derivative.
In that case, we may define a restricted $(2,2)$-AD algorithm that drops the non-mixed second order derivatives from the definitions above and defines $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)^{\prime}}(\mathbf{real})=\mathbf{real}^{4}$ and $\begin{array}[]{ll}\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)^{\prime}}(\mathsf{op}({t}_{1},...,{t}_{n}))=&\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)^{\prime}}({t}_{1})\,\mathbf{of}\,\langle{x}^{1}_{00},{x}^{1}_{01},{x}^{1}_{10},{x}^{1}_{11}\rangle\to\\\ &\vdots\\\ &\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)^{\prime}}({t}_{n})\,\mathbf{of}\,\langle{x}^{n}_{00},{x}^{n}_{01},{x}^{n}_{10},{x}^{n}_{11}\rangle\to\\\ &\begin{array}[]{l}\langle\mathsf{op}({x}^{1}_{00},\ldots,{x}^{n}_{00}),\\\ \sum_{i=1}^{n}{x}^{i}_{01}*\partial_{\hat{i}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n}),\\\ \sum_{i=1}^{n}{x}^{i}_{10}*\partial_{\hat{i}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n}),\\\ \sum_{i=1}^{n}{x}^{i}_{11}*\partial_{{\hat{i}}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n})\\\ {}+\sum_{i,i^{\prime}=1}^{n}{x}^{i}_{10}*{x}^{i^{\prime}}_{01}*\partial_{\widehat{i,i^{\prime}}}\mathsf{op}({x}_{00}^{1},\ldots,{x}_{00}^{n})\rangle.\end{array}\end{array}$ We extend $\overrightarrow{\mathcal{D}}$ to contexts: $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\\{{x}_{1}{:}{\tau}_{1},{.}{.}{.},{x}_{n}{:}{\tau}_{n}\\})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\\{{x}_{1}{:}\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}_{1}),{.}{.}{.},{x}_{n}{:}\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}_{n})\\}$. This turns $\overrightarrow{\mathcal{D}}$ into a well-typed, functorial macro in the following sense.

###### Lemma 1 (Functorial macro).

If $\Gamma\vdash{t}:{\tau}$ then $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\Gamma)\vdash\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}):\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau})$. If $\Gamma,{x}:{\sigma}\vdash{t}:{\tau}$ and $\Gamma\vdash{s}:{\sigma}$ then $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\Gamma)\vdash\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}{}[^{{s}}\\!/\\!_{{x}}])=\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}){}[^{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({s})}\\!/\\!_{{x}}]$.

###### Proof 3.1.

By induction on the structure of typing derivations. [Inner products] Let us write ${\tau}^{n}$ for the $n$-fold product $\boldsymbol{(}{\tau}\boldsymbol{\mathop{*}}\dots\boldsymbol{\mathop{*}}{\tau}\boldsymbol{)}$.
Then, given $\Gamma\vdash{t},{s}:\mathbf{real}^{n}$ we can define their inner product $\begin{array}[]{ll}\Gamma\vdash{t}\cdot_{n}{s}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}&\mathbf{case}\,{t}\,\mathbf{of}\,\langle{z}_{1},\ldots,{z}_{n}\rangle\to\\\ &\mathbf{case}\,{s}\,\mathbf{of}\,\langle{y}_{1},\ldots,{y}_{n}\rangle\to{z}_{1}*{y}_{1}+\dots+{z}_{n}*{y}_{n}:\mathbf{real}\end{array}$ To illustrate the calculation of $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}$, let us expand (and $\beta$-reduce) $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}({t}\cdot_{2}{s})$: $\displaystyle\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}({t})\,\mathbf{of}\,\langle{z}_{1},{z}_{2}\rangle\to\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}({s})\,\mathbf{of}\,\langle{y}_{1},{y}_{2}\rangle\to$ $\displaystyle\mathbf{case}\,{z}_{1}\,\mathbf{of}\,\langle{z}_{1,1},{z}_{1,2}\rangle\to\mathbf{case}\,{y}_{1}\,\mathbf{of}\,\langle{y}_{1,1},{y}_{1,2}\rangle\to$ $\displaystyle\mathbf{case}\,{z}_{2}\,\mathbf{of}\,\langle{z}_{2,1},{z}_{2,2}\rangle\to\mathbf{case}\,{y}_{2}\,\mathbf{of}\,\langle{y}_{2,1},{y}_{2,2}\rangle\to$ $\displaystyle\qquad\langle{z}_{1,1}*{y}_{1,1}+{z}_{2,1}*{y}_{2,1}\ ,\ {z}_{1,1}*{y}_{1,2}+{z}_{1,2}*{y}_{1,1}+{z}_{2,1}*{y}_{2,2}+{z}_{2,2}*{y}_{2,1}\rangle$ Let us also expand the calculation of $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)^{\prime}}({t}\cdot_{2}{s})$: $\displaystyle\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)^{\prime}}({t})\,\mathbf{of}\,\langle{z}_{1},{z}_{2}\rangle\to\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(2,2)^{\prime}}({s})\,\mathbf{of}\,\langle{y}_{1},{y}_{2}\rangle\to$ $\displaystyle\mathbf{case}\,{z}_{1}\,\mathbf{of}\,\langle{z}_{1},{z}_{1,1}^{\prime},{z}_{1,2}^{\prime},{z}_{1}^{\prime\prime}\rangle\to\mathbf{case}\,{y}_{1}\,\mathbf{of}\,\langle{y}_{1},{y}_{1,1}^{\prime},{y}_{1,2}^{\prime},{y}_{1}^{\prime\prime}\rangle\to$ $\displaystyle\mathbf{case}\,{z}_{2}\,\mathbf{of}\,\langle{z}_{2},{z}_{2,1}^{\prime},{z}_{2,2}^{\prime},{z}_{2}^{\prime\prime}\rangle\to\mathbf{case}\,{y}_{2}\,\mathbf{of}\,\langle{y}_{2},{y}_{2,1}^{\prime},{y}_{2,2}^{\prime},{y}_{2}^{\prime\prime}\rangle\to$ $\displaystyle\langle{z}_{1}*{y}_{1}+{z}_{2}*{y}_{2},$ $\displaystyle\qquad\ {z}_{1}*{y}_{1,1}^{\prime}+{z}_{1,1}^{\prime}*{y}_{1}+{z}_{2}*{y}_{2,1}^{\prime}+{z}_{2,1}^{\prime}*{y}_{2},$ $\displaystyle\qquad\ {z}_{1}*{y}_{1,2}^{\prime}+{z}_{1,2}^{\prime}*{y}_{1}+{z}_{2}*{y}_{2,2}^{\prime}+{z}_{2,2}^{\prime}*{y}_{2},$ $\displaystyle\qquad\ {z}_{1}^{\prime\prime}*{y}_{1}+{z}_{2}^{\prime\prime}*{y}_{2}+{y}_{1}^{\prime\prime}*{z}_{1}+{y}_{2}^{\prime\prime}*{z}_{2}+$ $\displaystyle\qquad{z}_{1,1}^{\prime}*{y}_{1,2}^{\prime}+{z}_{1,2}^{\prime}*{y}_{1,1}^{\prime}+{z}_{2,1}^{\prime}*{y}_{2,2}^{\prime}+{z}_{2,2}^{\prime}*{y}_{2,1}^{\prime}\rangle$ [Neural networks] In our introduction, we provided a program (1) in our language to build a neural network out of expressions $\mathrm{neuron},\mathrm{layer},\mathrm{comp}$; this program makes use of the inner product of Ex. 3.1. We can similarly calculate the derivatives of deep neural nets by mechanically applying the macro $\overrightarrow{\mathcal{D}}$.

## 4\. Semantics of differentiation

Consider for a moment the first order fragment of the language in Section 3, with only one type, $\mathbf{real}$, and no $\lambda$’s or pairs. This has a simple semantics in the category of cartesian spaces and smooth maps.
Indeed, a term ${x}_{1},\dots,{x}_{n}:\mathbf{real}\vdash{t}:\mathbf{real}$ has a natural reading as a function $\llbracket{t}\rrbracket:\mathbb{R}^{n}\to\mathbb{R}$ by interpreting our operation symbols as the well-known smooth functions $\mathbb{R}^{n}\to\mathbb{R}$ with the corresponding names. In fact, the functions that are definable in this first order fragment are smooth. Let us write $\mathbf{CartSp}$ for this category of cartesian spaces ($\mathbb{R}^{n}$ for some $n$) and smooth functions. The category $\mathbf{CartSp}$ has cartesian products, and so we can also interpret product types, tupling and pattern matching, giving us a useful syntax for constructing functions into and out of products of $\mathbb{R}$. For example, the interpretation of $(\mathrm{neuron}_{n})$ in (1) becomes $\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}\xrightarrow{\llbracket\cdot_{n}\rrbracket\times{\rm id}_{\mathbb{R}}}\mathbb{R}\times\mathbb{R}\xrightarrow{\llbracket+\rrbracket}\mathbb{R}\xrightarrow{\llbracket\varsigma\rrbracket}\mathbb{R}$ where $\llbracket\cdot_{n}\rrbracket$, $\llbracket+\rrbracket$ and $\llbracket\varsigma\rrbracket$ are the usual inner product, addition and the sigmoid function on $\mathbb{R}$, respectively. Inside this category, we can straightforwardly study the first order language without $\lambda$’s, and its automatic differentiation. In fact, we can prove the following by plain induction on the syntax: _The interpretation of the (syntactic) forward AD $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t})$ of a first order term ${t}$ equals the usual (semantic) derivative of the interpretation of ${t}$ as a smooth function._ However, as is well-known, the category $\mathbf{CartSp}$ does not support function spaces. To see this, notice that we have polynomial terms ${x}_{1},\ldots,{x}_{d}:\mathbf{real}\vdash\lambda{y}.\,\textstyle\sum_{n=1}^{d}{x}_{n}{y}^{n}:\mathbf{real}\to\mathbf{real}$ for each $d$, and so if we could interpret $(\mathbf{real}\to\mathbf{real})$ as a Euclidean space $\mathbb{R}^{p}$ then, by interpreting these polynomial expressions, we would be able to find continuous injections $\mathbb{R}^{d}\to\mathbb{R}^{p}$ for every $d$, which is topologically impossible for any $p$, for example as a consequence of the Borsuk-Ulam theorem (see Appx. A). This lack of function spaces means that we cannot interpret the functions $(\mathrm{layer})$ and $(\mathrm{comp})$ from (1) in $\mathbf{CartSp}$, as they are higher order functions, even though they are very useful and innocent building blocks for differential programming! Clearly, we could define neural nets such as (1) directly as smooth functions without any higher order subcomponents, though that would quickly become cumbersome for deep networks. A problematic consequence of the lack of a semantics for higher order differential programs is that we have no obvious way of establishing compositional semantic correctness of $\overrightarrow{\mathcal{D}}$ for the given implementation of (1). We now show that every definable function is smooth, and then in Section 4.2 we show that the $\overrightarrow{\mathcal{D}}$ macro witnesses its derivatives. ### 4.1. Smoothness at higher types and diffeologies The aim of this section is to introduce diffeological spaces as a semantic model for the simple language in Section 3.
By way of motivation, we begin with a standard set theoretic semantics, where types are interpreted as follows $\textstyle\llfloor\mathbf{real}\rrceil\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathbb{R}\qquad\llfloor\boldsymbol{(}{\tau}_{1}\boldsymbol{\mathop{*}}\dots\boldsymbol{\mathop{*}}{\tau}_{n}\boldsymbol{)}\rrceil\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\prod_{i=1}^{n}\llfloor{\tau}_{i}\rrceil\qquad\llfloor\tau\to\sigma\rrceil\stackrel{{\scriptstyle\mathrm{def}}}{{=}}(\llfloor\tau\rrceil\to\llfloor\sigma\rrceil)$ and a term ${x_{1}:{\tau}_{1},\dots,x_{n}:{\tau}_{n}}\vdash t:{\sigma}$ is interpreted as a function $\prod_{i=1}^{n}\llfloor{\tau}_{i}\rrceil\to\llfloor{\sigma}\rrceil$, mapping a valuation of the context to a result. We can show that the interpretation of a term $x_{1}:\mathbf{real},\dots,x_{n}:\mathbf{real}\vdash t:\mathbf{real}$ is always a smooth function $\mathbb{R}^{n}\to\mathbb{R}$, even if it has higher order subterms. We begin with a fairly standard logical relations proof, and then move from it to the semantic model of diffeological spaces. ###### Proposition 2. If $x_{1}:\mathbf{real},\dots,x_{n}:\mathbf{real}\vdash t:\mathbf{real}$ then the function $\llfloor t\rrceil:\mathbb{R}^{n}\to\mathbb{R}$ is smooth. ###### Proof 4.1. For each type ${\tau}$ define a set $Q_{{\tau}}\subseteq[\mathbb{R}^{k}\to\llfloor{\tau}\rrceil]$ by induction on the structure of types: $\displaystyle Q_{\mathbf{real}}$ $\displaystyle=\\{f:\mathbb{R}^{k}\to\mathbb{R}~{}|~{}f\text{ is smooth}\\}$ $\displaystyle Q_{\boldsymbol{(}{\tau}_{1}\boldsymbol{\mathop{*}}\dots\boldsymbol{\mathop{*}}{\tau}_{n}\boldsymbol{)}}$ $\displaystyle=\\{\textstyle f:\mathbb{R}^{k}\to\prod_{i=1}^{n}\llfloor{\tau}_{i}\rrceil~{}|~{}\forall i.\ f;\pi_{i}\in Q_{{\tau}_{i}}\\}$ $\displaystyle Q_{{{\tau}}\to{{\sigma}}}$ $\displaystyle=\\{f:\mathbb{R}^{k}\to\llfloor{\tau}\rrceil\to\llfloor{\sigma}\rrceil~{}|~{}\forall g\in Q_{{\tau}}.\,\lambda(\vec{r}).\,f(\vec{r})(g(\vec{r}))\in Q_{{\sigma}}\\}$ Now we show the fundamental lemma: if ${x_{1}:{\tau}_{1},\dots,x_{n}:{\tau}_{n}}\vdash u:{\sigma}$ and $g_{1}\in Q_{{\tau}_{1}},\dots,g_{n}\in Q_{{\tau}_{n}}$ then $((g_{1},\dots,g_{n});\llfloor u\rrceil)\in Q_{{\sigma}}$. This is shown by induction on the structure of typing derivations. The only interesting step here is that the basic operations ($+$, $*$, $\varsigma$ etc.) are smooth. We deduce the statement of the proposition by putting $u=t$, $k=n$, and letting $g_{i}:\mathbb{R}^{n}\to\mathbb{R}$ be the projections. At higher types, the logical relations $Q$ show that we can only define functions that send smooth functions to smooth functions, meaning that we can never use them to build first order functions that are not smooth. For example, $(\mathrm{comp})$ in (1) has this property. This logical relations proof suggests building a semantic model by interpreting types as sets with structure: for each type we have a set $X$ together with a set $Q^{\mathbb{R}^{k}}_{X}\subseteq[\mathbb{R}^{k}\to X]$ of plots.
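As an executable sanity check of Proposition 2, consider the following Haskell sketch (ours, with illustrative names; Haskell arithmetic stands in for the smooth operations). A first order function defined via a higher order combinator is still smooth, and dual numbers, mirroring the $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}$ translation, compute its derivative without ever needing a notion of derivative at higher type:

```haskell
-- Dual numbers: a value paired with a derivative, mirroring the
-- dual numbers translation D_(1,1)(real) = real * real.
data Dual = D Double Double deriving Show

instance Num Dual where
  D x dx + D y dy = D (x + y) (dx + dy)
  D x dx * D y dy = D (x * y) (x * dy + dx * y)  -- Leibniz rule
  negate (D x dx) = D (negate x) (negate dx)
  fromInteger n   = D (fromInteger n) 0          -- constants have derivative 0
  abs = error "not used"; signum = error "not used"

-- the operation sigma and its derivative, via the chain rule
sigmoidD :: Dual -> Dual
sigmoidD (D x dx) = D s (dx * s * (1 - s))
  where s = 1 / (1 + exp (negate x))

-- a first order function with a higher order subterm (function composition)
h :: Dual -> Dual
h = (\y -> y * y) . sigmoidD

-- its derivative at 0.5, obtained by seeding the tangent with 1
dh :: Double
dh = let D _ dx = h (D 0.5 1) in dx
```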
A _diffeological space_ $(X,\mathcal{P}_{X})$ consists of a set $X$ together with, for each $n$ and each open subset $U$ of $\mathbb{R}^{n}$, a set $\mathcal{P}_{X}^{U}\subseteq[U\to X]$ of functions, called _plots_, such that * • all constant functions are plots; * • if $f:V\to U$ is a smooth function and $p\in\mathcal{P}_{X}^{U}$, then $f;p\in\mathcal{P}_{X}^{V}$; * • if $\left(p_{i}\in\mathcal{P}_{X}^{U_{i}}\right)_{i\in I}$ is a compatible family of plots $(x\in U_{i}\cap U_{j}\Rightarrow p_{i}(x)=p_{j}(x))$ and $\left(U_{i}\right)_{i\in I}$ covers $U$, then the gluing $p:U\to X:x\in U_{i}\mapsto p_{i}(x)$ is a plot. We call a function $f:X\to Y$ between diffeological spaces _smooth_ if, for all plots $p\in\mathcal{P}_{X}^{U}$, we have that $p;f\in\mathcal{P}_{Y}^{U}$. We write $\mathbf{Diff}(X,Y)$ for the set of smooth maps from $X$ to $Y$. Smooth functions compose, and so we have a category $\mathbf{Diff}$ of diffeological spaces and smooth functions. A diffeological space is thus a set equipped with structure. Many constructions of sets carry over straightforwardly to diffeological spaces. [Cartesian diffeologies] Each open subset $U$ of $\mathbb{R}^{n}$ can be given the structure of a diffeological space by taking all the smooth functions $V\to U$ as $\mathcal{P}_{U}^{V}$. Smooth functions from $V$ to $U$ in the traditional sense coincide with smooth functions in the sense of diffeological spaces [IZ13]. Thus, on open subsets of Euclidean spaces, diffeological smoothness conservatively extends ordinary calculus. In categorical terms, this gives a full embedding of $\mathbf{CartSp}$ in $\mathbf{Diff}$. [Product diffeologies] Given a family $\left(X_{i}\right)_{i\in I}$ of diffeological spaces, we can equip the product $\prod_{i\in I}X_{i}$ of sets with the _product diffeology_ in which $U$-plots are precisely the functions of the form $\left(p_{i}\right)_{i\in I}$ for $p_{i}\in\mathcal{P}_{X_{i}}^{U}$. This gives us the categorical product in $\mathbf{Diff}$. [Functional diffeology] We can equip the set $\mathbf{Diff}(X,Y)$ of smooth functions between diffeological spaces with the _functional diffeology_ in which $U$-plots consist of functions $f:U\to\mathbf{Diff}(X,Y)$ such that $(u,x)\mapsto f(u)(x)$ is an element of $\mathbf{Diff}(U\times X,Y)$. This specifies the categorical function object in $\mathbf{Diff}$. We can now give a denotational semantics to our language from Section 3 in the category of diffeological spaces. We interpret each type ${\tau}$ as a set $\llbracket{\tau}\rrbracket$ equipped with the relevant diffeology, by induction on the structure of types: $\displaystyle\llbracket\mathbf{real}\rrbracket\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathbb{R}\qquad\text{with the standard diffeology}$ $\displaystyle\llbracket\boldsymbol{(}{\tau}_{1}\boldsymbol{\mathop{*}}\dots\boldsymbol{\mathop{*}}{\tau}_{n}\boldsymbol{)}\rrbracket\ \stackrel{{\scriptstyle\mathrm{def}}}{{=}}\ \textstyle\prod_{i=1}^{n}\llbracket{\tau}_{i}\rrbracket\quad\text{with the product diffeology}$ $\displaystyle\llbracket{\tau}\to{\sigma}\rrbracket\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathbf{Diff}(\llbracket{\tau}\rrbracket,\llbracket{\sigma}\rrbracket)\quad\text{with the functional diffeology}$ A context $\Gamma=({x}_{1}\colon{\tau}_{1}\dots x_{n}\colon{\tau}_{n})$ is interpreted as a diffeological space $\llbracket\Gamma\rrbracket\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\prod_{i=1}^{n}\llbracket{\tau}_{i}\rrbracket$.
Now well typed terms $\Gamma\vdash{t}:{\tau}$ are interpreted as smooth functions $\llbracket{t}\rrbracket:\llbracket\Gamma\rrbracket\to\llbracket{\tau}\rrbracket$, giving a meaning for ${t}$ for every valuation of the context. This is routinely defined by induction on the structure of typing derivations once we choose a smooth function $\llbracket\mathsf{op}\rrbracket:\mathbb{R}^{n}\to\mathbb{R}$ to interpret each $n$-ary operation $\mathsf{op}\in\mathsf{Op}_{n}$. For example, constants $\underline{c}:\mathbf{real}$ are interpreted as constant functions; and the first order operations ($+,*,\varsigma$) are interpreted by composing with the corresponding functions, which are smooth: e.g., $\llbracket\varsigma(t)\rrbracket(\rho)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\varsigma(\llbracket t\rrbracket(\rho))$, where $\rho\in\llbracket\Gamma\rrbracket$. Variables are interpreted as $\llbracket{x}_{i}\rrbracket(\rho)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\rho_{i}$. The remaining constructs are interpreted as follows, and it is straightforward to show that smoothness is preserved. $\displaystyle\llbracket\langle{t}_{1},\dots,{t}_{n}\rangle\rrbracket(\rho)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}(\llbracket{t}_{1}\rrbracket(\rho),\dots,\llbracket{t}_{n}\rrbracket(\rho))$ $\displaystyle\llbracket\lambda{x}{:}{\tau}.{{t}}\rrbracket(\rho)(a)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\llbracket{t}\rrbracket(\rho,a)\ \text{($a\in\llbracket{\tau}\rrbracket$)}$ $\displaystyle\llbracket\mathbf{case}\,{t}\,\mathbf{of}\,\langle{.}{.}{.}\rangle\to{s}\rrbracket(\rho)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\llbracket{s}\rrbracket(\rho,\llbracket{t}\rrbracket(\rho))$ $\displaystyle\llbracket{t}\,{s}\rrbracket(\rho)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\llbracket{t}\rrbracket(\rho)(\llbracket{s}\rrbracket(\rho))$ The logical relations proof of Proposition 2 is reminiscent of diffeological spaces. We now briefly remark on the suitability of the axioms of diffeological spaces (Def 4.1) for a semantic model of smooth programs. The first axiom says that we only consider reflexive logical relations. From the perspective of the interpretation, it recognizes in particular that the semantics of an expression of type $(\mathbf{real}\to\mathbf{real})\to\mathbf{real}$ is defined by its value on smooth functions rather than arbitrary arguments. That is to say, the set-theoretic semantics at the beginning of this section, $\llfloor(\mathbf{real}\to\mathbf{real})\to\mathbf{real}\rrceil$, is different to the diffeological semantics, $\llbracket(\mathbf{real}\to\mathbf{real})\to\mathbf{real}\rrbracket$. The second axiom for diffeological spaces ensures that the smooth maps in $\mathbf{Diff}(U,X)$ are exactly the plots in $\mathcal{P}_{X}^{U}$. The third axiom ensures that categories of manifolds fully embed into $\mathbf{Diff}$; it will not play a visible role in this paper; in fact, [BCLG20] prove similar results for a simple language like ours by using plain logical relations (over $\mathbf{Set}$) and without demanding the diffeology axioms. However, we expect the third axiom to be crucial for programming with other smooth structures or partiality. ### 4.2. Correctness of AD We have shown that a term ${x}_{1}\colon\mathbf{real},\dots,{x}_{n}\colon\mathbf{real}\vdash{t}:\mathbf{real}$ is interpreted as a smooth function $\llbracket{t}\rrbracket:\mathbb{R}^{n}\to\mathbb{R}$, even if $t$ involves higher order functions (like (1)).
Moreover, the macro differentiation $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})$ is a function $\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})\rrbracket:(\mathbb{R}^{\binom{R+k}{k}})^{n}\to\mathbb{R}^{\binom{R+k}{k}}$ (Proposition 1). This enables us to state a limited version of our main correctness theorem: ###### Theorem 3 (Semantic correctness of $\overrightarrow{\mathcal{D}}$ (limited)). For any term ${x}_{1}\colon\mathbf{real},\dots,{x}_{n}\colon\mathbf{real}\vdash{t}:\mathbf{real}$, the function $\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})\rrbracket$ is the $(k,R)$-Taylor representation (3) of $\llbracket{t}\rrbracket$. In detail: for any smooth functions $f_{1},\dots,f_{n}:\mathbb{R}^{k}\to\mathbb{R}$, $\displaystyle{\left({\left(\frac{\partial^{\alpha_{1}+\ldots+\alpha_{k}}f_{j}(x)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}\right)}_{(\alpha_{1},...,\alpha_{k})=(0,...,0)}^{(R,0,...,0)}\right)}_{j=1}^{n};\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})\rrbracket=$ $\displaystyle\qquad\quad{\left(\frac{\partial^{\alpha_{1}+\ldots+\alpha_{k}}((f_{1},\ldots,f_{n});\llbracket{t}\rrbracket)(x)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}\right)}_{(\alpha_{1},...,\alpha_{k})=(0,...,0)}^{(R,0,...,0)}\text{.}$ For instance, if $n=2$, then $\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}({t})\rrbracket({x}_{1},1,{x}_{2},0)=\big{(}\llbracket{t}\rrbracket({x}_{1},{x}_{2}),\frac{\partial\llbracket{t}\rrbracket({x},{x}_{2})}{\partial{x}}({x}_{1})\big{)}$. ###### Proof 4.2. We prove this by logical relations. A categorical version of this proof is in Section 6.2. For each type ${\tau}$, we define a binary relation $S_{{\tau}}$ between (open) $k$-dimensional plots in $\llbracket{\tau}\rrbracket$ and (open) $k$-dimensional plots in $\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({\tau})\rrbracket$, i.e.
$S_{{\tau}}\subseteq\mathcal{P}_{\llbracket{\tau}\rrbracket}^{\mathbb{R}^{k}}\times\mathcal{P}_{\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({\tau})\rrbracket}^{\mathbb{R}^{k}}$, by induction on ${\tau}$: $\displaystyle S_{\mathbf{real}}$ $\displaystyle\stackrel{{\scriptstyle\mathrm{def}}}{{=}}$ $\displaystyle\left\\{\left(f,{\left(\frac{\partial^{\alpha_{1}+\ldots+\alpha_{k}}f(x)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}\right)}_{(\alpha_{1},...,\alpha_{k})=(0,...,0)}^{(R,0,...,0)}\right)~{}\Big{|}~{}f:\mathbb{R}^{k}\to\mathbb{R}\text{ smooth}\right\\}$ $\displaystyle S_{\boldsymbol{(}{\tau}_{1}\boldsymbol{\mathop{*}}...\boldsymbol{\mathop{*}}{\tau}_{n}\boldsymbol{)}}$ $\displaystyle\stackrel{{\scriptstyle\mathrm{def}}}{{=}}$ $\displaystyle\\{((f_{1},...,f_{n}),(g_{1},...,g_{n}))\mid(f_{1},g_{1})\in S_{{\tau}_{1}},...,(f_{n},g_{n})\in S_{{\tau}_{n}}\\}$ $\displaystyle S_{{\tau}\to{\sigma}}$ $\displaystyle\stackrel{{\scriptstyle\mathrm{def}}}{{=}}$ $\displaystyle\\{(f_{1},f_{2})\mid\forall(g_{1},g_{2})\in S_{{\tau}}.(x{\mapsto}f_{1}(x)(g_{1}(x)),x{\mapsto}f_{2}(x)(g_{2}(x)))\in S_{{\sigma}}\\}$ Then, we establish the following ‘fundamental lemma’: > If ${x}_{1}{:}{\tau}_{1},{.}{.}{.},{x}_{n}{:}{\tau}_{n}\vdash{t}:{\sigma}$ > and, for all $1{\leq}i{\leq}n$, > $f_{i}:\mathbb{R}^{k}\to\llbracket{\tau}_{i}\rrbracket$ and > > $g_{i}:\mathbb{R}^{k}\to~{}\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({\tau}_{i})\rrbracket$ > are such that $(f_{i},g_{i})$ is in $S_{{\tau}_{i}}$, then we have that > > > $\Big{(}(f_{1},\ldots,f_{n});\llbracket{t}\rrbracket,(g_{1},\ldots,g_{n});\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})\rrbracket\Big{)}$ > > is in $S_{{\sigma}}$. This is proved routinely by induction on the typing derivation of ${t}$. The case for $\mathsf{op}({t}_{1},\ldots,{t}_{n})$ relies on the precise definition of $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}(\mathsf{op}({t}_{1},\ldots,{t}_{n}))$. We conclude the theorem from the fundamental lemma by considering the case where ${\tau}_{i}={\sigma}=\mathbf{real}$ and taking for the $f_{i}$ and $g_{i}$ the functions in the statement of the theorem. ## 5\. Extending the language: variant and inductive types In this section, we show that the definition of forward AD and the semantics generalize if we extend the language of Section 3 with variants and inductive types. As an example of inductive types, we consider lists. This specific choice is only for expository purposes and the whole development works at the level of generality of arbitrary algebraic data types generated as initial algebras of (polynomial) type constructors formed by finite products and variants. These types are easily interpreted in the category of diffeological spaces in much the same way. The categorically minded reader may regard this as a consequence of $\mathbf{Diff}$ being a concrete Grothendieck quasitopos (e.g. [BH11]), and hence complete and cocomplete. ### 5.1. Language.
We additionally consider the following types and terms: $\begin{array}[t]{l@{\quad\\!\\!}*3{l@{}}@{\,}l}{\tau},{\sigma},{\rho}&::=&\dots&\mspace{-25.0mu}\qquad\text{types}\\\ &\mathrel{\lvert}&\\{\mathsf{\ell_{1}}\,{{\tau}_{1}}\mathrel{\big{\lvert}}\ldots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{\tau}_{n}}\\}&\qquad\text{variant}\\\ &\mathrel{\lvert}&\mathbf{list}({\tau})&\qquad\text{list}\\\\[6.0pt] {t},{s},{r}&::=&\dots&\mspace{-25.0mu}\qquad\text{terms}\\\ &\mathrel{\lvert}&{\tau}.\ell\,{t}&\qquad\text{variant constructor}\\\ &\mathrel{\lvert}&[\,]\ \mathrel{\lvert}\ {t}::{s}&\qquad\text{empty list and cons}\\\ &\mathrel{\lvert}&\mathbf{case}\,{t}\,\mathbf{of}\,\\{\mathsf{\ell_{1}}\,{{x}_{1}}\to{{s}_{1}}\mathrel{\big{\lvert}}\cdots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{x}_{n}}\to{{s}_{n}}\\}&\qquad\text{pattern matching: variants}\\\ &\mathrel{\lvert}&\mathbf{fold}\,({x}_{1},{x}_{2}).{t}\,\mathbf{over}\,{s}\,\mathbf{from}\,{r}&\qquad\text{list fold}\\\ \end{array}$ We extend the type system according to the rules of Fig. 4. $\begin{array}[]{@{}c@{}}\inferrule{\Gamma\vdash{t}:{\tau}_{i}}{\Gamma\vdash{\tau}.\ell_{i}\,{t}:{\tau}}((\mathsf{\ell_{i}}\,{\tau}_{i})\in{\tau})\quad\inferrule{~{}}{\Gamma\vdash[\,]:\mathbf{list}({\tau})}\quad\inferrule{\Gamma\vdash{t}:{\tau}\\\ \Gamma\vdash{s}:\mathbf{list}({\tau})}{\Gamma\vdash{t}::{s}:\mathbf{list}({\tau})}\\\ \\\ \inferrule{\Gamma\vdash{t}:\\{\mathsf{\ell_{1}}\,{{\tau}_{1}}\mathrel{\big{\lvert}}\ldots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{\tau}_{n}}\\}\\\ \text{for each $1\leq i\leq n$: }\Gamma,{x}_{i}:{\tau}_{i}\vdash{s}_{i}:{\tau}}{\Gamma\vdash\mathbf{case}\,{t}\,\mathbf{of}\,\\{\begin{array}[t]{@{}l@{\,}l@{}l@{}}\mathsf{\ell_{1}}\,{{x}_{1}}\to{{s}_{1}}\mathrel{\big{\lvert}}\cdots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{x}_{n}}&\to{{s}_{n}}\\}:{\tau}\end{array}}\\\ \\\ \inferrule{\Gamma\vdash{s}:\mathbf{list}({\tau})\\\ \Gamma\vdash{r}:{\sigma}\\\ \Gamma,{x}_{1}:{\tau},{x}_{2}:{\sigma}\vdash{t}:{\sigma}}{\Gamma\vdash\mathbf{fold}\,({x}_{1},{x}_{2}).{t}\,\mathbf{over}\,{s}\,\mathbf{from}\,{r}:{\sigma}}\end{array}$ Figure 4. Additional typing rules for the extended language. 
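To orient the reader, the new $\mathbf{fold}$ construct behaves exactly like a right fold on lists; the following Haskell gloss (ours, not a construct of the paper) matches the typing rule above and the $\beta$-laws given later in Fig. 5:

```haskell
-- fold (x1,x2). t over s from r  ~  foldOver (\x1 x2 -> t) s r
foldOver :: (a -> b -> b) -> [a] -> b -> b
foldOver t s r = foldr t r s

-- the two beta-laws, as executable equations:
--   foldOver t []        r == r
--   foldOver t (s1 : s2) r == t s1 (foldOver t s2 r)
```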
We can then extend $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}$ (again, writing it as $\overrightarrow{\mathcal{D}}$, for legibility) to our new types and terms by $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\\{\mathsf{\ell_{1}}\,{{\tau}_{1}}\mathrel{\big{\lvert}}\ldots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{\tau}_{n}}\\})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\\{\mathsf{\ell_{1}}\,{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}_{1})}\mathrel{\big{\lvert}}\ldots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}_{n})}\\}$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\mathbf{list}({\tau}))\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathbf{list}(\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}))$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}.\ell\,{t})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({\tau}).\ell\,{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t})}$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}([\,])\stackrel{{\scriptstyle\mathrm{def}}}{{=}}[\,]$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t}::{s})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t})::\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({s})$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\mathbf{case}\,{t}\,\mathbf{of}\,\\{\mathsf{\ell_{1}}\,{{x}_{1}}\to{{s}_{1}}\mathrel{\big{\lvert}}\cdots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{x}_{n}}\to{{s}_{n}}\\})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}$ $\displaystyle\,\quad\mathbf{case}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t})\,\mathbf{of}\,\\{\mathsf{\ell_{1}}\,{{x}_{1}}\to{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({s}_{1})}\mathrel{\big{\lvert}}\cdots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{x}_{n}}\to{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({s}_{n})}\\}$ $\displaystyle\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}(\mathbf{fold}\,({x}_{1},{x}_{2}).{t}\,\mathbf{over}\,{s}\,\mathbf{from}\,{r})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathbf{fold}\,({x}_{1},{x}_{2}).\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({t})\,\mathbf{over}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({s})\,\mathbf{from}\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}({r})$ To demonstrate the practical use of expressive type systems for differential programming, we consider the following two examples. [Lists of inputs for neural nets] Usually, we run a neural network on a large data set, the size of which might be determined at runtime. To evaluate a neural network on multiple inputs, in practice, one often sums the outcomes. This can be coded in our extended language as follows. Suppose that we have a network $f:\boldsymbol{(}\mathbf{real}^{n}\boldsymbol{\mathop{*}}P\boldsymbol{)}\to\mathbf{real}$ that operates on single input vectors. We can construct one that operates on lists of inputs as follows: $g\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\lambda\langle l,w\rangle.{\mathbf{fold}\,({x}_{1},{x}_{2}).f\langle{x}_{1},w\rangle+{x}_{2}\,\mathbf{over}\,l\,\mathbf{from}\,\underline{0}}:\boldsymbol{(}\mathbf{list}(\mathbf{real}^{n})\boldsymbol{\mathop{*}}P\boldsymbol{)}\to\mathbf{real}$ [Missing data] In practically every application of statistics and machine learning, we face the problem of _missing data_: for some observations, only partial information is available.
In an expressive typed programming language like the one we consider, we can model missing data conveniently using the data type $\mathbf{maybe}({\tau})=\\{\mathsf{\mathsf{Nothing}}\,{\boldsymbol{(}\,\boldsymbol{)}}\mathrel{\big{\lvert}}\mathsf{\mathsf{Just}}\,{{\tau}}\\}$. In the context of a neural network, one might use it as follows. First, define some helper functions $\displaystyle\mathrm{fromMaybe}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\lambda{x}.{\lambda m.{\mathbf{case}\,m\,\mathbf{of}\,\\{\mathsf{\mathsf{Nothing}}\,{\\_}\to{{x}}\mathrel{\big{\lvert}}\mathsf{\mathsf{Just}}\,{{x}^{\prime}}\to{{x}^{\prime}}\\}}}$ $\displaystyle\mathrm{fromMaybe}^{n}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\lambda\langle{x}_{1},{.}{.}{.},{x}_{n}\rangle.{\lambda\langle m_{1},{.}{.}{.},m_{n}\rangle.{\langle\mathrm{fromMaybe}\,{x}_{1}\,m_{1},{.}{.}{.},\mathrm{fromMaybe}\,{x}_{n}\,m_{n}\rangle}}$ $\displaystyle\qquad\qquad:(\mathbf{maybe}(\mathbf{real}))^{n}\to\mathbf{real}^{n}\to\mathbf{real}^{n}$ $\displaystyle\mathrm{map}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\lambda f.{\lambda l.{\mathbf{fold}\,({x}_{1},{x}_{2}).f\,{x}_{1}::{x}_{2}\,\mathbf{over}\,l\,\mathbf{from}\,[\,]}}:({\tau}\to{\sigma})\to\mathbf{list}({\tau})\to\mathbf{list}({\sigma})$ Given a neural network $f:\boldsymbol{(}\mathbf{list}(\mathbf{real}^{k})\boldsymbol{\mathop{*}}P\boldsymbol{)}\to\mathbf{real}$, we can build a new one that operates on a data set for which some covariates (features) are missing, by passing in default values to replace the missing covariates: $\lambda\langle l,\langle m,w\rangle\rangle.f\langle\mathrm{map}\,(\mathrm{fromMaybe}^{k}\,m)\,l,w\rangle:\boldsymbol{(}\mathbf{list}((\mathbf{maybe}(\mathbf{real}))^{k})\boldsymbol{\mathop{*}}\boldsymbol{(}\mathbf{real}^{k}\boldsymbol{\mathop{*}}P\boldsymbol{)}\boldsymbol{)}\to\mathbf{real}$ Then, given a data set $l$ with missing covariates, we can perform automatic differentiation on this network to optimize, simultaneously, the ordinary network parameters $w$ _and_ the default values for missing covariates $m$. ### 5.2. Semantics. In Section 4 we gave a denotational semantics for the simple language in diffeological spaces. This extends to the language in this section, as follows.
As before, each type ${\tau}$ is interpreted as a diffeological space, which is a set equipped with a family of plots: * • A variant type $\\{\mathsf{\ell_{1}}\,{{\tau}_{1}}\mathrel{\big{\lvert}}\ldots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{\tau}_{n}}\\}$ is inductively interpreted as the disjoint union of the semantic spaces, $\textstyle\llbracket\\{\mathsf{\ell_{1}}\,{{\tau}_{1}}\mathrel{\big{\lvert}}\dots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{\tau}_{n}}\\}\rrbracket\ \ \stackrel{{\scriptstyle\mathrm{def}}}{{=}}\ \ \biguplus_{i=1}^{n}\llbracket{\tau}_{i}\rrbracket$, with $U$-plots $\textstyle\mathcal{P}_{\llbracket\\{\mathsf{\ell_{1}}\,{{\tau}_{1}}\mathrel{\big{\lvert}}\ldots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{\tau}_{n}}\\}\rrbracket}^{U}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\left\\{\left.\left[U_{j}\xrightarrow{f_{j}}\llbracket{\tau}_{j}\rrbracket\to\biguplus_{i=1}^{n}\llbracket{\tau}_{i}\rrbracket\right]_{j=1}^{n}~{}\right|~{}U=\biguplus_{j=1}^{n}U_{j},\;f_{j}\in\mathcal{P}_{\llbracket{\tau}_{j}\rrbracket}^{U_{j}}\right\\}.$ * • A list type $\mathbf{list}({\tau})$ is interpreted as the disjoint union of the sets of length $i$ tuples for all natural numbers $i$, $\llbracket\mathbf{list}({\tau})\rrbracket\ \ \stackrel{{\scriptstyle\mathrm{def}}}{{=}}\ \ \biguplus_{i=0}^{\infty}\llbracket{\tau}\rrbracket^{i}$ with $U$-plots $\textstyle\mathcal{P}_{\llbracket\mathbf{list}({\tau})\rrbracket}^{U}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\left\\{\left.\left[U_{j}\xrightarrow{f_{j}}\llbracket{\tau}\rrbracket^{j}\to\biguplus_{i=0}^{\infty}\llbracket{\tau}\rrbracket^{i}\right]_{j=0}^{\infty}~{}\right|~{}U=\biguplus_{j=0}^{\infty}U_{j},\;f_{j}\in\mathcal{P}_{\llbracket{\tau}\rrbracket^{j}}^{U_{j}}\right\\}$ The constructors and destructors for variants and lists are interpreted as in the usual set theoretic semantics. It is routine to show inductively that these interpretations are smooth. Thus every term $\Gamma\vdash{t}:{\tau}$ in the extended language is interpreted as a smooth function $\llbracket{t}\rrbracket:\llbracket\Gamma\rrbracket\to\llbracket{\tau}\rrbracket$ between diffeological spaces. List objects as initial algebras are computed as usual in a cocomplete category (e.g. [JR11]). More generally, the interpretation for algebraic data types follows exactly the usual categorical semantics of variant types and inductive types (e.g. [Pit95]). ## 6\. Categorical analysis of (higher order) forward AD and its correctness This section has three parts. First, we give a categorical account of the functoriality of AD (Ex. 6.1). Then we introduce our gluing construction, and relate it to the correctness of AD (dgm. (4)). Finally, we state and prove a correctness theorem for all first order types by considering a category of manifolds (Th. 8). ### 6.1. Syntactic categories. The key contribution of this subsection is that the AD macro translation (Section 3.2) has a canonical status as a unique functor between categories with structure. To this end, we build a syntactic category $\mathbf{Syn}$ from our language, which has the property of a _free_ category with certain structure. This means that for any category $\mathcal{C}$ with this structure, there is a unique structure-preserving functor $\mathbf{Syn}\to\mathcal{C}$, which is an interpretation of our language in that category. Generally speaking, this is the categorical view of denotational semantics (e.g. [Pit95]).
But in this particular setting, the category $\mathbf{Syn}$ itself admits alternative forms of this structure, given by the dual numbers interpretation, the triple numbers interpretation etc. of Section 2. This gives canonical functors $\mathbf{Syn}\to\mathbf{Syn}$ translating the language into itself, which are the AD macro translations (Section 3.2). A key point is that $\mathbf{Syn}$ is almost entirely determined by universal properties (for example, cartesian closure for the function space); the only freedom is in the choice of interpretation of 1. (1) the real numbers $\mathbf{real}$, which can be taken as the plain type $\mathbf{real}$, or as the dual numbers interpretation $\mathbf{real}\ast\mathbf{real}$ etc.; 2. (2) the primitive operations $\mathsf{op}$, which can be taken as the operation $\mathsf{op}$ itself, or as the derivative of the operation etc. $\displaystyle\mathbf{case}\,\langle{t}_{1},\ldots,{t}_{n}\rangle\,\mathbf{of}\,\langle{x}_{1},\ldots,{x}_{n}\rangle\to{s}={s}{}[^{{t}_{1}}\\!/\\!_{{x}_{1}},\ldots,^{{t}_{n}}\\!/\\!_{{x}_{n}}]$ $\displaystyle{s}{}[^{{t}}\\!/\\!_{{y}}]\stackrel{{\scriptstyle\\#{x}_{1},\ldots,{x}_{n}}}{{=}}\mathbf{case}\,{t}\,\mathbf{of}\,\langle{x}_{1},\ldots,{x}_{n}\rangle\to{s}{}[^{\langle{x}_{1},\ldots,{x}_{n}\rangle}\\!/\\!_{{y}}]$ $\displaystyle\mathbf{case}\,\mathsf{\ell_{i}}\,{{t}}\,\mathbf{of}\,\\{\mathsf{\ell_{1}}\,{{x}_{1}}\to{{s}_{1}}\mathrel{\big{\lvert}}\cdots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{x}_{n}}\to{{s}_{n}}\\}={s}_{i}{}[^{{t}}\\!/\\!_{{x}_{i}}]$ $\displaystyle{s}{}[^{{t}}\\!/\\!_{{y}}]\stackrel{{\scriptstyle\\#{x}_{1},\ldots,{x}_{n}}}{{=}}\mathbf{case}\,{t}\,\mathbf{of}\,\\{\mathsf{\ell_{1}}\,{{x}_{1}}\to{{s}{}[^{\mathsf{\ell_{1}}\,{{x}_{1}}}\\!/\\!_{{y}}]}\mathrel{\big{\lvert}}\cdots\mathrel{\big{\lvert}}\mathsf{\ell_{n}}\,{{x}_{n}}\to{{s}{}[^{\mathsf{\ell_{n}}\,{{x}_{n}}}\\!/\\!_{{y}}]}\\}$ $\displaystyle\mathbf{fold}\,({x}_{1},{x}_{2}).{t}\,\mathbf{over}\,[\,]\,\mathbf{from}\,{r}={r}$ $\displaystyle\mathbf{fold}\,({x}_{1},{x}_{2}).{t}\,\mathbf{over}\,{s}_{1}::{s}_{2}\,\mathbf{from}\,{r}={t}{}[^{{s}_{1}}\\!/\\!_{{x}_{1}},^{\mathbf{fold}\,({x}_{1},{x}_{2}).{t}\,\mathbf{over}\,{s}_{2}\,\mathbf{from}\,{r}}\\!/\\!_{{x}_{2}}]$ $\displaystyle u={s}{}[^{[\,]}\\!/\\!_{{y}}],{r}{}[^{{s}}\\!/\\!_{{x}_{2}}]={s}{}[^{{x}_{1}::{y}}\\!/\\!_{{y}}]\Rightarrow{s}{}[^{{t}}\\!/\\!_{{y}}]\stackrel{{\scriptstyle\\#{x}_{1},{x}_{2}}}{{=}}\mathbf{fold}\,({x}_{1},{x}_{2}).{r}\,\mathbf{over}\,{t}\,\mathbf{from}\,u$ $\displaystyle(\lambda{x}.{{t}})\,{s}={t}{}[^{{s}}\\!/\\!_{{x}}]$ $\displaystyle{t}\stackrel{{\scriptstyle\\#{x}}}{{=}}\lambda{x}.{{t}\,{x}}$ We write $\stackrel{{\scriptstyle\\#{x}_{1},\ldots,{x}_{n}}}{{=}}$ to indicate that the equation only applies when the variables ${x}_{1},\ldots,{x}_{n}$ do not occur free in the left hand side. Figure 5. Standard $\beta\eta$-laws (e.g. [Pit95]) for products, functions, variants and lists. In more detail, our language induces a syntactic category as follows. Let $\mathbf{Syn}$ be the category whose objects are types, and where a morphism ${\tau}\to{\sigma}$ is a term in context ${x}:{\tau}\vdash{t}:{\sigma}$ modulo the $\beta\eta$-laws (Fig. 5). Composition is by substitution. For simplicity, we do not impose identities involving the primitive operations, such as the arithmetic identity $x+y=y+x$ in $\mathbf{Syn}$. As is standard, this category has the following universal property. ###### Lemma 4 (e.g. [Pit95]).
For every bicartesian closed category $\mathcal{C}$ with list objects, and every choice of an object $F(\mathbf{real})\in\mathcal{C}$ and morphisms $F(\mathsf{op})\in\mathcal{C}(F(\mathbf{real})^{n},F(\mathbf{real}))$ for all $\mathsf{op}\in\mathsf{Op}_{n}$ and $n\in\mathbb{N}$, in $\mathcal{C}$, there is a unique functor $F:{\mathbf{Syn}\to\mathcal{C}}$ respecting the interpretation and preserving the bicartesian closed structure as well as list objects. ###### Proof 6.1 (Proof notes). The functor $F:\mathbf{Syn}\to\mathcal{C}$ is a canonical denotational semantics for the language, interpreting types as objects of $\mathcal{C}$ and terms as morphisms. For instance, $F({{\tau}\to{\sigma}})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}(F{\tau}\to F{{\sigma}})$, the function space in the category $\mathcal{C}$, and $F{({t}\,{s})}$ is the composite $(F{{t}},F{{s}});\mathit{eval}$. When $\mathcal{C}=\mathbf{Diff}$, the denotational semantics of the language in diffeological spaces (Section 4,5.2) can be understood as the unique structure preserving functor $\llbracket-\rrbracket:\mathbf{Syn}\to\mathbf{Diff}$ satisfying $\llbracket\mathbf{real}\rrbracket=\mathbb{R}$, $\llbracket\varsigma\rrbracket=\varsigma$ and so on. [Canonical definition of forward AD] The forward AD macro $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}$ (Section 3,5.1) arises as a canonical bicartesian closed functor on $\mathbf{Syn}$ that preserves list objects. Consider the unique bicartesian closed functor $F:\mathbf{Syn}\to\mathbf{Syn}$ that preserves list objects such that $F(\mathbf{real})=\mathbf{real}^{\binom{R+k}{k}}$ and $F(\mathsf{op})={z}\\!:\\!\boldsymbol{(}F(\mathbf{real})\boldsymbol{\mathop{*}}..\boldsymbol{\mathop{*}}F(\mathbf{real})\boldsymbol{)}\vdash\mathbf{case}\,{z}\,\mathbf{of}\,\langle{x}_{1},...,{x}_{n}\rangle\to\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{\\!(k,R)}(\mathsf{op}({x}_{1},\ldots,{x}_{n})):F(\mathbf{real}).$ Then for any type ${\tau}$, $F({\tau})=\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({\tau})$, and for any term $x:{\tau}\vdash{t}:{\sigma}$, $F({t})=\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})$ as morphisms $F({\tau})\to F({\sigma})$ in the syntactic category. This observation is a categorical counterpart to Lemma 1. ### 6.2. Categorical gluing and logical relations. Gluing is a method for building new categorical models which has been used for many purposes, including logical relations and realizability [MS92]. Our logical relations argument in the proof of Theorem 3 can be understood in this setting. (In fact we originally found the proof of Theorem 3 in this way.) In this subsection, for the categorically minded, we explain this, and in doing so we quickly recover a correctness result for the more general language in Section 5 and for arbitrary first order types. The general, established idea of categorical logical relations starts from the observation that logical relations are defined by induction on the structure of types. Types have universal properties in a categorical semantics (e.g. cartesian closure for the function space), and so we can organize the logical relations argument by defining some category $\mathcal{C}$ of relations and observing that it has the requisite categorical structure. The interpretation of types as relations can then be understood as coming from a unique structure preserving map $\mathbf{Syn}\to\mathcal{C}$.
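Lemma 4's universal property, that a structure-preserving interpretation is fixed entirely by a choice of $F(\mathbf{real})$ and $F(\mathsf{op})$, has a direct executable analogue in tagless-final style. The sketch below is ours, with Haskell's class system standing in for the freeness of $\mathbf{Syn}$; the two instances correspond to the standard and the dual numbers choices of structure:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- An interpretation is a choice of F(real) and F(op); terms are polymorphic
-- over that choice, so each instance interprets every term at once.
class RealSyn r where
  lit      :: Double -> r
  add, mul :: r -> r -> r

-- the standard semantics: F(real) = R
instance RealSyn Double where
  lit = id
  add = (+)
  mul = (*)

-- the dual numbers interpretation: F(real) = real * real, and F(op) is the
-- derivative-carrying version of op
instance RealSyn (Double, Double) where
  lit c = (c, 0)
  add (x, dx) (y, dy) = (x + y, dx + dy)
  mul (x, dx) (y, dy) = (x * y, x * dy + dx * y)

-- one term, two canonical interpretations:
-- poly (3 :: Double) == 10 and poly (3, 1) == (10, 6)
poly :: RealSyn r => r -> r
poly x = add (mul x x) (lit 1)
```

Instantiating the class picks the structure; the traversal of the syntax comes for free, which is the content of Lemma 4 and of the canonical definition of forward AD above.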
In this paper, our logical relations are not quite as simple as a binary relation on sets; rather they are relations between plots. Nonetheless, this still forms a category with the appropriate structure, which follows because it can still be regarded as arising from a gluing construction, as we now explain. We define a category $\mathbf{Gl}_{k}$ whose objects are triples $(X,X^{\prime},S)$ where $X$ and $X^{\prime}$ are diffeological spaces and $S\subseteq\mathcal{P}_{X}^{\mathbb{R}^{k}}\times\mathcal{P}_{X^{\prime}}^{\mathbb{R}^{k}}$ is a relation between their $k$-dimensional plots. A morphism $(X,X^{\prime},S)\to(Y,Y^{\prime},T)$ is a pair of smooth functions $f\colon X\to Y$, $f^{\prime}\colon X^{\prime}\to Y^{\prime}$, such that if $(g,g^{\prime})\in S$ then $(g;f,g^{\prime};f^{\prime})\in T$. The idea is that this is a semantic domain where we can simultaneously interpret the language and its automatic derivatives. ###### Proposition 5. The category $\mathbf{Gl}_{k}$ is bicartesian closed, has list objects, and the projection functor $\mathrm{proj}:\mathbf{Gl}_{k}\to\mathbf{Diff}\times\mathbf{Diff}$ preserves this structure. ###### Proof 6.2 (Proof notes). The category $\mathbf{Gl}_{k}$ is a full subcategory of the comma category ${\rm id}_{\mathbf{Set}}\downarrow\mathbf{Diff}(\mathbb{R}^{k},-)\times\mathbf{Diff}(\mathbb{R}^{k},-)$. The result thus follows by the general theory of categorical gluing (e.g. [JLS07, Lemma 15]). We give a semantics $\llparenthesis-\rrparenthesis=(\llparenthesis-\rrparenthesis_{0},\llparenthesis-\rrparenthesis_{1},S_{-})$ for the language in $\mathbf{Gl}_{k}$, interpreting types ${\tau}$ as objects $(\llparenthesis{\tau}\rrparenthesis_{0},\llparenthesis{\tau}\rrparenthesis_{1},S_{{\tau}})$, and terms as morphisms. We let $\llparenthesis\mathbf{real}\rrparenthesis_{0}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathbb{R}$ and $\llparenthesis\mathbf{real}\rrparenthesis_{1}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\mathbb{R}^{\binom{R+k}{k}}$, with the relation $S_{\mathbf{real}}\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\left\\{(f,{\left(\frac{\partial^{\alpha_{1}+\ldots+\alpha_{k}}f(x)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}\right)}_{(\alpha_{1},...,\alpha_{k})=(0,...,0)}^{(R,0,...,0)})~{}|~{}f:\mathbb{R}^{k}\to\mathbb{R}\text{ smooth}\right\\}.$ We interpret the operations $\mathsf{op}$ according to $\llbracket\mathsf{op}\rrbracket$ in $\llparenthesis-\rrparenthesis_{0}$, but according to the $(k,R)$-Taylor representation of $\llbracket\mathsf{op}\rrbracket$ in $\llparenthesis-\rrparenthesis_{1}$. For instance, when $k=2$ and $R=2$, $\llparenthesis*\rrparenthesis_{1}:\mathbb{R}^{6}\times\mathbb{R}^{6}\to\mathbb{R}^{6}$ is $\displaystyle\llparenthesis*\rrparenthesis_{1}$ $\displaystyle((x_{00},x_{01},x_{02},x_{10},x_{11},x_{20}),(y_{00},y_{01},y_{02},y_{10},y_{11},y_{20}))\stackrel{{\scriptstyle\mathrm{def}}}{{=}}$ $\displaystyle(x_{00}y_{00},$ $\displaystyle\;x_{00}y_{01}+x_{01}y_{00},$ $\displaystyle\;x_{02}y_{00}+2x_{01}y_{01}+x_{00}y_{02},$ $\displaystyle\;x_{00}y_{10}+x_{10}y_{00},$ $\displaystyle\;x_{11}y_{00}+x_{01}y_{10}+x_{10}y_{01}+x_{00}y_{11},$ $\displaystyle\;x_{20}y_{00}+2x_{10}y_{10}+x_{00}y_{20})\text{.}$ At this point one checks that these interpretations are indeed morphisms in $\mathbf{Gl}_{k}$. This is equivalent to the statement that $\llparenthesis\mathsf{op}\rrparenthesis_{1}$ is the $(k,R)$-Taylor representation of $\llbracket\mathsf{op}\rrbracket$ (3).
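The displayed Taylor representation of multiplication transcribes directly into code. The following sketch is ours; the six fields follow the multi-index order $00,01,02,10,11,20$ used above:

```haskell
-- The (2,2)-Taylor representation of (*): the value and the partial
-- derivatives, in multi-index order x00, x01, x02, x10, x11, x20.
data T6 = T6 Double Double Double Double Double Double deriving Show

mulT6 :: T6 -> T6 -> T6
mulT6 (T6 x00 x01 x02 x10 x11 x20) (T6 y00 y01 y02 y10 y11 y20) =
  T6 (x00 * y00)
     (x00 * y01 + x01 * y00)
     (x02 * y00 + 2 * x01 * y01 + x00 * y02)
     (x00 * y10 + x10 * y00)
     (x11 * y00 + x01 * y10 + x10 * y01 + x00 * y11)
     (x20 * y00 + 2 * x10 * y10 + x00 * y20)
```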
The remaining constructions of the language are interpreted using the categorical structure of $\mathbf{Gl}_{k}$, following Lemma 4. Notice that the diagram below commutes. One can check this by hand or note that it follows from the initiality of $\mathbf{Syn}$ (Lemma 4): all the functors preserve all the structure. $\begin{array}[]{ccc}\mathbf{Syn}&\xrightarrow{\;({\rm id},\,\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}(-))\;}&\mathbf{Syn}\times\mathbf{Syn}\\\ {\scriptstyle\llparenthesis-\rrparenthesis}\big\downarrow&&\big\downarrow{\scriptstyle\llbracket-\rrbracket\times\llbracket-\rrbracket}\\\ \mathbf{Gl}_{k}&\xrightarrow{\;\mathrm{proj}\;}&\mathbf{Diff}\times\mathbf{Diff}\end{array}\qquad(4)$ We thus arrive at a restatement of the correctness theorem (Th. 3), which holds even for the extended language with variants and lists, because for any $x_{1}:\mathbf{real},{.}{.}{.},x_{n}:\mathbf{real}\vdash{t}:\mathbf{real}$, the interpretations $(\llbracket{t}\rrbracket,\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})\rrbracket)$ are in the image of the projection $\mathbf{Gl}_{k}\to\mathbf{Diff}\times\mathbf{Diff}$, and hence $\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})\rrbracket$ is a $(k,R)$-Taylor representation of $\llbracket{t}\rrbracket$. ### 6.3. Correctness at all first order types, via manifolds. We now generalize Theorem 3 to hold at all first order types, not just the reals. So far, we have shown that our macro translation (Section 3.2) gives correct derivatives to functions of the real numbers, even if other types are involved in the definitions of the functions (Theorem 3 and Section 6.2). We can state this formally because functions of the real numbers have well understood derivatives (Section 2). There are no established mathematical notions of derivatives at higher types, and so we cannot even begin to argue that our syntactic derivatives of functions $(\mathbf{real}\to\mathbf{real})\to(\mathbf{real}\to\mathbf{real})$ match with some existing mathematical notion (see also Section 7). However, for functions of first order type, like $\mathbf{list}(\mathbf{real})\to\mathbf{list}(\mathbf{real})$, there _are_ established mathematical notions of derivative, because we can understand $\mathbf{list}(\mathbf{real})$ as the _manifold_ of all tuples of reals, and then appeal to the well-known theory of manifolds and jet bundles. We do this now, to achieve a correctness theorem for all first order types (Theorem 8). The key high level points are that * • manifolds support a notion of differentiation, and an interpretation of all first order types, but not an interpretation of higher types; * • diffeological spaces support all types, including higher types, but not an established notion of differentiation in general; * • manifolds and smooth maps embed fully and faithfully in diffeological spaces, preserving the interpretation of first order types, so we can use the two notions together. We now explain this development in more detail. For our purposes, a smooth manifold $M$ is a second-countable Hausdorff topological space together with a smooth atlas.
In more detail, a topological space $X$ is second-countable when there exists a collection $U:=\\{U_{i}\\}_{i\in\mathbb{N}}$ of open subsets of $X$ such that any open subset of $X$ can be written as a union of elements of $U$. A topological space $X$ is Hausdorff if for every pair of distinct points $x$ and $y$, there exist disjoint open subsets $U,V$ of $X$ such that $x\in U$ and $y\in V$. A smooth atlas of a topological space $X$ is an open cover $\mathcal{U}$ together with homeomorphisms $\left(\phi_{U}:U\to\mathbb{R}^{n(U)}\right)_{U\in\mathcal{U}}$ (called charts, or local coordinates) such that $\phi_{U}^{-1};\phi_{V}$ is smooth on its domain of definition for all $U,V\in\mathcal{U}$. A function $f:M\to N$ between manifolds is smooth if $\phi^{-1}_{U};f;\psi_{V}$ is smooth for all charts $\phi_{U}$ and $\psi_{V}$ of $M$ and $N$, respectively. Let us write $\mathbf{Man}$ for this category. This definition of manifolds is a slight generalisation of the more usual one from differential geometry because different charts in an atlas may have different finite dimensions $n(U)$. Thus we consider manifolds with dimensions that are potentially unbounded, albeit locally finite. Each open subset of $\mathbb{R}^{n}$ can be regarded as a manifold. This lets us regard the category of manifolds $\mathbf{Man}$ as a full subcategory of the category of diffeological spaces. We consider a manifold $(X,\\{\phi_{U}\\}_{U})$ as a diffeological space with the same carrier set $X$ and where the plots $\mathcal{P}_{X}^{U}$, called the _manifold diffeology_, are the smooth functions in $\mathbf{Man}(U,X)$. A function $X\to Y$ is smooth in the sense of manifolds if and only if it is smooth in the sense of diffeological spaces [IZ13]. For the categorically minded reader, this means that we have a full embedding of $\mathbf{Man}$ into $\mathbf{Diff}$. Moreover, the natural interpretation of the first order fragment of our language in $\mathbf{Man}$ coincides with that in $\mathbf{Diff}$. That is, the embedding of $\mathbf{Man}$ into $\mathbf{Diff}$ preserves finite products and countable coproducts (hence initial algebras of polynomial endofunctors). ###### Proposition 6. Suppose that a type ${\tau}$ is first order, i.e. it is just built from reals, products, variants, and lists (or, again, arbitrary inductive types), and not function types. Then the diffeological space $\llbracket{\tau}\rrbracket$ is a manifold. ###### Proof 6.3 (Proof notes). This is proved by induction on the structure of types. In fact, one may show that every such $\llbracket{\tau}\rrbracket$ is isomorphic to a manifold of the form $\biguplus_{i=1}^{n}\mathbb{R}^{d_{i}}$ where the bound $n$ is either finite or $\infty$, but this isomorphism is typically not an identity function. We recall how the Taylor representation of any morphism $f:M\to N$ of manifolds is given by its action on jets [KSM99, Chapter IV].
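Before the general definition, it may help to see the $(k,R)=(1,1)$ case: there the action on jets is the familiar tangent map, which in local coordinates is exactly what dual number forward AD computes. A minimal sketch (ours, reusing the Dual type from the sketch in Section 4.1):

```haskell
-- In local coordinates, J^(1,1)(f) sends (x, v) to (f x, f'(x) * v);
-- dual number AD computes precisely this tangent map.
tangentMap :: (Dual -> Dual) -> (Double, Double) -> (Double, Double)
tangentMap f (x, v) = let D y dy = f (D x v) in (y, dy)

-- e.g. tangentMap (\u -> u * u) (3, 1) == (9, 6)
```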
For each point $x$ in a manifold $M$, define the $(k,R)$-_jet space_ $\mathcal{J}^{(k,R)}_{x}M$ to be the set $\\{\gamma\in\mathbf{Man}(\mathbb{R}^{k},M)\mid\gamma(0)=x\\}/\sim$ of equivalence classes $[\gamma]$ of $k$-dimensional plots $\gamma$ in $M$ based at $x$, where we identify $\gamma_{1}\sim\gamma_{2}$ iff all partial derivatives of order $\leq R$ coincide in the sense that $\frac{\partial^{\alpha_{1}+...+\alpha_{k}}(\gamma_{1};f)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}(0)=\frac{\partial^{\alpha_{1}+...+\alpha_{k}}(\gamma_{2};f)}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{k}^{\alpha_{k}}}(0)$ for all smooth $f:M\to\mathbb{R}$ and all multi-indices $(\alpha_{1},...,\alpha_{k})=(0,...,0),...,(R,0,...,0)$. In the case of $(k,R)=(1,1)$, a $(k,R)$-jet space is better known as a _tangent space_. The _$(k,R)$ -jet bundle_ (a.k.a. _tangent bundle_, in case $(k,R)=(1,1)$) of $M$ is the set $\mathcal{J}^{(k,R)}(M)\stackrel{{\scriptstyle\mathrm{def}}}{{=}}\biguplus_{x\in M}\mathcal{J}^{(k,R)}_{x}(M)$. The charts of $M$ equip $\mathcal{J}^{(k,R)}(M)$ with a canonical manifold structure. The (manifold) diffeology of these jet bundles can be concisely summarized by the plots $\mathcal{P}_{\mathcal{J}^{(k,R)}(M)}^{U}=\left\\{f:U\to|\mathcal{J}^{(k,R)}(M)|\mid\exists g\in\mathcal{P}_{M}^{U\times\mathbb{R}^{k}}.\forall u\in U.(g(u,0),[v\mapsto g(u,v)])=f(u)\right\\}$. Then $\mathcal{J}^{(k,R)}$ acts on smooth maps $f:M\to N$ to give $\mathcal{J}^{(k,R)}(f):\mathcal{J}^{(k,R)}(M)\to\mathcal{J}^{(k,R)}(N)$, defined as $\mathcal{J}^{(k,R)}(f)(x,[\gamma])\stackrel{{\scriptstyle\mathrm{def}}}{{=}}(f(x),[\gamma;f])$. In local coordinates, this action $\mathcal{J}^{(k,R)}(f)$ is seen to coincide precisely with the $(k,R)$-Taylor representation of $f$ given by the Faà di Bruno formula [Mer04]. All told, the $(k,R)$-jet bundle is a functor $\mathcal{J}^{(k,R)}:\mathbf{Man}\to\mathbf{Man}$ [KSM99]. We can understand the jet bundle of a composite space in terms of that of its parts. ###### Lemma 7. There are canonical isomorphisms $\mathcal{J}^{(k,R)}(\biguplus_{i=1}^{\infty}M_{i})\cong\biguplus_{i=1}^{\infty}\mathcal{J}^{(k,R)}(M_{i})$ and $\mathcal{J}^{(k,R)}(M_{1}\times\ldots\times M_{n})\cong\mathcal{J}^{(k,R)}(M_{1})\times\ldots\times\mathcal{J}^{(k,R)}(M_{n})$. ###### Proof 6.4 (Proof notes). For disjoint unions, notice that smooth morphisms from $\mathbb{R}^{k}$ into a disjoint union of manifolds always factor over a single inclusion, because $\mathbb{R}^{k}$ is connected. For products, it is well-known that partial derivatives of a morphism $(f_{1},...,f_{n})$ are calculated component-wise [Lee13, ex. 3-2]. We define a canonical isomorphism $\phi_{{\tau}}^{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}\mathcal{J}}:\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({\tau})\rrbracket\to\mathcal{J}^{(k,R)}(\llbracket{\tau}\rrbracket)$ for every type ${\tau}$, by induction on the structure of types. We let $\phi_{\mathbf{real}}^{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}\mathcal{J}}:\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}(\mathbf{real})\rrbracket\to\mathcal{J}^{(k,R)}(\llbracket\mathbf{real}\rrbracket)$ be given by $\phi_{\mathbf{real}}^{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}\mathcal{J}}(x,x^{\prime})\stackrel{{\scriptstyle\mathrm{def}}}{{=}}(x,[t\mapsto x+x^{\prime}t])$. For the other types, we use Lemma 7. We can now phrase correctness at all first order types.
###### Theorem 8 (Semantic correctness of ${\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}}$ (full)). For any first order type ${\tau}$, any first order context $\Gamma$ and any term $\Gamma\vdash{t}:{\tau}$, the syntactic translation $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}$ coincides with the $(k,R)$-jet bundle functor, modulo these canonical isomorphisms: $\begin{array}[]{ccc}\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}(\Gamma)\rrbracket&\xrightarrow{\;\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({t})\rrbracket\;}&\llbracket\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(k,R)}({\tau})\rrbracket\\\ {\scriptstyle\phi_{\Gamma}^{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}\mathcal{J}}}\big\downarrow{\scriptstyle\cong}&&{\scriptstyle\cong}\big\downarrow{\scriptstyle\phi_{{\tau}}^{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}\mathcal{J}}}\\\ \mathcal{J}^{(k,R)}(\llbracket\Gamma\rrbracket)&\xrightarrow{\;\mathcal{J}^{(k,R)}(\llbracket{t}\rrbracket)\;}&\mathcal{J}^{(k,R)}(\llbracket{\tau}\rrbracket)\end{array}$ ###### Proof 6.5 (Proof notes). For any $k$-dimensional plot $\gamma\in\mathbf{Man}(\mathbb{R}^{k},M)$, let $\bar{\gamma}\in\mathbf{Man}(\mathbb{R}^{k},\mathcal{J}^{(k,R)}(M))$ be the $(k,R)$-jet curve, given by $\bar{\gamma}(x)=(\gamma(x),[t\mapsto\gamma(x+t)])$. First, we note that a smooth map $h:\mathcal{J}^{(k,R)}(M)\to\mathcal{J}^{(k,R)}(N)$ is of the form $\mathcal{J}^{(k,R)}(g)$ for some $g:M\to N$ if for all smooth $\gamma:\mathbb{R}^{k}\to M$ we have $\bar{\gamma};h=\overline{(\gamma;g)}:\mathbb{R}^{k}\to\mathcal{J}^{(k,R)}(N)$. This generalizes (3). Second, for any first order type ${\tau}$, $S_{\llbracket{\tau}\rrbracket}=\\{(f,\tilde{f})~{}|~{}\tilde{f};\phi_{\tau}^{\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}\mathcal{J}}=\bar{f}\\}$. This is shown by induction on the structure of types. We conclude the theorem from diagram (4), by putting these two observations together. ## 7\. Discussion: What are derivatives of higher order functions? In our gluing categories $\mathbf{Gl}_{k}$ of Section 6.2, we have avoided the question of what semantic derivatives should be associated with higher order functions. Our syntactic macro $\overrightarrow{\mathcal{D}}$ provides a specific derivative for every definable function, but in the model $\mathbf{Gl}_{k}$ there is only a _relation_ between plots and their corresponding Taylor representations, and this relation is not necessarily single-valued. Our approach has been rather indifferent to what “the” correct derivative of a higher order function should be. Instead, all we have cared about is that we are using “a” derivative that is correct in the sense that it can never be used to produce incorrect derivatives for first order functions, where we do have an unambiguous notion of correct derivative. ### 7.1. Automatic derivatives of higher order functions may not be unique!
For a concrete example to show that derivatives of higher order functions might not be unique in our framework, let us consider the case $(k,R)=(1,1)$ and focus on first derivatives of the evaluation function $\displaystyle\mathrm{ev}:\ $ $\displaystyle\mathbb{R}\to\llbracket(\mathbf{real}\to\mathbf{real})\to\mathbf{real}\rrbracket=\mathbb{R}\to(\mathbb{R}\Rightarrow\mathbb{R})\Rightarrow\mathbb{R};$ $\displaystyle r\mapsto(f\mapsto f(r)).$ Our macro $\overrightarrow{\mathcal{D}}$ will return $\lambda a:\mathbb{R}.\lambda f:\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R}.f(a,1)$. In this section we show that the function $\lambda a:\mathbb{R}.\lambda f:\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R}.\mathrm{sort}\,f\,(a,1)$ is also a valid derivative of the evaluation map, where $\mathrm{sort}:(\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R})\Rightarrow(\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R})$ is defined by $\displaystyle\mathrm{sort}:=\lambda f.\lambda(r,\\_).\,(\pi_{1}(f(r,0)),\nabla((-,0);f;\pi_{1})(r)).$ This map is idempotent and it converts any map $\mathbb{R}\times\mathbb{R}\to\mathbb{R}\times\mathbb{R}$ into the dual-numbers representation of its first component. For example, $(\mathrm{sort}(\mathrm{swap}))$ is the constant $(0,0)$ function, where we write $\displaystyle\mathrm{swap}:\ $ $\displaystyle\mathbb{R}\times\mathbb{R}\to\mathbb{R}\times\mathbb{R}$ $\displaystyle(r,r^{\prime})\mapsto(r^{\prime},r).$ According to our gluing semantics, a function $g:\llparenthesis{\tau}\rrparenthesis_{1}\to\llparenthesis{\sigma}\rrparenthesis_{1}$ defines _a_ correct $(k,R)$-Taylor representation of a function $f:\llparenthesis{\tau}\rrparenthesis_{0}\to\llparenthesis{\sigma}\rrparenthesis_{0}$ iff $(f,g)$ defines a morphism $\llparenthesis{\tau}\rrparenthesis\to\llparenthesis{\sigma}\rrparenthesis$ in $\mathbf{Gl}_{k}$. In particular, there is no guarantee that every $f$ has a _unique_ correct $(k,R)$-Taylor representation $g$. (Although such Taylor representations are, in fact, unique when ${\tau},{\sigma}$ are first order types.) The gluing relation $\llparenthesis(\mathbf{real}\to\mathbf{real})\to\mathbf{real}\rrparenthesis$ in $\mathbf{Gl}_{1}$ relates curves $\gamma:\mathbb{R}\to(\mathbb{R}\Rightarrow\mathbb{R})\Rightarrow\mathbb{R}$ to “tangent curves” $\gamma^{\prime}:\mathbb{R}\to(\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R})\Rightarrow\mathbb{R}\times\mathbb{R}$. In this relation, the function $\mathrm{ev}$ is related to at least two different tangent curves. ###### Lemma 9. We have a smooth map $\displaystyle\mathrm{sort}:\ $ $\displaystyle(\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R})\to(\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R})$ $\displaystyle f\mapsto((r,\\_)\mapsto(\pi_{1}(f(r,0)),\nabla((-,0);f;\pi_{1})(r))).$ ###### Proof 7.1. Let $f\in\mathcal{P}_{\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R}}^{U}$ and let $\gamma_{1},\gamma_{2}\in\mathcal{P}_{\mathbb{R}}^{U}$. Then, also $u\mapsto(\gamma_{1}(u),0);f(u);\pi_{1}\in\mathcal{P}_{\mathbb{R}}^{U}=\mathbf{Man}(U,\mathbb{R})$ by definition of the exponential in $\mathbf{Diff}$. Therefore, we also have that $u\mapsto\nabla((-,0);f(u);\pi_{1})(\gamma_{1}(u))\in\mathbf{Man}(U,\mathbb{R})=\mathcal{P}_{\mathbb{R}}^{U}$, as we are working with infinitely differentiable smooth maps.
Consequently, $u\mapsto(f;\mathrm{sort})(u)(\gamma_{1}(u),\gamma_{2}(u))=(\pi_{1}(f(u)(\gamma_{1}(u),0)),\ \nabla((-,0);f(u);\pi_{1})(\gamma_{1}(u)))\in\mathcal{P}_{\mathbb{R}\times\mathbb{R}}^{U},$ by definition of the product in $\mathbf{Diff}$. It follows that $(f;\mathrm{sort})\in\mathcal{P}_{\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R}}^{U}$. ###### Proposition 10. We have that both $(\mathrm{ev},\mathrm{ev}^{\prime}_{1})\in\llparenthesis(\mathbf{real}\to\mathbf{real})\to\mathbf{real}\rrparenthesis$ and $(\mathrm{ev},\mathrm{ev}^{\prime}_{2})\in\llparenthesis(\mathbf{real}\to\mathbf{real})\to\mathbf{real}\rrparenthesis$ for $\displaystyle\mathrm{ev}^{\prime}_{1}:$ $\displaystyle\mathbb{R}\to(\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R})\Rightarrow\mathbb{R}\times\mathbb{R}$ $\displaystyle a\mapsto(f\mapsto f(a,1))$ $\displaystyle\mathrm{ev}^{\prime}_{2}:$ $\displaystyle\mathbb{R}\to(\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R})\Rightarrow\mathbb{R}\times\mathbb{R}$ $\displaystyle a\mapsto(f\mapsto(\mathrm{sort}\,f)(a,1)).$ ###### Proof 7.2. By definition of $\llparenthesis-\rrparenthesis$, we need to show that for any $(\gamma,\gamma^{\prime})\in\llparenthesis\mathbf{real}\to\mathbf{real}\rrparenthesis$, we have that $(x\mapsto\mathrm{ev}(x)(\gamma(x)),x\mapsto\mathrm{ev}^{\prime}_{i}(x)(\gamma^{\prime}(x)))\in\llparenthesis\mathbf{real}\rrparenthesis$. This means that we need to show that for $i=1,2$ $\displaystyle x\mapsto\mathrm{ev}^{\prime}_{i}(x)(\gamma^{\prime}(x))=(x\mapsto\mathrm{ev}(x)(\gamma(x)),\nabla(x\mapsto\mathrm{ev}(x)(\gamma(x))))$ Unrolling further: the hypothesis $(\gamma,\gamma^{\prime})\in\llparenthesis\mathbf{real}\to\mathbf{real}\rrparenthesis$ means that $\gamma:\mathbb{R}\to\mathbb{R}\Rightarrow\mathbb{R}$ and $\gamma^{\prime}:\mathbb{R}\to\mathbb{R}\times\mathbb{R}\Rightarrow\mathbb{R}\times\mathbb{R}$ are such that for any $(\delta,\delta^{\prime})\in\llparenthesis\mathbf{real}\rrparenthesis$ (which means that $\delta:\mathbb{R}\to\mathbb{R}$ and $\delta^{\prime}=(\delta,\nabla\delta)$), we have that $\displaystyle\Big{(}r\mapsto\gamma(r)(\delta(r)),r\mapsto\gamma^{\prime}(r)(\delta^{\prime}(r))\Big{)}\in\llparenthesis\mathbf{real}\rrparenthesis,$ that is, $\displaystyle r\mapsto\gamma^{\prime}(r)(\delta(r),\nabla\delta(r))=(r\mapsto\gamma(r)(\delta(r)),\nabla(r\mapsto\gamma(r)(\delta(r)))).$ Now, focussing on $\mathrm{ev}^{\prime}_{1}$: we need to show that $\displaystyle x\mapsto\mathrm{ev}^{\prime}_{1}(x)(\gamma^{\prime}(x))=(x\mapsto\mathrm{ev}(x)(\gamma(x)),$ $\displaystyle\nabla(x\mapsto\mathrm{ev}(x)(\gamma(x))))$ Inlining the definition of $\mathrm{ev}^{\prime}_{1}$: we need to show that $\displaystyle x\mapsto\gamma^{\prime}(x)(x,1)=(x\mapsto\gamma(x)(x),\nabla(x\mapsto\gamma(x)(x)))$ This follows by assumption by choosing $\delta(r)=r$, and hence $\delta^{\prime}(r)=(r,1)$.
Focussing on $\mathrm{ev}^{\prime}_{2}$: we need to show that $\displaystyle x\mapsto\mathrm{ev}^{\prime}_{2}(x)(\gamma^{\prime}(x))=(x\mapsto\mathrm{ev}(x)(\gamma(x)),\nabla(x\mapsto\mathrm{ev}(x)(\gamma(x)))).$ Inlining $\mathrm{ev}^{\prime}_{2}$’s definition: we need to show that $\displaystyle x\mapsto(\mathrm{sort}~{}\gamma^{\prime}(x))(x,1)=$ $\displaystyle\Big{(}x\mapsto((r,\\_)\mapsto(\pi_{1}(\gamma^{\prime}(x)(r,0)),\nabla((-,0);\gamma^{\prime}(x);\pi_{1})(r)))(x,1)\Big{)}=$ $\displaystyle\Big{(}x\mapsto(\pi_{1}(\gamma^{\prime}(x)(x,0)),\nabla((-,0);\gamma^{\prime}(x);\pi_{1})(x))\Big{)}$ is equal to $\displaystyle\Big{(}x\mapsto\gamma(x)(x),\nabla(x\mapsto\gamma(x)(x))\Big{)}.$ That is, we need to show that $\pi_{1}(\gamma^{\prime}(x)(x,0))=\gamma(x)(x)$ for all $x\in\mathbb{R}$, which holds by the assumption that $(\gamma,\gamma^{\prime})\in\llparenthesis\mathbf{real}\to\mathbf{real}\rrparenthesis$ by choosing $\delta(x^{\prime})=x$ (and hence $\delta^{\prime}(x^{\prime})=(x,0)$) and then specializing to $x=x^{\prime}$.

Yet, $\mathrm{ev}^{\prime}_{1}\neq\mathrm{ev}^{\prime}_{2}$, as $\mathrm{ev}^{\prime}_{1}(a)(\mathrm{swap})=(1,a)$ while $\mathrm{ev}^{\prime}_{2}(a)(\mathrm{swap})=(0,0)$. This shows that $\mathrm{ev}^{\prime}_{1}$ and $\mathrm{ev}^{\prime}_{2}$ are both “valid” semantic derivatives of the evaluation function $\mathrm{ev}$ in our framework. In particular, it shows that semantic derivatives of higher order functions might not be unique. Our macro $\overrightarrow{\mathcal{D}}$ will return $\mathrm{ev}^{\prime}_{1}$, but everything would still work just as well if it instead returned $\mathrm{ev}^{\prime}_{2}$.

### 7.2. Canonical derivatives of higher order functions?

Differential geometers and analysts have long pursued notions of a canonical derivative of various higher order functions arising, for example, in the calculus of variations and in the study of infinite dimensional Lie groups [KM97]. Such an uncontroversial notion of derivative exists on various (infinite dimensional) spaces of functions that form suitable (so-called convenient) vector spaces, or manifolds locally modelled on such vector spaces. At the level of generality of diffeological spaces, however, various natural notions of derivative that coincide in convenient vector spaces start to diverge, and it is no longer clear what the best definition of a derivative is [CW14]. Another, fundamentally different setting that defines canonical derivatives of many higher order functions is given by synthetic differential geometry [Koc06]. While derivatives of higher order functions are of deep interest and have rightly been studied in their own right in differential geometry, we believe the situation is subtly different in computer science:

(1) In programming applications, we use higher order programs only to construct the first order functions that we ultimately end up running and calculating derivatives of. Automatic differentiation methods can exploit this freedom: derivatives of higher order functions only matter insofar as they can be used to construct the correct derivatives of first order functions, so we can choose a simple and cheap notion of derivative among the valid options. As such, the fact that our semantics does not commit to a single notion of derivative of higher order functions can be seen as a _feature rather than a bug_ that models the pragmatics of programming practice.
(2) While function spaces in differential geometry are typically infinite dimensional objects that are unsuitable for representation in the finite memory of a computer, higher order functions as used in programming are much more restricted: all they can do is call a function on finitely many arguments and analyse the function outputs. As such, function types in programming can be thought of as (locally) finite dimensional.

In case a canonical notion of automatic derivative of higher order functions is really desired, it may be worth pursuing a more intentional notion of semantics, such as one based on game semantics. Such intentional techniques could capture the computational notion of higher order function better than our current (and other) extensional semantics based on existing techniques from differential geometry. We hope that an exploration of such techniques might lead to an appropriate notion of computable derivative, even for higher order functions.

## 8\. Discussion and future work

### 8.1. Summary

We have shown that diffeological spaces provide a denotational semantics for a higher order language with variants and inductive types (Sections 4 and 5). We have used this to show correctness of simple forward-mode AD translations for calculating higher derivatives (Theorems 3 and 8). The structure of our elementary correctness argument for Theorem 3 is a typical logical relations proof over a denotational semantics. As explained in Section 6, this can equivalently be understood as a denotational semantics in a new kind of space obtained by categorical gluing. Overall, then, there are two logical relations at play. One is in diffeological spaces, which ensures that all definable functions are smooth. The other is in the correctness proof (equivalently in the categorical gluing), which explicitly tracks the derivative of each function, and tracks the syntactic AD even at higher types.

### 8.2. Connection to the state of the art in AD implementation

As is common in denotational semantics research, we have here focused on an idealized language and simple translations to illustrate the main aspects of the method. There are a number of points where our approach is simplistic compared to the advanced current practice, as we now explain.

#### 8.2.1. Representation of vectors

In our examples we have treated $n$-vectors as tuples of length $n$. This style of programming does not scale to large $n$. A better solution would be to use array types, following [SFVPJ19]. As demonstrated by [CJS20], our categorical semantics and correctness proofs straightforwardly extend to cover them, in a similar way to our treatment of lists. In fact, [CJS20] formalizes our correctness arguments in Coq and extends them to apply to the system of [SFVPJ19].

#### 8.2.2. Efficient forward-mode AD

For AD to be useful, it must be fast. The $(1,1)$-AD macro $\scalebox{0.8}{$\overrightarrow{\mathcal{D}}$}_{(1,1)}$ that we use is the basis of an efficient AD library [SFVPJ19]. Numerous optimizations are needed, ranging from algebraic manipulations, to partial evaluations, to the use of an optimizing C compiler, but the resulting implementation is performant in experiments [SFVPJ19]. The Coq formalization [CJS20] validates some of these manipulations using a similar semantics to ours. We believe the implementation in [SFVPJ19] can be extended to apply to the more general $(k,R)$-AD methods we described in this paper through minor changes.
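At runtime, the $(1,1)$ macro amounts to evaluating programs on dual numbers. The following minimal Python sketch — our own illustration, not the implementation of [SFVPJ19] — shows this evaluation strategy, and shows that a higher order combinator needs no derivative of its own because it is only used to assemble first order functions, echoing point (1) of Section 7.2. All names here are illustrative.

```python
# Minimal dual-numbers sketch (ours, for illustration; not the library of
# [SFVPJ19]). A real r is carried as a pair (r, tangent); each smooth
# primitive pushes tangents forward by the chain rule.
import math

def d_mul(x, y):
    """Dual product: (a, a') * (b, b') = (a*b, a'*b + a*b')."""
    (a, da), (b, db) = x, y
    return (a * b, da * b + a * db)

def d_sin(x):
    """Dual sine: (a, a') |-> (sin a, a' * cos a)."""
    a, da = x
    return (math.sin(a), da * math.cos(a))

def compose(f, g):
    """Higher order combinator; it merely assembles first order functions,
    so it needs no special treatment under the dual-numbers transform."""
    return lambda x: g(f(x))

# f(x) = x * sin(x), built with the higher order combinator.
f_dual = compose(lambda x: (x, d_sin(x)), lambda p: d_mul(p[0], p[1]))

a = 1.3
value, deriv = f_dual((a, 1.0))        # seed tangent 1, as in f(a, 1)
assert abs(value - a * math.sin(a)) < 1e-12
assert abs(deriv - (math.sin(a) + a * math.cos(a))) < 1e-12
```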
#### 8.2.3. Reverse-mode and mixed-mode AD

While forward-mode AD methods are useful, many applications require reverse-mode AD, or even mixed-mode AD, for efficiency. In [HSV20a], we described how our correctness proof applies to a continuation-based AD technique that closely resembles reverse-mode AD, but only has the correct complexity under a non-standard operational semantics [BMP20] (in particular, the linear factoring rule is crucial). It remains to be seen whether this technique and its correctness proof can be adapted to yield genuine reverse AD under a standard operational semantics. Alternatively, by relying on a variation of our techniques, [Vák21] gives a correctness proof of a rather different $(1,1)$-reverse AD algorithm that stores the (primal, adjoint)-vector pair as a struct-of-arrays rather than as an array-of-structs. Future work could explore extending its analysis to $(k,R)$-reverse AD and mixed-mode AD for efficiently computing higher order derivatives.

#### 8.2.4. Other language features

The idealized languages that we considered so far do not touch on several useful language constructs. For example: the use of functions that are partial (such as division) or partly-smooth (such as ReLU); phenomena such as iteration, recursion; and probabilities. Recent work by MV [Vák20] shows how our analysis of $(1,1)$-AD extends to apply to partiality, iteration, and recursion. This development is orthogonal to the one in this paper: its methods combine directly with those in the present paper to analyze $(k,R)$-forward-mode AD of recursive programs. We leave the analysis of AD of probabilistic programs for future work.

## Acknowledgment

We have benefited from discussing this work with many people, including M. Betancourt, B. Carpenter, O. Kammar, C. Mak, L. Ong, B. Pearlmutter, G. Plotkin, A. Shaikhha, J. Sigal, and others. In the course of this work, MV has also been employed at Oxford (EPSRC Project EP/M023974/1) and at Columbia in the Stan development team. This project has also received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 895827; a Royal Society University Research Fellowship; the ERC BLAST grant; the Air Force Office of Scientific Research under award number FA9550–21–1–0038; and a Facebook Research Award.

## References

* [AAB+16] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
* [Ama12] Shun-ichi Amari. Differential-geometrical methods in statistics, volume 28. Springer Science & Business Media, 2012.
* [AP20] Martín Abadi and Gordon D Plotkin. A simple differentiable programming language. In Proc. POPL 2020. ACM, 2020.
* [BCLG20] Gilles Barthe, Raphaëlle Crubillé, Ugo Dal Lago, and Francesco Gavazzo.
On the versatility of open logical relations: Continuity, automatic differentiation, and a containment theorem. In Proc. ESOP 2020. Springer, 2020. To appear.
* [Bet18] Michael Betancourt. A geometric theory of higher-order automatic differentiation. arXiv preprint arXiv:1812.11592, 2018.
* [BH11] John Baez and Alexander Hoffnung. Convenient categories of smooth spaces. Transactions of the American Mathematical Society, 363(11):5789–5825, 2011.
* [BJD19] Jesse Bettencourt, Matthew J Johnson, and David Duvenaud. Taylor-mode automatic differentiation for higher-order derivatives in JAX. 2019.
* [BML+20] Gilbert Bernstein, Michael Mara, Tzu-Mao Li, Dougal Maclaurin, and Jonathan Ragan-Kelley. Differentiating a tensor language. arXiv preprint arXiv:2008.11256, 2020.
* [BMP20] Alois Brunel, Damiano Mazza, and Michele Pagani. Backpropagation in the simply typed lambda-calculus with linear negation. In Proc. POPL 2020, 2020.
* [BS96] Claus Bendtsen and Ole Stauning. FADBAD, a flexible C++ package for automatic differentiation. Technical Report IMM–REP–1996–17, Department of Mathematical Modelling, Technical University of Denmark, Lyngby, 1996.
* [BS97] Claus Bendtsen and Ole Stauning. TADIFF, a flexible C++ package for automatic differentiation. Technical Report IMM–REP–1997–07, Department of Mathematical Modelling, Technical University of Denmark, Lyngby, 1997.
* [CCG+20] J. Robin B. Cockett, Geoff S. H. Cruttwell, Jonathan Gallagher, Jean-Simon Pacaud Lemay, Benjamin MacAdam, Gordon D. Plotkin, and Dorette Pronk. Reverse derivative categories. In Proc. CSL 2020, 2020.
* [CGM19] Geoff Cruttwell, Jonathan Gallagher, and Ben MacAdam. Towards formalizing and extending differential programming using tangent categories. In Proc. ACT 2019, 2019.
* [CHB+15] Bob Carpenter, Matthew D Hoffman, Marcus Brubaker, Daniel Lee, Peter Li, and Michael Betancourt. The Stan math library: Reverse-mode automatic differentiation in C++. arXiv preprint arXiv:1509.07164, 2015.
* [CJS20] Curtis Chin Jen Sem. Formalized correctness proofs of automatic differentiation in Coq. Master’s Thesis, Utrecht University, 2020. Thesis: https://dspace.library.uu.nl/handle/1874/400790. Coq code: https://github.com/crtschin/thesis.
* [CRBD18] Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pages 6571–6583, 2018.
* [CS96] G Constantine and T Savits. A multivariate Faa di Bruno formula with applications. Transactions of the American Mathematical Society, 348(2):503–520, 1996.
* [CS11] J Robin B Cockett and Robert AG Seely. The Faa di Bruno construction. Theory and Applications of Categories, 25(15):394–425, 2011.
* [CW14] J Daniel Christensen and Enxin Wu. Tangent spaces and tangent bundles for diffeological spaces. arXiv preprint arXiv:1411.5425, 2014.
* [DHS11] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
* [Ell18] Conal Elliott. The simple essence of automatic differentiation. Proceedings of the ACM on Programming Languages, 2(ICFP):70, 2018.
* [EM03] L Hernández Encinas and J Munoz Masque. A short proof of the generalized Faà di Bruno’s formula. Applied Mathematics Letters, 16(6):975–979, 2003.
* [ER03] Thomas Ehrhard and Laurent Regnier. The differential lambda-calculus. Theoretical Computer Science, 309(1-3):1–41, 2003.
* [FJL18] Roy Frostig, Matthew James Johnson, and Chris Leary. Compiling machine learning programs via high-level tracing. Systems for Machine Learning, 2018.
* [FST19] Brendan Fong, David Spivak, and Rémy Tuyéras. Backprop as functor: A compositional perspective on supervised learning. In 2019 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), pages 1–13. IEEE, 2019.
* [GSS00] Izrail Moiseevitch Gelfand and Richard A Silverman. Calculus of variations. Courier Corporation, 2000.
* [GUW00] Andreas Griewank, Jean Utke, and Andrea Walther. Evaluating higher derivative tensors by forward propagation of univariate Taylor series. Mathematics of Computation, 69(231):1117–1130, 2000.
* [HG14] Matthew D Hoffman and Andrew Gelman. The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1):1593–1623, 2014.
* [HSV20a] Mathieu Huot, Sam Staton, and Matthijs Vákár. Correctness of automatic differentiation via diffeologies and categorical gluing. In FoSSaCS, pages 319–338, 2020.
* [HSV20b] Mathieu Huot, Sam Staton, and Matthijs Vákár. Correctness of automatic differentiation via diffeologies and categorical gluing. Full version, 2020. arxiv:2001.02209.
* [IZ13] Patrick Iglesias-Zemmour. Diffeology. American Mathematical Soc., 2013.
* [JLS07] Peter T Johnstone, Stephen Lack, and P Sobocinski. Quasitoposes, quasiadhesive categories and Artin glueing. In Proc. CALCO 2007, 2007.
* [JR11] Bart Jacobs and JMMM Rutten. An introduction to (co)algebras and (co)induction. In Advanced Topics in Bisimulation and Coinduction, pages 38–99. CUP, 2011.
* [Kar01] Jerzy Karczmarczuk. Functional differentiation of computer programs. Higher-Order and Symbolic Computation, 14(1):35–57, 2001.
* [KB14] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [KK04] Dana A Knoll and David E Keyes. Jacobian-free Newton–Krylov methods: a survey of approaches and applications. Journal of Computational Physics, 193(2):357–397, 2004.
* [KM97] Andreas Kriegl and Peter W Michor. The convenient setting of global analysis, volume 53. American Mathematical Soc., 1997.
* [Koc06] Anders Kock. Synthetic differential geometry, volume 333. Cambridge University Press, 2006.
* [KSM99] Ivan Kolár, Jan Slovák, and Peter W Michor. Natural operations in differential geometry. 1999.
* [KTR+17] Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. Automatic differentiation variational inference. The Journal of Machine Learning Research, 18(1):430–474, 2017.
* [KW+52] Jack Kiefer, Jacob Wolfowitz, et al. Stochastic estimation of the maximum of a regression function. The Annals of Mathematical Statistics, 23(3):462–466, 1952.
* [Lee13] John M Lee. Smooth manifolds. In Introduction to Smooth Manifolds, pages 1–31. Springer, 2013.
* [LMG18] Sören Laue, Matthias Mitterreiter, and Joachim Giesen. Computing higher order derivatives of matrix and tensor expressions. Advances in Neural Information Processing Systems, 31:2750–2759, 2018.
* [LMG20] Sören Laue, Matthias Mitterreiter, and Joachim Giesen. A simple and efficient tensor calculus. In AAAI, pages 4527–4534, 2020.
* [LN89] Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical programming, 45(1-3):503–528, 1989.
* [LNV21] Fernando Lucatelli Nunes and Matthijs Vákár. CHAD for expressive total languages. arXiv e-prints, 2021.
* [LYRY20] Wonyeol Lee, Hangyeol Yu, Xavier Rival, and Hongseok Yang. On correctness of automatic differentiation for non-differentiable functions. In Advances in Neural Information Processing Systems, 2020.
* [Man12] Oleksandr Manzyuk. A simply typed $\lambda$-calculus of forward automatic differentiation. In Proc. MFPS 2012, 2012.
* [Mar10] James Martens. Deep learning via Hessian-free optimization. In ICML, volume 27, pages 735–742, 2010.
* [Mer04] Joel Merker. Four explicit formulas for the prolongations of an infinitesimal Lie symmetry and multivariate Faa di Bruno formulas. arXiv preprint math/0411650, 2004.
* [MO20] Carol Mak and Luke Ong. A differential-form pullback programming language for higher-order reverse-mode automatic differentiation. arxiv:2002.08241, 2020.
* [MP21] Damiano Mazza and Michele Pagani. Automatic differentiation in PCF. Proc. ACM Program. Lang., 5(POPL):1–27, 2021. doi:10.1145/3434309.
* [MS92] John C Mitchell and Andre Scedrov. Notes on sconing and relators. In International Workshop on Computer Science Logic, pages 352–378. Springer, 1992.
* [Nea11] Radford M Neal. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo, chapter 5. Chapman & Hall / CRC Press, 2011.
* [PGC+17] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
* [Pit95] Andrew M Pitts. Categorical logic. Technical report, University of Cambridge, Computer Laboratory, 1995.
* [Plo18] Gordon D Plotkin. Some principles of differential programming languages. Invited talk, POPL 2018, 2018.
* [PS07] Barak A Pearlmutter and Jeffrey Mark Siskind. Lazy multivariate higher-order forward-mode AD. ACM SIGPLAN Notices, 42(1):155–160, 2007.
* [PS08] Barak A Pearlmutter and Jeffrey Mark Siskind. Reverse-mode AD in a functional framework: Lambda the ultimate backpropagator. ACM Transactions on Programming Languages and Systems (TOPLAS), 30(2):7, 2008.
* [Qia99] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural networks, 12(1):145–151, 1999.
* [RM51] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
* [Sav06] Thomas H Savits. Some statistical applications of Faa di Bruno. Journal of Multivariate Analysis, 97(10):2131–2140, 2006.
* [SFVPJ19] Amir Shaikhha, Andrew Fitzgibbon, Dimitrios Vytiniotis, and Simon Peyton Jones. Efficient differentiable programming in a functional array-processing language. Proceedings of the ACM on Programming Languages, 3(ICFP):97, 2019.
* [SMC20] Benjamin Sherman, Jesse Michel, and Michael Carbin. $\lambda_{S}$: Computable semantics for differentiable programming with higher-order functions and datatypes. arXiv preprint arXiv:2007.08017, 2020.
* [Sou80] Jean-Marie Souriau. Groupes différentiels. In Differential geometrical methods in mathematical physics, pages 91–128. Springer, 1980.
* [Sta11] Andrew Stacey. Comparative smootheology. Theory Appl. Categ., 25(4):64–117, 2011.
* [Vák20] Matthijs Vákár. Denotational correctness of forward-mode automatic differentiation for iteration and recursion. arXiv preprint arXiv:2007.05282, 2020.
* [Vák21] Matthijs Vákár. Reverse AD at higher types: Pure, principled and denotationally correct. In ESOP, pages 607–634, 2021.
* [VMBBL18] Bart Van Merriënboer, Olivier Breuleux, Arnaud Bergeron, and Pascal Lamblin. Automatic differentiation in ML: Where we are and where we should be going.
In Advances in Neural Information Processing Systems, pages 8757–8767, 2018.
* [VS21] Matthijs Vákár and Tom Smeding. CHAD: Combinatory homomorphic automatic differentiation. arXiv preprint arXiv:2103.15776, 2021.
* [WGP16] Mu Wang, Assefaw Gebremedhin, and Alex Pothen. Capitalizing on live variables: new algorithms for efficient Hessian computation via automatic differentiation. Mathematical Programming Computation, 8(4):393–433, 2016.
* [WWE+19] Fei Wang, Xilun Wu, Gregory Essertel, James Decker, and Tiark Rompf. Demystifying differentiable programming: Shift/reset the penultimate backpropagator. Proceedings of the ACM on Programming Languages, 3(ICFP), 2019.
* [ZHCW20] Shaopeng Zhu, Shih-Han Hung, Shouvanik Chakrabarti, and Xiaodi Wu. On the principles of differentiable quantum programming languages. In Proceedings of the 41st ACM SIGPLAN International Conference on Programming Language Design and Implementation, PLDI 2020, London, UK, June 15-20, 2020, pages 272–285. ACM, 2020. doi:10.1145/3385412.3386011.

## Appendix A $\mathbf{CartSp}$ and $\mathbf{Man}$ are not cartesian closed categories

###### Lemma 11.

There is no continuous injection $\mathbb{R}^{d+1}\to\mathbb{R}^{d}$.

###### Proof A.1.

If there were, it would restrict to a continuous injection $S^{d}\to\mathbb{R}^{d}$. The Borsuk-Ulam theorem, however, tells us that every continuous $f:S^{d}\to\mathbb{R}^{d}$ has some $x\in S^{d}$ such that $f(x)=f(-x)$, which is a contradiction.

Let us define the terms: ${x}_{0}:\mathbf{real},\ldots,{x}_{n}:\mathbf{real}\vdash{t}_{n}=\lambda{y}.{{x}_{0}+{x}_{1}*y+\dots+{x}_{n}*y^{n}}:\mathbf{real}\to\mathbf{real}$. Assuming that $\mathbf{CartSp}$/$\mathbf{Man}$ is cartesian closed, observe that these get interpreted as injective continuous (because smooth) functions $\mathbb{R}^{n+1}\to\llbracket\mathbf{real}\to\mathbf{real}\rrbracket$ in $\mathbf{CartSp}$ and $\mathbf{Man}$.

###### Theorem 12.

$\mathbf{CartSp}$ is not cartesian closed.

###### Proof A.2.

In case $\mathbf{CartSp}$ were cartesian closed, we would have $\llbracket\mathbf{real}\to\mathbf{real}\rrbracket=\mathbb{R}^{n}$ for some $n$. Then, we would get, in particular, a continuous injection $\llbracket{t}_{n}\rrbracket:\mathbb{R}^{n+1}\to\mathbb{R}^{n}$, which contradicts Lemma 11.

###### Theorem 13.

$\mathbf{Man}$ is not cartesian closed.

###### Proof A.3.

Observe that we have $\iota_{n}:\mathbb{R}^{n+1}\to\mathbb{R}^{n+2}$; $\langle a_{0},\ldots,a_{n}\rangle\mapsto\langle a_{0},\ldots,a_{n},0\rangle$ and that $\iota_{n};\llbracket{t}_{n+1}\rrbracket=\llbracket{t}_{n}\rrbracket$. Let us write $A_{n}$ for the image of $\llbracket{t}_{n}\rrbracket$ and $A=\cup_{n\in\mathbb{N}}A_{n}$. Then, $A_{n}$ is connected because it is the continuous image of a connected set. Similarly, $A$ is connected because it is the non-disjoint union of connected sets. This means that $A$ lies in a single connected component of $\llbracket\mathbf{real}\to\mathbf{real}\rrbracket$, which is a manifold with some finite dimension, say $d$. Take some $x\in\mathbb{R}^{d+1}$ (say, $0$), take some open $d$-ball $U$ around $\llbracket{t}_{d}\rrbracket(x)$, and take some open $(d+1)$-ball $V$ around $x$ in $\llbracket{t}_{d}\rrbracket^{-1}(U)$. Then, $\llbracket{t}_{d}\rrbracket$ restricts to a continuous injection from $V$ to $U$, or equivalently, $\mathbb{R}^{d+1}$ to $\mathbb{R}^{d}$, which contradicts Lemma 11.
UFIFT-QG-20-06

Inflaton Effective Potential from Photons for General $\epsilon$

S. Katuwal$^{1*}$, S. P. Miao$^{2\star}$ and R. P. Woodard$^{1\dagger}$

$^{1}$ Department of Physics, University of Florida, Gainesville, FL 32611, UNITED STATES

$^{2}$ Department of Physics, National Cheng Kung University, No. 1 University Road, Tainan City 70101, TAIWAN

ABSTRACT

We accurately approximate the contribution that photons make to the effective potential of a charged inflaton for inflationary geometries with an arbitrary first slow roll parameter $\epsilon$. We find a small, nonlocal contribution and a numerically larger, local part. The local part involves first and second derivatives of $\epsilon$, coming exclusively from the constrained part of the electromagnetic field which carries the long range interaction. This causes the effective potential induced by electromagnetism to respond more strongly to geometrical evolution than for either scalars, which have no derivatives, or spin one half particles, which have only one derivative. For $\epsilon=0$ our final result agrees with that of Allen [1] on de Sitter background, while the flat space limit agrees with the classic result of Coleman and Weinberg [2].

PACS numbers: 04.50.Kd, 95.35.+d, 98.62.-g

$^{*}$ e-mail:<EMAIL_ADDRESS>
$^{\star}$ e-mail:<EMAIL_ADDRESS>
$^{\dagger}$ e-mail:<EMAIL_ADDRESS>

## 1 Introduction

No one knows what caused primordial inflation but the data [3] are consistent with a minimally coupled, complex scalar inflaton $\varphi$, $\mathcal{L}=-\partial_{\mu}\varphi\partial_{\nu}\varphi^{*}g^{\mu\nu}\sqrt{-g}-V(\varphi\varphi^{*})\sqrt{-g}\;.$ (1) If the inflaton couples only to gravity the loop corrections to its effective potential come only from quantum gravity and are suppressed by powers of the loop-counting parameter $GH^{2}\lesssim 10^{-10}$, where $G$ is Newton’s constant and $H$ is the Hubble parameter during inflation. In that case the classical evolution suffers little disturbance but reheating is very slow. Efficient reheating requires coupling the inflaton to normal matter such as electromagnetism with a non-infinitesimal charge $q$, $\displaystyle\mathcal{L}=-\Bigl{(}\partial_{\mu}-iqA_{\mu}\Bigr{)}\varphi\Bigl{(}\partial_{\nu}+iqA_{\nu}\Bigr{)}\varphi^{*}g^{\mu\nu}\sqrt{-g}$ (2) $\displaystyle\hskip 142.26378pt-V(\varphi\varphi^{*})\sqrt{-g}-\frac{1}{4}F_{\mu\nu}F_{\rho\sigma}g^{\mu\rho}g^{\nu\sigma}\sqrt{-g}\;.\qquad$ But the price of efficient reheating is significant one loop corrections to the inflaton effective potential [4]. For large fields these corrections approach the Coleman-Weinberg form of flat space $\Delta V\longrightarrow\frac{3}{16\pi^{2}}(q^{2}\varphi\varphi^{*})^{2}\ln(q^{2}\varphi\varphi^{*}/s^{2})$, where $s$ is the renormalization scale [2].
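It is worth recording where the photon mass that drives these corrections comes from; this is a one-line consequence of (2). Isolating the part of the gauge-covariant kinetic term that is quadratic in the vector potential gives,

$-\Bigl(\partial_{\mu}-iqA_{\mu}\Bigr)\varphi\Bigl(\partial_{\nu}+iqA_{\nu}\Bigr)\varphi^{*}g^{\mu\nu}\sqrt{-g}\;\supset\;-q^{2}\varphi\varphi^{*}A_{\mu}A_{\nu}g^{\mu\nu}\sqrt{-g}\;=\;-\frac{1}{2}M^{2}A_{\mu}A_{\nu}g^{\mu\nu}\sqrt{-g}\;,$

so a constant scalar background endows the photon with mass-squared $M^{2}=2q^{2}\varphi\varphi^{*}$, the combination which appears throughout the analysis below.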
However, cosmological Coleman-Weinberg potentials generally depend in a complicated way on the geometry of inflation [5], $ds^{2}=a^{2}\Bigl{[}-d\eta^{2}+d\vec{x}\\!\cdot\\!d\vec{x}\Bigr{]}\qquad\Longrightarrow\qquad H\equiv\frac{\partial_{0}a}{a^{2}}\;\;,\;\;\epsilon(t)\equiv-\frac{\partial_{0}H}{aH^{2}}\;.$ (3) For the special case of de Sitter (with constant $H$ and $\epsilon=0$) the result takes the form [1, 6, 7], $\Delta V\Bigl{|}_{\epsilon=0}=\frac{3H^{4}}{16\pi^{2}}\Biggl{\\{}b\Bigl{(}\frac{q^{2}\varphi\varphi^{*}}{H^{2}}\Bigr{)}+\Bigl{(}\frac{q^{2}\varphi\varphi^{*}}{H^{2}}\Bigr{)}\ln\Bigl{(}\frac{H^{2}}{s^{2}}\Bigr{)}+\frac{1}{2}\Bigl{(}\frac{q^{2}\varphi\varphi^{*}}{H^{2}}\Bigr{)}^{2}\ln\Bigl{(}\frac{H^{2}}{s^{2}}\Bigr{)}\Biggr{\\}},$ (4) where the function $b(z)$ (whose $z$ and $z^{2}$ terms depend on renormalization conventions) is, $\displaystyle b(z)=\Bigl{(}-1+2\gamma\Bigr{)}z+\Bigl{(}-\frac{3}{2}+\gamma\Bigr{)}z^{2}$ (5) $\displaystyle\hskip 56.9055pt+\int_{0}^{z}\\!\\!\\!\\!dx\,(1\\!+\\!x)\Biggl{[}\psi\Bigl{(}\frac{3}{2}\\!+\\!\frac{1}{2}\sqrt{1\\!-\\!8x}\,\Bigr{)}+\psi\Bigl{(}\frac{3}{2}\\!-\\!\frac{1}{2}\sqrt{1\\!-\\!8x}\,\Bigr{)}\Biggr{]}.\qquad$ Cosmological Coleman-Weinberg potentials are problematic because they make large corrections which cannot be completely subtracted using allowed local counterterms [5]. The classical evolution of inflation is subject to unacceptable modifications when partial subtractions are restricted to just functions of the inflaton [8], or functions of the inflaton and the Ricci scalar [7]. No other local subtractions are permitted [9] but it has been suggested that an acceptably small distortion of classical inflation might result from cancellations between the effective potentials induced by fermions and by bosons [10]. The purpose of this paper is to facilitate the study of this scheme by developing an accurate approximation for extending the de Sitter results (4-5) to a general cosmological geometry (3). As before on flat space [2], and on de Sitter background [6], we define the derivative of the one loop effective potential through the equation, $\Delta V^{\prime}(\varphi\varphi^{*})=\delta\xi R+\frac{1}{2}\delta\lambda\varphi\varphi^{*}+q^{2}g^{\mu\nu}i\Bigl{[}\mbox{}_{\mu}\Delta_{\nu}\Bigr{]}(x;x)\;.$ (6) Here $i[\mbox{}_{\mu}\Delta_{\nu}](x;x^{\prime})$ is the propagator of a vector gauge field, in Lorentz gauge, which acquires its mass through the Higgs mechanism rather than being a fundamental Proca field [11], $\Bigl{[}\Box_{\mu}^{~{}\nu}-R_{\mu}^{~{}\nu}-M^{2}\delta_{\mu}^{~{}\nu}\Bigr{]}i\Bigl{[}\mbox{}_{\nu}\Delta_{\rho}\Bigr{]}(x;x^{\prime})=\frac{g_{\mu\rho}\,i\delta^{D}(x\\!-\\!x^{\prime})}{\sqrt{-g}}+\partial_{\mu}\partial^{\prime}_{\rho}i\Delta_{t}(x;x^{\prime})\;.$ (7) Here $\Box_{\mu}^{~{}\nu}$ is the covariant vector d’Alembertian, $M^{2}\equiv 2q^{2}\varphi\varphi^{*}$ is the photon mass-squared, which is assumed to be constant (in spite of the background evolution) as per the definition of “effective potential”, and $i\Delta_{t}(x;x^{\prime})$ is the propagator of a massless, minimally coupled (MMC) scalar. We regulate the ultraviolet by working in $D$ spacetime dimensions.
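The function $b(z)$ of (5) is straightforward to evaluate numerically; for $z>1/8$ the square roots turn imaginary and the two digamma terms combine into twice the real part of either one, so the integrand remains real. The following short Python sketch is our own illustration (the use of the mpmath library is our tooling choice, not the authors'):

```python
# Numerical sketch (ours) evaluating b(z) of eq. (5) with mpmath.
from mpmath import mp, mpf, sqrt, digamma, euler, quad, re

mp.dps = 20  # working precision in digits

def integrand(x):
    s = sqrt(1 - 8 * x)  # becomes imaginary for x > 1/8; digamma accepts complex
    return (1 + x) * re(digamma(mpf(3) / 2 + s / 2) + digamma(mpf(3) / 2 - s / 2))

def b(z):
    z = mpf(z)
    # split the integration at x = 1/8, where sqrt(1 - 8x) has a kink;
    # for z < 1/8 the reversed subinterval contributes with the correct sign
    integral = quad(integrand, [0, mpf(1) / 8, z])
    return (-1 + 2 * euler) * z + (mpf(-3) / 2 + euler) * z**2 + integral

print(b(0.05), b(1.0), b(10.0))
```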
In section 2 we express the photon propagator as an exact spatial Fourier mode sum involving massive temporal and spatially transverse vectors, along with gradients of the MMC scalar. Section 3 begins by converting the various mode equations to a dimensionless form; these are then approximated. Each approximation is checked against explicit numerical evolution, both for the simple quadratic potential, which is excluded by the upper bound on the tensor-to-scalar ratio [12], and for a plateau potential [13] that is in good agreement with all data. In section 4 our approximations are applied to relation (6) to compute the one loop effective potential. This consists of a local part which depends on the instantaneous geometry and a numerically smaller nonlocal part which depends on the past geometry. Exact expressions are obtained, as well as expansions in the large field and small field regimes. Our conclusions are given in section 5.

## 2 Photon Mode Sum

The purpose of this section is to express the Lorentz gauge propagator for a massive photon as a spatial Fourier mode sum. We begin by expressing the right hand side of the propagator equation (7) as a mode sum. Then the various transverse vector modes are introduced. Next these modes are combined so as to enforce the propagator equation. The section closes by checking the de Sitter and flat space correspondence limits.

### 2.1 Lessons from the Propagator Equation

If we exploit Lorentz gauge, the $\mu=0$ component of (7) reads, $\displaystyle-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!2)\partial_{0}aH+a^{2}M^{2}\Bigr{]}i\Bigl{[}\mbox{}_{0}\Delta_{\rho}\Bigr{]}(x;x^{\prime})$ (8) $\displaystyle\hskip 156.49014pt=-\frac{\delta^{0}_{~{}\rho}i\delta^{D}(x\\!-\\!x^{\prime})}{a^{D-2}}+\partial_{0}\partial^{\prime}_{\rho}i\Delta_{t}(x;x^{\prime})\;,\qquad$ where $\partial^{2}\equiv\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}$ is the flat space d’Alembertian. The $\mu=m$ component of equation (7) reads, $\displaystyle-\frac{1}{a^{2}}\Biggl{\\{}\Bigl{[}-\partial^{2}+(D\\!-\\!4)aH\partial_{0}+a^{2}M^{2}\Bigr{]}i\Bigl{[}\mbox{}_{m}\Delta_{\rho}\Bigr{]}(x;x^{\prime})+2aH\partial_{m}i\Bigl{[}\mbox{}_{0}\Delta_{\rho}\Bigr{]}(x;x^{\prime})\Biggr{\\}}$ (9) $\displaystyle\hskip 156.49014pt=\frac{\delta_{m\rho}i\delta^{D}(x\\!-\\!x^{\prime})}{a^{D-2}}+\partial_{m}\partial^{\prime}_{\rho}i\Delta_{t}(x;x^{\prime})\;.\qquad$ We begin by writing the right hand sides of expressions (8) and (9) as Fourier mode sums.
The MMC scalar propagator $i\Delta_{t}(x;x^{\prime})$ can be expressed as a Fourier mode sum over functions $t(\eta,k)$ whose wave equation and Wronskian are, $\Bigl{[}\partial_{0}^{2}+(D\\!-\\!2)aH\partial_{0}+k^{2}\Bigr{]}t(\eta,k)=0\qquad,\qquad t\\!\cdot\\!\partial_{0}t^{*}-\partial_{0}t\\!\cdot\\!t^{*}=\frac{i}{a^{D-2}}\;.$ (10) Although no closed form solution exists to the $t(\eta,k)$ wave equation for a general scale factor, relations (10) do define a unique solution when combined with the early time asymptotic form, $k\gg aH\qquad\Longrightarrow\qquad t(\eta,k)\longrightarrow\frac{e^{-ik\eta}}{\sqrt{2ka^{D-2}}}\;.$ (11) Up to infrared corrections [14], which are irrelevant owing to the derivatives in expressions (7) and (8), the Fourier mode sum for $i\Delta_{t}(x;x^{\prime})$ is, $\displaystyle i\Delta_{t}(x;x^{\prime})=\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\Biggl{\\{}\theta(\Delta\eta)\,t(\eta,k)t^{*}(\eta^{\prime},k)e^{i\vec{k}\cdot\Delta\vec{x}}$ (12) $\displaystyle\hskip 170.71652pt+\theta(-\Delta\eta)\,t^{*}(\eta,k)t(\eta^{\prime},k)e^{-i\vec{k}\cdot\Delta\vec{x}}\Biggr{\\}},\qquad$ where $\Delta\eta\equiv\eta-\eta^{\prime}$ and $\Delta\vec{x}\equiv\vec{x}-\vec{x}^{\prime}$. Acting $\partial_{0}\partial_{\rho}^{\prime}$ on (12) produces a term proportional to $\delta^{0}_{~{}\rho}\delta(\Delta\eta)$, which the Wronskian (10) and the change of variable $\vec{k}\rightarrow-\vec{k}$ allow us to recognize as a $D$-dimensional delta function, $\displaystyle\partial_{0}\partial_{\rho}^{\prime}i\Delta_{t}(x;x^{\prime})=\\!\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\Biggl{\\{}\delta^{0}_{~{}\rho}\delta(\Delta\eta)\Bigl{[}t\\!\cdot\\!\partial_{0}t^{*}\\!-\\!\partial_{0}t\\!\cdot\\!t^{*}\Bigr{]}e^{i\vec{k}\cdot\Delta\vec{x}}+\theta(\Delta\eta)\partial_{0}\partial_{\rho}^{\prime}$ (14) $\displaystyle\hskip 19.91684pt\times\Bigl{[}t(\eta,k)t^{*}(\eta^{\prime},k)e^{i\vec{k}\cdot\Delta\vec{x}}\Bigr{]}+\theta(-\Delta\eta)\partial_{0}\partial_{\rho}^{\prime}\Bigl{[}t^{*}(\eta,k)t(\eta^{\prime},k)e^{-i\vec{k}\cdot\Delta\vec{x}}\Bigr{]}\Biggr{\\}},\qquad$ $\displaystyle=\frac{\delta^{0}_{~{}\rho}i\delta^{D}(x\\!-\\!x^{\prime})}{a^{D-2}}+\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\Biggl{\\{}\theta(\Delta\eta)\,T_{0}(x,\vec{k})T^{*}_{\rho}(x^{\prime},\vec{k})$ $\displaystyle\hskip 199.16928pt+\theta(-\Delta\eta)\,T^{*}_{0}(x,\vec{k})T_{\rho}(x^{\prime},\vec{k})\Biggr{\\}}.\qquad$ Here we define $T_{\mu}(x,\vec{k})\equiv\partial_{\mu}[t(\eta,k)e^{i\vec{k}\cdot\vec{x}}\,]$.
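Relations (10) and (11) pin down $t(\eta,k)$ even when no closed form exists. As a self-contained numerical sketch (ours, not the paper's code), one can integrate (10) on a de Sitter background $a(\eta)=-1/(H\eta)$ in $D=4$, starting from the early-time form (11), and compare the resulting amplitude with the exact closed form recalled in section 2.4, which in $D=4$ reduces to $t=H(1+ik\eta)e^{-ik\eta}/\sqrt{2k^{3}}$ up to a constant phase:

```python
# Numerical sketch (ours): integrate eq. (10) for D = 4 on de Sitter,
# a(eta) = -1/(H eta), seeded with the asymptotic form (11).
import numpy as np
from scipy.integrate import solve_ivp

H, k = 1.0, 50.0
eta0, eta1 = -20.0, -0.05          # conformal times (eta < 0 during inflation)
aH = lambda eta: -1.0 / eta        # on de Sitter, a*H = -1/eta

def rhs(eta, y):
    tr, ti, dtr, dti = y           # Re t, Im t and their eta-derivatives
    return [dtr, dti,
            -2.0 * aH(eta) * dtr - k**2 * tr,   # eq. (10) with D = 4
            -2.0 * aH(eta) * dti - k**2 * ti]

a0 = -1.0 / (H * eta0)
t0 = np.exp(-1j * k * eta0) / np.sqrt(2 * k * a0**2)      # eq. (11)
dt0 = (-1j * k - aH(eta0)) * t0    # derivative of e^{-ik eta}/(a sqrt(2k))
sol = solve_ivp(rhs, (eta0, eta1), [t0.real, t0.imag, dt0.real, dt0.imag],
                rtol=1e-10, atol=1e-12)

t_num = sol.y[0][-1] + 1j * sol.y[1][-1]
t_exact = H * (1 + 1j * k * eta1) * np.exp(-1j * k * eta1) / np.sqrt(2 * k**3)
print(abs(t_num), abs(t_exact))    # amplitudes agree; phases match up to convention
```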
Substituting (14) in the right hand side of (8) gives, $\displaystyle-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!2)\partial_{0}aH+a^{2}M^{2}\Bigr{]}i\Bigl{[}\mbox{}_{0}\Delta_{\rho}\Bigr{]}(x;x^{\prime})$ (15) $\displaystyle\hskip 5.69046pt=\\!\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\Biggl{\\{}\theta(\Delta\eta)\,T_{0}(x,\vec{k})T^{*}_{\rho}(x^{\prime},\vec{k})+\theta(-\Delta\eta)\,T^{*}_{0}(x,\vec{k})T_{\rho}(x^{\prime},\vec{k})\Biggr{\\}}.\qquad$ The corresponding expression for (9) is, $\displaystyle-\frac{1}{a^{2}}\Biggl{\\{}\Bigl{[}-\partial^{2}+(D\\!-\\!4)aH\partial_{0}+a^{2}M^{2}\Bigr{]}i\Bigl{[}\mbox{}_{m}\Delta_{\rho}\Bigr{]}(x;x^{\prime})+2aH\partial_{m}i\Bigl{[}\mbox{}_{0}\Delta_{\rho}\Bigr{]}(x;x^{\prime})\Biggr{\\}}$ (16) $\displaystyle\hskip 14.22636pt=\\!\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\Biggl{\\{}\frac{\delta_{m\rho}i\delta(\Delta\eta)e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D-2}}+\theta(\Delta\eta)\,T_{m}(x,\vec{k})T^{*}_{\rho}(x^{\prime},\vec{k})$ $\displaystyle\hskip 196.32384pt+\theta(-\Delta\eta)\,T^{*}_{m}(x,\vec{k})T_{\rho}(x^{\prime},\vec{k})\Biggr{\\}}.\qquad$ The right hand sides of (15) and (16) are the Fourier mode sums that will guide us in constructing the photon propagator.

### 2.2 Transverse Vector Mode Functions

In the cosmological geometry (3) a transverse (Lorentz gauge) vector field $F_{\mu}(x)$ obeys, $0=D^{\mu}F_{\mu}(x)=\frac{1}{a^{2}}\Bigl{[}-\Bigl{(}\partial_{0}\\!+\\!(D\\!-\\!2)aH\Bigr{)}F_{0}+\partial_{i}F_{i}\Bigr{]}\equiv\frac{1}{a^{2}}\Bigl{[}-\mathcal{D}F_{0}+\partial_{i}F_{i}\Bigr{]}\;.$ (17) We seek to express the photon propagator as a Fourier mode sum over a linear combination of transverse vector mode functions. Expressions (15-16) imply that one of these must be the gradient of an MMC scalar plane wave, $T_{\mu}(x,\vec{k})\equiv\partial_{\mu}\Bigl{[}t(\eta,k)e^{i\vec{k}\cdot\vec{x}}\Bigr{]}\;.$ (18) Its transversality follows from the MMC mode equation (10), $-\mathcal{D}T_{0}+\partial_{i}T_{i}=-\Bigl{[}\partial_{0}^{2}\\!+\\!(D\\!-\\!2)aH\partial_{0}\\!+\\!k^{2}\Bigr{]}t(\eta,k)e^{i\vec{k}\cdot\vec{x}}=0\;.$ (19) In $D$ spacetime dimensions there are $D-2$ purely spatial and transverse massive vector modes of the form, $V_{\mu}(x,\vec{k},\lambda,M)\equiv\epsilon_{\mu}(\vec{k},\lambda)\\!\times\\!v(\eta,k)e^{i\vec{k}\cdot\vec{x}}\qquad,\qquad\epsilon_{0}=0=k_{i}\epsilon_{i}\;.$ (20) The polarization vectors $\epsilon_{\mu}(\vec{k},\lambda)$ are the same as those of flat space, and their polarization sum is, $\sum_{\lambda}\epsilon_{\mu}(\vec{k},\lambda)\epsilon^{*}_{\rho}(\vec{k},\lambda)=\left(\matrix{0&0\cr 0&\delta_{mr}-\widehat{k}_{m}\widehat{k}_{r}}\right)\equiv\overline{\Pi}_{\mu\rho}(\vec{k})\;.$ (21) The wave equation and Wronskian of $v(\eta,k)$ are, $\Bigl{[}\partial_{0}^{2}+(D\\!-\\!4)aH\partial_{0}+k^{2}+a^{2}M^{2}\Bigr{]}v(\eta,k)=0\quad,\quad v\cdot\partial_{0}v^{*}-\partial_{0}v\cdot v^{*}=\frac{i}{a^{D-4}}\;.$ (22) Relations (22) define a unique solution when coupled with the form for asymptotically early times, $k\gg\Bigl{\\{}aH,aM\Bigr{\\}}\qquad\Longrightarrow\qquad v(\eta,k)\longrightarrow\frac{ae^{-ik\eta}}{\sqrt{2ka^{D-2}}}\;.$ (23) The spatially transverse vector modes $V_{\mu}(x,\vec{k},\lambda,M)$ represent dynamical photons. There is also a single temporal-longitudinal mode which represents the constrained part of the electromagnetic field.
It is a combination of $T_{\mu}(x,\vec{k})$ with a transverse vector formed from the $\mu=0$ component $u(\eta,k,M)$ of a massive vector, $\Bigl{[}\partial_{0}^{2}+(D\\!-\\!2)\partial_{0}aH+k^{2}+a^{2}M^{2}\Bigr{]}u(\eta,k,M)=0\;\;,\;\;u\cdot\partial_{0}u^{*}-\partial_{0}u\cdot u^{*}=\frac{i}{a^{D-2}}\;.$ (24) Relations (24) define a unique solution when combined with the early time asymptotic form, $k\gg\Bigl{\\{}aH,aM\Bigr{\\}}\qquad\Longrightarrow\qquad u(\eta,k)\longrightarrow\frac{e^{-ik\eta}}{\sqrt{2ka^{D-2}}}\;.$ (25) One converts $u(\eta,k,M)$ to a transverse vector $U_{\mu}(x,\vec{k},M)$, $U_{\mu}(x,\vec{k},M)\equiv\overline{\partial}_{\mu}\Bigl{[}u(\eta,k)e^{i\vec{k}\cdot\vec{x}}\Bigr{]}\;,$ (26) where the differential operator $\overline{\partial}_{\mu}$ has the $3+1$ decomposition, $\overline{\partial}_{0}\equiv\sqrt{-\nabla^{2}}\longrightarrow k\qquad,\qquad\overline{\partial}_{i}\equiv-\frac{\partial_{i}\mathcal{D}}{\sqrt{-\nabla^{2}}}\longrightarrow-i\widehat{k}_{i}\mathcal{D}\;.$ (27)

### 2.3 Enforcing the Propagator Equation

We have seen that the photon propagator $i[\mbox{}_{\mu}\Delta_{\rho}](x;x^{\prime})$ is the spatial Fourier integral of contributions from the three transverse vector modes, each having the general form of constants times, $\mathcal{F}_{\mu\rho}(x;x^{\prime})=\theta(\Delta\eta)F_{\mu}(x)F^{*}_{\rho}(x^{\prime})+\theta(-\Delta\eta)F^{*}_{\mu}(x)F_{\rho}(x^{\prime})\;\;,\;\;F_{\mu}\in\Bigl{\\{}T_{\mu},U_{\mu},V_{\mu}\Bigr{\\}}\;.$ (28) We might anticipate that the spatially transverse modes contribute with unit amplitude but the MMC scalar and temporal photon modes must be multiplied by the square of an inverse mass to even have the correct dimensions. The multiplicative factors are chosen to enforce the propagator equation (7). To check the temporal components (15) of the propagator equation we must compute, $-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!2)\partial_{0}aH+a^{2}M^{2}\Bigr{]}\mathcal{F}_{0\rho}(x;x^{\prime})\;.$ (29) To check the spatial components (16) we need, $-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!4)aH\partial_{0}+a^{2}M^{2}\Bigr{]}\mathcal{F}_{m\rho}(x;x^{\prime})-\frac{1}{a^{2}}\\!\times\\!2aH\partial_{m}\mathcal{F}_{0\rho}(x;x^{\prime})\;.$ (30) The factors of $\partial_{0}$ in the differential operators of (29-30) can act on the theta functions or on the mode functions. When all derivatives act on the MMC contribution, the result is $-M^{2}$ times the original mode function, $\displaystyle-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!2)\partial_{0}aH+a^{2}M^{2}\Bigr{]}T_{0}(x)=-M^{2}T_{0}(x)\;,$ (31) $\displaystyle-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!4)aH\partial_{0}+a^{2}M^{2}\Bigr{]}T_{m}(x)$ (32) $\displaystyle\hskip 167.87108pt-\frac{1}{a^{2}}\\!\times\\!2aH\partial_{m}T_{0}(x)=-M^{2}T_{m}(x)\;.\qquad$ This suggests that the MMC contribution enters the mode sum with a multiplicative factor of $-M^{-2}$.
No further information comes from acting the full differential operators on the other modes, $\displaystyle-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!2)\partial_{0}aH+a^{2}M^{2}\Bigr{]}U_{0}(x)=0\;,$ (33) $\displaystyle-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!4)aH\partial_{0}+a^{2}M^{2}\Bigr{]}U_{m}(x)-\frac{1}{a^{2}}\\!\times\\!2aH\partial_{m}U_{0}(x)=0\;,$ (34) $\displaystyle-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!2)\partial_{0}aH+a^{2}M^{2}\Bigr{]}V_{0}(x)=0\;,$ (35) $\displaystyle-\frac{1}{a^{2}}\Bigl{[}-\partial^{2}+(D\\!-\\!4)aH\partial_{0}+a^{2}M^{2}\Bigr{]}V_{m}(x)-\frac{1}{a^{2}}\\!\times\\!2aH\partial_{m}V_{0}(x)=0\;.$ (36) It remains to check what happens when one or two factors of $\partial_{0}$ from the differential operators in (29-30) act on the factors of $\theta(\pm\Delta\eta)$. A single conformal time derivative gives, $\partial_{0}\mathcal{F}_{\mu\rho}(x;x^{\prime})=\theta(\Delta\eta)\partial_{0}F_{\mu}F^{*}_{\rho}+\theta(-\Delta\eta)\partial_{0}F^{*}_{\mu}F_{\rho}+\delta(\Delta\eta)\Bigl{[}F_{\mu}F^{*}_{\rho}-F^{*}_{\mu}F_{\rho}\Bigr{]}\;.$ (37) If we change the Fourier integration variable $\vec{k}$ to $-\vec{k}$ in the second of the delta function terms, the result for the MMC modes is, $\displaystyle T_{\mu}T^{*}_{\rho}-T^{*}_{\mu}T_{\rho}\Bigl{|}_{\vec{k}\rightarrow-\vec{k}}$ $\displaystyle\\!\\!\\!\\!\\!=\\!\\!\\!\\!\\!$ $\displaystyle\left(\matrix{[\partial_{0}t\partial_{0}t^{*}-\partial_{0}t^{*}\,\partial_{0}t]&-ik_{r}[\partial_{0}t\,t^{*}-\partial_{0}t^{*}\,t]\cr ik_{m}[t\,\partial_{0}t^{*}-t^{*}\partial_{0}t]&k_{m}k_{r}[t\,t^{*}-t^{*}t]}\right)e^{i\vec{k}\cdot\Delta\vec{x}}.\qquad$ (38) $\displaystyle\\!\\!\\!\\!\\!=\\!\\!\\!\\!\\!$ $\displaystyle\left(\matrix{0&-k_{r}\cr- k_{m}&0}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D-2}}.\qquad$ (39) The temporal photon modes make exactly the same contribution, $\displaystyle U_{\mu}U^{*}_{\rho}-U^{*}_{\mu}U_{\rho}\Bigl{|}_{\vec{k}\rightarrow-\vec{k}}$ (41) $\displaystyle\hskip 28.45274pt=\left(\matrix{k^{2}[u\,u^{*}-u^{*}u]&ik_{r}[u\,\mathcal{D}u^{*}-u^{*}\mathcal{D}u]\cr- ik_{m}[\mathcal{D}u\,u^{*}-\mathcal{D}u^{*}u]&\widehat{k}_{m}\widehat{k}_{r}[\mathcal{D}u\,\mathcal{D}u^{*}-\mathcal{D}u^{*}\,\mathcal{D}u]}\right)e^{i\vec{k}\cdot\Delta\vec{x}}.\qquad$ $\displaystyle\hskip 28.45274pt=\left(\matrix{0&-k_{r}\cr- k_{m}&0}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D-2}}.\qquad$ Canceling (41) against (39) — whose multiplicative coefficient is $-M^{-2}$ — fixes the multiplicative coefficient for the temporal photons as $+M^{-2}$. The delta function term in (37) vanishes for the spatially transverse modes. We turn now to the second derivatives, which come from $-\partial^{2}=\partial_{0}^{2}-\nabla^{2}$, $\displaystyle\partial_{0}^{2}\mathcal{F}_{\mu\rho}(x;x^{\prime})=\theta(\Delta\eta)\,\partial_{0}^{2}F_{\mu}(x)\,F^{*}_{\rho}(x^{\prime})+\theta(-\Delta\eta)\,\partial_{0}^{2}F^{*}_{\mu}(x)\,F_{\rho}(x^{\prime})$ (42) $\displaystyle\hskip 42.67912pt+\delta(\Delta\eta)\Bigl{[}\partial_{0}F_{\mu}F^{*}_{\rho}-\partial_{0}F^{*}_{\mu}F_{\rho}\Bigr{]}+\partial_{0}\Biggl{\\{}\delta(\Delta\eta)\Bigl{[}F_{\mu}F^{*}_{\rho}-F^{*}_{\mu}F_{\rho}\Bigr{]}\Biggr{\\}}.\qquad$ We have already arranged for the cancellation of the final term in (42).
For the new delta function term the MMC modes give, $\displaystyle\partial_{0}T_{\mu}T^{*}_{\rho}-\partial_{0}T^{*}_{\mu}T_{\rho}\Bigr{|}_{\vec{k}\rightarrow-\vec{k}}$ (44) $\displaystyle\hskip 51.21504pt=\left(\matrix{[\partial_{0}^{2}t\,\partial_{0}t^{*}-\partial_{0}^{2}t^{*}\partial_{0}t]&-ik_{r}[\partial_{0}^{2}t\,t^{*}-\partial_{0}^{2}t^{*}t]\cr ik_{m}[\partial_{0}t\,\partial_{0}t^{*}-\partial_{0}t^{*}\partial_{0}t]&k_{m}k_{r}[\partial_{0}t\,t^{*}-\partial_{0}t^{*}t]}\right)e^{i\vec{k}\cdot\Delta\vec{x}}\;,\qquad$ $\displaystyle\hskip 51.21504pt=-i\left(\matrix{k^{2}&ik_{r}(D\\!-\\!2)aH\cr 0&k_{m}k_{r}}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D-2}}\;,\qquad$ where we have used $\partial_{0}^{2}t=-[(D\\!-\\!2)aH\partial_{0}+k^{2}]t$. The corresponding contribution for the temporal modes is, $\displaystyle\partial_{0}U_{\mu}U^{*}_{\rho}-\partial_{0}U^{*}_{\mu}U_{\rho}\Bigr{|}_{\vec{k}\rightarrow-\vec{k}}$ (46) $\displaystyle=\left(\matrix{k^{2}[\partial_{0}u\,u^{*}-\partial_{0}u^{*}u]\\!&\\!ik_{r}[\partial_{0}u\,\mathcal{D}u^{*}-\partial_{0}u^{*}\mathcal{D}u]\cr- ik_{m}[\partial_{0}\mathcal{D}u\,u^{*}-\partial_{0}\mathcal{D}u^{*}u]\\!&\\!\widehat{k}_{r}\widehat{k}_{m}[\partial_{0}\mathcal{D}u\,\mathcal{D}u^{*}-\partial_{0}\mathcal{D}u^{*}\mathcal{D}u]}\right)e^{i\vec{k}\cdot\Delta\vec{x}},\qquad$ $\displaystyle=-i\left(\matrix{k^{2}&ik_{r}(D\\!-\\!2)aH\cr 0&\widehat{k}_{m}\widehat{k}_{r}(k^{2}+a^{2}M^{2})}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D-2}}\;,\qquad$ where we have used $\partial_{0}\mathcal{D}u=-(k^{2}+a^{2}M^{2})u$. And each of the spatially transverse modes gives, $\displaystyle\partial_{0}V_{\mu}V^{*}_{\rho}-\partial_{0}V^{*}_{\mu}V_{\rho}\Bigr{|}_{\vec{k}\rightarrow-\vec{k}}$ $\displaystyle=$ $\displaystyle\left(\matrix{0&0\cr 0&\epsilon_{m}\epsilon_{r}^{*}[\partial_{0}v\,v^{*}-\partial_{0}v^{*}v]}\right)e^{i\vec{k}\cdot\Delta\vec{x}}\;,\qquad$ (47) $\displaystyle=$ $\displaystyle-i\left(\matrix{0&0\cr 0&\epsilon_{m}\epsilon_{r}^{*}}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D-4}}\;.\qquad$ (48) The second conformal time derivatives in both expression (29) and the corresponding spatial relation (30) come in the form $-\frac{1}{a^{2}}\times\partial_{0}^{2}$.
Including the multiplicative factors, we see that the temporal delta functions which are induced consist of $\frac{1}{a^{2}M^{2}}$ times (44) minus the same factor times (46), plus the polarization sum (21) over (48), $\displaystyle\frac{i}{M^{2}}\left(\matrix{k^{2}&ik_{r}(D\\!-\\!2)aH\cr 0&k_{m}k_{r}}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D}}-\frac{i}{M^{2}}\left(\matrix{k^{2}&ik_{r}(D\\!-\\!2)aH\cr 0&\widehat{k}_{m}\widehat{k}_{r}(k^{2}+a^{2}M^{2})}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D}}$ (49) $\displaystyle\hskip 71.13188pt-i\left(\matrix{0&0\cr 0&\delta_{mr}-\widehat{k}_{m}\widehat{k}_{r}}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D-2}}=-i\left(\matrix{0&0\cr 0&\delta_{mr}}\right)\frac{e^{i\vec{k}\cdot\Delta\vec{x}}}{a^{D-2}}\;.\qquad$ With $-\frac{1}{M^{2}}$ times expressions (31-32) we see that the propagator equations (15-16) are obeyed by the Fourier mode sum, $\displaystyle i\Bigl{[}\mbox{}_{\mu}\Delta_{\rho}\Bigr{]}(x;x^{\prime})=\\!\\!\int\\!\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\Biggl{\\{}\\!\theta(\Delta\eta)\\!\Biggl{[}\frac{U_{\mu}(x,\vec{k},M)U^{*}_{\rho}(x^{\prime},\vec{k},M)\\!-\\!T_{\mu}(x,\vec{k})T^{*}_{\rho}(x^{\prime},\vec{k})}{M^{2}}$ (50) $\displaystyle\hskip 28.45274pt+\overline{\Pi}_{\mu\rho}(\vec{k})v(\eta,k)v^{*}(\eta^{\prime},k)e^{i\vec{k}\cdot\Delta\vec{x}}\Biggr{]}+\theta(-\Delta\eta)\Biggl{[}\frac{U^{*}_{\mu}(x,\vec{k},M)U_{\rho}(x^{\prime},\vec{k},M)}{M^{2}}$ $\displaystyle\hskip 71.13188pt-\frac{T^{*}_{\mu}(x,\vec{k})T_{\rho}(x^{\prime},\vec{k})}{M^{2}}+\overline{\Pi}_{\mu\rho}(\vec{k})v^{*}(\eta,k)v(\eta^{\prime},k)e^{-i\vec{k}\cdot\Delta\vec{x}}\Biggr{]}\Biggr{\\}}.\qquad$ Note that the $U_{\mu}(x,\vec{k},M)$ and $T_{\mu}(x,\vec{k})$ modes combine to form a vector integrated propagator analogous to the scalar ones introduced in [15]. The photon propagator can also be expressed as the sum of three bi-vector differential operators acting on a scalar propagator, $\displaystyle i\Bigl{[}\mbox{}_{\mu}\Delta_{\rho}\Bigr{]}(x;x^{\prime})=\frac{1}{M^{2}}\Bigl{[}-\eta_{\mu\rho}+\overline{\Pi}_{\mu\rho}\Bigr{]}\frac{i\delta^{D}(x\\!-\\!x^{\prime})}{a^{D-2}}$ (51) $\displaystyle\hskip 56.9055pt+\frac{1}{M^{2}}\Bigl{[}\overline{\partial}_{\mu}\overline{\partial}_{\rho}^{\prime}i\Delta_{u}(x;x^{\prime})-\partial_{\mu}\partial_{\rho}^{\prime}i\Delta_{t}(x;x^{\prime})\Bigr{]}+\overline{\Pi}_{\mu\rho}i\Delta_{v}(x;x^{\prime})\;.\qquad$ The Fourier mode sum for the MMC scalar propagator $i\Delta_{t}(x;x^{\prime})$ was given in expression (12). The mode sum for the temporal propagator $i\Delta_{u}(x;x^{\prime})$ comes from replacing $t(\eta,k)$ with $u(\eta,k)$ in (12), and the mode sum for the transverse spatial propagator $i\Delta_{v}(x;x^{\prime})$ is obtained by replacing $t(\eta,k)$ with $v(\eta,k)$. 
The resulting lowest order (free) field strength correlators are, $\displaystyle\Bigl{\langle}\Omega\Bigl{|}T^{*}\Bigl{[}F_{0j}(x)F_{0\ell}(x^{\prime})\Bigr{]}\Bigr{|}\Omega\Bigr{\rangle}=\frac{\partial_{j}\partial_{\ell}}{\nabla^{2}}\frac{i\delta^{D}(x\\!-\\!x^{\prime})}{a^{D-4}}$ (52) $\displaystyle\hskip 110.96556pt+a^{2}{a^{\prime}}^{2}M^{2}\frac{\partial_{j}\partial_{\ell}}{\nabla^{2}}i\Delta_{u}(x;x^{\prime})+\overline{\Pi}_{j\ell}\partial_{0}\partial_{0}^{\prime}i\Delta_{v}(x;x^{\prime})\;,\qquad$ $\displaystyle\Bigl{\langle}\Omega\Bigl{|}T^{*}\Bigl{[}F_{0j}(x)F_{k\ell}(x^{\prime})\Bigr{]}\Bigr{|}\Omega\Bigr{\rangle}=\Bigl{[}\delta_{jk}\partial_{\ell}\\!-\\!\delta_{j\ell}\partial_{k}\Bigr{]}\partial_{0}i\Delta_{v}(x;x^{\prime})\;,$ (53) $\displaystyle\Bigl{\langle}\Omega\Bigl{|}T^{*}\Bigl{[}F_{ij}(x)F_{k\ell}(x^{\prime})\Bigr{]}\Bigr{|}\Omega\Bigr{\rangle}$ (54) $\displaystyle\hskip 85.35826pt=-\Bigl{[}\delta_{ik}\partial_{j}\partial_{\ell}\\!-\\!\delta_{kj}\partial_{\ell}\partial_{i}\\!+\\!\delta_{j\ell}\partial_{i}\partial_{k}\\!-\\!\delta_{\ell i}\partial_{k}\partial_{j}\Bigr{]}i\Delta_{v}(x;x^{\prime})\;.\qquad$ The $T^{*}$-ordering symbol in these correlators indicates that the derivatives in forming the field strength tensor, $F_{\mu\nu}(x)\equiv\partial_{\mu}A_{\nu}(x)-\partial_{\nu}A_{\mu}(x)$, are taken outside the time-ordering symbol. An important simplification is, $T_{\mu}(x,\vec{k})=-i\lim_{M\rightarrow 0}U_{\mu}(x,\vec{k},M)\;.$ (55) Comparing equations (31) with (33), and (32) with (34), shows that both sides of relation (55) obey the same wave equation for $M=0$. That they are identical follows from $t(\eta,k)$ and $u(\eta,k)$ having the same asymptotic forms (11) and (25). Relation (55) is of great importance because it guarantees that the propagator has no $\frac{1}{M^{2}}$ pole.

### 2.4 The de Sitter Limit

In the limit of $\epsilon=0$ the mode functions have closed form solutions (in the phase factors for $u(\eta,k,M)$ and $v(\eta,k,M)$ one must regard $\nu_{b}$ as a real number, even if $M^{2}>\frac{1}{4}(D-3)^{2}H^{2}$),
$\displaystyle t(\eta,k)$ $\displaystyle\\!\\!\\!\longrightarrow\\!\\!\\!$ $\displaystyle e^{\frac{i\pi}{2}(\nu_{A}+\frac{1}{2})}\sqrt{\frac{\pi}{4Ha^{D-1}}}\\!\times\\!H^{(1)}_{\nu_{A}}\Bigl{(}\frac{k}{Ha}\Bigr{)}\;,\qquad$ (56) $\displaystyle u(\eta,k,M)$ $\displaystyle\\!\\!\\!\longrightarrow\\!\\!\\!$ $\displaystyle e^{\frac{i\pi}{2}(\nu_{b}+\frac{1}{2})}\sqrt{\frac{\pi}{4Ha^{D-1}}}\\!\times\\!H^{(1)}_{\nu_{b}}\Bigl{(}\frac{k}{Ha}\Bigr{)}\;,\qquad$ (57) $\displaystyle v(\eta,k,M)$ $\displaystyle\longrightarrow$ $\displaystyle e^{\frac{i\pi}{2}(\nu_{b}+\frac{1}{2})}\sqrt{\frac{\pi}{4Ha^{D-3}}}\\!\times\\!H^{(1)}_{\nu_{b}}\Bigl{(}\frac{k}{Ha}\Bigr{)}\;,\qquad$ (58) where the indices are, $\nu_{A}\equiv\Bigl{(}\frac{D\\!-\\!1}{2}\Bigr{)}\qquad,\qquad\nu_{b}\equiv\sqrt{\Bigl{(}\frac{D\\!-\\!3}{2}\Bigr{)}^{2}\\!-\\!\frac{M^{2}}{H^{2}}}\;.$ (59) The Fourier mode sums for the three propagators can be mostly expressed in terms of the de Sitter length function $y(x;x^{\prime})$, $y(x;x^{\prime})\equiv aa^{\prime}H^{2}\Bigl{[}\Bigl{\|}\vec{x}\\!-\\!\vec{x}^{\prime}\Bigr{\|}^{2}-\Bigl{(}|\eta\\!-\\!\eta^{\prime}|\\!-\\!i\varepsilon\Bigr{)}^{2}\Bigr{]}\;.$ (60) The de Sitter limit of the temporal photon propagator is a Hypergeometric function, $i\Delta_{u}(x;x^{\prime})\longrightarrow\frac{H^{D-2}}{(4\pi)^{\frac{D}{2}}}\frac{\Gamma(\nu_{A}\\!+\\!\nu_{b})\Gamma(\nu_{A}\\!-\\!\nu_{b})}{\Gamma(\frac{D}{2})}\mbox{}_{2}F_{1}\Bigl{(}\nu_{A}\\!+\\!\nu_{b},\nu_{A}\\!-\\!\nu_{b},\frac{D}{2};1\\!-\\!\frac{y}{4}\Bigr{)}\equiv b(y)\;.$ (61) The de Sitter limit of the spatially transverse photon propagator is closely related, $i\Delta_{v}(x;x^{\prime})\longrightarrow aa^{\prime}b(y)\;.$ (62) However, infrared divergences break de Sitter invariance in the MMC scalar propagator [16, 17, 18]. The result for the noncoincident propagator takes the form [19, 20], $i\Delta_{t}(x;x^{\prime})\longrightarrow A(y)+\frac{H^{D-2}}{(4\pi)^{\frac{D}{2}}}\frac{\Gamma(D\\!-\\!1)}{\Gamma(\frac{D}{2})}\ln(aa^{\prime})\;,$ (63) where we only need derivatives of the function $A(y)$ [21], $\displaystyle A^{\prime}(y)$ $\displaystyle=$ $\displaystyle\frac{1}{2}(2\\!-\\!y)B^{\prime}(y)-\frac{1}{2}(D\\!-\\!2)B(y)\;,$ (64) $\displaystyle B(y)$ $\displaystyle\equiv$ $\displaystyle\frac{\Gamma(D\\!-\\!2)\Gamma(1)}{\Gamma(\frac{D}{2})}\,\mbox{}_{2}F_{1}\Bigl{(}D\\!-\\!2,1,\frac{D}{2};1\\!-\\!\frac{y}{4}\Bigr{)}\;.$ (65) It is useful to note that the functions $B(y)$ and $b(y)$ obey, $\displaystyle 0$ $\displaystyle=$ $\displaystyle(4y\\!-\\!y^{2})B^{\prime\prime}(y)+D(2\\!-\\!y)B^{\prime}(y)-(D\\!-\\!2)B(y)\;,$ (66) $\displaystyle 0$ $\displaystyle=$ $\displaystyle(4y\\!-\\!y^{2})b^{\prime\prime}(y)+D(2\\!-\\!y)b^{\prime}(y)-(D\\!-\\!2)b(y)-\frac{M^{2}}{H^{2}}b(y)\;.$ (67) A direct computation of the photon propagator on de Sitter background gives [11], $\displaystyle i\Bigl{[}\mbox{}_{\mu}\Delta_{\rho}\Bigr{]}(x;x^{\prime})\longrightarrow-\frac{\partial^{2}y}{\partial x^{\mu}\partial{x^{\prime}}^{\rho}}\Bigl{[}(4y\\!-\\!y^{2})\frac{\partial}{\partial y}+(D\\!-\\!1)(2\\!-\\!y)\Bigr{]}\Bigl{[}\frac{b^{\prime}(y)\\!-\\!B^{\prime}(y)}{2M^{2}}\Bigr{]}$ (68) $\displaystyle\hskip 99.58464pt+\frac{\partial y}{\partial x^{\mu}}\frac{\partial y}{\partial{x^{\prime}}^{\rho}}\Bigl{[}(2\\!-\\!y)\frac{\partial}{\partial y}-(D\\!-\\!1)\Bigr{]}\Bigl{[}\frac{b^{\prime}(y)\\!-\\!B^{\prime}(y)}{2M^{2}}\Bigr{]}.\qquad$ To see that the de Sitter limit of our mode sum (51) agrees with (68) we substitute the de Sitter limits (63), (61) and (62) and make some tedious reorganizations.
This is simplest for the MMC scalar contribution, $\displaystyle\frac{\delta^{0}_{~{}\mu}\delta^{0}_{~{}\rho}i\delta^{D}(x\\!-\\!x^{\prime})}{M^{2}a^{D-2}}-\frac{\partial_{\mu}\partial_{\rho}^{\prime}i\Delta_{t}(x;x^{\prime})}{M^{2}}\longrightarrow-\frac{\partial^{2}y}{\partial x^{\mu}\partial{x^{\prime}}^{\rho}}\frac{A^{\prime}}{M^{2}}-\frac{\partial y}{\partial x^{\mu}}\frac{\partial y}{\partial{x^{\prime}}^{\rho}}\frac{A^{\prime\prime}}{M^{2}},$ (71) $\displaystyle=-\frac{\partial^{2}y}{\partial x^{\mu}\partial{x^{\prime}}^{\rho}}\Bigl{[}\frac{(2\\!-\\!y)B^{\prime}\\!-\\!(D\\!-\\!2)B}{2M^{2}}\Bigr{]}-\frac{\partial y}{\partial x^{\mu}}\frac{\partial y}{\partial{x^{\prime}}^{\rho}}\Bigl{[}\frac{(2\\!-\\!y)B^{\prime\prime}\\!-\\!(D\\!-\\!1)B^{\prime}}{2M^{2}}\Bigr{]},\qquad$ $\displaystyle=\frac{\partial^{2}y}{\partial x^{\mu}\partial{x^{\prime}}^{\rho}}\Bigl{[}\frac{(4y\\!-\\!y^{2})B^{\prime\prime}\\!+\\!(D\\!-\\!1)(2\\!-\\!y)B^{\prime}}{2M^{2}}\Bigr{]}$ $\displaystyle\hskip 167.87108pt-\frac{\partial y}{\partial x^{\mu}}\frac{\partial y}{\partial{x^{\prime}}^{\rho}}\Bigl{[}\frac{(2\\!-\\!y)B^{\prime\prime}\\!-\\!(D\\!-\\!1)B^{\prime}}{2M^{2}}\Bigr{]}.\qquad$ Each tensor component of the temporal photon contribution requires a separate treatment. The case of $\mu=0=\rho$ gives, $\displaystyle\overline{\partial}_{0}\overline{\partial}_{0}^{\prime}\,\frac{i\Delta_{u}(x;x^{\prime})}{M^{2}}\longrightarrow-\nabla^{2}\frac{b(y)}{M^{2}}=-\nabla^{2}y\,\frac{b^{\prime}}{M^{2}}-\partial_{i}y\,\partial_{i}y\,\frac{b^{\prime\prime}}{M^{2}}$ (75) $\displaystyle=\frac{aa^{\prime}H^{2}}{M^{2}}\Biggl{\\{}-2(D\\!-\\!1)b^{\prime}+4\Bigl{[}2\\!-\\!y\\!-\\!\frac{a}{a^{\prime}}\\!-\\!\frac{a^{\prime}}{a}\Bigr{]}b^{\prime\prime}\Biggr{\\}},$ $\displaystyle=\frac{aa^{\prime}H^{2}}{2M^{2}}\Biggl{\\{}\Bigl{[}-(2\\!-\\!y)+2\Bigl{(}\frac{a}{a^{\prime}}\\!+\\!\frac{a^{\prime}}{a}\Bigr{)}\Bigr{]}\Bigl{[}-(4y\\!-\\!y^{2})b^{\prime\prime}-(D\\!-\\!1)(2\\!-\\!y)b^{\prime}\Bigr{]}$ $\displaystyle\hskip 42.67912pt+\Bigl{[}8\\!-\\!4y\\!+\\!y^{2}-2(2\\!-\\!y)\Bigl{(}\frac{a}{a^{\prime}}\\!+\\!\frac{a^{\prime}}{a}\Bigr{)}\Bigr{]}\Bigl{[}(2\\!-\\!y)b^{\prime\prime}-(D\\!-\\!1)b^{\prime}\Bigr{]}\Biggr{\\}},\qquad$ $\displaystyle=-\frac{\partial^{2}y}{\partial x^{0}\partial{x^{\prime}}^{0}}\Bigl{[}(4y\\!-\\!y^{2})\frac{\partial}{\partial y}+(D\\!-\\!1)(2\\!-\\!y)\Bigr{]}\frac{b^{\prime}}{2M^{2}}$ $\displaystyle\hskip 142.26378pt+\frac{\partial y}{\partial x^{0}}\frac{\partial y}{\partial{x^{\prime}}^{0}}\Bigl{[}(2\\!-\\!y)\frac{\partial}{\partial y}-(D\\!-\\!1)\Bigr{]}\frac{b^{\prime}}{2M^{2}}.\qquad$ For $\mu=0$ and $\rho=r$ we have, $\displaystyle\overline{\partial}_{0}\overline{\partial}_{r}^{\prime}\,\frac{i\Delta_{u}(x;x^{\prime})}{M^{2}}\longrightarrow\partial_{r}\mathcal{D}^{\prime}\frac{b(y)}{M^{2}}=\partial_{r}\mathcal{D}^{\prime}y\,\frac{b^{\prime}}{M^{2}}+\partial_{r}y\,\partial_{0}^{\prime}y\,\frac{b^{\prime\prime}}{M^{2}}$ (79) $\displaystyle=\frac{a{a^{\prime}}^{2}H^{3}\Delta x^{r}}{M^{2}}\Biggl{\\{}2(D\\!-\\!1)b^{\prime}-2(2\\!-\\!y)b^{\prime\prime}+4\frac{a}{a^{\prime}}b^{\prime\prime}\Biggr{\\}},$ $\displaystyle=\frac{a^{2}a^{\prime}H^{3}\Delta x^{r}}{M^{2}}\Biggl{\\{}\Bigl{[}(4y\\!-\\!y^{2})b^{\prime\prime}+(D\\!-\\!1)(2\\!-\\!y)b^{\prime}\Bigr{]}$ $\displaystyle\hskip 142.26378pt+\Bigl{[}2\\!-\\!y-2\frac{a^{\prime}}{a}\Bigr{]}\Bigl{[}(2\\!-\\!y)b^{\prime\prime}-(D\\!-\\!1)b^{\prime}\Bigr{]}\Biggr{\\}},\qquad$ $\displaystyle=-\frac{\partial^{2}y}{\partial 
x^{0}\partial{x^{\prime}}^{r}}\Bigl{[}(4y\\!-\\!y^{2})\frac{\partial}{\partial y}+(D\\!-\\!1)(2\\!-\\!y)\Bigr{]}\frac{b^{\prime}}{2M^{2}}$ $\displaystyle\hskip 142.26378pt+\frac{\partial y}{\partial x^{0}}\frac{\partial y}{\partial{x^{\prime}}^{r}}\Bigl{[}(2\\!-\\!y)\frac{\partial}{\partial y}-(D\\!-\\!1)\Bigr{]}\frac{b^{\prime}}{2M^{2}}.\qquad$ And the result for $\mu=m$ and $\rho=0$ is, $\displaystyle\overline{\partial}_{m}\overline{\partial}_{0}^{\prime}\,\frac{i\Delta_{u}(x;x^{\prime})}{M^{2}}\longrightarrow-\partial_{m}\mathcal{D}\frac{b(y)}{M^{2}}=-\partial_{m}\mathcal{D}y\,\frac{b^{\prime}}{M^{2}}-\partial_{m}y\,\partial_{0}y\,\frac{b^{\prime\prime}}{M^{2}}$ (83) $\displaystyle=-\frac{a^{2}a^{\prime}H^{3}\Delta x^{m}}{M^{2}}\Biggl{\\{}2(D\\!-\\!1)b^{\prime}-2(2\\!-\\!y)b^{\prime\prime}+4\frac{a^{\prime}}{a}b^{\prime\prime}\Biggr{\\}},$ $\displaystyle=-\frac{a{a^{\prime}}^{2}H^{3}\Delta x^{m}}{M^{2}}\Biggl{\\{}\Bigl{[}(4y\\!-\\!y^{2})b^{\prime\prime}+(D\\!-\\!1)(2\\!-\\!y)b^{\prime}\Bigr{]}$ $\displaystyle\hskip 142.26378pt+\Bigl{[}2\\!-\\!y-2\frac{a}{a^{\prime}}\Bigr{]}\Bigl{[}(2\\!-\\!y)b^{\prime\prime}-(D\\!-\\!1)b^{\prime}\Bigr{]}\Biggr{\\}},\qquad$ $\displaystyle=-\frac{\partial^{2}y}{\partial x^{m}\partial{x^{\prime}}^{0}}\Bigl{[}(4y\\!-\\!y^{2})\frac{\partial}{\partial y}+(D\\!-\\!1)(2\\!-\\!y)\Bigr{]}\frac{b^{\prime}}{2M^{2}}$ $\displaystyle\hskip 142.26378pt+\frac{\partial y}{\partial x^{m}}\frac{\partial y}{\partial{x^{\prime}}^{0}}\Bigl{[}(2\\!-\\!y)\frac{\partial}{\partial y}-(D\\!-\\!1)\Bigr{]}\frac{b^{\prime}}{2M^{2}}.\qquad$ The case of $\mu=m$ and $\rho=r$ requires the most intricate analysis. It begins with the observation, $\frac{\partial_{m}\partial_{r}}{\nabla^{2}}\frac{i\delta^{D}(x\\!-\\!x^{\prime})}{a^{D-2}}+\overline{\partial}_{m}\overline{\partial}_{r}^{\prime}\frac{i\Delta_{u}(x;x^{\prime})}{M^{2}}\longrightarrow\frac{\partial_{m}\partial_{r}}{\nabla^{2}}\,\frac{\mathcal{D}\mathcal{D}^{\prime}b(y)}{M^{2}}\;.$ (84) This component combines with the contribution from spatially transverse photons, $\overline{\Pi}_{mr}i\Delta_{v}(x;x^{\prime})\longrightarrow\Bigl{(}\delta_{mr}-\frac{\partial_{m}\partial_{r}}{\nabla^{2}}\Bigr{)}aa^{\prime}b(y)\;.$ (85) The $\partial_{m}\partial_{r}/\nabla^{2}$ terms from expressions (84) and (85) give, $\displaystyle\mathcal{D}\mathcal{D}^{\prime}b(y)-aa^{\prime}M^{2}b(y)=aa^{\prime}H^{2}\Biggl{\\{}\Bigl{[}8\\!-\\!4y\\!+\\!y^{2}-2(2\\!-\\!y)\Bigl{(}\frac{a}{a^{\prime}}\\!+\\!\frac{a^{\prime}}{a}\Bigr{)}\Bigr{]}b^{\prime\prime}$ (88) $\displaystyle\hskip 8.5359pt+\Bigl{[}-(2D\\!-\\!3)(2\\!-\\!y)+2(D\\!-\\!1)\Bigl{(}\frac{a}{a^{\prime}}\\!+\\!\frac{a^{\prime}}{a}\Bigr{)}\Bigr{]}b^{\prime}+\Bigl{[}(D\\!-\\!2)^{2}-\frac{M^{2}}{H^{2}}\Bigr{]}b\Biggr{\\}},\qquad$ $\displaystyle=aa^{\prime}H^{2}\Biggl{\\{}2(2\\!-\\!y)^{2}b^{\prime\prime}-3(D\\!-\\!1)(2\\!-\\!y)b^{\prime}+(D\\!-\\!2)(D\\!-\\!1)b$ $\displaystyle\hskip 142.26378pt+2\Bigl{(}\frac{a}{a^{\prime}}\\!+\\!\frac{a^{\prime}}{a}\Bigr{)}\Bigl{[}-(2\\!-\\!y)b^{\prime\prime}+(D\\!-\\!1)b^{\prime}\Bigr{]}\Biggr{\\}},\qquad$ $\displaystyle=\frac{1}{2}\nabla^{2}I\Bigl{[}-(2\\!-\\!y)b^{\prime}+(D\\!-\\!2)b\Bigr{]}\;,\qquad$ where $I[f(y)]$ represents the indefinite integral of $f(y)$ with respect to $y$. 
Substituting relation (88) in (84) and (85) gives, $\displaystyle\frac{\partial_{m}\partial_{r}}{\nabla^{2}}\frac{i\delta^{D}(x\\!-\\!x^{\prime})}{a^{D-2}}+\overline{\partial}_{m}\overline{\partial}_{r}^{\prime}\,\frac{i\Delta_{u}(x;x^{\prime})}{M^{2}}+\overline{\Pi}_{mr}i\Delta_{v}(x;x^{\prime})$ (91) $\displaystyle\hskip 99.58464pt\longrightarrow aa^{\prime}\delta_{mr}b(y)+\frac{\partial_{m}\partial_{r}}{2M^{2}}I\Bigl{[}-(2\\!-\\!y)b^{\prime}+(D\\!-\\!2)b\Bigr{]},\qquad$ $\displaystyle=\frac{aa^{\prime}H^{2}}{M^{2}}\Biggl{\\{}\delta_{mr}\Bigl{[}(4y\\!-\\!y^{2})b^{\prime\prime}+(D\\!-\\!1)(2\\!-\\!y)b^{\prime}\Bigr{]}$ $\displaystyle\hskip 113.81102pt+2aa^{\prime}H^{2}\Delta x^{m}\Delta x^{r}\Bigl{[}-(2\\!-\\!y)b^{\prime\prime}+(D\\!-\\!1)b^{\prime}\Bigr{]}\Biggr{\\}},\qquad$ $\displaystyle=-\frac{\partial^{2}y}{\partial x^{m}\partial{x^{\prime}}^{r}}\Bigl{[}(4y\\!-\\!y^{2})\frac{\partial}{\partial y}+(D\\!-\\!1)(2\\!-\\!y)\Bigr{]}\frac{b^{\prime}}{2M^{2}}$ $\displaystyle\hskip 142.26378pt+\frac{\partial y}{\partial x^{m}}\frac{\partial y}{\partial{x^{\prime}}^{r}}\Bigl{[}(2\\!-\\!y)\frac{\partial}{\partial y}-(D\\!-\\!1)\Bigr{]}\frac{b^{\prime}}{2M^{2}}.\qquad$ This completes our demonstration that the de Sitter limit of our propagator agrees with the direct calculation (68). It should also be noted that taking $H\rightarrow 0$ in the de Sitter limit gives the well known flat space result [11], so we have really checked two correspondence limits. ## 3 Approximating the Amplitudes The results of the previous section are exact but they rely upon mode functions $t(\eta,k)$, $u(\eta,k,M)$ and $v(\eta,k,M)$ for which no explicit solution is known in a general cosmological geometry (3). The purpose of this section is to develop approximations for the amplitudes (norm-squares) of these mode functions. We begin converting all the dependent and independent variables to dimensionless form. Then approximations are developed for each of the three amplitudes and checked against numerical evolution for the inflationary geometry of a simple quadratic potential which reproduces the scalar amplitude and spectral index but gives too large a value for the tensor-to-scalar ratio. The section closes by demonstrating that our approximations remain valid for the plateau potentials which agree with current data. 
### 3.1 Dimensionless Formulation Time scales vary so much during cosmology that it is desirable to change the independent variable from conformal time $\eta$ to the number of e-foldings since the start of inflation $n$, $n\equiv\ln\Bigl{[}\frac{a(\eta)}{a_{i}}\Bigr{]}\qquad\Longrightarrow\qquad\partial_{0}=aH\partial_{n}\quad,\quad\partial_{0}^{2}=a^{2}H^{2}\Bigl{[}\partial_{n}^{2}+(1-\epsilon)\partial_{n}\Bigr{]}\;.$ (92) We convert the wave number $k$ and the mass $M$ to dimensionless parameters using factors of $8\pi G$, $\kappa\equiv\sqrt{8\pi G}\,k\qquad,\qquad\mu\equiv\sqrt{8\pi G}\,M\;.$ (93) And the dimensionless Hubble parameter, inflaton and classical potential are, $\chi(n)\equiv\sqrt{8\pi G}\,H(\eta)\;\;,\;\;\psi(n)\equiv\sqrt{8\pi G}\,\varphi(\eta)\;\;,\;\;U(\psi\psi^{*})\equiv(8\pi G)^{2}V(\varphi\varphi^{*})\;.$ (94) The first slow roll parameter is already dimensionless and we consider it to be a function of $n$, $\epsilon(n)\equiv-\frac{\chi^{\prime}}{\chi}\;.$ (95) In terms of these dimensionless variables the nontrivial Einstein equations are, $\displaystyle\frac{1}{2}(D\\!-\\!2)(D\\!-\\!1)\chi^{2}$ $\displaystyle=$ $\displaystyle\chi^{2}\psi^{\prime}{\psi^{\prime}}^{*}+U(\psi\psi^{*})\;,$ (96) $\displaystyle-\frac{1}{2}(D\\!-\\!2)\Bigl{(}D\\!-\\!1\\!-\\!2\epsilon\Bigr{)}\chi^{2}$ $\displaystyle=$ $\displaystyle\chi^{2}\psi^{\prime}{\psi^{\prime}}^{*}-U(\psi\psi^{*})\;.$ (97) The dimensionless inflaton evolution equation is, $\chi^{2}\Bigl{[}\psi^{\prime\prime}+(D\\!-\\!1\\!-\\!\epsilon)\psi^{\prime}\Bigr{]}+\psi\,U^{\prime}(\psi\psi^{*})=0\;.$ (98) This can be expressed entirely in terms of $\psi$ and its derivatives, $\psi^{\prime\prime}+\Bigl{(}D\\!-\\!1\\!-\\!\frac{2\psi^{\prime}{\psi^{\prime}}^{*}}{D\\!-\\!2}\Bigr{)}\Biggl{[}\psi^{\prime}+\frac{(D\\!-\\!2)U^{\prime}(\psi\psi^{*})\psi}{2U(\psi\psi^{*})}\Biggr{]}=0\;.$ (99) Although our analytic approximations apply for any model of inflation, comparing them with exact numerical results of course requires an explicit model. It is simplest to carry out most of the analysis using a quadratic model with $U(\psi\psi^{*})=c^{2}\psi\psi^{*}$. Applying the slow roll approximation gives analytic expressions for the scalar, the dimensionless Hubble parameter and the first slow roll parameter, $\psi(n)\simeq\sqrt{\psi_{0}^{2}\\!-\\!2n}\quad,\quad\chi(n)\simeq\frac{c}{\sqrt{3}}\sqrt{\psi_{0}^{2}\\!-\\!2n}\quad,\quad\epsilon(n)\simeq\frac{1}{\psi_{0}^{2}\\!-\\!2n}\;.$ (100) Note also that $\chi(n)\simeq\chi_{0}\sqrt{1-2n/\psi_{0}^{2}}$. By starting from $\psi_{0}=10.6$ one gets somewhat over 50 e-foldings of inflation. Setting $c=7.126\times 10^{-6}$ makes this model consistent with the observed values of the scalar spectral index and the scalar amplitude [12], but the model’s tensor-to-scalar ratio is about three times larger than the 95% confidence upper limit. Although we exploit the simple slow roll results (100) of this phenomenologically excluded model to develop approximations, the section closes with a demonstration that our analytic approximations continue to apply for viable models.
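The slow roll forms (100) are simple to check against an exact integration of (99). A minimal sketch (Python, assuming a real inflaton at $D=4$, for which (99) reduces to $\psi''+(3-\psi'^{2})(\psi'+1/\psi)=0$, while adding (96) and (97) gives $\epsilon=2\psi'{\psi'}^{*}/(D-2)=\psi'^{2}$):

```python
# Integrate the background equation (99) for U = c^2 psi^2 (real inflaton,
# D = 4) and compare epsilon(n) = psi'^2 with the slow roll form (100).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(n, y):
    psi, dpsi = y
    # Eq. (99) at D = 4 with U = c^2 psi^2, so (D-2) U' psi / (2U) = 1/psi
    ddpsi = -(3.0 - dpsi**2) * (dpsi + 1.0/psi)
    return [dpsi, ddpsi]

psi0 = 10.6
sol = solve_ivp(rhs, [0, 50], [psi0, -1.0/psi0], dense_output=True,
                rtol=1e-10, atol=1e-12)

n = np.linspace(0, 49, 8)
psi, dpsi = sol.sol(n)
eps_exact = dpsi**2                       # epsilon = psi'^2 at D = 4
eps_slow  = 1.0/(psi0**2 - 2*n)           # slow roll result (100)
print(np.c_[n, eps_exact, eps_slow])      # agreement until epsilon grows
```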
We define the dimensionless MMC scalar amplitude, $\mathcal{T}(n,\kappa)\equiv\ln\Bigl{[}\frac{|t(\eta,k)|^{2}}{\sqrt{8\pi G}}\Bigr{]}\;.$ (101) Following the procedure of [22, 23, 24] we convert the mode equation and Wronskian (10) into the nonlinear relation, $\mathcal{T}^{\prime\prime}+\frac{1}{2}{\mathcal{T}^{\prime}}^{2}+(D\\!-\\!1\\!-\\!\epsilon)\mathcal{T}^{\prime}+\frac{2\kappa^{2}e^{-2n}}{\chi^{2}}-\frac{e^{-2(D-1)n-2\mathcal{T}}}{2\chi^{2}}=0\;.$ (102) The asymptotic relation (11) implies the initial conditions needed for equation (102) to produce a unique solution, $\mathcal{T}(0,\kappa)=-\ln(2\kappa)\qquad,\qquad\mathcal{T}^{\prime}(0,\kappa)=-(D\\!-\\!2)\;.$ (103) The temporal photon and spatially transverse photon amplitudes are defined analogously, $\mathcal{U}(n,\kappa,\mu)\equiv\ln\Bigl{[}\frac{|u(\eta,k,M)|^{2}}{\sqrt{8\pi G}}\Bigr{]}\qquad,\qquad\mathcal{V}(n,\kappa,\mu)\equiv\ln\Bigl{[}\frac{|v(\eta,k,M)|^{2}}{\sqrt{8\pi G}}\Bigr{]}\;.$ (104) Applying the same procedure [22, 23, 24] to the temporal photon mode equation and Wronskian (24) gives, $\displaystyle\mathcal{U}^{\prime\prime}+\frac{1}{2}{\mathcal{U}^{\prime}}^{2}+(D\\!-\\!1\\!-\\!\epsilon)\mathcal{U}^{\prime}$ (105) $\displaystyle\hskip 56.9055pt+\frac{2\kappa^{2}e^{-2n}}{\chi^{2}}+2(D\\!-\\!2)(1\\!-\\!\epsilon)+\frac{2\mu^{2}}{\chi^{2}}-\frac{e^{-2(D-1)n-2\mathcal{U}}}{2\chi^{2}}=0\;.\qquad$ And the initial conditions follow from (25), $\mathcal{U}(0,\kappa,\mu)=-\ln(2\kappa)\qquad,\qquad\mathcal{U}^{\prime}(0,\kappa,\mu)=-(D\\!-\\!2)\;.$ (106) The analogous transformation of the spatially transverse photon mode equation and Wronskian (22) produces, $\mathcal{V}^{\prime\prime}+\frac{1}{2}{\mathcal{V}^{\prime}}^{2}+(D\\!-\\!3\\!-\\!\epsilon)\mathcal{V}^{\prime}+\frac{2\kappa^{2}e^{-2n}}{\chi^{2}}+\frac{2\mu^{2}}{\chi^{2}}-\frac{e^{-2(D-3)n-2\mathcal{V}}}{2\chi^{2}}=0\;.$ (107) The initial conditions associated with (23) are, $\mathcal{V}(0,\kappa,\mu)=-\ln(2\kappa)\qquad,\qquad\mathcal{V}^{\prime}(0,\kappa,\mu)=-(D\\!-\\!4)\;.$ (108) ### 3.2 Massless, Minimally Coupled Scalar The MMC scalar amplitude is controlled by the relation between the physical wave number $\kappa e^{-n}$ and the Hubble parameter $\chi(n)$. In the sub-horizon regime of $\kappa>\chi(n)e^{n}$ the amplitude falls off roughly like $\mathcal{T}(n,\kappa)\simeq-\ln(2\kappa)-(D-2)n$, whereas it approaches a constant in the super-horizon regime of $\kappa<\chi(n)e^{n}$. (The e-folding of first horizon crossing is $n_{\kappa}$ such that $\kappa=\chi(n_{\kappa})e^{n_{\kappa}}$.) Figure 1 shows that both the sub-horizon regime and the initial phases of the super-horizon regime are well described by the constant $\epsilon$ solution [24], $\mathcal{T}_{1}(n,\kappa)\equiv\ln\Biggl{[}\frac{\frac{\pi}{2}z(n,\kappa)}{2\kappa e^{(D-2)n}}\Bigl{|}H^{(1)}_{\nu_{t}(n)}\Bigl{(}z(n,\kappa)\Bigr{)}\Bigr{|}^{2}\Biggr{]}.$ (109) Here the ratio $z(n,\kappa)$ and the MMC scalar index $\nu_{t}(n)$ are, $z(n,\kappa)\equiv\frac{\kappa e^{-n}}{[1\\!-\\!\epsilon(n)]\chi(n)}\qquad,\qquad\nu_{t}(n)\equiv\frac{1}{2}\Bigl{(}\frac{D\\!-\\!1\\!-\\!\epsilon(n)}{1\\!-\\!\epsilon(n)}\Bigr{)}\;.$ (110) (a) $n_{\kappa}\simeq 6.0$ (b) $n_{\kappa}\simeq 8.3$ (c) $n_{\kappa}\simeq 10.0$ Figure 1: Plots the massless, minimally coupled scalar amplitude $\mathcal{T}(n,\kappa)$ (in solid green) and the (black dashed) ultraviolet approximation (109) versus the e-folding $n$ for three different values of $\kappa$. Of course expression (109) is an approximation to the exact result.
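The quality of that approximation is easy to quantify. A sketch (Python, using the slow roll background (100) with the parameters quoted above and the illustrative choice $\kappa=400\chi_{0}$, for which $n_{\kappa}\simeq 6$) that integrates (102) with the initial conditions (103) and compares against (109); tight tolerances are needed in the sub-horizon regime where large terms nearly cancel:

```python
# Solve the amplitude equation (102) at D = 4 and compare with the
# constant-epsilon ultraviolet approximation (109)-(110).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import hankel1

c, psi0 = 7.126e-6, 10.6
chi = lambda n: (c/np.sqrt(3.0))*np.sqrt(psi0**2 - 2*n)
eps = lambda n: 1.0/(psi0**2 - 2*n)
kappa = 400*chi(0.0)                  # first horizon crossing near n ~ 6

def rhs(n, y):
    # the amplitude equation (102) at D = 4
    T, dT = y
    x2 = chi(n)**2
    ddT = -(0.5*dT**2 + (3 - eps(n))*dT
            + 2*kappa**2*np.exp(-2*n)/x2 - np.exp(-6*n - 2*T)/(2*x2))
    return [dT, ddT]

sol = solve_ivp(rhs, [0, 20], [-np.log(2*kappa), -2.0],  # conditions (103)
                method='Radau', rtol=1e-11, atol=1e-13, dense_output=True)

def T1(n):
    # ultraviolet approximation (109) with the ratio and index (110)
    e = eps(n); z = kappa*np.exp(-n)/((1 - e)*chi(n))
    nu = 0.5*(3 - e)/(1 - e)
    return np.log(0.5*np.pi*z/(2*kappa*np.exp(2*n))*np.abs(hankel1(nu, z))**2)

for n in (2.0, 6.0, 10.0):
    print(n, sol.sol(n)[0], T1(n))    # agreement through horizon crossing
```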
Because we propose to use this to compute the divergent coincidence limit of the propagator, it is important to see how well $\mathcal{T}_{1}(n,\kappa)$ captures the ultraviolet behavior of $\mathcal{T}(n,\kappa)$. Because (109) is exact for constant first slow roll parameter, the deviation must involve derivatives of $\epsilon(n)$. It turns out to fall off like $\kappa^{-4}$ [24], $\mathcal{T}(n,\kappa)-\mathcal{T}_{1}(n,\kappa)=\Bigl{(}\frac{D\\!-\\!2}{16}\Bigr{)}\Bigl{[}(D+5-7\epsilon)\epsilon^{\prime}+\epsilon^{\prime\prime}\Bigr{]}\Bigl{(}\frac{\chi e^{n}}{\kappa}\Bigr{)}^{4}+O\Biggl{(}\Bigl{(}\frac{\chi e^{n}}{\kappa}\Bigr{)}^{6}\Biggr{)}.$ (111) We will see in section 4 that this suffices for an exact description of the ultraviolet. The discrepancy between $\mathcal{T}(n,\kappa)$ and $\mathcal{T}_{1}(n,\kappa)$ that is evident at late times in Figure 1 is due to evolution of the first slow roll parameter $\epsilon(n)$. Figure 2 shows that the asymptotic late time phase is captured with great accuracy by the form, $\mathcal{T}_{2}(n,\kappa)=\ln\Biggl{[}\frac{\chi^{2}(n_{\kappa})}{2\kappa^{3}}\times C\Bigl{(}\epsilon(n_{\kappa})\Bigr{)}\Biggr{]}\;,$ (112) where the nearly unit correction factor $C(\epsilon)$ is, $C(\epsilon)\equiv\frac{1}{\pi}\Gamma^{2}\Bigl{(}\frac{1}{2}+\frac{1}{1\\!-\\!\epsilon}\Bigr{)}[2(1\\!-\\!\epsilon)]^{\frac{2}{1-\epsilon}}\;.$ (113) (a) $n_{\kappa}\simeq 6.0$ (b) $n_{\kappa}\simeq 8.3$ (c) $n_{\kappa}\simeq 10.0$ Figure 2: Plots the massless, minimally coupled scalar amplitude $\mathcal{T}(n,\kappa)$ (in solid green) and the (black dashed) late time approximation (112) versus the e-folding $n$ for three different values of $\kappa$. Expression (112) is exact for constant $\epsilon(n)$. When the first slow roll parameter evolves there are very small nonlocal corrections whose form is known [25] but whose net contribution is negligible for smooth potentials. ### 3.3 Temporal Photon The temporal photon amplitude is very similar to the massive scalar which was the subject of a previous study [26]. Like that system, the functional form of the amplitude is controlled by two key events: 1. First horizon crossing at $n_{\kappa}$ such that $\kappa e^{-n_{\kappa}}=\chi(n_{\kappa})$; and 2. Mass domination at $n_{\mu}$ such that $\mu=\frac{1}{2}\chi(n_{\mu})$. (The quadratic slow roll approximation (100) gives $n_{\mu}\simeq\frac{1}{2}\psi_{0}^{2}[1-(2\mu/\chi_{0})^{2}]$.) The ultraviolet is well approximated by the form that applies for constant $\epsilon(n)$ and $\mu\propto\chi(n)$ [27], $\mathcal{U}_{1}(n,\kappa,\mu)\equiv\ln\Biggl{[}\frac{\frac{\pi}{2}z(n,\kappa)}{2\kappa e^{(D-2)n}}\Bigl{|}H^{(1)}_{\nu_{u}(n,\mu)}\Bigl{(}z(n,\kappa)\Bigr{)}\Bigr{|}^{2}\Biggr{]},$ (114) where the temporal index is, $\nu^{2}_{u}(n,\mu)\equiv\frac{1}{4}\Bigl{(}\frac{D\\!-\\!3\\!+\\!\epsilon(n)}{1\\!-\\!\epsilon(n)}\Bigr{)}^{2}\\!\\!-\frac{\mu^{2}}{[1\\!-\\!\epsilon(n)]^{2}\chi^{2}(n)}.$ (115) Figure 3 shows that the ultraviolet approximation is excellent when mass domination comes either before or after inflation. (a) $n_{\mu}<0$ (b) $n_{\mu}<0$ (c) $n_{\mu}>50$ Figure 3: Plots the temporal amplitude $\mathcal{U}(n,\kappa,\mu)$ and the ultraviolet approximation (114) versus the e-folding $n$ for $\kappa=3800\chi_{0}$ (with $n_{\kappa}\simeq 8.3$) and three different values of $\mu$ with $n_{\mu}$ outside the range of inflation. The ultraviolet regime is $\kappa e^{-n}\gg\\{\chi(n),\mu\\}$.
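Both key events are easy to locate numerically on the slow roll background (100). A sketch (Python, with $\kappa=3800\chi_{0}$ and $\mu=0.4\chi_{0}$, the values used in Figures 3-5) that also checks the quadratic estimate for $n_{\mu}$ quoted above:

```python
# Locate first horizon crossing n_kappa and mass domination n_mu.
import numpy as np
from scipy.optimize import brentq

c, psi0 = 7.126e-6, 10.6
chi = lambda n: (c/np.sqrt(3.0))*np.sqrt(psi0**2 - 2*n)

kappa, mu = 3800*chi(0.0), 0.4*chi(0.0)
n_kappa = brentq(lambda n: np.exp(n)*chi(n) - kappa, 0.0, 50.0)
n_mu    = brentq(lambda n: chi(n) - 2*mu, 0.0, 50.0)
print(n_kappa, n_mu)          # ~8.3 and ~20.2, as quoted in the text
# Slow roll estimate for n_mu on this background:
print(0.5*psi0**2*(1 - (2*mu/chi(0.0))**2))   # ~20.2
```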
To see how well the ultraviolet approximation captures this regime we substitute the difference into the exact evolution equation (105) and expand in powers of $e^{n}\chi(n)/\kappa$ to find [26], $\displaystyle\mathcal{U}(n,\kappa,\mu)-\mathcal{U}_{1}(n,\kappa,\mu)=\Biggl{\\{}\Bigl{(}5\epsilon-3\epsilon^{2}\Bigr{)}\frac{\mu^{2}}{4\chi^{2}}$ (116) $\displaystyle\hskip 51.21504pt+\Bigl{(}\frac{D\\!-\\!2}{16}\Bigr{)}\Bigl{[}(D-9+7\epsilon)\epsilon^{\prime}-\epsilon^{\prime\prime}\Bigr{]}\Biggr{\\}}\Bigl{(}\frac{\chi e^{n}}{\kappa}\Bigr{)}^{4}\\!+O\Biggl{(}\Bigl{(}\frac{\chi e^{n}}{\kappa}\Bigr{)}^{6}\Biggr{)}.\qquad$ This suffices to give an exact result for the ultraviolet, so that we can take the unregulated limit of $D=4$ for the approximations which pertain for $n>n_{\kappa}$. The various terms in equation (105) behave differently before and after first horizon crossing. Evolution before first horizon crossing is controlled by the 4th and 7th terms, $\frac{2\kappa^{2}e^{-2n}}{\chi^{2}}-\frac{e^{-2(D-1)n-2\mathcal{U}}}{2\chi^{2}}\simeq 0\qquad\Longrightarrow\qquad\mathcal{U}\simeq-\ln(2\kappa)-(D\\!-\\!2)n\;.$ (117) After first horizon crossing these terms rapidly redshift into insignificance. We can take the unregulated limit ($D=4$), and equation (105) becomes, $\mathcal{U}^{\prime\prime}+\frac{1}{2}{\mathcal{U}^{\prime}}^{2}+(3\\!-\\!\epsilon)\mathcal{U}^{\prime}+4(1\\!-\\!\epsilon)+\frac{2\mu^{2}}{\chi^{2}}\simeq 0\;.$ (118) This is a nonlinear, first order equation for $\mathcal{U}^{\prime}$. Following [26] we make the ansatz, $\mathcal{U}^{\prime}\simeq\alpha+\beta\tanh(\gamma)\;.$ (119) Substituting (119) in (118) gives, $\displaystyle\Bigl{(}{\rm Eqn.\ (118)}\Bigr{)}=\alpha^{\prime}+\frac{1}{2}\alpha^{2}+\frac{1}{2}\beta^{2}+(3\\!-\\!\epsilon)\alpha+4(1\\!-\\!\epsilon)+\frac{2\mu^{2}}{\chi^{2}}$ (120) $\displaystyle\hskip 76.82234pt+\Bigl{[}(3\\!-\\!\epsilon\\!+\\!\alpha)\beta+\beta^{\prime}\Bigr{]}\tanh(\gamma)+\beta\Bigl{(}\gamma^{\prime}-\frac{1}{2}\beta\Bigr{)}{\rm sech}^{2}(\gamma)\;.\qquad$ Ansatz (119) does not quite solve (118), but the following choices reduce the residue to terms of order $\epsilon\times\tanh(\gamma)$, $\alpha=-3\qquad,\qquad\frac{1}{4}\beta^{2}=\frac{1}{4}+\frac{\epsilon}{2}-\frac{\mu^{2}}{\chi^{2}}\qquad,\qquad\gamma^{\prime}=\frac{1}{2}\beta\;.$ (121) Figures 4 and 5 show how $\mathcal{U}(n,\kappa,\mu)$ behaves when mass domination comes after first horizon crossing and before the end of inflation. Figure 4: Plots the temporal amplitude $\mathcal{U}(n,3800\chi_{0},0.4\chi_{0})$ and the three approximations: (114), (123) and (124). For $\kappa=3800\chi_{0}$ horizon crossing occurs at $n_{\kappa}\simeq 8.3$; for $\mu=0.4\chi_{0}$ mass domination occurs at $n_{\mu}\simeq 20.2$. First comes a phase of slow decline followed by a period of oscillations. From (119) with (121) we see that these phases are controlled by a “frequency” defined as, $\omega^{2}_{u}(n,\mu)\equiv\frac{1}{4}+\frac{\epsilon(n)}{2}-\frac{\mu^{2}}{\chi^{2}(n)}\equiv-\Omega^{2}_{u}(n,\mu)\;.$ (122) During the phase of slow decline $\omega^{2}_{u}(n,\mu)>0$.
Integrating (119) with (121) for this case gives, $\displaystyle\mathcal{U}_{2}(n,\kappa,\mu)=\mathcal{U}_{2}-3(n\\!-\\!n_{2})+2\ln\Biggl{[}\cosh\Bigl{(}\int_{n_{2}}^{n}\\!\\!\\!dn^{\prime}\omega_{u}(n^{\prime},\mu)\Bigr{)}$ (123) $\displaystyle\hskip 142.26378pt+\Bigl{(}\frac{3\\!+\\!\mathcal{U}_{2}^{\prime}}{2\omega_{u}(n_{2},\mu)}\Bigr{)}\sinh\Bigl{(}\int_{n_{2}}^{n}\\!\\!\\!dn^{\prime}\omega_{u}(n^{\prime},\mu)\Bigr{)}\Biggr{]},\qquad$ where $n_{2}\equiv n_{\kappa}+4$. The oscillatory phase is characterized by $\omega^{2}_{u}(n,\mu)<0$. Integrating (119) with (121) for this case produces, $\displaystyle\mathcal{U}_{3}(n,\kappa,\mu)=\mathcal{U}_{3}-3(n\\!-\\!n_{3})+2\ln\Biggl{[}\Biggl{|}\cos\Bigl{(}\int_{n_{3}}^{n}\\!\\!\\!dn^{\prime}\Omega_{u}(n^{\prime},\mu)\Bigr{)}$ (124) $\displaystyle\hskip 142.26378pt+\Bigl{(}\frac{3\\!+\\!\mathcal{U}_{3}^{\prime}}{2\Omega_{u}(n_{3},\mu)}\Bigr{)}\sin\Bigl{(}\int_{n_{3}}^{n}\\!\\!\\!dn^{\prime}\Omega_{u}(n^{\prime},\mu)\Bigr{)}\Biggr{|}\Biggr{]},\qquad$ where $n_{3}\equiv n_{\mu}+4$. Figures 4 and 5 show that these approximations are excellent. Figure 5: Plots the temporal amplitude $\mathcal{U}(n,3800\chi_{0},0.3\chi_{0})$ and the three approximations: (114), (123) and (124). For $\kappa=3800\chi_{0}$ horizon crossing occurs at $n_{\kappa}\simeq 8.3$; for $\mu=0.3\chi_{0}$ mass domination occurs at $n_{\mu}\simeq 36.0$. It is worth noting that the approximations (123) and (124) depend on $\kappa$ principally through the integration constants $\mathcal{U}_{2}\equiv\mathcal{U}(n_{2},\kappa,\mu)$ and $\mathcal{U}_{3}\equiv\mathcal{U}(n_{3},\kappa,\mu)$. Figure 6 shows the difference $\mathcal{U}(n,400\chi_{0},\mu)-\mathcal{U}(n,3800\chi_{0},\mu)$ for the same two choices of $\mu$ in Figures 4 and 5. One can see that the difference freezes into a constant after first horizon crossing to better than five significant figures! Figure 6: Plots the difference of the temporal amplitude $\Delta\mathcal{U}\equiv\mathcal{U}(n,\kappa_{1},\mu)-\mathcal{U}(n,\kappa_{2},\mu)$ for $\kappa_{1}=400\chi_{0}$ and $\kappa_{2}=3800\chi_{0}$ with $\mu$ chosen so that all three approximations (114), (123) and (124) are necessary. ### 3.4 Spatially Transverse Photons The general considerations for the amplitude of spatially transverse photons are similar to those for temporal photons. Before first horizon crossing it is the 4th and last terms of equation (107) which control the evolution, $\frac{2\kappa^{2}e^{-2n}}{\chi^{2}}-\frac{e^{-2(D-3)n-2\mathcal{V}}}{2\chi^{2}}\simeq 0\qquad\Longrightarrow\qquad\mathcal{V}\simeq-\ln(2\kappa)-(D\\!-\\!4)n\;.$ (125) A more accurate approximation is, $\mathcal{V}_{1}(n,\kappa,\mu)\equiv\ln\Biggl{[}\frac{\frac{\pi}{2}z(n,\kappa)}{2\kappa e^{(D-4)n}}\Bigl{|}H^{(1)}_{\nu_{v}(n,\mu)}\Bigl{(}z(n,\kappa)\Bigr{)}\Bigr{|}^{2}\Biggr{]},$ (126) where $z(n,\kappa)$ is the same as (110) and the transverse index is, $\nu^{2}_{v}(n,\mu)\equiv\frac{1}{4}\Bigl{(}\frac{D\\!-\\!3\\!-\\!\epsilon(n)}{1\\!-\\!\epsilon(n)}\Bigr{)}^{2}-\frac{\mu^{2}}{[1\\!-\\!\epsilon(n)]^{2}\chi^{2}(n)}\;.$ (127) Note the slight (order $\epsilon$) difference between $\nu^{2}_{u}(n,\mu)$ and $\nu^{2}_{v}(n,\mu)$. Figure 7 shows that (126) is excellent up to several e-foldings after first horizon crossing, and throughout inflation for $n_{\mu}<0$. 
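For reference, the slow decline form (123) is straightforward to evaluate by quadrature. A sketch (Python; the matching values $\mathcal{U}_{2}$ and $\mathcal{U}_{2}^{\prime}$ below are hypothetical placeholders, in practice read off the numerical solution of (105) at $n_{2}=n_{\kappa}+4$):

```python
# Evaluate the slow decline approximation (123) for the temporal amplitude.
import numpy as np
from scipy.integrate import quad

c, psi0 = 7.126e-6, 10.6
chi = lambda n: (c/np.sqrt(3.0))*np.sqrt(psi0**2 - 2*n)
eps = lambda n: 1.0/(psi0**2 - 2*n)
omega_u = lambda n, mu: np.sqrt(0.25 + 0.5*eps(n) - (mu/chi(n))**2)

def U2_approx(n, mu, n2, U2, dU2):
    # valid while omega_u^2 > 0, i.e. before mass domination at n_mu
    phase, _ = quad(omega_u, n2, n, args=(mu,))
    return (U2 - 3.0*(n - n2)
            + 2.0*np.log(np.cosh(phase)
                         + (3.0 + dU2)/(2.0*omega_u(n2, mu))*np.sinh(phase)))

mu, n2 = 0.4*chi(0.0), 8.3 + 4.0   # n_kappa ~ 8.3 for kappa = 3800*chi0
# U2 and dU2 are placeholder matching values, not computed here
print(U2_approx(16.0, mu, n2, -30.0, -3.0))
```

The oscillatory form (124) is evaluated the same way, with $\omega_{u}\rightarrow\Omega_{u}$ and the hyperbolic functions replaced by trigonometric ones.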
(a) $n_{\mu}<0$ (b) $n_{\mu}<0$ (c) $n_{\mu}>50$ Figure 7: Plots the transverse amplitude $\mathcal{V}(n,\kappa,\mu)$ and the ultraviolet approximation (126) versus the e-folding $n$ for $\kappa=3800\chi_{0}$ (with $n_{\kappa}\simeq 8.3$) and three different values of $\mu$ with $n_{\mu}$ outside the range of inflation. Expression (126) also models the ultraviolet to high precision, $\displaystyle\mathcal{V}(n,\kappa,\mu)-\mathcal{V}_{1}(n,\kappa,\mu)=\Biggl{\\{}\Bigl{(}5\epsilon-3\epsilon^{2}\Bigr{)}\frac{\mu^{2}}{4\chi^{2}}$ (128) $\displaystyle\hskip 51.21504pt+\Bigl{(}\frac{D\\!-\\!4}{16}\Bigr{)}\Bigl{[}(D+3-7\epsilon)\epsilon^{\prime}+\epsilon^{\prime\prime}\Bigr{]}\Biggr{\\}}\Bigl{(}\frac{\chi e^{n}}{\kappa}\Bigr{)}^{4}\\!+O\Biggl{(}\Bigl{(}\frac{\chi e^{n}}{\kappa}\Bigr{)}^{6}\Biggr{)}.\qquad$ Figure 8 shows $\mathcal{V}(n,\kappa,\mu)$ for the case where $n_{\mu}$ occurs after first horizon crossing and before the end of inflation. One sees the same phases of slow decline after first horizon crossing, followed by oscillations. Figure 8: Plots the transverse amplitude $\mathcal{V}(n,3800\chi_{0},0.4\chi_{0})$ and the three approximations: (126), (131) and (132). For $\kappa=3800\chi_{0}$ horizon crossing occurs at $n_{\kappa}\simeq 8.3$; for $\mu=0.4\chi_{0}$ mass domination occurs at $n_{\mu}\simeq 20.2$. The second and third phases can be understood by noting that the two terms of expression (125) redshift into insignificance after first horizon crossing. We can also set $D=4$ so that equation (107) degenerates to, $\mathcal{V}^{\prime\prime}+\frac{1}{2}{\mathcal{V}^{\prime}}^{2}+(1\\!-\\!\epsilon)\mathcal{V}^{\prime}+\frac{2\mu^{2}}{\chi^{2}}\simeq 0\;.$ (129) The same ansatz (119) applies to this regime, with the parameter choices, $\alpha=-1\quad,\quad\frac{1}{4}\beta^{2}=\frac{1}{4}-\frac{\mu^{2}}{\chi^{2}}\equiv\omega^{2}_{v}\equiv-\Omega^{2}_{v}\quad,\quad\gamma^{\prime}=\frac{1}{2}\beta\;.$ (130) Just as there was an order $\epsilon$ difference between the temporal and transverse indices — expressions (115) and (127), respectively — so too there is an order $\epsilon$ difference between $\omega^{2}_{u}(n,\mu)$ and $\omega^{2}_{v}(n,\mu)$. Integrating (119) with (130) for $\omega^{2}_{v}(n,\mu)>0$ gives, $\displaystyle\mathcal{V}_{2}(n,\kappa,\mu)=\mathcal{V}_{2}-(n\\!-\\!n_{2})+2\ln\Biggl{[}\cosh\Bigl{(}\int_{n_{2}}^{n}\\!\\!\\!dn^{\prime}\omega_{v}(n^{\prime},\mu)\Bigr{)}$ (131) $\displaystyle\hskip 142.26378pt+\Bigl{(}\frac{1\\!+\\!\mathcal{V}_{2}^{\prime}}{2\omega_{v}(n_{2},\mu)}\Bigr{)}\sinh\Bigl{(}\int_{n_{2}}^{n}\\!\\!\\!dn^{\prime}\omega_{v}(n^{\prime},\mu)\Bigr{)}\Biggr{]},\qquad$ where $n_{2}\equiv n_{\kappa}+4$. Integrating (119) with (130) for $\omega^{2}_{v}(n,\mu)<0$ results in, $\displaystyle\mathcal{V}_{3}(n,\kappa,\mu)=\mathcal{V}_{3}-(n\\!-\\!n_{3})+2\ln\Biggl{[}\Biggl{|}\cos\Bigl{(}\int_{n_{3}}^{n}\\!\\!\\!dn^{\prime}\Omega_{v}(n^{\prime},\mu)\Bigr{)}$ (132) $\displaystyle\hskip 142.26378pt+\Bigl{(}\frac{1\\!+\\!\mathcal{V}_{3}^{\prime}}{2\Omega_{v}(n_{3},\mu)}\Bigr{)}\sin\Bigl{(}\int_{n_{3}}^{n}\\!\\!\\!dn^{\prime}\Omega_{v}(n^{\prime},\mu)\Bigr{)}\Biggr{|}\Biggr{]},\qquad$ where $n_{3}\equiv n_{\mu}+4$. Figures 8 and 9 demonstrate that the (131) and (132) approximations are excellent. Figure 9: Plots the transverse amplitude $\mathcal{V}(n,3800\chi_{0},0.3\chi_{0})$ and the three approximations: (126), (131) and (132).
For $\kappa=3800\chi_{0}$ horizon crossing occurs at $n_{\kappa}\simeq 8.3$; for $\mu=0.3\chi_{0}$ mass domination occurs at $n_{\mu}\simeq 36.0$. Finally, we note from Figure 10 that $\mathcal{V}^{\prime}(n,\kappa,\mu)$ is nearly independent of $\kappa$ after first horizon crossing. Figure 10: Plots the difference of the transverse amplitude $\Delta\mathcal{V}\equiv\mathcal{V}(n,\kappa_{1},\mu)-\mathcal{V}(n,\kappa_{2},\mu)$ for $\kappa_{1}=400\chi_{0}$ and $\kappa_{2}=3800\chi_{0}$ with $\mu$ chosen so that all three approximations (126), (131) and (132) are necessary. One consequence for the (131) and (132) approximations is that only the integration constants $\mathcal{V}_{2}$ and $\mathcal{V}_{3}$ depend on $\kappa$. ### 3.5 Plateau Potentials We chose the quadratic dimensionless potential $U(\psi\psi^{*})=c^{2}\psi\psi^{*}$ for detailed studies because it gives simple, analytic expressions (100) in the slow roll approximation for the dimensionless Hubble parameter $\chi(n)$ and the first slow roll parameter $\epsilon(n)$. Setting $c\simeq 7.126\times 10^{-6}$ makes this model consistent with the observed values for the scalar amplitude and the scalar spectral index [12]. On the other hand, the model’s large prediction of $r\simeq 0.14$ is badly discordant with limits on the tensor-to-scalar ratio [12]. We shall therefore briefly consider how our analytic approximations fare when used with the plateau potentials currently consistent with observation. The best known plateau potential is the Einstein-frame version of Starobinsky’s famous $R+R^{2}$ model [13]. Expressing the dimensionless potential for this model in our notation gives [28], $U(\psi\psi^{*})=\frac{3}{4}M^{2}\Bigl{(}1-e^{-\sqrt{\frac{2}{3}}\,|\psi|}\Bigr{)}^{2}\qquad,\qquad M=1.3\times 10^{-5}\;.$ (133) Somewhat over 50 e-foldings of inflation result if one starts from $\psi_{0}=4.6$, and the choice of $M=1.3\times 10^{-5}$ makes the model consistent with observation [12]. Figure 11 shows why $r=16\epsilon$ is so small for this model: its dimensionless Hubble parameter $\chi(n)$ is nearly constant. Figure 11: Potential and geometry for the Einstein-frame representation of Starobinsky’s original model of inflation [13]. The left shows the dimensionless potential $U(\psi\psi^{*})$ (133); the middle plot gives the dimensionless Hubble parameter $\chi(n)$ and the right hand plot depicts the first slow roll parameter $\epsilon(n)$. Inflation was assumed to start from $\psi_{0}=4.6$. Figure 12: The left hand plot shows the amplitude $\mathcal{T}(n,\kappa)$ of the massless, minimally coupled scalar for $\kappa=3800\chi_{0}$, which corresponds to $n_{\kappa}\simeq 8.3$. The right hand graph shows the frequency $\omega_{u}^{2}(n,\mu)\simeq\omega^{2}_{v}(n,\mu)$ for $\mu=0.497\chi_{0}$ which passes through zero at $n_{\mu}\simeq 12$. All our approximations pertain for this model, but the general effect of $\chi(n)$ being so nearly constant is to increase the range over which the ultraviolet approximations remain valid. The left hand plot of Figure 12 shows this for the MMC scalar amplitude $\mathcal{T}(n,\kappa)$. Because $\epsilon(n)$ is so small, the temporal and transverse frequencies are nearly equal, $\omega^{2}_{u}(n,\mu)\simeq\omega^{2}_{v}(n,\mu)$, and nearly constant. The right hand plot of Figure 12 shows this for a carefully chosen value of $\mu=0.497\chi_{0}$ which causes mass domination to occur during inflation. For this case we can just see the second and third phases occur in Figure 13.
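The Starobinsky background itself is just as easy to evolve as the quadratic one. A sketch (Python, real inflaton at $D=4$, with $\epsilon=\psi'^{2}$ and $\chi^{2}=U/(3-\psi'^{2})$ following from (96), and the slow roll value $\psi'(0)=-U'(\psi_{0})/2U(\psi_{0})$ as initial data):

```python
# Background evolution (99) for the Starobinsky potential (133),
# starting from psi0 = 4.6 with M = 1.3e-5 as in the text.
import numpy as np
from scipy.integrate import solve_ivp

M, a = 1.3e-5, np.sqrt(2.0/3.0)
U = lambda p: 0.75*M**2*(1 - np.exp(-a*p))**2
UpOver2U = lambda p: a*np.exp(-a*p)/(1 - np.exp(-a*p))  # U'(psi)/(2U)

def rhs(n, y):
    p, dp = y
    return [dp, -(3 - dp**2)*(dp + UpOver2U(p))]

psi0 = 4.6
sol = solve_ivp(rhs, [0, 55], [psi0, -UpOver2U(psi0)],
                rtol=1e-10, atol=1e-12, dense_output=True)

for n in (0.0, 25.0, 50.0):
    p, dp = sol.sol(n)
    # epsilon(n) and chi(n); note how flat chi(n) is for this model
    print(n, dp**2, np.sqrt(U(p)/(3 - dp**2)))
```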
Figure 13: Plots of the temporal amplitude $\mathcal{U}(n,\kappa,\mu)$ (left) and the spatially transverse amplitude $\mathcal{V}(n,\kappa,\mu)$ (right) versus $n$ for the Starobinsky potential (133). For each amplitude $\kappa=3800\chi_{0}$ (which implies $n_{\kappa}\simeq 8.3$) and $\mu=0.497\chi_{0}$ (which implies $n_{\mu}\simeq 12$). ## 4 Effective Potential The purpose of this section is to evaluate the one photon loop contribution to the inflaton effective potential defined by equation (6). We begin by deriving some exact results for the trace of the coincident propagator, and we recall that $\mathcal{T}(n,\kappa)$ can be obtained from $\mathcal{U}(n,\kappa,0)$. Then the ultraviolet approximations (114) and (126) are used to derive a divergent result whose renormalization gives the part of the effective potential that depends locally on the geometry. We give large field and small field expansions for this local part, and we study its dependence on derivatives of $\epsilon(n)$. The section closes with a discussion of the nonlocal part of the effective potential which derives from the late time approximations (123), (124), (131) and (132). ### 4.1 Trace of the Coincident Photon Propagator At coincidence the mixed time-space components of the photon mode sum vanish, and factors of $\widehat{k}_{m}\widehat{k}_{n}$ average to $\delta_{mn}/(D-1)$, $\displaystyle i\Bigl{[}\mbox{}_{\mu}\Delta_{\nu}\Bigr{]}(x;x)=\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\,\Biggl{\\{}\frac{1}{M^{2}}\left(\matrix{k^{2}uu^{*}&0\cr 0&\frac{\delta_{mn}}{D-1}\mathcal{D}u\mathcal{D}u^{*}}\right)$ (134) $\displaystyle\hskip 51.21504pt-\frac{1}{M^{2}}\left(\matrix{\partial_{0}t\partial_{0}t^{*}&0\cr 0&\frac{\delta_{mn}}{D-1}k^{2}tt^{*}}\right)+\left(\matrix{0&0\cr 0&(\frac{D-2}{D-1})\delta_{mn}vv^{*}}\right)\Biggr{\\}}.\qquad$ Its trace is, $\displaystyle g^{\mu\nu}i\Bigl{[}\mbox{}_{\mu}\Delta_{\nu}\Bigr{]}(x;x)=$ (135) $\displaystyle\hskip 28.45274pt\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\,\Biggl{\\{}\frac{\mathcal{D}u\mathcal{D}u^{*}\\!-\\!k^{2}uu^{*}\\!+\\!\partial_{0}t\partial_{0}t^{*}\\!-\\!k^{2}tt^{*}}{a^{2}M^{2}}+\frac{(D\\!-\\!2)vv^{*}}{a^{2}}\Biggr{\\}}.\qquad$ Relation (55) allows us to replace the MMC scalar mode function $t(\eta,k)$ with the massless limit of the temporal mode function $u_{0}(\eta,k)\equiv u(\eta,k,0)$, $\partial_{0}t\partial_{0}t^{*}=k^{2}u_{0}u^{*}_{0}\qquad,\qquad k^{2}tt^{*}=\mathcal{D}u_{0}\mathcal{D}u^{*}_{0}\;.$ (136) Substituting (136) in (135) gives, $\displaystyle g^{\mu\nu}i\Bigl{[}\mbox{}_{\mu}\Delta_{\nu}\Bigr{]}(x;x)=$ (137) $\displaystyle\hskip 22.76228pt\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\,\Biggl{\\{}\frac{\mathcal{D}u\mathcal{D}u^{*}\\!-\\!\mathcal{D}u_{0}\mathcal{D}u_{0}^{*}\\!-\\!k^{2}(uu^{*}\\!-\\!u_{0}u^{*}_{0})}{a^{2}M^{2}}+\frac{(D\\!-\\!2)vv^{*}}{a^{2}}\Biggr{\\}}.\qquad$ This second form (137) is very important because it demonstrates the absence of any $1/M^{2}$ pole as an exact relation, before any approximations are made. 
The mode equation for temporal photons implies, $\displaystyle\mathcal{D}u\mathcal{D}u^{*}=a^{2}H^{2}\Bigl{[}u^{\prime}{u^{\prime}}^{*}+(D\\!-\\!2)(uu^{*})^{\prime}+(D\\!-\\!2)^{2}uu^{*}\Bigr{]}\;,$ (139) $\displaystyle\hskip 22.76228pt=(k^{2}+a^{2}M^{2})uu^{*}+\frac{a^{2}H^{2}}{2}\Bigl{(}\partial_{n}\\!+\\!D\\!-\\!1\\!-\\!\epsilon\Bigr{)}\Bigl{(}\partial_{n}\\!+\\!2D\\!-\\!4\Bigr{)}(uu^{*})\;.\qquad$ Using relations (139) and (137) allows us to express the trace of the coincident photon propagator in terms of three coincident scalar propagators, $\displaystyle g^{\mu\nu}i\Bigl{[}\mbox{}_{\mu}\Delta_{\nu}\Bigr{]}(x;x)=i\Delta_{u}(x;x)+\frac{(D\\!-\\!2)}{a^{2}}i\Delta_{v}(x;x)$ (140) $\displaystyle\hskip 28.45274pt+\frac{H^{2}}{2M^{2}}\Bigl{(}\partial_{n}\\!+\\!D\\!-\\!1\\!-\\!\epsilon\Bigr{)}\Bigl{(}\partial_{n}\\!+\\!2D\\!-\\!4\Bigr{)}\Bigl{[}i\Delta_{u}(x;x)-i\Delta_{u_{0}}(x;x)\Bigr{]}\;.\qquad$ The disappearance of any factors of $k^{2}$ from the Fourier mode sums in (140), coupled with the ultraviolet expansions (116) and (128), means that the phase 1 approximations $\mathcal{U}_{1}(n,\kappa,\mu)$ and $\mathcal{V}_{1}(n,\kappa,\mu)$ exactly reproduce the ultraviolet divergence structures. Two of the scalar propagators in expression (140) are, $\displaystyle i\Delta_{u}(x;x^{\prime})\equiv\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\Biggl{\\{}\theta(\Delta\eta)u(\eta,k,M)u^{*}(\eta^{\prime},k,M)e^{i\vec{k}\cdot\Delta\vec{x}}$ (141) $\displaystyle\hskip 128.0374pt+\theta(-\Delta\eta)u^{*}(\eta,k,M)u(\eta^{\prime},k,M)e^{-i\vec{k}\cdot\Delta\vec{x}}\Biggr{\\}},\qquad$ $\displaystyle i\Delta_{v}(x;x^{\prime})\equiv\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\Biggl{\\{}\theta(\Delta\eta)v(\eta,k,M)v^{*}(\eta^{\prime},k,M)e^{i\vec{k}\cdot\Delta\vec{x}}$ (142) $\displaystyle\hskip 128.0374pt+\theta(-\Delta\eta)v^{*}(\eta,k,M)v(\eta^{\prime},k,M)e^{-i\vec{k}\cdot\Delta\vec{x}}\Biggr{\\}}.\qquad$ The third scalar propagator $i\Delta_{u_{0}}(x;x^{\prime})$ is just the $M\rightarrow 0$ limit of $i\Delta_{u}(x;x^{\prime})$. The coincidence limits of each propagator can be expressed in terms of the corresponding amplitude, $\frac{i\Delta_{u}(x;x)}{\sqrt{8\pi G}}=\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\,e^{\mathcal{U}(n,\kappa,\mu)}\quad,\quad\frac{i\Delta_{v}(x;x)}{\sqrt{8\pi G}}=\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\,e^{\mathcal{V}(n,\kappa,\mu)}\;.$ (143) Expression (140) is exact but not immediately useful because we lack explicit expressions for the coincident propagators (143). It is at this stage that we must resort to the analytic approximations developed in section 3. Recall that the phase 1 approximation is valid until roughly 4 e-foldings after horizon crossing. If one instead thinks of this as a condition on the dimensionless wave number $\kappa\equiv\sqrt{8\pi G}\,k$ at fixed $n$, it means that $\kappa>\kappa_{n-4}$, where we define $\kappa_{n}$ as the dimensionless wave number which experiences horizon crossing at e-folding $n$.
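For orientation, the cutoff $\kappa_{n-4}$ grows roughly like $e^{n}$; a small sketch (Python, quadratic background (100)) that tabulates it:

```python
# The cutoff kappa_{n-4} separating the phase 1 regime from the late
# time regime, in units of chi_0.
import numpy as np

c, psi0 = 7.126e-6, 10.6
chi = lambda n: (c/np.sqrt(3.0))*np.sqrt(psi0**2 - 2*n)
kappa_n = lambda n: np.exp(n)*chi(n)     # horizon crossing wave number

for n in (10.0, 20.0, 30.0):
    print(n, kappa_n(n - 4.0)/chi(0.0))  # ~3.8e2, ~7.5e6, ~1.4e11
```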
Taking as an example the temporal photon contribution we can write, $\displaystyle e^{\mathcal{U}(n,\kappa,\mu)}\simeq\theta\Bigl{(}\kappa\\!-\\!\kappa_{n-4}\Bigr{)}e^{\mathcal{U}_{1}(n,\kappa,\mu)}+\theta\Bigl{(}\kappa_{n-4}-\kappa\Bigr{)}e^{\mathcal{U}_{2,3}(n,\kappa,\mu)}\;,$ (145) $\displaystyle\hskip 56.9055pt=e^{\mathcal{U}_{1}(n,\kappa,\mu)}+\theta\Bigl{(}\kappa_{n-4}-\kappa\Bigr{)}\Biggl{[}e^{\mathcal{U}_{2,3}(n,\kappa,\mu)}-e^{\mathcal{U}_{1}(n,\kappa,\mu)}\Biggr{]}.\qquad$ Substituting the approximation (145) into expression (143) allows us to write, $i\Delta_{u}(x;x)\simeq L_{u}(n)+N_{u}(n)\;,$ (146) where we define the local ($L$) and nonlocal ($N$) contributions as, $\displaystyle L_{u}(n)$ $\displaystyle\\!\\!\\!\equiv\\!\\!\\!$ $\displaystyle\sqrt{8\pi G}\\!\int\\!\\!\frac{d^{D-1}k}{(2\pi)^{D-1}}\,e^{\mathcal{U}_{1}(n,\kappa,\mu)}\;,$ (147) $\displaystyle N_{u}(n)$ $\displaystyle\\!\\!\\!\equiv\\!\\!\\!$ $\displaystyle\sqrt{8\pi G}\\!\int\\!\\!\frac{d^{3}k}{(2\pi)^{3}}\,\theta\Bigl{(}\kappa_{n-4}-\kappa\Bigr{)}\Biggl{[}e^{\mathcal{U}_{2,3}(n,\kappa,\mu)}-e^{\mathcal{U}_{1}(n,\kappa,\mu)}\Biggr{]}.\qquad$ (148) Note that we have taken the unregulated limit ($D=4$) in expression (148) because it is ultraviolet finite. The same considerations apply as well for the coincident spatially transverse photon propagator $i\Delta_{v}(x;x^{\prime})$, and for the massless limit of the temporal photon propagator $i\Delta_{u_{0}}(x;x)$. ### 4.2 The Local Contribution The local contribution for each of the coincident propagators (143) comes from using the phase 1 approximation (147). For the temporal modes the amplitude is approximated by expression (114), whereupon we change variables to $z$ using $k=(1-\epsilon)Haz$, and then employ integral $6.574\;\\#2$ of [29], $L_{u}(n)=\frac{[(1\\!-\\!\epsilon)H]^{D-2}}{(4\pi)^{\frac{D}{2}}}\times\frac{\Gamma(\frac{D-1}{2}\\!+\\!\nu_{u})\Gamma(\frac{D-1}{2}\\!-\\!\nu_{u})}{\Gamma(\frac{1}{2}\\!+\\!\nu_{u})\Gamma(\frac{1}{2}\\!-\\!\nu_{u})}\times\Gamma\Bigl{(}1\\!-\\!\frac{D}{2}\Bigr{)}\;.$ (149) Recall that the index $\nu_{u}(n,\mu)$ is defined in expression (115). Of course the massless limit is, $L_{u_{0}}(n)=\frac{[(1\\!-\\!\epsilon)H]^{D-2}}{(4\pi)^{\frac{D}{2}}}\times\frac{\Gamma(\frac{D-1}{2}\\!+\\!\nu_{u_{0}})\Gamma(\frac{D-1}{2}\\!-\\!\nu_{u_{0}})}{\Gamma(\frac{1}{2}\\!+\\!\nu_{u_{0}})\Gamma(\frac{1}{2}\\!-\\!\nu_{u_{0}})}\times\Gamma\Bigl{(}1\\!-\\!\frac{D}{2}\Bigr{)}\;,$ (150) where the index is, $\nu_{u_{0}}(n)\equiv\nu_{u}(n,0)=\frac{1}{2}\Bigl{(}\frac{D\\!-\\!3\\!+\\!\epsilon(n)}{1\\!-\\!\epsilon(n)}\Bigr{)}\;.$ (151) The phase 1 approximation (126) for the transverse amplitude contains two extra scale factors which serve to exactly cancel the inverse scale factors that are evident in the transverse contribution to the trace of the coincident photon propagator (140). Hence we have, $\frac{L_{v}(n)}{a^{2}}=\frac{[(1\\!-\\!\epsilon)H]^{D-2}}{(4\pi)^{\frac{D}{2}}}\times\frac{\Gamma(\frac{D-1}{2}\\!+\\!\nu_{v})\Gamma(\frac{D-1}{2}\\!-\\!\nu_{v})}{\Gamma(\frac{1}{2}\\!+\\!\nu_{v})\Gamma(\frac{1}{2}\\!-\\!\nu_{v})}\times\Gamma\Bigl{(}1\\!-\\!\frac{D}{2}\Bigr{)}\;,$ (152) where the transverse index $\nu_{v}(n,\mu)$ is given in (127). 
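The $D$ dependence of (149) can be exhibited numerically: evaluating the Gamma functions at $D=4\pm h$ isolates the simple pole in $D-4$ and the finite part. A sketch (Python/mpmath; $\epsilon=0.02$ and $\mu/\chi=0.3$ are illustrative choices, the prefactor $[(1-\epsilon)H]^{D-2}/(4\pi)^{D/2}$ is stripped, and $H$ is set to one so that $\mu/\chi$ plays the role of $M/H$):

```python
# Extract the pole and finite part of the Gamma function structure in (149).
import mpmath as mp

eps, mu_over_chi = 0.02, 0.3

def L_u_shape(D):
    # index nu_u from (115), with chi = 1
    nu = mp.sqrt(((D - 3 + eps)/(2*(1 - eps)))**2
                 - (mu_over_chi/(1 - eps))**2)
    ratio = (mp.gamma((D-1)/2 + nu)*mp.gamma((D-1)/2 - nu)
             / (mp.gamma(0.5 + nu)*mp.gamma(0.5 - nu)))
    return ratio*mp.gamma(1 - D/2)

h = mp.mpf('1e-4')
pole   = (L_u_shape(4 + h) - L_u_shape(4 - h))*h/2   # residue of 1/(D-4)
finite = (L_u_shape(4 + h) + L_u_shape(4 - h))/2     # finite part at D = 4
print(pole, finite)
```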
Each of the local contributions (149), (150) and (152) is proportional to the same divergent Gamma function, $\Gamma\Bigl{(}1\\!-\\!\frac{D}{2}\Bigr{)}=\frac{2}{D\\!-\\!4}+O\Bigl{(}(D\\!-\\!4)^{0}\Bigr{)}\;.$ (153) Each also contains a similar ratio of Gamma functions, $\displaystyle\frac{\Gamma(\frac{D-1}{2}\\!+\\!\nu)\Gamma(\frac{D-1}{2}\\!-\\!\nu)}{\Gamma(\frac{1}{2}\\!+\\!\nu)\Gamma(\frac{1}{2}\\!-\\!\nu)}=\Bigl{[}\Bigl{(}\frac{D\\!-\\!3}{2}\Bigr{)}^{2}\\!-\\!\nu^{2}\Bigr{]}\\!\times\\!\frac{\Gamma(\frac{D-3}{2}\\!+\\!\nu)\Gamma(\frac{D-3}{2}\\!-\\!\nu)}{\Gamma(\frac{1}{2}\\!+\\!\nu)\Gamma(\frac{1}{2}\\!-\\!\nu)}\;,$ (155) $\displaystyle=\\!\Bigl{[}\Bigl{(}\frac{D\\!-\\!3}{2}\Bigr{)}^{2}\\!\\!\\!-\\!\nu^{2}\Bigr{]}\Biggl{\\{}\\!1+\Bigl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu\\!\Bigr{)}\\!+\\!\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu\\!\Bigr{)}\Bigr{]}\Bigl{(}\frac{D\\!-\\!4}{2}\Bigr{)}\\!+\\!O\Bigl{(}(D\\!-\\!4)^{2}\Bigr{)}\\!\Biggr{\\}}.\qquad$ These considerations allow us to break up each of the three terms in (140) into a potentially divergent part plus a manifestly finite part. For $i\Delta_{u}(x;x)\rightarrow L_{u}(n)$ this decomposition is, $\displaystyle L_{u}=\frac{[(1\\!-\\!\epsilon)H]^{D-4}}{(4\pi)^{\frac{D}{2}}}\Biggl{[}M^{2}-\frac{(D\\!-\\!2)H^{2}}{2}\Bigl{(}(D\\!-\\!3)\epsilon-\frac{1}{2}(D\\!-\\!4)\epsilon^{2}\Bigr{)}\Biggr{]}\Gamma\Bigl{(}1\\!-\\!\frac{D}{2}\Bigr{)}$ (156) $\displaystyle\hskip 42.67912pt+\frac{1}{16\pi^{2}}\Bigl{[}M^{2}-\epsilon H^{2}\Bigr{]}\Bigl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{u}\Bigr{)}+\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{u}\Bigr{)}\Bigr{]}+O(D\\!-\\!4)\;.\qquad$ For $(D-2)i\Delta_{v}(x;x)\rightarrow(D-2)L_{v}(n)$ we have, $\displaystyle(D\\!-\\!2)L_{v}=\frac{[(1\\!-\\!\epsilon)H]^{D-4}}{(4\pi)^{\frac{D}{2}}}\Biggl{[}(D\\!-\\!2)M^{2}-\frac{(D\\!-\\!2)(D\\!-\\!4)H^{2}}{2}\Bigl{(}(D\\!-\\!3)\epsilon$ (157) $\displaystyle-\frac{(D\\!-\\!2)\epsilon^{2}}{2}\Bigr{)}\Biggr{]}\Gamma\Bigl{(}1\\!-\\!\frac{D}{2}\Bigr{)}+\frac{2M^{2}}{16\pi^{2}}\Bigl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{v}\Bigr{)}+\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{v}\Bigr{)}\Bigr{]}+O(D\\!-\\!4)\;.\qquad$ And the final term in (140) — the one with derivatives — becomes, $\displaystyle\frac{H^{2}}{2M^{2}}\Bigl{(}\partial_{n}\\!+\\!D\\!-\\!1\\!-\\!\epsilon\Bigr{)}\Bigl{(}\partial_{n}\\!+\\!2D\\!-\\!4\Bigr{)}\Bigl{[}L_{u}\\!-\\!L_{u_{0}}\Bigr{]}=\frac{H^{2}}{2}\Bigl{(}\partial_{n}\\!+\\!D\\!-\\!1\\!-\\!\epsilon\Bigr{)}$ (158) $\displaystyle\times\Bigl{(}\partial_{n}\\!+\\!2D\\!-\\!4\Bigr{)}\frac{[(1\\!-\\!\epsilon)H]^{D-4}}{(4\pi)^{\frac{D}{2}}}\Gamma\Bigl{(}1\\!-\\!\frac{D}{2}\Bigr{)}+\frac{H^{2}}{32\pi^{2}}\Bigl{(}\partial_{n}\\!+\\!3\\!-\\!\epsilon\Bigr{)}\Bigl{(}\partial_{n}\\!+\\!4\Bigr{)}\qquad$ $\displaystyle\hskip 28.45274pt\times\Biggl{\\{}\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{u}\Bigr{)}+\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{u}\Bigr{)}-\frac{\epsilon H^{2}}{M^{2}}\Bigl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{u}\Bigr{)}-\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{u_{0}}\Bigr{)}$ $\displaystyle\hskip 128.0374pt+\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{u}\Bigr{)}-\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{u_{0}}\Bigr{)}\Bigr{]}\Biggr{\\}}+O(D\\!-\\!4)\;.\qquad$ Note that the difference $\psi(\frac{1}{2}\pm\nu_{u})-\psi(\frac{1}{2}\pm\nu_{u_{0}})$ is of order $M^{2}$ so expression (158) has no $1/M^{2}$ pole. Note also that the $1/\epsilon$ pole in $\psi(\frac{1}{2}-\nu_{u_{0}})=\psi(\frac{-\epsilon}{1-\epsilon})$ is canceled by an explicit multiplicative factor of $\epsilon$. 
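The expansion (155) can likewise be confirmed numerically; a sketch with an arbitrary illustrative index $\nu=0.37$:

```python
# Numerical check of the Gamma function expansion (155) near D = 4.
import mpmath as mp

nu, h = mp.mpf('0.37'), mp.mpf('1e-6')
D = 4 + h

lhs = (mp.gamma((D-1)/2 + nu)*mp.gamma((D-1)/2 - nu)
       / (mp.gamma(0.5 + nu)*mp.gamma(0.5 - nu)))
rhs = (((D-3)/2)**2 - nu**2)*(1 + (mp.digamma(0.5 + nu)
       + mp.digamma(0.5 - nu))*(D - 4)/2)
print(lhs - rhs)    # O(h^2), confirming (155) through first order
```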
The potentially divergent terms (the ones proportional to $\Gamma(1-\frac{D}{2})$) in expressions (156), (157) and (158) sum to give, $\displaystyle(156)_{\rm div}+(157)_{\rm div}+(158)_{\rm div}=\frac{[(1\\!-\\!\epsilon)H]^{D-4}}{(4\pi)^{\frac{D}{2}}}\Bigl{[}(D\\!-\\!1)M^{2}+\frac{1}{2}R\Bigr{]}\Gamma\Bigl{(}1\\!-\\!\frac{D}{2}\Bigr{)}$ (159) $\displaystyle\hskip 19.91684pt+\frac{H^{2}}{16\pi^{2}}\Bigl{[}3-12\epsilon+4\epsilon^{2}-2\epsilon^{\prime}-\frac{(6\epsilon^{\prime}\\!+\\!\epsilon^{\prime\prime})}{1\\!-\\!\epsilon}-\Bigl{(}\frac{\epsilon^{\prime}}{1\\!-\\!\epsilon}\Bigr{)}^{2}\Bigr{]}+O(D\\!-\\!4)\;,\qquad$ where we recall that the $D$-dimensional Ricci scalar is $R=(D-1)(D-2\epsilon)H^{2}$. Comparison with expression (6) for $\Delta V^{\prime}(\varphi\varphi^{*})$ reveals that we can absorb the divergences with the following counterterms, $\delta\xi=-\frac{\Gamma(1\\!-\\!\frac{D}{2})s^{D-4}}{(4\pi)^{\frac{D}{2}}}\times\frac{1}{2}q^{2}\qquad,\qquad\delta\lambda=-\frac{\Gamma(1\\!-\\!\frac{D}{2})s^{D-4}}{(4\pi)^{\frac{D}{2}}}\times 4(D\\!-\\!1)q^{4}\;,$ (160) where $s$ is the renormalization scale. Up to finite renormalizations, these choices agree with previous results [5, 6, 7], in the same gauge and using the same regularization, on de Sitter background. Substituting expressions (156), (157), (158) and (160) into the definition (6) of $\Delta V^{\prime}(\varphi\varphi^{*})$ and taking the unregulated limit gives the local contribution, $\displaystyle\Delta V^{\prime}_{\rm L}(\varphi\varphi^{*})=\frac{q^{2}H^{2}}{16\pi^{2}}\Biggl{\\{}\frac{(6M^{2}\\!+\\!R)}{2H^{2}}\ln\Bigl{[}\frac{(1\\!-\\!\epsilon)^{2}H^{2}}{s^{2}}\Bigr{]}\\!+\\!3\\!-\\!12\epsilon\\!+\\!4\epsilon^{2}\\!-\\!2\epsilon^{\prime}\\!-\\!\frac{(6\epsilon^{\prime}\\!+\\!\epsilon^{\prime\prime})}{1\\!-\\!\epsilon}$ (161) $\displaystyle-\Bigl{(}\frac{\epsilon^{\prime}}{1\\!-\\!\epsilon}\Bigr{)}^{2}+\frac{M^{2}}{H^{2}}\Biggl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{u}\Bigr{)}+\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{u}\Bigr{)}+2\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{v}\Bigr{)}+2\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{v}\Bigr{)}\Biggr{]}$ $\displaystyle+\frac{1}{2}\Bigl{[}(\partial_{n}\\!+\\!3\\!-\\!\epsilon)(\partial_{n}\\!+\\!4)-2\epsilon\Bigr{]}\Bigl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{u}\Bigr{)}\\!+\\!\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{u}\Bigr{)}\Bigr{]}-(\partial_{n}\\!+\\!3\\!-\\!\epsilon)(\partial_{n}\\!+\\!4)$ $\displaystyle\hskip 51.21504pt\times\frac{\epsilon H^{2}}{2M^{2}}\Bigl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\nu_{u}\Bigr{)}\\!-\\!\psi\Bigl{(}\frac{1}{1\\!-\\!\epsilon}\Bigr{)}+\psi\Bigl{(}\frac{1}{2}\\!-\\!\nu_{u}\Bigr{)}\\!-\\!\psi\Bigl{(}\frac{-\epsilon}{1\\!-\\!\epsilon}\Bigr{)}\Bigr{]}\Biggr{\\}}.\qquad$ It is worth noting that there are no singularities at $\epsilon=1$, or when either $1/(1-\epsilon)$ or $-\epsilon/(1-\epsilon)$ becomes a non-positive integer [14]. The effective potential is obtained by integrating (161) with respect to $\varphi\varphi^{*}$.
The result is best expressed using the variable $z\equiv q^{2}\varphi\varphi^{*}/H^{2}$, $\displaystyle\Delta V_{\rm L}=\frac{H^{4}}{16\pi^{2}}\Biggl{\\{}\\!\Bigl{[}3z^{2}\\!+\\!\frac{Rz}{2H^{2}}\Bigr{]}\\!\ln\Bigl{[}\frac{(1\\!-\\!\epsilon)^{2}H^{2}}{s^{2}}\Bigr{]}\\!+\\!\Bigl{[}3\\!-\\!12\epsilon\\!+\\!4\epsilon^{2}\\!-\\!2\epsilon^{\prime}\\!-\\!\frac{(6\epsilon^{\prime}\\!+\\!\epsilon^{\prime\prime})}{1\\!-\\!\epsilon}\Bigr{]}z$ (162) $\displaystyle-\frac{{\epsilon^{\prime}}^{2}z}{(1\\!-\\!\epsilon)^{2}}+2\\!\int_{0}^{z}\\!\\!\\!dx\,x\Biggl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\alpha\\!\Bigr{)}\\!+\\!\psi\Bigl{(}\frac{1}{2}\\!-\\!\alpha\\!\Bigr{)}\\!+\\!2\psi\Bigl{(}\frac{1}{2}\\!+\\!\beta\\!\Bigr{)}\\!+\\!2\psi\Bigl{(}\frac{1}{2}\\!-\\!\beta\\!\Bigr{)}\Biggr{]}$ $\displaystyle+\frac{1}{2}\Bigl{[}(\partial_{n}\\!+\\!3\\!-\\!3\epsilon)(\partial_{n}\\!+\\!4\\!-\\!2\epsilon)-2\epsilon\Bigr{]}\\!\int_{0}^{z}\\!\\!\\!dx\Bigl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\alpha(x)\Bigr{)}+\psi\Bigl{(}\frac{1}{2}\\!-\\!\alpha(x)\Bigr{)}\Bigr{]}$ $\displaystyle-(\partial_{n}\\!+\\!3\\!-\\!3\epsilon)(\partial_{n}\\!+\\!4\\!-\\!2\epsilon)\\!\int_{0}^{z}\\!\\!\frac{dx\epsilon}{4x}\Bigl{[}\psi\Bigl{(}\frac{1}{2}\\!+\\!\alpha(x)\Bigr{)}-\psi\Bigl{(}\frac{1}{1\\!-\\!\epsilon}\Bigr{)}$ $\displaystyle\hskip 184.9429pt+\psi\Bigl{(}\frac{1}{2}\\!-\\!\alpha(x)\Bigr{)}-\psi\Bigl{(}\frac{-\epsilon}{1\\!-\\!\epsilon}\Bigr{)}\Bigr{]}\Biggr{\\}},\qquad$ where the $x$-dependent indices are, $\alpha(x)\equiv\sqrt{\frac{1}{4}+\frac{\epsilon\\!-\\!2x}{(1\\!-\\!\epsilon)^{2}}}\qquad,\qquad\beta(x)\equiv\sqrt{\frac{1}{4}-\frac{2x}{(1\\!-\\!\epsilon)^{2}}}\;.$ (163) Note that the term inside the square brackets on the last line of (162) vanishes for $x=0$, so the integrand is well defined at $x=0$. ### 4.3 Large Field & Small Field Expansions Expression (162) depends principally on the quantity $z=q^{2}\varphi\varphi^{*}/H^{2}$. During inflation $z$ is typically quite large, whereas it touches $0$ after the end of inflation. Figure 14 shows this for the quadratic potential, and the results are similar for the Starobinsky potential (133). It is therefore desirable to expand the potential $\Delta V_{L}(\varphi\varphi^{*})$ for large $z$ and for small $z$. Figure 14: Plots of the dimensionless inflaton field $\psi(n)$ and the ratio $z\equiv q^{2}\psi^{2}/\chi^{2}$ after the end of inflation for the quadratic potential. Here we chose $q^{2}=\frac{1}{137}$. 
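The integrals in (162) are straightforward to evaluate numerically, even past the turning points where $\alpha(x)$ and $\beta(x)$ become imaginary and the digamma pairs combine into real quantities. A sketch of the first integral (Python/mpmath; $\epsilon=0.01$ and $z=5$ are illustrative, and the quadrature interval is split near the turning points $x\sim\frac{1}{8}(1-\epsilon)^{2}$):

```python
# Evaluate the first digamma integral in (162) with the indices (163).
import mpmath as mp

eps, z = mp.mpf('0.01'), mp.mpf('5.0')

alpha = lambda x: mp.sqrt(mp.mpf(1)/4 + (eps - 2*x)/(1 - eps)**2)
beta  = lambda x: mp.sqrt(mp.mpf(1)/4 - 2*x/(1 - eps)**2)

def integrand(x):
    a, b = alpha(x), beta(x)  # mpmath returns complex roots past turning
    s = (mp.digamma(0.5 + a) + mp.digamma(0.5 - a)
         + 2*mp.digamma(0.5 + b) + 2*mp.digamma(0.5 - b))
    return (x*s).real   # x * psi(1/2 - beta) stays finite as x -> 0

print(2*mp.quad(integrand, [0, mp.mpf('0.2'), z]))
```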
The large field regime follows from the large argument expansion of the digamma function, $\psi(x)=\ln(x)-\frac{1}{2x}-\frac{1}{12x^{2}}+\frac{1}{120x^{4}}-\frac{1}{252x^{6}}+O\Bigl{(}\frac{1}{x^{8}}\Bigr{)}\;.$ (164) Substituting (164) in (162), and performing the various integrals gives, $\displaystyle\Delta V_{L}=\frac{H^{4}}{16\pi^{2}}\Biggl{\\{}3z^{2}\ln\Bigl{(}\frac{2q^{2}\varphi\varphi^{*}}{s^{2}}\Bigr{)}\\!-\\!\frac{3}{2}z^{2}+\frac{Rz}{2H^{2}}\ln\Bigl{(}\frac{2q^{2}\varphi\varphi^{*}}{s^{2}}\Bigr{)}\\!-\\!(4\\!+\\!8\epsilon\\!-\\!3\epsilon^{2})z$ (165) $\displaystyle\hskip 14.22636pt-\epsilon^{\prime}z-\Bigl{[}\frac{3}{4}\epsilon(1\\!-\\!\epsilon)(2\\!-\\!\epsilon)+\frac{7}{8}(1\\!-\\!\epsilon)\epsilon^{\prime}+\frac{1}{8}\epsilon^{\prime\prime}\Bigr{]}\ln^{2}(2z)+O\Bigl{(}\ln(z)\Bigr{)}\Biggr{\\}}.\qquad$ The leading contribution of (165) agrees with the famous flat space result of Coleman and Weinberg [2], $\Delta V\longrightarrow\frac{3(q^{2}\varphi\varphi^{*})^{2}}{16\pi^{2}}\ln\Bigl{(}\frac{2q^{2}\varphi\varphi^{*}}{s^{2}}\Bigr{)}\;.$ (166) The first three terms of (165) could be subtracted using allowed counterterms of the form $F(\varphi\varphi^{*},R)$ [9]. A prominent feature of the remaining terms is the presence of derivatives of the first slow roll parameter. These derivatives are typically very small during inflation but Figure 15 shows that they can be quite large after the end of inflation. Figure 15: Plots of the first slow roll parameter and its derivatives after the end of inflation for the quadratic potential. The small field expansion derives from expanding the digamma functions in expression (162) in powers of $x$, $\displaystyle\psi\Bigl{(}\frac{1}{2}\\!+\\!\alpha(x)\Bigr{)}$ $\displaystyle=$ $\displaystyle\psi\Bigl{(}\frac{1}{1\\!-\\!\epsilon}\Bigr{)}-\psi^{\prime}\Bigl{(}\frac{1}{1\\!-\\!\epsilon}\Bigr{)}\frac{2x}{1\\!-\\!\epsilon^{2}}+O(x^{2})\;,$ (167) $\displaystyle\psi\Bigl{(}\frac{1}{2}\\!-\\!\alpha(x)\Bigr{)}$ $\displaystyle=$ $\displaystyle\psi\Bigl{(}\frac{-\epsilon}{1\\!-\\!\epsilon}\Bigr{)}+\psi^{\prime}\Bigl{(}\frac{-\epsilon}{1\\!-\\!\epsilon}\Bigr{)}\frac{2x}{1\\!-\\!\epsilon^{2}}+O(x^{2})\;,$ (168) $\displaystyle\psi\Bigl{(}\frac{1}{2}\\!+\\!\beta(x)\Bigr{)}$ $\displaystyle=$ $\displaystyle-\gamma-\frac{\pi^{2}}{6}\frac{2x}{(1\\!-\\!\epsilon)^{2}}+O(x^{2})\;,$ (169) $\displaystyle\psi\Bigl{(}\frac{1}{2}\\!-\\!\beta(x)\Bigr{)}$ $\displaystyle=$ $\displaystyle-\frac{(1\\!-\\!\epsilon)^{2}}{2x}+1-\gamma+\Bigl{[}1\\!+\\!\frac{\pi^{2}}{6}\Bigr{]}\frac{2x}{(1\\!-\\!\epsilon)^{2}}+O(x^{2})\;.\qquad$ (170) The result is, $\displaystyle\Delta V_{L}=\frac{H^{4}}{16\pi^{2}}\Biggl{\\{}\Biggl{[}\frac{R}{2H^{2}}\ln\Bigl{[}\frac{(1\\!-\\!\epsilon)^{2}H^{2}}{s^{2}}\Bigr{]}\\!+\\!1\\!-\\!8\epsilon\\!+\\!2\epsilon^{2}\\!-\\!2\epsilon^{\prime}-\frac{(6\epsilon^{\prime}\\!+\\!\epsilon^{\prime\prime})}{1\\!-\\!\epsilon}\\!-\\!\frac{{\epsilon^{\prime}}^{2}}{(1\\!-\\!\epsilon)^{2}}\Biggr{]}z$ (171) $\displaystyle\hskip 56.9055pt+\frac{1}{2}\Bigl{[}(\partial_{n}\\!+\\!3\\!-\\!3\epsilon)(\partial_{n}\\!+\\!4\\!-\\!2\epsilon)\\!-\\!2\epsilon\Bigr{]}\Bigl{[}\psi\Bigl{(}\frac{1}{1\\!-\\!\epsilon}\Bigr{)}\\!+\\!\psi\Bigl{(}\frac{-\epsilon}{1\\!-\\!\epsilon}\Bigr{)}\Bigr{]}z$ $\displaystyle+\frac{1}{2}(\partial_{n}\\!+\\!3\\!-\\!3\epsilon)(\partial_{n}\\!+\\!4\\!-\\!2\epsilon)\Bigl{[}\psi^{\prime}\Bigl{(}\frac{1}{1\\!-\\!\epsilon}\Bigr{)}\\!-\\!\psi^{\prime}\Bigl{(}\frac{-\epsilon}{1\\!-\\!\epsilon}\Bigr{)}\Bigr{]}\frac{\epsilon z}{1\\!-\\!\epsilon^{2}}+O(z^{2})\Biggr{\\}}.\qquad$
Note that the $1/\epsilon$ pole from $\psi(\frac{-\epsilon}{1-\epsilon})$ on the penultimate line of (171) cancels against the double pole from $\psi^{\prime}(\frac{-\epsilon}{1-\epsilon})$ on the last line. ### 4.4 The Nonlocal Contribution The nonlocal contribution to the effective potential is obtained by substituting the nonlocal contribution (148) to each coincident propagator in (140), and then into expression (6), $\displaystyle\Delta V^{\prime}_{\rm N}(\varphi\varphi^{*})=q^{2}N_{u}(n)+2q^{2}e^{-2n}N_{v}(n)$ (172) $\displaystyle\hskip 113.81102pt+\frac{q^{2}H^{2}}{2M^{2}}\Bigl{(}\partial_{n}\\!+\\!3\\!-\\!\epsilon\Bigr{)}\Bigl{(}\partial_{n}\\!+\\!4\Bigr{)}\Bigl{[}N_{u}(n)\\!-\\!N_{u_{0}}(n)\Bigr{]}.\qquad$ The nonlocal contributions to the various propagators are, $\displaystyle N_{u}(n)$ $\displaystyle\\!\\!\\!=\\!\\!\\!$ $\displaystyle\int_{0}^{\kappa_{n-4}}\\!\\!\frac{d\kappa\,\kappa^{2}}{16\pi^{3}G}\Biggl{[}e^{\mathcal{U}_{2,3}(n,\kappa,\mu)}-e^{\mathcal{U}_{1}(n,\kappa,\mu)}\Biggr{]},\qquad$ (173) $\displaystyle N_{u_{0}}(n)$ $\displaystyle\\!\\!\\!=\\!\\!\\!$ $\displaystyle\int_{0}^{\kappa_{n-4}}\\!\\!\frac{d\kappa\,\kappa^{2}}{16\pi^{3}G}\Biggl{[}e^{\mathcal{U}_{2}(n,\kappa,0)}-e^{\mathcal{U}_{1}(n,\kappa,0)}\Biggr{]},\qquad$ (174) $\displaystyle N_{v}(n)$ $\displaystyle\\!\\!\\!=\\!\\!\\!$ $\displaystyle\int_{0}^{\kappa_{n-4}}\\!\\!\frac{d\kappa\,\kappa^{2}}{16\pi^{3}G}\Biggl{[}e^{\mathcal{V}_{2,3}(n,\kappa,\mu)}-e^{\mathcal{V}_{1}(n,\kappa,\mu)}\Biggr{]}.\qquad$ (175) The nonlocal nature of these contributions derives from the integration over $\kappa$, which can be converted to an integration over $n_{\kappa}$, $\kappa\equiv e^{n_{\kappa}}\chi(n_{\kappa})\qquad\Longrightarrow\qquad\frac{d\kappa}{\kappa}=\Bigl{[}1\\!-\\!\epsilon(n_{\kappa})\Bigr{]}dn_{\kappa}\;.$ (176) After this is done, any factors of $\kappa$ depend on the earlier geometry. A number of approximations result in a huge simplification. First, note from Figures 4 and 5 that the ultraviolet approximation (114) for $\mathcal{U}(n,\kappa,\mu)$ is typically more negative than the late time approximations (123) and (124). Figures 8 and 9 show that the same rule applies to $\mathcal{V}(n,\kappa,\mu)$. Hence we can write, $N_{u}(n)\simeq\int_{0}^{\kappa_{n-4}}\\!\\!\frac{d\kappa\,\kappa^{2}}{16\pi^{3}G}\,e^{\mathcal{U}_{2,3}(n,\kappa,\mu)}\qquad,\qquad N_{v}(n)\simeq\int_{0}^{\kappa_{n-4}}\\!\\!\frac{d\kappa\,\kappa^{2}}{16\pi^{3}G}\,e^{\mathcal{V}_{2,3}(n,\kappa,\mu)}\;.$ (177) Second, because the temporal and transverse frequencies are nearly equal, we can write, $\omega^{2}_{u}(n,\mu)\simeq\omega^{2}_{v}(n,\mu)\qquad\Longrightarrow\qquad\mathcal{U}(n,\kappa,\mu)\simeq\mathcal{V}(n,\kappa,\mu)-2n\;.$ (178) When the mass vanishes there is so little difference between the ultraviolet approximation (114) and its late time extension (123) that we can ignore this contribution, $N_{u_{0}}(n)\simeq 0$. Next, Figures 6 and 10 imply that the late time approximations for $\mathcal{U}(n,\kappa,\mu)$ and $\mathcal{V}(n,\kappa,\mu)$ inherit their $\kappa$ dependence from the ultraviolet approximation at $n\simeq n_{\kappa}+4$, which is itself independent of $\mu$, $n>n_{\kappa}+4\qquad\Longrightarrow\qquad\mathcal{U}_{2,3}(n,\kappa,\mu)\simeq\mathcal{U}_{1}(n_{\kappa}\\!+\\!4,\kappa,0)+f_{2,3}(n,\mu)\;,$ (179) where $f_{2,3}(n,\mu)$ can be read off from expressions (123) and (124) by omitting the $\kappa$-dependent integration constants.
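Before taking the final step, note that the change of variables (176) is exact on any background and trivial to verify; a sketch on the quadratic background:

```python
# Check of (176): d(ln kappa)/dn_kappa equals 1 - epsilon(n_kappa).
import numpy as np

c, psi0 = 7.126e-6, 10.6
chi = lambda n: (c/np.sqrt(3.0))*np.sqrt(psi0**2 - 2*n)
eps = lambda n: 1.0/(psi0**2 - 2*n)
lnkappa = lambda n: n + np.log(chi(n))

n, h = 15.0, 1e-6
print((lnkappa(n+h) - lnkappa(n-h))/(2*h), 1 - eps(n))  # agree to ~1e-10
```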
Finally, we can use the slow roll form (112) for the amplitude reached after first horizon crossing and before the mass dominates, $e^{\mathcal{U}_{1}(n_{\kappa}+4,\kappa,0)}\simeq\frac{\chi^{2}(n_{\kappa})}{2\kappa^{3}}\times C\Bigl{(}\epsilon(n_{\kappa})\Bigr{)}\;.$ (180) Putting it all together gives, $\displaystyle\Delta V_{N}(\varphi\varphi^{*})\simeq 3q^{2}\\!\\!\int_{0}^{n-4}\\!\\!\\!\\!\\!\\!\\!dn_{\kappa}\frac{[1\\!-\\!\epsilon(n_{\kappa})]\chi^{2}(n_{\kappa})C(\epsilon(n_{\kappa}))}{32\pi^{3}G}\times e^{f_{2,3}(n,\mu)}$ (181) $\displaystyle+\frac{q^{2}\chi^{2}(n)}{2\mu^{2}}(\partial_{n}\\!+\\!3\\!-\\!\epsilon)(\partial_{n}\\!+\\!4)\\!\\!\int_{0}^{n-4}\\!\\!\\!\\!\\!\\!dn_{\kappa}\frac{[1\\!-\\!\epsilon(n_{\kappa})]\chi^{2}(n_{\kappa})C(\epsilon(n_{\kappa}))}{32\pi^{3}G}\\!\times\\!e^{f_{2,3}(n,\mu)}\\!.\qquad$ ## 5 Conclusions In section 2 we derived an exact, dimensionally regulated, Fourier mode sum (50) for the Lorentz gauge propagator of a massive photon on an arbitrary cosmological background (3). Our result is expressed in terms of mode functions $t(\eta,k)$, $u(\eta,k,M)$ and $v(\eta,k,M)$ whose defining relations are (10), (24) and (22), which respectively represent massless minimally coupled scalars, massive temporal photons, and massive spatially transverse photons. The photon propagator can also be expressed as a sum (51) of bi-vector differential operators acting on the scalar propagators $i\Delta_{t}(x;x^{\prime})$, $i\Delta_{u}(x;x^{\prime})$ and $i\Delta_{v}(x;x^{\prime})$ associated with the three mode functions. Because Lorentz gauge is an exact gauge there should be no linearization instability, even on de Sitter, such as occurs for Feynman gauge [30, 31]. In section 3 we converted to a dimensionless form with time represented by the number of e-foldings $n$ since the beginning of inflation, and the wave number, mass and Hubble parameter all expressed in reduced Planck units, $\kappa\equiv\sqrt{8\pi G}\,k$, $\mu\equiv\sqrt{8\pi G}\,M$ and $\chi(n)\equiv\sqrt{8\pi G}\,H(\eta)$. Analytic approximations were derived for the amplitudes $\mathcal{T}(n,\kappa)$, $\mathcal{U}(n,\kappa,\mu)$ and $\mathcal{V}(n,\kappa,\mu)$ associated with each of the mode functions. Which approximation to use is controlled by first horizon crossing at $\kappa=e^{n_{\kappa}}\chi(n_{\kappa})$ and mass domination at $\mu=\frac{1}{2}\chi(n_{\mu})$. Until shortly after first horizon crossing we employ the ultraviolet approximations (109), (114) and (126). After first horizon crossing and before mass domination the appropriate approximations are (112), (123) and (131). And after mass domination (which $\mathcal{T}(n,\kappa)$ never experiences) the amplitudes are well approximated by (124) and (132). The validity of these approximations was checked against explicit numerical solutions for inflation driven by the simple quadratic model, and by the phenomenologically favored plateau model (133). In section 4 we applied our approximations to compute the effective potential induced by photons coupled to a charged inflaton. Our result consists of a part (162) which depends locally on the geometry (3) and a numerically smaller part (181) which depends on the past history. The local part was expanded both for the case of large field strength (165), and for small field strength (171). The existence of the second, nonlocal contribution was conjectured on the basis of indirect arguments [5] that have now been explicitly confirmed.
Another conjecture that has been confirmed is the rough validity of extrapolating de Sitter results [1, 6] from the constant Hubble parameter of de Sitter background to the time-dependent one of a general cosmological background (3). However, we now have good approximations for the dependence on the first slow roll parameter $\epsilon(n)$.

We have throughout considered the inflaton field in the vector mass $M^{2}\equiv 2q^{2}\varphi\varphi^{*}$ to be constant because this is how the “effective potential” is defined. It would be easy to relax this assumption with only minor changes in the result. In particular, equations (6) and (140) would still pertain. Our approximations for the propagators would remain, so that renormalization would be unaffected. The only thing that would change is that the $\partial_{n}$ derivatives in expression (140) would now act on $\psi$ as well as $\epsilon$ and $\chi$. This would produce some factors of $\epsilon$ and its first derivative through the relation,

$$\psi^{\prime}{\psi^{\prime}}^{*}=\frac{1}{2}(D-2)\epsilon\;.\qquad(182)$$

Our most important result is probably the fact that electromagnetic corrections to the effective potential depend upon first and second derivatives of the first slow roll parameter. One consequence is that the effective potential from electromagnetism responds more strongly to changes in the geometry than for scalars [26] or spin one half fermions [32]. This can be very important during reheating (see Figure 15); it might also be significant if features occur during inflation. Another consequence is that there cannot be perfect cancellation between the positive effective potentials induced by bosons and the negative potentials induced by fermions [10]. Note that the derivatives of $\epsilon$ come exclusively from the constrained part of the photon propagator (the $t(\eta,k)$ and $u(\eta,k)$ modes), which is responsible for long range electromagnetic interactions. Dynamical photons (the $v(\eta,k)$ modes) produce no derivatives at all. These statements can be seen from expression (140), which is exact, independent of any approximation.

We close with a speculation based on the correlation between the spin of the field and the number of derivatives it induces in the effective potential: scalars produce no derivatives [26], spin one half fermions induce one derivative [32], and this paper has shown that spin one vectors give two derivatives. It would be interesting to see if the progression continues for gravitinos (which ought to induce three derivatives) and gravitons (which would induce four derivatives). Of course gravitons do not acquire a mass through coupling to a scalar inflaton, but they do respond to it, and the mode equations have been derived in a simple gauge [33, 34]. Until now it was not possible to do much with this system because it can only be solved exactly for the case of constant $\epsilon(n)$; however, we now have a reliable approximation scheme that can be used for arbitrary $\epsilon(n)$. Further, we have a worthy object of study in the graviton 1-point function, which defines how quantum 0-point fluctuations back-react to change the classical geometry. At one loop order it consists of the same sort of coincident propagator we have studied in this paper. On de Sitter background the result is just a constant times the de Sitter metric [35], which must be absorbed into a renormalization of the cosmological constant if “$H$” is to represent the true Hubble parameter.
Now suppose that the graviton propagator for general first slow roll parameter consists of a local part with up to 4th derivatives of $\epsilon(n)$ plus a nonlocal part. That sort of result could not be absorbed into any counterterm. So perhaps there is one loop back-reaction after all [36], and de Sitter represents a case of unstable equilibrium?

## Acknowledgements

This work was partially supported by Taiwan MOST grants 108-2112-M-006-004 and 107-2119-M-006-014; by NSF grants PHY-1806218 and PHY-1912484; and by the Institute for Fundamental Theory at the University of Florida.

## References

* [1] B. Allen, Nucl. Phys. B 226, 228-252 (1983) doi:10.1016/0550-3213(83)90470-4
* [2] S. R. Coleman and E. J. Weinberg, Phys. Rev. D 7, 1888-1910 (1973) doi:10.1103/PhysRevD.7.1888
* [3] Y. Akrami et al. [Planck], Astron. Astrophys. 641, A10 (2020) doi:10.1051/0004-6361/201833887 [arXiv:1807.06211 [astro-ph.CO]].
* [4] D. R. Green, Phys. Rev. D 76, 103504 (2007) doi:10.1103/PhysRevD.76.103504 [arXiv:0707.3832 [hep-th]].
* [5] S. P. Miao and R. P. Woodard, JCAP 09, 022 (2015) doi:10.1088/1475-7516/2015/9/022 [arXiv:1506.07306 [astro-ph.CO]].
* [6] T. Prokopec, N. C. Tsamis and R. P. Woodard, Annals Phys. 323, 1324-1360 (2008) doi:10.1016/j.aop.2007.08.008 [arXiv:0707.0847 [gr-qc]].
* [7] S. P. Miao, S. Park and R. P. Woodard, Phys. Rev. D 100, no.10, 103503 (2019) doi:10.1103/PhysRevD.100.103503 [arXiv:1908.05558 [gr-qc]].
* [8] J. H. Liao, S. P. Miao and R. P. Woodard, Phys. Rev. D 99, no.10, 103522 (2019) doi:10.1103/PhysRevD.99.103522 [arXiv:1806.02533 [gr-qc]].
* [9] R. P. Woodard, Lect. Notes Phys. 720, 403-433 (2007) doi:10.1007/978-3-540-71013-4_14 [arXiv:astro-ph/0601672 [astro-ph]].
* [10] S. P. Miao, L. Tan and R. P. Woodard, Class. Quant. Grav. 37, no.16, 165007 (2020) doi:10.1088/1361-6382/ab9881 [arXiv:2003.03752 [gr-qc]].
* [11] N. C. Tsamis and R. P. Woodard, J. Math. Phys. 48, 052306 (2007) doi:10.1063/1.2738361 [arXiv:gr-qc/0608069 [gr-qc]].
* [12] N. Aghanim et al. [Planck], Astron. Astrophys. 641, A6 (2020) doi:10.1051/0004-6361/201833910 [arXiv:1807.06209 [astro-ph.CO]].
* [13] A. A. Starobinsky, Adv. Ser. Astrophys. Cosmol. 3, 130-133 (1987) doi:10.1016/0370-2693(80)90670-X
* [14] T. M. Janssen, S. P. Miao, T. Prokopec and R. P. Woodard, Class. Quant. Grav. 25, 245013 (2008) doi:10.1088/0264-9381/25/24/245013 [arXiv:0808.2449 [gr-qc]].
* [15] S. P. Miao, N. C. Tsamis and R. P. Woodard, J. Math. Phys. 52, 122301 (2011) doi:10.1063/1.3664760 [arXiv:1106.0925 [gr-qc]].
* [16] A. Vilenkin and L. H. Ford, Phys. Rev. D 26, 1231 (1982) doi:10.1103/PhysRevD.26.1231
* [17] A. D. Linde, Phys. Lett. B 116, 335-339 (1982) doi:10.1016/0370-2693(82)90293-3
* [18] A. A. Starobinsky, Phys. Lett. B 117, 175-178 (1982) doi:10.1016/0370-2693(82)90541-X
* [19] V. K. Onemli and R. P. Woodard, Class. Quant. Grav. 19, 4607 (2002) doi:10.1088/0264-9381/19/17/311 [arXiv:gr-qc/0204065 [gr-qc]].
* [20] V. K. Onemli and R. P. Woodard, Phys. Rev. D 70, 107301 (2004) doi:10.1103/PhysRevD.70.107301 [arXiv:gr-qc/0406098 [gr-qc]].
* [21] S. P. Miao, N. C. Tsamis and R. P. Woodard, J. Math. Phys. 50, 122502 (2009) doi:10.1063/1.3266179 [arXiv:0907.4930 [gr-qc]].
* [22] M. G. Romania, N. C. Tsamis and R. P. Woodard, Class. Quant. Grav. 30, 025004 (2013) doi:10.1088/0264-9381/30/2/025004 [arXiv:1108.1696 [gr-qc]].
* [23] M. G. Romania, N. C. Tsamis and R. P. Woodard, JCAP 08, 029 (2012) doi:10.1088/1475-7516/2012/08/029 [arXiv:1207.3227 [astro-ph.CO]].
* [24] D. J. Brooker, N. C. Tsamis and R. P. Woodard, Phys. Rev.
D 93, no.4, 043503 (2016) doi:10.1103/PhysRevD.93.043503 [arXiv:1507.07452 [astro-ph.CO]]. * [25] D. J. Brooker, N. C. Tsamis and R. P. Woodard, Phys. Rev. D 96, no.10, 103531 (2017) doi:10.1103/PhysRevD.96.103531 [arXiv:1708.03253 [gr-qc]]. * [26] A. Kyriazis, S. P. Miao, N. C. Tsamis and R. P. Woodard, Phys. Rev. D 102, no.2, 025024 (2020) doi:10.1103/PhysRevD.102.025024 [arXiv:1908.03814 [gr-qc]]. * [27] T. M. Janssen, S. P. Miao, T. Prokopec and R. P. Woodard, JCAP 05, 003 (2009) doi:10.1088/1475-7516/2009/05/003 [arXiv:0904.1151 [gr-qc]]. * [28] D. J. Brooker, S. D. Odintsov and R. P. Woodard, Nucl. Phys. B 911, 318-337 (2016) doi:10.1016/j.nuclphysb.2016.08.010 [arXiv:1606.05879 [gr-qc]]. * [29] I. S. Gradshteyn and I. M. Ryzhik, “Table of Integrals, Series and Products, 4th Edition,” (New York, Academic Press, 1965). * [30] E. O. Kahya and R. P. Woodard, Phys. Rev. D 72, 104001 (2005) doi:10.1103/PhysRevD.72.104001 [arXiv:gr-qc/0508015 [gr-qc]]. * [31] E. O. Kahya and R. P. Woodard, Phys. Rev. D 74, 084012 (2006) doi:10.1103/PhysRevD.74.084012 [arXiv:gr-qc/0608049 [gr-qc]]. * [32] A. Sivasankaran and R. P. Woodard, [arXiv:2007.11567 [gr-qc]]. * [33] J. Iliopoulos, T. N. Tomaras, N. C. Tsamis and R. P. Woodard, Nucl. Phys. B 534, 419-446 (1998) doi:10.1016/S0550-3213(98)00528-8 [arXiv:gr-qc/9801028 [gr-qc]]. * [34] L. R. Abramo and R. P. Woodard, Phys. Rev. D 65, 063515 (2002) doi:10.1103/PhysRevD.65.063515 [arXiv:astro-ph/0109272 [astro-ph]]. * [35] N. C. Tsamis and R. P. Woodard, Annals Phys. 321, 875-893 (2006) doi:10.1016/j.aop.2005.08.004 [arXiv:gr-qc/0506056 [gr-qc]]. * [36] G. Geshnizjani and R. Brandenberger, Phys. Rev. D 66, 123507 (2002) doi:10.1103/PhysRevD.66.123507 [arXiv:gr-qc/0204074 [gr-qc]].
# Multi-view Data Visualisation via Manifold Learning

Theodoulos Rodosthenous
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK
<EMAIL_ADDRESS>

Vahid Shahrezaei
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK
<EMAIL_ADDRESS>

Marina Evangelou (contact author)
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK
<EMAIL_ADDRESS>

###### Abstract

Non-linear dimensionality reduction can be performed by manifold learning approaches, such as Stochastic Neighbour Embedding (SNE), Locally Linear Embedding (LLE) and Isometric Feature Mapping (ISOMAP). These methods aim to produce two- or three-dimensional latent embeddings, primarily to visualise the data in intelligible representations. This manuscript proposes extensions of Student’s t-distributed SNE (t-SNE), LLE and ISOMAP for dimensionality reduction and visualisation of multi-view data. Multi-view data refers to multiple types of data generated from the same samples. The proposed multi-view approaches provide more comprehensible projections of the samples compared to the ones obtained by visualising each data-view separately. Visualisation is commonly used for identifying underlying patterns within the samples. By incorporating the obtained low-dimensional embeddings from the multi-view manifold approaches into the $K$-means clustering algorithm, it is shown that clusters of the samples are accurately identified. Through the analysis of real and synthetic data, the proposed multi-SNE approach is found to have the best performance. We further illustrate the applicability of the multi-SNE approach for the analysis of multi-omics single-cell data, where the aim is to visualise and identify cell heterogeneity and cell types in biological tissues relevant to health and disease.

Keywords: Multi-view Data $\cdot$ Data Visualisation $\cdot$ Manifold Learning $\cdot$ Clustering

## 1 Introduction

Data visualisation is an important and useful component of exploratory data analysis, as it can reveal interesting patterns in the data and potential clusters of the observations. A common approach for visualising high-dimensional data ($p\gg n$) is by reducing its dimensions. Linear dimensionality reduction methods, including Principal Components Analysis (PCA) (Jolliffe and Cadima, 2016) and Non-negative Matrix Factorization (NMF) (García et al., 2018), assume linearity within data sets, and as a result these methods often fail to produce reliable representations when linearity does not hold. Manifold learning, an active research area within machine learning, does not rely on any linearity assumptions, in contrast to linear dimensionality reduction approaches. By assuming that the dimensions of the data sets are artificially high, manifold learning methods aim to capture the important information in an induced low-dimensional embedding (Zheng and Xue, 2009). The generated low-dimensional embeddings can be used for data visualisation in the 2-D or 3-D spaces.

Manifold learning approaches used for dimensionality reduction and visualisation focus on preserving at least one of the characteristics of the data. For example, Stochastic Neighbour Embedding (SNE) preserves the probability distribution of the data (Hinton and Roweis, 2003). The Locally Linear Embedding (LLE) proposed by Roweis and Saul (2000) is a neighbourhood-preserving method. The Isometric Feature Mapping (ISOMAP) proposed by Tenenbaum et al. (2000) is a quasi-isometric method based on Multi-Dimensional Scaling (Kruskal, 1964).
Spectral Embedding finds low-dimensional embeddings via spectral decomposition of the Laplacian matrix (Ng et al., 2001). The Local Tangent Space Alignment method proposed by Zhang and Zha (2004) learns the embedding by optimising local tangent spaces that represent the local geometry of each neighbourhood, and Uniform Manifold Approximation and Projection preserves the global structure of the data by constructing a theoretical framework based in Riemannian geometry and algebraic topology (McInnes et al., 2018).

This manuscript focuses on data visualisation of multi-view data, which are regarded as different types of data sets that are generated on the same samples of a study. It is very common nowadays in many different fields to generate multiple data-views on the same samples. For example, multi-view imaging data describe distinct visual features such as local binary patterns (LBP) and histograms of oriented gradients (HOG) (Shen et al., 2013), while multi-omics data, e.g. proteomics, genomics, etc., in biomedical studies quantify different aspects of an organism’s biological processes (Hasin et al., 2017). Through the collection of multi-view data, researchers are interested in better understanding the collected samples, including their visualisation, clustering and classification. Analysing the multi-view data simultaneously is not a straightforward task, as each data-view has its own distribution and variation pattern (Rodosthenous et al., 2020).

Several approaches have been proposed for the analysis of multi-view data. These include methods on clustering (Kumar et al., 2011; Liu et al., 2013; Sun et al., 2015; Ou et al., 2016; Ye et al., 2018; Ou et al., 2018; Wang and Allen, 2021), classification (Shu et al., 2019), regression (Li et al., 2019), integration (Rodosthenous et al., 2020) and dimensionality reduction (Sun, 2013; Zhao et al., 2018; Xu et al., 2015). Such approaches have been extensively discussed in the review papers of Xu et al. (2013) and Zhao et al. (2017).

In this manuscript, we focus on the visualisation task. By visualising multi-view data the aim is to obtain a global overview of the data and identify patterns that would potentially have been missed by looking at each data-view separately. Typically, multiple visualisations are produced, one from each data-view, or the features of the data-views are concatenated to produce a single visualisation. The former could provide misleading outcomes, with each data-view revealing different visualisations and patterns. The different statistical properties, physical interpretation, noise and heterogeneity between data-views suggest that concatenating features would often fail in achieving a reliable interpretation and visualisation of the data (Fu et al., 2008).

A number of multi-view visualisation approaches have been proposed in the literature, with some of these approaches based on the manifold approaches t-SNE and LLE. For example, Xie et al. (2011) proposed m-SNE, which combines the probability distributions produced by each data-view into a single distribution via a weight parameter. The algorithm then implements t-SNE on the combined distribution to obtain a single low-dimensional embedding. The proposed solution finds the optimal choice for both the low-dimensional embeddings and the weight parameter simultaneously. Similarly, Kanaan Izquierdo (2017) proposed two alternative solutions based on t-SNE, named MV-tSNE1 and MV-tSNE2.
MV-tSNE2 is similar to m-SNE, combining the probability distributions through expert opinion pooling. More recently, Canzar and Hoan Do (2021) proposed a multi-view extension of t-SNE, named j-SNE, that updates the produced low-dimensional embeddings iteratively between the data-views. Each data-view is given a weight value that is updated per iteration through regularisation. In addition, Shen et al. (2013) proposed multi-view Locally Linear Embedding (m-LLE), an extension of LLE for effectively retrieving medical images. M-LLE produces a single low-dimensional embedding by integrating the embeddings from each data-view according to a weight parameter $c$, which refers to the contribution of each data-view. Similarly to m-SNE, the algorithm optimises both the weight parameter and the embeddings simultaneously. Zong et al. (2017) proposed MV-LLE, which minimises the cost function by assuming a consensus matrix across all data-views.

Building on the existing literature, we propose here alternative extensions of the manifold approaches t-SNE, LLE and ISOMAP for visualising multi-view data. The cost functions of our proposals are different from the existing ones, as they integrate the available information from the multi-view data iteratively. At each iteration, the proposed multi-SNE updates the low-dimensional embeddings by minimising the dissimilarity between their probability distribution and the distribution of each data-view. The total cost of this approach equals the weighted sum of those dissimilarities. Our proposed variation of LLE, multi-LLE, constructs the low-dimensional embeddings by utilising a consensus weight matrix, which is taken as the weighted sum of the weight matrices computed from each data-view. Lastly, the low-dimensional embeddings in the proposed multi-ISOMAP are constructed by using a consensus graph, for which the nodes represent the samples and the edge lengths are taken as the averaged distance between the samples in each data-view. Through a comparative study via the analysis of real and synthetic data, we illustrate that our proposals result in more robust solutions compared to the approaches proposed in the literature, including m-SNE, m-LLE and MV-tSNE2.

We further illustrate, through the visualisation of the low-dimensional embeddings produced by the proposed multi-view manifold learning algorithms, that if clusters exist within the samples, they can be successfully identified. We show that this can be achieved by applying the $K$-means algorithm on the low-dimensional embeddings of the data. The $K$-means algorithm (MacQueen, 1967) was chosen to cluster the data points, as it is one of the most famous and prominent partition clustering algorithms (Xu and Tian, 2015). A better clustering performance by $K$-means suggests a visually clearer separation of clusters. Through the conducted experiments, we show that the proposed multi-SNE approach recovers well-separated clusters of the data, and has comparable performance to multi-view clustering algorithms that exist in the literature.

## 2 Material and Methods

In this section, the proposed approaches for multi-view manifold learning are described. The section starts with an introduction of the notation used throughout this manuscript. The proposed multi-SNE, multi-LLE and multi-ISOMAP are described in Sections 2.2, 2.3 and 2.4, respectively. The section ends with a description of the process for tuning the parameters of the algorithms.
### 2.1 Notation

Throughout this paper, the following notation is used:

- $N$: The number of samples.
- $X\in\mathbb{R}^{N\times p}$: A single-view data matrix, representing the original high-dimensional data used as input; $\textbf{x}_{i}\in\mathbb{R}^{p}$ is the $i^{th}$ data point of $X$.
- $M$: The number of data-views in a given data set; $m\in\{1,\cdots,M\}$ represents an arbitrary data-view.
- $X^{(m)}\in\mathbb{R}^{N\times p_{m}}$: The $m^{th}$ data-view of multi-view data; $\textbf{x}_{i}^{m}\in\mathbb{R}^{p_{m}}$ is the $i^{th}$ data point of $X^{(m)}$.
- $Y\in\mathbb{R}^{N\times d}$: A low-dimensional embedding of the original data; $\textbf{y}_{i}\in\mathbb{R}^{d}$ represents the $i^{th}$ data point of $Y$. In this manuscript, $d=2$, as the focus is on data visualisation.

### 2.2 Multi-SNE

SNE, proposed by Hinton and Roweis (2003), measures the probability distribution, $P$, of each data point $\textbf{x}_{i}$ by looking at the similarities among its neighbours. For every sample $i$ in the data, $j$ is taken as its potential neighbour with probability $p_{ij}$, given by

$$p_{ij}=\frac{\exp{(-d_{ij}^{2})}}{\sum_{k\neq i}\exp{(-d_{ik}^{2})}},$$

where $d_{ij}=\frac{||\textbf{x}_{i}-\textbf{x}_{j}||^{2}}{2\sigma_{i}^{2}}$ represents the dissimilarity between points $\textbf{x}_{i}$ and $\textbf{x}_{j}$. The value of $\sigma_{i}$ is either set by hand or found by binary search (van der Maaten and Hinton, 2008). Based on this value, a probability distribution $P_{i}=\{p_{ij}\}_{j}$ of sample $i$, with fixed perplexity, is produced. Perplexity refers to the effective number of local neighbours and is defined as $Perp(P_{i})=2^{H(P_{i})}$, where $H(P_{i})=-\sum_{j}p_{ij}\log_{2}p_{ij}$ is the Shannon entropy of $P_{i}$. It increases monotonically with the variance $\sigma_{i}$ and typically takes values between $5$ and $50$. In the same way, a probability distribution in the low-dimensional space, $Y$, is computed as follows:

$$q_{ij}=\frac{\exp{(-||\textbf{y}_{i}-\textbf{y}_{j}||^{2})}}{\sum_{k\neq i}\exp{(-||\textbf{y}_{i}-\textbf{y}_{k}||^{2})}},$$

which represents the probability of point $i$ selecting point $j$ as its neighbour. The induced embedding output, $\textbf{y}_{i}$, represented by the probability distribution $Q$, is obtained by minimising the Kullback-Leibler divergence (KL-divergence) $KL(P||Q)$ between the two distributions $P$ and $Q$ (Kullback and Leibler, 1951). The aim is to minimise the cost function:

$$C_{SNE}=\sum_{i}KL(P_{i}||Q_{i})=\sum_{i}\sum_{j}p_{ij}\log\frac{p_{ij}}{q_{ij}}$$

Hinton and Roweis (2003) assumed a Gaussian distribution in computing the similarity between two points in both the high- and low-dimensional spaces. van der Maaten and Hinton (2008) proposed a variant of SNE, called t-SNE, which uses a symmetric version of SNE and a Student t-distribution to compute the similarity between two points in the low-dimensional space $Q$, given by

$$q_{ij}=\frac{(1+||\textbf{y}_{i}-\textbf{y}_{j}||^{2})^{-1}}{\sum_{k\neq l}(1+||\textbf{y}_{k}-\textbf{y}_{l}||^{2})^{-1}}$$

T-SNE is often preferred because it reduces the effect of the crowding problem (limited area to accommodate all data points and differentiate clusters) and it is easier to optimise, as it provides simpler gradients than SNE (van der Maaten and Hinton, 2008). We propose multi-SNE, a multi-view manifold learning algorithm based on t-SNE.
Our proposal computes the KL-divergence between the distribution of a single low-dimensional embedding and the distribution of each data-view separately, and minimises their weighted sum. An iterative algorithm is proposed, in which at each iteration the induced embedding is updated by minimising the cost function:

$$C_{multi-SNE}=\sum_{m}\sum_{i}\sum_{j}w^{m}p^{m}_{ij}\log\frac{p^{m}_{ij}}{q_{ij}},\qquad(1)$$

where $w^{m}$ is the combination coefficient of the $m^{th}$ data-view. The vector $\textbf{w}=(w^{1},\cdots,w^{M})$ acts as a weight vector that satisfies $\sum_{m}w^{m}=1$. In this study, equal weights on all data-views were considered, i.e. $w^{m}=\frac{1}{M},\quad\forall m=1,\cdots,M$. The algorithm of the proposed multi-SNE approach is presented in Appendix A.

An alternative multi-view extension of t-SNE, called m-SNE, was proposed by Xie et al. (2011). M-SNE applies t-SNE on a single distribution in the high-dimensional space, which is computed by combining the probability distributions of the data-views, given by $p_{ij}=\sum_{m=1}^{M}\beta^{m}p_{ij}^{m}$. The coefficients (or weights) $\beta^{m}$ share the same role as $w^{m}$ in multi-SNE and, similarly, $\boldsymbol{\beta}=(\beta^{1},\cdots,\beta^{M})$ satisfies $\sum_{m}\beta^{m}=1$. This leads to a different cost function than the one in equation (1). MV-tSNE1 has a cost function similar to that of multi-SNE, given by:

$$C_{MV-tSNE1}=\sum_{m}\sum_{i}\sum_{j}p^{m}_{i|j}\log\frac{p^{m}_{i|j}}{q_{i|j}},\qquad(2)$$

where the conditional probabilities are used to compute the low-dimensional embeddings (Kanaan Izquierdo, 2017). The joint probabilities used in the cost function of multi-SNE are equal to the average of the two symmetric conditional probabilities, i.e. $p_{ij}=\frac{p_{i|j}+p_{j|i}}{2N}$, leading to a symmetric probability distribution. As discussed by van der Maaten and Hinton (2008), symmetric SNE reduces the computational costs of the algorithm, as the gradient descent becomes simpler and easier to solve. Kanaan Izquierdo (2017) did not pursue MV-tSNE1 any further, but instead proceeded with an alternative solution, MV-tSNE2, which combines the probability distributions (similar to m-SNE) through expert opinion pooling. A comparison between multi-SNE and MV-tSNE2 is presented in Appendix D.1.

Multi-SNE avoids combining the probability distributions of all data-views together. Instead, the induced embeddings are updated by minimising the KL-divergence between every data-view’s probability distribution and that of the low-dimensional representation we seek to obtain. In other words, the gradient of the cost is computed for each data-view and the per-view gradients are summed together; the induced embedding is then updated by taking a descent step along this summed gradient.

Throughout this paper, for all variations of t-SNE we have applied the PCA pre-training step proposed by van der Maaten and Hinton (2008), who discussed that reducing the dimensions of the input data through PCA reduces the computational time of t-SNE. In this paper, the number of principal components retained was chosen so that at least $80\%$ of the total variation (variance explained) in the original data was kept. In addition, as multi-SNE is an iterative algorithm, we opted for running it for 1,000 iterations in all analyses conducted. Alternatively, a stopping rule could have been implemented, terminating the algorithm once no significant changes in the cost function are observed. Both these options are available in the implementation of the multi-SNE algorithm.
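To make the gradient computation concrete, below is a minimal numpy sketch of this update. It is an illustration under our own simplifying assumptions rather than the exact algorithm of Appendix A: equal weights $w^{m}=1/M$, a fixed Gaussian bandwidth in place of the per-point perplexity calibration, plain gradient descent without momentum, and no PCA pre-training. All function names are ours.

```python
import numpy as np

def joint_probabilities(X, sigma=1.0):
    # Symmetrised Gaussian affinities p_ij in the high-dimensional space.
    # Assumption: one fixed bandwidth sigma; the full method instead
    # calibrates sigma_i per point so that each P_i has a fixed perplexity.
    D = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    P = np.exp(-D / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum(axis=1, keepdims=True)            # conditional p_{j|i}
    P = (P + P.T) / (2.0 * len(X))               # joint p_ij = (p_{j|i} + p_{i|j}) / 2N
    return np.maximum(P, 1e-12)

def multi_sne(views, weights=None, d=2, n_iter=1000, lr=100.0, seed=0):
    """Gradient descent on cost (1): a weighted sum of KL(P^m || Q),
    one term per data-view, all sharing a single embedding Y."""
    M, N = len(views), len(views[0])
    w = np.full(M, 1.0 / M) if weights is None else np.asarray(weights)
    Ps = [joint_probabilities(X) for X in views]
    rng = np.random.default_rng(seed)
    Y = 1e-4 * rng.standard_normal((N, d))
    for _ in range(n_iter):
        # Student-t affinities q_ij in the low-dimensional space (t-SNE kernel)
        num = 1.0 / (1.0 + np.square(Y[:, None, :] - Y[None, :, :]).sum(-1))
        np.fill_diagonal(num, 0.0)
        Q = np.maximum(num / num.sum(), 1e-12)
        # Per-view t-SNE gradients, summed with weights w^m
        grad = np.zeros_like(Y)
        for wm, P in zip(w, Ps):
            PQ = (P - Q) * num
            grad += wm * 4.0 * ((np.diag(PQ.sum(axis=1)) - PQ) @ Y)
        Y -= lr * grad
    return Y
```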
### 2.3 Multi-LLE

LLE attempts to discover a non-linear structure of high-dimensional data, $X$, by computing low-dimensional and neighbourhood-preserving embeddings, $Y$ (Saul and Roweis, 2001). The three main steps of the algorithm are:

1. The set $\Gamma_{i}$, containing the $K$ nearest neighbours of each data point $\textbf{x}_{i},i=1,\cdots,N$, is identified. The most common distance measure between the data points is the Euclidean distance; other local metrics can also be used in identifying the nearest neighbours (Roweis and Saul, 2000).
2. A weight matrix, $W$, is computed, which acts as a bridge between the high-dimensional space of $X$ and the low-dimensional space of $Y$. Initially, $W$ reconstructs $X$ by minimising the cost function:
$$\mathcal{E}_{X}=\sum_{i}|\textbf{x}_{i}-\sum_{j}W_{ij}\textbf{x}_{j}|^{2}\qquad(3)$$
where the weights $W_{ij}$ describe the contribution of the $j^{th}$ data point to the $i^{th}$ reconstruction. The optimal weights $W_{ij}$ are found by solving the least squares problem given in equation (3) subject to the constraints: (a) $W_{ij}=0$ if $j\notin\Gamma_{i}$, and (b) $\sum_{j}W_{ij}=1$.
3. Once $W$ is computed, the low-dimensional embedding $\textbf{y}_{i}$ of each data point $i=1,\cdots,N$ is obtained by minimising:
$$\mathcal{E}_{Y}=\sum_{i}|\textbf{y}_{i}-\sum_{j}W_{ij}\textbf{y}_{j}|^{2}\qquad(4)$$
The solution to equation (4) is obtained by taking the bottom $d$ non-zero eigenvectors of the sparse $N\times N$ matrix $M=(I-W)^{T}(I-W)$ (Roweis and Saul, 2000).

We propose multi-LLE, a multi-view extension of LLE, which computes the low-dimensional embeddings by using the consensus weight matrix:
$$\hat{W}=\sum_{m}\alpha^{m}W^{m},$$
where $\sum_{m}\alpha^{m}=1$ and $W^{m}$ is the weight matrix of each data-view $m=1,\cdots,M$. Thus, $\hat{Y}$ is obtained by solving:
$$\mathcal{E}_{\hat{Y}}=\sum_{i}|\hat{\textbf{y}}_{i}-\sum_{j}\hat{W}_{ij}\hat{\textbf{y}}_{j}|^{2}$$
The multi-LLE algorithm is presented in Appendix A.

Shen et al. (2013) proposed m-LLE, an alternative multi-view extension of LLE, in which LLE is applied on each data-view separately and the resulting embeddings are combined. In other words, the weight matrices $W^{m}$ are computed and $\mathcal{E}_{Y^{m}}=\sum_{i}|\textbf{y}_{i}^{m}-\sum_{j}W^{m}_{ij}\textbf{y}_{j}^{m}|^{2}$ is solved for each $m=1,\cdots,M$ separately. The weighted average of those embeddings is then taken as the unified low-dimensional embedding, i.e. $\hat{Y}=\sum_{m}\beta^{m}Y^{m}$, where $\sum_{m}\beta^{m}=1$. An alternative multi-view LLE solution was proposed by Zong et al. (2017) to find a consensus manifold, which is then used for multi-view clustering via Non-negative Matrix Factorization; we refer to this approach as MV-LLE. This solution minimises the cost function by assuming a consensus matrix across all data-views. The optimisation is then solved by using the Entropic Mirror Descent Algorithm (EMDA) (Beck and Teboulle, 2003). In contrast to m-LLE and MV-LLE, multi-LLE combines the weight matrices obtained from each data-view, instead of the LLE embeddings. No comparisons were conducted between MV-LLE and the proposed multi-LLE, as the code of the MV-LLE algorithm is not publicly available.
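The consensus-weight construction translates directly into numpy. The sketch below is illustrative only, under our own assumptions (equal weights $\alpha^{m}=1/M$, Euclidean neighbourhoods, and a standard regularised solve for the local least squares systems); the actual multi-LLE algorithm is the one given in Appendix A.

```python
import numpy as np

def lle_weights(X, K=10, reg=1e-3):
    """Barycentric reconstruction weights for one data-view (step 2 of LLE)."""
    N = len(X)
    D = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    np.fill_diagonal(D, np.inf)
    W = np.zeros((N, N))
    for i in range(N):
        nbrs = np.argsort(D[i])[:K]             # Gamma_i: the K nearest neighbours
        Z = X[nbrs] - X[i]                      # centre the neighbourhood on x_i
        G = Z @ Z.T                             # local Gram matrix
        G += reg * np.trace(G) * np.eye(K)      # regularise for numerical stability
        w = np.linalg.solve(G, np.ones(K))
        W[i, nbrs] = w / w.sum()                # enforce the sum-to-one constraint
    return W

def multi_lle(views, alphas=None, d=2):
    """Embed with the consensus weight matrix W_hat = sum_m alpha^m W^m."""
    M, N = len(views), len(views[0])
    a = np.full(M, 1.0 / M) if alphas is None else np.asarray(alphas)
    W_hat = sum(am * lle_weights(X) for am, X in zip(a, views))
    I = np.eye(N)
    Mmat = (I - W_hat).T @ (I - W_hat)
    vals, vecs = np.linalg.eigh(Mmat)           # eigenvalues in ascending order
    return vecs[:, 1:d + 1]                     # skip the bottom (constant) eigenvector
```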
### 2.4 Multi-ISOMAP

ISOMAP aims to discover a low-dimensional embedding of high-dimensional data by maintaining the geodesic distances between all points (Tenenbaum et al., 2000); it is often regarded as an extension of Multi-Dimensional Scaling (Kruskal, 1964). The ISOMAP algorithm comprises the following three steps:

- Step 1. A graph is defined. Let $G\sim(V,E)$ define a neighbourhood graph, with vertices $V$ representing all data points. The edge length between any two vertices $i,j\in V$ is defined by the distance metric $d_{X}(i,j)$, measured by the Euclidean distance. If a vertex $j$ does not belong to the $K$ nearest neighbours of $i$, no edge is drawn between them (conventionally, $d_{X}(i,j)=0$). The parameter $K$ is given as input, and it represents the connectedness of the graph $G$; as $K$ increases, more vertices are connected.
- Step 2. The shortest paths between all pairs of points in $G$ are computed. The shortest path between vertices $i,j\in V$ is defined by $d_{G}(i,j)$. Let $D_{G}\in\mathbb{R}^{|V|\times|V|}$ be a matrix containing the shortest paths between any vertices $i,j\in V$, defined by $(D_{G})_{ij}=d_{G}(i,j)$. The most efficient known algorithm to perform this task is Dijkstra’s Algorithm (Dijkstra, 1959). In large graphs, an alternative to Dijkstra’s Algorithm is to initialise $d_{G}(i,j)=d_{X}(i,j)$ and, for each $k$, replace all entries by $d_{G}(i,j)=\min\left\{d_{G}(i,j),\,d_{G}(i,k)+d_{G}(k,j)\right\}$.
- Step 3. The low-dimensional embeddings are constructed. The $i^{th}$ component of the low-dimensional embedding is given by $y_{i}=\sqrt{\lambda_{p}}u^{i}_{p}$, where $\lambda_{p}$ is the $p^{th}$ eigenvalue, in decreasing order, of the matrix $D_{G}$ and $u^{i}_{p}$ is the $i^{th}$ component of the $p^{th}$ eigenvector (Tenenbaum et al., 2000).

Multi-ISOMAP is our proposal for adapting ISOMAP to multi-view data. Let $G_{m}\sim(V,E_{m})$ be a neighbourhood graph obtained from data-view $X^{(m)}$ as defined in the first step of ISOMAP. All neighbourhood graphs are then combined into a single graph, $\tilde{G}$; the combination is achieved by computing each edge length as the weighted average of the corresponding distances across the data-views, i.e. $d_{\tilde{G}}(i,j)=\sum_{m}w_{m}d_{G_{m}}(i,j)$. Once the combined neighbourhood graph is computed, multi-ISOMAP follows steps 2 and 3 of ISOMAP described above. For simplicity, the weights throughout this paper were set as $w_{m}=\frac{1}{M},\forall m$. The multi-ISOMAP algorithm is presented in Appendix A.

For completeness, we have in addition adapted ISOMAP for multi-view visualisation following the framework of both m-SNE and m-LLE. Following the same logic, m-ISOMAP combines the ISOMAP embeddings of each data-view by taking the weighted average of those embeddings as the unified low-dimensional embedding. In other words, the low-dimensional embedding $\hat{Y}$ is obtained by computing $\hat{Y}=\sum_{m}\beta^{m}Y^{m}$, where $\sum_{m}\beta^{m}=1$.
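A compact sketch of multi-ISOMAP, using scipy for the shortest-path step, is given below. Two choices are our assumptions rather than the paper's prescription: the averaged graph keeps an edge only where it appears in every view's neighbourhood graph (missing edges propagate as non-edges), and the final embedding applies the usual classical-MDS double-centring of the squared geodesic distances before the eigendecomposition.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def knn_distance_graph(X, K=10):
    """Step 1 for one view: Euclidean edges to the K nearest neighbours."""
    D = np.sqrt(np.square(X[:, None, :] - X[None, :, :]).sum(-1))
    G = np.full_like(D, np.inf)                  # inf marks a non-edge
    for i in range(len(X)):
        nbrs = np.argsort(D[i])[1:K + 1]         # skip the point itself
        G[i, nbrs] = D[i, nbrs]
    return np.minimum(G, G.T)                    # symmetrise the graph

def multi_isomap(views, d=2, K=10):
    """Average the per-view neighbourhood graphs, then steps 2-3 of ISOMAP.
    Assumes the combined graph is connected."""
    M = len(views)
    graphs = [knn_distance_graph(X, K) for X in views]
    G_tilde = sum(graphs) / M                    # d_G~(i,j) = (1/M) sum_m d_Gm(i,j)
    D_geo = shortest_path(G_tilde, method="D", directed=False)  # Dijkstra
    # classical MDS on the geodesic distances
    N = len(D_geo)
    J = np.eye(N) - np.ones((N, N)) / N
    B = -0.5 * J @ (D_geo ** 2) @ J              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]             # top-d eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```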
### 2.5 Parameter Tuning

The multi-view manifold learning algorithms were tested on real and synthetic data sets for which the samples can be separated into several clusters. The true clusters are known and they were used to tune the parameters of the methods. To quantify the clustering performance, we used the following four extrinsic measures: (i) Accuracy (ACC), (ii) Normalised Mutual Information (NMI) (Vinh et al., 2010), (iii) Rand Index (RI) (Rand, 1971) and (iv) Adjusted Rand Index (ARI) (Hubert and Arabie, 1985). All measures take values in the range $\left[0,1\right]$, with $0$ expressing complete randomness and $1$ perfect separation between clusters. The mathematical formulas of the four measures are presented in Appendix B.

SNE, LLE and ISOMAP depend on parameters whose proper tuning ensures optimal results. LLE and ISOMAP depend on the number of nearest neighbours ($NN$). SNE depends on the Perplexity ($Perp$) parameter, which is directly related to the number of nearest neighbours. Similarly, the multi-view extensions of the three methods depend on the same parameters. The choice of the parameter can influence the visualisations and in some cases present the data in separate maps (van der Maaten and Hinton, 2012). By assuming that the data samples belong to a number of clusters that we seek to identify, the performance of the algorithms was measured for a range of tuning parameter values, $S=\left\{2,10,20,50,80,100,200\right\}$. Note that for all algorithms the parameter value cannot exceed the total number of samples in the data. For all manifold learning approaches, the following procedure was implemented to tune the optimal parameter of each method per data set:

1. The method was applied for all different parameter values in $S$.
2. The $K$-means algorithm was applied on the low-dimensional embeddings produced for each parameter value.
3. The performance of the chosen method was evaluated quantitatively by computing ACC, NMI, RI and ARI for all tested parameter values.

The optimal parameter value was finally selected based on the evaluation measures. Section 4.3 explores how the different approaches are affected by their parameter values. For the other subsections of Section 4, the optimal parameter choice per approach was used for the comparison of the multi-view approaches. Section 4.3 presents the process of parameter tuning on the synthetic data analysed and measures the performance of single-view and multi-view manifold learning algorithms. The same process was repeated for the real data analysed (see Appendix D.3 for more information).

## 3 Data

Data sets with different characteristics were analysed to explore and compare the proposed multi-view manifold learning algorithms under different scenarios (Table 1). The methods were evaluated on data sets that have different numbers of data-views, clusters and sample sizes. The real data sets analysed are classified as heterogeneous, due to the nature of their data, while the synthetic data sets are classified as non-heterogeneous, since they were generated under the same conditions and distributions. Both high-dimensional ($p\gg N$) and low-dimensional data sets were analysed. Through these comparisons we wanted to investigate how the multi-view methods perform and how they compare with single-view methods. In this section we describe the synthetic and real data sets analysed in the manuscript. Some of the real data sets analysed have previously been used in the literature for examining different multi-view algorithms, for example data integration (Wang et al., 2014) and clustering (Ou et al., 2018).

### 3.1 Synthetic Data

A motivational multi-view example was constructed to qualitatively evaluate the performance of multi-view manifold learning algorithms against their corresponding single-view algorithms. Its framework was designed specifically to produce distinct projections of the samples from each data-view.
Additional synthetic data sets were generated to explore how the algorithms behave when the separation between the clusters exists, but is not as explicit as in the motivational example.

Figure 1: Motivational Multi-view Data Scenario (MMDS). Each data-view captures different characteristics of the three clusters, and thus produces different clusterings.

All synthetic data were generated using the following process. For the same set of samples, a specified number of data-views were generated, with each data-view capturing different information about the samples. Each data-view $m$ follows a multivariate normal distribution with mean vector $\boldsymbol{\mu_{m}}=(\mu_{1},\cdots,\mu_{p_{m}})^{T}$ and covariance matrix $\Sigma_{m}=I_{p_{m}}$, where $p_{m}$ is the number of features in the $m^{th}$ data-view and $I_{p_{m}}$ is the $p_{m}\times p_{m}$ identity matrix. For each data-view, different $\boldsymbol{\mu_{m}}$ values were chosen to distinguish the clusters. Noise, $\boldsymbol{\epsilon}$, following a multivariate normal distribution with mean $\boldsymbol{\mu_{\epsilon}}=0$ and covariance matrix $\Sigma_{\epsilon}=I_{p_{m}}$, was added to increase randomness within each data-view. In other words, $X\sim MVN(\boldsymbol{\mu_{m}},\Sigma_{m})+\boldsymbol{\epsilon}$, where $MVN$ denotes the multivariate normal distribution. Distinct polynomial functions (e.g. $h(x)=x^{4}+3x^{2}+5$) were randomly generated for each data-view and applied to the samples to express non-linearity. This last step was performed to ensure that linear dimensionality reduction methods (e.g. PCA) would not successfully cluster the data. The three synthetic data sets and their characteristics are described next; a sketch of the generating process is given after the descriptions.

#### 3.1.1 Motivational Multi-view Data Scenario (MMDS)

Figure 2: More Clusters than data-views Scenario (MCS). In this example, there are 3 data-views but 5 true underlying clusters. Each data-view captures different characteristics of the five clusters, and thus produces different clusterings.

Assume that the true underlying structure of the data separates the samples into three true clusters, as presented in Figure 1. Each synthetic data-view describes the samples differently, which results in three distinct clusterings, none of which reflects the global underlying truth. In particular, the first view separates only cluster C from the others (View 1 in Figure 1), the second view separates only cluster B (View 2) and the third view separates only cluster A (View 3).

#### 3.1.2 Noisy data-view scenario (NDS)

A synthetic data set which consists of 4 data-views and 3 true underlying clusters was generated. The first three data-views follow the same structure as MMDS, while the $4^{th}$ data-view represents a completely noisy data-view, i.e. with all data points lying in a single cluster. The rationale for creating such a data set is to examine the effect of noisy data-views on the multi-view visualisation and clustering. This data set was used to show that the multi-view approaches can identify data-views that are not useful and discard them. For $n=300$ equally balanced data samples, the data-views contain $p_{m}=100,\forall m=1,2,3,4,$ features. To summarise, NDS adds a noisy data-view to the MMDS data set.

#### 3.1.3 More clusters than data-views scenario (MCS)

A synthetic data set generated similarly to MMDS, but with 5 true underlying clusters instead of 3. The true underlying structure of each data-view is shown in Figure 2. In this data set, $p_{m}=100,\forall m,$ features were generated on $n=500$ equally balanced data samples. In comparison with MMDS, MCS contains more clusters, but the same number of data-views.
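The following is a minimal sketch of the generating process described above, with illustrative choices of our own for the cluster-mean scale and the polynomial coefficients (the study randomly generates a distinct polynomial, such as $h(x)=x^{4}+3x^{2}+5$, for each data-view):

```python
import numpy as np

def make_synthetic_views(n_per_cluster=100, n_clusters=3, n_views=3, p=100, seed=0):
    """MMDS-style generator: one set of samples, several Gaussian data-views,
    and a random polynomial per view to break linearity."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(n_clusters), n_per_cluster)
    views = []
    for _ in range(n_views):
        mus = rng.normal(0.0, 3.0, size=(n_clusters, p))         # view-specific mu_m (illustrative scale)
        X = mus[labels] + rng.standard_normal((len(labels), p))  # MVN(mu_m, I) plus N(0, I) noise
        a, b, c = rng.uniform(0.5, 3.0, size=3)
        X = a * X ** 4 + b * X ** 2 + c                          # e.g. h(x) = x^4 + 3x^2 + 5
        views.append(X)
    return views, labels

views, labels = make_synthetic_views()           # an MMDS-like data set
```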
### 3.2 Real Data

The three real data sets analysed in the study are described below.

#### 3.2.1 Cancer Types

This data set (available at http://compbio.cs.toronto.edu/SNF/SNF/Software.html) includes $65$ patients with breast cancer, $82$ with kidney cancer and $106$ with lung cancer. For each patient, three data-views are available: (a) genomics ($p_{1}=10299$ genes), (b) epigenomics ($p_{2}=22503$ methylation sites) and (c) transcriptomics ($p_{3}=302$ mi-RNA sequences). The aim is to cluster patients by their cancer type (Wang et al., 2014).

#### 3.2.2 Caltech7

Caltech-101 contains pictures of objects belonging to 101 categories. This publicly available subset of Caltech-101 (https://github.com/yeqinglee/mvdata) contains 7 classes. It consists of $1474$ objects on six data-views: (a) Gabor ($p_{1}=48$), (b) wavelet moments ($p_{2}=40$), (c) CENTRIST ($p_{3}=254$), (d) histogram of oriented gradients ($p_{4}=1984$), (e) GIST ($p_{5}=512$), and (f) local binary patterns ($p_{6}=928$) (Fei-Fei et al., 2006).

#### 3.2.3 Handwritten Digits

This data set (https://archive.ics.uci.edu/ml/datasets/Multiple+Features) consists of features on handwritten numerals ($0-9$) extracted from a collection of Dutch utility maps. Per class, $200$ patterns have been digitised in binary images (in total there are 2000 patterns). These digits are represented in terms of six data-views: (a) Fourier coefficients of the character shapes ($p_{1}=76$), (b) profile correlations ($p_{2}=216$), (c) Karhunen-Loève coefficients ($p_{3}=64$), (d) pixel averages in 2 x 3 windows ($p_{4}=240$), (e) Zernike moments ($p_{5}=47$) and (f) morphological features ($p_{6}=6$) (Dua and Graff, 2017).

The handwritten digits data set is characterised by having perfectly balanced data samples; each of the ten clusters contains exactly $200$ numerals. On the other hand, caltech7 is an imbalanced data set, with the first two clusters containing many more samples than the other clusters. The number of samples in each cluster is {A: 435, B: 798, C: 52, D: 34, E: 35, F: 64, G: 56}. The performance of the methods was explored on both the imbalanced caltech7 data set and a balanced version of the data, for which $50$ samples from clusters $A$ and $B$ were randomly selected.

Table 1: The characteristics of the data sets analysed. The number of views, the number of clusters, the largest number of features amongst the data-views, and the number of samples are presented for both the real and synthetic data sets. Real data are taken as heterogeneous, whereas the synthetic data are regarded as homogeneous.
| | Data Set | Views ($M$) | Clusters ($k$) | Features ($p_{largest}$) | Samples ($N$) | Heterogeneous | High-dimensional |
|---|---|---|---|---|---|---|---|
| Real | Cancer Types | 3 | 3 | 22503 | 253 | ✓ | ✓ |
| | Caltech7 | 6 | 7 | 1984 | 1474 | ✓ | ✓ |
| | Handwritten Digits | 6 | 10 | 240 | 2000 | ✓ | ✗ |
| Synthetic | MMDS | 3 | 3 | 300 | 300 | ✗ | ✗ |
| | NDS | 4 | 3 | 400 | 300 | ✗ | ✓ |
| | MCS | 3 | 5 | 300 | 500 | ✗ | ✗ |

## 4 Results

In this section, we illustrate the application and evaluation of the proposed multi-view extensions of t-SNE, LLE and ISOMAP on real and synthetic data. In the following subsections we address the following questions:

1. Can multi-view manifold learning approaches obtain better visualisations than single-view approaches? The performance of the multi-view approaches in visualising the underlying structure of the data is illustrated. It is shown how the underlying structure is misrepresented when individual data sets or the concatenated data set are visualised.
2. The visualisations of multi-view approaches are quantitatively evaluated using $K$-means. By extracting the low-dimensional embeddings of the multi-view approaches and inputting them as features in the clustering algorithm $K$-means, we quantitatively evaluate the performance of the approaches in identifying underlying clusters and patterns within the data.
3. The effect of the parameter values on the multi-view manifold learning approaches is explored. As discussed, the proposed multi-view manifold approaches depend on a parameter that requires tuning. In a series of experiments we investigated the effect that the parameter value has on each approach. This was done by exploring both the visualisations produced and the clustering performance of the approaches for different parameter values.
4. Should we use all available data-views? If some data-views contain more noise than signal, should we discard them? These are two crucial questions that concern every researcher working with multi-view data: are all data-views necessary and beneficial to the final outcome? We address these questions by analysing data sets that contain noisy data. By investigating both the produced visualisations and the clusterings obtained with and without the noisy data, we discuss why it is not always beneficial to include all available data-views.

The section ends by proposing alternative variations of the best performing approach, multi-SNE. First, we propose a way of automatically computing the weights assigned to each data-view. In addition, we explore an alternative pre-training step for multi-SNE, in which, instead of conducting PCA on each data-view, multi-CCA is applied on the multiple data-views to reduce their dimensions into a latent space of uncorrelated embeddings (Rodosthenous et al., 2020).

### 4.1 Comparison Between Single-view and Multi-view Visualisations

Figure 3: Visualisations of MMDS. Projections produced by the SNE-, LLE- and ISOMAP-based algorithms. The projections within the red frames present our proposed methods: multi-SNE, multi-LLE and multi-ISOMAP. The parameters $Perp$ and $NN$ refer to the optimised perplexity and number of nearest neighbours, respectively.

Visualising multi-view data can be trivially achieved either by looking at the visualisations produced by each data-view, or by concatenating all features into a long vector.
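For reference, both trivial strategies are one-liners with scikit-learn's single-view implementations (`make_synthetic_views` is the hypothetical generator sketched in Section 3):

```python
import numpy as np
from sklearn.manifold import TSNE, LocallyLinearEmbedding, Isomap

# One embedding per data-view: may reveal conflicting structures
per_view = [TSNE(n_components=2, perplexity=30.0).fit_transform(X) for X in views]

# Concatenate features, then embed once (SNE_concat / LLE_concat / ISOMAP_concat)
X_concat = np.hstack(views)
Y_sne = TSNE(n_components=2, perplexity=30.0).fit_transform(X_concat)
Y_lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10).fit_transform(X_concat)
Y_iso = Isomap(n_components=2, n_neighbors=10).fit_transform(X_concat)
```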
T-SNE, LLE and ISOMAP applied separately on each single data-view of the MMDS data set capture the correct local underlying structure of the respective data-view (Figure 3). However, by design they cannot capture the global structure of the data. $\text{SNE}_{concat}$, $\text{LLE}_{concat}$ and $\text{ISOMAP}_{concat}$ represent the trivial solutions of concatenating the features of all data-views before applying t-SNE, LLE and ISOMAP, respectively. These trivial solutions capture mostly the structure of the third data-view, because that data-view has a higher variability between the clusters than the other two.

Multi-SNE, multi-LLE and multi-ISOMAP produced the best visualisations out of all SNE-based, LLE-based and ISOMAP-based approaches, respectively. These solutions were able to clearly separate the three true clusters, with multi-SNE showing the clearest separation between them. Even though m-SNE separates the samples according to their corresponding clusters, this separation would not be recognisable if the true labels were unknown, as the clusters are not sufficiently separated. The visualisation by m-LLE was similar to the ones produced by single-view solutions on concatenated features, while m-ISOMAP bundles all samples into a single cluster. In visualising the MMDS data set via both single-view and multi-view approaches, multi-SNE has shown the most promising results (Figure 3). We have shown that single-view analyses may lead to conflicting results, while multi-view approaches are able to capture the true underlying structure of the synthetic MMDS data.

### 4.2 Multi-view Manifold Learning for Clustering

Figure 4: Multi-SNE visualisations of handwritten digits. Projections produced by multi-SNE with perplexity $Perp=10$. Colours present the clustering of the data points by (a) $K$-means, and (b) ground truth.

It is very common in studies to utilise the visualisation of data to identify any underlying patterns or clusters within the data samples. Here, it is illustrated how the multi-view approaches can be used to identify such clusters. To quantify the visualisation of the data, we applied the $K$-means algorithm on the low-dimensional embeddings produced by the multi-view manifold learning algorithms. If the two-dimensional embeddings can separate the data points into their respective clusters quantitatively with high accuracy via a clustering algorithm, then those clusters are expected to be qualitatively separated and visually shown in two dimensions. For all examined data sets (synthetic and real), the number of clusters (ground truth) within the samples is known, which motivates the implementation of $K$-means over alternative clustering algorithms. The number of clusters was used as the input parameter, $K$, of the $K$-means algorithm, and by computing the clustering measures we evaluated whether the correct sample allocations were made. The proposed multi-SNE, multi-LLE and multi-ISOMAP approaches were found to outperform their competitive multi-view extensions (m-SNE, m-LLE, m-ISOMAP) as well as their concatenated versions ($\text{SNE}_{concat}$, $\text{LLE}_{concat}$, $\text{ISOMAP}_{concat}$) (Tables 2 and 3). For the majority of the data sets, the multi-SNE approach was found to overall outperform all other approaches.
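The embed-then-cluster evaluation takes a few lines with scikit-learn; `multi_sne` is the hypothetical sketch from Section 2.2, and the two metrics shown are scikit-learn's implementations of NMI and ARI:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

Y = multi_sne(views)                             # two-dimensional embeddings
pred = KMeans(n_clusters=3, n_init=10).fit_predict(Y)

print("NMI:", normalized_mutual_info_score(labels, pred))
print("ARI:", adjusted_rand_score(labels, pred))
```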
Figure 5: Multi-SNE visualisations of caltech7 and its balanced subset. Projections produced by multi-SNE with perplexity $Perp=80$ and $Perp=10$ for the original and balanced caltech7 data sets, respectively. Colours present the clustering of the data points by (a), (c) $K$-means, and (b), (d) ground truth. Panels (a), (b) present the data points of the original caltech7 data set, while (c), (d) are for its balanced subset.

Figure 4 shows a comparison between the true clusters of the handwritten digits data set and the clusters identified by $K$-means. The clusters reflecting the digits 6 and 9 are clustered together, but all remaining clusters are well separated and agree with the truth. Multi-SNE applied on caltech7 produces a good visualisation, with clusters A and B being clearly separated from the rest (Figure 5b). Clusters C and G are also well separated, but the remaining three clusters are bundled together. Applying $K$-means to that low-dimensional embedding does not capture the true structure of the data (Table 2). It provides a solution with all clusters being equally sized (Figure 5a), and thus its quantitative evaluation is misleading. Motivated by this result, we have further explored the performance of the proposed approaches on a balanced version of the caltech7 data set (generated as described in Section 3.2). Similarly to the visualisation of the original data set, the visualisation of the balanced caltech7 data set shows clusters A, B, C and G to be well separated, while the remaining clusters are still bundled together (Figures 5c and 5d).

Table 2: Clustering performance. For each data set, red highlights the method with the best performance on each measure within each group of algorithms (SNE-, LLE- or ISOMAP-based). The overall superior method for each data set is depicted in bold. The parameters $Perp$ and $NN$ refer to the selected perplexity and number of nearest neighbours, respectively; they were optimised for the corresponding methods. Due to the non-convexity of SNE-based approaches, the mean (and standard deviation) of $100$ separate runs on the same data is reported.
| Data Set | Algorithm | Accuracy | NMI | RI | ARI |
|---|---|---|---|---|---|
| Handwritten Digits | $\text{SNE}_{concat}$ [Perp=10] | 0.717 (0.032) | 0.663 (0.013) | 0.838 (0.005) | 0.568 (0.026) |
| | m-SNE [Perp=10] | 0.776 (0.019) | 0.763 (0.009) | 0.938 (0.004) | 0.669 (0.019) |
| | multi-SNE [Perp=10] | 0.882 (0.008) | 0.900 (0.005) | 0.969 (0.002) | 0.823 (0.008) |
| | $\text{LLE}_{concat}$ [NN=10] | 0.562 | 0.560 | 0.871 | 0.441 |
| | m-LLE [NN=10] | 0.632 | 0.612 | 0.896 | 0.503 |
| | multi-LLE [NN=5] | 0.614 | 0.645 | 0.897 | 0.524 |
| | $\text{ISOMAP}_{concat}$ [NN=20] | 0.634 | 0.619 | 0.905 | 0.502 |
| | m-ISOMAP [NN=20] | 0.636 | 0.628 | 0.898 | 0.477 |
| | multi-ISOMAP [NN=5] | 0.658 | 0.631 | 0.909 | 0.518 |
| Caltech7 | $\text{SNE}_{concat}$ [Perp=50] | 0.470 (0.065) | 0.323 (0.011) | 0.698 (0.013) | 0.290 (0.034) |
| | m-SNE [Perp=10] | 0.542 (0.013) | 0.504 (0.029) | 0.757 (0.010) | 0.426 (0.023) |
| | multi-SNE [Perp=80] | 0.506 (0.035) | 0.506 (0.006) | 0.754 (0.009) | 0.428 (0.022) |
| | $\text{LLE}_{concat}$ [NN=100] | 0.425 | 0.372 | 0.707 | 0.305 |
| | m-LLE [NN=5] | 0.561 | 0.348 | 0.718 | 0.356 |
| | multi-LLE [NN=80] | 0.638 | 0.490 | 0.732 | 0.419 |
| | $\text{ISOMAP}_{concat}$ [NN=20] | 0.408 | 0.167 | 0.634 | 0.151 |
| | m-ISOMAP [NN=5] | 0.416 | 0.306 | 0.686 | 0.261 |
| | multi-ISOMAP [NN=10] | 0.519 | 0.355 | 0.728 | 0.369 |
| Caltech7 (balanced) | $\text{SNE}_{concat}$ [Perp=80] | 0.492 (0.024) | 0.326 (0.018) | 0.687 (0.023) | 0.325 (0.015) |
| | m-SNE [Perp=10] | 0.581 (0.011) | 0.444 (0.013) | 0.838 (0.022) | 0.342 (0.016) |
| | multi-SNE [Perp=20] | 0.749 (0.008) | 0.686 (0.016) | 0.905 (0.004) | 0.619 (0.009) |
| | $\text{LLE}_{concat}$ [NN=20] | 0.567 | 0.348 | 0.725 | 0.380 |
| | m-LLE [NN=10] | 0.403 | 0.169 | 0.617 | 0.139 |
| | multi-LLE [NN=5] | 0.622 | 0.454 | 0.710 | 0.391 |
| | $\text{ISOMAP}_{concat}$ [NN=5] | 0.434 | 0.320 | 0.791 | 0.208 |
| | m-ISOMAP [NN=5] | 0.455 | 0.299 | 0.797 | 0.224 |
| | multi-ISOMAP [NN=5] | 0.548 | 0.368 | 0.810 | 0.267 |
| Cancer types | $\text{SNE}_{concat}$ [Perp=10] | 0.625 (0.143) | 0.363 (0.184) | 0.301 (0.113) | 0.687 (0.169) |
| | m-SNE [Perp=10] | 0.923 (0.010) | 0.839 (0.018) | 0.876 (0.011) | 0.922 (0.014) |
| | multi-SNE [Perp=20] | 0.964 (0.007) | 0.866 (0.023) | 0.902 (0.005) | 0.956 (0.008) |
| | $\text{LLE}_{concat}$ [NN=10] | 0.502 | 0.122 | 0.091 | 0.576 |
| | m-LLE [NN=20] | 0.637 | 0.253 | 0.235 | 0.647 |
| | multi-LLE [NN=10] | 0.850 | 0.567 | 0.614 | 0.826 |
| | $\text{ISOMAP}_{concat}$ [NN=5] | 0.384 | 0.015 | 0.009 | 0.556 |
| | m-ISOMAP [NN=10] | 0.390 | 0.020 | 0.013 | 0.558 |
| | multi-ISOMAP [NN=50] | 0.514 | 0.116 | 0.093 | 0.592 |

Table 3: Clustering performance. For each data set, red highlights the method with the best performance on each measure within each group of algorithms (SNE-, LLE- or ISOMAP-based). The overall superior method for each data set is depicted in bold. The parameters $Perp$ and $NN$ refer to the selected perplexity and number of nearest neighbours, respectively; they were optimised for the corresponding methods.
| Data Set | Algorithm | Accuracy | NMI | RI | ARI |
|---|---|---|---|---|---|
| NDS | $\text{SNE}_{concat}$ [Perp=80] | 0.747 (0.210) | 0.628 (0.309) | 0.817 (0.324) | 0.598 (0.145) |
| | m-SNE [Perp=50] | 0.650 (0.014) | 0.748 (0.069) | 0.766 (0.022) | 0.629 (0.020) |
| | multi-SNE [Perp=80] | 0.989 (0.006) | 0.951 (0.029) | 0.969 (0.019) | 0.987 (0.009) |
| | $\text{LLE}_{concat}$ [NN=5] | 0.606 (0.276) | 0.477 (0.357) | 0.684 (0.359) | 0.446 (0.218) |
| | m-LLE [NN=20] | 0.685 (0.115) | 0.555 (0.134) | 0.768 (0.151) | 0.528 (0.072) |
| | multi-LLE [NN=20] | 0.937 (0.044) | 0.768 (0.042) | 0.922 (0.028) | 0.823 (0.047) |
| | $\text{ISOMAP}_{concat}$ [NN=100] | 0.649 (0.212) | 0.528 (0.265) | 0.750 (0.286) | 0.475 (0.133) |
| | m-ISOMAP [NN=5] | 0.610 (0.234) | 0.453 (0.221) | 0.760 (0.280) | 0.386 (0.138) |
| | multi-ISOMAP [NN=300] | 0.778 (0.112) | 0.788 (0.234) | 0.867 (0.194) | 0.730 (0.094) |
| MCS | $\text{SNE}_{concat}$ [Perp=200] | 0.421 (0.200) | 0.215 (0.185) | 0.711 (0.219) | 0.173 (0.089) |
| | m-SNE [Perp=2] | 0.641 (0.069) | 0.670 (0.034) | 0.854 (0.080) | 0.575 (0.055) |
| | multi-SNE [Perp=50] | 0.919 (0.046) | 0.862 (0.037) | 0.942 (0.052) | 0.819 (0.018) |
| | $\text{LLE}_{concat}$ [NN=50] | 0.569 (0.117) | 0.533 (0.117) | 0.796 (0.123) | 0.432 (0.051) |
| | m-LLE [NN=20] | 0.540 (0.079) | 0.627 (0.051) | 0.819 (0.077) | 0.487 (0.026) |
| | multi-LLE [NN=20] | 0.798 (0.059) | 0.647 (0.048) | 0.872 (0.064) | 0.607 (0.022) |
| | $\text{ISOMAP}_{concat}$ [NN=150] | 0.628 (0.149) | 0.636 (0.139) | 0.834 (0.167) | 0.526 (0.071) |
| | m-ISOMAP [NN=5] | 0.686 (0.113) | 0.660 (0.106) | 0.841 (0.119) | 0.565 (0.051) |
| | multi-ISOMAP [NN=300] | 0.717 (0.094) | 0.630 (0.101) | 0.852 (0.118) | 0.570 (0.044) |

Through the conducted work, it was shown that the multi-view approaches proposed in the manuscript generate low-dimensional embeddings that can be used as input features in a clustering algorithm (for example, the $K$-means algorithm) for identifying clusters that exist within the data set. We have illustrated that the proposed approaches outperform existing multi-view approaches and that the visualisations produced by multi-SNE are very close to the ground truth of the data sets. Alternative clustering algorithms that do not require the number of clusters as input can be considered as well. For example, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) measures the density around each data point and does not require the true number of clusters as input (Ester et al., 1996). In situations where the true number of clusters is unknown, DBSCAN would be preferable over $K$-means. For completeness of our work, DBSCAN was applied on two of the real data sets explored, with similar results observed to those with $K$-means. The proposed multi-SNE approach was the best performing method in partitioning the data samples. The analysis using DBSCAN can be found in Appendix D.8. An important observation is that caution is needed when data sets with imbalanced clusters are analysed, as the quantitative performance of the approaches on such data sets is not very robust.

### 4.3 Optimal Parameter Selection

Figure 6: NDS evaluation measures. (a) NMI values along different parameter values for all manifold learning algorithms and (b) misclustering error at the optimal parameter values.

SNE, LLE and ISOMAP depend on a parameter that requires tuning. Even though the parameter is defined differently in each algorithm, it is always related to the number of nearest neighbours. As described earlier, the optimal parameter was found by comparing the performance of the methods over the range of parameter values $S=\left\{2,10,20,50,80,100,200\right\}$. In this section the synthetic data sets NDS and MCS were analysed, because both data sets separate the samples into known clusters by design, so evaluation via clustering measures is appropriate. To find the optimal parameter value, the performance of the algorithms was evaluated by applying $K$-means on the low-dimensional embeddings and comparing the resulting clusterings against the truth. Once the optimal parameter was found, we confirmed that the clusters were visually separated by manually inspecting the two-dimensional embeddings.
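A sketch of this tuning loop is given below, with NMI shown as the selection criterion; `embed` stands for any of the manifold learning methods, taken here as a function of the data-views and a single tuning parameter:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

S = [2, 10, 20, 50, 80, 100, 200]                # candidate parameter values

def tune(embed, views, labels, n_clusters):
    """embed(views, param) -> 2-D embedding; returns the NMI-optimal parameter."""
    scores = {}
    for param in S:
        Y = embed(views, param)
        pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Y)
        scores[param] = normalized_mutual_info_score(labels, pred)
    return max(scores, key=scores.get), scores
```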
As described earlier, the optimal parameter was found by comparing the performance of the methods on a range of parameter values, $S=\left\{2,10,20,50,80,100,200\right\}$. In this section the synthetic data sets, NDS and MCS, were analysed, because both data sets separate the samples into known clusters by design, so evaluation via clustering measures is appropriate. To find the optimal parameter value, the performance of the algorithms was evaluated by applying $K$-means on the low-dimensional embeddings and comparing the resulting clusterings against the truth. Once the optimal parameter was found, we confirmed that the clusters were visually separated by manually inspecting the two-dimensional embeddings. Since the data in NDS and MCS are perfectly balanced and were generated for clustering, this approach can effectively evaluate the data visualisations.

Figure 7: MCS evaluation measures. The clustering evaluation measures are plotted against different parameter values for all SNE-, LLE- and ISOMAP-based algorithms.

On NDS, single-view SNE, LLE and ISOMAP algorithms produced a misclustering error of $0.3$, meaning that a third of the samples was incorrectly clustered (Figure 6b). This observation shows that single-view methods capture the true local underlying structure of each synthetic data-view. The only exception for NDS is the fourth data-view, for which the error is closer to $0.6$, i.e. the clusters are assigned essentially at random (which follows the simulation design, as it was designed to be a random data-view). After concatenating the features of all data-views, the performance of single-view approaches remains poor (Figure 6a). The variance of the misclustering error of this solution is much greater, suggesting that single-view manifold learning algorithms on concatenated data are not robust and thus not reliable. Increasing the noise level (either by incorporating additional noisy data-views, or by increasing the dimensions of the noisy data-view) in this synthetic data set had little effect on the overall performance of the multi-view approaches (see Appendix D.4 for more information).

On both NDS and MCS, multi-LLE and multi-SNE were found to be sensitive to the choice of their corresponding parameter value (Figures 6a and 7). While multi-LLE performed best when the number of nearest neighbours was low, multi-SNE provided better results as the perplexity increased. On the other hand, multi-ISOMAP had the highest NMI value when the parameter was high. Overall, ISOMAP-based multi-view algorithms showed higher variability than the other multi-view methods, which makes them less favourable solutions. The performance of ISOMAP-based methods improved as the parameter value increased (Figure 7). However, they were outperformed by multi-LLE and multi-SNE on both synthetic data sets.

Out of the three manifold learning foundations, LLE-based approaches depend the most on their parameter value to produce the optimal outcome. Specifically, their performance dropped when the parameter value lay between $20$ and $100$ (Figure 6a). When the number of nearest neighbours was set to be greater than $100$, their performance started to improve. Out of all LLE-based algorithms, the highest NMI and lowest misclustering error were obtained by multi-LLE (Figures 6 and 7). Our observations on the tuning parameters of LLE-based approaches are in agreement with earlier studies (Karbauskaitė et al., 2007; Valencia-Aguirre et al., 2009); a sketch of the tuning loop used throughout this section follows.
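As a minimal sketch of the procedure just described, assuming scikit-learn and a hypothetical function `embed(views, param)` standing in for any of the multi-view manifold learning methods, the tuning loop amounts to:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

S = [2, 10, 20, 50, 80, 100, 200]  # candidate perplexity / nearest-neighbour values

def tune(views, labels, n_clusters, embed):
    """Return the parameter in S maximising NMI of K-means on the embeddings."""
    scores = {}
    for param in S:
        Y = embed(views, param)  # low-dimensional embedding for this parameter
        pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Y)
        scores[param] = normalized_mutual_info_score(labels, pred)
    return max(scores, key=scores.get), scores
```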
Both Karbauskaitė et al. (2007) and Valencia-Aguirre et al. (2009) found that LLE performs best with a low number of nearest neighbours, and their conclusions reflect the performance of multi-LLE, which performed best at low values of the tuning parameter. Even though m-SNE performed better than single-view methods in terms of both clustering and error variability, multi-SNE produced the best results (Figures 6 and 7). In particular, multi-SNE outperformed all algorithms presented in this paper on both NDS and MCS. Even though it performed poorly for low perplexity values, its performance improved for $Perp\geq 20$. Multi-SNE was the algorithm with the lowest error variance, making it a robust and preferable solution.

The four implemented measures (Accuracy, NMI, RI and ARI) use the true clusters of the samples to evaluate the clustering performance. In situations where cluster allocation is unknown, alternative clustering evaluation measures can be used, such as the Silhouette score (Rousseeuw, 1987). The Silhouette score, in contrast to the other measures, does not require the cluster allocation as input and is a widely used approach for identifying the best number of clusters and clustering allocation in an unsupervised setting. Evaluating the clustering performance of the methods via the Silhouette score agrees with the other four evaluation measures, with multi-SNE producing the highest value out of all multi-view manifold learning solutions. The Silhouette score of all methods applied on the MCS data set can be found in Appendix D.7.

The same process of parameter tuning was implemented for the real data sets and the corresponding performance is presented in Appendix D.3. In contrast to the synthetic data, multi-SNE on the cancer types data performed best at low perplexity values. For the remaining data sets, its performance was stable across all parameter values. With the exception of the cancer types data, the performance of LLE-based solutions follows their behaviour on synthetic data.

### 4.4 Optimal Number of Data-views

Figure 8: Visualisations of cancer types. Projections produced by all SNE-based manifold learning algorithms on all possible combinations of the three data-views in the cancer types data set. The parameter $Perp$ refers to the selected perplexity, which was optimised for the corresponding methods.

It is common to think that more information would lead to better results, and in theory that should be the case. However, in practice that is not always true (Kumar et al., 2011). Using the cancer types data set, we explored whether the visualisations and clusterings are improved if all or only a subset of the data-views are used. With three available data-views, we implemented a multi-view visualisation on the three combinations of two data-views and the single combination of all three data-views (the subset scan is sketched below). The genomics data-view provides a reasonably good separation of the three cancer types, whereas the miRNA data-view fails in this task, as it provides a visualisation that reflects random noise (first column of plots in Figure 8). This observation is validated quantitatively by evaluating the produced t-SNE embeddings (Table 8 in Appendix D.5). Concatenating features from the different data-views before implementing t-SNE does not improve the final outcome of the algorithm, regardless of the data-view combination. Overall, multi-view manifold learning algorithms have improved the data visualisation to a great extent.
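A minimal sketch of this subset scan, again using the hypothetical `embed` routine from above:

```python
from itertools import combinations

def scan_view_subsets(views, embed):
    """Embed every subset of at least two data-views and return all embeddings."""
    results = {}
    for r in range(2, len(views) + 1):
        for idx in combinations(range(len(views)), r):
            results[idx] = embed([views[i] for i in idx])
    return results
```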
When all three data-views are considered, both multi-SNE and m-SNE provide a good separation of the clusters (Figure 8). However, the true cancer types can be identified perfectly when the miRNA data-view is discarded. In other words, the optimal solution in this data set is obtained when only the genomics and epigenomics data-views are used. That is because the miRNA data-view contains little information about the cancer types and adds random noise, which makes the task of separating the data points more difficult.

Figure 9: Multi-SNE visualisations of MMDS and NDS. Projections produced by multi-SNE with perplexity $Perp=100$ for both MMDS and NDS.

This observation was also noted between the visualisations of MMDS and NDS (Figure 9). The only difference between the two synthetic data sets is the additional noisy data-view in NDS. Even though NDS separates the samples into their corresponding clusters, the separation is not as clear as it is in the projection of MMDS via multi-SNE. In agreement with the exploration of the cancer types data set, it is favourable to discard any noisy data-views in the implementation of multi-view manifold learning approaches. It is not always a good idea to include all available data-views in multi-view manifold learning algorithms; some data-views may provide noise which would result in a worse visualisation than discarding those data-views entirely. The noise of a data-view with unknown labels may be spotted in a single-view t-SNE plot (all data-points in a single cluster), or identified, if possible, via quantification measures such as the signal-to-noise ratio.

### 4.5 Multi-SNE variations

This section presents two alternative variations of multi-SNE: automatic weight adjustments, and multi-CCA as a pre-training step for reducing the dimensions of the input data-views.

#### 4.5.1 Automated weight adjustments

Figure 10: Cancer types and NDS with automated weight adjustments. The first row presents the visualisations produced by multi-SNE with the automated weight adjustment procedure implemented. The second row presents the weights assigned to each data-view at each step of the iteration. For both data sets, the iterations ran for a maximum of 1,000 steps.

A simple weight-updating approach is proposed based on the KL-divergence measure from each data-view. This approach guarantees that more weight is given to the data-views producing lower KL-divergence measures and that no data-view is completely discarded from the algorithm. Recall that $KL(P||Q)\in[0,\infty)$, with $KL(P||Q)=0$ if the two distributions, $P$ and $Q$, are perfectly matched. Let $\textbf{k}=(k^{(1)},\cdots,k^{(M)})$ be a vector, where $k^{(m)}=KL(P^{(m)}||Q),\;\forall m\in\{1,\cdots,M\}$, and initialise the weight vector $\textbf{w}=(w^{(1)},\cdots,w^{(M)})$ by $w^{(m)}=\frac{1}{M},\forall m$. To adjust the weights of each data-view, the following steps are performed at each iteration (a direct transcription is sketched after this list):

1. Normalise the KL-divergences by $k^{(m)}=\frac{k^{(m)}}{\sum_{i}^{M}k^{(i)}}$. This step ensures that $k^{(m)}\in[0,1],\forall m$, and that $\sum_{m}k^{(m)}=1$.
2. Set the weight of each data-view to $w^{(m)}=1-k^{(m)}$. This step ensures that the data-view with the lowest KL-divergence value receives the highest weight.
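A minimal numpy sketch of the two update steps (the surrounding gradient computation is omitted):

```python
import numpy as np

def update_weights(kl):
    """kl: length-M vector with kl[m] = KL(P^(m) || Q) at the current iteration."""
    k = np.asarray(kl, dtype=float)
    k = k / k.sum()   # step 1: normalise, so each k[m] lies in [0, 1] and they sum to 1
    return 1.0 - k    # step 2: the lowest KL-divergence receives the highest weight
```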
Based on the analysis in Section 4.4, we know that the cancer types and NDS data sets contain noisy data-views and thus that multi-SNE performs better when these are entirely discarded. Here, we treat this information as unknown and implement the proposed weight-updating approach on those two data sets, to test whether the weights are adjusted correctly according to the noise level of each data-view.

The proposed weight-adjustment process, which looks at the KL-divergence produced between each data-view and the low-dimensional embeddings, distinguishes which data-views contain the most noise, and the weight values are updated accordingly (Figure 10b). In cancer types, transcriptomics (miRNA) received the lowest weight, while genomics (Genes) was given the highest value. This weight adjustment is in agreement with the qualitative (t-SNE plots) and quantitative (clustering) evaluations performed in Section 4.4. In NDS, $X^{(4)}$, which represents the noisy data-view, received the lowest weight, and the other data-views had around the same weight value, as they all impact the final outcome equally.

Figure 11: Multi-SNE on handwritten digits, with multi-CCA as pre-training. Projections of the handwritten digits data set, produced by multi-SNE. Multi-CCA was implemented on all data-views, with their respective canonical vectors acting as input features for multi-SNE.

The proposed weight-adjustment process updates the weights at each iteration. For the first $100$ iterations, the weights do not change, as the algorithm adjusts to the produced low-dimensional embeddings (Figure 10b). In NDS, the weights converge after $250$ iterations, while in cancer types, they are still being updated even after $1000$ iterations. The changes recorded are, however, small, and the weights can be said to have stabilised. The low-dimensional embeddings produced in NDS with weight adjustments clearly separate the three clusters, an observation missed without the implementation of the weight-updating approach (Figure 10a); the result resembles the multi-SNE plot of MMDS (i.e. the data set without the noisy data-view; Figure 9). The embeddings produced in cancer types do not separate the three clusters as clearly as multi-SNE without the noisy data-view, but they project a clearer separation than multi-SNE on the complete data set without weight adjustments.

#### 4.5.2 Multi-CCA as pre-training

As mentioned earlier, van der Maaten and Hinton (2008) proposed the implementation of PCA as a pre-training step for t-SNE to reduce the computational costs, provided that the fraction of variance explained by the principal components is high. In this paper, pre-training via PCA was implemented in all variations of SNE. Alternative linear dimensionality reduction methods may be considered, especially for multi-view data. In addition to reducing the dimensions of the original data, such methods can capture information between the data-views. For example, Canonical Correlation Analysis (CCA) captures relationships between the features of two data-views by producing two latent low-dimensional embeddings (canonical vectors) that are maximally correlated between them (Hotelling, 1936; Rodosthenous et al., 2020). Rodosthenous et al. (2020) demonstrated that multi-CCA, an extension of CCA that analyses multiple (more than two) data-views, would be preferable as it reduces over-fitting. This section demonstrates the application of multi-CCA as pre-training in place of PCA. This alteration of the multi-SNE algorithm was implemented on the handwritten digits data set.
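A sketch of the pre-training step, assuming scikit-learn. The per-view PCA shown is the default; swapping the loop for a multi-CCA routine (e.g. that of Witten and Tibshirani (2009), which does not ship with scikit-learn) yields the variant examined below. The component count here is illustrative:

```python
from sklearn.decomposition import PCA

def pretrain(views, n_components=30):
    """Reduce each data-view before running multi-SNE (per-view PCA)."""
    return [PCA(n_components=n_components).fit_transform(X) for X in views]
```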
Multi-CCA was applied on all data-views, with $6$ canonical vectors produced for each data-view (in this particular data set $\min{(p_{1},p_{2},p_{3},p_{4},p_{5},p_{6})}=6$). The variation of multi-CCA proposed by Witten and Tibshirani (2009) was used for the production of the canonical vectors, as it is computationally cheaper than the alternatives (Rodosthenous et al., 2020). By using these vectors as input features, multi-SNE produced a qualitatively better visualisation than using the principal components as input features (Figure 11). By using an integrative algorithm as pre-training, all $10$ clusters are clearly separated, including those of the digits $6$ and $9$. Quantitatively, clustering via $K$-means was evaluated with ACC = 0.914, NMI = 0.838, RI = 0.968 and ARI = 0.824. This evaluation suggests that, quantitatively, it performed better than the $10$-dimensional embeddings produced by multi-SNE with PCA as pre-training.

#### 4.5.3 Comparison of multi-SNE variations

Section 2.2 introduced multi-SNE, a multi-view extension of t-SNE. In Sections 4.5.1 and 4.5.2, two variations of multi-SNE were presented. The former implements a weight-adjustment process which at each iteration updates the weights allocated to each data-view, and the latter uses multi-CCA instead of PCA as a pre-training step. In this section, multi-SNE and its two variations are compared to assess whether the variations introduced to the algorithm perform better than the initial proposal.

The implementation of the weight-adjustment process improved the performance of multi-SNE on all real data sets analysed (Table 4). The influence of multi-CCA as a pre-training step produced inconsistent results; in some data sets this step boosted the clustering performance of multi-SNE (e.g. handwritten digits), while for others it did not (e.g. cancer types). From this analysis, we conclude that adjusting the weights of each data-view consistently improves the performance of multi-SNE. On the other hand, the choice of pre-training, either via PCA or multi-CCA, is not clear-cut and depends on the data at hand.

## 5 Discussion

Table 4: Clustering performance of multi-SNE variations. For each data set, bold highlights the multi-SNE variation with the best performance, i.e. the highest accuracy (ACC). Perplexity was optimised for all variations. The mean performance (and its standard deviation) is depicted for the synthetic data sets NDS and MCS.

| Variation | Handwritten digits | Cancer types | Caltech7 original | Caltech7 balanced | NDS | MCS |
|---|---|---|---|---|---|---|
| Multi-SNE without weight-adjustment | 0.822 | 0.964 | 0.506 | 0.733 | 0.989 (0.006) | 0.919 (0.046) |
| Multi-SNE with weight-adjustment | 0.883 | 0.994 | 0.543 | 0.742 | 0.999 (0.002) | 0.922 (0.019) |
| Multi-CCA multi-SNE without weight-adjustment | 0.901 | 0.526 | 0.453 | 0.713 | 0.996 (0.002) | 0.993 (0.005) |
| Multi-CCA multi-SNE with weight-adjustment | 0.914 | 0.562 | 0.463 | 0.754 | 0.996 (0.002) | 0.993 (0.005) |

In this manuscript we propose extensions of the well-known manifold learning approaches t-SNE, LLE and ISOMAP for the visualisation of multi-view data sets. These three approaches are widely used for the visualisation of high-dimensional and complex data sets, performing non-linear dimensionality reduction. The increasing number of multiple data sets produced for the same samples in different fields emphasises the need for approaches that produce expressive presentations of the data.
We have illustrated that visualising each data set separately from the rest is not ideal, as it does not reveal the underlying patterns within the samples. In contrast, the proposed multi-view approaches can produce a single visualisation of the samples by integrating all available information from the multiple data-views. Python and R (the latter only for multi-SNE) code of the proposed solutions can be found via the links provided in Appendix E.

Multi-view visualisation has been explored in the literature, with a number of approaches proposed in recent years. In this work, we propose multi-view visualisation approaches that extend the well-known manifold learning approaches t-SNE, LLE and ISOMAP. Through a comparative study of real and synthetic data we have illustrated that the proposed approach, multi-SNE, provides a better and more robust solution than the other approaches proposed in the manuscript (multi-LLE and multi-ISOMAP) and the approaches proposed in the literature, including m-LLE, m-SNE, MV-tSNE2, j-SNE and j-UMAP (additional results in Appendices D.1 and D.2). Although multi-SNE was computationally the most expensive multi-view manifold learning algorithm (Table 9 in Appendix F), it was found to be the solution with the superior performance, both qualitatively and quantitatively.

We have utilised the low-dimensional embeddings of the proposed algorithms as features in the $K$-means clustering algorithm, which we have used (1) to quantify the visualisations produced, and (2) to select the optimal tuning parameters for the manifold learning approaches. By investigating synthetic and real multi-view data sets, each with different data characteristics, we concluded that multi-SNE provides a more accurate and robust solution than any other single-view or multi-view manifold learning algorithm we have considered. Specifically, multi-SNE was able to produce the best data visualisations of all data sets analysed in this paper. Multi-LLE provides the second best solution, while the multi-view ISOMAP algorithms have not produced competitive visualisations. By exploring several data sets, we concluded that multi-view manifold learning approaches can be effectively applied on heterogeneous and high-dimensional data (i.e. $p\gg n$).

Table 5: Multi-view clustering performance on handwritten digits. The NMI and accuracy (ACC) values of multi-view clustering approaches, as presented by the authors in their corresponding papers, are depicted alongside the clustering performance of multi-SNE for a range of embedding dimensions ($d=2,3,5,10$). The performance of multi-SNE with PCA and with multi-CCA is depicted (entries are PCA/multi-CCA), with weight adjustments in both variations.

Multi-view clustering on the handwritten digits data set:

| | Kumar et al. (2011) | Liu et al. (2013) | Sun et al. (2015) | Ou et al. (2016) | Ou et al. (2018) | multi-SNE 2D | multi-SNE 3D | multi-SNE 5D | multi-SNE 10D |
|---|---|---|---|---|---|---|---|---|---|
| NMI | 0.768 | 0.804 | 0.876 | 0.785 | 0.804 | 0.863/0.838 | 0.894/0.841 | 0.897/0.848 | 0.899/0.850 |
| ACC | – | 0.881 | – | 0.876 | 0.880 | 0.822/0.914 | 0.848/0.915 | 0.854/0.922 | 0.849/0.924 |

Through the conducted experiments, we have illustrated the effect of the parameters on the performance of the methods.
We have shown that SNE-based methods perform best when the perplexity is in the range $\left[20,100\right]$, that LLE-based algorithms should take a small number of nearest neighbours, in the range $\left[10,50\right]$, and that the parameter of ISOMAP-based methods should be in the range $\left[100,N\right]$, where $N$ is the number of samples. Our conclusions about the superiority of multi-SNE have been further supported by implementing the Silhouette score as an alternative approach for evaluating the clustering and tuning the parameters of the methods. In contrast to the measures used throughout the paper, the Silhouette score does not take into account the number of clusters that exist in the data set, illustrating the applicability of the multi-SNE approach in unsupervised learning problems where the underlying clusters of the samples are not known (Appendix D.7). Similarly, we have illustrated that alternative clustering algorithms can be implemented for clustering the samples. By inputting the produced multi-SNE embeddings into the DBSCAN algorithm we further illustrated how the clusters of the samples can be identified (Appendix D.8).

Multi-view clustering is a topic that has gathered a lot of interest in recent years, with a number of approaches published in the literature. Such approaches include the ones proposed by Kumar et al. (2011), Liu et al. (2013), Sun et al. (2015), Ou et al. (2016) and Ou et al. (2018). The handwritten digits data set presented in the manuscript has been analysed by the aforementioned studies for multi-view clustering. Table 5 shows the NMI and accuracy values of the clusterings performed by the multi-view clustering algorithms (these values are as given in the corresponding articles). In addition, the NMI and accuracy values of the $K$-means clustering applied on the multi-SNE low-dimensional embeddings (from 2 to 10 dimensions) are presented in the table. On handwritten digits, the multi-SNE variation with multi-CCA as pre-training and weight adjustments had the best performance (Table 4). This variation of multi-SNE with $K$-means was compared against the multi-view clustering algorithms and was found to be the most accurate, while pre-training with PCA produced the highest NMI (Table 5). Applying $K$-means to the low-dimensional embeddings of multi-SNE can thus successfully cluster the observations of the data (see Appendix D.6 for a 3-dimensional visualisation via multi-SNE).

Figure 12: Visualisations and clustering performance on single-cell multi-omics data. Projections produced by t-SNE on RNA, t-SNE on ATAC, and multi-SNE on both data-views, with perplexity $Perp=80$ for the two t-SNE projections and $Perp=20$ for multi-SNE. The clustering performance of the data by Liu et al. (2013), t-SNE and multi-SNE is presented.

An important area of active current research in which manifold learning approaches such as t-SNE are commonly used as visualisation tools is single-cell sequencing (scRNA-seq) and genomics. The last few years have seen fast development of multi-omics single-cell methods, where, for example, multiple omics measurements are obtained for the same cells, such as transcripts by scRNA-seq and chromatin accessibility by a method known as scATAC-seq (Stuart et al., 2019). As recently discussed, the integration of this kind of multi-view single-cell data poses unique and novel statistical challenges (Argelaguet et al., 2021).
We therefore believe our proposed multi-view methods will be very useful in producing an integrated visualisation of cellular heterogeneity and cell types studied by multi-omics single-cell methods in different tissues, in health and disease. To illustrate the capability of multi-SNE for multi-omics single-cell data, we applied multi-SNE on a representative data set of scRNA-seq and scATAC-seq for human peripheral blood mononuclear cells (PBMC) (https://support.10xgenomics.com/single-cell-multiome-atac-gex/datasets/1.0.0/pbmc_granulocyte_sorted_10k; Stuart and Srivastava, 2021) (Figure 12). Multi-SNE produced more intelligible projections of the cells compared to m-SNE and achieved higher evaluation scores (Appendix C). To test the quality of the obtained multi-view visualisation, we compared its performance against the multi-view clustering approach proposed by Liu et al. (2013) on this single-cell data. A balanced subset of this data set was used, which consists of two data-views on $9105$ cells (scRNA-seq and scATAC-seq with $36000$ and $108000$ features, respectively). A detailed description of this data set, the pre-processing steps performed, and the projections of t-SNE and multi-SNE on the original data are provided in Appendix C (Figure 13). We found multi-SNE to have the highest accuracy (and an NMI close to that of the approach by Liu et al. (2013)), as seen in Figure 12. Qualitatively, the projections by t-SNE on scRNA-seq and by multi-SNE are similar, but multi-SNE separates the clusters better, especially between the CD4 and CD8 cell types (Figure 12; Figure 13 in Appendix C). While it is known that scATAC-seq data is noisier and carries less information by itself, we see that integration of the data-views results in a better overall separation of the different cell types in this data set. These results indicate the promise of multi-SNE as a unified multi-view visualisation and clustering approach for multi-omics single-cell data.

The increasing number of multi-view, high-dimensional and heterogeneous data sets requires novel visualisation techniques that integrate these data into expressive and revealing representations. In this manuscript, new multi-view manifold learning approaches are presented and their performance across real and synthetic data sets with different characteristics was explored. The multi-SNE approach is proposed to provide a unified solution for robust visualisation and subsequent clustering of multi-view data.

## References

* Argelaguet et al. (2021) Ricard Argelaguet, Anna S. E. Cuomo, Oliver Stegle, and John C. Marioni. Computational principles and challenges in single-cell data integration. _Nature Biotechnology_, 39:1202–1215, 2021.
* Beck and Teboulle (2003) Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. _Operations Research Letters_, 31(3):167–175, 2003.
* Canzar and Hoan Do (2021) Stefan Canzar and Van Hoan Do. A generalization of t-SNE and UMAP to single-cell multimodal omics. _Genome Biology_, 22(1):1–9, 2021.
* Dijkstra (1959) Edsger W. Dijkstra. A note on two problems in connexion with graphs. _Numerische Mathematik_, 1(1):269–271, 1959.
* Dua and Graff (2017) Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
* Ester et al. (1996) Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In _Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96)_, pages 226–231, 1996.
* Fei-Fei et al. (2006) Li Fei-Fei, Rob Fergus, and Pietro Perona.
One-shot learning of object categories. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 28(4):594–611, 2006.
* Fu et al. (2008) Yun Fu, Liangliang Cao, Guodong Guo, and Thomas S. Huang. Multiple feature fusion by subspace learning. In _Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval_, pages 127–134, 2008.
* García et al. (2018) Diego García, Ignacio Díaz, Daniel Pérez, Abel A. Cuadrado, Manuel Domínguez, and Antonio Morá. Interactive visualization for NILM in large buildings using non-negative matrix factorization. _Energy and Buildings_, 176:95–108, 2018.
* Hasin et al. (2017) Yehudit Hasin, Marcus Seldin, and Aldons Lusis. Multi-omics approaches to disease. _Genome Biology_, 18, 2017.
* Hinton and Roweis (2003) Geoffrey E. Hinton and Sam T. Roweis. Stochastic neighbor embedding. _Advances in Neural Information Processing Systems_, pages 857–864, 2003.
* Hoffman et al. (2021) Paul Hoffman, Satija Lab, and Collaborators. Integrating scRNA-seq and scATAC-seq data. https://satijalab.org/seurat/articles/atacseq_integration_vignette.html, 2021. Accessed: 2021-03-18.
* Hotelling (1936) Harold Hotelling. Relations between two sets of variates. _Biometrika_, 28:321, 1936.
* Hubert and Arabie (1985) Lawrence Hubert and Phipps Arabie. Comparing partitions. _Journal of Classification_, 2:193–218, 1985.
* Jolliffe and Cadima (2016) Ian T. Jolliffe and Jorge Cadima. Principal component analysis: A review and recent developments. _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_, 374, 2016.
* Kanaan Izquierdo (2017) Samir Kanaan Izquierdo. Multiview pattern recognition methods for data visualization, embedding and clustering. 2017.
* Karbauskaitė et al. (2007) Rasa Karbauskaitė, Olga Kurasova, and Gintautas Dzemyda. Selection of the number of neighbours of each data point for the locally linear embedding algorithm. _Information Technology and Control_, 36(3), 2007.
* Kruskal (1964) Joseph B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. _Psychometrika_, 29(1):1–27, 1964.
* Kullback and Leibler (1951) Solomon Kullback and Richard A. Leibler. On information and sufficiency. _The Annals of Mathematical Statistics_, 22:79–86, 1951.
* Kumar et al. (2011) Abhishek Kumar, Piyush Rai, and Hal Daumé III. Co-regularized multi-view spectral clustering. In _Advances in Neural Information Processing Systems 24_, pages 1413–1421, 2011.
* Li et al. (2019) Gen Li, Xiaokang Liu, and Kun Chen. Integrative multi-view regression: Bridging group-sparse and low-rank models. _Biometrics_, 75:593–602, 2019.
* Liu et al. (2013) Jialu Liu, Chi Wang, Jing Gao, and Jiawei Han. Multi-view clustering via joint nonnegative matrix factorization. _Proceedings of the 2013 SIAM International Conference on Data Mining_, pages 252–260, 2013.
* MacQueen (1967) James MacQueen. Some methods for classification and analysis of multivariate observations. In _Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability_, volume 1, pages 281–297, 1967.
* McInnes et al. (2018) Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. UMAP: Uniform manifold approximation and projection. _Journal of Open Source Software_, 3(29):861, 2018.
* Ng et al. (2001) Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: analysis and an algorithm.
In _Proceedings of the Fourteenth International Conference on Neural Information Processing Systems: Natural and Synthetic_, pages 849–856, 2001.
* Ou et al. (2016) Weihua Ou, Shujian Yu, Gai Li, Jian Lu, Kesheng Zhang, and Gang Xie. Multi-view non-negative matrix factorization by patch alignment framework with view consistency. _Neurocomputing_, 204:116–124, 2016.
* Ou et al. (2018) Weihua Ou, Fei Long, Yi Tan, Shujian Yu, and Pengpeng Wang. Co-regularized multiview nonnegative matrix factorization with correlation constraint for representation learning. _Multimedia Tools and Applications_, 77:12955–12978, 2018.
* Rand (1971) William M. Rand. Objective criteria for the evaluation of clustering methods. _Journal of the American Statistical Association_, 66(336):846–850, 1971.
* Rodosthenous et al. (2020) Theodoulos Rodosthenous, Vahid Shahrezaei, and Marina Evangelou. Integrating multi-omics data through sparse canonical correlation analysis for the prediction of complex traits: A comparison study. _Bioinformatics_, 36(17):4616–4625, 2020.
* Rousseeuw (1987) Peter J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. _Journal of Computational and Applied Mathematics_, 20:53–65, 1987.
* Roweis and Saul (2000) Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. _Science_, 290:2323–2326, 2000.
* Saul and Roweis (2001) Lawrence Saul and Sam Roweis. An introduction to locally linear embedding. _Journal of Machine Learning Research_, 7, 2001.
* Shen et al. (2013) Hualei Shen, Dacheng Tao, and Dianfu Ma. Multiview locally linear embedding for effective medical image retrieval. _PLOS ONE_, 8(12):1–21, 2013.
* Shu et al. (2019) Ting Shu, Bob Zhang, and Yuan Yan Tang. Multi-view classification via a fast and effective multi-view nearest-subspace classifier. _IEEE Access_, 7:49669–49679, 2019.
* Stuart and Srivastava (2021) Tim Stuart and Ari Srivastava. Joint RNA and ATAC analysis: 10x multiomic. https://satijalab.org/signac/articles/pbmc_multiomic.html, 2021. Accessed: 2021-04-28.
* Stuart et al. (2019) Tim Stuart, Andrew Butler, Paul Hoffman, Christoph Hafemeister, Efthymia Papalexi, William M. Mauck III, Yuhan Hao, Marlon Stoeckius, Peter Smibert, and Rahul Satija. Comprehensive integration of single-cell data. _Cell_, 177:1888–1902, 2019.
* Sun et al. (2015) Jiangwen Sun, Jin Lu, Tingyang Xu, and Jinbo Bi. Multi-view sparse co-clustering via proximal alternating linearized minimization. In volume 37 of _Proceedings of Machine Learning Research_, pages 757–766, 2015.
* Sun (2013) Shiliang Sun. A survey of multi-view machine learning. _Neural Computing and Applications_, 23:2031–2038, 2013.
* Tenenbaum et al. (2000) Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. _Science_, 290:2319–2323, 2000.
* Valencia-Aguirre et al. (2009) Juliana Valencia-Aguirre, Andrés Álvarez-Mesa, Genaro Daza-Santacoloma, and Germán Castellanos-Domínguez. Automatic choice of the number of nearest neighbors in locally linear embedding. In _Iberoamerican Congress on Pattern Recognition_, pages 77–84, 2009.
* van der Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. _Journal of Machine Learning Research_, 9:2579–2605, 2008.
* van der Maaten and Hinton (2012) Laurens van der Maaten and Geoffrey Hinton. Visualizing non-metric similarities in multiple maps. _Machine Learning_, 87(1), 2012.
* Vinh et al. (2010) Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. _Journal of Machine Learning Research_, 11:2837–2854, 2010.
* Wang et al. (2014) Bo Wang, Aziz M. Mezlini, Feyyaz Demir, Marc Fiume, Zhuowen Tu, Michael Brudno, Benjamin Haibe-Kains, and Anna Goldenberg. Similarity network fusion for aggregating data types on a genomic scale. _Nature Methods_, 11(3):333–337, 2014.
* Wang and Allen (2021) Minjie Wang and Genevera I. Allen. Integrative generalized convex clustering optimization and feature selection for mixed multi-view data. _Journal of Machine Learning Research_, 22(55):1–73, 2021.
* Witten and Tibshirani (2009) Daniela M. Witten and Robert J. Tibshirani. Extensions of sparse canonical correlation analysis with applications to genomic data. _Statistical Applications in Genetics and Molecular Biology_, 8(1), 2009.
* Xie et al. (2011) Bo Xie, Yang Mu, Dacheng Tao, and Kaiqi Huang. m-SNE: multiview stochastic neighbor embedding. _IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics_, 41:1088–1096, 2011.
* Xu et al. (2013) Chang Xu, Dacheng Tao, and Chao Xu. A survey on multi-view learning. _ArXiv_, abs/1304.5634, 2013.
* Xu et al. (2015) Chang Xu, Dacheng Tao, and Chao Xu. Multi-view intact space learning. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 37(12):2531–2544, 2015.
* Xu and Tian (2015) Dongkuan Xu and Yingjie Tian. A comprehensive survey of clustering algorithms. _Annals of Data Science_, 2:165–193, 2015.
* Ye et al. (2018) Fanghua Ye, Zitai Chen, Hui Qian, Rui Li, Chuan Chen, and Zibin Zheng. New approaches in multi-view clustering. In _Recent Applications in Data Clustering_, chapter 11, 2018.
* Zhang and Zha (2004) Zhenyue Zhang and Hongyuan Zha. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. _SIAM Journal on Scientific Computing_, 8:406–424, 2004.
* Zhao et al. (2017) Jing Zhao, Xijiong Xie, Xin Xu, and Shiliang Sun. Multi-view learning overview: Recent progress and new challenges. _Information Fusion_, 38:43–54, 2017.
* Zhao et al. (2018) Yue Zhao, Xinge You, Shujian Yu, Chang Xu, Wei Yuan, Xiao-Yuan Jing, Taiping Zhang, and Dacheng Tao. Multi-view manifold learning with locality alignment. _Pattern Recognition_, 78:154–166, 2018.
* Zheng and Xue (2009) Nanning Zheng and Jianru Xue. _Manifold Learning_, pages 87–119, 2009.
* Zong et al. (2017) Linlin Zong, Xianchao Zhang, Long Zhao, Hong Yu, and Qianli Zhao. Multi-view clustering via multi-manifold regularized non-negative matrix factorization. _Neural Networks_, 88:74–89, 2017.

## Appendix A Algorithms
Algorithm 1: Multi-SNE

Data: $M$ data sets, $X^{(m)}\in\mathbb{R}^{N\times p_{m}},\;\forall m\in\{1,\cdots,M\}$
Parameters: Perp [perplexity]; T [number of iterations]; $\eta$ [learning rate]; $\alpha(t)$ [momentum]
Result: Induced embedding, $Y\in\mathbb{R}^{n\times d}$. Often, $d=2$.

begin
  Optional step: implement PCA or multi-CCA on $X^{(m)},\;\forall m\in\{1,\cdots,M\}$
  Compute pairwise affinities $p^{m}_{i|j}$ with perplexity Perp, $\forall m\in\{1,\cdots,M\}$
  Set $p^{m}_{ij}=\frac{p^{m}_{i|j}+p^{m}_{j|i}}{2n},\;\forall m\in\{1,\cdots,M\}$
  Initialise solution $Y^{(0)}\sim\mathcal{N}(0,0.1)$
  for t = 1 to T do
    Compute induced affinities $q_{i|j}$ and set the sum of gradients $G=0$
    for m = 1 to M do
      Compute gradient $\frac{\delta C_{m}}{\delta Y}$
      $G\leftarrow G+\frac{\delta C_{m}}{\delta Y}$
    end for
    Set $Y^{(t)}=Y^{(t-1)}+\eta G+\alpha(t)(Y^{(t-1)}-Y^{(t-2)})$
  end for
end

Algorithm 2: multi-LLE

Data: $M$ data sets, $X^{(m)}\in\mathbb{R}^{N\times p_{m}},\;\forall m\in\{1,\cdots,M\}$
Parameters: k [number of neighbours]
Result: Induced embedding, $\hat{Y}\in\mathbb{R}^{n\times d}$. Often, $d=2$.

begin
  for m = 1 to M do
    Find the $k$ nearest neighbours of $X^{(m)}$.
    Compute $W^{m}$ by minimising equation (3).
  end for
  Let $\hat{W}=\sum_{m}\alpha^{m}W^{m}$, where $\sum_{m}\alpha^{m}=1$.
  Compute the $d$-dimensional embeddings $\hat{Y}$ by minimising equation (4) under $\hat{W}$.
end

Algorithm 3: multi-ISOMAP

Data: $M$ data sets, $X^{(m)}\in\mathbb{R}^{N\times p_{m}},\;\forall m\in\{1,\cdots,M\}$
Parameters: k [number of neighbours]
Result: Induced embedding, $\hat{Y}\in\mathbb{R}^{n\times d}$. Often, $d=2$.

begin
  for m = 1 to M do
    Construct an $N\times N$ neighbourhood graph, $G_{m}\sim(V,E_{m})$, with samples represented by nodes. The edge length between the $k$ nearest neighbours of each node is measured by the Euclidean distance.
  end for
  Measure the average edge length between all nodes. Combine all neighbourhood graphs into a single graph, $\tilde{G}$.
  Store in $D_{G}\in\mathbb{R}^{|V|\times|V|}$ the shortest path distances computed between nodes in $\tilde{G}$.
  Compute the $d$-dimensional embeddings $Y$ via $y_{i}=\sqrt{\lambda_{p}}u^{i}_{p}$, where $\lambda_{p}$ is the $p^{th}$ eigenvalue, in decreasing order, of the matrix $D_{G}$ and $u^{i}_{p}$ is the $i^{th}$ component of the $p^{th}$ eigenvector.
end
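As a rough, self-contained numpy sketch of Algorithm 1's main loop (a simplified illustration, not the authors' released implementation, which is linked in Appendix E): `tsne_gradient` implements the standard symmetric t-SNE gradient for one data-view, the step sizes are illustrative, and the descent step below writes the sign explicitly where Algorithm 1 absorbs it into $G$.

```python
import numpy as np

def tsne_gradient(P, Y):
    """Gradient of KL(P || Q) for one data-view, with Student-t induced affinities Q."""
    D = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)  # pairwise squared distances
    W = 1.0 / (1.0 + D)                                   # Student-t kernel
    np.fill_diagonal(W, 0.0)
    Q = np.maximum(W / W.sum(), 1e-12)                    # induced affinities q_ij
    A = (P - Q) * W
    return 4.0 * ((np.diag(A.sum(1)) - A) @ Y)            # standard t-SNE gradient

def multi_sne(P_views, d=2, T=1000, eta=200.0, alpha=0.8):
    """P_views: list of per-view joint affinity matrices p^m_ij (each n x n)."""
    n = P_views[0].shape[0]
    Y = np.random.normal(0.0, 0.1, size=(n, d))           # initialise Y^(0) ~ N(0, 0.1)
    Y_prev = Y.copy()
    for t in range(T):
        G = sum(tsne_gradient(Pm, Y) for Pm in P_views)   # sum of per-view gradients
        Y, Y_prev = Y - eta * G + alpha * (Y - Y_prev), Y # descent step with momentum
    return Y
```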
## Appendix B Data Clustering Evaluation Measures

Let $\mathbf{X}=\left\{X_{1},\cdots,X_{r}\right\}$ be the true classes of the data and $\mathbf{Y}=\left\{Y_{1},\cdots,Y_{s}\right\}$ the clusterings found on $N$ objects. In this study, we assume the number of clusters to be known and thus set $r=s$. Let $n_{ij}$ be the number of objects in both $X_{i}$ and $Y_{j}$. A contingency table is defined as shown in Table 6.

Table 6: A contingency table for data clustering. $X_{i}$ refers to the $i^{th}$ class (truth) and $Y_{j}$ refers to the $j^{th}$ cluster. $n_{ij}$ is the number of samples found in class $i$ and cluster $j$. In this study, $r=s$ was taken, as $K$-means was performed with the true number of classes known.

| | $Y_{1}$ | $Y_{2}$ | $\cdots$ | $Y_{s}$ | $\sum_{j}^{s}Y_{j}$ |
|---|---|---|---|---|---|
| $X_{1}$ | $n_{11}$ | $n_{12}$ | $\cdots$ | $n_{1s}$ | $\sum_{i}^{r}n_{1i}=a_{1}$ |
| $X_{2}$ | $n_{21}$ | $n_{22}$ | $\cdots$ | $n_{2s}$ | $a_{2}$ |
| $\vdots$ | $\vdots$ | $\ddots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| $X_{r}$ | $n_{r1}$ | $n_{r2}$ | $\cdots$ | $n_{rs}$ | $a_{r}$ |
| | $\sum_{i}^{r}n_{i1}=b_{1}$ | $b_{2}$ | $\cdots$ | $b_{s}$ | $S=\sum_{i}^{r}\sum_{j}^{s}n_{ij}$ |

The formulas of the four measures used to evaluate data clustering are given below, with the terms defined in Table 6.

Accuracy (ACC):
$$ACC=\frac{\sum_{i}\sum_{j}\mathds{1}\left\{i=j\right\}n_{ij}}{S}$$

Normalised Mutual Information (NMI):
$$NMI=\frac{2I(\textbf{X},\textbf{Y})}{H(\textbf{X})+H(\textbf{Y})},$$
where $I(\textbf{X},\textbf{Y})$ is the mutual information between $\textbf{X}$ and $\textbf{Y}$, and $H(\textbf{X})$ is the entropy of $\textbf{X}$.

Rand Index (RI):
$$RI=\frac{{N\choose 2}-\left[\frac{1}{2}\left(\sum_{i}\Big(\sum_{j}n_{ij}\Big)^{2}+\sum_{j}\Big(\sum_{i}n_{ij}\Big)^{2}\right)-\sum_{i}\sum_{j}n^{2}_{ij}\right]}{{N\choose 2}}=\frac{\alpha+\beta}{{N\choose 2}},$$
where $\alpha$ refers to the number of pairs of elements that are in the same subset in $X$ and in the same subset in $Y$, while $\beta$ is the number of pairs of elements that are in different subsets in $X$ and in different subsets in $Y$.

Adjusted Rand Index (ARI):
$$ARI=\frac{\sum_{i}\sum_{j}{n_{ij}\choose 2}-\frac{\sum_{i}{a_{i}\choose 2}\sum_{j}{b_{j}\choose 2}}{{N\choose 2}}}{\frac{1}{2}\left[\sum_{i}{a_{i}\choose 2}+\sum_{j}{b_{j}\choose 2}\right]-\frac{\sum_{i}{a_{i}\choose 2}\sum_{j}{b_{j}\choose 2}}{{N\choose 2}}}$$
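These measures are available off-the-shelf; a sketch assuming scikit-learn (version 0.24 or later for `rand_score`). The accuracy shown is the raw diagonal count from the contingency table, which presumes clusters have already been aligned to classes (e.g. via the Hungarian algorithm):

```python
import numpy as np
from sklearn.metrics import (normalized_mutual_info_score,
                             rand_score, adjusted_rand_score)

def evaluate(truth, pred, n_classes):
    C = np.zeros((n_classes, n_classes), dtype=int)  # contingency table n_ij
    for x, y in zip(truth, pred):
        C[x, y] += 1
    acc = np.trace(C) / C.sum()                      # sum_ij 1{i=j} n_ij / S
    return {"ACC": acc,
            "NMI": normalized_mutual_info_score(truth, pred),
            "RI":  rand_score(truth, pred),
            "ARI": adjusted_rand_score(truth, pred)}
```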
## Appendix C Single-cell data

In the multi-omics single-cell data analysis, we used the publicly available data set provided by 10x Genomics for human peripheral blood mononuclear cells (PBMC) (https://support.10xgenomics.com/single-cell-multiome-atac-gex/datasets/1.0.0/pbmc_granulocyte_sorted_10k). This data set can be downloaded and installed via the R package SeuratData, by running the command InstallData("pbmcMultiome"). In their vignettes, Hoffman et al. (2021) and Stuart and Srivastava (2021) explored this data set to demonstrate how to jointly integrate and analyse such data. In this data set, scRNA-seq and scATAC-seq profiles were simultaneously collected in the same cells by 10x Genomics. Data on $11909$ single cells are available on $36601$ genes and $108377$ peaks in scRNA-seq and scATAC-seq, respectively. Cells with zero summed expression along all genes were removed, leaving us with $10412$ cells. Pre-processing was employed via the Seurat package, following the steps performed by Hoffman et al. (2021). Firstly, we log-normalised both data-views and then selected features for each individual data-view. In feature selection, we aim to identify a subset of features with high variability across cells (using the functions FindVariableFeatures and FindTopFeatures) (Stuart et al., 2019).

The multi-omics single-cell data set consists of $19$ imbalanced clusters that correspond to cell types; we assume the annotations provided by Seurat to be accurate (Figure 13). To evaluate the clustering performance of multi-SNE, we took a balanced subset of the data. Cell-type clusters with fewer than $200$ cells were removed entirely, and we combined cells with cell types under the same hierarchy. For example, Intermediate B, Naive B and Memory B were combined to create a single cluster, B cells. Similarly, CD4 Naive, CD4 TCM and CD4 TEM were combined as CD4 cells. After this process, we ended up with a subset of $9105$ single cells separated into $6$ cell-type clusters (B cell, CD14 Mono, CD4, CD8 Naive, CD8 TEM and NK).

Figure 13: Visualisations of single-cell data. Projections of the full data set with unbalanced clusters produced by t-SNE on RNA, t-SNE on ATAC, m-SNE and multi-SNE on both data-views, with perplexity $Perp=80$ for the two t-SNE projections, $Perp=100$ for m-SNE and $Perp=20$ for multi-SNE.

M-SNE and multi-SNE combined the scRNA-seq and scATAC-seq data to produce more intelligible projections of the cells than t-SNE applied on either data-view. Qualitatively, the superiority of the multi-view manifold learning algorithms may not be obvious at first, but subtle differences can be observed. Quantitatively, multi-SNE received the best evaluation scores, with $NMI=0.807$, while m-SNE received $NMI=0.760$. Single-view t-SNE scored $NMI=0.620$ and $NMI=0.572$ for scRNA-seq and scATAC-seq, respectively.

## Appendix D Additional comparisons

### D.1 Multi-SNE, m-SNE and MV-tSNE2

This section justifies the exclusion of MV-tSNE2 from the comparisons against multi-SNE. Due to its superior performance, m-SNE was selected as the existing competitor of multi-SNE. Multi-SNE and m-SNE outperformed MV-tSNE2 on all data sets presented in this manuscript (Figure 14). By comparing the produced visualisations on two data sets, Figure 14 evaluates the three algorithms qualitatively. Multi-SNE produced the best separation among clusters on both data sets. In MV-tSNE2, many of the samples are projected bundled together, making it difficult to distinguish the true clusters. Quantitative evaluation of the methods agrees with the conclusions reached by assessing the visualisations qualitatively.

Figure 14: Visualisations by multi-SNE, m-SNE and MV-tSNE2. The three multi-view SNE-based projections of the cancer types and handwritten digits data sets.

### D.2 Multi-SNE, j-SNE and j-UMAP

At the same time as multi-SNE was developed, Canzar and Hoan Do (2021) proposed generalisations of t-SNE (named j-SNE) and UMAP (named j-UMAP) based on a similar objective function to multi-SNE. Canzar and Hoan Do (2021) introduced a regularisation term that reduces the bias towards specific data-views; the proposed objective function is given by:

$$C_{j-SNE}=\sum_{m}\sum_{i}\sum_{j}\alpha^{m}p^{m}_{ij}\log\frac{p^{m}_{ij}}{q_{ij}}+\lambda\sum_{m}\alpha^{m}\log\alpha^{m},\qquad(5)$$

where $\alpha^{m}$ represents the weight given to the $m^{th}$ data-view and $\lambda$ is a regularisation parameter. The weights and low-dimensional embeddings are updated iteratively. The adjustments to the weights of each data-view are performed in accordance with the regularisation parameter, which requires tuning for optimal results.

Figure 15: j-UMAP, j-SNE and multi-SNE visualisations. Projections of the cancer types and handwritten digits data sets, produced by j-UMAP, j-SNE and multi-SNE.

Figure 15 qualitatively compares multi-SNE with j-SNE and j-UMAP (with their respective tuning parameters optimised) on the cancer types and handwritten digits data. As expected, the projections by j-SNE and multi-SNE are very much alike for both data sets. The increased complexity imposed by the regularisation term in j-SNE does not seem to benefit the visualisation of the samples. j-UMAP does not separate the three cancer types, but it manages to separate the 10 digits, even the samples that represent the $6$ and $9$ numerals; j-SNE failed to do that. This was achieved by multi-SNE in the $3$-dimensional visualisation, or alternatively by using multi-CCA as a pre-training step. All three algorithms allocated similar weight values to each data-view on both data sets. In particular, transcriptomics on cancer types and morphological features on handwritten digits received the lowest weight.
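For concreteness, equation (5) transcribes directly into numpy (a sketch; it assumes strictly positive affinities and weights):

```python
import numpy as np

def jsne_cost(P, Q, alpha, lam):
    """P: list of per-view affinity matrices p^m_ij; Q: induced affinities q_ij;
    alpha: per-view weights; lam: the regularisation parameter lambda."""
    alpha = np.asarray(alpha, dtype=float)
    kl = sum(a * np.sum(Pm * np.log(Pm / Q)) for a, Pm in zip(alpha, P))
    return kl + lam * np.sum(alpha * np.log(alpha))  # weighted KL + entropy penalty
```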
### D.3 Tuning parameters on real data

In this section we explore how the parameter values affect the multi-view approaches when analysing the real data sets. Figure 16 depicts the NMI evaluation measure on each real data set for parameter values in the range $S$.

Figure 16: Real data sets evaluation via NMI. The NMI values are plotted against different parameter values for all multi-view manifold learning algorithms investigated in this manuscript.

Conclusions similar to the ones made in Section 4.3 were reached (Figure 16). SNE-based solutions had a more consistent performance than LLE- and ISOMAP-based approaches. In contrast to the conclusions reached by testing the tuning parameters on synthetic data, SNE-based approaches applied on the cancer types data set performed best when the perplexity was low. This observation highlights the importance of the tuning parameters (perplexity and number of neighbours) in these algorithms, as discussed by their respective authors. For the remaining data sets, the performance was stable across different parameter values. With the exception of the cancer types data, the performance of LLE-based solutions shows similar behaviour to that on the synthetic data (i.e. their performance is reduced around $NN=50$ and then regained).

### D.4 Increased randomness in data

We have further explored how additional noise affects the performance of the multi-view learning approaches. As discussed, the NDS data set contains three informative data-views and one noisy data-view. In Section 4.4, we concluded that the inclusion of the noisy data-view reduces the performance both qualitatively and quantitatively. This complication was targeted and solved through an automatic weight-updating approach in Section 4.5.1. The purpose of this section is to test the performance of multi-view manifold learning solutions on data sets with higher levels of randomness. To increase the noise in the synthetic data, additional noisy data-views were generated. In particular, this section compares the performance of manifold learning algorithms on three synthetic data sets: (a) NDS, (b) NDS with one additional noisy data-view, and (c) NDS with two additional noisy data-views. Each simulation was performed for $200$ runs and with equal weights for a fair comparison.

Multi-SNE was the superior algorithm in all simulations (Table 7). With each additional noisy data-view, all multi-view manifold learning algorithms saw a reduction in their performance. Although all evaluation measures reflect this observation, the change in performance is best observed in the NMI values (Table 7). Further, with more noisy data-views, the variance of the evaluation measures increased. This observation suggests that all algorithms clustered the samples with higher uncertainty.

Table 7: Clustering performance on NDS and on NDS with additional noisy data-views. For each data set, red highlights the method with the best performance on each measure within each group of algorithms (SNE-, LLE- or ISOMAP-based). The overall superior method for each data set is depicted in bold. The parameters $Perp$ and $NN$ refer to the selected perplexity and number of nearest neighbours, respectively. They were optimised for the corresponding methods.
| Data Set | Algorithm | Accuracy | NMI | RI | ARI |
|---|---|---|---|---|---|
| NDS | $\text{SNE}_{concat}$ [Perp=80] | 0.747 | 0.628 | 0.817 | 0.598 |
| | m-SNE [Perp=50] | 0.650 | 0.748 | 0.766 | 0.629 |
| | multi-SNE [Perp=80] | 0.989 | 0.951 | 0.969 | 0.987 |
| | $\text{LLE}_{concat}$ [NN=5] | 0.606 | 0.477 | 0.684 | 0.446 |
| | m-LLE [NN=20] | 0.685 | 0.555 | 0.768 | 0.528 |
| | multi-LLE [NN=20] | 0.937 | 0.768 | 0.922 | 0.823 |
| | $\text{ISOMAP}_{concat}$ [NN=100] | 0.649 | 0.528 | 0.750 | 0.475 |
| | m-ISOMAP [NN=5] | 0.610 | 0.453 | 0.760 | 0.386 |
| | multi-ISOMAP [NN=300] | 0.778 | 0.788 | 0.867 | 0.730 |
| Higher dimension | $\text{SNE}_{concat}$ [Perp=80] | 0.723 | 0.648 | 0.787 | 0.585 |
| | m-SNE [Perp=50] | 0.623 | 0.705 | 0.734 | 0.605 |
| | multi-SNE [Perp=80] | 0.983 | 0.937 | 0.951 | 0.966 |
| | $\text{LLE}_{concat}$ [NN=5] | 0.575 | 0.427 | 0.628 | 0.402 |
| | m-LLE [NN=20] | 0.671 | 0.534 | 0.755 | 0.513 |
| | multi-LLE [NN=20] | 0.903 | 0.788 | 0.898 | 0.802 |
| | $\text{ISOMAP}_{concat}$ [NN=100] | 0.622 | 0.510 | 0.705 | 0.453 |
| | m-ISOMAP [NN=5] | 0.589 | 0.439 | 0.734 | 0.344 |
| | multi-ISOMAP [NN=300] | 0.765 | 0.767 | 0.859 | 0.711 |
| One additional noisy | $\text{SNE}_{concat}$ [Perp=10] | 0.650 | 0.522 | 0.724 | 0.489 |
| | m-SNE [Perp=100] | 0.689 | 0.584 | 0.786 | 0.530 |
| | multi-SNE [Perp=50] | 0.965 | 0.854 | 0.956 | 0.901 |
| | $\text{LLE}_{concat}$ [NN=10] | 0.604 | 0.445 | 0.723 | 0.413 |
| | m-LLE [NN=10] | 0.667 | 0.522 | 0.765 | 0.490 |
| | multi-LLE [NN=5] | 0.912 | 0.692 | 0.891 | 0.756 |
| | $\text{ISOMAP}_{concat}$ [NN=20] | 0.543 | 0.375 | 0.733 | 0.481 |
| | m-ISOMAP [NN=20] | 0.552 | 0.482 | 0.703 | 0.444 |
| | multi-ISOMAP [NN=5] | 0.584 | 0.501 | 0.739 | 0.493 |
| Two additional noisy | $\text{SNE}_{concat}$ [Perp=10] | 0.581 | 0.310 | 0.688 | 0.309 |
| | m-SNE [Perp=10] | 0.603 | 0.388 | 0.712 | 0.359 |
| | multi-SNE [Perp=10] | 0.936 | 0.781 | 0.926 | 0.832 |
| | $\text{LLE}_{concat}$ [NN=10] | 0.523 | 0.251 | 0.641 | 0.222 |
| | m-LLE [NN=10] | 0.570 | 0.344 | 0.682 | 0.317 |
| | multi-LLE [NN=5] | 0.858 | 0.557 | 0.832 | 0.622 |
| | $\text{ISOMAP}_{concat}$ [NN=20] | 0.470 | 0.389 | 0.565 | 0.409 |
| | m-ISOMAP [NN=20] | 0.489 | 0.406 | 0.611 | 0.453 |
| | multi-ISOMAP [NN=5] | 0.524 | 0.467 | 0.782 | 0.517 |

### D.5 t-SNE on single-view cancer types

Table 8 presents the clustering performance of t-SNE applied separately on each of the three views of the cancer types data set. Genomics was the favoured view on all evaluation measures.

Table 8: Clustering performance on the induced embedding of a single view, obtained by implementing t-SNE on the cancer types data. Standard deviations are reported in the parentheses.

| | ACC | NMI | RI | ARI |
|---|---|---|---|---|
| Genomics | 0.595 (0.044) | 0.299 (0.041) | 0.667 (0.017) | 0.253 (0.039) |
| Epigenomics | 0.500 (0.036) | 0.116 (0.033) | 0.598 (0.018) | 0.107 (0.035) |
| Transcriptomics | 0.456 (0.023) | 0.042 (0.011) | 0.572 (0.006) | 0.049 (0.013) |

### D.6 Handwritten digits projection in 3 dimensions (3D)

Figure 17: 3D multi-SNE visualisation of handwritten digits. Projections produced by weight-adjusting multi-SNE with multi-CCA as pre-training and perplexity $Perp=80$. Colours present the true clustering of the data points.

### D.7 Alternative quantitative evaluation measures for clustering

Accuracy (ACC), Normalised Mutual Information (NMI), Rand Index (RI) and Adjusted Rand Index (ARI) are the evaluation measures chosen to quantitatively evaluate the clustering performance of the proposed multi-view approaches.
These measures were chosen because the true annotations of the data sets are known and together they provide a wide assessment range. In practice, clustering is often applied on data with unknown annotations (labelling); therefore, for completeness, we have further explored the implementation of the Silhouette score for identifying the optimal tuning parameter of the manifold visualisation approaches. The Silhouette score is a widely used measure for quantifying the clusterings produced by clustering algorithms, or for selecting the optimal number of clusters.

Figure 18: Silhouette score on MCS. The clustering evaluation via Silhouette score is plotted against different parameter values for all SNE-, LLE- and ISOMAP-based algorithms.

Figure 18 presents the evaluation performance of the methods with respect to their tuning parameter when the Silhouette score is evaluated instead of the other four measures. This figure complements Figure 7. The Silhouette score is not always in agreement with the other evaluation measures. For example, in SNE-based solutions, according to the Silhouette score, multi-SNE is favoured over the other methods only when the perplexity is $100$. Another difference between the Silhouette score and the other measures is that, as the perplexity increases, multi-SNE remains stable on the other measures (for $Perp\geq 50$), while its Silhouette score keeps increasing. The Silhouette score measures how well the clusters are separated from each other. This is conceptually different from what the other measures quantify, which is how well the proposed clusters agree with the known clusters. It is therefore of no surprise that the findings are not always in agreement.

### D.8 Alternative clustering algorithms

For the clustering task of the samples, any clustering algorithm could potentially have been applied to the low-dimensional embeddings produced by the proposed multi-view visualisation approaches.

Figure 19: $K$-means and DBSCAN on SNE-based solutions applied on the handwritten digits data set. The clustering evaluation measures are plotted against different perplexity values for multi-SNE, m-SNE and $SNE_{concat}$. The performance of $K$-means and DBSCAN applied on the produced embeddings is depicted in the first and second row of this figure, respectively.

In the main body of this manuscript, the $K$-means algorithm was chosen due to its popularity, its strong robust performance, and because the true number of clusters is known for all data sets. In practice, the latter is not always true, and clustering algorithms that do not require the number of clusters as a parameter input are preferable. Density-based spatial clustering of applications with noise (DBSCAN) is an example of such an algorithm (Ester et al., 1996). DBSCAN instead requires two other tuning parameters: the minimum number of samples required to form a dense cluster, and a threshold determining the neighbourhood of a sample, named $\epsilon$.

Figure 20: $K$-means and DBSCAN on SNE-based solutions applied on the caltech7 data set. The clustering evaluation measures are plotted against different perplexity values for multi-SNE, m-SNE and $SNE_{concat}$. The performance of $K$-means and DBSCAN applied on the produced embeddings is depicted in the first and second row of this figure, respectively.

The implementation of DBSCAN on handwritten digits smooths the performance of SNE-based solutions across different perplexity values (Figure 19); a sketch of how DBSCAN slots into the pipeline in place of $K$-means follows.
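A minimal sketch, assuming scikit-learn; the two tuning parameters are the ones named above, and the values shown are illustrative:

```python
from sklearn.cluster import DBSCAN

def cluster_embeddings(Y, eps=0.5, min_samples=10):
    """Cluster low-dimensional embeddings Y without specifying the cluster count."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(Y)
    return labels  # label -1 marks points DBSCAN leaves as noise
```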
Indeed, for all parameter values DBSCAN performs equally well, while the performance of $K$-means oscillates slightly. A greater disagreement between the two unsupervised learning algorithms is observed in their application to the caltech7 data set (Figure 20). While the accuracy of multi-SNE under $K$-means decreases at higher perplexity values, the opposite behaviour is observed under DBSCAN. In addition, DBSCAN finds multi-SNE to be superior to m-SNE, while $K$-means concludes the opposite. This appendix demonstrates that clustering of the produced embeddings is not restricted to $K$-means; alternative clustering algorithms may be used. In particular, DBSCAN is a good choice, especially when the true number of clusters is unknown.

## Appendix E Reproducibility

Our implementation of multi-SNE builds on the publicly available t-SNE software written by the author of t-SNE, found at the following link: https://lvdmaaten.github.io/tsne/

The software for m-SNE and m-LLE was not found publicly available, and we therefore used our own implementations of those methods. Our code, together with the functions needed to reproduce the findings of this paper, can be found at: https://github.com/theorod93/multiView_manifoldLearning

An R package containing the code for multi-SNE can be installed via devtools; it can be found at https://github.com/theorod93/multiSNE

We refer the readers to the links provided in the main body of the paper for the public multi-view data used in this paper.

## Appendix F Computation time

In terms of computation time, none of the multi-view manifold learning algorithms was consistently faster than the rest (Table 9). However, multi-SNE was often the slowest algorithm, while m-SNE and multi-ISOMAP had the fastest computation times.

Table 9: Averaged running time, recorded in minutes, for each manifold learning algorithm on all data sets seen in this paper; standard deviations are given in parentheses. All algorithms ran on High Performance Computing with 4 nodes.

 | MMDS | NDS | MCS | Caltech7 | Handwritten Digits | Cancer Types
---|---|---|---|---|---|---
m-SNE | 0.43 (0.019) | 0.29 (0.07) | 0.42 (0.01) | 4.29 (0.54) | 13.34 (3.43) | 251.71 (15.68)
multi-SNE | 1.07 (0.100) | 0.78 (0.14) | 1.02 (0.01) | 15.95 (0.71) | 45.76 (8.44) | 252.00 (11.23)
m-LLE | 0.25 (0.071) | 0.40 (0.12) | 0.42 (0.34) | 37.5 (2.21) | 26.28 (8.82) | 159.52 (17.49)
multi-LLE | 0.28 (0.099) | 0.41 (0.15) | 0.30 (0.14) | 37.9 (2.57) | 27.94 (5.29) | 157.73 (18.19)
m-ISOMAP | 0.22 (0.015) | 0.57 (0.06) | 0.37 (0.01) | 38.07 (3.09) | 29.52 (5.37) | 154.83 (18.04)
multi-ISOMAP | 0.24 (0.032) | 0.54 (0.13) | 0.33 (0.05) | 21.23 (2.24) | 16.77 (4.65) | 85.47 (14.57)
# The heterotic ${\rm G}_{2}$ system on contact Calabi–Yau $7$-manifolds

Jason D. Lotay _University of Oxford_ Henrique N. Sá Earp _University of Campinas (Unicamp)_

###### Abstract

We obtain non-trivial approximate solutions to the heterotic $\rm{G}_{2}$ system on the total spaces of non-trivial circle bundles over Calabi–Yau $3$-orbifolds, which satisfy the equations up to an arbitrarily small error, by adjusting the size of the $S^{1}$ fibres in proportion to a power of the string constant $\alpha^{\prime}$. Each approximate solution provides a cocalibrated $\rm{G}_{2}$-structure, the torsion of which realises a non-trivial scalar field, a constant (trivial) dilaton field and an $H$-flux with non-trivial Chern–Simons defect. The approximate solutions also include a connection on the tangent bundle which, together with a non-flat $\rm{G}_{2}$-instanton induced from the horizontal Calabi–Yau metric, satisfies the anomaly-free condition, also known as the heterotic Bianchi identity. The approximate solutions fail to be genuine solutions solely because the connections on the tangent bundle are only $\rm{G}_{2}$-instantons up to higher order corrections in $\alpha^{\prime}$.

###### Contents

1. 1 Introduction
  1. 1.1 Heterotic ${\rm G}_{2}$ system or ${\rm G}_{2}$-Hull–Strominger system
  2. 1.2 Gauge theory on contact Calabi–Yau (cCY) manifolds
  3. 1.3 Statement of main result
2. 2 Contact Calabi–Yau geometry: scalar field, dilaton, and flux
  1. 2.1 Torsion forms and flux of the ${\rm G}_{2}$-structure $\varphi_{\varepsilon}$
  2. 2.2 Local orthonormal coframe
3. 3 Gauge fields: G2-instanton, Bismut, Hull and twisted connections
  1. 3.1 The G2-instanton $A$ and the “squashings” $\theta_{\varepsilon}^{k}$ of the Levi-Civita connection
    1. 3.1.1 Local connection matrices
    2. 3.1.2 Local curvature matrices
  2. 3.2 The “squashed” Bismut and Hull connections on $TK$
    1. 3.2.1 Local connection matrices and torsion
    2. 3.2.2 Local curvature matrices
  3. 3.3 Connections: an extra twist
  4. 3.4 The G2-instanton condition
4. 4 The anomaly term
  1. 4.1 Terms involving the matrix $\mathcal{I}$
  2. 4.2 Linear contribution from the ${\rm G}_{2}$ field strength
  3. 4.3 The nonlinear contribution $\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{m})^{2}$
  4. 4.4 Proof of Theorem 1
5. A Covariant matrix operations

## 1 Introduction

The heterotic ${\rm G}_{2}$ system intertwines geometric and gauge-theoretic degrees of freedom over a $7$-manifold with ${\rm G}_{2}$-structure, subject to instanton-type equations and a prescribed Chern–Simons defect. The latter constraint is required by what physicists refer to as the Green–Schwarz anomaly cancellation mechanism. This setup fits in the broader context of so-called Hull–Strominger systems on manifolds with special geometry, particularly in real dimensions 6, 7 and 8, which arise as low-energy effective theories of the heterotic string. Its motivation aside, we should stress from the outset that the language and arguments in this paper are primarily aimed at a mathematical audience.

To the best of our knowledge, the present problem was first formulated in the mathematics literature by Fernandez et al. [Fernandez2011], who found ‘the first explicit compact valid supersymmetric heterotic solutions with non-zero flux, non-flat instanton and constant dilaton’ on some carefully chosen generalised Heisenberg nilmanifolds.
Moreover, they somewhat inspired our approach, by invoking the methods of Kobayashi [Kobayashi1956] to guarantee, albeit non-constructively, the existence of circle fibrations which _partially_ satisfy the heterotic ${\rm G}_{2}$ system [Fernandez2011]*Theorem 6.4. For a comprehensive survey of the problem’s origins in the string theory literature, we refer the reader to that paper’s Introduction and references therein.

Over recent years, such Hull–Strominger systems have attracted substantial interest. For instance, García-Fernández et al. have addressed the description of infinitesimal moduli of solutions to these systems over a Calabi–Yau [GarciaFernandez2017] or ${\rm G}_{2}$-manifold [Clarke2016] base, as well as an interpretation of the problem from the perspective of generalised Ricci flow on a Courant algebroid [Garcia-Fernandez2019]. More recently still, Fino et al. [Fino2021] have found solutions to the Hull–Strominger system in 6 dimensions using 2-torus bundles over K3 orbifolds, extending the fundamental work of Fu–Yau [Fu2008], which also has some relation to our study.

Our approach to the heterotic ${\rm G}_{2}$ system will follow most closely the thorough investigation by de la Ossa et al. in [delaOssa2016, Ossa2018a, Ossa2018], who propose, among various contributions, a physically viable formulation of the problem for ${\rm G}_{2}$-structures with torsion. Indeed, we study the system over so-called contact Calabi–Yau (cCY) $7$-manifolds, which carry cocalibrated ${\rm G}_{2}$-structures; cCY manifolds were introduced in [Habib2015], and gauge theory on $7$-dimensional cCY manifolds was proposed in [Calvo-Andrade2020] and further studied in [Portilla2019]. Our base $7$-manifolds include the total spaces of $S^{1}$-(orbi)bundles over every weighted Calabi–Yau $3$-fold famously listed by Candelas-Lynker-Schimmrigk [Candelas1990], seen as links of isolated hypersurface singularities on $S^{9}\subset\mathbb{C}^{5}$. In particular, we obtain _constructive_ approximate solutions to the heterotic ${\rm G}_{2}$-system over compact _simply-connected_ (actually, $2$-connected) $7$-manifolds as in Example 2.3, which can be made arbitrarily close to genuine solutions by shrinking the circle fibres.

### 1.1 Heterotic ${\rm G}_{2}$ system or ${\rm G}_{2}$-Hull–Strominger system

###### Definition 1.1.

On a 7-manifold with ${\rm G}_{2}$-structure $(K^{7},\varphi)$, we let $\psi=*\varphi\in\Omega^{4}(K)$ and recall the following characterisations of some components of $\Omega^{\bullet}(K)$ corresponding to irreducible ${\rm G}_{2}$-representations: $\begin{split}\Omega^{2}_{14}(K)&=\\{\beta\in\Omega^{2}(K)\,:\,\beta\wedge\varphi=-*\beta\\}=\\{\beta\in\Omega^{2}(K)\,:\,\beta\wedge\psi=0\\},\\\ \Omega^{3}_{27}(K)&=\\{\gamma\in\Omega^{3}(K)\,:\,\gamma\wedge\varphi=0,\,\gamma\wedge\psi=0\\}.\end{split}$ The _torsion_ of $\varphi$ is completely described by the quantities $\tau_{0}\in C^{\infty}(K)$, $\tau_{1}\in\Omega^{1}(K)$, $\tau_{2}\in\Omega^{2}_{14}(K)$ and $\tau_{3}\in\Omega^{3}_{27}(K)$, which satisfy $d\varphi=\tau_{0}\psi+3\tau_{1}\wedge\varphi+*\tau_{3}\quad\text{and}\quad d\psi=4\tau_{1}\wedge\psi+\tau_{2}\wedge\varphi.$ Given a smooth $G$-bundle $F\to K$, for some compact semi-simple Lie group $G$, let $\mathcal{A}(F)$ denote its space of smooth $G$-connections.
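We record, for later use, an immediate consequence of these defining equations: since $\tau_{1}\wedge\psi$ and $\tau_{2}\wedge\varphi$ lie in inequivalent irreducible ${\rm G}_{2}$-components of $\Omega^{5}(K)$, a ${\rm G}_{2}$-structure is _cocalibrated_ (i.e. $d\psi=0$) precisely when $\tau_{1}=0$ and $\tau_{2}=0$, so that its torsion is captured by the pair $(\tau_{0},\tau_{3})$ alone; cf. Lemma 2.4 below.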
###### Definition 1.2.

The _heterotic ${\rm G}_{2}$ system_ or _${\rm G}_{2}$-Hull–Strominger system_ on a $7$-manifold with ${\rm G}_{2}$-structure $(K,\varphi)$ is comprised of the following degrees of freedom:

* • Geometric fields (tensors): $\lambda\in\mathbb{R}\text{ (scalar field)},\quad\mu\in C^{\infty}(K)\text{ (dilaton)},\quad\text{and}\quad H\in\Omega^{3}(K)\text{ (flux)}.$
* • Gauge fields (connections): $A\in\mathcal{A}(E),\quad\text{and}\quad\theta\in\mathcal{A}(TK),$ where $E\to K$ is a vector bundle and both connections are respectively ${\rm G}_{2}$-instantons: $F_{A}\wedge\psi=0\quad\text{and}\quad R_{\theta}\wedge\psi=0.$

The geometric fields satisfy the following relations with the torsion of the ${\rm G}_{2}$-structure $\varphi$: $\begin{split}\tau_{0}&=\frac{3}{7}\lambda,\\\ \tau_{1}&=\frac{1}{2}d\mu,\\\ \tau_{2}&=0,\end{split}\qquad\begin{split}H^{\perp}&=-\frac{1}{2}d\mu^{\\#}\lrcorner\psi-\tau_{3},\\\ H&=\frac{\lambda}{14}\varphi\oplus H^{\perp},\\\ \tau_{3}&=-H^{\perp}-\frac{1}{2}d\mu^{\\#}\lrcorner\psi.\end{split}$ (1) Given a (small) real constant $\alpha^{\prime}\neq 0$, related to the string scale, the flux compensates exactly the Chern–Simons defect between the gauge fields via the _anomaly-free condition_, also referred to as the _heterotic Bianchi identity_: $dH=\frac{\alpha^{\prime}}{4}\left(\mathop{\mathrm{tr}}\nolimits F_{A}\wedge F_{A}-\mathop{\mathrm{tr}}\nolimits R_{\theta}\wedge R_{\theta}\right),$ (2) where $F_{A}$ is the curvature of $A$ and $R_{\theta}$ is the curvature of $\theta$.

###### Remark 1.3.

In the physics literature one obtains the heterotic ${\rm G}_{2}$ system by truncating a system of equations involving (formal) power series in $\alpha^{\prime}$. Consequently, one finds statements such as that $\theta$ need only be a ${\rm G}_{2}$-instanton on $TK$ “up to $O(\alpha^{\prime})$-corrections”, cf. [Ossa2014]*Appendix B. One natural way to formulate the ${\rm G}_{2}$-instanton condition to order $O(\alpha^{\prime})^{k}$ is $|R_{\theta}\wedge\psi|_{g}=O(\alpha^{\prime})^{k},\quad\text{as $\alpha^{\prime}\to 0$,}$ where $|.|_{g}$ is the pointwise $C^{0}$-norm with respect to the ${\rm G}_{2}$-metric $g$ defined by $\varphi$ (so that $\psi=\ast\varphi$). (Note that $\theta$, $\psi$ and $g$ can all depend on $\alpha^{\prime}$.) We will state our results in Theorem 1 below in those terms. However, we should stress that we still adopt Definition 1.2 for a genuine solution to the heterotic ${\rm G}_{2}$ system, in accordance with the mathematics literature on Hull–Strominger systems in 6 and 7 dimensions; see e.g. [Fernandez2011, Ivanov2010, Clarke2016, GarciaFernandez2016, GarciaFernandez2017, GarciaFernandez2020].

###### Remark 1.4.

For physical reasons one typically assumes $\alpha^{\prime}>0$ in (2), so we are not interested in the case $dH=0$. Hence, (2) only has any hope of occurring under the so-called _omalous_ condition: $p_{1}(E)=p_{1}(K)\in H^{4}_{dR}(K).$ (3) Omalous bundles can be systematically constructed for instance via monad techniques, as in the following example, which is derived trivially by combining results from [Henni2013, Calvo-Andrade2020]. In this paper, though, we will follow a different approach, cf. Theorem 1 below.
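It is also worth recording the shape of the relations (1) when the dilaton $\mu$ is constant, which is the case realised by our solutions: then $\tau_{1}=0$, the first column of (1) gives $\lambda=\frac{7}{3}\tau_{0}$, and the second column collapses to $H^{\perp}=-\tau_{3}$, so that

$H=\frac{\lambda}{14}\varphi-\tau_{3}=\frac{\tau_{0}}{6}\varphi-\tau_{3}.$

This is precisely the formula by which the flux will be computed in Lemma 2.5 below.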
###### Example 1.5.

When $K$ is the link in $S^{9}$ associated to the Fermat quintic $V\subset\mathbb{P}^{4}$, the cohomology of the monad

$0\longrightarrow\mathcal{O}_{V}(-1)^{\oplus 10}\longrightarrow\mathcal{O}_{V}^{\oplus 22}\longrightarrow\mathcal{O}_{V}(1)^{\oplus 10}\longrightarrow 0$

is a rank $2$ omalous bundle $E$, i.e. satisfying (3), with $c_{1}=0$ and $c_{2}=10$.

###### Remark 1.6.

Fernandez et al. [Fernandez2011] argue that one can replace the ${\rm G}_{2}$-instanton condition on $R_{\theta}$ by a more general second order condition, and still satisfy the equations of motion which motivate the heterotic ${\rm G}_{2}$ system. However, Ivanov concluded separately that in this context both conditions are equivalent [Ivanov2010]*§2.3.1.

### 1.2 Gauge theory on contact Calabi–Yau (cCY) manifolds

Let $(M^{2n+1},\eta,\xi)$ denote a contact manifold, with contact form $\eta$ and Reeb vector field $\xi$ [Boyer2008]. When $M$ is endowed in addition with a Sasakian structure, namely an integrable transverse complex structure $J$ and a compatible metric $g$, Biswas-Schumacher [Biswas2010] propose a natural notion of Sasakian holomorphic structure for complex vector bundles $E\to M$. We recall that a connection $A$ on a complex vector bundle over a Kähler manifold is said to be _Hermitian Yang-Mills (HYM)_ if $\hat{F}_{A}:=(F_{A},\omega)=0\quad\text{and}\quad F^{0,2}_{A}=0.$ (4) This notion extends to Sasakian bundles, by taking $\omega:=d\eta\in\Omega^{1,1}(M)$ as a ‘transverse Kähler form’, and defining HYM connections to be the solutions of (4) in that sense. The well-known concept of _Chern connection_ also extends, namely as a connection mutually compatible with the holomorphic structure (_integrable_) and a given Hermitian bundle metric (_unitary_), see [Biswas2010]*§ 3.

An important class of Sasakian manifolds are those endowed with a _contact Calabi–Yau (cCY)_ structure [Definition 2.1], the Riemannian metrics of which have transverse holonomy ${\rm SU}(n)$, in the sense of foliations, corresponding to the existence of a global transverse holomorphic volume form $\Omega\in\Omega^{n,0}(M)$ [Habib2015]. When $n=3$, cCY $7$-manifolds are naturally endowed with a ${\rm G}_{2}$-structure defined by the $3$-form $\varphi:=\eta\wedge d\eta+\mathop{\mathrm{Re}}\Omega,$ (5) which is _cocalibrated_, in the sense that its Hodge dual $\psi:=\ast_{g}\varphi$ is closed under the de Rham differential. When a $3$-form $\varphi$ on a $7$-manifold defines a ${\rm G}_{2}$-structure, the condition $F_{A}\wedge\psi=0$ (6) is referred to as the _${\rm G}_{2}$-instanton equation_. On holomorphic Sasakian bundles over closed cCY $7$-manifolds, it has the distinctive feature that integrable solutions (i.e. compatible with the holomorphic Sasakian structure) are indeed Yang-Mills critical points, even though the ${\rm G}_{2}$-structure has torsion [Calvo-Andrade2020].
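In outline, the mechanism behind the compatibility of (4) and (6), which we sketch here for the reader's convenience (see [Calvo-Andrade2020]*§4.3 for details), is the following. The pullback of a HYM field strength is a basic real $2$-form of type $(1,1)$ which is primitive by (4), so $F_{A}\wedge\omega^{2}=0$; moreover $F^{0,2}_{A}=0$ gives $F_{A}\wedge\Omega=0$, and hence $F_{A}\wedge\mathop{\mathrm{Re}}\Omega=F_{A}\wedge\mathop{\mathrm{Im}}\Omega=0$. Writing the dual $4$-form as $\psi=\frac{1}{2}\omega\wedge\omega-\eta\wedge\mathop{\mathrm{Im}}\Omega$ (cf. Proposition 2.2 below), we get

$F_{A}\wedge\psi=\frac{1}{2}F_{A}\wedge\omega^{2}-\eta\wedge F_{A}\wedge\mathop{\mathrm{Im}}\Omega=0,$

which is precisely the ${\rm G}_{2}$-instanton equation (6).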
### 1.3 Statement of main result

###### Definition 1.7.

Let $V$ be a Calabi–Yau 3-orbifold with metric $g_{V}$, volume form $\mathrm{vol}_{V}$, Kähler form $\omega$ and holomorphic volume form $\Omega$ satisfying $\mathrm{vol}_{V}=\frac{\omega^{3}}{3!}=\frac{\mathop{\mathrm{Re}}\Omega\wedge\mathop{\mathrm{Im}}\Omega}{4}.$ (7) Suppose that the total space of $\pi:K\to V$ is a contact Calabi–Yau 7-manifold, i.e. $K$ is an $S^{1}$-(orbi)bundle, with connection $1$-form $\eta$, such that $d\eta=\omega$. (For ease of notation, we omit the pullback $\pi^{*}$ on forms and tensors which are pulled back from $V$ to $K$.) For every $\varepsilon>0$, we define an $S^{1}$-invariant ${\rm G}_{2}$-structure on $K$ by $\displaystyle\varphi_{\varepsilon}$ $\displaystyle=\varepsilon\eta\wedge\omega+\mathop{\mathrm{Re}}\Omega,$ (8) $\displaystyle\psi_{\varepsilon}$ $\displaystyle=\frac{1}{2}\omega^{2}-\varepsilon\eta\wedge\mathop{\mathrm{Im}}\Omega.$ (9) The metric induced from this ${\rm G}_{2}$-structure and its corresponding volume form are $g_{\varepsilon}=\varepsilon^{2}\eta\otimes\eta+g_{V}\quad\text{and}\quad\mathrm{vol}_{\varepsilon}=\varepsilon\eta\wedge\mathrm{vol}_{V}.$ (10) NB.: The choice of $\varepsilon>0$ will a posteriori depend on the string parameter $\alpha^{\prime}$ in (2).

We will see that producing geometric fields satisfying the prescribed relations (1) with the torsion of the ${\rm G}_{2}$-structure (8) is rather straightforward. The actual problem consists in obtaining gauge fields that satisfy the heterotic Bianchi identity (2) on the contact Calabi–Yau $K^{7}$. We introduce therefore the following data:

* • Let $A:=\pi^{*}\Gamma_{V}$ be the pullback of the Levi-Civita connection of $g_{V}$ to $E:=\pi^{*}TV\to K$. Then $A$ is a ${\rm G}_{2}$-instanton on $E$, since it is the pullback of a HYM connection on $TV$ [Calvo-Andrade2020]*§4.3. Moreover, $A$ is a Yang-Mills connection and it minimises the Yang-Mills energy among Chern connections, with respect to the natural Sasakian holomorphic structure of $E$ [ibid., Theorem 1.4].
* • For each fixed $\varepsilon>0$, let $\theta_{\varepsilon}$ denote the Levi-Civita connection of the metric $g_{\varepsilon}$ on $K$ of Definition 1.7. It was shown in [Friedrich2003] that there is a unique metric connection which makes $\varphi_{\varepsilon}$ parallel and has totally skew-symmetric torsion (which may be identified with $H_{\varepsilon}$), often called the _Bismut connection_ (and also sometimes called the _canonical connection_). Following work in [Hull1986], another natural metric connection which appears in the physics literature is the _Hull connection_, whose torsion has the opposite sign to that of the Bismut connection. The Bismut and Hull connections fit in a 1-parameter family $\\{\theta_{\varepsilon}^{\delta}\\}$, which are modifications of $\theta_{\varepsilon}$ by a prescribed torsion component governed by the parameter $\delta\in\mathbb{R}$ and the flux $H_{\varepsilon}$. We further extend it to a 2-parameter family $\\{\theta_{\varepsilon}^{\delta,k}\\}$, with $k\in\mathbb{R}\smallsetminus\\{0\\}$ (choosing $k=0$ would in fact require the $S^{1}$-fibration $K\to V$ to be trivial, see Remark 3.6), corresponding to “squashings” of the connections $\theta_{\varepsilon}^{\delta}$. Finally, we define a “twist” by an additional parameter $m\in\mathbb{R}$, to obtain our overall family of connections $\\{\theta_{\varepsilon,m}^{\delta,k}\\}$ on $TK$ [Proposition 3.21]. Whilst typically $\theta_{\varepsilon,m}^{\delta,k}$ will _not_ be a ${\rm G}_{2}$-instanton on $TK$, it does satisfy the ${\rm G}_{2}$-instanton condition up to $O(\alpha^{\prime})$-corrections (in the sense of Remark 1.3) for various parameter choices; we summarise the roles of the three parameters below.
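To orient the reader through the constructions of §3: the parameter $\delta$ prescribes an additional torsion component proportional to the flux $H_{\varepsilon}$, with $\delta=+1$ and $\delta=-1$ yielding squashed versions of the Bismut and Hull connections, respectively [Definitions 3.13 and 3.15]; the parameter $k\neq 0$ “squashes” the connection in the direction of the $S^{1}$-fibres [Definition 3.5]; and the parameter $m$ “twists” the connection by the multiple $\frac{km\varepsilon}{2}e_{0}\mathcal{I}$ of a central element [Proposition 3.21]. It is the freedom in all three parameters that will allow us to satisfy the heterotic Bianchi identity (2) exactly, while achieving the instanton condition approximately.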
###### Theorem 1.

Let $(K^{7},\eta,\xi,J,\Omega)$ be a contact Calabi–Yau $7$-manifold, fibering by $\pi:K^{7}\to V$ over the Calabi–Yau $3$-fold $(V,g_{V},\omega,J,\Omega)$, and let $E:=\pi^{*}TV\to K$. Given any $\alpha^{\prime}>0$, there exist $k(\alpha^{\prime}),\varepsilon(\alpha^{\prime})>0$ and $m,\delta\in\mathbb{R}$ such that the following assertions hold:

1. (i) The ${\rm G}_{2}$-structure (8) is coclosed and satisfies the torsion conditions (1), with scalar field $\lambda=2\varepsilon$, constant dilaton $\mu\in\mathbb{R}$, and flux $H_{\varepsilon}=-\varepsilon^{2}\eta\wedge\omega+\varepsilon\mathop{\mathrm{Re}}\Omega$.
2. (ii) The connection $A:=\pi^{*}\Gamma_{V}$ is a ${\rm G}_{2}$-instanton on $E$, with respect to the dual $4$-form (9).
3. (iii) There exists a connection $\theta:=\theta_{\varepsilon,m}^{\delta,k}$ on $TK$, with torsion $H^{\delta,k}_{\varepsilon,m}=\big{(}1-k-\frac{km}{2}\big{)}\varepsilon^{2}\omega\otimes\eta+\frac{km\varepsilon^{2}}{2}\eta\wedge\omega+k\delta H_{\varepsilon},$ which satisfies the ${\rm G}_{2}$-instanton condition (6) to order $O(\alpha^{\prime})^{2}$ with respect to the dual $4$-form (9); i.e. $|R_{\theta}\wedge\psi_{\varepsilon}|_{g_{\varepsilon}}=O(\alpha^{\prime})^{2}\quad\text{as $\alpha^{\prime}\to 0$.}$ (11)
4. (iv) The data $(H_{\varepsilon},A,\theta)$ satisfy the heterotic Bianchi identity (2): $dH_{\varepsilon}=\frac{\alpha^{\prime}}{4}(\mathop{\mathrm{tr}}\nolimits F_{A}^{2}-\mathop{\mathrm{tr}}\nolimits R_{\theta}^{2}).$ (12)
5. (v) $\displaystyle\lim_{\alpha^{\prime}\to 0}\varepsilon(\alpha^{\prime})=0\quad\text{and}\quad\lim_{\alpha^{\prime}\to 0}k(\alpha^{\prime})=\infty$.

The various components of the proof are developed throughout the paper, and aggregated in §4.4.

Acknowledgements: The present article stems from an ongoing Newton Mobility bilateral collaboration (2019-2021), granted by the UK Royal Society [NMG$\backslash$R1$\backslash$191068]. JDL is also partially supported by the Simons Collaboration on Special Holonomy in Geometry, Analysis, and Physics ($\\#724071$ Jason Lotay). HSE has also been funded by the São Paulo Research Foundation (Fapesp) [2017/20007-0] & [2018/21391-1] and the Brazilian National Council for Scientific and Technological Development (CNPq) [307217/2017-5]. The authors would like to thank Xenia de la Ossa, Eirik Svanes and Mario García-Fernández for several insightful discussions.

## 2 Contact Calabi–Yau geometry: scalar field, dilaton, and flux

One may interpret special structure group reductions on compact odd-dimensional Riemannian manifolds as ‘transverse even-dimensional’ structures with respect to an $S^{1}$-action. So for instance contact geometry may be seen as transverse symplectic geometry, almost-contact geometry as transverse almost-complex geometry, and in the same way Sasakian geometry as transverse Kähler geometry. In particular, one may consider reduction of the transverse holonomy group; indeed Sasakian manifolds with transverse holonomy ${\rm SU}(n)$ are studied by Habib and Vezzoni [Habib2015]*§ 6.2.1:

###### Definition 2.1.

A Sasakian manifold $(K^{2n+1},\eta,\xi,J,\Omega)$ is said to be a _contact Calabi–Yau manifold_ (cCY) if $\Omega$ is a nowhere-vanishing transverse form of horizontal type $(n,0)$, such that $\Omega\wedge\bar{\Omega}=(-1)^{\frac{n(n+1)}{2}}{\bf i}^{n}\omega^{n}\quad\text{and}\quad d\Omega=0,\quad\text{with}\quad\omega=d\eta.$

Let us specialise to real dimension $7$. It is well-known that, for a Calabi–Yau 3-fold $(V,\omega,\Omega)$, the product $V\times{\rm S}^{1}$ has a natural torsion-free ${\rm G}_{2}$-structure defined by: $\varphi:=dt\wedge\omega+\mathop{\mathrm{Re}}\Omega,$ where $t$ is the coordinate on ${\rm S}^{1}$.
The Hodge dual of $\varphi$ is $\psi:=\ast\varphi=\frac{1}{2}\omega\wedge\omega-dt\wedge\mathop{\mathrm{Im}}\Omega$ (13) and the induced metric $g_{\varphi}=g_{V}+dt\otimes dt$ is the Riemannian product metric on $V\times S^{1}$ with holonomy $\mathrm{Hol}(g_{\varphi})={\rm SU}(3)\subset{\rm G}_{2}$. A contact Calabi–Yau structure essentially emulates all of these features, albeit its ${\rm G}_{2}$-structure has non-trivial torsion.

###### Proposition 2.2 ([Habib2015]*§6.2.1).

Every cCY manifold $(K^{7},\eta,\xi,J,\Omega)$ is an $S^{1}$-bundle $\pi:K\to V$ over a Calabi–Yau 3-orbifold $(V,\omega,\Omega)$, with connection 1-form $\eta$ and curvature $d\eta=\omega,$ (14) and it carries a cocalibrated ${\rm G}_{2}$-structure $\varphi:=\eta\wedge\omega+\mathop{\mathrm{Re}}\Omega,$ (15) with torsion $d\varphi=\omega\wedge\omega$ and Hodge dual $4$-form $\psi=\ast\varphi=\frac{1}{2}\omega\wedge\omega-\eta\wedge\mathop{\mathrm{Im}}\Omega$.

###### Example 2.3 (Calabi–Yau links for $k=1$).

Given a rational weight vector $w=(w_{0},\dots,w_{4})\in\mathbb{Q}^{5}$, a $w$-weighted homogeneous polynomial $f\in\mathbb{C}[z_{0},\dots,z_{4}]$ of degree $d=\sum_{i=0}^{4}w_{i}$ cuts out an affine hypersurface $\mathcal{V}=(f)$ with an isolated singularity at $0\in\mathbb{C}^{5}$. Its link $K_{f}:=\mathcal{V}\cap S^{9}\subset\mathbb{C}^{5}$, cut out by a small $9$-sphere around the singularity, is a compact and $2$-connected smooth cCY $7$-manifold, fibering by circles over a Calabi–Yau $3$-orbifold $V\subset\mathbb{P}^{4}(w)$ by the weighted Hopf fibration [Calvo-Andrade2020]*Theorem 1.1:

$\begin{matrix}K_{f}^{7}&\subset&S^{9}\\\ \big\downarrow&&\big\downarrow\\\ V&\subset&\mathbb{P}^{4}(w)\end{matrix}$

In particular, $V$ can be assumed to be any of the weighted Calabi–Yau $3$-folds listed by Candelas-Lynker-Schimmrigk [Candelas1990]. For a detailed survey on Calabi–Yau links, see [Calvo-Andrade2020]*§2. The $\mathbb{C}$-family of Fermat quintics yields but the simplest of instances, and indeed the only one for which the base $V$ is smooth.

### 2.1 Torsion forms and flux of the ${\rm G}_{2}$-structure $\varphi_{\varepsilon}$

We begin by addressing the heterotic ${\rm G}_{2}$ system conditions (1) on the ${\rm G}_{2}$-structure, as prescribed by [delaOssa2016]. In particular, we identify the components of the torsion corresponding to the scalar field, the dilaton and the flux, as asserted in Theorem 1–(i). We see from (8), (9), (14), and the fact that $V$ is Calabi–Yau, that $d\varphi_{\varepsilon}=\varepsilon\omega^{2}\quad\text{and}\quad d\psi_{\varepsilon}=0,$ (16) so that the ${\rm G}_{2}$-structures of Definition 1.7 are coclosed. We can now compute their torsion forms.

###### Lemma 2.4.

For each $\varepsilon>0$, the ${\rm G}_{2}$-structure on $K^{7}$ defined by (8)–(9) has torsion forms $\begin{array}[]{ll}\tau_{0}=\displaystyle\frac{6}{7}\varepsilon,&\tau_{1}=0,\\\ \tau_{2}=0,&\tau_{3}=\displaystyle\frac{8}{7}\varepsilon^{2}\eta\wedge\omega-\frac{6}{7}\varepsilon\mathop{\mathrm{Re}}\Omega.\end{array}$

###### Proof.

The fact that $\tau_{1}$ and $\tau_{2}$ vanish is an immediate consequence of (16).
Again by (16) and definition of the torsion forms, we have: $d\varphi_{\varepsilon}=\varepsilon\omega^{2}=\tau_{0}\psi_{\varepsilon}+*_{\varepsilon}\tau_{3}.$ (17) Thus, using $\omega\wedge\Omega=0$, we find $7\tau_{0}\mathrm{vol}_{\varepsilon}=d\varphi_{\varepsilon}\wedge\varphi_{\varepsilon}=\varepsilon\omega^{2}\wedge(\varepsilon\eta\wedge\omega)=6\varepsilon(\varepsilon\eta\wedge\frac{\omega^{3}}{3!}).$ (18) We further deduce from (18) and the expression of volume form (10) that $\tau_{0}=\frac{6}{7}\varepsilon.$ (19) Moreover, substituting (19) into (17), we see that $\displaystyle*_{\varepsilon}\tau_{3}$ $\displaystyle=d\varphi_{\varepsilon}-\tau_{0}\psi_{\varepsilon}=\varepsilon\omega^{2}-\frac{6}{7}\varepsilon(\frac{1}{2}\omega^{2}-\varepsilon\eta\wedge\mathop{\mathrm{Im}}\Omega)=\frac{4}{7}\varepsilon\omega^{2}+\frac{6}{7}\varepsilon^{2}\eta\wedge\mathop{\mathrm{Im}}\Omega.$ (20) Therefore, using (10) and (20) we obtain $\displaystyle\tau_{3}$ $\displaystyle=\frac{8}{7}\varepsilon\ast_{\varepsilon}(\frac{1}{2}\omega^{2})+\frac{6}{7}\varepsilon\ast_{\varepsilon}(\varepsilon\eta\wedge\mathop{\mathrm{Im}}\Omega)=\frac{8}{7}\varepsilon^{2}\eta\wedge\omega-\frac{6}{7}\varepsilon\mathop{\mathrm{Re}}\Omega.\qed$ We may compute the flux of the ${\rm G}_{2}$ structure $\varphi_{\varepsilon}$ as follows. ###### Lemma 2.5. In the situation of Lemma 2.4, the flux of the ${\rm G}_{2}$ structure $\varphi_{\varepsilon}$ is $H_{\varepsilon}=-\varepsilon^{2}\eta\wedge\omega+\varepsilon\mathop{\mathrm{Re}}\Omega.$ (21) Hence, $dH_{\varepsilon}=-\varepsilon^{2}\omega^{2}.$ (22) ###### Proof. From Definition 1.2 and the Lemma, we compute directly: $\displaystyle H_{\varepsilon}$ $\displaystyle=\frac{\lambda}{14}\varphi_{\varepsilon}+(H_{\varepsilon})^{\perp}=\frac{\tau_{0}}{6}\varphi_{\varepsilon}-\tau_{3}$ $\displaystyle=\frac{1}{7}\varepsilon(\varepsilon\eta\wedge\omega+\mathop{\mathrm{Re}}\Omega)-(\frac{8}{7}\varepsilon^{2}\eta\wedge\omega-\frac{6}{7}\varepsilon\mathop{\mathrm{Re}}\Omega)$ $\displaystyle=-\varepsilon^{2}\eta\wedge\omega+\varepsilon\mathop{\mathrm{Re}}\Omega.\qed$ ### 2.2 Local orthonormal coframe One key strategy in our construction consists in varying the length of the $S^{1}$-fibres on $K$ as a function of the string parameter $\alpha^{\prime}$. With that in mind, we adopt a useful local orthonormal coframe as follows. ###### Definition 2.6. Given $\varepsilon>0$, let $(K^{7},\varphi_{\varepsilon})$ be as in Definition 1.7. We choose the local Sasakian real orthonormal coframe on $K$: $e_{0}=\varepsilon\eta,\quad e_{1},\quad e_{2},\quad e_{3},\quad Je_{1},\quad Je_{2},\quad Je_{3},$ (23) where $J$ is the transverse complex structure (from the Calabi–Yau 3-fold $V$) acting on 1-forms, and we have a basic ${\rm SU}(3)$-coframe $\\{e_{1},e_{2},e_{3},Je_{1},Je_{2},Je_{3}\\}$, the pullback of an ${\rm SU}(3)$-coframe on $V$, such that $\displaystyle\omega$ $\displaystyle=e_{1}\wedge Je_{1}+e_{2}\wedge Je_{2}+e_{3}\wedge Je_{3},$ (24) $\displaystyle\Omega$ $\displaystyle=(e_{1}+iJe_{1})\wedge(e_{2}+iJe_{2})\wedge(e_{3}+iJe_{3}).$ (25) ###### Remark 2.7. 
It is worth noting from (25) that $\displaystyle\mathop{\mathrm{Re}}\Omega$ $\displaystyle=e_{1}\wedge e_{2}\wedge e_{3}-e_{1}\wedge Je_{2}\wedge Je_{3}-e_{2}\wedge Je_{3}\wedge Je_{1}-e_{3}\wedge Je_{1}\wedge Je_{2},$ (26) $\displaystyle\mathop{\mathrm{Im}}\Omega$ $\displaystyle=Je_{1}\wedge e_{2}\wedge e_{3}+Je_{2}\wedge e_{3}\wedge e_{1}+Je_{3}\wedge e_{1}\wedge e_{2}-Je_{1}\wedge Je_{2}\wedge Je_{3}.$ (27) Using (24) and (27), we easily derive the precise expression of $\psi_{\varepsilon}$ in this frame: $\displaystyle\begin{split}\psi_{\varepsilon}=\frac{1}{2}\omega^{2}-\varepsilon\eta\wedge\mathop{\mathrm{Im}}\Omega&=e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3}+e_{3}\wedge Je_{3}\wedge e_{1}\wedge Je_{1}+e_{1}\wedge Je_{1}\wedge e_{2}\wedge Je_{2}\\\ &\quad- e_{0}\wedge(Je_{1}\wedge e_{2}\wedge e_{3}+Je_{2}\wedge e_{3}\wedge e_{1}+Je_{3}\wedge e_{1}\wedge e_{2}-Je_{1}\wedge Je_{2}\wedge Je_{3}).\end{split}$ (28) ###### Lemma 2.8. In terms of the local coframe (23) and the natural matrix operations described in Appendix A, the basic $3$-covectors $e=(e_{1},e_{2},e_{3})^{\rm T}$ and $Je=(Je_{1},Je_{2},Je_{3})^{\rm T}$ in $\Omega^{1}(K)^{\oplus 3}$ have the following properties. * (a) The vectors $e\times Je\quad\text{and}\quad e\times e-Je\times Je$ consist of basic forms of type $(2,0)+(0,2)$. * (b) The vector $e\times e+Je\times Je$ and the off-diagonal part of $[e]\wedge[Je]-[Je]\wedge[e]$ (29) consist of basic forms of type $(1,1)$ which are also primitive (i.e. wedge with $\omega^{2}$ to give zero). The diagonal part of (29) consists of basic forms of type $(1,1)$. ###### Proof. For (a), we notice that $\displaystyle e_{2}\wedge Je_{3}-e_{3}\wedge Je_{2}$ $\displaystyle=\mathop{\mathrm{Im}}((e_{2}+iJe_{2})\wedge(e_{3}+iJe_{3})),$ $\displaystyle e_{2}\wedge e_{3}-Je_{2}\wedge Je_{3}$ $\displaystyle=\mathop{\mathrm{Re}}((e_{2}+iJe_{2})\wedge(e_{3}+iJe_{3})).$ We deduce that $e\times Je$ and $e\times e-Je\times Je$ consist of basic forms of type $(2,0)+(0,2)$ as claimed. For (b), we observe that $e_{2}\wedge e_{3}+Je_{2}\wedge Je_{3}=\mathop{\mathrm{Re}}((e_{2}+iJe_{2})\wedge(e_{3}-iJe_{3})),$ and hence $e\times e+Je\times Je$ consists of primitive forms of basic type $(1,1)$. We now note that $[e]\wedge[Je]-[Je]\wedge[e]=e\wedge Je^{\rm T}-Je\wedge e^{\rm T}-2\omega I$ (30) by Lemma A.3. Since $\displaystyle e_{2}\wedge Je_{3}+e_{3}\wedge Je_{2}=\mathop{\mathrm{Im}}((e_{2}-iJe_{2})\wedge(e_{3}+iJe_{3})),$ we deduce that the off-diagonal part of $[e]\wedge[Je]-[Je]\wedge[e]$ consists of forms of basic type $(1,1)$ which are primitive also. Finally, we now see from (30) that the diagonal entries in $[e]\wedge[Je]-[Je]\wedge[e]$ define the diagonal matrix $-2\,\text{diag}(e_{2}\wedge Je_{2}+e_{3}\wedge Je_{3},e_{3}\wedge Je_{3}+e_{1}\wedge Je_{1},e_{1}\wedge Je_{1}+e_{2}\wedge Je_{2}),$ (31) which clearly consists of basic forms of type $(1,1)$. ∎ ## 3 Gauge fields: G2-instanton, Bismut, Hull and twisted connections It is well-known that the pullback of a basic HYM connection to the total space of a contact Calabi–Yau (cCY) $7$-manifold is a ${\rm G}_{2}$-instanton, with respect to the standard ${\rm G}_{2}$-structure [Calvo-Andrade2020]*§4.3. Since the Levi-Civita connection of the Calabi–Yau $(V,g_{V})$ on $TV$ is HYM, the following result establishes Theorem 1–(ii). ###### Lemma 3.1. Let $E=\pi^{*}TV$ be the pullback of $TV$ to $K$ via the projection $\pi:K\to V$. Let $A$ be the connection on $E$ given by the pullback of the Levi-Civita connection of $g_{V}$. 
Then $A$ is a ${\rm G}_{2}$-instanton on $E$ with holonomy contained in $\mathrm{SU}(3)$. In this section we give formulae for the connections $\theta_{\varepsilon,m}^{\delta,k}$ and $A$ and their curvatures with respect to the local coframe in Definition 2.6. Using the freedom given by all three parameters, we will show that $\theta_{\varepsilon,m}^{\delta,k}$ can be chosen to satisfy the ${\rm G}_{2}$-instanton condition, at least to higher orders of the string scale $\alpha^{\prime}$. ### 3.1 The G2-instanton $A$ and the “squashings” $\theta_{\varepsilon}^{k}$ of the Levi-Civita connection #### 3.1.1 Local connection matrices Since the choice of a local Sasakian coframe on $K$ naturally trivialises $E=\pi^{*}TV\hookrightarrow TK$, we now want to relate the local matrix of the Levi-Civita connection $\theta_{\varepsilon}$ on $TK$ to (the pullback of) the gauge field $A$. To that end, we compute the first structure equations of our natural coframe: ###### Proposition 3.2. The coframe (23) on $K$ satisfies the following structure equations: $\displaystyle de_{0}$ $\displaystyle=\varepsilon\omega=\varepsilon(e_{1}\wedge Je_{1}+e_{2}\wedge Je_{2}+e_{3}\wedge Je_{3}),$ (32) $\displaystyle de_{i}$ $\displaystyle=-a_{ij}\wedge e_{j}-b_{ij}\wedge Je_{j},$ (33) $\displaystyle d(Je_{i})$ $\displaystyle=b_{ij}\wedge e_{j}-a_{ij}\wedge Je_{j},$ (34) for some local 1-forms $a_{ij},b_{ij}$, using the summation convention, with $1\leq i,j\leq 3$. Moreover, $a_{ji}=-a_{ij},\quad b_{ji}=b_{ij},\quad\sum_{i=1}^{3}b_{ii}=0,$ (35) so the matrix $a:=(a_{ij})$ is skew-symmetric, and the matrix $b:=(b_{ij})$ is symmetric traceless. Letting $I:=(\delta_{ij})$ and $e:=(e_{1}\;e_{2}\;e_{3})^{\rm T}$, the structure equations (32)–(34) can be written in terms of $7\times 7$ matrices: $d\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)=-\left(\begin{array}[]{ccc}0&\frac{\varepsilon}{2}Je^{\rm T}&-\frac{\varepsilon}{2}e^{\rm T}\\\ -\frac{\varepsilon}{2}Je&a&b-\frac{\varepsilon}{2}e_{0}I\\\ \frac{\varepsilon}{2}e&-b+\frac{\varepsilon}{2}e_{0}I&a\end{array}\right)\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right).$ (36) ###### Proof. The first equation (32) is a direct consequence of (23) and (24). The relationship between the derivatives of $e_{i}$ and $Je_{i}$ and the properties of the $a_{ij}$ and $b_{ij}$ are a consequence of $J$ being covariantly constant (on $V$) and $A$ having holonomy contained in ${\rm SU}(3)$, since $A$ arises from a torsion-free ${\rm SU}(3)$-structure. ∎ It will be useful later to have the following corollary of the structure equations, which is an elementary computation using (36). ###### Proposition 3.3. Using the notation of Definition A.1, the coframe in Definition 2.6 satisfies $\begin{split}d([e])&=-a\wedge[e]-[e]\wedge a+b\wedge[Je]-[Je]\wedge b,\\\ d([Je])&=-a\wedge[Je]-[Je]\wedge a-b\wedge[e]+[e]\wedge b.\end{split}$ (37) The matrix in (36) represents the Levi-Civita connection $\theta_{\varepsilon}$ in the given local coframe, and setting $\varepsilon=0$ in that matrix gives the matrix of $A$. Hence, we have the following. ###### Corollary 3.4. 
If we let $A=\left(\begin{array}[]{ccc}0&0&0\\\ 0&a&b\\\ 0&-b&a\end{array}\right)$ (38) and $B=\left(\begin{array}[]{ccc}0&Je^{\rm T}&-e^{\rm T}\\\ -Je&0&-e_{0}I\\\ e&e_{0}I&0\end{array}\right),$ (39) then the Levi-Civita connection $\theta_{\varepsilon}$ of the metric $g_{\varepsilon}$ in (10) is given locally by $\displaystyle\theta_{\varepsilon}$ $\displaystyle=\left(\begin{array}[]{ccc}0&\frac{\varepsilon}{2}Je^{\rm T}&-\frac{\varepsilon}{2}e^{\rm T}\\\ -\frac{\varepsilon}{2}Je&a&b-\frac{\varepsilon}{2}e_{0}I\\\ \frac{\varepsilon}{2}e&-b+\frac{\varepsilon}{2}e_{0}I&a\end{array}\right)=A+\frac{\varepsilon}{2}B.$ Corollary 3.4 allows us to define a family of connections $\theta^{k}_{\varepsilon}$ on $TK$ as follows. ###### Definition 3.5. For each $0\neq k\in\mathbb{R}$, let $\theta^{k}_{\varepsilon}$ be the connection on $TK$ given, in the local coframe of Definition 2.6, by $\theta^{k}_{\varepsilon}:=A+\frac{k\varepsilon}{2}B,$ with $A$ and $B$ as in Corollary 3.4. ###### Remark 3.6. The _trivial case_ $k=0$ can only occur when $K=S^{1}\times V$ is a trivial bundle over $V$, and then the connection on $TK$ will be equal to the pullback of the Levi-Civita connection on $V$ (trivial along $S^{1}$). Since we are assuming that $K\to V$ is a non-trivial $S^{1}$-bundle, we require $k\neq 0$. ###### Remark 3.7. Notice that $\displaystyle d\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)$ $\displaystyle=-\left(A+\frac{k\varepsilon}{2}B\right)\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)+\frac{(k-1)\varepsilon}{2}B\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)$ $\displaystyle=-\theta^{k}_{\varepsilon}\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)+(1-k)\varepsilon\left(\begin{array}[]{c}\omega\\\ 0\\\ 0\end{array}\right).$ Therefore, we may view $\theta^{k}_{\varepsilon}$ as a metric connection on $K$, with torsion $(1-k)\varepsilon\omega\otimes e_{0}$. Since $k\neq 0$, we see from Corollary 3.4 and Definition 3.5 that we may view $\theta^{k}_{\varepsilon}$ as a “squashing” of the Levi-Civita connection $\theta_{\varepsilon}$ of the metric $g_{\varepsilon}$ on $K$. #### 3.1.2 Local curvature matrices We begin by relating the curvature of the connections $\theta^{k}_{\varepsilon}$ in Definition 3.5 to the curvature $F_{A}$ of $A$. ###### Proposition 3.8. In the local coframe of Definition 2.6, the curvature $R_{\theta_{\varepsilon}^{k}}$ of the connection $\theta_{\varepsilon}^{k}$ from Definition 3.5 satisfies $R_{\theta_{\varepsilon}^{k}}=F_{A}+\frac{k\varepsilon^{2}}{2}\omega\mathcal{I}+\frac{k^{2}\varepsilon^{2}}{4}B\wedge B,$ where $\mathcal{I}=\left(\begin{array}[]{ccc}0&0&0\\\ 0&0&-I\\\ 0&I&0\end{array}\right)$ (40) and $\begin{split}B\wedge B&=\left(\begin{array}[]{ccc}0&e_{0}\wedge e^{\rm T}&e_{0}\wedge Je^{\rm T}\\\ -e_{0}\wedge e&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ -e_{0}\wedge Je&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)\\\ &=e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)+\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right).\end{split}$ (41) ###### Proof. 
From the relation between $\theta_{\varepsilon}^{k}$ and $A$ in Corollary 3.4, we see that $\displaystyle R_{\theta_{\varepsilon}^{k}}$ $\displaystyle=d\theta_{\varepsilon}^{k}+\theta_{\varepsilon}^{k}\wedge\theta_{\varepsilon}^{k}$ $\displaystyle=dA+\frac{k\varepsilon}{2}dB+(A+\frac{k\varepsilon}{2}B)\wedge(A+\frac{k\varepsilon}{2}B)$ $\displaystyle=F_{A}+\frac{k\varepsilon}{2}(dB+A\wedge B+B\wedge A)+\frac{k^{2}\varepsilon^{2}}{4}B\wedge B.$ (42) The first term of interest in (42) is $\displaystyle dB+A\wedge B+B\wedge A$ $\displaystyle=\left(\begin{array}[]{ccc}0&d(Je^{\rm T})+Je^{\rm T}\wedge a+e^{\rm T}\wedge b&-d(e^{\rm T})+Je^{\rm T}\wedge b-e^{\rm T}\wedge a\\\ -d(Je)-a\wedge Je+b\wedge e&b\wedge e_{0}I+e_{0}I\wedge b&-d(e_{0})I-a\wedge e_{0}I-e_{0}I\wedge a\\\ d(e)+b\wedge Je+a\wedge e&d(e_{0})I+a\wedge e_{0}I+e_{0}I\wedge a&b\wedge e_{0}I+e_{0}I\wedge b\end{array}\right)$ (46) $\displaystyle=\varepsilon\omega\left(\begin{array}[]{ccc}0&0&0\\\ 0&0&-I\\\ 0&I&0\end{array}\right)=\varepsilon\omega\mathcal{I}$ (50) as a consequence of the structure equations for the coframe in Proposition 3.2. Equation (41) follows directly from (39). ∎

At this point, it is worth recalling that $A$ is a ${\rm G}_{2}$-instanton, in fact the lift of a connection with holonomy $\mathrm{SU}(3)$ on $V$, so $F_{A}$ must take values in $\mathfrak{su}(3)\subseteq\mathfrak{g}_{2}$: $F_{A}=\left(\begin{array}[]{ccc}0&0&0\\\ 0&\alpha&\beta\\\ 0&-\beta&\alpha\end{array}\right),$ (51) where $\alpha$ is a skew-symmetric $3\times 3$ matrix of 2-forms, and $\beta$ is a symmetric traceless $3\times 3$ matrix of 2-forms.

###### Lemma 3.9.

The block-elements of the curvature matrix (51) of $A$ in the local coframe (23) satisfy: $\displaystyle\begin{split}\alpha\wedge e+\beta\wedge Je&=0,\\\ \alpha\wedge Je-\beta\wedge e&=0.\end{split}$ (52) Moreover, using the notation of Definition A.1, we have $\begin{split}\alpha\wedge[e]-[e]\wedge\alpha-\beta\wedge[Je]-[Je]\wedge\beta&=0,\\\ \alpha\wedge[Je]+[Je]\wedge\alpha+\beta\wedge[e]-[e]\wedge\beta&=0.\end{split}$ (53)

###### Proof.

Differentiating the defining relation $d\left(\begin{array}[]{c}0\\\ e\\\ Je\end{array}\right)=-A\wedge\left(\begin{array}[]{c}0\\\ e\\\ Je\end{array}\right),$ we obtain $\displaystyle 0$ $\displaystyle=-dA\wedge\left(\begin{array}[]{c}0\\\ e\\\ Je\end{array}\right)+A\wedge d\left(\begin{array}[]{c}0\\\ e\\\ Je\end{array}\right)=-(dA+A\wedge A)\wedge\left(\begin{array}[]{c}0\\\ e\\\ Je\end{array}\right)$ $\displaystyle=-F_{A}\wedge\left(\begin{array}[]{c}0\\\ e\\\ Je\end{array}\right)=-\left(\begin{array}[]{ccc}0&0&0\\\ 0&\alpha&\beta\\\ 0&-\beta&\alpha\end{array}\right)\wedge\left(\begin{array}[]{c}0\\\ e\\\ Je\end{array}\right)$ $\displaystyle=-\left(\begin{array}[]{c}0\\\ \alpha\wedge e+\beta\wedge Je\\\ \alpha\wedge Je-\beta\wedge e\end{array}\right).$ Equation (53) follows similarly from the structure equations (37). ∎

### 3.2 The “squashed” Bismut and Hull connections on $TK$

We now introduce an additional parameter to our connections, which adds a multiple of the flux $H_{\varepsilon}$ as torsion. This, in particular, leads us to the Bismut and Hull connections.

#### 3.2.1 Local connection matrices and torsion

We begin by identifying the flux $H_{\varepsilon}$ with a locally defined matrix of 1-forms and a vector-valued 2-form as follows, so that we can define connections with torsion given by the flux.

###### Proposition 3.10.
In the local coframe of Definition 2.6, and using the notation from Definition A.1, let $C:=\left(\begin{array}[]{ccc}0&Je^{\rm T}&-e^{\rm T}\\\ -Je&-[e]&e_{0}I+[Je]\\\ e&-e_{0}I+[Je]&[e]\end{array}\right)=\left(\begin{array}[]{ccc}0&Je^{\rm T}&-e^{\rm T}\\\ -Je&-[e]&[Je]\\\ e&[Je]&[e]\end{array}\right)-e_{0}\mathcal{I}.$ (54) Then we may raise an index on the $3$-form $H_{\varepsilon}$ and view it as a vector-valued $2$-form, as follows: $H_{\varepsilon}=\frac{\varepsilon}{2}\left(\begin{array}[]{ccc}0&Je^{\rm T}&-e^{\rm T}\\\ -Je&-[e]&e_{0}I+[Je]\\\ e&-e_{0}I+[Je]&[e]\end{array}\right)\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)=\frac{\varepsilon}{2}C\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right).$ (55)

###### Proof.

By Lemma 2.5, (24) and (26), we have that $\begin{split}H_{\varepsilon}&=-\varepsilon^{2}\eta\wedge\omega+\varepsilon\mathop{\mathrm{Re}}\Omega\\\ &=-\varepsilon e_{0}\wedge(e_{1}\wedge Je_{1}+e_{2}\wedge Je_{2}+e_{3}\wedge Je_{3})\\\ &\qquad+\varepsilon(e_{1}\wedge e_{2}\wedge e_{3}-e_{1}\wedge Je_{2}\wedge Je_{3}-e_{2}\wedge Je_{3}\wedge Je_{1}-e_{3}\wedge Je_{1}\wedge Je_{2}).\end{split}$ (56) We raise an index, so that $H_{\varepsilon}$ is a vector-valued 2-form, and use Lemma A.3 to deduce the claim: $\displaystyle H_{\varepsilon}$ $\displaystyle=\varepsilon\left(\begin{array}[]{c}-e_{1}\wedge Je_{1}-e_{2}\wedge Je_{2}-e_{3}\wedge Je_{3}\\\ e_{0}\wedge Je_{1}+e_{2}\wedge e_{3}-Je_{2}\wedge Je_{3}\\\ e_{0}\wedge Je_{2}+e_{3}\wedge e_{1}-Je_{3}\wedge Je_{1}\\\ e_{0}\wedge Je_{3}+e_{1}\wedge e_{2}-Je_{1}\wedge Je_{2}\\\ -e_{0}\wedge e_{1}-e_{2}\wedge Je_{3}+e_{3}\wedge Je_{2}\\\ -e_{0}\wedge e_{2}-e_{3}\wedge Je_{1}+e_{1}\wedge Je_{3}\\\ -e_{0}\wedge e_{3}-e_{1}\wedge Je_{2}+e_{2}\wedge Je_{1}\end{array}\right)=\frac{\varepsilon}{2}\left(\begin{array}[]{ccc}0&Je^{\rm T}&-e^{\rm T}\\\ -Je&-[e]&e_{0}I+[Je]\\\ e&-e_{0}I+[Je]&[e]\end{array}\right)\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right).\qed$

###### Corollary 3.11.

In terms of Definition 3.5 and Proposition 3.10, let $\tau_{\varepsilon}:=\varepsilon C$; then each local matrix $\theta^{\delta,k}_{\varepsilon}=\theta_{\varepsilon}^{k}+\frac{k\delta}{2}\tau_{\varepsilon}=A+\frac{k\varepsilon}{2}B+\frac{k\varepsilon\delta}{2}C,\quad\text{for}\quad k\neq 0\text{ and }\delta\in\mathbb{R},$ (57) defines a connection on $TK$, with torsion $H^{\delta,k}_{\varepsilon}=(1-k)\varepsilon\omega\otimes e_{0}+k\delta H_{\varepsilon}.$ (58) Explicitly, $\theta^{\delta,k}_{\varepsilon}=A+\frac{k\varepsilon}{2}B+\frac{k\varepsilon\delta}{2}C=\left(\begin{array}[]{ccc}0&\frac{k\varepsilon(1+\delta)}{2}Je^{\rm T}&-\frac{k\varepsilon(1+\delta)}{2}e^{\rm T}\\\\[2.0pt] -\frac{k\varepsilon(1+\delta)}{2}Je&a-\frac{k\varepsilon\delta}{2}[e]&b-\frac{k\varepsilon(1-\delta)}{2}e_{0}I+\frac{k\varepsilon\delta}{2}[Je]\\\\[2.0pt] \frac{k\varepsilon(1+\delta)}{2}e&-b+\frac{k\varepsilon(1-\delta)}{2}e_{0}I+\frac{k\varepsilon\delta}{2}[Je]&a+\frac{k\varepsilon\delta}{2}[e]\end{array}\right).$

###### Proof.
We see from (36) and Proposition 3.10 that $\displaystyle d\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)$ $\displaystyle=-(A+\frac{k\varepsilon}{2}B+\frac{k\varepsilon\delta}{2}C)\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)+(1-k)\varepsilon\left(\begin{array}[]{c}\omega\\\ 0\\\ 0\end{array}\right)+\frac{k\varepsilon\delta}{2}C\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)$ $\displaystyle=-\theta^{\delta,k}_{\varepsilon}\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)+(1-k)\varepsilon\left(\begin{array}[]{c}\omega\\\ 0\\\ 0\end{array}\right)+k\delta H_{\varepsilon}.\qed$

###### Remark 3.12.

It is possible to further deform the connection, and indeed the whole heterotic ${\rm G}_{2}$ system, by allowing a non-trivial (non-constant) dilaton, which is equivalent to performing a conformal transformation on the ${\rm G}_{2}$-structure. However, since there are in general no distinguished functions on $K$ to define the dilaton, we will not pursue this possibility here.

###### Definition 3.13.

Taking $\delta=+1$ in (57) gives $\theta^{+,k}_{\varepsilon}=A+\frac{k\varepsilon}{2}(B+C)=\left(\begin{array}[]{ccc}0&k\varepsilon Je^{\rm T}&-k\varepsilon e^{\rm T}\\\ -k\varepsilon Je&a-\frac{k\varepsilon}{2}[e]&b+\frac{k\varepsilon}{2}[Je]\\\ k\varepsilon e&-b+\frac{k\varepsilon}{2}[Je]&a+\frac{k\varepsilon}{2}[e]\end{array}\right).$ (59) We see from our choice of coframe that $\theta^{+,k}_{\varepsilon}$ takes values in $\mathfrak{g}_{2}\subseteq\Lambda^{2}$, see e.g. [Lotay2011], and hence $\theta^{+,k}_{\varepsilon}$ has holonomy contained in ${\rm G}_{2}$, as its curvature will necessarily take values in $\mathfrak{g}_{2}$. Further, setting $k=1$ in (59) gives what is often called the _Bismut connection_ $\theta_{\varepsilon}^{+}$ for $\varphi_{\varepsilon}$, the unique metric connection which makes $\varphi_{\varepsilon}$ parallel and has totally skew-symmetric torsion (which is the flux $H_{\varepsilon}$).

###### Remark 3.14.

The Bismut connection has been the subject of much study, and is a natural connection in this context. It is therefore tempting to use the Bismut connection (and more generally the connections $\theta^{+,k}_{\varepsilon}$ in Definition 3.13) when studying the heterotic ${\rm G}_{2}$ system, particularly because of its holonomy property. However, inspired by the ideas in [Hull1986, Martelli2011], one could also consider a connection, known as the _Hull connection_, whose torsion has the _opposite sign_ to the Bismut connection when trying to satisfy the heterotic Bianchi identity (2). This also motivates our discussion of the connections $\theta^{\delta,k}_{\varepsilon}$ for $\delta<0$ as well as $\delta\geq 0$.

As a consequence of the previous remark, we will also be interested in the Hull connection, formally defined below.

###### Definition 3.15.
Taking $\delta=-1$ in (57) gives $\displaystyle\begin{split}\theta^{-,k}_{\varepsilon}&=A+\frac{k\varepsilon}{2}(B-C)\\\ &=\left(\begin{array}[]{ccc}0&0&0\\\ 0&a+\frac{k\varepsilon}{2}[e]&b-k\varepsilon e_{0}I-\frac{k\varepsilon}{2}[Je]\\\ 0&-b+k\varepsilon e_{0}I-\frac{k\varepsilon}{2}[Je]&a-\frac{k\varepsilon}{2}[e]\end{array}\right)\\\ &=\left(\begin{array}[]{ccc}0&0&0\\\ 0&a+\frac{k\varepsilon}{2}[e]&b-\frac{k\varepsilon}{2}[Je]\\\ 0&-b-\frac{k\varepsilon}{2}[Je]&a-\frac{k\varepsilon}{2}[e]\end{array}\right)+k\varepsilon e_{0}\mathcal{I}.\end{split}$ (60) Setting $k=1$ in (60) gives the _Hull connection_ $\theta^{-}_{\varepsilon}$ associated to the ${\rm G}_{2}$-structure $\varphi_{\varepsilon}$. ###### Remark 3.16. As in the case of $\theta^{k}_{\varepsilon}$, we may view the connections $\theta^{+,k}_{\varepsilon}$ and $\theta^{-,k}_{\varepsilon}$, respectively, as “squashed” versions of the Bismut and Hull connections $\theta^{+}_{\varepsilon}$ and $\theta^{-}_{\varepsilon}$. #### 3.2.2 Local curvature matrices Now, we want to determine the curvature of $\theta_{\varepsilon}^{\delta,k}$ in Corollary 3.11, with a particular emphasis on the cases $\delta=\pm 1$. We begin with the result for all $\delta$. ###### Proposition 3.17. The curvature $R^{\delta,k}_{\varepsilon}$ of the connection $\theta_{\varepsilon}^{\delta,k}$ in (57) satisfies $R^{\delta,k}_{\varepsilon}=F_{A}+\frac{k\varepsilon^{2}(1-\delta)}{2}\omega\mathcal{I}+\frac{k^{2}\varepsilon^{2}}{4}Q^{\delta},$ (61) where $\mathcal{I}$ is given in (40), $\displaystyle\begin{split}Q^{\delta}:=(B+\delta C)&\wedge(B+\delta C)=(1-\delta)Q_{-}^{\delta}+(1+\delta)Q_{+}^{\delta}+\delta^{2}Q_{0}\end{split}$ (62) and $\displaystyle Q^{\delta}_{-}$ $\displaystyle=e_{0}\wedge\left(\begin{array}[]{ccc}0&(1+\delta)e^{\rm T}&(1+\delta)Je^{\rm T}\\\ -(1+\delta)e&-2\delta[Je]&-2\delta[e]\\\ -(1+\delta)Je&-2\delta[e]&2\delta[Je]\end{array}\right),$ (66) $\displaystyle Q^{\delta}_{+}$ $\displaystyle=\left(\begin{array}[]{ccc}0&2\delta(e\times Je)^{\rm T}&\delta(e\times e-Je\times Je)^{\rm T}\\\ -2\delta(e\times Je)&-(1+\delta)(Je\wedge Je^{\rm T})&(1+\delta)(Je\wedge e^{\rm T})\\\ -\delta(e\times e-Je\times Je)&(1+\delta)(e\wedge Je^{\rm T})&-(1+\delta)(e\wedge e^{\rm T})\end{array}\right),$ (70) $\displaystyle Q_{0}$ $\displaystyle=\frac{1}{2}\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right).$ (74) ###### Proof. We begin by observing that, by Corollary 3.11 and (50), $\displaystyle R^{\delta,k}_{\varepsilon}$ $\displaystyle=d\theta_{\varepsilon}^{\delta,k}+\theta_{\varepsilon}^{\delta,k}\wedge\theta_{\varepsilon}^{\delta,k}$ $\displaystyle=dA+\frac{k\varepsilon}{2}dB+\frac{k\delta\varepsilon}{2}dC+\left(A+\frac{k\varepsilon}{2}B+\frac{k\varepsilon\delta}{2}C\right)\wedge\left(A+\frac{k\varepsilon}{2}B+\frac{k\varepsilon\delta}{2}C\right)$ $\displaystyle=F_{A}+\frac{k\varepsilon}{2}(dB+A\wedge B+B\wedge A)+\frac{k\varepsilon\delta}{2}(dC+A\wedge C+C\wedge A)+\frac{k^{2}\varepsilon^{2}}{4}(B+\delta C)\wedge(B+\delta C)$ $\displaystyle=F_{A}+\frac{k\varepsilon^{2}}{2}\omega\mathcal{I}+\frac{k\varepsilon\delta}{2}(dC+A\wedge C+C\wedge A)+\frac{k^{2}\varepsilon^{2}}{4}(B+\delta C)\wedge(B+\delta C).$ (75) We may easily compute $dC+A\wedge C+C\wedge A$ appearing in (75). We first see that $(dC+A\wedge C+C\wedge A)_{1j}=(dB+A\wedge B+B\wedge A)_{1j}=0.$ Therefore, $(dC+A\wedge C+C\wedge A)_{j1}=0$ as well by skew-symmetry. 
We may therefore write $dC+A\wedge C+C\wedge A$ in the block form $dC+A\wedge C+C\wedge A=\left(\begin{array}[]{ccc}0&0&0\\\ 0&c&d\\\ 0&-d^{\rm T}&-c\end{array}\right).$ We then find that $\displaystyle c$ $\displaystyle=-b\wedge e_{0}I-e_{0}I\wedge b-d([e])-a\wedge[e]-[e]\wedge a+b\wedge[Je]-[Je]\wedge b=0$ using the structure equations (37) in Proposition 3.3. We also find that $\displaystyle d$ $\displaystyle=d(e_{0})I+d([Je])+a\wedge e_{0}I+e_{0}I\wedge a+a\wedge[Je]+[Je]\wedge a+b\wedge[e]-[e]\wedge b$ $\displaystyle=\varepsilon\omega I,$ using (32) and (37). Overall, we deduce that $dC+A\wedge C+C\wedge A=\varepsilon\omega\left(\begin{array}[]{ccc}0&0&0\\\ 0&0&I\\\ 0&-I&0\end{array}\right)=-\varepsilon\omega\mathcal{I}.$ Hence, (61) follows. We now need only verify (62). Recall from Corollary 3.11 that $B+\delta C=\left(\begin{array}[]{ccc}0&(1+\delta)Je^{\rm T}&-(1+\delta)e^{\rm T}\\\ -(1+\delta)Je&-\delta[e]&-(1-\delta)e_{0}I+\delta[Je]\\\ (1+\delta)e&(1-\delta)e_{0}I+\delta[Je]&\delta[e]\end{array}\right).$ (76) Using Lemma A.3, we start with the first row of $(B+\delta C)\wedge(B+\delta C)$ and find the non-zero entries $\displaystyle(1-\delta)(1+\delta)e_{0}\wedge e^{\rm T}-\delta(1+\delta)\big{(}Je^{\rm T}\wedge[e]$ $\displaystyle+e^{\rm T}\wedge[Je]\big{)}=(1-\delta)(1+\delta)e_{0}\wedge e^{\rm T}+2\delta(1+\delta)(e\times Je)^{\rm T}$ and $\displaystyle(1-\delta)(1+\delta)e_{0}\wedge Je^{\rm T}$ $\displaystyle+\delta(1+\delta)\big{(}Je^{\rm T}\wedge[Je]-e^{\rm T}\wedge[e]\big{)}$ $\displaystyle=(1-\delta)(1+\delta)e_{0}\wedge Je^{\rm T}+\delta(1+\delta)(e\times e-Je\times Je)^{\rm T}.$ Moving to the middle block and again using Lemma A.3, we obtain $\displaystyle-(1+\delta)^{2}Je\wedge Je^{\rm T}$ $\displaystyle+\delta^{2}\big{(}[e]\wedge[e]+[Je]\wedge[Je]\big{)}-\delta(1-\delta)(e_{0}I\wedge[Je]-[Je]\wedge e_{0}I)$ $\displaystyle=-(1+\delta)^{2}Je\wedge Je^{\rm T}-\frac{1}{2}\delta^{2}[e\times e+Je\times Je]-2\delta(1-\delta)e_{0}\wedge[Je].$ Similarly, for the bottom right block, we obtain $\displaystyle-(1+\delta)^{2}e\wedge e^{\rm T}-\frac{1}{2}\delta^{2}[e\times e+Je\times Je]+2\delta(1-\delta)e_{0}\wedge[Je].$ The remaining entries are defined by the middle right block, which is $\displaystyle(1+\delta)^{2}Je\wedge e^{\rm T}-\delta^{2}([e]\wedge[Je]-[Je]\wedge[e])-2\delta(1-\delta)e_{0}\wedge[e].$ Equation (62) now follows. ∎ We now can specialize to the Bismut and Hull connections. ###### Corollary 3.18. The curvature $R_{\theta^{+}_{\varepsilon}}$ of the Bismut connection $\theta^{+}_{\varepsilon}$ satisfies $R_{\theta^{+}_{\varepsilon}}=F_{A}+\frac{\varepsilon^{2}}{4}(B+C)\wedge(B+C),$ (77) where $\displaystyle\begin{split}(B+C)\wedge(B+C)=\;&2\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&-2(Je\wedge Je^{\rm T})&2(Je\wedge e^{\rm T})\\\ -(e\times e-Je\times Je)&2(e\wedge Je^{\rm T})&-2(e\wedge e^{\rm T})\end{array}\right)\\\ &+\frac{1}{2}\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right).\end{split}$ (78) ###### Corollary 3.19. 
The curvature $R_{\theta_{\varepsilon}^{-}}$ of the Hull connection $\theta_{\varepsilon}^{-}$ satisfies $R_{\theta^{-}_{\varepsilon}}=F_{A}+\varepsilon^{2}\omega\mathcal{I}+\frac{\varepsilon^{2}}{4}(B-C)\wedge(B-C),$ (79) where $\mathcal{I}$ is given in (40) and $\displaystyle\begin{split}(B-C)&\wedge(B-C)\\\ =\;&4e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)+\frac{1}{2}\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right).\end{split}$ (80) ### 3.3 Connections: an extra twist It will be useful to “twist” our connection by multiples of $e_{0}\mathcal{I}$. To discern the impact of this twist on the curvature of the connection, we have the following lemma. ###### Lemma 3.20. The local connection matrices $A,B,C$ from Corollary 3.4 and Proposition 3.10 satisfy $\displaystyle A\wedge e_{0}\mathcal{I}+e_{0}\mathcal{I}\wedge A$ $\displaystyle=0,$ (81) $\displaystyle B\wedge e_{0}\mathcal{I}+e_{0}\mathcal{I}\wedge B$ $\displaystyle=e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right),$ (85) $\displaystyle C\wedge e_{0}\mathcal{I}+e_{0}\mathcal{I}\wedge C$ $\displaystyle=e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&-2[Je]&-2[e]\\\ -Je&-2[e]&2[Je]\end{array}\right).$ (89) ###### Proof. Given that $A$ in (38) takes values in $\mathfrak{su}(3)\subseteq\mathfrak{u}(3)$ and $\mathcal{I}$ in (40) is central in $\mathfrak{u}(3)$, we immediately deduce (81). Moreover, we see from (39), (54) and (40) that $B\wedge e_{0}\mathcal{I}=\left(\begin{array}[]{ccc}0&-(e\wedge e_{0})^{\rm T}&-(Je\wedge e_{0})^{\rm T}\\\ 0&0&0\\\ 0&0&0\end{array}\right),\qquad e_{0}\mathcal{I}\wedge B=\left(\begin{array}[]{ccc}0&0&0\\\ -e_{0}\wedge e&0&0\\\ -e_{0}\wedge Je&0&0\end{array}\right)$ and $C\wedge e_{0}\mathcal{I}=\left(\begin{array}[]{ccc}0&-(e\wedge e_{0})^{\rm T}&-(Je\wedge e_{0})^{\rm T}\\\ 0&[Je]\wedge e_{0}&[e]\wedge e_{0}\\\ 0&[e]\wedge e_{0}&-[Je]\wedge e_{0}\end{array}\right),$ $e_{0}\mathcal{I}\wedge C=\left(\begin{array}[]{ccc}0&0&0\\\ -e_{0}\wedge e&-e_{0}\wedge[Je]&-e_{0}\wedge[e]\\\ -e_{0}\wedge Je&-e_{0}\wedge[e]&e_{0}\wedge[Je]\end{array}\right).$ Equations (85) and (89) then follow. ∎ The previous lemma allows us to compute the curvature of a twisted connection, in particular establishing Theorem 1–(iii), as follows. ###### Proposition 3.21. In the local coframe from Definition 2.6, define a connection $\theta^{\delta,k}_{\varepsilon,m}$ on $TK$ by $\theta^{\delta,k}_{\varepsilon,m}=\theta^{\delta,k}_{\varepsilon}+\frac{km\varepsilon}{2}e_{0}\mathcal{I}.$ (90) Then its torsion is $H^{\delta,k}_{\varepsilon,m}=\big{(}1-k-\frac{km}{2}\big{)}\varepsilon\omega\otimes e_{0}+\frac{km\varepsilon}{2}e_{0}\wedge\omega+k\delta H_{\varepsilon}$ (91) and its curvature is given by $R^{\delta,k}_{\varepsilon,m}=F_{A}+\frac{k\varepsilon^{2}(1-\delta+m)}{2}\omega\mathcal{I}+\frac{k^{2}\varepsilon^{2}}{4}Q^{\delta}_{m}$ (92) where $Q^{\delta}_{m}=\left(1-\delta+m\right)Q_{-}^{\delta}+(1+\delta)Q_{+}^{\delta}+\delta^{2}Q_{0}$ (93) for $Q^{\delta}_{-},Q^{\delta}_{+},Q_{0}$ defined in (66), (70) and (74), respectively. ###### Proof. 
Using (58) we see that $\displaystyle d\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)$ $\displaystyle=-\theta^{\delta,k}_{\varepsilon}\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)+(1-k)\varepsilon\left(\begin{array}[]{c}\omega\\\ 0\\\ 0\end{array}\right)+k\delta H_{\varepsilon}$ $\displaystyle=-\theta^{\delta,k}_{\varepsilon,m}\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)+\frac{km\varepsilon}{2}e_{0}\mathcal{I}\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)+(1-k)\varepsilon\left(\begin{array}[]{c}\omega\\\ 0\\\ 0\end{array}\right)+k\delta H_{\varepsilon}.$ Since $\frac{km\varepsilon}{2}e_{0}\mathcal{I}\wedge\left(\begin{array}[]{c}e_{0}\\\ e\\\ Je\end{array}\right)=\frac{km\varepsilon}{2}\left(\begin{array}[]{c}0\\\ -e_{0}\wedge Je\\\ e_{0}\wedge e\end{array}\right)$ and raising an index on $e_{0}\wedge\omega$ gives the vector-valued 2-form $\left(\begin{array}[]{c}\omega\\\ -e_{0}\wedge Je\\\ e_{0}\wedge e\end{array}\right),$ we quickly deduce (91). We know by definition that $\displaystyle R^{\delta,k}_{\varepsilon,m}$ $\displaystyle=d(\theta^{\delta,k}_{\varepsilon}+\frac{km\varepsilon}{2}e_{0}\mathcal{I})+(\theta^{\delta,k}_{\varepsilon}+\frac{km\varepsilon}{2}e_{0}\mathcal{I})\wedge(\theta^{\delta,k}_{\varepsilon}+\frac{km\varepsilon}{2}e_{0}\mathcal{I})$ $\displaystyle=R^{\delta,k}_{\varepsilon}+\frac{km\varepsilon^{2}}{2}\omega\mathcal{I}+\frac{km\varepsilon}{2}(\theta^{\delta,k}_{\varepsilon}\wedge e_{0}\mathcal{I}+e_{0}\mathcal{I}\wedge\theta^{\delta,k}_{\varepsilon}).$ Lemma 3.20 implies that $\displaystyle\theta^{\delta,k}_{\varepsilon}\wedge e_{0}\mathcal{I}+e_{0}\mathcal{I}\wedge\theta^{\delta,k}_{\varepsilon}$ $\displaystyle=\left(A+\frac{k\varepsilon}{2}(B+\delta C)\right)\wedge e_{0}\mathcal{I}+e_{0}\mathcal{I}\wedge\left(A+\frac{k\varepsilon}{2}(B+\delta C)\right)$ $\displaystyle=\frac{k\varepsilon}{2}e_{0}\wedge\left(\begin{array}[]{ccc}0&(1+\delta)e^{\rm T}&(1+\delta)Je^{\rm T}\\\ -(1+\delta)e&-2\delta[Je]&-2\delta[e]\\\ -(1+\delta)Je&-2\delta[e]&2\delta[Je]\end{array}\right)=\frac{k\varepsilon}{2}Q^{\delta}_{-}$ by (66). The result now follows from Proposition 3.17. ∎ The following observation, which may be of independent interest, is immediate from (91):

###### Corollary 3.22.
The connection $\theta^{\delta,k}_{\varepsilon,m}$ in (90) has totally skew-symmetric torsion if, and only if, $1-k\left(1+\frac{m}{2}\right)=0.$

### 3.4 The G2-instanton condition

One way to check the ${\rm G}_{2}$-instanton condition is to verify the vanishing of the wedge product of the curvature with $\psi_{\varepsilon}$, cf. (9). Before doing this, we make some elementary observations.

###### Lemma 3.23.
In the local coframe (23) on a contact Calabi–Yau $7$-manifold as in Definition 1.7, and using the notation from Definition A.1, the following identities hold: $\displaystyle 2(e\times Je)\wedge\mathop{\mathrm{Im}}\Omega$ $\displaystyle=4e\wedge\frac{\omega^{2}}{2},$ $\displaystyle(e\times e-Je\times Je)\wedge\mathop{\mathrm{Im}}\Omega$ $\displaystyle=4Je\wedge\frac{\omega^{2}}{2},$ (94) $\displaystyle e\wedge e^{\rm T}\wedge\mathop{\mathrm{Im}}\Omega$ $\displaystyle=[Je]\wedge\frac{\omega^{2}}{2},$ $\displaystyle Je\wedge Je^{\rm T}\wedge\mathop{\mathrm{Im}}\Omega$ $\displaystyle=-[Je]\wedge\frac{\omega^{2}}{2},$ (95) $\displaystyle[e\times e+Je\times Je]\wedge\mathop{\mathrm{Im}}\Omega$ $\displaystyle=0,$ $\displaystyle([e]\wedge[Je]-[Je]\wedge[e])\wedge\mathop{\mathrm{Im}}\Omega$ $\displaystyle=0,$ (96) $\displaystyle[e\times e+Je\times Je]\wedge\frac{\omega^{2}}{2}$ $\displaystyle=0,$ $\displaystyle([e]\wedge[Je]-[Je]\wedge[e])\wedge\frac{\omega^{2}}{2}$ $\displaystyle=-4\frac{\omega^{3}}{6}I,$ (97) $\displaystyle e\wedge Je^{\rm T}\wedge\mathop{\mathrm{Im}}\Omega$ $\displaystyle=[e]\wedge\frac{\omega^{2}}{2},$ $\displaystyle e\wedge Je^{\rm T}\wedge\frac{\omega^{2}}{2}$ $\displaystyle=\frac{\omega^{3}}{6}I,$ (98) $\displaystyle Je\wedge e^{\rm T}\wedge\mathop{\mathrm{Im}}\Omega$ $\displaystyle=[e]\wedge\frac{\omega^{2}}{2},$ $\displaystyle Je\wedge e^{\rm T}\wedge\frac{\omega^{2}}{2}$ $\displaystyle=-\frac{\omega^{3}}{6}I.$ (99)

###### Proof.
We observe from (27) that $\displaystyle\mathop{\mathrm{Im}}\Omega\wedge e_{2}\wedge Je_{3}=Je_{2}\wedge e_{3}\wedge e_{1}\wedge e_{2}\wedge Je_{3}=e_{1}\wedge(e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3})=e_{1}\wedge\frac{\omega^{2}}{2}$ and $\displaystyle\mathop{\mathrm{Im}}\Omega\wedge e_{3}\wedge Je_{2}=Je_{3}\wedge e_{1}\wedge e_{2}\wedge e_{3}\wedge Je_{2}=-e_{1}\wedge(e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3})=-e_{1}\wedge\frac{\omega^{2}}{2}.$ Similarly, we may also compute $\displaystyle\mathop{\mathrm{Im}}\Omega\wedge e_{2}\wedge e_{3}$ $\displaystyle=-Je_{1}\wedge Je_{2}\wedge Je_{3}\wedge e_{2}\wedge e_{3}=(e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3})\wedge Je_{1}=\frac{\omega^{2}}{2}\wedge Je_{1}$ and $\displaystyle\mathop{\mathrm{Im}}\Omega\wedge Je_{2}\wedge Je_{3}=Je_{1}\wedge e_{2}\wedge e_{3}\wedge Je_{2}\wedge Je_{3}=-(e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3})\wedge Je_{1}=-\frac{\omega^{2}}{2}\wedge Je_{1}.$ Hence, (94), (95) and the first equations in (98) and (99) hold (noting that $e_{j}\wedge Je_{j}\wedge\mathop{\mathrm{Im}}\Omega=0$). We also notice that $\displaystyle e_{1}\wedge Je_{1}\wedge\frac{\omega^{2}}{2}$ $\displaystyle=e_{1}\wedge Je_{1}\wedge e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3}=\frac{\omega^{3}}{6},$ from which the remaining identities in (98) and (99) follow (since clearly $e_{j}\wedge Je_{k}\wedge\omega^{2}=0$ for $j\neq k$). The previous calculation, together with Lemma 2.8 and (31), shows that $\displaystyle([e]\wedge[Je]-[Je]\wedge[e])\wedge\frac{\omega^{2}}{2}=-4(e_{1}\wedge Je_{1}\wedge e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3})I=-4\frac{\omega^{3}}{6}I$ as claimed. The rest of (97) follows from Lemma 2.8. ∎

###### Proposition 3.24.
The curvature $R_{\theta_{\varepsilon}^{\delta,k}}$ of the connection $\theta_{\varepsilon}^{\delta,k}$ in (57) satisfies $\begin{split}R_{\theta_{\varepsilon}^{\delta,k}}\wedge\psi_{\varepsilon}&=\frac{k\varepsilon^{2}(1-\delta)\big{(}6+k(1+3\delta)\big{)}}{4}\frac{\omega^{3}}{6}\mathcal{I}\\\ &+\frac{k^{2}\varepsilon^{2}}{4}e_{0}\wedge\frac{\omega^{2}}{2}\wedge\left(\begin{array}[]{ccc}0&(1-5\delta)(1+\delta)e^{\rm T}&(1-5\delta)(1+\delta)Je^{\rm T}\\\ (5\delta-1)(1+\delta)e&(\delta^{2}-4\delta-1)[Je]&(\delta^{2}-4\delta-1)[e]\\\ (5\delta-1)(1+\delta)Je&(\delta^{2}-4\delta-1)[e]&-(\delta^{2}-4\delta-1)[Je]\end{array}\right).\end{split}$ (100) Therefore, $\theta^{\delta,k}_{\varepsilon}$ is never a ${\rm G}_{2}$-instanton. ###### Remark 3.25. We see that $\theta^{\delta,k}_{\varepsilon}$ can be a ${\rm G}_{2}$-instanton if and only if we are in the trivial case where $k=0$, which we have excluded. ###### Proof. Since $A$ is a ${\rm G}_{2}$-instanton by Lemma 3.1, we deduce immediately from Proposition 3.17 that $\displaystyle R_{\theta_{\varepsilon}^{\delta,k}}\wedge\psi_{\varepsilon}$ $\displaystyle=F_{A}\wedge\psi_{\varepsilon}+\frac{k\varepsilon^{2}(1-\delta)}{2}(\omega\wedge\psi_{\varepsilon})\mathcal{I}+\frac{k^{2}\varepsilon^{2}}{4}Q^{\delta}\wedge\psi_{\varepsilon}$ $\displaystyle=\frac{k\varepsilon^{2}(1-\delta)}{4}\omega^{3}\mathcal{I}+\frac{k^{2}\varepsilon^{2}}{4}Q^{\delta}\wedge\psi_{\varepsilon}.$ (101) We now study the term $Q^{\delta}\wedge\psi_{\varepsilon}$. We first note that $\displaystyle e_{0}\wedge e\wedge\psi_{\varepsilon}$ $\displaystyle=e_{0}\wedge\frac{\omega^{2}}{2}\wedge e,\qquad e_{0}\wedge Je\wedge\psi_{\varepsilon}=e_{0}\wedge\frac{\omega^{2}}{2}\wedge Je.$ Hence, from (66), we find that $Q^{\delta}_{-}\wedge\psi_{\varepsilon}=e_{0}\wedge\frac{1}{2}\omega^{2}\wedge\left(\begin{array}[]{ccc}0&(1+\delta)e^{\rm T}&(1+\delta)Je^{\rm T}\\\ -(1+\delta)e&-2\delta[Je]&-2\delta[e]\\\ -(1+\delta)Je&-2\delta[e]&2\delta[Je]\end{array}\right).$ (102) By Lemmas 2.8 and 3.23 we find that $\displaystyle 2(e\times Je)\wedge\psi_{\varepsilon}$ $\displaystyle=-2e_{0}\wedge\mathop{\mathrm{Im}}\Omega\wedge(e\times Je)=-4e_{0}\wedge\frac{\omega^{2}}{2}\wedge e,$ $\displaystyle(e\times e-Je\times Je)\wedge\psi_{\varepsilon}$ $\displaystyle=-e_{0}\wedge\mathop{\mathrm{Im}}\Omega\wedge(e\times e-Je\times Je)=-4e_{0}\wedge\frac{\omega^{2}}{2}\wedge Je.$ We also see from Lemma 3.23 that $\displaystyle Je\wedge Je^{\rm T}\wedge\psi_{\varepsilon}$ $\displaystyle=-e_{0}\wedge\mathop{\mathrm{Im}}\Omega\wedge Je\wedge Je^{\rm T}=e_{0}\wedge\frac{\omega^{2}}{2}\wedge[Je],$ $\displaystyle e\wedge e^{\rm T}\wedge\psi_{\varepsilon}$ $\displaystyle=-e_{0}\wedge\mathop{\mathrm{Im}}\Omega\wedge e\wedge e^{\rm T}=-e_{0}\wedge\frac{\omega^{2}}{2}\wedge[Je],$ $\displaystyle Je\wedge e^{\rm T}\wedge\psi_{\varepsilon}$ $\displaystyle=-\frac{\omega^{3}}{6}I-e_{0}\wedge\mathop{\mathrm{Im}}\Omega\wedge Je\wedge e^{\rm T}=-\frac{\omega^{3}}{6}I-e_{0}\wedge\frac{\omega^{2}}{2}\wedge[e],$ $\displaystyle e\wedge Je^{\rm T}\wedge\psi_{\varepsilon}$ $\displaystyle=\frac{\omega^{3}}{6}I-e_{0}\wedge\mathop{\mathrm{Im}}\Omega\wedge e\wedge Je^{\rm T}=\frac{\omega^{3}}{6}I-e_{0}\wedge\frac{\omega^{2}}{2}\wedge[e].$ We deduce that $\displaystyle Q_{+}^{\delta}\wedge\psi_{\varepsilon}=(1+\delta)\frac{\omega^{3}}{6}\mathcal{I}+e_{0}\wedge\frac{\omega^{2}}{2}\wedge\left(\begin{array}[]{ccc}0&-4\delta e^{\rm T}&-4\delta Je^{\rm T}\\\ 4\delta e&-(1+\delta)[Je]&-(1+\delta)[e]\\\ 4\delta 
Je&-(1+\delta)[e]&(1+\delta)[Je]\end{array}\right).$ Finally, it follows from Lemma 3.23 that $\displaystyle[e\times e+Je\times Je]\wedge\psi_{\varepsilon}=0,\qquad([e]\wedge[Je]-[Je]\wedge[e])\wedge\psi_{\varepsilon}=-4\frac{\omega^{3}}{6}I.$ Thus, $\displaystyle Q_{0}\wedge\psi_{\varepsilon}=\frac{\omega^{3}}{6}\left(\begin{array}[]{ccc}0&0&0\\\ 0&0&4I\\\ 0&-4I&0\end{array}\right)=-4\frac{\omega^{3}}{6}\mathcal{I}.$ Overall, we have $\displaystyle Q^{\delta}$ $\displaystyle\wedge\psi_{\varepsilon}=\big{(}(1-\delta)Q_{-}^{\delta}+(1+\delta)Q_{+}^{\delta}+\delta^{2}Q_{0}\big{)}\wedge\psi_{\varepsilon}$ $\displaystyle=(1-\delta)e_{0}\wedge\frac{1}{2}\omega^{2}\wedge\left(\begin{array}[]{ccc}0&(1+\delta)e^{\rm T}&(1+\delta)Je^{\rm T}\\\ -(1+\delta)e&-2\delta[Je]&-2\delta[e]\\\ -(1+\delta)Je&-2\delta[e]&2\delta[Je]\end{array}\right)$ $\displaystyle\qquad+(1+\delta)^{2}\frac{\omega^{3}}{6}\mathcal{I}+(1+\delta)e_{0}\wedge\frac{\omega^{2}}{2}\wedge\left(\begin{array}[]{ccc}0&-4\delta e^{\rm T}&-4\delta Je^{\rm T}\\\ 4\delta e&-(1+\delta)[Je]&-(1+\delta)[e]\\\ 4\delta Je&-(1+\delta)[e]&(1+\delta)[Je]\end{array}\right)$ $\displaystyle\qquad-4\delta^{2}\frac{\omega^{3}}{6}\mathcal{I}$ $\displaystyle=(1-\delta)(1+3\delta)\frac{\omega^{3}}{6}\mathcal{I}+e_{0}\wedge\frac{\omega^{2}}{2}\wedge\left(\begin{array}[]{ccc}0&(1+\delta)(1-5\delta)e^{\rm T}&(1+\delta)(1-5\delta)Je^{\rm T}\\\ (1+\delta)(5\delta-1)e&(\delta^{2}-4\delta-1)[Je]&(\delta^{2}-4\delta-1)[e]\\\ (1+\delta)(5\delta-1)Je&(\delta^{2}-4\delta-1)[e]&-(\delta^{2}-4\delta-1)[Je]\end{array}\right).$ We deduce from this equation and (101) that the coefficient of $\frac{\omega^{3}}{6}\mathcal{I}$ in $R_{\theta_{\varepsilon}^{\delta,k}}\wedge\psi_{\varepsilon}$ is $\frac{6k\varepsilon^{2}(1-\delta)}{4}+\frac{k^{2}\varepsilon^{2}(1-\delta)(1+3\delta)}{4}=\frac{k\varepsilon^{2}(1-\delta)\big{(}6+k(1+3\delta)\big{)}}{4}.$ The claimed formula (100) now follows. Since the quadratics $(1-5\delta)(1+\delta)$ and $\delta^{2}-4\delta-1$ in $\delta$ have no common roots, we see that if $\theta^{\delta,k}_{\varepsilon}$ were a ${\rm G}_{2}$-instanton, then we would need $k=0$. ∎

###### Remark 3.26.
In particular, we see that neither the Bismut nor the Hull connection is a ${\rm G}_{2}$-instanton. A straightforward adaptation of the arguments leading to Proposition 3.24, using Proposition 3.21, gives the following result for $\theta^{\delta,k}_{\varepsilon,m}$.

###### Corollary 3.27.
The curvature $R^{\delta,k}_{\varepsilon,m}$ of the connection $\theta^{\delta,k}_{\varepsilon,m}$ in (90) satisfies $\begin{split}&R^{\delta,k}_{\varepsilon,m}\wedge\psi_{\varepsilon}\\\ &=\frac{k\varepsilon^{2}\big{(}6(1-\delta+m)+k(1-\delta)(1+3\delta)\big{)}}{4}\frac{\omega^{3}}{6}\mathcal{I}\\\ &+\frac{k^{2}\varepsilon^{2}}{4}e_{0}\wedge\frac{\omega^{2}}{2}\wedge\left(\begin{array}[]{ccc}0&(1+m-5\delta)(1+\delta)e^{\rm T}&(1+m-5\delta)(1+\delta)Je^{\rm T}\\\ (5\delta-1-m)(1+\delta)e&(\delta^{2}-2(2+m)\delta-1)[Je]&(\delta^{2}-2(2+m)\delta-1)[e]\\\ (5\delta-1-m)(1+\delta)Je&(\delta^{2}-2(2+m)\delta-1)[e]&-(\delta^{2}-2(2+m)\delta-1)[Je]\end{array}\right).\end{split}$ (103) Therefore, $\theta^{\delta,k}_{\varepsilon,m}$ is never a ${\rm G}_{2}$-instanton.

###### Proof.
The key observation is (102) which shows, together with Proposition 3.21, that we must add $\frac{km\varepsilon^{2}}{4}\omega^{3}\mathcal{I}+\frac{k^{2}\varepsilon^{2}}{4}me_{0}\wedge\frac{1}{2}\omega^{2}\wedge\left(\begin{array}[]{ccc}0&(1+\delta)e^{\rm T}&(1+\delta)Je^{\rm T}\\\ -(1+\delta)e&-2\delta[Je]&-2\delta[e]\\\ -(1+\delta)Je&-2\delta[e]&2\delta[Je]\end{array}\right)$ to the right-hand side of (100) to obtain $R_{\theta_{\varepsilon,m}^{\delta,k}}\wedge\psi_{\varepsilon}$. The claimed formula (103) then follows. We deduce that, since $k\neq 0$, $\theta^{\delta,k}_{\varepsilon,m}$ is a ${\rm G}_{2}$-instanton if and only if $\displaystyle(1-\delta)(6+k(1+3\delta))+6m=0,\quad(5\delta-1-m)(1+\delta)=0,\quad(\delta^{2}-1)-2(2+m)\delta=0.$ One may see that the only real solutions have $\delta=-1$, meaning the second equation is satisfied for any $m$. The third equation forces $m=-2$ and the first equation gives $12-4k+6m=0$, which then forces $k=0$. ∎

###### Remark 3.28.
Although $\theta^{\delta,k}_{\varepsilon,m}$ is never a ${\rm G}_{2}$-instanton, we know by (11) and (103) that it is an “approximate” ${\rm G}_{2}$-instanton whenever $\frac{k\varepsilon^{2}\big{(}6(1-\delta+m)+k(1-\delta)(1+3\delta)\big{)}}{4},\quad\frac{k^{2}\varepsilon^{2}}{4}(1+m-5\delta)(1+\delta),\quad\frac{k^{2}\varepsilon^{2}}{4}(\delta^{2}-2(2+m)\delta-1)$ are all “sufficiently small” in a suitable sense. This smallness will be related to the constant $\alpha^{\prime}$ which we will determine in the next section on the anomaly-free condition (2).

## 4 The anomaly term

We wish to study the heterotic Bianchi identity for the connections $\theta=\theta^{\delta,k}_{\varepsilon,m}$ and ${\rm G}_{2}$-structure $\varphi_{\varepsilon}$. By (2) and Lemma 2.5, this becomes $dH_{\varepsilon}=-\varepsilon^{2}\omega^{2}=\frac{\alpha^{\prime}}{4}(\mathop{\mathrm{tr}}\nolimits F_{A}^{2}-\mathop{\mathrm{tr}}\nolimits R_{\theta}^{2}).$ (104) Proposition 3.21 allows us to study when this condition can be satisfied, since by (92), we have that $\begin{split}R_{\theta}^{2}-F_{A}^{2}&=\frac{k^{2}\varepsilon^{4}(1-\delta+m)^{2}}{4}\omega^{2}\mathcal{I}^{2}+\frac{k\varepsilon^{2}(1-\delta+m)}{2}(F_{A}\wedge\omega\mathcal{I}+\omega\mathcal{I}\wedge F_{A})\\\ &\quad+\frac{k^{3}\varepsilon^{4}(1-\delta+m)}{8}(\omega\mathcal{I}\wedge Q^{\delta}_{m}+Q^{\delta}_{m}\wedge\omega\mathcal{I})+\frac{k^{2}\varepsilon^{2}}{4}(F_{A}\wedge Q^{\delta}_{m}+Q^{\delta}_{m}\wedge F_{A})+\frac{k^{4}\varepsilon^{4}}{16}(Q^{\delta}_{m})^{2}.\end{split}$ (105)

### 4.1 Terms involving the matrix $\mathcal{I}$

We begin by studying the trace of the first line on the right-hand side of (105).

###### Lemma 4.1.
For $\mathcal{I}$ as in (40) and $F_{A}$ as in (51) we have that $\mathop{\mathrm{tr}}\nolimits\mathcal{I}^{2}=-6\quad\text{and}\quad\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge\omega\mathcal{I}+\omega\mathcal{I}\wedge F_{A})=0.$ (106)

###### Proof.
We first notice that $\mathcal{I}^{2}=-\left(\begin{array}[]{ccc}0&0&0\\\ 0&I&0\\\ 0&0&I\end{array}\right)$ and hence the first equation in (106) holds. We then deduce from the formula (51) for $F_{A}$ that $F_{A}\wedge\omega\mathcal{I}+\omega\mathcal{I}\wedge F_{A}=\left(\begin{array}[]{ccc}0&0&0\\\ 0&2\beta\wedge\omega&-2\alpha\wedge\omega\\\ 0&2\alpha\wedge\omega&2\beta\wedge\omega\end{array}\right).$ Since $\beta$ is traceless, the second equation in (106) also holds.
∎ We deduce from (105) and Lemma 4.1 that $\begin{split}\mathop{\mathrm{tr}}\nolimits(R^{2}_{\theta^{\delta,k}_{\varepsilon,m}}-F_{A}^{2})&=-\frac{3k^{2}\varepsilon^{4}(1-\delta+m)^{2}}{2}\omega^{2}+\frac{k^{3}\varepsilon^{4}(1-\delta+m)}{8}\mathop{\mathrm{tr}}\nolimits(\omega\mathcal{I}\wedge Q^{\delta}_{m}+Q^{\delta}_{m}\wedge\omega\mathcal{I})\\\ &\quad+\frac{k^{2}\varepsilon^{2}}{4}\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q^{\delta}_{m}+Q^{\delta}_{m}\wedge F_{A})+\frac{k^{4}\varepsilon^{4}}{16}\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{m})^{2}.\end{split}$ (107) We now wish to study the second term on the right-hand side of (107).

###### Lemma 4.2.
For $\mathcal{I}$ in (40) and $Q^{\delta}_{-}$, $Q^{\delta}_{+}$, $Q_{0}$ in (66), (70) and (74), we have $\displaystyle\mathop{\mathrm{tr}}\nolimits(\omega\mathcal{I}\wedge Q^{\delta}_{-}+Q^{\delta}_{-}\wedge\omega\mathcal{I})$ $\displaystyle=0,$ (108) $\displaystyle\mathop{\mathrm{tr}}\nolimits(\omega\mathcal{I}\wedge Q^{\delta}_{+}+Q^{\delta}_{+}\wedge\omega\mathcal{I})$ $\displaystyle=-4(1+\delta)\omega^{2},$ (109) $\displaystyle\mathop{\mathrm{tr}}\nolimits(\omega\mathcal{I}\wedge Q_{0}+Q_{0}\wedge\omega\mathcal{I})$ $\displaystyle=16\omega^{2}.$ (110) Hence, for $Q^{\delta}_{m}$ given in (93), we have $\mathop{\mathrm{tr}}\nolimits(\omega\mathcal{I}\wedge Q^{\delta}_{m}+Q^{\delta}_{m}\wedge\omega\mathcal{I})=4(4\delta^{2}-(1+\delta)^{2})\omega^{2}.$ (111)

###### Proof.
We first observe that $\displaystyle\omega\mathcal{I}\wedge e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)+e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)\wedge\omega\mathcal{I}$ $\displaystyle=e_{0}\wedge\omega\wedge\left(\begin{array}[]{ccc}0&Je^{\rm T}&-e^{\rm T}\\\ Je&0&0\\\ -e&0&0\end{array}\right)$ and $\displaystyle\omega\mathcal{I}\wedge e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)+e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)\wedge\omega\mathcal{I}$ $\displaystyle=0.$ Given the formula (66) for $Q^{\delta}_{-}$ we deduce (108). Similarly, we observe that $\displaystyle\mathop{\mathrm{tr}}\nolimits\bigg{(}\omega\mathcal{I}$ $\displaystyle\wedge e_{0}\wedge\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)$ $\displaystyle+e_{0}\wedge\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)\wedge\omega\mathcal{I}\bigg{)}=0.$ However, $\displaystyle\omega\mathcal{I}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)$ $\displaystyle+\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)\wedge\omega\mathcal{I}$ $\displaystyle=\omega\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&Je\wedge e^{\rm T}-e\wedge Je^{\rm T}&e\wedge e^{\rm T}+Je\wedge Je^{\rm T}\\\ 0&-e\wedge e^{\rm T}-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}-e\wedge Je^{\rm T}\end{array}\right).$ Taking the trace of this equation yields $2\omega\wedge(-2e_{1}\wedge Je_{1}-2e_{2}\wedge Je_{2}-2e_{3}\wedge Je_{3})=-4\omega^{2}.$ The equation (70) for $Q^{\delta}_{+}$ then gives (109).
Finally, we calculate $\displaystyle\frac{1}{2}\mathop{\mathrm{tr}}\nolimits\Bigg{(}\omega\mathcal{I}$ $\displaystyle\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right)\Bigg{)}$ $\displaystyle+\frac{1}{2}\mathop{\mathrm{tr}}\nolimits\left(\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right)\wedge\omega\mathcal{I}\right)$ $\displaystyle=\mathop{\mathrm{tr}}\nolimits\left(\begin{array}[]{ccc}0&0&0\\\ 0&-2\omega\wedge([e]\wedge[Je]-[Je]\wedge[e])&\omega\wedge[e\times e+Je\times Je]\\\ 0&-\omega\wedge[e\times e+Je\times Je]&-2\omega\wedge([e]\wedge[Je]-[Je]\wedge[e])\end{array}\right)$ $\displaystyle=-4\omega\wedge\mathop{\mathrm{tr}}\nolimits([e]\wedge[Je]-[Je]\wedge[e])=-4\omega\wedge(-4\omega)=16\omega^{2}$ by (31). Hence, (110) holds, and equation (111) then immediately follows from (93) and (108)–(110). ∎ Inserting (111) in (107), we obtain: $\begin{split}\mathop{\mathrm{tr}}\nolimits(R^{2}_{\theta^{\delta,k}_{\varepsilon,m}}-F_{A}^{2})&=\frac{k^{2}\varepsilon^{4}(1-\delta+m)\big{(}k(4\delta^{2}-(1+\delta)^{2})-3\big{)}}{2}\omega^{2}\\\ &\quad+\frac{k^{2}\varepsilon^{2}}{4}\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q^{\delta}_{m}+Q^{\delta}_{m}\wedge F_{A})+\frac{k^{4}\varepsilon^{4}}{16}\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{m})^{2}.\end{split}$ (112) ### 4.2 Linear contribution from the ${\rm G}_{2}$ field strength In this subsection, we wish to analyse the term $\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q^{\delta}_{m}+Q^{\delta}_{m}\wedge F_{A})$ from (112). ###### Lemma 4.3. For $Q^{\delta}_{-}$ in (66) and $Q^{\delta}_{+}$ in (70) we have $\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q^{\delta}_{-}+Q^{\delta}_{-}\wedge F_{A})=0\quad\text{and}\quad\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q^{\delta}_{+}+Q^{\delta}_{+}\wedge F_{A})=0$ ###### Proof. We see, from (52), that $\displaystyle F_{A}\wedge e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)$ $\displaystyle+e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)\wedge F_{A}$ $\displaystyle=e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}\wedge\alpha- Je^{\rm T}\wedge\beta&e^{\rm T}\wedge\beta+Je^{\rm T}\wedge\alpha\\\ -\alpha\wedge e-\beta\wedge Je&0&0\\\ \beta\wedge e-\alpha\wedge Je&0&0\end{array}\right)=0.$ We may also compute $\displaystyle\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)+e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)\wedge F_{A})$ $\displaystyle=e_{0}\wedge\mathop{\mathrm{tr}}\nolimits\left(\begin{array}[]{ccc}0&0&0\\\ 0&\alpha\wedge[Je]+\beta\wedge[e]+[Je]\wedge\alpha-[e]\wedge\beta&\alpha\wedge[e]-\beta\wedge[Je]+[Je]\wedge\beta+[e]\wedge\alpha\\\ 0&-\beta\wedge[Je]+\alpha\wedge[e]+[e]\wedge\alpha+[Je]\wedge\beta&-\beta\wedge[e]-\alpha\wedge[Je]+[e]\wedge\beta-[Je]\wedge\alpha\end{array}\right)=0.$ The first result now follows from (66). 
For the second equation, we clearly have $\displaystyle\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)$ $\displaystyle\qquad+\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)\wedge F_{A})=0$ since the matrix whose trace we are taking has no entries along the diagonal. On the other hand, if we consider $\displaystyle F_{A}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)+\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)\wedge F_{A},$ we find that the only entries which are not trivially zero are $\displaystyle(-\alpha\wedge Je+\beta\wedge e)\wedge Je^{\rm T}-Je\wedge(Je^{\rm T}\wedge\alpha+e^{\rm T}\wedge\beta),$ $\displaystyle(\alpha\wedge Je-\beta\wedge e)\wedge e^{\rm T}-Je\wedge(Je^{\rm T}\wedge\beta-e^{\rm T}\wedge\alpha),$ $\displaystyle(\beta\wedge Je+\alpha\wedge e)\wedge Je^{\rm T}+e\wedge(Je^{\rm T}\wedge\alpha+e^{\rm T}\wedge\beta),$ $\displaystyle-(\beta\wedge Je-\alpha\wedge e)\wedge e^{\rm T}+e\wedge(Je^{\rm T}\wedge\beta-e^{\rm T}\wedge\alpha),$ yet these also vanish, by (52). Using (70) completes the result. ∎ From Lemma 4.3 we deduce that $\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q^{\delta}_{m}+Q^{\delta}_{m}\wedge F_{A})=\delta^{2}\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q_{0}+Q_{0}\wedge F_{A}).$ We conclude this section by studying this final term.

###### Lemma 4.4.
For $Q_{0}$ in (74), we have $\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q_{0}+Q_{0}\wedge F_{A})=0.$

###### Proof.
We first see that $\displaystyle\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q_{0})$ $\displaystyle=\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right))$ $\displaystyle=2\mathop{\mathrm{tr}}\nolimits\big{(}-\alpha\wedge[e\times e+Je\times Je]+2\beta\wedge([e]\wedge[Je]-[Je]\wedge[e])\big{)},$ and $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q_{0}\wedge F_{A})$ $\displaystyle=\mathop{\mathrm{tr}}\nolimits(\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right)\wedge F_{A})$ $\displaystyle=2\mathop{\mathrm{tr}}\nolimits(-[e\times e+Je\times Je]\wedge\alpha+2([e]\wedge[Je]-[Je]\wedge[e])\wedge\beta).$ Hence, $\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q_{0}+Q_{0}\wedge F_{A})=4\mathop{\mathrm{tr}}\nolimits\big{(}-\alpha\wedge[e\times e+Je\times Je]+2\beta\wedge([e]\wedge[Je]-[Je]\wedge[e])\big{)}.$ Using Lemma A.3 we find that $\displaystyle[e\times e+Je\times Je]=2e\wedge e^{\rm T}+2Je\wedge Je^{\rm T},$ $\displaystyle[e]\wedge[Je]-[Je]\wedge[e]=e\wedge Je^{\rm T}-Je\wedge e^{\rm T}-2\omega I.$ Therefore, $\mathop{\mathrm{tr}}\nolimits(F_{A}\wedge Q_{0}+Q_{0}\wedge F_{A})=8\mathop{\mathrm{tr}}\nolimits\big{(}-(\alpha\wedge e+\beta\wedge Je)\wedge e^{\rm T}-(\alpha\wedge Je-\beta\wedge e)\wedge Je^{\rm T}\big{)}-16\omega\wedge\mathop{\mathrm{tr}}\nolimits\beta=0$ by (52) and the fact that $\beta$ is traceless.
∎ By (112) and Lemmas 4.3 and 4.4 we obtain, for $\theta=\theta^{\delta,k}_{\varepsilon,m}$, $\begin{split}\mathop{\mathrm{tr}}\nolimits(R_{\theta}^{2}-F_{A}^{2})=\frac{k^{2}\varepsilon^{4}(1-\delta+m)\big{(}k(4\delta^{2}-(1+\delta)^{2})-3\big{)}}{2}\omega^{2}+\frac{k^{4}\varepsilon^{4}}{16}\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{m})^{2}.\end{split}$ (113) ### 4.3 The nonlinear contribution $\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{m})^{2}$ We now wish to compute the term $\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{m})^{2}$ in (113), to complete our analysis of the difference in the traces of the squares of the curvatures of $\theta^{\delta,k}_{\varepsilon,m}$ and $A$. We begin with the “square terms” in $(Q^{\delta}_{m})^{2}$. ###### Lemma 4.5. For $Q^{\delta}_{-}$, $Q^{\delta}_{+}$, $Q_{0}$ in (66)–(74) we have $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-})^{2}=0,\quad\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{+})^{2}=-8\delta^{2}\omega^{2},\quad\mathop{\mathrm{tr}}\nolimits(Q_{0})^{2}=0.$ ###### Proof. Since $Q^{\delta}_{-}=e_{0}\wedge Q$ for some matrix of 1-forms, we see immediately that $(Q^{\delta}_{-})^{2}=0$. For $Q^{\delta}_{+}$, we note that $Q^{\delta}_{+}=\delta\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)+(1+\delta)\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right).$ (114) We see that, in $(Q^{\delta}_{+})^{2}$, the cross-terms coming from the pair of matrices above will be obviously traceless, so it suffices to compute the trace of each square. We see that $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)^{2}$ $\displaystyle\qquad=-4(e\times Je)^{\rm T}\wedge(e\times Je)-(e\times e-Je\times Je)^{\rm T}\wedge(e\times e-Je\times Je).$ We observe that $\displaystyle 4(e_{2}\wedge Je_{3}-e_{3}\wedge Je_{2})\wedge(e_{2}\wedge Je_{3}-e_{3}\wedge Je_{2})=8e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3}$ $\displaystyle 2(e_{2}\wedge e_{3}-Je_{2}\wedge Je_{3})\wedge 2(e_{2}\wedge e_{3}-Je_{2}\wedge Je_{3})=8e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3}$ and hence $\mathop{\mathrm{tr}}\nolimits\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)^{2}=-8\omega^{2}.$ On the other hand, $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)^{2}=Je\wedge e^{\rm T}\wedge e\wedge Je^{\rm T}+e\wedge Je^{\rm T}\wedge Je\wedge e^{\rm T}=0.$ This gives the result for $\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{+})^{2}$. 
From the formula (74) for $Q_{0}$ we see that $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q_{0})^{2}=\frac{1}{2}\mathop{\mathrm{tr}}\nolimits[e\times e+Je\times Je]^{2}-2\mathop{\mathrm{tr}}\nolimits([e]\wedge[Je]-[Je]\wedge[e])^{2}.$ We then calculate $\displaystyle\mathop{\mathrm{tr}}\nolimits[e\times e+Je\times Je]^{2}$ $\displaystyle=\mathop{\mathrm{tr}}\nolimits\left(\begin{array}[]{ccc}0&2e_{1}\wedge e_{2}+2Je_{1}\wedge Je_{2}&-2e_{3}\wedge e_{1}-2Je_{3}\wedge Je_{1}\\\ -2e_{1}\wedge e_{2}-2Je_{1}\wedge Je_{2}&0&2e_{2}\wedge e_{3}+2Je_{2}\wedge Je_{3}\\\ 2e_{3}\wedge e_{1}+2Je_{3}\wedge Je_{1}&-2e_{2}\wedge e_{3}-2Je_{2}\wedge Je_{3}&0\end{array}\right)^{2}$ $\displaystyle=16(e_{1}\wedge Je_{1}\wedge e_{2}\wedge Je_{2}+e_{3}\wedge Je_{3}\wedge e_{1}\wedge Je_{1}+e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3})=8\omega^{2}$ and $\displaystyle\mathop{\mathrm{tr}}\nolimits$ $\displaystyle([e]\wedge[Je]-[Je]\wedge[e])^{2}$ $\displaystyle\qquad=\mathop{\mathrm{tr}}\nolimits\left(\begin{array}[]{ccc}-2e_{2}\wedge Je_{2}-2e_{3}\wedge Je_{3}&e_{2}\wedge Je_{1}+e_{1}\wedge Je_{2}&e_{3}\wedge Je_{1}+e_{1}\wedge Je_{3}\\\ e_{1}\wedge Je_{2}+e_{2}\wedge Je_{1}&-2e_{3}\wedge Je_{3}-2e_{1}\wedge Je_{1}&e_{3}\wedge Je_{2}+e_{2}\wedge Je_{3}\\\ e_{1}\wedge Je_{3}+e_{3}\wedge Je_{1}&e_{2}\wedge Je_{3}+e_{3}\wedge Je_{2}&-2e_{1}\wedge Je_{1}-2e_{2}\wedge Je_{2}\end{array}\right)^{2}$ $\displaystyle\qquad=8(e_{2}\wedge Je_{2}\wedge e_{3}\wedge Je_{3}+e_{3}\wedge Je_{3}\wedge e_{1}\wedge Je_{1}+e_{1}\wedge Je_{1}\wedge e_{2}\wedge Je_{2})$ $\displaystyle\qquad\quad+4(e_{1}\wedge Je_{2}\wedge e_{2}\wedge Je_{1}+e_{3}\wedge Je_{1}\wedge e_{1}\wedge Je_{3}+e_{2}\wedge Je_{3}\wedge e_{3}\wedge Je_{2})$ $\displaystyle\qquad=4\omega^{2}-2\omega^{2}=2\omega^{2}.$ The formula for $\mathop{\mathrm{tr}}\nolimits(Q_{0})^{2}$ then follows. ∎ We now look at the “cross terms” in $(Q^{\delta}_{m})^{2}$. ###### Lemma 4.6. For $Q^{\delta}_{-}$, $Q^{\delta}_{+}$ in (66)–(70), we have $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-}\wedge Q^{\delta}_{+}+Q^{\delta}_{+}\wedge Q^{\delta}_{-})$ $\displaystyle=0.$ ###### Proof. Just as for $Q^{\delta}_{+}$ in (114) we can split $Q^{\delta}_{-}$ as $Q^{\delta}_{-}=(1+\delta)e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)-2\delta e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right).$ (115) Hence, we can break down the calculation of $\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-}\wedge Q^{\delta}_{+}+Q^{\delta}_{+}\wedge Q^{\delta}_{-})$ into more manageable steps. 
First, we see that $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)\wedge\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)\right)$ $\displaystyle+\mathop{\mathrm{tr}}\nolimits\left(\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)\wedge e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)\right)$ $\displaystyle=2e_{0}\wedge\big{(}-2e^{\rm T}\wedge(e\times Je)-Je^{\rm T}\wedge(e\times e-Je\times Je)-2\mathop{\mathrm{tr}}\nolimits(e\wedge(e\times Je)^{\rm T})-\mathop{\mathrm{tr}}\nolimits(Je\wedge(e\times e-Je\times Je)^{\rm T})\big{)}$ $\displaystyle=4e_{0}\wedge\big{(}-2e^{\rm T}\wedge(e\times Je)-Je^{\rm T}\wedge(e\times e-Je\times Je)\big{)}.$ We observe that $\displaystyle 2e^{\rm T}\wedge(e\times Je)$ $\displaystyle=2e_{1}\wedge(e_{2}\wedge Je_{3}-e_{3}\wedge Je_{2})+2e_{2}\wedge(e_{3}\wedge Je_{1}-e_{1}\wedge Je_{3})$ $\displaystyle\qquad+2e_{3}\wedge(e_{1}\wedge Je_{2}-e_{2}\wedge Je_{1})$ $\displaystyle=4\mathop{\mathrm{Im}}\Omega+4Je_{1}\wedge Je_{2}\wedge Je_{3},$ $\displaystyle Je^{\rm T}\wedge(e\times e-Je\times Je)$ $\displaystyle=2Je_{1}\wedge(e_{2}\wedge e_{3}-Je_{2}\wedge Je_{3})+2Je_{2}\wedge(e_{3}\wedge e_{1}-Je_{3}\wedge Je_{1})$ $\displaystyle\qquad+2Je_{3}\wedge(e_{1}\wedge e_{2}-Je_{1}\wedge Je_{2}),$ $\displaystyle=2\mathop{\mathrm{Im}}\Omega-4Je_{1}\wedge Je_{2}\wedge Je_{3}$ and thus $\displaystyle 4e_{0}\wedge\big{(}-2e^{\rm T}\wedge(e\times Je)-Je^{\rm T}\wedge(e\times e-Je\times Je)\big{)}=-24e_{0}\wedge\mathop{\mathrm{Im}}\Omega.$ Now, clearly, $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(e_{0}\wedge\left(\begin{array}[]{ccc}0&e^{\rm T}&Je^{\rm T}\\\ -e&0&0\\\ -Je&0&0\end{array}\right)\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)\right)$ $\displaystyle=0,$ $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)\wedge\left(\begin{array}[]{ccc}0&2(e\times Je)^{\rm T}&(e\times e-Je\times Je)^{\rm T}\\\ -2(e\times Je)&0&0\\\ -(e\times e-Je\times Je)&0&0\end{array}\right)\right)$ $\displaystyle=0,$ so for $\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-}\wedge Q^{\delta}_{+})$ we are simply left with computing $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)\right)$ $\displaystyle+\mathop{\mathrm{tr}}\nolimits\left(\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)\wedge e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)\right)$ $\displaystyle=2e_{0}\wedge\mathop{\mathrm{tr}}\nolimits\big{(}[Je]\wedge(e\wedge e^{\rm T}-Je\wedge Je^{\rm T})+[e]\wedge(e\wedge Je^{\rm T}+Je\wedge e^{\rm T})\big{)}.$ To conclude, we notice that $\displaystyle\mathop{\mathrm{tr}}\nolimits\big{(}[Je]\wedge(e\wedge e^{\rm T}-Je\wedge Je^{\rm T})\big{)}$ $\displaystyle=-2Je_{3}\wedge e_{1}\wedge e_{2}-2Je_{2}\wedge e_{3}\wedge e_{1}-2Je_{1}\wedge e_{2}\wedge e_{3}+6Je_{1}\wedge Je_{2}\wedge Je_{3}$ $\displaystyle=-2\mathop{\mathrm{Im}}\Omega+4Je_{1}\wedge Je_{2}\wedge Je_{3},$ $\displaystyle\mathop{\mathrm{tr}}\nolimits\big{(}[e]\wedge(e\wedge Je^{\rm T}+Je\wedge e^{\rm T})\big{)}$ $\displaystyle=2e_{3}\wedge(e_{2}\wedge Je_{1}+Je_{2}\wedge e_{1})+2e_{2}\wedge(e_{3}\wedge Je_{1}+Je_{3}\wedge e_{1})$ $\displaystyle\quad+2e_{1}\wedge(e_{3}\wedge Je_{2}+Je_{3}\wedge e_{2})$ $\displaystyle=-4\mathop{\mathrm{Im}}\Omega-4Je_{1}\wedge Je_{2}\wedge Je_{3},$ which gives $2e_{0}\wedge\mathop{\mathrm{tr}}\nolimits\big{(}[Je]\wedge(e\wedge e^{\rm T}-Je\wedge Je^{\rm T})+[e]\wedge(e\wedge Je^{\rm T}+Je\wedge e^{\rm T})\big{)}=-12e_{0}\wedge\mathop{\mathrm{Im}}\Omega.$ Hence, as claimed, $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-}\wedge Q^{\delta}_{+}+Q^{\delta}_{+}\wedge Q^{\delta}_{-})$ $\displaystyle=(1+\delta)\delta(-24e_{0}\wedge\mathop{\mathrm{Im}}\Omega)-2\delta(1+\delta)(-12e_{0}\wedge\mathop{\mathrm{Im}}\Omega)=0.\qed$

###### Lemma 4.7.
For $Q^{\delta}_{-}$, $Q_{0}$, respectively in (66), (74), we have $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-}\wedge Q_{0}+Q_{0}\wedge Q^{\delta}_{-})$ $\displaystyle=0.$

###### Proof.
Recall the splitting (115). Since we have $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(e_{0}\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&[Je]&[e]\\\ 0&[e]&-[Je]\end{array}\right)\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right)\right)$ $\displaystyle=e_{0}\wedge\mathop{\mathrm{tr}}\nolimits(-[Je]\wedge[e\times e+Je\times Je]+2[e]\wedge([e]\wedge[Je]-[Je]\wedge[e]))$ $\displaystyle\quad+e_{0}\wedge\mathop{\mathrm{tr}}\nolimits(-2[e]\wedge([e]\wedge[Je]-[Je]\wedge[e])+[Je]\wedge[e\times e+Je\times Je])$ $\displaystyle=0,$ the result then follows from (115) and (74). ∎

###### Lemma 4.8.
For $Q^{\delta}_{+}$, $Q_{0}$, respectively in (70), (74), we have $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{+}\wedge Q_{0}+Q_{0}\wedge Q^{\delta}_{+})$ $\displaystyle=16(1+\delta)\omega^{2}.$

###### Proof.
Recall the splitting (114). We see that to calculate $\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{+}\wedge Q_{0})$ it suffices to compute the following: $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)\wedge\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right)\right)$ $\displaystyle=\mathop{\mathrm{tr}}\nolimits\big{(}(Je\wedge Je^{\rm T}+e\wedge e^{\rm T})\wedge[e\times e+Je\times Je]+2(Je\wedge e^{\rm T}-e\wedge Je^{\rm T})\wedge([e]\wedge[Je]-[Je]\wedge[e])\big{)}$ $\displaystyle=2\mathop{\mathrm{tr}}\nolimits(Je\wedge Je^{\rm T}+e\wedge e^{\rm T})^{2}-2\mathop{\mathrm{tr}}\nolimits(Je\wedge e^{\rm T}-e\wedge Je^{\rm T})^{2}-4\omega\wedge\mathop{\mathrm{tr}}\nolimits(Je\wedge e^{\rm T}-e\wedge Je^{\rm T})$ by Lemma A.3.
We first see that $\displaystyle 2\mathop{\mathrm{tr}}\nolimits(Je\wedge Je^{\rm T}+e\wedge e^{\rm T})^{2}$ $\displaystyle=2(4e_{1}\wedge e_{2}\wedge Je_{2}\wedge Je_{1}+4e_{3}\wedge e_{1}\wedge Je_{1}\wedge Je_{3}+4e_{2}\wedge e_{3}\wedge Je_{3}\wedge Je_{2})$ $\displaystyle=4\omega^{2}.$ We also see that $\displaystyle-2\mathop{\mathrm{tr}}\nolimits(Je\wedge e^{\rm T}-e\wedge Je^{\rm T})^{2}$ $\displaystyle=-2\mathop{\mathrm{tr}}\nolimits(Je\wedge e^{\rm T})^{2}-2\mathop{\mathrm{tr}}\nolimits(e\wedge Je^{\rm T})^{2}$ $\displaystyle=-2(2Je_{1}\wedge e_{2}\wedge Je_{2}\wedge e_{1}+2Je_{3}\wedge e_{1}\wedge Je_{1}\wedge e_{3}+2Je_{2}\wedge e_{3}\wedge Je_{3}\wedge e_{2})$ $\displaystyle\quad-2(2e_{1}\wedge Je_{2}\wedge e_{2}\wedge Je_{1}+2e_{3}\wedge Je_{1}\wedge e_{1}\wedge Je_{3}+2e_{2}\wedge Je_{3}\wedge e_{3}\wedge Je_{2})$ $\displaystyle=4\omega^{2}$ and $\displaystyle-4\omega\wedge\mathop{\mathrm{tr}}\nolimits(Je\wedge e^{\rm T}-e\wedge Je^{\rm T})$ $\displaystyle=-4\omega\wedge(-2\omega)=8\omega^{2}.$ Hence, $\displaystyle\mathop{\mathrm{tr}}\nolimits\left(\left(\begin{array}[]{ccc}0&0&0\\\ 0&-Je\wedge Je^{\rm T}&Je\wedge e^{\rm T}\\\ 0&e\wedge Je^{\rm T}&-e\wedge e^{\rm T}\end{array}\right)\wedge\frac{1}{2}\left(\begin{array}[]{ccc}0&0&0\\\ 0&-[e\times e+Je\times Je]&-2([e]\wedge[Je]-[Je]\wedge[e])\\\ 0&2([e]\wedge[Je]-[Je]\wedge[e])&-[e\times e+Je\times Je]\end{array}\right)\right)$ $\displaystyle=\frac{1}{2}(4\omega^{2}+4\omega^{2}+8\omega^{2})=8\omega^{2}.$ The result then follows from (114) and (74). ∎ ###### Corollary 4.9. For $Q^{\delta}_{m}$ in (93), we have $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{m})^{2}=8\delta^{2}(1+\delta)^{2}\omega^{2}.$ ###### Proof. From the definition of $Q^{\delta}_{m}$ in (93), using Lemmas 4.5-4.8, we compute: $\displaystyle\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{m})^{2}$ $\displaystyle=\mathop{\mathrm{tr}}\nolimits\big{(}(1-\delta+m)Q^{\delta}_{-}+(1+\delta)Q^{\delta}_{+}+\delta^{2}Q_{0}\big{)}^{2}$ $\displaystyle=(1-\delta+m)^{2}\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-})^{2}+(1+\delta)^{2}\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{+})^{2}+\delta^{4}\mathop{\mathrm{tr}}\nolimits(Q_{0}^{2})+(1-\delta+m)(1+\delta)\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-}\wedge Q^{\delta}_{+}+Q^{\delta}_{+}\wedge Q^{\delta}_{-})$ $\displaystyle\quad+(1-\delta+m)\delta^{2}\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{-}\wedge Q_{0}+Q_{0}\wedge Q^{\delta}_{-})+(1+\delta)\delta^{2}\mathop{\mathrm{tr}}\nolimits(Q^{\delta}_{+}\wedge Q_{0}+Q_{0}\wedge Q^{\delta}_{+})$ $\displaystyle=-8(1+\delta)^{2}\delta^{2}\omega^{2}+16(1+\delta)^{2}\delta^{2}\omega^{2}$ $\displaystyle=8\delta^{2}(1+\delta)^{2}\omega^{2}.\qed$ Combining Corollary 4.9 and (113), we conclude that $\mathop{\mathrm{tr}}\nolimits(R_{\theta}^{2}-F_{A}^{2})=\frac{k^{2}\varepsilon^{4}\big{(}k^{2}\delta^{2}(1+\delta)^{2}+(1-\delta+m)\big{(}k(4\delta^{2}-(1+\delta)^{2})-3\big{)}\big{)}}{2}\omega^{2},\quad\text{with}\quad\theta=\theta^{\delta,k}_{\varepsilon,m}.$ (116) ### 4.4 Proof of Theorem 1 We are now in position to prove the final parts (iv) and (v) in Theorem 1. 
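Before carrying out the proof, note that the polynomial algebra assembling (116), together with the case analysis given below, can be verified independently. The following is a minimal sketch using Python's sympy; the symbol names and the specific checks are our own illustration, not part of the original argument (all coefficients multiply $\omega^{2}$).

```python
# Symbolic sanity check of Corollary 4.9 and of lambda_0 in (120).
import sympy as sp

d, m, k, eps = sp.symbols('delta m k varepsilon', real=True)

# Corollary 4.9: tr(Q_m^delta)^2, assembled from Lemmas 4.5-4.8
trQ2 = (1 + d)**2 * (-8*d**2) + (1 + d) * d**2 * 16 * (1 + d)
assert sp.simplify(trQ2 - 8*d**2*(1 + d)**2) == 0

# lambda_0 as in (120)
lam0 = k**2*eps**2*(k**2*d**2*(1 + d)**2
                    + (1 - d + m)*(k*(4*d**2 - (1 + d)**2) - 3))

# Case 1: 1 - delta + m = 0  ->  lambda_0 = k^4 eps^2 delta^2 (1+delta)^2
assert sp.simplify(lam0.subs(m, d - 1) - k**4*eps**2*d**2*(1 + d)**2) == 0
# Case 2: delta = 0  ->  lambda_0 = -k^2 eps^2 (1+m)(k+3)
assert sp.simplify(lam0.subs(d, 0) + k**2*eps**2*(1 + m)*(k + 3)) == 0
# Case 3: delta = -1  ->  lambda_0 = k^2 eps^2 (2+m)(4k-3)
assert sp.simplify(lam0.subs(d, -1) - k**2*eps**2*(2 + m)*(4*k - 3)) == 0
```

Each assertion reduces a polynomial identity to zero, mirroring the coefficient bookkeeping behind (116)–(120).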
Replacing the Chern–Simons defect (116), between the gauge fields $A$ and $\theta$, in the heterotic Bianchi identity (104), we obtain $-\varepsilon^{2}\omega^{2}=-\frac{\alpha^{\prime}}{4}\frac{k^{2}\varepsilon^{4}\big{(}k^{2}\delta^{2}(1+\delta)^{2}+(1-\delta+m)\big{(}k(4\delta^{2}-(1+\delta)^{2})-3\big{)}\big{)}}{2}\omega^{2}.$ (117) Hence, there is a solution for $\alpha^{\prime}>0$ if, and only if, $k^{2}\big{(}k^{2}\delta^{2}(1+\delta)^{2}+(1-\delta+m)\big{(}k(4\delta^{2}-(1+\delta)^{2})-3\big{)}\big{)}>0,$ (118) in which case $\alpha^{\prime}=\frac{8}{k^{2}\varepsilon^{2}\big{(}k^{2}\delta^{2}(1+\delta)^{2}+(1-\delta+m)\big{(}k(4\delta^{2}-(1+\delta)^{2})-3\big{)}\big{)}}.$ (119) We deduce the following constraints for an approximate solution to the heterotic ${\rm G}_{2}$ system, in the sense that all of the conditions in Definition 1.2 for the heterotic ${\rm G}_{2}$ system are satisfied except that we only require that $\theta$ be a ${\rm G}_{2}$-instanton to order $O(\alpha^{\prime})^{2}$ as in (11):

###### Proposition 4.10.
There is an approximate solution to the heterotic ${\rm G}_{2}$ system if and only if $\lambda_{0}:=k^{2}\varepsilon^{2}\big{(}k^{2}\delta^{2}(1+\delta)^{2}+(1-\delta+m)\big{(}k(4\delta^{2}-(1+\delta)^{2})-3\big{)}\big{)}>0$ (120) is large, so that $\alpha^{\prime}=\frac{8}{\lambda_{0}}>0$ (121) is small, and the terms in the ${\rm G}_{2}$-instanton condition (100), $\lambda_{1}:=\frac{k\varepsilon^{2}\big{(}6(1-\delta+m)+k(1-\delta)(1+3\delta)\big{)}}{4},\quad\lambda_{2}:=\frac{k^{2}\varepsilon^{2}}{4}(1+m-5\delta)(1+\delta),\quad\lambda_{3}:=\frac{k^{2}\varepsilon^{2}}{4}(\delta^{2}-2(2+m)\delta-1)$ (122) are all $O(\alpha^{\prime})^{2}$, so that (11) is satisfied.

Inspecting (120), there are at least three manifest Ansätze for this asymptotic regime, all of which satisfy items (i)–(v) of Theorem 1:

###### Case 1.
$1-\delta+m=0$ and $\delta\neq 0,-1$: $\alpha^{\prime}=\frac{8}{\delta^{2}(1+\delta)^{2}}\frac{1}{\varepsilon^{2}k^{4}},\quad\lambda_{1}=\frac{(1-\delta)(1+3\delta)}{4}k^{2}\varepsilon^{2},\quad\lambda_{2}=-\delta(1+\delta)k^{2}\varepsilon^{2},\quad\lambda_{3}=-\frac{(\delta+1)^{2}}{4}k^{2}\varepsilon^{2}.$ In order to have $k^{2}\varepsilon^{2}=O(\alpha^{\prime})^{2}$, we may take, for instance, $k^{2}=\frac{1}{(\alpha^{\prime})^{3}}\quad\text{and}\quad\varepsilon^{2}=\frac{8}{\delta^{2}(1+\delta)^{2}}(\alpha^{\prime})^{5},\quad\text{with}\quad\delta\neq 0,-1\quad\text{and}\quad m=\delta-1,$ which is physically meaningful with $\varepsilon\ll 1$ and $k\gg 1$.

###### Case 2.
$\delta=0$ and $(1+m)(k+3)<0$: $\alpha^{\prime}=-\frac{8}{(1+m)(1+\frac{3}{k})}\frac{1}{\varepsilon^{2}k^{3}},\quad\lambda_{1}=\frac{\big{(}1+\frac{6(1+m)}{k}\big{)}}{4}k^{2}\varepsilon^{2},\quad\lambda_{2}=\frac{1+m}{4}k^{2}\varepsilon^{2},\quad\lambda_{3}=-\frac{1}{4}k^{2}\varepsilon^{2}.$ In order to have $k\varepsilon^{2}=O(\alpha^{\prime})^{2}$ and $k^{2}\varepsilon^{2}=O(\alpha^{\prime})^{2}$, we may take, for instance, $k=\frac{1}{(\alpha^{\prime})^{3}}\quad\text{and}\quad\varepsilon^{2}=-\frac{8}{(1+m)(1+3(\alpha^{\prime})^{3})}(\alpha^{\prime})^{8},\quad\text{with}\quad m<-1,$ which is physically meaningful with $\varepsilon\ll 1$ and $k\gg 1$ (note that $\varepsilon^{2}>0$ since $1+m<0$).

###### Case 3.
$\delta=-1$ and $(2+m)(4k-3)>0$: $\alpha^{\prime}=\frac{8}{(2+m)(4-\frac{3}{k})}\frac{1}{\varepsilon^{2}k^{3}},\quad\lambda_{1}=\left(\frac{3(2+m)}{2k}-1\right)k^{2}\varepsilon^{2},\quad\lambda_{2}=0,\quad\lambda_{3}=\frac{2+m}{2}k^{2}\varepsilon^{2}.$ In order to have $k\varepsilon^{2}=O(\alpha^{\prime})^{2}$ and $k^{2}\varepsilon^{2}=O(\alpha^{\prime})^{2}$, we may take, for instance, $k=\frac{1}{(\alpha^{\prime})^{3}}\quad\text{and}\quad\varepsilon^{2}=\frac{8}{(2+m)(4-3(\alpha^{\prime})^{3})}(\alpha^{\prime})^{8},\quad\text{with}\quad m>-2,$ which is physically meaningful with $\varepsilon\ll 1$ and $k\gg 1$.

N.B.: Several other solution regimes are possible; in particular, one may adjust the choices of $m$ and $\delta$ to the string scale $\alpha^{\prime}$ itself. Furthermore, it should be noted that the asymptotic properties of $\varepsilon(\alpha^{\prime})$ and $k(\alpha^{\prime})$ as $\alpha^{\prime}\to 0$ are a _consequence_ of the heterotic Bianchi identity (104) and the ${\rm G}_{2}$-instanton condition (100) ‘up to $O(\alpha^{\prime})^{2}$ terms’, and therefore _not a choice_ imposed on the Ansatz.

## Appendix A Covariant matrix operations

###### Definition A.1.
For a $3\times 1$ vector $a$, we define $[a]$ by $\left[\left(\begin{array}[]{c}a_{1}\\\ a_{2}\\\ a_{3}\end{array}\right)\right]=\left(\begin{array}[]{ccc}0&a_{3}&-a_{2}\\\ -a_{3}&0&a_{1}\\\ a_{2}&-a_{1}&0\end{array}\right).$ (123) This leads us to the following definition and lemma.

###### Definition A.2.
Let $a=\left(\begin{array}[]{c}a_{1}\\\ a_{2}\\\ a_{3}\end{array}\right)\quad\text{and}\quad b=\left(\begin{array}[]{c}b_{1}\\\ b_{2}\\\ b_{3}\end{array}\right)$ be vectors of $1$-forms and define $a\times b=\left(\begin{array}[]{c}a_{2}\wedge b_{3}-a_{3}\wedge b_{2}\\\ a_{3}\wedge b_{1}-a_{1}\wedge b_{3}\\\ a_{1}\wedge b_{2}-a_{2}\wedge b_{1}\end{array}\right).$ (124) Notice that $b\times a=a\times b.$ (125)

###### Lemma A.3.
Let $a$ and $b$ be $3\times 1$ vectors of $1$-forms. Then $\displaystyle[a]\wedge b$ $\displaystyle=-a\times b,$ (126) $\displaystyle a^{\rm T}\wedge[b]$ $\displaystyle=-(a\times b)^{\rm T},$ (127) $\displaystyle[a]\wedge[b]+[b]\wedge[a]$ $\displaystyle=-[a\times b],$ (128) $\displaystyle[a]\wedge[b]-[b]\wedge[a]$ $\displaystyle=a\wedge b^{\rm T}-b\wedge a^{\rm T}-2I\otimes\sum_{j=1}^{3}a_{j}\wedge b_{j}.$ (129) In particular, $[a]\wedge[a]=-a\wedge a^{\rm T}=-\frac{1}{2}[a\times a].$ (130)

###### Proof.
We first see that $\displaystyle[a]\wedge b$ $\displaystyle=\left(\begin{array}[]{ccc}0&a_{3}&-a_{2}\\\ -a_{3}&0&a_{1}\\\ a_{2}&-a_{1}&0\end{array}\right)\wedge\left(\begin{array}[]{c}b_{1}\\\ b_{2}\\\ b_{3}\end{array}\right)=\left(\begin{array}[]{c}a_{3}\wedge b_{2}-a_{2}\wedge b_{3}\\\ a_{1}\wedge b_{3}-a_{3}\wedge b_{1}\\\ a_{2}\wedge b_{1}-a_{1}\wedge b_{2}\end{array}\right)$ $\displaystyle=-a\times b$ by Definition A.2.
Similarly, $\displaystyle a^{\rm T}\wedge[b]$ $\displaystyle=(\begin{array}[]{ccc}a_{1}&a_{2}&a_{3}\end{array})\wedge\left(\begin{array}[]{ccc}0&b_{3}&-b_{2}\\\ -b_{3}&0&b_{1}\\\ b_{2}&-b_{1}&0\end{array}\right)$ $\displaystyle=(\begin{array}[]{ccc}-a_{2}\wedge b_{3}+a_{3}\wedge b_{2}&-a_{3}\wedge b_{1}+a_{1}\wedge b_{3}&-a_{1}\wedge b_{2}+a_{2}\wedge b_{1}\end{array})$ $\displaystyle=-(a\times b)^{\rm T}.$ From Definition A.2 we see that $[a\times b]=\left(\begin{array}[]{ccc}0&a_{1}\wedge b_{2}-a_{2}\wedge b_{1}&a_{1}\wedge b_{3}-a_{3}\wedge b_{1}\\\ a_{2}\wedge b_{1}-a_{1}\wedge b_{2}&0&a_{2}\wedge b_{3}-a_{3}\wedge b_{2}\\\ a_{3}\wedge b_{1}-a_{1}\wedge b_{3}&a_{3}\wedge b_{2}-a_{2}\wedge b_{3}&0\end{array}\right).$ On the other hand, $\displaystyle[a]\wedge[b]$ $\displaystyle=\left(\begin{array}[]{ccc}0&a_{3}&-a_{2}\\\ -a_{3}&0&a_{1}\\\ a_{2}&-a_{1}&0\end{array}\right)\wedge\left(\begin{array}[]{ccc}0&b_{3}&-b_{2}\\\ -b_{3}&0&b_{1}\\\ b_{2}&-b_{1}&0\end{array}\right)$ $\displaystyle=\left(\begin{array}[]{ccc}-a_{2}\wedge b_{2}-a_{3}\wedge b_{3}&a_{2}\wedge b_{1}&a_{3}\wedge b_{1}\\\ a_{1}\wedge b_{2}&-a_{3}\wedge b_{3}-a_{1}\wedge b_{1}&a_{3}\wedge b_{2}\\\ a_{1}\wedge b_{3}&a_{2}\wedge b_{3}&-a_{1}\wedge b_{1}-a_{2}\wedge b_{2}\end{array}\right)$ $\displaystyle=-b\wedge a^{\rm T}-I\otimes\sum_{j=1}^{3}a_{j}\wedge b_{j}$ and $\displaystyle[b]\wedge[a]$ $\displaystyle=\left(\begin{array}[]{ccc}-b_{2}\wedge a_{2}-b_{3}\wedge a_{3}&b_{2}\wedge a_{1}&b_{3}\wedge a_{1}\\\ b_{1}\wedge a_{2}&-b_{3}\wedge a_{3}-b_{1}\wedge a_{1}&b_{3}\wedge a_{2}\\\ b_{1}\wedge a_{3}&b_{2}\wedge a_{3}&-b_{1}\wedge a_{1}-b_{2}\wedge a_{2}\end{array}\right)$ $\displaystyle=\left(\begin{array}[]{ccc}a_{2}\wedge b_{2}+a_{3}\wedge b_{3}&-a_{1}\wedge b_{2}&-a_{1}\wedge b_{3}\\\ -a_{2}\wedge b_{1}&a_{3}\wedge b_{3}+a_{1}\wedge b_{1}&-a_{2}\wedge b_{3}\\\ -a_{3}\wedge b_{1}&-a_{3}\wedge b_{2}&a_{1}\wedge b_{1}+a_{2}\wedge b_{2}\end{array}\right)$ $\displaystyle=-a\wedge b^{\rm T}+I\otimes\sum_{j=1}^{3}a_{j}\wedge b_{j}.$ Subtracting the second identity from the first yields (129), adding them yields (128) in view of the formula for $[a\times b]$ above, and (130) follows by taking $b=a$, since then $\sum_{j}a_{j}\wedge a_{j}=0$. ∎
###### Abstract

The lattice Boltzmann method, now widely used for a variety of applications, has also been extended to model multi-phase flows through different formulations. While already applied to many different configurations in the low Weber and Reynolds number regimes, applications to higher Weber/Reynolds numbers or larger density/viscosity ratios are still the topic of active research. In this study, through a combination of the decoupled phase-field formulation –conservative Allen-Cahn equation– and a cumulants-based collision operator for a low-Mach pressure-based flow solver, we present an algorithm that can be used for higher Reynolds/Weber numbers. The algorithm is validated through a variety of test-cases, starting with the Rayleigh-Taylor instability both in 2-D and 3-D, followed by the impact of a droplet on a liquid sheet. In all simulations, the solver is shown to correctly capture the dynamics of the flow and match reference results very well. As the final test-case, the solver is used to model droplet splashing on a thin liquid sheet in 3-D with a density ratio of 1000 and kinematic viscosity ratio of 15 –matching the water/air system– at We=8000 and Re=1000. The results show that the solver correctly captures the fingering instabilities at the crown rim and their subsequent breakup, in agreement with experimental and numerical observations reported in the literature.

###### keywords: lattice Boltzmann method; multi-phase flows; Conservative Allen-Cahn; Phase field.

Lattice Boltzmann solver for multi-phase flows: Application to high Weber and Reynolds numbers

S.A. Hosseini 1,2*, H. Safari 1 and D. Thévenin 1 (Correspondence: <EMAIL_ADDRESS>)

## 1 Introduction

The lattice Boltzmann method (LBM) is a discrete solver for the so-called discrete velocity Boltzmann equation (DVBE), initially developed as an alternative to classical solvers for the incompressible hydrodynamic regime Krüger et al. (2017); Guo and Shu (2013). Due to the simplicity of the algorithm, the low computational cost of the discrete time-evolution equations and the locality of non-linear terms and boundary conditions, it has rapidly grown over the past decades Succi (2002). It is worth noting that while intended for the incompressible regime, the LBM formally solves the compressible isothermal Navier-Stokes (NS) equations at a reference temperature. While originally tied to the considered flow’s temperature, in the context of the LB solver the reference temperature is a numerical parameter allowing one to control convergence and consistency of the results Krüger et al. (2017). The weak compressibility of the LB formulation, along with the parabolic nature of the partial differential equation (PDE) governing the evolution of pressure –as opposed to Chorin’s original artificial compressibility method (ACM)– has made the scheme efficient and applicable to unsteady flows Chorin (1997). Although originally used for single-phase flows, the LBM has since been extended to multi-phase, multi-species and compressible flows.
While generally based on diffuse-interface formulations, LB solvers for multi-phase flows can be categorized as pertaining to one of three major categories: (a) pseudo-potential Shan and Chen (1993, 1994), (b) free energy Swift et al. (1996, 1995) and (c) phase-field. Other types of formulations can also be found in the literature; however, they are not as widely spread and/or developed as these three. In the context of the free energy formulation, the expression for the non-local non-ideal pressure tensor is found through the free energy functional. The appropriate pressure tensor is then introduced into the solver via a moment-matching approach assigning coefficients to different terms in the equilibrium distribution function (EDF) Swift et al. (1995). The interesting point that makes this formulation consistent, and differentiates it from the generic double-well potential-based Cahn-Hilliard formulation, is that the equation of state (EoS) is explicitly considered in the minimization process of the free energy. It is interesting to note that, as is the case for the pseudo-potential formulation, the explicit intervention of the EoS within the free energy functional makes the thickness of the interface tied to physical parameters, i.e. surface tension, density ratio, etc. As a consequence, the choice of the EoS and/or tuning of the coefficients in the EDF is a method of choice to widen the range of accessible density ratios. This approach was later extended by introducing non-ideal components of the pressure tensor via an external body force. Introducing these effects with a body force made the scheme more stable by reducing Galilean invariance issues tied to the third-order moments of the EDF Wagner and Li (2006). The pseudo-potential formulation follows more of a bottom-up approach in introducing non-ideal dynamics into the solver. It follows the general philosophy of the Boltzmann-Vlasov equation, introducing a non-local _potential_ to account for non-ideal effects. While the original formulation relied on what was termed an effective _density_, actual equations of state were introduced into the pseudo-potential formulation in Kupershtokh et al. (2009); Yuan and Schaefer (2006). Apart from thermodynamic consistency, the possibility of using different EoS allowed for higher density ratios to be modeled. As for the free energy formulation, this model is limited to lower Weber number regimes because it naturally comes with large surface tension values. While more advanced models allow for independent tuning of the surface tension Sbragaglia et al. (2007), the spectrum of values covered by the model is rather limited and barely allows for variations of one order of magnitude Li and Luo (2013). The last category is based on the free energy functional minimization approach, just like the free energy approach. However, contrary to the latter, the surface and bulk energies used in the minimization process are those of a generic double-well potential Fakhari and Rahimian (2010), allowing one to decouple –among other parameters– the interface thickness from the fluid physical properties. Another consequence of this choice of functional is a partial loss of thermodynamic consistency, making the extension of the formulation to more complex physics such as thermal flows, compressible flows, or acoustics less straightforward, although a number of attempts have been documented in the literature Safari et al. (2013, 2014); Yazdi et al. (2018). Nevertheless, it has been observed to be very effective and robust for multi-phase flows in the incompressible regime and readily able to deal with larger Weber numbers.
For a more comprehensive overview of the developments of such models, interested readers are referred to Wang et al. (2019). It is also worth noting that approaches relying on explicit tracking of the interface with a consistent energy functional making use of the non-ideal EoS have also been proposed as ways to improve the stability of the original free energy formulation He et al. (1999); Inamuro et al. (2004). Over the past decades a lot of effort has been put into developing phase field-based LB solvers for various applications Safari et al. (2014); Amirshaghaghi et al. (2016, 2018). Given that in such formulations the local density is a dependent variable –depending on the local value of the order parameter– they have to be coupled to a modified form of the LB solver for the flow, usually referred to as the incompressible formulation. The so-called low-Mach formulation is mostly based on the modified distribution function introduced in He et al. (1999), where the pressure is the zeroth-order moment of the distribution function. This flow solver has been combined with different forms of interface tracking formulations, e.g. the Cahn-Hilliard or the conservative Allen-Cahn equation, to model multi-phase flows. The aim of the present study is to introduce a multi-phase solver relying on the pressure-based formulation of He et al. (1999) with a cumulants-based collision operator for the flow solver, coupled with an LB solver for the conservative Allen-Cahn equation. The use of the collision operator in cumulants space along with the decoupled interface tracking allows for simulations in the high Reynolds and Weber number regimes. After a brief introduction of the model, it will be used to simulate a variety of test-cases proving its ability to reproduce correct physics and its robustness. It is worth noting that all models were implemented in our in-house multi-physics solver ALBORZ Hosseini (2020).

## 2 Theoretical background

### 2.1 Target macroscopic system

As briefly stated in the introduction, the aim of the present work is to solve the multi-phase flow equations within the context of a diffuse interface formulation in the limit of the incompressible regime, where interface dynamics are followed and accounted for via an additional indicator field, $\phi$. As such, at the macroscopic level the low-Mach Navier-Stokes equations are targeted: ${\partial_{t}\rho u_{i}+\partial_{j}\rho u_{i}u_{j}+\partial_{j}\sigma_{ij}+\mu_{\phi}\partial_{i}\phi+F_{b,i}=0,}$ (1) where $u_{i}$ is the fluid velocity, $\rho$ the fluid density and $F_{b,i}$ designates external body forces. The stress tensor $\sigma_{ij}$ is defined as: ${\sigma_{ij}=p_{h}\delta_{ij}-\eta\left(\partial_{i}u_{j}+\partial_{j}u_{i}\right)+\frac{2}{3}\eta\partial_{k}u_{k}\delta_{ij},}$ (2) where $\eta$ is the fluid dynamic viscosity, tied to the kinematic viscosity $\nu$ as $\eta=\rho\nu$, and $p_{h}$ the hydrodynamic pressure. The chemical potential $\mu_{\phi}$ is defined as: ${\mu_{\phi}=2\beta\phi\left(\phi-1\right)\left(2\phi-1\right)-\kappa\Delta\phi,}$ (3) where $\Delta=\nabla^{2}$ is the Laplacian operator and $\beta$ and $\kappa$ are parameters specific to the formulation. It must be noted that the term $\mu_{\phi}\partial_{i}\phi$ in Equation 1 accounts for surface tension effects. For the sake of clarity the free parameters will be detailed in the next paragraph.
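To make the role of these parameters concrete, the following minimal sketch (our own illustration, not taken from the solver described here) evaluates the chemical potential (3) on a periodic 2-D grid with a standard 5-point Laplacian. It anticipates the relations $\beta=12\sigma/W$ and $\kappa=3\sigma W/2$ stated in the next paragraph, and uses lattice units with unit grid spacing.

```python
import numpy as np

def chemical_potential(phi, sigma, W):
    """mu_phi = 2*beta*phi*(phi-1)*(2*phi-1) - kappa*Laplacian(phi), Eq. (3)."""
    beta = 12.0 * sigma / W          # beta = 12*sigma/W
    kappa = 1.5 * sigma * W          # kappa = 3*sigma*W/2
    lap = (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0)
           + np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1) - 4.0 * phi)
    return 2.0 * beta * phi * (phi - 1.0) * (2.0 * phi - 1.0) - kappa * lap

# For the equilibrium profile phi = (1 + tanh(2x/W))/2 across a flat
# interface, mu_phi vanishes in the continuum limit; discretely it should
# be small away from the periodic wrap-around.
x = np.arange(128) - 64.0
phi = np.tile(0.5 * (1.0 + np.tanh(2.0 * x / 4.0)), (64, 1)).T
mu = chemical_potential(phi, sigma=0.01, W=4.0)
```

With this choice of $\beta$ and $\kappa$, the bulk double-well term and the curvature term cancel exactly on the hyperbolic-tangent profile, which is the usual consistency check for phase-field parametrizations of this type.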
The interface is tracked using the conservative Allen-Cahn equation, where the order parameter $\phi$ evolves as Sun and Beckermann (2007); Chiu and Lin (2011):

${\partial_{t}\phi+\partial_{i}u_{i}\phi-\partial_{i}M\left[\partial_{i}\phi-n_{i}\frac{4\phi(1-\phi)}{W}\right]=0,}$ (4)

where the parameter $\phi$ takes on values between 0 and 1, $M$ is the mobility, $W$ is the interface thickness and $n_{i}$ is the unit normal to the interface obtained as:

${n_{i}=\frac{\partial_{i}\phi}{\lvert\lvert\nabla\phi\lvert\lvert}.}$ (5)

The interface can be located through iso-surfaces of the order parameter, i.e. $\phi=1/2$. To recover the correct surface tension, the free parameters appearing in the chemical potential, i.e. $\kappa$ and $\beta$, are tied to the surface tension $\sigma$ and interface thickness $W$ via $\beta=12\sigma/W$ and $\kappa=3\sigma W/2$.

### 2.2 Lattice Boltzmann formulation for the conservative phase-field equation

The conservative Allen-Cahn equation can readily be recovered by appropriately defining the discrete equilibrium state and relaxation coefficient in the lattice Boltzmann advection-diffusion model:

$\partial_{t}g_{\alpha}+c_{\alpha,i}\partial_{i}g_{\alpha}+\mathcal{S}_{\alpha}=\Omega^{\phi}_{\alpha},$ (6)

where $g_{\alpha}$ and $c_{\alpha}$ are the populations and velocities in the discrete velocity kinetic model and the collision operator is defined as:

$\Omega^{\phi}_{\alpha}=\frac{1}{\tau_{\phi}}\left(g_{\alpha}^{(eq)}-g_{\alpha}\right).$ (7)

The equilibrium distribution function is defined as:

${g_{\alpha}^{(eq)}=w_{\alpha}\phi\sum_{n=0}^{2}\frac{1}{n!c_{s}^{2n}}\mathcal{H}_{n}:a^{(eq)}_{n},}$ (8)

where $\mathcal{H}_{n}$ and $a_{n}^{(eq)}$ are the Hermite polynomial and coefficient of order $n$, $c_{s}$ the lattice sound speed and $w_{\alpha}$ the weights tied to each discrete velocity (resulting from the Gauss-Hermite quadrature). The expressions for these polynomials and corresponding coefficients are listed in Appendix A. The source term in Equation 6 is defined as Fakhari et al. (2017):

$\mathcal{S}_{\alpha}=w_{\alpha}\mathcal{H}_{i}n_{i}\frac{4\phi(1-\phi)}{W}.$ (9)

Given that the source term only affects the first-order moment – a non-conserved moment of the distribution function – the order parameter is simply tied to the distribution function as:

$\phi=\sum_{\alpha}g_{\alpha}.$ (10)

The relaxation coefficient is fixed as:

$\tau_{\phi}=\frac{M}{c_{s}^{2}}.$ (11)

After integration in space and time the now-famous collision-streaming form can be recovered:

${\bar{g}_{\alpha}\left(x+c_{\alpha}\delta_{t},t+\delta_{t}\right)=\left(1-\frac{\delta_{t}}{\bar{\tau}_{\phi}}\right)\bar{g}_{\alpha}\left(x,t\right)+\frac{\delta_{t}}{\bar{\tau}_{\phi}}g^{(eq)}_{\alpha}\left(x,t\right)+\delta_{t}\bar{\mathcal{S}}_{\alpha}\left(x,t\right),}$ (12)

where the source term takes on a new form, i.e.:

$\bar{\mathcal{S}}_{\alpha}=\left(1-\frac{1}{2\tau_{\phi}}\right)w_{\alpha}\mathcal{H}_{i}n_{i}\frac{4\phi(1-\phi)}{W},$ (13)

and:

${\bar{\tau}_{\phi}=\tau_{\phi}+\frac{\delta_{t}}{2}.}$ (14)

It is also worth noting that the derivatives of the order parameter appearing in the various discrete time-evolution equations are computed using isotropic finite differences, i.e.:

$\partial_{i}\phi=\frac{1}{c_{s}^{2}}\sum_{\alpha}w_{\alpha}c_{\alpha,i}\phi(x+c_{\alpha}),$ (15)

and:

$\partial^{2}_{i}\phi=\frac{2}{c_{s}^{2}}\sum_{\alpha}w_{\alpha}\left[\phi(x+c_{\alpha})-\phi(x)\right].$ (16)
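To make the time-discrete scheme of Equations 12–14 (together with Equations 5, 13 and 15) more tangible, the following is a minimal NumPy sketch of one possible realization on a D2Q9 lattice; the present work itself uses D3Q27, and the grid size, mobility, interface width and flat-interface initialization chosen here are purely illustrative assumptions. The velocity field is kept at rest, so the sketch only demonstrates the relaxation of the interface towards its equilibrium profile.

```python
import numpy as np

# Illustrative D2Q9 stencil (the present work uses D3Q27); lattice units, delta_t = 1.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0/3.0                        # lattice sound speed squared
M, W, dt = 0.05, 4.0, 1.0            # mobility and interface width (assumed values)
tau_bar = M/cs2 + dt/2               # Equations 11 and 14

ny = nx = 64
x = np.arange(nx)
phi = 0.5*(1 + np.tanh(2*(x[None, :] - nx/2)/W))*np.ones((ny, 1))  # flat interface
u = np.zeros((2, ny, nx))            # resting velocity field

def grad(phi):
    """Isotropic finite-difference gradient, Equation 15."""
    g = np.zeros((2, ny, nx))
    for ci, wi in zip(c, w):
        shifted = np.roll(np.roll(phi, -ci[1], axis=0), -ci[0], axis=1)
        g += wi*ci[:, None, None]*shifted
    return g/cs2

def equilibrium(phi, u):
    """Equation 8 truncated at second order (zeroth moment = phi)."""
    cu = np.einsum('ai,iyx->ayx', c, u)/cs2
    u2 = (u**2).sum(axis=0)/cs2
    return w[:, None, None]*phi*(1 + cu + 0.5*cu**2 - 0.5*u2)

g = equilibrium(phi, u)
for _ in range(200):
    dphi = grad(phi)
    n = dphi/(np.linalg.norm(dphi, axis=0) + 1e-32)           # Equation 5
    # Source term of Equation 13; the prefactor is written with tau_bar here
    S = (1 - dt/(2*tau_bar))*w[:, None, None] \
        * np.einsum('ai,iyx->ayx', c, n)*4*phi*(1 - phi)/W
    g += dt/tau_bar*(equilibrium(phi, u) - g) + dt*S          # collision
    for a, ca in enumerate(c):                                # streaming
        g[a] = np.roll(np.roll(g[a], ca[1], axis=0), ca[0], axis=1)
    phi = g.sum(axis=0)                                       # Equation 10
```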
While the present work makes use of a second-order equilibrium distribution function, one must note that the same macroscopic equation, i.e. Equation 4, can also be recovered by using a first-order equilibrium distribution function and an additional correction term of the following form Wang et al. (2016):

$C_{\alpha}=\frac{w_{\alpha}}{c_{s}^{2}}\mathcal{H}_{i}\partial_{t}\left(\phi u_{i}\right),$ (17)

where, as for Equation 13, post-discretization it changes into:

$\bar{C}_{\alpha}=\left(1-\frac{1}{2\tau_{\phi}}\right)\frac{w_{\alpha}}{c_{s}^{2}}\mathcal{H}_{i}\partial_{t}\left(\phi u_{i}\right).$ (18)

Such correction terms were first introduced in the context of advection-diffusion solvers Chopard et al. (2009) and further extended to non-linear equations in the same context Hosseini et al. (2019). Detailed derivations and multi-scale analyses are readily available in the literature, e.g. Zu et al. (2020).

### 2.3 Lattice Boltzmann model for the flow field

The flow solver kinetic model follows the low-Mach formulation used, among other sources, in Lee and Lin (2003); Hosseini et al. (2019, 2020), and is based on the original model introduced in He et al. (1999):

$\partial_{t}f^{{}^{\prime}}_{\alpha}+c_{\alpha,i}\partial_{i}f^{{}^{\prime}}_{\alpha}=\Omega_{\alpha}+\Xi_{\alpha},$ (19)

where the collision operator is:

$\Omega_{\alpha}=\frac{1}{\tau}\left({f_{\alpha}^{(eq)}}^{{}^{\prime}}-f^{{}^{\prime}}_{\alpha}\right),$ (20)

and $\Xi_{\alpha}$ is defined as:

$\Xi_{\alpha}=c_{s}^{2}\left(\frac{f_{\alpha}^{(eq)}}{\rho}-w_{\alpha}\right)\left(c_{\alpha,i}-u_{i}\right)\partial_{i}\rho+w_{\alpha}c_{s}^{2}\rho\partial_{i}u_{i}+\left(F_{b,i}+F_{s,i}\right)\left(c_{\alpha,i}-u_{i}\right)\frac{f_{\alpha}^{(eq)}}{\rho},$ (21)

and the relaxation coefficient $\tau$ is tied to the fluid kinematic viscosity $\nu$ as:

$\tau=\frac{\nu}{c_{s}^{2}}.$ (22)

The forces $F_{b,i}$ and $F_{s,i}$ represent respectively external body forces and surface tension, i.e.:

${F_{s,i}=\mu_{\phi}\partial_{i}\phi.}$ (23)

The modified distribution function, $f^{{}^{\prime}}_{\alpha}$, is defined as:

$f^{{}^{\prime}}_{\alpha}=w_{\alpha}p_{h}+c_{s}^{2}\left(f_{\alpha}-w_{\alpha}\rho\right),$ (24)

where $f_{\alpha}$ is the classical iso-thermal distribution function. The modified equilibrium follows the same logic and is defined as:

${{f^{(eq)}_{\alpha}}^{{}^{\prime}}=w_{\alpha}p_{h}+w_{\alpha}\rho c_{s}^{2}\sum_{n=1}^{2}\frac{1}{n!c_{s}^{2n}}\mathcal{H}_{n}:a^{(eq)}_{n}.}$ (25)

The density is tied to the order parameter as:

$\rho=\rho_{l}+\left(\rho_{h}-\rho_{l}\right)\phi,$ (26)

where $\rho_{h}$ and $\rho_{l}$ are respectively the densities of the heavy and light fluid. For a detailed analysis of the macroscopic equations recovered by this model and the derivation of the discrete equations, interested readers are referred to Hosseini et al. (2019); Hosseini (2020). In the context of the present study the low-Mach model is wrapped in a moments-based formulation where the post-collision populations $f^{{}^{\prime}*}_{\alpha}$, to be streamed as:

${{f^{{}^{\prime}}}_{\alpha}\left(x+c_{\alpha}\delta_{t},t+\delta_{t}\right)={f^{{}^{\prime}*}}_{\alpha}\left(x,t\right),}$ (27)

are computed as:

${f^{{}^{\prime}*}}_{\alpha}=\rho c_{s}^{2}f_{\alpha}^{p*}+\frac{\delta_{t}}{2}\Xi_{\alpha}.$ (28)

The post-collision pre-conditioned population $f_{\alpha}^{p*}$ is:

${f^{p*}_{\alpha}}=\mathcal{C}^{-1}\left[\left(\mathcal{I}-\mathcal{W}\right)\mathcal{K}^{p}+\mathcal{W}\mathcal{K}^{p,(eq)}\right],$ (29)

where $\mathcal{C}$ is the moments transform matrix – from pre-conditioned populations to the target moment space – $\mathcal{K}^{p,(eq)}$ denotes the corresponding equilibrium moments, $\mathcal{I}$ the identity matrix and $\mathcal{W}$ the diagonal relaxation frequency matrix.
Following Geier et al. (2020), prior to transformation to moment space the populations are pre-conditioned as:

$f^{p}_{\alpha}=\frac{1}{\rho c_{s}^{2}}f^{{}^{\prime}}_{\alpha}+\frac{\delta_{t}}{2\rho c_{s}^{2}}\Xi_{\alpha}.$ (30)

This pre-conditioning accomplishes two tasks: it normalizes the populations with the density – thus eliminating the density-dependence of the moments – and introduces the first half of the source term. As such the moments $\mathcal{K}^{p}$ are computed as:

$\mathcal{K}^{p}_{\beta}=\sum_{\alpha}\mathcal{C}_{\alpha\beta}f^{p}_{\alpha}.$ (31)

The transformation from populations to cumulants is carried out using the steps suggested in Geier et al. (2015), which allows for a more efficient algorithm. The populations are first transformed into central moments:

${\widetilde{\Pi}_{\beta}^{p}=\sum_{\alpha}{\left(c_{\alpha,x}-u_{x}\right)}^{n_{x}}{\left(c_{\alpha,y}-u_{y}\right)}^{n_{y}}{\left(c_{\alpha,z}-u_{z}\right)}^{n_{z}}f_{\alpha}^{p},}$ (32)

where $\beta=x^{n_{x}}y^{n_{y}}z^{n_{z}}$. The central moments are then transformed into the corresponding cumulants using the following relations:

$\mathcal{K}^{p}_{x}=\widetilde{\Pi}_{x}^{p},$ (33a)

$\mathcal{K}^{p}_{xy}=\widetilde{\Pi}_{xy}^{p},$ (33b)

$\mathcal{K}^{p}_{x^{2}}=\widetilde{\Pi}_{x^{2}}^{p},$ (33c)

$\mathcal{K}^{p}_{xy^{2}}=\widetilde{\Pi}_{xy^{2}}^{p},$ (33d)

$\mathcal{K}^{p}_{xyz}=\widetilde{\Pi}_{xyz}^{p},$ (33e)

$\mathcal{K}^{p}_{x^{2}yz}=\widetilde{\Pi}_{x^{2}yz}^{p}-\left[\widetilde{\Pi}_{x^{2}}^{p}\widetilde{\Pi}_{yz}^{p}+2\widetilde{\Pi}_{xy}^{p}\widetilde{\Pi}_{xz}^{p}\right],$ (33f)

$\mathcal{K}^{p}_{x^{2}y^{2}}=\widetilde{\Pi}_{x^{2}y^{2}}^{p}-\left[\widetilde{\Pi}_{x^{2}}^{p}\widetilde{\Pi}_{y^{2}}^{p}+2{(\widetilde{\Pi}_{xy}^{p})}^{2}\right],$ (33g)

$\mathcal{K}^{p}_{xy^{2}z^{2}}=\widetilde{\Pi}_{xy^{2}z^{2}}^{p}-\left[\widetilde{\Pi}_{z^{2}}^{p}\widetilde{\Pi}_{xy^{2}}^{p}+\widetilde{\Pi}_{y^{2}}^{p}\widetilde{\Pi}_{xz^{2}}^{p}+4\widetilde{\Pi}_{yz}^{p}\widetilde{\Pi}_{xyz}^{p}+2(\widetilde{\Pi}_{xz}^{p}\widetilde{\Pi}_{y^{2}z}^{p}+\widetilde{\Pi}_{xy}^{p}\widetilde{\Pi}_{yz^{2}}^{p})\right],$ (33h)

$\mathcal{K}^{p}_{x^{2}y^{2}z^{2}}=\widetilde{\Pi}_{x^{2}y^{2}z^{2}}^{p}-\left[4{(\widetilde{\Pi}_{xyz}^{p})}^{2}+\widetilde{\Pi}_{x^{2}}^{p}\widetilde{\Pi}_{y^{2}z^{2}}^{p}+\widetilde{\Pi}_{y^{2}}^{p}\widetilde{\Pi}_{x^{2}z^{2}}^{p}+\widetilde{\Pi}_{z^{2}}^{p}\widetilde{\Pi}_{x^{2}y^{2}}^{p}+4(\widetilde{\Pi}_{xy}^{p}\widetilde{\Pi}_{x^{2}yz}^{p}+\widetilde{\Pi}_{xz}^{p}\widetilde{\Pi}_{xy^{2}z}^{p}+\widetilde{\Pi}_{xy}^{p}\widetilde{\Pi}_{xyz^{2}}^{p}+2(\widetilde{\Pi}_{xy^{2}}^{p}\widetilde{\Pi}_{xz^{2}}^{p}+\widetilde{\Pi}_{x^{2}y}^{p}\widetilde{\Pi}_{yz^{2}}^{p}+\widetilde{\Pi}_{x^{2}z}^{p}\widetilde{\Pi}_{y^{2}z}^{p}))+(16\widetilde{\Pi}_{xy}^{p}\widetilde{\Pi}_{xz}^{p}\widetilde{\Pi}_{yz}^{p}+4({(\widetilde{\Pi}_{xz}^{p})}^{2}\widetilde{\Pi}_{y^{2}}^{p}+{(\widetilde{\Pi}_{yz}^{p})}^{2}\widetilde{\Pi}_{x^{2}}^{p}+{(\widetilde{\Pi}_{xy}^{p})}^{2}\widetilde{\Pi}_{z^{2}}^{p})+2\widetilde{\Pi}_{x^{2}}^{p}\widetilde{\Pi}_{y^{2}}^{p}\widetilde{\Pi}_{z^{2}}^{p})\right].$ (33i)

The remaining cumulants can be easily obtained via permutation of the indices.
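As an illustration of Equations 31–33, the snippet below evaluates central moments of a set of D3Q27 populations at a single grid point and assembles one representative fourth-order cumulant (Equation 33g); the populations and the velocity used for centering are random placeholders standing in for the pre-conditioned populations $f^{p}_{\alpha}$.

```python
import numpy as np
from itertools import product

c = np.array(list(product([-1, 0, 1], repeat=3)))      # 27 discrete velocities
f_p = np.random.default_rng(0).random(27)
f_p /= f_p.sum()                                       # normalized placeholder populations
u = f_p @ c                                            # first-order moment

def central_moment(nx, ny, nz):
    """Equation 32: central moment of order (n_x, n_y, n_z)."""
    return np.sum(f_p * (c[:, 0] - u[0])**nx
                      * (c[:, 1] - u[1])**ny
                      * (c[:, 2] - u[2])**nz)

# Equation 33g: K_{x^2 y^2} = Pi_{x^2 y^2} - [Pi_{x^2} Pi_{y^2} + 2 Pi_{xy}^2]
K_x2y2 = (central_moment(2, 2, 0)
          - central_moment(2, 0, 0)*central_moment(0, 2, 0)
          - 2*central_moment(1, 1, 0)**2)
print(K_x2y2)
```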
The collision process is performed in cumulant space according to Geier et al. (2015). The fluid viscosity is controlled via the collision factor related to second-order cumulants (e.g. $\mathcal{K}^{p}_{xy}$, $\mathcal{K}^{p}_{x^{2}}-\mathcal{K}^{p}_{y^{2}}$, $\mathcal{K}^{p}_{x^{2}}-\mathcal{K}^{p}_{z^{2}}$ etc). The rest of the collision factors are set to unity for simplicity. Once the collision step has been applied, cumulants are transformed back into central moments as: $\displaystyle\widetilde{\Pi}_{x}^{p*}=$ $\displaystyle\mathcal{K}^{p*}_{x},$ (34a) $\displaystyle\widetilde{\Pi}_{xy}^{p*}=$ $\displaystyle\mathcal{K}^{p*}_{xy},$ (34b) $\displaystyle\widetilde{\Pi}_{x^{2}}^{p*}=$ $\displaystyle\mathcal{K}^{p*}_{x^{2}},$ (34c) $\displaystyle\widetilde{\Pi}_{xy^{2}}^{p*}=$ $\displaystyle\mathcal{K}^{p*}_{xy^{2}},$ (34d) $\displaystyle\widetilde{\Pi}_{xyz}^{p*}=$ $\displaystyle\mathcal{K}^{p*}_{xyz},$ (34e) $\displaystyle\widetilde{\Pi}_{x^{2}yz}^{p*}=$ $\displaystyle\mathcal{K}^{p*}_{x^{2}yz}+\left[\widetilde{\Pi}^{p*}_{x^{2}}\widetilde{\Pi}^{p*}_{yz}+2\widetilde{\Pi}^{p*}_{xy}\widetilde{\Pi}^{p*}_{xz}\right],$ (34f) $\displaystyle\widetilde{\Pi}_{x^{2}y^{2}}^{p*}=$ $\displaystyle\mathcal{K}^{p*}_{x^{2}y^{2}}+\left[\widetilde{\Pi}_{x^{2}}^{p*}\widetilde{\Pi}_{y^{2}}^{p*}+2{(\widetilde{\Pi}_{xy}^{p*})}^{2}\right],$ (34g) $\displaystyle\widetilde{\Pi}_{xy^{2}z^{2}}^{p*}=$ $\displaystyle\mathcal{K}^{p*}_{xy^{2}z^{2}}+\left[\widetilde{\Pi}^{p*}_{z^{2}}\widetilde{\Pi}^{p*}_{xy^{2}}+\widetilde{\Pi}^{p*}_{y^{2}}\widetilde{\Pi}^{p*}_{xz^{2}}+4\widetilde{\Pi}^{p*}_{yz}\widetilde{\Pi}^{p*}_{xyz}+2(\widetilde{\Pi}^{p*}_{xz}\widetilde{\Pi}^{p*}_{y^{2}z}+\widetilde{\Pi}^{p*}_{xy}\widetilde{\Pi}^{p*}_{yz^{2}})\right],$ (34h) $\displaystyle\widetilde{\Pi}^{p*}_{x^{2}y^{2}z^{2}}=$ $\displaystyle\mathcal{K}^{p*}_{x^{2}y^{2}z^{2}}+\left[4{(\widetilde{\Pi}^{p*}_{xyz})}^{2}+\widetilde{\Pi}^{p*}_{x^{2}}\widetilde{\Pi}^{p*}_{y^{2}z^{2}}+\widetilde{\Pi}^{p*}_{y^{2}}\widetilde{\Pi}^{p*}_{x^{2}z^{2}}+\widetilde{\Pi}^{p*}_{z^{2}}\widetilde{\Pi}^{p*}_{x^{2}y^{2}}+4(\widetilde{\Pi}^{p*}_{xy}\widetilde{\Pi}^{p*}_{x^{2}yz}+\right.$ $\displaystyle\left.\widetilde{\Pi}^{p*}_{xz}\widetilde{\Pi}^{p*}_{xy^{2}z}+\widetilde{\Pi}^{p*}_{xy}\widetilde{\Pi}^{p*}_{xyz^{2}}+2(\widetilde{\Pi}^{p*}_{xy^{2}}\widetilde{\Pi}^{p*}_{xz^{2}}+\widetilde{\Pi}^{p*}_{x^{2}y}\widetilde{\Pi}^{p*}_{yz^{2}}+\widetilde{\Pi}^{p*}_{x^{2}z}\widetilde{\Pi}^{p*}_{y^{2}z}))-\right.$ $\displaystyle\left.(16\widetilde{\Pi}^{p*}_{xy}\widetilde{\Pi}^{p*}_{xz}\widetilde{\Pi}^{p*}_{yz}+4({(\widetilde{\Pi}^{p*}_{xz})}^{2}\widetilde{\Pi}^{p*}_{y^{2}}+{(\widetilde{\Pi}^{p*}_{yz})}^{2}\widetilde{\Pi}^{p*}_{x^{2}}+{(\widetilde{\Pi}^{p*}_{xy})}^{2}\widetilde{\Pi}^{p*}_{z^{2}})+2\widetilde{\Pi}^{p*}_{x^{2}}\widetilde{\Pi}^{p*}_{y^{2}}\widetilde{\Pi}^{p*}_{z^{2}})\right].$ (34i) After this step, the post-collision central moments can be readily transformed back to populations. All transforms presented here and upcoming simulations are based on the D3Q27 stencil. 
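In code, the collision step described above reduces to a per-cumulant relaxation: the second-order, viscosity-related cumulants are relaxed with a viscosity-dependent frequency, all others with unit frequency, i.e. set directly to their equilibria. The sketch below assumes that the flow-solver relaxation time carries the same half-time-step shift as Equation 14; the key names and numerical values are placeholders.

```python
# Placeholder values, lattice units
nu, cs2, dt = 0.01, 1.0/3.0, 1.0
omega_nu = dt/(nu/cs2 + dt/2)        # viscosity-related frequency (assumed shift)

def collide(K, K_eq, second_order):
    """Relax a dict of cumulants K towards their equilibria K_eq:
    omega_nu for the second-order keys, unit relaxation for the rest."""
    return {key: K[key] + (omega_nu if key in second_order else 1.0)
                          * (K_eq[key] - K[key])
            for key in K}

# e.g. second_order = {"xy", "xz", "yz", "x2-y2", "x2-z2"}
```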
It must also be noted that the following set of 27 moments is used as the moments basis:

$\beta\in\{0,x,y,z,xy,xz,yz,x^{2}-y^{2},x^{2}-z^{2},x^{2}+y^{2}+z^{2},xy^{2}+xz^{2},xyz,xy^{2}-xz^{2},x^{2}y+yz^{2},x^{2}z+y^{2}z,x^{2}y-yz^{2},x^{2}z-y^{2}z,x^{2}y^{2}-2x^{2}z^{2}+y^{2}z^{2},x^{2}y^{2}+x^{2}z^{2}-2y^{2}z^{2},x^{2}y^{2}+x^{2}z^{2}+y^{2}z^{2},x^{2}yz,xy^{2}z,xyz^{2},x^{2}y^{2}z,x^{2}yz^{2},xy^{2}z^{2},x^{2}y^{2}z^{2}\},$ (35)

where $\beta=x^{2}-y^{2}$ stands for a central moment of the form $\widetilde{\Pi}^{p}_{x^{2}}-\widetilde{\Pi}^{p}_{y^{2}}$. Previous systematic studies of the flow solver have shown second-order convergence under diffusive scaling Hosseini et al. (2019).

## 3 Numerical applications

The proposed numerical method will be validated through different test-cases in the present section. All results and simulation parameters are reported in lattice units, i.e. non-dimensionalized with the time-step, grid-size and heavy fluid density.

### 3.1 Static droplet: Surface tension measurement

As a first test, to validate the hydrodynamics of the model, we consider the case of a static droplet in a rectangular domain with periodic boundaries all around. All cases consist of a domain of size $256\times 256$ filled with a light fluid. A _droplet_ of the heavier fluid is placed at the center of the domain. Simulations are run until the system converges. The pressure difference between the droplet and the surrounding lighter fluid is then extracted. Using Laplace's law, i.e.:

$\Delta P=\frac{\sigma}{r},$ (36)

where $\Delta P$ is the pressure difference and $r$ the droplet radius, one can readily obtain the effective surface tension. Three different surface tensions, i.e. $\sigma=1\times 10^{-1}$, $1\times 10^{-3}$ and $1\times 10^{-6}$, along with four different droplet radii, i.e. $r=25$, $30$, $35$ and $45$, were considered here. The obtained results are shown in Figure 1. The results presented here consider a density ratio of 20 and a non-dimensional viscosity of 0.1.

Figure 1: Changes of pressure difference around the droplet for different surface tensions and droplet radii. Red, blue and black symbols illustrate results from the present study with respectively $\sigma=10^{-1},10^{-3}$ and $10^{-6}$.

It is readily observed that the model satisfies Laplace's law and recovers correct surface tensions. Furthermore, it is seen that it can span a wide range of surface tensions, as opposed to other classes of multi-phase solvers such as the free energy or pseudo-potential formulations Qin et al. (2018); Mazloomi M et al. (2015), while maintaining relatively low spurious currents. For example, at a density ratio of 1000 and $\sigma=10^{-3}$, the spurious currents were found to be only of the order of $10^{-6}$, in strong contrast with the previously cited approaches.
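A sketch of the post-processing behind Figure 1: the effective surface tension is the slope of the measured pressure jump plotted against the inverse droplet radius (Equation 36). The $\Delta P$ values below are synthetic placeholders standing in for converged simulation output at $\sigma=10^{-3}$.

```python
import numpy as np

radii = np.array([25.0, 30.0, 35.0, 45.0])                 # radii of Section 3.1
dP_measured = np.array([4.0e-5, 3.3e-5, 2.9e-5, 2.2e-5])   # placeholder data
sigma_eff, _ = np.polyfit(1.0/radii, dP_measured, 1)       # slope of dP vs 1/r
print(f"effective surface tension: {sigma_eff:.2e}")
```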
### 3.2 Rayleigh-Taylor instability

The Rayleigh-Taylor instability is a well-known and widely studied gravity-driven effect occurring when a layer of a heavier fluid lies on top of another layer of a lighter fluid Yang et al. (2018a, b); Rahmat et al. (2014). A perturbation at the interface between the two fluids causes the heavier one to penetrate into the lighter fluid. In general, the dynamics of this system are governed by two non-dimensional parameters, namely the Atwood and Reynolds numbers. The former is defined as:

$\hbox{At}=\frac{\rho_{h}-\rho_{l}}{\rho_{h}+\rho_{l}},$ (37)

while the latter is:

$\hbox{Re}=\frac{\rho_{h}U^{*}L_{x}}{\mu_{h}},$ (38)

where $\rho_{h}$ and $\rho_{l}$ are the densities of the heavy and light fluids, $\mu_{h}$ is the dynamic viscosity of the heavy fluid, $L_{x}$ is the size of the domain in the horizontal direction and $U^{*}$ is the characteristic velocity defined as:

$U^{*}=\sqrt{gL_{x}},$ (39)

where $g$ is the gravitational acceleration. The characteristic time for this case is defined as:

$T=\frac{L_{x}}{U^{*}}.$ (40)

Following the set-up studied in He et al. (1999), we consider a domain of size $L_{x}\times 4L_{x}$ with $L_{x}=600$. Initially the top half of the domain is filled with the heavy liquid and the bottom half with the lighter one. The interface is perturbed via the following profile:

$h_{i}(x)=\frac{L_{x}}{10}\cos\left(\frac{2\pi x}{L_{x}}\right)+2L_{x}.$ (41)

While periodic boundaries were applied in the horizontal direction, at the top and bottom boundaries no-slip boundary conditions were applied using the half-way bounce-back scheme Krüger et al. (2017). The At number is set to 0.5 while two different Re numbers are considered, i.e. Re=256 and 2048. In both cases $g=6\times 10^{-6}$ while the non-dimensional viscosities were respectively 0.1406 and 0.0176. To validate the simulations, the position of the downward-plunging heavy liquid spike is measured over time and compared to reference data from He et al. (1999). The results are illustrated in Figure 2.

Figure 2: (Right) Evolution of the interface for the Rayleigh-Taylor instability for (top row) Re=256 and (bottom row) Re=2048 at different times: (from left to right) $t/T=$1, 2, 3, 4 and 5. (Left) Position of the penetrating spike over time: (black) Re=256 and (red) Re=2048. (Plain lines) present results and (symbols) data from He et al. (1999).

It is observed that both simulations agree very well with the reference solution of He et al. (1999). To showcase the ability of the solver to handle under-resolved simulations and to illustrate the convergence of the obtained solutions, the simulations were repeated at two additional lower resolutions, $L_{x}=$300 and 150, with an acoustic scaling of the time-step size. The results obtained with those lower resolutions are shown in Figures 3 and 4.

Figure 3: (Right) Interface for the Rayleigh-Taylor instability at $t/T=$5 and Re=256 for three different resolutions (left to right) $L_{x}$=150, 300 and 600. (Left) Position of the penetrating spike over time: (black) $L_{x}$=600, (red) $L_{x}$=300 and (blue) $L_{x}$=150.

Looking at the position of the plunging spike, it can be clearly seen that while minor differences exist, even the lowest resolution captures the correct position. Smaller features, however, especially at Re=2048, need higher resolutions to be correctly captured. At Re=256, for instance, even the secondary instability is converged, as at $L_{x}$=300 no segmentation is observed. For Re=2048, on the other hand, while larger structures start to converge, thinner features clearly need higher resolutions.

Figure 4: (Right) Interface for the Rayleigh-Taylor instability at $t/T=$5 and Re=2048 for three different resolutions (left to right) $L_{x}$=150, 300 and 600. (Left) Position of the penetrating spike over time: (black) $L_{x}$=600, (red) $L_{x}$=300 and (blue) $L_{x}$=150.
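The viscosities quoted above can be cross-checked directly from Equations 38–40 and the stated values of $g$ and $L_{x}$:

```python
import math

Lx, g = 600, 6e-6
U = math.sqrt(g*Lx)            # characteristic velocity, Equation 39
T = Lx/U                       # characteristic time, Equation 40
for Re in (256, 2048):
    nu = U*Lx/Re               # since Re = rho_h U* Lx / mu_h = U* Lx / nu_h
    print(f"Re = {Re}: U* = {U:.2f}, T = {T:.0f}, nu = {nu:.4f}")
# Recovers nu = 0.1406 for Re = 256 and nu = 0.0176 for Re = 2048,
# matching the values used in the simulations.
```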
### 3.3 Turbulent 3-D Rayleigh-Taylor instability

To further showcase the ability of the solver to deal with complex flows, we also consider the Rayleigh-Taylor instability in 3-D. The studied configuration follows that of Liang et al. (2016). The definitions of the non-dimensional parameters are similar to those used in the previous section. The domain is discretized using $100\times 100\times 1200$ grid-points, with $L=100$. The interface is placed at the center of the domain along the $z$-axis, and perturbed using:

$h_{i}(x,y)=\frac{L}{10}\left[\cos\left(\frac{2\pi x}{L}\right)+\cos\left(\frac{2\pi y}{L}\right)\right]+6L,$ (42)

and the Reynolds and Atwood numbers are set to respectively 1000 and 0.15. As for previous configurations, periodic boundaries were applied in the horizontal direction and no-slip boundaries at the top and bottom. The body force was set to $g=3.6\times 10^{-5}$ and the viscosity to 0.006. The position of the downward-plunging spike was measured over time and compared to reference data from Liang et al. (2016). After the penetration of the two liquids into each other, the Kelvin-Helmholtz instability causes the plunging spike to roll up and take a mushroom-like shape. As the mushroom-shaped spike further progresses into the lighter fluid, the cap disintegrates into four finger-like structures. It is interesting to note that, as will be shown later, these fingers are reminiscent of the instabilities leading to splashing in the impact of a droplet on liquid surfaces.

Figure 5: (Right) Evolution of the interface for the 3-D Rayleigh-Taylor instability for Re=1000 at different times: (from left to right) $t/T=$1.9, 3.9, 5.8, 7.8 and 9.7. (Left) Position of the penetrating spike over time: (Plain lines) present results and (symbols) data from Liang et al. (2016).

Overall, as shown in Figure 5, the results obtained from the present simulation are in good agreement with reference data.
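The same cross-check applies to the 3-D configuration:

```python
import math

L, g, nu = 100, 3.6e-5, 0.006
U = math.sqrt(g*L)          # characteristic velocity, 0.06
Re = U*L/nu                 # recovers the quoted Re = 1000
print(f"U* = {U:.3f}, Re = {Re:.0f}")
```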
### 3.4 Droplet splashing on thin liquid film

As the final case, we consider the impact of a droplet on a thin liquid layer. This configuration is interesting as it involves complex dynamics such as splashing and is of interest in many areas of science and engineering Hagemeier et al. (2011, 2012). Immediately after impact the liquid surface is perturbed. In many instances, at the contact point (line) a thin liquid jet forms, and then continues to grow and propagate as a _corolla_. As the crown-like structure propagates radially, a rim starts to form. At high enough Weber numbers the structure breaks into small droplets via the Rayleigh–Plateau instability Josserand and Zaleski (2003). A detailed study of the initial stages of the spreading process has shown that the spreading radius scales with the square root of time regardless of the Weber and Reynolds numbers Josserand and Zaleski (2003). While widely studied in the literature using different numerical formulations Hu et al. (2019); Liang et al. (2018); Fakhari et al. (2017); Sitompul and Aoki (2019), simulations have usually been limited to lower density and viscosity ratios and/or Weber and Reynolds numbers Hu et al. (2019); Liang et al. (2018); Fakhari et al. (2017); Qin et al. (2018). As such we first focus on a 2-D configuration considering three sets of We and Re numbers, namely: Re=200 and We=220, Re=1000 and We=220, and Re=1000 and We=2200. In all simulations the density and viscosity ratios are set to $\rho_{h}/\rho_{l}=1000$ and $\nu_{l}/\nu_{h}=15$, emulating a water/air system. The geometrical configuration is illustrated in Figure 6.

Figure 6: Geometrical configuration of the droplet impact on liquid sheet case in 2-D.

The top and bottom boundary conditions are set to walls modeled with the half-way bounce-back formulation, while symmetrical boundaries are applied to the left and right. The droplet diameter is resolved with 100 grid-points. The initial velocity in the droplet is set to $U_{0}=$0.05 and the heavy fluid viscosity is determined via the Reynolds number:

$\hbox{Re}=\frac{\rho_{h}U_{0}D}{\mu_{h}}.$ (43)

Furthermore, the We number is defined as:

${\hbox{We}=\frac{\rho_{l}D{U_{0}}^{2}}{\sigma}.}$ (44)

The evolution of the liquid surface as obtained from the simulations is shown in Figure 7. Following Josserand and Zaleski (2003), breakup of the rims and splashing occurs for larger impact parameters, defined as:

$K=\hbox{We}^{1/2}\hbox{Re}^{1/4}.$ (45)

Accordingly, the impact parameters for the studied 2-D cases are: K=55.7, 83.4 and 263.8. Looking at the evolution of the systems in Figure 7, it can be clearly observed that, in agreement with observations in Josserand and Zaleski (2003), larger values of the impact parameter lead to droplet detachment from the rim and splashing.

Figure 7: Impact of a circular droplet on a liquid sheet at different We and Re numbers with $\rho_{h}/\rho_{l}=1000$ and $\nu_{l}/\nu_{h}=15$. (black) Re=200 and We=220, (red) Re=1000 and We=220, and (blue) Re=1000 and We=2200.

Furthermore, the evolution of the spreading radii $r_{K}$ over time for the different cases is shown in Figure 8. As shown there, the radii scale with the square root of time at the initial stages of the impact, in agreement with results reported in Josserand and Zaleski (2003).

Figure 8: Evolution of the spreading radius $r_{K}$ as a function of time for the droplet impact on liquid film case. Circular symbols designate 2-D simulations: (black) Re=200 and We=220, (red) Re=1000 and We=220 and (blue) Re=1000 and We=2200. Rectangular symbols belong to the 3-D simulation with Re=1000 and We=8000. The dashed line is $\frac{r_{K}}{D}=1.1\sqrt{t/T}$.

As a final test-case, to showcase the robustness of the proposed algorithm, a 3-D configuration with Re=1000 and We=8000 was also run. The evolution of the liquid surface over time is shown in Figure 9.

Figure 9: Impact of a spherical droplet on a thin liquid sheet at We=8000 and Re=1000 at different times with $\rho_{h}/\rho_{l}=1000$ and $\nu_{l}/\nu_{h}=15$.

After the initial impact a thin liquid jet is formed at the contact line between the droplet and the sheet. Then, the crown evolves and spreads. At later stages, finger-like structures start to form at the tip of the crown. These liquid fingers then get detached from the crown and liquid splashing is observed. The sequence of events is in excellent agreement with those presented in Josserand and Zaleski (2003). Furthermore, the spreading radius, as plotted in Figure 8, is in agreement with theoretical predictions.
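For reference, the quoted impact parameters follow directly from Equation 45:

```python
cases = [(200, 220), (1000, 220), (1000, 2200)]    # (Re, We)
for Re, We in cases:
    K = We**0.5 * Re**0.25
    print(f"Re = {Re:4d}, We = {We:4d}: K = {K:.1f}")
# Yields approximately 55.8, 83.4 and 263.8, matching the quoted values
# up to rounding.
```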
## 4 Conclusions

A lattice Boltzmann-based solver relying on the conservative Allen-Cahn equation and a modified hydrodynamic pressure/velocity-based distribution function with a collision operator in cumulant space was presented in this study, with the aim to model multi-phase flows in the larger Weber/Reynolds regimes. While the stability at high Weber numbers – i.e. low surface tensions – is achieved through the decoupled nature of the conservative phase-field formulation, the added stability in terms of kinematic viscosity – i.e. larger Reynolds numbers – is brought about by the cumulant collision operator and the modified pressure-based formulation for the flow. Compared to other models available in the literature based on single-relaxation-time (BGK) collision operators, the use of cumulants allows for stability at considerably higher Reynolds numbers – i.e. lower values of the relaxation factor. For instance, configurations such as the 3-D droplet splashing were not stable with a single-relaxation-time collision operator for the same choice of non-dimensional parameters, i.e. resolution and relaxation factor. The algorithm was shown to capture the dynamics of the flow and to be stable in the targeted regimes. The application of the proposed algorithm to more complex configurations such as liquid jets is currently being studied and will be reported in future publications.

Author contributions: conceptualization, S.A.H. and H.S.; methodology, S.A.H.; software, S.A.H.; validation, S.A.H. and H.S.; formal analysis, S.A.H.; investigation, S.A.H.; data curation, S.A.H.; writing – original draft preparation, S.A.H.; writing – review and editing, S.A.H., H.S. and D.T.; visualization, S.A.H.; supervision, D.T.

Funding: S.A.H. and H.S. would like to acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in TRR 287 (Project-ID 422037413).

Conflicts of interest: The authors declare no conflict of interest.

## References

* Krüger et al. (2017) Krüger, T.; Kusumaatmaja, H.; Kuzmin, A.; Shardt, O.; Silva, G.; Viggen, E.M. The Lattice Boltzmann Method: Principles and Practice; Graduate Texts in Physics, Springer International Publishing: Cham, 2017. doi:10.1007/978-3-319-44649-3.
* Guo and Shu (2013) Guo, Z.; Shu, C. Lattice Boltzmann Method and Its Applications in Engineering; Vol. 3, Advances in Computational Fluid Dynamics, WORLD SCIENTIFIC, 2013. doi:10.1142/8806.
* Succi (2002) Succi, S. The Lattice Boltzmann Equation for Fluid Dynamics and Beyond; 2002.
* Chorin (1997) Chorin, A.J. A Numerical Method for Solving Incompressible Viscous Flow Problems. Journal of Computational Physics 1997, 135, 118–125. doi:10.1006/jcph.1997.5716.
* Shan and Chen (1993) Shan, X.; Chen, H. Lattice Boltzmann model for simulating flows with multiple phases and components. Phys. Rev. E 1993, 47, 1815–1819. doi:10.1103/PhysRevE.47.1815.
* Shan and Chen (1994) Shan, X.; Chen, H. Simulation of nonideal gases and liquid-gas phase transitions by the lattice Boltzmann equation. Phys. Rev. E 1994, 49, 2941–2948. doi:10.1103/PhysRevE.49.2941.
* Swift et al. (1996) Swift, M.R.; Orlandini, E.; Osborn, W.R.; Yeomans, J.M. Lattice Boltzmann simulations of liquid-gas and binary fluid systems. Phys. Rev. E 1996, 54, 5041–5052. doi:10.1103/PhysRevE.54.5041.
* Swift et al. (1995) Swift, M.R.; Osborn, W.R.; Yeomans, J.M. Lattice Boltzmann Simulation of Nonideal Fluids. Phys. Rev. Lett. 1995, 75, 830–833. doi:10.1103/PhysRevLett.75.830.
* Wagner and Li (2006) Wagner, A.; Li, Q. Investigation of Galilean invariance of multi-phase lattice Boltzmann methods. Physica A: Statistical Mechanics and its Applications 2006, 362, 105–110. doi:10.1016/j.physa.2005.09.030.
* Kupershtokh et al. (2009) Kupershtokh, A.; Medvedev, D.; Karpov, D. On equations of state in a lattice Boltzmann method. Computers & Mathematics with Applications 2009, 58, 965–974. doi:10.1016/j.camwa.2009.02.024.
* Yuan and Schaefer (2006) Yuan, P.; Schaefer, L. Equations of state in a lattice Boltzmann model. Physics of Fluids 2006, 18, 042101. doi:10.1063/1.2187070.
* Sbragaglia et al. (2007) Sbragaglia, M.; Benzi, R.; Biferale, L.; Succi, S.; Sugiyama, K.; Toschi, F. Generalized lattice Boltzmann method with multirange pseudopotential.
Phys. Rev. E 2007, 75, 026702. doi:10.1103/PhysRevE.75.026702.
* Li and Luo (2013) Li, Q.; Luo, K.H. Achieving tunable surface tension in the pseudopotential lattice Boltzmann modeling of multiphase flows. Phys. Rev. E 2013, 88, 053307. doi:10.1103/PhysRevE.88.053307.
* Fakhari and Rahimian (2010) Fakhari, A.; Rahimian, M.H. Phase-field modeling by the method of lattice Boltzmann equations. Physical Review E 2010, 81, 036707.
* Safari et al. (2013) Safari, H.; Rahimian, M.H.; Krafczyk, M. Extended lattice Boltzmann method for numerical simulation of thermal phase change in two-phase fluid flow. Physical Review E 2013, 88, 013304.
* Safari et al. (2014) Safari, H.; Rahimian, M.H.; Krafczyk, M. Consistent simulation of droplet evaporation based on the phase-field multiphase lattice Boltzmann method. Physical Review E 2014, 90, 033305.
* Yazdi et al. (2018) Yazdi, H.; Rahimian, M.H.; Safari, H. Numerical simulation of pressure-driven phase-change in two-phase fluid flows using the Lattice Boltzmann Method. Computers & Fluids 2018, 172, 8–18.
* Wang et al. (2019) Wang, H.; Yuan, X.; Liang, H.; Chai, Z.; Shi, B. A brief review of the phase-field-based lattice Boltzmann method for multiphase flows. Capillarity 2019, 2, 33–52.
* He et al. (1999) He, X.; Chen, S.; Zhang, R. A Lattice Boltzmann Scheme for Incompressible Multiphase Flow and Its Application in Simulation of Rayleigh–Taylor Instability. Journal of Computational Physics 1999, 152, 642–663. doi:10.1006/jcph.1999.6257.
* Inamuro et al. (2004) Inamuro, T.; Ogata, T.; Tajima, S.; Konishi, N. A lattice Boltzmann method for incompressible two-phase flows with large density differences. Journal of Computational Physics 2004, 198, 628–644. doi:10.1016/j.jcp.2004.01.019.
* Amirshaghaghi et al. (2016) Amirshaghaghi, H.; Rahimian, M.; Safari, H. Application of a two phase lattice Boltzmann model in simulation of free surface jet impingement heat transfer. International Communications in Heat and Mass Transfer 2016, 75, 282–294.
* Amirshaghaghi et al. (2018) Amirshaghaghi, H.; Rahimian, M.H.; Safari, H.; Krafczyk, M. Large Eddy Simulation of liquid sheet breakup using a two-phase lattice Boltzmann method. Computers & Fluids 2018, 160, 93–107.
* Hosseini (2020) Hosseini, S.A. Development of a lattice Boltzmann-based numerical method for the simulation of reacting flows. PhD thesis, Otto-von-Guericke Universität/Université Paris-Saclay, 2020.
* Sun and Beckermann (2007) Sun, Y.; Beckermann, C. Sharp interface tracking using the phase-field equation. Journal of Computational Physics 2007, 220, 626–653.
* Chiu and Lin (2011) Chiu, P.H.; Lin, Y.T. A conservative phase field method for solving incompressible two-phase flows. Journal of Computational Physics 2011, 230, 185–204.
* Fakhari et al. (2017) Fakhari, A.; Bolster, D.; Luo, L.S. A weighted multiple-relaxation-time lattice Boltzmann method for multiphase flows and its application to partial coalescence cascades. Journal of Computational Physics 2017, 341, 22–43. doi:10.1016/j.jcp.2017.03.062.
* Wang et al. (2016) Wang, H.; Chai, Z.; Shi, B.; Liang, H. Comparative study of the lattice Boltzmann models for Allen-Cahn and Cahn-Hilliard equations. Physical Review E 2016, 94, 033304.
* Chopard et al. (2009) Chopard, B.; Falcone, J.L.; Latt, J. The lattice Boltzmann advection-diffusion model revisited. The European Physical Journal Special Topics 2009, 171, 245–249.
* Hosseini et al. (2019) Hosseini, S.A.; Darabiha, N.; Thévenin, D.
Lattice Boltzmann advection-diffusion model for conjugate heat transfer in heterogeneous media. International Journal of Heat and Mass Transfer 2019, 132, 906–919.
* Zu et al. (2020) Zu, Y.; Li, A.; Wei, H. Phase-field lattice Boltzmann model for interface tracking of a binary fluid system based on the Allen-Cahn equation. Physical Review E 2020, 102, 053307.
* Lee and Lin (2003) Lee, T.; Lin, C.L. Pressure evolution lattice Boltzmann equation method for two-phase flow with phase change. Physical Review E 2003, 67, 056703.
* Hosseini et al. (2019) Hosseini, S.A.; Safari, H.; Darabiha, N.; Thévenin, D.; Krafczyk, M. Hybrid Lattice Boltzmann-finite difference model for low Mach number combustion simulation. Combustion and Flame 2019, 209, 394–404.
* Hosseini et al. (2020) Hosseini, S.A.; Abdelsamie, A.; Darabiha, N.; Thévenin, D. Low-Mach hybrid lattice Boltzmann-finite difference solver for combustion in complex flows. Physics of Fluids 2020, 32, 077105.
* Geier et al. (2020) Geier, M.; Lenz, S.; Schönherr, M.; Krafczyk, M. Under-resolved and large eddy simulations of a decaying Taylor–Green vortex with the cumulant lattice Boltzmann method. Theor. Comput. Fluid Dyn. 2020. doi:10.1007/s00162-020-00555-7.
* Geier et al. (2015) Geier, M.; Schönherr, M.; Pasquali, A.; Krafczyk, M. The cumulant lattice Boltzmann equation in three dimensions: Theory and validation. Computers & Mathematics with Applications 2015, 70, 507–547. doi:10.1016/j.camwa.2015.05.001.
* Qin et al. (2018) Qin, F.; Mazloomi Moqaddam, A.; Kang, Q.; Derome, D.; Carmeliet, J. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow. Physics of Fluids 2018, 30, 032104. doi:10.1063/1.5016965.
* Mazloomi M et al. (2015) Mazloomi M, A.; Chikatamarla, S.; Karlin, I. Entropic Lattice Boltzmann Method for Multiphase Flows. Phys. Rev. Lett. 2015, 114, 174502. doi:10.1103/PhysRevLett.114.174502.
* Yang et al. (2018a) Yang, X.; He, H.; Xu, J.; Wei, Y.; Zhang, H. Entropy generation rates in two-dimensional Rayleigh–Taylor turbulence mixing. Entropy 2018, 20, 738.
* Yang et al. (2018b) Yang, H.; Wei, Y.; Zhu, Z.; Dou, H.; Qian, Y. Statistics of heat transfer in two-dimensional turbulent Rayleigh-Bénard convection at various Prandtl Number. Entropy 2018, 20, 582.
* Rahmat et al. (2014) Rahmat, A.; Tofighi, N.; Shadloo, M.; Yildiz, M. Numerical simulation of wall bounded and electrically excited Rayleigh–Taylor instability using incompressible smoothed particle hydrodynamics. Colloids and Surfaces A: Physicochemical and Engineering Aspects 2014, 460, 60–70.
* Liang et al. (2016) Liang, H.; Li, Q.X.; Shi, B.C.; Chai, Z.H. Lattice Boltzmann simulation of three-dimensional Rayleigh-Taylor instability. Phys. Rev. E 2016, 93, 033113. doi:10.1103/PhysRevE.93.033113.
* Hagemeier et al. (2011) Hagemeier, T.; Hartmann, M.; Thévenin, D. Practice of vehicle soiling investigations: A review. International Journal of Multiphase Flow 2011, 37, 860–875.
* Hagemeier et al. (2012) Hagemeier, T.; Hartmann, M.; Kühle, M.; Thévenin, D.; Zähringer, K. Experimental characterization of thin films, droplets and rivulets using LED fluorescence. Experiments in fluids 2012, 52, 361–374.
* Josserand and Zaleski (2003) Josserand, C.; Zaleski, S. Droplet splashing on a thin liquid film. Phys. Fluids 2003, 15, 1650. doi:10.1063/1.1572815.
* Hu et al. (2019) Hu, Y.; Li, D.; Jin, L.; Niu, X.; Shu, S.
Hybrid Allen-Cahn-based lattice Boltzmann model for incompressible two-phase flows: The reduction of numerical dispersion. Phys. Rev. E 2019, 99, 023302. doi:10.1103/PhysRevE.99.023302.
* Liang et al. (2018) Liang, H.; Xu, J.; Chen, J.; Wang, H.; Chai, Z.; Shi, B. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows. Phys. Rev. E 2018, 97, 033309. doi:10.1103/PhysRevE.97.033309.
* Sitompul and Aoki (2019) Sitompul, Y.P.; Aoki, T. A filtered cumulant lattice Boltzmann method for violent two-phase flows. Journal of Computational Physics 2019, 390, 93–120. doi:10.1016/j.jcp.2019.04.019.

## Appendix A Hermite polynomials and coefficients

The Hermite polynomials used in the equilibrium distribution functions of the different solvers are defined as:

$\mathcal{H}_{0}=1,$ (46a)

$\mathcal{H}_{i}=c_{\alpha,i},$ (46b)

$\mathcal{H}_{ij}=c_{\alpha,i}c_{\alpha,j}-c_{s}^{2}\delta_{ij},$ (46c)

where $\delta_{ij}$ denotes the Kronecker delta function, while the corresponding equilibrium coefficients are:

$a^{(eq)}_{0}=\rho,$ (47a)

$a^{(eq)}_{i}=\rho u_{i},$ (47b)

$a^{(eq)}_{ij}=\rho u_{i}u_{j}.$ (47c)
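As a small worked illustration of this appendix, the snippet below constructs the polynomials up to second order on the D3Q27 stencil and evaluates the second-order equilibrium of Equation 8 with the order parameter factored out (so the coefficients reduce to $1$, $u_{i}$ and $u_{i}u_{j}$); the weight construction is the standard Gauss-Hermite one.

```python
import numpy as np
from itertools import product

cs2 = 1.0/3.0
c = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)
# D3Q27 Gauss-Hermite weights: product over axes of 2/3 (c_i = 0) or 1/6
w = np.prod(np.where(c == 0.0, 2.0/3.0, 1.0/6.0), axis=1)  # 8/27, 2/27, 1/54, 1/216

def g_eq(phi, u):
    H1 = c                                                 # H_i = c_{alpha,i}
    H2 = np.einsum('ai,aj->aij', c, c) - cs2*np.eye(3)     # Equation 46c
    term1 = np.einsum('ai,i->a', H1, u)/cs2
    term2 = np.einsum('aij,ij->a', H2, np.outer(u, u))/(2*cs2**2)
    return w*phi*(1.0 + term1 + term2)

geq = g_eq(0.7, np.array([0.01, 0.0, 0.0]))
print(geq.sum())           # zeroth moment recovers phi = 0.7
print(geq @ c)             # first moment recovers phi*u
```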
# Latent Space Analysis of VAE and Intro-VAE applied to 3-dimensional MR Brain Volumes of Multiple Sclerosis, Leukoencephalopathy, and Healthy Patients

Christopher Vogelsanger <EMAIL_ADDRESS>Christian Federau <EMAIL_ADDRESS>

###### Abstract

Multiple Sclerosis (MS) and microvascular leukoencephalopathy are two distinct neurological conditions, the first caused by focal autoimmune inflammation in the central nervous system, the second caused by chronic white matter damage from atherosclerotic microvascular disease. Both conditions lead to signal anomalies on Fluid Attenuated Inversion Recovery (FLAIR) magnetic resonance (MR) images, which can be distinguished by an expert neuroradiologist, but which can look very similar to the untrained eye, as well as in the early stage of both diseases. In this paper, we attempt to train a 3-dimensional deep neural network to learn the specific features of both diseases in an unsupervised manner. To this end, in a first step we train a generative neural network to create artificial MR images of both conditions with approximate explicit density, using a mixed dataset of multiple sclerosis, leukoencephalopathy and healthy patients containing in total 5404 volumes of 3096 patients. In a second step, we distinguish features between the different diseases in the latent space of this network, and use them to classify new data.

Figure 1: Overview

## 1 Introduction

Multiple sclerosis (MS) is a common neurological disease that is characterized by recurring episodes of inflammation in the central nervous system, during which significant demyelination and axonal loss occur. Typical symptoms include muscle weakness and vision, sensation and coordination disorders. MS white matter lesions have a particular pattern on brain magnetic resonance images (MRI), which is used for diagnosis and follow-up of the disease.[9] Microvascular leukoencephalopathy is a collective term for chronic ischemic white matter brain lesions due to microvascular disease in the context of atherosclerosis.[16] Both conditions lead to signal anomalies on Fluid Attenuated Inversion Recovery (FLAIR) magnetic resonance (MR) images, which can be distinguished by an expert neuroradiologist by differentiating subtle differences in the number, aspect, anatomic location and distribution of the lesions [22], but not using simple criteria such as, for example, a signal threshold. The lesions can look very similar to the untrained eye, as well as in the early stage of both diseases.

We used generative network models to reconstruct and generate FLAIR MRI data. Specifically, we trained Variational Autoencoders (VAE) and Introspective Autoencoders (Intro-VAE) on a mixed database of normal, multiple sclerosis and leukoencephalopathy MRI scans and compared the generated images in terms of their quality. The networks took whole volumes as three-dimensional input. We then decomposed and analyzed the latent space of the networks using LDA in order to check if typical lesion characteristics of the MS pattern can be found encoded in the latent space and if we can use the latent space to distinguish between images of the three conditions.

## 2 Related Work

Neural network development in computer vision happened mostly on non-volumetric 2D images, and similarly, most applications of neural networks to medical image analysis were done in two dimensions.
There are a multitude of papers applying neural network models to MS datasets, and the developed networks are usually not meant to distinguish between lesions caused by different diseases. [7] [20] [21] [18] Generative models have also been sparsely applied to medical imaging of MS. Looking at disease detection, in one paper the authors trained a VAE only on the data of healthy patients, with the idea that abnormal samples, in their case images with MS lesions, would also have an abnormal latent space code[24]. They found their approach successful, but this approach is not disease specific, and a brain image with a stroke lesion or other irregularity would also be detected as abnormal. Three-dimensional brain images, from healthy data but also with tumors and strokes, could be generated with different GAN architectures. [15] A different paper employed a Vector-Quantised Variational Autoencoder (VQ-VAE) on 3-dimensional MRI data. [23] They achieved impressive results in image compression and reconstruction, but didn't do an analysis of the latent space, nor did they show newly sampled brains, only reconstructed ones.

## 3 Theory

Generative models are machine learning models that try to estimate the joint distribution $P_{Y,X}(y,x)$ of the data $X$ and the target variable $Y$. This allows them to generate new data which is coherent with the dataset. There exist multiple neural network architectures that use generative approaches. Two common ones are Variational Autoencoders and Generative Adversarial Networks.[10]

### 3.1 Variational Autoencoder (VAE)

Variational autoencoders (VAEs) consist of an encoding part and a decoding part, which are trained jointly. VAEs share some similarities with basic autoencoders [11] but are a type of generative model and use a vector of latent random variables. Typically, their target distribution is a (multivariate) Gaussian: the encoder compresses the high dimensional input into a mean vector $\mu$ and a standard deviation vector $\sigma$ with lower dimensions than the input. A sample $z$, also called a latent vector, can then be fed to the decoder to restore the data as well as possible. To generate new data one can simply sample new latent vectors from the target distribution and pass them through the decoder network. The objective function of the VAE consists of two terms.[14] The first term tries to minimize the reconstruction error. The second term is a regularizer that tries to match the distribution of the latent variables generated by encoding the data with a chosen distribution over the latent variables:

$L(\phi,\theta;x)=\mathbb{E}_{q_{\phi}(z|x)}[\log(p_{\theta}(x|z))]-KL[q_{\phi}(z|x)||p(z)]$

where $x$: datapoint, $z$: latent variable, $p(z)$: prior distribution of the latent variable, $q_{\phi}(z|x)$: approximate distribution of the latent variable given the data (typically $q_{\phi}(z|x)=\mathcal{N}(z;\mu,\sigma^{2}I)$)
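A minimal sketch of this objective in Keras/TensorFlow (the framework used later in this work) is given below, with the closed-form KL term for a diagonal Gaussian posterior; the use of a voxel-wise binary cross-entropy as reconstruction term and the tensor shapes (batch, 40, 48, 40, 1) are illustrative assumptions.

```python
import tensorflow as tf

def sample_z(z_mean, z_log_var):
    # Reparameterization trick: z = mu + sigma*eps with eps ~ N(0, I)
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5*z_log_var)*eps

def vae_loss(x, x_recon, z_mean, z_log_var):
    # Reconstruction term, approximating -E_q[log p(x|z)] voxel-wise
    recon = tf.reduce_sum(
        tf.keras.losses.binary_crossentropy(x, x_recon), axis=[1, 2, 3])
    # KL[q(z|x) || N(0, I)] in closed form for a diagonal Gaussian posterior
    kl = -0.5*tf.reduce_sum(
        1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
    return tf.reduce_mean(recon + kl)
```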
### 3.2 Generative Adversarial Network (GAN)

Generative Adversarial Networks (GANs), another popular form of generative models, are composed of two networks that are trained jointly: a generator that uses random noise to create new datapoints and a discriminator that tries to tell the generated and real data apart.[12] In contrast to VAEs, GANs try to model the data distribution directly instead of approximating it with a chosen target distribution. The GAN tries to solve the following optimization problem: [4]

$\min_{G}\max_{D}\mathbb{E}_{x\thicksim p_{data}(x)}[\log(D(x))]+\mathbb{E}_{z\thicksim p_{z}(z)}[\log(1-D(G(z)))]$

where $D(x)$: discriminator's estimate of the probability that real datapoint $x$ is real, $G(z)$: generator's output given noise $z$, $D(G(z))$: discriminator's estimate of the probability that a fake instance is real.

To create new data with a GAN, one simply feeds the generator a random vector of noise. Typically GANs produce sharper images compared to VAEs. Indeed, a lack of sharpness in the generated images would be used by the discriminator to identify the generated images.

### 3.3 Introspective Variational Autoencoder

The introspective variational autoencoder (Intro-VAE or IVAE) is a modification of a standard VAE. The idea is to combine the strength of VAEs, namely their nice manifold representations in the latent space, with the strength of GANs, namely the sharpness of their generated images. Compared to a GAN, the sampling diversity should be improved and the training should be more stable.[13] The IVAE has the same internal construction as a VAE, but its encoder simultaneously acts as a discriminator and its decoder can also be thought of as a generator. This is the main distinguishing factor from other VAE and GAN combinations, which typically rely on a separate discriminator. The paper introducing the IVAE states that the blurriness of VAEs originates from the "assignment of high probability to training points"[13] and claims a VAE "cannot ensure that a low probability is assigned to blurry datapoints"[13]. The IVAE has the possibility to change the probability given to blurry points since it can behave like a normal VAE for real data but acts as a GAN in the case of generated data. [13] The training is similar to that of the GAN, but the losses for the encoder and generator also contain the VAE loss:

$L_{E}(x,z)=E(x)+\max(0,m-E(G(z)))+L_{AE}(x)$

$L_{G}(z)=E(G(z))+L_{AE}(x)$

where $E(x)=KL(q_{\phi}(z|x)||p(z))$, with $x$ being a datapoint, $G(z)$: generator's output given noise $z$, $L_{AE}(x)=\mathbb{E}_{q_{\phi}(z|x)}[\log p_{\theta}(x|z)]$

### 3.4 Linear Discriminant Analysis

The Linear Discriminant Analysis (LDA) is a method to find a subspace of a feature space that maximizes the separability between classes. As such, the LDA can be used to show separability of high dimensional data and to predict the class of a datapoint. [19]

## 4 Methods

### 4.1 Databases

Three databases were produced: a normal database, a multiple sclerosis database and a leukoencephalopathy database which differentiated between three degrees of severity.

#### 4.1.1 Database Characteristics

All three databases consisted exclusively of MR FLAIR scans. Together the databases contained 5404 MR scans of 3096 patients. No patient was in multiple databases. 1855 scans from 1855 patients were included in the normal database; patient age: [mean $\pm$ standard deviation] 39 $\pm$ 24 y. 2910 scans from 616 patients were included in the MS database; patient age: 46 $\pm$ 14 y. 639 scans from 625 patients were included in the leukoencephalopathy database (393 scans from 384 patients were of severity 1, 41 scans from 40 patients were of severity 2, 205 scans from 201 patients were of severity 3); patient age: 75 $\pm$ 10 y. Figure 1 shows an overview of the databases.
| | Number of Patients | Number of MR images | Age [mean $\pm$ standard deviation] |
|---|---|---|---|
| Healthy | 1855 | 1855 | 39 $\pm$ 24 |
| MS | 616 | 2910 | 46 $\pm$ 14 |
| L 1 | 384 | 393 | 73 $\pm$ 10 |
| L 2 | 40 | 41 | 76 $\pm$ 9 |
| L 3 | 201 | 205 | 81 $\pm$ 8 |

Table 1: Data distribution over classes. L is short for leukoencephalopathy.

### 4.2 Data pre-processing

#### 4.2.1 Coregistration

In a first step the MR scans were coregistered. The framework used for the coregistration was SimpleElastix [17], a python wrapper for the Elastix framework [2], which uses the itk framework [5]. For skull stripping the fsl BET [3] tool was used. The coregistration parameters were:

* Skull-stripped template and skull-stripped moving image
* Affine transformation to the MNI template [6]
* Advanced Mattes Mutual Information
* Output dimensions: 182$\times$218$\times$182

Figure 2: Mean image over the whole dataset, coregistered with Advanced Mattes Mutual Information, skull-stripped template and skull-stripped moving image

We checked the result of the coregistration with a mean image (Figure 2), which was computed over all coregistered images in the dataset. The sharpness of this mean image was used as a quality measure.

#### 4.2.2 Trimming and down sampling

In a second step the MR scans were trimmed to a resolution of 160$\times$192$\times$160 and downsampled to the final resolution of 40$\times$48$\times$40 by taking the average over 4$\times$4$\times$4 voxels.

#### 4.2.3 Bounding the voxel values

In a last step the upper value of voxels in an image was constrained to the value of the 99.5 percentile and the voxel values were transformed into the interval [0, 1] using the following formula:

$X_{new}=\frac{X_{old}-\min(X)}{\max(X)}$

where $X$ is the voxel array of an image.

### 4.3 Datasets for Training and Testing

All three databases were randomly divided into a training set (90%) and a test set (10%). To prevent cross contamination this was done on the level of patients and not on the level of images. The final training dataset contained the training sets of all three databases. All neural network models were trained on the same final training set and analysed using the same final test set.

### 4.4 Neural Networks

We created four different neural networks using Keras with a TensorFlow backend. Two were VAE models with latent space dimensionality 8 and 512, referred to as VAE-8 and VAE-512. Two were IVAE models with latent space dimensionality 32 and 256, referred to as IVAE-32 and IVAE-256 respectively. All four neural network models shared the same overall structure for the encoder and decoder; they only differed in the size of the latent space. The networks' input and output were whole three-dimensional MR images, not just two-dimensional slices. The exact input and output dimensions were 40$\times$48$\times$40. The encoder used multiple convolutional layers with batch normalization and added pooling to reduce the resolution down to 5$\times$6$\times$5$\times$64. Dense layers with dropout and batch normalization reduced the number of parameters further to the latent dimension. The decoder we used was a structural reversal of the encoder: it consisted of dense layers with dropout and batch normalization that increased the number of parameters from the latent dimension up to 5$\times$6$\times$5$\times$64 again; after that, deconvolutional layers with batch normalization and added upsampling were utilized to reach the output resolution. An NVIDIA Titan RTX with 24 GB of memory was used to train the models. The models were trained for one week each.
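A minimal Keras sketch of the encoder/decoder shapes described above (40$\times$48$\times$40 input, convolutions with pooling down to 5$\times$6$\times$5$\times$64, dense layers to the latent dimension and back); filter counts, kernel sizes, dropout rates and activations are illustrative assumptions, as the exact hyperparameters are not specified here.

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 256                                   # e.g. the IVAE-256 variant

encoder_in = layers.Input(shape=(40, 48, 40, 1))
x = encoder_in
for filters in (16, 32, 64):                       # 40x48x40 -> 5x6x5
    x = layers.Conv3D(filters, 3, padding='same', activation='relu')(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling3D(2)(x)
x = layers.Flatten()(x)                            # 5*6*5*64 = 9600 features
x = layers.Dropout(0.2)(layers.Dense(1024, activation='relu')(x))
z_mean = layers.Dense(latent_dim)(x)
z_log_var = layers.Dense(latent_dim)(x)
encoder = tf.keras.Model(encoder_in, [z_mean, z_log_var])

decoder_in = layers.Input(shape=(latent_dim,))
y = layers.Dense(1024, activation='relu')(decoder_in)
y = layers.Dense(5*6*5*64, activation='relu')(layers.Dropout(0.2)(y))
y = layers.Reshape((5, 6, 5, 64))(y)
for filters in (64, 32, 16):                       # 5x6x5 -> 40x48x40
    y = layers.UpSampling3D(2)(y)
    y = layers.Conv3DTranspose(filters, 3, padding='same', activation='relu')(y)
    y = layers.BatchNormalization()(y)
decoder_out = layers.Conv3D(1, 3, padding='same', activation='sigmoid')(y)
decoder = tf.keras.Model(decoder_in, decoder_out)
```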
## 5 Results

Four neural network models (VAE-8, VAE-512, IVAE-32 and IVAE-256) were trained on the same dataset. We evaluated the performance of the networks in terms of image reconstruction and generation. The created images were compared to images from the database to determine which features the neural networks were able to recreate and which were missing. We analysed the separability of image labels based on the associated latent space.

### 5.1 Reconstruction quality

Figures 3, 4 and 5 compare the image reconstruction quality of the different models. The leftmost column shows a slice of the original image, which is from the test set. The other four columns show the reconstruction of that image from the respective model. Both VAE models produced very blurry results. Only the lateral ventricle, the dark colored part in the middle of the brain, and the overall color scheme were reconstructed with noteworthy accuracy. The difference between white and grey matter was only hinted at and certainly not accurate. The VAE models also did not show any gyri, the folds on the surface of the brain, in the reconstruction. The IVAE models created sharper reconstructions, but they had inaccuracies. The lateral ventricle was in some examples poorly reconstructed. In some places there were holes or dark spots that weren't there in the original image. The difference between white and grey matter was hinted at but was still rather blurry. The same was true for the gyri, which showed up in the reconstructions as a collection of darker pixels at the edges of the brain but lacked structure. For both model types the differences in terms of image quality between the model with few and the model with many latent space dimensions were sometimes noticeable but often minimal.

Figure 3: Axial MRI; on the left the original image, in the other columns the reconstructions of VAE-8, VAE-512, IVAE-32 and IVAE-256.

Figure 4: Coronal MRI; on the left the original image, in the other columns the reconstructions of VAE-8, VAE-512, IVAE-32 and IVAE-256.

Figure 5: Sagittal MRI; on the left the original image, in the other columns the reconstructions of VAE-8, VAE-512, IVAE-32 and IVAE-256.

### 5.2 Image Generation Quality

Figures 6, 7, 8 and 9 compare the image generation quality. The axial, coronal and sagittal slices that are grouped together belong to the same brain. The latent space vectors were drawn from a standard normal distribution, which was the target distribution of the models during training. As in the image reconstruction part, the IVAE models once again returned sharper images than the VAE models. For the VAE models, aside from the changing color scheme and some small changes to the lateral ventricle, most of the image stayed the same over different latent samples. The IVAE models produced a bigger variety of different brains, although their quality wasn't on par with the reconstructed ones. In some examples they created unnatural-looking artifacts. The lateral ventricles were vanishing in some images and looked worse than in the reconstructed examples. Some images also seemed patchy and incoherent. In most cases neither the VAE nor the IVAE models created sharp images of new brains.
Figure 6: Axial, coronal and sagittal MRI slices, sampled by the VAE network with latent space dimension 8.

Figure 7: Axial, coronal and sagittal MRI slices, sampled by the VAE network with latent space dimension 512.

Figure 8: Axial, coronal and sagittal MRI slices, sampled by the IVAE network with latent space dimension 32.

Figure 9: Axial, coronal and sagittal MRI slices, sampled by the IVAE network with latent space dimension 256.

### 5.3 Latent space

Figure 10 shows a linear discriminant analysis (LDA) of the latent space of the different models. To improve readability of the plots, the three leukoencephalopathy groups were aggregated into a single category. We show the first three dimensions of the LDA rather than the full latent space, since most latent space features had no or very little difference between the distributions of the different classes. For all four models the LDA has a dimension that showed a good separability between the MS data and the other two categories. The models with a larger latent space also had a dimension that differentiates between healthy and leukoencephalopathy data. The two models with the smaller latent space still showed some separability between healthy and leukoencephalopathy, but it was not as pronounced.

Figure 10: The first three dimensions of the LDA decomposition done for the different models (VAE with latent dimension 8, VAE with latent dimension 512, IVAE with latent dimension 32, IVAE with latent dimension 256) on latent space vectors of the test set. The models with a larger latent space show a good separability of the MS images from images of the other two classes, while the separability between healthy and leukoencephalopathy images is not so clean.

Table 2 shows the resulting statistics from an LDA done on the latent space of the IVAE-256 model. The LDA was fitted on the training set and then used to predict the classes of the test set. The high precision and recall for detecting MS stood out especially, but the precision and recall for the healthy data were still good. Only the three leukoencephalopathy classes were not as often classified correctly.

| | MS | Leuk 1 | Leuk 2 | Leuk 3 | Healthy |
|---|---|---|---|---|---|
| FP | 24 | 29 | 3 | 13 | 45 |
| FN | 35 | 18 | 3 | 10 | 48 |
| TP | 285 | 22 | 1 | 12 | 138 |
| TN | 228 | 503 | 565 | 537 | 341 |

| | MS | Leuk 1 | Leuk 2 | Leuk 3 | Healthy |
|---|---|---|---|---|---|
| Precision | 0.92 | 0.43 | 0.25 | 0.48 | 0.75 |
| Recall | 0.89 | 0.55 | 0.25 | 0.55 | 0.74 |

Table 2: LDA used on the latent space created by the IVAE model with latent dimension 256 to classify the data in the test set.

Figure 11: Distribution of some metadata attributes of the dataset

We found that the classes in the dataset correlated with metadata features that can influence the look of the image. Figure 11 shows that there is clearly a bias in the dataset. Pixel bandwidth, repetition time and echo time indicate that the MS images were taken with different parameters than images from the other two classes. Both repetition time and echo time are parameters controlling the contrast of the MR image. [8] The pixel bandwidth has an influence on the appearance of the image as well. [1]
Figure 12: Sampled images (left, at values -1.25, 0.00 and 1.25) and the corresponding latent space feature dimension (right, dimension 175), taken from the IVAE model with latent dimension 256. All the feature values were kept the same; only the value of the 175th dimension was changed.

Figure 13: Sampled images (left, at values -1.00, 0.25 and 1.00) and the corresponding latent space feature dimension (right, dimension 76), taken from the IVAE model with latent dimension 256. All the feature values were kept the same; only the value of the 76th dimension was changed.

We tried to find out what effect changing the values of certain latent space dimensions had on the image. Since the IVAE-256 has a latent space of size 256, we limited ourselves to latent dimensions that showed different distributions for the different classes. Figures 12 and 13 give some insight into what the latent space features change in the image. All the other latent dimensions were set to 0. The first row shows that feature number 175 had an influence on the shape of the lateral ventricle. The second row shows that feature number 76 had an influence on the color scheme of the brain.

## 6 Conclusion

### 6.1 Overview

We trained and compared VAE and IVAE models with different latent space sizes on a dataset with multiple disease categories. We found that the IVAEs generated more detailed images than the VAEs, independently of the latent space size. For the VAEs, a higher latent space dimensionality seemed to have no beneficial impact on the image quality, and the generated images always looked blurry. The IVAEs seemed to profit marginally from having a larger latent space, resulting in fewer artefacts in the generated images. Across all models there was a noticeable quality difference between reconstructed and sampled images: for the VAEs the sampled images were less diverse than the reconstructed ones, and for the IVAEs there was a noticeable loss of detail in the sampled images.

Examining the latent space of the neural networks revealed that all four models were able to differentiate between MS and the other categories. For the IVAE-256 model we achieved a precision of 92% and a recall of 89% for detecting MS. To determine whether our models were really picking up on disease-specific details of the image, we searched for biases in the different datasets within the training database that could lead to differences in the images. We found such biases, but it was not possible to assert with certainty whether they had an impact on the classification result.

## References

* [1] Bandwidth and image quality. https://mrimaster.com/technique%20bandwidth.html. [accessed 21 March 2020].
* [2] elastix. http://elastix.isi.uu.nl/. [accessed 16 March 2020].
* [3] FMRIB Software Library v6.0. https://fsl.fmrib.ox.ac.uk/fsl/fslwiki. [accessed 16 March 2020].
* [4] GAN loss functions. https://developers.google.com/machine-learning/gan/loss. [accessed 23 September 2020].
* [5] Insight Toolkit (ITK). https://itk.org/. [accessed 16 March 2020].
* [6] MNI average brain (305 MRI) stereotaxic registration model. http://nist.mni.mcgill.ca/?p=957. [accessed 17 November 2020].
* [7] Shahab Aslani, Michael Dayan, Loredana Storelli, Massimo Filippi, Vittorio Murino, Maria A Rocca, and Diego Sona. Multi-branch convolutional neural network for multiple sclerosis lesion segmentation. arXiv:1811.02942, 2018.
* [8] Allen D. Elster. I know long TR/TE gives T2-weighting and short TR/TE gives T1-weighting, but I don't understand why. Can you explain? http://mriquestions.com/image-contrast-trte.html. [accessed 21 March 2020].
* [9] Frederik Barkhof, Robin Smithuis, and Marieke Hazewinkel. Multiple sclerosis. https://radiologyassistant.nl/neuroradiology/multiple-sclerosis. [accessed 25 March 2020].
* [10] Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv:1701.00160, 2016.
* [11] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
* [12] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv:1406.2661, 2014.
* [13] Huaibo Huang, Zhihang Li, Ran He, Zhenan Sun, and Tieniu Tan. IntroVAE: Introspective variational autoencoders for photographic image synthesis. arXiv:1807.06358, 2018.
* [14] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.
* [15] Gihyun Kwon, Chihye Han, and Dae-shik Kim. Generation of 3D brain MRI using auto-encoding generative adversarial networks. arXiv:1908.02498, 2019.
* [16] G. Lyon, A. Fattal-Valevski, and E. H. Kolodny. Leukodystrophies: clinical and genetic aspects. 2006.
* [17] Kasper Marstal. SimpleElastix. https://simpleelastix.github.io/. [accessed 16 March 2020].
* [18] Richard McKinley, Lorenz Grunder, Rik Wepfer, Fabian Aschwanden, Tim Fischer, Christoph Friedli, Raphaela Muri, Christian Rummel, Rajeev Verma, Christian Weisstanner, Mauricio Reyes, Anke Salmen, Andrew Chan, Roland Wiest, and Franca Wagner. Automatic detection of lesion load change in multiple sclerosis using convolutional neural networks with segmentation confidence. arXiv:1904.03041, 2019.
* [19] Sebastian Raschka. Linear discriminant analysis – bit by bit. https://sebastianraschka.com/Articles/2014_python_lda.html. [accessed 12 March 2020].
* [20] Snehashis Roy, John A. Butman, Daniel S. Reich, Peter A. Calabresi, and Dzung L. Pham. Multiple sclerosis lesion segmentation from brain MRI via fully convolutional neural networks. arXiv:1803.09172, 2018.
* [21] Mostafa Salem, Sergi Valverde, Mariano Cabezas, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, and Xavier Lladó. Multiple sclerosis lesion synthesis in MRI using an encoder-decoder U-net. arXiv:1901.05733, 2019.
* [22] Alan J. Thompson, Brian G. Weinshenker, and Jerry S. Wolinsky. Diagnostic criteria for multiple sclerosis: 2005 revisions to the "McDonald criteria". https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.604.2677.
* [23] Petru-Daniel Tudosiu, Thomas Varsavsky, Richard Shaw, Mark Graham, Parashkev Nachev, Sebastien Ourselin, Carole H. Sudre, and M. Jorge Cardoso. Neuromorphologicaly-preserving volumetric data encoding using VQ-VAE. arXiv:2002.05692, 2020.
* [24] Aleksei Vasilev, Vladimir Golkov, Marc Meissner, Ilona Lipp, Eleonora Sgarlata, Valentina Tomassini, Derek K. Jones, and Daniel Cremers. q-space novelty detection with variational autoencoders. arXiv:1806.02997, 2018.
# Learning from pandemics: using extraordinary events can improve disease now-casting models

Sara Mesquita (Social Physics and Complexity Lab - SPAC, LIP, Avenida Prof. Gama Pinto, 1600-078 Lisboa, Portugal)
Cláudio Haupt Vieira (Nova School of Business and Economics, Rua da Holanda, 2775-405 Carcavelos, Portugal)
Lília Perfeito (Social Physics and Complexity Lab - SPAC, LIP, Avenida Prof. Gama Pinto, 1600-078 Lisboa, Portugal)
Joana Gonçalves-Sá (Social Physics and Complexity Lab - SPAC, LIP, Avenida Prof. Gama Pinto, 1600-078 Lisboa, Portugal; Physics Department, Instituto Superior Técnico, Avenida Rovisco Pais, 1049-001, Lisboa, Portugal; Nova School of Business and Economics, Rua da Holanda, 2775-405 Carcavelos, Portugal)

###### Abstract

Online searches have been used to study different health-related behaviours, including monitoring disease outbreaks. An obvious caveat is that several reasons can motivate individuals to seek online information, and models that are blind to people's motivations are of limited use and can even mislead. This is particularly true during extraordinary public health crises, such as the ongoing pandemic, when fear, curiosity, and many other reasons can lead individuals to search for health-related information, masking the disease-driven searches. However, health crises can also offer an opportunity to disentangle different drivers and learn about human behavior. Here, we focus on the two pandemics of the 21st century (the 2009 H1N1 flu and Covid-19) and propose a methodology to discriminate between search patterns linked to general information seeking (media driven) and search patterns possibly more associated with actual infection (disease driven). We show that by learning from such pandemic periods, with high anxiety and media hype, it is possible to select online searches and improve model performance in both pandemic and seasonal settings. Moreover, and despite the common claim that more data is always better, our results indicate that a lower volume of the right data can be better than large volumes of apparently similar data, especially in the long run. Our work provides a general framework that can be applied beyond specific events and diseases, and argues that algorithms can be improved simply by using less (better) data. This has important consequences, for example, for solving the accuracy-explainability trade-off in machine learning.

###### keywords:

Now-casting, Google search trends, Influenza, Covid-19, Human behavior

## Introduction

Infectious diseases pose great health risks to human populations worldwide. To mitigate these risks, public health institutions have set up surveillance systems that attempt to rapidly and accurately detect disease outbreaks. These systems typically include sentinel doctors and testing labs, and enable a timely response which can limit and even stop outbreaks. However, even when in place, detection and mitigation mechanisms can fail, leading to epidemics and, more rarely, pandemics, as we are currently experiencing. In fact, disease surveillance mechanisms that rely only on highly trained personnel are typically expensive, limited, and slow. It has been extensively argued that these should be complemented with "Digital Era" tools, such as online information, mobility patterns, or digital contact-tracing [1, 2, 3]. Online behaviours, such as searches on Google, have proven to be very relevant tools, as health information seeking is a prevalent habit of online users [4].
This methodology has been applied to follow other epidemics, such as Dengue [5, 6, 7], Avian Influenza [8], and Zika surveillance [9]. In the case of Influenza, a very common infectious disease, the potential of online-based surveillance methods gained large support with the launch of Google Flu Trends (GFT), in 2008 [10]. GFT attempted to predict the timing and magnitude of influenza activity by aggregating flu-related search trends and, contrary to traditional surveillance methods, provided reports in near real-time [11], without the need for data on clinical visits and lab reports. More recently, many others have found strong evidence that the collective search activity of flu-infected individuals seeking health information online provides a representative signal of flu activity [12, 13, 14, 15, 16]. However, flu infection is not the sole (and perhaps not even the strongest) motivation for individuals to seek flu-related information online [17]. This is particularly true during extraordinary times, such as pandemics, when it is reasonable to expect individuals to have various degrees of interest, ranging from curiosity to fear, to actual disease [18]. In fact, the GFT model missed the first wave of the 2009 flu pandemic and overestimated the magnitude of the severe 2013 seasonal flu outbreak in the USA [17, 19]. This led many authors to suggest that high media activity can produce abnormal Google search trends and possibly lead to estimation errors [20, 17, 21, 22, 23, 24]. This "media effect" was also observed by others studying Zika [25, 26, 27], and contributed to the disenchantment with the potential of such tools, particularly during such "extraordinary times". However, if we could decouple searches mostly driven by media, anxiety, or curiosity from the ones related to actual disease, we could not only improve disease monitoring, but also deepen our understanding of online human behavior. In the case of Google search trends, identifying which terms are more correlated with media exposure and reducing their influence in the model is crucial to correct past errors.

In this paper, we propose that the characteristics that make pandemics unique and hard to now-cast, such as media hype, can also be used as opportunities, for two main reasons: 1) as pandemics tend to exacerbate behaviors, the noise (media) is of the same order of magnitude as the signal (cases), making it more visible and allowing us to discriminate between the two; and 2) information seeking becomes less common as the pandemic progresses [28, 18], and these different dynamics can be used when selecting the search terms. In fact, instead of ignoring pandemic periods, studying what happens during the worst possible moment can help us understand which search terms are more associated with the disease and which were prompted by media exposure. This solution might avoid over-fitting and enable the predictive model to be more robust over time, especially during seasonal events. Therefore, we focus on the only two WHO-declared pandemics of the XXI century and aim to learn from pandemics to now-cast seasonal epidemics (or secondary waves of the same pandemic), improving current models by incorporating insights from information-seeking behavior.

The first pandemic of the XXI century was caused by an Influenza A(H1N1)09pdm strain (pH1N1), which emerged in Mexico in February 2009 [29]. By June 2009, pH1N1 had spread globally, with around 30 000 confirmed cases in 74 countries.
In most countries pH1N1 displayed a bi-phasic activity: a spring-summer wave and a fall-winter wave [30, 31]. The fall-winter wave was overall more severe than the spring-summer wave, as it coincided with the common flu season (in the Northern Hemisphere), which typically provides optimal conditions for flu transmission [32]. The pandemic was officially declared to be over in August 2010, and a total of 18 449 laboratory-confirmed pH1N1-attributable deaths were counted (WHO, 2009). This number was later revised, and pH1N1-associated mortality is now believed to have been 15 times higher than the original official number [33].

The second pandemic of this century was caused by the SARS-CoV-2 virus, first identified on the last day of 2019 in the Chinese city of Wuhan. To date, Covid-19 has infected more than 78 million people and killed more than 1.7 million people worldwide. Both Covid-19 and influenza viruses cause respiratory diseases with manifestations ranging from asymptomatic or mild to severe disease and death. They share a range of symptoms and trigger similar public health measures due to common transmission mechanisms. Both pandemics have led to a great surge in media reports and public attention across many platforms, from traditional to online social media. However, there are several differences between the two pandemics: there is still a lot of uncertainty and lack of knowledge surrounding the SARS-CoV-2 virus, including its lethality (although it is certain to be higher than that of the flu for older age-groups), whether it displays seasonal behaviour, its transmission frequency and patterns, whether infection confers lasting immunity, or what its long-term health effects are, respiratory or not [34, 35, 36, 37, 38]. Moreover, the Covid-19 pandemic led to unique public health measures and what might be considered the largest lockdown in history, with authorities implementing several preventive measures, from social distancing to isolating entire countries. These restrictions have been instrumental in reducing the impact of the pandemic, but most decision-makers acknowledged the need to loosen the confinement measures. In the interest of economic and social needs, several countries re-opened schools and businesses, and many experienced surges in cases and deaths [39], often referred to as second and even third waves. At this point, as vaccines start to be distributed mostly in developed countries, all tools that can help us identify outbreaks are of utmost importance, and different countries are deploying different measures such as conditional movement and contact-tracing apps. For all these reasons, improving fast, online surveillance is even more crucial now than it was in 2009, and there are already several studies on using online data to explain and forecast Covid-19 dynamics [40, 41, 42, 43, 44, 45]. However, and despite its potential, separating media hype from reporting of actual disease cases (be it on Google, Facebook, or any other platform), and understanding their impact on collective attention, has been considered a huge challenge. One of the main reasons is that these patterns are intertwined with the actual spread of a disease within a population. Therefore, we learn from the 2009 flu pandemic and propose a system to improve the signal-to-noise ratio of online searches and now-cast the current Covid-19 pandemic.
The 2009 influenza offers a great case study, as it was extensively researched: precise signals of pandemic flu infections were obtained through large-scale laboratory confirmations [46]; several studies analyzed the media's behaviour during the pandemic [47, 48, 49], including the collection of news pieces and news counts; and, as the pandemic emerged at a period of widespread Internet usage [50], several online datasets are available (including the collective behaviour of millions of users through their search trends on Google). Building on these datasets and adding insights from human behaviour, we apply our framework to the current Covid-19 pandemic and provide a robust and possibly generalizable system.

## Results

### Dynamics of media reports and online searches do not match disease cases

Improving the signal (disease) to noise ratio is fundamental in disease surveillance. As extraordinary events, such as pandemics, tend to become the dominant story nearly everywhere, fear and curiosity can increase, and so do searches for information. First, we asked whether there is a correspondence between the number of cases (for both the 2009 flu and Covid-19), media reports, and searches on Google for disease-related terms (flu and Covid-19, respectively). We focused on the US in the case of the 2009 flu and on Spain during the Covid-19 pandemic. These are countries that had a large number of cases, good data availability and, in the case of Spain, already a strong second Covid-19 wave, as detailed in the Methods.

Figure 1 shows the number of confirmed infections, news mentions and GT searches in the United States for the 2009 pandemic (a) and in Spain for the current one (b). Since news now travel faster than pathogenic agents, news coverage of the 2009 flu pandemic (figure 1a) peaked in the last week of April, while the first peak in cases happened later, at the end of June. More relevant is that by the time H1N1 infections reached their highest peak in the US (in October/November, during the regular flu season), the frequency of online searches for "flu" and news mentions had significantly reduced. In the case of the Covid-19 pandemic (figure 1b), the early news mentions began in late 2019, when the disease was identified in China, but the first cases in Spain were only identified in February 2020 (for a similar analysis of the US case see the supplementary materials). As observed before, there was a disconnect between the intensity of the disease and both its visibility in the media and the volume of Google searches [17, 19], raising the important question of whether we can discriminate between different drivers of online searches.

Figure 1: Flu and Covid-19 cases during the 2009 and 2020 pandemics in the US (a) and Spain (b), respectively. a - Normalized weekly cases of flu (orange), media mentions (purple), and Google Trends searches for the term "flu" (pink) in the United States of America from March 2009 to March 2010. b - Normalized weekly cases of Covid-19 (orange), media mentions (purple), and Google Trends searches for the term "Covid-19" (pink) between February and November 2020. All datasets are normalized to their highest value in the period. We can see a quick increase in media activity in both situations that precedes the rise in cases of infection.
In both panels, searches for the terms 'flu' or 'Covid-19' display a pattern more similar to the media activity trend (Pearson correlation between the search term and media of 0.85 for the flu pandemic and 0.44 for the Covid-19 pandemic, compared to 0.27 and -0.03 between the search term and cases of infection, respectively).

### Online searches have different patterns

Given that the searches for "flu" and "Covid-19" do not closely follow the variation in the number of confirmed cases, we asked if we could identify particular search terms with a higher correlation with the disease progression. We started by selecting a large number of search terms related to each disease (see supplementary materials for the full list), all of which could a priori be considered useful for now-casting. Using hierarchical clustering, we identified three distinct clusters in both the 2009 flu and Covid-19 (figures 2a and 2d). Figures 2b and 2e show the centroids of each cluster, revealing the existence of different dynamics.

In the case of the flu in the US, one cluster has a strong peak in the second half of 2009, another has the strongest (almost unique) peak in the first half, and a third cluster has much less clearly defined peaks (figure 2b). The first cluster (orange) shows a strong correlation with the number of pH1N1 confirmed cases (r = 0.78, $p=4\times 10^{-16}$) and a lower correlation with media (r = 0.60, $p=2\times 10^{-8}$), while the second cluster (purple) has the opposite trend (figure 2c; r = 0.16, $p=0.2$ with pH1N1 cases and r = 0.83, $p=3\times 10^{-20}$ with media). The third has an intermediate correlation with the flu cases and a poor one with the media reports. As an additional test, we asked whether there was evidence that cases or media preceded any of the clusters. We performed a Granger causality test and show that media precedes cluster 2 but not cluster 1 (supplementary materials). Neither cases nor media showed significant results for clusters 1 or 3.

The grouping of the search terms is not intuitive from their meaning. Interestingly, there is no clear pattern in the search terms that could have indicated that some would be more correlated with cases or media attention. For example, symptoms such as 'fever' or 'cough' appear in cluster 3, together with 'Guillain-Barré syndrome' and 'disinfectant', while cluster 1 contains 'vaccine' and 'treatment' along with the strain of the virus and 'hand sanitizer'.

In the case of Covid-19, the clusters are not as well defined, as shown by the smaller relative length of the internal branches of the clustering dendrogram (figure 2d). This is likely due to a) the smaller time-frame considered (roughly half of that of H1N1; figure 1), b) the lower search volume, explained by the much smaller population of Spain when compared to the US, and c) the real-time nature of the analysis. Still, we could identify three clear clusters and a very similar pattern (figure 2e): the first cluster (again orange) shows two broad peaks, the second larger than the first. The second cluster (purple) shows a clear first plateau, between March and May 2020, and the third cluster (green) a much sharper peak, encompassing a little over one month.
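The per-cluster analysis above (Ward clustering of the search-term series, followed by Pearson correlations of each centroid with cases and media) uses standard tools. Below is a minimal sketch; `series` (an array of re-scaled weekly Google Trends series, one row per term), `cases` and `media` are assumed, illustrative names.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import pearsonr

# Sketch of the clustering and diagnostics: Ward linkage on pairwise
# Euclidean distances, then the correlation of each cluster centroid
# with the weekly cases and media series.
def cluster_and_diagnose(series, cases, media, n_clusters=3):
    Z = linkage(series, method="ward", metric="euclidean")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    diagnostics = {}
    for c in range(1, n_clusters + 1):
        centroid = series[labels == c].mean(axis=0)
        diagnostics[c] = {"cases": pearsonr(centroid, cases),  # (r, p)
                          "media": pearsonr(centroid, media)}
    return labels, diagnostics
```

The additional precedence check can be performed with statsmodels' `grangercausalitytests` on the stacked (centroid, media) series; the lag order used in our analysis is reported in the supplementary materials.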
When we repeated the correlation analysis, we again identified a cluster (C1, orange) that strongly correlates with the number of cases (r = 0.71, $p=8\times 10^{-6}$) but less with the media (r = 0.52, $p=0.003$), and a cluster (C2, purple) with the opposite pattern (a correlation with cases of r = 0.13, $p=0.45$ and with media of r = 0.71, $p=2\times 10^{-6}$) (figure 2f). Cluster 3 (green) correlates poorly with both the number of confirmed cases and media attention. Thus, and despite the strong entanglement and time-coincidence between the cases and the media, particularly in the case of the current pandemic, these results show that 1) not all pandemic-related search trends show the same patterns, and 2) some of the patterns may be driven by media attention whereas others are driven by the number of cases.

Figure 2: Different patterns of searches during pandemics. Top panels refer to the 2009 flu pandemic in the USA, bottom panels refer to the Covid-19 pandemic in Spain. a - Dendrogram summarizing the hierarchical clustering of Google Trends search terms for the flu pandemic in the US. Three clusters are very salient. b - Centroid and standard deviation over time for each cluster. The cluster colors correspond to the clusters in a. c - Pearson correlation between the cluster centroid and either the flu cases (top) or the media mentions (bottom). * denotes 0.01 < p-value < 0.05, ** denotes p-value < 0.001, and ns a non-significant p-value. d - Dendrogram summarizing the hierarchical clustering of Google Trends search terms for Covid-19 in Spain. e - Centroid and standard deviation over time for each cluster. The cluster colors correspond to the clusters in d. f - Pearson correlation between the cluster centroid and either the Covid-19 cases (top) or the media mentions (bottom).

### Pandemic search terms can be used to improve seasonal forecasting

That very similar search terms display such different time patterns is interesting in itself, but only useful if they have predictive power. Therefore, we asked whether the search terms identified as correlating with the number of confirmed cases (during a pandemic) could be used to forecast seasonal epidemics. The rationale is that if we can reduce the noise caused by the media coverage and identify the terms that are more resilient to outside factors, we can make seasonal forecasting more robust. Therefore, our goal was not to devise the best possible model, but rather to test whether particular search terms perform better than others. To do this, we took advantage of extensively available seasonal flu data and chose two simple models: a linear regression and the non-linear random forest (details in the Methods). We then tested the predictive power of the models when using all search terms from figure 2a (which we call "All data") or just the terms from the identified clusters in figure 2b. For both models and all dataset variations, we used three years of data to predict the fourth and assessed the performance of the model only on the prediction season (see Methods for details). Figure 3 and table 1 show the performance of the two models (figures 3a and 3b), measured by the root-mean-square error (RMSE) and the coefficient of determination, R2. In general, both models perform similarly, with a mean R2 above 0.7.
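Concretely, this train-on-three-seasons, test-on-the-fourth scheme can be sketched as follows; `X_seasons` and `y_seasons` are assumed, illustrative names for the lists of per-season (already standardized) search-term matrices and case series.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Sketch of the sliding-window evaluation: train on three consecutive
# seasons, test on the next one, then slide forward by one season.
def sliding_window_eval(X_seasons, y_seasons, model_factory=LinearRegression):
    scores = []
    for t in range(3, len(X_seasons)):
        X_train = np.vstack(X_seasons[t - 3:t])
        y_train = np.concatenate(y_seasons[t - 3:t])
        model = model_factory().fit(X_train, y_train)
        y_hat = model.predict(X_seasons[t])
        rmse = float(np.sqrt(mean_squared_error(y_seasons[t], y_hat)))
        scores.append((r2_score(y_seasons[t], y_hat), rmse))
    return scores  # one (R2, RMSE) pair per predicted season
```

Passing, for example, `lambda: RandomForestRegressor(n_estimators=100)` as `model_factory` gives the random-forest variant, and restricting the columns of `X_seasons` to the cluster 1 terms reproduces the comparison in figure 3.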
In both cases, using all data (pink line) is not better than just using the terms more correlated with the number of cases during the pandemic (cluster 1, orange line), and on average cluster 1 performs better than all terms in both the linear regression (R2 = 0.81 for cluster 1 vs R2 = 0.71 for all data) and the random forest (R2 = 0.86 for cluster 1 vs R2 = 0.81 for all data). It can also be observed that cluster 1 terms (orange) tend to have a more consistent performance, shown by the smaller standard deviation: $\hat{\sigma}$ = 0.08 (linear regression) and $\hat{\sigma}$ = 0.06 (random forest) for cluster 1, vs $\hat{\sigma}$ = 0.163 (linear regression) and $\hat{\sigma}$ = 0.104 (random forest) when considering all data. It is important to note that some of the features from clusters 2 and 3 might be better local predictors, which can explain the performance of the models when using all search terms, but overall, using only the pre-identified terms of cluster 1 is better. This indicates that 1) insights from pandemics can be used in seasonal forecasting models, and 2) refining the search-term selection, by selecting the terms less sensitive to media hype, might reduce over-fitting and improve model robustness.

Figure 3: Performance comparison of model predictions for the flu pandemic. a shows the mean squared error for the linear regression and b for the random forest model. Both use Google search terms from figure 2a as independent variables to predict the seasonal flu cases between 2014 and 2019. Each dot represents the squared difference between the prediction and the empirical data, averaged over one season. Cluster 1 (orange) shows better results in almost all seasons and has a smaller standard deviation (shaded area) when compared to cluster 2 (purple) or all data (pink). In both cases, three years were used as training and the models were tested on the following year, in a sliding-window process.

### Improving a model for Covid-19

We then asked whether these results could be used in the current pandemic. This is a more challenging setting for several reasons: first, the data is arriving in close to real-time and with varying quality (the number of tests, the criteria for testing, and the reporting formats have been changing with time, even for the same country); second, there is no indication that Covid-19 might become a seasonal disease, and the periodicity of new outbreaks, if any, remains unknown; third, reporting is now happening on many different online platforms, at an even faster pace than in 2009; and fourth, more importantly, we do not have a large number of past seasons to train our models on. Still, we employed a similar approach to test whether the rationale of the flu pandemic could be applied to Covid-19. The US pandemic situation has been particular, with different states having widely different infection rates and risk levels [51]. Also, at the time of this study, there were no states with clear strong second waves or evidence of seasonality. Therefore, we focused on Spain, one of the first countries to have a clear and strong second wave, and trained the models on the first (February-June) wave to try to now-cast the second (June-November) wave. Still, data for the US can be found in the supplementary materials, with results very consistent with what we observed in the case of Spain.
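The wave split itself reduces to a few lines. Below is a minimal sketch, assuming aligned weekly arrays `X` (search-term volumes) and `y` (Covid-19 cases) covering February to November; the fixed random-forest hyperparameters are placeholders for the cross-validated values described in the Methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch of the Covid-19 setup: split the series at the inter-wave
# trough (the week with the fewest cases, June in Spain), train on
# the first wave and now-cast the second.
def first_to_second_wave(X, y):
    split = int(np.argmin(y))            # in practice, search only the
                                         # window between the two waves
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[:split], y[:split])      # first wave only
    y_hat = model.predict(X[split:])     # now-cast of the second wave
    weekly_sq_err = (y_hat - y[split:]) ** 2  # the dots in figure 4
    return y_hat, weekly_sq_err
```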
Figure 4 shows that, again, using only the features from cluster 1 (orange) offers a much better prediction than using the search terms from clusters 2 (purple) or 3 (supplementary materials), despite the fact that cluster 1 has a much smaller number of terms. The result is particularly striking in the case of the random forest (figure 4b, compare pink and orange). These results further support the idea that by selecting online data, using a semi-manual approach, it is possible to improve disease now-casting.

Figure 4: Performance comparison of model predictions for Covid-19. a shows the mean squared error for the linear regression and b for the random forest model. Both use Google search terms from figure 2d as independent variables to predict the second wave (June to November) of Covid-19 in Spain. Each dot shows the squared difference between the prediction and the empirical data in each week. Cluster 1 (orange) presents better results in almost all weeks and has a smaller standard deviation (shaded area) when compared to cluster 2 (purple) or all data (pink). In both cases, the first wave was used to train the model.

Table 1: Model results for both pandemics (R2 / RMSE).

|           | Flu: L. Regression | Flu: Random Forest | Covid-19: L. Regression | Covid-19: Random Forest |
|-----------|--------------------|--------------------|-------------------------|-------------------------|
| Cluster 1 | 0.83 / 0.17        | 0.86 / 0.14        | 0.96 / 0.04             | 0.84 / 0.16             |
| Cluster 2 | 0.76 / 0.25        | 0.82 / 0.18        | 0.70 / 0.30             | 0.20 / 0.80             |
| All data  | 0.72 / 0.28        | 0.81 / 0.19        | 0.46 / 0.54             | 0.35 / 0.65             |

## Discussion

In the past, the inclusion of online data in surveillance systems has improved disease prediction ability over traditional syndromic surveillance systems, while also showing some very obvious caveats. Online-data-based surveillance systems have many limitations and challenges, including noisy data, demographic bias, privacy issues, and, often, very limited prediction power. Previous approaches have assumed that if a search term is a good predictor of cases in one year, it will be a good predictor in the following years [11, 52], when in fact search terms may be associated with both cases and media hype in a particular year, but soon lose their association with one or the other (especially when media interest fades). Moreover, these approaches often use a single explanatory variable, meaning the model ignores the variability of individual search-query tendencies over time: terms highly correlated with disease cases at a certain moment can be highly correlated with media reports as well, and over time some might lose their association with one or the other. However, and despite the described limitations, there are several successful examples of using online behaviour as a proxy for "real-world" behaviour in disease settings, and it is increasingly clear that such data can offer insights not limited to disease forecasting [53, 54, 55, 56, 16].

Pandemics have been particularly ignored in digital now-casting because they represent (hopefully) rare events when people's behaviour changes, making forecasting even more challenging. A large part of these behavioural changes is driven by the excess media attention: people become curious and possibly afraid, and start looking for more (different?) information. This is in contrast with seasonal outbreaks, where there is not as much relative media attention, there is more common knowledge, and people's online searches might be primarily driven by actual disease.
In general, the notions that online search data is too noisy and that the models used have limited prediction power have led people to try to increase the type and quantity of data, or to build more complex models. However, we argue that this tension, between using the large potential of online data and the so-called "data hubris", can be balanced in the opposite direction, by including behavioural knowledge and human curation to reduce the amount of data required, while keeping the models simple and explainable. In this study, we applied this approach to two pandemics and showed that, contrary to general arguments that "more data trumps smarter algorithms" [57], we can use such extraordinary events to improve seasonal forecasting, and we argue that lowering the volume of data can reduce over-fitting while maintaining the quality of the predictions. This was done by actively discriminating search queries that are very sensitive to media from queries possibly more driven by symptoms.

Our approach combines elements of human curation and blind statistical inference. On the one hand, our initial term list is based on knowledge of the disease. On the other, the clustering algorithm is blind to the actual meaning of the terms. This leads to unusual term pairings, such as the fact that "oseltamivir" (cluster 2), a drug used to treat flu, is separate from "flu treatment" (cluster 1). We can explain this separation by considering that the media is more likely to mention the name of the drug, but that sick people might not remember it. However, a priori, we might not have thought this distinction was important. Finally, the choice of the best cluster is again based on human curation, by looking at the correlations with media and cases, which we postulate are the main drivers behind search queries. In many general now-casting problems, a similar semi-automated approach is probably more fruitful than a fully automated, data-hungry methodology.

This approach can also be particularly useful in countries where data is sparse or suffers from significant bias or delays. Even within Europe, data collection and reporting have been inconsistent, limiting global epidemiological analysis [57]. Methods such as the one we describe here cannot replace the need for strong, centralized data collection systems (through the European, American or other CDCs) but might help to fill existing gaps while surveillance networks are built or reinforced.

In addition to improving now-casting models, finding different search patterns in Google Trends can offer insights into the behaviours of internet users. Specifically, by clustering search trends on a topic, we can ask whether there are different motivations behind them. If there are hypotheses about what those motivations are, they can also be tested by correlating with centroids, as we do here. For example, the search terms from the media-related clusters (clusters 2) could be further analyzed to discriminate which terms are more often found in newspapers versus television, offering insight into the preferred news media. This methodology opens new doorways into connecting online and offline behaviour.

Overall, we add to the ongoing work on using digital tools and online data to improve disease monitoring and propose a new tool to now-cast infectious diseases, combining statistical tools and human curation, that can prove useful in the monitoring of current and future pandemics and epidemics.

## Methods

### Data and Sources

#### Selected countries and time period.
Data for the 2009 pandemic was collected for the USA, from March 2009 to August 2019, as it offered reliable data on a large number of people. This was not possible for Covid-19, as this pandemic is reaching different states at different times, and second or third waves are mostly caused by surges in new states rather than by a nation-wide, simultaneous epidemic. Still, the supplemental text shows that three clusters are observed, one more correlated with cases than the rest. Data for the Covid-19 pandemic was collected for Spain, from January 2nd to November 15th 2020, as it was the country with the highest number of reliable second-wave cases, offering at least one training and one testing period.

#### Google search trends

Data from Google search trends (GT) [58] was extracted for the United States and Spain, for both the flu and Covid-19 pandemics, through the GT API. It provides a normalized number of queries for a given keyword, time and country [11, 59]. Search terms were selected to cover various aspects of pandemic and seasonal flu, and of Covid-19, such as symptoms, antivirals, personal care, institutions and pandemic circumstantial terms. This was done with the help of the "related queries" option that Google Trends provides, returning what people also search for when they search for a specific term. Terms that contained many "zeros" interspersed with high values were indicative of low search volume and were removed. In the end we had 49 flu-related weekly search trends in the United States and 63 Covid-related terms in Spain. Time periods were December 2019 to September 2020 in the case of Spain and September 2009 to September 2019 in the case of the USA, to cover pre-pandemic, pandemic and post-pandemic periods. In the case of the US flu pandemic, search terms were extracted for each season separately, with a season being defined as going from September 1st to October 1st of the following year. GT time series were extracted in September 2020 in the case of Spain, and in July 2020 in the case of the US. Data was binned at a weekly resolution, to match that of reported cases and remove daily variation. Both word lists are reported in the supplemental text.

#### News media

The pandemic flu (United States) media dataset contains the weekly count of both TV news broadcasts and print media pieces that mentioned "flu" or "influenza". It includes the NBC, CBS, CNN, FOX and MSNBC networks, obtained from the Vanderbilt Television News Archive [60], and The New York Times, from the NYT API (https://developer.nytimes.com/). The Covid-19 media dataset, for both the USA and Spain, was obtained through Media Cloud [61], an online open-source platform containing an extensive global news corpus starting in 2011. The query "Covid-19 OR Coronavirus" was used to track media coverage of the pandemic over time. It aggregated articles that contained one keyword, the other, or both. For the case of the US, we searched the collections "United States - National" (#34412234) and "United States - State & Local" (#38379429), which include 271 national and 10,457 local media sources, respectively. For Spain we used the collections "Spain - National" (#34412356), which includes 469 media sources, and "Spain - State & Local" (#38002034), including 390 media sources.

#### Infectious Disease Data

Data of confirmed infections from both pH1N1 and SARS-CoV-2 are publicly available. For the US, pH1N1 cases were extracted from the CDC's National Respiratory and Enteric Virus Surveillance System [62].
In the case of Covid-19 in the US, national and state-level cases were extracted from ECDC's Our World in Data [63] and from the New York Times [64], respectively, in August 2020. In the case of Covid-19 in Spain, data was obtained from the WHO [39].

### Analysis

#### Hierarchical clustering

Google search terms were independently extracted from Google Trends [65]. While all search queries include a 100, not all include a zero (if there were no weeks with less than 1% of the maximum weekly volume), so all series were re-scaled between 0 and 100. These were clustered using hierarchical clustering, computing the pairwise Euclidean distance between words and using Ward's linkage method (an agglomerative algorithm) to construct the dendrograms shown in figure 2. Clustering was performed in Python, using scipy.cluster.hierarchy.dendrogram [66]. The number of clusters was determined through visual inspection of the dendrogram. This task was performed using data from the pandemic period, which for the H1N1 pandemic was between March 2009 and August 2010, and for Covid-19 from December 2019 to September 2020.

#### Modeling and Evaluation

The datasets for seasonal flu were collected similarly to those of the pandemic. They are aggregated by week, and seasons were defined by visual inspection, varying from season to season, over the 9 years of data. Each dataset (cases and search time series) in each season was standardized so that its mean value was 0 and its standard deviation was 1. The model was trained with 3 seasons and tested with the 4th. In the case of Covid-19 in Spain, the data was split around the week with the fewest cases (June). The first wave was then used to train and the second to test.

Linear Regression. In each case, a model of the form

$I_{i}=\beta_{0}+\beta_{1}\times W_{1}+\beta_{2}\times W_{2}+...+\beta_{n}\times W_{n}+\epsilon_{i}$ (1)

was trained, where $I_{i}$ is the number of infections in week $i$, $\beta_{0}$ is the intercept, $\beta_{1}$ to $\beta_{n}$ are the coefficients of each search term and $\epsilon_{i}$ is the error. The coefficients were estimated so as to minimize the sum of the squared errors across all weeks. The regression was implemented in Python using sklearn.linear_model.LinearRegression [67] with default parameters.

Random Forest. For each dataset, a random forest model was trained using sklearn.ensemble.RandomForestRegressor [68], implemented in Python. The hyperparameters (number of estimators, max features and max depth) were selected through cross-validation using GridSearchCV, from [10,20,50,100,200,500,1000], [0.6,0.8,"auto","sqrt"] and [2,4,5,6], respectively.

## Acknowledgments

The authors would like to thank members of the SPAC lab for comments and critical reading of the manuscript. This work was partially funded by FCT grant DSAIPA/AI/0087/2018 to JGS and by PhD fellowships SFRH/BD/139322/2018 and 2020.10157.BD to CHV and SM, respectively.

## Author contributions statement

All authors participated in project conception, data analysis, and paper writing.

## Additional information

## References

* [1] Hay, S. I., George, D. B., Moyes, C. L. & Brownstein, J. S. Big data opportunities for global infectious disease surveillance. _PLoS med_ 10, e1001413 (2013).
* [2] Ferretti, L. _et al._ Quantifying sars-cov-2 transmission suggests epidemic control with digital contact tracing. _Science_ 368 (2020).
* [3] Salathe, M. _et al._ Digital epidemiology. _PLoS Comput Biol_ 8, e1002616 (2012).
* [4] Fox, S.
_Online health search 2006_ (Pew Internet & American Life Project, 2006). * [5] Chan, E. H., Sahai, V., Conrad, C. & Brownstein, J. S. Using web search query data to monitor dengue epidemics: a new model for neglected tropical disease surveillance. _PLoS neglected tropical diseases_ 5, e1206 (2011). * [6] Althouse, B. M., Ng, Y. Y. & Cummings, D. A. Prediction of dengue incidence using search query surveillance. _PLoS Negl Trop Dis_ 5, e1258 (2011). * [7] Husnayain, A., Fuad, A. & Lazuardi, L. Correlation between google trends on dengue fever and national surveillance report in indonesia. _Global Health Action_ 12, 1552652 (2019). * [8] Mollema, L. _et al._ c. _Journal of medical Internet research_ 17, e128 (2015). * [9] Teng, Y. _et al._ Dynamic forecasting of zika epidemics using google trends. _PloS one_ 12, e0165085 (2017). * [10] Google flu trends. https://web.archive.org/web/20121022154915/http://www.google.org/flutrends/about/how.html. Accessed: 2020-12-22. * [11] Ginsberg, J. _et al._ Detecting influenza epidemics using search engine query data. _Nature_ 457, 1012–1014 (2009). * [12] Hickmann, K. S. _et al._ Forecasting the 2013–2014 influenza season using wikipedia. _PLoS Comput Biol_ 11, e1004239 (2015). * [13] Lamb, A., Paul, M. & Dredze, M. Separating fact from fear: Tracking flu infections on twitter. In _Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , 789–795 (2013). * [14] Santillana, M. _et al._ Combining search, social media, and traditional data sources to improve influenza surveillance. _PLoS Comput Biol_ 11, e1004513 (2015). * [15] Sharpe, J. D., Hopkins, R. S., Cook, R. L. & Striley, C. W. Evaluating google, twitter, and wikipedia as tools for influenza surveillance using bayesian change point analysis: a comparative analysis. _JMIR public health and surveillance_ 2, e161 (2016). * [16] Won, M., Marques-Pita, M., Louro, C. & Gonçalves-Sá, J. Early and real-time detection of seasonal influenza onset. _PLoS computational biology_ 13, e1005330 (2017). * [17] Lazer, D., Kennedy, R., King, G. & Vespignani, A. The parable of google flu: traps in big data analysis. _Science_ 343, 1203–1205 (2014). * [18] Towers, S. _et al._ Mass media and the contagion of fear: the case of ebola in america. _PloS one_ 10, e0129179 (2015). * [19] Olson, D. R., Konty, K. J., Paladini, M., Viboud, C. & Simonsen, L. Reassessing google flu trends data for detection of seasonal and pandemic influenza: a comparative epidemiological study at three geographic scales. _PLoS Comput Biol_ 9, e1003256 (2013). * [20] Copeland, P. _et al._ Google disease trends: an update. In _International Society of Neglected Tropical Diseases 2013_ , 3 (2013). * [21] Funk, S. _et al._ Nine challenges in incorporating the dynamics of behaviour in infectious diseases models. _Epidemics_ 10, 21–25 (2015). * [22] Shih, T.-J., Wijaya, R. & Brossard, D. Media coverage of public health epidemics: Linking framing and issue attention cycle toward an integrated theory of print news coverage of epidemics. _Mass Communication & Society_ 11, 141–160 (2008). * [23] Collinson, S. & Heffernan, J. M. Modelling the effects of media during an influenza epidemic. _BMC public health_ 14, 376 (2014). * [24] Collinson, S., Khan, K. & Heffernan, J. M. The effects of media reports on disease spread and important public health measurements. _PloS one_ 10, e0141423 (2015). * [25] Tizzoni, M., Panisson, A., Paolotti, D. & Cattuto, C. 
The impact of news exposure on collective attention in the united states during the 2016 zika epidemic. _PLoS computational biology_ 16, e1007633 (2020). * [26] Dillard, J. P., Li, R. & Yang, C. Fear of zika: Information seeking as cause and consequence. _Health Communication_ 1–11 (2020). * [27] Yang, C., Dillard, J. P. & Li, R. Understanding fear of zika: Personal, interpersonal, and media influences. _Risk Analysis_ 38, 2535–2545 (2018). * [28] Tausczik, Y., Faasse, K., Pennebaker, J. W. & Petrie, K. J. Public anxiety and information seeking following the h1n1 outbreak: blogs, newspaper articles, and wikipedia visits. _Health communication_ 27, 179–185 (2012). * [29] Mena, I. _et al._ Origins of the 2009 h1n1 influenza pandemic in swine in mexico. _Elife_ 5, e16777 (2016). * [30] Brammer, L. _et al._ Surveillance for influenza during the 2009 influenza a (h1n1) pandemic–united states, april 2009–march 2010. _Clinical Infectious Diseases_ 52, S27–S35 (2011). * [31] Devaux, I. _et al._ Initial surveillance of 2009 influenza a (h1n1) pandemic in the european union and european economic area, april–september 2009. _Eurosurveillance_ 15, 19740 (2010). * [32] Shaman, J. & Kohn, M. Absolute humidity modulates influenza survival, transmission, and seasonality. _Proceedings of the National Academy of Sciences_ 106, 3243–3248 (2009). * [33] Dawood, F. S. _et al._ Estimated global mortality associated with the first 12 months of 2009 pandemic influenza a h1n1 virus circulation: a modelling study. _The Lancet infectious diseases_ 12, 687–695 (2012). * [34] Carlson, C. J., Gomez, A. C., Bansal, S. & Ryan, S. J. Misconceptions about weather and seasonality must not misguide covid-19 response. _Nature Communications_ 11, 1–4 (2020). * [35] Greenhalgh, T., Knight, M., Buxton, M., Husain, L. _et al._ Management of post-acute covid-19 in primary care. _bmj_ 370 (2020). * [36] Arunachalam, P. S. _et al._ Systems biological assessment of immunity to mild versus severe covid-19 infection in humans. _Science_ 369, 1210–1220 (2020). * [37] Del Rio, C., Collins, L. F. & Malani, P. Long-term health consequences of covid-19. _Jama_ 324, 1723–1724 (2020). * [38] Kanzawa, M., Spindler, H., Anglemyer, A. & Rutherford, G. W. Will coronavirus disease 2019 become seasonal? _The Journal of infectious diseases_ 222, 719–721 (2020). * [39] WHO coronavirus disease. https://covid19.who.int/. Accessed: 2020-10-01. * [40] Kogan, N. E. _et al._ An early warning approach to monitor covid-19 activity with multiple digital traces in near real-time. _arXiv preprint arXiv:2007.00756_ (2020). * [41] Dewhurst, D. R. _et al._ Divergent modes of online collective attention to the covid-19 pandemic are associated with future caseload variance. _arXiv preprint arXiv:2004.03516_ (2020). * [42] Liu, D. _et al._ A machine learning methodology for real-time forecasting of the 2019-2020 covid-19 outbreak using internet searches, news alerts, and estimates from mechanistic models. _arXiv preprint arXiv:2004.04019_ (2020). * [43] Ayyoubzadeh, S. M., Ayyoubzadeh, S. M., Zahedi, H., Ahmadi, M. & Kalhori, S. R. N. Predicting covid-19 incidence through analysis of google trends data in iran: data mining and deep learning pilot study. _JMIR Public Health and Surveillance_ 6, e18828 (2020). * [44] Lu, T. & Reis, B. Y. Internet search patterns reveal clinical course of disease progression for covid-19 and predict pandemic spread in 32 countries. _medRxiv_ (2020). * [45] Effenberger, M.
_et al._ Association of the covid-19 pandemic with internet search volumes: a google trends™ analysis. _International Journal of Infectious Diseases_ (2020). * [46] Panning, M. _et al._ Detection of influenza a (h1n1) v virus by real-time rt-pcr. _Eurosurveillance_ 14, 19329 (2009). * [47] Duncan, B. How the media reported the first days of the pandemic (h1n1) 2009: results of eu-wide media analysis. _Eurosurveillance_ 14, 19286 (2009). * [48] Klemm, C., Das, E. & Hartmann, T. Swine flu and hype: a systematic review of media dramatization of the h1n1 influenza pandemic. _Journal of Risk Research_ 19, 1–20 (2016). * [49] Reintjes, R. _et al._ “pandemic public health paradox”: time series analysis of the 2009/10 influenza a/h1n1 epidemiology, media attention, risk perception and public reactions in 5 european countries. _PloS one_ 11, e0151258 (2016). * [50] Seybert, H. & Lööf, A. Internet usage in 2010–households and individuals. _Eurostat, data in Focus_ 50–2010 (2010). * [51] Chande, A. _et al._ Real-time, interactive website for us-county-level covid-19 event risk assessment. _Nature Human Behaviour_ 4, 1313–1319 (2020). * [52] Cook, S., Conrad, C., Fowlkes, A. L. & Mohebbi, M. H. Assessing google flu trends performance in the united states during the 2009 influenza virus a (h1n1) pandemic. _PloS one_ 6, e23610 (2011). * [53] Choi, H. & Varian, H. Predicting the present with google trends. _Economic record_ 88, 2–9 (2012). * [54] Moat, H. S., Preis, T., Olivola, C. Y., Liu, C. & Chater, N. Using big data to predict collective behavior in the real world 1. _Behavioral and Brain Sciences_ 37, 92–93 (2014). * [55] Stephens-Davidowitz, S. The cost of racial animus on a black candidate: Evidence using google search data. _Journal of Public Economics_ 118, 26–40 (2014). * [56] Vosen, S. & Schmidt, T. Forecasting private consumption: survey-based indicators vs. google trends. _Journal of forecasting_ 30, 565–578 (2011). * [57] Flaxman, S. _et al._ Estimating the effects of non-pharmaceutical interventions on covid-19 in europe. _Nature_ 584, 257–261 (2020). * [58] Google trends. https://trends.google.com/trends/?geo=US. Accessed: 2020-10-16. * [59] Carneiro, H. A. & Mylonakis, E. Google trends: a web-based tool for real-time surveillance of disease outbreaks. _Clinical infectious diseases_ 49, 1557–1564 (2009). * [60] Sood, G. & Laohaprapanon, S. Vanderbilt TV News Abstracts, DOI: 10.7910/DVN/BP2JXU (2020). * [61] Media cloud. https://mediacloud.org/. Accessed: 2021-01-04. * [62] Flunet. https://www.who.int/influenza/gisrs_laboratory/flunet/en/. Accessed: 2020-06-18. * [63] Our world in data. https://github.com/owid/covid-19-data/tree/master/public/data. Accessed: 2020-08-20. * [64] New york times covid-19 data. https://github.com/nytimes/covid-19-data. Accessed: 2020-08-20. * [65] Stephens-Davidowitz, S. & Varian, H. A hands-on guide to google data. _further details on the construction can be found on the Google Trends page_ (2014). * [66] Scipy clustering. https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html. * [67] LinearRegression. https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html. * [68] RandomForestRegressor. https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html.
# Symmetric-Constrained Irregular Structure Inpainting for Brain MRI Registration with Tumor Pathology

Xiaofeng Liu 1†, Fangxu Xing 1†, Chao Yang 2, C.-C. Jay Kuo 3, Georges El Fakhri 1, Jonghye Woo 1

1: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
2: Facebook Artificial Intelligence, Boston, MA, 02142
3: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, 90007, USA
† These authors contributed equally.

###### Abstract

Deformable registration of magnetic resonance images between patients with brain tumors and healthy subjects has been an important tool to specify tumor geometry through location alignment and facilitate pathological analysis. Since the tumor region does not match any ordinary brain tissue, it has been difficult to deformably register a patient's brain to a normal one. Many patient images are associated with irregularly distributed lesions, resulting in further distortion of normal tissue structures and complicating the registration's similarity measure. In this work, we follow a multi-step context-aware image inpainting framework to generate synthetic tissue intensities in the tumor region. A coarse image-to-image translation is applied to make a rough inference of the missing parts. Then, a feature-level patch-match refinement module is applied to refine the details by modeling the semantic relevance between patch-wise features. A symmetry constraint, reflecting the large degree of anatomical symmetry in the brain, is further proposed to achieve better structure understanding. Deformable registration is applied between the inpainted patient images and normal brains, and the resulting deformation field is eventually used to deform the original patient data for the final alignment. The method was applied to the Multimodal Brain Tumor Segmentation (BraTS) 2018 challenge database and compared against three existing inpainting methods. The proposed method yielded results with increased peak signal-to-noise ratio, structural similarity index, and inception score, and reduced L1 error, leading to successful patient-to-normal brain image registration.

###### Keywords:

Brain tumor registration, image inpainting, irregular structure, symmetry, contextual learning, deep learning

## 1 Introduction

In brain imaging studies, magnetic resonance imaging (MRI), as a noninvasive tool, is widely used to provide information on the brain's clinical structure, tissue anatomy, and functional behaviors [28, 4]. When multiple datasets from a population of interest are involved, to establish a comparable framework in which similarity and variability in the tissue structure can be evaluated, deformable image registration between subjects is often used to achieve inter-subject alignment [37]. Brain tumor is a common type of disorder diagnosed using medical imaging [35].
However, tumors in MRI tend to cause difficulties for deformable registration: 1) tumor regions have no matching structure in a normal brain, nullifying the basic mathematical assumptions made by regular image registration methods and degrading their performance; 2) expansion of a tumor region often alters its peripheral structure, causing the whole image to become asymmetric, with distorted hemispheres or ventricles; and 3) tumors are sometimes irregularly scattered around the whole brain, causing inconsistencies when matching multiple tumor spots [10].

There has been a great deal of work tackling patient-to-normal tissue registration in a traditional way [38, 19]. In particular, for small tumor cases, Dawant et al. [9] introduced a tumor seed, and Cuadra et al. [8] extended it with a tumor growth model to drive the registration process. For larger tumors, Mohamed et al. [27] used a biomechanical model of tumor-induced deformation to generate a similar tumor image from the normal image. Since then, many methods have focused on tumor growth simulations to facilitate symmetry computation [14, 40]. More traditional methods are summarized in [37].

In this work, we propose a new image inpainting method, i.e., a restorative method that treats the tumor as defective holes in an ideal image and reconstructs them with synthetic normal tissue. The synthesized brain can be processed with regular deformable registration, and the tumor region is eventually re-applied after being mapped to the new space. Traditional inpainting methods are either diffusion-based or patch-based with low-level features [3, 2, 5, 7, 12, 32]. These prior approaches usually perform poorly at generating semantically meaningful content and at filling in large missing regions [21]. Recently developed learning-based inpainting methods usually use generative adversarial networks (GANs) to learn image semantics and infer contents in the missing region [15, 36, 31, 39]. In the brain tumor application, difficulties 2) and 3) need to be addressed specifically. Starting from the initial context-encoder deep learning method [31], Liu et al. [20] updated the mask and convolution weights in each layer to handle irregular holes. However, it is challenging for these one-step inpainting solutions to address large holes with complicated texture. Song et al. [36] proposed a multi-step framework that refines the results with patch-swap, but its coarse inpainting module is not suited to multiple irregular holes. Moreover, the above methods are designed for general images and do not incorporate priors such as brain anatomy and physiology.

In this work, we propose a novel multi-step inpainting method capable of making fine-grained predictions within irregular holes via feature patch-wise conditional refinement. It also incorporates a symmetry constraint to explicitly exploit the quasi-symmetry of the human brain for better structural understanding. Deformable registration is applied between inpainted patient images and normal controls, and the resulting deformation field is then used to deform the original patient data into the target space, achieving patient-to-normal registration.

## 2 Methods
Given a brain MRI slice $I_{0}$ with tumor, the goal is to replace the pathological regions with normal brain appearance. The incomplete input $I_{0}$ is composed of $R$ and $\overline{R}$, representing the removed pathological region (the hole) and the remaining normal region (boundary or context), respectively.
Mathematically, the task is to generate a new, complete image $I$ with plausible contents in $R$. Following the basic idea of contextual-based image inpainting [36], our framework consists of three sequential modules: global perception inference (GPI), context-aware patch swapping (CPS), and a feature-to-image translator (F2I). The intuition behind the multi-step operation is that directly learning the distribution of high-dimensional image data is challenging; a coarse generation followed by a refinement scheme therefore increases the inpainting performance [36]. Our network architecture is shown in Fig. 1.

Figure 1: Overview of the proposed network architecture. GPI is used for coarse inference and VGG is used for extracting the feature map. The patch-swap layer propagates high-frequency information from the boundary to the hole. F2I translates to a complete, high-resolution image, further constrained by the symmetric loss.

### 2.1 Global Perception Inference
The input to the GPI network is $I_{0}$, a 1$\times$240$\times$240 image with irregular holes; its output is a coarse prediction $I_{1}$. Considering the potentially irregular distribution of tumor locations, the rectangular hole generation module used in [36] is not applicable. Therefore, we adopt the GPI network structure from the image-to-image translation network proposed in [17], which consists of 4$\times$4 convolutions with skip connections that concatenate the features of each encoder layer with those of the corresponding decoder layer. We slightly modify the size of each layer, since only single-channel T1-weighted MRI is used in this task. The GPI module is explicitly trained using the $L_{1}$ reconstruction loss, which is important for stabilizing the adversarial training [23]. It can be formulated as
$\mathcal{L}_{1}(I_{1},I_{gt})=\parallel I_{1}-I_{gt}\parallel_{1},$ (1)
where $I_{1}$ and $I_{gt}$ are the rough inpainting result of GPI and the ground truth, respectively. The second objective is the adversarial loss based on GANs [24], which can be defined as
$\mathcal{L}_{adv}=\max_{D_{1}}\mathbb{E}[\log(D_{1}(I_{0},I_{gt}))+\log(1-D_{1}(I_{0},I_{1}))].$ (2)
Here, a pair of images is input to the discriminator $D_{1}$, as in standard adversarial training: the incomplete image $I_{0}$ and the original image $I_{gt}$ form the real pair, while the incomplete image $I_{0}$ and the prediction $I_{1}$ form the fake pair. During training, the overall loss function is given by $\mathcal{L}_{GPI}=\lambda_{1}\mathcal{L}_{1}+\lambda_{2}\mathcal{L}_{adv}$, where $\lambda_{1}$ and $\lambda_{2}$ are the balancing hyperparameters for the two losses.

### 2.2 Context-aware Patch Swapping
We use $I_{1}$ as input to the CPS network, which operates in two phases. First, $I_{1}$ is encoded as $F_{1}$ by a fully convolutional network (FCN) using the pre-trained VGG network as in [36]. Then the patch-swap operation is applied to propagate the texture from $\overline{R}$ to $R$ while maintaining the high-frequency information in $R$ [22]. Let $r$ and $\bar{r}$ denote the regions in $F_{1}$ corresponding to $R$ and $\overline{R}$ in $I_{1}$, respectively. For each 1$\times$1 neural patch
$p_{i}$ of $F_{1}$ overlapping with $r$, the closest-matching neural patch in $\bar{r}$, indexed by $q_{i}$, is found using the cross-correlation metric
$d(p,q)=\frac{\langle p,q\rangle}{\parallel p\parallel\cdot\parallel q\parallel},$ (3)
and $p_{i}$ is then replaced by $q_{i}$. (In the inpainting community, the 1$\times$1 patch in a feature map is a widely used unit: the output feature map satisfies $F_{1}\in\mathbb{R}^{256\times 60\times 60}$ while the original image is 240$\times$240$\times$1, so a 1$\times$1 area in a feature map does not correspond to a single pixel.) We first swap each patch in $r$ with its most similar patch in $\bar{r}$, and then average overlapping patches. The output is a new feature map $F^{\prime}_{1}$. This process is illustrated in Fig. 2 (left).

Figure 2: Illustration of the patch-swap operation (left) and the symmetry constraint (right). Patch-swap is implemented in the feature space of the FCN-based VGG to search for the most similar boundary 1$\times$1 feature patch, i.e., the one maximizing the similarity $d(p,q)$.

### 2.3 Feature-to-image Translator
Next, we use the F2I network, which has a U-Net-style generator, to learn the mapping from the swapped feature map to a complete and vivid image. The input to the U-Net is a feature map extracted by the FCN-based VGG network. The generator consists of seven convolution layers and eight deconvolution layers, where the first six corresponding deconvolutional and convolutional layers are connected through skip connections. The output is a complete 1$\times$240$\times$240 image. In addition, the F2I network comprises a patch-GAN based discriminator $D_{2}$ for adversarial training; in contrast to the GPI network, the input to $D_{2}$ is a pair consisting of an image and its feature map. In practice, we follow [36] and use the ground truth as training input. Specifically, the feature map $F_{gt}=\mbox{vgg}(I_{gt})$ is the input to the patch-swap layer, and the swapped feature $F^{\prime}_{gt}=\mbox{patch\_swap}(F_{gt})$ is used to train the F2I model. $F^{\prime}_{1}=\mbox{patch\_swap}(F_{1})$ is still used as input at inference, since $I_{gt}$ is not accessible at test time. Of note, using different types of input for training and testing is not common practice when training a machine learning model, but its effectiveness for inpainting has been demonstrated in [36]. Similar to [42], robustness can be further improved by sampling from both the ground truth and the GPI prediction. The first objective is the perceptual loss, defined on the entire image between the final output $I$ and the ground truth $I_{gt}$:
$\mathcal{L}_{perceptual}(I,I_{gt})=\parallel vgg(I)-vgg(I_{gt})\parallel_{2}.$ (4)
This perceptual loss has been widely used in many tasks [13, 18, 11, 6], as it corresponds better with human perception of similarity [41]. The adversarial loss is defined by the discriminator $D_{2}$ and can be expressed as
$\mathcal{L}_{adv}=\max_{D_{2}}\mathbb{E}[\log(D_{2}(F^{\prime}_{gt},I_{gt}))+\log(1-D_{2}(F^{\prime}_{gt},I))],$ (5)
where the real and fake pairs for adversarial training are ($F^{\prime}_{gt},I_{gt}$) and ($F^{\prime}_{gt},I$), respectively.
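For illustration, a minimal PyTorch sketch of the perceptual loss in Eq. (4) is given below. It is a sketch under stated assumptions rather than the authors' implementation: the cut-off layer (relu3_1 of VGG19) and the replication of the single-channel slice to three channels are our choices, and newer torchvision versions use a `weights=` argument instead of `pretrained=True`.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """Sketch of Eq. (4): L2 distance between VGG feature maps."""
    def __init__(self, cut=11):  # index 11 of VGG19 features = relu3_1 (an assumption)
        super().__init__()
        vgg = models.vgg19(pretrained=True).features
        self.extractor = nn.Sequential(*list(vgg.children())[:cut + 1]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False  # the feature extractor stays frozen

    def forward(self, output, target):
        # VGG expects 3-channel inputs; the single-channel MRI slices
        # are replicated across channels here (an implementation assumption).
        out3, tgt3 = output.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return torch.norm(self.extractor(out3) - self.extractor(tgt3), p=2)
```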
### 2.4 Quasi-symmetry Constraint
While the brain is not exactly symmetric w.r.t. the mid-sagittal plane, there is a large degree of symmetry between the left and right hemispheres, which we call the "quasi-symmetry property" [33, 29]. As such, imposing this anatomical symmetry constraint on the generated images can mitigate the ill-posed inpainting task and further improve performance, especially for large holes. The symmetry loss is given by
$\mathcal{L}_{sym}(I)=\mathbb{E}\parallel I_{R}-I_{\hat{R}}\parallel_{2},$ (6)
where $R$ and $\hat{R}$ are the hole and its mirrored region, as shown in Fig. 2 (right). Therefore, we can transfer the appearance of normal brain tissue to the corresponding tumor part by teaching the network to recover the lost information from the mirrored side. Note that the brains used in our experiments are coarsely aligned on their mid-sagittal planes. More importantly, our technique is robust against potential small misalignments: the resolution of the feature space is 60$\times$60 while the input is 240$\times$240, and with the down-sampling of the max-pooling operations, deep neural networks are in general robust against small rotations [25]. With the symmetry constraint, the overall loss for the F2I translation network is defined as
$\mathcal{L}_{F2I}=\lambda_{3}\mathcal{L}_{perceptual}+\lambda_{4}\mathcal{L}_{adv}+\lambda_{5}\mathcal{L}_{sym},$ (7)
where $\lambda_{3}$, $\lambda_{4}$, and $\lambda_{5}$ are the balancing hyperparameters for the different losses. Considering that the brain is not strictly symmetric w.r.t. the mid-sagittal plane, we choose a relatively small weight $\lambda_{5}$ for $\mathcal{L}_{sym}$.
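To make Eqs. (6) and (7) concrete, here is a minimal PyTorch sketch, assuming the slices are coarsely aligned so that mirroring across the mid-sagittal plane reduces to a horizontal flip, and using the batch mean as a stand-in for the expectation in Eq. (6); the loss weights are the values reported in Section 3.

```python
import torch

def symmetry_loss(output, hole_mask):
    """Sketch of Eq. (6): compare the inpainted hole with its mirrored region.
    output: (B, 1, H, W) inpainted slices; hole_mask: 1 inside the hole R."""
    mirrored = torch.flip(output, dims=[-1])        # reflect across the mid-sagittal plane
    diff = (output - mirrored) * hole_mask          # I_R minus I_R-hat, hole pixels only
    return diff.flatten(1).norm(p=2, dim=1).mean()  # batch mean stands in for E[.]

def f2i_loss(perceptual, adversarial, sym, lam3=10.0, lam4=3.0, lam5=1.0):
    """Eq. (7) with the weights from Section 3; lam5 is kept small since
    the brain is only quasi-symmetric."""
    return lam3 * perceptual + lam4 * adversarial + lam5 * sym
```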
## 3 Experiments and Results
Figure 3: An ablation study of our symmetry constraint and a comparison with the other inpainting methods.

The proposed method was validated both qualitatively and quantitatively on the T1 modality of the Brain Tumor Segmentation (BraTS) 2018 database (https://www.med.upenn.edu/sbia/brats2018/data.html). From a total of 210 patients, each with about 150 slices, we randomly selected 16 patients for testing and used the remaining subjects for training in a subject-independent manner. Training was performed on four NVIDIA TITAN Xp GPUs with the PyTorch deep learning toolbox [30] and took about 5 hours. The normal slices without tumors in the training set were selected to train our network. Since tumors in the BraTS data can occur at different spatial locations, our network can learn the normal appearance of different slices. We randomly chose the irregular tumor segmentation labels in our training set as training masks. Computing the cross-correlation for all neural patch pairs between the hole and the remaining region (e.g., the boundary) is computationally prohibitive; to alleviate this, the strategy in [6, 36] was used to speed up computation via parallel convolution, and in practice processing one feature map took only about 0.1 seconds. To match the absolute values of the losses, we set different weights for each part. For GPI training, we set $\lambda_{1}=10$ and $\lambda_{2}=1$. The Adam optimizer was used for training, with learning rates $lr_{GPI}=1\mathrm{e}{-3}$ and $lr_{D_{1}}=1\mathrm{e}{-4}$ and momentum 0.5. When training the F2I network, we set $\lambda_{3}=10$, $\lambda_{4}=3$, and $\lambda_{5}=1$, with learning rates $lr_{F2I}=2\mathrm{e}{-4}$ and $lr_{D_{2}}=2\mathrm{e}{-4}$; as with the GPI module, the momentum was set to 0.5. The inpainting results for various cases are shown in Figs. 3, 4, and 5. The proposed network can deal with incomplete data from unseen patients, different slice positions, and an arbitrary shape and number of holes. Comparisons with the other inpainting methods are shown in Fig. 3.

Our proposed method, built on context-aware inpainting [36], shows superior performance over the other methods, as assessed visually. In addition, an ablation study evaluating the contribution of the symmetry constraint is illustrated in Fig. 3. Of note, the inpainting quality was further improved by the symmetry constraint at a marginal training cost and no additional testing cost. This is partly attributed to the use of the context and the quasi-symmetry property of the brain.

Figure 4: Inpainting results compared with the ground truth.

Table 1: Numerical comparison of the methods on the BraTS 2018 testing set. Smaller mean L1 error and larger SSIM, PSNR, and inception score indicate higher similarity.

Methods | mean L1 error $\downarrow$ | SSIM $\uparrow$ | PSNR $\uparrow$ | Inception Score $\uparrow$
---|---|---|---|---
Patch-match [3] | 445.8 | 0.9460 | 29.55 | 9.13
GLC [16] | 432.6 | 0.9506 | 30.34 | 9.68
Partial Conv [20] | 373.2 | 0.9512 | 33.57 | 9.77
Proposed | 292.5 | 0.9667 | 34.26 | 10.26
Proposed+symmetry | 254.8 | 0.9682 | 34.52 | 10.58

Figure 5: Deformable registration of two brain tumor subjects to a brain atlas: direct registration vs. inpainted registration. Tumors are marked with red boxes.

Table 2: Mutual information between registered brain volumes and the brain atlas for ten test subjects, using direct patient registration and inpainted volume registration.

Methods | Sub1 | Sub2 | Sub3 | Sub4 | Sub5 | Sub6 | Sub7 | Sub8 | Sub9 | Sub10
---|---|---|---|---|---|---|---|---|---|---
Direct registration | 0.303 | 0.311 | 0.308 | 0.324 | 0.315 | 0.299 | 0.309 | 0.303 | 0.317 | 0.308
Inpainted registration | 0.309 | 0.311 | 0.309 | 0.324 | 0.316 | 0.304 | 0.312 | 0.313 | 0.320 | 0.312

For quantitative evaluation, we manually generated holes with random sizes and positions on normal slices of the testing subjects, so the ground truth is known. The inpainted images are expected to have sharp and realistic-looking textures, be coherent with $\overline{R}$, and look similar to their corresponding ground truth. Our results are illustrated in Fig. 4; the proposed method generated visually satisfying results. Table 1 lists numerical comparisons between the proposed approach, Patch-match [3], GLC [16], and Partial Conv [20]. We note that the compared inpainting baselines [16, 20] are based on a one-step framework. We used four quality measurements to assess performance: mean L1 error, structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and inception score [34]. The mean L1 error and SSIM were computed directly over the holes, while the inception score was measured on the completed image $I$. Finally, Fig. 5 and Table 2 show the results of deformable registration using the ANTs SyN method [1] with normalized cross-correlation as the similarity metric. As the target atlas, we used a T1-weighted brain atlas constructed from healthy subjects of the OASIS database [26]. The result was evaluated using mutual information (MI), computed only in normal tissues for a fair comparison (tumor masks were used to exclude the tumor region). Direct patient-to-normal registration was affected by the presence of the tumor, reducing the MI score even in normal tissues. This was corrected by using the inpainted volume as the registration input, yielding improved or equal MI scores for every subject tested; the mean MI improved from 0.3097 to 0.3129.
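As an illustration of the quantitative evaluation above, the following sketch computes the hole-restricted mean L1 error together with SSIM and PSNR using scikit-image. It is not the authors' evaluation code: the exact normalization behind the mean L1 values in Table 1 is not specified in the paper, so the per-pixel mean is an assumption, and SSIM over the hole is approximated by the hole's bounding box.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_slice(pred, gt, hole_mask, data_range=255.0):
    """pred, gt: 2-D arrays (inpainted and ground-truth slices);
    hole_mask: boolean array, True inside the generated hole."""
    mean_l1 = np.mean(np.abs(pred[hole_mask] - gt[hole_mask]))  # hole pixels only
    ys, xs = np.where(hole_mask)
    crop = (slice(ys.min(), ys.max() + 1), slice(xs.min(), xs.max() + 1))
    # SSIM over the hole's bounding box approximates "SSIM over the holes"
    ssim = structural_similarity(gt[crop], pred[crop], data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    return mean_l1, ssim, psnr
```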
## 4 Conclusion
This paper presented an inpainting network that replaces pathological tumor regions with normal brain appearance, targeting patient-to-normal deformable registration. The main challenge lies in the irregular distribution of brain tumors. The two-stage inpainting scheme was trained with complete (tumor-free) slices and irregular tumor segmentation masks, and produces refined results based on patch-wise semantic relevance. Our experimental results demonstrate that the proposed method surpasses the comparison methods and can be used for registration between healthy subjects and tumor patients.

## 5 Acknowledgements
This work was supported by NIH R01DE027989, R01DC018511, R01AG061445, and P41EB022544.

## References
* [1] Avants, B.B., Tustison, N.J., Song, G., Cook, P.A., Klein, A., Gee, J.C.: A reproducible evaluation of ANTs similarity metric performance in brain image registration. NeuroImage 54(3), 2033–2044 (2011)
* [2] Ballester, C., Bertalmio, M., Caselles, V., Sapiro, G., Verdera, J.: Filling-in by joint interpolation of vector fields and gray levels. IEEE Transactions on Image Processing 10(8), 1200–1211 (2001)
* [3] Barnes, C., Shechtman, E., Finkelstein, A., Goldman, D.B.: PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 28(3), 24–1 (2009)
* [4] Bauer, S., Wiest, R., Nolte, L.P., Reyes, M.: A survey of MRI-based medical image analysis for brain tumor studies. Physics in Medicine & Biology 58(13), R97 (2013)
* [5] Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. pp. 417–424 (2000)
* [6] Chen, T.Q., Schmidt, M.: Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337 (2016)
* [7] Criminisi, A., Pérez, P., Toyama, K.: Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing 13(9), 1200–1212 (2004)
* [8] Cuadra, M.B., Gomez, J., Hagmann, P., Pollo, C., Villemure, J.G., Dawant, B.M., Thiran, J.P.: Atlas-based segmentation of pathological brains using a model of tumor growth. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 380–387. Springer (2002)
* [9] Dawant, B., Hartmann, S., Pan, S., Gadamsetty, S.: Brain atlas deformation in the presence of small and large space-occupying tumors. Computer Aided Surgery 7(1), 1–10 (2002)
* [10] DeAngelis, L.M.: Brain tumors. New England Journal of Medicine 344(2), 114–123 (2001)
* [11] Dosovitskiy, A., Brox, T.: Generating images with perceptual similarity metrics based on deep networks. In: Advances in Neural Information Processing Systems. pp. 658–666 (2016)
* [12] Efros, A.A., Freeman, W.T.: Image quilting for texture synthesis and transfer. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. pp. 341–346 (2001)
* [13] Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on. pp. 2414–2423. IEEE (2016)
* [14] Gooya, A., Biros, G., Davatzikos, C.: Deformable registration of glioma images using EM algorithm and diffusion reaction modeling. IEEE Transactions on Medical Imaging 30(2), 375–390 (2010)
* [15] Iizuka, S., Simo-Serra, E., Ishikawa, H.: Globally and locally consistent image completion.
ACM Transactions on Graphics (TOG) 36(4), 107 (2017)
* [16] Iizuka, S., Simo-Serra, E., Ishikawa, H.: Globally and locally consistent image completion. ACM Transactions on Graphics (TOG) 36(4), 1–14 (2017)
* [17] Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1125–1134 (2017)
* [18] Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision. pp. 694–711. Springer (2016)
* [19] Lamecker, H., Pennec, X.: Atlas to image-with-tumor registration based on demons and deformation inpainting (2010)
* [20] Liu, G., Reda, F.A., Shih, K.J., Wang, T.C., Tao, A., Catanzaro, B.: Image inpainting for irregular holes using partial convolutions. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 85–100 (2018)
* [21] Liu, H., Jiang, B., Xiao, Y., Yang, C.: Coherent semantic attention for image inpainting. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4170–4179 (2019)
* [22] Liu, X., Guo, Z., Li, S., Kong, L., Jia, P., You, J., Kumar, B.: Permutation-invariant feature restructuring for correlation-aware image set-based recognition. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4986–4996 (2019)
* [23] Liu, X., Kumar, B.V., Ge, Y., Yang, C., You, J., Jia, P.: Normalized face image generation with perceptron generative adversarial networks. In: 2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA). pp. 1–8. IEEE (2018)
* [24] Liu, X., Li, S., Kong, L., Xie, W., Jia, P., You, J., Kumar, B.: Feature-level Frankenstein: Eliminating variations for discriminative recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 637–646 (2019)
* [25] Marcos, D., Volpi, M., Tuia, D.: Learning rotation invariant convolutional filters for texture classification. In: ICPR (2016)
* [26] Marcus, D.S., Wang, T.H., Parker, J., Csernansky, J.G., Morris, J.C., Buckner, R.L.: Open access series of imaging studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. Journal of Cognitive Neuroscience 19(9), 1498–1507 (2007)
* [27] Mohamed, A., Zacharaki, E.I., Shen, D., Davatzikos, C.: Deformable registration of brain tumor images via a statistical model of tumor-induced deformation. Medical Image Analysis 10(5), 752–763 (2006)
* [28] Oishi, K., Faria, A.V., Van Zijl, P.C., Mori, S.: MRI atlas of human white matter. Academic Press (2010)
* [29] Oostenveld, R., Stegeman, D.F., Praamstra, P., van Oosterom, A.: Brain symmetry and topographic analysis of lateralized event-related potentials. Clinical Neurophysiology 114(7), 1194–1202 (2003)
* [30] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in PyTorch (2017)
* [31] Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., Efros, A.A.: Context encoders: Feature learning by inpainting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2536–2544 (2016)
* [32] Prados, F., Cardoso, M.J., Cawley, N., Kanber, B., Ciccarelli, O., Wheeler-Kingshott, C.A.G., Ourselin, S.: Fully automated patch-based image restoration: Application to pathology inpainting.
In: International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. pp. 3–15. Springer (2016)
* [33] Raina, K., Yahorau, U., Schmah, T.: Exploiting bilateral symmetry in brain lesion segmentation. arXiv preprint arXiv:1907.08196 (2019)
* [34] Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training GANs. In: Advances in Neural Information Processing Systems. pp. 2234–2242 (2016)
* [35] Sartor, K.: MR imaging of the brain: tumors. European Radiology 9(6), 1047–1054 (1999)
* [36] Song, Y., Yang, C., Lin, Z., Liu, X., Huang, Q., Li, H., Jay Kuo, C.C.: Contextual-based image inpainting: Infer, match, and translate. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 3–19 (2018)
* [37] Sotiras, A., Davatzikos, C., Paragios, N.: Deformable medical image registration: A survey. IEEE Transactions on Medical Imaging 32(7), 1153–1190 (2013)
* [38] Tang, Z., Wu, Y., Fan, Y.: Groupwise registration of MR brain images with tumors. Physics in Medicine & Biology 62(17), 6853 (2017)
* [39] Yang, C., Song, Y., Liu, X., Tang, Q., Kuo, C.C.J.: Image inpainting using block-wise procedural training with annealed adversarial counterpart. arXiv preprint arXiv:1803.08943 (2018)
* [40] Zacharaki, E.I., Shen, D., Lee, S.K., Davatzikos, C.: ORBIT: A multiresolution framework for deformable registration of brain tumor images. IEEE Transactions on Medical Imaging 27(8), 1003–1017 (2008)
* [41] Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. arXiv preprint arXiv:1801.03924 (2018)
* [42] Zheng, S., Song, Y., Leung, T., Goodfellow, I.: Improving the robustness of deep neural networks via stability training. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4480–4488 (2016)
# Few Shot Dialogue State Tracking using Meta-learning

Saket Dingliwal¹,², Shuyang Gao¹, Sanchit Agarwal¹, Chien-Wei Lin¹, Tagyoung Chung¹, Dilek Hakkani-Tür¹
¹Amazon Alexa AI ²Carnegie Mellon University
<EMAIL_ADDRESS>

###### Abstract
Dialogue State Tracking (DST) forms a core component of automated chatbot-based systems designed for specific goals such as hotel reservation, taxi reservation, and tourist information. With the increasing need to deploy such systems in new domains, solving the problem of zero/few-shot DST has become necessary. There has been a rising trend of learning to transfer knowledge from resource-rich domains to unknown domains with minimal need for additional data. In this work, we explore the merits of meta-learning algorithms for this transfer and propose D-REPTILE, a meta-learner specific to the DST problem. With extensive experimentation, we provide clear evidence of benefits over conventional approaches across different domains, methods, base models, and datasets, with a significant (5–25%) improvement over the baseline in the low-data setting. Our proposed meta-learner is agnostic of the underlying model, and hence any existing state-of-the-art DST system can improve its performance on unknown domains using our training strategy.

## 1 Introduction
Task-Oriented Dialogue (TOD) systems are automated conversational agents built for a specific goal (for example, hotel reservation). Many businesses from a wide variety of domains (hotel, restaurant, car rental, payments, etc.) have adopted these systems to cut down their costs on customer support services. Almost all such systems have a Dialogue State Tracking (DST) module, which keeps track of values for predefined domain-specific slots (e.g., hotel-name, restaurant-rating) after every turn of utterances from the user and the system. These values are then used by a Natural Language Generator (NLG) to generate system responses and fulfill the user's goals. Many recent works Wu et al. (2019); Zhang et al. (2019); Goel et al. (2019); Heck et al. (2020) have proposed neural models that achieve good performance on the task but are data-hungry in general. Therefore, adapting to a new unknown domain (target domain) requires large amounts of domain-specific annotation, which limits their use. However, given the wide range of practical applications, there has been recent interest in data-efficient approaches. Lee et al. (2019) and Gao et al. (2020) used transformer Vaswani et al. (2017) based models, which significantly reduce the data dependence. Further, Gao et al. (2020) model the problem as a machine reading comprehension task and benefit from its readily available external datasets and methods. Wu et al. (2019) were the first to propose transferring knowledge from one domain to another. Since many domains, such as restaurant and hotel, share common slots like name, area, and rating, such a transfer proved to be effective for a low-resource domain. More recently, Campagna et al. (2020) aimed at zero-shot DST using synthetic data generation for the target domain, imitating data from other domains. Recent meta-learning methods like MAML Finn et al. (2017) and REPTILE Nichol et al. (2018) have proven very successful at efficient and fast adaptation to new tasks with very few labelled samples. These methods specifically target the setting where there are many similar tasks but a very small amount of data for each task.
Agnostic of the underlying model, these meta-learning algorithms produce an initialization of its parameters which, when fine-tuned on the low-resource target task, achieves good performance. Following their widespread success in few-shot image classification, there has been much recent work on their merit in natural language processing tasks. Huang et al. (2018); Gu et al. (2018); Sennrich and Zhang (2019); Bansal et al. (2019); Dou et al. (2019); Yan et al. (2020) use meta-learning for efficient transfer of knowledge from high-resource tasks to a low-resource task. Further, some more recent works Dai et al. (2020); Qian and Yu (2019) have shown that meta-learners can be used for system response generation in TOD systems, which is generally a downstream task of our DST task. To the best of our knowledge, ours is the first work exploring meta-learning algorithms for the DST problem.

While prior work trained models on a mixture of data from the other available domains (train domains) followed by fine-tuning with data from the target domain, we observe that this way of transferring knowledge between domains is inefficient, particularly in the very low-data setting with just $0,1,2,4$ annotated examples from the target domain. We, on the other hand, use the train domains to meta-learn the parameters of the model used to initialize the fine-tuning process. We hypothesize that although different domains share many common slots, they can have different complexities. For some domains, it might be easy to train the model using very few examples, while others may require a large number of gradient steps (based on their different data complexity and the training curves with 1%, 5%, 10% data in Gao et al. (2020)). Meta-learning takes this gradient information into account and shares it across domains. Rather than looking for an initialization that simultaneously minimizes the joint loss over all the domains, it looks for a point from which the optimum parameters of the individual domains are reachable in a few ($<5$) gradient steps (and hence with very few training examples for these steps). The hope is then that the target domain is similar to at least one of the train domains (for example, hotel & restaurant or taxi & train), so that the learned initialization allows efficient fine-tuning with very few examples for the target domain as well. This focus on limited data is motivated by practical applicability, where it should be possible for any developer to manually annotate $4$-$8$ examples before deploying the chatbot for a new domain.

We highlight the main contributions of our work below: (i) we are the first to explore and reason about the benefits of meta-learning algorithms for the DST problem; (ii) we propose a meta-learner, D-REPTILE, that is agnostic to the underlying model and hence has the ability to improve state-of-the-art performance in zero/few-shot DST for new domains; (iii) with extensive experimentation, we provide evidence of the benefit of our approach over conventional methods, achieving a significant 5–25% improvement over the baseline in few-shot DST that is consistent across different target domains, methods, base models, and datasets.

## 2 Background
### 2.1 Dialogue State Tracking
DST refers to keeping track of the state of the dialogue at every turn.
The state of the dialogue can be defined as $\langle\textit{slot\_name, slot\_value}\rangle$ pairs that represent, for a given domain-specific slot, the value that the user provides or the system-provided value that the user accepts. Further, many domains have a pre-defined ontology that specifies the set of values each slot can take. Note that the number of values in the ontology varies considerably across slots. Some slots, like hotel-stars, might have just five different values (called categorical slots), while those like hotel-name have hundreds of possible values (called extractive slots). It is possible that a slot is never discussed in the dialogue sequence; in that case, the model has to predict a None value for that slot.

Various models have been proposed for the above task, but particularly relevant to this work is the transformer-based model STARC by Gao et al. (2020). For each slot, they form a question (like "what is the name of the hotel" for the hotel-name slot) and then, at each turn, append the tokens from the dialogue utterances and the question, separated by a [SEP] token. They then pass this sequence of tokens through a transformer to form token embeddings. For the extractive slots, they use the token embeddings to mark the span (start and end positions) of the answer value in the dialogue itself (called the extractive model). For the categorical slots with a small number of possible values, the categorical model appends the embedding of each possible value to the token embeddings and then uses a classifier with a softmax layer to predict the correct option.

### 2.2 Meta-Learning
With advances in the model-agnostic meta-learning frameworks of Finn et al. (2017); Nichol et al. (2018), few-shot problems have been revolutionized. These frameworks assume an underlying task distribution from which both train tasks ($\tau$) and target tasks ($\tau^{{}^{\prime}}$) are sampled. For each task $\tau$, we are given very few labelled data points $\mathcal{D}^{train}_{\tau}$ and a loss function $\mathcal{L}_{\tau}$. Now, given a new data point $\mathcal{D}^{test}_{\tau^{{}^{\prime}}}$ from a target task $\tau^{{}^{\prime}}$, the goal is to learn parameters $\theta_{\mathcal{M}}$ of any model $\mathcal{M}$ such that $\mathcal{L}_{\tau^{{}^{\prime}}}(\mathcal{D}^{test}_{\tau^{{}^{\prime}}};\theta_{\mathcal{M}})$ is minimized. This is achieved by $k$ steps of gradient descent using $\mathcal{D}^{train}_{\tau^{{}^{\prime}}}$ with learning rate $\alpha$. More formally, $\theta_{\mathcal{M}}=\textit{SGD}(\mathcal{D}^{train}_{\tau^{{}^{\prime}}},\mathcal{L}_{\tau^{{}^{\prime}}},\theta_{\mathcal{M}}^{INIT};k,\alpha)$, where $\textit{SGD}(\mathcal{D},\mathcal{L},\theta^{INIT};k,\alpha)$ returns $\theta^{(k)}$ such that

$\theta^{(t)}=\theta^{(t-1)}-\alpha\nabla_{\theta}\mathcal{L}(\mathcal{D};\theta^{(t-1)}),\quad\theta^{(0)}=\theta^{INIT}$ (1)

Therefore, the goal now is to find a good initialization $\theta_{\mathcal{M}}^{INIT}$ for the gradient descent, using the data from the train tasks $\tau$. This is achieved by minimizing the empirical loss:

$\theta^{(k)}_{\tau}=\textit{SGD}(\mathcal{D}^{train}_{\tau},\mathcal{L}_{\tau},\theta;k,\alpha)$ (2)

$\theta_{\mathcal{M}}^{INIT}=\arg\min_{\theta}\sum_{\tau}\mathcal{L}_{\tau}(\mathcal{D}^{train}_{\tau};\theta^{(k)}_{\tau})$ (3)

Note that the above optimization is complex and involves second-order derivatives. For computational benefit, Nichol et al. (2018) proposed REPTILE and showed that these terms can be ignored without affecting the performance of the meta-learning algorithm.
We refer the reader to their work for more details.

## 3 Methodology
In this work, we propose D-REPTILE, a meta-learning algorithm specific to the DST task. Following what Qian and Yu (2019) did for the dialogue generation problem, we treat different domains as tasks for the meta-learning algorithm. Let $\mathrm{D}=\{d_{1},d_{2},\dots d_{n}\}$ (e.g., $\{restaurant,taxi,payment,\dots\}$) be the set of train domains for which annotated data is available, and let $p_{D}(.)$ be a probability distribution over these domains. Let $\mathcal{D}_{d_{1}},\mathcal{D}_{d_{2}},\dots,\mathcal{D}_{d_{n}}$ be the training data from each of these domains. Let $\mathcal{M}$ be any DST model with parameters $\theta_{\mathcal{M}}$, let $m$ be the task-batch size (the number of domains in a batch, in our case), let $\alpha$ and $\beta$ be the inner and outer learning rates, respectively, and let $k$ be the number of gradient steps. Let $\textit{SGD}(.)$ be the function defined in Equation 1. Borrowing the meta-learning theory for optimizing the objective of Equation 3 from Nichol et al. (2018), we define D-REPTILE in Algorithm 1. The update rule for the initialization (step 8) is the same as that of REPTILE. We chose REPTILE over other meta-learning algorithms because of its simplicity and computational advantages; nonetheless, it is straightforward to swap in any other initialization-based meta-learner by changing the meta-update step. The novelty of our learner lies in its definition of the meta-learning tasks, which represent the different domains of the DST problem. The algorithm outputs $\theta_{\mathcal{M}}^{INIT}$, which we use to initialize the model for the fine-tuning stage on the target domain.

Algorithm 1: D-REPTILE, a meta-learner for DST
Input: $\mathcal{D}_{d_{1}},\mathcal{D}_{d_{2}}\dots\mathcal{D}_{d_{n}}$
Parameters: $\mathcal{M}$, $\mathcal{L}$, $p_{D}(.)$, $\alpha$, $\beta$, $k$, $m$
Result: $\theta_{\mathcal{M}}^{INIT}$
1. Initialize $\theta_{\mathcal{M}}$ randomly
2. for iteration $i=1,2,\dots$ do
3.   sample $m$ domains $\mathrm{D_{i}}$ using $p_{D}(.)$
4.   for domain $d_{j}\in\mathrm{D}_{i}$ do
5.     sample data points $\mathcal{D}_{ij}$ from $\mathcal{D}_{d_{j}}$
6.     $\theta_{\mathcal{M}}^{d_{j}}\leftarrow\textit{SGD}(\mathcal{D}_{ij},\mathcal{L},\theta_{\mathcal{M}};k,\alpha)$
7.   end for
8.   $\theta_{\mathcal{M}}\leftarrow\theta_{\mathcal{M}}+\beta\frac{1}{m}\sum_{j=1}^{m}(\theta_{\mathcal{M}}^{d_{j}}-\theta_{\mathcal{M}})$
9. end for
10. return $\theta_{\mathcal{M}}$
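For concreteness, a minimal PyTorch-style sketch of Algorithm 1 follows. It is illustrative rather than the released implementation: the data layout and helper structure are assumed, and the plain REPTILE interpolation is used for the meta-update, whereas Section 4.3 notes that the paper applies the Adam optimizer in both the inner and outer loops.

```python
import copy
import random
import torch

def d_reptile(model, domain_data, loss_fn, p_D, alpha=5e-5, beta=1.0,
              k=5, m=4, num_iterations=1000):
    """Sketch of Algorithm 1. domain_data: dict domain -> list of batches
    (assumed to hold at least k batches each); p_D: sampling weights over
    the domains, with p_D(i) proportional to |D_i| as in the paper."""
    domains = list(domain_data.keys())
    for _ in range(num_iterations):
        # Step 3: sample a task batch of m domains according to p_D(.)
        task_batch = random.choices(domains, weights=p_D, k=m)
        deltas = []
        for d in task_batch:
            # Steps 5-6: k inner optimization steps on data from domain d,
            # starting from the current initialization theta_M
            inner = copy.deepcopy(model)
            opt = torch.optim.Adam(inner.parameters(), lr=alpha)
            for batch in random.sample(domain_data[d], k):
                opt.zero_grad()
                loss_fn(inner, batch).backward()
                opt.step()
            # Record (adapted minus initial) parameters for this domain
            deltas.append([q.detach() - p.detach()
                           for p, q in zip(model.parameters(), inner.parameters())])
        # Step 8: REPTILE meta-update, moving the initialization towards
        # the average of the domain-adapted parameters
        with torch.no_grad():
            for i, p in enumerate(model.parameters()):
                p += beta * sum(delta[i] for delta in deltas) / m
    return model  # theta_M^INIT: used to initialize target-domain fine-tuning
```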
We argue that the meta-learned initializations are better suited for fine-tuning than those of conventional methods. In the hope that the joint optimal parameters for the train domains lie close to the individual domain optima, Wu et al. (2019) initialize the fine-tuning stage for the target domain from the joint minimum of the loss over data from all the train domains (called Naive pre-training before Fine-Tuning, or NFT, here). More formally, they choose the initialization

$\theta_{\mathcal{M}}^{INIT}=\arg\min_{\theta}\sum_{j=1}^{n}\mathcal{L}(\mathcal{D}_{d_{j}};\theta)$ (4)

Such an initialization tries to simultaneously minimize the loss for all the domains, which might be useful if the goal were to perform well on test data coming from a mixture of these domains. Here, however, our goal is to perform well on a single unknown target domain, and no direct relation between this initialization and the optimal parameters for the target domain can be seen. Further, as the number of train domains increases, or as the training data for each domain decreases, the joint optimum can be very far from the individual domain optima, and these methods then perform particularly badly; we show empirical evidence for this hypothesis in Section 4. On the other hand, if we optimize Equation 3, we reach a point in the parameter space from which all the domain-optimum parameters are reachable in $k$ gradient-descent steps, so we can hope to reach the optimum parameters for the target domain just as efficiently. This hope is particularly plausible for the DST problem because of the similarities between related domains (specifically, related slots, as shown in Section 5). Consider the following example: let restaurant and taxi be two of the train domains. Optimizing Equation 3, we might reach a point that is closer to the optimum parameters of the restaurant domain than to those of the taxi domain if the gradients are smaller for restaurant data but large for taxi data; notably, though, both optima remain reachable in $k$ gradient steps. Now, if the target domain is hotel (similar to the restaurant domain, with common slots like rating, name, etc.), we will already be close to its optimum parameters. If instead the target domain is bus (similar to the taxi domain, with common slots like time, place, etc.), we will have larger gradients in the fine-tuning stage and thus will reach the optimum parameters for bus as well. This might not be possible with Equation 4, as the optimum parameters for the joint restaurant and taxi data might be very far from both individual domain optima, and that initialization has no particular gradient properties for faster adaptation to either the hotel or the bus target domain.

## 4 Experiments
### 4.1 Datasets
We used two different DST datasets for our experiments: (i) MultiWoz 2.0 Budzianowski et al. (2018) and 2.1 Eric et al. (2019); (ii) DSTC8 Rastogi et al. (2019). The former is a manually annotated, complex dataset with five main domains and 8438 dialogues, while the latter is a relatively simple, synthetically generated dataset with 26 domains and 16142 dialogues. Both datasets contain dialogues spanning multiple domains. Following the setting of Wu et al. (2019), to extract the data of a particular domain from a dataset, we consider all the dialogues in which that domain is present and ignore slots from other domains in both the train and test sets. Further, as in Gao et al. (2020), we use external datasets from the Machine Reading for Question Answering (MRQA) 2019 shared task Fisch et al. (2019), DREAM Sun et al. (2019), and RACE Lai et al. (2017) to pre-train the transformer in our experiments, and we use the suffix '-RC' to distinguish this model from the '-base' model.

### 4.2 Evaluation Metric
Given the objective of DST, there is a well-established metric: Joint Goal Accuracy (JGA). JGA is the fraction of turns, across all dialogues, for which the predicted and ground-truth dialogue states match for all slots. Following Wu et al. (2019), when testing on a single target domain in a multi-domain dialogue, we only consider slots from that domain in the metric computation. Note that in some of our experiments (where explicitly mentioned), we further restrict the slots to only extractive or only categorical slots. Also, as is most often the case, whenever a slot is not mentioned in any turn, its ground-truth value is None.
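As a concrete illustration of the metric, the following is a hedged sketch of JGA; the representation of a dialogue state as a per-turn dict from "domain.slot" to value is our assumption, not the authors' evaluation script.

```python
def joint_goal_accuracy(predicted_states, gold_states, domain="hotel"):
    """predicted_states, gold_states: lists (one entry per turn, across all
    dialogues) of dicts mapping "domain.slot" -> value ("None" if unfilled)."""
    correct = 0
    for pred, gold in zip(predicted_states, gold_states):
        # Restrict both states to the slots of the target domain
        slots = [s for s in gold if s.startswith(domain + ".")]
        # A turn counts only if every slot of the domain matches exactly
        if all(pred.get(s, "None") == gold[s] for s in slots):
            correct += 1
    return correct / len(gold_states)
```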
For analysis, we further use the metric Active Slot Accuracy, the fraction of predicted values of a particular slot that are correct whenever the ground-truth value is not None.

### 4.3 Experimental Setting
For all our experiments, both D-REPTILE and the baseline (NFT, Sec. 3) use STARC (Sec. 2) as the base model $\mathcal{M}$. This ensures that any gains observed in our experiments are due only to meta-learning. In our implementation (https://github.com/saketdingliwal/Few-Shot-DST), we use pre-trained RoBERTa-Large embeddings Liu et al. (2019) and the Adam optimizer Kingma and Ba (2014) for gradient updates in both the inner and outer loops, with $\alpha=5e^{-5},\,\beta=1,\,m=4,\,k=5,\,p_{D}(i)\propto|\mathcal{D}_{d_{i}}|$ (chosen using dev-set experiments, as explained in Section 5). As recently shown by Mosbach et al. (2020), the fine-tuning of transformer-based models is unstable; we therefore run fine-tuning multiple times and report the mean and standard deviation of the performance. The performance also varies with the choice of training data from the target domain used for fine-tuning. For our experiments, we chose these dialogues based on the number of active (not None) slots and used the same dialogues for both D-REPTILE and the baseline. Since we use very little data (0, 1, or 2 examples) from the target domain, we naturally prefer dialogues in which at least all the slots are discussed in the utterances. In practical scenarios, where a developer might create 1 or 2 examples for a new domain, it is always possible to include all the slots in the dialogue utterances.

### 4.4 Results
Figure 1: Performance of D-REPTILE vs. NFT for different MultiWoz domains with three different models.

In our experiments, we achieve significant improvement over the baseline method in the low-data setting ($<32$ dialogues). Note that the choice of the low-data setting is guided by the practical applications of the method. It also validates our hypothesis that the initialization chosen by meta-learning is closer to the optimal parameters of the target domain in terms of gradient steps and therefore performs better when there is very little data. As the fine-tuning data is increased to thousands of dialogues, however, any random initialization is also able to reach the optimal parameters of the target domain. We observe the benefits of D-REPTILE with limited data consistently across different domains, datasets, and models, as explained one by one below.

Across domains: We used each of the different domains of the MultiWoz 2.0 data as the target domain in the five plots of Figure 1, pre-training D-REPTILE (solid) and NFT (dashed) versions of different models (represented by different colors). For the models represented by red and blue, we used all domains other than the target domain as train domains; for example, in the first plot, hotel is the target domain, while restaurant, train, attraction, and taxi are the train domains. Red corresponds to starting from RoBERTa-Base embeddings, while blue represents RoBERTa-RC, which is RoBERTa-Base pre-trained on reading comprehension datasets Gao et al. (2020). The dotted green line represents the model without any pre-training; its performance is clearly poor and unstable, which shows the importance of using other domains in few-shot experiments. We fine-tune all our models using different amounts of training data from the target domain (x-axis). For every model, the solid line (D-REPTILE) lies strictly above the dashed line (NFT) in the JGA metric.
The gains are as large as 47.8% (D-REPTILE) vs. 22.3% (NFT) for the restaurant domain with 1 dialogue, which is more than a 100% improvement at no additional annotation cost.

Across models: The results are consistent not only across different base transformers, as shown in Figure 1, but also across different DST methods. As in Gao et al. (2020), we train separate categorical and extractive models for the hotel domain (using the categorical and extractive data, respectively, from the train domains), which we combined to plot Figure 1. If we consider these two fairly different models separately, we observe similar trends in each individually, as plotted in Figure 2. Note that the JGA metric is computed here on the restricted slots corresponding to the type of the model. The gains are larger for the extractive model, possibly because marking a span in the original dialogue is a slightly harder task than choosing among a limited number of options.

Figure 2: Performance of D-REPTILE vs. NFT for different DST models on different slots of the hotel domain.

Figure 3: Performance of D-REPTILE vs. NFT with the Hotels_2 domain of the DSTC8 data as the target domain.

Across datasets: To show that the merits of D-REPTILE are not limited to the MultiWoz data, we also experimented with domains from the DSTC8 dataset as both train and target domains. In Figure 1, the orange lines represent models pre-trained using all the DSTC8 domains as train domains, with the target domain from MultiWoz. As expected, the performance of these models falls below the red and blue lines (models pre-trained on MultiWoz train domains) but above the green line (no pre-training), since the training and testing datasets differ. Still, the solid orange line (D-REPTILE) lies above the dashed one (NFT). In another set of experiments, we used a target domain from DSTC8 and compiled the results in Figure 3. Except for Hotels_1, Hotels_2, and Hotels_3, all domains from DSTC8 were used as train domains, with Hotels_2 kept as the target domain. We see that the benefits of meta-learning are much larger for the DSTC8 dataset than for MultiWoz; for example, with 8 dialogues for fine-tuning, D-REPTILE achieves a JGA of 43.9% while NFT only reaches 14.1%. This can be attributed to the increased number of distinct training tasks (23 train domains for DSTC8, as opposed to 4 for the MultiWoz experiments). Surprisingly, the meta-learned initializations not only adapt faster but are also better to start with: we see an improvement in zero-shot performance as well.

In addition to the comparison with the NFT baseline, we also show improvement over existing models on the MultiWoz 2.0 dataset in Table 1. Note again that D-REPTILE is model-agnostic and therefore has the capability to improve the JGA of any underlying model on a new unknown domain.

Joint Goal Accuracy | Hotel | Restaurant | Taxi | Attraction | Train
---|---|---|---|---|---
TRADE Wu et al. (2019) | 13.7 | 11.52 | 60.58 | 19.87 | 22.37
STARC Gao et al. (2020) | 28.6 | 28.2 | 65 | 36.9 | 26.1
STARC + D-REPTILE | 32.4 | 47.8 | 67.2 | 45.9 | 46.1

Table 1: Zero-shot performance on the MultiWoz 2.0 dataset. Domains like restaurant and train witness a significant boost in performance over the baselines.

## 5 Ablation Studies
To validate our theoretical hypotheses, search for hyper-parameters, and clearly identify and explain the situations in which meta-learning helps DST, we perform additional analysis in the subsections below.
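Since the slot-wise analysis below relies on the Active Slot Accuracy metric of Sec. 4.2, here is a hedged sketch of its computation, under the same assumed state representation as the JGA sketch earlier:

```python
def active_slot_accuracy(predicted_states, gold_states, slot="hotel.day"):
    """Fraction of turns with a non-None gold value for `slot` on which the
    prediction is correct (state representation as in the JGA sketch)."""
    active = [(p, g) for p, g in zip(predicted_states, gold_states)
              if g.get(slot, "None") != "None"]
    if not active:
        return 0.0
    return sum(p.get(slot, "None") == g[slot] for p, g in active) / len(active)
```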
### 5.1 Slot-wise Analysis
Figure 4: Active Slot Accuracy for slots common to different domains.
Figure 5: Active Slot Accuracy for slots unique to a specific target domain.

To pinpoint exactly where D-REPTILE has the advantage, we perform a slot-wise analysis of our models in Figures 4 and 5. Slots are denoted domain_name.slot_name; for example, hotel.day reports the performance of the models in predicting values for the day slot when the target domain is hotel. The overall performance (JGA) in plot 1 of Figure 1 combines all the hotel slots, such as day, people, and area. Figure 4 shows the slots that are common among different domains, while Figure 5 compares performance on slots that are unique to a target domain. We can see that for the common slots, the solid lines (D-REPTILE) mostly lie above their dashed (NFT) counterparts. However, nothing in particular can be said about the slots in Figure 5. This behaviour is expected, as slots unique to a target domain have little to gain from the different slots present in the train domains (which were used for pre-training). This is evident from the fact that slots like hotel.internet and hotel.parking have zero-shot active accuracy close to zero for all pre-training strategies (Figure 5). Wherever slots are shared between domains, however, pre-training has a much larger influence, and there the merit of the generalizable initialization learned by D-REPTILE over NFT is clearly evident (Figure 4).

Figure 6: JGA of the categorical model for the hotel domain with different datasets.
Figure 7: JGA on the restaurant-domain dev set with different hyper-parameters for the best D-REPTILE model.

### 5.2 Hyper-parameter Search
We briefly discuss the choice of the various hyper-parameters here. We use the dev set of the restaurant domain to search for optimum values of the parameters introduced by meta-learning, while the rest are kept the same as in the STARC model Gao et al. (2020). In Figure 7, we plot the variation in performance with $k$ and $p_{D}(.)$. As with any meta-learning algorithm, setting $k$ too small or too large hurts performance in our case as well (especially $k=1$, where the algorithm becomes theoretically similar to NFT Nichol et al. (2018)); hence the optimum value $k=5$ is used in all our experiments. Also, similar to the conclusion of Dou et al. (2019), we find it helpful to choose $p_{D}(.)$ for each domain proportional to the size of its training dataset (blue vs. red line). This is attributed to the fact that, when the data is imbalanced across train domains, the algorithm gets to see all the data from the resource-rich domain, which is chosen more often, and hence generalizes better.

### 5.3 Adding more train domains
As mentioned in the previous section, we observe that the benefits of D-REPTILE are much more pronounced when the target domain is from the DSTC8 dataset than when it is from MultiWoz (Figure 3). Given that DSTC8 has 23 train domains compared to 4 for MultiWoz, it is not difficult to see the reason for this boost in performance. In this subsection, we ask whether a MultiWoz target can also gain from the additional DSTC8 domains. For ease of computation, we experiment only with the categorical model, with hotel as the target domain. We use both the DSTC8 domains and the MultiWoz domains (excluding, of course, the hotel-domain data during pre-training) and test on the hotel data from MultiWoz. These runs are represented by the additional pink and black lines in Figure 6.
We observe that although D-REPTILE improves performance over the NFT baseline, adding the additional domains does not help the model much overall (the solid black line is similar to the solid blue line). This shows that, in addition to the number of distinct training tasks, the relatedness of those tasks is also crucial for meta-learning: the DSTC8 domains, which are out-of-sample for a MultiWoz target domain, did not prove effective. (The small difference between the JGA values for 1-dialogue fine-tuning in Figure 6 and the categorical model in Figure 2 is due to a difference in the choice of the single hotel-domain dialogue used for fine-tuning.)

## 6 Conclusion
We conclude our analysis of the merits of meta-learning, as compared to naive pre-training, for the DST problem on a very positive note. Given the practical applicability of the very-low-data setting, we provide strong evidence to a developer of an automated conversational system for an unknown domain that, irrespective of his/her model and target domain, D-REPTILE can achieve significant improvement (sometimes almost double) over conventional fine-tuning methods at no additional cost. With detailed ablations, we further provide insights into which slots and domains will particularly benefit from pre-training strategies and which will require additional data. Being agnostic to the underlying model, our proposed algorithm has the capability to push the state of the art in the zero/few-shot DST problem, giving hope for expanding the scope of similar chatbot-based systems to new businesses.

## References
* Bansal et al. (2019) Trapit Bansal, Rishikesh Jha, and Andrew McCallum. 2019. Learning to few-shot learn across diverse natural language classification tasks. _arXiv preprint arXiv:1911.03863_.
* Budzianowski et al. (2018) Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ: A large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. _arXiv preprint arXiv:1810.00278_.
* Campagna et al. (2020) Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, and Monica S Lam. 2020. Zero-shot transfer learning with synthesized data for multi-domain dialogue state tracking. _arXiv preprint arXiv:2005.00891_.
* Dai et al. (2020) Yinpei Dai, Hangyu Li, Chengguang Tang, Yongbin Li, Jian Sun, and Xiaodan Zhu. 2020. Learning low-resource end-to-end goal-oriented dialog for fast and reliable system deployment. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 609–618.
* Dou et al. (2019) Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. _arXiv preprint arXiv:1908.10423_.
* Eric et al. (2019) Mihail Eric, Rahul Goel, Shachi Paul, Adarsh Kumar, Abhishek Sethi, Peter Ku, Anuj Kumar Goyal, Sanchit Agarwal, Shuyang Gao, and Dilek Hakkani-Tur. 2019. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. _arXiv preprint arXiv:1907.01669_.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. _arXiv preprint arXiv:1703.03400_.
* Fisch et al. (2019) Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. _arXiv preprint arXiv:1910.09753_.
* Gao et al.
(2020) Shuyang Gao, Sanchit Agarwal, Tagyoung Chung, Di Jin, and Dilek Hakkani-Tur. 2020\. From machine reading comprehension to dialogue state tracking: Bridging the gap. _arXiv preprint arXiv:2004.05827_. * Goel et al. (2019) Rahul Goel, Shachi Paul, and Dilek Hakkani-Tür. 2019. Hyst: A hybrid approach for flexible and accurate dialogue state tracking. _arXiv preprint arXiv:1907.00883_. * Gu et al. (2018) Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. 2018. Meta-learning for low-resource neural machine translation. _arXiv preprint arXiv:1808.08437_. * Heck et al. (2020) Michael Heck, Carel van Niekerk, Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Gašić. 2020. Trippy: A triple copy strategy for value independent neural dialog state tracking. _arXiv preprint arXiv:2005.02877_. * Huang et al. (2018) Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, and Xiaodong He. 2018\. Natural language to structured query generation via meta-learning. _arXiv preprint arXiv:1803.02400_. * Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_. * Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. _arXiv preprint arXiv:1704.04683_. * Lee et al. (2019) Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. Sumbt: Slot-utterance matching for universal and scalable belief tracking. _arXiv preprint arXiv:1907.07421_. * Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_. * Mosbach et al. (2020) Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. _arXiv preprint arXiv:2006.04884_. * Nichol et al. (2018) Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. _arXiv preprint arXiv:1803.02999_. * Qian and Yu (2019) Kun Qian and Zhou Yu. 2019. Domain adaptive dialog generation via meta learning. _arXiv preprint arXiv:1906.03520_. * Rastogi et al. (2019) Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. _arXiv preprint arXiv:1909.05855_. * Sennrich and Zhang (2019) Rico Sennrich and Biao Zhang. 2019. Revisiting low-resource neural machine translation: A case study. _arXiv preprint arXiv:1905.11901_. * Sun et al. (2019) Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. Dream: A challenge data set and models for dialogue-based reading comprehension. _Transactions of the Association for Computational Linguistics_ , 7:217–231. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008. * Wu et al. (2019) Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. _arXiv preprint arXiv:1905.08743_. * Yan et al. (2020) Ming Yan, Hao Zhang, Di Jin, and Joey Tianyi Zhou. 2020. 
Multi-source meta transfer for low resource multiple-choice question answering. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7331–7341. * Zhang et al. (2019) Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wan, Philip S Yu, Richard Socher, and Caiming Xiong. 2019. Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking. _arXiv preprint arXiv:1910.03544_.
# Extensions of definable local homomorphisms in o-minimal structures and semialgebraic groups

Eliana Barriga

Department of Mathematics, Ben Gurion University of the Negev, Be'er-Sheva 84105, Israel<EMAIL_ADDRESS>

###### Abstract. We state conditions for which a definable local homomorphism between two locally definable groups $\mathcal{G}$, $\mathcal{G^{\prime}}$ can be uniquely extended when $\mathcal{G}$ is simply connected (Theorem 2.1). As an application of this result we obtain an easy proof of [3, Thm. 9.1] (see Corollary 2.2). We also prove that Theorem 10.2 in [3] holds for any definably connected definably compact semialgebraic group $G$, not necessarily abelian, over a sufficiently saturated real closed field $R$; namely, that the o-minimal universal covering group $\widetilde{G}$ of $G$ is an open locally definable subgroup of $\widetilde{H\left(R\right)^{0}}$ for some $R$-algebraic group $H$ (Thm. 3.3). Finally, for an abelian definably connected semialgebraic group $G$ over $R$, we describe $\widetilde{G}$ as a locally definable extension of subgroups of the o-minimal universal covering groups of commutative $R$-algebraic groups (Theorem 3.4).

###### Key words and phrases: O-minimality, local homomorphisms, semialgebraic groups, real closed fields, algebraic groups, locally definable groups

###### 2010 Mathematics Subject Classification: 03C64; 20G20; 22E15; 03C68; 22B99

Supported by the Israel Science Foundation (ISF) grants No. 181/16 and 1382/15.

## 1\. Introduction and Preliminaries

The study of definable and locally definable groups has been of importance in the research in model theory of o-minimal structures, and includes such classes as the semialgebraic and the subanalytic groups. The ordered real field $\left(\mathbb{R},<,+,\cdot\right)$ as well as its expansion with the exponential function are examples of o-minimal structures [6]. Let $\mathcal{M}$ be a sufficiently $\kappa$-saturated o-minimal structure. By definable we mean definable in $\mathcal{M}$. A group is called locally definable if the domain of the group and the graph of the group operation are countable unions of definable sets. Every $n$-dimensional locally definable group $\mathcal{G}$ can be endowed with a unique topology $\tau$ making the group into a topological group such that any $g\in\mathcal{G}$ has a definable neighborhood definably isomorphic to $M^{n}$ [16, Prop. 2.2]. From now on, any topological property on a locally definable group refers to this $\tau$-topology, unless stated otherwise. When a locally definable group is definable, its $\tau$-topology agrees with the t-topology given by Pillay in [18, Prop. 2.5]. The t-topology exists for every group $\mathcal{G}$ definable in an o-minimal structure $\mathcal{N}$ (not necessarily saturated), and when $\mathcal{N}$ o-minimally expands the reals, $\mathcal{G}$ is a real Lie group [18]. A locally definable subset $X$ of a locally definable group $\mathcal{G}$ is $\tau$-connected if $X$ has no nonempty proper subset that is ($\tau$-)clopen relative to $X$ and whose intersection with any definable subset of $\mathcal{G}$ is definable. When $\mathcal{G}$ is definable, by [18, Corollary 2.10], there is a unique maximal definably connected definable subset of $\mathcal{G}$ containing the identity element of $\mathcal{G}$, which we call the definable identity component of $\mathcal{G}$, and we denote it by $\mathcal{G}^{0}$.
Thus, $\mathcal{G}$ is definably connected if and only if $\mathcal{G}=\mathcal{G}^{0}$, or, equivalently by [18], if $\mathcal{G}$ has no proper definable subgroup of finite index. We say that a definable group $\mathcal{G}$ is definably compact if every definable path $\gamma:\left(0,1\right)\rightarrow\mathcal{G}$ has limit points in $\mathcal{G}$ (where the limits are taken with respect to the t-topology on $\mathcal{G}$). The notions of path connectedness, homotopy, o-minimal fundamental group, and simple connectedness are defined as in algebraic topology using locally definable maps instead of general maps. For details on these definitions we refer the reader to [2]. As in the Lie setting [5], Edmundo and Eleftheriou defined and proved in [9] the existence of the o-minimal universal covering group $\widetilde{\mathcal{G}}$ of a connected locally definable group $\mathcal{G}$. As in Lie groups, $\widetilde{\mathcal{G}}$ covers any cover of $\mathcal{G}$ in this category, is simply connected, and allows one to study definable and locally definable groups through it. In the theory of Lie groups, it is known that when two Lie groups are locally isomorphic, their universal covers are (globally) isomorphic as Lie groups:

###### Fact 1.1. ([14, Corollary 4.20]) Let $\mathcal{H}$ and $\mathcal{H}^{\prime}$ be connected Lie groups, and $\widetilde{\mathcal{H}}^{Lie}$ and $\widetilde{\mathcal{H}^{\prime}}^{Lie}$ their universal covering Lie groups respectively. Then $\mathcal{H}$ and $\mathcal{H}^{\prime}$ are locally isomorphic if and only if $\widetilde{\mathcal{H}}^{Lie}$ and $\widetilde{\mathcal{H}^{\prime}}^{Lie}$ are isomorphic as Lie groups.

We say that two topological groups $\mathcal{H}$ and $\mathcal{H}^{\prime}$ are locally homomorphic if there are neighborhoods $U$ and $U^{\prime}$ of the identities of $\mathcal{H}$ and $\mathcal{H}^{\prime}$ respectively and a map $f:U\subseteq\mathcal{H}\rightarrow U^{\prime}\subseteq\mathcal{H}^{\prime}$ such that $f\left(hh^{\prime}\right)=f\left(h\right)f\left(h^{\prime}\right)$ whenever $h,h^{\prime}$, and $hh^{\prime}$ belong to $U$ [5, Definition 2, Chap. 2, Section 7]; such a map $f$ is called a local homomorphism of $\mathcal{H}$ into $\mathcal{H}^{\prime}$, and if in addition $f$ is a homeomorphism, $f$ is called a local isomorphism, and $\mathcal{H}$ and $\mathcal{H}^{\prime}$ are locally isomorphic. From a model-theoretical point of view, it is natural to ask whether Fact 1.1 holds or not in the category of locally definable groups. For this, we will restrict the previous definitions to definable maps. However, if $\mathcal{M}$ expands an ordered field $\left(R,<,+,0,\cdot,1\right)$, the additive group $\left(R,+\right)$ is definably locally isomorphic to the group $G=\left(\left[0,1\right),+_{mod\,1}\right)$ with addition modulo $1$ through the map $f:(-\frac{1}{2},\frac{1}{2})\subseteq\left(R,+\right)\rightarrow G$, $f\left(t\right)=t\,mod\,1$, but $\left(R,+\right)$ and $\widetilde{G}=\left(\bigcup_{n\in\mathbb{N}}\left(-n,n\right),+\right)\leq\left(R,+\right)$ are not isomorphic as locally definable groups ($\left(R,+\right)$ is definable, but $\widetilde{G}$ is not). Fact 1.1 follows from a well-known result for topological groups that assures that a local homomorphism between topological groups with domain a connected neighborhood of the identity element of a simply connected group $\mathcal{H}$ can be extended to a group homomorphism from the whole group $\mathcal{H}$:
###### Fact 1.2. ([5, Thm. 3, Chap. 2, Section 7]) Let $\mathcal{H}$ be a simply connected topological group. Let $f$ be a local homomorphism of $\mathcal{H}$ into a group $\mathcal{H}^{\prime}$. If the set on which $f$ is defined is connected, then it is possible to extend $f$ to a homomorphism $\overline{f}:\mathcal{H}\rightarrow\mathcal{H}^{\prime}$.

Again the above example of $\left(R,+\right)$ and $G=\left(\left[0,1\right),+_{mod\,1}\right)$ with the definable local isomorphism $f:(-\frac{1}{2},\frac{1}{2})\subseteq\left(R,+\right)\rightarrow G$, $f\left(t\right)=t\,mod\,1$ shows that $f$ cannot be extended to a locally definable homomorphism from $\left(R,+\right)$ into $G$ (otherwise, the kernel, $\mathbb{Z}$, of such a (locally) definable homomorphism would be definable in $\mathcal{M}$, which is not possible). Therefore, Fact 1.2 does not hold in the category of locally definable groups. Nevertheless, we were able to formulate a sufficient condition for a definable local homomorphism to extend to a locally definable homomorphism of the whole group. More precisely, we prove the following.

###### Theorem 2.1. Let $\mathcal{G}$ and $\mathcal{G}^{\prime}$ be locally definable groups, $U$ a definably connected definable neighborhood of the identity $e$ of $\mathcal{G}$, and $f:U\subseteq\mathcal{G}\rightarrow\mathcal{G}^{\prime}$ a definable local homomorphism. Assume that there is a definable neighborhood $V$ of $e$ generic in $\mathcal{G}$ such that $V^{-1}V\subseteq U$. If $\mathcal{G}$ is simply connected, then $f$ is uniquely extendable to a locally definable group homomorphism $\overline{f}:\mathcal{G}\rightarrow\mathcal{G}^{\prime}$.

Above, a subset $X$ of a group $\mathcal{G}$, locally definable in a $\kappa$-saturated o-minimal structure, is left (right) generic in $\mathcal{G}$ if fewer than $\kappa$-many left (right) group translates of $X$ cover $\mathcal{G}$; i.e., $\mathcal{G}=AX$ ($\mathcal{G}=XA$) for some $A\subseteq\mathcal{G}$ with $\left|A\right|<\kappa$. $X$ is generic if it is both left and right generic. A locally definable group may not have definable generic subsets; however, when it does, the group has interesting properties, see for example [10, Thm. 3.9]. Theorem 2.1 allows us to easily prove Corollary 2.2, a result on the extension of a definable local homomorphism between abelian locally definable groups that we have previously proved in [3, Thm. 9.1] using different methods. From now on until the end of this paper, let $R=\left(R,<,+,\cdot\right)$ be a sufficiently saturated real closed field, and denote by $R_{a}$ its additive group $\left(R,+\right)$ and by $R_{m}$ its multiplicative group of positive elements $\left(R^{>0},\cdot\right)$. Corollary 2.2 was applied in [3] to prove a characterization of the o-minimal universal covering group $\widetilde{G}$ of an abelian definably connected definably compact semialgebraic group $G$ over $R$ in terms of $R$-algebraic groups [3, Thm. 10.2] (see Fact 3.1). In Section 3.1, we show that Theorem 10.2 in [3] also holds for any definably connected definably compact semialgebraic group over $R$, not necessarily abelian (Thm. 3.3).

###### Theorem 3.3. Let $G$ be a definably connected definably compact group definable in $R$. Then $\widetilde{G}$ is an open locally definable subgroup of $\widetilde{H\left(R\right)^{0}}$ for some $R$-algebraic group $H$.

By [17], every abelian torsion free semialgebraic group over $R$ is definably isomorphic to $R_{a}^{k}\times R_{m}^{n}$ for some $k,n\in\mathbb{N}$.
Therefore, the results for torsion free and definably compact semialgebraic groups over $R$ suggest the following question.

###### Question 1.3. Let $G$ be an abelian definably connected semialgebraic group over $R$. Is $\widetilde{G}$ an open locally definable subgroup of $\widetilde{H\left(R\right)^{0}}$ for some $R$-algebraic group $H$?

Although the above question remains open, we were able to prove:

###### Theorem 3.4. Let $G$ be a definably connected semialgebraic group over $R$. Then there exist a locally definable group $\mathcal{W}$ and commutative $R$-algebraic groups $H_{1}$, $H_{2}$ such that $\widetilde{G}$ is a locally definable extension of $\mathcal{W}$ by $H_{2}\left(R\right)^{0}$ where $\mathcal{W}$ is an open subgroup of $\widetilde{H_{1}\left(R\right)^{0}}$. In fact, $H_{2}\left(R\right)^{0}$ is isomorphic to $R_{a}^{k}\times R_{m}^{n}$ as definable groups for some $k,n\in\mathbb{N}$.

Here we say that a locally definable group $\mathcal{G}$ is a locally definable extension of $\mathcal{G}^{\prime}$ by $\mathcal{G}^{\prime\prime}$ if we have an exact sequence $1\rightarrow\mathcal{G}^{\prime\prime}\rightarrow\mathcal{G}\rightarrow\mathcal{G}^{\prime}\rightarrow 1$ in the category of locally definable groups with locally definable homomorphisms [8, Section 4].

## 2\. An extension of a definable local homomorphism between locally definable groups

Recall that we are working in a sufficiently $\kappa$-saturated o-minimal structure $\mathcal{M}$.

###### Theorem 2.1. Let $\mathcal{G}$ and $\mathcal{G}^{\prime}$ be locally definable groups, $U$ a definably connected definable neighborhood of the identity $e$ of $\mathcal{G}$, and $f:U\subseteq\mathcal{G}\rightarrow\mathcal{G}^{\prime}$ a definable local homomorphism. Assume that there is a definable neighborhood $V$ of $e$, generic in $\mathcal{G}$, such that $V^{-1}V\subseteq U$. If $\mathcal{G}$ is simply connected, then $f$ is uniquely extendable to a locally definable group homomorphism $\overline{f}:\mathcal{G}\rightarrow\mathcal{G}^{\prime}$.

###### Proof. Let $x\in\mathcal{G}$. Since $\mathcal{G}$ is path connected, there is a locally definable path $\omega:I=[0,1]\rightarrow\mathcal{G}$ such that $\omega\left(0\right)=e$, $\omega\left(1\right)=x$. Note that since $V$ is generic in $\mathcal{G}$, the topological interior of $V$ is also generic in $\mathcal{G}$ – this follows, for example, from [13, Lemma 2.22] –. So, replacing $V$ with its topological interior, we now have that $V$ is an open neighborhood of $e$ and generic in $\mathcal{G}$. By the genericity of $V$ in $\mathcal{G}$, $\mathcal{G}=A\cdot V$ for some $A\subseteq\mathcal{G}$, $\left|A\right|<\kappa$. As $\omega\left(I\right)$ is a definable subset of $A\cdot V$, saturation yields that there is a finite set $A_{0}\subseteq A$ such that $\omega\left(I\right)\subseteq A_{0}\cdot V$. Then, $I=\bigcup_{a\in A_{0}}\omega^{-1}\left(a\cdot V\right)$. As $V$ is open and $\omega$ is continuous, each $\omega^{-1}\left(a\cdot V\right)$ is an open subset of $I$. Since $\mathcal{M}$ is an o-minimal structure, $\omega^{-1}\left(a\cdot V\right)$ is a finite union of points and intervals; being open, it is a finite union of open intervals. Then $\left\\{\omega^{-1}\left(a\cdot V\right):a\in A_{0}\right\\}$ is a collection of open intervals in $I$. Thus we can choose a division of $I$ into subintervals $\left[t_{i},t_{i+1}\right]$ such that $0=t_{0}<t_{1}<\ldots<t_{n}=1$ and $\omega\left(\left[t_{i},t_{i+1}\right]\right)\subseteq a_{i}V$ for some $a_{i}\in A_{0}$.
So, for $t,t^{\prime}\in\left[t_{i},t_{i+1}\right]$, $\omega\left(t\right)=a_{i}v$, $\omega\left(t^{\prime}\right)=a_{i}v^{\prime}$ for $v,v^{\prime}\in V$, and $\omega\left(t\right)^{-1}\omega\left(t^{\prime}\right)=v^{-1}v^{\prime}\in V^{-1}V\subseteq U$. For such a locally definable path $\omega$ and division, define $\overline{f}_{\omega}\left(x\right)\coloneqq f\left(\omega\left(t_{0}\right)^{-1}\omega\left(t_{1}\right)\right)f\left(\omega\left(t_{1}\right)^{-1}\omega\left(t_{2}\right)\right)\cdots f\left(\omega\left(t_{n-1}\right)^{-1}\omega\left(t_{n}\right)\right).$ Now, we will show that $\overline{f}_{\omega}$ is invariant under refinements of the division of $I$. Let $t^{\prime}\in\left[t_{i},t_{i+1}\right]$ be a new subdivision point. Since $\omega\left(t_{i}\right)^{-1}\omega\left(t^{\prime}\right),\omega\left(t^{\prime}\right)^{-1}\omega\left(t_{i+1}\right)\in U$ and $f$ is a local homomorphism, $f\left(\omega\left(t_{i}\right)^{-1}\omega\left(t^{\prime}\right)\right)f\left(\omega\left(t^{\prime}\right)^{-1}\omega\left(t_{i+1}\right)\right)=f\left(\omega\left(t_{i}\right)^{-1}\omega\left(t_{i+1}\right)\right)$. Hence, given two subdivisions of $I$, we can pass to a common refinement, so $\overline{f}_{\omega}$ does not depend on the subdivision of $I$. Now, we will show that $\overline{f}_{\omega}$ is determined independently of the choice of a path $\omega$. Let $\omega^{\prime}:I\rightarrow\mathcal{G}$ be another locally definable path connecting $e$ and $x$. Since $\mathcal{G}$ is simply connected, there is a locally definable homotopy $\Gamma:I\times I\rightarrow\mathcal{G}$ between $\omega$ and $\omega^{\prime}$ with $\Gamma\left(t,0\right)=\omega\left(t\right)$ and $\Gamma\left(t,1\right)=\omega^{\prime}\left(t\right)$. As $\Gamma\left(I\times I\right)$ is a definable subset of $A\cdot V$, again saturation implies that there is a finite set $A_{1}\subseteq A$ such that $\Gamma\left(I\times I\right)\subseteq A_{1}\cdot V$. So, $I\times I\subseteq\bigcup_{a\in A_{1}}\Gamma^{-1}\left(a\cdot V\right)$. By continuity of $\Gamma$, for every $\left(t,t^{\prime}\right)\in I\times I$, there is $I_{i}\times I_{j}\subseteq I\times I$ such that $\left(t,t^{\prime}\right)\in I_{i}\times I_{j}$, $\Gamma\left(I_{i}\times I_{j}\right)\subseteq a_{i,j}V$ for some $a_{i,j}\in A_{1}$. Therefore, we can partition $I$ into finitely many subintervals $I_{i}$ such that $\Gamma\left(I_{i}\times I_{j}\right)\subseteq a_{i,j}V$ for some $a_{i,j}\in A_{1}$. As there are finitely many subintervals $I_{i}$ and they cover $I$, we may assume that $I_{i}=\left[\frac{i}{n},\frac{i+1}{n}\right]$ for some $n\in\mathbb{N}$. Note that if $\left(s,t\right),\left(s^{\prime},t^{\prime}\right)\in I_{i}\times I_{j}$, then $\Gamma\left(s,t\right)^{-1}\Gamma\left(s^{\prime},t^{\prime}\right)=\left(a_{i,j}v\right)^{-1}a_{i,j}v^{\prime}=v^{-1}v^{\prime}\in U$. Let $\omega_{i}\left(t\right)\coloneqq\Gamma\left(t,\frac{i}{n}\right)$ for $i\in\left\\{0,1,\ldots,n\right\\}$. So $\omega_{0}\left(t\right)=\omega\left(t\right)$, $\omega_{n}\left(t\right)=\omega^{\prime}\left(t\right)$. Since $f$ is a local homomorphism, $\overline{f}_{\omega_{i}}\left(x\right)=\overline{f}_{\omega_{i+1}}\left(x\right)$ for $i\in\left\\{0,1,\ldots,n-1\right\\}$. Therefore, $\overline{f}_{\omega}$ is determined independently of the choice of a path $\omega$, and we denote it by $\overline{f}$.
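To make the construction concrete, here is a small worked instance (ours, for illustration only, not from the source). Take $\mathcal{G}=\left(\bigcup_{n\in\mathbb{N}}\left(-n,n\right),+\right)$, $U=\left(-2,2\right)$ and $V=\left(-1,1\right)$; then $V$ is generic since $\mathcal{G}=\mathbb{Z}+V$ and $\left|\mathbb{Z}\right|<\kappa$, and $V^{-1}V=\left(-2,2\right)\subseteq U$. Given a local homomorphism $f$ on $U$ into some $\mathcal{G}^{\prime}$, to compute $\overline{f}$ at $x=5$ take $\omega\left(t\right)=5t$ and the division $t_{i}=\frac{i}{5}$, so that every increment satisfies $\omega\left(t_{i}\right)^{-1}\omega\left(t_{i+1}\right)=1\in U$ and

$$\overline{f}\left(5\right)=\prod_{i=1}^{5}f\left(\omega\left(t_{i-1}\right)^{-1}\omega\left(t_{i}\right)\right)=f\left(1\right)^{5}.$$

Refining the division to $t_{i}=\frac{i}{10}$ instead yields $f\left(\frac{1}{2}\right)^{10}$, which equals $f\left(1\right)^{5}$ by grouping consecutive factors in pairs, because $f\left(\frac{1}{2}\right)f\left(\frac{1}{2}\right)=f\left(1\right)$ as $\frac{1}{2}$, $\frac{1}{2}$, and $1$ all lie in $U$; this is exactly the refinement-invariance shown above.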
Now, let $x,y\in\mathcal{G}$ and let $\omega,\gamma:I\rightarrow\mathcal{G}$ be locally definable paths connecting $e$ and $x$, and $e$ and $y$, respectively. Then the locally definable path $\sigma:I\rightarrow\mathcal{G}$, $\sigma\left(t\right)\coloneqq x\gamma\left(t\right)$ connects $x$ with $xy$. Let $\omega*\sigma$ denote the concatenation of the paths $\omega$ and $\sigma$. Then, $\overline{f}_{\omega}\left(x\right)\overline{f}_{\gamma}\left(y\right)=\overline{f}_{\omega*\sigma}\left(xy\right)$, namely $\overline{f}\left(x\right)\overline{f}\left(y\right)=\overline{f}\left(xy\right)$, so $\overline{f}$ is a group homomorphism. Next, we will see that $\overline{f}$ is an extension of $f$. As $U$ is definably connected, it is path connected [2]; hence, if $x\in U$, there is a locally definable path $\omega:I\rightarrow\mathcal{G}$ such that $\omega\left(0\right)=e$, $\omega\left(1\right)=x$, and $\omega\left(I\right)\subseteq U$. As $\overline{f}$ does not depend on the subdivisions of $I$, taking $t_{0}=0$, $t_{1}=1$, it is clear that $\overline{f}\left(x\right)=f\left(\omega\left(t_{0}\right)^{-1}\omega\left(t_{1}\right)\right)=f\left(x\right)$. Now, let $h$ be another extension of $f$. Let $\overline{\mathcal{G}}=\left\\{g\in\mathcal{G}:h\left(g\right)=\overline{f}\left(g\right)\right\\}$. Then $\overline{\mathcal{G}}$ is a locally definable subgroup of $\mathcal{G}$, and $U\subseteq\overline{\mathcal{G}}$. As $U$ is generic in $\mathcal{G}$ (it contains the generic set $V$), $U$ generates $\mathcal{G}$ [10, Fact 2.3(2)], so $\mathcal{G}\subseteq\overline{\mathcal{G}}$ and $h=\overline{f}$. Therefore, $f$ is uniquely extendable to a locally definable group homomorphism from $\mathcal{G}$ into $\mathcal{G}^{\prime}$. Finally, observe that $\overline{f}$ is a locally definable map on $\mathcal{G}$. For this, note that since $\overline{f}$ is a group homomorphism, $\overline{f}\left(x^{-1}y\right)=\overline{f}\left(x\right)^{-1}\overline{f}\left(y\right)=f\left(x\right)^{-1}f\left(y\right)$ for every $x,y\in U$; then $\overline{f}$ restricted to $\prod_{n}U^{-1}U$ is a definable map. And as $\mathcal{G}=\left\langle U\right\rangle=\bigcup_{n\in\mathbb{N}}\prod_{n}U^{-1}U$, $\overline{f}$ is a locally definable map on $\mathcal{G}$. ∎

For a locally definable group $\mathcal{G}$ in $\mathcal{M}$, we denote by $\mathcal{G}^{00}$ the smallest type-definable subgroup of $\mathcal{G}$ of index smaller than $\kappa$, if it exists, where by a small set we mean a subset of $M^{n}$ with cardinality smaller than $\kappa$ ([11]). $\mathcal{G}^{00}$ may not exist (see an example in [10, Subsection 2.2]). For definable groups such a type-definable group always exists [19]. With Theorem 2.1, it is easy to prove the following result, which was previously demonstrated in [3] using a different technique.

###### Corollary 2.2. ([3, Thm. 9.1]) Let $\mathcal{G}$ and $\mathcal{G}^{\prime}$ be two abelian locally definable groups such that $\mathcal{G}$ is connected, torsion free, and $\mathcal{G}^{00}$ exists. Let $U\subseteq\mathcal{G}$ be a definably connected definable set such that $\mathcal{G}^{00}\subseteq U$, and $f:U\subseteq\mathcal{G}\rightarrow\mathcal{G}^{\prime}$ a definable local homomorphism. Then $f$ is uniquely extendable to a locally definable group homomorphism $\overline{f}:\mathcal{G}\rightarrow\mathcal{G}^{\prime}$.

###### Proof. We will just check that the assumptions are enough to apply Theorem 2.1. First, we will see that $\mathcal{G}$ is simply connected.
Since $\mathcal{G}^{00}$ exists, $\mathcal{G}$ is definably generated, and by [10, Thm 3.9], $\mathcal{G}$ covers an abelian definable group. Claim 6.4 in [3] yields that $\mathcal{G}$ is simply connected since $\mathcal{G}$ is also torsion free. Now, as $\mathcal{G}^{00}\subseteq U$, by saturation, there is a definable set $\overline{V}$ such that $\mathcal{G}^{00}\subseteq\overline{V}\subseteq\overline{V}^{-1}\overline{V}\subseteq U$. Let $V$ be the topological interior of $\overline{V}$ in $\mathcal{G}$, which is definable; then $\mathcal{G}^{00}\subseteq V\subseteq V^{-1}V\subseteq U$ since $\mathcal{G}^{00}$ is open in $\mathcal{G}$. Therefore, $V$ is a definable neighborhood of the identity in $\mathcal{G}$, open in $\mathcal{G}$, generic in $\mathcal{G}$, and $V^{-1}V\subseteq U$. Finally, Theorem 2.1 implies that there is a unique extension $\overline{f}:\mathcal{G}\rightarrow\mathcal{G}^{\prime}$ of $f$ that is a locally definable group homomorphism. ∎

## 3\. Universal covers of definably locally homomorphic locally definable groups

As in [3], Corollary 2.2, together with other results in [3], implies the following fact, and establishes a relation between the o-minimal universal covering groups of two definably locally homomorphic locally definable groups. This result could be interpreted as an analogue in the category of locally definable groups of the known fact that two connected locally isomorphic Lie groups have isomorphic universal covering Lie groups (Fact 1.1).

###### Fact 3.1. ([3, Thm. 10.1]) Let $\mathcal{G}$ and $\mathcal{G}^{\prime}$ be two divisible abelian connected locally definable groups such that $\mathcal{G}^{00}$ exists and $\mathcal{G}^{00}$ is a decreasing intersection of $\omega$-many simply connected definable subsets of $\mathcal{G}$. Let $X\subseteq\mathcal{G}$ be a definable set with $\mathcal{G}^{00}\subseteq X$, and $f:X\subseteq\mathcal{G}\rightarrow\mathcal{G}^{\prime}$ a definable homeomorphism and local homomorphism. Then $\widetilde{\mathcal{G}}$ is an open locally definable subgroup of $\widetilde{\mathcal{G}^{\prime}}$.

An important corollary of the above result is the characterization of the o-minimal universal covers of the abelian definably connected definably compact semialgebraic groups over $R$ in terms of algebraic groups.

###### Fact 3.2. ([3, Thm. 10.2]) Let $G$ be an abelian definably connected definably compact group definable in $R$. Then $\widetilde{G}$ is an open locally definable subgroup of $\widetilde{H\left(R\right)^{0}}$ for some Zariski-connected $R$-algebraic group $H$.

In fact, the algebraic group $H$ in Fact 3.2 is commutative since $G$ is abelian, and by Theorem 4.1 in [3], $G$ and $H\left(R\right)^{0}$ are definably locally homomorphic.

### 3.1. Universal cover of a definably compact semialgebraic group

Now, we will show that Fact 3.2 also holds for every definably connected definably compact $R$-definable group, not necessarily abelian. For this, we will use several results of Hrushovski, Peterzil, and Pillay in [12] together with the abelian case, Fact 3.2.

###### Theorem 3.3. Let $G$ be a definably connected definably compact group definable in $R$. Then $\widetilde{G}$ is an open locally definable subgroup of $\widetilde{H\left(R\right)^{0}}$ for some $R$-algebraic group $H$.

###### Proof.
By [12, Corollary 6.4], $G$ is definably isomorphic to the almost direct product of $S$ and $G_{0}$ where $S$ is some definably connected semisimple definable group, and $G_{0}$ is some abelian definably connected definably compact group, so $G\simeq\left(G_{0}\times S\right)/F$ for some finite central subgroup $F\subseteq G_{0}\times S$. Therefore, $G_{0}\times S$ is a finite cover of $G$, so $\widetilde{G}\simeq\widetilde{G_{0}\times S}$. Since the o-minimal fundamental group for locally definable groups (see [2]) satisfies $\pi_{1}\left(\mathcal{G}_{1},g_{1}\right)\times\pi_{1}\left(\mathcal{G}_{2},g_{2}\right)\cong\pi_{1}\left(\mathcal{G}_{1}\times\mathcal{G}_{2},\left(g_{1},g_{2}\right)\right)$ for locally definable groups $\mathcal{G}_{1}$, $\mathcal{G}_{2}$ and elements $g_{1}\in\mathcal{G}_{1}$, $g_{2}\in\mathcal{G}_{2}$, the product $\widetilde{\mathcal{G}_{1}}\times\widetilde{\mathcal{G}_{2}}$ is simply connected, and hence $\widetilde{\mathcal{G}_{1}\times\mathcal{G}_{2}}$ is isomorphic to $\widetilde{\mathcal{G}_{1}}\times\widetilde{\mathcal{G}_{2}}$ as locally definable groups. Hence, $\widetilde{G_{0}\times S}\simeq\widetilde{G_{0}}\times\widetilde{S}$. Now, by [12, Fact 1.2(3)], the center $Z\left(S\right)$ of $S$ is finite and $S/Z\left(S\right)$ is definably isomorphic to a direct product $S_{1}\times\ldots\times S_{n}$ of finitely many definably simple groups $S_{i}$'s. By [12, Fact 1.2(1)], $S_{i}\simeq H_{i}\left(R_{i}\right)^{0}$ for some real closed field $R_{i}$ definable in $R$ and some $R_{i}$-algebraic group $H_{i}$. But, by [15, Thm. 1.1], $R_{i}\simeq R$. Let $H_{\star}\coloneqq H_{1}\times\ldots\times H_{n}$; then $H_{\star}\left(R\right)^{0}\simeq H_{1}\left(R\right)^{0}\times\ldots\times H_{n}\left(R\right)^{0}\simeq S_{1}\times\ldots\times S_{n}$; namely, every semisimple semialgebraic group $S$ over $R$ is, up to its center, $H_{\star}\left(R\right)^{0}$ for some $R$-algebraic group $H_{\star}$. Thus, $\widetilde{S}\simeq\widetilde{H_{\star}\left(R\right)^{0}}$. Now, by Fact 3.2, $\widetilde{G_{0}}\leq\widetilde{H_{\star\star}\left(R\right)^{0}}$ for some $R$-algebraic group $H_{\star\star}$; then $\widetilde{G}\simeq\widetilde{G_{0}}\times\widetilde{S}\leq\widetilde{H_{\star\star}\left(R\right)^{0}}\times\widetilde{H_{\star}\left(R\right)^{0}}\simeq\widetilde{H\left(R\right)^{0}}$ for $H\coloneqq H_{\star\star}\times H_{\star}$. Note that $\widetilde{G}$ and $\widetilde{H\left(R\right)^{0}}$ have the same dimension, so $\widetilde{G}$ is, up to isomorphism of locally definable groups, an open locally definable subgroup of $\widetilde{H\left(R\right)^{0}}$ for some $R$-algebraic group $H$. ∎

### 3.2. Universal cover of an abelian semialgebraic group

In this subsection, we will prove that the o-minimal universal covering group of an abelian definably connected semialgebraic group over $R$ is a locally definable (group) extension, in the category of locally definable groups (see [8, Section 4] for basics on locally definable extensions), of an open locally definable subgroup of $\widetilde{H_{1}\left(R\right)^{0}}$ by $H_{2}\left(R\right)^{0}$ for some $R$-algebraic groups $H_{1}$, $H_{2}$. This will mainly follow from the characterization of abelian groups definable in o-minimal structures [7] and our previous results in this work.
Recall that for the sufficiently saturated real closed field $R=\left(R,<,+,\cdot\right)$, $R_{a}$ denotes the additive group $\left(R,+\right)$, and $R_{m}$ the multiplicative group of positive elements $\left(R^{>0},\cdot\right)$.

###### Theorem 3.4. Let $G$ be a definably connected semialgebraic group over $R$. Then there exist a locally definable group $\mathcal{W}$ and commutative $R$-algebraic groups $H_{1}$, $H_{2}$ such that $\widetilde{G}$ is a locally definable extension of $\mathcal{W}$ by $H_{2}\left(R\right)^{0}$ where $\mathcal{W}$ is an open subgroup of $\widetilde{H_{1}\left(R\right)^{0}}$. In fact, $H_{2}\left(R\right)^{0}$ is isomorphic to $R_{a}^{k}\times R_{m}^{n}$ as definable groups for some $k,n\in\mathbb{N}$.

###### Proof. By [7], $G$ is a definable extension of some abelian definably compact definably connected definable group $K$ by the maximal torsion free normal definable subgroup $T$ of $G$: $1\rightarrow T\rightarrow G\rightarrow K\rightarrow 1$. By [3, Thm. 10.2], $\widetilde{K}\leq\widetilde{H_{1}\left(R\right)^{0}}$ for some commutative $R$-algebraic group $H_{1}$. By [17], $T$ is definably isomorphic to $R_{a}^{k}\times R_{m}^{n}$ for some $k,n\in\mathbb{N}$. So in particular, $T\simeq H_{2}\left(R\right)^{0}$ for $H_{2}=\left(R\left(\sqrt{-1}\right),+\right)^{k}\times\left(R\left(\sqrt{-1}\right),\cdot\right)^{n}$. Thus, so far we have that $1\rightarrow H_{2}\left(R\right)^{0}\rightarrow G\overset{\pi}{\rightarrow}K\rightarrow 1$ with $\widetilde{K}\leq\widetilde{H_{1}\left(R\right)^{0}}$ for some commutative $R$-algebraic groups $H_{1},H_{2}$. (Note that the kernel of $\pi$ is $T\simeq H_{2}\left(R\right)^{0}$, so all the constructions below take place in $H_{2}\left(R\right)^{0}$.) By [17, Thm. 5.1], there is a continuous definable section $s:K\rightarrow G$ (continuous with respect to their $\tau$-topologies). Then the map $\varphi:H_{2}\left(R\right)^{0}\times K\rightarrow G$, $\varphi\left(h,k\right)=hs\left(k\right)$ is a definable homeomorphism with inverse $\varphi^{-1}\left(g\right)=\left(g\left(s\left(\pi\left(g\right)\right)\right)^{-1},\pi\left(g\right)\right)$ for $g\in G$. Here the direct product $H_{2}\left(R\right)^{0}\times K$ has the product topology, and the groups $K$, $G$, and the subgroup $H_{2}\left(R\right)^{0}\leq G$ have the $\tau$-topology ([16]) which coincides with the t-topology ([18]) for definable groups. Let $f$ be the definable two-cocycle associated with the section $s$, i.e., $f:K\times K\rightarrow H_{2}\left(R\right)^{0},\quad\left(k_{1},k_{2}\right)\mapsto s\left(k_{1}\right)s\left(k_{2}\right)\left(s\left(k_{1}k_{2}\right)\right)^{-1}.$ Then, $G$ is definably isomorphic to the group $\left(H_{2}\left(R\right)^{0}\times K,\cdot_{f}\right)$ with group operation given by $\left(h,k\right)\cdot_{f}\left(h^{\prime},k^{\prime}\right)=\left(hh^{\prime}f\left(k,k^{\prime}\right),kk^{\prime}\right),$ through the definable group isomorphism $\varphi$. Let $p_{K}:\widetilde{K}\rightarrow K$ be the o-minimal universal covering homomorphism of $K$, and $\textrm{id}:H_{2}\left(R\right)^{0}\rightarrow H_{2}\left(R\right)^{0}$ the identity map on $H_{2}\left(R\right)^{0}$. Now, let $\widetilde{f}:\widetilde{K}\times\widetilde{K}\rightarrow H_{2}\left(R\right)^{0},\quad\left(\widetilde{k_{1}},\widetilde{k_{2}}\right)\mapsto f\left(p_{K}\left(\widetilde{k_{1}}\right),p_{K}\left(\widetilde{k_{2}}\right)\right).$
The two-cocycle condition ([7, Eq. 3, Section 3]) of $f$ implies the same condition for $\widetilde{f}$, thus the group $\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)$ with group operation given by $\left(h,\widetilde{k_{1}}\right)\cdot_{\widetilde{f}}\left(h^{\prime},\widetilde{k_{2}}\right)=\left(hh^{\prime}\widetilde{f}\left(\widetilde{k_{1}},\widetilde{k_{2}}\right),\widetilde{k_{1}}\widetilde{k_{2}}\right)$ induced by $\widetilde{f}$ is a locally definable group. Let $i:H_{2}\left(R\right)^{0}\rightarrow H_{2}\left(R\right)^{0}\times\widetilde{K}$ be the map $h\mapsto\left(h,1\right)$, and $\pi_{2}:H_{2}\left(R\right)^{0}\times\widetilde{K}\rightarrow\widetilde{K}$ the projection map onto the second coordinate. So far we have that $1\rightarrow H_{2}\left(R\right)^{0}\overset{i}{\rightarrow}\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)\overset{\pi_{2}}{\rightarrow}\widetilde{K}\rightarrow 1$ is a locally definable extension. Note that the locally definable group $\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)$ is connected because $H_{2}\left(R\right)^{0}$ and $\widetilde{K}$ are both connected [8, Corollary 4.8(ii)]. Now, the map $\varphi\circ\left(\textrm{id}\times p_{K}\right):\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)\rightarrow G,\quad\left(h,\widetilde{k}\right)\mapsto\varphi\left(h,p_{K}\left(\widetilde{k}\right)\right)=hs\left(p_{K}\left(\widetilde{k}\right)\right)$ is a locally definable covering map. The abelianity of $G$ and the definition of $\widetilde{f}$ make it easy to conclude that $\varphi\circ\left(\textrm{id}\times p_{K}\right)$ is also a group homomorphism. Hence, $\varphi\circ\left(\textrm{id}\times p_{K}\right):\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)\rightarrow G$ is a locally definable covering homomorphism. Next, we will see that $\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)$ is simply connected. Note that, by [4, Proposition 5.14], $\widetilde{K}$ is torsion free. So $H_{2}\left(R\right)^{0}$ and $\widetilde{K}$ are both torsion free, and therefore $\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)$ is too. Finally, [3, Claim 6.4] yields the simple connectedness of the group $\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)$, so $\varphi\circ\left(\textrm{id}\times p_{K}\right):\left(H_{2}\left(R\right)^{0}\times\widetilde{K},\cdot_{\widetilde{f}}\right)\rightarrow G$ is the o-minimal universal covering homomorphism of $G$. This concludes the proof of the theorem. ∎

## Acknowledgements

I warmly thank Assaf Hasson and Kobi Peterzil for their support, generous ideas, and kindness during this work. I also want to express my gratitude to the Ben-Gurion University of the Negev, Israel, together with its warm and helpful staff, for supporting my research. The main results of this paper have been presented at the Israel Mathematical Union (IMU) virtual meeting 2020 on September 6th, 2020 (Israel), the Logic virtual seminar of the Università degli Studi della Campania “Luigi Vanvitelli” on May 28th, 2020 (Campania, Italy), and the Logic seminar of the Hebrew University of Jerusalem (Jerusalem, Israel) on December 11th, 2019. This research was funded by the Israel Science Foundation (ISF) grants No. 181/16 and 1382/15.

## References

* [1] Elías Baro and Mário J. Edmundo.
Corrigendum to: “Locally definable groups in o-minimal structures” [J. Algebra 301 (2006), no. 1, 194–223; mr2230327] by Edmundo. J. Algebra, 320(7):3079–3080, 2008. * [2] Elías Baro and Margarita Otero. Locally definable homotopy. Ann. Pure Appl. Logic, 161(4):488–503, 2010. * [3] Eliana Barriga. Definably compact groups definable in real closed fields. Israel J. Math., 238:121–166, 2020. * [4] Alessandro Berarducci, Mário Edmundo, and Marcello Mamino. Discrete subgroups of locally definable groups. Selecta Math. (N.S.), 19(3):719–736, 2013. * [5] Claude Chevalley. Theory of Lie Groups. I., volume 8 of Princeton Mathematical Series. Princeton University Press, Princeton, N. J., 1946. * [6] Lou van den Dries. Tame topology and o-minimal structures, volume 248 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1998. * [7] Mário J. Edmundo. Solvable groups definable in o-minimal structures. J. Pure Appl. Algebra, 185(1-3):103–145, 2003. * [8] Mário J. Edmundo. Locally definable groups in o-minimal structures. J. Algebra, 301(1):194–223, 2006. * [9] Mário J. Edmundo and Pantelis E. Eleftheriou. The universal covering homomorphism in o-minimal expansions of groups. MLQ Math. Log. Q., 53(6):571–582, 2007. * [10] Pantelis E. Eleftheriou and Ya’acov Peterzil. Definable quotients of locally definable groups. Selecta Math. (N.S.), 18(4):885–903, 2012. * [11] Ehud Hrushovski, Ya’acov Peterzil, and Anand Pillay. Groups, measures, and the NIP. J. Amer. Math. Soc., 21(2):563–596, 2008. * [12] Ehud Hrushovski, Ya’acov Peterzil, and Anand Pillay. On central extensions and definably compact groups in o-minimal structures. J. Algebra, 327:71–106, 2011. * [13] Jana Maříková. Type-definable and invariant groups in o-minimal structures. J. Symbolic Logic, 72(1):67–80, 03 2007. * [14] Mamoru Mimura and Hirosi Toda. Topology of Lie groups I and II, volume 91. Translations of Mathematical Monographs, 1991. * [15] M. Otero, Y. Peterzil, and A. Pillay. On groups and rings definable in o–minimal expansions of real closed fields. Bulletin of The London Mathematical Society, 28:7–14, 1996. * [16] Ya’acov Peterzil and Sergei Starchenko. Definable homomorphisms of abelian groups in o-minimal structures. Ann. Pure Appl. Logic, 101(1):1–27, 2000. * [17] Ya’acov Peterzil and Sergei Starchenko. On torsion-free groups in o-minimal structures. Illinois J. Math., 49(4):1299–1321 (electronic), 2005. * [18] Anand Pillay. On groups and fields definable in $o$-minimal structures. J. Pure Appl. Algebra, 53(3):239–255, 1988. * [19] Saharon Shelah. Minimal bounded index subgroup for dependent theories. Proc. Amer. Math. Soc., 136(3):1087–1091, 2008.
# Exploring Adversarial Robustness of Multi-sensor Perception Systems in Self Driving

1,2 James Tu 3 Huichen Li Xinchen Yan 1,2 Mengye Ren 1,2 Yun Chen 1 Ming Liang 4 Eilyan Bitar Ersin Yumer 1,2 Raquel Urtasun Waabi1, University of Toronto2, UIUC3, Cornell University4 {jtu, mren, yun<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

Modern self-driving perception systems have been shown to improve upon processing complementary inputs such as LiDAR with images. In isolation, 2D images have been found to be extremely vulnerable to adversarial attacks. Yet, there are limited studies on the adversarial robustness of multi-modal models that fuse LiDAR and image features. Furthermore, existing works do not consider physically realizable perturbations that are consistent across the input modalities. In this paper, we showcase practical susceptibilities of multi-sensor detection by inserting an adversarial object on a host vehicle. We focus on physically realizable and input-agnostic attacks that are feasible to execute in practice, and show that a single universal adversary can hide different host vehicles from state-of-the-art multi-modal detectors. Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features. Furthermore, in modern sensor fusion methods which project image features into 3D, adversarial attacks can exploit the projection process to generate false positives in distant regions in 3D. Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.

Footnote: Work done while all authors were at UberATG.

> Keywords: Adversarial, Self-Driving, Perception, Multimodal

## 1 Introduction

Recent advances in self-driving perception have shown that fusing information from multiple sensors (e.g., camera, LiDAR, radar) [1, 2, 3, 4, 5, 6] leads to superior performance when compared to relying on single sensory inputs. Such performance gains are primarily due to the complementary information contained in the measurements provided by the different types of sensors. For example, LiDAR sensors provide accurate 3D geometry while cameras capture rich appearance information. Meanwhile, modern perception models which rely on deep neural networks (DNNs) have been found to be extremely vulnerable to adversarial attacks when processing images in isolation [7, 8, 9, 10, 11]. Adversarial attacks can be thought of as perturbations to the sensory inputs which do not alter the semantic meaning of the scene, but drastically change a DNN's output, resulting in incorrect predictions. Such vulnerabilities can lead to catastrophic consequences in safety-critical applications. In the context of self-driving, most efforts have investigated attacks against single-sensor inputs, such as image-only attacks [7, 11] and LiDAR-only attacks [12]. Towards multi-modal robustness, [13] considers perturbations of LiDAR and image inputs independently, resulting in perturbations that are inconsistent across modalities and therefore may not be physically realizable, and hence not threatening in practice. On the other hand, some proposed physically realizable approaches [14] only search over shape but ignore texture, which is crucial for corrupting image inputs. Furthermore, these prior works do not generate universal perturbations, which are perhaps the most threatening in practice.
Such perturbations are input-agnostic and can attack any input in the training distribution with high probability, meaning they can be executed without prior knowledge of the scene and are able to consistently disrupt models that process sensory information across time. This paper demonstrates the susceptibility of multi-sensor detection models to physically realizable and input-agnostic adversarial perturbations. To create a physically realizable attack which is also feasible to execute, we focus on object insertion attacks [7, 15, 16, 17, 12], as they can be carried out via the deployment of physical objects in the real world. Following [12], we insert the adversarial object by placing it on the rooftop of a host vehicle. We render the adversary into LiDAR and image inputs to ensure perturbations are consistent across modalities and that our attack is physically realizable. Furthermore, we consider occlusion and environmental lighting in the rendering process as shown in Figure 1 to enhance the realism of simulation. To enable end-to-end learning of geometry and texture, we render the pixels and LiDAR points in a differentiable manner. During training, our adversary is optimized with respect to all vehicles in the training distribution to create a universal attack which can be applied to any vehicle in any scene. We conduct an empirical evaluation of our proposed attack on the KITTI [18] self-driving dataset and a novel large-scale self-driving dataset, Xenith, using the multi-sensor detector MMF [5]. We generate input-agnostic adversarial examples that successfully hide host vehicles from state-of-the-art detectors in both datasets. More importantly, we find that incorporating image inputs makes models more vulnerable when compared to using LiDAR alone, as successful attacks are primarily caused by the brittle image features. Moreover, the projection of image features into 3D allows the adversary to generate false detections in distant regions. Nonetheless, we show that false negative failures can be circumvented by applying feature denoising and adversarial training. However, we observe that distant false positives are much harder to correct with adversarial defense, as they are also caused by inaccurate mappings between 2D pixels and 3D LiDAR points during fusion.

## 2 Related work

Adversarial attacks were first discovered in the 2D image domain, where small perturbations on the pixels were shown to generate drastically different prediction results on object classification [19, 20]. Networks trained on object detection and semantic segmentation have also been shown to exhibit such vulnerability [8, 9, 21, 22, 23, 24]. Early methods [19, 20, 25, 26] assume perfect knowledge of the gradients of the victim model, referred to as whitebox attacks. Later it was found that a blackbox attack can achieve similar success as well [27, 28, 29]. Defense and robustness evaluation procedures have also been explored for adversarial attacks [30, 31, 32, 33, 34, 35, 36, 37].

Figure 1: Simulating the addition of a mesh onto a vehicle rooftop in a realistic manner. First, rooftop approximation is done to determine placement location and heading. Then, LiDAR points and pixels are rendered with directional lighting to approximate sunlight. Finally, a dense depth image is generated with depth completion and used to handle occlusion.

Aside from changing the pixel values by a small amount, various other ways to "perturb" an image were also proposed.
Object insertion attacks are realistic attacks that insert an object to change the network output while not introducing changes in semantics [38, 15, 7, 39, 40]. These attacks were originally designed to be stickers that can be attached to a target object, and have since also been applied to the lens of a camera [41]. Image rendering is another popular technique for non-pixel-based attacks, and it can also be made differentiable [42]; using differentiable rendering, [43] showed that adversarial attacks can be carried out by changing lighting. Various other object insertion attacks designed camouflage textures that can be wrapped around the target object [44, 10, 45, 46, 47, 48]. Aside from the typical image-based attacks introduced above, adversarial attacks against point clouds have also been studied. [17, 49, 50, 51] tried to directly perturb the location and cardinality of point clouds. However, such attacks may not be physically realizable, as arbitrary perturbations will not be achievable by a LiDAR sensor with fixed angular frequency. Towards more realistic attacks, [52, 53] developed spoofing attacks that add malicious LiDAR points, while other approaches [16, 54, 12] instead optimize 3D mesh surfaces and use differentiable ray-casting to generate LiDAR point clouds. Despite the fact that multi-modal sensor configurations are widely seen on self-driving vehicles [55, 5, 56, 57, 58, 59], research on multi-modal sensor attacks is still very limited. Several preliminary works show the possibility of attacking multi-sensor fusion networks [13, 14, 60]. However, [13] did not consider consistency across data modalities when perturbing the image input, whereas [14] did not consider image texture, resulting in a lack of attack expressivity, and [60] considered neither. We believe it is an interesting question whether multi-sensor fusion can be made more robust when the attacks are both input-agnostic and physically realizable.

Figure 2: Overview of the attack pipeline. The adversarial mesh is rendered into both LiDAR and image inputs in a differentiable manner. The inputs are then processed by a multi-sensor detection model which outputs bounding box proposals. An adversarial loss is then applied to generate false negatives by suppressing correct proposals and false positives by encouraging false proposals. Since the entire pipeline is differentiable, gradient can flow from the adversarial loss to mesh parameters.

## 3 Multi-sensor Adversarial Learning

In this section, we present a general method for learning an adversarial textured mesh to attack any multi-sensor object detector that is differentiable end-to-end. Specifically, we require the adversarial attack to be (1) input-agnostic for different environments, (2) geometrically-consistent across image and LiDAR input modalities, and (3) fully-automatic for implementation at large-scale. Our attacks are focused on vehicles as they are the most common object of interest on the road.

Preliminaries: We consider a bird's eye view (BEV) object detection model $F$ that takes the front camera image $x_{\text{I}}\in{[0,1]}^{H\times W\times 3}$ and LiDAR point clouds $x_{\text{L}}\in\mathbb{R}^{P\times 3}$ as input $x=(x_{\text{I}},x_{\text{L}})$. Here, the dimensions $H$ and $W$ represent the image height and width respectively. $P$ is the number of LiDAR points, which could vary in each frame. The object detector is trained on BEV bounding box annotations $\mathcal{Y}$, with each bounding box instance $b\in\mathcal{Y}$ parameterized by $b=(b_{x},b_{y},b_{h},b_{w},b_{\alpha})$. Here, $b_{x}$ and $b_{y}$ are the coordinates of the bounding box center, $b_{h}$ and $b_{w}$ indicate the height and width, respectively, and $b_{\alpha}$ represents the orientation. In order to process both image and LiDAR data modalities, the object detector uses two separate branches to extract features from each modality (see Fig 2). Then, the 2D image features are projected into 3D space to be fused with the LiDAR features.
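As a rough illustration of this projection step (our sketch, not MMF's actual implementation; tensor layouts and the 3x4 camera matrix `cam_proj` are assumptions), the following shows how per-point image features could be scattered into BEV cells:

```python
import torch

def project_image_features_to_bev(f_img, points, cam_proj, bev_shape, voxel_size):
    """Scatter each LiDAR point's image feature into its BEV cell.

    A much-simplified stand-in for the projection-based fusion step:
    f_img: (C, H, W) image features, points: (P, 3) LiDAR points,
    cam_proj: (3, 4) camera projection matrix (assumed), bev_shape: (Y, X)
    BEV grid size, voxel_size: meters per BEV cell.
    """
    C, H, W = f_img.shape
    hom = torch.cat([points.T, torch.ones(1, points.shape[0])])  # (4, P)
    uvz = cam_proj @ hom                                         # (3, P)
    keep = uvz[2] > 0                      # drop points behind the camera
    u = (uvz[0, keep] / uvz[2, keep]).long().clamp(0, W - 1)
    v = (uvz[1, keep] / uvz[2, keep]).long().clamp(0, H - 1)
    x = (points[keep, 0] / voxel_size).long().clamp(0, bev_shape[1] - 1)
    y = (points[keep, 1] / voxel_size).long().clamp(0, bev_shape[0] - 1)
    bev = torch.zeros(C, *bev_shape)
    bev[:, y, x] = f_img[:, v, u]          # last write wins on collisions
    return bev
```

Because this mapping is only as accurate as the point-to-pixel correspondences, corrupted image features can be carried into BEV cells far from the adversary, which is the mechanism behind the distant false positives discussed later.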
### 3.1 Multi-sensor Simulation for Object Insertion

In this work, we design a framework to insert a textured mesh into the scene so that both appearance and shape can be perturbed to attack multi-sensor perception systems. We attach a triangle mesh $\mathcal{M}=(\mathcal{V},\mathcal{F},\mathcal{T})$ onto the roof of a host vehicle, as such placement is physically realizable in the real world. The mesh is parameterized by vertex coordinates $\mathcal{V}\in\mathbb{R}^{N\times 3}$, vertex indices of faces $\mathcal{F}\in\mathbb{N}^{M\times 3}$, and per-face vertex textures $\mathcal{T}\in\mathbb{R}^{M\times C\times C\times 3}$. The dimensions $N$, $M$ and $C$ represent the number of vertices, the number of triangle faces, and the per-face texture resolution, respectively. For scalability reasons, we do not consider transparency, reflective materials, or shadowing, as handling each case would require sophisticated physics-based rendering. Instead, we approximate the sensor simulation using LiDAR ray-tracing and a light-weight differentiable image renderer. Both image and LiDAR rendering pipelines are differentiable, allowing gradients from LiDAR points and image pixels to flow into the mesh parameters during optimization. The overall pipeline for object insertion is illustrated in Figure 1.

Rooftop Approximation: First, we estimate the center of the vehicle's rooftop to determine the 3D location for placing the adversary. Following [61, 12, 62], we represent our vehicle objects using signed distance functions (SDFs) and further project them onto a low-dimensional shape manifold using PCA. For each vehicle, we then optimize the low-dimensional latent code that minimizes the fitting error between the vehicle point clouds and the shape manifold. Then, a fitted SDF is converted to a watertight vehicle mesh with Marching Cubes [63]. We select the top 20cm of the vehicle as the approximate rooftop and use the rooftop center and vehicle heading to determine the exact pose for object insertion.

LiDAR Simulation: To simulate insertion in the LiDAR sweep, we sample rays according to the specifications of the LiDAR sensor used to collect the original sweep, such as the number of beams and the horizontal rotation rate. We then compute the intersection of these rays and mesh faces using the Moller-Trumbore algorithm [64, 12] (a sketch is given below) to obtain a simulated point cloud of the adversarial mesh. These simulated points are then added to the original LiDAR sweep.
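For reference, a standard (non-differentiable) form of the Moller-Trumbore ray/triangle test is sketched below; the paper's renderer additionally makes this step differentiable, which we do not attempt here.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection of a ray with triangle (v0, v1, v2).

    origin, direction, v0, v1, v2: length-3 numpy arrays.
    Returns the distance t along the ray to the hit point, or None.
    """
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:            # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv_det        # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv_det
    return t if t > eps else None   # hits behind the origin are ignored
```

In the simulation described above, one would cast a ray per (beam, azimuth) pair of the sensor pattern, keep the closest hit over all mesh faces, and compare it against the depth of the original return to decide which point survives.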
Image Simulation: To render the adversary into the image, we extract the intrinsics and extrinsics from the camera sensor that captured the original image. We then use a light-weight differentiable renderer, SoftRas [65], to simulate the mesh pixels. Using a soft rasterizer during optimization allows gradient flow from occluded and far-range vertices to enable better 3D reasoning from pixel gradients. To enhance the fidelity of rendered images, we model sunlight with a directional lighting model, i.e., a light source at infinite distance.

Occlusion Reasoning: As we insert a new object into the scene, the rendering process must also consider occlusion for both the original and newly rendered points and pixels. To handle LiDAR occlusions, we compare the depth of existing points in the LiDAR sweep and newly rendered points, discarding the point that is farther away. Unlike LiDAR, where the depth of each point is known, raw images do not contain depth information. To obtain depth estimates, the LiDAR points are first projected onto the image to generate a sparse depth image, since images have higher resolution than LiDAR. We then use a depth completion model [66], which uses the sparse depths and RGB image to compute dense per-pixel depth estimates. With depth estimates of each pixel, we overlay rendered pixels onto the original image and discard occluded pixels. Note that we do not attack the depth completion model, as it is a preprocessing step used to increase the fidelity of simulation.

### 3.2 Universal Adversarial Example Generation

Adversarial Objectives: We consider two adversarial objectives: one for false negatives and the other for false positives. To generate false negative attacks, we follow prior work [8, 12] in attacking object detectors by suppressing all relevant bounding box proposals for the host vehicle. A proposal is relevant if its confidence score is greater than 0.1 and it overlaps with the ground-truth bounding box. The adversarial loss then minimizes the confidence of all candidates: $\displaystyle\mathcal{L}_{\text{adv}}^{\text{fn}}=\sum_{b\in\widetilde{\mathcal{Y}}}-\text{IoU}(b,b^{*})\log(1-\texttt{score}(b)),$ (1) where IoU denotes the intersection-over-union operator and $b^{*}$ is the corresponding ground-truth box we aim to attack. Alternatively, we aim to generate false bounding box proposals that do not overlap with any ground-truth boxes in the scene. The false positive adversarial loss increases the confidence of the false positive candidates as follows: $\displaystyle\mathcal{L}_{\text{adv}}^{\text{fp}}=\sum_{b\in\widetilde{\mathcal{Y}}_{\text{fp}}}\log(1-\texttt{score}(b)),\quad\widetilde{\mathcal{Y}}_{\text{fp}}=\\{b\in\widetilde{\mathcal{Y}}\,:\,\text{IoU}(b,b^{*})=0\text{ for all }b^{*}\in\mathcal{Y}\\},$ (2) where $\widetilde{\mathcal{Y}}_{\text{fp}}$ is the subset of bounding box proposals with no overlap with any ground-truth object bounding boxes.

Figure 3: Plot of the host vehicle recall across IoU thresholds. Only attacking LiDAR yields very weak attacks and attacking the image produces significantly stronger perturbations.

Mesh Regularization: Besides the adversarial objective, we use an additional regularization term to encourage realism in the mesh geometry. Specifically, we use a mesh Laplacian regularizer [65], which encourages smooth object surface geometries: $\mathcal{L}_{\text{lap}}=\sum_{i}{\lVert{\delta}_{i}\rVert}^{2}_{2}$, with $\delta_{i}$ as the distance from vertex $v_{i}\in\mathcal{V}$ to the centroid of its immediate neighbors $\mathcal{N}(i)$: $\delta_{i}=v_{i}-\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)}v_{j}$. In addition to Laplacian regularization, we also constrain the physical scale of the adversary with an axis-aligned 3D box. Namely, we require that $\left\lVert\mathcal{V}_{j}\right\rVert_{\infty}\leq L_{j}\text{ for $j\in\\{x,y,z\\}$}$, where $L_{x}$, $L_{y}$, and $L_{z}$ represent the box constraints along the $xyz$-axes, respectively.
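A compact sketch of how the false negative, false positive, and Laplacian terms just described could be assembled is given below. This is our illustration with assumed tensor layouts, not the paper's code; the weighting of the terms follows the combined objective presented in the next paragraph.

```python
import torch

def adversarial_loss(scores, iou_with_target, max_iou_any_gt,
                     verts, neighbor_idx, lam_fp=1.0, lam_lap=1e-3):
    """Sketch of the attack objective.

    scores: (B,) proposal confidences; iou_with_target: (B,) IoU with the
    attacked ground-truth box; max_iou_any_gt: (B,) max IoU with any GT box;
    verts: (N, 3) mesh vertices; neighbor_idx: per-vertex neighbor indices.
    """
    safe = scores.clamp(max=1 - 1e-6)  # avoid log(0)
    # Eq. (1): suppress relevant proposals (create false negatives).
    relevant = (scores > 0.1) & (iou_with_target > 0)
    l_fn = -(iou_with_target[relevant] * torch.log1p(-safe[relevant])).sum()
    # Eq. (2): boost proposals with no GT overlap (create false positives).
    fp = max_iou_any_gt == 0
    l_fp = torch.log1p(-safe[fp]).sum()
    # Laplacian smoothness: vertex minus centroid of its neighbors.
    lap = torch.stack([verts[i] - verts[nbrs].mean(0)
                       for i, nbrs in enumerate(neighbor_idx)])
    l_lap = (lap ** 2).sum()
    return l_fn + lam_fp * l_fp + lam_lap * l_lap
```

Note that minimizing `l_fn` drives the relevant scores toward zero, while minimizing `l_fp` (a sum of `log(1 - score)` terms) drives the non-overlapping scores toward one, matching Eqs. (1) and (2).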
Learning Input-Agnostic Attacks: Overall, our optimization objective can be summarized as $\mathcal{L}=\mathcal{L}_{\text{adv}}^{\text{fn}}+\lambda_{\text{fp}}\mathcal{L}_{\text{adv}}^{\text{fp}}+\lambda_{\text{lap}}\mathcal{L}_{\text{lap}},$ where $\lambda_{\text{fp}}$ and $\lambda_{\text{lap}}$ are coefficients that weight the relative importance of the false positive loss term and the mesh regularization term. We employ this objective to optimize the shape and appearance of the inserted object over the entire dataset to generate an input-agnostic adversarial example. The optimal adversary can therefore be written as:

$\displaystyle\mathcal{M}^{*}=\operatorname*{argmin}_{\mathcal{M}}\operatorname*{\mathbb{E}}_{x,\mathcal{Y}}\ \left[\mathcal{L}_{\text{adv}}^{\text{fn}}+\lambda_{\text{fp}}\mathcal{L}_{\text{adv}}^{\text{fp}}+\lambda_{\text{lap}}\mathcal{L}_{\text{lap}}\right].$ (3)

Since our proposed pipeline is differentiable end-to-end, we optimize the adversarial mesh using projected gradient descent to respect the $\ell_{\infty}$ constraints on the mesh vertices. In our experiments, we also conduct attacks targeting a single modality. To achieve this, we disable the gradient flow to the untargeted input branch, while still simulating the mesh in both modalities to maintain physical consistency between the image and LiDAR.

### 3.3 Multi-sensor Adversarial Robustness

We also study defense mechanisms against our object insertion attack. Compared to the single-sensor setting, achieving multi-sensor adversarial robustness is even more challenging. First, each single input modality can be attacked even when the perturbations on the other input sensors are non-adversarial. Second, adversarial perturbations from each input modality can interact with each other, which is a unique aspect of the multi-sensor setting. Thus, we need to deal not only with perturbations at each input modality but also with their effect in the fusion layer.

Figure 4: Placing the adversarial mesh on a host vehicle can hide the host vehicle completely from state-of-the-art detectors. The same mesh is used for all vehicles in a dataset, as the attack is input-agnostic with respect to the training distribution.

We employ adversarial training, as it is the most standard and reliable approach to defense. Adversarial training can be formulated as solving for model parameters

$\displaystyle\theta^{*}=\operatorname*{argmin}_{\theta}\operatorname*{\mathbb{E}}_{x,\mathcal{Y}}\left[\max_{\widetilde{x}_{\text{I}},\widetilde{x}_{\text{L}}}\mathcal{L}_{\text{task}}(F(\widetilde{x};\theta),\mathcal{Y})\right]$ (4)

which minimizes the empirical risk under perturbation. Here $\mathcal{L}_{\text{task}}$ is the loss function used to train the detection model. This is achieved by training the detection model $F$ against adversarial data $\widetilde{x}$ generated by our attack. While adversarial training is typically performed on image perturbations that are cheap to generate with only a few PGD steps [67], our adversarial example generation is prohibitively expensive for the inner loop of the min-max objective.
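To make this cost concrete, one projected-gradient update of the attack (the optimization of Eq. (3), and the inner maximization above) might look like the following sketch; every such step requires re-simulating the mesh into both sensors. This assumes PyTorch; the learning rates and box limits match those reported in Section 4.1, while the texture clamp is our assumption.

```python
import torch

def pgd_step(V, T, loss, lr_v=0.001, lr_t=0.004, L=(0.8, 0.8, 0.5)):
    """One projected-gradient update of mesh vertices V (N, 3) and
    per-face textures T. `loss` is the Eq. (3) objective, backpropagated
    through the differentiable LiDAR and image simulators.
    """
    grad_v, grad_t = torch.autograd.grad(loss, [V, T])
    with torch.no_grad():
        V -= lr_v * grad_v
        T -= lr_t * grad_t
        for j, L_j in enumerate(L):       # project onto the l_inf box
            V[:, j].clamp_(-L_j, L_j)
        T.clamp_(0.0, 1.0)                # valid texture range (our assumption)
    return V, T
```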
Thus, instead of generating a strong adversary from scratch at every iteration, we adopt free adversarial training [68] and continuously update the same adversary to reduce computation.

## 4 Experimental Evaluations

In this section, we first describe our datasets, attack protocols, and evaluation metrics. More details on the experimental setting are provided in the supplementary material. We then conduct experiments on the multi-sensor detection model MMF [5] and present our empirical findings for white-box attacks on each dataset and black-box transfer attacks across datasets. Finally, we explore several defense mechanisms to achieve a more robust multi-sensor object detector.

### 4.1 Experimental Setting

#### Datasets: We conduct our experiments on two self-driving datasets: KITTI [18] and Xenith. Xenith is collected by having a fleet of self-driving cars drive around cities in North America. Snippets are captured in the daytime, and detection is performed on objects within 70 meters forward and 40 meters to the left and right of the ego-car. Each vehicle in a frame is considered a separate sample, giving 12,284 vehicles in KITTI and 77,818 in Xenith.

#### Metrics: Following prior work on object insertion attacks [12], we evaluate how often the host vehicle “disappears” by measuring its recall across various IoU thresholds. For a scalar metric, we evaluate the false negative attack success rate (FN ASR) as the percentage of host vehicles detected before perturbation that are undetected after perturbation. We consider a vehicle detected if there exists an output bounding box with greater than 0.7 IoU with the vehicle. On the other hand, we consider an output bounding box a false positive if its maximum IoU with any ground-truth box is less than 0.3 and it does not overlap with any detection produced before perturbation. We evaluate the false positive attack success rate (FP ASR) as the percentage of attacks which generate at least one false positive. Finally, the overall attack success rate (ASR) is the percentage of attacks which successfully create a false positive or a false negative (a bookkeeping sketch follows below).

#### Implementation Details: The adversarial mesh is initialized as an icosphere with $N=162$ vertices and $M=320$ faces. Per-face textures are parameterized using a texture atlas with a $5\times 5$ texture resolution for each face. During optimization, we set $\lambda_{\text{lap}}=0.001$, $\lambda_{\text{fp}}=1$, and further constrain the scale of the mesh with an axis-aligned 3D box where the $x$ and $y$ coordinates are bounded by $L_{x}=L_{y}=0.8$ m and the $z$ coordinate is bounded by $L_{z}=0.5$ m. We use Adam [69] to optimize the mesh parameters, with a learning rate of $0.004$ for textures and $0.001$ for vertex coordinates. To target either the LiDAR or the image branch in isolation, we disable gradient flow to the other branch during the backward pass to the adversary.

### 4.2 Universal Adversarial Attacks

#### Hiding Host Vehicle: We evaluate the drop in recall in detecting the host vehicle, as missed detections can lead to collisions with unseen objects, the most dangerous outcome. We sweep IoU thresholds and visualize the IoU-recall curve in Figure 3. First, inserting a mesh with randomized shape and appearance has little impact on the detector. On the other hand, an adversarial mesh generated by perturbing both input modalities leads to a significant drop in recall.
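Here is the bookkeeping sketch referenced above: the IoU-recall sweep of Figure 3 and the scalar success rates from Section 4.1. The input arrays are placeholders for the evaluation pipeline's outputs, and the exact averaging conventions are our assumption.

```python
import numpy as np

def iou_recall_curve(best_iou_after, thresholds=np.linspace(0.0, 1.0, 21)):
    """Host-vehicle recall across IoU thresholds (Figure 3 style).
    best_iou_after[i]: best IoU of any detection with host vehicle i
    after inserting the adversary.
    """
    return [(best_iou_after > t).mean() for t in thresholds]

def success_rates(best_iou_before, best_iou_after, has_false_positive):
    """FN/FP/overall success rates with the paper's thresholds:
    detected means IoU > 0.7; has_false_positive[i] is True if the
    attack creates a box with < 0.3 IoU to every ground truth and no
    overlap with pre-attack detections.
    """
    detected_before = best_iou_before > 0.7
    fn = detected_before & (best_iou_after <= 0.7)   # vehicle "disappears"
    fn_asr = fn[detected_before].mean()              # among previously detected
    fp_asr = has_false_positive.mean()
    asr = (fn | has_false_positive).mean()
    return fn_asr, fp_asr, asr
```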
When we perturb the LiDAR and image inputs in isolation, we find that targeting the LiDAR inputs alone yields very weak attacks, while targeting the image alone is almost as strong as perturbing both modalities. Image inputs are therefore significantly less robust to the proposed attack.

#### Attack Success Rates: In Table 1 and Table 2, we further analyze the results in terms of attack success rates. Again, we consider meshes with randomly generated geometry and texture as a baseline. We observe similar trends of image features being significantly more vulnerable. In addition to missed detections, the adversarial mesh is able to attack the detector by generating false positive proposals of objects that do not exist. Furthermore, we compare against prior work [12], which attacks a LiDAR-only detector. In this case, incorporating image inputs boosts robustness to LiDAR attacks at the cost of being more vulnerable to multimodal attacks.

#### Adversary Size: To understand how much the size of the adversary affects the strength of the attack, we vary the size of the box constraints on the vertex parameters. Here we sweep $L_{x}=L_{y}=L_{z}=L$ and measure the attack success rates. Results are shown in Figure 5. As expected, the attack becomes stronger as the $\ell_{\infty}$ constraints on vertex coordinates are relaxed.

Figure 5: Size of the adversary box constraint vs. attack success rate.

| Detector | Attack | FN ASR | FP ASR | ASR |
|---|---|---|---|---|
| LiDAR | LiDAR [12] | 31.85% | 4.84% | 33.23% |
| LiDAR + Image | Random | 5.68% | 2.01% | 7.64% |
| LiDAR + Image | LiDAR | 7.99% | 2.36% | 10.11% |
| LiDAR + Image | Image | 26.06% | 3.40% | 28.43% |
| LiDAR + Image | Both | 32.76% | 4.38% | 34.68% |

Table 1: Attacks on KITTI, comparing against random meshes and a LiDAR-only model.

| Detector | Attack | FN ASR | FP ASR | ASR |
|---|---|---|---|---|
| LiDAR | LiDAR [12] | 23.80% | 10.70% | 32.60% |
| LiDAR + Image | Random | 5.06% | 4.15% | 9.17% |
| LiDAR + Image | LiDAR | 9.52% | 6.21% | 15.33% |
| LiDAR + Image | Image | 42.81% | 10.78% | 49.59% |
| LiDAR + Image | Both | 43.15% | 11.77% | 49.76% |

Table 2: Attacks on Xenith, comparing against random meshes and a LiDAR-only model.

#### Qualitative Examples: Qualitative examples are shown in Figure 4. First, the detector fails to detect host vehicles carrying the adversarial mesh on their rooftops. We show detections in the image rather than in LiDAR for ease of viewing. Note that the same adversarial mesh is used for all demonstrations, as the attack is agnostic to the host vehicle and environment. Furthermore, we show in Figure 6 that our adversarial mesh generates false positives at very distant locations. Here, detections are visualized in BEV, since distant objects appear too small in the image. Additionally, we visualize image features in the image plane and the visual cone of image features projected into 3D, showing that long-range false positives are caused by strong image features dominating after fusion.

Figure 6: Visualization of the attack producing distant false positives due to the camera's perspective projection, as well as the corrupted image features in the image plane and projected in 3D.

#### Black-box Transfer Attacks: We conduct transfer attacks across datasets and show results in Table 3. Overall, our transfer attack on the target dataset is stronger than attacking only the LiDAR input modality on the source dataset, especially from Xenith to KITTI. On the other hand, transferability is likely reduced by differences in image resolution and sensor hardware across datasets, which are beyond the scope of this paper but an interesting direction for future work.
| Source | Target | FN ASR | FP ASR | ASR |
|---|---|---|---|---|
| KITTI | KITTI | 32.76% | 4.38% | 34.68% |
| KITTI | Xenith | 14.20% | 2.86% | 16.88% |
| Xenith | KITTI | 12.64% | 6.12% | 18.22% |
| Xenith | Xenith | 43.15% | 11.77% | 49.76% |

Table 3: Transfer attacks between KITTI and Xenith.

| Defense | FN ASR | FP ASR | ASR | AP (clean) |
|---|---|---|---|---|
| None | 43.15% | 11.77% | 49.76% | 84.64% |
| JPEG [70] | 43.19% | 9.45% | 49.60% | 84.52% |
| Adv Train [68] | 7.83% | 8.29% | 14.97% | 84.16% |
| Adv FD [36] | 3.57% | 7.53% | 10.82% | 83.97% |

Table 4: Defense results on Xenith.

### 4.3 Improving Robustness

#### Attacks Against Defense Methods: As our empirical findings suggest that the image features are more vulnerable, we first employ an existing image-based defense method that removes high-frequency components through JPEG compression [70]. In addition, we conduct adversarial training against the attacker. Since generating a strong adversary is extremely expensive due to the simulation pipeline, we employ a strategy similar to Free Adversarial Training [68] and reuse past perturbations by continuously updating the same adversarial object. Specifically, we perform 5 updates to the adversary per update to the model. We combine feature denoising [36] with adversarial training to further enhance robustness against image perturbations in particular. We report the success rates as well as the average precision (AP) at 0.7 IoU to study the trade-off between adversarial robustness and performance on benign data [35]. As shown in Table 4, we find JPEG compression to be very ineffective as a defense. We hypothesize this is because the input-agnostic adversary is rendered at many different poses during training and therefore does not rely on the high-frequency signals that are removed by JPEG compression. In comparison, our adversarial training effectively reduces the overall attack success rate from 49.76% to 14.97%, while dropping AP by only 0.5%. Finally, adding non-local mean blocks after every residual block in the image processing backbone further improves robustness, reducing the ASR from 14.97% to 10.82%.

#### Discussions and Future Work: While adversarial training methods are effective, they are tailored to a specific threat model and may not generalize to unseen perturbations. A more threat-agnostic mechanism, such as more robust sensor fusion, would improve robustness in general. Furthermore, adversarial defense methods are only effective at recovering the missed detections, but struggle to suppress false positives. We believe this is because the distant false positives shown in Figure 6 are only partially due to vulnerabilities to adversarial perturbations. In fact, such examples exploit erroneous associations between objects that are distant in 3D. Specifically, the mapping between a mesh pixel and a LiDAR point far away from the mesh enables such attacks. These false associations can easily occur if the assigned pixel for each LiDAR point is shifted by a few pixels, since objects which are far apart in 3D may appear very close in 2D. We identify two reasons why this can occur in practice. First, due to the receptive field of DNN activations, an adversarial object can influence pixels outside its physical boundaries. Second, while LiDAR sweeps are collected continuously with a rotating sensor, images are captured instantaneously at regular intervals. Consequently, the camera extrinsics used for projection become outdated for LiDAR points captured before and after the image.
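To illustrate how fragile this pixel-point association is, below is a sketch of the standard pinhole projection used to pair LiDAR points with pixels (NumPy; `K` and `T_cam_from_lidar` stand for the camera calibration and are assumed names, not the paper's code). When the extrinsics are stale, every projected pixel drifts slightly, which is enough to associate a point with the wrong object in 2D.

```python
import numpy as np

def project_lidar_to_image(points, K, T_cam_from_lidar):
    """Project LiDAR points (N, 3) to pixel coordinates.

    K: 3x3 intrinsics; T_cam_from_lidar: 4x4 extrinsics. With a rotating
    LiDAR and an instantaneously captured image, these extrinsics are
    only exact at one instant of the sweep.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    cam = (T_cam_from_lidar @ pts_h.T)[:3]                  # camera frame
    in_front = cam[2] > 0                                   # drop points behind the camera
    uvw = K @ cam[:, in_front]
    uv = (uvw[:2] / uvw[2]).T                               # perspective divide
    return uv, in_front
```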
Thus, to achieve more robust sensor fusion for images and LiDAR, fusion modules must reason about the 3D geometry, contextual information, and timing of LiDAR points to generate mappings between image pixels and LiDAR points more intelligently. We hope these findings will inspire future work towards more robust sensor fusion methods.

## 5 Conclusions

Our work investigates practical adversarial attacks against multi-sensor detection models in self-driving to understand how consuming multiple input modalities affects adversarial robustness. Compared to existing attacks against multimodal detectors, our object insertion attack is more threatening in practice, as we generate input-agnostic and physically realizable adversarial perturbations. Our experiments reveal that the vulnerabilities of multi-sensor object detectors are primarily due to non-robust image features. While adversarial training can effectively recover the missed detections, we find it still struggles to suppress false positives without deeper reasoning about 3D geometry in feature fusion. We believe this work will open up new research opportunities and challenges in the field of multi-sensor robustness.

## References

* Gupta et al. [2014] S. Gupta, R. Girshick, P. Arbeláez, and J. Malik. Learning rich features from RGB-D images for object detection and segmentation. In _ECCV_ , 2014. * Song and Xiao [2016] S. Song and J. Xiao. Deep sliding shapes for amodal 3D object detection in RGB-D images. In _CVPR_ , 2016. * Wang and Neumann [2018] W. Wang and U. Neumann. Depth-aware CNN for RGB-D segmentation. In _ECCV_ , 2018. * Qi et al. [2018] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas. Frustum pointnets for 3D object detection from RGB-D data. In _CVPR_ , 2018. * Liang et al. [2019] M. Liang, B. Yang, Y. Chen, R. Hu, and R. Urtasun. Multi-task multi-sensor fusion for 3D object detection. In _CVPR_ , 2019. * Yang et al. [2020] B. Yang, R. Guo, M. Liang, S. Casas, and R. Urtasun. Radarnet: Exploiting radar for robust perception of dynamic objects. In _ECCV_ , 2020. * Eykholt et al. [2018] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song. Robust physical-world attacks on deep learning visual classification. In _CVPR_ , 2018. * Xie et al. [2017] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille. Adversarial examples for semantic segmentation and object detection. In _ICCV_ , 2017. * Lu et al. [2017] J. Lu, H. Sibai, and E. Fabry. Adversarial examples that fool detectors. _arXiv preprint arXiv:1712.02494_ , 2017. * Wiyatno and Xu [2019] R. Wiyatno and A. Xu. Physical adversarial textures that fool visual object tracking. In _ICCV_ , 2019. * Ranjan et al. [2019] A. Ranjan, J. Janai, A. Geiger, and M. J. Black. Attacking optical flow. In _ICCV_ , 2019. * Tu et al. [2020] J. Tu, M. Ren, S. Manivasagam, M. Liang, B. Yang, R. Du, F. Cheng, and R. Urtasun. Physically realizable adversarial examples for LiDAR object detection. In _CVPR_ , 2020. * Wang et al. [2020] S. Wang, T. Wu, and Y. Vorobeychik. Towards robust sensor fusion in visual perception. _arXiv preprint arXiv:2006.13192_ , 2020. * Cao et al. [2020] Y. Cao, N. Wang, C. Xiao, D. Yang, J. Fang, R. Yang, Q. A. Chen, M. Liu, and B. Li. 3D adversarial object against msf-based perception in autonomous driving. In _MLSys_ , 2020. * Athalye et al. [2018] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok. Synthesizing robust adversarial examples. In _ICML_. PMLR, 2018. * Xiao et al. [2019] C. Xiao, D. Yang, B. Li, J.
Deng, and M. Liu. Meshadv: Adversarial meshes for visual recognition. In _CVPR_ , 2019. * Xiang et al. [2019] C. Xiang, C. R. Qi, and B. Li. Generating 3D adversarial point clouds. In _CVPR_ , 2019. * Geiger et al. [2012] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the KITTI vision benchmark suite. In _Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2012. * Szegedy et al. [2014] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. _ICLR_ , 2014. * Goodfellow et al. [2015] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. _ICLR_ , 2015. * Chen et al. [2018] S.-T. Chen, C. Cornelius, J. Martin, and D. H. P. Chau. Shapeshifter: Robust physical adversarial attack on Faster R-CNN object detector. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_. Springer, 2018. * Liu et al. [2018] X. Liu, H. Yang, Z. Liu, L. Song, H. Li, and Y. Chen. Dpatch: An adversarial patch attack on object detectors. _arXiv preprint arXiv:1806.02299_ , 2018. * Li et al. [2018] Y. Li, D. Tian, M.-C. Chang, X. Bian, and S. Lyu. Robust adversarial perturbation on deep proposal-based models. _arXiv preprint arXiv:1809.05962_ , 2018. * Wei et al. [2019] X. Wei, S. Liang, N. Chen, and X. Cao. Transferable adversarial attacks for image and video object detection. In _IJCAI_ , 2019. * Moosavi-Dezfooli et al. [2016] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: A simple and accurate method to fool deep neural networks. In _CVPR_ , 2016. * Moosavi-Dezfooli et al. [2017] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In _CVPR_ , 2017. * Papernot et al. [2017] N. Papernot, P. D. McDaniel, I. J. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against machine learning. In R. Karri, O. Sinanoglu, A. Sadeghi, and X. Yi, editors, _AsiaCCS_ , 2017. * Brendel et al. [2018] W. Brendel, J. Rauber, and M. Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In _ICLR_. OpenReview.net, 2018. * Li et al. [2020] H. Li, X. Xu, X. Zhang, S. Yang, and B. Li. QEBA: query-efficient boundary-based blackbox attack. In _CVPR_ , 2020. * Athalye et al. [2018] A. Athalye, N. Carlini, and D. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. _arXiv preprint arXiv:1802.00420_ , 2018. * Papernot et al. [2016] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In _EuroS &P_, 2016. * Carlini and Wagner [2017] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In _SP_ , 2017. * Ilyas et al. [2019] A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry. Adversarial examples are not bugs, they are features. In _NeurIPS_ , 2019. * Carlini et al. [2019] N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber, D. Tsipras, I. Goodfellow, A. Madry, and A. Kurakin. On evaluating adversarial robustness. _arXiv preprint arXiv:1902.06705_ , 2019. * Tsipras et al. [2018] D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry. Robustness may be at odds with accuracy. _arXiv preprint arXiv:1805.12152_ , 2018. * Xie et al. [2019] C. Xie, Y. Wu, L. v. d. Maaten, A. L. Yuille, and K. He. Feature denoising for improving adversarial robustness. 
In _CVPR_ , 2019. * Engstrom et al. [2019] L. Engstrom, B. Tran, D. Tsipras, L. Schmidt, and A. Madry. Exploring the landscape of spatial robustness. In _ICML_ , 2019. * Brown et al. [2017] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer. Adversarial patch. _arXiv preprint arXiv:1712.09665_ , 2017. * Yang et al. [2020] C. Yang, A. Kortylewski, C. Xie, Y. Cao, and A. Yuille. Patchattack: A black-box texture-based attack with reinforcement learning. In _ECCV_ , 2020. * Hamdi et al. [2020] A. Hamdi, M. Müller, and B. Ghanem. Sada: semantic adversarial diagnostic attacks for autonomous applications. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 10901–10908, 2020. * Li et al. [2019] J. Li, F. R. Schmidt, and J. Z. Kolter. Adversarial camera stickers: A physical camera-based attack on deep learning systems. In K. Chaudhuri and R. Salakhutdinov, editors, _ICML_ , 2019. * Kato et al. [2020] H. Kato, D. Beker, M. Morariu, T. Ando, T. Matsuoka, W. Kehl, and A. Gaidon. Differentiable rendering: A survey. _CoRR_ , abs/2006.12057, 2020. * Zeng et al. [2019] X. Zeng, C. Liu, Y.-S. Wang, W. Qiu, L. Xie, Y.-W. Tai, C.-K. Tang, and A. L. Yuille. Adversarial attacks beyond the image space. In _CVPR_ , 2019. * Zhang et al. [2019] Y. Zhang, H. Foroosh, P. David, and B. Gong. CAMOU: learning physical vehicle camouflages to adversarially attack detectors in the wild. In _ICLR_ , 2019. * Duan et al. [2020] R. Duan, X. Ma, Y. Wang, J. Bailey, A. K. Qin, and Y. Yang. Adversarial camouflage: Hiding physical-world attacks with natural styles. In _CVPR_ , 2020. * Huang et al. [2020] L. Huang, C. Gao, Y. Zhou, C. Xie, A. L. Yuille, C. Zou, and N. Liu. Universal physical camouflage attacks on object detectors. In _CVPR_ , 2020. * Chen et al. [2020] T. Chen, Y. Wang, J. Zhou, S. Liu, S. Chang, C. Bajaj, and Z. Wang. Can 3D adversarial logos cloak humans? _CoRR_ , abs/2006.14655, 2020. * Wu et al. [2020] Z. Wu, S.-N. Lim, L. S. Davis, and T. Goldstein. Making an invisibility cloak: Real world adversarial attacks on object detectors. In _European Conference on Computer Vision_ , pages 1–17. Springer, 2020. * Hamdi et al. [2020] A. Hamdi, S. Rojas, A. K. Thabet, and B. Ghanem. AdvPC: Transferable adversarial perturbations on 3D point clouds. In A. Vedaldi, H. Bischof, T. Brox, and J. Frahm, editors, _ECCV_ , 2020. * Yang et al. [2019] J. Yang, Q. Zhang, R. Fang, B. Ni, J. Liu, and Q. Tian. Adversarial attack and defense on point sets. _CoRR_ , abs/1902.10899, 2019. * Hamdi et al. [2020] A. Hamdi, S. Rojas, A. Thabet, and B. Ghanem. Advpc: Transferable adversarial perturbations on 3D point clouds. In _ECCV_ , 2020. * Cao et al. [2019] Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z. M. Mao. Adversarial sensor attack on LiDAR-based perception in autonomous driving. In L. Cavallaro, J. Kinder, X. Wang, and J. Katz, editors, _CCS_ , 2019. * Sun et al. [2020] J. Sun, Y. Cao, Q. A. Chen, and Z. M. Mao. Towards robust LiDAR-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. In S. Capkun and F. Roesner, editors, _USENIX Security_ , 2020. * Cao et al. [2019] Y. Cao, C. Xiao, D. Yang, J. Fang, R. Yang, M. Liu, and B. Li. Adversarial objects against LiDAR-based autonomous driving systems. _arXiv preprint arXiv:1907.05418_ , 2019. * Liang et al. [2018] M. Liang, B. Yang, S. Wang, and R. Urtasun. Deep continuous fusion for multi-sensor 3D object detection. In _ECCV_ , 2018. * Sindagi et al. [2019] V. A. 
Sindagi, Y. Zhou, and O. Tuzel. MVX-Net: Multimodal voxelnet for 3D object detection. In _ICRA_ , 2019. * Fadadu et al. [2020] S. Fadadu, S. Pandey, D. Hegde, Y. Shi, F.-C. Chou, N. Djuric, and C. Vallespi-Gonzalez. Multi-view fusion of sensor data for improved perception and prediction in autonomous driving. In _CoRL_ , 2020. * Bijelic et al. [2020] M. Bijelic, T. Gruber, F. Mannan, F. Kraus, W. Ritter, K. Dietmayer, and F. Heide. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In _CVPR_ , 2020. * Chadwick et al. [2019] S. Chadwick, W. Maddern, and P. Newman. Distant vehicle detection using radar and vision. In _ICRA_ , 2019. * Yu et al. [2020] Y. Yu, H. J. Lee, B. C. Kim, J. U. Kim, and Y. M. Ro. Investigating vulnerability to adversarial examples on multimodal data fusion in deep learning. _CoRR_ , abs/2005.10987, 2020. URL https://arxiv.org/abs/2005.10987. * Engelmann et al. [2017] F. Engelmann, J. Stückler, and B. Leibe. SAMP: shape and motion priors for 4D vehicle reconstruction. In _WACV_ , 2017. * Najibi et al. [2020] M. Najibi, G. Lai, A. Kundu, Z. Lu, V. Rathod, T. Funkhouser, C. Pantofaru, D. Ross, L. S. Davis, and A. Fathi. DOPS: Learning to detect 3D objects and predict their 3D shapes. In _CVPR_ , 2020. * Lorensen and Cline [1987] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. _ACM SIGGRAPH Computer Graphics_ , 1987. * Möller and Trumbore [1997] T. Möller and B. Trumbore. Fast, minimum storage ray-triangle intersection. _Journal of Graphics Tools_ , 1997. * Liu et al. [2019] S. Liu, T. Li, W. Chen, and H. Li. Soft rasterizer: A differentiable renderer for image-based 3D reasoning. In _ICCV_ , 2019. * Chen et al. [2019] Y. Chen, B. Yang, M. Liang, and R. Urtasun. Learning joint 2D-3D representations for depth completion. In _ICCV_ , 2019. * Madry et al. [2018] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In _International Conference on Learning Representations_ , 2018. URL https://openreview.net/forum?id=rJzIBfZAb. * Shafahi et al. [2019] A. Shafahi, M. Najibi, M. A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, L. S. Davis, G. Taylor, and T. Goldstein. Adversarial training for free! In _NeurIPS_ , 2019. * Kingma and Ba [2014] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014. * Dziugaite et al. [2016] G. K. Dziugaite, Z. Ghahramani, and D. M. Roy. A study of the effect of JPG compression on adversarial images. _arXiv preprint arXiv:1608.00853_ , 2016.
# QUBIC IV: Performance of TES Bolometers and Readout Electronics

M. Piat, G. Stankowiak, E.S. Battistelli, P. de Bernardis, G. D'Alessandro, M. De Petris, L. Grandsire, J.-Ch. Hamilton, T.D. Hoang, S. Marnieros, S. Masi, A. Mennella, L. Mousset, C. O'Sullivan, D. Prêle, A. Tartari, J.-P. Thermeau, S.A. Torchinsky, F. Voisin, M. Zannoni, P. Ade, J.G. Alberro, A. Almela, G. Amico, L.H. Arnaldi, D. Auguste, J. Aumont, S. Azzoni, S. Banfi, B. Bélier, A. Baù, D. Bennett, L. Bergé, J.-Ph. Bernard, M. Bersanelli, M.-A. Bigot-Sazy, J. Bonaparte, J. Bonis, E. Bunn, D. Burke, D. Buzi, F. Cavaliere, P. Chanial, C. Chapron, R. Charlassier, A.C. Cobos Cerutti, F. Columbro, A. Coppolecchia, G. De Gasperis, M. De Leo, S. Dheilly, C. Duca, L. Dumoulin, A. Etchegoyen, A. Fasciszewski, L.P. Ferreyro, D. Fracchia, C. Franceschet, M.M. Gamboa Lerena, K.M. Ganga, B. García, M.E. García Redondo, M. Gaspard, D. Gayer, M. Gervasi, M. Giard, V. Gilles, Y. Giraud-Héraud, M. Gómez Berisso, M. González, M. Gradziel, M.R. Hampel, D. Harari, S. Henrot-Versillé, F. Incardona, E. Jules, J. Kaplan, C. Kristukat, L. Lamagna, S. Loucatos, T. Louis, B. Maffei, W. Marty, A. Mattei, A. May, M. McCulloch, L. Mele, D. Melo, L. Montier, L.M. Mundo, J.A. Murphy, J.D. Murphy, F. Nati, E. Olivieri, C. Oriol, A. Paiella, F. Pajot, A. Passerini, H. Pastoriza, A. Pelosi, C. Perbost, M. Perciballi, F. Pezzotta, F. Piacentini, L. Piccirillo, G. Pisano, M. Platino, G. Polenta, R. Puddu, D. Rambaud, E. Rasztocky, P. Ringegni, G.E. Romero, J.M. Salum, A. Schillaci, C.G. Scóccola, S. Scully, S. Spinelli, M. Stolpovskiy, A.D. Supanitsky, P. Timbie, M. Tomasi, C. Tucker, G. Tucker, D. Viganò, N. Vittorio, F. Wicek, M. Wright, and A. Zullo

###### Abstract

A prototype version of the Q & U bolometric interferometer for cosmology (QUBIC) underwent a campaign of testing in the laboratory at the Astroparticle Physics and Cosmology laboratory in Paris (APC). The detection chain is currently made of 256 NbSi transition edge sensors (TES) cooled to 320 mK. The readout system is a 128:1 time domain multiplexing scheme based on 128 SQUIDs cooled to 1 K that are controlled and amplified by an SiGe application specific integrated circuit at 40 K. We report the performance of this readout chain and the characterization of the TES. The readout system has been functionally tested and characterized in the lab and in QUBIC. The low noise amplifier demonstrated a white noise level of 0.3 $\mathrm{nV}/\sqrt{\mathrm{Hz}}$. Characterizations of the QUBIC detectors and readout electronics include the measurement of I-V curves, time constants and the noise equivalent power. The QUBIC TES bolometer array has approximately 80% of its detectors within operational parameters. It demonstrated a thermal decoupling compatible with a phonon noise of about $5\times 10^{-17}~\mathrm{W}/\sqrt{\mathrm{Hz}}$ for the 410 mK critical temperature. While still limited by microphonics from the pulse tubes and by noise aliasing from the readout system, the instrument noise equivalent power is about $2\times 10^{-16}~\mathrm{W}/\sqrt{\mathrm{Hz}}$, sufficient for the demonstration of bolometric interferometry.

## 1 Introduction

QUBIC is an international ground-based experiment dedicated to the observation of cosmic microwave background (CMB) polarisation. It will be deployed in Argentina, at the Alto Chorrillos mountain site (altitude of 4869 m a.s.l.) near San Antonio de los Cobres, in the Salta province. QUBIC has two configurations: the “technological demonstrator” (TD) and the “full instrument” (FI).
The TD and FI share the same cryostat and cryogenics, but the TD has only one quarter of the 150 GHz TES focal plane (256 TESs), an array of 64 horns and switches, and a smaller optical combiner. The QUBIC TD has demonstrated the feasibility of bolometric interferometry after extensive tests at the APC laboratory since 2018. In this paper, we present the main results of this characterization phase on the detection chain.

This paper is organized as follows. An overview of the QUBIC detection chain is given in Section 2. The tests of the readout system are described in Section 3. Section 4 describes the TES characterizations in terms of critical temperature, TES parameters, power background, time constants and noise performance. Finally, some concluding remarks are given in Section 5.

## 2 QUBIC detection chain

The QUBIC detection chain architecture is shown in Figure 1. Each focal plane is composed of four 256-pixel TES arrays assembled together to obtain a 1024-pixel detector cooled to about 320 mK by a $^{3}$He fridge. For each quarter focal plane, two blocks of 128 SQUIDs (superconducting quantum interference devices) are used at 1 K in a 128:1 time domain multiplexing (TDM) scheme [1, 2]. Each block is controlled and amplified by an ASIC (application specific integrated circuit) cooled to 40 K, while a warm FPGA (field programmable gate array) board ensures the control of the chain and the transfer of the signal to the acquisition computer.

Figure 1: Top: QUBIC cryo-mechanical structure, which supports one TES focal plane at 350 mK on top and the SQUID boxes at 1 K below. The focal plane diameter is 110 mm. Bottom: Architecture of the QUBIC detection chain for one focal plane of 1024 channels, with one quarter of it highlighted.

### 2.1 TES

The detectors are TESs made with a Nb$_{x}$Si$_{1-x}$ amorphous thin film ($x\approx 0.15$ in our case). Their transition temperature $T_c$ (Figure 2) can be adapted by changing the composition $x$ of the compound. The array currently used (reference P87) has a critical temperature of about 410 mK. The normal state resistance $R_n$ is adjusted to about 1 $\Omega$ with interleaved electrodes for optimum performance. To adapt to the optics, the pixels have 3 mm spacing, while the grid absorber structure is 2.7 mm wide and is not sensitive to polarization.

Figure 2: Superconducting transition characteristics (resistance R versus temperature T) of four Nb$_{0.15}$Si$_{0.85}$ TESs distributed far apart on the 256-pixel array reference P41.

The low thermal coupling between the TES and the thermal bath is obtained using 500 nm thick suspended and patterned SiN membranes, which exhibit thermal conductances in the range 50-500 pW/K depending on the precise pixel geometry and temperature. The noise equivalent power (NEP) is expected to be of the order of $5\times 10^{-17}~W/\sqrt{Hz}$ at 150 GHz, with a natural time constant of about 100 ms [3]. Light absorption is achieved using a palladium metallic grid placed in a quarter-wave cavity optimizing the absorption efficiency. The back-short distance of 400 ${\mu}$m has been chosen after electromagnetic simulations in order to have absorption higher than 94% at both 150 and 220 GHz. The routing of the signal between the TES and the bonding pads at the edge of the array is realised by superconducting aluminium lines patterned on the silicon frame supporting the membranes. The detailed fabrication process of the QUBIC detectors is given in [4].
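As a quick sanity check of the cavity design (our arithmetic, not from [4]): an idealized vacuum-gap back-short of depth $d$ peaks in absorption near $f=c/4d$, which for $d=400~\mu$m falls between the two QUBIC bands.

```python
# Quarter-wave back-short check (idealized vacuum cavity, our assumption):
c = 299_792_458.0        # speed of light, m/s
d = 400e-6               # back-short depth, m
f_quarter = c / (4 * d)  # peak-absorption frequency
print(f"{f_quarter / 1e9:.1f} GHz")  # ~187.4 GHz, between 150 and 220 GHz
```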
The latest upgrade of the production process allows excellent fabrication quality, with a dead-pixel yield as low as 5% at room temperature.

Figure 3: Left: Layout of the 3-mm pitch pixel structure: a Pd grid for light absorption, NbSi for temperature sensing, a SiN structure for decoupling the sensor from the cold bath, and Al for routing the signal to the SQUID amplifiers. Right: A 256-TES array being integrated.

The 256-pixel array is finally integrated within the focal plane holder and electrically connected to an aluminium printed circuit board (PCB, provided by Omni Circuit Boards, www.omnicircuitboards.com) using ultrasonic bonding of aluminium wires (Figure 3).

### 2.2 SQUIDs

The second stage of the detection chain is composed of the SQUIDs, maintained at a temperature of about 1 K by a $^{4}$He fridge. Each TES is in series with the input inductance $L_{\mathrm{in}}$ of the SQUID and is voltage biased with a $10~m\Omega$ resistor in parallel, as shown in Figure 4. The input inductance of the SQUID converts the TES current into a magnetic flux $\Phi_{\mathrm{in}}$ that is converted into an output voltage by the SQUID. The latter is therefore a trans-impedance amplifier with a gain of the order of 100 V/A. To compensate the flux variation, a current from a feedback loop is injected to create a feedback flux $\Phi_{\mathrm{fb}}$. The voltage sent by the digital-to-analog converter (DAC) to create this feedback current through the feedback resistor $R_{\mathrm{fb}}$ is the QUBIC signal (Figure 4). This process, which locks the flux operating point of the SQUID, is known as a flux-locked loop (FLL) [5]. In addition to being cryogenic amplifiers, SQUIDs also enable the multiplexing because of their large bandwidth. As shown in Figure 5, the SQUID multiplexer is composed of 4 columns of 32 SQUIDs, AC-biased with capacitors in order to reduce power dissipation and noise.

The SQUIDs used in QUBIC, shown in Figure 6, have a dual-washer gradiometric layout. They are based on a SQ600S commercial design provided by StarCryoelectronics (starcryo.com), slightly modified in order to reduce the area of each die. Visual inspections and room temperature tests with a probe station are used to select the SQUIDs before integration on a specific PCB. One SQUID PCB is composed of 32 SQUIDs and is integrated in an aluminium box. The architecture therefore uses 4 of these PCB boxes to read out 128 pixels. As shown in Figure 1, 4 stacks of 8 SQUID boxes are installed at 1 K below the TESs in the cryo-mechanical structure, surrounded by a Cryophy (www.aperam.com) magnetic shield.

Figure 4: Layout of the TES, SQUID and ASIC operating in flux-locked loop.

Figure 5: Left: Topology of the 128-to-1 multiplexer sub-system (4$\times$32 SQUIDs + 1 ASIC). Right: Integration of 32 SQUIDs (1 column) with bias capacitors and filter devices.

Figure 6: On the left, layout of a gradiometric SQUID (100 $\mu m$ grid); on the right, picture of one SQUID bare die (about 1.7 mm on a side).

### 2.3 ASIC

In addition to the SQUIDs, a 4-to-1 multiplexed low noise amplifier (LNA) sequentially reads out 4 columns of 32 SQUIDs each. The resulting multiplexing factor is 128. The LNA, together with the sequential biasing of the SQUIDs and the overall clocking of this 128:1 multiplexer, is implemented in a cryogenic ASIC operating at about 40 K [6]. The TDM readout is based on the association of 4 columns of 32 SQUIDs in series with the dedicated cryogenic ASIC. The ASIC is designed in full-custom using CADENCE CAD tools.
The technology is a standard $0.35~\mu$m SiGe BiCMOS process from Austria MicroSystems (AMS): a p-substrate, 4-metal, 3.3 V process. It includes standard complementary MOS transistors and high-speed vertical SiGe NPN hetero-junction bipolar transistors (HBTs). Bipolar transistors are preferentially used for the design of the analog parts because of their good performance at cryogenic temperature [1]. The design of the ASIC is based on prior experimental characterization results, and its performance at cryogenic temperature is extrapolated from simulation results obtained at room temperature, using CAD tools.

Figure 7: Left: Microphotograph of the cryogenic ASIC designed to read out 4$\times$32 TES/SQUID pixels. Right: ASIC module assembly used for QUBIC.

Each ASIC board (shown in Figure 7) has a power dissipation of typically 16 mW and is placed on the 40-K stage. The ASIC integrates all parts needed to achieve the readout, the multiplexing and the control of an array of up to 128 TESs/SQUIDs. It includes a differential switching current source to sequentially address 32 lines of SQUIDs, achieving a first level of multiplexing of 32:1. In this configuration, the SQUIDs are AC biased through capacitors, which allows good isolation (low cross-talk between SQUID columns) and no power dissipation. A cryogenic SiGe low-noise amplifier ($e_{n}=$ 0.3 nV/$\sqrt{Hz}$, gain = 70, bandwidth of about 6 MHz in simulations) with 4 multiplexed inputs performs a second multiplexing stage among the 4 columns. The low frequency noise of the LNA increases with decreasing temperature; operation at about 40 K appears to be a good trade-off between this corner frequency and the white noise level. This cryogenic ASIC also includes the digital synchronization circuit of the overall multiplexing switching (AC current sources and multiplexed low-noise amplifier). A serial protocol allows focusing on a sub-array as well as adjusting the amplifiers and current sources with a reduced number of control wires. We have developed a full-custom CMOS digital library dedicated to cryogenic applications and ionizing environments (a rad-hard full-custom digital library) [1].

### 2.4 Warm electronics and acquisition software

The final stage of the readout electronics operates at room temperature on a board called NetQuiC. It is connected to the acquisition computer via a network switch. Each NetQuiC board is based on a differential amplifier (gain = 100, bandwidth limited to 1 MHz with a second-order anti-aliasing low-pass filter), a 2 MHz 16-bit analog-to-digital converter (ADC), seven 16-bit DACs and a Xilinx Spartan 6 FPGA (XEM6010 board from Opal Kelly). It also contains 2 feedback resistors $R_{\mathrm{fb}}$ of 10 k$\Omega$ and 100 k$\Omega$ that can be individually selected for large-dynamic-range or sensitive measurements, respectively. This system is designed to adjust the operating bias of the TESs and to control the feedback of the SQUIDs. Moreover, it takes the signal from the cryogenic multiplexing ASIC, computes the scientific signal and sends it to the data acquisition system. For this detection chain, each FPGA manages 128 detectors, with a total of 16 FPGAs for the two full 1024-pixel focal planes. Dedicated software named QUBIC Studio was developed at the Institute for Research in Astrophysics and Planetology (IRAP) for the data acquisition [7, 2].
QUBIC Studio interfaces with the generic electrical ground support equipment (EGSE) tool, called “dispatcher”, which was also developed at IRAP. QUBIC Studio has a user-friendly interface to manage the connection with the readout electronics. This tool gives a global visualization of the complete detection chain.

## 3 Readout tests and characterization

The core of the readout is the ASIC cooled to 40 K, which controls the multiplexing and amplifies the voltage from the SQUIDs. This device was tested and validated first, since it was then used to further characterize the SQUIDs.

### 3.1 ASIC tests and characterizations

#### 3.1.1 Implemented functions

The ASIC, called SQMUX128evo, has been designed to control the time-domain multiplexing and to amplify the signals from 4 columns of 32 SQUIDs in series (see Figure 5), i.e., 128 channels. Its block diagram is outlined in Figure 8. The following functions have been integrated:

* an ultra-low-noise voltage amplifier with 4 multiplexed inputs for reading 4 columns of SQUIDs,
* a current source for the biasing of the multiplexed amplifier,
* an AC bias current source associated with a 1:32 multiplexer for addressing the 32 SQUID lines through addressing capacitors,
* two voltage references for adjusting the common mode at the input of the multiplexed amplifier and at the output of the AC bias source of the SQUIDs,
* a digital circuit which controls the row/column addressing of the multiplexer from an external clock signal CK,
* a serial link for addressing configurable blocks such as the voltage references, the bias current sources and the multiplexer's row/column addressing circuit.

This ASIC has been integrated on a specific PCB in order to be fully characterized at room and cryogenic temperatures. It has been tested and proven functional down to 4.2 K thanks to a low power dissipation of about 16 mW per ASIC, regardless of the number of columns read out. In QUBIC, the ASIC operates at approximately 40 K due to the available cryogenic power.

Figure 8: Block diagram of the ASIC SQMUX128evo (see text for a detailed description).

#### 3.1.2 Current sources and voltage references

The ASIC SQMUX128evo integrates digitally adjustable current sources and voltage references for the biasing of the multiplexed input amplifier and for the SQUID AC bias circuit. The current sources are based on a fixed current reference ($I_{REF}\simeq 100~\mu$A typically) followed by current DACs whose values are encoded on 3 and 4 bits for the amplifier bias and the SQUID AC bias circuit, respectively. The reference current $I_{REF}$ is obtained by taking the current flowing through an external resistor $R_{REF}$ under a fixed voltage (forward-biased diode voltage, about 0.7 V at room temperature). This allows the reference current value to be precisely adjusted according to the operating temperature. For a nominal current $I_{REF}=100~\mu$A:

* the output of the 3-bit current DAC adjusts the bias of the amplifier with multiplexed inputs (IA in Figure 8) from 1.65 mA to 2.85 mA in steps of 200 $\mu$A;
* the output of the 4-bit current DAC adjusts the AC bias of the SQUIDs (ISQ in Figure 8) from 5 $\mu$A to 40 $\mu$A in steps of 2.5 $\mu$A.

The ASIC SQMUX128evo also incorporates two 3-bit voltage references for common mode adjustment at the input of the multiplexed amplifier (VICM) and at the output of the SQUID AC bias source (Vinit). This voltage ranges from 1.453 V to 1.895 V.
For the voltage references and current sources, the values measured at room temperature are fully consistent with those simulated. At low temperature, the reference resistance must be adjusted and the forward-biased diode threshold voltage shifts from 0.7 V to about 1 V, which must be taken into account to reproduce the simulated values for the current sources. The agreement between measurement and simulation has been achieved thanks to the use of a proven standard technology with a reliable design kit.

#### 3.1.3 Amplifier with 4 multiplexed inputs

The amplifier is made of 4 differential pairs of SiGe bipolar transistors (each with a trans-conductance $g_{m}$) whose collectors are connected to a common resistor ($R_{L}~=~560~\Omega$ at room temperature and $500~\Omega$ at 40 K). The multiplexing is achieved by means of a set of CMOS switches that sequentially bias whichever differential pair has to be activated ($I_{BIAS}~=~2~mA$ typically). A common mode (VICM) is applied at the input of each differential pair through 2 series resistors of $50~\Omega$ each (differential input impedance of $100~\Omega$) connected to a 3-bit voltage reference. Each output of the differential stage is followed by a common collector amplifier in order to reduce the output impedance to about $50~\Omega$ at low temperature. The expected maximum gain is about 20 and 70 at room and cryogenic temperature, respectively.

Gain and noise measurements were performed using an Agilent (http://www.agilent.com) HP89410 vector analyser. For the gain measurement, as the vector analyser does not have differential sources and inputs, the setup uses a “single-to-differential” circuit, made from an AD8132, to convert the single-ended signal coming from the analyser source into a differential one and drive the input of the amplifier under test. The output common mode of the AD8132 is adjusted to match the VICM of the amplifier under test. A Stanford Research SR560 amplifier is used to take the difference between the outputs of the amplifier under test before feeding the signal back to the input of the analyser. This external amplifier limits the bandwidth to about 1 MHz. For the noise measurement, this amplifier is also used to provide the extra gain needed to overcome the noise floor of the analyser. In addition, the noise measurement is performed by shunting the differential inputs of the amplifier under test with a short circuit in the lab, or with zero bias of the SQUIDs in QUBIC. The amplifier with multiplexed inputs is biased at maximum current (1.80 mA at 300 K and 2.85 mA at 77 K) so that the voltage gain is as high as possible.

Figure 9: Multiplexed LNA equivalent input noise voltage measurement at 300 K and 77 K (top) and at about 72 K for the two ASICs in QUBIC (bottom).

From 300 K to 77 K, the voltage gain increases from 20 to 70 as expected, and the white noise level decreases from 0.95 nV/$\sqrt{Hz}$ to 0.25 nV/$\sqrt{Hz}$, as shown in Figure 9. The corner frequency also increases, from about 100 Hz at room temperature to about 6 kHz at 77 K. The presence of an excess noise below 100 Hz at low temperature is not understood. The 3 dB bandwidth of the LNA is estimated at about 6 MHz by simulation; it could not be measured precisely because of the limitations of the measurement setup.

#### 3.1.4 AC bias current source

The AC bias current source allows two consecutive SQUIDs of the same row to be alternately biased at $+I_{sq}$ and $-I_{sq}$ through addressing capacitors (no power dissipation on the cryogenic stages, as compared to addressing with resistors).
It consists of two differential branches, each of them having an inverter degenerated by current mirrors referenced to the biasing current source described above. These inverters are controlled in phase opposition at the rate imposed by the column changes. Alternately, the outputs of these inverters simultaneously push and pull the same $I_{sq}$ biasing current through each SQUID of the same row via the addressing capacitors. A 1:32 multiplexer located at the output of the inverters of the AC source allows the selection of the row to be biased. In order to avoid a drift of the operating point at the output of the inverters of the AC biasing circuit, and a risk of saturation of the current sources, these outputs are connected to the voltage reference $V_{init}$ through 2 external resistors of 10 k$\Omega$.

#### 3.1.5 Multiplexer addressing circuit

The sequencing of the multiplexing is carried out internally in the ASIC by an integrated digital circuit referenced to an external clock signal CK of 100 kHz nominal frequency. In addition to the control of the LNA and the SQUID biasing circuit, it generates two signals active on the falling edge, SYNCCb and SYNCLb, that indicate the end of a row and of a complete cycle, respectively. The addressing circuit, clocked at a multiplexing frequency of 100 kHz, was functionally tested down to a temperature of 4.2 K, as shown in Figure 10 [8].

Figure 10: Functional clocking validation at 4.2 K of the multiplexer. Line: synchronizes the SQUID switching current source to the multiplexed LNA; Cycle: gives the start (pixel 1) of the full multiplexing cycle; Vout is the multiplexed signal of 128 pixels, with the SQUID stage replaced by 128 resistors biased through capacitors (4 different offsets are noticeable). The different data have been scaled and shifted for clarity. The numbers in red give the channel ordering.

#### 3.1.6 Functional tests of the ASIC with SQUIDs

Functional tests of the ASIC have been performed down to 4.2 K in a dedicated cryostat on a small array of 2 columns of 2 SQUIDs in series, as shown in Figure 11. The settling time of the system after switching from one channel to the other is of the order of 5 $\mu$s. These tests have validated the AC SQUID biasing operation and the overall multiplexing topology (switching AC current sources, multiplexed LNA and digital clocking).

Figure 11: Validation at 4.2 K of the ASIC and of the SQUID AC biasing operation through addressing capacitors (100 nF). The tests are performed on an array of 2 columns of 2 SQUIDs in series associated with the cryogenic ASIC. The measured data have been scaled and shifted for clarity. Signals in blue and orange are the synchronization signals of the SQUID switching current source and the multiplexed LNA, respectively. Signals in green and red are the measured multiplexed output signal, with and without offset compensation respectively, corresponding to the voltage-flux characteristic of each SQUID obtained by applying a large ramp signal to their feedback coil.

### 3.2 SQUID tests and characterizations

The SQUIDs are first selected with room temperature measurements and then characterized at cryogenic temperature in the QUBIC readout system.

#### 3.2.1 Selection and sorting of SQUIDs at 300 K

Before installation in QUBIC, the manufactured SQUIDs undergo a visual inspection in a clean room in order to detect and remove the ones exhibiting evidence of defects from fabrication or storage.
We then measure 4 resistance values at room temperature: the heater, the SQUID washer, the feedback inductance and the input inductance. The distribution of these values is found to be close to a Gaussian with a standard deviation of about 5% of the mean value. SQUIDs with all parameters within 2$\sigma$ of the mean values are selected for installation in QUBIC. SQUIDs that are between 2$\sigma$ and 3$\sigma$ for one or more measurements are held aside as a possible option in case there are not enough SQUIDs passing the first criterion. All SQUIDs with any parameter more than 3$\sigma$ from the mean are rejected. While these room temperature measurements do not guarantee that a SQUID is functional, the 2$\sigma$ and 3$\sigma$ selection process has been chosen as a trade-off between the number of available chips and the expected homogeneity of the SQUID behaviour.

A further selection is performed based on the leakage resistance between the SQUID washer and the input inductance. Leakage measured at cryogenic temperature is typically a few M$\Omega$ between a full stack of 32 SQUIDs and the 32 input inductances. This level of leakage does not significantly degrade the operation of the SQUIDs. The pass/fail level for leakage to the input inductance was therefore set at 2 M$\Omega$, with the majority of leakage values measured closer to 20 M$\Omega$. SQUIDs with leakage to the input inductance less than 2 M$\Omega$ were rejected in order to avoid any risk of electrostatic discharge damage. For the same reason, the leakage between the SQUID washer and the feedback coil must be that of an open circuit ($\mathrm{resistance}>40~\mathrm{M}\Omega$); otherwise the SQUID is rejected. We typically obtained a yield of about 82% for tested SQUIDs.

#### 3.2.2 Tests at Cryogenic Temperature

The characterization of the SQUIDs is performed at the beginning of the calibration phase, with the focal plane temperature kept just above the TES critical temperature so that the TESs are in the normal state, reducing the detector current noise contribution. The main goal is to define the optimal SQUID bias current to be used during observations. The principle of the procedure is the following: a slow sinusoidal signal with a 12 s period and 1 V peak-to-peak amplitude is injected into the feedback inductance through the feedback resistor $R_{\mathrm{fb}}$=10 k$\Omega$, and the bias current of the SQUIDs is increased step by step. For each value of the bias current $I_{\mathrm{sq}}$, the response of the SQUID is measured, as shown in Figure 12.

Figure 12: Flux-to-voltage SQUID transfer function for current biasing on ASIC 1. The plots show the response signal ($V_{\mathrm{sq}}$) as a function of the magnetic flux (in flux quanta) through the SQUID. There are 9 curves corresponding to increasing bias current ($I_{\mathrm{sq}}$) from bottom to top.

As the SQUID bias current $I_{\mathrm{sq}}$ increases, the amplitude of the response of the SQUID also increases until it reaches a maximum, then decreases. The optimum $I_{\mathrm{sq}}$ corresponds to the maximum amplitude of the SQUID response. Since the same $I_{\mathrm{sq}}$ must be supplied to all the SQUIDs of one ASIC, it is necessary to select a single bias value per ASIC. While it seems natural to choose the bias current that is optimal for the majority of the SQUIDs, in reality this does not maximize the number of operational SQUIDs. A SQUID is considered operational if its response is greater than 10 $\mu$V.
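Choosing the common bias then reduces to a simple count maximization; a minimal sketch, assuming the per-SQUID response amplitudes have been measured at each candidate bias current (array names are ours):

```python
import numpy as np

def choose_bias(responses_uv, threshold_uv=10.0):
    """Pick the common SQUID bias index for one ASIC.

    responses_uv: shape (n_bias, n_squids), peak-to-peak SQUID response
    in microvolts at each candidate bias current. A SQUID counts as
    operational above 10 uV; we maximize the operational count rather
    than taking the per-SQUID optimum.
    """
    operational = responses_uv > threshold_uv
    counts = operational.sum(axis=1)    # operational SQUIDs per bias value
    best = int(np.argmax(counts))
    return best, counts[best]
```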
The SQUID bias current is therefore chosen to maximize the number of operational SQUIDs. We also deduced the current noise from these data, by dividing the voltage noise (averaged between 40 Hz and 50 Hz) by the slope in the middle of the flux-to-voltage transfer function and by the input coil mutual inductance. Figure 13 shows the histograms of the SQUID response and of the deduced current noise for three $I_{\mathrm{sq}}$ bias currents for the two ASICs.

Figure 13: Histograms of SQUID voltage swing (top) and current noise (bottom) for SQUIDs connected to ASIC 1 (left) and ASIC 2 (right), for three bias currents around the optimal one.

The SQUID voltage swing histograms of Figure 13 show that 28 $\mu$A is the optimal bias current for ASIC 1, for which 93% of the SQUIDs are operational. For ASIC 2, the histograms give 30.6 $\mu$A as the best bias current, with 91.1% operational SQUIDs. In terms of current noise, the distribution is slightly more peaked for these optimal bias currents, as shown in the bottom histograms of Figure 13, with a median value around 70 $pA/\sqrt{Hz}$ dominated by the TES aliased current noise (see Section 4.9).

The yield of SQUIDs for the QUBIC TD is 93% for the 128 SQUIDs connected to ASIC 1 and 89% for the 128 SQUIDs connected to ASIC 2. This corresponds to 119 operational SQUIDs for ASIC 1 and 114 for ASIC 2, i.e., a total yield of 91%. The optimum bias current is 28.1 $\mu$A for ASIC 1 and 30.6 $\mu$A for ASIC 2.

## 4 TES characterization

The TES array currently used in QUBIC is referenced P87 in the production series. It has been selected after characterizations at both room and cryogenic temperatures. This array has also been fully characterized during the QUBIC calibration phase.

### 4.1 Selection process and integration

The TES arrays that have been successfully produced undergo visual inspections and resistance measurements at room temperature on a probe station. These measurements are done before integration and wire bonding to detect possible defects or issues with the routing. If successful, the array is integrated in the QUBIC holder and Al wire bonded (Figure 3). The next steps consist of characterizations at cryogenic temperature. They are done in an Oxford Instruments dilution fridge before integration in the QUBIC cryostat.

### 4.2 Critical Temperature

The QUBIC detector wafer includes 8 dark pixels, 4 for each ASIC, in addition to the 124 active pixels per ASIC. These channels consist of NbSi thermal sensors with the same geometry as the ones used in the TESs, but without thermal decoupling from the silicon wafer. They are located outside the focal plane and are therefore shielded from radiation. Figure 14 shows measurements of the transition from the normal to the superconducting state for 2 of these dark pixels, measured in a dilution fridge cryostat (a dedicated test bed for the selection of detectors) by slowly increasing the temperature, with a QUBIC readout chain. The critical temperature is about 412 mK, and some temperature dependence is still present above the transition, which still provides some sensitivity in case of saturation. The unit-less parameter $\alpha=\frac{T}{R}\times\frac{\partial{R}}{\partial{T}}$ has been evaluated from these transition curves and is higher than 100 in the lowest part of the transition. A small transition, which is not understood, is also visible at about 520 mK in this P87 array.
Figure 14: Resistance as a function of temperature for array P87 dark pixels on channels 68 and 100 (red) and derived $\alpha=\frac{T}{R}\frac{\partial{R}}{\partial{T}}$ parameter (blue).

### 4.3 TES normal and parasitic resistances

With the bath temperature below the TES critical one, the detectors need to be over-biased (above about 7 $\mu$V) in order to be in the normal state. A slow and small sine wave voltage oscillation was added in order to deduce the resistance value. Figure 15 (left) shows the distribution of the normal resistance values for the array P87. It is highly peaked around 1.2 $\Omega$ as expected from the transition measurement. The same procedure is used to determine the resistance in the superconducting state, but without any DC bias on the detectors. The residual resistance obtained from these measurements, assuming the TES resistance is 0 $\Omega$, is given by the sum of the shunt resistance (10 m$\Omega$) and the parasitic resistance which is in series with the TES. The parasitic resistance is assumed to come from the connectors used. Figure 15 (right) shows the distribution of these residual resistance values for the array P87. The median is 28 m$\Omega$, which leads to a parasitic resistance of about 18 m$\Omega$, compatible with previous measurements on a dedicated test bench.

Figure 15: Histogram of normal resistance (left) and of residual resistance in the superconducting state (right; 167 TES in total for both graphs). This residual resistance is the sum of a parasitic resistance and the shunt resistor of 10 m$\Omega$.

### 4.4 TES parameters

The I-V characteristics at different temperatures have been acquired both in a dilution fridge cryostat and in the QUBIC cryostat with the optical window open and closed. The measurement follows the procedure outlined in [9]. Figure 16 shows the I-V curves for the P87 array measured at 360 mK in blind configuration and Figure 17 is an example of the I-V curves for one TES on ASIC2 at different temperatures. The strong Electro-Thermal Feedback (ETF) regime is clearly seen with the increase of the TES current at low bias voltages. An overall yield of about 84% is obtained from this I-V characterization.

Figure 16: I-V curves for the array of detectors at 360 mK. Each box in the plot shows the measured I-V curve for the detector in that position in the focal plane. Detailed I-V curves are shown in Figure 17. This is array P87 measured in the APC dilution cryostat. The vertical axis for each plot is in arbitrary current units, scaled for the minimum and maximum of each plot. There are 244 TES bolometers in the focal plane of the QUBIC Technical Demonstrator. Eight TES are outside the focal plane (not shown) and are used as dark detectors for comparison. The background colour indicates the bias voltage turnover point. We see homogeneous characteristics of the TES array and a yield of 84% (proportion of TES showing an Electro-Thermal Feedback effect in the I-V curve). The black, filled-in “pixels” in the bottom-left are empty positions. The QUBIC-FI will have four arrays equivalent to this one in order to make a roughly circular focal plane for each frequency channel.

The physical parameters of each TES can be deduced from these measurements.
Assuming the TES is in the strong ETF regime and that it is blind, the Power-Temperature relation is classically given by: $P_{\mathrm{bias}}=\kappa(T_{c}^{n}-T_{\mathrm{bath}}^{n})$ (4.1) where $P_{\mathrm{bias}}$ is the bias power dissipated in the TES, $T_{\mathrm{bath}}$ is the bath temperature, $T_{c}$ the TES critical temperature, and $\kappa$ and $n$ are constants that depend on the thermal link between the absorber and the bath. In the ETF regime, the bias power is therefore constant for a given bath temperature. The dynamic thermal conductance $G$ is further given by the following equation: $G=n\kappa T_{c}^{n-1}$ (4.2) Figure 18 shows an example of the Power-Temperature relation for the same TES as in Figure 17. A curve fitting algorithm based on Eq. 4.1 is used to derive the values of $\kappa$, $n$ and $T_{c}$ from measured temperatures and powers.

Figure 17: I-V curves of TES#63 on ASIC2 at different temperatures. The TES voltage is obtained from the bias voltage with a $10^{-6}$ divider bridge.

Figure 18: An example of the fit to the power-temperature measurements. This is for TES#63 on ASIC2. The best fit parameters from equation 4.1 are also given, with the deduced dynamic thermal conductance and phonon NEP.

Although degenerate with $\kappa$, as shown in [3], the index $n$ of the power law is around 4, as expected for 500 nm thick Si$_{3}$N$_{4}$ legs. Figure 19 shows the distribution of the critical temperature and dynamic thermal conductance obtained with the fit. The critical temperature is around 410 mK, as measured on the dark pixels, and the median dynamic thermal conductance is about 300 pW/K. The spread in these parameters probably stems from the previously mentioned degeneracy between the fit parameters.

### 4.5 Detector biasing

A common bias voltage is used for all 128 TESs read out by one ASIC. As seen in Figure 16, there is some inhomogeneity in the pixel behaviour, especially below the turnover, which could lead to over- or under-biasing some pixels. Going deeper into the transition should wipe out this effect since the responsivity depends only on the bias voltage in strong ETF. We nevertheless experienced some instability at low bias because the FLL is no longer fast enough with respect to the TES time constant. The yield therefore decreases when going deeper into the transition. As a consequence, an optimum has to be found between stability and responsivity, which is usually between 2 and 3 $\mu$V.

Figure 19: Distribution of critical temperature (top) and dynamic thermal conductance (bottom) of the P87 TES array.

### 4.6 Power background

The P-V curves measured during blind characterizations and with the QUBIC optical window open are compared in Figure 20. The comparison leads to an estimate of the power background of the order of 5 pW, which is higher than the expected 1-2 pW from the photometric model of the instrument. This could be due to a difference in temperature sensor calibration between the cryostat used for blind characterizations and QUBIC.

Figure 20: Examples of electrical power versus bias voltage measured in the dilution fridge and in QUBIC for two detectors. Comparing the electrical power at the same bath temperature in the Electro-Thermal Feedback mode (at low bias voltage) gives an estimation of the background power.
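As an illustration of the parameter extraction of section 4.4, a minimal fit of Eq. 4.1 might look as follows (a scipy-based sketch with synthetic data; the parameter values are loosely inspired by the numbers quoted above but are otherwise assumptions, not the QUBIC analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the power-temperature fit of Eq. 4.1 on toy data:
# P_bias = kappa * (Tc**n - Tbath**n), then G = n * kappa * Tc**(n-1) (Eq. 4.2).
def p_bias(t_bath, kappa, n, t_c):
    return kappa * (t_c**n - t_bath**n)

# Hypothetical measurements: bath temperatures (K) and bias powers (W)
t_bath = np.array([0.30, 0.32, 0.34, 0.36, 0.38])
rng = np.random.default_rng(1)
p_meas = p_bias(t_bath, 1.1e-9, 4.0, 0.410) * (1 + 0.02 * rng.normal(size=5))

popt, _ = curve_fit(p_bias, t_bath, p_meas, p0=[1e-9, 4.0, 0.4], maxfev=10000)
kappa, n, t_c = popt
G = n * kappa * t_c**(n - 1)   # dynamic thermal conductance, W/K
print(f"kappa={kappa:.2e} W/K^n, n={n:.2f}, Tc={t_c*1e3:.0f} mK, G={G*1e12:.0f} pW/K")
```

Note that, as stated in the text, $\kappa$ and $n$ are degenerate, so a fit of this kind needs reasonable starting values to converge to physically meaningful parameters.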
### 4.7 Phonon Noise Equivalent Power

The expected phonon Noise Equivalent Power (NEP$_{\mathrm{phonon}}$) was derived from the fitted parameters with the relation [10]: $NEP_{\mathrm{phonon}}=\sqrt{\gamma 4k_{B}T^{2}G}$ (4.3) where $k_{B}$ is the Boltzmann constant, $T$ is the bolometer temperature and $\gamma$ is a correction coefficient given by: $\gamma=\frac{n}{2n+1}\frac{1-(\frac{T_{\mathrm{bath}}}{T})^{2n+1}}{1-(\frac{T_{\mathrm{bath}}}{T})^{n}}.$ (4.4) Figure 21 shows histograms of the distribution of the phonon NEP values for the full array derived from the fitted parameters. There is a strong clustering of NEP values around $4.8\times 10^{-17}~{}\mathrm{W}/\sqrt{\mathrm{Hz}}$. The dominance of this value in the histogram is an indication of the homogeneity of the fabrication process of the TES array [11].

Figure 21: Histogram of the phonon noise equivalent power of the full array derived from the fitted parameters. The average phonon NEP is $4.8\times 10^{-17}~{}\mathrm{W}/\sqrt{\mathrm{Hz}}$.

### 4.8 Time constants

The performance of QUBIC has been tested using a monochromatic calibration source [12]. To estimate the time constants, the calibration source is modulated in power with a square wave signal with a frequency of 0.6 Hz and a duty cycle of 33%. The amplitude is chosen to avoid saturation of the detectors while having sufficient signal-to-noise ratio (SNR). The power amplitude on the focal plane is however not constant but corresponds to the synthetic beam. By using a detector located on the calibration source, we checked that the intrinsic rise and fall times are much faster than the expected time constant of the detectors (which is of the order of a few tens of ms). To process the data, we applied very mild low-pass and high-pass filtering, as we do not want the filtering to alter the time constants. We then fold the data for each TES into one period of the calibration source. The filtering and the resulting folded signal are shown in Figure 22 for one TES. The signal peaks in the spectrum can be easily seen.

Figure 22: Folded signal for TES 94. Upper: the power spectrum in ADU. Lower: normalized folded data for some TESs in black; the median of all detectors is shown in red and its derivative in blue.

Figure 22 (lower) shows the normalized (average removed and divided by the RMS) folded data for each TES in black; the median is shown in red. The derivative is shown in blue and helps find a first guess for the start time of the calibration source, shown as a red dot. Note that no selection has been made at this stage to remove TESs with low SNR. We then fit each TES folded signal (not normalized, i.e., with its proper amplitude) with a model for the calibration source signal including a rise time and a fall time. Figure 23 shows the average time constants of all TESs as a function of $V_{TES}$. The rise time constant appears shorter than the fall time, indicating again the effect of ETF, but also that we are probably reaching a non-linear regime for most TESs. For small signals, we expect a single time constant of at most the value of the rise time measured during this sequence, so about 40 ms. This value is sufficient for QUBIC since the considered scanning speed is about 1 deg/s, which leads to a duration of 500 ms for a 30 arcmin beam width.

Figure 23: Average value of time constants for rise and fall time as a function of $V_{TES}$.

### 4.9 Noise characterizations

Aliasing of the TES Johnson noise is a limitation to Time Domain Multiplexing performance.
Any source of noise before the SQUIDs with a bandwidth higher than the sampling frequency will be aliased. In Time Domain Multiplexing, the signal of each detector is averaged during the measurement duration $T_{meas}$, which is smaller than the sampling period $T_{s}=1/f_{s}$ by a factor $N_{MUX}$, the total number of pixels read out in the multiplexing scheme. The noise bandwidth of this averaging is therefore given by $\Delta f=\frac{1}{2\times T_{meas}}=\frac{f_{s}\times N_{MUX}}{2}$. The aliasing leads to an increase of noise given by the square root of the ratio between the noise bandwidth and the Nyquist frequency $f_{s}/2$, that is $\sqrt{N_{MUX}}$. In QUBIC, the ADC frequency $f_{ADC}=2~{}MHz$ drives the multiplexing. The main parameters are therefore: (i) the number of samples $N_{s}$ to be read out for each TES, and (ii) the total number of pixels to be read out. The maximum number of pixels is equal to $N_{MUX}$, which is 128. By reducing the number of pixels sampled, the sampling frequency per pixel is increased. The sampling frequency per TES is $f_{s}=\frac{f_{ADC}}{N_{s}\times N_{MUX}}$. Typical parameters are $N_{s}=100$ and $N_{MUX}=128$, leading to $f_{s}=156.25~{}Hz$ and $\Delta f=10~{}kHz$. The SQUID input inductance value is $L_{in}=651~{}nH$, which leads to a bandwidth higher than 24 kHz for a TES resistance above 100 m$\Omega$. For such resistance values, the Johnson noise is increased by a factor $\sqrt{N_{MUX}}=11.3$. To overcome this limitation, a Nyquist inductor can be added in series with the TES. A value of $L_{Nyq}=15~{}\mu H$ will reduce the noise bandwidth of the Johnson noise to 1 kHz for a 100 m$\Omega$ resistance, giving a noise increase of 3.6 for the typical parameters. The number of samples $N_{s}$ can also be reduced in order to increase the sampling frequency and further reduce the aliasing. This aliasing limitation was expected for the TD and will be improved for the Full Instrument by both adding a Nyquist inductor and increasing the sample rate. The result for the TD is a constraint on the NEP of about $10^{-16}~{}\mathrm{W}/\sqrt{\mathrm{Hz}}$, which is a factor 2 higher than the FI requirement, but this sensitivity is acceptable for the QUBIC-TD.

#### 4.9.1 Noise in normal and superconducting states

Figure 24 shows the histogram of the measured current noise between 1 Hz and 2 Hz in the normal (bias voltage at 8 $\mu$V) and superconducting states of the TES. In the normal state, a typical value of 110 $\mathrm{pA}/\sqrt{\mathrm{Hz}}$ is obtained, compatible with the expectation within a factor of 2 taking into account the aliasing effect. In the superconducting state, the median current noise is 470 $\mathrm{pA}/\sqrt{\mathrm{Hz}}$, compatible with the expectation taking into account the aliasing effect and the fact that the shunt resistor and probably part of the parasitic resistance are located on the 1 K stage, which was cooled to only about 2.6 K during this measurement.

Figure 24: Histogram of current noise measured between 1 Hz and 2 Hz in the normal state (left; 153 TES in total) and in the superconducting state (right; 192 TES in total).

#### 4.9.2 Noise in the transition

The detector current noise can be converted into NEP assuming the TESs are in strong Electro-Thermal Feedback mode. In this case, the TES responsivity $\Re~{}[A/W]$ is given by the inverse of the TES voltage, $\Re=\frac{1}{V_{TES}}$. The TES voltage is obtained from the bias voltage assuming the TES resistance is higher than the shunt resistance: $V_{TES}=V_{bias}\times 10^{-6}$.
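The aliasing bookkeeping and the NEP conversion above can be condensed into a short numeric sketch (all input values are those quoted in the text; the 70 $\mathrm{pA}/\sqrt{\mathrm{Hz}}$ figure is the median current noise from section 3.2.2):

```python
import numpy as np

# Numeric sketch of the aliasing bookkeeping, using values quoted in the text.
f_adc = 2e6            # ADC frequency, Hz
n_s, n_mux = 100, 128  # samples per TES and multiplexing factor
f_s = f_adc / (n_s * n_mux)          # sampling frequency per TES = 156.25 Hz
delta_f = f_s * n_mux / 2            # noise bandwidth of the averaging = 10 kHz
print(f"f_s = {f_s:.2f} Hz, delta_f = {delta_f/1e3:.0f} kHz")

# Aliasing factor = sqrt(noise bandwidth / Nyquist frequency)
nyquist = f_s / 2
print("no Nyquist inductor :", np.sqrt(delta_f / nyquist))  # sqrt(N_MUX) ~ 11.3
bw_nyq = 1e3  # Hz, Johnson-noise bandwidth with L_Nyq = 15 uH and R = 100 mOhm
print("with Nyquist inductor:", np.sqrt(bw_nyq / nyquist))  # ~ 3.6

# Converting a measured current noise to NEP in strong ETF (responsivity 1/V_TES)
v_tes = 1.5 * 1e-6                   # TES voltage from a 1.5 V bias voltage
nep = 70e-12 * v_tes                 # 70 pA/sqrt(Hz) -> ~1e-16 W/sqrt(Hz)
print(f"NEP = {nep:.1e} W/sqrt(Hz)")
```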
Figure 25 shows some typical NEP spectra at different bias voltages. There is clear evidence of a noise increase at low frequency when decreasing the bias voltage, which is usually produced by the phonon noise in the TES. The noise level is however much higher than expected and it varies between the TESs, as seen in Figure 25. This elevated level has been attributed to a high sensitivity to microphonics from the pulse tubes (PTs), as demonstrated in the following.

Figure 25: NEP spectra on some channels at different bias voltages, from 3 V to 1 V. This corresponds to the ratio of the TES resistance to the normal resistance ranging from about 60% to about 10%. Note that these measurements were taken at a higher sampling frequency by choosing only rows 1 to 8, so $N_{MUX}=32$, which leads to $f_{s}=625$ Hz.

A test of sensitivity to pulse tube microphonics was carried out by stopping the two units for a few minutes. An example timeline and the associated time-frequency analysis are shown in Figure 26. The noise level below a few Hz is reduced when both PTs are off while it remains the same at higher frequency. This frequency range where a noise improvement is measured corresponds to the detector bandwidth. The induced parasitic signal on the detector is therefore thermal. The remaining excess of low frequency noise when both PTs are off is attributed to temperature drift.

Figure 26: Examples of timeline in power and corresponding time-frequency analysis (in log of NEP) for two TESs (left: TES 25 and right: TES 57). The two pulse tubes are OFF between $\sim 240$ s and $\sim 420$ s.

Figure 27 (left) shows the distribution in NEP for two cases: PTs on or off. It appears that the median NEP when the PTs are on is about 3 times higher than when they are off. We are clearly dominated by the PT microphonics. The distribution of the NEP ratio between PTs on and off is presented in Figure 27 (right) and Figure 28 shows the degradation of noise caused by the PTs across the TES array. If there are mechanical resonances on the wafer, we expect to measure an increase of excess noise in specific locations, most probably in the middle of the array. It is not clear at this stage whether any such location-specific excess is present on the wafer.

Figure 27: Left: Histogram of NEP measured between 1 Hz and 2 Hz in the transition ($V_{bias}=1.5$ V) with PTs ON and OFF. The response is assumed to be given by $1/V_{TES}$. The total number of TES is 130 and 120, respectively. Right: Histogram of the ratio of NEP with PTs ON to NEP with PTs OFF. The total number of TES is 143.

The origin of these perturbations was investigated. We checked from temperature stability measurements that it is not due to thermal fluctuations of the TES or of the 1 K stage. The interpretation is the following: the pulse tube vibrations excite mechanical resonances in the TES support structure but also in the TESs themselves. These mechanical resonances then dissipate heat in different parts of the system. This assumption is supported by 3 arguments:

1. In the timelines of Figure 26 after the PTs are switched off, we see a small increase in the TES power, which is due to a slight cooling of the detector, before it heats up again due to the background increase.

2. We mechanically excited the cryostat with a speaker connected to an audio amplifier and a sine wave generator sweeping from 100 Hz to 1300 Hz in one hour. Figure 29 shows the signals of some TESs and of the TES stage thermometer as a function of the excitation frequency.
Resonances are clearly seen, especially around 700 Hz.

3. With the same setup, we excited the cryostat at a resonance (251 Hz), but with the sine wave amplitude-modulated at 1.5 Hz with 50% depth. Figure 30 shows that this 1.5 Hz modulation is seen directly by the TES. When the excitation frequency is moved off resonance (to 238 Hz for instance), the 1.5 Hz line disappears from the TES spectra.

We are therefore seeing some heat dissipation produced mainly by the PT vibrations. The environment could also contribute to a lesser extent, for example the traffic on the nearby road.

Figure 28: Map of the NEP ratio between PTs on and off. No clear pattern is visible, as one would expect from wafer mechanical resonances.

Figure 29: Top: time-ordered signals in ADU of some TESs with the time axis converted to the frequency of the mechanical excitation. Bottom: temperature of the TES stage as a function of the frequency of excitation. The graphs have been adjusted to share the same x-axis. At mechanical excitation frequencies between about 600 Hz and 800 Hz, resonances are clearly seen on the TES signals and in the TES stage temperature.

Figure 30: Spectra of TES 96 showing the 1.5 Hz signal from the amplitude modulation of the mechanical excitation at 251 Hz. This modulation frequency is not seen off resonance.

A better mechanical decoupling of the two PTs is needed to overcome this problem. The current thermal straps on the 40 K cold heads are made of thin copper plates which are soft in only one direction. Very soft copper braids will replace these thermal straps to the 40 K shield. The 4 K cold head is already thermally connected to the 4 K shield with very soft copper braids. On the cryostat itself, a soft bellows between the PT and the structure could be added, but this needs a detailed study. It should be noted that microphonics is a common problem for PT systems, but the effect depends on the detailed mechanical configuration of the setup. This explains why such a strong effect was not seen at the sub-system level. This effect is described in [13, 14, 15, 16].

## 5 Conclusion

The QUBIC detection chain, based on TESs and SQUIDs, has reached an important milestone. We demonstrated an overall yield of approximately 80% of working detectors (TESs and SQUIDs included), a thermal decoupling compatible with a phonon noise of about $5\times 10^{-17}~{}\mathrm{W}/\sqrt{\mathrm{Hz}}$ at a 410 mK critical temperature, and a time constant of about 40 ms, which is sufficient for the instrument. The QUBIC sensitivity is however currently limited to $2\times 10^{-16}~{}\mathrm{W}/\sqrt{\mathrm{Hz}}$ by microphonic noise and aliasing in the readout electronics. The former will soon be improved by mechanically decoupling the first stages of the pulse tubes. The aliasing of the detector noise will be reduced by increasing the sampling frequency and adding Nyquist inductors to limit the noise bandwidth of the detectors.

## Acknowledgments

QUBIC is funded by the following agencies. France: ANR (Agence Nationale de la Recherche) 2012 and 2014, DIM-ACAV (Domaine d’Intérêt Majeur-Astronomie et Conditions d’Apparition de la Vie), CNRS/IN2P3 (Centre national de la recherche scientifique/Institut national de physique nucléaire et de physique des particules), CNRS/INSU (Centre national de la recherche scientifique/Institut national des sciences de l’univers).
Italy: CNR/PNRA (Consiglio Nazionale delle Ricerche/Programma Nazionale Ricerche in Antartide) until 2016, INFN (Istituto Nazionale di Fisica Nucleare) since 2017. Argentina: MINCyT (Ministerio de Ciencia, Tecnología e Innovación), CNEA (Comisión Nacional de Energía Atómica), CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas). D. Burke and J.D. Murphy acknowledge funding from the Irish Research Council under the Government of Ireland Postgraduate Scholarship Scheme. D. Gayer and S. Scully acknowledge funding from the National University of Ireland, Maynooth. D. Bennett acknowledges funding from Science Foundation Ireland. ## References * [1] D. Prêle, F. Voisin, C. Beillimaz, S. Chen, M. Piat, A. Goldwurm, and P. Laurent, “SiGe Integrated Circuit Developments for SQUID/TES Readout,” Journal of Low Temperature Physics, vol. 193, pp. 455–461, Nov. 2018. * [2] M. Piat, B. Bélier, L. Bergé, N. Bleurvacq, C. Chapron, S. Dheilly, L. Dumoulin, M. González, L. Grandsire, J. C. Hamilton, S. Henrot-Versillé, D. T. Hoang, S. Marnieros, W. Marty, L. Montier, E. Olivieri, C. Oriol, C. Perbost, D. Prêle, D. Rambaud, M. Salatino, G. Stankowiak, J. P. Thermeau, S. Torchinsky, F. Voisin, P. Ade, J. G. Alberro, A. Almela, G. Amico, L. H. Arnaldi, D. Auguste, J. Aumont, S. Azzoni, S. Banfi, P. Battaglia, E. S. Battistelli, A. Baùski, L. P. Ferreyro, D. Fracchia, C. Franceschet, M. M. Gamboa Lerena, K. Ganga, B. García, M. E. García Redondo, M. Gaspard, A. Gault, D. Gayer, M. Gervasi, M. Giard, V. Gilles, Y. Giraud-Heraud, M. Gómez Berisso, M. Gradziel, D. Harari, V. Haynes, F. Incardona, E. Jules, J. Kaplan, A. Korotkov, C. Kristukat, L. Lamagna, S. Loucatos, T. Louis, R. Luterstein, B. Maffei, S. Masi, A. Mattei, A. May, M. McCulloch, M. C. Medina, L. Mele, S. Melhuish, A. Mennella, L. Mousset, L. M. Mundo, J. A. Murphy, J. D. Murphy, F. Nati, C. O’Sullivan, A. Paiella, F. Pajot, A. Passerini, H. Pastoriza, A. Pelosi, M. Perciballi, F. Pezzotta, F. Piacentini, L. Piccirillo, G. Pisano, M. Platino, G. Polenta, R. Puddu, P. Ringegni, G. E. Romero, J. M. Salum, A. Schillaci, C. Scóccola, S. Scully, S. Spinelli, M. Stolpovskiy, F. Suarez, A. Tartari, P. Timbie, M. Tomasi, C. Tucker, G. Tucker, S. Vanneste, D. Viganò, N. Vittorio, B. Watson, F. Wicek, M. Zannoni, and A. Zullo, “QUBIC: Using NbSi TESs with a Bolometric Interferometer to Characterize the Polarization of the CMB,” Journal of Low Temperature Physics, vol. 200, pp. 363–373, Apr. 2020. * [3] M. Salatino, B. Bélier, C. Chapron, D. T. Hoang, S. Maestre, S. Marnieros, W. Marty, L. Montier, M. Piat, D. Prêle, D. Rambaud, J. P. Thermeau, S. A. Torchinsky, S. Henrot-Versillé, F. Voisin, P. Ade, G. Amico, D. Auguste, J. Aumont, S. Banfi, G. Barbarán, P. Battaglia, E. Battistelli, A. Baú, D. Bennett, L. Bergé, J. P. Bernard, M. Bersanelli, M. A. Bigot-Sazy, N. Bleurvacq, J. Bonaparte, J. Bonis, G. Bordier, E. Bréelle, E. Bunn, D. Burke, D. Buzi, A. Buzzelli, F. Cavaliere, P. Chanial, R. Charlassier, F. Columbro, G. Coppi, A. Coppolecchia, F. Couchot, R. D’Agostino, G. D’Alessandro, P. de Bernardis, G. De Gasperis, M. De Leo, M. De Petris, A. Di Donato, L. Dumoulin, A. Etchegoyen, A. Fasciszewski, C. Franceschet, M. M. Gamboa Lerena, B. García, X. Garrido, M. Gaspard, A. Gault, D. Gayer, M. Gervasi, M. Giard, Y. Giraud-Héraud, M. Gómez Berisso, M. González, M. Gradziel, L. Grandsire, E. Guerrard, J. C. Hamilton, D. Harari, V. Haynes, F. Incardona, E. Jules, J. Kaplan, A. Korotkov, C. Kristukat, L. Lamagna, S. Loucatos, T. Louis, A.
Lowitz, V. Lukovic, R. Luterstein, B. Maffei, S. Masi, A. Mattei, A. J. May, M. A. McCulloch, M. C. Medina, L. Mele, S. Melhuish, A. Mennella, L. M. Mundo, J. A. Murphy, J. D. Murphy, C. O’Sullivan, E. Olivieri, A. Paiella, F. Pajot, A. Passerini, H. Pastoriza, A. Pelosi, C. Perbost, O. Perdereau, F. Pezzotta, F. Piacentini, L. Piccirillo, G. Pisano, G. Polenta, R. Puddu, P. Ringegni, G. E. Romero, A. Schillaci, C. G. Scóccola, S. Scully, S. Spinelli, M. Stolpovskiy, F. Suarez, A. Tartari, P. Timbie, M. Tristram, V. Truongcanh, C. Tucker, G. Tucker, S. Vanneste, D. Viganò, N. Vittorio, B. Watson, F. Wicek, M. Zannoni, and A. Zullo, “Performance of NbSi transition-edge sensors readout with a 128 MUX factor for the QUBIC experiment,” in Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy IX, vol. 10708 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, p. 1070845, July 2018. * [4] S. Marnieros, P. Ade, J. G. Alberro, A. Almela, G. Amico, L. H. Arnaldi, D. Auguste, J. Aumont, S. Azzoni, S. Banfi, P. Battaglia, E. S. Battistelli, A. Baù, B. Bélier, D. Bennett, L. Bergé, J. P. Bernard, M. Bersanelli, M. A. Bigot-Sazy, N. Bleurvacq, J. Bonaparte, J. Bonis, A. Bottani, E. Bunn, D. Burke, D. Buzi, F. Cavaliere, P. Chanial, C. Chapron, R. Charlassier, F. Columbro, A. Coppolecchia, G. D’Alessandro, P. de Bernardis, G. De Gasperis, M. De Leo, M. De Petris, S. Dheilly, L. Dumoulin, A. Etchegoyen, A. Fasciszewski, L. P. Ferreyro, D. Fracchia, C. Franceschet, M. M. Gamboa Lerena, K. Ganga, B. García, M. E. García Redondo, M. Gaspard, D. Gayer, M. Gervasi, M. Giard, V. Gilles, Y. Giraud-Heraud, M. Gómez Berisso, M. González, M. Gradziel, L. Grandsire, J. C. Hamilton, D. Harari, S. Henrot-Versillé, D. T. Hoang, F. Incardona, E. Jules, J. Kaplan, C. Kristukat, L. Lamagna, S. Loucatos, T. Louis, B. Maffei, W. Marty, S. Masi, A. Mattei, A. May, M. McCulloch, L. Mele, S. Melhuish, A. Mennella, L. Montier, L. Mousset, L. M. Mundo, J. A. Murphy, J. D. Murphy, F. Nati, E. Olivieri, C. Oriol, C. O’Sullivan, A. Paiella, F. Pajot, A. Passerini, H. Pastoriza, A. Pelosi, C. Perbost, M. Perciballi, F. Pezzotta, F. Piacentini, M. Piat, L. Piccirillo, G. Pisano, M. Platino, G. Polenta, D. Prêle, R. Puddu, D. Rambaud, P. Ringegni, G. E. Romero, M. Salatino, J. M. Salum, A. Schillaci, C. Scóccola, S. Scully, S. Spinelli, G. Stankowiak, M. Stolpovskiy, A. Tartari, J. P. Thermeau, P. Timbie, M. Tomasi, S. Torchinsky, G. Tucker, C. Tucker, D. Viganò, N. Vittorio, F. Voisin, F. Wicek, M. Zannoni, and A. Zullo, “TES Bolometer Arrays for the QUBIC B-Mode CMB Experiment,” Journal of Low Temperature Physics, vol. 199, pp. 955–961, Jan. 2020. * [5] D. Prêle, M. Piat, L. Sipile, and F. Voisin, “Operating point and flux jumps of a SQUID in flux-locked loop,” IEEE Transactions on Applied Superconductivity, vol. 26, no. 2, pp. 1–5, 2016. * [6] D. Prêle, F. Voisin, M. Piat, T. Decourcelle, C. Perbost, C. Chapron, D. Rambaud, S. Maestre, W. Marty, and L. Montier, “A 128 Multiplexing Factor Time-Domain SQUID Multiplexer,” Journal of Low Temperature Physics, vol. 184, pp. 363–368, Jul 2016. * [7] J. Aumont et al., “QUBIC Technical Design Report,” arXiv e-prints, p. arXiv:1609.04372, Sep 2016. * [8] D. Prêle, F. Voisin, M. Piat, J. Martino, T. Decourcelle, and C. Chapron, “Capacitively-Coupled SQUID Bias for Time Division Multiplexing,” Journal of Low Temperature Physics, vol. 176, pp. 433–438, Aug. 2014. * [9] C. 
Perbost, “TES arrays for the detection of CMB B-mode polarisation: application to the QUBIC experiment,” Archives Ouvertes, Dec 2016. * [10] J. C. Mather, “Bolometer noise: nonequilibrium theory,” Applied Optics, vol. 21, pp. 1125–1129, Mar. 1982. * [11] S. Marnieros et al., “TES bolometer arrays for the QUBIC B-mode CMB experiment,” in 18th International Workshop on Low Temperature Detectors, Jul 2019. * [12] S. A. Torchinsky et al., “QUBIC – III: Laboratory Characterization,” J. Cosmo. Astroparticle Phys., vol. ?, p. ?, Oct 2020, submitted. * [13] S. R. Dicker et al., MUSTANG: 90 GHz science with the Green Bank Telescope, vol. 7020 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, p. 702005, 2008. * [14] C. D. Sheehy et al., The Keck Array: a pulse tube cooled CMB polarimeter, vol. 7741 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, p. 77411R, 2010. * [15] R. Maisonobe, J. Billard, M. De Jesus, A. Juillard, D. Misiak, E. Olivieri, S. Sayah, and L. Vagneron, “Vibration decoupling system for massive bolometers in dry cryostats,” Journal of Instrumentation, vol. 13, p. T08009, Aug 2018. * [16] L. Gottardi, H. van Weers, J. Dercksen, H. Akamatsu, M. P. Bruijn, J. R. Gao, B. Jackson, P. Khosropanah, J. van der Kuur, K. Ravensberg, and M. L. Ridder, “A six-degree-of-freedom micro-vibration acoustic isolator for low-temperature radiation detectors based on superconducting transition-edge sensors,” Review of Scientific Instruments, vol. 90, p. 055107, May 2019.
# A simple finite delayed multi-type branching process for infectious disease modeling

Andrew Hart and Servet Martínez

Center for Mathematical Modeling, IRL 2807 CNRS-UCHILE, Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile, Santiago, Chile. E-mail<EMAIL_ADDRESS><EMAIL_ADDRESS>

(21 October, 2022)

###### Abstract

We study a model for the spread of an infectious disease which incorporates spatial and temporal effects. The model is a delayed multi-type branching process in which types represent geographic regions while infected individuals reproduce offspring during a finite time interval and have convalescence times and random death/recovery outcomes. With the imposition of the condition that the mean matrices at each delay offset all have the same left and right Perron-Frobenius eigenvectors, we give simple expressions for various quantities derived from the limit of the geometrically weighted mean evolution of the process which are straightforward to compute explicitly from the mean matrices.

Keywords: delayed multi-type branching process; Perron-Frobenius theory; infectious disease modeling.

2020 MSC: 60J80, 60J85, 92D30.

## 1 Introduction

Inspired by agent-based simulation studies of the spread of SARS-CoV-2 such as that reported in [8], our aim is to study a model for the epidemic spread of an infectious agent which possesses levels of infectiousness that vary both spatially and according to the time elapsed since infection within a fixed finite time window, and which has a convalescence time that is not bounded in general. Our model is based on discrete-time branching processes (we shall write ${{\mathfrak{b}}{\mathfrak{p}}}$ for branching process in what follows), which find a natural application in describing disease spread, for instance, see [2, 10]. More precisely, it is a cross between a multi-type ${\mathfrak{b}}{\mathfrak{p}}$ and a delayed ${\mathfrak{b}}{\mathfrak{p}}$ sporting the following characteristics. Individuals who contract an infectious disease are only contagious for a short time immediately following their infection, say, $D$ days. Thus, an individual born (infected) at time $s$ has the opportunity to reproduce offspring at times $\\{s+d:1\leq d\leq D\\}$, and the number of offspring born at time $s+d$ follows a law that depends on the time offset $d$. Individuals may continue suffering the effects of the disease long after they have ceased to be infectious and the model assigns each individual in the population a random lifetime (convalescence time). This time, which is not necessarily bounded, may be zero, signifying that the individual is asymptomatic. Symptomatic individuals either die or recover according to a Bernoulli random variable at the end of their lifetimes. If they die before $D$ days have elapsed, they cease to be infectious. Asymptomatic individuals (with zero lifetimes) do not die, but remain contagious for $D$ days following infection. Spatial disparity is captured by including a multi-type component in which types represent regions and the numbers of offspring of all types produced by individuals of distinct types can have different distributions. The model plays host to three closely related processes that evolve in time: the offspring process, the asymptomatic population size process and the symptomatic population size process. The class of models we study here is much more restrictive than the C-M-J ${\mathfrak{b}}{\mathfrak{p}}$’s in which individuals reproduce according to a random point process and are alive during random time intervals.
These are described and comprehensively studied in a large body of work (see [3, 4, 5, 6, 7, 9]) that encompasses more general frameworks than the one considered here and mostly deals with the continuous-time setting. This literature examines the limiting behavior of processes weighted by the Malthusian coefficient in great detail and has the establishment of long-term mean behavior, convergence in distribution (see [7, Theorem 2]) and conditions for a.s. convergence (see [7, Theorem 2] and [9, Proposition 1.1]) as notable achievements. In particular, concerning the mean limits of processes, there are analytical procedures for obtaining these (see [7, Lemma 2 and Proposition 2] and [5, Theorem 4.1]) which are general but difficult to employ in models involving practical applications in epidemiology where explicit estimates are required. The evolution equations for the mean population sizes of the three processes we study are governed by a family of mean matrices at different delay times. The most basic form of this model is the one in which the mean matrices are proportional to each other, so that the scale of the matrix at each delay engenders time-dependent levels of infectiousness while the ratio of infectiousness between geographic regions remains constant. The long-term behavior of this process can be derived explicitly in a form that is easy to obtain directly from the model data, namely, the mean matrices. We want to consider generalizations of this model which allow geographical effects and contagiousness to interact in a time-dependent way while maintaining the simplicity of calculation. In order to obtain straightforward analytic results, we found it necessary to impose a condition on the model which requires the mean matrices to share (that is, have the same) Perron-Frobenius eigenvectors. Models in which the family of mean matrices share P-F eigenvectors generalize the simplest situation described above where time and geographic effects do not influence each other, while yielding simple expressions which, to the best of our knowledge, do not directly fall out of the general treatments mentioned above. Furthermore, they constitute a sufficiently large and complex class of models to be interesting. The paper is arranged as follows. Section 2 defines delayed multi-type ${{\mathfrak{b}}{\mathfrak{p}}}$’s and describes the offspring distribution specific to the model studied here. It also discusses the mean evolution equations in terms of paths backwards through time from the present to time $0$. These paths are i.i.d. Bernoulli sequences taking values in $\\{1,\ldots,D\\}$ and their distribution is determined by the P-F eigenvalues of the mean matrices and the Malthusian parameter. The following section presents some definitions and lemmas needed to prove the main result, Theorem 8, which is presented in Section 4. This states that when all the mean matrices share P-F eigenvectors, the mean population sizes of our three processes, weighted according to the Malthusian parameter, converge to particular multiples of the left P-F eigenvector. This is analogous to what happens for the classical multi-type ${{\mathfrak{b}}{\mathfrak{p}}}$. In studying the asymptotic behavior of the mean population sizes, the main difficulty is to control a product of matrices, which is managed by means of some basic combinatorics on runs given in Section 3. Finally, Section 5 concludes by discussing families of matrices sharing P-F eigenvectors in more detail.
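Although the formal definition of sharing P-F eigenvectors is deferred to Section 3, the simplest instance (mean matrices proportional to a common irreducible matrix) is easy to check numerically. The following sketch uses arbitrary toy matrices chosen for illustration:

```python
import numpy as np

# Quick numerical illustration (toy matrices, assumed data): matrices that are
# proportional to a common irreducible matrix share their P-F eigenvectors.
M = np.array([[2.0, 1.0], [1.0, 3.0]])
family = {d: c * M for d, c in {1: 1.0, 2: 0.7, 3: 0.4}.items()}  # M_d = c_d M

def pf_right_eigvec(A):
    w, V = np.linalg.eig(A)
    v = np.real(V[:, np.argmax(np.real(w))])  # eigenvector of largest eigenvalue
    return v / v.sum()                        # normalized Perron vector

for d, Md in family.items():
    print(d, pf_right_eigvec(Md))             # the same vector for every d
```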
## 2 Delayed multi-type branching processes

In this model, each individual is born at some time $s\in{\mathbb{N}}_{0}=\\{0,1,2,\ldots\\}$ and generates offspring independently of all other individuals. The offspring can be of any type $i\in I$ and each is born within a finite set of times ${\cal D}\subset{\mathbb{N}}=\\{1,2,\ldots\\}$ following the birth of the parent. Thus, an individual born at $s$ generates offspring at times in $s+{\cal D}$. The numbers of offspring of each type in $I$ produced by the same parent at different ages in ${\cal D}$ are independent. If $\min{\cal D}>1$ there is a latency period during which an individual is not yet contagious. One can assume that g.c.d.$\,{\cal D}\,=1$ and we write $D=\max\,{\cal D}$ for the maximum delay. Our interest is in the case $|{\cal D}|>1$.

### 2.1 The random structure

The set of nodes ${{\cal I}}$ represents all the potential individuals involved in the process. A node $b\in{{\cal I}}$ is identified by $b=(a;i,t,l)$ where $a$ is its parent node, $i\in I$ is its type, $t$ is its time of birth and $l$ enumerates the nodes born to parent $a$ of type $i$ at time $t$. When $t=0$, $b$ is a root and we set $a=\emptyset$. Each node $b$ gives rise to a set of nodes $(b;j,s,h)$ for $j\in I$, $s=t+d$ for $d\in{\cal D}$ and $h\in{\mathbb{N}}$. Let ${{\cal I}}^{i}$ denote the set of all type $i$ nodes. Next, we associate with $b\in{{\cal I}}$ independent random elements ${\mathfrak{z}}(b)$ and $({{\cal L}},\varepsilon)(b)$ which are also mutually independent. Variables ${\mathfrak{z}}(b)$ of the same type $i$ are identically distributed for $b\in{{\cal I}}^{i}$. For each $b=(a;i,t,l)$, ${\mathfrak{z}}(b)=({\mathfrak{z}}_{a;t,l,d}^{i,j}:d\in{\cal D},j\in I)$ is a vector of random variables where ${{\mathfrak{z}}}_{a;t,l,d}^{i,j}$ is the number of potential offspring of type $j$ born to $b$ at time $t+d$ and its distribution only depends on $d$ and $(i,j)$. We shall let ${{\mathfrak{z}}}_{d}^{i,j}$ denote a random variable having this distribution. The variables $({{\cal L}},\varepsilon)(b)$ are identically distributed for all $b\in{{\cal I}}$. Variable ${{\cal L}}(b)$ takes values in ${\mathbb{N}}_{0}$ and indicates the lifetime of $b$ while $\varepsilon(b)$ may depend on ${{\cal L}}(b)$, taking the value $1$ or $0$ according as $b$ recovers or dies respectively after time ${{\cal L}}(b)$ has elapsed. If ${{\cal L}}(b)=0$, $b$ is asymptomatic and cannot die, so we set $\varepsilon(b)=1$. In contrast, if ${\cal L}(b)>0$, then $b$ is symptomatic and able to produce offspring at times in ${\cal D}$ up to $D$, except if it dies before $D$. Here, we use ‘die’ to mean a shift to a non-reproductive state like actual death or isolation, as in [10]. Note that symptomatic individuals that recover before time $D$ remain contagious and able to produce offspring at all times in ${\cal D}$ up to $D$. Let $({{\cal L}},\varepsilon)$ denote a random element with the same law as $({{\cal L}},\varepsilon)(b)$. To avoid the possibility of having trivial dynamics, we assume ${\mathbb{P}}({\cal L}<\infty)=1$ and ${\mathbb{P}}({\cal L}>0)>0$, and when modelling asymptomatic individuals we assume ${\mathbb{P}}({\cal L}=0)>0$. We now fix a random realization on ${{\cal I}}^{{{\mathbb{N}}_{0}}}$. In order for an individual $b=(a;i,t,l)$ to produce offspring as described above, one requires that it has not yet died.
So, the total number of offspring of type $j$ born at time $t+d$ will be $\xi_{a;t,l,d}^{i,j}={{\mathfrak{z}}}_{a;t,l,d}^{i,j}\left(1-{{\bf 1}}\bigl{(}{\cal L}(a;i,t,l)\leq d,\varepsilon(a;i,t,l)=0\bigr{)}\right).$ This ensures no offspring are produced after the death of the parent and fixes a dependence on the pair $({\cal L},\varepsilon)$ specific to the individual. The distribution of $\xi_{a;t,l,d}^{i,j}$ only depends on $d$ and $(i,j)$, and we will use $\xi_{d}^{i,j}$ to denote a random variable having this distribution. The individuals so generated, also denoted by $b\in{{\cal I}}$ in what follows, are identified by a triplet $(i,s,l)$ where $i$ is the type, $s$ the time of birth and $l$ enumerates all the individuals of type $i$ born at time $s$ and, from now on, we dispense with the parent and simply write ${{\mathfrak{z}}}_{s,l,d}^{i,j}$ and $\xi_{s,l,d}^{i,j}$. When an individual $b=(i,s,l)$ is generated by the process, then it is ill and manifests symptoms in the time interval $[s,s+{\cal L}-1]$ when ${\cal L}>0$. In contrast, if ${{\cal L}}=0$, then once infected the individual is asymptomatic during the time interval $[s,s+D]$ but is not counted as being ill. In either case, the individual is able to infect others and produce offspring during the interval $[s,s+D]$ provided it does not die before $s+D$. We should mention that the set of delays at which offspring can be born may be random for each individual instead of fixed. Provided that the random time span ${\cal A}$ is bounded, such random delays can be viewed within the deterministic framework as follows: replace the number of $j$-type offspring produced by an $i$-type individual at delay $d$ by $\xi_{d}^{i,j}{\mathbf{1}}({\cal A}\geq d)$.

### 2.2 The processes

The offspring process ${{\cal X}}(s)=\left({{\cal X}}_{j}(s):j\in I\right)$ is defined by ${{\cal X}}(s)={\mathbf{0}}$ for $s<0$ and ${{\cal X}}_{j}(s)={\bf 1}(j=i_{0},s=0)+\sum_{i\in I}\sum_{d\in{\cal D}}\sum_{l=1}^{{{\cal X}}_{i}(s-d)}\xi_{s-d,l,d}^{i,j}\text{ for }s\in{\mathbb{N}}_{0}.$ (1) The initial condition is a single type $i_{0}$ individual (${{\cal X}}(0)=e_{i_{0}}$) and ${{\cal X}}_{j}(s)$ is the number of type $j$ offspring born at time $s$ for $s>0$. Since ${{\cal X}}=({{\cal X}}(s):s\geq 0)$ only counts offspring, there is the implicit assumption that individuals live for a single unit of time. Each individual $b=(i,t,l)$ has a lifetime ${{\cal L}}(i,t,l)$ distributed as ${{\cal L}}$ during which it is considered to be ill. Define ${{\cal U}}(s)=\left({{\cal U}}_{j}(s):j\in I\right)$ to be the number of ill (symptomatic) individuals of each type at time $s$. Now, recall that the set $\\{(j,s,l):l=1,\ldots,{{\cal X}}_{j}(s)\\}$ enumerates the type $j$ offspring born at time $s\geq 1$. For $s=0$, $(i_{0},0,1)$ denotes the initial individual. We set ${{\cal U}}(s)={\mathbf{0}}$ for $s<0$. Since an individual with ${{\cal L}}(i,t,l)=0$ is never ill, one has ${{\cal U}}_{j}(s)=\sum_{c=0}^{s}\sum_{l=1}^{{{\cal X}}_{j}(s-c)}{\bf 1}({{\cal L}}(j,s-c,l)>c),\;s\in{\mathbb{N}}_{0}.$ (2) We shall call ${{\cal U}}=\left({{\cal U}}(s):s\in{\mathbb{N}}_{0}\right)$ a delayed multi-type ${{\mathfrak{b}}{\mathfrak{p}}}$. Individuals for whom ${\cal L}=0$ are not counted as ill by the model. They are asymptomatic for $D$ time units and are able to infect others during that time.
The process of asymptomatic cases present at each time is ${{\cal Y}}(s)=({\cal Y}_{j}(s):j\in I)$ where ${{\cal Y}}(s)={\mathbf{0}}$ for $s<0$ and ${\cal Y}_{j}(s)=\sum_{c=0}^{D}\sum_{l=1}^{{\cal X}_{j}(s-c)}{\bf 1}({\cal L}(j,s-c,l)=0),\;s\in{\mathbb{N}}_{0}.$ In the epidemiological setting, ${\cal X}$ models the incidence while ${\cal Y}+{\cal U}$ gives the prevalence. The three processes ${\cal U}$, ${\cal X}$ and ${{\cal Y}}$ become extinct together almost surely or they all have some positive probability of not dying out. In other words, the extinction time of ${\cal X}$, $T^{{\cal X}}=\inf\\{t\geq 0:\sum_{j\in I}{{\cal X}}_{j}(t+s)=0\text{ for all }s\geq 0\\}$, and the analogously defined $T^{{\cal U}}$ and $T^{{\cal Y}}$ satisfy: ${\mathbb{P}}(T^{{\cal U}}<\infty)=1\;\Longleftrightarrow\;{\mathbb{P}}(T^{{\cal X}}<\infty)=1\;\Longleftrightarrow\;{\mathbb{P}}(T^{{\cal Y}}<\infty)=1.$ The first equivalence is a direct consequence of the fact that all individuals $b$ generated during this process have an almost surely finite lifetime ${\cal L}(b)$. For the second equivalence, the implication ($\Rightarrow$) follows from $T^{\cal Y}\leq T^{\cal X}+D<\infty$ a.s. and conversely, if ${\cal Y}$ becomes extinct and ${{\cal X}}$ is not extinct, one gets ${\mathbb{P}}\left(T^{{\cal Y}}<\infty,\exists t_{n}\to\infty,\exists b_{n}=(i_{n},t_{n},1)\right)>0$. Since lifetimes are identical and independent of all other variables, imposing the condition that ${\cal L}(b_{n})=0$ merely thins the set of individuals born after time $T^{\cal Y}$, so ${\cal X}$ must become extinct. When averaging the processes ${\cal U}$ and ${\cal X}$ with respect to $(\varepsilon(b):b\in{\cal I})$ one gets particular classes of the processes considered in [7] and [9]. The individuals of ${{\cal X}}$ have unit lifetimes, reproduce at time offsets in ${\cal D}$ and $i$-type individuals produce $j$-type offspring using a copy of $\xi_{d}^{i,j}$. Similarly, individuals of ${{\cal U}}$ have lifetimes given by ${{\cal L}}>0$, reproduce at times in $\\{d\in{\cal D}:d<{{\cal L}}\\}$ and $i$-type individuals reproduce $j$-type offspring using a copy of $\xi_{d}^{i,j}$. Let us average the law of $\xi_{t,l,d}^{i,j}$ over $({\cal L},\varepsilon)$. This gives us the offspring law of a contagious individual when there is no information available about lifetime or recovery $({\cal L},\varepsilon)$. The law $\bigl{(}p_{d}^{i,j}(n):n\in{\mathbb{N}}_{0}\bigr{)}$ only depends on $i$, $j$ and $d$. Due to the independence between ${\mathfrak{z}}(b)$ and $({\cal L},\varepsilon)(b)$, $p_{d}^{i,j}(n)={\mathbb{E}}_{({\cal L},\varepsilon)}\left({\mathbb{P}}\bigl{(}\xi_{d}^{i,j}=n\bigr{)}\right)$, $n\geq 0$, takes the form $p_{d}^{i,j}(n)=\begin{cases}{\mathbb{P}}({{\mathfrak{z}}}_{d}^{i,j}=n)\bigl{(}1-{\mathbb{P}}({{\cal L}}\leq d,\varepsilon=0)\bigr{)},&\text{ if }n>0,\\\ {\mathbb{P}}({{\mathfrak{z}}}_{d}^{i,j}=0)\bigl{(}1-{\mathbb{P}}({\cal L}\leq d,\varepsilon=0)\bigr{)}+{\mathbb{P}}({\cal L}\leq d,\varepsilon=0),&\text{ if }n=0.\end{cases}$ Naturally, $\sum_{n\geq 0}p_{d}^{i,j}(n)=1$. The mean number of offspring of type $j$ produced by an individual of type $i$ and age $d$ is given by $M_{d}(i,j)={\mathbb{E}}\left(\xi_{d}^{i,j}\right)=\sum_{n\geq 0}n\,p_{d}^{i,j}(n),\text{ for }i,j\in I,d\in{\cal D}.$ Let $M_{d}=\left(M_{d}(i,j):i,j\in I\right)$. We assume $M_{d}$ is irreducible for all $d\in{\cal D}$. By convention, set $M_{d}={\mathbf{0}}$ for any $d\not\in{\cal D}$ and $p_{d}^{i,j}(n)={\bf 1}(n=0)$. 
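Equations (1) and (2) and the definition of ${\cal Y}$ translate directly into a simulation. The following sketch is illustrative only: the choice of two types, ${\cal D}=\\{1,2,3\\}$, Poisson offspring numbers, a geometric law for positive lifetimes and the particular parameter values are all assumptions made for the example, not part of the model.

```python
import numpy as np

# Minimal simulation sketch of X (new infections), U (ill) and Y (asymptomatic).
rng = np.random.default_rng(3)
D, I = 3, 2                                   # delays {1,2,3}, two types
m = {d: np.array([[0.4, 0.1], [0.1, 0.4]]) * (1.0 if d < 3 else 0.5)
     for d in (1, 2, 3)}                      # mean matrices M_d (toy values)
p_asym, p_die, T = 0.3, 0.2, 25               # P(L=0), P(eps=0), time horizon

X = np.zeros((T, I), dtype=int); X[0, 0] = 1  # one type-0 individual at s=0
people = []                                   # records (type, birth, L, eps)
for s in range(T):
    for i in range(I):
        for _ in range(X[s, i]):
            L = 0 if rng.random() < p_asym else rng.geometric(0.3)
            eps = 1 if L == 0 else int(rng.random() >= p_die)
            people.append((i, s, L, eps))
            for d in (1, 2, 3):
                # xi = z * (1 - 1(L <= d, eps = 0)): no offspring after death
                if s + d < T and not (eps == 0 and L <= d):
                    X[s + d] += rng.poisson(m[d][i])   # offspring of all types
U = [sum(1 for (i, b, L, e) in people if b <= s < b + L) for s in range(T)]
Y = [sum(1 for (i, b, L, e) in people if L == 0 and b <= s <= b + D)
     for s in range(T)]
print(X.sum(axis=1)); print(U); print(Y)
```

Averaging many such runs reproduces the mean evolution equations discussed next.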
The left and right eigenvectors $\nu_{d}$ and $h_{d}$ of each mean matrix $M_{d}$ are assumed to be normalized such that $\nu^{\prime}_{d}h_{d}=1$ and $\nu_{d}^{\prime}{\mathbf{1}}=1$ for all $d\in{\cal D}$. Let $\rho_{d}$ denote the P-F eigenvalue of $M_{d}$, so $M_{d}h_{d}=\rho_{d}h_{d}$ and $\nu^{\prime}_{d}M_{d}=\rho_{d}\nu^{\prime}_{d}$. We will denote the expected value when starting with a single individual of type $i_{0}$ by ${\mathbb{E}}_{i_{0}}$, but if there is no confusion we shall simply write ${\mathbb{E}}$. We set ${\mathbb{E}}({{\cal X}}(s))={\mathbf{0}}$ for $s<0$ and we have ${\mathbb{E}}_{i_{0}}({{\cal X}}(s))^{\prime}=e_{i_{0}}^{\prime}{\bf 1}(s=0)+\sum_{d\in{\cal D}}{\mathbb{E}}_{i_{0}}({{\cal X}}(s-d))^{\prime}M_{d},\;s\in{\mathbb{N}}_{0}.$ (3) To evaluate the expected number of individuals of each type ill at time $s$, ${\mathbb{E}}_{i_{0}}({{\cal U}}_{j}(s))$, we can use relation (2) together with $M_{d}={\mathbf{0}}$ for $d\not\in{\cal D}$ to obtain ${\mathbb{E}}_{i_{0}}({{\cal U}}_{j}(s))=\sum_{c=0}^{s}{\mathbb{E}}_{i_{0}}\left(\sum_{l=1}^{{{\cal X}}_{j}(s-c)}{\bf 1}\bigl{(}{\cal L}(j,s-c,l)>c\bigr{)}\right)=\sum_{c=0}^{s}{\mathbb{E}}_{i_{0}}({\cal X}_{j}(s-c)){\mathbb{P}}({\cal L}>c).$ Then, substituting ${\mathbb{E}}({\cal X}_{j}(s))$ into this yields ${\mathbb{E}}_{i_{0}}({\cal U}(s))^{\prime}=e_{i_{0}}^{\prime}{\mathbb{P}}({\cal L}>s)+\sum_{d\in{\cal D}}{\mathbb{E}}_{i_{0}}({\cal U}(s-d))^{\prime}M_{d},\;s\in{\mathbb{N}}_{0}.$ (4) Similarly, the expected number of asymptomatic cases is given by ${\mathbb{E}}_{i_{0}}({\cal Y}(s))^{\prime}\\!=\\!e_{i_{0}}^{\prime}{\bf 1}(0\\!\leq\\!s\leq D){\mathbb{P}}({\cal L}\\!=\\!0)+\sum_{d\in{\cal D}}{\mathbb{E}}_{i_{0}}({\cal Y}(s-d))^{\prime}M_{d},\;s\\!\in\\!{\mathbb{N}}_{0}.$ (5) Following Definition 2 in [7] where the Malthusian parameter is given for general delayed multi-type processes, we have that the Malthusian parameter of ${{\cal X}}$ in the supercritical case is $\theta=\log{{{\widehat{\rho}}}}\text{ where }{{\widehat{\rho}}}\hbox{ is uniquely defined by requiring that the P-F eigenvalue of }\sum_{d\in{\cal D}}{{{\widehat{\rho}}}}^{-d}M_{d}\hbox{ be }1.$ (6) In particular, we have $\lim\limits_{s\to\infty}{\mathbb{E}}({{\cal X}}_{j}(s))=\infty\;\forall j\in I\;\Leftrightarrow{{{\widehat{\rho}}}}>1$. The Malthusian parameter given in (6) also holds for the critical and subcritical cases and we have $\lim\limits_{s\to\infty}{\mathbb{E}}({{\cal X}}(s))={\mathbf{0}}\,\Leftrightarrow{{{\widehat{\rho}}}}<1$ and $\lim\limits_{s\to\infty}{\mathbb{E}}({{\cal X}}(s))=C_{0}$ a finite, strictly positive vector if and only if ${{{\widehat{\rho}}}}=1$. This is proven by encoding the offspring process as a multi-type ${{\mathfrak{b}}{\mathfrak{p}}}$, which also allows the constant $C_{0}>0$ to be characterized in the critical case. Let us perform this encoding. First, notice that ${{\cal X}}$ has the same distribution as a delayed ${{\mathfrak{b}}{\mathfrak{p}}}$ ${{{\widehat{\cal X}}}}$ whose definition includes no concept of lifetime and where each individual $(i,t,l)$ generates ${\widehat{\xi}}_{t,l;d}^{i,j}$ offspring of type $j$ at time $t+d$ according to law $p_{d}^{i,j}$. So, ${{\widehat{\cal X}}}$ satisfies ${{{\widehat{\cal X}}}}_{j}(s)={\bf 1}(j=i_{0},s=0)+\sum_{i\in I}\sum_{d\in{\cal D}}\sum_{l=1}^{{{{\widehat{\cal X}}}}_{i}(s-d)}{\widehat{\xi}}_{s-d,l,d}^{i,j}$ for $s\in{\mathbb{N}}_{0}$.
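As a brief numerical aside (before completing the encoding of ${{\widehat{\cal X}}}$ as a multi-type process), the characterization (6) of ${{\widehat{\rho}}}$ and the mean recursion (3) are easy to implement. The sketch below uses arbitrary toy matrices and relies on the fact that the P-F eigenvalue of $\sum_{d}\rho^{-d}M_{d}$ is strictly decreasing in $\rho$, so bisection applies:

```python
import numpy as np

# Sketch illustrating (3) and (6) on toy data (arbitrary irreducible matrices):
# find rho_hat by bisection on the P-F eigenvalue of sum_d rho**(-d) M_d,
# then compare with the growth rate of the mean recursion E(X(s)).
M = {1: np.array([[0.5, 0.3], [0.2, 0.4]]),
     3: np.array([[0.2, 0.1], [0.3, 0.2]])}          # D = {1, 3}

def pf_eigenvalue(rho):
    A = sum(rho**-d * Md for d, Md in M.items())
    return max(abs(np.linalg.eigvals(A)))            # spectral radius = P-F value

lo, hi = 1e-6, 10.0
for _ in range(100):                                 # bisection: pf(rho) = 1
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if pf_eigenvalue(mid) > 1 else (lo, mid)
rho_hat = 0.5 * (lo + hi)

# Mean evolution (3): E(X(s))' = e_{i0}' 1(s=0) + sum_d E(X(s-d))' M_d
T = 60
EX = np.zeros((T, 2)); EX[0] = [1.0, 0.0]
for s in range(1, T):
    for d, Md in M.items():
        if s - d >= 0:
            EX[s] += EX[s - d] @ Md
print(rho_hat, EX[T - 1].sum() / EX[T - 2].sum())    # growth ratio -> rho_hat
```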
The mean number of offspring in ${{{\widehat{\cal X}}}}$ and ${{\cal X}}$ is given by $M_{d}(i,j)=\sum_{n\geq 0}np^{i,j}_{d}(n)$, and ${\mathbb{E}}({{{\widehat{\cal X}}}}(s))={\mathbb{E}}({{\cal X}}(s)),\,s\in{\mathbb{N}}_{0}$. Set $[D]=\\{1,\ldots,D\\}$ and consider the new set of types ${{{\widehat{I}}}}=[D]\times I$. Then, ${{{\widehat{\cal X}}}}$ may be viewed as the following multi-type ${{\mathfrak{b}}{\mathfrak{p}}}$ ${{{\widehat{Z}}}}$ on the set of types ${{{\widehat{I}}}}$ defined by ${{{\widehat{Z}}}}(s)=({{{\widehat{Z}}}}_{d,j}(s):(d,j)\in{{{\widehat{I}}}}),\quad s\geq 0,\text{ with }{{{\widehat{Z}}}}_{d,j}(s)={{{\widehat{\cal X}}}}_{j}(s+D-d).$ (7) For $1<d\leq D$ one has ${{{\widehat{Z}}}}_{d,j}(s+1)={{{\widehat{Z}}}}_{d-1,j}(s)$ while for $d=1$ one obtains from (1) that ${{{\widehat{Z}}}}_{1,j}(s+1)={{{\widehat{\cal X}}}}_{j}(s+D)=\sum_{i\in I}\sum_{e\in{\cal D}}\sum_{l=1}^{{{{\widehat{\cal X}}}}_{i}(s+D-e)}{{\widehat{\xi}}}_{s+D-e,l,e}^{i,j}=\sum_{i\in I}\sum_{e\in{\cal D}}\sum_{l=1}^{{{{\widehat{Z}}}}_{e,i}(s)}{{\widehat{\xi}}}_{s+D-e,l,e}^{i,j}.$ The mean matrix of ${{\widehat{Z}}}$ is given by ${{{\widehat{M}}}}((e,i),(d,j))={\bf 1}(i=j){\bf 1}(e=d-1)$ if $d>1$. When $d=1$, one has ${{{\widehat{M}}}}((e,i),(1,j))=0$ if $e\notin{\cal D}$ and ${{{\widehat{M}}}}((e,i),(1,j))=M_{e}(i,j)$ otherwise. The matrix ${{{\widehat{M}}}}$ is irreducible. Let ${{\widehat{\nu}}}=({{\widehat{\nu}}}(e,i):(e,i)\in{{\widehat{I}}})$ be a left eigenvector of ${{\widehat{M}}}$ corresponding to its P-F eigenvalue ${\rho}$. For $d>1$, one gets ${\rho}{{{\widehat{\nu}}}}(d,j)={{{\widehat{\nu}}}}(d-1,j)$ and so ${{{\widehat{\nu}}}}(d,j)={\rho}^{-(d-1)}{{{\widehat{\nu}}}}(1,j),\,d\in[D],j\in I$. Hence, ${\rho}{{{\widehat{\nu}}}}(1,j)=\sum_{e\in{\cal D}}\sum_{i\in I}{\rho}^{-(e-1)}{{{\widehat{\nu}}}}(1,i)M_{e}(i,j)=\sum_{i\in I}{{{\widehat{\nu}}}}(1,i)\left(\sum_{e\in{\cal D}}{\rho}^{-(e-1)}M_{e}(i,j)\right).$ Therefore, ${{{\underline{\nu}}}}=({{{\underline{\nu}}}}(j)={{{\widehat{\nu}}}}(1,j):j\in I)$ satisfies ${{\underline{\nu}}}^{\prime}M_{\rho}={{\underline{\nu}}}^{\prime}$, where $M_{\rho}=\sum_{e\in{\cal D}}{\rho}^{-e}M_{e}$. We deduce that $1$ is the P-F eigenvalue of $M_{\rho}$. It follows that ${{\widehat{\rho}}}$ defined in (6) is also the P-F eigenvalue of ${{\widehat{M}}}$. Now, since ${\mathbb{E}}({\cal X}(s))={\mathbb{E}}({{\widehat{\cal X}}}(s))={\mathbb{E}}({{\widehat{Z}}}_{D,\cdot}(s))$ and $\lim\limits_{s\to\infty}{\mathbb{E}}({{\widehat{Z}}}_{D,\cdot}(s))={\mathbf{0}}$ if and only if $\lim\limits_{s\to\infty}{\mathbb{E}}({{\widehat{Z}}}(s))={\mathbf{0}}$ (because ${{{\widehat{Z}}}}_{d,j}(s+1)={{{\widehat{Z}}}}_{d-1,j}(s)$ for $1<d\leq D$), one has $\lim\limits_{s\to\infty}{\mathbb{E}}({{\cal X}}(s))={\mathbf{0}}$ if and only if $\lim\limits_{s\to\infty}{\mathbb{E}}({{{\widehat{Z}}}}(s))={\mathbf{0}}$. Since ${{{\widehat{Z}}}}$ is a multi-type ${{\mathfrak{b}}{\mathfrak{p}}}$, this occurs if and only if ${{{\widehat{\rho}}}}<1$, which provides the condition for extinction. Similarly, the limit is $\infty$ if and only if ${{{\widehat{\rho}}}}>1$. Now when ${{{\widehat{\rho}}}}=1$ one gets $\lim\limits_{s\to\infty}{\mathbb{E}}({{{\widehat{Z}}}}(s))={\mathbb{E}}({{\widehat{Z}}}(0))^{\prime}{{{\widehat{h}}}}{{{\widehat{\nu}}}}$, ${{{\widehat{h}}}}$ being the right Perron-Frobenius eigenvector of ${{{\widehat{M}}}}$, which characterizes the constant $C_{0}>0$ in the critical case.

###### Remark 1.
Analogous to a multi-type ${{\mathfrak{b}}{\mathfrak{p}}}$, there exists a distribution of the mean evolution of types for the process ${\cal X}$ which is stationary. Let $\nu$ be the left eigenvector of $M_{{{\widehat{\rho}}}}=\sum_{d\in{\cal D}}{{{\widehat{\rho}}}}^{-d}M_{d}$ normalized to sum to unity and take it as the initial distribution: ${\mathbb{E}}({{\cal X}}(s))^{\prime}=\nu^{\prime}{{{\widehat{\rho}}}}^{s}$ for $s=0,1,\ldots,D-1$. Then, (3) yields ${\mathbb{E}}({{{\cal X}}}(s))^{\prime}=\sum_{d\in{\cal D}}\nu^{\prime}M_{d}{{{\widehat{\rho}}}}^{s-d}={{{\widehat{\rho}}}}^{s}\nu^{\prime}\sum_{d\in{\cal D}}M_{d}{{{\widehat{\rho}}}}^{-d}=\nu^{\prime}{{{\widehat{\rho}}}}^{s}$, so that ${\mathbb{E}}({{\cal X}}(s))^{\prime}/({\mathbb{E}}({{\cal X}}(s))^{\prime}{\mathbf{1}})=\nu^{\prime}$ for all $s\geq 0$. Next, we can iterate equations (3), (4) and (5) backwards through time from $s$ to $0$. This is done by taking a path with elements in ${\cal D}$. For $r\leq s$, the set of paths of length $r$ from $s$ to $0$ with elements in ${\cal D}$ is defined by $\Gamma(s,r)=\\{(d_{1},\ldots,d_{r})\in{\cal D}^{r}:\sum_{l=1}^{r}d_{l}=s\\}.$ (8) The elements in a path $(d_{1},\ldots,d_{r})$ represent step sizes in ${\cal D}$ so that a path is represented as a sequence of $r$ steps that span the range from $0$ to $s$, so $d_{r}$ connects $s-d_{r}$ to $s$ and so on. Note that if $\Gamma(s,r)\neq\emptyset$ then $r\geq\lceil s/D\rceil$. From (3) and (4) we find $\begin{split}{\mathbb{E}}({{\cal X}}(s))^{\prime}={\bf 1}(s=0){\mathbb{E}}({{\cal X}}(0))^{\prime}+{\mathbb{E}}({{\cal X}}(0))^{\prime}\Xi(s)\text{ and}\\\ {\mathbb{E}}({\cal U}(s))^{\prime}={\mathbb{E}}({{\cal U}}(0))^{\prime}{\mathbb{P}}({\cal L}>s)+{\mathbb{E}}({{\cal U}}(0))^{\prime}\sum_{c=0}^{s-1}{\mathbb{P}}({\cal L}>c)\Xi(s-c)\text{ where }\\\ \Xi(s)=\sum_{\lceil{s/D}\rceil\leq r\leq s}\;\Xi(s;r)\text{ and }\Xi(s;r)=\sum_{(d_{1},\ldots,d_{r})\in\Gamma(s,r)}\;\left(\prod_{l=1}^{r}M_{d_{l}}\right).\end{split}$ (9) We will group the paths in $\Gamma(s,r)$ according to the tuples ${\vec{k}}=(k_{d}:d\in{\cal D})$ where $k_{d}=|\\{l:d_{l}=d\\}|$ is the number of steps of size $d$ in the path. For vectors ${\vec{k}}=(k_{d}:d\in{\cal D})\in{\mathbb{N}}_{0}^{\cal D}$ with non-negative coordinates, we define $|{\vec{k}}|=\sum_{d\in{\cal D}}k_{d}$. We consider the following classes of tuples of paths, $\Lambda(s)=\left\\{{\vec{k}}\in{\mathbb{N}}_{0}^{\cal D}:\sum_{d\in{\cal D}}d\,k_{d}=s\right\\},\quad\Lambda(s,r)=\left\\{{\vec{k}}\in\Lambda(s):|{\vec{k}}|=r\right\\},$ (10) the first is the set of all paths from $s$ to $0$, and the second is the set of such paths of length $r$. Note that $\Lambda(s,r)\neq\emptyset$ only when ${\lceil{s/D}\rceil}\leq r\leq s$. To each ${\vec{k}}\in\Lambda(s,r)$ one associates the set of tuples $S({\vec{k}})=\\{(x_{1},\ldots,x_{r})\in{\cal D}^{r}:\sum_{i=1}^{r}{\bf 1}(x_{i}=d)=k_{d},\forall\,d\in{\cal D}\\}.$ (11) This is the set of all the permutations of one fixed tuple $(y_{1},\ldots,y_{r})\in S({\vec{k}})$ of symbols in ${\cal D}$ with frequencies given by ${\vec{k}}$.
So, $|S({\vec{k}})|=\frac{r!}{\prod_{d\in{\cal D}}k_{d}!}\hbox{ where }r=|{\vec{k}}|.$ (12) Then, the expression $\Xi(s;r)$ in (9) satisfies $\Xi(s;r)=\sum_{{\vec{k}}\in\Lambda(s,r)}\prod_{d\in{\cal D}}\rho_{d}^{k_{d}}\sum_{(d_{1},\ldots,d_{r})\in S({\vec{k}})}\;\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}.$ (13) The products of subsets of the matrices $(M_{d}:d\in{\cal D})$ appearing in (9) and (13) pose a hindrance to the analytic treatment of the equations. In order to proceed, we shall impose the aforementioned condition that the mean matrices share P-F eigenvectors.

## 3 Matrices sharing P-F eigenvectors and runs

The paths introduced in the preceding section may be viewed as sequences of symbols in ${\cal D}$. It will be shown that sufficiently long paths will uniformly contain runs of a specified size and this will provide control over the product of matrices that appears in (9) when the matrices $(M_{d}:d\in{\cal D})$ satisfy a particular property which we now introduce. ###### Definition 2. The family of mean matrices $(M_{d}:d\in{\cal D})$ is said to share P-F eigenvectors if they have exactly the same left and right eigenvectors, that is, $h_{d}=h$ and $\nu_{d}=\nu$ for $d\in{\cal D}$. We assume that $h$ and $\nu$ are scaled so that $\nu^{\prime}{\mathbf{1}}=1$ and $\nu^{\prime}h=1$. $\;\Box$ Since $h_{d}=h$, $\nu_{d}=\nu$ and $\nu_{d}^{\prime}h_{d^{\prime}}=\nu^{\prime}h=1$ for all $d,d^{\prime}\in{\cal D}$, the first consequence of the matrices sharing P-F eigenvectors is that $\prod_{d\in{\cal D}}h_{d}\nu^{\prime}_{d}=h\nu^{\prime}$. Consider an irreducible non-negative matrix $A$ with P-F eigenvalue $\rho_{A}$ and left and right P-F eigenvectors normalized by $\nu^{\prime}h=1$. Then, for any norm ${\|{\cdot}\|}$, there exists $C<\infty$ and $\delta\in(0,1)$ such that ${\|{\rho_{A}^{-t}A^{t}-h\nu^{\prime}}\|}\leq C\delta^{t}$ for all $t\geq 0$ and $\lim\limits_{t\to\infty}\rho_{A}^{-t}A^{t}=h\nu^{\prime}$ componentwise. From now on, we will use ${\|{\cdot}\|}$ to denote the $\infty$-norm on vectors as well as the associated spectral norm on matrices, that is, ${\|{A}\|}=\max_{i\in I}\sum_{j\in I}{|{A(i,j)}|}$. ###### Lemma 3. Assume the matrices $(M_{d}:d\in{\cal D})$ share the right P-F eigenvector $h$. Then, $\forall r\geq 1,(d_{1},\ldots,d_{r})\in{\cal D}^{r}:\quad{\|{\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}}\|}\leq{{\mathfrak{w}}}(h)\hbox{ where }{{\mathfrak{w}}}(h)=\max_{i,j\in I}(h_{i}/h_{j}).$ (14) ###### Proof. From the assumptions we get that $h$ is a right eigenvector of $A=\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}$ corresponding to eigenvalue $1$. Then, $P=\left(P(i,j):i,j\in I\right)$ defined by $P(i,j)=A(i,j)h(j)/h(i)$, is a stochastic matrix so ${\|{P}\|}=1$ and ${\|{A}\|}=\max_{i\in I}\sum_{j\in I}{|{P(i,j)h(i)/h(j)}|}\leq\max_{i,j\in I}(h(i)/h(j)).$ So, (14) holds. $\Box$ Similarly, one can show that if $(M_{d}:d\in{\cal D})$ share the left P-F eigenvector $\nu$, then, $\forall r\geq 1$, $(d_{1},\ldots,d_{r})\in{\cal D}^{r}$, ${\|{\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}}\|}_{1}\leq{{\mathfrak{w}}}(\nu)$, where ${{\|{A}\|}}_{1}={\|{A^{\prime}}\|}$. ###### Definition 4. Let ${\kappa}>1$. We say that a tuple $(d_{l}:l=1,\ldots,s)$ has a $\kappa$-run if there exists some $l_{0}\in\\{1,\ldots,s-\kappa+1\\}$ such that $d_{l}=d_{l_{0}}$ for $l=l_{0},\ldots,l_{0}+\kappa-1$.$\;\Box$ ###### Lemma 5. Assume $(M_{d}:d\in{\cal D})$ share right and left P-F eigenvectors $h$ and $\nu$ respectively.
Then, for every $\epsilon\in(0,1)$ there exists $\kappa(\epsilon)$ such that for all $r\geq\kappa(\epsilon)$, the following property holds: if the tuple $(d_{l}:l=1,\ldots,r)$ has a $\kappa(\epsilon)$-run, then ${\|{\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}-h\nu^{\prime}}\|}\leq\epsilon.$ (15) ###### Proof. Since $\lim\limits_{k\to\infty}\rho_{d}^{-k}M_{d}^{k}=h\nu^{\prime}$ componentwise and the matrix $h\nu^{\prime}$ is strictly positive, for every $\epsilon^{\prime}\in(0,1)$ there exists $k(\epsilon^{\prime})$ such that $\forall d\in{\cal D},\forall k\geq k(\epsilon^{\prime}):\quad(1-\epsilon^{\prime})h\nu^{\prime}\leq\rho_{d}^{-k}M_{d}^{k}\leq(1+\epsilon^{\prime})h\nu^{\prime}\;\text{ componentwise }.$ (16) Further, if, for $(d_{l}:l=1,\ldots,r)$, there exists some $l_{0}\in\\{1,\ldots,r-k(\epsilon^{\prime})+1\\}$ such that $d_{l}=d_{l_{0}}$ for $l=l_{0},\ldots,l_{0}+k(\epsilon^{\prime})-1$, then, since $\displaystyle\rho_{d_{l_{0}-1}}^{-1}M_{d_{l_{0}-1}}(1\pm\epsilon^{\prime})h\nu^{\prime}=(1\pm\epsilon^{\prime})h\nu^{\prime}\text{ for }l_{0}>1,$ $\displaystyle(1\pm\epsilon^{\prime})h\nu^{\prime}\rho_{d_{l_{0}+k(\epsilon^{\prime})}}^{-1}M_{d_{l_{0}+k(\epsilon^{\prime})}}=(1\pm\epsilon^{\prime})h\nu^{\prime}\text{ for }l_{0}+k(\epsilon^{\prime})\leq r,$ an inductive argument shows that $(1-\epsilon^{\prime})h\nu^{\prime}\leq\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}\leq(1+\epsilon^{\prime})h\nu^{\prime}$ componentwise. Hence, ${\|{\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}-h\nu^{\prime}}\|}\leq\epsilon^{\prime}{\|{h\nu^{\prime}}\|}$. The result follows by setting $\epsilon^{\prime}=\epsilon/{\|{h\nu^{\prime}}\|}$ and $\kappa(\epsilon)=k(\epsilon^{\prime})$. $\Box$ Next we examine the prevalence of runs in finite classes of tuples of symbols from ${\cal D}$. Let $(x_{1},\ldots,x_{r(x)})$ be a tuple with elements in ${\cal D}$. Fix $\kappa>1$. For a finite class of tuples $\Gamma\subseteq\bigcup_{r\in{\mathbb{N}}}{\cal D}^{r}$, one defines $\Gamma^{\kappa}=\\{(x_{1},\ldots,x_{r(x)})\in\Gamma:\;\exists m\leq r(x)-(\kappa-1)\text{ such that }x_{m}=\ldots=x_{m+\kappa-1}\\}.$ This is the set of tuples in $\Gamma$ having a $\kappa$-run. Notice that tuples shorter than $\kappa$ cannot belong to $\Gamma^{\kappa}$. In (10), we defined $\Lambda(s)$ and to each ${\vec{k}}\in\Lambda(s)$ one associates the set $S({\vec{k}})$ of all permutations of an $r$-tuple of symbols in ${\cal D}$ with frequencies specified by ${\vec{k}}$. We can also associate to ${\vec{k}}$ the subset $S({\vec{k}})^{\kappa}$ of sequences in $S({\vec{k}})$ having a $\kappa$-run. We claim that $\forall\kappa>1,\;\forall\epsilon>0,\;\exists s(\epsilon)\hbox{ such that }\forall s\geq s(\epsilon):\;|S({\vec{k}})^{\kappa}|/|S({\vec{k}})|\geq(1-\epsilon)\;\forall{\vec{k}}\in\Lambda(s).$ (17) This relation is a consequence of the following result which gives a lower bound for the number of runs. Let $\lfloor x\rfloor$ denote the integer part of $x$. We take $|{\cal D}|>1$ throughout. ###### Proposition 6. Let $\upsilon\geq 1$ and $\alpha\in(0,1/|{\cal D}|)$. Set $\beta_{0}=\alpha$ and define $\beta_{l}=\beta_{l-1}(1-\beta_{l-1})$ for $l=1,\ldots,\upsilon$. For $r>2^{\upsilon}$, set $r^{\prime}=\lfloor r/2^{\upsilon}\rfloor$. 
For $\delta\in(0,1/2)$ define $S({\vec{k}})(\upsilon,\delta)=\\{(x_{1},\ldots,x_{r})\in S({\vec{k}}):\exists d\in{\cal D},\;\sum_{i=1}^{r^{\prime}}{\bf 1}(\forall l=1,\ldots,2^{\upsilon}:\;x_{(i-1)2^{\upsilon}+l}=d)\geq(1-\delta)\beta_{\upsilon}\\}.$ Then, $\lim\limits_{r\to\infty}|S({\vec{k}})(\upsilon,\delta)|/|S({\vec{k}})|=1$. ###### Proof. It suffices to assume $r$ is of the form $r=2^{\upsilon}r^{\prime}$ for some integer $r^{\prime}\geq 1$. We start with the case $\upsilon=1$. We must study the set of $r$-tuples $S({\vec{k}})(1,\delta)$. Let $S({\vec{k}})(1,\delta;d)=\\{(x_{1},\ldots,x_{r})\in S({\vec{k}}):\;\sum_{i=1}^{r^{\prime}}{\bf 1}(x_{2(i-1)+1}\\!=\\!d\\!=\\!x_{2i})\geq(1\\!-\\!\delta)\alpha(1\\!-\\!\alpha)\\}.$ We know there exists ${\underline{d}}\in{\cal D}$ with $k_{{\underline{d}}}\geq\alpha r$ because $|{\cal D}|\geq 2$ and $\alpha<1/|{\cal D}|$. We will show that $\forall{\underline{d}}\in{\cal D}\hbox{ such that }k_{{\underline{d}}}\geq\alpha r:\quad\lim\limits_{r\to\infty}|S({\vec{k}})(1,\delta;{\underline{d}})|/|S({\vec{k}})|=1.$ (18) Since $S({\vec{k}})(1,\delta;{\underline{d}})$ increases with $k_{{\underline{d}}}$, it suffices to show that (18) holds when $k_{{\underline{d}}}=\alpha r$. Next, we rename symbol ${\underline{d}}$ to $1$ and collapse all the other symbols in ${\cal D}^{\prime}={\cal D}\setminus\\{{\underline{d}}\\}$ into the single symbol $0$. In the alphabet $A=\\{0,1\\}$ we get a new frequency vector ${\vec{l}}=(l_{0},l_{1})$ where $l_{1}=k_{{\underline{d}}}$ and $l_{0}=\sum_{d\in{\cal D}^{\prime}}k_{d}=r-k_{{\underline{d}}}$. This relabelling defines a set $S_{A}({\vec{l}})$ of $r$-tuples of the symbols $\\{0,1\\}$ with frequencies matching ${\vec{l}}$. So $|{\vec{l}}|=|{\vec{k}}|=r$ and $r!=|S_{A}({\vec{l}})|l_{1}!(r-l_{1})!$. Hence $|S({\vec{k}})|=|S_{A}({\vec{l}})|(r-k_{{\underline{d}}})!/\prod_{d\in{\cal D}^{\prime}}k_{d}!$. To each sequence $x$ in $S_{A}({\vec{l}})(1,\delta;1)$ one can associate $(r-k_{{\underline{d}}})!/\prod_{d\in{\cal D}^{\prime}}k_{d}!$ elements in $S({\vec{k}})(1,\delta;{\underline{d}})$ by replacing the $r-k_{{\underline{d}}}$ symbols of type $0$ in $x$ by symbols in ${\cal D}^{\prime}$ according to the frequencies $(k_{d}:d\in{\cal D}^{\prime})$. Then, $|S_{A}({\vec{l}})(1,\delta;1)|/|S_{A}({\vec{l}})|\leq|S({\vec{k}})(1,\delta;{\underline{d}})|/|S({\vec{k}})|.$ Hence it suffices to show the result for the sets $S_{A}({\vec{l}})(1,\delta)$. Let us now consider a new alphabet $A^{2}=\\{(1,1),(1,0),(0,1),(0,0)\\}$. Recall $r$ is even and $r^{\prime}=r/2$, and so every $r$-sequence $x=(x_{1},\ldots,x_{r})\in S_{A}({\vec{l}})$ is coded by an $r^{\prime}$-sequence ${\widetilde{x}}=((x_{1},x_{2}),\ldots,(x_{2i-1},x_{2i}),\ldots,(x_{2r^{\prime}-1},x_{2r^{\prime}}))$ where $(x_{2i-1},x_{2i})\in A^{2}$. So, every sequence in $S_{A}({\vec{l}})$ corresponds to a sequence in $S_{A^{2}}({\vec{h}})$ for some vector ${\vec{h}}=(h_{11},h_{10},h_{01},h_{00})$, where $h_{ab}$ is the number of pairs $(a,b)$ in the $r^{\prime}$-sequence ${\widetilde{x}}$. Since in the $2r^{\prime}$-sequences of $S_{A}({\vec{l}})$ there are $l_{1}=2\alpha r^{\prime}$ symbols of type $1$ and $l_{0}=2(1-\alpha)r^{\prime}$ symbols of type $0$, in the recoded sequence this corresponds to $2h_{11}+h_{10}+h_{01}=l_{1}$ and $2h_{00}+h_{01}+h_{10}=2r^{\prime}-l_{1}$. We have $h_{10}+h_{01}\leq 2\alpha r^{\prime}$ and so $h_{10}+h_{01}=2\beta\alpha r^{\prime}$ for some $0\leq\beta\leq 1$ and $h_{11}=r^{\prime}\alpha(1-\beta)$, $h_{00}=r^{\prime}(1-\alpha(1+\beta))$. 
Set $h_{10}=2\gamma\beta\alpha r^{\prime}$, where $\gamma\in[0,1]$ and $h_{01}=2(1-\gamma)\beta\alpha r^{\prime}$. By setting $j=2\beta\alpha r^{\prime}$ and $j_{1}=2\gamma\beta\alpha r^{\prime}$, we get $|S_{A}({\vec{l}})|=\sum_{j=0}^{\alpha r^{\prime}}\sum_{j_{1}=0}^{j}|S_{A^{2}}({\vec{h}})|$. Then, $\displaystyle\frac{(2r^{\prime})!}{(2\alpha r^{\prime})!\,(2(1-\alpha)r^{\prime})!}=|S_{A}({\vec{l}})|=\sum_{j=0}^{\alpha r^{\prime}}\sum_{j_{1}=0}^{j}|S_{A^{2}}({\vec{h}})|\hbox{ with }$ (19) $\displaystyle|S_{A^{2}}({\vec{h}})|=\frac{r^{\prime}!}{(r^{\prime}\alpha(1-\beta))!(2r^{\prime}\gamma\beta\alpha)!(2r^{\prime}(1-\gamma)\beta\alpha)!(r^{\prime}(1-\alpha(1+\beta)))!}.$ When $r^{\prime}$ is large, we obtain $\frac{1}{2r^{\prime}}\log\left(\frac{(2r^{\prime})!}{(2\alpha r^{\prime})!\,(2r^{\prime}(1\\!-\\!\alpha))!}\right)\\!=\\!c^{*}+O\left(\frac{\log r^{\prime}}{r^{\prime}}\right)\hbox{ with }c^{*}\\!=\\!-\left(\alpha\log\alpha\\!+\\!(1\\!-\\!\alpha)\log(1\\!-\\!\alpha)\right).$ (20) Next we compute the extrema of $\frac{1}{2r^{\prime}}\log|S_{A^{2}}({\vec{h}})|$. For $r^{\prime}$ large, we have $\frac{1}{2r^{\prime}}\log\left(\frac{r^{\prime}!}{(r^{\prime}\alpha(1\\!-\\!\beta))!(2r^{\prime}\gamma\beta\alpha)!(2r^{\prime}(1\\!-\\!\gamma)\beta\alpha)!(r^{\prime}(1\\!-\\!\alpha(1\\!+\\!\beta)))!}\right)\\!=\\!c(\beta,\gamma)+O\left(\frac{\log r^{\prime}}{r^{\prime}}\right)$ (21) where $\displaystyle c(\beta,\gamma)$ $\displaystyle=$ $\displaystyle-\frac{1}{2}\alpha(1-\beta)\log(\alpha(1-\beta))-\gamma\beta\alpha\log(2\gamma\beta\alpha)$ $\displaystyle-(1-\gamma)\beta\alpha\log(2(1-\gamma)\beta\alpha)-\frac{1}{2}(1-\alpha(1+\beta))\log(1-\alpha(1+\beta)).$ We have that $\frac{\partial}{\partial\gamma}c(\beta,\gamma)=0$ is equivalent to $\gamma/(1-\gamma)=1$ and so $\gamma=1/2$. Also we find that $\frac{\partial}{\partial\beta}c(\beta,1/2)=0$ gives $\beta=(1-\alpha)$. One can check that $c(1-\alpha,1/2)=-(\alpha\log\alpha+(1-\alpha)\log(1-\alpha))=c^{*}$, so from (19) and (20) we get that $c(1-\alpha,1/2)$ maximizes $c(\beta,\gamma)$. Also, from (19), (20) and (21), for all $\epsilon_{1}>0$ there exists an $r^{\prime}(\epsilon_{1})$ large enough such that for all $r\geq r^{\prime}(\epsilon_{1})$ one has $|S_{A}({\vec{l}})(1,\delta;1)|\geq|S_{A}({\vec{l}})|(1-\epsilon_{1})$. So, $|S({\vec{k}})(1,\delta;{\underline{d}})|\geq|S({\vec{k}})|(1-\epsilon_{1})$. Now, we recode the alphabet $A^{2}$ by collapsing the symbols $\\{10,01,00\\}$ into a single symbol $0$ and relabelling symbol $11$ as $1$. We take $\beta_{1}=\alpha(1-\alpha)$ and carry out the same procedure again to get that $|S({\vec{l}})(2,\delta;1)|\geq|S({\vec{l}})|(1-\epsilon_{1})^{2}$. So, choosing $\epsilon_{1}>0$ such that $(1-\epsilon_{1})^{\upsilon}=1-\epsilon$ and iterating, we find $|S({\vec{l}})(\upsilon,\delta;1)|\geq|S({\vec{l}})|(1-\epsilon)$. This gives $|S({\vec{k}})(\upsilon,\delta;{\underline{d}})|/|S({\vec{k}})|\geq(1-\epsilon)$ and the result is shown. $\Box$ ## 4 Limit results for matrices sharing P-F eigenvectors First note that when the $M_{d}$'s share P-F eigenvectors the right eigenvector $h$ satisfies $M_{d}h=\rho_{d}\,h$ for all $d\in{\cal D}$, where $\rho_{d}$ is the P-F eigenvalue of $M_{d}$. 
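Before stating the limit results, a small numerical sanity check of the two lemmas of Section 3 may be helpful. The sketch below (Python with NumPy; the family is built by the recipe discussed in Section 5 and is otherwise an arbitrary choice) constructs two matrices sharing P-F eigenvectors, verifies the uniform bound (14) of Lemma 3, and observes the convergence (15) of Lemma 5 along a product whose index tuple contains a long run.

```python
import numpy as np
rng = np.random.default_rng(0)

# A family sharing P-F eigenvectors: M_d(i,j) = rho_d * P(i,j) * h(i)/h(j)
# for a fixed irreducible stochastic P and a positive vector h (see Section 5).
P = np.array([[0.7, 0.3], [0.4, 0.6]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))]); pi /= pi.sum()  # stationary law
h = np.array([1.0, 2.0])
rho = {1: 1.3, 2: 0.8}
M = {d: r * P * np.outer(h, 1.0 / h) for d, r in rho.items()}

nu = pi / h; nu /= nu.sum()      # normalize as in Definition 2: nu' 1 = 1 ...
h = h / (nu @ h)                 # ... and nu' h = 1 (M is invariant under h -> c h)
for d in M:
    assert np.allclose(M[d] @ h, rho[d] * h)      # shared right eigenvector
    assert np.allclose(nu @ M[d], rho[d] * nu)    # shared left eigenvector

w_h = h.max() / h.min()          # the constant w(h) of Lemma 3
seq = list(rng.choice([1, 2], size=10)) + [1] * 30 + list(rng.choice([1, 2], size=10))
prod = np.eye(2)
for d in seq:
    prod = prod @ (M[d] / rho[d])                 # normalized product along the tuple
print(np.abs(prod).sum(axis=1).max() <= w_h + 1e-12)   # the bound (14) holds
print(np.abs(prod - np.outer(h, nu)).max())            # tiny: the tuple has a 30-run
```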
It follows that $\nu$ and $h$ are left and right eigenvectors of $\sum_{d\in{\cal D}}{\rho}^{-d}M_{d}$ with corresponding eigenvalue $\sum_{d\in{\cal D}}{\rho}^{-d}\rho_{d}$ and so, $\sum_{d\in{\cal D}}{{{\widehat{\rho}}}}^{-d}\rho_{d}=1$ is equivalent to $\sum_{d\in{\cal D}}e^{-\theta d}\rho_{d}=1,$ (23) where ${{\widehat{\rho}}}$ and $\theta=\log{{\widehat{\rho}}}$ were defined in (6). Then $\theta$ is positive, zero or negative according as $\sum_{d\in{\cal D}}\rho_{d}$ is greater than, equal to or less than $1$, respectively, which in turn corresponds to the supercritical, critical or subcritical nature of the process, respectively. ###### Remark 7. From (23), one can define a probability vector ${\vec{\beta}}=(\beta_{d}:d\in{\cal D})$ by $\beta_{d}=\rho_{d}e^{-\theta d},d\in{\cal D}$. The mean of ${\vec{\beta}}$ is $\mu({\vec{\beta}})=\sum_{d\in{\cal D}}d\,\beta_{d}$. Since any sequence $(d_{1},\ldots,d_{r})\in\Gamma(s,r)$ satisfies $\sum_{l=1}^{r}d_{l}=s$ (see (8)), we have $\prod_{l=1}^{r}\beta_{d_{l}}=e^{-\theta s}\prod_{d\in{\cal D}}\rho_{d}^{k_{d}}$. Let $(\delta_{i}:i\in{\mathbb{N}})$ be an i.i.d. sequence taking values in ${\cal D}$ according to ${\vec{\beta}}$. Let ${\bf P}$ and ${\bf E}$ be its associated probability measure and expectation respectively. Let $\tau_{s}=\inf\\{r\geq 1:\sum_{l=1}^{r}\delta_{l}\geq s\\}$. From (9), we find that $\Xi(s)=e^{\theta s}{\bf E}\left({\bf 1}(\sum_{l=1}^{\tau_{s}}\delta_{l}=s)\prod_{l=1}^{\tau_{s}}\rho_{\delta_{l}}^{-1}M_{\delta_{l}}\right)$. ###### Theorem 8. Assume that the matrices $(M_{d}:d\in{\cal D})$ share P-F eigenvectors and let $\theta\in{\mathbb{R}}$ be the unique solution to (23). In the subcritical case ($\theta<0$), assume that ${\mathbb{E}}\left(e^{-\theta{\cal L}}\right)<\infty$, and in the critical case ($\theta=0$), assume that ${\mathbb{E}}({\cal L})<\infty$. Then the long-term behavior of the mean number of symptomatic individuals is $\lim\limits_{s\to\infty}e^{-\theta s}{\mathbb{E}}({{\cal U}}(s))^{\prime}=(\mu({\vec{\beta}}))^{-1}{\mathbb{E}}({{\cal U}}(0))^{\prime}h\left(\sum_{c\geq 0}{\mathbb{P}}({\cal L}>c)e^{-\theta c}\right)\nu^{\prime},$ (24) while the limiting mean number of asymptomatic individuals and offspring are respectively given by $\displaystyle\lim_{s\to\infty}e^{-\theta s}{\mathbb{E}}({\cal Y}(s))^{\prime}$ $\displaystyle=$ $\displaystyle(\mu({\vec{\beta}}))^{-1}(D+1){\mathbb{P}}({\cal L}=0){\mathbb{E}}({\cal X}(0))^{\prime}h\nu^{\prime}\text{ and}$ (25) $\displaystyle\lim_{s\to\infty}e^{-\theta s}{\mathbb{E}}({\cal X}(s))^{\prime}$ $\displaystyle=$ $\displaystyle(\mu({\vec{\beta}}))^{-1}{\mathbb{E}}({\cal X}(0))^{\prime}h\nu^{\prime}.$ (26) Moreover, $\nu^{\prime}$ is the limit of types for the processes ${\cal X}$, ${\cal Y}$ and ${\cal U}$, that is, $\lim_{s\to\infty}\frac{{\mathbb{E}}({\cal X}(s))^{\prime}}{{\mathbb{E}}({\cal X}(s))^{\prime}{\mathbf{1}}}=\lim_{s\to\infty}\frac{{\mathbb{E}}({\cal Y}(s))^{\prime}}{{\mathbb{E}}({\cal Y}(s))^{\prime}{\mathbf{1}}}=\lim_{s\to\infty}\frac{{\mathbb{E}}({\cal U}(s))^{\prime}}{{\mathbb{E}}({\cal U}(s))^{\prime}{\mathbf{1}}}=\nu^{\prime}.$ (27) ###### Proof. We begin by considering a $1$-type delayed ${{\mathfrak{b}}{\mathfrak{p}}}$ ${\zeta}=({\zeta}(s):s\geq 0)$ with offspring means $(\rho_{d}:d\in{\cal D})$ starting from $\zeta(0)=1$, where each individual is ill for ${{\cal L}}$ units of time. 
Consider the sequence $(c(s)=e^{-\theta s}{\mathbb{E}}(\zeta(s)):s\geq 0)$ that satisfies the renewal equation $c(s)=e^{-\theta s}{\mathbb{P}}({\cal L}>s)+\sum_{k\geq 0}c(s-k)\beta_{k},\;s\in{\mathbb{N}}_{0}.$ The solution to this equation has the form $c(s)=\sum_{t=0}^{s}e^{-\theta(s-t)}{\mathbb{P}}({\cal L}>s-t)u(t)=\sum_{t=0}^{s}{\mathbb{P}}({\cal L}>t)e^{-\theta t}u(s-t),\;s\in{\mathbb{N}}_{0},$ where $u(t)$ is a discrete analogue to the renewal density associated with ${\vec{\beta}}$. From (9), with $\rho_{d}$ instead of $M_{d}$ and using formulae (12) and (13), one has $u(0)=1$ and for $t>0$, $u(t)={\overline{\Upsilon}}(t)e^{-\theta t}$. Here ${\overline{\Upsilon}}(t)=\sum_{\lceil{t/D}\rceil\leq r\leq t}\Upsilon(t,r)\hbox{ and }\Upsilon(t,r)=\sum_{{\vec{k}}\in\Lambda(t,r)}|S({\vec{k}})|\prod_{d\in{\cal D}}\rho_{d}^{k_{d}}.$ Therefore, $c(s)=e^{-\theta s}{\mathbb{E}}(\zeta(s))={\mathbb{P}}({\cal L}>s)e^{-\theta s}+e^{-\theta s}\sum_{t=0}^{s-1}{\mathbb{P}}({\cal L}>t){\overline{\Upsilon}}(s-t).$ In the supercritical case, Lemma 1 of [7] shows in a general framework that $\lim\limits_{s\to\infty}e^{-\theta s}{\mathbb{E}}(\zeta(s))=\frac{1}{\mu({\vec{\beta}})}\sum_{t\geq 0}{\mathbb{P}}({\cal L}>t)e^{-\theta t}.$ (28) This is also part of Proposition 1.1 in [9]. In our case and under our hypotheses, (28) also holds for the critical and subcritical cases. This follows from the renewal theorem, see Proposition 4.7 in Chapter V of [1]. For the subcritical case, ${\mathbb{E}}\bigl{(}e^{-\theta{\cal L}}\bigr{)}$ is assumed to be finite and so $\sum_{s\geq 0}{\mathbb{P}}({\cal L}>s)e^{-\theta s}=({\mathbb{E}}\bigl{(}e^{-\theta{\cal L}}\bigr{)}-1)/(e^{-\theta}-1)<\infty$. In the critical case, we have $\sum_{s\geq 0}{\mathbb{P}}({\cal L}>s)={\mathbb{E}}({\cal L})$, which was assumed to be finite. So, from (28) we obtain $\lim\limits_{s\to\infty}e^{-\theta s}\sum_{c=0}^{s}{\mathbb{P}}({\cal L}>c){\overline{\Upsilon}}(s-c)=\mu({\vec{\beta}})^{-1}\sum_{t\geq 0}{\mathbb{P}}({\cal L}>t)e^{-\theta t}.$ (29) Now, from (13) one has $\Xi(s-c;r)=\sum_{{\vec{k}}\in\Lambda(s-c,r)}\prod_{d\in{\cal D}}\rho_{d}^{k_{d}}\sum_{(d_{1},\ldots,d_{r})\in S({\vec{k}})}\;\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}$. Assume for the moment that ${{\cal L}}$ is bounded by $L$, so ${\mathbb{P}}({{\cal L}}\geq t)=0$ for $t>L$. From (16) it holds for all $k\geq\kappa(\epsilon)$ that ${\|{\rho_{d}^{-k}M_{d}^{k}-h\nu^{\prime}}\|}\leq\epsilon$ for all $d\in{\cal D}$. On the other hand, from (17) there exists $s(\epsilon)$ such that for all $s\geq s(\epsilon)+L$ one has that every tuple ${\vec{k}}\in\Lambda(s-c,r)$ with $c\leq L$ satisfies $|S({\vec{k}})^{\kappa(\epsilon)}|\geq(1-\epsilon)|S({\vec{k}})|$. Since for every sequence $(d_{1},\ldots,d_{r})\in S({\vec{k}})^{\kappa(\epsilon)}$ there exists a $\kappa(\epsilon)$-run, Lemma 5 guarantees that ${\|{\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}-h\nu^{\prime}}\|}\leq\epsilon$. Hence, for all $s\geq s(\epsilon)+L$, ${\|{\Xi(s-c;r)-\Upsilon(s-c,r)h\nu^{\prime}}\|}\leq\left(\sum_{{\vec{k}}\in\Lambda(s-c,r)}|S({\vec{k}})^{\kappa(\epsilon)}|\prod_{d\in{\cal D}}\rho_{d}^{k_{d}}\right)\epsilon\\\ +\sum_{{\vec{k}}\in\Lambda(s-c,r)}\prod_{d\in{\cal D}}\rho_{d}^{k_{d}}\sum_{(d_{1},\ldots,d_{r})\in S({\vec{k}})\setminus S({\vec{k}})^{\kappa(\epsilon)}}{\|{\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}-h\nu^{\prime}}\|}.$ From (14) one has ${\|{\prod_{l=1}^{r}\rho_{d_{l}}^{-1}M_{d_{l}}}\|}\leq{{\mathfrak{w}}}(h)$; moreover, since $\nu^{\prime}{\mathbf{1}}=1$ and $\nu^{\prime}h=1$ force $\min_{i\in I}h(i)\leq 1\leq\max_{i\in I}h(i)$, also ${\|{h\nu^{\prime}}\|}=\max_{i\in I}h(i)\leq{{\mathfrak{w}}}(h)$. 
Each term in the second sum is therefore bounded by ${{\mathfrak{w}}}(h)+{\|{h\nu^{\prime}}\|}\leq 2{{\mathfrak{w}}}(h)$, while $|S({\vec{k}})\setminus S({\vec{k}})^{\kappa(\epsilon)}|\leq\epsilon|S({\vec{k}})|$. Setting $C=1+2{{\mathfrak{w}}}(h)$, we thus have ${\|{\Xi(s-c;r)-\Upsilon(s-c,r)h\nu^{\prime}}\|}\leq C\epsilon\left(\sum_{{\vec{k}}\in\Lambda(s-c,r)}|S({\vec{k}})|\prod_{d\in{\cal D}}\rho_{d}^{k_{d}}\right)=C\epsilon\Upsilon(s-c,r).$ (30) Therefore, $\sum_{c=0}^{L}{\mathbb{P}}({\cal L}>c)\sum_{\lceil{(s-c)/D}\rceil\leq r\leq s-c}\\!\\!\\!{\|{\Xi(s-c;r)-\Upsilon(s-c,r)h\nu^{\prime}}\|}\leq C\epsilon\left(\sum_{c=0}^{L}{\mathbb{P}}({\cal L}>c){\overline{\Upsilon}}(s-c)\right).$ So, from (29) we get $\lim\limits_{s\to\infty}e^{-\theta s}\sum_{c=0}^{L}{\mathbb{P}}({\cal L}>c)\sum_{\lceil{(s-c)/D}\rceil\leq r\leq s-c}\\!\\!{\|{\Xi(s-c;r)-\Upsilon(s-c,r)h\nu^{\prime}}\|}\leq C\epsilon\mu({\vec{\beta}})^{-1}\sum_{c=0}^{L}{\mathbb{P}}({\cal L}>c)e^{-\theta c}.$ Letting $\epsilon\to 0$ and using (29) again shows that $\lim\limits_{s\to\infty}e^{-\theta s}{\mathbb{E}}({{\cal U}}_{j}(s))={\mathbb{E}}({{\cal U}}(0))^{\prime}h\nu^{\prime}e_{j}\mu({\vec{\beta}})^{-1}\left(\sum_{c=0}^{L}{\mathbb{P}}({\cal L}>c)e^{-\theta c}\right).$ (31) So, we have proved (24) for bounded lifetimes. In the general case take ${\cal L}^{n}$ to be an increasing sequence of bounded lifetimes that converge to ${\cal L}$. Then, for all $c\geq 0$, ${\mathbb{P}}({{\cal L}}^{n}>c)$ increases to ${\mathbb{P}}({{\cal L}}>c)$. Let us add a superscript $n$ on all quantities in which the lifetime is ${{\cal L}}^{n}$. We have ${\mathbb{E}}(\zeta(s))\\!-\\!{\mathbb{E}}(\zeta^{n}(s))=({\mathbb{P}}({\cal L}\\!>\\!s)\\!-\\!{\mathbb{P}}({\cal L}^{n}\\!>\\!s))+\sum_{c=0}^{s-1}({\mathbb{P}}({\cal L}\\!>\\!c)-{\mathbb{P}}({\cal L}^{n}\\!>\\!c)){\overline{\Upsilon}}(s-c).$ (32) By using (29) yet again and taking advantage of the fact that $\mu({\vec{\beta}})^{-1}\sum_{t\geq 0}{\mathbb{P}}({\cal L}^{n}>t)e^{-\theta t}\nearrow\mu({\vec{\beta}})^{-1}\sum_{t\geq 0}{\mathbb{P}}({\cal L}>t)e^{-\theta t}$ as $n\to\infty$, we obtain $\lim\limits_{n\to\infty}\lim\limits_{s\to\infty}e^{-\theta s}\left({\mathbb{E}}(\zeta(s))-{\mathbb{E}}(\zeta^{n}(s))\right)=0.$ (33) On the other hand ${\mathbb{E}}({\cal U}(s))^{\prime}-{\mathbb{E}}({\cal U}^{n}(s))^{\prime}\\!=\\!({\mathbb{P}}({\cal L}\\!>\\!s)-{\mathbb{P}}({\cal L}^{n}\\!>\\!s)){\mathbb{E}}({\cal U}(0))^{\prime}\\!+\\!\sum_{c=0}^{s-1}({\mathbb{P}}({\cal L}\\!>\\!c)-{\mathbb{P}}({\cal L}^{n}\\!>\\!c))\Xi(s-c),$ which is non-negative componentwise. Consequently, from (30) we have $\displaystyle e^{-\theta s}{\|{{\mathbb{E}}({\cal U}(s))^{\prime}-{\mathbb{E}}({\cal U}^{n}(s))^{\prime}}\|}\leq e^{-\theta s}\left({\mathbb{P}}({\cal L}\\!>\\!s)-{\mathbb{P}}({\cal L}^{n}\\!>\\!s)\right){\|{{\mathbb{E}}({{\cal U}}(0))^{\prime}}\|}$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad+(C\epsilon+{\|{h\nu^{\prime}}\|})e^{-\theta s}\left(\sum_{c=0}^{s-1}\left({\mathbb{P}}({\cal L}\\!>\\!c)-{\mathbb{P}}({\cal L}^{n}\\!>\\!c)\right){\overline{\Upsilon}}(s-c)\right).$ From (32) and (33) one gets $\lim\limits_{n\to\infty}\lim\limits_{s\to\infty}e^{-\theta s}{\|{{\mathbb{E}}({\cal U}(s))^{\prime}-{\mathbb{E}}({\cal U}^{n}(s))^{\prime}}\|}=0$ and hence $\lim\limits_{s\to\infty}e^{-\theta s}{\mathbb{E}}({\cal U}(s))^{\prime}=\lim\limits_{n\to\infty}\lim\limits_{s\to\infty}e^{-\theta s}{\mathbb{E}}({\cal U}^{n}(s))^{\prime}$. At this point, an application of (31) completes the proof of (24). Next, the asymptotic behavior of ${\cal X}_{j}(s)$ is a particular case of what we have proven, in which individuals have lifetime $1$ and always recover. 
Hence (26) can be established by applying (24) directly to ${\cal X}_{j}(s)$ and noting that ${\mathbb{P}}({\cal L}>0)=1$. Now, considering the mean number of asymptomatic cases, we have $\displaystyle\lim_{s\to\infty}e^{-\theta s}{\mathbb{E}}({\cal Y}(s))^{\prime}$ $\displaystyle=$ $\displaystyle\lim_{s\to\infty}e^{-\theta s}{\mathbb{P}}({\cal L}=0)\sum_{c=0}^{D}{\mathbb{E}}({\cal X}(s))^{\prime}={\mathbb{P}}({\cal L}=0)\sum_{c=0}^{D}\lim_{s\to\infty}e^{-\theta s}{\mathbb{E}}({\cal X}(s))^{\prime}$ $\displaystyle=$ $\displaystyle\mu({\vec{\beta}})^{-1}(D+1){\mathbb{P}}({\cal L}=0){\mathbb{E}}({\cal X}(0))^{\prime}h\nu^{\prime}.$ Finally, relation (27) on the mean evolution of types follows straightforwardly from (24), (25) and (26) since $\nu^{\prime}{\mathbf{1}}=1$. As the calculation of the limits is the same for all three processes, we only present the case of ${\cal U}$. $\lim_{s\to\infty}\frac{{\mathbb{E}}({\cal U}(s))^{\prime}}{{\mathbb{E}}({\cal U}(s))^{\prime}{\mathbf{1}}}=\frac{\mu({\vec{\beta}})^{-1}({\mathbb{E}}({{\cal U}}(0))^{\prime}h)\left(\sum_{t\geq 0}{\mathbb{P}}({\cal L}>t)e^{-\theta t}\right)\nu^{\prime}}{\mu({\vec{\beta}})^{-1}({\mathbb{E}}({{\cal U}}(0))^{\prime}h)\left(\sum_{t\geq 0}{\mathbb{P}}({\cal L}>t)e^{-\theta t}\right)\nu^{\prime}{\mathbf{1}}}=\nu^{\prime}\,.$ $\Box$ ###### Corollary 9. Assume $(M_{d}:d\in{\cal D})$ share P-F eigenvectors. Then, the proportion of the expected actively reproducing (infectious) population of type $j$ that is of age $d$ at time $s$, ${\mathcal{E}}_{d}^{j}(s)=\frac{{\mathbb{E}}({\cal X}_{j}(s-d)){\mathbb{P}}({\cal L}>d)}{\sum_{d^{\prime}\in{\cal D}}{\mathbb{E}}({\cal X}_{j}(s-d^{\prime})){\mathbb{P}}({\cal L}>d^{\prime})},$ satisfies $\lim_{s\to\infty}{\mathcal{E}}_{d}^{j}(s)=\frac{{\mathbb{P}}({\cal L}>d)e^{-\theta d}}{\sum_{d^{\prime}\in{\cal D}}{\mathbb{P}}({\cal L}>d^{\prime})e^{-\theta d^{\prime}}}\text{ for }d\in{\cal D},\,j\in I.$ (34) ###### Proof. One can write ${\mathcal{E}}_{d}^{j}(s)$ as ${\mathcal{E}}_{d}^{j}(s)=\frac{e^{-\theta(s-d)}{\mathbb{E}}({\cal X}_{j}(s-d)){\mathbb{P}}({\cal L}>d)e^{-\theta d}}{\sum_{d^{\prime}\in{{\cal D}}}e^{-\theta(s-d^{\prime})}{\mathbb{E}}({\cal X}_{j}(s-d^{\prime})){\mathbb{P}}({\cal L}>d^{\prime})e^{-\theta d^{\prime}}},$ and it suffices to let $s\to\infty$ and use $\lim\limits_{s\to\infty}e^{-\theta s}{\mathbb{E}}({\cal X}(s))^{\prime}=(\mu({\vec{\beta}}))^{-1}{\mathbb{E}}({\cal X}(0))^{\prime}h\nu^{\prime}$. $\Box$ ## 5 Matrices sharing P-F eigenvectors and conclusions We will finish by discussing some properties of classes of matrices sharing the same P-F eigenvectors. Let $M$ be an irreducible non-negative matrix with P-F eigenvalue $\rho>0$ and corresponding left and right P-F eigenvectors $\nu$ and $h$ respectively, with $\nu^{\prime}h=1$. The associated stochastic matrix $P$ defined by $P(i,j)=\rho^{-1}M(i,j)h(j)/h(i)$ has the componentwise product $(\nu(i)h(i):i\in I)$ as its unique stationary distribution. Conversely, suppose $P$ is an irreducible stochastic matrix with stationary distribution $\pi$. Fix $\rho>0$, a vector $h>0$ and define $M(i,j)=\rho\,P(i,j)h(i)/h(j)$. Then, $M$ is an irreducible non-negative matrix. It is straightforward to see that $Mh=\rho\,h$ and so $\rho$ is the P-F eigenvalue corresponding to the right eigenvector $h$. Further, the left P-F eigenvector for $M$ is $\nu(i)=\pi(i)/h(i)$, $i\in I$, while, by definition, $P$ is the stochastic matrix associated with $M$. 
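This converse construction is easy to realize concretely. In the sketch below (Python with NumPy; the Metropolis chain with uniform proposals and its lazy variant are arbitrary ways of producing stochastic matrices with a prescribed stationary law), we fix target eigenvectors $\nu$ and $h$, build two different stochastic matrices with stationary distribution $\pi(i)=h(i)\nu(i)$, and convert them into mean matrices sharing the P-F eigenvectors $\nu$ and $h$ while having distinct P-F eigenvalues.

```python
import numpy as np

# Target shared P-F data: any positive h and nu with nu'h = 1; then
# pi(i) = h(i) nu(i) is automatically a probability vector.
h = np.array([1.0, 2.0, 0.5])
nu = np.array([0.2, 0.2, 0.6]); nu = nu / (nu @ h)
pi = h * nu
n = len(h)

# A stochastic matrix with stationary law pi: a Metropolis chain with
# uniform proposals; P2 is a lazy variant with the same stationary law.
P1 = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P1[i, j] = (1.0 / n) * min(1.0, pi[j] / pi[i])
    P1[i, i] = 1.0 - P1[i].sum()
P2 = 0.5 * np.eye(n) + 0.5 * P1
assert np.allclose(pi @ P1, pi) and np.allclose(pi @ P2, pi)

# Mean matrices M_d(i,j) = rho_d * P_d(i,j) * h(i)/h(j) share nu and h.
rho = {1: 1.1, 2: 0.7}
Ms = {1: rho[1] * P1 * np.outer(h, 1.0 / h),
      2: rho[2] * P2 * np.outer(h, 1.0 / h)}
for d, Md in Ms.items():
    assert np.allclose(Md @ h, rho[d] * h)     # shared right P-F eigenvector h
    assert np.allclose(nu @ Md, rho[d] * nu)   # shared left P-F eigenvector nu
```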
Therefore, a family of irreducible non-negative matrices with fixed P-F eigenvectors $\nu$ and $h$ can be constructed by combining arbitrary values $\rho>0$ and stochastic matrices with stationary distribution $\pi(i)=h(i)\nu(i)$, $i\in I$. Thus, the class of families of matrices sharing P-F eigenvectors is non-trivial and there is a simple mechanism for generating such families. Incidentally, an alternative construction based on the left eigenvector instead of the right eigenvector can be carried out by taking $M(i,j)=\rho\,(\nu(j)/\nu(i))P(j,i)$. This is a time-reversed construction because $P^{*}(i,j)=(\pi(j)/\pi(i))P(j,i)$ is the transition matrix of the time reverse of the Markov chain defined by $P$. Next, we show that if the mean matrices commute, then they share P-F eigenvectors. ###### Proposition 10. Assume that the family of matrices $(M_{d}:d\in{\cal D})$ is commutative, that is $M_{d}M_{d^{\prime}}=M_{d^{\prime}}M_{d}$ for $d,d^{\prime}\in{\cal D}$. Then, the matrices share the same right and left P-F eigenvectors. ###### Proof. Let $d,d^{\prime}\in{\cal D}$. The commutativity property implies $h_{d}\nu^{\prime}_{d}h_{d^{\prime}}\nu^{\prime}_{d^{\prime}}=\lim_{t\to\infty}\rho_{d}^{-t}M_{d}^{t}\rho_{d^{\prime}}^{-t}M_{d^{\prime}}^{t}=\lim_{t\to\infty}\rho_{d^{\prime}}^{-t}M_{d^{\prime}}^{t}\rho_{d}^{-t}M_{d}^{t}=h_{d^{\prime}}\nu^{\prime}_{d^{\prime}}h_{d}\nu^{\prime}_{d}.$ So, $\delta h_{d}\nu^{\prime}_{d^{\prime}}=\epsilon h_{d^{\prime}}\nu^{\prime}_{d}$ where $\delta=\nu^{\prime}_{d}h_{d^{\prime}}$ and $\epsilon=\nu^{\prime}_{d^{\prime}}h_{d}$. Then, $\delta h_{d}(i)\nu_{d^{\prime}}(j)=\epsilon h_{d^{\prime}}(i)\nu_{d}(j)$ for all $i,j\in I$. Summing over $j$ leads to $h_{d^{\prime}}=(\delta/\epsilon)h_{d}$ and hence $\delta=\nu^{\prime}_{d}h_{d^{\prime}}=\nu^{\prime}_{d}h_{d}(\delta/\epsilon)=\delta/\epsilon$. Similarly, one can show that $\epsilon=\epsilon/\delta$. From this we obtain $\delta=1=\epsilon$ and hence we can conclude that $h_{d}=h_{d^{\prime}}$. It also follows that $\nu_{d}(j)=\nu_{d^{\prime}}(j)$ for all $j\in I$, that is, $\nu_{d}=\nu_{d^{\prime}}$. $\Box$ In the context of epidemiological modeling, the family of mean matrices $(M_{d}:d\in{\cal D})$ encompasses two different aspects of the system: the types encapsulate geographical interactions while the delays describe temporal effects such as variation in contagiousness. The simplest way of modeling this is by setting $M_{d}=a_{d}\,M$, where the $a_{d}$'s represent time-dependent changes in the level of transmission. (In the $1$-type case but extending ${\cal D}$ to be ${\mathbb{N}}$, an example of this can be found in [10], where the $a_{d}$'s give a geometric probability of being contagious after time $d$.) Due to the common $M$, the $M_{d}$'s commute and also have the same left and right P-F eigenvectors with P-F eigenvalues that vary according to the $a_{d}$'s. Proposition 10 shows that families of mean matrices that are commutative also share P-F eigenvectors. Thus, rescalings of a single matrix, commutativity and sharing P-F eigenvectors are successively more relaxed conditions on the structure of the mean matrices. The requirement that the family of mean matrices share P-F eigenvectors is the most general condition we have found that allows the product of mean matrices to be controlled like the powers of a single matrix. ## Acknowledgments This work was supported by the Center for Mathematical Modeling ANID Basal Projects ACE210010 and FB210005. ## References * [1] S. Asmussen. 
Applied Probability and Queues, volume 51 of Stochastic Modelling and Applied Probability. Springer-Verlag, New York, second edition, 2010. * [2] N. Becker. Estimation for discrete time branching processes with application to epidemics. Biometrics, 33(3):515–522, 1977. * [3] Kenny S. Crump and Charles J. Mode. A general age-dependent branching process. I. Journal of Mathematical Analysis and Applications, 24(3):494–508, 1968. * [4] Kenny S. Crump and Charles J. Mode. A general age-dependent branching process. II. Journal of Mathematical Analysis and Applications, 25(1):8–17, 1969. * [5] K.S. Crump. On systems of renewal equations. J. Math. Anal. Appl., 30:425–434, 1970. * [6] R.A. Doney. A limit theorem for a class of supercritical branching processes. J. Appl. Probab., 9(4):707–724, 1972. * [7] R.A. Doney. On single- and multi-type general age-dependent branching processes. J. Appl. Probab., 13(2):239–246, 1976. * [8] N.M. Ferguson, D. Laydon, G. Nedjati-Gilani, N. Imai, K. Ainslie, M. Baguelin, S. Bhatia, A. Boonyasiri, Z. Cucunubá, G. Cuomo-Dannenburg, and A. Dighe. Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. Imperial College COVID-19 Response Team, London, 16 March 2020. https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf, 2020. * [9] O. Nerman. On the convergence of supercritical general (C-M-J) branching processes. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 57:365–395, 1981. * [10] N.M. Yanev, V.K. Stoimenova, and D.V. Atanasov. Stochastic modeling and estimation of COVID-19 population dynamics. C. R. Acad. Bulg. Sci., 73(4):451–460, 2020.
# The Euler Class from a General Connection, Relative to a Metric Brian Klatt, Rutgers University, Department of Mathematics, New Brunswick, NJ, United States<EMAIL_ADDRESS> ###### Abstract. We extend the well-known formula for the Euler class of a real oriented even-dimensional vector bundle in terms of the curvature of a metric connection to the case of a general linear connection provided a metric is present. We rewrite the classical Gauss-Bonnet theorem in dimension two in light of this formula. We also discuss a potential application to a conjecture of Chern, and make a brief digression to discuss $m$-quasi-Einstein manifolds. ###### Key words and phrases: Euler class, Gauss-Bonnet ###### 2010 Mathematics Subject Classification: Primary: 53C05 ## 1\. Introduction The Gauss-Bonnet theorem is one of the most remarkable and beautiful theorems in geometry, and in all of mathematics. It states (see e.g. [9]) that if $(M^{2},g)$ is a closed oriented surface with metric $g$ and Gauss curvature $K_{g}$ then $\int_{M^{2}}\,K_{g}\,\mathrm{d}V=2\pi\chi(M^{2})$ where $\chi$ denotes the Euler characteristic. Thus, knowing a single geometric quantity, the total Gauss curvature, completely determines the topology of the surface; or alternatively, knowing the topology restricts the range of possibilities for a metric on the surface. This theorem was generalized by Allendoerfer & Weil to the case of a closed oriented $2m$-dimensional Riemannian manifold [1], and given a profoundly simple intrinsic proof shortly after by Chern [6]. Their result is that $\int_{M^{2m}}\,\mathrm{Pf}(\Omega)=(2\pi)^{m}\chi(M^{2m})$ where $\Omega=\Omega^{i}{}_{j}$ is the endomorphism-valued curvature $2$-form and $\mathrm{Pf}(\Omega)=\frac{1}{2^{m}m!}\displaystyle\sum_{\sigma\in S_{2m}}(-1)^{\sigma}\Omega^{\alpha_{\sigma(1)}}{}_{\alpha_{\sigma(2)}}\wedge...\wedge\Omega^{\alpha_{\sigma(2m-1)}}{}_{\alpha_{\sigma(2m)}}$ is a well-defined $2m$-form when computed in an oriented orthonormal basis, called the Pfaffian form. In fact, this form makes perfect sense if $\Omega$ is the curvature $2$-form of a connection compatible with a fibre metric $g$ on an oriented rank-$2k$ vector bundle $E$, where the sum defining the Pfaffian is now taken over $S_{2k}$. We then have the well-known theorem (which we will refer to simply as the Chern-Gauss-Bonnet theorem) that $(2\pi)^{k}e(E)=\mathrm{Pf}(\Omega)=\frac{1}{2^{k}k!}\displaystyle\sum_{\sigma\in S_{2k}}(-1)^{\sigma}\Omega^{\alpha_{\sigma(1)}}{}_{\alpha_{\sigma(2)}}\wedge...\wedge\Omega^{\alpha_{\sigma(2k-1)}}{}_{\alpha_{\sigma(2k)}}$ in the cohomology ring of $M$, where $e(E)$ is the Euler class, defined as the pullback of the Thom class of $E$ by the zero section (see [3] for a statement and proof). It is natural to wonder whether the Euler class can be computed from a general (i.e. not necessarily metric) connection, but a famed inequality of Milnor ([12], but see also [8]) implies that there are flat oriented plane bundles with nonzero Euler class over a surface. Thus the Euler class of such a bundle cannot be computed purely from its curvature. This does not, however, rule out the possibility of a natural formula for the Euler class from a general connection; there may exist such a formula, provided it includes non-curvature terms. It is the primary purpose of this paper to state and prove just such a formula. This we accomplish in sections 2 and 4; section 3 contains some preliminaries to the proof on the Pfaffian of matrices. 
We then, in section 5, record the global and local Gauss-Bonnet formulas in light of our general formula. Section 6 contains miscellanea related to the main theorem, in two subsections. The first part calls attention to a potential application to a conjecture of Chern on the Euler characteristic of affine manifolds. The second is an invitation to further research on ``generalized $m$-quasi-Einstein manifolds.'' This is related to the main theorem by happenstance; it is through this topic that the author was led to discover the main theorem (in particular, by thinking about open problem (vi) of [4]). Finally section 7 is an appendix in two parts: The first serves to fix notations and contains some standard formulas of differential geometry so as to avoid cluttering the flow of argumentation in the main part of the paper, and the second is an extensive compendium of geometric quantities associated to a particular class of tangent bundle connections. ## 2\. Statement of the Formula Our setup will be a general linear connection $\nabla$ and fibre metric $g$ on an oriented rank-$2k$ real vector bundle $E$. The simplest way to realize a formula of the kind we desire is to find the correct ``error term'' to insert into $(2\pi)^{k}e(E)=\mathrm{Pf}(\Omega)$, such that the error term vanishes when the connection $\nabla$ is metric. Thus the obvious building block for our error term is $\nabla g$. This is a symmetric matrix of $1$-forms with lower bundle indices: $\nabla g=\nabla_{i}g_{\alpha\beta}$. We can naturally construct from this an object of the same tensorial type as $\Omega=\Omega^{\alpha}{}_{\beta}=\Omega_{ij}{}^{\alpha}{}_{\beta}$ by using the inverse metric to first construct $g^{-1}\nabla g=(g^{-1}\nabla g)_{i}{}^{\alpha}{}_{\beta}=g^{\alpha\gamma}\nabla_{i}g_{\gamma\beta}$ and then squaring this matrix, using the wedge product to multiply the one-form entries: ###### Definition 2.1. The endomorphism-valued two-form $(g^{-1}\nabla g)^{2}$ is defined by $\displaystyle(g^{-1}\nabla g)^{2}$ $\displaystyle=((g^{-1}\nabla g)^{2})_{ij}{}^{\alpha}{}_{\beta}$ $\displaystyle=(g^{-1}\nabla g)_{i}{}^{\alpha}{}_{\gamma}\wedge(g^{-1}\nabla g)_{j}{}^{\gamma}{}_{\beta}$ $\displaystyle=g^{\alpha\gamma}\nabla_{i}g_{\gamma\delta}\,g^{\delta\epsilon}\nabla_{j}g_{\epsilon\beta}-g^{\alpha\gamma}\nabla_{j}g_{\gamma\delta}\,g^{\delta\epsilon}\nabla_{i}g_{\epsilon\beta}$ With these considerations in mind, one can do explicit computations with a simple non-metric tangent bundle connection like, say, $\Gamma_{i}{}^{k}{}_{j}=\overline{\Gamma}_{i}{}^{k}{}_{j}+a\,\varphi_{i}\delta^{k}_{j}+b\,\varphi_{j}\delta^{k}_{i}+c\,\varphi^{k}g_{ij}$ ($\overline{\Gamma}_{i}{}^{k}{}_{j}$ are the Christoffel symbols for the Levi-Civita connection $\nabla_{g}$ of $g$ and $\varphi$ is a $1$-form) to guess the error term, which leads to our main theorem. ###### Theorem 2.2. Let $E$ be an oriented rank-$2k$ real vector bundle with connection $\nabla$, curvature $\Omega=\Omega_{\nabla}$, and fibre metric $g$. Then $(2\pi)^{k}e(E)=\mathrm{Pf}(\Omega-\tfrac{1}{4}(g^{-1}\nabla g)^{2})$ where $e(E)$ is the Euler class of the bundle, and the Pfaffian is computed using any oriented orthonormal basis for the metric $g$. ###### Remark 2.3. Theorem 2.2 brings attention to an interesting class of connections $\nabla$ with respect to a given metric $g$: those satisfying $(g^{-1}\nabla g)^{2}=0$. 
This condition is strictly weaker than metric compatibility because it is satisfied if (but not only if), for example, $\nabla_{i}g_{\alpha\beta}=\varphi_{i}g_{\alpha\beta}$; on the tangent bundle, this includes Weyl connections $\nabla=\nabla_{g}-\tfrac{\varphi_{i}}{2}\delta^{k}_{j}-\tfrac{\varphi_{j}}{2}\delta^{k}_{i}+\tfrac{\varphi^{k}}{2}g_{ij}$ [14]. Therefore for Weyl connections, and any connection such that $(g^{-1}\nabla g)^{2}=0$, one can compute the Euler class directly from the curvature without any error term via $(2\pi)^{k}e(E)=\mathrm{Pf}(\Omega)$. ## 3\. Preliminary for the Proof We only need a brief interlude on properties of the Pfaffian before we can proceed to the proof. (For further reference see Volume V of [17].) ###### Definition 3.1. Let $M_{1}$, …, $M_{k}$ be $2k$-by-$2k$ matrices with entries in a commutative algebra $R$ over $\mathbb{Q}$. We define the Pfaffian of these matrices by $\mathrm{Pf}(M_{1},...,M_{k})=\frac{1}{2^{k}k!}\displaystyle\sum_{\sigma\in S_{2k}}(-1)^{\sigma}(M_{1})^{i_{\sigma(1)}}_{i_{\sigma(2)}}...(M_{k})^{i_{\sigma(2k-1)}}_{i_{\sigma(2k)}}$ By setting all $M_{i}=M$ we recover $\mathrm{Pf}(M)$, the usual Pfaffian of $M$. The Pfaffian is obviously multilinear with respect to matrix addition and multiplication of matrices by elements of $R$. We also have the following ###### Proposition 3.2. The Pfaffian satisfies the following properties: 1. (1) $\mathrm{Pf}(M_{1},...,(M_{i})^{T},...,M_{k})=-\mathrm{Pf}(M_{1},...,M_{i},...,M_{k})$ 2. (2) $\mathrm{Pf}(M_{1},...,M_{i},...,M_{k})=\mathrm{Pf}(M_{1},...,(M_{i})_{A},...,M_{k})$ where $(M_{i})_{A}=\frac{1}{2}(M_{i}-(M_{i})^{T})$ 3. (3) $\mathrm{Pf}(M_{1},...,M_{k})=\mathrm{Pf}((M_{1})_{A},...,(M_{k})_{A})$ and $\mathrm{Pf}(M)=\mathrm{Pf}(M_{A})$ ###### Proof. 1. (1) This is obvious from the definition, since a transposition has sign equal to $-1$. 2. (2) With $(M_{i})_{A}=\frac{1}{2}(M_{i}-(M_{i})^{T})$ and $(M_{i})_{S}=\frac{1}{2}(M_{i}+(M_{i})^{T})$, compute $\displaystyle\mathrm{Pf}(M_{1},...,M_{i},...,M_{k})$ $\displaystyle=\mathrm{Pf}(M_{1},...,(M_{i})_{A}+(M_{i})_{S},...,M_{k})$ $\displaystyle=\mathrm{Pf}(M_{1},...,(M_{i})_{A},...,M_{k})+\mathrm{Pf}(M_{1},...,(M_{i})_{S},...,M_{k})$ and note by (1) that $\displaystyle\mathrm{Pf}(M_{1},...,(M_{i})_{S},...,M_{k})$ $\displaystyle=\mathrm{Pf}(M_{1},...,((M_{i})_{S})^{T},...,M_{k})$ $\displaystyle=-\mathrm{Pf}(M_{1},...,(M_{i})_{S},...,M_{k})$ so $\mathrm{Pf}(M_{1},...,(M_{i})_{S},...,M_{k})=0,$ and (2) follows. 3. (3) Use induction on (2). ∎ ## 4\. Proof of the Main Theorem and Developments We are now ready for the proof of Thm. 2.2. ### 4.1. Proof of Theorem 2.2 ###### Proof. Let $\\{E_{\alpha}\\}$ be a local oriented orthonormal basis of $E$, and $\omega=\omega^{\alpha}{}_{\beta}=\omega_{i}{}^{\alpha}{}_{\beta}$ and $\Omega=\Omega^{\alpha}{}_{\beta}=\Omega_{ij}{}^{\alpha}{}_{\beta}$ be the connection and curvature matrices, respectively, in this frame. We split the matrices of forms $\omega=\omega_{A}+\omega_{S}$ and $\Omega=\Omega_{A}+\Omega_{S}$ into antisymmetric (subscript ``A'') and symmetric (subscript ``S'') parts. 
One can readily check that $\Omega_{A}=\mathrm{d}\omega_{A}+\omega_{A}\wedge\omega_{A}+\omega_{S}\wedge\omega_{S}$ and $\Omega_{S}=\mathrm{d}\omega_{S}+\omega_{A}\wedge\omega_{S}+\omega_{S}\wedge\omega_{A}.$ We also compute in our local orthonormal frame that $g^{-1}\nabla g=\delta^{\alpha\gamma}(\mathrm{d}\delta_{\gamma\beta}-\omega^{\epsilon}{}_{\gamma}\delta_{\epsilon\beta}-\omega^{\epsilon}{}_{\beta}\delta_{\gamma\epsilon})=-(\omega^{\alpha}{}_{\beta}+\omega^{\beta}{}_{\alpha})=-2\,\omega_{S}$ which immediately implies $\tfrac{1}{4}(g^{-1}\nabla g)^{2}=\omega_{S}\wedge\omega_{S}.$ Therefore $\mathrm{Pf}(\Omega-\tfrac{1}{4}(g^{-1}\nabla g)^{2})=\mathrm{Pf}(\mathrm{d}\omega_{A}+\omega_{A}\wedge\omega_{A}+\Omega_{S})=\mathrm{Pf}(\mathrm{d}\omega_{A}+\omega_{A}\wedge\omega_{A})$ where we neglected the symmetric part in the last equality by (3) of Prop. 3.2. All that remains of the proof is to show that $\omega_{A}$ defines a metric connection, for then $\mathrm{Pf}(\Omega-\tfrac{1}{4}(g^{-1}\nabla g)^{2})=\mathrm{Pf}(\mathrm{d}\omega_{A}+\omega_{A}\wedge\omega_{A})=\mathrm{Pf}(\Omega_{\omega_{A}})=(2\pi)^{k}e(E)$ by the Chern-Gauss-Bonnet theorem. This follows easily from $\omega_{A}=\omega-\omega_{S}=\omega+\tfrac{1}{2}g^{-1}\nabla g.$ In fact, define $\nabla^{g}=\nabla+\frac{1}{2}g^{-1}\nabla g$. This is clearly a connection with connection matrix $\omega_{A}$ in our local orthonormal frame, and since this matrix is antisymmetric, it's a metric connection. This completes the proof. ∎ ### 4.2. The Canonical Associated Metric Connection We record an observation from the proof. ###### Definition 4.1. Suppose $E$ is a real vector bundle with linear connection $\nabla$ and fibre metric $g$. Then we call $\nabla^{g}=\nabla+\tfrac{1}{2}g^{-1}\nabla g$ the _canonical g-metric connection associated to_ $\nabla$. ###### Proposition 4.2. $\nabla^{g}$ is indeed a metric connection with respect to $g$. We give a proof independent of our work in the proof of Thm 2.2. ###### Proof. Choose an arbitrary local frame $\\{E_{\alpha}\\}$ and let $\omega^{\alpha}{}_{\beta}$ be the matrix of connection $1$-forms of $\nabla$. Then the connection $1$-forms of $\nabla^{g}$ are $\eta^{\alpha}{}_{\beta}=\omega^{\alpha}{}_{\beta}+\frac{1}{2}g^{\alpha\gamma}\nabla g_{\gamma\beta}$, and we can simply compute $\displaystyle\nabla^{g}g$ $\displaystyle=\mathrm{d}g_{\alpha\beta}-\eta^{\gamma}{}_{\alpha}g_{\gamma\beta}-\eta^{\gamma}{}_{\beta}g_{\alpha\gamma}$ $\displaystyle=\mathrm{d}g_{\alpha\beta}-(\omega^{\gamma}{}_{\alpha}+\tfrac{1}{2}g^{\gamma\delta}\nabla g_{\delta\alpha})g_{\gamma\beta}-(\omega^{\gamma}{}_{\beta}+\tfrac{1}{2}g^{\gamma\delta}\nabla g_{\delta\beta})g_{\alpha\gamma}$ $\displaystyle=\mathrm{d}g_{\alpha\beta}-\omega^{\gamma}{}_{\alpha}g_{\gamma\beta}-\omega^{\gamma}{}_{\beta}g_{\alpha\gamma}-\nabla g_{\alpha\beta}$ $\displaystyle=0$ ∎ In brief, then, Thm. 2.2 tells us that to compute the Euler class from $\nabla$ in the presence of a metric $g$, one should compute the Pfaffian form of the canonical metric connection induced by $\nabla$ and $g$. Our next proposition gives an alternative characterization of the connection $\nabla^{g}$ as the connection nearest to $\nabla$, at each point, in the affine subspace of $g$-metric connections. Thus $\nabla^{g}$ is a sort of orthogonal projection of $\nabla$ onto the space of $g$-metric connections. ###### Proposition 4.3. Let $E$ be a real vector bundle, $\nabla$ a linear connection, and $g$ a fibre metric. 
Also, arbitrarily fix a Riemannian metric $h$ on the underlying manifold, and define length in $T^{*}M\otimes E\otimes E^{*}$ with the metric $\mu=h^{-1}\otimes g\otimes g^{-1}$. If $\nabla^{\prime}$ is a $g$-metric connection, then $|\nabla-\nabla^{g}|\leq|\nabla-\nabla^{\prime}|$ with equality if and only if $\nabla^{\prime}=\nabla^{g}$. ###### Proof. The basic point is that $\nabla^{g}-\nabla=\frac{1}{2}g^{-1}\nabla g$ is $g$-symmetric, which is orthogonal to the $g$-antisymmetric directions that define the metric connections. To be precise, let $\nabla^{\prime}$ be a $g$-metric connection. Then $|\nabla-\nabla^{\prime}|^{2}=|(\nabla-\nabla^{g})-(\nabla^{\prime}-\nabla^{g})|^{2}$ We'll compute in a local $g$-orthonormal frame $\\{E_{\alpha}\\}$ of $E$ and a local $h$-orthonormal frame $\\{e_{i}\\}$ of $TM$, so $g_{\alpha\beta}=\delta_{\alpha\beta}$ and $h_{ij}=\delta_{ij}$. Denote by $\omega$ the matrix of connection $1$-forms. As discussed in the proof of Thm 2.2, $\nabla-\nabla^{g}=\omega_{S}$, a symmetric matrix of $1$-forms, while $\nabla^{\prime}-\nabla^{g}=T_{i}{}^{\alpha}{}_{\beta}$ is an antisymmetric matrix of $1$-forms since each connection in the difference is metric. Now compute $\mu(\omega_{S},T)=\displaystyle\sum_{i,\,\alpha,\,\beta}(\omega_{S})_{i}{}^{\alpha}{}_{\beta}\,T_{i}{}^{\alpha}{}_{\beta}=-\displaystyle\sum_{i,\,\alpha,\,\beta}(\omega_{S})_{i}{}^{\beta}{}_{\alpha}\,T_{i}{}^{\beta}{}_{\alpha}=-\mu(\omega_{S},T)$ Thus $\mu(\omega_{S},T)=0$ and we find that $\displaystyle|\nabla-\nabla^{\prime}|^{2}$ $\displaystyle=|(\nabla-\nabla^{g})-(\nabla^{\prime}-\nabla^{g})|^{2}$ $\displaystyle=|(\nabla-\nabla^{g})|^{2}+|(\nabla^{\prime}-\nabla^{g})|^{2}$ $\displaystyle=|\omega_{S}|^{2}+|(\nabla^{\prime}-\nabla^{g})|^{2}$ and so the conclusion clearly follows. ∎ ## 5\. The Formula on the Tangent Bundle of a Surface We will now give Thm 2.2 explicitly on the tangent bundle of a compact oriented surface, so as to compare it to the classical Gauss-Bonnet formula. Of course the integrands will differ by a divergence term; we will determine it exactly. We also give the corresponding local Gauss-Bonnet theorem. ### 5.1. Global Gauss-Bonnet Theorem We begin by specializing our setup to the case where $E=TM$, and so $g$ is a Riemannian metric on $M$, with Levi-Civita connection $\nabla_{g}$. Given a connection $\nabla$ (not assumed to be metric or torsion-free) we have a ``right triangle'' (by Prop. 4.3) of connections with vertices $\nabla$, $\nabla^{g}$, and $\nabla_{g}$. If we write $\nabla=\nabla_{g}-D$ (so $D$ is the hypotenuse of the triangle), then straightforward computation from the definition of $\nabla^{g}$ shows $\nabla^{g}=\nabla+D_{S}$ and therefore $\nabla^{g}=\nabla_{g}-D_{A}$ where $D_{S}=\tfrac{1}{2}(D+g^{-1}D^{T}g)=\tfrac{1}{2}(D_{i}{}^{k}{}_{j}+g^{kl}D_{i}{}^{m}{}_{l}g_{mj})$ and $D_{A}=\tfrac{1}{2}(D-g^{-1}D^{T}g)=\tfrac{1}{2}(D_{i}{}^{k}{}_{j}-g^{kl}D_{i}{}^{m}{}_{l}g_{mj})$ Now we set $B=D_{A}$ and $H=\Omega(\nabla^{g})$ for convenience, so we have in particular $\nabla^{g}=\nabla_{g}-B$. The difference tensor $B$ has one independent trace $B_{i}=B_{k}{}^{k}{}_{i}=B_{kki}=\tfrac{1}{2}(D_{jji}-D_{jij})$, and standard differential geometric calculations (see the appendix) reveal that $\displaystyle\mathrm{Pf}(H)$ $\displaystyle=(K_{g}-\mathrm{d}^{*}_{g}(B_{i}))\mathrm{d}V_{g}$ $\displaystyle=(K_{g}-\tfrac{1}{2}\mathrm{d}^{*}_{g}(D_{jji}-D_{jij}))\mathrm{d}V_{g}$ which by Thm. 2.2 yields the following ###### Proposition 5.1 (Global Gauss-Bonnet Theorem). 
Suppose $\nabla$ is a connection with curvature $\Omega$ on the tangent bundle of a closed oriented surface $(M^{2},g)$ with Levi-Civita connection $\nabla_{g}$, Gauss curvature $K_{g}$, and volume form $\mathrm{d}V_{g}$. If we write $\nabla=\nabla_{g}-D$ and $\nabla^{g}=\nabla_{g}-B$, then $\displaystyle 2\pi\,e(TM^{2})$ $\displaystyle=\mathrm{Pf}(\Omega-\tfrac{1}{4}(g^{-1}\nabla g)^{2})$ $\displaystyle=(K_{g}-\mathrm{d}^{*}_{g}(B_{i}))\mathrm{d}V_{g}$ $\displaystyle=(K_{g}-\tfrac{1}{2}\mathrm{d}^{*}_{g}(D_{jji}-D_{jij}))\mathrm{d}V_{g}$ and so $\displaystyle\int_{M^{2}}(K_{g}-\tfrac{1}{2}\mathrm{d}^{*}_{g}(D_{jji}-D_{jij}))\mathrm{d}V_{g}=2\pi\,\chi(M^{2})$ ### 5.2. Local Gauss-Bonnet Theorem Considering the divergence theorem with inward-pointing normal $N$, $\int_{D}-\mathrm{d}^{*}_{g}(B_{i})\mathrm{d}V_{g}=\int_{D}\mathrm{div}_{g}(B^{i})\mathrm{d}V_{g}=-\int_{\partial D}g_{ij}B^{i}N^{j}\mathrm{d}s=-\int_{\partial D}B_{i}(N)\mathrm{d}s$ the following proposition is the natural local version of the above Global Gauss-Bonnet theorem (we borrow some phrasing and notation from Theorem 9.3 of [9]). ###### Proposition 5.2 (Local Gauss-Bonnet Theorem). Suppose $\gamma$ is a curved polygon on an oriented surface $(M^{2},g)$, positively oriented as the boundary of an open set $D$ with compact closure, with inward pointing normal $N$, and exterior angles $\epsilon_{i}$. Let $\overline{\nabla}$ be the Levi-Civita connection, $\kappa_{N}=g(\overline{\nabla}_{\dot{\gamma}}\dot{\gamma},N)$ be the signed curvature of $\gamma$, and $B_{i}$ be a $1$-form. Then (5.1) $\int_{D}(K_{g}-\mathrm{d}^{*}_{g}(B_{i}))\mathrm{d}A_{g}+\int_{\gamma}(\kappa_{N}+B_{i}(N))\mathrm{d}s+\displaystyle\sum_{i}\epsilon_{i}=2\pi$ If $\nabla^{\prime}=\overline{\nabla}-B_{i}{}^{k}{}_{j}$ is a metric connection, i.e. $B_{ikj}=g_{km}B_{i}{}^{m}{}_{j}$ satisfies $B_{ikj}=-B_{ijk}$, and $B_{i}=B_{k}{}^{k}{}_{i}=B_{kki}$, then $K_{g}-\mathrm{d}^{*}_{g}(B_{i})=\mathrm{Pf}(\Omega_{\nabla^{\prime}})$ is the ``Gauss curvature'' of $\nabla^{\prime}$ and $\kappa_{N}+B_{i}(N)=g(\nabla^{\prime}_{\dot{\gamma}}\dot{\gamma},N)$ is the ``signed curvature of $\gamma$ with respect to $\nabla^{\prime}$.'' If $\nabla=\overline{\nabla}-D_{i}{}^{k}{}_{j}$ is a general connection, then according to Thm. 2.2 we apply this to the metric connection $\nabla^{g}=\overline{\nabla}-\tfrac{1}{2}(D_{i}{}^{k}{}_{j}-g^{kl}D_{i}{}^{m}{}_{l}g_{mj})$. ###### Proof. For any $1$-form $B_{i}$, Eqn 5.1 follows from the standard local Gauss-Bonnet theorem, the divergence theorem (i.e. Stokes' theorem), and an argument to approximate $D$ by smooth domains like the one at the end of the proof of the Gauss-Bonnet formula (Theorem 9.3) in [9]. If $\nabla^{\prime}=\overline{\nabla}-B_{i}{}^{k}{}_{j}$ is a metric connection, then $K_{g}-\mathrm{d}^{*}_{g}(B_{i})=\mathrm{Pf}(\Omega_{\nabla^{\prime}})$ is just Eqn. 7.3 of the appendix. We can also compute $g(\nabla^{\prime}_{\dot{\gamma}}\dot{\gamma},N)=g(\overline{\nabla}_{\dot{\gamma}}\dot{\gamma}-B_{\dot{\gamma}}\dot{\gamma},N)=\kappa_{N}-g(B_{\dot{\gamma}}\dot{\gamma},N)$ where in any orthonormal basis, say $\\{\dot{\gamma},N\\}$, $B=[\begin{smallmatrix}0&b\\\ -b&0\end{smallmatrix}]$ as an endomorphism- valued $1$-form, with $b=-\star_{1}B_{i}$ (see Eqn. 7.4 in the Appendix). Thus $B_{\dot{\gamma}}\dot{\gamma}=-b(\dot{\gamma})N=-(\star_{1}b)(N)N=-B_{i}(N)N,$ which yields $g(\nabla^{\prime}_{\dot{\gamma}}\dot{\gamma},N)=\kappa_{N}+B_{i}(N)$ ∎ ###### Remark 5.3. 
We will not go into the details, but the interested reader may wish to check the following: if one tracks the proof of Gauss-Bonnet given in [9] but substitutes a general connection for the Levi-Civita connection and writes this connection as a metric connection plus difference tensor, any contribution due to the difference tensor cancels out of the calculations. One is thereby left with Prop. 5.1. It is precisely when the metric connection is the canonical associated metric connection that we are being ``least wasteful'' in what gets cancelled out. ###### Remark 5.4. We presume that Prop. 5.1 is not really ``new.'' For instance, Corwin and Morgan in [7] prove a Gauss-Bonnet theorem on a smooth disk in a ``surface with densities'': $\mathrm{d}s=\delta_{1}\mathrm{d}s_{0}$ and $\mathrm{d}A=\delta_{2}\mathrm{d}A_{0}$ where the $\delta_{i}$ are positive density functions. They define $K^{\prime}=K_{g}-\Delta\log\delta_{1}$ and $\kappa=\tfrac{\delta_{1}}{\delta_{2}}\kappa_{N}-\tfrac{1}{\delta_{2}}g(\nabla\delta_{1},N)$ and show that $\int_{\gamma}\tfrac{\delta_{2}}{\delta_{1}}\kappa\,\mathrm{d}s_{0}+\int_{D}K^{\prime}\,\mathrm{d}A_{0}=2\pi$ However, this is clearly equivalent to $\int_{\gamma}(\kappa_{N}-\mathrm{d}(\log\delta_{1})(N))\,\mathrm{d}s_{0}+\int_{D}(K_{g}+\mathrm{d}^{*}_{g}(\mathrm{d}\log\delta_{1}))\,\mathrm{d}A_{0}=2\pi$ since $\Delta=-\mathrm{d}^{*}_{g}\mathrm{d}$. This is Prop. 5.1 when $B_{i}=-\mathrm{d}(\log\delta_{1})$. If we write $-\log\delta_{1}=f$, then one nice choice of connection $\nabla$ whose canonical metric connection yields this $B_{i}$ is $\nabla=\overline{\nabla}+(m-1)\mathrm{d}f_{i}\,\delta^{k}_{j}+(m-1)\mathrm{d}f_{j}\,\delta^{k}_{i}+(m+1)\overline{\nabla}f^{k}g_{ij}$ It's ``nice'' because it's torsion-free and the vanishing of its traceless Ricci tensor (with respect to $g$) is related to the surface being a generalized $m$-quasi-Einstein manifold, a notion which seems to be naturally related to that of ``manifold with density.'' See e.g. [13] and [5], and the next section. ## 6\. Applications & Explorations Given that $\mathrm{Pf}(\Omega-\frac{1}{4}(g^{-1}\nabla g)^{2})=\mathrm{Pf}(\Omega_{\nabla^{g}})$ and $\nabla^{g}$ is a metric connection, one may legitimately ask why we have bothered to frame Thm. 2.2 in terms of general connections. The real reason is that this is the form in which the theorem was discovered, and we prefer it aesthetically. However we also believe there are substantial reasons to frame the theorem this way, as follows. For one, we may be given a connection with special properties a priori unrelated to a metric and want to explore the topology of our manifold; we can then introduce a metric and use Thm. 2.2 to do so. One such instance is discussed in the next subsection. Another possibility is that we could have a metric and some other differential geometric data which is not obviously subordinate to the metric. It is not clear in such a scenario that one should default to the Levi-Civita connection, or any particular metric connection for that matter; perhaps some other, non-metric, connection is better adapted to the situation, or can suggest natural geometric quantities to study. If so, Thm. 2.2 can be used to explore the relation between the geometric data and the Euler characteristic. A potential example of this is discussed in the second subsection below. ### 6.1. Chern's Conjecture on the Euler Characteristic of Affine Manifolds An advantage of phrasing Thm. 
2.2 in terms of general connections is that we hope it may find an application in the study of a conjecture of Chern. This conjecture posits that a closed manifold with a torsion-free flat connection on its tangent bundle has vanishing Euler characteristic (see [11] for an overview). If the connection is metric then the conjecture is true, even without the torsion-free assumption, by the Chern-Gauss-Bonnet theorem. The conjecture is also true in dimension two by Milnor's inequality [12], again without the torsion-free assumption. However, examples of Smillie in all even dimensions greater than two ([16], [8]) show that in general the torsion-free assumption is necessary for the truth of the conjecture. By Thm. 2.2, it suffices to show that $\mathrm{Pf}((g^{-1}\nabla g)^{2})$ integrates to $0$ for some Riemannian metric $g$ when $\nabla$ is a torsion-free flat connection; or equivalently to show that $\mathrm{Pf}((g^{-1}\nabla g)^{2})$ is exact. The author has made attempts at this in dimension two without noteworthy success. Perhaps instead of working with an arbitrary metric $g$, one should try to be more discerning and choose a $g$ adapted in some way to $\nabla$. Imposing $\nabla g=0$ is too strong, but what other equations are natural? One obvious idea based on the concepts discussed here is to demand that the metric $g$ minimize the distance between $\nabla$ and $\nabla^{g}$ globally, or what amounts to the same, minimize the magnitude of $\nabla g$. This suggests minimizing a scale-invariant functional like $\mathcal{F}_{\nabla}(g)=\int_{M^{n}}|\nabla g|^{n}\,\mathrm{d}V_{g}$ or minimizing, say, the functional $\mathcal{K}_{\nabla}(g)=\int_{M^{n}}|\nabla g|^{2}\,\mathrm{d}V_{g}$ subject to the constraint $\int_{M^{n}}\mathrm{d}V_{g}=1$. (Note that $\mathcal{F}_{\nabla}=\mathcal{K}_{\nabla}$ when $n=2$, so we don't need the volume constraint when using $\mathcal{K}_{\nabla}$ on a surface; is this possibly related to not needing the torsion-free assumption when $n=2$?) Perhaps the minima of these functionals, if they exist, will be well-adapted to the problem at hand. We will not, however, discuss these ideas any further here, and merely hope that they may prove fruitful in future efforts on this conjecture. ### 6.2. Natural Connections Incompatible with a Given Metric: Generalized $m$-Quasi-Einstein Manifolds In this subsection, we will discuss a situation in which a metric $g$ occurs alongside other differential geometric data, making the introduction of a connection $\nabla$ incompatible with $g$ plausibly natural. #### 6.2.1. Generalized $m$-quasi-Einstein Manifolds A _generalized $m$-quasi-Einstein manifold_ [2] is a Riemannian manifold such that (6.1) $r_{g}+\tfrac{1}{2}\pounds_{\varphi^{\\#}}g-\tfrac{1}{m}\varphi\otimes\varphi=\lambda g$ where $r_{g}$ is the Ricci tensor of $g$, $\varphi\in\Omega^{1}(M^{n})$, $\pounds$ denotes the Lie derivative, $m$ is an extended real number, and $\lambda\in C^{\infty}(M^{n})$. To understand the significance of this equation, it helps to look at some special cases. When $m=0$ we can understand, by convention, that the equation simplifies to $r_{g}=\lambda g$, which also occurs when $\varphi=0$ for any $m$; this is an Einstein manifold when $n\geq 3$. When $m=\pm\infty$ we have $\frac{1}{m}=0$, so if $\lambda$ is constant, Eqn. 6.1 becomes the equation of a Ricci soliton, a notion which occurs in the study of singularities of the Ricci flow [4]. 
In the case $m=-(n-2)$, the equation describes an Einstein-Weyl manifold (compare to the equation on page 100 of [14]), a notable concept in conformal geometry. Finally, when $\varphi=\mathrm{d}f$, $m$ is a non-zero integer, and $\lambda$ is constant, Eqn. 6.1 is related to constructing Einstein warped products and conformal Einstein metrics, depending on $m$ (see [5] for more information and references). As a Riemannian geometer working with Eqn. 6.1, it is tempting to default to using the Levi-Civita connection $\nabla_{g}$, which perhaps gives undue primacy to the metric $g$ and places $\varphi$ in an undeservedly subordinate role. We propose that one way to unify these two objects which places them on more equal footing is to consider a connection $\nabla$ which depends on both quantities and generates Eqn. 6.1 in an appropriate sense. Namely, we demand that Eqn. 6.1 be equivalent to $\mathring{r}_{(ij)}=0$, where $\mathring{r}_{()}$ is the traceless (with respect to $g$) symmetric part of the Ricci tensor of $R=\Omega(\nabla)$. (Our perspective here is strongly influenced by the philosophy, inherited from E. Cartan and espoused in [15], that differential geometry is the study of connections on principal bundles.) #### 6.2.2. The Connection Ansatz What would this connection $\nabla$ look like? Answering this question comes down to determining the difference tensor $\nabla-\nabla_{g}$. It's easy to see based on Eqn. 6.1 that this three-index difference tensor should be a tensor algebraic in $g$ and $\varphi$. Denoting the connection coefficients of $\nabla$ by $\Gamma_{i}{}^{k}{}_{j}$, and the Christoffel symbols of the Levi-Civita connection $\nabla_{g}$ by $\overline{\Gamma}_{i}{}^{k}{}_{j}$, our ansatz is (6.2) $\Gamma_{i}{}^{k}{}_{j}=\overline{\Gamma}_{i}{}^{k}{}_{j}+a\varphi_{i}\delta^{k}_{j}+b\varphi_{j}\delta^{k}_{i}+c\varphi^{k}g_{ij}$ The torsion of $\nabla$ vanishes if and only if $a=b$ but, without a full geometric understanding of the issue, we prefer to leave open the possibility that torsion may play some role. The formula for the traceless Ricci tensor of this connection is (see the Appendix) $\displaystyle\mathring{r}_{()}=$ $\displaystyle\,\overline{r}-\tfrac{1}{2}((n-1)b+c)\pounds_{\varphi^{\\#}}\,g+((n-1)b^{2}-c^{2})\varphi\otimes\varphi-\tfrac{1}{n}\left(\overline{s}+((n-1)b+c)\mathrm{d}^{*}_{g}\varphi+((n-1)b^{2}-c^{2})|\varphi|^{2}\right)g$ From this we see clearly that Eqn. 6.1 is identical to $\mathring{r}_{()}=0$ provided $(n-1)b+c=-1$ and $(n-1)b^{2}-c^{2}=-\tfrac{1}{m},$ or equivalently $c=-1-(n-1)b$ and $(n-1)(n-2)b^{2}+2(n-1)b+(1-\tfrac{1}{m})=0.$ When $n=2$, these equations have the unique solution $b=-\tfrac{1}{2}(1-\tfrac{1}{m}),\quad c=-\tfrac{1}{2}(1+\tfrac{1}{m}).$ When $n>2$, the discriminant of the quadratic satisfied by $b$ is $\Delta=4(n-1)(1+\tfrac{n-2}{m})$ so only when $m=-(n-2)$ is there a unique solution, given by $b=-\tfrac{1}{n-2},\quad c=\tfrac{1}{n-2}.$ When $a=b$, so that torsion vanishes, this is precisely the case in which the manifold is an Einstein-Weyl manifold, and the connection $\nabla$ is torsion-free and preserves the conformal class of the metric $g$. Otherwise, there is no unique solution and given Eqn. 6.1 we can construct _two_ torsion-free connections with our ansatz such that Eqn. 6.1 is equivalent to $\mathring{r}_{()}=0$. 
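Since the case analysis is easy to get wrong, it can be checked mechanically. The following sketch (a SymPy computation assuming nothing beyond the two displayed conditions; the variable names are ours) recovers the two branches for $b$ and the special cases just discussed.

```python
import sympy as sp

n, m, b = sp.symbols('n m b')
c = -1 - (n - 1)*b                             # first condition, solved for c
quad = sp.expand((n - 1)*b**2 - c**2 + 1/m)    # second condition, set to zero

print(sp.solve(sp.Eq(quad, 0), b))             # the two branches of b when n > 2

# n = 2: the quadratic degenerates to a linear equation, so a unique solution,
# matching b = -(1 - 1/m)/2 above.
print(sp.solve(sp.Eq(quad.subs(n, 2), 0), b))

# m = -(n-2): zero discriminant, double root b = -1/(n-2).
print(sp.solve(sp.Eq(quad.subs(m, -(n - 2)), 0), b))

# Projective case c = 0: forces b = -1/(n-1), and then m = -(n-1).
b_proj = sp.solve(sp.Eq(c, 0), b)[0]
print(b_proj, sp.solve(sp.Eq(quad.subs(b, b_proj), 0), m))   # -1/(n-1), [1 - n]
```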
The explicit solutions are $b=\tfrac{1}{n-2}\big{(}-1\pm\sqrt{\tfrac{m+n-2}{m(n-1)}}\,\big{)}\,,$ $c=-\tfrac{1}{n-2}\big{(}-1\pm\sqrt{\tfrac{(m+n-2)(n-1)}{m}}\,\big{)}\,.$ Intriguingly, the radical in the expression for $b$ (which, up to sign, equals $b+c$; see below) seems to occur in the integration of certain differential equations when constructing a class of quasi-Einstein manifolds; see the proof of Theorem 5.7 in [5]. Despite the non-uniqueness, there is one particularly noteworthy case which has been singled out previously. This occurs when $a=b$ and $c=0$, which identifies the torsion-free connections among the family of Eqn. 6.2 which are projectively equivalent to (i.e. have the same unparametrized geodesics as) the Levi-Civita connection. From our system of equations for $b$, $c$, and $m$ we find the values $b=-\tfrac{1}{n-1}$ and $m=-(n-1)$. This was studied by Wylie and Yeroshkin in [18], primarily in the case where $\varphi$ is an exact form. Their interest seems to have been driven by the fact that when $a=b=-\tfrac{1}{n-1}$, $c=0$, and $\varphi$ is closed, the Ricci tensor of the torsion-free connection $\nabla$ is symmetric and particularly simple, satisfying (see the appendix) $r=r_{g}+\tfrac{1}{2}\pounds_{\varphi^{\#}}g+\tfrac{1}{n-1}\varphi\otimes\varphi\,.$ This is one of the so-called Bakry-Émery Ricci tensors. We should also note that the family of Eqn. 6.2 with $a=b$ and $\varphi$ an exact form was noted in [10] along with a computation of the Ricci tensor, but no connection was made to $m$-quasi-Einstein manifolds. (The author considered the connections of Eqn. 6.2 for the reasons outlined above before becoming aware of [18] or [10].) We hope that this brief introduction might interest others in investigating the connections defined by Eqn. 6.2 and what role they might play, if any, in the study of quasi-Einstein manifolds. It's possible that they are only noteworthy in the cases already studied (that is, the torsion-free cases in which the connection preserves at least the conformal class of the metric, or is projectively equivalent to the Levi-Civita connection), but the general case is likely worthy of further scrutiny. One possible point of entry to evaluating the significance of the connection $\nabla$ when it is torsion-free, given the attention already afforded to the conformal and projective special cases, is to write $\displaystyle\Gamma_{i}{}^{k}{}_{j}=\,\,$ $\displaystyle\overline{\Gamma}_{i}{}^{k}{}_{j}-c\varphi_{i}\delta^{k}_{j}-c\varphi_{j}\delta^{k}_{i}+c\varphi^{k}g_{ij}$ $\displaystyle+(b+c)\varphi_{i}\delta^{k}_{j}+(b+c)\varphi_{j}\delta^{k}_{i}\,,$ where we can easily compute $b+c=\mp\sqrt{\tfrac{m+n-2}{m(n-1)}}\,.$ This realizes the connection $\nabla$ as being projectively equivalent to a connection $\nabla^{\prime}=\nabla_{g}-c\varphi_{i}\delta^{k}_{j}-c\varphi_{j}\delta^{k}_{i}+c\varphi^{k}g_{ij}$ which preserves the conformal class of $g$, and such that the $1$-form which determines the rescaling of the metric $g$ under $\nabla^{\prime}$-parallel transport is a constant multiple of the $1$-form which determines the reparametrization of the $\nabla^{\prime}$-geodesics. When $\varphi=\mathrm{d}f$, we have $\nabla=\nabla_{e^{-2cf}g}\mp\sqrt{\tfrac{m+n-2}{m(n-1)}}(\partial_{i}f\delta^{k}_{j}+\partial_{j}f\delta^{k}_{i})\,,$ which seems to single out the conformal metric $e^{-2cf}g$ for consideration.
(Though, recall that there are really _two_ values of $c$ which are roots of a quadratic, so we have conformal _metrics_ to consider; this suggests that additionally the midpoint $c=\tfrac{1}{n-2}$ of the two roots, and the corresponding conformal metric $e^{\frac{-2f}{n-2}}g$, may be worthy of consideration. This is perhaps further suggested by the fact that if one writes $\mathring{r}_{(ij)}=0$ in terms of $\tilde{g}=e^{\frac{-2f}{n-2}}g$ quantities, then second derivatives of $f$ don't appear explicitly.) The author has not fully pursued these ideas, in favor of applying Thm. 2.2 with the connections in Eqn. 6.2 to investigate the Euler characteristic of generalized $m$-quasi-Einstein $4$-manifolds; this will appear in a future work. We encourage others to take up the general line of thought in this section, and have provided in the appendix an extensive compendium of formulae as an aid to those interested.

## 7\. Appendix

### 7.1. Relations Between Curvatures

On a general vector bundle with connections $\nabla$ and $\nabla^{\prime}$ related by $\nabla=\nabla^{\prime}-B$, we can easily calculate that the relation (7.1) $\Omega_{\nabla}=\Omega_{\nabla^{\prime}}-\nabla^{\prime}\circ B+B\wedge B$ exists between the curvatures. Note that $\nabla^{\prime}\circ B=\mathrm{d}B+\omega^{\prime}\wedge B+B\wedge\omega^{\prime}$ denotes the $\nabla^{\prime}$-covariant derivative of $B$ as an $\mathrm{End}(E)$-valued $1$-form; thus it is an $\mathrm{End}(E)$-valued $2$-form. Specializing to $E=TM$ one computes that $(\nabla^{\prime}\circ B)_{ij}{}^{k}{}_{l}=\nabla^{\prime}_{i}B_{j}{}^{k}{}_{l}-\nabla^{\prime}_{j}B_{i}{}^{k}{}_{l}+T(\nabla^{\prime})_{i}{}^{m}{}_{j}\,B_{m}{}^{k}{}_{l}\,,$ where $T(\nabla^{\prime})=\omega^{\prime}_{i}{}^{k}{}_{j}-\omega^{\prime}_{j}{}^{k}{}_{i}$ is the torsion of $\nabla^{\prime}$. Suppose now that $(M,g)$ is a Riemannian manifold, $\tilde{\nabla}$ is a metric connection on the tangent bundle, and $\overline{\nabla}=\nabla_{g}$ is the Levi-Civita connection. Write $\tilde{\nabla}=\overline{\nabla}-B$, and denote the curvatures by $H=\Omega(\tilde{\nabla})$, $\overline{R}=\Omega(\overline{\nabla})$. Then the formula above reads $H_{ij}{}^{k}{}_{l}=\overline{R}_{ij}{}^{k}{}_{l}-\overline{\nabla}_{i}B_{j}{}^{k}{}_{l}+\overline{\nabla}_{j}B_{i}{}^{k}{}_{l}+B_{i}{}^{k}{}_{m}B_{j}{}^{m}{}_{l}-B_{j}{}^{k}{}_{m}B_{i}{}^{m}{}_{l}\,.$ We can convert this into a formula with all indices down by defining $H_{ijkl}=g_{km}H_{ij}{}^{m}{}_{l}$, $\overline{R}_{ijkl}=g_{km}\overline{R}_{ij}{}^{m}{}_{l}$, and $B_{ikj}=g_{km}B_{i}{}^{m}{}_{j}$, yielding (7.2) $H_{ijkl}=\overline{R}_{ijkl}-\overline{\nabla}_{i}B_{jkl}+\overline{\nabla}_{j}B_{ikl}+B_{ikm}B_{jml}-B_{jkm}B_{iml}\,,$ where we're using an extended Einstein summation convention such that summation over repeated lower indices is understood and denotes contraction with the inverse metric.

#### 7.1.1. On the Tangent Bundle of a Surface

Consider now Eqn. 7.2 on a surface, where we work in a local orthonormal basis $\{e_{1},e_{2}\}$. We have only the single scalar equation $H_{1212}=\overline{R}_{1212}-\overline{\nabla}_{1}B_{212}+\overline{\nabla}_{2}B_{112}+B_{11m}B_{2m2}-B_{21m}B_{1m2}\,.$ Considering the skew-symmetry of $B$ in the second and third indices (since it's the difference between metric connections), each of the last two terms vanishes, so this is in fact just $H_{1212}=K_{g}-\overline{\nabla}_{1}B_{212}+\overline{\nabla}_{2}B_{112}\,,$ by also substituting in the Gauss curvature $K_{g}$ of $g$.
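As a sanity check on the index bookkeeping, the vanishing of the quadratic terms for a skew-symmetric $B$ in two dimensions can be confirmed in a few lines (a sketch assuming sympy; the components $f_{1},f_{2}$ are generic):

```python
import sympy as sp

f1, f2 = sp.symbols('f1 f2')        # the only independent components in 2D
eps = {(1, 1): 0, (1, 2): 1, (2, 1): -1, (2, 2): 0}
B = lambda i, k, j: (f1 if i == 1 else f2) * eps[(k, j)]   # B_{ikj} = f_i * eps_{kj}

# The quadratic terms B_{11m}B_{2m2} - B_{21m}B_{1m2} appearing in H_{1212}:
quad = sum(B(1, 1, m)*B(2, m, 2) - B(2, 1, m)*B(1, m, 2) for m in (1, 2))
print(sp.simplify(quad))            # 0, as claimed
```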
Now we trace $-\overline{\nabla}_{i}B_{jkl}+\overline{\nabla}_{j}B_{ikl}$ over $i,k$ and $j,l$ to find $-\overline{\nabla}_{i}B_{jij}+\overline{\nabla}_{j}B_{iij}=-\overline{\nabla}_{1}B_{212}+\overline{\nabla}_{2}B_{112}-\overline{\nabla}_{2}B_{121}+\overline{\nabla}_{1}B_{221}=2(-\overline{\nabla}_{1}B_{212}+\overline{\nabla}_{2}B_{112})$ by using the skew-symmetry of $B$, while also $-\overline{\nabla}_{i}B_{jij}+\overline{\nabla}_{j}B_{iij}=2\overline{\nabla}_{i}B_{jji}=-2\,\mathrm{d}^{*}_{g}(B_{jji})=-2\,\mathrm{d}^{*}_{g}(B_{i})$ by again using skew-symmetry and renaming dummy indices in the first equality, defining $B_{jji}=B_{i}$ in the last, and where we are denoting by $\mathrm{d}^{*}_{g}$ the formal $L^{2}$-adjoint of $\mathrm{d}$ which is induced by $g$; $\mathrm{d}^{*}_{g}\psi=-\star_{2}\mathrm{d}\star_{1}\psi=-\nabla_{i}\psi_{i}$ on $1$-forms, where $\star_{i}$ is the Hodge star operator on $i$-forms. Thus (7.3) $H_{1212}=K_{g}-\mathrm{d}^{*}_{g}(B_{i})\,.$ Alternatively we can proceed from Eqn. 7.1, $H=\bar{R}-\nabla^{\prime}\circ B+B\wedge B=\bar{R}-(\mathrm{d}B+\bar{\Gamma}\wedge B+B\wedge\bar{\Gamma})+B\wedge B\,,$ in our orthonormal frame $\{e_{1},e_{2}\}$, in which case $\bar{\Gamma}=\begin{bmatrix}0&\bar{\gamma}\\ -\bar{\gamma}&0\end{bmatrix},\quad B=\begin{bmatrix}0&b\\ -b&0\end{bmatrix}\,,$ where $\bar{\gamma}$ is a locally-defined $1$-form and $b=\mathrm{Pf}(B)$ is a globally-defined $1$-form if $M^{2}$ is oriented and our frame is consistent with the orientation, but $b$ generally has an ambiguous sign. It's trivial to compute that $\bar{\Gamma}\wedge\bar{\Gamma}$, $B\wedge B$, and the (super)commutator $\bar{\Gamma}\wedge B+B\wedge\bar{\Gamma}$ vanish, so $\displaystyle H$ $\displaystyle=\bar{R}-\mathrm{d}B$ $\displaystyle=\mathrm{d}\bar{\Gamma}-\mathrm{d}B$ $\displaystyle=\begin{bmatrix}0&\mathrm{d}\bar{\gamma}-\mathrm{d}b\\ -(\mathrm{d}\bar{\gamma}-\mathrm{d}b)&0\end{bmatrix}$ $\displaystyle=\begin{bmatrix}0&K_{g}\mathrm{d}V_{g}-\mathrm{d}b\\ -(K_{g}\mathrm{d}V_{g}-\mathrm{d}b)&0\end{bmatrix}$ Comparing to Eqn. 7.3 reveals $\mathrm{d}b=\mathrm{d}^{*}_{g}(B_{i})\mathrm{d}V_{g};$ this reflects, and is a consequence of, the easily computed fact that (7.4) $b=-\star_{1}B_{i}\quad\Leftrightarrow\quad\star_{1}b=B_{i}\,,$ which sheds more light on our earlier observation that $b$ has an ambiguous local sign. In any case, putting this all together, $\displaystyle\mathrm{Pf}(H)$ $\displaystyle=H_{1212}\,e^{1}\wedge e^{2}$ $\displaystyle=(K_{g}-\mathrm{d}^{*}_{g}(B_{i}))\mathrm{d}V_{g}$

### 7.2. The Curvature of the Levi-Civita Connection Plus a Pure-Trace Tensor

Let $\overline{\Gamma}_{i}{}^{k}{}_{j}$ denote the Christoffel symbols of the Levi-Civita connection $\nabla_{g}$ of a metric $g$. We consider a connection $\nabla$ whose coefficients are given by $\Gamma_{i}{}^{k}{}_{j}=\overline{\Gamma}_{i}{}^{k}{}_{j}+\alpha_{i}\delta^{k}_{j}+\beta_{j}\delta^{k}_{i}+\gamma^{k}g_{ij}\,,$ where $\alpha$, $\beta$, $\gamma$ are $1$-forms. We will record various quantities associated to this connection.

#### 7.2.1.
Torsion, Non-Metricity, Canonical Metric Connection

The torsion is $T(\nabla)=(\alpha-\beta)_{i}\delta^{k}_{j}-(\alpha-\beta)_{j}\delta^{k}_{i}\,,$ which clearly vanishes exactly when $\alpha=\beta$, while the non-metricity is $\nabla_{i}g_{jk}=-2\alpha_{i}g_{jk}-(\beta+\gamma)_{j}g_{ik}-(\beta+\gamma)_{k}g_{ij}\,,$ which is easily shown to vanish if and only if $\alpha=0$ and $\beta=-\gamma$, and to be proportional to the metric $g_{jk}$ exactly when $\beta=-\gamma$; this latter situation is equivalent to $\nabla$ preserving the conformal class of $g$. It's additionally immediate from the non-metricity tensor that the canonical associated metric connection is $\nabla^{g}=\nabla_{g}-B$, where $B_{i}{}^{k}{}_{j}=-\tfrac{(\beta-\gamma)_{j}}{2}\delta^{k}_{i}+\tfrac{(\beta-\gamma)^{k}}{2}g_{ij}\,,$ or (with indices lowered) $B_{ikj}=-\tfrac{(\beta-\gamma)_{j}}{2}g_{ik}+\tfrac{(\beta-\gamma)_{k}}{2}g_{ij}\,.$ This tensor has one independent trace, $B_{j}=B_{iij}=-\tfrac{n-1}{2}(\beta-\gamma)_{j}\,.$

#### 7.2.2. Curvature of $\nabla$

The curvature of $\nabla$ is $\displaystyle R_{ij}{}^{k}{}_{l}=\,$ $\displaystyle\overline{R}_{ij}{}^{k}{}_{l}+(\partial_{i}\alpha_{j}-\partial_{j}\alpha_{i})\delta^{k}_{l}+\overline{\nabla}_{i}\beta_{l}\delta^{k}_{j}-\overline{\nabla}_{j}\beta_{l}\delta^{k}_{i}+\overline{\nabla}_{i}\gamma^{k}g_{jl}-\overline{\nabla}_{j}\gamma^{k}g_{il}$ $\displaystyle+(\delta^{k}_{i}\beta_{j}-\delta^{k}_{j}\beta_{i})\beta_{l}+(\gamma_{i}g_{jl}-\gamma_{j}g_{il})\gamma^{k}+g(\beta,\gamma)(\delta^{k}_{i}g_{jl}-\delta^{k}_{j}g_{il})$ with index-lowered variant $R_{ijkl}=g_{km}R_{ij}{}^{m}{}_{l}$ given by $\displaystyle R_{ijkl}=$ $\displaystyle\overline{R}_{ijkl}+(\partial_{i}\alpha_{j}-\partial_{j}\alpha_{i})g_{kl}+\overline{\nabla}_{i}\beta_{l}g_{kj}-\overline{\nabla}_{j}\beta_{l}g_{ki}+\overline{\nabla}_{i}\gamma_{k}g_{jl}-\overline{\nabla}_{j}\gamma_{k}g_{il}$ $\displaystyle+(g_{ik}\beta_{j}-g_{jk}\beta_{i})\beta_{l}+(\gamma_{i}g_{jl}-\gamma_{j}g_{il})\gamma_{k}+g(\beta,\gamma)(g_{ik}g_{jl}-g_{jk}g_{il})$ Since $\nabla$ is not necessarily torsion-free or metric, the curvature tensor lacks the familiar symmetries of a Riemannian curvature tensor, and there are three independent traces of $R$, not the usual one: the Ricci tensor $r_{jl}=R_{ij}{}^{i}{}_{l}$, $\rho_{jk}=R_{ijki}$, and $\zeta_{ij}=R_{ij}{}^{k}{}_{k}$. We first examine the Ricci tensor: $\displaystyle r=$ $\displaystyle\,\,\overline{r}+\mathrm{d}\alpha-\tfrac{1}{2}\mathrm{d}((n-1)\beta+\gamma)-\tfrac{1}{2}\pounds_{((n-1)\beta+\gamma)^{\#}}\,g$ $\displaystyle+(n-1)\beta\otimes\beta-\gamma\otimes\gamma+(-\mathrm{d}^{*}_{g}(\gamma)+g((n-1)\beta+\gamma,\gamma))g$ Clearly the symmetric part of $r$ is $r_{()}=\overline{r}-\tfrac{1}{2}\pounds_{((n-1)\beta+\gamma)^{\#}}\,g+(n-1)\beta\otimes\beta-\gamma\otimes\gamma+(-\mathrm{d}^{*}_{g}(\gamma)+g((n-1)\beta+\gamma,\gamma))g$ and the antisymmetric part is $r_{[\,]}=\mathrm{d}\alpha-\tfrac{1}{2}\mathrm{d}((n-1)\beta+\gamma)$ with trace $s=g^{ij}r_{ij}=g^{ij}r_{(ij)}$ satisfying $s=\overline{s}+(n-1)\mathrm{d}^{*}_{g}(\beta-\gamma)+(n-1)(|\beta|^{2}+n\,g(\beta,\gamma)+|\gamma|^{2})$ Therefore the following equation holds for the traceless symmetric Ricci tensor $\mathring{r}_{()}=r_{()}-\tfrac{s}{n}g$: $\displaystyle\mathring{r}_{()}=$ $\displaystyle\,\,\overline{r}-\tfrac{1}{2}\pounds_{((n-1)\beta+\gamma)^{\#}}\,g+(n-1)\beta\otimes\beta-\gamma\otimes\gamma+$ $\displaystyle-\tfrac{1}{n}(\overline{s}+\mathrm{d}^{*}_{g}((n-1)\beta+\gamma)+(n-1)|\beta|^{2}-|\gamma|^{2})g$ Now consider the tensor $\rho$:
$\displaystyle\rho=$ $\displaystyle-\overline{r}-\mathrm{d}\alpha-\tfrac{1}{2}\mathrm{d}(\beta+(n-1)\gamma)-\tfrac{1}{2}\pounds_{(\beta+(n-1)\gamma)^{\#}}\,g$ $\displaystyle+\beta\otimes\beta-(n-1)\gamma\otimes\gamma+(-\mathrm{d}^{*}_{g}\beta-g(\beta,\beta+(n-1)\gamma))g$ The symmetric part is $\rho_{()}=-\overline{r}-\tfrac{1}{2}\pounds_{(\beta+(n-1)\gamma)^{\#}}\,g+\beta\otimes\beta-(n-1)\gamma\otimes\gamma+(-\mathrm{d}^{*}_{g}\beta-g(\beta,\beta+(n-1)\gamma))g$ while the antisymmetric part is $\rho_{[\,]}=-\mathrm{d}\alpha-\tfrac{1}{2}\mathrm{d}(\beta+(n-1)\gamma)$ and the trace $\sigma=g^{ij}\rho_{ij}=g^{ij}\rho_{(ij)}$ is fully determined by the trace of the Ricci tensor: $\sigma=-s$ Thus for the traceless symmetric part we have $\displaystyle\mathring{\rho}_{()}=$ $\displaystyle\,-\overline{r}-\tfrac{1}{2}\pounds_{(\beta+(n-1)\gamma)^{\#}}\,g+\beta\otimes\beta-(n-1)\gamma\otimes\gamma$ $\displaystyle-\tfrac{1}{n}(-\overline{s}+\mathrm{d}^{*}_{g}(\beta+(n-1)\gamma)+|\beta|^{2}-(n-1)|\gamma|^{2})g$ Some interesting simplifications occur if we consider $\tfrac{1}{2}(r-\rho)$ and $\tfrac{1}{2}(r+\rho)$ (these are natural as traces of $R$ after breaking the $kl$ index pair into antisymmetric and symmetric parts). The traces are $s$ and $0$ respectively, while the other components are as follows: $\displaystyle\tfrac{1}{2}(\mathring{r}-\mathring{\rho})_{()}=$ $\displaystyle\,\,\overline{r}+\tfrac{n-2}{2}(-\tfrac{1}{2}\pounds_{(\beta-\gamma)^{\#}}g+\beta\otimes\beta+\gamma\otimes\gamma)$ $\displaystyle-\tfrac{1}{n}(\overline{s}+\tfrac{n-2}{2}(\mathrm{d}^{*}_{g}(\beta-\gamma)+|\beta|^{2}+|\gamma|^{2}))g$ $\displaystyle\tfrac{1}{2}(r-\rho)_{[\,]}=$ $\displaystyle\,\,\mathrm{d}\alpha-\tfrac{n-2}{4}\mathrm{d}(\beta-\gamma)$ $\displaystyle\tfrac{1}{2}(r+\rho)_{()}=$ $\displaystyle\,\tfrac{1}{2}(\mathring{r}+\mathring{\rho})_{()}$ $\displaystyle=$ $\displaystyle-\tfrac{n}{4}\pounds_{(\beta+\gamma)^{\#}}g+\tfrac{n}{2}(\beta+\gamma)\cdot(\beta-\gamma)-\tfrac{1}{2}(\mathrm{d}^{*}_{g}(\beta+\gamma)+g(\beta+\gamma,\beta-\gamma))g$ $\displaystyle\tfrac{1}{2}(r+\rho)_{[\,]}=$ $\displaystyle-\tfrac{n}{4}\mathrm{d}(\beta+\gamma)$ Finally we have the antisymmetric tensor $\zeta$: $\zeta=n\mathrm{d}\alpha+2\mathrm{d}(\beta+\gamma)$

#### 7.2.3. Curvature of $\nabla^{g}$

The canonical metric connection of $\nabla$ is $\nabla^{g}$, whose Christoffel symbols $\eta_{i}{}^{k}{}_{j}$ satisfy (see Section 7.1) $\eta_{i}{}^{k}{}_{j}=\overline{\Gamma}_{i}{}^{k}{}_{j}+\tfrac{(\beta-\gamma)_{j}}{2}\delta^{k}_{i}-\tfrac{(\beta-\gamma)^{k}}{2}g_{ij}\,.$ This is the same type of connection that we considered above, with $\alpha=0$ and with both $\beta$ and $-\gamma$ replaced by $\tfrac{1}{2}(\beta-\gamma)$. Denote the curvature of $\eta$ by $H=H_{ijkl}$. This tensor is antisymmetric in the $ij$ and $kl$ pairs since $\nabla^{g}$ is compatible with $g$, but we can't expect further symmetries. This is, however, enough to imply that there is only one independent trace; we'll use the Ricci trace over the $ik$ indices and call it $h$. We then have the formulas that follow.
(1,3)-Curvature Tensor: $\displaystyle H_{ij}{}^{k}{}_{l}=\,$ $\displaystyle\overline{R}_{ij}{}^{k}{}_{l}+\tfrac{1}{2}\overline{\nabla}_{i}(\beta-\gamma)_{l}\delta^{k}_{j}-\tfrac{1}{2}\overline{\nabla}_{j}(\beta-\gamma)_{l}\delta^{k}_{i}-\tfrac{1}{2}\overline{\nabla}_{i}(\beta-\gamma)^{k}g_{jl}+\tfrac{1}{2}\overline{\nabla}_{j}(\beta-\gamma)^{k}g_{il}$ $\displaystyle+\tfrac{1}{4}(\delta^{k}_{i}(\beta-\gamma)_{j}-\delta^{k}_{j}(\beta-\gamma)_{i})(\beta-\gamma)_{l}+\tfrac{1}{4}((\beta-\gamma)_{i}g_{jl}-(\beta-\gamma)_{j}g_{il})(\beta-\gamma)^{k}$ $\displaystyle-\tfrac{1}{4}|\beta-\gamma|^{2}(\delta^{k}_{i}g_{jl}-\delta^{k}_{j}g_{il})$ (0,4)-Curvature Tensor: $\displaystyle H_{ijkl}=$ $\displaystyle\overline{R}_{ijkl}+\tfrac{1}{2}\overline{\nabla}_{i}(\beta-\gamma)_{l}g_{kj}-\tfrac{1}{2}\overline{\nabla}_{j}(\beta-\gamma)_{l}g_{ki}-\tfrac{1}{2}\overline{\nabla}_{i}(\beta-\gamma)_{k}g_{jl}+\tfrac{1}{2}\overline{\nabla}_{j}(\beta-\gamma)_{k}g_{il}$ $\displaystyle+\tfrac{1}{4}g\owedge[(\beta-\gamma)\otimes(\beta-\gamma)]_{ijkl}-\tfrac{1}{2}|\beta-\gamma|^{2}g\owedge g_{ijkl}$ Ricci tensor: $\displaystyle h=\,\,$ $\displaystyle\overline{r}-\tfrac{n-2}{4}\mathrm{d}(\beta-\gamma)-\tfrac{n-2}{4}\pounds_{(\beta-\gamma)^{\#}}g$ $\displaystyle+\tfrac{n-2}{4}(\beta-\gamma)\otimes(\beta-\gamma)+(\tfrac{1}{2}\mathrm{d}^{*}_{g}(\beta-\gamma)-\tfrac{n-2}{4}|\beta-\gamma|^{2})g$ Symmetric Ricci tensor: $\displaystyle h_{()}=\,\,$ $\displaystyle\overline{r}-\tfrac{n-2}{4}\pounds_{(\beta-\gamma)^{\#}}g+\tfrac{n-2}{4}(\beta-\gamma)\otimes(\beta-\gamma)$ $\displaystyle+(\tfrac{1}{2}\mathrm{d}^{*}_{g}(\beta-\gamma)-\tfrac{n-2}{4}|\beta-\gamma|^{2})g$ Antisymmetric Ricci tensor: $h_{[\,]}=-\tfrac{n-2}{4}\mathrm{d}(\beta-\gamma)$ Trace: $\tau=\overline{s}+(n-1)\mathrm{d}^{*}_{g}(\beta-\gamma)-\tfrac{(n-1)(n-2)}{4}|\beta-\gamma|^{2}$ Traceless symmetric Ricci tensor: $\displaystyle\mathring{h}_{()}=\,\,$ $\displaystyle\overline{r}-\tfrac{n-2}{4}\pounds_{(\beta-\gamma)^{\#}}g+\tfrac{n-2}{4}(\beta-\gamma)\otimes(\beta-\gamma)$ $\displaystyle-\tfrac{1}{n}(\overline{s}+\tfrac{n-2}{2}\mathrm{d}^{*}_{g}(\beta-\gamma)+\tfrac{n-2}{4}|\beta-\gamma|^{2})g$

#### 7.2.4.
Specializing the equations

If we substitute $\alpha=a\varphi$, $\beta=b\varphi$, $\gamma=c\varphi$ we obtain further specializations:

$\nabla$ tensors:

(1,3)-Curvature Tensor: $\displaystyle R_{ij}{}^{k}{}_{l}=\,$ $\displaystyle\overline{R}_{ij}{}^{k}{}_{l}+a(\partial_{i}\varphi_{j}-\partial_{j}\varphi_{i})\delta^{k}_{l}+b\overline{\nabla}_{i}\varphi_{l}\delta^{k}_{j}-b\overline{\nabla}_{j}\varphi_{l}\delta^{k}_{i}+c\overline{\nabla}_{i}\varphi^{k}g_{jl}-c\overline{\nabla}_{j}\varphi^{k}g_{il}$ $\displaystyle+b^{2}(\delta^{k}_{i}\varphi_{j}-\delta^{k}_{j}\varphi_{i})\varphi_{l}+c^{2}(\varphi_{i}g_{jl}-\varphi_{j}g_{il})\varphi^{k}+bc|\varphi|^{2}(\delta^{k}_{i}g_{jl}-\delta^{k}_{j}g_{il})$ (0,4)-Curvature Tensor: $\displaystyle R_{ijkl}=$ $\displaystyle\overline{R}_{ijkl}+a(\partial_{i}\varphi_{j}-\partial_{j}\varphi_{i})g_{kl}+b\overline{\nabla}_{i}\varphi_{l}g_{kj}-b\overline{\nabla}_{j}\varphi_{l}g_{ki}+c\overline{\nabla}_{i}\varphi_{k}g_{jl}-c\overline{\nabla}_{j}\varphi_{k}g_{il}$ $\displaystyle+b^{2}(g_{ik}\varphi_{j}-g_{jk}\varphi_{i})\varphi_{l}+c^{2}(\varphi_{i}g_{jl}-\varphi_{j}g_{il})\varphi_{k}+bc|\varphi|^{2}(g_{ik}g_{jl}-g_{jk}g_{il})$ Ricci tensor: $\displaystyle r=$ $\displaystyle\,\,\overline{r}+a\mathrm{d}\varphi-\tfrac{1}{2}((n-1)b+c)\mathrm{d}\varphi-((n-1)b+c)\tfrac{1}{2}\pounds_{\varphi^{\#}}\,g$ $\displaystyle+((n-1)b^{2}-c^{2})\varphi\otimes\varphi+c(-\mathrm{d}^{*}_{g}\varphi+((n-1)b+c)|\varphi|^{2})g$ Symmetric Ricci tensor: $r_{()}=\overline{r}-((n-1)b+c)\tfrac{1}{2}\pounds_{\varphi^{\#}}\,g+((n-1)b^{2}-c^{2})\varphi\otimes\varphi+c(-\mathrm{d}^{*}_{g}\varphi+((n-1)b+c)|\varphi|^{2})g$ Antisymmetric Ricci tensor: $r_{[\,]}=a\mathrm{d}\varphi-\tfrac{1}{2}((n-1)b+c)\mathrm{d}\varphi$ Trace of Ricci tensor: $\displaystyle s$ $\displaystyle=(\overline{s}+((n-1)b+c)\mathrm{d}^{*}_{g}\varphi+((n-1)b^{2}-c^{2})|\varphi|^{2})+n\cdot c(-\mathrm{d}^{*}_{g}\varphi+((n-1)b+c)|\varphi|^{2})$ $\displaystyle=\overline{s}+(n-1)(b-c)\mathrm{d}^{*}_{g}\varphi+(n-1)(b^{2}+nbc+c^{2})|\varphi|^{2}$ Traceless symmetric Ricci tensor: $\displaystyle\mathring{r}_{()}=$ $\displaystyle\,\overline{r}-\tfrac{1}{2}((n-1)b+c)\pounds_{\varphi^{\#}}\,g+((n-1)b^{2}-c^{2})\varphi\otimes\varphi+$ $\displaystyle-\tfrac{1}{n}(\overline{s}+((n-1)b+c)\mathrm{d}^{*}_{g}\varphi+((n-1)b^{2}-c^{2})|\varphi|^{2})g$ $\rho$ tensor: $\displaystyle\rho=$ $\displaystyle-\overline{r}-a\mathrm{d}\varphi-\tfrac{1}{2}(b+(n-1)c)\mathrm{d}\varphi-\tfrac{1}{2}(b+(n-1)c)\pounds_{\varphi^{\#}}\,g$ $\displaystyle+(b^{2}-(n-1)c^{2})\varphi\otimes\varphi+b(-\mathrm{d}^{*}_{g}\varphi-(b+(n-1)c)|\varphi|^{2})g$ Symmetric $\rho$: $\rho_{()}=-\overline{r}-\tfrac{1}{2}(b+(n-1)c)\pounds_{\varphi^{\#}}\,g+(b^{2}-(n-1)c^{2})\varphi\otimes\varphi+b(-\mathrm{d}^{*}_{g}\varphi-(b+(n-1)c)|\varphi|^{2})g$ Antisymmetric $\rho$: $\rho_{[\,]}=-a\mathrm{d}\varphi-\tfrac{1}{2}(b+(n-1)c)\mathrm{d}\varphi$ Trace of $\rho$: $\sigma=-s$ Traceless symmetric $\rho$: $\displaystyle\mathring{\rho}_{()}=$ $\displaystyle\,-\overline{r}-\tfrac{1}{2}(b+(n-1)c)\pounds_{\varphi^{\#}}\,g+(b^{2}-(n-1)c^{2})\varphi\otimes\varphi$ $\displaystyle-\tfrac{1}{n}(-\overline{s}+(b+(n-1)c)\mathrm{d}^{*}_{g}\varphi+(b^{2}-(n-1)c^{2})|\varphi|^{2})g$ Linear combinations of $r$ and $\rho$: $\displaystyle\tfrac{1}{2}(\mathring{r}-\mathring{\rho})_{()}=$ $\displaystyle\,\,\overline{r}+\tfrac{n-2}{2}(-(b-c)\tfrac{1}{2}\pounds_{\varphi^{\#}}g+(b^{2}+c^{2})\varphi\otimes\varphi)$
$\displaystyle-\tfrac{1}{n}(\overline{s}+\tfrac{n-2}{2}((b-c)\mathrm{d}^{*}_{g}\varphi+(b^{2}+c^{2})|\varphi|^{2}))g$ $\displaystyle\tfrac{1}{2}(r-\rho)_{[\,]}=$ $\displaystyle\,\,a\mathrm{d}\varphi-\tfrac{n-2}{4}(b-c)\mathrm{d}\varphi$ $\displaystyle\tfrac{1}{2}(r+\rho)_{()}=$ $\displaystyle\,\tfrac{1}{2}(\mathring{r}+\mathring{\rho})_{()}$ $\displaystyle=$ $\displaystyle(b+c)(-\tfrac{n}{4}\pounds_{\varphi^{\#}}g+\tfrac{n}{2}(b-c)\varphi\otimes\varphi-\tfrac{1}{2}(\mathrm{d}^{*}_{g}\varphi+(b-c)|\varphi|^{2})g)$ $\displaystyle\tfrac{1}{2}(r+\rho)_{[\,]}=$ $\displaystyle-\tfrac{n}{4}(b+c)\mathrm{d}\varphi$ Antisymmetric $\zeta$ tensor: $\zeta=n\cdot a\,\mathrm{d}\varphi+2(b+c)\mathrm{d}\varphi$

$\nabla^{g}$ tensors:

(1,3)-Curvature Tensor: $\displaystyle H_{ij}{}^{k}{}_{l}=\,$ $\displaystyle\overline{R}_{ij}{}^{k}{}_{l}+\tfrac{b-c}{2}(\overline{\nabla}_{i}\varphi_{l}\delta^{k}_{j}-\overline{\nabla}_{j}\varphi_{l}\delta^{k}_{i}-\overline{\nabla}_{i}\varphi^{k}g_{jl}+\overline{\nabla}_{j}\varphi^{k}g_{il})$ $\displaystyle+\tfrac{(b-c)^{2}}{4}(\delta^{k}_{i}\varphi_{j}-\delta^{k}_{j}\varphi_{i})\varphi_{l}+\tfrac{(b-c)^{2}}{4}(\varphi_{i}g_{jl}-\varphi_{j}g_{il})\varphi^{k}$ $\displaystyle-\tfrac{(b-c)^{2}}{4}|\varphi|^{2}(\delta^{k}_{i}g_{jl}-\delta^{k}_{j}g_{il})$ (0,4)-Curvature Tensor: $\displaystyle H_{ijkl}=$ $\displaystyle\overline{R}_{ijkl}+\tfrac{b-c}{2}(\overline{\nabla}_{i}\varphi_{l}g_{kj}-\overline{\nabla}_{j}\varphi_{l}g_{ki}-\overline{\nabla}_{i}\varphi_{k}g_{jl}+\overline{\nabla}_{j}\varphi_{k}g_{il})$ $\displaystyle+\tfrac{(b-c)^{2}}{4}g\owedge[\varphi\otimes\varphi]_{ijkl}-\tfrac{(b-c)^{2}}{2}|\varphi|^{2}g\owedge g_{ijkl}$ Ricci tensor: $\displaystyle h=\,\,$ $\displaystyle\overline{r}-\tfrac{n-2}{4}(b-c)\mathrm{d}\varphi-\tfrac{n-2}{4}(b-c)\pounds_{\varphi^{\#}}g$ $\displaystyle+\tfrac{n-2}{4}(b-c)^{2}\varphi\otimes\varphi+(\tfrac{b-c}{2}\mathrm{d}^{*}_{g}\varphi-\tfrac{n-2}{4}(b-c)^{2}|\varphi|^{2})g$ Symmetric Ricci tensor: $\displaystyle h_{()}=\,\,$ $\displaystyle\overline{r}-\tfrac{n-2}{4}(b-c)\pounds_{\varphi^{\#}}g+\tfrac{n-2}{4}(b-c)^{2}\varphi\otimes\varphi$ $\displaystyle+(\tfrac{b-c}{2}\mathrm{d}^{*}_{g}\varphi-\tfrac{n-2}{4}(b-c)^{2}|\varphi|^{2})g$ Antisymmetric Ricci tensor: $h_{[\,]}=-\tfrac{n-2}{4}(b-c)\mathrm{d}\varphi$ Trace: $\tau=\overline{s}+(n-1)(b-c)\mathrm{d}^{*}_{g}\varphi-\tfrac{(n-1)(n-2)}{4}(b-c)^{2}|\varphi|^{2}$ Traceless symmetric Ricci tensor: $\displaystyle\mathring{h}_{()}=\,\,$ $\displaystyle\overline{r}-\tfrac{n-2}{4}(b-c)\pounds_{\varphi^{\#}}g+\tfrac{n-2}{4}(b-c)^{2}\varphi\otimes\varphi$ $\displaystyle-\tfrac{1}{n}(\overline{s}+\tfrac{n-2}{2}(b-c)\mathrm{d}^{*}_{g}\varphi+\tfrac{n-2}{4}(b-c)^{2}|\varphi|^{2})g$

#### 7.2.5.
Assuming $a=b$, $(n-1)b+c=-1$, $(n-1)b^{2}-c^{2}=-\tfrac{1}{m}$

If we assume $a=b$, $(n-1)b+c=-1$, and $(n-1)b^{2}-c^{2}=-\tfrac{1}{m}$, then when $n>2$ we have two solutions $(b_{+},c_{+})$ and $(b_{-},c_{-})$, given explicitly by $b_{\pm}=\tfrac{1}{n-2}\big{(}-1\pm\sqrt{\tfrac{m+n-2}{m(n-1)}}\,\big{)}\,,$ $c_{\pm}=-\tfrac{1}{n-2}\big{(}-1\pm\sqrt{\tfrac{(m+n-2)(n-1)}{m}}\,\big{)}\,.$ This produces two torsion-free connections $\nabla_{+}=\nabla_{g}+b_{+}\varphi_{i}\delta^{k}_{j}+b_{+}\varphi_{j}\delta^{k}_{i}+c_{+}\varphi^{k}g_{ij}$ $\nabla_{-}=\nabla_{g}+b_{-}\varphi_{i}\delta^{k}_{j}+b_{-}\varphi_{j}\delta^{k}_{i}+c_{-}\varphi^{k}g_{ij}$ with identical and particularly simple traceless symmetric Ricci tensors $\mathring{r}_{\pm}{}_{()}=\overline{r}+\tfrac{1}{2}\pounds_{\varphi^{\#}}\,g-\tfrac{1}{m}\varphi\otimes\varphi-\tfrac{1}{n}(\overline{s}-\mathrm{d}^{*}_{g}\varphi-\tfrac{1}{m}|\varphi|^{2})g\,,$ despite the fact that their symmetric Ricci tensors differ due to an explicit dependence on $c$: $r_{\pm}{}_{()}=\overline{r}+\tfrac{1}{2}\pounds_{\varphi^{\#}}\,g-\tfrac{1}{m}\varphi\otimes\varphi+c_{\pm}(-\mathrm{d}^{*}_{g}\varphi-|\varphi|^{2})g\,.$ However, if one considers the average of the symmetric Ricci tensors, $\tfrac{1}{2}(r_{+}{}_{()}+r_{-}{}_{()})=\overline{r}+\tfrac{1}{2}\pounds_{\varphi^{\#}}\,g-\tfrac{1}{m}\varphi\otimes\varphi+\tfrac{1}{n-2}(-\mathrm{d}^{*}_{g}\varphi-|\varphi|^{2})g$ (using $\tfrac{1}{2}(c_{+}+c_{-})=\tfrac{1}{n-2}$), then the result does not depend on choosing between $(b_{+},c_{+})$ and $(b_{-},c_{-})$. From this we find also $\tfrac{1}{2}(s_{+}+s_{-})=\overline{s}-2\mathrm{d}^{*}_{g}\varphi-\tfrac{m+1}{m}|\varphi|^{2}-\tfrac{2}{n-2}(\mathrm{d}^{*}_{g}\varphi+|\varphi|^{2})\,.$ The expression $\overline{s}-2\mathrm{d}^{*}_{g}\varphi-\tfrac{m+1}{m}|\varphi|^{2}$ is an analogue of the notion of weighted scalar curvature in [5]. Perhaps studying the averages of these and other curvature tensors will prove worthwhile; it's not hard to see that the quantities $b_{+}+b_{-}$, $c_{+}+c_{-}$, $b_{+}^{2}+b_{-}^{2}$, $c_{+}^{2}+c_{-}^{2}$, and $b_{+}c_{+}+b_{-}c_{-}$ don't involve any root extraction, so the averages of curvature tensors won't involve any root expressions. However, it's not entirely clear what underlying principle would lead to the consideration of such averages. We leave such explorations for the interested reader to embark upon.

## 8\. Acknowledgements

The author would like to thank the Rutgers University-New Brunswick Mathematics Department for their financial support while this work was completed.

## References

* [1] Allendoerfer, C. B., & Weil, A. (1943). The Gauss-Bonnet theorem for Riemannian polyhedra. Transactions of the American Mathematical Society, 53(1), 101-129.
* [2] Barros, A. A., & Gomes, J. N. (2017). Triviality of compact m-quasi-Einstein manifolds. Results in Mathematics, 71(1-2), 241-250.
* [3] Bell, D. (2006). The Gauss-Bonnet theorem for vector bundles. Journal of Geometry, 85(1-2), 15-21.
* [4] Cao, H. D. (2009). Recent progress on Ricci solitons. arXiv preprint arXiv:0908.2006.
* [5] Case, J. S. (2012). Smooth metric measure spaces and quasi-Einstein metrics. International Journal of Mathematics, 23(10), 1250110.
* [6] Chern, S. S. (1944). A simple intrinsic proof of the Gauss-Bonnet formula for closed Riemannian manifolds. Annals of Mathematics, 747-752.
* [7] Corwin, I., & Morgan, F. (2012). The Gauss-Bonnet formula on surfaces with densities. Involve, a Journal of Mathematics, 4(2), 199-202.
* [8] Goldman, W. M. (2011).
Two papers which changed my life: Milnor's seminal work on flat manifolds and flat bundles. arXiv preprint arXiv:1108.0216.
* [9] Lee, J. M. (2006). Riemannian manifolds: an introduction to curvature (Vol. 176). Springer Science & Business Media.
* [10] Li, J., & Xia, C. (2017). An integral formula for affine connections. The Journal of Geometric Analysis, 27(3), 2539-2556.
* [11] Martínez Madrid, D. (2017). On Chern's conjecture about the Euler characteristic of affine manifolds. Instituto de Matemática Pura y Aplicada.
* [12] Milnor, J. (1977). On fundamental groups of complete affinely flat manifolds. Advances in Mathematics, 25(2), 178-187.
* [13] Morgan, F. (2009). Manifolds with density and Perelman's proof of the Poincaré conjecture. The American Mathematical Monthly, 116(2), 134-142.
* [14] Pedersen, H., & Swann, A. (1993). Einstein-Weyl geometry, the Bach tensor and conformal scalar curvature. J. reine angew. Math., 441, 99-113.
* [15] Sharpe, R. W. (2000). Differential geometry: Cartan's generalization of Klein's Erlangen program (Vol. 166). Springer Science & Business Media.
* [16] Smillie, J. The Euler characteristic of flat bundles. Preprint.
* [17] Spivak, M. D. (1970). A comprehensive introduction to differential geometry. Publish or Perish.
* [18] Wylie, W., & Yeroshkin, D. (2016). On the geometry of Riemannian manifolds with density. arXiv preprint arXiv:1602.08000.
# The relevance of pion-exchange contributions versus contact terms in the chiral effective field theory description of nucleon-nucleon scattering

H. Alanazi and R. Machleidt

Department of Physics, University of Idaho, Moscow, ID 83844, USA
<EMAIL_ADDRESS>

###### Abstract

The standard way to demonstrate the relevance of chiral symmetry for the $NN$ interaction is to consider higher partial waves of $NN$ scattering, which are controlled entirely by chiral pion-exchanges (since contacts vanish). However, in applications of $NN$-potentials to nuclear structure and reactions, the lower partial waves are the important ones, making the largest contributions. Lower partial waves are sensitive to the short-range potential, and so, if the short-range contacts were to dominate over the chiral pion-contributions in lower partial waves, the predictions from "chiral potentials" would have little to do with chiral symmetry. To address this issue, we investigate systematically the role of the (chiral) one- and two-pion exchanges, on the one hand, and the effect of the contacts, on the other hand, in the lower partial waves of $NN$ scattering. We are able to clearly identify the signature of chiral symmetry in lower partial waves. Our study also has a pedagogical spin-off, as it demonstrates in detail how the reproduction of the lower partial-wave phase shifts comes about from the various ingredients of the theory.

## 1 Introduction

During the past three decades, it has been demonstrated that chiral effective field theory (chiral EFT) represents a powerful tool to deal with hadronic interactions at low energy (see Refs. ME11 ; EHM09 ; HKK20 for recent reviews). By construction, chiral EFT is a model-independent theory with a firm connection to QCD via the (broken) chiral symmetry of low-energy QCD. Moreover, the method is systematic in the sense that the various contributions to a particular dynamical process can be arranged as an expansion in terms of powers of the soft scale, $Q\sim m_{\pi}$, over the hard scale, $\Lambda_{\chi}\sim m_{\rho}$: $Q/\Lambda_{\chi}$ (where $m_{\pi}$ denotes the pion mass and $m_{\rho}$ the mass of the $\rho$-meson). Within the chiral EFT approach, nucleon-nucleon ($NN$) interactions have been constructed up to fifth order of the chiral expansion (see Refs. Pia15 ; Pia16 ; Car16 ; EMN17 ; RKE18 ; Eks18 for some recent examples). These chiral $NN$ potentials, complemented by chiral three-nucleon forces, have been applied in calculations of few-nucleon reactions NRQ10 ; Viv13 ; Gol14 ; Gir19 , the structure of light- and medium-mass nuclei Nav07a ; Rot11 ; Rot12 ; Hag12a ; Hag12b ; BNV13 ; Her13 ; Hag14a ; Bin14 ; HJP16 ; Sim16 ; Sim17 ; Mor18 ; Som19 ; Hop19 ; Rot19 , and infinite matter HS10 ; Heb11 ; Cor13 ; Hag14b ; Cor14 ; Sam15 ; MS16 ; Dri16 ; Heb15 ; DHS19 ; SM20 ; Jia20 , with, by and large, satisfactory results. These successes have been attributed to the chiral symmetry foundation of the potentials MS20 . In the chiral EFT approach to nuclear forces, a clear distinction is made between the long- and short-range parts of the $NN$ potential. While the long-range part is given by one- and multi-pion exchanges, the short-range description is very different.
Since the short-range nucleon structure cannot be resolved at the low energy scale characteristic for traditional nuclear physics, the short-range description consists simply of polynomials of increasing degree, also known as contact terms, which contribute exclusively in the lower partial waves of the $NN$ interaction. Only the pion-contributions are ruled by chiral symmetry, while the contacts are based on just the usual non-relativistic invariances and have nothing to do with chiral symmetry. Therefore, the standard way to demonstrate the relevance of chiral symmetry for the $NN$ interaction is to consider only higher partial waves, to which contacts do not make contributions and which are, thus, controlled entirely by chiral symmetry KBW97 ; Ent15a ; Ent15b . But how about the lower partial waves? Here, the situation is different, because both contacts and pion-exchanges contribute. Thus, to obtain a clear picture of the role of chiral symmetry in the lower partial waves, the chiral pion-exchanges need to be disentangled from the contact contributions, which may not be easy. Nevertheless, this issue is of interest for the following reasons. Lower partial waves are more sensitive to the short-range potential. Therefore, one may suspect that the contact contributions are dominant and simply override the pion-exchange contributions in lower partial waves. In applications of $NN$-potentials to nuclear structure and reactions, the lower partial waves make the large contributions. Thus, if chiral symmetry ruled only the higher partial waves while the lower partial waves were essentially governed by the contacts, then the predictions from these "chiral potentials" for nuclear structure and reactions would have little to do with chiral symmetry. Chiral potentials would not be much different from phenomenological potentials. Motivated by the above concerns, the purpose of this paper is to systematically investigate the role of the (chiral) one- and two-pion exchange contributions, on the one hand, and the effect of the contacts, on the other hand, in the lower partial waves of chiral $NN$ potentials; that is, to determine whether chiral symmetry plays an essential role in those lower waves. Besides this main physical motivation for this study, we would like to point out that there is also a pedagogical spin-off to this work. We will demonstrate in detail how the reproduction of the lower partial-wave phase shifts comes about from the various ingredients of the theory. In Sect. 2, we present the formalism for the chiral $NN$ potential, order by order, as it matters to the investigation of the issue, which is conducted in Sect. 3. Sect. 4 concludes the paper.

## 2 The chiral $NN$ potential

### 2.1 Power counting

In nuclear EFT, contributions are analyzed in terms of powers of small external momenta over the large scale: $(Q/\Lambda_{b})^{\nu}$, where $Q$ is generic for an external momentum (nucleon three-momentum or pion four-momentum) or a pion mass and $\Lambda_{b}\sim 0.6$ GeV is the breakdown scale appropriate specifically for the $NN$ problem EKM15 . Determining the power $\nu$ has become known as power counting.
Applying (naive) dimensional analysis or Weinberg counting Wei90 , one obtains for the power of a connected irreducible diagram involving $A$ nucleons ME11 : $\nu=-2+2A-2C+2L+\sum_{i}{\Delta_{i}},$ (1) with $\Delta_{i}\equiv d_{i}+\frac{n_{i}}{2}-2,$ (2) where $C$ denotes the number of separately connected pieces and $L$ the number of loops in the diagram; $d_{i}$ is the number of derivatives or pion-mass insertions and $n_{i}$ the number of nucleon fields (nucleon legs) involved in vertex $i$; the sum runs over all vertexes $i$ contained in the diagram under consideration. Note that $\Delta_{i}\geq 0$ for all interactions allowed by chiral symmetry. Since we use the heavy-baryon formalism, we encounter terms which include factors of $Q/M_{N}$, where $M_{N}$ denotes the nucleon mass. We count the order of such terms by the rule $Q/M_{N}\sim(Q/\Lambda_{\chi})^{2}$, for reasons explained in Ref. Wei90 . In this paper, we are mainly concerned with the $NN$ system ($A=2$, $C=1$), in which case the power formula collapses to the very simple expression $\nu=2L+\sum_{i}\Delta_{i}.$ (3)

### 2.2 The long-range $NN$ potential

The long-range part of the $NN$ potential is built up from pion exchanges, which are ruled by chiral symmetry. The various pion-exchange contributions may be analyzed according to the number of pions being exchanged between the two nucleons: $V_{\pi}=V_{1\pi}+V_{2\pi}+V_{3\pi}+\cdots,$ (4) where the meaning of the subscripts is obvious and the ellipsis represents $4\pi$ and higher pion exchanges. For each of the above terms, we have a low-momentum expansion: $\displaystyle V_{1\pi}$ $\displaystyle=V_{1\pi}^{(0)}+V_{1\pi}^{(2)}+V_{1\pi}^{(3)}+V_{1\pi}^{(4)}+V_{1\pi}^{(5)}+\cdots,$ (5) $\displaystyle V_{2\pi}$ $\displaystyle=V_{2\pi}^{(2)}+V_{2\pi}^{(3)}+V_{2\pi}^{(4)}+V_{2\pi}^{(5)}+\cdots,$ (6) $\displaystyle V_{3\pi}$ $\displaystyle=V_{3\pi}^{(4)}+V_{3\pi}^{(5)}+\cdots,$ (7) where the superscript denotes the order $\nu$ of the expansion. Order by order, the long-range $NN$ potential builds up as follows: $\displaystyle V_{L\Omega}$ $\displaystyle\equiv V^{(0)}=V_{1\pi}^{(0)},$ (8) $\displaystyle V_{NL\Omega}$ $\displaystyle\equiv V^{(2)}=V_{L\Omega}+V_{1\pi}^{(2)}+V_{2\pi}^{(2)},$ (9) $\displaystyle V_{NNL\Omega}$ $\displaystyle\equiv V^{(3)}=V_{NL\Omega}+V_{1\pi}^{(3)}+V_{2\pi}^{(3)},$ (10) $\displaystyle V_{N^{3}L\Omega}$ $\displaystyle\equiv V^{(4)}=V_{NNL\Omega}+V_{1\pi}^{(4)}+V_{2\pi}^{(4)}+V_{3\pi}^{(4)},$ (11) $\displaystyle V_{N^{4}L\Omega}$ $\displaystyle\equiv V^{(5)}=V_{N^{3}L\Omega}+V_{1\pi}^{(5)}+V_{2\pi}^{(5)}+V_{3\pi}^{(5)},$ (12) where L$\Omega$ stands for leading order, NL$\Omega$ for next-to-leading order, etc.

#### 2.2.1 Leading order

Table 1: Basic constants used throughout this work PDG .

Quantity | Value
---|---
Axial-vector coupling constant $g_{A}$ | 1.29
Pion-decay constant $f_{\pi}$ | 92.4 MeV
Charged-pion mass $m_{\pi^{\pm}}$ | 139.5702 MeV
Neutral-pion mass $m_{\pi^{0}}$ | 134.9766 MeV
Average pion-mass $\bar{m}_{\pi}$ | 138.0390 MeV
Proton mass $M_{p}$ | 938.2720 MeV
Neutron mass $M_{n}$ | 939.5654 MeV
Average nucleon-mass $\bar{M}_{N}$ | 938.9183 MeV

At leading order (L$\Omega$, $\nu=0$), only one-pion exchange (1PE) contributes to the long range.
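(As a quick illustration of Eqs. (2) and (3) before writing down the explicit potentials, the counting can be mechanized in a few lines. The following Python sketch uses the standard vertex assignments: a leading $\pi N$ vertex has $d_{i}=1$, $n_{i}=2$; a subleading $c_{i}$ vertex has $d_{i}=2$, $n_{i}=2$; a leading contact has $d_{i}=0$, $n_{i}=4$.)

```python
def Delta(d, n):
    """Chiral dimension of a vertex, Eq. (2): d derivatives or pion-mass
    insertions, n nucleon fields."""
    return d + n/2 - 2

def nu(L, vertices):
    """Power nu of an irreducible NN diagram with L loops, Eq. (3)."""
    return 2*L + sum(Delta(d, n) for d, n in vertices)

print(nu(0, [(1, 2), (1, 2)]))          # 0: leading 1PE (tree, two piNN vertices)
print(nu(0, [(0, 4)]))                  # 0: leading contact term
print(nu(1, [(1, 2)]*4))                # 2: NLO 2PE (one loop, leading vertices)
print(nu(1, [(1, 2), (1, 2), (2, 2)]))  # 3: NNLO 2PE with one c_i vertex
```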
The charge-independent 1PE is given by $V_{1\pi}^{(CI)}(\vec{p^{\prime}},\vec{p})=-\>\frac{g_{A}^{2}}{4f_{\pi}^{2}}\>\bm{\tau}_{1}\cdot\bm{\tau}_{2}\>\frac{{\vec{\sigma}}_{1}\cdot\vec{q}\>\>\vec{\sigma}_{2}\cdot\vec{q}}{q^{2}+m_{\pi}^{2}}\>,$ (13) where $\vec{p^{\prime}}$ and $\vec{p}$ denote the final and initial nucleon momenta in the center-of-mass system (CMS), respectively. Moreover, $\vec{q}={\vec{p}}\,^{\prime}-\vec{p}$ is the momentum transfer and $\vec{\sigma}_{1,2}$ and $\bm{\tau}_{1,2}$ are the spin and isospin operators of nucleons 1 and 2. The parameters $g_{A}$, $f_{\pi}$, and $m_{\pi}$ denote the axial-vector coupling constant, pion-decay constant, and the pion mass, respectively. See Table 1 for their values. Higher order corrections to the 1PE are taken care of by mass and coupling constant renormalizations. Note also that, on shell, there are no relativistic corrections. Thus, we apply 1PE in the form of Eq. (13) through all orders. The 1PE potential, Eq. (13), can be re-written as follows: $V_{1\pi}^{(CI)}(\vec{p^{\prime}},\vec{p})=-\>\frac{g_{A}^{2}}{12f_{\pi}^{2}}\>\bm{\tau}_{1}\cdot\bm{\tau}_{2}\left(\vec{\sigma}_{1}\cdot\vec{\sigma}_{2}\,-\,\vec{\sigma}_{1}\cdot\vec{\sigma}_{2}\,\frac{m_{\pi}^{2}}{q^{2}+m_{\pi}^{2}}+\,\frac{S_{12}(\vec{q})}{q^{2}+m_{\pi}^{2}}\>\right),$ (14) with tensor operator $S_{12}(\vec{q})=3\>\vec{\sigma}_{1}\cdot\vec{q}\>\>\vec{\sigma}_{2}\cdot\vec{q}\,-\,\vec{\sigma}_{1}\cdot\vec{\sigma}_{2}\,q^{2}\,,$ (15) where the 1PE has been broken up into a zeroth order spin-spin contact term ("$\delta$-function term"), a spin-spin Yukawa central force, and a tensor piece. The 1PE tensor force is known to be strong, while the spin-spin central force is weak. If one takes the charge-dependence of the 1PE into account, then, in proton-proton ($pp$) and neutron-neutron ($nn$) scattering one has $V_{1\pi}^{(pp)}(\vec{p^{\prime}},\vec{p})=V_{1\pi}^{(nn)}(\vec{p^{\prime}},\vec{p})=V_{1\pi}(m_{\pi^{0}})$ (16) and in $np$ scattering $V_{1\pi}^{(np)}(\vec{p^{\prime}},\vec{p})=-V_{1\pi}(m_{\pi^{0}})+(-1)^{I+1}\>2\>V_{1\pi}(m_{\pi^{\pm}})\>,$ (17) where $I=0,1$ denotes the total isospin of the two-nucleon system and $V_{1\pi}(m_{\pi})=-\>\frac{g_{A}^{2}}{4f_{\pi}^{2}}\>\frac{\vec{\sigma}_{1}\cdot\vec{q}\>\vec{\sigma}_{2}\cdot\vec{q}}{q^{2}+m_{\pi}^{2}}\>.$ (18) The charge-dependence of the 1PE is of order NL$\Omega$ ME11 , but we include it already at order L$\Omega$ to make the comparison with the $np$ phase-shift analyses meaningful.

#### 2.2.2 Next-to-leading order

At next-to-leading order (NL$\Omega$, $\nu=2$), two-pion exchange (2PE) starts and continues through all higher orders.
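Before turning to the 2PE expressions, we note that the 1PE input just defined is simple enough to transcribe directly into code. The following minimal sketch (plain Python; constants from Table 1) returns only the scalar factor multiplying $\vec{\sigma}_{1}\cdot\vec{q}\;\vec{\sigma}_{2}\cdot\vec{q}$ in Eq. (18), with the spin-isospin operators left symbolic, and assembles the charge-dependent combinations of Eqs. (16) and (17):

```python
GA, FPI = 1.29, 92.4                      # g_A and f_pi (MeV), Table 1
MPI0, MPIPM = 134.9766, 139.5702          # neutral/charged pion masses (MeV)

def v1pi(mpi, q2):
    """Scalar factor of sigma1.q sigma2.q in Eq. (18); q2 = q^2 in MeV^2."""
    return -GA**2 / (4.0*FPI**2) / (q2 + mpi**2)

def v1pi_pp(q2):
    """pp (and nn) 1PE, Eq. (16)."""
    return v1pi(MPI0, q2)

def v1pi_np(I, q2):
    """np 1PE, Eq. (17), for total isospin I = 0 or 1."""
    return -v1pi(MPI0, q2) + (-1)**(I + 1) * 2.0 * v1pi(MPIPM, q2)

# Compare np (I=1) and pp at |q| = 100 MeV to see the size of charge dependence:
print(v1pi_np(1, 100.0**2) / v1pi_pp(100.0**2))
```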
The 2PE potential expressions will be stated in terms of contributions to the momentum-space $NN$ amplitude in the CMS, which takes the following general form: $\displaystyle V({\vec{p}}~{}^{\prime},\vec{p})$ $\displaystyle=$ $\displaystyle\>\,V_{C}\>\,+\bm{\tau}_{1}\cdot\bm{\tau}_{2}\,W_{C}$ (19) $\displaystyle+$ $\displaystyle\left[\,V_{S}\>\,+\bm{\tau}_{1}\cdot\bm{\tau}_{2}\,W_{S}\,\>\,\right]\,\vec{\sigma}_{1}\cdot\vec{\sigma}_{2}$ $\displaystyle+$ $\displaystyle\left[\,V_{LS}+\bm{\tau}_{1}\cdot\bm{\tau}_{2}\,W_{LS}\right]\,\left(-i\vec{S}\cdot(\vec{q}\times\vec{k})\,\right)$ $\displaystyle+$ $\displaystyle\left[\,V_{T}\>\,+\bm{\tau}_{1}\cdot\bm{\tau}_{2}\,W_{T}\,\>\,\right]\,\vec{\sigma}_{1}\cdot\vec{q}\,\,\vec{\sigma}_{2}\cdot\vec{q}$ $\displaystyle+$ $\displaystyle\left[\,V_{\sigma L}+\bm{\tau}_{1}\cdot\bm{\tau}_{2}\,W_{\sigma L}\,\right]\,\vec{\sigma}_{1}\cdot(\vec{q}\times\vec{k}\,)\,\,\vec{\sigma}_{2}\cdot(\vec{q}\times\vec{k}\,)\,,$ where $\vec{k}=({\vec{p}}\,^{\prime}+\vec{p})/2$ is the average momentum and $\vec{S}=(\vec{\sigma}_{1}+\vec{\sigma}_{2})/2$ the total spin. For on-shell scattering, $V_{\alpha}$ and $W_{\alpha}$ ($\alpha=C,S,LS,T,\sigma L$) can be expressed as functions of $q=|\vec{q}\,|$ and $p=|{\vec{p}}\,^{\prime}|=|\vec{p}\,|$ only. The NL$\Omega$ 2PE is given by KBW97 ; Ent15a : $\displaystyle W_{C}$ $\displaystyle=$ $\displaystyle{L(\tilde{\Lambda};q)\over 384\pi^{2}f_{\pi}^{4}}\left[4m_{\pi}^{2}(1+4g_{A}^{2}-5g_{A}^{4})+q^{2}(1+10g_{A}^{2}-23g_{A}^{4})-{48g_{A}^{4}m_{\pi}^{4}\over w^{2}}\right]$ (20) $\displaystyle+\,\mbox{\rm polynomial terms of order two}\,,$ $\displaystyle V_{T}$ $\displaystyle=$ $\displaystyle-{1\over q^{2}}V_{S}\;=\;-{3g_{A}^{4}\over 64\pi^{2}f_{\pi}^{4}}L(\tilde{\Lambda};q)\,+\,\mbox{\rm polynomial terms of order zero}\,,$ (21) with $w=\sqrt{4m_{\pi}^{2}+q^{2}}$ and where the (regularized) logarithmic loop function is given by $L(\tilde{\Lambda};q)={w\over 2q}\ln{\frac{\tilde{\Lambda}^{2}(2m_{\pi}^{2}+q^{2})-2m_{\pi}^{2}q^{2}+\tilde{\Lambda}\sqrt{\tilde{\Lambda}^{2}-4m_{\pi}^{2}}\,q\,w}{2m_{\pi}^{2}(\tilde{\Lambda}^{2}+q^{2})}}\,.$ (22) Note that $\lim_{\tilde{\Lambda}\rightarrow\infty}L(\tilde{\Lambda};q)={w\over q}\ln{\frac{w+q}{2m_{\pi}}}$ (23) results in the logarithmic loop function of dimensional regularization. For the explicit expressions of the polynomial terms that contribute to Eqs. (20) and (21), see Ref. KBW97 .

#### 2.2.3 Next-to-next-to-leading order

Table 2: Values of the $\pi N$ LECs as determined in Ref. Hof15 . The $c_{i}$ and $\bar{d_{i}}$ are the LECs of the second and third order $\pi N$ Lagrangians and are in units of GeV$^{-1}$ and GeV$^{-2}$, respectively. The uncertainties in the last digit are given in parentheses after the value.
| NNL$\Omega$ | N3L$\Omega$
---|---|---
$c_{1}$ | -0.74(2) | -1.07(2)
$c_{2}$ | | 3.20(3)
$c_{3}$ | -3.61(5) | -5.32(5)
$c_{4}$ | 2.44(3) | 3.56(3)
$\bar{d_{1}}+\bar{d_{2}}$ | | 1.04(6)
$\bar{d_{3}}$ | | -0.48(2)
$\bar{d_{5}}$ | | 0.14(5)
$\bar{d}_{14}-\bar{d}_{15}$ | | -1.90(6)

At next-to-next-to-leading order (NNL$\Omega$, $\nu=3$), we have the following 2PE contributions KBW97 ; Ent15a : $\displaystyle V_{C}$ $\displaystyle=$ $\displaystyle{3g_{A}^{2}\over 16\pi f_{\pi}^{4}}\left[2m_{\pi}^{2}(c_{3}-2c_{1})+c_{3}q^{2}\right](2m_{\pi}^{2}+q^{2})A(\tilde{\Lambda};q)$ (24) $\displaystyle+\,\mbox{\rm polynomial terms of order three}\,,$ $\displaystyle W_{T}$ $\displaystyle=$ $\displaystyle-{1\over q^{2}}W_{S}=-{g_{A}^{2}\over 32\pi f_{\pi}^{4}}c_{4}w^{2}A(\tilde{\Lambda};q)$ (25) $\displaystyle+\,\mbox{\rm polynomial terms of order one}\,.$ The loop function that appears in the above expressions, regularized by the spectral-function cut-off $\tilde{\Lambda}$, is $A(\tilde{\Lambda};q)={1\over 2q}\arctan{q(\tilde{\Lambda}-2m_{\pi})\over q^{2}+2\tilde{\Lambda}m_{\pi}}\,,$ (26) with $\lim_{\tilde{\Lambda}\rightarrow\infty}A(\tilde{\Lambda};q)={1\over 2q}\arctan{q\over 2m_{\pi}}$ (27) yielding the corresponding loop function of dimensional regularization. The polynomial terms that occur in Eqs. (24) and (25) are given in Ref. KBW97 . In the above expressions, the $\pi N$ low-energy constants (LECs), $c_{i}$, from the second order $\pi N$ Lagrangian appear for the first time. We use the very precise values as determined in the Roy-Steiner analysis of Ref. Hof15 (Table 2).

#### 2.2.4 Next-to-next-to-next-to-leading order

At next-to-next-to-next-to-leading order (N3L$\Omega$, $\nu=4$), we have many contributions, which we will subdivide into separate groups.

##### Football diagram at N3L$\Omega$.

The 2PE football diagram at N3L$\Omega$ generates Kai01 : $\displaystyle V_{C}$ $\displaystyle=$ $\displaystyle{3\over 16\pi^{2}f_{\pi}^{4}}\left[\left({c_{2}\over 6}w^{2}+c_{3}(2m_{\pi}^{2}+q^{2})-4c_{1}m_{\pi}^{2}\right)^{2}+{c_{2}^{2}\over 45}w^{4}\right]L(\tilde{\Lambda};q)\,,$ (28) $\displaystyle W_{T}$ $\displaystyle=$ $\displaystyle-{1\over q^{2}}W_{S}={c_{4}^{2}\over 96\pi^{2}f_{\pi}^{4}}w^{2}L(\tilde{\Lambda};q)\,.$ (29) In addition to the non-polynomial contributions shown in the above equations, there are polynomial terms of order four in the central potential and polynomial terms of order two in the tensor (and spin-orbit) potentials, which we do not show explicitly. This note applies to all N3L$\Omega$ expressions.

##### Leading two-loop contributions.
We state the 2PE two-loop contributions in terms of their spectral functions Kai01 : $\displaystyle{\rm Im}\,V_{C}$ $\displaystyle=$ $\displaystyle{3g_{A}^{4}(2m_{\pi}^{2}-\mu^{2})\over\pi\mu(4f_{\pi})^{6}}\Bigg{[}(m_{\pi}^{2}-2\mu^{2})\left(2m_{\pi}+{2m_{\pi}^{2}-\mu^{2}\over 2\mu}\ln{\mu+2m_{\pi}\over\mu-2m_{\pi}}\right)$ (30) $\displaystyle+\,4g_{A}^{2}m_{\pi}(2m_{\pi}^{2}-\mu^{2})\Big{]}\,,$ $\displaystyle{\rm Im}\,W_{C}$ $\displaystyle=$ $\displaystyle{2\kappa\over 3\mu(8\pi f_{\pi}^{2})^{3}}\int_{0}^{1}dx\,\Big{[}g_{A}^{2}(\mu^{2}-2m_{\pi}^{2})+2(1-g_{A}^{2})\kappa^{2}x^{2}\Big{]}$ (31) $\displaystyle\times\Bigg{\{}\,96\pi^{2}f_{\pi}^{2}\left[(2m_{\pi}^{2}-\mu^{2})(\bar{d}_{1}+\bar{d}_{2})-2\kappa^{2}x^{2}\bar{d}_{3}+4m_{\pi}^{2}\bar{d}_{5}\right]$ $\displaystyle+\left[4m_{\pi}^{2}(1+2g_{A}^{2})-\mu^{2}(1+5g_{A}^{2})\right]{\kappa\over\mu}\ln{\mu+2\kappa\over 2m_{\pi}}\,$ $\displaystyle+\,{\mu^{2}\over 12}(5+13g_{A}^{2})-2m_{\pi}^{2}(1+2g_{A}^{2})$ $\displaystyle-\,3\kappa^{2}x^{2}+6\kappa x\sqrt{m_{\pi}^{2}+\kappa^{2}x^{2}}\ln{\kappa x+\sqrt{m_{\pi}^{2}+\kappa^{2}x^{2}}\over m_{\pi}}$ $\displaystyle+\,g_{A}^{4}\left(\mu^{2}-2\kappa^{2}x^{2}-2m_{\pi}^{2}\right)$ $\displaystyle\times\left[{5\over 6}+{m_{\pi}^{2}\over\kappa^{2}x^{2}}-\left(1+{m_{\pi}^{2}\over\kappa^{2}x^{2}}\right)^{3/2}\ln{\kappa x+\sqrt{m_{\pi}^{2}+\kappa^{2}x^{2}}\over m_{\pi}}\right]\Bigg{\}}\,,$ $\displaystyle{\rm Im}\,V_{S}$ $\displaystyle=$ $\displaystyle\mu^{2}\,{\rm Im}\,V_{T}={g_{A}^{2}\mu\kappa^{3}\over 8\pi f_{\pi}^{4}}\left(\bar{d}_{15}-\bar{d}_{14}\right)\,+\,{2g_{A}^{6}\mu\kappa^{3}\over(8\pi f_{\pi}^{2})^{3}}\int_{0}^{1}dx(1-x^{2})$ (32) $\displaystyle\times\left[{1\over 6}-{m_{\pi}^{2}\over\kappa^{2}x^{2}}+\left(1+{m_{\pi}^{2}\over\kappa^{2}x^{2}}\right)^{3/2}\ln{\kappa x+\sqrt{m_{\pi}^{2}+\kappa^{2}x^{2}}\over m_{\pi}}\right]\,,$ $\displaystyle{\rm Im}\,W_{S}$ $\displaystyle=$ $\displaystyle\mu^{2}\,{\rm Im}\,W_{T}(i\mu)={g_{A}^{4}(4m_{\pi}^{2}-\mu^{2})\over\pi(4f_{\pi})^{6}}$ (33) $\displaystyle\times\left[\left(m_{\pi}^{2}-{\mu^{2}\over 4}\right)\ln{\mu+2m_{\pi}\over\mu-2m_{\pi}}+(1+2g_{A}^{2})\mu m_{\pi}\right]\,,$ where $\kappa=\sqrt{\mu^{2}/4-m_{\pi}^{2}}$. The above expressions involve the $\pi N$ LECs, $\bar{d}_{i}$, from the third order $\pi N$ Lagrangian. The values we apply for these LECs are shown in Table 2. The momentum space amplitudes $V_{\alpha}(q)$ and $W_{\alpha}(q)$ are obtained from the above spectral functions via the subtracted dispersion integrals, $\displaystyle V_{C,S}(q)$ $\displaystyle=$ $\displaystyle-{2q^{6}\over\pi}\int_{2m_{\pi}}^{\tilde{\Lambda}}d\mu\,{{\rm Im\,}V_{C,S}(i\mu)\over\mu^{5}(\mu^{2}+q^{2})}\,,$ $\displaystyle V_{T}(q)$ $\displaystyle=$ $\displaystyle{2q^{4}\over\pi}\int_{2m_{\pi}}^{\tilde{\Lambda}}d\mu\,{{\rm Im\,}V_{T}(i\mu)\over\mu^{3}(\mu^{2}+q^{2})}\,,$ (34) and similarly for $W_{C,S,T}$. For $\tilde{\Lambda}\rightarrow\infty$ the above dispersion integrals yield the results of dimensional regularization, while for finite $\tilde{\Lambda}\geq 2m_{\pi}$ we have what has become known as spectral-function regularization (SFR) EGM04 . The purpose of the finite scale $\tilde{\Lambda}$ is to constrain the imaginary parts to the low-momentum region where chiral effective field theory is applicable. Thus, a reasonable choice for $\tilde{\Lambda}$ is to keep it below the masses of the vector mesons $\rho(770)$ and $\omega(782)$, but above the $f_{0}(500)$ [also known as $\sigma(500)$] PDG .
This suggests that the region 600-700 MeV is appropriate for $\tilde{\Lambda}$. Consequently, we use $\tilde{\Lambda}=650$ MeV. The subtracted dispersion integrals generate (besides the non-polynomial contributions) polynomial terms of order four for the central potentials and polynomial terms of order two for the tensor potentials.

##### Leading relativistic corrections.

The relativistic corrections to the 2PE NL$\Omega$ diagrams count as N3L$\Omega$ ($\nu=4$) and are given by ME11 : $\displaystyle V_{C}$ $\displaystyle=$ $\displaystyle\frac{3g_{A}^{4}}{128\pi f_{\pi}^{4}M_{N}}\bigg{[}\frac{m_{\pi}^{5}}{2w^{2}}+(2m_{\pi}^{2}+q^{2})(q^{2}-m_{\pi}^{2})A(\tilde{\Lambda};q)\bigg{]}\,,$ (35) $\displaystyle W_{C}$ $\displaystyle=$ $\displaystyle\frac{g_{A}^{2}}{64\pi f_{\pi}^{4}M_{N}}\Bigg{\{}\frac{3g_{A}^{2}m_{\pi}^{5}}{2w^{2}}+\big{[}g_{A}^{2}(3m_{\pi}^{2}+2q^{2})-2m_{\pi}^{2}-q^{2}\big{]}$ (36) $\displaystyle\times(2m_{\pi}^{2}+q^{2})A(\tilde{\Lambda};q)\Bigg{\}}\,,$ $\displaystyle V_{T}$ $\displaystyle=$ $\displaystyle-\frac{1}{q^{2}}V_{S}=\frac{3g_{A}^{4}}{256\pi f_{\pi}^{4}M_{N}}(5m_{\pi}^{2}+2q^{2})A(\tilde{\Lambda};q)\,,$ (37) $\displaystyle W_{T}$ $\displaystyle=$ $\displaystyle-\frac{1}{q^{2}}W_{S}=\frac{g_{A}^{2}}{128\pi f_{\pi}^{4}M_{N}}\big{[}g_{A}^{2}(3m_{\pi}^{2}+q^{2})-w^{2}\big{]}A(\tilde{\Lambda};q)\,,$ (38) $\displaystyle V_{LS}$ $\displaystyle=$ $\displaystyle{3g_{A}^{4}\over 32\pi f_{\pi}^{4}M_{N}}\,(2m_{\pi}^{2}+q^{2})A(\tilde{\Lambda};q)\,,$ (39) $\displaystyle W_{LS}$ $\displaystyle=$ $\displaystyle{g_{A}^{2}(1-g_{A}^{2})\over 32\pi f_{\pi}^{4}M_{N}}\,w^{2}A(\tilde{\Lambda};q)\,.$ (40)

##### Leading three-pion exchange.

The leading $3\pi$-exchange contributions that occur at N3L$\Omega$ have been calculated in Refs. Kai00a ; Kai00b and are found to be negligible. We, therefore, omit them.

#### 2.2.5 Subleading relativistic corrections

We also include the $1/M_{N}$ corrections of the 2PE NNL$\Omega$ diagrams. This contribution is repulsive and proportional to $c_{i}/M_{N}$. Since we count $Q/M_{N}\sim(Q/\Lambda_{b})^{2}$, these relativistic corrections are formally of order N4L$\Omega$ ($\nu=5$), but we add them to our N3L$\Omega$ potential to compensate the excessive attraction generated by the football diagram at N3L$\Omega$ EMN17 .
The result for this group of diagrams reads Kai01 ; EMN17 : $\displaystyle V_{C}$ $\displaystyle=$ $\displaystyle{g_{A}^{2}\,L(\tilde{\Lambda};q)\over 32\pi^{2}M_{N}f_{\pi}^{4}}\Big{[}(6c_{3}-c_{2})q^{4}+4(3c_{3}-c_{2}-6c_{1})q^{2}m_{\pi}^{2}$ (41) $\displaystyle+\,6(2c_{3}-c_{2})m_{\pi}^{4}-24(2c_{1}+c_{3})m_{\pi}^{6}w^{-2}\Big{]}\,,$ $\displaystyle W_{C}$ $\displaystyle=$ $\displaystyle-{c_{4}\over 192\pi^{2}M_{N}f_{\pi}^{4}}\left[g_{A}^{2}(8m_{\pi}^{2}+5q^{2})+w^{2}\right]q^{2}\,L(\tilde{\Lambda};q)\,,$ (42) $\displaystyle W_{T}$ $\displaystyle=$ $\displaystyle-{1\over q^{2}}W_{S}={c_{4}\over 192\pi^{2}M_{N}f_{\pi}^{4}}\left[w^{2}-g_{A}^{2}(16m_{\pi}^{2}+7q^{2})\right]L(\tilde{\Lambda};q)\,,$ (43) $\displaystyle V_{LS}$ $\displaystyle=$ $\displaystyle{c_{2}\,g_{A}^{2}\over 8\pi^{2}M_{N}f_{\pi}^{4}}\,w^{2}L(\tilde{\Lambda};q)\,,$ (44) $\displaystyle W_{LS}$ $\displaystyle=$ $\displaystyle-{c_{4}\over 48\pi^{2}M_{N}f_{\pi}^{4}}\left[g_{A}^{2}(8m_{\pi}^{2}+5q^{2})+w^{2}\right]L(\tilde{\Lambda};q)\,.$ (45)

### 2.3 The short-range $NN$ potential ($NN$ contact terms)

In the EFT approach, the short-range interaction is described by contributions of the contact type, which are constrained by parity, time-reversal, and the usual invariances, but not by chiral symmetry. Only even powers of momentum are allowed because of parity and time-reversal. Thus, the expansion of the contact potential is formally given by $V_{ct}=V_{ct}^{(0)}+V_{ct}^{(2)}+V_{ct}^{(4)}+V_{ct}^{(6)}+\cdots\>,$ (46) where the superscript denotes the power or order. In operator form, the contact potentials are given by: Zeroth-order (leading order, L$\Omega$), $V_{ct}^{(0)}(\vec{p^{\prime}},\vec{p})=C_{S}+C_{T}\>\vec{\sigma}_{1}\cdot\vec{\sigma}_{2}\,.$ (47) Second order (next-to-leading order, NL$\Omega$), $\displaystyle V_{ct}^{(2)}(\vec{p^{\prime}},\vec{p})$ $\displaystyle=C_{1}q^{2}+C_{2}k^{2}+(C_{3}q^{2}+C_{4}k^{2})\vec{\sigma_{1}}\cdot\vec{\sigma_{2}}$ $\displaystyle+C_{5}[-i\vec{S}\cdot(\vec{q}\times\vec{k})]+C_{6}(\vec{\sigma_{1}}\cdot\vec{q})(\vec{\sigma_{2}}\cdot\vec{q})$ $\displaystyle+C_{7}(\vec{\sigma_{1}}\cdot\vec{k})(\vec{\sigma_{2}}\cdot\vec{k})\,.$ (48) Fourth order (next-to-next-to-next-to-leading order, N3L$\Omega$): $\displaystyle V_{ct}^{(4)}(\vec{p^{\prime}},\vec{p})$ $\displaystyle=D_{1}q^{4}+D_{2}k^{4}+D_{3}q^{2}k^{2}+D_{4}(\vec{q}\times\vec{k})^{2}$ $\displaystyle+[D_{5}q^{4}+D_{6}k^{4}+D_{7}q^{2}k^{2}+D_{8}(\vec{q}\times\vec{k})^{2}]\vec{\sigma_{1}}\cdot\vec{\sigma_{2}}$ $\displaystyle+(D_{9}q^{2}+D_{10}k^{2})[-i\vec{S}\cdot(\vec{q}\times\vec{k})]$ $\displaystyle+(D_{11}q^{2}+D_{12}k^{2})(\vec{\sigma_{1}}\cdot\vec{q})(\vec{\sigma_{2}}\cdot\vec{q})$ $\displaystyle+(D_{13}q^{2}+D_{14}k^{2})(\vec{\sigma_{1}}\cdot\vec{k})(\vec{\sigma_{2}}\cdot\vec{k})$ $\displaystyle+D_{15}[\vec{\sigma_{1}}\cdot(\vec{q}\times\vec{k})\vec{\sigma_{2}}\cdot(\vec{q}\times\vec{k})]\,.$ (49) In terms of a partial-wave decomposition, we have up to fourth order: $\displaystyle\langle{}^{1}S_{0},p^{\prime}|V_{ct}|{}^{1}S_{0},p\rangle$ $\displaystyle=\widetilde{C}_{{}^{1}S_{0}}+C_{{}^{1}S_{0}}(p^{2}+p^{\prime 2})+\widehat{D}_{{}^{1}S_{0}}(p^{\prime 4}+p^{4})+{D}_{{}^{1}S_{0}}p^{\prime 2}p^{2},$ $\displaystyle\langle{}^{3}S_{1},p^{\prime}|V_{ct}|{}^{3}S_{1},p\rangle$ $\displaystyle=\widetilde{C}_{{}^{3}S_{1}}+C_{{}^{3}S_{1}}(p^{2}+p^{\prime 2})+\widehat{D}_{{}^{3}S_{1}}(p^{\prime 4}+p^{4})+{D}_{{}^{3}S_{1}}p^{\prime 2}p^{2},$ $\displaystyle\langle{}^{3}S_{1},p^{\prime}|V_{ct}|{}^{3}D_{1},p\rangle$
$\displaystyle=C_{{}^{3}S_{1}-{}^{3}D_{1}}p^{2}+\widehat{D}_{{}^{3}S_{1}-{}^{3}D_{1}}p^{4}+{D}_{{}^{3}S_{1}-{}^{3}D_{1}}p^{\prime 2}p^{2},$ $\displaystyle\langle{}^{1}P_{1},p^{\prime}|V_{ct}|{}^{1}P_{1},p\rangle$ $\displaystyle=C_{{}^{1}P_{1}}\>pp^{\prime}+D_{{}^{1}P_{1}}(p^{\prime 3}p+p^{\prime}p^{3}),$ $\displaystyle\langle{}^{3}P_{0},p^{\prime}|V_{ct}|{}^{3}P_{0},p\rangle$ $\displaystyle=C_{{}^{3}P_{0}}\>pp^{\prime}+D_{{}^{3}P_{0}}(p^{\prime 3}p+p^{\prime}p^{3}),$ $\displaystyle\langle{}^{3}P_{1},p^{\prime}|V_{ct}|{}^{3}P_{1},p\rangle$ $\displaystyle=C_{{}^{3}P_{1}}\>pp^{\prime}+D_{{}^{3}P_{1}}(p^{\prime 3}p+p^{\prime}p^{3}),$ $\displaystyle\langle{}^{3}P_{2},p^{\prime}|V_{ct}|{}^{3}P_{2},p\rangle$ $\displaystyle=C_{{}^{3}P_{2}}\>pp^{\prime}+D_{{}^{3}P_{2}}(p^{\prime 3}p+p^{\prime}p^{3}),$ $\displaystyle\langle{}^{3}P_{2},p^{\prime}|V_{ct}|{}^{3}F_{2},p\rangle$ $\displaystyle={D}_{{}^{3}P_{2}-{}^{3}F_{2}}p^{\prime}p^{3},$ $\displaystyle\langle{}^{1}D_{2},p^{\prime}|V_{ct}|{}^{1}D_{2},p\rangle$ $\displaystyle={D}_{{}^{1}D_{2}}p^{\prime 2}p^{2},$ $\displaystyle\langle{}^{3}D_{1},p^{\prime}|V_{ct}|{}^{3}D_{1},p\rangle$ $\displaystyle={D}_{{}^{3}D_{1}}p^{\prime 2}p^{2},$ $\displaystyle\langle{}^{3}D_{2},p^{\prime}|V_{ct}|{}^{3}D_{2},p\rangle$ $\displaystyle={D}_{{}^{3}D_{2}}p^{\prime 2}p^{2},$ $\displaystyle\langle{}^{3}D_{3},p^{\prime}|V_{ct}|{}^{3}D_{3},p\rangle$ $\displaystyle={D}_{{}^{3}D_{3}}p^{\prime 2}p^{2}.$ (50)

Notice that, in our notation, the partial-wave contact LECs

* • $\widetilde{C}_{\alpha}$ are of zeroth order (there are two),
* • ${C}_{\alpha}$ are of second order (there are seven), and
* • $\widehat{D}_{\alpha}$ and ${D}_{\alpha}$ are of fourth order (there are 15),

where $\alpha$ stands for a partial wave or a combination thereof. There exist linear one-to-one relations between the two $\widetilde{C}_{\alpha}$ and $C_{S}$ and $C_{T}$ of Eq. (47), the seven $C_{\alpha}$ and the seven $C_{i}$ of Eq. (48), and the 15 $\widehat{D}_{\alpha}$ and $D_{\alpha}$ and the 15 $D_{i}$ of Eq. (49). The relations can be found in Ref. ME11 .

Note that the partial-wave decomposition of $Q^{\nu}$ (where $Q$ is either the momentum transfer $q$ or the average momentum $k$) has an interesting property. For even $\nu$,

$Q^{\nu}=f_{\frac{\nu}{2}}(\cos\theta)\,,$ (51)

where $f_{m}$ stands for a polynomial of degree $m$ and $\theta$ is the CMS scattering angle. The partial-wave decomposition of $Q^{\nu}$ for a state of orbital angular momentum $L$ involves the integral

$I^{(\nu)}_{L}=\int_{-1}^{+1}Q^{\nu}P_{L}(\cos\theta)d\cos\theta=\int_{-1}^{+1}f_{\frac{\nu}{2}}(\cos\theta)P_{L}(\cos\theta)d\cos\theta\,,$ (52)

where $P_{L}$ is a Legendre polynomial. Due to the orthogonality of the $P_{L}$,

$I^{(\nu)}_{L}=0\hskip 14.22636pt\mbox{for}\hskip 14.22636ptL>\frac{\nu}{2}\,.$ (53)

Consequently, contact terms of order zero contribute only in $S$-waves, while second order terms can contribute up to $P$-waves, fourth order terms up to $D$-waves, etc.
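The selection rule of Eq. (53) is easy to verify numerically. The following sketch projects even powers of the momentum transfer onto Legendre polynomials and confirms that the projection vanishes for $L>\nu/2$; the momenta and the use of Gauss-Legendre quadrature are illustrative choices, not taken from the text.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Verify Eq. (53): for even nu, Q^nu = (p'^2 + p^2 - 2 p' p x)^(nu/2) is a polynomial
# of degree nu/2 in x = cos(theta), so its projection onto P_L(x), Eq. (52),
# vanishes whenever L > nu/2.
p, pp = 1.0, 1.3                      # illustrative momenta (arbitrary units)
x, w = leggauss(20)                   # Gauss-Legendre nodes/weights on [-1, 1]

for nu in (0, 2, 4):
    q2 = pp**2 + p**2 - 2.0*pp*p*x    # momentum-transfer squared, linear in x
    for L in range(4):
        I = np.sum(w * q2**(nu // 2) * Legendre.basis(L)(x))   # I_L^(nu)
        tag = "= 0 (L > nu/2)" if L > nu // 2 else ""
        print(f"nu={nu}, L={L}: I = {I:+.3e} {tag}")
```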
### 2.4 Scattering equation and regularization

The potential $V$ is, in principle, an invariant amplitude (with relativity taken into account perturbatively) and, thus, satisfies a relativistic scattering equation, like, e.g., the Blankenbecler-Sugar (BbS) equation BS66 , which reads explicitly,

${T}({\vec{p}}~{}^{\prime},{\vec{p}})={V}({\vec{p}}~{}^{\prime},{\vec{p}})+\int\frac{d^{3}p^{\prime\prime}}{(2\pi)^{3}}\>{V}({\vec{p}}~{}^{\prime},{\vec{p}}~{}^{\prime\prime})\>\frac{M_{N}^{2}}{E_{p^{\prime\prime}}}\>\frac{1}{{p}^{2}-{p^{\prime\prime}}^{2}+i\epsilon}\>{T}({\vec{p}}~{}^{\prime\prime},{\vec{p}})\,,$ (54)

with $E_{p^{\prime\prime}}\equiv\sqrt{M_{N}^{2}+{p^{\prime\prime}}^{2}}$. The advantage of using a relativistic scattering equation is that it automatically includes relativistic kinematical corrections to all orders. Thus, in the scattering equation, no propagator modifications are necessary when moving up in the orders. Defining

$\displaystyle\widehat{V}(\vec{p^{\prime}},\vec{p})\equiv\frac{1}{(2\pi)^{3}}\sqrt{\frac{M_{N}}{E_{p^{\prime}}}}\>V(\vec{p^{\prime}},\vec{p})\sqrt{\frac{M_{N}}{E_{p}}}$ (55)

and

$\displaystyle\widehat{T}(\vec{p^{\prime}},\vec{p})\equiv\frac{1}{(2\pi)^{3}}\sqrt{\frac{M_{N}}{E_{p^{\prime}}}}\>T(\vec{p^{\prime}},\vec{p})\sqrt{\frac{M_{N}}{E_{p}}}\,,$ (56)

the BbS equation collapses into the usual, nonrelativistic Lippmann-Schwinger (LS) equation,

$\displaystyle\widehat{T}(\vec{p^{\prime}},\vec{p})$ $\displaystyle=\widehat{V}(\vec{p^{\prime}},\vec{p})+\int{d^{3}\>p^{\prime\prime}}\>\widehat{V}(\vec{p^{\prime}},\vec{p^{\prime\prime}})\>\frac{M_{N}}{p^{2}-p^{\prime\prime 2}+i\epsilon}\>\widehat{T}(\vec{p^{\prime\prime}},\vec{p})\,.$ (57)

Iteration of $\widehat{V}$ in the LS equation, Eq. (57), requires cutting $\widehat{V}$ off for high momenta to avoid infinities. This is consistent with the fact that ChPT is a low-momentum expansion which is valid only for momenta $Q\leq\Lambda_{b}\approx 0.6$ GeV. Therefore, the potential $\widehat{V}$ is multiplied with a regulator function $f(p^{\prime},p)$,

$\widehat{V}(\vec{p^{\prime}},\vec{p})\longmapsto\widehat{V}(\vec{p^{\prime}},\vec{p})\>f(p^{\prime},p)\,,$ (58)

with

$f(p^{\prime},p)=\exp[-(p^{\prime}/\Lambda)^{2n}-(p/\Lambda)^{2n}]\,.$ (59)

For the chiral potentials applied in this investigation, we use $\Lambda=500$ MeV EMN17 .
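The regulator of Eq. (59) is simple enough to state directly in code. In this sketch the exponent is set to $n=2$ purely for illustration; the values actually used per order and partial wave are specified in Ref. EMN17 and are an assumption here.

```python
import numpy as np

def regulator(p_out, p_in, Lam=500.0, n=2):
    """Nonlocal regulator f(p', p) of Eq. (59); momenta and Lam in MeV."""
    return np.exp(-(p_out/Lam)**(2*n) - (p_in/Lam)**(2*n))

p = np.array([100.0, 300.0, 500.0, 700.0])
print(regulator(p, p))  # ~1 well below Lam; strong suppression above Lam
```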
## 3 Relevance of contact terms versus pion exchanges in lower partial waves

Figure 1: Chiral expansion of $np$ scattering as represented by the phase shifts in the $J=0,1$ states. Five orders ranging from LO to N4LO are shown as denoted. The solid dots and open circles are results from the Nijmegen multienergy $np$ phase shift analysis Sto93 and the GWU single-energy $np$ analysis SP07 , respectively. (From Ref. EMN17 )

Figure 2: Same as Fig. 1, but for $J=2$, $\epsilon_{1}$, $\epsilon_{2}$, and ${}^{3}D_{3}$.

In Ref. EMN17 , $NN$ potentials through all orders from LO to N4LO were constructed. There are improvements in the reproduction of the empirical phase shifts as the orders increase, and an excellent agreement is achieved at orders N3LO and N4LO, Figs. 1 and 2. Similar results have been obtained by other researchers in the field Pia15 ; Pia16 ; Car16 ; RKE18 ; Eks18 ; EKM15 . Note that such fits involve two contacts at LO which contribute in $S$-waves, nine contacts at NLO and NNLO which contribute up to $P$-waves, and 24 contacts at N3LO and N4LO which contribute up to $D$-waves [cf. Eq. (50)]. It is the purpose of this study to analyze in detail the role of these contacts in the fits of the $L\leq 2$ phase shifts up to N3LO.

As discussed, the nuclear force consists essentially of two parts: the short range (Sect. 2.3) and the long range (Sect. 2.2). In chiral EFT, the long range is represented by one- and multi-pion exchanges, and the short range is described by contact terms. The lower partial waves are particularly sensitive to the short range and, in fact, at N3LO, four contact terms contribute to each $S$-wave, two to each $P$-wave, and one to each $D$-wave, Eq. (50). There are no contact contributions in $F$ and higher partial waves at N3LO.

As explained in the Introduction, since lower partial waves are sensitive to the short-range potential, one may suspect that the contact contributions are dominant and simply override the pion-exchange contributions in lower partial waves which, on the other hand, are the most important ones in applications of the potentials to nuclear structure and reactions. To shed light on this issue, we will now systematically investigate the role of contacts versus pion exchanges in the lower partial waves of $NN$ scattering.

In our $NN$ potential construction, the $\pi N$ LECs are not fit parameters; they are held fixed at their values determined in $\pi N$ scattering, Table 2. Therefore, the LECs of the $NN$ contacts are the only fit parameters available to optimize the reproduction of the $NN$ data (below 300 MeV laboratory energy). In this investigation, we will use the contact LECs to fit specific $NN$ low-energy parameters or phase shifts. We will consider various scenarios, namely, using contacts only or using contacts together with pion contributions of increasing chiral order. The failure to reproduce the $NN$ data by contacts only, and the improvements that occur when (chiral) pion contributions are added, will reveal the relevance of chiral symmetry in those lower partial waves.

To obtain maximum insight into the role that contact terms can play, we will not follow here the rule that contact and pion contributions should be of the same order. In fact, we may, for example, consider contact contributions up to fourth order alone or with just the (lowest order) 1PE or low-order 2PE added, to demonstrate what contacts can maximally achieve or not achieve. For contacts and pion exchanges, we consider orders up to N3LO (fourth order). To keep it simple at the start, we begin with the partial waves that have only one contact, namely, $D$-waves and, then, proceed to the more elaborate cases, $P$- and $S$-waves.

### 3.1 $D$-waves

To demonstrate the relevance of the pion-exchange contributions (versus contacts) in $D$-waves, we consider the following cases, for which we introduce the short notation given in parentheses.

* • Contact contribution only (ct1).
* • LO pion exchange (i.e., 1PE) only and no contact term (L0).
* • LO 1PE plus contact term (L1).
* • NLO pion exchanges only, no contact term (NL0).
* • NLO pion exchanges plus contact term (NL1).
* • NNLO pion exchanges only, no contact term (NNL0).
* • NNLO pion exchanges plus contact term (NNL1).
* • N3LO pion exchanges plus contact term (NNNL1).

Our short notation (given in parentheses) is designed such that the letters always indicate the order of the pion exchanges included and the integer states the number of contacts involved (from the contacts available for the given partial wave). Note that in $D$-waves, there is only one (fourth order) contact per partial wave available, Eq. (50).
When we include this contact term, we fit it to the empirical phase shift at 50 MeV laboratory energy as determined in the Nijmegen phase-shift analysis Sto93 . The values for the contact LECs so obtained are listed in Table 3.

Note that the chiral 2PE expressions at orders NLO, Eqs. (20) and (21), and NNLO, Eqs. (24) and (25), include polynomial terms up to order $Q^{3}$ KBW97 , which do not contribute in $D$-waves [cf. Eq. (53) and text below the equation]. Therefore, in the cases of L1, NL1, and NNL1, the $Q^{4}$ contacts are not renormalized and represent the true corrections needed on top of the non-polynomial parts of the pion exchanges, denoted by L0, NL0, and NNL0, respectively (Table 3 and Fig. 3).

The situation is different at N3LO. The subtracted dispersion integrals, Eq. (34), generate, besides the non-polynomial parts, polynomial terms up to fourth order. Moreover, the other N3LO 2PE contributions (Sect. 2.2.4) and the contributions of Sect. 2.2.5 also include polynomial terms of $\mathcal{O}(Q^{4})$. Thus, the fourth order contact term that we introduce to fit the phase shift at 50 MeV includes a compensation for the fourth order polynomial terms generated by the 2PE contributions. Therefore, in the case of NNNL1 of Table 3, the contact LECs are “renormalized”. They are not just the correction needed beyond the non-polynomial 2PE contribution at N3LO to fit the $D$-wave phase shifts at 50 MeV and should not be interpreted that way. In fact, the large size of the NNNL1 contact LECs shown in Table 3 indicates that the fourth order polynomial terms generated by N3LO pion contributions can be sizable.

The phase shifts up to 300 MeV predicted for the various cases are shown in Fig. 3. Next we will discuss those phase shifts partial wave by partial wave.

Table 3: Contact LECs used for $D$-waves [cf. Eq. (50)] in units of $10^{4}$ GeV$^{-6}$.

case | $D_{{}^{1}D_{2}}$ | $D_{{}^{3}D_{2}}$ | $D_{{}^{3}D_{3}}$
---|---|---|---
ct1 | -3.2575 | -5.7202 | -1.0130
L1 | -1.6165 | -0.0578 | -1.7843
NL1 | -1.2045 | -0.3464 | -2.3773
NNL1 | -0.2068 | 0.2023 | -1.3345
NNNL1 | -2.088 | -3.3804 | -1.4764

Figure 3: $D$-wave phase shifts of neutron-proton scattering for the various cases discussed in the text. Solid dots and open circles as in Fig. 1.

#### 3.1.1 The ${}^{1}D_{2}$-wave

We start with the left ${}^{1}D_{2}$ frame in Fig. 3. When only the contact term is applied and no pion exchanges (curve ct1), the phase shift increases dramatically with energy, indicating that the contact contribution is of very short range and completely inadequate to describe this $D$-wave. 1PE is weak (curve L0). Adding the contact to 1PE brings the phase shift up, but too much, since obviously the contact is dominant. When 2PE contributions are added (right ${}^{1}D_{2}$ frame), the description improves with increasing order. While the NLO 2PE is weak and, therefore, does not lead to much improvement (cf. NL0 and NL1), the NNLO 2PE is known to provide a realistic intermediate-range attraction and together with the contact leads to a quantitative description (curve NNL1), and so does NNNL1. The conclusion is that the contact alone can by no means describe ${}^{1}D_{2}$. The strong intermediate-range attraction provided by chiral 2PE at NNLO and N3LO is crucial. As the small contact LEC in the case of NNL1 reveals (Table 3), the contact contribution is minor, while chiral 2PE rules.
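Operationally, each of the one-parameter fits behind Table 3 is a one-dimensional root search: the single LEC is tuned until the computed phase shift at 50 MeV matches the Nijmegen value. A minimal sketch follows, in which `phase_shift(D, T_lab, wave)` is a hypothetical stand-in for a full scattering-equation solver (not provided here), and the search bracket is an arbitrary illustrative choice.

```python
from scipy.optimize import brentq

def fit_D_wave_LEC(wave, delta_nijmegen_50, phase_shift, bracket=(-10.0, 10.0)):
    """Tune the single fourth-order contact LEC D_alpha (units of 10^4 GeV^-6)
    so that the predicted phase shift at T_lab = 50 MeV matches the target value.
    `phase_shift` is an assumed external solver: phase_shift(D, T_lab, wave)."""
    return brentq(lambda D: phase_shift(D, 50.0, wave) - delta_nijmegen_50, *bracket)

# e.g., for case NNL1:  D_1D2 = fit_D_wave_LEC("1D2", delta_target, phase_shift)
```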
The ${}^{1}D_{2}$ example demonstrates that even when a contact term is involved, chirality can still be the major factor and show a clear signature.

#### 3.1.2 The ${}^{3}D_{2}$-wave

Also in the ${}^{3}D_{2}$-wave, the contact contribution alone (cf. the ct1 curve in the left ${}^{3}D_{2}$ frame in Fig. 3) leads to a dramatically wrong description. In this particular partial wave, the 1PE (L0 curve) happens to play a dominant role, because the matrix element of the tensor operator is 2 in this state which, in addition, is multiplied by $(-3)$ from the $\bm{\tau}_{1}\cdot\bm{\tau}_{2}$ factor, resulting in an overall factor of $(-6)$ for the pion tensor potential. As the L0 curve reveals, this large tensor contribution alone essentially explains the ${}^{3}D_{2}$-wave. 2PE contributions play only a minor role (cf. the right ${}^{3}D_{2}$ frame), because the (mainly) central forces provided by 2PE are small as compared to the huge tensor force contribution from 1PE in this particular wave. This scenario leaves little room for contact contributions. One-pion exchange, the most pronounced expression of chiral symmetry, rules this wave.

#### 3.1.3 The ${}^{3}D_{3}$-wave

The cases ct1, L0, and L1 are inadequate, similarly to what we have seen in ${}^{1}D_{2}$. The 2PE contributions at NLO and NNLO without and with contact contribution (NL0, NNL0 and NL1, NNL1, respectively) do not lead to much improvement. Finally, with NNNL1 a more realistic result starts to develop. A quantitative description has to wait for N4LO, as demonstrated in the ${}^{3}D_{3}$ frame of Fig. 2. In any case, the contact alone cannot describe the ${}^{3}D_{3}$ wave, since the contact contribution is too short-ranged. Substantial intermediate-range attraction is needed, which only chiral 2PE can provide.

#### 3.1.4 The ${}^{3}D_{1}$-wave

Since the ${}^{3}D_{1}$ wave is coupled to ${}^{3}S_{1}$, we will discuss it in conjunction with the coupled ${}^{3}S_{1}$-${}^{3}D_{1}$-$\epsilon_{1}$ system, below.

#### 3.1.5 $D$-waves summary

Contacts alone cannot reproduce $D$-waves (cf. all the ct1 cases in the left column of Fig. 3), because of the short-range nature of contact contributions, which are ill-suited for $D$-waves. The strong intermediate-range attraction provided by chiral 2PE at NNLO and N3LO is crucial, unless the 1PE tensor force is dominant, which also is a reflection of chiral symmetry. For exact fits, contact corrections are needed, but they are very small. Thus, in spite of contributions from contacts, chirality makes the largest imprint on $D$-waves.

The $D$-waves are, in fact, an interesting case. On the one hand, they are not so peripheral that the (very long-ranged) 1PE is dominant and, on the other hand, their orbital angular momentum is large enough to prevent them from being too sensitive to the (short-ranged) contact potential. Thus, the $D$-waves are a true window on the intermediate range. Consequently, they test the reality of the (intermediate-ranged) 2PE as produced by chiral symmetry. In particular, the ${}^{1}D_{2}$-wave demonstrates that this test is passed well.

### 3.2 $P$-waves

In $P$-waves, we have two contacts available per partial wave; one is of order two, $C_{\alpha}$, and the other of order four, $D_{\alpha}$ [cf. Eq. (50)]. We then consider the following cases with the short notation given in parentheses.

* • One contact contribution and nothing else (ct1).
* • Two contact contributions (ct2).
* • LO pion exchange (i.e., 1PE) only and no contact term (L0).
* • LO 1PE plus one contact term (L1).
* • LO 1PE plus two contact terms (L2).
* • NLO pion exchanges plus one contact term (NL1).
* • NLO pion exchanges plus two contact terms (NL2).
* • NNLO pion exchanges plus one contact term (NNL1).
* • NNLO pion exchanges plus two contact terms (NNL2).
* • N3LO pion exchanges plus two contact terms (NNNL2).

The values for the contact LECs used in the various cases are listed in Table 4. As mentioned, the chiral 2PE expressions at orders NLO and NNLO include polynomial terms of order $Q^{2}$, and the 2PE expressions at order N3LO include polynomial terms up to order $Q^{4}$. $\mathcal{O}(Q^{2})$ and $\mathcal{O}(Q^{4})$ polynomial terms do not vanish in $P$-waves [Eq. (53)]. These terms are absorbed by the second and fourth order contact terms. Therefore, the minimal number of contacts to be applied is one (second order) contact at NLO and NNLO, and two (second and fourth order) contacts at N3LO. Thus, the contact LECs shown in Table 4 for NL1, NNL1, and NNNL2 are not just the corrections needed besides the genuine 2PE contributions, and their size does not reflect the size of “what is missing”. However, in the cases NL2 and NNL2, the second contact included, $D_{\alpha}$ (fourth order contact), is not renormalized (since NLO and NNLO 2PE does not generate $Q^{4}$ polynomials) and, therefore, reflects a true fourth order correction. The phase shifts up to 300 MeV that result from the various $P$-wave cases are shown in Fig. 4, which we will discuss now.

Table 4: Contact LECs used in $P$-waves [cf. Eq. (50)]. Second order contacts, $C_{\alpha}$, are in units of $10^{4}$ GeV$^{-4}$, while fourth order contacts, $D_{\alpha}$, are in units of $10^{4}$ GeV$^{-6}$.

case | $C_{{}^{1}P_{1}}$ | $D_{{}^{1}P_{1}}$ | $C_{{}^{3}P_{0}}$ | $D_{{}^{3}P_{0}}$ | $C_{{}^{3}P_{1}}$ | $D_{{}^{3}P_{1}}$ | $C_{{}^{3}P_{2}}$ | $D_{{}^{3}P_{2}}$
---|---|---|---|---|---|---|---|---
ct1 | 6.5533 | 0 | -0.4631 | 0 | 4.3248 | 0 | -0.3256 | 0
ct2 | 2.17 | -5.0 | -0.874 | 10.0 | 1.4127 | -5.0 | -0.4766 | 1.6
L1 | 0.1349 | 0 | 0.8463 | 0 | -0.1732 | 0 | -0.2302 | 0
L2 | 0.1613 | 0.95 | 0.8531 | -0.55 | -0.1480 | 1.58 | -0.3300 | 1.1
NL1 | 0.2295 | 0 | 1.3228 | 0 | -0.4607 | 0 | -0.2203 | 0
NL2 | 0.2664 | 1.45 | 1.3234 | -0.03 | -0.4352 | 1.2 | -0.3203 | 1.1
NNL1 | 0.1821 | 0 | 1.1415 | 0 | -0.7851 | 0 | -0.6333 | 0
NNL2 | 0.1912 | 0.3 | 1.1495 | -0.95 | -0.8133 | -0.58 | -0.6251 | -0.1
NNNL2 | 0.1933 | 9.72 | 1.1883 | 4.92 | -0.8105 | 4.74 | -0.7464 | 5.95

Figure 4: $P$-wave phase shifts of neutron-proton scattering for the various cases discussed in the text. Solid dots and open circles as in Fig. 1.

When we apply only one contact, we use the order-two one and fit it to the empirical phase shift at 50 MeV laboratory energy as determined in the Nijmegen phase-shift analysis Sto93 . When both contacts are involved, we fit the empirical phase shifts at 50 MeV and 150 MeV (if possible). Obviously, with just one contact term and no pion contributions (cases ct1 of the left column of Fig. 4), the description is grossly wrong in all $P$-waves. Adding the second contact does not lead to any improvement in ${}^{1}P_{1}$ and ${}^{3}P_{1}$ and, in fact, in these two cases it is not possible to fit the phase shift at 150 MeV.
The ${}^{3}P_{0}$ and ${}^{3}P_{2}$ partial waves improve with the second contact, but are nowhere close to a quantitative description. Adding 1PE (L0) together with one or two contacts (L1, L2) brings about considerable improvement in most $P$-waves. Turning to the frames of the right column of Fig. 4, where the 2PE exchanges of various orders are added, we observe order by order improvement. ${}^{1}P_{1}$ is described well in the cases of NNL1 and NNL2, while the other partial waves assume a quantitative character only when the powerful 2PE at N3LO is added (case NNNL2).

In summary, contacts alone are inadequate to describe $P$-waves. 1PE brings improvement, but strong chiral 2PE is needed for a quantitative description of $P$-waves. Thus, a clear signature of chiral symmetry can be identified in $P$-waves.

A note is in order on ${}^{3}P_{2}$, since it is coupled with ${}^{3}F_{2}$ and $\epsilon_{2}$ through the contact LEC $D_{{}^{3}P_{2}-^{3}F_{2}}$, Eq. (50). We found that the latter parameter has only a weak effect on the ${}^{3}P_{2}$ phase shift and, therefore, we decided to leave it out of our considerations. We kept it at zero.

### 3.3 The ${}^{1}S_{0}$-wave

Table 5: Columns two to five show the contact LECs used in the ${}^{1}S_{0}$ wave [cf. Eq. (50)]. The zeroth order contact $\widetilde{C}_{{}^{1}S_{0}}$ is in units of $10^{4}$ GeV$^{-2}$; the second order contact $C_{{}^{1}S_{0}}$ in units of $10^{4}$ GeV$^{-4}$; and the fourth order contacts $\widehat{D}_{{}^{1}S_{0}}$ and $D_{{}^{1}S_{0}}$ in units of $10^{4}$ GeV$^{-6}$. Columns six and seven display the $np$ scattering length, $a_{np}$, and effective range, $r_{np}$, in the ${}^{1}S_{0}$ state.

case | $\widetilde{C}_{{}^{1}S_{0}}$ | $C_{{}^{1}S_{0}}$ | $\widehat{D}_{{}^{1}S_{0}}$ | $D_{{}^{1}S_{0}}$ | $a_{np}$ (fm) | $r_{np}$ (fm)
---|---|---|---|---|---|---
ct1 | -0.063985 | 0 | 0 | 0 | -23.74 | 0.69
ct2 | 0.475799 | 4.0 | 0 | 0 | -23.74 | 2.37
ct3 | -0.158301 | 2.0 | -6.0 | 0 | -23.74 | 2.66
L1 | -0.109340 | 0 | 0 | 0 | -23.74 | 1.73
L2 | -0.130919 | 1.33 | 0 | 0 | -23.74 | 2.70
NL2 | -0.146214 | 1.815 | 0 | 0 | -23.74 | 2.70
NNL2 | -0.152032 | 2.36 | 0 | 0 | -23.74 | 2.70
NNNL4 | -0.139563 | 2.417 | -2.332 | -16.74 | -23.74 | 2.70

Figure 5: ${}^{1}S_{0}$ phase shifts of neutron-proton scattering for the various cases discussed in the text. Solid dots and open circles as in Fig. 1.

In the ${}^{1}S_{0}$ wave, we have available a total of four contact terms [cf. Eq. (50)], namely, one zeroth order contact, $\widetilde{C}_{{}^{1}S_{0}}$, one second order contact, $C_{{}^{1}S_{0}}$, and two fourth order contacts, $\widehat{D}_{{}^{1}S_{0}}$ and $D_{{}^{1}S_{0}}$. When we use only one contact, we pick the zeroth order one and fit it to the $np$ ${}^{1}S_{0}$ scattering length, $a_{np}=-23.74$ fm. When we apply two contacts, we fit, besides the scattering length, the ${}^{1}S_{0}$ $np$ effective range parameter, $r_{np}=2.70\pm 0.05$ fm. With three parameters, we also try to reproduce (if possible) the empirical phase shift at 50 MeV laboratory energy as determined in the Nijmegen phase-shift analysis Sto93 and, with four parameters, the phase shift at 150 MeV is included in the fit.

We consider the following cases with the short notation given in parentheses.

* • One contact contribution, and nothing else (ct1).
* • Two contact contributions (ct2).
* • Three contact contributions (ct3).
* • LO 1PE and no contact term (L0).
* • LO 1PE plus one contact term (L1).
* • LO 1PE plus two contact terms (L2).
* • NLO pion exchanges plus two contact terms (NL2).
* • NNLO pion exchanges plus two contact terms (NNL2).
* • N3LO pion exchanges plus four contact terms (NNNL4).

The values for the contact LECs are listed in Table 5, and the phase shifts up to 300 MeV that result from the various ${}^{1}S_{0}$ cases are shown in Fig. 5, which we will discuss now.

When only one contact term is used (fit to $a_{np}$) and no pion contributions (case ct1), then the ${}^{1}S_{0}$ phase shifts for intermediate energies are far above the data. Adding more contacts (cases ct2 and ct3) moves those predictions below the data. The prediction with four contacts is essentially the same as with three contacts and, therefore, not shown. Clearly, contacts alone cannot describe the ${}^{1}S_{0}$ wave at intermediate energies, no matter how many contacts one is using.

1PE alone (L0) is small, and adding to it one or two contacts (cases L1 and L2) brings about predictions that are very similar to the corresponding cases with contacts alone (ct1 and ct2); again, adding more contacts essentially does not change anything. Thus, in ${}^{1}S_{0}$, 1PE is obviously of very limited relevance, except for the effective range parameter, $r_{np}$, which is improved by 1PE (cf. Table 5). The strong part of 1PE is its tensor force, which does not contribute in singlet states, where only the (weak) central force has a presence. The momentum-space 1PE also includes a constant term/contact term [see Eq. (14)], which converts into a $\delta(\vec{r})$-function in position space. The L0 case includes the $\delta(\vec{r})$-function contribution.

We now turn to the frame on the right of Fig. 5, where the 2PE exchanges of the various orders are added in. The NLO 2PE (curve NL2) does not create any improvement over the L2 case. However, 2PE at NNLO (curve NNL2) leads to an excellent reproduction of the ${}^{1}S_{0}$ phase shifts up to 300 MeV. Adding more contacts beyond two in the cases of NLO and NNLO does not improve the description, which is why we do not show these cases. The NNNL4 case creates further subtle refinements.

We remind the reader again of the fact that the chiral 2PE expressions at orders NLO and NNLO include polynomial terms of order $Q^{0}$ and $Q^{2}$, and the 2PE expressions at order N3LO include polynomial terms up to order $Q^{4}$, which are always compensated by contacts of the same order. Therefore, in the case of the ${}^{1}S_{0}$ wave, the minimal number of contacts to be applied at NLO and NNLO is two (zeroth and second order) and four (of orders zero, two, and four) at N3LO. Thus, the contact LECs shown in Table 5 for NL2, NNL2, and NNNL4 are renormalized numbers whose size does not necessarily reflect the size of what is missing beyond the genuine pion-exchange contributions.

In summary, contacts alone are inadequate to describe the ${}^{1}S_{0}$-wave at intermediate energies. The strong chiral 2PE that starts at NNLO is needed for a quantitative description of the ${}^{1}S_{0}$-wave. There is a clear signature of chiral symmetry in the ${}^{1}S_{0}$-wave.

### 3.4 The coupled ${}^{3}S_{1}$-${}^{3}D_{1}$-$\epsilon_{1}$ system

Table 6: Columns two to seven show the contact LECs used in the ${}^{3}S_{1}$-${}^{3}D_{1}$ waves [cf. Eq. (50)].
The $\widetilde{C}_{\alpha}$ of the zeroth order contact are given in units of $10^{4}$ GeV$^{-2}$; the $C_{\alpha}$ of second order in $10^{4}$ GeV$^{-4}$; and the $\widehat{D}_{\alpha}$ and $D_{\alpha}$ of fourth order in $10^{4}$ GeV$^{-6}$. Columns eight and nine display the triplet scattering length, $a_{t}$, and effective range, $r_{t}$, respectively, in the ${}^{3}S_{1}$ state.

case | $\widetilde{C}_{{}^{3}S_{1}}$ | $C_{{}^{3}S_{1}}$ | $\widehat{D}_{{}^{3}S_{1}}$ | $D_{{}^{3}S_{1}}$ | ${D}_{{}^{3}D_{1}}$ | $C_{{}^{3}S_{1}-{}^{3}D_{1}}$ | $a_{t}$ (fm) | $r_{t}$ (fm)
---|---|---|---|---|---|---|---|---
ct1 | -0.077103 | 0 | 0 | 0 | 0 | 0 | 5.42 | 0.68
ct5 | -0.1311 | 2.0 | -0.5 | 0 | 27.0 | -1.25 | 5.42 | 1.76
L1 | -0.06366 | 0 | 0 | 0 | 0 | 0 | 5.42 | 1.59
L5 | -0.13345 | 0.4 | -0.7 | 0 | -2.0 | 0.41 | 5.42 | 1.73
NL2 | -0.136835 | -0.39 | 0 | 0 | 0 | 0 | 5.42 | 1.76
NL5 | -0.1255 | -0.5 | -2.3 | 0 | -2.3 | 0.1 | 5.42 | 1.73
NNL2 | -0.10002 | -0.335 | 0 | 0 | 0 | 0 | 5.42 | 1.75
NNL5 | -0.14875 | 0.4 | -0.1 | 0 | -1.4 | 0.4 | 5.42 | 1.74
NNNL8a | -0.159635 | 0.8233 | -4.319 | -19.17 | -5.59 | 0.503 | 5.42 | 1.75

a In the case of NNNL8, besides the six parameters given, $\widehat{D}_{{}^{3}S_{1}-^{3}D_{1}}=1.162$ and $D_{{}^{3}S_{1}-^{3}D_{1}}=1.759$. In all other cases, $\widehat{D}_{{}^{3}S_{1}-^{3}D_{1}}=D_{{}^{3}S_{1}-^{3}D_{1}}=0$.

Figure 6: ${}^{3}S_{1}$, ${}^{3}D_{1}$, and $\epsilon_{1}$ phase parameters of neutron-proton scattering for the various cases discussed in the text. Solid dots and open circles as in Fig. 1.

In the coupled ${}^{3}S_{1}$-${}^{3}D_{1}$-$\epsilon_{1}$ system, we have available a total of eight contact terms [cf. Eq. (50)]; namely, four for ${}^{3}S_{1}$ ($\widetilde{C}_{{}^{3}S_{1}}$, $C_{{}^{3}S_{1}}$, $\widehat{D}_{{}^{3}S_{1}}$, $D_{{}^{3}S_{1}}$), one for ${}^{3}D_{1}$ (${D}_{{}^{3}D_{1}}$), and three for the ${}^{3}S_{1}$-${}^{3}D_{1}$ transition potential ($C_{{}^{3}S_{1}-{}^{3}D_{1}}$, $\widehat{D}_{{}^{3}S_{1}-{}^{3}D_{1}}$, ${D}_{{}^{3}S_{1}-{}^{3}D_{1}}$). When we use only one of the eight contacts, we pick the zeroth order one, $\widetilde{C}_{{}^{3}S_{1}}$, and fit it to the ${}^{3}S_{1}$ scattering length, $a_{t}=5.42$ fm. When we apply the two contacts $\widetilde{C}_{{}^{3}S_{1}}$ and $C_{{}^{3}S_{1}}$, we fit, besides the scattering length, the ${}^{3}S_{1}$ effective range parameter, $r_{t}=1.75\pm 0.02$ fm. Using the three ${}^{3}S_{1}$ parameters $\widetilde{C}_{{}^{3}S_{1}}$, $C_{{}^{3}S_{1}}$, and $\widehat{D}_{{}^{3}S_{1}}$, we try to also reproduce (if possible) the empirical ${}^{3}S_{1}$ phase shift at 50 MeV laboratory energy as determined in the Nijmegen phase-shift analysis Sto93 . Besides the three contact LECs mentioned, we will, in some cases, also include $D_{{}^{3}D_{1}}$ and $C_{{}^{3}S_{1}-^{3}D_{1}}$, which affect the ${}^{3}D_{1}$ phase shift and the $\epsilon_{1}$ parameter, respectively. To prevent our investigation from becoming too involved, we do not vary the LECs $D_{{}^{3}S_{1}}$, $\widehat{D}_{{}^{3}S_{1}-^{3}D_{1}}$, and $D_{{}^{3}S_{1}-^{3}D_{1}}$ at orders up to NNLO and keep them at zero. Thus, up to NNLO, we will be experimenting with at most five contacts in the ${}^{3}S_{1}$-${}^{3}D_{1}$-$\epsilon_{1}$ system.

We consider the following cases with the short notation given in parentheses.

* • One contact contribution, and nothing else (ct1).
* • Five contact contributions (ct5).
* • LO pion exchange (i.e., 1PE) plus one contact term (L1).
* • LO 1PE plus five contact terms (L5).
* • NLO pion exchanges plus two contact terms (NL2).
* • NLO pion exchanges plus five contact terms (NL5).
* • NNLO pion exchanges plus two contact terms (NNL2).
* • NNLO pion exchanges plus five contact terms (NNL5).
* • N3LO pion exchanges plus eight contact terms (NNNL8).

The values for the contact LECs used in the various cases are listed in Table 6, and the corresponding phase shifts up to 300 MeV are shown in Fig. 6.

When only one contact term is used (fit to $a_{t}$) and no pion contributions (case ct1), then the ${}^{3}S_{1}$ phase shifts at intermediate energies are substantially above the data and $r_{t}$ is off by about 1 fm. Adding one more contact (case ct2, not shown) gets $r_{t}$ correct, but moves the phase shifts at intermediate energies far below the data, very similar to the case ct5 that is shown in Fig. 6. In fact, adding more contacts to the coupled system under consideration does not change the ${}^{3}S_{1}$ phase shifts, up to the maximum of five contacts. Clearly, contacts alone cannot describe the ${}^{3}S_{1}$ wave at intermediate energies, no matter how many contacts one is using.

However, adding 1PE (case L1) makes a big difference, getting the ${}^{3}S_{1}$ phase shifts almost right and finally perfect with more contacts (L5). This is quite in contrast to ${}^{1}S_{0}$, where 1PE has little influence and where 1PE plus contacts never lead to a reproduction of the phase shifts. The reason for this is that, in the coupled ${}^{3}S_{1}$-${}^{3}D_{1}$ state, the 1PE tensor force contributes strongly, which is crucial for the correct description of this coupled system. We now turn to the ${}^{3}S_{1}$ frame on the right of Fig. 6, where the 2PE exchanges of the various orders are added, and we see that 2PE does not make much difference.

Turning to the ${}^{3}D_{1}$ phase shifts, we see again that contacts alone cannot get this partial wave right. The contact contribution is too short-ranged for this partial wave, as clearly seen from the very small contribution at low energies and the too strong contribution above 150 MeV. Adding 1PE gets it right at low energies, but requires short-ranged corrections at higher energies. This can be done by contacts (case L5) or by 2PE contributions of higher order together with moderate contacts (right ${}^{3}D_{1}$ frame).

Finally, we turn to the $\epsilon_{1}$ parameter, which is interesting because it is proportional to the ${}^{3}S_{1}$-${}^{3}D_{1}$ transition potential created exclusively by the tensor force. 1PE generates a (too) strong tensor force (L1) which, when damped by a short-ranged contact, gets it about right (L5). The 2PE contributions of the various orders also generate more or less tensor force, which requires short-range contact corrections to get it right. Thus, qualitatively, 1PE plus a short-range correction is all that is needed for the ${}^{3}S_{1}$-${}^{3}D_{1}$ system. Interestingly, the chiral 2PE contributions are not important in this case. The deeper reason for this is that the iteration of the 1PE tensor force in this coupled system generates a 2PE contribution that is so strong that it makes other 2PE contributions insignificant.
Because of the polynomial terms that accompany chiral 2PE contributions, in the case of the coupled ${}^{3}S_{1}$-${}^{3}D_{1}$ system, the minimal number of contacts to be applied at NLO and NNLO is three, namely, $\widetilde{C}_{{}^{3}S_{1}}$, $C_{{}^{3}S_{1}}$, and $C_{{}^{3}S_{1}-^{3}D_{1}}$. In the case of N3LO it is eight.

In summary, contacts alone are inadequate to describe the ${}^{3}S_{1}$-${}^{3}D_{1}$-$\epsilon_{1}$ system at intermediate energies. Crucial is the 1PE which, for good reasons, is called the Leading Order of the chiral expansion.

## 4 Summary and conclusions

The most characteristic feature in the design of chiral $NN$ potentials is that the long-range part of the potential is described by one- and multi-pion exchanges, which are ruled by chiral symmetry. In contrast, the short-range part consists simply of polynomial terms (“contact” terms), since the short-range nucleon structure cannot be resolved at low energies.

In the lower partial waves of $NN$ scattering, which are the dominant ones for predictions of observables of nuclear structure and reactions, contacts as well as pion exchanges contribute. But, since lower waves are more sensitive to the short range, the contacts may be squashing the pion-exchange contributions, thus diminishing the role of chiral symmetry for those predictions. Hence, the purpose of this study was to investigate the role of the contacts, on the one hand, and the effect of the pion exchanges, on the other hand, in the lower partial waves of chiral $NN$ potentials.

We have shown in detail what contact terms alone can achieve. This is displayed by the brown ct curves in the left frames of Figs. 3 to 6, which all demonstrate that contacts alone are totally inadequate and do not catch anything of the nature of the nuclear force in those partial waves. Adding (chiral) 1PE yields semi-realistic results in some specific partial-wave states, where the tensor force plays an outstanding role. Such cases are the ${}^{3}D_{2}$ state and the ${}^{3}S_{1}$-${}^{3}D_{1}$-$\epsilon_{1}$ system that is coupled through the tensor force.

Chiral 2PE at NLO is generally weak and, therefore, does not bring about much improvement. However, the NNLO 2PE is strong, creating a realistic intermediate-range attraction that cannot be simulated by contacts. This fact is also reflected in the $\chi^{2}$ calculations for the fit of the $NN$ data conducted in Ref. EMN17 . While the $\chi^{2}$/datum at NLO comes out to be 51.5, at NNLO it is 6.3, even though in both cases the number of contact terms is the same. The improvement in the $\chi^{2}$ is due to an improvement of the chiral 2PE at NNLO. Obviously, the contacts cannot substitute for the chiral terms.

For very low energies, the so-called pionless EFT has been developed HKK20 , which consists only of contact terms and does not include any pion-exchange contributions. In view of our rather poor results for most of our “contacts only” fits, one may wonder how well the pionless EFT is doing in describing $NN$ scattering. It needs to be explained that the pionless EFT is meant to be used only for momenta less than the pion mass, say, 100 MeV/c CMS momentum or less, which is equivalent to a laboratory energy of about 20 MeV. Moreover, the pionless theory is mostly used only for $S$-waves. Fitting the $S$-waves at low energies is then understood as reproducing the two effective range parameters.
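For reference, the two effective range parameters are defined through the expansion $p\cot\delta(p)=-1/a+\frac{1}{2}rp^{2}+\mathcal{O}(p^{4})$. The sketch below extracts $a$ and $r$ from low-energy $S$-wave phase shifts; the synthetic input is built from the ${}^{1}S_{0}$ values quoted in Table 5 purely for illustration.

```python
import numpy as np

def effective_range_params(p, delta):
    """Fit p*cot(delta) = -1/a + (r/2)*p^2 to low-energy S-wave phase shifts.
    p in fm^-1, delta in radians; returns (a, r) in fm."""
    y = p / np.tan(delta)
    slope, intercept = np.polyfit(p**2, y, 1)   # y = slope*p^2 + intercept
    return -1.0/intercept, 2.0*slope            # a = -1/intercept, r = 2*slope

# Illustration with synthetic 1S0-like input (a = -23.74 fm, r = 2.70 fm):
a0, r0 = -23.74, 2.70
p = np.linspace(0.02, 0.2, 10)
delta = np.arctan(p / (-1.0/a0 + 0.5*r0*p**2))
print(effective_range_params(p, delta))         # recovers (-23.74, 2.70)
```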
As shown in Table 5, cases ct2 and ct3, the ${}^{1}S_{0}$ $a_{np}$ and $r_{np}$ can be reproduced reasonably well with two or three contacts. Concerning ${}^{3}S_{1}$, the ct5 case shown in Table 6 demonstrates a perfect description of $a_{t}$ and $r_{t}$. Note that for just two contacts in ${}^{3}S_{1}$, the same result is obtained (not shown in the table). Also the ${}^{1}S_{0}$ and ${}^{3}S_{1}$ phase shifts up to 25 MeV (or even 50 MeV) are reasonably close to the empirical ones, Figs. 5 and 6. But what our fits also show is that, when one moves above about 50 MeV, contacts only are inadequate and pion contributions are needed in a decisive way. In this context, it should be noted that, in the pionless theory, better fits above 50 MeV can probably be achieved if one allows, e.g., for different parametrizations in different spin-isospin channels. In the present work, we have not attempted this, but see, e.g., Ref. CRS99 .

Finally, we also note that fitting phase shifts is not everything. The ultimate purpose of nuclear forces (in the context of nuclear structure) is to bind nuclei (with the proper binding energies). Thus, to judge the failure or success of different contributions to nuclear forces, it would be interesting to study what (combination of) contributions are needed to bind nuclei properly, where the analysis should be subdivided into the consideration of light, intermediate, and heavy nuclei. First attempts to examine light nuclei in terms of pionless forces have been started HKK20 , but so far there have not been any comprehensive inquiries. This subject represents a very attractive topic for future research.

In conclusion, despite the fact that contact and pion-exchange contributions are entangled in the all-important lower partial waves of an $NN$ potential, we were able to disentangle them. We managed to identify and pin down many characteristic signatures of chiral symmetry that are crucial for the quantitative description of the nuclear force in those low angular momentum states. However, that does not imply that contacts are totally useless. For the accurate fit of $NN$ quantities, like the effective range parameters, the phase shifts at low energies, and the deuteron binding energy, contacts are needed. They play a subtle role and are like the “dot over the i”.

###### Acknowledgements.

This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-03ER41270.

## References

* (1) R. Machleidt and D. R. Entem, Phys. Rep. 503, 1 (2011).
* (2) E. Epelbaum, H.-W. Hammer and U.-G. Meißner, Rev. Mod. Phys. 81, 1773 (2009).
* (3) H. W. Hammer, S. König, and U. van Kolck, Rev. Mod. Phys. 92, 025004 (2020).
* (4) M. Piarulli, L. Girlanda, R. Schiavilla, R. Navarro Pérez, J. E. Amaro, and E. Ruiz Arriola, Phys. Rev. C 91, 024003 (2015).
* (5) M. Piarulli, L. Girlanda, R. Schiavilla, A. Kievsky, A. Lovato, L. E. Marcucci, Steven C. Pieper, M. Viviani, and R. B. Wiringa, Phys. Rev. C 94, 054007 (2016).
* (6) B. D. Carlsson et al., Phys. Rev. X 6, 011019 (2016).
* (7) D. R. Entem, R. Machleidt, and Y. Nosyk, Phys. Rev. C 96, 024004 (2017).
* (8) P. Reinert, H. Krebs and E. Epelbaum, Eur. Phys. J. A 54, 86 (2018).
* (9) A. Ekström, G. Hagen, T. D. Morris, T. Papenbrock and P. D. Schwartz, Phys. Rev. C 97, 024332 (2018).
* (10) P. Navratil, R. Roth, and S. Quaglioni, Phys. Rev. C 82, 034609 (2010).
* (11) M. Viviani, L. Girlanda, A. Kievsky, and L. E. Marcucci, Phys. Rev. Lett. 111, 172302 (2013).
* (12) J. Golak et al., Eur. Phys. J. A 50, 177 (2014).
* (13) L. Girlanda, A. Kievsky, M. Viviani, and L. E. Marcucci, Phys. Rev. C 99, 054003 (2019).
* (14) P. Navratil, V. G. Gueorguiev, J. P. Vary, W. E. Ormand and A. Nogga, Phys. Rev. Lett. 99, 042501 (2007).
* (15) R. Roth, J. Langhammer, A. Calci, S. Binder and P. Navratil, Phys. Rev. Lett. 107, 072501 (2011).
* (16) R. Roth, S. Binder, K. Vobig, A. Calci, J. Langhammer and P. Navratil, Phys. Rev. Lett. 109, 052501 (2012).
* (17) G. Hagen, M. Hjorth-Jensen, G. R. Jansen, R. Machleidt, and T. Papenbrock, Phys. Rev. Lett. 108, 242501 (2012).
* (18) G. Hagen, M. Hjorth-Jensen, G. R. Jansen, R. Machleidt, and T. Papenbrock, Phys. Rev. Lett. 109, 032502 (2012).
* (19) B. R. Barrett, P. Navratil, and J. P. Vary, Prog. Part. Nucl. Phys. 69, 131 (2013).
* (20) H. Hergert, S. K. Bogner, S. Binder, A. Calci, J. Langhammer, R. Roth, and A. Schwenk, Phys. Rev. C 87, 034307 (2013).
* (21) G. Hagen, T. Papenbrock, M. Hjorth-Jensen, and D. J. Dean, Rep. Prog. Phys. 77, 096302 (2014).
* (22) S. Binder, J. Langhammer, A. Calci and R. Roth, Phys. Lett. B 736, 119 (2014).
* (23) G. Hagen, G. R. Jansen and T. Papenbrock, Phys. Rev. Lett. 117, 172501 (2016).
* (24) J. Simonis, K. Hebeler, J. D. Holt, J. Menendez and A. Schwenk, Phys. Rev. C 93, 011302 (2016).
* (25) J. Simonis, S. R. Stroberg, K. Hebeler, J. D. Holt, and A. Schwenk, Phys. Rev. C 96, 014303 (2017).
* (26) T. D. Morris, J. Simonis, S. R. Stroberg, C. Stumpf, G. Hagen, J. D. Holt, G. R. Jansen, T. Papenbrock, R. Roth, and A. Schwenk, Phys. Rev. Lett. 120, 152503 (2018).
* (27) V. Soma, P. Navratil, F. Raimondi, C. Barbieri, and T. Duguet, Phys. Rev. C 101, 014318 (2020).
* (28) J. Hoppe, C. Drischler, K. Hebeler, A. Schwenk and J. Simonis, Phys. Rev. C 100, 024318 (2019).
* (29) T. Hüther, K. Vobig, K. Hebeler, R. Machleidt, and R. Roth, Phys. Lett. B 808, 135651 (2020).
* (30) K. Hebeler and A. Schwenk, Phys. Rev. C 82, 014314 (2010).
* (31) K. Hebeler, S. K. Bogner, R. J. Furnstahl, A. Nogga, and A. Schwenk, Phys. Rev. C 83, 031301(R) (2011).
* (32) L. Coraggio, J. W. Holt, N. Itaco, R. Machleidt, and F. Sammarruca, Phys. Rev. C 87, 014322 (2013).
* (33) G. Hagen, T. Papenbrock, A. Ekström, K. A. Wendt, G. Baardsen, S. Gandolfi, M. Hjorth-Jensen, and C. J. Horowitz, Phys. Rev. C 89, 014319 (2014).
* (34) L. Coraggio, J. W. Holt, N. Itaco, R. Machleidt, L. E. Marcucci, and F. Sammarruca, Phys. Rev. C 89, 044321 (2014).
* (35) F. Sammarruca, L. Coraggio, J. W. Holt, N. Itaco, R. Machleidt, and L. E. Marcucci, Phys. Rev. C 91, 054311 (2015).
* (36) R. Machleidt and F. Sammarruca, Phys. Scr. 91, 083007 (2016).
* (37) C. Drischler, A. Carbone, K. Hebeler, and A. Schwenk, Phys. Rev. C 94, 054307 (2016).
* (38) K. Hebeler, H. Krebs, E. Epelbaum, J. Golak and R. Skibinski, Phys. Rev. C 91, 044001 (2015).
* (39) C. Drischler, K. Hebeler and A. Schwenk, Phys. Rev. Lett. 122, 042501 (2019).
* (40) F. Sammarruca and R. Millerson, “Exploring the relationship between nuclear matter and finite nuclei with chiral two- and three-nucleon forces,” arXiv:2005.01958 [nucl-th].
* (41) W. G. Jiang, A. Ekström, C. Forssén, G. Hagen, G. R. Jansen and T. Papenbrock, “Accurate bulk properties of nuclei from $A=2$ to $\infty$ from potentials with $\Delta$ isobars,” arXiv:2006.16774 [nucl-th].
* (42) R. Machleidt and F. Sammarruca, Eur. Phys. J. A 56, 95 (2020).
* (43) N. Kaiser, R. Brockmann, and W. Weise, Nucl. Phys. A625, 758 (1997).
* (44) D. R. Entem, N. Kaiser, R. Machleidt, and Y. Nosyk, Phys. Rev. C 91, 014002 (2015).
* (45) D. R. Entem, N. Kaiser, R. Machleidt, and Y. Nosyk, Phys. Rev. C 92, 064001 (2015).
* (46) E. Epelbaum, H. Krebs, and Ulf-G. Meißner, Eur. Phys. J. A 51, 53 (2015).
* (47) S. Weinberg, Phys. Lett. B 251, 288 (1990); Nucl. Phys. B 363, 3 (1991).
* (48) P. A. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
* (49) N. Kaiser, Phys. Rev. C 64, 057001 (2001).
* (50) M. Hoferichter, J. Ruiz de Elvira, B. Kubis, and U.-G. Meißner, Phys. Rev. Lett. 115, 192301 (2015); Phys. Rep. 625, 1 (2016).
* (51) E. Epelbaum, W. Glöckle, and U.-G. Meißner, Eur. Phys. J. A 19, 125 (2004).
* (52) N. Kaiser, Phys. Rev. C 61, 014003 (2000).
* (53) N. Kaiser, Phys. Rev. C 62, 024001 (2000).
* (54) R. Blankenbecler and R. Sugar, Phys. Rev. 142, 1051 (1966).
* (55) V. G. J. Stoks, R. A. M. Klomp, M. C. M. Rentmeester, and J. J. de Swart, Phys. Rev. C 48, 792 (1993).
* (56) R. A. Arndt, W. J. Briscoe, I. I. Strakovsky, and R. L. Workman, Phys. Rev. C 76, 025209 (2007).
* (57) J.-W. Chen, G. Rupak, and M. J. Savage, Nucl. Phys. A 653, 386 (1999).
# Local rules for fabricating allosteric networks

Nidhi Pashine<EMAIL_ADDRESS>Department of Physics, The University of Chicago, Chicago IL, 60637, USA

###### Abstract

Mechanical properties of disordered networks can be significantly tailored by modifying a small fraction of their bonds. This procedure has been used to design and build mechanical metamaterials with a variety of responses. A long-range ‘allosteric’ response, where a localized input strain at one site gives rise to a localized output strain at a distant site, has been of particular interest. This work presents a novel approach to incorporating allosteric responses in experimental systems by pruning disordered networks in-situ. Previous work has relied on computer simulations to design and predict the response of such systems using a cost function, where the response of the entire network to each bond removal is used at each step to determine which bond to prune. It is not feasible to follow such a design protocol in experiments, where one has access only to the local response at each site. This paper presents design algorithms that allow determination of which bonds to prune based purely on the local stresses in the network, without employing a cost function; using only local information, allosteric networks are designed in simulations and then built out of real materials. The results show that some pruning strategies work better than others when translated into an experimental system. A method is presented to measure local stresses experimentally in disordered networks. This approach is then used to implement pruning methods to design desired responses in-situ. Results from these experiments confirm that the pruning methods are robust and work in a real laboratory material.

## I Introduction

Recent advances in the field of mechanical metamaterials have shown that disordered networks are extremely tunable, so that their mechanical response can be altered dramatically by modifying a small number of edges or bonds between nodes. This can be demonstrated with the Poisson’s ratio, $\nu$, which is the negative of the ratio of the strain along the transverse axes to an applied strain along a given axis. In an isotropic network material in $d$ dimensions, $\nu$ can be varied between the two theoretical limits, $\nu=-1$ (auxetic) and $\nu=+1/(d-1)$ (incompressible), by selectively removing a small fraction of the network bonds goodrich2015principle ; hexner2018a ; hexner2018b .

A more general property that can be incorporated into a disordered network is a long-distance response, where applying an input strain at a local site in the system creates an output strain at another distant localized site rocks2017designing ; yan2017architecture ; tlusty2017 . This is referred to as a mechanical ‘allosteric’ response because it is inspired by the property of allostery in protein molecules. Both allosteric and auxetic responses have been successfully designed and incorporated into physical networks by pruning selected bonds rocks2017designing ; reid2018auxetic .

An important difference between auxetic and allosteric responses is that the Poisson’s ratio is a monotonic function of the ratio of the shear, $G$, and bulk, $B$, moduli of a material. In a disordered system, once the contributions of every bond to the bulk and shear moduli are known, it is straightforward to change its Poisson’s ratio by pruning specific bonds.
On the other hand, in earlier works, allosteric systems have been designed by removing bonds from a disordered network using a cost function. This protocol succeeds well for designing materials with multiple targets controlled by a single source using computer simulations rocks2017designing ; rocks2019 . A cost function calculates the global response of the system due to the removal of each bond individually in the network in order to decide which bonds need to be pruned to minimize the difference from a desired response. Such a cost function is difficult to interpret or quantify in terms of simple local properties of the individual bonds in the network. This makes it difficult to create such behavior in a network in situ, so that the result can be achieved without recourse to prior design on a computer.

This paper takes an alternate approach for designing a pruning protocol; the aim is to use only local information encoded in the stresses on each bond due to an externally applied strain. This allows the creation of allosteric responses in spring-network simulations using only information that is available before a bond is removed. This approach is a generalization of the one used to incorporate auxetic response into networks goodrich2015principle . In that case, the pruning was based on the local stresses in the bonds due to an externally applied strain. In the case of allostery, the procedure is extended to include the response to a set of separate, individually applied, strains which are then combined. The results are then tested and validated in experiments; I take the networks that were designed in simulations and build them out of rubber sheets.

One problem encountered in using simulated networks to prune real materials is that the simulations used, which have relied on disordered central-spring networks derived from jammed packings of spheres liu2010jamming , are overly simplified models of the real materials. A physical system is more complicated than such a spring network because it has forces other than those derived from harmonic central-spring interactions. To circumvent this problem, I present an experimental approach to measure the relative magnitude of stresses in networks under any external strain and use it to prune the network systems in-situ. This is done using photoelastic networks that are observed between pairs of cross-polarizers. In this approach, no simulations are necessary for determining which bonds to prune. I find that experimental networks that are designed in simulations have a drastically different response than ones that are pruned in-situ using photoelastic stress measurements. In addition, the different local pruning methods can produce different results in experiments even when they are designed to give the same response in simulations. Taken together, this work improves the understanding of the mechanisms that control allostery in mechanical systems and opens up possibilities of building new and interesting mechanical responses in real materials.

## II Theoretical Approach

The random disordered spring networks are created in two dimensions, $2D$, with periodic boundary conditions. These networks are derived from $2D$ jammed packings of soft discs which are under force balance liu2010jamming ; vanHecke_2015 ; Ohern . Each point of contact between the discs is replaced by a harmonic spring that connects the centers of the two discs. The equilibrium length of each spring is chosen to be the distance between the centers of the discs.
This ensures that the resultant network of nodes connected by bonds is under zero stress in its ground state. The network coordination number, which is the average number of bonds coming out of a node, is denoted by $Z$. In order for such a network to be rigid, it needs to have an average $Z\geq Z_{c}$, the critical coordination number. In $d$ dimensions, and excluding finite-size effects, $Z_{c}=2d$; the $2D$ networks used here have $Z>Z_{c}=4$.

In order to incorporate a long-distance ‘allosteric’ response between two distant sites within a network, two pairs of nodes are picked at random as the source and target, respectively. These are typically separated by half of the system size. One such network is shown in Fig. 1(a). In order to have an allosteric interaction, there should be an output strain, $\epsilon_{T}$, at the target pair of nodes when an input strain, $\epsilon_{S}$, is applied between the two source nodes. The ratio of output to input strains is $\eta\equiv\frac{\epsilon_{T}}{\epsilon_{S}}$. The aim is to incorporate an allosteric response with a desired value of $\eta$ into the network by removing specific bonds, using a local pruning rule that relies only on information available prior to the pruning itself.

The general idea is to apply strains at both the input and output sites (in some cases simultaneously and sometimes separately) to discover which bonds should be removed in order to produce a stress at the target when the source is activated. One might be tempted simply to minimize the energy for a specific mechanical behavior. That is, one might consider removing the bonds that are under highest stress when both the source and target are simultaneously put under the desired strains. This, however, would nearly always fail, because the dominant energy of the strained system is often just due to the strains at the source and target, irrespective of whether the source and target strains are applied simultaneously. In the case of designing an allosteric response, the goal is not merely to lower the energy for the input and output strains, but to create an interaction between the source and target sites. Thus, the source and target sites must communicate with each other. In order to achieve this, one needs to identify specifically the bonds that facilitate, and the ones that hinder, this allosteric response. By identifying and pruning the right set of bonds, it is possible to minimize the interaction energy (not just the total energy of distortion) of the input and output strains.

We apply a deformation to our system, $\epsilon^{k}$. This $\epsilon^{k}$ could be a single strain applied between two points in the system or a combination of strains applied at various locations. Due to this applied strain $\epsilon^{k}$, each bond in the system experiences some stress. The stress in bond $j$ that appears due to $\epsilon^{k}$ is $S_{j}^{k}$. For example, $S_{j}^{source}$ is defined as the stress in bond $j$ as a result of the input strain applied at the $source$. Since all the calculations below are in the linear-response regime,

$S_{j}^{-k}=-S_{j}^{k};~{}~{}S_{j}^{k+l}=S_{j}^{k}+S_{j}^{l}.$

One can calculate the energies in all the bonds of the network under any applied deformation: the energy, $U_{j}^{k}$, in bond $j$ when it is under a stress $S_{j}^{k}$ is

$U_{j}^{k}=\frac{1}{2}S_{j}^{k}\gamma_{j}^{k}$ (1)

where $\gamma_{j}^{k}$ is the strain of the bond $j$ under applied external strain $\epsilon^{k}$.
Since the total energy of the network is simply the sum of the energies of all the individual springs, the total energy stored in the network under an applied strain $\epsilon^{k}$ is $U^{k}=\sum_{j}U^{k}_{j}$. The modulus $M^{k}$ for any given deformation $\epsilon^{k}$ is defined by $U^{k}=\frac{1}{2}M^{k}(\epsilon^{k})^{2}$. It can be decomposed into the contributions of each bond: $M^{k}=\sum_{j}M^{k}_{j}$. $M_{j}^{k}$ is related to $S_{j}^{k}$ as follows:

$U^{k}_{j}=\frac{1}{2}M_{j}^{k}(\epsilon^{k})^{2}=\frac{1}{2}S_{j}^{k}\gamma_{j}^{k}$ (2)

For a linear spring, $S_{j}$ and $\gamma_{j}$ only differ by a factor of the spring constant. This gives the following:

$M_{j}^{k}\propto(S^{k}_{j})^{2}$ (3)

$M^{k}_{j}$ is the contribution of bond $j$ to $M^{k}$, where $M^{k}$ is the modulus for the deformation $\epsilon^{k}$. In section IV it will be shown that $M_{j}^{k}$ is a quantity that can be measured experimentally. Moreover, it is easier to measure $M_{j}^{k}$ than to measure $S_{j}^{k}$ or $\gamma_{j}^{k}$. Since the goal is to be able to prune the networks in experiments, the pruning protocols presented below will be based on measurements of $M_{j}^{k}$.

Figure 1: (a) A sample network with adjacent nodes chosen to be source and target sites. (b) Results from pruning in simulations. Success rate of pruning networks as a function of $|\eta|$. Networks pruned to lower the effective moduli $M^{eff}=M^{link}(M^{target})^{3}$ (blue circles) have the highest success rate, followed by $M^{link}M^{target}$ (black triangles), with $M^{link}$ (red squares) being the least effective way to prune. (c) Average fraction of bonds pruned as a function of $|\eta|$ for each of the three $M^{eff}$. (d) Performance of experimental networks that were designed in simulations to have $\eta\approx 1$. The plot shows the fraction of networks with a response higher than $\eta^{*}$ as a function of $\eta^{*}$.

### Pruning algorithm

In order to incorporate an allosteric response, it is not particularly important how much energy is required to move the $source$ nodes apart, as long as it results in an effect of the correct sign and magnitude at the $target$ site. Therefore, it is of little importance what the individual values of $S_{j}^{source}$ and $S_{j}^{target}$ are; the important quantity is the product of the two, $(S_{j}^{source})(S_{j}^{target})$. This term identifies which bonds are most relevant to both $source$ and $target$, and pruning the bonds with the largest $(S_{j}^{source}S_{j}^{target})$ helps create an interaction between the $source$ and $target$ sites.

Consider applying the input strain at the $source$ and the output strain at the $target$. This is represented as $S^{s+t}$:

$S_{j}^{s+t}=S_{j}^{source}+S_{j}^{target}.$ (4)

Similarly, applying the input strain at the $source$ and the negative of the output strain at the $target$ is represented by $S^{s-t}$:

$S_{j}^{s-t}=S_{j}^{source}-S_{j}^{target}.$ (5)

The moduli $M_{j}$ can be expressed in terms of $S_{j}$:

$M_{j}^{s+t}\propto(S_{j}^{s+t})^{2}=(S_{j}^{source}+S_{j}^{target})^{2}$ (6)

$M_{j}^{s-t}\propto(S_{j}^{source}-S_{j}^{target})^{2}$ (7)

and

$M_{j}^{link}\equiv M_{j}^{s+t}-M_{j}^{s-t}=4(S_{j}^{source})(S_{j}^{target}).$ (8)

Note that taking the difference of the in-phase and out-of-phase terms produces the product of stresses due to applied strain at the source and target sites. This term, $M^{link}$, links the effects of strains at both source and target and can be either positive or negative.
One pruning protocol that would create an allosteric interaction between the source and target would be to prune those bonds in the network that have the maximum value of $M^{link}$. Because $M_{j}^{link}\propto(S_{j}^{source})(S_{j}^{target})$ is symmetric between source and target, the source and the target have been treated on an equal footing. If this were the only criterion for pruning bonds, then the effect on the target of activating the source would be the same as the effect on the source of activating the target. Thus, such a criterion would produce $\eta\approx 1$. However, in many situations it might be preferable to have $\eta\neq 1$. For example, one might want to create a strain at the target that is twice as large as the strain at the source (i.e., $\eta=2$). This would require that the symmetry between source and target be broken so that, for example, the target nodes are easier to move than the source nodes. One effective way to break the symmetry is to bias the modulus by giving more weight to $S_{j}^{target}$ than to $S_{j}^{source}$. One way to do this is to prune the bond with the maximum value of $(M_{j}^{link})(M_{j}^{target})^{n}$ where $n>0$. These combinations of moduli are referred to as the effective modulus, $M^{eff}$. In the results presented in the next section, three examples are used:

1. $(M^{eff,0})=M^{link}$,
2. $(M^{eff,1})=M^{link}M^{target}$,
3. $(M^{eff,3})=M^{link}(M^{target})^{3}$.

Our pruning algorithm, sketched in the code example below, is as follows:

1. Calculate $M_{j}^{eff}$ for each bond $j$ in the network;
2. Remove the bond with the maximum value of $M_{j}^{eff}$;
3. Calculate the new value of $\eta$;
4. Repeat until the desired value of $\eta$ is obtained.

By using effective moduli as the underlying quantity that controls a network’s behavior, it is possible to incorporate responses in disordered networks using local rules alone. The rest of this paper explores the efficacy of this pruning approach using spring-network simulations, followed by an experimental method to measure $M^{eff}$ in order to incorporate allosteric responses in-situ in physical systems.

## III Simulation results

In order to check the efficacy of these algorithms, I simulate the response of networks as the protocols are applied. The simulations can be performed on networks with periodic boundaries as well as ones with free boundaries. A free-boundary network is created by cropping out a circular section from a periodic network. This often produces dangling bonds or zero modes, which are eliminated by removing the relevant bonds and nodes from the edges of the cropped network. Since open-boundary networks are easier to build in experiments, I use these networks to compare the response between simulations and experiments. The simulation results shown here are obtained on networks with periodic boundaries with $\sim 500$ particles and $\sim 1080$ bonds. Unpruned networks have $\eta\approx 0.0$ on average between randomly chosen source and target sites. Networks are pruned until the desired value of $\eta$ is reached or until the process fails due to the creation of a zero mode in the system. $50$ different networks are pruned for both positive and negative values of $\eta$. The networks used in these simulations have an average $\Delta Z=Z-Z_{c}\approx 0.32$. This corresponds to an excess of $\sim 7\%$ more bonds than necessary to maintain rigidity. The success rate of each of the pruning methods is shown in Fig. 1(b).
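Before turning to those results, the following sketch illustrates the greedy pruning loop listed above. It is a schematic only, not the simulation code used in this work: the Network type and the helpers computeMeff, computeEta, removeBond, and hasZeroMode are assumed wrappers around a linear-response solver.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Network;  // assumed: holds nodes, bonds, and a linear-response solver

// Assumed helper signatures (illustrative, wrapping the linear solver):
std::vector<double> computeMeff(const Network& net, int n /* target-bias exponent */);
double computeEta(const Network& net);
void removeBond(Network& net, std::size_t j);
bool hasZeroMode(const Network& net);

// Greedy pruning: remove the bond with the largest M^eff until the
// desired response eta_desired is reached or a zero mode appears.
bool prune(Network& net, double eta_desired, int n) {
    while (std::fabs(computeEta(net)) < std::fabs(eta_desired)) {  // steps 3-4
        std::vector<double> Meff = computeMeff(net, n);            // step 1
        auto it = std::max_element(Meff.begin(), Meff.end());
        removeBond(net, static_cast<std::size_t>(it - Meff.begin()));  // step 2
        if (hasZeroMode(net)) return false;  // pruning failed
    }
    return true;
}
```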
As one might expect, if we prune for higher $|\eta|$, the success rate decreases. It is clear from this data that using just $M^{link}$ to prune a network is not the best strategy because, due to the symmetry between source and target, one can prune only to a maximum of $|\eta|\approx 1$. Even this response can be achieved only about half the time. Biasing $M^{eff}$ towards the target improves the effectiveness of pruning significantly; using $M^{eff,3}=(M^{link})(M^{target})^{3}$ makes it possible to reach $|\eta|>1000$. However, it is important to note that these calculations are performed in the linear-response regime of the network. Such a large $|\eta|$ implies that the source strain must be extremely small in order for the target strain to be $1000$ times the source strain and still be in the linear regime. This makes it highly impractical to measure such a response in a laboratory system. Fig. 1(c) shows the average fraction of bonds that need to be pruned as a function of $\eta$. We see that $100\%$ of the networks fail after an average of $\sim 6\%$ of the bonds are removed. This is when nearly all the excess bonds above the rigidity threshold have been removed; removing any subsequent bond therefore has a high probability of creating a zero-energy mode.

### Efficiency in Experiments

It is known from previous studies that linear spring simulations do not capture all the material details of a real network reid2018auxetic; reid2019auxetic. There are other interactions, such as angle-bending forces, that are present in a real material. In order to test how well our pruning algorithms translate to real networks, we design 2D networks with open boundaries using the three protocols described above and then fabricate them in experiments. We took networks with free boundaries ranging between 110 and 150 particles in size and pruned each of them using our three protocols. We stopped pruning either once the network had achieved $\eta>1$ or once a zero mode was produced and the pruning process failed. Since not all of our algorithms have a $100\%$ success rate, we chose 10 networks that could be pruned successfully using all three effective moduli, $(M^{eff,0})$, $(M^{eff,1})$, and $(M^{eff,3})$. For consistency, in all three cases the same set of source and target nodes is used. For any given starting network, each protocol removes a different set of bonds. I then laser cut 30 realizations of these networks (10 networks $\times$ 3 algorithms) and measured their responses in experiments. Our networks were laser cut from $1.5\,$mm thick sheets of silicone rubber with a hardness of Shore A70. The bonds were made thinner near the nodes to minimize angle-bending interactions in the networks, as was done in previous work rocks2017designing. The ratio of the width of a bond to its average length is 1:6, with the bonds being half as wide near the nodes. In order to measure the observed $\eta$ of these laser-cut networks, an input strain of $5\%$ is applied at the source and the output strain is measured at the target site. Figure 1(d) shows the response of these networks. Since each designed network has a slightly different value of $\eta$, we normalize our experimental results, $\eta_{exp}$, by the value of $\eta$ produced in the simulation, $\eta_{sim}$: $\eta^{*}={\eta_{exp}}/{\eta_{sim}}$, and plot it along the abscissa. The ordinate shows the fraction of networks whose response exceeds a given $\eta^{*}$. If our simulations were a perfect model for the experimental systems, the data in Fig. 1(d) would be a horizontal line at $1.0$.
This data is surprising because some algorithms show much better agreement between experiments and simulations than others. Interestingly, Fig. 1(b) shows that pruning with $M^{eff,0}=M^{link}$ works only about $50\%$ of the time, but when these networks are translated to experiments, they have a very high success rate. On the other hand, $M^{eff,3}=M^{link}(M^{target})^{3}$ works very well in simulations but not in experiments. This suggests that the disparity between experiments and simulations increases as the complexity of the pruning algorithm increases. I hypothesize that the inclusion of $(M^{target})^{n}$ increases the effect of the non-linear terms, so that the predictions from simulation are farther from our experimental results.

## IV Pruning in-situ

Our results show that linear spring models do not work perfectly for designing real materials. In order to see what is going on in the laboratory material, we need a way to measure the stresses in a physical network. This section presents experiments to measure the stress distribution in physical networks and use that information to prune the networks in-situ.

### Setup

Figure 2: (a) Setup for visualizing stresses in photoelastic networks. (b) Sample image when the network has no external strain. (c) Image of the same network as in (b) with applied strains between adjacent node pairs circled in yellow.

The stresses in a transparent material can be quantitatively detected by measuring stress-induced birefringence hecht2002optics. The linear polarization of a beam of light will not be affected as it passes through an isotropic material; an analyzing polarizer with perpendicular orientation on the exiting side of the material will block all the light. A photographic image would be completely dark. However, if there are stresses in the photoelastic material, the polarization axis of the light will be rotated, depending on the orientation of the stress with respect to the polarization axis. The relative phase shift, $\Delta$, between the two principal directions is proportional to the difference between the two principal stresses hecht2002optics:

$\Delta\propto S_{1}-S_{2}$ (9)

If the stresses are small enough that the rotation angle is small, the analyzing polarizer will transmit the light in proportion to the stress. A photographic image will be bright in those regions where the stress is large and completely dark where there is no net stress. Using circularly polarized light allows the magnitude of the stress to be measured regardless of its orientation. The drawback of using circular polarization is that circular polarizers are sensitive to the wavelength of the light. Thus monochromatic light must be used. In the experimental measurements of stresses reported here, the disordered networks are made out of molded urethane rubber (Smooth-On Clear Flex™ 50) with a Shore hardness of A50. This material is highly sensitive to stresses and allows measurement of stresses in the range of $4\,\mathrm{kPa}$–$20\,\mathrm{kPa}$ in the current setup. The liquid urethane material is poured into molds in the shape of the desired networks. The molds are 3D printed in a soft rubber material with a Shore hardness of A28. Before each use, the molds were coated with a release agent (Ease Release™ 200) to ensure that the molded urethane networks are easy to remove from the molds. The networks were cured in the mold at room temperature for 12 hours.
After removal from the mold, they were cured for an additional 2-3 days at room temperature to reduce the tackiness of the surfaces of the molded networks. A schematic of the experimental setup is shown in Fig. 2(a). Data is collected by the camera at wavelength $\lambda=500\pm 10\,$nm. The initially unpolarized white light is first polarized using a linear polarizer and then converted to circular polarization using a quarter-wave plate for $\lambda=500\,$nm. The light then passes through the sample. On the other side of the sample, another quarter-wave plate converts the light back to linearly polarized light, followed by a linear polarizer that is oriented perpendicular to the polarization of the initial light. The total stress at each point in the material can be measured by the intensity of the light in the photographic image. Even after letting the molded photoelastic networks cure for a few days, their surfaces are still tacky enough to stick to a glass or acrylic surface. This produces extraneous stresses in the material that are unrelated to those caused simply by placing strains at the source or target nodes. Additionally, because of the high surface tension of the molding liquid, the top surface of the network has a meniscus, so it is not completely flat. As a result, a ray of light going through such a curved surface is reflected and refracted in various directions and gives rise to unwanted signals in our data. Both of these problems can be eliminated by submerging the networks in an index-matched fluid. I used mineral oil with a refractive index of $1.47$, which is very close to the refractive index of the photoelastic networks, $1.48$. Sample images from this setup without and with an applied strain are shown in Fig. 2(b) and (c), respectively. The image in Fig. 2(b) shows that when the network is under no stress, the image of the network in the camera is dark and nearly undetectable. When an external strain is applied to the network as shown in Fig. 2(c), different bonds in the network are stressed by different amounts as a reaction to the input strain. These stressed regions now appear as bright spots in our image. The intensity of light, $I(x,y)$, at any point $(x,y)$ in the material is related to the electric field, $E(x,y)$, and the phase shift, $\Delta(x,y)$:

$I(x,y)\propto E(x,y)^{2}\propto\sin^{2}(\Delta(x,y))\propto\sin^{2}(S(x,y)).$ (10)

As long as the input strain is sufficiently small, the resultant stresses are in the regime where $\sin(S)\approx S$. In this limit, the brightness of a bond is proportional to the square of the stress. Hence, the intensity $I$ averaged over the length of bond $j$ is proportional to $M_{j}$ in Eq. 3.

### Experimental Results

Figure 3: Success rate of networks pruned in-situ as a function of $\eta$. Following the same trend as the simulations, networks pruned to lower the effective moduli $M^{eff}=M^{link}(M^{target})^{3}$ (blue circles) have the highest success rate, followed by $M^{eff}=M^{link}M^{target}$ (black triangles), and $M^{eff}=M^{link}$ (red squares).

I conduct the in-situ pruning experiments on photoelastic networks that are between $110$ and $150$ nodes in size. The photoelastic networks are molded to be $5\,$mm thick with an average bond length of $12\,$mm. The ratio of the width of a bond to its average length is $1:4$, with the bonds being a third as wide near the nodes as in their middle.
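To connect Eq. (10) to the per-bond moduli used for pruning, the sketch below is illustrative only (the image array, the per-bond pixel masks, and all names are assumptions, not the original analysis code): it averages the intensity of a dark-frame-subtracted image over each bond's pixels, which in the small-stress limit is proportional to $M_j$.

```cpp
#include <cstddef>
#include <vector>

// A pixel location in the camera image (assumed convention: row, column).
struct Pixel { int r, c; };

// Estimate M_j for each bond by averaging image intensity over the bond's
// pixels. In the small-stress limit of Eq. (10), I ~ sin^2(S) ~ S^2 ~ M_j.
std::vector<double> bondModuliFromImage(
        const std::vector<std::vector<double>>& image,      // zero-strain reference subtracted
        const std::vector<std::vector<Pixel>>& bondPixels)  // pixel mask of each bond
{
    std::vector<double> M(bondPixels.size(), 0.0);
    for (std::size_t j = 0; j < bondPixels.size(); ++j) {
        double sum = 0.0;
        for (const Pixel& p : bondPixels[j])
            sum += image[p.r][p.c];
        if (!bondPixels[j].empty())
            M[j] = sum / bondPixels[j].size();  // average intensity ~ M_j, up to calibration
    }
    return M;
}
```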
This slight difference in geometry, relative to the $1:6$ laser-cut networks of section III, makes the middle part of the bonds wider than in the experiments described above. It ensures that the effect of noise from the edges of the photoelastic bonds is minimized. In order to measure the response of these pruned networks, I remake them in silicone rubber using the same specifications as in section III. This is done to avoid any differences arising from the material properties when comparing the response of designed and experimentally pruned networks. In order to prune these networks, I take images where a strain is applied (i) at the source and target in phase $(M^{s+t})$, (ii) out of phase $(M^{s-t})$, and (iii) only at the target site $(M^{target})$. By computing combinations of these images, I obtain a measure of the same three effective moduli as defined in section II: $M^{eff}=M^{link}$, $M^{eff}=M^{link}M^{target}$, and $M^{eff}=M^{link}(M^{target})^{3}$. The experiments were conducted on $10$ different networks for each $M^{eff}$. In each pruning step, I first calculate $M^{eff}$ for all the bonds, remove the bond with the maximum $M^{eff}$, and then measure $\eta$ of the pruned network. As the bonds are pruned sequentially, the signal-to-noise ratio drops steadily; when the signal is too low to be reliable, the pruning is halted. The recorded $\eta$ for a particular sample is the maximum $\eta$ that the network exhibits at any point during the pruning process. The results from the in-situ pruning experiments are shown in Fig. 3. To some extent, all pruning methods show an allosteric response. In particular, by sequentially removing the bond with the largest value of $M^{eff,3}=M^{link}(M^{target})^{3}$ at each step, networks can often be successfully pruned to produce $\eta=1$. The three different pruning methods have very different success rates. This was also seen in the simulation results shown in Fig. 1(b). The results from the experiments and from the simulations follow the same trend. The same network design pruned to minimize the same $M^{eff}$ leads to the removal of very different sets of bonds in simulations versus in the photoelastic experiments. This is not surprising, since the simulations only considered central, harmonic springs, whereas the experiments had all the interactions inherent in an elastic sheet, such as angle-bending forces at each node and non-linear stress/strain curves for all the bonds. However, it is surprising that, compared to the experimental results shown in Fig. 1(d), the trend for networks pruned in-situ is completely reversed. This suggests that the simulation models are too simple to capture all the stresses in a real material. These results reinforce the conclusion that in order to make a physical system with an allosteric response, a simplified model is not sufficient; it is important to visualize and measure all the interactions in the system. Using an experimental procedure to detect the stresses in physical networks, it is possible to create allosteric networks in experiments with a high success rate.

## V Conclusions

The emphasis of this work has been to understand the local pruning rules that control allosteric response in mechanical networks, as well as to fabricate these networks in real (physical) experimental systems. In order to create a robust response, it is important to identify the relevant parameters that govern allosteric behavior.
The effective moduli introduced here are the bond-level contributions to allostery. The simulations show that this approach is very effective at designing allosteric responses with minimal computational cost. In some cases, the results of these simulations can be directly translated into experimental realizations. These local pruning rules are more efficient than previous methods for incorporating allosteric behavior into experimental systems rocks2017designing. Similar local pruning rules can also be used to create more general responses, such as multiple pairs of allosteric source and target sites, or a single source site controlling multiple target sites. As expected, when the complexity of the incorporated response increases, the efficiency of such pruning algorithms goes down. However, the simulations of networks connected by linear, central-force springs do not capture all the details of a real material. In order to create allosteric responses efficiently in experiments, it is necessary to be able to measure stress distributions in the physical networks. I have presented an experimental procedure to visualize and measure the stresses in these systems and to use them to prune the appropriate bonds in the networks in-situ to achieve a desired response. The experimental technique to measure local stresses provides a pathway for modifying physical systems with more complex interactions. It would be interesting to see if it can be used to create other mechanical responses, such as auxetic behavior. This protocol can also be applied to other disordered systems that are not based on spring networks. The pruning methods explored here could be extended to 3D systems and to smaller length scales by using stimuli-responsive materials that can detect stresses in polymers at the molecular level chen2020force; deneke2020engineer. Recent work has shown that stress-induced aging can be used to modify material properties pashine2019directed. An externally imposed strain can direct the evolution of a material and determine its mechanical response. Our understanding of local pruning rules that control allostery, in combination with directed aging protocols, can be very useful in designing allosteric systems while eliminating the need to manipulate the material manually at the microscopic level. Local learning rules, such as the ones presented here, are also of interest in supervised learning of elastic and flow networks stern2020supervised. In conclusion, local pruning rules that allow the manipulation of material response, combined with the ability to measure stress distributions in-situ, enable the modification of materials that are more complex than the linear spring networks that have been the focus of previous simulation studies. This approach opens up novel ways to build mechanical metamaterials with a desired function without relying on computer simulations.

## VI Acknowledgements

I would like to thank Daniel Hexner, Andrea J. Liu, and Nachi Stern for insightful discussions. I am deeply grateful to Sidney R. Nagel for his advising and mentoring. I would also like to thank Cacey Stevens Bester for information regarding photoelastic materials, and Robert Morton for help with 3D printing.
This work was supported by the NSF MRSEC Program DMR-2011854 (for experimental studies), the US Department of Energy, Office of Science, Basic Energy Sciences, under Grant DE-SC0020972 (for theoretical model development) and by the Simons Foundation for the collaboration “Cracking the Glass Problem” Award 348125 (for simulations).

## References

* [1] Carl P Goodrich, Andrea J Liu, and Sidney R Nagel. The principle of independent bond-level response: Tuning by pruning to exploit disorder for global behavior. Physical Review Letters, 114(22):225501, 2015.
* [2] Daniel Hexner, Andrea J Liu, and Sidney R Nagel. Role of local response in manipulating the elastic properties of disordered solids by bond removal. Soft Matter, 14:312–318, 2018.
* [3] Daniel Hexner, Andrea J Liu, and Sidney R Nagel. Linking microscopic and macroscopic response in disordered solids. Phys. Rev. E, 97:063001, 2018.
* [4] Jason W Rocks, Nidhi Pashine, Irmgard Bischofberger, Carl P Goodrich, Andrea J Liu, and Sidney R Nagel. Designing allostery-inspired response in mechanical networks. Proceedings of the National Academy of Sciences, 114(10):2520–2525, 2017.
* [5] Le Yan, Riccardo Ravasio, Carolina Brito, and Matthieu Wyart. Architecture and coevolution of allosteric materials. Proceedings of the National Academy of Sciences, 114(10):2526–2531, 2017.
* [6] Tsvi Tlusty, Albert Libchaber, and Jean-Pierre Eckmann. Physical model of the genotype-to-phenotype map of proteins. Phys. Rev. X, 7:021037, 2017.
* [7] Daniel R Reid, Nidhi Pashine, Justin M Wozniak, Heinrich M Jaeger, Andrea J Liu, Sidney R Nagel, and Juan J de Pablo. Auxetic metamaterials from disordered networks. Proceedings of the National Academy of Sciences, 115(7):E1384–E1390, 2018.
* [8] Jason W Rocks, Henrik Ronellenfitsch, Andrea J Liu, Sidney R Nagel, and Eleni Katifori. Limits of multifunctionality in tunable networks. Proceedings of the National Academy of Sciences, 116(7):2506–2511, 2019.
* [9] Andrea J Liu and Sidney R Nagel. The jamming transition and the marginally jammed solid. Annu. Rev. Condens. Matter Phys., 1(1):347–369, 2010.
* [10] Wouter G. Ellenbroek, Varda F. Hagh, Avishek Kumar, M. F. Thorpe, and Martin van Hecke. Rigidity loss in disordered systems: Three scenarios. Phys. Rev. Lett., 114:135501, 2015.
* [11] Corey S. O’Hern, Leonardo E. Silbert, Andrea J. Liu, and Sidney R. Nagel. Jamming at zero temperature and zero applied stress: The epitome of disorder. Phys. Rev. E, 68:011306, 2003.
* [12] Daniel R Reid, Nidhi Pashine, Alec S Bowen, Sidney R Nagel, and Juan J de Pablo. Ideal isotropic auxetic networks from random networks. Soft Matter, 15:8084–8091, 2019.
* [13] Eugene Hecht. Optics, 4e. Addison-Wesley, 2002.
* [14] Yinjun Chen, C Joshua Yeh, Yuan Qi, Rong Long, and Costantino Creton. From force-responsive molecules to quantifying and mapping stresses in soft materials. Science Advances, 6(20):eaaz5093, 2020.
* [15] Naomi Deneke, Mitchell L Rencheck, and Chelsea Simone Davis. An engineer’s introduction to mechanophores. Soft Matter, 2020.
* [16] Nidhi Pashine, Daniel Hexner, Andrea J Liu, and Sidney R Nagel. Directed aging, memory, and nature’s greed. Science Advances, 5(12):eaax4215, 2019.
* [17] Menachem Stern, Daniel Hexner, Jason W Rocks, and Andrea J Liu. Supervised learning in physical networks: From machine learning to learning machines. arXiv preprint arXiv:2011.03861, 2020.
# An embedded multichannel sound acquisition system for drone audition

Michael Clayton, Lin Wang, Andrew McPherson, Andrea Cavallaro

Manuscript received: December 25, 2020. The authors are with the Centre for Intelligent Sensing, Queen Mary University of London, London, UK (e-mail: {m.p.clayton, lin.wang, a.mcpherson<EMAIL_ADDRESS>

###### Abstract

Microphone array techniques can improve the acoustic sensing performance on drones, compared to the use of a single microphone. However, multichannel sound acquisition systems are not available in current commercial drone platforms. To encourage research in drone audition, we present an embedded sound acquisition and recording system with eight microphones and a multichannel sound recorder mounted on a quadcopter. In addition to recording and storing the sound from multiple microphones simultaneously on local storage, the embedded system can connect wirelessly to a remote terminal to transfer audio files for further processing. This is the first stage towards creating a fully embedded solution for drone audition. We present experimental results obtained by state-of-the-art drone audition algorithms applied to the sound recorded by the embedded system.

###### Index Terms:

Drone audition, microphone array, embedded system

## I Introduction

The use of drones for remote sensing has substantially increased in the past decade, with operation in broadcasting, surveillance, inspection, and search and rescue [1]. Sensing is primarily based on cameras (optical and thermal) and lasers [2, 3, 4, 5, 6], whereas microphones are rarely used because of the inherently challenging sound sensing conditions [7]. When visual data is unreliable due to low light, poor weather conditions or visual obstructions [8], drone audition would greatly benefit the above-mentioned applications. One of the main obstacles when capturing audio on a drone is the strong ego-noise created by the rotating motors, propellers and the airflow during flight. The ego-noise masks the target sound sources and causes poor recording quality. Microphone array techniques can be used to improve drone audition performance through sound enhancement [9, 10, 11, 12, 13, 14] and sound source localization [15, 16, 17, 18, 20]. An important bottleneck for deploying microphone array algorithms on drones is the requirement of a multichannel sound acquisition system that samples the sound from multiple microphones simultaneously and converts it to multichannel digital signals before further processing. The sound acquisition system needs to fly with the drone, which imposes additional constraints on the size and weight of the system. To the best of our knowledge, there is no dedicated multichannel sound acquisition device available in current commercial drone platforms. Researchers have to design and implement their own hardware systems for data collection on drones, and the processing of the data is often done offline after the flight due to limited computational resources onboard. To conduct and encourage research in the field of drone audition, we designed an embedded multichannel sound acquisition system that is suitable for drone audition and can be mounted on a drone for acoustic sensing during flight. The system is designed based on Bela [21], an embedded computing platform dedicated to audio processing, and can accommodate up to eight microphones placed in arbitrary shapes.
The system can record and store the sound locally for on-device processing, and can also transfer the recorded sound file via wireless communication to a remote terminal. In the remainder of the paper, we disclose the technical details of the hardware and software design and development. The paper is organized as follows. Sec. II reviews related works. Sec. III and Sec. IV present the hardware and software design of the embedded system. Sec. V presents real data collection with the hardware and presents baseline processing results with state-of-the-art drone audition algorithms.

TABLE I: Existing multichannel sound acquisition systems on drones. Q - quadcopter; H - hexacopter; O - octocopter

Ref | Number of microphones | Shape of the array | Placement of the array | Audio interface | Drone type | Remark
---|---|---|---|---|---|---
[11] | 6 | T-shape | Side | Zoom H6 | Self-assembled (Q) | Portable recorder
[7] | 8 | Circular | Top | Zoom R24 | 3DR Iris (Q) | Portable recorder
[23] | 6 | Circular (fixed) | Top | ReSpeaker + Raspberry Pi | Self-assembled (O) | Intelligent voice interface
[24] | 7 | Circular (fixed) | Side | UMA-8 Mic array + Raspberry Pi | Self-assembled (Q) | Intelligent voice interface
[25] | 8 | Circular | Below | MiniDSP USBStreamer I2S-to-USB | Matrice 100 (Q) | Sound card
[17] | 8 | Cubic | Below | 8SoundsUSB | MK-Quadro (Q) | Sound card
[28] | 8 | Circular | Top, below, side | 8SoundsUSB | Matrice 100 (Q) | Sound card
[19] | 12 | Spherical | Side | RASP-ZX | Surveyor MS-06LA (H) | Sound card
[26] | 8 | Circular | Side | RASP-24 | Parrot AR Drone (Q) | Sound card
[18] | 16 | Octagon | Side | RASP-ZX | Surveyor MS-06LA (H) | Sound card
Proposed | 8 | Circular | Top | Bela | Matrice 100 (Q) | Sound card

## II Related work

Three types of audio hardware are employed in the literature: portable multichannel sound recorders, intelligent multichannel voice interfaces, and multichannel sound cards.

#### II-1 Portable multichannel sound recorder

This is the easiest way to capture sound from drones, as there is no requirement for any configuration of the system, e.g. Zoom H6 [11] and Zoom R24 [7, 27]. The hardware supports arbitrary array topologies. One drawback is that the hardware can only record and does not support sound processing. Another drawback is that the hardware, e.g. the Zoom R24, is usually too heavy a payload for the drone to fly.

#### II-2 Intelligent multichannel voice interface

This type of hardware integrates the microphone array and sound processing into a compact IC board, e.g. ReSpeaker [23] and UMA-8 [24]. This hardware usually requires an additional controller, e.g. a Raspberry Pi, for sound acquisition and sound processing. It is also usually easy to use and configure for audio purposes. One of the main advantages is that the hardware is very compact and light-weight, and is suitable to fly with the drone. The drawback is that the topology of the array is fixed, which limits the performance and flexibility of microphone array algorithms.

#### II-3 Multichannel sound card

This is the most popular approach for sound recording on drones, using e.g. the RASP series [19, 26, 18], 8SoundsUSB [17, 28], or USBStreamer [25]. This hardware supports arbitrary array topologies along with sound acquisition and sound processing. The main drawback is that the user requires knowledge of the hardware circuit design. This particular hardware also requires an operating system to control sound recording and processing;
for example, the RASP series is used in combination with the HARK system [29], and a good understanding of the back-end driver is necessary. The lack of related resources is also intimidating for algorithm designers.

## III Hardware design

Fig. 1 and Fig. 2 illustrate the architecture and the assembled hardware of the multichannel sound acquisition system, respectively. The system mainly consists of three parts: the microphone array, the drone, and the hardware tray containing the Bela sound acquisition system and the cables. Table II lists the components used by the system. Fig. 3 illustrates the Bela hardware system assembly and peripheral connections.

Figure 1: Architecture of the multichannel sound acquisition system.

Figure 2: Assembled multichannel sound acquisition system. (a) Front view; (b) Side view; (c) Top view; (d) Microphone array.

TABLE II: Components used in the hardware system.

Component | Type | Functionality
---|---|---
Drone | Matrice 100 | /
Microphones (8) | Lapel microphones | /
Array frame | 3D printing | Holding microphones
Hardware tray | 3D printing | Holding hardware and cables
Bela | BeagleBone Black | 1 GHz ARM Cortex-A8 processor
CTAG Beast (2) | / | Multichannel audio acquisition
CTAG Molex breakout board (2) | / | Audio inputs
Molex to 3.5mm adapter cable (4) | / | Connects microphones to Bela
Mono to stereo adapter cable (4) | / | Split stereo signal to mono signal
USB LiPo battery | 5 V, 2 A | Provide power to Bela
CTAG BEAST USB storage | / | Save and store recorded audio files locally
WiFi dongle | / | Wireless connection to the Bela IDE

### III-A Microphone array and drone

We use a circular microphone array consisting of eight Boya BY-M1 lapel microphones, each powered by an LR44 (1.5 V) battery. A balanced audio signal is provided from the microphones. The diameter of the array is 16.5 cm. The microphone array frame is 3D printed and constructed from acrylonitrile butadiene styrene (ABS). The array is mounted on top of the drone to avoid the air flow from the rotating propellers blowing downward [30]. The vertical distance from the array to the drone body is 18 cm. For the drone, we use a DJI Matrice 100, which has a payload capacity of 1 kg.

Figure 3: Bela multichannel audio hardware system assembly and peripherals. The core processing part is highlighted in the red box.

### III-B Bela-based sound acquisition system

The sound acquisition system consists of three units: the core processing unit, the storage and transmission unit, and the hardware tray. Fig. 3 illustrates the hardware connections.

#### III-B1 Core processing unit

The core processing unit consists of one BeagleBone device flashed with the latest Bela software. To access multichannel audio, Bela uses a customized expansion board called CTAG BEAST, featuring an audio codec with 4 audio input and 8 audio output channels. One CTAG BEAST consists of 2 x CTAG FACE capes pre-configured for use as a BEAST [31], two CTAG Molex breakout boards, and one external LiPo power battery. Bela is an embedded audio programming and processing platform developed at Queen Mary University of London [21]; its compact size, light weight, low latency and multichannel sound acquisition make it suitable for sound processing on drones [22]. Bela also comes with a user-friendly browser-based Integrated Development Environment (IDE), which provides easy access for editing, building and managing the system. For this reason, we decided to develop the multichannel sound acquisition system based on the Bela device.
To our knowledge, this is the first time the Bela system has been applied to a robotic audition platform. Bela is a dedicated audio processing platform based on the BeagleBone Black (BBB) single-board computer, which features a 1 GHz ARM Cortex-A8 processor, two PRUs (Programmable Realtime Units), 512 MB of RAM, and a diverse range of on-board peripherals. Bela is used for controlling the sound acquisition and audio processing. Bela is externally powered by a LiPo USB battery that operates at 5 V and 2 A for stability and for powering the USB peripherals. The Bela configuration only requires 5 V / 300-400 mA of power for operation. The audio codec operates at a 48 kHz sampling rate with 16-bit analogue-to-digital (ADC) and digital-to-analogue (DAC) conversion. To accommodate 8 microphone inputs, two CTAG FACE capes are stacked on top of each other and connected with Bela via the onboard metal contacts.

#### III-B2 Storage and wireless unit

An external USB hub is connected to the USB socket of the BeagleBone device. The hub accommodates a USB storage stick, which stores the recordings locally, and a USB WiFi dongle, eliminating the need for a hard-wired connection to the system IDE and enabling the recorded audio to be transferred to a remote processing terminal.

#### III-B3 Hardware tray

A hardware tray is designed to accommodate the Bela system and the cables. The tray contains a Bela enclosure (made from ABS) and a shock case (made from thermoplastic polyurethane - TPU) to help protect the hardware from impacts in the event of a crash. The tray is produced with 3D printing.

## IV Software design

The software design has three objectives: to run the code on a stand-alone device; to record the sound locally to the USB storage; and to transfer the sound via WiFi to a remote terminal. All the objectives are achieved with the assistance of the Bela Integrated Development Environment (IDE).

### IV-A IDE for stand-alone processing

Figure 4: Interacting with Bela from a local computer via a wireless network.

Figure 5: Bela Integrated Development Environment.

The Bela IDE (Fig. 5) is a browser-based integrated development environment with features that allow for editing, building and managing projects easily from a ground station (remote terminal) via a self-organized wireless network. The IDE software is pre-installed on the Bela device, along with an operating system. Following the steps in the tutorial (www.eecs.qmul.ac.uk/$\thicksim$linwang/download/bela_documentation.pdf), we set up a self-organized wireless network through a WiFi dongle mounted on the Bela device. Upon system boot, Bela starts a NodeJS server that allows connection to its system from a ground station via the wireless network. The WiFi is set up as a peer-to-peer connection to ensure that the board acts as a dynamic host configuration protocol (DHCP) server. To connect to the Bela device from the ground station, we first need to select the WiFi network hosted by the Bela system. After connection, the IDE can be loaded by entering the IP address of the host device in the web browser. The IDE interface (Fig. 5) will appear automatically in the web browser of the ground station. After compiling, building and running a project from the IDE, the project can be set to run on boot in the IDE Settings tab by selecting the desired project in the drop-down menu. The program will then operate on Bela without connecting to the ground station as long as external power is provided.
### IV-B Sound recording

The code that enables Bela to function as a multichannel recording device is written in C++. This allows for quick access in the event that the system requires any modifications, creating a flexible system. The source code for sound recording is given in the Appendix, with the processing flow shown in Fig. 6. In brief, after importing the required libraries and configuring the global variables and file path, the program sets up the recording task to capture the multichannel audio data, writes the stream to the audio buffer (memory block), and stores the data in the pre-defined file path. Once the recording is finished, a clean-up function finalizes the writing process and closes the file. The IDE enables the user to start/stop recording, change settings and download audio files directly from the system, among other features. After building and running the project, the recording starts by writing the digital audio to the specified system path. Pressing the stop icon in the IDE stops the recording process. The audio data is continuously written to the local storage during recording. Once the stop button is pressed, the .wav file is finalized and closed.

Figure 6: Processing flow for sound recording with Bela.

### IV-C WiFi Network Connection

The WiFi connection enables the user to access the Bela system through the IDE without a hard-wired connection. Instead of recording the audio to the USB storage, we can alter the target file path to the default RAM memory of the Bela device (see Appendix) in order to have the file appear and update during the recording process within the resources section of the project explorer tab in the IDE. The network connection is continuous, allowing the user to change different functions in the IDE. The current WiFi signal achieves an operational range of 20 metres between the ground station and the drone. When the network connection is lost momentarily, the IDE user interface stops updating the status of the running project and, depending on the file-path selection, the file size of the current recording. The IDE recovers after coming back into WiFi signal range. When the wireless network is re-established, the recording has in fact continued running on the system, and the IDE user interface resumes updating the recording progress of the file. The WiFi connection is not required to conduct the recording itself, but only to monitor its progress.

## V Experiment

### V-A Setup

To verify the validity of the developed hardware system, we conduct in-flight testing and recording. We record the ego-noise and the speech separately. When recording the ego-noise, the altitude of the drone during flight is maintained at about 2 meters above the ground via the flight controller (Fig. 7). We record two types of ego-noise: drone hovering and drone moving. In the former case, the drone hovers in the air using the GPS-stabilised mode with additional manual input (correcting small drift) to allow the drone to remain reasonably stable throughout the recording. In the latter case, the drone moves in the air at a speed of around 1 meter/second, with random rotation and tilting during flight. When recording the speech-only data, the drone is muted on the ground and a loudspeaker plays sound at a distance of 2 meters. The original sampling rate is 48 kHz. The audio is downsampled to 8 kHz before processing. All the analysis is completed offline and not on the Bela system.
The sound recordings are available online (www.eecs.qmul.ac.uk/$\thicksim$linwang/bela.html).

Figure 7: A drone with microphone array hovering in the air during recording.

### V-B Ego-noise analysis

Fig. 8(a) depicts the time-domain waveforms of the ego-noise recorded in the hovering and the moving status. Fig. 8(c) plots the time-frequency domain spectrogram, which is computed with a moving window of 128 ms and half overlap. Fig. 8(b) plots the power at each time frame. Fig. 8(d) plots the frequency-domain spectrum at the 25th second and the 35th second of the two ego-noises, respectively. From Fig. 8(b), the power of the ego-noise does not differ significantly between the hovering status and the moving status. The mean and standard deviation of the power across time frames (10 s - 40 s) are -29.9 dB and 0.41 dB, respectively, in the hovering status. The mean and standard deviation of the power across time frames (10 s - 60 s) are -29.6 dB and 0.42 dB, respectively, in the moving status. For the moving status, we observe a sudden rise of the power at 45 s, which is possibly due to a rotation operation of the drone. From Fig. 8(c), it can be observed that the ego-noise consists of multiple harmonics. Since the four motors might operate at slightly different rotating speeds, the harmonic ego-noise presents several pitches, which can be verified from Fig. 8(d). In the hovering status, the pitch of the ego-noise remains stable. In the moving status, the pitch of the ego-noise varies with time, depending on the flight status of the drone. The recording is made in an outdoor environment with a light breeze present. However, from the spectrogram of the recording we do not observe an evident influence of the wind at low frequencies. This is possibly due to the windshield worn by each microphone and also the placement of the microphones on top of the drone.

Figure 8: Visualization of the ego-noise when the drone is hovering and moving. (a) Time-domain waveform; (b) Power plot; (c) Time-frequency spectrogram; (d) Frequency-domain plot.

Figure 9: Benchmark performance achieved by two spatial filters at various input SNRs.

Figure 10: Processing results (beamformer) for input SNR -20 dB. The output SNR is 3.1 dB. (a) Clean speech; (b) Noisy signal before processing; (c) Noisy signal after processing.

### V-C Processing results

We synthesize a noisy signal at the microphones by adding the ego-noise (hovering status) and the speech at different input SNRs, which vary from -35 dB to 0 dB with an interval of 5 dB. The testing signal is 25 seconds long. We employ a block-wise processing strategy, using a non-overlapping sliding block of 4 seconds. For simplicity, we only verify the performance of two benchmark spatial filters enhancing the target sound from the ego-noise. The first spatial filter is a beamformer based on multichannel Wiener filtering [14], which computes the correlation matrices of the target sound and the noise separately, assuming the speech-only and noise-only signals are available. The second spatial filter is based on blind source separation (BSS) [13], assuming the permutation ambiguities can be perfectly solved by referencing the speech-only signals. The speech enhancement performance is evaluated with the SNR measure, which is defined, given speech $s(n)$ and noise $v(n)$, as [32]

$\text{SNR}=10\log_{10}\frac{\sum_{n}{s^{2}(n)}}{\sum_{n}{v^{2}(n)}}$ (1)

We average the output SNR across all the processing blocks.
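Before presenting the results, here is a minimal sketch of Eq. (1) in code (not part of the paper's toolchain; the buffer contents are placeholders), computing the SNR in dB from separate speech and noise sample buffers:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Eq. (1): SNR in dB given separate speech s(n) and noise v(n) buffers.
double snrDb(const std::vector<float>& speech, const std::vector<float>& noise) {
    double es = 0.0, ev = 0.0;
    for (float s : speech) es += static_cast<double>(s) * s;
    for (float v : noise)  ev += static_cast<double>(v) * v;
    return 10.0 * std::log10(es / ev);
}

int main() {
    // Placeholder buffers; in practice these would be the speech and
    // residual-noise components of one 4-second processing block.
    std::vector<float> s{0.5f, -0.4f, 0.3f}, v{0.05f, -0.04f, 0.03f};
    std::printf("SNR = %.1f dB\n", snrDb(s, v));  // prints 20.0 dB for this toy data
    return 0;
}
```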
Fig. 9 depicts the output SNR achieved by the two spatial filters at different input SNRs. BSS performs slightly better than the beamformer when the input SNR is lower than -15 dB, while the beamformer performs better at higher input SNRs. On average, the two spatial filters improve the SNR by about 20 dB. Fig. 10 illustrates exemplar processing results at input SNR -20 dB with the beamformer. The output SNR is 3.1 dB. Fig. 10(b) shows that the speech signal is completely buried in the ego-noise in the time-domain waveform and is not distinguishable from the noise in the time-frequency spectrogram. Fig. 10(c) shows the enhanced speech after processing, where the speech is much better observed in the time-frequency spectrogram. It should be noted that the two spatial filters are estimated under ideal assumptions (i.e., the correlation matrices of the target and the noise are known) and thus set a benchmark for the performance of spatial filtering. In practice, the correlation matrices of the target and noise have to be estimated from the noisy data, which leads to a performance drop in low-SNR scenarios [14]. A comprehensive evaluation is left for future work.

## VI Conclusion

We present an embedded multichannel sound acquisition system that can fly with the drone. The system can accommodate up to 8 microphones placed in an arbitrary shape, record the sound locally, and transfer the recorded file to a remote terminal via a self-organized wireless network. Experimental results with recordings made with this hardware verify its validity. This is the first stage towards creating a fully embedded solution for drone audition. Future work will be to conduct a comprehensive evaluation of state-of-the-art algorithms for ego-noise reduction and to optimize the code for real-time processing on Bela, which is able to process audio at very low latency ($<$1 millisecond) [21]. The size and weight of the system can be further reduced by designing the microphone array circuit manually.

## Appendix: Sound recording

Figure 11: Source code in render.cpp

Fig. 11 lists the C++ source code file that is used for multichannel sound recording with the Bela device. There are several crucial configurations for sound recording: the file path, the number of channels and the sampling rate. The file path can be configured by setting the global variable const char* path in the source code, e.g. the statement const char* path = "/mnt/usb/audiofilename" sets the USB storage as the file path. The system automatically recognizes the number of active audio inputs, so the number of channels does not need to be configured. The sampling rate can be configured by setting the global variable gAudioSampleRate. The ADC and DAC gain is adjustable within the IDE settings tab. Once the recording is finished, the file on the USB storage can be downloaded through the IDE after copying it to the project folder, e.g. using the command cp /mnt/usb/audiofile.wav /root/Bela/projects/projectname/. Alternatively, we can remove the USB storage from Bela and insert it into a computer for data transfer. A breakdown and interpretation of the source code is given below.

⬇
#include <Bela.h>
#include <libraries/Pipe/Pipe.h>
#include <libraries/sndfile/sndfile.h>
#include <libraries/WriteFile/WriteFile.h>

There are four main library imports that are used in the code. Bela.h is the central control code for hard real-time audio on BeagleBone Black using PRU and Xenomai Linux extensions.
The Pipe library enables the use of a bi-directional pipe that allows data to be exchanged between the realtime and non-realtime threads. The WriteFile library is imported to enable the use of the generateUniqueFilename function, which returns a unique filename by appending a number at the end of the original filename. This is important in order to avoid overwriting existing recordings. The sndfile library allows the use of the libsndfile API (application programming interface), which is designed to allow the reading and writing of many different sampled sound file formats.

⬇
const char* path = "/mnt/usb/audioCh.wav";
SNDFILE * outfile2;
char originalFilename[] = "/mnt/usb/audioCh.wav";
char* uniqueFilename = WriteFile::generateUniqueFilename(originalFilename);

The file path is created first; in this case the USB storage is used as the main storage. outfile2 is the name of the reference for the SNDFILE pointer. The path to the original filename is assigned, and then the generateUniqueFilename function is called on the original filename to return a unique filename. Instead of recording the audio to the USB storage, the file path can be altered to const char* path = "./audiofilename" in order to have the file appear and update during the recording process within the resources section of the project explorer tab in the IDE.

⬇
AuxiliaryTask gFillBufferTask;
unsigned int gAudioFrames;
unsigned int gAudioInChannels;
float gAudioSampleRate;
Pipe gPipe;

The global variables are established for use later in the code. gFillBufferTask is an AuxiliaryTask variable that is used to write to the audio buffer. gPipe, gAudioSampleRate, gAudioInChannels and gAudioFrames are global variables that utilise the established behaviours present in their corresponding library class files.

⬇
void openFile() {
    SF_INFO sfinfo;
    sfinfo.channels = gAudioInChannels;
    sfinfo.samplerate = gAudioSampleRate;
    sfinfo.format = SF_FORMAT_WAV | SF_FORMAT_PCM_16;
    outfile2 = sf_open(uniqueFilename, SFM_WRITE, &sfinfo);
}

The openFile function fills in the SF_INFO structure with the specified file format, sample rate and number of channels. The sf_open function opens the sound file at the specified path, using the write-only mode SFM_WRITE and the sfinfo structure for passing data between the calling function and the library when opening the file for writing.

⬇
void closeFile() {
    sf_write_sync(outfile2);
    sf_close(outfile2);
    printf(".wav file written and closed\n");
}

The closeFile function closes and writes the file to disk. sf_write_sync allows the file, if it is opened using SFM_WRITE, to call the operating system's function to force the writing of all file cache buffers to disk. sf_close closes the file, deallocates its internal buffers and returns 0 on success or an error value otherwise.

⬇
void writeBuffer(void*) {
    unsigned int numItems = gAudioFrames * gAudioInChannels;
    float buf[numItems];
    int ret;
    while((ret = gPipe.readNonRt(buf, numItems)) > 0) {
        sf_write_float(outfile2, &buf[0], ret);
    }
}

The writeBuffer function drains the pipe and writes the audio data to the file. It first calculates the number of items by multiplying the number of audio frames by the number of audio input channels. A buffer array holds that many samples, and an integer variable holds the number of items actually read. The while loop keeps running as long as the pipe returns data; readNonRt reads data on the non-realtime side.
The sf_write_float function writes the data in the array to the file pointed to by the file pointer. For item-count functions, the items parameter specifies the size of the array and must be an integer product of the number of channels, or an error will occur.

⬇
bool setup(BelaContext* context, void* arg) {
    gAudioSampleRate = context->audioSampleRate;
    gAudioFrames = context->audioFrames;
    gAudioInChannels = context->audioInChannels;
    gPipe.setup("sndfile-write", 65536, false, false);
    openFile();
    if((gFillBufferTask = Bela_createAuxiliaryTask(&writeBuffer, 90, "writeBuffer")) == 0) {
        return false;
    }
    return true;
}

The setup function is a user-defined initialisation function which runs before audio rendering begins. This function runs once at the beginning of the program, after most of the system initialisation has finished but before audio rendering starts. It is used to prepare any memory or resources that will be needed in render. The audio sample rate, frames and audio input channels are all set up using the Bela context structure. The pipe is set up with a specified size and flags determining whether the reads on the realtime and non-realtime sides should be blocking. The openFile function is called to open the file for writing the audio data. The auxiliary task is then created with the writeBuffer parameters.

⬇
void render(BelaContext* context, void* arg) {
    gPipe.writeRt(context->audioIn, context->audioFrames * context->audioInChannels);
    Bela_scheduleAuxiliaryTask(gFillBufferTask);
}

The render function is a user-defined callback function to process audio and sensor data. This function is called regularly by the system every time there is a new block of audio and/or sensor data to process. The writeRt function writes data into the pipe from the realtime side. context->audioIn is the float* that points to all the input samples, stored as interleaved channels; for example, with 4 frames and 2 channels, audioIn would hold 4 x 2 = 8 interleaved audio input samples. The previously created auxiliary task is then scheduled to run.

⬇
void cleanup(BelaContext* context, void* arg) {
    closeFile();
    free(uniqueFilename);
}

The cleanup function runs when the program finishes to free up memory. This function is called by the system once after audio rendering has finished, before the program quits. It is used to release any memory allocated in setup and to perform any other required cleanup. If no initialisation is performed in setup, then this function will usually be empty. Here, the file is closed and the memory block allocated by generateUniqueFilename is deallocated.

## References

* [1] D. Floreano and R. J. Wood, “Science, technology and the future of small autonomous drones,” Nature, vol. 521, no. 7553, pp. 460-466, 2015.
* [2] G. Parascandolo, H. Huttunen, and T. Virtanen, “Recurrent neural networks for polyphonic sound event detection in real life recordings,” in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., Shanghai, China, 2016, pp. 6440-6444.
* [3] S. Li and D. Yeung, “Visual object tracking for unmanned aerial vehicles: A benchmark and new motion models,” in Proc. Thirty-First AAAI Conf. Artificial Intelligence, San Francisco, USA, 2017, pp. 4140-4146.
* [4] P. Misra, A. A. Kumar, P. Mohapatra, and P. Balamuralidhar, “Aerial drones with location-sensitive ears,” IEEE Commun. Mag., vol. 56, no. 7, pp. 154-160, Jul. 2018.
* [5] L. Wang, R. Sanchez-Matilla, and A. Cavallaro, “Tracking a moving sound source from a multi-rotor drone,” in Proc. IEEE/RSJ Int. Conf. Intell.
# Robust Energy-Efficient Resource Management, SIC Ordering, and Beamforming Design for MC MISO-NOMA Enabled 6G

Abolfazl Zakeri, Student Member, IEEE, Ata Khalili, Member, IEEE, Mohammad Reza Javan, Senior Member, IEEE, Nader Mokari, Senior Member, IEEE, and Eduard A. Jorswieck, Fellow, IEEE. A. Zakeri, A. Khalili, and N. Mokari are with the Department of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran (e-mails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, and [email protected]). M. R. Javan is with the Department of Electrical Engineering, Shahrood University of Technology, Iran (e-mail: [email protected]). Eduard A. Jorswieck is with the Institute for Communications Technology, TU Braunschweig, Germany (e-mail: <EMAIL_ADDRESS>).

###### Abstract

This paper studies a novel approach for successive interference cancellation (SIC) ordering and beamforming in a multiple-antenna non-orthogonal multiple access (NOMA) network with a multi-carrier, multi-user setup. To this end, we formulate a joint beamforming design, subcarrier allocation, user association, and SIC ordering problem to maximize the worst-case energy efficiency (EE). The formulated problem is a non-convex mixed-integer non-linear program (MINLP), which is generally difficult to solve. To handle it, we first adopt a linearization technique and relax the integer variables, and then employ the Dinkelbach algorithm to convert the problem into a more mathematically tractable form. The resulting non-convex optimization problem is transformed into an equivalent rank-constrained semidefinite program (SDP) and solved via SDP relaxation and sequential fractional programming. Furthermore, to strike a balance between complexity and performance, a low-complexity approach based on alternating optimization is adopted. Numerical results unveil that the proposed SIC ordering method outperforms the conventional schemes addressed in the literature.

###### Index Terms:

Multi-carrier (MC), multiple-input single-output (MISO), non-orthogonal multiple access (NOMA), successive interference cancellation (SIC), beamforming, energy efficiency (EE).

## I Introduction

### I-A Motivations and State of the Art

In recent decades, wireless communications have attracted a growing number of customers demanding high-quality services and ubiquitous connectivity. In order to fulfill these demands, the next generation of wireless networks, namely sixth-generation (6G) networks (the fifth generation has recently been deployed, and its evolution towards 6G has already started [2]), should be redesigned to exploit advanced technologies [1]. To this end, the network must be designed in such a way that it can dynamically change its architecture and communication technologies. In such a flexible architecture, a significant amount of signaling and computational resources is needed to optimally manage the network resources and enable flexible resource sharing. Recently, various radio access network (RAN) architectures, such as distributed RAN and centralized RAN (C-RAN), have been developed to provide efficient computational resource sharing and resource utilization [3]. To enable sustainable 6G networks, new emerging techniques such as new multiple access (MA) schemes and multiple-antenna systems (MAS) are needed to improve the network performance (e.g., the energy efficiency (EE)) [5].
In this regard, non-orthogonal multiple access (NOMA) and multiple-input single-output (MISO) transmission are promising approaches which, compared to orthogonal multiple access (OMA) and single-antenna systems, can significantly improve the EE and support massive-connectivity applications such as the Internet of Things (IoT) [6, 7, 8, 9]. In fact, the gains in EE, fairness, and resource-allocation flexibility have made NOMA a main trend for wireless networks beyond the current generation. The authors in [9] present the basic principles of NOMA and provide a systematic comparison among the different NOMA techniques from the viewpoint of EE and receiver complexity. In particular, power-domain NOMA (PD-NOMA) multiplexes users in the power domain and relies on successive interference cancellation (SIC) at the receivers to remove the undesired multiuser interference [10]. In NOMA, the SIC ordering is one of the key challenges, and handling the NOMA interference properly is critical for the data-transmission performance [11, 12]. However, the SIC ordering problem has not been addressed well, and there are open problems that need to be treated properly [11, 12]. In fact, in most of the works, the SIC ordering is based on the channel gains, which is neither practical nor optimal due to the necessity of full channel state information (CSI) [25, 9, 14, 20, 18, 13, 17, 16]. Besides, the CSI is subject to uncertainty, which makes channel-gain-based ordering even less applicable to multiple-antenna systems. This paper proposes a worst-case SIC ordering, resource-allocation policy, and beamforming design for multi-carrier (MC) MISO-NOMA networks. We investigate how much performance gain optimized SIC ordering yields in MAS compared to OMA and to traditional methods in which the SIC ordering is based on the channel gains.

### I-B Related Works

In NOMA, the efficient allocation of scarce resources and the SIC ordering have become challenging necessities for improving user satisfaction [11]. There are several attempts to find proper resource-allocation strategies which improve the overall performance of such networks [15, 16, 14, 18, 13, 17]. For instance, the problem of power allocation and precoding design is studied in [13] for single-carrier multiple-input multiple-output (MIMO) NOMA systems. The authors in [14] propose an optimal resource allocation to maximize the system throughput for NOMA and full-duplex (FD) systems. However, the base station (BS) is equipped with a single antenna, which cannot fully exploit the degrees of freedom of the network. The works in [20, 22, 19, 32] consider beamforming design for MISO-NOMA systems to optimize the performance and cost of the system. In particular, the authors in [19] propose a robust beamforming design for a MISO-NOMA system to maximize the minimum data rate. In [20], beamforming design and subcarrier allocation for maximizing the total data rate are proposed, where optimal and sub-optimal solutions are provided. Beamforming designs for maximizing the minimum EE and proportional fairness are developed in [22] to strike a balance between the EE of the system and the fairness among users. Most of the previous works consider a fixed SIC ordering, in which the decoding order at each receiver is determined according to the channel gains [14, 20, 18, 13, 17, 16]. In SIC, the users are ordered, and each user can remove the interference from the users determined by the ordering scheme.
Although most of the works on SIC ordering sort the users based on their channels, this is not a practical scenario and cannot be guaranteed in general. Also, it should be noted that sorting users for SIC based on the channel gains is neither an optimal nor a practical scheme, especially in MAS, due to the unavailability of full CSI [20]. To circumvent this problem, the SIC ordering should be based on the network conditions, the channel gains, and the available resources. The authors in [21] address this problem for a single-antenna BS under perfect CSI, assumptions that are neither practical nor appropriate for future networks. Besides, EE is an important metric for wireless networks, especially for enabling green communication. New communication technologies are proposed to improve the system EE. In particular, various techniques are proposed which aim to enhance the network throughput while consuming less energy and without sacrificing the quality of service (QoS). At the same time, EE maximization problems are indispensable in NOMA systems to strike a good throughput-power tradeoff and improve the system performance; indeed, EE is regarded as one of the key performance metrics in future wireless networks. However, relatively few existing works in the literature consider the EE. For instance, in [23], EE maximization is studied to obtain an optimal power allocation based on the non-linear fractional programming method. Furthermore, in [24], a joint subchannel assignment and power allocation is investigated to maximize the EE in NOMA networks. However, in real scenarios, perfect CSI is not a valid assumption due to issues such as quantization errors, channel estimation errors, and hardware limitations. In this regard, the works in [19, 25, 27, 26] address robust solutions for imperfect CSI. In particular, in [19, 25, 27], robust designs for MISO-NOMA systems are developed based on bounded channel uncertainties. Joint user scheduling and power allocation are explored in [26] under imperfect CSI. User association in multi-cell NOMA systems is also challenging. Specifically, in addition to the co-channel NOMA interference, the inter-cell interference also needs to be taken into consideration [28, 29]. Nonetheless, to the best of the authors' knowledge, the problem of beamforming design and SIC ordering in a MC MISO-NOMA enabled C-RAN network under imperfect CSI has not been investigated yet. In [14, 21, 13, 18, 17, 16], the BS is equipped with a single antenna and perfect CSI is assumed. The works in [25, 27, 19] consider robust beamforming design with a fixed SIC ordering. Furthermore, the authors in [28, 29] consider user association for single-antenna BSs while the SIC ordering is based on the channel gains. In addition, in [21], the SIC ordering problem for a single-antenna BS under perfect CSI is considered. Consequently, the user association policy and the SIC ordering in a MISO-NOMA enabled C-RAN network with imperfect CSI are still open problems which have not been addressed yet.

### I-C Contributions and Research Outcomes

In this paper, we aim to bridge the above-mentioned knowledge gap. In particular, we propose a joint beamforming design, subcarrier allocation, user association, and SIC ordering algorithm which maximizes the EE of the network under imperfect CSI. To this end, we formulate the problem of beamforming design and SIC ordering to maximize the worst-case system EE.
In our method, the SIC ordering is treated as an optimization variable, whereas in most existing works, the SIC ordering is fixed and depends on the channel gains. The optimization problem is a non-convex mixed-integer non-linear program which is very difficult to solve. To handle it, we employ the majorization-minimization (MM) approach, an abstract Lagrangian method, the semi-definite relaxation (SDR) method, and sequential fractional programming to deal with the beamforming design and the integer variables. Our main contributions are summarized as follows: * • We propose a novel SIC ordering method for the downlink of a MC MISO-NOMA network. To this end, we formulate a novel optimization problem to maximize the EE by jointly performing the subcarrier allocation, beamforming design, user association, and SIC ordering. In particular, we formulate a new problem to investigate how to order users so as to apply successful SIC based on the available resources. Also, we derive the worst-case SIC ordering condition as an optimization constraint and then tackle its non-convexity. * • We study the practical case of imperfect CSI in C-RAN networks. In doing so, we consider the worst-case EE to provide a robust resource allocation algorithm. * • We propose a solution based on rank-constrained semidefinite programming (SDP) relaxation and sequential fractional programming. In particular, we adopt the MM approach and a penalty factor to make the problem mathematically tractable, and then we apply the Dinkelbach algorithm. Moreover, we provide a low-complexity iterative algorithm in which the scheduling variable, i.e., user association and subcarrier assignment, is obtained through a matching algorithm. * • Numerical results reveal that the proposed worst-case EE maximization and SIC ordering algorithm can alleviate the negative effects of imperfect CSI, imperfect SIC, and limited power and spectrum resources on the network performance. Also, the results showcase the superiority of the proposed algorithm compared to other conventional schemes. The rest of this paper is organized as follows. The system model and problem formulation are discussed in Sec. II. The solution algorithm and complexity analysis are presented in Sec. III. Finally, the simulation analysis and conclusions are provided in Secs. IV and V, respectively. Notations: Vector and matrix variables are indicated by bold lower-case and upper-case letters, respectively. $|.|$ indicates the absolute value, $\|.\|$ or $\|.\|_{2}$ denotes the Euclidean norm ($\textit{l}_{2}$ norm), and $\mathbb{A}^{\dagger}$ and $\mathbb{A}^{T}$ indicate the conjugate transpose and transpose of matrix $\mathbb{A}$, respectively. Also, $\text{Tr}[\mathbb{A}]$ denotes the trace of matrix $\mathbb{A}$ and $\mathbb{I}_{M}$ denotes the $M\times M$ identity matrix. $a^{*}$ denotes the optimal value of variable $a$. $\nabla_{\mathbb{x}}g$ denotes the gradient vector of function $g(\mathbb{x})$. $\mathcal{S}$ denotes the set $\{1,2,\dots,S\}$ and $|\mathcal{S}|=S$ is the cardinality of set $\mathcal{S}$. $\mathcal{S}\backslash\{s\}$ discards the element $s$ from the set $\mathcal{S}$. $\mathcal{C}^{M\times 1}$ denotes the set of $M$-by-$1$ dimensional complex vectors, and the operator $\mathbb{E}\{.\}$ denotes statistical expectation.
## II System Model and Problem Formulation

### II-A System Model Descriptions and Related Constraints

In this paper, we consider a downlink scenario for a C-RAN consisting of a set of $F$ active antenna units (AAUs) indexed by $f$, whose set is denoted by $\mathcal{F}=\{1,\dots,{F}\}$, where $f=1$ is a high-power AAU, and a baseband unit (BBU). Let $P_{\text{max}}^{f}$ be the transmit power budget of AAU $f$. Each AAU is equipped with $M$ antennas, uses a set $\mathcal{N}$ of $N$ shared subcarriers, is connected to the BBU with a limited-bandwidth fronthaul/metro-edge link, and utilizes PD-NOMA to transmit data to single-antenna end-users. In fact, we consider a MC MISO-NOMA communication network setup. We denote the set of all users as $\mathcal{K}=\{1,\dots,K\}$; the users are uniformly distributed inside the coverage/service area of the network [29, 32]. The considered system model is depicted in Fig. 1. For example, if user $2$ performs SIC on users $1$ and $3$ over subcarrier $2$, the SIC ordering variables (explained in Sec. II-A1) take the values $\xi^{2}_{1,2}=1$, $\xi^{2}_{3,2}=1$, $\xi^{3}_{2,2}=0$, and $\xi^{1}_{2,2}=0$. The definitions of the main notations are listed in Table I.

Figure 1: Schematic presentation of our designed MISO NOMA-enabled C-RAN and an example of the proposed SIC ordering algorithm.

In addition, $s_{k,n}^{f}$ is the signal of user $k$ over subcarrier $n$ from AAU $f$ (we assume its power is normalized to one, i.e., $\mathbb{E}\big\{|s_{k,n}^{f}|^{2}\big\}=1$), and $\mathbb{w}_{k,n,f}=[w_{k,n,f}^{m}]\in\mathcal{C}^{M\times 1}$ is the vector of beamforming variables that is designed by AAU $f$ for user $k$ over subcarrier $n$.

Table I: Table of the main notations

Notation | Description
---|---
**Notations/parameters** |
$\mathcal{F}/F/f$ | Set/number/index of all AAUs in the network
$\mathcal{K}/K/k$ | Set/number/index of all users in the network
$\mathcal{N}/N/n$ | Set/number/index of the shared subcarriers in each AAU
${M}/m$ | Number/index of antennas in each AAU
$P_{\text{max}}^{f}$ | Maximum allowable transmit power of AAU $f$
$P_{\text{Total}}$ | Total consumed power
$R_{\max}^{f}$ | Maximum capacity of the fronthaul link of AAU $f$
$\mathbb{h}_{k,n,f}$ | Real channel coefficient between user $k$ and AAU $f$ on subcarrier $n$
$\tilde{\mathbb{h}}_{k,n,f}$ | Estimated channel coefficient between user $k$ and AAU $f$ on subcarrier $n$
$\bm{\epsilon}_{k,n,f}$ | Channel estimation error for user $k$ and AAU $f$ on subcarrier $n$
$\delta_{k,n}^{f}$ | Channel uncertainty radius for user $k$ and AAU $f$ on subcarrier $n$
$L_{n}^{f}$ | Maximum reuse number of each subcarrier $n$ at AAU $f$
${\sigma_{k,n,f}^{2}}$ | Variance of the noise at user $k$ on subcarrier $n$ from AAU $f$
$s_{k,n}^{f}$ | Transmit signal of user $k$ on subcarrier $n$ from AAU $f$
$r_{k,n}^{f}$ | Achieved rate of user $k$ on subcarrier $n$ from AAU $f$
$\varphi_{f}$ | Preprocessing weight of the fronthaul link of AAU $f$
$\beta$ | Drain efficiency of the power amplifier
**Optimization variables** |
${\rho_{k,n}^{f}}$ | Binary subcarrier assignment variable: $\rho_{k,n}^{f}=1$ means that user $k$ is scheduled to AAU $f$ on subcarrier $n$; otherwise, it is 0
$\xi_{i,n}^{k}$ | Binary SIC ordering variable: $\xi_{i,n}^{k}=1$ if user $k$ decodes the signal of user $i$ on subcarrier $n$; else $\xi_{i,n}^{k}=0$
${\mathbb{w}_{k,n,f}}$ | Beamforming vector from AAU $f$ to user $k$ on subcarrier $n$
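To make the ordering variables concrete, the following minimal sketch (illustrative only; the user and subcarrier counts are assumptions matching the Fig. 1 example) encodes the example above and checks the pairwise exclusivity that constraint (8) below formalizes.

```python
import numpy as np

K, N = 3, 2  # assumed numbers of users and subcarriers for the Fig. 1 example
# xi[k, i, n] = 1 iff user k decodes (and cancels) user i's signal on subcarrier n
xi = np.zeros((K, K, N), dtype=int)

# Fig. 1 example (0-based indices): user 2 cancels users 1 and 3 on subcarrier 2
xi[1, 0, 1] = 1  # xi^{2}_{1,2} = 1
xi[1, 2, 1] = 1  # xi^{2}_{3,2} = 1
# xi^{3}_{2,2} = xi^{1}_{2,2} = 0 already hold by initialization

# pairwise exclusivity: at most one of each user pair decodes the other per subcarrier
for n in range(N):
    for k in range(K):
        for i in range(k + 1, K):
            assert xi[k, i, n] + xi[i, k, n] <= 1
print("ordering variables are pairwise consistent")
```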
Let us define a joint subcarrier and user association binary variable (herein called the scheduling variable) $\rho_{k,n}^{f}$: if subcarrier $n$ is assigned to user $k$ and user $k$ is served by AAU $f$, then $\rho_{k,n}^{f}=1$; otherwise, $\rho_{k,n}^{f}=0$. We introduce our scheduling policy in terms of the subcarrier assignment technique and the connectivity of users to AAUs as follows: $\bullet$ Subcarrier Assignment as Multiple Access Technique: By exploiting NOMA, each subcarrier $n$ can be assigned to at most $L_{n}^{f}$ users in AAU $f$, which is ensured by $\displaystyle\sum_{k\in\mathcal{K}}\rho_{k,n}^{f}\leq L_{n}^{f},\,\forall n\in\mathcal{N},f\in\mathcal{F}.$ (1) $\bullet$ User Association as Connectivity Technique: In general, user association refers to finding an algorithm that assigns users to the radio stations. We propose a novel user association policy where each user can be configured to receive its data on different subcarriers from different AAUs. We call this the multi-connectivity technique, which is different from coordinated multipoint technologies because it does not require synchronization between different AAUs [30]. Therefore, each user on each subcarrier can be connected to at most one AAU, which is ensured by the following constraint: $\displaystyle{\sum_{f\in\mathcal{F}}\rho_{k,n}^{f}\leq 1,\forall n\in\mathcal{N},{k}\in\mathcal{K}}.$ (2) Let $\mathbb{h}_{k,n,f}\in\mathcal{C}^{M\times 1}$ be the channel coefficient between user $k$ and AAU $f$ on subcarrier $n$. Following the channel uncertainty (i.e., imperfect CSI) model, we assume that the global CSI is not known exactly because of estimation errors and/or feedback delays [19, 38]. Therefore, the real channel gain is given as follows [19]: $\displaystyle\mathbb{h}_{k,n,f}=\tilde{\mathbb{h}}_{k,n,f}+\bm{\epsilon}_{k,n,f},\,\forall k,n,f,$ (3) where $\tilde{\mathbb{h}}_{k,n,f}$ denotes the estimated channel gain and $\bm{\epsilon}_{k,n,f}$ indicates the estimation error, which lies in a bounded spherical set given by $\|\bm{\epsilon}_{k,n,f}\|_{2}\leq\delta_{k,n}^{f}$, where $\delta_{k,n}^{f}$ is the channel uncertainty radius and is assumed to be a small constant [38, 46]. In other words, we have $\mathbb{h}_{k,n,f}\in\mathcal{H}_{k,n,f}$, where $\mathcal{H}_{k,n,f}$ is given by $\displaystyle\mathcal{H}_{k,n,f}\triangleq\Big\{\tilde{\mathbb{h}}_{k,n,f}+\bm{\epsilon}_{k,n,f}\ \big|\ \|\bm{\epsilon}_{k,n,f}\|_{2}\leq\delta_{k,n}^{f}\Big\}.$ (4) The indispensable part of NOMA is the SIC algorithm, which is applied at the receiver side to handle the NOMA interference. Since the SIC ordering has a key impact on the received signal-to-interference-plus-noise ratio (SINR) and on the performance of NOMA for cell-edge and cell-center users [21], we devise a new SIC ordering method as follows:

#### II-A1 Proposed SIC Ordering Algorithm

In contrast to sorting users based on the channel condition to perform SIC, we introduce a new binary variable $\xi_{i,n}^{k}$, where $\xi_{i,n}^{k}=1$ if user $k$ decodes the signal of user $i$ on the assigned subcarrier $n$ (assuming both users $k$ and $i$ are multiplexed on subcarrier $n$); otherwise, $\xi_{i,n}^{k}=0$. Note that users $i$ and $k$ can be connected to different AAUs. It is worth noting that the traditional SIC ordering is based on the channel power gain or channel gain [6, 7, 14], or on the normalized noise power [45]. $\bullet$ Worst-Case Data Rate: The achievable rate of user $k$ on subcarrier $n$ and AAU $f$ with channel $\mathbb{h}_{k,n,f}$ is given by (5) below.
$\displaystyle r_{k,n}^{f}=\log_{2}\left(1+\frac{\rho_{k,n}^{f}|\mathbb{h}_{k,n,f}^{\dagger}\mathbb{w}_{k,n,f}|^{2}}{\sum\limits_{f^{\prime}\in\mathcal{F}}\sum\limits_{i\in\mathcal{K}\backslash\{k\}}\rho_{i,n}^{f^{\prime}}\cdot(1-\xi_{i,n}^{k})\cdot|\mathbb{h}_{k,n,f^{\prime}}^{\dagger}\mathbb{w}_{i,n,f^{\prime}}|^{2}+\sigma_{k,n,f}^{2}}\right).$ (5) The worst-case data rate of user $k$ over the uncertainty set can be formulated as $\displaystyle r_{k}=\sum_{f^{\prime}\in\mathcal{F}}\sum_{n\in\mathcal{N}}\min_{\big\{\mathbb{h}_{k,n,f}\in\mathcal{H}_{k,n,f},\forall f\big\}}r_{k,n}^{f^{\prime}},\ \ \forall k\in\mathcal{K}.$ (6) $\bullet$ Successful Decoding Constraints as SIC Constraints: To ensure that user $k$ can successfully cancel the signal of user $i$, i.e., when user $k$ is determined to perform SIC on subcarrier $n$ (which means $\xi_{i,n}^{k}=1$), the following three constraints should be satisfied simultaneously. 1. The SIC ordering variable can be $1$ only when both users $k$ and $i$ are multiplexed on subcarrier $n$, which is ensured by $\displaystyle\xi_{i,n}^{k}\leq\sum_{f\in\mathcal{F}}\rho_{k,n}^{f}\cdot\sum_{f^{\prime}\in\mathcal{F}}\rho_{i,n}^{f^{\prime}},\forall n\in\mathcal{N},\ i\neq k,k,i\in\mathcal{K}.$ (7) 2. At most one of users $k$ and $i$ performs SIC over the multiplexed subcarrier $n$, which is ensured by $\displaystyle\xi_{i,n}^{k}+\xi_{k,n}^{i}\leq 1,\ \forall n\in\mathcal{N},k,i\in\mathcal{K},k\neq i.$ (8) 3. The successful decoding constraint, which ensures that the signal of user $i$ (connected to AAU $f^{\prime}$) on subcarrier $n$ is detected and cancelled by user $k$ (connected to AAU $f$) under the worst condition (based on (5)), is given by (9) below. $\displaystyle\underbrace{\xi_{i,n}^{k}}_{\text{A}}\underbrace{\max_{\big\{\mathbb{h}_{i,n,f}\in\mathcal{H}_{i,n,f},\forall f\big\}}}_{\text{B}}\underbrace{\log_{2}\left(1+\frac{\rho_{i,n}^{f^{\prime}}|\mathbb{h}_{i,n,f^{\prime}}^{\dagger}\mathbb{w}_{i,n,f^{\prime}}|^{2}}{\sum_{f\in\mathcal{F}}\sum_{k^{\prime}\in\mathcal{K}\backslash\{i\}}\rho_{k^{\prime},n}^{f}(1-\xi_{k^{\prime},n}^{i})|\mathbb{h}_{i,n,f}^{\dagger}\mathbb{w}_{k^{\prime},n,f}|^{2}+\sigma_{i,n,f}^{2}}\right)}_{\text{C}}$ (9) $\displaystyle\leq\underbrace{\xi_{i,n}^{k}}_{\text{A}}\underbrace{\min_{\mathbb{h}_{k,n,f}\in\mathcal{H}_{k,n,f},\forall f}}_{\text{D}}\underbrace{\log_{2}\left(1+\frac{\rho_{i,n}^{f^{\prime}}|\mathbb{h}_{k,n,f^{\prime}}^{\dagger}\mathbb{w}_{i,n,f^{\prime}}|^{2}}{\sum_{f\in\mathcal{F}}\sum_{k^{\prime}\in\mathcal{K}\backslash\{i\}}\rho_{k^{\prime},n}^{f}(1-\xi_{k^{\prime},n}^{i})|\mathbb{h}_{k,n,f}^{\dagger}\mathbb{w}_{k^{\prime},n,f}|^{2}+\sigma_{k,n,f}^{2}}\right)}_{\text{E}}.$ In (9), part A ensures that the constraint holds for user $k$ that is determined to perform SIC to decode and remove user $i$'s signal, where both of them are multiplexed on subcarrier $n$. Parts B and D ensure that the constraint holds for the worst-case CSI estimate. To clarify, consider two parameters $a\in[a_{\min},a_{\max}]$ and $b\in[b_{\min},b_{\max}]$; to ensure the inequality $a\leq b$ in all (i.e., worst) cases, it clearly suffices to have $a_{\max}\leq b_{\min}$. Part C is the rate obtained by user $i$, and part E is the rate of user $i$ achieved at user $k$ [20, 6, 7]. It should be noted that (9) is sufficient (but not necessary) for SIC.
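To make (5) concrete, the following sketch (an illustrative toy instance with assumed dimensions and randomly drawn channels and beamformers, not the actual optimized quantities) evaluates the rate of user $k$, where the residual NOMA interference excludes every user whose signal has been cancelled according to $\xi_{i,n}^{k}$.

```python
import numpy as np

rng = np.random.default_rng(1)
F, K, N, M = 2, 3, 2, 4                      # assumed network dimensions
h = rng.standard_normal((K, N, F, M)) + 1j * rng.standard_normal((K, N, F, M))
w = (rng.standard_normal((K, N, F, M)) + 1j * rng.standard_normal((K, N, F, M))) / np.sqrt(M)
rho = rng.integers(0, 2, size=(K, N, F))     # assumed scheduling decisions
xi = np.zeros((K, K, N), dtype=int)          # xi[k, i, n]: user k cancels user i on n
xi[0, 1, 0] = 1                              # assume user 1 cancels user 2 on subcarrier 1
sigma2 = 1e-2                                # assumed noise variance

def rate(k, n, f):
    """Achievable rate r_{k,n}^f of eq. (5)."""
    signal = rho[k, n, f] * abs(np.vdot(h[k, n, f], w[k, n, f])) ** 2
    interf = sum(rho[i, n, fp] * (1 - xi[k, i, n])
                 * abs(np.vdot(h[k, n, fp], w[i, n, fp])) ** 2
                 for fp in range(F) for i in range(K) if i != k)
    return np.log2(1 + signal / (interf + sigma2))

print("r_{1,1}^1 =", rate(0, 0, 0))
```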
#### II-A2 Fronthaul Link Capacity Constraints

Since the bandwidth of the fronthaul links is limited, we introduce a link capacity constraint as $\displaystyle\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\varphi_{f}\cdot r_{k,n}^{f}\leq R_{\max}^{f},\forall f\in\mathcal{F},$ (10) where $\varphi_{f}$ and $R_{\max}^{f},\forall f\in\mathcal{F}$, are the preprocessing weight related to the fronthaul transmission technologies and the maximum available transmission capacity of AAU $f$, respectively.

### II-B Objective Function and Problem Formulation

In this section, the considered objective function and the problem formulation are introduced.

#### II-B1 Objective Function

Considering the worst-case channel uncertainties, the main goal of the optimization problem is to maximize the worst-case global EE (GEE) of the system. The GEE is defined as the ratio of the global achievable sum rate to the total consumed power [40], and the worst case of the GEE is obtained by considering the worst-case CSI in our model. To formulate the worst-case EE, we need the worst-case throughput of the system and the total consumed power. The total worst-case data rate can be calculated by $\displaystyle R_{\text{Total}}^{\text{Worst}}=\sum_{k\in\mathcal{K}}r_{k}.$ (11) The total power consumption of the system is obtained by [31] $\displaystyle P_{\text{Total}}=\sum_{f\in\mathcal{F}}\underbrace{\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\frac{1}{\beta}\|\mathbb{w}_{k,n,f}\|_{2}^{2}}_{P_{\text{TX}}^{f}}+P_{\text{Static}},$ (12) where $0<\beta<1$ is the drain efficiency of the power amplifier and $P_{\text{Static}}$ is the static term of the power consumption, obtained as $P_{\text{Static}}=\sum_{f\in\mathcal{F}}P_{\text{Static}}^{f}$, where $P_{\text{Static}}^{f}$ is given by $P_{\text{Static}}^{f}=\left\{\begin{array}{ll}P_{\text{Hardware}}^{f}&0<P_{\text{TX}}^{f}\leq P_{\max}^{f},\\ P_{\text{Sleep}}^{f}&P_{\text{TX}}^{f}=0,\end{array}\right.$ (13) where $P_{\text{Sleep}}^{f}$ is the power consumed at the BBU in the sleep mode of AAU $f$, $P_{\text{Hardware}}^{f}=C_{\text{Circuit}}\times M$ is the power used by the hardware at AAU $f$ in the transmission mode, and $C_{\text{Circuit}}$ is the circuit power constant used for the signal processing functions at the AAU, which includes the power dissipation in the filtering, frequency synthesizer, digital-to-analog converter, etc. Therefore, the worst-case EE of the system is calculated by $\displaystyle\eta_{\text{EE}}^{\text{Worst}}=\frac{R_{\text{Total}}^{\text{Worst}}}{P_{\text{Total}}},$ (14) where $R_{\text{Total}}^{\text{Worst}}$ and $P_{\text{Total}}$ are given by (11) and (12), respectively.
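The power model (12)-(13) and the EE metric (14) can be sketched as follows; the drain efficiency, static power terms, and the worst-case sum rate below are placeholder assumptions, not values from this paper.

```python
import numpy as np

beta = 0.38                       # assumed drain efficiency of the power amplifier
P_hardware, P_sleep = 1.0, 0.1    # assumed static power terms (Watts)

def total_power(beamformers_per_aau):
    """Eqs. (12)-(13): scaled transmit power plus a mode-dependent static term."""
    P = 0.0
    for W_f in beamformers_per_aau:                  # one list of beamformers per AAU
        p_tx = sum(np.linalg.norm(w) ** 2 for w in W_f) / beta
        P += p_tx + (P_hardware if p_tx > 0 else P_sleep)
    return P

R_worst = 12.5                                       # assumed worst-case sum rate
W = [[0.3 * np.ones(4)], []]                         # AAU 1 transmits, AAU 2 sleeps
print("worst-case EE (14):", R_worst / total_power(W))
```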
#### II-B2 Problem Formulation

Based on these definitions and assumptions, our main aim is to maximize the worst-case EE considering the beamforming and SIC ordering constraints. The optimization problem is mathematically formulated as follows: $\displaystyle\max_{\mathbb{W},\bm{\xi},\bm{\rho}}\;\eta_{\text{EE}}^{\text{Worst}}$ (15a) s.t. $\displaystyle\text{C}_{1}:\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\rho_{k,n}^{f}\|\mathbb{w}_{k,n,f}\|_{2}^{2}\leq P_{\text{max}}^{f},\forall f\in\mathcal{F},$ (15b) $\displaystyle\text{C}_{2}:\sum_{k\in\mathcal{K}}\rho_{k,n}^{f}\leq L_{n}^{f},\,\forall n\in\mathcal{N},f\in\mathcal{F},$ (15c) $\displaystyle\text{C}_{3}:{\sum_{f\in\mathcal{F}}\rho_{k,n}^{f}\leq 1,\forall n\in\mathcal{N},{k}\in\mathcal{K}},$ (15d) $\displaystyle\text{C}_{4}:\xi_{i,n}^{k},\rho_{k,n}^{f}\in\{0,1\},\,\,\forall k\in\mathcal{K},n\in\mathcal{N},\forall f\in\mathcal{F},$ (15e) $\displaystyle(7),(8),(9),(10),$ where $\mathbb{W}=\left[{w}_{k,n,f}^{m}\right]$, $\bm{\rho}=\left[\rho_{k,n}^{f}\right]$, and $\bm{\xi}=\left[\xi_{i,n}^{k}\right]$. Constraint (15b) indicates the maximum available power budget, and constraint (15c) verifies that each subcarrier $n$ in each AAU $f$ can be utilized no more than $L_{n}^{f}$ times, which is recognized as the NOMA constraint. Constraint (15d) ensures that each user on each subcarrier is served by at most one AAU, and (15e) stands for the binary variables. Constraint (7) indicates that SIC on each subcarrier can take place only between users which are multiplexed on that subcarrier. Moreover, (8) indicates that, for any two users, only one of them can perform SIC on the other, and (9) ensures that user $k$ successfully decodes the message of user $i$ on the assigned subcarrier $n$ [6, 22]. Finally, constraint (10) is the link capacity restriction of the fronthaul links.

## III Proposed Solution Methods

The optimization problem in (15) is a non-convex mixed-integer non-linear program (MINLP) which is complicated to solve. We propose two different solution algorithms, which are discussed in the following.

### III-A Algorithm 1: One-Step Solution

In this section, we explain our proposed one-step solution, i.e., all variables are obtained without using an alternating approach. First, let us define matrices $\mathbb{W}_{k,n,f}$ and $\tilde{\mathbb{H}}_{k,n,f}$ of size $M\times M$ as $\mathbb{W}_{k,n,f}\triangleq\mathbb{w}_{k,n,f}\mathbb{w}_{k,n,f}^{\dagger}$ and $\tilde{\mathbb{H}}_{k,n,f}\triangleq\tilde{\mathbb{h}}_{k,n,f}\tilde{\mathbb{h}}_{k,n,f}^{\dagger}$, respectively.
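The semidefinite lifting just introduced replaces each beamformer $\mathbb{w}$ by the rank-one matrix $\mathbb{W}=\mathbb{w}\mathbb{w}^{\dagger}$; the sketch below (random data, assumed dimension) verifies the basic properties of the lifting and previews how a beamformer is recovered from a rank-one solution by eigenvalue decomposition, as used at the end of this section.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4                                            # assumed number of antennas
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)

W = np.outer(w, w.conj())                        # lifted variable W = w w^dagger
assert np.linalg.matrix_rank(W) == 1             # rank-one by construction
assert np.isclose(np.trace(W).real, np.linalg.norm(w) ** 2)  # Tr[W] = ||w||^2

# recover w from a rank-one W via its dominant eigenpair (EVD)
vals, vecs = np.linalg.eigh(W)
w_rec = np.sqrt(vals[-1]) * vecs[:, -1]
# recovery holds up to a global phase, which does not change |h^dagger w|^2
assert np.isclose(abs(np.vdot(w, w_rec)),
                  np.linalg.norm(w) * np.linalg.norm(w_rec))
```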
To this end, we rewrite $\|\mathbb{h}_{k,n,f}^{\dagger}\mathbb{w}_{k,n,f}\|^{2}$ as follows: $\displaystyle\|\mathbb{h}_{k,n,f}^{\dagger}\mathbb{w}_{k,n,f}\|^{2}=\mathbb{w}_{k,n,f}^{\dagger}(\tilde{\mathbb{h}}_{k,n,f}+\bm{\epsilon}_{k,n,f})(\tilde{\mathbb{h}}_{k,n,f}+\bm{\epsilon}_{k,n,f})^{\dagger}\mathbb{w}_{k,n,f}=\mathbb{w}_{k,n,f}^{\dagger}(\tilde{\mathbb{H}}_{k,n,f}+\bm{\Delta}_{k,n,f})\mathbb{w}_{k,n,f}=\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f}+\bm{\Delta}_{k,n,f})\mathbb{W}_{k,n,f}],$ (16) where $\bm{\Delta}_{k,n,f}=\tilde{\mathbb{h}}_{k,n,f}\bm{\epsilon}_{k,n,f}^{\dagger}+\bm{\epsilon}_{k,n,f}\tilde{\mathbb{h}}_{k,n,f}^{\dagger}+\bm{\epsilon}_{k,n,f}\bm{\epsilon}_{k,n,f}^{\dagger}$ is a norm-bounded matrix which satisfies the following bound: $\displaystyle\|\bm{\Delta}_{k,n,f}\|\leq\|\tilde{\mathbb{h}}_{k,n,f}\bm{\epsilon}_{k,n,f}^{\dagger}\|+\|\bm{\epsilon}_{k,n,f}\tilde{\mathbb{h}}_{k,n,f}^{\dagger}\|+\|\bm{\epsilon}_{k,n,f}\bm{\epsilon}^{\dagger}_{k,n,f}\|\leq\|\tilde{\mathbb{h}}_{k,n,f}\|\|\bm{\epsilon}_{k,n,f}^{\dagger}\|+\|\bm{\epsilon}_{k,n,f}\|\|\tilde{\mathbb{h}}_{k,n,f}^{\dagger}\|+\|\bm{\epsilon}_{k,n,f}\|\|\bm{\epsilon}^{\dagger}_{k,n,f}\|=(\delta^{f}_{k,n})^{2}+2\delta_{k,n}^{f}\|\tilde{\mathbb{h}}_{k,n,f}\|\triangleq e_{k,n}^{f}.$ (17) Therefore, equation (5) can be rewritten as (18) below. $\displaystyle{r}^{f}_{k,n}=\log_{2}\left(1+\frac{\rho_{k,n}^{f}\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f}+\bm{\Delta}_{k,n,f})\mathbb{W}_{k,n,f}]}{\sum_{f^{\prime}\in\mathcal{F}}\sum_{i\in\mathcal{K}\backslash\{k\}}\rho_{i,n}^{f^{\prime}}(1-\xi_{i,n}^{k})\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f^{\prime}}+\bm{\Delta}_{k,n,f^{\prime}})\mathbb{W}_{i,n,f^{\prime}}]+\sigma_{k,n,f}^{2}}\right).$ (18) Now, we aim at characterizing the worst-case data rate (18). Since the log function is monotonic, the worst case can be found over the SINR in (18). This is obtained by minimizing the SINR in (18) over $\bm{\Delta}_{k,n,f}$ and $\bm{\Delta}_{k,n,f^{\prime}}$, where the indexes $f$ and $f^{\prime}$ can be the same, i.e., when we calculate the intra-cell interference. One conservative method to find the minimum of the SINR is minimizing the numerator and maximizing the denominator of the SINR in (18) with respect to the norm-bounded matrices [49]. Note that by this method, we provide a strictly bounded robust solution (SBRS) [49]. Motivated by this idea, the lower bound of the SINR in (18) subject to (17) can be obtained by solving the following optimization problems: $\displaystyle\min_{\|\bm{\Delta}_{k,n,f}\|\leq e_{k,n}^{f}}\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f}+\bm{\Delta}_{k,n,f})\mathbb{W}_{k,n,f}],$ (19) $\displaystyle\max_{\|\bm{\Delta}_{k,n,f^{\prime}}\|\leq e_{k,n}^{f^{\prime}}}\sum_{f^{\prime}\in\mathcal{F}}\sum_{i\in\mathcal{K}\backslash\{k\}}\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f^{\prime}}+\bm{\Delta}_{k,n,f^{\prime}})\mathbb{W}_{i,n,f^{\prime}}].$ (20) Here, we apply the Lagrangian-based method to find the optimal solutions of (19) and (20) for the given beamforming matrices [49].
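Before stating the closed-form solutions, the following numerical sketch (assumed dimension and error budget) sanity-checks both the rewriting (16) and the minimizing direction that Proposition 1 asserts for (19), by comparing it against random feasible competitors.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 4                                            # assumed number of antennas
h_t = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # estimated channel
eps = 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# identity (16): |h^dagger w|^2 = Tr[(H~ + Delta) W]
W = np.outer(w, w.conj())
H_t = np.outer(h_t, h_t.conj())
Delta = (np.outer(h_t, eps.conj()) + np.outer(eps, h_t.conj())
         + np.outer(eps, eps.conj()))
assert np.isclose(abs(np.vdot(h_t + eps, w)) ** 2,
                  np.trace((H_t + Delta) @ W).real)

# minimizer of (19) over ||Delta||_F <= e_bud: Delta = -e_bud * W^dagger / ||W||
e_bud = 0.5                                      # assumed norm budget e_{k,n}^f
D_min = -e_bud * W.conj().T / np.linalg.norm(W)
obj = lambda D: np.trace((H_t + D) @ W).real
for _ in range(200):                             # random feasible competitors
    G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    G = e_bud * G / np.linalg.norm(G)
    assert obj(D_min) <= obj(G) + 1e-9
print("identity (16) and the minimizing direction verified numerically")
```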
###### Proposition 1.

For a given $\mathbb{W}_{k,n,f}$, the minimizing and maximizing norm-bounded matrices for (19) and (20) are given, respectively, for all $k,i,n,f,f^{\prime}$, by $\displaystyle\bm{\Delta}^{f,\min}_{k,n}=-e_{k,n}^{f}\frac{\mathbb{W}^{\dagger}_{k,n,f}}{\|\mathbb{W}_{k,n,f}\|},\ \forall k,n,f,$ (21) $\displaystyle\bm{\Delta}^{f^{\prime},\max}_{k,n}=e_{k,n}^{f^{\prime}}\frac{\mathbb{W}^{\dagger}_{i,n,f^{\prime}}}{\|\mathbb{W}_{i,n,f^{\prime}}\|},\ \forall k,i\in\mathcal{K},n,f^{\prime},k\neq i.$ (22)

###### Proof.

Please see Appendix A. ∎ The following remark provides the exact solution for the worst case of the SINR.

###### Remark 1.

The exact worst case of the SINR can be obtained by using fractional programming. The minimization of the SINR over a bounded error is a fractional program as follows: $\displaystyle\min_{\|\bm{\Delta}_{k,n,f}\|\leq e_{k,n}^{f}}\frac{\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f}+\bm{\Delta}_{k,n,f})\mathbb{W}_{k,n,f}]}{\sum_{f^{\prime}\in\mathcal{F}}\sum_{i\in\mathcal{K}\backslash\{k\}}\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f^{\prime}}+\bm{\Delta}_{k,n,f^{\prime}})\mathbb{W}_{i,n,f^{\prime}}]+\sigma_{k,n,f}^{2}}.$ (23) The solution of (23) can be obtained using the idea of fractional programming; this provides a numerical solution for the worst-case $\bm{\Delta}_{k,n,f}$. Hence, using the idea of fractional programming, we restate (23) in the parametric form $\displaystyle\min_{\|\bm{\Delta}_{k,n,f}\|\leq e_{k,n}^{f}}F(\bm{\Delta}_{k,n,f},\nu),$ (24) where $\displaystyle F(\bm{\Delta}_{k,n,f},\nu)\triangleq{\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f}+\bm{\Delta}_{k,n,f})\mathbb{W}_{k,n,f}]}-\nu\Big(\sum_{f^{\prime}\in\mathcal{F}}\sum_{i\in\mathcal{K}\backslash\{k\}}\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f^{\prime}}+\bm{\Delta}_{k,n,f^{\prime}})\mathbb{W}_{i,n,f^{\prime}}]+\sigma_{k,n,f}^{2}\Big),$ (25) and $\nu$ is an auxiliary variable updated in each iteration of the Dinkelbach algorithm by $\displaystyle\nu=\frac{\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f}+\bm{\Delta}_{k,n,f}^{*})\mathbb{W}_{k,n,f}]}{\sum_{f^{\prime}\in\mathcal{F}}\sum_{i\in\mathcal{K}\backslash\{k\}}\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f^{\prime}}+\bm{\Delta}_{k,n,f^{\prime}}^{*})\mathbb{W}_{i,n,f^{\prime}}]+\sigma_{k,n,f}^{2}},$ (26) where $\bm{\Delta}_{k,n,f}^{*}$ is obtained by (28), explained next. In each iteration of the Dinkelbach algorithm, we need to obtain $\text{argmin}_{\|\bm{\Delta}_{k,n,f}\|\leq e_{k,n}^{f}}F(\bm{\Delta}_{k,n,f},\nu)$. We adopt the Lagrangian method. To this end, we define the Lagrangian function as follows: $\displaystyle\mathcal{L}(\bm{\Delta}_{k,n,f})=F(\bm{\Delta}_{k,n,f},\nu)+\Omega(\|\bm{\Delta}_{k,n,f}\|-e_{k,n}^{f}),$ (27) where $\Omega$ is the Lagrange multiplier. Taking the derivatives of (27) with respect to $\bm{\Delta}_{k,n,f}$ and $\Omega$, and setting them to zero, we obtain $\bm{\Delta}_{k,n,f}^{*}$ as follows: $\displaystyle\bm{\Delta}_{k,n,f}^{*}=\underset{\bm{\Delta}_{k,n,f}}{\text{argmin}}\ \mathcal{L}(\bm{\Delta}_{k,n,f}).$ (28) The update procedure of the Dinkelbach algorithm proceeds until the convergence condition is met. We call this method the exact robust solution (ExRS). It is worth mentioning that, since the objective function in (23) is pseudo-linear, we can solve the max (best-case) problem using the same approach.
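Remark 1 invokes the Dinkelbach procedure; the sketch below runs the same parametric iteration (24)-(26) on a one-dimensional toy fractional program (the functions and the grid-search inner step are assumptions standing in for the Lagrangian step (27)-(28)).

```python
import numpy as np

Nf = lambda x: (x - 1.0) ** 2 + 0.5       # assumed toy numerator
Df = lambda x: x + 2.0                    # assumed toy denominator (positive)
grid = np.linspace(0.0, 3.0, 3001)

nu = 0.0
for _ in range(50):
    x_star = grid[np.argmin(Nf(grid) - nu * Df(grid))]   # parametric subproblem (24)
    nu_new = Nf(x_star) / Df(x_star)                     # update rule (26)
    if abs(nu_new - nu) < 1e-10:                         # convergence condition
        break
    nu = nu_new
print(f"worst-case ratio {nu:.6f} attained at x = {x_star:.4f}")
```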
By substituting (21) and (22) into the data rate (18), we obtain $\displaystyle\tilde{r}_{k,n}^{f}=\log_{2}\bigg(1+\frac{\rho_{k,n}^{f}\big(\text{Tr}[\tilde{\mathbb{H}}_{k,n,f}\mathbb{W}_{k,n,f}]-e_{k,n}^{f}\|\mathbb{W}_{k,n,f}\|\big)}{\sum_{f^{\prime}\in\mathcal{F}}\sum_{i\in\mathcal{K}\backslash\{k\}}\rho_{i,n}^{f^{\prime}}(1-\xi_{i,n}^{k})\big\{\text{Tr}[\tilde{\mathbb{H}}_{k,n,f^{\prime}}\mathbb{W}_{i,n,f^{\prime}}]+e_{k,n}^{f^{\prime}}\|\mathbb{W}_{i,n,f^{\prime}}\|\big\}+\sigma_{k,n,f}^{2}}\bigg),$ (29) where $\tilde{r}_{k,n}^{f}$ is a lower bound on the worst-case data rate. Therefore, the total worst-case data rate is given by $\displaystyle R_{\text{Total}}^{\text{Worst}}=\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\tilde{r}_{k,n}^{f}.$ (30) Moreover, using Proposition 1, we restate constraint (9) as (31) below, which may not be tight. $\displaystyle\log_{2}\bigg(1+\frac{\xi_{i,n}^{k}\rho_{i,n}^{f^{\prime}}\big\{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f^{\prime}}\mathbb{W}_{i,n,f^{\prime}}]+e_{i,n}^{f^{\prime}}\|\mathbb{W}_{i,n,f^{\prime}}\|\big\}}{\sum_{f\in\mathcal{F}}\sum_{k^{\prime}\in\mathcal{K}\backslash\{i\}}\rho_{k^{\prime},n}^{f}(1-\xi_{k^{\prime},n}^{i})\big\{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f}\mathbb{W}_{k^{\prime},n,f}]-e_{i,n}^{f}\|\mathbb{W}_{k^{\prime},n,f}\|\big\}+\sigma_{i,n,f}^{2}}\bigg)$ (31) $\displaystyle\leq\log_{2}\bigg(1+\frac{\xi_{i,n}^{k}\rho_{i,n}^{f^{\prime}}\big\{\text{Tr}[\tilde{\mathbb{H}}_{k,n,f^{\prime}}\mathbb{W}_{i,n,f^{\prime}}]-e_{k,n}^{f^{\prime}}\|\mathbb{W}_{i,n,f^{\prime}}\|\big\}}{\sum_{f\in\mathcal{F}}\sum_{k^{\prime}\in\mathcal{K}\backslash\{i\}}\rho_{k^{\prime},n}^{f}(1-\xi_{k^{\prime},n}^{i})\big\{\text{Tr}[\tilde{\mathbb{H}}_{k,n,f}\mathbb{W}_{k^{\prime},n,f}]+e_{k,n}^{f}\|\mathbb{W}_{k^{\prime},n,f}\|\big\}+\sigma_{k,n,f}^{2}}\bigg).$ In (31), the left-hand side corresponds to the maximization in (9), while the right-hand side corresponds to the minimization. We also note that the product term $\{\xi_{i,n}^{k}\cdot\rho_{i,n}^{f^{\prime}}\}$ in (31) is non-convex. To tackle this issue, we define a new variable as the product of the two binary variables, $x_{i,n}^{k,f^{\prime}}=\xi_{i,n}^{k}\cdot\rho_{i,n}^{f^{\prime}}$, which represents the joint SIC ordering and scheduling decision.
Then, we adopt a linearization technique and add the following constraints to the optimization problem [51, 50]: $\displaystyle\text{C}_{7}:\ \xi_{i,n}^{k}\geq x_{i,n}^{k,f},\ \forall k,i\in\mathcal{K},n\in\mathcal{N},f\in\mathcal{F},$ (32) $\displaystyle\text{C}_{8}:\ \rho_{i,n}^{f}\geq x_{i,n}^{k,f},\ \forall k,i\in\mathcal{K},n\in\mathcal{N},f\in\mathcal{F},$ (33) $\displaystyle\text{C}_{9}:\ \rho_{i,n}^{f}+\xi_{i,n}^{k}-1\leq x_{i,n}^{k,f},\forall k,i\in\mathcal{K},n\in\mathcal{N},f\in\mathcal{F}.$ (34) Furthermore, in order to handle the non-convex integer variables in problem (15), we rewrite constraint (15e) as the intersection of the following regions: $\displaystyle\text{C}_{10}:0\leq\xi_{i,n}^{k}\leq 1,\ \text{C}_{11}:\sum_{i,k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\bigg(\xi_{i,n}^{k}-(\xi_{i,n}^{k})^{2}\bigg)\leq 0,$ $\displaystyle\text{C}_{12}:0\leq\rho_{k,n}^{f}\leq 1,\ \text{C}_{13}:\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\bigg(\rho_{k,n}^{f}-(\rho_{k,n}^{f})^{2}\bigg)\leq 0,$ $\displaystyle\text{C}_{14}:0\leq x_{i,n}^{k,f}\leq 1,$ $\displaystyle\text{C}_{15}:\sum_{f\in\mathcal{F}}\sum_{i,k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\bigg(x_{i,n}^{k,f}-(x_{i,n}^{k,f})^{2}\bigg)\leq 0.$ Now, we rewrite the optimization problem as: $\displaystyle\max_{\mathbb{W},\bm{\xi},\bm{\rho}}\;\eta_{\text{EE}}^{\text{Worst}}$ (35a) s.t. $\displaystyle\text{C}_{1}:\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\text{Tr}[\rho_{k,n}^{f}\mathbb{W}_{k,n,f}]\leq P^{f}_{\text{max}},$ (35b) $\displaystyle\text{C}_{2}:\sum_{k\in\mathcal{K}}\rho_{k,n}^{f}\leq L_{n}^{f},$ (35c) $\displaystyle\text{C}_{3}:\mathbb{W}_{k,n,f}\succeq\mathbb{0},$ (35d) $\displaystyle\text{C}_{4}:\text{Rank}(\mathbb{W}_{k,n,f})\leq 1,$ (35e) $\displaystyle\text{C}_{7}-\text{C}_{15},$ (35f) $\displaystyle(2),(7),(8),(10),(31).$ (35g) As can be seen from the optimization problem (35), the product term $\rho_{k,n}^{f}\mathbb{W}_{k,n,f}$ is an obstacle to solving the problem. Let us define two new auxiliary variables as follows: $\displaystyle\tilde{\mathbb{W}}_{k,n,f}\triangleq\rho_{k,n}^{f}\mathbb{W}_{k,n,f},\ \ {\mathbb{W^{\prime}}}_{i,k,n,f}\triangleq x_{i,n}^{k,f}\mathbb{W}_{i,n,f}.$ Also, we employ the big-M method [39, 14] to circumvent this difficulty.
In particular, we impose the following additional constraints: $\displaystyle\text{C}_{16}:\ \tilde{\mathbb{W}}_{k,n,f}\preceq P_{\text{max}}^{f}\mathbb{I}_{M}\rho_{k,n}^{f},$ (36) $\displaystyle\text{C}_{17}:\ \tilde{\mathbb{W}}_{k,n,f}\preceq\mathbb{W}_{k,n,f},\ \text{C}_{18}:\ \tilde{\mathbb{W}}_{k,n,f}\succeq\mathbb{0},$ (37) $\displaystyle\text{C}_{19}:\ \tilde{\mathbb{W}}_{k,n,f}\succeq\mathbb{W}_{k,n,f}-(1-\rho_{k,n}^{f})P_{\text{max}}^{f}\mathbb{I}_{M},$ (38) $\displaystyle\text{C}_{20}:\ {\mathbb{W^{\prime}}}_{i,k,n,f}\preceq P_{\text{max}}^{f}\mathbb{I}_{M}\ x_{i,n}^{k,f},$ (39) $\displaystyle\text{C}_{21}:\ {\mathbb{W}^{\prime}}_{i,k,n,f}\preceq\mathbb{W}_{k,n,f},\ \text{C}_{22}:\ {\mathbb{W}^{\prime}}_{i,k,n,f}\succeq\mathbb{0},$ (40) $\displaystyle\text{C}_{23}:\ {\mathbb{W}^{\prime}}_{i,k,n,f}\succeq\mathbb{W}_{k,n,f}-(1-x_{i,n}^{k,f})P_{\text{max}}^{f}\mathbb{I}_{M}.$ (41) The worst-case data rate (30) and constraint (31) are still non-convex. To handle them and facilitate the solution, (30) can be rewritten as $\displaystyle R_{\text{Total}}^{\text{Worst}}=\log_{2}\prod_{f\in\mathcal{F}}\prod_{k\in\mathcal{K}}\prod_{n\in\mathcal{N}}\frac{\psi_{f,k,n}}{\phi_{f,k,n}},$ (42) where $\displaystyle\psi_{f,k,n}=\text{Tr}[\tilde{\mathbb{H}}_{k,n,f}\tilde{\mathbb{W}}_{k,n,f}]-e_{k,n}^{f}\big(\mathcal{A}(\tilde{\mathbb{W}}_{k,n,f})\big)+\phi_{f,k,n},$ $\displaystyle\phi_{f,k,n}=\sum_{f^{\prime}\in\mathcal{F}}\sum_{i\in\mathcal{K}\backslash\{k\}}\text{Tr}[\tilde{\mathbb{H}}_{k,n,f^{\prime}}\tilde{\mathbb{W}}_{i,n,f^{\prime}}]-\text{Tr}[\tilde{\mathbb{H}}_{k,n,f^{\prime}}{\mathbb{W}^{\prime}}_{i,k,n,f^{\prime}}]+e_{k,n}^{f^{\prime}}\Big(\mathcal{A}(\tilde{\mathbb{W}}_{i,n,f^{\prime}})-\mathcal{B}(\mathbb{W}^{\prime}_{i,k,n,f^{\prime}})\Big)+\sigma^{2}_{k,n,f},$ (43) where we define $\displaystyle\mathcal{A}(\tilde{\mathbb{W}}_{k,n,f})\triangleq\|\rho_{k,n}^{f}\mathbb{W}_{k,n,f}\|={\|\tilde{\mathbb{W}}_{k,n,f}\|},$ (44) $\displaystyle\mathcal{B}(\mathbb{W}^{\prime}_{i,k,n,f^{\prime}})\triangleq\|x_{i,n}^{k,f^{\prime}}\mathbb{W}_{i,n,f^{\prime}}\|=\|{\mathbb{W}^{\prime}}_{i,k,n,f^{\prime}}\|.$ (45) However, the total data rate (42) is still non-convex. Let us define $\displaystyle\psi_{f,k,n}\triangleq\exp({a_{f,k,n}}),\ \phi_{f,k,n}\triangleq\exp({b_{f,k,n}}),$ (46) where $a_{f,k,n}$ and $b_{f,k,n}$ are slack variables.
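As a quick sanity check on the two integer-handling reformulations introduced above, the sketch below verifies exhaustively that C7-C9 pin $x_{i,n}^{k,f}$ to the product $\xi_{i,n}^{k}\rho_{i,n}^{f}$ over binary points, and that a scalar analogue of the big-M constraints C16-C19 (an assumption for illustration; the paper's constraints are matrix-valued) pins $\tilde{\mathbb{W}}$ to $\rho\mathbb{W}$.

```python
from itertools import product

# C7-C9: over binary points, the only feasible x is the product xi * rho
for xi, rho in product((0, 1), repeat=2):
    feasible = [x for x in (0, 1)
                if xi >= x and rho >= x and rho + xi - 1 <= x]
    assert feasible == [xi * rho]

# scalar analogue of C16-C19: for binary rho and 0 <= w <= P_max, the
# feasible interval for w_tilde collapses to the single point rho * w
P_max = 10.0                                   # assumed power budget
for rho in (0, 1):
    for w in (0.0, 3.7, P_max):
        lo = max(0.0, w - (1 - rho) * P_max)   # from C18 and C19
        hi = min(w, P_max * rho)               # from C17 and C16
        assert lo <= rho * w <= hi and hi - lo < 1e-12
print("linearization (C7-C9) and big-M (C16-C19) checks passed")
```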
Moreover, the slack variables are bounded from below as follows [48]: $\displaystyle\exp({a_{f,k,n}})\geq\sigma^{2}_{k,n,f},\ \exp({b_{f,k,n}})\geq\sigma^{2}_{k,n,f}.$ (47) Now, by substituting (46) into the objective function, we obtain $\displaystyle\max_{\mathbb{W},\mathbb{W^{\prime}},\tilde{\mathbb{W}},\bm{\xi},\bm{\rho},\mathbb{x}}\;\frac{\log_{2}\prod_{f}\prod_{k}\prod_{n}\frac{\psi_{f,k,n}}{\phi_{f,k,n}}}{P_{\text{Total}}(\mathbb{W})}=\max_{\mathbb{W},\mathbb{W^{\prime}},\tilde{\mathbb{W}},\bm{\xi},\bm{\rho},\mathbb{x},\mathbb{a},\mathbb{b}}\frac{\log_{2}\prod_{f}\prod_{k}\prod_{n}\exp({a_{f,k,n}}-b_{f,k,n})}{P_{\text{Total}}(\mathbb{W})}=\max_{\mathbb{W},\mathbb{W^{\prime}},\tilde{\mathbb{W}},\bm{\xi},\bm{\rho},\mathbb{x},\mathbb{a},\mathbb{b}}\frac{\sum\limits_{f\in\mathcal{F}}\sum\limits_{k\in\mathcal{K}}\sum\limits_{n\in\mathcal{N}}({a_{f,k,n}}-b_{f,k,n})\log_{2}(e)}{P_{\text{Total}}(\mathbb{W})},$ where $\mathbb{a}=[a_{f,k,n}]$ and $\mathbb{b}=[b_{f,k,n}]$ are the collection vectors of the slack variables. Also, $P_{\text{Total}}(\mathbb{W})=\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\frac{1}{\beta}\text{Tr}[\mathbb{W}_{k,n,f}]+P_{\text{Static}}$. For notational simplicity, let us define $\Xi\triangleq\left[\mathbb{W^{\prime}},\tilde{\mathbb{W}},{\mathbb{W}},\bm{\xi},\bm{\rho},\mathbb{x},\mathbb{a},\mathbb{b}\right]$ as the collection of the optimization variables. It is worthwhile to mention that, with this method, we can easily rewrite the non-convex constraint (10) as follows: $\displaystyle\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\varphi_{f}(a_{f,k,n}-b_{f,k,n})\leq R_{\max}^{f},\forall f\in\mathcal{F}.$ (48) Now, the optimization problem at hand can be mathematically formulated as $\displaystyle\max_{\Xi}$ (49a) $\displaystyle\frac{\sum\limits_{f\in\mathcal{F}}\sum\limits_{k\in\mathcal{K}}\sum\limits_{n\in\mathcal{N}}({a_{f,k,n}}-b_{f,k,n})\log_{2}(e)}{P_{\text{Total}}(\mathbb{W})}+h(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)$ (49b) s.t. $\displaystyle\text{C}_{1}:\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\text{Tr}[\tilde{\mathbb{W}}_{k,n,f}]\leq P^{f}_{\text{max}},$ (49c) $\displaystyle\text{C}_{2}:\sum_{k\in\mathcal{K}}\rho_{k,n}^{f}\leq L_{n}^{f},$ (49d) $\displaystyle\text{C}_{3}:\mathbb{W}_{k,n,f}\succeq\mathbb{0},$ (49e) $\displaystyle\text{C}_{4}:\text{Rank}(\mathbb{W}_{k,n,f})\leq 1,$ (49f) $\displaystyle\exp({a_{f,k,n}})\geq\sigma^{2}_{k,n,f},\ \exp({b_{f,k,n}})\geq\sigma^{2}_{k,n,f},$ (49g) $\displaystyle a_{f,k,n}\geq b_{f,k,n},$ (49h) $\displaystyle\psi_{f,k,n}\geq\exp(a_{f,k,n}),\ \phi_{f,k,n}\leq\exp(b_{f,k,n}),$ (49i) $\displaystyle\text{C}_{7}-\text{C}_{10},\text{C}_{12},\text{C}_{14},\text{C}_{16}-\text{C}_{23},$ (49j) $\displaystyle(2),(7),(8),(10),(31),$ (49k) where $h(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)\triangleq-f(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)+g(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)$ is the penalty function.
Furthermore, $f(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)$ and $g(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)$ are defined as $\displaystyle f(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)\triangleq\lambda\Bigg(\sum_{i,k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\xi_{i,n}^{k}+\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\rho_{k,n}^{f}+\sum_{f\in\mathcal{F}}\sum_{i,k\in\mathcal{K}}\sum_{n\in\mathcal{N}}x_{i,n}^{k,f}\Bigg),$ (50) $\displaystyle g(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)\triangleq\lambda\Bigg(\sum_{i,k\in\mathcal{K}}\sum_{n\in\mathcal{N}}(\xi_{i,n}^{k})^{2}+\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}(\rho_{k,n}^{f})^{2}+\sum_{f\in\mathcal{F}}\sum_{i,k\in\mathcal{K}}\sum_{n\in\mathcal{N}}(x_{i,n}^{k,f})^{2}\Bigg),$ (51) respectively. It is worth mentioning that $\lambda$ is a penalty factor that penalizes the objective function for any $\xi_{i,n}^{k}$, $\rho_{k,n}^{f}$, and $x_{i,n}^{k,f}$ which are not binary (i.e., whose values lie strictly between $0$ and $1$) [33]. However, exactly binary values are not always obtained; in this case, we round the relaxed values to the nearest integers. The following proposition provides a mathematical analysis of the penalty factor.

###### Proposition 2.

For a sufficiently large value of $\lambda$, we have $\min_{\lambda}\max_{\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}}\mathcal{L}(\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x})=\max_{\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}}\min_{\lambda}\mathcal{L}(\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x})$ (note that $\mathcal{L}(\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x})$ is the objective function in (35)). In other words, the optimization problem (35) is equivalent to (49), and both problems attain the same optimal value.

###### Proof.

Please see Appendix B. ∎
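The role of the penalty term $h=-f+g$ can be seen numerically: it vanishes exactly at binary points and strictly decreases the maximized objective for fractional relaxations. The sketch below uses an assumed penalty factor.

```python
import numpy as np

lam = 1e3                                    # assumed penalty factor lambda
h = lambda v: lam * np.sum(v ** 2 - v)       # h = -f + g from (50)-(51), entrywise

binary = np.array([0.0, 1.0, 1.0, 0.0])
fractional = np.array([0.3, 0.9, 0.5, 0.0])
assert h(binary) == 0.0          # binary points incur no penalty
assert h(fractional) < 0.0       # relaxed values are penalized in the maximization
print("penalty at binary point:", h(binary), "; at fractional point:", h(fractional))
```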
Problem (49) is non-convex due to the constraints (49i) and (31) and the term $g(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)$ in the objective function. In order to convert it into a convex one, we employ the MM approach, in which the surrogate function is obtained from the first-order Taylor approximation. Therefore, we use the following inequalities: $\displaystyle g(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)\geq g(\bm{\xi}^{t-1},\bm{\rho}^{t-1},\mathbb{x}^{t-1},\lambda)+\nabla_{\bm{\xi}}g^{T}(\bm{\xi}^{t-1},\bm{\rho}^{t-1},\mathbb{x}^{t-1},\lambda)(\bm{\xi}-\bm{\xi}^{t-1})+\nabla_{\bm{\rho}}g^{T}(\bm{\xi}^{t-1},\bm{\rho}^{t-1},\mathbb{x}^{t-1},\lambda)(\bm{\rho}-\bm{\rho}^{t-1})+\nabla_{\mathbb{x}}g^{T}(\bm{\xi}^{t-1},\bm{\rho}^{t-1},\mathbb{x}^{t-1},\lambda)(\mathbb{x}-\mathbb{x}^{t-1})\triangleq\tilde{g}(\bm{\xi},\bm{\rho},\mathbb{x},\lambda),$ (52) $\displaystyle\mathcal{A}(\tilde{\mathbb{W}}_{k,n,f})\geq\mathcal{A}(\tilde{\mathbb{W}}^{t-1}_{k,n,f})+\text{Tr}\Big(\nabla_{\tilde{\mathbb{W}}_{k,n,f}}\mathcal{A}(\tilde{\mathbb{W}}^{t-1}_{k,n,f})\big[\tilde{\mathbb{W}}_{k,n,f}-\tilde{\mathbb{W}}^{t-1}_{k,n,f}\big]\Big)\triangleq\tilde{\mathcal{A}}(\tilde{\mathbb{W}}_{k,n,f}),$ (53) $\displaystyle\mathcal{B}(\mathbb{W}^{\prime}_{i,k,n,f^{\prime}})\geq\mathcal{B}(\mathbb{W}^{\prime\,t-1}_{i,k,n,f^{\prime}})+\text{Tr}\Big(\nabla_{\mathbb{W}^{\prime}_{i,k,n,f^{\prime}}}\mathcal{B}(\mathbb{W}^{\prime\,t-1}_{i,k,n,f^{\prime}})\big[\mathbb{W}^{\prime}_{i,k,n,f^{\prime}}-\mathbb{W}^{\prime\,t-1}_{i,k,n,f^{\prime}}\big]\Big)\triangleq\tilde{\mathcal{B}}(\mathbb{W}^{\prime}_{i,k,n,f^{\prime}}),$ (54) $\displaystyle\phi_{f,k,n}\leq\exp(b^{t-1}_{f,k,n})(b_{f,k,n}-b^{t-1}_{f,k,n}+1)\triangleq\tilde{\phi}_{f,k,n}.$ (55) It should be noted that the right-hand sides of (52)-(54) are affine functions.
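The surrogate (55) (and likewise (65) later) hinges on the fact that the first-order Taylor expansion of the convex function $\exp(\cdot)$ is a global under-estimator that is tight at the expansion point; the sketch below checks this on an assumed grid.

```python
import numpy as np

b0 = 0.7                                     # assumed expansion point b^{t-1}
b = np.linspace(-2.0, 4.0, 601)
minorant = np.exp(b0) * (b - b0 + 1.0)       # affine surrogate from (55)

assert np.all(np.exp(b) >= minorant - 1e-12)                 # global under-estimator
assert np.isclose(np.exp(b0) * (b0 - b0 + 1.0), np.exp(b0))  # tight at b = b0
print("Taylor surrogate of exp verified: minorant everywhere, tight at b0")
```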
Now, the main remaining challenge in problem (49) is the non-convex SIC ordering constraint (31). We handle this constraint similarly to the objective function. To this end, we first rewrite (31) as (56) below, by substituting the previously defined auxiliary variables $x_{i,n}^{k,f^{\prime}}=\xi_{i,n}^{k}\cdot\rho_{i,n}^{f^{\prime}}$, $\tilde{\mathbb{W}}_{k,n,f}=\rho_{k,n}^{f}\mathbb{W}_{k,n,f}$, and ${\mathbb{W^{\prime}}}_{i,k,n,f}=x_{i,n}^{k,f}\mathbb{W}_{i,n,f}$, together with (44) and (45). $\displaystyle\log_{2}\bigg(1+\frac{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f^{\prime}}{\mathbb{W^{\prime}}}_{i,k,n,f^{\prime}}]+e_{i,n}^{f^{\prime}}{\mathcal{B}({\mathbb{W^{\prime}}}_{i,k,n,f^{\prime}})}}{\sum_{f\in\mathcal{F}}\sum_{k^{\prime}\in\mathcal{K}\backslash\{i\}}\Big\{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f}\tilde{\mathbb{W}}_{k^{\prime},n,f}]-e_{i,n}^{f}{\mathcal{A}(\tilde{\mathbb{W}}_{k^{\prime},n,f})}-\big\{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f}\mathbb{W^{\prime}}_{i,k^{\prime},n,f}]-e_{i,n}^{f}\mathcal{B}(\mathbb{W^{\prime}}_{i,k^{\prime},n,f})\big\}\Big\}+\sigma_{i,n,f^{\prime}}^{2}}\bigg)$ (56) $\displaystyle\leq\log_{2}\bigg(1+\frac{\text{Tr}[\tilde{\mathbb{H}}_{k,n,f^{\prime}}\mathbb{W^{\prime}}_{i,k,n,f^{\prime}}]-e_{k,n}^{f^{\prime}}{\mathcal{B}({\mathbb{W^{\prime}}}_{i,k,n,f^{\prime}})}}{{\sum_{f\in\mathcal{F}}\sum_{k^{\prime}\in\mathcal{K}\backslash\{i\}}\Big\{\text{Tr}[\tilde{\mathbb{H}}_{k,n,f}\tilde{\mathbb{W}}_{k^{\prime},n,f}]+e_{i,n}^{f}{\mathcal{A}(\tilde{\mathbb{W}}_{k^{\prime},n,f})}-\big\{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f}\mathbb{W^{\prime}}_{i,k^{\prime},n,f}]+e_{i,n}^{f}\mathcal{B}(\mathbb{W^{\prime}}_{i,k^{\prime},n,f})\big\}\Big\}+\sigma_{k,n,f}^{2}}}\bigg).$ Now, we can rewrite (56) as follows: $\displaystyle\log_{2}(\frac{\mathcal{D}_{i,k,n,f^{\prime}}}{\mathcal{E}_{i,n,f^{\prime}}})-\log_{2}(\frac{\mathcal{F}_{i,k,n,f^{\prime}}}{\mathcal{G}_{i,k,n,f^{\prime}}})\leq 0,$ (57) where $\displaystyle\mathcal{E}_{i,n,f^{\prime}}=\sum_{f\in\mathcal{F}}\sum_{k^{\prime}\in\mathcal{K}\backslash\{i\}}\Big\{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f}\tilde{\mathbb{W}}_{k^{\prime},n,f}]-e_{i,n}^{f}{\mathcal{A}(\tilde{\mathbb{W}}_{k^{\prime},n,f})}-\big\{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f}\mathbb{W^{\prime}}_{i,k^{\prime},n,f}]-e_{i,n}^{f}\mathcal{B}(\mathbb{W^{\prime}}_{i,k^{\prime},n,f})\big\}\Big\}+\sigma_{i,n,f^{\prime}}^{2},$ (58) $\displaystyle\mathcal{D}_{i,k,n,f^{\prime}}=\text{Tr}[\tilde{\mathbb{H}}_{i,n,f^{\prime}}{\mathbb{W^{\prime}}}_{i,k,n,f^{\prime}}]+e_{i,n}^{f^{\prime}}{\mathcal{B}({\mathbb{W^{\prime}}}_{i,k,n,f^{\prime}})}+\mathcal{E}_{i,n,f^{\prime}},$ (59) $\displaystyle\mathcal{G}_{i,k,n,f^{\prime}}=\sum_{f\in\mathcal{F}}\sum_{k^{\prime}\in\mathcal{K}\backslash\{i\}}\text{Tr}[\tilde{\mathbb{H}}_{k,n,f}\tilde{\mathbb{W}}_{k^{\prime},n,f}]+e_{i,n}^{f}{\mathcal{A}(\tilde{\mathbb{W}}_{k^{\prime},n,f})}-\big\{\text{Tr}[\tilde{\mathbb{H}}_{i,n,f}\mathbb{W^{\prime}}_{i,k^{\prime},n,f}]+e_{i,n}^{f}\mathcal{B}(\mathbb{W^{\prime}}_{i,k^{\prime},n,f})\big\}+\sigma_{k,n,f}^{2},$ (60) $\displaystyle\mathcal{F}_{i,k,n,f^{\prime}}=\mathcal{G}_{i,k,n,f^{\prime}}+\text{Tr}[\tilde{\mathbb{H}}_{k,n,f^{\prime}}\mathbb{W^{\prime}}_{i,k,n,f^{\prime}}]-e_{k,n}^{f^{\prime}}{\mathcal{B}({\mathbb{W^{\prime}}}_{i,k,n,f^{\prime}})}.$ (61) Hereafter, we drop the subscripts of $\mathcal{E}_{i,n,f^{\prime}},\ \mathcal{D}_{i,k,n,f^{\prime}},\ \mathcal{G}_{i,k,n,f^{\prime}},\ \text{and}\ \mathcal{F}_{i,k,n,f^{\prime}}$ for notational simplicity and denote them by $\mathcal{E},\ \mathcal{D},\ \mathcal{G}$, and $\mathcal{F}$, respectively.
After this, similar to (46), we consider the following definitions: $\displaystyle\mathcal{D}\triangleq\exp(d),\mathcal{E}\triangleq\exp(e),\mathcal{F}\triangleq\exp(f),\mathcal{G}\triangleq\exp(g),$ (62) where $d$, $e$, $f$, and $g$ are auxiliary variables. Substituting (62) into (57), we obtain $\displaystyle\begin{split}0&\geq\log_{2}(\frac{\mathcal{D}}{\mathcal{E}})-\log_{2}(\frac{\mathcal{F}}{\mathcal{G}}),\\ &=\log_{2}(\exp(d-e))-\log_{2}(\exp(f-g)),\\ &=\log_{2}e\times(d-e-(f-g)),\end{split}$ (63) which has a linear form. For these auxiliary variables, we have the following constraints: $\displaystyle\begin{split}&d\geq e,\quad f\geq g,\\ &\mathcal{D}\geq\exp(d),\mathcal{E}\leq\exp(e),\mathcal{F}\geq\exp(f),\mathcal{G}\leq\exp(g).\end{split}$ (64) Constraint (64) is non-convex due to the form of $\mathcal{E}$ and $\mathcal{G}$ and the constraints $\mathcal{E}\leq\exp(e)$ and $\mathcal{G}\leq\exp(g)$. However, by replacing $\mathcal{A}(\cdot)$ and $\mathcal{B}(\cdot)$ with their linear approximations defined in (53) and (54), $\mathcal{E}$ and $\mathcal{G}$ become affine functions, denoted by $\tilde{\mathcal{E}}$ and $\tilde{\mathcal{G}}$, respectively. Finally, by employing a Taylor series expansion of $\exp(e)$ and $\exp(g)$, the constraints $\mathcal{E}\leq\exp(e)$ and $\mathcal{G}\leq\exp(g)$ can be approximated by $\displaystyle\begin{split}&\tilde{\mathcal{E}}\leq\exp(\tilde{e})(e-\tilde{e}+1),\\ &\tilde{\mathcal{G}}\leq\exp(\tilde{g})(g-\tilde{g}+1),\end{split}$ (65) where $\tilde{e}$ and $\tilde{g}$ are feasible points. Considering these, the convex form of (31) is given by the following constraint: $\displaystyle\begin{split}&d\geq e,\ f\geq g,\ \mathcal{D}\geq\exp(d),\ (65).\end{split}$ (66) Recall that for notational simplicity, we removed the subscripts of $d$, $e$, $f$, and $g$. As the final step, we have to tackle the non-convex constraint (7). To this end, we first handle the right-hand side of (7), i.e., $\xi_{k,n}^{i}\leq\sum_{f\in\mathcal{F}}\rho_{k,n}^{f}\cdot\sum_{f^{\prime}\in\mathcal{F}}\rho_{i,n}^{f^{\prime}},k\neq i$, by using the MM approach to obtain a convex form, inspired by [37]. It is straightforward to show that $z_{1}z_{2}=\frac{1}{2}[(z_{1}+z_{2})^{2}-(z_{1})^{2}-(z_{2})^{2}]$, where we define $z_{1}=z_{k,n}\triangleq\sum_{f\in\mathcal{F}}\rho_{k,n}^{f}$ and $z_{2}=y_{i,n}\triangleq\sum_{f^{\prime}\in\mathcal{F}}\rho_{i,n}^{f^{\prime}}$ to transform (7) into a convex form. Now, we rewrite the constraint in (7) as follows: $\displaystyle 2\xi_{k,n}^{i}\leq\big(z_{k,n}+y_{i,n}\big)^{2}-\bigg((z_{k,n})^{2}+(y_{i,n})^{2}\bigg),\forall k,i\in\mathcal{K},k\neq i.$ The above constraint is still non-convex. Similarly to before, we adopt a first-order Taylor approximation of $Q_{k,n}^{i}\triangleq(z_{k,n})^{2}+(y_{i,n})^{2}$ to obtain a convex constraint. After employing the Taylor approximation, this constraint can be written as follows: $\displaystyle 2\xi_{k,n}^{i}\leq\bigg(z_{k,n}+y_{i,n}\bigg)^{2}-\tilde{Q}_{k,n}^{i},\forall k,i\in\mathcal{K},k\neq i,$ (67) where $\tilde{Q}_{k,n}^{i}$ is the first-order Taylor approximation of ${Q}_{k,n}^{i}$. We further use the following definition and proposition from nonlinear fractional programming. ###### Definition 1.
A generalized fractional problem is defined as: $\displaystyle\max_{\mathbf{x}}\ \min_{k=1,...,K}\frac{f_{k}(\mathbf{x})}{g_{k}(\mathbf{x})},\quad\text{s.t.}\ \mathbf{x}\in\mathcal{D}.$ (68) ###### Proposition 3. The optimal solution $\mathbf{x}^{\ast}$ of (68) is achieved if and only if [35] $\mathbf{x}^{\ast}=\text{argmax}_{\mathbf{x}\in{\mathcal{D}}}\bigg\{\min_{k=1,...,K}[f_{k}(\mathbf{x})-\alpha^{\ast}g_{k}(\mathbf{x})]\bigg\},$ (69) where $\alpha^{\ast}$ is obtained via the following problem: $F(\alpha)=\max_{\mathbf{x}\in\mathcal{D}}\min_{k=1,...,K}[f_{k}(\mathbf{x})-\alpha g_{k}(\mathbf{x})]=0.$ (70) By using Proposition 3, the optimal value of the EE is given by $\displaystyle q^{\ast}\triangleq\frac{\log_{2}(e)\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}^{*}}-b_{f,k,n}^{*})}{P_{\text{Total}}(\mathbb{W}^{\ast})}=$ $\displaystyle\max_{\Xi}\frac{\log_{2}(e)\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}}-b_{f,k,n})}{P_{\text{Total}}(\mathbb{W})}.$ (71) As a result, the maximum EE $q^{*}$ can be achieved if and only if $\displaystyle\max_{\Xi}\ \log_{2}(e)\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}}-b_{f,k,n})-q^{\ast}P_{\text{Total}}(\mathbb{W})$ $\displaystyle=\log_{2}(e)\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}^{*}}-b_{f,k,n}^{*})-q^{\ast}P_{\text{Total}}(\mathbb{W}^{\ast})=0.$ (72) Based on the previous steps and defining $\Xi^{\prime}\triangleq[\Xi,d,e,f,g]$, we solve the following problem instead of dealing with the fractional programming in (49): $\displaystyle\max_{\Xi^{\prime}}$ $\displaystyle\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}}-b_{f,k,n})$ $\displaystyle-q\cdot P_{\text{Total}}(\mathbb{W})-f(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)+\tilde{g}(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)$ (73a) s.t. $\displaystyle\psi_{f,k,n}\geq\exp(a_{f,k,n}),$ (73b) $\displaystyle\text{C}_{7}-\text{C}_{10},\text{C}_{12},\text{C}_{14},\text{C}_{16}-\text{C}_{23},$ (73c) $\displaystyle\eqref{User_Assosiaction_2},\eqref{SIC_Con_1},\eqref{Fronthualu1},\eqref{SIC_Con},\eqref{bound},(67),\eqref{A_bound}.$ (73d) Notice that (73) is a standard SDP program which can be solved optimally using an off-the-shelf optimization tool, e.g., CVX. Denote by ${\mathbb{W}}_{k,n,f}^{\ast},\forall k,n,f,$ the optimal value of variable ${\mathbb{W}}_{k,n,f}$ in the solution of (73). If ${\mathbb{W}}_{k,n,f}^{\ast}$ satisfies the rank-one constraint, i.e., Rank$(\mathbb{W}_{k,n,f}^{\ast})=1$, the optimal solution $\mathbb{w}_{k,n,f}^{\ast}$ can be obtained via the eigenvalue decomposition (EVD) of $\mathbb{W}_{k,n,f}^{\ast}$. As for the final step, we employ SDP relaxation by removing constraint ${C}_{4}$. Problem (73) may then fail to yield a rank-one solution; thus, we add a penalty term to the objective function [48, 53]. To this end, we first introduce the following proposition. ###### Proposition 4. The inequality $||\mathbf{Y}||_{*}\triangleq\sum_{i}\sigma_{i}\geq||\mathbf{Y}||_{2}=\underset{i}{\text{max}}\{\sigma_{i}\}$ holds for any given $\mathbf{Y}\in\mathbb{H}^{m\times n},m,n\in\mathbb{N}$, where $\sigma_{i}$ is the $i$-th eigenvalue of $\mathbf{Y}$.
The equality holds if and only if $\mathbf{Y}$ is rank-one. Inspired by Proposition 4, an equivalent form of the rank-one constraint $C_{4}$ can be written as $\varpi_{k,n,f}\triangleq||\mathbf{W}_{k,n,f}||_{*}-||\mathbf{W}_{k,n,f}||_{2}\leq 0,\forall k,n,f$. Hence, we use a penalty-based approach by integrating this constraint into the objective function. By introducing $\mu>1$ as a penalty factor, the objective function, after dropping the constant term $\log_{2}(e)$, can be rewritten as follows: $\displaystyle\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}}-b_{f,k,n})-q\cdot P_{\text{Total}}$ $\displaystyle-f(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)+\tilde{g}(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)-{\mu}\times\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\varpi_{k,n,f}.$ (74) For a sufficiently large value of $\mu$, maximizing (74) under the constraints of (73) yields a rank-one solution by ensuring a small value of $\varpi_{k,n,f}$ [53, 48]. However, (74) is still a non-convex function of $\mathbf{W}_{k,n,f}$. Hence, we rewrite $\varpi_{k,n,f}$ as follows: $\displaystyle\tilde{\varpi}_{k,n,f}=||\mathbf{W}_{k,n,f}||_{*}-\tilde{\mathcal{A}}(\mathbf{W}_{k,n,f}),\forall k,n,f,$ (75) where $\tilde{\mathcal{A}}(\mathbf{W}_{k,n,f})$ is obtained by (53). The above equation is updated iteratively with iteration number $t$. We also update the penalty factor in each iteration as $\mu^{(t+1)}=\alpha\mu^{(t)}$, where $\alpha>1$ is a constant. Nevertheless, the returned solution may still not be rank-one; in such cases, the Gaussian randomization method is exploited to obtain a feasible solution. Consequently, the optimization problem (73) can be written with the same constraints by considering the following objective function: $\displaystyle\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}}-b_{f,k,n})-q\cdot P_{\text{Total}}(\mathbb{W})$ $\displaystyle\underbrace{\underbrace{-f(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)+\tilde{g}(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)}_{\text{Term 1}}\underbrace{-{\mu}\times\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\tilde{\varpi}_{k,n,f}}_{\text{Term 2}}}_{\text{Penalty Function}},$ (76) where "Term 1" of the penalty function penalizes the relaxed binary variables and "Term 2" enforces the rank-one solution discussed above. It is worth mentioning that the final optimization problem is a standard SDP program which can be solved optimally using CVX. Therefore, an iterative algorithm can be employed in which the solution obtained in iteration $t$ serves as the approximation point for the next iteration $(t+1)$, tightening the upper bound of (76) [20, 33]. The main steps of the proposed solution algorithm are listed in Algorithm 1. Next, we discuss the initialization algorithm and the optimality of the proposed solution.
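As a quick numerical illustration of the rank-one penalty $\varpi_{k,n,f}$ used above (a toy sketch of ours, not part of the proposed algorithm), the following Python snippet shows that $||\mathbf{W}||_{*}-||\mathbf{W}||_{2}$ vanishes exactly on rank-one matrices, which is what a large $\mu$ drives the solver toward:

```python
import numpy as np

def rank_one_penalty(W):
    # Nuclear norm (sum of singular values) minus spectral norm (largest one);
    # this is zero iff W has at most one nonzero singular value, i.e., rank one.
    s = np.linalg.svd(W, compute_uv=False)
    return s.sum() - s.max()

rng = np.random.default_rng(1)
w = rng.standard_normal((4, 1)) + 1j * rng.standard_normal((4, 1))
W_rank1 = w @ w.conj().T                 # beamformer outer product w w^H
W_full = W_rank1 + 0.3 * np.eye(4)       # perturbed, higher-rank matrix

print(np.isclose(rank_one_penalty(W_rank1), 0.0))  # True
print(rank_one_penalty(W_full) > 0)                # True
```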
0: $q_{0}=0,\ t=0,\ \epsilon>0$, initialize feasible points as described in Section III-A1 1: $q_{t}$ : Dinkelbach parameter 2: $t$ : Iteration index 3: $\epsilon$ : The maximum tolerance 4: while $q_{t}-q_{t-1}>\epsilon$ do 5: Obtain the resource allocation policy by solving problem (73) 6: Set $t=t+1$ 7: Set $q^{\ast}=\frac{\log_{2}(e)\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}^{*}}-b^{*}_{f,k,n})}{P_{\text{Total}}(\mathbb{W}^{\ast})}=\max_{\Xi^{\prime}}(76)$ 8: end while 9: Set $\{\mathbb{W^{\prime}}^{\ast},\tilde{\mathbb{W}}^{\ast},\bm{\xi}^{\ast},\bm{\rho}^{\ast},\mathbb{x}^{\ast},\mathbb{b}^{\ast}\}=\{\mathbb{W^{\prime}}^{t-1},\tilde{\mathbb{W}}^{t-1},\bm{\xi}^{t-1},\bm{\rho}^{t-1},\mathbb{x}^{t-1},\mathbb{b}^{t-1}\}$ 10: return Algorithm 1 Proposed Iterative Algorithm #### III-A1 Initialization Algorithm Due to the Taylor approximations, we must determine initial feasible values for the relevant variables: the relaxed variables, denoted by $\bm{\rho}^{0}=[\rho_{k,n}^{f,0}],\ \bm{\xi}^{0}=[\xi_{k,n}^{f,0}],\ \mathbb{X}^{0}=[x_{i,n}^{k,f,0}]$; the beamforming variables, i.e., $\mathbb{W}^{0},\ \mathbb{W^{\prime}}^{0},\ \text{and}\ \tilde{\mathbb{W}}^{0}$ (subscripts are removed for simplicity); and the auxiliary variables, i.e., $\mathbb{b}^{0}=[b_{f,k,n}^{0}],\ \mathbb{e}^{0}=[e_{i,n,f^{\prime}}^{0}],\ \text{and}\ \mathbb{g}^{0}=[g_{i,k,n,f^{\prime}}^{0}]$. We randomly generate $\bm{\rho}^{0},\ \bm{\xi}^{0},\ \text{and}\ \mathbb{X}^{0}$ between zero and one. The beamforming variables should satisfy the power budget constraint; therefore, we set $\mathbb{w}^{0}=\sqrt{\frac{P_{\max}^{1}}{M}}\cdot\mathbb{r}_{M\times 1}$, where $P_{\max}^{1}$ denotes the power budget of the macro AAU, $M$ is the number of antennas, and $\mathbb{r}_{M\times 1}$ is a vector of size $M\times 1$ with random elements between zero and one. Now, according to the previous definitions, we can set $\mathbb{W}_{k,n,f}^{0}=\mathbb{w}^{0}(\mathbb{w}^{0})^{\dagger}$, $\mathbb{W^{\prime}}_{i,k,n,f}^{0}={x}_{i,n}^{k,f,0}\mathbb{W}_{k,n,f}^{0},\ \text{and}\ \tilde{\mathbb{W}}_{k,n,f}^{0}=\rho_{k,n}^{f,0}\mathbb{W}_{k,n,f}^{0},\ \forall f,k,i,n$.
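A minimal Python sketch of this random initialization (our illustration only; the dimensions follow the simulation setup of Table II, and a 40 dBm budget is expressed as 10 W on a linear scale) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
K, N, F, M = 8, 5, 4, 3                 # users, subcarriers, AAUs, antennas
P_max = 10.0                            # macro-AAU power budget: 40 dBm = 10 W

# Relaxed binary variables drawn uniformly in (0, 1).
rho = rng.uniform(size=(K, N, F))       # subcarrier/AAU assignment rho_{k,n}^f
xi = rng.uniform(size=(K, K, N))        # SIC ordering xi_{i,n}^k
x = rng.uniform(size=(K, K, N, F))      # products x_{i,n}^{k,f}

# Beamformer scaled so that ||w||^2 <= P_max, as in w0 = sqrt(P_max/M) * r.
r = rng.uniform(size=(M, 1))
w0 = np.sqrt(P_max / M) * r
W0 = w0 @ w0.conj().T                   # lifted rank-one matrix W = w w^H

# Lifted auxiliary matrices per the definitions in the text,
# shown here for one index tuple (k, n, f) and (i, k, n, f).
W_tilde0 = rho[0, 0, 0] * W0
W_prime0 = x[1, 0, 0, 0] * W0
print(np.trace(W0).real <= P_max)       # power budget holds by construction
```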
The feasible points for the auxiliary variables can be set as follows: $\displaystyle b_{f,k,n}^{0}\triangleq\phi_{f,k,n}^{0}=\ln\Big(\sum_{f^{\prime}\in\mathcal{F}}\sum_{i\in\mathcal{K}\backslash\{k\}}\text{Tr}[\tilde{\mathbb{H}}_{k,n,f^{\prime}}\tilde{\mathbb{W}}_{i,n,f^{\prime}}^{0}]$ $\displaystyle-\text{Tr}[\tilde{\mathbb{H}}_{k,n,f^{\prime}}\mathbb{W^{\prime}}_{i,k,n,f^{\prime}}^{0}]$ $\displaystyle+e_{k,n}^{f^{\prime}}\Big(\mathcal{A}(\mathbb{W}_{i,n,f^{\prime}}^{0})-\mathcal{B}(\mathbb{W^{{}^{\prime}}}_{i,k,n,f^{\prime}}^{0})\Big)+\sigma^{2}_{k,n,f}\Big).$ (77) Similarly, the feasible points of the auxiliary variables for handling the SIC ordering constraint (56) are obtained by $\displaystyle e_{i,n,f^{\prime}}^{0}\triangleq\ln(\mathcal{E}_{i,n,f^{\prime}}^{0}),\forall i,n,f^{\prime},$ (78) $\displaystyle g_{i,k,n,f^{\prime}}^{0}\triangleq\ln(\mathcal{G}_{i,k,n,f^{\prime}}^{0}),$ (79) where $\mathcal{E}_{i,n,f^{\prime}}^{0}$ and $\mathcal{G}_{i,k,n,f^{\prime}}^{0}$ are obtained by (58) and (60), respectively, by substituting the above initial values. However, the randomly generated values may not constitute a feasible solution; in such cases, we generate new ones until we find a feasible point. It is worth mentioning that this method may not be efficient, especially when there are many constraints and high-dimensional variables. For such cases, the initialization algorithm proposed in [53] can be utilized. Further, for given resources, e.g., the power budget, the problem may sometimes be infeasible; in this case, the elasticization method can be used [55]. #### III-A2 Optimality Analysis In the following, we discuss the optimality of the proposed algorithm. As discussed before, we resort to several approximation and relaxation methods, i.e., MM, Taylor series expansion, and the SDR technique, to transform the original problem into a convex one. In particular, we apply penalty factors for both the binary relaxation and the SDR to guarantee the tightness of the solution [48]. More specifically, we consider two penalty functions in the new objective function. In fact, the penalty factors $\lambda$ and $\mu$ are adopted to penalize the objective function when integer values and a rank-one solution are not attained. Hence, for large values of the penalty parameters, the relaxed binary variables take binary values and the beamforming matrix is rank-one. In this case, the maximum point of the new objective function is the maximum point of the original problem [53, 48]. We further adopt the MM technique, which may not reach the globally optimal solution; however, it achieves a near-optimal solution owing to the behavior of the MM algorithm [14]. ### III-B Low Complexity Algorithm Design Algorithm 1 achieves a near-optimal solution; however, it may not be suitable for large-scale resource allocation under limited computational resources. We therefore aim at designing a low-complexity algorithm to improve the practicality of Algorithm 1. The proposed low-complexity algorithm is a heuristic two-step iterative approach. In particular, the original problem is decomposed into two subproblems, namely: 1) scheduling (i.e., joint user association and subcarrier allocation) and 2) beamforming design and SIC ordering. Each subproblem is solved while fixing the variables of the other. The iterative procedure is adopted to find the scheduling, beamforming design, and SIC ordering; a skeleton of this procedure is sketched below.
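The following Python skeleton of the two-step procedure is our own sketch; the two subproblem solvers are hypothetical placeholders standing in for the matching algorithm of Sec. III-B1 and the convex program (81) of Sec. III-B2, and the toy EE sequence exists only so the loop terminates:

```python
import itertools

def solve_scheduling(rho, beams, sic_order):
    # Placeholder for the two-stage matching of Sec. III-B1 (AAU selection,
    # then subcarrier assignment) with beams and SIC ordering held fixed.
    return rho

def solve_beamforming_sic(rho, beams, sic_order):
    # Placeholder for the convex beamforming/SIC-ordering subproblem (81)
    # with the schedule held fixed (solved via CVX in the paper).
    return beams, sic_order

def worst_case_ee(rho, beams, sic_order, _it=itertools.count()):
    # Placeholder worst-case EE evaluation; a saturating toy sequence here,
    # purely so the illustration runs to convergence.
    return 1.0 - 0.5 ** next(_it)

def two_step_algorithm(rho, beams, sic_order, eps=1e-3, max_iter=50):
    ee_prev = float("-inf")
    for _ in range(max_iter):
        rho = solve_scheduling(rho, beams, sic_order)                    # step 1
        beams, sic_order = solve_beamforming_sic(rho, beams, sic_order)  # step 2
        ee = worst_case_ee(rho, beams, sic_order)
        if ee - ee_prev <= eps:   # stop once the EE improvement is negligible
            break
        ee_prev = ee
    return rho, beams, sic_order

print(two_step_algorithm(rho=None, beams=None, sic_order=None))
```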
#### III-B1 Solution of the Scheduling Subproblem By assuming fixed beamforming design and SIC ordering parameters, the scheduling subproblem is formulated as follows: $\displaystyle\max_{\bm{\rho}}\;$ $\displaystyle\ \eta_{\text{EE}}^{\text{Worst}}$ (80a) s.t. $\displaystyle\text{C}_{2}:\sum_{k\in\mathcal{K}}\rho_{k,n}^{f}\leq L_{n}^{f},\,\forall n\in\mathcal{N},f\in\mathcal{F},$ (80b) $\displaystyle\text{C}_{4}:\rho_{k,n}^{f}\in\{0,1\},\,\,\forall k\in\mathcal{K},n\in\mathcal{N},\forall f\in\mathcal{F},$ (80c) $\displaystyle\eqref{User_Assosiaction_2},(67),\eqref{SIC_Con_1},\eqref{SIC_Con}.$ Problem (80) is an integer nonlinear program. To solve it, we propose a low-complexity modified two-sided many-to-many matching algorithm [21]. As stated before, the scheduling variable determines both AAU selection and subcarrier assignment. Accordingly, we propose a two-stage matching algorithm [21, 43, 47, 42]: in the first stage, we match users to the AAUs ("AAU selection"), and in the second stage, we match the users of each AAU to the subcarriers ("subcarrier assignment"). In doing so, each user $k\in\mathcal{K}$ constructs its own preference list of AAUs, denoted by $\mathcal{L}_{k}^{\text{AAU}}=\{l_{k,f}\},\forall k,f,$ based on the path loss, i.e., the nearest AAU has the maximum preference and is first in the list $\mathcal{L}_{k}^{\text{AAU}}$. The users paired with each AAU $f$ are denoted by $\mathcal{K}_{f}$, which is the output of the first stage of the matching process. After that, each user in $\mathcal{K}_{f}$ regenerates its preference list with respect to each subcarrier $n$ based on $|\mathbb{h}_{k,n,f}^{\dagger}\mathbb{w}_{k,n,f}|$, $\mathbb{h}_{k,n,f}\in\mathcal{H}_{k,n,f}$, $\forall k\in\mathcal{K}_{f}$, as the matching criterion. Then, the second stage of the matching process starts, assigning subcarriers to the users of each AAU. ###### Proposition 5. Each stage of the adopted matching algorithm converges to a two-sided exchange-stable matching after a few iterations. Therefore, the devised two-stage matching algorithm is a stable matching algorithm. ###### Proof. Please see [Proposition 1, [21]]. ∎ #### III-B2 Beamforming Design and SIC Ordering Subproblems For the given scheduling, we aim to solve the beamforming design and SIC ordering subproblem. To this end, we first define a new variable as $\tilde{\mathbb{W}}_{i,k,n,f}\triangleq\xi^{k}_{i,n}\mathbb{W}_{k,n,f}$. In a similar manner, we adopt the same approach as before for obtaining the beamforming design as well as the SIC ordering. Consequently, the problem at hand can be written mathematically as $\displaystyle\max_{\tilde{\mathbb{W}},{\mathbb{W}},\bm{\xi},\mathbb{a},\mathbb{b}}$ $\displaystyle\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}({a_{f,k,n}}-b_{f,k,n})\log_{2}(e)-q\cdot P_{\text{Total}}(\mathbb{W})$ $\displaystyle-f(\bm{\xi},\lambda)+\tilde{g}(\bm{\xi},\lambda)-{\mu}\times\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\tilde{\varpi}_{k,n,f}$ (81a) s.t.
$\displaystyle\text{C}_{1}:\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\text{Tr}[\rho_{k,n}^{f}{\mathbb{W}}_{k,n,f}]\leq P^{f}_{\text{max}},\ \forall f\in\mathcal{F},$ (81b) $\displaystyle\text{C}_{2}:\mathbb{W}_{k,n,f}\succeq\mathbb{0},$ (81c) $\displaystyle\text{C}_{3}:\exp({a_{f,k,n}})\geq\sigma^{2}_{k,n,f},\ \exp({b_{f,k,n}})\geq\sigma^{2}_{k,n,f}$ (81d) $\displaystyle\text{C}_{4}:a_{f,k,n}\geq b_{f,k,n}$ (81e) $\displaystyle\text{C}_{5}:\psi_{f,k,n}\geq\exp(a_{f,k,n}),\ \phi_{f,k,n}\leq\exp(b_{f,k,n}),$ (81f) $\displaystyle\text{C}_{6}:\ \tilde{\mathbb{W}}_{i,k,n,f}\preceq P_{\text{max}}^{f}\mathbb{I}_{M}\xi_{i,n}^{k},$ (81g) $\displaystyle\text{C}_{7}:\ \tilde{\mathbb{W}}_{i,k,n,f}\preceq\mathbb{W}_{k,n,f},$ (81h) $\displaystyle\text{C}_{8}:\ \tilde{\mathbb{W}}_{i,k,n,f}\succeq\mathbb{W}_{k,n,f}-(1-\xi_{i,n}^{k})P_{\text{max}}\mathbb{I}_{M},$ (81i) $\displaystyle\text{C}_{9}:\ \tilde{\mathbb{W}}_{i,k,n,f}\succeq\mathbb{0},$ (81j) $\displaystyle\text{C}_{10}:\ 0\leq\xi_{i,n}^{k}\leq 1,$ (81k) $\displaystyle\eqref{SIC_Con_1},\eqref{Fronthualu1},\eqref{SIC_Con},(67),$ (81l) where $\tilde{\mathbb{W}}$ stands for all $\tilde{\mathbb{W}}_{i,k,n,f},\ \forall k,i,n,f,\ i\neq k$. This problem is convex and can be solved via efficient convex programming libraries such as CVX. ### III-C Complexity Analysis of the Solution Algorithms This section provides the complexity analysis and comparison of the proposed solution algorithms. In the first algorithm, i.e., Algorithm 1, we solve the original problem in one step in the form of (73) via CVX. In this problem, there are in total $7KNF+5K^{2}NF$ variables and $10KNF+8K^{2}NF+4K^{2}NFM^{2}+F+2KN+4KNFM^{2}$ convex and affine constraints. Note that the term $K^{2}$ instead of $K$ arises from the SIC ordering variable and the resulting constraints. Thus, the complexity of the algorithm per iteration is $\mathcal{O}\big((7KNF+5K^{2}NF)^{3}(10KNF+8K^{2}NF+4K^{2}NFM^{2}+F+2KN+4KNFM^{2})\big)$ [52]. Therefore, by considering $K>F$ and $N>F$, which is reasonable in practical settings, and for sufficiently large values of $K$ and $N$, the overall complexity order of our first algorithm is $\mathcal{O}\big((K^{2}NF)^{4}\big)$. In the second algorithm, we apply the alternating approach, so the overall complexity is a linear combination of the complexities of the two subproblem solutions. The solution of the first subproblem is a matching algorithm whose complexity is a linear function of the numbers of subcarriers, users, and AAUs, i.e., $\mathcal{O}(K\times N\times F)$. For the second subproblem (81), we apply a similar approach as in the first algorithm, but the number and dimension of the variables as well as the constraints are considerably reduced: problem (81) includes $4KNF+3K^{2}NF$ variables and $7KNF+5K^{2}NFM^{2}+2F$ convex and affine constraints. Based on the solution algorithm of (81), the computational complexity per iteration is $\mathcal{O}\big((4KNF+3K^{2}NF)^{2}(7KNF+5K^{2}NFM^{2}+2F)\big)$. Thus, the overall complexity is $\mathcal{O}\big((K^{2}NF)^{3}\big)$.
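To make the gap concrete, a back-of-the-envelope computation with the Table II setting ($K=8$, $N=5$, $F=4$; our illustration only, comparing just the dominant terms) gives:

```python
# Dominant per-iteration complexity terms for the Table II setting
# (K = 8 users, N = 5 subcarriers, F = 4 AAUs).
K, N, F = 8, 5, 4

alg1 = (K**2 * N * F) ** 4   # Algorithm 1: O((K^2 N F)^4)
alg2 = (K**2 * N * F) ** 3   # two-step algorithm: O((K^2 N F)^3)

print(f"Algorithm 1 ~ {alg1:.2e} operations")   # ~2.68e+12
print(f"Two-step    ~ {alg2:.2e} operations")   # ~2.10e+09
print(f"ratio       ~ {alg1 / alg2:.0f}x")      # K^2*N*F = 1280
```

The ratio equals $K^{2}NF=1280$ here, i.e., the low-complexity algorithm saves roughly three orders of magnitude per iteration in this setting.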
As a result, both of the proposed algorithms have a polynomial order of complexity, whereas the overall complexity of an exhaustive method is exponential in the number of constraints and search variables. ## IV Numerical Evaluation This section presents numerical results to assess the designed SIC ordering and beamforming scheme under various configurations and to compare it with conventional ones. We provide numerical results for different metrics, such as energy efficiency and utilized power, under variation of different parameters. ### IV-A Simulation Setup We consider a C-RAN network in which a high-power AAU is located at the center of a service coverage area with a $500$ m radius, together with $3$ randomly located low-power AAUs, each with a circular coverage area of $20$ m radius [41]. Also, the number of total users is $K=8$ and the number of antennas at each AAU is $M=3$ [27, 38]. Unless otherwise stated, the simulation results are based on the parameter values listed in Table II. Moreover, the small-scale fading of the channels is assumed to be Rayleigh fading, and the large-scale fading effect is modeled by $d_{k,f}^{-\alpha}$ to incorporate path-loss effects, where $d_{k,f}$ is the distance between user $k$ and AAU $f$ measured in meters, and $\alpha=3$ is the path-loss exponent [27]. Table II: Main parameter values for the network setup Parameter(s) | Value(s) ---|--- $K/N/M/F$ | $8/5/3/4$ Coverage radius of macro AAU/small AAU | $500/20$ m Power budget of macro AAU/small AAU/$C_{\text{Circuit}}$ | $40/30/30$ dBm $\lambda/\beta$ | $10^{5}$/$\frac{1}{4}$ $\sigma_{f,k,n}^{2}/\delta_{k,n}^{f}$ | $-174\ \text{dBm/Hz}/0.001$ ### IV-B Results Discussions In this subsection, we discuss the simulation results for the following main scenarios: 1. 1. Proposed near-optimal solution and SIC ordering method (Proposed algorithm) 2. 2. Proposed two-step solution and SIC ordering as a baseline (Baseline 1): In this baseline, the main problem is solved iteratively, with the scheduling variable obtained by the devised matching algorithm. 3. 3. SIC ordering based on channel gains as a baseline (Baseline 2): In this baseline, the decoding order of users is based on the absolute value of the channel gains, as considered in [25, 9, 14, 20, 18, 13, 17, 16]. Note that in this algorithm, the SIC constraints lead to lower achievable rates. Figure 2: EE versus the power budget of each AAU for different numbers of AAUs. All the above scenarios are investigated under different system parameters, as discussed in the following. First, we investigate the effect of the power budget on the EE of the network for different numbers of AAUs in Fig. 2. In this figure, we vary the power budget from $20$ dBm to $47$ dBm and observe that the EE first increases and then saturates once the transmit power exceeds $35$ dBm, i.e., $P_{\max}^{f}=35$ dBm. This is because power control is exploited via the beamforming design in all schemes, and we can deduce that the beamforming works well to improve the EE up to the maximum point. Besides, we observe that our proposed SIC ordering and beamforming algorithm significantly outperforms the other baselines in terms of EE. The main reason behind this achievement is that the proposed SIC ordering is performed by optimizing the SIC ordering variable, which is exploited to handle the intra-cell and inter-cell interference so as to maximize the EE.
We also observe that our proposed algorithm performs better than baseline 1, owing to performing resource allocation and SIC ordering jointly in a one-step optimization problem, whereas in baseline 2, the SIC ordering is based on the absolute value of the channel gains. SIC ordering by channel gains is applicable with an acceptable performance guarantee for single-antenna systems, but it cannot be applied efficiently to MISO NOMA systems. This figure also investigates the effect of the number of AAUs on the EE of the system. It is evident that our proposed algorithm outperforms the other baseline schemes, thanks to joint user association and subcarrier allocation, which significantly improves the performance of the system. Figure 3: EE versus the number of shared subcarriers in each AAU for NOMA and OMA. In Fig. 3, we study the effect of the number of subcarriers, and also the performance of NOMA as compared to conventional OMA, on the baseline schemes. It is observed that the improvement of the proposed algorithm over the baselines is sustained, and this improvement is achieved not only in NOMA-based systems but also in OMA-based systems. Note that in OMA, there is no intra-cell interference, due to the orthogonal use of the subcarriers; meanwhile, our proposed SIC ordering controls the inter-cell interference. Thus, our designed SIC ordering and beamforming are applicable not only to NOMA but also to any communication network suffering from co-channel interference, without any need for the CSI of those channels. Consequently, the EE improvement in NOMA is larger than in OMA. We can also conclude that the performance of NOMA is much better than that of OMA because each subcarrier is exploited more than once in the network. Figure 4: Energy efficiency versus the number of antennas in each AAU. In Fig. 4, we evaluate the EE of the considered schemes while varying the number of antennas in each AAU for different circuit power values. As can be seen from this figure, the EE increases as the number of antennas increases, due to the array gains and spatial diversity, and then drops beyond a certain number of antennas, i.e., $M>7$. This is because employing more antennas enables higher degrees of freedom and spatial diversity gain, which improves the SE, while also increasing power consumption; in particular, the hardware power consumption increases linearly with $M$ because one RF chain is activated per antenna. Meanwhile, the SE changes slowly with $M$, which strikes a balance between SE and power consumption and leads to a trade-off between SE and EE. Further, for higher circuit power values, the maximum EE is obtained at a smaller $M$. This is because the system's aggregate power consumption then has a greater impact on the EE than maximizing the SE, as the SE grows only logarithmically. The interesting conclusions from this evaluation are twofold: first, an appropriate beamforming design is needed for massive-antenna communication systems, and second, an efficient antenna selection algorithm is needed to select appropriate antennas before precoding, especially for massive MIMO mmWave 6G networks. This figure also reveals the performance of our SIC ordering algorithm for massive-antenna networks.
In our future work, we will propose an appropriate antenna selection scheme (finding the optimal $M$) as well as a beamforming design for massive mmWave networks. Figure 5: Impact of the channel estimation error bound on the EE. Fig. 5 illustrates the impact of the channel estimation error on the EE for different values of the power budget. As can be seen, the error bound and the system EE are inversely related. Also, the upper bound is obtained for the perfect CSI setting (note that zero error is equivalent to perfect CSI, in which complete channel information of the users is available at the BBU side). It is seen that, as the error bound increases, our algorithm retains a strong capability of mitigating the impact of imperfect CSI. It can also be observed that the performance gap with respect to the baseline schemes becomes significantly large. This is because of the adverse effect of the error bound on the performance of SIC ordering based on channel gains. It is worthwhile to note that performing SIC based on channel gains needs full CSI, which is not practical in real wireless communication networks. Furthermore, more power is needed for large values of the error bound to reach the same SE, which reduces the EE as the error bound increases. In other words, for a fixed value of consumed power, the SE tends to lower values for higher error bounds, which results in EE reduction. Figure 6: EE versus the number of users for different reuse numbers. Moreover, we study the behavior of the EE achieved by the scenarios with respect to the number of users and the maximum reuse number of each subcarrier in each AAU, plotted in Fig. 6. Note that when the reuse number is $1$, the considered network operates as OMA, while for values of $2$ and $3$, the network operates as NOMA. From this figure, it can be observed that the EE grows with the number of users and with the reuse factor of NOMA because of multi-user diversity. In addition, for higher reuse factors in NOMA, the EE improvement is small. The reason behind this trend is that high reuse factors in NOMA inflate the denominator of the system throughput by incorporating intra-cell interference into the data rate, which results in excess power consumption and consequently degrades the EE of the system. Furthermore, for high reuse factors, it is better to adopt clustering, i.e., user pairing methods, especially for large numbers of users (e.g., massive connections). Finally, we investigate the performance gap between the exactly robust solution, denoted by ExRS (the approach in Remark 1), and the strictly bounded robust solution, denoted by SBRS, as well as the behavior of the introduced penalty function in (76). First, Fig. 7 shows the performance comparison between ExRS and SBRS under variation of the channel estimation error bound for different power budgets of the macro AAU, denoted by $P_{\max}^{1}$. As can be seen from the figure, for a low power budget and small error values, the performance of ExRS and SBRS is approximately the same. Figure 7: EE versus the channel estimation error bound for different robust algorithms. Moreover, the effect of the penalty factors on the penalty function is examined in Fig. 8. In this figure, $P_{\max}^{1}=35\ \text{dBm},\ K=8,\ N=5$, $\lambda=\mu$, and both axes are plotted on a logarithmic scale.
Note that in "Without Round", the penalty function value is the full penalty function in (76) ("Term 1 + Term 2"), while in "With Round", the penalty function is only the rank-one penalty term ("Term 2" in (76)), since $h(\bm{\xi},\bm{\rho},\mathbb{x},\lambda)=0$. As can be seen from Fig. 8, the penalty function value approaches $0$ for sufficiently large penalty factors, e.g., $10^{5}$, which ensures that a rank-one solution is achieved and that the relaxed binary variables converge to binary values. Figure 8: Impact of the penalty factors on the penalty function of (76). ## V Conclusion Remarks In this paper, we proposed a novel SIC ordering scheme and provided a robust and efficient algorithm for resource allocation and beamforming design in C-RAN assisted MC NOMA networks with imperfect CSI. In particular, we formulated the worst-case EE by optimizing the SIC ordering, beamforming, and scheduling variables. Although the underlying optimization problem is a non-convex MINLP, we adopted majorization-minimization and penalty-factor methods to convert it into a convex one. Furthermore, we provided a low-complexity algorithm based on a two-step iterative solution to strike a balance between complexity and performance gain. Extensive simulations were provided to assess the performance of the proposed algorithms. Moreover, the simulation results unveil the superiority of the proposed algorithm over the baseline schemes. The SIC ordering problem in NOMA-based communications has not been well addressed, especially with regard to restraining inter-cell interference, and it has a critical and pivotal impact on the performance of networks with co-channel interference. Looking toward the massive-antenna, high-energy-efficiency requirements of ubiquitous 6G, it is crucial to design efficient antenna selection, clustered beamforming, channel estimation, and spectrum management algorithms for future wireless networks. ## VI Appendix ### VI-A Proof of Proposition 1 The proof includes two parts: 1) minimization and 2) maximization, which are discussed as follows. Proof of minimization: Using an arbitrary positive multiplier $\phi\geq 0$, the Lagrangian function of (19) can be written as $\displaystyle\mathcal{L}(\bm{\Delta}_{k,n,f},\phi)=\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f}+\bm{\Delta}_{k,n,f})\mathbb{W}_{k,n,f}]$ $\displaystyle+\phi(\text{Tr}\ [\bm{\Delta}_{k,n,f}\bm{\Delta}_{k,n,f}^{\dagger}]-e_{k,n,f}).$ (82) Setting the derivative of the Lagrangian with respect to $\bm{\Delta}_{k,n,f}$ to zero, we have: $\nabla_{\bm{\Delta}^{*}_{k,n,f}}\mathcal{L}(\bm{\Delta}_{k,n,f},\phi)=\mathbb{W}^{\dagger}_{k,n,f}+\phi\bm{\Delta}_{k,n,f}=0.$ (83) The optimal value of $\bm{\Delta}_{k,n,f}$ is denoted by $\bm{\Delta}^{*}_{k,n,f}$, which can be obtained as $\bm{\Delta}^{*}_{k,n,f}=-\frac{1}{\phi}\mathbb{W}^{\dagger}_{k,n,f}.$ We also differentiate the Lagrangian function with respect to $\phi$ and equate it to zero as $\nabla_{\phi}\mathcal{L}(\bm{\Delta}_{k,n,f},\phi)=0,$ where the optimal solution for $\phi$ is given by $\phi^{*}=\frac{1}{e_{k,n,f}}\|\mathbb{W}^{\dagger}_{k,n,f}\|.$ By substituting the optimal value of $\phi$, i.e., $\phi^{*}$, we conclude that $\bm{\Delta}^{f,\text{min}}_{k,n}=-e_{k,n,f}\frac{\mathbb{W}^{\dagger}_{k,n,f}}{\|\mathbb{W}^{\dagger}_{k,n,f}\|}.$ (84) The Hessian of the Lagrangian function verifies that the obtained solution is a minimum.
To this end, we need to check that the second derivative at the optimal point is positive semi-definite, i.e., [54] $\displaystyle\nabla^{2}_{\bm{\Delta}_{k,n,f}}\mathcal{L}(\bm{\Delta}_{k,n,f}^{*},\phi^{*})=$ (85) $\displaystyle\frac{\|\mathbb{W}^{\dagger}_{k,n,f}\|}{e_{k,n,f}}\bigg(\text{vec}\{\mathbb{I}_{M}\}\text{vec}\{\mathbb{I}_{M}\}\bigg)^{T}\succeq\mathbb{0}.$ (86) Proof of maximization: The Lagrangian of (20) is given by $\displaystyle\mathcal{L}(\bm{\Delta}_{k,n,f^{\prime}},\phi)=\text{Tr}[(\tilde{\mathbb{H}}_{k,n,f^{\prime}}+\bm{\Delta}_{k,n,f^{\prime}})\mathbb{W}_{i,n,f^{\prime}}]$ $\displaystyle-\phi(\text{Tr}\ [\bm{\Delta}_{k,n,f^{\prime}}\bm{\Delta}_{k,n,f^{\prime}}^{\dagger}]-e^{2}_{k,n,f^{\prime}}).$ (87) By differentiating the above function with respect to $\bm{\Delta}_{k,n,f^{\prime}}$ and setting the derivative to zero, we have: $\nabla_{\bm{\Delta}^{*}_{k,n,f^{\prime}}}\mathcal{L}(\bm{\Delta}_{k,n,f^{\prime}},\phi)=\mathbb{W}^{\dagger}_{i,n,f^{\prime}}-\phi\bm{\Delta}_{k,n,f^{\prime}}=0.$ (88) We find that $\bm{\Delta}^{*}_{k,n,f^{\prime}}=\frac{1}{\phi}\mathbb{W}^{\dagger}_{i,n,f^{\prime}}.$ Following the same steps to eliminate the role of $\phi$, we obtain $\bm{\Delta}^{f^{\prime},\text{max}}_{k,n}=e_{k,n,f^{\prime}}\frac{\mathbb{W}^{\dagger}_{i,n,f^{\prime}}}{\|\mathbb{W}^{\dagger}_{i,n,f^{\prime}}\|}.$ (89) The Hessian of the Lagrangian function verifies that the obtained solution is a maximum. Hence, we check that the second derivative at the optimal point is negative semi-definite, i.e., [54] $\displaystyle\nabla^{2}_{\bm{\Delta}_{k,n,f}}\mathcal{L}(\bm{\Delta}_{k,n,f}^{*},\phi^{*})=$ $\displaystyle-\frac{\|\mathbb{W}^{\dagger}_{k,n,f}\|}{e_{k,n,f}}\bigg(\text{vec}\{\mathbb{I}_{M}\}\text{vec}\{\mathbb{I}_{M}\}\bigg)^{T}\preceq\mathbb{0}.$ (90) ### VI-B Proof of Proposition 2 We aim to prove this proposition by using the abstract Lagrangian duality. The primal problem of (35) can be written as $p^{*}=\max_{\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}}\min_{\lambda}\mathcal{L}(\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}),$ where the dual problem of (49a) is given by: $d^{*}=\min_{\lambda}\max_{\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}}\mathcal{L}(\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}).$ (91) For simplicity, we also define: $\mu(\lambda)\triangleq\max_{\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}}\mathcal{L}(\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}).$ (92) Based on the weak duality theorem, we have: $\displaystyle p^{*}\leq d^{*}=\min_{\lambda\geq 0}\mu(\lambda).$ (93) It should be noted that for the feasible set, we have two cases, as follows: _Case 1_ : One can easily verify that at the optimal point, we have $\displaystyle\mathcal{R}_{1}:\sum_{i,k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\bigg(\xi_{i,n}^{k}-(\xi_{i,n}^{k})^{2}\bigg)=0,$ (94) $\displaystyle\mathcal{R}_{2}:\sum_{f\in\mathcal{F}}\sum_{k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\bigg(\rho_{k,n}^{f}-(\rho_{k,n}^{f})^{2}\bigg)=0,$ (95) $\displaystyle\mathcal{R}_{3}:\ \sum_{f\in\mathcal{F}}\sum_{i,k\in\mathcal{K}}\sum_{n\in\mathcal{N}}\bigg(x_{i,n}^{k,f}-(x_{i,n}^{k,f})^{2}\bigg)=0.$ (96) As a result, $d^{*}$ is also a feasible solution of (35).
Subsequently, substituting the optimal value of $\lambda$, i.e., $\lambda^{*}$, into the optimization problem (35) yields $d^{*}=\mu(\lambda^{*})=\max_{\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x}}\mathcal{L}(\mathbb{W},\bm{\xi},\bm{\rho},\mathbb{x})=p^{*}.$ (97) Moreover, referring to the Lagrangian function, in the region defined by $\bm{\xi},\bm{\rho},\mathbb{x},\mathcal{R}_{1},\mathcal{R}_{2},\mathcal{R}_{3}$, the function $\mu(\lambda)$ is monotonically decreasing in $\lambda$. On the other hand, since $d^{*}=\min_{\lambda\geq 0}\mu(\lambda)$, we have $\mu(\lambda)=d^{*},\forall\ \lambda\geq\lambda^{*}.$ (98) This means that for any value of $\lambda\geq\lambda^{*}$, the solution of the Lagrangian function yields the optimal solution of (35). _Case 2_ : The second case occurs when some of the integer variables take values strictly between 0 and 1, causing $\mathcal{R}_{1}>0,\ \mathcal{R}_{2}>0,\ \mathcal{R}_{3}>0.$ (99) Referring to the Lagrangian function and (92), at the optimal point, $\mu(\lambda^{*})$ then tends to $-\infty$. However, this cannot happen, as it contradicts the primal problem, whose solution bounds $\mu(\lambda^{*})$ from below and is always greater than zero. Thus, at the optimal point, we have $\mathcal{R}_{1}=0,\ \mathcal{R}_{2}=0$, and $\mathcal{R}_{3}=0$. ## References * [1] W. Saad, M. Bennis, and M. Chen, "A vision of 6G wireless systems: Applications, trends, technologies, and open research problems," IEEE Network, vol. 34, no. 3, pp. 134-142, May. 2020. * [2] A. Zakeri et al., "Digital transformation via 5G: Deployment plans," in Proc. ITU Kaleidoscope: Industry-Driven Digital Transformation (ITU K), Ha Noi, Vietnam, 2020, pp. 1-8. * [3] C. Pan, M. Elkashlan, J. Wang, J. Yuan and L. Hanzo, "User-centric C-RAN architecture for ultra-dense 5G networks: Challenges and methodologies," IEEE Commun. Mag., vol. 56, no. 6, pp. 14-20, Jun. 2018. * [4] Z. Ding, L. Dai, R. Schober, and H. Vincent Poor, "NOMA meets finite resolution analog beamforming in massive MIMO and millimeter-wave networks," IEEE Commun. Lett., vol. 21, no. 8, pp. 1879-1882, Aug. 2017. * [5] Z. Ding, X. Lei, G. K. Karagiannidis, R. Schober, J. Yuan, and V. K. Bhargava, "A survey on non-orthogonal multiple access for 5G networks: research challenges and future trends," IEEE J. Select. Areas Commun., vol. 35, no. 10, pp. 2181-2195, Oct. 2017. * [6] Z. Ding, P. Fan, and H. V. Poor, "Impact of user pairing on 5G non-orthogonal multiple-access downlink transmissions," IEEE Trans. Veh. Technol., vol. 65, no. 8, pp. 6010-6023, Aug. 2016. * [7] Z. Ding, Z. Yang, P. Fan, and H. V. Poor, "On the performance of non-orthogonal multiple access in 5G systems with randomly deployed users," IEEE Signal Process. Lett., vol. 21, no. 12, pp. 1501-1505, Dec. 2014. * [8] K. Yang, N. Yang, N. Ye, M. Jia, Z. Gao, and R. Fan, "Non-orthogonal multiple access: Achieving sustainable future radio access," IEEE Commun. Mag., vol. 57, no. 2, pp. 116–121, Nov. 2019. * [9] L. Dai, B. Wang, Z. Ding, Z. Wang, S. Chen, and L. Hanzo, "A survey of non-orthogonal multiple access for 5G," IEEE Commun. Surveys Tuts., vol. 20, no. 3, pp. 2294-2323, May. 2018. * [10] S. M. R. Islam, M. Zeng, O. A. Dobre, and K. Kwak, "Resource allocation for downlink NOMA systems: Key techniques and open issues," IEEE Wireless Commun., vol. 25, no. 2, pp. 40-47, Apr.
2018. * [11] Z. Ding, R. Schober, and H. V. Poor, “Unveiling the importance of SIC in NOMA systems—Part 1: State of the art and recent findings,” IEEE Commun. Lett., vol. 24, no. 11, pp. 2373-2377, Nov. 2020. * [12] Z. Ding, R. Schober, and H.V. Poor, “Unveiling the importance of SIC in NOMA systems: Part II: New results and future directions,” IEEE Commun. Lett., vol. 24, no. 11, pp. 2378-2382, Nov. 2020. * [13] M. F. Hanif, Z. Ding, T. Ratnarajah, and G. K. Karagiannidis, “A minorization-maximization method for optimizing sum rate in the downlink of non-orthogonal multiple access systems,” IEEE Trans. Signal Process., vol. 64, no. 1, pp. 76–88, Jan. 2016. * [14] Y. Sun, D. W. K. Ng, Z. Ding, and R. Schober, “Optimal joint power and subcarrier allocation for full-duplex multicarrier non-orthogonal multiple access systems,” IEEE Trans. Commun., vol. 65, no. 3, pp. 1077-1091, Mar. 2017. * [15] Z. Wei, D. W. K. Ng, J. Yuan, and H. M. Wang, “Optimal resource allocation for power-efficient MC-NOMA with imperfect channel state information,” IEEE Trans. Commun., vol. 65, no. 9, pp. 3944-3961, Sep. 2017. * [16] S. Sharma, K. Deka, V. Bhatia, and A. Gupta, “Joint power-domain and SCMA-based NOMA system for downlink in 5G and beyond,” IEEE Commun. Lett., pp. 1–1, Apr. 2019. * [17] S. Li, M. Derakhshani and S. Lambotharan, “Outage-constrained robust power allocation for downlink MC-NOMA with imperfect SIC,” Proc. IEEE ICC, Kansas City, MO, USA, May. 2018, pp. 1-7. * [18] M. Moltafet, P. Azmi, N. Mokari, M. R. Javan, and A. Mokdad, “Optimal and fair energy efficient resource allocation for energy harvesting-enabled-PD-NOMA-based HetNets,” IEEE Trans. Wireless Commun., vol. 17, no. 3, pp. 2054-2067, Mar. 2018. * [19] Q. Zhang, Q. Li, and J. Qin, “Robust beamforming for non-orthogonal multiple-access systems in MISO channels,” IEEE Trans. Veh. Technol, vol. 65, no. 12, pp. 10231-10236, Dec. 2016. * [20] Y. Sun, D. W. K. Ng, and R. Schober, “Optimal resource allocation for multicarrier MISO-NOMA systems,” Proc. IEEE ICC,, Paris, France, May. 2017, pp. 1-7. * [21] A. Zakeri, M. Moltafet, and N. Mokari, “Joint radio resource allocation and SIC ordering in NOMA-based networks using submodularity and matching theory,” IEEE Trans. Veh. Technol., vol. 68, no. 10, pp. 9761-9773, Oct. 2019. * [22] H. Al-Obiedollah, K. Cumanan, J. Thiyagalingam, A. G. Burr, Z. Ding and O. A. Dobre, “Energy efficiency fairness beamforming designs for MISO NOMA systems,” Proc. IEEE WCNC,, Marrakesh, Morocco, Morocco, Apr. 2019, pp. 1-6. * [23] Y. Zhang, H. M. Wang, T. X. Zheng, and Q. Yang, “Energy-efficient transmission design in non-orthogonal multiple access,” IEEE Trans. Veh. Technol, vol. 66, no. 3, pp. 2852–2857, Mar. 2017. * [24] F. Fang, H. Zhang, J. Cheng, and V. C. M. Leung, “Energy-efficient resource allocation for downlink non-orthogonal multiple access network,” IEEE Trans. Commun., vol. 64, no. 9, pp. 3722–3732, Sep. 2016. * [25] F. Alavi, K. Cumanan, Z. Ding, and A. G. Burr, “Robust beamforming techniques for non-orthogonal multiple access systems with bounded channel uncertainties,” IEEE Commun. Lett., vol. 21, no. 9, pp. 2033–2036, Sep. 2017. * [26] F. Fang, H. Zhang, J. Cheng, S. Roy, and V. C. M. Leung, “Joint user scheduling and power allocation optimization for energy-efficient NOMA systems with imperfect CSI,” IEEE J. Select. Areas Commun., vol. 35, no. 12, pp. 2874–2885, Dec. 2017. * [27] F. Alavi, K. Cumanan, M. Fozooni, Z. Ding, S. Lambotharan, and O. A. 
Dobre, “Robust energy-efficient design for MISO non-orthogonal multiple access systems,” IEEE Trans. Commun., vol. 67, no. 11, pp. 7937-7949, Nov. 2019. * [28] M. S. Ali, E. Hossain, A. Al-Dweik, and D. I. Kim, “Downlink power allocation for CoMP-NOMA in multi-cell networks,” IEEE Trans. Commun., vol. 66, no. 9, pp. 3982–3998, Sep. 2018. * [29] K. Wang, Y. Liu, Z. Ding, A. Nallanathan, and M. Peng, “User association and power allocation for multi-cell non-orthogonal multiple access networks,” IEEE Trans. Wireless Commun., vol. 18, no. 11, pp. 5284-5298, Nov. 2019. * [30] A. Wolf, P. Schulz, M. Dörpinghaus, J. C. S. Santos Filho, and G. Fettweis, “How reliable and capable is multi-connectivity?,” IEEE Trans. Commun., vol. 67, no. 2, pp. 1506-1520, Feb. 2019. * [31] D. W. K. Ng, E. S. Lo, and R. Schober, “Energy-efficient resource allocation in multi-cell OFDMA systems with limited backhaul capacity,” IEEE Trans. Wireless Commun., vol. 11, no. 10, pp. 3618-3631, Oct. 2012. * [32] A. Zakeri, N. Mokari, and H. Yanikomeroglu, “Joint radio resource allocation and 3D beam-forming in MISO-NOMA-based network with profit maximization for mobile virtual network operators.” arXiv preprint arXiv:1907.05161 (2019). * [33] A. Khalili, M. R. Mili, M. Rasti, S. Parsaeefard, and D. W. K. Ng, “Antenna selection strategy for EE maximization in uplink OFDMA networks: A multi-objective approach,” IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 595-609, Jan. 2020. * [34] A. Khalili, S. Zarandi, and M. Rasti, “Joint resource allocation and offloading decision in mobile edge computing,” IEEE Commun. Lett., vol. 23, no. 4, pp. 684-687, Apr. 2019. * [35] A. Zappone, E. Björnson, L. Sanguinetti, and E. Jorswieck, “Globally optimal energy-efficient power control and receiver design in wireless networks,” IEEE Trans. Signal Process., vol. 65, no. 11, pp. 2844-2859, 1 Jun. 2017. * [36] V. W. S. Wong, R. Schober, D. W. K. Ng, and L. C. Wang, Key technologies for 5G wireless systems, 1st ed. Cambridge University Pres, 2017. * [37] A. Khalili, S. Akhlaghi, H. Tabassum, and D. W. K. Ng, “Joint user association and resource allocation in the uplink of heterogeneous networks,” IEEE Wireless Commun. Lett., vol. 9, no. 6, pp. 804-808, Jun. 2020. * [38] Y. Wang, L. Ma, and Y. Xu, “Joint network optimization in cooperative transmission networks with imperfect CSI,” Proc. IEEE ICC, Kuala Lumpur, Malaysia, May. 2016, pp. 1-6. * [39] J. Lee and S. Leyffer, Mixed integer nonlinear programming. Springer Science Business Media, 2011. * [40] L. Sboui, Z. Rezki, A. Sultan, and M. Alouini, “A new relation between energy efficiency and spectral efficiency in wireless communications systems,” IEEE Trans. Wireless Commun., vol. 26, no. 3, pp. 168-174, Jun. 2019. * [41] M. Moltafet, S. Parsaeefard, M. R. Javan, and N. Mokari, “Robust radio resource allocation in MISO-SCMA assisted C-RAN in 5G networks,” IEEE Trans. Veh. Technol., vol. 68, no. 6, pp. 5758-5768, Jun. 2019. * [42] M. Youssef, J. Farah, C. A. Nour, and C. Douillard, “Resource allocation in NOMA systems for centralized and distributed antennas with mixed traffic using matching theory,” IEEE Trans. Commun., vol. 68, no. 1, pp. 414-428, Jan. 2020. * [43] T. Hoessler, P. Schulz, E. A. Jorswieck, M. Simsek, and G. P. Fettweis, “Stable matching for wireless URLLC in multi-cellular, multi-user systems,” IEEE Trans. Commun., vol. 68, no. 8, pp. 5228-5241, Aug. 2020. * [44] Y. Saito, Y. Kishiyama, A. Benjebbour, T. Nakamura, A. Li, and K. 
Higuchi, "Non-orthogonal multiple access (NOMA) for cellular future radio access," Proc. IEEE VTC Spring, Dresden, Germany, Jun. 2013, pp. 1-5. * [45] L. Salaün, M. Coupechoux, and C. S. Chen, "Joint subcarrier and power allocation in NOMA: Optimal and approximate algorithms," IEEE Trans. on Signal Process., vol. 68, pp. 2215-2230, 2020. * [46] Q. Zhang, Q. Li, and J. Qin, "Robust beamforming for non-orthogonal multiple-access systems in MISO channels," IEEE Trans. Veh. Technol., vol. 65, no. 12, pp. 10231-10236, Dec. 2016. * [47] C. Chao, C. Wang, C. Lee, H. Wei, and W. Chen, "Pair auction and matching for resource allocation in full-duplex cellular systems," IEEE Trans. Veh. Technol., vol. 69, no. 4, pp. 4325-4339, Apr. 2020. * [48] Q. Qi, X. Chen, and D. W. K. Ng, "Robust beamforming for NOMA-based cellular massive IoT with SWIPT," IEEE Trans. Signal Process., vol. 68, pp. 211-224, 2020. * [49] E. A. Gharavol, Y. Liang, and K. Mouthaan, "Robust downlink beamforming in multiuser MISO cognitive radio networks with imperfect channel-state information," IEEE Trans. Veh. Technol., vol. 59, no. 6, pp. 2852-2860, Jul., 2010. * [50] R. A. Addad, T. Taleb, M. Bagaa, D. L. C. Dutra, and H. Flinck, "Towards modeling cross-domain network slices for 5G," in Proc. IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, Dec. 2018, pp. 1-7. * [51] F. Glover, "Improved linear integer programming formulations of nonlinear integer problems," Management Science, vol. 22, no. 4, pp. 455-460, 1975. * [52] A. Ben-Tal and A. Nemirovski, "On polyhedral approximations of the second-order cone," Math. Operations Res., vol. 26, no. 2, pp. 193–205, May 2001. * [53] Z. Lin, M. Lin, J. Wang, T. de Cola, and J. Wang, "Joint beamforming and power allocation for satellite-terrestrial integrated networks with non-orthogonal multiple access," IEEE J. Sel. Topics Signal Process., vol. 13, no. 3, pp. 657-670, Jun. 2019. * [54] A. Hjorungnes and D. Gesbert, "Complex-valued matrix differentiation: Techniques and key results," IEEE Trans. Signal Process., vol. 55, no. 6, pp. 2740-2746, Jun. 2007. * [55] J. W. Chinneck, Feasibility and Infeasibility in Optimization: Algorithms and Computational Methods. Springer Science & Business Media, vol. 118, 2007.
# Heterogeneous Similarity Graph Neural Network on Electronic Health Records Zheng Liu1, Xiaohan Li1, Hao Peng2, Lifang He3, Philip S. Yu1 1University of Illinois at Chicago, Chicago, IL, USA {zliu212, xli241<EMAIL_ADDRESS>2Beihang University, Beijing, China <EMAIL_ADDRESS>3Lehigh University, Bethlehem, PA, USA <EMAIL_ADDRESS> ###### Abstract Mining Electronic Health Records (EHRs) has become a promising topic because of the rich information they contain. By learning from EHRs, machine learning models can be built to help human experts make medical decisions and thus improve healthcare quality. Recently, many models based on sequential or graph structures have been proposed to achieve this goal. EHRs contain multiple entities and relations and can be viewed as a heterogeneous graph. However, previous studies ignore the heterogeneity in EHRs. On the other hand, current heterogeneous graph neural networks cannot simply be used on the EHR graph because of the existence of hub nodes in it. To address this issue, we propose Heterogeneous Similarity Graph Neural Network (HSGNN) to analyze EHRs with a novel heterogeneous GNN. Our framework consists of two parts: one is a preprocessing method and the other is an end-to-end GNN. The preprocessing method normalizes edges and splits the EHR graph into multiple homogeneous graphs, such that each homogeneous graph contains partial information of the original EHR graph. The GNN takes all homogeneous graphs as input and fuses them into one graph to make predictions. Experimental results show that HSGNN outperforms the other baselines on the diagnosis prediction task. ## I Introduction The accumulation of large-scale Electronic Health Records (EHRs) provides us with a great opportunity for deep learning applications in healthcare. Recently, many deep learning models have been applied to medical tasks such as phenotyping [1, 2], medical predictive modeling [3, 4] and medication recommendation [5]. Generally, raw EHRs consist of multiple kinds of features of patients, including demographics, observations, diagnoses, medications, and procedures ordered by time. For example, Fig. 1 shows an example of an EHR graph with two patients and three visit records. In Fig. 1, there are two patients $p_{1}$ and $p_{2}$, where $p_{1}$ has visited the medical provider twice and $p_{2}$ has visited once (with timestamps recorded). During a visit, diagnoses or medications may be recorded for the patient. All medical concepts such as diagnoses, medications and procedures are represented as medical codes, and scientists can easily track them through medical ontologies. Because one patient can have multiple recorded visits, an EHR can be viewed as a sequence of historical records for each patient. Moreover, because of the variety of medical codes and their relations, an EHR can be viewed as a heterogeneous graph with multiple types of nodes and edges. EHR analysis plays an important role in medical research and can improve the quality of healthcare. By learning from EHRs, scientists can either discover useful facts or build intelligent applications. For example, the prescriptions in EHRs can help make medication recommendations [5], and the phenotypes of patients indicate the distribution of cohorts [6]. With Artificial Intelligence (AI) technologies, scientists can build applications that provide useful suggestions to doctors or help patients understand their physical conditions better.
To build such a medical AI application, a key issue is to learn effective representations for each medical concept and patient [7, 8]. However, there are two challenges in learning such representations. One is data insufficiency. Due to privacy policies and the expense of collecting data, the volume of an EHR dataset is generally smaller than that of image or language datasets. Therefore, it is difficult for deep learning models designed for image or language tasks to process EHR data. The other is the heterogeneity of EHRs. An EHR has a complex structure and contains multiple relationships. Only when all relations are properly used can a model achieve satisfactory performance. Figure 1: An example of a heterogeneous EHR graph. Previously, many models treated EHRs as sequences and used sequential models such as RNNs to analyze them [9, 10, 11, 12, 13]. These methods use historical information to predict the next-period situation of a patient. However, sequential models cannot capture structural information well and need a large amount of data to train. To address these issues, some other approaches model the EHR as a graph, as shown in Fig. 1, and then use graph neural networks (GNNs) to learn embedding vectors for each node [14, 15, 16, 17]. Among them, GRAM [16] proposes the first graph-based model that can integrate external hierarchical ontologies when generating results. MiME [17] learns multi-level representations of medical codes based on EHR data in a hierarchical order. Graph convolutional transformer (GCT) [15] learns medical representations together with the hidden causal structure of EHRs using the "pre-training & fine-tuning" procedure. Compared with sequential models, these graph-based models are more robust to insufficient data because of their use of structural information: the model can use neighbor information to complete missing entries in the dataset. As the above GNN models are designed only for homogeneous graphs, they fail to take all kinds of medical codes into account. EHR data contains multiple kinds of medical codes and relations, so it is naturally heterogeneous. To capture multiple relations in the graph, heterogeneous graph neural networks [18, 19, 20] are necessary in EHR analysis. Basically, these models take a heterogeneous graph as input and process different kinds of nodes or meta-paths [21] respectively. However, applying these models directly to EHR graphs can cause very low performance because of hub nodes with high visibility [22]. For example, if an EHR graph contains "gender" information, then all patient nodes link to either a "male" or a "female" node. If we do not normalize these links, the two gender nodes will strongly influence all other nodes. After applying heterogeneous GNNs, all other nodes in the graph will eventually learn the same representations as either the "male" or the "female" node. This phenomenon is similar to the over-smoothing [23] problem. Over-smoothing means that after applying GNNs with multiple layers, all node embeddings become close to each other and finally indistinguishable. Some studies [24] indicate that the reason for over-smoothing is the existence of noise in the graph, which is supported in our case: since gender is not the most informative attribute of a patient (it contains too much noise), introducing it into the graph is not always helpful to the prediction task. To address this issue, we propose Heterogeneous Similarity Graph Neural Network (HSGNN), a framework using GNNs to analyze EHR graphs.
It consists of two parts: the preprocessing step and the end-to-end model. In the preprocessing step, we first construct the heterogeneous EHR graph, and then split it into multiple homogeneous subgraphs according to the weight assigned to each edge. By doing so, we eliminate the noise in the original heterogeneous graph while preserving its structural information. After the preprocessing step, each subgraph contains partial information of the original graph. Then, in the end-to-end model, we combine all subgraphs into one integrated homogeneous graph $A_{meta}$ so that it can be input into any general GNN layers to make downstream predictions. Inspired by [15], we set all weights in $A_{meta}$ as trainable variables rather than fixed values. That is, all weights in $A_{meta}$ are randomly initialized before training and are optimized during the model training process. Compared with previous models, HSGNN has the following innovations. First, to the best of our knowledge, this is the first study that uses a heterogeneous graph structure to represent EHR data, which preserves the most information. Second, in the preprocessing step, HSGNN uses similarity values to represent the weights in the graph. This method proves effective at reducing over-smoothing in the experiments. Third, we use trainable weights and construct a new graph in HSGNN, which can reveal the true relationships between nodes. To demonstrate the advantages of HSGNN, we evaluate its performance on the MIMIC-III dataset. On the diagnosis prediction task, HSGNN outperforms all other baseline approaches and achieves state-of-the-art performance. We also show the effectiveness of using similarity values by comparing HSGNN with a variant that uses $PathCount$s as graph weights. Finally, we visualize the structure of the learned graph to show that HSGNN can learn a new graph of higher quality. In summary, we make the following contributions in this paper:

* • We propose a novel framework, HSGNN, which can learn informative representations for medical codes and make predictions for patients in EHRs.
* • We use the similarity subgraphs generated from the original heterogeneous graph as input, which is shown to be effective in improving prediction performance.
* • We propose an end-to-end model that can jointly learn high-quality graph embeddings based on similarity subgraphs and make accurate predictions.
* • Experimental results show the superiority of our proposed model on the diagnosis prediction task. Experiments also demonstrate the effectiveness of using similarity subgraphs and the quality of the learned graph embeddings.

The code of our proposed HSGNN is available at https://github.com/ErikaLiu/HSGNN.

## II Related Work

Since EHR analysis is an interdisciplinary topic, many studies are related to our work. In this section, we only choose the most representative and inspiring studies. These studies mainly focus on four aspects: 1. graph neural networks, 2. GNN-based EHR analysis, 3. heterogeneous graph neural networks, and 4. studies of the nature of graphs.

### II-A Graph Neural Networks

Currently, Graph Neural Networks (GNNs) have been widely explored to process graph-structured data. Motivated by convolutional neural networks, Bruna et al. [25] propose graph convolutions in the spectral domain. Then, Kipf and Welling [26] simplify the previous graph convolution operation and design the Graph Convolutional Network (GCN) model. Besides, to inductively generate node embeddings, Hamilton et al.
propose the GraphSAGE [27] model to learn node embeddings with sampling and aggregation functions. All these models have demonstrated strong performance on many tasks [28, 29, 30, 31, 32].

### II-B GNN-based EHR analysis

Previously, many studies used RNNs to analyze EHRs [9, 10, 11]. However, with the development of graph neural networks [26, 33, 27], many approaches employ GNNs to analyze EHRs [14, 15, 16, 34]. These models can capture structural information from raw EHRs and thus outperform previous approaches. Among these models, GRAM [16] and KAME [14] use GNNs to process external hierarchical ontologies. They learn embeddings for medical codes in the ontologies, and these embeddings can then be used for downstream tasks. MiME [17] and GCT [15] assume that there are latent causal relations between different kinds of medical codes in EHRs. Based on this assumption, MiME learns multilevel representations in a hierarchical order, and GCT jointly learns the hidden causal structure of EHRs while performing predictions. The above studies only focus on homogeneous graphs, while raw EHRs contain multiple kinds of medical codes and thus are naturally heterogeneous. This fact provides us with opportunities to model EHRs with heterogeneous graphs.

### II-C Heterogeneous Graph Neural Networks

According to [35], a heterogeneous information network (HIN) is an information network with multiple kinds of nodes and edges. To process an HIN, a key issue is to deal with the heterogeneity of the network. Here we introduce some methods from previous studies that eliminate the heterogeneity of the network. HAN [18] is the first study using a graph attention network to process heterogeneous graphs. MAGNN [20] is another recent study, proposing aggregators for inductive learning on heterogeneous graphs. Both models use meta-paths when processing heterogeneous graphs, since meta-paths can capture meaningful patterns in the graph. Also, both models consist of two modules, a meta-path-level GNN and a node-level GNN, which aggregate node features hierarchically. HetGNN [19] proposes another method to eliminate the heterogeneity, which uses random walks and type-based aggregators. However, in our experiments we find that these methods do not perform ideally because they do not deal properly with nodes of different visibility.

### II-D Over-smoothing and Node Visibility

According to [23], after applying a GNN with multiple layers, the derived node embeddings become closer to each other and finally indistinguishable. This is called over-smoothing, and [23] is the first work to discover this phenomenon. The causes of this phenomenon are still being investigated, and some studies are trying to resolve it. For example, [36] discovers row-level and column-level over-smoothing caused by information wrongly spreading through nodes and features. Another study [24] attributes over-smoothing to the noise in the network. Nevertheless, these different explanations may point to the same reason: the structural information of the graph may not be accurate enough, so information spreads to the wrong nodes or the wrong features through the GNN. Therefore, correcting the “wrong edges” in the graph is a possible way to overcome over-smoothing. On the other hand, many traditional studies focus more on the nature of graphs [21, 22]. These studies propose the concept of “node visibility” to measure the influence of one node on the whole graph. Generally, the degree of a node can be used to measure its visibility.
If a node has many neighbors, it can influence more nodes, making itself “visible” in the whole graph. In GNNs, the existence of these highly visible nodes is one cause of over-smoothing, because they can result in multiple nodes having similar embeddings.

Figure 2: The proposed HSGNN framework. The heterogeneous EHR graph is preprocessed by calculating SPS along each meta-path (the dashed box) and then input into the end-to-end model (the solid box). Here we take meta-path V-D-V as an example to explain SPS. The 1st and 2nd visits of patient 1 have one common diagnosis in total, and therefore the numerator of the similarity between them is 1*2=2. Besides, they have 4 diagnosis neighbors in total, and thus the denominator is 4. The similarity of these two nodes along meta-path V-D-V is 1/2.

## III Methods

HSGNN consists of two parts: one is a preprocessing step that splits the heterogeneous graph into multiple subgraphs; the other is an end-to-end graph neural network that takes multiple graphs as input. In the first part, we introduce the definitions of the heterogeneous EHR graph, meta-paths, and symmetric PathSim (SPS). In the second part, we provide the forward propagation rules of our model. In this section we use an EHR with the same structure as Fig. 1 to introduce our model.

Table I: Notations

| Notation | Explanation |
|---|---|
| $n_{i}$ | Node $i$ in the heterogeneous EHR graph. |
| $\phi(\cdot)$ | Mapping function to retrieve the type of a node. |
| $PathCount_{p}(\cdot,\cdot)$ or $PC_{p}(\cdot,\cdot)$ | PathCount w.r.t. meta-path $p$. |
| $SPS_{p}(\cdot,\cdot)$ | Symmetric PathSim w.r.t. meta-path $p$. |
| $\bm{A}_{k}$ | The $k$-th input adjacency matrix. |
| $\bm{F}$ | The input node features. |
| $K$ | Number of meta-paths. |
| $N$ | Number of nodes in the graph. |
| $\bm{A}_{meta}$ | The fused adjacency matrix. |
| $\bm{F}_{meta}$ | The aggregated node features. |
| $\bm{w}$ or $\bm{W}$ | Parameters used to derive $\bm{A}_{meta}$. |
| $\bm{\Omega}$ | Parameters used to derive $\bm{W}$. |
| $meta\_GNN_{k}(\cdot,\cdot)$ | Meta-GNN module for the $k$-th meta-path. |
| $AGGREGATOR_{F}(\cdot)$ | Aggregation function to derive $\bm{F}_{meta}$. |

### III-A Similarity Subgraph Construction via Meta-path

Since the heterogeneous EHR graph consists of multiple types of nodes and edges, a traditional GNN cannot process it directly. One approach is to process each node in the graph according to its node type [19]. However, the links between different types of nodes can form unique patterns and may possess specific meanings. Therefore, we introduce meta-paths to process the heterogeneous graph and then calculate similarities between nodes along each meta-path.

##### Heterogeneous EHR Graph

As shown in the left part of Fig. 1, a heterogeneous EHR graph consists of medical information from all patients. There are four kinds of nodes in the graph: patients $c$, visits $v$, diagnoses $d$ and medications $m$. Formally, we use $S=C\cup V\cup D\cup M$ to represent the set of all nodes in the graph, where $C$, $V$, $D$ and $M$ correspond to the sets of patients, visits, diagnoses and medications. For each node $n\in S$, we also define a mapping $\phi(n)\in\{``C",``V",``D",``M"\}$ to retrieve its type.

##### Meta-path

A meta-path $p=t_{1}t_{2}\cdots t_{n}$ is a sequence where each $t_{i}\in\{``C",``V",``D",``M"\}$. It represents a pattern of node types in a given path.
For example, the meta-path $``VDV"$ denotes the pattern of “visit node - diagnosis node - visit node” in the heterogeneous graph, and the path “patient 1’s 1st visit - headache - patient 2’s 1st visit” is an instance of this meta-path.

##### PathCount

Suppose we have two nodes $n_{i},n_{j}\in S$ and a meta-path $p=t_{1}t_{2}\cdots t_{n}$ where $\phi(n_{i})=t_{1}$ and $\phi(n_{j})=t_{n}$. The $PathCount$ (shortened as $PC$) of $n_{i},n_{j}$ w.r.t. $p$ is the number of instances of the meta-path between the node pair. For example, in Fig. 2 the $PC$ under meta-path “DVM” between the node pair (“headache”, “benzodiazepines”) is 2, since they have 2 common visit neighbors.

##### Symmetric PathSim (SPS)

Inspired by [21], we propose symmetric PathSim (SPS) to measure the similarity of a node pair $(n_{i},n_{j})$ under a specific meta-path $p$ in the heterogeneous graph:

$SPS_{p}(n_{i},n_{j})=\frac{PC_{p}(n_{i},n_{j})+PC_{p}(n_{j},n_{i})}{PC_{p}(n_{i},n_{i})+PC_{p}(n_{j},n_{j})}.$ (1)

Basically, when the $PC$ between two nodes is higher, these two nodes tend to have a stronger relation. However, some nodes may have a higher degree but be less important. For example, a node denoting gender “female” may link to half of the patient nodes in the graph, but the effect of gender on medication is much smaller than the effect of diagnosis. To eliminate the influence of nodes with high visibility (degree) and low importance, SPS normalizes the $PC$ by the sum of $n_{i}$’s and $n_{j}$’s self-loop counts. SPS is symmetric, which means $SPS_{p}(n_{i},n_{j})=SPS_{p}(n_{j},n_{i})$. In the preprocessing step, we construct the heterogeneous EHR graph and calculate the similarities of all node pairs under a group of meta-paths $P=\{p_{1},p_{2},\cdots,p_{K}\}$ (the similarity of two nodes is set to 0 if their node types are not applicable to the meta-path). After this step, we obtain a series of symmetric similarity matrices $\mathcal{A}=\{\bm{A}_{1},\bm{A}_{2},\cdots,\bm{A}_{K}\}$ where $K$ is both the number of meta-paths and the number of similarity matrices. The size of each matrix $\bm{A}_{i}$ in $\mathcal{A}$ is ${N\times N}$, where $N=|S|$ is the number of nodes. In this way, the heterogeneous graph is split into multiple homogeneous graphs, and each homogeneous graph contains partial information of the original graph.

Figure 3: A dissection of $A_{meta}$.

### III-B Heterogeneous Similarity Graph Neural Network

The solid box in Fig. 2 shows the architecture of our proposed HSGNN. The preprocessing step derives multiple homogeneous graphs with meta-paths, and we take them as inputs to HSGNN. The primary goal of HSGNN is to fuse the homogeneous graphs into one graph $A_{meta}$ containing the true relations between each node pair. Then, $A_{meta}$ can be used by later general GNN layers such as the Graph Convolutional Network (GCN) [26] or for other downstream tasks. To achieve this goal, suppose the initial node feature matrix is $\bm{F}$ and the $K$ input graphs are $\mathcal{A}=\{\bm{A}_{1},\bm{A}_{2},\cdots,\bm{A}_{K}\}$; we propose several variants of HSGNN.

#### III-B1 Simple Weighted Sum

A straightforward approach is to use a weighted sum:

$\bm{A}_{meta}=\sum_{k=1}^{K}w_{k}\bm{A}_{k}$ (2)

where $w_{k}$ is a trainable scalar weight of matrix $\bm{A}_{k}$ and $\sum_{k=1}^{K}w_{k}=1$. An advantage of this approach is its simplicity. However, this method assumes that the effect of one meta-path is constant over all nodes in the graph, regardless of the uniqueness of each node.
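To make the preprocessing step concrete, the following minimal NumPy sketch computes $PathCount$ by chaining (bi)adjacency matrices, applies the SPS of Eq. (1), and fuses the resulting subgraphs with the weighted sum of Eq. (2). It is an illustration rather than the released implementation: the toy biadjacency matrix, the meta-path choices, and the uniform weights are assumptions made for this sketch. The toy example reproduces the 1/2 similarity from the Figure 2 caption.

```python
import numpy as np

def path_count(adjs):
    """PathCount matrix of a meta-path given as a list of (bi)adjacency
    matrices, e.g. [A_VD, A_VD.T] for "VDV"; PC[i, j] counts path instances."""
    pc = adjs[0]
    for a in adjs[1:]:
        pc = pc @ a
    return pc

def sps(pc):
    """Symmetric PathSim of Eq. (1):
    SPS(i, j) = (PC(i, j) + PC(j, i)) / (PC(i, i) + PC(j, j))."""
    diag = np.diag(pc)
    denom = diag[:, None] + diag[None, :]
    return np.where(denom > 0, (pc + pc.T) / np.maximum(denom, 1e-12), 0.0)

# Toy example: 3 visits x 4 diagnoses; visits 0 and 1 share one diagnosis.
a_vd = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 1]], dtype=float)
A_vdv = sps(path_count([a_vd, a_vd.T]))
print(A_vdv[0, 1])  # (1 + 1) / (2 + 2) = 0.5, as in the Fig. 2 caption

# Simple weighted-sum fusion of Eq. (2) over K subgraphs ("VDV", "VDVDV").
subgraphs = [A_vdv, sps(path_count([a_vd, a_vd.T, a_vd, a_vd.T]))]
w = np.full(len(subgraphs), 1.0 / len(subgraphs))  # trainable in the model
A_meta = sum(w_k * A_k for w_k, A_k in zip(w, subgraphs))
```

Because $PC$ is obtained by matrix products, the diagonal entries $PC(i,i)$ needed by the denominator of Eq. (1) come for free.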
As an example of why constant weights are limiting, consider predicting the condition of a patient: doctors may rely on different medical codes when making decisions. Since medical codes correspond to different meta-paths, we need to adjust the weight scalars for each node pair.

#### III-B2 Attention Sum

We have a node feature matrix $\bm{F}$ as the input, and it can help us learn the proper weights of each graph. Since we want to assign a unique weight to each node pair under each meta-path, the weight tensor can be denoted as $\bm{W}\in[0,1]^{K\times N\times N}$, and each element $w_{kij}$ in it is the attention weight of node pair $(n_{i},n_{j})$ on the $k$-th meta-path. Similarly, we need to make sure $\sum_{k=1}^{K}w_{kij}=1$. We adopt a one-layer feed-forward neural network to calculate the attention value $w_{kij}$. The neural network takes two node features $\bm{f}_{i}$ and $\bm{f}_{j}\in\bm{F}$ as the input, and outputs the weight of this node pair on all graphs. Formally, we have:

$w_{k,i,j}={\text{softmax}}_{k}(att_{ij})=\frac{\text{exp}(\sigma(\bm{\omega}_{k}^{T}[\bm{f}_{i}||\bm{f}_{j}]))}{\sum_{l=1}^{K}\text{exp}(\sigma(\bm{\omega}_{l}^{T}[\bm{f}_{i}||\bm{f}_{j}]))}.$ (3)

In Eq. (3), $\bm{f}_{i}$ and $\bm{f}_{j}$ are the feature vectors of nodes $n_{i}$ and $n_{j}$, and $||$ denotes the concatenation operation. $\bm{\Omega}_{att}=\{\bm{\omega}_{1};\bm{\omega}_{2};\cdots;\bm{\omega}_{K}\}$ is the parameter set of the neural network. After obtaining $w_{kij}$, we can get $A_{meta}$:

$\bm{A}_{meta}=\sum_{k=1}^{K}\bm{W}_{k}\circ\bm{A}_{k}$ (4)

where $\bm{W}_{k}$ denotes the $k$-th $N\times N$ matrix in $\bm{W}$ and $\circ$ denotes element-wise multiplication. This equation assigns personalized weights to different node pairs based on the node features. However, this approach fails to improve the performance in the experiments. The reason is that the node features $\bm{F}$ we use in the experiments are not informative, and thus they can introduce noise into the model and prevent it from learning meaningful attention weights. To address this issue, we need to let the node features first learn from $\mathcal{A}$, and then use them to generate meaningful attention weights.

#### III-B3 Aggregated Attention Sum

After learning from $\mathcal{A}$ to obtain a more informative node feature matrix $\bm{F}_{meta}$, we use $\bm{F}_{meta}$ to generate the attention weights for graph aggregation. Motivated by [18], in this step we apply a GNN on each graph to obtain multiple features for each node. Formally, for $k\in\{1,2,\cdots,K\}$ we have:

$\bm{F}^{(0)}_{k}=meta\_GNN_{k}(\bm{F},\bm{A}_{k})$ (5)

where $meta\_GNN$ can be any kind of GNN layer. In the next step, to learn the node feature matrix $\bm{F}_{meta}$, we use

$\bm{F}_{meta}=AGGREGATOR_{F}([\bm{F}^{(0)}_{1};\bm{F}^{(0)}_{2};\cdots;\bm{F}^{(0)}_{K}]),$ (6)

where $AGGREGATOR_{F}$ is the aggregation function, which can be a Graph Attention Network (GAT) [33]. Here we can also use other operations, such as concatenating or averaging $\bm{F}^{(0)}_{1},\bm{F}^{(0)}_{2},\cdots,\bm{F}^{(0)}_{K}$ together. When we get $\bm{F}_{meta}$, we can use Eq. 3 to learn the attention weights on the graphs:

$\begin{split}w_{k,i,j}={\text{softmax}}_{k}(meta\_att_{ij})=\\ \frac{\text{exp}(\sigma(\bm{\omega}_{k}^{T}[\bm{f}^{meta}_{i}||\bm{f}^{meta}_{j}]))}{\sum_{l=1}^{K}\text{exp}(\sigma(\bm{\omega}_{l}^{T}[\bm{f}^{meta}_{i}||\bm{f}^{meta}_{j}]))}.\end{split}$ (7)

Many kinds of operations and aggregators can be used as $AGGREGATOR_{F}$; several options, which are compared in the experiments, are listed after the following sketch of the attention-based fusion.
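The sketch below is a minimal PyTorch rendering of the attention-based fusion of Eqs. (3)-(4); the released code of the paper is in TensorFlow, and the parameter initialization and the choice of ReLU for the nonlinearity $\sigma$ are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse K similarity subgraphs into A_meta with pairwise attention:
    w_kij = softmax_k(sigma(omega_k^T [f_i || f_j])), Eqs. (3)-(4)."""
    def __init__(self, feat_dim, num_graphs):
        super().__init__()
        # One attention vector omega_k per meta-path (Omega_att in the text).
        self.omega = nn.Parameter(torch.randn(num_graphs, 2 * feat_dim) * 0.01)

    def forward(self, feats, subgraphs):
        # feats: (N, d) node features; subgraphs: (K, N, N) similarity matrices.
        n = feats.size(0)
        pairs = torch.cat([feats.unsqueeze(1).expand(n, n, -1),
                           feats.unsqueeze(0).expand(n, n, -1)], dim=-1)
        att = torch.einsum("kd,ijd->kij", self.omega, pairs)  # omega_k^T [f_i||f_j]
        w = torch.softmax(torch.relu(att), dim=0)             # normalize over K
        return (w * subgraphs).sum(dim=0)                     # A_meta, Eq. (4)

# Usage: A_meta = AttentionFusion(d, K)(F, torch.stack(list_of_A_k)).
```

Note that materializing all pairs costs $O(KN^{2})$ memory; in practice one would restrict attention to node pairs with nonzero similarity.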
Suppose we have obtained the $K$ node feature matrices $\bm{F}^{(0)}_{1},\bm{F}^{(0)}_{2},\cdots,\bm{F}^{(0)}_{K}$; we propose the following aggregation functions in our model.

* • Mean operation. That is, $\bm{F}_{meta}=\sum_{k=1}^{K}\bm{F}^{(0)}_{k}/K$ (8).
* • Concatenation operation. That is, $\bm{F}_{meta}=CONCAT([\bm{F}^{(0)}_{1};\bm{F}^{(0)}_{2};\cdots;\bm{F}^{(0)}_{K}])$ (9).

After obtaining $\bm{F}_{meta}$ and $\bm{A}_{meta}$, we use general GNN layers such as GCN [26] and GAT [33] to derive the final predictions.

### III-C Quick Inference When New Records Arrive

Basically, HSGNN needs all nodes in the graph to be present during training and is thus transductive. According to [27], a transductive GNN cannot handle new nodes and edges without re-training. However, there is a special characteristic of EHR graphs: the number of medical code nodes, such as diagnosis and medication nodes, stays almost constant across EHR graphs. The total number of diagnoses, medications, procedures and lab tests in a real-world dataset is about 5000, and they seldom change. This number is relatively small, and their similarities can easily be stored in memory. Another fact is that newly arriving patients/visits are never isolated, as they always appear with some medical features. In other words, there are always “patient/visit - medical code” links in the test set. Therefore, using these two properties, we can use HSGNN to infer new patients/visits without re-training the model. After the training step, we obtain a well-trained $\bm{A}_{meta}$ in HSGNN. Since $\bm{A}_{meta}$ contains medical relations, it can be used in the inference step. As shown in Fig. 3, when we dissect $\bm{A}_{meta}$, all edges in $\bm{A}_{meta}$ can be grouped into three categories.

* • Medical code - medical code edges. Edges between two medical codes, such as “diagnosis-medication” relations, reveal the relationships between medical factors. The weights of these edges remain stable after training and can be reused in the inference step.
* • Human - medical code edges. These edges represent the relationship between a human (patient/visit node) and a medical code. Since the human nodes differ between the training and testing steps, the weights of these edges cannot be reused. However, we can calculate these weights in the preprocessing step using testing data under “human - $\cdots$ - medical code” meta-paths.
* • Human - human edges. Weights in this part are set to 0 since there is no way to calculate them. The volume of testing data is relatively small and we still have the other edges available, so these zeros will not interfere with prediction.

After obtaining a new $\bm{A}_{meta}$ for the testing set, we can use general GNNs to predict the testing results. More details about this part are provided in the experiment section.

Figure 4: The data schema of the MIMIC-III network.

## IV Experiments

In this section, we conduct experiments on the public MIMIC dataset and show the superiority of HSGNN over the other baselines.

### IV-A MIMIC-III Dataset

MIMIC [37, 38] is a publicly available dataset consisting of health records of 46,520 intensive care unit (ICU) patients over 11 years. Table II shows the statistics of the graph we construct, and Fig. 4 shows the structure of the MIMIC-III dataset. Raw MIMIC-III data consists of 17 tables, including demographics, laboratory test results, microbiology test results, diagnoses, medications, procedures, medical notes, etc. For each patient and visit, there is a unique ID to track its corresponding information through the tables.
There are extra tables recording the patient-visit relations, demographics, and data dictionaries as well. To build a clean and efficient heterogeneous graph based on these data, we mainly do the following things.

Table II: Node statistics for the MIMIC-III network.

| MIMIC-III | # of codes | Avg # of codes (per visit) |
|---|---|---|
| Patient | 46,520 | – |
| Visit | 58,976 | – |
| Diagnosis | 203 | 11.20 |
| Procedure | 157 | 4.65 |
| Medication | 304 | 23.18 |
| Lab tests | 480 | 27.55 |
| Microbiology tests | 258 | 0.94 |
| Symptoms | 324 | 19.06 |

Table III: Overall top-$k$ precision of all baselines and HSGNN variants on the MIMIC-III dataset ("V p@k" and "P p@k" denote visit-level and patient-level precision@k).

| Model | V p@5 | V p@10 | V p@15 | V p@20 | P p@5 | P p@10 | P p@15 | P p@20 |
|---|---|---|---|---|---|---|---|---|
| Dipole | 0.5929 | 0.7426 | 0.7942 | 0.7540 | 0.6393 | 0.7359 | 0.7271 | 0.7239 |
| KAME | 0.6107 | 0.7475 | 0.7967 | 0.7562 | 0.6472 | 0.7565 | 0.7530 | 0.7288 |
| HeteroMed | 0.5893 | 0.7314 | 0.7866 | 0.7670 | 0.6285 | 0.7255 | 0.7171 | 0.7193 |
| MAGNN | 0.6142 | 0.7471 | 0.8092 | 0.7693 | 0.6501 | 0.7585 | 0.7548 | 0.7394 |
| HAN | 0.6135 | 0.7464 | 0.8083 | 0.7691 | 0.6494 | 0.7582 | 0.7550 | 0.7386 |
| HetGNN | 0.6124 | 0.7456 | 0.8070 | 0.7689 | 0.6489 | 0.7580 | 0.7452 | 0.7379 |
| GCT | 0.6297 | 0.7503 | 0.8107 | 0.7703 | 0.6633 | 0.7592 | 0.7685 | 0.7384 |
| HSGNN | 0.6426 | 0.7658 | 0.8189 | 0.7736 | 0.6778 | 0.7613 | 0.7702 | 0.7456 |
| simi-HSGNN | 0.6123 | 0.7396 | 0.8034 | 0.7689 | 0.6488 | 0.7481 | 0.7452 | 0.7479 |
| sum-HSGNN | 0.6412 | 0.7630 | 0.8129 | 0.7683 | 0.6724 | 0.7597 | 0.7696 | 0.7384 |
| HSGNN-m | 0.6410 | 0.7638 | 0.8175 | 0.7667 | 0.6752 | 0.7740 | 0.7723 | 0.7429 |

##### data disambiguation

There are more than 1000 kinds of medications in the original dataset. Most of them are different abbreviations or preparations of the same medicine. In the experiment, we disambiguate these medicines by comparing the most common strings in the medication names and finally extract the 304 most common medications.

##### continuous variables bucketization

Lab test results are mostly continuous values. Therefore, we need to bucketize them into discrete variables and integrate these variables into the graph. Some of the entries in the lab test table in MIMIC-III contain an “N/A” flag indicating whether the test result is normal or abnormal. We then set up two nodes for such a lab test, representing “normal” and “abnormal”. For other lab tests that do not have such a flag, we use the quartiles of the lab test values to separate the outliers from the common values (a sketch of this step is given at the end of this subsection). Some data engineering work is also conducted in this step to make sure we get sensible thresholds.

##### medical notes preprocessing

There are no symptom records for patients in MIMIC-III, but there exist medical notes for each visit. Medical notes contain rich diagnostic information but are difficult to process since they are free text. To extract diagnostic information (symptoms) from them without data leakage, we use an extra knowledge graph, MeSH (Medical Subject Headings, https://meshb.nlm.nih.gov/treeView), to extract meaningful structural diagnostic information from the free text. We extract the entries “medications on admission”, “family history”, “impression”, “chief complaint”, “physical examination on admission” and “history” from the medical notes, and then match words in these entries to MeSH. After that, we use these matched keywords together with their connections in MeSH to help build the heterogeneous MIMIC-III network. By doing this, we extract keywords from the diagnostic texts while incorporating an external knowledge graph into our graph.
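As an illustration of the bucketization rule mentioned above, the sketch below applies Tukey's 1.5*IQR fences to the quartiles to separate outliers from common values; the exact thresholds used in the paper involve additional data engineering, so this particular rule and the toy data are assumptions.

```python
import numpy as np
import pandas as pd

def bucketize_lab_test(values: pd.Series) -> pd.Series:
    """Map continuous lab-test values to discrete node labels using
    quartile-based fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR)."""
    q1, q3 = values.quantile(0.25), values.quantile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return pd.cut(values, bins=[-np.inf, lo, hi, np.inf],
                  labels=["low_outlier", "common", "high_outlier"])

# Hypothetical usage on one lab test's results:
glucose = pd.Series([88, 92, 95, 101, 97, 240, 35])
print(bucketize_lab_test(glucose).value_counts())
```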
##### other medical codes

MIMIC-III uses the ICD-9-PC and ICD-9 ontologies to represent all procedures and diagnoses. The International Classification of Diseases (ICD) is a medical ontology widely used in healthcare. In these ontologies, diagnoses and procedures are organized in hierarchical structures, and the first several digits denote a high-level concept of the code. In this case, we choose the first two digits of the procedure codes and the first three digits of the diagnosis codes as prediction targets. We then select the most commonly occurring codes as nodes in our graph and discard the other rare codes.

Fig. 5: Precision and running time of quick inference compared with the traditional train-and-test procedure, in four panels: visit-level precision, visit-level time, patient-level precision, and patient-level time. (Dark green denotes training, brown denotes testing, and purple denotes quick inference.)

### IV-B Baselines

To demonstrate the advantage of HSGNN, we select several medical predictive models and graph neural networks as our baselines.

* • Dipole [10]. Dipole uses bidirectional recurrent neural networks and an attention mechanism to make predictions. In this experiment, we use patient conditions at different times in one visit to make the visit-level prediction, and use information from different visits to make the patient-level prediction.
* • KAME [14]. KAME learns to predict patients’ health situations. It incorporates a medical knowledge graph and utilizes an attention mechanism to make accurate predictions. We leverage the MeSH ontology as the knowledge graph to run this model. It is fair to compare KAME and our proposed HSGNN because both make use of the same ontology, although in different settings.
* • HeteroMed [39]. HeteroMed is the first approach using an HIN to process EHRs. It exploits meta-paths and employs a joint embedding framework to predict diagnoses for patients. We use the same graph structure for this model as for HSGNN.
* • MAGNN [20]. MAGNN proposes intra-metapath aggregators and inter-metapath aggregators to make inductive predictions on heterogeneous graphs. We use the same graph structure and meta-paths for this model as for HSGNN.
* • HetGNN [19]. HetGNN is a heterogeneous graph neural network that introduces a random walk to sample a fixed number of heterogeneous neighbors and leverages a neural network architecture with two modules to aggregate the feature information of those sampled neighboring nodes.
* • HAN [18]. HAN is a heterogeneous graph neural network based on hierarchical attention, including node-level and semantic-level attention, which learns the importance between a node and its meta-path-based neighbors as well as the importance of different meta-paths.
* • GCT [15]. GCT uses graph convolutional transformers to jointly learn the hidden structure of EHRs while performing prediction tasks on EHR data. GCT uses data statistics to guide the structure learning process. In the experiments, we use the data schema mentioned above to generate its pre-training weights.

Meanwhile, we also conduct experiments on the following variants of HSGNN to find the best architecture.

* • HSGNN. The model we propose in this paper, using the concatenation operation to derive $\bm{F}_{meta}$ (Eqs. 5 and 9) and the aggregated attention sum to derive $\bm{A}_{meta}$ (Eqs. 7 and 4). Then a one-layer GCN is applied on $\bm{F}_{meta}$ and $\bm{A}_{meta}$ to make the final predictions.
* • simi-HSGNN. Uses $PathCount$ instead of SPS to derive $\mathcal{A}$. This is to show the effectiveness of SPS.
* • sum-HSGNN. Uses the simple weighted sum to derive $\bm{A}_{meta}$ (Eq. 2).
Then a one-layer GCN is applied on $\bm{F}$ and $\bm{A}_{meta}$ to make the final predictions. This is to compare HSGNN with a simpler model and show the effectiveness of splitting the EHR graph into multiple subgraphs.
* • HSGNN-m. Uses the mean aggregator to derive $\bm{F}_{meta}$. Other settings are the same as HSGNN. This variant is to show the effect of different aggregation functions.

### IV-C Problem Introduction

Diagnosis prediction can be viewed as a multi-label classification problem where we try to predict multiple possible diagnoses for the patients or visits. We conduct both patient-level and visit-level prediction on the dataset. For patient-level prediction, only diagnoses existing on all visits of a patient are counted as diagnoses of the patient. We then split the training and testing sets by removing the corresponding “visit-diagnosis” edges in the graph. Then, since medications and procedures can be determined by diagnoses, these edges are also removed to prevent data leakage.

### IV-D Experiment Settings

In the experiments, we use the concatenation of feature vectors from different sources as the features of the visits, and we use them for all baseline models. For each experiment, we use 10-fold cross validation. The training, validation and testing sets are split with a 7 : 1 : 2 ratio. Our method is implemented with Tensorflow 2.0 and Python 3.6, and tested on a machine with 32 GB RAM and 2 NVIDIA GeForce RTX 2080 Ti GPUs. To evaluate the quality of prediction, we use precision at top-$k$ as the metric. We set the value of $k$ to 5, 10, 15, and 20.

### IV-E Results of diagnosis prediction

Fig. 6: T-SNE scatterplots of diagnosis embeddings trained by DeepWalk, metapath2vec, GRAM, and HSGNN (one panel each).

#### IV-E1 Comparison with other baselines

Table III displays the performance of all compared models on MIMIC-III. In the table, HSGNN and its variant HSGNN-m outperform all other baselines. We conduct the diagnosis prediction task on the MIMIC-III dataset. Generally, there are about 10 diagnoses for each visit and 4 visits for each patient. Therefore, when $k$ increases, the precision may either increase or decrease. The accuracy of a model approximately reaches its maximum at $k=10$ for patient-level diagnosis prediction and $k=15$ for visit-level prediction. This is also why we choose a maximum of $k=20$. Therefore, if we focus on the column $k=15$ of the visit-level prediction and $k=10$ of the patient-level prediction, we find that HSGNN improves by 0.7% and 1.4% on the two tasks. All baselines, together with HSGNN, can be classified into three categories: RNN models, homogeneous graph models and heterogeneous graph models. From the results we can infer that homogeneous graph models (KAME and GCT) perform better than RNN models (Dipole), and heterogeneous graph approaches (MAGNN and HSGNN) perform better than homogeneous approaches. This demonstrates the effectiveness of considering structural information when making predictions. Compared with homogeneous graphs, heterogeneous graphs carry more information and thus can bring more improvement when applied in the model. Among all baseline models, GCT achieves the best performance even though it uses a homogeneous graph. Note that a common design of GCT and HSGNN is that they both use trainable weights and construct a virtual graph in the model. Therefore, we can infer that, compared with using the original input graph, a virtual graph constructed in the model can improve the performance of a GNN.
Since our proposed HSGNN outperforms GCT, our model learns a more accurate graph structure. This is because our model uses the heterogeneous graph as input and considers the differences between meta-paths.

#### IV-E2 Comparison among HSGNN variants

We also test some variants of HSGNN to find the best architecture while performing ablation studies. The first variant we compare with is simi-HSGNN, which uses $PathCount$ as the similarity measure instead of SPS. By doing so, HSGNN becomes almost equivalent to HAN [18], and its performance can be viewed as the performance of HAN. Simi-HSGNN performs worse than HSGNN by around 2% on both tasks, showing that using the normalized similarity measure SPS is essential to achieving better results. Another variant considered is sum-HSGNN. Compared with HSGNN, sum-HSGNN is a simplified version since it contains fewer parameters and is faster to train. Nevertheless, the performance of sum-HSGNN does not decrease much despite its simplicity, and sum-HSGNN outperforms all other variants and baseline models except HSGNN and HSGNN-m. The reason may be that sum-HSGNN still preserves the mechanism of learning a trainable virtual graph. HSGNN-m shows the impact of different node aggregators on the model performance. However, we discover that the influence of aggregators is limited if the sizes of the embeddings are kept constant. Therefore, we choose the mean aggregator, which is easier to implement and achieves satisfactory performance, for comparison in the experiments.

### IV-F Performance of Quick Inference

To compare the efficiency and effectiveness of our quick inference method (Section III-C) with the traditional testing step, we design the following experiment to evaluate its performance. Firstly, we choose $a\%$ of the data randomly from the dataset as training and validation samples. Then we split the remaining $(1-a\%)$ samples equally between traditional testing and quick inference. Secondly, in the preprocessing step, both training samples and testing samples are used to generate the graph. Then this graph is fed forward to our model. Finally, when the model is well-trained, we use the quick inference method to predict the remaining $(1-a\%)/2$ samples, and compare its precision and running time with the traditional testing procedure. In this experiment, we set $a=80\%,70\%,60\%$. Fig. 5 shows the training performance, testing performance and quick inference performance under visit-level and patient-level prediction. For each task, we evaluate the precision@10 of the training samples, testing samples and quick inference samples after the model is well-trained. We also measure the time needed for the testing samples and the quick inference samples to get results. We do not measure the time of the training procedure because it depends on parameters such as the learning rate. From Fig. 5 we can discover that the quick inference precision is only slightly lower than the traditional testing precision on both the visit prediction and patient prediction tasks. Nevertheless, the time for getting quick inference results is much shorter than for getting a traditional testing result. This is because quick inference can get $\bm{A}_{meta}$ without forward-propagation, and then get results simply through a one-layer graph neural network. As $a\%$ decreases, the training, testing and quick inference precision all decrease. This is because of the lack of training samples, which makes the model underfit.
On the other hand, the decrease in training samples means the numbers of testing samples and quick inference samples increase; therefore, the number of inferences to perform increases.

### IV-G Representation Learning with External Knowledge

HSGNN can learn representations for nodes. Since many models such as GRAM can learn high-quality representations by integrating medical ontologies, we test the ability of HSGNN to learn informative representations on the same task. In this experiment, we apply the ICD-9 ontology to both GRAM and HSGNN to let them learn representations for diagnoses. Here we choose nine categories in the ICD-9 ontology to build the graph. Since diagnoses in the same category are directly connected and are more related to each other, an ideal result is that all diagnosis nodes belonging to the same category form a cluster in the visualization. To train HSGNN in an unsupervised way, we apply a loss similar to [27] that maximizes the dot product of diagnoses in the same category. Fig. 6 shows the result of representation learning by plotting the t-SNE results [40]. Here we compare the results of HSGNN with GRAM, DeepWalk [41] and metapath2vec [42]. In Fig. 6, the colors of the dots represent the ICD-9 categories. According to the visualization, HSGNN produces high-quality representations, since it forms clear clusters for each category.

## V Conclusion

EHR data is highly heterogeneous, high-dimensional and temporal. To model the intrinsic complexity of EHRs and utilize external medical knowledge, we propose the HSGNN framework to learn high-quality representations while generating predictions. HSGNN accepts similarity matrices as inputs and uses the attention mechanism to measure the impact of each meta-path. In the experiment section, we conduct the diagnosis prediction task on the MIMIC-III dataset, demonstrating the superiority of HSGNN over the baseline models. The visualization of representations shows the ability of HSGNN to generate reasonable representations for both diagnoses and patients. The superiority of HSGNN comes mainly from the fact that it can make use of external medical ontologies together with both temporal and structural information.

## Acknowledgment

The corresponding author is Hao Peng. This work is supported by Key Research and Development Project of Hebei Province (No. 20310101D), NSFC No.62002007 and No.62073012, and in part by NSF under grants III-1763325, III-1909323, IIS-1763365 and SaTC-1930941.

## References

* [1] T. Fu, T. N. Hoang, C. Xiao, and J. Sun, “DDL: deep dictionary learning for predictive phenotyping,” in _IJCAI_ , 2019, pp. 5857–5863. * [2] T. Bai, A. K. Chanda, B. L. Egleston, and S. Vucetic, “Ehr phenotyping via jointly embedding medical concepts and words into a unified vector space,” _BMC medical informatics and decision making_ , vol. 18, no. 4, p. 123, 2018. * [3] X. S. Zhang, F. Tang, H. H. Dodge, J. Zhou, and F. Wang, “Metapred: Meta-learning for clinical risk prediction with limited patient electronic health records,” in _SIGKDD_ , 2019, pp. 2487–2495. * [4] Z. C. Lipton, D. C. Kale, C. Elkan, and R. Wetzel, “Learning to diagnose with lstm recurrent neural networks,” _arXiv preprint arXiv:1511.03677_ , 2015. * [5] J. Shang, C. Xiao, T. Ma, H. Li, and J. Sun, “Gamenet: Graph augmented memory networks for recommending medication combination,” in _AAAI_ , 2019, pp. 1126–1133. * [6] Z. Che and Y. Liu, “Deep learning solutions to computational phenotyping in health care,” in _ICDM Workshops_. IEEE Computer Society, 2017, pp.
1100–1109. [Online]. Available: https://doi.org/10.1109/ICDMW.2017.156 * [7] X. Cai, J. Gao, K. Y. Ngiam, B. C. Ooi, Y. Zhang, and X. Yuan, “Medical concept embedding with time-aware attention,” in _Proceedings of the 27th International Joint Conference on Artificial Intelligence_ , 2018, pp. 3984–3990. * [8] E. Choi, M. T. Bahadori, E. Searles, C. Coffey, M. Thompson, J. Bost, J. Tejedor-Sojo, and J. Sun, “Multi-layer representation learning for medical concepts,” in _SIGKDD_ , 2016, pp. 1495–1504. * [9] E. Choi, M. T. Bahadori, J. Sun, J. Kulas, A. Schuetz, and W. F. Stewart, “RETAIN: an interpretable predictive model for healthcare using reverse time attention mechanism,” in _NeurIPS_ , 2016, pp. 3504–3512. * [10] F. Ma, R. Chitta, J. Zhou, Q. You, T. Sun, and J. Gao, “Dipole: Diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks,” in _SIGKDD_ , 2017, pp. 1903–1911. * [11] Z. C. Lipton, D. C. Kale, C. Elkan, and R. C. Wetzel, “Learning to diagnose with LSTM recurrent neural networks,” in _4th International Conference on Learning Representations, ICLR 2016_ , 2016. * [12] E. Choi, M. T. Bahadori, A. Schuetz, W. F. Stewart, and J. Sun, “Doctor AI: predicting clinical events via recurrent neural networks,” in _Proceedings of the 1st Machine Learning in Health Care, MLHC 2016_ , vol. 56. JMLR.org, 2016, pp. 301–318. * [13] M. Aczon, D. Ledbetter, L. V. Ho, A. M. Gunny, A. Flynn, J. Williams, and R. C. Wetzel, “Dynamic mortality risk predictions in pediatric critical care using recurrent neural networks,” _CoRR_ , vol. abs/1701.06675, 2017. * [14] F. Ma, Q. You, H. Xiao, R. Chitta, J. Zhou, and J. Gao, “KAME: knowledge-based attention model for diagnosis prediction in healthcare,” in _CIKM_ , 2018, pp. 743–752. * [15] E. Choi, Z. Xu, Y. Li, M. W. Dusenberry, G. Flores, E. Xue, and A. M. Dai, “Learning the graphical structure of electronic health records with graph convolutional transformer,” in _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence_. AAAI Press, 2020. * [16] E. Choi, M. T. Bahadori, L. Song, W. F. Stewart, and J. Sun, “GRAM: graph-based attention model for healthcare representation learning,” in _KDD_ , 2017, pp. 787–795. * [17] E. Choi, C. Xiao, W. F. Stewart, and J. Sun, “Mime: Multilevel medical embedding of electronic health records for predictive healthcare,” in _NIPS 2018_ , 2018, pp. 4552–4562. * [18] X. Wang, H. Ji, C. Shi, B. Wang, Y. Ye, P. Cui, and P. S. Yu, “Heterogeneous graph attention network,” in _The World Wide Web Conference, WWW 2019_. ACM, 2019, pp. 2022–2032. * [19] C. Zhang, D. Song, C. Huang, A. Swami, and N. V. Chawla, “Heterogeneous graph neural network,” in _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_. ACM, 2019, pp. 793–803. * [20] X. Fu, J. Zhang, Z. Meng, and I. King, “MAGNN: metapath aggregated graph neural network for heterogeneous graph embedding,” in _WWW ’20: The Web Conference 2020_. ACM / IW3C2, 2020, pp. 2331–2341. * [21] Y. Sun, J. Han, X. Yan, P. S. Yu, and T. Wu, “Pathsim: Meta path-based top-k similarity search in heterogeneous information networks,” _PVLDB_ , vol. 4, no. 11, pp. 992–1003, 2011. * [22] Y. Shi, P. Chan, H. Zhuang, H. Gui, and J. Han, “Prep: Path-based relevance from a probabilistic perspective in heterogeneous information networks,” in _Proceedings of the 23rd ACM SIGKDD_. ACM, 2017, pp. 425–434. * [23] Q. Li, Z. Han, and X. 
Wu, “Deeper insights into graph convolutional networks for semi-supervised learning,” in _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence_. AAAI Press, 2018, pp. 3538–3545. * [24] D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, and X. Sun, “Measuring and relieving the over-smoothing problem for graph neural networks from the topological view,” in _AAAI 2020_. AAAI Press, 2020, pp. 3438–3445. * [25] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, “Spectral networks and locally connected networks on graphs,” _arXiv preprint arXiv:1312.6203_ , 2013. * [26] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in _ICLR_. OpenReview.net, 2017. * [27] W. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in _Advances in neural information processing systems_ , 2017, pp. 1024–1034. * [28] Y. Dou, Z. Liu, L. Sun, Y. Deng, H. Peng, and P. S. Yu, “Enhancing graph neural network-based fraud detectors against camouflaged fraudsters,” in _Proceedings of the 29th ACM International Conference on Information & Knowledge Management_, 2020, pp. 315–324. * [29] H. Peng, J. Li, Q. Gong, Y. Song, Y. Ning, K. Lai, and P. S. Yu, “Fine-grained event categorization with heterogeneous graph convolutional networks,” _IJCAI_ , 2019. * [30] Z. Liu, X. Li, Z. Fan, S. Guo, K. Achan, and P. S. Yu, “Basket recommendation with multi-intent translation graph neural network,” _arXiv preprint arXiv:2010.11419_ , 2020. * [31] X. Li, M. Zhang, S. Wu, Z. Liu, L. Wang, and P. S. Yu, “Dynamic graph collaborative filtering,” in _ICDM_ , 2020. * [32] Y. Gao, L. Xiaoyong, P. Hao, B. Fang, and P. Yu, “Hincti: A cyber threat intelligence modeling and identification system based on heterogeneous information network,” _IEEE Transactions on Knowledge and Data Engineering_ , 2020. * [33] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” in _ICLR_ , vol. abs/1710.10903, 2018\. * [34] Y. Cao, H. Peng, and S. Y. Philip, “Multi-information source hin for medical concept embedding,” in _Pacific-Asia Conference on Knowledge Discovery and Data Mining_. Springer, 2020, pp. 396–408. * [35] C. Shi, Y. Li, J. Zhang, Y. Sun, and P. S. Yu, “A survey of heterogeneous information network analysis,” _IEEE Trans. Knowl. Data Eng._ , vol. 29, no. 1, pp. 17–37, 2017. [Online]. Available: https://doi.org/10.1109/TKDE.2016.2598561 * [36] L. Zhao and L. Akoglu, “Pairnorm: Tackling oversmoothing in gnns,” in _8th International Conference on Learning Representations, ICLR 2020_ , 2020\. * [37] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley, “Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals,” _Circulation_ , vol. 101, no. 23, pp. e215–e220, 2000. * [38] A. E. Johnson, T. J. Pollard, L. Shen, H. L. Li-wei, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark, “Mimic-iii, a freely accessible critical care database,” _Scientific data_ , vol. 3, p. 160035, 2016. * [39] A. Hosseini, T. Chen, W. Wu, Y. Sun, and M. Sarrafzadeh, “Heteromed: Heterogeneous information network for medical diagnosis,” in _CIKM_ , 2018, pp. 763–772. * [40] P. E. Rauber, A. X. Falcão, and A. C. Telea, “Visualizing time-dependent data using dynamic t-sne,” in _EuroVis_ , 2016, pp. 73–77. * [41] B. Perozzi, R. Al-Rfou, and S. 
Skiena, “Deepwalk: online learning of social representations,” in _SIGKDD_ , 2014, pp. 701–710. * [42] Y. Dong, N. V. Chawla, and A. Swami, “metapath2vec: Scalable representation learning for heterogeneous networks,” in _SIGKDD_ , 2017, pp. 135–144.
# Measure-conditional Discriminator with Stationary Optimum for GANs and Statistical Distance Surrogates

Liu Yang Tingwei Meng George Em Karniadakis

###### Abstract

We propose a simple but effective modification of the discriminators, namely measure-conditional discriminators, as a plug-and-play module for different GANs. By taking the generated distributions as part of the input, so that the target optimum for the discriminator is stationary, the proposed discriminator is more robust than the vanilla one. A variant of the measure-conditional discriminator can also handle multiple target distributions, or act as a surrogate model of statistical distances such as the KL divergence, with applications to transfer learning.

## 1 Introduction

Generative adversarial networks (GANs) (Goodfellow et al., 2014) have proven to be successful in training generative models to fit target distributions. Apart from tasks of image generation (Brock et al., 2019; Zhu et al., 2017), text generation (Zhang et al., 2017; Fedus et al., 2018), etc., GANs have also been used in physical problems to infer unknown parameters in stochastic systems (Yang et al., 2020b; Yang & Perdikaris, 2019; Yang et al., 2020a). Due to the variety of generative models and GAN loss functions, and the need for high-accuracy inference, such tasks usually impose strict requirements on the robustness of GANs, as well as on the similarity between the generated and target distributions in various metrics. The optimum for the discriminator is, in general, non-stationary, i.e., it varies during the training, since it depends on the generated distributions. Such an issue could lead to instability or oscillation in the training. Here, we propose a simple but effective modification to the discriminator as a plug-and-play module for different GANs, including vanilla GANs (Goodfellow et al., 2014), Wasserstein GANs with gradient penalty (WGAN-GP) (Gulrajani et al., 2017), etc. The main idea is to make the discriminator conditioned on the generated distributions, so that its optimum is stationary during the training. The neural network architecture of the measure-conditional discriminator is adapted from the DeepSets neural network (Zaheer et al., 2017), which is widely used in point cloud related tasks. It is also used in GANs (Li et al., 2018) where each sample corresponds to a point cloud, while we target more general tasks where each sample corresponds to a particle, an image, etc. In Lucas et al. (2018), the discriminator takes the mixture of real and generated distributions (instead of individual samples) as input, but it also has a non-stationary target optimum, and it performs worse than our discriminators in experiments. We also emphasize the difference between the measure-conditional discriminator and conditional GANs (Mirza & Osindero, 2014). Conditioned on a vector featuring the target distributions, conditional GANs still have a non-stationary target optimum for the discriminator and are limited to scenarios where the samples can be categorized. Moreover, the measure-conditional discriminator can also be applied in conditional GANs, by making the original discriminator further conditioned on the generated distributions. In Section 2 we discuss why we need a stationary target optimum for the discriminator. In Section 3 we give a detailed description of the proposed discriminator neural networks and how to apply them in GANs.
In Section 4 we extend the application of measure-conditional discriminators as surrogate models of statistical distances. In Section 5 we present a universal approximation theorem for the neural networks used in this paper. The experimental results are shown in Section 6. We conclude in Section 7.

Figure 1: Results for the illustrative example of oscillation. (a): vanilla discriminator using gradient descent with learning rate 0.01. (b-d): vanilla discriminator using optimistic mirror descent, with learning rates 0.1, 0.01, and 0.001, respectively. Note that the products of learning rate and training iterations are kept the same. (e): measure-conditional discriminator using gradient descent with learning rate 0.01. The red and blue lines show the generator and discriminator parameters in both dimensions, while the black horizontal lines represent the ground truth for the generator parameters.

## 2 Stationary Target Optimum for the Discriminator

In general, there are two mathematical perspectives on GANs. The first perspective is to view GANs as a two-player zero-sum game between the generator $G$ and the discriminator $D$. The hope is that the iterative adversarial training of $G$ and $D$ will lead to the Nash equilibrium of this zero-sum game, where the generated distribution is identical to the target distribution of real data. The second perspective is that the discriminator gives the distance between the generated distribution and the target distribution in a variational form. For example, vanilla GANs can be formulated as:

$\displaystyle\min_{G}$ $\displaystyle\max_{D}V(G_{\#}\mathcal{N},D),$ (1) $\displaystyle V(G_{\#}\mathcal{N},D)$ $\displaystyle=\mathbb{E}_{x\sim G_{\#}\mathcal{N}}\log(1-D(x))+\mathbb{E}_{x\sim Q}\log(D(x))$ $\displaystyle=\mathbb{E}_{z\sim\mathcal{N}}\log(1-D(G(z)))+\mathbb{E}_{x\sim Q}\log(D(x)),$

while Wasserstein GANs (WGANs) can be formulated as:

$\displaystyle\min_{G}$ $\displaystyle\max_{D\text{ is 1-Lipschitz}}V(G_{\#}\mathcal{N},D),$ (2) $\displaystyle V(G_{\#}\mathcal{N},D)$ $\displaystyle=-\mathbb{E}_{x\sim G_{\#}\mathcal{N}}D(x)+\mathbb{E}_{x\sim Q}D(x)$ $\displaystyle=-\mathbb{E}_{z\sim\mathcal{N}}D(G(z))+\mathbb{E}_{x\sim Q}D(x),$

where $Q$ represents the target distribution, $\mathcal{N}$ represents the input noise, $\#$ is the push-forward operator, and thus $G_{\#}\mathcal{N}$ represents the generated distribution. In vanilla GANs and WGANs, $\max_{D}V(G_{\#}\mathcal{N},D)$ and $\max_{D\text{ is 1-Lipschitz}}V(G_{\#}\mathcal{N},D)$ are nothing but the Jensen-Shannon (JS) divergence and the Wasserstein-1 distance, up to constants, between $G_{\#}\mathcal{N}$ and $Q$, respectively.

### 2.1 Non-stationary Target Optimum Hurts: An Illustrative Example

From both perspectives on GANs, the discriminator approaches its optimum $D^{*}$ in each iteration. However, we will use the following illustrative example to demonstrate that $D^{*}$ could be totally different as we perturb the generator, and such an issue leads to oscillation of both the generator and the discriminator during training. This illustrative problem is adapted from Daskalakis et al. (2018) with a different analysis. We first consider a linear discriminator as well as a translation function as the generator, i.e.,

$\displaystyle D_{w}(x)$ $\displaystyle=\langle w,x\rangle=\sum_{i=1}^{d}w_{i}x_{i},$ (3) $\displaystyle G_{\theta}(z)$ $\displaystyle=z+\theta,z\sim\mathcal{N}(0,I)=:\mathcal{N}$

with the target real distribution $Q=\mathcal{N}(v,I)$.
The goal is to learn $\theta$ with ground truth $\theta^{*}=v$. The WGAN with weight-clipping is formulated as

$\displaystyle\min_{\theta}$ $\displaystyle\max_{|w_{i}|\leq c}-\mathbb{E}_{z\sim\mathcal{N}}D(G(z))+\mathbb{E}_{x\sim Q}D(x),$ (4)

where $c>0$ is the weight-clipping bound. In practice, we use the empirical distributions to calculate the expectations, but if we calculate them analytically, we have the following min-max formulation:

$\displaystyle\min_{\theta}$ $\displaystyle\max_{|w_{i}|\leq c}\langle w,v-\theta\rangle.$ (5)

This two-player game has a unique equilibrium at $\theta=v,w=0$, which appears to be satisfactory. However, if we set $\theta=v+\epsilon$, where $\epsilon\neq 0$ is the inevitable small fluctuation vector due to the randomness of the training data, the moments in the optimizer, etc., then $w$ would achieve the corresponding optimum at $-\text{sgn}(\epsilon)c$, where “sgn” denotes the component-wise sign function. In other words, the optimal $w$ would jump between $c$ and $-c$ in each entry as $\theta$ fluctuates around the ground truth. Such a jumping optimum leads to the oscillation of both the generator and the discriminator during training, as illustrated in Figure 1(a), where we test on a 2D problem with ground truth $v=(3,4)$ and $c=10$. Note that even if we set the discriminator to be a general 1-Lipschitz function as in Equation 2, the corresponding optimum will be $D^{*}(x)=-\epsilon\cdot x/|\epsilon|+d$, which is still sensitive to the small fluctuation $\epsilon$, where $d$ is an arbitrary constant. To remove the oscillation, Daskalakis et al. (2018) proposed to replace the gradient descent (GD) for the min-max formulation (5),

$\displaystyle w_{t+1}-w_{t}$ $\displaystyle=\eta(v-\theta_{t})=:\eta\dot{w}_{t},$ (6) $\displaystyle\theta_{t+1}-\theta_{t}$ $\displaystyle=\eta w_{t}=:\eta\dot{\theta}_{t},$

with the optimistic mirror descent (OMD)

$\displaystyle w_{t+1}-w_{t}$ $\displaystyle=2\eta(v-\theta_{t})-\eta(v-\theta_{t-1})=\eta\dot{w}_{t}-\eta^{2}\dot{\theta}_{t-1},$ (7) $\displaystyle\theta_{t+1}-\theta_{t}$ $\displaystyle=2\eta w_{t}-\eta w_{t-1}=\eta\dot{\theta}_{t}+\eta^{2}\dot{w}_{t-1},$

where $\eta$ is the learning rate. However, we report that the oscillation decay with OMD can be very slow when a small learning rate is used, as illustrated in Figures 1(b), 1(c), and 1(d). This is because the differences between the OMD and GD updates, $-\eta^{2}\dot{\theta}_{t-1}$ and $\eta^{2}\dot{w}_{t-1}$, are second order w.r.t. $\eta$, i.e., one order higher than the GD updates $\eta\dot{w}_{t}$ and $\eta\dot{\theta}_{t}$. The difference between the GD and OMD dynamics thus vanishes as $\eta$ goes to zero. In the following, we propose a much simpler and more effective strategy to remove these oscillations.

### 2.2 Benefits of Stationary Target Optimum

Since the aforementioned problem is due to the fact that the target optimum for the discriminator is non-stationary, to remove the oscillation we propose to modify the discriminator architecture so that its target optimum is stationary during the training. While keeping the generator and the min-max formulation unchanged as in Equations 3 and 4, we set the discriminator to

$\displaystyle D_{w}(x)=\sum_{i=1}^{d}w_{i}x_{i}(\mathbb{E}_{x\sim Q}(x)-\mathbb{E}_{z\sim\mathcal{N}}(G_{\theta}(z)))_{i}$ (8)

where $(\cdot)_{i}$ denotes the $i$-th component. The only difference between Equations 3 and 8 is the weighting of the terms $w_{i}x_{i}$.
If we calculate the expectations in the min-max formulation (4) analytically, we obtain

$\displaystyle\min_{\theta}$ $\displaystyle\max_{|w_{i}|\leq c}\sum_{i=1}^{d}w_{i}(v_{i}-\theta_{i})^{2}.$ (9)

For this min-max problem, any $w$ with non-negative entries together with $\theta=v$ is a Nash equilibrium. If we set $\theta=v+\epsilon$ with $\epsilon\neq 0$, then $w$ achieves the corresponding optimum at $c$ in each entry, i.e., the target optimum for the discriminator is stationary. As shown in Figure 1(e), the oscillation is totally removed. Each entry of $w$ heads to the optimum $c$ in the early stage of training, while the change becomes negligible after $\theta$ converges to $v$, indicating that the Nash equilibrium is achieved. The magic of the above solution lies in the fact that by designing the discriminator properly, we have a stationary target optimum for the discriminator during the training. Is this possible for more general GAN tasks where the generators and discriminators are neural networks, and the target distributions are more flexible? Note that the discriminator in Equation 8 can be interpreted as a discriminator conditioned on the generated and target distributions, so for more general GAN tasks we can simply design the discriminator as

$D_{mc}=D_{mc}(x,G_{\#}\mathcal{N}),$ (10)

where $G_{\#}\mathcal{N}$ is the generated distribution. The target distribution $Q$ is omitted from the input since it is usually fixed in a GAN task, but we will revisit this in Section 4. We name the discriminator in Equation 10 a “measure-conditional discriminator” since it is conditioned on the probability measure corresponding to the generated distribution. The proposed measure-conditional discriminator can be a plug-and-play module in a variety of GANs. We only need to replace the original discriminator $D(\cdot)$ with $D_{mc}(\cdot,G_{\#}\mathcal{N})$, while the generator and the min-max formulation of GANs are kept unchanged. A more detailed introduction of the measure-conditional discriminator in GANs is presented in Section 3. We can see that by taking $G_{\#}\mathcal{N}$ as part of the input, the measure-conditional discriminator has a stationary target optimum during the training process. Indeed, for a general GAN problem originally formulated as

$\min_{G\in\mathcal{G}}\max_{D\in\mathcal{D}}V(G_{\#}\mathcal{N},D),$ (11)

with two examples given in Equation 1 and Equation 2, the target optimum for the measure-conditional discriminator is

$D^{*}_{mc}(x,G_{\#}\mathcal{N})=\left(\operatorname*{arg\,max}_{D\in\mathcal{D}}V(G_{\#}\mathcal{N},D)\right)(x).$ (12)

Although $G_{\#}\mathcal{N}$ varies during the training, $D^{*}_{mc}$, as a function of $G_{\#}\mathcal{N}$ and $x$, is stationary. From the perspective of statistical distances, the target optimum $D^{*}_{mc}$ is exactly a surrogate model for the distance between $G_{\#}\mathcal{N}$ and $Q$. For example, in vanilla GANs,

$\mathbb{E}_{z\sim\mathcal{N}}\log(1-D^{*}_{mc}(G(z),G_{\#}\mathcal{N}))+\mathbb{E}_{x\sim Q}\log(D^{*}_{mc}(x,G_{\#}\mathcal{N}))$ (13)

represents the JS divergence between $G_{\#}\mathcal{N}$ and $Q$ up to constants, while in WGANs,

$-\mathbb{E}_{z\sim\mathcal{N}}D^{*}_{mc}(G(z),G_{\#}\mathcal{N})+\mathbb{E}_{x\sim Q}D^{*}_{mc}(x,G_{\#}\mathcal{N})$ (14)

represents the Wasserstein-1 distance between $G_{\#}\mathcal{N}$ and $Q$ up to constants. It is hard to attain the target optimum $D^{*}_{mc}$, considering that it is a function of measures.
However, we note that $D_{mc}$ does not need to attain $D^{*}_{mc}$ for the convergence of GANs. Indeed, we only require $D_{mc}$ to approximate the optimum for $G_{\\#}\mathcal{N}$, rather than over the whole space of probability measures. The vanilla discriminator only utilizes the result of the previous iteration to provide the initialization. If the optimum is sensitive to the generated distribution, as in the above illustrative example, in each iteration the vanilla discriminator needs to “forget the wrong optimum” inherited from the previous iteration and head for the new one within a few discriminator updates. In contrast, the measure-conditional discriminator progressively heads for the stationary target optimum across all iterations. In fact, even outdated generated distributions can be used to train the measure-conditional discriminator. If the optimum is sensitive to the generated distribution, the measure-conditional discriminator does not need to forget the inheritance from previous iterations, but only needs to learn the sensitivity w.r.t. the input measure. To some extent, the generator and the measure-conditional discriminator are trained in a collaborative way, in that the generator adaptively produces new distributions as training data to help the discriminator approximate $D^{*}_{mc}$, while the discriminator provides statistical distances to help the generator approach the target distribution. This concept is similar to reinforcement learning in the actor-critic framework (Grondman et al., 2012), with a parallelism between the generator and actor, as well as between the discriminator and critic. ## 3 Measure-conditional Discriminator in GANs Proposed in Zaheer et al. (2017), the DeepSets neural network, which has the form $H(X)=g(\sum_{x_{i}\in X}f(x_{i}))$, is widely used to represent a function of a point cloud $X$. The summation can be replaced by averaging to represent a function of a probability measure $P$, i.e., $H(P)=g(\frac{1}{n}\sum_{i=1}^{n}f(x_{i}))$ where $\\{x_{i}\\}_{i=1}^{n}$ are samples from $P$. In order to take a probability measure and an individual sample simultaneously as the discriminator input, we adapt the neural network architecture above to get $\displaystyle D_{mc}(x,P)=h(\mathbb{E}_{y\sim P}[f(y)],g(x))\approx h(\frac{1}{n}\sum_{i=1}^{n}f(y_{i}),g(x))$ (15) where $f$, $g$ and $h$ are neural networks, and $\\{y_{i}\\}_{i=1}^{n}$ are samples from $P$. The measure-conditional discriminator is a plug-and-play module in various GANs. The only modification is to replace $D(\cdot)$ with $D_{mc}(\cdot,G_{\\#}\mathcal{N})$. For example, in vanilla GANs, the loss functions for the generator and the discriminator are $\displaystyle L_{g}=$ $\displaystyle\mathbb{E}_{z\sim\mathcal{N}}\log(1-D_{mc}(G(z),G_{\\#}\mathcal{N}))$ (16) $\displaystyle+\mathbb{E}_{x\sim Q}\log(D_{mc}(x,G_{\\#}\mathcal{N})),$ $\displaystyle L_{d}=$ $\displaystyle-\mathbb{E}_{z\sim\mathcal{N}}\log(1-D_{mc}(G(z),G_{\\#}\mathcal{N}))$ $\displaystyle-\mathbb{E}_{x\sim Q}\log(D_{mc}(x,G_{\\#}\mathcal{N})),$ respectively.
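Before moving to WGAN-GP, here is a minimal PyTorch sketch of the architecture in Equation 15; the layer sizes and module names are our own illustrative choices, not the exact networks used in the experiments (those are listed in the Supplementary Material).

```python
import torch
import torch.nn as nn

class MeasureConditionalDiscriminator(nn.Module):
    """Sketch of D_mc(x, P) = h(E_{y~P}[f(y)], g(x)) from Equation 15."""

    def __init__(self, dim=2, feat=64, hidden=128):
        super().__init__()
        mlp = lambda d_in, d_out: nn.Sequential(
            nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))
        self.f = mlp(dim, feat)    # embeds samples of the conditioning measure
        self.g = mlp(dim, feat)    # embeds the individual input sample
        self.h = mlp(2 * feat, 1)  # combines the two embeddings

    def forward(self, x, p_samples):
        # Monte Carlo estimate of E_{y~P}[f(y)] over the measure's samples.
        mean_feat = self.f(p_samples).mean(dim=0, keepdim=True)
        mean_feat = mean_feat.expand(x.shape[0], -1)
        return self.h(torch.cat([mean_feat, self.g(x)], dim=1))

# Usage: score a batch of inputs conditioned on samples representing G_#N.
D = MeasureConditionalDiscriminator()
x = torch.randn(32, 2)      # individual inputs
gen = torch.randn(256, 2)   # samples from the generated distribution
scores = D(x, gen)          # shape (32, 1)
```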
In WGAN with gradient penalty (WGAN-GP), the loss functions are $\displaystyle L_{g}=$ $\displaystyle-\mathbb{E}_{z\sim\mathcal{N}}D_{mc}(G(z),G_{\\#}\mathcal{N})+\mathbb{E}_{x\sim Q}D_{mc}(x,G_{\\#}\mathcal{N}),$ (17) $\displaystyle L_{d}=$ $\displaystyle\mathbb{E}_{z\sim\mathcal{N}}D_{mc}(G(z),G_{\\#}\mathcal{N})-\mathbb{E}_{x\sim Q}D_{mc}(x,G_{\\#}\mathcal{N})$ $\displaystyle+\lambda\mathbb{E}_{\hat{x}\sim\rho_{\hat{x}}}(\|\nabla_{\hat{x}}D_{mc}(\hat{x},G_{\\#}\mathcal{N})\|_{2}-1)^{2},$ respectively, where $\lambda$ is the weight for the gradient penalty, and $\rho_{\hat{x}}$ is the distribution generated by sampling uniformly on interpolation lines between pairs of points sampled from the real and generated distributions. Note that the expectation over the real distribution $Q$ cannot be removed from the generator loss, since this term now depends on the generator. ## 4 Measure-conditional Discriminator as a Statistical Distance Surrogate The target distribution is usually fixed in GANs, and is thus omitted from the input of $D_{mc}$. Going one step further, we now build a measure-conditional discriminator $D_{sr}$ conditioned on two probability measures $P$ and $Q$, which can act as a surrogate model to approximate the statistical distances between $P$ and $Q$. Specifically, the neural network $D_{sr}$ is formulated as $\displaystyle D_{sr}(x,P,Q)$ $\displaystyle=h(\mathbb{E}_{y\sim P}[f_{1}(y)],\mathbb{E}_{y\sim Q}[f_{2}(y)],g(x))$ (18) $\displaystyle\approx h(\frac{1}{n_{1}}\sum_{i=1}^{n_{1}}f_{1}(y_{i}^{P}),\frac{1}{n_{2}}\sum_{i=1}^{n_{2}}f_{2}(y_{i}^{Q}),g(x))$ where $f_{1}$, $f_{2}$, $g$ and $h$ are neural networks, and $\\{y_{i}^{P}\\}_{i=1}^{n_{1}}$ and $\\{y_{i}^{Q}\\}_{i=1}^{n_{2}}$ are samples from $P$ and $Q$, respectively. ### 4.1 Unsupervised Training with Variational Formula We train $D_{sr}$ using variational forms of the statistical distances, in the same spirit as in GANs. Here, we take the KL divergence as an example, which has the following variational formula (Nguyen et al., 2010): $\displaystyle D_{KL}(P||Q)$ $\displaystyle=\sup_{g>0}\left(\mathbb{E}_{x\sim P}[\log(g(x))]-\mathbb{E}_{x\sim Q}[g(x)]+1\right),$ (19) $\displaystyle=\sup_{g}\left(\mathbb{E}_{x\sim P}[g(x)]-\mathbb{E}_{x\sim Q}[\exp(g(x))]+1\right).$ Thus, the loss function for $D_{sr}$ can be written as $\displaystyle L_{KL}$ $\displaystyle=\mathbb{E}_{(P,Q)\sim\mu}[-l_{KL}(P,Q)]$ (20) $\displaystyle l_{KL}(P,Q)$ $\displaystyle=\mathbb{E}_{x\sim P}[D_{sr}(x,P,Q)]$ $\displaystyle-\mathbb{E}_{x\sim Q}[\exp(D_{sr}(x,P,Q))]+1,$ where $\mu$ represents the distribution of the probability measure pairs $(P,Q)$ in the training. Ideally, $l_{KL}(P,Q)$ approximates $D_{KL}(P||Q)$ if $D_{sr}$ achieves the optimum. Similar loss functions can be constructed for many other statistical distances, such as the JS divergence and total variation, provided variational forms as in Equation 19 are available. We also give an example of a surrogate model, with results, for the optimal transport map in the Supplementary Material. Note that after the optimization of $D_{sr}$ (which can be done offline), via a forward propagation of $D_{sr}$ we can estimate $l_{KL}(P,Q)$ as an approximation of $D_{KL}(P||Q)$ for various $(P,Q)$ pairs sampled from $\mu$, and even for $(P,Q)$ pairs never seen in the training procedure (thanks to the generalization of neural networks). The computational cost of the forward propagation grows linearly w.r.t. the sample size. More importantly, no labels are required to train $D_{sr}$.
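As an illustration of Equation 20, a sketch of the per-pair loss $-l_{KL}(P,Q)$ is given below, assuming `d_sr` follows the interface of Equation 18 and takes raw sample tensors for $P$ and $Q$; averaging this loss over $(P,Q)$ pairs drawn from $\mu$ yields $L_{KL}$.

```python
import torch

def kl_surrogate_loss(d_sr, p_samples, q_samples):
    """Sketch of -l_KL(P, Q) from Equation 20 for one (P, Q) pair.

    `d_sr(x, p_samples, q_samples)` is an assumed interface: a network in
    the form of Equation 18 that represents the conditioning measures by
    their samples.
    """
    term_p = d_sr(p_samples, p_samples, q_samples).mean()
    term_q = torch.exp(d_sr(q_samples, p_samples, q_samples)).mean()
    l_kl = term_p - term_q + 1.0   # variational lower bound on KL(P||Q)
    return -l_kl                   # minimizing this maximizes the bound
```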
We only need to prepare samples of $P$ and $Q$ as training data. Here, $\mu$ can, of course, be prescribed by the user. It can also be chosen adaptively during training, depending on the specific task. In the context of GANs, with $Q$ being different target distributions and $P$ being the corresponding generated distributions, $(P,Q)$ samples can be induced from a family of GAN tasks. $(P,Q)$ samples can also be induced from a single GAN task, if we need to fit multiple target distributions simultaneously, e.g., the distributions at multiple time instants in time-dependent problems. We demonstrate this with an example in Section 6. ### 4.2 Transfer Learning with Statistical Distance Surrogate As a surrogate model of statistical distances between distributions, a well-trained $D_{sr}$ can possibly be transferred to GANs and act as a discriminator without any update. For example, if $D_{sr}$ is pretrained with Equation 20, the generator can be trained with the loss function $L_{g}=l_{KL}(G_{\\#}\mathcal{N},Q)$, with $l_{KL}$ from Equation 20. However, training $G$ with a frozen $D_{sr}$ requires that $(G_{\\#}\mathcal{N},Q)$ is not an outlier of $\mu$. This typically means that the degrees of freedom for $G$ are limited. Alternatively, the pretrained $D_{sr}$ can be employed as an initialization of the discriminator and be fine-tuned in GANs. With $Q$ fixed as the target distribution, $D_{sr}$ reduces to a function of $P$ and $x$, just as $D_{mc}$. We can then train it iteratively with the generator as in Section 3. Note that the loss function in GANs should coincide with that in the pretraining of $D_{sr}$. For example, if $D_{sr}$ is pretrained with Equation 20, then the loss function for the generator $G$ is given by $L_{g}=l_{KL}(G_{\\#}\mathcal{N},Q)$, while the loss function for $D_{sr}$ is $-L_{g}$. Figure 2: Comparison between the vanilla discriminator and measure-conditional discriminator $D_{mc}$ (ours) in three 2D problems. Each row represents the results for one problem. (a): The three target distributions. (b-g): $\overline{\widehat{W}}_{1}(P,Q)$ against generator iterations, using different versions of GANs, discriminator/generator iteration ratios, and $(\beta_{1},\beta_{2})$ in the Adam optimizer. (b): Vanilla GAN, 1:1, (0.5, 0.9), (c): Vanilla GAN, 1:1, (0.9, 0.999), (d): WGAN-GP, 1:1, (0.5, 0.9), (e): WGAN-GP, 1:1, (0.9, 0.999), (f): WGAN-GP, 5:1, (0.5, 0.9), (g): WGAN-GP, 5:1, (0.9, 0.999). The y-axes are shared within each row, and the black dashed lines represent $\overline{\widehat{W}}_{1}(Q,Q)$. ## 5 Universal Approximation Property The measure-conditional discriminators introduced above have the general form $\tilde{H}(P_{1},\dots,P_{k})=g(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}]),$ (21) where $\mathbb{E}_{P_{i}}[f_{i}]$ denotes $\mathbb{E}_{x\sim P_{i}}[f_{i}(x)]$, each $f_{j}\colon\mathbb{R}^{n_{j}}\to\mathbb{R}^{m_{j}}$ for $j=1,\dots,k$ and $g\colon\mathbb{R}^{m}\to\mathbb{R}$ with $m:=\sum_{j=1}^{k}m_{j}$ are neural networks. Note that $P_{i}$ can be a Dirac measure $\delta_{x}$, in which case $\mathbb{E}_{P_{i}}[f_{i}]$ reduces to $f_{i}(x)$. While Pevny & Kovarik (2019) have presented a version of the universal approximation theorem for nested neural networks on spaces of probability measures, the neural network architecture in Equation 21 takes a simpler form.
We present the universal approximation theorem for neural networks of the form of Equation 21 in this section, leaving the proof to the Supplementary Material. We use $\mathcal{P}(K)$ to denote the space of probability distributions on a set $K$. Let $S_{h,l}^{n,m}$ be the space of neural networks from $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$ with $l$ hidden layers and the activation $h\colon\mathbb{R}\to\mathbb{R}$, with an arbitrary number of neurons in each layer. Let $C(\prod_{j=1}^{k}\mathcal{P}(K_{j});\mathbb{R})$ denote the space of real-valued continuous functions on $\prod_{j=1}^{k}\mathcal{P}(K_{j})$, equipped with the product of the weak topologies. ###### Theorem 5.1. Let $h\colon\mathbb{R}\to\mathbb{R}$ be an analytic and Lipschitz continuous non-polynomial activation function, and $K_{j}$ be a compact set in $\mathbb{R}^{n_{j}}$ for $j=1,\dots,k$. Let $\mathcal{H}$ be the space of functions of the form of Equation 21 with $f_{j}\in S_{h,l_{j}}^{n_{j},m_{j}}$ and $g\in S_{h,l}^{m}$, where $l,l_{1},\dots,l_{k},m,m_{1},\dots,m_{k}\in\mathbb{Z}^{+}$. Then, $\mathcal{H}$ is dense in $C(\prod_{j=1}^{k}\mathcal{P}(K_{j});\mathbb{R})$ with respect to the uniform norm topology. ## 6 Experimental Results We present experimental comparisons in this section. The detailed neural network architectures are given in the Supplementary Material. We emphasize that although measure-conditional discriminators $D_{mc}$ have more inputs than vanilla ones, the neural networks for both are designed to have almost the same number of parameters for the same problem. For each set-up in Sections 6.1 and 6.2 we run the code with three different random seeds; the colored lines and shaded areas in the figures represent the mean and standard deviation. ### 6.1 2D Distributions and Image Generation We first compare the vanilla discriminator and $D_{mc}$ for different GAN set-ups on 2D problems. In particular, we test vanilla GANs and WGAN-GP with different discriminator/generator iteration ratios and $(\beta_{1},\beta_{2})$ in the Adam optimizer (the initial learning rate is set to 0.0001). Figure 3: FID against generator iterations in image generation tasks, with various discriminator/generator iteration ratios. (a): CIFAR10, 1:1, (b): CIFAR10, 5:1, (c): CelebA, 1:1, (d): CelebA, 5:1. To evaluate the generated distribution $P=G_{\\#}\mathcal{N}$, we take the expectation of the empirical Wasserstein-1 distance, i.e., $\overline{\widehat{W}}_{1}(P,Q):=\mathbb{E}_{\hat{P}_{n},\hat{Q}_{n}}[W_{1}(\hat{P}_{n},\hat{Q}_{n})]$, as an approximation of $W_{1}(P,Q)$, where $\hat{P}_{n}$ is the (random) empirical measure of $P$ with $n=1000$ samples, and similarly for $\hat{Q}_{n}$. To estimate the expectation, we average over 100 empirical Wasserstein distances, each computed via linear programming. The target distributions and results are shown in Figure 2, with more results in the Supplementary Material. It is clear that for all set-ups except WGAN-GP with iteration ratio 5:1 and $(\beta_{1},\beta_{2})=(0.5,0.9)$, the measure-conditional discriminator significantly outperforms the vanilla discriminator, achieving smaller Wasserstein distances or converging faster. In fact, the measure-conditional discriminator is very robust w.r.t. the version of GAN, the iteration ratio, and the optimizer hyperparameters, achieving approximately the same performance in different set-ups, in contrast to the vanilla discriminator.
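For reference, the evaluation metric $\overline{\widehat{W}}_{1}$ can be estimated with the POT package cited in the Supplementary Material; the sketch below is our own rendering, with `p_sampler`/`q_sampler` as assumed sampling callables.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (Flamary & Courty, 2017)

def empirical_w1(p_sampler, q_sampler, n=1000, repeats=100, seed=0):
    """Monte Carlo estimate of E[W_1(P_hat_n, Q_hat_n)].

    `p_sampler(n, rng)` and `q_sampler(n, rng)` are assumed to return
    (n, d) arrays of samples from P and Q, respectively.
    """
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(repeats):
        xs, ys = p_sampler(n, rng), q_sampler(n, rng)
        # Pairwise Euclidean cost matrix between the two empirical clouds.
        cost = ot.dist(xs, ys, metric="euclidean")
        a = b = np.full(n, 1.0 / n)         # uniform empirical weights
        values.append(ot.emd2(a, b, cost))  # exact OT cost via linear programming
    return float(np.mean(values))
```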
Figure 4: Comparison between different set-ups in the task of stochastic dynamic inference. (a): WGAN-GP, vanilla discriminator, Adam optimizer, (b): WGAN-GP, vanilla discriminator, Optimistic Adam optimizer, (c): WGAN-GP, $D_{mc}$, Adam optimizer, (d): WGAN-GP, $D_{sr}$, Adam optimizer, (e): BGAN with Adam optimizer. First row: the absolute error of the inferred dynamic parameters $\\{a_{i}\\}_{i=0}^{3}$ and $\sigma$ against generator iterations. Second row: the generated distributions at $t=0.2,0.5,1.0$ at the end of training, with the dashed black lines showing the ground truth. We then compare the vanilla discriminator and the measure-conditional discriminator $D_{mc}$ on image generation tasks. Specifically, we test our method on the CIFAR10 dataset (Krizhevsky et al., 2009) and the CelebA dataset (Liu et al., 2015), using WGAN-GP with $(\beta_{1},\beta_{2})$ fixed as $(0.5,0.9)$, while two discriminator/generator iteration ratios, i.e., 1:1 and 5:1, are used. The results of the Fréchet inception distance (FID) against the generator iterations are shown in Figure 3. For both tasks, while the difference between the two discriminators is negligible if the iteration ratio is set to 5:1, the measure-conditional discriminator significantly outperforms the vanilla discriminator if the iteration ratio is 1:1, achieving similar FID as in the cases with the 5:1 iteration ratio. A possible explanation is that the training of both discriminators is saturated with the 5:1 iteration ratio in these two tasks. However, with the 1:1 iteration ratio, the vanilla discriminator cannot give correct guidance to the generator since it is under-trained in each iteration, while the measure-conditional discriminator can still do so by approaching its stationary target optimum in a cumulative way. ### 6.2 Stochastic Dynamic Inference To further show the advantage of the measure-conditional discriminator, here we compare it with the vanilla discriminator on the problem of inferring stochastic dynamics from observations of particle ensembles, following the framework in Yang et al. (2020a). Specifically, we consider a particle system whose distributions at $t>0$, denoted as $\rho_{t}$, are determined by the initial distribution $\rho_{0}=\mathcal{N}(0,0.2)$ and the dynamics of each particle, which are governed by the stochastic ordinary differential equation $dx=(a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3})dt+\sigma dW_{t},$ (22) where $a_{0}=0,a_{1}=1,a_{2}=0,a_{3}=-1,\sigma=1$, and $W_{t}$ is the standard Brownian motion. We consider the scenario where we do not know $\rho_{0}$, $\\{a_{i}\\}_{i=0}^{3}$ or $\sigma$, but have observations of $10^{5}$ indistinguishable particles at $t=0.2,0.5,1.0$, which can be viewed as samples from $\rho_{0.2}$, $\rho_{0.5}$ and $\rho_{1.0}$. Our goal is to infer $\\{a_{i}\\}_{i=0}^{3}$ and $\sigma$ from these observations. Taking standard Gaussian noise as input, the generator $G$ is a feedforward neural network whose output distribution aims to approximate $\rho_{0}$, followed by a first-order numerical discretization of Equation 22 with $\\{a_{i}\\}_{i=0}^{3}$ and $\sigma$ replaced by trainable variables (the variable for $\sigma$ is activated by a softplus function to guarantee positivity), so that the particle distributions at any $t>0$ can be generated. Note that we need to tune the feedforward neural network as well as the five trainable variables to fit the target distributions $\rho_{0.2}$, $\rho_{0.5}$ and $\rho_{1.0}$ simultaneously.
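The generator just described can be sketched as follows; this is our own rendering, assuming Euler-Maruyama as the first-order discretization and using our own variable names.

```python
import torch

def simulate_particles(g_net, a, log_sigma, ts, n=10000, dt=1e-3):
    """Sketch of the generator: a feedforward net g_net produces rho_0 from
    Gaussian noise, then Equation 22 is integrated by Euler-Maruyama.

    `a` (4 coefficients) and `log_sigma` are the trainable dynamic variables
    (hypothetical names); returns samples at the requested times `ts`.
    """
    sigma = torch.nn.functional.softplus(log_sigma)  # enforce sigma > 0
    x = g_net(torch.randn(n, 1))                     # samples from rho_0
    out, t = {}, 0.0
    for t_target in sorted(ts):
        while t < t_target - 1e-12:
            drift = a[0] + a[1] * x + a[2] * x**2 + a[3] * x**3
            # Euler-Maruyama step: x += f(x) dt + sigma sqrt(dt) N(0, 1).
            x = x + drift * dt + sigma * dt**0.5 * torch.randn_like(x)
            t += dt
        out[t_target] = x
    return out
```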
We compare the following set-ups in WGAN-GP: (a) vanilla discriminators with the Adam optimizer, (b) vanilla discriminators with the Optimistic Adam optimizer (Daskalakis et al., 2018), which is the combination of Adam and optimistic mirror descent, (c) $D_{mc}$ in Equation 15 with the Adam optimizer, (d) $D_{sr}$ in Equation 18 with the Adam optimizer. We emphasize that the discriminator/generator iteration ratio is set to 5:1 and $(\beta_{1},\beta_{2})=(0.5,0.9)$, for which the measure-conditional discriminator does not outperform the vanilla one in Section 6.1. We also compare with another version of GAN, i.e., (e) BGAN (Lucas et al., 2018), where the discriminator also takes a distribution (instead of individual samples) as input. In short, the BGAN discriminator takes the mixture of real and generated samples as input and aims to tell the ratio of real samples. For set-ups (a), (b), (c) and (e) we have to use three discriminators, denoted as $D_{0.2},D_{0.5},D_{1.0}$, to handle $\rho_{0.2}$, $\rho_{0.5}$, $\rho_{1.0}$, respectively. The losses for the discriminators and the generator are $\displaystyle L_{D_{t}}$ $\displaystyle=L_{d}(G,D_{t},\rho_{t}),t=0.2,0.5,1.0$ (23) $\displaystyle L_{G}$ $\displaystyle=\sum_{t\in S}L_{g}(G,D_{t},\rho_{t}),S=\\{0.2,0.5,1.0\\}$ where $L_{d}(G,D_{t},\rho_{t})$ and $L_{g}(G,D_{t},\rho_{t})$ are the discriminator and generator loss functions, given generator $G$, discriminator $D_{t}$, and a single target distribution $\rho_{t}$. We only need one $D_{sr}$ in set-up (d) since $D_{sr}$ can take various $\rho_{t}$ as input. In particular, the discriminator and generator loss functions for $D_{sr}$ are $L_{D_{sr}}=\sum_{t\in S}L_{d}(G,D_{sr},\rho_{t})$ and $L_{G}=\sum_{t\in S}L_{g}(G,D_{sr},\rho_{t})$, respectively. In Figure 4 we visualize the results for the inferred dynamic parameters $\\{a_{i}\\}_{i=0}^{3}$ and $\sigma$, as well as the generated distributions at $t=0.2,0.5,1.0$, for each set-up. More results are presented in the Supplementary Material. Note that $D_{mc}$ significantly outperforms the vanilla discriminator with the Adam or Optimistic Adam optimizer, even with a 5:1 iteration ratio. $D_{sr}$ achieves results as good as, if not better than, $D_{mc}$, and its performance is almost independent of the random seed. Moreover, since only one discriminator is involved, set-up (d) has fewer than half the discriminator parameters of the other set-ups. Such a difference in model size will be even larger for problems with more time instants. As for BGAN, two out of three runs encountered the “NaN” issue, while the remaining one did not outperform WGAN-GP with a measure-conditional discriminator. ### 6.3 Surrogate Model for KL Divergence We consider the problem of approximating $D_{KL}(P||Q)$ using the surrogate model $D_{sr}$. We set $\mu_{train}$ and $\mu_{test}$ as probability measures on $d$-dimensional Gaussian distributions, denoted as $\mathcal{N}(m,\Sigma)$. We set $m_{i}\sim U([0.15,0.5]\cup[-0.5,-0.15])$ for $\mu_{train}$, and $m_{i}\sim U([-0.15,0.15])$ for $\mu_{test}$, where the subscripts represent the component indices. For both $\mu_{train}$ and $\mu_{test}$ we set $\sqrt{\Sigma_{i,i}}\sim U([0.5,1.0])$, and the correlation coefficient $\Sigma_{i,j}/\sqrt{\Sigma_{i,i}\Sigma_{j,j}}\sim U([-0.5,0.5])$ for $i\neq j$. The surrogate model is trained with $P$ and $Q$ i.i.d. sampled from $\mu_{train}$, with the batch size set to 100 for $(P,Q)$ pairs, and 1000 samples for each $P$ and $Q$, i.e., $n_{1}=n_{2}=1000$ in Equation 18.
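To make the set-up concrete, the sketch below samples Gaussian parameters as in $\mu_{train}$/$\mu_{test}$ (written for $d=2$, where the correlation construction is guaranteed positive definite) and evaluates the closed-form KL divergence that serves as ground truth, restated in the next paragraph.

```python
import numpy as np

def sample_gaussian_params(rng, train=True, d=2):
    """Sample (m, Sigma) following Section 6.3; a sketch for d = 2."""
    if train:
        m = rng.uniform(0.15, 0.5, d) * rng.choice([-1, 1], d)
    else:
        m = rng.uniform(-0.15, 0.15, d)
    std = rng.uniform(0.5, 1.0, d)
    rho = rng.uniform(-0.5, 0.5)
    corr = np.array([[1.0, rho], [rho, 1.0]])
    return m, corr * np.outer(std, std)

def gaussian_kl(mp, Sp, mq, Sq):
    """Closed-form KL(P||Q) between Gaussians, used as ground truth."""
    d = len(mp)
    Sq_inv = np.linalg.inv(Sq)
    diff = mp - mq
    return 0.5 * (np.log(np.linalg.det(Sq) / np.linalg.det(Sp)) - d
                  + np.trace(Sq_inv @ Sp) + diff @ Sq_inv @ diff)

rng = np.random.default_rng(0)
p, q = sample_gaussian_params(rng), sample_gaussian_params(rng)
print(gaussian_kl(*p, *q))
```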
The surrogate model is then tested on three cases: (a): $P\sim\mu_{train},Q\sim\mu_{train}$, (b): $P\sim\mu_{train},Q\sim\mu_{test}$, (c): $P\sim\mu_{test},Q\sim\mu_{test}$. Note that there are $d(d+3)$ degrees of freedom for each $(P,Q)$ pair. For such cases there exists an analytical formula for $D_{KL}(P||Q)$ as a ground truth: $D_{KL}(P||Q)=\frac{1}{2}[\log\frac{|\Sigma_{q}|}{|\Sigma_{p}|}-d+\operatorname{tr}(\Sigma_{q}^{-1}\Sigma_{p})+(m_{p}-m_{q})^{T}\Sigma_{q}^{-1}(m_{p}-m_{q})]$. We compare the surrogate model against direct calculation via $D_{KL}(P||Q)=\mathbb{E}_{x\sim P}[\log(p(x))-\log(q(x))]$, where the densities $p$ and $q$ are estimated via kernel density estimation, for dimensionality $d=2$. In Figure 5 we quantify the accuracy against floating-point operations (FLOPs). Note that the computational cost grows linearly w.r.t. the sample size in the surrogate model, but quadratically w.r.t. the sample size in the direct calculation. The surrogate model outperforms the direct calculation in that it achieves smaller errors with the same FLOPs for all three $(P,Q)$ cases in the test. In the Supplementary Material, we show scatter plots of the inference against the ground truth for dimensionality $d=2$ and 3, as well as the results of transferring the surrogate model to GANs. Figure 5: Comparison between the surrogate model and direct calculation for the KL divergence. For each line, the sample size, i.e., $n_{1}$ and $n_{2}$ in Equation 18, increases from 1000 to 10000 to improve accuracy at the cost of more FLOPs. ## 7 Summary and Discussion In this paper we propose measure-conditional discriminators as a plug-and-play module for a variety of GANs. Conditioned on the generated distributions so that the target optimum is stationary during training, the measure-conditional discriminators are more robust w.r.t. the GAN losses, discriminator/generator iteration ratios, and optimizer hyperparameters, compared with the vanilla ones. A variant of the measure-conditional discriminator can also be employed in scenarios with multiple target distributions, or as a surrogate model of statistical distances. Note that even outdated generated distributions can be used to train the measure-conditional discriminator. It is worth studying whether training the discriminator with generated distributions from a replay buffer, which stores past generated distributions just as in off-policy reinforcement learning, can further improve the performance. Also, as a proof of concept, the neural network architectures in this paper have a very straightforward form, leaving a lot of room for improvement. For example, different weights can be assigned to the samples of the input distributions, which is similar to importance sampling in statistics or the attention mechanism in deep learning. Moreover, the statistical distance surrogate can be applied as a building block in place of direct calculation in more complicated models. We leave these tasks for future research. Acknowledgements We acknowledge support from the DOE PhILMs project (No. DE-SC0019453) and OSD/AFOSR MURI Grant FA9550-20-1-0358. ## References * Brock et al. (2019) Brock, A., Donahue, J., and Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In _International Conference on Learning Representations_, 2019. URL https://openreview.net/forum?id=B1xsqj09Fm. * Daskalakis et al. (2018) Daskalakis, C., Ilyas, A., Syrgkanis, V., and Zeng, H. Training GANs with optimism.
In _International Conference on Learning Representations_, 2018. URL https://openreview.net/forum?id=SJJySbbAZ. * Fedus et al. (2018) Fedus, W., Goodfellow, I., and Dai, A. M. MaskGAN: Better text generation via filling in the ____. In _International Conference on Learning Representations_, 2018. URL https://openreview.net/forum?id=ByOExmWAb. * Flamary & Courty (2017) Flamary, R. and Courty, N. POT: Python Optimal Transport library, 2017. URL https://pythonot.github.io/. * Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. _Advances in neural information processing systems_, 27:2672–2680, 2014. * Grondman et al. (2012) Grondman, I., Busoniu, L., Lopes, G. A., and Babuska, R. A survey of actor-critic reinforcement learning: Standard and natural policy gradients. _IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)_, 42(6):1291–1307, 2012. * Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of Wasserstein GANs. In _Advances in neural information processing systems_, pp. 5767–5777, 2017. * Kidger & Lyons (2020) Kidger, P. and Lyons, T. Universal approximation with deep narrow networks. In _Conference on Learning Theory_, pp. 2306–2327. PMLR, 2020. * Krizhevsky et al. (2009) Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009. * Li et al. (2018) Li, C.-L., Zaheer, M., Zhang, Y., Poczos, B., and Salakhutdinov, R. Point cloud GAN. _arXiv preprint arXiv:1810.05795_, 2018. * Liu et al. (2015) Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In _Proceedings of International Conference on Computer Vision (ICCV)_, December 2015. * Lucas et al. (2018) Lucas, T., Tallec, C., Ollivier, Y., and Verbeek, J. Mixed batches and symmetric discriminators for GAN training. In _International Conference on Machine Learning_, pp. 2844–2853. PMLR, 2018. * Mirza & Osindero (2014) Mirza, M. and Osindero, S. Conditional generative adversarial nets. _arXiv preprint arXiv:1411.1784_, 2014. * Nguyen et al. (2010) Nguyen, X., Wainwright, M. J., and Jordan, M. I. Estimating divergence functionals and the likelihood ratio by convex risk minimization. _IEEE Transactions on Information Theory_, 56(11):5847–5861, 2010. * Pevny & Kovarik (2019) Pevny, T. and Kovarik, V. Approximation capability of neural networks on spaces of probability measures and tree-structured domains. _arXiv preprint arXiv:1906.00764_, 2019. * Seguy et al. (2018) Seguy, V., Damodaran, B. B., Flamary, R., Courty, N., Rolet, A., and Blondel, M. Large-scale optimal transport and mapping estimation. In _Proceedings of the International Conference in Learning Representations_, 2018. * Stinchcombe (1999) Stinchcombe, M. Neural network approximation of continuous functionals and continuous functions on compactifications. _Neural Networks_, 12(3):467–477, 1999. ISSN 0893-6080. doi: https://doi.org/10.1016/S0893-6080(98)00108-7. URL http://www.sciencedirect.com/science/article/pii/S0893608098001087. * Yang et al. (2020a) Yang, L., Daskalakis, C., and Karniadakis, G. E. Generative ensemble-regression: Learning stochastic dynamics from discrete particle ensemble observations. _arXiv preprint arXiv:2008.01915_, 2020a. * Yang et al. (2020b) Yang, L., Zhang, D., and Karniadakis, G. E.
Physics-informed generative adversarial networks for stochastic differential equations. _SIAM Journal on Scientific Computing_, 42(1):A292–A317, 2020b. * Yang & Perdikaris (2019) Yang, Y. and Perdikaris, P. Adversarial uncertainty quantification in physics-informed neural networks. _Journal of Computational Physics_, 394:136–152, 2019. * Zaheer et al. (2017) Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., and Smola, A. J. Deep sets. In _Advances in neural information processing systems_, pp. 3391–3401, 2017. * Zhang et al. (2017) Zhang, Y., Gan, Z., Fan, K., Chen, Z., Henao, R., Shen, D., and Carin, L. Adversarial feature matching for text generation. In Precup, D. and Teh, Y. W. (eds.), _Proceedings of the 34th International Conference on Machine Learning_, volume 70 of _Proceedings of Machine Learning Research_, pp. 4006–4015, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/zhang17b.html. * Zhu et al. (2017) Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In _Proceedings of the IEEE international conference on computer vision_, pp. 2223–2232, 2017. ## 8 Supplementary Material ### 8.1 Neural Network Architecture In this section, we present the neural network architectures used in the main text. All generator input noises are multivariate standard Gaussians. An additional sigmoid activation is applied to the discriminator outputs in vanilla GANs. We emphasize that the vanilla discriminator and the measure-conditional discriminator $D_{mc}$ share almost the same number of parameters in the same problem. In image generation tasks, the convolutional layers, denoted “conv” below, have kernels of size $5\times 5$, a stride of 2, and “same” padding. 2D problem: generator (33,666 parameters): $2\xrightarrow{\text{dense}}128\xrightarrow{}\text{ReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{ReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{ReLU}\xrightarrow{\text{dense}}2$; vanilla discriminator (33,537 parameters): $2\xrightarrow{\text{dense}}128\xrightarrow{}\text{ReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{ReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{ReLU}\xrightarrow{\text{dense}}1$; $D_{mc}$ (33,921 parameters), $f$ and $g$: $2\xrightarrow{\text{dense}}128\xrightarrow{}\text{ReLU}\xrightarrow{\text{dense}}64$, $h$: $128\xrightarrow{\text{dense}}128\xrightarrow{}\text{ReLU}\xrightarrow{\text{dense}}1$.
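The 2D-problem networks above can be written down directly; the following PyTorch sketch reproduces them and checks the quoted parameter counts programmatically.

```python
import torch.nn as nn

def mlp(sizes, act):
    """Fully connected stack with `act` between layers, none after the last."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(act())
    return nn.Sequential(*layers)

# 2D problem architectures as listed above.
generator = mlp([2, 128, 128, 128, 2], nn.ReLU)     # 33,666 parameters
vanilla_disc = mlp([2, 128, 128, 128, 1], nn.ReLU)  # 33,537 parameters
f = mlp([2, 128, 64], nn.ReLU)                      # D_mc branch f
g = mlp([2, 128, 64], nn.ReLU)                      # D_mc branch g
h = mlp([128, 128, 1], nn.ReLU)                     # D_mc head h

n_params = lambda m: sum(p.numel() for p in m.parameters())
assert n_params(generator) == 33666 and n_params(vanilla_disc) == 33537
assert n_params(f) + n_params(g) + n_params(h) == 33921  # D_mc total
```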
CIFAR10: generator (1,565,955 parameters): $128\xrightarrow{\text{dense}}4096\xrightarrow{}\text{BatchNorm}\xrightarrow{}\text{ReLU}\xrightarrow{\text{reshape}}(4,4,256)\xrightarrow{\text{conv}}(8,8,128)\xrightarrow{}\text{BatchNorm}\xrightarrow{}\text{ReLU}\xrightarrow{\text{conv}}(16,16,64)\xrightarrow{}\text{BatchNorm}\xrightarrow{}\text{ReLU}\xrightarrow{\text{conv}}(32,32,3)\xrightarrow{}\text{tanh}$; vanilla discriminator (1,291,521 parameters): $(32,32,3)\xrightarrow{\text{conv}}(16,16,64)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(8,8,128)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(4,4,256)\xrightarrow{\text{flatten}}4096\xrightarrow{\text{dense}}64\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}1$; $D_{mc}$ (1,296,449 parameters), $f$: $(32,32,3)\xrightarrow{\text{conv}}(16,16,64)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(8,8,64)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(4,4,64)\xrightarrow{\text{flatten}}1024$, $g$: $(32,32,3)\xrightarrow{\text{conv}}(16,16,64)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(8,8,128)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(4,4,192)\xrightarrow{\text{flatten}}3072$, $h$: $4096\xrightarrow{\text{dense}}64\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}1$. CelebA: generator (1,331,843 parameters): $128\xrightarrow{\text{dense}}8192\xrightarrow{}\text{BatchNorm}\xrightarrow{}\text{ReLU}\xrightarrow{\text{reshape}}(8,8,128)\xrightarrow{\text{conv}}(16,16,64)\xrightarrow{}\text{BatchNorm}\xrightarrow{}\text{ReLU}\xrightarrow{\text{conv}}(32,32,32)\xrightarrow{}\text{BatchNorm}\xrightarrow{}\text{ReLU}\xrightarrow{\text{conv}}(64,64,3)\xrightarrow{}\text{tanh}$; vanilla discriminator (1,307,457 parameters): $(64,64,3)\xrightarrow{\text{conv}}(32,32,32)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(16,16,64)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(8,8,128)\xrightarrow{\text{flatten}}8192\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}1$; $D_{mc}$ (1,309,921 parameters), $f$: $(64,64,3)\xrightarrow{\text{conv}}(32,32,32)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(16,16,32)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(8,8,32)\xrightarrow{\text{flatten}}2048$, $g$: $(64,64,3)\xrightarrow{\text{conv}}(32,32,32)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(16,16,64)\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{conv}}(8,8,96)\xrightarrow{\text{flatten}}6144$, $h$: $8192\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}1$. Stochastic Dynamic Inference: generator for $\rho_{0}$ (33,409 parameters): $1\xrightarrow{\text{dense}}128\xrightarrow{}\text{tanh}\xrightarrow{\text{dense}}128\xrightarrow{}\text{tanh}\xrightarrow{\text{dense}}128\xrightarrow{}\text{tanh}\xrightarrow{\text{dense}}1$; vanilla discriminator (49,921$\times$3 parameters): $1\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}1$; $D_{mc}$ (50,177$\times$3 parameters), $f$ and $g$: $1\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}64$, $h$: $128\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}1$.
$D_{sr}$ (66,881 parameters), $f_{1}$, $f_{2}$ and $g$: $1\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}64$, $h$: $192\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}128\xrightarrow{}\text{LeakyReLU}\xrightarrow{\text{dense}}1$. The discriminator architecture in BGAN is from the original paper (Lucas et al., 2018), with 4 hidden layers, each of width 128, and LeakyReLU activation, 99,329$\times$3 parameters in total. KL Surrogate Model: $D_{sr}$ (6,137 parameters), $f_{1}$, $f_{2}$ and $g$: $2\xrightarrow{\text{dense}}32\xrightarrow{}\text{tanh}\xrightarrow{\text{dense}}32\xrightarrow{}\text{tanh}\xrightarrow{\text{dense}}8$, $h$: $24\xrightarrow{\text{dense}}32\xrightarrow{}\text{tanh}\xrightarrow{\text{dense}}32\xrightarrow{}\text{tanh}\xrightarrow{\text{dense}}1$. ### 8.2 Proof of the Universal Approximation Theorem In this section, we provide the proof of Theorem 5.1 in the main text. Before that, we state and prove a useful lemma. The following lemma provides a universal approximation theorem for functions of the form $\tilde{H}(P_{1},\dots,P_{k})=g(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}]),$ (24) where $g$ is a neural network with the activation function $h$ and $l$ hidden layers, and each $f_{j}$ is any bounded continuous function from $\mathbb{R}^{n_{j}}$ to $\mathbb{R}^{m_{j}}$ for any positive integer $m_{j}$. We denote the set of such functions by $S_{h,l}^{\mathcal{P}}$. In the following lemma, we show that any continuous function on $\prod_{j=1}^{k}\Omega_{j}$ equipped with the product weak topology, where each $\Omega_{j}$ is a tight set, can be approximated using some function in $S_{h,l}^{\mathcal{P}}$. ###### Lemma 1. Let $h\colon\mathbb{R}\to\mathbb{R}$ be an analytic and Lipschitz continuous non-polynomial activation function. Let $k,n_{1},\dots,n_{k}\in\mathbb{Z}^{+}$ be positive integers. Then, for each $l\in\mathbb{Z}^{+}$, $S_{h,l}^{\mathcal{P}}$ is dense in $C(\prod_{j=1}^{k}\Omega_{j};\mathbb{R})$ with respect to the uniform norm topology, where $\Omega_{j}$ is an arbitrary tight set in $\mathcal{P}(\mathbb{R}^{n_{j}})$ for each $j=1,\dots,k$. ###### Proof. Since any tight set in $\mathcal{P}(\mathbb{R}^{n_{j}})$ is a precompact set under the weak topology, to prove the conclusion it suffices to assume $\Omega_{j}$ is a compact set in $\mathcal{P}(\mathbb{R}^{n_{j}})$. Let $\Omega_{j}$ be any compact set in $\mathcal{P}(\mathbb{R}^{n_{j}})$ for $j=1,\dots,k$. Let $F\colon\prod_{j=1}^{k}\Omega_{j}\to\mathbb{R}$ be an arbitrary continuous function w.r.t. the product of the weak topologies. Let $\epsilon>0$ and $l\in\mathbb{Z}^{+}$. It suffices to prove there exist positive integers $m_{1},\dots,m_{k}$ in $\mathbb{Z}^{+}$, a neural network $g$ in $S_{h,l}^{m}$ with $m:=\sum_{j=1}^{k}m_{j}$ and bounded continuous functions $f_{1}\in C_{b}(\mathbb{R}^{n_{1}};\mathbb{R}^{m_{1}}),\dots,f_{k}\in C_{b}(\mathbb{R}^{n_{k}};\mathbb{R}^{m_{k}})$ satisfying $\sup_{\begin{subarray}{c}P_{j}\in\Omega_{j}\\\ \forall\,j\in\\{1,\dots,k\\}\end{subarray}}|F(P_{1},\dots,P_{k})-g(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}])|\leq\epsilon.$ (25) We prove this statement by induction on $l$, applying results from (Stinchcombe, 1999) at each step. Since $h$ is analytic and not a polynomial, by Thm. 2.3 in (Stinchcombe, 1999), $S_{h,1}^{1}$ satisfies the assumption of Thm. 5.1 in (Stinchcombe, 1999). First, we consider the case $l=1$.
Let $\mathcal{A}$ in Thm. 5.1 in (Stinchcombe, 1999) be the vector space of measurable functions from $\prod_{j=1}^{k}\mathcal{P}(\mathbb{R}^{n_{j}})$ to $\mathbb{R}$ defined by $\begin{split}\mathcal{A}&:=Span\\{(P_{1},\dots,P_{k})\mapsto\mathbb{E}_{P_{j}}[f]\colon\\\ &\quad\quad\quad\quad\quad\quad f\in C_{b}(\mathbb{R}^{n_{j}};\mathbb{R}),\,j\in\\{1,\dots,k\\}\\}\\\ &=\Bigg{\\{}(P_{1},\dots,P_{k})\mapsto\sum_{j=1}^{k}\mathbb{E}_{P_{j}}[f_{j}]\colon\\\ &\quad\quad\quad\quad\quad\quad f_{j}\in C_{b}(\mathbb{R}^{n_{j}};\mathbb{R}),\,\forall\,j\in\\{1,\dots,k\\}\Bigg{\\}}.\end{split}$ Then, $\mathcal{A}$ contains any constant function, since for any constant function $f\equiv C$, we have $\mathbb{E}_{P_{j}}[f]=C$ for any $P_{j}\in\mathcal{P}(\mathbb{R}^{n_{j}})$. Recall that the set of probability measures on $\mathbb{R}^{n_{j}}$ is a subset of the space of Radon measures on $\mathbb{R}^{n_{j}}$, and the space of Radon measures is the dual space of $C_{0}(\mathbb{R}^{n_{j}};\mathbb{R})$, which denotes the set of continuous functions from $\mathbb{R}^{n_{j}}$ to $\mathbb{R}$ that vanish at infinity. As a result, for any distinct measures $P_{j}$ and $Q_{j}$ in $\mathcal{P}(\mathbb{R}^{n_{j}})$, there exists a function $f\in C_{0}(\mathbb{R}^{n_{j}};\mathbb{R})\subset C_{b}(\mathbb{R}^{n_{j}};\mathbb{R})$ satisfying $\mathbb{E}_{P_{j}}[f]\neq\mathbb{E}_{Q_{j}}[f]$. Therefore, $\mathcal{A}$ separates points in $\prod_{j=1}^{k}\mathcal{P}(\mathbb{R}^{n_{j}})$. Then, $\mathcal{A}$ satisfies the assumptions in Thm. 5.1 in (Stinchcombe, 1999), which implies that for each $\epsilon>0$, there exists a function $H$ in $Span(h\circ\mathcal{A})$ satisfying $\sup_{\begin{subarray}{c}P_{j}\in\Omega_{j}\\\ \forall\,j\in\\{1,\dots,k\\}\end{subarray}}|F(P_{1},\dots,P_{k})-H(P_{1},\dots,P_{k})|\leq\epsilon.$ (26) Since $H$ is a function in $Span(h\circ\mathcal{A})$, there exist a positive integer $\tilde{m}\in\mathbb{Z}^{+}$, real numbers $\alpha_{1},\dots,\alpha_{\tilde{m}}\in\mathbb{R}$, and bounded continuous functions $\tilde{f}_{ij}\in C_{b}(\mathbb{R}^{n_{j}};\mathbb{R})$ for each $i\in\\{1,\dots,\tilde{m}\\}$ and $j\in\\{1,\dots,k\\}$, such that $H(P_{1},\dots,P_{k})=\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(\sum_{j=1}^{k}\mathbb{E}_{P_{j}}[\tilde{f}_{ij}]\right).$ Now, we prove that $H$ is a function in $S_{h,1}^{\mathcal{P}}$. For each $j\in\\{1,\dots,k\\}$, let $f_{j}\colon\mathbb{R}^{n_{j}}\to\mathbb{R}^{\tilde{m}}$ be defined by $f_{j}(x):=(\tilde{f}_{1j}(x),\dots,\tilde{f}_{\tilde{m}j}(x)),\quad\forall x\in\mathbb{R}^{n_{j}}.$ Define $g\colon\mathbb{R}^{k\tilde{m}}\to\mathbb{R}$ by $g(x):=\sum_{i=1}^{\tilde{m}}\alpha_{i}h(w_{i}\cdot x),\quad\forall x\in\mathbb{R}^{k\tilde{m}},$ where $w_{i}:=(e_{i},\dots,e_{i})\in\mathbb{R}^{k\tilde{m}}$ is a vector repeating $e_{i}$ $k$ times (here $e_{i}$ denotes the $i$-th standard basis vector in $\mathbb{R}^{\tilde{m}}$). Then, we have $f_{j}\in C_{b}(\mathbb{R}^{n_{j}};\mathbb{R}^{\tilde{m}})$ and $g\in S_{h,1}^{k\tilde{m}}$.
Moreover, after some computations, we obtain $\begin{split}&g(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}])\\\ =\,&\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(w_{i}\cdot\left(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}]\right)\right)\\\ =\,&\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(\sum_{j=1}^{k}e_{i}\cdot\mathbb{E}_{P_{j}}[f_{j}]\right)\\\ =\,&\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(\sum_{j=1}^{k}\mathbb{E}_{P_{j}}[e_{i}\cdot f_{j}]\right)\\\ =\,&\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(\sum_{j=1}^{k}\mathbb{E}_{P_{j}}[\tilde{f}_{ij}]\right)\\\ =\,&H(P_{1},\dots,P_{k}).\end{split}$ (27) As a result, (25) is proved for the case $l=1$ according to (26) and (27). Now, assume (25) holds for some $l\in\mathbb{Z}^{+}$, and we prove the conclusion for $l+1$. Let $\mathcal{A}$ in Thm. 5.1 in (Stinchcombe, 1999) be the vector space $S_{h,l}^{\mathcal{P}}$. Then, $\mathcal{A}$ contains constant functions by setting $f_{1},\dots,f_{k}$ to be constant functions in (24). Since $C_{b}(\mathbb{R}^{n_{j}};\mathbb{R}^{m_{j}})$ separates measures in $\mathcal{P}(\mathbb{R}^{n_{j}})$ for each $j=1,\dots,k$, as we proved in the case $l=1$, and the space of neural networks $S_{h,l}^{m}$ also separates points in $\mathbb{R}^{m}$, the space $\mathcal{A}$ separates points in $\prod_{j=1}^{k}\mathcal{P}(\mathbb{R}^{n_{j}})$. Therefore, $\mathcal{A}$ satisfies the assumptions in Thm. 5.1 in (Stinchcombe, 1999), which implies that for each $\epsilon>0$, there exist $\tilde{m}\in\mathbb{Z}^{+}$, real numbers $\alpha_{1},\dots,\alpha_{\tilde{m}}$ in $\mathbb{R}$, and functions $H_{1},\dots,H_{\tilde{m}}$ in $S_{h,l}^{\mathcal{P}}$ satisfying $\sup_{\begin{subarray}{c}P_{j}\in\Omega_{j}\\\ \forall\,j\in\\{1,\dots,k\\}\end{subarray}}\left|F(P_{1},\dots,P_{k})-\sum_{i=1}^{\tilde{m}}\alpha_{i}h(H_{i}(P_{1},\dots,P_{k}))\right|\leq\epsilon.$ (28) For each $i\in\\{1,\dots,\tilde{m}\\}$, since $H_{i}$ is a function in $S_{h,l}^{\mathcal{P}}$, there exist positive integers $\tilde{m}_{i1},\dots,\tilde{m}_{ik}\in\mathbb{Z}^{+}$, bounded continuous functions $\tilde{f}_{ij}\in C_{b}(\mathbb{R}^{n_{j}};\mathbb{R}^{\tilde{m}_{ij}})$ for each $j\in\\{1,\dots,k\\}$, and a function $\tilde{g}_{i}\in S_{h,l}^{\tilde{m}_{i0}}$ with $\tilde{m}_{i0}:=\sum_{j=1}^{k}\tilde{m}_{ij}$, such that $H_{i}(P_{1},\dots,P_{k})=\tilde{g}_{i}(\mathbb{E}_{P_{1}}[\tilde{f}_{i1}],\dots,\mathbb{E}_{P_{k}}[\tilde{f}_{ik}])$ holds for each $P_{1}\in\mathcal{P}(\mathbb{R}^{n_{1}}),\dots,P_{k}\in\mathcal{P}(\mathbb{R}^{n_{k}})$. As a result, we have $\begin{split}&\sum_{i=1}^{\tilde{m}}\alpha_{i}h(H_{i}(P_{1},\dots,P_{k}))\\\ =\,&\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(\tilde{g}_{i}\left(\mathbb{E}_{P_{1}}[\tilde{f}_{i1}],\dots,\mathbb{E}_{P_{k}}[\tilde{f}_{ik}]\right)\right),\end{split}$ for each $(P_{1},\dots,P_{k})\in\prod_{j=1}^{k}\mathcal{P}(\mathbb{R}^{n_{j}})$. Set $\tilde{m}_{0j}:=\sum_{i=1}^{\tilde{m}}\tilde{m}_{ij}$ and $m:=\sum_{j=1}^{k}\tilde{m}_{0j}=\sum_{i=1}^{\tilde{m}}\tilde{m}_{i0}$. For each $j\in\\{1,\dots,k\\}$, define $f_{j}\in C_{b}(\mathbb{R}^{n_{j}};\mathbb{R}^{\tilde{m}_{0j}})$ by $f_{j}(x):=\left(\tilde{f}_{1j}(x),\tilde{f}_{2j}(x),\dots,\tilde{f}_{\tilde{m}j}(x)\right),$ for each $x\in\mathbb{R}^{n_{j}}$.
For each $i\in\\{1,\dots,\tilde{m}\\}$, define $g_{i}\in S_{h,l}^{m}$ by $g_{i}(x_{1},x_{2},\dots,x_{k}):=\tilde{g}_{i}((x_{1})_{i},\dots,(x_{k})_{i}),$ for each $x_{1}\in\mathbb{R}^{\tilde{m}_{01}},\dots,x_{k}\in\mathbb{R}^{\tilde{m}_{0k}}$, where each $(x_{j})_{i}\in\mathbb{R}^{\tilde{m}_{ij}}$ denotes the vector whose $r$-th component is the $\left(\sum_{I=1}^{i-1}\tilde{m}_{Ij}+r\right)$-th component of $x_{j}$. With this notation, for each $i\in\\{1,\dots,\tilde{m}\\}$ and each $j\in\\{1,\dots,k\\}$, we have $\left(\mathbb{E}_{P_{j}}[f_{j}]\right)_{i}=\left(\mathbb{E}_{P_{j}}[\tilde{f}_{1j}],\dots,\mathbb{E}_{P_{j}}[\tilde{f}_{\tilde{m}j}]\right)_{i}=\mathbb{E}_{P_{j}}[\tilde{f}_{ij}].$ Moreover, we define $g\in S_{h,l+1}^{m}$ by $\begin{split}g(x_{1},x_{2},\dots,x_{k})&:=\sum_{i=1}^{\tilde{m}}\alpha_{i}h(g_{i}(x_{1},x_{2},\dots,x_{k}))\\\ &=\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(\tilde{g}_{i}((x_{1})_{i},\dots,(x_{k})_{i})\right)\end{split}$ for each $x_{1}\in\mathbb{R}^{\tilde{m}_{01}},\dots,x_{k}\in\mathbb{R}^{\tilde{m}_{0k}}$. Then, after some computations, we obtain $\begin{split}&g(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}])\\\ =\,&\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(\tilde{g}_{i}\left(\left(\mathbb{E}_{P_{1}}[f_{1}]\right)_{i},\dots,\left(\mathbb{E}_{P_{k}}[f_{k}]\right)_{i}\right)\right)\\\ =\,&\sum_{i=1}^{\tilde{m}}\alpha_{i}h\left(\tilde{g}_{i}\left(\mathbb{E}_{P_{1}}[\tilde{f}_{i1}],\dots,\mathbb{E}_{P_{k}}[\tilde{f}_{ik}]\right)\right)\\\ =\,&\sum_{i=1}^{\tilde{m}}\alpha_{i}h(H_{i}(P_{1},\dots,P_{k})).\end{split}$ Combining this with (28), we conclude that (25) holds for $l+1$. Therefore, the conclusion holds by induction. ∎ Proof of Theorem 5.1. Let $\epsilon>0$. It suffices to construct $m_{1},\dots,m_{k}\in\mathbb{Z}^{+}$, $g\in S_{h,l}^{m}$ with $m:=\sum_{j=1}^{k}m_{j}$, and $f_{j}\in S_{h,l_{j}}^{n_{j},m_{j}}$ for each $j=1,\dots,k$, such that $|F(P_{1},\dots,P_{k})-g(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}])|\leq\epsilon,$ (29) holds for any $(P_{1},\dots,P_{k})\in\prod_{j=1}^{k}\mathcal{P}(K_{j})$. Since each $K_{j}$ is a compact set in $\mathbb{R}^{n_{j}}$, $\mathcal{P}(K_{j})$ is tight in $\mathcal{P}(\mathbb{R}^{n_{j}})$. Then, by Lemma 1, there exist $m_{1},\dots,m_{k}\in\mathbb{Z}^{+}$, $g\in S_{h,l}^{m}$ with $m:=\sum_{j=1}^{k}m_{j}$, and $\tilde{f}_{j}\in C_{b}(\mathbb{R}^{n_{j}};\mathbb{R}^{m_{j}})$ for each $j\in\\{1,\dots,k\\}$ satisfying $\left|F(P_{1},\dots,P_{k})-g\left(\mathbb{E}_{P_{1}}[\tilde{f}_{1}],\dots,\mathbb{E}_{P_{k}}[\tilde{f}_{k}]\right)\right|<\frac{\epsilon}{2},$ (30) for any $(P_{1},\dots,P_{k})\in\prod_{j=1}^{k}\mathcal{P}(K_{j})$. Since the activation function $h$ is Lipschitz, and the Lipschitz property is preserved under composition, the function $g$ is also Lipschitz. Denote by $L>0$ the Lipschitz constant of $g$. By the universal approximation theorem for neural networks (for instance, see (Kidger & Lyons, 2020)), for each $j\in\\{1,\dots,k\\}$, there exists a neural network $f_{j}\in S_{h,l_{j}}^{n_{j},m_{j}}$ satisfying $\sup_{x\in K_{j}}\|f_{j}(x)-\tilde{f}_{j}(x)\|<\frac{\epsilon}{2L\sqrt{k}}.$ (31) Now, we prove (29). For each $j\in\\{1,\dots,k\\}$, let $P_{j}$ be an arbitrary measure in $\mathcal{P}(K_{j})$.
Combining (30) and (31), we obtain $\begin{split}&\,|F(P_{1},\dots,P_{k})-g(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}])|\\\ \leq&\,\left|g\left(\mathbb{E}_{P_{1}}[\tilde{f}_{1}],\dots,\mathbb{E}_{P_{k}}[\tilde{f}_{k}]\right)-g\left(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}]\right)\right|\\\ &\quad+\left|F(P_{1},\dots,P_{k})-g\left(\mathbb{E}_{P_{1}}[\tilde{f}_{1}],\dots,\mathbb{E}_{P_{k}}[\tilde{f}_{k}]\right)\right|\\\ <&\,L\left\|\left(\mathbb{E}_{P_{1}}[\tilde{f}_{1}],\dots,\mathbb{E}_{P_{k}}[\tilde{f}_{k}]\right)-\left(\mathbb{E}_{P_{1}}[f_{1}],\dots,\mathbb{E}_{P_{k}}[f_{k}]\right)\right\|+\frac{\epsilon}{2}\\\ \leq&\,L\sqrt{k}\sup_{j\in\\{1,\dots,k\\}}\left\\{\sup_{x\in K_{j}}\|\tilde{f}_{j}(x)-f_{j}(x)\|\right\\}+\frac{\epsilon}{2}\\\ \leq&\,\epsilon,\end{split}$ where the second inequality holds by (30) and the Lipschitz property of $g$, the third inequality holds by the assumption that each $P_{j}$ is supported in the compact set $K_{j}$, and the fourth inequality holds according to (31). ∎ ### 8.3 More Results for 2D Problems In Figure 6 we show the comparison between the vanilla discriminator and the measure-conditional discriminator $D_{mc}$ on three 2D problems, using vanilla GAN with a 5:1 discriminator/generator iteration ratio. We occasionally encountered the “NaN” issue with both discriminators; the corresponding runs are omitted. The measure-conditional discriminator outperforms the vanilla one, as in the main text. Figure 6: More results for the comparison between the vanilla discriminator and measure-conditional discriminator (ours) in three 2D problems. (a): The three target distributions. (b): Vanilla GAN, 5:1, (0.5, 0.9), (c): Vanilla GAN, 5:1, (0.9, 0.999). See a more detailed caption in Figure 2 in the main text. ### 8.4 More Results for Stochastic Dynamic Inference In Figure 7 we show the results of $D_{mc}$ and $D_{sr}$ with the Optimistic Adam optimizer on the task of stochastic dynamic inference, using the same neural networks as in the main text. The results are similar to those with the Adam optimizer, with a slight improvement. Figure 7: Comparison between different set-ups in the task of stochastic dynamic inference. (a): WGAN-GP, $D_{mc}$, Optimistic Adam optimizer, (b): WGAN-GP, $D_{sr}$, Optimistic Adam optimizer. See a more detailed caption in Figure 4 in the main text. In addition, in Figure 8 we show the results using smaller neural networks for the vanilla discriminator and $D_{mc}$ (the number of hidden layers for the vanilla discriminator and for $h$ in $D_{mc}$ is reduced by 1). For the vanilla discriminator, the Optimistic Adam optimizer manages to remove the high-frequency oscillation, compared with the Adam optimizer, but the inferred parameters are still incorrect. In contrast, both optimizers give good inference with the $D_{mc}$ discriminator, and the Optimistic Adam optimizer performs better in that the inference converges faster. Figure 8: Comparison between different set-ups in the task of stochastic dynamic inference, using smaller discriminator neural networks. (a): WGAN-GP, vanilla discriminator, Adam optimizer, (b): WGAN-GP, vanilla discriminator, Optimistic Adam optimizer, (c): WGAN-GP, $D_{mc}$, Adam optimizer, (d): WGAN-GP, $D_{mc}$, Optimistic Adam optimizer. See a more detailed caption in Figure 4 in the main text.
### 8.5 More Results for the Statistical Distance Surrogate As a supplement to Figure 5 in the main text, in Figures 9(a) and 9(b) we show the scatter plots of the inference of the KL divergence against the ground truth for dimensionality $d=2$. In Figures 9(c) and 9(d) we also show the results for the 3D case, with a larger $D_{sr}$ neural network (128 as the hidden layer width and 32 as the output dimension of $f_{1}$, $f_{2}$ and $g$). Figure 9: Results for the KL divergence surrogate model. (a-d): Inference against the ground truth. (a): 2D, 1000 samples, (b): 2D, 10000 samples, (c): 3D, 1000 samples, (d): 3D, 10000 samples. (e-f): Generator parameters during the GAN training with different generator and discriminator set-ups. (e): The first generator set-up, with 4 degrees of freedom. (f): The second generator set-up, with 5 degrees of freedom. Different colors represent different generator parameters, while different line styles represent the results from different discriminator set-ups and the ground truth. As a proof of concept, we then employ the 2D surrogate model as a discriminator in a GAN. The target distribution is set as $\mathcal{N}(m,\Sigma)$ with $m=[0.1,-0.1]$ and $\Sigma=\text{diag}([0.3,0.6])$, which is a sample from $\mu_{test}$. The generator is defined as $G(z)=Az+b$, and we test with two generator set-ups: (1) $A=\text{diag}([a_{1},a_{2}]),b=[b_{1},b_{2}]$ with 4 degrees of freedom, and (2) $A=[[a_{1},a_{2}],[0,a_{3}]],b=[b_{1},b_{2}]$ with 5 degrees of freedom, both admitting ground-truth parameter values that match the target. We compare the following three set-ups of the discriminator: (a) $D_{sr}$ transferred from the well-trained surrogate model and further trained in the GAN, (b) transferred $D_{sr}$ without further training in the GAN, (c) randomly initialized $D_{sr}$ with training in the GAN. The generator parameters during the GAN training are visualized in Figures 9(e) and 9(f). One can see that for discriminator set-up (b), the generator parameters are not too bad in the first generator set-up with 4 degrees of freedom, but fail completely in the second generator set-up. A possible explanation is that $G_{\\#}\mathcal{N}$ becomes an outlier of $\mu_{train}$ during the training and thus $D_{sr}$ cannot provide correct statistical distances. Discriminator set-ups (a) and (c) worked well in both generator set-ups, but note that set-up (a), i.e., the one with transfer learning, converges faster and does not show the burrs on the curve. This demonstrates the benefit of the transfer learning with the pretrained $D_{sr}$. ### 8.6 Surrogate Model for Optimal Transport Map In Seguy et al. (2018) the authors proposed a two-step method for learning the barycentric projections of regularized optimal transport, as approximations of optimal transport maps between continuous measures. Their method solves the map between one pair of measures in one training process, but with measure-conditional discriminators we can modify it to obtain a surrogate model for the optimal transport maps between various pairs of measures. Figure 10: Results of the surrogate model for optimal transport maps between 16 pairs of Gaussian distributions. The red arrows represent the barycentric projection maps given by the surrogate model $D_{sr,G}$, while the black arrows represent the reference optimal transport maps from linear programming. Specifically, we use two $D_{sr}$ neural networks, denoted as $D_{sr,G}$ and $D_{sr,D}$, to approximate the transport map and an auxiliary function, respectively.
The first step in Seguy et al. (2018) is to maximize $\mathbb{E}_{x\sim P,y\sim Q}[u(x)+v(y)-\frac{1}{4\epsilon}(u(x)+v(y)-c(x,y))_{+}^{2}],$ (32) which is the variational form of the optimal transport cost with $L^{2}$ regularization, where $u$ and $v$ are two neural networks to be trained, $\epsilon=0.02$ is the regularization weight, and $c(x,y)$ is set as $||x-y||^{2}$. Utilizing the symmetry between the optimal $u$ and $v$ when we swap $P$ and $Q$, we use $D_{sr,D}(P,Q,x)$ and $D_{sr,D}(Q,P,y)$ to replace $u(x)$ and $v(y)$, respectively. The loss function for $D_{sr,D}$ reads $\displaystyle L_{D}=$ $\displaystyle\mathbb{E}_{(P,Q)\sim\mu}\mathbb{E}_{x\sim P,y\sim Q}$ (33) $\displaystyle[-D_{sr,D}(P,Q,x)-D_{sr,D}(Q,P,y)$ $\displaystyle+$ $\displaystyle\frac{1}{4\epsilon}(D_{sr,D}(P,Q,x)+D_{sr,D}(Q,P,y)-c(x,y))_{+}^{2}].$ The second step in Seguy et al. (2018) is to train $f$ to minimize $\mathbb{E}_{x\sim P,y\sim Q}[\frac{1}{2\epsilon}c(y,f(x))(u(x)+v(y)-c(x,y))_{+}],$ (34) so that the minimizer $f^{*}$ is the barycentric projection of the regularized optimal transport, which can be viewed as an approximation of the optimal transport map from $P$ to $Q$. We use $D_{sr,G}(P,Q,x)$ to replace $f(x)$, and the loss function for $D_{sr,G}$ reads $\displaystyle L_{G}=$ $\displaystyle\mathbb{E}_{(P,Q)\sim\mu}\mathbb{E}_{x\sim P,y\sim Q}[\frac{1}{2\epsilon}c(y,D_{sr,G}(P,Q,x))$ (35) $\displaystyle(D_{sr,D}(P,Q,x)+D_{sr,D}(Q,P,y)-c(x,y))_{+}].$ Note that we take the expectation over $(P,Q)$ in Equations 33 and 35, so that at the end of training, $D_{sr,G}(P,Q,x)$ approximates the optimal transport map from $P$ to $Q$ for various $(P,Q)$ pairs. Seguy et al. (2018) proposed to train $u$ and $v$ until convergence, and then train $f$, but we found that training $D_{sr,D}$ and $D_{sr,G}$ iteratively after a warm-up training of $D_{sr,D}$ also works. We train and test with $P,Q$ independently sampled from the 2D $\mu_{train}$ of Section 6.3 of the main text, and show the results after 200,000 iterations with 10,000 warm-up steps in Figure 10. The reference map is the empirical optimal transport map between 1000 samples, calculated by the POT package (Flamary & Courty, 2017) using linear programming. One can see that the surrogate model provides a transport map similar to the reference.
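For concreteness, here is a sketch of the dual loss (33) for a single $(P,Q)$ pair, assuming `d_srD(x, P, Q)` follows the interface of Equation 18 (the argument order and names are our own) and that samples are given as 2D tensors.

```python
import torch

def ot_dual_loss(d_srD, p_samples, q_samples, eps=0.02):
    """Sketch of the L2-regularized OT dual loss (33) for one (P, Q) pair."""
    # u(x) for x ~ P, i.e., D_sr,D(P, Q, x); shape (n1, 1).
    u = d_srD(p_samples, p_samples, q_samples)
    # v(y) for y ~ Q, i.e., D_sr,D(Q, P, y) with swapped measures; shape (n2, 1).
    v = d_srD(q_samples, q_samples, p_samples)
    c = torch.cdist(p_samples, q_samples) ** 2     # c(x, y) = ||x - y||^2
    slack = torch.clamp(u + v.T - c, min=0.0)      # (u(x) + v(y) - c(x, y))_+
    return -(u.mean() + v.mean()) + (slack ** 2).mean() / (4 * eps)
```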
# TREGO: a Trust-Region Framework for Efficient Global Optimization Y. Diouane Department of Mathematics and Industrial Engineering, Polytechnique Montréal. E-mail: <EMAIL_ADDRESS>V. Picheny Secondmind, 72 Hills Road, Cambridge, CB2 1LA, UK. E-mail: <EMAIL_ADDRESS>R. Le Riche CNRS LIMOS, Mines St-Etienne and UCA, France. E-mail: <EMAIL_ADDRESS>A. Scotto Di Perrotolo ISAE-SUPAERO, Université de Toulouse, France. E-mail: <EMAIL_ADDRESS> ###### Abstract Efficient Global Optimization (EGO) is the canonical form of Bayesian optimization that has been successfully applied to solve global optimization of expensive-to-evaluate black-box problems. However, EGO struggles to scale with dimension, and offers limited theoretical guarantees. In this work, a trust-region framework for EGO (TREGO) is proposed and analyzed. TREGO alternates between regular EGO steps and local steps within a trust region. By following a classical scheme for the trust region (based on a sufficient decrease condition), the proposed algorithm enjoys global convergence properties, while departing from EGO only for a subset of optimization steps. Using extensive numerical experiments based on the well-known COCO bound constrained problems, we first analyze the sensitivity of TREGO to its own parameters, then show that the resulting algorithm consistently outperforms EGO and is competitive with other state-of-the-art black-box optimization methods. Keywords: non-linear optimization; black-box optimization; Gaussian processes; Bayesian optimization; trust-region. ## 1 Introduction In the past 20 years, Bayesian optimization (BO) has encountered great successes and a growing popularity for solving global optimization problems with expensive-to-evaluate black-box functions. Examples range from aircraft design [26] to automatic machine learning [55] to crop selection [43]. In a nutshell, BO leverages non-parametric Gaussian processes (GPs) to provide flexible surrogate models of the objective. Sequential sampling decisions are based on the GPs, judiciously balancing exploration and exploitation in search of global optima (see [34, 40] for early works or [14] for a recent review). BO typically tackles problems of the form: $\displaystyle\displaystyle\min_{x\in\Omega}f(x),$ (1) where $f$ is a pointwise observable objective function defined over a continuous set $\Omega{\subseteq}\mathbb{R}^{n}$, with $n$ relatively small (say, 2 to 20). In this work, the objective function $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is assumed to be observable exactly (i.e., without random noise), bounded from below in $\mathbb{R}^{n}$ and Lipschitz continuous near appropriate limit points. The constraint set $\Omega$ will be treated as explicit (i.e., not relying on estimates, as in Schonlau et al. (1998)) and non-relaxable [38], meaning that the objective function cannot be evaluated outside the feasible region. In our numerical experiments, $\Omega$ will be set as a bound-constraint set. Despite its popularity and successes, BO suffers from a couple of important drawbacks. First, it is very sensitive to the curse of dimensionality, as with growing dimension exploration tends to overwhelm exploitation, and learning an accurate model throughout the search volume is typically not feasible within a limited number of function evaluations. Several recent works have tackled this problem, either making strong structural assumptions [13, 35, 61] or incentivizing sampling away from the boundaries [42, 54].
Second, the theoretical properties of BO are rather limited, in particular in the noiseless context. For BO algorithms based on the expected improvement acquisition function, Vazquez and Bect [59] showed that the sequence of evaluation points is dense in the search domain under some strong assumptions on the objective function. Bull [16] built upon this result to provide a convergence rate for EGO when GP models with a Matérn kernel are used. However, the proposed convergence rate requires the addition of a well-calibrated epsilon-greedy strategy to EGO and is valid only for a limited family of objective functions.

Over the past two decades, there has been a growing interest in deterministic Derivative-Free Optimization (DFO) [4, 19]. DFO methods either build local models of the objective function based on samples of the function values (e.g., trust-region methods), or directly exploit a sample set of function evaluations without building an explicit model (e.g., direct-search methods). Motivated by the large number of DFO applications, researchers and practitioners have made significant progress on the algorithmic and theoretical aspects (in particular, proofs of global convergence) of DFO methods.

In this paper, we propose to equip a classical BO method with known techniques from deterministic DFO: a trust-region scheme, and a sufficient decrease condition to accept new iterates [36]. This is in line with recent propositions hybridizing BO and DFO [24, 48], which showed great promise empirically but offer limited theoretical guarantees. The proposed TREGO algorithm (Trust-Region framework for Efficient Global Optimization) benefits from both worlds: TREGO rigorously achieves global convergence under reasonable assumptions, while enjoying the flexible predictors and efficient exploration-exploitation trade-off provided by the GPs. Contrary to the aforementioned propositions, TREGO maintains a global search step, ensuring that the algorithm can escape local optima and retain the asymptotic properties of BO [16, 59].

The remainder of this article is organized as follows. Section 2 presents the classical BO framework. Section 3 describes our hybrid algorithm, and Section 4 its convergence properties. Intensive numerical experiments have been carried out using the COCO test bed [30]. They represent months of CPU time and have allowed us to study TREGO and compare it with state-of-the-art alternatives. These experiments are reported in Section 5. Conclusions and perspectives are finally provided in Section 6. By default, this paper uses $\ell_{2}$ norms.

## 2 The Efficient Global Optimization Framework

Efficient Global Optimization (EGO) [34] is a class of BO methods relying on two key ingredients: (i) the construction of a GP surrogate model of the objective function and (ii) the use of an acquisition function. EGO proceeds along the following steps:

1. an initial set of evaluations (often referred to as Design of Experiments, $DoE$) of the objective function is obtained, typically using a space-filling design [25];
2. a GP surrogate model is trained on this data;
3. a fast-to-evaluate acquisition function, defined with the GP model, is maximized over $\Omega$;
4. the objective function is evaluated at the acquisition maximizer;
5. this new observation is added to the training set and the model is re-trained;
6. Steps 3 to 5 are repeated until convergence or budget exhaustion.
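To make the loop concrete, the sketch below strings steps 1-6 together. Here `fit_gp` and `maximize_acq` are assumed helper routines (standing in for a GP library and an acquisition optimizer), `bounds` is an $(n,2)$ array, and a plain uniform design stands in for the space-filling DoE; this is a schematic of the procedure, not a specific implementation.

```python
import numpy as np

def ego(f, bounds, n_init, budget, fit_gp, maximize_acq):
    """A minimal sketch of the EGO loop (steps 1-6 above)."""
    dim = len(bounds)
    # Step 1: initial DoE (plain uniform sampling for brevity).
    X = np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
    y = np.array([f(x) for x in X])
    while len(y) < budget:
        model = fit_gp(X, y)                           # Steps 2 / 5: (re)train the GP
        x_next = maximize_acq(model, bounds, y.min())  # Step 3: maximize EI over Omega
        y_next = f(x_next)                             # Step 4: evaluate the objective
        X, y = np.vstack([X, x_next]), np.append(y, y_next)
    return X[np.argmin(y)], y.min()
```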
The surrogate model is built by assuming that $f$ is a realization of a Gaussian process (GP) $(Y_{x})_{x\in\Omega}\sim\mathcal{GP}\left(m,c\right)$, with prior mean function $m(x):=\mathbb{E}(Y_{x})$ and covariance function $c(x,x^{\prime}):=\operatorname{cov}(Y_{x},Y_{x^{\prime}})$, $x,x^{\prime}\in\Omega$. Given a DoE of size $t\in\mathbb{N}^{*}$, i.e., $\mathcal{D}_{t}=\{x_{1},x_{2},\ldots,x_{t}\}$ and $\mathcal{Y}_{t}=\{f(x_{1}),f(x_{2}),\ldots,f(x_{t})\}$, the posterior distribution of the process conditioned on $\mathcal{D}_{t},\mathcal{Y}_{t}$ is Gaussian with mean and covariance given by [47]:

$m_{t}(x):=m(x)+\lambda_{t}(x)\left(Y_{t}-M_{t}\right),\qquad c_{t}(x,x^{\prime}):=c(x,x^{\prime})-\lambda_{t}(x)c_{t}(x^{\prime}),$

where $\lambda_{t}(x):=c_{t}(x)^{\top}C_{t}^{-1}$, $c_{t}(x):=(c(x,x_{1}),c(x,x_{2}),\dots,c(x,x_{t}))^{\top}$, $C_{t}:=(c(x_{i},x_{j}))_{1\leq i,j\leq t}$, $Y_{t}:=(f(x_{1}),f(x_{2}),\ldots,f(x_{t}))^{\top}$ and $M_{t}:=(m(x_{1}),m(x_{2}),\ldots,m(x_{t}))^{\top}$. Typically, $m$ is taken as a constant or a polynomial of small degree, and $c$ belongs to a family of covariance functions such as the Gaussian and Matérn kernels, based on hypotheses about the smoothness of $f$. The corresponding hyperparameters are often obtained as maximum likelihood estimates; see for example [47, 57] for details.

Once the surrogate model is built, an acquisition function is used to determine which point is most likely to efficiently enrich the model with respect to the search for a global minimizer of the objective function $f$. The expression of the acquisition function only depends on the probabilistic surrogate model and usually encodes a trade-off between exploitation (i.e., low $m_{t}(x)$) and exploration (i.e., high $c_{t}(x,x)$) [27]. In the noise-free setting, the canonical acquisition is Expected Improvement (EI) [34], i.e.,

$\operatorname{EI}_{t}(x):=(f_{\min}-m_{t}(x))\Phi\left(\frac{f_{\min}-m_{t}(x)}{\sqrt{c_{t}(x,x)}}\right)+\sqrt{c_{t}(x,x)}\,\phi\left(\frac{f_{\min}-m_{t}(x)}{\sqrt{c_{t}(x,x)}}\right),$

where $f_{\min}=\min_{1\leq i\leq t}f(x_{i})$. The functions $\phi$ and $\Phi$ denote the probability density and cumulative distribution functions, respectively, of the standard normal variable. Note that many alternative acquisition functions have been proposed over the past 20 years; see for example [52] for a recent review. We stress that while the focus here is on EI for simplicity, the framework described later is not limited to EI, and other acquisitions can be used instead (see Section 4 for suitable choices). Given $\mathcal{D}_{t}$, the set of observations available at iteration $k$, the next optimization iterate $x_{k+1}$ is given by

$x_{k+1}^{\operatorname{global}}\in\underset{x\in\Omega}{\operatorname{argmax}}~{}\alpha(x;\mathcal{D}_{t}),$ (2)

where $\alpha$ is the acquisition function chosen at iteration $k$ (for EGO, $\alpha(x;\mathcal{D}_{t})~{}=~{}\operatorname{EI}_{t}(x)$). For most existing implementations of EGO, the stopping criterion typically relies on a maximum number of function evaluations. In fact, unlike gradient-based methods, where the gradient norm can be used as a stopping criterion ensuring first-order stationarity, derivative-free optimization algorithms have to cope with the lack of a general stopping criterion, and the EGO algorithm is no exception.
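As an illustration, the following sketch transcribes the posterior formulas and the EI expression above into Python; `kern` is an assumed covariance-function helper returning a matrix of pairwise covariances $c(\cdot,\cdot)$, and a zero prior mean is used for brevity.

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, x_new, kern, m=lambda x: 0.0):
    """Posterior mean/variance at x_new given the DoE (X, y); a direct
    transcription of the formulas above, with a zero prior mean."""
    C = kern(X, X) + 1e-10 * np.eye(len(X))   # C_t, with jitter for stability
    c_t = kern(X, x_new[None, :]).ravel()     # c_t(x)
    lam = np.linalg.solve(C, c_t)             # lambda_t(x) = c_t(x)^T C_t^{-1}
    mean = m(x_new) + lam @ (y - m(X))        # m_t(x)
    var = kern(x_new[None, :], x_new[None, :]).item() - lam @ c_t
    return mean, max(var, 0.0)

def expected_improvement(mean, var, f_min):
    """EI_t(x) from the posterior mean and variance c_t(x, x)."""
    s = np.sqrt(max(var, 1e-24))              # guard against zero variance
    z = (f_min - mean) / s
    return (f_min - mean) * norm.cdf(z) + s * norm.pdf(z)
```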
## 3 A Trust-Region Framework for EGO (TREGO)

In this section, we propose a modified version of EGO where a control parameter is included (which depends on the decrease of the true objective function) to ensure some form of global convergence without jeopardizing the performance of the algorithm.

### 3.1 The TREGO algorithm

Our methodology follows the lines of the search/poll direct-search methods [12, 19, 22, 58]. In the context of EGO, this results in a scheme alternating between local and global phases. The global phase corresponds to running one iteration of the classical EGO algorithm over the whole design space, as in Eq. 2. This phase ensures an efficient global exploration and aims at identifying the neighborhood of a global minimizer. The local phase corresponds to running one iteration of EGO, but restricting the search to the vicinity of the current best point (the trust region $\Omega_{k}$, detailed hereafter), so that

$x^{\operatorname{local}}_{k+1}\in\underset{x\in\Omega_{k}}{\operatorname{argmax}}~{}\alpha(x;\mathcal{D}_{t}).$ (3)

Associated with a proper management of the trust region $\Omega_{k}$, this phase ensures that the algorithm converges to a stationary point. All the trial points, whether coming from the global or from the local phase, are included in the $DoE$ to refine the surrogate model of the objective function $f$. By default, only the global phase is used. The local one is activated when the global phase is unsuccessful, that is, when it fails to sufficiently reduce the best objective function value. In addition, the local phase consists of a fixed number of steps (typically only one), after which the algorithm reverts to the global phase. Consequently, the original EGO algorithm is entirely maintained over a subset of steps.

The local phase management follows two techniques widely used in the field of nonlinear optimization with and without derivatives. First, some form of _sufficient decrease condition_ is imposed on the objective function values to declare an iteration successful. Second, we control the size of the steps taken at each iteration using a parameter $\sigma_{k}$ that is updated depending on the sufficient decrease condition (increased if successful, decreased otherwise). Given a current best point $x_{k}^{*}$ at iteration $k$, a trust region around $x_{k}^{*}$ is defined as

$\Omega_{k}:=\{x\in\Omega\;{:}\;d_{\min}\sigma_{k}\leq\|x-x_{k}^{*}\|\leq d_{\max}\sigma_{k}\},$ (4)

where $d_{\min}<d_{\max}$ are two strictly positive real values. The inclusion in the algorithm of the bounds $d_{\min}$ and $d_{\max}$ in the definition of $\Omega_{k}$ is essential to our convergence analysis. In practice, the constant $d_{\min}$ can be chosen very small and the upper bound $d_{\max}$ can be set to a very large number. Note that the definition of the trust region as given in (4) uses the $\ell_{2}$ norm; however, other norms may be preferred depending on the nature of the constraint set $\Omega$. For instance, if $\Omega$ contains only bound constraints, it is more practical to use the $\ell_{1}$ norm, as we do in our experiments.
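A direct transcription of (4) into a membership test reads as follows; it is the kind of candidate filter one could use inside a multi-start acquisition optimizer, with the norm order left as a parameter to reflect the remark above.

```python
import numpy as np

def in_trust_region(x, x_best, sigma, d_min=1e-6, d_max=1.0, norm_ord=2):
    """Membership test for the trust region (4): the candidate must lie
    in the annulus of radii d_min*sigma and d_max*sigma around x_best."""
    r = np.linalg.norm(x - x_best, ord=norm_ord)
    return d_min * sigma <= r <= d_max * sigma
```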
At each iteration of the local phase, the following sufficient decrease condition on the objective function is imposed:

$f(x^{\operatorname{local}}_{k+1})\leq f(x^{*}_{k})-\rho(\sigma_{k}),$ (5)

where $\rho:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ is a forcing function [36], i.e., a positive continuous nondecreasing function such that $\rho(\sigma)/\sigma\rightarrow 0$ when $\sigma\downarrow 0$ (for instance, $\rho(\sigma)=\sigma^{2}$). An iteration is declared successful if the new iterate $x^{*}_{k+1}$ sufficiently decreases the objective function; in this case the step size parameter is increased, $\sigma_{k+1}=\gamma\sigma_{k}$ with $\gamma\in(1,+\infty)$, and the iterate can be updated either within the global phase ($x^{*}_{k+1}=x^{\operatorname{global}}_{k+1}$) or the local one ($x^{*}_{k+1}=x^{\operatorname{local}}_{k+1}$). If the sufficient decrease condition (5) is not satisfied, the current iterate is kept unchanged ($x^{*}_{k+1}=x^{*}_{k}$) and the step size is reduced, $\sigma_{k+1}=\beta\sigma_{k}$ with $\beta\in(0,1)$. A classical scheme is to keep $\beta\in(0,1)$ constant (i.e., take $\gamma=1/\beta$) and apply:

$\sigma_{k+1}=\frac{\sigma_{k}}{\beta}\quad\text{if the iteration is successful,}\qquad\sigma_{k+1}=\beta\sigma_{k}\quad\text{otherwise.}$ (6)

Figure 1 is a schematic illustration of the algorithm; the pseudo-code of the full algorithm is given in Appendix A.

Figure 1: An overview of the TREGO framework (detailed in Algorithm 1).

### 3.2 Extensions

We now present several possible extensions to TREGO. Some of these extensions are tested in the ablation study of Section 5.3.

#### Local / global ratio: in the previous section, a single local step is performed when the global step fails. The local/global ratio can easily be controlled by forcing several consecutive steps of either the global or the local phase. For example, a “gl3-5” tuning (see the algorithm names in Table 1) would first perform three global steps regardless of their success; if the last one fails, it then performs five local steps. Such a modification does not alter the structure of the algorithm. Moreover, since the convergence analysis relies on a subsequence of unsuccessful iterations, its validity (see Section 4) is not called into question. In fact, during the local phase, we keep using the same sufficient decrease condition to decide whether the current iteration is successful.

#### Local acquisition function: our analysis (see Section 4) does not require using the same acquisition for the global and local steps. For example, as EI tends to become numerically unstable in the vicinity of a cluster of observations, it might be beneficial to use the GP mean or a lower confidence bound [56] as the acquisition function for the local step.

#### Local model: similarly, our approach does not require using a single model for the global and local steps.
One could choose a local model that uses only the points inside the trust region to allow a better fit locally, in particular for heterogeneously varying functions.

#### Non-BO local step: finally, our analysis holds when the algorithm employed for the local step is not Bayesian. For example, using BFGS would allow a more aggressive local search, which could prove beneficial [39]. In fact, as long as condition (5) is used to decide whether the current iteration is successful, the convergence theory of the next section applies.

### 3.3 Related work

#### TRIKE [48] (Trust-Region Implementation in Kriging-based optimization with Expected improvement) implements a trust-region-like approach where each iterate is obtained by maximizing the expected improvement acquisition function within some trust region. The two major differences with TREGO are: 1) the criterion used to monitor the step size evolution is based on the ratio of the expected improvement to the actual improvement, rather than sufficient decrease; 2) TRIKE does not have a global phase. In [48], TRIKE is associated with a restart strategy to ensure global search.

#### TURBO [24] (a TrUst-Region BO solver) carries out a collection of simultaneous BO runs using independent GP surrogate models, each within a different trust region. The trust-region radius is updated with a failure/success mechanism based on the progress made on the objective function (importantly, TURBO uses a simple decrease rule for the objective function, which turns out to be insufficient to ensure convergence to a stationary point with GP models). At each iteration, a global phase (managed by an implicit multi-armed bandit strategy) allocates samples between these local areas and thus decides which local optimizations to continue. Both TRIKE and TURBO display very promising performance, in particular when solving high-dimensional optimization problems. However, both rely on several heuristics that hinder theoretical guarantees. In contrast, the use of the search/poll direct-search algorithmic design [12, 22, 19, 58] allows TREGO to benefit from global convergence properties.

## 4 Convergence Analysis of TREGO

Under appropriate assumptions, the global convergence of the proposed algorithm is now established. By global convergence, we mean the ability of a method to generate a sequence of points converging to a stationary point regardless of the starting DoE. A point is said to be stationary if it satisfies the first-order necessary conditions, in the sense that the gradient is equal to zero if the objective function is differentiable. In the non-smooth case, the first-order necessary conditions mean that, for any direction $d$, the Clarke generalized derivative [18] along $d$ is non-negative. In order to achieve our goal, the following additional assumption on the forcing function $\rho(\cdot)$ is made. There exist constants $\bar{\gamma}$ and $\bar{\beta}$ satisfying $\bar{\beta}<1<\bar{\gamma}$, such that, for each $\sigma>0$,

$\rho(\beta\sigma)\ \leq\ \bar{\beta}\rho(\sigma)~{}~{}~{}~{}~{}\mbox{and}~{}~{}~{}~{}~{}\rho(\gamma\sigma)\ \leq\ \bar{\gamma}\rho(\sigma).$ (7)

Such an assumption is not restrictive, as it holds in particular for the classical forcing functions of the form $\rho(\sigma)=c\sigma^{q}$ with $c>0$ and $q\geq 1$ (take $\bar{\beta}=\beta^{q}$ and $\bar{\gamma}=\gamma^{q}$). The next lemma shows that, as long as the objective function is bounded below, the series $\sum^{+\infty}_{k=0}\rho(\sigma_{k})$ is bounded above.
The proof of the lemma is inspired by what is done in DFO [3, 10, 11] when handling stochastic noisy estimates of the objective function, e.g., [3, Theorem 1].

###### Lemma 4.1

Consider TREGO without any stopping criterion. Let $f$ be bounded below by $f_{\mathrm{low}}$. Then, one has $\sum^{+\infty}_{k=0}\rho(\sigma_{k})\leq 2\left(f(x^{*}_{0})-f_{\mathrm{low}}\right)+\frac{2(1-\bar{\nu})}{\bar{\nu}}\rho(\sigma_{0})<\infty,$ where $\bar{\nu}=\max\left\{\frac{\bar{\gamma}-1}{\bar{\gamma}-\frac{1}{2}},\frac{1-\bar{\beta}}{\frac{3}{2}-\bar{\beta}}\right\}$.

Proof. For the sake of our proof, the following function is introduced:

$\phi_{k}\ :=\ \bar{\nu}(f(x^{*}_{k})-f_{\mathrm{low}})+(1-\bar{\nu})\rho(\sigma_{k}),$ (8)

where $\bar{\nu}=\max\left\{\frac{\bar{\gamma}-1}{\bar{\gamma}-\frac{1}{2}},\frac{1-\bar{\beta}}{\frac{3}{2}-\bar{\beta}}\right\}$. If an iteration $k$ is unsuccessful, then $x^{*}_{k+1}=x^{*}_{k}$ and $\sigma_{k+1}=\beta\sigma_{k}$, which leads to

$\phi_{k+1}-\phi_{k}=(1-\bar{\nu})(\rho(\sigma_{k+1})-\rho(\sigma_{k}))\leq(1-\bar{\nu})(\bar{\beta}-1)\rho(\sigma_{k})\leq-\frac{\bar{\nu}}{2}\rho(\sigma_{k}),$ (9)

where we used $\rho(\beta\sigma_{k})\leq\bar{\beta}\rho(\sigma_{k})$ and the fact that $\bar{\nu}\geq\frac{1-\bar{\beta}}{\frac{3}{2}-\bar{\beta}}$. Otherwise, if the iteration $k$ is successful, then $f(x^{*}_{k+1})\leq f(x^{*}_{k})-\rho(\sigma_{k})$ and $\sigma_{k+1}=\gamma\sigma_{k}$. Then, by using the fact that $\rho(\gamma\sigma_{k})\leq\bar{\gamma}\rho(\sigma_{k})$ and $\bar{\nu}\geq\frac{\bar{\gamma}-1}{\bar{\gamma}-\frac{1}{2}}$, we obtain

$\phi_{k+1}-\phi_{k}\leq-\bar{\nu}\rho(\sigma_{k})+(1-\bar{\nu})(\bar{\gamma}-1)\rho(\sigma_{k})\leq-\frac{\bar{\nu}}{2}\rho(\sigma_{k}).$ (10)

Hence, from (9) and (10), one deduces that for any iteration $k$,

$\phi_{k+1}-\phi_{k}\ \leq\ -\frac{\bar{\nu}}{2}\rho(\sigma_{k}).$ (11)

Thus, by summing over the subscript $k$, one gets for a given iteration index $n$

$\phi_{n+1}-\phi_{0}=\sum^{n}_{k=0}(\phi_{k+1}-\phi_{k})\leq-\frac{\bar{\nu}}{2}\sum^{n}_{k=0}\rho(\sigma_{k}).$

Since $\phi_{n+1}\geq 0$, one deduces, by letting $n\to\infty$, that

$\sum^{+\infty}_{k=0}\rho(\sigma_{k})\leq\frac{2}{\bar{\nu}}\phi_{0}=2\left(f(x^{*}_{0})-f_{\mathrm{low}}\right)+\frac{2(1-\bar{\nu})}{\bar{\nu}}\rho(\sigma_{0})<\infty.$

From Lemma 4.1, we conclude that the full sequence $\{\rho(\sigma_{k})\}$ must converge to zero. Thus, by using the properties of the forcing function $\rho(\cdot)$ (i.e., a continuous and nondecreasing function), one deduces that $\lim_{k\to+\infty}\sigma_{k}=0$. The result is stated in the next theorem.

###### Theorem 4.1

Consider TREGO without any stopping criterion. If the objective function $f$ is bounded below, then one has $\lim_{k\rightarrow+\infty}\sigma_{k}=0$.

We now introduce the following definition (similar to those in [3, 4, 5, 6]) to show the existence of convergent subsequences of TREGO iterates.

###### Definition 4.1

[3, Definition 5] A convergent subsequence $\{x^{*}_{k}\}_{k\in\mathcal{K}}$ of TREGO iterates (for some subset of indices $\mathcal{K}$) is said to be a refining subsequence if and only if $\{\sigma_{k}\}_{k\in\mathcal{K}}$ converges to zero. The limit $x^{*}$ of $\{x^{*}_{k}\}_{k\in\mathcal{K}}$ is called a refined point.

Assuming that TREGO produces iterates that lie in a compact set, one can ensure the existence of a refining subsequence.

###### Theorem 4.2

Consider TREGO without any stopping criterion. Let $f$ be bounded below.
If the sequence $\{x^{*}_{k}\}$ lies in a compact set, then there exists a convergent refining subsequence $\{x^{*}_{k}\}_{k\in\mathcal{K}}$.

The proposed convergence analysis relies on iterates from the local phase. Thus, in what follows, we will use $\{\hat{x}^{\operatorname{local}}_{k}\}_{k\in\mathcal{K}^{\prime}}\subseteq\{x^{*}_{k}\}_{k\in\mathcal{K}}$ (where $\mathcal{K}^{\prime}\subseteq\mathcal{K}$ is an infinite subset of indices) to denote a refining subsequence associated with the TREGO local phase iterates. Global convergence will be achieved by establishing that some type of directional derivatives are non-negative at limit points of refining subsequences along certain limit directions, known as refining directions (see [3, 4, 5, 6]).

###### Definition 4.2

Given a convergent refining subsequence (associated with the TREGO local phase) $\{\hat{x}^{\operatorname{local}}_{k}\}_{k\in\mathcal{K^{\prime}}}$ and its corresponding refined point $x^{*}$, let $\{d_{k}\}_{k\in\mathcal{K^{\prime}}}$ be the sequence defined by $d_{k}:=(x^{\operatorname{local}}_{k+1}-\hat{x}^{\operatorname{local}}_{k})/\sigma_{k}$ for all $k\in\mathcal{K^{\prime}}$. A direction $d$ is said to be a refining direction for $x^{*}$ if and only if there exists an infinite subset $\mathcal{L}\subseteq\mathcal{K}^{\prime}$ such that $\lim_{k\in\mathcal{L}}d_{k}=d$.

Note that, by construction, one has $d_{\min}\leq\|d_{k}\|\leq d_{\max}$ for all $k\in\mathcal{K}^{\prime}$. Thus, the existence of a refining direction $d$ is guaranteed, as the sequence $\{d_{k}\}_{k\in\mathcal{K^{\prime}}}$ lies in a compact set. When $f$ is Lipschitz continuous near $x^{*}$, one can make use of the Clarke-Jahn generalized derivative along a direction $d$:

$f^{\circ}(x^{*};d)\;:=\;\limsup_{x\rightarrow x^{*},\,x\in\Omega,\ t\downarrow 0,\,x+td\in\Omega}\frac{f(x+td)-f(x)}{t}.$

(Such a derivative is essentially the Clarke generalized directional derivative [18], adapted by Jahn [33] to the presence of constraints.) However, for the proper definition of $f^{\circ}(x^{*};d)$, one needs to guarantee that $x+td\in\Omega$ for $x\in\Omega$ arbitrarily close to $x^{*}$, which is assured if $d$ is hypertangent to $\Omega$ at $x^{*}$. In the following definition from [6, 19], we use the notation $B(x;\Delta):=\{y\in\mathbb{R}^{n}:\|y-x\|<\Delta\}$ to denote the open ball of radius $\Delta$ centered at $x$.

###### Definition 4.3

[6, Definition 3.3] A vector $d\in\mathbb{R}^{n}$ is said to be a hypertangent vector to the set $\Omega\subseteq\mathbb{R}^{n}$ at the point $x$ in $\Omega$ if there exists a scalar $\epsilon>0$ such that

$y+tw\in\Omega\quad\forall y\in\Omega\cap B(x;\epsilon),\quad w\in B(d;\epsilon)\quad\text{and}\quad 0<t<\epsilon.$

The hypertangent cone to $\Omega$ at $x$, denoted by $T_{\Omega}^{H}(x)$, is the set of all hypertangent vectors to $\Omega$ at $x$. Then, the Clarke tangent cone to $\Omega$ at $x$ (denoted by $T_{\Omega}(x)$) can be defined as the closure of the hypertangent cone $T_{\Omega}^{H}(x)$ (when the latter is nonempty, an assumption we need for global convergence anyway). The Clarke tangent cone generalizes the notion of tangent cone in nonlinear programming [41]. In the following definition from [4, 6, 18, 19], we give the formal notion of the Clarke tangent cone.
###### Definition 4.4

[6, Definition 3.5] A vector $d\in\mathbb{R}^{n}$ is said to be a Clarke tangent vector to the set $\Omega\subseteq\mathbb{R}^{n}$ at the point $x$ in the closure of $\Omega$ if, for every sequence $\{y_{k}\}$ of elements of $\Omega$ that converges to $x$ and for every sequence of positive real numbers $\{t_{k}\}$ converging to zero, there exists a sequence of vectors $\{w_{k}\}$ converging to $d$ such that $y_{k}+t_{k}w_{k}\in\Omega$ for all sufficiently large $k$. The set $T_{\Omega}(x)$ of all Clarke tangent vectors to $\Omega$ at $x$ is called the Clarke tangent cone to $\Omega$ at $x$.

If we assume that $f$ is Lipschitz continuous near $x^{*}$, then by [6, Proposition 3.5], for any direction $v$ in the Clarke tangent cone (possibly not in the hypertangent one), one can define the Clarke-Jahn generalized derivative to $\Omega$ at $x^{*}$ as the limit

$f^{\circ}(x^{*};v)\;=\;\lim_{d\in T_{\Omega}^{H}(x^{*}),d\rightarrow v}f^{\circ}(x^{*};d).$

A point $x^{*}\in\Omega$ is said to be Clarke stationary if $f^{\circ}(x^{*};d)\geq 0$ for all $d\in T_{\Omega}(x^{*})$. Moreover, when $f$ is strictly differentiable at $x^{*}$, one has $f^{\circ}(x^{*};d)=\nabla f(x^{*})^{\top}d$; hence, in this case, $x^{*}$ being a Clarke stationary point is equivalent to $\nabla f(x^{*})^{\top}d\geq 0$ for all $d\in T_{\Omega}(x^{*})$. It remains to state the next lemma, which will be useful for the proof of the optimality result based on the Clarke derivative. The proof of this lemma is inspired by [3, Theorem 5].

###### Lemma 4.2

Consider TREGO without any stopping criterion. Let $f$ be bounded below by $f_{\mathrm{low}}$. Then, one has

$\liminf_{k\to+\infty}\frac{f(\hat{x}^{\operatorname{local}}_{k})-f(\hat{x}^{\operatorname{local}}_{k}+\sigma_{k}d_{k})}{\sigma_{k}}\leq 0.$

Proof. By contradiction, assume that there exists $\epsilon>0$ such that

$\frac{f(\hat{x}^{\operatorname{local}}_{k})-f(\hat{x}^{\operatorname{local}}_{k}+\sigma_{k}d_{k})}{\sigma_{k}}\geq\epsilon~{}~{}~{}~{}~{}~{}\mbox{for all sufficiently large $k$}.$ (12)

From Theorem 4.1, one has $\lim_{k\to+\infty}\sigma_{k}=0$; hence, by using the forcing function properties, one also has $\lim_{k\to+\infty}\frac{\rho(\sigma_{k})}{\sigma_{k}}=0$. This means that there exists $k_{0}>0$ such that

$\rho(\sigma_{k})\leq\epsilon\sigma_{k}~{}~{}~{}~{}~{}~{}\mbox{for all $k\geq k_{0}$}.$ (13)

By combining (12) and (13), one gets

$f(\hat{x}^{\operatorname{local}}_{k})-f(\hat{x}^{\operatorname{local}}_{k}+\sigma_{k}d_{k})\geq\epsilon\sigma_{k}\geq\rho(\sigma_{k})~{}~{}~{}~{}~{}~{}\mbox{for all $k\geq k_{0}$}.$

Hence, for all $k\geq k_{0}$, the $k$-th iteration of TREGO is successful and $\sigma_{k+1}=\gamma\sigma_{k}$ (with $\gamma>1$). This contradicts $\lim_{k\to+\infty}\sigma_{k}=0$, and thus claim (12) is false.

The next theorem states the global convergence of TREGO. The obtained result is in the vein of those first established in [6, Theorem 3.2] for simple decrease and Lipschitz continuous functions, and later generalized in [21, 60] for sufficient decrease and directionally Lipschitz functions.

###### Theorem 4.3

Let the assumptions made in Theorem 4.1 hold. Let $x^{*}\in\Omega$ be a refined point of a refining subsequence associated with the TREGO local phase $\{\hat{x}^{\operatorname{local}}_{k}\}_{k\in\mathcal{K}^{\prime}}$.
Assume that $f$ is Lipschitz continuous near $x^{*}$ and that $T_{\Omega}^{H}(x^{*})\neq\emptyset$. Let $d\in T_{\Omega}^{H}(x^{*})$ be a refining direction associated with $\{d_{k}\}_{k\in\mathcal{K}^{\prime}}$. Then, the Clarke-Jahn generalized derivative of $f$ at $x^{*}$ in the direction $d$ is nonnegative, i.e., $f^{\circ}(x^{*};d)\geq 0$.

Proof. From Lemma 4.2, there exists a subset $\mathcal{K}$ such that $\lim_{k\in\mathcal{K}}\frac{f(\hat{x}^{\operatorname{local}}_{k})-f(x^{\operatorname{local}}_{k+1})}{\sigma_{k}}~{}\leq~{}0$. From Theorem 4.1, one also has $\lim_{k\in\mathcal{K}}\sigma_{k}=0$. Now, by using Theorem 4.2, there exists a subset $\mathcal{K}^{\prime}\subseteq\mathcal{K}$ such that $\lim_{k\in\mathcal{K}^{\prime}}\hat{x}^{\operatorname{local}}_{k}=x^{*}$. Then, since the subsequence $\{d_{k}\}_{k\in\mathcal{K}}$ lies in a compact set, there must exist a subset $\mathcal{L}\subseteq\mathcal{K}^{\prime}$ such that $\{d_{k}\}_{k\in\mathcal{L}}$ converges to $d$ and

$\lim_{k\in\mathcal{L}}\frac{f(\hat{x}^{\operatorname{local}}_{k})-f(\hat{x}^{\operatorname{local}}_{k}+\sigma_{k}d_{k})}{\sigma_{k}}\leq 0.$ (14)

From the Lipschitz continuity of $f$ near $x^{*}$ and using [6, Proposition 3.9], one deduces that the Clarke generalized derivative is continuous with respect to $d$ on the Clarke tangent cone. Hence, $f^{\circ}(x^{*};d)=\lim_{k\in\mathcal{L}}f^{\circ}(x^{*};d_{k})$. Additionally, one has $\hat{x}^{\operatorname{local}}_{k}+\sigma_{k}d_{k}\in\Omega$ for all $k\in\mathcal{L}$, which leads to

$f^{\circ}(x^{*};d)=\lim_{k\in\mathcal{L}}\limsup_{x\rightarrow x^{*},\,x\in\Omega,\ t\downarrow 0,\,x+td_{k}\in\Omega}\frac{f(x+td_{k})-f(x)}{t}\geq\limsup_{k\in\mathcal{L}}\frac{f(\hat{x}^{\operatorname{local}}_{k}+\sigma_{k}d_{k})-f(\hat{x}^{\operatorname{local}}_{k})}{\sigma_{k}}.$ (17)

Hence, by substituting (14) into (17), one gets

$f^{\circ}(x^{*};d)\geq\limsup_{k\in\mathcal{L}}\frac{f(\hat{x}^{\operatorname{local}}_{k}+\sigma_{k}d_{k})-f(\hat{x}^{\operatorname{local}}_{k})}{\sigma_{k}}\geq 0.$

## 5 Numerical Experiments

The objective of this section is twofold: first, to evaluate the sensitivity of TREGO to its own parameters and perform an ablation study; second, to compare our algorithm with the original EGO and other BO alternatives to show its strengths and weaknesses. TREGO is available both in the R package DiceOptim (https://cran.r-project.org/package=DiceOptim) and in the Python library trieste (https://secondmind-labs.github.io/trieste/).

### 5.1 Testing procedure using the BBOB benchmark

Our experiments are based on the COCO (COmparing Continuous Optimizers [30]) software. COCO is a recent effort to build a testbed that allows the rigorous comparison of optimizers. We focus here on the noiseless Black-Box Optimization Benchmarking (BBOB) test suite in the expensive objective function setting [29], which contains 15 instances of 24 functions [15]; each function is defined for an arbitrary number ($\geq 2$) of parameters to optimize. Each instance corresponds to a randomized modification of the original function (rotation of the coordinate system and a random translation of the optimum).
The functions are divided into 5 groups: 1) separable; 2) unimodal with moderate conditioning; 3) unimodal with high conditioning; 4) multi-modal with adequate global structure; and 5) multi-modal with weak global structure. Note that group 4 is often seen as the main target for Bayesian optimization [34]. The full description of the functions is available in Appendix B (see Table 2).

_A problem_ is a pair [function, target to reach]. Therefore, for each instance of a function, there are several problems to solve, of difficulty varying with the target value. The _Empirical Run Time Distribution_ (ERTD) gives, for a given budget (i.e., number of objective function evaluations), the proportion of problems solved by an algorithm. This metric can be evaluated for a single function and dimension, or averaged over a set of functions (typically over one of the 5 groups or over all 24 functions).

To set the target values and, more generally, define a reference performance, COCO relies on a composite fake algorithm called best09: at each optimization iteration, best09 consists of the best performing algorithm of BBOB 2009 [29]. In our experiments, the targets were set at the values reached by best09 after $[0.5,1,3,5,7,10,15,20]\times n$ function evaluations. Note that outperforming best09 is a very challenging task, as it does not correspond to the performance of a single algorithm but to that of the best performing algorithm for each instance. In the following, the best09 performance is added to the plots as a reference. In addition, we added the performance of a purely random search, to serve as a lower bound.

### 5.2 Implementation details

For a fair comparison, TREGO, EGO and TRIKE are implemented within a unique framework, based on the R packages DiceKriging (Gaussian process models) and DiceOptim (BO) [44, 50]. Our setup aligns with current practices in BO [27, 53], as we detail below. All GP models use a constant trend and an anisotropic Matérn covariance kernel with smoothness parameter $\nu=5/2$. The GP hyperparameters are inferred by maximum likelihood after each addition to the training set; the likelihood is maximized using a multi-start L-BFGS scheme. In case of numerical instability, a small regularization value is added to the diagonal of the covariance matrix. Trust regions are defined using the $\ell_{1}$ norm, see (4), so that they are hyper-rectangles. This allows us to optimize the expected improvement using a multi-start L-BFGS scheme. Each experiment starts with an initial set of $2n+4$ observations, generated using Latin hypercube sampling improved through a maximin criterion [25]. All BO methods start with the same DoEs, and the DoE differs (varying the seed) across problem instances. For locGP, the local model uses the same kernel and mean function as the global one, but its hyperparameters are inferred independently. To avoid numerical instability, the local model is always trained on at least $2n+1$ points; if the trust region does not contain enough points, the points closest to the center of the trust region are added to the training set.

### 5.3 Sensitivity analysis and ablation study

TREGO depends on a number of parameters (see Section 3) and has some additional degrees of freedom worth exploring (see Section 3.2). The objective of these experiments is to answer the following questions:

1. is TREGO sensitive to the initial size of the trust region?
2. is TREGO sensitive to the contraction factor $\beta$ (see Eq. 6) of the trust region?
3. is using a local model beneficial?
4. is there an optimal ratio of global and local steps?

To answer these questions, we run a default version of TREGO and 9 variants, as reported in Table 1. The contraction parameter $\beta$ is either $0.9$ (which is classical in DFO algorithms) or $0.5$ (which corresponds to an aggressive reduction of the trust region). The choice of the initial trust-region size $\sigma_{0}$ in the default TREGO corresponds to setting the initial trust-region volume to $20\%$ of the search space; in this case, the initial trust-region volume is given by $(2\sigma_{0})^{n}$. We also test alternatives with a small initial trust region (10% of the search space) and a larger one (40% of the search space). The global-local ratio varies from 10-1 (expected to behave almost like the original EGO) to 1-10 (very local).

Acronym | Solvers
---|---
random | random search
best09 | best of all BBOB 2009 competitors at each budget [8]
TRIKE | TRIKE algorithm of [48]
SMAC | SMAC algorithm of [31]
DTS-CMA | DTS-CMA algorithm of [9]
EGO | original EGO algorithm of [34]
TREGO | default TREGO with $\beta=0.9$, $\gamma=1/\beta$, $\sigma_{0}=\frac{1}{2}(1/5)^{1/n}$, $\rho(\sigma)=\sigma^{2}$, $d_{\max}=1$, $d_{\min}=10^{-6}$, global/local ratio = 1 / 1 (i.e., $G=1$ and $L=1$), with no local GP model
gl1-10, gl1-4, gl4-1 and gl10-1 | TREGO with a global/local ratio of 1/10, 1/4, 4/1 and 10/1, respectively
smV0 and lgV0 | TREGO with small (i.e., $\sigma_{0}=\frac{1}{2}(1/10)^{1/n}$) and large (i.e., $\sigma_{0}~{}=~{}\frac{1}{2}(2/5)^{1/n}$) initial trust-region size
fstC | TREGO with fast contraction of the trust region, i.e., $\beta=0.5$
fstCsmV0 | TREGO with fast contraction of the trust region and small $\sigma_{0}$
locGP | TREGO with a local GP model

Table 1: Names of the compared algorithms. For the TREGO variants, when not specified, the parameter values are those of the default TREGO.

Because of the cost of a full COCO benchmark with EGO-like algorithms, the interaction between these parameters is not studied. Also, the ablation experiments are limited to problems in dimensions 2 and 5 and to relatively short runs ($30n$ function evaluations). With these settings and 15 repetitions of each optimization run, an EGO algorithm is tested within a couple of days of computing time on a recent single processor.

Figure 2, top row, summarizes our study of the effect of the global versus local iteration ratio. There is a measurable advantage for algorithms devoting more iterations to local rather than global search: gl1-4 and gl1-10 consistently outperform gl4-1 and gl10-1, and slightly outperform the TREGO baseline, the effect being more visible in higher dimension (see also Figure 3 for results in 10 dimensions). By further splitting results into function groups (see Figure 5 in Appendix), it is observed that the performance gain due to having more local iterations occurs on the unimodal function groups (the 2nd and 3rd, i.e., unimodal functions with low and high conditioning), while less difference can be observed on multimodal functions (first, fourth and fifth groups). For multimodal functions with a weak global structure (fifth group, bottom right plot of Figure 5), gl10-1 is even, on average over the budgets, the best strategy. These findings are intuitive, as unimodal functions may not benefit at all from global steps, while on the other hand a too aggressively local strategy (e.g.
gl1-10) may get trapped in a local optimum of a highly multimodal function. Overall on this benchmark, gl1-4 offers the best trade-off over all groups between performance and robustness.

Figure 2: Effect of changing the amount of local and global iterations (top), and changing the other parameters of the TREGO algorithm (bottom). Performance is reported in terms of ERTD, averaged over the entire noiseless BBOB testbed in 2 (left) and 5 (right) dimensions. Run length is $30\times n$.

Figure 2, bottom row, shows the average performance of the other variants of TREGO. Overall, TREGO has very little sensitivity to its internal parameters, the average performances of all TREGO variants being similar in both dimensions. The robustness of TREGO with respect to these parameters is an advantage of the method, and is in line with what is generally observed for trust-region based algorithms. The effects of the TREGO parameters are studied by function groups in Figure 5 (see Appendix). The main visible results are:

• a slightly positive effect of the local GP (locGP) on groups 1 and 2, but a strong negative effect on unimodal functions with high conditioning (group 3), and no effect on the remaining groups. Despite offering attractive flexibility in theory, the local GP provides in practice either limited gain or a negative impact on performance. As this variant is also more complicated than TREGO, it may be discarded.

• a positive effect of fast contraction of the trust region (fstC and fstCsmV0) on highly multimodal functions (group 5) during early iterations. By making the trust region more local earlier in the search, the fast contraction allows the algorithm to reach the easy targets, but this early focus prevents it from reaching better targets later on (those variants being outperformed by others at the end of the runs).

The gl1-4 variant of TREGO thus offers the best trade-off over all groups between performance and robustness. In our comparison with the state-of-the-art BBO algorithms, we will use the name TREGO to refer to the gl1-4 solver.

### 5.4 Comparison with state-of-the-art BBO algorithms

Longer runs of length $50n$ function evaluations are made with TREGO in dimensions 2, 5 and 10. The results are compared to state-of-the-art Bayesian optimization algorithms: a vanilla EGO, which serves as a baseline, TRIKE (see Section 3.3), SMAC, DTS-CMA, Nomad and MCS. A COCO test campaign of such a set of algorithms up to dimension 10, with run length $50n$ and 15 repetitions of the optimizations, takes on the order of 3 weeks of computing time on a recent single processor. DTS-CMA [9] is a surrogate-assisted evolution strategy based on a combination of the CMA-ES algorithm and Gaussian process surrogates. The DTS-CMA solver is known to be very competitive with state-of-the-art black-box optimization solvers, particularly on some classes of multimodal test problems. SMAC [31] (in its BBOB version) is a BO solver that uses an isotropic GP to model the objective function and a stochastic local search to optimize the expected improvement. SMAC is known to perform very well early in the search compared to state-of-the-art black-box optimizers. Nomad [2, 37] is a C++ solver based on the mesh adaptive direct search method [6]. We have tested Nomad version 4.2.0 via its Python interface, with the variable neighborhood search (VNS) strategy enabled to enhance its global exploration.
Nomad enjoys convergence properties similar to those of TREGO, hence a comparison between the two solvers is meaningful. MCS [32] is a multilevel coordinate search solver that balances global and local search (the latter using quadratic interpolation). MCS is among the best DFO solvers on bound constrained optimization problems [49]. DTS-CMA, SMAC and MCS results are directly extracted from the COCO database. This is not the case for Nomad and TRIKE. As TRIKE follows a relatively standard BO framework, we use our own implementation to compare TREGO against it. As TURBO has a complex structure and the available code is too computationally demanding to be used directly with COCO, it is left out of this study. Figure 3 gives the average performance of the algorithms on all the functions of the testbed. Results in 5 and 10 dimensions, split by function groups, are provided in Figure 4.

Figure 3: Comparison of TREGO with state-of-the-art optimization algorithms, averaged over the entire COCO testbed in 2, 5 and 10 dimensions. Run length = $50\times n$.

Figure 4: Comparison of TREGO with state-of-the-art optimization algorithms, averaged over the multi-modal functions with adequate (left, f15 to f19) and weak (middle, f20 to f24) global structure, and unimodal functions with low conditioning (right), for $n=5$ (top row) and $n=10$ (bottom row). Run length = $50\times n$. Results for the other groups are given in Appendix, Figure 6.

#### EGO is significantly outperformed by both trust-region algorithms (i.e., TREGO and TRIKE). This performance gap is limited for $n=2$ but very visible for $n=5$ and even larger for $n=10$. It is also significant for any budget (as soon as the shared initialization is done). The improvement is also visible for all function groups (see Figure 4), in particular for groups with strong structure. For the multimodal group with weak structure, the effect is mostly visible for the larger budgets.

#### Nomad has an overall performance comparable to TREGO. Nomad is shown in particular to be very efficient for small budgets ($<20\times n$), but it is then outperformed by TREGO as the evaluation budget grows. The performance gap between Nomad and TREGO is limited for $n=2$ but very visible for $n=5$ and even larger for $n=10$, see Figure 3. The good start of Nomad can be explained by the fact that it requires only one point to start the optimization process, while Bayesian optimization solvers need a set of points (i.e., the initial DoE) to initiate the optimization process. Thanks to its variable neighborhood search strategy, Nomad seems to outperform most of the tested solvers on the group of multimodal optimization problems, see Figure 4.

#### MCS is outperformed by most of the tested solvers despite its very good performance at the early stages of the optimization process. In fact, the early performance of MCS seems to deteriorate very fast as the budget grows, particularly on multimodal problems, see Figure 4.

#### SMAC has an early start and is visibly able to start optimizing while all other methods are still creating their initial DoE. However, it is outperformed by all trust-region variants before the number of evaluations reaches 10 times the problem dimension (vertical line on the graphs). This effect also increases with dimension.
#### DTS-CMA has, conversely, a slower start, so that it is slightly outperformed by the trust-region methods for small budgets ($<20\times n$); for large budgets and $n=10$, however, DTS-CMA largely outperforms the other methods on average. Looking at Figure 4, DTS-CMA clearly outperforms the other methods (including the best09 baseline) on multimodal functions with strong structure for $n=10$ and large budgets, while TREGO remains competitive in the other cases.

#### TRIKE has an overall performance comparable to TREGO. For $n=5$, it slightly outperforms the other methods for intermediate budget values, but loses its advantage for larger budgets. Figure 6 (see Appendix) reveals that this advantage is mainly achieved on the unimodal group with high conditioning; on multi-modal problems, TREGO’s ability to perform global steps offers a substantial advantage.

#### Overall performance Overall, this benchmark does not reveal a universal winner. SMAC, Nomad and MCS excel with extremely limited budgets, while DTS-CMA outperforms the other methods for the largest dimensions and budgets. TREGO is very competitive for intermediate budgets and dimensions, in particular on multi-modal functions.

#### Discussion It appears clearly from our experiments that trust regions are an efficient way to improve EGO’s scalability with dimension. EGO is known to over-explore the boundaries in high dimension [42, 24], and narrowing the search space to the vicinity of the current best point naturally solves this issue. Thus, since EGO is outperformed for any budget, we can conclude that the gain obtained by focusing early on local optima is not lost later by missing the global optimum region. Trust regions also improve the performance of EGO on problems for which GPs are not the most natural fit (i.e., unimodal functions). For this class of problems, the most aggressively local algorithm (TRIKE) can perform best in some cases (Figure 6), but our more balanced approach is almost as good, if not better (Figure 6, unimodal functions with low conditioning). On the other hand, maintaining a global search throughout the optimization run allows escaping local optima and ultimately delivers better performance for larger budgets (see in particular Figure 4, all multimodal functions).

## 6 Conclusions and perspectives

In this work, the TREGO method is introduced: a Bayesian optimization algorithm based on a trust-region mechanism for the optimization of expensive-to-evaluate black-box functions. TREGO builds on the celebrated EGO algorithm by alternating between a standard global step and a local step during which the search is limited to a trust region. Equipped with such a local step, TREGO rigorously achieves global convergence, while enjoying the flexible predictors and efficient exploration-exploitation trade-off provided by the GPs. An extensive benchmark was then performed, which allowed us to draw the following conclusions:

• TREGO benefits from having a relatively high proportion of local steps, but is otherwise almost insensitive to its other parameters.

• A more complex approach involving both a local and a global model, which is possible in the TREGO framework, does not provide any benefit.

• TREGO significantly outperforms EGO in all tested situations.

• TREGO is a highly competitive algorithm for multi-modal functions with moderate dimensions and budgets.

Making TREGO a potential overall winner on the experiments reported here is an avenue for future work.
This would require improving its performance on unimodal functions with high conditioning, as well as its performance at very early steps, for example by leveraging SMAC to create the initial DoEs. Our empirical evaluation focused on bound constrained BBO problems. However, TREGO readily applies to the case of explicit, non-relaxable constraints, which may be studied in the future. Moreover, inspired by e.g. [7, 20, 28] from the DFO community and [46, 51] from the BO one, TREGO can also be naturally extended to handle constraints that are allowed to be violated during the optimization process. Another important future work is the extension of TREGO to the case of noisy observations, following recent results in DFO [1, 3, 17, 23] and established BO techniques [45].

## References

* [1] S.-K. Anagnostidis, A. Lucchi, and Y. Diouane. Direct-search for a class of stochastic min-max problems. In International Conference on Artificial Intelligence and Statistics, pages 3772–3780. PMLR, 2021.
* [2] C. Audet, S. Le Digabel, V. Rochon Montplaisir, and C. Tribes. Nomad version 4: Nonlinear optimization with the MADS algorithm. arXiv preprint arXiv:2104.1167, 2021.
* [3] C. Audet, K. J. Dzahini, M. Kokkolaras, and S. Le Digabel. Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates. Comput. Optim. Appl., 19:1–34, 2021.
* [4] C. Audet and W. Hare. Derivative-Free and Blackbox Optimization. Springer, Cham, Philadelphia, 2017.
* [5] C. Audet and J. E. Dennis Jr. Analysis of generalized pattern searches. SIAM J. Optim., 13:889–903, 2002.
* [6] C. Audet and J. E. Dennis Jr. Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim., 17:188–217, 2006.
* [7] C. Audet and J. E. Dennis Jr. A progressive barrier for derivative-free nonlinear programming. SIAM J. Optim., 20:445–472, 2009.
* [8] A. Auger, S. Finck, N. Hansen, and R. Ros. BBOB 2009: Comparison tables of all algorithms on all noiseless functions. Technical Report RT-0383, INRIA, April 2010.
* [9] L. Bajer, Z. Pitra, J. Repický, and M. Holena. Gaussian process surrogate models for the CMA evolution strategy. Evol. Comput., 27:665–697, 2019.
* [10] E. Bergou, Y. Diouane, V. Kungurtsev, and C. W. Royer. A stochastic Levenberg-Marquardt method using random models with complexity results. SIAM-ASA J. Uncertain. Quantif., 10:507–536, 2022.
* [11] J. Blanchet, C. Cartis, M. Menickelly, and K. Scheinberg. Convergence rate analysis of a stochastic trust region method via supermartingales. INFORMS J. Optim., 1:92–119, 2019.
* [12] A. J. Booker, J. E. Dennis Jr., P. D. Frank, D. B. Serafini, V. Torczon, and M. W. Trosset. A rigorous framework for optimization of expensive functions by surrogates. Struct. Multidiscipl. Optim., 17:1–13, 1998.
* [13] M. A. Bouhlel, N. Bartoli, R. G. Regis, A. Otsmane, and J. Morlier. Efficient global optimization for high-dimensional constrained problems by using the kriging models combined with the partial least squares method. Eng. Optim., 50:2038–2053, 2018.
* [14] E. Brochu, V. M. Cora, and N. De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
* [15] D. Brockhoff. Online description of the BBOB functions. https://coco.gforge.inria.fr/, 2006.
* [16] A. D. Bull. Convergence rates of efficient global optimization algorithms. J. Mach. Learn. Res., 12:2879–2904, 2011.
* [17] R. Chen, M. Menickelly, and K.
Scheinberg. Stochastic optimization using a trust-region method and random models. Math. Program., 169:447–487, 2018.
* [18] F. H. Clarke. Optimization and Nonsmooth Analysis. John Wiley & Sons, New York, 1983. Reissued by SIAM, Philadelphia, 1990.
* [19] A. R. Conn, K. Scheinberg, and L. N. Vicente. Introduction to Derivative-Free Optimization. MPS-SIAM Series on Optimization. SIAM, Philadelphia, 2009.
* [20] Y. Diouane. A merit function approach for evolution strategies. EURO J. Comput. Optim., 9:100001, 2021.
* [21] Y. Diouane, S. Gratton, and L. N. Vicente. Globally convergent evolution strategies. Math. Program., 152:467–490, 2015.
* [22] Y. Diouane, S. Gratton, and L. N. Vicente. Globally convergent evolution strategies for constrained optimization. Comput. Optim. Appl., 62:323–346, 2015.
* [23] Y. Diouane, A. Lucchi, and V. Patil. A globally convergent evolutionary strategy for stochastic constrained optimization with applications to reinforcement learning. In International Conference on Artificial Intelligence and Statistics, pages 3772–3780. PMLR, 2022.
* [24] D. Eriksson, M. Pearce, J. Gardner, R. D. Turner, and M. Poloczek. Scalable global optimization via local Bayesian optimization. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5496–5507. Curran Associates, Inc., 2019.
* [25] K.-T. Fang, R. Li, and A. Sudjianto. Design and Modeling for Computer Experiments. CRC Press, 2005.
* [26] A. I. J. Forrester, A. Sóbester, and A. J. Keane. Multi-fidelity optimization via surrogate modelling. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 463:3251–3269, 2007.
* [27] P. I. Frazier. A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811, 2018.
* [28] S. Gratton and L. N. Vicente. A merit function approach for direct search. SIAM J. Optim., 24:1980–1998, 2014.
* [29] N. Hansen, A. Auger, R. Ros, S. Finck, and P. Pošík. Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, pages 1689–1696. ACM, 2010.
* [30] N. Hansen, A. Auger, R. Ros, O. Mersmann, T. Tušar, and D. Brockhoff. COCO: a platform for comparing continuous optimizers in a black-box setting. Optim. Methods Softw., 36:114–144, 2021.
* [31] F. Hutter, H. H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proceedings of the 5th International Conference on Learning and Intelligent Optimization, LION'05, pages 507–523, Berlin, Heidelberg, 2011. Springer-Verlag.
* [32] W. Huyer and A. Neumaier. Global optimization by multilevel coordinate search. J. Global Optim., 14:331–355, 1999.
* [33] J. Jahn. Introduction to the Theory of Nonlinear Optimization. Springer-Verlag, Berlin, 1996.
* [34] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. J. Global Optim., 13:455–492, 1998.
* [35] K. Kandasamy, J. Schneider, and B. Póczos. High dimensional Bayesian optimisation and bandits via additive models. In International Conference on Machine Learning, pages 295–304, 2015.
* [36] T. G. Kolda, R. M. Lewis, and V. Torczon. Optimization by direct search: New perspectives on some classical and modern methods. SIAM Rev., 45:385–482, 2003.
* [37] S. Le Digabel. Algorithm 909: NOMAD: Nonlinear optimization with the MADS algorithm.
ACM Transactions on Mathematical Software (TOMS), 37:44, 2011.
* [38] S. Le Digabel and S. M. Wild. A taxonomy of constraints in simulation-based optimization. Technical Report G-2015-57, Les cahiers du GERAD, 2015.
* [39] M. McLeod, S. Roberts, and M. A. Osborne. Optimization, fast and slow: optimally switching between local and Bayesian optimization. In International Conference on Machine Learning, pages 3443–3452, 2018.
* [40] J. Mockus. Bayesian Approach to Global Optimization: Theory and Applications. Springer Science & Business Media, 2012.
* [41] J. Nocedal and S. J. Wright. Numerical Optimization. Springer-Verlag, Berlin, second edition, 2006.
* [42] Ch. Y. Oh, E. Gavves, and M. Welling. BOCK: Bayesian optimization with cylindrical kernels. In International Conference on Machine Learning, pages 3868–3877, 2018.
* [43] V. Picheny, P. Casadebaig, R. Trépos, R. Faivre, D. Da Silva, P. Vincourt, and E. Costes. Using numerical plant models and phenotypic correlation space to design achievable ideotypes. Plant Cell Environ., 40:1926–1939, 2017.
* [44] V. Picheny and D. Ginsbourger. Noisy kriging-based optimization methods: a unified implementation within the DiceOptim package. Comput. Stat. Data Anal., 71:1035–1053, 2014.
* [45] V. Picheny, T. Wagner, and D. Ginsbourger. A benchmark of kriging-based infill criteria for noisy optimization. Struct. Multidiscipl. Optim., 48:607–626, 2013.
* [46] V. Picheny, R. B. Gramacy, S. M. Wild, and S. Le Digabel. Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian. In Advances in Neural Information Processing Systems, 29, 2016.
* [47] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
* [48] R. G. Regis. Trust regions in kriging-based optimization with expected improvement. Eng. Optim., 48:1037–1059, 2016.
* [49] L. Rios and N. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. J. Global Optim., 56:1247–1293, 2013.
* [50] O. Roustant, D. Ginsbourger, and Y. Deville. DiceKriging, DiceOptim: Two R packages for the analysis of computer experiments by kriging-based metamodeling and optimization. J. Stat. Softw., 51, 2012.
* [51] M. Schonlau, W. J. Welch, and D. R. Jones. Global versus local search in constrained optimization of computer models. Lecture Notes-Monograph Series, pages 11–25, 1998.
* [52] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. De Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104:148–175, 2015.
* [53] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. De Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104:148–175, 2016.
* [54] E. Siivola, A. Vehtari, J. Vanhatalo, J. González, and M. R. Andersen. Correcting boundary over-exploration deficiencies in Bayesian optimization with virtual derivative sign observations. In 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE, 2018.
* [55] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
* [56] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning, 2010.
* [57] M. L. Stein.
## Appendix A Pseudo-code of the TREGO algorithm

Data: Create an initial DoE $\mathcal{D}_{t_{0}}=\{x_{1},x_{2},\ldots,x_{t_{0}}\}$ of $t_{0}$ points in a given set $\Omega\subset\mathbb{R}^{n}$ with a given method. Set $\mathcal{Y}_{t_{0}}=\{f(x_{1}),f(x_{2}),\ldots,f(x_{t_{0}})\}$. Choose $G\geq 0$, the number of global steps, and $L\geq 1$, the number of local steps. Initialize the step-size parameter $\sigma_{0}$ and $x^{*}_{0}\in\mathcal{D}_{t_{0}}$; choose the constants $\beta$, $\gamma$, $d_{\min}$ and $d_{\max}$ such that $0<\beta<1<\gamma$ and $0<d_{\min}<d_{\max}$. Select a forcing function $\rho(\cdot)$ and set $k=0$ and $t=t_{0}$.

while some stopping criterion is not satisfied do
  (Global phase over $\Omega$)
  for $i=1,\ldots,G$ do
    Step 1 (global acquisition function maximization): set $x^{\operatorname{global}}_{t}:=\underset{x\in\Omega}{\operatorname{argmax}}~\alpha(x;\mathcal{D}_{t})$.
    Step 2 (update the DoE): set $\mathcal{D}_{t+1}=\mathcal{D}_{t}\cup\{x^{\operatorname{global}}_{t}\}$ and $\mathcal{Y}_{t+1}=\mathcal{Y}_{t}\cup\{f(x^{\operatorname{global}}_{t})\}$; increment $t$.
  end for
  Let $x^{\operatorname{global}}_{k+1}$ be the best point (in terms of $f$) in the DoE $\mathcal{D}_{t}$.
  Step 3 (imposing sufficient decrease globally):
  if $f(x_{k+1}^{\operatorname{global}})\leq f(x^{*}_{k})-\rho(\sigma_{k})$ then
    the global phase is successful: set $x^{*}_{k+1}=x^{\operatorname{global}}_{k+1}$ and $\sigma_{k+1}=\gamma\sigma_{k}$;
  else
    (Local phase over the trust region $\Omega_{k}$)
    for $i=1,\ldots,L$ do
      Step 4 (local acquisition function maximization): set $x^{\operatorname{local}}_{t}:=\underset{x\in\Omega_{k}}{\operatorname{argmax}}~\alpha(x;\mathcal{D}_{t})$, where $\Omega_{k}$ is the trust region $\Omega_{k}=\{x\in\Omega\;|\;d_{\min}\sigma_{k}\leq\|x-x^{*}_{k}\|\leq d_{\max}\sigma_{k}\}$.
      Step 5 (update the DoE): set $\mathcal{D}_{t+1}=\mathcal{D}_{t}\cup\{x^{\operatorname{local}}_{t}\}$ and $\mathcal{Y}_{t+1}=\mathcal{Y}_{t}\cup\{f(x^{\operatorname{local}}_{t})\}$; increment $t$.
    end for
    Let $x^{\operatorname{local}}_{k+1}$ be the best point (in terms of $f$) in the DoE $\mathcal{D}_{t}$.
    Step 6 (imposing sufficient decrease locally):
    if $f(x^{\operatorname{local}}_{k+1})\leq f(x^{*}_{k})-\rho(\sigma_{k})$ then
      the local phase and iteration are successful: set $x^{*}_{k+1}=x^{\operatorname{local}}_{k+1}$ and $\sigma_{k+1}=\gamma\sigma_{k}$;
    else
      the local phase and iteration are not successful: set $x^{*}_{k+1}=x^{*}_{k}$ and $\sigma_{k+1}=\beta\sigma_{k}$.
    end if
  end if
  Increment $k$.
end while

Algorithm 1: A trust-region framework for EGO (TREGO).
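For readers who prefer executable code, the sufficient-decrease logic of Algorithm 1 can be sketched in Python. This is a minimal illustration rather than the reference implementation: `alpha` stands in for any acquisition function $\alpha(x;\mathcal{D}_{t})$ (e.g., expected improvement under a Gaussian-process model fit to the DoE), its maximization is replaced by simple random search over candidate points, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def trego(f, alpha, lo, hi, X, y, n_outer=20, G=1, L=1, sigma=0.5,
          beta=0.9, gamma=1.5, d_min=1e-2, d_max=1.0,
          rho=lambda s: 1e-4 * s**2, n_cand=2000):
    """Sketch of Algorithm 1 (TREGO). X, y form the initial DoE; alpha(x, X, y)
    is a user-supplied acquisition function, maximized here by random search."""
    x_star, f_star = X[np.argmin(y)].copy(), y.min()
    for _ in range(n_outer):                 # outer iterations over k
        for _ in range(G):                   # global phase over Omega = [lo, hi]
            cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
            x_new = cand[np.argmax([alpha(c, X, y) for c in cand])]
            X, y = np.vstack([X, x_new]), np.append(y, f(x_new))
        if y.min() <= f_star - rho(sigma):   # Step 3: sufficient decrease globally
            x_star, f_star = X[np.argmin(y)].copy(), y.min()
            sigma *= gamma
            continue                         # successful iteration: skip local phase
        for _ in range(L):                   # local phase over the annulus Omega_k
            cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
            r = np.linalg.norm(cand - x_star, axis=1)
            cand = cand[(r >= d_min * sigma) & (r <= d_max * sigma)]
            if len(cand) == 0:
                break
            x_new = cand[np.argmax([alpha(c, X, y) for c in cand])]
            X, y = np.vstack([X, x_new]), np.append(y, f(x_new))
        if y.min() <= f_star - rho(sigma):   # Step 6: sufficient decrease locally
            x_star, f_star = X[np.argmin(y)].copy(), y.min()
            sigma *= gamma                   # success: expand the step size
        else:
            sigma *= beta                    # failure: contract the step size
    return x_star, f_star, X, y
```

The annular trust region $\Omega_{k}$ and the $\gamma$/$\beta$ step-size updates follow the pseudo-code above; in practice the random-search maximization would be replaced by a dedicated acquisition optimizer.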
## Appendix B Functions of the BBOB noiseless testbed

ID | Name | Comments
---|---|---
Separable functions | |
f1 | Sphere | unimodal, allows checking numerical accuracy at convergence
f2 | Ellipsoidal | unimodal, conditioning $\approx 10^{6}$
f3 | Rastrigin | $10^{n}$ local minima, spherical global structure
f4 | Büche-Rastrigin | $10^{n}$ local minima, asymmetric global structure
f5 | Linear Slope | linear, solution on the domain boundary
Functions with low or moderate conditioning | |
f6 | Attractive Sector | unimodal, highly asymmetric
f7 | Step Ellipsoidal | unimodal, conditioning $\approx 100$, made of many plateaus
f8 | Original Rosenbrock | good points form a curved $n-1$ dimensional valley
f9 | Rotated Rosenbrock | rotated f8
Unimodal functions with high conditioning $\approx 10^{6}$ | |
f10 | Ellipsoidal | rotated f2
f11 | Discus | one direction is 1000 times more sensitive than the others
f12 | Bent Cigar | non-quadratic optimal valley
f13 | Sharp Ridge | resembles f12 with a non-differentiable bottom of valley
f14 | Different Powers | different sensitivities w.r.t. the $x_{i}$'s near the optimum
Multimodal functions with adequate global structure | |
f15 | Rastrigin | rotated and asymmetric f3
f16 | Weierstrass | highly rugged and moderately repetitive landscape, non-unique optimum
f17 | Schaffers F7 | highly multimodal with spatial variation of frequency and amplitude, smoother and more repetitive than f16
f18 | Moderately ill-conditioned Schaffers F7 | f17 with conditioning $\approx 1000$
f19 | Composite Griewank-Rosenbrock | highly multimodal version of Rosenbrock
Multimodal functions with weak global structure | |
f20 | Schwefel | $2^{n}$ most prominent optima close to the corners of a shrunken and rotated rectangle
f21 | Gallagher's Gaussian 101-me peaks | 101 optima with random positions and heights, conditioning $\approx 30$
f22 | Gallagher's Gaussian 21-hi peaks | 21 optima with random positions and heights, conditioning $\approx 1000$
f23 | Katsuura | highly rugged and repetitive function with more than $10^{n}$ global optima
f24 | Lunacek bi-Rastrigin | highly multimodal function with 2 funnels, one leading to a local optimum and covering about 70% of the search space

Table 2: Functions of the BBOB noiseless testbed, divided in groups.

## Appendix C Complementary experimental results

Figure 5: Effect of changing parameters of the TREGO algorithm, averaged by function groups for $n=5$ (panel rows: separable; low conditioning; high conditioning; multimodal with strong structure; multimodal with weak structure; panel columns: local/global ratio and other parameters). Run length is $30\times n$.

Figure 6: Comparison of TREGO with state-of-the-art optimization algorithms on separable (left) and unimodal with high conditioning functions (right), for $n=5$ (top) and $n=10$ (bottom). Run length is $50\times n$.
# Metachronal waves in concentrations of swimming Turbatrix aceti nematodes and an oscillator chain model for their coordinated motions

A. C. Quillen <EMAIL_ADDRESS> Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627, USA
A. Peshkov <EMAIL_ADDRESS> Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627, USA
Esteban Wright <EMAIL_ADDRESS> Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627, USA
Sonia McGaffigan <EMAIL_ADDRESS> Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627, USA

###### Abstract

At high concentration, free-swimming nematodes known as vinegar eels (Turbatrix aceti) collectively exhibit metachronal waves near a boundary. We find that the frequency of the collective traveling wave is lower than that of the freely swimming organisms. We explore models based on a chain of oscillators with nearest-neighbor interactions that inhibit oscillator phase velocity. The phase of each oscillator represents the phase of the back and forth motion of the eel's head about its mean position. A strongly interacting directed chain model mimicking steric repulsion between organisms robustly gives traveling wave states and can approximately match the wavelength and oscillation frequency of the observed traveling wave. We predict body shapes assuming that waves propagate down the eel body at a constant speed. The phase oscillator model that impedes eel head overlaps also reduces close interactions throughout the eel bodies.

## I Introduction

Concentrations of biological organisms can be considered active materials, as they are composed of self-driven units and energy is continuously expended through locomotion [1]. Collective behavior of groups of organisms includes flocking or swimming in schools [2, 3] and synchronization [4, 5]. Synchronization processes in nature include the glowing rhythms of colonies of fireflies [4], crowd synchrony of pedestrians walking on a bridge [6] and flagella beating in phase with one another [7]. The head or tail of an individual snake, flagellum, cilium or nematode moves back and forth with respect to a mean position. This periodic motion can be described with a phase of oscillation (e.g., [8]). In concentrations of mobile oscillators, both synchronization and swarming can occur together, and such systems can display a rich diversity of collective states (e.g., the swarmalators studied by O'Keeffe et al. [9]), including collectively organized and coordinated motions known as traveling or metachronal waves.

A metachronal rhythm or metachronal wave refers to a locally synchronized motion of individuals with a delay between them, in contrast to globally synchronized patterns of oscillation. Metachronal waves require coordinated motions between neighboring structures or organisms [10, 11]. Swimming spermatozoa synchronize the beating of their flagella, and flagellates can synchronize the motions of their flagella when they are in close proximity [7, 12, 13, 14, 15, 16]. When a constant phase difference or time delay is maintained between neighboring oscillating structures, the collective motion has the appearance of a traveling wave. One approach to modeling metachronal wave formation in cilia or flagella is to model them as an array of flexible filaments that oscillate or beat when alone. Self-organized metachronal waves then arise due to hydrodynamic [17, 18, 11, 13, 19, 8, 20, 14] or steric [21] interactions between neighboring filaments.
Even though a filament can bend and flex, its behavior can approximately be described with an angle or phase which specifies the position of its moving tip (e.g., [11, 14, 13]). Although each filament moves in three dimensions, simplified models consisting of discrete linear chains of interacting oscillators can describe the collective behavior [11, 14, 13]. Phase oscillator chain models, known as local Kuramoto models, exhibit both long-lived synchronous and traveling wave states [22, 23, 24, 25, 26, 27]. However, in many of these models, a system with randomly chosen initial phases is more likely to evolve into a synchronous state than a traveling wave state [26, 27]. Simple criteria are not available for predicting whether an interacting phase oscillator model is likely to give traveling wave states if initialized with random phases. However, physically motivated interacting phase oscillator models for metachronal waves in cilia and flagella have succeeded in robustly giving traveling wave states [14, 13].

In this study we report on collective behavior in a system of undulating free-swimming organisms, vinegar eels, species Turbatrix aceti (T. aceti), which are a type of free-living nematode. They are found living in beer mats, slime from tree wounds and cultures of edible vinegars. Because they are hardy, they are used in aquaculture by fish keepers and aquarists as food for newly hatched fish or crustaceans. Vinegar eels are tolerant of variation in acidity and they subsist on yeast. The metachronal waves in T. aceti, reported by Peshkov [28] and Peshkov et al. [29], are similar to those seen in cilia. However, unlike cilia, which are affixed to a cell membrane, the vinegar eels are freely swimming organisms. At about 1 mm in length, the vinegar eels are visible to the naked eye and are much larger than cilia (typically a few $\mu$m in length) or flagella on colonies of microorganisms that display metachronal waves (e.g., with flagella length $\sim 10\mu$m; [14]). Concentrated suspensions of vinegar eels are a novel biological system in which we can study ensemble coordination and synchronization. Henceforth we refer to the vinegar eels colloquially as ‘eels’ even though they are nematodes.

Ensembles of active particles can exhibit a phase transition from gaseous to collective behavior at higher number density due to particle interactions (e.g., for unipolar self-propelled particles [30]). Metachronal waves are only present in high concentrations of vinegar eels [29], so interactions between them are necessary for the coordinated wave motion. Collective coordinated motion is likely to be mediated by the interactions between the organisms. In our study we compare the motion of the vinegar eels participating in metachronal waves to those that are freely swimming to probe the nature of these interactions.

While the well-studied nematode Caenorhabditis elegans (C. elegans) naturally grows in soil, C. elegans is also an undulatory swimmer in water (e.g., [31]). C. elegans nematodes congregate near surfaces and boundaries (they exhibit bordertaxis) [31]. In close proximity, a pair of swimming C. elegans nematodes will synchronize their gait [32]. Collective behavior of C. elegans includes the formation of a network on a surface [33] and synchronization of clusters of tens of nematodes [32]. We have observed similarities between the reported behavior of C. elegans and our vinegar eel nematodes.
These similarities include undulatory swimming, bordertaxis, and synchronization in the gait of clusters of organisms. We have not found descriptions of metachronal waves in concentrations of C. elegans or other nematodes in the literature, nor have we seen metachronal waves in concentrations of C. elegans in our lab [29].

We briefly describe our experimental methods in section II. Measurements of individual vinegar eels at low concentration are discussed in section III. We describe the behavior of high concentrations of vinegar eels in section IV. Models of metachronal waves in cilia and flagella have described these systems as a chain of interacting phase oscillators, where each phase describes the motion of a cilium or flagellum tip [13, 14]. In section V we adopt a similar approach and model our ensemble of vinegar eels with a chain of interacting oscillators, but each phase describes the motion of an eel's head. A summary and discussion follows in section VI.

## II Experimental Methods

We obtained our T. aceti nematode and yeast culture from an aquarium supply store, and we grow it at room temperature in a 1:1 mixture of water and food grade apple cider vinegar. A few slices of apple were added to the mixture as a food source for the yeast. After a few ml of the purchased culture is added to the vinegar and apple mixture, it takes a few weeks before large numbers of vinegar eels are visible by eye in the mixture. The vinegar eels congregate at the surface and crawl up the container walls.

To study the motion of the vinegar eels, we used a Krontech Chronos 1.4 high speed video camera at 1057 frames per second (fps), giving image frames with 1024 $\times$ 1280 pixels. To connect the video camera to a conventional stereo compound microscope under bright field illumination, we used a 0.5X reduction lens adapter that matches the C-mount of our camera. The other end of the adapter fits in the 23.2 mm diameter eyepiece holder of our microscope. Videos were taken using the X4 or X10 microscope objectives. At each magnification, we made short videos of a calibration slide with a small ruler on it. Frames from these videos were used to measure the pixel scale, giving 3.17 $\mu$m/pixel and 1.19 $\mu$m/pixel at X4 and X10 magnification, respectively. The field of view is 3.25 mm $\times$ 4.06 mm at X4 magnification and 1.22 mm $\times$ 1.53 mm at X10 magnification.

We present two videos, both taken on Feb 26, 2020. The first video [34], denoted Video A, filmed at X10 magnification, is of the vinegar eels at low concentration. The second video [35], denoted Video B, is at higher concentration and was filmed at X4 magnification. To achieve high vinegar eel concentration, we placed about 10 ml of the vinegar eel culture in a test tube and then used a centrifuge (a few minutes at a few thousand rpm or about 1000 $g$) to concentrate the eels at the bottom. A pipette was then used to extract fluid from the bottom of the tube. Each video views a drop of about $100\mu$l of dilute vinegar containing vinegar eels that was deposited on a dry glass slide. The drop was not covered with a coverslip, so its surface is curved due to surface tension. Because the slide wets, the drop is not spherical. The outer edge of the drop, where it touches the slide, remains fixed due to surface tension. In both videos, the drop was about a cm in diameter. In Video B, we touched the edge of the drop with a metal pin a few times to pull and extend the drop radially outward. This increased the drop surface area on the slide and decreased its depth.
This system is nearly two dimensional, as the vinegar eels rarely swim above or below one another. Additional experiments of drops containing T. aceti are discussed by Peshkov et al. [29].

Figure 1: Characteristics of an adult, mm-long, freely swimming vinegar eel. (a) The gray-scale image shows a sum of 5 frames from high speed Video A showing the same freely swimming eel. The 5 frames are equally spaced in time with interval $T_{u}=170.3$ ms, which is approximately one oscillation or undulation period. The oscillation frequency $f_{u}=1/T_{u}$ is written on the lower left in Hz. The swim speed $v_{\rm swim}$, eel length $L$ and diameter $w$ are written on the top of the frame. The images have been rotated so that the organism is swimming in the horizontal direction and to the left. (b) Body positions are shown with colored dots at 9 equally spaced times during a single oscillation period. The images used to measure the body position have been shifted to take into account the mean swim speed. The body positions are plotted on top of the first video frame in the sequence. (c) Using the first time shown in (b), the $y$ position of the center of the eel body as a function of $x$ is plotted with red dots enclosed in black circles. The red line shows the sine function $y=A_{u}\cos(k_{u}x-\phi_{0})$ fit to these points. The wavelength and amplitude of this function are shown on the lower left. The colored lines show $y=A_{u}\cos(k_{u}x-\phi_{0}-2\pi j/9)$ for integers $j\in\{1,\ldots,8\}$, corresponding to the phases of oscillation shown in (b). The eel body is approximately sinusoidal in shape over much of its body and during most of its gait.

Table 1: Properties of a freely swimming vinegar eel

Quantity | Symbol | Units | Value
---|---|---|---
Length | $L$ | mm | $0.96\pm 0.03$
Diameter | $w$ | mm | $0.021\pm 0.001$
Length/diameter | $L/w$ | - | 45
Wavelength | $\lambda_{u}$ | mm | $0.50\pm 0.02$
Amplitude | $A_{u}$ | mm | $0.045\pm 0.005$
Swim speed | $v_{\rm swim}$ | mm/s | $0.38\pm 0.03$
Amplitude/physical length | $A_{u}/h_{x}$ | - | 0.055
Amplitude times wave vector | $A_{u}k_{u}$ | - | 0.56
Oscillation period | $T_{u}$ | ms | $170\pm 6$
Oscillation frequency | $f_{u}=1/T_{u}$ | Hz | $5.9\pm 0.2$
Undulation wave speed along body | $v_{u}=\lambda_{u}/T_{u}$ | mm/s | 3.0

The length $h_{x}$ is the linear distance between head and tail measured along the direction of motion. The length $L$ is that of the eel, integrated along its body, or as measured if it were extended to its maximum length. Because the eel is not straight while it is swimming, $h_{x}<L$. The wave speed along the body is that of undulation. Uncertainties describe the range of values that would be consistent with the motion during a 1 s long segment of video. The vinegar eel is shown in Figure 1.

## III Observations of lone eels at low concentration

In Video A, the vinegar eels are at low concentration and we can find intervals when an individual eel is not strongly influenced by nearby eels or borders. We focus on an adult $\sim 1$ mm long vinegar eel, shown in Figure 1, because it can be directly compared to prior work studying 1 mm long C. elegans kinematics (e.g., [36, 37, 31]) and because eels of this length actively participate in the metachronal wave. A median image was subtracted from all frames in Video A to remove smooth variations in lighting. After subtracting the median image, we rotated the video frames so that the lone vinegar eel swims to the left.
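This preprocessing is straightforward to reproduce; a minimal sketch, assuming the video has been loaded into a numpy array of grayscale frames and that the rotation angle has been chosen by inspection:

```python
import numpy as np
from scipy import ndimage

def preprocess(frames, angle_deg):
    """frames: array of shape (n_frames, ny, nx), grayscale.

    Subtract the per-pixel median over time to remove smooth variations in
    lighting, then rotate every frame by the same angle so that the eel
    swims horizontally and to the left."""
    cleaned = frames - np.median(frames, axis=0)
    return np.stack([ndimage.rotate(fr, angle_deg, reshape=False, order=1)
                     for fr in cleaned])
```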
To find the eel's oscillation or gait period, we summed 5 equally spaced (in time) video frames. We adjusted the time interval between the frames until the eel body shape was similar in each of the 5 frames, indicating that they are at about the same phase of undulation. This time interval gives us an estimate for the eel undulation period $T_{u}$. The sum of 5 images is shown in Figure 1a with the eel head on the left. We estimated the eel's mean swim speed $v_{\rm swim}$ by shifting the images so that the eel bodies in the 5 video frames appear to be at the same position. The shift required to align the eels after one oscillation period, divided by the oscillation period $T_{u}$, gives the mean swim speed $v_{\rm swim}$. We used the mean swim speed to shift the video images so that positions are viewed in the reference frame moving with this average speed.

At 9 different phases of oscillation during a single oscillation period, we measured eel body centerlines by fitting Gaussian functions to equally spaced vertical slices in the image. The mean of the Gaussian gives the eel's centerline $y$ value as a function of horizontal distance $x$. The body centerlines at these 9 different phases of oscillation are shown with different colored dots in Figure 1b. The body centerlines are plotted on top of the first video frame in the sequence, which is shown with the underlying grayscale image. In this figure, the origin is near the head's mean position. The positive $x$ axis points opposite to the swim direction and the $y$ axis is perpendicular to it. By integrating distances between the points along the eel's centerline, we computed the length $L$ of the eel. We measured the eel's body diameter $w$ by measuring its apparent width across its middle. In Figure 1b the horizontal extent of the eel $h_{x}$ along the $x$ axis is smaller than the eel length because the eel body is not straight.

To estimate a beat amplitude $A_{u}$ and a wave vector $k_{u}$, we fit a sine wave to the body centerline at one phase of oscillation,

$y(x)=A_{u}\cos(k_{u}x-\phi_{0}).$ (1)

Figure 1c shows the fit sine function with a red line. The sine describes the $y$ coordinate of the eel's centerline as a function of $x$, and $\phi_{0}$ is a phase. The wavelength of the body shape is $\lambda_{u}=2\pi/k_{u}$. The amplitude $A_{u}$ describes the size of deviations from the mean of the centerline. The speed that waves travel down the body, $v_{u}$, is estimated from $v_{u}=\lambda_{u}/T_{u}$. Measurements of the freely swimming vinegar eel are summarized in Table 1. Uncertainties listed in this table give the range of values that are consistent with the eel's motion during a 1 s long segment of video.

The centerline positions in Figure 1b show that larger amplitude motions, or larger deviations from a pure sine shape, occur at the head and tail of the vinegar eel. Over much of the body the eel's shape is well described with a sine function, and the eel's body is nearly sinusoidal in shape during most of its oscillation. The spacing and offsets between centerline curves at different phases of oscillation in Figure 1b and c imply that the advance of the sine shape occurs at a nearly constant wave speed.

Our vinegar eel culture contains nematodes of different sizes, ranging from about 0.3 to 2 mm in length (see Figure 2a). We measured the frequency of oscillation for different length eels and found that this frequency is not strongly dependent on eel length.
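Fitting equation (1) to a measured centerline is a standard nonlinear least-squares problem; a minimal sketch with scipy, using synthetic data (values motivated by Table 1) in place of a measured centerline:

```python
import numpy as np
from scipy.optimize import curve_fit

def centerline(x, A_u, k_u, phi0):
    """Equation (1): y = A_u cos(k_u x - phi0)."""
    return A_u * np.cos(k_u * x - phi0)

# Synthetic stand-in for a measured centerline, in mm.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 0.9, 60)
y = 0.045 * np.cos(2.0 * np.pi / 0.50 * x - 0.3) + 0.002 * rng.normal(size=x.size)

# Initial guesses: amplitude from the y range, wavelength of about half a body length.
p0 = [0.5 * (y.max() - y.min()), 2.0 * np.pi / 0.5, 0.0]
(A_u, k_u, phi0), _ = curve_fit(centerline, x, y, p0=p0)
print(f"A_u = {A_u:.3f} mm, lambda_u = {2.0 * np.pi / k_u:.3f} mm")
```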
We have noted that the ratio of length to wavelength, $L/\lambda_{u}$, is larger for the longer eels than for the shorter ones. In the longer eels about 1.5 wavelengths are present, whereas only 1 wavelength is present on the shorter ones.

The key findings of this section are the measurement of the frequency of undulation for freely swimming vinegar eels ($f_{u}\sim 6$ Hz) and that the shape and motion of much of the vinegar eel's body can be described with a sine function.

### III.1 Comparison between C. elegans and T. aceti

Since the C. elegans nematode is well studied, we compare its kinematics to that of the vinegar eel nematode, T. aceti. The frequency of undulation we measured in the vinegar eels, $\sim 6$ Hz, is faster than the $\sim 2$ Hz measured in similar length (1 mm long) C. elegans [36, 31]. The length to diameter ratio for our 1 mm eel is about $L/w\sim$ 45, whereas C. elegans is not as slender, with $L/w\sim 12$ [36]. More than 1 wavelength fits within the eel body in T. aceti, particularly in the longer eels. In contrast, about a single wavelength fits on the C. elegans body while it is swimming [36]. The speed that waves travel down the body, $v_{u}\sim 3$ mm/s for the eel, is somewhat higher than that of C. elegans (2.1 mm/s, [36]). The swim speeds are similar: 0.4 mm/s for the 1 mm long vinegar eel and 0.36 mm/s in C. elegans. In the vinegar eels, the amplitude of motion is larger at the head and tail than in the middle, and is largest at the tail. This behavior is similar to swimming C. elegans [31] (see their Figure 1a), though Sznitman et al. [36] measured the largest body curvature variations near the head.

For vinegar eels at low concentration, we did not find a significant difference between the undulation frequency of eels that are swimming near or along the edge of the drop and of those that are swimming in the center of a drop. In this respect our vinegar eels are similar to C. elegans. For C. elegans exhibiting bordertaxis and swimming near a surface, the frequency of oscillation is similar to that of the freely swimming organism [31].

Table 2: Metachronal wave measurements

Quantity | Symbol | Value
---|---|---
Metachronal wave velocity | $v_{\rm MW}$ | 3.7 $\pm$ 0.2 mm/s
Metachronal wave frequency | $f_{\rm MW}$ | 4.0 $\pm$ 0.2 Hz
Wavelength of metachronal wave | $\lambda_{\rm MW}$ | 0.89 $\pm$ 0.03 mm
Number of eels per wavelength | $N_{\rm MW}$ | 13-16
Ratio of frequencies | $f_{\rm MW}/f_{u}$ | $\sim 0.68$
Amplitude of motion | $A_{\rm MW}$ | $\sim 0.07$ mm

Figure 2: (a) A raw video frame from Video B. This video is of a dilute vinegar drop containing a high concentration of vinegar eels, seen through a conventional microscope at X4 magnification. The edge of the drop on the slide is marked with yellow arrows. The concentration of eels is higher near the edge of the drop. There are eels of different lengths and ages in the solution; however, the smaller eels are less likely to participate in the metachronal wave. (b) A photograph taken from above of a drop on a slide containing a high concentration of vinegar eels. Detritus in the culture has been pushed to the center of the drop. The feathery white ridges on the edge of the drop are the metachronal wave. (c) An illustration of the drop of concentrated vinegar eel solution on a slide. The white feathery features represent the traveling wave in the vinegar eels near the edge of the drop.

Figure 3: Each panel shows the same subregion of a series of frames from Video B.
The edge of the drop is near the bottom of each panel. The time of each frame from the beginning of the sequence is shown in yellow on the top right of each panel. The $x$ and $y$ axes are in mm.

Figure 4: Correlation function computed using equation 2 from image intensity as a function of spatial shift $\Delta x$ and time delay $\Delta t$. The slope of the ridges gives the metachronal wave speed. The estimated metachronal wave speed of $v_{\rm MW}=3.7$ mm/s is shown with the red segment.

Figure 5: Head positions for 4 eels were tracked over 2 seconds of video and their trajectories are shown in red on the image. The black dots show the location of the eel heads at the same time as the video frame. The eels do not advance forward very quickly, or at all, while they are engaged in the metachronal wave. The amplitude of back and forth motion is about $A_{\rm MW}\sim 0.07$ mm, and exceeds that of the freely swimming eel.

## IV Observations of metachronal waves at high concentrations

At high concentration, and a few minutes after the drop is placed on the slide, the eels collect near the edge of the drop, where the air/fluid boundary touches the slide, and just within the outer rim of the drop. Collective motion in the form of a traveling wave becomes progressively stronger and can be seen by eye without magnification, as the vinegar eels are about 1 mm long (see Figure 2). In Figures 2a and 3 we show frames taken from Video B. The frames in Figure 3 have been rotated to orient the drop edge horizontally and at the bottom of each panel. To aid in comparing the frames at different times, we geometrically distorted each frame with a near-identity quadratic coordinate transformation so as to make the boundary horizontal. The transformation used is $(x,y)\to\left(x,y-\frac{1}{2R_{c}}(x-x_{c})^{2}\right)$, with $x_{c}$ the $x$ coordinate of the center of the image and $R_{c}$ a radius of curvature. Due to surface tension the actual drop edge is curved, with a radius of curvature of about $R_{c}\approx 7$ mm.

Using frames from the rotated and distorted video, we created a time series of one-dimensional arrays by integrating intensity along the vertical axis of the image. The vertical distance integrated is 1 mm and covers the frames in the series shown in Figure 3. This integration gives an intensity array $\rho(x,t)$ as a function of time $t$, with the $x$ axis parallel to the drop edge. We use $\rho(x,t)$ to estimate the metachronal travel speed. We compute a correlation function, shown in Figure 4,

$C(\Delta x,\Delta t)=\frac{\int dx\ \rho(x,t)\rho(x+\Delta x,t+\Delta t)}{\int dx\ \rho(x,t)^{2}},$ (2)

where $\Delta x$ is a horizontal shift and $\Delta t$ is a time delay. The ridges in Figure 4 are regions of higher intensity that propagate as a wave, and their slope, shown with a red segment, is the metachronal wave speed, $v_{\rm MW}$. We estimate the metachronal wave speed by shearing the correlation function image until the ridges are vertical. The uncertainty in $v_{\rm MW}$ is estimated from the range of shear values that give vertical ridges upon visual inspection of the sheared correlation array. We estimate the metachronal wavelength $\lambda_{\rm MW}$ with a Fourier transform of the orientation angles array shown in Figure 6 (which is discussed in more detail below). The size of the error is based on the estimated covariance of a Gaussian fit to the Fourier transform.
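The discrete version of equation (2) is convenient to evaluate with array shifts; a minimal sketch, assuming `rho` is the intensity array with rows indexed by time and columns by position along the drop edge (and, for robustness, additionally averaging over the reference time $t$):

```python
import numpy as np

def correlation(rho, max_shift, max_delay):
    """Discretized equation (2): C(dt, dx) for shifts in pixels and frames.

    rho: array of shape (n_t, n_x). The base block rho(x, t) is correlated
    with the block shifted by dx in space and dt in time, normalized by the
    summed square of the base block."""
    n_t, n_x = rho.shape
    base = rho[: n_t - max_delay, : n_x - max_shift]
    norm = np.sum(base ** 2)
    C = np.zeros((max_delay, max_shift))
    for dt in range(max_delay):
        for dx in range(max_shift):
            shifted = rho[dt : dt + n_t - max_delay, dx : dx + n_x - max_shift]
            C[dt, dx] = np.sum(base * shifted) / norm
    return C
```

The ridge slope of $C$ in the $(\Delta x,\Delta t)$ plane, converted from pixels and frames to mm and seconds, then gives $v_{\rm MW}$.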
We checked that this wavelength was consistent with that measured from the distance between peaks in the correlation function shown in Figure 4. The wavelength and wave speed also give a metachronal wave oscillation frequency $f_{\rm MW}=v_{\rm MW}/\lambda_{\rm MW}$. The measurements of the metachronal wave, $v_{\rm MW}$, $\lambda_{\rm MW}$, and $f_{\rm MW}$, are listed in Table 2.

Head positions for 4 eels were tracked by clicking on their head positions in 200 frames spanning 2 seconds of Video B, and their trajectories are shown in red in Figure 5. The eels do not swim forward very quickly. The four eels were chosen because their heads were easiest to identify during the 2 s video clip. The amplitude of back and forth motion for the eel heads is about $A_{\rm MW}\sim 0.07$ mm. This amplitude is an estimate for the amplitude of motion for eels engaged in the metachronal wave, and it exceeds the amplitude of motion $A_{u}\sim 0.045$ mm in the 1 mm long freely swimming eel.

By counting eel widths, we estimate that $N_{\rm MW}=13$ to 16 eels per metachronal wavelength $\lambda_{\rm MW}$ are involved in the traveling wave. However, only about 8 eels per mm have heads visible near the edge of the drop. Some of the eel heads are more distant from the edge of the drop and are confined between other eel bodies. For deeper water/vinegar drops, the number of eels per unit length in the metachronal wave is sensitive to wetting angle [29].

The metachronal wave frequency $f_{\rm MW}=4.0\pm 0.2$ Hz is significantly lower than the undulation frequency of individual freely swimming eels, $f_{u}\approx 6$ Hz. Studies of metachronal wave formation in cilia and flagellates have found that as the filaments or flagella enter a traveling wave state, their frequency of oscillation increases, because hydrodynamic drag on the filaments is reduced when they are collectively beating in a wave pattern [13, 14]. However, here we find that the metachronal wave frequency is lower than that of the freely swimming eels. Since eels swimming along the edge of the drop do not exhibit a lower undulation frequency, the reduced frequency must be due to interactions between organisms, and we infer that interactions between neighboring eels reduce, rather than increase, their oscillation frequency.

### IV.1 Body orientations

Figure 3 suggests that when engaged in the metachronal wave, portions of the eels' bodies spend more time at some orientation angles than others. Figure 5 shows that during some phases of the wave, the eel heads move away from their neighbors. There are larger gaps between eels at some phases of the wave. These observations suggest there are deviations from sinusoidal motion. In this section we measure body orientations from the video frames to quantitatively examine this possibility.

To measure the local orientation of the eel bodies, we compute local histograms of oriented gradients (HOG). These histograms are commonly used in object recognition software [38]. Figure 6 was made from one of the panels shown in Figure 3. In each 12$\times$12 pixel square cell in the image, we computed histograms of oriented gradients with the hog routine that is part of the image processing Python package scikit-image. We use unsigned gradients, so orientation angles lie in $[-\pi/2,\pi/2]$. At each cell an average direction was computed using the histograms, and these are plotted as blue segments on top of the original video frame in Figure 6a.
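The per-cell average directions can be extracted with the scikit-image hog routine; a minimal sketch. Because unsigned gradient orientations are axial quantities (defined modulo $\pi$), the circular mean in each cell is taken after doubling the angles; the bin-center convention assumed here is an approximation, and since a body is oriented perpendicular to its intensity gradients, $\pi/2$ is added at the end.

```python
import numpy as np
from skimage.feature import hog

def cell_orientations(image, cell=12, nbins=9):
    """Mean body orientation in each cell x cell region of a grayscale image."""
    h = hog(image, orientations=nbins, pixels_per_cell=(cell, cell),
            cells_per_block=(1, 1), feature_vector=False)
    h = h[:, :, 0, 0, :]                          # histograms, shape (n_cy, n_cx, nbins)
    centers = (np.arange(nbins) + 0.5) * np.pi / nbins  # assumed bin centers in [0, pi)
    s = (h * np.sin(2.0 * centers)).sum(axis=-1)
    c = (h * np.cos(2.0 * centers)).sum(axis=-1)
    grad = 0.5 * np.arctan2(s, c)                 # mean gradient direction (mod pi)
    body = grad + 0.5 * np.pi                     # body axis perpendicular to gradient
    return (body + 0.5 * np.pi) % np.pi - 0.5 * np.pi  # wrap to [-pi/2, pi/2)
```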
In Figure 6b, the same blue segments are plotted on top of a color image, with color showing the angles themselves. The color bar on the right relates orientation angle to color, with white corresponding to a horizontal orientation. In non-empty regions, we estimate an uncertainty of less than $\pm 20^{\circ}$ in the orientation angles, based on inspection of Figure 6a.

To examine statistical variations in the body orientations, we computed distributions from the orientation angles (like those shown in Figure 6) but using 200 video frames from Video B spanning a duration of 2 s. A large number of video frames were used to average over the different phases of the wave. Orientation angle distributions are shown in Figure 7b. Three rectangular regions are drawn in Figure 7a on one of the image frames, and each region is plotted with the same color and line thickness as used in Figure 7b. In Figure 7b we show distributions of orientation angles measured in these three rectangular regions. The three region centers have different distances from the edge of the drop: 0.47, 0.29 and 0.13 mm.

The higher-opacity lines in Figure 7b are distributions computed with weights, so that regions of high eel intensity contribute more to the histogram. The lighter and lower-opacity lines are distributions computed without weighting. The difference between the higher- and lower-opacity lines shows that the orientation angle distributions are not sensitive to local variations in image intensity. The red rectangular region (plotted with wider lines) is more distant from the edge of the drop than the blue region. The red histogram is wider than the blue one, indicating that there is a wider range of body orientation angles more distant from the drop edge.

The distributions shown in Figure 7b have a trough and are asymmetric or lopsided, with one peak higher than the other. This asymmetry is not expected, as a sine wave has a distribution of orientations (computed from its slope) that is symmetric about a mean value. Models for the orientation angle distribution are discussed further in section V.4.

In summary, we find that for vinegar eels engaged in a metachronal wave, the distribution of body orientation angles has two peaks of different heights and depends on distance to the drop edge. The asymmetry in the orientation angle distribution and inspection of eel heads near the drop edge imply that eel body shapes and motions are not perfectly sinusoidal. This contrasts with our study of the freely swimming eels in section III, where we found that the shape and motion of freely swimming eels is nearly sinusoidal.

Figure 6: Body orientation angles. In panels (a) and (b), the blue segments are oriented along the means of locally computed histograms of oriented gradients. The histograms of oriented gradients were computed from one of the images in Figure 3 from Video B. The same image is shown in gray-scale in panel (a). The color image in panel (b) displays the orientation angles, with the color bar on the right in degrees.

Figure 7: Distributions of orientation angles in the metachronal wave. (a) Three rectangular regions are shown on top of one of the image frames. The color and line width showing each region are the same as in panel (b). (b) Normalized distributions of orientation angles in the three rectangular image regions. The histograms were computed using orientations like those shown in Figure 6, but using 200 video frames from Video B spanning a duration of 2 s.
The higher-opacity lines are histograms computed with intensity weights, with regions of higher eel density contributing more to the histogram. The lighter lines are histograms computed without weighting. The distribution is narrower near the edge of the drop. The gray bars have orientation equivalent to their $x$ coordinates on the plot and are plotted at multiples of $30^{\circ}$. The difference in the two peak heights in each distribution suggests that there are deviations from sinusoidal shapes and motions.

## V Oscillator models for traveling waves

Experimental observations have shown that motility of swimming nematodes, such as C. elegans, is due to the propagation of bending waves along the nematode's body length [39] (for a summary of nematode locomotion neurobiology, see [40]). The bending waves consist of alternating phases of coordinated dorsal and ventral muscle contractions and extensions [41]. During locomotion, motor neurons excite muscles on either (ventral/dorsal) side of the body while inhibiting muscles on the opposite side. The gait of C. elegans adapts to the mechanical load imposed by the environment [42]. Swimming involves higher frequency and longer wavelength undulations than crawling on agar, though both behaviors may be part of a continuous spectrum of neural control [43, 44]. Oscillation frequencies also decrease for C. elegans swimming in higher viscosity aqueous media [36].

Proprioception is the sensitivity of sensory receptors in muscles or other tissues to the motion or position of the body. In models for nematode locomotion, the sensitivity to environment involves proprioceptive integration or feedback on the neuronal control model [43, 45, 37, 40]. Experiments on restrained C. elegans [37] show that the bending of the posterior regions requires anterior bending (see Figure 3 by Wen et al. [37]). If the nematode is held fixed at its middle, the body can undulate between head and constraint, but past the constraint to the tail, there will be no undulation. These experiments suggest that the body itself lacks central pattern generating circuits, and they motivate locomotion models that rely on an oscillator in the head [37].

To create a model for collective motion in the vinegar eels, we assume that the waves that propagate down the nematode's body are initiated at the organism's head. We use the phase of the head's back and forth motion with respect to its mean position to describe the state of each organism, and we model our ensemble of eels as a chain of phase oscillators. In the absence of interactions, each oscillator has intrinsic frequency equal to the oscillation frequency of a freely swimming eel. Because the mean positions (averaged over the oscillation period) of the eels' heads drift very slowly (see Figure 5), we neglect drift in the mean or averaged (over a period) oscillator positions. Here the oscillator phase is associated with back and forth motion of an eel head, because the head is assumed to be the source of the body wave. This differs from the models by Niedermayer et al. [13] and Brumley et al. [14], where the phase describes motions of a cilium or flagellum tip.

When the vinegar eels are engaged in metachronal waves, the organisms are often touching each other. Chelakkot et al. [21] simulated steric interactions between active and elastic filaments in arrays and found that short-ranged steric inter-filament interactions can account for formation of collective patterns such as metachronal waves. Because the undulation frequency of C. elegans is slower when under mechanical load imposed by the environment, we assume that steric interactions in our vinegar eels reduce the phase velocity of oscillation.
To construct a model for metachronal waves, we consider the head of a single organism to be an oscillator, and we consider ensembles of $N$ oscillators. The $i$-th oscillator can be described with a phase $\theta_{i}$ and a frequency of oscillation or phase velocity $\frac{d\theta_{i}}{dt}=\dot{\theta}_{i}$. Here $i$ is an integer index and $\theta_{i}$ is a function of time $t$.

Collective phenomena involving synchronization of oscillators have been described with different nomenclature. Following [46, 13], a synchronized state of an ensemble of $N$ oscillators is one where all oscillators have identical phases, $\theta_{i}(t)=\theta_{j}(t)$ for all $i,j\in(0,1,...,N-1)$. A phase-locked or frequency synchronized state [22, 23, 24] is one where all oscillators have identical phase velocities, $\dot{\theta}_{i}(t)=\dot{\theta}_{j}(t)$ for all $i,j\in(0,1,...,N-1)$. An entrained state has identical mean phase velocities, $\tilde{\omega}_{i}=\tilde{\omega}_{j}$ for all $i,j\in(0,1,...,N-1)$. The time average of the phase velocity can be computed with an integral over time, $\tilde{\omega}_{i}=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\dot{\theta}_{i}(t^{\prime})\,dt^{\prime}$, or by integrating over an oscillation period if oscillator motions become periodic.

For a chain of oscillators, the index $i$ specifies the order in the chain. One type of traveling wave is a non-synchronous phase-locked state characterized by a constant phase delay or offset between consecutive oscillators in a chain or loop of oscillators. In other words, $\theta_{i+1}=\theta_{i}+\chi$ for consecutive oscillators, where $\chi$ is the phase delay and $\dot{\theta}_{i}\neq 0$ for all $i$. If individual oscillators undergo similar periodic motions, then another type of traveling wave is a non-synchronous but entrained state characterized by a time delay between the motions of consecutive oscillators. In other words, $\theta_{i}(t+\tau)=\theta_{i+1}(t)$ with time delay $\tau$. In this case the phase velocities would be periodic and need not be constant. Both types of traveling waves involve periodic oscillator motions and are known in the biological literature as metachronal waves.

### V.1 Local Kuramoto models

The Kuramoto model [47, 48, 46] consists of $N$ oscillators that mutually interact via a sinusoidal interaction term,

$\frac{d\theta_{i}}{dt}=\omega_{i}+\sum_{j=1}^{N}K_{ij}\sin(\theta_{j}-\theta_{i})$ (3)

where $K_{ij}$ are non-negative coefficients giving the strength of the interaction between a pair of oscillators. In the absence of interaction, the $i$-th oscillator would have a constant phase velocity $\omega_{i}$, which is called its intrinsic frequency. With only nearest-neighbor interactions, a well-studied model, sometimes called a local Kuramoto model, is described by

$\frac{d\theta_{i}}{dt}=\omega_{i}+K\left[\sin(\theta_{i+1}-\theta_{i})+\sin(\theta_{i-1}-\theta_{i})\right]$ (4)

[22, 23, 24, 25, 26, 27]. At low values of the positive interaction parameter $K$, the oscillators are hardly affected by their neighbors. At higher $K$, the oscillators cluster in phase velocity, and the number of clusters decreases until they fuse into a single cluster that spans the system. At and above a critical value $K=K_{s}$, the entire system must enter a global phase-locked state [49]. A minimal numerical integration of equation 4 is sketched below.
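This sketch is an illustration of the behavior just described rather than a production simulation: the intrinsic frequencies are identical (as we will assume for the eels), the chain has free, non-periodic ends, and the parameter values are illustrative.

```python
import numpy as np

def dtheta_dt(theta, omega, K):
    """Right-hand side of equation (4) with free (non-periodic) chain ends."""
    coupling = np.zeros_like(theta)
    coupling[1:] += np.sin(theta[:-1] - theta[1:])    # neighbor i-1
    coupling[:-1] += np.sin(theta[1:] - theta[:-1])   # neighbor i+1
    return omega + K * coupling

rng = np.random.default_rng(2)
N, K, dt = 100, 1.0, 0.05
omega = np.ones(N)                         # identical intrinsic frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)   # random initial phases

for _ in range(20000):                     # explicit Euler steps
    theta += dt * dtheta_dt(theta, omega, K)

# In a phase-locked state all phase velocities agree; in the synchronous
# state the phase differences between neighbors also vanish (mod 2 pi).
print("spread of phase velocities:", np.ptp(dtheta_dt(theta, omega, K)))
print("largest |neighbor phase difference|:", np.abs(np.diff(theta)).max())
```

Started from random phases, chains of this kind usually relax toward the synchronous state rather than a traveling wave, as discussed next.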
Above the critical value, $K>K_{s}$, there can be multiple stable phase-locked attractors, each with its own value of the global rotation rate $\Omega=\frac{1}{N}\sum_{i}\omega_{i}$ [50, 26].

What fraction of possible initial conditions would converge onto a phase-locked solution that is not synchronous? The set of initial conditions that converge onto a particular solution is called its basin of attraction. The basins of attraction for traveling wave solutions (or non-synchronous phase-locked states) are smaller than that of the synchronous state [26, 27]. Using random and uniformly generated initial phases in $[0,2\pi]$ for each oscillator, the system is more likely to enter a synchronous rather than a traveling wave state. Because well-studied local Kuramoto models like that of equation 4 are more likely to enter a synchronous than a traveling wave state, they do not capture the behavior illustrated by our vinegar eels, or by other systems that exhibit metachronal waves, such as chains of cilia [13] or flagella on the surface of Volvox carteri alga colonies [14]. Relevant models should exhibit a larger basin of attraction for traveling wave states than for the synchronous state.

In models for metachronal waves in cilia or flagellates [18, 13, 14], the end of a filament moves in a plane on a trajectory of radius $R$ about a central position, with phase $\theta$ in polar coordinates. Active forces are induced via tangential forces exerted on the filament. Interactions between the oscillators are based on hydrodynamic interactions between pairs of filaments and are computed using the Stokes equation, which is valid at low Reynolds number [18, 13, 8, 14]. Motion is over-damped, so the equations of motion are a balance between driving and hydrodynamic forces. The filament velocities are computed as a function of their positions, and it is not necessary to compute accelerations. The equations of motion describe motions of the phase, radius and orientation angle of the end of the filament's trajectory. However, if the distance between filaments is large compared to the radius of motion, the dynamical system can be approximated with nearest-neighbor interactions, neglecting variations in the radius or plane of motion [13]. This gives a local oscillator chain model dependent only on phases.

### V.2 An oscillator model based on heads that overlap

We desire a model that has a wide basin of attraction for traveling wave states, similar to those by Niedermayer et al. [13] and Brumley et al. [14]. The oscillator chain model by Niedermayer et al. [13] included sine and cosine terms of the sums and differences of pairs of phases, and that by Brumley et al. [14] included both radial and phase motions. We can similarly assume that motion is over-damped and can be described by equations for phase and phase velocity, lacking phase accelerations. Since steric interactions are likely to be important, we can adopt a model with only nearest-neighbor interactions, as did Niedermayer et al. [13]. However, opposite to the hydrodynamic interaction models, the interactions between our eels are likely to be strong, and they should reduce the oscillator phase velocity rather than increase it. We observe that eel heads near the edge of the drop (see Figures 3, 5) were not near other eel bodies during portions of the traveling wave. If undulation is generated at the eel head, then interactions on it are only strong during about half of the head's oscillation cycle.
Consider two eels oriented horizontally, as shown in Figure 8a, with $x$ the horizontal axis and $y$ the vertical one. The eels undulate with amplitude $A$ and without varying the head's $x$ position or the orientation of its mean centerline, which is shown with dotted lines. The $y$ position of the $i$-th head is

$y_{i}=A\cos\theta_{i}-id,$ (5)

where $d$ is the distance between the neighboring eels' mean centerlines. The phase of oscillation is given by the angle $\theta_{i}$. The distance between the two heads with index $i$ and $i-1$ is

$\Delta_{\rm left}=d+A\cos\theta_{i}-A\cos\theta_{i-1}.$ (6)

The eels with index $i$ and $i-1$ overlap near their heads if the left-sided overlap function

$o_{\rm left}(\theta_{i-1},\theta_{i})=\frac{\Delta_{\rm left}}{A}=\cos\theta_{i}-\cos\theta_{i-1}+\beta<0,$ (7)

where the dimensionless overlap parameter

$\beta\equiv\frac{d}{A}.$ (8)

We assume that a strong steric interaction on the $i$-th eel's head would reduce its phase velocity when $o_{{\rm left}}(\theta_{i-1},\theta_{i})<0$. Otherwise, the eel head's phase velocity would remain at its intrinsic phase velocity. Because the eels tend to be closer together than the amplitude of undulation when they are involved in a metachronal wave, we expect $\beta$ to be smaller than 1. The amplitude $A$ of body motions for eels engaged in the metachronal wave need not be the same as that of the freely swimming eel, $A_{u}$.

Consider three eels oriented at an angle, as shown in Figure 8b. The oscillator in the $i$-th eel's head is more strongly influenced by the motions of the organism to its left (with index $i-1$) and less so by the one to its right (with index $i+1$). When the eels are tilted with respect to the edge of the drop, we expect directed interactions where the phase of the eel's head is primarily influenced by its nearest neighbor on one side.

Figure 8: (a) Two eels undulate with amplitude $A$ but without moving their mean centerlines. The two mean centerlines are shown with dotted lines and are separated by distance $d$. The eel heads are shown with large black dots. We assume that the undulation on the body is initiated by oscillators in the eels' heads. The oscillators have phases $\theta_{i}$ and $\theta_{i-1}$. When $\Delta_{\rm left}=d+A\cos\theta_{i}-A\cos\theta_{i-1}<0$, the eel heads overlap and steric interaction would slow their motion. (b) Three consecutive eels are tilted by angle $\phi_{\rm tilt}$ with respect to the horizontal direction. The oscillator in the $i$-th eel's head is more strongly influenced by the motions of the organism to its left (with index $i-1$) than the one to its right (with index $i+1$). At lower tilt angle $\phi_{\rm tilt}$, the interactions are increasingly lopsided.

A modification to the local Kuramoto model with directed or one-sided nearest-neighbor interactions is

$\frac{d\theta_{i}}{dt}\omega_{0}^{-1}=1-Kf(\theta_{i-1},\theta_{i}).$ (9)

Here the positive and dimensionless parameter $K$ describes the strength of the interaction. The nearest-neighbor interaction function $0\leq f(\theta_{i-1},\theta_{i})\leq 1$ reduces the phase velocity and mimics the role of one-sided steric interactions. The intrinsic angular phase velocity $\omega_{0}$ is the same for each oscillator. We work with time in units of $\omega_{0}^{-1}$, which is equivalent to setting $\omega_{0}=1$. One choice for the interaction function gives 1 if the overlap function $o_{\rm left}$ (defined in equation 7) is negative, so that there is an overlap, and gives 0 otherwise. This choice neglects eel body width, as encoded in the sketch below.
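The geometry above reduces to two short functions; a minimal encoding of equation (7) and of the Heaviside choice for the interaction function in equation (9):

```python
import numpy as np

def o_left(theta_prev, theta_i, beta):
    """Left-sided overlap function of equation (7); a negative value means
    the head of eel i overlaps the head of its left neighbor i-1."""
    return np.cos(theta_i) - np.cos(theta_prev) + beta

def f_step(theta_prev, theta_i, beta):
    """Heaviside choice for f in equation (9): the interaction is 1 (full
    strength) when the heads overlap and 0 otherwise; body width is neglected."""
    return np.where(o_left(theta_prev, theta_i, beta) < 0.0, 1.0, 0.0)
```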
We have checked with numerical integrations that a model based on a Heaviside step function can robustly give traveling wave solutions. However, numerical integration of a discontinuous function with a conventional numerical integrator can give results that are dependent on step size or sensitive to round-off or discretization errors. To mitigate this problem, we use a smooth function to approximate the step function,

$f(\theta_{i-1},\theta_{i})=\frac{1}{2}\left[1-\tanh\frac{o_{\rm left}(\theta_{i-1},\theta_{i})}{h_{ol}}\right],$

where the dimensionless parameter $h_{ol}$ sets the abruptness of the transition of the function from 0 to 1. In the limit of small $h_{ol}$ we recover the Heaviside function. An oscillator model that uses this smooth function has equation of motion

$\frac{d\theta_{i}}{dt}\omega_{0}^{-1}=1-\frac{K}{2}\left[\tanh\left(\frac{\cos\theta_{i-1}-\cos\theta_{i}-\beta}{h_{ol}}\right)+1\right].$ (10)

Figure 9: A directed oscillator chain model numerical integration. Equation 10 is integrated with $N=200$ oscillators in a chain with a non-periodic boundary condition and randomly chosen initial phases. The interaction parameter is $K=0.5$, intrinsic frequency $\omega_{0}=1$, overlap parameter $\beta=0.25$, smoothness parameter $h_{ol}=0.05$ and time step $dt=0.05$. The system was integrated to time $t=1001$. At the end of this integration the average phase velocity is $\tilde{\omega}=0.77\omega_{0}$ and the average wavelength is $N_{\lambda}=12$ oscillators. (a) From top to bottom, the panels show the phase angles $\theta_{j}$, phase velocity $d\theta_{j}/dt$ and phase difference $\chi_{j}=\theta_{j+1}-\theta_{j}$ plotted as a function of index $j$ at two different times. The outputs at $t=1000$ and $t=1001$ are plotted with red and blue lines. Comparison between these two outputs shows that they are similar but shifted by a time delay. The system is in an entrained state, which can also be described as a traveling wave state. (b) From top to bottom, the images show phase angle $\theta_{j}$, phase velocity $d\theta_{j}/dt$ and phase difference $\chi_{j}$, with color shown in the color bars on the right. The horizontal axes show time and the vertical axes the oscillator index $j$. The fine diagonal features at large times are the traveling waves. The horizontal features are discontinuities that eventually disappear as coherent regions merge.

Figure 10: Wavelengths $N_{\lambda}$ and mean phase velocities $\tilde{\omega}$ computed for numerical integrations at $t=1000$ of the directed oscillator chain model given in equation 10. The integrations have $N=200$ oscillators, the time step is $dt=0.05$, the smoothness parameter is $h_{ol}=0.1$, the boundary is not periodic and initial phases were randomly chosen. If the entire chain of oscillators did not reach a traveling wave state at $t=1000$, a black dot is plotted; otherwise the dot has color giving the wavelength $N_{\lambda}$ (top panel) and mean phase velocity $\tilde{\omega}$ (bottom panel). The $x$ axes show the interaction strength parameter $K$ and the $y$ axes the overlap parameter $\beta$.

### V.3 Numerical integrations of a directed overlap phase oscillator chain model

The directed overlap phase oscillator model given by equation 10 depends on three positive parameters: the interaction strength $K$, the overlap parameter $\beta$, and the smoothness parameter of the interaction function, $h_{ol}$.
The model is also sensitive to the number of oscillators in the chain or loop, $N$, the boundary condition and the choice of initial conditions. We integrate this model using a first-order explicit Euler method. The initial phases for each oscillator are randomly generated using a uniform distribution spanning $[0,2\pi]$. In local Kuramoto models, stable solutions that are present in a loop may not be present if one link is dissolved and the loop becomes a chain [26, 51]. To ensure that traveling waves are robustly generated in our model, we purposely do not choose a periodic boundary condition. The boundary at the end of the chain (for $\theta_{N-1}$) does not affect the dynamics because of the direction of the interactions. For the left boundary (with phase $\theta_{0}$) we set the phase velocity $\frac{d\theta_{0}}{dt}=(1-K)\omega_{0}$. We find that a slow left boundary is less likely to excite perturbations that propagate through the system.

A numerical integration with $N=200$ oscillators, intrinsic frequency $\omega_{0}=1$, interaction parameter $K=0.5$, overlap parameter $\beta=0.25$, and smoothness parameter $h_{ol}=0.05$ is shown in Figure 9. The time step used is $dt=0.05$ and we have checked that a smaller step size does not significantly change the integration output. In Figure 9a the panels show phase angle $\theta_{j}$, phase velocity $d\theta_{j}/dt$ and phase shift $\chi_{j}=\theta_{j+1}-\theta_{j}$ as a function of index $j$ for an integration at the two times $t=1000$ and $t=1001$. In Figure 9b we show the same quantities but with color arrays as a function of both index and time.

Despite the absence of a diffusive-like interaction term (similar to that in equation 4), the model has attracting entrained or traveling wave solutions. A comparison between the two outputs in the top panel shows that phases at different times can be related with a time delay. At the beginning of the integration, clusters of entrained or nearly phase-locked groups form and later merge to give a fully entrained or traveling wave state. This type of behavior was previously seen in the oscillator models developed for hydrodynamic interactions between cilia and flagella [13, 14]. When initial conditions are random, there are initially groups of neighboring oscillators with large phase differences, and these large differences can remain on the same group of oscillators for many oscillation periods. These appear as nearly horizontal streaks in the bottom panel of Figure 9b, which shows the phase difference $\chi$. Had we added a diffusive-like term to our model, small-wavelength perturbations would be more rapidly damped, but such a term would also affect the velocity and wavelength of traveling wave states.

We ran the integration to a maximum time $t=1001$ with $\omega_{0}=1$, corresponding to $1001/(2\pi)\approx 160$ oscillation periods ($2\pi/\omega_{0}$). For an oscillation frequency of $f_{u}\sim 6$ Hz (as we observed for our vinegar eels), this duration corresponds to 27 seconds. The metachronal waves take a few minutes to appear after the drop is placed on the slide. The time it takes for all entrained clusters to merge in the numerical model is shorter than the few minutes it takes for traveling waves to form on a large portion of the drop edge in our concentrated eel experiments. However, our model is of a fixed chain of oscillators, so it does not take into account the time it takes for the vinegar eels to collect on the boundary, or sources of noise in the system.
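The integration just described is compact to implement; a minimal sketch of equation (10) with the explicit Euler scheme, the slow left boundary, and the Figure 9 parameters. The wavelength and mean phase velocity diagnostics at the end are simple estimates of our own construction and are meaningful only once the chain has reached an entrained state.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, beta, h_ol, dt = 200, 0.5, 0.25, 0.05, 0.05   # time in units of 1/omega_0
theta = rng.uniform(0.0, 2.0 * np.pi, N)            # random initial phases

def dtheta_dt(theta):
    """Equation (10): directed (one-sided) overlap interaction, omega_0 = 1."""
    d = np.ones(N)
    f = 0.5 * (np.tanh((np.cos(theta[:-1]) - np.cos(theta[1:]) - beta) / h_ol) + 1.0)
    d[1:] -= K * f                                  # each oscillator slowed by neighbor i-1
    d[0] = 1.0 - K                                  # slow left boundary condition
    return d

for _ in range(int(1000 / dt)):                     # integrate to t = 1000
    theta += dt * dtheta_dt(theta)

vel = dtheta_dt(theta)
chi = np.diff(theta)                                # phase differences chi_j
print("mean phase velocity / omega_0:", vel.mean())
print("wavelength N_lambda (oscillators per 2 pi):", abs(2.0 * np.pi / chi.mean()))
```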
At the end of the numerical integration shown in Figure 9, the average phase velocity is $\tilde{\omega}=0.77\omega_{0}$ (computed from all oscillators at that time) and the average wavelength is $N_{\lambda}=12$ oscillators. The phase delay for the entrained state is $\tau=\frac{2\pi}{\tilde{\omega}N_{\lambda}}=0.68$. The number of oscillators for a change of $2\pi$ in phase, $N_{\lambda}$, is comparable to the value we estimated for the metachronal wave in the vinegar eels (see Table 2). The average phase velocity ratio $\tilde{\omega}/\omega_{0}$ is near but somewhat higher than the ratio of metachronal wave to freely swimming undulation frequency $f_{\rm MW}/f_{u}\sim 0.67$ that we estimated for the vinegar eels (listed in Table 2 and discussed in section IV). If all phases are initially set to the same value, the dynamical system described by equation 10 remains in a synchronous state. However, if some noise is introduced into the system (in the form of small stochastic perturbations on each oscillator) then the system is likely to enter the traveling wave state even with flat initial conditions. The basin of attraction for the traveling wave state is significantly larger than that of the synchronous state.

With a fixed value of the smoothness parameter $h_{ol}$, we integrated equation 10 for different values of the interaction parameter $K$ and the overlap parameter $\beta$. These integrations have random initial conditions and a non-periodic boundary, as described above, intrinsic frequency $\omega_{0}=1$ and smoothness parameter $h_{ol}=0.1$. At $t=1000$ we inspected plots like those in Figure 9 to see if the system was in an entrained state. If so, we measured the mean wavelength $N_{\lambda}$ and the mean phase velocity $\tilde{\omega}$. In Figure 10 points are plotted as a function of $\beta$ and $K$, with color set by their wavelength $N_{\lambda}$ (top panel) or mean phase velocity $\tilde{\omega}$ (bottom panel). Systems that exhibited discontinuities at the end of the simulation (other than at the left boundary) are plotted in black. A fairly wide range of interaction and overlap parameters robustly gives entrained or traveling wave states. At larger overlap parameter $\beta$ the oscillators spend less time overlapped, and this tends to give a shorter wavelength and higher mean phase velocity $\tilde{\omega}$ in the entrained state. If eels are more distant from each other or have lower amplitude oscillations then $\beta$ is larger. At large overlap parameters $\beta\gtrsim 0.4$ (at the top of each panel in Figure 10) the system is less likely to be in a traveling wave state at $t=1000$. This is due to clusters of oscillators that begin with large phase differences between neighbors that do not dissipate. High eel concentration would reduce the overlap parameter $\beta$, so the model can account for the sensitivity of the metachronal wave to eel concentration on the boundary. Figure 10 shows that for $K<0.4$ (on the left side of the figure) entrained states are not present at the end of the integration. This is due to groups of neighboring oscillators with initially large phase differences. If integrated longer, these irregularities or discontinuities might eventually disappear. The interaction parameter $K$ influences the time it takes for the short wavelength structure to dissipate. In a more realistic model, noise and diffusive interactions would also affect the range of parameters giving an entrained or traveling wave state.
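The entrained-state diagnostics quoted above can be measured from a final phase array along the following lines; this continues the sketch in the previous code block, and since the paper does not give its exact measurement procedure, the wrapping convention is our own choice.

```python
# Diagnostics from a final state, continuing the sketch above. In the
# entrained state the phase differences chi_j are positive and well below
# 2*pi, so wrapping them into [0, 2*pi) before averaging is safe.
rate = dtheta_dt(theta)                         # phase velocities at the final time
omega_mean = rate[1:].mean()                    # mean phase velocity, boundary excluded

chi = (theta[1:] - theta[:-1]) % (2.0 * np.pi)  # phase differences chi_j
N_lambda = 2.0 * np.pi / chi.mean()             # oscillators per 2*pi of phase
tau = 2.0 * np.pi / (omega_mean * N_lambda)     # phase delay of the entrained state
```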
The isolated black points at $\beta\approx 0.25$, $K=0.7$ are due to discontinuities at the left boundary that continuously propagate through the system. We are not sure why our left boundary condition caused this problem only in this region of parameter space.

What properties of a phase oscillator model are required for a large basin of attraction to an entrained or traveling wave state? The model by Brumley et al. [14] is two dimensional, as it depends on oscillator radius as well as phase, so it is more complex than a model that consists only of a chain of phases. With only phases, both our model and that by Niedermayer et al. [13] are not potential models, and interactions between pairs of oscillators are not applied equally and oppositely to each oscillator in a pair, the way conventional physical forces are applied. These three examples (our model, and those by Niedermayer et al. [13] and Brumley et al. [14]) of models developed for traveling waves in biological systems might yield clues for a more general classification of the basins of attraction for phase oscillator models with local interactions.

For most of our integration parameters we saw only a single possible entrained state. Is it possible to predict the phase delay $\tau$, or wavelength, $N_{\lambda}$, of this entrained state? The integration shown in Figure 9a of the model given by equation 10 shows that the phase at a single output time has two regions: one region has a low phase velocity and the other region has a higher phase velocity. In the fast and slow regions, the phase velocity is constant and phase differences between neighboring oscillators are maintained. In appendix A, we estimate the phase delay $\tau$ and wavelength $N_{\lambda}$ of the entrained state from the phase shifts that occur during the transitions between the fast and slow regions.

Figure 11: Distributions for the directed oscillator chain model integration shown in Figure 9 are plotted in blue. These are compared to distributions for a constant phase velocity model, which is shown in orange and referred to as ‘sinusoidal’. (a) The distribution of phase angles for the integrated oscillator chain model and the sinusoidal model. (b) The distribution of phase velocities for the integrated oscillator chain model. The sinusoidal model has $d\theta/dt\ \omega_{0}^{-1}=1$. (c) The distribution of orientation angles for both oscillator chain and sinusoidal models computed using equation 15, $\phi_{\rm tilt}=0$ and $\pi_{A}=A\omega_{0}/v=1$. The red dotted line shows the distribution function (in equation 17) for the sinusoidal model. (d) We show smoothed distributions of orientation angles computed using equation 15, $\phi_{\rm tilt}=20^{\circ}$ and $\pi_{A}=0.7$ for both oscillator chain and sinusoidal models. The sinusoidal and oscillator chain model distributions have been smoothed with a Gaussian filter with a standard deviation of $\sigma=12^{\circ}$. With a thick green line, we show the distribution of orientations measured from the vinegar eels in Video B. This distribution is the same as plotted in green in Figure 7b. The directed chain oscillator model displays an asymmetry in the associated orientation angle distributions (i.e., peaks of different heights) that is present in the observed distribution.

### V.4 Distributions of orientation angles

How do we relate the oscillator chain model to the orientation distributions displayed in Figure 7b for the vinegar eels engaged in a metachronal wave?
The undulation velocity we measured in the freely swimming eel, $v_{u}\sim 3.0$ mm/s, is similar to the metachronal wave velocity $v_{\rm MW}\sim 3.7$ mm/s, so we could use either one to make an estimate for how motions of the head propagate to the rest of the body. The free eel undulation frequency of $f_{u}=5.9$ Hz gives intrinsic phase velocity $\omega_{0}=2\pi f_{u}=37\ {\rm s}^{-1}$. It is useful to compute the dimensionless ratio $\pi_{A,{\rm MW}}\equiv\frac{A_{\rm MW}\omega_{0}}{v_{\rm MW}}\approx 0.70$ (11) using parameters listed in Table 1 and Table 2 that we measured for the freely swimming eel and metachronal wave.

The phase $\theta$ in our oscillator model represents the phase of back and forth oscillation in an eel’s head. We constructed our interaction function assuming that the eel head moves away from its mean centerline with coordinate perpendicular to the mean centerline $y=A\cos\theta$. We assume that the head’s motion excites a constant velocity traveling wave along the eel body $y(x,t)$, with distance $y$ from the mean centerline a function of distance $x$ along the mean centerline. The head’s motion gives boundary condition $y(x=0,t)=A\cos\left[\theta(t)\right],$ (12) where the function $\theta(t)$ gives the phase of the head oscillation as a function of time. With constant undulation wave velocity $v$, $y(x,t)=A\cos\left[\theta\left(t-\frac{x}{v}\right)\right]$ (13) is consistent with the boundary condition at $x=0$ (equation 12). The velocity $v$ at which waves propagate down the eel body may not be the same as $v_{u}$, the wave velocity for the freely swimming eel. The slope of the body is $\frac{dy(x,t)}{dx}=A\sin\left[\theta\left(t-\frac{x}{v}\right)\right]\theta^{\prime}\left(t-\frac{x}{v}\right)v^{-1}.$ (14) Here $\theta^{\prime}$ is the derivative of the function $\theta(t)$. The distribution of the slopes should be the same as the distribution of $\frac{A}{v}\frac{d\theta}{dt}\sin\theta$, where the phases $\theta$ and phase velocities $\dot{\theta}$ are those at different times and positions for the heads in the oscillator array after the integration achieves an entrained state. The slope of the body is $\frac{dy}{dx}=\tan\phi$, where $\phi$ is the body orientation angle. From our model phases and phase velocities we can compute the distribution of body orientation angles $\phi$, assuming a constant wave velocity $v$, with $\phi=\arctan\left[\pi_{A}\left(\frac{d\theta}{dt}\frac{1}{\omega_{0}}\right)\sin\theta\right]+\phi_{\rm tilt},$ (15) with $\pi_{A}\equiv\frac{A\omega_{0}}{v}.$ (16) We have purposely written equation 15 in terms of dimensionless parameters so as to facilitate comparison of our model with the vinegar eel collective motions. Here the tilt angle $\phi_{\rm tilt}$, illustrated in Figure 8, lets us adjust the angle of the eel centerlines with respect to the drop edge.

We generate model orientation distributions for the oscillator chain model with the parameters and integration shown in Figure 9. In Figure 11 we use the arrays from 20 different times (spaced at intervals of duration 0.5) to compute the distributions of phase angle $\theta$, phase velocity $\frac{d\theta}{dt}$, and orientation angle $\phi$. The orientation angles are computed with equation 15 from the phases and phase velocities. The distributions have been normalized so that they integrate to 1. For comparison, we similarly generate and show distributions for a constant phase velocity model that has $\frac{d\theta}{dt}=\omega_{0}$.
This model has a flat distribution of phases and can be considered purely sinusoidal. In this special case, the orientation angle distribution function consistent with equation 15 and equation 16 is $p(\phi)_{\rm sinusoidal}=\frac{1}{\pi}\frac{1+\tan^{2}(\phi-\phi_{\rm tilt})}{\sqrt{\pi_{A}^{2}-\tan^{2}(\phi-\phi_{\rm tilt})}}.$ (17) This follows from applying the change of variables $u=\tan(\phi-\phi_{\rm tilt})=\pi_{A}\sin\theta$ to a uniform distribution of phases $\theta$.

The phase velocity distribution for the oscillator chain model shown in Figure 11b shows two peaks, a low one for when there are interactions between neighboring oscillators and a high one that is at the intrinsic phase velocity. This is what we would expect from inspection of the phase velocities in Figure 9. Figure 11c shows orientation angles $\phi$ computed with no tilt, $\phi_{\rm tilt}=0$, and ratio $\pi_{A}=A\omega_{0}/v=1$. Orientation angle distributions for both the oscillator chain model and the constant phase velocity model exhibit two peaks and a trough. In Figure 11c, the constant phase velocity model distribution, in orange, is consistent with the distribution function of equation 17 that is shown with a dotted red line. The peaks of the orientation angle distribution for the oscillator chain model have different heights due to the uneven phase velocity distribution, whereas the distribution is symmetrical about $\phi=0$ for the sinusoidal (constant phase velocity) model.

We can compare the modeled distribution of body orientations to those measured in our videos of the eels engaged in the metachronal wave, shown in Figure 7b and discussed in section IV.1. Figure 11d shows orientation angle distributions computed with $\phi_{\rm tilt}=20^{\circ}$ and ratio $\pi_{A}=0.7$, which is that of equation 11. To facilitate comparison between distributions we have smoothed the model distributions using a Gaussian filter with standard deviation of $12^{\circ}$. In Figure 11d, we replot one of the orientation angle distributions that was shown in Figure 7b and is measured from Video B of a metachronal wave. The model orientation distributions show two peaks, and when corrected by the same factor (setting $\pi_{A}=\pi_{A,{\rm MW}}$) and smoothed, they have a width and two peaks similar to that observed for the metachronal wave. Unlike the observed distribution, the sinusoidal model’s orientation angle distribution is symmetrical about $\phi_{\rm tilt}$ and its two peaks are the same height. In contrast, the oscillator chain model distribution is asymmetric or lopsided and its two peaks have different heights. Because there are variations in oscillator phase velocity in the oscillator chain model, the associated orientation angle distribution is lopsided. The oscillator chain model thus offers an explanation for the asymmetry that is present in the observed orientation angle distribution.

To compare the oscillator chain model to the observed orientation angle distribution we smoothed the model distribution. Noise-like variations in the observed orientation angle distribution can be due to eels that are not aligned with their neighbors and variations in shading that affect the accuracy of the HOG algorithm. The oscillator chain model’s distribution is more lopsided than the observed distribution, which implies that variations in the phase velocity are not as extreme as predicted in Figure 11b. A more complex oscillator chain model would be needed to give a better fit to the observed orientation angle distribution.
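The orientation angle construction of equations 15-17 can be sketched as follows, reusing `theta`, `rate` and `omega0` from the integration sketch in section V.3; the parameter values mirror Figure 11d, while the function name and layout are ours.

```python
# Sketch of equations 15-17: model orientation angles from phases and
# phase velocities, and the closed-form sinusoidal distribution.
pi_A = 0.7                       # ratio A * omega0 / v, as in equation 11
phi_tilt = np.deg2rad(20.0)      # tilt angle, as used for Figure 11d

# Equation 15 applied to an entrained state (theta, rate from the sketch above).
phi = np.arctan(pi_A * (rate / omega0) * np.sin(theta)) + phi_tilt

def p_sinusoidal(phi):
    """Equation 17: orientation angle distribution of the constant phase
    velocity model; valid where tan^2(phi - phi_tilt) < pi_A^2."""
    u = np.tan(phi - phi_tilt)
    return (1.0 / np.pi) * (1.0 + u**2) / np.sqrt(pi_A**2 - u**2)
```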
Figure 12: (a) Eel body positions computed from a series of outputs at different times of the phase oscillator model shown in Figure 9, using equation 35. Overlaps are reduced not only at the eel heads but throughout their bodies. (b) A panel from Video B similar to those shown in Figure 3. The morphology of the model wave in (a) resembles that seen in the vinegar eels. (c) Eel body positions estimated via equation 35 but with a constant time delay and constant phase velocity model. Other parameters were the same. This model causes eel bodies to overlap. A comparison between (a) and (c) suggests that there must be variations in the phase velocities to reduce steric interactions.

### V.5 Body shapes

In equation 15 we used model phases to compute the distribution of body orientation angles $\phi$ assuming a constant wave velocity $v$. With the same assumption we can compute the position and shape of the entire body using a time series of model outputs. Our procedure for doing this is described in appendix B. In Figure 12a we show computed eel body shapes that are derived from the integrated phase oscillator model output shown in Figure 9 (integrating equation 10) and computed along the body lengths using equation 35. To generate the body positions we used amplitude $A=0.07$ mm (based on that measured from eel head motions for eels engaged in the metachronal wave) and intrinsic phase velocity $\omega_{0}=2\pi f_{u}$ with $f_{u}=5.9$ Hz based on freely swimming eels. We adopted tilt angle $\phi_{\rm tilt}=20^{\circ}$ (the same as we used to generate orientation distributions in Figure 11). To match the metachronal wavelength we used a horizontal distance between eel mean centerlines of $D=0.11$ mm (as defined in Figure 8). Lastly, we used a wave speed $v=4.1$ mm/s. The ratio $\pi_{A}=A\omega_{0}/v=0.63$ is similar to the value given in equation 11 and used to create the model orientation distributions in Figure 11d. The eel body shapes using these parameters are shown in Figure 12a and they exhibit a morphology similar to that of the vinegar eels themselves when engaged in the metachronal wave. Figure 12b shows a panel like those of Figure 3 from Video B for comparison.

Figure 12a shows that the periodic variations in phase delay and phase velocity of an entrained state from our oscillator chain model (equation 10) reduce overlap between eels, not just near the eel heads but throughout their bodies. The eel bodies are nearly equidistant from each other everywhere. In Figure 12c we show body positions generated with a constant phase velocity ($\omega_{0}$) and constant phase delay (with the same wavelength $N_{\lambda}$) model. The constant phase delay and phase velocity model fails badly. Variations in phase delay between neighboring eels and in their phase velocity during different parts of the oscillation are probably needed to prevent strong steric interactions between the eels.

We chose the wave speed $v$ along the body to best match the observed morphology; however, it exceeds both the metachronal wave speed of about $v_{\rm MW}\sim 3.7$ mm/s and the undulation wave speed on the 1 mm long freely swimming eel of $v_{u}\sim 3.0$ mm/s. We might expect $v=v_{\rm MW}/\cos\phi_{\rm tilt}=3.9$ mm/s using $v_{\rm MW}=3.7$ mm/s and $\phi_{\rm tilt}=20^{\circ}$. Our chosen value for $v$ exceeds this. Our assumption for computing orientation angle $\phi$ in equation 15 and body shape ignores interactions between organisms that should affect the speed of wave propagation down the eel bodies.
A more complex model that takes into account proprioception feedback throughout the eels’ body lengths might give a smoother and more symmetric orientation angle distribution (reducing the discrepancy between that modeled and measured in Figure 11d) and a closer match to the wave morphology (improving the comparison between Figure 12a and b). We observe that the amplitude of motion in the metachronal wave exceeds the amplitude of undulation when freely swimming, $A_{\rm MW}>A_{u}$, and the speed of waves traveling down the body exceeds that when freely undulating, $v>v_{u}$. A feedback motor control model, perhaps based on local body curvature, might predict or explain these characteristics.

There is a discrepancy between the overlap parameter $\beta=d/A=0.25$ of the numerical oscillator model we adopted (shown in Figure 9 and used to create Figures 11 and 12) and that derived from the additional parameters we used to make Figure 12a for the eel bodies. The distance between eel centerlines $d$ is related to the horizontal distance between mean centerlines $D$ with $d=D\sin\phi_{\rm tilt}$ (see Figure 8). For the model shown in Figure 12a, we used $D=0.11$ mm, $A=0.07$ mm and $\phi_{\rm tilt}=20^{\circ}$, giving $d=0.038$ mm. We can estimate an overlap parameter for the tilted system $\beta\sim D\sin\phi_{\rm tilt}/A=0.54$, which exceeds our oscillator model overlap. This discrepancy might be reduced if we included the eel body width and the tilt angle $\phi_{\rm tilt}$ in our overlap criterion function. A more complex model that takes into account feedback throughout the eels’ body lengths might also resolve this discrepancy.

## VI Summary and Discussion

We presented high speed videos of swimming vinegar eel nematodes (T. aceti) at low and high concentration. In a drop containing a high concentration of the vinegar eels, the eels concentrate at the edge of the drop and engage in collective wave-like motion known as a metachronal wave. We found that freely swimming organisms have an oscillation frequency of about 6 Hz. However, at high concentration the nematodes cluster on a boundary and exhibit traveling waves with a lower frequency of about 4 Hz. For a freely swimming vinegar eel, the body shape is nearly sinusoidal over much of its body length. In contrast, the distribution of body orientation angles for organisms engaged in the metachronal wave has two peaks of different heights, implying that the motion is not purely sinusoidal. The bodies spend more time at higher orientation angles with respect to their mean body orientation angle (averaged over a cycle).

We constructed a model for the collective behavior based on a chain of phase oscillators. Because we do not see large drifts in the mean eel head positions, averaged over an oscillation cycle, we neglect the head’s forward motion. Because experiments on a similar nematode, C. elegans, support a model where the undulation is initiated at the head [37], we use the phase of the head’s back and forth motion to describe it as an oscillator. Because the metachronal wave frequency is lower than the undulation frequency of a freely swimming eel, we adopt interactions that reduce the oscillator phase velocity. Our oscillator model uses strong but directed or one-sided nearest neighbor interactions to mimic steric interactions between organisms.
The oscillator model (equation 10) robustly exhibits entrained or traveling wave solutions and can have traveling waves with wavelength (in terms of numbers of organisms or oscillators) and mean phase velocity (in units of the intrinsic or freely swimming undulation frequency) similar to those of the vinegar eels when engaged in a metachronal wave. To estimate the distribution of body orientation angles and body shapes from our oscillator model, we assume that the undulation waves propagating down the body from the eel head have a constant wave velocity. This gives a two humped distribution of body orientations with peaks of different heights, similar to that observed for vinegar eels engaged in the metachronal wave. The body shapes are similar to those engaged in the wave and the eel bodies don’t overlap over their entire length. The model, which was designed to impede eel head overlaps, also reduces close interactions throughout the eel bodies.

Our model neglects interactions between organisms that should affect the amplitude and speed of wave propagation down the eel bodies. Our model also neglects the ability of the eels to change direction and congregate. Improved models could take into account the positions and phases of all points in the eels’ bodies and allow them to swim, reorient and congregate. Few known simple phase oscillator models exhibit a large basin of attraction to an entrained or traveling wave state. Perhaps our model (given in equation 10) and that by Niedermayer et al. [13] can serve as examples that might give insight for a more general classification of coupled phase oscillator models, which would be helpful for predicting wavelike collective behavior.

Vinegar eels are visible by eye and are large compared to other biological systems that exhibit metachronal waves, such as carpets of cilia [52, 12, 15] or flagella on the surface of Volvox carteri alga colonies [14]. Their large size facilitates study; however, it also places them in an interesting intermediate hydrodynamic regime, with swimming Reynolds number $Re=v_{\rm swim}L/\nu\sim 0.4$ (where $\nu\sim 1\ {\rm mm}^{2}{\rm s}^{-1}$ is the kinematic viscosity of water), so the nature of hydrodynamic interactions between them should differ from that of microorganisms which are at much lower Reynolds number (e.g., [16, 53]). Their proximity when involved in collective behavior suggests that steric interactions may be important. Studies of the locomotion of the similar nematode C. elegans [37] imply that feedback in motor control affects their gait. It is exciting to have a relatively large system in which collective motion can be studied; however, this system also presents new challenges for understanding its behavior.

In on-going studies we will describe experiments on concentrations of C. elegans, explore collectively formed dense coherent filaments in T. aceti that we have observed advancing on a vinegar/oil interface, and explore the role of concentration, drop shape and wetting angle in affecting metachronal wave formation in T. aceti [29]. Similarities between T. aceti and C. elegans suggest that it may be possible to use techniques developed for C. elegans to perform genetic modifications on the T. aceti nematode. In the future, genetically modified strains may help us better understand the molecular underpinnings of the collective motion.
Future studies could question whether there is an evolutionary advantage to the collective behavior, which may help populations of nematodes penetrate crowded environments to reach food or drive flows that transport oxygen and nutrients.

###### Acknowledgements.

We thank Ed Freedman for the gift of a microscope. We thank Kanika Vats for advice with filming through a microscope and letting us try filming at high speed with an inverted microscope. We thank William Houlihan for lending us an inverted microscope. We thank Nick Reilly for obtaining a centrifuge and showing us how to use it. We thank Doug Portman for helpful discussions on C. elegans. We thank Keith Nehrke, Sanjib K. Guha, Yunki Im and other members of Nehrke’s lab for helping us explore C. elegans, teaching us how to culture C. elegans and giving us some materials and live worms to culture in our lab. We thank Steve Teitel, Sanjib K. Guha, Keith Nehrke and Randal C. Nelson for helpful suggestions and discussions. This material is based upon work supported in part by NASA grants 80NSSC21K0143 and 80NSSC17K0771, National Science Foundation Grant No. PHY-1757062, and National Science Foundation Grant No. DMR-1809318.

## References

* Marchetti et al. [2013] M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Reviews of Modern Physics 85, 1143 (2013).
* Partridge [1982] B. L. Partridge, Scientific American 246, 114 (1982), ISSN 00368733, 19467087, URL http://www.jstor.org/stable/24966618.
* Calovi et al. [2014] D. S. Calovi, U. Lopez, S. Ngo, C. Sire, H. Chaté, and G. Theraulaz, New Journal of Physics 16, 015026 (2014), URL https://doi.org/10.1088%2F1367-2630%2F16%2F1%2F015026.
* Buck and Buck [1966] J. Buck and E. Buck, Nature 211, 562 (1966), URL https://doi.org/10.1038%2F211562a0.
* Strogatz [2012] S. Strogatz, _Sync: How Order Emerges From Chaos In the Universe, Nature, and Daily Life_ (Hachette Books, 2012), ISBN 9781401304461, URL https://books.google.com/books?id=vHw44RSiOCwC.
* Strogatz et al. [2005] S. H. Strogatz, D. M. Abrams, A. McRobie, B. Eckhardt, and E. Ott, Nature 438, 43 (2005), URL https://doi.org/10.1038%2F438043a.
* Taylor [1951] G. I. Taylor, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 209, 447 (1951), URL https://doi.org/10.1098%2Frspa.1951.0218.
* Uchida and Golestanian [2011] N. Uchida and R. Golestanian, Phys. Rev. Lett. 106, 058104 (2011), URL https://link.aps.org/doi/10.1103/PhysRevLett.106.058104.
* O’Keeffe et al. [2017] K. P. O’Keeffe, H. Hong, and S. H. Strogatz, Nature Communications 8, 1504 (2017), URL https://doi.org/10.1038%2Fs41467-017-01190-3.
* Winfree [2002] A. Winfree, Science 298, 2336 (2002).
* Lenz and Ryskin [2006] P. Lenz and A. Ryskin, Phys. Biol. 8, 285 (2006).
* Tamm et al. [1975] S. L. Tamm, T. M. Sonneborn, and R. V. Dippell, The Journal of Cell Biology 64, 98 (1975), URL https://doi.org/10.1083%2Fjcb.64.1.98.
* Niedermayer et al. [2008] T. Niedermayer, B. Eckhardt, and P. Lenz, Chaos: An Interdisciplinary Journal of Nonlinear Science 18, 037128 (2008), URL https://doi.org/10.1063%2F1.2956984.
* Brumley et al. [2012] D. R. Brumley, M. Polin, T. J. Pedley, and R. E. Goldstein, Physical Review Letters 109, 268102 (2012).
* Elgeti and Gompper [2013] J. Elgeti and G. Gompper, PNAS; Proceedings of the National Academy of Sciences 110, 4470 (2013).
* Elgeti et al. [2015] J. Elgeti, R. G. Winkler, and G. Gompper, Reports on Progress in Physics 78, 056601 (2015), URL https://doi.org/10.1088%2F0034-4885%2F78%2F5%2F056601.
* Gueron and Levit-Gurevich [1998] S. Gueron and K. Levit-Gurevich, Biophysical Journal 74, 1658 (1998), URL https://doi.org/10.1016%2Fs0006-3495%2898%2977879-8.
* Vilfan and Jülicher [2006] A. Vilfan and F. Jülicher, Physical Review Letters 96, 058102 (2006), URL https://doi.org/10.1103%2Fphysrevlett.96.058102.
* Lindemann and Lesich [2010] C. B. Lindemann and K. A. Lesich, Journal of Cell Science 123, 519 (2010), URL https://doi.org/10.1242%2Fjcs.051326.
* Uchida and Golestanian [2012] N. Uchida and R. Golestanian, The European Physical Journal E 35 (2012), URL https://doi.org/10.1140%2Fepje%2Fi2012-12135-5.
* Chelakkot et al. [2021] R. Chelakkot, M. F. Hagan, and A. Gopinath, Soft Matter x, x (2021), URL https://doi.org/10.1039%2Fd0sm01162b.
* Ermentrout and Kopell [1986] G. Ermentrout and N. Kopell, Comm. Pure Appl. Math. 49, 623 (1986).
* Ermentrout and Kopell [1990] G. Ermentrout and N. Kopell, SIAM J. Appl. Math. 50, 1014 (1990).
* Ren and Ermentrout [2000] L. Ren and B. Ermentrout, Physica D: Nonlinear Phenomena 143, 56 (2000), URL https://doi.org/10.1016%2Fs0167-2789%2800%2900096-8.
* Muruganandam et al. [2008] P. Muruganandam, F. F. Ferreira, H. F. El-Nashar, and H. A. Cerdeira, Pramana 70, 1143 (2008), URL https://doi.org/10.1007%2Fs12043-008-0119-8.
* Tilles et al. [2011] P. F. C. Tilles, F. F. Ferreira, and H. A. Cerdeira, Physical Review E 83, 066206 (2011), URL https://doi.org/10.1103%2Fphysreve.83.066206.
* Dénes et al. [2019] K. Dénes, B. Sándor, and Z. Néda, Communications in Nonlinear Science and Numerical Simulation 78, 104868 (2019), ISSN 1007-5704, URL http://www.sciencedirect.com/science/article/pii/S1007570419301881.
* Peshkov [2019] A. Peshkov (2019), lab experiments Oct 2019.
* Peshkov et al. [2021] A. Peshkov, S. McGaffigan, and A. C. Quillen, _Wiggling droplets: metachronal waves in populations of turbatrix aceti_, preprint https://arxiv.org/abs/2104.10316 (2021).
* Vicsek et al. [1995] T. Vicsek, A. Czirok, E. Ben-Jacob, I. Cohen, and O. Shochet, Physical Review Letters 75, 1226 (1995).
* Yuan et al. [2015] J. Yuan, D. M. Raizen, and H. H. Bau, J. R. Soc. Interface 12, 20150227 (2015).
* Yuan et al. [2014] J. Yuan, D. M. Raizen, and H. H. Bau, PNAS 111, 6865 (2014).
* Sugi et al. [2019] T. Sugi, H. Ito, M. Nishimura, and K. H. Nagai, Nature Communications 10, 683 (2019).
* vid [a] See Supplemental Material at [URL] for video A, a high speed 1057 frames per second video of vinegar eels (T. aceti) at low concentration seen under the microscope.
* vid [b] See Supplemental Material at [URL] for video B, a high speed 1057 frames per second video of vinegar eels (T. aceti) at high concentration seen under the microscope.
* Sznitman et al. [2010] J. Sznitman, X. Shen, R. Sznitman, and P. E. Arratia, Physics of Fluids 22, 121901 (2010).
* Wen et al. [2012] Q. Wen, M. D. Po, E. Hulme, S. Chen, X. Liu, S. W. Kwok, M. Gershow, A. M. Leifer, V. Butler, C. Fang-Yen, et al., Neuron 76, 750 (2012).
* Dalal and Triggs [2005] N. Dalal and B. Triggs, in _2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 05)_ (IEEE, 2005), URL https://doi.org/10.1109%2Fcvpr.2005.177.
* Gray and Lissmann [1964] J. Gray and H. W. Lissmann, J. Exp. Biol. 41, 135 (1964).
* Cohen and Sanders [2014] N. Cohen and T. Sanders, Current Opinion in Neurobiology 25, 99 (2014).
* Haspel and O’Donovan [2011] G. Haspel and M. O’Donovan, Journal of Neuroscience 31, 14611 (2011).
* Boyle et al. [2012] J. H. Boyle, S. Berri, and N. Cohen, Frontiers in Computational Neuroscience 6 (2012), URL https://doi.org/10.3389%2Ffncom.2012.00010.
* Niebur and Erdös [1991] E. Niebur and P. Erdös, Biophysical Journal 60, 1132 (1991), ISSN 0006-3495, URL http://www.sciencedirect.com/science/article/pii/S000634959182149X.
* Lebois et al. [2012] F. Lebois, P. Sauvage, C. Py, O. Cardoso, B. Ladoux, P. Hersen, and J.-M. D. Meglio, Biophysical Journal 102, 2791 (2012), ISSN 0006-3495, URL http://www.sciencedirect.com/science/article/pii/S0006349512005516.
* Berri et al. [2009] S. Berri, J. H. Boyle, M. Tassieri, I. A. Hope, and N. Cohen, HFSP Journal 3, 186 (2009), URL https://doi.org/10.2976%2F1.3082260.
* Acebron et al. [2005] J. A. Acebron, L. L. Bonilla, C. J. P. Vicente, F. Ritort, and R. Spigler, Reviews of Modern Physics 77, 137 (2005).
* Kuramoto [1975] Y. Kuramoto, in _Int. Symposium on Mathematical Problems in Theoretical Physics_, edited by H. Araki (Springer, 1975), vol. 39 of _Lecture Notes in Physics_, pp. 420–422.
* Kuramoto and Nishikawa [1987] Y. Kuramoto and I. Nishikawa, Journal of Statistical Physics 49, 569 (1987), URL https://doi.org/10.1007%2Fbf01009349.
* Aeyels and Rogge [2004] D. Aeyels and J. A. Rogge, Progress of Theoretical Physics 112, 921 (2004).
* Zheng et al. [1998] Z. Zheng, G. Hu, and B. Hu, Phys. Rev. Lett. 81, 5318 (1998).
* Ottino-Löffler and Strogatz [2016] B. Ottino-Löffler and S. H. Strogatz, Physical Review E 94, 062203 (2016), URL https://doi.org/10.1103%2Fphysreve.94.062203.
* Tamm and Horridge [1970] S. L. Tamm and G. A. Horridge, Proceedings of the Royal Society of London. Series B, Biological Sciences 175, 219 (1970), ISSN 00804649, URL http://www.jstor.org/stable/76025.
* Gilpin et al. [2020] W. Gilpin, M. S. Bull, and M. Prakash, Nature Reviews Physics 2, 74 (2020), URL https://doi.org/10.1038%2Fs42254-019-0129-0.

Figure 13: We plot phase $\theta$ vs time for 11 consecutive oscillators for the directed chain oscillator model with the same parameters as shown in Figure 9. Each oscillator is plotted with a different color and the oscillator indices are given in the key. The figure shows the periodic compression and rarefaction of phase in the entrained state. The region of lower phase velocity, between $\theta\approx 0.25\pi$ and $0.82\pi$, is marked with the vertical thick light gray lines. The inverse of the local slope of one of the curves gives the phase velocity and the horizontal distance between neighboring curves gives the phase delay between consecutive oscillators.

## Appendix A Compression and rarefaction in entrained states

The integration shown in Figure 9 of the model given by equation 10 shows that each oscillator has a periodic trajectory but with two regions. One region has a low phase delay and phase velocity and the other region has a higher phase delay and phase velocity. We can also show this behavior by plotting phase angle $\theta$ against time for a series of oscillators. This type of plot is often used to study shock compression or rarefaction. On this plot, the inverse of the slope gives the phase velocity and the horizontal distance between consecutive lines gives the phase delay. We show such a plot in Figure 13 for an integration with the same parameters as in Figure 9. We plot phase $\theta$ vs time for 11 consecutive oscillators after integrating to $t=1000$ and for a duration of $\Delta t=10$.
The region of lower phase velocity lies between the thick gray vertical lines, which are at $\theta=0.25\pi$ and $0.82\pi$.

We make the assumption that an entrained state has two regions, like those seen at the end of the integrations shown in Figure 9 and in Figure 13. For our model in equation 10, which we repeat here for clarity, $\frac{d\theta_{i}}{dt}\omega_{0}^{-1}=1-\frac{K}{2}\left[\tanh\left(\frac{\cos\theta_{i-1}-\cos\theta_{i}-\beta}{h_{ol}}\right)+1\right],$ (18) the high value of the phase velocity is the intrinsic phase velocity $\omega_{0}$ and the low value is $\omega_{0}(1-K)$. An entrained state has a phase delay $\tau$ where $\theta_{j}(t+\tau)=\theta_{j+1}(t).$ (19) We expand the left side to first order in $\tau$ and write $\theta_{j+1}$ in terms of the phase delay $\chi_{j}=\theta_{j+1}-\theta_{j}$, giving $\dot{\theta}_{j}(t)\tau\approx\chi_{j}.$ (20) We denote the phase delay for the slower state as $\chi_{s}$ and that of the faster state as $\chi_{f}$. Equation 20 gives $\displaystyle\chi_{s}$ $\displaystyle\approx\omega_{0}(1-K)\tau$ $\displaystyle\chi_{f}$ $\displaystyle\approx\omega_{0}\tau.$ (21)

In the fast and slow regions, the phase velocity is constant and phase differences between oscillators are maintained. The properties of the entrained states must be set by the transition regions. We consider two oscillators, one in the slow region and the other exiting the slow region. We can estimate the change in phase delay between the two regions from the time $\Delta t_{fs}$ it takes a single oscillator to exit the slow region, $\chi_{f}-\chi_{s}\approx\Delta t_{fs}\ \tilde{\omega},$ (22) where $\tilde{\omega}\approx(1-K/2)\omega_{0}$ (23) is the average phase velocity. We use equation 22 to estimate the phase delay $\tau$. For small phase delay $\chi_{j-1}=\theta_{j}-\theta_{j-1}$, equation 18 can be written to first order in the phase shift $\chi_{j-1}$ as $\frac{d\theta_{j}}{dt}\omega_{0}^{-1}\approx 1-\frac{K}{2}\left[\tanh\left(\frac{\sin\theta_{j}\ \chi_{j-1}-\beta}{h_{ol}}\right)+1\right].$ (24) We estimate the time $\Delta t_{fs}$ it takes oscillator $j$ to pass through the transition from the slow to the fast region from the time it takes $|\sin\theta_{j}\chi_{j-1}|/h_{ol}$ to change by about 2 (corresponding to the region of high slope for the tanh function). This transition time is approximately $\Delta t_{fs}\sim 2h_{ol}\left|\cos\theta_{j}\frac{d\theta_{j}}{dt}\chi_{j-1}\right|^{-1}.$ (25) We assume that the transition boundaries are where $|\cos\theta|\sim 1$ and take an average of the fast and slow values for $\frac{d\theta}{dt}$ and $\chi$ (using equations 21 and 23) to estimate the duration of the transition from a fast to slow region or vice versa, $\displaystyle\Delta t_{fs}$ $\displaystyle\sim\frac{8h_{ol}}{(\chi_{f}+\chi_{s})(2-K)\omega_{0}}.$ (26) Using equations 22 and 21 we estimate the delay $\tau$, $\tau\sim\frac{2}{\omega_{0}}\sqrt{\frac{h_{ol}}{K(2-K)}}$ (27) and the wavelength $N_{\lambda}\sim\frac{2\pi}{\tilde{\omega}\tau}\sim 2\pi\sqrt{\frac{K}{(2-K)h_{ol}}}.$ (28)

For $h_{ol}=0.05$ and $K=0.5$ this gives $N_{\lambda}\sim 16$, which is a reasonable value but exceeds the value of 12 we see in the integration shown in Figure 13. We verified that $N_{\lambda}$ decreases with increasing $h_{ol}$, though it does not decrease as quickly as predicted by equation 28. A better prediction would take into account the phases of the transitions and the difference between compression and rarefaction transitions.
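As a quick numerical check of equations 27 and 28 for the quoted parameters (a minimal sketch; only the arithmetic stated in the text is involved):

```python
import numpy as np

# Evaluate the entrained-state estimates of equations 27 and 28 for the
# parameters quoted above: h_ol = 0.05, K = 0.5, omega_0 = 1.
h_ol, K, omega0 = 0.05, 0.5, 1.0
tau = (2.0 / omega0) * np.sqrt(h_ol / (K * (2.0 - K)))
N_lambda = 2.0 * np.pi * np.sqrt(K / ((2.0 - K) * h_ol))
print(tau, N_lambda)   # N_lambda is approximately 16, as stated in the text
```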
The comparison between estimated and numerically measured wavelengths suggests that techniques used to study non-linear differential equations may be useful for predicting the properties of entrained states.

## Appendix B Predicting body positions and shapes from a phase oscillator model

In this section we show how we compute eel body shapes and positions from an oscillator chain model. We assume the eel head positions are described by a chain of oscillators as illustrated in Figure 8. We assume that waves propagate down the body with a constant speed $v$. We adopt a Cartesian coordinate system ${\bf X}=(X,Y)$ on the plane to describe positions of points on the body of a chain of eels, as shown in Figure 8b. We assume the mean centerline position of the $i$-th eel’s head has coordinates ${\bf X}_{i,hc}$ and the eel’s mean centerline is tilted by angle $\phi_{\rm tilt}$ with respect to the horizontal direction. We assume that the mean centerline head positions are fixed and are equally spaced on the $X$ axis, ${\bf X}_{i,hc}=\begin{pmatrix}iD\\\ 0\end{pmatrix},$ (29) where $D$ is the horizontal distance between the mean centerlines. We assume the wave travels down the body with velocity $v$, as given in equation 13, which we repeat here: $y_{i}(x,t)=A\cos\left[\theta_{i}\left(t-\frac{x}{v}\right)\right].$ (30) The $i$-th eel’s head position is at $y_{i}(x=0,t)$. Here $x$ is the distance along the mean centerline and $y_{i}$ is the distance perpendicular to it. We rotate the centerlines by $\phi_{\rm tilt}$ so that in the $(X,Y)$ coordinate system the head of the $i$-th eel is at $\displaystyle{\bf X}_{i,h}(t)$ $\displaystyle=\begin{pmatrix}\cos\phi_{\rm tilt}&-\sin\phi_{\rm tilt}\\\ \sin\phi_{\rm tilt}&\cos\phi_{\rm tilt}\end{pmatrix}\begin{pmatrix}0\\\ A\cos(\theta_{i}(t))\end{pmatrix}$ $\displaystyle\ \ \ \ +{\bf X}_{i,hc}.\qquad\qquad\qquad$ (31) We can use the coordinate along the mean centerline $x$ to specify body positions $\displaystyle{\bf X}_{i}(x,t)$ $\displaystyle=\begin{pmatrix}\cos\phi_{\rm tilt}&-\sin\phi_{\rm tilt}\\\ \sin\phi_{\rm tilt}&\cos\phi_{\rm tilt}\end{pmatrix}\begin{pmatrix}x\\\ A\cos\left[\theta_{i}\left(t-\frac{x}{v}\right)\right]\end{pmatrix}$ $\displaystyle\ \ \ \ +{\bf X}_{i,hc}.\qquad\qquad\qquad$ (32) With $x=0$, this is consistent with equation 31 for the $i$-th eel’s head. Using equation 30, at $t=0$, the $y$ position of the $i$-th eel is determined by its head position at an earlier time, $\displaystyle y_{i}(x,t=0)$ $\displaystyle=A\cos\left[\theta_{i}\left(-\frac{x}{v}\right)\right],$ (33) where the earlier time is $t^{\prime}=-\frac{x}{v}.$ (34) Using a phase oscillator model we can generate arrays of phases $\theta_{i}$ at a series of times. The arrays at different output times $t^{\prime}$ can then be used to predict the ${\bf X}$ positions at $t=0$ along the eels’ bodies; $\displaystyle{\bf X}_{i}(t^{\prime})$ $\displaystyle=\begin{pmatrix}\cos\phi_{\rm tilt}&-\sin\phi_{\rm tilt}\\\ \sin\phi_{\rm tilt}&\cos\phi_{\rm tilt}\end{pmatrix}\begin{pmatrix}-vt^{\prime}\\\ A\cos\left[\theta_{i}\left(t^{\prime}\right)\right]\end{pmatrix}$ $\displaystyle\ \ \ \ +\begin{pmatrix}iD\\\ 0\end{pmatrix},\qquad\qquad\qquad$ (35) where we have used equations 29, 32 and 34. From a series of phase arrays computed at different output times for the phase oscillator model of equation 10 we can generate eel body positions using equation 35.
To do this we require values for the velocity of waves along the eel body $v$, the amplitude $A$, the horizontal distance between the mean positions of organism heads $D$ and the body tilt angle $\phi_{\rm tilt}$. Also, the outputs of the integration must be put in units of time using the intrinsic oscillator phase velocity $\omega_{0}$.
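A minimal sketch of equation 35 follows (ours, in Python with NumPy; the array layout and function name are illustrative, and the parameter values are those used for Figure 12a).

```python
import numpy as np

# Sketch of equation 35: body positions of the i-th eel from a time series
# of phase arrays. thetas[k, i] holds the phase of oscillator i at output
# time t_prime[k], with t_prime <= 0 so that x = -v * t_prime >= 0.
A, D, v = 0.07, 0.11, 4.1                  # mm, mm, mm/s (Figure 12a values)
phi_tilt = np.deg2rad(20.0)
c, s = np.cos(phi_tilt), np.sin(phi_tilt)

def body_positions(t_prime, thetas, i):
    """Return X, Y coordinates along the body of the i-th eel."""
    x = -v * t_prime                       # distance along the mean centerline
    y = A * np.cos(thetas[:, i])           # perpendicular displacement
    return c * x - s * y + i * D, s * x + c * y
```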
# Classification of $K$-type formulas for the Heisenberg ultrahyperbolic operator $\square_{s}$ for $\widetilde{SL}(3,\mathbb{R})$ and tridiagonal determinants for local Heun functions

Toshihisa Kubo and Bent Ørsted Faculty of Economics, Ryukoku University, 67 Tsukamoto-cho, Fukakusa, Fushimi-ku, Kyoto 612-8577, Japan <EMAIL_ADDRESS>Department of Mathematics, Aarhus University, Ny Munkegade 118 DK-8000 Aarhus C Denmark<EMAIL_ADDRESS>

###### Abstract.

The $K$-type formulas of the space of $K$-finite solutions to the Heisenberg ultrahyperbolic equation $\square_{s}f=0$ for the non-linear group $\widetilde{SL}(3,\mathbb{R})$ are classified. This completes a previous study of Kable for the linear group $SL(m,\mathbb{R})$ in the case of $m=3$, as well as generalizes our earlier results on a certain second order differential operator. As a by-product we also show several properties of certain sequences $\\{P_{j}(x;y)\\}_{j=0}^{\infty}$ and $\\{Q_{j}(x;y)\\}_{j=0}^{\infty}$ of tridiagonal determinants, whose generating functions are given by local Heun functions. In particular, it is shown that these sequences satisfy a certain arithmetic-combinatorial property, which we refer to as a _palindromic property_. We further show that the classical sequences of Cayley continuants $\\{\mathrm{Cay}_{j}(x;y)\\}_{j=0}^{\infty}$ and Krawtchouk polynomials $\\{\mathcal{K}_{j}(x;y)\\}_{j=0}^{\infty}$ also admit this property. In the end a new proof of Sylvester’s formula for a certain tridiagonal determinant $\mathrm{Sylv}(x;n)$ is provided from a representation theory point of view.

###### Key words and phrases: intertwining differential operator, Heisenberg ultrahyperbolic operator, Peter–Weyl theorem for solution spaces, $K$-type solution, polynomial solution, hypergeometric differential equation, Heun’s differential equation, tridiagonal determinant, Sylvester determinant, Cayley continuant, palindromic property.

###### 2020 Mathematics Subject Classification: 22E46, 17B10, 05B20, 33C05, 33C45, 33E30.

###### Contents

1. 1 Introduction
2. 2 Peter–Weyl theorem for the space of $K$-finite solutions
3. 3 Specialization to $(\widetilde{SL}(3,\mathbb{R}),B)$
4. 4 Heisenberg ultrahyperbolic operator for $\widetilde{SL}(3,\mathbb{R})$
5. 5 Hypergeometric model $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})f(t)=0$
6. 6 Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$
7. 7 Sequences $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ of tridiagonal determinants
8. 8 Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ and Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$
9. 9 Appendix A: local Heun functions
10. 10 Appendix B: A proof of Sylvester’s formula

## 1\. Introduction

The representation theory of a reductive Lie group $G$ is intimately tied to the analysis on corresponding flag manifolds $G/P$, where $P$ is a parabolic subgroup. For a representation of $P$, one considers the corresponding homogeneous vector bundle over the flag manifold $G/P$ and the space of sections, either in the smooth sense or in some square integrable sense. Understanding the structure of this space of sections is crucial; for example, one asks whether there are interesting subspaces invariant under the left regular action of $G$. This might happen, for example, if a $G$-invariant differential equation is given on the space of sections, so that its solutions form an invariant subspace.
In order to analyze such a space of solutions, it is convenient to use the analogue of Fourier series, namely, decomposition via the maximal compact subgroup $K$ in $G$; here $G=KP$, so the flag manifold $G/P$ is also a homogeneous space for $K$, and sections can be described according to how they transform under the action of $K$. In the case that we consider in this paper, there will be a one-parameter family $\square_{s}$ of natural invariant differential operators and explicit spaces of solutions. It turns out that these will be related to classical function theory, namely, hypergeometric functions and local Heun functions, as well as to identities analogous to classical tridiagonal determinants of Sylvester type and Cayley type. Further, the representations obtained from the solution space to the differential equation $\square_{s}f=0$ include ones which are to be thought of as minimal representations (in some sense). We provide a new aspect on a connection between the representation theory of reductive groups, ordinary differential equations in the complex domains, and sequences of polynomials.

The aim of this paper is threefold. The first is the classification of $K$-type formulas of the space of $K$-finite solutions to the Heisenberg ultrahyperbolic equation $\square_{s}f=0$ for $\widetilde{SL}(3,\mathbb{R})$. The second is a study of certain sequences $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ of tridiagonal determinants arising from the study of the $K$-finite solutions to $\square_{s}f=0$. The third is an application of the arguments for $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ to the classical sequences of Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ and Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$. We now describe these three topics in detail.
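As background for the tridiagonal determinants mentioned above, the following minimal sketch (ours, in Python with SymPy; the paper's own $\mathrm{Sylv}(x;n)$, $P_{k}$ and $Q_{k}$ are defined later in the text and may be normalized differently) checks the classical Sylvester determinant identity from the representation-theoretic point of view taken in Appendix B: the tridiagonal matrix with diagonal $x$, superdiagonal $1,\dots,n$ and subdiagonal $n,\dots,1$ is $x$ plus the matrix of $E+F$ on the $(n+1)$-dimensional irreducible representation of $\mathfrak{sl}(2)$, and since $E+F$ is conjugate to $H$, its determinant is $\prod_{k=0}^{n}(x+n-2k)$.

```python
import sympy as sp

# Check the classical Sylvester tridiagonal determinant identity:
# det(x*I + M) = prod_{k=0}^{n} (x + n - 2k), where M has zero diagonal,
# superdiagonal 1, ..., n and subdiagonal n, ..., 1 (the matrix of E + F
# in the basis x^k y^{n-k} of the degree-n irreducible of sl(2)).
x = sp.symbols('x')
n = 4
M = sp.zeros(n + 1, n + 1)
for i in range(n + 1):
    M[i, i] = x
    if i < n:
        M[i, i + 1] = i + 1        # superdiagonal 1, ..., n
        M[i + 1, i] = n - i        # subdiagonal n, ..., 1
lhs = sp.expand(M.det())
rhs = sp.expand(sp.prod([x + n - 2 * k for k in range(n + 1)]))
assert lhs == rhs
```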
### 1.1. Heisenberg ultrahyperbolic operator $\square_{s}$

For the moment, let ${\mathfrak{g}}=\mathfrak{sl}(m,\mathbb{C})$ with $m\geq 3$ and take a real form ${\mathfrak{g}}_{0}$ of ${\mathfrak{g}}$ such that there exists a parabolic subalgebra ${\mathfrak{p}}_{0}={\mathfrak{m}}_{0}\oplus{\mathfrak{a}}_{0}\oplus{\mathfrak{n}}_{0}$ with Heisenberg nilpotent radical ${\mathfrak{n}}_{0}$, namely, $\dim_{\mathbb{R}}[{\mathfrak{n}}_{0},{\mathfrak{n}}_{0}]=1$. The Heisenberg condition on ${\mathfrak{n}}_{0}$ forces ${\mathfrak{g}}_{0}$ to be either $\mathfrak{sl}(m,\mathbb{R})$ or $\mathfrak{su}(p,q)$ with $p+q=m$. We write ${\mathfrak{p}}={\mathfrak{m}}\oplus{\mathfrak{a}}\oplus{\mathfrak{n}}$ for the complexification of ${\mathfrak{p}}_{0}={\mathfrak{m}}_{0}\oplus{\mathfrak{a}}_{0}\oplus{\mathfrak{n}}_{0}$. We write $\mathcal{U}({\mathfrak{g}})$ for the universal enveloping algebra of the complexified Lie algebra ${\mathfrak{g}}={\mathfrak{g}}_{0}\otimes_{\mathbb{R}}\mathbb{C}$. In [18], under the framework of ${\mathfrak{g}}$ and ${\mathfrak{p}}$ as above, Kable introduced a one-parameter family $\square_{s}$ of differential operators with $s\in\mathbb{C}$ as an example of conformally invariant systems ([2, 3]). The operator $\square_{s}$ is referred to as the _Heisenberg ultrahyperbolic operator_ ([18]). For instance, for $m=3$, it is defined as $\square_{s}=R((XY+YX)+s[X,Y]),$ (1.1) where $R$ denotes the infinitesimal right translation and $X,Y$ are certain nilpotent elements in $\mathfrak{sl}(3,\mathbb{R})$ (see (3.1)).

The differential operator $\square_{s}$ for $m=3$ in (1.1) in particular recovers the second order differential operator studied in [27] as the case of $s=0$. Some algebraic and analytic properties of the Heisenberg ultrahyperbolic operator $\square_{s}$ as well as its generalizations are investigated in [17, 18, 19, 20] for the linear group $SL(m,\mathbb{R})$.

From a viewpoint of intertwining operators, the Heisenberg ultrahyperbolic operator $\square_{s}$ is an intertwining differential operator between parabolically induced representations for $G\supset P$, where $G$ and $P$ are Lie groups with Lie algebras ${\mathfrak{g}}_{0}$ and ${\mathfrak{p}}_{0}$, respectively. Thus the space $\mathcal{S}ol(\square_{s})$ of smooth solutions to the equation $\square_{s}f=0$ in the induced representation is a subrepresentation of $G$, and also the space $\mathcal{S}ol(\square_{s})_{K}$ of $K$-finite solutions is a $({\mathfrak{g}},K)$-module. We then consider the following problem.

###### Problem 1.2.

Classify the $K$-type formulas for $\mathcal{S}ol(\square_{s})_{K}$.

In [18], an attempt in this direction was made for $G=SL(m,\mathbb{R})$. In that paper, Kable introduced a notion of _$\mathcal{H}$-modules_ and developed an algebraic theory for them. Although the theory is powerful enough to compute the dimension of the ${\mathfrak{k}}\cap{\mathfrak{m}}$-invariant subspace of the space of $K$-type solutions for each $K$-type in $\mathcal{S}ol(\square_{s})_{K}$, the determination of the explicit $K$-type formulas was not achieved. In this paper, we consider $G=\widetilde{SL}(3,\mathbb{R})$, the universal covering group of $SL(3,\mathbb{R})$. By making use of a technique from our earlier paper [27], we successfully classified the $K$-type formulas of $\mathcal{S}ol(\square_{s})_{K}$ for $\widetilde{SL}(3,\mathbb{R})$. Our method is different from the algebraic theory used in [18]; we rather utilize differential equations. In order to describe our results in more detail we next introduce some notation.

### 1.2. $K$-type formulas

For the rest of this introduction let $G=\widetilde{SL}(3,\mathbb{R})$ with Lie algebra ${\mathfrak{g}}_{0}$. Fix a minimal parabolic subgroup $B$ of $G$ with Langlands decomposition $B=MAN$. Here the subgroup $M$ is isomorphic to the quaternion group $Q_{8}$ of order 8. Let $K$ be a maximal compact subgroup of $G$ so that $G=KAN$ is an Iwasawa decomposition of $G$. We have $K\simeq SU(2)\simeq Spin(3)$. Let $\mathrm{Irr}(M)$ and $\mathrm{Irr}(K)$ denote the set of equivalence classes of irreducible representations of $M$ and $K$, respectively. As $M\simeq Q_{8}$, the set $\mathrm{Irr}(M)$ may be given as $\mathrm{Irr}(M)=\\{\textnormal{\mbox{\smaller($+$,$+$)}},\,\textnormal{\mbox{\smaller($+$,$-$)}},\,\textnormal{\mbox{\smaller($-$,$+$)}},\,\textnormal{\mbox{\smaller($-$,$-$)}},\,\mathbb{H}\\},$ where $(\pm,\pm)$ are certain characters (see Section 3.4 for the definition) and $\mathbb{H}$ stands for the unique two-dimensional genuine representation of $M$. The character ($+$,$+$) is, for instance, the trivial character. Let $\left(n/2\right)$ denote the irreducible finite-dimensional representation of $K\simeq Spin(3)$ with dimension $n+1$.
Then we have $\mathrm{Irr}(K)=\left\\{\left(n/2\right):n\in\mathbb{Z}_{\geq 0}\right\\}.$ For $\sigma\in\mathrm{Irr}(M)$ and a character $\lambda$ of $A$, we write $I(\sigma,\lambda)=\mathrm{Ind}_{B}^{G}(\sigma\otimes(\lambda+\rho)\otimes\mathbb{1})$ for the representation of $G$ induced from the representation $\sigma\otimes(\lambda+\rho)\otimes\mathbb{1}$ of $B=MAN$, where $\rho$ is half the sum of the positive roots corresponding to $B$. We realize the induced representation $I(\sigma,\lambda)$ on the space of smooth sections of a $G$-equivariant homogeneous vector bundle over $G/B$. Let $\mathrm{Diff}_{G}(I(\sigma_{1},\lambda_{1}),I(\sigma_{2},\lambda_{2}))$ denote the space of intertwining differential operators from $I(\sigma_{1},\lambda_{1})$ to $I(\sigma_{2},\lambda_{2})$.

Let ${\mathfrak{g}}$ denote the complexified Lie algebra of ${\mathfrak{g}}_{0}=\mathfrak{sl}(3,\mathbb{R})$. Since ${\mathfrak{g}}=\mathfrak{sl}(3,\mathbb{C})$, the Heisenberg ultrahyperbolic operator $\square_{s}$ is given as in (1.1). We then set $D_{s}:=(XY+YX)+s[X,Y]\in\mathcal{U}({\mathfrak{g}}),$ (1.3) so that $\square_{s}=R(D_{s})$. It follows from Proposition 4.3 that we have $R(D_{s})\in\mathrm{Diff}_{G}(I(\textnormal{\mbox{\smaller($+$,$+$)}},-\widetilde{\rho}(s)),I(\textnormal{\mbox{\smaller($-$,$-$)}},\widetilde{\rho}(-s))),$ where $\widetilde{\rho}(s)$ is a certain weight determined by $\widetilde{\rho}:=\rho/2$ (see (4.1)). It is further shown that, for $\sigma\in\mathrm{Irr}(M)$, $R(D_{s})\otimes\mathrm{id}_{\sigma}\in\mathrm{Diff}_{G}(I(\sigma,-\widetilde{\rho}(s)),I(\textnormal{\mbox{\smaller($-$,$-$)}}\otimes\sigma,\widetilde{\rho}(-s))),$ where $\mathrm{id}_{\sigma}$ denotes the identity map on $\sigma$. For notational convenience we consider $R(D_{\bar{s}})$ in place of $R(D_{s})$, where $\bar{s}$ denotes the complex conjugate of $s\in\mathbb{C}$ (see Section 4.1 for the details). We define $\displaystyle\mathcal{S}ol(s;\sigma)$ $\displaystyle:=\text{the space of smooth solutions to $(R(D_{\bar{s}})\otimes\mathrm{id}_{\sigma})f=0$},$ $\displaystyle\mathcal{S}ol(s;\sigma)_{K}$ $\displaystyle:=\text{the space of $K$-finite solutions to $(R(D_{\bar{s}})\otimes\mathrm{id}_{\sigma})f=0$}.$ (1.4)

It follows from a Peter–Weyl theorem for the solution space (Theorem 2.11) that the $({\mathfrak{g}},K)$-module $\mathcal{S}ol(s;\sigma)_{K}$ decomposes as $\mathcal{S}ol(s;\sigma)_{K}\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}(n/2)\otimes\mathrm{Hom}_{M}\left(\mathrm{Sol}(s;n),\sigma\right),$ (1.5) where $\mathrm{Sol}(s;n)$ is the space of $K$-type solutions to $D_{s}$ (without complex conjugation on $s$) on the $K$-type $(n/2)=(\delta,V_{(n/2)})\in\mathrm{Irr}(K)$, that is, $\mathrm{Sol}(s;n)=\\{v\in V_{(n/2)}:d\delta(D^{\flat}_{s})v=0\\}$ (1.6) (see (4.7) and (4.1)). Here $d\delta$ is the differential of $\delta$ and $D^{\flat}_{s}$ denotes the compact model of $D_{s}$ (see Definition 2.5). Then the $K$-type decomposition of $\mathcal{S}ol(s;\sigma)_{K}$ for $(s,\sigma)\in\mathbb{C}\times\mathrm{Irr}(M)$ is explicitly given as follows.

###### Theorem 1.7.

The following conditions on $(\sigma,s)\in\mathrm{Irr}(M)\times\mathbb{C}$ are equivalent.

1. (i) $\mathcal{S}ol(s;\sigma)\neq\\{0\\}$.
2. (ii) One of the following conditions holds.
   * $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}:$ $s\in\mathbb{C}$.
   * $\sigma=\textnormal{\mbox{\smaller($-$,$-$)}}:$ $s\in\mathbb{C}$.
   * $\sigma=\textnormal{\mbox{\smaller($-$,$+$)}}:$ $s\in 1+4\mathbb{Z}$.
   * $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}:$ $s\in 3+4\mathbb{Z}$.
   * $\sigma=\mathbb{H}:$ $s\in 2\mathbb{Z}$.

Further, the $K$-type formulas for $\mathcal{S}ol(s;\sigma)_{K}$ are given as follows.

1. (1) $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}:$ $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$+$)}})_{K}\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}\left(2n\right)\quad\textnormal{for all $s\in\mathbb{C}$.}$
2. (2) $\sigma=\textnormal{\mbox{\smaller($-$,$-$)}}:$ $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($-$,$-$)}})_{K}\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}\left(1+2n\right)\quad\textnormal{for all $s\in\mathbb{C}$.}$
3. (3) $\sigma=\textnormal{\mbox{\smaller($-$,$+$)}}:$ $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($-$,$+$)}})_{K}\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}((|s|+1)/2+2n)\quad\textnormal{for $s\in 1+4\mathbb{Z}$}.$
4. (4) $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}:$ $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$-$)}})_{K}\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}((|s|+1)/2+2n)\quad\textnormal{for $s\in 3+4\mathbb{Z}$}.$
5. (5) $\sigma=\mathbb{H}:$ $\mathcal{S}ol(s;\mathbb{H})_{K}\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}((|s|+1)/2+2n)\quad\textnormal{for $s\in 2\mathbb{Z}$}.$

We shall deduce Theorem 1.7 from Theorem 5.36 at the end of Section 5. The proof is not a case-by-case analysis on $\sigma\in\mathrm{Irr}(M)$; each $M$-representation $\sigma$ is treated uniformly via a recipe for the $K$-type decomposition of $\mathcal{S}ol(s;\sigma)_{K}$. See Section 4.4 for the details of the recipe. In regard to Theorem 1.7, we shall also classify the space $\mathrm{Sol}(s;n)$ of $K$-type solutions for all $n\in\mathbb{Z}_{\geq 0}$. This is done in Theorems 5.16 and 6.7. Recall from (1.4) that $\mathcal{S}ol(s;\sigma)_{K}$ concerns $R(D_{\bar{s}})$. Theorem 1.7 shows that the $K$-type decompositions are in fact independent of taking the complex conjugate on the parameter $s\in\mathbb{C}$.

There are several remarks on the space $\mathrm{Sol}(s;n)$ of $K$-type solutions and the $K$-type decompositions of $\mathcal{S}ol(s;\sigma)_{K}$. First, as mentioned above, the dimensions of the ${\mathfrak{k}}\cap{\mathfrak{m}}$-invariant subspaces of the spaces of $K$-type solutions are determined in [18] for $SL(m,\mathbb{R})$ with arbitrary rank $m\geq 3$. In particular, as ${\mathfrak{k}}\cap{\mathfrak{m}}=\\{0\\}$ for $m=3$, the dimensions $\dim_{\mathbb{C}}\mathrm{Sol}(s;n)$ for $n\in 2\mathbb{Z}_{\geq 0}$ are obtained in the cited paper ([18, Thm. 5.13]). We note that our normalization on $s\in\mathbb{C}$ differs from the one for $z\in\mathbb{C}$ in [18] by $s=-2z$. In that paper, factorization formulas of certain tridiagonal determinants are essential to compute $\dim_{\mathbb{C}}\mathrm{Sol}(s;n)$. For further details on the factorization formulas, see the remark after (1.22) below. By making use of differential equations, we determine the dimensions $\dim_{\mathbb{C}}\mathrm{Sol}(s;n)$ for all $n\in\mathbb{Z}_{\geq 0}$ independently of the results of [18]. Table 1 summarizes the results of [18] and this paper, concerning the $K$-type decompositions of $\mathcal{S}ol(s;\sigma)_{K}$ for ${\mathfrak{g}}_{0}=\mathfrak{sl}(3,\mathbb{R})$.
Comparison between [18] and this paper for ${\mathfrak{g}}_{0}=\mathfrak{sl}(3,\mathbb{R})$

${\mathfrak{g}}_{0}=\mathfrak{sl}(3,\mathbb{R})$ | $\dim_{\mathbb{C}}\mathrm{Sol}(s;n)$ | $K$-type decomposition of $\mathcal{S}ol(s;\sigma)_{K}$
---|---|---
[18] | Done for $SL(3,\mathbb{R})$ | Not obtained
this paper | Done for $\widetilde{SL}(3,\mathbb{R})$ | Done for $\widetilde{SL}(3,\mathbb{R})$

Secondly, Tamori has recently investigated in [36] the representations on the space of smooth solutions to the same differential equation as the Heisenberg ultrahyperbolic equation $R(D_{s})f=0$, as part of his thorough study of minimal representations. In particular, the $K$-type formula of $\mathcal{S}ol(s;\mathbb{H})_{K}$ for the case $\sigma=\mathbb{H}$ is determined in [36, Prop. 5.4.6]. The representations on $\mathcal{S}ol(s;\mathbb{H})$ for $s\in 2\mathbb{Z}$ are genuine representations and are not unitarizable unless $s=0$ ([36, Rem. 5.4.5]). We remark that, both in [36] and in this paper, the $K$-type formula for $\mathcal{S}ol(s;\mathbb{H})_{K}$ is obtained by realizing the $K$-type solutions in $\mathrm{Sol}(s;n)$ as hypergeometric polynomials; nevertheless, the methods are rather different. For instance, some combinatorial computations are carried out in [36], whereas we simply solve the hypergeometric equation. Further, we also express the $K$-type solutions in terms of Heun polynomials. We illustrate our methods in detail in Section 1.3. Lastly, the $K$-type formulas $\mathcal{S}ol(0;\sigma)_{K}$ for the case $s=0$ were previously classified by the authors in [27, Thm. 1.6]. In this case only three $M$-representations $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}$, ($-$,$-$), $\mathbb{H}$ contribute to $\mathcal{S}ol(0;\sigma)\neq\\{0\\}$, with explicit $K$-type formulas $\displaystyle\mathcal{S}ol(0;\textnormal{\mbox{\smaller($+$,$+$)}})_{K}$ $\displaystyle\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}\left(2n\right),$ $\displaystyle\mathcal{S}ol(0;\textnormal{\mbox{\smaller($-$,$-$)}})_{K}$ $\displaystyle\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}\left(1+2n\right),$ $\displaystyle\mathcal{S}ol(0;\mathbb{H})_{K}$ $\displaystyle\simeq\bigoplus_{n\in\mathbb{Z}_{\geq 0}}\left((1/2)+2n\right).$ It was quite mysterious that only the series $(3/2)+2\mathbb{Z}_{\geq 0}$ did not appear in the $K$-type formulas. Theorem 1.7 now gives an answer to this question. The representations realized on $\mathcal{S}ol(0;\sigma)$ for $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}$, ($-$,$-$), $\mathbb{H}$ are known to be unitarizable, and the resulting representations are the ones attached to the minimal nilpotent orbit ([33]). For instance, the representation realized on $\mathcal{S}ol(0;\mathbb{H})$ is the genuine representation known as Torasso’s representation ([39]). Here are two remarks on recent progress on the unitarity of the representations on $\mathcal{S}ol(0;\sigma)$. First, Dahl recently constructed in [9] the unitary structures for the three unitarizable representations on $\mathcal{S}ol(0;\sigma)$ by using the Knapp–Stein intertwining operator and the Fourier transform on the Heisenberg group. Furthermore, Frahm has also recently given the $L^{2}$-models of these unitary representations in [11], as part of his intensive study of the $L^{2}$-realizations of the minimal representations realized on the solution spaces of conformally invariant systems.
### 1.3.
Hypergeometric and Heun’s differential equations
The main idea for accomplishing Theorem 1.7 is the use of the decomposition formula (1.5) together with a refinement of the method applied to the case $s=0$ in [27]. The central problem is classifying the space $\mathrm{Sol}(s;n)$ of $K$-type solutions. In order to do so, one needs to carry out the following two steps.
1. Step 1: Classify $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ so that $\mathrm{Sol}(s;n)\neq\\{0\\}$.
2. Step 2: Classify the $M$-representations on $\mathrm{Sol}(s;n)\neq\\{0\\}$.

In [27] (the case $s=0$), via the polynomial realization $\mathrm{Irr}(K)\simeq\\{(\pi_{n},\mathrm{Pol}_{n}[t]):n\in\mathbb{Z}_{\geq 0}\\}$ (1.8) of $\mathrm{Irr}(K)$ (see (3.14)), we carried out the two steps by realizing $\mathrm{Sol}(0;n)$ in (1.6) as $\mathrm{Sol}(0;n)=\\{p(t)\in\mathrm{Pol}_{n}[t]:d\pi^{\textnormal{I}}_{n}(D^{\flat}_{0})p(t)=0\\},$ the space of polynomial solutions to an ordinary differential equation $d\pi^{\textnormal{I}}_{n}(D^{\flat}_{0})p(t)=0$ via a certain identification $\Omega^{\textnormal{I}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$ (see Sections 3.2 and 4.2). It turned out that the differential equation $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{0})p(t)=0$ is a hypergeometric differential equation. Steps 1 and 2 were then easily carried out by observing the Gauss hypergeometric functions arising from $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{0})p(t)=0$. In this paper we apply the same idea to the general case $s\in\mathbb{C}$. Nonetheless, in the general case, Steps 1 and 2 are not as simple as in the case $s=0$, as the differential equation $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})p(t)=0$ turns out to be Heun’s differential equation $\mathcal{D}_{H}(-1,-\frac{ns}{4};-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},\frac{1-n-s}{2};t^{2})p(t)=0$ (1.9) with the $P$-symbol $P\left\\{\begin{matrix}0&1&-1&\infty&&\\\ 0&0&0&-\frac{n}{2}&t^{2}&-\frac{ns}{4}\\\ \frac{1}{2}&\frac{1+n+s}{2}&\frac{1+n-s}{2}&-\frac{n-1}{2}&&\end{matrix}\right\\}.$ (See Section 9.1 for the definition of $\mathcal{D}_{H}(a,q;\alpha,\beta,\gamma,\delta;z)$ and the notation of the $P$-symbol.) As such, one needs to deal with local Heun functions at 0 ([38]); these are not as easy to handle as hypergeometric functions for our purpose. For instance, in Step 1, one needs to classify $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ for which the local Heun functions in consideration are polynomials; however, as opposed to the hypergeometric case, classifying such parameters is not an easy problem at all, since only necessary conditions under which local Heun functions reduce to polynomials are known. To resolve this problem, inspired by a work [36] of Tamori, we use a different identification $\Omega^{\textnormal{II}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$ so that the differential equation $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})p(t)=0$ becomes again a hypergeometric equation, namely, $\mathcal{D}_{F}(-\frac{n}{2},-\frac{n+s-1}{4},\frac{3-n+s}{4};t^{2})p(t)=0.$ (1.10) (See (4.39) for the definition of $\mathcal{D}_{F}(a,b,c;z)$.) For the details, see Sections 3.2 and 4.2. We establish the $K$-type formulas of $\mathcal{S}ol(s;\sigma)_{K}$ by using the hypergeometric model $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})p(t)=0$ (see Theorem 5.36).
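For even $n$, the first parameter $-\frac{n}{2}$ in (1.10) is a non-positive integer, so the hypergeometric series terminates for every $s\in\mathbb{C}$. The following small sympy sketch is our own illustration (it is not part of the paper's arguments): it checks symbolically that the truncated series solves (1.10), where we read $\mathcal{D}_{F}(a,b,c;z)$ as the classical hypergeometric operator $z(1-z)\frac{d^{2}}{dz^{2}}+(c-(a+b+1)z)\frac{d}{dz}-ab$ in the variable $z=t^{2}$ (our assumption on the normalization in (4.39)).

```python
# Our own illustrative sketch (not from the paper).  For even n the series
# a_[s;n](t) = 2F1(-n/2, -(n+s-1)/4, (3-n+s)/4; t^2) terminates, and the
# truncated polynomial solves the hypergeometric equation (1.10), read as
# z(1-z) p'' + (c - (a+b+1) z) p' - a b p = 0 with z = t^2.
import sympy as sp

s, z = sp.symbols('s z')

def hyp_poly(a, b, c, z):
    """Terminating Gauss series; 'a' must be a non-positive integer."""
    return sum(sp.rf(a, k) * sp.rf(b, k) / (sp.rf(c, k) * sp.factorial(k)) * z**k
               for k in range(-int(a) + 1))

for n in [2, 4, 6]:
    a, b, c = sp.Rational(-n, 2), -(n + s - 1) / 4, (3 - n + s) / 4
    p = hyp_poly(a, b, c, z)
    residual = z*(1 - z)*sp.diff(p, z, 2) + (c - (a + b + 1)*z)*sp.diff(p, z) - a*b*p
    assert sp.simplify(residual) == 0
    print(f"n = {n}: the terminating series solves (1.10) for all s")
```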
The Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})p(t)=0$ and the hypergeometric model $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})p(t)=0$ are related by a Cayley transform $\pi_{n}(k_{0})$ given by an element $k_{0}$ of $K$ ((4.18)). Namely, we have $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})p(t)=0\quad\Longleftrightarrow\quad d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})\pi_{n}(k_{0})p(t)=0$ (see Proposition 4.19). For instance, put $u_{[s;n]}(t):=Hl(-1,-\frac{ns}{4};-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},\frac{1-n-s}{2};t^{2})$ (1.11) and $a_{[s;n]}(t):=F(-\frac{n}{2},-\frac{n+s-1}{4},\frac{3-n+s}{4};t^{2}),$ where $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$ denotes the local Heun function at $z=0$ and $F(a,b,c;z)\equiv{{}_{2}}F_{1}(a,b,c;z)$ is the Gauss hypergeometric function. Then, for $\mathcal{D}_{H}^{[s;n]}:=\mathcal{D}_{H}(-1,-\frac{ns}{4};-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},\frac{1-n-s}{2};t^{2})$ in (1.9) and $\mathcal{D}_{F}^{[s;n]}:=\mathcal{D}_{F}(-\frac{n}{2},-\frac{n+s-1}{4},\frac{3-n+s}{4};t^{2})$ in (1.10), we have $\mathcal{D}_{H}^{[s;n]}u_{[s;n]}(t)=0\quad\text{and}\quad\mathcal{D}_{F}^{[s;n]}a_{[s;n]}(t)=0.$ (1.12) It will be shown in Lemma 6.3 that, for appropriate $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ at which $u_{[s;n]}(t)$ and $a_{[s;n]}(t)$ are both polynomials, the two functions $u_{[s;n]}(t)$ and $a_{[s;n]}(t)$ are related as $\pi_{n}(k_{0})a_{[s;n]}(t)\in\mathbb{C}u_{[s;n]}(t),$ (1.13) equivalently, $(1-\sqrt{-1}t)^{n}a_{[s;n]}\left(\frac{1+\sqrt{-1}t}{1-\sqrt{-1}t}\cdot\sqrt{-1}\right)\in\mathbb{C}u_{[s;n]}(t).$ For the case of $s=0$, the Gauss-to-Heun transformation by $\pi_{n}(k_{0})$ from $a_{[s;n]}(t)$ to $u_{[s;n]}(t)$ can be reduced to a Gauss-to-Gauss transformation, as the local Heun function $u_{[s;n]}(t)$ can be reduced to a hypergeometric function for $s=0$. (See Remarks 4.37 and 6.4.) The proportionality constant in (1.13) may be given as a ratio of shifted factorials. Indeed, let $I^{-}_{0}$ and $J_{0}$ be certain subsets of $\mathbb{Z}_{\geq 0}$ (see (5.3)). Then, for $n\equiv 0\ (\mathrm{mod}\ 4)$ with $s\in\mathbb{C}\backslash(I^{-}_{0}\cup J_{0})$, we have $\pi_{n}(k_{0})a_{[s;n]}(t)=\frac{\left(\frac{2-n}{4},\frac{n}{4}\right)}{\left(\frac{3-n+s}{4},\frac{n}{4}\right)}u_{[s;n]}(t)$ with $(\ell,m):=\frac{\Gamma(\ell+m)}{\Gamma(\ell)}$. (As the proportionality constants do not play any role for our main purposes, we shall not discuss them in this paper.) We remark that a Cayley transform, which is slightly different from our $\pi_{n}(k_{0})$, is also used in [12] under the name of the MacWilliams transform, to construct the generating function of the discrete Chebyshev polynomials from Jacobi polynomials via the Heun equation. Further, Gauss-to-Heun transformations are studied in, for instance, [29, 40, 41]. In those papers, the cases considered are ones in which local Heun functions are pulled back from a hypergeometric function. In contrast, this paper concerns cases in which, for suitable $(s;n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$, a certain linear combination of $a_{[s;n]}(t)$ and the second solution $b_{[s;n]}(t)$ to (1.10) is transformed to $u_{[s;n]}(t)$. Namely, put $b_{[s;n]}(t):=t^{\frac{1+n-s}{2}}F(-\frac{n+s-1}{4},-\frac{s-1}{2},\frac{5+n-s}{4};t^{2}).$ Then, for appropriate $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$, we have $\pi_{n}(k_{0})\left(a_{[s;n]}(t)\pm C(s;n)b_{[s;n]}(t)\right)\in\mathbb{C}u_{[s;n]}(t),$ where $C(s;n)$ is some constant, to be defined in (5.14). For more details, see Section 6.1.
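The relation (1.13) can be made concrete in small cases. The following sympy sketch is our own illustration, with the Heun operator read in the standard normal form $y''+\big(\frac{\gamma}{z}+\frac{\delta}{z-1}+\frac{\epsilon}{z+1}\big)y'+\frac{\alpha\beta z-q}{z(z-1)(z+1)}y=0$, $\epsilon=\alpha+\beta+1-\gamma-\delta$, which is our reading of $\mathcal{D}_{H}$. It checks for $n=4$ that the Cayley transform of $a_{[s;4]}(t)$ is even in $t$ and solves the Heun equation (1.9); since the second exponent of (1.9) at $z=0$ is $\frac{1}{2}$, this forces proportionality to $u_{[s;4]}(t)$.

```python
# Our own illustrative sketch of the Gauss-to-Heun relation (1.13) for n = 4.
import sympy as sp

s, t, z, w = sp.symbols('s t z w')
n, i = 4, sp.I

# a_[s;n] as a terminating Gauss series in w (cf. (1.10))
a, b, c = sp.Rational(-n, 2), -(n + s - 1) / 4, (3 - n + s) / 4
a_pol = sum(sp.rf(a, k)*sp.rf(b, k)/(sp.rf(c, k)*sp.factorial(k)) * w**(2*k)
            for k in range(n // 2 + 1))

# Cayley transform pi_n(k_0) as made explicit after (1.13)
q_t = sp.expand(sp.cancel((1 - i*t)**n * a_pol.subs(w, i*(1 + i*t)/(1 - i*t))))

# the transform is even in t ...
assert all(q_t.coeff(t, j) == 0 for j in range(1, n + 1, 2))
p = q_t.subs(t, sp.sqrt(z))          # polynomial in z = t^2

# ... and solves Heun's equation (1.9): alpha, beta, gamma, delta as in (1.9),
# epsilon = alpha + beta + 1 - gamma - delta, accessory parameter q = -ns/4
al, be, ga, de = sp.Rational(-n, 2), sp.Rational(1 - n, 2), sp.Rational(1, 2), (1 - n - s)/2
ep, qacc = al + be + 1 - ga - de, -sp.Rational(n, 4)*s
residual = (z*(z - 1)*(z + 1)*sp.diff(p, z, 2)
            + (ga*(z - 1)*(z + 1) + de*z*(z + 1) + ep*z*(z - 1))*sp.diff(p, z)
            + (al*be*z - qacc)*p)
assert sp.simplify(residual) == 0
print("pi_4(k_0) a_[s;4] is even in t and solves the Heun equation (1.9)")
```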
Via the transformation $\pi_{n}(k_{0})$, we also give in detail the space $\mathrm{Sol}(s;n)$ of $K$-type solutions for the Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})p(t)=0$ (Section 6).
### 1.4. Palindromic property of a sequence $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$ of polynomials
In the last half of this paper, we shall discuss certain arithmetic-combinatorial properties of four sequences of polynomials of two variables, namely $\\{P_{k}(x;y)\\}_{k=0}^{\infty},\quad\\{Q_{k}(x;y)\\}_{k=0}^{\infty},\quad\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty},\quad\text{and}\quad\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty},$ (1.14) where $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ are obtained from the study of the Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})p(t)=0$. Since we could not find the property under consideration in the literature, we refer to it as a _palindromic property_. As such, we shall digress for a moment from the main results of this paper. To state the general definition, let $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$ be a sequence of polynomials of two variables with $p_{0}(x;y)=1$, and $\\{a_{k}\\}_{k=0}^{\infty}$ a sequence of non-zero numbers. Let $d\colon\mathbb{Z}_{\geq 0}\to\mathbb{R}$ be a given map. For $n\in\mathbb{Z}_{\geq 0}$, we put $\mathcal{S}\mathrm{ol}_{k}(p;n):=\\{s\in\mathbb{C}:p_{k}(s;n)=0\\}.$
###### Definition 1.15 (palindromic property).
A pair $(\\{p_{k}(x;y)\\}_{k=0}^{\infty},\\{a_{k}\\}_{k=0}^{\infty})$ is said to be a _palindromic pair_ with _degree_ $d(n)$ if, for each $n\in d^{-1}(\mathbb{Z}_{\geq 0})$, there exists a map $\theta_{(p;n)}\colon\mathcal{S}\mathrm{ol}_{d(n)+1}(p;n)\to\\{\pm 1\\}$ such that, for all $s\in\mathcal{S}\mathrm{ol}_{d(n)+1}(p;n)$, we have $p_{k}(s;n)=0$ for $k\geq d(n)+1$ and $\frac{p_{k}(s;n)}{a_{k}}=\theta_{(p;n)}(s)\frac{p_{d(n)-k}(s;n)}{a_{d(n)-k}}\quad\text{for $k\leq d(n)$}.$ (1.16) We call the identity (1.16) the _palindromic identity_ and $\theta_{(p;n)}(s)$ its _sign factor_. Further, for a given palindromic pair $(\\{p_{k}(x;y)\\}_{k=0}^{\infty},\\{a_{k}\\}_{k=0}^{\infty})$, the sequence $\\{a_{k}\\}_{k=0}^{\infty}$ is said to be an _associated sequence_ of $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$. If there exists a sequence $\\{a_{k}\\}_{k=0}^{\infty}$ such that $(\\{p_{k}(x;y)\\}_{k=0}^{\infty},\\{a_{k}\\}_{k=0}^{\infty})$ is a palindromic pair, then we say that $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$ admits a _palindromic property_. We note that our notion of palindromic property does not concern polynomials themselves but sequences of polynomials; for instance, each $p_{k}(x;n)$ is not necessarily a palindromic polynomial.
###### Example 1.17.
Here are some simple examples.
1. (a) If $p_{0}(x;y)=1$ and $p_{k}(x;y)=0$ for all $k\geq 1$, then, for any sequence $\\{a_{k}\\}_{k=0}^{\infty}$ of non-zero numbers, the pair $(\\{p_{k}(x;y)\\}_{k=0}^{\infty},\\{a_{k}\\}_{k=0}^{\infty})$ is a palindromic pair with degree $0$.
2. (b) If $p_{0}(x;y)=p_{1}(x;y)=1$ and $p_{k}(x;y)=0$ for all $k\geq 2$, then, for any sequence $\\{a_{k}\\}_{k=0}^{\infty}$ of non-zero numbers with $a_{0}=\pm a_{1}$, the pair $(\\{p_{k}(x;y)\\}_{k=0}^{\infty},\\{a_{k}\\}_{k=0}^{\infty})$ is a palindromic pair with degree $1$.
3. (c) If $p_{k}(x;y)=x^{k}$, then, for any sequence $\\{a_{k}\\}_{k=0}^{\infty}$ of non-zero numbers, the pair $(\\{x^{k}\\}_{k=0}^{\infty},\\{a_{k}\\}_{k=0}^{\infty})$ is a palindromic pair with degree $0$.

We shall provide four examples for which the degree $d(n)$ is not constant. (See Sections 1.7, 1.8, and 1.9.)
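To preview Definition 1.15 in a concrete case, the sketch below (our own illustration) verifies the Krawtchouk example of Section 1.9 numerically: for the polynomials $\mathcal{K}_{k}(x;y)$ of (1.31), the zero set $\mathcal{S}\mathrm{ol}_{n+1}(\mathcal{K};n)$ is $\\{0,1,\dots,n\\}$, and the palindromic identity (1.16) holds with $a_{k}=1$, degree $n$, and sign factor $(-1)^{s}$ (cf. Theorem 1.33 and Table 2).

```python
# Our own illustrative check of Definition 1.15 for the Krawtchouk
# polynomials (1.31): ({K_k(x;y)}, {1}) is a palindromic pair with degree n
# and sign factor theta(s) = (-1)^s (Theorem 1.33, Table 2).
import sympy as sp

x = sp.symbols('x')

def K(k, x, y):
    """Krawtchouk polynomial (1.31)."""
    return sp.expand(sum((-1)**j * sp.binomial(x, j) * sp.binomial(y - x, k - j)
                         for j in range(k + 1)))

for n in [1, 2, 3, 4]:
    roots = sp.solve(K(n + 1, x, n), x)
    assert sorted(roots) == list(range(n + 1))   # Sol_{n+1}(K;n) = {0, 1, ..., n}
    for s in roots:
        # vanishing beyond the degree, and the palindromic identity (1.16)
        assert K(n + 1, s, n) == 0 and K(n + 2, s, n) == 0
        assert all(K(k, s, n) == (-1)**s * K(n - k, s, n) for k in range(n + 1))
    print(f"n = {n}: palindromic identity verified on s in {sorted(roots)}")
```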
Now suppose that $(\\{p_{k}(x;y)\\}_{k=0}^{\infty},\\{a_{k}\\}_{k=0}^{\infty})$ is a palindromic pair with degree $d(n)$, where the associated sequence $\\{a_{k}\\}_{k=0}^{\infty}$ is of the form $a_{k}=\left(c_{k}\right)!$ for some sequence $\\{c_{k}\\}_{k=0}^{\infty}$ of non-negative integers with $c_{0}=1$. Then, as $p_{0}(x;y)=1$ by definition, the palindromic property of $(\\{p_{k}(x;y)\\}_{k=0}^{\infty},\\{a_{k}\\}_{k=0}^{\infty})$ implies that, for $n\in d^{-1}(\mathbb{Z}_{\geq 0})$, the $d(n)$th term $p_{d(n)}(x;y)$ satisfies the identity $p_{d(n)}(s;n)=\theta_{(p;n)}(s)\left(c_{d(n)}\right)!\quad\text{for $s\in\mathcal{S}\mathrm{ol}_{d(n)+1}(p;n)$}.$ (1.18) We refer to the identity (1.18) as the _factorial identity_ of $p_{d(n)}(x;n)$ on $\mathcal{S}\mathrm{ol}_{d(n)+1}(p;n)$. Further, we mean by the _refinement_ of a factorial (resp. palindromic) identity a factorial (resp. palindromic) identity for each explicit value of $s\in\mathcal{S}\mathrm{ol}_{d(n)+1}(p;n)$. In order to show the refinements of a palindromic property and factorial identity for $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$, it is important to understand the zero set $\mathcal{S}\mathrm{ol}_{d(n)+1}(p;n)$ of $p_{d(n)+1}(x;n)$. For this purpose, we also consider the _factorization formula_ of the polynomial $p_{d(n)+1}(x;n)$. In summary, we shall investigate the following properties for the sequences of polynomials $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$ in (1.14):
* • Factorization formula of $p_{d(n)+1}(x;n)$;
* • Palindromic property of $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$;
* • Factorial identity of $p_{d(n)}(x;n)$;
* • Refinement of a factorial identity of $p_{d(n)}(x;n)$.

It is remarked that factorization formulas for $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ will be given in a more general situation. We now describe these topics in detail.
### 1.5. Sequences $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ of tridiagonal determinants
Recall from (1.12) that the local Heun function $u_{[s;n]}(t)$ in (1.11) is a local solution at $t=0$ to the Heun differential equation (1.9). Let $v_{[s;n]}(t)$ be the second solution (see (4.35)). It will be shown in Proposition 7.16 that $u_{[s;n]}(t)$ and $v_{[s;n]}(t)$ may be given in power series representations as $u_{[s;n]}(t)=\sum_{k=0}^{\infty}P_{k}(s;n)\frac{t^{2k}}{(2k)!}\qquad\text{and}\qquad v_{[s;n]}(t)=\sum_{k=0}^{\infty}Q_{k}(s;n)\frac{t^{2k+1}}{(2k+1)!},$ where $P_{k}(x;y)$ and $Q_{k}(x;y)$ are certain $k\times k$ tridiagonal determinants (see Section 7.1). For instance, the $\frac{n}{2}\times\frac{n}{2}$ tridiagonal determinant $P_{\frac{n}{2}}(x;n)$ for $y=n\in 2\mathbb{Z}_{\geq 0}$ is given as $P_{0}(x;0)=1$, $P_{1}(x;2)=2x$, and for $n\in 4+2\mathbb{Z}_{\geq 0}$, $\small P_{\frac{n}{2}}(x;n)=\begin{vmatrix}nx&1\cdot 2&&&&\\\ -(n-1)n&(n-4)x&3\cdot 4&&&\\\ &-(n-3)(n-2)&(n-8)x&5\cdot 6&&\\\ &&\dots&\dots&\dots&\\\ &&&-5\cdot 6&-(n-8)x&(n-3)(n-2)\\\ &&&&-3\cdot 4&-(n-4)x\\\ \end{vmatrix}.\\\ $ (1.19) Similarly, the $\frac{n-2}{2}\times\frac{n-2}{2}$ tridiagonal determinant $Q_{\frac{n-2}{2}}(x;n)$ for $y=n\in 2(1+\mathbb{Z}_{\geq 0})$ is given as $Q_{0}(x;2)=1$, $Q_{1}(x;4)=2x$, and for $n\in 6+2\mathbb{Z}_{\geq 0}$, $\small Q_{\frac{n-2}{2}}(x;n)=\begin{vmatrix}(n-2)x&2\cdot 3&&&&\\\ -(n-2)(n-1)&(n-6)x&4\cdot 5&&&\\\ &-(n-4)(n-3)&(n-10)x&6\cdot 7&&\\\ &&\dots&\dots&\dots&\\\ &&&-6\cdot 7&-(n-10)x&(n-4)(n-3)\\\ &&&&-4\cdot 5&-(n-6)x\\\ \end{vmatrix}.\\\ $ (1.20)
### 1.6.
Factorization formulas of $P_{[(n+2)/2]}(x;n)$ and $Q_{[(n+1)/2]}(x;n)$
In 1854, Sylvester observed that an $(n+1)\times(n+1)$ centrosymmetric tridiagonal determinant $\mathrm{Sylv}(x;n):=\small\begin{vmatrix}x&1&&&&&\\\ n&x&2&&&&\\\ &n-1&x&3&&&\\\ &&\dots&\dots&\dots&&\\\ &&&3&x&n-1&\\\ &&&&2&x&n\\\ &&&&&1&x\end{vmatrix}\\\ $ (1.21) satisfies the following formula ([35]). $\displaystyle\mathrm{Sylv}(x;n)=\begin{cases}\;\;(x^{2}-1^{2})(x^{2}-3^{2})\cdots(x^{2}-(n-2)^{2})(x^{2}-n^{2})&\text{if $n$ is odd},\\\ x(x^{2}-2^{2})(x^{2}-4^{2})\cdots(x^{2}-(n-2)^{2})(x^{2}-n^{2})&\text{if $n$ is even}.\\\ \end{cases}$ (1.22) By utilizing some results for the Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})p(t)=0$, we show that $P_{[(n+2)/2]}(x;n)$ and $Q_{[(n+1)/2]}(x;n)$ enjoy similar but more involved factorization formulas. See Theorems 7.18 and 7.22 for the details. Based on the factorization formula, we also express $P_{(n+1)/2}(x;n)$ for $n$ odd in terms of the Sylvester determinant $\mathrm{Sylv}(x;n)$ (Corollary 7.27). Several remarks on the factorization formulas are in order. First, the factorization formulas for $P_{(n+2)/2}(x;n)$ and $Q_{n/2}(x;n)$ for $n\in 2\mathbb{Z}$ also follow from a more general formula in [18, Prop. 5.11], which is obtained via some clever trick based on a change of variables and some general formula on tridiagonal determinants (see, for instance, [5, p. 52] and [18, Lem. 5.10]). As mentioned in the remark after Theorem 1.7 above, the factorization formulas for $P_{(n+2)/2}(x;n)$ and $Q_{n/2}(x;n)$ play an essential role in [18] to determine $\dim_{\mathbb{C}}\mathrm{Sol}(s;n)$. In this paper we carry out this process somewhat in reverse, including the case of odd $n$. Schematically, the difference is described as follows.
* • [18]: “factorization $\to$ $\dim_{\mathbb{C}}\mathrm{Sol}(s;n)$” ($n\in 2\mathbb{Z}_{\geq 0}$)
* • this paper: “$\mathrm{Sol}(s;n)$ $\to$ Heun $\to$ factorization” ($n\in\mathbb{Z}_{\geq 0}$)

(Recall that, since the linear group $SL(m,\mathbb{R})$ is considered in [18], only even $n\in 2\mathbb{Z}_{\geq 0}$ appear for the space $\mathrm{Sol}(s;n)$ of $K$-type solutions for $SO(3)\subset SL(3,\mathbb{R})$ in the case of $m=3$, whereas we handle all $n\in\mathbb{Z}_{\geq 0}$ in this paper, as $Spin(3)\subset\widetilde{SL}(3,\mathbb{R})$ is under consideration.) The techniques used in [18] can also be applied to the case of odd $n$. Nevertheless, this requires some involved computations; for instance, the trick used in that paper does not make the situation as simple as in the case of even $n$, and, further, the general formula ([5, p. 52] and [18, Lem. 5.10]) cannot be applied either. By simply applying the results for the Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})p(t)=0$, we successfully avoid such computations.
### 1.7. Palindromic properties for $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$
We next describe the palindromic properties and factorial identities for $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$. The palindromic properties are given as follows.
###### Theorem 1.23 (Theorems 7.29 and 7.41).
The pair $(\\{P_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k)!\\}_{k=0}^{\infty})$ is a palindromic pair with degree $\frac{n}{2}$. Similarly, $(\\{Q_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k+1)!\\}_{k=0}^{\infty})$ is a palindromic pair with degree $\frac{n-2}{2}$. For the case of odd $n$, see Propositions 7.39 and 7.47.
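As a quick independent sanity check (our own sketch) of Sylvester's classical formula (1.22) quoted in Section 1.6, one can build the determinant (1.21) in sympy and compare it with the product formula for small $n$:

```python
# Our own quick verification of Sylvester's factorization (1.22) for the
# centrosymmetric tridiagonal determinant (1.21).
import sympy as sp

x = sp.symbols('x')

def sylv(n):
    M = sp.zeros(n + 1, n + 1)
    for i in range(n + 1):
        M[i, i] = x
        if i < n:
            M[i, i + 1] = i + 1   # superdiagonal 1, 2, ..., n
            M[i + 1, i] = n - i   # subdiagonal   n, n-1, ..., 1
    return M.det()

for n in range(1, 8):
    js = range(1, n + 1, 2) if n % 2 else range(2, n + 1, 2)
    rhs = (1 if n % 2 else x) * sp.prod([x**2 - j**2 for j in js])
    assert sp.expand(sylv(n) - rhs) == 0
    print(f"n = {n}: (1.22) holds")
```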
The key idea of the proofs of Theorem 1.23 is to use a symmetry of the generating functions $u_{[s;n]}(t)$ and $v_{[s;n]}(t)$ with respect to the non-trivial Weyl group element $m^{\textnormal{I}}_{2}$ ((3.11)) of $SU(2)$ in the form of the inversion. See Section 7.4 for the details. It follows from Theorem 1.23 that $P_{\frac{n}{2}}(x;y)$ ((1.19)) and $Q_{\frac{n-2}{2}}(x;y)$ ((1.20)) satisfy factorial identities (see Corollaries 7.37 and 7.45). Corollaries 1.24 and 1.26 below are the refinements of the identities via the factorization formulas (Theorems 7.18 and 7.22) of $P_{\frac{n+2}{2}}(x;n)$ and $Q_{\frac{n}{2}}(x;n)$.
###### Corollary 1.24 (Refinement of the factorial identity of $P_{\frac{n}{2}}(x;n)$).
Let $n\in 2\mathbb{Z}_{\geq 0}$. Then the following hold.
1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ The values of $P_{\frac{n}{2}}(s;n)$ are $P_{\frac{n}{2}}(s;n)=n!\quad\textnormal{for all $s\in\mathbb{C}$}.$
2. (2) $n\equiv 2\ (\mathrm{mod}\ 4):$ The values of $P_{\frac{n}{2}}(s;n)$ for $s=\pm 1,\pm 5,\ldots,\pm(n-1)$ are given as $P_{\frac{n}{2}}(s;n)=\begin{cases}n!&\textnormal{if $s=1,5,9,\dots,n-1$},\\\ -n!&\textnormal{if $s=-1,-5,-9,\dots,-(n-1)$}.\end{cases}$ (1.25)
###### Corollary 1.26 (Refinement of the factorial identity of $Q_{\frac{n-2}{2}}(x;n)$).
Let $n\in 2(1+\mathbb{Z}_{\geq 0})$. Then the following hold.
1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ The values of $Q_{\frac{n-2}{2}}(s;n)$ for $s=\pm 3,\pm 7,\ldots,\pm(n-1)$ are given as $Q_{\frac{n-2}{2}}(s;n)=\begin{cases}(n-1)!&\textnormal{if $s=3,7,11,\ldots,n-1$},\\\ -(n-1)!&\textnormal{if $s=-3,-7,-11,\ldots,-(n-1)$}.\end{cases}$ (1.27)
2. (2) $n\equiv 2\ (\mathrm{mod}\ 4):$ The values of $Q_{\frac{n-2}{2}}(s;n)$ are $Q_{\frac{n-2}{2}}(s;n)=(n-1)!\quad\textnormal{for all $s\in\mathbb{C}$.}$

We shall give proofs of Corollaries 1.24 and 1.26 in Sections 7.4.1 and 7.4.2.
### 1.8. Sequence $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ of Cayley continuants
In 1858, four years after the observation of Sylvester on (1.22), Cayley considered in [4] a sequence $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ of $k\times k$ tridiagonal determinants $\mathrm{Cay}_{k}(x;y)$ and expressed each $\mathrm{Cay}_{k}(x;n)$ in terms of the Sylvester determinant $\mathrm{Sylv}(x;n)$ (see, for instance, [4, 31] and [30, p. 429]). The first few terms are given as $\mathrm{Cay}_{0}(x;y)=1,\quad\mathrm{Cay}_{1}(x;y)=x,\quad\mathrm{Cay}_{2}(x;y)=\begin{vmatrix}x&1\\\ y&x\end{vmatrix},\quad\mathrm{Cay}_{3}(x;y)=\small\begin{vmatrix}x&1&\\\ y&x&2\\\ &y-1&x\end{vmatrix},\quad\ldots.$ The $(n+1)$th term $\mathrm{Cay}_{n+1}(x;n)$ with $y=n$ is nothing but the Sylvester determinant: $\mathrm{Cay}_{n+1}(x;n)=\mathrm{Sylv}(x;n)$. Further, the $n$th term $\mathrm{Cay}_{n}(x;n)$ may be thought of as the “almost” Sylvester determinant, as it is given as $\mathrm{Cay}_{n}(x;n)=\small\begin{vmatrix}x&1&&&&\\\ n&x&2&&&\\\ &n-1&x&3&&\\\ &&\dots&\dots&\dots&\\\ &&&3&x&n-1\\\ &&&&2&x\end{vmatrix}\\\ $ (1.28) (compare (1.28) with (1.21)). Following [31], we refer to each $\mathrm{Cay}_{k}(x;y)$ as a Cayley continuant. As part of the third aim of this paper, we shall show the palindromic property of the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ as follows.
###### Theorem 1.29 (Theorem 8.8).
The pair $(\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty},\\{k!\\}_{k=0}^{\infty})$ is a palindromic pair with degree $n$. For the palindromic identity, see Theorem 8.8.
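Theorem 1.29 can likewise be tested numerically. In the sketch below (our own illustration; the general continuant pattern is read off from $\mathrm{Cay}_{2}$, $\mathrm{Cay}_{3}$ above and from (1.28)), we compute the zeros of $\mathrm{Cay}_{n+1}(x;n)=\mathrm{Sylv}(x;n)$ and verify the palindromic identity (1.16) with $a_{k}=k!$, degree $n$, and sign factor $(-1)^{(n-s)/2}$ from Table 2:

```python
# Our own numerical check of Theorem 1.29: ({Cay_k}, {k!}) is a palindromic
# pair with degree n and sign factor (-1)^((n-s)/2) (Table 2).  The continuant
# pattern below is read off from Cay_2, Cay_3 and (1.28).
import sympy as sp

x = sp.symbols('x')

def cay(k, x, y):
    if k == 0:
        return sp.Integer(1)
    M = sp.zeros(k, k)
    for i in range(k):
        M[i, i] = x
        if i < k - 1:
            M[i, i + 1] = i + 1   # superdiagonal 1, 2, ..., k-1
            M[i + 1, i] = y - i   # subdiagonal   y, y-1, ..., y-k+2
    return M.det()

for n in [2, 3, 4, 5]:
    roots = sp.solve(cay(n + 1, x, n), x)   # zeros of Sylv(x;n) = Cay_{n+1}(x;n)
    for s in roots:
        theta = (-1)**((n - s)//2)
        assert cay(n + 2, s, n) == 0        # vanishing beyond the degree
        assert all(sp.simplify(cay(k, s, n)/sp.factorial(k)
                               - theta*cay(n - k, s, n)/sp.factorial(n - k)) == 0
                   for k in range(n + 1))
    print(f"n = {n}: palindromic identity for the Cayley continuants holds")
```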
As in the cases of $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$, we prove Theorem 1.29 by observing a symmetry of the generating function of the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ with respect to the inversion (see Section 8.2). Theorem 1.29 implies that the almost Sylvester determinant $\mathrm{Cay}_{n}(x;n)$ ((1.28)) also satisfies a factorial identity. Corollary 1.30 below is the refinement of the factorial identity (see Theorem 8.8) of $\mathrm{Cay}_{n}(x;n)$ on $\mathcal{S}\mathrm{ol}_{n+1}(\mathrm{Cay};n)$ via Sylvester’s factorization formula (1.22) of $\mathrm{Sylv}(x;n)=\mathrm{Cay}_{n+1}(x;n)$.
###### Corollary 1.30 (Refinement of the factorial identity of $\mathrm{Cay}_{n}(x;n)$).
For $n$ even, the values of the almost Sylvester determinant $\mathrm{Cay}_{n}(s;n)$ for $s=0,\pm 2,\dots,\pm(n-2),\pm n$ are given as follows.
* • $n\equiv 0\ (\mathrm{mod}\ 4):$ $\mathrm{Cay}_{n}(s;n)=\begin{cases}n!&\textnormal{if $s=0,\pm 4,\dots,\pm(n-4),\pm n$},\\\ -n!&\textnormal{if $s=\pm 2,\pm 6,\dots,\pm(n-2)$}.\end{cases}$
* • $n\equiv 2\ (\mathrm{mod}\ 4):$ $\mathrm{Cay}_{n}(s;n)=\begin{cases}-n!&\textnormal{if $s=0,\pm 4,\dots,\pm(n-2)$},\\\ n!&\textnormal{if $s=\pm 2,\pm 6,\dots,\pm n$}.\end{cases}$

Similarly, for $n$ odd, the values of $\mathrm{Cay}_{n}(s;n)$ for $s=\pm 1,\pm 3,\dots,\pm(n-2),\pm n$ are given as follows.
* • $n\equiv 1\ (\mathrm{mod}\ 4):$ $\mathrm{Cay}_{n}(s;n)=\begin{cases}n!&\textnormal{if $s=1,-3,5,\dots,-(n-2),n$},\\\ -n!&\textnormal{if $s=-1,3,-5,\dots,n-2,-n$}.\end{cases}$
* • $n\equiv 3\ (\mathrm{mod}\ 4):$ $\mathrm{Cay}_{n}(s;n)=\begin{cases}-n!&\textnormal{if $s=1,-3,5,\dots,n-2,-n$},\\\ n!&\textnormal{if $s=-1,3,-5,\dots,-(n-2),n$}.\end{cases}$

We shall give a proof of Corollary 1.30 at the end of Section 8.2. It has been more than 160 years since the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ were introduced in [4]. We therefore suppose that Theorem 1.29 and Corollary 1.30 are already in the literature. Yet, in contrast to the large variety of proofs of Sylvester’s formula (1.22) (see the remark after Theorem 7.22), we could not find them at all. It would be quite surprising if the classical identities in Corollary 1.30 had not been found in over a century. It is also remarked that Sylvester’s formula (1.22) readily follows from the representation theory of $\mathfrak{sl}(2,\mathbb{C})$ without any computation. Nonetheless, this argument does not seem to be in the literature either. We shall then provide a proof of (1.22) from a representation-theoretic point of view (see Section 10). The idea of the proof is in principle the same as the one discussed in Section 1.3 for the classification of $K$-type formulas. Furthermore, by applying the idea for determining the generating functions of $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$, we also give in Proposition 8.3 another proof of Sylvester’s formula (1.22) as well as the generating function of the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$. (Thus we give two different proofs of Sylvester’s formula (1.22) in this paper.)
### 1.9. Sequence $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ of Krawtchouk polynomials
For $k\in\mathbb{Z}_{\geq 0}$, let $\mathcal{K}_{k}(x;y)$ be a polynomial of homogeneous degree $k$ such that $\mathcal{K}_{k}(x;n)$ for $y=n\in\mathbb{Z}_{\geq 0}$ is a Krawtchouk polynomial in the sense of [28, p.
137], namely, $\mathcal{K}_{k}(x;y)=\sum_{j=0}^{k}(-1)^{j}\binom{x}{j}\binom{y-x}{k-j}$ (1.31) (see Definition 8.11). In this paper we also call the polynomials $\mathcal{K}_{k}(x;y)$ of two variables Krawtchouk polynomials. From the observation of the generating functions of $\mathrm{Cay}_{k}(x;y)$ and $\mathcal{K}_{k}(x;y)$, it is immediate that $\mathcal{K}_{k}(x;y)=\frac{\mathrm{Cay}_{k}(y-2x;y)}{k!}$ (1.32) (see (8.4) and (8.13)). In the last part of this paper, via the identity (1.32), we deduce the palindromic property of the Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ from that of the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ as follows.
###### Theorem 1.33 (Theorem 8.19).
The pair $(\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty},\\{1\\}_{k=0}^{\infty})$ is a palindromic pair with degree $n$. See Theorem 8.19 for the palindromic identity. As the associated sequence $\\{1\\}_{k=0}^{\infty}$ for $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ is of the form $1=1!$ for all $k$, the $n$th term $\mathcal{K}_{n}(x;y)$ satisfies a factorial identity. Corollary 1.34 below is the refinement of the factorial identity of $\mathcal{K}_{n}(x;n)$ on $\mathcal{S}\mathrm{ol}_{n+1}(\mathcal{K};n)$ (see Theorem 8.19).
###### Corollary 1.34 (Refinement of the factorial identity of $\mathcal{K}_{n}(x;n)$).
Given $n\in\mathbb{Z}_{\geq 0}$, the values of $\mathcal{K}_{n}(s;n)$ for $s=0,1,2,\dots,n$ are given as follows. $\mathcal{K}_{n}(s;n)=\begin{cases}1&\textnormal{if $s=0,\,2,\,4,\,\dots,\,n_{\textrm{even}}$},\\\ -1&\textnormal{if $s=1,\,3,\,5,\,\dots,\,n_{\textrm{odd}}$},\end{cases}$ (1.35) where $n_{\textrm{odd}}$ and $n_{\textrm{even}}$ are defined as $n_{\textnormal{odd}}:=\begin{cases}n&\textnormal{if $n$ is odd},\\\ n-1&\textnormal{if $n$ is even}\end{cases}\qquad\textnormal{and}\qquad n_{\textnormal{even}}:=\begin{cases}n-1&\textnormal{if $n$ is odd},\\\ n&\textnormal{if $n$ is even}.\end{cases}$ We shall give a proof of Corollary 1.34 at the end of Section 8.3 as a corollary of Theorem 8.19. We remark that the identities (1.35) can also be easily shown directly from the definition (1.31) by an elementary observation. The Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ are also a classical object, at least for the case of $y=n\in\mathbb{Z}_{\geq 0}$. Therefore Theorem 1.33 may be known as, for instance, an exercise problem; nonetheless, we could not find it in the literature.
### 1.10. Summary of the data for palindromic properties
To close this introduction, we summarize some data on the palindromic properties of the four sequences $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$ of polynomials in (1.14). For the associated sequence $\\{a_{k}\\}_{k=0}^{\infty}$ of $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$, we write $g_{[x;y]}(t):=\sum_{k=0}^{\infty}p_{k}(x;y)\frac{t^{b_{k}}}{a_{k}},$ where $\\{b_{k}\\}_{k=0}^{\infty}$ is some sequence of non-negative integers. Table 2 exhibits the generating function $g_{[x;y]}(t)$, the associated sequence $\\{a_{k}\\}_{k=0}^{\infty}$, the sequence $\\{b_{k}\\}_{k=0}^{\infty}$ of exponents, the degree $d(n)$, and the sign factor $\theta_{(p;n)}(s)$ for $\\{p_{k}(x;y)\\}_{k=0}^{\infty}$ in (1.14). (For the sign factors $\theta_{(P;n)}(s)$ and $\theta_{(Q;n)}(s)$, see (7.28) and (7.40), respectively.) Table 2.
Summary of palindromic properties

$p_{k}(x;y)$ | $g_{[x;y]}(t)$ | $a_{k}$ | $b_{k}$ | $d(n)$ | $\theta_{(p;n)}(s)$
---|---|---|---|---|---
$\mathcal{K}_{k}(x;y)$ | $(1+t)^{y-x}(1-t)^{x}$ | $1$ | $k$ | $n$ | $(-1)^{s}$
$\mathrm{Cay}_{k}(x;y)$ | $(1+t)^{\frac{y+x}{2}}(1-t)^{\frac{y-x}{2}}$ | $k!$ | $k$ | $n$ | $(-1)^{\frac{n-s}{2}}$
$P_{k}(x;y)$ | $u_{[x;y]}(t)$ (4.34) | $(2k)!$ | $2k$ | $\frac{n}{2}$ | (7.28)
$Q_{k}(x;y)$ | $v_{[x;y]}(t)$ (4.35) | $(2k+1)!$ | $2k+1$ | $\frac{n-2}{2}$ | (7.40)

### 1.11. Organization
We now outline the rest of this paper. This paper consists of ten sections including this introduction. In Section 2, we review a general framework established in [27] for the Peter–Weyl theorem (1.5) for the space of $K$-finite solutions to intertwining differential operators. In Section 3, we collect necessary notation and normalizations for $\widetilde{SL}(3,\mathbb{R})$. Two identifications $\Omega^{J}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$ for $J=\textnormal{I},\textnormal{II}$ are discussed in this section. The purpose of Section 4 is to recall from [18] the Heisenberg ultrahyperbolic operator $\square_{s}=R(D_{s})$ for $\widetilde{SL}(3,\mathbb{R})$ and to study the associated differential equation $d\pi_{n}^{J}(D^{\flat}_{s})p(t)=0$. In this section we identify the equation $d\pi_{n}^{J}(D^{\flat}_{s})p(t)=0$ with Heun’s differential equation ($J=\textnormal{I}$) and the hypergeometric differential equation ($J=\textnormal{II}$) via the identifications $\Omega^{J}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$. At the end of this section we give a recipe to classify the space $\mathrm{Sol}(s;n)$ of $K$-type solutions to $\square_{s}=R(D_{s})$. In accordance with the recipe given in Section 4, we show the $K$-type decompositions of $\mathcal{S}ol(s;\sigma)_{K}$ in the hypergeometric model $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})p(t)=0$ in Section 5. This is accomplished in Theorem 5.36. We then convert the results in the hypergeometric model $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})p(t)=0$ to the Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})p(t)=0$ by a Cayley transform $\pi_{n}(k_{0})$ in Section 6. Sections 7 and 8 are devoted to applications to sequences of polynomials. In Section 7, by utilizing the results in the Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})p(t)=0$, we investigate two sequences $\\{P_{k}(x;n)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;n)\\}_{k=0}^{\infty}$ of tridiagonal determinants. The main results of this section are factorization formulas (Theorems 7.18 and 7.22) and palindromic properties (Theorems 7.29 and 7.41). We also show an expression of $P_{\frac{n+1}{2}}(x;n)$ for $n$ odd in terms of the Sylvester determinant $\mathrm{Sylv}(x;n)$ (Corollary 7.27). In Section 8, we show the palindromic properties for Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ and Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$. These are achieved in Theorems 8.8 and 8.19, respectively. The last two sections are appendices. In order to study $\\{P_{k}(x;n)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;n)\\}_{k=0}^{\infty}$, we use some general facts on the coefficients of a power series expression of local Heun functions. We collect those facts in Section 9. In Section 10, for possible future convenience, we give a proof of Sylvester’s formula (1.22) from an $\mathfrak{sl}(2,\mathbb{C})$ point of view.
## 2\.
Peter–Weyl theorem for the space of $K$-finite solutions
The aim of this section is to recall from [27, Sect. 2] a general framework established in [27]. In particular, we give a Peter–Weyl theorem for the space of $K$-finite solutions to intertwining differential operators between parabolically induced representations. This is done in Theorem 2.11.
### 2.1. General framework
Let $G$ be a reductive Lie group with Lie algebra ${\mathfrak{g}}_{0}$. Choose a Cartan involution $\theta:{\mathfrak{g}}_{0}\to{\mathfrak{g}}_{0}$ and write ${\mathfrak{g}}_{0}={\mathfrak{k}}_{0}\oplus{\mathfrak{s}}_{0}$ for the Cartan decomposition of ${\mathfrak{g}}_{0}$ with respect to $\theta$. Here ${\mathfrak{k}}_{0}$ and ${\mathfrak{s}}_{0}$ stand for the $+1$ and $-1$ eigenspaces of $\theta$, respectively. We take maximal abelian subspaces ${\mathfrak{a}}^{\min}_{0}\subset{\mathfrak{s}}_{0}$ and ${\mathfrak{t}}_{0}^{\min}\subset{\mathfrak{m}}_{0}^{\min}:=Z_{{\mathfrak{k}}_{0}}({\mathfrak{a}}_{0}^{\min})$ such that ${\mathfrak{h}}_{0}:={\mathfrak{a}}_{0}^{\min}\oplus{\mathfrak{t}}_{0}^{\min}$ is a Cartan subalgebra of ${\mathfrak{g}}_{0}$. For a real Lie algebra $\mathfrak{y}_{0}$, we denote by $\mathfrak{y}$ the complexification of $\mathfrak{y}_{0}$. For instance, the complexifications of ${\mathfrak{g}}_{0}$, ${\mathfrak{h}}_{0}$, ${\mathfrak{a}}_{0}^{\min}$, and ${\mathfrak{m}}^{\min}_{0}$ are denoted by ${\mathfrak{g}}$, ${\mathfrak{h}}$, ${\mathfrak{a}}^{\min}$, and ${\mathfrak{m}}^{\min}$, respectively. We write $\mathcal{U}(\mathfrak{y})$ for the universal enveloping algebra of a Lie algebra $\mathfrak{y}$. Let $\Delta\equiv\Delta({\mathfrak{g}},{\mathfrak{h}})$ be the set of roots with respect to the Cartan subalgebra ${\mathfrak{h}}$ and $\Sigma\equiv\Sigma({\mathfrak{g}}_{0},{\mathfrak{a}}_{0}^{\min})$ the set of restricted roots with respect to ${\mathfrak{a}}_{0}^{\min}$. We choose positive systems $\Delta^{+}$ and $\Sigma^{+}$ in such a way that $\Delta^{+}$ and $\Sigma^{+}$ are compatible. We write $\rho$ for half the sum of the positive roots. Let ${\mathfrak{n}}^{\min}_{0}$ be the nilpotent subalgebra of ${\mathfrak{g}}_{0}$ corresponding to $\Sigma^{+}$, so that ${\mathfrak{p}}^{\min}_{0}:={\mathfrak{m}}^{\min}_{0}\oplus{\mathfrak{a}}^{\min}_{0}\oplus{\mathfrak{n}}^{\min}_{0}$ is a Langlands decomposition of a minimal parabolic subalgebra ${\mathfrak{p}}_{0}^{\min}$ of ${\mathfrak{g}}_{0}$. Fix a standard parabolic subalgebra ${\mathfrak{p}}_{0}\supset{\mathfrak{p}}_{0}^{\min}$ with Langlands decomposition ${\mathfrak{p}}_{0}={\mathfrak{m}}_{0}\oplus{\mathfrak{a}}_{0}\oplus{\mathfrak{n}}_{0}$. Let $P$ be a parabolic subgroup of $G$ with Lie algebra ${\mathfrak{p}}_{0}$. We write $P=MAN$ for the Langlands decomposition of $P$ corresponding to ${\mathfrak{p}}_{0}={\mathfrak{m}}_{0}\oplus{\mathfrak{a}}_{0}\oplus{\mathfrak{n}}_{0}$. For $\mu\in{\mathfrak{a}}^{*}\simeq\operatorname{Hom}_{\mathbb{R}}({\mathfrak{a}}_{0},\mathbb{C})$, we define a one-dimensional representation $\mathbb{C}_{\mu}$ of $A$ as $a\mapsto e^{\mu}(a):=e^{\mu(\log a)}$ for $a\in A$. Then, for a finite-dimensional representation $W_{\sigma}=(\sigma,W)$ of $M$ and a weight $\lambda\in{\mathfrak{a}}^{*}$, we define an $MA$-representation $W_{\sigma,\lambda}$ as $W_{\sigma,\lambda}:=W_{\sigma}\otimes\mathbb{C}_{\lambda}.$ (We note that the definition of $W_{\sigma,\lambda}$ here is slightly different from the one in [27].)
As usual, by letting $N$ act on $W_{\sigma,\lambda}$ trivially, we regard $W_{\sigma,\lambda}$ as a representation of $P$. We identify the Fréchet space $C^{\infty}\left(G/P,\mathcal{W}_{\sigma,\lambda}\right)$ of smooth sections of the $G$-equivariant homogeneous vector bundle $\mathcal{W}_{\sigma,\lambda}:=G\times_{P}W_{\sigma,\lambda}\to G/P$ as $C^{\infty}\left(G/P,\mathcal{W}_{\sigma,\lambda}\right)\simeq(C^{\infty}(G)\otimes W_{\sigma,\lambda})^{P}$, that is, $\displaystyle C^{\infty}\left(G/P,\mathcal{W}_{\sigma,\lambda}\right)\simeq\left\\{f\in C^{\infty}(G)\otimes W_{\sigma,\lambda}:f(gman)=\sigma(m)^{-1}e^{-\lambda}(a)f(g)\;\text{for all $man\in MAN$}\right\\},$ where $G$ acts by left translation. Then we realize a parabolically induced representation $I_{P}(\sigma,\lambda):=\text{Ind}_{P}^{G}(\sigma\otimes(\lambda+\rho)\otimes\mathbb{1})$ (2.1) of $G$ on $C^{\infty}\left(G/P,\mathcal{W}_{\sigma,\lambda+\rho}\right)$. For pairs $(\sigma,\lambda),(\eta,\nu)$ of finite-dimensional representations $\sigma,\eta$ of $M$ and weights $\lambda,\nu\in{\mathfrak{a}}^{*}$, we write $\mathrm{Hom}_{G}(I_{P}(\sigma,\lambda),I_{P}(\eta,\nu))$ for the space of intertwining operators from $I_{P}(\sigma,\lambda)$ to $I_{P}(\eta,\nu)$. Then we set $\mathrm{Diff}_{G}(I_{P}(\sigma,\lambda),I_{P}(\eta,\nu)):=\mathrm{Diff}(I_{P}(\sigma,\lambda),I_{P}(\eta,\nu))\cap\mathrm{Hom}_{G}(I_{P}(\sigma,\lambda),I_{P}(\eta,\nu)),$ where $\mathrm{Diff}(I_{P}(\sigma,\lambda),I_{P}(\eta,\nu))$ is the space of differential operators from $I_{P}(\sigma,\lambda)$ to $I_{P}(\eta,\nu)$. Let $\mathrm{Irr}(M)_{\mathrm{fin}}$ be the set of equivalence classes of irreducible finite-dimensional representations of $M$. As $M$ is not connected in general, we write $M_{0}$ for the identity component of $M$. Let $\mathrm{Irr}(M/M_{0})$ denote the set of irreducible representations of the component group $M/M_{0}$. Via the surjection $M\twoheadrightarrow M/M_{0}$, we regard $\mathrm{Irr}(M/M_{0})$ as a subset of $\mathrm{Irr}(M)_{\mathrm{fin}}$. Let $\mathrm{id}_{\xi}$ denote the identity map on $\xi\in\mathrm{Irr}(M/M_{0})$. Lemma 2.2 below shows that the operator $\mathcal{D}\otimes\mathrm{id}_{\xi}$ obtained from $\mathcal{D}\in\mathrm{Diff}_{G}(I_{P}(\sigma,\lambda),I_{P}(\eta,\nu))$ is also an intertwining differential operator.
###### Lemma 2.2 ([27, Lems. 2.17, 2.21]).
Let $\mathcal{D}\in\mathrm{Diff}_{G}(I_{P}(\sigma,\lambda),I_{P}(\eta,\nu))$. For any $\xi\in\mathrm{Irr}(M/M_{0})$, we have $\mathcal{D}\otimes\mathrm{id}_{\xi}\in\mathrm{Diff}_{G}(I_{P}(\sigma\otimes\xi,\lambda),I_{P}(\eta\otimes\xi,\nu)).$ If the $M$-representation $\sigma$ is the trivial character $\sigma=\chi_{\mathrm{triv}}$, then Lemma 2.2 in particular shows that a differential operator $\mathcal{D}\in\mathrm{Diff}_{G}(I_{P}(\chi_{\mathrm{triv}},\lambda),I_{P}(\eta,\nu))$ yields $\mathcal{D}\otimes\mathrm{id}_{\xi}\in\mathrm{Diff}_{G}(I_{P}(\xi,\lambda),I_{P}(\eta\otimes\xi,\nu))\quad\text{for $\xi\in\mathrm{Irr}(M/M_{0})$}.$ (2.3)
### 2.2. Peter–Weyl theorem for $\mathcal{S}ol_{(u;\lambda)}(\xi)_{K}$
For a later purpose, we specialize the representations $\sigma,\eta$ of $M$ to be $\sigma=\chi_{\mathrm{triv}}$ and $\eta=\chi$, where $\chi$ is some character of $M$.
In this situation, it follows from (2.3) that, for $\mathcal{D}\in\mathrm{Diff}_{G}(I_{P}(\chi_{\mathrm{triv}},\lambda),I_{P}(\chi,\nu))$ and $\xi\in\mathrm{Irr}(M/M_{0})$, we have $\mathcal{D}\otimes\mathrm{id}_{\xi}\in\mathrm{Diff}_{G}(I_{P}(\xi,\lambda),I_{P}(\chi\otimes\xi,\nu)).$ By the duality theorem between intertwining differential operators and homomorphisms between generalized Verma modules (see, for instance, [6, 23, 25]), any intertwining differential operator $\mathcal{D}\in\mathrm{Diff}_{G}(I_{P}(\chi_{\mathrm{triv}},\lambda),I_{P}(\chi,\nu))$ is of the form $\mathcal{D}=R(u)$ for some $u\in\mathcal{U}(\bar{\mathfrak{n}})$, where $R$ denotes the infinitesimal right translation of $\mathcal{U}({\mathfrak{g}})$ and $\bar{\mathfrak{n}}$ is the opposite nilpotent radical to ${\mathfrak{n}}$, namely, $\mathrm{Diff}_{G}(I_{P}(\chi_{\mathrm{triv}},\lambda),I_{P}(\chi,\nu))=\\{\text{$R(u)$ for some $u\in\mathcal{U}(\bar{{\mathfrak{n}}})$}\\}.$ In particular, intertwining differential operators $\mathcal{D}$ are determined by the complex Lie algebra ${\mathfrak{g}}$ and are independent of its real forms. It follows from [15, Lem. 2.1] and [27, Lem. 2.24] that there exists a $(\mathcal{U}({\mathfrak{k}}\cap{\mathfrak{m}}),K\cap M)$-isomorphism $\mathcal{U}({\mathfrak{k}})\otimes_{\mathcal{U}({\mathfrak{k}}\cap{\mathfrak{m}})}\mathbb{C}_{\chi_{\mathrm{triv}}}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathcal{U}({\mathfrak{g}})\otimes_{\mathcal{U}({\mathfrak{p}})}\mathbb{C}_{\chi_{\mathrm{triv}},-(\lambda+\rho)},\quad u\otimes\mathbb{1}_{\chi_{\mathrm{triv}}}\mapsto u\otimes(\mathbb{1}_{\chi_{\mathrm{triv}}}\otimes\mathbb{1}_{-(\lambda+\rho)}),$ (2.4) where $K\cap M$ acts on $\mathcal{U}({\mathfrak{k}})\otimes_{\mathcal{U}({\mathfrak{k}}\cap{\mathfrak{m}})}\mathbb{C}_{\chi_{\mathrm{triv}}}$ and $\mathcal{U}({\mathfrak{g}})\otimes_{\mathcal{U}({\mathfrak{p}})}\mathbb{C}_{\chi_{\mathrm{triv}},-(\lambda+\rho)}$ diagonally. Via the isomorphism (2.4), we define a _compact model_ $u^{\flat}\in\mathcal{U}({\mathfrak{k}})$ of $u\in\mathcal{U}(\bar{{\mathfrak{n}}})$ as follows.
###### Definition 2.5.
For $R(u)\in\mathrm{Diff}_{G}(I_{P}(\chi_{\mathrm{triv}},\lambda),I_{P}(\chi,\nu))$ with $u\in\mathcal{U}(\bar{{\mathfrak{n}}})$, we denote by $u^{\flat}\in\mathcal{U}({\mathfrak{k}})$ an element of $\mathcal{U}({\mathfrak{k}})$ such that the identity $u^{\flat}\otimes(\mathbb{1}_{\chi_{\mathrm{triv}}}\otimes\mathbb{1}_{-(\lambda+\rho)})=u\otimes(\mathbb{1}_{\chi_{\mathrm{triv}}}\otimes\mathbb{1}_{-(\lambda+\rho)})$ (2.6) holds in $\mathcal{U}({\mathfrak{g}})\otimes_{\mathcal{U}({\mathfrak{p}})}\mathbb{C}_{\chi_{\mathrm{triv}},-(\lambda+\rho)}$.
###### Remark 2.7.
We remark that the compact model $u^{\flat}$ is not unique; any choice of $u^{\flat}$ satisfying the identity (2.6) is acceptable. Let $\mathrm{Irr}(K)$ be the set of equivalence classes of irreducible representations of the maximal compact subgroup $K$ with Lie algebra ${\mathfrak{k}}_{0}$. For $V_{\delta}:=(\delta,V)\in\mathrm{Irr}(K)$ and $u\in\mathcal{U}(\bar{\mathfrak{n}})$, we define a subspace $\mathrm{Sol}_{(u)}(\delta)$ of $V_{\delta}$ by $\mathrm{Sol}_{(u)}(\delta):=\\{v\in V_{\delta}:d\delta(\tau(u^{\flat}))v=0\\}.$ (2.8) Here $d\delta$ denotes the differential of $\delta$ and $\tau$ denotes the conjugation $\tau\colon{\mathfrak{g}}\to{\mathfrak{g}}$ with respect to the real form ${\mathfrak{g}}_{0}$, that is, $\tau(X_{1}+\sqrt{-1}X_{2})=X_{1}-\sqrt{-1}X_{2}$ for $X_{1},X_{2}\in{\mathfrak{g}}_{0}$.
###### Lemma 2.9 ([27, Lem.
2.41]). The space $\mathrm{Sol}_{(u)}(\delta)$ is a $K\cap M$-representation. We set $\mathrm{Sol}^{{\mathfrak{k}}\cap{\mathfrak{m}}}_{(u)}(\delta):=\mathrm{Sol}_{(u)}(\delta)\cap V_{\delta}^{{\mathfrak{k}}\cap{\mathfrak{m}}},$ where $V^{{\mathfrak{k}}\cap{\mathfrak{m}}}_{\delta}$ is the subspace of ${\mathfrak{k}}\cap{\mathfrak{m}}$-invariant vectors of $V_{\delta}$. Clearly $\mathrm{Sol}^{{\mathfrak{k}}\cap{\mathfrak{m}}}_{(u)}(\delta)$ is a $K\cap M$-subrepresentation of $\mathrm{Sol}_{(u)}(\delta)$. Further, since ${\mathfrak{k}}\cap{\mathfrak{m}}$ acts on $\xi\in\mathrm{Irr}(M/M_{0})$ trivially, we have $\mathrm{Hom}_{K\cap M}\left(\mathrm{Sol}_{(u)}(\delta),\xi\right)\neq\\{0\\}\quad\text{if and only if}\quad\mathrm{Hom}_{K\cap M}\left(\mathrm{Sol}^{{\mathfrak{k}}\cap{\mathfrak{m}}}_{(u)}(\delta),\xi\right)\neq\\{0\\}$ (2.10) via the composition of maps $K\cap M\hookrightarrow M\twoheadrightarrow M/M_{0}$. Let $I_{P}(\chi_{\mathrm{triv}},\lambda)_{K}$ denote the $({\mathfrak{g}},K)$-module consisting of the $K$-finite vectors of $I_{P}(\chi_{\mathrm{triv}},\lambda)$. Then, given $R(u)\in\mathrm{Diff}_{G}(I_{P}(\chi_{\mathrm{triv}},\lambda),I_{P}(\chi,\nu))$ and $\xi\in\mathrm{Irr}(M/M_{0})$, we set $\displaystyle\mathcal{S}ol_{(u;\lambda)}(\xi)_{K}$ $\displaystyle:=\\{f\in I_{P}(\xi,\lambda)_{K}:(R(u)\otimes\mathrm{id}_{\xi})f=0\\}.$ For the sake of future convenience, we now give a slight modification of a Peter–Weyl theorem [27, Thm. 1.2] for $\mathcal{S}ol_{(u;\lambda)}(\xi)_{K}$, although such a modification is not necessary for the main objective of this paper. We remark that this modified version is a more direct generalization of the argument given after the proof of [18, Thm. 2.6] than [27, Thm. 1.2].
###### Theorem 2.11 (Peter–Weyl theorem for $\mathcal{S}ol_{(u;\lambda)}(\xi)_{K}$).
Let $R(u)\in\mathrm{Diff}_{G}(I_{P}(\chi_{\mathrm{triv}},\lambda),I_{P}(\chi,\nu))$ and $\xi\in\mathrm{Irr}(M/M_{0})$. Then the $({\mathfrak{g}},K)$-module $\mathcal{S}ol_{(u;\lambda)}(\xi)_{K}$ can be decomposed as a $K$-representation as $\mathcal{S}ol_{(u;\lambda)}(\xi)_{K}\simeq\bigoplus_{\delta\in\mathrm{Irr}(K)}V_{\delta}\otimes\mathrm{Hom}_{K\cap M}\left(\mathrm{Sol}^{{\mathfrak{k}}\cap{\mathfrak{m}}}_{(u)}(\delta),\xi\right).$ (2.12)
###### Proof.
It follows from [27, Thm. 1.2] that the space $\mathcal{S}ol_{(u;\lambda)}(\xi)_{K}$ can be decomposed as $\mathcal{S}ol_{(u;\lambda)}(\xi)_{K}\simeq\bigoplus_{\delta\in\mathrm{Irr}(K)}V_{\delta}\otimes\mathrm{Hom}_{K\cap M}\left(\mathrm{Sol}_{(u)}(\delta),\xi\right).$ Now the equivalence (2.10) concludes the theorem. ∎ When $G$ is a split real group and $P=MAN$ is a minimal parabolic subgroup of $G$, we have $K\cap M=M$, $M/M_{0}=M$, and ${\mathfrak{k}}\cap{\mathfrak{m}}=\\{0\\}$. Thus in this case Theorem 2.11 simplifies as follows.
###### Corollary 2.13.
Suppose that $G$ is a split real group and $P=MAN$ is a minimal parabolic subgroup of $G$. Let $R(u)\in\mathrm{Diff}_{G}(I_{P}(\chi_{\mathrm{triv}},\lambda),I_{P}(\chi,\nu))$ and $\sigma\in\mathrm{Irr}(M)$. Then the $({\mathfrak{g}},K)$-module $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}$ can be decomposed as $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}\simeq\bigoplus_{\delta\in\mathrm{Irr}(K)}V_{\delta}\otimes\mathrm{Hom}_{M}\left(\mathrm{Sol}_{(u)}(\delta),\sigma\right).$ (2.14)
## 3\.
Specialization to $(\widetilde{SL}(3,\mathbb{R}),B)$ The purpose of this section is to specialize the general theory discussed in Section 2 to a pair $(\widetilde{SL}(3,\mathbb{R}),B)$, where $B$ is a minimal parabolic subgroup of $\widetilde{SL}(3,\mathbb{R})$. In this section we in particular give a recipe for computing $K$-type formulas of the space $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}$ of $K$-finite solutions for $R(u)\in\mathrm{Diff}_{G}(I_{B}(\chi_{\mathrm{triv}},\lambda),I_{B}(\chi,\nu))$. It is described in Section 3.7. ### 3.1. Notation and normalizations We begin by recalling from [27, Sect. 4.1] the notation and normalizations for $\widetilde{SL}(3,\mathbb{R})$. Let $G=\widetilde{SL}(3,\mathbb{R})$ with Lie algebra ${\mathfrak{g}}_{0}=\mathfrak{sl}(3,\mathbb{R})$. Fix a Cartan involution $\theta\colon{\mathfrak{g}}_{0}\to{\mathfrak{g}}_{0}$ such that $\theta(U)=-U^{t}$. We write ${\mathfrak{k}}_{0}$ and ${\mathfrak{s}}_{0}$ for the $+1$ and $-1$ eigenspaces of $\theta$, respectively, so that ${\mathfrak{g}}_{0}={\mathfrak{k}}_{0}\oplus{\mathfrak{s}}_{0}$ is a Cartan decomposition of ${\mathfrak{g}}_{0}$. We put ${\mathfrak{a}}_{0}:=\text{span}_{\mathbb{R}}\\{E_{ii}-E_{i+1,i+1}:i=1,2\\}$ and ${\mathfrak{n}}_{0}:=\text{span}_{\mathbb{R}}\\{E_{12},E_{23},E_{13}\\}$, where $E_{ij}$ denote matrix units. Then ${\mathfrak{b}}_{0}:={\mathfrak{a}}_{0}\oplus{\mathfrak{n}}_{0}$ is a minimal parabolic subalgebra of ${\mathfrak{g}}_{0}$. Let $K$, $A$, and $N$ be the analytic subgroups of $G$ with Lie algebras ${\mathfrak{k}}_{0}$, ${\mathfrak{a}}_{0}$, and ${\mathfrak{n}}_{0}$, respectively. Then $G=KAN$ is an Iwasawa decomposition of $G$. We write $M=Z_{K}({\mathfrak{a}}_{0})$, so that $B:=MAN$ is a minimal parabolic subgroup of $G$ with Lie algebra ${\mathfrak{b}}_{0}$. We denote by ${\mathfrak{g}}$ the complexification of the Lie algebra ${\mathfrak{g}}_{0}$ of $G$. A similar convention is employed also for subgroups of $G$; for instance, $\mathfrak{b}=\mathfrak{a}\oplus\mathfrak{n}$ is a Borel subalgebra of ${\mathfrak{g}}=\mathfrak{sl}(3,\mathbb{C})$. We write $\bar{{\mathfrak{n}}}$ for the nilpotent radical opposite to ${\mathfrak{n}}$. Let $\Delta\equiv\Delta(\mathfrak{g},\mathfrak{a})$ denote the set of roots of $\mathfrak{g}$ with respect to $\mathfrak{a}$. We denote by $\Delta^{+}$ and $\Pi$ the positive system corresponding to ${\mathfrak{b}}$ and the set of simple roots of $\Delta^{+}$, respectively. We have $\Pi=\\{\varepsilon_{1}-\varepsilon_{2},\varepsilon_{2}-\varepsilon_{3}\\}$, where $\varepsilon_{j}$ are the dual basis of $E_{jj}$ for $j=1,2,3$. The root spaces ${\mathfrak{g}}_{\varepsilon_{1}-\varepsilon_{2}}$ and ${\mathfrak{g}}_{\varepsilon_{2}-\varepsilon_{3}}$ are then given as ${\mathfrak{g}}_{\varepsilon_{1}-\varepsilon_{2}}=\mathbb{C}E_{12}$ and ${\mathfrak{g}}_{\varepsilon_{2}-\varepsilon_{3}}=\mathbb{C}E_{23}$. We write $\rho$ for half the sum of the positive roots, namely, $\rho=\varepsilon_{1}-\varepsilon_{3}$. We define $X,Y\in{\mathfrak{g}}$ as $X=\small\begin{pmatrix}0&0&0\\\ 1&0&0\\\ 0&0&0\end{pmatrix}\quad\text{and}\quad Y=\small\begin{pmatrix}0&0&0\\\ 0&0&0\\\ 0&1&0\end{pmatrix}.$ (3.1) Then $X$ and $Y$ are root vectors for $-(\varepsilon_{1}-\varepsilon_{2})$ and $-(\varepsilon_{2}-\varepsilon_{3})$, respectively. The opposite nilpotent radical $\bar{{\mathfrak{n}}}$ is thus given as $\bar{{\mathfrak{n}}}=\text{span}\\{X,Y,[X,Y]\\}$. ### 3.2. 
Two identifications for $\mathfrak{so}(3,\mathbb{C})\simeq\mathfrak{sl}(2,\mathbb{C})$
As ${\mathfrak{k}}_{0}=\mathfrak{so}(3)\simeq\mathfrak{su}(2)$, we have ${\mathfrak{k}}=\mathfrak{so}(3,\mathbb{C})\simeq\mathfrak{sl}(2,\mathbb{C})$. For later applications we consider two identifications of $\mathfrak{so}(3,\mathbb{C})$ with $\mathfrak{sl}(2,\mathbb{C})$: $\Omega^{J}\colon\mathfrak{so}(3,\mathbb{C})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathfrak{sl}(2,\mathbb{C})\quad\text{for $J=\textnormal{I},\textnormal{II}$}.$ Loosely speaking, these identifications are described as follows.
1. (1) Identification $\Omega^{\textnormal{I}}\colon\mathfrak{so}(3,\mathbb{C})\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$: This is an identification via a Lie algebra isomorphism $\Omega^{\textnormal{I}}_{0}\colon\mathfrak{so}(3)\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{su}(2)$. Schematically, $\Omega^{\textnormal{I}}$ is obtained from $\Omega^{\textnormal{I}}_{0}$ by complexification: $\Omega^{\textnormal{I}}=\Omega^{\textnormal{I}}_{0}\otimes_{\mathbb{R}}\mathbb{C}\colon\mathfrak{so}(3)\otimes_{\mathbb{R}}\mathbb{C}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathfrak{su}(2)\otimes_{\mathbb{R}}\mathbb{C}.$
2. (2) Identification $\Omega^{\textnormal{II}}\colon\mathfrak{so}(3,\mathbb{C})\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$: This is an identification independent of $\mathfrak{su}(2)$; it is defined directly on $\mathfrak{so}(3)\otimes_{\mathbb{R}}\mathbb{C}=\mathfrak{so}(3,\mathbb{C})$ and does not arise as the complexification of an isomorphism $\mathfrak{so}(3)\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{su}(2)$.

Let $E_{+}$, $E_{-}$, and $E_{0}$ be the elements of $\mathfrak{sl}(2,\mathbb{C})$ defined as $E_{+}:=\begin{pmatrix}0&1\\\ 0&0\end{pmatrix},\quad E_{-}:=\begin{pmatrix}0&0\\\ 1&0\end{pmatrix},\quad E_{0}:=\begin{pmatrix}1&0\\\ 0&-1\end{pmatrix}.$ (3.2) We now describe the identifications $\Omega^{\textnormal{I}}$ and $\Omega^{\textnormal{II}}$ in detail separately.
#### 3.2.1. Identification ${\mathfrak{k}}\simeq\mathfrak{sl}(2,\mathbb{C})$ via $\Omega^{\textnormal{I}}$
We start with the identification $\Omega^{\textnormal{I}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$.
First observe that ${\mathfrak{k}}_{0}=\mathfrak{so}(3)$ is spanned by the three matrices $B_{1}:=\small\begin{pmatrix}0&0&-1\\\ 0&0&0\\\ 1&0&0\end{pmatrix},\quad B_{2}:=\small\begin{pmatrix}0&0&0\\\ 0&0&-1\\\ 0&1&0\end{pmatrix},\quad B_{3}:=\small\begin{pmatrix}0&-1&0\\\ 1&0&0\\\ 0&0&0\end{pmatrix}$ with commutation relations $[B_{1},B_{2}]=B_{3},\quad[B_{1},B_{3}]=-B_{2},\quad\text{and}\quad[B_{2},B_{3}]=B_{1}.$ (3.3) On the other hand, the Lie algebra $\mathfrak{su}(2)$ is spanned by $A_{1}:=\begin{pmatrix}\sqrt{-1}&0\\\ 0&-\sqrt{-1}\end{pmatrix},\quad A_{2}:=\begin{pmatrix}0&1\\\ -1&0\end{pmatrix},\quad A_{3}:=\begin{pmatrix}0&\sqrt{-1}\\\ \sqrt{-1}&0\end{pmatrix}$ with commutation relations $[A_{1},A_{2}]=2A_{3},\quad[A_{1},A_{3}]=-2A_{2},\quad\text{and}\quad[A_{2},A_{3}]=2A_{1}.$ Then one may identify ${\mathfrak{k}}_{0}$ with $\mathfrak{su}(2)$ via the linear map $\Omega_{0}^{\textnormal{I}}\colon{\mathfrak{k}}_{0}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathfrak{su}(2),\quad B_{j}\mapsto\tfrac{1}{2}A_{j}\quad\text{for $j=1,2,3$}.$ (3.4) Let $Z_{+},Z_{-},Z_{0}$ be the elements of ${\mathfrak{k}}=\mathfrak{so}(3,\mathbb{C})$ defined as $Z_{+}:=B_{2}-\sqrt{-1}B_{3},\quad Z_{-}:=-(B_{2}+\sqrt{-1}B_{3}),\quad Z_{0}:=[Z_{+},Z_{-}]=-2\sqrt{-1}B_{1}.$ (3.5) Then we have ${\mathfrak{k}}=\text{span}\\{Z_{+},\,Z_{-},\,Z_{0}\\}.$ (3.6) As $A_{2}-\sqrt{-1}A_{3}=2E_{+}$ and $-(A_{2}+\sqrt{-1}A_{3})=2E_{-}$, the isomorphism (3.4) yields a Lie algebra isomorphism $\Omega^{\textnormal{I}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathfrak{sl}(2,\mathbb{C}),\quad Z_{j}\mapsto E_{j}\quad\text{for $j=+,-,0$}.$ (3.7)
#### 3.2.2. Identification ${\mathfrak{k}}\simeq\mathfrak{sl}(2,\mathbb{C})$ via $\Omega^{\textnormal{II}}$
We next discuss the identification $\Omega^{\textnormal{II}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathfrak{sl}(2,\mathbb{C})$. Let $W_{+}$, $W_{-}$, and $W_{0}$ be the elements of $\mathfrak{so}(3,\mathbb{C})$ defined by $W_{+}:=B_{1}+\sqrt{-1}B_{3},\quad W_{-}:=-B_{1}+\sqrt{-1}B_{3},\quad W_{0}:=[W_{+},W_{-}]=-2\sqrt{-1}B_{2}.$ (3.8) Then we have ${\mathfrak{k}}=\text{span}\\{W_{+},\,W_{-},\,W_{0}\\}.$ (3.9) It follows from the commutation relations (3.3) that the triple $\\{W_{+},\,W_{-},\,W_{0}\\}$ forms an $\mathfrak{sl}(2)$-triple, namely, $[W_{+},W_{-}]=W_{0}$, $[W_{0},W_{+}]=2W_{+}$, and $[W_{0},W_{-}]=-2W_{-}$. Therefore, ${\mathfrak{k}}$ may be identified with $\mathfrak{sl}(2,\mathbb{C})$ via the Lie algebra isomorphism $\Omega^{\textnormal{II}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathfrak{sl}(2,\mathbb{C}),\quad W_{j}\mapsto E_{j}\quad\text{for $j=+,-,0$}.$ (3.10)
### 3.3. A realization of the subgroup $M=Z_{K}({\mathfrak{a}}_{0})$
As $K$ is isomorphic to $SU(2)$, we realize $M=Z_{K}({\mathfrak{a}}_{0})$ as a subgroup of $SU(2)$ via the isomorphism (3.7). To do so, first observe that the adjoint action $\mathrm{Ad}$ of $SU(2)$ on $\mathfrak{su}(2)$ yields a two-to-one covering map $SU(2)\twoheadrightarrow\mathrm{Ad}(SU(2))\simeq SO(3)$.
We realize $\mathrm{Ad}(SU(2))$ as a matrix group with respect to the ordered basis $\\{A_{2},A_{1},A_{3}\\}$ of $\mathfrak{su}(2)$ in such a way that the elements $\pm m_{j}^{\textnormal{I}}\in SU(2)$ with $m^{\textnormal{I}}_{0}:=\begin{pmatrix}1&0\\\ 0&1\\\ \end{pmatrix},\;m^{\textnormal{I}}_{1}:=\begin{pmatrix}\sqrt{-1}&0\\\ 0&-\sqrt{-1}\\\ \end{pmatrix},\;m^{\textnormal{I}}_{2}:=\begin{pmatrix}0&1\\\ -1&0\end{pmatrix},\;m^{\textnormal{I}}_{3}:=\begin{pmatrix}0&\sqrt{-1}\\\ \sqrt{-1}&0\end{pmatrix}$ (3.11) are mapped to $\pm m_{j}^{\textnormal{I}}\mapsto m_{j}\in SO(3)$ for $j=0,1,2,3$, where $\mbox{\smaller$m_{0}=\small\begin{pmatrix}1&0&0\\\ 0&1&0\\\ 0&0&1\end{pmatrix},\;m_{1}=\small\begin{pmatrix}-1&0&0\\\ 0&1&0\\\ 0&0&-1\end{pmatrix},\;m_{2}=\small\begin{pmatrix}1&0&0\\\ 0&-1&0\\\ 0&0&-1\end{pmatrix},\;m_{3}=\small\begin{pmatrix}-1&0&0\\\ 0&-1&0\\\ 0&0&1\end{pmatrix}$}.$ One can easily check that the map $\pm m^{\textnormal{I}}_{j}\mapsto m_{j}$ respects the Lie algebra isomorphism $\Omega^{\textnormal{I}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$ in (3.7) ([27, Lem. 4.6]). Namely, for $Z\in{\mathfrak{k}}$, we have $\Omega^{\textnormal{I}}(\mathrm{Ad}(m_{j})Z)=\mathrm{Ad}(m^{\textnormal{I}}_{j})\Omega^{\textnormal{I}}(Z)\quad\text{for $j=0,1,2,3$}.$ As $Z_{SO(3)}({\mathfrak{a}}_{0})=\\{m_{0},m_{1},m_{2},m_{3}\\}$, we then realize $M$ as a subgroup of $SU(2)$ as $M=\left\\{\pm m^{\textnormal{I}}_{0},\;\pm m^{\textnormal{I}}_{1},\;\pm m^{\textnormal{I}}_{2},\;\pm m^{\textnormal{I}}_{3}\right\\}.$ The subgroup $M$ is isomorphic to the quaternion group $Q_{8}$, a non- commutative group of order $8$. ### 3.4. Irreducible representations $\mathrm{Irr}(M)$ of $M$ In order to compute a $K$-type formula via (2.14), we next discuss the sets $\mathrm{Irr}(M)$ and $\mathrm{Irr}(K)$ of equivalence classes of irreducible representations of $M$ and $K$, respectively. We first consider $\mathrm{Irr}(M)$ via the isomorphism (3.7). As $M$ is isomorphic to the quaternion group $Q_{8}$, the set $\mathrm{Irr}(M)$ consists of four characters and one 2-dimensional irreducible representation. For $\varepsilon,\varepsilon^{\prime}\in\\{\pm\\}$, we define a character $\chi^{SO(3)}_{(\varepsilon,\varepsilon^{\prime})}\colon Z_{SO(3)}({\mathfrak{a}}_{0})\to\\{\pm 1\\}$ of $Z_{SO(3)}({\mathfrak{a}}_{0})$ as $\chi^{SO(3)}_{(\varepsilon,\varepsilon^{\prime})}(\mathrm{diag}(a_{1},a_{2},a_{3})):=|a_{1}|_{\varepsilon}\;|a_{3}|_{\varepsilon^{\prime}},$ where $|a|_{+}:=|a|$ and $|a|_{-}:=a$. Via the character $\chi^{SO(3)}_{(\varepsilon,\varepsilon^{\prime})}$ of $Z_{SO(3)}({\mathfrak{a}}_{0})$, we define a character $\chi_{(\varepsilon,\varepsilon^{\prime})}\colon M\to\\{\pm 1\\}$ of $M$ as $\chi_{(\varepsilon,\varepsilon^{\prime})}(\pm m^{\textnormal{I}}_{j}):=\chi_{(\varepsilon,\varepsilon^{\prime})}(m_{j})\quad\textnormal{for\; $j=0,1,2,3$}.$ (3.12) We often abbreviate $\chi_{(\varepsilon,\varepsilon^{\prime})}$ as $(\varepsilon,\varepsilon^{\prime})$. The character ($+$,$+$), for instance, is the trivial character of $M$. Table 3 illustrates the character table for $(\varepsilon,\varepsilon^{\prime})=\chi_{(\varepsilon,\varepsilon^{\prime})}$. 
The set $\mathrm{Irr}(M)$ may then be described as follows: $\mathrm{Irr}(M)=\\{\textnormal{\mbox{\smaller($+$,$+$)}},\,\textnormal{\mbox{\smaller($+$,$-$)}},\,\textnormal{\mbox{\smaller($-$,$+$)}},\,\textnormal{\mbox{\smaller($-$,$-$)}},\,\mathbb{H}\\},$ (3.13) where $\mathbb{H}$ is the unique genuine $2$-dimensional representation of $M\simeq Q_{8}$. Table 3. Character table for $(\varepsilon,\varepsilon^{\prime})$ for $m_{j}^{\textnormal{I}}$ | $\pm m_{0}^{\textnormal{I}}$ | $\pm m_{1}^{\textnormal{I}}$ | $\pm m_{2}^{\textnormal{I}}$ | $\pm m_{3}^{\textnormal{I}}$ ---|---|---|---|--- ($+$,$+$) | $1$ | $1$ | $1$ | $1$ ($+$,$-$) | $1$ | $-1$ | $-1$ | $1$ ($-$,$+$) | $1$ | $-1$ | $1$ | $-1$ ($-$,$-$) | $1$ | $1$ | $-1$ | $-1$ ### 3.5. Irreducible representations $\mathrm{Irr}(K)$ of $K$ We next consider a polynomial realization of $\mathrm{Irr}(K)$. Let $\mathrm{Pol}[t]$ be the space of polynomials of one variable $t$ with complex coefficients and set $\mathrm{Pol}_{n}[t]:=\\{p(t)\in\mathrm{Pol}[t]:\deg p(t)\leq n\\}.$ Then we realize the set $\mathrm{Irr}(K)$ of equivalence classes of irreducible representations of $K\simeq SU(2)$ as $\mathrm{Irr}(K)\simeq\\{(\pi_{n},\mathrm{Pol}_{n}[t]):n\in\mathbb{Z}_{\geq 0}\\},$ (3.14) where the representation $\pi_{n}$ of $SU(2)$ on $\mathrm{Pol}_{n}[t]$ is defined as $\left(\pi_{n}(g)p\right)(t):=(ct+d)^{n}p\left(\frac{at+b}{ct+d}\right)\quad\text{for}\quad g=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}^{-1}.$ (3.15) It follows from (3.15) that the elements $m_{j}^{\textnormal{I}}$ defined in (3.11) act on $\mathrm{Pol}_{n}[t]$ via $\pi_{n}$ as $m_{1}^{\textnormal{I}}\colon p(t)\mapsto(\sqrt{-1})^{n}p(-t);\;\;m_{2}^{\textnormal{I}}\colon p(t)\mapsto t^{n}p\left(-\frac{1}{t}\right);\;\;m_{3}^{\textnormal{I}}\colon p(t)\mapsto(-\sqrt{-1}t)^{n}p\left(\frac{1}{t}\right).$ (3.16) Let $d\pi_{n}$ be the differential of the representation $\pi_{n}$. As usual we extend $d\pi_{n}$ complex-linearly to $\mathfrak{sl}(2,\mathbb{C})$ and also naturally to the universal enveloping algebra $\mathcal{U}(\mathfrak{sl}(2,\mathbb{C}))$. It follows from (3.2) and (3.15) that $E_{+}$, $E_{-}$, and $E_{0}$ act on $\mathrm{Pol}_{n}[t]$ via $d\pi_{n}$ as $d\pi_{n}(E_{+})=-\frac{d}{dt},\quad d\pi_{n}(E_{-})=-nt+t^{2}\frac{d}{dt},\quad\text{and}\quad d\pi_{n}(E_{0})=-2t\frac{d}{dt}+n.$ (3.17) ### 3.6. Peter–Weyl theorem for $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}$ with the polynomial realization of $\mathrm{Irr}(K)$ Let $\Omega\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$ be an identification of ${\mathfrak{k}}$ with $\mathfrak{sl}(2,\mathbb{C})$ such as (3.7) and (3.10). Then elements $F\in\mathcal{U}({\mathfrak{k}})$ can act on $\mathrm{Pol}_{n}[t]$ via $d\pi_{n}$ as $d\pi_{n}(\Omega(F))$. To simplify the notation we write $d\pi_{n}^{\Omega}(F)=d\pi_{n}(\Omega(F))\quad\text{for $F\in\mathcal{U}({\mathfrak{k}})$}.$ (3.18) Then, for $R(u)\in\mathrm{Diff}_{G}(I(\textnormal{\mbox{\smaller($+$,$+$)}},\lambda),I(\chi,\nu))$ with ($+$,$+$) the trivial character of $M$, we set $\mathrm{Sol}_{(u)}(n):=\\{p(t)\in\mathrm{Pol}_{n}[t]:d\pi_{n}^{\Omega}\big{(}\tau(u^{\flat})\big{)}p(t)=0\\},$ (3.19) where $u^{\flat}\in\mathcal{U}({\mathfrak{k}})$ is a compact model of $u\in\mathcal{U}(\bar{{\mathfrak{n}}})$ (see Definition 2.5). 
Then the $K$-type decomposition of $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}$ in (2.14) with the polynomial realization (3.14) of $\mathrm{Irr}(K)$ becomes $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{n}[t]\otimes\mathrm{Hom}_{M}\left(\mathrm{Sol}_{(u)}(n),\sigma\right).$ (3.20) We remark that since $K\cap M=M$ and ${\mathfrak{m}}=\\{0\\}$, the $(\mathcal{U}({\mathfrak{k}}\cap{\mathfrak{m}}),K\cap M)$-isomorphism (2.4) reduces to an $M$-isomorphism $\mathcal{U}({\mathfrak{k}})\otimes\mathbb{C}_{\tiny{\textnormal{\mbox{\smaller($+$,$+$)}}}}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathcal{U}({\mathfrak{g}})\otimes_{\mathcal{U}({\mathfrak{b}})}\mathbb{C}_{\tiny{\textnormal{\mbox{\smaller($+$,$+$)}}},-\lambda}$. In particular a compact model $u^{\flat}\in\mathcal{U}({\mathfrak{k}})$ of $u\in\mathcal{U}(\bar{{\mathfrak{n}}})$ is indeed unique in the present situation (see Remark 2.7). ### 3.7. Recipe for determining the $K$-type formula for $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}$ For later convenience we summarize a recipe for computing the $K$-type formula for $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}$ via (3.20). Let $R(u)\in\mathrm{Diff}_{G}(I(\textnormal{\mbox{\smaller($+$,$+$)}},\lambda),I(\chi,\nu))$. Step 0: Choose an identification $\Omega\colon{\mathfrak{k}}\simeq\mathfrak{sl}(2,\mathbb{C})$. Step 1: Find the compact model $u^{\flat}$ of $u$ (see Definition 2.5). Step 2: Find the explicit formula of the differential operator $d\pi_{n}^{\Omega}\big{(}\tau(u^{\flat})\big{)}$. Step 3: Solve the differential equation $d\pi_{n}^{\Omega}\big{(}\tau(u^{\flat})\big{)}p(t)=0$ and classify $n\in\mathbb{Z}_{\geq 0}$ such that $\mathrm{Sol}_{(u)}(n)\neq\\{0\\}$. Step 4: For $n\in\mathbb{Z}_{\geq 0}$ with $\mathrm{Sol}_{(u)}(n)\neq\\{0\\}$, classify the $M$-representations on $\mathrm{Sol}_{(u)}(n)$. Step 5: Given $\sigma\in\mathrm{Irr}(M)$, classify $n\in\mathbb{Z}_{\geq 0}$ with $\mathrm{Sol}_{(u)}(n)\neq\\{0\\}$ such that $\mathrm{Hom}_{M}\left(\mathrm{Sol}_{(u)}(n),\,\sigma\right)\neq\\{0\\}.$ ## 4\. Heisenberg ultrahyperbolic operator for $\widetilde{SL}(3,\mathbb{R})$ The aim of this section is to discuss a certain second order differential operator called the Heisenberg ultrahyperbolic operator $R(D_{s})$. In this section we apply to $R(D_{s})$ the theory described in Section 3 for arbitrary intertwining differential operators $R(u)$ for $\widetilde{SL}(3,\mathbb{R})$. We continue the notation and normalizations from Section 3. ### 4.1. Heisenberg ultrahyperbolic operator $R(D_{s})$ We start with the definition of the Heisenberg ultrahyperbolic operator $R(D_{s})$ for $\widetilde{SL}(3,\mathbb{R})$. As $R(D_{s})$ being an intertwining differential operator (see Proposition 4.3 below), it is defined for the complex Lie algebra ${\mathfrak{g}}=\mathfrak{sl}(3,\mathbb{C})$. Let $X$ and $Y$ be the elements of $\bar{{\mathfrak{n}}}$ defined in (3.1). For $s\in\mathbb{C}$, we define $D_{s}$ as $D_{s}:=(XY+YX)+s[X,Y]\in\mathcal{U}({\mathfrak{g}}).$ Then the second order differential operator $R(D_{s})=R\big{(}(XY+YX)+s[X,Y]\big{)}$ is called the _Heisenberg ultrahyperbolic operator_ for ${\mathfrak{g}}=\mathfrak{sl}(3,\mathbb{C})$. For the general definition for $\mathfrak{sl}(m,\mathbb{C})$ with arbitrary rank $m\geq 3$, see [18, Sect. 3]. Recall from Section 3.1 that the simple roots $\alpha,\beta\in\Pi$ are realized as $\alpha=\varepsilon_{1}-\varepsilon_{2}$ and $\beta=\varepsilon_{2}-\varepsilon_{3}$. 
Then, for $s\in\mathbb{C}$, we set $\widetilde{\rho}(s):=\widetilde{\rho}-s\widetilde{\rho}^{\perp}$ (4.1) with $\widetilde{\rho}:=\frac{1}{2}(\varepsilon_{1}-\varepsilon_{3})\quad\text{and}\quad\widetilde{\rho}^{\perp}:=\frac{1}{2}(\varepsilon_{1}+\varepsilon_{3}).$ Remark that as $\widetilde{\rho}^{\perp}\notin{\mathfrak{a}}^{*}$, we have $\widetilde{\rho}(s)\notin{\mathfrak{a}}^{*}$ unless $s=0$. For $\sigma\in\mathrm{Irr}(M)$, we write $I(\sigma,\widetilde{\rho}(s))=I_{B}(\sigma,\widetilde{\rho}(s)|_{{\mathfrak{a}}^{*}})$ (4.2) for the parabolically induced representation $I_{B}(\sigma,\widetilde{\rho}(s)|_{{\mathfrak{a}}^{*}})$ defined as in (2.1). Proposition 4.3 below shows that the operator $R(D_{s})$ is indeed an intertwining differential operator between parabolically induced representations. ###### Proposition 4.3. We have $R(D_{s})\in\mathrm{Diff}_{G}(I(\textnormal{\mbox{\smaller($+$,$+$)}},-\widetilde{\rho}(s)),I(\textnormal{\mbox{\smaller($-$,$-$)}},\widetilde{\rho}(-s)).$ Consequently, for $\sigma\in\mathrm{Irr}(M)$, we have $R(D_{s})\otimes\mathrm{id}_{\sigma}\in\mathrm{Diff}_{G}(I(\sigma,-\widetilde{\rho}(s)),I(\textnormal{\mbox{\smaller($-$,$-$)}}\otimes\sigma,\widetilde{\rho}(-s)).$ ###### Proof. The first assertion readily follows from [18, Lem. 3.2] and [27, Lem. 6.4]. Lemma 2.2 then concludes the second. ∎ Our aim is to compute the branching law of the space $\mathcal{S}ol_{(D_{s};-\widetilde{\rho}(s))}(\sigma)_{K}$ of $K$-finite solutions to $(R(D_{s})\otimes\mathrm{id}_{\sigma})f=0$. The $K$-type formula in (4.4) with $(u;\lambda)=(D_{s};-\widetilde{\rho}(s))$ gives $\mathcal{S}ol_{(D_{s};-\widetilde{\rho}(s))}(\sigma)_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{n}[t]\otimes\mathrm{Hom}_{M}\left(\mathrm{Sol}_{(D_{s})}(n),\sigma\right)$ (4.4) with $\mathrm{Sol}_{(D_{s})}(n)=\\{p(t)\in\mathrm{Pol}_{n}[t]:d\pi_{n}^{\Omega}\big{(}\tau(D_{s}^{\flat})\big{)}p(t)=0\\}.$ Observe that, for $X$ and $Y$ in (3.1), we have $X,Y\in{\mathfrak{g}}_{0}=\mathfrak{sl}(3,\mathbb{R})$. Thus $\tau(D_{s}^{\flat})$ for $D_{s}=(XY+YX)+s[X,Y]$ is simply given as $\tau(D^{\flat}_{s})=\tau(D_{s})^{\flat}=D^{\flat}_{\bar{s}}\,,$ (4.5) where $\bar{s}$ denotes the complex conjugate of $s\in\mathbb{C}$. Then we write $\mathrm{Sol}(s;n):=\mathrm{Sol}_{(D_{\bar{s}})}(n)$ so that $\displaystyle\mathrm{Sol}(s;n)$ $\displaystyle=\\{p(t)\in\mathrm{Pol}_{n}[t]:d\pi_{n}^{\Omega}\big{(}\tau(D_{\bar{s}}^{\flat})\big{)}p(t)=0\\}$ $\displaystyle=\\{p(t)\in\mathrm{Pol}_{n}[t]:d\pi_{n}^{\Omega}\big{(}D_{s}^{\flat}\big{)}p(t)=0\\}.$ (4.6) Similarly, we put $\mathcal{S}ol(s;\sigma)_{K}:=\mathcal{S}ol_{(D_{\bar{s}};-\widetilde{\rho}(\bar{s}))}(\sigma)_{K}.$ Then the $K$-type formula (4.4) yields $\mathcal{S}ol(s;\sigma)_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{n}[t]\otimes\mathrm{Hom}_{M}\left(\mathrm{Sol}(s;n),\sigma\right).$ (4.7) Hereafter we consider the branching law of $\mathcal{S}ol(s;\sigma)_{K}$ instead of $\mathcal{S}ol_{(D_{s};-\widetilde{\rho}(s))}(\sigma)_{K}$. 
In order to solve the equation $d\pi_{n}^{\Omega}\big{(}D_{s}^{\flat}\big{)}p(t)=0$ in (4.1), we next find the compact model $D_{s}^{\flat}$ of $D_{s}$, which is an element of $\mathcal{U}({\mathfrak{k}})$ such that $D_{s}^{\flat}\otimes(\mathbb{1}_{{\tiny\textnormal{\mbox{\smaller($+$,$+$)}}}}\otimes\mathbb{1}_{\widetilde{\rho}(s)-\rho})=D_{s}\otimes(\mathbb{1}_{{\tiny\textnormal{\mbox{\smaller($+$,$+$)}}}}\otimes\mathbb{1}_{\widetilde{\rho}(s)-\rho})$ in $\mathcal{U}({\mathfrak{g}})\otimes_{\mathcal{U}({\mathfrak{b}})}\mathbb{C}_{{\tiny\textnormal{\mbox{\smaller($+$,$+$)}}},\,\widetilde{\rho}(s)-\rho}$. For $X,Y\in\bar{{\mathfrak{n}}}$ in (3.1), define $X^{\flat},Y^{\flat}\in{\mathfrak{k}}$ as $X^{\flat}:=X+\theta(X)\quad\text{and}\quad Y^{\flat}:=Y+\theta(Y),$ where $\theta$ is the Cartan involution defined as $\theta(U)=-U^{t}$. ###### Lemma 4.8. We have $D^{\flat}_{s}=X^{\flat}Y^{\flat}+Y^{\flat}X^{\flat}+s[X^{\flat},Y^{\flat}]\in\mathcal{U}({\mathfrak{k}}).$ (4.9) ###### Proof. One can easily verify that $\displaystyle(X^{\flat}Y^{\flat}+Y^{\flat}X^{\flat}+s[X^{\flat},Y^{\flat}])\otimes(\mathbb{1}_{{\tiny\textnormal{\mbox{\smaller($+$,$+$)}}}}\otimes\mathbb{1}_{\widetilde{\rho}(s)-\rho})=D_{s}\otimes(\mathbb{1}_{{\tiny\textnormal{\mbox{\smaller($+$,$+$)}}}}\otimes\mathbb{1}_{\widetilde{\rho}(s)-\rho})$ in $\mathcal{U}({\mathfrak{g}})\otimes_{\mathcal{U}({\mathfrak{b}})}\mathbb{C}_{{\tiny\textnormal{\mbox{\smaller($+$,$+$)}}},\,\widetilde{\rho}(s)-\rho}$. ∎ ### 4.2. Relationship between $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})$ and $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})$ In Section 3.2, we discussed two identifications of ${\mathfrak{k}}$ with $\mathfrak{sl}(2,\mathbb{C})$, namely, $\Omega^{\textnormal{I}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})\quad\text{and}\quad\Omega^{\textnormal{II}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C}).$ (4.10) Thus compact model $D^{\flat}_{s}$ may act on $\mathrm{Pol}_{n}[t]$ via $d\pi_{n}$ as $d\pi_{n}(\Omega^{\textnormal{I}}(D^{\flat}_{s}))\quad\text{and}\quad d\pi_{n}(\Omega^{\textnormal{II}}(D^{\flat}_{s})).$ (4.11) As for the notation $d\pi_{n}^{\Omega}(F)=d\pi_{n}(\Omega(F))$ in (3.18), we abbreviate (4.11) as $d\pi_{n}^{J}(D^{\flat}_{s})=d\pi_{n}(\Omega^{J}(D^{\flat}_{s}))\quad\text{for $J\in\\{\textnormal{I},\textnormal{II}\\}$}.$ We next discuss a relationship between $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})$ and $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})$. We begin with the expression of $\Omega^{J}(D^{\flat}_{s})$ in terms of the $\mathfrak{sl}(2)$-triple $\\{E_{+},\,E_{-},\,E_{0}\\}$ in (3.2). First, recall from (3.6) and (3.9) that ${\mathfrak{k}}$ can be given as ${\mathfrak{k}}=\text{span}\\{Z_{+},\,Z_{-},\,Z_{0}\\}=\text{span}\\{W_{+},\,W_{-},\,W_{0}\\},$ where $\\{Z_{+},\,Z_{-},\,Z_{0}\\}$ and $\\{W_{+},\,W_{-},\,W_{0}\\}$ are the $\mathfrak{sl}(2)$-triples of ${\mathfrak{k}}$ defined in (3.5) and (3.8), respectively. ###### Lemma 4.12. The compact model $D^{\flat}_{s}=X^{\flat}Y^{\flat}+Y^{\flat}X^{\flat}+s[X^{\flat},Y^{\flat}]$ is expressed as $\displaystyle D^{\flat}_{s}$ $\displaystyle=\frac{\sqrt{-1}}{2}\left((Z_{+}+Z_{-})(Z_{+}-Z_{-})-(s-1)Z_{0}\right)$ (4.13) $\displaystyle=\frac{1}{2}\left((W_{+}+W_{-})W_{0}-(s-1)(W_{+}-W_{-})\right).$ (4.14) ###### Proof. 
A direct computation shows that $X^{\flat}$ and $Y^{\flat}$ are expressed in terms of $\\{Z_{+},\,Z_{-},\,Z_{0}\\}$ and $\\{W_{+},\,W_{-},\,W_{0}\\}$ as $\displaystyle X^{\flat}$ $\displaystyle=\frac{\sqrt{-1}}{2}(Z_{+}+Z_{-})$ $\displaystyle=-\frac{\sqrt{-1}}{2}(W_{+}+W_{-}),$ $\displaystyle Y^{\flat}$ $\displaystyle=\frac{1}{2}(Z_{+}-Z_{-})$ $\displaystyle=\frac{\sqrt{-1}}{2}W_{0}.$ By substituting these expressions into $D^{\flat}_{s}$, one obtains the lemma. ∎ ###### Lemma 4.15. The elements $\Omega^{\textnormal{I}}(D^{\flat}_{s})$ and $\Omega^{\textnormal{II}}(D^{\flat}_{s})$ in $\mathcal{U}(\mathfrak{sl}(2,\mathbb{C}))$ are given as $\displaystyle\Omega^{\textnormal{I}}(D^{\flat}_{s})$ $\displaystyle=\frac{\sqrt{-1}}{2}\left((E_{+}+E_{-})(E_{+}-E_{-})-(s-1)E_{0}\right),$ (4.16) $\displaystyle\Omega^{\textnormal{II}}(D^{\flat}_{s})$ $\displaystyle=\frac{1}{2}\left((E_{+}+E_{-})E_{0}-(s-1)(E_{+}-E_{-})\right).$ (4.17) ###### Proof. This follows from (3.7), (3.10), and Lemma 4.12. ∎ We set $k_{0}:=\frac{1}{\sqrt{2}}\begin{pmatrix}\sqrt{-1}&1\\\ -1&-\sqrt{-1}\end{pmatrix}\in SU(2).$ (4.18) ###### Proposition 4.19. We have $\Omega^{\textnormal{I}}(D^{\flat}_{s})=\mathrm{Ad}(k_{0})\Omega^{\textnormal{II}}(D^{\flat}_{s}).$ (4.20) Consequently, for $p(t)\in\mathrm{Pol}_{n}[t]$, the following are equivalent: 1. (i) $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})p(t)=0$; 2. (ii) $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})\pi_{n}(k_{0})p(t)=0$. ###### Proof. One can easily check that $\displaystyle E_{+}+E_{-}$ $\displaystyle=-\mathrm{Ad}(k_{0})(E_{+}+E_{-}),$ $\displaystyle E_{+}-E_{-}$ $\displaystyle=\sqrt{-1}\mathrm{Ad}(k_{0})E_{0}.$ Now the identity (4.20) follows from Lemma 4.15. Since $d\pi_{n}^{J}(D^{\flat}_{s})=d\pi_{n}(\Omega^{J}(D^{\flat}_{s}))$ for $J\in\\{\textnormal{I},\textnormal{II}\\}$, the equivalence between (i) and (ii) readily follows from (4.20). ∎ For $J\in\\{\textnormal{I},\textnormal{II}\\}$, we write $\mathrm{Sol}_{J}(s;n)=\\{p(t)\in\mathrm{Pol}_{n}[t]:d\pi^{J}_{n}(D^{\flat}_{s})p(t)=0\\}.$ (4.21) In place of $\mathrm{Sol}(s;n)$ with $\mathrm{Sol}_{J}(s;n)$, the $K$-type decomposition (4.7) becomes $\mathcal{S}ol(s;\sigma)_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{n}[t]\otimes\mathrm{Hom}_{M}\left(\mathrm{Sol}_{J}(s;n),\sigma\right).$ (4.22) It follows from Proposition 4.19 that $\pi_{n}(k_{0})$ yields a linear isomorphism $\pi_{n}(k_{0})\colon\mathrm{Sol}_{\textnormal{II}}(s;n)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathrm{Sol}_{\textnormal{I}}(s;n).$ (4.23) For $m^{\textnormal{I}}_{j}\in M$ for $j=0,1,2,3$ defined in (3.11), we define $m^{\textnormal{II}}_{j}\in M$ as $m^{\textnormal{II}}_{j}:=k_{0}^{-1}m^{\textnormal{I}}_{j}k_{0}$ (4.24) so that the following diagram commutes. $\textstyle{\mathrm{Sol}_{\textnormal{II}}(s;n)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\pi_{n}(k_{0})}$$\scriptstyle{\sim}$$\scriptstyle{\pi_{n}(m^{\textnormal{II}}_{j})}$$\textstyle{\mathrm{Sol}_{\textnormal{I}}(s;n)\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\pi_{n}(m^{\textnormal{I}}_{j})}$$\textstyle{\mathrm{Sol}_{\textnormal{II}}(s;n)\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\pi_{n}(k_{0})}$$\scriptstyle{\sim}$$\textstyle{\mathrm{Sol}_{\textnormal{I}}(s;n)}$ A direct computation shows that $m^{\textnormal{II}}_{j}$ are given as follows. 
$m^{\textnormal{II}}_{0}=\begin{pmatrix}1&0\\\ 0&1\\\ \end{pmatrix},\;m^{\textnormal{II}}_{1}=\begin{pmatrix}0&1\\\ -1&0\end{pmatrix},\;m^{\textnormal{II}}_{2}=\begin{pmatrix}\sqrt{-1}&0\\\ 0&-\sqrt{-1}\\\ \end{pmatrix},\;m^{\textnormal{II}}_{3}=-\begin{pmatrix}0&\sqrt{-1}\\\ \sqrt{-1}&0\end{pmatrix}.$ (4.25) By (3.11) and (4.25), we have $m_{0}^{\textnormal{II}}=m_{0}^{\textnormal{I}},\quad m_{1}^{\textnormal{II}}=m_{2}^{\textnormal{I}},\quad m_{2}^{\textnormal{II}}=m_{1}^{\textnormal{I}},\quad m_{3}^{\textnormal{II}}=-m_{3}^{\textnormal{I}}.$ (4.26) It then follows from (3.16) that $m_{j}^{\textnormal{II}}$ act on $\mathrm{Pol}_{n}[t]$ via $d\pi_{n}$ as $m_{1}^{\textnormal{II}}\colon p(t)\mapsto t^{n}p\left(-\frac{1}{t}\right);\;\;m_{2}^{\textnormal{II}}\colon p(t)\mapsto(\sqrt{-1})^{n}p(-t);\;\;m_{3}^{\textnormal{II}}\colon p(t)\mapsto(\sqrt{-1}t)^{n}p\left(\frac{1}{t}\right).$ (4.27) Table 4 is the character table for $(\varepsilon,\varepsilon^{\prime})$ with $m_{j}^{\textnormal{II}}$. Table 4. Character table for $(\varepsilon,\varepsilon^{\prime})$ for $m_{j}^{\textnormal{II}}$ | $\pm m_{0}^{\textnormal{II}}$ | $\pm m_{1}^{\textnormal{II}}$ | $\pm m_{2}^{\textnormal{II}}$ | $\pm m_{3}^{\textnormal{II}}$ ---|---|---|---|--- ($+$,$+$) | $1$ | $1$ | $1$ | $1$ ($+$,$-$) | $1$ | $-1$ | $-1$ | $1$ ($-$,$+$) | $1$ | $-1$ | $1$ | $-1$ ($-$,$-$) | $1$ | $1$ | $-1$ | $-1$ ### 4.3. Differential equation $d\pi_{n}^{J}(D^{\flat}_{s})f(t)=0$ for $J\in\\{\textnormal{I},\textnormal{II}\\}$ As indicated in the recipe in Section 3.7, to determine the $K$-type decomposition of $\mathcal{S}ol(s;n)_{K}$, it is crucial to determine the space $\mathrm{Sol}_{J}(s;n)$ of $K$-type solutions, for which one needs to find polynomial solutions $p(t)\in\mathrm{Pol}_{n}[t]$ to differential equations $d\pi_{n}^{J}(D^{\flat}_{s})f(t)=0$ for $J\in\\{\textnormal{I},\textnormal{II}\\}$. For this purpose we next investigate these differential equations. #### 4.3.1. Differential equation $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$ We start with the differential equation $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$. The explicit formula of $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})$ is given as follows. ###### Lemma 4.28. We have $-2\sqrt{-1}\,d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})=(1-t^{4})\frac{d^{2}}{dt^{2}}+2\left((n-1)t^{2}+s\right)t\frac{d}{dt}-n\left((n-1)t^{2}+s\right).$ (4.29) ###### Proof. Recall from (4.16) that we have $-2\sqrt{-1}\,\Omega^{\textnormal{I}}(D^{\flat}_{s})=(E_{+}+E_{-})(E_{+}-E_{-})-(s-1)E_{0}.$ Now the proposed identity follows from a direct computation with (3.17). ∎ In order to study the differential equation $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$, we set $\mathcal{D}_{H}(a,q;\alpha,\beta,\gamma,\delta;z):=\frac{d^{2}}{dz^{2}}+\left(\frac{\gamma}{z}+\frac{\delta}{z-1}+\frac{\varepsilon}{z-a}\right)\frac{d}{dz}+\frac{\alpha\beta z-q}{z(z-1)(z-a)},$ (4.30) where $a,q,\alpha,\beta,\gamma,\delta,\varepsilon$ are complex parameters with $a\neq 0,1$ and $\gamma+\delta+\varepsilon=\alpha+\beta+1$. Then the differential equation $\mathcal{D}_{H}(a,q;\alpha,\beta,\gamma,\delta;z)f(z)=0$ (4.31) is called _Heun’s differential equation_. For a brief account of the equation (4.31), see Section 9.1. The differential equation $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$ can be identified with Heun’s differential equation as in Lemma 4.32 below. (We are grateful to Hiroyuki Ochiai for pointing it out.) ###### Lemma 4.32. 
We have $-\frac{d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})}{4}=\mathcal{D}_{H}(-1,-\frac{ns}{4};-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},\frac{1-n-s}{2};t^{2}).$ ###### Proof. This follows from a direct computation with a change of variables $z=t^{2}$ on (4.29). ∎ By Lemma 4.32, to determine the space $\mathrm{Sol}_{\textnormal{I}}(s;n)$ of $K$-type solutions to $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$, it suffices to find polynomial solutions to the Heun equation $\mathcal{D}_{H}(-1,-\frac{ns}{4};-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},\frac{1-n-s}{2};t^{2})f(t)=0.$ (4.33) Let $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$ denote the power series solution $Hl(a,q;\alpha,\beta,\gamma,\delta;z)=\sum_{r=0}^{\infty}c_{r}z^{r}$ with $c_{0}=1$ to (4.31) at $z=0$ ([38]). We set $\displaystyle u_{[s;n]}(t):=Hl(-1,-\frac{ns}{4};-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},\frac{1-n-s}{2};t^{2}),$ (4.34) $\displaystyle v_{[s;n]}(t):=tHl(-1,-\frac{(n-2)s}{4};-\frac{n-1}{2},-\frac{n-2}{2},\frac{3}{2},\frac{1-n-s}{2};t^{2}).$ (4.35) It follows from (9.1) in Section 9 that $u_{[s;n]}(t)$ and $v_{[s;n]}(t)$ are linearly independent solutions at $t=0$ to the Heun equation (4.33). Therefore we have $\mathrm{Sol}_{\textnormal{I}}(s;n)\subset\mathbb{C}u_{[s;n]}(t)\oplus\mathbb{C}v_{[s;n]}(t).$ (4.36) We shall classify the parameters $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ such that $u_{[s;n]}(t),v_{[s;n]}(t)\in\mathrm{Sol}_{\textnormal{I}}(s;n)$ in Section 6 (see Propositions 6.5 and 6.6). ###### Remark 4.37. Let $F(a,b,c;z)\equiv{}_{2}F_{1}(a,b,c;z)$ denote the Gauss hypergeometric function. In [29, (3.8)], Maier found the following Heun-to-Gauss reduction formula: $Hl(-1,0;\alpha,\beta,\gamma,\frac{\alpha+\beta-\gamma+1}{2};t)=F(\frac{\alpha}{2},\frac{\beta}{2},\frac{\gamma+1}{2};t^{2}).$ (4.38) We remark that the case of $s=0$ for $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{0})f(t)=0$ gives special cases of this identity. Indeed, it follows from [27, Lem. 7.3] that a change of variables $z=t^{4}$ yields the identity $\frac{\sqrt{-1}}{32t^{2}}d\pi_{n}(D^{\flat}_{0})=\mathcal{D}_{F}(-\frac{n}{4},-\frac{n-1}{4},\frac{3}{4};t^{4}),$ where $\mathcal{D}_{F}(a,b,c;z)$ denotes the differential operator $\mathcal{D}_{F}(a,b,c;z)=z(1-z)\frac{d^{2}}{dz^{2}}+(c-(a+b+1)z)\frac{d}{dz}-ab$ (4.39) such that $\mathcal{D}_{F}(a,b,c;z)f(z)=0$ is the hypergeometric differential equation. Thus the equation $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{0})f(t)=0$ is also equivalent to the hypergeometric equation $\mathcal{D}_{F}(-\frac{n}{4},-\frac{n-1}{4},\frac{3}{4};t^{4})f(t)=0.$ (4.40) Then (4.33) and (4.40) yield the following Heun-to-Gauss reductions: $\displaystyle Hl(-1,0;-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},-\frac{n-1}{2};t)$ $\displaystyle=F(-\frac{n}{4},-\frac{n-1}{4},\frac{3}{4};t^{2});$ (4.41) $\displaystyle Hl(-1,0;-\frac{n-1}{2},-\frac{n-2}{2},\frac{3}{2},-\frac{n-1}{2};t)$ $\displaystyle=F(-\frac{n-1}{4},-\frac{n-2}{4},\frac{5}{4};t^{2}).$ (4.42) #### 4.3.2. Differential equation $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})f(t)=0$ We next consider $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})f(t)=0$. The equation $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})f(t)=0$ can be identified with the hypergeometric equation $\mathcal{D}_{F}(a,b,c;z)f(z)=0$ as follows. ###### Lemma 4.43. We have $\frac{d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})}{16t}=D_{F}(-\frac{n}{2},-\frac{n+s-1}{4},\frac{3-n+s}{4};t^{2}).$ ###### Proof. 
First, it follows from (4.17) and (3.17) that $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})$ is given as $2\,d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})=2(1-t^{2})t\frac{d^{2}}{dt^{2}}+((s+3n-3)t^{2}+(s-n+1))\frac{d}{dt}-n(s+n-1)t.$ (4.44) Then a direct computation with a change of variable $z=t^{2}$ concludes the lemma. ∎ Similar to the equation $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$, Lemma 4.43 shows that, to determine the space $\mathrm{Sol}_{\textnormal{II}}(s;n)$ of $K$-type solutions to $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})f(t)=0$, it suffices to find polynomial solutions to the hypergeometric equation $D_{F}(-\frac{n}{2},-\frac{n+s-1}{4},\frac{3-n+s}{4};t^{2})f(t)=0.$ (4.45) We set $\displaystyle a_{[s;n]}(t):=F(-\frac{n}{2},-\frac{n+s-1}{4},\frac{3-n+s}{4};t^{2}),$ $\displaystyle b_{[s;n]}(t):=t^{\frac{1+n-s}{2}}F(-\frac{n+s-1}{4},-\frac{s-1}{2},\frac{5+n-s}{4};t^{2}).$ Then $a_{[s;n]}(t)$ and $b_{[s;n]}(t)$ form fundamental set of solutions to (4.45) for suitable parameters $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ for which $a_{[s;n]}(t)$ and $b_{[s;n]}(t)$ are well-defined. Therefore we have $\mathrm{Sol}_{\textnormal{II}}(s;n)\subset\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t).$ (4.46) The precise conditions of $(s,n)$ such that $a_{[s;n]}(t),b_{[s;n]}(t)\in\mathrm{Sol}_{\textnormal{II}}(s;n)$ will be investigated in Section 5.1. ### 4.4. Recipe for the $K$-type decomposition of $\mathcal{S}ol(s;\sigma)_{K}$ In Section 3.7, we gave a general recipe to determine the $K$-type formula (3.20) of $\mathcal{S}ol_{(u;\lambda)}(\sigma)_{K}$. We modify the recipe in such a way that it will fit well for $\mathcal{S}ol(s;\sigma)_{K}$ with $\mathcal{S}ol(s;\sigma)_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{n}[t]\otimes\mathrm{Hom}_{M}\left(\mathrm{Sol}_{J}(s;n),\sigma\right).$ We first fix an identification $\Omega^{J}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathfrak{sl}(2,\mathbb{C})\;\;\text{for $J\in\\{\textnormal{I},\textnormal{II}\\}$}.$ Then the $K$-type formula of $\mathcal{S}ol(s;\sigma)_{K}$ may be determined in the following three steps. Step A: Classify $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ such that $\mathrm{Sol}_{J}(s;n)\neq\\{0\\}$. It follows from (4.36) and (4.46) that this is indeed equivalent to classifying $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ such that * • $u_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ or $v_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ for $J=\textnormal{I}$; * • $a_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ or $b_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ for $J=\textnormal{II}$. Step B: For $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ with $\mathrm{Sol}_{J}(s;n)\neq\\{0\\}$, classify the $M$-representations on $\mathrm{Sol}_{J}(s;n)$. Step C: Given $\sigma\in\mathrm{Irr}(M)$, classify $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ with $\mathrm{Sol}_{J}(s;n)\neq\\{0\\}$ such that $\mathrm{Hom}_{M}\left(\mathrm{Sol}_{J}(s;n),\,\sigma\right)\neq\\{0\\}.$ In Sections 5 and 6, we shall proceed Steps A, B, and C for $J=\textnormal{II}$ and $J=\textnormal{I}$, respectively. ## 5\. Hypergeometric model $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})f(t)=0$ The aim of this section is to classify the $K$-type formulas for $\mathcal{S}ol(s;\sigma)_{K}$ by using the hypergeometric model $d\pi^{\textnormal{II}}_{n}(D^{\flat}_{s})f(t)=0$. The decomposition formulas are achieved in Theorem 5.36. ### 5.1. 
The classification of $\mathrm{Sol}_{\textnormal{II}}(s;n)$ As Step A of the recipe in Section 4.4, we first wish to classify $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ such that $\mathrm{Sol}_{\textnormal{II}}(s;n)\neq\\{0\\}$, where $\mathrm{Sol}_{\textnormal{II}}(s;n)=\\{p(t)\in\mathrm{Pol}_{n}[t]:d\pi^{\textnormal{II}}_{n}(D^{\flat}_{s})p(t)=0\\}.$ Recall from (4.46) that we have $\mathrm{Sol}_{\textnormal{II}}(s;n)\subset\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)$ with $\displaystyle a_{[s;n]}(t)=F(-\frac{n}{2},-\frac{n+s-1}{4},\frac{3-n+s}{4};t^{2}),$ (5.1) $\displaystyle b_{[s;n]}(t)=t^{\frac{1+n-s}{2}}F(-\frac{n+s-1}{4},-\frac{s-1}{2},\frac{5+n-s}{4};t^{2}).$ (5.2) It thus suffices to classify $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ such that $a_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ or $b_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$. Given $n\in\mathbb{Z}_{\geq 0}$, we define $I_{0}^{\pm}$, $J_{0}$, $I_{1}$, $I_{2}^{\pm}$, $J_{2}$, $I_{3}\subset\mathbb{Z}$ as follows: $\displaystyle\begin{aligned} I_{0}^{\pm}&:=\\{\pm(3+4j):j=0,1,\ldots,\frac{n}{4}-1\\}\quad(n\in 4\mathbb{Z}_{\geq 0});\\\ J_{0}&:=\\{1+4j:j=0,1,\ldots,\frac{n}{4}-1\\}\quad(n\in 4\mathbb{Z}_{\geq 0});\\\ I_{1}&:=\\{\pm 4j:j=0,1,\dots,\left[\frac{n}{4}\right]\\};\\\ I_{2}^{\pm}&:=\\{\pm(1+4j):j=0,1,\ldots,\left[\frac{n}{4}\right]\\};\\\ J_{2}&:=\\{3+4j:j=0,1,\ldots,\left[\frac{n}{4}\right]-1\\};\\\ I_{3}&:=\\{\pm(2+4j):j=0,1,\dots,\left[\frac{n}{4}\right]\\}.\end{aligned}$ (5.3) We start by observing when $a_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$. A key observation is that, for $(a,b,c)\in\mathbb{C}$ for which the hypergeometric function $F(a,b,c;z)$ is well-defined, we have $F(a,b,c;z)\in\mathrm{Pol}_{n}[z]\quad\Longleftrightarrow\quad\text{$a\in\\{0,-1,\ldots,-n\\}$ or $b\in\\{0,-1,\ldots,-n\\}$}.$ ###### Proposition 5.4. The following conditions on $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ are equivalent. 1. (i) $a_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$. 2. (ii) One of the following conditions hold: 1. (a) $n\equiv 0\ (\mathrm{mod}\ 4):$ $s\in\mathbb{C}\backslash J_{0};$ 2. (b) $n\equiv 1\ (\mathrm{mod}\ 4):$ $s\in I_{1};$ 3. (c) $n\equiv 2\ (\mathrm{mod}\ 4):$ $s\in\mathbb{C}\backslash J_{2};$ 4. (d) $n\equiv 3\ (\mathrm{mod}\ 4):$ $s\in I_{3}$. ###### Proof. The proposition follows from a careful observation for the parameters of $a_{[s;n]}(t)$ in (5.1). Indeed, suppose that $n$ is even. Then the parameters of $a_{[s;n]}(t)$ imply that $a_{[s;n]}(t)\notin\mathrm{Pol}_{n}[t]$ if and only if the following conditions are satisfied $\frac{3-n+s}{4}\in-\mathbb{Z}_{\geq 0},\quad-\frac{n}{2}<\frac{3-n+s}{4},\quad\text{and}\quad-\frac{n+s-1}{4}<\frac{3-n+s}{4},$ which is equivalent to $s\in J_{0}$ for $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in J_{2}$ for $n\equiv 2\ (\mathrm{mod}\ 4)$. Now the assertions for (a) and (c) follow from the contrapositive of the arguments. Next suppose that $n$ is odd. In this case it follows from (5.1) that $a_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ if and only if $0\leq\frac{n+s-1}{4}\leq\left[\frac{n}{2}\right]\quad\text{and}\quad\frac{n+s-1}{4}\in\mathbb{Z}_{\geq 0},$ which is equivalent to $s\in I_{1}$ for $n\equiv 1\ (\mathrm{mod}\ 4)$ and $s\in I_{3}$ for $n\equiv 3\ (\mathrm{mod}\ 4)$. Here we remark that, for $s\in I_{1}\cup I_{3}$, we have $\frac{3-n-s}{4}\notin\mathbb{Z}$. This concludes the proposition. ∎ Suppose that $a_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$. Then generically we have $\deg a_{[s;n]}(t)=n$. 
Lemma 5.5 below classifies the singular parameters of $s\in\mathbb{C}$ for $a_{[s;n]}(t)$ in a sense that $\deg a_{[s;n]}(t)<n$ (see Section 5.2.4). ###### Lemma 5.5. Suppose that $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash J_{k}$ for $k=0,2$. Then the following conditions on $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ are equivalent. 1. (i) $\deg a_{[s;n]}(t)<n$. 2. (ii) $\deg a_{[s;n]}(t)=\frac{n+s-1}{2}$. 3. (iii) One of the following conditions holds: 1. (a) $n\equiv 0\ (\mathrm{mod}\ 4):$ $s\in I_{0}^{-};$ 2. (b) $n\equiv 2\ (\mathrm{mod}\ 4):$ $s\in I_{2}^{-}$. ###### Proof. It follows from the parameters of $a_{[s;n]}(t)$ that $\deg a_{[s;n]}(t)<n$ if and only if $2\cdot\frac{n+s-1}{4}<n\quad\text{and}\quad\frac{n+s-1}{4}\in\mathbb{Z}_{\geq 0},$ (5.6) which shows the equivalence between (i) and (ii). Moreover, (5.6) is equivalent to $s\in\\{-n+1+4j:j=0,1,2,\ldots,\frac{n}{2}-1\\}.$ (5.7) One can readily verifty that, under the condition $s\notin J_{k}$, (5.7) is indeed equivalent to $s\in I^{-}_{k}$ for $k=0,2$. ∎ We next consider $b_{[s;n]}(t)$ in (5.2). ###### Proposition 5.8. The following conditions on $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ are equivalent. 1. (i) $b_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ with $b_{[s;n]}(t)\neq a_{[s;n]}(t)$. 2. (ii) One of the following conditions holds. 1. (a) $n\equiv 0\ (\mathrm{mod}\ 4):$ $s\in I^{+}_{0}\cup I^{-}_{0}\cup J_{0};$ 2. (b) $n\equiv 1\ (\mathrm{mod}\ 4):$ $s\in I_{1};$ 3. (c) $n\equiv 2\ (\mathrm{mod}\ 4):$ $s\in I^{+}_{2}\cup I^{-}_{2}\cup J_{2};$ 4. (d) $n\equiv 3\ (\mathrm{mod}\ 4):$ $s\in I_{3};$ ###### Proof. Observe that if $b_{[s;n]}(t)\in\mathrm{Pol}[t]$, then the exponent $\frac{1+n-s}{2}$ for $t^{\frac{1+n-s}{2}}$ in (5.2) must satisfy $\frac{1+n-s}{2}\in\mathbb{Z}_{\geq 0}$, which in particular forces $\frac{5+n-s}{4}\notin-\mathbb{Z}_{\geq 0}$. Moreover, if $\frac{1+n-s}{2}=0$, then $b_{[n+1;n]}(t)=F(-\frac{n}{2},-\frac{n}{2},1;t^{2})=a_{[n+1;n]}(t).$ Consequently, we have $b_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ with $b_{[s;n]}(t)\neq a_{[s;n]}(t)$ if and only if either $\displaystyle\frac{1+n-s}{2}\in 1+\mathbb{Z}_{\geq 0},\quad$ $\displaystyle\frac{n+s-1}{4}\in\mathbb{Z}_{\geq 0},\quad$ $\displaystyle\text{and}\quad\frac{1+n-2}{2}+2\cdot\frac{n+s-1}{4}\leq n,$ (5.9) or $\displaystyle\frac{1+n-s}{2}\in 1+\mathbb{Z}_{\geq 0},\quad$ $\displaystyle\frac{s-1}{2}\in\mathbb{Z}_{\geq 0},\quad$ $\displaystyle\text{and}\quad\frac{1+n-2}{2}+2\cdot\frac{s-1}{2}\leq n.$ (5.10) A direct observation shows that the conditions (5.9) and (5.10) are equivalent to $s\in(-n+1+4\mathbb{Z}_{\geq 0})\cap(-\infty,n+1)$ (5.11) and $\text{$n$ is even and $s\in(1+2\mathbb{Z}_{\geq 0})\cap(-\infty,n+1)$},$ (5.12) respectively. One can directly verify that (5.11) and (5.12) are equivalent to the conditions on $s\in\mathbb{C}$ stated in Proposition 5.8 for $n\equiv k\ (\mathrm{mod}\ 4)$ for $k=0,1,2,3$. Indeed, if $n\equiv 0\ (\mathrm{mod}\ 4)$, then (5.11) and (5.12) are equivalent to $s\in I^{-}_{0}\cup J_{0}$ and $s\in I^{+}_{0}\cup J_{0}$, respectively. Since the other three cases can be shown similarly, we omit the proof. 
∎ It follows from Propositions 5.4 and 5.8 that if $n\equiv k\ (\mathrm{mod}\ 4)$ for $k=0,2$, then $\mathrm{Sol}_{\textnormal{II}}(s;n)=\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)\quad\textnormal{for $s\in I_{k}^{+}\cup I_{k}^{-}$}.$ (5.13) For a later purpose for determining the $M$-representations on $\mathrm{Sol}_{\textnormal{II}}(s;n)$, for such $n\equiv k\ (\mathrm{mod}\ 4)$, we define $c^{\pm}_{[s;n]}(t):=a_{[s;n]}(t)\pm C(s;n)b_{[s;n]}(t)\quad\text{for $s\in I_{k}^{-}$},$ where $C(s;n):=\begin{cases}\frac{\left(-\frac{n}{2},\left[\frac{n}{4}\right]\right)}{\left[\frac{n}{4}\right]!}&\textnormal{if $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s=-1$},\vspace{5pt}\\\ \frac{\left(-\frac{n}{2},\frac{n+s-1}{4}\right)\left(-\frac{n+s-1}{4},\frac{n+s-1}{4}\right)}{\left(\frac{3-n+s}{4},\frac{n+s-1}{4}\right)}\cdot\frac{1}{\left(\frac{n+s-1}{4}\right)!}&\textnormal{otherwise}.\end{cases}$ (5.14) Here $(\ell,m)$ stands for the shifted factorial, namely, $(\ell,m)=\frac{\Gamma(\ell+m)}{\Gamma(\ell)}$. Then, for $n\equiv k\ (\mathrm{mod}\ 4)$ for $k=0,2$, the space $\mathrm{Sol}_{\textnormal{II}}(s;n)$ may be described as $\mathrm{Sol}_{\textnormal{II}}(s;n)=\begin{cases}\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)&\textnormal{if $s\in I_{k}^{+}$},\\\ \mathbb{C}c^{+}_{[s;n]}(t)\oplus\mathbb{C}c^{-}_{[s;n]}(t)&\textnormal{if $s\in I_{k}^{-}$}.\end{cases}$ (5.15) We then summarize the parameters $(s,n)$ such that $\mathrm{Sol}_{\textnormal{II}}(s;n)\neq\\{0\\}$ as follows. ###### Theorem 5.16. The following conditions on $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ are equivalent. 1. (i) $\mathrm{Sol}_{\textnormal{II}}(s;n)\neq\\{0\\}$. 2. (ii) One of the following conditions is satisfied. * • $n\equiv 0\ (\mathrm{mod}\ 4):$ $s\in\mathbb{C}$. * • $n\equiv 1\ (\mathrm{mod}\ 4):$ $s\in I_{1}$. * • $n\equiv 2\ (\mathrm{mod}\ 4):$ $s\in\mathbb{C}$. * • $n\equiv 3\ (\mathrm{mod}\ 4):$ $s\in I_{3}$. Further, for such $(s,n)$, the space $\mathrm{Sol}_{\textnormal{II}}(s;n)$ may be given as follows. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{II}}(s;n)=\begin{cases}\mathbb{C}a_{[s;n]}(t)&\textnormal{if $s\in\mathbb{C}\backslash(I_{0}^{+}\cup I_{0}^{-}\cup J_{0})$},\\\ \mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)&\textnormal{if $s\in I_{0}^{+}$},\\\ \mathbb{C}b_{[s;n]}(t)&\textnormal{if $s\in J_{0}$},\\\ \mathbb{C}c^{+}_{[s;n]}(t)\oplus\mathbb{C}c^{-}_{[s;n]}(t)&\textnormal{if $s\in I_{0}^{-}$}.\end{cases}$ 2. (2) $n\equiv 1\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{II}}(s;n)=\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)\quad\textnormal{for $s\in I_{1}$}.$ 3. (3) $n\equiv 2\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{II}}(s;n)=\begin{cases}\mathbb{C}a_{[s;n]}(t)&\textnormal{if $s\in\mathbb{C}\backslash(I_{2}^{+}\cup I_{2}^{-}\cup J_{2})$},\\\ \mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)&\textnormal{if $s\in I_{2}^{+}$},\\\ \mathbb{C}b_{[s;n]}(t)&\textnormal{if $s\in J_{2}$},\\\ \mathbb{C}c^{+}_{[s;n]}(t)\oplus\mathbb{C}c^{-}_{[s;n]}(t)&\textnormal{if $s\in I_{2}^{-}$}.\end{cases}$ 4. (4) $n\equiv 3\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{II}}(s;n)=\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)\quad\textnormal{for $s\in I_{3}$}.$ ###### Proof. This is a summary of the results in Propositions 5.4 and 5.8 and (5.15). ∎ ###### Remark 5.17. Theorem 5.16 shows that the structure of $\mathrm{Sol}_{\textnormal{II}}(s;n)$ (hypergeometric model) is somewhat complicated. 
It will be shown in Theorem 6.7 that that of $\mathrm{Sol}_{\textnormal{I}}(s;n)$ (Heun model) is more straightforward. ###### Remark 5.18. Theorem 5.16 also completely classifies the dimension $\dim_{\mathbb{C}}\mathrm{Sol}_{\textnormal{II}}(s;n)$. When $n$ is even, the dimension $\dim_{\mathbb{C}}\mathrm{Sol}_{\textnormal{II}}(s;n)$ ($=\dim_{\mathbb{C}}\mathrm{Sol}_{\textnormal{I}}(s;n)$) was determined in [16, Thm. 5.13] by factorization formulas of certain tridiagonal determinants (see Theorem 7.18 and Remark 7.21). We shall study such determinants in Section 7 from a different perspective from [16]. ### 5.2. The $M$-representations on $\mathrm{Sol}_{\textnormal{II}}(s;n)$ As Step B of the recipe in Section 4.4, we next classify the $M$-representations on $\mathrm{Sol}_{\textnormal{II}}(s;n)$. Here is the classification. ###### Theorem 5.19. For each $(s;n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ determined in Theorem 5.16, the $M$-representations on $\mathrm{Sol}_{\textnormal{II}}(s;n)$ are classified as follows. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{II}}(s;n)\simeq\begin{cases}\textnormal{\mbox{\smaller($+$,$+$)}}&\textnormal{if $s\in\mathbb{C}\backslash(I_{0}^{+}\cup I_{0}^{-}\cup J_{0})$},\\\ \textnormal{\mbox{\smaller($+$,$+$)}}\oplus\textnormal{\mbox{\smaller($+$,$-$)}}&\textnormal{if $s\in I_{0}^{+}$},\\\ \textnormal{\mbox{\smaller($+$,$+$)}}&\textnormal{if $s\in J_{0}$},\\\ \textnormal{\mbox{\smaller($+$,$+$)}}\oplus\textnormal{\mbox{\smaller($-$,$+$)}}&\textnormal{if $s\in I_{0}^{-}$}.\end{cases}$ 2. (2) $n\equiv 1\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{II}}(s;n)\simeq\mathbb{H}\quad\textnormal{for $s\in I_{1}$}.$ 3. (3) $n\equiv 2\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{II}}(s;n)\simeq\begin{cases}\textnormal{\mbox{\smaller($-$,$-$)}}&\textnormal{if $s\in\mathbb{C}\backslash(I_{2}^{+}\cup I_{2}^{-}\cup J_{2})$},\\\ \textnormal{\mbox{\smaller($-$,$-$)}}\oplus\textnormal{\mbox{\smaller($-$,$+$)}}&\textnormal{if $s\in I_{2}^{+}$},\\\ \textnormal{\mbox{\smaller($-$,$-$)}}&\textnormal{if $s\in J_{2}$},\\\ \textnormal{\mbox{\smaller($-$,$-$)}}\oplus\textnormal{\mbox{\smaller($+$,$-$)}}&\textnormal{if $s\in I_{2}^{-}$}.\end{cases}$ 4. (4) $n\equiv 3\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{II}}(s;n)\simeq\mathbb{H}\quad\textnormal{for $s\in I_{3}$}.$ Here the characters $(\varepsilon,\varepsilon^{\prime})$ stand for the ones on $\mathbb{C}a_{[s;n]}(t)$, $\mathbb{C}b_{[s;n]}(t)$, and $\mathbb{C}c^{\pm}_{[s;n]}(t)$ at the same places in Theorem 5.16. We prove Theorem 5.19 by considering the following cases separately. * • Case 1: $n$ is odd. * • Case 2: $n$ is even. Let $k\in\\{0,2\\}$. * – Case 2a: $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash(I_{k}^{-}\cup J_{k})$ * – Case 2b: $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in I_{k}^{+}\cup J_{k}$ * – Case 2c: $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in I_{k}^{-}$ #### 5.2.1. Case 1 We start with the case that $\mathrm{Sol}_{\textnormal{II}}(s;n)$ is a two- dimensional representation of $M$. ###### Lemma 5.20. Suppose that $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in I_{k}$ for $k=1,3$. Then, as an $M$-representation, we have $\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)\simeq\mathbb{H}.$ ###### Proof. In this case $a_{[s;n]}(t)$ and $b_{[s;n]}(t)$ are even and odd functions, respectively. Then the assertion can be shown by essentially the same argument as the one for [27, Prop. 6.11] by replacing $u_{n}(t)$ and $v_{n}(t)$ with $a_{[s;n]}(t)$ and $b_{[s;n]}(t)$ , respectively. 
Hence we omit the proof. ∎ #### 5.2.2. Cases 2a We next consider the characters on $\mathbb{C}a_{[s;n]}(t)$. ###### Lemma 5.21. Suppose that $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash(I_{k}^{-}\cup J_{k})$ for $k=0,2$, Then $M$ acts on $\mathbb{C}a_{[s;n]}(t)$ as a character. ###### Proof. We only give a proof for the case $k=0$, namely, $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash(I_{0}^{-}\cup J_{0})$; the other case can be shown similarly. As $\mathbb{C}\backslash(I_{0}^{-}\cup J_{0})=(\mathbb{C}\backslash(I_{0}^{+}\cup I_{0}^{-}\cup J_{0}))\cup I_{0}^{+},$ we consider the cases $s\in\mathbb{C}\backslash(I_{0}^{+}\cup I_{0}^{-}\cup J_{0})$ and $s\in I_{0}^{+}$, separately. First suppose that $s\in\mathbb{C}\backslash(I_{0}^{+}\cup I_{0}^{-}\cup J_{0})$. By Theorem 5.16, we have $\mathrm{Sol}_{\textnormal{II}}(s;n)=\mathbb{C}a_{[s;n]}(t)$. Since $\mathrm{Sol}_{\textnormal{II}}(s;n)$ is an $M$-representation by Lemma 2.9, the assertion clearly holds for this case. We next suppose that $s\in I_{0}^{+}$. In this case we have $\mathrm{Sol}_{\textnormal{II}}(s;n)=\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t)$ by Theorem 5.16. As the exponent $\frac{1+n-s}{2}$ for $t^{\frac{1+n-s}{2}}$ of $b_{[s;n]}(t)$ is odd, $a_{[s;n]}(t)$ and $b_{[s;n]}(t)$ are an even and odd function, respectively. The transformation laws (4.27) imply that the action of $M$ on $\mathrm{Pol}_{n}[t]$ preserves the parities of the polynomials for $n$ even. Hence $M$ acts both on $\mathbb{C}a_{[s;n]}(t)$ and $\mathbb{C}b_{[s;n]}(t)$ as a character. ∎ We next determine the characters on $a_{[s;n]}(t)$ explicitly. It follows from Lemma 5.5 that $a_{[s;n]}(t)$ has degree $\deg a_{[s;n]}(t)=n$ for $(s,n)$ with $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash(I_{k}^{-}\cup J_{k})$ for $k=0,2$. Thus, in this case, the hypergeometric polynomial $a_{[s;n]}(t)$ is given as $a_{[s;n]}(t)=\sum_{j=0}^{n/2}A_{j}(s;n)t^{2j}$ (5.22) with $A_{j}(s;n)=\frac{\left(-\frac{n}{2},j\right)\left(-\frac{n+s-1}{4},j\right)}{\left(\frac{3-n+s}{4},j\right)}\cdot\frac{1}{j!},$ where $(\ell,m)$ stands for the shifted factorial. Recall from (4.25) that $M=\\{\pm m_{j}^{\textnormal{II}}:j=0,1,2,3\\}$ with $m^{\textnormal{II}}_{0}=\begin{pmatrix}1&0\\\ 0&1\\\ \end{pmatrix},\;m^{\textnormal{II}}_{1}=\begin{pmatrix}0&1\\\ -1&0\end{pmatrix},\;m^{\textnormal{II}}_{2}=\begin{pmatrix}\sqrt{-1}&0\\\ 0&-\sqrt{-1}\\\ \end{pmatrix},\;m^{\textnormal{II}}_{3}=-\begin{pmatrix}0&\sqrt{-1}\\\ \sqrt{-1}&0\end{pmatrix}.$ As $m_{3}^{\textnormal{II}}=m_{1}^{\textnormal{II}}m_{2}^{\textnormal{II}}$, it suffices to check the actions of $m_{1}^{\textnormal{II}}$ and $m_{2}^{\textnormal{II}}$ on $a_{[s;n]}(t)=\sum_{j=0}^{n/2}A_{j}(s;n)t^{2j}$. ###### Proposition 5.23. Under the same hypothesis in Lemma 5.21, the character $(\varepsilon,\varepsilon^{\prime})$ on $\mathbb{C}a_{[s;n]}(t)$ is given as follows. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash(I_{0}^{-}\cup J_{0}):$ $\mathbb{C}a_{[s;n]}(t)\simeq\textnormal{\mbox{\smaller($+$,$+$)}}.$ 2. (2) $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash(I_{2}^{-}\cup J_{2}):$ $\mathbb{C}a_{[s;n]}(t)\simeq\textnormal{\mbox{\smaller($-$,$-$)}}.$ ###### Proof. Since the second assertion can be shown similarly, we only give a proof for the first assertion. Let $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash(I_{0}^{-}\cup J_{0})$. We wish to show that both $m_{1}^{\textnormal{II}}$ and $m_{2}^{\textnormal{II}}$ act trivially. 
First, it is easy to see that the action of $m_{2}^{\textnormal{II}}$ is trivial. Indeed, since $n\equiv 0\ (\mathrm{mod}\ 4)$ and $a_{[s;n]}(t)$ is an even function, the transformation law of (4.27) shows that $m_{2}^{\textnormal{II}}\colon a_{[s;n]}(t)\longrightarrow(\sqrt{-1})^{n}a_{[s;n]}(-t)=a_{[s;n]}(t).$ In order to show that $m_{1}^{\textnormal{II}}$ also acts trivially, observe that, by (4.27) and (5.22), we have $m_{1}^{\textnormal{II}}\colon a_{[s;n]}(t)\longrightarrow t^{n}a_{[s;n]}\left(-\frac{1}{t}\right)=\sum_{j=0}^{n/2}A_{\frac{n}{2}-j}(s;n)t^{2j}.$ On the other hand, by Lemma 5.21 and Table 4, $m_{1}^{\textnormal{II}}$ acts on $a_{[s;n]}(t)$ by $\pm 1$. Therefore, $\sum_{j=0}^{n/2}A_{\frac{n}{2}-j}(s;n)t^{2j}=\pm\sum_{j=0}^{n/2}A_{j}(s;n)t^{2j}.$ An easy computation shows that we have $A_{\frac{n}{2}}(s;n)=1=A_{0}(s;n)$, which forces $t^{n}a_{[s;n]}\left(-\frac{1}{t}\right)=a_{[s;n]}(t)$. Hence $m_{1}^{\textnormal{II}}$ also acts on $\mathbb{C}a_{[s;n]}(t)$ trivially. ∎ #### 5.2.3. Cases 2b Next we consider the characters on $\mathbb{C}b_{[s;n]}(t)$. ###### Proposition 5.24. Suppose that $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in I_{k}^{+}\cup J_{k}$ for $k=0,2$. Then $M$ acts on $\mathbb{C}b_{[s;n]}(t)$ as a character as follows. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in I_{0}^{+}\cup J_{0}:$ $\mathbb{C}b_{[s;n]}(t)\simeq\begin{cases}\textnormal{\mbox{\smaller($+$,$-$)}}&\textnormal{if $s\in I^{+}_{0}$},\\\ \textnormal{\mbox{\smaller($+$,$+$)}}&\textnormal{if $s\in J_{0}$}.\end{cases}$ 2. (2) $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s\in I_{2}^{+}\cup J_{2}:$ $\mathbb{C}b_{[s;n]}(t)\simeq\begin{cases}\textnormal{\mbox{\smaller($-$,$+$)}}&\textnormal{if $s\in I^{+}_{2}$},\\\ \textnormal{\mbox{\smaller($-$,$-$)}}&\textnormal{if $s\in J_{2}$}.\\\ \end{cases}$ ###### Proof. Since the assertions can be shown similarly to Lemma 5.21 and Proposition 5.23, we omit the proof. We remark that it is already shown in the proof of Lemma 5.21 that $M$ acts on $\mathbb{C}b_{[s;n]}(t)$ as a character for $(s,n)$ with $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in I_{0}^{+}$. ∎ #### 5.2.4. Case 2c Now we consider $\mathbb{C}c^{\pm}_{[s;n]}(t)$ for $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in I_{k}^{-}$ for $k=0,2$, where $\mathbb{C}c^{\pm}_{[s;n]}(t)$ is given as $c^{\pm}_{[s;n]}(t)=a_{[s;n]}(t)\pm C(s;n)b_{[s;n]}(t)$ with $C(s;n)$ in (5.14). Observe that, in this case, $a_{[s;n]}(t)$ has degree $\deg a_{[s;n]}(t)=\frac{n+s-1}{2}<n$ by Lemma 5.5. Therefore, $a_{[s;n]}(t)$ is of the form $a_{[s;n]}(t)=\sum_{j=0}^{(n+s-1)/4}\widetilde{A}_{j}(s;n)t^{2j}.$ (5.25) To give an explicit formula of $\widetilde{A}_{j}(s;n)$, first remark that when $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s=-1$, we have $-\frac{n+s-1}{4}=\frac{3-n+s}{4}\in\mathbb{Z}_{\geq 0}$. Namely, for $n=4\ell+2$ and $s=-1$, the hypergeometric polynomial $a_{[-1;4\ell+2]}(t)$ is $a_{[-1;4\ell+2]}(t)=F(-2\ell-1,-\ell,-\ell;t^{2}).$ (5.26) In this paper we regard (5.26) as $a_{[-1;4\ell+2]}(t)=\sum_{j=0}^{k}\frac{(-2\ell-1,j)}{j!}t^{2j}$ so that $a_{[-1;4\ell+2]}(t)$ and $b_{[-1;4\ell+2]}(t)$ still form a fundamental solutions to the hypergeometric equation (4.45). 
Therefore $\widetilde{A}_{j}(s;n)$ in (5.25) is given as $\widetilde{A}_{j}(s;n)=\begin{cases}\frac{\left(-\frac{n}{2},j\right)}{j!}&\textnormal{if $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s=-1$}\vspace{5pt}\\\ \frac{\left(-\frac{n}{2},j\right)\left(-\frac{n+s-1}{4},j\right)}{\left(\frac{3-n+s}{4},j\right)}\cdot\frac{1}{j!}&\textnormal{otherwise}.\end{cases}$ In particular, by (5.14), we have $C(s;n)=\widetilde{A}_{\frac{n+s-1}{4}}(s;n).$ (5.27) It follows from (5.2) that $b_{[s;n]}(t)$ is given as $b_{[s;n]}(t)=t^{\frac{1+n-s}{2}}\sum_{j=0}^{(n+s-1)/4}B_{j}(s;n)t^{2j}$ (5.28) with $B_{j}(s;n)=\frac{\left(-\frac{n+s-1}{4},j\right)\left(-\frac{s-1}{2},j\right)}{\left(\frac{5+n-s}{4},j\right)}\cdot\frac{1}{j!}.$ Lemma 5.29 below plays a key role to determine the $M$-representation on $\mathbb{C}c^{\pm}_{[s;n]}(t)$. ###### Lemma 5.29. Suppose that $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in I_{k}^{-}$ for $k=0,2$. Then, $\pi_{n}(m_{1}^{\textnormal{II}})a_{[s;n]}(t)=C(s;n)b_{[s;n]}(t).$ ###### Proof. We only show the case of $k=0$; the other case can be shown similarly, including the exceptional case for $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s=-1$. Let $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in I_{0}^{-}$. It follows from (5.13) that we have $\mathrm{Sol}_{\textnormal{II}}(s;n)=\mathbb{C}a_{[s;n]}(t)\oplus\mathbb{C}b_{[s;n]}(t).$ Therefore there exist some constants $c_{1},c_{2}\in\mathbb{C}$ such that $\pi_{n}(m_{1}^{\textnormal{II}})a_{[s;n]}(t)=c_{1}a_{[s;n]}(t)+c_{2}b_{[s;n]}(t).$ (5.30) On the other hand, by (4.27) and (5.25), we have $\pi_{n}(m_{1}^{\textnormal{II}})a_{[s;n]}(t)=t^{n}a_{[s;n]}\left(-\frac{1}{t}\right)=\sum_{j=0}^{(n+s-1)/4}\widetilde{A}_{j}(s;n)t^{n-2j}.$ (5.31) Since $n-\frac{n+s-1}{2}>\frac{n+s-1}{2}=\deg a_{[s;n]}(t)$ for $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in I_{0}^{-}$, all exponents $n-2j$ for $t^{n-2j}$ in (5.31) is $n-2j>\deg a_{[s;n]}(t)$. Therefore, $\pi_{n}(m_{1}^{\textnormal{II}})a_{[s;n]}(t)=c_{2}b_{[s;n]}(t).$ Further, by (5.31) and (5.28), we then have $\widetilde{A}_{\frac{n+s-1}{4}}(s;n)=c_{2}B_{0}(s;n)=c_{2}.$ Now (5.27) concludes the assertion. ∎ ###### Proposition 5.32. Suppose that $n\equiv k\ (\mathrm{mod}\ 4)$ and $s\in I_{k}^{-}$ for $k=0,2$. Then $M$ acts on $\mathbb{C}c^{\pm}_{[s;n]}(t)$ as the following characters. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in I_{0}^{-}$. $\mathbb{C}c^{+}_{[s;n]}(t)\simeq\textnormal{\mbox{\smaller($+$,$+$)}}\quad\textnormal{and}\quad\mathbb{C}c^{-}_{[s;n]}(t)\simeq\textnormal{\mbox{\smaller($-$,$+$)}}.$ 2. (2) $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s\in I_{2}^{-}$. $\mathbb{C}c^{+}_{[s;n]}(t)\simeq\textnormal{\mbox{\smaller($-$,$-$)}}\quad\textnormal{and}\quad\mathbb{C}c^{-}_{[s;n]}(t)\simeq\textnormal{\mbox{\smaller($+$,$-$)}}.$ ###### Proof. As in Lemma 5.29, we only show the case of $k=0$; the other case can be treadted similarly (including the exceptional case for $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s=-1$). Let $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in I_{0}^{-}$. To show the assertion it suffices to consider only $m_{1}^{\textnormal{II}}$. Indeed, since the exponent $\frac{1+n-s}{2}$ for $t^{\frac{1+n-s}{2}}$ in (5.28) is even, $a_{[s;n]}(t)$ and $b_{[s;n]}(t)$ are both even functions; thus, $m_{2}^{\textnormal{II}}$ acts on $\mathbb{C}c^{\pm}_{[s;n]}(t)$ trivially by (4.27). 
To consider the action of $m_{1}^{\textnormal{II}}$ on $\mathbb{C}c^{\pm}_{[s;n]}(t)$, observe that, by Lemma 5.29, we have $\pi_{n}(m_{1}^{\textnormal{II}})b_{[s;n]}(t)=C(s;n)^{-1}a_{[s;n]}(t).$ Therefore, for $\varepsilon\in\\{+,-\\}$, $m_{1}^{\textnormal{II}}$ transforms $c^{\varepsilon}_{[s;n]}(t)$ as $m_{1}^{\textnormal{II}}\colon c^{\varepsilon}_{[s;n]}(t)\longrightarrow\varepsilon c^{\varepsilon}_{[s;n]}(t).$ Now Table 4 concludes the assertion. ∎ ### 5.3. The classification of $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\sigma)$ As Step C in the recipe in Section 4.4, we now classify $(\sigma,s,n)\in\mathrm{Irr}(M)\times\mathbb{C}\times\mathbb{Z}_{\geq 0}$ such that $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\sigma)\neq\\{0\\}$. Given $\sigma\in\mathrm{Irr}(M)$, we define $I^{\pm}(\textnormal{\mbox{\smaller($+$,$-$)}})$, $I^{\pm}(\textnormal{\mbox{\smaller($-$,$+$)}})$, $I(\mathbb{H})\subset\mathbb{Z}\times\mathbb{Z}_{\geq 0}$ as follows. $\displaystyle\begin{aligned} I^{+}(\textnormal{\mbox{\smaller($+$,$-$)}})&:=\\{(s,n)\in(3+4\mathbb{Z}_{\geq 0})\times(4\mathbb{Z}_{\geq 0}):n>s\\},\\\ I^{-}(\textnormal{\mbox{\smaller($+$,$-$)}})&:=\\{(s,n)\in-(1+4\mathbb{Z}_{\geq 0})\times(2+4\mathbb{Z}_{\geq 0}):n>|s|\\},\\\\[5.0pt] I^{+}(\textnormal{\mbox{\smaller($-$,$+$)}})&:=\\{(s,n)\in(1+4\mathbb{Z}_{\geq 0})\times(2+4\mathbb{Z}_{\geq 0}):n>s\\},\\\ I^{-}(\textnormal{\mbox{\smaller($-$,$+$)}})&:=\\{(s,n)\in-(3+4\mathbb{Z}_{\geq 0})\times(4\mathbb{Z}_{\geq 0}):n>|s|\\},\\\\[5.0pt] I(\mathbb{H})&:=\\{(s,n)\in(2\mathbb{Z})\times(1+2\mathbb{Z}_{\geq 0}):n>|s|\\}.\end{aligned}$ (5.33) ###### Theorem 5.34. The following conditions on $(\sigma,s,n)\in\mathrm{Irr}(M)\times\mathbb{C}\times\mathbb{Z}_{\geq 0}$ are equivalent. 1. (i) $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\sigma)\neq\\{0\\}$. 2. (ii) $\dim_{\mathbb{C}}\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\sigma)=1$. 3. (iii) One of the following conditions holds. * • $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}:$ $(s,n)\in\mathbb{C}\times 4\mathbb{Z}_{\geq 0}$. * • $\sigma=\textnormal{\mbox{\smaller($-$,$-$)}}:$ $(s,n)\in\mathbb{C}\times(2+4\mathbb{Z}_{\geq 0})$. * • $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}:$ $(s,n)\in I^{+}(\textnormal{\mbox{\smaller($+$,$-$)}})\cup I^{-}(\textnormal{\mbox{\smaller($+$,$-$)}})$. * • $\sigma=\textnormal{\mbox{\smaller($-$,$+$)}}:$ $(s,n)\in I^{+}(\textnormal{\mbox{\smaller($-$,$+$)}})\cup I^{-}(\textnormal{\mbox{\smaller($-$,$+$)}})$. * • $\sigma=\mathbb{H}:$ $(s,n)\in I(\mathbb{H})$. Further, for such $(\sigma,s,n)$, the space $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\sigma)$ is given as follows. 1. (1) $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}:$ For $n\in 4\mathbb{Z}_{\geq 0}$, we have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($+$,$+$)}})=\begin{cases}\mathbb{C}a_{[s;n]}(t)&\textnormal{if $s\in\mathbb{C}\backslash(I^{-}_{0}\cup J_{0})$},\\\ \mathbb{C}b_{[s;n]}(t)&\textnormal{if $s\in J_{0}$},\\\ \mathbb{C}c^{+}_{[s;n]}(t)&\textnormal{if $s\in I^{-}_{0}$}.\end{cases}$ 2. 
(2) $\sigma=\textnormal{\mbox{\smaller($-$,$-$)}}:$ For $n\in 2+4\mathbb{Z}_{\geq 0}$, we have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($-$,$-$)}})=\begin{cases}\mathbb{C}a_{[s;n]}(t)&\textnormal{if $s\in\mathbb{C}\backslash(I^{-}_{2}\cup J_{2})$},\\\ \mathbb{C}b_{[s;n]}(t)&\textnormal{if $s\in J_{2}$},\\\ \mathbb{C}c^{+}_{[s;n]}(t)&\textnormal{if $s\in I^{-}_{2}$.}\end{cases}$ 3. (3) $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}:$ We have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($+$,$-$)}})=\begin{cases}\mathbb{C}b_{[s;n]}(t)&\textnormal{if $(s,n)\in I^{+}(\textnormal{\mbox{\smaller($+$,$-$)}})$},\\\ \mathbb{C}c^{-}_{[s;n]}(t)&\textnormal{if $(s,n)\in I^{-}(\textnormal{\mbox{\smaller($+$,$-$)}})$}.\end{cases}$ 4. (4) $\sigma=\textnormal{\mbox{\smaller($-$,$+$)}}:$ We have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($-$,$+$)}})=\begin{cases}\mathbb{C}b_{[s;n]}(t)&\textnormal{if $(s,n)\in I^{+}(\textnormal{\mbox{\smaller($-$,$+$)}})$},\\\ \mathbb{C}c^{-}_{[s;n]}(t)&\textnormal{if $(s,n)\in I^{-}(\textnormal{\mbox{\smaller($-$,$+$)}})$}.\end{cases}$ 5. (5) $\sigma=\mathbb{H}:$ We have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\mathbb{H})=\mathbb{C}\varphi^{(s;n)}_{\textnormal{II}}\quad\textnormal{for $(s,n)\in I(\mathbb{H})$},$ where $\varphi^{(s;n)}_{\textnormal{II}}$ is a non-zero $M$-isomorphism $\varphi^{(s;n)}_{\textnormal{II}}\colon\mathrm{Sol}_{\textnormal{II}}(s;n)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathbb{H}.$ ###### Proof. The theorem simply follows from Theorems 5.16 and 5.19. Indeed, for instance, suppose that $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}$. It then follows from Theorem 5.19 that $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($+$,$+$)}})\neq\\{0\\}$ if and only if $n\equiv 0\ (\mathrm{mod}\ 4)$. Moreover, Theorems 5.16 and 5.19 show that $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($+$,$+$)}})=\begin{cases}\mathbb{C}a_{[s;n]}(t)&\textnormal{if $s\in\mathbb{C}\backslash(I^{-}_{0}\cup J_{0})$},\\\ \mathbb{C}b_{[s;n]}(t)&\textnormal{if $s\in J_{0}$},\\\ \mathbb{C}c^{+}_{[s;n]}(t)&\textnormal{if $s\in I^{-}_{0}$}.\end{cases}$ This concludes the assertion for $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}$. Similarly, suppose that $\sigma=\mathbb{H}$. Then, by Theorem 5.19, we have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\mathbb{H})\neq\\{0\\}$ if and only if $(s,n)\in I(\mathbb{H})$. Furthermore, in this case, $\mathrm{Sol}_{\textnormal{II}}(s;n)\simeq\mathbb{H}$. Therefore, $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\mathbb{H})$ is spanned by a non-zero $M$-isomorphism $\varphi^{(s;n)}_{\textnormal{II}}\colon\mathrm{Sol}_{\textnormal{II}}(s;n)\stackrel{{\scriptstyle\sim}}{{\to}}\mathbb{H}$. Since the other cases can be handled similarly, we omit the proof. ∎ ###### Remark 5.35. As with Theorem 5.16, it will be shown in Theorem 6.9 that the description of $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\sigma)$ is simpler than that of $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\sigma)$. Now we give the $K$-type formulas for $\mathcal{S}ol(s;\sigma)_{K}$ for each $\sigma\in\mathrm{Irr}(M)$ in the polynomial realization (3.14) of $\mathrm{Irr}(K)$. ###### Theorem 5.36. The following conditions on $(\sigma,s)\in\mathrm{Irr}(M)\times\mathbb{C}$ are equivalent. 1.
(i) $\mathcal{S}ol(s;\sigma)_{K}\neq\\{0\\}$. 2. (ii) One of the following conditions holds. * • $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}:$ $s\in\mathbb{C}$. * • $\sigma=\textnormal{\mbox{\smaller($-$,$-$)}}:$ $s\in\mathbb{C}$. * • $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}:$ $s\in 3+4\mathbb{Z}$. * • $\sigma=\textnormal{\mbox{\smaller($-$,$+$)}}:$ $s\in 1+4\mathbb{Z}$. * • $\sigma=\mathbb{H}:$ $s\in 2\mathbb{Z}$. Further, the $K$-type formulas for $\mathcal{S}ol(s;\sigma)_{K}$ may be given as follows. 1. (1) $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}:$ $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$+$)}})_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{4n}[t]\quad\;\;\;\textnormal{for all $s\in\mathbb{C}$}.$ 2. (2) $\sigma=\textnormal{\mbox{\smaller($-$,$-$)}}:$ $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($-$,$-$)}})_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{2+4n}[t]\quad\textnormal{for all $s\in\mathbb{C}$}.$ 3. (3) $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}:$ $\qquad\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$-$)}})_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{|s|+1+4n}[t]\quad\textnormal{for $s\in 3+4\mathbb{Z}$}.$ 4. (4) $\sigma=\textnormal{\mbox{\smaller($-$,$+$)}}:$ $\qquad\mathcal{S}ol(s;\textnormal{\mbox{\smaller($-$,$+$)}})_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{|s|+1+4n}[t]\quad\textnormal{for $s\in 1+4\mathbb{Z}$}.$ 5. (5) $\sigma=\mathbb{H}:$ $\qquad\mathcal{S}ol(s;\mathbb{H})_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{|s|+1+4n}[t]\quad\textnormal{for $s\in 2\mathbb{Z}$}.$ ###### Proof. We only give a proof for $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}$; the other cases may be handled similarly. Suppose that $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}$. By (4.22) with $J=\textnormal{II}$, we have $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$-$)}})_{K}\simeq\bigoplus_{n\geq 0}\mathrm{Pol}_{n}[t]\otimes\mathrm{Hom}_{M}\left(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($+$,$-$)}}\right).$ Thus, $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$-$)}})_{K}\neq\\{0\\}$ if and only if $\mathrm{Hom}_{M}\left(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($+$,$-$)}}\right)\neq\\{0\\}$ for some $n\in\mathbb{Z}_{\geq 0}$, which is further equivalent to $s\in(3+4\mathbb{Z}_{\geq 0})\cup(-(1+4\mathbb{Z}_{\geq 0}))=3+4\mathbb{Z}$ by Theorem 5.34. This shows the equivalence of (i) and (ii) for $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}$. To determine the explicit branching law, observe that, by the Frobenius reciprocity, we have $\operatorname{Hom}_{K}(\mathrm{Pol}_{n}[t],\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$-$)}})_{K})\neq\\{0\\}\quad\text{if and only if}\quad\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{II}}(s;n),\textnormal{\mbox{\smaller($+$,$-$)}})\neq\\{0\\},$ which is equivalent to $n\in 4\mathbb{Z}_{\geq 0}$ with $n>s$ for $s\in 3+4\mathbb{Z}_{\geq 0}$ or $n\in 2+4\mathbb{Z}_{\geq 0}$ with $n>|s|$ for $s\in-(1+4\mathbb{Z}_{\geq 0})$ by Theorem 5.34. It also follows from Theorem 5.34 that $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$-$)}})_{K}$ is multiplicity-free.
Therefore, we have $\mathcal{S}ol(s;\textnormal{\mbox{\smaller($+$,$-$)}})_{K}\simeq\begin{cases}\begin{aligned} \bigoplus\limits_{\begin{subarray}{c}n\equiv 0\ (\mathrm{mod}\ 4)\\\ n>s\end{subarray}}\mathrm{Pol}_{n}[t]\quad&\textnormal{if $s\in 3+4\mathbb{Z}_{\geq 0}$},\\\\[4.30554pt] \bigoplus\limits_{\begin{subarray}{c}n\equiv 2\ (\mathrm{mod}\ 4)\\\ n>|s|\end{subarray}}\mathrm{Pol}_{n}[t]\quad&\textnormal{if $s\in-(1+4\mathbb{Z}_{\geq 0})$},\\\ \end{aligned}\end{cases}\\\ $ which is equivalent to the proposed formula. ∎ ###### Remark 5.37. The $K$-type formula for the case of $\sigma=\mathbb{H}$ is also obtained in [36, Prop. 5.4.6]. We close this section by giving a proof of Theorem 1.7 in the introduction. ###### Proof of Theorem 1.7. The assertions simply follow from Theorem 5.36. Indeed, as $\mathcal{S}ol(s;n)_{K}$ is dense in $\mathcal{S}ol(s;n)$, we have $\mathcal{S}ol(s;n)_{K}\neq\\{0\\}$ if and only if $\mathcal{S}ol(s;n)\neq\\{0\\}$. Then the equivalence $(\pi_{n},\mathrm{Pol}_{n}[t])\simeq(n/2)$ concludes the theorem. ∎ ## 6\. Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$ The purpose of this short section is to interpret the results in Section 5 in terms of the Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$. In particular, we give a variant of Theorem 5.34 for $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\sigma)$ in Theorem 6.9. We remark that the results in this section will play a key role in computing certain tridiagonal determinants in Section 7. ### 6.1. Relationships between $a_{[s;n]}(t),b_{[s;n]}(t),c^{\pm}_{[s;n]}(t)$ and $u_{[s;n]}(t),v_{[s;n]}(t)$ As in the case of the hypergeometric model $d\pi_{n}^{\textnormal{II}}(D^{\flat}_{s})f(t)=0$, we start with Step A of the recipe in Section 4.4, that is, the classification of $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ such that $\mathrm{Sol}_{\textnormal{I}}(s;n)\neq\\{0\\}$. Recall from (4.36) that we have $\mathrm{Sol}_{\textnormal{I}}(s;n)\subset\mathbb{C}u_{[s;n]}(t)\oplus\mathbb{C}v_{[s;n]}(t)$ (6.1) with $\displaystyle u_{[s;n]}(t)=Hl(-1,-\frac{ns}{4};-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},\frac{1-n-s}{2};t^{2}),$ $\displaystyle v_{[s;n]}(t)=tHl(-1,-\frac{(n-2)s}{4};-\frac{n-1}{2},-\frac{n-2}{2},\frac{3}{2},\frac{1-n-s}{2};t^{2}).$ Thus, equivalently, we wish to classify $(s,n)$ such that $u_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$ or $v_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$. To make full use of the results for $\mathrm{Sol}_{\textnormal{II}}(s;n)$, we first consider a transfer of elements in $\mathrm{Sol}_{\textnormal{II}}(s;n)$ to $\mathrm{Sol}_{\textnormal{I}}(s;n)$. Recall from (4.18) and (4.23) that $k_{0}=\frac{1}{\sqrt{2}}\begin{pmatrix}\sqrt{-1}&1\\\ -1&-\sqrt{-1}\end{pmatrix}\in SU(2)$ gives an $M$-isomorphism $\pi_{n}(k_{0})\colon\mathrm{Sol}_{\textnormal{II}}(s;n)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathrm{Sol}_{\textnormal{I}}(s;n).$ (6.2) For $p(t),q(t)\in\mathrm{Pol}_{n}[t]$ with $\pi_{n}(k_{0})p(t)\in\mathbb{C}q(t)$, we write $p(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}q(t).$ Then, for $n\in 2\mathbb{Z}_{\geq 0}$, the polynomials $a_{[s;n]}(t),b_{[s;n]}(t),c^{\pm}_{[s;n]}(t)\in\mathrm{Sol}_{\textnormal{II}}(s;n)$ are transferred to $u_{[s;n]}(t),v_{[s;n]}(t)\in\mathrm{Sol}_{\textnormal{I}}(s;n)$ via $\pi_{n}(k_{0})$ as follows. ###### Lemma 6.3. Let $n\in 2\mathbb{Z}_{\geq 0}$. We have the following. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ 1. (a) $a_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}u_{[s;n]}(t)$ for $s\in\mathbb{C}\backslash(I^{-}_{0}\cup J_{0})$. 2.
(b) $b_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}u_{[s;n]}(t)$ for $s\in J_{0}$. 3. (c) $c^{+}_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}u_{[s;n]}(t)$ for $s\in I_{0}^{-}$. 4. (d) $b_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}v_{[s;n]}(t)$ for $s\in I^{+}_{0}$. 5. (e) $c^{-}_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}v_{[s;n]}(t)$ for $s\in I^{-}_{0}$. 2. (2) $n\equiv 2\ (\mathrm{mod}\ 4):$ 1. (a) $a_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}v_{[s;n]}(t)$ for $s\in\mathbb{C}\backslash(I^{-}_{2}\cup J_{2})$. 2. (b) $b_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}v_{[s;n]}(t)$ for $s\in J_{2}$. 3. (c) $c^{+}_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}v_{[s;n]}(t)$ for $s\in I_{2}^{-}$. 4. (d) $b_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}u_{[s;n]}(t)$ for $s\in I^{+}_{2}$. 5. (e) $c^{-}_{[s;n]}(t)\stackrel{{\scriptstyle k_{0}}}{{\sim}}u_{[s;n]}(t)$ for $s\in I^{-}_{2}$. ###### Proof. We only give a proof of (1)(a); the other cases can be shown similarly. Let $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}\backslash(I_{0}^{-}\cup J_{0})$. It follows from Theorem 5.16 that in this case $a_{[s;n]}(t)\in\mathrm{Sol}_{\textnormal{II}}(s;n)$ and thus $\pi_{n}(k_{0})a_{[s;n]}(t)\in\mathrm{Sol}_{\textnormal{I}}(s;n)$. By (6.1), this implies that $\pi_{n}(k_{0})a_{[s;n]}(t)=c_{1}u_{[s;n]}(t)+c_{2}v_{[s;n]}(t)$ for some constants $c_{1},c_{2}\in\mathbb{C}$. We wish to show $c_{2}=0$. To do so, it suffices to show that $\pi_{n}(k_{0})a_{[s;n]}(t)$ is an even function, as $u_{[s;n]}(t)$ and $v_{[s;n]}(t)$ are even and odd functions, respectively. It follows from Theorem 5.19 that $m_{1}^{\textnormal{II}}$ acts on $a_{[s;n]}(t)$ trivially. Since the linear map $\pi_{n}(k_{0})\colon\mathrm{Sol}_{\textnormal{II}}(s;n)\to\mathrm{Sol}_{\textnormal{I}}(s;n)$ is an $M$-isomorphism, by (4.24), this implies that $m_{1}^{\textnormal{I}}$ acts on $\pi_{n}(k_{0})a_{[s;n]}(t)$ trivially. Then, by (3.16) and the assumption $n\equiv 0\ (\mathrm{mod}\ 4)$, we have $\displaystyle\pi_{n}(k_{0})a_{[s;n]}(-t)$ $\displaystyle=(\sqrt{-1})^{n}\pi_{n}(k_{0})a_{[s;n]}(-t)$ $\displaystyle=\pi_{n}(m_{1}^{\textnormal{I}})\pi_{n}(k_{0})a_{[s;n]}(t)$ $\displaystyle=\pi_{n}(k_{0})a_{[s;n]}(t).$ Now the assertion follows. ∎ ###### Remark 6.4. Lemma 6.3 in particular gives Gauss-to-Heun transformations. Moreover, by Remark 4.37, the transformations (1)(a) and (2)(a) reduce to Gauss-to-Gauss transformations for $s=0$. ###### Proposition 6.5. Given $n\in\mathbb{Z}_{\geq 0}$, the following conditions on $s\in\mathbb{C}$ are equivalent. 1. (i) $u_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$. 2. (ii) One of the following holds. 1. (a) $n\equiv 0\ (\mathrm{mod}\ 4):$ $s\in\mathbb{C}$. 2. (b) $n\equiv 1\ (\mathrm{mod}\ 4):$ $s\in I_{1}$. 3. (c) $n\equiv 2\ (\mathrm{mod}\ 4):$ $s\in I^{+}_{2}\cup I^{-}_{2}$. 4. (d) $n\equiv 3\ (\mathrm{mod}\ 4):$ $s\in I_{3}$. ###### Proof. For the case of $n$ even, the assertions simply follow from Theorem 5.16 and Lemma 6.3. For the case of $n$ odd, suppose that $n\equiv 1\ (\mathrm{mod}\ 4)$. By Theorem 5.16, we have $\mathrm{Sol}_{\textnormal{II}}(s;n)\neq\\{0\\}$ if and only if $s\in I_{1}$. Moreover, for such $s\in I_{1}$, we have $\dim_{\mathbb{C}}\mathrm{Sol}_{\textnormal{II}}(s;n)=2$. Thus, by the $M$-isomorphism (6.2), $\dim_{\mathbb{C}}\mathrm{Sol}_{\textnormal{I}}(s;n)=2$ for $s\in I_{1}$. Now the assertion follows from (6.1). Since the case of $n\equiv 3\ (\mathrm{mod}\ 4)$ can be handled similarly, we omit the proof.
∎ ###### Proposition 6.6. Given $n\in\mathbb{Z}_{\geq 0}$, the following conditions on $s\in\mathbb{C}$ are equivalent. 1. (i) $v_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$. 2. (ii) One of the following holds. 1. (a) $n\equiv 0\ (\mathrm{mod}\ 4):$ $s\in I^{+}_{0}\cup I^{-}_{0}$. 2. (b) $n\equiv 1\ (\mathrm{mod}\ 4):$ $s\in I_{1}$. 3. (c) $n\equiv 2\ (\mathrm{mod}\ 4):$ $s\in\mathbb{C}$. 4. (d) $n\equiv 3\ (\mathrm{mod}\ 4):$ $s\in I_{3}$. ###### Proof. Since the proof is similar to the one for Proposition 6.5, we omit the proof. ∎ ###### Theorem 6.7. The following conditions on $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ are equivalent. 1. (i) $\mathrm{Sol}_{\textnormal{I}}(s;n)\neq\\{0\\}$. 2. (ii) One of the following conditions is satisfied. * • $n\equiv 0\ (\mathrm{mod}\ 4):$ $s\in\mathbb{C}$. * • $n\equiv 1\ (\mathrm{mod}\ 4):$ $s\in I_{1}$. * • $n\equiv 2\ (\mathrm{mod}\ 4):$ $s\in\mathbb{C}$. * • $n\equiv 3\ (\mathrm{mod}\ 4):$ $s\in I_{3}$. Moreover, for such $(s,n)$, the space $\mathrm{Sol}_{\textnormal{I}}(s;n)$ may be described as follows. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{I}}(s;n)=\begin{cases}\mathbb{C}u_{[s;n]}(t)&\textnormal{if $s\in\mathbb{C}\backslash(I^{+}_{0}\cup I^{-}_{0})$},\\\ \mathbb{C}u_{[s;n]}(t)\oplus\mathbb{C}v_{[s;n]}(t)&\textnormal{if $s\in I^{+}_{0}$},\\\ \mathbb{C}u_{[s;n]}(t)\oplus\mathbb{C}v_{[s;n]}(t)&\textnormal{if $s\in I^{-}_{0}$}.\end{cases}$ 2. (2) $n\equiv 1\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{I}}(s;n)=\mathbb{C}u_{[s;n]}(t)\oplus\mathbb{C}v_{[s;n]}(t)\quad\textnormal{for $s\in I_{1}$}.$ 3. (3) $n\equiv 2\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{I}}(s;n)=\begin{cases}\qquad\qquad\quad\mathbb{C}v_{[s;n]}(t)&\textnormal{if $s\in\mathbb{C}\backslash(I^{+}_{2}\cup I^{-}_{2})$},\\\ \mathbb{C}u_{[s;n]}(t)\oplus\mathbb{C}v_{[s;n]}(t)&\textnormal{if $s\in I^{+}_{2}$},\\\ \mathbb{C}u_{[s;n]}(t)\oplus\mathbb{C}v_{[s;n]}(t)&\textnormal{if $s\in I^{-}_{2}$}.\end{cases}$ 4. (4) $n\equiv 3\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{I}}(s;n)=\mathbb{C}u_{[s;n]}(t)\oplus\mathbb{C}v_{[s;n]}(t),\quad\textnormal{for $s\in I_{3}$}.$ ###### Proof. This is a summary of the results in Propositions 6.5 and 6.6. ∎ ###### Theorem 6.8. For each $(s,n)\in\mathbb{C}\times\mathbb{Z}_{\geq 0}$ determined in Theorem 6.7, the $M$-representations on $\mathrm{Sol}_{\textnormal{I}}(s;n)$ are classified as follows. 1. (a) $n\equiv 0\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{I}}(s;n)\simeq\begin{cases}\textnormal{\mbox{\smaller($+$,$+$)}}&\textnormal{if $s\in\mathbb{C}\backslash(I_{0}^{+}\cup I_{0}^{-})$},\\\ \textnormal{\mbox{\smaller($+$,$+$)}}\oplus\textnormal{\mbox{\smaller($+$,$-$)}}&\textnormal{if $s\in I_{0}^{+}$},\\\ \textnormal{\mbox{\smaller($+$,$+$)}}\oplus\textnormal{\mbox{\smaller($-$,$+$)}}&\textnormal{if $s\in I_{0}^{-}$}.\end{cases}$ 2. (b) $n\equiv 1\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{I}}(s;n)\simeq\mathbb{H}\quad\textnormal{for $s\in I_{1}$}.$ 3. (c) $n\equiv 2\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{I}}(s;n)\simeq\begin{cases}\qquad\quad\;\;\textnormal{\mbox{\smaller($-$,$-$)}}&\textnormal{if $s\in\mathbb{C}\backslash(I_{2}^{+}\cup I_{2}^{-})$},\\\ \textnormal{\mbox{\smaller($-$,$+$)}}\oplus\textnormal{\mbox{\smaller($-$,$-$)}}&\textnormal{if $s\in I_{2}^{+}$},\\\ \textnormal{\mbox{\smaller($+$,$-$)}}\oplus\textnormal{\mbox{\smaller($-$,$-$)}}&\textnormal{if $s\in I_{2}^{-}$}.\end{cases}$ 4.
(d) $n\equiv 3\ (\mathrm{mod}\ 4):$ $\mathrm{Sol}_{\textnormal{I}}(s;n)\simeq\mathbb{H}\quad\textnormal{for $s\in I_{3}$}.$ Here the characters $(\varepsilon,\varepsilon^{\prime})$ stand for the ones on $\mathbb{C}u_{[s;n]}(t)$ and $\mathbb{C}v_{[s;n]}(t)$ in the corresponding cases of Theorem 6.7. ###### Proof. For the case of $n$ odd, the assertions simply follow from Theorem 5.19 via the $M$-isomorphism $\pi_{n}(k_{0})\colon\mathrm{Sol}_{\textnormal{II}}(s;n)\stackrel{{\scriptstyle\sim}}{{\to}}\mathrm{Sol}_{\textnormal{I}}(s;n)$. Similarly, the assertions for $n$ even follow from Lemma 6.3, Theorem 6.7, and Propositions 5.23, 5.24, and 5.32. ∎ Recall from (5.33) the subsets $I^{\pm}(\textnormal{\mbox{\smaller($+$,$-$)}})$, $I^{\pm}(\textnormal{\mbox{\smaller($-$,$+$)}})$, $I(\mathbb{H})\subset\mathbb{Z}\times\mathbb{Z}_{\geq 0}$. We now give the explicit description of $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\sigma)$. ###### Theorem 6.9. The following conditions on $(\sigma,s,n)\in\mathrm{Irr}(M)\times\mathbb{C}\times\mathbb{Z}_{\geq 0}$ are equivalent. 1. (i) $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\sigma)\neq\\{0\\}$. 2. (ii) $\dim_{\mathbb{C}}\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\sigma)=1$. 3. (iii) One of the following conditions holds. * • $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}:$ $(s,n)\in\mathbb{C}\times 4\mathbb{Z}_{\geq 0}$. * • $\sigma=\textnormal{\mbox{\smaller($-$,$-$)}}:$ $(s,n)\in\mathbb{C}\times(2+4\mathbb{Z}_{\geq 0})$. * • $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}:$ $(s,n)\in I^{+}(\textnormal{\mbox{\smaller($+$,$-$)}})\cup I^{-}(\textnormal{\mbox{\smaller($+$,$-$)}})$. * • $\sigma=\textnormal{\mbox{\smaller($-$,$+$)}}:$ $(s,n)\in I^{+}(\textnormal{\mbox{\smaller($-$,$+$)}})\cup I^{-}(\textnormal{\mbox{\smaller($-$,$+$)}})$. * • $\sigma=\mathbb{H}:$ $(s,n)\in I(\mathbb{H})$. Moreover, for such $(\sigma,s,n)$, the space $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\sigma)$ is given as follows. 1. (1) $\sigma=\textnormal{\mbox{\smaller($+$,$+$)}}:$ For $n\in 4\mathbb{Z}_{\geq 0}$, we have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\textnormal{\mbox{\smaller($+$,$+$)}})=\mathbb{C}u_{[s;n]}(t)\quad\textnormal{for all $s\in\mathbb{C}$}.$ 2. (2) $\sigma=\textnormal{\mbox{\smaller($-$,$-$)}}:$ For $n\in 2+4\mathbb{Z}_{\geq 0}$, we have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\textnormal{\mbox{\smaller($-$,$-$)}})=\mathbb{C}v_{[s;n]}(t)\quad\textnormal{for all $s\in\mathbb{C}$}.$ 3. (3) $\sigma=\textnormal{\mbox{\smaller($+$,$-$)}}:$ We have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\textnormal{\mbox{\smaller($+$,$-$)}})=\begin{cases}\mathbb{C}v_{[s;n]}(t)&\textnormal{if $(s,n)\in I^{+}(\textnormal{\mbox{\smaller($+$,$-$)}})$},\\\ \mathbb{C}u_{[s;n]}(t)&\textnormal{if $(s,n)\in I^{-}(\textnormal{\mbox{\smaller($+$,$-$)}})$}.\end{cases}$ 4. (4) $\sigma=\textnormal{\mbox{\smaller($-$,$+$)}}:$ We have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\textnormal{\mbox{\smaller($-$,$+$)}})=\begin{cases}\mathbb{C}u_{[s;n]}(t)&\textnormal{if $(s,n)\in I^{+}(\textnormal{\mbox{\smaller($-$,$+$)}})$},\\\ \mathbb{C}v_{[s;n]}(t)&\textnormal{if $(s,n)\in I^{-}(\textnormal{\mbox{\smaller($-$,$+$)}})$}.\end{cases}$ 5.
(5) $\sigma=\mathbb{H}:$ We have $\operatorname{Hom}_{M}(\mathrm{Sol}_{\textnormal{I}}(s;n),\mathbb{H})=\mathbb{C}\varphi^{(s;n)}_{\textnormal{I}}\quad\textnormal{for $(s,n)\in I(\mathbb{H})$},$ where $\varphi^{(s;n)}_{\textnormal{I}}$ is a non-zero $M$-isomorphism $\varphi^{(s;n)}_{\textnormal{I}}\colon\mathrm{Sol}_{\textnormal{I}}(s;n)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathbb{H}.$ ###### Proof. Since the theorem can be shown similarly to Theorem 5.34, we omit the proof. ∎ ## 7\. Sequences $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ of tridiagonal determinants The aim of this section is to discuss two sequences $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ of $k\times k$ tridiagonal determinants associated to polynomial solutions to the Heun model $d\pi_{n}^{\textnormal{I}}(D^{\flat}_{s})f(t)=0$. We give factorization formulas for $P_{\left[\frac{n+2}{2}\right]}(x;n)$ and $Q_{\left[\frac{n+1}{2}\right]}(x;n)$ as well as the palindromic property of $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ for $n$ even. (See Definition 1.15 for the definition of palindromic property.) These are achieved in Theorems 7.18 and 7.22 (factorization formulas), and 7.29 and 7.41 (palindromic properties). ### 7.1. Sequences $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ of tridiagonal determinants We start with the definitions of $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$. Let $a(x)$ and $b(x)$ be two polynomials such that $a(x)=2x(2x-1)\quad\text{and}\quad b(x)=2x(2x+1).$ (7.1) For instance, for $x=1,2,3,\dots$, we have $\displaystyle a(1)=1\cdot 2,\quad$ $\displaystyle a(2)=3\cdot 4,\quad$ $\displaystyle a(3)=5\cdot 6,\dots,$ $\displaystyle b(1)=2\cdot 3,\quad$ $\displaystyle b(2)=4\cdot 5,\quad$ $\displaystyle b(3)=6\cdot 7,\dots.$ Similarly, for $x=\frac{1}{2},\frac{3}{2},\frac{5}{2},\ldots$, we have $\displaystyle a\left(\frac{1}{2}\right)=0\cdot 1,\quad$ $\displaystyle a\left(\frac{3}{2}\right)=2\cdot 3,\quad$ $\displaystyle a\left(\frac{5}{2}\right)=4\cdot 5,\dots,$ $\displaystyle b\left(\frac{1}{2}\right)=1\cdot 2,\quad$ $\displaystyle b\left(\frac{3}{2}\right)=3\cdot 4,\quad$ $\displaystyle b\left(\frac{5}{2}\right)=5\cdot 6,\dots.$ Clearly, the polynomials $a(x)$ and $b(x)$ satisfy $a(x)=b\left(\frac{2x-1}{2}\right)\quad\text{and}\quad b(x)=a\left(\frac{2x+1}{2}\right).$ (7.2) We define $k\times k$ tridiagonal determinants $P_{k}(x;y)$ and $Q_{k}(x;y)$ in terms of $a(x)$ and $b(x)$ as follows. 1. (1) $P_{k}(x;y):$ * • $k=0:$ $P_{0}(x;y)=1$, * • $k=1:$ $P_{1}(x;y)=yx$, * • $k\geq 2:$ $P_{k}(x;y)=\small\begin{vmatrix}yx&a(1)&&&&\\\ -a\left(\frac{y}{2}\right)&(y-4)x&a(2)&&&\\\ &-a\left(\frac{y-2}{2}\right)&(y-8)x&a(3)&&\\\ &&\dots&\dots&\dots&\\\ &&&-a\left(\frac{y-2k+6}{2}\right)&(y-4k+8)x&a(k-1)\\\ &&&&-a\left(\frac{y-2k+4}{2}\right)&(y-4k+4)x\\\ \end{vmatrix}$. 2. (2) $Q_{k}(x;y):$ * • $k=0:$ $Q_{0}(x;y)=1$, * • $k=1:$ $Q_{1}(x;y)=(y-2)x$, * • $k\geq 2:$ $Q_{k}(x;y)=\small\begin{vmatrix}(y-2)x&b(1)&&&&\\\ -b\left(\frac{y-2}{2}\right)&(y-6)x&b(2)&&&\\\ &-b\left(\frac{y-4}{2}\right)&(y-10)x&b(3)&&\\\ &&\dots&\dots&\dots&\\\ &&&-b\left(\frac{y-2k+4}{2}\right)&(y-4k+6)x&b(k-1)\\\ &&&&-b\left(\frac{y-2k+2}{2}\right)&(y-4k+2)x\\\ \end{vmatrix}$. ###### Example 7.3. Here are a few examples of $P_{k}(x;4)$ and $Q_{k}(x;6)$ for $k=2,3,4$. 1.
(1) $P_{k}(x;4):$ $P_{2}(x;4)=\begin{vmatrix}4x&1\cdot 2\\\ -3\cdot 4&0\end{vmatrix},\quad P_{3}(x;4)=\small\begin{vmatrix}4x&1\cdot 2&\\\ -3\cdot 4&0&3\cdot 4\\\ &-1\cdot 2&-4x\end{vmatrix},\quad P_{4}(x;4)=\small\begin{vmatrix}4x&1\cdot 2&&\\\ -3\cdot 4&0&3\cdot 4&\\\ &-1\cdot 2&-4x&5\cdot 6\\\ &&0&-12x\end{vmatrix}$ 2. (2) $Q_{k}(x;6):$ $Q_{2}(x;6)=\begin{vmatrix}4x&2\cdot 3\\\ -4\cdot 5&0\end{vmatrix},\quad Q_{3}(x;6)=\small\begin{vmatrix}4x&2\cdot 3&\\\ -4\cdot 5&0&4\cdot 5\\\ &-2\cdot 3&-4x\end{vmatrix},\quad Q_{4}(x;6)=\small\begin{vmatrix}4x&2\cdot 3&&\\\ -4\cdot 5&0&4\cdot 5&\\\ &-2\cdot 3&-4x&6\cdot 7\\\ &&0&-12x\end{vmatrix}$ Moreover, for instance, for $y=5$ and $k=3$, we have $P_{3}(x;5)=\small\begin{vmatrix}5x&1\cdot 2&\\\ -4\cdot 5&x&3\cdot 4\\\ &-2\cdot 3&-3x\end{vmatrix}\qquad\text{and}\qquad Q_{3}(x;5)=\small\begin{vmatrix}3x&2\cdot 3&\\\ -3\cdot 4&-x&4\cdot 5\\\ &-1\cdot 2&-5x\end{vmatrix}.$ ###### Remark 7.4. In general $P_{k}(x;y)$ and $Q_{k}(x;y)$ satisfy the following properties for specific $y$ and $k$. 1. (1) If $y=n\in 2+2\mathbb{Z}_{\geq 0}$, then $P_{\frac{n+2}{2}}(x;n)$ is anti- centrosymmetric: $\small P_{\frac{n+2}{2}}(x;n)=\begin{vmatrix}nx&a(1)&&&&\\\ -a\left(\frac{n}{2}\right)&(n-4)x&a(2)&&&\\\ &-a\left(\frac{n-2}{2}\right)&(n-8)x&a(3)&&\\\ &&\dots&\dots&\dots&\\\ &&&-a(2)&-(n-4)x&a\left(\frac{n}{2}\right)\\\ &&&&-a(1)&-nx\\\ \end{vmatrix}.\\\ $ (7.5) 2. (2) If $y=n\in 4+2\mathbb{Z}_{\geq 0}$, then $Q_{\frac{n}{2}}(x;n)$ is also anti- centrosymmetric: $\small Q_{\frac{n}{2}}(x;n)=\begin{vmatrix}(n-2)x&b(1)&&&&\\\ -b\left(\frac{n-2}{2}\right)&(n-6)x&b(2)&&&\\\ &-b\left(\frac{n-4}{2}\right)&(n-10)x&b(3)&&\\\ &&\dots&\dots&\dots&\\\ &&&-b(2)&-(n-6)x&b\left(\frac{n-2}{2}\right)\\\ &&&&-b(1)&-(n-2)x\\\ \end{vmatrix}.\\\ $ (7.6) 3. (3) It follows from (7.2) that for $y=n\in 3+2\mathbb{Z}_{\geq 0}$, we have $\displaystyle P_{\frac{n+1}{2}}(x;n)$ $\displaystyle=\small\begin{vmatrix}nx&a(1)&&&&\\\ -b\left(\frac{n-1}{2}\right)&(n-4)x&a(2)&&&\\\ &-b\left(\frac{n-3}{2}\right)&(n-8)x&a(3)&&\\\ &&\dots&\dots&\dots&\\\ &&&-b\left(2\right)&-(n-6)x&a\left(\frac{n-1}{2}\right)\\\ &&&&-b\left(1\right)&-(n-2)x\\\ \end{vmatrix}\normalsize$ (7.7) $\displaystyle Q_{\frac{n+1}{2}}(x;n)$ $\displaystyle=\small\begin{vmatrix}(n-2)x&b(1)&&&&\\\ -a\left(\frac{n-1}{2}\right)&(n-6)x&b(2)&&&\\\ &-a\left(\frac{n-3}{2}\right)&(n-10)x&b(3)&&\\\ &&\dots&\dots&\dots&\\\ &&&-a(2)&-(n-4)x&b\left(\frac{n-1}{2}\right)\\\ &&&&-a(1)&-nx\\\ \end{vmatrix}.\normalsize$ (7.8) We also have $P_{1}(x;1)=x$ and $Q_{1}(x;1)=-x$. Therefore, $Q_{\frac{n+1}{2}}(x;n)=(-1)^{\frac{n+1}{2}}P_{\frac{n+1}{2}}(x;n)\quad\text{for $n\in 1+2\mathbb{Z}_{\geq 0}$.}$ (7.9) The tridiagonal determinants $P_{k}(x;y)$ and $Q_{k}(x;y)$ enjoy the following property. ###### Lemma 7.10. For $k\in\mathbb{Z}_{\geq 0}$, we have $P_{k}(-x;y)=(-1)^{k}P_{k}(x;y)\quad\text{and}\quad Q_{k}(-x;y)=(-1)^{k}Q_{k}(x;y).$ ###### Proof. The identities for $k=0,1$ clearly hold by definition. 
The assertions for $k\geq 2$ follow from the following general property of $k\times k$ tridiagonal determinants: $\small\begin{vmatrix}-a_{1}&b_{1}&&&&\\\ c_{1}&-a_{2}&b_{2}&&&\\\ &c_{2}&-a_{3}&b_{3}&&&\\\ &&\dots&\dots&\dots&&\\\ &&&c_{k-2}&-a_{k-1}&b_{k-1}\\\ &&&&c_{k-1}&-a_{k}\\\ \end{vmatrix}=(-1)^{k}\small\begin{vmatrix}a_{1}&b_{1}&&&&\\\ c_{1}&a_{2}&b_{2}&&&\\\ &c_{2}&a_{3}&b_{3}&&&\\\ &&\dots&\dots&\dots&&\\\ &&&c_{k-2}&a_{k-1}&b_{k-1}\\\ &&&&c_{k-1}&a_{k}\\\ \end{vmatrix}.\qed$ (7.11) From the next subsection onwards, $y$ is taken to be $y=n\in\mathbb{Z}_{\geq 0}$, and we shall discuss several properties of $\\{P_{k}(x;n)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;n)\\}_{k=0}^{\infty}$. ### 7.2. Generating functions of $\\{P_{k}(x;n)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;n)\\}_{k=0}^{\infty}$ We first give the generating functions of $\\{P_{k}(x;n)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;n)\\}_{k=0}^{\infty}$. Let $u_{[s;n]}(t)$ be the local Heun function defined in (4.34). Write $u_{[s;n]}(t)=\sum_{k=0}^{\infty}U_{k}(s;n)t^{2k}$ for the power series expansion at $t=0$. It follows from (9.12) with (9.9) in Section 9 that each coefficient $U_{k}(s;n)$ can be given as $U_{k}(s;n)=\small\begin{vmatrix}E^{u}_{0}&-1&&&&\\\ F^{u}_{1}&E^{u}_{1}&-1&&&\\\ &F^{u}_{2}&E^{u}_{2}&-1&&&\\\ &&\dots&\dots&\dots&&\\\ &&&F^{u}_{k-2}&E^{u}_{k-2}&-1\\\ &&&&F^{u}_{k-1}&E^{u}_{k-1}\\\ \end{vmatrix}\\\ $ (7.12) with $E^{u}_{0}=\frac{ns}{a(1)},\quad E^{u}_{k}=\frac{(n-4k)s}{a(k+1)},\quad\text{and}\quad F^{u}_{k}=\frac{a\left(\frac{n-2k+2}{2}\right)}{a(k+1)}.$ (7.13) Similarly, equation (9.13) with (9.11) in Section 9 shows that the coefficients $V_{k}(s;n)$ of the power series expansion $v_{[s;n]}(t)=\sum_{k=0}^{\infty}V_{k}(s;n)t^{2k+1}$ of the second solution $v_{[s;n]}(t)$ to the Heun equation (4.33) at $t=0$ are given as $V_{k}(s;n)=\small\begin{vmatrix}E^{v}_{0}&-1&&&&\\\ F^{v}_{1}&E^{v}_{1}&-1&&&\\\ &F^{v}_{2}&E^{v}_{2}&-1&&&\\\ &&\dots&\dots&\dots&&\\\ &&&F^{v}_{k-2}&E^{v}_{k-2}&-1\\\ &&&&F^{v}_{k-1}&E^{v}_{k-1}\\\ \end{vmatrix}$ (7.14) with $E^{v}_{0}=\frac{(n-2)s}{b(1)},\quad E^{v}_{k}=\frac{(n-4k-2)s}{b(k+1)},\quad\text{and}\quad F^{v}_{k}=\frac{b\left(\frac{n-2k}{2}\right)}{b(k+1)}.$ (7.15) Proposition 7.16 below then shows that $u_{[s;n]}(t)$ and $v_{[s;n]}(t)$ are in fact the “hyperbolic cosine” generating function of $\\{P_{k}(s;n)\\}_{k=0}^{\infty}$ and “hyperbolic sine” generating function of $\\{Q_{k}(s;n)\\}_{k=0}^{\infty}$, respectively. ###### Proposition 7.16. We have $\displaystyle u_{[s;n]}(t)=\sum_{k=0}^{\infty}P_{k}(s;n)\frac{t^{2k}}{(2k)!}\quad\mathrm{and}\quad v_{[s;n]}(t)=\sum_{k=0}^{\infty}Q_{k}(s;n)\frac{t^{2k+1}}{(2k+1)!}.$ ###### Proof. We only show the identity for $u_{[s;n]}(t)$; the assertion for $v_{[s;n]}(t)$ can be shown similarly. We wish to show that $U_{k}(s;n)=P_{k}(s;n)/(2k)!$ for all $k\in\mathbb{Z}_{\geq 0}$. By definition, it is clear that $U_{0}(s;n)=1=P_{0}(s;n)$ and $U_{1}(s;n)=ns/2=P_{1}(s;n)/2!$. We then assume that $k\geq 2$. It follows from (7.12) and (7.13) that $U_{k}(s;n)$ can be given as $U_{k}(s;n)=\frac{(-1)^{k}}{\prod^{k}_{j=1}a(j)}P_{k}(-s;n)=\frac{P_{k}(s;n)}{\prod^{k}_{j=1}a(j)}.$ Here, Lemma 7.10 is applied to pass from the second expression to the third. Now the assertion follows from the identity $\prod^{k}_{j=1}a(j)=(2k)!$. ∎ For $R_{k}(s;n)\in\\{P_{k}(s;n),Q_{k}(s;n)\\}$, we define $\mathcal{S}\mathrm{ol}_{k}\left(R;n\right):=\\{s\in\mathbb{C}:R_{k}(s;n)=0\\}.$ We recall from (5.3) the subsets $I^{\pm}_{0}$, $I_{1}$, $I^{\pm}_{2}$, $I_{3}\subset\mathbb{Z}$. ###### Corollary 7.17.
Let $n\in 2\mathbb{Z}_{\geq 0}$. Then $\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)$ and $\mathcal{S}\mathrm{ol}_{\frac{n}{2}}(Q;n)$ are given as $\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)=\begin{cases}\mathbb{C}&\emph{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] I^{+}_{2}\cup I^{-}_{2}&\emph{if $n\equiv 2\ (\mathrm{mod}\ 4)$}\end{cases}$ and $\mathcal{S}\mathrm{ol}_{\frac{n}{2}}(Q;n)=\begin{cases}I^{+}_{0}\cup I^{-}_{0}&\emph{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] \mathbb{C}&\emph{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ Further, for $n\in 1+2\mathbb{Z}_{\geq 0}$, the sets $\mathcal{S}\mathrm{ol}_{\frac{n+1}{2}}(P;n)$ and $\mathcal{S}\mathrm{ol}_{\frac{n+1}{2}}(Q;n)$ are given as $\mathcal{S}\mathrm{ol}_{\frac{n+1}{2}}(P;n)=\mathcal{S}\mathrm{ol}_{\frac{n+1}{2}}(Q;n)=\begin{cases}I_{1}&\emph{if $n\equiv 1\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] I_{3}&\emph{if $n\equiv 3\ (\mathrm{mod}\ 4)$}.\end{cases}$ ###### Proof. It follows from Propositions 6.5 and 7.16 that the following conditions on $s\in\mathbb{C}$ are equivalent for $n\in 2\mathbb{Z}_{\geq 0}$: 1. (i) $s\in\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)$; 2. (ii) $u_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$; 3. (iii) $s\in\begin{cases}\mathbb{C}&\text{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\ I^{+}_{2}\cup I^{-}_{2}&\text{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ This concludes the assertion for $\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)$. The other cases can be shown similarly. ∎ ### 7.3. Factorization formulas of $P_{\left[\frac{n+2}{2}\right]}(x;n)$ and $Q_{\left[\frac{n+1}{2}\right]}(x;n)$ We next show the factorization formulas for $P_{\frac{n+2}{2}}(x;n)$ and $Q_{\frac{n}{2}}(x;n)$ for $n$ even, and for $P_{\frac{n+1}{2}}(x;n)$ and $Q_{\frac{n+1}{2}}(x;n)$ for $n$ odd. We remark that any product of the form $\prod^{j-1}_{\ell=j}c_{\ell}$ with $c_{\ell}\in\mathrm{Pol}[t]$ (in particular, $c_{\ell}\in\mathbb{C}$) is regarded as $\prod^{j-1}_{\ell=j}c_{\ell}=1$. #### 7.3.1. Factorization formulas of $P_{\frac{n+2}{2}}(x;n)$ and $Q_{\frac{n}{2}}(x;n)$ We start with $P_{\frac{n+2}{2}}(x;n)$ and $Q_{\frac{n}{2}}(x;n)$ for $n$ even (see (7.5) and (7.6)). For $n\in 2\mathbb{Z}_{\geq 0}$, let $\alpha_{n}$ and $\beta_{n}$ be the products of the coefficients of $x$ on the main diagonal of $P_{\frac{n+2}{2}}(x;n)$ and $Q_{\frac{n}{2}}(x;n)$, respectively. Namely, we have $\alpha_{n}=\prod^{\frac{n}{2}}_{\ell=0}(n-4\ell)\quad\text{and}\quad\beta_{n}=\prod^{\frac{n-2}{2}}_{\ell=0}(n-2-4\ell).$ We remark that $\alpha_{n}$ and $\beta_{n}$ may be given as follows. $\alpha_{n}=\begin{cases}0&\text{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] (-4)^{\frac{n+2}{4}}\prod\limits^{\frac{n-2}{4}}_{\ell=0}(1+2\ell)^{2}&\text{if $n\equiv 2\ (\mathrm{mod}\ 4)$}\end{cases}$ and $\beta_{n}=\begin{cases}(-4)^{\frac{n}{4}}\prod\limits^{\frac{n-4}{4}}_{\ell=0}(1+2\ell)^{2}&\text{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] 0&\text{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ ###### Theorem 7.18 (Factorization formulas of $P_{\frac{n+2}{2}}(x;n)$ and $Q_{\frac{n}{2}}(x;n)$). For $n\in 2\mathbb{Z}_{\geq 0}$, the polynomials $P_{\frac{n+2}{2}}(x;n)$ and $Q_{\frac{n}{2}}(x;n)$ are either $0$ or factored as follows.
$P_{\frac{n+2}{2}}(x;n)=\begin{cases}0&\emph{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] \alpha_{n}\prod\limits^{\frac{n-2}{4}}_{\ell=0}(x^{2}-(4\ell+1)^{2})&\emph{if $n\equiv 2\ (\mathrm{mod}\ 4)$}\end{cases}$ (7.19) and $Q_{\frac{n}{2}}(x;n)=\begin{cases}\beta_{n}\prod\limits^{\frac{n-4}{4}}_{\ell=0}(x^{2}-(4\ell+3)^{2})&\emph{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] 0&\emph{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ (7.20) ###### Proof. We only give the proof for $P_{\frac{n+2}{2}}(x;n)$; the assertion for $Q_{\frac{n}{2}}(x;n)$ can be shown similarly. It follows from Corollary 7.17 that $\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)=\begin{cases}\mathbb{C}&\text{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] I^{+}_{2}\cup I^{-}_{2}&\text{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ In particular, as $\alpha_{n}$ is the leading coefficient of $P_{\frac{n+2}{2}}(x;n)$, being the product of the coefficients of $x$ on the main diagonal, we have $P_{\frac{n+2}{2}}(x;n)=\begin{cases}0&\text{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[2.0pt] \alpha_{n}\prod\limits_{s\in I^{+}_{2}\cup I^{-}_{2}}(x-s)&\text{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ Now the assertion follows from the identity $\prod_{s\in I^{+}_{2}\cup I^{-}_{2}}(x-s)=\prod^{\frac{n-2}{4}}_{\ell=0}(x^{2}-(4\ell+1)^{2})$. ∎ ###### Remark 7.21. The factorization formulas (7.19) and (7.20) can also be obtained from [18, Prop. 5.11]. In fact, in [18], the dimension $\dim_{\mathbb{C}}\mathrm{Sol}_{\textnormal{I}}(s;n)(=\dim_{\mathbb{C}}\mathrm{Sol}_{\textnormal{II}}(s;n))$ was determined by using (7.19) and (7.20) (see Remark 5.18). #### 7.3.2. Factorization formula of $P_{\frac{n+1}{2}}(x;n)$ We next consider $P_{\frac{n+1}{2}}(x;n)$ and $Q_{\frac{n+1}{2}}(x;n)$ for $n$ odd (see (7.7) and (7.8)). As shown in (7.9), the determinant $Q_{\frac{n+1}{2}}(x;n)$ is given as $Q_{\frac{n+1}{2}}(x;n)=(-1)^{\frac{n+1}{2}}P_{\frac{n+1}{2}}(x;n)\quad\text{for $n\in 1+2\mathbb{Z}_{\geq 0}$}.$ It thus suffices to consider only $P_{\frac{n+1}{2}}(x;n)$. For $n$ odd, let $\gamma_{n}$ be the product of the coefficients of $x$ on the main diagonal of $P_{\frac{n+1}{2}}(x;n)$, namely, $\gamma_{n}=\prod^{\frac{n-1}{2}}_{\ell=0}(n-4\ell).$ We remark that $\gamma_{n}$ may be given as $\gamma_{n}=\begin{cases}(-1)^{\frac{n-1}{4}}\prod\limits^{\frac{n-3}{2}}_{\ell=0}(3+2\ell)&\text{if $n\equiv 1\ (\mathrm{mod}\ 4)$},\\\\[8.61108pt] (-1)^{\frac{n+1}{4}}\prod\limits^{\frac{n-3}{2}}_{\ell=0}(3+2\ell)&\text{if $n\equiv 3\ (\mathrm{mod}\ 4)$}.\end{cases}$ ###### Theorem 7.22 (Factorization formula of $P_{\frac{n+1}{2}}(x;n)$). For $n\in 1+2\mathbb{Z}_{\geq 0}$, the polynomial $P_{\frac{n+1}{2}}(x;n)$ is factored as $P_{\frac{n+1}{2}}(x;n)=\gamma_{n}\prod\limits^{\frac{n-1}{2}}_{\ell=0}(x-(n-1)+4\ell).$ (7.23) Equivalently, we have $P_{\frac{n+1}{2}}(x;n)=\begin{cases}\gamma_{n}\,x\prod\limits^{\frac{n-1}{4}}_{\ell=1}(x^{2}-(4\ell)^{2})&\textnormal{if $n\equiv 1\ (\mathrm{mod}\ 4)$},\\\\[5.0pt] \gamma_{n}\prod\limits^{\frac{n-3}{4}}_{\ell=0}(x^{2}-(4\ell+2)^{2})&\textnormal{if $n\equiv 3\ (\mathrm{mod}\ 4)$}.\end{cases}$ ###### Proof. Since the argument is similar to that for Theorem 7.18, we omit the proof. ∎ The factorization formula of $P_{\frac{n+1}{2}}(x;n)$ in (7.23) suggests that one may express $P_{\frac{n+1}{2}}(x;n)$ in terms of the so-called _Sylvester determinant_ $\mathrm{Sylv}(x;n)$ (see (1.21)). The factorization formula (7.25) below for $\mathrm{Sylv}(x;n)$ was first observed by Sylvester in 1854 ([35]).
Later, several authors, including Askey ([1]), Edelman–Kostlan ([10]), Holtz ([13]), Kac ([21]), Mazza ([30, p. 442]), and Taussky-Todd ([37]), gave further proofs. For more details about the Sylvester determinant $\mathrm{Sylv}(x;n)$ as well as some related topics, see, for instance, [7, 8, 31, 32] and references therein. We remark that the factorization formula (7.25) also readily follows from a general theory of $\mathfrak{sl}(2,\mathbb{C})$ representations (see Proposition 8.3 and Section 10). ###### Fact 7.24 (Sylvester’s factorization formula). Let $n\in 1+\mathbb{Z}_{\geq 0}$. The Sylvester determinant $\mathrm{Sylv}(x;n)$ in (1.21) is factored as $\mathrm{Sylv}(x;n)=\prod_{\ell=0}^{n}(x-n+2\ell).$ (7.25) Equivalently, we have $\mathrm{Sylv}(x;n)=\begin{cases}\prod\limits^{\frac{n-1}{2}}_{\ell=0}(x^{2}-(2\ell+1)^{2})&\text{if $n$ is odd},\\\ x\prod\limits^{\frac{n}{2}}_{\ell=1}(x^{2}-(2\ell)^{2})&\text{if $n$ is even}.\\\ \end{cases}$ (7.26) ###### Corollary 7.27. For $n$ odd, we have $P_{\frac{n+1}{2}}(x;n)=2^{n+1}\mathrm{Sylv}(\frac{1}{2};\frac{n-1}{2})\mathrm{Sylv}(\frac{x}{2};\frac{n-1}{2}).$ ###### Proof. By a direct computation one finds $\prod^{\frac{n-1}{2}}_{\ell=0}(n-4\ell)=2^{\frac{n+1}{2}}\mathrm{Sylv}(\frac{1}{2};\frac{n-1}{2})$ and $\prod^{\frac{n-1}{2}}_{\ell=0}(x-(n-1)+4\ell)=2^{\frac{n+1}{2}}\mathrm{Sylv}(\frac{x}{2};\frac{n-1}{2}).$ Now the proposed identity follows from Theorem 7.22 as $\gamma_{n}=\prod^{\frac{n-1}{2}}_{\ell=0}(n-4\ell)$. ∎ ### 7.4. Palindromic properties of $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ We finish this section by showing the palindromic properties of $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ (see Definition 1.15). For $s\in\mathbb{R}\backslash\\{0\\}$, we set $\mathrm{sgn}(s):=\begin{cases}+1&\text{if $s>0$},\\\ -1&\text{if $s<0$}.\end{cases}$ #### 7.4.1. Palindromic property of $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ We start with $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$. Recall from Corollary 7.17 that $\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)$ for $n\in 2\mathbb{Z}_{\geq 0}$ is given as $\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)=\begin{cases}\mathbb{C}&\textnormal{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] I^{+}_{2}\cup I^{-}_{2}&\textnormal{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ We then define a map $\theta_{(P;n)}\colon\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)\to\\{\pm 1\\}$ as $\theta_{(P;n)}(s)=\begin{cases}1&\textnormal{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\ \mathrm{sgn}(s)&\textnormal{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ (7.28) ###### Theorem 7.29 (Palindromic property of $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$). The pair $(\\{P_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k)!\\}_{k=0}^{\infty})$ is a palindromic pair with degree $\frac{n}{2}$. Namely, let $n\in 2\mathbb{Z}_{\geq 0}$ and $s\in\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)$. Then we have $P_{k}(s;n)=0$ for $k\geq\frac{n+2}{2}$ and $\frac{P_{k}(s;n)}{(2k)!}=\theta_{(P;n)}(s)\frac{P_{\frac{n}{2}-k}(s;n)}{(n-2k)!}\quad\emph{for $k\leq\frac{n}{2}$}.$ (7.30) Equivalently, for $k\leq\frac{n}{2}$, the palindromic identity (7.30) is given as follows. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ We have $\frac{P_{k}(s;n)}{(2k)!}=\frac{P_{\frac{n}{2}-k}(s;n)}{(n-2k)!}\quad\emph{for all $s\in\mathbb{C}$}.$ (7.31) 2. (2) $n\equiv 2\ (\mathrm{mod}\ 4):$ We have $\frac{P_{k}(s;n)}{(2k)!}=\mathrm{sgn}(s)\frac{P_{\frac{n}{2}-k}(s;n)}{(n-2k)!}\quad\emph{for $s\in I^{+}_{2}\cup I^{-}_{2}$}.$ (7.32) ###### Proof.
Take $s\in\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)$. It follows from the equivalence given in the proof of Corollary 7.17 that $P_{k}(s;n)=0$ for $k\geq\frac{n+2}{2}$. Thus, to prove the theorem, it suffices to show (7.31) and (7.32) for $k\leq\frac{n}{2}$. We first show that $\frac{P_{k}(s;n)}{(2k)!}=\pm\frac{P_{\frac{n}{2}-k}(s;n)}{(n-2k)!}.$ (7.33) It follows from Theorem 6.8 and Corollary 7.17 that $M$ acts on $u_{[s;n]}(t)$ as a character $\chi_{(\varepsilon,\varepsilon^{\prime})}$ in (3.12); in particular, we have $u_{[s;n]}(t)=\chi_{(\varepsilon,\varepsilon^{\prime})}(m_{2}^{\textnormal{I}})\pi_{n}(m_{2}^{\textnormal{I}})u_{[s;n]}(t).$ (7.34) By (3.16), the identity (7.34) is equivalent to $u_{[s;n]}(t)=\chi_{(\varepsilon,\varepsilon^{\prime})}(m_{2}^{\textnormal{I}})t^{n}u_{[s;n]}\left(-\frac{1}{t}\right).$ (7.35) As $s\in\mathcal{S}\mathrm{ol}_{\frac{n+2}{2}}(P;n)$, by Proposition 7.16, we have $u_{[s;n]}(t)=\sum_{k=0}^{\frac{n}{2}}P_{k}(s;n)\frac{t^{2k}}{(2k)!}.$ The identity (7.35) thus yields $\sum_{k=0}^{\frac{n}{2}}P_{k}(s;n)\frac{t^{2k}}{(2k)!}=\chi_{(\varepsilon,\varepsilon^{\prime})}(m_{2}^{\textnormal{I}})\sum_{k=0}^{\frac{n}{2}}P_{\frac{n}{2}-k}(s;n)\frac{t^{2k}}{(n-2k)!}.$ (7.36) Since $\chi_{(\varepsilon,\varepsilon^{\prime})}(m_{2}^{\textnormal{I}})\in\\{\pm 1\\}$, the identity (7.33) follows from (7.36). In order to show (7.31) and (7.32), observe that Theorem 6.8 shows that the character $\chi_{(\varepsilon,\varepsilon^{\prime})}$ is given as * • $\chi_{(\varepsilon,\varepsilon^{\prime})}=\chi_{\textnormal{\mbox{\smaller($+$,$+$)}}}$ for $n\equiv 0\ (\mathrm{mod}\ 4)$ and $s\in\mathbb{C}$, * • $\chi_{(\varepsilon,\varepsilon^{\prime})}=\chi_{\textnormal{\mbox{\smaller($-$,$+$)}}}$ for $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s\in I^{+}_{2}$, * • $\chi_{(\varepsilon,\varepsilon^{\prime})}=\chi_{\textnormal{\mbox{\smaller($+$,$-$)}}}$ for $n\equiv 2\ (\mathrm{mod}\ 4)$ and $s\in I^{-}_{2}$. By Table 3, we have $\chi_{\textnormal{\mbox{\smaller($+$,$+$)}}}(m_{2}^{\textnormal{I}})=\chi_{\textnormal{\mbox{\smaller($-$,$+$)}}}(m_{2}^{\textnormal{I}})=1$ and $\chi_{\textnormal{\mbox{\smaller($+$,$-$)}}}(m_{2}^{\textnormal{I}})=-1$. Now (7.31) and (7.32) follow from (7.36). ∎ Theorem 7.29 in particular implies the factorial identity of $P_{\frac{n}{2}}(s;n)$ (see (1.19)) as follows. ###### Corollary 7.37 (Factorial identity of $P_{\frac{n}{2}}(x;n)$). Let $n\in 2\mathbb{Z}_{\geq 0}$. Then the following hold. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ We have $P_{\frac{n}{2}}(s;n)=n!\quad\emph{for all $s\in\mathbb{C}$}.$ 2. (2) $n\equiv 2\ (\mathrm{mod}\ 4):$ We have $P_{\frac{n}{2}}(s;n)=\mathrm{sgn}(s)n!\quad\emph{for $s\in I^{+}_{2}\cup I^{-}_{2}$}.$ (7.38) ###### Proof. This is simply the case of $k=\frac{n}{2}$ in (7.31) and (7.32). ∎ We now give a proof for Corollary 1.24 in the introduction. ###### Proof of Corollary 1.24. It follows from Corollary 7.37 that it suffices to show (1.25). Suppose $n\equiv 2\ (\mathrm{mod}\ 4)$. Since $I^{+}_{2}\cup I^{-}_{2}=\left\\{\pm(4j+1):j=0,1,2,\ldots,\frac{n-2}{4}\right\\},$ we have $I^{+}_{2}\cup I^{-}_{2}=\\{\pm 1,\pm 5,\pm 9,\ldots,\pm(n-1)\\}.$ The identity (7.38) then concludes the assertion. ∎ Theorem 7.29 shows that $(\\{P_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k)!\\}_{k=0}^{\infty})$ is a palindromic pair for $n$ even. Proposition 7.39 below shows that this is no longer the case for $n$ odd. ###### Proposition 7.39. The pair $(\\{P_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k)!\\}_{k=0}^{\infty})$ is not a palindromic pair for $n$ odd.
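Before turning to the proof, the following small computation illustrates the dichotomy (a minimal sketch in Python/SymPy, assuming the determinant definitions of Section 7.1; the helper `P` and the sampled values are ours, chosen for illustration only): the identity (7.31) is verified symbolically for $n=4,8$, while for $n=5$ and $s=4$ (a root of $P_{3}(x;5)$ by Theorem 7.22) one has $P_{0}(4;5)/0!=1$ but $P_{2}(4;5)/4!=5$, so no choice of signs can restore the symmetry.

```python
from sympy import symbols, Rational, factorial, expand, simplify, S

x = symbols('x')
a = lambda z: 2*z*(2*z - 1)                      # a(x) = 2x(2x-1), as in (7.1)

def P(k, n):
    # P_0 = 1, P_1 = nx; continuant recurrence of the tridiagonal determinant:
    # P_m = (n - 4(m-1)) x * P_{m-1} + a(m-1) * a((n - 2m + 4)/2) * P_{m-2}
    d_prev, d = S(1), n*x
    if k == 0:
        return S(1)
    for m in range(2, k + 1):
        d_prev, d = d, expand((n - 4*(m - 1))*x*d
                              + a(m - 1)*a(Rational(n - 2*m + 4, 2))*d_prev)
    return d

# Theorem 7.29, identity (7.31): for n = 0 (mod 4) it holds for ALL x.
for n in (4, 8):
    for k in range(n//2 + 1):
        assert simplify(P(k, n)/factorial(2*k)
                        - P(n//2 - k, n)/factorial(n - 2*k)) == 0

# n = 5 (odd), s = 4: the sequence P_k(4;5) is 1, 20, 120, 0, 0, ...,
# and 1 = P_0/0! differs from ±P_2/4! = ±5, so the pair is not palindromic.
print([P(k, 5).subs(x, 4) for k in range(5)])    # [1, 20, 120, 0, 0]
```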
###### Proof. Suppose, to the contrary, that there exist $n_{0}\in 1+2\mathbb{Z}_{\geq 0}$ and a map $d\colon\mathbb{Z}_{\geq 0}\to\mathbb{R}$ with $d(n_{0})\in\mathbb{Z}_{\geq 0}$ such that, for all $s\in\mathcal{S}\mathrm{ol}_{d(n_{0})+1}(P;n_{0})$, we have $P_{k}(s;n_{0})=0$ for $k\geq d(n_{0})+1$ and $\frac{P_{k}(s;n_{0})}{(2k)!}=\pm\frac{P_{d(n_{0})-k}(s;n_{0})}{(2d(n_{0})-2k)!}\quad\text{for $k\leq d(n_{0})$}.$ Take $s\in\mathcal{S}\mathrm{ol}_{d(n_{0})+1}(P;n_{0})$. It follows from Proposition 7.16 that the palindromity of the pair $(\\{P_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k)!\\}_{k=0}^{\infty})$ implies $u_{[s;n_{0}]}(t)=\sum_{k=0}^{d(n_{0})}P_{k}(s;n_{0})\frac{t^{2k}}{(2k)!}\in\mathrm{Pol}_{2d(n_{0})}[t]$ and $t^{2d(n_{0})}u_{[s;n_{0}]}\left(\frac{1}{t}\right)=\pm u_{[s;n_{0}]}(t),$ which is, by (3.16), equivalent to $\pi_{2d(n_{0})}(m_{2}^{\textnormal{I}})u_{[s;n_{0}]}(t)=\pm u_{[s;n_{0}]}(t).$ Since $u_{[s;n_{0}]}(-t)=u_{[s;n_{0}]}(t)$, this further implies that $\mathbb{C}u_{[s;n_{0}]}(t)$ is a one-dimensional representation of $M$ for such $(s,n_{0})$. On the other hand, as $n_{0}\in 1+2\mathbb{Z}_{\geq 0}$, it follows from Proposition 6.5 that, for $n_{0}\equiv\ell\ (\mathrm{mod}\ 4)$ with $\ell=1,3$, we have $s\in I_{\ell}\cap\mathcal{S}\mathrm{ol}_{d(n_{0})+1}(P;n_{0}).$ In particular, by Theorem 6.8, the space $\mathbb{C}u_{[s;n_{0}]}(t)\oplus\mathbb{C}v_{[s;n_{0}]}(t)$ forms the unique two-dimensional irreducible representation of $M$, which is a contradiction. Now the proposed assertion follows. ∎ #### 7.4.2. Palindromic property of $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ We next consider $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$. Corollary 7.17 shows that $\mathcal{S}\mathrm{ol}_{\frac{n}{2}}(Q;n)=\begin{cases}I^{+}_{0}\cup I^{-}_{0}&\textnormal{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\\[3.0pt] \mathbb{C}&\textnormal{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ We then define a map $\theta_{(Q;n)}\colon\mathcal{S}\mathrm{ol}_{\frac{n}{2}}(Q;n)\to\\{\pm 1\\}$ as $\theta_{(Q;n)}(s)=\begin{cases}\mathrm{sgn}(s)&\textnormal{if $n\equiv 0\ (\mathrm{mod}\ 4)$},\\\ 1&\textnormal{if $n\equiv 2\ (\mathrm{mod}\ 4)$}.\end{cases}$ (7.40) ###### Theorem 7.41 (Palindromic property of $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$). The pair $(\\{Q_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k+1)!\\}_{k=0}^{\infty})$ is a palindromic pair with degree $\frac{n-2}{2}$. Namely, let $n\in 2(1+\mathbb{Z}_{\geq 0})$ and $s\in\mathcal{S}\mathrm{ol}_{\frac{n}{2}}(Q;n)$. Then we have $Q_{k}(s;n)=0$ for $k\geq\frac{n}{2}$ and $\frac{Q_{k}(s;n)}{(2k+1)!}=\theta_{(Q;n)}(s)\frac{Q_{\frac{n-2}{2}-k}(s;n)}{(n-2k-1)!}\quad\emph{for $k\leq\frac{n-2}{2}$}.$ (7.42) Equivalently, for $k\leq\frac{n}{2}-1$, the palindromic identity (7.42) is given as follows. 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ We have $\frac{Q_{k}(s;n)}{(2k+1)!}=\mathrm{sgn}(s)\frac{Q_{\frac{n-2}{2}-k}(s;n)}{(n-2k-1)!}\quad\emph{for $s\in I^{+}_{0}\cup I^{-}_{0}$}.$ (7.43) 2. (2) $n\equiv 2\ (\mathrm{mod}\ 4):$ We have $\frac{Q_{k}(s;n)}{(2k+1)!}=\frac{Q_{\frac{n-2}{2}-k}(s;n)}{(n-2k-1)!}\quad\emph{for all $s\in\mathbb{C}$}.$ (7.44) ###### Proof. Since the argument is similar to that for Theorem 7.29, we omit the proof. ∎ ###### Corollary 7.45 (Factorial identity of $Q_{\frac{n-2}{2}}(x;n)$). Let $n\in 2(1+\mathbb{Z}_{\geq 0})$. Then the following hold for $Q_{\frac{n-2}{2}}(s;n)$ (see (1.20)). 1. (1) $n\equiv 0\ (\mathrm{mod}\ 4):$ We have $Q_{\frac{n-2}{2}}(s;n)=\mathrm{sgn}(s)(n-1)!\quad\emph{for $s\in I^{+}_{0}\cup I^{-}_{0}$}.$ (7.46) 2.
(2) $n\equiv 2\ (\mathrm{mod}\ 4):$ We have $Q_{\frac{n-2}{2}}(s;n)=(n-1)!\quad\emph{for all $s\in\mathbb{C}$}.$ ###### Proof. This is the case of $k=\frac{n-2}{2}$ in (7.43) and (7.44). ∎ Here is a proof of Corollary 1.26 in the introduction. ###### Proof of Corollary 1.26. By Corollary 7.45, it suffices to show (1.27). Suppose $n\equiv 0\ (\mathrm{mod}\ 4)$. Since $I^{+}_{0}\cup I^{-}_{0}=\left\\{\pm(4j+3):j=0,1,2,\ldots,\frac{n-4}{4}\right\\},$ we have $I^{+}_{0}\cup I^{-}_{0}=\left\\{\pm 3,\pm 7,\pm 11,\ldots,\pm(n-1)\right\\}.$ Then (7.46) concludes the proposed identity. ∎ We close this section with a remark on the palindromity of the pair $(\\{Q_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k+1)!\\}_{k=0}^{\infty})$ for $n$ odd. ###### Proposition 7.47. The pair $(\\{Q_{k}(x;y)\\}_{k=0}^{\infty},\\{(2k+1)!\\}_{k=0}^{\infty})$ is not a palindromic pair for $n$ odd. ###### Proof. As the proof is similar to the one for Proposition 7.39, we omit the proof. ∎ ## 8\. Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ and Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ In Section 7, it was shown that $\\{P_{k}(x;y)\\}_{k=0}^{\infty}$ and $\\{Q_{k}(x;y)\\}_{k=0}^{\infty}$ admit palindromic properties. In this short section we show that the sequences of Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ and Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ also admit palindromic properties. These are achieved in Theorem 8.8 for Cayley continuants and in Theorem 8.19 for Krawtchouk polynomials. ### 8.1. Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ We start with the definition of $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$. For each $k\in\mathbb{Z}_{\geq 0}$, the $k\times k$ tridiagonal determinants $\mathrm{Cay}_{k}(x;y)$ are defined as follows ([4, 31] and [30, p. 429]). * • $k=0:$ $\mathrm{Cay}_{0}(x;y)=1$, * • $k=1:$ $\mathrm{Cay}_{1}(x;y)=x$, * • $k\geq 2:$ $\mathrm{Cay}_{k}(x;y)=\small\begin{vmatrix}x&1&&&&\\\ y&x&2&&&\\\ &y-1&x&3&&\\\ &&\dots&\dots&\dots&\\\ &&&y-k+3&x&k-1\\\ &&&&y-k+2&x\\\ \end{vmatrix}$. Following [31], we refer to each $\mathrm{Cay}_{k}(x;y)$ as a _Cayley continuant_. They admit several combinatorial interpretations. For instance, they can be thought of as a generalization of the raising factorial (shifted factorial) $x^{\overline{k}}:=x(x+1)\cdots(x+k-1)$, the falling factorial $x^{\underline{k}}:=x(x-1)\cdots(x-k+1)$, and the factorial $k!$. Indeed, we have $\mathrm{Cay}_{k}(x;x)=x^{\underline{k}},\quad\mathrm{Cay}_{k}(x;-x)=x^{\overline{k}},\quad\text{and}\quad\mathrm{Cay}_{k}(1;-1)=k!.$ For more details on the relationship with combinatorics, see [31]. ###### Remark 8.1. When $y=n\in 1+\mathbb{Z}_{\geq 0}$, the Cayley continuant $\mathrm{Cay}_{n+1}(x;n)$ is the Sylvester determinant $\mathrm{Sylv}(x;n)$ (see (1.21)). Thus, by (7.25), we have $\mathrm{Cay}_{n+1}(x;n)=\small\begin{vmatrix}x&1&&&&&\\\ n&x&2&&&&\\\ &n-1&x&3&&&\\\ &&\dots&\dots&\dots&&\\\ &&&3&x&n-1&\\\ &&&&2&x&n\\\ &&&&&1&x\end{vmatrix}=\mathrm{Sylv}(x;n)=\prod_{\ell=0}^{n}(x-n+2\ell).$ (8.2) The generating function of the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ is known, and several proofs appear in the literature (see, for instance, [21, 31, 37]). In Proposition 8.3 below, by utilizing the idea of the proof of Proposition 7.16, we shall provide another proof for the generating function as well as Sylvester’s formula (8.2). We remark that our argument for the formula (8.2) can be made more abstract.
(See Section 10.) ###### Proposition 8.3. We have $(1+t)^{\frac{y+x}{2}}(1-t)^{\frac{y-x}{2}}=\sum_{k=0}^{\infty}\mathrm{Cay}_{k}(x;y)\frac{t^{k}}{k!}.$ (8.4) In particular, $\mathrm{Cay}_{n+1}(x;n)=\prod_{\ell=0}^{n}(x-n+2\ell).$ ###### Proof. Let $E_{+}$ and $E_{-}$ be the elements of $\mathfrak{sl}(2,\mathbb{C})$ in (3.2). It then follows from (3.17) that the matrix $d\pi_{n}(E_{+}+E_{-})_{\mathcal{B}}$ with respect to the ordered basis $\mathcal{B}:=\\{1,t,t^{2},\ldots,t^{n}\\}$ is given as $d\pi_{n}(E_{+}+E_{-})_{\mathcal{B}}=\small\begin{pmatrix}0&-1&&&&&\\\ -n&0&-2&&&&\\\ &-(n-1)&0&-3&&&\\\ &&\dots&\dots&\dots&&\\\ &&&-3&0&-(n-1)&\\\ &&&&-2&0&-n\\\ &&&&&-1&0\end{pmatrix}.\\\ $ Therefore the Sylvester determinant $\mathrm{Cay}_{n+1}(x;n)\ (=\mathrm{Sylv}(x;n))$ is the characteristic polynomial of $d\pi_{n}(E_{+}+E_{-})$; thus, to show (8.2), it suffices to find the eigenvalues of $d\pi_{n}(E_{+}+E_{-})$. We set $\mathcal{D}_{[x;n]}:=(1-t^{2})\frac{d}{dt}+nt-x.$ By (3.17), for $p(t)\in\mathrm{Pol}_{n}[t]$, we have $(d\pi_{n}(E_{+})+d\pi_{n}(E_{-})+x)p(t)=0\quad\Longleftrightarrow\quad\mathcal{D}_{[x;n]}p(t)=0.$ Thus it further suffices to determine $x\in\mathbb{C}$ for which $\mathcal{D}_{[x;n]}p(t)=0$ has a non-zero solution $p(t)\in\mathrm{Pol}_{n}[t]$. By separation of variables, any solution to $\mathcal{D}_{[x;n]}f(t)=0$, where $f(t)$ is not necessarily a polynomial, is of the form $f(t)=c\cdot(1+t)^{\frac{n+x}{2}}(1-t)^{\frac{n-x}{2}}$ for some constant $c$. It is clear that $(1+t)^{\frac{n+x}{2}}(1-t)^{\frac{n-x}{2}}$ becomes a polynomial if and only if $x=n-2\ell$ for $\ell=0,1,\ldots,n$. Now Sylvester’s formula (8.2) follows. In order to show (8.4), suppose that $\sum_{k=0}^{\infty}h_{k}(x;n)t^{k}$ is the power series solution of $\mathcal{D}_{[x;n]}f(t)=0$ with $h_{0}(x;n)=1$. Then, by a direct observation, the coefficients $h_{k}(x;n)$ for $k\geq 1$ satisfy the following recurrence relations: $\displaystyle h_{1}(x;n)$ $\displaystyle=x,$ $\displaystyle h_{k+1}(x;n)$ $\displaystyle=\frac{x}{k+1}h_{k}(x;n)+\frac{k-n-1}{k+1}h_{k-1}(x;n).$ It then follows from Lemma 9.5 in Section 9 with (7.11) that each $h_{k}(x;n)$ is given as $\displaystyle h_{k}(x;n)=\small\begin{vmatrix}x&-1&&&&\\\ -\frac{n}{2}&\frac{x}{2}&-1&&&\\\ &-\frac{n-1}{3}&\frac{x}{3}&-1&&\\\ &&\ddots&\ddots&\ddots&\\\ &&&-\frac{n-k+3}{k-1}&\frac{x}{k-1}&-1\\\ &&&&-\frac{n-k+2}{k}&\frac{x}{k}\\\ \end{vmatrix}=\frac{\mathrm{Cay}_{k}(x;n)}{k!}.$ Thus $\sum_{k=0}^{\infty}\frac{\mathrm{Cay}_{k}(x;n)}{k!}t^{k}$ is a solution of $\mathcal{D}_{[x;n]}f(t)=0$; consequently, there exists a constant $c$ such that $c\cdot(1+t)^{\frac{n+x}{2}}(1-t)^{\frac{n-x}{2}}=\sum_{k=0}^{\infty}\frac{\mathrm{Cay}_{k}(x;n)}{k!}t^{k}.$ As both sides are equal to $1$ at $t=0$, we have $c=1$; thus, the identity (8.4) holds for $y=n\in\mathbb{Z}_{\geq 0}$. Since, for each fixed $k$, the coefficient of $t^{k}$ on either side of (8.4) is a polynomial in $x$ and $y$, and the two sides agree for all $y=n\in\mathbb{Z}_{\geq 0}$, the identity holds for all $y\in\mathbb{C}$. ∎ ###### Remark 8.5. The differential operator $d\pi_{n}(E_{+}+E_{-})$ is related to the study of $K$-type decomposition of the space of $K$-finite solutions to intertwining differential operators. In fact, let $X$ be the nilpotent element of $\mathfrak{sl}(2,\mathbb{C})$ defined in (3.1). Then $d\pi_{n}(E_{+}+E_{-})$ is the realization of the differential operator $R(X)$ on the $K$-type $\mathrm{Pol}_{n}[t]$ via the identification $\Omega^{\textnormal{I}}\colon{\mathfrak{k}}\stackrel{{\scriptstyle\sim}}{{\to}}\mathfrak{sl}(2,\mathbb{C})$ in (3.7) (see [27, Sect. 5]).
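As a quick sanity check on Proposition 8.3, the following sketch (in Python/SymPy; the helper `cay` and the sampled parameters are ours, chosen for illustration) verifies the generating function (8.4) coefficient-wise for a few integer pairs $(x,y)$ of equal parity, and confirms Sylvester's factorization (8.2) for small $n$. Here $\mathrm{Cay}_{k}$ is computed by the standard three-term continuant recurrence $\mathrm{Cay}_{k}=x\,\mathrm{Cay}_{k-1}-(k-1)(y-k+2)\,\mathrm{Cay}_{k-2}$ of its tridiagonal determinant.

```python
from sympy import symbols, expand, factorial, prod, S

x, t = symbols('x t')

def cay(k, xv, yv):
    # Cay_0 = 1, Cay_1 = x; Cay_k = x*Cay_{k-1} - (k-1)*(y-k+2)*Cay_{k-2}
    d_prev, d = S(1), xv
    if k == 0:
        return S(1)
    for m in range(2, k + 1):
        d_prev, d = d, expand(xv*d - (m - 1)*(yv - m + 2)*d_prev)
    return d

# (8.4): the coefficient of t^k in (1+t)^{(y+x)/2}(1-t)^{(y-x)/2} is Cay_k(x;y)/k!
for x0, y0 in [(1, 3), (2, 6), (-1, 5)]:     # equal parity => integer exponents
    f = expand((1 + t)**((y0 + x0)//2) * (1 - t)**((y0 - x0)//2))
    for k in range(6):
        assert f.coeff(t, k) == cay(k, S(x0), S(y0))/factorial(k)

# (8.2): Cay_{n+1}(x;n) = prod_{l=0}^{n} (x - n + 2l)
for n in range(1, 6):
    assert expand(cay(n + 1, x, n)) == expand(prod(x - n + 2*l for l in range(n + 1)))

print("checks passed")
```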
We write $g^{C}_{[x;n]}(t):=(1+t)^{\frac{n+x}{2}}(1-t)^{\frac{n-x}{2}}$ and set $\mathcal{S}\mathrm{ol}_{k}(\mathrm{Cay};n):=\\{s\in\mathbb{C}:\mathrm{Cay}_{k}(s;n)=0\\}.$ It follows from (8.2) (or, equivalently, from (8.4)) that $\mathcal{S}\mathrm{ol}_{n+1}(\mathrm{Cay};n)=\\{n-2\ell:\ell=0,1,\dots,n\\}.$ (8.6) ###### Corollary 8.7. The following conditions on $s\in\mathbb{C}$ are equivalent. 1. (i) $g^{C}_{[s;n]}(t)\in\mathrm{Pol}_{n}[t]$. 2. (ii) $s\in\mathcal{S}\mathrm{ol}_{n+1}(\mathrm{Cay};n)$. ###### Proof. A direct consequence of Proposition 8.3. ∎ ### 8.2. Palindromic property for $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ Now we show the palindromic property and factorial identity for the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$. ###### Theorem 8.8 (Palindromic property of $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$ and factorial identity of $\mathrm{Cay}_{n}(x;n)$). The pair $(\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty},\\{k!\\}_{k=0}^{\infty})$ is a palindromic pair with degree $n$. Namely, let $n\in\mathbb{Z}_{\geq 0}$ and $s\in\mathcal{S}\mathrm{ol}_{n+1}(\mathrm{Cay};n)$. Then we have $\mathrm{Cay}_{k}(s;n)=0$ for $k\geq n+1$ and $\frac{\mathrm{Cay}_{k}(s;n)}{k!}=(-1)^{\frac{n-s}{2}}\frac{\mathrm{Cay}_{n-k}(s;n)}{(n-k)!}\quad\emph{for $k\leq n$}.$ In particular, we have $\mathrm{Cay}_{n}(s;n)=(-1)^{\frac{n-s}{2}}n!\quad\emph{for $s\in\mathcal{S}\mathrm{ol}_{n+1}(\mathrm{Cay};n)$}.$ (8.9) ###### Proof. Take $s\in\mathcal{S}\mathrm{ol}_{n+1}(\mathrm{Cay};n)$. It follows from Corollary 8.7 that, for such $s$, we have $g^{C}_{[s;n]}(t)=\sum_{k=0}^{n}\frac{\mathrm{Cay}_{k}(s;n)}{k!}t^{k},$ (8.10) namely, $\mathrm{Cay}_{k}(s;n)=0$ for $k\geq n+1$. Further, (8.10) shows that $t^{n}g^{C}_{[s;n]}\left(\frac{1}{t}\right)$ is given as $\displaystyle t^{n}g^{C}_{[s;n]}\left(\frac{1}{t}\right)=\sum_{k=0}^{n}\frac{\mathrm{Cay}_{n-k}(s;n)}{(n-k)!}t^{k}.$ On the other hand, as $g^{C}_{[x;n]}(t)=(1+t)^{\frac{n+x}{2}}(1-t)^{\frac{n-x}{2}}$, we also have $t^{n}g^{C}_{[s;n]}\left(\frac{1}{t}\right)=(-1)^{\frac{n-s}{2}}g^{C}_{[s;n]}(t).$ Therefore, $\sum_{k=0}^{n}\frac{\mathrm{Cay}_{n-k}(s;n)}{(n-k)!}t^{k}=(-1)^{\frac{n-s}{2}}\sum_{k=0}^{n}\frac{\mathrm{Cay}_{k}(s;n)}{k!}t^{k},$ which yields $\frac{\mathrm{Cay}_{k}(s;n)}{k!}=(-1)^{\frac{n-s}{2}}\frac{\mathrm{Cay}_{n-k}(s;n)}{(n-k)!}\quad\text{for all $k\leq n$}.$ The factorial identity (8.9) is the case $k=n$. This concludes the theorem. ∎ We close this section by giving a proof of Corollary 1.30 in the introduction. ###### Proof of Corollary 1.30. As $\mathrm{Cay}_{n+1}(x;n)=\mathrm{Sylv}(x;n)$, it follows from Sylvester’s factorization formula (1.22) or (7.26) that $\displaystyle\mathcal{S}\mathrm{ol}_{n+1}(\mathrm{Cay};n)$ $\displaystyle=\\{s\in\mathbb{C}:\mathrm{Sylv}(s;n)=0\\}$ $\displaystyle=\begin{cases}\\{0,\pm 2,\pm 4,\ldots,\pm n\\}&\text{if $n$ is even},\\\ \\{\pm 1,\pm 3,\pm 5,\ldots,\pm n\\}&\text{if $n$ is odd}.\end{cases}$ Now the proposed identities are direct consequences of (8.9). ∎ ### 8.3. Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ From the palindromic property of the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$, one may deduce that of a certain sequence $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ of polynomials, where the polynomials $\mathcal{K}_{k}(x;n)$ for $y=n\in\mathbb{Z}_{\geq 0}$ are Krawtchouk polynomials in the sense of [28, p. 130] (equivalently, the case of $p=2$ for [28, p. 151]).
We close this section by discussing the palindromic property of $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$. We start with the definition of $\mathcal{K}_{k}(x;y)$. ###### Definition 8.11. For $k\in\mathbb{Z}_{\geq 0}$, we define a polynomial $\mathcal{K}_{k}(x;y)$ of $x$ and $y$ as $\mathcal{K}_{k}(x;y):=\sum_{j=0}^{k}(-1)^{j}\binom{x}{j}\binom{y-x}{k-j},$ where the binomial coefficient $\binom{a}{m}$ is defined as $\binom{a}{m}=\begin{cases}\frac{a(a-1)\cdots(a-m+1)}{m!}&\text{if $m\in 1+\mathbb{Z}_{\geq 0}$},\\\ 1&\text{if $m=0$},\\\ 0&\text{otherwise}.\end{cases}$ For $y=n\in\mathbb{Z}_{\geq 0}$, the polynomials $\mathcal{K}_{k}(x;n)$ are called Krawtchouk polynomials ([28, p. 130]). In this paper, we also call the polynomials $\mathcal{K}_{k}(x;y)$ of two variables Krawtchouk polynomials. ###### Remark 8.12. Symmetric Krawtchouk polynomials (see, for instance, [24, p. 237]) are used to show Sylvester’s factorization formula (8.2) in [1]. We remark that the definition of the Krawtchouk polynomial $\mathcal{K}_{k}(x;n)$ in this paper is different from the one in the cited paper; in particular, $\mathcal{K}_{k}(x;n)$ is non-symmetric. It readily follows from Definition 8.11 that the Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ have the following generating function: $(1+t)^{y-x}(1-t)^{x}=\sum_{k=0}^{\infty}\mathcal{K}_{k}(x;y)t^{k}.$ (8.13) By comparing (8.13) with the generating function (8.4) of the Cayley continuants $\\{\mathrm{Cay}_{k}(x;y)\\}_{k=0}^{\infty}$, we have $\mathcal{K}_{k}(x;y)=\frac{\mathrm{Cay}_{k}(y-2x;y)}{k!}.$ (8.14) Then $\mathrm{Cay}_{k}(x;y)$ may be expressed explicitly as follows. Here we remark that in Proposition 8.15 below we use the notation $(m)^{\overline{j}}:=m(m+1)\cdots(m+j-1)$ for the raising factorial (shifted factorial) instead of $(m,j)$ in (5.14) to make the comparison with the falling factorial $(m)^{\underline{j}}:=m(m-1)\cdots(m-j+1)$ clearer. ###### Proposition 8.15. We have $\mathrm{Cay}_{k}(x;y)=\sum_{j=0}^{k}\binom{k}{j}\left(\frac{x+y}{2}\right)^{\underline{j}}\left(\frac{x-y}{2}\right)^{\overline{k-j}}.$ (8.16) ###### Proof. By (8.4) and (8.13), we have $\displaystyle\mathrm{Cay}_{k}(x;y)=(k!)\cdot\mathcal{K}_{k}(\frac{y-x}{2};y)=(k!)\cdot\sum_{j=0}^{k}(-1)^{j}\binom{\frac{y-x}{2}}{j}\binom{\frac{y+x}{2}}{k-j}.$ As $(-1)^{j}(m)^{\underline{j}}=(-m)^{\overline{j}}$, replacing $j$ by $k-j$ concludes the proposed identity. ∎ ###### Remark 8.17. The identity (8.16) is also shown in [31, p. 356] slightly differently. Via the identity (8.13), we are now going to give the factorization formula, the palindromic property, and the factorial identity for the Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$. ###### Proposition 8.18 (Factorization formula of $\mathcal{K}_{n+1}(x;n)$). For given $n\in\mathbb{Z}_{\geq 0}$, we have $\mathcal{K}_{n+1}(x;n)=\frac{(-2)^{n+1}}{(n+1)!}\prod_{\ell=0}^{n}(x-\ell)=(-2)^{n+1}\binom{x}{n+1}.$ ###### Proof. The proposed identity simply follows from (8.14) and Sylvester’s factorization formula (8.2): indeed, $\mathcal{K}_{n+1}(x;n)=\frac{\mathrm{Cay}_{n+1}(n-2x;n)}{(n+1)!}=\frac{1}{(n+1)!}\prod_{\ell=0}^{n}(2\ell-2x)=\frac{(-2)^{n+1}}{(n+1)!}\prod_{\ell=0}^{n}(x-\ell).$ ∎ We next show the palindromic property and factorial identity for Krawtchouk polynomials $\\{\mathcal{K}_{k}(x;y)\\}$. We denote by $\\{1\\}_{k=0}^{\infty}$ the sequence $\\{a_{k}\\}_{k=0}^{\infty}$ such that $a_{k}=1$ for all $k$. ###### Theorem 8.19 (Palindromic property of $\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty}$ and factorial identity of $\mathcal{K}_{n}(x;n)$). The pair $(\\{\mathcal{K}_{k}(x;y)\\}_{k=0}^{\infty},\\{1\\}_{k=0}^{\infty})$ is a palindromic pair with degree $n$.
Namely, let $n\in\mathbb{Z}_{\geq 0}$ and $s\in\mathcal{S}\mathrm{ol}_{n+1}(\mathcal{K};n)$. Then we have $\mathcal{K}_{k}(s;n)=0$ for $k\geq n+1$ and $\mathcal{K}_{k}(s;n)=(-1)^{s}\mathcal{K}_{n-k}(s;n)\quad\emph{for $k\leq n$}.$ In particular, we have $\mathcal{K}_{n}(s;n)=(-1)^{s}\quad\emph{for $s\in\mathcal{S}\mathrm{ol}_{n+1}(\mathcal{K};n)$}.$ (8.20) ###### Proof. The proof is the same as that of Theorem 8.8; we omit it. ∎ We close this section by giving a proof of Corollary 1.34 in the introduction. ###### Proof of Corollary 1.34. It follows from Proposition 8.18 that $\displaystyle\mathcal{S}\mathrm{ol}_{n+1}(\mathcal{K};n)$ $\displaystyle=\\{s\in\mathbb{C}:\mathcal{K}_{n+1}(s;n)=0\\}$ $\displaystyle=\\{0,1,2,\dots,n\\}.$ Then (8.20) concludes the proposed identity. ∎ ## 9\. Appendix A: local Heun functions In this appendix we collect several facts and lemmas for the local Heun function $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$ at $z=0$ that are used in the main part of this paper. ### 9.1. General facts As in (4.30), we set $\displaystyle\mathcal{D}_{H}(a,q;\alpha,\beta,\gamma,\delta;z):=\frac{d^{2}}{dz^{2}}+\left(\frac{\gamma}{z}+\frac{\delta}{z-1}+\frac{\varepsilon}{z-a}\right)\frac{d}{dz}+\frac{\alpha\beta z-q}{z(z-1)(z-a)}$ with $\gamma+\delta+\varepsilon=\alpha+\beta+1$. Then the equation $\mathcal{D}_{H}(a,q;\alpha,\beta,\gamma,\delta;z)f(z)=0$ is called _Heun’s differential equation_. The $P$-symbol is $P\left\\{\begin{matrix}0&1&a&\infty&&\\\ 0&0&0&\alpha&z&q\\\ 1-\gamma&1-\delta&1-\varepsilon&\beta&&\end{matrix}\right\\}.$ Let $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$ stand for the local Heun function at $z=0$ ([38]). As in [38], we normalize $Hl(a,q;\alpha,\beta,\gamma,\delta;z)$ so that $Hl(a,q;\alpha,\beta,\gamma,\delta;0)=1$. It is known that, for $\gamma\notin\mathbb{Z}$, the functions $Hl(a,q;\alpha,\beta,\gamma,\delta;z)\quad\text{and}\quad z^{1-\gamma}Hl(a,q^{\prime};\alpha-\gamma+1,\beta-\gamma+1,2-\gamma,\delta;z)$ (9.1) with $q^{\prime}:=q+(1-\gamma)(\varepsilon+a\delta)$ are two linearly independent solutions at $z=0$ to the Heun equation $\mathcal{D}_{H}(a,q;\alpha,\beta,\gamma,\delta;z)f(z)=0$. (See, for instance, [29] and [34, p. 99].) ###### Remark 9.2. In [34, p. 99], there seems to be a typographical error in the formula $\lambda^{\prime}=\lambda+(a+b+1-c)(1-c).$ This should read as $\lambda^{\prime}=\lambda+(a+b+1-c+(t-1)d)(1-c).$ Let $Hl(a,q;\alpha,\beta,\gamma,\delta;z)=\sum_{k=0}^{\infty}c_{k}z^{k}$ be the power series expansion at $z=0$. In our normalization we have $c_{0}=1$. Then $c_{k}$ for $k\geq 1$ satisfy the following recurrence relations (see, for instance, [38, p. 34]): $\displaystyle-q+a\gamma c_{1}$ $\displaystyle=0,$ (9.3) $\displaystyle P_{k}c_{k-1}-(Q_{k}+q)c_{k}+R_{k}c_{k+1}$ $\displaystyle=0,$ (9.4) where $\displaystyle P_{k}$ $\displaystyle=(k-1+\alpha)(k-1+\beta),$ $\displaystyle Q_{k}$ $\displaystyle=k[(k-1+\gamma)(1+a)+a\delta+\varepsilon],$ $\displaystyle R_{k}$ $\displaystyle=(k+1)(k+\gamma)a.$ Lemma 9.5 below shows that each coefficient $c_{k}$ for $Hl(a,q;\alpha,\beta,\gamma,\delta;z)=\sum_{k=0}^{\infty}c_{k}z^{k}$ has a determinant representation. ###### Lemma 9.5 (see, for instance, [26, Lem. B.1]). Let $\\{a_{k}\\}_{k\in\mathbb{Z}_{\geq 0}}$ be a sequence with $a_{0}=1$ generated by the following relations: * • $a_{1}=A_{0}$; * • $a_{k+1}=A_{k}a_{k}+B_{k}a_{k-1}$ $(k\geq 1)$ for some $A_{0}$, $A_{k}$, $B_{k}\in\mathbb{C}$.
Then $a_{k}$ can be expressed as $a_{k}=\small\begin{vmatrix}A_{0}&-1&&&&\\\ B_{1}&A_{1}&-1&&&\\\ &B_{2}&A_{2}&-1&&&\\\ &&\dots&\dots&\dots&&\\\ &&&B_{k-2}&A_{k-2}&-1\\\ &&&&B_{k-1}&A_{k-1}\\\ \end{vmatrix}.$ ### 9.2. The local solutions $u_{[s;n]}(t)$ and $v_{[s;n]}(t)$ Now we consider the following local solutions to (4.33). $\displaystyle u_{[s;n]}(t)=Hl(-1,-\frac{ns}{4};-\frac{n}{2},-\frac{n-1}{2},\frac{1}{2},\frac{1-n-s}{2};t^{2}),$ (9.6) $\displaystyle v_{[s;n]}(t)=tHl(-1,-\frac{n-2}{4}s;-\frac{n-1}{2},-\frac{n-2}{2},\frac{3}{2},\frac{1-n-s}{2};t^{2}).$ (9.7) ###### Lemma 9.8. Let $u_{[s;n]}(t)=\sum_{k=0}^{\infty}U_{k}(s;n)t^{2k}$ be the power series expansion of $u_{[s;n]}(t)$ at $t=0$. Then the coefficients $U_{k}(s;n)$ for $k\geq 1$ satisfy the following recurrence relations. $\displaystyle U_{1}(s;n)$ $\displaystyle=E^{u}_{0},$ $\displaystyle U_{k+1}(s;n)$ $\displaystyle=E^{u}_{k}U_{k}(s;n)+F^{u}_{k}U_{k-1}(s;n),$ where $E^{u}_{0}=\frac{ns}{2},\quad E^{u}_{k}=\frac{(n-4k)s}{(2k+1)(2k+2)}\quad\text{and}\quad F^{u}_{k}=\frac{(n-2k+1)(n-2k+2)}{(2k+1)(2k+2)}.$ (9.9) ###### Proof. This follows from (9.3) and (9.4) for the specific parameters in (9.6). ∎ ###### Lemma 9.10. Let $v_{[s;n]}(t)=\sum_{k=0}^{\infty}V_{k}(s;n)t^{2k+1}$ be the power series expansion of $v_{[s;n]}(t)$ at $t=0$. Then the coefficients $V_{k}(s;n)$ $(k\geq 1)$ satisfy the following recurrence relations. $\displaystyle V_{1}(s;n)$ $\displaystyle=E^{v}_{0},$ $\displaystyle V_{k+1}(s;n)$ $\displaystyle=E^{v}_{k}V_{k}(s;n)+F^{v}_{k}V_{k-1}(s;n),$ where $E^{v}_{0}=\frac{(n-2)s}{6},\quad E^{v}_{k}=\frac{(n-4k-2)s}{(2k+2)(2k+3)}\quad\text{and}\quad F^{v}_{k}=\frac{(n-2k)(n-2k+1)}{(2k+2)(2k+3)}.$ (9.11) ###### Proof. This follows from (9.3) and (9.4) for the specific parameters in (9.7). ∎ It follows from Lemmas 9.8, 9.10, and 9.5 that $U_{k}(s;n)$ and $V_{k}(s;n)$ have determinant representations $U_{k}(s;n)=\small\begin{vmatrix}E^{u}_{0}&-1&&&&\\\ F^{u}_{1}&E^{u}_{1}&-1&&&\\\ &F^{u}_{2}&E^{u}_{2}&-1&&&\\\ &&\dots&\dots&\dots&&\\\ &&&F^{u}_{k-2}&E^{u}_{k-2}&-1\\\ &&&&F^{u}_{k-1}&E^{u}_{k-1}\\\ \end{vmatrix}$ (9.12) and $V_{k}(s;n)=\small\begin{vmatrix}E^{v}_{0}&-1&&&&\\\ F^{v}_{1}&E^{v}_{1}&-1&&&\\\ &F^{v}_{2}&E^{v}_{2}&-1&&&\\\ &&\dots&\dots&\dots&&\\\ &&&F^{v}_{k-2}&E^{v}_{k-2}&-1\\\ &&&&F^{v}_{k-1}&E^{v}_{k-1}\\\ \end{vmatrix}.$ (9.13) ## 10\. Appendix B: A proof of Sylvester’s formula In this short appendix we provide a proof of Sylvester’s factorization formula $\mathrm{Sylv}(x;n)=\prod_{\ell=0}^{n}(x-n+2\ell),$ (10.1) based on the general theory of $\mathfrak{sl}(2,\mathbb{C})$ representations. For other proofs and related topics, see the remark after Theorem 7.22 and Proposition 8.3. To make this appendix accessible to a wide audience, we shall discuss the proof in two ways: one is abstract in nature (Section 10.1), and the other uses a concrete realization of irreducible representations (Section 10.2). We remark that the arguments in this appendix can be thought of as a baby case of the one discussed in Section 4.2. ### 10.1. Sylvester determinant $\mathrm{Sylv}(x;n)$ and $\mathfrak{sl}(2,\mathbb{C})$ representations I Let $E_{+}$, $E_{-}$, and $E_{0}$ be the elements of $\mathfrak{sl}(2,\mathbb{C})$ defined in (3.2), and let $(d\rho_{n},V_{n})$ be an irreducible representation of $\mathfrak{sl}(2,\mathbb{C})$ with $\dim_{\mathbb{C}}V_{n}=n+1$.
Then there exists an ordered basis $\mathcal{B}$ of $V_{n}$ such that the matrix $d\rho_{n}(E_{+}+E_{-})_{\mathcal{B}}$ with respect to $\mathcal{B}$ is given as $d\rho_{n}(E_{+}+E_{-})_{\mathcal{B}}=\small\begin{pmatrix}0&-1&&&&&\\\ -n&0&-2&&&&\\\ &-(n-1)&0&-3&&&\\\ &&\dots&\dots&\dots&&\\\ &&&-3&0&-(n-1)&\\\ &&&&-2&0&-n\\\ &&&&&-1&0\end{pmatrix}\\\ $ (10.2) (cf. [14, Sect. 7.2]). Then the Sylvester determinant $\mathrm{Sylv}(x;n)$ (see (7.25)) is the characteristic polynomial of $d\rho_{n}(E_{+}+E_{-})_{\mathcal{B}}$. As the leading coefficient of $\mathrm{Sylv}(x;n)$ is one, this implies that $\mathrm{Sylv}(x;n)=\prod_{\lambda\in\mathrm{Spec}(d\rho_{n}(E_{+}+E_{-}))}(x-\lambda),$ (10.3) where $\mathrm{Spec}(T)$ denotes the set of eigenvalues of a linear map $T$. Thus, to show (10.1), it suffices to find the eigenvalues of $d\rho_{n}(E_{+}+E_{-})$. Since $E_{+}+E_{-}$ is conjugate to $E_{0}$ via an element of $SU(2)$ (cf. [22, Thm. 4.34]), it is further sufficient to determine $\mathrm{Spec}(d\rho_{n}(E_{0}))$. It is well-known that the eigenvalues of $d\rho_{n}(E_{0})$ are $n-2j$ for $j=0,1,2,\ldots,n-1,n$ (cf. [14, Sect. 7.2]). Therefore we have $\mathrm{Spec}(d\rho_{n}(E_{+}+E_{-}))=\\{n-2j:j=0,1,2,\ldots,n-1,n\\}.$ Now the desired factorization formula (10.1) follows from (10.3). ### 10.2. Sylvester determinant $\mathrm{Sylv}(x;n)$ and $\mathfrak{sl}(2,\mathbb{C})$ representations II We now give some details of the arguments given in Section 10.1 with a concrete realization of irreducible $\mathfrak{sl}(2,\mathbb{C})$ representations. Let $(\pi_{n},\mathrm{Pol}_{n}[t])$ be the irreducible representation of $SU(2)$ defined in (3.15) and $(d\pi_{n},\mathrm{Pol}_{n}[t])$ the corresponding irreducible representation of $\mathfrak{sl}(2,\mathbb{C})$, so that $d\pi_{n}(E_{j})$ for $j=+,-,0$ are the differential operators given in (3.17). It is easily checked that the matrix $d\pi_{n}(E_{+}+E_{-})_{\mathcal{B}}$ with respect to the ordered basis $\mathcal{B}:=\\{1,\,t,\,t^{2},\,\ldots,\,t^{n}\\}$ is given as (10.2); therefore, we have $\mathrm{Sylv}(x;n)=\prod_{\lambda\in\mathrm{Spec}(d\pi_{n}(E_{+}+E_{-}))}(x-\lambda).$ (10.4) On the other hand, the matrix $d\pi_{n}(E_{0})_{\mathcal{B}}$ is given as $d\pi_{n}(E_{0})_{\mathcal{B}}=\textnormal{diag}(n,\,n-2,\,n-4,\,\ldots,\,-n+2,\,-n),$ where $\textnormal{diag}(m_{0},m_{1},\ldots,m_{n})$ denotes a diagonal matrix. Thus the eigenvalues of $d\pi_{n}(E_{0})$ on $\mathrm{Pol}_{n}[t]$ are $n-2j$ for $j=0,1,2,\ldots,n-1,n$. For $g_{0}:=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\\ -1&1\end{pmatrix}\in SU(2),$ we have $\mathrm{Ad}(g_{0})(E_{+}+E_{-})=E_{0}.$ Thus, $d\pi_{n}(E_{0})=\pi_{n}(g_{0})d\pi_{n}(E_{+}+E_{-})\pi_{n}(g_{0})^{-1}.$ Therefore, $\mathrm{Spec}(d\pi_{n}(E_{+}+E_{-}))=\\{n-2j:j=0,1,2,\ldots,n-1,n\\}.$ (10.5) Now (10.4) and (10.5) conclude the desired formula. Acknowledgements. Part of this research was conducted during a visit by the first author to the Department of Mathematics at Aarhus University. He is appreciative of their support and warm hospitality during his stay. The authors are grateful to Anthony Kable for his suggestion to study the $K$-type formulas for the Heisenberg ultrahyperbolic operator. They would also like to express their gratitude to Hiroyuki Ochiai for the helpful discussions on the hypergeometric equation and the Heun equation, especially regarding Lemma 4.32. Their deep appreciation also goes to Hiroyoshi Tamori for sending his Ph.D. dissertation, which inspired the idea of the hypergeometric model.
The first author was partially supported by JSPS Grant-in-Aid for Young Scientists (JP18K13432). ## References * [1] R. Askey, _Evaluation of Sylvester type determinants using orthogonal polynomials_, Advances in Analysis, World Sci. Publ., 2005, pp. 1–16. * [2] L. Barchini, A.C. Kable, and R. Zierau, _Conformally invariant systems of differential equations and prehomogeneous vector spaces of Heisenberg parabolic type_, Publ. Res. Inst. Math. Sci. 44 (2008), no. 3, 749–835. * [3] by same author, _Conformally invariant systems of differential operators_, Adv. Math. 221 (2009), no. 3, 788–811. * [4] A. Cayley, _On the determination of the value of a certain determinant_, Quart. J. Math. II (1858), 163–166, The collected mathematical papers of Arthur Cayley, VOL. III, pp. 120–123. * [5] P.A. Clement, _A class of triple-diagonal matrices for test purposes_, SIAM Rev. 1 (1959), 50–52. * [6] D.H. Collingwood and B. Shelton, _A duality theorem for extensions of induced highest weight modules_, Pacific J. Math. 146 (1990), no. 2, 227–237. * [7] C.M. da Fonseca and E. Kiliç, _An observation on the determinant of a Sylvester-Kac type matrix_, An. Ştiinţ. Univ. “Ovidius” Constanţa Ser. Mat. 28 (2020), 111–115. * [8] C.M. da Fonseca, D.A. Mazilu, I. Mazilu, and H.T. Williams, _The eigenpairs of a Sylvester-Kac type matrix associated with a simple model for one-dimensional deposition and evaporation_, Appl. Math. Lett. 26 (2013), 1206–1211. * [9] T.T. Dahl, _Knapp-Stein Operators and Fourier Transformations_, Ph.D. thesis, Aarhus University, 2019. * [10] A. Edelman and E. Kostlan, _The road from Kac’s matrix to Kac’s random polynomials_, in: J. Lewis (Ed.), Proc. of the Fifth SIAM Conf. on Applied Linear Algebra (Philadelphia), SIAM, 1994, pp. 503–507. * [11] J. Frahm, _Conformally invariant differential operators on Heisenberg groups and minimal representations_, preprint (2020), arXiv:2012.05952. * [12] N. Gogin and M. Hirvensalo, _On the generating function of discrete Chebyshev polynomials_, Zap. Nauchn. Sem. POMI 448 (2016), 124–134. * [13] O. Holtz, _Evaluation of Sylvester type determinants using block-triangularization_, Advances in Analysis, World Sci. Publ., 2005, pp. 395–405. * [14] J.E. Humphreys, _Introduction to Lie algebras and representation theory_, Grad. Texts in Math., vol. 9, Springer-Verlag, New York, 1972. * [15] A.C. Kable, _$K$-finite solutions to conformally invariant systems of differential equations_, Tohoku Math. J. 63 (2011), no. 4, 539–559, Centennial Issue. * [16] by same author, _Conformally invariant systems of differential equations on flag manifolds for $G_{2}$ and their K-finite solutions_, J. Lie Theory 22 (2012), no. 1, 93–136. * [17] by same author, _The Heisenberg ultrahyperbolic equation: The basic solutions as distributions_, Pacific J. Math. 258 (2012), no. 1, 165–197. * [18] by same author, _The Heisenberg ultrahyperbolic equation: $K$-finite and polynomial solutions_, Kyoto J. Math. 52 (2012), no. 4, 839–894. * [19] by same author, _On certain conformally invariant systems of differential equations II: Further study of type A systems_, Tsukuba J. Math. 39 (2015), 39–81. * [20] by same author, _The structure of the space of polynomial solutions to the canonical central systems of differential equations on the block Heisenberg groups: A generalization of a theorem of Korányi_, Tohoku Math. J. 70 (2018), 523–545. * [21] M. Kac, _Random walk and the theory of Brownian motion_, Amer. Math. Monthly 54 (1947), 369–390. * [22] A.W.
Knapp, _Lie groups beyond an introduction, second edition_, Progr. Math., vol. 140, Birkhäuser Boston, Boston, MA, 2002. * [23] T. Kobayashi and M. Pevzner, _Differential symmetry breaking operators. I. General theory and F-method_, Selecta Math. (N.S.) 22 (2016), 801–845. * [24] R. Koekoek, P.A. Lesky, and R.F. Swarttouw, _Hypergeometric Orthogonal Polynomials and Their $q$-Analogues_, Springer Monographs in Mathematics, Springer-Verlag, Berlin Heidelberg, 2010. * [25] B. Kostant, _Verma modules and the existence of quasi-invariant differential operators_, Noncommutative harmonic analysis, Lecture Notes in Math., vol. 466, Springer, 1975, pp. 101–128. * [26] G. Kristensson, _Second order differential equations: Special functions and their classification_, Springer, New York, 2010. * [27] T. Kubo and B. Ørsted, _On the space of $K$-finite solutions to intertwining differential operators_, Represent. Theory 23 (2019), 213–248. * [28] F.J. MacWilliams and N.J.A. Sloane, _The theory of error-correcting codes_, North-Holland Mathematical Library, vol. 16, North-Holland Publishing Co., Amsterdam, 1977. * [29] R.S. Maier, _On reducing the Heun equation to the hypergeometric equation_, J. Differential Equations 213 (2005), 171–203. * [30] T. Muir, _The Theory of Determinants in the Historical Order of Development_, vol. II, 1911; reprinted by University of Michigan Library, 2006. * [31] E. Munarini and D. Torri, _Cayley continuants_, Theoret. Comput. Sci. 347 (2005), 353–369. * [32] K. Nomura and P. Terwilliger, _Krawtchouk polynomials, the Lie algebra $\mathfrak{sl}_{2}$, and Leonard pairs_, Linear Algebra Appl. 437 (2012), 345–375. * [33] J. Rawnsley and S. Sternberg, _On representations associated to the minimal nilpotent coadjoint orbit of ${SL}(3,\mathbf{R})$_, Amer. J. Math. 104 (1982), no. 6, 1153–1180. * [34] S.Y. Slavyanov and W. Lay, _Special functions. A unified theory based on singularities_, Oxford Mathematical Monographs, Oxford Science Publications, Oxford University Press, Oxford, 2000, With a foreword by Alfred Seeger. * [35] J.J. Sylvester, _Théorème sur les déterminants de M. Sylvester_, Nouvelles annales de mathématiques: journal des candidats aux écoles polytechnique et normale, 1e série 13 (1854), 305. * [36] H. Tamori, _Classification and Construction of minimal representations_, Ph.D. thesis, The University of Tokyo, 2020. * [37] O. Taussky and J. Todd, _Another look at a matrix of Mark Kac_, Linear Algebra Appl. 150 (1991), 341–360. * [38] A. Ronveaux (ed.), _Heun’s differential equations_, Oxford Science Publications, Oxford University Press, Oxford, 1995, With contributions by F.M. Arscott, S. Yu. Slavyanov, D. Schmidt, G. Wolf, P. Maroni, and A. Duval. * [39] P. Torasso, _Quantification géométrique, opérateurs d’entrelacement et représentations unitaires de $(\widetilde{SL})_{3}(\mathbf{R})$_, Acta Math. 150 (1983), no. 3-4, 153–242. * [40] R. Vidunas and G. Filipuk, _Parametric transformations between the Heun and Gauss hypergeometric functions_, Funkcial. Ekvac. 56 (2013), 271–321. * [41] by same author, _A classification of coverings yielding Heun-to-hypergeometric reductions_, Osaka J. Math. 51 (2014), 867–903.
# Sparse Array Beamformer Design for Active and Passive Sensing

by Syed Ali Hamza, College of Engineering, Villanova University. DOCTORATE OF PHILOSOPHY. Advisor: Moeness Amin ## Copyright © 2020, Syed Ali Hamza All Rights Reserved ## STATEMENT BY AUTHOR This dissertation has been submitted in partial fulfillment of the requirements for an advanced degree at Villanova University. Brief quotations from this dissertation are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Associate Dean for Graduate Studies and Research of the College of Engineering when in his or her judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author. ## ACKNOWLEDGEMENTS This dissertation is the result of my research at Villanova University. The project described in this thesis was supported in part by the National Science Foundation and the Office of Naval Research. Its contents are solely the responsibility of the author and do not necessarily represent the official views of the sponsors. First and foremost, I would like to thank my family. My parents and my siblings made everything seem so easy, and generously supported me along the way. A very special gratitude goes out to my friends who made sure I had a life outside my research. Many people whom I cannot mention individually deserve thanks and appreciation for their support in the preparation of my dissertation. I would like to express my gratitude to my advisor, Prof. Moeness Amin, who provided me an opportunity to work on timely and exciting topics. It would not have been possible without his unfettered support, guidance and encouragement. Besides my advisor, I would like to thank Prof. Aboulnasr Hassanien for sharing his experiences and providing insightful comments during my research. Special gratitude goes to the rest of my thesis committee, Prof. Bijan Mobasseri and Prof. Ahmad Hoorfar, for their invaluable support. Also, I am grateful to Janice Moughan and to the university staff at the International Student Office for their help in managing administrative tasks, which are always complicated for me. I thank my fellow labmates for the stimulating discussions inside and outside the lab. Thanks to everyone who supported me. ## DEDICATION This dissertation is lovingly dedicated to my parents and siblings, Maryam, Samrat and Usama! ###### TABLE OF CONTENTS 1. STATEMENT BY AUTHOR 2. ACKNOWLEDGEMENTS 3. DEDICATION 4. NOMENCLATURE 5. ABSTRACT 6. 1 INTRODUCTION AND MOTIVATION 1. 1.1 Sparse Arrays 2. 1.2 Main Contributions and Thesis Organization 3. 1.3 List of publications 1. 1.3.1 Journal articles 2. 1.3.2 Conference papers 7. 2 Optimum Sparse Array Design for Maximizing Signal-to-Noise Ratio in Presence of Local Scatterings 1. 2.1 Introduction 2. 2.2 Problem Formulation 3. 2.3 Gaussian and circular scattering models 4. 2.4 Optimum sparse array design 5. 2.5 Simulations 1. 2.5.1 Gaussian model 2. 2.5.2 Circular model 6. 2.6 Conclusion 7. 2.7 Appendix 8. 3 Hybrid Sparse Array Beamforming Design for General Rank Signal Models 1. 3.1 Introduction 2. 3.2 Problem Formulation 3. 3.3 Optimum sparse array design 1. 3.3.1 Fair gain beamforming 2. 3.3.2 Modified re-weighting for fully augmentable hybrid array 3. 3.3.3 Symmetric arrays 4.
3.4 Simulations 1. 3.4.1 Single point source 2. 3.4.2 Multiple point sources 3. 3.4.3 Fully augmentable linear arrays 4. 3.4.4 Fully augmentable 2D arrays 5. 3.5 Conclusion 6. 3.6 Appendix 1. 3.6.1 Proof of the Conjugate symmetric property of optimal weight vector 9. 4 Sparse Array Design for Maximizing the Signal-to-Interference-plus-Noise-Ratio by Matrix Completion 1. 4.1 Introduction 2. 4.2 Problem Formulation 3. 4.3 Sparse array design through SCA algorithm 1. 4.3.1 Hybrid sparse array design 4. 4.4 Toeplitz matrix completion and Fully augmentable completion through averaging 5. 4.5 Simulations 1. 4.5.1 Example comparing both designs 2. 4.5.2 Monte Carlo design for random scenarios 6. 4.6 Conclusion 10. 5 Sparse Array Beamforming Design for Wideband Signal Models 1. 5.1 Introduction 2. 5.2 Problem Formulation 1. 5.2.1 TDL Implementation scheme 2. 5.2.2 DFT Implementation scheme 3. 5.3 Optimum sparse array design 1. 5.3.1 Semidefinite relaxation (SDR) for sparse solution 2. 5.3.2 Successive convex approximation (SCA) 4. 5.4 Sparse matrix completion of block Toeplitz matrices 5. 5.5 Simulations 1. 5.5.1 Example 1 2. 5.5.2 Example 2 3. 5.5.3 Comparison of SDR and SCA under both models 4. 5.5.4 Practical Considerations for sparse array design 6. 5.6 Conclusion 11. 6 Sparse Array Design for Transmit Beamforming 1. 6.1 Introduction 2. 6.2 Problem Formulation 3. 6.3 Sparse array design 1. 6.3.1 Sparse solution through SDR 2. 6.3.2 Reweighting sparsity 3. 6.3.3 Transmit power constraint 4. 6.4 Simulations 1. 6.4.1 Example 5. 6.5 Conclusion 12. 7 Sparse Array Capon Beamformer Design Availing Deep Learning Approach 1. 7.1 Introduction 2. 7.2 Problem Formulation 3. 7.3 Optimum sparse array design 1. 7.3.1 Sparse beamformer spectral analysis (SBSA) design 2. 7.3.2 DNN based learning of the SBSA and enumerated designs 4. 7.4 Simulations 1. 7.4.1 Enumerated design 2. 7.4.2 DNN based SBSA design 3. 7.4.3 Robust design 4. 7.4.4 Performance comparisons with state-of-the-art 5. 7.5 Conclusion 13. 8 CONCLUSIONS AND RECOMMENDATIONS ###### LIST OF TABLES 1. 3.1 Proposed algorithm to achieve desired cardinality of optimal weight vector 2. 4.1 SCA for sparse array beamforming. 3. 5.1 SDR for the sparse wideband beamformer 4. 5.2 SCA for the sparse wideband beamformer 5. 6.1 SDR-Transmit Beamformer Design 6. 7.1 SBSA Algorithm ###### LIST OF FIGURES 1. 1.1 (a) Minimum redundancy array (7 elements); (b) Corresponding coarray 2. 1.2 (a) Nested array (11 elements); (b) Corresponding coarray 3. 1.3 Reconfigurable array through sensor switching 4. 2.1 (a) Circular model; (b) Correlation coefficient as a function of lags 5. 2.2 Output SNR comparison for various array configurations; (a) Gaussian model; (b) circular model. Beampattern of optimum array; (c) Gaussian model; (d) circular model. 6. 2.3 (a) Worst sparse array (Gaussian); (b) Second best (Gaussian); (c) Optimum array (circular); (d) Worst sparse array (circular); (e) Nested array; (f) Coprime array 7. 2.4 Output SNR for different arrays vs centre angle. 8. 3.1 Block diagram of adaptive switched sensor beamformer 9. 3.2 Output SINR for different array topologies 10. 3.3 Average Output SINR for different array topologies over $6000$ Monte Carlo trials 11. 3.4 Array configurations obtained for the point source at the array broadside (a) Optimum (Enumeration) (b) NFSDR-approach (c) Perturbed-NFSDR (d) Worst performing array configuration 12.
3.5 (a) Antenna array multiple sources (NFSDR-approach) (b) Fair gain $10$ element antenna array (NFSDR-approach) (c) Hybrid $10$ antenna array for multiple desired sources (FSDR) 13. 3.6 Beampattern for multiple point sources 14. 3.7 (a) $14$ element antenna array (NFSDR-approach) (b) Hybrid $14$ antenna sparse array ($8$ prefixed, $6$ selected through FSDR) (c) Hybrid $14$ antenna sparse array ($8$ prefixed, $6$ selected through FSDR) 15. 3.8 Average Output SINR for different array topologies over $3500$ Monte Carlo trials 16. 3.9 $24$ element antenna sparse array (NFSDR-approach) 17. 3.10 $24$ element hybrid antenna sparse array ($19$ prefixed, $5$ selected through FSDR) 18. 3.11 $24$ element worst performing hybrid antenna sparse array ($19$ prefixed, $5$ selected) 19. 3.12 Beampattern for the antenna array in Fig. 3.10 20. 3.13 Beampattern for a $6\times 4$ compact rectangular array 21. 4.1 Block diagram implementing adaptive beamforming and antenna switching 22. 4.2 (a) Initial configuration; randomly selected 16 antennas from 36 (b) Initial configuration leading to fully augmentable array (c) Freely designed array (d) Hybrid designed array (e) Initial random configuration; selected 16 antennas from 36 (f) Initial configuration leading to fully augmentable array (g) Freely designed array (h) Hybrid designed array (i) Best performing array configuration (j) Worst performing array configuration 23. 4.3 Average SINR performance of various sparse topologies against desired source DOA for $T=100$ snapshots. 24. 4.4 Average SINR performance of various sparse topologies against desired source DOA for $T=250$ snapshots. 25. 4.5 Average SINR performance of various sparse topologies against desired source DOA for $T=1000$ snapshots. 26. 4.6 Sensor switching comparison vs the free-design and the hybrid design. 27. 5.1 Block Diagram of sparse array wideband processing. 28. 5.2 TDL realization of wideband beamforming. 29. 5.3 DFT implementation of wideband beamforming. 30. 5.4 Frequency dependent beampattern for the array configuration recovered through convex relaxation. 31. 5.5 Frequency dependent beampattern for the compact ULA (Fig. 5.6e). 32. 5.6 Example 1 - (a) Optimum array TDL implementation scheme (Enumeration) (b) TDL-SDR (c) TDL-SCA (d) DFT-SDR, DFT-SCA (e) $14$ sensor compact ULA 33. 5.7 Example 2 - (a) Optimum array TDL implementation scheme (Enumeration) (b) TDL-SCA (c) TDL-SDR (d) DFT-SCA (e) DFT-SDR (f) Worst case array (TDL, DFT) 34. 5.8 Performance comparisons of SCA under DFT model. 35. 5.9 Performance comparisons of SCA under TDL model. 36. 5.10 Performance comparisons of SDR under DFT model. 37. 5.11 Performance comparisons of SDR under TDL model. 38. 5.12 Performance comparisons of the optimum sparse array, the worst performing array and the compact ULA (TDL implementation scheme). 39. 6.1 Various array configurations 40. 6.2 Transmit beampattern for the SDR-optimized configuration. 41. 6.3 Transmit beampattern for the nested array configuration. 42. 6.4 Transmit beampattern for the compact ULA. 43. 6.5 Constituent transmit beampatterns for the SDR-optimized configuration. 44. 6.6 Cross-correlation pattern against Target 1 for various sparse configurations. 45. 6.7 Cross-correlation pattern against Target 2 for various sparse configurations. 46. 7.1 Block diagram of adaptive switched sensor beamformer 47. 7.2 Architecture of Deep Neural Network (DNN) 48. 7.3 Overview of the proposed approach using Deep Neural Network (DNN) 49.
7.4 Eight element sparse array configuration 50. 7.5 Lag Redundancy of the sparse array shown in Fig. 7.4 51. 7.6 DFT of the lag redundancy 52. 7.7 Power spectrum of the desired signal 53. 7.8 Explanation of the proposed objective criterion for the optimum array configuration shown in Fig. 7.4 54. 7.9 Explanation of the proposed objective criterion for the worst possible array configuration 55. 7.10 Plot of the proposed objective criterion in ascending order 56. 7.11 Performance comparison enumerated design under unlimited snapshots 57. 7.12 Performance comparison SBSA design under unlimited snapshots 58. 7.13 Performance comparison enumerated design under 1000 snapshots 59. 7.14 Performance comparison SBSA design under 1000 snapshots 60. 7.15 Performance comparisons with the state-of-the-art ## NOMENCLATURE

CRB | Cramer-Rao bound
---|---
CPI | Coherent processing interval
DFT | Discrete Fourier transform
DNN | Deep neural network
DOA | Direction of arrival
FOV | Field of view
MLE | Maximum likelihood estimate
MaxSNR | Maximum signal-to-noise ratio
MaxSINR | Maximum signal-to-interference-plus-noise ratio
MIMO | Multiple-input multiple-output
MRA | Minimum redundancy array
QCQP | Quadratically constrained quadratic program
SDR | Semidefinite relaxation
SCA | Successive convex approximation
TDL | Tapped delay line
ULA | Uniform linear array

## ABSTRACT Sparse array design through sensor selection reduces system transceiver overhead by lowering the hardware costs and processing complexity. Sparse-based design can potentially improve active and passive sensing in radar, communications, and underwater acoustics. Sparse array design approaches include coarse sampling in time, frequency, and space at both the transmitter (for active sensing) and the receiver. These approaches present a new paradigm to, for instance, direction-of-arrival (DOA) or Doppler estimation and allow dealing with more sources than physical sensors. Sparse sensor placement, with various design objectives, has successfully been employed in diverse application areas, particularly for enhanced parameter estimation and receiver performance. The sparse array design criteria are generally categorized into environment-independent and environment-dependent performance metrics. The former are largely benign to the underlying environment and, in principle, seek to maximize the spatial degrees of freedom by extending the coarray aperture. This enables high resolution DOA estimation possibly involving more sources than the available physical sensors. Environment-dependent objectives, on the other hand, consider the operating conditions characterized by emitters and targets in the array field of view, in addition to receiver noise. In this regard, applying such objectives renders the array configuration as well as the array weights time-varying in response to a dynamic and changing environment. This work is geared towards designing an environment-dependent sparse array beamformer to improve the output signal-to-interference and noise ratio using both narrowband and wideband signal platforms. This is achieved through low-latency sensor selection technology that enables cost-effective sparse array design that can be readily configured to meet environment-dependent performance metrics. In this case, the system cost can be considerably reduced by multiplexing the expensive transceiver chains to serve many more prospective sensor locations.
However, at any given time, only a few sensor locations are operational, corresponding to the active connections to the transceiver chains. One key challenge in implementing the data-dependent approaches is the lack of knowledge of exact or estimated values of the data autocorrelation function across the full sparse array aperture. The sparse array design can only have a few active sensors at a time, in essence making it difficult to furnish the correlation values corresponding to the inactive sensor locations. At the core of this work is to address the aforementioned issues by devising innovative solutions using convex optimization and machine learning tools, structured sparsity concepts, low-rank matrix completion schemes, and a hybrid approach that fuses the environment-dependent and environment-independent designs. Sparse array design is also proposed using the deep neural network (DNN). The DNN-based design paves the way for a successful real-time implementation of the data-driven sparse beamformer that efficiently overcomes the bottlenecks specific to sparse implementations. Finally, active sparse array design is considered, which is critical to implementing an efficient receiver design for adaptive radar operations. A desirable transmit beampattern design is achieved which focuses the transmitted power towards the perceived target locations and, as such, suppresses the transmission towards undesired directions. In so doing, it can steer clear of heavy clutter environments or avoid probing towards an adversary location in covert situations. This work also addresses another critical design objective for the transmitter, i.e., to minimize the cross correlation of the returns from different targets to enable effective adaptive receiver processing. High target cross correlation can severely degrade the performance of adaptive radar operations. This is achieved by assigning a separate beamformer to each target location, such that all beamformers share a common sparse configuration. ## Chapter 1 INTRODUCTION AND MOTIVATION ### 1.1 Sparse Arrays Planning sensor locations can potentially economize the receiver cost by curtailing valuable hardware and computational resources. Sparse sensor placement, with various design objectives, has successfully been employed in diverse application areas, particularly for enhanced parameter estimation and receiver performance [1, 2, 3, 4, 5, 6]. The sparse array design criteria are generally categorized into environment-independent and environment-dependent performance metrics. The former are largely benign to the underlying environment and, in principle, seek to maximize the spatial degrees of freedom by extending the coarray aperture. The difference coarray of a given array configuration demonstrates the unique correlation lags that can be exploited to maximize the degrees-of-freedom (DOF). This enables high resolution direction of arrival (DOA) estimation possibly involving more sources than the available physical sensors. Towards this goal, the sparse array is interpreted in the coarray domain to yield a higher number of degrees-of-freedom (DOFs). For instance, for a given number of physical sensors, the minimum redundancy array (MRA [7]) (Fig. 1.1a) maximizes the number of consecutive virtual sensors in the resulting difference coarray, as shown in Fig. 1.1b. However, for an arbitrary number of sensors, the MRA configurations are hard to optimize since they do not follow any general expression.
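To make the difference coarray notion concrete, the short sketch below computes the coarray of a given sensor placement and its contiguous-lag span; the 4-element positions $\{0,1,4,6\}$ (in units of the grid spacing) are a classical MRA example used here purely for illustration.

```python
import numpy as np

def difference_coarray(positions):
    """Return the sorted set of pairwise differences n_i - n_j (the coarray lags)."""
    p = np.asarray(positions)
    return np.unique(p[:, None] - p[None, :])

def contiguous_lags(lags):
    """Largest L such that all lags 0, 1, ..., L appear in the coarray."""
    L, s = 0, set(lags.tolist())
    while L + 1 in s:
        L += 1
    return L

# Classical 4-element minimum redundancy array (positions in units of d):
mra = [0, 1, 4, 6]
lags = difference_coarray(mra)
print(lags)                             # [-6 -5 ... 5 6]: every lag up to the aperture
print(contiguous_lags(lags))            # 6

# A 4-element ULA with the same number of sensors covers only lags 0..3:
ula = [0, 1, 2, 3]
print(contiguous_lags(difference_coarray(ula)))  # 3
```

With the same four sensors, the MRA thus supports twice the contiguous coarray aperture of the ULA, which is exactly the DOF advantage described above.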
Alternatively, several structured array configurations have recently been proposed to maximize the contiguous difference coarray, including the coprime arrays and nested arrays [8, 9, 10]. The nested array is an easy-to-construct configuration obtained by combining two uniform linear subarrays with different inter-element spacings; it results in a large number of consecutive virtual sensors under the coarray equivalence, as shown in Fig. 1.2. Figure 1.1: (a) Minimum redundancy array (7 elements); (b) Corresponding coarray Figure 1.2: (a) Nested array (11 elements); (b) Corresponding coarray Environment-dependent objectives, on the other hand, consider the operating conditions characterized by the underlying constellation of emitters and targets or interferences in the array field of view, in addition to receiver noise. In this regard, applying such objectives renders the array configuration as well as the array weights time-varying in response to a dynamic and changing environment. In this thesis, we focus on environment-dependent optimum sparse array beamformer design that maximizes the signal-to-interference and noise ratio (MaxSINR). Sparse array design typically involves the selection of a subset of uniform grid points for sensor placements. For a given number of sensors, it is assumed that the number of grid points, spaced by half wavelength, is limited due to the constraint on the physical aperture. In this approach, antenna positions are selected from uniformly spaced locations that are served by a limited number of transceiver chains. The environment-sensitive design objectives have recently become more realizable due to advances in efficient sensor switching technologies that readily activate a subset of sensors on predefined grid points, resulting in reconfigurable arrays [11], as depicted in Fig. 1.3. Thereby, the system cost can significantly be reduced by limiting the number of expensive transceiver chains. ### 1.2 Main Contributions and Thesis Organization The work presented in this thesis is geared towards designing an environment-dependent sparse array beamformer to improve the output signal-to-interference and noise ratio using both narrowband and wideband signal platforms. It addresses key challenges in implementing the data-dependent approaches, including the lack of knowledge of exact or estimated values of the data autocorrelation function across the full sparse array aperture. This is because the sparse array design can only have a few active sensors at a time, in essence making it difficult to furnish the correlation values corresponding to the inactive sensor locations. At the core of this work is to address the aforementioned issues by devising innovative solutions using convex optimization and machine learning tools, structured sparsity concepts, low-rank matrix completion schemes, and a hybrid approach that fuses the environment-dependent and environment-independent designs. The main contributions of the thesis are organized as follows. Chapter 2 deals with sparse array design to optimally process the source seen through the local scatterers. Chapters 3 and 4 focus on the MaxSINR narrowband sparse receive beamformer through the hybrid and matrix completion approaches. Chapter 5 develops the wideband sparse array design through TDL and DFT implementations. Active sparse array design is investigated in Chapter 6, and Chapter 7 presents the DNN-based sparse beamformer design, followed by the conclusion.
Figure 1.3: Reconfigurable array through sensor switching Chapter 2 examines the MaxSNR problem in the presence of local scatterings which may follow specific deterministic or statistical scattering models. It is shown that if the scatterers assume a Gaussian distribution centered around the source angular position, the optimum array configuration for maximizing the SNR is the commonly used uniform linear array (ULA). If the scatterers circulate the source, the array design is optimized to harness sparse array topologies with superior performance over ULAs. Simulation results are presented to show the effectiveness of array configurability in the case of both Gaussian and circularly spread sources. In Chapter 3, we consider sparse array design for receive beamforming achieving maximum signal-to-interference plus noise ratio (MaxSINR) for both a single point source and multiple point sources, operating in an interference active environment. Unlike existing sparse design methods which either deal with structured, environment-independent, or non-structured, environment-dependent arrays, our method is a hybrid approach and seeks a fully augmentable array that optimizes beamformer performance. Thereby, one can in essence reap the benefits of structured and non-structured arrays. This paradigm calls for a new aperture design approach that strives to provide filled co-arrays and, at the same time, be environment-sensitive. This approach proves important for a limited aperture that constrains the number of possible uniform grid points for sensor placements. The problem is formulated as a quadratically constrained quadratic program (QCQP), with the cost function penalized with the weighted $l_{1}$-norm squared of the beamformer weight vector. Performance comparisons among the proposed sparse array, the commonly used uniform arrays, arrays obtained by other design methods, and arrays designed without the augmentability constraint are provided. Chapter 4 addresses the sparse array optimization design, which requires estimating the data autocorrelations at all spatial lags across the array aperture. Towards this end, we adopt low-rank matrix completion under the semidefinite Toeplitz constraint for interpolating those autocorrelation values corresponding to the missing lags. We compare the performance of the matrix completion approach with that of the fully augmentable sparse array design acting on the same objective function. The optimization tool employed is the regularized $l_{1}$-norm successive convex approximation (SCA). Design examples with simulated data are presented using different operating scenarios, along with performance comparisons among various configurations. In Chapter 5, we develop sparse array receive beamformer design methods achieving maximum signal-to-interference plus noise ratio (MaxSINR) for wideband sources and jammers. Both tapped delay line (TDL) filtering and the DFT realizations to wideband array processing are considered. The sparse array configuration design problem is formulated as a quadratically constrained quadratic program (QCQP) and solved by using SDR (semidefinite relaxation). A computationally viable approach through SCA (successive convex approximation) is also pursued. It is assumed that the sensor configuration remains the same within the observation time. In radar, this assumption amounts to retaining the same sparse configuration over the coherent processing interval (CPI).
The notion of mixed $l_{1-q}$ norm regularization is exploited to promote group sparsity in conceiving a wideband sparse beamformer. In order to realize an implementable design, in the presence of missing autocorrelation lags, we propose parameter-free block Toeplitz matrix completion to estimate the received data correlation matrix across the entire array aperture. Chapter 6 examines the sparse array design for transmit beamforming. The main task is to design the beamformer to direct the bulk of the transmitted power to target locations. This, in turn, enables improved receiver performance which strives to maximize the signal-to-interference plus noise ratio (MaxSINR) of the radar returns. In order to realize an environment-dependent adaptive design, it is essential that the signals directed towards different targets be maximally mutually independent. This is achieved by assigning a separate beamformer to each target location, such that all beamformers share a common sparse configuration. It is shown that the proposed approach can efficiently utilize the available array aperture to achieve the desired transmitted beampattern characteristics. In Chapter 7, we consider sparse array design for receive beamforming achieving maximum signal-to-interference plus noise ratio (MaxSINR) for a desired point source, operating in a narrowband interference active environment. The sparse array design methods developed thus far are either data-driven or rely entirely on the prior knowledge of the interference DOAs and respective powers. In this chapter, we develop a sparse beamformer spectral analysis (SBSA) approach, exploiting the prior information of the interference parameters for implementing a MaxSINR beamformer with superior performance and low computational complexity. The data-driven design is essentially conceived by integrating the proposed SBSA design and the Deep Neural Network (DNN). Towards this goal, the training scenarios for the DNN are generated either through enumeration or, more expediently, simulated by the SBSA methodology, so that the network learns an effective representation that performs well in the downstream task. The DNN effectively approximates the unknown mapping from the input received correlation matrix to the sparse configuration with superior interference mitigation capability. The performance of the DNN is evaluated by the ability to learn the enumerated and SBSA designs. It is shown that the DNN effectively learns the proposed algorithms and, as such, paves the way for efficient real-time implementation. Conclusions and recommendations follow at the end. ### 1.3 List of publications The following publications are the result of this research. #### 1.3.1 Journal articles * • S. Hamza and M. Amin, “Sparse Array Design for Maximizing the Signal-to-Interference-plus-Noise-Ratio by Matrix Completion,” Digital Signal Processing, 2020. * • S. Hamza and M. G. Amin, “Hybrid Sparse Array Beamforming Design for General Rank Signal Models,” IEEE Transactions on Signal Processing, 2019. * • S. Hamza and M. G. Amin, “Sparse Array Beamforming for Wideband Signal Models,” IEEE Transactions on Aerospace and Electronic Systems (Under review). * • S. Hamza and M. G. Amin, “Sparse Array Beamformer Design Availing Deep Learning Approach,” IEEE Transactions on Signal Processing (Ready for submission). #### 1.3.2 Conference papers * • S. Hamza and M. Amin, “Sparse Array Receiver Beamformer Design for Multi-Functional Antenna,” submitted to EUSIPCO, 2020. * • S. Hamza and M. G.
Amin, “Sparse Array Design for Transmit Beamforming,” IEEE Radar Conference (RadarConf20), 2020. * • S. Hamza and M. G. Amin, “Planar Sparse Array Design for Transmit Beamforming,” SPIE Defense and Commercial Sensing, 2020. * • S. Hamza and M. G. Amin, “A Method for Optimum Sparse Array Beamforming Design,” SPIE Defense and Commercial Sensing, 2020. * • S. Hamza and M. G. Amin, “Hybrid Sparse Array Design for Under-determined Models,” ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, 2019. * • S. Hamza and M. G. Amin, “Sparse Array Design Utilizing Matrix Completion,” Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 2019. * • S. Hamza and M. Amin, “Sparse Array DFT Beamformers for Wideband Sources,” 2019 IEEE Radar Conference (RadarConf19), Boston, MA, USA. * • S. Hamza and M. Amin, “Optimum Sparse Array Beamforming for General Rank Signal Models,” 2018 IEEE Radar Conference (RadarConf18), Oklahoma City, OK, 2018. * • S. Hamza, M. G. Amin and G. Fabrizio, “Optimum Sparse Array Design for Maximizing Signal-to-Noise Ratio in Presence of Local Scatterings,” 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, 2018. * • S. Hamza and M. G. Amin, “Optimum Sparse Array Receive Beamforming for Wideband Signal Model,” 2018 Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 2018. * • S. Hamza and M. G. Amin, “Two Dimensional Sparse Array Design of Wideband Signals for Data Reduction,” SPIE Defense and Commercial Sensing, 2019. ## Chapter 2 Optimum Sparse Array Design for Maximizing Signal-to-Noise Ratio in Presence of Local Scatterings ### 2.1 Introduction Sparse arrays have recently attracted much attention that is motivated by switched antenna technologies and advances in constrained minimization and convex optimization techniques. There are several metrics to design sparse arrays and decide on optimum array configurations. Among those metrics, maximum signal-to-noise ratio (MaxSNR), maximum signal-to-interference and noise ratio (MaxSINR), and reduced Cramer-Rao bound (CRB) for direction-of-arrival (DOA) estimation yield improved beamforming and direction finding performance [2, 11, 12, 13, 14, 15]. Designing optimum sparse arrays for MaxSNR strives to maximize the SNR at the array output for a given source direction-of-arrival (DOA). Depending on the number of antennas and permissible antenna locations, it has been shown that a significant performance improvement can be achieved over other commonly used sparse arrays, including nested and coprime structured arrays [8, 9, 10]. In past contributions, point sources have typically been assumed where each source is characterized by a steering vector and provides a rank-one spatial covariance matrix at the array receiver. However, depending on the multipath environments and source signal bandwidth, these steering vectors, along with the corresponding covariance matrix rank, can significantly change [16, 17, 18, 19, 20]. In this chapter, the effect of the spatial channel on optimum sparse array beamforming for MaxSNR is examined for the first time. Two models for local scatterings, namely, the Gaussian model and the circular model, are considered. These scattering models are most suitable for dense urban environments, in which the signal encounters rich scattering prior to its arrival at the array receiver [21, 22, 23, 24].
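The rank behavior described above is easy to check numerically. The sketch below is an illustration with arbitrarily chosen parameters (8 sensors at half-wavelength spacing and a 5-degree Gaussian angular spread), not part of the chapter's simulations; it contrasts the rank-one covariance of a point source with the higher-rank covariance produced by a cluster of local scatterers.

```python
import numpy as np

N, d = 8, 0.5                      # sensors, spacing in wavelengths (illustrative)
steer = lambda th: np.exp(1j * 2 * np.pi * d * np.arange(N) * np.cos(th))

# Point source at 60 degrees: rank-one covariance a(theta) a(theta)^H.
a = steer(np.deg2rad(60))
R_point = np.outer(a, a.conj())
print(np.linalg.matrix_rank(R_point))            # 1

# Spread source: P scatterers drawn around 60 degrees (Gaussian model).
rng = np.random.default_rng(0)
P = 50
thetas = np.deg2rad(60 + 5 * rng.standard_normal(P))
U = np.stack([steer(t) for t in thetas], axis=1)  # N x P matrix of steering vectors
R_spread = (U @ U.conj().T) / P                   # equal-power, uncorrelated scatterers
print(np.linalg.matrix_rank(R_spread, tol=1e-6))  # > 1: no longer a rank-one model
```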
It is shown that the optimum sparse array always elects a configuration that seeks the highest spatial correlation values across the antennas. On the other hand, for a spatial correlation that is monotonically decreasing, the sparse array would assume the minimum possible antenna separation, so as to collect the highest correlation values. This is accomplished by configuring a ULA. For those scattering models in which the correlation rises and falls with increased antenna spacing, the optimum MaxSNR sparse array configuration deviates from a ULA, and positions the antennas such that their separations are consistent with the highest sensor correlation values. We pose the problem as optimally selecting $K$ antennas out of $N$ possible equally spaced locations on a uniform grid. The antenna selection problem for maximizing SNR amounts to maximizing the principal eigenvalue of the source correlation matrix [25]. In order to realize convex relaxation for this NP-hard optimization problem and avoid the computational burden of a singular value decomposition for each possible configuration, we maximize a lower bound on the output SNR instead. The lower bound optimization objective is approximated by a Taylor series and formulated as an iterative linear program. The rest of the chapter is organized as follows: In Section 2.2, we formulate the problem for maximizing the output SNR. The role of array configurability for MaxSNR for Gaussian and circular scattering models is explained in Section 2.3. Section 2.4 deals with the iterative solution for finding the optimum array design. Simulations and conclusions follow at the end. ### 2.2 Problem Formulation Consider a spatially spread source with $P$ independently scattered components impinging on a linear array with $N$ uniformly placed antennas. Then, the signal received at the array at time instant $t$ is given by: $\mathbf{x}(t)=\sum_{k=1}^{P}\alpha_{k}(t)\mathbf{a}(\theta_{k})+\mathbf{n}(t),$ (2.1) where $\mathbf{a}({\theta_{k}})=[1\,\,\,e^{j(2\pi/\lambda)dcos(\theta_{k})}\,\cdots\,e^{j(2\pi/\lambda)d(N-1)cos(\theta_{k})}]^{T},$ is the steering vector corresponding to the $k$th scatterer with the direction-of-arrival $\theta_{k}$, $d$ is the inter-element spacing, $\lambda$ is the operating wavelength, $\alpha_{k}(t)\in\mathbb{C}$ represents the complex amplitude of the $k$th scatterer, and $\mathbf{n}(t)\in\mathbb{C}^{N}$ represents the additive Gaussian noise with variance $\sigma_{n}^{2}$ at the receiver output. The received signal $\mathbf{x}(t)$ is linearly combined by the $N$-antenna beamformer that strives to maximize the output SNR. The output signal $y(t)$ of the optimum beamformer for maximum SNR is given by [25], $y(t)=\mathbf{w_{0}}^{H}\mathbf{x}(t),$ (2.2) with $\mathbf{w_{0}}=\mathscr{P}\\{\mathbf{R_{n}}^{-1}\mathbf{R}\\}.$ (2.3) The operator $\mathscr{P}\\{.\\}$ computes the principal eigenvector, $\mathbf{R}=\mathbf{UR_{s}U}^{H}$ is the received source correlation matrix with $\mathbf{U}=[\mathbf{a}({\theta_{1}})\,\cdots\,\mathbf{a}({\theta_{P}})]$, and the $(k,l)$th entry of $\mathbf{R_{s}}$ is $\mathbf{R_{s}}(k,l)=E\\{\alpha_{k}(t)\alpha_{k}^{H}(t)\\}$ for $k=l$ and zero otherwise.
For spatially uncorrelated noise, $\mathbf{R_{n}}=\sigma_{n}^{2}\mathbf{I}$, we obtain the corresponding optimum output $SNR_{o}$ as follows: $\mathbf{w_{0}}=\mathscr{P}\\{\mathbf{R}\\},$ (2.4) $SNR_{o}=\frac{\mathbf{w_{0}}^{H}\mathbf{R}\mathbf{w_{0}}}{\mathbf{w_{0}}^{H}\mathbf{R_{n}}\mathbf{w_{0}}}=\frac{||\mathbf{R}||_{2}}{\sigma_{n}^{2}}.$ (2.5) Here, $||.||_{2}$ denotes the spectral norm or the maximum eigenvalue of the matrix. Equations (2.4) and (2.5) show that the optimum beamformer for MaxSNR is directly tied to the eigen-structure of the correlation matrix. As such, there is a need to analyze the correlation matrix under the scattering models. ### 2.3 Gaussian and circular scattering models The Gaussian model assumes that the directions of arrival of the scatterers are Gaussian distributed, $\mathscr{N}(\theta_{0},\sigma_{0}^{2})$, with mean direction of arrival $\theta_{0}$ and angular standard deviation $\sigma_{0}$. Consequently, the elements of the received steering vector are jointly Gaussian with zero mean, with covariance matrix given by [21], $\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}\approx(\mathbf{a}({\theta_{0}})\mathbf{a}^{H}({\theta_{0}}))\circ\mathbf{B}_{(\theta_{0},\sigma_{0})},$ (2.6) where ‘$\circ$’ denotes the Hadamard product and $\mathbf{B}_{(\theta_{0},\sigma_{0})}(k,l)=e^{-2(\pi\delta(k-l))^{2}\sigma_{0}^{2}cos^{2}(\theta_{0})}.$ (2.7) Denote by $\mathbf{z}\in\\{0,1\\}^{N}$ a selection vector whose entries $1$ and $0$ represent, respectively, the presence and absence of the corresponding antennas. The steering vector corresponding to the antenna selection vector $\mathbf{z}$ is given by $\mathbf{a}_{{\theta_{0}}}(\mathbf{z})=\mathbf{a}{{(\theta_{0})}}\odot\mathbf{z}$. Here ‘$\odot$’ is the element-wise product operator which allows the selection of antenna elements according to $\mathbf{z}$. Similarly, $\mathbf{B}_{(\theta_{0},\sigma_{0})}(\mathbf{z})=\mathbf{B}_{(\theta_{0},\sigma_{0})}\odot\mathbf{Z}$ with $\mathbf{Z}=\mathbf{z}\mathbf{z}^{T}$ being the corresponding antenna selection matrix. Equation (2.6) with selected antennas can be re-written as follows: $\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}(\mathbf{z})\approx(\mathbf{a}_{{\theta_{0}}}(\mathbf{z})\mathbf{a}^{H}_{{\theta_{0}}}(\mathbf{z}))\circ\mathbf{B}_{(\theta_{0},\sigma_{0})}(\mathbf{z}).$ (2.8) We note that the trace of $\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}(\mathbf{z})$ is constant since the input source power remains the same irrespective of the array configuration. Accordingly, the sum of eigenvalues is constant for all possible correlation matrices associated with the $K$ antenna selection problem. To be more explicit, the problem formulated in Eq. (2.5) can be expressed as: $\displaystyle\underset{\mathbf{z}}{\text{max}}$ $\displaystyle\quad\quad{||\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}(\mathbf{z})||_{2}},$ (2.9) $\displaystyle\text{given:}$ $\displaystyle\,\sum_{k=1}^{K}v_{\mathbf{z}}(k)=\mathbf{Tr}\,(\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}(\mathbf{z}))=K\mathbf{Tr}(\mathbf{R_{s}}),$ where $\mathbf{Tr}$ $(.)$ denotes the trace of the matrix, and $v_{\mathbf{z}}(k)$ is the $k$th eigenvalue of the correlation matrix $\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}(\mathbf{z})$. Equations (2.6) and (2.7) show that the correlation drops monotonically with increased correlation lag. As shown below, this property compels the optimum sparse array to assume a ULA with minimum inter-element spacing; hence, the solution does not require any iteration-based method or enumeration.
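For concreteness, the following sketch builds the Gaussian-model correlation matrix of (2.6)-(2.7) and evaluates the output SNR (2.5) for a given selection vector. The parameters are illustrative, and the code is not the thesis implementation.

```python
import numpy as np

N, d = 21, 0.5                                   # grid points, spacing in wavelengths
theta0, sigma0 = np.deg2rad(40), np.deg2rad(5)   # mean DOA and angular spread
sigma_n2 = 1.0                                   # noise power

n = np.arange(N)
a0 = np.exp(1j * 2 * np.pi * d * n * np.cos(theta0))       # steering vector a(theta0)
B = np.exp(-2 * (np.pi * d * (n[:, None] - n[None, :]))**2
           * sigma0**2 * np.cos(theta0)**2)                 # Eq. (2.7)
Rg = np.outer(a0, a0.conj()) * B                            # Eq. (2.6), Hadamard product

def max_snr(z):
    """Output SNR (2.5) of the subarray picked by the 0/1 selection vector z."""
    idx = np.flatnonzero(z)
    R = Rg[np.ix_(idx, idx)]                      # Eq. (2.8): selected correlation
    return np.linalg.eigvalsh(R)[-1] / sigma_n2   # principal eigenvalue / noise power

z_ula = np.zeros(N); z_ula[:9] = 1                # 9-element ULA
z_sp = np.zeros(N); z_sp[::2][:9] = 1             # a 9-element sparse alternative
print(max_snr(z_ula), max_snr(z_sp))              # the ULA wins under the Gaussian model
```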
Let $\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}(\mathbf{u})$ be the correlation matrix for the ULA ‘$\mathbf{u}$’ and $\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}(\mathbf{s})$ be the correlation matrix associated with a sparse configuration ‘$\mathbf{s}$’ with the same number of antennas, $K$. The $k$th eigenvalue $v_{\mathbf{z}M}(k)$ of $\mathbf{R}^{M}_{\mathbf{g}(\theta_{0},\sigma_{0})}(\mathbf{z})$ is related to the corresponding eigenvalue $v_{\mathbf{z}}(k)$ of $\mathbf{R_{g}}_{(\theta_{0},\sigma_{0})}(\mathbf{z})$ by [26], $v_{\mathbf{z}M}(k)=v^{M}_{\mathbf{z}}(k),\,\,\,\,\forall\,M\geq 0.$ (2.10) For the Gaussian spatial channel, $\mathbf{Tr}$ $(\mathbf{R}^{M}_{\mathbf{g}(\theta_{0},\sigma_{0})}(\mathbf{u}))>\mathbf{Tr}\,(\mathbf{R}^{M}_{\mathbf{g}(\theta_{0},\sigma_{0})}(\mathbf{s}))$ (proof in Section 2.7), which, along with Eq. (2.10), yields $\sum_{k=1}^{K}v^{M}_{\mathbf{u}}(k)>\sum_{k=1}^{K}v^{M}_{\mathbf{s}}(k),\,\,\,\,\,\forall\,M\geq 2.$ (2.11) From Eq. (2.11), it readily follows that $\max_{k}\,(v_{\mathbf{u}}(k))>\max_{k}\,(v_{\mathbf{s}}(k))$: as $M$ grows, each sum in Eq. (2.11) is dominated by its largest eigenvalue, and all the eigenvalues of the correlation matrices are greater than or equal to zero.

Figure 2.1: (a) Circular model; (b) Correlation coefficient as a function of lags

Therefore, for the Gaussian scattering model, where the correlation function is monotonically decreasing with sensor spacing, the optimum sparse array always clusters with minimum spatial separation, configuring a ULA. The pairwise sensor correlation for the circular model is given by [27], $r_{c}(\theta_{0},\sigma_{0})=\frac{1}{P}\sum_{i=0}^{P-1}e^{-j2\pi(\delta(k-l))\cos(\theta_{0}+\theta_{i})},$ (2.12) where $r_{c}(\theta_{0},\sigma_{0})$ is the $(k,l)$th element of the correlation matrix $\mathbf{R_{c}}_{(\theta_{0},\sigma_{0})}$ and the $\theta_{i}$'s are circularly distributed around the source, as shown in Fig. 2.1a. Contrary to the Gaussian model, the sensor data correlation in the circular case shows oscillatory behaviour (Fig. 2.1b) as a function of lags (Eq. (2.12)). We can, however, bound the optimum SNR from above and below as follows (with the noise power normalized to unity): $\frac{\mathbf{Tr}\,(\mathbf{R_{c}}_{(\theta_{0},\sigma_{0})}(\mathbf{w}))}{K}\leq SNR_{o}\leq\mathbf{Tr}\,(\mathbf{R_{c}}_{(\theta_{0},\sigma_{0})}(\mathbf{o})).$ (2.13) $\mathbf{R_{c}}_{(\theta_{0},\sigma_{0})}(\mathbf{o})$ and $\mathbf{R_{c}}_{(\theta_{0},\sigma_{0})}(\mathbf{w})$ are the optimum ‘$\mathbf{o}$’ and worst ‘$\mathbf{w}$’ array correlation matrices, respectively. In the optimal case, the eigenvalues are maximally spread, whereas the worst case for MaxSNR arises when all the eigenvalues are equal. We also note that Eq. (2.11) in fact determines the eigen-spread of the correlation matrix asymptotically. It can be shown that this relation remains valid for the circular correlation matrix for some sufficiently large $\zeta$, such that $\sum_{k=1}^{K}v^{M}_{\mathbf{o}}(k)>\sum_{k=1}^{K}v^{M}_{\mathbf{s}}(k),\,\,\,\,\,\forall\,M\geq\zeta$. Therefore, finding the optimum configuration amounts to maximizing $\sum_{k=1}^{K}v^{M}_{\mathbf{z}}(k)$ or, equivalently, Tr ($\mathbf{R}^{M}_{\mathbf{c}(\theta_{0},\sigma_{0})}(\mathbf{z})$) for any $M\geq\zeta$, over all possible configurations. Although maximizing Tr ($\mathbf{R}^{M}_{\mathbf{c}(\theta_{0},\sigma_{0})}(\mathbf{z})$) is computationally less expensive than maximizing the principal eigenvalue, it remains computationally demanding.
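The property underlying Eq. (2.11) is easy to check numerically. The sketch below enumerates all $K$-antenna selections on a deliberately small grid and confirms that, for the monotonically decaying Gaussian-model correlation, the configuration maximizing $\mathrm{Tr}(\mathbf{R}^{M})$ is a contiguous block, i.e., a ULA. The grid size and scenario are illustrative assumptions chosen so that enumeration stays cheap.

```python
import numpy as np
from itertools import combinations

N, K, d = 10, 5, 0.5
theta0, sigma0 = np.deg2rad(60.0), np.deg2rad(5.0)   # assumed scenario

n = np.arange(N)
a0 = np.exp(1j * 2 * np.pi * d * n * np.cos(theta0))
lag = d * (n[:, None] - n[None, :])
B = np.exp(-2 * (np.pi * lag)**2 * sigma0**2 * np.cos(theta0)**2)
Rg = np.outer(a0, a0.conj()) * B                     # full-array correlation, Eq. (2.6)

M = 4
best = max(combinations(range(N), K),
           key=lambda idx: np.trace(np.linalg.matrix_power(
               Rg[np.ix_(idx, idx)], M)).real)
print("configuration maximizing Tr(R^M):", best)     # a contiguous block (ULA)
```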
Therefore, in the next section we resort to a lower-bound relaxation to design the optimum configuration.

### 2.4 Optimum sparse array design

Following the approach in [20], we assume that we have an estimate of $\mathbf{R_{(\theta_{0},\sigma_{0})}}$, which is the full antenna array source correlation matrix. Then, the problem in Eq. (2.9) can be rewritten as follows: $\displaystyle\underset{\mathbf{z}}{\text{max}}$ $\displaystyle||\mathbf{R_{(\theta_{0},\sigma_{0})}(z)}||_{2},$ (2.14) s.t. $\displaystyle||\mathbf{z}||_{0}=K.$ Here, $||.||_{0}$ determines the cardinality of the selection vector $\mathbf{z}$. Given $\mathbf{e_{0}}$, the principal eigenvector corresponding to the full antenna array, we approximate the thinned vector $\mathbf{e_{0}}(\mathbf{z})/||\mathbf{e_{0}}(\mathbf{z})||_{2}$ as the principal eigenvector of the selected $K$-element subarray $\mathbf{z}$ [28]. The problem in Eq. (2.14) can then be approximated by: $\displaystyle\underset{\mathbf{z}}{\text{max}}$ $\displaystyle\frac{\mathbf{e_{0}}^{H}(\mathbf{z})\mathbf{R_{(\theta_{0},\sigma_{0})}}(\mathbf{z})\mathbf{e_{0}}(\mathbf{z})}{||\mathbf{e_{0}}(\mathbf{z})||_{2}^{2}},$ (2.15) s.t. $\displaystyle||\mathbf{z}||_{0}=K.$ This approximation represents a lower bound on the optimum SNR. Define $\tilde{\mathbf{R}}_{\theta,\sigma_{0}}=\mathbf{R}_{(\theta_{0},\sigma_{0})}\circ(\mathbf{e_{0}}\mathbf{e_{0}}^{H})$ and $\mathbf{\tilde{e}_{0}}=\mathbf{e}^{*}_{0}\circ\mathbf{e}_{0}$. Equation (2.15) can be rephrased as follows: $\displaystyle\underset{\mathbf{z}}{\text{max}}$ $\displaystyle\,\,\,\,\frac{\mathbf{z}^{T}\mathbf{\tilde{R}_{\theta,\sigma_{0}}z}}{\mathbf{z}^{T}\mathbf{\tilde{e_{0}}}},$ (2.16) s.t. $\displaystyle\,\,||\mathbf{z}||_{1}=K,$ $\displaystyle\,\,0\leq\mathbf{z}\leq 1.$ The constraint in Eq. (2.15) is relaxed to an affine equality constraint ($||.||_{1}$ denotes the $l^{1}$-norm) and a box constraint, but the objective function still remains non-convex. Therefore, we resort to an iterative first-order Taylor approximation as follows: $\displaystyle\underset{\mathbf{z}}{\text{max}}$ $\displaystyle\quad\quad\frac{-\mathbf{z}^{iT}\mathbf{\tilde{R}_{\theta,\sigma_{0}}z}^{i}+2\mathbf{z}^{iT}\mathbf{\tilde{R}_{\theta,\sigma_{0}}z}}{\mathbf{z}^{T}\mathbf{\tilde{e_{0}}}},$ (2.17) s.t. $\displaystyle\quad\quad||\mathbf{z}||_{1}=K,$ $\displaystyle\quad\quad 0\leq\mathbf{z}\leq 1.$ Here, $i$ is the iteration number. This linear fractional program (LFP) can be turned into an LP by the simple change of variables $\alpha=1/\mathbf{z}^{T}\mathbf{\tilde{e_{0}}}$ and $\mathbf{v}=\mathbf{z}/\mathbf{z}^{T}\mathbf{\tilde{e_{0}}}$, as follows [29], $\displaystyle\underset{\mathbf{v},\alpha}{\text{max}}$ $\displaystyle\quad\quad{-\mathbf{z}^{iT}\mathbf{\tilde{R}_{\theta,\sigma_{0}}z}^{i}\alpha+2\mathbf{z}^{iT}\mathbf{\tilde{R}_{\theta,\sigma_{0}}v}},$ (2.18) s.t. $\displaystyle\quad\quad\mathbf{1}^{T}\mathbf{v}-K\alpha=0,\,\,\mathbf{\tilde{e_{0}}}^{T}\mathbf{v}=1,$ $\displaystyle\quad\quad\mathbf{v}\geq 0,\,\,\mathbf{v}-\alpha\leq 0,\,\,\alpha\geq 0.$ The estimate of the selection vector is then recovered as $\mathbf{z}=\mathbf{v}/\alpha$.

### 2.5 Simulations

For both scattering models, we select $K=9$ sensors from $N=21$ possible equally spaced locations with an inter-element spacing of $\lambda/2$. The source SNR is $0$ dB.

Figure 2.2: Output SNR comparison for various array configurations; (a) Gaussian model; (b) circular model. Beampattern of optimum array; (c) Gaussian model; (d) circular model.
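The sketch below implements one plausible realization of the iterative LP of Eqs. (2.17)-(2.18) using `scipy.optimize.linprog`. The full-array matrix `R` can come from either scattering model (e.g. the `Rg` built earlier); the initialization, fixed iteration count and final rounding rule are illustrative choices rather than steps prescribed by the chapter.

```python
import numpy as np
from scipy.optimize import linprog

def maxsnr_selection(R, K, n_iter=10):
    """Iterative LP relaxation for K-of-N antenna selection, Eqs. (2.16)-(2.18)."""
    N = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)
    e0 = eigvecs[:, -1]                        # full-array principal eigenvector
    Rt = np.real(R * np.outer(e0, e0.conj()))  # R~; its quadratic form is real
    et = np.abs(e0) ** 2                       # e~0 = e0* o e0
    z = np.full(N, K / N)                      # feasible relaxed starting point
    for _ in range(n_iter):
        # maximize  -z^T Rt z * alpha + 2 z^T Rt v  over x = [v; alpha], Eq. (2.18)
        c = -np.concatenate([2.0 * (Rt @ z), [-(z @ Rt @ z)]])
        A_eq = np.vstack([np.concatenate([np.ones(N), [-float(K)]]),  # 1^T v - K*alpha = 0
                          np.concatenate([et, [0.0]])])               # e~0^T v = 1
        b_eq = [0.0, 1.0]
        A_ub = np.hstack([np.eye(N), -np.ones((N, 1))])               # v - alpha*1 <= 0
        b_ub = np.zeros(N)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (N + 1))
        v, alpha = res.x[:N], res.x[N]
        z = v / alpha                          # recover the relaxed selection vector
    return np.sort(np.argsort(z)[-K:])         # round: keep the K largest entries

# Example: idx = maxsnr_selection(Rg, K=9) returns the selected antenna indices.
```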
#### 2.5.1 Gaussian model

Figure 2.2a plots the optimum SNR for all possible array configurations, in ascending order of output SNR, for a Gaussian spread source with a centre angle of $90^{0}$ and a variance of $5^{0}$. As expected, the optimum array emerges as a ULA with an output SNR of 8 dB. The lower bound relaxation is also depicted in Fig. 2.2a and is shown to offer a good metric in the underlying case. The ULA has more than a 3 dB advantage over the worst array configuration, which has an output SNR of less than 5 dB. The worst array configuration is shown in Fig. 2.3a, where ‘.’ and ‘$\times$’ represent the presence and absence of a sensor, respectively. We also observe that if a sensor malfunction prevents a contiguous uniformly spaced antenna array configuration, the optimum sparse array opts to stretch minimally, incorporating the next closest antenna position, as shown in Fig. 2.3b. To simulate this case, we set antenna 9 as a faulty antenna. The output SNR corresponding to this new configuration is 7.9 dB, which is 0.1 dB less than that of the optimum array configuration. This lower SNR is the price paid for including the smaller correlation value corresponding to antenna position 10 compared to that of antenna position 9. The output SNRs corresponding to the coprime and nested sparse arrays, as depicted in Fig. 2.2a, are significantly lower compared to the ULA. Figure 2.2c shows the optimum beampattern corresponding to the different configurations, with the ULA giving the widest main lobe with the highest possible gain at the scatterers' centre. Moreover, the sidelobes for the ULA are significantly lower compared to the other sparse arrays, which is highly desirable.

#### 2.5.2 Circular model

Figure 2.2b shows the scenario for the circular scattering model with a single source at broadside with a spatial spread of $30^{0}$. It shows that the optimum sparse array (shown in Fig. 2.3c) has an SNR of $5.68$ dB, whereas the worst sparse array (shown in Fig. 2.3d) has an output SNR of $3$ dB, which is more than 2.5 dB down compared to the optimum array topology.

Figure 2.3: (a) Worst sparse array (Gaussian); (b) Second best (Gaussian); (c) Optimum array (circular); (d) Worst sparse array (circular); (e) Nested array; (f) Coprime array

It is informative to compare the performance of the optimum array with that of the ULA, which is a de facto array configuration in many applications. The ULA gives an output SNR of 5.28 dB, which is 0.4 dB lower than the optimum performance. The significant difference, however, lies in the beampatterns of the two arrays, as shown in Fig. 2.2d. Contrary to the optimum configuration, the ULA attempts to maximize the output SNR by placing a null exactly in the centre of the scatterer beam, which is highly undesirable. Figure 2.2b also shows that the optimum sparse array has a clear advantage over the coprime and nested array topologies in terms of output SNR. This is because the optimum array manages a wider mainlobe with higher gain where the scatterers are most dense (Fig. 2.2d). Figure 2.4 shows that the superior performance of the optimum sparse array is more pronounced at broadside, whereas the ULA is the optimum array configuration for DOAs near the array end-fire location. It can be seen that the performance of the optimum array and the suboptimum sparse arrays differ significantly over a wide field of view.
### 2.6 Conclusion

This chapter considered optimum sparse array configuration for maximizing the beamformer output SNR for a single source that is seen by the receiver through its local scatterers. It was shown that for the Gaussian local scattering model, the correlation weakens monotonically across the receiver antennas. As such, the optimum configuration, in seeking to capture the highest spatial correlation values, becomes the ULA. We showed that for a circular local scattering model, the optimum sparse array loses uniformity in the quest of including the high correlation values corresponding to large antenna separations. In both cases, we solved the optimization problem both iteratively and by enumeration, and showed strong agreement between the two methods.

Figure 2.4: Output SNR for different arrays vs centre angle.

### 2.7 Appendix

Proof of the result used in Eq. (2.11): $\displaystyle\mathbf{Tr}\,(\mathbf{R}^{M}_{\mathbf{g}(\theta_{0},\sigma_{0})}(\mathbf{z}))$ $\displaystyle\approx\mathbf{Tr}\,((\mathbf{a}_{{\theta_{0}}}(\mathbf{z})\mathbf{a}^{H}_{{\theta_{0}}}(\mathbf{z})\circ\mathbf{B}_{(\theta_{0},\sigma_{0})}(\mathbf{z}))^{M})$ (2.19) $\displaystyle=\mathbf{Tr}\,(\mathbf{a}_{{\theta_{0}}}(\mathbf{z})\mathbf{a}^{H}_{{\theta_{0}}}(\mathbf{z})\circ\mathbf{B}^{M}_{(\theta_{0},\sigma_{0})}(\mathbf{z}))$ $\displaystyle=\mathbf{Tr}\,(\mathbf{B}^{M}_{(\theta_{0},\sigma_{0})}(\mathbf{z})).$ The second equality uses the fact that the entries of $\mathbf{a}_{{\theta_{0}}}(\mathbf{z})$ have unit modulus on the selected antennas, so that $(\mathbf{a}_{{\theta_{0}}}(\mathbf{z})\mathbf{a}^{H}_{{\theta_{0}}}(\mathbf{z})\circ\mathbf{B}_{(\theta_{0},\sigma_{0})}(\mathbf{z}))^{M}=\mathbf{a}_{{\theta_{0}}}(\mathbf{z})\mathbf{a}^{H}_{{\theta_{0}}}(\mathbf{z})\circ\mathbf{B}^{M}_{(\theta_{0},\sigma_{0})}(\mathbf{z})$. For the Gaussian model, $\mathbf{B}_{(\theta_{0},\sigma_{0})}(\mathbf{u})\geq\mathbf{B}_{(\theta_{0},\sigma_{0})}(\mathbf{s})>0$, where ‘$\geq$’ denotes element-wise comparison, with equality holding only for the diagonal entries. This implies, $\displaystyle\mathbf{B}^{M}_{(\theta_{0},\sigma_{0})}(\mathbf{u})$ $\displaystyle>\mathbf{B}^{M}_{(\theta_{0},\sigma_{0})}(\mathbf{s}),\,\,\,\,\quad\quad\forall\,M\geq 2$ (2.20) $\displaystyle\mathbf{Tr}\,(\mathbf{B}^{M}_{(\theta_{0},\sigma_{0})}(\mathbf{u}))$ $\displaystyle>\mathbf{Tr}\,(\mathbf{B}^{M}_{(\theta_{0},\sigma_{0})}(\mathbf{s})).$ Combining (2.19) and (2.20), we have $\mathbf{Tr}\,(\mathbf{R}^{M}_{\mathbf{g}(\theta_{0},\sigma_{0})}(\mathbf{u}))>\mathbf{Tr}\,(\mathbf{R}^{M}_{\mathbf{g}(\theta_{0},\sigma_{0})}(\mathbf{s}))\quad\forall\,M\geq 2$ ∎

## Chapter 3 Hybrid Sparse Array Beamforming Design for General Rank Signal Models

### 3.1 Introduction

Sparse array design through sensor selection reduces the system receiver overhead by lowering hardware costs and processing complexity. It finds applications in sensor signal processing for communications, radar, sonar, satellite navigation, radio telescopes, speech enhancement and ultrasonic imaging [1, 2, 3, 4, 5, 6]. One primary goal in these applications is to determine the sensor locations that achieve optimality for some pre-determined performance criteria. Such criteria include minimizing the mean radius of the confidence ellipsoid associated with the estimation error covariance matrix [5], and lowering the Cramer-Rao bound (CRB) for angle estimation in the direction finding problem [30]. The receiver performance then depends largely on the operating environment, which may change according to the source and interference signals and locations. This is in contrast to sparse arrays whose configurations follow certain formulas and seek to attain highly extended aperture co-arrays. The driving objective, in this case, is to enable direction of arrival (DOA) estimation of more sources than physical sensors. Common examples are structured arrays such as nested and coprime arrays [7, 8, 10].
Sparse array design typically involves the selection of a subset of uniform grid points for sensor placements. For a given number of sensors, it is often assumed that the number of grid points, spaced by half a wavelength, is unlimited. However, in many applications, there is a constraint on the spatial extent of the system aperture. In this case, a structured array, in seeking to maximize the number of spatial autocorrelation lags, may find itself placing sensors beyond the available physical aperture. The problem then becomes one of dual constraints: one relates to the number of sensors, the other to the number of grid points. With a limited aperture constraint invoked, a few sensors may in fact be sufficient to produce a desirable filled structured co-array, even under the narrowband assumption and without needing wideband or multiple frequencies [31]. In this case, any additional sensors constitute a surplus that can be utilized to meet an environment-dependent performance criterion, such as maximum signal-to-interference-plus-noise ratio (SINR). Thereby, one can in essence reap the benefits of both structured and non-structured arrays. This paradigm calls for a new aperture design approach that strives to provide filled co-arrays and, at the same time, be environment-sensitive. This hybrid design approach is the core contribution of this chapter.

Sparse sensor design has been thoroughly studied to economize the receive beamformer [32, 33, 34, 35, 36, 37, 15, 38, 39, 40, 41, 42, 43, 44]. However, in contrast to MaxSINR design, the main focus of those efforts was on achieving desirable beampattern characteristics with nominal sidelobe levels, since the sparse beamformer is susceptible to high sidelobe levels. For example, an array thinning design was proposed for sidelobe minimization in [36] by starting from a fully populated array and sequentially removing sensors in a systematic manner. In contrast, the sparse array design presented in [37] to optimize the peak sidelobe level involves a joint design of the sensor locations and their corresponding beamforming weights. The beampattern matching design explained in [15] can effectively recover sparse topologies through an iterative cyclic approach. Additionally, global optimization tools such as genetic algorithms/simulated annealing and convex relaxation schemes based on re-weighted $l_{1}$-norm minimization have been rigorously exploited in the sensor selection problem for synthesizing a user-specified receive beampattern response [39, 40, 41, 42, 43, 44].

In environment-dependent array design, signal power estimation and enhancement in an interference-active environment has a direct bearing on improving target detection and localization for radar signal processing, increasing throughput or channel capacity for MIMO wireless communication systems, and enhancing resolution capability in medical imaging [45, 46, 47]. It is noted that with sparse arrays, the commonly used Capon beamforming must not only find the optimum weights but also the optimum array configuration. This is clearly an entwined optimization problem, and requires finding the maximum SINR over all possible sparse array configurations. Maximum signal-to-noise ratio (MaxSNR) and MaxSINR designs have been shown to yield highly efficient beamforming, with performance depending mainly on the positions of the sensors as well as the locations of the sources in the field of view (FOV) [11, 13, 14].
In this chapter, we consider a bi-objective optimization problem, namely, achieving a filled co-array and maximizing the SINR. The proposed technique enjoys key advantages over state-of-the-art sparse aperture designs: (a) it does not require any a priori knowledge of the jammers' directions of arrival and their respective powers, which is implicitly assumed in previous contributions [20, 48, 12]; as such, it is possible to work directly on the received data correlation matrix; (b) it extends to spatially spread sources in a straightforward way.

Figure 3.1: Block diagram of adaptive switched sensor beamformer

The proposed hybrid approach first determines a prefixed sparse array that results in a filled co-array with a minimum number of sensors. This prefixed configuration could be a minimum redundancy array (MRA) [7], or a nested or coprime array configuration that fills the aperture under consideration with minimal sensors, allowing maximum degrees of freedom for SINR maximization. This prefixed sensor configuration could be obtained from an optimization problem involving the minimum number of sensors spanning a pre-determined aperture; however, for the scope of this chapter, the prefixed configuration is set to an MRA or other structured arrays. The sensors remaining after forming the prefixed array are utilized to maximize the SINR. The cascade nature of the proposed hybrid approach is simpler than the ultimate design approach that produces the optimum filled sparse array maximizing SINR. Environment-dependent array design lowers the hardware complexity by reducing the expensive transmission chains through sensor switching, as shown in the block diagram in Fig. 3.1. The proposed hybrid approach, however, has the added advantage of offering simplified sensor switching in a time-varying environment. This is attributed to the large number of fixed-location sensors which always remain non-switched, irrespective of the sources and interferences in the FOV.

The proposed hybrid approach is particularly attractive as the number $N$ of possible sensor locations increases. To further clarify, it is noted that sparse arrays having $N$ available sensors can typically span a filled array aperture of the order of $\mathcal{O}(N(N-1)/2)$ [8]; conversely, given an aperture spanning $N$ possible sensor locations, only $\mathcal{O}(N^{1/2})$ sensors are sufficient to synthesize a fully augmentable array design. This emphasizes the fact that as the possible aperture size increases, relatively few sensors are required to meet the full augmentability condition, leaving more degrees of freedom for SINR enhancement. The hybrid approach also lends itself to more desirable beampattern characteristics by maintaining a minimum spacing between the sensor elements. It is important to note that fully augmentable arrays not only provide the benefits of simplified sensor switching and improved identifiability of a large number of sources, but also ensure the availability of the full array data covariance matrix essential to carrying out the SINR-optimized configuration [49], [50]. Therefore, the proposed simplified hybrid sensor switching architecture ensures the knowledge of the global data statistics at all times, in contrast to previous efforts in [51, 52, 53] that sought to optimize data-dependent microphone placement vis-à-vis transmission power. The methodology therein targets a different objective function and primarily relies on local heuristics.
In this case, sensor switching comes with an additional implementation overhead, in an attempt to recursively match the performance offered by the knowledge of global statistics. We consider the problem of MaxSINR sparse arrays with limited aperture for both single and higher rank signal correlation matrices. The case of a single-rank correlation matrix arises when there is one desired point source in the FOV, whereas the higher rank signal model occurs for a spatially spread source. The problem is posed as optimally selecting $P$ sensors out of $N$ possible equally spaced grid points. Maximizing the SINR amounts to maximizing the principal eigenvalue of the product of the inverse of the data correlation matrix and the desired source correlation matrix [25]. Since this is an NP-hard optimization problem, we pose it as a QCQP with a weighted $l_{1}$-norm squared penalty to promote sparsity. The re-weighted $l_{1}$-norm squared relaxation has proven effective for reducing the required sensors and minimizing the transmit power in multicast beamforming [2]. We propose a modified re-weighting matrix based iterative approach to control the sparsity of the optimum weight vector so that a $P$-sensor fully augmentable hybrid array is finally selected. This modified regularization re-weighting matrix based approach incorporates the prefixed structured array assumption into our design and works by minimizing the objective function around the presumed prefixed array.

The rest of the chapter is organized as follows: In the next section, we state the problem formulation for maximizing the output SINR under a general rank signal correlation matrix. Section 3.3 deals with the optimum sparse array design by semidefinite relaxation (SDR) and the proposed modified re-weighting based iterative algorithm for finding a $P$-sensor fully augmentable hybrid sparse array. In Section 3.4, with the aid of a number of design examples, we demonstrate the usefulness of fully augmentable arrays for achieving MaxSINR and highlight the effectiveness of the proposed methodology for sparse array design. Concluding remarks follow at the end.

### 3.2 Problem Formulation

Consider $K$ desired sources and $L$ independent interfering source signals impinging on a linear array with $N$ uniformly placed sensors. The baseband signal received at the array at time instant $t$ is then given by: $\mathbf{x}(t)=\sum_{k=1}^{K}\alpha_{k}(t)\mathbf{s}(\theta_{k})+\sum_{l=1}^{L}\beta_{l}(t)\mathbf{v}(\theta_{l})+\mathbf{n}(t),$ (3.1) where $\mathbf{s}({\theta_{k}})$ and $\mathbf{v}({\theta_{l}})$ $\in\mathbb{C}^{N}$ are the steering vectors corresponding to the directions of arrival $\theta_{k}$ and $\theta_{l}$, respectively, and are defined as follows: $\mathbf{s}({\theta_{k}})=[1\,\,\,e^{j(2\pi/\lambda)d\cos(\theta_{k})}\,\ldots\,e^{j(2\pi/\lambda)d(N-1)\cos(\theta_{k})}]^{T}.$ (3.2) The inter-element spacing is denoted by $d$, and $\alpha_{k}(t),\beta_{l}(t)\in\mathbb{C}$ denote the complex amplitudes of the incoming baseband signals [54]. The additive Gaussian noise $\mathbf{n}(t)$ $\in\mathbb{C}^{N}$ has variance $\sigma_{n}^{2}$ at the receiver output. The received signal vector $\mathbf{x}(t)$ is combined linearly by the $N$-sensor beamformer that strives to maximize the output SINR.
The output signal $y(t)$ of the optimum beamformer for maximum SINR is given by [25], $y(t)=\mathbf{w}_{o}^{H}\mathbf{x}(t),$ (3.3) where $\mathbf{w}_{o}$ is the solution of the optimization problem given below: $\displaystyle\underset{\mathbf{w}\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s^{{}^{\prime}}}\mathbf{w},$ (3.4) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}=1.$ For statistically independent signals, the desired source correlation matrix is given by $\mathbf{R}_{s}=\sum_{k=1}^{K}\sigma^{2}_{k}\mathbf{s}(\theta_{k})\mathbf{s}^{H}(\theta_{k})$, where $\sigma^{2}_{k}=E\\{\alpha_{k}(t)\alpha_{k}^{*}(t)\\}$. Likewise, the interference-plus-noise correlation matrix is $\mathbf{R}_{s^{{}^{\prime}}}=\sum_{l=1}^{L}\sigma^{2}_{l}\mathbf{v}(\theta_{l})\mathbf{v}^{H}(\theta_{l})+\sigma_{n}^{2}\mathbf{I}_{N\times N}$, with $\sigma^{2}_{l}=E\\{\beta_{l}(t)\beta_{l}^{*}(t)\\}$ being the power of the $l$th interfering source. The problem in (3.4) can be written equivalently by replacing $\mathbf{R}_{s^{{}^{\prime}}}$ with the received data covariance matrix $\mathbf{R_{xx}}=\mathbf{R}_{s}+\mathbf{R}_{s^{{}^{\prime}}}$ as follows [25], $\displaystyle\underset{\mathbf{w}\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w},$ (3.5) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1.$ It is noted that the equality constraint in (3.4) is relaxed in (3.5) due to the inclusion of the constraint as part of the objective function; as such, (3.5) converges to the equality constraint. Additionally, the optimal solution of (3.5) is invariant to the absolute powers of the sources of interest; only the relative power profile of the sources of interest is required. For a single desired point source, this implies that only the knowledge of the DOA of the desired source is needed, rather than the exact desired source correlation matrix. Similarly, neither the source power nor the average power of the scatterers is required in (3.5) for spatially spread sources when a spatial channel model, such as the Gaussian or circular one, is assumed [21]. However, in practice, these assumptions can deviate from the actual received data statistics, and the discrepancy is typically mitigated, to an extent, by preprocessing the received data correlation matrix through diagonal loading or tapering [47]. There exists a closed-form solution of the above optimization problem, given by $\mathbf{w}_{o}=\mathscr{P}\\{\mathbf{R}_{s^{{}^{\prime}}}^{-1}\mathbf{R}_{s}\\}=\mathscr{P}\\{\mathbf{R_{xx}}^{-1}\mathbf{R}_{s}\\}$. The operator $\mathscr{P}\\{.\\}$ computes the principal eigenvector of the input matrix. Substituting $\mathbf{w}_{o}$ into (3.3) yields the corresponding optimum output SINR$_{o}$: $\text{SINR}_{o}=\frac{\mathbf{w}_{o}^{H}\mathbf{R}_{s}\mathbf{w}_{o}}{\mathbf{w}_{o}^{H}\mathbf{R}_{s^{{}^{\prime}}}\mathbf{w}_{o}}=\Lambda_{max}\\{\mathbf{R}^{-1}_{s^{{}^{\prime}}}\mathbf{R}_{s}\\}.$ (3.6) This shows that the optimum output SINR$_{o}$ is given by the maximum eigenvalue ($\Lambda_{max}$) associated with the product of the inverse of the interference-plus-noise correlation matrix and the desired source correlation matrix. Therefore, the performance of the optimum beamformer for maximizing the output SINR is directly related to the desired and interference-plus-noise correlation matrices.
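A minimal numerical sketch of the closed-form solution and of Eq. (3.6) follows, for a single desired point source and two interferers; all angles and powers are illustrative assumptions.

```python
import numpy as np

N, d = 16, 0.5

def steering(theta_deg):
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d * n * np.cos(np.deg2rad(theta_deg)))

s = steering(90.0)                                   # desired source DOA (assumed)
Rs = 10.0 * np.outer(s, s.conj())                    # SNR of 10 dB with sigma_n^2 = 1
Rsp = np.eye(N, dtype=complex)                       # noise correlation
for theta, power in [(70.0, 10.0), (110.0, 10.0)]:   # two interferers, INR of 10 dB
    v = steering(theta)
    Rsp += power * np.outer(v, v.conj())

# w_o = principal eigenvector of R_s'^{-1} R_s; SINR_o is the associated eigenvalue
lam, vec = np.linalg.eig(np.linalg.solve(Rsp, Rs))
k = int(np.argmax(lam.real))
w_o, sinr_o = vec[:, k], lam[k].real
print("optimum output SINR: %.2f dB" % (10 * np.log10(sinr_o)))
```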
It is to be noted that the rank of the desired source signal correlation matrix equals $K$, i.e., the number of desired sources.

### 3.3 Optimum sparse array design

The problem of locating the maximum principal eigenvalue among all the correlation matrices associated with a $P$-sensor selection is a combinatorial optimization problem. The constrained optimization (3.5) can be re-formulated for optimum sparse array design by incorporating an additional constraint on the cardinality of the weight vector: $\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w},$ (3.7) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1,$ $\displaystyle\quad||\mathbf{w}||_{0}=P.$ Here, $||.||_{0}$ determines the cardinality of the weight vector $\mathbf{w}$. We assume that we have an estimate of all the filled co-array correlation lags corresponding to the correlation matrix of the full aperture array. The problem expressed in (3.7) can be relaxed to induce sparsity in the beamforming weight vector $\mathbf{w}$ without placing a hard constraint on the specific cardinality of $\mathbf{w}$, as follows [55]: $\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w}+\mu(||\mathbf{w}||_{1}),$ (3.8) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1.$ Here, $||.||_{1}$ is the sparsity-inducing $l_{1}$-norm and $\mu$ is a parameter to control the desired sparsity of the solution. Even though the relaxed problem expressed in (3.8) is not equivalent to (3.7), $l_{1}$-norm regularization is well known to be an effective tool for recovering sparse solutions in many diverse formulations [56, 57]. The problem in (3.8) can instead be penalized by the weighted $l_{1}$-norm function, which is a well-known sparsity-promoting formulation [58], $\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w}+\mu(||(\mathbf{b}^{i}\circ|\mathbf{w}|)||_{1}),$ (3.9) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R_{s}}\mathbf{w}\geq 1.$ where “$\circ$” denotes the element-wise product, “$|.|$” is the modulus operator and $\mathbf{b}^{i}\in\mathbb{R}^{N}$ is the regularization re-weighting vector at the $i$th iteration. Therefore, (3.9) is a sequential optimization methodology, where the regularization re-weighting vector $\mathbf{b}^{i}$ is typically chosen as an inverse function of the beamforming weight vector obtained at the previous iteration. This, in turn, suppresses the sensors corresponding to smaller beamforming weights, thereby encouraging sparsity in an iterative fashion. The weighted $l_{1}$-norm function in (3.9) is replaced by the weighted $l_{1}$-norm squared function, which does not alter the regularization property of the weighted $l_{1}$-norm function [2], $\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w}+\mu(||(\mathbf{b}^{i}\circ|\mathbf{w}|)||^{2}_{1}),$ (3.10) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1.$ The semidefinite formulation (SDP) of the above problem can then be realized by re-expressing the quadratic form $\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w}=\text{Tr}(\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w})=\text{Tr}(\mathbf{R_{xx}}\mathbf{w}\mathbf{w}^{H})=\text{Tr}(\mathbf{R_{xx}}\mathbf{W})$, where Tr(.)
is the trace of the matrix. Similarly, the regularization term $||(\mathbf{b}^{i}\circ|\mathbf{w}|)||^{2}_{1}=(|\mathbf{w}|^{T}\mathbf{b}^{i})((\mathbf{b}^{i})^{T}|\mathbf{w}|)=\mathbf{|w|}^{T}\mathbf{B}^{i}\mathbf{|w|}=\text{Tr}(\mathbf{B}^{i}\mathbf{|W|})$. Here, $\mathbf{W}=\mathbf{w}\mathbf{w}^{H}$ and $\mathbf{B}^{i}=\mathbf{b}^{i}(\mathbf{b}^{i})^{T}$ is the regularization re-weighting matrix at the $i$th iteration. Utilizing these quadratic expressions in (3.10) yields the following problem [59, 60, 2], $\displaystyle\underset{\mathbf{W\in\mathbb{C}}^{N\times N},\mathbf{\tilde{W}\in\mathbb{R}}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\text{Tr}(\mathbf{R_{xx}}\mathbf{W})+\mu\text{Tr}(\mathbf{B}^{i}\mathbf{\tilde{W}}),$ (3.11) s.t. $\displaystyle\quad\text{Tr}(\mathbf{R}_{s}\mathbf{W})\geq 1,$ $\displaystyle\quad\mathbf{\tilde{W}}\geq|\mathbf{W}|,$ $\displaystyle\quad\mathbf{W}\succeq 0,\,\text{Rank}(\mathbf{W})=1.$ The function “$|.|$” returns the absolute values of the entries of the matrix, “$\geq$” is the element-wise comparison and “$\succeq$” denotes the generalized matrix inequality. The auxiliary matrix $\mathbf{\tilde{W}}\in\mathbb{R}^{N\times N}$ implements the weighted $l_{1}$-norm squared regularization along with the re-weighting matrix $\mathbf{B}^{i}$. The rank constraint in (3.11) is non-convex and is therefore dropped. The rank-relaxed approximation works well for the underlying problem. In case the solution matrix is not rank one, we can resort to randomization to obtain rank-one approximate solutions [61]. Alternatively, one could minimize the nuclear norm of $\mathbf{W}$, as a surrogate for the $l_{1}$-norm in the case of matrices, to induce sparsity in the eigenvalues of $\mathbf{W}$ and promote rank-one solutions [62, 63]. The resulting rank-relaxed semidefinite program (SDR) is given by: $\displaystyle\underset{\mathbf{W\in\mathbb{C}}^{N\times N},\mathbf{\tilde{W}\in\mathbb{R}}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\text{Tr}(\mathbf{R_{xx}}\mathbf{W})+\mu\text{Tr}(\mathbf{B}^{i}\mathbf{\tilde{W}}),$ (3.12) s.t. $\displaystyle\quad\text{Tr}(\mathbf{R}_{s}\mathbf{W})\geq 1,$ $\displaystyle\quad\mathbf{\tilde{W}}\geq|\mathbf{W}|,$ $\displaystyle\quad\mathbf{W}\succeq 0.$ In general, a QCQP is NP-hard and cannot be solved in polynomial time. The formulation in (3.12), however, is clearly convex in terms of the unknown matrices, as all the correlation matrices involved are guaranteed to be positive semidefinite. The sparsity parameter $\mu$ largely determines the cardinality of the solution beamforming weight vector. To ensure the selection of $P$ sensors, an appropriate value of $\mu$ is typically found by carrying out a binary search over the probable range of $\mu$. After achieving the desired cardinality, the reduced-size thinned correlation matrix $\mathbf{R_{xx}}$ is formed corresponding to the non-zero values of $\mathbf{\tilde{W}}$. The reduced-dimension SDR is then solved with $\mu=0$, yielding the optimum beamformer $\mathbf{w}_{o}=\mathscr{P}\\{\mathbf{W}\\}$.

#### 3.3.1 Fair gain beamforming

The optimization in (3.12) strives to incorporate the signal from all the directions of interest while optimally removing the interfering signals. To achieve this objective, the optimum sparse array may lean towards a certain source of interest and, consequently, not offer fair gain towards all sources.
In an effort to promote equal gain towards all sources, we put a separate constraint on the power towards each desired source as follows: $\displaystyle\underset{\mathbf{W\in\mathbb{C}}^{N\times N},\mathbf{\tilde{W}\in\mathbb{R}}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\text{Tr}(\mathbf{R_{xx}}\mathbf{W})+\mu\text{Tr}(\mathbf{B}^{i}\mathbf{\tilde{W}}),$ (3.13) s.t. $\displaystyle\quad\text{Tr}(\mathbf{R}_{k}\mathbf{W})\geq 1,\,\,\,\,\forall k\in\\{1,2,\ldots,K\\}$ $\displaystyle\quad\mathbf{\tilde{W}}\geq|\mathbf{W}|,$ $\displaystyle\quad\mathbf{W}\succeq 0.$ Here, $\mathbf{R}_{k}=\mathbf{s}(\theta_{k})\mathbf{s}^{H}(\theta_{k})$ is the rank-one covariance matrix associated with the source at DOA $\theta_{k}$. The above SDR can be solved to an arbitrarily small accuracy $\zeta$ by employing interior point methods, with a worst-case complexity of $\mathcal{O}\\{$max$(K,N)^{4}N^{(1/2)}\log(1/\zeta)\\}$ [61].

Table 3.1: Proposed algorithm to achieve the desired cardinality of the optimal weight vector

Input: data correlation matrix $\mathbf{R_{xx}}$, $N$, $P$, look-direction DOAs $\theta_{k}$, hybrid selection vector $\mathbf{z}$.
Output: $P$-sensor beamforming weight vector $\mathbf{w}_{o}$.
Initialize $\epsilon$; initialize $\mu_{lower}$, $\mu_{upper}$ (lower and upper limits of the sparsity parameter range for the binary search for the desired cardinality $P$).
FSDR: initialize $\mathbf{B=zz}^{T}$.
NFSDR: for optimum array design without the augmentability constraint, initialize $\mathbf{z}$ as the all-ones vector, so that $\mathbf{B=zz}^{T}$ is the all-ones matrix.
Perturbed-NFSDR: locate the sensor $i$ which, if not selected, results in the minimum compromise of the objective function; perturb $\mathbf{z}$ at position $i$, $\mathbf{z}(i)=\mathbf{z}(i)+\gamma$, and then compute $\mathbf{B=zz}^{T}$.
while cardinality of $\mathbf{w}_{o}$ $\neq$ $P$ do
  Update $\mu$ through binary search.
  for a fixed number of iterations (typically five to six) do
    Run the SDR of (3.12), or of (3.13) for the fair gain case.
    Update the regularization re-weighting matrix $\mathbf{B}$ according to (3.15).
  end for
end while
After achieving the desired cardinality, run the SDR for the reduced-size correlation matrix corresponding to the nonzero values of $\mathbf{\tilde{W}}$ with $\mu=0$, yielding $\mathbf{w}_{o}=\mathscr{P}\\{\mathbf{W}\\}$.
return $\mathbf{w}_{o}$

#### 3.3.2 Modified re-weighting for fully augmentable hybrid array

For the case without the full augmentability constraint, the regularization re-weighting matrix $\mathbf{B}$ is initialized unweighted, i.e., as the all-ones matrix, and the $(m,n)$th element of $\mathbf{B}$ is iteratively updated as follows [58], $\mathbf{B}_{m,n}^{i+1}=\frac{1}{|\mathbf{W}_{m,n}^{i}|+\epsilon}.$ (3.14) The parameter $\epsilon$ guards against the unwanted case of division by zero; although its choice is fairly independent of the performance of the iterative algorithm, very small values of $\epsilon$ can at times cause the algorithm to get trapped in local minima. For the hybrid array design, we instead initialize the re-weighting matrix as the outer product of the hybrid selection vector $\mathbf{z}$. The hybrid selection vector $\mathbf{z}$ is an $N$-dimensional vector containing binary entries of zero and one, where zeros correspond to the pre-selected sensors and ones correspond to the remaining sensors to be selected. Hence, the cardinality of $\mathbf{z}$ equals the difference between the total number of available sensors and the number of pre-selected sensors.
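The sketch below shows one way to realize the re-weighted SDR iteration of Eq. (3.12) with the update of Eq. (3.14), generalized by the selection mask to the modified update introduced as Eq. (3.15) below. It assumes the `cvxpy` package; the values of $\mu$, $\epsilon$ and the iteration count are illustrative, and $\mathbf{R_{xx}}$, $\mathbf{R}_{s}$ are assumed available (e.g. from the earlier MaxSINR snippet).

```python
import numpy as np
import cvxpy as cp

def reweighted_sdr(Rxx, Rs, z_mask, mu=1.0, eps=0.1, n_iter=6):
    """Re-weighted SDR of Eq. (3.12); z_mask is all ones for NFSDR,
    and zero at the pre-fixed sensors for the hybrid (FSDR) design."""
    N = Rxx.shape[0]
    B = np.outer(z_mask, z_mask)              # B = z z^T
    for _ in range(n_iter):
        W = cp.Variable((N, N), hermitian=True)
        Wt = cp.Variable((N, N))              # auxiliary real matrix W~
        obj = cp.real(cp.trace(Rxx @ W)) + mu * cp.trace(B @ Wt)
        cons = [cp.real(cp.trace(Rs @ W)) >= 1,
                Wt >= cp.abs(W),              # element-wise, implements W~ >= |W|
                W >> 0]                       # PSD; the rank constraint is dropped
        cp.Problem(cp.Minimize(obj), cons).solve()
        # masked re-weighting: pre-fixed sensors (z = 0) are never penalized
        B = np.outer(z_mask, z_mask) / (np.abs(W.value) + eps)
    return W.value

# NFSDR example (no pre-fixed sensors): W = reweighted_sdr(Rxx, Rs, np.ones(N));
# after reaching the desired cardinality, re-solve with mu = 0 and take the
# principal eigenvector of W as w_o.
```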
This modified re-weighting initialization ensures that the sensors corresponding to the pre-selected configuration are not penalized as part of the regularization; hence, $\mathbf{B=zz}^{T}$ favours solutions that incorporate the pre-selected array topology. The modified penalizing weight update for the hybrid array design can be expressed as: $\mathbf{B}^{i+1}=(\mathbf{zz}^{T})\varoslash({|\mathbf{W}^{i}|+\epsilon}).$ (3.15) The symbol “$\varoslash$” denotes element-wise division. For the hybrid design, (3.15) is adopted with the appropriate selection of $\mathbf{z}$, as explained above, and is hereafter referred to as the Fixed SDR (FSDR). The array designed without the augmentability consideration is the special case of (3.15) with $\mathbf{z}$ being the all-ones vector, and the algorithm is subsequently regarded as the Non-Fixed SDR (NFSDR). The pseudo-code for controlling the sparsity of the optimal weight vector $\mathbf{w}_{o}$ is summarized in Table 3.1.

#### 3.3.3 Symmetric arrays

The solution of the NFSDR formulation has a penchant for symmetric arrays in the case of a symmetric initialization vector $\mathbf{z}$. A plausible explanation is as follows. We first show that the beamforming weights which maximize the output SINR for symmetric sparse array topologies are conjugate symmetric w.r.t. the array center.

###### Proposition 1. The conjugate symmetry of the optimal weight vector holds for centro-symmetric sparse array configurations in the case of the general rank desired source model. (Refer to Section 3.6 for the proof.) ∎

We observe that the regularized cost function does not invoke sparsity until after the first few initial iterations. Consequently, the initial solutions of the semidefinite program have symmetric coefficients, as the NFSDR seeks near-optimal solutions which are analytically shown to be conjugate symmetric. Moreover, the iterative sparsity-enhancing formulation introduces sparsity by penalizing the beamforming weight vector according to (3.15), which only accounts for the magnitudes of the beamforming weights. Therefore, at each iteration the regularization re-weighting matrix $\mathbf{B}$ penalizes the solution weight vector in a symmetric fashion around the array center. Thus, the iterative NFSDR sparse solution favours symmetric configurations by discarding corresponding symmetric sensors simultaneously. Although the symmetric configuration can be suitable for certain applications [64] and can have desirable performance, it reduces the available degrees of freedom. Therefore, to avoid curtailing the available degrees of freedom, we perturb the re-weighting regularization matrix $\mathbf{B}$ at the initial iteration, as follows. From the $N$ prospective locations, find the sensor position which, if not selected, results in the least compromise of the objective function. Corresponding to the aforementioned position, set the regularization weight to be relatively high through a perturbation by the parameter $\gamma$. By so doing, we resolve the issues arising from the symmetric regularization re-weighting matrix. This modified algorithm is henceforth referred to as the perturbed-NFSDR and is detailed in Table 3.1.

Figure 3.2: Output SINR for different array topologies

### 3.4 Simulations

In this section, we show the effectiveness of the proposed techniques for sparse array design for MaxSINR. We initially examine the proposed approach for array configurability by considering arbitrary arrays without the augmentability constraint.
In the later examples, we demonstrate the effectiveness of fully augmentable hybrid sparse array design through linear and 2D arrays. We focus on the EM modality, and as such we use antennas for sensors.

#### 3.4.1 Single point source

We select $P=8$ antennas from $N=16$ possible equally spaced locations with an inter-element spacing of $\lambda/2$. Figure 3.2 shows the output SINR for different array configurations for the case of a single desired point source with its DOA varying from $40^{0}$ to $140^{0}$. The interfering signals are located $20^{0}$ below and $\pm 10^{0}$ away from the desired source angle. For example, if the desired source is at $60^{0}$, the three interfering signals have respective directions of arrival of $40^{0}$, $50^{0}$ and $70^{0}$. The SNR of the desired signal is $10$ dB, and the interference-to-noise ratio (INR) is set to $10$ dB for each scenario. The input SINR is $-4.9$ dB. The upper and lower limits of the sparsity parameter $\mu$ are set to $1.5$ and $0.01$, respectively, with $\gamma=0.05$ and $\epsilon=0.1$. From Fig. 3.2, it is evident that the NFSDR-approach performs close to the optimum array found by exhaustive search ($12870$ possible configurations), which has a very high computational cost attributed to an expensive singular value decomposition (SVD) for each enumeration. Moreover, the perturbed-NFSDR algorithm results in comparable or better performance. Except for the slightly lower performance at the desired source DOA of $70^{0}$, we observe that for the desired source DOAs of $90^{0}$, $100^{0}$ and $130^{0}$, the perturbed-NFSDR recovers a sparse array with better performance than the NFSDR-approach. For the other DOAs, the perturbed-NFSDR recovers the same symmetric configuration as that recovered by the NFSDR-approach. This emphasizes that the perturbed-NFSDR does not eliminate the possibility of symmetric solutions and optimizes over both the symmetric and non-symmetric array configurations. On average, the proposed algorithms take six to seven iterations to converge to the optimum antenna locations, hence offering considerable savings in computational cost. It is of interest to compare the optimum sparse array performance with that of the compact uniform linear array (ULA). It can be seen from Fig. 3.2 that the optimum sparse array offers a considerable SINR advantage over the compact ULA for all source angles of arrival. The ULA performance degrades severely when the source of interest is more towards the array end-fire location. In this case, the ULA fails to resolve and cancel the strong interferers, as they are located close to the desired source.

Figure 3.3: Average output SINR for different array topologies over $6000$ Monte Carlo trials

Figure 3.4: Array configurations obtained for the point source at the array broadside (a) Optimum (enumeration) (b) NFSDR-approach (c) Perturbed-NFSDR (d) Worst performing array configuration

For the case of the desired source at the array broadside, the maximum output SINR of the optimum array found through enumeration (Fig. 3.4a) is $19$ dB. The optimum array design obtained through the NFSDR-approach yields an output SINR of $18.6$ dB, which is $0.4$ dB less than the corresponding SINR of the optimum array found through exhaustive search. The broadside source arrays are shown in Fig. 3.4 (where a green-filled circle indicates an antenna present, whereas a gray-filled circle indicates an antenna absent).
The sparse array recovered through the NFSDR-approach is clearly a symmetric configuration (Fig. 3.4b). Figure 3.4c shows the sparse array found after addressing the symmetry bias by the approach explained in Section 3.3.3. The SINR for this non-symmetric configuration is $18.7$ dB and is suboptimal by merely $0.3$ dB. It is worth noticing that the worst performing sparse array configuration (Fig. 3.4d) engages a comparatively larger array aperture than the optimum array found through enumeration (Fig. 3.4a), yet it has an output SINR as low as $2.06$ dB. This emphasizes the fact that an arbitrarily employed sparse array structure can degrade the performance catastrophically, irrespective of the occupied aperture, and can perform far worse than the compact ULA, which offers a modest output SINR of $15.07$ dB for the scenario under consideration.

##### Monte Carlo Simulation

To thoroughly examine the performance of the proposed algorithms under random interfering environments, we perform 6000 Monte Carlo simulations. For this purpose, the desired source DOA is fixed with an SNR of $10$ dB, and eight interferers are generated, uniformly distributed anywhere from $20^{0}$ to $160^{0}$. The INRs of these sources are uniformly drawn from $10$ dB to $15$ dB. We choose 8 antennas out of 16 possible locations. The upper and lower limits of the sparsity parameter $\mu$ are set to 3 and 0.01, respectively, with $\gamma=0.1$ and $\epsilon=0.05$. The performance curves are shown in Fig. 3.3 for the desired source fixed at $11$ different DOAs varying from $40^{0}$ to $140^{0}$. On average, the proposed perturbed-NFSDR algorithm consistently provided superior SINR performance. However, this performance is around $1.2$ dB below the average SINR computed through enumeration. The average SINR performance of the perturbed-NFSDR algorithm is around $0.35$ dB better than that of the proposed NFSDR-approach, since the latter's degrees of freedom are limited by the inherent array symmetry enforced by the re-weighted optimization scheme. The performances of the proposed algorithms are compared with the design methodology proposed in [48], which relies on a priori knowledge of the interference steering vectors and their respective powers. It is noted that in the underlying scenario the design in [48] is more than $1$ dB below the proposed algorithms and around $2$ dB below the performance upper bound. The algorithm in [48] relies on successive linear approximation of the objective function, as opposed to the quadratic implementation of the SDR, thereby suffering in performance. The SINR performances of the compact ULA, sparse ULA and a randomly employed sparse topology are also shown in Fig. 3.3, further highlighting the utility of sparse array design.

Figure 3.5: (a) Antenna array for multiple sources (NFSDR-approach) (b) Fair gain $10$ element antenna array (NFSDR-approach) (c) Hybrid $10$ antenna array for multiple desired sources (FSDR)

Figure 3.6: Beampattern for multiple point sources

Figure 3.7: (a) $14$ element antenna array (NFSDR-approach) (b) Hybrid $14$ antenna sparse array ($8$ prefixed, $6$ selected through FSDR) (c) Hybrid $14$ antenna sparse array ($8$ prefixed, $6$ selected through FSDR)

#### 3.4.2 Multiple point sources

For the multiple point sources scenario, consider three desired signals impinging from DOAs $40^{0}$, $65^{0}$ and $90^{0}$ with an SNR of $0$ dB each.
Unlike the example in 3.4.1, four strong interferers with an INR of $30$ dB each are operational at DOAs $50^{0}$, $60^{0}$, $120^{0}$ and $150^{0}$. In so doing, we analyze the robustness of the proposed scheme under a very low input SINR of $-36.02$ dB. We select $10$ antennas out of $18$ available slots. The optimum array recovered through convex relaxation is shown in Fig. 3.5a. This configuration results in an output SINR of $11.85$ dB, against an SINR of $12.1$ dB for the optimum configuration found through enumeration. For fair gain beamforming, we apply the optimization of (3.13); the array configuration for MaxSINR with fair gain beamforming is shown in Fig. 3.5b. The output SINR for the fair beamforming case is $11.6$ dB, which is slightly less than that of the optimum array without the fair gain consideration ($11.85$ dB). However, the advantage of fair beamforming is well apparent from the beampatterns of both cases shown in Fig. 3.6, where the gain towards the source at $65^{0}$ is around $4.24$ dB higher than in the case of the optimum array without the fair gain consideration. The maximum gain deviation for the fair gain case is $3.5$ dB vs. an $8$ dB variation without the fair gain consideration. The SINR of the compact ULA is compromised by more than $3$ dB compared to the optimum sparse array (Fig. 3.5a) obtained through the proposed methodology. This improved performance is due to the optimum sparse array smartly engaging its degrees of freedom to eradicate the interfering signals while maintaining maximum gain towards all sources of interest.

#### 3.4.3 Fully augmentable linear arrays

Consider selecting $14$ antennas out of $24$ possible locations with an antenna spacing of $\lambda/2$. A desired source is impinging from a DOA of $30^{0}$ with an SNR of $10$ dB, whereas narrowband jammers are operating at $20^{0}$, $40^{0}$ and $120^{0}$ with an INR of $10$ dB each. The range of $\mu$ and the other parameters are the same as in 3.4.1. The optimum array configuration (Fig. 3.7a) achieved through convex relaxation (NFSDR-approach) has an output SINR of $21.29$ dB, as compared to the SINR of $21.32$ dB of the optimum array recovered through enumeration ($1.96\times 10^{6}$ possible configurations). It should be noted that the array recovered without the filled co-array constraint is not necessarily fully augmentable, as is the case for the optimum array (Fig. 3.7a), which clearly has missing co-array lags. In the quest of a fully augmentable array design, we prefix 8 antennas (red elements in Fig. 3.7b) in a minimum redundancy array (MRA) configuration over 24 uniform grid points. This provides 24 consecutive autocorrelation lags. We are, therefore, left with six antennas to be placed in the remaining 16 possible locations ($8008$ possible configurations). We enumerated the performance of all possible hybrid arrays associated with the underlying MRA configuration and found that the output SINR ranges from $18.1$ dB to $21.3$ dB. Figure 3.7b shows the configuration recovered through the proposed approach, which has an output SINR of $20.96$ dB. The proposed approach thus recovers the hybrid sparse array with performance close to the best possible; moreover, it yields approximately a $3$ dB advantage over the worst fully augmentable hybrid array. As MRAs are not unique, we also started with a different 8-element MRA structured array (red elements in Fig. 3.7c) to further reinforce the effectiveness of fully augmentable sparse arrays. The dynamic performance range associated with the MRA of Fig. 3.7c is from $17.59$ dB to $21.3$ dB.
The performance in this case is very similar to that of the aforementioned MRA configuration, with an output SINR of $21.08$ dB for the hybrid array recovered through the proposed methodology (Fig. 3.7c). The maximum possible SINR offered by both hybrid arrays is $21.3$ dB, which is extremely close to the SINR performance of $21.32$ dB offered by the optimum array without the augmentability constraint.

Figure 3.8: Average output SINR for different array topologies over $3500$ Monte Carlo trials

##### Monte Carlo Simulation

We generate 3500 Monte Carlo simulations to compare the performance of sparse arrays that are designed freely with that of sparse array designs involving the full augmentability constraint. We choose $16$ antennas out of $24$ available locations. The desired source DOA is fixed with an SNR of $10$ dB, as in 3.4.1. We assume twelve narrowband interferers drawn uniformly from $20^{0}$ to $160^{0}$ with respective INRs uniformly distributed from $10$ dB to $15$ dB. For the binary search, the upper and lower limits of the sparsity parameter $\mu$ are 5 and 0.01, respectively, and $\epsilon=0.1$, for all 3500 scenarios. Fig. 3.8 shows the average SINR performance, where the proposed NFSDR-approach is only $0.57$ dB suboptimal relative to the optimum array found through enumeration (choosing 16 antennas out of 24 involves $735471$ prospective configurations). However, this performance is achieved by sparse arrays without ensuring the augmentability constraint. Therefore, we prefix $8$ antennas in an MRA topology, namely the Hybrid $1$ and Hybrid $2$ prefix configurations, shown as red circles in Figs. 3.7b and 3.7c, respectively. The MaxSINR performance, found by enumeration, for either of the underlying hybrid topologies competes very closely, as evident in Fig. 3.8. The average MaxSINR (found by enumeration) under both prefixed configurations is compromised by only $0.28$ dB relative to the average MaxSINR performance offered without the augmentability constraint. It is noted that in this case the possible sparse configurations are drastically reduced from $735471$ to $12870$ (choosing the remaining $8$ antennas from the $16$ remaining possible locations, owing to prefixing $8$ antennas a priori). It is clear from Fig. 3.8 that the proposed FSDR algorithm successfully recovers the hybrid sparse array with an average SINR performance loss of $0.8$ dB. We remark that the performance of the hybrid sparse array is still slightly better than that of the optimum sparse array receive beamforming proposed in [48], which assumes the knowledge of the jammers' steering vectors and utilizes all the available degrees of freedom, unlike the hybrid sparse array.

#### 3.4.4 Fully augmentable 2D arrays

Consider a $7\times 7$ planar array with a grid spacing of $\lambda/2$, where we place 24 antennas at 49 possible positions. A desired source is impinging from an elevation angle $\theta=50^{0}$ and an azimuth angle of $\phi=90^{0}$. Here, the elevation angle is measured with respect to the plane carrying the array rather than referenced from the zenith. Four strong interferers are impinging from ($\theta=20^{0}$, $\phi=30^{0}$), ($\theta=40^{0}$, $\phi=80^{0}$), ($\theta=120^{0}$, $\phi=75^{0}$) and ($\theta=35^{0}$, $\phi=20^{0}$). The INR corresponding to each interferer is $20$ dB and the SNR is set to $0$ dB. There are of the order of $10^{14}$ possible 24-antenna configurations; hence, exhaustive search is prohibitive. Therefore, we resort to an upper bound on the performance limits to compare our results.
Here, we utilize the fact that the best possible performance occurs when the interferers are completely canceled in the array output; the output SINR in that case would equal the array gain offered by the 24-element array, which amounts to $13.8$ dB. Figure 3.9 shows the optimum antenna locations recovered by the proposed NFSDR-approach. The output SINR for this configuration is $13.68$ dB, which is sufficiently close to the ideal performance. It should be noted that, again, the array recovered in Fig. 3.9 is not fully augmentable, as it is missing quite a few correlation lags. We now introduce the condition of full augmentability by placing 19 antennas in a nested lattice configuration [65] to form a filled co-array (red elements in Fig. 3.10). The remaining five antennas can be placed at the remaining 30 possible locations, resulting in approximately $1.5\times 10^{5}$ possibilities. Figure 3.10 shows the hybrid sparse geometry recovered by the FSDR algorithm, which offers an SINR of $13.25$ dB, around $0.4$ dB less than the optimum array. The performance of the hybrid arrays associated with the structured nested lattice array ranges from $11.4$ dB to $13.38$ dB (found through exhaustive search). In this regard, the FSDR algorithm finds the hybrid sparse array with a performance degradation of little more than $0.1$ dB. The worst performing hybrid array (Fig. 3.11) has an output SINR of $11.4$ dB, around $2$ dB lower than the best performing hybrid sparse array. It is of interest to compare the performance of the aforementioned sparse arrays with a compact $2$D array. For this purpose, we chose a $6\times 4$ rectangular array. The compact rectangular array performs very poorly in the underlying scenario and has an output SINR of $7.8$ dB, which is more than $5$ dB down from the hybrid sparse array recovered through the semidefinite relaxation. This performance degradation is very clear from the beampatterns of both arrays shown in Figs. 3.12 and 3.13 (normalized beampatterns in dB). In the case of the hybrid sparse array recovered through FSDR (Fig. 3.10), the beampattern has maximum gain towards the direction of interest and simultaneously minimum gain towards all unwanted DOAs (Fig. 3.12). In contrast, it is clear from Fig. 3.13 that the beampattern of the compact rectangular array cannot manage maximum gain towards the direction of interest while effectively rejecting the interfering signals. Although the $6\times 5$ and $6\times 6$ compact arrays utilize $6$ and $12$ additional sensors, their respective output SINRs of $9.04$ dB and $11$ dB remain considerably suboptimal relative to the proposed solutions. It is noted that adding 18 additional sensors, resulting in a $7\times 6$ rectangular array, gives an output SINR of $12.87$ dB. Still, the 24-element free design as well as the hybrid design outperform the compact $42$-element rectangular array. A $49$-element fully populated $7\times 7$ rectangular array has an output SINR of $14.37$ dB, which is a marginal improvement given the SINR of the 24-element designed topologies. The hybrid array also appears to be more robust, as it has a higher dynamic performance range threshold ($11.4$ dB). The performance of arbitrarily designed arrays is more prone to catastrophic deterioration, potentially far worse than that of compact uniform or rectangular arrays.
Figure 3.9: $24$-element sparse antenna array (NFSDR approach)

We also test the fully augmentable array design for the multiple point source scenario described previously (Section 3.4.2). The hybrid array recovered through the proposed methodology is shown in Fig. 3.5c (red elements showing the $7$-element MRA). The output SINR is $11.566$ dB and is sufficiently close to the performance achieved through enumeration.

Figure 3.10: $24$-element hybrid sparse antenna array ($19$ prefixed, $5$ selected through FSDR)

Figure 3.11: $24$-element worst-performing hybrid sparse antenna array ($19$ prefixed, $5$ selected)

### 3.5 Conclusion

This chapter considered fully augmentable sparse array configurations for maximizing the beamformer output SINR for general-rank desired signal correlation matrices. It proposed a hybrid sparse array design that simultaneously considers co-array and environment-dependent objectives. The proposed array design approach uses a subset of the available antennas to obtain a fully augmentable array while employing the remaining antennas for achieving the highest SINR. It was shown that the hybrid design is data-driven and hence practically viable, as it ensures the availability of the full data correlation matrix with a reasonable trade-off in SINR performance. We applied the modified re-weighted QCQP, which proved effective in recovering superior SINR performance for hybrid sparse arrays in polynomial run time. The proposed approach was extended to fair-gain beamforming towards multiple sources. We solved the optimization problem by both the proposed algorithms and enumeration, and showed strong agreement between the two methods.

Figure 3.12: Beampattern for the antenna array in Fig. 3.10

Figure 3.13: Beampattern for a $6\times 4$ compact rectangular array

### 3.6 Appendix

#### 3.6.1 Proof of the conjugate symmetric property of the optimal weight vector

The correlation matrix $\mathbf{R}$ for centro-symmetric arrays has a conjugate persymmetric structure such that [66]: $\mathbf{T}\mathbf{R}^{\prime}\mathbf{T}=\mathbf{R}$ (3.16)

Here, $(\cdot)^{\prime}$ denotes the conjugate operator and $\mathbf{T}$ is the exchange (flip) matrix, which reverses the entries of a vector by left multiplication: $\mathbf{T}=\begin{bmatrix}0&\dots&0&0&1\\ 0&\dots&0&1&0\\ \vdots& & & &\vdots\\ 1&0&0&\dots&0\end{bmatrix}$

The optimal weight vector that maximizes the SINR is given by $\mathbf{w}_{o}=\mathscr{P}\{\mathbf{R}_{s^{\prime}}^{-1}\mathbf{R}_{s}\}$ (3.17), that is, $\{\mathbf{R}_{s^{\prime}}^{-1}\mathbf{R}_{s}\}\mathbf{w}_{o}=\Lambda_{max}\mathbf{w}_{o}$ (3.18)

Using the relation in (3.16), (3.18) can be re-expressed as follows, $\{(\mathbf{T}\mathbf{R}^{\prime}_{s^{\prime}}\mathbf{T})^{-1}(\mathbf{T}\mathbf{R}^{\prime}_{s}\mathbf{T})\}\mathbf{w}_{o}=\Lambda_{max}\mathbf{w}_{o}$, i.e., $\{\mathbf{T}^{-1}(\mathbf{R}^{\prime}_{s^{\prime}})^{-1}\mathbf{T}^{-1}(\mathbf{T}\mathbf{R}^{\prime}_{s}\mathbf{T})\}\mathbf{w}_{o}=\Lambda_{max}\mathbf{w}_{o}$ (3.19)

Multiplying both sides by $\mathbf{T}$ and applying the conjugate operator, $\{\mathbf{R}_{s^{\prime}}^{-1}\mathbf{R}_{s}\}\mathbf{T}\mathbf{w}^{\prime}_{o}=\Lambda_{max}\mathbf{T}\mathbf{w}^{\prime}_{o}$ (3.20)

From (3.20), we note that $\mathbf{T}\mathbf{w}_{o}^{\prime}$ is also the principal eigenvector associated with the matrix $\mathbf{R}_{s^{\prime}}^{-1}\mathbf{R}_{s}$.
Since the principal eigenvector of a positive definite Hermitian matrix is unique up to a complex scalar multiplier, this directly implies that $\mathbf{w}_{o}=\mathbf{T}\mathbf{w}_{o}^{\prime}$. ∎

## Chapter 4 Sparse Array Design for Maximizing the Signal-to-Interference-plus-Noise Ratio by Matrix Completion

### 4.1 Introduction

Sensor selection schemes strive to optimize various performance metrics while curtailing valuable hardware and computational resources. Sparse sensor placement, with various design objectives, has successfully been employed in diverse application areas, particularly for enhanced parameter estimation and receiver performance [1, 2, 4, 44, 5, 15, 6, 64]. Sparse array design criteria are generally categorized into environment-independent and environment-dependent performance metrics. The former are largely agnostic to the underlying environment and, in principle, seek to maximize the spatial degrees of freedom by extending the co-array aperture. This enables high-resolution direction-of-arrival (DOA) estimation, possibly involving more sources than available physical sensors [7, 8, 10, 31, 67]. Environment-dependent objectives, on the other hand, consider the operating conditions characterized by emitters and targets in the array field of view, in addition to receiver noise. In this regard, applying such objectives renders the array configuration as well as the array weights time-varying in response to a dynamic and changing environment.

In this chapter, we focus on optimum sparse array design for receive beamforming that maximizes the output signal-to-interference-plus-noise ratio (MaxSINR) [68, 47, 60, 69, 70]. It has been shown that optimum sparse array beamforming involves both array configuration and weights, and can yield significant dividends in terms of SINR performance in the presence of desired and interfering sources [11, 13, 14, 20, 12, 48, 28]. However, one key challenge in implementing data-dependent approaches, like Capon beamforming, is the need to have the exact or estimated values of the data autocorrelation function across the full sparse array aperture [47, 25]. This underlying predicament arises as the sparse array design can only have a few active sensors at a time, in essence making it difficult to furnish the correlation values corresponding to the inactive sensor locations. To address the aforementioned problem, we propose in this chapter a matrix completion strategy assuming a single desired source and multiple interfering sources. This strategy permits the interpolation of the missing data correlation lags, thus enabling optimum "thinning" of the array for MaxSINR. Low-rank matrix completion has been utilized successfully in many applications, including high-resolution direction-of-arrival estimation. We compare the matrix completion strategy to the recently introduced hybrid sparse array design, which also provides the full spatial autocorrelation function for array thinning [71, 72]. The fundamental thrust of the hybrid design is to pre-allocate some of the available sensors in such a way as to ensure that all possible correlation lags can be estimated. In this case, the sensors remaining after pre-allocation can be utilized for maximizing the SINR. In essence, the hybrid design locks a few spatial degrees of freedom in an attempt to make the full autocorrelation matrix available for array optimization at all times (a small sketch of the underlying augmentability condition follows below).
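The full-augmentability condition underlying the hybrid design can be stated compactly: a placement is fully augmentable when its difference co-array contains every spatial lag up to the aperture. A small sketch of this check, with illustrative placements (the function name and examples are not from the text):

```python
# Sketch of the full-augmentability check: every spatial lag up to the
# array aperture must appear in the difference co-array.
import numpy as np

def is_fully_augmentable(pos):
    pos = np.asarray(pos)
    lags = {abs(a - b) for a in pos for b in pos}
    return lags == set(range(pos.max() - pos.min() + 1))

print(is_fully_augmentable([0, 1, 4, 6]))  # True: MRA-like, all lags 0..6
print(is_fully_augmentable([0, 1, 2, 6]))  # False: lags 3 and 5 missing
```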
In that sense, it is a hybrid between structured and non-structured arrays. With pre-allocated sensors, the design approach offers simplified antenna switching as the environment changes. In contrast, the matrix completion-based design is not tied to any pre-allocated sensor position and, therefore, has the ability to optimize over all the available sensor locations. However, low-rank matrix completion is a pre-processing step that is required every time we decide on sensor selection as the environment changes. This significantly adds to the overall overhead and computational complexity. We examine both approaches using the estimated autocorrelation function, in lieu of its exact values, and compare their respective performances under different settings and degrees of freedom.

Figure 4.1: Block diagram implementing adaptive beamforming and antenna switching

It is worth noting that MaxSINR sparse array design using either approach is an entwined optimization problem that jointly optimizes the beamforming weights and determines the active sensor locations. The optimization is posed as finding $P$ sensor positions out of $N$ possible equally spaced grid points for the highest SINR performance. It is known that maximizing the SINR is equivalent to the problem of maximizing the principal eigenvalue of the product of the inverse of the data correlation matrix and the desired source correlation matrix [25]. However, the maximum eigenvalue problem over all possible sparse topologies is a combinatorial problem and is challenging to solve in polynomial time. To alleviate the computational complexity of exhaustive combinatorial search, we pose this problem as a successive convex approximation (SCA) with reweighted $l_{1}$-norm regularization to promote sparsity in the final solution. To proceed with the SCA optimization, it is essential to input the algorithm with the full data correlation matrix. For the hybrid design, all the correlation lags are available and, therefore, we resort to averaging across the available correlation lags to harness a Toeplitz estimate of the received data correlation matrix. On the other hand, a sparse array designed freely without preallocating sensor locations necessitates the use of low-rank matrix completion to interpolate the missing lags and the subsequent application of the SCA optimization [73]. The word 'free' implies no preset position of any of the sensors involved. It is shown that matrix completion is an effective approach for accomplishing the MaxSINR sparse design. The performance of matrix completion can potentially surpass the hybrid design at the expense of more involved sensor switching and additional computational complexity stemming from the Toeplitz interpolation of the missing correlation lags. The overall system implementation is depicted in Fig. 4.1.

The rest of the chapter is organized as follows: In the next section, we state the problem formulation for maximizing the output SINR. Section 4.3 details the SCA for arrays designed freely, alongside the hybrid design approach and the associated modified SCA optimization. Section 4.4 explains the matrix completion approach. In Section 4.5, with the aid of Monte Carlo simulations, we compare the performance of hybrid-designed arrays vis-à-vis freely designed arrays in a limited-snapshot environment. Concluding remarks follow at the end.

### 4.2 Problem Formulation

We consider an emitter source in the presence of narrowband interfering signals.
The signals impinge on a uniform grid of $N$ linear elements with inter-element spacing $d$, and the received signal is given by, $\mathbf{x}(n)=b_{s}(n)\mathbf{s}(\theta)+\sum_{k=1}^{I}b_{ik}(n)\mathbf{i}(\theta_{k})+\mathbf{v}(n)$ (4.1)

The sampling instance is $n$, $I$ denotes the number of interfering sources, and ($b_{s}(n)$, $b_{ik}(n))$ $\in\mathbb{C}$ are the baseband signals for the source and interferences, respectively. The steering vector corresponding to the direction of arrival of the desired source, $\mathbf{s}({\theta})$ $\in\mathbb{C}^{N}$, is given by, $\mathbf{s}({\theta})=[1\;\;e^{j(2\pi/\lambda)d\cos(\theta)}\;\cdots\;e^{j(2\pi/\lambda)d(N-1)\cos(\theta)}]^{T}$ (4.2)

The interference steering vectors $\mathbf{i}({\theta_{k}})$ are similarly defined. The additive noise $\mathbf{v}(n)$ $\in\mathbb{C}^{N}$ is Gaussian with variance $\sigma_{v}^{2}$. The beamformer processes the received signal $\mathbf{x}(n)$ linearly to improve the SINR. The beamformer output $y(n)$ is given by, $y(n)=\mathbf{w}_{o}^{H}\mathbf{x}(n)$ (4.3)

The optimal beamforming weight vector $\mathbf{w}_{o}$ that maximizes the SINR is obtained by solving the following optimization problem [25]: $\underset{\mathbf{w}\in\mathbb{C}^{N}}{\text{minimize}}\quad\mathbf{w}^{H}\mathbf{R}_{i}\mathbf{w}\quad\text{s.t.}\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}=1$ (4.4)

The source correlation matrix is $\mathbf{R}_{s}=\sigma^{2}\mathbf{s}(\theta)\mathbf{s}^{H}(\theta)$, with power $\sigma^{2}=E\{b_{s}(n)b_{s}^{*}(n)\}$. The sum of the interference and noise correlation matrices is $\mathbf{R}_{i}=\sum_{k=1}^{I}\sigma^{2}_{k}\mathbf{i}(\theta_{k})\mathbf{i}^{H}(\theta_{k})+\sigma_{v}^{2}\mathbf{I}_{N\times N}$, with the $k$th interference power $\sigma^{2}_{k}=E\{b_{ik}(n)b_{ik}^{*}(n)\}$. Since $\mathbf{R_{x}}=\mathbf{R}_{s}+\mathbf{R}_{i}$, formulation (4.4) can be rewritten with the data correlation matrix in the objective function as follows [25], $\underset{\mathbf{w}\in\mathbb{C}^{N}}{\text{minimize}}\quad\mathbf{w}^{H}\mathbf{R_{x}}\mathbf{w}\quad\text{s.t.}\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1,$ (4.5)

where the equality constraint is relaxed due to the inclusion of the relationship between the data and signal autocorrelation matrices in the cost function. The optimum solution of the above problem only requires knowledge of the received data correlation matrix $\mathbf{R_{x}}=E(\mathbf{x}\mathbf{x}^{H})$ and the DOA of the desired source. The former can readily be estimated from the received data vector $\mathbf{x}$ over $T$ snapshots, $\mathbf{\hat{R}_{x}}=\frac{1}{T}\sum_{n=1}^{T}\mathbf{x}(n)\mathbf{x}^{H}(n)$. The analytical solution of the optimization problem is given, up to a scale factor, by $\mathbf{w}_{o}=\mathbf{R}_{i}^{-1}\mathbf{s}(\theta)$, with the optimum output SINR $\text{SINR}_{o}=\frac{\mathbf{w}_{o}^{H}\mathbf{R}_{s}\mathbf{w}_{o}}{\mathbf{w}_{o}^{H}\mathbf{R}_{i}\mathbf{w}_{o}}=\Lambda_{max}\{\mathbf{R}^{-1}_{i}\mathbf{R}_{s}\},$ (4.6)

which is in fact the maximum eigenvalue ($\Lambda_{max}$) of the product of the inverse of the interference-plus-noise correlation matrix and the desired source correlation matrix. In the next section, the formulation in (4.5) is extended to the sparse beamformer design.

### 4.3 Sparse array design through the SCA algorithm

The expression in (4.6) is applicable to any array topology, including uniform and sparse arrays with the respective correlation matrices.
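Before turning to the sparse design, a quick numerical sanity check of (4.6) may be helpful: build $\mathbf{R}_{s}$ and $\mathbf{R}_{i}$ for a toy scenario, take the principal eigenvector of $\mathbf{R}_{i}^{-1}\mathbf{R}_{s}$, and confirm that its output SINR equals $\Lambda_{max}$. All scenario numbers below are illustrative, not from the text:

```python
# Sanity-check sketch of (4.6): SINR_o = Lambda_max{R_i^{-1} R_s}.
import numpy as np

def steer(theta_deg, N):
    # ULA steering vector for d = lambda/2 (cf. (4.2))
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.cos(np.deg2rad(theta_deg)))

N = 8
s = steer(90, N)
R_s = np.outer(s, s.conj())                      # desired source, 0 dB
R_i = np.eye(N, dtype=complex)                   # unit-variance noise
for doa in (40, 135):                            # two 20 dB jammers
    a = steer(doa, N)
    R_i += 100 * np.outer(a, a.conj())

vals, vecs = np.linalg.eig(np.linalg.solve(R_i, R_s))
w = vecs[:, np.argmax(vals.real)]                # principal eigenvector
sinr = (w.conj() @ R_s @ w).real / (w.conj() @ R_i @ w).real
print(sinr, vals.real.max())                     # the two agree
```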
To achieve sparse solutions, given knowledge of the full correlation matrix, an additional constraint is introduced in (4.5): $\underset{\mathbf{w\in\mathbb{C}}^{N}}{\text{minimize}}\quad\mathbf{w}^{H}\mathbf{R_{x}}\mathbf{w}\quad\text{s.t.}\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1,\quad||\mathbf{w}||_{0}=P$ (4.7)

The operator $||.||_{0}$ denotes the $l_{0}$ norm, which constrains the cardinality of the weight vector $\mathbf{w}$ to the number of available sensors, $P$. The problem in (4.7) is clearly non-convex, involving a hard constraint, rendering the formulation challenging to solve in polynomial time [55]. The objective function and quadratic constraint in (4.7) are interchanged, transforming it into the equivalent formulation, $\underset{\mathbf{w\in\mathbb{C}}^{N}}{\text{maximize}}\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\quad\text{s.t.}\quad\mathbf{w}^{H}\mathbf{R_{x}}\mathbf{w}\leq 1,\quad||\mathbf{w}||_{0}=P$ (4.8)

In general, the beamforming weight vector is complex-valued; however, the quadratic forms are real-valued. The real and imaginary entries of the optimal weight vector can be decoupled, permitting the involvement of only real unknowns. This is achieved by concatenating the beamforming weight vector and defining the respective correlation matrices [74], $\tilde{\mathbf{R}}_{s}=\begin{bmatrix}\text{real}({\mathbf{R}}_{s})&-\text{imag}({\mathbf{R}}_{s})\\ \text{imag}({\mathbf{R}}_{s})&\text{real}({\mathbf{R}}_{s})\end{bmatrix},\qquad\tilde{\mathbf{w}}=\begin{bmatrix}\text{real}({\mathbf{w}})\\ \text{imag}({\mathbf{w}})\end{bmatrix}$ (4.9)

$\tilde{\mathbf{R}}_{x}=\begin{bmatrix}\text{real}({\mathbf{R}}_{x})&-\text{imag}({\mathbf{R}}_{x})\\ \text{imag}({\mathbf{R}}_{x})&\text{real}({\mathbf{R}}_{x})\end{bmatrix}$ (4.10)

Replacing $\mathbf{R_{s}}$ and $\mathbf{R_{x}}$ by $\tilde{\mathbf{R}}_{s}$ and $\tilde{\mathbf{R}}_{x}$, respectively, (4.8) can be expressed in terms of real variables, $\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2N}}{\text{maximize}}\quad\mathbf{\tilde{w}}^{\prime}\tilde{\mathbf{R}}_{s}\mathbf{\tilde{w}}\quad\text{s.t.}\quad\mathbf{\tilde{w}}^{\prime}\mathbf{\tilde{R}_{x}}\mathbf{\tilde{w}}\leq 1,\quad||\mathbf{w}||_{0}=P$ (4.11)

The quadratic constraint clearly has a convex feasible region; however, there remains a non-convex constraint involving the $l_{0}$ norm. In order to realize a convex feasible region, the $l_{0}$ norm is typically relaxed to the $l_{1}$ norm, which has been used effectively in many sparse recovery applications. The maximization problem is first transformed to a minimization in order to move the $l_{1}$ norm constraint to the objective function and realize a sparse solution. This is achieved by reversing the sign of the entries of the desired source correlation matrix, $\bar{\mathbf{R}}_{s}=-\mathbf{\tilde{R}}_{s}$, $\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2N}}{\text{minimize}}\quad\mathbf{\tilde{w}}^{\prime}\bar{\mathbf{R}}_{s}\mathbf{\tilde{w}}\quad\text{s.t.}\quad\mathbf{\tilde{w}}^{\prime}\mathbf{\tilde{R}_{x}}\mathbf{\tilde{w}}\leq 1,\quad||\mathbf{w}||_{0}=P$ (4.12)

To convexify the objective function, the concave objective is iteratively approximated through successive linear approximation.
The approximation coefficients $\mathbf{m}^{i}$ and $b^{i}$ are updated iteratively through the first-order approximation $\mathbf{m}^{i+1}=2\bar{\mathbf{R}}_{s}\mathbf{\tilde{w}}^{i}$, $b^{i+1}=-\mathbf{\tilde{w}}^{i\,\prime}\bar{\mathbf{R}}_{s}\mathbf{\tilde{w}}^{i}$, resulting in the following form, $\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2N}}{\text{minimize}}\quad\mathbf{m}^{i\,\prime}\mathbf{\tilde{w}}+b^{i}\quad\text{s.t.}\quad\mathbf{\tilde{w}}^{\prime}\mathbf{\tilde{R}_{x}}\mathbf{\tilde{w}}\leq 1,\quad||\mathbf{w}||_{0}=P$ (4.13)

Finally, the non-convex $l_{0}$ norm is relaxed by minimizing the mixed $l_{1-\infty}$ norm to recover sparse solutions, $\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2N}}{\text{minimize}}\quad\mathbf{m}^{i\,\prime}\mathbf{\tilde{w}}+b^{i}+\mu(\sum_{k=1}^{N}||\mathbf{\tilde{w}}_{k}||_{\infty})\quad\text{s.t.}\quad\mathbf{\tilde{w}}^{\prime}\mathbf{\tilde{R}_{x}}\mathbf{\tilde{w}}\leq 1$ (4.14)

The summation implements the $l_{1}$ norm that is minimized as a convex surrogate of the $l_{0}$ norm. The vector $\mathbf{\tilde{w}}_{k}\in\mathbb{R}^{2}$ has two entries containing the real and imaginary parts of the beamforming weight corresponding to the $k$th sensor. The $||.||_{\infty}$ operator selects the maximum-magnitude entry of $\mathbf{\tilde{w}}_{k}$ and thus penalizes the real and imaginary entries jointly. This is appropriate because not selecting a sensor implies the simultaneous removal of both the real and the corresponding imaginary entries from the final solution vector. The sparsity parameter $\mu$ is set to zero for the first few iterations to allow the solution to converge to the optimal solution for the full array. The sparsity parameter $\mu$ by itself does not guarantee the final solution to be $P$-sparse. To guarantee a $P$-sparse solution, the optimization problem is solved successively for different values of $\mu$. The values of $\mu$ are typically given by a binary search between the possible upper and lower limits of $\mu$ until the algorithm converges to $P$ sensors [2].

Table 4.1: SCA for sparse array beamforming.
Require: Received data sparse correlation matrix $\mathbf{R}_{P}$, look-direction DOA $\theta$.
Ensure: $P$-sensor beamforming weight vector.
Matrix completion: Run Eq. (4.18) for the free design, or Toeplitz averaging for the hybrid design, to estimate the full correlation matrix. Set the lowest eigenvalues of $\mathbf{\hat{R}_{x}}$, corresponding to the noise subspace, equal to the noise floor.
Initialization: Initialize the beamforming vectors randomly to find $\mathbf{m}$ and $b$. Initialize $\epsilon$, $\mu=0$.
while (solution does not converge for $\mu=0$) do
  Run Eq. (4.16).
end while
Initialize $\mathbf{h}^{i}$ to the all-ones vector (binary vector for the hybrid design). Select $\mu$ (binary search).
while (beamforming weight vector is not $P$-sparse) do
  Run Eq. (4.16) (for the initial iteration, use $\mathbf{m}^{i}$ and $b^{i}$ from the previous while loop).
  Update the regularization weighting parameter, $\mathbf{h}^{i+1}(k)=\frac{1}{||\mathbf{\tilde{w}}_{k}^{i}||_{2}+\epsilon}$.
  Update $\mathbf{m}^{i}$ and $b^{i}$.
end while
After achieving the desired cardinality, analytically solve for $\mathbf{\tilde{w}}$ corresponding to the selected sensor locations, yielding the optimal weight vector.
return the optimal weight vector $\mathbf{w}_{o}$.

#### 4.3.1 Hybrid sparse array design

Formulation (4.14) penalizes all the sensor weights in an effort to optimize the objective function. We refer to this approach as the free design. On the other hand, the hybrid sparse array design penalizes only some of the sensor weights, leaving the remaining sensors at prefixed positions. These positions are chosen to guarantee full augmentability of the sparse array, i.e., to provide the ability to estimate the autocorrelation at all spatial lags across the array aperture. This provides the means for thinning the array and carrying out the sparse optimization at all times. In order to discriminate the prefixed sensors from those available for design, a weighted formulation is adopted, in turn modifying (4.14) as follows, $\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2N}}{\text{minimize}}\quad\mathbf{m}^{i\,\prime}\mathbf{\tilde{w}}+b^{i}+\mu(\sum_{k=1}^{N}\mathbf{h}(k)||\mathbf{\tilde{w}}_{k}||_{\infty})\quad\text{s.t.}\quad\mathbf{\tilde{w}}^{\prime}\mathbf{\tilde{R}_{x}}\mathbf{\tilde{w}}\leq 1$ (4.15)

The weighting vector $\mathbf{h}$ is a binary vector with entries $1$ and $0$. The entries corresponding to the prefixed sensor locations are set to $0$, while the remaining entries are initialized to $1$. In this way, (4.15) implements a partial penalization that ensures sparsity is not enforced at the prefixed locations. The weighted penalization can easily be extended to the reweighting formulation, which further promotes sparsity and facilitates a $P$-sparse solution [58]. This is achieved by iteratively updating the weighting vector $\mathbf{h}$ [41, 42], $\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2N}}{\text{minimize}}\quad\mathbf{m}^{i\,\prime}\mathbf{\tilde{w}}+b^{i}+\mu(\sum_{k=1}^{N}\mathbf{h}^{i}(k)||\mathbf{\tilde{w}}_{k}||_{\infty})\quad\text{s.t.}\quad\mathbf{\tilde{w}}^{\prime}\mathbf{\tilde{R}_{x}}\mathbf{\tilde{w}}\leq 1$ (4.16)

The re-weighting vector $\mathbf{h}^{i}$, at the $i$th iteration, is updated as an inverse function of the beamforming weights at the present iteration, $\mathbf{h}^{i+1}(k)=\frac{1}{||\mathbf{\tilde{w}}_{k}^{i}||_{2}+\epsilon}$ (4.17)

This relatively suppresses the low-magnitude weights in the next iteration, accelerating sparsity. The parameter $\epsilon$ avoids division by zero. The reweighting is applied to both the freely designed array and the hybrid design. For the former, the vector $\mathbf{h}$ is initialized to the all-ones vector and updated iteratively. However, to preserve the prefixed sensor locations for the hybrid design, the entries of $\mathbf{h}^{i}$ corresponding to the prefixed locations must remain zero for all iterations, while the remaining entries are initialized to $1$ and updated as explained above. The procedure is summarized in Table 4.1.

### 4.4 Toeplitz matrix completion and fully augmentable completion through averaging

The key concern in the free-design sparse array formulation is the assumption regarding knowledge of the full array correlation matrix. This is because the data from only $P$ active sensors is available to estimate the correlation matrix. The full correlation matrix, in this case, is not readily available and could have many missing correlation lags.
Many different approaches for sparse matrix completion, under various assumptions about the data model, have been considered in the literature, including for high-resolution DOA estimation. We adopt a positive semidefinite Toeplitz matrix completion scheme that effectively exploits the structure of the unknown correlation matrix. It is well known that narrowband far-field sources impinging on a ULA result in a Hermitian positive definite correlation matrix with Toeplitz structure. Along with the Toeplitz positive semidefinite condition, the trace heuristic is incorporated to interpolate the missing lags. The trace heuristic has been used successfully in many areas of control systems and array processing to recover simpler, low-rank data models [62, 63, 61]. Moreover, it has been shown that the trace heuristic is equivalent to nuclear norm minimization, rendering gridless recovery of the underlying narrowband sources and thus recovering the missing correlation lags [75, 76, 77, 78, 79]. The matrix completion problem is, therefore, written as, $\underset{l\in\mathbb{C}^{N}}{\text{minimize}}\quad||Toeplitz(l)\odot\mathbf{Z}-\mathbf{R}_{P}||_{F}^{2}+\zeta\,\text{Tr}(Toeplitz(l))\quad\text{s.t.}\quad Toeplitz(l)\succeq 0$ (4.18)

Here, $l$ is a complex vector with a real first element, and $Toeplitz(l)$ returns the Hermitian Toeplitz matrix with $l$ and $l^{H}$ defining its first row and first column, respectively. The matrix $\mathbf{R}_{P}$ is the received data correlation matrix with missing correlation lags; the entries corresponding to the missing correlation lags are set to zero. The symbol '$\odot$' denotes element-wise multiplication and '$\succeq$' denotes the matrix inequality enforcing the positive semidefinite constraint. The matrix $\mathbf{Z}$ is a binary matrix which fits only the non-zero elements in $\mathbf{R}_{P}$ to the unknown Toeplitz matrix. The function '$||.||_{F}^{2}$' is the squared Frobenius norm, which minimizes the sum of squared errors between the observed correlation values and the corresponding entries of the unknown Toeplitz matrix. The parameter '$\zeta$' trades off the data-fitting term against the trace heuristic pursuing a simpler model; its nominal value is typically tuned from numerical experience for the underlying problem. However, the Toeplitz estimate can potentially be ill-conditioned, having quite a few eigenvalues close to zero. We utilize the maximum-likelihood estimate of the interpolated Toeplitz correlation matrix by incorporating knowledge of the noise floor. In so doing, the eigenvalues corresponding to the noise subspace are set equal to the noise floor.

Unlike the free-design sparse array, where missing lags manifest themselves as zero values at all entries of some of the autocorrelation matrix sub-diagonals, the hybrid design ensures that at least one element in each matrix sub-diagonal is available. This facilitates the Toeplitz estimation of the received data correlation matrix by averaging the non-zero correlation entries across each sub-diagonal. The averaging scheme, however, does not guarantee the positive definiteness of the Toeplitz estimate [49], [50]. This renders the formulation in (4.16) non-convex, since (4.16) essentially requires $\mathbf{R_{x}}$ to be positive semidefinite.
In order to circumvent this issue, we return to the maximum-likelihood estimate adopted for the matrix completion approach to facilitate a positive definite estimate by eliminating the negative eigenvalues typically appearing in the noise subspace. Finally, the estimated data correlation matrix $\mathbf{\hat{R}_{x}}=Toeplitz(l)$ is used in lieu of $\mathbf{R_{x}}$ to carry out the data-dependent optimization for MaxSINR.

### 4.5 Simulations

We show examples under different design scenarios to assess the performance of the proposed methodology for achieving MaxSINR. We establish two performance benchmarks in order to examine the sensitivity of the proposed algorithm to the initial array configuration. This is because the matrix interpolation approach is guided by the initial configuration, which decides the locations of the missing entries in the data correlation matrix. The initial configuration refers to the $P$-element sparse array topology at the start, before any adaptation process commences. In general, the initial configuration could be any random array, or the optimized configuration from the preceding operating conditions. The first performance benchmark applies the SCA algorithm under the assumption that the data from all the prospective sensor locations is available. In this way, the actual full correlation matrix utilizing $T$ snapshots is input to the SCA algorithm. Clearly, the performance of the aforementioned benchmark is not reliant on the initial configuration, but is dependent on the observed data realization and the number of snapshots. Another deterministic performance benchmark assumes perfect knowledge of the full correlation matrix, representing the case of unlimited data snapshots. To draw a proper distinction, the former is referred to as 'Full correlation statistics-limited snapshots (FCS-LSS),' and the latter is henceforth called 'Full correlation statistics-unlimited snapshots (FCS-USS)'.

#### 4.5.1 Example comparing both designs

Figure 4.2: (a) Initial configuration; randomly selected 16 antennas from 36. (b) Initial configuration leading to a fully augmentable array. (c) Freely designed array. (d) Hybrid designed array. (e) Initial random configuration; selected 16 antennas from 36. (f) Initial configuration leading to a fully augmentable array. (g) Freely designed array. (h) Hybrid designed array. (i) Best-performing array configuration. (j) Worst-performing array configuration.

Consider $N=36$ prospective sensor locations placed linearly with an inter-element spacing of $\lambda/2$, and select $P=16$ sensors among these locations so as to maximize the SINR. A single source of interest is operating at $90^{0}$, i.e., array broadside. There are also six jammers, concurrently active at locations $40^{0}$, $85^{0}$, $95^{0}$, $135^{0}$, $140^{0}$ and $160^{0}$. The SNR of the desired signal is $0$ dB, whereas each jammer has an interference-to-noise ratio (INR) of $20$ dB. The range of the binary search for the sparsity parameter $\mu$ is set from $0.01$ to $5$, $\gamma=10^{-3}$ (sparsity threshold) and $\epsilon=0.05$. The initial $16$-element sparse array configuration used to estimate the data correlation matrix is randomly chosen, and shown in Fig. 4.2a. This configuration has missing correlation lags and occupies only a fraction of the available aperture. The array collects data for $T=1000$ snapshots. The full array Toeplitz estimate is recovered through matrix completion with the regularization parameter $\zeta=0.5$ (a minimal sketch of this completion step is given below).
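A minimal CVXPY sketch of the Toeplitz completion step (4.18) is given here. Assumptions not in the text: CVXPY with an SDP-capable solver (e.g. SCS) is available, $\mathbf{R}_P$ holds the observed correlations with zeros at the missing entries, and $\mathbf{Z}$ is the corresponding binary mask; the function name is illustrative:

```python
# Minimal CVXPY sketch of the Toeplitz completion step (4.18).
import cvxpy as cp

def complete_toeplitz(R_P, Z, zeta=0.5):
    N = R_P.shape[0]
    T = cp.Variable((N, N), hermitian=True)
    cons = [T >> 0]                          # positive semidefinite
    for k in range(N):                       # Toeplitz: constant diagonals
        for i in range(1, N - k):
            cons.append(T[i, i + k] == T[0, k])
    fit = cp.sum_squares(cp.multiply(Z, T) - R_P)          # squared Frobenius fit
    obj = cp.Minimize(fit + zeta * cp.real(cp.trace(T)))   # + trace heuristic
    cp.Problem(obj, cons).solve(solver=cp.SCS)
    return T.value
```

The maximum-likelihood refinement described above, clamping the noise-subspace eigenvalues of the returned estimate to the noise floor, can then be applied before the SCA step.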
The proposed SCA approach employing matrix completion renders an array configuration with an SINR of $11.73$ dB. It is worth noting that, for the underlying case, the number of possible array configurations is on the order of $10^{9}$, which makes the problem infeasible to solve through exhaustive search. The upper bound of performance, however, is $12$ dB, which corresponds to the case where the interferences are completely canceled at the output. In this regard, the designed array configuration is very close to the performance upper bound. The optimized array configuration is shown in Fig. 4.2c. It is noted that this configuration is also missing a few correlation lags. In order to assess the performance of the hybrid design approach, we consider a randomly selected $16$-element fully augmentable array, which is shown in Fig. 4.2b. The full data correlation matrix is estimated using the same $T=1000$ snapshots, and averaging is carried out over the available correlation lags to yield a Toeplitz estimate. The SCA approach, in this case, achieves the array design shown in Fig. 4.2d, with a reasonable SINR performance of $10.92$ dB. The designed hybrid array is fully augmentable and involves the prefixed sensor locations arranged in the nested array topology (prefixed configuration shown in red). The hybrid design is clearly suboptimal compared to the array designed freely. It is noted that the number of possible hybrid sparse array configurations associated with the prefixed sensors is $53130$. Although the possible fully augmentable configurations are significantly fewer than the $10^{9}$ possibilities, the maximum-SINR hybrid design found through enumeration achieves $11.93$ dB, close to the upper performance bound of $12$ dB. The performance of both designs is compared with the benchmark design initialized with FCS-LSS estimated from $T=1000$ samples supposedly collected from all $N$ sensors. The benchmark design yields the freely designed and hybrid sparse configurations with SINRs of $11.82$ dB and $11.65$ dB, respectively. This performance is superior to the above-mentioned designs that employ the Toeplitz estimation in lieu of the actual full correlation matrix.

It is of interest to analyze the effect of the initial sparse array configuration on the proposed SCA optimization. This time, the data is collected through the initial configurations depicted in Figs. 4.2e and 4.2f, instead of the configurations (Figs. 4.2a and 4.2b) employed for the earlier example. The underlying operating environment and all other parameters remain the same as above. As before, the freely designed array is achieved through matrix completion, whereas the hybrid design involves averaging to estimate the full data correlation matrix. The free design and the hybrid design achieve SINRs of $11.82$ dB and $11.65$ dB, respectively. The designed array configurations are shown in Figs. 4.2g and 4.2h. These configurations offer performance superior to those optimized earlier under different initial configurations. This underscores the dependence of sparse array beamforming optimization on the initial array conditions. It is noted that for the same underlying environment and initial configuration, the proposed solution is still not unique, and depends on the random realizations of the received data. In order to reliably gauge the performance of the proposed scheme, we report average results over $100$ independent trials.
It is found that under the initial configurations shown in Figs. 4.2a and 4.2b, the average SINR performances are $11.79$ dB for the freely designed SCA and $11.18$ dB for the hybrid design. On the other hand, the initial configurations shown in Figs. 4.2e and 4.2f yield average performances of $11.6$ dB and $11.54$ dB for the free and hybrid designs, respectively. These performances are compared with the FCS-LSS benchmark. It is found that FCS-LSS offers the same performance as that achieved by the SCA under the initial configurations adopted in Figs. 4.2e and 4.2f. We remark that under the initial array configurations shown in Figs. 4.2a and 4.2b, the SCA-based matrix completion even surpasses the FCS-LSS benchmark; however, it offers a slightly lower SINR for the hybrid design ($11.18$ dB as compared to $11.54$ dB). The optimum hybrid array configuration found through enumeration is shown in Fig. 4.2i, with an SINR of $11.9$ dB, whereas the worst-case hybrid configuration (shown in Fig. 4.2j) has an associated SINR of $7.5$ dB, which is considerably lower than the above designs.

#### 4.5.2 Monte Carlo design for random scenarios

Figure 4.3: Average SINR performance of various sparse topologies against desired source DOA for $T=100$ snapshots.

Figure 4.4: Average SINR performance of various sparse topologies against desired source DOA for $T=250$ snapshots.

The above examples tie the performance of the proposed algorithm not only to the locations of the sources and their respective powers, but also to the initial array configuration, the number of snapshots, and the observed realization of the received data. In order to provide a more meaningful assessment, the simulation scenarios are designed keeping the aforementioned variables in perspective. We generate $11$ different scenarios. For each scenario, the desired source DOA is kept fixed, whereas six jammers are randomly placed anywhere from $30^{0}$ to $150^{0}$ with respective powers uniformly distributed from $5$ to $15$ dB. The experiments are repeated $3000$ times, and the initial array configuration is randomly chosen for each experiment. For the freely designed array, the initial array configuration is selected by randomly choosing $16$ sensors from $36$. However, the initial configuration for the hybrid design is randomly chosen from all the possible $16$-sensor fully augmentable array configurations associated with the prefixed sensors arranged in the nested configuration depicted in Fig. 4.2b (red sensors). Figure 4.3 shows the results for $T=100$. The performance curve of the SCA algorithm for the freely designed array incorporating matrix completion lies in between (for most points) the benchmark designs incorporating FCS-USS and FCS-LSS. That is, the matrix completion approach even outperforms the benchmark design incorporating FCS-LSS. This is explainable because matrix completion, coupled with a priori knowledge of the noise floor, renders a more accurate estimate of the full correlation matrix than FCS-LSS, which does not incorporate knowledge of the noise floor and suffers high estimation variance because of the limited snapshots. The performance of the other benchmark incorporating the exact knowledge of the correlation matrix (FCS-USS) is clearly superior to matrix completion.
The results are fairly similar for the hybrid design, where the performance curve utilizing the Toeplitz averaging is sandwiched between the benchmark designs incorporating the exact correlation matrix (FCS-USS) and the one utilizing the presumably observed full data correlation matrix (FCS-LSS). Both the hybrid-designed and freely designed arrays demonstrate desirable performance. However, the matrix completion marginally outperforms the hybrid design, with an average performance gain of $0.2$ dB. The performance curves are re-evaluated by increasing the snapshots to $T=250$ and $T=1000$, as shown in Figs. 4.4 and 4.5. With this increase, the performance of the proposed SCA using Toeplitz completion moves closer to that of the FCS-USS benchmark. It is also noted that, in contrast to the low-snapshot case ($T=100$), the FCS-LSS benchmark for more samples ($T=1000$) offers superior average performance over the SCA designs incorporating Toeplitz completion. It is of interest to track the average antenna switching involved per trial for both the free design and the hybrid design. Fig. 4.6 shows that the freely designed array involves $9$ antenna switches per trial, more than twice that of the hybrid design ($4$ antenna switches per trial). It is also noted that for the hybrid design, the maximum antenna switching is constrained to $5$ antennas, as the remaining $11$ sensors are prefixed. In this regard, the hybrid design has more efficient switching, as it utilizes $80$ percent ($4/5$) of the switching DOF, as compared to the roughly $56$ percent ($9/16$) switching efficiency of freely designed arrays.

Figure 4.5: Average SINR performance of various sparse topologies against desired source DOA for $T=1000$ snapshots.

Figure 4.6: Sensor switching comparison between the free design and the hybrid design.

### 4.6 Conclusion

Sparse array design for maximizing the beamformer output SINR was considered for a single source in an interference-active environment. The chapter addressed the problem that the optimization of the array configuration requires the full data correlation matrix, which is not readily available in practice. Two different design approaches were considered: one assumes prefixed positions for a subset of sensors so as to provide full array augmentability, referred to as the hybrid-design approach, whereas the other, referred to as the free-design approach, has no such restriction and freely allocates all degrees of freedom to maximize the objective function. It was shown that the Toeplitz estimation of the autocorrelation at the missing spatial lags yields desirable performance. The SCA was proposed for both the freely designed and hybrid-designed arrays to achieve MaxSINR in polynomial run time with a reasonable trade-off in SINR. It was shown that, in contrast to the hybrid design, the matrix completion scheme does not require pre-allocating sensor resources and, therefore, offers more design flexibility and better SINR performance. This performance improvement comes, however, at the cost of increased computational complexity and finer parameter tuning, as required to accomplish the Toeplitz matrix completion. The simulation examples showed that the performance of the proposed SCA algorithm incorporating Toeplitz completion agrees well with the established benchmark designs.
## Chapter 5 Sparse Array Beamforming Design for Wideband Signal Models

### 5.1 Introduction

Wideband systems can deliver accurate target localization for radar systems [80], provide diversity, reliability and anti-jamming capabilities to wireless communication systems, and provide signal enhancement for microphone arrays [81, 45], whereas ultra-wideband (UWB) systems play a major role in high-resolution medical imaging [82, 83]. Beamforming techniques for wideband signals involve either a fixed design, such as a frequency-invariant beamformer, or an adaptive design based on the linearly constrained minimum variance (LCMV) beamformer [84, 85]. Irrespective of the design criterion, wideband beamformers are typically implemented jointly in the spatial and temporal domains, as shown in Fig. 5.1. This spatio-temporal processing is often realized through two different schemes, namely tapped-delay-line (TDL) filtering or subband processing such as the DFT. For the former, a length-$L$ TDL filter is assumed for each sensor, and the received data at each sampling instant is processed for all sensors jointly [86, 87]. In the DFT implementation scheme, the data at each sensor is buffered and transformed to the frequency domain by an $L$-point DFT. Once expressed in terms of narrowband signals, optimal beamforming is performed in each DFT bin. The DFT implementation is computationally more viable [88, 89, 90]. However, the TDL implementation scheme has an added advantage, since buffering is not required and the spatio-temporal weight vector can be updated at each sampling instant. The TDL and DFT beamformers would render identical output signals if the corresponding beamformer weights were related through a DFT transformation. However, carrying out the beamformer design separately in each domain does not guarantee that the beamformer weights strictly form a DFT pair. As a result, the output can differ slightly for each implementation. To circumvent the computationally expensive TDL beamformer design, a DFT beamformer is optimized instead, and the DFT transformation is subsequently used to obtain the TDL beamformer. This dual-domain TDL implementation, designed primarily in the DFT domain, can yield adequate output performance in practice [91].

Sparse array design strives to optimally deploy sensors, essentially achieving desirable beamforming characteristics, lowering the system hardware costs, and reducing the computational complexity. Sparse array design is known to yield considerable performance advantages under different design criteria for narrowband signal models. These criteria can largely be segregated into environment-independent and environment-dependent designs. Minimum redundancy arrays (MRAs) and structured sparse array configurations are cases of the former design. They optimize an environment-blind design criterion to enable DOA estimation of more sources than physical sensors [9, 10, 8, 31]. More recently, switched antenna and beam technologies have motivated the design of environment-adaptive sparse arrays. The available sensors are switched according to changing environmental conditions, enabling optimum use of expensive transceiver chains [92, 93, 94, 95]. Sparse array design based on the Cramer-Rao lower bound (CRLB) minimization criterion has been shown effective for DOA estimation [96].
On the other hand, beamforming approaches typically implement the MaxSINR criterion, yielding efficient adaptive performance that is dependent on the underlying operating environment [11, 12, 20, 97, 2, 13, 15]. Designing sparse arrays has proved particularly advantageous for wideband signal models, as it circumvents the conflicting design requirements posed by the frequency spread of the signal. On the one hand, the lower-frequency end of the spectrum places a minimum aperture constraint on the array to achieve a certain resolution. On the other hand, the higher end of the spectrum dictates the maximum allowable separation between consecutive sensor locations so as to avoid spatial aliasing, consequently resulting in oversampling of the lower-frequency content. For a limited number of available sensors, wideband sparse array design can, in essence, yield improved performance in many design applications by offering more control over the beampattern characteristics for all frequencies of interest [98, 99, 100, 101]. Many different metrics, such as frequency-invariant beampattern synthesis, robust beampattern design, and sidelobe level control, have been proposed for optimal wideband sparse array beamforming [102, 103, 100, 104, 105]. For instance, simulated annealing has been applied in [104] to achieve desired peak sidelobe levels while jointly optimizing the beamformer weights and sensor locations. Frequency-invariant beampattern optimization for wideband sparse array design employing the compressive sensing (CS) technique has been implemented in [44]. The authors therein invoked sparsity in the temporal and spatial domains simultaneously in an attempt to decrease the overall processing complexity.

In this chapter, we examine the Capon-based MaxSINR sparse array design from both the DFT and TDL filtering realization perspectives. We consider a switched-array adaptive beamformer design, which is fundamentally different from the aforementioned wideband performance metrics that optimize a prefixed receive beampattern for certain sectors/frequencies of interest, independent of the received data statistics. We examine environment-dependent sparse arrays that maximize the SINR for a frequency-spread source operating in wideband jamming environments. The objective of the populated and the sparse wideband beamformers is fundamentally the same: minimizing noise and interference at the array output while simultaneously maintaining a desired response in the direction of interest. We adopt a Capon-based methodology for enhancing the signal power for the desired source operating in an interference-active environment. The Capon method is a well-known linearly constrained beamforming approach that rejects interference and maximizes the output SINR [106]. It provides superior estimation accuracy but assumes exact knowledge, or an estimated version, of the received data correlation matrix across the entire array aperture. The latter is the case in many key applications in radar signal processing and medical imaging [46, 47, 68, 71, 107, 69]. This assumption, however, cannot be readily made for sparse array configurations.

Figure 5.1: Block diagram of sparse array wideband processing.

We pose the design problem as optimally selecting $P$ sensors out of $N$ possible equally spaced locations. For the scope of this chapter, we ignore any mutual coupling or sensor failure possibilities that could arise due to the closely spaced prospective locations on the grid.
Each sensor has an associated length-$L$ TDL or $L$-point DFT filter to jointly process the signal in the temporal and spatial domains. Our approach is a natural extension of Capon beamforming at the receiver and amounts to maximizing the SINR over all possible sparse array configurations. In the case of the TDL realization, we select those sensors that maximize the principal eigenvalue of the product of the inverse of the received data correlation matrix and the desired source correlation matrix [25]. For the DFT implementation scheme, the maximization is performed over all DFT bins. In either case, it is an NP-hard optimization problem. In order to realize a convex relaxation and avoid the computational burden of applying the singular value decomposition (SVD) for each possible configuration, we solve the underlying problem using a constrained-minimization-based approach. We consider two different optimization approaches, namely, semidefinite relaxation (SDR) and successive convex approximation (SCA). For the SDR-based approach, we pose the problem as a quadratically constrained quadratic program (QCQP) with a weighted $l_{1-\infty}$-norm-squared penalty to promote group sparsity. An eigenvector-based iterative methodology is adopted to promote group sparsity and ensure that only $P$ sensors are finally selected. It is noted that the re-weighted $l_{1-\infty}$-norm-squared relaxation has been shown effective for reducing the number of sensors in multicast transmit beamforming [2]. However, owing to the computational complexity associated with the SDR approach, we alternatively pose the problem as a successive convex approximation (SCA) that approximates the problem iteratively through a first-order gradient approximation [74]. The proposed algorithms are suitable for moderate-size antenna systems to enable real-time applications wherein the environment largely stays stationary relative to the time required for sparse configurability.

In order to enable a data-dependent design for sparse array wideband beamforming, we require knowledge of the received data correlation matrix corresponding to the full array aperture. With only a few active sensors at any time instant, it is infeasible to assume such knowledge due to missing correlation entries. We circumvent this problem by employing a low-rank block-Toeplitz matrix completion scheme to interpolate the missing correlation entries. Subsequently, the interpolated data correlation matrix is input to the proposed sparse optimization algorithms. We demonstrate the offerings of the proposed sparse array design utilizing matrix completion under limited data snapshots by comparing its performance with that achieved through enumeration.

The rest of the chapter is organized as follows: In the next section, we state the problem formulation for maximizing the output SINR under the wideband source signal model, elucidating the TDL and DFT signal models. Section 5.3 deals with the optimum sparse array design by semidefinite relaxation as well as successive convex approximation to obtain the optimum $P$-sensor sparse array geometry. Section 5.4 discusses the block-Toeplitz matrix completion approach for a conceivable sparse array design from an implementation perspective. Design examples and conclusions follow at the end.

### 5.2 Problem Formulation

Consider a single desired source and $Q$ interfering source signals impinging on a linear array with $N$ uniformly placed sensors.
The Nyquist-sampled received baseband signal $\mathbf{x}(n)\in\mathbb{C}^{N}$ at time instant $n$ is, therefore, given by, $\mathbf{x}(n)=\mathbf{s}(n)+\sum_{k=1}^{Q}\mathbf{i}_{k}(n)+\mathbf{v}(n),$ (5.1)

where $\mathbf{s}(n)\in\mathbb{C}^{N}$ is the contribution from the desired signal located at $\theta_{s}$, $\mathbf{i}_{k}(n)$ is the $k$th interfering signal vector corresponding to the respective direction of arrival $\theta_{k}$, and $\mathbf{v}(n)$ is the spatially uncorrelated sensor array output noise.

Figure 5.2: TDL realization of wideband beamforming.

#### 5.2.1 TDL Implementation scheme

We assume a TDL of length $L$ associated with each sensor, as shown in Fig. 5.2. The symbol $z^{-1}$ denotes a unit time delay and $w_{k}(m)$ is the beamforming weight for the $k$th sensor at the $m$th tap. We define a stacked vector $\mathbf{X}=[\mathbf{x}^{T}(n),\mathbf{x}^{T}(n-1),...,\mathbf{x}^{T}(n-L+1)]^{T}\in\mathbb{C}^{NL}$ containing the array data collected over $L$ sampling instances ($(\cdot)^{T}$ denotes the transpose). Rewriting (5.1) compactly in terms of stacked vectors, we obtain, $\mathbf{X}=\mathbf{S}+\sum_{k=1}^{Q}\mathbf{I}_{k}+\mathbf{V}$ (5.2)

Here, $\mathbf{S}=[\mathbf{s}^{T}(n),\mathbf{s}^{T}(n-1),...,\mathbf{s}^{T}(n-L+1)]^{T}$, and $\mathbf{I}_{k}$ and $\mathbf{V}$ are defined similarly as the interference and noise stacked vectors, respectively. The received signal $\mathbf{X}$ is then combined linearly to maximize the output SINR. The output signal $y(n)$ of the optimum beamformer for maximum SINR is given by [25], $y(n)=\mathbf{w}_{o}^{H}\mathbf{X},$ (5.3)

where $\mathbf{w}_{o}$ is the solution of the following optimization problem, $\underset{\mathbf{w}\in\mathbb{C}^{NL}}{\text{minimize}}\quad\mathbf{w}^{H}\mathbf{R}_{n}\mathbf{w}\quad\text{s.t.}\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}=1$ (5.4)

The beamforming weight $w_{k}(m)$ (shown in Fig. 5.2) maps to the $(Nm+k)$th element of the stacked beamformer $\mathbf{w}$ ($m\in\{0,1,...,L-1\}$, $k\in\{1,2,...,N\}$). Here, $(\cdot)^{H}$ denotes the Hermitian transpose, and $\mathbf{R}_{s}=E(\mathbf{SS}^{H})\in\mathbb{C}^{NL\times NL}$ is the desired signal correlation matrix. Likewise, $\mathbf{R}_{n}$ is the correlation matrix associated with the interference and noise stacked vectors. In the case of a spatially spread or wideband source signal, the correlation matrix is given by [86], $\mathbf{R}_{s}=\int_{B_{s}}\int_{\Theta_{s}}\sigma^{2}_{\theta_{s}}(\omega)\mathbf{a}(\theta_{s},\omega)\mathbf{a}^{H}(\theta_{s},\omega)d\theta_{s}d\omega$ (5.5)

Here, $\sigma^{2}_{\theta_{s}}(\omega)$ is the signal power as a function of $\theta_{s}$ and $\omega$; $\Theta_{s}$ and $B_{s}$ are the spatial and spectral supports of the desired source signal.
We only consider point sources with no significant spatial extent, hence rewriting (5.5) as follows, $\mathbf{R}_{s}=\int_{B_{s}}\sigma^{2}_{\theta_{s}}(\omega)\mathbf{a}(\theta_{s},\omega)\mathbf{a}^{H}(\theta_{s},\omega)d\omega$ (5.6)

The space-time steering vector $\mathbf{a}(\theta_{s},\omega)\in\mathbb{C}^{NL}$, corresponding to the source signal, can be represented as a Kronecker product ($\otimes$), $\mathbf{a}(\theta_{s},\omega)=\boldsymbol{\phi}_{\omega}\otimes\mathbf{a}_{\theta_{s}}(\omega),$ (5.7)

with, $\boldsymbol{\phi}_{\omega}=[1\;\;e^{j(\pi\omega/\omega_{max})}\;\cdots\;e^{j(\pi\omega/\omega_{max})(L-1)}]^{T},$ (5.8)

$\mathbf{a}_{\theta_{s}}(\omega)=[1\;\;e^{j(2\pi/\lambda_{\omega})d\cos(\theta_{s})}\;\cdots\;e^{j(2\pi/\lambda_{\omega})d(N-1)\cos(\theta_{s})}]^{T}=[1\;\;e^{j\pi(\frac{\omega_{c}+\omega}{\Omega_{max}})\cos(\theta_{s})}\;\cdots\;e^{j\pi(\frac{\omega_{c}+\omega}{\Omega_{max}})(N-1)\cos(\theta_{s})}]^{T},$ (5.9)

where $\lambda_{\omega}$ is the wavelength corresponding to $\omega_{c}+\omega$; $\omega$ and $\omega_{c}$ represent the baseband source angular frequency and the carrier angular frequency, respectively; and $\omega_{max}$ is the maximum source baseband angular frequency. The data is sampled temporally at the Nyquist rate for a given signal bandwidth. Similarly, we set the inter-element spacing $d=\lambda_{min}/2$ to avoid spatial aliasing corresponding to the highest spatial angular frequency $\Omega_{max}=\omega_{c}+\omega_{max}$, where $\lambda_{min}$ is the wavelength corresponding to $\Omega_{max}$. The correlation matrix $\mathbf{R}_{k}\in\mathbb{C}^{NL\times NL}$ for the interferer $\mathbf{i}_{k}$ is defined according to (5.6) with respect to $\theta_{k}$ and $B_{k}$. The sensor noise correlation matrix $\mathbf{R}_{v}=\sigma_{v}^{2}\mathbf{I}\in\mathbb{C}^{NL\times NL}$ assumes spatially and temporally uncorrelated noise $\mathbf{v}(n)$ with variance $\sigma_{v}^{2}$. The problem in (5.4) can equivalently be written by replacing $\mathbf{R}_{n}=\sum_{k=1}^{Q}\mathbf{R}_{k}+\mathbf{R}_{v}$ with $\mathbf{R}=\mathbf{R}_{s}+\mathbf{R}_{n}$, as follows [25],

Figure 5.3: DFT implementation of wideband beamforming.

$\underset{\mathbf{w}\in\mathbb{C}^{NL}}{\text{minimize}}\quad\mathbf{w}^{H}\mathbf{R}\mathbf{w}\quad\text{s.t.}\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1$ (5.10)

The equality constraint is relaxed in (5.10) due to the inclusion of the constraint autocorrelation matrix in the objective function; thereby, the optimal solution always converges to the equality constraint. The analytical solution of the above optimization problem exists and is given by $\mathbf{w}_{o}=\mathscr{P}\{\mathbf{R}_{n}^{-1}\mathbf{R}_{s}\}=\mathscr{P}\{\mathbf{R}^{-1}\mathbf{R}_{s}\}$. The operator $\mathscr{P}\{.\}$ computes the principal eigenvector of its argument. Substituting $\mathbf{w}_{o}=\mathscr{P}\{\mathbf{R}_{n}^{-1}\mathbf{R}_{s}\}$ into the SINR formula yields the corresponding optimum output $SINR_{o}$ ($\Lambda_{max}$ denotes the maximum eigenvalue of the matrix): $SINR_{o}=\frac{\mathbf{w}_{o}^{H}\mathbf{R}_{s}\mathbf{w}_{o}}{\mathbf{w}_{o}^{H}\mathbf{R}_{n}\mathbf{w}_{o}}=\Lambda_{max}\{\mathbf{R}_{n}^{-1}\mathbf{R}_{s}\},$ (5.11)

which shows that the optimum beamformer for maximizing the SINR is directly related to the desired and interference-plus-noise correlation matrices.
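A small numerical sketch of (5.6)–(5.9) may clarify the construction: the space-time steering vector is formed via the Kronecker product, and the frequency integral in (5.6) is discretized over the band. Assumptions not in the text: a flat source power spectrum over $B_{s}=[-\omega_{max},\omega_{max}]$, and illustrative values of $N$, $L$, the frequencies and the DOA:

```python
# Sketch of (5.6)-(5.9): space-time steering vector and a discretized
# frequency integration for R_s. Flat spectrum over the band is assumed.
import numpy as np

def space_time_steer(theta_deg, w, w_c, w_max, N, L):
    phi = np.exp(1j * np.pi * (w / w_max) * np.arange(L))        # (5.8)
    a = np.exp(1j * np.pi * ((w_c + w) / (w_c + w_max))
               * np.arange(N) * np.cos(np.deg2rad(theta_deg)))   # (5.9)
    return np.kron(phi, a)                                       # (5.7)

N, L = 8, 16
w_c, w_max = 2 * np.pi * 1e9, 2 * np.pi * 0.1e9   # illustrative frequencies
grid = np.linspace(-w_max, w_max, 64)             # discretized band B_s
R_s = np.zeros((N * L, N * L), dtype=complex)
for w in grid:                                    # Riemann sum over (5.6)
    v = space_time_steer(60, w, w_c, w_max, N, L)
    R_s += np.outer(v, v.conj()) / len(grid)      # unit flat source power
```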
#### 5.2.2 DFT Implementation scheme

Figure 5.3 shows the DFT implementation scheme of wideband array processing. The received signal $\mathbf{x}(n)$ is processed in the spectral domain by taking an $L$ point DFT of the data received by the $k$th sensor, ${x_{k}}(n)$,

${X}_{k}^{(l)}=\sum_{p=0}^{L-1}x_{k}(n-p)(e^{-j\frac{2\pi}{L}})^{lp},\,\,\,\,\,\,l\in\\{0,1,...,\,L-1\\}$ (5.12)

Define a vector ${\mathbf{X}}^{(l)}\in\mathbb{C}^{N}$ containing the $l$th DFT bin data corresponding to each sensor (the superscript $(l)$ denotes the $l$th DFT bin),

${\mathbf{X}}^{(l)}=[{X}_{1}^{(l)},{X}_{2}^{(l)},...,{X}_{N}^{(l)}]^{T}$ (5.13)

These samples are then combined linearly by the weight vector $\mathbf{w}^{(l)}\in\mathbb{C}^{N}$ such that,

${y}^{(l)}={\mathbf{w}^{(l)}}^{H}\mathbf{X}^{(l)},\,\,\,\,\,\,l\in\\{0,1,...,\,L-1\\}$ (5.14)

Subsequently, the overall beamformer output $y$ is generated by taking the inverse DFT of $y^{(l)}$ across the $L$ beamformers. The DFT implementation scheme seeks to maximize the output SINR for each frequency bin, yielding the optimum beamforming weight vector $\mathbf{w}_{o}^{(l)}$ as the solution of the following optimization problem,

$\displaystyle\underset{\mathbf{w}^{(l)}\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=0}^{L-1}{\mathbf{w}^{(l)}}^{H}\mathbf{R}^{(l)}\mathbf{w}^{(l)}$ (5.15) s.t. $\displaystyle\quad{\mathbf{w}^{(l)}}^{H}\mathbf{R}_{s}^{(l)}\mathbf{w}^{(l)}\geq 1,\quad l\in\\{0,1,...,\,L-1\\}$

The correlation matrix $\mathbf{R}^{(l)}=\mathbf{X}^{(l)}{\mathbf{X}^{(l)}}^{H}$ is the received correlation matrix for the $l$th processing bin. Similarly, the source correlation matrix $\mathbf{R}_{s}^{(l)}$ for the desired source impinging from direction of arrival $\theta_{s}$ is given by,

$\mathbf{R}_{s}^{(l)}=\mathbf{S}^{(l)}{\mathbf{S}^{(l)}}^{H}={\sigma_{s}^{(l)}}^{2}\mathbf{a}_{\theta_{s}}^{(l)}{\mathbf{a}_{\theta_{s}}^{(l)}}^{H}$ (5.16)

Here, $\mathbf{S}^{(l)}$ is the received data vector representing the desired source, ${\sigma_{s}^{(l)}}^{2}$ denotes the power of this source in the $l$th bin, and $\mathbf{a}_{\theta_{s}}^{(l)}$ is the corresponding steering vector for the source (DOA $\theta_{s}$), defined as follows,

$\displaystyle\mathbf{a}_{\theta_{s}}^{(l)}={}$ $\displaystyle[1\,\,\,e^{j\pi(\frac{\Omega_{min}+l\Delta_{\omega}}{\Omega_{max}})cos(\theta_{s})}{}$ (5.17) $\displaystyle\,.\,.\,.\,\,\,\,\,\,\,e^{j\pi(\frac{\Omega_{min}+l\Delta_{\omega}}{\Omega_{max}})(N-1)cos(\theta_{s})}]^{T}$

Eq. (5.17) models the steering vector for the $l$th DFT bin, where $\Omega_{min}$ is the lower edge of the passband and $\Delta_{\omega}=\frac{2\omega_{max}}{L}$ is the spectral resolution. The overall output SINR is given by averaging the SINR over all DFT bins. Similar to the TDL, the DFT implementation scheme determines the optimum sparse array geometry for enhanced MaxSINR performance, as explained in the following section.
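The bin-wise processing of (5.12)-(5.14) amounts to a per-sensor FFT followed by narrowband combining in each bin; a small illustrative NumPy sketch:

```python
import numpy as np

def dft_bin_data(x_taps):
    """Per-sensor L-point DFT of the tapped data, per (5.12)-(5.13).

    x_taps: (N, L) array whose k-th row holds x_k(n), ..., x_k(n-L+1).
    Returns an (L, N) array whose l-th row is the bin vector X^{(l)}.
    """
    return np.fft.fft(x_taps, axis=1).T

def beamformer_output(X_bins, W_bins):
    """y^{(l)} = w^{(l)H} X^{(l)} per (5.14); W_bins is (L, N). The overall
    output is the inverse DFT of the L bin outputs."""
    y_l = np.einsum('ln,ln->l', W_bins.conj(), X_bins)
    return np.fft.ifft(y_l)
```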
### 5.3 Optimum sparse array design

The problem of maximizing the principal eigenvalue for the $P$ sensor selection is a combinatorial optimization problem. In this section, we assume that the full array data correlation matrix is known. However, Section 5.4 explains the means to avoid this assumption through a fully augmentable sparse array design or by utilizing the matrix completion approach [108, 109, 72]. We first formulate the sparse array design for maximizing SINR in the case of wideband beamforming as an SDR. Owing to the high computational complexity of SDR, the problem is also solved by SCA, for both the TDL and DFT implementation schemes.

#### 5.3.1 Semidefinite relaxation (SDR) for sparse solution

##### TDL Implementation scheme

We assume that the sensor configuration remains the same within the observation time. In radar, this assumption amounts to selecting the same $P$ sensors over the coherent processing interval (CPI). Therefore, the task is to select $P$ entries from the first $N$ elements of $\mathbf{w}$, and the same $P$ entries from each subsequent block of $N$ elements (there are $L$ such blocks). Define $\mathbf{w}_{k}=[\mathbf{w}(k),\mathbf{w}(k+N),\cdots,\mathbf{w}(k+N(L-1))]\in\mathbb{C}^{L}$ $(k\in\\{1,...,\,N\\})$ as the weights corresponding to the TDL of the $k$th sensor. Then, in seeking a sparse solution, the problem in (5.10) can be reformulated as follows,

$\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{NL}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}\mathbf{w}+\bar{\mu}(\sum_{k=1}^{N}||\mathbf{w}_{k}||_{q})$ (5.18) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1$

Here, $||.||_{q}$ denotes the $q$-norm of a vector and $\bar{\mu}$ is the sparsity regularization parameter. The mixed $l_{1-q}$ norm regularization is known to promote group sparsity in the solution. This formulation thus enables deciding whether to collectively select or discard all $L$ sampling instances associated with the $k$th sensor. Such structured group sparsity is essential for the wideband beamformer, since the final sparse solution has to ensure that only $PL$ out of $NL$ possible spatio-temporal sampling instances are chosen, through only $P$ physical sensors. The relaxed problem in (5.18) induces group sparsity in the optimal weight vector without placing a hard constraint on the desired cardinality. The constrained minimization in (5.18) can instead be penalized by the weighted $l_{1}$-norm function to further promote sparse solutions [41, 42, 58],

$\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{NL}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}\mathbf{w}+\bar{\mu}(\sum_{k=1}^{N}\mathbf{u}^{i}(k)||\mathbf{w}_{k}||_{q})$ (5.19) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1,$

where $\mathbf{u}^{i}(k)$ is the $k$th element of the re-weighting vector $\mathbf{u}^{i}$ at the $i$th iteration. We choose the $\infty$-norm for the $q$-norm and replace the weighted $l_{1}$-norm function in (5.19) by the $l_{1}$-norm squared function with a modified regularization parameter ($\mu$ instead of $\bar{\mu}$). This change does not affect the regularization property of the $l_{1}$-norm [2]. The result is,

$\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{NL}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}\mathbf{w}+\mu(\sum_{k=1}^{N}\mathbf{u}^{i}(k)||\mathbf{w}_{k}||_{\infty})^{2}$ (5.20) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1$

The SDR of the above problem is realized by substituting $\mathbf{W}=\mathbf{w}\mathbf{w}^{H}$. The quadratic function then becomes $\mathbf{w}^{H}\mathbf{R}\mathbf{w}=$ Tr$(\mathbf{w}^{H}\mathbf{R}\mathbf{w})=$ Tr$(\mathbf{R}\mathbf{w}\mathbf{w}^{H})=$ Tr$(\mathbf{R}\mathbf{W})$, where Tr(.) is the trace of the matrix.
Similarly, we replace the regularization term in (5.20) by Tr$(\mathbf{U}^{i}\mathbf{\tilde{W}})$, with $\mathbf{U}^{i}=\mathbf{u}^{i}(\mathbf{u}^{i})^{T}$ and $\mathbf{\tilde{W}}$ being the auxiliary matrix implementing the $\infty$-norm as follows [59, 61, 2],

$\displaystyle\underset{\mathbf{W\in\mathbb{C}}^{NL\times NL},\mathbf{\tilde{W}\in\mathbb{R}}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\text{Tr}(\mathbf{R}\mathbf{W})+\mu\text{Tr}(\mathbf{U}^{i}\mathbf{\tilde{W}})$ (5.21) s.t. $\displaystyle\quad\text{Tr}(\mathbf{R}_{s}\mathbf{W})\geq 1,$ $\displaystyle\quad\mathbf{\tilde{W}}\geq|\mathbf{W}_{ll}|,\quad l\in\\{0,...,\,L-1\\},$ $\displaystyle\quad\mathbf{W}\succeq 0,\,\text{Rank}(\mathbf{W})=1$

Table 5.1: SDR for the sparse wideband beamformer
Input: Sparse correlation matrices $\overset{\circ}{\mathbf{R}}$ for TDL-SDR ($\overset{\circ}{\mathbf{R}}{{}^{(l)}}$ for DFT-SDR), $N$, $P$, $L$, ${\mathbf{R}_{s}}$ for TDL-SDR (${\mathbf{R}_{s}^{(l)}}$ for DFT-SDR).
Output: MaxSINR beamformer against $P$ active sensors.
Matrix Completion: The procedure in Section 5.4 is adopted to recover the estimated full received data correlation matrix $\mathbf{\hat{R}}$ for TDL-SDR ($\mathbf{\hat{R}}^{(l)}$ for DFT-SDR).
Initialization: Initialize $\epsilon$, $\mu_{max}$ (upper limit of binary search), $\mu_{min}$ (lower limit of binary search), and $\mathbf{U}$ as the all-ones matrix. An appropriate value of $\mu$ is selected through the binary search to ensure $P$ sensor selection.
while (the beamforming weight vector is not $P$-sparse) do
Select $\mu=\frac{1}{2}(\mu_{max}+\mu_{min})$ (binary search).
Run the SDR (5.22) for TDL-SDR or (5.23) for DFT-SDR (use $\mathbf{\hat{R}}$, $\mathbf{\hat{R}}^{(l)}$ in lieu of $\mathbf{{R}}$, $\mathbf{{R}}^{(l)}$).
Update $\mathbf{U}^{i}$ according to (5.25).
Update $\mu_{max}/\mu_{min}$ according to the binary search.
end while
After achieving the desired cardinality, run the SDR for the reduced-size correlation matrix corresponding to the nonzero values of $\mathbf{\tilde{W}}$ with $\mu=0$, yielding $\mathbf{w}_{o}=\mathscr{P}\\{\mathbf{W}\\}$ for TDL-SDR ($\mathbf{w}_{o}^{(l)}=\mathscr{P}\\{\mathbf{W}^{(l)}\\}$ for DFT-SDR).
return Optimal beamformer $\mathbf{w}_{o}$ (TDL-SDR) or $\mathbf{w}_{o}^{(l)}$ (DFT-SDR).

The operator ‘$|.|$’ returns the element-wise absolute values of the entries of a matrix, ‘$\geq$’ is the element-wise comparison, and ‘$\succeq$’ represents the generalized matrix inequality implementing the positive semidefinite constraint; $\mathbf{W}_{ll}\in\mathbb{C}^{N\times N}$ is the $l$th diagonal block of $\mathbf{W}$. We note that the solution matrix $\mathbf{W}$ is Hermitian; it is therefore sufficient to constrain the upper or lower triangular entries of $\mathbf{W}_{ll}$ while forcing $\mathbf{\tilde{W}}$ to be a symmetric matrix. In so doing, we reduce the number of inequality constraints and decrease the run time substantially. In addition, we drop the rank constraint in (5.21), which is non-convex, resulting in the following SDR,

$\displaystyle\underset{\mathbf{W\in\mathbb{C}}^{NL\times NL},\mathbf{\tilde{W}\in\mathbb{R}}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\text{Tr}(\mathbf{R}\mathbf{W})+\mu\text{Tr}(\mathbf{U}^{i}\mathbf{\tilde{W}}),$ s.t.
$\displaystyle\quad\text{Tr}(\mathbf{R}_{s}\mathbf{W})\geq 1,$ $\displaystyle\quad\overset{\bigtriangleup}{\mathbf{\tilde{W}}}\geq|\overset{\bigtriangleup}{\mathbf{W}}_{ll}|,\quad l\in\\{0,1,...,\,L-1\\},$ $\displaystyle\quad\mathbf{\tilde{W}}=\mathbf{\tilde{W}}^{T},\quad\mathbf{W}\succeq 0.$ (5.22)

Here, $\bigtriangleup$ denotes the upper or lower triangular entries of the matrix.

##### DFT Implementation scheme

The DFT implementation scheme for sparse array design is achieved by starting from (5.15) and following steps similar to those of the TDL. Group sparsity is invoked because the data in each DFT bin requires the underlying array configuration to remain the same for processing over all DFT bins. The SDR is finally realized as follows,

$\displaystyle\underset{\mathbf{W}^{(l)}\mathbf{\in\mathbb{C}}^{N\times N},\mathbf{\tilde{W}\in\mathbb{R}}^{N\times N}}{\text{minimize}}$ $\displaystyle\sum_{l=0}^{L-1}\text{Tr}(\mathbf{R}^{(l)}\mathbf{W}^{(l)})+\mu\text{Tr}(\mathbf{U}^{i}\mathbf{\tilde{W}})$ s.t. $\displaystyle\,\,\text{Tr}(\mathbf{R}_{s}^{(l)}\mathbf{W}^{(l)})\geq 1,\,\,l\in\\{0,1,...,\,L-1\\},$ $\displaystyle\,\,\overset{\bigtriangleup}{\mathbf{\tilde{W}}}\geq|\overset{\bigtriangleup}{\mathbf{W}}{{}^{(l)}}|,\quad l\in\\{0,1,...,\,L-1\\},$ $\displaystyle\,\,\mathbf{\tilde{W}}=\mathbf{\tilde{W}}^{T},\quad\mathbf{W}\succeq 0.$ (5.23)

In general, a QCQP of order $M$ with $T$ quadratic constraints can be solved to an arbitrarily small accuracy $\zeta$ by employing interior point methods, with a worst-case complexity of $\mathcal{O}\\{$max$(T,M)^{4}M^{(1/2)}\log(1/\zeta)\\}$ [61]. It is apparent from (5.22) and (5.23) that the dimensionality of the TDL implementation scheme is $NL\times NL$, whereas the DFT approach involves $L$ unknown variables of size $N\times N$. Therefore, the computational complexity for the TDL implementation scheme is $\mathcal{O}\\{(NL)^{9}\log(1/\zeta)\\}$, versus $\mathcal{O}\\{(N^{9}L^{(9/2)})\log(1/\zeta)\\}$ for the DFT implementation scheme, which renders the DFT implementation scheme computationally more viable.
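For reference, a compact sketch of one solve of the relaxed TDL problem (5.22) is given below. We use cvxpy as an assumed stand-in for the MATLAB CVX reported in the thesis, and, for brevity, constrain all block entries rather than only a triangle as discussed above.

```python
import cvxpy as cp
import numpy as np

def tdl_sdr_pass(R, Rs, N, L, U, mu):
    """One solve of the relaxed TDL SDR (5.22): minimize Tr(RW) + mu Tr(U W~)
    subject to Tr(Rs W) >= 1, W~ >= |W_ll| elementwise, and W PSD."""
    W = cp.Variable((N * L, N * L), hermitian=True)
    Wt = cp.Variable((N, N), symmetric=True)   # auxiliary infinity-norm matrix
    cons = [W >> 0, cp.real(cp.trace(Rs @ W)) >= 1]
    for l in range(L):
        Wll = W[l * N:(l + 1) * N, l * N:(l + 1) * N]
        cons.append(cp.abs(Wll) <= Wt)         # elementwise bound, per (5.22)
    obj = cp.real(cp.trace(R @ W)) + mu * cp.trace(U @ Wt)
    cp.Problem(cp.Minimize(obj), cons).solve()
    return W.value, Wt.value
```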
##### Unit rank promoting iteration

Promoting sparse solutions iteratively depends on a careful selection of the regularization weighting matrix at each iteration. As suggested in [58, 2], the weighting matrix $\mathbf{U}^{i}$ in (5.22) and (5.23) is initialized unweighted, i.e., as a matrix of all ones. Afterwards, this matrix is iteratively updated in an inverse relationship to $\mathbf{\tilde{W}}$, which is related to the solution matrix $\mathbf{W}$ from the previous iteration. This enables the beamforming weights with relatively lower magnitudes to be penalized aggressively, encouraging them to go to zero in an iterative fashion. The parameter $\epsilon$ prevents the unwanted case of division by zero and also prevents the solution from converging to a local minimum, as follows,

$\mathbf{U}^{i+1}(m,n)=\frac{1}{\mathbf{\tilde{W}}^{i}(m,n)+\epsilon}$ (5.24)

However, the solution matrix $\mathbf{W}$ in the case of the TDL implementation scheme is not exactly rank one at the initial iterations. The problem is aggravated when the weight matrix is updated according to the above equation, inadvertently pushing the rank of $\mathbf{W}$ up with each iteration. In this respect, updating $\mathbf{U}$ according to (5.24) favors solutions of higher rank and, as such, fails to yield sparse configurations iteratively. To mitigate this problem, we propose a modified re-weighting scheme that updates the regularization weighting matrix using the inverse of a rank-$1$ matrix $\mathbf{Y}$ instead of $\mathbf{\tilde{W}}$, as follows,

$\mathbf{U}^{i+1}(m,n)=\frac{1}{\mathbf{Y}^{i}(m,n)+\epsilon},$ (5.25)

where $\mathbf{Y}^{i}=\mathbf{y}^{i}(\mathbf{y}^{i})^{T}$, with $\mathbf{y}^{i}=\frac{1}{L}\sum_{l=1}^{L}(|\mathscr{P}\\{\mathbf{W}_{ll}^{i}\\}|)^{2}$. Clearly, $\mathbf{Y}^{i}$ is a rank one matrix synthesized from the rank one approximations of the block diagonal matrices $\mathbf{{W}}_{ll}^{i}$. It is noted that the $\mathbf{{W}}_{ll}^{i}$ are the only entries of $\mathbf{{W}}$ constrained by the SDR formulated in (5.22). Since the $\mathbf{{W}}_{ll}^{i}$ are the diagonal block matrices of the solution matrix $\mathbf{{W}}$, sparsity is implicitly encouraged in $\mathbf{{W}}$ by unit rank penalization. The modified re-weighting approach given by (5.25) effectively solves the optimum sparse array selection problem for wideband beamforming. It is noted that an arbitrarily chosen sparsity parameter $\mu$ does not ensure that the final solution is exactly $P$-sparse. To achieve this, the optimization problem should be solved for various values of $\mu$. This is typically accomplished by successively running the optimization and updating $\mu$ through a binary search over the possible upper and lower limits $\mu_{max}/\mu_{min}$, until the solution converges to $P$ sensors [2]. The proposed algorithm for achieving the sparse optimal weight vector under the TDL and DFT implementation schemes is summarized in Table 5.1.

Table 5.2: SCA for the sparse wideband beamformer
Input: Sparse correlation matrices $\overset{\circ}{\mathbf{R}}$ for TDL-SCA ($\overset{\circ}{\mathbf{R}}{{}^{(l)}}$ for DFT-SCA), $N$, $P$, $L$, ${\mathbf{R}_{s}}$ for TDL-SCA (${\mathbf{R}_{s}^{(l)}}$ for DFT-SCA).
Output: MaxSINR beamformer against $P$ active sensors.
Matrix Completion: The procedure in Section 5.4 is adopted to recover the estimated full received data correlation matrix $\mathbf{\hat{R}}$ for TDL-SCA ($\mathbf{\hat{R}}^{(l)}$ for DFT-SCA).
Initialization: Initialize the beamforming vectors randomly to find $\mathbf{m}$ and $b$. Initialize $\epsilon$, $\mu_{max}$ (upper limit of binary search), $\mu_{min}$ (lower limit of binary search), $\mu=0$.
while (the solution does not converge for $\mu=0$) do
Run (5.31) for TDL-SCA or (5.32) for DFT-SCA.
end while
Initialize $\mathbf{u}^{i}$ as the all-ones vector.
while (the beamforming weight vector is not $P$-sparse) do
Select $\mu=\frac{1}{2}(\mu_{max}+\mu_{min})$ (binary search).
Run (5.31) for TDL-SCA or (5.32) for DFT-SCA (use $\mathbf{\hat{R}}$ or $\mathbf{\hat{R}}^{(l)}$ in lieu of $\mathbf{{R}}$, $\mathbf{{R}}^{(l)}$ to synthesize $\mathbf{\tilde{R}}$, $\mathbf{\tilde{R}}^{(l)}$; also use the optimal non-sparse solution from the previous while loop for $\mathbf{m}$ and $b$).
Update the regularization weighting parameter $\mathbf{u}^{i+1}(k)=\frac{1}{||\mathbf{\tilde{w}}_{k}^{i}||_{2}+\epsilon}$.
Update $\mu_{max}/\mu_{min}$ according to the binary search.
end while
After achieving the desired cardinality, run (5.31) for TDL-SCA or (5.32) for DFT-SCA with reduced dimension corresponding to the nonzero values of $\mathbf{\tilde{w}}$ and $\mu=0$, yielding the optimal weight vector.
return Optimal beamformer $\mathbf{w}_{o}$ (TDL-SCA) or $\mathbf{w}_{o}^{(l)}$ (DFT-SCA).
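The re-weighting update (5.25) and the binary search over $\mu$ used in Tables 5.1 and 5.2 can be sketched as follows (NumPy; `solve_fn` and `sparsity_fn` are caller-supplied placeholders standing in for the SDR solve and the active-sensor count):

```python
import numpy as np

def unit_rank_reweight(W, N, L, eps=0.05):
    """Update U per (5.25): average the squared moduli of the principal
    eigenvectors of the diagonal blocks W_ll, form the rank-one Y = y y^T,
    and invert elementwise."""
    y = np.zeros(N)
    for l in range(L):
        Wll = W[l * N:(l + 1) * N, l * N:(l + 1) * N]
        _, vecs = np.linalg.eigh(Wll)        # Hermitian diagonal block
        y += np.abs(vecs[:, -1]) ** 2        # principal eigenvector, squared moduli
    y /= L
    return 1.0 / (np.outer(y, y) + eps)

def binary_search_mu(solve_fn, sparsity_fn, P, mu_lo=0.01, mu_hi=3.0, iters=25):
    """Bisect mu until the solution is P-sparse (limits mirror Section 5.5.3)."""
    mu, W, Wt = mu_lo, None, None
    for _ in range(iters):
        mu = 0.5 * (mu_lo + mu_hi)
        W, Wt = solve_fn(mu)
        active = sparsity_fn(Wt)
        if active == P:
            break
        if active > P:
            mu_lo = mu   # too many active sensors: press harder on sparsity
        else:
            mu_hi = mu
    return mu, W, Wt
```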
#### 5.3.2 Successive convex approximation (SCA)

##### TDL Implementation scheme

The problem in (5.10) can equivalently be rewritten by swapping the objective and constraint functions as follows,

$\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{NL}}{\text{maximize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}$ (5.26) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}\mathbf{w}\leq 1$

Although this swapping renders the constraint convex, it makes the objective function non-convex. We note that the formulation in (5.26) does not have the trivial solution $\mathbf{w}=0$, as the objective and constraint are coupled through $\mathbf{R}_{s}$ being part of $\mathbf{R}$. The maximization of the convex function is replaced by the minimization of the corresponding concave function, obtained by negating it. This transformation to a minimization problem later enables carrying out the minimization based on a $P$-sparse solution. We rewrite (5.26) by reversing the sign of the desired source correlation matrix, $\bar{\mathbf{R}}_{s}=-\mathbf{R}_{s}$, as follows,

$\displaystyle\underset{\mathbf{w\in\mathbb{C}}^{NL}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\bar{\mathbf{R}}_{s}\mathbf{w}$ (5.27) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}\mathbf{w}\leq 1$

The beamforming weight vectors are generally complex-valued, whereas the quadratic functions are real. This observation allows expressing the problem with only real variables, which is typically accomplished by replacing the correlation matrix $\bar{\mathbf{R}}_{s}$ by $\tilde{\mathbf{R}}_{s}$ and concatenating the beamforming weight vector accordingly [74],

$\quad\quad\quad\quad\quad\tilde{\mathbf{R}}_{s}=\begin{bmatrix}\text{real}(\bar{\mathbf{R}}_{s})&-\text{imag}(\bar{\mathbf{R}}_{s})\\\ \\\ \text{imag}(\bar{\mathbf{R}}_{s})&\text{real}(\bar{\mathbf{R}}_{s})\\\ \end{bmatrix},\quad\quad\quad\quad\quad\quad\tilde{\mathbf{w}}=\begin{bmatrix}\text{real}({\mathbf{w}})\\\ \\\ \text{imag}({\mathbf{w}})\\\ \end{bmatrix}$ (5.28)

Similarly, the received data correlation matrix $\mathbf{R}$ is replaced by $\tilde{\mathbf{R}}$. The problem in (5.27) then becomes,

$\displaystyle\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2NL}}{\text{minimize}}$ $\displaystyle\quad\mathbf{\tilde{w}}^{T}\tilde{\mathbf{R}}_{s}\mathbf{\tilde{w}}$ (5.29) s.t. $\displaystyle\quad\mathbf{\tilde{w}}^{T}\mathbf{\tilde{R}}\mathbf{\tilde{w}}\leq 1$

We can then proceed to convexify the objective function by utilizing a first order approximation iteratively,

$\displaystyle\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2NL}}{\text{minimize}}$ $\displaystyle\quad{\mathbf{m}^{i}}^{T}\mathbf{\tilde{w}}+b^{i}$ (5.30) s.t. $\displaystyle\quad\mathbf{\tilde{w}}^{T}\mathbf{\tilde{R}}\mathbf{\tilde{w}}\leq 1,$

The linearization coefficients $\mathbf{m}^{i}$ and ${b^{i}}$ are updated as $\mathbf{m}^{i+1}=2\tilde{\mathbf{R}}_{s}\mathbf{\tilde{w}}^{i}$ and $b^{i+1}=-{{\mathbf{\tilde{w}}^{i}}}^{T}\tilde{\mathbf{R}}_{s}\mathbf{\tilde{w}}^{i}$ (the superscript $i$ denotes the iteration number).
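The real-valued lifting (5.28) and the linearization updates for (5.30) are mechanical; an illustrative NumPy sketch:

```python
import numpy as np

def lift_real(M):
    """Real-valued lifting of a complex matrix, per (5.28)."""
    return np.block([[M.real, -M.imag],
                     [M.imag,  M.real]])

def lift_vec(w):
    """Stack real and imaginary parts of the weight vector, per (5.28)."""
    return np.concatenate([w.real, w.imag])

def sca_coeffs(Rs_tilde, w_tilde):
    """Linearization coefficients for (5.30): m = 2 Rs~ w, b = -w^T Rs~ w,
    where Rs~ is the lifted version of the negated source matrix -Rs."""
    m = 2.0 * Rs_tilde @ w_tilde
    b = -float(w_tilde @ Rs_tilde @ w_tilde)
    return m, b
```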
Finally, to invoke sparsity in the beamforming weight vector, the re-weighted mixed $l_{1-2}$ norm is adopted, primarily to promote group sparsity,

$\displaystyle\underset{\mathbf{\tilde{w}\in\mathbb{R}}^{2NL}}{\text{minimize}}$ $\displaystyle\quad{\mathbf{m}^{i}}^{T}\mathbf{\tilde{w}}+b^{i}+\mu(\sum_{k=1}^{N}\mathbf{u}^{i}(k)||\mathbf{\tilde{w}}_{k}||_{2})$ (5.31) s.t. $\displaystyle\quad\mathbf{\tilde{w}}^{T}\mathbf{\tilde{R}}\mathbf{\tilde{w}}\leq 1$

Here, $\mathbf{\tilde{w}}_{k}\in\mathbb{R}^{2L}$ are the beamforming weights corresponding to the TDL of the $k$th sensor. Discarding a sensor ($||.||_{2}$ denotes the $l_{2}$ norm) implies the simultaneous removal of both the real and the corresponding imaginary entries of all beamforming weights associated with the removed sensor [74].

##### DFT implementation scheme

The above formulation can be extended to the DFT implementation scheme as follows:

$\displaystyle\underset{\mathbf{\tilde{w}}^{(l)}\mathbf{\in\mathbb{R}}^{2N}}{\text{minimize}}$ $\displaystyle\sum_{l=0}^{L-1}({{\\{\mathbf{m}^{(l)}}\\}^{i}}^{T}\mathbf{\tilde{w}}^{(l)}+\\{b^{(l)}\\}^{i})+\mu(\sum_{k=1}^{N}\mathbf{u}^{i}(k)||\mathbf{\tilde{w}}_{k}||_{2})$ (5.32) s.t. $\displaystyle\quad{\mathbf{\tilde{w}}^{(l)}}^{T}\mathbf{\tilde{R}}^{(l)}\mathbf{\tilde{w}}^{(l)}\leq 1,\quad l\in\,\,\\{0,1,...,L-1\\},$

where $\mathbf{\tilde{w}}_{k}\in\mathbb{R}^{2L}$ collects the beamforming weights across the $L$ DFT bins for the $k$th sensor, and $\\{\mathbf{m}^{(l)}\\}^{i}$ and $\\{b^{(l)}\\}^{i}$ are the approximation coefficients at the $i$th iteration for the desired source correlation matrix in the $l$th bin, with $\\{\mathbf{m}^{(l)}\\}^{i}=2\tilde{\mathbf{R}}_{s}^{(l)}\\{\mathbf{\tilde{w}}^{(l)}\\}^{i}$ and $\\{b^{(l)}\\}^{i}=-{\\{\mathbf{\tilde{w}}^{(l)}\\}^{i}}^{T}\tilde{\mathbf{R}}_{s}^{(l)}\\{\mathbf{\tilde{w}}^{(l)}\\}^{i}$. The initial estimates of $\\{\mathbf{m}^{(l)}\\}^{i}$ and $\\{b^{(l)}\\}^{i}$ are calculated from the optimal non-sparse solution. These parameters can be found by setting the sparsity parameter $\mu$ to zero; in so doing, the solution and the corresponding parameters converge to the optimal values for the full array elements. Using these values as initial conditions has proven effective in our design for recovering good sparse solutions. The sparsity parameter $\mu$ is chosen through the binary search over its possible range to warrant the desired cardinality of the beamforming weight vector, as explained in Section 5.3.1. The $k$th entry of the re-weighting vector $\mathbf{u}^{i}(k)$ is updated according to $\mathbf{u}^{i+1}(k)=\frac{1}{||\mathbf{\tilde{w}}_{k}^{i}||_{2}+\epsilon}$. The computational complexity of SCA is considerably lower than that of the SDR formulation, as the latter intrinsically squares the number of variables involved, exacerbating the runtime [74]. The SCA for sparse array design for wideband beamforming is summarized in Table 5.2.

### 5.4 Sparse matrix completion of block Toeplitz matrices

The aforementioned sparse array design formulations require the received data correlation matrix corresponding to the full array aperture. This is a rather stringent requirement in an adaptive switching environment where the data is fetched from only $P$ active sensor locations over a given observation period. The received data correlation matrix for the sparse array design using the TDL implementation scheme has $L^{2}(N^{2}-P^{2})$ missing correlation entries, whereas there are $L(N^{2}-P^{2})$ missing correlation values for the DFT implementation scheme. Clearly, for large values of $L$, the TDL implementation scheme has a significantly higher number of missing correlation entries than the DFT implementation scheme.
Recently, a hybrid sparse design for narrowband beamforming was introduced to alleviate the issue of missing correlation lags in the received data correlation matrix [71]. This is primarily achieved by pre-allocating a few sensors to guarantee a fully augmentable sparse array, while engaging the remaining degrees of freedom (DOF) to maximize the SINR. However, locking in a few DOFs to ensure full array augmentability can lead to suboptimal sparse beamformers. Alternatively, the matrix completion approach can be used to provide the missing lags [50, 49, 78]. We propose, herein, a sparse matrix completion that efficiently exploits the structure of the data correlation matrix to recover the missing correlation values. The received data correlation matrix for the TDL implementation scheme is a Hermitian positive definite matrix, but it also follows a block Toeplitz formation, as shown below,

$\quad\quad\quad\quad\quad\quad\mathbf{R}=\begin{bmatrix}\mathbb{T}_{0}&{\mathbb{T}}_{1}&{\mathbb{T}}_{2}&\ldots&{\mathbb{T}}_{L-1}\\\ \\\ \mathbb{T}_{-1}&{\mathbb{T}}_{0}&{\mathbb{T}}_{1}&\ldots&{\mathbb{T}}_{L-2}\\\ \\\ {\mathbb{T}}_{-2}&{\mathbb{T}}_{-1}&{\mathbb{T}}_{0}&\ldots&{\mathbb{T}}_{L-3}\\\ \\\ \vdots&\vdots&\vdots&\ddots&\vdots\\\ \\\ {\mathbb{T}}_{-(L-1)}&{\mathbb{T}}_{-(L-2)}&{\mathbb{T}}_{-(L-3)}&\ldots&{\mathbb{T}}_{0}\\\ \end{bmatrix}\quad\quad\quad\quad\quad\quad$ (5.33)

By definition, block Toeplitz matrices do not require each constituent block to be Toeplitz itself. Therefore, the matrix $\mathbf{R}$ in (5.33) represents a special case of block Toeplitz matrices, where the Toeplitz structure is also preserved in each constituent block $\mathbb{T}_{k}\in\mathbb{C}^{N\times N}$. Because of the matrix Hermitian symmetry, we also have $\mathbb{T}_{k}^{H}=\mathbb{T}_{-k}$ (for $k\neq 0$). Instead of recovering $\mathbf{R}$ as a single unit, we focus on completing the constituent blocks and then synthesizing the full correlation matrix $\mathbf{R}$. This approach not only caps the computational expense considerably but also efficiently exploits the formation of $\mathbf{R}$. There is an important distinction between the constituent blocks $\mathbb{T}_{0}$ and $\mathbb{T}_{k}$ (for $k\neq 0$). It is noted that $\mathbb{T}_{0}$ is a positive definite Hermitian Toeplitz matrix, whereas the $\mathbb{T}_{k}$ (for $k\neq 0$) are indefinite Toeplitz matrices that are not necessarily Hermitian. Therefore, we treat $\mathbb{T}_{0}$ and $\mathbb{T}_{k}$ in two different ways while adopting a Toeplitz matrix completion scheme under the low rank assumption. It is known that the correlation matrix $\mathbf{R}$ for wideband far field sources impinging on a ULA follows the structure in (5.33) and can be represented effectively with a relatively low rank approximation, depending on the observed source time-bandwidth product [90]. The trace heuristic is a well known approach that is generally adopted as a convex surrogate in recovering low rank matrices. This approach has been successfully used in many areas of control systems and array processing to recover simpler and low rank data models [62, 63, 61]. Moreover, it has been shown that the trace heuristic is equivalent to nuclear norm minimization in recovering positive semidefinite correlation matrices [75, 77, 78, 79].
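Assembling $\mathbf{R}$ in (5.33) from its constituent blocks, using the Hermitian symmetry $\mathbb{T}_{-k}=\mathbb{T}_{k}^{H}$, can be sketched as follows (NumPy; illustrative):

```python
import numpy as np

def block_toeplitz(T_blocks):
    """Assemble R per (5.33) from the blocks T_0, ..., T_{L-1}; the lower
    blocks follow from the Hermitian symmetry T_{-k} = T_k^H."""
    L = len(T_blocks)
    N = T_blocks[0].shape[0]
    R = np.empty((N * L, N * L), dtype=complex)
    for i in range(L):
        for j in range(L):
            # Block (i, j) holds T_{j-i}.
            Tk = T_blocks[j - i] if j >= i else T_blocks[i - j].conj().T
            R[i * N:(i + 1) * N, j * N:(j + 1) * N] = Tk
    return R
```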
The low rank positive semidefinite Toeplitz matrix completion problem was proposed in [76] for interpolating missing correlation lags in coprime array configurations and can be adopted to interpolate $\mathbb{T}_{0}$ as follows,

$\displaystyle\underset{l\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad||\mathcal{T}(l)\odot\mathbf{Z}-\overset{\circ}{\mathbb{T}}_{0}||_{F}^{2}+\zeta\text{Tr}(\mathcal{T}(l))$ (5.34) s.t. $\displaystyle\quad\mathcal{T}(l)\succeq 0$

Here, the unknown Hermitian Toeplitz matrix $\mathcal{T}(l)$ can uniquely be defined by a single vector $l$ representing the first row of $\mathcal{T}(l)$, with $l^{H}$ denoting the matrix's first column, and $\mathbf{Z}$ is the binary masking matrix with ones at the observed entries and zeros elsewhere. The matrix $\overset{\circ}{\mathbb{T}}_{0}$ is the received data correlation matrix with the missing correlation values set equal to zero. The element-wise product is denoted by the symbol ‘$\odot$’, and ‘$\succeq$’ implements the positive semidefinite constraint. The objective function attempts to minimize the error between the observed correlation values and the corresponding entries of $\mathcal{T}(l)$ through the Frobenius norm of the error matrix (the function ‘$||.||_{F}^{2}$’ represents the squared Frobenius norm of a matrix, i.e., the sum of the squared magnitudes of its entries). The parameter ‘$\zeta$’ controls the trade-off between the denoising term and the trace heuristic for recovering a simpler low rank model. The nominal value of ‘$\zeta$’ is challenging to locate and is typically gleaned from numerical experience. In order to do away with the nuisance parameter ‘$\zeta$’, we adopt a fusion based approach more suited to our application. We note that the zero entries of $\overset{\circ}{\mathbb{T}}_{0}$ fall into two classes: either an entire sub-diagonal is missing from $\overset{\circ}{\mathbb{T}}_{0}$, or a sub-diagonal is only partially observed. The former situation arises when the corresponding correlation lag is missing altogether, whereas the latter arises when the corresponding correlation lag is present but lacks the intrinsic redundancy of the compact sensor grid. The observed correlation lags are averaged across the sub-diagonal to filter the sensor noise as follows,

$\hat{\hat{l}}^{i}(k)=(\frac{1}{c_{k}})\sum_{\forall\,m-n=k}\overset{\circ}{\mathbb{T}}^{i}_{0}(m,n)$ (5.35)

Here, $\overset{\circ}{\mathbb{T}}^{i}_{0}$ is the $\mathbf{R}(i,i)$ block in (5.33), i.e., the estimate of $\overset{\circ}{\mathbb{T}}_{0}$ at the $i$th sampling instance, $k\in\\{0,1,\cdots,N-1\\}$ indexes the respective lag or sub-diagonal of $\overset{\circ}{\mathbb{T}}^{i}_{0}$, and $c_{k}$ is the observed redundancy of the $k$th lag. As evident in (5.33), there are $L$ copies of $\overset{\circ}{\mathbb{T}}_{0}$ corresponding to the $L$ sampling instances. Hence, $\hat{\hat{l}}^{i}(k)$ is averaged over the $L$ blocks to yield an estimate of the given lag, $\hat{l}(k)=\frac{1}{L}\sum_{i=0}^{L-1}\hat{\hat{l}}^{i}(k)$.
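An illustrative NumPy sketch of the lag averaging (5.35), merging the per-block and cross-block averages into a single mean (valid when every block shares the same observation mask, as assumed here):

```python
import numpy as np

def average_lags(T0_blocks, mask):
    """Estimate each lag per (5.35), averaging the observed entries of each
    diagonal and then over the L diagonal blocks of R.

    T0_blocks: (L, N, N) array of observed T_0 blocks (zeros where missing).
    mask: (N, N) boolean array flagging observed entries (same for all blocks).
    Returns (l_hat, present): first-row lag estimates and the observed lags.
    """
    L, N, _ = T0_blocks.shape
    l_hat = np.zeros(N, dtype=complex)
    present = []
    for k in range(N):
        idx = [(m, m + k) for m in range(N - k) if mask[m, m + k]]
        if idx:
            vals = [T0_blocks[i, m, n] for i in range(L) for (m, n) in idx]
            l_hat[k] = np.mean(vals)
            present.append(k)
    return l_hat, present
```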
The fused matrix completion formulation, therefore, substitutes the sparse sub-diagonals with the estimated average values $\hat{l}(k)$, whereas the completely missing sub-diagonals are interpolated as follows,

$\displaystyle\underset{l\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad\text{Tr}(\mathcal{T}(l))$ (5.36) s.t. $\displaystyle\quad l(\text{lag present})=\hat{l}(k),$ $\displaystyle\quad\mathcal{T}(l)\succeq 0$

The above formulation relies on fairly accurate estimates of the observed correlation lags, obtained by averaging not only over the corresponding sub-diagonals but also across the $L$ sampling instances. When the available degrees of freedom are few, however, the estimates may not be accurate enough for the semidefinite constraint to be satisfied. To circumvent this problem, the $0$ lag is removed from the constraint, which gives the algorithm the additional degree of freedom to introduce an appropriate loading factor that makes the problem feasible. It is noted that the formulation in (5.36) chooses the minimum possible diagonal loading, since it minimizes the trace heuristic and hence promotes a sparse eigenvalue spectrum. The trace heuristic can also be extended to indefinite matrices to recover sparse models [110]. We couple this observation with the above discussion to perform low rank matrix completion for $\mathbb{T}_{k}$ (for $k\neq 0$) as follows,

$\displaystyle\underset{l_{r},l_{c}\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad\text{Tr}(\mathbf{W}_{1})+\text{Tr}(\mathbf{W}_{2})$ (5.37) s.t. $\displaystyle\quad l_{r}(\text{lag present})=\hat{l_{r}}(k),$ $\displaystyle\quad l_{c}(\text{lag present})=\hat{l_{c}}(k),$ $\displaystyle\quad\begin{bmatrix}\mathbf{W}_{1}&\mathcal{T}(l_{r},l_{c})\\\ \mathcal{T}(l_{r},l_{c})&\mathbf{W}_{2}\end{bmatrix}\succeq 0$

Here, $\mathbf{W}_{1}$ and $\mathbf{W}_{2}$ are auxiliary matrices implementing the trace heuristic to recover low rank indefinite matrices. The function $\mathcal{T}(l_{r},l_{c})$ returns the Toeplitz matrix with $l_{r}$ and $l_{c}$ as its first row and first column, respectively. This distinction is important because, in general, $\mathbb{T}_{k}$ (for $k\neq 0$) is not a Hermitian Toeplitz matrix. The formulation in (5.37) is repeated to yield an estimate of every constituent Toeplitz block. Upon performing matrix completion for each constituent Toeplitz block, the individual Toeplitz blocks can be plugged back into (5.33) to yield an estimate $\hat{\hat{\mathbf{R}}}$. We can improve the estimate $\hat{\hat{\mathbf{R}}}$ by incorporating the noise variance, which is generally known or estimated a priori. This is achieved through the maximum likelihood estimation (MLE) approach, in which the eigenvalues corresponding to the eigenvectors of the noise subspace are set equal to the noise floor while the remaining eigenvalues are kept the same. The MLE of $\hat{\hat{\mathbf{R}}}$ is denoted by $\mathbf{\hat{R}}$ and is given by the outer product of the original eigenvectors of $\hat{\hat{\mathbf{R}}}$, reweighted by the modified eigenvalues. In practice, however, the number of eigenvectors associated with the noise floor is not exactly known; we therefore only reset those eigenvalues that are less than the noise floor. Finally, the maximum likelihood estimate $\mathbf{\hat{R}}$ is used in lieu of $\hat{\hat{\mathbf{R}}}$ to carry out the data dependent optimization for MaxSINR.
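A cvxpy/NumPy sketch of the fused completion (5.36) for $\mathbb{T}_{0}$ and the eigenvalue-reset MLE step (both illustrative; the Toeplitz structure is imposed by explicit diagonal equalities, and lag 0 is left free so the solver can supply the minimal diagonal loading):

```python
import cvxpy as cp
import numpy as np

def complete_T0(l_hat, present, N):
    """Fused completion of T_0 per (5.36): pin the averaged lags (except lag 0)
    and minimize the trace subject to positive semidefiniteness."""
    T = cp.Variable((N, N), hermitian=True)
    cons = [T >> 0]
    # Hermitian Toeplitz structure: constant along every diagonal.
    for k in range(N):
        for i in range(N - k - 1):
            cons.append(T[i, i + k] == T[i + 1, i + k + 1])
    for k in present:
        if k != 0:                       # lag 0 left free (diagonal loading)
            cons.append(T[0, k] == l_hat[k])
    cp.Problem(cp.Minimize(cp.real(cp.trace(T))), cons).solve()
    return T.value

def mle_denoise(R_est, noise_floor):
    """Eigenvalue reset for the MLE step: eigenvalues below the noise floor
    are raised to it, yielding a positive definite (non-Toeplitz) estimate."""
    vals, vecs = np.linalg.eigh((R_est + R_est.conj().T) / 2)
    vals = np.maximum(vals.real, noise_floor)
    return (vecs * vals) @ vecs.conj().T
```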
It is also noted that, unlike $\hat{\hat{\mathbf{R}}}$, the matrix $\mathbf{\hat{R}}$ is no longer block Toeplitz but is now guaranteed to be positive definite, as is strictly required to implement the proposed sparse optimization algorithms. The formulation in (5.36) is sufficient for the DFT implementation scheme, which only involves the $L$ received data matrices corresponding to the $L$ DFT bins.

### 5.5 Simulations

The effectiveness of the sparse array design for MaxSINR beamforming is demonstrated by design examples considering a wideband source operating in the presence of a mix of narrowband and wideband jammers. The MATLAB-based CVX modeling tool is used for convex optimization. The importance of sparse array design for MaxSINR is further emphasized by comparing the optimum design with sub-optimum array configurations under the TDL and DFT implementation schemes. The simulation results are presented for prospective linear sensor locations; nevertheless, the proposed algorithms are applicable to rectangular grid points or arbitrarily placed sensors on 3D surfaces.

#### 5.5.1 Example 1

The task is to select $P=14$ sensors from $N=20$ possible equally spaced locations with an inter-element spacing of $\lambda_{min}/2$. We consider $8$ delay-line filter taps associated with each selected sensor ($8$ DFT bins for the DFT implementation scheme), implying $L=8$. A wideband desired point source impinges on the linear array from DOA $40^{\circ}$. Five strong wideband jammers with uniform PSDs (power spectral densities) are located at angles $30^{\circ}$, $45^{\circ}$, $50^{\circ}$, $140^{\circ}$ and $150^{\circ}$. The PSDs of all frequency spread sources are uniform with a fractional bandwidth of $0.22$. A narrowband jammer locked onto the source carrier frequency is active at $130^{\circ}$. The SNR of the desired signal is $0$ dB, and the INR of each interfering signal is set to $30$ dB. The input SINR is thereby $-37$ dB.

Figure 5.4: Frequency dependent beampattern for the array configuration recovered through convex relaxation.

Figure 5.4 shows the frequency dependent beampattern for the array configuration recovered through TDL-SDR. It is evident from the beampattern that a high gain is maintained throughout the band of interest for the desired source signal while suppressing the interferers at their respective DOAs and frequency bands. These beampattern characteristics translate to an output SINR of $7.05$ dB. The TDL-SDR performs close to the optimum array found by enumeration, which has an output SINR of $7.83$ dB. The corresponding sparse array configurations obtained by enumeration and TDL-SDR are shown in Figs. 5.6a and 5.6b, respectively (green denotes a selected sensor location, whereas gray denotes a sensor location not selected). It is important to mention that the optimum array found by enumeration requires a search over $38760$ possible sparse array configurations, which carries a prohibitive computational cost attributed to the expensive singular value decomposition (SVD) for each enumeration. The optimum sparse array design achieved through TDL-SCA is shown in Fig. 5.6c. This array configuration is capable of delivering an output SINR of $7.7$ dB with the use of optimal beamforming weights. It performs very close to the optimum sparse array found through enumeration, with a performance degradation of around $0.1$ dB.
This performance is superior to that of the sparse array found through TDL-SDR by around $0.65$ dB, at a lower computational complexity. In general, for any given array configuration, the TDL implementation scheme yields a marginally higher SINR than the DFT implementation scheme. Owing to the reduced computational cost of the DFT sparse design and the relatively better performance of the TDL sparse design, one can entwine the TDL and DFT beamformers to capitalize on both merits. That is, we proceed to find the optimum TDL beamformer weights based on the sensor array configuration that results from the optimum DFT-based implementation scheme. This way, the optimum sparse array design involves both implementation schemes. This is referred to as the dual-domain design, since it considers both the time and frequency domains in generating the optimum receiver design. This design has a slightly elevated computational expense over the DFT design, as it involves calculating the optimum TDL beamformer weights corresponding to the DFT optimized configuration. For the underlying problem, the dual-domain design, through either DFT-SDR or DFT-SCA, gives an output SINR of $7.76$ dB, which is close to the maximum possible SINR of $7.83$ dB. The DFT-SDR and DFT-SCA both render an identical sparse configuration, which is shown in Fig. 5.6d. Therefore, we remark that for the example considered, the dual-domain design underlying the DFT implementation scheme performs slightly better than the design carried out exclusively by the TDL implementation scheme.

Figure 5.5: Frequency dependent beampattern for the compact ULA (Fig. 5.6e).

Figure 5.6: Example 1 - (a) Optimum array, TDL implementation scheme (enumeration) (b) TDL-SDR (c) TDL-SCA (d) DFT-SDR, DFT-SCA (e) $14$ sensor compact ULA

Figure 5.7: Example 2 - (a) Optimum array, TDL implementation scheme (enumeration) (b) TDL-SCA (c) TDL-SDR (d) DFT-SCA (e) DFT-SDR (f) Worst case array (TDL, DFT)

Fig. 5.5 shows the beampattern for the sparse array design resulting in the worst case output SINR. In this case, the worst array configuration turns out to be the compact ULA (Fig. 5.6e). This configuration could not deliver the resolution required to mitigate the closely located jammers. It can be observed from the beampattern that the compact ULA, in an effort to remove the strong interfering signals, failed to provide maximum gain towards the source of interest. This results in a considerably low output SINR of $-4.5$ dB. Therefore, a sparse array design that optimally and efficiently utilizes the array aperture and the additional degrees of freedom made available by the switching transceiver chains yields an SINR improvement of more than $12$ dB.

#### 5.5.2 Example 2

Consider a wideband source of interest at $45^{\circ}$ and wideband jammers located at $35^{\circ}$, $55^{\circ}$, $60^{\circ}$, $145^{\circ}$ and $155^{\circ}$. A narrowband jammer is located at $135^{\circ}$ at an operating frequency of $f_{c}$; all other parameters are the same as in Example $1$. The SINR of the optimum array for the TDL implementation scheme (Fig. 5.7a) is $11.32$ dB (found through enumeration). Optimization performed using SCA yields the array in Fig. 5.7b, with a respective SINR of $11.2$ dB, whereas that performed by SDR yields the array shown in Fig. 5.7c with a corresponding SINR of $10.9$ dB. For the dual-domain design, the optimum sparse arrays found through DFT-SCA (Fig. 5.7d) and through the DFT-SDR algorithm (Fig.
5.7e) deliver approximately the same output SINR, around $11.02$ dB. This is slightly inferior to the exclusive TDL design. It is important to note that the array configuration resulting in the worst case performance, shown in Fig. 5.7f, spans the full aperture just like the optimum array, yet it offers an output SINR of only $7.35$ dB, underscoring the importance of carefully performing sensor selection in sparse array design.

#### 5.5.3 Comparison of SDR and SCA under both models

The design examples discussed thus far show the favorable performance of the proposed algorithms under the assumption of knowledge of the full data correlation matrix. The results clearly tie the performance to the locations of the sources and their respective powers. However, evaluating the performance under matrix completion involves analysis over additional variables, namely, the initial sparse array configuration prior to optimization and the number of snapshots. The performance is, therefore, dependent on the observed realization of the received data. In order to have a holistic assessment of the proposed algorithms, Monte Carlo simulations are generated. We select $P=8$ locations out of $N=16$ available locations. For a specified DOA of the desired source, trials are generated involving six jammers occupying random locations from $30^{\circ}$ to $150^{\circ}$. The SNR of the desired source is $0$ dB, while the powers of the jammers are uniformly distributed from $10$ to $20$ dB. The simulation is repeated at $11$ different desired source DOAs, and the average SINR is computed. In total, $1500$ experiments are conducted. For each trial, a random $P$-sparse array topology serves as the initial array configuration. This configuration could be an optimized configuration from the preceding operating conditions. The sparse data correlation matrix is estimated based on the sensor locations in the initial configuration before performing matrix completion and the subsequent optimization process. The binary search for the sparsity parameter $\mu$ ranges from $0.01$ to $3$, the sparsity threshold is $\gamma=10^{-3}$ and $\epsilon=0.05$, and the relative signal bandwidths and other parameters are the same as given in Example 1. Three benchmarks are established to assess the performance of the proposed algorithms under limited snapshots and the lack of knowledge of the full data correlation matrix. The first benchmark applies the enumeration technique for the MaxSINR design under the assumption that the data from all prospective sensor locations is available, with accurate knowledge of the data correlation matrix, i.e., assuming an unlimited number of snapshots. This benchmark is referred to as "Knowledge of Full Correlation Matrix-Unlimited Snapshots (KFM-USS)". The other benchmarks utilize matrix completion to recover the missing lags (corresponding to the $N-P$ missing sensors). We refer to these benchmarks as "Matrix Completion-Unlimited Snapshots (MC-USS)" and "Matrix Completion-Limited Snapshots (MC-LSS)", depending on whether the correlation values are accurately known through unlimited snapshots or estimated from limited snapshots. The evaluation under limited snapshots considers $T=500$. The performance of the SCA algorithm for the DFT implementation scheme is shown in Fig. 5.8. The performance upper bound is given by the MaxSINR design evaluated through enumeration (DFT-Enumeration (KFM-USS)). In this case, the average performance over all the desired source DOAs is $7.24$ dB. The proposed DFT-SCA algorithm under the KFM-USS benchmark offers an average SINR of $6.63$ dB.
This performance is also comparable to the one achieved through the proposed matrix completion, as is evident in Fig. 5.8. However, the DFT-SCA design incorporating the MC-LSS benchmark ($T=500$) has a slight performance trade-off of $0.14$ dB w.r.t. the DFT-SCA MC-USS design. The aforementioned robustness of the MaxSINR design under limited snapshots is partially attributable to a rather accurate full matrix estimate achieved by incorporating the a priori knowledge of the noise floor.

Figure 5.8: Performance comparisons of SCA under the DFT model.

Figure 5.9: Performance comparisons of SCA under the TDL model.

The performance of the SCA under the TDL model is evaluated with the aforementioned benchmarks, as depicted in Fig. 5.9. The performance trends are similar; however, the average SINR offered by the TDL implementation scheme is slightly superior to that of the DFT implementation scheme, which is consistent with the literature on wideband beamforming for compact arrays [91]. Moreover, it is noted that the DFT-SCA dual-domain design achieves comparable performance to the TDL-SCA under all design benchmarks. This demonstrates the potential of the dual-domain design in achieving an effective MaxSINR beamformer with reduced complexity. Figs. 5.10 and 5.11 depict the Monte Carlo performance results for the proposed SDR. They show that the SDR offers comparable performance to the SCA technique, but involves a heavy computational overhead. It is also clear from the plots that the optimized array configurations offer a consequential advantage over the compact ULA, the high resolution structured arrays such as coprime and nested arrays [9, 8], and the randomly selected $P$-sparse array configurations, each employing their respective optimal beamforming weights. The average worst case SINR is, however, reduced significantly, to only $1.1$ dB. The performance is also re-evaluated at varying numbers of snapshots with consistent results; the performance of the proposed algorithms under MC-LSS inches closer to the MC-USS benchmark with increased data.

Figure 5.10: Performance comparisons of SDR under the DFT model.

Figure 5.11: Performance comparisons of SDR under the TDL model.

Figure 5.12: Performance comparisons of the optimum sparse array, the worst performing array and the compact ULA (TDL implementation scheme).

#### 5.5.4 Practical Considerations for sparse array design

To assess the SINR advantage of the optimum sparse array design, we consider the effect of two important environment dependent parameters, namely, the DOA of the desired source and the relative locations of the jammers w.r.t. the desired source. To demonstrate this effect, the desired source DOA in the above examples is changed in steps of $5^{\circ}$, with the relative locations of the jammers remaining the same with respect to the desired source. For example, when the desired source is at $50^{\circ}$ instead of $45^{\circ}$, the corresponding jammer locations shift by $5^{\circ}$. Figure 5.12 compares the performance of the optimal configuration, the worst performing array, and the compact ULA, with the desired source DOA varying from $30^{\circ}$ to $60^{\circ}$ under Examples 1 and 2. It is evident from Fig. 5.12 that under all scenarios generated in Example 1, the compact ULA delivers the worst performance, irrespective of the desired source DOA. This is because the jammers are located close to the source of interest and the compact ULA lacks the resolution due to its limited array aperture.
On the other hand, for Example 2, the jammers are comparatively widely spaced and, as such, the compact ULA has a satisfactory performance that is close to the optimum sparse arrays, especially when the source DOAs are near the array broadside. The performance degradation of the ULA near end-fire is due to the increasing overlap between the desired signal subspace and the interference plus noise subspace [111], which lowers the SINR performance. In such scenarios, the sparse array design efficiently utilizes its degrees of freedom to improve SINR by increasing the separation between the two subspaces. These examples show that sparse array design is most critical when the underlying jamming environment calls for additional degrees of freedom to engage the available array aperture more efficiently and to fulfill the resolution requirements posed by the close proximity of the jammers and the desired source DOA.

### 5.6 Conclusion

This chapter considered optimum sparse array design for maximizing the beamformer output SINR for the case of wideband signal models. Two different implementations, namely the TDL and the DFT implementation schemes, were presented for the optimum sparse array design. The DFT implementation scheme reduces the MaxSINR sparse array design problem to a lower dimensional space, thereby reducing the computational cost. The sparse array configuration optimized in the DFT domain and later imported to the TDL implementation scheme was analyzed to alleviate the computational cost of the TDL sparse implementation. It was shown that the imported design can yield performance comparable to the design carried out exclusively through the TDL implementation scheme. For both approaches, we solved the problem using the iterative unit rank promoting SDR algorithm and a simplified implementation using SCA. A parameter-free block Toeplitz matrix completion was proposed to realize the data dependent design. It was shown that the SDR and SCA formulations perform reasonably close to the optimum sparse array design achieved through enumeration under limited data snapshots. The MaxSINR optimum sparse array yielded considerable performance improvement over suboptimal sparse arrays and the compact ULA for the underlying sensing scenarios.

## Chapter 6 Sparse Array Design for Transmit Beamforming

### 6.1 Introduction

Low latency sensor selection technology enables cost effective sparse array design that can be readily configured to meet environment-dependent performance metrics. In this case, the system cost can be considerably reduced by multiplexing the expensive transmission chains to serve many more prospective sensor locations. However, at any given time, only a few sensor locations are operational, corresponding to the active connections to the transmission chains. This approach is fundamentally different from environment-independent sparse array design, which seeks to maximize the number of spatial autocorrelation lags by producing a hole-free coarray aperture for a given number of sensors. There, the main task is to enhance source identifiability for DOA estimation and to handle more sources than sensors [7, 8, 108, 10, 67]. This task can also be achieved for wideband or multiple frequency sources [31]. Similarly, the environment-independent fixed beamformer design optimizes the receive beampattern characteristics, such as a desirable main lobe width and minimum sidelobe levels, as well as frequency invariant beampattern designs for wideband sources [98, 99, 100, 101].
Environment-dependent sparse array configuration design has primarily been considered from the perspective of receive beampattern characteristics. Optimality criteria such as maximizing the signal-to-interference plus noise ratio (MaxSINR) at the receiver can potentially harness effective array configurations for enhancing target parameter estimation accuracy [97, 11, 28, 73, 112, 113]. In this chapter, we consider environment-dependent sparse array design for transmit beamforming. Transmit beampattern design is critical to implementing an efficient receiver design for adaptive radar operations [114, 115, 116]. A desirable transmit design focuses the transmitted power towards the perceived target locations and, as such, suppresses the transmission towards undesired directions. In so doing, it can steer clear of heavy clutter environments or avoid probing towards an adversary location in covert situations. Another critical design objective for the transmitter is to minimize the cross correlation among the returns from different targets to enable effective adaptive receiver processing, as high target cross correlation can severely deteriorate performance. Transmit beamforming design, in essence, is realized by designing the waveforms launched across all array sensors. This is in contrast to receive beamforming, which optimizes the weights to yield a desirable output of a linear combiner. The transmit design can be simplified into two steps: one pertains to designing the correlation matrix of the transmitted waveform, and the other involves synthesizing the actual transmitted sequence from the optimized correlation matrix [117]. For the scope of this chapter, we consider the transmit waveform correlation matrix optimization. There are also additional constraints that arise in transmit waveform design vis-a-vis receiver design. The former typically requires a total transmit power constraint as well as a unit modulus or equal power constraint for each sensor location, so as to utilize system resources judiciously. In this chapter, we propose a sparse array design for transmit beamforming, formulated as maximizing the transmit signal power at the target locations w.r.t. the power transmitted to the undesired locations, while minimizing the cross correlation among the transmitted signals towards the target locations. Our approach extends the sparse array transmit design proposed in [15], which optimizes the transmit waveform correlation matrix to maximize the power towards the desired targets for a given total power constraint. The methodology therein neither incorporates the cross correlation term nor proposes a method to suppress transmission towards certain unwanted directions. We pose the active sensing design problem as optimally selecting $P$ antennas out of $N$ possible equally spaced locations. The optimum sparse transmit array beamformer, in this case, is the one that achieves the design objective, considering all possible sparse array configurations that stem from different arrangements of the available sensors. It is important to note that for compact arrays (ULAs), the transmit beampattern design only involves optimizing the transmitted waveform sequence [25]. However, for the underlying problem, sparse array design involves optimization over two sets of variables, namely, sensor placements and transmitted waveforms. Selecting sensor positions for the transmit beamformer is a combinatorial optimization problem and is, therefore, NP-hard.
In order to avoid the extensive computational burden of enumerating each possible array configuration, we solve the problem by convex approximation. The design problem at hand is posed as a QCQP with weighted mixed $l_{1-\infty}$-norm penalization. We propose an approach for transmit beampattern design that handles the cross correlation term implicitly and is best suited for the sparse transmitter design. The proposed formulation involves a rank-$L$ waveform correlation matrix design, synthesized from $L$ unit rank correlation matrices corresponding to the individual target locations. As a result, the proposed sparse array design promotes group sparsity harmoniously across the $L$ constituent correlation matrices to ensure that the designed correlation matrix corresponds to a $P$-sparse solution. The QCQP is relaxed via the SDR approach and can therefore be solved in polynomial time. The rest of the chapter is organized as follows: In the next section, we state the problem formulation for transmit beamformer design in the case of a pre-specified array configuration. Section 6.3 elaborates the sparse array design by semidefinite relaxation to find the optimum $P$-sparse transmit array geometry. In the subsequent section, simulations are presented to demonstrate the offerings of the proposed sparse array transmit design, followed by concluding remarks.

### 6.2 Problem Formulation

We consider $L$ target locations at $\\{\theta_{l}\\}_{1}^{L}$ and $Q$ known adversary or undesired locations for transmission specified by $\\{\theta_{q}\\}_{1}^{Q}$. The received baseband probing signal ${x}_{l}(n)$ at target location $\theta_{l}$ at time instant $n$ is given by,

${x}_{l}(n)=\mathbf{s}^{H}(n)\mathbf{a}_{l}$ (6.1)

The signal vector $\mathbf{s}(n)$ is launched from a linear array with $N$ uniformly placed sensors with an inter-element spacing of $d$. Assuming a narrowband signal model with targets in the far field, the steering vector $\mathbf{a}_{l}\in\mathbb{C}^{N}$ is given by,

$\mathbf{a}_{l}=[1\,\,\,e^{j(2\pi/\lambda)dcos(\theta_{l})}\,.\,.\,.\,e^{j(2\pi/\lambda)d(N-1)cos(\theta_{l})}]^{T}$ (6.2)

Likewise, the baseband signal ${x_{q}}(n)\in\mathbb{C}$ at the locations $\theta_{q}$ is defined. The expression for the signal power at location $\theta_{l}$ is given as follows,

$P_{l}=\mathbf{a}^{H}_{l}\mathbf{R}\mathbf{a}_{l}$ (6.3)

The matrix $\mathbf{R}=E(\mathbf{s}(n)\mathbf{s}(n)^{H})$ is the transmitted waveform correlation matrix, where the symbol $E$ denotes the expectation operator. The problem of maximizing the signal power at the target locations w.r.t. the unwanted locations can then be formulated as seeking to maximize the directed power for each target location, yielding the following optimization problem,

$\displaystyle\underset{\mathbf{R}}{\text{minimize}}$ $\displaystyle\quad\sum_{q=1}^{Q}w_{q}(\mathbf{a}_{q}^{H}\mathbf{R}\mathbf{a}_{q})$ (6.4) s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$

The weighting coefficients $w_{q}$ determine the relative preference of minimizing one signal over another. We incorporate the cross-correlation of any two target signals located at $\theta_{l^{{}^{\prime}}}$ and $\theta_{l}$, which is given by,

$P_{l,{l^{{}^{\prime}}}}=|\mathbf{a}_{l}^{H}\mathbf{R}\mathbf{a}_{l^{{}^{\prime}}}|,$ (6.5)

where ‘$|.|$’ is the modulus operator.
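The steering vector (6.2) and the directed power (6.3) translate directly into a short NumPy sketch (half-wavelength spacing assumed by default):

```python
import numpy as np

def tx_steering(theta, N, d_over_lambda=0.5):
    """Steering vector a_l of (6.2) for an N-element ULA."""
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(N) * np.cos(theta))

def directed_power(R, theta, N):
    """Transmit power toward theta, P = a^H R a, per (6.3)."""
    a = tx_steering(theta, N)
    return float(np.real(a.conj() @ R @ a))
```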
The objective function in (6.4) can then be rewritten as follows [117],

$\displaystyle\underset{\mathbf{R}}{\text{minimize}}$ $\displaystyle\quad\sum_{q=1}^{Q}w_{q}(\mathbf{a}_{q}^{H}\mathbf{R}\mathbf{a}_{q})+\sum_{l=1}^{L}\sum_{{l^{{}^{\prime}}}=l+1}^{L}w_{ll^{{}^{\prime}}}|\mathbf{a}_{l}^{H}\mathbf{R}\mathbf{a}_{l^{{}^{\prime}}}|$ s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$ (6.6)

The weighting coefficients $w_{ll^{{}^{\prime}}}$ determine the relative trade-off in minimizing the cross correlation terms. We adopt a method to handle the cross correlation term in a more implicit way. Consider the correlation matrix $\mathbf{R}$ decomposed into $L$ matrices such that $\mathbf{R}=\sum_{l=1}^{L}\mathbf{R}_{l}$. Matrix $\mathbf{R}_{l}$ is chosen such that the associated cross correlation term is zero, i.e., $\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}}=0$ for $l\neq l^{{}^{\prime}}$. This is possible, for instance, if $\mathbf{a}_{l^{{}^{\prime}}}\in\mathcal{N}(\mathbf{R}_{l})\,\,\,\forall\,\,l\neq l^{{}^{\prime}}$. The symbol $\mathcal{N}$ denotes the null space of the input matrix. Assuming the aforementioned conditions are satisfied, (6.6) can be rewritten as,

$\displaystyle\underset{\mathbf{R}_{1},\mathbf{R}_{2},...,\mathbf{R}_{L}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}\sum_{q=1}^{Q}w_{q}(\mathbf{a}_{q}^{H}\mathbf{R}_{l}\mathbf{a}_{q})$ (6.7) s.t. $\displaystyle\quad\sum_{l=1}^{L}\mathbf{a}_{l^{{}^{\prime}}}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}}=1,\quad{l^{{}^{\prime}}}\in\,\,\\{1,...L\\}$

It is noted that the constraints in the above equation can also be simplified owing to the aforementioned assumptions on $\mathbf{a}_{l^{{}^{\prime}}}\in\mathcal{N}(\mathbf{R}_{l})\,\,\,\forall\,\,l\neq l^{{}^{\prime}}$. We rewrite (6.7) as,

$\displaystyle\underset{\mathbf{R}_{1},\mathbf{R}_{2},...,\mathbf{R}_{L}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}\sum_{q=1}^{Q}w_{q}(\mathbf{a}_{q}^{H}\mathbf{R}_{l}\mathbf{a}_{q})$ (6.8) s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$

The zero cross-correlation conditions can be achieved approximately by introducing additional terms as part of the objective function, that is,

$\displaystyle\underset{\mathbf{R}_{1},\mathbf{R}_{2},...,\mathbf{R}_{L}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}\sum_{q=1}^{Q}w_{q}(\mathbf{a}_{q}^{H}\mathbf{R}_{l}\mathbf{a}_{q})+\sum_{l=1}^{L}\sum_{l^{{}^{\prime}}\neq l}w_{l^{{}^{\prime}}}(\mathbf{a}_{l^{{}^{\prime}}}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}})$ (6.9) s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$

Here, we have used the fact that $\mathbf{a}_{l^{{}^{\prime}}}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}}$ bounds the cross-term $|\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}}|$; by the Cauchy-Schwarz inequality, $|\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}}|^{2}\leq(\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l})(\mathbf{a}_{l^{{}^{\prime}}}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}})$, so minimizing the terms $\mathbf{a}_{l^{{}^{\prime}}}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}}$ for all $l\neq l^{{}^{\prime}}$ also minimizes the cross terms $|\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}}|$ for all $l\neq l^{{}^{\prime}}$. The weighting coefficients $w_{l^{{}^{\prime}}}$ control the relative cross correlations among the target locations.
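As a concrete illustration of the null-space condition, one (non-unique) way to construct a constituent matrix with $\mathbf{a}_{l^{{}^{\prime}}}\in\mathcal{N}(\mathbf{R}_{l})$ is to project $\mathbf{a}_{l}$ onto the orthogonal complement of the other target steering vectors; the following numpy sketch is illustrative only and is not the optimized design pursued in this chapter.

```python
import numpy as np

def orth_complement_projector(S):
    """Orthogonal projector onto the complement of the column span of S."""
    return np.eye(S.shape[0]) - S @ np.linalg.pinv(S)

def constituent_correlation(a_l, S_l):
    """Rank-one R_l with every column of S_l in its null space.

    With P projecting away span(S_l), R_l = (P a_l)(P a_l)^H satisfies
    R_l a_{l'} = 0 and hence a_l^H R_l a_{l'} = 0 for all a_{l'} in S_l.
    """
    v = orth_complement_projector(S_l) @ a_l
    return np.outer(v, v.conj())
```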
Therefore, instead of enforcing an exact zero value for the cross-terms, we resort to minimizing the augmented term in (6.9), leading to,

$\displaystyle\underset{\mathbf{R}_{1},\mathbf{R}_{2},...,\mathbf{R}_{L}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}(\sum_{q=1}^{Q}w_{q}(\mathbf{a}_{q}^{H}\mathbf{R}_{l}\mathbf{a}_{q})+\sum_{l^{{}^{\prime}}\neq l}w_{l^{{}^{\prime}}}(\mathbf{a}_{l^{{}^{\prime}}}^{H}\mathbf{R}_{l}\mathbf{a}_{l^{{}^{\prime}}}))$ s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$ (6.10)

In essence, the proposed formulation in (6.10) limits the cross-terms inherently by treating, in the $l$th subproblem, all target locations other than the $l$th target in much the same way as the $Q$ unwanted signals. We define the manifold matrix corresponding to the unwanted locations $\theta_{q}$ as $\mathbf{V}=[\mathbf{a}_{1},\mathbf{a}_{2},...,\mathbf{a}_{Q}]$. Let $\mathbf{S}_{l}=[\mathbf{a}_{1},\mathbf{a}_{2},...,\mathbf{a}_{L-1}]$ incorporate all target steering vectors except that of the $l$th target. Define $\mathbf{B}_{l}=[\mathbf{V}\,\,\,\,\mathbf{S}_{l}]$. Lumping the weighting coefficients, other than that of the $l$th target, in a diagonal matrix $\mathbf{W}_{l}$ allows rewriting (6.10) compactly as,

$\displaystyle\underset{\mathbf{R}_{1},\mathbf{R}_{2},...,\mathbf{R}_{L}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}(\mathbf{W}_{l}\mathbf{B}_{l}^{H}\mathbf{R}_{l}\mathbf{B}_{l}+\rho_{l}\text{Tr}(\mathbf{R}_{l}))$ (6.11) s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$

An additional term involving Tr($\mathbf{R}_{l}$) is introduced in (6.11), where the symbol ‘Tr(.)’ represents the trace of a matrix; minimizing this term limits the norm of $\mathbf{R}_{l}$. This regularization term ($\rho_{l}$ is the regularization parameter) can also help to alleviate high sidelobe levels in the transmit beampattern [47]. The above quadratic formulation requires information about the target locations as well as the undesired locations, but it holds irrespective of the array configuration and is equally valid for compact arrays as well as sparse arrays. The sparse optimization of the above formulation is explained in the next section.

### 6.3 Sparse array design

The above quadratic problem for $P$-antenna selection is a combinatorial optimization problem and cannot be solved in polynomial time. We formulate the sparse array design as a rank-relaxed semidefinite program (SDR) with mixed-norm regularization to recover group-sparse solutions.

#### 6.3.1 Sparse solution through SDR

Invoking sparsity in the designed correlation matrix $\mathbf{R}_{l}$ of the transmitted waveform essentially implies that a missing sensor translates to zeros in the corresponding row and column of $\mathbf{R}_{l}$. Furthermore, $\mathbf{R}=\sum_{l=1}^{L}\mathbf{R}_{l}$ implies that the sparse rows and columns should be consistent across all $L$ correlation matrices. In order to do so, and to invoke sparsity in a systematic manner, the positive semidefinite correlation matrices $\mathbf{R}_{l}$ are assumed to be of unit rank and expressed as an outer product, $\mathbf{R}_{l}=\mathbf{r}_{l}\mathbf{r}_{l}^{H}$. Define $\mathbf{r}^{(k)}=[\mathbf{r}_{1}(k),\mathbf{r}_{2}(k),...,\mathbf{r}_{L}(k)]\in\mathbb{C}^{L}$ as the vector collecting the $k$th entries of the $L$ vectors $\mathbf{r}_{l}$, associated with the $k$th transmitter.
The problem formulated in (6.11) can be rewritten with an additional regularization term as,

$\displaystyle\underset{\mathbf{R}_{l}\in\mathbb{C}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}(\mathbf{W}_{l}\mathbf{B}_{l}^{H}\mathbf{R}_{l}\mathbf{B}_{l}+\rho_{l}\text{Tr}(\mathbf{R}_{l}))+\mu(\sum_{k=1}^{N}||\mathbf{r}^{(k)}||_{q})$ s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$ (6.12)

Here, $||.||_{q}$ denotes the $q$-norm of a vector. The mixed $l_{1-q}$-norm regularization is known to promote group sparsity in the solution. This ensures that the same set of transmitters is selected for each constituent correlation matrix $\mathbf{R}_{l}$, without explicitly specifying the cardinality of active transmitters. For the underlying design problem, the re-weighted $l_{1}$-norm is adopted, which is a well-known regularizer for promoting sparse solutions iteratively [58],

$\displaystyle\underset{\mathbf{R}_{l}\in\mathbb{C}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}(\mathbf{W}_{l}\mathbf{B}_{l}^{H}\mathbf{R}_{l}\mathbf{B}_{l}+\rho_{l}\text{Tr}(\mathbf{R}_{l}))+\mu(\sum_{k=1}^{N}\mathbf{u}^{i}(k)||\mathbf{r}^{(k)}||_{q})$ s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$ (6.13)

The choice of the $k$th element of the re-weighting vector $\mathbf{u}^{i}(k)$ at the $i$th iteration is discussed later. Implementing group sparsity through the $\infty$-norm and choosing the squared $l_{1}$-norm function instead of the $l_{1}$-norm facilitates the semidefinite realization [2],

$\displaystyle\underset{\mathbf{R}_{l}\in\mathbb{C}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}(\mathbf{W}_{l}\mathbf{B}_{l}^{H}\mathbf{R}_{l}\mathbf{B}_{l}+\rho_{l}\text{Tr}(\mathbf{R}_{l}))+\mu(\sum_{k=1}^{N}\mathbf{u}^{i}(k)||\mathbf{r}^{(k)}||_{\infty})^{2}$ s.t. $\displaystyle\quad\mathbf{a}_{l}^{H}\mathbf{R}_{l}\mathbf{a}_{l}=1,\quad l\in\,\,\\{1,...L\\}$ (6.14)

The above problem can then be posed as a semidefinite program by equivalently expressing the quadratic functions, $\mathbf{W}_{l}\mathbf{B}_{l}^{H}\mathbf{R}_{l}\mathbf{B}_{l}=$ Tr$(\mathbf{W}_{l}\mathbf{B}_{l}^{H}\mathbf{R}_{l}\mathbf{B}_{l})=$ Tr$(\mathbf{R}_{l}\mathbf{W}_{l}\mathbf{B}_{l}\mathbf{B}_{l}^{H})=$ Tr$(\mathbf{R}_{l}\mathbf{\bar{B}}_{l})$. Here, $\mathbf{\bar{B}}_{l}=\mathbf{W}_{l}\mathbf{B}_{l}\mathbf{B}_{l}^{H}$ and, similarly, $\mathbf{{A}}_{l}=\mathbf{a}_{l}\mathbf{a}_{l}^{H}$ is the outer product of the $l$th target steering vector. The formulation in (6.14) takes the following form [59, 61, 72, 60, 2],

$\displaystyle\underset{\mathbf{R}_{l}\in\mathbb{C}^{N\times N},\,\mathbf{\tilde{R}}\in\mathbb{R}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}\text{Tr}(\mathbf{R}_{l}(\mathbf{\bar{B}}_{l}+\rho_{l}\mathbf{I}))+\mu\text{Tr}(\mathbf{U}^{i}\mathbf{\tilde{R}})$ s.t.
$\displaystyle\quad\text{Tr}(\mathbf{R}_{l}\mathbf{A}_{l})\geq 1,\quad l\in\,\,\\{1,...L\\},$ $\displaystyle\quad\mathbf{\tilde{R}}\geq|\mathbf{R}_{l}|,\quad l\in\,\,\\{1,...L\\},$ $\displaystyle\quad\mathbf{R}_{l}\succeq 0,\,\text{Rank}(\mathbf{R}_{l})=1$ (6.15)

Here, ‘$\geq$’ denotes element-wise comparison, ‘$\succeq$’ denotes the positive semidefinite constraint, and $\mathbf{U}^{i}=\mathbf{u}^{i}(\mathbf{u}^{i})^{T}$ ($(.)^{T}$ denotes the transpose). The SDR realization of the problem in (6.15) is pursued by dropping the non-convex rank-limiting constraint,

$\displaystyle\underset{\mathbf{R}_{l}\in\mathbb{C}^{N\times N},\,\mathbf{\tilde{R}}\in\mathbb{R}^{N\times N}}{\text{minimize}}$ $\displaystyle\quad\sum_{l=1}^{L}\text{Tr}(\mathbf{R}_{l}(\mathbf{\bar{B}}_{l}+\rho_{l}\mathbf{I}))+\mu\text{Tr}(\mathbf{U}^{i}\mathbf{\tilde{R}})$ s.t. $\displaystyle\quad\text{Tr}(\mathbf{R}_{l}\mathbf{A}_{l})\geq 1,\quad l\in\,\,\\{1,...L\\},$ $\displaystyle\quad\mathbf{\tilde{R}}\geq|\mathbf{R}_{l}|,\quad l\in\,\,\\{1,...L\\},$ $\displaystyle\quad\mathbf{R}_{l}\succeq 0,\quad l\in\,\,\\{1,...L\\}$ (6.16)

Although the formulation in (6.16) doesn't enforce the rank constraint, it has been observed, through extensive numerical simulations, that it usually renders rank-one solutions and, therefore, approximates the original problem fairly accurately.

Table 6.1: SDR-Transmit Beamformer Design

Input: $N$, $P$, $L$, look directions $\theta_{q}$ and $\theta_{l}$.
Output: $\mathbf{R}=\sum_{l=1}^{L}\mathbf{R}_{l}$, the transmit waveform correlation matrix.
Initialization: Initialize $\gamma$, $\epsilon$, $\mathbf{W}_{l}$, $\rho_{l}$; set $\mathbf{U}$ to the all-ones matrix.
Bisection search for desired cardinality $P$: set $l=\mu_{lower}$, $u=\mu_{upper}$ (initializing the lower and upper limits of the sparsity parameter range).
while (cardinality of the $\mathbf{r}_{l}$'s $\neq P$) do
  $\mu=(l+u)/2$
  Run the SDR of Eq. (6.16).
  Update $\mathbf{U}$ according to Eq. (6.17).
  Update $\mu$ through the bisection method.
end while
After achieving the desired cardinality, run the SDR for the reduced-size correlation matrix corresponding to the nonzero values of $\mathbf{\tilde{R}}$ with $\mu=0$, yielding $\mathbf{R}=\sum_{l=1}^{L}\mathbf{R}_{l}$.
return $\mathbf{R}$

#### 6.3.2 Reweighting sparsity

The reweighting matrix $\mathbf{U}^{i}$ is typically initialized as an all-ones matrix (no weighting). It is subsequently updated as an inverse function of the absolute values of the entries of the correlation matrix as follows [58],

$\mathbf{U}_{m,n}^{i+1}=\frac{1}{\mathbf{\tilde{R}}^{i}(m,n)+\epsilon}$ (6.17)

The $(m,n)$th entry of $\mathbf{\tilde{R}}$ is given by $\mathbf{\tilde{R}}^{i}(m,n)$. The parameter $\epsilon$ prevents the solution from converging to a local minimum and avoids the unwanted case of division by zero. An arbitrarily chosen sparsity parameter $\mu$ does not, in general, correspond to a $P$-sparse solution. In order to trace the $\mu$ corresponding to a $P$-sparse solution, the optimization problem is typically solved for different values of $\mu$ found through a bisection search, until the solution converges to $P$ sensors [2]. The proposed algorithm for controlling the sparsity of the transmitted waveform correlation matrix is elaborated further in Table 6.1.
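For concreteness, the following is a minimal cvxpy sketch (assuming the cvxpy package is available) of a single solve of the relaxed problem (6.16) for a fixed reweighting matrix, together with the update of Eq. (6.17); the function names and the data layout (Python lists of numpy matrices for $\mathbf{\bar{B}}_{l}$ and $\mathbf{A}_{l}$) are illustrative assumptions, and the bisection search over $\mu$ of Table 6.1 would wrap around these calls.

```python
import numpy as np
import cvxpy as cp

def sdr_step(B_bar, A, U, mu, rho):
    """One solve of the rank-relaxed SDR (6.16) for a fixed reweighting U.

    B_bar: list of L numpy matrices W_l B_l B_l^H (each N x N)
    A:     list of L rank-one target matrices a_l a_l^H
    U:     current reweighting matrix (N x N, real)
    """
    N, L = B_bar[0].shape[0], len(B_bar)
    R = [cp.Variable((N, N), hermitian=True) for _ in range(L)]
    Rt = cp.Variable((N, N))  # elementwise bound, plays the role of R-tilde
    obj = sum(cp.real(cp.trace(R[l] @ (B_bar[l] + rho * np.eye(N))))
              for l in range(L)) + mu * cp.trace(U @ Rt)
    cons = [R[l] >> 0 for l in range(L)]                    # PSD, rank dropped
    cons += [cp.real(cp.trace(R[l] @ A[l])) >= 1 for l in range(L)]
    cons += [Rt >= cp.abs(R[l]) for l in range(L)]          # Rt >= |R_l|
    cp.Problem(cp.Minimize(obj), cons).solve()
    return [Rl.value for Rl in R], Rt.value

def reweight(Rt, eps=0.05):
    """Reweighting update of Eq. (6.17)."""
    return 1.0 / (np.abs(Rt) + eps)
```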
#### 6.3.3 Transmit power constraint

It is noted that the formulation adopted so far doesn't explicitly account for the total transmit power constraint, which is critical for a pragmatic design. However, the proposed approach can inherently accommodate this constraint, as it is primarily based on relative target power maximization w.r.t. the undesired locations. This renders the design invariant to a scaling factor. Therefore, for a given transmit power $P_{t}$ and the designed transmit correlation matrix $\mathbf{R}$, the total transmit power constraint is realized by a scaling factor $\alpha$ such that $\alpha$Tr$(\mathbf{R})=P_{t}$. For the scope of this chapter, we don't consider the uniform elemental power constraint.

### 6.4 Simulations

In this section, we demonstrate the effectiveness of the active sparse array design from the perspective of the transmit beampattern. The performance of the sparse array design is compared with the compact ULA and a structured sparse array design.

Figure 6.1: Various array configurations: (a) enumerated design, (b) SDR-optimized design, (c) nested array, (d) compact ULA.

#### 6.4.1 Example

We consider $N=18$ equally spaced locations that are potentially available for sensor selection. The minimum spacing among the sensors is $d=\lambda/2$, and $P=10$ sensor locations can be chosen concurrently. The three target locations ($L=3$), namely Targets 1, 2 and 3, are perceived at $40^{\circ}$, $50^{\circ}$ and $65^{\circ}$, respectively. Four unwanted directions are located at $25^{\circ}$, $60^{\circ}$, $110^{\circ}$ and $120^{\circ}$. All the weighting coefficients corresponding to the targets and unwanted locations are set to one, as is the regularization parameter ($\rho_{l}=1$). The sparsity parameter $\mu$ is found through a bisection search over the range $0.01$ to $3$; the sensor locations with correlation values less than the sparsity threshold $\gamma=10^{-5}$ are declared sparse, and $\epsilon=0.05$ is chosen for the underlying problem.

Figure (6.1a) shows the sparse array found through enumeration that optimizes the formulated objective function over all possible configurations. This sparse array configuration has a performance comparable to that of the array optimized through the proposed SDR algorithm. The SDR-optimized topology is shown in Fig. (6.1b) and has a performance trade-off of less than 0.23 dB compared to the enumerated design. The transmit beampattern of the optimized topology is depicted in Fig. (6.2). It is observed from the normalized beampattern that the designed configuration attempts to maximize the transmitted power towards the target locations while mitigating the power towards the unwanted directions. In so doing, the average directed gain towards the target locations is less than 0.5 dB down compared to the maximum gain. On the other hand, the nested array configuration and the compact ULA offer an average gain towards the targets that is $2.1$ dB and $2.45$ dB down, respectively, compared to the corresponding locations with the maximum transmitted power. The nested array and the compact ULA are shown in Figs. (6.1c) and (6.1d), with the respective beampatterns depicted in Figs. (6.3) and (6.4), respectively.

Figure 6.2: Transmit beampattern for the SDR-optimized configuration.
Figure 6.3: Transmit beampattern for the nested array configuration.

It is noted from the depicted beampatterns that the proposed methodology, in an attempt to minimize the cross correlation among the target signals, does not necessarily render a synthesized beampattern whose maxima lie precisely at the target locations. This is evident from the beampattern of the SDR-optimized array shown in Fig. (6.2), where the maxima are slightly skewed from the target locations.
This effect can be explained from the transmit beampatterns of the constituent correlation matrices shown in Fig. (6.5). It is clear that the individual constituent beampatterns have their maxima at the respective target locations. However, since the transmit beampattern is a cumulative pattern found by the superposition of the constituent beampatterns, the overall maxima can drift from the proximity of the target locations. This effect doesn't pose a major drawback to the proposed design, as is evident from the beampattern of Fig. (6.2), where the beampattern maximum is located between the closely spaced targets at $40^{\circ}$ and $50^{\circ}$ and is merely $0.5$ dB stronger than the target gains. In this case, significantly higher power is still directed to the target locations relative to the total average transmitted power.

Figure 6.4: Transmit beampattern for the compact ULA.
Figure 6.5: Constituent transmit beampatterns for the SDR-optimized configuration.

To analyze the cross correlation term, we plot the following normalized cross-term $\hat{P}_{l,{l^{{}^{\prime}}}}$ for each target location,

$\hat{P}_{l,{l^{{}^{\prime}}}}=\frac{|\mathbf{a}^{H}(\theta_{l})\mathbf{R}\mathbf{a}(\theta_{l^{{}^{\prime}}})|}{(\mathbf{a}^{H}(\theta_{l})\mathbf{R}\mathbf{a}(\theta_{l}))^{\frac{1}{2}}(\mathbf{a}^{H}(\theta_{l^{{}^{\prime}}})\mathbf{R}\mathbf{a}(\theta_{l^{{}^{\prime}}}))^{\frac{1}{2}}}$

Fig. (6.6) shows the cross-correlation pattern w.r.t. Target 1. It is clear that the cross-correlation contributions of Target 1 to target locations 2 and 3 are negligible, being more than $20$ dB down. In contrast, the cross-correlation coefficients of Target 1 with Target 2 for the nested array and the compact ULA are $0.46$ and $0.53$, respectively, which are significantly higher. However, the cross correlations of Target 1 with Target 3 are substantially lower for the nested array and the compact ULA, as Target 3 is sufficiently far away from the Target 1 location. This also holds true for the cross correlations of Target 2 with Target 3, owing to the spacing of both target locations, as shown in Fig. (6.7).

### 6.5 Conclusion

This chapter considered active sensing using sparse array configurations. It sought optimum transmit beamforming for radar applications. The problem was formulated as optimizing the transmit waveform correlation matrix and solved through convex relaxation with low computational complexity. It was shown that the active sparse array design can achieve desirable beampattern characteristics, such as directing a high proportion of the transmitted power towards the prospective target locations with minimal cross-correlation among the target signals. We showed the effectiveness of the proposed approach by comparing its performance with that of the compact ULA, a structured array, and the design achieved through enumeration.

Figure 6.6: Cross-correlation pattern against Target 1 for various sparse configurations.
Figure 6.7: Cross-correlation pattern against Target 2 for various sparse configurations.

## Chapter 7 Sparse Array Capon Beamformer Design Availing Deep Learning Approach

### 7.1 Introduction

Sparse array design reduces system transceiver costs by reducing the hardware and processing complexity through sensor selection. It is useful in a multitude of sensor signal processing tasks for MIMO communications, radar/sonar, satellite navigation, radio telescopes, speech enhancement and medical imaging applications [1, 2, 3, 4, 5, 6].
The performance gains in using sparse arrays stem from their inherent ability to leverage the additional degrees of freedom to accomplish certain pre-defined performance metrics. Several different performance metrics have been proposed in the literature, and they can generally be categorized into environment-independent or environment-dependent designs. In the latter case, implemented via reconfigurable sensor arrays, the receiver performance largely depends on the operating environment, which may change according to the source and interference signals and their locations. This is in contrast to environment-independent sparse arrays, whose configurations follow certain formulas and seek to attain structured sparse configurations with extended-aperture co-arrays. The driving objective, in this case, is to enable direction of arrival (DOA) estimation of more sources than the available physical sensors. Common examples of structured sparse arrays are the nested and coprime arrays [7, 8, 10].

Reliably extracting a desired signal waveform by enhancing SINR has a direct bearing on improving target detection and localization for radar signal processing, increasing throughput or channel capacity for MIMO wireless communication systems, and enhancing resolution capability in medical imaging [45, 46, 47]. The maximizing signal-to-noise ratio (MaxSNR) and MaxSINR criteria have been shown to yield highly effective beamforming performance and interference mitigation. For sparse array design, the MaxSINR beamforming performance depends mainly on the selected positions of the sensors as well as on the locations of the sources in the field of view (FOV) [11, 13, 14, 12, 20, 71]. It is noted that with sparse arrays, the commonly used Capon beamforming must not only find the optimum weights but also the optimum array configuration. This is clearly an entwined optimization problem and requires attaining the maximum SINR over all possible sparse array configurations.

Sparse array design typically involves the selection of a subset of uniform grid points for sensor placements. For a given number of sensors, it is assumed that the number of prospective grid points, spaced by half a wavelength, is limited due to a size constraint on the physical aperture. For environment-dependent sparse arrays, the antenna positions are selected from uniformly spaced locations that are served by a limited number of transceiver chains. The environment-sensitive array design objectives have recently become more realizable due to advances in efficient sensor switching technologies that readily activate a subset of sensors on predefined grid points, resulting in rapid array reconfigurability. Thereby, the system cost can be significantly reduced by limiting the number of expensive transceiver chains, as shown in Fig. 7.1 [92, 95, 94, 93].

Environment-dependent sparse array design algorithms generally follow two different approaches. The designs based on prior knowledge of interference parameters essentially require that the real-time interference parameters, such as DOAs and respective SINRs, are either known or estimated a priori in real time. The other approach does not explicitly require information about the interfering environment, as is the case in the Capon beamforming formulation. In both cases, several iterative algorithms have been proposed to optimize the sparse array beamformer design.
Although convex optimization algorithms, such as semidefinite relaxation (SDR) and successive convex approximation (SCA), have been developed to yield sparse configurations with superior beamforming performance [20, 71], the real-time implementation of these algorithms remains limited due to their relatively high computational cost. The latter is undesirable in rapidly changing environments stemming from temporally and spatially nonstationary behaviors of the sources in the field of view.

In this chapter, we propose a sparse beamformer methodology implementing data-driven array design by training a DNN to learn and mimic sparse beamforming design algorithms [118, 119]. Towards this goal, the training scenarios are simulated in two different ways. In the first approach, we use an enumeration technique to generate the training labels for any given received sensor correlation function. This is achieved by finding the MaxSINR configuration by sifting through all possible sparse configurations and choosing the best-performing topology. Although in practice the training data is generated offline, it becomes infeasible to obtain the optimum configuration even for moderate-size arrays due to the enormous number of sensor permutations. In order to circumvent this problem, we propose a new technique to expedite the generation of a large number of training data labels for input to the DNN. For a given environment in the training set, this technique considers the array spatial spectrum and incorporates the lag redundancy in determining desired array structures. Aside from the efficient generation of DNN training data, the same technique can be used as a stand-alone method to determine the best array configuration if prior information on the interference parameters is provided. The DNN approximates the unknown mapping from the input correlation matrix to the output sensor placements (Fig. 7.2). It is shown that the DNN effectively learns the optimum array structure. This makes the DNN suitable for real-time implementation, as the DNN output prediction requires only a small number of trivial operations.

Figure 7.1: Block diagram of adaptive switched sensor beamformer

Prior work: DNNs have shown great potential due to their automatic feature selection and demonstrated ability of effective feature learning in many applications, including computer vision, speech recognition, and natural language processing. Recently, 'learn to optimize' approaches have been proposed to improve and automate the implementation of DNNs itself, which otherwise largely requires the laborious task of designing the learning algorithms alongside model and hyperparameter selection that needs to be manually tailored from task to task. In these methods, an additional DNN (called the meta-DNN) is trained to ensure better optimization of the original DNN (called the base-DNN) and to generalize over many tasks, hence avoiding redesigning the algorithm for closely related tasks [120, 121, 122]. Depending on the task at hand, the DNN employed to learn can be trained either by reinforcement learning or by supervised learning. It has been shown that reinforcement learning is a reliable approach in case the training examples are not independent and identically distributed (i.i.d.). This is precisely the case in optimizer learning, because the step vector towards the optimizer, for any given iteration, affects the gradients at all subsequent iterations.
On the other hand, DNN designs based on the standard supervised learning approach have been shown to realize computationally efficient implementations of iterative signal processing algorithms [123, 124]. These computationally intensive algorithms take a given set of design parameters and produce the “optimized” solutions as their outputs. The DNN learns from training examples that are generated by running these algorithms offline. Also, the parameters of the network are learned entirely offline, whereas an efficient online implementation is realized by passing the input through this pre-trained DNN, which requires a small number of simple operations to yield the optimized output. In this chapter, we develop a sparse array beamformer design implementation using supervised training.

Learning sparse optimization techniques has been studied before in the context of developing sparse representations and seeking simpler models [125]. Unlike sparse representation approaches, the sparse beamformer optimization refers to ’sparsity in the sensing’ rather than ’sparsity in the sensed’, an important distinction. Learned sparse algorithms, thus far, have mainly focused on the iterative “unfolding” concept implemented by a single layer of the network [126, 127, 128, 129, 125]. These approaches approximate rather simple algorithms implemented through iterative soft-thresholding, such as the ISTA algorithm for sparse optimization. The sparse beamformer design, on the other hand, involves intricate operations such as singular value decomposition and matrix inversion. Also, the sparse beamformer designs mainly implemented through convex relaxation are based on the SDR and SCA algorithms involving sparsity-promoting regularizers and are very expensive to implement in real time.

Main Contributions: The main contributions of this chapter can be summarized as follows.

1) A sparse beamformer spectral analysis (SBSA) algorithm is proposed, which provides an insightful perspective on MaxSINR beamformer design with reduced computational complexity and superior performance. The design elucidates the concept of ’inherently amenable’ sparse configurations for interference mitigation.

2) A DNN-based approach is developed, for the first time, to configure a Capon-based data-driven sparse beamformer by learning the enumerated algorithm as well as the SBSA design. The design is achieved through a direct mapping of the received correlation matrix to the optimum sparse configuration for a given ‘look direction’. The proposed methodology combines the merits of the data-dependent designs and the designs assuming prior information of the interfering environment.

3) The proposed design is shown to be robust in the case of limited data snapshots and can easily be extended to robust adaptive beamforming to cater for uncertainty regarding the look-direction DOA as well as array calibration errors.

The rest of the chapter is organized as follows. In the next section, we state the problem formulation for maximizing the output SINR. Section 7.3 deals with the optimum sparse array design by the SBSA algorithm and the DNN-based Capon implementation. In Section 7.4, with the aid of a number of design examples, we demonstrate the usefulness of the proposed algorithms in achieving MaxSINR sparse array design. Concluding remarks follow at the end.

Figure 7.2: Architecture of Deep Neural Network (DNN)

### 7.2 Problem Formulation

Consider a desired source and $L$ independent interfering source signals impinging on a linear array with $N$ uniformly placed sensors.
The baseband signal received at the array at time instant $t$ is then given by,

$\mathbf{x}(t)=\alpha(t)\mathbf{s}(\theta)+\sum_{l=1}^{L}\beta_{l}(t)\mathbf{v}(\theta_{l})+\mathbf{n}(t),$ (7.1)

where $\mathbf{s}({\theta})$ and $\mathbf{v}({\theta_{l}})$ $\in\mathbb{C}^{N}$ are the steering vectors corresponding to the directions of arrival $\theta$ and $\theta_{l}$, respectively, and are defined as follows,

$\mathbf{s}({\theta})=[1\,\,\,e^{j(2\pi/\lambda)d\cos(\theta)}\,.\,.\,.\,e^{j(2\pi/\lambda)d(N-1)\cos(\theta)}]^{T},$ (7.2)

where $d$ is the inter-element spacing and $\alpha(t)$, $\beta_{l}(t)$ $\in\mathbb{C}$ are the complex amplitudes of the incoming baseband signals [54]. The additive Gaussian noise $\mathbf{n}(t)$ $\in\mathbb{C}^{N}$ has variance $\sigma_{n}^{2}$. The received signal vector $\mathbf{x}(t)$ is combined linearly by the $N$-sensor beamformer that strives to maximize the output SINR. The output signal $y(t)$ of the optimum beamformer for maximum SINR is given by [25],

$y(t)=\mathbf{w}_{o}^{H}\mathbf{x}(t),$ (7.3)

where $\mathbf{w}_{o}$ is the solution of the optimization problem given by,

$\displaystyle\underset{\mathbf{w}\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s^{{}^{\prime}}}\mathbf{w},$ (7.4) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}=1.$

For statistically independent signals, the desired source correlation matrix is $\mathbf{R}_{s}=\sigma^{2}\mathbf{s}(\theta)\mathbf{s}^{H}(\theta)$, where $\sigma^{2}=E\\{\alpha(t)\alpha^{H}(t)\\}$. Likewise, the interference and noise correlation matrix is $\mathbf{R}_{s^{{}^{\prime}}}=\sum_{l=1}^{L}\sigma^{2}_{l}\mathbf{v}(\theta_{l})\mathbf{v}^{H}(\theta_{l})+\sigma_{n}^{2}\mathbf{I}_{N\times N}$, with $\sigma^{2}_{l}=E\\{\beta_{l}(t)\beta_{l}^{H}(t)\\}$ being the power of the $l$th interfering source. The problem in (7.4) can be written equivalently by replacing $\mathbf{R}_{s^{{}^{\prime}}}$ with the received data covariance matrix $\mathbf{R_{xx}}=\mathbf{R}_{s}+\mathbf{R}_{s^{{}^{\prime}}}$, as follows [25],

$\displaystyle\underset{\mathbf{w}\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w},$ (7.5) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1.$

Figure 7.3: Overview of the proposed approach using Deep Neural Network (DNN)

It is noted that the equality constraint in (7.4) is relaxed to an inequality in (7.5); because the desired source term is now included in the objective function through $\mathbf{R_{xx}}$, the solution of (7.5) converges to the equality constraint. Additionally, the optimal solution in (7.5) is invariant to uncertainty in the absolute power of the source of interest. However, in practice, these assumptions can deviate from the actual received data statistics, and hence the discrepancy is typically mitigated, to an extent, by preprocessing the received data correlation matrix through diagonal loading or by tapering the correlation matrix [47]. A closed-form solution of the above optimization problem exists and is given by $\mathbf{w}_{o}=\mathscr{P}\\{\mathbf{R}_{s^{{}^{\prime}}}^{-1}\mathbf{R}_{s}\\}=\mathscr{P}\\{\mathbf{R_{xx}}^{-1}\mathbf{R}_{s}\\}$. The operator $\mathscr{P}\\{.\\}$ computes the principal eigenvector of the input matrix.
Substituting $\mathbf{w}_{o}$ into (7.3) yields the corresponding optimum output SINR$_{o}$,

$\text{SINR}_{o}=\frac{\mathbf{w}_{o}^{H}\mathbf{R}_{s}\mathbf{w}_{o}}{\mathbf{w}_{o}^{H}\mathbf{R}_{s^{{}^{\prime}}}\mathbf{w}_{o}}=\Lambda_{max}\\{\mathbf{R}^{-1}_{s^{{}^{\prime}}}\mathbf{R}_{s}\\}.$ (7.6)

This shows that the optimum output SINR$_{o}$ is given by the maximum eigenvalue ($\Lambda_{max}$) of the product of the inverse of the interference-plus-noise correlation matrix and the desired source correlation matrix. Therefore, the performance of the optimum beamformer for maximizing the output SINR is directly related to the desired and interference-plus-noise correlation matrices.
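As a small illustration of Eq. (7.6) and the closed-form solution above, the following numpy/scipy sketch computes the optimum weights as the principal generalized eigenvector of the pair $(\mathbf{R}_{s},\mathbf{R_{xx}})$ and evaluates the resulting output SINR; the function name is illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def maxsinr_beamformer(Rxx, Rs):
    """Optimum weights w_o = P{Rxx^{-1} Rs} and the output SINR of Eq. (7.6).

    Rxx: received data covariance; Rs: desired-source covariance.
    eigh solves the Hermitian generalized problem Rs w = lambda Rxx w,
    whose top eigenvector is also the principal eigenvector of Rxx^{-1} Rs.
    """
    _, vecs = eigh(Rs, Rxx)
    w = vecs[:, -1]              # eigenvector of the largest eigenvalue
    Rsp = Rxx - Rs               # interference-plus-noise covariance
    sinr = np.real(w.conj() @ Rs @ w) / np.real(w.conj() @ Rsp @ w)
    return w, sinr
```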
### 7.3 Optimum sparse array design

The constrained optimization in (7.5) can be reformulated for optimum sparse array design by incorporating an additional constraint on the cardinality of the weight vector,

$\displaystyle\underset{\mathbf{w}\in\mathbb{C}^{N}}{\text{minimize}}$ $\displaystyle\quad\mathbf{w}^{H}\mathbf{R_{xx}}\mathbf{w},$ (7.7) s.t. $\displaystyle\quad\mathbf{w}^{H}\mathbf{R}_{s}\mathbf{w}\geq 1,$ $\displaystyle\quad||\mathbf{w}||_{0}=P.$

Here, $||.||_{0}$ denotes the cardinality of the weight vector $\mathbf{w}$. This is a combinatorial optimization problem and can be solved optimally by enumerating over all possible locations. Several different approaches have been developed to mitigate the computational expense of the combinatorial search, either by exploiting the information of the interference parameters or by employing data-dependent approaches realized through the SDR and SCA algorithms. These algorithms have high computational costs, impeding real-time implementations, especially in applications involving rapidly changing environments.

In the context of designing sparse arrays using machine learning, the optimum sparse arrays for different operating environments constitute labels for the network training data. These labels can be generated offline through enumeration. Although this procedure is offline, accounting for all possible source and interference scenarios leads to a formidable computational problem, rendering solutions for large arrays and a large number of training samples infeasible. In order to simplify the problem, we propose a sparse beamformer spectral analysis (SBSA) design technique to efficiently train the DNN. This technique, detailed below, has improved performance and lower computational complexity as compared to the state-of-the-art. The subsequent subsection presents the implementation of the SBSA and enumerated design algorithms using the DNN approach; this further reduces the implementation time and also paves the way to data-dependent design (refer to Fig. 7.3 for an overview).

#### 7.3.1 Sparse beamformer spectral analysis (SBSA) design

Figure 7.4: Eight element sparse array configuration
Figure 7.5: Lag redundancy of the sparse array shown in Fig. 7.4
Figure 7.6: DFT of the lag redundancy
Figure 7.7: Power spectrum of the desired signal
Figure 7.8: Explanation of the proposed objective criterion for the optimum array configuration shown in Fig. 7.4
Figure 7.9: Explanation of the proposed objective criterion for the worst possible array configuration
Figure 7.10: Plot of the proposed objective criterion in ascending order

As shown in Eq. 7.6, the MaxSINR design depends on the beamforming weights as well as on the sparse array configuration. Therefore, the problem of interference mitigation is not a cascade design but an entwined task, calling for the simultaneous optimization of the beamforming weights and the sparse array configuration. We adopt a frequency domain approach offering a unique insight into the problem and addressing the suitability of a given sparse configuration, irrespective of the beamforming weights, in canceling the interfering signals. The problem formulation developed in the previous section is valid irrespective of the array configuration and holds true for the compact ULA or any generic sparse configuration. The beamformer output signal, $y(t)={\mathbf{w}}^{H}\mathbf{x}$, for a sparse beamformer can be rewritten as $\overset{\circ}{y}(t)=\overset{\circ}{\mathbf{w}}^{H}\mathbf{x}$. The typeset ‘${{\circ}}$’ indicates that the corresponding vector is sparse, i.e., contains zero entries. The sparse beamformer output $\overset{\circ}{y}(t)$ can also be rewritten, equivalently, as $\overset{\circ}{y}(t)=\overset{\circ}{\mathbf{w}}^{H}\\{\overset{\circ}{\mathbf{z}}\odot\mathbf{x}\\}$. Here, the point-wise multiplication ($\odot$) of the received vector $\mathbf{x}$ with a sparse selection vector $\overset{\circ}{\mathbf{z}}$ sets to zero the entries of the received signal corresponding to the zero beamforming weights of $\overset{\circ}{\mathbf{w}}$. The entries of $\overset{\circ}{\mathbf{z}}\in\mathbb{R}^{N}$ are either 1's or 0's, depending on whether the corresponding sensor location is active or inactive, respectively. The above formulation shows that the sparse beamforming filter $\overset{\circ}{\mathbf{w}}$ can also be viewed as pre-processing the received signal by point-wise multiplication prior to applying the beamforming filter. We analyze how the point-wise multiplication of the received signal with the selection vector can either help or hinder the performance of the subsequent beamforming filter $\overset{\circ}{\mathbf{w}}$. Denote the DFTs of the input data and the selection vector as $\mathbf{X}=\mathscr{F}(\mathbf{x})$ and $\mathbf{Z}=\mathscr{F}(\overset{\circ}{\mathbf{z}})$, respectively. Then, the DFT of the pointwise multiplication of these two vectors is given by the circular convolution of the corresponding DFTs, $\mathscr{F}(\overset{\circ}{\mathbf{z}}\odot\mathbf{x})=\mathbf{X}\circledast\mathbf{Z}$, where $\circledast$ denotes the circular convolution. Furthermore, because the received data vector can be written as $\mathbf{x}=\alpha\mathbf{s}(\theta)+\sum_{l=1}^{L}\beta_{l}\mathbf{v}(\theta_{l})+\mathbf{n}$, then by invoking the linearity of the DFT, we obtain $\mathscr{F}(\overset{\circ}{\mathbf{z}}\odot\mathbf{x})=\alpha\mathbf{S}\circledast\mathbf{Z}+\sum_{l=1}^{L}\beta_{l}\mathbf{V}_{l}\circledast\mathbf{Z}+\mathbf{N}\circledast\mathbf{Z}$. For an effective sparse beamformer, it is desirable to minimize the overlap between the spatial spectrum of the desired source and the spectra of the interfering signals. This action is poised to aid the subsequent beamformer filtering to effectively suppress the unwanted signals. We propose a design metric $\Omega(\mathbf{Z})$ based on the weighted sum of the spatial spectra of the individual interfering signals, scaled by the desired signal spatial spectrum, as follows,

$\Omega(\mathbf{Z})=\sum_{l=1}^{L}\\{\alpha^{2}|\mathbf{S}|^{2}\circledast|\mathbf{Z}|^{2}\\}\odot\\{\beta_{l}^{2}|\mathbf{V}_{l}|^{2}\circledast|\mathbf{Z}|^{2}\\}$ (7.8)

It is clear that Eq. 7.8 performs element-wise scaling of the interfering powers in the DFT domain. Therefore, if the interfering signal power, after convolution with the selection filter spectrum, is concentrated in DFT bins different from those occupied primarily by the desired signal, then the pointwise product is low. Conversely, if there is a significant overlap between the results of the two convolutions, then the objective function $\Omega(\mathbf{Z})$ is significantly higher. The spatial spectrum can be estimated by computing the DFT of the autocorrelation function of the corresponding signal. It is worth noting that for a given sparse configuration, the autocorrelation function of the selection vector is given by the corresponding redundancy of the autocorrelation lags. Therefore, unlike the structured sparse array design, which seeks to maximize the contiguous correlation lags, the MaxSINR sparse design is directly linked to the DFT of the lag redundancy.
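The metric lends itself to a compact FFT implementation. Below is a minimal numpy sketch of Eq. (7.8), estimating each spatial spectrum as the squared magnitude of a zero-padded DFT and summing the bin-wise overlap into a scalar score; the scalar reduction over DFT bins and the helper names are assumptions made for illustration. The greedy wrapper anticipates the successive sensor-selection procedure summarized later in Table 7.1.

```python
import numpy as np

def circ_conv(u, v):
    """Circular convolution of two equal-length real sequences via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

def sbsa_metric(z, a_s, a_int, p_s, p_int, nfft):
    """Spectral-overlap score of Eq. (7.8) for a 0/1 selection vector z.

    a_s: desired steering vector; a_int: list of interference steering
    vectors; p_s, p_int: corresponding powers (alpha^2 and beta_l^2).
    """
    Z2 = np.abs(np.fft.fft(z, nfft)) ** 2
    desired = circ_conv(p_s * np.abs(np.fft.fft(a_s, nfft)) ** 2, Z2)
    score = 0.0
    for a_l, p_l in zip(a_int, p_int):
        V2 = p_l * np.abs(np.fft.fft(a_l, nfft)) ** 2
        score += np.sum(desired * circ_conv(V2, Z2))  # bin-wise overlap
    return score

def sbsa_greedy(N, P, a_s, a_int, p_s, p_int, nfft=None, seed_idx=0):
    """Greedy selection minimizing the overlap score, one sensor at a time.

    Initialization and tie-breaking are illustrative assumptions.
    """
    nfft = nfft or 2 * N
    selected = [seed_idx]                 # arbitrary initial sensor location
    for _ in range(P - 1):
        best, best_val = None, np.inf
        for k in range(N):
            if k in selected:
                continue
            z = np.zeros(N)
            z[selected + [k]] = 1.0
            val = sbsa_metric(z, a_s, a_int, p_s, p_int, nfft)
            if val < best_val:            # keep the location with least overlap
                best, best_val = k, val
        selected.append(best)
    return sorted(selected)
```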
We illustrate the proposed approach with the help of the following example. Consider an 8-element sparse array on a grid of 24 equally spaced locations that are potentially available for sensor selection. The minimum spacing among the sensors is $d=\lambda/2$. Consider a source signal located at $60^{\circ}$ and four unwanted interferences located at $154^{\circ}$, $55^{\circ}$, $117^{\circ}$ and $50^{\circ}$, with INRs ranging from $10$ to $20$ dB. The sparse array configuration achieving the best performance is found through enumeration and is shown in Fig. 7.4. The associated correlation lag redundancy of this configuration and the corresponding spatial spectrum are depicted in Figs. 7.5 and 7.6, respectively. The spatial spectrum of the desired source at $60^{\circ}$ is depicted in Fig. 7.7. The resultant normalized spectrum of the two convolved spectra in Figs. 7.6 and 7.7 is shown as a solid (blue) line in Fig. 7.8. The normalized spectrum for each interfering signal is shown as dotted lines in the same figure. Fig. 7.9 plots the normalized spectra for the worst-case sparse array scenario for comparison purposes. Note that the maximum of the convolved desired signal spectrum is at the 35th DFT bin. For the best-case scenario, all the convolved interfering signals assume minimum power at the aforementioned DFT position. This is in contrast to the worst-case scenario depicted in Fig. (7.9), where the convolved interfering signals have considerable power at the maximum of the desired source. Apart from the maximum location, it is noted that for the best-case design there is minimal overlap between the desired signal and the interfering signals across the DFT bins, clearly in contrast to the worst-case design.

In order to further understand the effectiveness of the proposed approach, Fig. (7.10) plots the SINR performance of all possible sparse configurations after sorting the array topologies in ascending order of the output of the proposed objective function. It is clear that the average SINR in the plot is higher, and more desirable, towards the left side, where the objective function is minimal, vis-a-vis the right side, where the objective function is high. It is also noted that the best enumerated MaxSINR result does not correspond to the array configuration with the smallest objective function. This is because the optimum sparse configuration also depends on the beamformer weights to minimize the interfering signals. This is also clear from the high variance of the curve.
Similarly, the worst-performing array is very close to the right side of the curve, where the proposed objective function is high due to the significant overlap of the desired and undesired signal spectra. The significantly reduced performance towards the right side of the plot can now be explained intuitively in terms of the inherent inability of these configurations to mitigate the significant overlap between the desired and interfering signals, an artifact of the sparse beamformer explained in the frequency domain. Therefore, in light of the above discussion, it becomes prudent to seek sensor arrangements that minimize the proposed objective function in an attempt to find a more suitable sparse configuration for MaxSINR design.

Towards this end, we propose an iterative algorithm that minimizes the proposed objective function by successive sensor selection, hence deciding on one sensor location at a time. For the initial iteration, a sensor location is chosen randomly among the $N$ grid points. For each subsequent iteration, the proposed objective is evaluated at the remaining candidate locations, and the sensor location that yields the minimum objective function is selected. The procedure is iterated until the $P$ locations are selected from the $N$ possible locations. Due to the high variance of the curve in the above figure, it is best to initialize the algorithm with different sensor locations or DFT sizes, find the corresponding configuration for each initialization, and eventually select the one with the best SINR performance. The steps of the algorithm are detailed in Table 7.1.

Table 7.1: SBSA Algorithm

Input: $N$, $P$, look direction DOA $\theta$, interference DOAs, SNR and INRs.
Output: Sparse beamformer $\mathbf{w}_{o}$.
Initialize $\mathbf{z}=[0\,\,1\,\,0\,...\,0]$, where all entries of $\mathbf{z}$ are zero except one arbitrarily selected entry.
Compute the spatial spectra of the desired source and the interfering signals.
For ($j=1$ to $P-1$)
  For ($i=1$ to $N-j$)
    Select the $i$th sensor from the $N-j$ remaining locations.
    Compute the lag redundancy of this sparse array consisting of $j+1$ sensors.
    Compute the spatial spectrum of the $(j+1)$-sensor sparse array.
    Convolve the spatial spectrum of the $(j+1)$-sensor sparse array with the spectra of the desired source and the interfering sources.
    Compute the overlapping power in the spatial spectra through the proposed metric in Eq. 7.8.
  EndFor
  Select the sensor from the inner loop that results in the minimum overlapping power computed by Eq. 7.8.
  Update $\mathbf{z}$ by setting the selected location in $\mathbf{z}$ to 1.
EndFor
After finding the sparse configuration, find $\mathbf{w}_{o}$ by solving Eq. 7.5 for the reduced-size correlation matrix, ignoring the sensor locations corresponding to the zero entries of $\mathbf{z}$.

#### 7.3.2 DNN based learning of the SBSA and enumerated designs

Modeling the behaviour of optimization algorithms by training a DNN is justifiable, as DNNs are universal approximators of arbitrary continuous functions and have the required capacity to accurately map almost any input/output relationship. For effective learning, the DNN should generalize to the broader class that is represented by the finite number of drawn training examples. From the Capon beamforming perspective, it is instructive to realize that a given arrangement of a desired source direction, interference DOAs and respective SNR/INRs constitutes one particular example.
The class, in this case, would constitute any arbitrary permutation of the interference DOAs and respective powers while keeping the desired source DOA fixed. The DNN task is, therefore, to learn from a dataset characterized by a set of different training examples and the corresponding optimum sparse array predictions. An important question arises as to whether we could aim for a stronger notion of generalization, i.e., instead of training for a fixed desired source, whether it is possible to generalize over all possible desired source DOAs. To answer this query, we note that for a given desired source and interference setting, the received signal remains the same even if one of the interfering signals is assumed to be the desired signal and the desired signal is treated as an interference. Although the received signal is identical in both scenarios, the Capon beamformer task is flipped and is now required to point in an entirely different direction. In so doing, the corresponding optimum sparse configuration and beamformer weights could be very different from before. Therefore, instead of relying entirely on the information in the received data correlation, it is imperative to incorporate knowledge of the desired source or look direction, which is always assumed in the Capon beamforming formulation. For DNN learning, this information can be incorporated either by exclusively training the DNN for each desired source DOA or by including the desired source DOA as an additional input feature to the DNN. In this chapter, we adopt the former approach.

Our proposed approach uses a fully connected neural network with an input layer of size $2N-1$ and an output layer of size $N$. Although there are $N$ unique correlation lags corresponding to the $N$ sensor locations on the grid, the dimensionality of the input layer is $2N-1$. This is due to concatenating the real and imaginary entries of the generally complex-valued correlation lags, the zeroth lag being real-valued. We have 3 hidden layers with 450, 250 and 80 nodes, respectively. The ReLU activation function is used for all hidden layer activations. The network takes as input the correlation values of the received data, assuming a stationary environment. The network output is a binary vector such that 1 indicates sensor selection and 0 indicates the absence of the corresponding sensor location. For the scope of this chapter, we assume that we have an estimate of all the filled co-array correlation lags corresponding to the correlation matrix of the full-aperture array.

The received data is generated in the following manner. For a given desired source location, the $i$th training realization is simulated by randomly selecting $L$ interfering signals from a DOA grid spanning the range of $10^{\circ}$ to $170^{\circ}$, with a grid spacing of $1^{\circ}$. The interfering jammers are allocated random powers uniformly distributed with INR from 10 dB to 20 dB. For this given scenario, the received correlation function, which includes the desired source signal, is calculated corresponding to the full sensor configuration. The corresponding optimum configuration is found through enumeration and also through the proposed SBSA algorithm. The process is repeated 30000 times for a given desired source DOA to complete the training data set. Similarly, a small-sized validation data set is generated for hyperparameter tuning (model selection, minibatch size and learning rate).

For the training stage, the weights of the neural network are optimized to minimize the mean squared error between the label and the output of the network. Parameter optimization is carried out using the ADAM algorithm [130], an efficient implementation of mini-batch stochastic gradient descent that improves learning as a function of the most recent gradient values. The learning rate is set to 0.001, and dropout regularization with a keep probability of 0.9 is used. The initial weights are set using Xavier initialization.
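A minimal PyTorch sketch of the network and training step just described follows, using the stated layer sizes ($2N-1 \to 450 \to 250 \to 80 \to N$), ReLU activations, Xavier initialization, MSE loss, ADAM with a learning rate of 0.001, and dropout with keep probability 0.9; the sigmoid output head and the placement of dropout after the first two hidden layers are assumptions, as the text does not specify them.

```python
import torch
import torch.nn as nn

N = 12  # grid size used in the enumerated-design example below

model = nn.Sequential(            # input 2N-1 -> 450 -> 250 -> 80 -> output N
    nn.Linear(2 * N - 1, 450), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(450, 250), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(250, 80), nn.ReLU(),
    nn.Linear(80, N), nn.Sigmoid(),   # assumed head for a 0/1 label vector
)
for m in model:
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)   # Xavier initialization

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                      # MSE between output and label

def train_step(x, y):
    """One mini-batch update; x: (batch, 2N-1) lags, y: (batch, N) labels."""
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# At test time, the P largest outputs are declared the selected sensors:
# sel = torch.topk(model(x_test), k=P, dim=-1).indices
```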
The robustness of the learned models is demonstrated by generating test data that differs from that of the training stage; it is generated by assuming the DOAs of the interfering signals to be off-grid. This is simulated by adding Gaussian noise to the interference DOAs on the grid. We also present the results under limited and unlimited data snapshots. Additionally, to ensure the selection of $P$ antenna locations at the output of the DNN, we declare the $P$ highest values in the output as the selected sensor locations. Therefore, generalization in this context means that the learned DNN works under different interference settings, which can change according to changing environmental conditions.

### 7.4 Simulations

In this section, we show the effectiveness of the proposed approach for sparse array design achieving MaxSINR. The results are examined first by training the DNN to learn the enumerated optimum array configurations. Then, we demonstrate in the follow-on examples the effectiveness of the DNN when trained with the labels drawn from the SBSA algorithm.

#### 7.4.1 Enumerated design

In this example, we pose the problem as selecting $P=6$ antennas from $N=12$ possible equally spaced locations with an inter-element spacing of $\lambda/2$. For all numerical results, we use a network with three hidden layers, one input layer, and one output layer. Accordingly, the input to the network is of size 23 and the output of size 12. Figure 7.11 shows the output SINR performance comparisons for different array configurations. The horizontal axis is the DOA of the desired point source, and the performance is computed at six different source DOAs varying from $15^{\circ}$ to $90^{\circ}$ in steps of $15^{\circ}$. The SNR of the desired signal is $0$ dB, and the interference-to-noise ratio (INR) for each interference is chosen randomly between $10$ and $20$ dB for a given realization. The results presented in Fig. 7.11 are obtained by using an unlimited number of data snapshots (USS) and employing enumerated labels to train the DNN. The network performance is reported by averaging over $900$ testing scenarios for each desired source DOA. It is evident that the DNN-EN design (prediction using the enumerated labels) performs close (a 0.45 dB trade-off) to the optimum array found by enumeration ($980$ possible configurations). The latter gives the highest SINR performance but involves an expensive singular value decomposition (SVD) for each enumeration and is also not scalable with the problem size, facing the curse of dimensionality. The DNN-EN design performance is also compared with the NNC (nearest neighbour correlation) design, which returns the label corresponding to the nearest-neighbour correlation function of the input (in terms of mean square error). The NNC design is simply a lookup table: for given test data, it returns the label of the closest training example by sifting through the entire training set.
It is noted that the DNN-EN design outperforms the NNC design with an average performance gain of 0.5 dB. The former approach not only offers superior performance but is also more economically viable from an 'edge computing' perspective compared to the nearest-neighbour design, which requires maintaining a large dictionary and running an exhaustive search over the entire training set for each prediction. Similar results are obtained for the NNC design obtained by minimizing the mean absolute error instead of the mean square error. For the underlying case, the DNN has around 88% accuracy on the training data and around 54% accuracy on the test data (accuracy here meaning that all sensor locations are correctly predicted). This implies that the superior performance of the DNN is not simply because it memorizes the optimum, but due to the ability of the DNN to generalize the learning to the test set, which the DNN has not seen before. The superior performance of the DNN over the NNC design again signifies that the DNN does not merely memorize a lookup table that locates the nearest training data point and outputs the corresponding memorized optimal sparse configuration. It is also clear that the proposed design yields significant gains over a compact ULA, a sparse ULA and a randomly selected array topology. The utility of the effective sparse design is also evident from the worst-case performance, which exhibits a significantly low output SINR of around -5 dB on average.

Figure 7.11: Performance comparison of the enumerated design under unlimited snapshots
Figure 7.12: Performance comparison of the SBSA design under unlimited snapshots

#### 7.4.2 DNN based SBSA design

Although the training phase is entirely offline, it is infeasible to train the DNN while relying on the enumerated results. This is because the number of possible sparse solutions can be considerably large even for a modest-size problem. For instance, choosing 14 elements out of 24 possible locations results in the order of $10^{6}$ candidate sparse configurations for each training datum, i.e., for each environment scenario. In order to circumvent this problem and generate a large amount of training data labels, we resort to the proposed SBSA design. Fig. 7.12 shows the performance of the SBSA design, which is merely 0.3 dB below the design obtained through enumeration. Quite similar to the DNN-EN design, the DNN-SBSA design is around 0.4 dB down from the SBSA design. However, this places the DNN-SBSA design, overall, around 0.8 dB suboptimal to the enumerated design. This is still a reasonable performance, yielding significant dividends over the commonly used compact ULA, sparse ULA and random sparse topology.

#### 7.4.3 Robust design

In order to gauge the robustness of the DNN-based scheme, the performance is evaluated under a limited number of data snapshots. Also, the desired source DOA is perturbed with Gaussian noise of $0.25^{\circ}$ variance in generating the test data, to account for possible uncertainty around the desired source DOA. For simulating the limited-snapshot scenario, 1000 snapshots are generated assuming the incoming signals (source and interfering signals) are independent BPSK signals in the presence of Gaussian noise. The correlation matrix under limited snapshots doesn't follow the Toeplitz structure. Therefore, we average along the diagonals of the correlation matrix to calculate the average correlation values. Figs. 7.13 and 7.14 show the performance of the DNN-EN and DNN-SBSA designs under the limited data snapshots.
It is clear from the figures that the performance is largely preserved, with an SINR discrepancy of less than 0.01 dB, demonstrating the robustness of the proposed scheme. The NNC design, in this case, is suboptimal with more than 0.3 dB of additional performance loss.

#### 7.4.4 Performance comparisons with the state-of-the-art

The performance of the proposed SBSA, DNN-EN and DNN-SBSA designs is compared with existing work on sparse array design based on SDR and SCA approaches [20, 71]. It is clear from Fig. 7.15 that the SBSA algorithm outperforms the other designs and is also more than 100 times more computationally efficient than the SDR and SCA (Wang et al. [20]) approaches. However, it is only fair to compare the SBSA design with the SCA (Wang et al.) approach, because both incorporate a priori knowledge of the interference parameters. Therefore, in comparing the data-dependent designs, it is found that the SDR design (also the SDR-symmetric design [71]) is comparable to the DNN-EN design, whereas the DNN-SBSA design is marginally suboptimal, with an average performance degradation of 0.37 dB. This slight performance trade-off is readily offset by the real-time implementation of the DNN-SBSA algorithm, which implements the Capon beamformer in time frames on the order of a few milliseconds.

### 7.5 Conclusion

This chapter considered sparse array design for maximizing the beamformer output SINR for a desired source in an interference-active scenario. A sparse beamformer spectral analysis (SBSA) algorithm was proposed which provided an insightful perspective for MaxSINR beamformer design. Also, a DNN-based approach was developed to configure a data-driven sparse beamformer by learning the enumerated algorithm as well as the SBSA design. The proposed methodology combined the merits of the data-dependent designs and the designs assuming prior information of interference parameters. It was shown through the design examples that the proposed schemes are robust in the case of limited data snapshots and show promising performance with reduced computational complexity.

Figure 7.13: Performance comparison of the enumerated design under 1000 snapshots

Figure 7.14: Performance comparison of the SBSA design under 1000 snapshots

Figure 7.15: Performance comparisons with the state-of-the-art

## Chapter 8 CONCLUSIONS AND RECOMMENDATIONS

In this work, sparse array design was proposed to improve active and passive sensing for radar and communication applications. Novel algorithms for sparse beamformer design were presented for both receive and transmit signal platforms. The proposed approaches were analyzed analytically as well as with the help of realistically simulated data. The thesis addressed the problem that the optimization of the array configuration requires the full data correlation matrix, which is not readily available in practice. Two different design approaches were considered; one assumed prefixed positions of a subset of sensors so as to provide full array augmentation, referred to as the hybrid-design approach, whereas the other used the Toeplitz estimation of the autocorrelation at the missing spatial lags. We also considered maximizing the beamformer output SINR for the case of wideband signal models. Two different implementations, namely the TDL and the DFT implementation schemes, were presented for the wideband sparse array design. Optimizing the sparse array configuration in the DFT domain and later importing it into the TDL implementation scheme was proposed, yielding promising results.
In all cases, the MaxSINR optimum sparse array delivered considerable dividends over other suboptimal sparse arrays and the compact uniform array. A frequency-domain sparse array algorithm was proposed which provided an insightful perspective for MaxSINR beamformer design. Also, a DNN-based approach was developed to configure a data-driven sparse beamformer by learning various sparse design algorithms, enabling real-time implementation. We also considered active sensing using sparse array configurations and sought optimum transmit beamforming for radar applications. It was shown that the active sparse array design can achieve desirable beampattern characteristics such as directing a high proportion of the transmitted power towards the prospective target locations with minimal cross-correlation among target signals. Future work will use the findings of this thesis to address some of the practical issues for sparse beamformer design, such as evaluating the performance on real data and incorporating physical constraints such as antenna radiation patterns and mutual coupling effects [131, 132].

## References

* [1] G. R. Lockwood, J. R. Talman, and S. S. Brunke, “Real-time 3-D ultrasound imaging using sparse synthetic aperture beamforming,” _IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control_ , vol. 45, no. 4, pp. 980–988, July 1998. * [2] O. Mehanna, N. D. Sidiropoulos, and G. B. Giannakis, “Joint multicast beamforming and antenna selection,” _IEEE Transactions on Signal Processing_ , vol. 61, no. 10, pp. 2660–2674, May 2013. * [3] Y. He and K. P. Chong, “Sensor scheduling for target tracking in sensor networks,” in _2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No.04CH37601)_ , vol. 1, Dec 2004, pp. 743–748, Vol. 1. * [4] W. V. Cappellen, S. J. Wijnholds, and J. D. Bregman, “Sparse antenna array configurations in large aperture synthesis radio telescopes,” in _2006 European Radar Conference_ , Sept 2006, pp. 76–79. * [5] S. Joshi and S. Boyd, “Sensor selection via convex optimization,” _IEEE Transactions on Signal Processing_ , vol. 57, no. 2, pp. 451–462, Feb 2009. * [6] H. Godrich, A. P. Petropulu, and H. V. Poor, “Sensor selection in distributed multiple-radar architectures for localization: A knapsack problem formulation,” _IEEE Transactions on Signal Processing_ , vol. 60, no. 1, pp. 247–260, Jan 2012. * [7] A. Moffet, “Minimum-redundancy linear arrays,” _IEEE Transactions on Antennas and Propagation_ , vol. 16, no. 2, pp. 172–175, March 1968. * [8] P. Pal and P. P. Vaidyanathan, “Nested arrays: A novel approach to array processing with enhanced degrees of freedom,” _IEEE Transactions on Signal Processing_ , vol. 58, no. 8, pp. 4167–4181, Aug. 2010. * [9] ——, “Coprime sampling and the MUSIC algorithm,” in _2011 Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE)_ , Jan. 2011, pp. 289–294. * [10] S. Qin, Y. D. Zhang, and M. G. Amin, “Generalized coprime array configurations for direction-of-arrival estimation,” _IEEE Transactions on Signal Processing_ , vol. 63, no. 6, pp. 1377–1390, March 2015. * [11] X. Wang, E. Aboutanios, M. Trinkle, and M. G. Amin, “Reconfigurable adaptive array beamforming by antenna selection,” _IEEE Transactions on Signal Processing_ , vol. 62, no. 9, pp. 2385–2396, May 2014. * [12] X. Wang, M. G. Amin, X. Wang, and X. Cao, “Sparse array quiescent beamformer design combining adaptive and deterministic constraints,” _IEEE Transactions on Antennas and Propagation_ , vol. PP, no. 99, pp. 1–1, 2017. * [13] N. D.
Sidiropoulos, T. N. Davidson, and Z.-Q. Luo, “Transmit beamforming for physical-layer multicasting,” _IEEE Transactions on Signal Processing_ , vol. 54, no. 6, pp. 2239–2251, June 2006. * [14] V. Roy, S. P. Chepuri, and G. Leus, “Sparsity-enforcing sensor selection for DOA estimation,” in _2013 5th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)_ , Dec. 2013, pp. 340–343. * [15] W. Roberts, L. Xu, J. Li, and P. Stoica, “Sparse antenna array design for MIMO active sensing applications,” _IEEE Transactions on Antennas and Propagation_ , vol. 59, no. 3, pp. 846–858, March 2011. * [16] G. G. Raleigh and V. K. Jones, “Multivariate modulation and coding for wireless communication,” _IEEE Journal on Selected Areas in Communications_ , vol. 17, no. 5, pp. 851–866, May 1999. * [17] H. Bolcskei, D. Gesbert, and A. J. Paulraj, “On the capacity of OFDM-based spatial multiplexing systems,” _IEEE Transactions on Communications_ , vol. 50, no. 2, pp. 225–234, Feb. 2002. * [18] P. Chavali and A. Nehorai, “Cognitive radar for target tracking in multipath scenarios,” in _2010 International Waveform Diversity and Design Conference_ , Aug. 2010, pp. 000110–000114. * [19] J. L. Krolik, J. Farrell, and A. Steinhardt, “Exploiting multipath propagation for GMTI in urban environments,” in _2006 IEEE Conference on Radar_ , April 2006, 4 pp. * [20] X. Wang, M. Amin, and X. Cao, “Analysis and design of optimum sparse array configurations for adaptive beamforming,” _IEEE Transactions on Signal Processing_ , vol. PP, no. 99, pp. 1–1, 2017. * [21] R. B. Ertel, P. Cardieri, K. W. Sowerby, T. S. Rappaport, and J. H. Reed, “Overview of spatial channel models for antenna array communication systems,” _IEEE Personal Communications_ , vol. 5, no. 1, pp. 10–22, Feb. 1998. * [22] G. Raleigh, S. N. Diggavi, A. F. Naguib, and A. Paulraj, “Characterization of fast fading vector channels for multi-antenna communication systems,” in _Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers_ , vol. 2, Oct. 1994, pp. 853–857, vol. 2. * [23] A. Abdi and M. Kaveh, “A space-time correlation model for multielement antenna systems in mobile fading channels,” _IEEE Journal on Selected Areas in Communications_ , vol. 20, no. 3, pp. 550–560, April 2002. * [24] A. Abdi, J. A. Barger, and M. Kaveh, “A parametric model for the distribution of the angle of arrival and the associated correlation function and power spectrum at the mobile station,” _IEEE Transactions on Vehicular Technology_ , vol. 51, no. 3, pp. 425–434, May 2002. * [25] S. Shahbazpanahi, A. B. Gershman, Z.-Q. Luo, and K. M. Wong, “Robust adaptive beamforming for general-rank signal models,” _IEEE Transactions on Signal Processing_ , vol. 51, no. 9, pp. 2257–2269, Sept. 2003. * [26] R. M. Gray, “Toeplitz and circulant matrices: A review,” _Foundations and Trends® in Communications and Information Theory_ , vol. 2, no. 3, pp. 155–239, 2006. * [27] W. C. Y. Lee, _Mobile Communications Engineering: Theory and Applications_. New York, NY, USA: McGraw-Hill, Inc., 1997. * [28] X. Wang, M. G. Amin, and X. Cao, “Optimum adaptive beamformer design with controlled quiescent pattern by antenna selection,” in _2017 IEEE Radar Conference (RadarConf)_ , May 2017, pp. 0749–0754. * [29] S. Boyd and L. Vandenberghe, _Convex Optimization_. New York, NY, USA: Cambridge University Press, 2004. * [30] S. P. Chepuri and G.
Leus, “Sparsity-promoting sensor selection for non-linear measurement models,” _IEEE Transactions on Signal Processing_ , vol. 63, no. 3, pp. 684–698, Feb 2015. * [31] E. BouDaher, Y. Jia, F. Ahmad, and M. G. Amin, “Multi-frequency co-prime arrays for high-resolution direction-of-arrival estimation,” _IEEE Transactions on Signal Processing_ , vol. 63, no. 14, pp. 3797–3808, July 2015. * [32] H. Unz, “Linear arrays with arbitrarily distributed elements,” _IRE Transactions on Antennas and Propagation_ , vol. 8, no. 2, pp. 222–223, March 1960. * [33] R. Harrington, “Sidelobe reduction by nonuniform element spacing,” _IRE Transactions on Antennas and Propagation_ , vol. 9, no. 2, pp. 187–192, March 1961. * [34] A. Maffett, “Array factors with nonuniform spacing parameter,” _IRE Transactions on Antennas and Propagation_ , vol. 10, no. 2, pp. 131–136, March 1962. * [35] Y. Lo and S. Lee, “A study of space-tapered arrays,” _IEEE Transactions on Antennas and Propagation_ , vol. 14, no. 1, pp. 22–30, 1966. * [36] P. Jarske, T. Saramaki, S. K. Mitra, and Y. Neuvo, “On properties and design of nonuniformly spaced linear arrays (antennas),” _IEEE Transactions on Acoustics, Speech, and Signal Processing_ , vol. 36, no. 3, pp. 372–380, March 1988. * [37] R. M. Leahy and B. D. Jeffs, “On the design of maximally sparse beamforming arrays,” _IEEE Transactions on Antennas and Propagation_ , vol. 39, no. 8, pp. 1178–1187, Aug 1991. * [38] D. Caratelli and M. C. Viganó, “Analytical synthesis technique for linear uniform-amplitude sparse arrays,” _Radio Science_ , vol. 46, no. 4, 2011. * [39] R. L. Haupt, “Thinned arrays using genetic algorithms,” _IEEE Transactions on Antennas and Propagation_ , vol. 42, no. 7, pp. 993–999, July 1994. * [40] A. Trucco and V. Murino, “Stochastic optimization of linear sparse arrays,” _IEEE Journal of Oceanic Engineering_ , vol. 24, no. 3, pp. 291–299, July 1999. * [41] B. Fuchs, “Application of convex relaxation to array synthesis problems,” _IEEE Transactions on Antennas and Propagation_ , vol. 62, no. 2, pp. 634–640, Feb. 2014. * [42] S. Eng Nai, W. Ser, Z. Liang Yu, and H. Chen, “Beampattern synthesis for linear and planar arrays with antenna selection by convex optimization,” _IEEE Transactions on Antennas and Propagation_ , vol. 58, pp. 3923–3930, 2011. * [43] G. Prisco and M. D’Urso, “Maximally sparse arrays via sequential convex optimizations,” _IEEE Antennas and Wireless Propagation Letters_ , vol. 11, pp. 192–195, 2012. * [44] M. B. Hawes and W. Liu, “Sparse array design for wideband beamforming with reduced complexity in tapped delay-lines,” _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 22, no. 8, pp. 1236–1247, Aug 2014. * [45] A. Goldsmith, _Wireless Communications_. New York, NY, USA: Cambridge University Press, 2005. * [46] H. L. V. Trees, _Detection, Estimation, and Modulation Theory: Radar-Sonar Signal Processing and Gaussian Signals in Noise_. Melbourne, FL, USA: Krieger Publishing Co., Inc., 1992. * [47] J. Li, P. Stoica, and Z. Wang, “On robust Capon beamforming and diagonal loading,” _IEEE Transactions on Signal Processing_ , vol. 51, no. 7, pp. 1702–1715, July 2003. * [48] X. Wang and M. Amin, “Design of optimum sparse array for robust MVDR beamforming against DOA mismatch,” in _2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)_ , Dec 2017, pp. 1–5. * [49] Y. I. Abramovich, D. A. Gray, A. Y. Gorokhov, and N. K.
Spencer, “Positive-definite Toeplitz completion in DOA estimation for nonuniform linear antenna arrays. I. Fully augmentable arrays,” _IEEE Transactions on Signal Processing_ , vol. 46, no. 9, pp. 2458–2471, Sep 1998. * [50] Y. Abramovich, N. Spencer, and A. Gorokhov, “Positive-definite Toeplitz completion in DOA estimation for nonuniform linear antenna arrays. II. Partially augmentable arrays,” _IEEE Transactions on Signal Processing_ , vol. 47, no. 6, pp. 1502–1521, Jun. 1999. * [51] A. Bertrand and M. Moonen, “Efficient sensor subset selection and link failure response for linear MMSE signal estimation in wireless sensor networks,” in _2010 18th European Signal Processing Conference_ , Aug 2010, pp. 1092–1096. * [52] J. Szurley, A. Bertrand, M. Moonen, P. Ruckebusch, and I. Moerman, “Energy aware greedy subset selection for speech enhancement in wireless acoustic sensor networks,” in _2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO)_ , Aug 2012, pp. 789–793. * [53] J. Zhang, S. P. Chepuri, R. C. Hendriks, and R. Heusdens, “Microphone subset selection for MVDR beamformer based noise reduction,” _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 26, no. 3, pp. 550–563, March 2018. * [54] P. Stoica and R. L. Moses, _Introduction to spectral analysis_. Upper Saddle River, NJ, USA: Prentice Hall, 1997. * [55] D. L. Donoho, “For most large underdetermined systems of linear equations the minimal $l_{1}$-norm solution is also the sparsest solution,” _Communications on Pure and Applied Mathematics_ , vol. 59, no. 6, pp. 797–829, 2006. * [56] A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” _SIAM Rev._ , vol. 51, no. 1, pp. 34–81, Feb. 2009. [Online]. Available: http://dx.doi.org/10.1137/060657704 * [57] A. Y. Yang, S. S. Sastry, A. Ganesh, and Y. Ma, “Fast $l_{1}$-minimization algorithms and an application in robust face recognition: A review,” in _2010 IEEE International Conference on Image Processing_ , Sep. 2010, pp. 1849–1852. * [58] E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted $l_{1}$ minimization,” _Journal of Fourier Analysis and Applications_ , vol. 14, no. 5, pp. 877–905, Dec. 2008. * [59] M. Bengtsson and B. Ottersten, “Optimal downlink beamforming using semidefinite optimization,” 1999. * [60] S. A. Hamza, M. G. Amin, and G. Fabrizio, “Optimum sparse array beamforming for general rank signal models,” in _2018 IEEE Radar Conference (RadarConf18)_ , April 2018, pp. 1343–1347. * [61] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang, “Semidefinite relaxation of quadratic optimization problems,” _IEEE Signal Processing Magazine_ , vol. 27, no. 3, pp. 20–34, May 2010. * [62] B. Recht, M. Fazel, and P. A. Parrilo, “Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization,” _SIAM Rev._ , vol. 52, no. 3, pp. 471–501, Aug. 2010. * [63] K. Mohan and M. Fazel, “Iterative reweighted algorithms for matrix rank minimization,” _J. Mach. Learn. Res._ , vol. 13, no. 1, pp. 3441–3473, Nov. 2012. * [64] R. Rajamäki and V. Koivunen, “Symmetric sparse linear array for active imaging,” in _2018 IEEE 10th Sensor Array and Multichannel Signal Processing Workshop (SAM)_ , July 2018, pp. 46–50. * [65] P. Pal and P. P. Vaidyanathan, “Nested arrays in two dimensions, Part 1: Geometrical considerations,” _IEEE Transactions on Signal Processing_ , vol. 60, no. 9, pp. 4694–4705, Sept 2012. * [66] K.-C. Huarng and C.-C.
Teh, “Adaptive beamforming with conjugate symmetric weights,” _IEEE Transactions on Antennas and Propagation_ , vol. 39, no. 7, pp. 926–932, July 1991. * [67] A. Ahmed, Y. D. Zhang, and J. Zhang, “Coprime array design with minimum lag redundancy,” in _ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , May 2019, pp. 4125–4129. * [68] S. A. Hamza and M. G. Amin, “Two dimensional sparse array design for wideband signals,” in _Big Data: Learning, Analytics, and Applications_ , vol. 10989, International Society for Optics and Photonics. SPIE, 2019, pp. 135–142. * [69] S. A. Hamza and M. G. Amin, “Sparse array DFT beamformers for wideband sources,” in _2019 IEEE Radar Conference (RadarConf)_ , 2019, pp. 1–5. * [70] ——, “Optimum sparse array receive beamforming for wideband signal model,” in _2018 52nd Asilomar Conference on Signals, Systems, and Computers_ , Oct 2018, pp. 89–93. * [71] ——, “Hybrid sparse array beamforming design for general rank signal models,” _IEEE Transactions on Signal Processing_ , vol. 67, no. 24, pp. 6215–6226, Dec 2019. * [72] ——, “Hybrid sparse array design for under-determined models,” in _ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , May 2019, pp. 4180–4184. * [73] ——, “Sparse array design utilizing matrix completion,” in _2019 Asilomar Conference on Signals, Systems, and Computers_ , Nov 2019. * [74] M. S. Ibrahim, A. Konar, M. Hong, and N. D. Sidiropoulos, “Mirror-prox SCA algorithm for multicast beamforming and antenna selection,” in _2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)_ , June 2018, pp. 1–5. * [75] C. Zhou, Y. Gu, Z. Shi, and Y. D. Zhang, “Off-grid direction-of-arrival estimation using coprime array interpolation,” _IEEE Signal Processing Letters_ , vol. 25, no. 11, pp. 1710–1714, Nov 2018. * [76] C. Zhou, Y. Gu, X. Fan, Z. Shi, G. Mao, and Y. D. Zhang, “Direction-of-arrival estimation for coprime array via virtual array interpolation,” _IEEE Transactions on Signal Processing_ , vol. 66, no. 22, pp. 5956–5971, Nov 2018. * [77] C. Liu, P. P. Vaidyanathan, and P. Pal, “Coprime coarray interpolation for DOA estimation via nuclear norm minimization,” in _2016 IEEE International Symposium on Circuits and Systems (ISCAS)_ , May 2016, pp. 2639–2642. * [78] S. M. Hosseini and M. A. Sebt, “Array interpolation using covariance matrix completion of minimum-size virtual array,” _IEEE Signal Processing Letters_ , vol. 24, no. 7, pp. 1063–1067, July 2017. * [79] H. Qiao and P. Pal, “Gridless line spectrum estimation and low-rank Toeplitz matrix compression using structured samplers: A regularization-free approach,” _IEEE Transactions on Signal Processing_ , vol. 65, no. 9, pp. 2221–2236, May 2017. * [80] L. Yang and G. B. Giannakis, “Ultra-wideband communications: an idea whose time has come,” _IEEE Signal Processing Magazine_ , vol. 21, no. 6, pp. 26–54, Nov 2004. * [81] M. S. Brandstein and D. B. Ward, “Microphone arrays - signal processing techniques and applications,” in _Microphone Arrays_ , 2001. * [82] C. Paulson, J. Chang, C. Romero, J. Watson, F. Pearce, and N. Levin, “Ultra-wideband radar methods and techniques of medical sensing and imaging,” in _Proceedings of SPIE - The International Society for Optical Engineering_ , B. Cullum and J. Carter, Eds., vol. 6007, 2005. * [83] E.
Pancera, “Medical applications of the ultra wideband technology,” in _2010 Loughborough Antennas Propagation Conference_ , Nov 2010, pp. 52–56. * [84] E. D. Di Claudio and R. Parisi, _Robust Wideband Beamforming_. John Wiley & Sons, Ltd, 2005, ch. 7, pp. 353–415. * [85] W. Liu and S. Weiss, _Wideband Beamforming: Concepts and Techniques_. John Wiley & Sons, 2010. * [86] K. Buckley, “Spatial/spectral filtering with linearly constrained minimum variance beamformers,” _IEEE Transactions on Acoustics, Speech, and Signal Processing_ , vol. 35, no. 3, pp. 249–266, March 1987. * [87] B. D. V. Veen and K. M. Buckley, “Beamforming: a versatile approach to spatial filtering,” _IEEE ASSP Magazine_ , vol. 5, no. 2, pp. 4–24, April 1988. * [88] O. L. Frost, “An algorithm for linearly constrained adaptive array processing,” _Proceedings of the IEEE_ , vol. 60, no. 8, pp. 926–935, Aug 1972. * [89] M. Er and A. Cantoni, “Derivative constraints for broad-band element space antenna array processors,” _IEEE Transactions on Acoustics, Speech, and Signal Processing_ , vol. 31, no. 6, pp. 1378–1393, Dec 1983. * [90] K. Buckley and L. Griffiths, “An adaptive generalized sidelobe canceller with derivative constraints,” _IEEE Transactions on Antennas and Propagation_ , vol. 34, no. 3, pp. 311–319, March 1986. * [91] L. C. Godara, “Application of the fast Fourier transform to broadband beamforming,” _The Journal of the Acoustical Society of America_ , vol. 98, no. 1, pp. 230–240, 1995. [Online]. Available: https://doi.org/10.1121/1.413765 * [92] J. Sheinvald and M. Wax, “Direction finding with fewer receivers via time-varying preprocessing,” _IEEE Transactions on Signal Processing_ , vol. 47, no. 1, pp. 2–9, Jan 1999. * [93] M.-S. Lee, V. Katkovnik, and Y.-H. Kim, “System modeling and signal processing for a switch antenna array radar,” _IEEE Transactions on Signal Processing_ , vol. 52, no. 6, pp. 1513–1523, June 2004. * [94] Li Yang, Liang Liwan, Pan Weifeng, Chen Yaqin, and Feng Zhenghe, “Signal processing method for switch antenna array of the FMCW radar,” in _Proceedings of the 2001 IEEE Radar Conference (Cat. No.01CH37200)_ , May 2001, pp. 289–293. * [95] Y. Asano, S. Ohshima, T. Harada, M. Ogawa, and K. Nishikawa, “Proposal of millimeter-wave holographic radar with antenna switching,” in _2001 IEEE MTT-S International Microwave Symposium Digest (Cat. No.01CH37157)_ , vol. 2, May 2001, pp. 1111–1114, vol. 2. * [96] M. Guo, Y. D. Zhang, and T. Chen, “DOA estimation using compressed sparse array,” _IEEE Transactions on Signal Processing_ , vol. 66, no. 15, pp. 4133–4146, Aug 2018. * [97] X. Wang, M. G. Amin, and X. Wang, “Optimum sparse array design for multiple beamformers with common receiver,” in _2018 IEEE International Conference on Acoustics, Speech and Signal Processing_ , April 2018. * [98] M. H. Bae, I. H. Sohn, and S. B. Park, “Grating lobe reduction in ultrasonic synthetic focusing,” _Electronics Letters_ , vol. 27, no. 14, pp. 1225–1227, July 1991. * [99] J. Lu and J. Greenleaf, “Study of two-dimensional array transducers for limited diffraction beams,” _IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control_ , vol. 41, no. 5, pp. 724–739, Sept. 1994. * [100] J. H. Doles and F. D. Benedict, “Broad-band array design using the asymptotic theory of unequally spaced arrays,” _IEEE Transactions on Antennas and Propagation_ , vol. 36, no. 1, pp. 27–33, Jan 1988. * [101] V. Murino, A. Trucco, and A.
Tesei, “Beam pattern formulation and analysis for wide-band beamforming systems using sparse arrays,” _Signal Processing_ , vol. 56, no. 2, pp. 177–183, 1997. * [102] V. Murino, A. Trucco, and C. S. Regazzoni, “Synthesis of unequally spaced arrays by simulated annealing,” _IEEE Transactions on Signal Processing_ , vol. 44, no. 1, pp. 119–122, Jan 1996. * [103] D. B. Ward, R. A. Kennedy, and R. C. Williamson, “Theory and design of broadband sensor arrays with frequency invariant far-field beam patterns,” _The Journal of the Acoustical Society of America_ , vol. 97, no. 2, pp. 1023–1034, 1995. * [104] A. Trucco, “Synthesizing wide-band sparse arrays by simulated annealing,” in _MTS/IEEE Oceans 2001. An Ocean Odyssey. Conference Proceedings (IEEE Cat. No.01CH37295)_ , vol. 2, Nov 2001, pp. 989–994, vol. 2. * [105] F. Anderson, W. Christensen, L. Fullerton, and B. Kortegaard, “Ultra-wideband beamforming in sparse arrays,” _IEE Proceedings H - Microwaves, Antennas and Propagation_ , vol. 138, no. 4, pp. 342–346, Aug 1991. * [106] I. S. Reed, J. D. Mallett, and L. E. Brennan, “Rapid convergence rate in adaptive arrays,” _IEEE Transactions on Aerospace and Electronic Systems_ , vol. AES-10, no. 6, pp. 853–863, Nov. 1974. * [107] S. A. Hamza and M. G. Amin, “Sparse array beamforming design for wideband signal models,” _IEEE Transactions on Aerospace and Electronic Systems_ , pp. 1–1, 2020. * [108] M. G. Amin, P. P. Vaidyanathan, Y. D. Zhang, and P. Pal, “Editorial for coprime special issue,” _Digital Signal Processing_ , vol. 61, no. Supplement C, pp. 1–2, 2017, Special Issue on Coprime Sampling and Arrays. * [109] S. A. Hamza and M. G. Amin, “Sparse array design for maximizing the signal-to-interference-plus-noise-ratio by matrix completion,” _Digital Signal Processing_ , p. 102678, 2020. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1051200420300233 * [110] E. J. Candès and B. Recht, “Exact matrix completion via convex optimization,” _Foundations of Computational Mathematics_ , vol. 9, no. 6, p. 717, Apr 2009. * [111] P. Stoica and A. Nehorai, “MUSIC, maximum likelihood, and Cramer-Rao bound,” _IEEE Transactions on Acoustics, Speech, and Signal Processing_ , vol. 37, no. 5, pp. 720–741, May 1989. * [112] S. A. Hamza and M. Amin, “Planar sparse array design for transmit beamforming (Conference Presentation),” in _Radar Sensor Technology XXIV_ , K. I. Ranney and A. M. Raynal, Eds., vol. 11408, International Society for Optics and Photonics. SPIE, 2020. [Online]. Available: https://doi.org/10.1117/12.2557923 * [113] S. A. Hamza and M. G. Amin, “Sparse array design for transmit beamforming,” in _2020 IEEE International Radar Conference (RADAR)_ , 2020, pp. 560–565. * [114] A. Hassanien, M. G. Amin, Y. D. Zhang, and F. Ahmad, “Signaling strategies for dual-function radar communications: an overview,” _IEEE Aerospace and Electronic Systems Magazine_ , vol. 31, no. 10, pp. 36–45, 2016. * [115] A. Hassanien, M. G. Amin, E. Aboutanios, and B. Himed, “Dual-function radar communication systems: A solution to the spectrum congestion problem,” _IEEE Signal Processing Magazine_ , vol. 36, no. 5, pp. 115–126, 2019. * [116] S. A. Hamza and M. G. Amin, “Sparse array receiver beamformer design for multi-functional antenna,” in _2020 28th European Signal Processing Conference (EUSIPCO)_ , 2021, pp. 1836–1840. * [117] P. Stoica, J. Li, and Y. Xie, “On probing signal design for MIMO radar,” _IEEE Transactions on Signal Processing_ , vol. 55, no. 8, pp. 4151–4161, Aug 2007. * [118] S. A.
Hamza and M. G. Amin, “Learning sparse array Capon beamformer design using deep learning approach,” in _2020 IEEE Radar Conference (RadarConf20)_ , 2020, pp. 1–5. * [119] S. A. Hamza and M. Amin, “A method for optimum sparse array beamforming design (Conference Presentation),” in _Big Data II: Learning, Analytics, and Applications_ , F. Ahmad, Ed., vol. 11395, International Society for Optics and Photonics. SPIE, 2020. [Online]. Available: https://doi.org/10.1117/12.2561526 * [120] K. Li and J. Malik, “Learning to optimize,” in _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_ , 2017. * [121] M. Andrychowicz, M. Denil, S. G. Colmenarejo, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. de Freitas, “Learning to learn by gradient descent by gradient descent,” in _Proceedings of the 30th International Conference on Neural Information Processing Systems_ , ser. NIPS’16. Red Hook, NY, USA: Curran Associates Inc., 2016, pp. 3988–3996. * [122] T. J. O’Shea, T. C. Clancy, and R. W. McGwier, “Recurrent neural radio anomaly detection,” _CoRR_ , vol. abs/1611.00301, 2016. * [123] H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to optimize: Training deep neural networks for interference management,” _IEEE Transactions on Signal Processing_ , vol. 66, no. 20, pp. 5438–5453, 2018. * [124] Y. Shen, Y. Shi, J. Zhang, and K. B. Letaief, “LORM: learning to optimize for resource management in wireless networks with few training samples,” _IEEE Trans. Wireless Communications_ , vol. 19, no. 1, pp. 665–679, 2020. * [125] P. Sprechmann, R. Litman, T. B. Yakar, A. M. Bronstein, and G. Sapiro, “Supervised sparse analysis and synthesis operators,” in _Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, Lake Tahoe, Nevada, United States_ , 2013, pp. 908–916. * [126] T. J. O’Shea, T. Erpek, and T. C. Clancy, “Deep learning based MIMO communications,” _CoRR_ , vol. abs/1707.07980, 2017. * [127] J. R. Hershey, J. L. Roux, and F. Weninger, “Deep unfolding: Model-based inspiration of novel deep architectures,” _CoRR_ , vol. abs/1409.2574, 2014. * [128] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” _SIAM Journal on Imaging Sciences_ , vol. 2, no. 1, pp. 183–202, 2009. * [129] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in _Proceedings of the 27th International Conference on International Conference on Machine Learning_ , ser. ICML’10. Madison, WI, USA: Omnipress, 2010, pp. 399–406. * [130] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_ , Y. Bengio and Y. LeCun, Eds., 2015. * [131] E. BouDaher and A. Hoorfar, “Electromagnetic optimization using mixed-parameter and multiobjective covariance matrix adaptation evolution strategy,” _IEEE Transactions on Antennas and Propagation_ , vol. 63, no. 4, pp. 1712–1724, 2015. * [132] E. BouDaher, F. Ahmad, M. G. Amin, and A. Hoorfar, “Mutual coupling effect and compensation in non-uniform arrays for direction-of-arrival estimation,” _Digital Signal Processing_ , vol. 61, pp. 3–14, 2017, Special Issue on Coprime Sampling and Arrays.
# Approximating monomials using Chebyshev polynomials

Arvind K. Saibaba (Department of Mathematics, North Carolina State University, Raleigh, NC. Email<EMAIL_ADDRESS>

###### Abstract

This paper considers the approximation of a monomial $x^{n}$ over the interval $[-1,1]$ by a lower-degree polynomial. This polynomial approximation can be easily computed analytically and is obtained by truncating the analytical Chebyshev series expansion of $x^{n}$. The error in the polynomial approximation in the supremum norm has an exact expression with an interesting probabilistic interpretation. We use this interpretation along with concentration inequalities to develop a useful upper bound for the error.

Keywords— Chebyshev polynomials, Polynomial Approximation, Binomial Coefficients, Concentration inequalities

## 1 Motivation and Introduction

We are interested in approximating the monomial $x^{n}$ by a polynomial of degree $0\leq k<n$ over the interval $[-1,1]$. The monomials $1,x,x^{2},\dots$ form a basis for $C[-1,1]$, so it seems unlikely that we can represent a monomial in terms of lower-degree polynomials. In Figure 1, we plot a few functions from the monomial basis over $[0,1]$; the basis functions look increasingly alike as we take higher and higher powers, i.e., they appear to “lose independence.” Numerical analysts often avoid the monomial basis in polynomial interpolation since it results in ill-conditioned Vandermonde matrices, leading to poor numerical performance in finite precision arithmetic. This loss of independence means that it is reasonable to approximate the monomial $x^{n}$ as a linear combination of lower-order monomials, i.e., by a lower-order polynomial approximation. The natural question to ask, therefore, is: how small can $k$ be so that a well-chosen polynomial of degree $k$ can accurately approximate $x^{n}$?

Figure 1: Visualization of a few monomials in the interval $[0,1]$.

The surprising answer to this question is that we can approximate the monomial $x^{n}$ over $[-1,1]$ by a polynomial of small degree, which we will make precise. Let $\|f\|_{\infty}=\max_{x\in[-1,1]}|f(x)|$ denote the supremum norm on $C[-1,1]$ and let $\pi_{k}^{*}(\cdot)$ be the best polynomial approximation to $x^{n}$ in this norm; that is $E_{n,k}:=\min_{\pi\in\mathcal{P}_{k}}\|x^{n}-\pi(x)\|_{\infty}=\|x^{n}-\pi_{k}^{*}(x)\|_{\infty},$ where $\mathcal{P}_{k}$ is a vector space of polynomials with real coefficients of degree at most $k$. The minimizer $\pi_{k}^{*}(\cdot)$ exists and is unique [9, Chapter 10], but does not have a closed form expression. Newman and Rivlin [7, Theorem 2] showed that (we briefly mention that the notation in our manuscript differs from [7] in that we reverse the roles of $n$ and $k$) $\frac{p_{n,k}}{4e}\leq\|x^{n}-\pi_{k}^{*}(x)\|_{\infty}\leq p_{n,k},$ (1) where the term $p_{n,k}$ is given by the formula $p_{n,k}=\frac{1}{2^{n-1}}\sum_{j=\lfloor(n+k)/2\rfloor+1}^{n}\binom{n}{j}.$ Since $p_{n,k}$ involves a sum of binomial coefficients, it has a probabilistic interpretation which we explore in Section 4. To see why a small $k$ is sufficient, consider the upper bound $p_{n,k}$. In Section 4 we use the probabilistic interpretation to obtain the bound $p_{n,k}\leq 2\exp\left(-{k^{2}}/{2n}\right)$. Suppose we are given a user-defined tolerance $\epsilon>0$. To ensure $\|x^{n}-\pi_{k}^{*}(x)\|_{\infty}\leq\epsilon,$ it suffices to choose $k\geq\sqrt{2n\log(2/\epsilon)}$.
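As a quick numerical check (a short Python sketch, not part of the original analysis), $p_{n,k}$ computed from the binomial-sum formula can be compared against the bound $2\exp(-k^{2}/2n)$ and the resulting choice of $k$:

```python
import math

def p(n, k):
    # p_{n,k} = 2^{1-n} * sum_{j = floor((n+k)/2)+1}^{n} C(n, j)
    s = sum(math.comb(n, j) for j in range((n + k) // 2 + 1, n + 1))
    return s / 2 ** (n - 1)

n = 75
for k in (5, 15, 25):
    print(k, p(n, k), 2 * math.exp(-k**2 / (2 * n)))  # exact value vs. bound

# Smallest k satisfying k >= sqrt(2 n log(2/eps)) for a tolerance eps:
eps = 1e-3
k_min = math.ceil(math.sqrt(2 * n * math.log(2 / eps)))
print(k_min, p(n, k_min) <= eps)  # prints True: the tolerance is met
```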
The accuracy of the polynomial approximation is visualized in Figure 2, where in the left panel we plot the monomial $x^{n}$ for $n=75$ and the best polynomial approximation $\pi_{k}^{*}$ for $k=5,15,25$. The polynomial $\pi_{k}^{*}$ is computed using the Remez algorithm, implemented in chebfun [3]. We see that for $k=25$, the polynomial approximation looks very accurate. In the right panel, we display $p_{n,k}$, which is the upper bound on the error of the best polynomial approximation, as well as the upper bound for $p_{n,k}$. We see that $p_{n,k}$ and its upper bound both decay sharply with increasing $k$. Numerical evidence in [6] further confirms this analysis; the authors show that the error $E_{n,k}$ behaves approximately like $\frac{1}{2}\text{erfc}(k/\sqrt{n})$, where erfc is the complementary error function.

Figure 2: (left) Approximation of the monomial $x^{n}$ for $n=75$ by $\pi_{k}^{*}$, (right) $p_{n,k}$ and its upper bound $2\exp(-k^{2}/2n)$. The visualization is restricted to the interval $[0,1]$.

Polynomial and rational approximations to the monomial have received considerable attention in the approximation theory community, and surveys of various results can be found in [8, 6]. Polynomial approximations to high-order monomials have many applications in numerical analysis. This key insight was exploited by Cornelius Lanczos [5] in his “$\tau$-method” for the numerical solution of differential equations. For a stimulating discussion on this topic, please see [6]. In numerical linear algebra, this has been exploited to efficiently compute matrix powers and the Schatten $p$-norm of a matrix [1, 4]. In this short note, we show how to construct a polynomial approximation $x^{n}\approx\phi_{k}(x)$ using a truncated Chebyshev polynomial expansion. The error in the truncated representation equals the sum of the discarded coefficients and is precisely $p_{n,k}$. The polynomial $\phi_{k}$ and the resulting error can both be computed analytically and are, therefore, of great practical use. We briefly review Chebyshev polynomials in Section 2 and state and prove the main result in Section 3. In Section 4, we explore probabilistic interpretations of $p_{n,k}$ and obtain bounds for partial sums of binomial coefficients.

## 2 Chebyshev polynomials

The Chebyshev polynomials of the first kind $T_{n}(x)$ for $n=0,1,2,\dots$ can be represented as $T_{n}(x)=\cos(n\arccos{x})\qquad x\in[-1,1].$ Starting with $T_{0}(x)=1$, $T_{1}(x)=x$, the Chebyshev polynomials satisfy a recurrence relationship of the form $T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x)$ for $n\geq 1$. The Chebyshev polynomials are orthogonal with respect to the weighted inner product $\langle u,v\rangle=\int_{-1}^{1}w(x)u(x)v(x)dx$ where the weight function takes the form $w(x)=(1-x^{2})^{-1/2}$. Any function $f\in C[-1,1]$ that is Lipschitz continuous can be represented in terms of a Chebyshev polynomial expansion of the form $f(x)=\frac{1}{2}c_{0}+\sum_{j=1}^{\infty}c_{j}T_{j}(x)\qquad x\in[-1,1],$ where the coefficients $c_{j}$ are obtained as $c_{j}=\frac{2}{\pi}\langle f(x),T_{j}(x)\rangle$ and the series is uniformly convergent.
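Such expansions are easy to experiment with numerically. The following short sketch (an illustration using NumPy's polynomial module, not part of the paper) converts $x^{n}$ from the power basis to the Chebyshev basis and truncates the expansion, anticipating the construction analysed in the next section:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 75
mono = np.zeros(n + 1)
mono[-1] = 1.0                 # coefficients of x^n in the power basis
c = C.poly2cheb(mono)          # Chebyshev coefficients c_0, ..., c_n

k = 25                         # truncate the expansion beyond degree k
phi = C.Chebyshev(c[:k + 1])
x = np.linspace(-1.0, 1.0, 20001)
err = np.max(np.abs(x**n - phi(x)))
print(err)  # up to rounding, this equals the sum of the discarded coefficients
```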
The monomial $x^{n}$ is rather special since it has the following exact representation in terms of the Chebyshev polynomials [2, Section 4] $x^{n}=\sum_{j=0}^{n}{}^{{}^{\prime}}c_{j}T_{j}(x),$ (2) where ${}^{{}^{\prime}}$ means the summand corresponding to $j=0$ is halved (if it appears) and the coefficients $c_{j}$ for $j=0,\dots,n$ are $c_{j}=\left\\{\begin{array}[]{ll}2^{1-n}\binom{n}{(n-j)/2}&n-j\text{ even}\\\ 0&\text{otherwise}.\end{array}\right.$ (3) Equation (2) takes a more familiar form when we consider the trigonometric perspective of Chebyshev polynomials. For example, the well-known trigonometric identity $\cos(3\theta)=4\cos^{3}\theta-3\cos\theta$ can be rearranged as $\cos^{3}\theta=\frac{3}{4}\cos\theta+\frac{1}{4}\cos(3\theta)=\frac{1}{2^{2}}\left(\binom{3}{1}\cos\theta+\binom{3}{0}\cos(3\theta)\right).$ With $x=\cos\theta$, we get $x^{3}=2^{-2}\binom{3}{1}T_{1}(x)+2^{-2}\binom{3}{0}T_{3}(x)$. For completeness, we provide a derivation of (2) in Appendix A. It is important to note here that the series in (2) is finite, but can be truncated to obtain an accurate approximation; see Section 3. Chebyshev polynomials have many applications in approximation theory and numerical analysis [9] but we limit ourselves to two such examples here. First, if the function is differentiable $r$ times or analytic, the Chebyshev coefficients exhibit decay (algebraic or geometric, respectively). Therefore, the Chebyshev series can be truncated to obtain a polynomial approximation of the function, and the accuracy of the approximation depends on the rate of decay of the coefficients. Another application of Chebyshev polynomials is in the theory and practice of polynomial interpolation. The polynomial ${q}_{n-1}^{*}(x)=x^{n}-2^{1-n}T_{n}(x)$ solves the minimax problem $\min_{q\in\mathcal{P}_{n-1}}\|x^{n}-q(x)\|_{\infty}=2^{1-n}.$ (4) Based on the minimax characterization, to interpolate a function over $[-1,1]$ by a polynomial of degree $n-1$, the function to be interpolated should be evaluated at the roots of the Chebyshev polynomial $T_{n}$ given by the points $x_{j}=\cos\left(\frac{2j+1}{2n}\pi\right)$ for $j=0,\dots,n-1$.

## 3 Main result

We construct the polynomial approximation $x^{n}\approx\phi_{k}(x)$ by truncating the Chebyshev polynomial expansion in (2) beyond the term $j=k$. That is, $\phi_{k}(x):=\sum_{j=0}^{k}{}^{{}^{\prime}}c_{j}T_{j}(x).$ Our main result is the following theorem, which quantifies the error in the polynomial approximation. The proof of this theorem is based on the expression in (2). We believe this result is new.

###### Theorem 1.

The error in the polynomial approximation $\phi_{k}(x)$ satisfies $\|x^{n}-\phi_{k}(x)\|_{\infty}=p_{n,k}.$

###### Proof.

From (2), $x^{n}-\phi_{k}(x)=\sum_{j=k+1}^{n}c_{j}T_{j}(x)$. Using the triangle inequality, we find that $\|x^{n}-\phi_{k}(x)\|_{\infty}\leq\sum_{j=k+1}^{n}c_{j}$ since the coefficients are nonnegative and the Chebyshev polynomials are bounded as $|T_{j}(x)|\leq 1$.
Substituting the coefficients $c_{j}$ from (3), we get $\|x^{n}-\phi_{k}(x)\|_{\infty}\leq\frac{1}{2^{n-1}}\sum_{\begin{subarray}{c}j=k+1\\\ n-j\text{ even}\end{subarray}}^{n}\binom{n}{(n-j)/2}.$ (5) Using the properties of the binomial coefficients, the summation simplifies as $\sum_{\begin{subarray}{c}j=k+1\\\ n-j\text{ even}\end{subarray}}^{n}\binom{n}{(n-j)/2}=\sum_{\begin{subarray}{c}j=k+1\\\ n+j\text{ even}\end{subarray}}^{n}\binom{n}{(n+j)/2}=\sum_{j=\lfloor(n+k)/2\rfloor+1}^{n}\binom{n}{j}.$ Plugging this identity into (5) gives $\|x^{n}-\phi_{k}(x)\|_{\infty}\leq p_{n,k}$. The bound is clearly achieved at $x=1$, where all the Chebyshev polynomials take the value $1$. ∎

This theorem shows that the polynomial approximation $\phi_{k}$ is nearly optimal, and the error due to this approximation is $p_{n,k}$. In fact, it is the optimal polynomial for the special case $k=n-1$. It is easy to see that $x^{n}-\phi_{n-1}(x)=2^{1-n}T_{n}(x)$ and so $\phi_{n-1}$ is the same as the best polynomial approximation $q^{*}_{n-1}$ in (4). For $k<n-1$, combining (1) and Theorem 1 gives $\frac{\|x^{n}-\phi_{k}(x)\|_{\infty}}{4e}\leq\|x^{n}-\pi_{k}^{*}(x)\|_{\infty}\leq\|x^{n}-\phi_{k}(x)\|_{\infty},$ so that the error in the Chebyshev polynomial approximation is suboptimal by at most the factor $4e\approx 10.87$. Therefore, by using $\phi_{k}$ we lose only one significant digit of accuracy compared to $\pi^{*}_{k}$.

## 4 A probabilistic digression

In Section 1, we saw that the error in the monomial approximation depends on $p_{n,k}$. Since $p_{n,k}$ depends on a sum of binomial coefficients, it has a probabilistic interpretation. Newman and Rivlin [7] observed that if a fair coin is tossed $n$ times, $p_{n,k}$ is the probability that the magnitude of the difference between the number of heads and the number of tails exceeds $k$. They used this insight along with the de Moivre-Laplace theorem [10, Section 1.3] (which is a special case of the Central Limit Theorem) to obtain the approximation $p_{n,k}\approx 2\text{erfc}(k/\sqrt{n})$. To convert this into a rigorous inequality for $p_{n,k}$ we use a different tool from probability, namely, concentration inequalities. These inequalities are useful in quantifying how much a random variable deviates from its mean. We start with the following alternative interpretation of $p_{n,k}$: it is twice the probability that more than $\lfloor(n+k)/2\rfloor$ coin tosses result in heads (or equivalently tails). We associate each coin toss with an independent Bernoulli random variable $X_{i}$ with parameter $p=1/2$ since the coin is fair. The random variable $X=\sum_{i=1}^{n}X_{i}$ has the Binomial distribution with parameters $n$ and $p$. Then, $p_{n,k}=2\mathbb{P}\left(\lfloor(n+k)/2\rfloor+1\leq X\leq n\right)\leq 2\mathbb{P}\left(X\geq(n+k)/2\right).$ Since $X$ has the Binomial distribution, we can once again use the de Moivre-Laplace theorem to say that as $n\rightarrow\infty$, $\frac{X-np}{\sqrt{np(1-p)}}\longrightarrow\mathcal{N}(0,1),\qquad\text{in distribution}.$ Roughly speaking, this theorem says that $X$ behaves as a normal random variable with mean $n/2$ and variance $n/4$. Since the tails of normal distributions decay exponentially, we expect that $X$ lies in the range $\frac{n}{2}\pm 1.96\sqrt{\frac{n}{4}}$ with nearly $95\%$ probability; alternatively, the probability that it is outside this range is very small.
To make this more precise, we apply Hoeffding’s concentration inequality [10, Theorem 2.2.6] to obtain $\mathbb{P}\left(X\geq(n+k)/2\right)=\mathbb{P}\left(X-\mathbb{E}[X]\geq k/2\right)\leq\exp\left(-\frac{k^{2}}{2n}\right).$ This gives our desired bound $p_{n,k}\leq 2\exp(-{k^{2}}/{2n})$. We can use a similar technique to prove the following result, which may be of independent interest. If $0\leq k\leq n/2$, then $\sum_{j=0}^{k}\binom{n}{j}\leq 2^{n}\exp\left(-\frac{(n-2k)^{2}}{2n}\right).$ Other concentration inequalities such as Chernoff and Bernstein (see [10, Chapter 2]) also give equally interesting bounds. We invite the reader to explore such results.

## 5 Acknowledgements

The author would like to thank Alen Alexanderian, Ethan Dudley, Ivy Huang, Ilse Ipsen, and Nathan Reading for comments and feedback. The work was supported by the National Science Foundation through the grants DMS-1745654 and DMS-1845406.

## 6 Declaration of Interest

The author has no relevant financial or non-financial competing interests to report.

## Appendix A Derivation of the monomial expansion

In this appendix, we provide a short derivation of (2). We take $x=\cos\theta$ and write $\cos^{n}\theta=\left(\frac{e^{i\theta}+e^{-i\theta}}{2}\right)^{n}=\frac{1}{2^{n}}\sum_{j=0}^{n}\binom{n}{j}e^{ij\theta}e^{-i(n-j)\theta}=\frac{1}{2^{n}}\sum_{j=0}^{n}\binom{n}{j}e^{i(2j-n)\theta}.$ We have used Euler’s formula and the binomial theorem. At this point, the derivation splits into two different paths:

1. $n$ is odd: $\displaystyle\cos^{n}\theta=$ $\displaystyle\>\frac{1}{2^{n}}\left(\sum_{j=0}^{(n-1)/2}\binom{n}{j}e^{i(2j-n)\theta}+\sum_{j=(n+1)/2}^{n}\binom{n}{j}e^{i(2j-n)\theta}\right)$ $\displaystyle=$ $\displaystyle\>\frac{1}{2^{n-1}}\sum_{j=0}^{(n-1)/2}\binom{n}{j}\cos((n-2j)\theta)$ $\displaystyle=$ $\displaystyle\>\frac{1}{2^{n-1}}\sum_{\begin{subarray}{c}j=0\\\ n-j\text{ even}\end{subarray}}^{n}\binom{n}{(n-j)/2}\cos(j\theta).$

2. $n$ is even: $\displaystyle\cos^{n}\theta=$ $\displaystyle\>\frac{1}{2^{n}}\left(\sum_{j=0}^{n/2-1}\binom{n}{j}e^{i(2j-n)\theta}+\binom{n}{n/2}+\sum_{j=n/2+1}^{n}\binom{n}{j}e^{i(2j-n)\theta}\right)$ $\displaystyle=$ $\displaystyle\>\frac{1}{2^{n}}\binom{n}{n/2}+\frac{1}{2^{n-1}}\sum_{j=0}^{n/2-1}\binom{n}{j}\cos((n-2j)\theta)$ $\displaystyle=$ $\displaystyle\>\frac{1}{2^{n-1}}\sum_{\begin{subarray}{c}j=0\\\ n-j\text{ even}\end{subarray}}^{n}{}^{{}^{\prime}}\binom{n}{(n-j)/2}\cos(j\theta).$

In either case, substitute $x=\cos\theta$ and $T_{j}(x)=\cos(j\theta)$ to complete the derivation.

## References

* [1] H. Avron and S. Toledo. Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. Journal of the ACM (JACM), 58(2):8, 2011. * [2] W. J. Cody. A survey of practical rational and polynomial approximation of functions. SIAM Review, 12(3):400–423, 1970. * [3] T. A. Driscoll, N. Hale, and L. N. Trefethen. Chebfun Guide. Pafnuty Publications, 2014. * [4] E. Dudley, A. K. Saibaba, and A. Alexanderian. Monte Carlo estimators for the Schatten p-norm of symmetric positive semidefinite matrices. arXiv preprint arXiv:2005.10174, 2020. * [5] C. Lanczos. Applied analysis. Dover Books on Advanced Mathematics. Dover Publications, Inc., New York, 1988. Reprint of the 1956 original. * [6] Y. Nakatsukasa and L. Trefethen. Rational approximation of $x^{n}$. Proceedings of the American Mathematical Society, 146(12):5219–5224, 2018. * [7] D. J. Newman and T. J. Rivlin. Approximation of monomials by lower degree polynomials.
Aequationes Mathematicae, 14(3):451–455, 1976. * [8] A. R. Reddy. Approximations to $x^{n}$ and $|x|$—a survey. J. Approx. Theory, 51(2):127–137, 1987. * [9] L. N. Trefethen. Approximation theory and approximation practice. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2013. * [10] R. Vershynin. High-dimensional probability, volume 47 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2018.
# ExpFinder: An Ensemble Expert Finding Model Integrating $N$-gram Vector Space Model and $\mu$CO-HITS

Yong-Bin Kang, Hung Du, Abdur Rahim Mohammad Forkan, Prem Prakash Jayaraman, Amir Aryani, Timos Sellis

Yong-Bin Kang is with Department of Media and Communication, Swinburne University of Technology, Australia E-mail<EMAIL_ADDRESS>Hung Du, Abdur Rahim Mohammad Forkan, Prem Prakash Jayaraman, and Timos Sellis are with Department of Computer Science and Software Engineering, Swinburne University of Technology, Australia E-mail<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>and <EMAIL_ADDRESS>Amir Aryani is with Social Innovation Research Institute, Faculty of Business and Law, Swinburne University of Technology, Australia. E-mail<EMAIL_ADDRESS>

###### Abstract

Finding an expert plays a crucial role in driving successful collaborations and speeding up high-quality research development and innovations. However, the rapid growth of scientific publications and digital expertise data makes identifying the right experts a challenging problem. Existing approaches for finding experts given a topic can be categorised into information retrieval techniques based on vector space models, document language models, and graph-based models. In this paper, we propose ExpFinder, a new ensemble model for expert finding, that integrates a novel $N$-gram vector space model, denoted as $n$VSM, and a graph-based model, denoted as $\mu$CO-HITS, that is a proposed variation of the CO-HITS algorithm. The key of $n$VSM is to exploit a recent inverse document frequency weighting method for $N$-gram words, and ExpFinder incorporates $n$VSM into $\mu$CO-HITS to achieve expert finding. We comprehensively evaluate ExpFinder on four different datasets from the academic domains in comparison with six different expert finding models. The evaluation results show that ExpFinder is a highly effective model for expert finding, substantially outperforming all the compared models by 19% to 160.2%.

###### Index Terms:

ExpFinder, Expert finding, N-gram Vector Space Model, $\mu$CO-HITS, Expert collaboration graph

## 1 Introduction

Finding experts in a particular domain is key to accelerating the rapid formation of teams to respond to new opportunities, as well as to undertaking and addressing new frontiers in research innovation. Further, accurately identified experts can significantly contribute to enhancing the research capabilities of an organisation, leading to higher-quality research outcomes. In general, an expert is defined as a person who has sufficient knowledge and skills in a given field [1]. Such knowledge and skills are called expertise. While digitally available data (e.g. scientific publications) describing the expertise of experts is rapidly growing, manually collating such information to find experts is impractical and expensive. Thus, in a large research organisation with diverse disciplines, finding experts in a field in which one has little or no knowledge is particularly challenging. Information retrieval techniques have been widely used to aid the task of finding experts from digitally available expertise data (which we collectively term documents in this paper), such as scientific publications [2]. Based on the literature [3], there are two specific tasks for expert retrieval: (1) expert finding – identifying experts given a topic from available documents and ranking them based on their expertise level, and (2) expert profiling – identifying the areas of expertise given an expert. In this paper we focus on the first task (i.e.
expert finding) and propose an ensemble model for it from unstructured documents. We use the term topic to represent a field of expertise. Most existing approaches for expert finding are based on vector space models (VSM), document language models (DLM), or graph-based models (GM). In VSM, expert finding is often solved by modeling the weights of topics, associated with the documents produced by experts, using Term Frequency-Inverse Document Frequency (TFIDF) or its variations [4, 5]. In DLM, expert finding is achieved by estimating the probability that a topic would be observed in the documents of an expert [6, 7, 8]. In GM, a graph is used to represent associations among experts, documents and/or topics. The strengths of the associations are inferred to estimate the expertise degree of an expert given a topic using various graph analytics such as expert-document-term association paths [9], Hyperlink-Induced Topic Search (HITS) [10, 11], or social network based link analysis methods [12, 13, 14]. Although various models in the aforementioned approaches have been proposed, integrating VSM and GM for expert finding has been little studied in the literature. In this work, we propose an ensemble model for expert finding, ExpFinder (its source code is publicly available at https://github.com/Yongbinkang/ExpFinder), that integrates a novel $N$-gram VSM, denoted as $n$VSM, with a GM using an expert collaboration graph (ECG). We develop $n$VSM for estimating the expertise degree (or weight) of an expert given a topic by leveraging the recent Inverse Document Frequency (IDF) weighting [15] for $N$-gram words (simply $N$-grams) composed of two or more terms (for $N>1$). This method has demonstrated higher robustness and effectiveness in measuring the IDF weights of $N$-grams. We also build an ECG in the form of an expert-document bipartite graph to represent the associations between experts and documents based on co-authorship information. To estimate the weight of an expert given a topic on the ECG, we propose the GM $\mu$CO-HITS, which is formed by applying two variation schemes to the generalised CO-HITS algorithm [16]. This paper makes three main contributions. First, to the best of our knowledge, ExpFinder is the first attempt to introduce $n$VSM for expert finding. Second, we propose ExpFinder, an ensemble model that combines $n$VSM and the GM using $\mu$CO-HITS, creating a stronger model for expert finding that achieves better performance than either model alone. ExpFinder incorporates the weights of experts estimated by $n$VSM into an ECG and uses $\mu$CO-HITS on the ECG to better estimate the weights of experts for a given topic. Third, we conduct comprehensive empirical evaluations to measure the effectiveness of ExpFinder using four different datasets (LExR [17] and three DBLP datasets [18]) in academic domains and compare the results with six different expert finding models: the TFIDF-based VSM, two DLMs [6, 8] and three GMs [9, 19, 16]. The rest of the paper is organised as follows. Section 2 provides related work in expert finding. Section 3 presents an overview of ExpFinder and Section 4 discusses the in-depth steps for building ExpFinder. Section 5 presents thorough empirical evaluations of ExpFinder, followed by the conclusion in Section 6.

## 2 Related Work

In recent years, with the growing amount of digital expertise sources, expert finding has become an intensive research area in the information retrieval community [3]. We can mainly classify expert finding approaches into three categories: VSM, DLM and GM.
In the VSM approach, the common idea is to estimate the relevance between a document and a topic using a weighting scheme in VSM (e.g. TFIDF or its variation). Then, finding experts can be done by viewing an expert as the collection of their published documents $\mathcal{D}_{x}$. That is, the weight of an expert $x$ given a topic $t$ is estimated by aggregating relevance scores between each document in $\mathcal{D}_{x}$ and $t$. For example, TFIDF was used to find experts in community question answering websites, in which the goal is to find users with relevant expertise to provide answers for given questions [20]. A variation of TFIDF was also applied for expert finding in an organization’s ERP system [21]. The work [4] also used TFIDF to identify experts given a topic using a topic extension method (finding interrelated terms of a given topic from the corpus), where TFIDF was used to estimate the relevance between extended terms and each expert’s documents. TFIDF was also used to estimate the weights of topics indicating the interests of an expert, and this information was used with fuzzy logic for expert finding [5]. The aim of the DLM approach is to find experts whose documents are directly related to a given topic. Commonly, this approach estimates the relationship between a topic and an expert as the probability of the topic being generated by the expert [6], or the relationship between an expert and their publications [22]. BMExpert [7] used the DLM [6] for expert finding using three factors: relevance of documents to the topic, importance of documents, and associations between documents and experts. Similarly, the work [23] used a probabilistic DLM for expert finding by probabilistically generating a textual representation of an expert according to their documents and then ranking such documents according to a given topic. Recently, a probabilistic model, WISER [8], estimated the importance of experts’ documents given a topic using BM25 [24]. Using this importance, such documents were ranked and these ranks were summed to represent the topic-sensitive weight of an expert. In the GM approach, experts are represented as nodes, and their relationships are represented by edges or implicitly derived from the graph. Different algorithms have been used in the GM approach, such as Hyperlink-Induced Topic Search (HITS) [10, 11, 25] and PageRank [26]. For expert finding, PageRank was adapted in the context of online community discussions on a user-user graph built from votes indicating whose questions were answered by whom [27]. Also, a modified PageRank algorithm was developed and applied for finding experts in online knowledge communities [28]. HITS is a graph-based link analysis algorithm originally designed for ranking the importance of web pages based on authority and hub scores. The work [10] built an expert-expert bipartite graph based on email communication patterns and attempted to rank experts using HITS. CO-HITS was introduced [16] to incorporate a bipartite graph with the content information from both sides (e.g. experts and documents in our context) by adding personalised parameters to HITS, and CO-HITS showed higher performance than HITS [16]. Using an author-document-topic (ADT) graph, the expert finding GM model [9] leveraged possible paths between a topic and an expert on the ADT graph. Recently, diverse expert finding approaches have been proposed in social networks.
For example, the authors of [12] proposed a method for finding experts who can answer questions in a social network for 'community question answering' using users' votes and reputations. The approach [13] focused on finding experts who can answer users' questions based on users' online social activities in a social network (e.g. Twitter). Also, we observe that some models tend to mix different techniques among DLM, VSM, and/or GM [29]. For example, AuthorRank [30] combined a generative probabilistic DLM and a PageRank-like GM based on community engagement of expert candidates. The DLM was used to identify the most relevant documents, while the GM was used to model the authors' authorities based on the community co-authorship. The work [31] combined a cluster-based language model and a VSM for finding experts in question and answer communities. The authors of [29] proposed a complex model for community question answering using a variation of the DLM [6] and a HITS-based GM (the HITS algorithm on a competition-based expertise network [32]), where the scores from these models were linearly combined to rank experts given a question. The work [33] used the Dempster-Shafer combination theory to combine the DLM [6] and a graph algorithm that analyses the social interaction of experts. However, this work did not provide technical details on how such a combination is done. Differing from the above approaches, ExpFinder is a first attempt at devising an ensemble model that incorporates $n$VSM into $\mu$CO-HITS on an ECG. The proposed $n$VSM takes advantage of the IDF weighting for $N$-grams [15]. $\mu$CO-HITS is a novel variation of CO-HITS and runs on an ECG (i.e. an expert-document bipartite graph), compared to previous works [10, 11] that applied HITS on an expert-expert graph.

## 3 Introduction to ExpFinder

In this section, we present an overview of ExpFinder and the basic notation that we will use in the paper.

### 3.1 Overview of ExpFinder

ExpFinder aims to identify ranked experts according to their expertise degree given a topic. In this paper, we assume that a topic is represented as a noun phrase extracted from documents (e.g. scientific publications) of experts in a given domain. The reason is that domain-specific concepts are often described by noun phrases that represent the key information within a given corpus [34]. A noun phrase means a single-word noun or a group of words that function together as a noun. The key of ExpFinder is the utilisation of two knowledge facets in a unified manner. The first is the estimation of the weights of experts given a topic using the proposed $n$VSM. The second is $\mu$CO-HITS, which performs on an expert collaboration graph (ECG), where expert collaboration is measured by the joint production of experts (e.g. co-authored documents). We incorporate the result of $n$VSM into the ECG, and reinforce the weights of experts given a topic using $\mu$CO-HITS. The following presents the key steps in ExpFinder (see also Fig. 1):

Figure 1: The overview of ExpFinder.

Step 1: Extract topics: Given experts and their documents (also called the corpus) in a given domain, we extract noun phrases as topics.

Step 2: Estimate the weights of experts and documents given topics: Given a topic, we estimate the weights of experts and documents based on the proposed $n$TFIDF method in $n$VSM. In this paper, we also call such weights topic-sensitive weights as these weights are sensitive to the given topic.
Given a topic, the key of $n$TFIDF lies in combining the frequency of the topic with the IDF method for $N$-grams over the corpus [15]. The output of this step includes a topic-expert matrix and a topic-document matrix, where an entry reflects the weight of an expert and a document, respectively, given a topic.

Step 3: Construct an ECG: We construct an ECG to represent associations between experts and their jointly-published documents. This graph is modelled as a directed, weighted bipartite graph that has two kinds of nodes, one representing experts and the other representing documents. A directed edge points from a document $d$ to an expert $x$ if $x$ has published $d$.

Step 4: Reinforce expert weights using $\mu$CO-HITS: As presented above, to rank experts, ExpFinder integrates the two knowledge facets: (1) $n$VSM to estimate the weights of the experts and documents given a topic (Step 2); and (2) incorporating such weights into an ECG (Step 3) to further reinforce the weights of experts. The outcome of this step is the reinforced topic-expert matrix showing the weights of experts. Finally, we rank the experts for each topic from the matrix.

### 3.2 Notations

We use the following basic notations in this paper.

* Let $\mathcal{X}$ be the set of experts, and $|\mathcal{X}|$ be the number of experts in $\mathcal{X}$.
* Let $\mathcal{D}$ be the set of all documents published by $\mathcal{X}$, and let $\mathcal{D}_{x}$ be all documents published by $x\in\mathcal{X}$. Also, let $\mathcal{X}_{d}$ denote the set of experts that have a document $d\in\mathcal{D}$.
* Let $\mathcal{T}$ be the set of topics extracted from $\mathcal{D}$.
* Let $\mathbf{TX}$ be a $|\mathcal{T}|\times|\mathcal{X}|$ topic-expert matrix where rows and columns are labeled with $\mathcal{T}$ and $\mathcal{X}$, respectively. The entry in the $i$-th row and $j$-th column of $\mathbf{TX}$, denoted $\mathbf{TX}_{i,j}$, indicates the weight of $x_{j}\in\mathcal{X}$ on $t_{i}\in\mathcal{T}$. The higher the weight, the more important the corresponding expert is on the given topic.
* Let $\mathbf{DX}$ be a $|\mathcal{D}|\times|\mathcal{X}|$ document-expert matrix where rows and columns are labeled with $\mathcal{D}$ and $\mathcal{X}$, respectively. The entry $\mathbf{DX}_{i,j}$ shows the weight of an expert $x_{j}\in\mathcal{X}$ on a document $d_{i}\in\mathcal{D}$ based on $x_{j}$'s contribution towards $d_{i}$.
* Let $\mathbf{TD}$ be a $|\mathcal{T}|\times|\mathcal{D}|$ topic-document matrix where rows and columns are labeled with $\mathcal{T}$ and $\mathcal{D}$, respectively. $\mathbf{TD}_{i,j}$ represents the weight of document $d_{j}\in\mathcal{D}$ on $t_{i}\in\mathcal{T}$. The higher the weight, the more important the corresponding document is on the given topic.

## 4 Design of ExpFinder

In this section, we present the details of the four steps for designing and developing ExpFinder.

### 4.1 Extract Topics

As presented in Section 3, we assume that a topic is represented as a noun phrase. We perform the following steps to extract noun phrases from $\mathcal{D}$. First, for each document $d\in\mathcal{D}$, we split $d$ into its sentences, keeping their sequential indices. Second, for each sentence, we analyse POS tags of the words in the sentence and remove stopwords. POS tagging is the process of assigning a part of speech to each word in a sentence. Then, each remaining word is converted into its lemmatised form.
Lemmatisation is the process of grouping together the inflected forms of a word so that they can be treated as a single item (e.g. 'patients' is lemmatised to 'patient'). Third, in the sentence, we use the following linguistic pattern based on POS tags to extract noun phrases:

${\rm((JJ)^{*}|(VBN)^{*}|(VBG)^{*})(N)^{+}},$ (1)

where '${\rm JJ}$' means adjective, '${\rm VBN}$' past participle, '${\rm VBG}$' gerund, and '${\rm N}$' noun. Using this pattern, we can extract a noun phrase consisting of (1) one or more nouns; (2) one or more adjectives followed by one or more nouns (e.g. 'medical system'); (3) one or more past participles followed by one or more nouns (e.g. 'embedded system'); or (4) one or more gerunds followed by one or more nouns (e.g. 'learning system'). The symbol '*' denotes zero or more occurrences, and '+' denotes one or more occurrences. Note that ExpFinder does not rely on a particular method for noun phrase extraction, and thus can incorporate any noun phrase extraction method.

### 4.2 Estimate the weights of experts and documents given topics

We now present the process for creating a topic-expert matrix $\mathbf{TX}$ and a topic-document matrix $\mathbf{TD}$ from the extracted topics using $n$TFIDF in $n$VSM. These matrices will be used as the input to $\mu$CO-HITS.

#### 4.2.1 Topic-Expert Matrix Creation

To create $\mathbf{TX}$, our fundamental idea is to utilise the definition of the DLM [6, 7] for expert finding. Thus, we first briefly describe how this DLM measures the topic-sensitive weight of an expert $x\in\mathcal{X}$ given a topic $t\in\mathcal{T}$, denoted as $p(x|t)$. Formally, it is given as [6]:

$p(x|t)={p(x,t)}/{p(t)},$ (2)

where $p(x,t)$ is the joint probability of $x$ and $t$, and $p(t)$ is the probability of $t$. We can ignore $p(t)$ as it is constant over all experts $\mathcal{X}$. Thus, $p(x|t)$ is approximated by $p(x,t)$, which is reformulated over the documents $\mathcal{D}_{x}$ [6]:

$\begin{split}p(x,t)&=\sum_{d\in\mathcal{D}_{x}}p(x,d,t)=\sum_{d\in\mathcal{D}_{x}}p(d)p(x,t|d)\\ &=\sum_{d\in\mathcal{D}_{x}}p(d)p(t|d)p(x|d).\end{split}$ (3)

In Eq. 3, we use the following notations [7]:

* $p(d)$ is the prior probability of $d$, which can also be interpreted as the weight (or importance) of $d$.
* $p(x|d)$ is the conditional probability of $x$ given $d$ (e.g. it can be estimated simply from the position of $x$ in the co-author list of $d$ [7]).
* $p(t|d)$ is the conditional probability of $t$ given $d$.

In the DLM [6, 7], it is assumed that a document $d$ is described as a collection of terms that appear in $d$. The importance of a term $w\in t$ within $d$ is determined by the proportion of its occurrences. DLMs capture this notion by representing a document as a multinomial probability distribution over the vocabulary of terms. To estimate $p(t|d)$, let $\theta_{d}$ be the document model of $d$, and let the probability of $t$ in $\theta_{d}$ be $p(t|\theta_{d})$. This $p(t|\theta_{d})$ indicates how likely we are to see $t$ if we sampled randomly from $d$. Thus, $p(t|d)$ is rewritten as $p(t|\theta_{d})$, taking the product of $t$'s individual term probabilities as follows [6, 7]:

$p(t|\theta_{d})=\prod_{w\in t}p(w|\theta_{d}),$ (4)

where $w$ is an individual term in $t$. However, a limitation of Eq. 4 is that unseen terms in $d$ would get a zero probability. Thus, it is common in DLMs to introduce a smoothing factor to assign non-zero probability to the unseen terms.
Typically, this is done by reducing the probabilities of the terms seen in the corpus and assigning the additional probability mass to unseen terms. Formally, $p(w|\theta_{d})$ is re-expressed as:

$p(w|\theta_{d})=(1-\lambda_{\theta})p(w|d)+\lambda_{\theta}p(w|\mathcal{D})$ (5)

where $p(w|d)$ is estimated by the term frequency of $w$ in $d$ divided by $|d|$ (the number of terms in $d$), denoted as $tf(w,d)$, and $p(w|\mathcal{D})$ is the term frequency of $w$ in $\mathcal{D}$ normalised by $|\mathcal{D}|$, denoted as $tf(w,\mathcal{D})$. The parameter $\lambda_{\theta}$ controls the influence of the two probabilities.

We now present our novelty for estimating $p(t|d)$ using $n$TFIDF. Since $n$TFIDF is an extension of TFIDF, we briefly describe how $p(t|d)$ can be estimated using TFIDF in VSM. In a sense, $p(t|d)$ can also be interpreted using TFIDF [35]. Note that TFIDF is a measure based on the distance between two probability distributions, expressed as the cross-entropy: (1) a local distribution of $w\in t$ in $d$, and (2) a global distribution of $w$ in $\mathcal{D}$. TFIDF is a measure of perplexity between these two distributions; a higher perplexity score implies a higher relevance of $d$ to $w$. The cross-entropy between distributions $p_{w}$ and $q_{w}$ is as follows:

$-\sum_{w}p_{w}\log q_{w}=\sum_{w}p_{w}\log\frac{1}{q_{w}}.$ (6)

If we substitute $p_{w}$ with ${tf(w,d)}$ (TF) and $\frac{1}{q_{w}}$ with the inverted probability of encountering $d$ with a term $w$ (IDF), denoted as $\frac{|\mathcal{D}|}{df(w)}$, where $df(w)$ is the document frequency of $w$, we obtain a TFIDF formula:

$p(t|d)\approx\sum_{w\in t}{tf(w,d)}\log\frac{|\mathcal{D}|}{df(w)}.$ (7)

Thus, as highlighted in [36], VSM and DLM are closely related. The TF component ${tf(w,d)}$ is exactly the same as the probability of seeing a term $w$ in DLM. The IDF component $\frac{|\mathcal{D}|}{df(w)}$ is implicitly related to a smoothing method in DLM that uses the collection frequency ($tf(w,\mathcal{D})$: the term frequency of $w$ in $\mathcal{D}$ normalised by $|\mathcal{D}|$).

Based on the above observation, we now present our approach for estimating $p(t|d)$ using $n$TFIDF in $n$VSM. Although some variant forms of TFIDF have been proposed, the majority of TFIDF methods use the same IDF function [15]. However, one drawback of IDF is that it cannot handle $N$-grams, contiguous sequences of $N$ terms (for $N>1$). The reason is that IDF tends to give a higher weight to a term that occurs in fewer documents. Typically, phrases occur in fewer documents when their collocations are less common. Thus, uncommon phrases (e.g. noise phrases) are unintentionally assigned high weights, conflicting with the definition of a good phrase as a succinct conceptual descriptor in text. To address this, $N$-gram IDF for weighting phrases was recently proposed [15]. $N$-gram IDF has shown the ability to accurately estimate the weights of dominant phrases of any length, simply using the domain corpus. The key in $n$VSM is the proposed formula $n$TFIDF, which combines the frequency of a topic $t$ with $t$'s $N$-gram IDF. As this frequency, we use the average frequency of the constituent terms in $t$.
Formally, using $n$TFIDF, $p(t|d)$ is defined as:

$\begin{split}p(t|d)\approx&\;n{\rm TFIDF}(t,d)=ntf(t,d)\cdot nidf(t),\quad{\rm where}\\ &ntf(t,d)=\frac{\sum_{i=1}^{n}{tf(w_{i},d)}}{|t|},\\ &nidf(t)=\log\frac{|\mathcal{D}|\cdot df(t)}{df(w_{1}\land w_{2}\land\ldots\land w_{n})^{2}}\end{split}$ (8)

where $w_{1},\ldots,w_{n}$ are the $n$ constituent terms of $t$, $tf(w_{i},d)$ is the term frequency of $w_{i}$ in $d$ normalised by $|d|$, $|t|$ is equal to $n$, and $nidf(t)$ is the $N$-gram IDF method for $t$ [15]. Eq. 8 applies for all $|t|\geq 1$; when $|t|=1$, $nidf(t)$ is equal to the log-IDF, $\log\frac{|\mathcal{D}|}{df(t)}$, in Eq. 7. Finally, in $n$VSM, $p(x|t)$ in Eq. 3 is calculated using $p(t|d)$ in Eq. 8 and is stored in the entry $\mathbf{TX}_{i(t),i(x)}$, where $i(t)$ and $i(x)$ indicate the row and column index of $t$ and $x$, respectively, in $\mathbf{TX}$.

#### 4.2.2 Topic-Document Matrix Creation

To create a topic-document matrix $\mathbf{TD}$, we need to calculate the topic-sensitive weight of a document $d$ given a topic $t$. Following the idea of the DLM [6] again, we can estimate this weight by calculating the probability of $d$ being relevant to $t$: $p(d|t)$. Using Bayes' theorem, $p(d|t)$ can be calculated as:

$p(d|t)={p(t|d)p(d)}/{p(t)},$ (9)

where $p(t)$ can be ignored as it is constant over all documents. Thus, $p(t|d)p(d)$ can be calculated by multiplying $p(t|d)$ in Eq. 8 and $p(d)$. Finally, $p(d|t)$ is stored in the entry $\mathbf{TD}_{i(t),i(d)}$, where $i(t)$ and $i(d)$ indicate the row and column index of $t$ and $d$, respectively, in $\mathbf{TD}$.

### 4.3 Construct an ECG

Although we have estimated the topic-sensitive weight of an expert $x$ for a given topic $t$ in $n$VSM, one potential limitation is that $p(x|t)$ in Eq. 3 relies mainly on the documents $\mathcal{D}_{x}$ (i.e. $\sum_{d\in\mathcal{D}_{x}}$), ignoring the social importance (or influence) of experts. Our premise is that the expertise degree of $x$ on $t$ can depend not only on $x$'s knowledge of $t$, but also on $x$'s social importance among $t$'s collaborating experts in a given domain. Thus, we propose that an expert collaboration graph (i.e. ECG) can also be a valuable source for estimating such social importance. This estimation is achieved by identifying more authoritative (or influential) topic-sensitive experts considering their joint documents. That is, in a sense, an ECG is a social network of experts, and ExpFinder calculates the authority score of $x$ by repeatedly exploring the collective importance of the joint documents, published by $x$ and $x$'s coauthors, using $\mu$CO-HITS over the ECG. More specifically, ExpFinder incorporates the topic-sensitive weights of experts given a topic, estimated by $n$VSM, into an ECG and reinforces such weights using $\mu$CO-HITS.

Let ${G}=(V,E)$ be an ECG (i.e. a directed, weighted bipartite graph) that has two node types: experts $\mathcal{X}$ (also called authorities) and documents $\mathcal{D}$ (also called hubs). Thus, the node set $V=\mathcal{X}\cup\mathcal{D}$. In ${G}$, no expert is directly connected to another expert, and likewise for documents. A directed edge points from a document $d\in\mathcal{D}$ to an expert $x\in\mathcal{X}$ if $x$ has authorship of $d$. This edge is denoted as ${e_{dx}}$. Thus, the set of edges $E$ contains directed edges from $\mathcal{D}$ to $\mathcal{X}$. Given ${e_{dx}}$, its weight, denoted as $w_{dx}$, comes from $\mathbf{DX}_{i(d),i(x)}$ (see Section 3.2).
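Before turning to the score propagation, we make the $n$TFIDF weighting of Eq. 8 concrete with a minimal Python sketch, assuming documents are given as lists of lemmatised tokens; the function names (`ntfidf`, `contains_phrase`) are illustrative and are not taken from the released ExpFinder implementation.

```python
import math
from collections import Counter

def contains_phrase(tokens, phrase):
    """True if `phrase` (a list of terms) occurs contiguously in `tokens`."""
    n = len(phrase)
    return any(tokens[i:i + n] == phrase for i in range(len(tokens) - n + 1))

def ntfidf(topic, doc, corpus):
    """nTFIDF(t, d) = ntf(t, d) * nidf(t), following Eq. 8.

    topic:  the phrase t as a list of constituent terms [w1, ..., wn]
    doc:    document d as a list of lemmatised tokens
    corpus: the whole collection D as a list of token lists
    """
    counts = Counter(doc)
    # ntf(t, d): average of the constituent terms' normalised frequencies in d
    ntf = sum(counts[w] / len(doc) for w in topic) / len(topic)
    # df(t): number of documents containing t as a contiguous phrase
    df_phrase = sum(1 for d in corpus if contains_phrase(d, topic))
    # df(w1 AND ... AND wn): number of documents containing all constituent terms
    df_and = sum(1 for d in corpus if all(w in d for w in topic))
    if df_phrase == 0 or df_and == 0:
        return 0.0  # guard: the topic never occurs in the corpus
    nidf = math.log(len(corpus) * df_phrase / df_and ** 2)
    return ntf * nidf

# Example: weight of the topic "semantic web" in the first document
corpus = [["semantic", "web", "service"], ["web", "mining"], ["semantic", "parsing"]]
print(ntfidf(["semantic", "web"], corpus[0], corpus))  # ~0.366
```

Note that for a single-term topic, `df_phrase` equals `df_and`, so the sketch reduces to the familiar log-IDF of Eq. 7, as stated above.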
Figure 2: An example ECG (a) and the score propagation of authority and hub nodes (b).

An example ECG is depicted in Fig. 2(a), where the solid lines show associations between experts and documents. The dashed arrows show implicit collaborations between experts via their joint documents: e.g., $x_{1}$ and $x_{2}$ have the joint documents $d_{1}$ and $d_{2}$, so a collaboration between them is established as a bidirectional dashed arrow.

### 4.4 Reinforcing Expert Weights using $\mu$CO-HITS

Note that $\mu$CO-HITS is a variation of CO-HITS [16]. Thus, we first present the basic notion of CO-HITS on the structure of an ECG. That is, an important document is expected to point to important experts, while an important expert is linked from important documents. The importance of an expert $x$ is called the authority score of $x$, and the importance of a document $d$ is called the hub score of $d$. These scores are non-negative weights. Here, our goal is to reinforce the topic-sensitive weights of experts, estimated by $n$VSM, using $\mu$CO-HITS on the underlying ECG. For this, our idea is that given a topic $t$, we propagate the authority and hub scores with respect to $t$ by traversing $\mathcal{X}$ and $\mathcal{D}$ on the ECG via an iterative process. An example is shown in Fig. 2(b), where the hub scores, $H(d_{1})$ and $H(d_{2})$, are propagated to the expert $x_{1}$ to update the authority score $A(x_{1})$; $H(d_{2})$ is also propagated to update $A(x_{2})$; and $H(d_{3})$ is propagated to update $A(x_{2})$ and $A(x_{3})$. Once all the authority scores are updated, these scores are again propagated to the hubs to update their scores. This process is performed iteratively. The intuition behind the iteration is the repeated mutual reinforcement of authority and hub scores from co-linked nodes on the ECG.

In order for ExpFinder to incorporate a topic into $\mu$CO-HITS, we take two steps. First, we extend the CO-HITS equation [16] to accommodate a topic. We call this extension topic-sensitive CO-HITS. As the initial authority and hub scores, our key idea is to use the estimated topic-sensitive weights of experts and documents in $n$VSM, respectively. Second, we design $\mu$CO-HITS, our variation of topic-sensitive CO-HITS, and apply it to the ECG. We elaborate these two steps in the rest of this section.

As the first step, we formally present the topic-sensitive CO-HITS equation, given an expert $x$ and a topic $t$:

$\displaystyle\begin{split}A(x;t)^{k}&=(1-\lambda_{x})\alpha_{x;t}+\lambda_{x}\sum_{e_{dx}\in E}w_{dx}H(d;t)^{k-1}\\ H(d;t)^{k}&=(1-\lambda_{d})\alpha_{d;t}+\lambda_{d}\sum_{e_{du}\in E}w_{du}A(u;t)^{k}\end{split}$ (10)

where

* $A(x;t)^{k}$ and $H(d;t)^{k}$ are the topic-sensitive authority score of $x$ and the topic-sensitive hub score of $d$, respectively, given $t$ at the $k$-th iteration.
* $w_{dx}$ denotes the weight of the edge $e_{dx}$; thus $w_{dx}=\mathbf{DX}_{i(d),i(x)}$ and $w_{du}=\mathbf{DX}_{i(d),i(u)}$, where $i(d)$ and $i(x)$ indicate the row and column index of $d$ and $x$, respectively, in $\mathbf{DX}$.
* $k$ indicates an iteration number starting from 1.
* $\alpha_{x;t}$ is the initial score for $A(x;t)^{*}$ and $\alpha_{d;t}$ is the initial score for $H(d;t)^{*}$ given $t$. We call these scores personalised weights. In this work, these personalised weights are normalised using the widely used L2-norm [37], that is, $\left(\sum_{x_{i}\in\mathcal{X}}\alpha_{x_{i};t}^{2}\right)^{1/2}=1$ and $\left(\sum_{d_{i}\in\mathcal{D}}\alpha_{d_{i};t}^{2}\right)^{1/2}=1$.
Also, after the $k$-th iteration, the $A(x;t)^{k}$ and $H(d;t)^{k}$ scores are L2-normalised, i.e., divided by the square root of the sum of their squares. Assigning the personalised weights provides crucial information in CO-HITS, as they carry valuable information and affect the propagation of the updates of both authority and hub scores [16]. Our approach to determining the personalised weights is presented when discussing our proposed variation in Eq. 11.

* $\lambda_{x}\in[0,1]$ and $\lambda_{d}\in[0,1]$ are personalised parameters for experts and documents, respectively. These parameters determine how much we consider the personalised weights when calculating the $k$-th scores. Assigning lower values indicates that higher importance is given to the personalised weights while reducing the propagation effects of co-linked nodes.

Using Eq. 10, the topic-sensitive CO-HITS algorithm performs as follows: (1) with the personalised weights, a user-specified $k$ and a topic $t$, update all authority scores of $\mathcal{X}$; and (2) update all hub scores of $\mathcal{D}$. These steps are repeatedly performed $k$ times.

In the second step, we design the $\mu$CO-HITS equation and apply it to the underlying ECG using two variation schemes of topic-sensitive CO-HITS:

$\displaystyle\begin{split}A(x;t)^{k}&=(1-\lambda_{x})A(x;t)^{k-1}+\lambda_{x}\left(\frac{\sum\limits_{e_{dx}\in E}w_{dx}H(d;t)^{k-1}}{\sum\limits_{e_{dx}\in E}w_{dx}}\right)\\ H(d;t)^{k}&=(1-\lambda_{d})H(d;t)^{k-1}+\lambda_{d}\left(\frac{\sum\limits_{e_{du}\in E}w_{du}A(u;t)^{k}}{\sum\limits_{e_{du}\in E}w_{du}}\right)\end{split}$ (11)

where the interpretation of all the variables is the same as for Eq. 10, except for the following two variation schemes.

The first variation scheme is that rather than using the fixed personalised weights $\alpha_{x;t}$ and $\alpha_{d;t}$, $\mu$CO-HITS uses dynamic personalised weights $A(x;t)^{k-1}$ and $H(d;t)^{k-1}$ at each $k$-th iteration. In Eq. 10, regardless of iterations, the personalised weights at each iteration are fixed to $\alpha_{x;t}$ and $\alpha_{d;t}$. In contrast, our approach uses the ($k$-1)-th authority and hub scores as the personalised weights at the $k$-th iteration. By doing so, in the calculation of the authority (resp. hub) scores at the $k$-th iteration, our aim is to exploit both the propagation of the hub (resp. authority) scores and the effect of the authority (resp. hub) score at the ($k$-1)-th iteration. Thus, in $\mu$CO-HITS, personalised weights are updated at each iteration based on the authority and hub scores of the previous iteration. As the initial personalised weights, we use the topic-sensitive weights of experts and documents estimated using $n$TFIDF in $n$VSM. Thus, $A(x;t)^{0}=\mathbf{TX}_{i(t),i(x)}$ and $H(d;t)^{0}=\mathbf{TD}_{i(t),i(d)}$. Similarly, in the topic-sensitive CO-HITS equation in Eq. 10, $\alpha_{x;t}$ and $\alpha_{d;t}$ are set to $A(x;t)^{0}$ and $H(d;t)^{0}$, respectively. By doing so, we integrate $n$VSM with $\mu$CO-HITS, generating a new unified formula for this integration. Our intuition for this integration is to improve the accuracy of expert finding by further exploring the implicit relationships between experts, derived from the ECG, in addition to the results of the $n$VSM approach. Note that $n$VSM ignores such relationships, only utilising the importance of a document $d$; the importance of a topic $t$ in the documents of an expert $x$; and the importance of $x$ given $d$ (see Eq. 3).
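As an illustration, a minimal NumPy sketch of the $\mu$CO-HITS updates in Eq. 11 for a single topic is given below (the weighted-average aggregation it uses is the second variation scheme, discussed next); the function name `mu_cohits` is ours, and the default $\lambda$ values follow the best settings found later in Section 5.4.

```python
import numpy as np

def mu_cohits(a0, h0, W, lam_x=1.0, lam_d=0.7, k=5):
    """One run of the mu-CO-HITS updates (Eq. 11) for a single topic.

    a0: initial authority scores, the topic's row of TX, shape (|X|,)
    h0: initial hub scores, the topic's row of TD, shape (|D|,)
    W:  document-expert weight matrix DX, shape (|D|, |X|);
        W[d, x] > 0 iff there is an edge from document d to expert x
    """
    def l2(v):
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v

    a, h = l2(a0.astype(float)), l2(h0.astype(float))
    in_w = W.sum(axis=0)   # total edge weight into each expert
    out_w = W.sum(axis=1)  # total edge weight out of each document
    for _ in range(k):
        # authority update: previous score as the dynamic personalised
        # weight, plus the weighted average of incoming hub scores
        avg_h = np.divide(W.T @ h, in_w, out=np.zeros_like(a), where=in_w > 0)
        a = l2((1 - lam_x) * a + lam_x * avg_h)
        # hub update: uses the freshly updated authority scores
        avg_a = np.divide(W @ a, out_w, out=np.zeros_like(h), where=out_w > 0)
        h = l2((1 - lam_d) * h + lam_d * avg_a)
    return a, h
```

With `lam_x = lam_d = 0`, the loop leaves the (L2-normalised) initial $n$VSM scores untouched, matching the second feature of $\mu$CO-HITS noted at the end of this section.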
The second variation scheme is that the aggregation of the authority and hub scores is different from that of topic-sensitive CO-HITS. In Eq. 10, $A(x;t)^{k}$ and $H(d;t)^{k}$ are calculated based on the square root of the sum of squares of $H(d;t)^{k-1}$ and $A(x;t)^{k}$, respectively. This approach tends to assign a higher authority score to an expert $x$ who has more documents (i.e. larger $|\mathcal{D}_{x}|$). Similarly, it is likely that a higher hub score is given to a document $d$ that is linked to more experts (i.e. larger $|\mathcal{X}_{d}|$). Instead, in $\mu$CO-HITS, we use the central tendency of $H(d;t)^{k-1}$ to calculate $A(x;t)^{k}$, and the central tendency of $A(x;t)^{k}$ to calculate $H(d;t)^{k}$. The 'average' is used to measure this central tendency. The reason is that we have already incorporated the idea of using 'the sum of squares of authority and hub scores', used in topic-sensitive CO-HITS, in the context of $n$VSM. Note that in $n$VSM, we calculated the topic-sensitive weights of experts using the sum operator, as seen in Eq. 3 (i.e. $\sum_{d\in\mathcal{D}_{x}}$). Thus, to avoid the duplicated use of this 'sum' operator, given a topic $t$, we design $\mu$CO-HITS in a way that estimates the importance of an expert $x$ at the $k$-th iteration (i.e. $A(x;t)^{k}$) by calculating the average of the ($k$-1)-th hub scores, in addition to the personalised weight $A(x;t)^{k-1}$. Similarly, we estimate the importance of a document $d$ (i.e. $H(d;t)^{k}$) by calculating the average of the $k$-th authority scores, in addition to the personalised weight $H(d;t)^{k-1}$. In the name $\mu$CO-HITS, '$\mu$' indicates the 'average', so $\mu$CO-HITS means a topic-sensitive CO-HITS that uses the 'average' importance of authority and hub scores.

We also highlight other features of $\mu$CO-HITS. First, as with topic-sensitive CO-HITS, the updated authority and hub scores at each iteration are normalised using the L2-norm. Second, if $\lambda_{x}$ and $\lambda_{d}$ are 0, $\mu$CO-HITS returns the initial personalised weights at each iteration. In that case, ExpFinder does not use the score propagation effects on the ECG, returning the same results obtained from $n$VSM. Third, if $\lambda_{x}$ and $\lambda_{d}$ are both equal to 1, $\mu$CO-HITS does not incorporate personalised weights. However, it calculates $A(x;t)^{1}$ based on $H(d;t)^{0}$, which was obtained from the topic-document matrix $\mathbf{TD}$ (i.e., $H(d;t)^{0}=\mathbf{TD}_{i(t),i(d)}$) generated by $n$VSM. Also, $H(d;t)^{1}$ is calculated based on $A(u;t)^{1}$.

## 5 Evaluation of ExpFinder

To assess the effectiveness of ExpFinder, we conduct the following evaluation. First, we measure the effectiveness of the first knowledge facet of ExpFinder: $n$VSM (Section 5.3). Second, we show how to empirically find the best values for the personalised parameters of ExpFinder (Section 5.4). Third, we show that ExpFinder is a highly competitive model for expert finding, in comparison with $n$VSM and two GM approaches [9, 19]. Further, to show the capability of the second knowledge facet of ExpFinder, i.e., $\mu$CO-HITS, over topic-sensitive CO-HITS (hereafter simply CO-HITS), we compare ExpFinder with an alternative ExpFinder form that combines $n$VSM and CO-HITS (Section 5.5). Finally, we summarise our evaluation (Section 5.6).

### 5.1 Datasets

We use four benchmark datasets in our evaluation. One is the Lattes Expertise Retrieval (LExR) test collection [17] (available at http://toinebogers.com/?page_id=240) for expertise retrieval in academia.
LExR provides a comprehensive, large-scale benchmark for evaluating expertise retrieval, covering experts from all knowledge areas (e.g. earth sciences, biology, health sciences, languages, art) working in research institutions all over Brazil. Most publications are written in Portuguese, Spanish and English; in our evaluation, we only consider the English documents. The other three datasets (available at http://www.lbd.dcc.ufmg.br/lbd/collections) are Information Retrieval (IR), Semantic Web (SW), and Computational Linguistics (CL), which are filtered subsets of the DBLP dataset [18]. In these four datasets, we regard the authors as experts $\mathcal{X}$ and the publications as documents $\mathcal{D}$, where each publication is seen as a mixture of title and abstract. From $\mathcal{D}$, we extract phrases as the first step in ExpFinder (Section 3). These datasets also provide the ground truth about who the known experts for the known topics are. In LExR, the expertise degrees for each topic are represented as non-relevance, relevance, and high relevance. We regard individuals with non-relevance as non-experts, and individuals with relevance and high relevance as experts. IR, SW and CL also provide the expert list for each topic. We regard the candidates in such a list as experts, and all others as non-experts.

For each dataset, we perform the following preprocessing steps. First, we remove publications containing an empty title and abstract. Second, we remove publications whose abstracts provide little information, that is, fewer than 5 words after removing stopwords. Third, we remove duplicated topics. Table I shows an overview of the datasets after performing these steps.

TABLE I: A summary of our four datasets

| | LExR | IR | SW | CL |
|---|---|---|---|---|
| # of documents | 14879 | 2355 | 1519 | 1667 |
| # of experts | 620 | 276 | 394 | 358 |
| # of topics | 227 | 268 | 2046 | 1583 |
| Avg. # of documents per expert | 28 | 9 | 4 | 5 |
| Avg. # of experts per topic | 6 | 10 | 9 | 8 |
| Median # of experts per topic | 5 | 8 | 6 | 5 |
| Max # of experts per topic | 26 | 177 | 226 | 158 |

We note that our chosen datasets are relatively more comprehensive, in terms of the number of topics considered, than those of some previous works focused on academic domains, thereby providing a reasonable measure of the effectiveness of ExpFinder. For example, the works [38], [9] and [7] used two datasets with seven topics, two datasets with 13 and 203 topics, and one dataset with 14 topics, respectively (these datasets are also no longer publicly available). Our evaluation has been done using the larger numbers of topics in the four datasets, as seen in Table I.
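The three preprocessing steps above can be sketched as follows; the dictionary schema (`title`, `abstract` fields) and the stopword list are assumptions for illustration, not the exact cleaning code behind Table I.

```python
def preprocess(publications, topics, stopwords):
    """The three dataset-cleaning steps described above (schema assumed).

    publications: list of dicts with 'title' and 'abstract' string fields
    topics:       list of known topic strings
    stopwords:    set of stopword strings
    """
    docs = []
    for pub in publications:
        title = pub.get("title", "").strip()
        abstract = pub.get("abstract", "").strip()
        # Step 1: drop publications with an empty title and abstract
        if not title and not abstract:
            continue
        # Step 2: drop abstracts with fewer than 5 non-stopword words
        content_words = [w for w in abstract.lower().split() if w not in stopwords]
        if len(content_words) < 5:
            continue
        docs.append(pub)
    # Step 3: drop duplicated topics, keeping the first occurrence
    seen = set()
    unique_topics = [t for t in topics if not (t in seen or seen.add(t))]
    return docs, unique_topics
```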
TABLE II: Expertise topics and corresponding similar phrases in the four datasets

| LExR: Topic | LExR: Phrase | IR: Topic | IR: Phrase | SW: Topic | SW: Phrase | CL: Topic | CL: Phrase |
|---|---|---|---|---|---|---|---|
| synthesis | synthesis | information retrieval | information retrieval | semantic web | semantic web | question answering | question answering |
| risk factor | risk factor | search engine | search engine | linked data | linked data | knowledge transfer | knowledge transfer |
| public health | public health | patent search | patent search | text understanding | text understanding | summarization | summarization |
| thin film | ultrathin film | data modelling | information modelling | auction | bidding | machine vision | computer vision |
| development validation | validation process | cooperative work | collaborative working | invalidity | inadequacy | image classification | image recognition |

### 5.2 Evaluation Framework

We present our evaluation configuration and metrics. Recall that as a topic, we use a phrase; we assume that the maximum word length of each phrase is 3 in our evaluation. Also, we observe that there is no guarantee that an original known topic $t_{g}$ always appears in the documents $\mathcal{D}$ of each dataset. Thus, given each $t_{g}$, we find its most similar phrase $t$ in $\mathcal{D}$, and $t$ is then used as the topic instead of $t_{g}$. To find $t$ given $t_{g}$, we use the scientific pre-trained model SciBERT [39] (available at https://github.com/allenai/scibert), a scientific language model trained on the full text of 1.14M papers and 3.1B words, where the papers were collected from 'semanticscholar.org'. This model allows us to measure the semantic similarity between $t_{g}$ and $t$ as the cosine similarity of their corresponding vectors in the model. More specifically, assume that $s_{1}$ is an original known topic and $s_{2}$ is a phrase extracted from $\mathcal{D}$. Then, we measure their similarity as:

$sim(s_{1},s_{2})\approx\cos(\vec{s_{1}},\vec{s_{2}})=\frac{\vec{s_{1}}\cdot\vec{s_{2}}}{\left\|\vec{s_{1}}\right\|\left\|\vec{s_{2}}\right\|},$ (12)

where $\vec{s_{1}}$ and $\vec{s_{2}}$ are the vectors representing $s_{1}$ and $s_{2}$ in SciBERT, respectively. Each of these vectors is estimated by the average of the embedded vectors of its constituent terms. Suppose that $s_{1}$ consists of $n$ terms, $s_{1}=(w_{1},\ldots,w_{n})$; then $\vec{s_{1}}\approx\frac{1}{n}(\vec{w_{1}}+\ldots+\vec{w_{n}})$, where $(\vec{w}_{1},\ldots,\vec{w}_{n})$ are the embedded vectors of $(w_{1},\ldots,w_{n})$. The same principle is applied to $s_{2}$. Table II shows examples of five topic-phrase pairs in each dataset, where each pair shows an original known topic $t_{g}$ and the most similar phrase $t$ used as a topic in our evaluation. As we see, some phrases ($t$) are equal to the original topic ($t_{g}$), while some other phrases are semantically very similar to the corresponding original topic (e.g. 'image classification'-'image recognition' on CL).

Our other evaluation configuration is as follows: (1) we assume that the importance of documents is the same (i.e. $p(d)$=1) and the importance of all experts of $d$ is the same (i.e. $p(x|d)$=1) in Eq. 3, since one of our primary focuses is to evaluate the capability of $n$TFIDF in $n$VSM in calculating $p(t|d)$ in Eq. 3; (2) accordingly, we also fix $w_{dx}$=1 and $w_{du}$=1 in Eq. 10 and Eq. 11; and (3) from our empirical testing, we observed that Eq. 10 and Eq.
11 commonly converge after 5 iterations, so we set $k=5$. For all expert finding models in our evaluation, our aim is to generate a ranked list of experts based on the outcome of the corresponding formula: (1) Eq. 11 in ExpFinder, (2) Eq. 10 in CO-HITS, (3) Eq. 3 and Eq. 4 in DLM, (4) Eq. 3 and Eq. 7 in TFIDF-based VSM, and (5) Eq. 3 and Eq. 8 in $n$VSM, where all these methods are compared, in addition to WISER [8] and RepModel [19].

We use two widely-used evaluation metrics for expert finding [7, 16]: (1) precision at rank n (P@n) and (2) Mean Average Precision (MAP). P@n measures the relevance of the top-$n$ ranked experts with respect to a given query topic, defined as [16]:

$P@n={|S_{n}\cap R_{t}|}/{n},$ (13)

where $S_{n}$ is the set of top-$n$ recommended experts for a given topic $t$, and $R_{t}$ is the set of known experts for $t$. We report P@10 to P@30 (in increments of 5) for each topic and take the average over all topics. MAP measures the overall ability of a method to differentiate between known experts and non-experts. The average precision (AP) is defined as [7]:

$AP=\frac{\sum_{i=1}^{n}(P@i\cdot rel(i))}{|R_{t}|}$ (14)

where $i$ is the rank and $rel(i)$ is a binary function whose value is 1 if the result at rank $i$ is a known expert, and 0 otherwise. MAP is the mean of the AP values over all topics, and we use $n=30$ as in [38].

Figure 3: Comparison on the average precision (AP) values: the x-axis shows n of P@n and the y-axis shows AP values.

TABLE III: MAP and the improvement ratio of $n$VSM.

| | LExR | IR | SW | CL | Avg. |
|---|---|---|---|---|---|
| DLM-0.5 | 0.200 (233.0%) | 0.208 (6.7%) | 0.070 (51.4%) | 0.063 (68.3%) | 0.135 (103.7%) |
| DLM-0.6 | 0.159 (318.9%) | 0.185 (7.8%) | 0.068 (55.9%) | 0.058 (82.8%) | 0.118 (133.1%) |
| TFIDF | 0.493 (35.1%) | 0.206 (7.8%) | 0.087 (21.8%) | **0.120** (-11.7%) | 0.226 (21.7%) |
| WISER | 0.117 (469.2%) | 0.150 (48.0%) | 0.057 (86.0%) | 0.051 (107.8%) | 0.094 (192.6%) |
| $n$VSM | **0.666** | **0.222** | **0.106** | 0.106 | **0.275** |

### 5.3 Evaluation of $n$VSM

As $n$VSM is one key component of ExpFinder, we first measure its effectiveness. As presented in Section 4.2, VSM and DLM are closely related concepts. Thus, we compare $n$VSM with a TFIDF-based VSM and two particular DLMs: (1) the TFIDF-based VSM expressed using Eq. 3 and Eq. 7 (denoted as TFIDF); (2) the DLM [6, 7] expressed using Eq. 3 and Eq. 4, in which the probability of individual terms is estimated by Eq. 5, where we use the two suggested best values for $\lambda_{\theta}$: 0.5 (DLM-0.5) and 0.6 (DLM-0.6), following [6] and [7], respectively; and (3) a recent probabilistic model, WISER [8], which combines the document-centric approach, exploiting the occurrence of topics in experts' documents, with the profile-centric approach, computing relatedness between experts using an external knowledge source, Wikipedia. Since our work does not consider such an external knowledge source, we only consider WISER with the document-centric approach for a fair comparison. In WISER, the topic-sensitive weight of an expert $x$ given a topic $t$ is calculated using the Reciprocal Rank [40]: $\sum_{d\in\mathcal{D}_{x,t}}\frac{1}{rank(d)}$, which aggregates over the ranks of $x$'s documents in which $t$ appears ($\mathcal{D}_{x,t}$). Since $t$ is a phrase, $\mathcal{D}_{x,t}$ consists of the subset of $\mathcal{D}_{x}$ in which any of $t$'s constituent terms appears.
$rank(d)$ is the ranking position of a document $d$ in $\mathcal{D}$, where the position is determined by BM25 [24]. The hyper-parameters $k1$ and $b$ in BM25 are set to 1.2 and 0.75, respectively, following the suggestion in [41].

The evaluation results are presented in Fig. 3, which shows the AP values at n (n=10, 15, $\ldots$, 30) of P@n for all topics. We observe the following: (1) overall, the VSM approaches (TFIDF and $n$VSM) largely outperform DLM-0.5, DLM-0.6 and WISER, indicating that the VSM approaches can be used more effectively for identifying topic-sensitive experts than the compared DLMs; (2) DLM-0.5 is consistently better than DLM-0.6, but their difference seems minor; and (3) $n$VSM is clearly better than TFIDF from P@10 to P@30 consistently over all four datasets. Also, $n$VSM substantially outperforms all the compared methods on all four datasets.

Table III shows the results on MAP and the relative improvement ratio of $n$VSM over the other models. The best one in each dataset is denoted in boldface. We see that $n$VSM's improvements over DLM-0.5 and DLM-0.6 are substantial: up to 318.9% over DLM-0.6 on LExR, 7.8% over DLM-0.6 on IR, 55.9% on SW, and 82.8% on CL. Moreover, $n$VSM substantially outperforms WISER, from 48.0% on IR to 469.2% on LExR. Also, $n$VSM is clearly better than TFIDF except in the single case of CL. On average, $n$VSM outperforms DLM-0.5 by 103.7%, DLM-0.6 by 133.1%, WISER by 192.6%, and TFIDF by 21.7% across the four datasets. In summary, the results provide empirical evidence that $n$VSM is competitive and can be effectively used for expert finding. They also show that ExpFinder is equipped with a powerful component, $n$VSM, for expert finding.

### 5.4 Finding the Best Values for Personalised Parameters in ExpFinder: $\lambda_{x}$ and $\lambda_{d}$

We now present how to empirically find the best values for the personalised parameters $\lambda_{x}$ and $\lambda_{d}$ of $\mu$CO-HITS (see Eq. 11), which is the other key component of ExpFinder. Our approach is to make full use of all four datasets to determine such values. For this, we measure the mean impact of different values of $\lambda_{x}$ and $\lambda_{d}$, respectively, on the MAP results from the four datasets. Our aim is to provide an empirical guideline for choosing the best values for these parameters.

Formally, let $Z$ be the set of candidate values [0, 0.1, $\ldots$, 1.0] for $\lambda_{x}$ and $\lambda_{d}$. Then, let us define MAP$(a,b)$ as the MAP value using a pair of $a\in Z$ for $\lambda_{x}$ and $b\in Z$ for $\lambda_{d}$. First, we choose the best value for $\lambda_{x}$. To this end, for each value $a\in Z$, we compute the mean of the MAP values over all values in $Z$ in each dataset:

$Avg(a,\lambda_{x})=\frac{1}{|Z|}\sum_{b\in Z}{\textrm{MAP}}(a,b).$ (15)

Then, we obtain the $|Z|$-length vector of $Avg(a,\lambda_{x})$ for all values in $Z$, denoted as $Avg(Z,\lambda_{x})$. For example, if $Avg(Z,\lambda_{x})=[1,0.9,0.8,\ldots,0]$, then the corresponding element-wise rank vector is $R(Avg(Z,\lambda_{x}))=[11,10,9,\ldots,1]$, where a higher rank indicates a higher mean of the MAP values. Similarly, we use $R^{i}(Avg(Z,\lambda_{x}))$ to denote the $R(Avg(Z,\lambda_{x}))$ calculated on dataset $i$. Finally, we compute the element-wise mean rank across the four datasets:

$AvgR(Z,\lambda_{x})=\frac{1}{n}\sum_{i=1}^{n}{R^{i}(Avg(Z,\lambda_{x}))},$ (16)

where $n=4$ is the number of datasets.
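A minimal sketch of this rank-averaging procedure for $\lambda_{x}$ (Eqs. 15 and 16) is given below; the helper `map_score(a, b, dataset)`, which computes MAP$(a,b)$ on one dataset, is hypothetical and assumed to be supplied by the surrounding evaluation harness.

```python
import numpy as np

def best_lambda_x(datasets, map_score):
    """Rank-averaging selection of lambda_x (Eqs. 15 and 16).

    map_score(a, b, dataset) -> MAP(a, b) on one dataset (assumed helper).
    """
    Z = [round(0.1 * i, 1) for i in range(11)]  # candidate values 0.0 .. 1.0
    rank_sum = np.zeros(len(Z))
    for ds in datasets:
        # Eq. 15: for each a, the mean MAP over all lambda_d candidates b
        avg = np.array([np.mean([map_score(a, b, ds) for b in Z]) for a in Z])
        # element-wise ranks: the highest mean MAP gets the highest rank |Z|
        ranks = np.empty(len(Z))
        ranks[np.argsort(avg)] = np.arange(1, len(Z) + 1)
        rank_sum += ranks
    mean_rank = rank_sum / len(datasets)  # Eq. 16 with n = len(datasets)
    return Z[int(np.argmax(mean_rank))]
```

The best $\lambda_{d}$ is then found in the same way with $a$ fixed to the returned value, as described next.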
Using Eq. 16, we find the best value for $\lambda_{x}$, that is, the $a\in Z$ generating the highest mean rank. Finding the best value for $\lambda_{d}$ follows the same procedure, except that we fix $a$ to the identified best value for $\lambda_{x}$. Thus, Eq. 15 is modified as $Avg(b,\lambda_{d})={\textrm{MAP}}(a,b)$. Then, we obtain the $|Z|$-length vector of $Avg(b,\lambda_{d})$ for all possible values of $b\in Z$, denoted as $Avg(Z,\lambda_{d})$. Also, $R^{i}(Avg(Z,\lambda_{d}))$ indicates the $R(Avg(Z,\lambda_{d}))$ calculated on dataset $i$. Finally, we compute the element-wise mean rank across the four datasets using Eq. 16 with $\lambda_{x}$ replaced by $\lambda_{d}$, and find the best value for $\lambda_{d}$ by choosing the $b\in Z$ generating the highest mean rank.

Fig. 4(a)-(b) show the average ranks of the values in $Z$ for $\lambda_{x}$ and $\lambda_{d}$, respectively, across the four datasets. As we see, $\lambda_{x}=1.0$ produces the highest rank, whereas $\lambda_{d}=0.7$ produces the highest rank given $\lambda_{x}=1.0$. The best values are denoted in red.

Figure 4: Finding the best values for $\lambda_{x}$ and $\lambda_{d}$.

### 5.5 Evaluation of ExpFinder

We now evaluate ExpFinder using the best values for $\lambda_{x}$ and $\lambda_{d}$. To measure its relative effectiveness, we compare it with $n$VSM as well as two GMs: ADT [9] and the Reputation Model (simply RepModel) [19]. Further, we compare $\mu$CO-HITS (Eq. 11) with CO-HITS (Eq. 10) to validate the stronger capability of $\mu$CO-HITS's variation approach over CO-HITS. Finally, we show that ExpFinder works well regardless of topic coverage.

Figure 5: Comparison on the average precision (AP) values: the x-axis shows n of P@n and the y-axis shows AP values.

TABLE IV: MAP and the improvement ratio of ExpFinder.

| | LExR | IR | SW | CL | Avg. |
|---|---|---|---|---|---|
| $n$VSM | 0.666 (12.1%) | 0.222 (15.3%) | 0.106 (7.6%) | 0.106 (4.7%) | 0.275 (11.6%) |
| ADT | 0.574 (30.1%) | 0.186 (37.6%) | 0.077 (48.1%) | 0.106 (4.7%) | 0.236 (30.1%) |
| RepModel | 0.260 (187.3%) | 0.144 (77.8%) | 0.061 (86.9%) | 0.070 (58.6%) | 0.134 (129.1%) |
| CO-HITS | 0.611 (22.3%) | 0.228 (12.3%) | 0.104 (9.6%) | 0.088 (26.1%) | 0.258 (19.0%) |
| ExpFinder | **0.747** | **0.256** | **0.114** | **0.111** | **0.307** |

ADT uses an indirect and weighted tripartite (expert-document-topic) graph, where each triplet contains experts, documents and topics. Experts are connected to their documents, and documents are connected to the topics based on their occurrences. The weight of an edge between an expert $x$ and a document $d$ ($w_{xd}$) corresponds to $p(x|d)$ in Eq. 3. The weight of an edge between $d$ and a topic $t$ ($w_{dt}$) is modelled as $p(t|d)$ in Eq. 3. Recall that we fixed $p(x|d)$ as 1 in Section 5.2. As ExpFinder models $p(t|d)$ as $n$TFIDF, we also model $w_{dt}$ as $n$TFIDF in ADT for a fair comparison. ADT ranks $x$ given a topic $t$ based on the score function $s(x,t)$ (the higher, the more important):

$s(x,t)=\sum_{d\in\mathcal{D}_{x}}{w_{xd}\cdot pweight(d,t)},$ (17)

where $pweight(d,t)=\sum_{p\in P(d,t)}{\prod_{i}{w(e_{i})}}$, in which $p$ is a path between $d$ and $t$ comprising edges such that $p=e_{1}e_{2}\ldots e_{n}$; $P(d,t)$ is the set of all possible paths between $d$ and $t$; and $w(e_{i})$ is the weight of the $i$-th edge in $p$.
RepModel [19] was originally designed to estimate the topic-sensitive reputation of an organisation in the context of scientific research projects. This model uses topic-sensitive CO-HITS given a topic, where, in our setting, an organisation is seen as an expert and a project is seen as a document. Thus, using the CO-HITS notations in Eq. 10, RepModel models $A(x;t)^{k}$ as $\sum_{w\in t}w_{w}A(x;w)^{k}$ and $H(d;t)^{k}$ as $\sum_{w\in t}w_{w}H(d;w)^{k}$, where $w_{w}=\frac{1}{|t|}$. For $\lambda_{x}$ and $\lambda_{d}$, we use 0.85 as in [19]. In RepModel, the personalised weights $\alpha_{x;w}$ and $\alpha_{d;w}$ are defined as follows: $\alpha_{d;w}=tf(d,w)$, denoting the term frequency of $w$ divided by $|d|$; and $\alpha_{x;w}={s(x,w)}$ if $w$ appears in $\mathcal{D}_{x}$, and 0 otherwise. $s(x,w)$ is defined as $1-\frac{\max_{s}-weight(x,w)}{\max_{s}-\min_{s}}$, where $\max_{s}$ and $\min_{s}$ are the max and min values of $weight(x,w)$ over all experts, and $weight(x,w)$ is the weight of $x$ on $w$, calculated as the number of documents in $\mathcal{D}_{x}$ in which $w$ appears. We also set $k$=5, as done for $\mu$CO-HITS.

For a fair comparison between $\mu$CO-HITS and CO-HITS, as in $\mu$CO-HITS, the personalised weights $\alpha_{x;t}$ and $\alpha_{d;t}$ of CO-HITS in Eq. 10 are set to $\mathbf{TX}_{i(t),i(x)}$ and $\mathbf{TD}_{i(t),i(d)}$, respectively. Following the same experiment as in Section 5.4, we found that the best values for $\lambda_{x}$ and $\lambda_{d}$ for CO-HITS are 1.0 and 1.0, respectively. We also fix $k=5$ in Eq. 10, as in ExpFinder. In our comparison below, CO-HITS indicates an alternative ExpFinder form incorporating $n$VSM into CO-HITS.

Figure 6: MAP values of $n$VSM and ExpFinder: the x-axis shows topic coverage values, and the y-axis shows the MAP values (the topic coverage of a topic $t$ means the number of known experts associated with $t$).

Fig. 5 shows the evaluation results based on the AP values at n (n=10, 15, $\ldots$, 30) of P@n for all topics. First, when comparing ExpFinder with ADT, although ADT is slightly better than ExpFinder on two datasets (IR and CL) at n=10, ExpFinder largely outperforms it at all other values of n (n=15, $\ldots$, 30) on all the datasets. It is also clear that ExpFinder is substantially better than RepModel at all x-axis values. Second, ExpFinder is consistently better than CO-HITS on LExR and SW, and they are very similar to each other on IR. On CL, CO-HITS is better than ExpFinder at n=10 and n=15, similar at n=20, and worse at n=25 and n=30. Overall, these results show that $\mu$CO-HITS has competitive potential for improving the performance over CO-HITS. Third, to determine the impact of incorporating $n$VSM into $\mu$CO-HITS in ExpFinder, let us compare ExpFinder with $n$VSM. As observed, ExpFinder is clearly and consistently better than $n$VSM on LExR and SW, although they are similar on IR, and $n$VSM looks better than ExpFinder on CL. On average, our ensemble model ExpFinder, combining $n$VSM with $\mu$CO-HITS, is observed to be more powerful than $n$VSM alone.

Table IV shows the evaluation results in MAP. The best one is denoted in boldface. As observed, ExpFinder outperforms all the methods, by 11.6%, 30.1%, 129.1% and 19.0% over $n$VSM, ADT, RepModel and CO-HITS, respectively, on average. Interestingly, $n$VSM is observed to be the second best. This also shows that our $n$VSM for expert finding is more competitive than the compared GMs.
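For reference, the following is a minimal sketch of the P@n and AP metrics (Eqs. 13 and 14) underlying Fig. 5, Table IV, and the topic-coverage analysis that follows; `ranked` is a ranked list of expert ids and `known` is the ground-truth expert set for a topic.

```python
def precision_at(ranked, known, n):
    """P@n (Eq. 13): fraction of the top-n recommended experts that are known."""
    return len(set(ranked[:n]) & known) / n

def average_precision(ranked, known, n=30):
    """AP (Eq. 14): sum of P@i at the ranks i where a known expert
    appears, normalised by the number of known experts |R_t|."""
    n = min(n, len(ranked))
    total = sum(precision_at(ranked, known, i)
                for i in range(1, n + 1) if ranked[i - 1] in known)
    return total / len(known) if known else 0.0

# MAP is then the mean of average_precision over all topics.
```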
Finally, it is also worth analysing the distribution of the MAP values of a model across topics based on topic coverage in each dataset. In our context, the topic coverage of a topic $t$ means the number of known experts having expertise in $t$. This sheds light on the topic coverage values at which a model works better or worse. Intuitively, it may be harder to find experts for topics whose topic coverage is lower. For this analysis, we focus only on our two models, $n$VSM and ExpFinder. By comparing their distributions, we can identify which model is better than the other, and at which topic coverage values.

The analysis results are shown in Fig. 6. In each plot, each value on the x-axis shows a topic coverage value, and each value on the y-axis shows the MAP value of the set of topics with the same topic coverage. For example, on LExR, the value 5 on the x-axis means that the topic coverage is 5, and the corresponding y-axis value 0.85 indicates that the MAP value of the set of topics with topic coverage 5 is 0.85. Each MAP value is calculated using n=30 of P@n. Each black circle indicates that $n$VSM and ExpFinder have the same MAP value. We can observe the following: (1) on LExR, ExpFinder dominantly outperforms or is equal to $n$VSM over all the topic coverage values except 4 cases (i.e. 13, 15, 18, and 19); (2) on IR, ExpFinder also shows its improvement over $n$VSM over most of the topic coverage values, while one MAP value is the same at topic coverage 26; (3) on SW, ExpFinder prevailingly outperforms $n$VSM across most of the topic coverage values, as ExpFinder's distribution is predominantly higher than $n$VSM's; (4) on CL, although $n$VSM is better than ExpFinder at the topic coverage values 10 and 30, ExpFinder is notably better than $n$VSM over the other topic coverage values; and (5) there is no clear indication that ExpFinder performs particularly better on a specific range of topic coverage. For example, on LExR, ExpFinder seems to perform better at smaller topic coverage values, as the MAP values from 0 to 5 are clearly higher than those from 6 to 15 on the axis; but this pattern does not hold on SW and CL, where ExpFinder performs better at larger topic coverage values. In conclusion, the above observations indicate that overall, ExpFinder outperforms $n$VSM regardless of topic coverage, showing the validity of the design paradigm of ExpFinder, that is, incorporating $n$VSM into $\mu$CO-HITS can provide a powerful capability for expert finding.

### 5.6 Discussions and Future Work

Using the four datasets from academic domains, we evaluated ExpFinder and its two key components, $n$VSM and $\mu$CO-HITS, and compared the results with other expert finding models: the TFIDF-based VSM (denoted as TFIDF), DLM-0.5 [6] and DLM-0.6 [7], WISER [8], ADT [9] and RepModel [19]. We showed the capability of $n$VSM using AP values at different n (n=10, 15, $\ldots$, 30) and MAP, in comparison with TFIDF, DLM-0.5 and DLM-0.6; on average, the improvement ratio of $n$VSM over them ranged from 21.7% to 133.1% in MAP. We also presented an empirical method for finding the best values of the two parameters used in ExpFinder, $\lambda_{x}$ and $\lambda_{d}$, based on the ranking of MAP values. Moreover, we showed how much ExpFinder performs better than all the compared methods in Table III and Table IV in terms of MAP.
That is, we showed that ExpFinder improves over DLM-0.5 and DLM-0.6 by 127.5% and 160.2%, respectively; over TFIDF by 35.9%; over WISER by 71.5%; over ADT by 31%; over $n$VSM by 11.6%; over RepModel by 129.1%; and over CO-HITS by 19.0%. Further, we showed that ExpFinder, incorporating $n$VSM into $\mu$CO-HITS, indeed improves on $n$VSM alone. This means that exploiting the network propagation effects on the ECG using $\mu$CO-HITS together with the outcome of $n$VSM can contribute to better estimating the topic-sensitive weights of experts. Also, by comparing ExpFinder with a model combining $n$VSM with CO-HITS, we demonstrated that $\mu$CO-HITS can be an effective approach for improving CO-HITS for expert finding. Finally, we analysed that ExpFinder works well regardless of topic coverage values. All our evaluation results reinforce our motivation for designing ExpFinder: the proposed ensemble model is effective and competitive for expert finding.

As future work, it would be worthwhile to find ways of improving precision for expert finding. As we observed in Table IV, the average MAP value of ExpFinder is 0.307 across the four datasets. Similar MAP results can be observed in the literature. For example, WISER [8] reported that its best MAP values are 0.214 and 0.363 on two datasets, BMExpert [7] showed 0.06 as the best MAP value of the DLM [6] on a single dataset, and ADT [9] showed best MAP values of 0.0943 and 0.1986 on two datasets. As another line of future work, we plan to accommodate a general expertise knowledge source, e.g. Wikipedia, into ExpFinder, as in [8], to see its potential for enhancing ExpFinder's capability. Another interesting direction is to examine graph embedding techniques for expert finding. One idea would be to extend the ECG by constructing an expert-document-topic graph based on their semantic relationships. We could then train a model to transform nodes, edges and their features into a vector space while maximally preserving their relationship information. Once such a graph is mapped to a vector space, we could estimate the importance of experts given documents or topics by measuring the similarity (or relevance) between experts and documents, or between experts and topics, in terms of their corresponding vector values.

## 6 Conclusion

In this paper, we proposed ExpFinder, a novel ensemble model for expert finding. We presented the design of ExpFinder and conducted comprehensive empirical experiments to evaluate and validate its effectiveness using four publicly accessible datasets (LExR [17], and Information Retrieval, Semantic Web and Computational Linguistics from the DBLP dataset [18]) from academic domains. The novelty of ExpFinder lies in its incorporation of a novel $N$-gram vector space model ($n$VSM) into $\mu$CO-HITS. The key to designing $n$VSM is to utilise the state-of-the-art IDF method [15] for estimating the topic-sensitive weights of experts given a topic. Such estimated weights are further improved by incorporating them into an ECG using $\mu$CO-HITS. The novelty of $\mu$CO-HITS is its two variation schemes of CO-HITS [16], yielding a unified formula for successfully integrating $n$VSM with $\mu$CO-HITS. We presented a comprehensive evaluation, comparing ExpFinder with six representative models, that is, the TFIDF-based vector space model (TFIDF), two document language models (DLM) [6, 8], two graph-based models (GM) [9, 19], and topic-sensitive CO-HITS [16].
We showed that ExpFinder is a highly effective ensemble model for expert finding, outperforming TFIDF by 35.9%, the DLMs by 71.5%-160.2%, the GMs by 31%-129.1%, and topic-sensitive CO-HITS by 19.0%.

## References

* [1] O. Husain, N. Salim, R. A. Alias, S. Abdelsalam, and A. Hassan, "Expert finding systems: A systematic review," _Applied Sciences_, vol. 9, no. 20, p. 4250, 2019.
* [2] M. Stankovic, C. Wagner, J. Jovanovic, and P. Laublet, "Looking for experts? What can linked data do for you?" in _Proceedings of the WWW2010 Workshop on Linked Data on the Web_, 2010.
* [3] R. Gonçalves and C. F. Dorneles, "Automated expertise retrieval: A taxonomy-based survey and open issues," _ACM Computing Surveys (CSUR)_, vol. 52, no. 5, pp. 1–30, 2019.
* [4] C. T. Chuang, K. H. Yang, Y. L. Lin, and J. H. Wang, "Combining query terms extension and weight correlative for expert finding," in _2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT)_, vol. 1. IEEE, 2014, pp. 323–326.
* [5] O. Alhabashneh, R. Iqbal, F. Doctor, and A. James, "Fuzzy rule based profiling approach for enterprise information seeking and retrieval," _Information Sciences_, vol. 394, pp. 18–37, 2017.
* [6] K. Balog, L. Azzopardi, and M. de Rijke, "A language modeling framework for expert finding," _Information Processing & Management_, vol. 45, no. 1, pp. 1–19, 2009.
* [7] B. Wang, X. Chen, H. Mamitsuka, and S. Zhu, "BMExpert: Mining MEDLINE for finding experts in biomedical domains based on language model," _IEEE/ACM Trans. Comput. Biol. Bioinformatics_, vol. 12, no. 6, pp. 1286–1294, Nov. 2015.
* [8] P. Cifariello, P. Ferragina, and M. Ponza, "WISER: A semantic approach for expert finding in academia based on entity linking," _Information Systems_, vol. 82, pp. 1–16, 2019.
* [9] S. D. Gollapalli, P. Mitra, and C. L. Giles, "Ranking experts using author-document-topic graphs," in _Proceedings of the 13th ACM/IEEE-CS Joint Conference on Digital Libraries_, ser. JCDL '13, 2013, pp. 87–96.
* [10] C. S. Campbell, P. P. Maglio, A. Cozzi, and B. Dom, "Expertise identification using email communications," in _Proceedings of the Twelfth International Conference on Information and Knowledge Management_, ser. CIKM '03, 2003, pp. 528–531.
* [11] R. Yeniterzi and J. Callan, "Constructing effective and efficient topic-specific authority networks for expert finding in social media," 2014, pp. 45–50.
* [12] M. S. Faisal, A. Daud, A. U. Akram, R. A. Abbasi, N. R. Aljohani, and I. Mehmood, "Expert ranking techniques for online rated forums," _Computers in Human Behavior_, vol. 100, pp. 168–176, 2019.
* [13] K. Bok, I. Jeon, J. Lim, and J. Yoo, "Expert finding considering dynamic profiles and trust in social networks," _Electronics_, vol. 8, no. 10, p. 1165, 2019.
* [14] B. Sziklai, "How to identify experts in a community?" _International Journal of Game Theory_, vol. 47, no. 1, pp. 155–173, 2018.
* [15] M. Shirakawa, T. Hara, and S. Nishio, "IDF for word N-grams," _ACM Trans. Inf. Syst._, vol. 36, no. 1, Jun. 2017.
* [16] H. Deng, M. R. Lyu, and I. King, "A generalized CO-HITS algorithm and its application to bipartite graphs," in _Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, 2009, pp. 239–248.
* [17] V. Mangaravite, R. L. Santos, I. S. Ribeiro, M. A. Gonçalves, and A. H.
* [18] G. Bordea, T. Bogers, and P. Buitelaar, “Benchmarking domain-specific expert search using workshop program committees,” in _Proceedings of the 2013 Workshop on Computational Scientometrics: Theory & Applications_, 2013, pp. 19–24.
* [19] D. Schall, _Social Network-Based Recommender Systems_. Springer, 2015.
* [20] F. Riahi, Z. Zolaktaf, M. Shafiei, and E. Milios, “Finding expert users in community question answering,” in _Proceedings of the 21st International Conference on World Wide Web_, 2012, pp. 791–798.
* [21] L. K. Schunk and G. Cong, “Using transactional data from ERP systems for expert finding,” in _Database and Expert Systems Applications_, 2010, pp. 267–276.
* [22] V. Mangaravite and R. L. Santos, “On information-theoretic document-person associations for expert search in academia,” in _Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval_, ser. SIGIR ’16. Association for Computing Machinery, 2016, pp. 925–928.
* [23] C. Van Gysel, M. de Rijke, and M. Worring, “Unsupervised, efficient and semantic expertise retrieval,” in _Proceedings of the 25th International Conference on World Wide Web_, 2016, pp. 1069–1079.
* [24] S. Robertson and H. Zaragoza, “The probabilistic relevance framework: BM25 and beyond,” _Foundations and Trends® in Information Retrieval_, vol. 3, no. 4, pp. 333–389, 2009.
* [25] X. Jiang, X. Sun, Z. Yang, H. Zhuge, and J. Yao, “Exploiting heterogeneous scientific literature networks to combat ranking bias: Evidence from the computational linguistics area,” _Journal of the Association for Information Science and Technology_, vol. 67, no. 7, pp. 1679–1702, 2016.
* [26] C. L. Koumenides and N. R. Shadbolt, “Ranking methods for entity-oriented semantic web search,” _Journal of the Association for Information Science and Technology_, vol. 65, no. 6, pp. 1091–1106, 2014.
* [27] J. Zhang, M. S. Ackerman, and L. Adamic, “Expertise networks in online communities: Structure and algorithms,” in _Proceedings of the 16th International Conference on World Wide Web_, 2007, pp. 221–230.
* [28] G. A. Wang, J. Jiao, A. S. Abrahams, W. Fan, and Z. Zhang, “ExpertRank: A topic-aware expert finding algorithm for online knowledge communities,” _Decision Support Systems_, vol. 54, no. 3, pp. 1442–1451, 2013.
* [29] D. Kundu and D. P. Mandal, “Formulation of a hybrid expertise retrieval system in community question answering services,” _Applied Intelligence_, vol. 49, no. 2, pp. 463–477, 2019.
* [30] H. Deng, I. King, and M. R. Lyu, “Enhanced models for expertise retrieval using community-aware strategies,” _IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)_, vol. 42, no. 1, pp. 93–106, 2011.
* [31] D.-R. Liu, Y.-H. Chen, W.-C. Kao, and H.-W. Wang, “Integrating expert profile, reputation and link analysis for expert finding in question-answering websites,” _Information Processing & Management_, vol. 49, no. 1, pp. 312–329, 2013.
* [32] Ç. Aslay, N. O’Hare, L. M. Aiello, and A. Jaimes, “Competition-based networks for expert finding,” in _Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval_, ser. SIGIR ’13, 2013, pp. 1033–1036.
* [33] N. Torkzadeh Mahani, M. Dehghani, M. S. Mirian, A. Shakery, and K. Taheri, “Expert finding by the Dempster-Shafer theory for evidence combination,” _Expert Systems_, vol. 35, no. 1, 2018.
* [34] Y.-B. Kang, P. Delir Haghighi, and F. Burstein, “CFinder: An intelligent key concept finder from text for ontology development,” _Expert Syst. Appl._, vol. 41, no. 9, pp. 4494–4504, 2014.
* [35] T. Roelleke and J. Wang, “TF-IDF uncovered: A study of theories and probabilities,” in _Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval_, 2008, pp. 435–442.
* [36] K. Lu, “An insight into vector space modeling and language modeling,” in _iConference_, 2013.
* [37] J. M. Kleinberg, “Authoritative sources in a hyperlinked environment,” _J. ACM_, vol. 46, no. 5, pp. 604–632, Sep. 1999.
* [38] H. Deng, I. King, and M. R. Lyu, “Formal models for expert finding on DBLP bibliography data,” in _Proceedings of the 2008 Eighth IEEE International Conference on Data Mining_, ser. ICDM ’08, 2008, pp. 163–172.
* [39] I. Beltagy, K. Lo, and A. Cohan, “SciBERT: A pretrained language model for scientific text,” in _EMNLP_, 2019.
* [40] C. Macdonald and I. Ounis, “Voting for candidates: Adapting data fusion techniques for an expert search task,” in _Proceedings of the 15th ACM International Conference on Information and Knowledge Management_, ser. CIKM ’06, 2006, pp. 387–396.
* [41] Y. Lv and C. Zhai, “Adaptive term frequency normalization for BM25,” in _Proceedings of the 20th ACM International Conference on Information and Knowledge Management_, ser. CIKM ’11. Association for Computing Machinery, 2011, pp. 1985–1988.

Yong-Bin Kang received his PhD in Information Technology from Monash University in 2011. Currently, he is a senior data science research fellow in the fields of natural language processing, machine learning and data mining at Swinburne University of Technology. He has been working on large industrial, multi-disciplinary research projects such as patent analytics, clinical data analytics, scientific-article analytics, social-media data analytics, expert finding and matching, and machine learning algorithms and applications.

Hung Du is currently in the final year of his Bachelor's degree in Computer Science at Swinburne University of Technology, Australia, and is working as a research assistant. His research interests include Natural Language Processing, Reinforcement Learning, Data Mining and Applied Machine Learning.

Abdur Rahim Mohammad Forkan received his PhD degree in Computer Science from RMIT University, Australia in 2016. He is a Senior Research Engineer at the Digital Innovation Lab, Swinburne University of Technology, Australia, working on several multi-disciplinary industrial research projects. His research interests include Data Analytics, Health Informatics, Pervasive Systems, Cloud Computing, Data Mining and Applied Machine Learning.

Prem Prakash Jayaraman is an Associate Professor and the Head of the Digital Innovation Lab at Swinburne. He is broadly interested in the research areas of Internet of Things, Mobile and Cloud Computing, Health Informatics and the application of Data Science techniques and methodologies in real-world settings. He was a key contributor and one of the architects of the open-source Internet of Things project (OpenIoT), which won the prestigious Black Duck Rookie of the Year Award in 2013 (https://github.com/OpenIotOrg/openiot).

Amir Aryani is the Head of the Social Data Analytics (SoDA) Lab at Swinburne University of Technology.
He has experience with large-scale and cross-institution projects in Australia and Europe. His track record includes collaborations with high-profile international institutions such as the British Library, ORCID, and Data Archiving and Networked Services (DANS, Netherlands), and funders including the ARC, NHMRC, and NIH. He has published in high-impact venues such as Nature Scientific Data, Metadata and Semantics Research, and Frontiers in Artificial Intelligence and Applications.

Timos Sellis received the PhD degree in Computer Science from the University of California, Berkeley, in 1986. He is now a visiting scientist at Facebook and an adjunct professor at Swinburne. Between 2013 and 2015, he was a professor at RMIT University, Australia, and before 2013, the director of the Institute for the Management of Information Systems and a professor at the National Technical University of Athens, Greece. He is a fellow of the IEEE and ACM. In 2018, he was awarded the IEEE TCDE Impact Award, in recognition of his impact in the field and for contributions to database systems and data engineering research.
# Counterexample-Guided Prophecy for Model Checking Modulo the Theory of Arrays*

Makai Mann, Ahmed Irfan, Alberto Griggio, Oded Padon and Clark Barrett

Stanford University, Stanford, USA <EMAIL_ADDRESS>

Fondazione Bruno Kessler, Trento, Italy <EMAIL_ADDRESS>

VMware Research, Palo Alto, USA <EMAIL_ADDRESS>

*This is an extended version of work presented in [MIG+21].

(Submitted Sep. 02, 2021; published Aug. 31, 2022)

###### Abstract.

We develop a framework for model checking infinite-state systems by automatically augmenting them with auxiliary variables, enabling quantifier-free induction proofs for systems that would otherwise require quantified invariants. We combine this mechanism with a counterexample-guided abstraction refinement scheme for the theory of arrays. Our framework can thus, in many cases, reduce inductive reasoning with quantifiers and arrays to quantifier-free and array-free reasoning. We evaluate the approach on a wide set of benchmarks from the literature. The results show that our implementation often outperforms state-of-the-art tools, demonstrating its practical potential.

###### Key words and phrases: Model Checking, Prophecy Variables, Quantified Invariant, Satisfiability Modulo Theories, Theory of Arrays

## 1\. Introduction

Model checking is a widely used and highly effective technique for automated property checking. While model checking finite-state systems is a well-established technique for hardware and software systems, model checking infinite-state systems is more challenging. One challenge, for example, is that proving properties by induction over infinite-state systems often requires the use of universally quantified invariants. While some automated reasoning tools can reason about quantified formulas, such reasoning is typically not very robust. Furthermore, just discovering these quantified invariants remains very challenging. Previous work (e.g., [McM18]) has shown that prophecy variables can sometimes play the same role as universally quantified variables, making it possible to transform a system that would require quantified reasoning into one that does not. However, to the best of our knowledge, there has been no automatic method for applying such transformations. In this paper, we introduce a technique we call _counterexample-guided prophecy_. During the refinement step of an abstraction-refinement loop, our technique automatically introduces prophecy variables, which both help with the refinement step and may also reduce the need for quantified reasoning. We demonstrate the technique in the context of model checking for infinite-state systems with arrays, a domain which is known for requiring quantified reasoning. We show how a standard abstraction for arrays can be augmented with counterexample-guided prophecy to obtain an algorithm that reduces the model checking problem to quantifier-free, array-free reasoning. The paper makes the following contributions:

1. (1) we introduce an algorithm called Prophecize that uses history and prophecy variables to target a specific term at a specific time step of an execution, producing a new transition system in which that term can effectively be reasoned about universally;
2. (2) we develop an automatic abstraction-refinement procedure for arrays, which leverages the Prophecize algorithm during the refinement step, and show that it is sound and produces no false positives;
3. (3) we develop a prototype implementation of our technique; and
4. (4) we evaluate our technique on four sets of model checking benchmarks containing arrays and show that our implementation outperforms state-of-the-art tools on a majority of the benchmark sets.

The rest of the paper is organized as follows. We start by providing relevant background information in Section 2. We then motivate the use of prophecy variables with an example and introduce the Prophecize algorithm in Section 3. We describe our abstraction-refinement framework for arrays in Section 4 and discuss its expressiveness and limitations in Section 5. In Section 6, we describe our prototype along with some implementation details. We evaluate our approach empirically in Section 7, cover related work in Section 8, and finally conclude in Section 9. This paper is an extended version of work presented in [MIG+21]. Notable changes include the proofs, a self-comparison with different options in Section 7, and Section 6, which discusses implementation details of the prototype.

## 2\. Background

We assume the standard many-sorted first-order logical setting with the usual notions of signature, term, formula, and interpretation. A theory is a pair $\mathcal{T}=(\Sigma,\mathbf{I})$ where $\Sigma$ is a signature and $\mathbf{I}$ is a class of $\Sigma$-interpretations, the models of $\mathcal{T}$. A $\Sigma$-formula $\varphi$ is satisfiable (resp., unsatisfiable) in $\mathcal{T}$ if it is satisfied by some (resp., no) interpretation in $\mathbf{I}$. Given an interpretation $\mathcal{M}$, a variable assignment $s$ over a set of variables $X$ is a mapping that assigns each variable $x\in X$ of sort $\sigma$ to an element of $\sigma^{\mathcal{M}}$, denoted $x^{s}$. We write $\mathcal{M}[s]$ for the interpretation that is equivalent to $\mathcal{M}$ except that each variable $x\in X$ is mapped to $x^{s}$. Let $x$ be a variable, $t$ a term, and $\phi$ a formula. We denote with $\phi\\{x\mapsto t\\}$ the formula obtained by replacing every free occurrence of $x$ in $\phi$ with $t$. We extend this notation to sets of variables and terms in the usual way. If $f$ and $g$ are two functions, we write $f\circ g$ to mean functional composition, i.e., $f\circ g(x)=f(g(x))$. We use $\uplus$ to refer to disjoint set union.

Let $\mathcal{T}_{A}$ be the standard theory of arrays [Mcc62] with extensionality, extended with constant arrays. Concretely, we assume sorts for arrays, indices, and elements, and function symbols $\mathit{read}$, $\mathit{write}$, and $\mathit{constarr}$. Here and below, we use $a$ and $b$ to refer to arrays, $i$ and $j$ to refer to array indices, and $e$ and $c$ to refer to array elements, where $c$ is also restricted to be an interpreted constant. The theory contains the class of all interpretations satisfying the following axioms:

$\forall\,a,i,j,e.\>(i=j\implies\mathit{read}(\mathit{write}(a,j,e),i)=e)\,\wedge\,(i\neq j\implies\mathit{read}(\mathit{write}(a,j,e),i)=\mathit{read}(a,i))$ (write)

$\forall\,a,b.\>(\forall\,i.\>\mathit{read}(a,i)=\mathit{read}(b,i))\implies a=b$ (ext)

$\forall\,i.\>\mathit{read}(\mathit{constarr}(c),i)=c$ (const)

### 2.1. Symbolic Transition Systems and Model Checking

For generality, assume a background theory $\mathcal{T}$ with signature $\Sigma$. We will assume that all terms and formulas are $\Sigma$-terms and $\Sigma$-formulas, that entailment is entailment modulo $\mathcal{T}$, and interpretations are $\mathcal{T}$-interpretations.
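As a quick sanity check, the array axioms above can be verified mechanically. The following is a minimal sketch of ours, assuming the z3 Python bindings (package `z3-solver`) are available; z3's `Store`, `Select`, and `K` play the roles of $\mathit{write}$, $\mathit{read}$, and $\mathit{constarr}$:

```python
# Sanity-checking the (write), (ext), and (const) axioms with z3's
# built-in extensional array theory.
from z3 import (ArraySort, IntSort, Consts, K, Select, Store,
                Implies, ForAll, prove)

A = ArraySort(IntSort(), IntSort())   # arrays from Int indices to Int elements
a, b = Consts("a b", A)
i, j, e, c = Consts("i j e c", IntSort())

# (write): read-over-write semantics
prove(Implies(i == j, Select(Store(a, j, e), i) == e))
prove(Implies(i != j, Select(Store(a, j, e), i) == Select(a, i)))

# (ext): extensionality
prove(Implies(ForAll(i, Select(a, i) == Select(b, i)), a == b))

# (const): a constant array returns c at every index
prove(Select(K(IntSort(), c), i) == c)
```

Each `prove` call reports "proved", since z3's native array theory satisfies exactly these axioms.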
A symbolic transition system (STS) $\mathcal{S}$ is a tuple $\mathcal{S}:=\langle X,I,T\rangle$, where $X$ is a finite set of state variables, $I(X)$ is a formula denoting the initial states of the system, and $T(X,X^{\prime})$ is a formula expressing a transition relation. Here, $X^{\prime}$ is the set obtained by replacing each variable $x\in X$ with a new variable $x^{\prime}$ of the same sort. Let $\mathit{prime}(x)=x^{\prime}$ be the bijection corresponding to this replacement. We say that a variable $x$ is frozen if $T\models x^{\prime}=x$. When the state variables are obvious, we will often drop $X$. A state $s$ of $\mathcal{S}$ is a variable assignment over $X$. An _execution_ of $\mathcal{S}$ of length $k$ is a pair $\langle\mathcal{M},\pi\rangle$, where $\mathcal{M}$ is an interpretation and $\pi:=s_{0},s_{1},\ldots,s_{k-1}$ is a _path_ of length $k$, a sequence of states such that $\mathcal{M}[s_{0}]\models I(X)$ and $\mathcal{M}[s_{i}][s_{i+1}\circ\mathit{prime}^{-1}]\models T(X,X^{\prime})$ for all $0\leq i<k-1$. When reasoning about paths, it is often convenient to have multiple copies of the state variables $X$. We use $X@n$ to denote the set of variables obtained by replacing each variable $x\in X$ with a new variable called $x@n$ of the same sort. We refer to these as _timed_ variables. A state $s$ is _reachable_ in $\mathcal{S}$ if it appears in a path of some execution of $\mathcal{S}$. We say that a formula $P(X)$ is an _invariant_ of $\mathcal{S}$, denoted by $\mathcal{S}\models P(X)$, if $P(X)$ is satisfied in every reachable state of $\mathcal{S}$ (i.e., for every execution $\langle\mathcal{M},\pi\rangle$, $\mathcal{M}[s]\models P(X)$ for each $s$ in $\pi$). The _invariant checking problem_ is, given $\mathcal{S}$ and $P(X)$, to determine if $\mathcal{S}\models P(X)$. A _counterexample_ is an execution $\langle\mathcal{M},\pi\rangle$ of $\mathcal{S}$ of length $k$ such that $\mathcal{M}[s_{k-1}]\not\models P(X)$. If $I(X)\models\phi(X)$ and $\phi(X)\wedge T(X,X^{\prime})\models\phi(X^{\prime})$, then $\phi(X)$ is an _inductive_ invariant. Every inductive invariant is an invariant (by induction over path length). In this paper we focus on model checking problems where $I$, $T$ and $P$ are quantifier-free. However, a _quantified inductive invariant_ might still be necessary to prove a property of the system. _Bounded Model Checking_ (BMC) is a bug-finding technique which attempts to find a counterexample for a property, $P(X)$, of length $k$ for some finite $k$ [BCCZ99]. A single BMC query at bound $k$ for an invariant property uses a constraint solver to check the satisfiability of the following formula: $BMC(\mathcal{S},P,k)\coloneqq I(X@0)\wedge(\bigwedge_{i=0}^{k-2}T(X@i,X@(i+1)))\wedge\neg P(X@(k-1))$. If the query is satisfiable, there is a bug. ### 2.2. Counterexample-Guided Abstraction Refinement (CEGAR) CEGAR is a general technique in which a difficult conjecture is tackled iteratively [Cla03]. Algorithm 1 shows a simple CEGAR loop for checking an invariant $P$ for an STS $\mathcal{S}$. It is parameterized by three functions. The Abstract function produces an initial abstraction of the problem. It must satisfy the contract that if $\langle\widehat{S},\widehat{P}\rangle=\textbf{Abstract}(\mathcal{S},P)$, then $\widehat{S}\models\widehat{P}\implies\mathcal{S}\models P$. The next function is the Prove function. This can be any (unbounded) model-checking algorithm that can return counterexamples. 
It checks whether a given property $P$ is an invariant of a given STS $\mathcal{S}$. If it is, it returns with $\mathit{proven}$ set to true. Otherwise, it returns a bound $k$ at which a counterexample exists. The final function is Refine. It takes the abstracted STS and property together with a bound $k$ at which a known counterexample for the abstract STS exists. Its job is to refine the abstraction until there is no longer a counterexample of size $k$. If it succeeds, it returns the new STS and property. It fails if there is an actual counterexample of size $k$ for the concrete system. In this case, it sets the return value $\mathit{refined}$ to false.

Algorithm 1 STS-CEGAR($\mathcal{S}\coloneqq\langle X,I,T\rangle$, $P$)
1: $\langle\langle\widehat{X},\widehat{I},\widehat{T}\rangle,\widehat{P}\rangle\leftarrow\textbf{Abstract}(\mathcal{S},P)$
2: while true do
3: $\langle k,\mathit{proven}\rangle\leftarrow\textbf{Prove}(\langle\widehat{X},\widehat{I},\widehat{T}\rangle,\widehat{P})$ // try to prove
4: if proven then return true // property proved
5: $\langle\langle\widehat{X},\widehat{I},\widehat{T}\rangle,\widehat{P},\mathit{refined}\rangle\leftarrow\textbf{Refine}(\langle\widehat{X},\widehat{I},\widehat{T}\rangle,\widehat{P},k)$ // try to refine
6: if $\neg\textit{refined}$ then return false // found counterexample
7: end while

### 2.3. Auxiliary variables

We finish this section with relevant background on _auxiliary_ variables, a crucial part of the refinement step described in Section 4. Auxiliary variables are new variables added to the system which do not influence its behavior (i.e., a state is reachable in the old system iff it is a reduct [Hod93] to the old set of variables of a reachable state in the new system). There are two main categories of auxiliary variables we consider: history and prophecy. History variables, also known as ghost state, preserve a value, making past values available in future states [OG76]. Prophecy variables are the dual of history variables and provide a way to refer to a value that occurs in a future state. Abadi and Lamport formally characterized soundness conditions for the introduction of history and prophecy variables [AL88]. Here, we consider a simple, structured form of history variables. Let $\mathcal{S}=\langle X,I,T\rangle$ be an STS, $t$ a term whose free variables are in $X$, and $n>0$; then $\textbf{Delay}(\mathcal{S},t,n)$ returns a tuple $\langle\langle X^{h},I^{h},T^{h}\rangle,\mathit{h_{t}^{n}}\rangle$, containing a new STS and history variable, where $X^{h}=X\uplus\\{\mathit{h_{t}^{1}},\ldots,\mathit{h_{t}^{n}}\\}$, $I^{h}=I$, and $T^{h}=T\wedge\mathit{h_{t}^{1}}^{\prime}=t\wedge\bigwedge_{i=2}^{n}\mathit{h_{t}^{i}}^{\prime}=\mathit{h_{t}^{i-1}}$. The Delay operator makes the current value of a term $t$ available for the next $n$ states in a path. This is accomplished by adding $n$ new history variables and creating an assignment chain that passes the value to the next history variable at each state. Thus, $\mathit{h_{t}^{k}}$ contains the value that $t$ had $k$ states ago. The initial value of each history variable is unconstrained.

###### Theorem 2.3 ([AL88]). Let $\mathcal{S}=\langle X,I,T\rangle$ be an STS, $P$ a property, and $\textbf{Delay}(\mathcal{S},t,n)=\langle\mathcal{S}^{h},\mathit{h_{t}^{n}}\rangle$. Then $\mathcal{S}\models P$ iff $\mathcal{S}^{h}\models P$.

We refer to [AL88] for a general proof that subsumes Theorem 2.3.
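Since the Delay operator is purely mechanical, it is easy to sketch. The following illustrative implementation is ours, over a simple tuple-based STS representation $(X,I,T)$ of z3 terms; it is not the paper's prototype, just a sketch of the definition above:

```python
# Delay(S, t, n): add history variables h^1..h^n with h^1' = t and
# h^i' = h^(i-1), so that h^n holds the value t had n steps ago.
from z3 import Int, And, BoolVal

def prime(v):
    return Int(str(v) + "'")             # next-state copy of an integer variable

def delay(sts, t, n):
    X, I, T = sts
    hs = [Int("h_%s_%d" % (t, i)) for i in range(1, n + 1)]
    chain = [prime(hs[0]) == t] + \
            [prime(hs[i]) == hs[i - 1] for i in range(1, n)]
    return (X + hs, I, And(T, *chain)), hs[-1]   # new STS and h_t^n

# Example: make the value of i_r from two steps ago available as h_i_r_2.
i_r = Int("i_r")
(Xh, Ih, Th), h = delay(([i_r], BoolVal(True), BoolVal(True)), i_r, 2)
print(Th)   # And(True, h_i_r_1' == i_r, h_i_r_2' == h_i_r_1)
```

Note that, exactly as in the definition, nothing constrains the initial values of the history variables.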
In contrast to the general approach for history variables, we use a version of prophecy that only requires a single frozen variable. The motivation for this is that a frozen variable can be used in place of a universal quantifier, as the following theorem adapted from [McM18] shows.

###### Theorem 2.4 ([McM18]). Let $\mathcal{S}=\langle X,I,T\rangle$ be an STS, $y$ a variable in formula $P(X\cup\\{y\\})$, and $v$ a fresh variable (i.e., not in $X\cup\\{y\\}$ or $X^{\prime}$). Let $\mathcal{S}^{p}=\langle X\cup\\{v\\},I,T\wedge v^{\prime}=v\rangle$. Then $\mathcal{S}\models\forall\,y.\>P(X\cup\\{y\\})$ iff $\mathcal{S}^{p}\models P(X\cup\\{y\\})\\{y\mapsto v\\}$.

Proof Sketch. We prove the equivalent statement $\mathcal{S}\not\models\forall\,y.\>P(X\cup\\{y\\})\leftrightarrow\mathcal{S}^{p}\not\models P(X\cup\\{y\\})\\{y\mapsto v\\}$. $\rightarrow$: Suppose there is a counterexample trace demonstrating that $\mathcal{S}\not\models\forall\,y.\>P(X\cup\\{y\\})$. In the last step of the trace, the property is violated, so there is a specific value of $y$ for which $P(X\cup\\{y\\})$ does not hold. We can reconstruct the same counterexample trace for $\mathcal{S}^{p}$ and $P(X\cup\\{y\\})\\{y\mapsto v\\}$ by assigning $v$ this same value in every step of the trace. $\leftarrow$: If there is a counterexample trace for $\mathcal{S}^{p}\not\models P(X\cup\\{y\\})\\{y\mapsto v\\}$, we can construct a counterexample for $\mathcal{S}$ and $\forall\,y.\>P(X\cup\\{y\\})$ by using the same trace but dropping the assignment to the frozen variable, $v$. The value of $y$ that violates the property in the last step of the trace will then be the value that $v$ was assigned. $\Box$

Theorem 2.4 shows that a universally quantified variable in a property can be replaced with a fresh symbol in a process similar to Skolemization. The intuition is as follows. The frozen variable has the same value in all states, but it is uninitialized by $I$. Thus, for each path in $\mathcal{S}$, there is a corresponding path (i.e., identical except at $v$) in $\mathcal{S}^{p}$ for _every_ possible value of $v$. This proliferation of paths plays the same role as the quantified variable in $P$. We mention here one more theorem from [McM18]. This one allows us to _introduce_ a universal quantifier.

###### Theorem 2.5 ([McM18]). Let $\mathcal{S}=\langle X,I,T\rangle$ be an STS, $P(X)$ a formula, and $t$ a term. Then $\mathcal{S}\models P(X)$ iff $\mathcal{S}\models\forall\,y.(y=t\implies P(X))$, where $y$ is not free in $P(X)$.

Proof Sketch. $P(X)$ and $\forall\,y.(y=t\implies P(X))$ for $y\not\in X$ are equivalent in first-order logic. Intuitively, when $y=t$, the implication simplifies to $P(X)$, and all other values of $y$ render the formula trivially true. $\Box$

Theorems 2.4 and 2.5 are special cases of Theorems 3 and 4 of [McM18], which handle temporal logic [Pnu77] formulas. Another notable difference is that Theorem 3 of [McM18] uses a fresh background symbol to replace the universally quantified variable; such a symbol does not change as the system evolves because it is not a state variable of the transition system. Rather than allowing background symbols, we simulate this with a frozen variable that maintains its value in Theorem 2.4.

## 3\. Using Auxiliary Variables to Assist Induction

We can use Theorem 2.5 followed by Theorem 2.4 to introduce frozen prophecy variables that predict the value of a term $t$ when the property $P$ is being checked. We refer to $t$ as the prophecy target and the process as universal prophecy.
If we also use Delay, we can target a term at some finite number of steps _before_ the property is checked. This is captured by Algorithm 2, which takes a transition system, a property $P(X)$, a term $t$, and $n\geq 0$. If $n=0$, it introduces a universal prophecy variable for $t$. Otherwise, it first introduces history variables for $t$ and then applies universal prophecy to the delayed $t$. In either case, it returns the augmented system, the augmented property, and the prophecy variable.

Algorithm 2 $\textbf{Prophecize}(\langle X,I,T\rangle,P(X),t,n)$
1: if n = 0 then
2: return $\langle\langle X\uplus\\{p_{t}\\},I,T\wedge p_{t}^{\prime}=p_{t}\rangle,p_{t}=t\implies P(X),p_{t}\rangle$
3: else
4: $\langle\langle X^{h},I^{h},T^{h}\rangle,\mathit{h_{t}^{n}}\rangle\coloneqq\textbf{Delay}(\langle X,I,T\rangle,t,n)$
5: return $\langle\langle X^{h}\uplus\\{p_{t}^{n}\\},I^{h},T^{h}\wedge p_{t}^{n^{\prime}}=p_{t}^{n}\rangle,p_{t}^{n}=\mathit{h_{t}^{n}}\implies P(X),p_{t}^{n}\rangle$
6: end if

We will use the STS shown in Fig. 1(a) as a running example throughout the paper (it is inspired by the hardware example from [Bje08]). We assume the background theory $\mathcal{T}$ includes integer arithmetic and arrays of integers indexed by integers. The variables in this STS include an array and four integer variables, representing the read index ($\mathit{i_{r}}$), write index ($\mathit{i_{w}}$), read data ($\mathit{d}_{r}$), and write data ($\mathit{d}_{w}$), respectively. The system starts with an array of all zeros. At every step, if the write data is less than 200, it writes that data to the array at the write index; otherwise, the array stays the same. Additionally, the read data is updated with the current value of $a$ at $\mathit{i_{r}}$. This effectively introduces a one-step delay between when the value is read from $a$ and when the value is present in $\mathit{d}_{r}$. The property is that $\mathit{d}_{r}<200$. This property is clearly true, but it is not straightforward to prove with standard model checking techniques because it is not inductive. Note that it is also not $k$-inductive for any $k$ [SSS00]. The primary issue is that the property does not constrain the value of $a$ at all, so in an inductive proof, the value of $a$ could be anything in the induction hypothesis.

(a) $I\coloneqq a=\mathit{constarr(0)}\wedge\mathit{d}_{r}<200$; $T\coloneqq a^{\prime}=\mathit{ite}(\mathit{d}_{w}<200,\mathit{write}(a,\mathit{i_{w}},\mathit{d}_{w}),a)\wedge\mathit{d}_{r}^{\prime}=\mathit{read}(a,\mathit{i_{r}})$; $P\coloneqq\mathit{d}_{r}<200$

(b) $I\coloneqq a=\mathit{constarr(0)}\wedge\mathit{d}_{r}<200$; $T\coloneqq a^{\prime}=\mathit{ite}(\mathit{d}_{w}<200,\mathit{write}(a,\mathit{i_{w}},\mathit{d}_{w}),a)\wedge\mathit{d}_{r}^{\prime}=\mathit{read}(a,\mathit{i_{r}})\wedge p_{\mathit{i_{r}}}^{1^{\prime}}=p_{\mathit{i_{r}}}^{1}\wedge\mathit{h_{\mathit{i_{r}}}^{1}}^{\prime}=\mathit{i_{r}}$; $P\coloneqq p_{\mathit{i_{r}}}^{1}=\mathit{h_{\mathit{i_{r}}}^{1}}\implies\mathit{d}_{r}<200$

Figure 1. (a) Running example. (b) Running example with prophecy variable.

One way to prove the property is to strengthen it with the quantified invariant: $\forall\,i.\>\mathit{read}(a,i)<200$.
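Before turning to the prophecy-based alternative, here is an illustrative sketch of Algorithm 2 in the same tuple-based STS representation as the Delay sketch above, reusing its `delay` and `prime` helpers (all names are ours, not the prototype's):

```python
# Prophecize(S, P, t, n): a frozen prophecy variable predicting the value
# of t, n steps before the property is checked (n = 0 targets the final step).
from z3 import Int, And, Implies

def prophecize(sts, P, t, n):
    X, I, T = sts
    if n == 0:
        p = Int("p_%s" % t)
        return (X + [p], I, And(T, prime(p) == p)), Implies(p == t, P), p
    (Xh, Ih, Th), h = delay(sts, t, n)          # history chain h^1..h^n for t
    p = Int("p_%s_%d" % (t, n))
    return (Xh + [p], Ih, And(Th, prime(p) == p)), Implies(p == h, P), p
```

And, looking slightly ahead to the discussion below, a quick z3 check (again ours) that the strengthened quantifier-free invariant $\mathit{read}(a,p_{\mathit{i_{r}}}^{1})<200\wedge(p_{\mathit{i_{r}}}^{1}=\mathit{h_{\mathit{i_{r}}}^{1}}\implies\mathit{d}_{r}<200)$ is indeed inductive for the augmented system of Fig. 1(b):

```python
# Initiation and consecution checks for the augmented running example.
from z3 import (ArraySort, IntSort, Const, Ints, K, Select, Store, If,
                And, Implies, prove)

A = ArraySort(IntSort(), IntSort())
a, ap = Const("a", A), Const("a'", A)
dr, drp, dw, iw, ir = Ints("d_r d_r' d_w i_w i_r")
p, pp, h, hp = Ints("p p' h h'")

T = And(ap == If(dw < 200, Store(a, iw, dw), a),   # conditional array write
        drp == Select(a, ir),                      # delayed read
        hp == ir,                                  # history variable update
        pp == p)                                   # prophecy variable is frozen

Inv  = And(Select(a, p) < 200, Implies(p == h, dr < 200))
Invp = And(Select(ap, pp) < 200, Implies(pp == hp, drp < 200))
I0 = And(a == K(IntSort(), 0), dr < 200)

prove(Implies(I0, Inv))              # initiation: prints "proved"
prove(Implies(And(Inv, T), Invp))    # consecution: prints "proved"
```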
Remarkably, by augmenting the system using Prophecize, it is possible to prove the property using only a _quantifier-free_ invariant. In this case, the relevant prophecy target is the value of $\mathit{i_{r}}$ one step before checking the property. We run $\textbf{Prophecize}(\langle X,I,T\rangle,P,i_{r},1)$ and it returns the system and property shown in Fig. 1(b), along with the prophecy variable $p_{\mathit{i_{r}}}^{1}$. This augmented system has a simple, quantifier-free invariant which can be used to strengthen the property, making it inductive: $\mathit{read}(a,p_{\mathit{i_{r}}}^{1})<200$. This formula holds in the initial state because of the constant array, and if we start in a state where it holds, it still holds after a transition. Notice that the invariant learned over the prophecy variable has the same form as the original quantified invariant; however, the universal quantifier has been instantiated with a fresh, frozen prophecy variable. Intuitively, the prophecy variable captures a proof by contradiction: assume the property fails, consider the value of $\mathit{i_{r}}$ one step before the first failure of the property, and then use this value to derive a contradiction. This example shows that auxiliary variables can be used to transform an STS without a quantifier-free inductive invariant into an STS with one. However, it is not yet clear how to identify good targets for history and prophecy variables. In the next section, we show how this can be done as part of an abstraction refinement scheme for symbolic transition systems over the theory of arrays.

## 4\. Abstraction Refinement for Arrays

We now introduce our main contribution. Given a background theory $\mathcal{T}_{B}$ and a model checking algorithm for STSs over $\mathcal{T}_{B}$, we use an instantiation of the CEGAR loop in Algorithm 1 to check properties of STSs over the theory that combines $\mathcal{T}_{B}$ and the theory of arrays, $\mathcal{T}_{A}$. The key idea is to abstract all array operators and then add array lemmas as needed during refinement.

### 4.1. Abstract and Prove

We use a standard abstraction for the theory of arrays, which we denote Abstract-Arrays. Every array sort is replaced with an uninterpreted sort, and the array variables are abstracted accordingly. Each constant array is replaced by a fresh abstract array variable, which is then constrained to be frozen (because constant arrays do not change over time). Additionally, we replace the $\mathit{read}$ and $\mathit{write}$ array operations with uninterpreted functions. Note that if the system contains multiple array sorts, we need to introduce separate read and write functions for each uninterpreted abstract array sort. Using uninterpreted sorts and functions for abstracting arrays is a common technique in Satisfiability Modulo Theories (SMT) [BT18] solvers [GKF08]. Intuitively, our initial abstraction starts with memoryless arrays, i.e., the array axioms are not initially enforced on the abstraction. We then incrementally refine the arrays' memory as needed by adding prophecy variables to be used in array axioms. Intuitively, a prophecy variable keeps track of an index of the array that will faithfully store values. Fig. 2 shows the result of running Abstract-Arrays on the example from Fig. 1(a). Prove can be instantiated with any (unbounded) model checker that can accept expressions over the background theory $\mathcal{T}_{B}$ combined with the theory of uninterpreted functions.
In particular, due to our abstraction, the model checker does not need to support the theory of arrays.

$\widehat{I}\coloneqq\widehat{a}=\widehat{\mathit{constarr0}}\wedge\mathit{d}_{r}<200$; $\widehat{T}\coloneqq\widehat{a}^{\prime}=\mathit{ite}(\mathit{d}_{w}<200,\widehat{\mathit{write}}(\widehat{a},\mathit{i_{w}},\mathit{d}_{w}),\widehat{a})\wedge\mathit{d}_{r}^{\prime}=\widehat{\mathit{read}}(\widehat{a},\mathit{i_{r}})\wedge\widehat{\mathit{constarr0}}^{\prime}=\widehat{\mathit{constarr0}}$; $\widehat{P}\coloneqq\mathit{d}_{r}<200$

Figure 2. Result of calling Abstract on the example from Fig. 1(a)

### 4.2. Refine

Here, we explain the refinement approach for our array abstraction. At a high level, the algorithm solves a BMC problem over the abstract STS at bound $k$. It identifies violations of array axioms in the returned abstract counterexample, and instantiates each violated axiom (this is essentially the same as the lazy array axiom instantiation approach used in SMT solvers [BMS06, BB08, CH15, dB09]). We then _lift_ these axioms to the STS level by modifying the STS. It is this step that may require introducing auxiliary variables. The details are shown in Algorithm 3.

Algorithm 3 Refine-Arrays ($\widehat{\mathcal{S}}\coloneqq\langle\widehat{X},\widehat{I},\widehat{T}\rangle,\widehat{P},k$)
1: $\mathcal{I}\leftarrow\mathit{ComputeIndices}(\widehat{\mathcal{S}},\widehat{P},k)$
2: loop
3: $\rho\leftarrow$ BMC($\widehat{\mathcal{S}},\widehat{P},k$)
4: if $\rho=\bot$ then return $\langle\langle\widehat{X},\widehat{I},\widehat{T}\rangle,\widehat{P},\mathit{true}\rangle$ // Property holds up to bound $k$
5: $\langle\mathit{ca},\mathit{nca}\rangle\leftarrow\mathit{CheckArrayAxioms}(\rho,\mathcal{I})$
6: if $\mathit{ca}=\emptyset\wedge\mathit{nca}=\emptyset$ then return $\langle\langle\widehat{X},\widehat{I},\widehat{T}\rangle,\widehat{P},false\rangle$ // True counterexample
7: // Go through nonconsecutive array axiom instantiations
8: for $\langle ax,i@n_{i}\rangle\in\mathit{nca}$ do
9: let $n_{min}\coloneqq\mathit{min}(\tau(ax)\backslash\\{n_{i}\\})$
10: $\langle\langle X^{p},I^{p},T^{p}\rangle,P^{p},p_{i}^{k-n_{i}}\rangle\leftarrow\textbf{Prophecize}(\langle\widehat{X},\widehat{I},\widehat{T}\rangle,\widehat{P},i,k-n_{i})$
11: $ax_{c}\leftarrow ax\\{i@n_{i}\mapsto p_{i}^{k-n_{i}}@n_{min}\\}$
12: $\mathit{ca}\leftarrow\mathit{ca}\uplus\\{ax_{c}\\}$ // add consecutive version of axiom
13: $\mathcal{I}\leftarrow\mathcal{I}\uplus\\{p_{i}^{k-n_{i}}@0,\dots,p_{i}^{k-n_{i}}@k\\}$
14: $\widehat{X}\leftarrow X^{p}$; $\widehat{I}\leftarrow I^{p}$; $\widehat{T}\leftarrow T^{p}$; $\widehat{P}\leftarrow P^{p}$
15: end for
16: // Go through consecutive array axiom instantiations
17: for $ax\in\mathit{ca}$ do
18: let $n_{min}\coloneqq\mathit{min}(\tau(ax))$, $n_{max}\coloneqq\mathit{max}(\tau(ax))$
19: assert($n_{max}=n_{min}\lor n_{max}=n_{min}+1$)
20: if $k=0$ then
21: $\widehat{I}\leftarrow\widehat{I}\wedge ax\\{X@n_{min}\mapsto X\\}$
22: else if $n_{min}=n_{max}$ then
23: $\widehat{T}\leftarrow\widehat{T}\wedge ax\\{X@n_{min}\mapsto X\\}\wedge ax\\{X@n_{min}\mapsto X^{\prime}\\}$
24: else
25: $\widehat{T}\leftarrow\widehat{T}\wedge ax\\{X@n_{min}\mapsto X\\}\\{X@(n_{min}+1)\mapsto X^{\prime}\\}$
26: end if
27: end for
28: end loop

Line 1 computes an index set $\mathcal{I}$ of index terms with $\mathit{ComputeIndices}$; this set is used in the lazy axiom instantiation step below.
The procedure adds to $\mathcal{I}$ every term that appears as an index in a $\widehat{\mathit{read}}$ or $\widehat{\mathit{write}}$ operation (recall that these appear as uninterpreted functions in the abstracted STS and property) in $BMC(\widehat{\mathcal{S}}$, $\widehat{P},k)$. Furthermore, it adds a witness index for every array equality; the witness corresponds to a Skolemized existential variable in the contrapositive of axiom (ext). For soundness, it must also add an extra variable $\lambda_{\sigma}$ for each index sort $\sigma$ and constrain it to be different from all the other index variables of the same sort (this is based on the approach in [BMS06]). Intuitively, this variable represents an arbitrary index different from those mentioned in the STS. We assume that the index sorts are from an infinite domain, so that a distinct element is guaranteed. For simplicity of presentation, we also assume from now on that there is only a single index sort (e.g., integers); otherwise, $\mathcal{I}$ must be partitioned by sort. For the abstract STS in Fig. 2, with $k=1$, the index set would be $\mathcal{I}\coloneqq\\{\mathit{i_{r}}@0,\mathit{i_{w}}@0,w_{0}@0,w_{1}@0,\lambda_{\mathit{Int}}@0,\mathit{i_{r}}@1,\mathit{i_{w}}@1,w_{0}@1,w_{1}@1,\lambda_{\mathit{Int}}@1\\}$, where $w_{0}$ and $w_{1}$ are witness indices.

After computing indices, the algorithm enters the main loop. Line 3 dispatches a bounded model checking query, $BMC(\widehat{\mathcal{S}}$, $\widehat{P},k)$. The result $\rho$ is either a counterexample or the distinguished value $\bot$, indicating that the query is unsatisfiable. If it is the latter, then it returns the refined STS and property, as the property now holds on the STS up to bound $k$. Otherwise, the algorithm continues. The next step (line 5) is to find violations of array axioms in the execution $\rho$ based on the index set $\mathcal{I}$. CheckArrayAxioms takes two arguments, a counterexample and an index set, and returns instantiated array axioms that do not hold over the counterexample. This works as follows. It first looks for occurrences of $\widehat{\mathit{write}}$ in the BMC formula. For each such occurrence, it instantiates the (write) axiom so that the $\widehat{\mathit{write}}$ term in the axiom matches the term in the formula (i.e., we use the $\widehat{\mathit{write}}$ term as a trigger). This instantiates all quantified variables except for $i$. Next, it instantiates $i$ once for each variable in the index set. Each of the instantiated axioms is evaluated using the values from the counterexample, and all instantiations that reduce to false are saved. The procedure does the same thing for the (const) axiom, using each constant array term in the BMC formula as a trigger. Finally, for each array equality $a@m=b@n$ in the BMC formula, it checks an instantiation of the contrapositive of (ext): $a@m\not=b@n\to\mathit{read(a@m,w_{i}@n)}\not=\mathit{read(b@n,w_{i}@n)}$. The saved instantiated formulas that do not hold in $\rho$ are added to the set of violated axioms.

CheckArrayAxioms sorts the collected, violated axiom instantiations into two sets based on which timed variables they contain. The _consecutive_ set contains formulas with timed variables whose times differ by at most one, whereas the timed variables in the formulas of the _nonconsecutive_ set may differ by more. Formally, let $\tau$ be a function which takes a single timed variable and returns its time (e.g., $\tau(i@2)=2$).
We lift $\tau$ to formulas by having $\tau(\phi)$ return the set of all time-steps of the variables in $\phi$. A formula $\phi$ is consecutive iff $\mathit{max}(\tau(\phi))-\mathit{min}(\tau(\phi))\leq 1$. Note that instantiations of (ext) are consecutive by construction. Additionally, because constant arrays have the same value in all time steps, the algorithm can always choose a representative time-step for instantiations of (const) that results in a consecutive formula. However, instantiations of (write) may be nonconsecutive, because the variable from the index set may be from a time-step that is different from that of the trigger term. CheckArrayAxioms returns the pair $\langle\mathit{ca},\mathit{nca}\rangle$, where $\mathit{ca}$ is a set of consecutive axiom instantiations and $\mathit{nca}$ is a set of pairs, each of which contains a nonconsecutive axiom instantiation and the index-set term that was used to create that instantiation. We assume that the index-set term used in a nonconsecutive axiom is _not_ an auxiliary variable. Since auxiliary variables only record or predict the value of another index, it does not make sense to target one of these for prophecy.

Line 6 checks if the returned sets are empty. If so, then there are no array axiom violations and $\rho$ is a concrete counterexample. In this case, the system, property, and $\mathit{false}$ are returned. Otherwise, the nonconsecutive formulas are processed in lines 8–15. Given a nonconsecutive formula $ax$ together with its index-set variable $i@n_{i}$, line 9 computes the minimum time-step of the axiom's other variables, $n_{min}$. Then, line 10 calls the Prophecize method to create a prophecy variable $p_{i}^{k-n_{i}}$, which is effectively a way to refer to $i@n_{i}$ at time-step $n_{min}$. This allows the algorithm to create a consecutive formula $ax_{c}$ that is semantically equivalent to $ax$ (line 11). This new consecutive formula is added to $\mathit{ca}$ in line 12, and in line 13 the introduced prophecy variables (one for each time-step) are added to the index set. Then, line 14 updates the abstraction.

At line 17, the algorithm is left with a set of consecutive formulas to process. For each consecutive formula $ax$, line 18 computes the minimum and maximum time-steps of its variables, which must differ by no more than 1 (line 19). There are three cases to consider: i) when $k=0$, the counterexample consists of only the initial state, so the initial state is refined by adding the untimed version of $ax$ to $\widehat{I}$ (line 21); ii) if $ax$ contains only variables from a single time step, then the untimed version of $ax$ is added as a constraint for both $X$ and $X^{\prime}$, ensuring that it will hold in every state (line 23); iii) finally, if $ax$ contains variables from two adjacent time steps, it can be translated directly into a transition formula to be added to $\widehat{T}$ (line 25). The loop then repeats with the newly refined STS.

##### Example.

Consider again the example from Fig. 2, and suppose Refine-Arrays is called on $\widehat{\mathcal{S}}$ and $\widehat{P}$ with $k=3$.
At this unrolling, one possible abstract counterexample violates the following nonconsecutive axiom instantiation:

$(\mathit{i_{r}}@2=\mathit{i_{w}}@0\implies\widehat{\mathit{read}}(\widehat{\mathit{write}}(\widehat{a}@0,\mathit{i_{w}}@0,\mathit{d}_{w}@0),\mathit{i_{r}}@2)=\mathit{d}_{w}@0)\,\wedge\,(\mathit{i_{r}}@2\neq\mathit{i_{w}}@0\implies\widehat{\mathit{read}}(\widehat{\mathit{write}}(\widehat{a}@0,\mathit{i_{w}}@0,\mathit{d}_{w}@0),\mathit{i_{r}}@2)=\widehat{\mathit{read}}(\widehat{a}@0,\mathit{i_{r}}@2))$

To make this nonconsecutive axiom consecutive, the algorithm introduces a prophecy variable. The target is the instantiated index, $\mathit{i_{r}}@2$. The relevant value to predict is one step before a possible property violation (i.e., the end of a finite path), because $k=3$ and $\tau(\mathit{i_{r}}@2)=2$, thus $k-\tau(\mathit{i_{r}}@2)=1$. This corresponds to the $k-n_{i}$ at line 10 of Algorithm 3. Calling $\textbf{Prophecize}(\widehat{\mathcal{S}},\widehat{P},\mathit{i_{r}},1)$ returns the new STS $\langle\widehat{X}\uplus\\{\mathit{h_{\mathit{i_{r}}}^{1}},p_{\mathit{i_{r}}}^{1}\\},\widehat{I},\widehat{T}\wedge\mathit{h_{\mathit{i_{r}}}^{1^{\prime}}}=\mathit{i_{r}}\wedge p_{\mathit{i_{r}}}^{1^{\prime}}=p_{\mathit{i_{r}}}^{1}\rangle$ and the new property $p_{\mathit{i_{r}}}^{1}=\mathit{h_{\mathit{i_{r}}}^{1}}\implies\mathit{d}_{r}<200$. The history variable $\mathit{h_{\mathit{i_{r}}}^{1}}$ makes the previous value of $\mathit{i_{r}}$ available at each time-step, and the prophecy variable $p_{\mathit{i_{r}}}^{1}$ predicts the value of $\mathit{i_{r}}$ one step before a possible property violation. The axiom is updated by replacing $\mathit{i_{r}}@2$ with the prophecy variable, which has the same value. Since the prophecy variable is frozen, it is the same at every step; thus, the algorithm can pick the prophecy variable at a time-step that makes the axiom consecutive. In this case, the algorithm substitutes $p_{\mathit{i_{r}}}^{1}@0$ for $\mathit{i_{r}}@2$. This results in the following consecutive axiom:

$(p_{\mathit{i_{r}}}^{1}@0=\mathit{i_{w}}@0\implies\widehat{\mathit{read}}(\widehat{\mathit{write}}(\widehat{a}@0,\mathit{i_{w}}@0,\mathit{d}_{w}@0),p_{\mathit{i_{r}}}^{1}@0)=\mathit{d}_{w}@0)\,\wedge\,(p_{\mathit{i_{r}}}^{1}@0\neq\mathit{i_{w}}@0\implies\widehat{\mathit{read}}(\widehat{\mathit{write}}(\widehat{a}@0,\mathit{i_{w}}@0,\mathit{d}_{w}@0),p_{\mathit{i_{r}}}^{1}@0)=\widehat{\mathit{read}}(\widehat{a}@0,p_{\mathit{i_{r}}}^{1}@0))$

The untimed version (and a primed version) of this consecutive axiom would be added to the transition relation at line 23 of Algorithm 3. We stress that processing nonconsecutive axioms using Prophecize is how Algorithm 3 automatically discovers the universal prophecy variable $p_{\mathit{i_{r}}}^{1}$; it is exactly the universal prophecy variable that was needed in Section 3 to prove correctness of the running example. An alternative approach could avoid nonconsecutive axioms using Craig interpolants [Cra57] so that only consecutive axioms are found [BCS20]. However, quantifier-free interpolants are not guaranteed to exist for the standard theory of arrays [KMZ06, BGR12], and the auxiliary variables found using nonconsecutive axioms are needed to improve the chances of finding a quantifier-free inductive invariant.
It is thus extremely important to start with a weak abstraction that allows us to examine spurious counterexamples in the BMC unrolling and find nonconsecutive axiom instantiations, which are then used to identify good prophecy targets.

### 4.3. Correctness

We now state two important correctness theorems.

###### Theorem 1. Algorithm 1, instantiated with Abstract-Arrays, a sound model checker Prove as described above, and Refine-Arrays, is sound, i.e., if it returns _true_, then the property does hold.

Proof Sketch. Algorithm 1 only returns true if Prove succeeds in proving the property. Our initial abstraction only removes the array theory semantics but leaves every other theory intact, so it is a sound abstraction. The refinement performed by Refine-Arrays is also sound. Prophecize first optionally applies Delay, depending on the input arguments, and then introduces a prophecy variable. Theorem 2.3 guarantees that Delay preserves the invariance of the property. Furthermore, introducing a prophecy variable is accomplished by directly applying Theorem 2.5, followed by Theorem 2.4, which guarantee that the property is invariant on the resulting system if and only if the original property is invariant on the original system. Thus, the entire Prophecize procedure produces a new system and property that preserve invariance with respect to the initial query. Finally, each axiom instantiation is, by definition, valid in the theory of arrays, and lifting them simply requires them to hold in every state of a path. Furthermore, this does not rule out any true counterexamples, as the interpretations in true counterexamples must be $\mathcal{T}_{A}$-interpretations. Therefore, if at any point Prove is able to prove the property, it follows that the original property holds on the original concrete system, $\mathcal{S}$. $\Box$

###### Theorem 2. If Algorithm 1, instantiated with Abstract-Arrays, Prove as described above, and Refine-Arrays, returns false, there is a counterexample in the concrete transition system.

Proof Sketch. Theorems 2.3, 2.4, and 2.5 ensure that invariance of the property is preserved when adding auxiliary variables. Algorithm 1 returns false only when Refine-Arrays returns false in line 6 of Algorithm 3. This occurs if the refinement procedure is unable to find any array axioms that are violated in the BMC formula ($\rho$ from line 3 of Algorithm 3). It suffices to prove that if all enumerated axioms hold, then the BMC formula is satisfiable in $\mathcal{T}_{A}$ and there is a length-$k$ counterexample. The array property fragment [BMS06, BM07, KS16] is a fragment of the theory of arrays that allows some universal quantification while staying decidable. It is defined for an extensional theory of arrays with $\mathit{read}$ and $\mathit{write}$ functions with the semantics given by (write) and (ext). The fragment is of the form $\forall\vec{i}~{}.~{}\phi_{I}(\vec{i})\rightarrow\phi_{V}(\vec{i})$, for a vector of bound index variables $\vec{i}$, index guard $\phi_{I}$, and value constraint $\phi_{V}$. Both $\phi_{I}$ and $\phi_{V}$ are constrained by a grammar. Our BMC queries are quantifier-free, which places them within the array property fragment. The only universal quantifiers are hidden by the (const) axiom (because constant arrays are not explicitly part of the theory of arrays underlying the array property fragment). However, this simple form of universal quantification is contained in the fragment. Thus, our queries are a strict subset of the more general array property fragment.
Our axiom enumeration is based on a reduction technique [BMS06, BM07, KS16] for the array property fragment that is sound and complete. Because the technique is complete, if the abstract formula is satisfiable and all enumerated axioms are true, then the original $\mathcal{T}_{A}$ formula is satisfiable. $\Box$

## 5\. Expressiveness and Limitations

### 5.1. Expressiveness

We now address the expressiveness of counterexample-guided prophecy with regard to the introduction of auxiliary variables. For simplicity, we ignore the array abstraction, relying on the correctness theorems. An inductive invariant using auxiliary variables can be converted to one without auxiliary variables by first universally quantifying over the prophecy variables, then existentially quantifying over the history variables. The details are captured by this theorem:

###### Theorem 3. Let $\mathcal{S}\coloneqq\langle X,I,T\rangle$ be an STS, and $P(X)$ be a property such that $\mathcal{S}\models P(X)$. Let $H$ be the set of history variables, and $\mathcal{P}$ be the set of prophecy variables introduced by Refine-Arrays. Let $\tilde{\mathcal{S}}\coloneqq\langle X\cup H\cup\mathcal{P},I,\tilde{T}\rangle$ and $\tilde{P}\coloneqq(\bigwedge_{p\in\mathcal{P}}p=\tilde{t}(p))\implies P(X)$ be the system and property with auxiliary variables. The function $\tilde{t}$ maps prophecy variables to their target term from Prophecize. If $\mathit{Inv}(X,H,\mathcal{P})$ is an inductive invariant for $\tilde{\mathcal{S}}$ and entails $\tilde{P}$, then $\exists H\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})$ is an inductive invariant for $\mathcal{S}$ and entails $P$, where $\exists H$ and $\forall\mathcal{P}$ bind each variable in the corresponding set with the corresponding quantifier.

###### Proof 5.1. We assume that $\mathit{Inv}(X,H,\mathcal{P})$ is an inductive invariant that guarantees $\tilde{P}$. Equivalently, it meets the following conditions:

$\tilde{I}\models\mathit{Inv}(X,H,\mathcal{P})$ (initiation)

$\mathit{Inv}(X,H,\mathcal{P})\wedge\tilde{T}\models\mathit{Inv}(X^{\prime},H^{\prime},\mathcal{P}^{\prime})$ (consecution)

$\mathit{Inv}(X,H,\mathcal{P})\models\tilde{P}$ (safety)

We must show that $\exists H\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})$ is an inductive invariant of $\mathcal{S}$ and entails $P$. We accomplish this by demonstrating that each of the three conditions must hold.

Initiation: $I\models\exists H\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})$. This holds trivially because $I$ is unchanged, i.e., no auxiliary variables appear in the initial state constraint.

Consecution: $\exists H\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})\wedge T\models\exists H^{\prime}\forall\mathcal{P}^{\prime}\,\mathit{Inv}(X^{\prime},H^{\prime},\mathcal{P}^{\prime})$. This is equivalent to the following formula (manipulated into negation normal form) being unsatisfiable:

$\exists H\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})\wedge T\wedge\forall H^{\prime}\exists\mathcal{P}^{\prime}\,\neg\mathit{Inv}(X^{\prime},H^{\prime},\mathcal{P}^{\prime})$ (1)

To complete this part of the proof, we introduce the function $\tilde{\sigma}$, which maps primed history variables to their next-state update term from Delay. For example, suppose we called $\textbf{Delay}(\mathcal{S},x,2)$ for state variable $x$; then $\tilde{\sigma}(h_{x}^{1^{\prime}})=x$ and $\tilde{\sigma}(h_{x}^{2^{\prime}})=h_{x}^{1}$. Crucially, the terms in the range of $\tilde{\sigma}$ do not contain variables from $\mathcal{P}$, because prophecy variables are not targeted by Prophecize.
With this notation, the consecution of $\mathit{Inv}(X,H,\mathcal{P})$ for $\tilde{T}$ means that the following formula is unsatisfiable:

$\mathit{Inv}(X,H,\mathcal{P})\wedge T\wedge\left(\bigwedge_{h\in H}h^{\prime}=\tilde{\sigma}(h^{\prime})\right)\wedge\left(\bigwedge_{p\in\mathcal{P}}p^{\prime}=p\right)\wedge\neg\mathit{Inv}(X^{\prime},H^{\prime},\mathcal{P}^{\prime})$ (2)

We now show that the fact that (2) is unsatisfiable entails that (1) is unsatisfiable. First, observe that (2) is equisatisfiable with

$\mathit{Inv}(X,H,\mathcal{P}^{\prime})\wedge T\wedge\neg\mathit{Inv}(X^{\prime},\tilde{\sigma}(H^{\prime}),\mathcal{P}^{\prime})$ (3)

Next, if (1) is satisfiable, then the following formula is satisfiable, where we drop the quantifiers over $H$ and replace them with fresh uninterpreted constants (for convenience, we simply drop the quantifiers and treat the free variables as uninterpreted constants): $\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})\wedge T\wedge\forall H^{\prime}\exists\mathcal{P}^{\prime}\,\neg\mathit{Inv}(X^{\prime},H^{\prime},\mathcal{P}^{\prime})$. If this formula is satisfiable, then the following formula is also satisfiable, where we instantiate the universal quantifier over $H^{\prime}$ with $\tilde{\sigma}(H^{\prime})$: $\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})\wedge T\wedge\exists\mathcal{P}^{\prime}\,\neg\mathit{Inv}(X^{\prime},\tilde{\sigma}(H^{\prime}),\mathcal{P}^{\prime})$. Finally, if this formula is satisfiable, then we can drop the existential quantifiers for $\mathcal{P}^{\prime}$ and instantiate the universal quantifier for $\mathcal{P}$ with $\mathcal{P}^{\prime}$, which gives that (3) is satisfiable. Since (3) is unsatisfiable, (1) must be unsatisfiable as well.

Safety: $\exists H\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})\models P(X)$. This holds when $\exists H\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})\wedge\neg P(X)$ is unsatisfiable. To show this, we construct quantifier instantiations such that the resulting formula must be unsatisfiable. We first instantiate all the variables in $H$ with fresh constants by dropping the existential quantification. Next, we instantiate each of the universally quantified $p\in\mathcal{P}$ with its target term $\tilde{t}(p)$. Note that this target term might be a history variable from $H$, which is now instantiated. We allow $\tilde{t}$ to be applied to sets in the straightforward way. A model for the resulting formula, $\mathit{Inv}(X,H,\mathcal{P})\\{\mathcal{P}\mapsto\tilde{t}(\mathcal{P})\\}\wedge\neg P(X)$, would be a counterexample to assumption (safety). Thus, it must be unsatisfiable.

We have shown that initiation holds trivially, and that consecution and safety hold using quantifier instantiations. Thus, $\exists H\forall\mathcal{P}\,\mathit{Inv}(X,H,\mathcal{P})$ must be an inductive invariant for $\mathcal{S}$ and $P(X)$.

Although the invariants found using counterexample-guided prophecy correspond to $\exists\forall$ invariants over the unmodified system, we must acknowledge that the existential power is very weak. The existential quantifier is only used to remove history variables. While history variables can certainly be employed for existential power in an invariant [PHM+21], these specific history variables are introduced solely to target a term for prophecy and only save a term for some fixed, finite number of steps. Thus, we do not expect to gain much existential power in finding invariants on practical problems.
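To make Theorem 3 concrete, here is a worked instance (ours) on the running example of Section 3. There, $H=\\{\mathit{h_{\mathit{i_{r}}}^{1}}\\}$, $\mathcal{P}=\\{p_{\mathit{i_{r}}}^{1}\\}$, and the quantifier-free strengthening is $\mathit{Inv}\coloneqq\mathit{read}(a,p_{\mathit{i_{r}}}^{1})<200$. Since $\mathit{Inv}$ does not mention the history variable, the conversion yields

$\exists\,\mathit{h_{\mathit{i_{r}}}^{1}}\,\forall\,p_{\mathit{i_{r}}}^{1}.\>\mathit{read}(a,p_{\mathit{i_{r}}}^{1})<200\;\equiv\;\forall\,i.\>\mathit{read}(a,i)<200,$

which is exactly the quantified invariant required by the unaugmented system.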
This use of history and prophecy variables can be thought of as quantifier instantiation at the model checking level, where the instantiation semantically uses a term appearing in an execution of the system. Consequently, our technique performs well on systems where only a small number of instantiations is needed, over terms that are not too distant in time from a potential property violation that must be disproved (i.e., not many history variables are required). This appears to be a common situation for invariant-finding benchmarks, as we show empirically in Section 7.

### 5.2. Limitations

If our CEGAR loop terminates, it either terminates with a proof or with a true counterexample. However, it is possible that the procedure may not terminate. In particular, while we can always refine the abstraction for a given bound $k$, there is no guarantee that this will eventually result in a refinement that rules out all spurious counterexamples (of any length). This failure mode occurs, for instance, when no finite number of calls to Prophecize can capture all the relevant indices of the array. Consider an example system with $I\coloneqq a=\mathit{constarr(0)}$, $T\coloneqq a^{\prime}=\mathit{write(a,i_{0},\mathit{read(a,i_{1})}+1)}$, and $P\coloneqq\mathit{read(a,\mathit{i_{r}})}\geq 0$. The array $a$ is initialized with 0 at every index, and at every step, $a$ is updated at a single index by reading from an arbitrary index of $a$ and adding 1 to the result. (An even simpler system which does not add 1 in the update would already be problematic; however, for that case, it is straightforward to extend our algorithm to have it learn that the array does not change.) Note that the index variables are unconstrained: they can range over the integers freely at each time step. The property is that reading from $a$ at $\mathit{i_{r}}$ returns a nonnegative value. This property holds because of a quantified invariant maintained by the system: $\forall i~{}.~{}\mathit{read(a,i)}\geq 0$. However, the initial abstraction is a memoryless array, which can easily violate the property by returning negative values from reads. Since the array is updated in each step at an arbitrary index based on a read from another arbitrary index, no finite number of prophecy variables (of the form used in Prophecize) can capture all the relevant indices. The procedure will successively rule out longer finite spurious counterexamples, but will never be refined enough to prove the property unboundedly. Note that this is related to our abstraction and to our choice to limit prophecy to predicting values a fixed, finite number of steps before a potential property violation. Another form of prophecy variable could be used to prove this property. For example, a prophecy variable that predicts the first index value that stores a negative value in $a$ could be used to show that this cannot happen. We believe that this issue can be circumvented in an automated fashion with future work. In fact, an approach introduced since the conference version [MIG+21] of this paper uses prophecy variables with a different refinement loop for verifying parameterized protocols, which cannot be handled by our technique due to this limitation [CGR21]. A related, but less fundamental, issue is that the index set might not contain the best choice of targets for prophecy. While the index set _is_ sufficient for ruling out bounded counterexamples, it is possible that there is a better target for universal prophecy that does not appear in the index set.
A related, but less fundamental, issue is that the index set might not contain the best choice of targets for prophecy. While the index set _is_ sufficient for ruling out bounded counterexamples, it is possible that there is a better target for universal prophecy that does not appear in the index set. However, based on the evaluation in Section 7, the index set appears to work well in practice.

## 6\. Implementation Details

We will now describe our prototype of counterexample-guided prophecy along with some practical implementation details. Recall that Algorithm 1 can use any unbounded model checking technique for Prove. In our prototype, we choose to instantiate it with ic3ia [Gri20] (downloaded Apr 27, 2020), an open-source C++ implementation of IC3 via Implicit Predicate Abstraction (IC3IA) [CGMT16], which is itself a CEGAR loop that uses implicit predicate abstraction to perform IC3 [Bra11, BBW14] on infinite-state systems and uses interpolants to find new predicates. ic3ia uses MathSAT [CGSS13] (version 5.6.3) as the backend SMT solver and interpolant producer. We call our prototype prophic3 [MI].

### 6.1. Engineering Heuristics and Options

#### Weak and Strong Abstraction.

It is important to have enough prophecy variables to assist in constructing inductive invariants. We found that we could often obtain a larger, richer set of prophecy variables by weakening our array abstraction. We do this by replacing equality between arrays with an uninterpreted predicate and also checking the congruence axiom, the converse of (ext). Since more axioms are checked, there are more opportunities to introduce auxiliary variables. We call this _weak_ abstraction (WA), as opposed to _strong_ abstraction (SA), which uses regular equality between abstract arrays and guarantees congruence through UF axioms. Our default configuration uses weak abstraction.

#### Lemma and Auxiliary Variable Filtering.

Although the algorithm depends on introducing auxiliary variables, an excessive number of unnecessary auxiliary variables could overwhelm the Prove step. Thus, an improvement not shown in Algorithm 3 is to check consecutive axioms first and only add nonconsecutive ones when necessary. This is the motivation behind the custom array solver implementation CheckArrayAxioms based on [BMS06]. In principle, we could have used an SMT solver to find array axioms, but it would give no preference to consecutive axioms.

Even when enumerating consecutive axioms first, we can still end up with more auxiliary variables than necessary. We use an unsat-core-based procedure to prune nonconsecutive refinement axioms (sketched below). In particular, we attempt to remove nonconsecutive axioms that target indices at times further from the end of the trace, because they would introduce more history variables. In practice, this can substantially reduce the number of added auxiliary variables. Similarly, we could overwhelm the algorithm with unnecessary consecutive axioms. CheckArrayAxioms can still produce hundreds or even thousands of (consecutive) axiom instantiations. Once these are lifted to the transition system, some may be redundant. To mitigate this issue, when the BMC check returns $\bot$ and we are about to return (line 4 of Algorithm 3), we keep only the axioms that appear in the unsat core of the BMC formula [CGS11].
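The following is a minimal sketch of this unsat-core-based pruning, again using z3py purely for illustration; the inputs `bmc_formula` (the unsatisfiable unrolled query) and `candidate_axioms` (the enumerated axiom instantiations) are hypothetical placeholders, not prophic3 API.

```python
# Minimal sketch, assuming z3py: keep only the axiom instantiations that
# appear in an unsat core of the BMC query they were meant to refute.
from z3 import Solver, Bool, unsat

def filter_axioms(bmc_formula, candidate_axioms):
    s = Solver()
    s.add(bmc_formula)
    tracked = []
    for n, ax in enumerate(candidate_axioms):
        marker = Bool('axiom_marker_%d' % n)   # one tracking literal per axiom
        s.assert_and_track(ax, marker)
        tracked.append((marker, ax))
    assert s.check() == unsat                  # the axioms rule out the abstract cex
    core = {str(m) for m in s.unsat_core()}
    # keep only the axioms whose tracking literals appear in the core
    return [ax for (marker, ax) in tracked if str(marker) in core]
```

In the implementation the pruning is also biased: among nonconsecutive axioms, those targeting indices at times far from the end of the trace are the preferred candidates for removal, since each such axiom costs additional history variables.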
#### Abstract Values Refinement Loop.

In our implementation, we also include a simple abstraction-refinement wrapper which abstracts large constant integers and refines them with the actual values if that fails. This is especially useful for verifying software benchmarks with large constant loop bounds. Otherwise, the system might need to be unrolled to a very large bound to reach an abstract counterexample. This was only necessary for a handful of benchmarks in the first benchmark set.

#### Assume Property in Pre-State.

As long as we are only interested in the first violation of a property, we can assume that the property was not violated in the past. This observation is formalized in Theorem 5 of [McM18]. It is common to achieve this for invariant checking by assuming the property over current-state variables in the transition relation, so that every transition starts in a state satisfying the property. In the context of counterexample-guided prophecy, this strategy may prove useful because the property is weakened with each call to Algorithm 2 (the original property becomes the consequent of an implication). We can assume the original (stronger) property in all previous states, which can help the algorithm converge.

```c
int N;
assume ( N > 0 );
int a[N];
int c;
for ( int i = 0 ; i < N ; i++ ) {
  a[i] = c;
}
for ( int j = 0 ; j < N ; j++ ) {
  assert( a[j] == c );
}
```

Figure 3. Initialize Array.

$I\coloneqq i=0\wedge j=0\wedge\neg\mathit{err}$

$T\coloneqq(i<N\rightarrow i^{\prime}=i+1)$
$\wedge\,(i<N\rightarrow a^{\prime}=\mathit{write(a,i,c)})$
$\wedge\,(i<N\rightarrow j^{\prime}=j)$
$\wedge\,(i<N\rightarrow\mathit{err}^{\prime}=\mathit{err})$
$\wedge\,((i\geq N\wedge j<N)\rightarrow i^{\prime}=i)$
$\wedge\,((i\geq N\wedge j<N)\rightarrow a^{\prime}=a)$
$\wedge\,((i\geq N\wedge j<N)\rightarrow j^{\prime}=j+1)$
$\wedge\,((i\geq N\wedge j<N\wedge\mathit{read(a,j)}=c\wedge\neg\mathit{err})\rightarrow\neg\mathit{err}^{\prime})$
$\wedge\,((i\geq N\wedge j\geq N)\rightarrow\bot)$

$P\coloneqq\neg\mathit{err}$

Figure 4. Possible STS encoding of Fig. 3.

Consider the C program shown in Fig. 3, based on one of the benchmarks evaluated in Section 7. The example populates an array of arbitrary size $N$ with an arbitrary, fixed value $c$. Fig. 4 shows one possible encoding of this program as an STS. This encoding is carefully chosen to illustrate a case where assuming the property in the pre-state is needed for counterexample-guided prophecy to converge. Other possible encodings could avoid this issue entirely. In this encoding, if we do not assume the original property in the pre-state, we observed that prophic3 would diverge and introduce an increasing number of prophecy variables. Consider a case where $N$ is assigned a specific value, $5$. Since the array abstraction starts memoryless, the algorithm needs to add 5 prophecy variables to refine the memory. However, since $N$ is arbitrary, this results in an infinite chain of new prophecy variables as longer traces are considered. Furthermore, each time a prophecy variable is introduced, the underlying IC3IA algorithm is restarted with a weaker property. This means that the original property, $\neg\mathit{err}$, cannot be assumed in the pre-state. Note that the new property produced by Prophecize is an implication that will be trivially true along most of the path. The most important part of the transition relation in Fig. 4 to consider is the second-to-last line of $T$; let that be the _error rule_. Since we do not assume $\neg\mathit{err}$, the error rule might be trivially satisfied. Intuitively, without this assumption, we need to justify the assertion for the value of index $j$ at every time step. However, we know it is safe to assume the original property in the pre-state for counterexample-guided prophecy. If we do so, the algorithm converges. This is because that assumption, coupled with predicting a single $j$ one step before a potential property violation, is sufficient to enforce the error rule, which ensures $\neg\mathit{err}$ holds in the next state.
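The transformation itself is a one-liner; the following sketch (z3py for illustration; the variable names are placeholders rather than prophic3 API) shows the strengthening as it would be applied to a transition relation before it is handed to the prover.

```python
# Minimal sketch, assuming z3py: restrict the transition relation to
# transitions that start in a state satisfying the original property.
# Sound when only the first property violation matters (Theorem 5 of [McM18]).
from z3 import Bool, And, Not

def assume_property_in_prestate(trans, orig_prop):
    # orig_prop is over current-state variables, so conjoining it onto
    # the transition relation constrains only the pre-state.
    return And(trans, orig_prop)

err = Bool('err')        # current-state error flag from Fig. 4
orig_prop = Not(err)     # P := not err
# strengthened = assume_property_in_prestate(T, orig_prop), where T is
# the (here omitted) encoding of the transition relation from Fig. 4.
```

Crucially, it is the original property that is conjoined, not the weakened implication produced by Prophecize, which is what restores the strength lost through the repeated weakenings.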
#### Important Variables.

In many implementations of IC3IA, including ic3ia, new predicates are obtained by mining interpolants from an unsatisfiable spurious counterexample trace conjoined with the concrete unrolled transitions. Typically, not all of these predicates are necessary, so they are often reduced using unsatisfiable cores. However, in the context of counterexample-guided prophecy, we might prefer certain predicates. In particular, predicates involving prophecy variables are good candidates, since we know the prophecy variable was necessary to rule out a spurious counterexample. Note that there are two levels of abstraction refinement in this context: the array abstraction and refinement for counterexample-guided prophecy, and the predicate abstraction in IC3IA. Here, we are focused on the latter. One heuristic we tried is always keeping predicates that use a prophecy variable.

#### Finite-domain Indices.

In this paper, we have assumed that the index sort has an infinite domain. This fits our domain of problems that require quantified invariants. If the sort is finite, a universal quantifier could in principle be eliminated by instantiating it with every possible value, although a quantifier may well be much more efficient than this enumeration. The restriction to infinite-domain indices is also a technical limitation of the array solving technique of [BMS06], on which our approach is based. This restriction is shared by many SMT solvers, particularly when there are chains of equalities between writes on constant arrays with different bases, e.g., $a=\mathit{constarr(0_{8})}\wedge b=\mathit{constarr(1_{8})}\wedge\mathit{write(a,i_{2},e_{8})}=\mathit{write(b,j_{2},d_{8})}$, where $c_{w}$ is a bitvector variable or value $c$ with width $w$. An infinite domain allows us to assume that there is always an index value that has not been used in the array formula. This is crucial for the $\lambda$ index, whose primary role is to refer to indices initialized by the constant array axiom that were never overwritten. In a finite domain, we cannot make this assumption. See [BMS06, BM07] for more information on the $\lambda$ index.

This limitation applies only to the array axiom enumeration. The other contributions of this paper, including using prophecy variables in place of universal quantification, are entirely applicable over finite domains. One low-effort approach for applying counterexample-guided prophecy over finite-domain indices is to give up completeness. By simply not including axioms over a $\lambda$ index for finite-domain sorts, the array solving procedure might conclude that a query is satisfiable when it is actually unsatisfiable. Thus, the overall algorithm could return spurious counterexamples, but it would still soundly return proofs. A given spurious counterexample is finite and would be straightforward to analyze, either with a dedicated checker or with an SMT solver without this limitation.
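For concreteness, the chain-of-equalities example above can be written down directly; the following sketch (z3py, for illustration only) builds it with 2-bit indices and 8-bit elements.

```python
# Minimal sketch, assuming z3py: the chain-of-equalities formula over
# constant arrays with different bases, using finite (2-bit) indices.
from z3 import ArraySort, BitVecSort, BitVecVal, Consts, K, Store, And, Solver

IdxS, ElemS = BitVecSort(2), BitVecSort(8)
a, b = Consts('a b', ArraySort(IdxS, ElemS))
i, j = Consts('i j', IdxS)
e, d = Consts('e d', ElemS)

formula = And(a == K(IdxS, BitVecVal(0, 8)),     # a = constarr(0_8)
              b == K(IdxS, BitVecVal(1, 8)),     # b = constarr(1_8)
              Store(a, i, e) == Store(b, j, d))  # write(a,i,e) = write(b,j,d)

s = Solver()
s.add(formula)
print(s.check())  # unsat: two single writes cannot reconcile the
                  # all-0 and all-1 arrays over four indices
```

With a 1-bit index sort the same formula becomes satisfiable, since the two writes can cover both indices; reasoning of this kind is exactly what the fresh $\lambda$ index sidesteps in the infinite-domain case.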
## 7\. Experiments

### 7.1. Setup

We evaluate our tool against three state-of-the-art tools for inferring universally quantified invariants over linear arithmetic and arrays: freqhorn, quic3, and gspacer. All these tools are Constrained Horn Clause (CHC) solvers built on Z3 [dMB08]. The algorithm implemented in this version of freqhorn [Fedb] is a _syntax-guided synthesis_ [ABD+15] approach for inferring universally quantified invariants over arrays [FPMG19]. quic3 is built on Spacer [KGC14], the default CHC engine in Z3, and extends IC3 over linear arithmetic and arrays to allow universally quantified frames (frames are candidates for inductive invariants maintained by the IC3 algorithm) [GSV18]. It also maintains a set of quantifier instantiations which are provided to the underlying SMT solver. quic3 was recently incorporated into Z3. We used Z3 version 4.8.9 with the parameters suggested by the quic3 authors (fp.spacer.q3.use_qgen=true, fp.spacer.ground_pobs=false, fp.spacer.mbqi=false, fp.spacer.use_euf_gen=true). Finally, gspacer is an extension of Spacer which adds three new inference rules for improving local generalizations with global guidance [KCSG20]. While this last technique does not specifically target universally quantified invariants, it can be used along with the quic3 options in Spacer and potentially executes a much different search. The gspacer submission [KG20] won the arrays category in CHC-COMP 2020 [Rü20b]. We use the same configuration entered in the competition. We also include ic3ia and the default configuration of Spacer in our results, neither of which can produce universally quantified invariants. Our default configuration of prophic3 uses weak abstraction. We chose to build our prototype on ic3ia instead of Spacer, in part because we needed uninterpreted functions for our array abstraction, and Spacer does not handle them in a straightforward way, due to the semantics of CHC [BGMR15].

We compare these solvers on four benchmark sets: i) freqhorn — all benchmarks containing arrays in [Feda] from the freqhorn paper [FPMG19]; ii) quic3 — benchmarks from the quic3 paper [GSV18] (these were C programs from SV-COMP [Bey17] that were modified to require universally quantified invariants); iii) vizel — additional benchmarks provided to us by the authors of [GSV18]; and iv) chc-comp-2020 — the array category benchmarks of CHC-COMP 2020 [Rü20a] (as explained below, these contain a translation of the quic3 benchmarks). Additionally, we sort the benchmarks into four categories: 1) Q — safe benchmarks solved by some tool supporting quantified invariants but by none of the solvers that do not; 2) QF — those solved by at least one of the tools that do not support quantified invariants, plus any unsafe benchmarks; 3) US — unsafe benchmarks; and 4) UK — unknown (i.e., unsolved) benchmarks. Because not all of the benchmark sets were guaranteed to require quantifiers, this is an approximation of which benchmarks required quantified reasoning to prove safe.

Both prophic3 and ic3ia take a transition system and property specified in the Verification Modulo Theories (VMT) format [CRGI11], which is a transition system format built on SMT-LIB [BFT16]. All other solvers read the CHC format. We translated benchmark sets freqhorn and chc-comp-2020 from CHC to VMT using the _horn2vmt_ program, which is distributed with ic3ia. For benchmark sets quic3 and vizel, we started with the C programs and generated both VMT and CHC using _Kratos2_ (an updated version of _Kratos_ [CGM+11]). We note that chc-comp-2020 includes another translation of the quic3 benchmarks to CHC by SeaHorn [GKKN15]. We ran all experiments on a 3.5GHz Intel Xeon E5-2637 v4 CPU with a timeout of 2 hours and a memory limit of 32GB.

Figure 5. Number of solved benchmarks over time (sorted).
| solver | freqhorn (81) | quic3 (43) | vizel (33) | chc-comp-2020 (501) | total |
|---|---|---|---|---|---|
| prophic3 | 66/5/0 | 41/0/0 | 22/3/1 | 42/159/56 | 171/167/57 |
| freqhorn | 65/4/0 | 0/0/0 | 0/1/0 | 4/46/1 | 69/51/1 |
| quic3 | 55/4/0 | 34/0/0 | 15/4/1 | 74/137/75 | 178/145/76 |
| gspacer | 34/5/0 | 27/0/0 | 18/3/1 | 66/139/94 | 145/147/95 |
| Spacer | 0/5/0 | 0/0/0 | 0/4/1 | 0/134/77 | 0/143/78 |
| ic3ia | 0/4/0 | 0/0/0 | 0/3/1 | 0/158/60 | 0/165/61 |

Figure 6. Experimental results. They are reported as _# Q_ / _# QF_ / _# US_.

| solver | freqhorn (81) | quic3 (43) | vizel (33) | chc-comp-2020 (501) | total |
|---|---|---|---|---|---|
| prophic3 | 66/5/0 | 41/0/0 | 22/3/1 | 42/159/56 | 171/167/57 |
| prophic3_sa | 62/5/0 | 38/0/0 | 20/3/1 | 41/159/65 | 161/167/66 |
| prophic3_nav | 57/4/0 | 42/0/0 | 21/3/1 | 43/160/55 | 163/167/56 |
| prophic3_na | 68/4/0 | 41/0/0 | 18/3/1 | 44/159/56 | 171/166/57 |
| prophic3_npr | 66/4/0 | 42/0/0 | 20/3/1 | 45/159/56 | 173/166/57 |
| prophic3_ntp | 66/4/0 | 40/0/0 | 20/3/1 | 33/159/56 | 159/166/57 |
| prophic3_nur | 66/5/0 | 42/0/0 | 20/3/1 | 40/159/55 | 168/167/56 |
| prophic3_nhp | 68/4/0 | 41/0/0 | 22/3/1 | 42/159/56 | 173/166/57 |
| prophic3_nar | 64/4/0 | 42/0/0 | 21/3/1 | 42/159/55 | 169/166/56 |
| prophic3_noheur | 65/5/0 | 38/0/0 | 13/3/1 | 28/158/57 | 144/166/58 |

Figure 7. Self-comparison with different options, reported as _# Q_ / _# QF_ / _# US_.

### 7.2. Results

The results are shown in Fig. 6 as a table. Fig. 5 shows cactus plots demonstrating the number of solved benchmarks over time. We first observe that prophic3 solves the most benchmarks in the freqhorn, quic3, and vizel benchmark sets, both overall and in category Q. The quic3 (and most of the freqhorn) benchmarks require quantified invariants; thus, ic3ia and Spacer cannot solve any of them. On solved instances in the Q category, prophic3 introduced an average of 1.2 prophecy variables and a median of 1. This makes sense because, upon inspection, most benchmarks only require one quantifier, and we are careful to only introduce prophecy variables when needed. On benchmarks it cannot solve, ic3ia either times out or fails to compute an interpolant. This is expected because quantifier-free interpolants are not guaranteed over the standard theory of arrays. Even without arrays, it is also possible for prophic3 to fail to compute an interpolant, because MathSAT’s interpolation procedure is incomplete for combinations with non-convex theories such as integers. However, this was rarely observed in practice.

We further observe that prophic3 does not perform as well on unsafe benchmarks. This is expected, because our array solving procedure is enumeration-based and should be slower than the array theory solvers within an SMT solver. However, we believe that a dedicated array solving procedure is important for the performance of the overall algorithm, especially on safe benchmarks. We tried minimal experiments with obtaining array lemmas directly from the SMT solver and did not achieve comparable performance. This is likely because our array solver is aware of the ultimate goal to run Prophecize with a small delay and can enumerate array axioms in a corresponding order, starting with index instantiations that would require the fewest history variables.

There was one discrepancy in our experiments. On chc-LIA-lin-arrays_381, gspacer disagrees with quic3, Spacer, and prophic3. This is the same discrepancy mentioned in the CHC-COMP 2020 report [Rü20b].
prophic3 proved this benchmark safe without introducing any auxiliary variables, and we used both CVC4 [BCD+11] and MathSAT to verify that the solution was indeed an inductive invariant for the concrete system. We are confident that this benchmark is safe and thus do not count it as a solved instance for gspacer.

Some of the tools are sensitive to the encoding. Since it is syntax-guided, freqhorn is sensitive to the encoding syntax. The freqhorn benchmarks were hand-written in CHC to be syntactically simple; this simplicity is maintained by horn2vmt and also benefits prophic3. However, prophic3 can be sensitive to other encodings. For example, the quic3 benchmarks translated by SeaHorn and included in chc-comp-2020 are much harder for prophic3 to solve (after translation by horn2vmt) compared to the direct C-to-VMT translation using _Kratos2_. We found that prophic3 solves 6 benchmarks when translated by horn2vmt $\circ$ SeaHorn, versus 41 when translated directly by _Kratos2_. We stress that the CHC solvers performed similarly on both encodings: our experiments showed that quic3 and freqhorn solved exactly the same number in both translations, and gspacer solved 27 when translated with _Kratos2_ and 34 when translated with SeaHorn. Importantly, prophic3 on the _Kratos2_ encoding solved more benchmarks than any other tool and encoding pair.

There are two main reasons why prophic3 fails on the SeaHorn encodings. First, due to the LLVM-based encoding, some of the SeaHorn translations have index sets which are insufficient for finding the right prophecy variable. This has to do with the memory encoding and the way that fresh variables and guards are used. SeaHorn also splits memories into ranges, which is problematic for our technique. Second, the SeaHorn translation is optimized for CHC, not for transition systems. For example, it introduces many new variables, and the argument order between different predicates may not match. In the transition system, this essentially has the effect of interchanging the values of variables between each loop. SeaHorn has options that address some of these issues, and these helped prophic3 solve more benchmarks, but none of these options produce encodings that work as well as the _Kratos2_ encodings. The difference between good CHC and transition system encodings could also explain the overall difference in performance on the _chc-comp-2020_ benchmarks, most of which were translated by SeaHorn. Both of these issues are practical, not fundamental, and we believe they can be resolved with additional engineering effort.

### 7.3. Self Comparison

Next, we run a self-comparison using different options in prophic3. We accomplish this by starting with the configuration used above and dropping a single feature to obtain a new configuration. This serves as a metric of how important each heuristic is to the overall performance of prophic3. Each configuration has a unique string identifying it as follows:

1. (1) sa: with strong abstraction;
2. (2) nav: no outer CEGAR loop that abstracts large values;
3. (3) na: no assuming the property in the pre-state;
4. (4) npr: no attempting to reduce the number of prophecy variables introduced;
5. (5) ntp: no tracking prophecy variables as important variables to guide IC3IA to useful predicates;
6. (6) nur: no unsat-core-based reduction when enumerating timed axioms;
7. (7) nhp: no seeding IC3IA with predicates obtained from equalities between history variables and targets over current-state variables, e.g., if $h_{t}^{1\prime}=t$ is in the transition relation, this would add $h_{t}^{1}=t$ as a predicate;
8. (8) nar: no additional reduction of consecutive axioms (differs from nur in that the consecutive axioms are lifted first);
9. (9) noheur: a combination of 2–8.

Figure 8. Number of solved benchmarks in self-comparison over time (sorted).

Fig. 7 shows the results in a table, and Fig. 8 plots the number of solved benchmarks over time. We observe that prophic3_sa solves fewer benchmarks in the freqhorn, quic3, and vizel sets. However, it is faster on commonly solved instances. This makes sense because it needs to check fewer axioms (it uses built-in equality and thus does not check equality axioms). We suspect that it solves fewer benchmarks in the first three sets because it was unable to find the right prophecy variable. For example, for the standard_find_true-unreach-call_ground benchmark in the _quic3_ set, a prophecy variable is needed to find a quantifier-free invariant. However, because of the stronger reasoning power of SA, the system can be sufficiently refined without introducing auxiliary variables. ic3ia is then unable to prove the property on the resulting system without the prophecy variable, instead timing out. Interestingly, notice that prophic3_sa solves the most benchmarks in the QF category overall, suggesting that there are practical performance benefits of the CEGAR approach even when quantified reasoning is not needed.

Feature nav is the additional CEGAR loop for abstracting large values. The results show that this primarily affects the freqhorn benchmarks. This is expected because those contained several examples with large, constant loop bounds. This means a quantifier was not strictly necessary, but was needed in practice. Without abstracting the loop bound, Algorithm 3 would take far too long to reach spurious counterexamples due to the large unrolling bound before an error state is reached.

Based on these results, each of the other heuristics alone does not make a big difference for these benchmarks. However, the noheur experiment demonstrates that dropping all of them simultaneously does negatively impact performance. It is slower overall in the cactus plots of Fig. 8, and it solves markedly fewer benchmarks in the _vizel_ and _chc-comp-2020_ benchmark sets. The core algorithm performs well alone, but the heuristics interact to further improve performance.

## 8\. Related Work

We refer often to McMillan’s work in [McM18]. In that paper, McMillan reduces infinite-state model checking problems to finite-state problems that can be checked with a SAT-based model checking algorithm by eagerly instantiating axioms. Not all possible axioms are instantiated, which is why this is an eager _abstraction_. This process requires introducing auxiliary variables. We use several of the same theorems, but for a different goal. Rather than reducing infinite-state to finite-state systems, we are interested in reducing problems with quantified inductive invariants to ones with quantifier-free invariants. Furthermore, while the approach of [McM18] is a very general framework that is primarily applied manually, we focused on infinite arrays and provided a fully automated algorithm.

There are two important related approaches for abstracting arrays in horn clauses [MG16] and memories in hardware [Bje08].
Both make a similar observation that arrays can be abstracted by modifying the property to maintain values at only a finite set of symbolic indices. We differ from the former by using a refinement loop that automatically adjusts the precision and targets relevant indices. The latter is also a refinement loop that adjusts precision, but differs in the domain and the refinement approach, which uses a multiplexor tree. Although neither paper uses the term _prophecy variable_, their refinement approaches can be viewed as prophecy-variable based. We differ from both approaches in our use of array axioms to automatically find and add auxiliary variables. A similar lazy array axiom instantiation technique is proposed in [BCS20]. However, their technique utilizes interpolants for finding violated axioms and cannot infer universally quantified invariants. The work of [CGI+18] also uses lazy axiom-based refinement, abstracting non-linear arithmetic with uninterpreted functions. We differ in the domain and the use of auxiliary variables. In [PHM+21], prophecy variables defined by temporal logic formulas are used for liveness and temporal proofs, with the primary goal of increasing the power of a temporal proof system. In contrast, we use prophecy variables here for a different purpose, and we also find them automatically. The work of [CN09] includes an approach for synthesizing auxiliary variables for modular verification of concurrent programs. Our approach differs significantly in the domain and details.

There is a substantial body of work on automated quantified invariant generation for arrays using first-order theorem provers [KV13, CKR16, KV09, McM08]. These include extensions to saturation-based theorem proving to analyze specific kinds of predicates, and an extension to paramodulation-based theorem proving to produce universally quantified interpolants. In [LTZZ16], the authors propose an abstract interpretation approach to synthesize universally quantified array invariants. Our method also uses abstraction, but in a CEGAR framework. Two other notable approaches capable of proving properties over arrays that require invariants with alternating quantifiers are [GGK20, PS20]. The former proposes _trace logic_ for extending first-order theorem provers to software verification, and the latter takes a _counterexample-guided inductive synthesis_ approach. Our approach takes a model checking perspective and differs significantly in the details. While these approaches are more general, we compared against state-of-the-art tools that focus specifically on universally quantified invariants.

MCMT [GR10, GKLT12, CGK+17] and its derivatives [ABG+12, AGS14] are backward-reachability algorithms for proving properties over “array-based systems,” which are typically used to model parameterized protocols. These approaches target syntactically restricted _functional_ transition systems with universally quantified properties, whereas our approach targets general transition systems. Two other approaches for solving parameterized systems modeled with arrays are [GSM16] and [MGJ+19]. The former iteratively fixes the number of expected universal quantifiers, then eagerly instantiates them and encodes the invariant search to nonlinear CHC. The latter first uses a finite-state model checker to discover an inductive invariant for a specific parameterization and then applies a heuristic generalization process. We differ from all these techniques in domain and the use of auxiliary variables.
Due to the limitations explained in Section 5, we do not expect our approach to work well for parameterized protocol verification without improvements. In [LB04], heuristics are proposed for finding predicates with free indices that can be universally quantified in a predicate abstraction-based inductive invariant search. Our approach is counterexample-guided and does not utilize predicate abstraction directly (although IC3IA does). The authors of [KKRS17] propose a technique for Java programs that associates heap memory with the program location where it was allocated and generates CHC verification conditions. This enables the discovery of invariants over all heap memory allocated at that location, which implicitly provides quantified invariants. This is similar to our approach in that it gives quantification power without explicitly using quantifiers and in that their encoding removes arrays. However, we differ in that we focus on transition systems and utilize a different paradigm to obtain this implicit quantification. Prophecy variables have also been proposed for Hoare-style reasoning about concurrent programs. In [ZFF+12], the authors formalize “structural” prophecy variables for Hoare logic, which can only predict state within their own thread. The authors of [JLP+20] generalize this approach for separation logic to allow predicting values between different threads. Our work differs in the domain and level of automation. ## 9\. Conclusion We presented a novel approach for model checking transition systems containing arrays. We observed that history and prophecy variables can be extremely useful for reducing quantified invariants to quantifier-free invariants. We demonstrated that an initially weak abstraction in our CEGAR loop can help us to _automatically_ introduce relevant auxiliary variables. Finally, we evaluated our approach on four sets of interesting array-manipulating benchmarks. In future work, we hope to improve performance, explore a tighter integration with the underlying model checker, address the limitations described in Section 5, and investigate applications of counterexample-guided prophecy to other theories. ## Acknowledgment This work was supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1656518. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Additional support was provided by DARPA, under grant No. FA8650-18-2-7854. We thank these sponsors for their support. We would also like to thank Alessandro Cimatti for his invaluable feedback on the initial ideas of this paper. ## Data Availability Statement The experimental results and the necessary software (on the TACAS 2021 Ubuntu 20.04 virtual machine [HJN20]) for reproducing the results shown in Figure 6 are available in the Figshare repository: https://doi.org/10.6084/m9.figshare.13619096. ## References * [ABD+15] Rajeev Alur, Rastislav Bodík, Eric Dallal, Dana Fisman, Pranav Garg, Garvit Juniwal, Hadas Kress-Gazit, P. Madhusudan, Milo M. K. Martin, Mukund Raghothaman, Shambwaditya Saha, Sanjit A. Seshia, Rishabh Singh, Armando Solar-Lezama, Emina Torlak, and Abhishek Udupa. Syntax-guided synthesis. In Dependable Software Systems Engineering, volume 40, pages 1–25. IOS Press, 2015. * [ABG+12] Francesco Alberti, Roberto Bruttomesso, Silvio Ghilardi, Silvio Ranise, and Natasha Sharygina. 
SAFARI: SMT-based abstraction for arrays with interpolants. In CAV, volume 7358 of Lecture Notes in Computer Science, pages 679–685. Springer, 2012. * [AGS14] Francesco Alberti, Silvio Ghilardi, and Natasha Sharygina. Booster: An acceleration-based verification framework for array programs. In ATVA, volume 8837 of Lecture Notes in Computer Science, pages 18–23. Springer, 2014. * [AL88] Martin Abadi and Leslie Lamport. The existence of refinement mappings. In Proceedings of the 3rd Annual Symposium on Logic in Computer Science, pages 165–175, July 1988. LICS 1988 Test of Time Award. URL: https://www.microsoft.com/en-us/research/publication/the-existence-of-refinement-mappings/. * [BB08] Robert Brummayer and Armin Biere. Lemmas on demand for the extensional theory of arrays. In Proceedings of the Joint Workshops of the 6th International Workshop on Satisfiability Modulo Theories and 1st International Workshop on Bit-Precise Reasoning, SMT ’08/BPR ’08, pages 6–11, New York, NY, USA, 2008. Association for Computing Machinery. doi:10.1145/1512464.1512467. * [BBW14] Johannes Birgmeier, Aaron R. Bradley, and Georg Weissenbacher. Counterexample to induction-guided abstraction-refinement (CTIGAR). In CAV, volume 8559 of Lecture Notes in Computer Science, pages 831–848. Springer, 2014. * [BCCZ99] Armin Biere, Alessandro Cimatti, Edmund Clarke, and Yunshan Zhu. Symbolic model checking without BDDs. In W. Rance Cleaveland, editor, Tools and Algorithms for the Construction and Analysis of Systems, pages 193–207, Berlin, Heidelberg, 1999. Springer Berlin Heidelberg. * [BCD+11] Clark Barrett, Christopher L. Conway, Morgan Deters, Liana Hadarean, Dejan Jovanović, Tim King, Andrew Reynolds, and Cesare Tinelli. CVC4. In Ganesh Gopalakrishnan and Shaz Qadeer, editors, Proceedings of the 23rd International Conference on Computer Aided Verification (CAV ’11), volume 6806 of Lecture Notes in Computer Science, pages 171–177. Springer, July 2011. Snowbird, Utah. URL: http://www.cs.stanford.edu/~barrett/pubs/BCD+11.pdf. * [BCS20] Denis Bueno, Arlen Cox, and Karem Sakallah. EUFicient reachability for software with arrays. In Formal Methods in Computer Aided Design, 2020. * [Bey17] Dirk Beyer. Software verification with validation of results - (report on SV-COMP 2017). In TACAS (2), volume 10206 of Lecture Notes in Computer Science, pages 331–349, 2017. * [BFT16] Clark Barrett, Pascal Fontaine, and Cesare Tinelli. The Satisfiability Modulo Theories Library (SMT-LIB). www.SMT-LIB.org, 2016. * [BGMR15] Nikolaj Bjørner, Arie Gurfinkel, Kenneth L. McMillan, and Andrey Rybalchenko. Horn clause solvers for program verification. In Fields of Logic and Computation II, volume 9300 of Lecture Notes in Computer Science, pages 24–51. Springer, 2015. * [BGR12] Roberto Bruttomesso, Silvio Ghilardi, and Silvio Ranise. Quantifier-free interpolation of a theory of arrays. Logical Methods in Computer Science, 8, 04 2012. doi:10.2168/LMCS-8(2:4)2012. * [Bje08] P. Bjesse. Word-level sequential memory abstraction for model checking. In 2008 Formal Methods in Computer Aided Design, pages 1–9, Nov 2008. doi:10.1109/FMCAD.2008.ECP.20. * [BM07] Aaron R. Bradley and Zohar Manna. The calculus of computation - decision procedures with applications to verification. Springer, 2007. * [BMS06] Aaron R. Bradley, Zohar Manna, and Henny B. Sipma. What’s decidable about arrays? In E. Allen Emerson and Kedar S. Namjoshi, editors, Verification, Model Checking, and Abstract Interpretation, pages 427–442, Berlin, Heidelberg, 2006.
Springer Berlin Heidelberg. * [Bra11] Aaron R. Bradley. SAT-based model checking without unrolling. In VMCAI, volume 6538 of Lecture Notes in Computer Science, pages 70–87. Springer, 2011. * [BT18] Clark W. Barrett and Cesare Tinelli. Satisfiability modulo theories. In Handbook of Model Checking, pages 305–343. Springer, 2018. * [CGI+18] Alessandro Cimatti, Alberto Griggio, Ahmed Irfan, Marco Roveri, and Roberto Sebastiani. Incremental linearization for satisfiability and verification modulo nonlinear arithmetic and transcendental functions. ACM Trans. Comput. Log., 19(3):19:1–19:52, 2018. * [CGK+17] Sylvain Conchon, Amit Goel, Sava Krstic, Rupak Majumdar, and Mattias Roux. FAR-cubicle - A new reachability algorithm for Cubicle. In FMCAD, pages 172–175. IEEE, 2017. * [CGM+11] Alessandro Cimatti, Alberto Griggio, Andrea Micheli, Iman Narasamdya, and Marco Roveri. Kratos - A software model checker for SystemC. In CAV, volume 6806 of Lecture Notes in Computer Science, pages 310–316. Springer, 2011. * [CGMT16] Alessandro Cimatti, Alberto Griggio, Sergio Mover, and Stefano Tonetta. Infinite-state invariant checking with IC3 and predicate abstraction. Formal Methods in System Design, 49(3):190–218, 2016. * [CGR21] Alessandro Cimatti, Alberto Griggio, and Gianluca Redondi. Universal invariant checking of parametric systems with quantifier-free SMT reasoning. In CADE, volume 12699 of Lecture Notes in Computer Science, pages 131–147. Springer, 2021. * [CGS11] Alessandro Cimatti, Alberto Griggio, and Roberto Sebastiani. Computing small unsatisfiable cores in satisfiability modulo theories. J. Artif. Intell. Res., 40:701–728, 2011. * [CGSS13] Alessandro Cimatti, Alberto Griggio, Bastiaan Schaafsma, and Roberto Sebastiani. The MathSAT5 SMT Solver. In Nir Piterman and Scott Smolka, editors, Proceedings of TACAS, volume 7795 of LNCS. Springer, 2013. * [CH15] Jürgen Christ and Jochen Hoenicke. Weakly equivalent arrays. In Carsten Lutz and Silvio Ranise, editors, Frontiers of Combining Systems, pages 119–134, Cham, 2015. Springer International Publishing. * [CKR16] Yuting Chen, Laura Kovács, and Simon Robillard. Theory-specific reasoning about loops with arrays using Vampire. In Vampire@IJCAR, volume 44 of EPiC Series in Computing, pages 16–32. EasyChair, 2016. * [Cla03] Edmund M. Clarke. Counterexample-guided abstraction refinement. In TIME, page 7. IEEE Computer Society, 2003. * [CN09] Ariel Cohen and Kedar S. Namjoshi. Local proofs for global safety properties. Formal Methods Syst. Des., 34(2):104–125, 2009. * [Cra57] William Craig. Linear reasoning. A new form of the Herbrand-Gentzen theorem. J. Symb. Log., 22(3):250–268, 1957. * [CRGI11] Alessandro Cimatti, Marco Roveri, Alberto Griggio, and Ahmed Irfan. Verification Modulo Theories. http://www.vmt-lib.org, 2011. * [dB09] L. de Moura and N. Bjørner. Generalized, efficient array decision procedures. In 2009 Formal Methods in Computer-Aided Design, pages 45–52, Nov 2009. doi:10.1109/FMCAD.2009.5351142. * [dMB08] Leonardo de Moura and Nikolaj Bjørner. Z3: An efficient SMT solver. In C. R. Ramakrishnan and Jakob Rehof, editors, Tools and Algorithms for the Construction and Analysis of Systems, pages 337–340, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg. * [Feda] Grigory Fedyukovich. Freqhorn benchmarks. URL: https://github.com/grigoryfedyukovich/aeval/tree/615f4c4abfd51550d939495841aa9a531d4f09e2/bench_horn. * [Fedb] Grigory Fedyukovich. Freqhorn implementation.
URL: https://github.com/grigoryfedyukovich/aeval/commit/f5cc11808c1b73886a4e7d5a71daeffb45470b9a. * [FPMG19] Grigory Fedyukovich, Sumanth Prabhu, Kumar Madhukar, and Aarti Gupta. Quantified invariants via syntax-guided synthesis. In CAV (1), volume 11561 of Lecture Notes in Computer Science, pages 259–277. Springer, 2019. * [GGK20] Pamina Georgiou, Bernhard Gleiss, and Laura Kovács. Trace logic for inductive loop reasoning. In Formal Methods in Computer Aided Design, 2020. * [GKF08] Amit Goel, Sava Krstić, and Alexander Fuchs. Deciding array formulas with frugal axiom instantiation. In Proceedings of the Joint Workshops of the 6th International Workshop on Satisfiability Modulo Theories and 1st International Workshop on Bit-Precise Reasoning, SMT ’08/BPR ’08, pages 12–17, New York, NY, USA, 2008. Association for Computing Machinery. doi:10.1145/1512464.1512468. * [GKKN15] Arie Gurfinkel, Temesghen Kahsai, Anvesh Komuravelli, and Jorge A. Navas. The SeaHorn verification framework. In CAV (1), volume 9206 of Lecture Notes in Computer Science, pages 343–361. Springer, 2015. * [GKLT12] Amit Goel, Sava Krstic, Rebekah Leslie, and Mark R. Tuttle. SMT-based system verification with DVF. In SMT@IJCAR, volume 20 of EPiC Series in Computing, pages 32–43. EasyChair, 2012. * [GR10] Silvio Ghilardi and Silvio Ranise. MCMT: A model checker modulo theories. In Jürgen Giesl and Reiner Hähnle, editors, Automated Reasoning, pages 22–29, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. * [Gri20] Alberto Griggio. Open-source IC3 modulo theories with implicit predicate abstraction. https://es-static.fbk.eu/people/griggio/ic3ia/index.html, Accessed 2020. * [GSM16] Arie Gurfinkel, Sharon Shoham, and Yuri Meshman. SMT-based verification of parameterized systems. In SIGSOFT FSE, pages 338–348. ACM, 2016. * [GSV18] Arie Gurfinkel, Sharon Shoham, and Yakir Vizel. Quantifiers on demand. In Shuvendu K. Lahiri and Chao Wang, editors, Automated Technology for Verification and Analysis, pages 248–266, Cham, 2018. Springer International Publishing. * [HJN20] Sebastian Hjort Hyberts, Peter Gjøl Jensen, and Thomas Neele. TACAS 21 Artifact Evaluation VM - Ubuntu 20.04 LTS, September 2020. doi:10.5281/zenodo.4041464. * [Hod93] Wilfrid Hodges. Model theory, volume 42 of Encyclopedia of mathematics and its applications. Cambridge University Press, 1993. * [JLP+20] Ralf Jung, Rodolphe Lepigre, Gaurav Parthasarathy, Marianna Rapoport, Amin Timany, Derek Dreyer, and Bart Jacobs. The future is ours: prophecy variables in separation logic. Proc. ACM Program. Lang., 4(POPL):45:1–45:32, 2020. * [KCSG20] Hari Govind Vediramana Krishnan, YuTing Chen, Sharon Shoham, and Arie Gurfinkel. Global guidance for local generalization in model checking. In CAV (2), volume 12225 of Lecture Notes in Computer Science, pages 101–125. Springer, 2020. * [KG20] Hari Govind Vediramana Krishnan and Arie Gurfinkel. Spacer CHC-COMP 2020 Submission, 2020. URL: https://www.starexec.org/starexec/secure/details/configuration.jsp?id=350966. * [KGC14] Anvesh Komuravelli, Arie Gurfinkel, and Sagar Chaki. SMT-based model checking for recursive programs. In Armin Biere and Roderick Bloem, editors, Computer Aided Verification, pages 17–34, Cham, 2014. Springer International Publishing. * [KKRS17] Temesghen Kahsai, Rody Kersten, Philipp Rümmer, and Martin Schäf. Quantified heap invariants for object-oriented programs. In LPAR, volume 46 of EPiC Series in Computing, pages 368–384. EasyChair, 2017.
* [KMZ06] Deepak Kapur, Rupak Majumdar, and Calogero G. Zarba. Interpolation for data structures. In SIGSOFT FSE, pages 105–116. ACM, 2006. * [KS16] Daniel Kroening and Ofer Strichman. Decision Procedures - An Algorithmic Point of View, Second Edition. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2016. * [KV09] Laura Kovács and Andrei Voronkov. Finding loop invariants for programs over arrays using a theorem prover. In FASE, volume 5503 of Lecture Notes in Computer Science, pages 470–485. Springer, 2009. * [KV13] Laura Kovács and Andrei Voronkov. First-order theorem proving and Vampire. In CAV, volume 8044 of Lecture Notes in Computer Science, pages 1–35. Springer, 2013. * [LB04] Shuvendu K. Lahiri and Randal E. Bryant. Indexed predicate discovery for unbounded system verification. In CAV, volume 3114 of Lecture Notes in Computer Science, pages 135–147. Springer, 2004. * [LTZZ16] B. Li, Z. Tang, J. Zhai, and J. Zhao. Automatic invariant synthesis for arrays in simple programs. In 2016 IEEE International Conference on Software Quality, Reliability and Security (QRS), pages 108–119, Aug 2016. doi:10.1109/QRS.2016.23. * [Mcc62] J. McCarthy. Towards a mathematical science of computation. In IFIP Congress, pages 21–28. North-Holland, 1962. * [McM08] K. L. McMillan. Quantified invariant generation using an interpolating saturation prover. In C. R. Ramakrishnan and Jakob Rehof, editors, Tools and Algorithms for the Construction and Analysis of Systems, pages 413–427, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg. * [McM18] Kenneth L. McMillan. Eager abstraction for symbolic model checking. In Hana Chockler and Georg Weissenbacher, editors, Computer Aided Verification, pages 191–208, Cham, 2018. Springer International Publishing. * [MG16] David Monniaux and Laure Gonnord. Cell morphing: From array programs to array-free horn clauses. In SAS, volume 9837 of Lecture Notes in Computer Science, pages 361–382. Springer, 2016. * [MGJ+19] Haojun Ma, Aman Goel, Jean-Baptiste Jeannin, Manos Kapritsos, Baris Kasikci, and Karem A. Sakallah. I4: incremental inference of inductive invariants for verification of distributed protocols. In SOSP, pages 370–384. ACM, 2019. * [MI] Makai Mann and Ahmed Irfan. Prophic3 prototype. URL: https://github.com/makaimann/prophic3/commit/497e2fbfb813bcf0a2c3bcb5b55ad47b2a678611. * [MIG+21] Makai Mann, Ahmed Irfan, Alberto Griggio, Oded Padon, and Clark W. Barrett. Counterexample-guided prophecy for model checking modulo the theory of arrays. In TACAS (1), volume 12651 of Lecture Notes in Computer Science, pages 113–132. Springer, 2021. * [OG76] Susan S. Owicki and David Gries. An axiomatic proof technique for parallel programs I. Acta Informatica, 6:319–340, 1976. * [PHM+21] Oded Padon, Jochen Hoenicke, Kenneth L. McMillan, Andreas Podelski, Mooly Sagiv, and Sharon Shoham. Temporal prophecy for proving temporal properties of infinite-state systems. Formal Methods in System Design, Jul 2021. doi:10.1007/s10703-021-00377-1. * [Pnu77] Amir Pnueli. The temporal logic of programs. In FOCS, pages 46–57. IEEE Computer Society, 1977. * [PS20] Elizabeth Polgreen and Sanjit A. Seshia. Synrg: Syntax guided synthesis of invariants with alternating quantifiers. CoRR, abs/2007.10519, 2020. * [Rü20a] Philipp Rümmer. CHC COMP 2020. https://chc-comp.github.io/, 2020. * [Rü20b] Philipp Rümmer. Competition Report: CHC-COMP-20, 2020. URL: https://arxiv.org/abs/2008.02939. * [SSS00] Mary Sheeran, Satnam Singh, and Gunnar Stålmarck.
Checking safety properties using induction and a SAT-solver. In FMCAD, volume 1954 of Lecture Notes in Computer Science, pages 108–125. Springer, 2000. * [ZFF+12] Zipeng Zhang, Xinyu Feng, Ming Fu, Zhong Shao, and Yong Li. A structural approach to prophecy variables. In TAMC, volume 7287 of Lecture Notes in Computer Science, pages 61–71. Springer, 2012.
# Unusual Nonmagnetic Ordered State in CeCoSi Revealed by 59Co-NMR and NQR Measurements

Masahiro Manago <EMAIL_ADDRESS> (Present address: Department of Physics and Materials Science, Graduate School of Natural Science and Technology, Shimane University, Matsue, Shimane, Japan.) Hisashi Kotegawa Hideki Tou Hisatomo Harima Department of Physics, Kobe University, Kobe, Hyogo 657-8501, Japan Hiroshi Tanida Liberal Arts and Sciences, Toyama Prefectural University, Imizu, Toyama 939-0398, Japan

###### Abstract

We performed 59Co nuclear magnetic and quadrupole resonance (NMR and NQR) measurements under pressure on single-crystalline CeCoSi, which undergoes an unresolved phase transition at $T_{0}$. The NQR spectra clearly showed that the phase transition at $T_{0}$ is nonmagnetic, but no symmetry lowering at the Co site was observed, despite the second-order nature of the transition. By contrast, the NMR spectra were split by an induced magnetic field perpendicular to the external magnetic field. These results show that the phase below $T_{0}$ is not a simple paramagnetic state but is most likely an electric multipolar ordered state of the Ce $4f$ electrons. The development of the Kondo effect under pressure is thought to be crucial for stabilizing this state and for producing novel features beyond those common to tetragonal Ce-based systems.

Physical properties of solids are influenced intricately by the degrees of freedom of electrons, that is, charge, spin, and orbital, through their mutual couplings and the many-body interactions of electrons. A phase transition characterized by an order parameter is one of the demonstrations of their influence; therefore, it is a fundamental yet profound phenomenon. Rich interactions in solids induce various types of phase transitions; hence, they are attractive in condensed matter physics. Such diversity at times yields a mysterious ordered state. A well-known example is the “hidden order” state in the $5f$-electron system URu2Si2 Mydosh and Oppeneer (2011); Mydosh _et al._ (2020). It is a second-order phase transition with a symmetry reduction; however, the crystallographic symmetry of the ordered state is still controversial Okazaki _et al._ (2011); Tabata _et al._ (2014); Riggs _et al._ (2015); Wang _et al._ (2020). Several researchers have attempted to unmask the order parameter in URu2Si2 for three decades because it addresses just a fundamental question in condensed matter physics: the mechanism through which the degrees of freedom of electrons affect the physical properties of solids.

The $4f$-electron system CeCoSi is a potential example exhibiting an extraordinary order parameter. It crystallizes in the tetragonal $P4/nmm$ ($D_{4h}^{7}$, No. 129) space-group symmetry with the CeFeSi-type structure Bodak _et al._ (1970). The spatial inversion symmetry is locally absent at the Ce site, whereas the crystal structure has global inversion symmetry. The crystal electric field (CEF) ground state has been reported as the $\Gamma_{7}$ ($\mp 0.306\ket{\pm 5/2}\pm 0.95\ket{\mp 3/2}$) Kramers doublet, and the first excited state is separated from it by $\sim 100$ K Nikitin _et al._ (2020). It has been established that an antiferromagnetic (AFM) transition occurs at the Néel temperature $T_{\textrm{N}}=9.4$ K.
The transition is of second order Chevalier and Matar (2004), and the Ce moments, with sizes of $m_{\textrm{Ce}}\sim 0.37(6)\mu_{\textrm{B}}$, are aligned along the $[100]$ axis with a $\bm{q}=\bm{0}$ structure, as revealed by a neutron scattering study Nikitin _et al._ (2020). Another phase, which is a matter of interest, was initially reported to emerge under pressure below $\sim 40$ K at $\sim 1.5$ GPa Lengyel _et al._ (2013). The first unresolved issue of this phase is its intrinsic pressure phase diagram. Contrary to some reports Lengyel _et al._ (2013); Nikitin _et al._ (2020), a study on single-crystalline samples proposed that this “pressure-induced ordered phase” already exists at ambient pressure below $T_{0}=12$ K Tanida _et al._ (2019). The phase below $T_{0}$ at lower pressures seems to connect continuously to the pressure-induced phase Tanida _et al._ (2018, 2019). However, the anomaly at $T_{0}$ at ambient pressure is not sufficiently large to establish a phase transition convincingly. Therefore, microscopic measurements are desired to reveal whether the phase transition at $T_{0}$ is intrinsic or not at ambient pressure.

The second unresolved issue is the order parameter below $T_{0}$. It seems different from usual AFM states, as deduced from its enhancement by the magnetic field Lengyel _et al._ (2013); Tanida _et al._ (2019). There have been various suggestions for the origin of this phase, as follows: the spin-density-wave order of Co $3d$ electrons, a metaorbital transition, and antiferroquadrupolar (AFQ) ordering Lengyel _et al._ (2013); Tanida _et al._ (2018, 2019); however, the order parameter remains unresolved.

In this Letter, we present the results of 59Co nuclear magnetic and quadrupole resonance (NMR and NQR) measurements on high-quality single-crystalline CeCoSi samples to clarify the above-mentioned issues. The combined NMR and NQR results revealed the intrinsic phase diagram, in which the phase in question is present even at ambient pressure. In the ordered state, there is no clear indication of symmetry reduction at the Co site under zero field, whereas the NMR spectra split below $T_{0}$ when a magnetic field tilted from the $[100]$ axis was applied. This NMR anomaly can be interpreted by the emergence of an induced magnetic field perpendicular to the external field. The present results suggest an unusual nonmagnetic ordered state that is most likely an electric multipole ordered state in CeCoSi.

Plate-shaped single-crystalline CeCoSi samples ($\sim 3\times 4\times 0.5$ mm3) were grown using the Ce/Co eutectic flux method, as described in Ref. Tanida _et al._ , 2019. The 59Co ($I=7/2$) NMR and NQR measurements were performed in the temperature range 1.4–300 K using a standard spin-echo method. The NMR spectra were obtained under a field of 1 T near the $[100]$ direction, and its angle was deduced from the frequencies of the spectra above $T_{0}$. The local symmetry of the Co site in the $P4/nmm$ space group is $\bar{4}m2$, with four-fold improper rotational symmetry. Hydrostatic pressure was applied up to 2.35 GPa on another sample using a piston-cylinder-type cell with Daphne 7474 as a pressure-transmitting medium. The pressure was determined from the superconducting transition temperature of a Pb sample inside the cell. A LaCoSi sample consisting of single-crystalline pieces was also measured by NQR at $P=0$ as a reference system.
Figure 1: (Color online) (a–c) Temperature dependence of the 59Co NQR spectra of CeCoSi at the $\nu_{3}\equiv 3\nu_{\textrm{Q}}$ line without a field under pressures of (a) 0, (b) 1.09, and (c) 1.52 GPa. The spectra at each temperature are shifted vertically. (d) Temperature and pressure dependence of $\nu_{\textrm{Q}}$ of CeCoSi and that of LaCoSi at ambient pressure. The vertical arrows indicate $T_{0}$ determined by NMR measurements, which detected it even at ambient pressure. The dashed line for LaCoSi indicates the conventional temperature dependence of $\nu_{\textrm{Q}}$.

The NQR spectra are sensitive to both magnetic and electric anomalies and are suitable for probing the nature of the ordered state. Figures 1(a)–1(c) show the temperature dependence of the NQR spectra above $T_{\textrm{N}}$ at the $\nu_{3}\equiv 3\nu_{\textrm{Q}}$ ($\pm 5/2\leftrightarrow\pm 7/2$) line at 0, 1.09, and 1.52 GPa. The value of the quadrupole frequency $\nu_{\textrm{Q}}$ was 2.09 MHz at $P=0$ and 20 K. The results at other pressures are shown in the Supplemental Materials SM . The NQR spectra neither split nor broadened, although they showed a shift at $T_{0}$. As shown in the temperature dependence of $\nu_{\textrm{Q}}$ in Fig. 1(d), the kink of $\nu_{\textrm{Q}}$ is clearer at higher pressures, and it is invisible at ambient pressure. The kink is not a necessary condition for the phase transition, and the transition at ambient pressure is proved by the NMR measurements described later. The absence of splitting or broadening of the NQR spectra demonstrates that the internal field is absent at the Co sites for $T_{\textrm{N}}<T<T_{0}$ in the entire pressure range. This clearly excludes magnetic ordering of Co $3d$ moments below $T_{0}$. Our results also indicate that all the Co sites remain equivalent below $T_{0}$ while maintaining a local tetragonal symmetry, that is, a tetragonal crystal structure SM . If one considers the possibility of magnetic ordering of Ce $4f$ moments, the only explanation for the NQR result is a cancellation of the internal magnetic field at the Co site, which is surrounded by four Ce ions. The transition to the underlying AFM state at $T_{\textrm{N}}$ is of second order, and this AFM state with $\bm{q}=\bm{0}$ does not break the translational symmetry of the crystal above $T_{0}$. Then, the intermediate ordered state in $T_{\textrm{N}}<T<T_{0}$ should maintain the same translational symmetry; that is, the propagation vector is $\bm{q}=\bm{0}$. For the $\bm{q}=\bm{0}$ structure, the internal field is canceled only when the staggered Ce moment is along the $[001]$ axis SM . This possibility is excluded by the increase of the susceptibility along this axis below $T_{0}$ Tanida _et al._ (2019). Thus, we concluded that the ordered phase below $T_{0}$ is nonmagnetic with one tetragonal Co site. The unusual field response explained later also excludes the possibility of a magnetic state and yet indicates that this phase is not a simple paramagnetic state.

The temperature dependence of $\nu_{\textrm{Q}}$ usually obeys the conventional monotonic behavior $\nu_{\textrm{Q}}(T)=\nu_{\textrm{Q}}(0)(1-\alpha T^{1.5})$ Christiansen _et al._ (1976) ($\alpha>0$) owing to lattice expansion and vibration, as observed in LaCoSi. In contrast, $\nu_{\textrm{Q}}(T)$ in CeCoSi deviates from this behavior above $T_{0}$, and the deviation becomes remarkable as the pressure increases. Such behavior has been observed in some $4f$-electron systems Shimizu _et al._ (1987); Kitagawa _et al._ (2017); Mito _et al._ (2013); Magishi _et al._ (2012); Yogi _et al._ (2014) and has been discussed as originating from a CEF splitting, a valence change of the $4f$ ion, or the Kondo effect. The distinct pressure dependence in CeCoSi indicates that such multiple effects influence $\nu_{\textrm{Q}}$ at the Co site through the pressure-enhanced hybridization between the $4f$ electron and conduction electrons.
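As an illustration of how such a baseline is established, the following sketch fits the conventional form to synthetic data points (the numbers are assumed for illustration, loosely anchored to the $\nu_{\textrm{Q}}\approx 2.09$ MHz quoted above; they are not the measured LaCoSi or CeCoSi data).

```python
# Minimal sketch: least-squares fit of the conventional law
# nu_Q(T) = nu_Q(0) * (1 - alpha * T**1.5) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def nu_q_conventional(T, nu_q0, alpha):
    return nu_q0 * (1.0 - alpha * T**1.5)

# hypothetical (T [K], nu_Q [MHz]) points mimicking a conventional system
T_data = np.array([5.0, 20.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
nu_data = nu_q_conventional(T_data, 2.09, 2.0e-5)  # assumed parameters

popt, _ = curve_fit(nu_q_conventional, T_data, nu_data, p0=[2.0, 1.0e-5])
print(popt)  # recovers nu_Q(0) ~ 2.09 MHz and alpha ~ 2e-5 K^-1.5
```

A fit of this kind to the reference compound defines the lattice contribution; the anomaly in CeCoSi is then visible as a systematic deviation of the measured $\nu_{\textrm{Q}}(T)$ from the fitted curve above $T_{0}$.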
Figure 2: (Color online) Temperature dependence of the 59Co NQR nuclear spin-lattice relaxation rate $1/T_{1}$ of CeCoSi from the Ce $4f$ electrons at $H=0$ under several pressures. The $4f$ part was obtained by subtracting the $1/T_{1}$ value of LaCoSi. The vertical arrows indicate $T_{0}$ determined by the NMR results. Inset (left panel): $1/T_{1}$ of CeCoSi at ambient pressure before subtraction, as well as that of LaCoSi. The dashed line for LaCoSi shows the Korringa relation $1/T_{1}\propto T$ of a normal metal.

Figure 2 shows the Ce-$4f$ part of the nuclear spin-lattice relaxation rate $1/T_{1}$ measured by NQR at $H=0$. The measured $1/T_{1}$ has a Co $3d$ component, and it was subtracted using the result of LaCoSi (see the inset of the left panel of Fig. 2). The $1/T_{1}$ divided by temperature is shown in the Supplemental Materials SM . The $1/T_{1}$ of LaCoSi shows the behavior of a weakly correlated Pauli paramagnet, consistent with a previous report Welter _et al._ (1994). The absence of a phase transition in LaCoSi is consistent with the interpretation that the phase below $T_{0}$ originates from the Ce $4f$ electrons. No divergent behavior was detected near $T_{0}$ in $1/T_{1}$ in CeCoSi, indicating the absence of magnetic critical fluctuations. A clear drop was found at $T_{0}$ only at 2.03 GPa. This may correspond to the gaplike anomaly in the electrical resistivity above $\sim 1.5$ GPa Lengyel _et al._ (2013) and suggests a decrease of the density of states.

Around $T_{\textrm{N}}$, $1/T_{1}$ is expected to detect the magnetic fluctuations perpendicular to the $[001]$ axis SM . $1/T_{1}$ diverged owing to the critical slowing down of the magnetic fluctuations at ambient pressure (see also Fig. 7 in the Supplemental Materials SM ). However, the divergence of $1/T_{1}$ is suppressed with increasing pressure. This is not expected for the magnetic structure with $\bm{m}\parallel[100]$. Two possibilities are considered to interpret this result. One is that the magnetic moment is tilted toward the $[001]$ axis under pressure. The other is that the magnetic fluctuation is suppressed as the ordered state below $T_{0}$ becomes stabilized.

The $1/T_{1}$ well above $T_{0}$ shows localized Ce $4f$ electrons ($1/T_{1}=\textrm{const.}$) at 1.52 GPa and below. Meanwhile, $1/T_{1}$ starts to decrease from temperatures higher than $T_{0}$ at 2.03 and 2.35 GPa. Such itinerant behavior owing to the coherent Kondo effect in $1/T_{1}$ is also seen in some heavy-fermion systems, including CeCu2Si2 Ishida _et al._ (1999); Fujiwara _et al._ (2008). The $1/T_{1}$ result, combined with the $\nu_{\textrm{Q}}$ behavior, indicates an enhancement of the Kondo effect by pressure.

Figure 3: (Color online) (a) 59Co NMR spectra of CeCoSi at ambient pressure at 10 and 20 K with the field $\mu_{0}H=1.011$ T and angles $\theta=87.8$° and 90°. The blue solid and dashed lines indicate the simulated peak frequencies at $T=10$ K assuming a staggered field of $\sim\pm 15$ mT along the $[001]$ axis.
The vertical arrow indicates the central ($1/2\leftrightarrow-1/2$) line, whereas the vertical black dashed line indicates the frequency of $K=0$ for the central line. (b–d) Temperature dependence of the spectra under pressures of (b) 0, (c) 1.09, and (d) 1.52 GPa with $\mu_{0}H=1.0$ T slightly tilted from the $[100]$ axis. The spectra are shifted vertically with an offset proportional to the temperature. The horizontal arrows indicate $T_{0}$. (e) Schematic of the induced field $\bm{H}_{\textrm{ind}}$ under the external field $\bm{H}_{\textrm{ext}}$ at the Co sites below $T_{0}$. Only the $[001]$ components of $\bm{H}_{\textrm{ind}}$ are drawn for simplicity.

Further information on the ordered state is provided by the spectra measured under a magnetic field. Figure 3(a) shows the full set of NMR lines at ambient pressure at 10 K, below $T_{0}=12$ K, with a field strength of $\mu_{0}H=1.0$ T, in addition to the results at 20 K. Here, $\theta=87.8$° and 90° are the field angles from the $[001]$ to the $[100]$ axis. At 20 K, seven NMR lines were observed owing to the nuclear quadrupole interaction of the $I=7/2$ 59Co nucleus. Meanwhile, at 10 K, the third satellite at the lowest frequency split when the field was tilted from the $[100]$ axis, clearly indicating a symmetry reduction below $T_{0}$. The split peaks merged exactly when $H\parallel[100]$, which is consistent with the NQR data and suggests that the crystallographic Co site remains a single site below $T_{0}$. Similar splitting and merging of the spectra were observed when the magnetic field was around the $[001]$ axis (not shown). Figures 3(b)–3(d) show the temperature dependence of the NMR third satellite line with a field of $\mu_{0}H=1.0$ T slightly tilted from the $[100]$ axis under pressures of 0, 1.09, and 1.52 GPa. The results under other pressures are shown in the Supplemental Materials SM . The spectra started to split at $T_{0}$ at all pressures, and the value of $T_{0}$ agrees with the previous reports Tanida _et al._ (2018, 2019). This is the first microscopic evidence for the phase transition at $T_{0}$ at ambient pressure. Careful measurements just below $T_{0}$, especially at ambient pressure, indicate that the transition is of second order SM . Three peaks were observed at intermediate temperatures at 1.52 GPa and other pressures SM , although it is unclear whether this feature is intrinsic. Here we assume the two-peak structure to be intrinsic because it is also observed at ambient pressure, which is free from the stress or inhomogeneity caused by pressurization. The NMR line splitting is most pronounced at the low-frequency third satellite line and disappears for $H\parallel[100]$. This situation is reproduced only when the induced magnetic field $\bm{H}_{\textrm{ind}}$ is perpendicular to the external field, as shown in Fig. 3(e). The blue solid and dashed lines in Fig. 3(a) indicate the simulated NMR frequencies, where $\mu_{0}H_{\textrm{ind},c}\sim\pm 15$ mT along the $[001]$ axis is adopted. When the external field $\bm{H}_{\textrm{ext}}$ is tilted from $[100]$ to the $[001]$ axis, the total field at the two Co sites, $\bm{H}_{\textrm{ext}}+\bm{H}_{\textrm{ind}}$, differs between them, leading to the splitting of the NMR frequency. The field $\mu_{0}H_{\textrm{ind},c}\sim\pm 15$ mT is absent in the NQR spectra, and thus it is induced by the external field. Such an induced field cannot be explained by a simple paramagnetic response of the Ce $4f$ magnetic moments SM .
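To make the geometry of Fig. 3(e) concrete, the splitting can be reproduced by elementary field arithmetic at the two Co sites. The following minimal numerical sketch ignores the quadrupole contribution to the line positions and uses round illustrative inputs: a gyromagnetic ratio of roughly 10 MHz/T for 59Co and the $\pm 15$ mT induced field from the simulation above.

```r
# Total field at the two Co sites below T0: H_ext tilted by theta from [001]
# toward [100], plus site-dependent induced fields +/- H_ind along [001].
gamma2pi <- 10.0    # approximate 59Co gyromagnetic ratio in MHz/T (round value)
mu0H_ext <- 1.0     # external field in T
mu0H_ind <- 0.015   # induced field along [001] in T (~15 mT)

split_kHz <- function(theta_deg) {
  th <- theta_deg * pi / 180
  hx <- mu0H_ext * sin(th)                           # [100] component of H_ext
  hz <- mu0H_ext * cos(th)                           # [001] component of H_ext
  f1 <- gamma2pi * sqrt(hx^2 + (hz + mu0H_ind)^2)    # site with +H_ind
  f2 <- gamma2pi * sqrt(hx^2 + (hz - mu0H_ind)^2)    # site with -H_ind
  1e3 * abs(f1 - f2)                                 # Zeeman-line splitting in kHz
}

sapply(c(90, 87.8, 85), split_kHz)
# ~ 0, 11.5 and 26 kHz: no splitting for H || [100] (theta = 90 deg), and a
# splitting that grows with the [001] component of the field, i.e., the
# split-and-merge behavior of Figs. 3(a)-3(d).
```

To first order in $H_{\textrm{ind}}$ the splitting is $2\gamma H_{\textrm{ind}}\cos\theta$, which vanishes exactly at $\theta=90$°, consistent with the merging of the two peaks for $H\parallel[100]$.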
Notably, this also excludes the possibility that the internal field at the Co site is canceled out by an AFM alignment of the Ce moments, because tilting of the AFM moments under an external field would yield a response similar to the paramagnetic one, without splitting of the NMR line.

Figure 4: (Color online) Temperature–pressure phase diagram of CeCoSi determined by the NMR and NQR results. The solid lines are guides to the eye. The transitions across $T_{0}$ and $T_{\textrm{N}}$ are of second order. The pressures at which the two phases disappear are taken from Ref. Lengyel _et al._, 2013.

Figure 4 shows the temperature–pressure phase diagram of CeCoSi determined by our NMR and NQR results. The phase transition at $T_{0}$ at ambient pressure was unambiguously confirmed to be intrinsic from the microscopic point of view. Our results clearly demonstrate that second-order transitions occur successively across $T_{0}$ and $T_{\textrm{N}}$. The pressure dependence of $T_{0}$ is reminiscent of the Doniach-type phase diagram, implying that the $c$–$f$ hybridization assists the stabilization of the ordered state at lower pressures. The two ordered phases are suppressed with increasing pressure, accompanied by the development of the Kondo effect, and the nonmagnetic ground state is realized in the pressure range of 1.4–2.15 GPa. Our NMR and NQR results limit the possible order parameters in CeCoSi as follows: in $T_{\textrm{N}}<T<T_{0}$, they exclude magnetically ordered states arising from the Ce-$4f$ and Co-$3d$ electrons. Nonmagnetic ordered states arising from the Co-$3d$ electrons, such as a charge density wave (CDW) or orbital ordering, are also unlikely. If a CDW transition occurred, the spectra would split into several sites corresponding to the superlattice formation. Similarly, if the phase transition lifted the Co-$3d$ orbital degeneracy protected by the local tetragonal symmetry, an orthorhombic transition would be essential, similar to the electronic nematic state in Fe-pnictide systems Kasahara _et al._ (2012). The on-site NMR and NQR results did not show the corresponding symmetry reduction. As an ordered state of nonmagnetic Ce origin, the “metaorbital” transition Hattori (2010), across which the occupancy of the Ce-$4f$ electrons changes steeply, has been proposed Lengyel _et al._ (2013); however, it is not compatible with a second-order transition accompanied by a symmetry reduction. The remaining possibility to be considered is the contribution of the electric degrees of freedom of the Ce $4f$ electrons, that is, an electric multipole ordering. The CEF ground state of tetragonal CeCoSi is a Kramers doublet, but AFQ ordering is possible if the states including the first-excited level possess a large interlevel interaction Yatsushiro and Hayami (2020a), which is most likely enhanced by the Kondo effect. Quadrupolar degrees of freedom involving CEF excited states are also known in PrOs4Sb12 Kohgi _et al._ (2003); Goto _et al._ (2004) and CeTe Kawarasaki _et al._ (2011). Before deepening this discussion, we provide a general symmetry consideration, as was done for the hidden-order state in URu2Si2 Harima _et al._ (2010); Kambe _et al._ (2018). Because the transition to the nonmagnetic state across $T_{0}$ in CeCoSi is of second order, the space group below $T_{0}$ must be one of the subgroups of that of the room-temperature phase, No. 129 ($P4/nmm$).
Moreover, the underlying AFM state orders into the $\bm{q}=\bm{0}$ structure, preserving the translational symmetry Nikitin _et al._ (2020), which excludes the possibility of superlattice formation above $T_{\textrm{N}}$. Thus, the possible maximal subgroups are as follows: No. 59 ($Pmmn$), 85 ($P4/n$), 90 ($P42_{1}2$), 99 ($P4mm$), 113 ($P\bar{4}2_{1}m$), and 115 ($P\bar{4}m2$) Hahn (2005); SM . Because the Co site remains a single site below $T_{0}$, the space group with two Co sites, namely No. 115, is unlikely. The breaking of the four-fold symmetry was not detected at the Co site, which rules out Nos. 59, 90, and 99 with two-fold symmetry at the Co site. Therefore, the space group No. 85 or No. 113 is preferable for the state below $T_{0}$ in CeCoSi. An interesting feature is that the symmetry reduction to No. 85 or No. 113 does not require a shift of the atomic positions SM ; Hahn (2005); that is, it can be satisfied by a symmetric change of the electronic configurations of the Ce ions. If that is the case, the lattice distortion is triggered only by the weak coupling to the electrons, and the detection of the symmetry change may not be straightforward in X-ray measurements. This would also be the situation in URu2Si2 Harima _et al._ (2010); Kambe _et al._ (2018). Moreover, No. 113 lacks global inversion symmetry; therefore, experimental methods that can observe the splitting of bands would be effective to confirm this. A clue to understanding the origin of this phase is the unusual field response, i.e., the induced field perpendicular to the external field. Such behavior is reminiscent of an AFQ state Shiina _et al._ (1997); Sakai _et al._ (1997), as observed in ferro- or antiferroquadrupolar systems CeB6 Takigawa _et al._ (1983), PrFe4P12 Ishida _et al._ (2005); Kikuchi _et al._ (2007), PrTi2Al20 Taniguchi _et al._ (2016), and NpO2 Tokunaga _et al._ (2005). Therefore, it is reasonable to consider the ordered state in CeCoSi as an AFQ state. Along this line of thought, a crucial point is the consistency between the experiment and the theoretical suggestion Yatsushiro and Hayami (2020a). In the case of multipolar states, the space groups No. 85 and 113, suggested by the NQR result, correspond to the hexadecapolar and $O_{xy}$-type AFQ states, respectively. The symmetry of the Ce site reduces to four-fold symmetry without mirror operations ($4..$) in the No. 85 and to two-fold symmetry ($2.mm$) in the No. 113 space group SM . Meanwhile, identification of the AFQ order parameter is possible in principle from the NMR-line splitting by theoretical work Yatsushiro and Hayami (2020b). For example, the NMR line can split in the $O_{zx}$-type AFQ state when $H\perp[010]$, except for $H\parallel[100]$ and $H\parallel[001]$. Therefore, the induced field detected by our NMR suggests the $O_{zx}$-type AFQ state. In the case of the $O_{xy}$ type, which is proposed by the zero-field result, the line splitting is not induced by the $[001]$ component of the field. Thus, we need to resolve this discrepancy between the zero- and finite-field results to settle the origin of the ordered state. A point to be considered is the switching of the order parameter under field, as in some quadrupole systems such as CeB6 Nakamura _et al._ (1995) and PrTi2Al20 Taniguchi _et al._ (2019). Such a signature has not been observed thus far in CeCoSi, but investigations of the field-induced switching may unravel this issue. Another point is the lack of local inversion symmetry at the Ce site.
In this case, odd-parity multipoles, such as the electric octupole, are active in principle through the antisymmetric spin-orbit interaction at the Ce site, but the mechanism through which this effect influences the field response is unknown. In any case, our results, capturing a peculiarity of the ordered state in CeCoSi, will offer theoretical and further experimental challenges for the complete elucidation of the order parameter.

In conclusion, we have performed single-crystalline NMR and NQR measurements to investigate the unresolved ordered state in CeCoSi. The absence of any signature of symmetry reduction in the 59Co-NQR spectra indicates that the ordered phase below $T_{0}$ is nonmagnetic and originates from the Ce $4f$ electrons. An unusual field response, proved by the NMR-line splitting, characterizes the extraordinary ordered state, indicating that it is most likely an electric multipole state. Our results show that CeCoSi undergoes two types of second-order phase transitions with different symmetry lowerings, and that the nonmagnetic phase can be the ground state above $\sim 1.4$ GPa. It is interesting to consider why CeCoSi differs from other non-cubic systems. As one idea, we expect that the parity mixing caused by the absence of local inversion symmetry at the Ce site may be a key factor inducing this peculiar behavior in CeCoSi. In any case, CeCoSi is a novel example offering profound physics originating from the $4f^{1}$ state.

###### Acknowledgements. The authors thank M. Yatsushiro, S. Hayami, H. Hidaka, Y. Tokunaga, K. Ishida, and Y. Kuramoto for their insightful discussions. This work was supported by a Grant-in-Aid for Scientific Research on Innovative Areas “J-Physics” (Grants No. 15H05882, No. 15H05885, No. JP18H04320, and No. JP18H04321), and a Grant-in-Aid for JSPS Research Fellows (Grant No. JP19J00336) from JSPS.

## References

* Mydosh and Oppeneer (2011) J. A. Mydosh and P. M. Oppeneer, Rev. Mod. Phys. 83, 1301 (2011).
* Mydosh _et al._ (2020) J. A. Mydosh, P. M. Oppeneer, and P. S. Riseborough, J. Phys.: Condens. Matter 32, 143002 (2020).
* Okazaki _et al._ (2011) R. Okazaki, T. Shibauchi, H. J. Shi, Y. Haga, T. D. Matsuda, E. Yamamoto, Y. Onuki, H. Ikeda, and Y. Matsuda, Science 331, 439 (2011).
* Tabata _et al._ (2014) C. Tabata, T. Inami, S. Michimura, M. Yokoyama, H. Hidaka, T. Yanagisawa, and H. Amitsuka, Philos. Mag. 94, 3691 (2014).
* Riggs _et al._ (2015) S. C. Riggs, M. Shapiro, A. V. Maharaj, S. Raghu, E. Bauer, R. Baumbach, P. Giraldo-Gallo, M. Wartenbe, and I. Fisher, Nat. Commun. 6, 6425 (2015).
* Wang _et al._ (2020) L. Wang, M. He, F. Hardy, D. Aoki, K. Willa, J. Flouquet, and C. Meingast, Phys. Rev. Lett. 124, 257601 (2020).
* Bodak _et al._ (1970) O. I. Bodak, E. I. Gladyshevskii, and P. I. Kripyakevich, J. Struct. Chem. 11, 283 (1970).
* Nikitin _et al._ (2020) S. E. Nikitin, D. G. Franco, J. Kwon, R. Bewley, A. Podlesnyak, A. Hoser, M. M. Koza, C. Geibel, and O. Stockert, Phys. Rev. B 101, 214426 (2020).
* Chevalier and Matar (2004) B. Chevalier and S. F. Matar, Phys. Rev. B 70, 174408 (2004).
* Lengyel _et al._ (2013) E. Lengyel, M. Nicklas, N. Caroca-Canales, and C. Geibel, Phys. Rev. B 88, 155137 (2013).
* Tanida _et al._ (2019) H. Tanida, K. Mitsumoto, Y. Muro, T. Fukuhara, Y. Kawamura, A. Kondo, K. Kindo, Y. Matsumoto, T. Namiki, T. Kuwai, and T. Matsumura, J. Phys. Soc. Jpn. 88, 054716 (2019).
* Tanida _et al._ (2018) H. Tanida, Y. Muro, and T. Matsumura, J. Phys. Soc. Jpn. 87, 023705 (2018).
* (13) (Supplemental Materials) It contains the NMR Knight shift, hyperfine coupling, examination of the breaking of the four-fold symmetry, temperature evolution of the NMR line split, NMR and NQR spectra not shown in the main text, NQR spectra at the AFM state, nuclear spin-lattice relaxation rate divided by temperature $1/T_{1}T$, and the details of the possible space groups below $T_{0}$. * Christiansen _et al._ (1976) J. Christiansen, P. Heubes, R. Keitel, W. Klinger, W. Loeffler, W. Sandner, and W. Witthuhn, Z. Phys. B 24, 177 (1976). * Shimizu _et al._ (1987) T. Shimizu, H. Yasuoka, Z. Fisk, and J. L. Smith, J. Phys. Soc. Jpn. 56, 4113 (1987). * Kitagawa _et al._ (2017) S. Kitagawa, T. Higuchi, M. Manago, T. Yamanaka, K. Ishida, H. S. Jeevan, and C. Geibel, Phys. Rev. B 96, 134506 (2017). * Mito _et al._ (2013) T. Mito, H. Hara, T. Ishida, K. Nakagawara, T. Koyama, K. Ueda, T. Kohara, K. Ishida, K. Matsubayashi, Y. Saiga, and Y. Uwatoko, J. Phys. Soc. Jpn. 82, 103704 (2013). * Magishi _et al._ (2012) K.-i. Magishi, H. Sugawara, M. Takahashi, T. Saito, K. Koyama, T. Saito, S. Tatsuoka, K. Tanaka, and H. Sato, J. Phys. Soc. Jpn. 81, 124706 (2012). * Yogi _et al._ (2014) M. Yogi, H. Niki, T. Kawata, and C. Sekine, JPS Conf. Proc. 81, 011046 (2014). * Welter _et al._ (1994) R. Welter, G. Venturini, E. Ressouche, and B. Malaman, J. Alloys Compd. 210, 279 (1994). * Ishida _et al._ (1999) K. Ishida, Y. Kawasaki, K. Tabuchi, K. Kashima, Y. Kitaoka, K. Asayama, C. Geibel, and F. Steglich, Phys. Rev. Lett. 82, 5353 (1999). * Fujiwara _et al._ (2008) K. Fujiwara, Y. Hata, K. Kobayashi, K. Miyoshi, J. Takeuchi, Y. Shimaoka, H. Kotegawa, T. C. Kobayashi, C. Geibel, and F. Steglich, J. Phys. Soc. Jpn. 77, 123711 (2008). * Kasahara _et al._ (2012) S. Kasahara, H. J. Shi, K. Hashimoto, S. Tonegawa, Y. Mizukami, T. Shibauchi, K. Sugimoto, T. Fukuda, T. Terashima, A. H. Nevidomskyy, and Y. Matsuda, Nature (London) 486, 382 (2012). * Hattori (2010) K. Hattori, J. Phys. Soc. Jpn. 79, 114717 (2010). * Yatsushiro and Hayami (2020a) M. Yatsushiro and S. Hayami, J. Phys. Soc. Jpn. 89, 013703 (2020a). * Kohgi _et al._ (2003) M. Kohgi, K. Iwasa, M. Nakajima, N. Metoki, S. Araki, N. Bernhoeft, J.-M. Mignot, A. Gukasov, H. Sato, Y. Aoki, and H. Sugawara, J. Phys. Soc. Jpn. 72, 1002 (2003). * Goto _et al._ (2004) T. Goto, Y. Nemoto, K. Sakai, T. Yamaguchi, M. Akatsu, T. Yanagisawa, H. Hazama, K. Onuki, H. Sugawara, and H. Sato, Phys. Rev. B 69, 180511 (2004). * Kawarasaki _et al._ (2011) Y. Kawarasaki, T. Matsumura, M. Sera, and A. Ochiai, J. Phys. Soc. Jpn. 80, 023713 (2011). * Harima _et al._ (2010) H. Harima, K. Miyake, and J. Flouquet, J. Phys. Soc. Jpn. 79, 033705 (2010). * Kambe _et al._ (2018) S. Kambe, Y. Tokunaga, H. Sakai, T. Hattori, N. Higa, T. D. Matsuda, Y. Haga, R. E. Walstedt, and H. Harima, Phys. Rev. B 97, 235142 (2018). * Hahn (2005) T. Hahn, ed., _International tables for crystallography_ , 5th ed., Vol. A (Springer, 2005) corrected reprint of the 5th edition. * Shiina _et al._ (1997) R. Shiina, H. Shiba, and P. Thalmeier, J. Phys. Soc. Jpn. 66, 1741 (1997). * Sakai _et al._ (1997) O. Sakai, R. Shiina, H. Shiba, and P. Thalmeier, J. Phys. Soc. Jpn. 66, 3005 (1997). * Takigawa _et al._ (1983) M. Takigawa, H. Yasuoka, T. Tanaka, and Y. Ishizawa, J. Phys. Soc. Jpn. 52, 728 (1983). * Ishida _et al._ (2005) K. Ishida, H. Murakawa, K. Kitagawa, Y. Ihara, H. Kotegawa, M. Yogi, Y. Kitaoka, B.-L. Young, M. S. Rose, D. E. MacLaughlin, H. Sugawara, T. D. Matsuda, Y. Aoki, H. Sato, and H. Harima, Phys. 
Rev. B 71, 024424 (2005). * Kikuchi _et al._ (2007) J. Kikuchi, M. Takigawa, H. Sugawara, and H. Sato, J. Phys. Soc. Jpn. 76, 043705 (2007). * Taniguchi _et al._ (2016) T. Taniguchi, M. Yoshida, H. Takeda, M. Takigawa, M. Tsujimoto, A. Sakai, Y. Matsumoto, and S. Nakatsuji, J. Phys. Soc. Jpn. 85, 113703 (2016). * Tokunaga _et al._ (2005) Y. Tokunaga, Y. Homma, S. Kambe, D. Aoki, H. Sakai, E. Yamamoto, A. Nakamura, Y. Shiokawa, R. E. Walstedt, and H. Yasuoka, Phys. Rev. Lett. 94, 137209 (2005). * Yatsushiro and Hayami (2020b) M. Yatsushiro and S. Hayami, Phys. Rev. B 102, 195147 (2020b). * Nakamura _et al._ (1995) S. Nakamura, T. Goto, and S. Kunii, J. Phys. Soc. Jpn. 64, 3941 (1995). * Taniguchi _et al._ (2019) T. Taniguchi, K. Hattori, M. Yoshida, H. Takeda, S. Nakamura, T. Sakakibara, M. Tsujimoto, A. Sakai, Y. Matsumoto, S. Nakatsuji, and M. Takigawa, J. Phys. Soc. Jpn. 88, 084707 (2019).
Adaptive Change Point Monitoring for High-Dimensional Data Teng Wu, Runmin Wang, Hao Yan, Xiaofeng Shao Department of Statistics, University of Illinois at Urbana-Champaign Department of Statistical Science, Southern Methodist University School of Computing Informatics & Decision Systems Engineering, Arizona State University

> Abstract: In this paper, we propose a class of monitoring statistics for a mean shift in a sequence of high-dimensional observations. Inspired by the recent U-statistic based retrospective tests developed by Wang et al. (2019) and Zhang et al. (2020), we advance the U-statistic based approach to the sequential monitoring problem by developing a new adaptive monitoring procedure that can detect both dense and sparse changes in real time. Unlike Wang et al. (2019) and Zhang et al. (2020), where self-normalization was used, we instead introduce a class of estimators for the $q$-norm of the covariance matrix and prove their ratio consistency. To facilitate fast computation, we further develop recursive algorithms to improve the computational efficiency of the monitoring procedure. The advantage of the proposed methodology is demonstrated via simulation studies and real data illustrations.
>
> Key words and phrases: Change point detection, Sequential monitoring, Sequential testing, U-statistics.

## 1 Introduction

Change point detection problems have been extensively studied in many areas, such as statistics, econometrics and engineering, and they find wide applications in the physical sciences and industry. The literature is huge and is still growing rapidly. For low-dimensional data, it dates back to early work by Page (1954); MacNeill (1974); Brown et al. (1975), among others. More recent work on change point problems for low/fixed-dimensional multivariate time series data can be found in Shao and Zhang (2010); Matteson and James (2014); Kirch et al. (2015); Bücher et al. (2019), among others. We refer to Perron (2006), Aue and Horváth (2013) and Aminikhanghahi and Cook (2017) for some excellent reviews on this topic. The literature on change point detection can be roughly divided into two categories: retrospective testing and estimation of change points based on a complete data sequence offline, and online sequential monitoring for change points based on some training data and sequentially arriving data. This paper is concerned with the sequential monitoring problem for temporally independent but cross-sectionally dependent high-dimensional data. There are two major lines of research on sequential change-point detection/monitoring. In one line, a huge body of work follows the paradigm set by pioneers in the field, such as Wald (1945), Page (1954) and Lorden (1971); see Lai (1995, 2001) and Polunchenko and Tartakovsky (2012) for comprehensive reviews. Most sequential detection methods along this line are optimized to have minimal detection delay subject to a control of the average run length under the null, and the existing procedures are mostly developed for low-dimensional data. These methods often require both the pre-change and post-change distributions to be specified, or some parametric assumption to be made. In the other line, Chu et al. (1996) assumed that there is a stretch of training data (without any change points), and sequential monitoring is applied to sequentially arriving testing data.
They employed the invariance principle to control the type I error, and their framework has been adopted by many other researchers in both parametric and nonparametric contexts; see Horváth et al. (2004); Aue et al. (2012); Wied and Galeano (2013); Fremdt (2015); Dette and Gösmann (2019). Along this line, it is typical to use size and power (plus average detection delay) to describe and compare the operating characteristics of competing procedures. Our procedure falls into the second category. It seems to us that these two frameworks are in general difficult to compare, as they differ in terms of model assumptions, evaluation criteria, etc.

Nowadays, with the rapid improvement of data acquisition technology, high-dimensional data streams involving continuous sequential observations appear frequently in modern manufacturing and service industries, and the demand for efficient online monitoring tools for such data has never been higher. For example, Yan et al. (2018) proposed a method to monitor the multi-channel tonnage profiles used in a forging process, which have thousands of attributes. Furthermore, image-based monitoring [Yan et al. (2014)], which involves thousands of pixels per image, has become popular in the literature. Lévy-Leduc and Roueff (2009) considered the problem of monitoring thousands of Internet traffic metrics provided by a French Internet service provider. This kind of high-dimensional data poses significant new challenges to traditional multivariate statistical process control and monitoring, since when the dimension $p$ is high and comparable to the sample size $n$, most existing sequential monitoring methods constructed under fixed-dimension assumptions become invalid.

In this article, we propose a new class of sequential monitoring methods to monitor a change in the mean of independent high-dimensional data based on (sequential) retrospective testing. Our proposal is inspired by recent work on retrospective testing of a mean change in high-dimensional data by Wang et al. (2019) and Zhang et al. (2020). In Wang et al. (2019), the authors proposed a U-statistic based approach to target the $L_{2}$-norm of the mean difference by extending the U-statistic idea initiated in Chen and Qin (2010) from two-sample testing to the change point testing problem. Zhang et al. (2020) further extended the test in Wang et al. (2019) to an $L_{q}$-norm-based one mimicking He et al. (2018), where $q\in 2\mathbb{N}$, to capture sparse alternatives. By combining the $L_{2}$-norm-based test and the $L_{q}$-norm-based one, the adaptive test statistic they proposed is shown to achieve high power for both dense and sparse alternatives. However, one limitation of these works is that the methods are designed for off-line analysis and hence are not suitable for real-time online monitoring systems. Building on the works of Wang et al. (2019) and Zhang et al. (2020), we propose a new adaptive sequential monitoring procedure that can capture both sparse and dense alternatives. Instead of using the self-normalization scheme [Shao (2010); Shao and Zhang (2010); Shao (2015)], as done in Wang et al. (2019) and Zhang et al. (2020), we opt to use ratio-consistent estimators for $\|\Sigma\|_{q}^{q}$ based on the training data, where $\Sigma$ is the common covariance matrix of the sequence of random vectors, and we provide a rigorous proof of their consistency.
Further, we develop recursive algorithms for fast implementation so that the monitoring statistics can be computed efficiently at each time point. The resulting adaptive monitoring procedure, based on a combination of the $L_{2}$- and $L_{q}$-based (say $q=6$) sequential tests, is shown to be powerful against both dense and sparse alternatives via theory and simulations.

There is a growing literature on high-dimensional change point detection in the retrospective setting; see Horváth and Hušková (2012); Cho and Fryzlewicz (2015); Jirak (2015); Yu and Chen (2017); Wang and Samworth (2018); Yu and Chen (2019); Wang et al. (2019); Zhang et al. (2020); Wang and Shao (2020), among others. It is worth noting that Enikeeva and Harchaoui (2019) developed a test based on a combination of a linear statistic and a scan statistic, and their test can be adaptive to both sparse and dense alternatives. However, their Gaussianity and independent-components assumptions are quite restrictive. In addition, the literature on online monitoring of high-dimensional data streams in statistics and quality control has also been growing steadily in the last decade. In particular, Mei (2010) proposed a global monitoring scheme based on the sum of the cumulative sum monitoring statistics from the individual data streams. His method aims to minimize the delay time and control the global false alarm rate, which is based on the average run length under the null. This is different from the size and power analysis done in our work. Note that the assumptions in Mei (2010) are quite restrictive in the sense that he assumed that the data streams have no cross-sectional dependence and that both the pre-change and post-change distributions are known. See Wang and Mei (2015), Zou et al. (2015), Liu et al. (2019), and Li (2020) for several variants that propose new ways of aggregating the local monitoring statistics. Xie and Siegmund (2013) proposed a mixture detection procedure based on a likelihood ratio statistic that takes into account the fraction of data streams being affected. They argued that the performance is good when the fraction of affected data streams is known, and their procedure does not require the post-change distribution to be completely specified. However, the mixture global log-likelihood they specified relies on the hypothesized affected fraction $p_{0}$, and they showed the robustness of different choices of $p_{0}$ only through numerical studies. The results they derived hold for data generated from a normal distribution or other exponential families of distributions. A common feature of all these works is that they assume the data streams have no cross-sectional dependence, which may be violated in practice. As a matter of fact, our theory for the proposed monitoring statistic demonstrates the impact of the correlation/covariance structure of the multiple data streams, which seems not well appreciated in the above-mentioned literature.

The rest of the paper is structured as follows: In Section 2, we specify the change point monitoring framework we use and propose the monitoring statistic that targets the $L_{q}$-norm of the mean change. An adaptive monitoring scheme is then derived by combining the test statistics for different $q$'s, $q\in 2\mathbb{N}$. Section 3 provides a ratio-consistent estimator for $\|\Sigma\|_{q}^{q}$, which is crucial for constructing the monitoring statistics.
Section 4 provides simulation studies to examine the finite sample performance of the adaptive monitoring statistic. In Section 5, we apply the adaptive monitoring scheme to two real datasets. Section 6 concludes the paper. All the technical details can be found in the Appendix.

## 2 Monitoring Statistics

In this section, we specify the general framework we use to perform change point monitoring. We consider a closed-end change point monitoring scenario following Chu et al. (1996). Assume that we observe a sequence of temporally independent high-dimensional observations $X_{1},\ldots,X_{n}\in\mathbb{R}^{p}$, which are ordered in time and have constant mean $\bm{\mu}$ and covariance matrix $\Sigma$. We start the monitoring procedure from time $(n+1)$ to detect whether the mean vector changes in the future. Throughout the analysis, we assume that all the $X_{t}$'s are independent over time. A decision is to be made at each time point, and we signal an alarm when the monitoring statistic exceeds a certain boundary. The process ends at time $nT$ regardless of whether a change point is detected, where $T$ is a pre-specified number. The Type-I error of the monitoring procedure is controlled at $\alpha$, which means the probability of signaling an alarm when there is no change within the period $[n+1,nT]$ is at most $\alpha$. Under the null hypothesis, no change occurs within the monitoring period, that is, $E(X_{t})=\mu\text{ for }t=1,\ldots,nT.$ Under the alternative, the mean changes at some time $t_{0}>n$ and remains at the new level for the following observations, that is, $E(X_{t})=\begin{cases}\mu&1<t<t_{0}\\ \mu+\Delta&t_{0}\leq t\leq nT.\end{cases}$ We propose a family of test statistics $T_{n,q}(k)$, which serve as monitoring statistics targeting $\|\Delta\|_{q}$. The case $q=2$ corresponds to dense alternatives, and larger values of $q$ correspond to sparser alternatives. We discuss the formulation of our monitoring statistic for $q=2$ first and then extend it to general $q$'s in the following subsections.

### 2.1 $L_{2}$-norm-based monitoring statistics

In this section, we first develop the $L_{2}$-norm-based monitoring statistic, which is especially useful for detecting dense alternatives. Furthermore, we discuss the asymptotic properties of the $L_{2}$-norm-based statistic. Finally, a recursive computational algorithm is developed to allow efficient implementation.

#### 2.1.1 Monitoring statistics

For a given time $k>n$, suppose we know that a change point happens at location $m$, where $n<m<k$. We can separate the observations into two independent samples: the pre-break sample $X_{1},\ldots,X_{m}$ and the post-break sample $X_{m+1},\ldots,X_{k}$. Consider using a two-sample U-statistic with kernel $h((X,Y),(X^{\prime},Y^{\prime}))=(X-Y)^{T}(X^{\prime}-Y^{\prime})$, where $(X^{\prime},Y^{\prime})$ is an independent copy of $(X,Y)$. Then we have $E[h((X,Y),(X^{\prime},Y^{\prime}))]=\|E(X)-E(Y)\|_{2}^{2},$ which estimates the squared $L_{2}$-norm of the mean difference. Indeed, Wang et al. (2019) constructed an $L_{2}$-norm-based retrospective change point detection statistic by scanning over all possible $m$. For the online monitoring problem, we shall combine this idea with the approach in Dette and Gösmann (2019) to propose a monitoring statistic. To be more precise, at each time point $k$, we scan through all possible change point locations $m$ ($n<m\leq k-2$) and perform a change point test at each.
We take the maximum of these U-statistics over $m$ as our test statistic at time $k$. Given a ratio-consistent estimator $\widehat{\|{\Sigma}\|_{F}}$ of $\|\Sigma\|_{F}$ learned from the training sample $\{X_{1},\ldots,X_{n}\}$, our monitoring statistic at time $k=n+3,\ldots,nT$ is $\displaystyle T_{n,2}(k)$ $\displaystyle=\frac{1}{n^{3}\widehat{\|{\Sigma}\|_{F}}}\max_{m=n+1,\ldots,k-2}\sum_{l=1}^{p}\sum_{1\leq i_{1},i_{2}\leq m}^{*}\sum_{m+1\leq j_{1},j_{2}\leq k}^{*}(X_{i_{1},l}-X_{j_{1},l})(X_{i_{2},l}-X_{j_{2},l})$ $\displaystyle=\frac{1}{n^{3}\widehat{\|{\Sigma}\|_{F}}}\max_{m=n+1,\ldots,k-2}G_{k}(m).$

#### 2.1.2 Asymptotic properties

To calibrate the size of the monitoring procedure, we need the asymptotic distribution of the test statistic under the null. The following conditions were imposed in Wang et al. (2019) to ensure the process convergence results.

###### Assumption 1. $tr(\Sigma^{4})=o(\|\Sigma\|^{4}_{F})$.

###### Assumption 2. Let $Cum(h)=\sum_{l_{1},\ldots,l_{h}=1}^{p}cum^{2}(X_{1,l_{1}},\ldots,X_{1,l_{h}})$. Assume that $Cum(h)\leq C\|\Sigma\|^{h}_{F}$ for $h=2,3,4,5,6$ and some constant $C$.

Here $cum(\cdot)$ is the joint cumulant. In general, for a sequence of random variables $Y_{1},\ldots,Y_{n}$, their joint cumulant is defined as $cum(Y_{1},\ldots,Y_{n})=\sum_{\pi}(|\pi|-1)!(-1)^{|\pi|-1}\prod_{B\in\pi}E\left(\prod_{i\in B}Y_{i}\right),$ where $\pi$ runs through all partitions of $\{1,\ldots,n\}$, $B$ runs through all blocks of the partition $\pi$, and $|\pi|$ is the number of blocks in the partition $\pi$. Assumption 1 was also imposed in Chen and Qin (2010), who pioneered the use of the $U$-statistic approach in the two-sample testing problem for high-dimensional data, and it can be satisfied by a wide range of covariance models. Assumption 2 can be viewed as a restriction on the dependence structure, which holds under uniform bounds on moments and ‘short-range’ dependence type conditions on the entries of the vector $(X_{0,1},...,X_{0,p})$. See Wang et al. (2019) for more discussion of these two assumptions. Finally, under the null hypothesis and these assumptions, we provide the limiting distribution of the proposed monitoring statistic in Theorem 1.

###### Theorem 1. Under Assumptions 1 and 2, we have $\max_{k=n+3}^{nT}T_{n,2}(k)\xrightarrow{D}\sup_{t\in[1,T]}\sup_{s\in[1,t]}G(s,t),$ where $G(s,t)=t(t-s)Q(0,s)+stQ(s,t)-s(t-s)Q(0,t),$ and $Q$ is a Gaussian process with the following covariance structure: $Cov(Q(a_{1},b_{1}),Q(a_{2},b_{2}))=\begin{cases}(\min(b_{1},b_{2})-\max(a_{1},a_{2}))^{2}&if\quad\max(a_{1},a_{2})\leq\min(b_{1},b_{2})\\ 0&otherwise\end{cases}$

In general, we can also consider a non-constant boundary function $w(t)$, for which $\max_{k=n+3}^{nT}\frac{T_{n,2}(k)}{w(k/n-1)}\xrightarrow{D}\sup_{t\in[1,T]}\sup_{s\in[1,t]}\frac{G(s,t)}{w(t-1)}.$ We take the double suprema here to control the familywise error rate over the monitoring period. Therefore, we reject the null hypothesis if $T_{n,2}(k)>c_{\alpha}w(k/n-1)$ for some $k\in\{n+3,\ldots,nT\}$. The size can be calibrated by choosing $c_{\alpha}$ such that $P\left(\sup_{t\in[1,T]}\sup_{s\in[1,t]}\frac{G(s,t)}{w(t-1)}>c_{\alpha}\right)=\alpha.$ Different choices of $w(t)$ have been considered in Dette and Gösmann (2019):
* • (T1) $w(t)=1$,
* • (T2) $w(t)=(t+1)^{2}$,
* • (T3) $w(t)=(t+1)^{2}\cdot\max\Big\{\Big(\frac{t}{t+1}\Big)^{1/2},10^{-10}\Big\}$.
These $w(t)$'s are motivated by the law of the iterated logarithm and are used to reduce the stopping delay under the alternative. Based on our simulation results and real data applications, the choice of $w(t)$ among the above three candidates does not seem to have a big impact on power and detection delay. So in practice, for the closed-end procedure, any of the three choices would work. The detailed comparisons are shown in the simulation studies in Section 4.

Remark: The current method can be generalized to the open-end framework. For an open-end monitoring procedure, we are interested in testing $E(X_{t})=\mu\text{ for }t=1,2,\ldots$ against the alternative $E(X_{t})=\begin{cases}\mu&1<t<t_{0}\\ \mu+\Delta&t>t_{0},\end{cases}$ for some $t_{0}>n$. Suppose we use the same $L_{2}$-norm-based monitoring statistic at time $k=n+3,\ldots$, i.e., $T_{n,2}(k)=\frac{1}{n^{3}\widehat{\|{\Sigma}\|_{F}}}\max_{m=n+1,\ldots,k-2}G_{k}(m).$ For a suitably chosen boundary function $w(\cdot)$, we expect that $\sup_{k=n+3}^{\infty}\frac{T_{n,2}(k)}{w(k/n-1)}\xrightarrow{D}\sup_{t\in[1,\infty)}\sup_{s\in[1,t]}\frac{G(s,t)}{w(t-1)},$ as $n\to\infty$. The critical value can be determined by $P\left(\sup_{t\in[1,\infty)}\sup_{s\in[1,t]}\frac{G(s,t)}{w(t-1)}>c_{\alpha}\right)=\alpha.$ We reject the null hypothesis if $T_{n,2}(k)>c_{\alpha}w(k/n-1)$ for some $k\in\{n+3,\ldots\}$. In practice, we can approximate the critical value $c_{\alpha}$ by the procedure used for simulating the critical values in the closed-end case, with a large $T$, say $T=200$. Note that the boundary function used for open-end monitoring needs to satisfy certain smoothness and decay rate assumptions, and the three functions above used for the closed-end procedure are no longer applicable; see Assumption 2.4 in Gösmann et al. (2020) and the related discussion.

The following theorem provides a theoretical analysis of the power of the $L_{2}$-norm-based monitoring procedure.

###### Theorem 2. Suppose that Assumptions 1 and 2 hold. Further assume that the change point location is at $\lfloor nr\rfloor$ for some $r\in(1,T)$. Then we have
1. 1. When $\frac{n\Delta^{T}\Delta}{||\Sigma||_{F}}\to 0$, $\max_{k=n+3,\ldots,nT}T_{n,2}(k)\xrightarrow{D}\sup_{t\in[1,T]}\sup_{s\in[1,t]}G(s,t).$
2. 2. When $\frac{n\Delta^{T}\Delta}{||\Sigma||_{F}}\to b\in(0,+\infty)$, $\max_{k=n+3,\ldots,nT}T_{n,2}(k)\xrightarrow{D}\tilde{T}_{2}=\sup_{t\in[1,T]}\sup_{s\in[1,t]}\left[G(s,t)+b\Lambda(s,t)\right],$ where $\Lambda(s,t)=\begin{cases}(t-r)^{2}s^{2}&s\leq r\leq t\\ r^{2}(t-s)^{2}&r<s\leq t\\ 0&otherwise\end{cases}.$
3. 3. When $\frac{n\Delta^{T}\Delta}{||\Sigma||_{F}}\to\infty$, $\max_{k=n+3,\ldots,nT}T_{n,2}(k)\xrightarrow{D}\infty.$

Theorem 2 implies that, under the local alternative where $\frac{n\Delta^{T}\Delta}{||\Sigma||_{F}}\to 0$, the proposed monitoring procedure has trivial power. For the diverging alternative where $\frac{n\Delta^{T}\Delta}{||\Sigma||_{F}}\to+\infty$, the test has power converging to 1. When the signal strength falls in between, the asymptotic power of the test lies strictly between $\alpha$ and 1.

#### 2.1.3 Recursive computation

One challenge for the proposed monitoring statistic $T_{n,2}(k)$ is that it needs to be recomputed at each time $k$. The brute-force calculation of the test statistics has $O(n^{4}p)$ time complexity and $O(1)$ space complexity. In this section, we develop a recursive algorithm to efficiently update the monitoring statistic, which greatly improves the computational efficiency of online monitoring.
More specifically, we propose a recursive algorithm to update $G_{k}(m)$, which is the main ingredient of the monitoring statistic $T_{n,2}(k)$: $\displaystyle G_{k}(m)$ $\displaystyle=(k-m)(k-m-1)\sum_{1\leq i<j\leq m}X_{i}^{T}X_{j}+m(m-1)\sum_{m+1\leq i<j\leq k}X_{i}^{T}X_{j}$ $\displaystyle-(m-1)(k-m-1)\sum_{i=1}^{m}\sum_{j=m+1}^{k}X_{i}^{T}X_{j}.$ To compute $G_{k}(m)$, we need to keep track of two CUSUM processes, $B_{t}=\sum_{i=1}^{t}X_{i}\text{ and }C_{t}=\sum_{i=1}^{t}X_{i}^{T}X_{i},$ where the $B_{t}$'s are $p$-dimensional. The partial sum process $S(a,b)=\sum_{a\leq i<j\leq b}X_{i}^{T}X_{j}$ appearing in $G_{k}(m)$ can be expressed in terms of $B_{t}$ and $C_{t}$: $S(a,b)=\sum_{a\leq i<j\leq b}X_{i}^{T}X_{j}=\frac{1}{2}[(B_{b}-B_{a-1})^{T}(B_{b}-B_{a-1})-(C_{b}-C_{a-1})].$ The detailed algorithm is stated as follows.
1. 1. Initialization: Start with the first pair $(m,k)=(n+1,n+3)$. Record the quantities $B_{n+1},B_{n+2},B_{n+3},C_{n+1},C_{n+2},C_{n+3}.$ The first statistic can be calculated as $\displaystyle G_{n+3}(n+1)$ $\displaystyle=2\cdot(B_{n+1}^{T}B_{n+1}-C_{n+1})/2$ $\displaystyle+(n+1)n[(B_{n+3}-B_{n+1})^{T}(B_{n+3}-B_{n+1})$ $\displaystyle-(C_{n+3}-C_{n+1})]/2-nB_{n+1}^{T}(B_{n+3}-B_{n+1}).$
2. 2. Increase the index from $k$ to $k+1$: For fixed index $m$, compute $B_{k+1}$ and $C_{k+1}$: $B_{k+1}=B_{k}+X_{k+1},\quad C_{k+1}=C_{k}+X_{k+1}^{T}X_{k+1}.$ The statistic for the pair $(m,k+1)$ is $\displaystyle G_{k+1}(m)$ $\displaystyle=(k-m+1)(k-m)(B_{m}^{T}B_{m}-C_{m})/2$ $\displaystyle+m(m-1)[(B_{k+1}-B_{m})^{T}(B_{k+1}-B_{m})$ $\displaystyle-(C_{k+1}-C_{m})]/2-(m-1)(k-m)B_{m}^{T}(B_{k+1}-B_{m}).$
3. 3. Increase the index from $m$ to $m+1$: For fixed index $k$, all $B_{i}$ and $C_{i}$ for $i=n,\ldots,k$ are already recorded. The statistic for the pair $(m+1,k)$ is $\displaystyle G_{k}(m+1)$ $\displaystyle=(k-m-1)(k-m-2)(B_{m+1}^{T}B_{m+1}-C_{m+1})/2$ $\displaystyle+(m+1)m[(B_{k}-B_{m+1})^{T}(B_{k}-B_{m+1})-(C_{k}-C_{m+1})]/2$ $\displaystyle-(k-m-2)mB_{m+1}^{T}(B_{k}-B_{m+1}).$
The algorithm starts with $(m,k)=(n+1,n+3)$, increases the second index $k$ first, and then increases the first index $m$. The recursive formulation reduces the time complexity to $O(n^{2}p)$ with an additional space complexity of $O(np)$.
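For concreteness, the recursion can be packaged as in the following minimal R sketch written in our notation (the function name, the argument `sigmaF_hat` for the ratio-consistent estimate of $\|\Sigma\|_{F}$ from Section 3, and the critical value `cval` obtained as in Section 2.1.2 are placeholders); it is not the implementation used for the numerical results.

```r
# Minimal sketch of the closed-end L2 monitoring procedure based on the
# cumulative sums B_t and C_t; an alarm is raised the first time
# T_{n,2}(k) exceeds c_alpha * w(k/n - 1).
monitor_l2 <- function(X, n, sigmaF_hat, cval, w = function(t) 1) {
  nT <- nrow(X)
  B <- apply(X, 2, cumsum)                      # B[t, ] = X_1 + ... + X_t
  B <- rbind(0, B)                              # shift so that B[t + 1, ] = B_t
  C <- c(0, cumsum(rowSums(X^2)))               # C[t + 1] = sum_{i <= t} X_i' X_i
  S <- function(a, b)                           # S(a, b) = sum_{a <= i < j <= b} X_i' X_j
    0.5 * (sum((B[b + 1, ] - B[a, ])^2) - (C[b + 1] - C[a]))
  for (k in (n + 3):nT) {
    G <- vapply((n + 1):(k - 2), function(m) {
      cross <- sum(B[m + 1, ] * (B[k + 1, ] - B[m + 1, ]))
      (k - m) * (k - m - 1) * S(1, m) + m * (m - 1) * S(m + 1, k) -
        (m - 1) * (k - m - 1) * cross
    }, numeric(1))
    if (max(G) / (n^3 * sigmaF_hat) > cval * w(k / n - 1))
      return(k)                                 # alarm (detection) time
  }
  NA                                            # no alarm within [n + 1, nT]
}
```

In a streaming implementation one would update $B$ and $C$ with each new observation rather than precomputing them over the full horizon; the structure of the computation, and hence the $O(n^{2}p)$ cost, is otherwise identical.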
Essentially, we can construct a $L_{q}$-norm-based test statistic at time $k=n+q+1,\ldots,nT$, $\displaystyle T_{n,q}(k)$ $\displaystyle=\frac{1}{\sqrt{n^{3q}\widehat{\|\Sigma\|_{q}^{q}}}}\max_{m=n+1,\ldots,k-q}\sum_{l=1}^{p}\sum_{1\leq i_{1},\ldots,i_{q}\leq m}^{*}\sum_{m+1\leq j_{1},\ldots,j_{q}\leq k}^{*}(X_{i_{1},l}-X_{j_{1},l})\cdots(X_{i_{q},l}-X_{j_{q},l})$ $\displaystyle=\frac{1}{\sqrt{n^{3q}\widehat{\|\Sigma\|_{q}^{q}}}}\max_{m=n+1,\ldots,k-q}U_{n,q}(k,m),$ where $\widehat{\|\Sigma\|_{q}^{q}}$ is a ratio-consistent estimator of $\|\Sigma\|_{q}^{q}$. #### 2.2.2 Asymptotic properties In this subsection, we study the asymptotic property of the $L_{q}$-norm-based test statistics. First, we impose the following conditions in Zhang et al. (2020) to facilitate the asymptotic analysis. ###### Assumption 3. Let $X_{t}=\mu+Z_{t}$. Suppose $Z_{t}$ are i.i.d copies of $Z_{0}$ with mean 0 and covariance matrix $\Sigma$. There exists $c_{0}>0$ independent of $n$ such that $\inf_{i=1\ldots,p}Var(Z_{0,i})\geq c_{0}$ ###### Assumption 4. $Z_{0}$ has up to 8-th moments, with $\sup_{1\leq j\leq p}E[Z^{8}_{0,j}]\leq C$, and for $h=2\ldots 8$ there exist constants $C_{h}$ depending on $h$ only and a constant $r>2$ such that $|cum(Z_{0,l_{1}}\ldots,Z_{0,l_{h}})|\leq C(1\lor\max_{1\leq i<j\leq h}|l_{i}-l_{j}|)^{-r}.$ These assumptions appeared in Zhang et al. (2020), and Wang et al. (2019) showed that they imply Assumptions 1 and 2 for the case $q=2$. Assumption 4 can be implied by geometric moment contraction [cf. Proposition 2 of Wu and Shao (2004)] or physical dependence measure proposed by Wu (2005) [cf. Section 4 of Shao and Wu (2007)], or $\alpha$-mixing. It essentially requires weak cross-sectional dependence among the $p$ components in the data. Under the null hypothesis, to obtain the limiting distribution of monitoring statistic ${T}_{n,q}$, we utilize the limiting process in Zhang et al. (2020) and obtained the following theorem. ###### Theorem 3. Under Assumptions 3 and 4, $\max_{k=n+q+1}^{nT}T_{n,q}(k)\xrightarrow{d}\tilde{T}_{q}:=\sup_{t\in[1,T]}\sup_{s\in[1,t]}G_{q}(s,t),$ where $G_{q}(s,t)=\sum_{c=0}^{q}(-1)^{q-c}\begin{pmatrix}q\\\ c\end{pmatrix}s^{q-c}(t-s)^{c}Q_{q,c}(s;[0,t]),$ and $Q_{q,c}(r;[a,b])$ is a Gaussian process with covariance structure $cov(Q_{q,c_{1}}(r_{1};[a_{1},b_{1}]),Q_{q,c_{2}}(r_{2};[a_{2},b_{2}]))=\begin{pmatrix}C\\\ c\end{pmatrix}c!(q-c)!(r-A)^{c}(R-r)^{C-c}(b-R)^{q-C},$ where $A=\max(a_{1},a_{2})$, $c=\min(c_{1},c_{2})$, $C=\max(c_{1},c_{2})$ and $b=\min(b_{1},b_{2})$. Two processes $Q_{q_{1},c_{1}}$ and $Q_{q_{2},c_{2}}$ are mutually independent if $q_{1}\neq q_{2}\in 2\mathbb{N}$. The limiting null distribution is pivotal and its critical values can be simulated based on the following equation, $P\left(\sup_{t\in[1,T]}\sup_{s\in[1,t]}\frac{G_{q}(s,t)}{w(t-1)}>c_{\alpha}\right)=\alpha.$ We reject the $H_{0}$ when ${T}_{n,q}(k)>c_{\alpha}w(k/n-1)$ for $k=n+q+1,\ldots,nT$. We tabulate the critical values for $T=2$, $q=2,6$ and different boundary functions in Table 1. Critical values under other settings are available upon request. Table 1: Simulated critical values for $L_{q}$-norm-based test, $T=2$ Boundary | T1 | T2 | T3 ---|---|---|--- Quantile | $L_{2}$ | $L_{6}$ | $L_{2}$ | $L_{6}$ | $L_{2}$ | $L_{6}$ 90% | 0.756 | 3.235 | 0.204 | 0.867 | 0.141 | 0.592 95% | 1.264 | 3.711 | 0.331 | 0.973 | 0.232 | 0.676 99% | 2.715 | 4.635 | 0.706 | 1.196 | 0.485 | 0.837 Finally, we study the power of the $L_{q}$-norm-based monitoring procedure in Theorem 4. 
###### Theorem 4. Suppose that Assumptions 3 and 4 hold and that the change point location is at $\lfloor nr\rfloor$ for some $r\in(1,T)$.
1. 1. When $\frac{n^{q/2}\|\Delta\|_{q}^{q}}{\|\Sigma\|_{q}^{q/2}}\to 0$, $\max_{k=n+q+1}^{nT}T_{n,q}(k)\xrightarrow{D}\tilde{T}_{q}$;
2. 2. When $\frac{n^{q/2}\|\Delta\|_{q}^{q}}{\|\Sigma\|_{q}^{q/2}}\to\gamma\in(0,+\infty)$, $\max_{k=n+q+1}^{nT}T_{n,q}(k)\xrightarrow{D}\sup_{t\in[1,T]}\sup_{s\in[1,t]}[G_{q}(s,t)+\gamma J_{q}(s;[0,t])],$ where $J_{q}(s;[0,t])=\begin{cases}r^{q}(t-s)^{q}&r\leq s<t\\ s^{q}(t-r)^{q}&s\leq r<t\\ 0&otherwise\end{cases};$
3. 3. When $\frac{n^{q/2}\|\Delta\|_{q}^{q}}{\|\Sigma\|_{q}^{q/2}}\to\infty$, $\max_{k=n+q+1}^{nT}T_{n,q}(k)\xrightarrow{D}\infty$.

Analogous to the $q=2$ case, the power of the test depends on $\|\Delta\|_{q}$. Therefore, for large $q$, the proposed test is sensitive to sparse alternatives.

#### 2.2.3 Recursive computation

Similar to the $L_{2}$-based test statistic, we would like to extend the recursive formulation to the $L_{q}$-norm-based test statistic. According to Zhang et al. (2020), under the null the process $U_{n,q}(k,m)$ can be rewritten as $U_{n,q}(k,m)=\sum_{c=0}^{q}(-1)^{q-c}\begin{pmatrix}q\\ c\end{pmatrix}P^{m-1-c}_{q-c}P^{k-m-q+c}_{c}S_{n,q,c}(m;1,k),$ where $P_{l}^{k}=k!/(k-l)!$ and $S_{n,q,c}(m;s,k)=\sum_{l=1}^{p}\sum_{s\leq i_{1},\ldots,i_{c}\leq m}^{*}\sum_{m+1\leq j_{1},\ldots,j_{q-c}\leq k}^{*}\prod_{t=1}^{c}X_{i_{t},l}\prod_{g=1}^{q-c}X_{j_{g},l}.$ Since the $S_{n,q,c}(m;1,k)$'s are the major building blocks of the final test statistic and need to be computed at each time $k$, we need efficient ways to calculate them recursively. The key elements are sums of products of the form $B(c,m,l):=\sum_{1\leq i_{1},\ldots,i_{c}\leq m}^{*}\prod_{t=1}^{c}X_{i_{t},l}\qquad\text{ and }\qquad M(c,m,k,l):=\sum_{m\leq j_{1},\ldots,j_{c}\leq k}^{*}\prod_{g=1}^{c}X_{j_{g},l}.$ When we increase from $m$ to $m+1$, $\sum_{1\leq i_{1},\ldots,i_{c}\leq m+1}^{*}\prod_{t=1}^{c}X_{i_{t},l}=\sum_{1\leq i_{1},\ldots,i_{c}\leq m}^{*}\prod_{t=1}^{c}X_{i_{t},l}+X_{m+1,l}\cdot\sum_{1\leq i_{1},\ldots,i_{c-1}\leq m}^{*}\prod_{t=1}^{c-1}X_{i_{t},l},$ which yields the recursive relationship $B(c,m+1,l)=B(c,m,l)+B(c-1,m,l)\cdot X_{m+1,l}.$ (2.1) Similarly, $M(c,m,k,l)$ can be built up by adding the index $m$ to the set $\{m+1,\ldots,k\}$: $M(c,m,k,l)=M(c,m+1,k,l)+X_{m,l}\,M(c-1,m+1,k,l).$ (2.2) To enable the recursive computation, for each $c=0,\ldots,q$ we maintain an array of these cumulative sums, with the empty-product convention $B(0,m,l)=M(0,m,k,l)=1$.
1. 1. Initialization: Adopt the convention $B(0,m,l)=1$ for all $m$ and set $B(c,0,l)=0$ for $c\geq 1$; then $B(c,i,l)$ for all $1\leq c\leq q$ and $i\leq k+q$ follow recursively from Equation (2.1) by feeding in the observations one at a time.
2. 2. Update from $B(c,k,l)$ to $B(c,k+1,l)$: Apply Equation (2.1) with the new observation $X_{k+1,l}$ for $c=q,q-1,\ldots,1$; updating in decreasing order of $c$ ensures that the right-hand side still refers to time $k$ when the array is updated in place.
3. 3. Compute $M(c,m,k,l)$ for the current $k$: For each $l=1,\ldots,p$, start from $m=k$, where $M(0,k,k,l)=1$, $M(1,k,k,l)=X_{k,l}$ and $M(c,k,k,l)=0$ for $c\geq 2$, and iterate $m$ downward from $k-1$ to $n+1$ using Equation (2.2) to obtain all $M(c,m,k,l)$ with $n+1\leq m\leq k-q$. Construct the test statistic $T_{n,q}(k+1)$ using the $B(c,k,l)$'s and $M(c,m,k,l)$'s, and repeat from step 2.
The time complexity of the recursive formulation is $O(n^{2}pq)$ with space complexity $O(npq)$.
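To illustrate Equation (2.1), the following is a minimal R sketch of the per-coordinate update (function and variable names are ours). We treat the starred sums as running over strictly increasing index tuples, so that the stored quantities are the elementary symmetric polynomials of $X_{1,l},\ldots,X_{m,l}$; ordered distinct-index sums differ only by a factor of $c!$, which can be reinstated at the end.

```r
# One step of Equation (2.1) for a single coordinate l:
# e[c + 1] holds the sum over i_1 < ... < i_c <= m of prod_t X_{i_t, l}
# (the elementary symmetric polynomial e_c); e[1] = 1 is the empty product.
update_B <- function(e, x_new, q) {
  for (c in q:1)                         # descend in c so that e[c] is still
    e[c + 1] <- e[c + 1] + x_new * e[c]  # the value at time m, not m + 1
  e
}

# usage for one coordinate l: feed the observations in one at a time
q <- 6
e <- c(1, rep(0, q))
for (x in rnorm(10)) e <- update_B(e, x, q)
B_ordered <- factorial(0:q) * e          # ordered distinct-index sums, c = 0..q
```

Each new observation thus costs $O(q)$ per coordinate, and maintaining the analogous $M(c,m,k,l)$ arrays via Equation (2.2) yields the $O(n^{2}pq)$ time and $O(npq)$ space complexities stated above.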
### 2.3 Combining multiple $L_{q}$-norm-based statistics

In this section, we propose to combine multiple $L_{q}$-based statistics to detect both dense and sparse alternatives. More specifically, by the theoretical results in Zhang et al. (2020), the U-processes for different $q$'s are asymptotically independent, which implies that $\{T_{n,q}(k)\}_{k=n+q+1}^{nT}$ are asymptotically independent across $q\in 2\mathbb{N}$. Therefore, the statistics $\max_{k=n+q+1}^{nT}T_{n,q}(k)$ are asymptotically independent for $q\in I$, where $I\subset 2\mathbb{N}$, say $I=\{2,6\}$. Thus we can combine the monitoring procedures for different $q$'s and adjust for the asymptotic size. In general, if we want to combine a set of $q\in I$, we can adjust the size of each individual test to be $1-(1-\alpha)^{1/|I|}$, given the asymptotic independence, and reject the null if any of the monitoring statistics exceeds its critical value. Zhang et al. (2020) provided a power analysis for the identity covariance matrix case and showed that the adaptive test enjoys good overall power. In practice, there is the issue of which $q$'s to use. Based on the recommendation in Zhang et al. (2020), we set $q=6$. As mentioned in the latter paper, using a larger $q$ leads to more trimming and more computational cost. As we demonstrate in the simulations, using $q=6$ in combination with $q=2$ shows very promising performance; see Section 4 for more details.

## 3 Ratio-consistent estimator for $\|\Sigma\|_{q}^{q}$

Notice that the test statistic $T_{n,q}(k)$ requires a ratio-consistent estimator for $\|\Sigma\|_{q}^{q}$. For example, when $q=2$, this reduces to $\|\Sigma\|_{F}^{2}$. A ratio-consistent estimator for $\|\Sigma\|_{F}^{2}$ was proposed in Chen and Qin (2010), but it seems difficult to generalize to $\|\Sigma\|_{q}^{q}$. In this section, we introduce a new class of ratio-consistent estimators for $\|\Sigma\|_{q}^{q}$ based on U-statistics. We first show the result for $q=2$ and generalize it to $q\in 2\mathbb{N}$ later. Assume $\{X_{t}\}_{t=1}^{n}\in\mathbb{R}^{p}$ are i.i.d. random vectors with mean $\bm{\mu}$ and covariance matrix $\Sigma$. Define $\widehat{\|\Sigma\|_{F}^{2}}=\frac{1}{4{n\choose 4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}tr\left((X_{j_{1}}-X_{j_{2}})(X_{j_{1}}-X_{j_{2}})^{T}(X_{j_{3}}-X_{j_{4}})(X_{j_{3}}-X_{j_{4}})^{T}\right),$ (3.3) as an estimator of $\|\Sigma\|_{F}^{2}$.

###### Theorem 5. Under Assumption 1 and the condition $Cum(4)\leq C\|\Sigma\|_{F}^{4}$ in Assumption 2, $\widehat{\|\Sigma\|_{F}^{2}}$ is a ratio-consistent estimator of $\|\Sigma\|_{F}^{2}$, i.e., $\widehat{\|\Sigma\|_{F}^{2}}/\|\Sigma\|_{F}^{2}\xrightarrow{p}1$.

Now we extend this idea to general $q\in 2\mathbb{N}$. We let $\widehat{\|\Sigma\|_{q}^{q}}=\frac{1}{2^{q}{n\choose 2q}}\sum_{l_{1},l_{2}=1}^{p}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\prod_{k=1}^{q}(X_{i_{k},l_{1}}-X_{j_{k},l_{1}})(X_{i_{k},l_{2}}-X_{j_{k},l_{2}}),$ as an estimator for $\|\Sigma\|_{q}^{q}$, for any finite positive even number $q$. The proposed estimator is unbiased, as shown in the following proposition.

###### Proposition 1. $\widehat{\|\Sigma\|_{q}^{q}}$ is an unbiased estimator of $\|\Sigma\|_{q}^{q}$.

###### Proof of Proposition 1.
Since $\{X_{t}\}_{t=1}^{n}$ are i.i.d., $\displaystyle\mathbb{E}[\widehat{\|\Sigma\|_{q}^{q}}]$ $\displaystyle=\frac{1}{2^{q}{n\choose 2q}}\sum_{l_{1},l_{2}=1}^{p}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\prod_{k=1}^{q}\mathbb{E}[(X_{i_{k},l_{1}}-X_{j_{k},l_{1}})(X_{i_{k},l_{2}}-X_{j_{k},l_{2}})]$ $\displaystyle=\frac{1}{2^{q}{n\choose 2q}}\sum_{l_{1},l_{2}=1}^{p}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\prod_{k=1}^{q}(2\Sigma_{l_{1},l_{2}})$ $\displaystyle=\frac{1}{2^{q}{n\choose 2q}}\sum_{l_{1},l_{2}=1}^{p}{n\choose 2q}2^{q}\Sigma_{l_{1},l_{2}}^{q}=\|\Sigma\|_{q}^{q}.$ This completes the proof. ∎

The ratio consistency can be shown under the following assumption.

###### Assumption 5. We assume that
1. 1. there exists $c>0$ such that $\inf_{i=1,...,p}\Sigma_{i,i}>c$;
2. 2. there exist $C>0$ and $r>2$ such that for $h=2,3,4$ and $1\leq l_{1}\leq\cdots\leq l_{h}\leq p$, $|cum(X_{0,l_{1}},...,X_{0,l_{h}})|\leq C(1\vee(l_{h}-l_{1}))^{-r}.$

Notice that Assumption 5(2), which is required for the ratio consistency, is weaker than Assumption 4. Assumptions 1–5 required for our theory do not impose an explicit relationship between $n$ and $p$. For example, when $\Sigma=I_{p}$, which means there is no cross-sectional dependence, all the assumptions are satisfied and $(n,p)$ can go to infinity freely without any restrictions. When there is cross-sectional dependence, our assumptions may implicitly restrict the relative scale of $n$ and $p$. In general, larger $p$ is a blessing in our setting, as it makes the asymptotic approximation more accurate, and larger $n$ is always preferred owing to the large-sample approximation. On the other hand, the computational cost increases when both the dimension and the sample size get large.

###### Theorem 6. Under Assumption 5, $\widehat{\|\Sigma\|_{q}^{q}}$ is a ratio-consistent estimator of $\|\Sigma\|_{q}^{q}$, i.e., $\widehat{\|\Sigma\|_{q}^{q}}/\|\Sigma\|_{q}^{q}\xrightarrow{p}1$.

It is worth noting that implementing the above estimator may be time-consuming for large $q$. In practice, we can always take a random sample of all possible indices and form an incomplete U-statistic as an approximation. The consistency of the incomplete U-statistic can also be established, but this is not pursued here for simplicity.

## 4 Simulation Results

We compare the monitoring procedures for $q=2$, $q=6$ and $q=(2,6)$ combined. We consider $(n,p)=(100,50)$ with $T=2$, where the observations $X_{i}\sim N(\mu_{i},\Sigma)$ are generated independently over time. We consider four possible choices of $\Sigma$, $\Sigma_{ij}=\rho^{|i-j|}\text{ for }\rho=0,0.2,0.5,0.8,$ to evaluate the performance of the monitoring scheme under the independent-components setting and under weak and strong dependence between components. All tests have nominal size $\alpha=0.1$. Under the null $H_{0}$, there is no change point and $\mu_{i}=0$ for all $i$. Under the alternatives, we consider $\mu_{i}=\sqrt{\delta/r}(\bm{1}_{r},\bm{0}_{p-r})$ for $i=(\lfloor 1.25n\rfloor+1),\ldots,nT$. Under the dense alternatives, we set $(\delta,r)=(1,p),(2,p)$; under the sparse alternatives, we set $(\delta,r)=(1,3),(1,1)$. To illustrate the finite sample performance of our monitoring statistics, we compare with Mei (2010) (denoted as Mei) and Liu et al. (2019) (denoted as LZM), both of which operate in a setting similar to the open-end scenario of Chu et al. (1996). Neither method requires Phase I data, and both were originally designed with average run length control in mind. Therefore, they do not explicitly control the Type-I error.
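By contrast, the critical values $c_{\alpha}$ of the proposed tests come from the pivotal limits in Theorems 1 and 3. As an illustration for $q=2$, the following R sketch simulates $\sup_{t}\sup_{s}G(s,t)/w(t-1)$ on a grid; the grid size and replication number are illustrative choices, and the construction uses the fact that integrating a planar white noise over the square $[a,b]^{2}$ produces a Gaussian family $Q(a,b)$ with exactly the covariance structure of Theorem 1.

```r
# Monte Carlo approximation of c_alpha for the q = 2 limit
# sup_{1 <= s <= t <= T} G(s,t) / w(t - 1); default boundary is T1, w(t) = 1.
simulate_cval <- function(T_end = 2, N = 200, nrep = 2000, alpha = 0.1,
                          w = function(t) 1) {
  delta <- T_end / N
  grid  <- (1:N) * delta
  idx   <- which(grid >= 1)                   # restrict s, t to [1, T]
  g     <- grid[idx]
  S  <- matrix(g, length(g), length(g))       # s varies down the rows
  Tt <- t(S)                                  # t varies across the columns
  ut <- upper.tri(S, diag = TRUE)             # the region s <= t
  sup_vals <- replicate(nrep, {
    Z <- matrix(rnorm(N * N, sd = delta), N, N)   # white noise on [0, T]^2
    C <- apply(Z, 2, cumsum)                      # cumulate down the rows,
    C <- t(apply(C, 1, cumsum))                   # then across the columns
    d   <- diag(C)[idx]                           # Q(0, s) at the grid points
    Cij <- C[idx, idx]
    Q0s <- matrix(d, length(g), length(g))
    Q0t <- t(Q0s)
    Qst <- Q0t - Cij - t(Cij) + Q0s               # Q(s, t): Gaussian with
                                                  # Cov = (overlap length)^2
    G <- Tt * (Tt - S) * Q0s + S * Tt * Qst - S * (Tt - S) * Q0t
    max((G / w(Tt - 1))[ut])
  })
  quantile(sup_vals, 1 - alpha)
}
# simulate_cval() should roughly reproduce the 90% entry 0.756 of Table 1.
```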
To make a fair comparison with the proposed methods, which are developed under the closed-end monitoring framework, we generate $n$ independent Gaussian samples from $N(\bm{0},\bm{I}_{p\times p})$ and calculate the Mei and LZM monitoring statistics. We empirically determine the critical value such that the empirical rejection rate is $10\%$ based on $2500$ simulated datasets. For Mei's method, we need to specify the distribution after the change point, which we set to be the distribution under the alternative $(\delta,r)=(1,p)$. For LZM's method, we use the same setting as in Liu et al. (2019) and set $b=\log(10),\rho=0.25,t=4$ and $s=1$.

Table 2 shows the size of the monitoring procedures for the benchmark methods and the proposed methods with the three boundary functions T1, T2, T3 introduced in Section 2.1, under different correlation coefficients $\rho$. Notice that the size is noticeably worse for $\rho=0.8$. This is partially due to the poor performance of the ratio-consistent estimator, since its variance increases as the cross-sectional dependence increases. Also note that the size distortion goes in different directions for $q=2$ and $q=6$ as the correlation increases. The combined test, on the other hand, balances out such distortions. To confirm that this is only a finite sample phenomenon, we increase $(n,p)$ from $(100,50)$ to $(200,200)$; the size distortion of all tests then improves noticeably in almost all settings. The additional results are available in the Supplementary Materials. By contrast, Mei and LZM achieve accurate size only in the independent-components case, since we select the threshold under that exact setting. When there is cross-sectional dependence between the data streams, the size is no longer controlled, and the size distortion is much more severe than for the $L_{q}$-based tests.

Table 2: Size of different monitoring procedures

|  |  |  | T1 |  |  | T2 |  |  | T3 |  |  |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $\alpha=0.1$ | Mei | LZM | $L_{2}$ | $L_{6}$ | Comb | $L_{2}$ | $L_{6}$ | Comb | $L_{2}$ | $L_{6}$ | Comb |
| $\rho=0$ | 0.094 | 0.105 | 0.086 | 0.048 | 0.067 | 0.093 | 0.045 | 0.071 | 0.097 | 0.045 | 0.070 |
| $\rho=0.2$ | 0.058 | 0.125 | 0.083 | 0.048 | 0.057 | 0.082 | 0.045 | 0.055 | 0.083 | 0.046 | 0.051 |
| $\rho=0.5$ | 0.002 | 0.176 | 0.103 | 0.048 | 0.084 | 0.104 | 0.048 | 0.082 | 0.108 | 0.048 | 0.080 |
| $\rho=0.8$ | 0.000 | 0.409 | 0.135 | 0.028 | 0.085 | 0.145 | 0.027 | 0.093 | 0.137 | 0.026 | 0.086 |

Table 3 provides the power results (left columns) and the average detection delay, ADT (right columns), for different tests under dense alternatives. As expected, the $L_{2}$-based test demonstrates higher power than the $L_{6}$-based test. The power of the combined test falls in between and is closer to that of the $L_{2}$-based test. As the correlation increases, the power of all tests decreases due to the reduced signal. Among the three boundary functions, T2 appears to yield the shortest average delay time with a slight sacrifice in power. Mei's method is better only than the $L_{6}$-based test when there is no strong cross-sectional dependence; it is generally worse than all the other methods and has relatively longer delays, even though the distribution under the alternative is correctly specified. Notice that when $\rho=0.8$, Mei's method loses power completely. LZM in general has a slightly shorter detection delay, but at the cost of much lower power compared with the $L_{2}$-based test and the combined test.
This means that LZM signals an alarm more quickly once a change point occurs. Although LZM shows good power in the strong cross-sectional dependence case compared with the combined test, it comes at the price of a severely distorted size. This is related to the fact that LZM assumes all data streams are independent.

Table 3: Power under dense alternatives

Power | | Mei | LZM | | $L_{2}$ | $L_{6}$ | Comb
---|---|---|---|---|---|---|---
$\alpha=0.1$ | $(\delta,r)$ | power | ADT | power | ADT | $w(t)$ | power | ADT | power | ADT | power | ADT
$\rho=0.0$ | $(1,p)$ | 0.852 | 72.9 | 0.628 | 38.0 | T1 | 0.958 | 51.9 | 0.295 | 64.6 | 0.926 | 55.0
T2 | 0.951 | 44.3 | 0.284 | 63.0 | 0.921 | 47.7
T3 | 0.953 | 46.8 | 0.286 | 63.4 | 0.921 | 50.2
$(2,p)$ | 0.999 | 69.3 | 1.000 | 15.1 | T1 | 1.000 | 27.5 | 0.919 | 56.2 | 1.000 | 29.5
T2 | 1.000 | 20.4 | 0.919 | 54.3 | 1.000 | 21.9
T3 | 1.000 | 22.9 | 0.920 | 54.9 | 1.000 | 24.7
$\rho=0.2$ | $(1,p)$ | 0.740 | 73.3 | 0.675 | 38.2 | T1 | 0.935 | 51.8 | 0.302 | 64.4 | 0.907 | 54.9
T2 | 0.930 | 44.2 | 0.291 | 62.9 | 0.906 | 47.7
T3 | 0.933 | 46.7 | 0.294 | 63.5 | 0.903 | 50.3
$(2,p)$ | 1.000 | 69.9 | 1.000 | 15.6 | T1 | 1.000 | 28.0 | 0.884 | 56.6 | 1.000 | 30.0
T2 | 1.000 | 20.8 | 0.884 | 54.8 | 1.000 | 22.3
T3 | 1.000 | 23.4 | 0.883 | 55.3 | 1.000 | 25.2
$\rho=0.5$ | $(1,p)$ | 0.243 | 74.1 | 0.715 | 34.3 | T1 | 0.844 | 52.9 | 0.274 | 63.3 | 0.796 | 55.8
T2 | 0.843 | 45.2 | 0.267 | 61.5 | 0.787 | 47.9
T3 | 0.847 | 47.9 | 0.267 | 62.0 | 0.792 | 50.7
$(2,p)$ | 0.932 | 72.2 | 1.000 | 15.7 | T1 | 1.000 | 30.7 | 0.864 | 55.9 | 1.000 | 33.0
T2 | 1.000 | 23.1 | 0.861 | 54.2 | 1.000 | 24.8
T3 | 1.000 | 25.7 | 0.861 | 54.8 | 1.000 | 27.8
$\rho=0.8$ | $(1,p)$ | 0.000 | NA | 0.803 | 29.0 | T1 | 0.632 | 54.6 | 0.165 | 62.5 | 0.560 | 56.8
T2 | 0.637 | 46.4 | 0.162 | 60.9 | 0.575 | 48.6
T3 | 0.642 | 49.4 | 0.162 | 61.4 | 0.568 | 51.8
$(2,p)$ | 0.001 | 74.0 | 0.997 | 16.1 | T1 | 0.990 | 38.3 | 0.666 | 56.0 | 0.984 | 40.8
T2 | 0.990 | 30.1 | 0.663 | 54.2 | 0.983 | 32.1
T3 | 0.990 | 32.7 | 0.666 | 54.9 | 0.983 | 35.4

Table 4 provides the power of different tests under sparse alternatives. The $L_{6}$-based test and the combined test are very comparable in power, and the $L_{2}$-based test exhibits inferior power in most settings, as expected. One interesting observation is that for the case $(\delta,r)=(1,3)$, the $L_{2}$-based test still shows slightly higher power than the $L_{6}$-based test when $\rho=0.2$, which means that for this particular setting the change is not “sparse” enough. As the correlation increases, we observe a noticeable drop in power, which is similar to the dense alternative setting and is again attributed to the reduced signal size. Among the three different boundary functions, T2 still has the shortest average delay time, with a slight power loss compared to the other two boundary functions. Mei’s method has worse power since it is designed for dense signals and the distribution under the alternative is misspecified. By comparison, LZM gives consistently good power and short delay times across all settings. However, the good power under strong cross-sectional dependence is still offset by the severe size distortion under the null. Apart from evaluating the size and power of the monitoring procedure, we also compare the computational cost of the recursive formulation versus the brute force approach.
For the case of $(n,p)=(100,50)$, the average run-time of the brute force approach is 12.89 times that of the recursive algorithm under $H_{0}$, and 13.07 times under the alternative. The codes are implemented in R. This demonstrates the substantial efficiency gain from the recursive computational algorithm.

Table 4: Power under sparse alternatives

Power | | Mei | LZM | | $L_{2}$ | $L_{6}$ | Comb
---|---|---|---|---|---|---|---
$\alpha=0.1$ | $(\delta,r)$ | power | ADT | power | ADT | $w(t)$ | power | ADT | power | ADT | power | ADT
$\rho=0.0$ | $(1,3)$ | 0.422 | 74.0 | 0.990 | 27.4 | T1 | 0.976 | 51.5 | 0.999 | 37.8 | 0.999 | 40.5
T2 | 0.967 | 43.8 | 0.999 | 35.9 | 0.999 | 38.0
T3 | 0.972 | 46.4 | 0.999 | 36.6 | 0.999 | 39.0
$(1,1)$ | 0.400 | 73.9 | 1.000 | 23.4 | T1 | 0.961 | 51.5 | 0.951 | 51.0 | 0.976 | 52.2
T2 | 0.958 | 44.1 | 0.953 | 49.5 | 0.974 | 46.3
T3 | 0.959 | 46.4 | 0.953 | 50.0 | 0.976 | 48.7
$\rho=0.2$ | $(1,3)$ | 0.274 | 74.1 | 0.990 | 29.1 | T1 | 0.946 | 52.2 | 0.937 | 51.6 | 0.955 | 52.6
T2 | 0.939 | 44.6 | 0.935 | 50.0 | 0.955 | 47.1
T3 | 0.943 | 47.1 | 0.936 | 50.5 | 0.954 | 49.2
$(1,1)$ | 0.268 | 74.1 | 1.000 | 23.9 | T1 | 0.961 | 52.6 | 0.998 | 37.3 | 0.999 | 40.2
T2 | 0.951 | 45.3 | 0.998 | 35.4 | 0.999 | 37.7
T3 | 0.957 | 47.6 | 0.998 | 36.0 | 0.999 | 38.6
$\rho=0.5$ | $(1,3)$ | 0.048 | 74.5 | 0.972 | 28.2 | T1 | 0.871 | 54.7 | 0.881 | 51.5 | 0.887 | 53.4
T2 | 0.856 | 47.1 | 0.878 | 49.8 | 0.884 | 48.7
T3 | 0.860 | 49.9 | 0.880 | 50.4 | 0.886 | 50.6
$(1,1)$ | 0.036 | 74.3 | 1.000 | 23.2 | T1 | 0.880 | 55.9 | 0.997 | 38.0 | 0.997 | 40.7
T2 | 0.871 | 49.1 | 0.997 | 36.1 | 0.997 | 38.2
T3 | 0.879 | 51.2 | 0.997 | 36.8 | 0.997 | 39.2
$\rho=0.8$ | $(1,3)$ | 0.000 | NA | 0.971 | 24.7 | T1 | 0.621 | 58.9 | 0.800 | 52.9 | 0.808 | 55.3
T2 | 0.610 | 50.6 | 0.801 | 51.3 | 0.802 | 51.5
T3 | 0.614 | 53.7 | 0.803 | 51.9 | 0.807 | 53.2
$(1,1)$ | 0.000 | NA | 1.000 | 21.5 | T1 | 0.602 | 61.1 | 0.998 | 38.8 | 0.997 | 41.7
T2 | 0.588 | 53.6 | 0.998 | 36.8 | 0.997 | 39.3
T3 | 0.601 | 56.8 | 0.998 | 37.5 | 0.997 | 40.2

## 5 Data Illustration

### 5.1 Tonnage dataset

We first apply the proposed methodology to monitor the multi-channel tonnage profile collected in a forging process in Lei et al. (2010), where four different strain gauge sensors, mounted at the columns of the forging machine, measure the exerted force of the press. The setup of the process is shown in Figure 1a. The four strain gauge sensors represent the signature of the product and are used for process monitoring and change detection in Lei et al. (2010). Ten examples of the signals before and after the change are shown in Figure 1b. As mentioned by Lei et al. (2010) and Yan et al. (2018), a missing part affects only a small region of the signals, which makes it very hard to detect, as shown in Figure 1b.

(a) The setup of the forging machine (b) Data collected in the forging machine

Figure 1: Forging machine setup and the collected tonnage dataset

We select a subset of the data with $n=200$, where the first 130 observations are from the normal tonnage sample, and the samples after 130 are abnormal. We project the data onto a $20$-dimensional space by training an anomaly basis on a holdout sample, as has been done in Yan et al. (2018). The first 100 observations are treated as a Phase I stage without any changes, and we learn the 2-norm and $q$-norm of the covariance matrix from them.
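The norm estimates in this Phase I step can be computed exactly or, as noted after Theorem 6, approximated by an incomplete U-statistic. A minimal Python sketch of the latter is given below; the function name and sampling scheme are ours, and the kernel uses the fact that the summand over $(l_{1},l_{2})$ in the estimator factorizes into a square.

```python
import numpy as np

def sigma_q_norm_hat(X, q, n_draws=5000, seed=0):
    """Incomplete U-statistic approximation of ||Sigma||_q^q (q even).

    Each draw samples 2q distinct time indices, splits the sorted indices
    into i_1 < ... < i_q < j_1 < ... < j_q, and evaluates the unbiased kernel
    ( sum_l prod_k (X_{i_k,l} - X_{j_k,l}) )^2 / 2^q; averaging over draws
    approximates the complete U-statistic. q = 2 recovers ||Sigma||_F^2.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    vals = np.empty(n_draws)
    for b in range(n_draws):
        idx = np.sort(rng.choice(n, size=2 * q, replace=False))
        D = X[idx[:q]] - X[idx[q:]]        # q x p matrix of differences
        vals[b] = np.prod(D, axis=0).sum() ** 2 / 2 ** q
    return vals.mean()
```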
The monitoring scheme started at observation 107 (trimming due to $q=6$). The $L_{6}$-based test stopped at time 137, and estimated the possible change point location at time 128 by performing a retrospective test at time 137. The $L_{2}$-based test signaled slightly earlier at time 135 and also estimated the change at 128. The combined test signaled an alarm at time 135 with the same estimated location. The trajectories of the $L_{2}$ and $L_{6}$ test statistics are shown in Figures 2a and 2b, respectively. Notice that, when we set the size of each individual test to be $\alpha^{*}=1-(1-0.1)^{1/2}=5.13\%$, the size of the combined test will be $\alpha=1-(1-\alpha^{*})^{2}=0.1$. We signal an alarm when at least one test statistic exceeds the corresponding threshold.

(a) $L_{2}$-based test for tonnage data (b) $L_{6}$-based test for tonnage data

Figure 2: Testing statistics for tonnage data

### 5.2 Rolling dataset

We then consider process monitoring in a steel rolling manufacturing process. Surface defects such as seam defects can result in stress concentration in the bulk and may cause failures if the steel bar is used in a product. The rolling process is a high-speed process, with the rolling bar moving at around 200 miles per hour, so providing real-time online anomaly detection for the high-speed rolling process is very important to prevent further product damage. The dataset is collected in such a high-speed rolling process. Here, we have selected a segment near the end of the rolling bar, which contains 100 images of the rolling process. To remove the trend, we have applied a smooth background remover and further downsampled the images to only $16\times 64$ pixels. Examples of a normal image and an abnormal image are shown in Figures 3a and 3b, respectively.

(a) Normal rolling image (b) Abnormal rolling image

Figure 3: Examples of the rolling images

We treated the first 50 observations as the training set and obtained the ratio-consistent estimators $\widehat{\|\Sigma\|_{q}^{q}}$. After performing the change point monitoring procedure, the $L_{6}$-norm-based test signals an alarm at time $97$ and estimates that the possible change point location is at time $89$ based on the retrospective test. On the other hand, the $L_{2}$-based test fails to detect the change within the finite time horizon. The combined test also signals the alarm at time 97. We present the rolling image at time 91 in Figure 3b. This shows that after downsampling, the change is still quite sparse. The adaptive monitoring procedure is still powerful as long as one test has power. We also present the trajectories of the test statistics at each time point in Figures 4a and 4b. Notice that there is a downshift in the $L_{2}$-based monitoring statistic right after the estimated change. This is due to the fact that the signal is very sparse, and the construction of our proposed statistic may admit negative values for a short period of time. The negative values here should not be a major concern, as the test statistic should take positive values with probability tending to one under the alternatives. We confirmed this by adding an artificial dense change to the data. On the other hand, the $L_{6}$-based monitoring statistic detects the change efficiently due to its ability to capture the sparse change in the system.
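The calibration of the combined procedure is simple enough to state in a few lines of code; the sketch below (names are ours) makes explicit the decision rule used in both data illustrations.

```python
alpha = 0.10
alpha_star = 1 - (1 - alpha) ** 0.5   # individual test size, about 5.13%
# With asymptotically independent marginal statistics, two tests at level
# alpha_star give the combined procedure overall size 1-(1-alpha_star)^2.
assert abs(1 - (1 - alpha_star) ** 2 - alpha) < 1e-12

def combined_alarm(stat2, stat6, crit2, crit6):
    """Signal when either the q=2 or the q=6 statistic crosses its threshold."""
    return (stat2 > crit2) or (stat6 > crit6)
```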
(a) $L_{2}$-based test for rolling data (b) $L_{6}$-based test for rolling data

Figure 4: Testing statistics for rolling data

## 6 Summary and Conclusion

In this article, we propose a new methodology to monitor a mean shift in temporally independent high-dimensional observations. Our change point monitoring method targets the $L_{q}$-norm of the mean change for $q=2,4,6,\cdots$. By combining the monitoring statistics for different values of $q\in 2\mathbb{N}$, the adaptive procedure achieves overall satisfactory power against both sparse and dense changes in the mean. This can be very desirable from a practitioner’s viewpoint, as we often do not have knowledge about the type of alternative. Compared with the recently developed methods for monitoring large-scale data streams [e.g., Mei (2010), Xie and Siegmund (2013), Liu et al. (2019)], our method is fully nonparametric and does not require strong distributional assumptions. Furthermore, our method allows for certain cross-sectional dependence across data streams, which could naturally arise in many applications.

To conclude, we mention a few interesting future directions. Firstly, our focus in this paper is on the mean change, and it is natural to ask whether the method can be extended to monitor a change in the covariance matrix. Secondly, many streaming data have weak dependence over time due to their sequential nature, and how to accommodate weak temporal dependence will be of interest. Thirdly, in the current implementation, the ratio-consistent estimators are learned from the training data and do not change as more observations become available. In practice, if the monitoring scheme runs for a long time without signaling an alarm, it might be helpful to periodically update the ratio-consistent estimators to gain efficiency, especially when the initial training sample is short. However, it may be impractical to update this estimator for each $k$, since there seems to be no easy recursive way to update it and the associated computational cost is high. The user might need to determine how often to update it based on the available computational resources. Fourthly, even though the proposed algorithm can detect sparse changes, in many applications it is also an important problem to identify which individual data stream actually experiences a change, which will be left for future research.

## Supplementary Materials

The supplementary materials contain technical proofs for the theoretical results as well as additional simulation results.

###### Proof of Theorem 1.

We can directly apply the results shown in Wang et al. (2019) to the partial sum process $S_{n}(a,b)=\sum_{i=\lfloor na\rfloor+1}^{\lfloor nb\rfloor-1}\sum_{j=\lfloor na\rfloor+1}^{i}X_{i+1}^{T}X_{j}.$ The partial sum process satisfies $\Big{\\{}\frac{\sqrt{2}}{n||\Sigma||_{F}}S_{n}(a,b)\Big{\\}}_{(a,b)\in[0,T]^{2}}\rightsquigarrow Q\quad\text{in}\quad l^{\infty}([0,T]^{2}),$ where $Q$ is a Gaussian process whose covariance structure is $Cov(Q(a_{1},b_{1}),Q(a_{2},b_{2}))=\begin{cases}(\min(b_{1},b_{2})-\max(a_{1},a_{2}))^{2}&\text{if }\max(a_{1},a_{2})\leq\min(b_{1},b_{2}),\\\ 0&\text{otherwise}.\end{cases}$ The test statistic is a continuous transformation of the Gaussian process and the stated results follow. ∎

###### Proof of Theorem 2.

We now analyze the power of the first proposed test. Suppose the change point is at $k^{*}$, where $k^{*}/n\to r$ for some constant $r\in(1,T)$. This ensures that the change point does not occur extremely early or late in the monitoring period.
Under the alternative hypothesis, define a new sequence of random vectors $Y_{i}$, $Y_{i}=\begin{cases}X_{i}&i=1,\ldots,k^{*}\\\ X_{i}-\Delta&i=k^{*}+1,\ldots,n\end{cases}.$ This sequence does not have a change point. Without loss of generality, assume the $Y_{i}$’s are centered. Suppose that $\frac{n\Delta^{T}\Delta}{||\Sigma||_{F}}\to b\in[0,+\infty).$ When $m<k<k^{*}$, the $G_{k}(m)$ statistic is not affected. It suffices to consider the cases $m<k^{*}<k$ and $k^{*}<m<k$. Following the decomposition in Wang et al. (2019), under the fixed alternative when $k^{*}>m$, $\displaystyle G_{k}(m)$ $\displaystyle=G^{Y}_{k}(m)+{(k-k^{*})(k-k^{*}-1)m(m-1)}||\Delta||_{2}^{2}$ $\displaystyle\quad-{2(k-k^{*})(k-m-2)(m-1)}\sum_{j=1}^{m}Y_{j}^{T}\Delta$ $\displaystyle\quad-4(m-1)(m-2)(k-k^{*})\sum_{j=m+1}^{k^{*}}Y_{j}^{T}\Delta.$ Here $G_{k}^{Y}(m)$ is the statistic calculated for the $Y_{i}$ sequence. Let $s_{n}(k)=\sum_{j=1}^{k}Y_{j}^{T}\Delta$. Then $\sup_{1\leq l\leq k\leq nT}|\sum_{j=l}^{k}Y_{j}^{T}\Delta|\leq 2\sup_{1\leq k\leq nT}|s_{n}(k)|=O_{p}(n^{1/2}(\Delta^{T}\Sigma\Delta)^{1/2}).$ The last bound is obtained by Kolmogorov’s inequality. This implies that when $k^{*}>m$, $\displaystyle\frac{1}{n^{3}\|\Sigma\|_{F}}G_{k}(m)$ $\displaystyle=\frac{1}{n^{3}\|\Sigma\|_{F}}G_{k}^{Y}(m)+\frac{(k-k^{*})(k-k^{*}-1)m(m-1)}{n^{3}}\frac{||\Delta||_{2}^{2}}{||\Sigma||_{F}}+O_{p}(\frac{n^{1/2}(\Delta^{T}\Sigma\Delta)^{1/2}}{||\Sigma||_{F}}).$ Similarly, we can show that when $k^{*}<m$, $\frac{1}{n^{3}\|\Sigma\|_{F}}G_{k}(m)=\frac{1}{n^{3}\|\Sigma\|_{F}}G_{k}^{Y}(m)+\frac{k^{*}(k^{*}-1)(k-m)(k-m-1)}{n^{3}}\frac{||\Delta||_{2}^{2}}{||\Sigma||_{F}}+O_{p}(\frac{n^{1/2}(\Delta^{T}\Sigma\Delta)^{1/2}}{||\Sigma||_{F}}).$ The last term converges to 0 in probability. Therefore, the test statistic $T_{n}$ can be viewed as an extension of the original process, and the second terms also form a process depending on $m$ and $k^{*}$. Under the fixed alternative, the scaled process converges: $\frac{1}{n^{3}\|\Sigma\|_{F}}\\{G_{\lfloor nt\rfloor}(\lfloor ns\rfloor)\\}_{s\in[0,1]}\rightsquigarrow G(s,t)+b\Lambda(s,t),$ where $\Lambda(s,t)=\begin{cases}(t-r)^{2}s^{2}&s\leq r<t\\\ r^{2}(t-s)^{2}&r<s<t\\\ 0&\text{otherwise}\end{cases}.$ This implies that, when $b=0$, the process is the same as the null process, and the proposed monitoring scheme will have trivial power. When $b$ is nonzero, since the additional term is positive, the test has non-trivial power. When $\frac{n\Delta^{T}\Delta}{||\Sigma||_{F}}\to\infty$, following the above decomposition, we have $\max_{k}T_{n}(k)\geq T_{n}(k^{*})=\frac{1}{n\|\Sigma\|_{F}}D_{nT}^{Y}(k^{*})+O(\frac{n||\Delta||_{2}^{2}}{||\Sigma||_{F}})\to\infty.$ Since the first term is pivotal and is bounded in probability, the test has power converging to 1. ∎

###### Proof of Theorem 3.

We can directly apply the results in Theorems 2.1 and 2.2 in Zhang et al. (2020), which state that for $S_{n,q,c}(r;[a,b])=\sum_{l=1}^{p}\sum_{\lfloor na\rfloor+1\leq i_{1},\ldots,i_{c}\leq\lfloor nr\rfloor}^{*}\sum_{\lfloor nr\rfloor+1\leq j_{1},\ldots,j_{q-c}\leq\lfloor nb\rfloor}^{*}\prod_{t=1}^{c}X_{i_{t},l}\prod_{g=1}^{q-c}X_{j_{g},l},$ we have $\frac{1}{\sqrt{n^{q}\|\Sigma\|_{q}^{q}}}S_{n,q,c}(r;[a,b])\rightsquigarrow Q_{q,c}(r;[a,b]),$ where $Q_{q,c}$ is the Gaussian process stated in Theorem 4. The monitoring statistic is a continuous transformation of the processes $S_{n,q,c}$, and the asymptotic result follows. ∎

###### Proof of Theorem 4.
We first discuss the case when $\frac{n^{q/2}\|\Delta\|_{q}^{q}}{\|\Sigma\|_{q}^{q/2}}\to\gamma\in[0,+\infty)$ and the true change point is at location $k^{*}=\lfloor nr\rfloor$. Here we adopt the process convergence results in Theorem 2.3 of Zhang et al. (2020), which states that for $(k,m)=(\lfloor ns\rfloor,\lfloor nt\rfloor)$, $\displaystyle\frac{1}{\sqrt{n^{3q}\|\Sigma\|_{q}^{q}}}D_{n,q}(s;[0,t])$ $\displaystyle=\frac{1}{\sqrt{n^{3q}\|\Sigma\|_{q}^{q}}}\sum_{l=1}^{p}\sum_{0\leq i_{1},\ldots,i_{q}\leq k}^{*}\sum_{k+1\leq j_{1},\ldots,j_{q}\leq m}^{*}(X_{i_{1},l}-X_{j_{1},l})\cdots(X_{i_{q},l}-X_{j_{q},l})$ $\displaystyle\rightsquigarrow G_{q}(s,t)+\gamma J_{q}(s;[0,t]),$ where $J_{q}(s;[0,t])=\begin{cases}r^{q}(t-s)^{q}&r\leq s<t\\\ s^{q}(t-r)^{q}&s\leq r<t\\\ 0&\text{otherwise}\end{cases}.$ Therefore, by the continuous mapping theorem, when $\gamma\in[0,+\infty)$, the results in the theorem hold. For the case $\frac{n^{q/2}\|\Delta\|_{q}^{q}}{\|\Sigma\|_{q}^{q/2}}\to+\infty$, we have $\max_{k}T_{n,q}(k)\geq T_{n,q}(k^{*})=\frac{1}{\sqrt{n^{3q}\|\Sigma\|_{q}^{q}}}D_{n,q}^{Y}(k^{*})+C\frac{n^{q/2}\|\Delta\|_{q}^{q}}{\|\Sigma\|_{q}^{q/2}}\to\infty.$ Since the first term is bounded in probability, the power converges to one. ∎

###### Proof of Theorem 5.

By straightforward calculation, we have $\displaystyle\widehat{\|\Sigma\|_{F}^{2}}$ $\displaystyle=\frac{1}{4{n\choose 4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}tr\left((X_{j_{1}}-X_{j_{2}})(X_{j_{1}}-X_{j_{2}})^{T}(X_{j_{3}}-X_{j_{4}})(X_{j_{3}}-X_{j_{4}})^{T}\right)$ $\displaystyle=\frac{1}{4{n\choose 4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}[(X_{j_{1}}-X_{j_{2}})^{T}(X_{j_{3}}-X_{j_{4}})]^{2}$ $\displaystyle=\frac{1}{4{n\choose 4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}[(X_{j_{1}}^{T}X_{j_{3}})^{2}+(X_{j_{2}}^{T}X_{j_{3}})^{2}+(X_{j_{2}}^{T}X_{j_{4}})^{2}+(X_{j_{1}}^{T}X_{j_{4}})^{2}]$ $\displaystyle-\frac{2}{4{n\choose 4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}[X_{j_{1}}^{T}X_{j_{3}}X_{j_{1}}^{T}X_{j_{4}}+X_{j_{2}}^{T}X_{j_{3}}X_{j_{2}}^{T}X_{j_{4}}+X_{j_{1}}^{T}X_{j_{3}}X_{j_{2}}^{T}X_{j_{3}}+X_{j_{1}}^{T}X_{j_{4}}X_{j_{2}}^{T}X_{j_{4}}]$ $\displaystyle+\frac{2}{4{n\choose 4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}[X_{j_{1}}^{T}X_{j_{3}}X_{j_{2}}^{T}X_{j_{4}}+X_{j_{2}}^{T}X_{j_{3}}X_{j_{1}}^{T}X_{j_{4}}]$ $\displaystyle=I_{n,1}+I_{n,2}+I_{n,3}+I_{n,4}-(I_{n,5}+I_{n,6}+I_{n,7}+I_{n,8})+(I_{n,9}+I_{n,10}).$ For $I_{n,1}$, $\mathbb{E}[I_{n,1}]=\frac{1}{4{n\choose 4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\mathbb{E}[(X_{j_{1}}^{T}X_{j_{3}})^{2}]=\frac{1}{4}tr(\mathbb{E}[X_{j_{3}}X_{j_{3}}^{T}X_{j_{1}}X_{j_{1}}^{T}])=\|\Sigma\|_{F}^{2}/4.$ Thus $\mathbb{E}[I_{n,1}/\|\Sigma\|_{F}^{2}]=1/4$. By similar arguments, it is easy to see that $\mathbb{E}[I_{n,i}/\|\Sigma\|_{F}^{2}]=1/4$ for $i=1,2,3,4$, and $\mathbb{E}[I_{n,i}/\|\Sigma\|_{F}^{2}]=0$ for $i=5,...,10$. The outline of the proof is as follows. We will show that $4I_{n,i}/\|\Sigma\|_{F}^{2}\rightarrow_{p}1$ for $i=1,2,3,4$, and $I_{n,i}/\|\Sigma\|_{F}^{2}\rightarrow_{p}0$ for $i=5,...,10$. Since some of the $I_{n,i}$ share very similar structures, we will only present the proofs of (1) $4I_{n,1}/\|\Sigma\|_{F}^{2}\rightarrow_{p}1$ and (2) $I_{n,5}/\|\Sigma\|_{F}^{2}\rightarrow_{p}0$. The other terms can be handled by similar arguments. To show (1), it suffices to show that $\mathbb{E}[16I_{n,1}^{2}/\|\Sigma\|_{F}^{4}]\rightarrow 1$.
To see this, $\displaystyle\mathbb{E}[16I_{n,1}^{2}/\|\Sigma\|_{F}^{4}]=\frac{1}{{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}\mathbb{E}[(X_{j_{1}}^{T}X_{j_{3}})^{2}(X_{j_{5}}^{T}X_{j_{7}})^{2}]$ $\displaystyle=\frac{1}{{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{j_{1},l_{1}}X_{j_{3},l_{1}}X_{j_{1},l_{2}}X_{j_{3},l_{2}}X_{j_{5},l_{3}}X_{j_{7},l_{3}}X_{j_{5},l_{4}}X_{j_{7},l_{4}}].$ Since the expectation of a product of random variables can be expressed in terms of joint cumulants, we have $\mathbb{E}[X_{j_{1},l_{1}}X_{j_{3},l_{1}}X_{j_{1},l_{2}}X_{j_{3},l_{2}}X_{j_{5},l_{3}}X_{j_{7},l_{3}}X_{j_{5},l_{4}}X_{j_{7},l_{4}}]=\sum_{\pi}\prod_{B\in\pi}cum(X_{j,l}:(j,l)\in B),$ where $\pi$ runs through the list of all partitions of $\\{(j_{1},l_{1}),(j_{1},l_{2}),...,(j_{7},l_{3}),(j_{7},l_{4})\\}$ and $B$ runs through the list of all blocks of the partition $\pi$. Since $j_{1}<j_{3}$ and $j_{5}<j_{7}$, it is impossible to have three or more indices in $\\{j_{1},j_{3},j_{5},j_{7}\\}$ that are identical. Thus for the right-hand side of the above expression, we only need to take the sum over all partitions with all block sizes smaller than $5$, because a joint cumulant of order greater than or equal to $5$ must contain at least 3 indices from $\\{j_{1},j_{3},j_{5},j_{7}\\}$, at least one of which is not identical to the other two. Such a joint cumulant equals zero, since it involves two or more independent random variables. Also, since all random variables on the left-hand side of the above expression have mean zero, we do not need to consider partitions with blocks of size 1. Thus the expression can be simplified as $\displaystyle\mathbb{E}[X_{j_{1},l_{1}}X_{j_{3},l_{1}}X_{j_{1},l_{2}}X_{j_{3},l_{2}}X_{j_{5},l_{3}}X_{j_{7},l_{3}}X_{j_{5},l_{4}}X_{j_{7},l_{4}}]$ $\displaystyle=$ $\displaystyle C_{1}^{(j_{1},j_{3},j_{5},j_{7})}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]^{2}+C_{2}^{(j_{1},j_{3},j_{5},j_{7})}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}$ $\displaystyle+\Sigma_{l_{1},l_{2}}^{2}\Sigma_{l_{3},l_{4}}^{2},$ where $C_{1}^{(j_{1},j_{3},j_{5},j_{7})}$, $C_{2}^{(j_{1},j_{3},j_{5},j_{7})}$ are finite nonnegative constants depending only on the values of $j_{1},j_{3},j_{5},j_{7}$. $C_{1}^{(j_{1},j_{3},j_{5},j_{7})}$ can only be nonzero if $j_{1}=j_{5}$ and $j_{3}=j_{7}$, and $C_{2}^{(j_{1},j_{3},j_{5},j_{7})}$ is nonzero only if at least two of $(j_{1},j_{3},j_{5},j_{7})$ are equal. This implies that $\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}C_{1}^{(j_{1},j_{3},j_{5},j_{7})}=o(n^{8}),$ and $\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}C_{2}^{(j_{1},j_{3},j_{5},j_{7})}=o(n^{8}).$ Furthermore, according to Assumption 2, $\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}cum(X_{0,l_{1}},X_{0,l_{2}},X_{0,l_{3}},X_{0,l_{4}})^{2}\leq C\|\Sigma\|_{F}^{4}$.
It can be verified that $\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]^{2}$ $\displaystyle\lesssim\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}cum(X_{0,l_{1}},X_{0,l_{2}},X_{0,l_{3}},X_{0,l_{4}})^{2}$ $\displaystyle+\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\Sigma_{l_{1},l_{2}}^{2}\Sigma_{l_{3},l_{4}}^{2}$ $\displaystyle\lesssim\|\Sigma\|_{F}^{4},$ and by the Cauchy-Schwarz inequality, $\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}$ $\displaystyle\leq$ $\displaystyle\sqrt{\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]^{2}}\sqrt{\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\Sigma_{l_{1},l_{2}}^{2}\Sigma_{l_{3},l_{4}}^{2}}\leq\sqrt{C}\|\Sigma\|_{F}^{4}.$ (6.4) This indicates that $\displaystyle\mathbb{E}[16I_{n,1}^{2}/\|\Sigma\|_{F}^{4}]$ $\displaystyle=$ $\displaystyle\frac{1}{{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}C_{1}^{(j_{1},j_{3},j_{5},j_{7})}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]^{2}$ $\displaystyle+$ $\displaystyle\frac{1}{{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}C_{2}^{(j_{1},j_{3},j_{5},j_{7})}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}$ $\displaystyle+$ $\displaystyle\frac{1}{{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\Sigma_{l_{1},l_{2}}^{2}\Sigma_{l_{3},l_{4}}^{2}=o(1)+o(1)+1\rightarrow 1.$ Thus, $4I_{n,1}/\|\Sigma\|_{F}^{2}\rightarrow_{p}1$, and (1) is proved. By similar arguments, $4I_{n,i}/\|\Sigma\|_{F}^{2}\rightarrow_{p}1$ holds for $i=2,3,4$. To show (2), we need to prove $\mathbb{E}[I_{n,5}^{2}/\|\Sigma\|_{F}^{4}]\rightarrow 0$. To see this, $\displaystyle\mathbb{E}[I_{n,5}^{2}/\|\Sigma\|_{F}^{4}]=\frac{1}{4{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}\mathbb{E}[(X_{j_{1}}^{T}X_{j_{3}}X_{j_{1}}^{T}X_{j_{4}})(X_{j_{5}}^{T}X_{j_{7}}X_{j_{5}}^{T}X_{j_{8}})]$ $\displaystyle=$ $\displaystyle\frac{1}{4{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{j_{1},l_{1}}X_{j_{3},l_{1}}X_{j_{1},l_{2}}X_{j_{4},l_{2}}X_{j_{5},l_{3}}X_{j_{7},l_{3}}X_{j_{5},l_{4}}X_{j_{8},l_{4}}].$ By arguments for the joint cumulants similar to those in the proof of (1), it can be shown that $\displaystyle\mathbb{E}[X_{j_{1},l_{1}}X_{j_{3},l_{1}}X_{j_{1},l_{2}}X_{j_{4},l_{2}}X_{j_{5},l_{3}}X_{j_{7},l_{3}}X_{j_{5},l_{4}}X_{j_{8},l_{4}}]$ $\displaystyle=$ $\displaystyle C_{1}^{(j_{1},j_{3},j_{4},j_{5},j_{7},j_{8})}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]\Sigma_{l_{1},l_{3}}\Sigma_{l_{2},l_{4}}+C_{2}^{(j_{1},j_{3},j_{4},j_{5},j_{7},j_{8})}\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}\Sigma_{l_{1},l_{3}}\Sigma_{l_{2},l_{4}}.$ If $C_{1}^{(j_{1},j_{3},j_{4},j_{5},j_{7},j_{8})}\neq 0$, then $j_{1}=j_{5}$; and if $C_{2}^{(j_{1},j_{3},j_{4},j_{5},j_{7},j_{8})}\neq 0$, then $j_{3}=j_{5}$ and $j_{4}=j_{8}$.
These two properties guarantee that $\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}C_{1}^{(j_{1},j_{3},j_{4},j_{5},j_{7},j_{8})}=o(n^{8}),$ and $\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}C_{2}^{(j_{1},j_{3},j_{4},j_{5},j_{7},j_{8})}=o(n^{8}).$ Furthermore, we have shown the bound for $\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}$ in (6.4). In addition, $\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}\Sigma_{l_{1},l_{3}}\Sigma_{l_{2},l_{4}}=\sum_{l_{1},l_{4}=1}^{p}\left(\sum_{l_{2}=1}^{p}\Sigma_{l_{1},l_{2}}\Sigma_{l_{2},l_{4}}\right)\left(\sum_{l_{3}=1}^{p}\Sigma_{l_{1},l_{3}}\Sigma_{l_{3},l_{4}}\right)$ $\displaystyle=$ $\displaystyle\sum_{l_{1},l_{4}=1}^{p}[(\Sigma^{2})_{l_{1},l_{4}}]^{2}=tr(\Sigma^{4})=o(\|\Sigma\|_{F}^{4}),$ by Assumption 1. Thus, $\displaystyle\mathbb{E}[I_{n,5}^{2}/\|\Sigma\|_{F}^{4}]$ $\displaystyle=$ $\displaystyle\frac{1}{4{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}C_{1}^{(j_{1},j_{3},j_{4},j_{5},j_{7},j_{8})}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}[X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}}]\Sigma_{l_{1},l_{3}}\Sigma_{l_{2},l_{4}}$ $\displaystyle+$ $\displaystyle\frac{1}{4{n\choose 4}^{2}\|\Sigma\|_{F}^{4}}\sum_{1\leq j_{1}<j_{2}<j_{3}<j_{4}\leq n}\sum_{1\leq j_{5}<j_{6}<j_{7}<j_{8}\leq n}C_{2}^{(j_{1},j_{3},j_{4},j_{5},j_{7},j_{8})}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}\Sigma_{l_{1},l_{3}}\Sigma_{l_{2},l_{4}}$ $\displaystyle=$ $\displaystyle o(1)+o(1)\rightarrow 0.$ This indicates $I_{n,5}/\|\Sigma\|_{F}^{2}\rightarrow_{p}0$. By similar arguments, we can prove that $I_{n,i}/\|\Sigma\|_{F}^{2}\rightarrow_{p}0$ for all $i=6,...,10$. Combining the above results, we have $\widehat{\|\Sigma\|_{F}^{2}}/\|\Sigma\|_{F}^{2}\rightarrow_{p}1$. This completes the proof. ∎

###### Proof of Theorem 6.

We can rewrite $\widehat{\|\Sigma\|_{q}^{q}}$ as $\displaystyle\widehat{\|\Sigma\|_{q}^{q}}=\frac{1}{2^{q}{n\choose 2q}}\sum_{l_{1},l_{2}=1}^{p}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\prod_{k=1}^{q}(X_{i_{k},l_{1}}X_{i_{k},l_{2}}+X_{j_{k},l_{1}}X_{j_{k},l_{2}}-X_{i_{k},l_{1}}X_{j_{k},l_{2}}-X_{j_{k},l_{1}}X_{i_{k},l_{2}})$ $\displaystyle=$ $\displaystyle\frac{1}{2^{q}{n\choose 2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{t_{1},s_{1}\in\\{i_{1},j_{1}\\}}\cdots\sum_{t_{q},s_{q}\in\\{i_{q},j_{q}\\}}\sum_{l_{1},l_{2}=1}^{p}\prod_{k=1}^{q}(-1)^{\bm{1}\\{t_{k}\neq s_{k}\\}}X_{t_{k},l_{1}}X_{s_{k},l_{2}}$ $\displaystyle=$ $\displaystyle\frac{1}{2^{q}{n\choose 2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{t_{1}\in\\{i_{1},j_{1}\\}}\cdots\sum_{t_{q}\in\\{i_{q},j_{q}\\}}\sum_{l_{1},l_{2}=1}^{p}\prod_{k=1}^{q}X_{t_{k},l_{1}}X_{t_{k},l_{2}}$ $\displaystyle+\frac{1}{2^{q}{n\choose 2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{t_{1},s_{1}\in\\{i_{1},j_{1}\\}}\cdots\sum_{t_{q},s_{q}\in\\{i_{q},j_{q}\\}}$ $\displaystyle\qquad\qquad\sum_{l_{1},l_{2}=1}^{p}\bm{1}\\{\cup_{k=1}^{q}\\{t_{k}\neq s_{k}\\}\\}\prod_{k=1}^{q}(-1)^{\bm{1}\\{t_{k}\neq s_{k}\\}}X_{t_{k},l_{1}}X_{s_{k},l_{2}}.$ The second equality in the above expression follows by expanding the cross products among the $q$ brackets, and the third equality splits the terms based on the different values of $t_{k},s_{k}$ for $k=1,...,q$.
The first term in the third equality contains all products with $t_{k}=s_{k}$ for all $k=1,...,q$, and the second term contains products with at least one $k=1,...,q$ such that $t_{k}\neq s_{k}$. The outline of the proof is as follows. We want to show: 1. for every $t_{1}\in\\{i_{1},j_{1}\\},...,t_{q}\in\\{i_{q},j_{q}\\}$, $I({t_{1},...,t_{q}})=\frac{1}{{n\choose 2q}\|\Sigma\|_{q}^{q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{l_{1},l_{2}=1}^{p}\prod_{k=1}^{q}X_{t_{k},l_{1}}X_{t_{k},l_{2}}\rightarrow_{p}1;$ 2. for every $t_{1},s_{1}\in\\{i_{1},j_{1}\\},...,t_{q},s_{q}\in\\{i_{q},j_{q}\\}$ such that there exists at least one $k=1,...,q$ with $t_{k}\neq s_{k}$, $J({t_{1},s_{1},...,t_{q},s_{q}})=\frac{1}{{n\choose 2q}\|\Sigma\|_{q}^{q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{l_{1},l_{2}=1}^{p}\prod_{k=1}^{q}X_{t_{k},l_{1}}X_{s_{k},l_{2}}\rightarrow_{p}0.$ It is easy to see that if these two results hold, then $\widehat{\|\Sigma\|_{q}^{q}}/\|\Sigma\|_{q}^{q}\rightarrow_{p}1$. Since most of the terms are structurally very similar, we shall only present the proof for $I({i_{1},...,i_{q}})\rightarrow_{p}1$ and a general proof of (2). It is easy to see that $\mathbb{E}[\sum_{l_{1},l_{2}=1}^{p}\prod_{k=1}^{q}X_{t_{k},l_{1}}X_{t_{k},l_{2}}/\|\Sigma\|_{q}^{q}]=1$. This indicates that to show (1), it suffices to show that $\mathbb{E}[I(i_{1},...,i_{q})^{2}]\rightarrow 1$. To show this, $\displaystyle\mathbb{E}[I(i_{1},...,i_{q})^{2}]=$ $\displaystyle\frac{1}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}$ $\displaystyle\qquad\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}\left[\prod_{k=1}^{q}X_{i_{k},l_{1}}X_{i_{k},l_{2}}X_{i_{k}^{\prime},l_{3}}X_{i_{k}^{\prime},l_{4}}\right].$ Due to the special structure of our statistic, $\mathbb{E}\left[\prod_{k=1}^{q}X_{i_{k},l_{1}}X_{i_{k},l_{2}}X_{i_{k}^{\prime},l_{3}}X_{i_{k}^{\prime},l_{4}}\right]=\sum_{m=0}^{q}C_{m}E(X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}})^{m}(\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}})^{q-m},$ where $C_{m}=C_{m}(i_{1},...,i_{q},i_{1}^{\prime},...,i_{q}^{\prime})\geq 0$ is a function of all indices for each $m=0,1,...,q$. $C_{m}=1$ if there are exactly $m$ indices in $\\{i_{1},...,i_{q}\\}$ that are equal to $m$ indices in $\\{i_{1}^{\prime},...,i_{q}^{\prime}\\}$, and $C_{m}=0$ otherwise. These events are mutually exclusive, which indicates that $\sum_{m=0}^{q}C_{m}=1$.
This indicates that for all $m=1,...,q$, $\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}C_{m}(i_{1},...,i_{q},i_{1}^{\prime},...,i_{q}^{\prime})=o(n^{4q}),$ and $\frac{1}{{n\choose 2q}^{2}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}C_{0}(i_{1},...,i_{q},i_{1}^{\prime},...,i_{q}^{\prime})\rightarrow 1.$ Furthermore, for any $m=1,...,q$, by Hölder’s inequality for vector spaces, we have $\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|E(X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}})|^{m}|\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}|^{q-m}$ $\displaystyle\leq$ $\displaystyle\left(\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|E(X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}})^{m}|^{q/m}\right)^{m/q}\left(\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}(|\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}|^{q-m})^{q/(q-m)}\right)^{(q-m)/q}$ $\displaystyle=$ $\displaystyle\left(\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|E(X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}})|^{q}\right)^{m/q}\left(\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|\Sigma_{l_{1},l_{2}}|^{q}|\Sigma_{l_{3},l_{4}}|^{q}\right)^{(q-m)/q}$ $\displaystyle\leq$ $\displaystyle C\|\Sigma\|_{q}^{2m}\|\Sigma\|_{q}^{2(q-m)}=C\|\Sigma\|_{q}^{2q},$ where the last inequality is due to Assumption 5. To see this, $\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|E(X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}})|^{q}$ $\displaystyle\leq$ $\displaystyle C\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|cum(X_{0,l_{1}},X_{0,l_{2}},X_{0,l_{3}},X_{0,l_{4}})|^{q}+|\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}}|^{q}+|\Sigma_{l_{1},l_{3}}\Sigma_{l_{2},l_{4}}|^{q}+|\Sigma_{l_{1},l_{4}}\Sigma_{l_{2},l_{3}}|^{q}$ $\displaystyle\leq$ $\displaystyle C\sum_{1\leq l_{1}\leq l_{2}\leq l_{3}\leq l_{4}\leq p}(1\vee(l_{4}-l_{1}))^{-2rq}+3C\|\Sigma\|_{q}^{2q}\leq Cp^{2}+3C\|\Sigma\|_{q}^{2q}\leq C\|\Sigma\|_{q}^{2q},$ for some generic positive constant $C$, since $\|\Sigma\|_{q}^{2q}=(\sum_{i,j=1}^{p}\Sigma_{i,j}^{q})^{2}\geq Cp^{2}$ under Assumption 5(1).
Therefore, $\displaystyle\mathbb{E}[I(i_{1},...,i_{q})^{2}]$ $\displaystyle=$ $\displaystyle\frac{1}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}\left[\prod_{k=1}^{q}X_{i_{k},l_{1}}X_{i_{k},l_{2}}X_{i_{k}^{\prime},l_{3}}X_{i_{k}^{\prime},l_{4}}\right]$ $\displaystyle=$ $\displaystyle\frac{1}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}$ $\displaystyle\qquad\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\sum_{m=0}^{q}C_{m}\mathbb{E}(X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}})^{m}(\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}})^{q-m}$ $\displaystyle=$ $\displaystyle\frac{1}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}C_{0}(i_{1},...,i_{q},i_{1}^{\prime},...,i_{q}^{\prime})\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\Sigma_{l_{1},l_{2}}^{q}\Sigma_{l_{3},l_{4}}^{q}$ $\displaystyle+$ $\displaystyle\frac{1}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}o(n^{4q})\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\sum_{m=1}^{q}\mathbb{E}(X_{0,l_{1}}X_{0,l_{2}}X_{0,l_{3}}X_{0,l_{4}})^{m}(\Sigma_{l_{1},l_{2}}\Sigma_{l_{3},l_{4}})^{q-m}$ $\displaystyle=$ $\displaystyle 1+o(1)\rightarrow 1.$ This completes the proof of (1). To show (2), it suffices to show that $\mathbb{E}[J(t_{1},s_{1},...,t_{q},s_{q})^{2}]\rightarrow 0$. Specifically, $\displaystyle\mathbb{E}[J(t_{1},s_{1},...,t_{q},s_{q})^{2}]=$ $\displaystyle\frac{1}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}$ $\displaystyle\qquad\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}\left[\prod_{k=1}^{q}X_{t_{k},l_{1}}X_{s_{k},l_{2}}X_{t_{k}^{\prime},l_{3}}X_{s_{k}^{\prime},l_{4}}\right],$ for $t_{1},s_{1}\in\\{i_{1},j_{1}\\},...,t_{q},s_{q}\in\\{i_{q},j_{q}\\},t_{1}^{\prime},s_{1}^{\prime}\in\\{i_{1}^{\prime},j_{1}^{\prime}\\},...,t_{q}^{\prime},s_{q}^{\prime}\in\\{i_{q}^{\prime},j_{q}^{\prime}\\}$, where there exists at least one $k=1,...,q$ such that $t_{k}\neq s_{k}$ ($t_{k}^{\prime}\neq s_{k}^{\prime}$). Since the expectation of a product of random variables can be expressed in terms of joint cumulants, we have $\mathbb{E}\left[\prod_{k=1}^{q}X_{t_{k},l_{1}}X_{s_{k},l_{2}}X_{t_{k}^{\prime},l_{3}}X_{s_{k}^{\prime},l_{4}}\right]=\sum_{\pi}\prod_{B\in\pi}cum(X_{i,l}:(i,l)\in B),$ where $\pi$ runs through the list of all partitions of $\\{(t_{1},l_{1}),(s_{1},l_{2}),...,(t_{q}^{\prime},l_{3}),(s_{q}^{\prime},l_{4})\\}$ and $B$ runs through the list of all blocks of the partition $\pi$. Due to the special structure of our statistic, there is a set of partitions $\mathcal{S}$ such that for every $\pi\in\mathcal{S}$, the product of joint cumulants over all $B\in\pi$ is zero. For each $\pi\in\mathcal{S}^{c}$, the blocks $B\in\pi$ enjoy several useful properties, which we describe below. To be clear, since we are dealing with a double-indexed array $X_{i,l}$, we call $i$ the temporal index and $l$ the spatial index. For every $\pi\in\mathcal{S}^{c}$: 1. The size of every block $B\in\pi$ cannot exceed 4.
Since $i_{1},...,i_{q},j_{1},...,j_{q}$ are all distinct, and $i_{1}^{\prime},...,i_{q}^{\prime},j_{1}^{\prime},...,j_{q}^{\prime}$ are all distinct, it is impossible to have any three indices in $\\{i_{1},...,i_{q},j_{1},...,j_{q},i_{1}^{\prime},...,i_{q}^{\prime},j_{1}^{\prime},...,j_{q}^{\prime}\\}$ that are equal. Any joint cumulant of order greater than or equal to 5 will include at least three such indices, which cannot all be equal. 2. There are no blocks of size 1. This is because the cumulant of a single random variable with mean zero is also zero. 3. Every $B\in\pi$ must contain exactly one distinct temporal index; otherwise $\prod_{B\in\pi}cum(X_{i,l}:(i,l)\in B)=0$. The above properties imply that for every $\pi\in\mathcal{S}^{c}$ and every $B\in\pi$, ${cum(X_{i,l}:(i,l)\in B)}$ has to be one of the following: $cum(X_{0,l_{1}},X_{0,l_{2}},X_{0,l_{3}},X_{0,l_{4}})$, $cum(X_{0,l_{1}},X_{0,l_{2}},X_{0,l_{3}})$, $cum(X_{0,l_{1}},X_{0,l_{2}},X_{0,l_{4}})$, $cum(X_{0,l_{1}},X_{0,l_{3}},X_{0,l_{4}})$, $cum(X_{0,l_{2}},X_{0,l_{3}},X_{0,l_{4}})$, $\Sigma_{l_{1},l_{2}}$, $\Sigma_{l_{1},l_{3}}$, $\Sigma_{l_{1},l_{4}}$, $\Sigma_{l_{2},l_{3}}$, $\Sigma_{l_{2},l_{4}}$, $\Sigma_{l_{3},l_{4}}$. If we assume $l_{1}\leq l_{2}\leq l_{3}\leq l_{4}$, it can be shown that $\prod_{B\in\pi}cum(X_{i,l}:(i,l)\in B)\leq C(1\vee(l_{2}-l_{1}))^{-r}(1\vee(l_{4}-l_{3}))^{-r},$ (6.5) for some generic positive constant $C$ and any partition $\pi\in\mathcal{S}^{c}$. To see this, notice that there exists at least one $k=1,...,q$, say $k_{0}$, such that $t_{k_{0}}\neq s_{k_{0}}$ and $t_{k_{0}}^{\prime}\neq s_{k_{0}}^{\prime}$. For every $\pi\in\mathcal{S}^{c}$ there exist $B_{1},B_{2}\in\pi$ such that $(t_{k_{0}},l_{1})\in B_{1}$ and $(s_{k_{0}}^{\prime},l_{4})\in B_{2}$. Based on the third property above, all other elements in $B_{1}$ must have the same temporal index as $t_{k_{0}}$. Because of the first property above, all $i_{k}$, $j_{k}$ for $k\neq k_{0}$ and $s_{k_{0}}$ are different from $t_{k_{0}}$. This implies that the spatial indices for all other elements in $B_{1}$ have to be either $l_{3}$ or $l_{4}$, not $l_{1}$ or $l_{2}$. For the same reason, the spatial indices for all other elements in $B_{2}$ can only be either $l_{1}$ or $l_{2}$. Therefore, $cum(X_{i,l}:(i,l)\in B_{1})\in\\{cum(X_{0,l_{1}},X_{0,l_{3}},X_{0,l_{4}}),\Sigma_{l_{1},l_{3}},\Sigma_{l_{1},l_{4}}\\},$ and $cum(X_{i,l}:(i,l)\in B_{2})\in\\{cum(X_{0,l_{1}},X_{0,l_{2}},X_{0,l_{4}}),\Sigma_{l_{1},l_{4}},\Sigma_{l_{2},l_{4}}\\}.$ Under Assumption 5(2), $cum(X_{i,l}:(i,l)\in B_{1})\leq C(1\vee(l_{2}-l_{1}))^{-r}$ and $cum(X_{i,l}:(i,l)\in B_{2})\leq C(1\vee(l_{4}-l_{3}))^{-r}$. The joint cumulants are uniformly bounded for the remaining $B\in\pi\setminus\\{B_{1},B_{2}\\}$. Thus (6.5) is proved.
Furthermore, define $Ind(t_{1},s_{1},...,t_{q},s_{q},t_{1}^{\prime},s_{1}^{\prime},...,t_{q}^{\prime},s_{q}^{\prime})$ as the indicator function of the event that for every $k=1,2,...,q$ with $t_{k}\neq s_{k}$, there exists $k^{\prime}=1,...,q$ such that $t_{k}=t_{k^{\prime}}^{\prime}$ or $t_{k}=s_{k^{\prime}}^{\prime}$. Then $\mathbb{E}\left[\prod_{k=1}^{q}X_{t_{k},l_{1}}X_{s_{k},l_{2}}X_{t_{k}^{\prime},l_{3}}X_{s_{k}^{\prime},l_{4}}\right]\neq 0$ only if $Ind(t_{1},s_{1},...,t_{q},s_{q},t_{1}^{\prime},s_{1}^{\prime},...,t_{q}^{\prime},s_{q}^{\prime})=1.$ It is also easy to see that $\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}Ind(t_{1},s_{1},...,t_{q},s_{q},t_{1}^{\prime},s_{1}^{\prime},...,t_{q}^{\prime},s_{q}^{\prime})=o(n^{4q}).$ Combining all the results above, we have $\displaystyle\mathbb{E}[J(t_{1},s_{1},...,t_{q},s_{q})^{2}]$ $\displaystyle=$ $\displaystyle\frac{1}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}\left[\prod_{k=1}^{q}X_{t_{k},l_{1}}X_{s_{k},l_{2}}X_{t_{k}^{\prime},l_{3}}X_{s_{k}^{\prime},l_{4}}\right],$ $\displaystyle\leq$ $\displaystyle\frac{C}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}\sum_{1\leq i_{1}<\cdots<i_{q}<j_{1}<\cdots<j_{q}\leq n}\sum_{1\leq i_{1}^{\prime}<\cdots<i_{q}^{\prime}<j_{1}^{\prime}<\cdots<j_{q}^{\prime}\leq n}$ $\displaystyle\qquad\sum_{1\leq l_{1}\leq l_{2}\leq l_{3}\leq l_{4}\leq p}Ind(t_{1},s_{1},...,t_{q},s_{q},t_{1}^{\prime},s_{1}^{\prime},...,t_{q}^{\prime},s_{q}^{\prime})(1\vee(l_{2}-l_{1}))^{-r}(1\vee(l_{4}-l_{3}))^{-r}$ $\displaystyle\leq$ $\displaystyle\frac{o(n^{4q})}{{n\choose 2q}^{2}\|\Sigma\|_{q}^{2q}}\left(\sum_{1\leq l_{1}\leq l_{2}\leq p}(1\vee(l_{2}-l_{1}))^{-r}\right)^{2}\lesssim\frac{p^{2}}{\|\Sigma\|_{q}^{2q}}o(1)=o(1)\rightarrow 0,$ where the last equality is because $\|\Sigma\|_{q}^{2q}=(\sum_{i,j=1}^{p}\Sigma_{i,j}^{q})^{2}\gtrsim p^{2}$. This completes the proof of (2), as well as the whole proof. ∎

Table 5 shows additional simulation results for the size of the proposed monitoring statistics for $(n,p)=(200,200)$. The size distortion improves for almost all settings.

Table 5: Size of different monitoring procedures

$(n,p)=(200,200)$ | T1 | T2 | T3
---|---|---|---
size $\alpha=0.1$ | $L_{2}$ | $L_{6}$ | Comb | $L_{2}$ | $L_{6}$ | Comb | $L_{2}$ | $L_{6}$ | Comb
$\rho=0.2$ | 0.104 | 0.072 | 0.074 | 0.097 | 0.072 | 0.073 | 0.102 | 0.071 | 0.073
$\rho=0.5$ | 0.105 | 0.064 | 0.091 | 0.107 | 0.064 | 0.085 | 0.104 | 0.065 | 0.087
$\rho=0.8$ | 0.127 | 0.037 | 0.089 | 0.133 | 0.038 | 0.099 | 0.131 | 0.039 | 0.099

## Acknowledgements

Xiaofeng Shao acknowledges partial support from NSF grants DMS-1807032 and DMS-2014018. Hao Yan acknowledges partial support from NSF grants DMS-1830363 and CMMI-1922739.

## References

* Aminikhanghahi and Cook (2017) Aminikhanghahi, S. and D. J. Cook (2017). A survey of methods for time series change point detection. Knowledge and Information Systems 51(2), 339–367. * Aue et al. (2012) Aue, A., S. Hörmann, L. Horváth, M. Hušková, and J. G. Steinebach (2012). Sequential testing for the stability of high-frequency portfolio betas. Econometric Theory 28(4), 804–837. * Aue and Horváth (2013) Aue, A. and L. Horváth (2013). Structural breaks in time series. Journal of Time Series Analysis 34(1), 1–16. * Brown et al. (1975) Brown, R. L., J.
Durbin, and J. M. Evans (1975). Techniques for testing the constancy of regression relationships over time. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 37(2), 149–163. * Bücher et al. (2019) Bücher, A., J.-D. Fermanian, and I. Kojadinovic (2019). Combining cumulative sum change-point detection tests for assessing the stationarity of univariate time series. Journal of Time Series Analysis 40(1), 124–150. * Chen and Qin (2010) Chen, S. and Y. Qin (2010). A two sample test for high dimensional data with application to gene-set testing. The Annals of Statistics 38, 808–835. * Cho and Fryzlewicz (2015) Cho, H. and P. Fryzlewicz (2015). Multiple-change-point detection for high dimensional time series via sparsified binary segmentation. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 77(2), 475–507. * Chu et al. (1996) Chu, C.-S. J., M. Stinchcombe, and H. White (1996). Monitoring structural change. Econometrica: Journal of the Econometric Society 64, 1045–1065. * Dette and Gösmann (2019) Dette, H. and J. Gösmann (2019). A likelihood ratio approach to sequential change point detection for a general class of parameters. Journal of the American Statistical Association, 1–17. * Enikeeva and Harchaoui (2019) Enikeeva, F. and Z. Harchaoui (2019). High-dimensional change-point detection under sparse alternatives. The Annals of Statistics 47(4), 2051–2079. * Fremdt (2015) Fremdt, S. (2015). Page’s sequential procedure for change-point detection in time series regression. Statistics 49(1), 128–155. * Gösmann et al. (2020) Gösmann, J., T. Kley, and H. Dette (2020). A new approach for open-end sequential change point monitoring. Journal of Time Series Analysis, to appear. * He et al. (2018) He, Y., G. Xu, C. Wu, and W. Pan (2018). Asymptotically independent u-statistics in high-dimensional testing. arXiv preprint arXiv:1809.00411. * Horváth and Hušková (2012) Horváth, L. and M. Hušková (2012). Change-point detection in panel data. Journal of Time Series Analysis 33(4), 631–648. * Horváth et al. (2004) Horváth, L., M. Hušková, P. Kokoszka, and J. Steinebach (2004). Monitoring changes in linear models. Journal of Statistical Planning and Inference 126(1), 225–251. * Jirak (2015) Jirak, M. (2015). Uniform change point tests in high dimension. The Annals of Statistics 43(6), 2451–2483. * Kirch et al. (2015) Kirch, C., B. Muhsal, and H. Ombao (2015). Detection of changes in multivariate time series with application to eeg data. Journal of the American Statistical Association 110(511), 1197–1216. * Lai (1995) Lai, T. L. (1995). Sequential changepoint detection in quality control and dynamical systems. Journal of Royal Statistical Society, Series B (Statistical Methodology) 57, 613–658. * Lai (2001) Lai, T. L. (2001). Sequential analysis: Some classical problems and new challenges. Statistica Sinica 11, 303–408. * Lei et al. (2010) Lei, Y., Z. Zhang, and J. Jin (2010). Automatic tonnage monitoring for missing part detection in multi-operation forging processes. Journal of Manufacturing Science and Engineering 132(5). * Lévy-Leduc and Roueff (2009) Lévy-Leduc, C. and F. Roueff (2009). Detection and localization of change-points in high-dimensional network traffic data. The Annals of Applied Statistics 3(2), 637–662. * Li (2020) Li, J. (2020). Efficient global monitoring statistics for high-dimensional data. Quality Reliability Engineering International 36, 18–32. * Liu et al. (2019) Liu, K., R. Zhang, and Y. Mei (2019). 
Scalable sum-shrinkage schemes for distributed monitoring large-scale data streams. Statistica Sinica 29, 1–22. * Lorden (1971) Lorden, G. (1971). Procedures for reacting to a change in distribution. Annals of Mathematical Statistics 42, 1897–1908. * MacNeill (1974) MacNeill, I. B. (1974). Tests for change of parameter at unknown times and distributions of some related functionals on brownian motion. The Annals of Statistics 2(5), 950–962. * Matteson and James (2014) Matteson, D. S. and N. A. James (2014). A nonparametric approach for multiple change point analysis of multivariate data. Journal of the American Statistical Association 109(505), 334–345. * Mei (2010) Mei, Y. (2010). Efficient scalable schemes for monitoring a large number of data streams. Biometrika 97(2), 419–433. * Page (1954) Page, E. S. (1954). Continuous inspection schemes. Biometrika 41(1/2), 100–115. * Perron (2006) Perron, P. (2006). Dealing with structural breaks. Palgrave Handbook of Econometrics Vol. 1: Econometric Theory, K. Patterson and T.C. Mills (eds.), Palgrave Macmillan, 278–352. * Polunchenko and Tartakovsky (2012) Polunchenko, A. S. and A. G. Tartakovsky (2012). State-of-the-art in sequential change-point detection. Methodology and computing in applied probability 14(3), 649–684. * Shao (2010) Shao, X. (2010). A self-normalized approach to confidence interval construction in time series. Journal of Royal Statistical Society, Series B 72, 343–366. * Shao (2015) Shao, X. (2015). Self-normalization for time series: a review of recent developments. Journal of the American Statistical Association 110, 1797–1817. * Shao and Wu (2007) Shao, X. and W. B. Wu (2007). Asymptotic spectral theory for nonlinear time series. The Annals of Statistics 35(4), 1773–1801. * Shao and Zhang (2010) Shao, X. and X. Zhang (2010). Testing for change points in time series. Journal of the American Statistical Association 105, 1228–1240. * Wald (1945) Wald, A. (1945). Sequential tests of statistical hypotheses. Annals of Mathematical Statistics 16, 117–186. * Wang and Shao (2020) Wang, R. and X. Shao (2020). Dating the break in high-dimensional data. Available at https://arxiv.org/pdf/2002.04115.pdf. * Wang et al. (2019) Wang, R., S. Volgushev, and X. Shao (2019). Inference for change points in high dimensional data. arXiv preprint arXiv:1905.08446. * Wang and Samworth (2018) Wang, T. and R. J. Samworth (2018). High dimensional change point estimation via sparse projection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 80(1), 57–83. * Wang and Mei (2015) Wang, Y. and Y. Mei (2015). Large-scale multi-stream quickest change detection via shrinkage post-change estimation. IEEE Transactions on Information Theory 61(12), 6926–6938. * Wied and Galeano (2013) Wied, D. and P. Galeano (2013). Monitoring correlation change in a sequence of random variables. Journal of Statistical Planning and Inference 143(1), 186–196. * Wu (2005) Wu, W. B. (2005). Nonlinear system theory: Another look at dependence. Proceedings of the National Academy of Sciences 102(40), 14150–14154. * Wu and Shao (2004) Wu, W. B. and X. Shao (2004). Limit theorems for iterated random functions. Journal of Applied Probability 41(2), 425–436. * Xie and Siegmund (2013) Xie, Y. and D. Siegmund (2013). Sequential multi-sensor change-point detection. The Annals of Statistics 41(2), 670–692. * Yan et al. (2014) Yan, H., K. Paynabar, and J. Shi (2014). Image-based process monitoring using low-rank tensor decomposition. 
IEEE Transactions on Automation Science and Engineering 12(1), 216–227. * Yan et al. (2018) Yan, H., K. Paynabar, and J. Shi (2018). Real-time monitoring of high-dimensional functional data streams via spatio-temporal smooth sparse decomposition. Technometrics 60(2), 181–197. * Yu and Chen (2017) Yu, M. and X. Chen (2017). Finite sample change point inference and identification for high-dimensional mean vectors. arXiv preprint arXiv:1711.08747. * Yu and Chen (2019) Yu, M. and X. Chen (2019). A robust bootstrap change point test for high-dimensional location parameter. arXiv preprint arXiv:1904.03372. * Zhang et al. (2020) Zhang, Y., R. Wang, and X. Shao (2020). Adaptive change point inference for high-dimensional data. Preprint. * Zou et al. (2015) Zou, C., Z. Wang, X. Zi, and W. Jiang (2015). An efficient online monitoring method for high-dimensional data streams. Technometrics 57(3), 374–387. Teng Wu, Department of Statistics, University of Illinois at Urbana-Champaign E-mail<EMAIL_ADDRESS> Runmin Wang, Department of Statistical Science, Southern Methodist University E-mail<EMAIL_ADDRESS>Hao Yan, School of Computing Informatics & Decision Systems Engineering, Arizona State University E-mail<EMAIL_ADDRESS>Xiaofeng Shao, Department of Statistics, University of Illinois at Urbana-Champaign E-mail<EMAIL_ADDRESS>
# ZeRO-Offload: Democratizing Billion-Scale Model Training

Jie Ren∗, Samyam Rajbhandari†, Reza Yazdani Aminabadi†, Olatunji Ruwase† Shuangyan Yang∗, Minjia Zhang†, Dong Li∗, Yuxiong He† †Microsoft, ∗University of California, Merced {jren6, syang127<EMAIL_ADDRESS>{samyamr, yazdani.reza, olruwase, minjiaz<EMAIL_ADDRESS>

###### Abstract

Large-scale model training has been a playing ground for a limited few requiring complex model refactoring and access to prohibitively expensive GPU clusters. ZeRO-Offload changes the large model training landscape by making large model training accessible to nearly everyone. It can train models with over 13 billion parameters on a single GPU, a 10x increase in size compared to popular frameworks such as PyTorch, and it does so without requiring any model change from the data scientists or sacrificing computational efficiency. ZeRO-Offload enables large model training by offloading data and compute to CPU. To preserve compute efficiency, it is designed to minimize the data movement to/from GPU and reduce CPU compute time while maximizing memory savings on GPU. As a result, ZeRO-Offload can achieve 40 TFlops/GPU on a single NVIDIA V100 GPU for a 10B parameter model, compared to 30 TFlops using PyTorch alone for a 1.4B parameter model, the largest that can be trained without running out of memory. ZeRO-Offload is also designed to scale on multiple GPUs when available, offering near-linear speedup on up to 128 GPUs. Additionally, it can work together with model parallelism to train models with over 70 billion parameters on a single DGX-2 box, a 4.5x increase in model size compared to using model parallelism alone. By combining compute and memory efficiency with ease-of-use, ZeRO-Offload democratizes large-scale model training, making it accessible even to data scientists with access to just a single GPU.

## 1 Introduction

Since the advent of attention-based deep learning (DL) models in 2017, we have seen an exponential growth in DL model size, fueled by the substantial quality gains that these attention-based models can offer with the increase in the number of parameters. For example, the largest language model in the literature had less than 100M parameters in 2017; it grew to over 300M with BERT [6] in 2018, increasing to tens of billions in 2019 with models such as GPT-2 [3], T5 [20], Megatron-LM [28] and Turing-NLG [25]. Today, the largest language model, GPT-3 [2], has a staggering 175B parameters. With the three orders of magnitude growth in model size since 2017, model accuracy continues to improve with model size [12]. Recent studies in fact show that larger models are more resource-efficient to train than smaller ones [12] for a given accuracy target. As a result, we expect the model size to continue growing in the future. However, accessibility to large model training is severely limited by the nature of state-of-the-art system technologies. Those technologies make entry into the large model training space prohibitively expensive. To be more specific, distributed parallel DL training technologies such as pipeline parallelism [10], model parallelism [28], and ZeRO [21] (Zero Redundancy Optimizer) allow transcending the memory boundaries of a single GPU/accelerator device by splitting the model states (parameters, gradients and optimizer states) across multiple GPU devices, enabling massive models that would otherwise simply not fit in a single GPU memory.
All record-breaking large models such as GPT-2, Megatron-LM, Turing-NLG, and GPT-3 were trained using a combination of the aforementioned technologies. However, all of these DL parallel technologies require enough GPU devices that the aggregated GPU memory can hold the model states required for training. For example, training a 10B parameter model efficiently requires a DGX-2-equivalent node with 16 NVIDIA V100 cards, which costs over $100K, beyond the reach of many data scientists and even many academic and industrial institutions.

Heterogeneous DL training is a promising approach to reduce the GPU memory requirement by exploiting CPU memory. Many efforts have been made in this direction [24, 33, 11, 9, 34, 8, 17, 23, 32]. Nearly all of them target CNN-based models, where activation memory is the memory bottleneck and the model size is fairly small (less than 500M). However, the primary memory bottleneck for recent attention-based large model training is the model states, not activation memory, and the literature lacks studies of these workloads for heterogeneous DL training. Additionally, existing efforts on heterogeneous training are limited in two major ways: i) nearly all of them exploit CPU memory but not CPU compute, which we show can be used to significantly reduce the CPU-GPU communication overhead, and ii) they are mostly designed for and evaluated on a single GPU [11, 9, 23, 34], without a clear path to scaling efficiently on multiple GPUs, which is crucial for large model training.

Addressing the aforementioned limitations, we attempt to democratize large model training by developing ZeRO-Offload, a novel heterogeneous DL training technology designed specifically for large model training. ZeRO-Offload exploits both CPU memory and compute for offloading, while offering a clear path towards efficiently scaling on multiple GPUs by working with ZeRO-powered data parallelism [21]. Additionally, our first-principle analysis shows that ZeRO-Offload provides the unique optimal solution for maximizing memory savings while minimizing communication overhead and CPU compute overhead in large model training.

ZeRO-Offload is designed around three main pillars: i) efficiency, ii) scalability, and iii) usability.

Efficiency: The offload strategy is designed with the goal of achieving compute efficiency comparable to state-of-the-art non-offload strategies, but for significantly larger models. To achieve this goal, we rely on first-principle analysis to identify a _unique optimal_ computation and data partitioning strategy between CPU and GPU devices. This strategy is optimal in three key aspects: i) it requires orders-of-magnitude less computation on CPU than on GPU, preventing the CPU compute from becoming a performance bottleneck; ii) it minimizes the communication volume between CPU and GPU, preventing communication from becoming a bottleneck; and iii) it provably maximizes memory savings on GPU while achieving minimum communication volume. Our analysis shows that to be optimal in the aforementioned regards, we must offload the gradients, optimizer states, and optimizer computation to CPU, while keeping the parameters and the forward and backward computation on GPU.
This strategy enables a 10x increase in model size with minimum communication and limited CPU computation, which allows us to train 13B parameters on a single NVIDIA V100 GPU at 40 TFLOPS, compared to 30 TFLOPS on the same GPU with 1.2B parameters, the largest model that can be trained without any CPU offloading.

Offloading the optimizer computation requires the CPU to perform $O(M)$ computation compared to $O(MB)$ on GPU, where $M$ and $B$ are the model size and batch size, respectively. In most cases the batch size is large and CPU computation is not a bottleneck, but for small batch sizes the CPU compute can become one. We address this using two optimizations: i) an efficient _CPU optimizer_ that is up to 6x faster than the state of the art, and ii) a one-step _delayed parameter update_ that overlaps the CPU optimizer step with GPU compute while preserving accuracy. Together, they preserve efficiency for ZeRO-Offload even with small batch sizes.

Scalability: Good scalability is crucial to take advantage of multiple GPUs that may be available to some data scientists. In the DL community, data parallelism is the de facto standard for scaling DL training to multiple GPUs [5, 35, 26]. However, it is not designed to work with heterogeneous training and presents scalability challenges because of the replication of data and computation in data parallel training. Data parallel training replicates all the model states, such as optimizer states, parameters, and gradients, and it also replicates the optimizer computation on each GPU. Therefore, offloading model states or optimizer computation to CPU in combination with data parallelism results in significant replication of communication and CPU compute: it increases the CPU memory requirement proportionally to the data parallelism degree while limiting throughput scalability due to the increased communication.

To address these challenges, ZeRO-Offload combines our offload strategy with ZeRO-powered data parallelism [21] instead of traditional data parallelism. The symbiosis allows ZeRO-Offload to maintain a single copy of the optimizer states in CPU memory regardless of the data parallel degree. Furthermore, it keeps the aggregate communication volume between GPU and CPU, as well as the aggregate CPU computation, constant regardless of the data parallelism degree, allowing ZeRO-Offload to effectively utilize the linear increase in CPU compute as the data parallelism degree grows. As a result, ZeRO-Offload achieves excellent scalability on up to 128 GPUs. In addition to working with ZeRO-powered data parallelism, ZeRO-Offload can be combined with model parallelism [27, 28] to achieve higher memory savings when multiple GPUs are available.

Usability: ZeRO-Offload is available as part of an open-source PyTorch library, DeepSpeed (www.deepspeed.ai). Unlike most strategies discussed in Section 2, ZeRO-Offload does not require model refactoring to work. In fact, PyTorch users can enable ZeRO-Offload with a few lines of code change to their existing training pipeline, as shown in Figure 1, making it easy to train 10x larger models. For a detailed tutorial, please see https://www.deepspeed.ai/tutorials/zero-offload/.

Figure 1: ZeRO-Offload can be enabled with a few lines of change. The code on the left shows a standard training pipeline, while the right shows the same pipeline with ZeRO-Offload enabled.
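Concretely, the "few lines of change" in Figure 1 amount to wrapping an existing model with DeepSpeed. The sketch below shows one plausible shape of this; the configuration keys (`cpu_offload` under `zero_optimization`) follow early DeepSpeed releases and may differ in later versions, so treat it as an illustrative sketch and defer to the tutorial linked above for the current API.

```python
import torch
import deepspeed

# Hypothetical config enabling ZeRO stage 2 with CPU offloading.
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,           # partition optimizer states and gradients
        "cpu_offload": True,  # offload them, and the optimizer step, to CPU
    },
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

model = torch.nn.Linear(512, 512)  # stand-in for an existing model
data_loader = [(torch.randn(8, 512), torch.randn(8, 512))]  # toy data

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config_params=ds_config)

for x, y in data_loader:                 # the loop body barely changes
    loss = torch.nn.functional.mse_loss(engine(x), y)
    engine.backward(loss)                # replaces loss.backward()
    engine.step()                        # replaces optimizer.step()
```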
Contributions: Our contributions are as follows:

* • A unique optimal offload strategy for heterogeneous large model training on GPU + CPU systems that enables 10x larger models on a single GPU without sacrificing efficiency (Sec. 3, Sec. 4.1).
* • A highly scalable multi-GPU design through i) a symbiotic combination of our offload strategy with ZeRO-powered data parallelism (Sec. 4.2), allowing ZeRO-Offload to achieve near-linear scalability, and ii) seamless integration with model-parallel training [28], enabling even larger models than using ZeRO-Offload or model parallelism alone (Sec. 4.2).
* • Optimized CPU execution through i) a high-performance CPU Adam optimizer design and implementation offering over 6x speedup over state-of-the-art Adam implementations (Sec. 5.1), and ii) one-step delayed parameter update to overlap the CPU parameter update with the GPU forward and backward pass (Sec. 5.2).
* • An open-source implementation of ZeRO-Offload in PyTorch.
* • Extensive evaluation demonstrating i) _Model scale_: a 10x increase in model size, with up to 13B parameters on a single GPU, and a 4x increase over model parallelism, with up to 70B parameters on a DGX-2 node; ii) _Efficiency_: over 40 TFLOPS for a 10B parameter model on a single NVIDIA V100, compared to 30 TFLOPS on the same GPU with 1.2B parameters, the largest model that can be trained without any CPU offloading; iii) _Scalability_: near-perfect linear scalability for a 10B parameter model on up to 128 GPUs; and iv) CPU overhead reduction with our Adam implementation, offering 6x speedup over the PyTorch optimizer and up to 1.5x improvement in end-to-end throughput with the delayed parameter update optimization (Sec. 6).

## 2 Background and Related Work

Memory consumption in large model training. The full spectrum of memory consumption during DNN model training can be classified into two parts: i) model states and ii) residual states [21]. Model states include parameters, gradients, and optimizer states (such as momentum and variances in Adam [13]); residual states include activations, temporary buffers, and unusable fragmented memory. Model states are the primary source of the memory bottleneck in large model training. Consider the memory consumption due to model states for large transformer models such as Megatron-LM (8 billion parameters) [28], T5 (11 billion) [20], and Turing-NLG (17.2 billion) [25]. They are trained with float-16 mixed precision training [16] and the Adam optimizer [13]. Mixed precision training often keeps two copies of the parameters, one in float-16 (fp16) and the other in float-32 (fp32), and the gradients are stored in fp16. In addition to the parameters and gradients, the Adam optimizer keeps track of the momentum and variance of the gradients, and these optimizer states are stored in fp32. Therefore, training a model in mixed precision with the Adam optimizer requires at least 2 bytes of memory for each fp16 parameter and each fp16 gradient, and 4 bytes each for every fp32 parameter and for the momentum and variance of every gradient. In total, a model with $M$ parameters requires $16\times M$ bytes of memory. The model states for Megatron-LM, T5, and Turing-NLG therefore require 128 GB, 176 GB, and 284 GB, respectively, which are clearly beyond the memory capacity of even the current flagship NVIDIA A100 GPU with 80 GB of memory. A significant amount of work has been done in recent years to enable large model training, which requires more memory than what is available on a single GPU to fit these model and residual states.
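As a sanity check on this accounting, the short sketch below recomputes the quoted model-state footprints from the 16-bytes-per-parameter rule, with the parameter counts taken from the text:

```python
# Mixed-precision + Adam memory accounting, per parameter:
# fp16 weight (2 B) + fp16 gradient (2 B) + fp32 master weight (4 B)
# + fp32 momentum (4 B) + fp32 variance (4 B) = 16 bytes.
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # = 16

def model_state_gb(num_params: float) -> float:
    """Memory (in GB) needed just for the model states."""
    return num_params * BYTES_PER_PARAM / 1e9

for name, params in [("Megatron-LM", 8e9), ("T5", 11e9), ("Turing-NLG", 17.2e9)]:
    print(f"{name}: {model_state_gb(params):.0f} GB")
# Prints 128, 176, and 275 GB; the text quotes 284 GB for Turing-NLG,
# presumably computed from a slightly larger parameter count.
```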
Recent efforts to enable large model training can be classified broadly into two categories: i) scale-out training and ii) scale-up training. We discuss them below.

Scale-out large model training. Scale-out training uses the aggregate memory of multiple GPUs to satisfy the memory requirement for large model training. Two prominent examples of scale-out training are model parallelism [5, 28] and pipeline parallelism [10, 7], both of which partition the model states and the residual states across multiple GPUs. Model parallelism [5, 28] partitions the model vertically and distributes the model partitions to multiple GPU devices in order to train large models. Pipeline parallelism [10, 7], on the other hand, parallelizes model training by partitioning the model horizontally across layers. Both of these approaches must change the user model to work, which can limit usability.

A recent work, ZeRO [21], provides an alternative to model and pipeline parallelism for training large models. ZeRO splits the training batch across multiple GPUs similar to data parallel training [5, 35, 26], but unlike data parallel training, which replicates all the model states on each GPU, ZeRO partitions them across all GPUs and uses communication collectives to gather individual parameters as needed during training. ZeRO does not require changes to the user model, making it more generic than model or pipeline parallel training, and it also offers better compute efficiency and scalability. Despite the ability of model parallelism, pipeline parallelism, and ZeRO to train large models, they all require multiple GPUs such that the aggregate GPU memory can hold the model and residual states. In contrast, ZeRO-Offload is designed to fit a larger model by offloading model states to CPU memory, and it can train a 10x larger model on a single GPU without sacrificing efficiency. When multiple GPUs are available, ZeRO-Offload works together with ZeRO to offer excellent scalability, or in conjunction with model parallelism to fit even larger model sizes than are possible with ZeRO-Offload or model parallelism alone.

Scale-up large model training. Existing work scales up model size on a single GPU through three major approaches. The first trades computation for memory, saving activation memory (residual memory) by recomputing activations from checkpoints [4]. The second uses compression techniques, such as low or mixed precision [16], saving on both model states and activations. The third uses external memory, such as CPU memory, as an extension of GPU memory to increase memory capacity during training [24, 33, 8, 9, 17, 11, 23]. Our work, ZeRO-Offload, falls under the third approach. Unlike ZeRO-Offload, the above efforts only offload data to the CPU, not compute, and they target much smaller models. In contrast, a recent work called L2L [18] can enable multi-billion parameter training by managing memory usage in the GPU layer by layer. In particular, L2L synchronously moves the tensors needed by the upcoming layer into GPU memory for computation and keeps the remaining tensors in CPU memory to save memory. In comparison to ZeRO-Offload, it offers limited efficiency due to extra communication overhead, does not offer a way to scale out across devices, and requires model refactoring, making it difficult to use.

ZeRO-powered data parallel training. ZeRO-Offload works with ZeRO to scale DL training to multiple GPUs.
ZeRO has three stages, ZeRO-1, ZeRO-2, and ZeRO-3, corresponding to the partitioning of the three different model states: optimizer states, gradients, and parameters, respectively. ZeRO-1 partitions the optimizer states only; ZeRO-2 partitions the gradients in addition to the optimizer states; and ZeRO-3 partitions all three model states. ZeRO-Offload works symbiotically with ZeRO-2, so we discuss it in more detail.

In ZeRO-2, each GPU stores a replica of all the parameters, but only updates a mutually exclusive portion of them during the parameter update at the end of each training step. As each GPU only updates a portion of the parameters, it only stores the optimizer states and gradients required to make that update. After the update, each GPU sends its portion of the updated parameters to all the other GPUs using an all-gather communication collective. The ZeRO-2 computation and communication schedule is as follows. During the forward pass, each GPU computes the loss with respect to a different mini-batch. During the backward propagation, as each gradient is computed, it is averaged using a reduce operator at the GPU (or GPUs) that owns the gradient or part of it. After the backward pass, each GPU updates its portion of the parameters and optimizer states using the averaged gradients corresponding to that portion. After this, an all-gather is performed to receive the rest of the parameter updates computed on the other GPUs.

## 3 Unique Optimal Offload Strategy

ZeRO-Offload is designed to enable efficient large model training on a single GPU or multiple GPUs by offloading some of the model states from GPU to CPU memory during training. As discussed in Sec. 2, the model states (parameters, gradients, and optimizer states) are the primary source of the memory bottleneck in large model training. By offloading some of these model states to CPU, ZeRO-Offload can enable the training of significantly larger models. (ZeRO-Offload only offloads model states. Offloading secondary sources of memory consumption, such as activation memory, is beyond the scope of our offload strategy; given that they are significantly smaller than the model states, we ignore them for the purpose of our analysis. Furthermore, the first and second approaches described in Sec. 2 can be used in conjunction with ZeRO-Offload to reduce activation memory.)

However, identifying the optimal offloading strategy is non-trivial. There are numerous ways to offload model states to CPU memory, each with a different trade-off in terms of CPU computation and GPU-CPU communication, both of which can limit training efficiency. To identify the optimal offload strategy, ZeRO-Offload models DL training as a data-flow graph and uses first-principle analysis to partition this graph between CPU and GPU devices. ZeRO-Offload partitions the graph in a way that is optimal in three key aspects: i) it requires orders-of-magnitude less computation on the CPU than on the GPU, which prevents the CPU from becoming a performance bottleneck; ii) it guarantees the minimum communication volume between CPU and GPU memory; and iii) it provably maximizes the memory savings while achieving that minimum communication volume. In fact, ZeRO-Offload achieves training efficiency comparable to non-offload training, and it is _uniquely optimal_, meaning no other solution can offer better memory savings without increasing the communication volume or the CPU computation. In this section, we discuss the derivation of our unique optimal offload strategy.
Our strategy is specifically designed for _mixed precision training with the Adam optimizer_, the de facto training recipe for large model training.

### 3.1 DL Training as a Data-Flow Graph

The DL training workload can be represented as a weighted directed graph of data and computation, as shown in Figure 2, where the circular nodes represent model states (parameter16, gradient16, parameter32, momentum32, variance32) and the rectangular nodes represent computation (forward, backward, parameter update). The edges in the graph represent the data flow between nodes, and the weight of an edge is the total data volume in bytes that flows through it during a training iteration. For a model with $M$ parameters, the weight of an edge is $2M$ where the source node produces fp16 model states, and $4M$ where the source node produces fp32 model states.

An offload strategy between GPU and CPU can be represented as a two-way partitioning of this graph: the computation nodes in a partition are executed on the device that owns the partition, and the data nodes in a partition are stored on the device that owns it. The total data volume that must be communicated between the GPU and CPU is given by the weight of the edges running across the two partitions. There are numerous ways to partition this graph. In the following sections, we use first principles to simplify the data-flow graph and reduce the number of possible choices based on three efficiency metrics: i) CPU computation overhead, ii) communication overhead, and iii) memory savings.

### 3.2 Limiting CPU Computation

The CPU computation throughput is multiple orders of magnitude slower than the GPU computation throughput. Therefore, offloading a large computation graph to CPU severely limits training efficiency, and we must avoid offloading compute-intensive components to the CPU. The compute complexity of DL training per iteration is generally $O(MB)$, where $M$ is the model size and $B$ is the effective batch size. To prevent CPU computation from becoming a bottleneck, only those computations with a compute complexity lower than $O(MB)$ should be offloaded to the CPU. This means that the forward propagation and backward propagation, both of which have a compute complexity of $O(MB)$, must be done on the GPU, while the remaining computations, such as norm calculations and weight updates, which have a complexity of $O(M)$, may be offloaded to the CPU. Based on this observation, we fuse the forward and backward nodes in our data-flow graph into a single super-node (FWD-BWD) and assign it to the GPU.

### 3.3 Minimizing Communication Volume

Figure 2: The data-flow graph of fully connected neural networks with $M$ parameters. We use activation checkpointing to reduce activation memory and avoid activation migration between CPU and GPU.

The CPU memory bandwidth is at least an order of magnitude faster than the PCI-E bandwidth between CPU and GPU, while the GPU memory is another order of magnitude faster than even the CPU memory. Therefore, we must minimize the communication volume between CPU and GPU memory to prevent the PCI-E bandwidth from becoming a training performance bottleneck. To do so, we must first identify the theoretical minimum communication volume for a model state offload strategy.
The minimum communication volume for any model state offload strategy is $4M$. (Note that it is possible to reduce the communication volume further by offloading only partial model states. For simplicity, we assume that offloading a model state means offloading the entire state; our analysis of memory savings per communication volume still holds if we offload partial model states.) Note that after fusing the forward and backward passes into a single super-node as discussed in Sec. 3.2, each node in our data-flow graph is part of a cycle. Therefore, any partitioning of this graph requires cutting at least two edges, each with an edge weight of at least $2M$, resulting in a total communication volume of at least $4M$. If we choose to limit the communication volume to this bare minimum, we can greatly simplify our data-flow graph and reduce the number of partitioning strategies to a handful:

Creating the fp32 super-node: Notice that any partitioning strategy that does not co-locate the fp32 model states with their producer and consumer nodes cannot achieve the minimum communication volume of $4M$. Such a partition must cut at least one edge with a weight of $4M$ and another with a weight of at least $2M$, resulting in a communication volume of at least $6M$. Therefore, to achieve the minimum communication volume, all offload strategies must co-locate the fp32 model states (momentum32, variance32, and p32) with their producer and consumer operators, i.e., the _Param Update_ and _float2half_ computations. This constraint allows us to treat all the aforementioned fp32 data and compute nodes in the data-flow graph as a single super-node that we refer to as _Update Super_. The reduced data-flow graph, shown in Figure 2, consists of only four nodes: the _FWD-BWD Super_ node, the _p16_ data node, the _g16_ data node, and the _Update Super_ node.

p16 assignment: To achieve the minimum communication volume, _p16_ must be co-located with _FWD-BWD Super_, because the edge weight between these two nodes is $4M$; separating them would increase the communication volume to $6M$ ($4M+2M$). Since we have already assigned _FWD-BWD Super_ to the GPU to limit computation on the CPU, _p16_ must also be assigned to the GPU.

### 3.4 Maximizing Memory Savings

After simplifying the data-flow graph to minimize communication volume, only _g16_ and _Update Super_ remain to be assigned. At this point, all remaining partitions result in minimum communication volume, so we can prune the choices further to maximize the memory savings on the GPU. Table 1 shows the memory savings of all valid partitioning strategies that minimize the communication volume. The maximum memory saving of 8x is achieved by offloading both _g16_ and _Update Super_ to CPU.

Table 1: Memory savings for offload strategies that minimize communication volume, compared to the baseline.

FWD-BWD | p16 | g16 | Update | Memory | Reduction
---|---|---|---|---|---
gpu | gpu | gpu | gpu | 16M | 1x (baseline)
gpu | gpu | cpu | gpu | 14M | 1.14x
gpu | gpu | gpu | cpu | 4M | 4x
gpu | gpu | cpu | cpu | 2M | 8x

### 3.5 A Unique and Optimal Offload Strategy

ZeRO-Offload allocates all the fp32 model states along with the fp16 gradients in CPU memory, and it also computes the parameter updates on the CPU. The fp16 parameters are kept on the GPU, and the forward and backward computations are done on the GPU as well.
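The entries in Table 1 follow mechanically from the per-state sizes (2M bytes for each fp16 state, 4M for each fp32 state). A minimal sketch that enumerates the four minimum-communication assignments and reproduces the table:

```python
# Per-state sizes in units of M bytes (M = parameter count):
# fp16 states cost 2M each, fp32 states 4M each.
P16, G16, UPDATE_SUPER = 2, 2, 4 + 4 + 4  # p16, g16, {p32, m32, v32}
BASELINE = P16 + G16 + UPDATE_SUPER        # 16M: everything on GPU

# FWD-BWD and p16 are pinned to the GPU (Secs. 3.2-3.3); only g16 and
# Update Super remain free, giving exactly the four rows of Table 1.
for g16_dev in ("gpu", "cpu"):
    for upd_dev in ("gpu", "cpu"):
        gpu_mem = P16 + (G16 if g16_dev == "gpu" else 0) \
                      + (UPDATE_SUPER if upd_dev == "gpu" else 0)
        print(f"g16={g16_dev}, update={upd_dev}: "
              f"{gpu_mem}M on GPU, {BASELINE / gpu_mem:.2f}x reduction")
```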
We arrive at this offload strategy by simplifying our data-flow graph and eliminating all other partitioning strategies, as they fail to limit CPU computation, minimize communication volume, or maximize memory savings. Therefore, ZeRO-Offload is not only optimal in terms of the aforementioned metrics, it is also unique: no other strategy can offer more memory savings than ZeRO-Offload without increasing the compute complexity on the CPU or incurring additional GPU-CPU communication volume.

## 4 ZeRO-Offload Schedule

In this section, we discuss the concrete computation and communication schedule for implementing ZeRO-Offload on a single-GPU system based on our offload strategy. We then show how we extend this schedule to work effectively on multi-GPU systems by combining our offload strategy with ZeRO data parallelism and model parallelism.

Figure 3: ZeRO-Offload training process on a single GPU.

### 4.1 Single GPU Schedule

As discussed in Sec. 3, ZeRO-Offload partitions the data such that the fp16 parameters are stored on the GPU, while the fp16 gradients and all the optimizer states, such as the fp32 momentum, variance, and parameters, are stored on the CPU.

During training, we begin by computing the loss via forward propagation. Since the fp16 parameters are already present on the GPU, no CPU communication is required for this part of the computation. During the backward propagation, the gradients of different parameters are computed at different points in the backward schedule. ZeRO-Offload can transfer these gradients to CPU memory individually or in small groups immediately after they are computed. Therefore, only a small amount of GPU memory is required to temporarily hold the gradients before they are transferred to CPU memory. Furthermore, each gradient transfer can be overlapped with the backpropagation on the remainder of the backward graph, allowing ZeRO-Offload to hide a significant portion of the communication cost. After the backward propagation, ZeRO-Offload updates the fp32 parameters and the remaining optimizer states (such as momentum and variance) directly on the CPU, and copies the updated fp32 parameters from CPU memory to the fp16 parameters in GPU memory. Figure 3 shows the computation and communication in each step of ZeRO-Offload diagrammatically, Figure 5 gives the concrete schedule as pseudo-code, and a simplified code sketch follows at the end of this section.

Figure 4: ZeRO-Offload data placement with multiple GPUs.

### 4.2 Scaling to Multi-GPUs

ZeRO-Offload in its entirety is a symbiotic integration of the offload strategy described in Sec. 3 and the ZeRO-powered data parallelism discussed in Sec. 2, which allows ZeRO-Offload to scale to hundreds of GPUs efficiently. ZeRO-Offload preserves the model state partitioning strategy of ZeRO Stage-2 (optimizer state and gradient partitioning), while offloading the partitioned gradients, optimizer states, and the corresponding parameter updates to the CPU. The key benefit of partitioning before offloading is that, on systems with more than one GPU, each data parallel process is only responsible for updating a subset of the parameters. The aggregated communication volume from all the data parallel GPUs to the CPU remains constant, and the CPU resources are used in parallel to jointly compute a single weight update. As a result, the total CPU update time decreases with increased data parallelism, since the CPU compute resources increase linearly with the number of compute nodes.
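For concreteness, here is a minimal PyTorch-style sketch of the per-step schedule of Sec. 4.1 on a single GPU; in the multi-GPU schedule described in this section, each GPU runs the same steps on its own partition. This is a toy rendering under the stated assumptions, not the DeepSpeed implementation: gradient transfers are shown synchronously, whereas the real schedule streams them out in groups, overlapped with the remaining backward pass. A CUDA device is assumed.

```python
import torch

# fp16 parameters live on the GPU; fp32 master weights and the Adam state
# (momentum, variance) live on the CPU, where the update is computed.
model = torch.nn.Linear(1024, 1024).half().cuda()
master = [p.detach().float().cpu() for p in model.parameters()]  # p32
cpu_opt = torch.optim.Adam(master)  # m32/v32 allocated in CPU memory

def zero_offload_step(x, y):
    loss = torch.nn.functional.mse_loss(model(x), y)  # forward on GPU
    loss.backward()                                   # backward on GPU
    # Offload fp16 gradients to CPU (the real schedule streams them out
    # group by group, overlapped with the rest of the backward pass).
    for p16, p32 in zip(model.parameters(), master):
        p32.grad = p16.grad.detach().float().cpu()
        p16.grad = None  # free the GPU gradient buffer immediately
    cpu_opt.step()       # fp32 parameter update on CPU
    cpu_opt.zero_grad()
    # float2half: copy the updated fp32 master weights back into the fp16
    # GPU parameters (copy_ casts dtype and moves across devices).
    with torch.no_grad():
        for p16, p32 in zip(model.parameters(), master):
            p16.copy_(p32)
    return loss
```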
This partitioned offload allows ZeRO-Offload to achieve very good scalability, as the overhead of communication across GPUs is offset by the reduction in CPU optimizer step time.

ZeRO-Offload partitions the gradients and optimizer states among the different GPUs, and each GPU offloads the partition it owns to CPU memory, where it stays for the entire training. During the backward propagation, gradients are computed and averaged using reduce-scatter on the GPUs, and each GPU offloads only the averaged gradients belonging to its partition to CPU memory. Once the gradients are available on the CPU, the optimizer state partitions are updated in parallel by each data parallel process directly on the CPU. After the update, the parameter partitions are moved back to the GPU, followed by an all-gather operation on the GPU, similar to ZeRO-2, to gather all the parameters. Figure 4 shows the data placement of model parameters, gradients, and optimizer states for ZeRO-Offload, and the details of the ZeRO-Offload data parallel schedule are presented in Figure 5, where the all-gather operation described above appears as a sequence of broadcast operations.

Model parallel training. ZeRO-Offload can also work together with tensor-slicing-based model parallelism (MP) frameworks such as Megatron-LM [28]. It does so by offloading the gradients, optimizer states, and optimizer computation corresponding to each MP process, allowing ZeRO-Offload to train significantly larger models than is possible using model parallelism alone. Sec. 6 provides more details.

## 5 Optimized CPU Execution

We speed up the CPU execution time for the parameter updates with two optimizations. First, we implement a fast CPU Adam optimizer using high-performance computing techniques, offering significant speedup over the state-of-the-art PyTorch implementation. Second, we develop a one-step delayed parameter update schedule that overlaps the CPU parameter update computation with the forward and backward computation on the GPU, hiding the CPU execution time when enabled.

### 5.1 Implementing the CPU Optimizer

Figure 5: Code representing ZeRO-Offload, which combines the unique optimal CPU offload strategy with ZeRO-powered data parallelism.

We use three levels of parallelism to improve the performance of the CPU optimizer: 1) SIMD vector instructions [15], to fully exploit the hardware parallelism supported on CPU architectures; 2) loop unrolling [31], an effective technique for increasing instruction-level parallelism that is crucial for better memory bandwidth utilization; and 3) OMP multithreading, for effective parallel utilization of the multiple cores and threads on the CPU. Using these techniques, we present a significantly faster implementation of the Adam optimizer compared to the state-of-the-art PyTorch implementation.

##### Mixed Precision Training with Adam

Adam is an optimization algorithm used for deep learning training, which combines the loss gradients with their first and second moments to update the parameters. Therefore, in addition to the model parameters, Adam requires two more matrices of the same size ($M$) to be saved during training. In mixed precision training mode, two versions of the parameters are stored in memory: one in fp16 (p16), used for computing the activations in the forward pass (on the GPU), and a master copy in fp32 (p32), which is updated by the optimizer (on the CPU). At each training step, p16 is updated from p32 through $float2half$ casting.
Moreover, the momentum and variance of the gradients are saved in fp32 (on the CPU) to prevent precision loss when updating the parameters. Please refer to [13] for further details on Adam's algorithm.

Algorithm 2 CPU-ADAM Optimizer
Input: $p32$, $g32$, $m32$, $v32$, $\beta_{1}$, $\beta_{2}$, $\alpha$, $step$, $eps$
Output: $p16$, $p32$, $m32$, $v32$
Parameter: $tile\_width$, $simd\_width$, $unroll\_width$
1: $biascorrection1\leftarrow-\alpha/(1-\beta_{1}^{step})$
2: $biascorrection2\leftarrow 1/\sqrt{1-\beta_{2}^{step}}$
3: $simd\_count\leftarrow size(p32)/simd\_width$
4: unroll omp parallel
5: for $i$ in 1 to ($simd\_count/unroll\_width$) do
6:  …
7:  $g_{v},p_{v},m_{v},v_{v}=g32[i],p32[i],m32[i],v32[i]$
8:  $m_{v}=\mathrm{FMA}(g_{v},(1-\beta_{1}),\beta_{1}\cdot m_{v})$
9:  $v_{v}=\mathrm{FMA}(g_{v}\cdot g_{v},(1-\beta_{2}),\beta_{2}\cdot v_{v})$
10:  $g_{v}=\mathrm{FMA}(\sqrt{v_{v}},biascorrection2,eps)$
11:  $g_{v}=m_{v}/g_{v}$
12:  $p_{v}=\mathrm{FMA}(g_{v},biascorrection1,p_{v})$
13:  $p32[i],m32[i],v32[i]=p_{v},m_{v},v_{v}$
14:  …
15:  if ($i$ == $tile\_width$) Copy_to_GPU($p16$, $p32$)
16: end for

Optimized implementation. Algorithm 2 details Adam's implementation using SIMD operations. As shown, the Adam function receives the optimizer hyperparameters, such as $\beta_{1}$, $\beta_{2}$, and $\alpha$, together with the gradient, momentum, variance, and master copy of the parameters (p32), as input. We also use some implementation-specific parameters, such as $simd\_width$ and $unroll\_width$. The Adam optimizer sends back the updated variance, momentum, and parameters, in both fp16 (to the GPU) and fp32 (on the CPU). We first read the data, including the parameter, gradient, momentum, and variance, into the vector registers (line 7). Then, we use several fused multiply-add (FMA) vector operations to perform the main execution pipeline, which is repeated by the unrolling width. Note that the remaining operations, such as multiply, divide, and sqrt, also run in vector mode. For the best performance, we use the AVX512 SIMD instruction set and an $unroll\_width$ of 8, based on auto-tuning results. In addition to the CPU-Adam optimizer, we implement the CPU-to-GPU fp16 parameter copy in a tiled manner (line 15). We overlap the CPU and GPU execution by parallelizing the Adam computation and the copying of parameters to the GPU: while we process the Adam computation of the current tile of data on the CPU, we write the parameters of the previously processed tile back to the GPU. This reduces the GPU idle time before the processing of the next training step begins.

### 5.2 One-Step Delayed Parameter Update

Figure 6: Delayed parameter update during the training process.

Despite using a highly optimized CPU optimizer, the CPU computation overhead can become a bottleneck when training with very small batch sizes, where the GPU computation time is not much larger than the CPU compute time. For such limited cases, we develop a one-step delayed parameter update (DPU) that overlaps CPU and GPU compute, hiding the CPU computation overhead by delaying the parameter update by a single step. We verify in our evaluation that DPU does not impact the final accuracy of training.

DPU training schedule. Figure 6 shows the workflow of the ZeRO-Offload training process with delayed parameter update. ➊ The first $N-1$ steps are trained without DPU, to avoid destabilizing the training during the early stages where gradients change rapidly.
➋ At step $N$, we obtain the gradients from the GPU, but we skip the CPU optimizer step and do not update the fp16 parameters on the GPU either. ➌ At step $N+1$, we compute the parameter updates on the CPU using the gradients from step $N$, while in parallel computing the forward and backward passes on the GPU using the parameters updated at step $N-1$. From this step onwards, the model at step $(i+1)$ is trained using parameters updated with gradients from step $(i-1)$, instead of parameters updated at step $i$, overlapping CPU compute with GPU compute.

Accuracy trade-off. Since DPU changes the semantics of training, it is reasonable to ask whether there is a trade-off between model accuracy and training efficiency. To answer this question, we evaluated DPU on multiple training workloads and found that DPU does not hurt convergence if it is introduced after a few dozen iterations rather than at the beginning. Our evaluation results in Sec. 6 show that, compared with training with ZeRO-Offload only, training with delayed parameter update achieves the same model accuracy with higher training throughput.

## 6 Evaluation

This section seeks to answer the following questions, in comparison to the state of the art:

(i) How does ZeRO-Offload scale the trainable model size compared to existing multi-billion parameter training solutions on a single GPU/DGX-2 node?
(ii) What is the training throughput of ZeRO-Offload on a single GPU/DGX-2 node?
(iii) How does the throughput of ZeRO-Offload scale on up to 128 GPUs?
(iv) What is the impact of our CPU-Adam and delayed parameter update (DPU) on improving throughput, and does DPU change model convergence?

### 6.1 Evaluation Methodology

Table 2: Hardware overview of the experimental system.

DGX-2 node |
---|---
GPU | 16 NVIDIA Tesla V100 Tensor Core GPUs
GPU Memory | 32GB HBM2 on each GPU
CPU | 2 Intel Xeon Platinum 8168 Processors
CPU Memory | 1.5TB 2666MHz DDR4
CPU cache | L1, L2, and L3 are 32K, 1M, and 33M, respectively
PCIe | bidirectional 32 GBps

Testbed. For the evaluation of model scale and throughput, we conduct our experiments on a single DGX-2 node, whose details are shown in Table 2. For the evaluation of throughput scalability, we conduct experiments on 8 NVIDIA DGX-2 nodes connected with InfiniBand through a 648-port Mellanox MLNX-OS CS7500 switch.

Workloads. For the performance evaluation, we focus on evaluating GPT-2-like [19] Transformer-based models [30]. We vary the hidden dimension and the number of Transformer blocks to obtain models with different numbers of parameters. Note that scaling the depth alone is often not sufficient, because it would make training more difficult [12]. Table 3 shows the configuration parameters used in our experiments. For convergence analysis, such as for the delayed parameter update, we use GPT-2 [19] and BERT [6], both of which are commonly used pre-trained language models that have demonstrated superior performance in many NLP tasks (e.g., natural language understanding and inference) compared to recurrent or convolutional neural networks. We use BERT-large, the same as in [6], which has 24 layers, a hidden size of 1024, 16 attention heads, and 336M parameters. Similar to [21, 28], we fine-tune BERT on the Stanford Question Answering Dataset (SQuAD) [1], one of the most widely used reading comprehension benchmarks [22]. Unless otherwise stated, we follow the same training procedure and hyperparameter settings as in [6, 19].

Baseline.
We compare the effectiveness of ZeRO-Offload with state-of-the-art multi-billion parameter training solutions:

* • PyTorch DDP: The existing PyTorch Transformer implementation using DistributedDataParallel [14].
* • Megatron [28]: One of the current state-of-the-art multi-billion parameter model training solutions, which employs model parallelism to train models of up to 8.3B parameters using 512 GPUs.
* • L2L [18]: L2L enables training of deep Transformer networks by keeping only one Transformer block at a time in GPU memory and moving the tensors of the upcoming Transformer block into GPU memory only when needed.
* • ZeRO [21]: ZeRO extends data parallelism by eliminating memory redundancies across multiple GPUs, allowing models of up to 170B parameters to be trained with high throughput using 25 DGX-2 nodes. We refer to the open-source implementation of ZeRO as ZeRO-2. ZeRO-2 achieves the state-of-the-art results for large model training and is a strong baseline.

Table 3: Model configurations in the evaluation.

# params | batch size per GPU | MP setting in ZeRO-Offload | # layers | hidden size
---|---|---|---|---
1, 2 billion | 32 | 1 | 20, 40 | 2048
4 billion | 32 | 1 | 64 | 2304
6, 8 billion | 16 | 1 | 53, 72 | 3072
10, 11 billion | 10, 8 | 1 | 50, 55 | 4096
12, 13 billion | 4 | 1 | 60, 65 | 4096
15 billion | 8 | 2 | 78 | 4096
20, 40, 60 billion | 8 | 2 | 25, 50, 75 | 8192
70 billion | 8 | 8 | 69 | 9216

### 6.2 Experimental Results

#### 6.2.1 Model scale

As an important step toward democratizing large model training, we first test the largest trainable model on a single GPU as well as on the 16 GPUs of a single DGX-2 node.

##### Single GPU.

The largest model that can be trained using PyTorch DDP on a single GPU with 32GB memory is 1.4B parameters before running out of memory, as shown in Figure 7. Neither Megatron nor ZeRO-2 increases the trainable model size on a single GPU compared to PyTorch, because both rely on the aggregated GPU memory to fit larger models. In contrast, ZeRO-Offload enables training a 13B parameter model on a single GPU, more than 9x larger than with PyTorch, Megatron, or ZeRO-2. This is mainly because of ZeRO-Offload's strategy of maximizing memory savings on the GPU by offloading expensive states, such as the optimizer states and the majority of the gradients, to CPU memory. On the other hand, L2L is able to train even larger models (e.g., 17B) on a single GPU by frequently moving the weights of unused layers to CPU memory. However, L2L's largest trainable model size does not increase when training with multiple GPUs, which we discuss next.

##### Multi-GPU in a single DGX-2.

We further perform model scale tests with 4 and 16 GPUs in a single DGX-2 node. As shown, the maximum trainable model size stays the same for both PyTorch and L2L, because neither of them handles the memory redundancies of data parallelism; as a result, their scalability is bounded by the model scale on a single GPU. Both Megatron and ZeRO-2 support larger model training with more GPUs, but they cannot scale efficiently beyond 15B parameters, even with 16 GPUs. Megatron supports larger models than ZeRO-2, because ZeRO-2 still incurs memory redundancies in the model weights. On the other hand, ZeRO-Offload easily enables training of up to 70B parameter models by partitioning and offloading the optimizer states and gradients to CPU memory, combined with model parallelism. Overall, ZeRO-Offload increases the model scale on a single DGX-2 node by 50x, 4.5x, 7.8x, and 4.2x compared to PyTorch, Megatron, ZeRO-2, and L2L, respectively.
Figure 7: The size of the largest model that can be trained on a single GPU, and on 4 and 16 GPUs (one DGX-2 node).
Figure 8: The training throughput with PyTorch, L2L, and ZeRO-Offload on a single GPU with a batch size of 512.
Figure 9: The training throughput with and without DPU for GPT-2, with the batch size set to 8.

#### 6.2.2 Training throughput

##### Single GPU.

Next, we compare the single-GPU training throughput of ZeRO-Offload and L2L for models with billion-scale parameters. We do not include Megatron and ZeRO-2 in this comparison, because neither can train models larger than 1.4B parameters without running out of memory. We evaluate ZeRO-Offload and L2L with the same training batch size (512) and the same micro-batch sizes (shown in Table 3), with gradient accumulation enabled. We disable delayed parameter update in this experiment so that the comparison reflects system efficiency only; we evaluate the performance improvement of delayed parameter update and its impact on convergence in Section 6.2.4.

Figure 8 shows that ZeRO-Offload outperforms L2L by 14% on average (up to 22%) in throughput (TFLOPS). The performance benefit of ZeRO-Offload comes from two aspects. First, ZeRO-Offload has a lower CPU-GPU communication cost than L2L. For a model with $M$ parameters, L2L requires $28M$ of data communication volume between GPU and CPU, the sum of the weights, gradients, and optimizer states of each layer of the model. As analyzed in Sec. 4.1, the communication volume between CPU and GPU memory in ZeRO-Offload is $4M$, which is 7x smaller than L2L; this reduced communication volume significantly mitigates the CPU-GPU communication bottleneck. Second, compared with L2L, the parameter update of ZeRO-Offload happens on the CPU instead of the GPU, but our optimized CPU-Adam implementation achieves parameter update performance quite comparable to the PyTorch Adam implementation on GPU (evaluated in Sec. 6.2.4). Therefore, although the optimizer update on GPU in L2L is slightly faster than the optimizer update on CPU in ZeRO-Offload, the communication overhead introduced by L2L leads to an overall slower throughput than ZeRO-Offload.

##### Multi-GPU in a single DGX-2.

Next, we compare the training throughput of PyTorch, ZeRO-2, Megatron, ZeRO-Offload without model parallelism (w/o MP), and ZeRO-Offload with model parallelism (w/ MP) on one DGX-2 node. When using MP, we use the MP degree that gives the best performance for both the baseline and ZeRO-Offload. We use a total batch size of 512 for all experiments, using a combination of micro-batch size per GPU and gradient accumulation, and to get the best performance for each configuration we use the largest micro-batch that it can support without running out of memory. We exclude L2L [29] from this test because its implementation does not support multi-GPU training.

Figure 10 shows the per-GPU throughput results when training on multiple GPUs. We make the following observations:

* • For 1B to 15B models, ZeRO-Offload achieves the highest throughput, with up to 1.33x, 1.11x, and 1.64x higher speeds than PyTorch, ZeRO-2, and Megatron, respectively. By offloading all the optimizer states to the CPU with low overhead, ZeRO-Offload can train with larger micro-batch sizes, giving higher throughput.
* • ZeRO-2 runs out of memory once the model size is beyond 8B, due to the lack of enough aggregate GPU memory on 16 GPUs to store the model states.
Instead, ZeRO-Offload scales to 13B without model parallelism, because it offloads the optimizer states and the majority of the gradients to CPU memory.

* • When combined with model parallelism, ZeRO-Offload enables training of up to 70B parameter models with more than 30 TFLOPS throughput per GPU. In contrast, Megatron supports only up to 15B parameter models before running out of memory when using model parallelism alone.
* • Comparing ZeRO-Offload with ZeRO-2 and Megatron, ZeRO-Offload outperforms ZeRO-2 and Megatron in throughput for 1-8B and 1-13B parameter models, respectively. ZeRO-Offload is faster than Megatron because it eliminates frequent communication between different GPUs and can train with larger micro-batch sizes. ZeRO-Offload outperforms ZeRO-2 also due to larger micro-batch sizes.

Figure 10: Training throughput with PyTorch, ZeRO-2, Megatron-LM, ZeRO-Offload without model parallelism, and ZeRO-Offload with model parallelism.

#### 6.2.3 Throughput Scalability

We compare the throughput scalability of ZeRO-2 and ZeRO-Offload on up to 128 GPUs in Figure 11. (We do not include a comparison against Megatron because it consistently performs worse than ZeRO-Offload, as shown in Figure 10; given the communication overhead added by model parallelism, scaling out Megatron training cannot achieve higher throughput than ZeRO-Offload even with linear scalability.) We make the following key observations. First, ZeRO-Offload achieves near-perfect linear speedup in terms of aggregate throughput (green line), running at over 30 TFLOPS per GPU (blue bars). Second, from 1 to 16 GPUs, where ZeRO-2 runs out of memory, ZeRO-Offload can effectively train the model, turning model training from infeasible to feasible. Third, with 32 GPUs, ZeRO-Offload slightly outperforms ZeRO-2 in throughput. The improvement comes from the additional GPU memory savings of ZeRO-Offload, which allow training with larger batch sizes that increase GPU computation efficiency. Fourth, with more GPUs (such as 64 and 128), ZeRO-2 starts to outperform ZeRO-Offload, because both can now run similar batch sizes and achieve similar computation efficiency, whereas ZeRO-2 does not suffer the additional overhead of CPU-GPU communication. In summary, ZeRO-Offload complements ZeRO-2 and enables large model training from a single device to thousands of devices with good computation efficiency.

Figure 11: Comparison of training throughput between ZeRO-Offload and ZeRO-2 using 1-128 GPUs for a 10B parameter GPT-2.

#### 6.2.4 Optimized CPU execution

##### A. CPU-Adam efficiency.

In this part, we evaluate our Adam implementation against the PyTorch Adam implementation on CPU. Table 4 shows the optimizer execution time of the two implementations for models with 1 to 10 billion parameters. Compared to PyTorch (PT-CPU), CPU-Adam reduces the execution time by over 5x for all configurations, and by 6.4x for the 1B parameter case. The CPU-Adam optimizer achieves these high speedups by exploiting instruction-level parallelism, thread-level parallelism, and the tile-based data copy scheme (line 15 of Algorithm 2). Meanwhile, although CPU-Adam is slower than the PyTorch Adam implementation on GPU (PT-GPU), the performance gap is not very large, and the CPU computation is not a bottleneck for the training throughput.

##### B. One-step delayed parameter update (DPU).

Figure 9 shows the comparison of the training throughput of GPT-2 with and without DPU.
As shown, with DPU enabled, training achieves 1.12-1.59x higher throughput than without it across a wide range of model sizes, for a small micro-batch size of 8. This is expected, because DPU allows the optimizer update to overlap with the next forward computation, so that the GPU does not have to be slowed down by the CPU computation and CPU-GPU communication. But what about accuracy?

Convergence impact. We study the convergence impact of DPU on both GPT-2 and BERT. Figure 12 shows the pre-training loss curves over 100K training iterations for unmodified GPT-2 (PyTorch), ZeRO-Offload without DPU, and ZeRO-Offload with DPU, and Figure 13 shows the corresponding loss curves for fine-tuning the BERT-large model on SQuAD. In both cases, DPU is enabled after 40 iterations, allowing the training to stabilize in its early stage before DPU is introduced. We observe that the training curves of unmodified GPT-2 and ZeRO-Offload w/o DPU exactly overlap, because ZeRO-Offload w/o DPU performs only system optimizations and does not alter the training dynamics. On the other hand, the training curve of ZeRO-Offload with DPU converges slightly more slowly at the very beginning of training (barely visible at 2K-5K iterations) and quickly catches up after 5K iterations. For the remainder of the training, the training loss matches the original training until the model converges. For the BERT-large fine-tuning, we can see that although the training losses are not exactly the same, they converge with the same trend and largely overlap. Without changing any hyperparameters, ZeRO-Offload + DPU achieves the same final F1 score (92.8) as the baseline.

Table 4: Adam latency (s) for PyTorch (PT) and CPU-Adam.

#Parameters | CPU-Adam | PT-CPU | PT-GPU (L2L)
---|---|---|---
1 billion | 0.22 | 1.39 | 0.10
2 billion | 0.51 | 2.75 | 0.26
4 billion | 1.03 | 5.71 | 0.64
8 billion | 2.41 | 11.93 | 0.87
10 billion | 2.57 | 14.76 | 1.00

From these results on both GPT-2 pre-training and BERT-large fine-tuning, we empirically verify that DPU is an effective technique for improving the training throughput of ZeRO-Offload without hurting model convergence and accuracy. The one-step staleness introduced by DPU is well tolerated by the iterative training process once the model has passed the initial training phase.

Figure 12: The training loss curve of unmodified GPT-2, ZeRO-Offload w/o DPU, and ZeRO-Offload with DPU.
Figure 13: The fine-tuning loss curve of BERT with ZeRO-Offload w/o DPU and ZeRO-Offload with DPU.

## 7 Conclusions

We presented ZeRO-Offload, a powerful GPU-CPU hybrid DL training technology with high compute efficiency and near-linear throughput scalability that allows data scientists to train multi-billion parameter models even on a single GPU, without requiring any model refactoring. We open-sourced ZeRO-Offload as part of the DeepSpeed library (www.deepspeed.ai) with the hope of democratizing large model training, allowing data scientists everywhere to harness the potential of truly massive DL models.

## References

* [1] The Stanford Question Answering Dataset (SQuAD) leaderboard. https://rajpurkar.github.io/SQuAD-explorer/.
* [2] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.
* [3] Yu Cao, Wei Bi, Meng Fang, and Dacheng Tao. Pretrained language models for dialogue generation with multiple input sources, 2020.
* [4] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016.
* [5] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1223–1231. Curran Associates, Inc., 2012.
* [6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* [7] Aaron Harlap, Deepak Narayanan, Amar Phanishayee, Vivek Seshadri, Nikhil Devanur, Greg Ganger, and Phil Gibbons. PipeDream: Fast and efficient pipeline parallel DNN training, 2018.
* [8] Mark Hildebrand, Jawad Khan, Sanjeev Trika, Jason Lowe-Power, and Venkatesh Akella. AutoTM: Automatic tensor movement in heterogeneous memory systems using integer linear programming. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '20, 2020.
* [9] Chien-Chin Huang, Gu Jin, and Jinyang Li. SwapAdvisor: Pushing deep learning beyond the GPU memory limit via smart swapping. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '20, page 1341–1355, New York, NY, USA, 2020. Association for Computing Machinery.
* [10] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Mia Xu Chen, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. GPipe: Efficient training of giant neural networks using pipeline parallelism, 2018.
* [11] Hai Jin, Bo Liu, Wenbin Jiang, Yang Ma, Xuanhua Shi, Bingsheng He, and Shaofeng Zhao. Layer-centric memory reuse and data migration for extreme-scale deep learning on many-core architectures. ACM Trans. Archit. Code Optim., 15(3), September 2018.
* [12] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020.
* [13] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014.
* [14] Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, and Soumith Chintala. PyTorch distributed: Experiences on accelerating data parallel training. Proc. VLDB Endow., 13(12):3005–3018, 2020.
* [15] Gaurav Mitra, Beau Johnston, Alistair Rendell, Eric McCreath, and Jun Zhou. Use of SIMD vector operations to accelerate application code performance on low-powered ARM and Intel platforms. pages 1107–1116, May 2013.
* [16] Nvidia. Automatic Mixed Precision for Deep Learning. https://developer.nvidia.com/automatic-mixed-precision, 2019.
* [17] Xuan Peng, Xuanhua Shi, Hulin Dai, Hai Jin, Weiliang Ma, Qian Xiong, Fan Yang, and Xuehai Qian.
Capuchin: Tensor-based GPU memory management for deep learning. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '20, page 891–905, New York, NY, USA, 2020. Association for Computing Machinery.
* [18] Bharadwaj Pudipeddi, Maral Mesmakhosroshahi, Jinwen Xi, and Sujeeth Bharadwaj. Training large neural networks with constant memory using a new execution algorithm. June 2020.
* [19] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
* [20] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2020.
* [21] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. ZeRO: Memory optimizations toward training trillion parameter models. In International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2020.
* [22] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. The Association for Computational Linguistics, 2016.
* [23] Jie Ren, Jiaolin Luo, Kai Wu, Minjia Zhang, Hyeran Jeon, and Dong Li. Sentinel: Efficient tensor migration and allocation on heterogeneous memory systems for deep learning. In International Symposium on High Performance Computer Architecture (HPCA), 2020.
* [24] Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W. Keckler. vDNN: Virtualized deep neural networks for scalable, memory-efficient neural network design. In The 49th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-49, 2016.
* [25] Corby Rosset. Turing-NLG: A 17-billion-parameter language model by Microsoft, 2020.
* [26] Christopher J. Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E. Dahl. Measuring the effects of data parallelism on neural network training. Journal of Machine Learning Research, 20, 2019.
* [27] Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake A. Hechtman. Mesh-TensorFlow: Deep learning for supercomputers. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 10435–10444, 2018.
* [28] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. CoRR, abs/1909.08053, 2019.
* [29] Roman Tezikov. PyTorch implementation of L2L execution algorithm. https://github.com/TezRomacH/layer-to-layer-pytorch, 2020.
* [30] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N.
Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008, 2017. * [31] G. Velkoski, M. Gusev, and S. Ristov. The performance impact analysis of loop unrolling. In 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pages 307–312, 2014. * [32] Oreste Villa, Mark Stephenson, David Nellans, and Stephen Keckler. Buddy Compression: Enabling Larger Memory for Deep Learning and HPC Workloads on GPUs. In International Symposium on Computer Architecture, 2020. * [33] Linnan Wang, Jinmian Ye, Yiyang Zhao, Wei Wu, Ang Li, Shuaiwen Leon Song, Zenglin Xu, and Tim Kraska. Superneurons: Dynamic gpu memory management for training deep neural networks. In Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP ’18, page 41–53, New York, NY, USA, 2018. Association for Computing Machinery. * [34] Junzhe Zhang, Sai-Ho Yeung, Yao Shu, Bingsheng He, and Wei Wang. Efficient memory management for gpu-based deep learning systems. CoRR, abs/1903.06631, 2019. * [35] M. A. Zinkevich, M.Weimer, A. Smola, and L. Li. Parallelized Stochastic Gradient Descent. In International Conference on Neural Information Processing Systems, 2010.
# Performance Analysis and Codebook Design for mmWave Beamforming System with Beam Squint

Hongkang Yu, Pengxin Guan, Yiru Wang, Yuping Zhao

###### Abstract

Beamforming technology is widely used in millimeter wave systems to combat path losses, and beamformers are usually selected from a predefined codebook. Unfortunately, the traditional codebook design neglects the beam squint effect, which causes severe performance degradation when the bandwidth is large. In this letter, we consider a wideband beamforming system in which a codebook of fixed size is adopted. First, we analyze how beam squint affects system performance when all beams have the same width. The expression of the average spectrum efficiency is derived based on the ideal beam pattern. Next, we formulate an optimization problem to design the optimal codebook. Simulation results demonstrate that the proposed codebook deals with beam squint by spreading the beam coverage and significantly mitigates the performance degradation.

###### Index Terms: Millimeter wave, beamforming, beam squint, beam pattern, codebook design.

## I Introduction

Owing to its abundant spectrum resources, millimeter wave (mmWave) communication can support gigabit-per-second data rates and is regarded as one of the most promising technologies for future wireless communication systems [1]. However, due to the severe path losses in the mmWave band, the beamforming gains of large antenna arrays are required to improve signal power. To reduce the implementation complexity, beamformers are usually selected from a predefined codebook [2, 3]. Recent studies have shown that when the system bandwidth is sufficiently large, the array response becomes noticeably frequency-dependent; this phenomenon is called beam squint [4]. For a wideband orthogonal frequency division multiplexing (OFDM) system, beam squint causes the beam direction to vary across subcarriers, and the average spectrum efficiency (SE) decreases. Unfortunately, the traditional codebook design neglects this problem [3]. To cope with beam squint, a denser codebook has been proposed to guarantee a minimum beam gain for all subcarriers [5]. However, the required codebook size increases rapidly as beam squint becomes severe. In [6], beam patterns are designed to maximize the average beam gain within the bandwidth, but this scheme requires accurate estimation of the channel direction. In [7, 8], hybrid beamformers are designed to support multi-stream transmission, and the transmitter needs to know the perfect channel matrix. Another hybrid beamforming scheme proposes to assign users to each subcarrier so as to match the beam direction [9]. However, this requires the user locations to be highly correlated. In this letter, we consider a wideband mmWave beamforming system that transmits a single data stream, where the beamformer is selected from a codebook of fixed size. The contributions are twofold. First, we analyze how beam squint affects the data rates of subcarriers when the traditional codebook design is adopted. The expression of the average SE is derived, which proves that the performance deteriorates as beam squint becomes severe. Second, we design a novel codebook that maximizes the average SE for both analog and hybrid beamforming architectures. Compared with the traditional schemes, the optimized beam pattern spreads its coverage to cope with beam squint. Simulation results demonstrate that the proposed codebook significantly mitigates the performance degradation.
## II System Model

A point-to-point wideband mmWave OFDM system with $M$ subcarriers is considered in this letter, and the frequency of the $m$th subcarrier $(m=1,2,...,M)$ is denoted as ${f_{m}}={f_{\text{c}}}+\frac{B}{M}\left({m-1-\frac{{M-1}}{2}}\right)$, where ${f_{\text{c}}}$ and $B$ denote the carrier frequency and bandwidth, respectively. We assume that the transmitter adopts a uniform linear antenna array with ${N_{\text{t}}}$ antennas, and serves a single-antenna receiver. The array response can be expressed as ${\left[{1,{e^{j2\pi{f_{m}}d\phi/c}},...,{e^{j2\pi{f_{m}}d\phi\left({{N_{\text{t}}}-1}\right)/c}}}\right]^{\text{T}}}$ [5], where $\phi\in\left[-1,1\right]$ denotes the _spatial angle_, $c$ is the speed of light, and $d=c/2{f_{\text{c}}}$ represents the antenna spacing. We also define the _equivalent spatial angle_ of the $m$th subcarrier as ${\varphi_{m}}={f_{m}}\phi/{f_{\text{c}}}$. When the bandwidth is sufficiently large, ${\varphi_{m}}$ becomes noticeably frequency-dependent; this phenomenon is called beam squint. Since this letter mainly studies the impact of beam squint on the wideband beamforming system, only line-of-sight propagation is considered between the transmitter and the receiver [4, 5, 6]. The channel vector of the $m$th subcarrier can be expressed as ${{\mathbf{h}}_{m}}={\mathbf{a}}\left({{\varphi_{m}}}\right)={\left[{1,{e^{j\pi{\varphi_{m}}}},...,{e^{j\pi\left({{N_{\text{t}}}-1}\right){\varphi_{m}}}}}\right]^{\text{T}}}.$ (1) To focus the signal power in a desired direction, the transmitter selects the beamformer from a codebook $\mathcal{W}=\left\{{\mathbf{w}}_{1},{\mathbf{w}}_{2},...,{\mathbf{w}}_{L}\right\}$, where $L$ denotes the codebook size and $\left\|{{{\mathbf{w}}_{i}}}\right\|=1$. It should be noted that we do not specify the hardware implementation of ${\mathbf{w}}$. It may be a pure analog beamforming vector [10], or it may be realized by the hybrid beamforming architecture [2]. In addition, without loss of generality, we assume that $L$ is an even number and that the beam patterns are symmetric about $\phi=0$. In this way, we only need to consider the case of $\phi>0$ and half of the beams. When all beams evenly cover the entire spatial domain, the beam coverage of the $i$th beam ($i=1,2,...,L/2$) is denoted as ${\mathcal{I}_{i}}=\left[{\phi_{i}^{\text{L}},\phi_{i}^{\text{R}}}\right]$, where $\phi_{i}^{\text{L}}=2\left({i-1}\right)/L$ and $\phi_{i}^{\text{R}}=2i/L$. To select the optimal beam, the transmitter can perform beam training by sending pilot signals towards different directions [2]. Assuming that this process is perfect, the index of the selected beam is $i=\left\lceil{L\phi/2}\right\rceil$. Since all subcarriers share the same beamforming weights, the SE of a wideband beamforming system can be expressed as ${R_{i}}\left(\phi\right)=\frac{1}{M}\sum\limits_{m=1}^{M}{{{\log}_{2}}\left({1+\rho{{\left|{{{\mathbf{a}}^{\text{H}}}\left({{\varphi_{m}}}\right){{\mathbf{w}}_{i}}}\right|}^{2}}}\right)},$ (2) where $\rho$ denotes the normalized signal-to-noise ratio (SNR). To describe beam squint quantitatively, we define the beam squint factor $\varepsilon=B/\left({2{f_{\text{c}}}}\right)$. Moreover, assuming that $\phi$ is uniformly distributed and each beam is selected with the same probability, the average SE when the $i$th beam is selected can be expressed as follows (limited by space, the complete derivations of (3) and (11) are given in [11]):
$\begin{split}{{\bar{R}}_{i}}&=\frac{L}{2}\int_{\phi_{i}^{\text{L}}}^{\phi_{i}^{\text{R}}}{R_{i}\left(\phi\right)d\phi}\\ &\approx\frac{L}{{4\varepsilon}}\int_{\phi_{i}^{\text{L}}}^{\phi_{i}^{\text{R}}}{\frac{1}{\phi}\int_{\left({1-\varepsilon}\right)\phi}^{\left({1+\varepsilon}\right)\phi}{{{\log}_{2}}\left({1+\rho{{\left|{{{\mathbf{a}}^{\text{H}}}\left(\varphi\right){{\mathbf{w}}_{i}}}\right|}^{2}}}\right)d\varphi}d\phi},\end{split}$ (3) where we use the inner integral term to approximate the summation in (2). The average SE can be further expressed as $\bar{R}=\frac{2}{L}\sum\limits_{i=1}^{L/2}{{{\bar{R}}_{i}}}.$ (4)

## III Theoretical Analysis on Spectrum Efficiency

In this section, we analyze how beam squint affects the SE under the traditional codebook design. For wideband beamforming systems, perfect beam training guarantees that $\phi\in\mathcal{I}_{i}$, which implies that the central subcarrier always has the full data rate. However, the beam squint effect extends the spatial angle $\phi$ to the equivalent spatial angle range $\left[{\left({1-\varepsilon}\right)\phi,\left({1+\varepsilon}\right)\phi}\right]$ within the bandwidth. As illustrated in Fig. 1, $\varphi_{m}$ may lie outside the beam coverage, which causes data rate drops at some subcarriers and reduces the average SE.

Figure 1: An illustration of the beam squint effect under different cases.

Since traditional codebooks have distinct beam patterns under different hardware architectures, for tractability we assume that the beam gain is non-zero and constant only in ${\mathcal{I}_{i}}$. This ideal beam pattern is widely used in the related works [2, 8]. According to Parseval's theorem, we can derive that $\int_{-1}^{1}{{{\left|{{{\mathbf{a}}^{\text{H}}}\left(\varphi\right){\mathbf{w}}}\right|}^{2}}d\varphi}=2$, which implies that the beam width is inversely proportional to the beam gain. Thus, all ideal beams have the same gain ${g_{i}}=L$. According to the characteristics of the ideal beams, when $f\phi/{f_{\text{c}}}<\phi_{i}^{\text{L}}$, subcarriers with lower frequency have zero data rates; similarly, when $f\phi/{f_{\text{c}}}>\phi_{i}^{\text{R}}$, subcarriers with higher frequency have zero data rates. Therefore, only the subcarriers whose frequencies are in $\left[{{f^{\text{L}}},{f^{\text{R}}}}\right]$ have full data rates, where ${f^{\text{L}}}=\max\left({{f_{\text{c}}}-B/2,{f_{\text{c}}}\phi_{i}^{\text{L}}/\phi}\right)$ and ${f^{\text{R}}}=\min\left({{f_{\text{c}}}+B/2,{f_{\text{c}}}\phi_{i}^{\text{R}}/\phi}\right)$. We can express the SE as ${R_{i}}\left(\phi\right)=\frac{{\left({{f^{\text{R}}}-{f^{\text{L}}}}\right)}}{B}{\log_{2}}\left({1+\rho L}\right).$ (5) Based on (5), the entire analysis consists of the following three steps.

_Step_ I: _We study the relationship between ${R_{i}}\left(\phi\right)$ and $\phi$._ When $\phi>\tilde{\phi}_{i}^{\text{L}}\triangleq\phi_{i}^{\text{L}}/\left({1-\varepsilon}\right)$, ${f^{\text{L}}}={f_{\text{c}}}-B/2$; and when $\phi<\tilde{\phi}_{i}^{\text{R}}\triangleq\phi_{i}^{\text{R}}/\left({1+\varepsilon}\right)$, ${f^{\text{R}}}={f_{\text{c}}}+B/2$. Equation (5) can be further divided into the following four cases, which correspond to the four subfigures in Fig. 1.
(a) $\phi<\tilde{\phi}_{i}^{\text{L}}$ and $\phi<\tilde{\phi}_{i}^{\text{R}}$: only subcarriers with lower frequency have zero data rates, and ${R_{i}}\left(\phi\right)=R_{i}^{{\text{(a)}}}\left(\phi\right)\triangleq\left({1+\varepsilon-\frac{{\phi_{i}^{\text{L}}}}{\phi}}\right)\frac{{{{\log}_{2}}\left({1+\rho L}\right)}}{{2\varepsilon}}.$

(b) $\phi>\tilde{\phi}_{i}^{\text{L}}$ and $\phi>\tilde{\phi}_{i}^{\text{R}}$: only subcarriers with higher frequency have zero data rates, and ${R_{i}}\left(\phi\right)=R_{i}^{{\text{(b)}}}\left(\phi\right)\triangleq\left({\frac{{\phi_{i}^{\text{R}}}}{\phi}-1+\varepsilon}\right)\frac{{{{\log}_{2}}\left({1+\rho L}\right)}}{{2\varepsilon}}.$

(c) $\tilde{\phi}_{i}^{\text{L}}<\phi<\tilde{\phi}_{i}^{\text{R}}$: all subcarriers have full data rates, and ${R_{i}}\left(\phi\right)=R_{i}^{{\text{(c)}}}\left(\phi\right)\triangleq{\log_{2}}\left({1+\rho L}\right).$

(d) $\tilde{\phi}_{i}^{\text{R}}<\phi<\tilde{\phi}_{i}^{\text{L}}$: subcarriers with both higher and lower frequencies have zero data rates, and ${R_{i}}\left(\phi\right)=R_{i}^{{\text{(d)}}}\left(\phi\right)\triangleq\frac{{{{\log}_{2}}\left({1+\rho L}\right)}}{{\varepsilon\phi L}}.$

_Step_ II: _We study the relationship between ${{\bar{R}}_{i}}$ and $\varepsilon$_. Since the value of $\varepsilon$ determines the relationship between the four variables $\phi_{i}^{\text{L}}$, $\phi_{i}^{\text{R}}$, $\tilde{\phi}_{i}^{\text{L}}$ and $\tilde{\phi}_{i}^{\text{R}}$, the analysis is further divided into the following four cases.

(1) $\varepsilon<1/\left({L-1}\right)$: $\phi_{i}^{\text{L}}<\tilde{\phi}_{i}^{\text{L}}<\tilde{\phi}_{i}^{\text{R}}<\phi_{i}^{\text{R}}$ and ${R_{i}}\left(\phi\right)=\begin{cases}R_{i}^{{\text{(a)}}}\left(\phi\right),&\phi_{i}^{\text{L}}<\phi\leqslant\tilde{\phi}_{i}^{\text{L}},\\ R_{i}^{{\text{(c)}}}\left(\phi\right),&\tilde{\phi}_{i}^{\text{L}}<\phi\leqslant\tilde{\phi}_{i}^{\text{R}},\\ R_{i}^{{\text{(b)}}}\left(\phi\right),&\tilde{\phi}_{i}^{\text{R}}<\phi<\phi_{i}^{\text{R}}.\end{cases}$ By integrating over $\phi$, the average SE can be derived as $\begin{split}{\bar{R}_{i}}&=\bar{R}_{i}^{{\text{(1)}}}\\ &\triangleq\frac{1}{2}{\log_{2}}\left({1+\rho L}\right)\left({1-\frac{{\ln\left({1-\varepsilon}\right)}}{\varepsilon}+\frac{{i\ln\left({1-{\varepsilon^{2}}}\right)}}{\varepsilon}}\right).\end{split}$ (6)

(2) $1/\left({L-1}\right)<\varepsilon<1/i$: $\phi_{i}^{\text{L}}<\tilde{\phi}_{i}^{\text{R}}<\tilde{\phi}_{i}^{\text{L}}<\phi_{i}^{\text{R}}$ and ${R_{i}}\left(\phi\right)=\begin{cases}R_{i}^{{\text{(a)}}}\left(\phi\right),&\phi_{i}^{\text{L}}<\phi\leqslant\tilde{\phi}_{i}^{\text{R}},\\ R_{i}^{{\text{(d)}}}\left(\phi\right),&\tilde{\phi}_{i}^{\text{R}}<\phi\leqslant\tilde{\phi}_{i}^{\text{L}},\\ R_{i}^{{\text{(b)}}}\left(\phi\right),&\tilde{\phi}_{i}^{\text{L}}<\phi<\phi_{i}^{\text{R}}.\end{cases}$ We can obtain ${\bar{R}_{i}}=\bar{R}_{i}^{{\text{(1)}}}$, which has the same form as (6).
(3) $1/i<\varepsilon<1/\left({i-1}\right)$: $\phi_{i}^{\text{L}}<\tilde{\phi}_{i}^{\text{R}}<\phi_{i}^{\text{R}}<\tilde{\phi}_{i}^{\text{L}}$, ${R_{i}}\left(\phi\right)=\begin{cases}R_{i}^{{\text{(a)}}}\left(\phi\right),&\phi_{i}^{\text{L}}<\phi\leqslant\tilde{\phi}_{i}^{\text{R}},\\ R_{i}^{{\text{(d)}}}\left(\phi\right),&\tilde{\phi}_{i}^{\text{R}}<\phi<\phi_{i}^{\text{R}},\end{cases}$ and $\begin{split}{\bar{R}_{i}}=&\bar{R}_{i}^{{\text{(2)}}}\\ \triangleq&\frac{1}{2}{\log_{2}}\left({1+\rho L}\right)\left(1-i+\frac{1}{\varepsilon}\left({1-\ln{\frac{{i-1}}{i}}}\right)\right.\\ &\left.+\frac{i}{\varepsilon}\ln\frac{{\left({1+\varepsilon}\right)\left({i-1}\right)}}{i}\right).\end{split}$ (7)

(4) $\varepsilon>1/\left({i-1}\right)$: $\tilde{\phi}_{i}^{\text{R}}<\phi_{i}^{\text{L}}<\phi_{i}^{\text{R}}<\tilde{\phi}_{i}^{\text{L}}$. We can obtain ${R_{i}}\left(\phi\right)=R_{i}^{{\text{(d)}}}\left(\phi\right)$ and ${\bar{R}_{i}}=\bar{R}_{i}^{{\text{(3)}}}\triangleq\frac{1}{2}{\log_{2}}\left({1+\rho L}\right)\left({\frac{1}{\varepsilon}\ln\frac{i}{{i-1}}}\right).$ (8)

_Step_ III: _We study the relationship between ${\bar{R}}$ and $\varepsilon$_, which can be discussed in the following two cases.

(i) $\varepsilon\leqslant 2/L$. In this case, ${\bar{R}_{i}}$ under all beams can be calculated with $\bar{R}_{i}^{{\text{(1)}}}$. According to (4), we can derive that $\bar{R}=\frac{2}{L}\sum\limits_{i=1}^{L/2}{\bar{R}_{i}^{{\text{(1)}}}}\approx{\log_{2}}\left({1+\rho L}\right)\left({1-\left({\frac{1}{4}+\frac{L}{8}}\right)\varepsilon}\right),$ (9) where the approximation is obtained by $\log\left({1+x}\right)\approx x$ when $x$ is small. From this equation, we can infer that the average SE decreases linearly with $\varepsilon$ when the beam squint effect is small. Moreover, a larger codebook with narrower beams is more susceptible to the beam squint effect.

(ii) $\varepsilon>2/L$. In this case, there exists $L^{\prime}\leqslant L/2$ such that $1/L^{\prime}<\varepsilon<1/\left({L^{\prime}-1}\right)$. Moreover, for the $L^{\prime}$th beam, we use $\bar{R}_{L^{\prime}}^{{\text{(1)}}}$ to approximate $\bar{R}_{L^{\prime}}^{{\text{(2)}}}$, and $\bar{R}$ can be expressed as $\begin{split}\bar{R}&=\frac{2}{L}\left({\sum\limits_{i=1}^{L^{\prime}}{\bar{R}_{i}^{{\text{(1)}}}}+\sum\limits_{i=L^{\prime}+1}^{L/2}{\bar{R}_{i}^{{\text{(3)}}}}}\right)\\ &\approx\frac{1}{{\varepsilon L}}{\log_{2}}\left({1+\rho L}\right)\left({1.5-\frac{\varepsilon}{2}+\ln\frac{{L\varepsilon}}{2}}\right),\end{split}$ (10) where the approximation is obtained from $L^{\prime}\approx 1/\varepsilon$ and $\log\left({1+x}\right)\approx x$. We can observe that $\bar{R}$ continues to decrease with $\varepsilon$, but drops more slowly than in the former case.

## IV Proposed Codebook Design

To cope with beam squint, we propose a novel codebook in this section. Both analog and hybrid beamforming architectures are considered. First, by denoting $f\left(\varphi\right)={\log_{2}}\left({1+\rho{{\left|{{{\mathbf{a}}^{\text{H}}}\left(\varphi\right){{\mathbf{w}}_{i}}}\right|}^{2}}}\right)$ and changing the integration order in (3), we obtain ${{\bar{R}}_{i}}=\frac{L}{{4\varepsilon}}\int_{\left({1-\varepsilon}\right)\phi_{i}^{\text{L}}}^{\left({1+\varepsilon}\right)\phi_{i}^{\text{R}}}{t\left(\varphi\right)f\left(\varphi\right)d\varphi},$ (11) where $t\left(\varphi\right)$ represents the weight of the beam gain in different directions.
When $\varepsilon\leqslant 1/\left({2i-1}\right)$, $t\left(\varphi\right)=\begin{cases}\ln\frac{\varphi}{{\left({1-\varepsilon}\right)\phi_{i}^{\text{L}}}},&\left({1-\varepsilon}\right)\phi_{i}^{\text{L}}<\varphi<\left({1+\varepsilon}\right)\phi_{i}^{\text{L}},\\ \ln\frac{{1+\varepsilon}}{{1-\varepsilon}},&\left({1+\varepsilon}\right)\phi_{i}^{\text{L}}\leqslant\varphi<\left({1-\varepsilon}\right)\phi_{i}^{\text{R}},\\ \ln\frac{{\left({1+\varepsilon}\right)\phi_{i}^{\text{R}}}}{\varphi},&\left({1-\varepsilon}\right)\phi_{i}^{\text{R}}\leqslant\varphi<\left({1+\varepsilon}\right)\phi_{i}^{\text{R}},\end{cases}$ and when $\varepsilon>1/\left({2i-1}\right)$, $t\left(\varphi\right)=\begin{cases}\ln\frac{\varphi}{{\left({1-\varepsilon}\right)\phi_{i}^{\text{L}}}},&\left({1-\varepsilon}\right)\phi_{i}^{\text{L}}<\varphi<\left({1-\varepsilon}\right)\phi_{i}^{\text{R}},\\ \ln\frac{{\phi_{i}^{\text{R}}}}{{\phi_{i}^{\text{L}}}},&\left({1-\varepsilon}\right)\phi_{i}^{\text{R}}\leqslant\varphi<\left({1+\varepsilon}\right)\phi_{i}^{\text{L}},\\ \ln\frac{{\left({1+\varepsilon}\right)\phi_{i}^{\text{R}}}}{\varphi},&\left({1+\varepsilon}\right)\phi_{i}^{\text{L}}\leqslant\varphi<\left({1+\varepsilon}\right)\phi_{i}^{\text{R}}.\end{cases}$

Next, we sample the interval $\left[{\left({1-\varepsilon}\right)\phi_{i}^{\text{L}},\left({1+\varepsilon}\right)\phi_{i}^{\text{R}}}\right]$ uniformly and obtain the set $\Phi=\left\{{\varphi^{1}},{\varphi^{2}},...,{\varphi^{K}}\right\}$, where $K$ denotes the number of samples. According to (11), the codebook design problem can be formulated as $\begin{split}\mathop{\max}\limits_{{\mathbf{w}},{r_{k}}}\quad&\sum\limits_{k=1}^{K}{t\left({{\varphi^{k}}}\right)\log\left({1+\rho{r_{k}}}\right)}\\ \text{s.t.}\quad&{\left\|{\mathbf{w}}\right\|^{2}}\leqslant 1\\ &{\left|{{{\mathbf{a}}^{\text{H}}}\left({{\varphi^{k}}}\right){\mathbf{w}}}\right|^{2}}\geqslant{r_{k}},\end{split}$ (12) where we omit the subscript $i$ for convenience. Since the constraint ${\left|{{{\mathbf{a}}^{\text{H}}}\left({{\varphi^{k}}}\right){\mathbf{w}}}\right|^{2}}\geqslant{r_{k}}$ is non-convex, we use the constrained concave-convex procedure (CCCP) to tackle it. The CCCP is an iterative algorithm that linearizes the non-convex constraint to form a convex problem at each iteration. By denoting ${\mathbf{A}}\left({{\varphi^{k}}}\right)={\mathbf{a}}\left({{\varphi^{k}}}\right){{\mathbf{a}}^{\text{H}}}\left({{\varphi^{k}}}\right)$, a linear approximation of the above constraint can be derived as [2] $\begin{split}L\left({{\mathbf{w}};{{\mathbf{w}}_{\left(n\right)}}}\right)\triangleq&\;{\mathbf{w}}_{\left(n\right)}^{\text{H}}{\mathbf{A}}\left({{\varphi^{k}}}\right){{\mathbf{w}}_{\left(n\right)}}\\ &+2\operatorname{Re}\left({{\mathbf{w}}_{\left(n\right)}^{\text{H}}{\mathbf{A}}\left({{\varphi^{k}}}\right)\left({{\mathbf{w}}-{{\mathbf{w}}_{\left(n\right)}}}\right)}\right),\end{split}$ (13) where $\operatorname{Re}(\cdot)$ denotes the real part of a complex number, and $\mathbf{w}_{\left(n\right)}$ is the known vector obtained at the $n$th iteration. When $L\left({{\mathbf{w}};{{\mathbf{w}}_{\left(n\right)}}}\right)$ replaces the original non-convex constraint, problem (12) becomes a standard convex problem and can be solved by tools such as CVX.
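For concreteness, the following minimal Python sketch runs the CCCP loop on problem (12) using CVXPY (the Python counterpart of CVX). The array size, the SNR, the sampled angles, the weights $t(\varphi^{k})$, and the random initialization are illustrative assumptions, not values from this letter.

```python
import numpy as np
import cvxpy as cp

# Illustrative parameters (assumptions, not the letter's values)
Nt, K, rho = 32, 64, 1.0

def a(varphi):
    """Array response a(varphi) of Eq. (1)."""
    return np.exp(1j * np.pi * varphi * np.arange(Nt))

varphis = np.linspace(0.0, 0.1, K)   # sampled set Phi (assumed)
t = np.ones(K)                       # weights t(varphi^k) (assumed uniform)
A = [np.outer(a(p), a(p).conj()) for p in varphis]

# Random feasible initialization w_(0)
w_n = np.random.randn(Nt) + 1j * np.random.randn(Nt)
w_n /= np.linalg.norm(w_n)

for _ in range(10):                  # CCCP outer iterations
    w = cp.Variable(Nt, complex=True)
    r = cp.Variable(K, nonneg=True)
    constraints = [cp.sum_squares(w) <= 1]
    for k in range(K):
        # Linearization L(w; w_(n)) of |a^H w|^2 around w_(n), Eq. (13)
        g0 = np.real(w_n.conj() @ A[k] @ w_n)
        lin = g0 + 2 * cp.real(w_n.conj() @ A[k] @ (w - w_n))
        constraints.append(lin >= r[k])
    cp.Problem(cp.Maximize(t @ cp.log(1 + rho * r)), constraints).solve()
    w_n = w.value
```

Each convexified subproblem is an exponential-cone program, so a conic solver such as SCS (shipped with CVXPY) can handle it.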
By setting the initial solution ${{\mathbf{w}}_{\left(0\right)}}$ randomly, the CCCP algorithm guarantees that ${{\mathbf{w}}_{\left(n\right)}}$ converges to a Karush–Kuhn–Tucker (KKT) solution ${{\mathbf{w}}^{*}}$. Finally, for the hybrid beamforming architecture, we need to design the analog beamformer ${{\mathbf{W}}_{\text{A}}}$ and the digital weights ${{\mathbf{w}}_{\text{D}}}$ such that ${{\mathbf{w}}^{*}}={{\mathbf{W}}_{\text{A}}}{{\mathbf{w}}_{\text{D}}}$. The closed-form solution is given in [12], which requires only 2 radio frequency (RF) chains. For the analog beamforming architecture, we can obtain the solution by replacing the constraint ${\left\|{\mathbf{w}}\right\|^{2}}\leqslant 1$ with $\left|{{w_{i}}}\right|<1/\sqrt{{N_{\text{t}}}}$ and normalizing the amplitude of ${{\mathbf{w}}^{*}}$.

Figure 2: Beam patterns of the proposed codebook for the hybrid beamforming architecture with ${N_{\text{t}}}=L=128$: (a) $\varepsilon=0.02$; (b) $\varepsilon=0.05$.

Fig. 2 shows the beam patterns of the proposed codebook for the hybrid beamforming architecture. Compared with the traditional codebook, where all beams have the same width, the optimized beam patterns gradually broaden as the beam index $i$ increases. Moreover, the beams also become wider as $\varepsilon$ gets larger. As a result, the equivalent spatial angles of all subcarriers can be covered by the proposed beams, which avoids data rate drops at the edge subcarriers. To evaluate the performance of the proposed codebook theoretically, we consider ideal beams with the enlarged coverage $[\left({1-\varepsilon}\right)\phi_{i}^{\text{L}},\left({1+\varepsilon}\right)\phi_{i}^{\text{R}}]$ as an approximation. In this way, all subcarriers have the same data rates, and the beam gain becomes ${g_{i}}=L/\left({1-\varepsilon+2i\varepsilon}\right)$. The average SE is derived as $\bar{R}=\frac{2}{L}\sum\limits_{i=1}^{L/2}{{{\log}_{2}}\left({1+\frac{{\rho L}}{{1-\varepsilon+2i\varepsilon}}}\right)}.$ (14)

## V Simulation Results

In this section, simulation results are presented to verify the theoretical analysis and demonstrate the performance of the proposed codebook. We set ${N_{\text{t}}}=L=128$ and $\rho=0\,\mathrm{dB}$. Since the ideal beams achieve the highest SE when $\varepsilon=0$, we regard this performance as a baseline and normalize the simulation results. As shown in Fig. 3, the following conclusions can be drawn. * • When $\varepsilon$ is small, the ideal beams outperform the other schemes. This is because ideal beams neglect hardware limitations and have no power leakage. * • As $\varepsilon$ increases, beam squint becomes the bottleneck that limits the average SE. The ideal beams with traditional coverage cannot ensure that the edge subcarriers have sufficient data rates, so their performance is the worst. Moreover, we can observe that the theoretical result is consistent with the simulation results. By contrast, the ideal beams with enlarged coverage achieve a better performance, and can be seen as a theoretical approximation of the proposed codebook. However, since this scheme does not consider the weights $t(\varphi)$ in different directions, there is a small performance gap. * • The proposed codebook outperforms the widely used discrete Fourier transform (DFT) codebook [3] and significantly slows down the performance degradation. Since the hybrid architecture enables finer control over the beam patterns, its performance is better than that of the analog architecture.

Figure 3: Normalized SE against beam squint factor under different codebooks.
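As a quick numerical check, the following sketch (an illustration we add here, using the Section V parameters $N_{\text{t}}=L=128$ and $\rho=0$ dB) evaluates the small-squint approximation (9) for the traditional ideal beams against the enlarged-coverage expression (14); recall that (9) is only valid for $\varepsilon\leqslant 2/L$.

```python
import numpy as np

L, rho = 128, 1.0   # codebook size and normalized SNR (0 dB), as in Sec. V

def avg_se_traditional(eps):
    # Eq. (9): traditional ideal beams, valid for eps <= 2/L
    return np.log2(1 + rho * L) * (1 - (0.25 + L / 8) * eps)

def avg_se_enlarged(eps):
    # Eq. (14): ideal beams with enlarged coverage
    i = np.arange(1, L // 2 + 1)
    return (2 / L) * np.sum(np.log2(1 + rho * L / (1 - eps + 2 * i * eps)))

for eps in (0.0, 0.005, 0.01, 0.015):   # 2/L = 0.015625 for L = 128
    print(f"eps={eps}: traditional={avg_se_traditional(eps):.3f},"
          f" enlarged={avg_se_enlarged(eps):.3f} bit/s/Hz")
```

Both expressions coincide at $\varepsilon=0$, and the gap between them grows with $\varepsilon$, mirroring the normalized-SE curves of Fig. 3.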
## VI Conclusion

This letter investigates the beam squint effect in a wideband beamforming system. Based on the ideal beam pattern, we analyze how beam squint affects the data rates of subcarriers and derive the expression of the average SE. Then, we design the optimal codebook to combat the beam squint effect. By spreading the beam coverage, the proposed scheme mitigates the performance degradation and outperforms traditional schemes.

## References

* [1] W. Roh _et al._, “Millimeter-wave beamforming as an enabling technology for 5G cellular communications: Theoretical feasibility and prototype results,” _IEEE Commun. Mag._, vol. 52, no. 2, pp. 106–113, Feb. 2014. * [2] J. Zhang, Y. Huang, Q. Shi, J. Wang, and L. Yang, “Codebook design for beam alignment in millimeter wave communication systems,” _IEEE Trans. Commun._, vol. 65, no. 11, pp. 4980–4995, Nov. 2017. * [3] Y. T. Wu, Y. Y. Zhao, and F. Yu, “Comparison of codebooks for beamforming in limited feedback MIMO systems,” in _IEEE Int. Conf. on Computer Science and Automation Engineering (CSAE)_, vol. 2, May 2012, pp. 32–36. * [4] J. H. Brady and A. M. Sayeed, “Wideband communication with high-dimensional arrays: New results and transceiver architectures,” in _IEEE Int. Conf. Commun. Workshop_, Jun. 2015, pp. 1042–1047. * [5] M. Cai, K. Gao, D. Nie, B. Hochwald, J. N. Laneman, H. Huang, and K. Liu, “Effect of wideband beam squint on codebook design in phased-array wireless systems,” in _IEEE Global Commun. Conf. (GLOBECOM)_, Dec. 2016, pp. 1–6. * [6] X. Liu and D. Qiao, “Space-time block coding-based beamforming for beam squint compensation,” _IEEE Wireless Commun. Lett._, vol. 8, no. 1, pp. 241–244, Feb. 2019. * [7] F. Yang, J.-B. Wang, M. Cheng, J.-Y. Wang, M. Lin, and J. Cheng, “A partially dynamic subarrays structure for wideband mmWave MIMO systems,” _IEEE Trans. Commun._, vol. 68, no. 12, pp. 7578–7592, Dec. 2020. * [8] B. Liu, W. Tan, H. Hu, and H. Zhu, “Hybrid beamforming for mmWave MIMO-OFDM system with beam squint,” in _IEEE Annu. Int. Symp. on Personal, Indoor and Mobile Radio Commun. (PIMRC)_, Sep. 2018, pp. 1422–1426. * [9] I. Laurinavicius, H. Zhu, J. Wang, and Y. Pan, “Beam squint exploitation for linear phased arrays in a mmWave multi-carrier system,” in _IEEE Global Commun. Conf. (GLOBECOM)_, Dec. 2019, pp. 1–6. * [10] W. Fan, C. Zhang, and Y. Huang, “Flat beam design for massive MIMO systems via Riemannian optimization,” _IEEE Wireless Commun. Lett._, vol. 8, no. 1, pp. 301–304, Sep. 2019. * [11] H. Yu, P. Guan, Y. Wang, and Y. Zhao. Supplementary material. [Online]. Available: https://github.com/yuhongkang/Supplementary-Material. * [12] X. Zhang, A. F. Molisch, and S.-Y. Kung, “Variable-phase-shift-based RF-baseband codesign for MIMO antenna selection,” _IEEE Trans. Signal Process._, vol. 53, no. 11, pp. 4091–4103, Nov. 2005.
# Exponential Integration for Efficient and Accurate Multi-Body Simulation with Stiff Viscoelastic Contacts

Bilal Hammoud1, Luca Olivieri2, Ludovic Righetti1, Justin Carpentier3, Andrea Del Prete2. 1Tandon School of Engineering, New York University, New York, USA. 2Dept. of Industrial Engineering, University of Trento, Italy. 3Paris, France.

###### Abstract

The simulation of multi-body systems with frictional contacts is a fundamental tool for many fields, such as robotics, computer graphics, and mechanics. Hard frictional contacts are particularly troublesome to simulate because they make the differential equations stiff, calling for computationally demanding implicit integration schemes. We suggest tackling this issue by using exponential integrators, a long-standing class of integration schemes (first introduced in the 1960s) that in recent years has enjoyed a resurgence of interest. We show that this scheme can be easily applied to multi-body systems subject to stiff viscoelastic contacts, producing accurate results at a lower computational cost than classic explicit or implicit schemes. In our tests with quadruped and biped robots, our method demonstrated stable behaviors with large time steps (10 ms) and stiff contacts ($10^{5}$ N/m). Its excellent properties, especially for fast and coarse simulations, make it a valuable candidate for many applications in robotics, such as simulation, Model Predictive Control, Reinforcement Learning, and controller design.

## I Introduction

The interest of the robotics community in fast and reliable methods to simulate multi-body systems subject to frictional contacts has been constantly growing in the last two decades [1, 2, 3, 4, 5]. This is reasonable considering that simulation is at the core of many robotics applications, such as the development and testing of novel controllers before deployment on hardware. Moreover, many advanced control and planning techniques, such as Model Predictive Control [6] (MPC) and Optimal Control [7], rely on the ability to predict the future behavior of the system. Finally, the current bottleneck of many learning algorithms [8, 9] is their need for huge amounts of data, which therefore can greatly benefit from fast and accurate simulation methods. The simulation of articulated rigid multi-body systems without contacts is a solved problem [10]. The same is not the case for systems with stiff contacts, which can be treated in two ways, each leading to a hard (but different) numerical problem. The first approach, which more closely follows the physical phenomenon of contacts, consists in expressing contact forces as a function of the penetration between bodies. Often, linear spring-dampers have been used in this context [5, 11]. This leads to stiff differential equations that are simple to evaluate, but difficult to integrate because of their _numerical stiffness_ [12]. The second approach tries to circumvent these numerical challenges by assuming contacts to be _infinitely_ rigid. This approach effectively gets rid of the numerical stiffness, but in exchange for _non-smoothness_. One method that has been particularly successful for dealing with the resulting non-smooth equations is velocity-impulse time-stepping [13, 1, 4, 14]. This has become the standard for robot simulation [15], demonstrating stable behaviors with large time steps (several ms), even though it must solve a numerically hard Linear Complementarity Problem (LCP).
Several authors have tried to improve this approach by getting rid of the strict complementarity constraints [2, 16, 3], which are the source of the numerical challenge. However, none of these approaches is currently widely accepted in the robotics community, mainly because of the unclear effects of the introduced numerical regularizations/relaxations on the physics (i.e., relaxations can be interpreted as implicit spring-dampers, but they are not an explicit part of the modeling).

Figure 1: Snapshots from our simulation tests with a biped and a quadruped robot.

The approach we advocate for in this article is based on a well-known soft contact model: the linear spring-damper. Instead of using explicit integration schemes, which require small time steps, or implicit schemes, which require solving nonlinear systems of equations and introduce artificial viscosity leading to nonphysical behaviors, we use _Exponential Integrators_ (EI) [17, 18]. EI are a long-standing class of integration schemes [19] that are particularly suited for stiff differential equations. EI were initially considered impractical because of the computational challenges related to the matrix exponential [20]. However, novel numerical methods to compute the matrix exponential [21, 22, 23] have recently unlocked the potential of EI. This has already been used in computer graphics for simulating deformable objects, modeled as systems of particles [24, 25, 26]. This model is particularly suited for EI because the stiff part of the dynamics is linear, which however is not the case for articulated systems in contact with the environment. Our main contribution is a simulation algorithm that exploits EI to simulate articulated robots in contact with a stiff visco-elastic environment. In particular, this paper addresses the following questions. (1) Does the proposed simulation scheme provide an improvement in terms of speed vs accuracy when compared to classic explicit and implicit methods? (2) Is it possible to develop a simulation scheme that is less sensitive to the choice of contact stiffness and damping? (3) Is it possible to get stable simulations with an increased time step (integration interval)? The last question is of particular importance for using MPC and reinforcement learning. In order to address all the above questions in an efficient way, we apply EI only to the contact dynamics (which is stiff), while using an explicit Euler scheme for the remaining terms (which are not stiff). Our simulation results on quadruped and biped robots (see Fig. 1) show the superior performance of our method compared to standard integration schemes in terms of accuracy, speed and stability. To our knowledge, this class of integrators has never been used before by the robotics community. The article is organized as follows. Section II introduces the problem of multi-body simulation and the basic theory of EI. Section III explains how EI can be used for multi-body simulation with bilateral contacts. This method is then extended to frictional contacts in Section IV. Section V discusses the implementation details of the algorithm. Section VI presents the results and Section VII concludes the article.
## II Background

### II-A Multi-Body Dynamics and Soft Contact Model

We want to simulate a multi-body mechanical system with the following dynamics [10]: $\displaystyle M(q)\dot{v}=u(q,v)+J(q)^{\top}\lambda,$ (1) where $q$ is the robot joint configuration, $v$ is the robot joint velocity, $M$ is the joint-space mass matrix, $u$ contains gravity, nonlinear effects and actuator forces, $J$ is the contact point Jacobian, and $\lambda$ contains the stacked 3D contact forces. We assume a linear spring-damper contact model, which means that the contact forces $\lambda$ are proportional to the inter-penetration of contacting bodies: $\displaystyle\lambda=-K\,(\underbrace{p(q)-p_{0}}_{\Delta p})-B\,(\underbrace{\dot{p}(q,v)-\dot{p}_{0}}_{\Delta\dot{p}}),$ (2) where $p$ and $\dot{p}$ contain the stacked 3D contact point positions and velocities, $p_{0}$ and $\dot{p}_{0}$ contain the stacked 3D anchor point positions and velocities, and $K$ and $B$ are the diagonal stiffness and damping matrices, respectively. The anchor point $p_{0}$ is a _virtual_ point to which the _virtual_ spring and damper are attached. It is typically set to the contact point location when contact is first detected, and $\dot{p}_{0}=0$ as long as contacts are sticking. However, when slipping occurs, $\dot{p}_{0}\neq 0$. A limitation of the “anchor point” model is that some lateral motion of the contact point is always necessary to generate tangential forces. Consequently, pure static friction cannot be modeled with this approach, but it can be well approximated by using large lateral stiffnesses. Dependencies on $q$ and $v$ are dropped in the following to ease notation.

### II-B Explicit Integration Schemes

The classic approach to integrate this dynamical system starts by writing it in standard form. Defining the state as $x_{q}\triangleq(q,v)$, its dynamics is: $\displaystyle\underbrace{\frac{d}{dt}\begin{bmatrix}q\\ v\end{bmatrix}}_{\dot{x}_{q}}=\underbrace{\begin{bmatrix}v\\ M^{-1}(u+J^{\top}\lambda)\end{bmatrix}}_{f(x_{q},u)}$ (3) We can integrate (3) with any numerical integration scheme, such as a high-order Runge-Kutta scheme, or even a simple explicit Euler [12] (very common in robotics): $\displaystyle x_{q}^{+}=x_{q}+\delta t\,f(x_{q},u)$ (4) where $x_{q}^{+}$ represents the next value of the state and $\delta t$ is the integration time step. The problem with this approach is that for large values of $K$ and $B$ the differential equations (3) are _stiff_ [12]. This means that they require very small integration steps for numerical stability. This is the main reason why soft contact models have been mostly abandoned in the last decade, in favor of complementarity-based models (and their relaxations) and time-stepping integration [2, 27, 4].

### II-C Exponential Integrators (EI)

EI [25, 26, 24] are integration schemes particularly suited for stiff dynamical systems whose _stiffness_ comes from a linear part of their dynamics: $\displaystyle\dot{x}=\underbrace{f(x)}_{\text{nonstiff nonlinear function}}+\underbrace{Ax}_{\text{stiff linear function}}$ (5) In this case, using an explicit integration scheme would result in the problems mentioned above.
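To see this numerical stiffness concretely, consider the following minimal sketch (our illustrative toy, not the paper's code): a unit point mass under gravity attached to a visco-elastic contact as in (2), integrated with explicit Euler (4). With $K=10^{5}$ N/m and $B=300$ Ns/m, a linear stability analysis gives the step-size limit $\delta t<B/K=3$ ms.

```python
import numpy as np

# Minimal 1-DoF toy (assumed parameters): unit mass on a spring-damper contact
m, K, B, g = 1.0, 1e5, 300.0, 9.81

def f(x):
    p, v = x                        # penetration and penetration rate
    lam = -K * p - B * v            # visco-elastic contact force, Eq. (2)
    return np.array([v, -g + lam / m])

def euler(dt, T=0.2):
    x = np.zeros(2)
    for _ in range(int(T / dt)):
        x = x + dt * f(x)           # explicit Euler step, Eq. (4)
    return x

print(euler(1e-3))   # stable: settles near the static penetration -m*g/K
print(euler(5e-3))   # above the B/K = 3 ms limit: the state blows up
```

Note that the stability limit $B/K$ shrinks as the contact stiffness increases (for fixed $B$), which is exactly the problem described above.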
Instead, EI exploit the linearity of the stiff part of the dynamics, which can be solved _analytically_ using the matrix exponential, thanks to the well-known solution of linear dynamical systems: $\displaystyle\dot{x}(t)$ $\displaystyle=Ax(t)+b$ (6) $\displaystyle x(t)$ $\displaystyle=e^{tA}x(0)+\int_{0}^{t}e^{\tau A}\,\text{d}\tau\,b$ First-order EI apply the solution (6) to the nonlinear system (5), by interpreting $f(\cdot)$ as $b$ and assuming it remains constant during the integration step: $\displaystyle x(t)=e^{tA}x(0)+\int_{0}^{t}e^{\tau A}\text{d}\tau\,f(x(0))$ (7) Since the stiff part of the equations is integrated via the matrix exponential, large integration steps can be taken.

## III Bilateral Contacts

Our approach consists in using EI to simulate the system (3). The standard approach to apply EI to arbitrary dynamics is to use a 1st-order Taylor expansion: $\displaystyle\dot{x}_{q}(t_{0}+t)\approx\dot{x}_{q}(t_{0})+\frac{\partial f}{\partial x_{q}}\,(x_{q}(t_{0}+t)-x_{q}(t_{0}))$ (8) However, this would require two demanding computations: the dynamics Jacobian, and a matrix exponential with the size of $x_{q}$. In the following we present instead an approach that i) does not require the dynamics Jacobian, and ii) only computes a matrix exponential with twice the size of $\lambda$, which is typically smaller than $x_{q}$ in legged locomotion. To get a differential equation with the form (5) we start by projecting (1) into contact space, pre-multiplying both sides by $JM^{-1}$: $\displaystyle J\dot{v}-\underbrace{JM^{-1}J^{\top}}_{\Upsilon}\lambda=J\underbrace{M^{-1}u}_{\dot{\bar{v}}}$ (9) Then we use the relationship $\ddot{p}=J\dot{v}+\dot{J}v$ to express the contact point accelerations as functions of the robot accelerations: $\displaystyle\ddot{p}-\Upsilon\lambda=J\dot{\bar{v}}+\dot{J}v$ (10) $\displaystyle\ddot{p}+\Upsilon K\Delta p+\Upsilon B\Delta\dot{p}=\underbrace{J\dot{\bar{v}}+\dot{J}v}_{\ddot{\bar{p}}}$ Since $\dot{p}_{0}$ is always null for bilateral contacts, we have $\ddot{p}_{0}=0$, and thus we can write the contact point dynamics as: $\displaystyle\frac{d}{dt}\begin{bmatrix}\Delta p\\ \Delta\dot{p}\end{bmatrix}=\underbrace{\begin{bmatrix}0&I\\ -\Upsilon K&-\Upsilon B\end{bmatrix}}_{A}\underbrace{\begin{bmatrix}\Delta p\\ \Delta\dot{p}\end{bmatrix}}_{x}+\underbrace{\begin{bmatrix}0\\ \ddot{\bar{p}}\end{bmatrix}}_{b}$ (11) This dynamical system does not have the same form as (5) because $\Upsilon$ (and thus $A$) depends on $q$. However, $\Upsilon$ is typically a well-conditioned function, meaning that it changes little for small variations of $q$. The same holds for $\ddot{\bar{p}}$ (and thus $b$), which is why multi-body systems without contacts can typically be integrated with large time steps ($\approx$5 ms). This means that we can approximate $A$ and $b$ as constants during the integration step, and therefore treat (11) as linear. We can now express the contact forces as: $\displaystyle\lambda(t)$ $\displaystyle=\underbrace{\begin{bmatrix}-K&-B\end{bmatrix}}_{D}x(t)=De^{tA}x(0)+D\int_{0}^{t}e^{\tau A}\text{d}\tau\,b$ (12) Substituting (12) in (3) we can compute the robot accelerations: $\displaystyle\dot{v}(t)$ $\displaystyle=M^{-1}(u+J^{\top}\lambda(t))=\dot{\bar{v}}+M^{-1}J^{\top}Dx(t),$ (13) where we consider all terms constant during the integration step, except for $x(t)$.
Now we can integrate to get the new velocities $v^{+}$: $\displaystyle v^{+}$ $\displaystyle=v+\int_{0}^{\delta t}\dot{v}(\tau)\,\text{d}\tau=$ (14) $\displaystyle=v+\delta t\,\dot{\bar{v}}+M^{-1}J^{\top}D\int_{0}^{\delta t}x(\tau)\,\text{d}\tau$ Finally, we integrate twice to get the new configuration $q^{+}$: $\displaystyle q^{+}$ $\displaystyle=q+\int_{0}^{\delta t}v(\tau)\text{d}\tau=$ (15) $\displaystyle=q+\delta t\,v+\frac{{\delta t}^{2}}{2}\dot{\bar{v}}+M^{-1}J^{\top}D\int_{0}^{\delta t}\int_{0}^{\tau}x(\tau_{1})\,\text{d}\tau_{1}\,\text{d}\tau$

### III-A Integration of Matrix Exponentials

Eq. (14) and (15) are straightforward to compute, except for their last terms, which are: $\displaystyle x_{int}(t)$ $\displaystyle\triangleq\int_{0}^{t}x(\tau)\,\text{d}\tau$ (16) $\displaystyle x_{int2}(t)$ $\displaystyle\triangleq\int_{0}^{t}\int_{0}^{\tau}x(\tau_{1})\,\text{d}\tau_{1}\,\text{d}\tau,$ where $\displaystyle x(t)$ $\displaystyle=e^{tA}x(0)+\int_{0}^{t}e^{\tau A}\,\text{d}\tau\,b$ (17) When $A$ is invertible we can express the integral of $e^{tA}$ as an algebraic function of $e^{tA}$: $\displaystyle\int_{0}^{t}e^{\tau A}\text{d}\tau=A^{-1}(e^{tA}-I)$ (18) However, $A$ is not invertible if the contact Jacobian $J$ is not full-row rank. Luckily, the computation of integrals involving matrix exponentials has been thoroughly investigated [28, 29]. In Section V we show how to compute these integrals indirectly, by simply computing the matrix exponential of an augmented system.

### III-B Extension to non-Euclidean spaces

When $q$ does not belong to a Euclidean space (as in the case of legged robots, where $q$ includes the orientation of the base link), the integration of $q$ is slightly more complicated (while the integration of $v$ remains unchanged). Given the following definition: $\displaystyle v_{mean}$ $\displaystyle\triangleq v+\frac{\delta t}{2}\dot{\bar{v}}+\frac{1}{\delta t}M^{-1}J^{\top}Dx_{int2}(\delta t)$ (19) the integration step of $q$ is computed as: $\displaystyle q^{+}$ $\displaystyle=\text{integrate}(q,\delta t\,v_{mean}),$ (20) where the function _integrate($\cdot$)_ performs integration in the non-Euclidean space of $q$.

## IV Frictional Contacts

So far we have assumed that contact forces were bilateral. However, we typically want to simulate unilateral contacts, where forces oppose penetration but do not oppose the detachment of bodies. Assuming that the contact forces are expressed in a local reference frame with the z direction aligned with the contact normal, unilateral forces must satisfy: $\displaystyle f_{i}^{z}\geq 0\qquad\forall i$ (21) Moreover, tangential forces are typically limited as well. Assuming a Coulomb friction model we have: $\displaystyle\sqrt{(f_{i}^{x})^{2}+(f_{i}^{y})^{2}}\leq\mu f_{i}^{z}\qquad\forall i,$ (22) where $\mu\in\mathbb{R}^{+}$ is the coefficient of friction (we assume that static and dynamic coefficients of friction are equal). We can represent the constraints (21) and (22) as $\lambda\in\mathcal{K}_{\mu}$, with $\mathcal{K}_{\mu}$ being a second-order cone.

### IV-A Force Projection

To account for these constraints, when the value of $\lambda(t)$ computed by (12) is outside $\mathcal{K}_{\mu}$, we should project it onto the boundaries of $\mathcal{K}_{\mu}$. However, we do not know how to check this constraint in continuous time.
In the same spirit as time-stepping simulators [1], we suggest checking the friction constraints on the average value of $\lambda(t)$ during the integration step, which is: $\displaystyle\bar{\lambda}\triangleq\frac{1}{\delta t}\int_{0}^{\delta t}\lambda(\tau)\,\text{d}\tau=\frac{1}{\delta t}Dx_{int}(\delta t)$ (23) If $\bar{\lambda}\notin\mathcal{K}_{\mu}$, then we compute its projection on the boundaries of the friction cone $\lambda_{pr}=\text{proj}_{\mathcal{K}_{\mu}}(\bar{\lambda})$ and use it to compute the next state: $\displaystyle\dot{v}_{pr}\triangleq M^{-1}(u+J^{\top}\lambda_{pr})$ (24) $\displaystyle v^{+}=v+\delta t\,\dot{v}_{pr},\qquad q^{+}=q+\delta t\,v+\frac{\delta t^{2}}{2}\dot{v}_{pr}$ Note that in case $\bar{\lambda}\in\mathcal{K}_{\mu}$, then $\lambda_{pr}=\bar{\lambda}$ and the velocity update in (24) is equivalent to (14). However, the position update in (24) approximates the double integral of $\lambda(t)$ assuming a constant force ($\lambda_{pr}$), and so it is not equivalent to (15) in general. In order to exploit also the double integral of $x(t)$, we can check the friction cone constraints on the average of the average $\lambda(t)$, computed as: $\displaystyle\bar{\bar{\lambda}}\triangleq\frac{2}{\delta t^{2}}\int_{0}^{\delta t}\int_{0}^{\tau}\lambda(\tau_{1})\,\text{d}\tau_{1}\text{d}\tau=\frac{2}{\delta t^{2}}Dx_{int2}(\delta t)$ (25) If $\bar{\bar{\lambda}}\notin\mathcal{K}_{\mu}$, then we project it on the boundaries of the friction cone $\lambda_{pr2}=\text{proj}_{\mathcal{K}_{\mu}}(\bar{\bar{\lambda}})$ and use it to compute the next position: $\displaystyle\dot{v}_{pr2}$ $\displaystyle\triangleq M^{-1}(u+J^{\top}\lambda_{pr2})$ (26) $\displaystyle q^{+}$ $\displaystyle=q+\delta t\,v+\frac{\delta t^{2}}{2}\dot{v}_{pr2}$ Using (26) for the position update and (24) for the velocity update, both updates are equivalent to the original ones in case of no slippage.

### IV-B Anchor Point Update

When slippage occurs, the tangent anchor point state $(p_{0}^{t},\dot{p}^{t}_{0})$ (where the index $t$ indicates the tangent directions) changes, which has two main implications. First, the assumption $\ddot{p}_{0}=0$ that we took to write the contact point dynamics as (11) is no longer valid. This means that, during slippage, (11) is an approximation of the contact point dynamics, based on a “business as usual” assumption (i.e., that the anchor point $p_{0}$ continues slipping at constant velocity). Second, the anchor point state should be updated so that the contact forces at the end of the time step are inside the friction cones. When a contact is slipping, the tangent anchor point velocity converges to $\dot{p}^{t}$. We show it now for the case of a 2D contact, but a similar reasoning can be applied to the 3D case. While a contact is slipping, the tangential force $\lambda^{t}$ remains on the boundary of the friction cone, so we have: $\displaystyle\dot{\lambda}^{t}$ $\displaystyle=\mu\dot{\lambda}^{n}$ (27) $\displaystyle K^{t}(\dot{p}_{0}^{t}-\dot{p}^{t})+B^{t}(\ddot{p}_{0}^{t}-\ddot{p}^{t})$ $\displaystyle=\mu\dot{\lambda}^{n}$ $\displaystyle(\ddot{p}_{0}^{t}-\ddot{p}^{t})$ $\displaystyle=-(B^{t})^{-1}K^{t}(\dot{p}_{0}^{t}-\dot{p}^{t})+\mu(B^{t})^{-1}\dot{\lambda}^{n}$ The last equation shows that, if $\dot{\lambda}^{n}=0$, we have an exponential convergence to zero of $(\dot{p}_{0}^{t}-\dot{p}^{t})$, with rate $(B^{t})^{-1}K^{t}$.
Since $(B^{t})^{-1}K^{t}$ is typically large, whereas $\mu(B^{t})^{-1}\dot{\lambda}^{n}$ is small, we can expect this convergence to be fast. For instance, if $(B^{t})^{-1}K^{t}=10^{3}$ and $\dot{\lambda}^{n}=0$, then after 3 ms $(\dot{p}_{0}^{t}-\dot{p}^{t})$ will be $5\%$ of its initial value. Given this fast convergence, we neglect the transient and, as soon as slippage starts, we set $\dot{p}_{0}^{t}:=\dot{p}^{t}$. Then, we compute $p_{0}^{t}$ so that the contact force is on the boundary of the friction cone: $\displaystyle\lambda$ $\displaystyle:=\text{proj}_{\mathcal{K}_{\mu}}(\lambda)$ (28) $\displaystyle p_{0}^{t}$ $\displaystyle:=p^{t}+(K^{t})^{-1}\lambda^{t}$

## V Computational Aspects

The computational bottleneck of the presented approach is the computation of $x_{int}$ and $x_{int2}$ defined in (16). This section shows how to compute these quantities with a matrix exponential, and how this computation can be sped up.

### V-A Computing $x_{int}$ and $x_{int2}$

Using the results presented in [29] we can compute $x_{int}$ and $x_{int2}$ as: $\displaystyle\begin{bmatrix}x_{int}(t)&x_{int2}(t)\end{bmatrix}=\begin{bmatrix}I_{n}&0_{n\times 3}\end{bmatrix}e^{t\bar{A}}\begin{bmatrix}0_{(n+1)\times 2}\\ I_{2}\end{bmatrix}$ (29) where $n$ is the size of $A$, and $\bar{A}\in\mathbb{R}^{(n+3)\times(n+3)}$ is an augmented matrix: $\displaystyle\bar{A}\triangleq\begin{bmatrix}A&b&x(0)&0\\ 0&0&1&0\\ 0&0&0&1\\ 0&0&0&0\end{bmatrix}$ (30)

### V-B Computing the Matrix Exponential

Using (29) we have transformed the problem of computing (16) into a matrix exponential evaluation. Computing the matrix exponential is a challenging but well-understood numerical problem [21, 30, 22, 23]. As a starting point we have used the scaling&squaring method, as revisited by Higham [21], a widely used method for computing the exponential of small-to-medium size dense matrices. The method scales the matrix by a power of 2 to reduce its norm to order 1, computes a Padé approximant to the matrix exponential, and then repeatedly squares to undo the effect of the scaling. A Padé approximant of a function is its “best” approximation achievable by a ratio of two polynomials $D_{j}(\cdot)$, $N_{j}(\cdot)$ of order $j$: $\displaystyle e^{A}\approx D_{j}(A)^{-1}\,N_{j}(A)$ (31) These approximants are only accurate around zero, so they cannot be used directly if $\|A\|$ is large. When that is the case, the scaling&squaring method is used to reduce $\|A\|$ by exploiting this property of the exponential: $\displaystyle e^{A}=(e^{A/(2^{s})})^{2^{s}}$ (32) The integer scaling parameter $s$ is chosen so that $\|A/2^{s}\|$ is sufficiently small.

### V-C Boosting the Matrix Exponential Computation

Our problem has two features that we can exploit to speed up computation: 1. We do not need double machine precision, i.e., $\approx 10^{-16}$ (which is the target of the algorithm of [21]), because we are typically fine with much larger numerical integration errors, e.g., $\approx 10^{-4}$. 2. We do not need the whole matrix exponential, but only its product with a 2-column matrix, as shown in (29). The first point is easily exploitable. The choice of the scaling parameter $s$ and the polynomial order $j$ is usually optimized to achieve double machine precision with the minimum number of matrix-matrix multiplications. We have empirically found that for our tests we can set $s=0$ and use a relatively low order $j\in[1,2,3,5,7]$, corresponding to $[0,1,2,3,4]$ matrix-matrix multiplications, respectively.
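A minimal Python sketch of the computation in (29)–(30) is given below, using scipy.linalg.expm as a stand-in for the optimized routine described in this section; the small stiff test system (one bilateral 1D contact, with assumed values of $K$, $B$, $b$ and $x(0)$) is ours, for illustration only.

```python
import numpy as np
from scipy.linalg import expm

def exp_integrals(A, b, x0, dt):
    """Return x_int(dt), x_int2(dt) of Eq. (16) for xdot = A x + b, x(0) = x0,
    via the augmented matrix exponential of Eqs. (29)-(30)."""
    n = A.shape[0]
    Abar = np.zeros((n + 3, n + 3))
    Abar[:n, :n] = A          # top-left block A
    Abar[:n, n] = b           # column multiplying the constant input b
    Abar[:n, n + 1] = x0      # column multiplying the initial state x(0)
    Abar[n, n + 1] = 1.0      # chain of integrators
    Abar[n + 1, n + 2] = 1.0
    E = expm(dt * Abar)[:n, n + 1:]   # first n rows, last two columns
    return E[:, 0], E[:, 1]

# Toy 1D contact (assumed values): x = (dp, dp_dot), A as in Eq. (11)
K, B = 1e5, 300.0
A = np.array([[0.0, 1.0], [-K, -B]])
b = np.array([0.0, -9.81])
x0 = np.array([1e-3, 0.0])
x_int, x_int2 = exp_integrals(A, b, x0, dt=1e-2)
```

Here this toy $A$ is invertible, so the result can also be cross-checked against the closed forms (17)–(18).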
Which polynomial order is optimal depends on the specific test, and is discussed in the next section. Regarding the second point, given a matrix $V$, we can directly compute the product $e^{A}V$ by performing the operations in the following order: $\displaystyle V_{1}$ $\displaystyle:=N_{j}(A)\,V$ (33) $\displaystyle e^{A}V$ $\displaystyle:=D_{j}(A)^{-1}\,V_{1}$ This is faster than computing $e^{A}$ and then multiplying it by $V$, because we have to solve the linear system with a much smaller right-hand side ($V_{1}$ rather than $N_{j}$). Finally, we have also observed that the preprocessing step suggested in [21], which uses _matrix balancing_, is extremely effective at reducing $\|A\|$ in our tests. This is crucial to achieve accurate results with low polynomial orders, therefore speeding up computations. Further details can be found in our open-source online repository (https://github.com/andreadelprete/consim).

## VI Results

We assess the performance of our simulation algorithm (_Expo_) by comparing it to Implicit Euler (_Eul-imp_), Runge-Kutta 4 (_RK4_) and explicit Euler (_Eul-exp_). Our implementation of _Eul-imp_ is described in the Appendix. Our implementation of _RK4_ is standard, whereas _Eul-exp_ was implemented as follows: $\displaystyle v^{+}=v+\delta t\,\dot{v},\qquad q^{+}=\text{integrate}(q,\delta t\,v+\frac{\delta t^{2}}{2}\,\dot{v})$ (34) Our results try to answer the following questions: 1. Can our approach (compared to the others) achieve higher accuracy for equal computation time, or equal accuracy for smaller computation time? (Section VI-C) 2. How sensitive is the simulator accuracy to contact stiffness and damping? (Section VI-D) 3. What is the maximum integration time step that results in a _stable_ motion? (We say that a simulation is “stable” if the robot state remains bounded.) (Section VI-E) 4. How accurately can (11) predict future contact forces when assuming constant $A$ and $b$? (Section VI-F) 5. How much computation time is spent in the different operations of our simulator? Is there room for improvement? (Section VI-G)

### VI-A Accuracy Metric

Following an approach similar to [15], we measure accuracy with a _local integration error_. We compute the ground-truth trajectory $x_{q}(t)$ using the simulator under analysis with an extremely small time step $\delta t=1/64$ ms. Let us define $\hat{x}_{q}(t;t-\delta t_{c},x_{q}(t-\delta t_{c}))$ as the state at time $t$ obtained by numerical integration starting from the ground-truth state $x_{q}(t-\delta t_{c})$, where $\delta t_{c}(\geq\delta t)$ is the time step of the controller. We define the _local_ integration error as the error accumulated over one control time step: $e(t)\triangleq\|x_{q}(t)\ominus\hat{x}_{q}(t;t-\delta t_{c},x_{q}(t-\delta t_{c}))\|_{\infty}$ where $\ominus$ is a difference operator on the space of $x_{q}$. In the numerical integration literature [12] the _local_ integration (or truncation) error is typically defined using the integration step $\delta t$ rather than the controller step $\delta t_{c}$. We chose to use the controller step to make errors comparable across tests with different integration steps (as in [15]).

### VI-B Test Description

TABLE I: Controller time steps.
Test | $\delta t_{c}$ [ms] | Test | $\delta t_{c}$ [ms]
---|---|---|---
Solo-squat | 40 | Solo-trot | 2
Solo-jump | 10 | Romeo-walk | 40

To evaluate the trade-off between accuracy and computation time, we tested each simulator with different time steps.
For _Expo_, _RK4_ and _Eul-imp_ we started from $\delta t=1/8$ ms and went up to the controller time step $\delta t=\delta t_{c}$ with a logarithmic step of 2 (i.e., 1/8, 1/4, ..., $\delta t_{c}/2$, $\delta t_{c}$). For _Eul-exp_ we used the same approach, but starting from a value of $\delta t$ resulting in roughly the same computation time as _Expo_. For every test we set $\delta t_{c}$ to the largest value that still ensured control stability (see Table I). Since our main interest lies in legged robots, our tests focused on quadrupeds and bipeds: * • _Solo-squat_: Quadruped robot Solo [31] performing a squatting motion. * • _Solo-jump_: Quadruped robot Solo jumping in place. * • _Solo-trot_: Quadruped robot Solo trotting forward. * • _Romeo-walk_: Humanoid robot Romeo [32] taking two walking steps. It is important to note that the quadruped Solo has a total of 13 links, 18 degrees of freedom, 12 of which are actuated revolute joints, and 4 contact points (one on each foot). As for the humanoid Romeo, it consists of 32 links, 37 degrees of freedom, 31 actuated revolute joints, and a total of 8 contact points (four on each foot). In all tests, the control torques have been computed with a feedback controller, either a linear controller or a Task-Space Inverse Dynamics controller. If not specified otherwise, we have used a contact stiffness $K=10^{5}$ N/m and a contact damping $B=300$ Ns/m, which are reasonable values for contacts with a hard floor. For homogeneity, we have used the same friction coefficient $\mu=1$ across all our tests, even though this large friction was only needed for the control stability of the quadruped jumping motion. Besides testing the default _Expo_ simulator, we also tested 5 other versions of the same scheme, where we used a reduced polynomial order in the Padé approximant of the matrix exponential. This leads to a reduced number of matrix-matrix multiplications (mmm), between 0 and 4 (see Section V-C), resulting in a faster but potentially less accurate computation of the matrix exponential. All the code has been implemented in C++ with Python bindings. For all dynamics computations we have used the Pinocchio library [33].

### VI-C Accuracy-Speed Results

Figs. 2 and 3 summarize the results for the four tests. Fig. 2 plots _local errors_ vs _real-time factor_, which measures how many times the simulation was faster than real time. Fig. 3 instead plots _local errors_ vs _integration time step_. Even though our main interest is in the trade-off between computation time and accuracy, which is depicted in Fig. 2, we decided to report also the accuracy as a function of the integration time step in Fig. 3, to provide more information about the behavior of the different methods.

Figure 2: Local integration errors vs real-time factors: (a) Solo squat; (b) Solo jump; (c) Solo trot; (d) Romeo walk. The label _mmm-1_ in the legend corresponds to using the default number of matrix-matrix multiplications (mmm) in the computation of the matrix exponential.

Figure 3: Local integration errors vs integration time step: (a) Solo squat; (b) Solo jump; (c) Solo trot; (d) Romeo walk.

Overall, _Expo_ outperformed the other methods in all tests, showing faster computation for equal accuracy, or greater accuracy for equal computation time. Surprisingly, the second best method overall was the simple _Eul-exp_, even though it was partially beaten by _Eul-imp_ in “solo-squat”. _RK4_ was comparable to _Eul-exp_ for small time steps, but surprisingly worse for large time steps.
These results show a sudden increase of the integration error of _Eul-exp_ and _RK4_ for large real-time factors—corresponding to large $\delta t$. This is because of the poor stability of explicit methods. _Eul-imp_ sometimes failed to converge to the desired error threshold ($10^{-6}$), in particular when using large integration steps. This is not surprising because the system dynamics is discontinuous at impacts, and _Eul-imp_ uses a gradient-based method (Newton) that is suited for smooth systems. Despite this, Fig. 3 shows that in most cases _Eul-imp_ gives integration errors similar to _Expo_ for the same integration time step. More precisely, _Eul-imp_ is almost indistinguishable from _Expo_ in “solo-squat” for $\delta t\leq 5$ ms, whereas it is unstable for larger $\delta t$. _Eul-imp_ is a bit worse than _Expo_ in “solo-jump” and “romeo-walk” (but only for large time steps), and slightly better in “solo-trot”. Overall, _Eul-imp_ performed similarly to _Expo_ for the same $\delta t$, but resulted in much larger computation times, making it often the worst one in terms of accuracy-speed trade-off. _Expo_ instead shows a graceful degradation of accuracy for large real-time factors, making it an excellent candidate for fast low-accuracy simulations, which are typically desirable in MPC. In general, the _Expo_ versions using a reduced number of mmm outperformed the standard _Expo_, but which number of mmm is optimal depends on the specific test and time step. As expected, when $\delta t$ is smaller we can use a lower number of mmm. Automatically finding the optimal number of mmm is an interesting direction for future work.

### VI-D Stiffness and Damping

Figure 4: Local integration errors vs contact stiffness and damping ratio for the “solo-trot” test, using a fixed integration time step for _Eul-exp_ (1/2 ms), _Expo_ (2 ms) and _Eul-imp_ (2 ms): (a) varying contact stiffness (with a fixed damping ratio of 0.5); (b) varying contact damping ratio (with a fixed stiffness of $10^{5}$ N/m).

This subsection investigates the sensitivity of _Expo_, _Eul-exp_ and _Eul-imp_ to contact stiffness and damping ratio. The damping ratio is defined as $\frac{B}{2\sqrt{K}}$. A damping ratio of 1 corresponds to a _critically damped_ contact. These results are based on the “solo-trot” scenario. For _Eul-exp_ we have used $\delta t$=1/2 ms (real-time factor $\approx$50). Then, we have set $\delta t$=2 ms for _Expo_ so that it had roughly the same computation time, and $\delta t$=2 ms for _Eul-imp_, so that it performed similarly to _Expo_ (even though with much larger computation times). Fig. 4 shows the local integration error as we vary the contact stiffness (with fixed damping ratio) and the damping ratio (with fixed contact stiffness). _Expo_ performs consistently as damping ratio and stiffness increase up to $K=10^{8}$, which roughly corresponds to a ground penetration of 0.01 mm for 100 kg of weight on a single contact point. The error of _Eul-exp_ instead is highly affected by both stiffness and damping. _Eul-imp_ performed slightly better than _Expo_ in most cases (but at the cost of being 50 times slower), except for very stiff contacts ($K\geq 10^{6}$), where it led to larger errors.

### VI-E Stability

TABLE II: Maximum integration time steps to achieve a stable motion.
Test | _Expo_ $\delta t$ [ms] | _Eul-exp_ $\delta t$ [ms] | _Eul-imp_ $\delta t$ [ms]
---|---|---|---
Solo-squat | 40 | 40/512 $\approx$ 0.08 | 40/2 = 20
Solo-jump | 10 | 10/64 $\approx$ 0.16 | 10/2 = 5
Solo-trot | 2 | 2/16 $\approx$ 0.13 | 2
Romeo-walk | 40/8 = 5 | 40/32 = 1.25 | 40/4 = 10

To test the stability of the simulators we have repeated the previous tests, but without resetting the state to the ground truth after every control loop. Table II reports, for _Expo_, _Eul-exp_ and _Eul-imp_, the largest integration time step for which the system remained stable. _Expo_ and _Eul-imp_ showed similar stability, both remaining stable for large time steps, mostly between 5 and 40 ms. In the tests “solo-squat” and “solo-jump” _Expo_ was stable even with a larger time step than _Eul-imp_, whereas the opposite happened in “romeo-walk”. _Eul-exp_ instead showed poor stability, needing a time step between 4 and 512 times smaller than _Expo_ to remain stable.

### VI-F Force Prediction

Figure 5: Comparison of contact forces in the normal direction with forces predicted using the matrix exponential: (a) contact velocity at impact of 0.1 m/s; (b) zero contact velocity at impact.

To gain some insights into the internal computations of _Expo_, we show in Fig. 5 the normal contact forces predicted with (11) assuming constant $A$ and $b$—which is the key assumption of our method. Since $A$ and $b$ depend on $q$ and $v$, which vary during the time step, one could expect that neglecting their variations would result in significant force prediction errors. However, Fig. 5 shows that the force prediction can be accurate over a rather long time horizon (20 ms). These forces were generated at the beginning of the “solo-squat” test, using different initial velocities. Since a linear spring-damper model is used, a sudden discontinuity in the contact force is expected when a point reaches contact with a non-zero velocity. To demonstrate this, the normal contact force at a single contact point is plotted for the cases of the trotting quadruped and the walking biped in Fig. 6. Depending on the velocity at contact time, a finite jump in the contact force is observed.

Figure 6: Normal contact force during (a) quadruped trotting and (b) biped walking.

### VI-G Computation Times

TABLE III: Computation times of _Expo_ for “solo-trot”, using zero mmm for the matrix exponential (in parentheses, the values using the standard matrix-exponential routine).

Operation | Mean Time [$\mu$s] | Percentage of Total Time [%]
---|---|---
step | 39 (94) | 100
computeIntegrals | 13 (67) | 33 (72)
prepareExpLDS | 13 | 33 (14)
computeContactForces | 8 | 20 (8)
_Eul-exp_ step | 9 | -

We report here a breakdown of the computation time of our method. The times shown in Tab. III are for the “solo-trot” test, which means that $v\in\mathbb{R}^{18}$ and, most of the time, $\lambda\in\mathbb{R}^{6}$. Most of the computation time (86%) is spent in three operations: computeIntegrals, prepareExpLDS, and computeContactForces. computeIntegrals boils down to computing a matrix exponential. This takes 72% of the total time when using a standard expm routine (without balancing and reduced matrix-matrix multiplications), but it goes down to 33% with our optimized version using zero matrix-matrix multiplications—we have seen in Fig. 2 that often this results in only a small loss of integration accuracy.
The preparation of the linear dynamical system (11) (prepareExpLDS, which includes the computations of $h(q,v)$ with RNEA, $M(q)$ with CRBA, and $\Upsilon$ with a custom sparse Cholesky decomposition) takes an equal amount of time: 13 $\mu$s on average, namely 33% of the total time. The third operation (computeContactForces) takes 20% of the total time, and it includes the computation of all kinematic quantities (contact point positions, velocities, accelerations, Jacobian) and the contact detection. We believe that computation times could be improved, especially for the first two operations. In computeIntegrals we could test novel techniques [23] to compute the matrix exponential, exploit the sparse structure of the matrix $A$, and warm-start the computation using quantities computed at the previous cycle. In prepareExpLDS, the inverse contact-space inertia matrix $\Upsilon$ could be computed faster using a customized algorithm, rather than with products between $J$, $M^{-1}$ and $J^{\top}$ [34]. Overall, it seems impossible to reach the efficiency of a simple _Eul-exp_ step (9 $\mu$s), but we think we could reach computation times in the range [20, 30] $\mu$s.

## VII Conclusions

This paper has presented a new approach to simulate articulated systems subject to stiff visco-elastic frictional contacts. The novelty of the approach lies in the numerical integration, which applies a first-order Exponential Integrator scheme to the contact point dynamics to obtain a time-varying expression of the contact forces. These contact forces are then integrated analytically, exploiting theoretical results on the integrals of the matrix exponential [29], and advanced numerical algorithms for its fast computation [21]. Comparison with standard integration schemes, both implicit and explicit, highlighted the benefits of the proposed approach in terms of speed-accuracy trade-off and stability. Overall, the proposed approach performed similarly to an implicit scheme in terms of stability and accuracy, but without the excessive computational burden. Given its good behavior in the high-speed/low-accuracy regime, we believe that this simulation technique could be an excellent candidate for MPC. To do that, we will need to differentiate the integration scheme, which should be feasible. We also plan to investigate improving the computational efficiency of the needed dynamics quantities and of the matrix exponential.

## References

* [1] M. Anitescu and G. D. Hart, “A constraint-stabilized time-stepping approach for rigid multibody dynamics with joints, contact and friction,” _International Journal for Numerical Methods in Engineering_, vol. 60, pp. 2335–2371, 2004.
* [2] E. Todorov, “Implicit nonlinear complementarity: A new approach to contact dynamics,” in _2010 IEEE International Conference on Robotics and Automation_. IEEE, May 2010, pp. 2322–2329.
* [3] ——, “Convex and analytically-invertible dynamics with contacts and constraints: Theory and implementation in MuJoCo,” in _Proceedings - IEEE International Conference on Robotics and Automation_, 2014, pp. 6054–6061.
* [4] J. Hwangbo, J. Lee, and M. Hutter, “Per-Contact Iteration Method for Solving Contact Dynamics,” _IEEE Robotics and Automation Letters (RAL)_, vol. 3, no. 2, pp. 0–8, 2018.
* [5] E. Drumwright, “An Unconditionally Stable First-Order Constraint Solver for Multibody Systems,” _arXiv preprint arXiv:1905.10828_, 2019.
* [6] Y. Tassa, T. Erez, and E.
Todorov, “Synthesis and stabilization of complex behaviors through online trajectory optimization,” in _Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on_ , 2012, pp. 4906–4913. * [7] D. M. O. Von Stryk, “Numerical solution of optimal control problems by direct collocation,” in _Optimal Control_. Springer, 1993, pp. 129–143. * [8] N. Mansard, A. Del Prete, M. Geisert, S. Tonneau, and O. Stasse, “Using a Memory of Motion to Efficiently Warm-Start a Nonlinear Predictive Controller,” in _IEEE International Conference on Robotics and Automation_ , 2018, pp. 2986–2993. * [9] J. Viereck, J. Kozolinsky, A. Herzog, L. Righetti, and R. O. Aug, “Learning a Structured Neural Network Policy for a Hopping Task,” _IEEE Robotics and Automation Letters (RAL)_ , vol. 3, no. 4, 2018. * [10] R. Featherstone, _Rigid body dynamics algorithms_. New York, NY: Springer, 2014. * [11] K. Yamane and Y. Nakamura, “Stable penalty-based model of frictional contacts,” _Proceedings - IEEE International Conference on Robotics and Automation_ , vol. 2006, no. January, pp. 1904–1909, 2006. * [12] U. M. Ascher and L. R. Petzold, _Computer methods for ordinary differential equations and differential-algebraic equations_. Philadelphia, PA: Siam, 1998. * [13] M. Anitescu, “A Fixed Time-Step Approach for Multibody Dynamics with Contact and Friction,” in _IEEE International Conference on Intelligent Robots and Systems_ , vol. 4, 2003, pp. 3725–3731. * [14] A. Peiret, S. Andrews, J. Kövecses, P. G. Kry, and M. Teichmann, “Schur complement-based substructuring of stiff multibody systems with contact,” _ACM Trans. Graph._ , vol. 38, no. 5, Oct. 2019. [Online]. Available: https://doi.org/10.1145/3355621 * [15] T. Erez, Y. Tassa, and E. Todorov, “Simulation Tools for Model-Based Robotics: Comparison of Bullet, Havok, MuJoCo, ODE and PhysX,” in _International Conference on Robotics and Automation_ , 2015. * [16] E. Drumwright and D. A. Shell, “Modeling contact friction and joint friction in dynamic robotic simulation using the principle of maximum dissipation,” in _Springer Tracts in Advanced Robotics_ , vol. 68, no. STAR, 2010, pp. 249–266. * [17] J. Loffeld and M. Tokman, “Comparative performance of exponential, implicit, and explicit integrators for stiff systems of ODEs,” _Journal of Computational and Applied Mathematics_ , 2013. * [18] Y. J. E. Chen, S. H. Sheen, U. M. Ascher, and D. K. Pai, “Siere: A hybrid semi-implicit exponential integrator for efficiently simulating stiff deformable objects,” _ACM Trans. Graph._ , vol. 40, no. 1, Aug. 2020. [Online]. Available: https://doi.org/10.1145/3410527 * [19] J. Certaine, “The solution of ordinary differential equations with large time constants,” _Mathematical methods for digital computers_ , vol. 1, pp. 128–132, 1960. * [20] C. Moler and C. Van Loan, “Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later,” _SIAM Review_ , vol. 45, no. 1, pp. 3–49, 2003. * [21] N. J. Higham, “The scaling and squaring method for the matrix exponential revisited,” _SIAM Journal on Matrix Analysis and Applications_ , vol. 26, no. 4, 2005. * [22] A. H. Al-Mohy and N. J. Higham, “Computing the Action of the Matrix Exponential, with an Application to Exponential Integrators,” _SIAM Journal of Scientific Computing_ , vol. 33, no. 2, pp. 488–511, 2011. * [23] J. Sastre, J. Ibáñez, and E. Defez, “Boosting the computation of the matrix exponential,” _Applied Mathematics and Computation_ , vol. 340, no. August, pp. 206–220, 2019. * [24] D. L. Michels, G. 
A. Sobottka, and A. G. Weber, “Exponential integrators for stiff elastodynamic problems,” _ACM Transactions on Graphics_ , vol. 33, no. 1, 2014. * [25] Y. J. Chen, U. M. Ascher, and D. K. Pai, “Exponential Rosenbrock-Euler Integrators for Elastodynamic Simulation,” _IEEE Transactions on Visualization and Computer Graphics_ , vol. 24, no. 10, pp. 2702–2713, 2018. * [26] V. T. Luan and D. L. Michels, “Explicit Exponential Rosenbrock Methods and their Application in Visual Computing,” _arXiv preprint arXiv:1805.08337_ , pp. 1–18, 2018. * [27] E. Todorov, T. Erez, and Y. Tassa, “MuJoCo: A physics engine for model-based control,” in _Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on_ , 2012. * [28] C. F. Van Loan, “Computing Integrals Involving the Matrix Exponential,” _IEEE Transactions on Automatic Control_ , vol. 23, no. 3, pp. 395–404, 1978\. * [29] F. Carbonell, J. C. Jímenez, and L. M. Pedroso, “Computing multiple integrals involving matrix exponentials,” _Journal of Computational and Applied Mathematics_ , vol. 213, no. 1, pp. 300–305, 2008. * [30] A. H. Al-Mohy and N. J. Higham, “A New Scaling and Squaring Algorithm for the Matrix Exponential,” _SIAM Journal on Matrix Analysis and Applications_ , vol. 31, no. 3, 2010. * [31] F. Grimminger, A. Meduri, M. Khadiv, J. Viereck, M. Wuthrich, M. Naveau, V. Berenz, S. Heim, F. Widmaier, T. Flayols, J. Fiene, A. Badri-Sprowitz, and L. Righetti, “An Open Torque-Controlled Modular Robot Architecture for Legged Locomotion Research,” _IEEE Robotics and Automation Letters_ , vol. 5, no. 2, pp. 3650–3657, 2020. * [32] “Project Romeo http://projetromeo.com.” * [33] J. Carpentier, G. Saurel, G. Buondonno, J. Mirabel, F. Lamiraux, O. Stasse, and N. Mansard, “The Pinocchio C++ library: A fast and flexible implementation of rigid body dynamics algorithms and their analytical derivatives,” _Proceedings of the 2019 IEEE/SICE International Symposium on System Integration, SII 2019_ , pp. 614–619, 2019. * [34] R. Featherstone, “Exploiting sparsity in operational-space dynamics,” _The International Journal of Robotics Research_ , vol. 29, no. 10, pp. 1–21, 2010.
# Faster Convergence in Deep-Predictive-Coding Networks to Learn Deeper Representations

Isaac J. Sledge, _Member, IEEE_, and José C. Príncipe, _Life Fellow, IEEE_

Isaac J. Sledge is the Senior Machine Learning Research Scientist and Dr. Delores M. Etter Assistant Secretary of the Navy Emergent Engineer with the Advanced Signal Processing and Automated Target Recognition Branch, Naval Surface Warfare Center, Panama City, FL, USA (email: [email protected]). He is the director of the Machine Intelligence Defense (MIND) lab at the US Naval Sea Systems Command. José C. Príncipe is the Don D. and Ruth S. Eckis Chair and the Distinguished Professor with both the Department of Electrical and Computer Engineering and the Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA (email: [email protected]). He is the director of the Computational NeuroEngineering Laboratory (CNEL) at the University of Florida. The work of the authors was funded by grants N00014-14-1-0542 (Marc Steinberg, ONR 35), N00014-19-WX-00636 (Marc Steinberg, ONR 35), N00014-21-WX-01657 (Thomas McKenna, ONR 34), and N00014-21-WX-00476 (J. Tory Cobb, ONR 32) from the US Office of Naval Research. The first author was also supported by in-house laboratory independent research (ILIR) grant N00014-19-WX-00687 (Frank Crosby) from the US Office of Naval Research and a Naval Innovation in Science and Engineering (NISE) grant from NAVSEA.

###### Abstract

Deep-predictive-coding networks (DPCNs) are hierarchical, generative models. They rely on feed-forward and feed-back connections to modulate latent feature representations of stimuli in a dynamic and context-sensitive manner. A crucial element of DPCNs is a forward-backward inference procedure to uncover sparse, invariant features. However, this inference is a major computational bottleneck. It severely limits the network depth due to learning stagnation. Here, we prove why this bottleneck occurs. We then propose a new forward-inference strategy based on accelerated proximal gradients. This strategy has faster theoretical convergence guarantees than the one used for DPCNs. It overcomes learning stagnation. We also demonstrate that it permits constructing deep and wide predictive-coding networks. Such convolutional networks implement receptive fields that capture well the entire classes of objects on which the networks are trained. This improves the feature representations compared with our lab’s previous non-convolutional and convolutional DPCNs. It yields unsupervised object recognition that surpasses convolutional autoencoders and is on par with convolutional networks trained in a supervised manner.

###### Index Terms

Bio-inspired vision, predictive coding, unsupervised learning

## 1\. Introduction

Predictive coding is a promising theory for unsupervised sensory information processing. Under this theory, a hierarchical, generative model [RaoRPN-jour1999a, FristonK-jour2009a] of a dynamic environment is formed. This model is consistently updated to infer possible states of the environment and the physical causes of the environmental stimuli. These causes, in turn, permit reproducing the stimuli [SpratlingMW-jour2011a] (see Section 2). Several predictive coders have been created and their biological plausibility investigated [SrinivasanMV-jour1982a, RaoRPN-jour1999a, JeheeJFM-jour2006a, FristonK-jour2008a]. None of these early contributions, however, has been known to form causes that are highly discriminative for complex stimuli.
Our lab thus developed multi-stage, deep-predictive-coding networks (DPCNs) [ChalasaniR-conf2013a, PrincipeJC-jour2014a, ChalasaniR-jour2015a]. We showed that DPCNs could form discriminative causes of temporal, spatial, and spatio-temporal stimuli in certain cases. Alternate networks later followed [LotterW-conf2017a, HanK-coll2018a].

DPCNs learn about an environment in an unsupervised manner. They can thus be thought of as parameter-light, non-traditional autoencoders. DPCNs differ substantially from autoencoders, though. The former contain feed-forward and recurrent feed-back connections, allowing information to be propagated between stages to stabilize the internal representation. Another distinction is that DPCNs do not have a corresponding decoder. They hence require a self-organizing principle to be effective. That is, they must learn to extract meaningful features, in the form of causes, that cluster the stimuli effectively. This clustering objective is aided by imposing cause sparsity and transformation invariance. Including these cause constraints makes DPCNs generalize well to novel stimuli. Lastly, DPCNs are composed of stages, not layers. Each stage implements a recurrent state model. This permits DPCNs to characterize the dynamics of temporal and spatio-temporal stimuli.

DPCNs exhibit promise for unsupervised object recognition in images and video. Often, convolutional DPCNs like [ChalasaniR-jour2015a] learn sparse, invariant features that are better for classification than those from convolutional autoencoders. Such behavior arises from the interaction of feed-forward and feed-back connections in the DPCNs. It also arises due to the implicit supervision imposed from leveraging temporal information [BakerR-jour2014a]. Multiple presentations of the stimuli additionally facilitate the extraction of spatial and temporal regularities [PerruchetP-jour2006a, AslinRN-jour2012a], which we hypothesize permits high-level object analysis [BradyTF-jour2007a], like object recognition. Sparsity contributes too [ZylberbergJ-jour2011a, CarlsonNL-jour2012a], as it aids in generalization and can preempt overfitting to specific stimuli.

Learning sufficiently robust features in a DPCN is quite computationally intensive. A multi-stage optimization strategy, based on proximal gradients [GulerO-jour1992b, BeckA-jour2009a], is typically used [ChalasaniR-conf2013a, PrincipeJC-jour2014a, ChalasaniR-jour2015a] to conduct feed-forward and feed-back inference of the causes. Both forms of inference are needed for cause self-organization. Sub-quadratic function-value convergence rates are theoretically guaranteed for this strategy [ChambolleA-jour2015a, AttouchH-jour2016a]. Only sub-linear rates are often obtainable, however, due to severe oscillations in the cost. That is, the search is not a pure descent strategy in some cases. This has the dual effect of returning poor causes and doing so slowly.

Due to these optimization difficulties, DPCNs are practically limited to two stages. Two stages may be sufficient for characterizing certain stimuli. However, two stages typically will not yield representations that handle objects in complex environments. The networks exhibit poor stimuli-reconstruction performance when extended beyond two stages due to being stymied by poor convergence [SantanaE-jour2018a]. The deeper stages do not reach a stable cause representation, which impacts the causes in preceding stages. Learning essentially stagnates regardless of how many stimuli are presented.
This, in turn, prevents object recognition for many challenging environments. The causes simply do not organize the stimuli well. Moreover, the causes are not semantically rich enough to handle distractors. Even simple textural backgrounds in visual stimuli can be sufficiently distracting to confound the DPCNs. The causes are also sometimes unable to handle object variability effectively. They cannot always resolve that objects from distinct viewpoints are the same, for instance.

Here, we develop a novel inference process that enables investigators to go beyond the two-stage network limitation that is observed in practice for DPCNs. This leads to what we refer to as accelerated DPCNs (ADPCNs). Both the DPCNs and ADPCNs are unsupervised networks. They share the same underlying architecture, with one exception: the ADPCNs extract convolutional features (see Section 3). Both properties mean that ADPCNs are better suited for discrimination than our lab’s original, non-convolutional DPCNs [ChalasaniR-conf2013a, PrincipeJC-jour2014a]. The improved inference also makes ADPCNs better than our lab’s convolutional-recurrent-predictive networks (CRPNs) [ChalasaniR-jour2015a]. CRPNs similarly rely on slow proximal-gradient inference.

More specifically, we replace the proximal gradient search in the original DPCN with an accelerated version (see Section 3). The accelerated approach in the ADPCN relies on a polynomial inertial sequence for updating the internal feature representations. The inertial sequence has the effect of sufficiently delaying the occurrence of cost oscillation. We prove this (see Appendix A). Monotonically decreasing costs occur throughout almost all of the learning process; when the cost does not decrease, we perform in-place restarts of the inference. The ADPCNs therefore extract meaningful error signals that stabilize the sparse causes early during training. ADPCNs also possess a faster, sub-polynomial rate of function-value convergence compared to the inference scheme used by DPCNs (see Appendix A). ADPCNs hence also exhibit improved empirical convergence over DPCNs.

We are thus able to efficiently train both deep and wide predictive-coding networks whose learning does not stall. The deep convolutional layers of the ADPCN extract progressively richer, transformation-invariant features for sparsely describing complex stimuli. They aid in stimuli generalization. Wide convolutional layers better approximate interactions between stimuli and causes than narrower ones. They promote some measure of stimuli memorization without overfitting, which permits recognizing perceptually similar objects from the same class. While such behaviors can manifest in convolutional DPCNs, they do not due to the aforementioned slow inference.

We show that the deeper and wider feature representations from the ADPCN are far more robust than those obtainable for DPCNs. Even a single stage difference between convolutional DPCNs and ADPCNs leads to a huge improvement in discriminability. In particular, the later-stage causes in the ADPCNs have receptive fields that embody the entirety of the objects being presented, despite the lack of training labels (see Section 3). We observe this phenomenon across a variety of benchmark datasets. It enables multi-class object recognition under different scene conditions. The ADPCNs can handle perspective changes, shape changes, illumination changes, and more. ADPCNs thus form an approximate identity mapping that preserves perceptual difference.
Whole-object sensitivity also yields unsupervised classifiers that are on par with supervised-trained convolutional and convolutional-recurrent networks (see Appendix B). The ADPCNs have orders of magnitude fewer parameters, though, than these other deep networks. ADPCNs, just like DPCNs, are parameter-light, non-traditional autoencoders.

## 2\. Predictive Coding

The objective of predictive coding is to approximate external sensory stimuli using generative, latent-variable models. Such models hierarchically encode residual prediction errors. The prediction errors are the differences between either the actual stimuli or a transformed version of them and the predicted stimuli produced from the underlying latent variables. We refer to these latent variables as causes. We also interchangeably refer to causes as features. By learning in this way, a predictive-coding model modifies its internal representations only for unexpected changes in the stimuli. This enables a model to recall stimuli that it has encountered before. It also enables the model to adapt to new stimuli without disrupting its internal representation of the environment for previously observed stimuli.

We can characterize predictive coding in the following way for temporal and spatio-temporal stimuli. Examples include audio and video, respectively. Both DPCNs and ADPCNs are instances of this general framework.

* Definition 1: Predictive Coding Model. Let $y_{t}\\!\in\\!\mathbb{R}^{p}$ represent a time-varying sensory stimulus at time $t$. The stimuli can be described by an underlying cause, $\kappa_{1,t}\\!\in\\!\mathbb{R}^{d_{1}}$, and a time-varying intermediate state, $\gamma_{1,t}\\!\in\\!\mathbb{R}^{k_{1}}$, through a pair of $\theta$-parameterized mapping functions, $f_{1}:\mathbb{R}^{k_{1}}\\!\to\\!\mathbb{R}^{p}$, the cause-update function, and $g_{1}:\mathbb{R}^{k_{1}}\\!\times\\!\mathbb{R}^{d_{1}}\\!\to\\!\mathbb{R}^{k_{1}}$, the state-transition function. These functions define a latent-variable model, $y_{t}\\!=\\!f_{1}(\gamma_{1,t};\theta)\\!+\\!\epsilon_{1,t},\;\gamma_{1,t}\\!=\\!g_{1}(\gamma_{1,t-1},\kappa_{1,t};\theta)\\!+\\!\epsilon_{1,t}^{\prime}.$ Here, $\epsilon_{1,t}\\!\in\\!\mathbb{R}^{p}$ and $\epsilon_{1,t}^{\prime}\\!\in\\!\mathbb{R}^{k_{1}}$ are noise terms that represent the stochastic and model uncertainty, respectively, in the predictions. This model can be extended to a multi-stage hierarchy by cascading additional $\theta$-parameterized mapping functions, $f_{i}:\mathbb{R}^{k_{i}}\\!\to\\!\mathbb{R}^{d_{i-1}}$ and $g_{i}:\mathbb{R}^{k_{i}}\\!\times\\!\mathbb{R}^{d_{i}}\\!\to\\!\mathbb{R}^{k_{i}}$, at each stage $i$ beyond the first, $\kappa_{i-1,t}\\!=\\!f_{i}(\gamma_{i,t};\theta)\\!+\\!\epsilon_{i,t},\;\gamma_{i,t}\\!=\\!g_{i}(\gamma_{i,t-1},\kappa_{i,t};\theta)\\!+\\!\epsilon_{i,t}^{\prime}.$ Here, $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$, $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$, $\epsilon_{i,t}\\!\in\\!\mathbb{R}^{d_{i-1}}$ and $\epsilon_{i,t}^{\prime}\\!\in\\!\mathbb{R}^{k_{i}}$.

Spatial stimuli, which include images, are a degenerate case of this framework. They have no temporal component, so there is no modification of the states as they are fed back. States are still extracted, though. Predictive coding thus behaves similarly to sparse coding [HyvarinenA-jour2001a] in this case. The main difference is that, for predictive coding, the states are pooled and transformed to form causes. Sparse coding only has notions of states.
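To make Definition 1 concrete, the following minimal sketch rolls out a single-stage model in Python. The linear maps F, A, and B, the dimensions, and the noise scales are illustrative stand-ins for the $\theta$-parameterized functions $f_{1}$ and $g_{1}$; they are not taken from any specific DPCN.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k1, d1 = 16, 8, 4                      # stimulus, state, and cause dimensions

# Hypothetical linear stand-ins for f_1 and g_1.
F = rng.standard_normal((p, k1))          # f_1: maps states to stimuli
A = 0.5 * rng.standard_normal((k1, k1))   # state-transition part of g_1
B = 0.5 * rng.standard_normal((k1, d1))   # cause-driven part of g_1

gamma = np.zeros(k1)                      # state gamma_{1,t}
for t in range(5):
    kappa = rng.standard_normal(d1)       # cause kappa_{1,t} (random for illustration)
    # gamma_{1,t} = g_1(gamma_{1,t-1}, kappa_{1,t}; theta) + eps'_{1,t}
    gamma = A @ gamma + B @ kappa + 0.01 * rng.standard_normal(k1)
    # y_t = f_1(gamma_{1,t}; theta) + eps_{1,t}
    y = F @ gamma + 0.01 * rng.standard_normal(p)
```

Inference, described next, inverts this rollout: given the stimuli $y_{t}$, it estimates the states and causes that best reproduce them.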
For DPCNs and ADPCNs, the causes are made invariant, which aids in discrimination. Feature invariance does not naturally occur in many sparse coding and hierarchical sparse coding models [BoutinV-jour2020a, BoutinV-jour2021a].

For the above framework, both feed-forward, bottom-up and feed-back, top-down processes are used to characterize observed stimuli. For the feed-forward case, the observed stimuli are propagated through the model to extract progressively abstract details. The stimuli are first converted to a series of states that encode either spatial, temporal, or spatio-temporal relationships. The type of relationship depends on the stimuli being considered. These states are then made invariant to various transformations, thereby forming the hidden causes. The causes are latent variables that describe the environment, as we noted above. The causes at lower stages of the model form the observations to the stages above. Hidden causes therefore provide a link between the stages. The states, in contrast, both connect the dynamics over time, to ignore temporal discontinuities [StoneJV-jour2008a], and mediate the effects of the causes on the stimuli [FristonK-jour2008a].

In the feed-back case, the model generates top-down predictions such that the neural activity at one stage predicts the activity at a lower stage. The predictions from a higher level are sent through feed-back connections to be compared to the actual activity. This yields a model uncertainty error that is forwarded to subsequent stages to update the population activity and improve prediction. Such a top-down process repeats until the bottom-up stimuli transformation process no longer imparts any new information. That is, there are no unexpected changes in the stimuli that the model cannot predict. Once this occurs, if the model is able to synthesize the input stimuli accurately using the uncovered features, then it means that it has previously seen a similar observation [SchendanHE-jour2008a].

In short, a predictive-coding model has two processing pathways. Recurrent, top-down connections carry predictions about activity to the lower model levels. These predictions reflect past experience [FristonK-jour2008a]. They form priors to disambiguate the incoming sensory inputs. Bottom-up connections relay prediction errors to higher levels to update the physical causes. The interaction of the feed-forward and feed-back connections [HosoyaT-jour2005a] on the causes enables robust object analysis [AuksztulewiczR-jour2016a] from the observed stimuli. A well-trained predictive-coding model should thus distinguish between objects, despite learning about them in an unsupervised way. This only occurs if the models preserve some notion of perceptual difference.

Figure 1: An overview of the ADPCN architecture. For simplicity, the ADPCN is presented for only one stage, but more stages are implicitly assumed for top-down feed-back purposes. Note that the final stage of the DPCN has no output, unlike in a standard autoencoder network. The goal of the ADPCN is to learn a series of causes that explain the input stimuli, in this case, frames from a video of a bird, and hence recreate them. Each stage in an ADPCN can be roughly decomposed into two inference phases, one for updating the states and the other for updating the causes. State and causal inference relies on intra-stage feed-forward (black lines) and feed-back processes (gray lines), along with intra-stage recurrent feed-back (blue lines).
Much of the network is devoted to feed-back processes to provide self-supervision. Inter-stage feed-back (red line) of higher-level states is used to update the lower-level causes. For the inference process, we denote multiplication of a quantity on a given path with a rounded square. Addition, subtraction, and multiplication of quantities along multiple paths are denoted using circular gated symbols. Pooling is denoted using a down arrow inside a rounded square. The function blocks apply a sparsity operation to the quantities on the given path; corresponding parameter values for these operations are offset from these blocks. Function blocks offset from circular gates denote rectified, linear-unit operations. Norm blocks offset from the circular gates denote squared-norm operations. We omit showing the actual optimization process but note that the feed-forward connections are largely devoted to computing gradients of the two cost functions, $\nabla_{\gamma_{i,t}}\mathcal{L}_{1}$ and $\nabla_{\kappa_{i,t}}\mathcal{L}_{2}$, that are used to update the states $\gamma_{i,t}$ and causes $\kappa_{i,t}$. We also omit showing the update of $\lambda_{i,t}$ along with the updates for the matrices $C_{i}$, $D_{i}$, and $G_{i}$.

## 3\. Deep Predictive Coding

We propose an efficient architecture for the above hierarchical, latent-variable model. This model is suitable for uncovering discriminative details from the stimuli. More specifically, we consider a faster, convolutional DPCN, the ADPCN. ADPCNs can extract highly sparse, invariant features for either dynamic or static stimuli (see Section 3.1). We then show how to effectively infer the ADPCN’s latent variables using a fast proximal gradient scheme (see Section 3.2). This optimization process permits effectively forming deep feature hierarchies that preserve perceptual differences. It typically overcomes the learning stagnation that we observed in DPCNs.

Convergence properties are presented in the online appendix (see Appendix A). In this appendix, we prove why stagnation occurs in the DPCNs. We also quantify the convergence rate of the DPCNs and ADPCNs to show that the latter, theoretically, converge more quickly than the former. This justifies our new inference process, as do our empirical results.

## 3.1 ADPCN Cost Functions

ADPCNs consist of two phases at each stage, which are outlined in Fig. 1. The first phase entails inferring the hidden states, which are a feature-based representation used to describe the stimuli. States are formed at the first phase via sparse coding in conjunction with a temporal-state-space model. Stimuli are mapped to an over-complete dictionary of convolutional filters. Subsequent ADPCN stages follow the same process, with the only change being that the hidden causes assume the role of the observed stimuli.

We define state inference via a least-absolute-shrinkage-and-selection-operator (LASSO) cost. We present this for the case of single-channel stimuli, for ease of readability. The extension to multiple channels is straightforward.

* Definition 2: State LASSO Cost. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states at time $t$ and at model stage $i$. Let $C_{i}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i}}$ be a hidden-state-transition matrix. Let $D_{i}^{\top}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}$ be a Toeplitz-form matrix with $q_{i}\\!\in\\!\mathbb{N}_{+}$ filters structured as in [SulamJ-jour2018a].
The state-inference cost function to be minimized, with respect to $\gamma_{i,t}$, $C_{i}$, and $D_{i}^{\top}$, is given by $\mathcal{L}_{1}(\gamma_{i,t},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})=\frac{1}{2}\Bigg{(}\|\kappa_{i-1,t}\\!-\\!D_{i}^{\top}\gamma_{i,t}\|_{2}^{2}+\alpha_{i}\|\gamma_{i,t}\\!-\\!C_{i}\gamma_{i,t-1}\|_{1}+\sum_{k^{\prime}=1}^{k_{i}}[\lambda_{i,t}]_{k^{\prime}}|[\gamma_{i,t}]_{k^{\prime}}|\Bigg{)},$ where $\kappa_{0,t}\\!=\\!y_{t}$. The first term in this cost quantifies the $L_{2}$ prediction error, $\epsilon_{i,t}\\!=\\!\kappa_{i-1,t}\\!-\\!D_{i}^{\top}\gamma_{i,t}$, at stage $i$. The aim is to ensure that the local reconstruction error between stages is minimized. The second term constrains the next-state dynamics to be described by the state-transition matrix. For static stimuli, indexed by $t$, the state feed-back is replaced by $\kappa_{i,t}\\!-\\!D_{i+1}^{\top}\gamma_{i+1,t}$. The strength of the recurrent feed-back connection is driven by $\alpha_{i}\\!\in\\!\mathbb{R}_{+}$. The transitions are $L_{1}$-sparse to make the state-space representation consistent. Without such a norm penalty, the innovations would not be sparse due to the feed-back. The final term enforces $L_{1}$-sparsity of the states, with the amount controlled by $\lambda_{i,t}\\!\in\\!\mathbb{R}_{+}^{k_{i}}$.

* Proposition 1. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states at time $t$ and at model stage $i$. Let $D_{i}^{\top}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}$ be a Toeplitz-form matrix of $q_{i}\\!\in\\!\mathbb{N}_{+}$ filters. The matrix-vector multiplication $D_{i}^{\top}\gamma_{i,t}$ is functionally equivalent to convolution for all stages $i$.

When projected back into the original visual space of the input, the dictionaries define a series of receptive fields. The hidden states, at least for the initial stages of the hierarchy, thus act as basic feature detectors. They often resemble the simple cells in the visual cortex [OlshausenBA-jour1996a]. Ideally, we would prefer $L_{0}$-sparsity to the $L_{1}$ variant used in the state-inference cost, as it does not impose shrinkage on the hidden-state values. However, by sacrificing this property, we gain cost convexity, which aids in efficient numerical optimization with provable convergence.

* Proposition 2. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states at time $t$ and at model stage $i$. Let $C_{i}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i}}$ be a hidden-state-transition matrix. Let $D_{i}^{\top}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}$ be a Toeplitz-form matrix with $q_{i}\\!\in\\!\mathbb{N}_{+}$ filters. Let $\alpha_{i},\lambda_{i,t}\\!\in\\!\mathbb{R}_{+}$. The hidden-state cost function $\mathcal{L}_{1}(\gamma_{i,t},\kappa_{i,t},C_{i},{D_{i}^{\top}};\alpha_{i},\lambda_{i,t})$ is convex for appropriate parameter values.

Note that the state LASSO may switch from convex to non-convex as the variables are updated during inference. We may thus only recover local minimizers, not global ones. The state-based feature representations constructed by the first phase are not guaranteed to be invariant to various transforms. Discrimination can be impeded, as a result. The second ADPCN processing phase thus entails explicitly imposing this behavior. Local translation invariance is attained by leveraging the spatial relationships of the states in neighborhoods via the max-pooling of states.
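As a concrete illustration of this first phase, the sketch below evaluates the state cost of Definition 2 with dense stand-in matrices (the actual $D_{i}^{\top}$ is Toeplitz-structured, per Proposition 1) and applies the non-overlapping max pooling that precedes cause inference. The function and argument names are ours.

```python
import numpy as np

def state_lasso_cost(gamma_t, gamma_prev, kappa_below, D_T, C, alpha, lam):
    """Direct transcription of the state cost L1 of Definition 2."""
    recon = np.sum((kappa_below - D_T @ gamma_t) ** 2)            # L2 prediction error
    dynamics = alpha * np.sum(np.abs(gamma_t - C @ gamma_prev))   # sparse state innovations
    sparsity = np.sum(lam * np.abs(gamma_t))                      # weighted L1 on the states
    return 0.5 * (recon + dynamics + sparsity)

def max_pool_2x2(state_map):
    """Non-overlapping 2x2 max pooling of a 2-D map of state activations."""
    h, w = state_map.shape
    trimmed = state_map[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```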
Invariance to more complex transformations, like rotation and spatial frequency, is made possible through the inference of subsequent hidden causes. Sparse cause inference is driven by a LASSO-based cost that captures non-linear dependencies between components in the pooled states. We present this cost for the case of a single convolutional layer, for ease of readability. We utilize multiple layers in our simulations.

* Definition 3: Cause LASSO Cost. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the bottom-up hidden causes at time $t$ and model stage $i$. Let $\kappa^{\prime}_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the top-down-inferred causes. Let $C_{i}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i}}$ be a hidden-state-transition matrix. Let $D_{i}^{\top}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}$ be a Toeplitz-form matrix of $q_{i}\\!\in\\!\mathbb{N}_{+}$ filters. Let $G_{i}\\!\in\\!\mathbb{R}^{d_{i}\times k_{i}}$ be an invariant Toeplitz matrix. The hidden-cause cost to be minimized, with respect to $\kappa_{i,t}$ and $G_{i}$, is given by $\mathcal{L}_{2}(\gamma_{i,t},\kappa_{i,t},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})=\frac{1}{2}\Bigg{(}\\!\Bigg{(}\sum_{j=1}^{n}\sum_{k^{\prime}=1}^{k_{i}}|[\lambda_{i,t}]_{k^{\prime}}\gamma_{i,t}^{j}|\Bigg{)}+\eta_{i}^{\prime}\|\kappa_{i,t}\\!-\\!\kappa^{\prime}_{i,t}\|_{2}^{2}+\lambda_{i}^{\prime}\|\kappa_{i,t}\|_{1}\\!\Bigg{)},$ where $\lambda_{i,t,k^{\prime}}\\!=\\!\alpha_{i}^{\prime}(1\\!+\\!\textnormal{exp}(-[G_{i}^{\top}\kappa_{i,t}]_{k^{\prime}}))$, $\alpha_{i}^{\prime}\\!\in\\!\mathbb{R}_{+}$.

The first term in this cost models the multiplicative interaction of the causes $\kappa_{i,t}$ with the max-pooled states $\gamma_{i,t}^{j}$ through an invariant Toeplitz matrix $G_{i}$. This characterizes the shape of the sparse prior on the states. That is, the invariant matrix is adapted such that each component of the causes is connected to element groups in the accumulated states that co-occur frequently. Co-occurring components typically share common statistical regularities, thereby yielding locally invariant representations [KarklinY-jour2006a]. The second term specifies that the difference between the bottom-up causes $\kappa_{i,t}$ and the top-down inferred causes $\kappa^{\prime}_{i,t}$ should be small, with the term weight specified by $\eta_{i}^{\prime}\\!\in\\!\mathbb{R}_{+}$. The final term imposes $L_{1}$ sparsity, with the amount controlled by $\lambda_{i}^{\prime}\\!\in\\!\mathbb{R}_{+}$, to prevent the intermediate representations from being dense.

The causes obtained by solving the above LASSO cost will behave somewhat like complex cells in the visual cortex [ItoM-jour2004a]. Similar results are found in temporally coherent networks [HurriJ-coll2003a], albeit without guaranteed feature invariance. As with the state-inference cost, we employ $L_{1}$ sparsity in the hidden-cause cost for practical reasons, even though we would prefer $L_{0}$ sparsity for its theoretical appeal.

* Proposition 3. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the hidden causes at time $t$ and model stage $i$. Let $C_{i}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i}}$ be a hidden-state-transition matrix, $D_{i}^{\top}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}_{+}$ be a Toeplitz-form matrix of filters, and $G_{i}\\!\in\\!\mathbb{R}^{d_{i}\times k_{i}}$ be an invariant Toeplitz matrix.
Let $\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime}\\!\in\\!\mathbb{R}_{+}$ and $\lambda_{i,t}\\!\in\\!\mathbb{R}_{+}^{k_{i}}$. The hidden-cause cost function $\mathcal{L}_{2}(\gamma_{i,t},\kappa_{i,t},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})$ is convex for appropriate parameter values.

Practically, we have found that traditional $L_{0}$ sparsity in the ADPCN preempts learning. The causes are often too sparse to act as priors. Large errors continuously accumulate, so the ADPCNs are unable to reduce the residual prediction errors. Approximating the $L_{0}$ term is often a better option. We will explore it more in our future endeavors.

ADPCNs and the DPCNs our lab proposed [ChalasaniR-conf2013a, PrincipeJC-jour2014a] have almost the same cost functions that we outline above. They both build unsupervised representations of input stimuli [FoldiakP-jour1991a] via a free-energy principle [FristonK-jour2006a]. The difference is that ADPCNs implement convolution by way of the Toeplitz-form matrix of filters. The original DPCNs rely on non-convolutional filters. The expressive power of these DPCNs is thus quite poor for complex stimuli compared to the ADPCNs. ADPCNs can take advantage of local spatial coherence in addition to temporal coherence. They hence require fewer filters than DPCNs to extract meaningful stimuli representations.

Defining states and causes as we have specified above has significant advantages. ADPCNs are, for instance, incredibly parameter efficient compared to standard recurrent-convolutional autoencoders. Few filters often are needed to adequately synthesize observed stimuli under varying conditions. This is a byproduct of the explicit feature invariance imposed by the non-linear, sparse cause inference. ADPCNs are also far more efficient and effective than alternate DPCNs that have been proposed. For instance, the DPCN proposed by Lotter et al. [LotterW-conf2017a] relies on convolutional, long short-term memories (LSTMs) to model spatio-temporal stimuli. LSTM cells can only store and recall a single event through time, limiting their characterization of complex temporal patterns. Many LSTM cells are needed for associative event recall. They also require lengthy inference times. Behaviors like memorization, associative recall, and others are possible with ADPCNs. ADPCNs can be kernelized to realize non-linear, non-parametric state-space updates, further increasing the ADPCNs’ expressive power without impacting inference times. The DPCN proposed by Han et al. [HanK-coll2018a] relies on convolutional layers with linear recurrent feedback, similar to our lab’s recurrent-convolutional, winner-take-all networks [SantanaE-jour2018a]. However, the network in [HanK-coll2018a] lacks an unsupervised organization mechanism.

## 3.2 ADPCN Inference

The propagation and transformation of observed stimuli in an ADPCN is more involved than for standard network architectures. At any stage in the ADPCN, the hidden, sparse states and unknown, sparse causes that minimize the two-part LASSO cost must be inferred to create the feed-forward observations for the next ADPCN stage. Joint inference of the states and causes can be done in a manner similar to block coordinate descent. That is, for a given mini-batch of stimuli, the states can be updated by solving the corresponding LASSO cost while holding the causes fixed. The causes can then be updated while holding the states fixed, as in the schematic sketch below.
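The following sketch outlines this alternation for a single stage. It uses single ISTA-style proximal steps as stand-ins for the full state and cause solvers, omits the temporal term of $\mathcal{L}_{1}$ and the pooling between phases, and treats $D_{i}^{\top}$ and $G_{i}$ as dense matrices; the default step sizes and iteration counts are illustrative only.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def infer_stage(y, D_T, G, kappa_top, alpha_p=1.0, eta=1.0, lam=0.2,
                lam_p=0.2, step=0.05, n_outer=20):
    """Alternate one ISTA-style state step and one cause step per sweep."""
    k = D_T.shape[1]                  # number of states
    d = G.shape[0]                    # number of causes
    gamma, kappa = np.zeros(k), np.zeros(d)
    for _ in range(n_outer):
        # State step: gradient of the reconstruction term of L1, then a
        # weighted-L1 prox whose weights lambda(kappa) depend on the causes.
        lam_vec = alpha_p * (1.0 + np.exp(-G.T @ kappa))
        grad_g = -D_T.T @ (y - D_T @ gamma)
        gamma = soft_threshold(gamma - step * grad_g, step * lam * lam_vec)
        # Cause step: gradient of the smooth part of L2, i.e., the
        # state-weighting term plus the top-down consistency term.
        grad_k = -alpha_p * G @ (np.exp(-G.T @ kappa) * np.abs(gamma)) \
                 + 2.0 * eta * (kappa - kappa_top)
        kappa = soft_threshold(kappa - step * grad_k, step * lam_p)
    return gamma, kappa
```

In the full scheme, each of these single steps is replaced by the accelerated iterations of Definitions 4 and 5, and the states are max pooled before the cause step.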
Altering either of these representations amounts to solving a convolutional, $L_{1}$-sparse-coding problem. The presence of discontinuous, $L_{1}$-based terms in the LASSO costs complicates the application of standard optimization techniques, though. Here, we consider a fast proximal-gradient-based approach for separating and accounting for the smooth and non-smooth components of the LASSO costs. This approach is motivated and analyzed in the appendix.

* Definition 4: Feed-Forward, Bottom-Up State Inference. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\pi_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the auxiliary hidden states at time $t$ and model stage $i$. These auxiliary hidden states will be combinations of hidden states across different times. Let $C_{i}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i}}$ be a hidden-state-transition matrix. As well, let $D_{i}^{\top}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}$ be a Toeplitz-form matrix of $q_{i}\\!\in\\!\mathbb{N}_{+}$ filters. For an inertial sequence $\beta_{m}\\!\in\\!\mathbb{R}_{+}$ and an adjustable step size $\tau_{i,t}^{m}\\!\in\\!\mathbb{R}_{+}$, the hidden-state inference process, indexed by iteration $m$, is given by the following expressions $\gamma_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i,t}}\\!\Bigg{(}\\!\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau^{m}_{i,t}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})\\!\Bigg{)},\;\;\pi_{i,t}^{m+1}\\!=\\!\gamma_{i,t+1}^{m}\\!+\\!\beta_{m}(\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1})$ with $\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})\\!=\\!D_{i}^{\top}(\kappa_{i-1,t}\\!-\\!D_{i}\pi_{i,t}^{m})\\!+\\!\alpha_{i}\Omega_{i}(\pi_{i,t}^{m})$. Here, we use a Nesterov smoothing, $\Omega_{i}(\pi_{i,t}^{m})\\!=\\!\textnormal{arg max}_{\|\Omega_{i,t}\|_{\infty}\leq 1}\Omega_{i,t}^{\top}(\pi_{i,t}^{m}\\!-\\!C_{i}\pi_{i,t}^{m-1})$, $\Omega_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$, to approximate the non-smooth state transition. Small values for the hidden states are clamped via a soft thresholding function implicit to the proximal operator, which leads to a sparse solution. The states are then spatially max pooled over local neighborhoods, using non-overlapping windows, to reduce their resolution, $\gamma_{i,t+1}\\!=\\!\textnormal{\sc pool}(\gamma_{i,t+1})$.

* Definition 5: Feed-Forward, Bottom-Up Cause Inference. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the hidden causes at time $t$ and model stage $i$. Let $C_{i}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i}}$ be a hidden-state-transition matrix, $D_{i}^{\top}\\!\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}$ be a Toeplitz-form matrix of $q_{i}\\!\in\\!\mathbb{N}_{+}$ filters, and $G_{i}\\!\in\\!\mathbb{R}^{d_{i}\times k_{i}}$ be an invariant Toeplitz matrix.
For an adjustable step size $\tau^{\prime}_{i,t}{}^{\\!m}\\!\in\\!\mathbb{R}_{+}$ and inertial sequence $\beta_{m}^{\prime}\\!\in\\!\mathbb{R}_{+}$, the hidden-cause inference process, indexed by $m$, is given by the following expressions $\kappa_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i}^{\prime}}\\!\Bigg{(}\\!\pi^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}\\!-\\!\lambda_{i}^{\prime}\tau^{\prime}_{i,t}{}^{\\!m}\nabla_{\pi^{\prime}_{i,t}{}^{\\!m}}\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\\!\Bigg{)},\;\;\pi^{\prime}_{i,t}{}^{\\!m+1}\\!=\\!\kappa_{i,t+1}^{m}\\!+\\!\beta^{\prime}_{m}(\kappa_{i,t+1}^{m}\\!-\\!\kappa_{i,t+1}^{m-1})$ with $\nabla_{\pi^{\prime}_{i,t}{}^{\\!m}}\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\\!=\\!-\alpha_{i}^{\prime}G_{i}^{\top}\textnormal{exp}(-G_{i}\pi^{\prime}_{i,t}{}^{\\!m})|\gamma_{i,t+1}^{j}|\\!+\\!2\eta_{i}^{\prime}(\kappa_{i,t+1}^{m}\\!-\\!\kappa^{\prime}_{i+1,t})$. Small values for the hidden causes are clamped via an implicit soft thresholding function, leading to a sparse solution. The inferred causes are used to update the sparsity parameter $\lambda_{i,t+1}\\!=\\!\alpha_{i}^{\prime}(1\\!+\\!\textnormal{exp}(-\textnormal{\sc unpool}(G_{i}\kappa_{i,t+1})))$ via spatial max unpooling.

In both cases, the step size is bounded in terms of the Lipschitz constant of the smooth part of the LASSO cost to be solved. The choice of the inertial sequence greatly affects the convergence properties of the optimization.

* Proposition 4. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the hidden causes. The state iterates $\\{\gamma_{i,t+1}^{m}\\}_{m=1}^{\infty}$ strongly converge to the global solution of $\mathcal{L}_{1}(\gamma_{i,t},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})$ for the accelerated proximal gradient scheme. Likewise, the cause iterates $\\{\kappa_{i,t+1}^{m}\\}_{m=1}^{\infty}$ for the accelerated proximal gradient scheme strongly converge to the global solution of $\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i,t},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})$ at a sub-polynomial rate. This occurs when using the inertial sequences $\beta_{m},\beta^{\prime}_{m}\\!=\\!(k_{m}\\!-\\!1)/k_{m+1}$, where $k_{m}$ depends polynomially on $m$.

In this bottom-up inference process, there is an implicit assumption that the top-down predictions of the causes are available. This, however, is not the case for each iteration of a mini batch being propagated through the ADPCN. We therefore consider an approximate, top-down prediction using the states from the previous time instance and, starting from the first stage, perform bottom-up inference using this prediction.

* Definition 6: Feed-Back, Top-Down Cause Inference. At the beginning of every time step $t$, using the state-space model at each stage, the likely top-down causes, $\kappa^{\prime}_{i-1,t}\\!\in\\!\mathbb{R}^{d_{i-1}}$, are predicted using the previous states $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ and the causes $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$.
That is, for the filter dictionary matrix, the following top-down update is performed $\kappa_{i-1,t}^{\prime}\\!=\\!D_{i}^{\top}\gamma_{i,t}^{\prime},\;\;\gamma_{i,t}^{\prime}\\!=\\!\textnormal{arg min}_{\gamma_{i,t}}\Bigg{(}\\!\lambda_{i}^{\prime}\|\gamma_{i,t+1}\\!-\\!C_{i}\gamma_{i,t}\|_{1}\\!+\\!\alpha_{i}^{\prime}\|\gamma_{i,t}\|_{1}\Bigg{(}\\!1\\!+\\!\textnormal{exp}(-\textnormal{unpool}(G_{i}\kappa_{i,t}))\\!\Bigg{)}\\!\Bigg{)},$ except for the last stage, wherein $\kappa^{\prime}_{i,t+1}\\!=\\!\kappa_{i,t}$. This minimization problem has an algebraic expression for the global solution: $[\gamma_{i,t}^{\prime}]_{k}\\!=\\![C_{i}\gamma_{i,t-1}]_{k}$, whenever $\alpha_{i}^{\prime}\lambda_{i,t,k}\\!<\\!\alpha_{i}$, and zero otherwise.

These top-down predictions serve an important role during inference, as they transfer abstract knowledge from higher stages into lower ones. The overall representation quality is thereby improved. The predictions also modulate the representations due to state zeroing by the sparsity hyperparameter. Alongside the state and cause inference is a learning process for fitting the ADPCN parameters to the stimuli. Here, we consider gradient-descent training without top-down information, which is performed once inference has stabilized for a given mini batch. An overview of this procedure is presented in the online appendix (see Appendix A).

The inference procedure that we outline in the online appendix is different from the one proposed in [ChalasaniR-conf2013a, PrincipeJC-jour2014a]. Our lab’s original DPCN relies on proximal gradients with an extra-gradient rule that is an almost-linear combination of previous state and cause iterates. The ADPCNs leverage potentially non-linear iterate combinations. This has the effect of taking larger steps along the error surface without diverging. In essence, the DPCN inference process is overly conservative in its updates. It also has issues that we outline in the online appendix (see Appendix A). Another change is that multiple iterations of top-down feed-back are executed in the ADPCNs. As the recurrent processes unfold in time, the ADPCN is used over and over to apply an increasing number of non-linear transformations to the stimuli. This has the effect of simulating the propagation of the stimuli through an increasingly deeper, feed-forward network but without the overhead of adding and learning more network parameters [ChenY-coll2017a]. This promotes the formation of more expressive states and causes that quickly converge as the recurrent processes proceed over time. It suggests that the ADPCNs have a stable, self-organizing mechanism that minimizes surprise well [FristonK-jour2010a]. Our lab’s DPCNs [ChalasaniR-conf2013a, PrincipeJC-jour2014a] only consider a single feed-back iteration. A great many hierarchical stages are needed to replicate the above behavior. However, a lack of reasonable convergence prevents this from occurring. They hence do not learn to preserve perceptual differences.

## 4\. Simulation Results

We now assess the capability of our inference strategy for unsupervised ADPCNs. We focus on static visual stimuli (see Section 3.1). We demonstrate that ADPCNs uncover meaningful feature representations. They do so more quickly than convolutional DPCNs that rely on a non-accelerated, proximal-gradient update (see Section 3.2). We show that the improved inference offered by ADPCNs permits them to exceed the performance of deep unsupervised networks and behave similarly to deep supervised networks.
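Before turning to the experiments, the following minimal sketch summarizes the accelerated update pattern shared by Definitions 4 and 5: a gradient step on the smooth part of a cost, an $L_{1}$ proximal step, and an inertial extrapolation with $\beta_{m}=(k_{m}-1)/k_{m+1}$. The particular recursion for $k_{m}$ used below (the classical Nesterov sequence) and the gradient-based restart test are assumptions made here for illustration; the exact choices are given in Appendix A.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def accelerated_prox(grad_f, x0, step, lam, n_iter=500):
    """Accelerated proximal gradient with an inertial sequence and restarts."""
    x = x0.copy()
    pi = x0.copy()                    # auxiliary iterate (pi in Definition 4)
    k_m = 1.0
    for _ in range(n_iter):
        # Gradient step on the smooth part, then the L1 proximal step.
        x_new = soft_threshold(pi - step * grad_f(pi), step * lam)
        k_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * k_m ** 2))
        beta = (k_m - 1.0) / k_next   # inertial weight beta_m
        # In-place restart: drop the momentum whenever it points uphill.
        if np.dot(pi - x_new, x_new - x) > 0:
            beta, k_next = 0.0, 1.0
        pi = x_new + beta * (x_new - x)
        x, k_m = x_new, k_next
    return x

# Usage on a small LASSO instance: min 0.5 * ||A x - b||^2 + lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = A @ np.array([1.0, -2.0] + [0.0] * 8)
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the smooth gradient
x = accelerated_prox(lambda z: A.T @ (A @ z - b), np.zeros(10), 1.0 / L, lam=0.1)
```

The restart zeroes the momentum and resets the inertial sequence whenever the extrapolated direction opposes the latest proximal step, which is one common way to suppress the cost oscillations discussed in Section 1.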
## 4\. Simulation Results

We now assess the capability of our inference strategy for unsupervised ADPCNs. We focus on static visual stimuli (see section 3.1). We demonstrate that ADPCNs uncover meaningful feature representations. They do so more quickly than convolutional DPCNs that rely on a non-accelerated, proximal-gradient update (see section 3.2). We show that the improved inference offered by ADPCNs permits them to exceed the performance of deep unsupervised networks and behave similarly to deep supervised networks.

## 4.1. Simulation Preliminaries

Data and Pre-processing. We rely on five datasets for our simulations. Two of these, MNIST and FMNIST, contain single-channel, static visual stimuli. The remaining three, CIFAR-10, CIFAR-100 and STL-10, contain multi-channel, static visual stimuli. We whiten each dataset and zero their means. We use the default training and test set definitions for each dataset.

Training and Inference Protocols. For learning the DPCN and ADPCN parameters, we rely on ADAM-based gradient descent with mini batches [KingmaDP-conf2015a]. We set the initial learning rate to $\eta_{0}\\!=\\!\textnormal{0.001}$, which helps prevent overshooting the global optimum. The learning rate is decreased by half every epoch. We use exponential decay rates of 0.9 and 0.99 for the first- and second-order gradient moments in ADAM, respectively, which are employed to perform bias correction and adjust the per-parameter learning rates. An additive epsilon factor of $\textnormal{10}^{-\textnormal{8}}$ is used to preempt division by zero. We use an initial forgetting factor value of $\theta_{0}\\!=\\!\textnormal{0.7}$. This factor is increased by a tenth every thousand mini batches to stabilize convergence. We consider a mini-batch size of 32 randomly selected samples to ensure good solution quality [HardtM-conf2016a]. For DPCN and ADPCN inference, we primarily set the sparsity parameters to $\lambda_{1},\lambda_{1}^{\prime}\\!=\\!\textnormal{0.2}$, $\lambda_{2},\lambda_{2}^{\prime}\\!=\\!\textnormal{0.25}$, and $\lambda_{3},\lambda_{3}^{\prime}\\!=\\!\textnormal{0.35}$. Such values permit retaining much of the visual content in the first two stages while compressing it more strongly in the third. Some stimuli datasets have slightly altered parameter values. Due to the static nature of the stimuli, we do not have temporal state feed-back. We do, however, propagate the cause-state difference between stages. This is a slight modification of the network diagram shown in the previous section. We set the feed-back strengths, for most simulations, to $\alpha_{1},\alpha_{2}\\!=\\!\textnormal{1}$ and $\alpha_{3}\\!=\\!\textnormal{3}$. In fig. 1, these variables correspond to the strength of the temporal, intra-layer feed-back, but here they are used for non-temporal, inter-layer state feed-back. The stronger feed-back in the third stage aids in suppressing noise without adversely impacting the earlier stages’ priors. We fix the causal sparsity constants to $\alpha_{1}^{\prime},\alpha_{2}^{\prime},\alpha_{3}^{\prime}\\!=\\!\textnormal{1}$. We terminate the accelerated and regular inference processes after 500 and 1000 iterations, respectively, per mini batch. A significantly lower number of iterations is used in the former case, since the new inertial sequence facilitates quick convergence.
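For reference, the optimizer settings quoted above amount to the following sketch; it is one plausible realization of the quoted hyperparameters rather than the exact training script, and the cap on the forgetting factor is our assumption.

```python
import numpy as np

def adam_update(param, grad, m, v, step, lr, beta1=0.9, beta2=0.99, eps=1e-8):
    # One bias-corrected ADAM step with the moment-decay rates from the text.
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** step)   # first-moment bias correction
    v_hat = v / (1.0 - beta2 ** step)   # second-moment bias correction
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Schedules from the text: the learning rate starts at 0.001 and is halved
# every epoch; the forgetting factor starts at 0.7 and grows by 0.1 every
# 1000 mini batches (capped at 1.0 here, which is our assumption).
lr_at = lambda epoch: 1e-3 * 0.5 ** epoch
theta_at = lambda batch: min(1.0, 0.7 + 0.1 * (batch // 1000))
BATCH_SIZE = 32
```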
Figure 2: A comparison of accelerated proximal gradient inference and learning (left, blue) and proximal gradient inference and learning (right, red) for the MNIST dataset. The presented results are shown after training with mini batches for two epochs. (a) Polar scatter plots of the orientation angles versus spatial frequency for the first-stage causes. (b) Line plots of the normalized center positions with included orientations for the first-stage causes. For both (a) and (b), we fit Gabor filters to the first-stage causes; locally optimal filter parameters were selected via a gradient-descent scheme. The plots are color-coded according to the connection strength between the invariant matrix and the observation matrix in the first network stage. Higher connection strengths indicate subsets of dictionary elements from the observation matrix that are most likely active when a column of the invariance matrix is also active. If a DPCN has been trained well, then the filters should have a small orientation-angle spread. Each plot represents a randomly chosen column of the first-stage invariance matrix. (c)–(e) Back-projected causes from the first, second, and third stages of the networks, respectively. Each plot represents a randomly chosen cause. The back-projected causes can be interpreted as receptive fields, with darker colors indicating a higher degree of activation. For each stage, we assess the filter similarity and provide VAT similarity plots. In these plots, low similarities are denoted using gray while higher similarities are denoted using progressively more vivid shades of either blue (accelerated proximal gradients) or red (proximal gradients). If a DPCN has been trained well, then there should be few to no duplicate filters. There hence should not be any conspicuous blocky structures along the main diagonal of the VAT similarity plots. (f)–(g) Reconstructed instances from a random batch at the first and third stage, respectively. For each stage, we also assess the feature similarity between the original training samples and the reconstructed versions and provide corresponding scatter plots. If a DPCN reconstructs the input samples well, then there should be a strong linear relationship between the features. Higher distributional spreads and shifts away from the main diagonal indicate larger reconstruction errors.

Figure 3: A comparison of accelerated proximal gradient inference and learning (left, blue) and proximal gradient inference and learning (right, red) for the FMNIST dataset. The presented results are shown after training with mini batches for two epochs. See fig. 2 for descriptions of the plots.

## 4.2. Simulation Results and Discussions

Network Architecture. As we note in the previous section, our lab’s DPCNs are non-convolutional. For our simulations, we therefore equip the DPCNs with the same Toeplitz-based convolution as the ADPCNs. This is done to highlight that the improvement in feature quality occurs due to the faster inference strategy. We consider the same architecture for the convolutional DPCNs and ADPCNs. At the first stage, we use 128 states with $\textnormal{5}\\!\times\\!\textnormal{5}$ filters and 256 causes with $\textnormal{5}\\!\times\\!\textnormal{5}$ filters. This yields an over-complete reconstruction basis. At the second and third stages, we use 128 states and 256 states with $\textnormal{5}\\!\times\\!\textnormal{5}$ and $\textnormal{5}\\!\times\\!\textnormal{5}$ filters, respectively. For these two stages, we use 512 causes and 1024 causes with $\textnormal{5}\\!\times\\!\textnormal{5}$ and $\textnormal{5}\\!\times\\!\textnormal{5}$ filters, respectively. We perform $\textnormal{2}\\!\times\\!\textnormal{2}$ max pooling between the states and the causes at each stage. Due to the choice of filter sizes, the ADPCNs will typically have a worse reconstruction error but a better recognition rate in the later stages. Using larger filters in the early stages and smaller ones in the later stages permits implementing traditional predictive-coding behaviors. That is, the reconstruction error decreases deeper in the hierarchy, as parts of the stimuli are explained away in an iterative manner.
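The stage sizes just listed can be collected into a small configuration record. This is a hypothetical summary of ours, and the per-dataset parameter deviations noted earlier are not reflected.

```python
# Stage-wise ADPCN configuration used in the simulations (field names are ours).
ADPCN_STAGES = [
    dict(states=128, state_filter=(5, 5), causes=256,  cause_filter=(5, 5)),
    dict(states=128, state_filter=(5, 5), causes=512,  cause_filter=(5, 5)),
    dict(states=256, state_filter=(5, 5), causes=1024, cause_filter=(5, 5)),
]
POOL_WINDOW = (2, 2)  # non-overlapping max pooling between states and causes
```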
Simulation Results. Simulation findings are presented in fig. 2 and fig. 3 for the single-channel MNIST and FMNIST datasets, respectively. Findings for the multi-channel CIFAR-10/100 and STL-10 datasets are, respectively, shown in fig. 4 and fig. 5. The results in these figures were obtained after two epochs. For these datasets, the ADPCNs are successful in quickly uncovering invariant representations. Most of the columns in the invariance matrix group dictionary elements that have very similar orientation and frequency while being insensitive to translation (see fig. 2(a) to fig. 5(a)). Likewise, for each active invariance-matrix column, a subset of the dictionary elements are grouped by orientation and spatial position, which indicates invariance to other properties like spatial frequency and center position (see fig. 2(b) to fig. 5(b)). The convolutional DPCNs, in comparison, have representations that are significantly altered by transformations other than translation. This occurs because subsets of the dictionary elements are not grouped according to various characteristics. Discrimination performance hence can suffer for stimuli samples that are slightly altered.

Figure 4: A comparison of accelerated proximal gradient inference and learning (left, blue) and proximal gradient inference and learning (right, red) for the CIFAR-10/100 datasets. The presented results are shown after training with mini batches for two epochs. See fig. 2 for descriptions of the plots.

As well, the ADPCNs learn meaningful filters from the stimuli. The first two stages of our ADPCNs have causal receptive fields that mimic the behavior of simple and complex cells in the primate vision system (see fig. 2(c)–(d) to fig. 5(c)–(d)). The fields for the first stage are predominantly divided into two types: low-frequency and high-frequency, localized band-pass filters. The former mainly encode regions of uniform intensity and color along with slowly varying texture. The latter describe contours and hence sharp boundaries. Such filters permit accurately reconstructing the input stimuli (see fig. 2(f)–fig. 5(f) and fig. B.1(a)–(e)). The second-stage receptive fields are non-linear combinations of those in the first that are activated by more complicated visual patterns, such as curves and junctions. A similar division of receptive fields into two categories is often encountered in the second ADPCN stage. More filters are activated by contours, however, than in the first stage. For both stages, the filters are mostly unique, which is captured in the ordered similarity plots (see fig. 2(c)–(d) to fig. 5(c)–(d)). Beyond two stages, the ADPCN receptive fields encompass entire objects (see fig. 2(e) to fig. 5(e)). They are, however, average representations, not highly specific ones, due to the limited number of causes (see fig. 2(g) to fig. 5(g)). The backgrounds in the visual stimuli are often suppressed at the third stage for CIFAR-10/100 and STL-10, which greatly enhances recognition performance (see fig. B.1(c)–(e)). There are no backgrounds for MNIST and FMNIST, so recognition is predominantly driven by the whole-object receptive fields in the third stage (see fig. B.1(a)–(b)). The ordered similarity plots indicate that none of the third-stage filters appear to be duplicated for either dataset. This trend also holds for the first- and second-stage receptive fields. This implies that the ADPCNs emphasize the extraction of non-redundant features to form a complete visual stimuli basis at each stage.
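As a simplified stand-in for the VAT similarity plots referenced in the figure captions, the following sketch flags near-duplicate filters via pairwise cosine similarity; the actual plots additionally reorder the matrix with VAT, which we omit here.

```python
import numpy as np

def filter_similarity(filters, tol=0.99):
    # filters: (num_filters, height, width) array; flatten and L2-normalize.
    F = filters.reshape(len(filters), -1).astype(float)
    F /= np.linalg.norm(F, axis=1, keepdims=True) + 1e-12
    S = F @ F.T                               # pairwise cosine similarities
    duplicates = np.argwhere(np.triu(S, k=1) > tol)
    return S, duplicates

# A well-trained stage should yield an empty duplicate list, mirroring the
# absence of blocky structures along the VAT plots' main diagonals.
```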
The filters learned by ADPCNs are not only sensitive to whole objects. They are also attuned to stylistic changes of objects. As shown in fig. C.1, the filter-derived cause features tend to segregate instances of objects that visually differ. This occurs even within a given object class. Convolutional DPCNs, in contrast, do not stabilize to viable receptive fields at the same rate as the ADPCNs (see fig. 2(c)–(d) to fig. 5(c)–(d)). For MNIST, the first-stage DPCN receptive fields have some localized band-pass structure that is similar to Gabor filters. The overall spread of the fields makes it difficult to accurately detect abrupt transitions and hence recreate the input stimuli, though. The reconstructions thus are heavily distorted and blurred (see fig. 2(f) and fig. B.1(a)). For FMNIST, the first-stage receptive fields focus on low-frequency details, such as either constant grayscale values or slow-changing grayscale gradients. They also focus on higher-frequency details, such as periodic texture. While some of the causes become specialized band-pass-like filters, there are not enough to adequately preserve sharp edges. The stimuli reconstructions are thus also distorted, which removes much of the high-frequency content (see fig. 3(f) and fig. B.1(b)). Similar results are encountered for CIFAR-10/100 and STL-10 (see fig. 4(f)–fig. 5(f) and fig. B.1(c)–(e)). For all of the datasets, the second-stage receptive fields become even less organized than in the first stage. They are mostly activated by blob-like visual patterns, which do not preserve enough visual content for recreating a close resemblance of the input stimuli beyond the first stage. The convolutional DPCNs are unable to learn relevant representations in the third stage, as a consequence. The receptive fields for this network stage are unique, by virtue of being essentially random. They are, however, largely useless in extracting stimuli-specific details (see fig. 2(e) to fig. 5(e)). This further degrades the reconstruction quality to the point where the inputs are unrecognizable (see fig. 2(g) to fig. 5(g)). Discrimination is adversely impacted due to this severe lack of identifying characteristics (see fig. B.1(a)–(e)). The filter redundancy also impacts learning good model priors. There is typically not enough unique information to be back-propagated to earlier stages to inform the choice of better receptive fields.

Figure 5: A comparison of accelerated proximal gradient inference and learning (left, blue) and proximal gradient inference and learning (right, red) for the STL-10 dataset. The presented results are shown after training with mini batches for two epochs. See fig. 2 for descriptions of the plots.

The causes formed by the ADPCNs and DPCNs specify invariant features that can be employed for discrimination. In fig. B.2, we show that the stage-aggregated ADPCN features yield high-performing unsupervised classifiers. They achieve state-of-the-art unsupervised recognition rates for each dataset. These recognition rates also are on par with deep networks trained in a supervised fashion, despite having orders of magnitude fewer parameters. Although the causal features from all stages have a positive net contribution, those from the third stage contribute the most to recognition performance (see fig. B.1(a)–(e)). The DPCNs exhibit poor performance, in comparison.
Only the first-stage features aid classification (see fig. B.1(a)–(e)). The remainder largely worsen the recognition capabilities. For both the DPCNs and ADPCNs, we rely on a seven-nearest-neighbor classifier, with a distance metric learned without supervision [SenerO-coll2016a], to label the stimuli samples. As we illustrate in fig. C.1, the causal features require non-linear decision boundaries to distinguish between classes well. This occurs even in high-dimensional spaces, as class overlap occurs. Such a property motivates the use of nearest-neighbor classifiers.

Simulation Discussions. Our simulations indicate that ADPCNs were more effective at uncovering highly discriminative feature representations of visual stimuli than DPCNs relying on the original inference strategy. A trait that contributes greatly to the ADPCNs’ success is their significantly improved search rate. As noted in the appendix, proximal-gradient-type schemes can undergo four separate search phases, some of which have different local convergence rates. In one of the phases, the constant-step regime, both the states and causes undergo rapid improvements. However, in two phases, the local convergence rate is slow whenever the largest eigenvalue of a certain recurrence matrix is less than the current inertial-sequence magnitude. For linear inertial sequences, like those found in the proximal-gradient-based DPCNs, this condition occurs early during the optimization process. That is, such sequences initially grow very rapidly before settling into a logarithmic rate. Within just a few iterations, the sequence magnitude exceeds the eigenvalue, which preempts the fast constant-step regime. The rate of convergence becomes worse than sublinear. A large number of search steps is thus needed to move toward the global solution. However, the search frequently terminates before this happens due to reaching the maximum number of proximal-gradient iterations. The search, alternatively, stops early due to a lack of progress across consecutive iterations. In either case, the states and causes do not adequately stabilize for a given mini batch. The poor state and cause representations, naturally, are integrated into the filter dictionary matrices and invariant matrices during the learning updates. This disrupts the priors in the early hierarchy for future stimuli. The convergence is further stymied by cost rippling. Proximal-gradient-based optimization does not behave like a pure descent method in two out of the four phases. Such behavior is caused by the eigenvalues of another recurrence matrix being a pair of complex conjugates, which necessitates oscillating between the two. All of these factors make it difficult to propagate meaningful bottom-up information [MechelliA-jour2004a] beyond the first stage. The top-down details from higher stages are hence ineffective at modifying the priors to disambiguate the stimuli [BarM-jour2004a]. The ADPCNs largely avoid these issues. After the constant-step regime, the search switches to one of two potentially slower phases. However, the ADPCNs’ inertial-sequence growth rate is rather muted, as opposed to that of the DPCNs. The chance of exceeding the largest eigenvalue of the augmented, auxiliary-variable recurrence matrix is low for the ADPCNs. The searches thus can proceed unhindered toward the global solution. Moreover, since the eigenvalue-magnitude threshold is often not reached, the largest eigenvalue of the augmented, mapping recurrence matrix is real-valued, not complex.
This means that the accelerated proximal gradients behave like a descent method. They thus will not experience localized cost rippling due to alternating between conjugate pairs. Both properties promote rapid stabilization of the states and causes, which expedites the formation of beneficial priors throughout much of the early-stage network inference. These priors facilitate the construction of transformation-insensitive feature abstractions in deeper network stages, due to the bottom-up forwarding of stimuli-relevant signals. The recurrent, top-down connections, in turn, suitably alter the representation sparsity. Extraneous details that do not contribute greatly to the reconstruction quality are hence ignored. This behavior biases in favor of valuable stimuli, which is analogous to observed functionality in the frontal and parietal cortices [SerencesJT-jour2008a]. The descending pathways also carry predictive responses, in the form of templates of expected stimuli, that are matched to the current and future mini batches to aid recognition [UllmanS-jour1995a]. That is, the templates convey contextual information for extracting task-specific information, which makes recognition more reliable [PotterMC-jour1975a, PalmerSE-jour1975a]. There are other traits that contribute to the success of the ADPCNs. For instance, the ADPCN’s first-stage receptive fields are largely similar to those of simple cells in the primary visual cortex. Simple cells often implement Gabor-like filters with generic preferred stimuli that correspond to oriented edges [HubelDH-jour1965a, HubelDH-jour1968a]. Such filters are useful within the ADPCN. Relations between activations for specific spatial locations tend to be distinctive between objects in visual stimuli. Activations obtained in a Gabor space also facilitate the construction of naturally sparse representations. These representations can then be hierarchically extended, which is what we ultimately sought in the ADPCNs. Changes in object location, scale, and orientation can be reliably detected, within this Gabor space, thereby aiding in the creation of transformation-invariant features. These features permit near-perfect stimuli reconstructions at the first stage. The uniqueness of the band-pass filters is beneficial, as it permits the ADPCNs to fixate on non-redundant stimuli characteristics. How these filters arise and the rate at which they form are also important aspects that aid in the ADPCNs’ success. They emerge due to feed-back from high-level visual network areas, which quickly reduce activity in lower areas to simplify the description of the stimuli to some of its most basic elements [MurraySO-jour2004a]. In doing so, alternate explanations are suppressed, and only the most dominant, fundamental causes of the stimuli remain [SpratlingMW-jour2011a]. These causes are oriented luminance contours. This rapid stabilization of filters is exactly the functionality that we expect to encounter in an efficient predictive coding process. It is hence well aligned with contemporary neurophysiological theory [OlshausenBA-jour1997a]. The plots of center position and orientation angle versus center frequency for the ADPCNs indicate that the learned Gabor filters form a meaningful reconstruction basis for the visual stimuli. The entire center-position plane is thoroughly filled. There hence are filter impulse responses everywhere in space when a sufficient number of first-stage filters are learned.
Likewise, the Gabor orientation range is well covered, implying that the filters can recognize edges at different angles. Taken together, both properties indicate that these first-stage filters can reconstruct the stimuli well. Given the similarities of the Gabor filters across the different stimuli datasets, it would appear that the inferred basis can recreate any natural-image stimuli well. The low reconstruction errors for the first stage of the ADPCN corroborate this claim. For the DPCNs, we also have that the center-position plane is well covered. However, the filters have poor spatial-frequency bandwidths, so they are not activated much by contours. Only the most dominant edges in the visual stimuli will be well preserved. These usually are the object-background boundaries. Inter-object details are not retained. While the first-stage DPCN filters also form a basis, it is a poor one for reconstructing natural-image stimuli. Without a meaningful, stable basis, the remainder of the network cannot build on it to reliably describe aspects of the stimuli. The slow convergence of proximal gradients prevents this, as we note above. Beyond the first stage, the ADPCNs exhibit an increase in specificity, abstraction, and invariance for deeper hierarchies [RiesenhuberM-jour1999a]. This behavior aligns well with the current understanding of the vision system [TjanBS-jour2005a]. It also aids in the ADPCNs’ success. The second-stage receptive fields, in the case of MNIST, become sensitive to curved sub-strokes for the hand-written digits, which is similar to prestriate cortex functionality [HedgeJ-jour2000a]. For FMNIST, they emphasize abrupt transitions found in regions of the apparel and fashion accessories. This mimics aspects of shape-pattern-selectivity behaviors found in the extrastriate cortices [PasupathyA-jour1999a, PasupathyA-jour2002a]. For STL-10, the receptive fields are often elongated Gabor-like filters, which can be found in the prestriate cortex [LiuL-jour2016b]. All of these features systematically focus on object-relevant visual details, thereby aiding recognition. At the deepest stage, the receptive fields are entirely object specific, which is functionality somewhat akin to that of the neurons in the primate infero-temporal cortex [BruceC-jour1981a]. It is also related to memory activity [MiyashitaY-jour1988a, YakovlevV-jour1998a]. The representations are additionally translation and rotation insensitive, similar to inferotemporal cortex neurons [RollsET-jour1992a, ToveeMJ-jour1994a, SalinasE-jour1997a], and change little with respect to scale and spatial-frequency changes, similar to neurons in the middle temporal area [RollsET-jour1985a, LiuL-jour2007a]. To our knowledge, such invariant, whole-object sensitivity within a single stage has not been witnessed in any existing predictive-coding model. Based on contemporary theories [LiangH-jour2017a], we believe that the receptive-field feed-back from the final stage contributes to the effective connectivity in the earlier network stages [RamalingamN-jour2013a]. That is, the role of this final stage is to disambiguate local image components. This is done by creating a template that is fed back, which then selectively enhances object components and suppresses interfering background components [EpshteinB-jour2008a]. This biologically plausible behavior is crucial for achieving high discrimination rates on CIFAR-10/100 and STL-10. For these datasets, the objects of interest are scattered amongst distractors.
As we show in our simulation results, the third stage of the ADPCN contributes more to recognition than the first two stages. Such a finding is well aligned with previous studies. Although simple objects can be discriminated based on Gabor-like filters, complex objects require significant non-linearities [ShiJV-jour2013a]. The later stages provide this functionality due to the conversion from states to pooled, sparse causes. The processes of pooling and sparsifying are highly non-linear. They additionally lead to the emergence of complex-cell properties, and hence to shift invariance [HubelDH-jour1962a] and spatial-phase invariance [LianY-jour2021a]. Including Toeplitz-based convolution with spatial pooling has a major impact on performance. The spatial-coherence behaviors that convolutional layers offer are essential for describing inter-object relationships in complex visual stimuli. Pooling introduces invariance. Our promising recognition results in appendix B are a testament to this. The corresponding cause embeddings are too. They indicate that the ADPCNs self-organize the states and causes in an object-sensitive way, despite the lack of supervision. DPCNs struggle to do this, since they rely on non-convolutional filters [ChalasaniR-conf2013a, PrincipeJC-jour2014a]. Their cause embeddings often group the stimuli in a worse way than embeddings of the input stimuli. That is, their transformation of the stimuli often destroys much of the visual content. Even the convolutional extension to DPCNs struggles due to the lack of a good reconstruction basis. This shows that convolution alone is not sufficient to uncover good features. The networks must be paired with an efficient inference process. Inference for the convolutional and non-convolutional DPCNs is based on a slow, proximal-gradient-based approach. The later-stage cause embeddings for these networks are thus quite poor, since the first- and second-stage bases do not stabilize quickly to meaningful filters. It is likely that DPCNs would benefit from a greedy, stage-wise training to initialize the networks in a meaningful way. This may overcome the poor convergence, to some extent. Our ADPCNs would likely benefit from it too. As well, the ADPCNs would probably benefit from fixing the first-stage states and causes. As we have shown in our simulations, these features do not change much across color-image datasets. Effort could be better spent on learning more advanced features in the later network stages. Another benefit of including convolutional filters in the ADPCNs is that they help encode perceptual stimuli similarities and differences. By this, we mean that well-trained ADPCNs group related stimuli together and force unrelated stimuli apart in the causal feature space. Such behavior stems from learning visual, appearance-based features at local and global object scales. It also stems from iteratively sparsifying those features to selectively remove redundant details. Inter-layer and intra-layer feed-back contribute too. Other network architectures, like convolutional autoencoders, should also be capable of accounting for perceptual differences. They, however, currently appear to lack both appropriate losses and regularizers, along with fundamental layer behaviors, to do this as effectively as ADPCNs.

## 5\. Conclusions

Here, we have revisited the problem of unsupervised predictive coding. We have considered a hierarchical, generative network, the ADPCN, for reconstructing observed stimuli.
This network is composed of temporal sparse-coding models with bi-directional connectivity. The interaction of the information passed by top-down and bottom-up connections across the models permits the extraction of invariant, increasingly abstract feature representations. Such representations preserve perceptual differences. Our contribution in this paper is a new means of inferring the underlying components of the feature representations, which are the sparse states and causes. Previously, a proximal-gradient-type approach has been used for this purpose. Despite its promising theoretical guarantees, though, it exhibits poor empirical performance. This practically limits the number of model stages that can be considered in DPCNs. It also extensively curtails the quality of the stimuli features that can be extracted. Here, we have considered a parallelizable, vastly accelerated proximal-gradient strategy that overcomes these issues. It allows us to go beyond the existing two-stage limitation, facilitating the construction of arbitrary-staged DPCNs. Each stage leads to increasingly enhanced performance for object analysis. The information from higher stages is propagated to earlier ones to form more effective stimuli priors for bottom-up processing. Most crucially, our optimization strategy immensely streamlines inference. Often, only one or two presentations of the stimuli are necessary to reach a stable filter dictionary and hence a corresponding set of sparse states and sparse causes. Good object analysis performance is hence observed. The previous optimization approach requires many times more presentations before a stable set of filters is uncovered. The resulting features are not nearly as discriminative as those from our proposed strategy. We have mathematically proved that this stems from the poor convergence-rate properties of the original inertial sequence used in DPCNs. We have applied ADPCNs to static-image datasets. For MNIST and FMNIST, the ADPCNs learn initial-stage receptive fields that mimic aspects of the early stages of the primate vision system. The later network stages implement receptive fields that encompass entire objects. In the case of MNIST, the back-projected filters become pseudo-averages of the hand-written numerical digits. Predominant writing styles are modeled well. For FMNIST, the back-projected filters resemble the various types of clothing and personal articles. General styles and some nuances are captured. To our knowledge, this is the first time that such object-scale receptive fields have been learned for predictive coding. This behavior helps yield unsupervised classifiers that achieve state-of-the-art performance. Such classifiers also outperform supervised-trained deep networks, which lends credence to the complicated feature inference and invariance process that we employ. It also supports the notion that the ADPCNs preserve perceptual differences. Similar results are witnessed for natural-image datasets, such as CIFAR-10/100. The later-stage receptive fields again encompass entire object categories. This yields promising features that achieve generalization rates almost equivalent to those of supervised-trained deep networks. This is despite our use of simple nearest-neighbor classifiers. It is also despite the fact that our ADPCNs have many times fewer convolutional filters than these other deep networks. ADPCNs are readily applicable to spatio-temporal stimuli processing, much like the original DPCNs.
This is because the ADPCNs implement a recurrent state-space model. For such modalities, like video, the ADPCN recognition performance may be even better than for static stimuli, like images. This is because top-down, feed-back connections impose temporal constraints on the learning process. We will investigate this in our future work in the context of solving time-varying problems. We will also demonstrate their superiority compared to purely feed-forward convolutional-recurrent autoencoders. We will show that the lack of a top-down pathway impedes learning meaningful representations. ADPCNs can also form the backbone of a general, unsupervised framework for stimuli learning and high-level understanding. In our future work, we will illustrate that architectural extensions of it are well suited for making extrapolations about environment dynamics. This has implications for many applications, including self-driving cars. In particular, we will show that these modified ADPCNs learn temporal and perceptual cues that permit predicting egocentric and allocentric events in the short-term future. For the egocentric events, the ADPCNs understand well that a moving camera will cause non-linear transformations of visual characteristics across video frames. The ADPCNs can thus reliably predict, from a short history of video frames, the appearance of non-mobile objects and their locations in subsequent frames. This will be useful for estimating vehicle position and angle relative to objects of interest, which can help refine steering control inputs for close-quarters maneuvers. For the allocentric events, the ADPCNs understand well the dynamics of mobile objects for either a fixed or moving camera. The ADPCNs can hence similarly predict the appearance and location of those objects. This will prove invaluable for gauging how pedestrians may react and hence avoid collisions. We will show that both types of event predictions emerge due to the ADPCNs learning at multiple time scales across different stages. Multi-time-scale prediction is something that existing recurrent models struggle to do well.

## References

## Appendix A

Below, we outline the ADPCN training and inference process.

Inputs: Initial dictionary matrix $D_{i}^{\top}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}_{+}$, state-transition matrix $C_{i}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i}}$, and invariant matrix $G_{i}\\!\in\\!\mathbb{R}^{d_{i}\times k_{i}}$ for each network layer. A set of time-varying, static stimuli $Y_{t}$, where $y_{t}\\!\in\\!Y_{t}$, $y_{t}\\!\in\\!\mathbb{R}^{p}$. A set of initial states $\gamma_{i,0}\\!\in\\!\mathbb{R}^{k_{i}}$ and causes $\kappa_{i,0}\\!\in\\!\mathbb{R}^{d_{i}}$.

for _$t\\!=\\!0,1,2,\ldots$_ do

Initialize the bottom-up cause for the first layer as $\kappa_{0,t}\\!=\\!y_{t}$.
for _$i\\!=\\!0,1,2,\ldots$_ do

For all layers but the last, the most likely top-down causes, $\kappa^{\prime}_{i-1,t}\\!\in\\!\mathbb{R}^{d_{i-1}}$, are initialized at each iteration using the previous states $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ and the causes $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$, $\kappa_{i-1,t}^{\prime}\\!=\\!D_{i}^{\top}\gamma_{i,t}^{\prime},\;\;\gamma_{i,t}^{\prime}\\!=\\!\textnormal{arg min}_{\gamma_{i,t}}\Bigg{(}\\!\lambda_{i}^{\prime}\|\gamma_{i,t+1}\\!-\\!C_{i}\gamma_{i,t}\|_{1}\\!+\\!\alpha_{i}^{\prime}\|\gamma_{i,t}\|_{1}\Bigg{(}\\!1\\!+\\!\textnormal{exp}(-\textnormal{\sc unpool}(G_{i}\kappa_{i,t}))\\!\Bigg{)}\\!\Bigg{)}.$ This minimization problem has an algebraic expression for the global solution: $[\gamma_{i,t}^{\prime}]_{k}\\!=\\![C_{i}\gamma_{i,t-1}]_{k}$, whenever $\alpha_{i}^{\prime}\lambda_{i,t,k}\\!<\\!\alpha_{i}$, and zero otherwise. For the last layer, $\kappa^{\prime}_{i,t+1}\\!=\\!\kappa_{i,t}$.

for _$i\\!=\\!0,1,2,\ldots$_ do

Let $\beta_{m}\\!\in\\!\mathbb{R}_{+}$ be an inertial sequence, $\beta_{m}\\!=\\!(k_{m}\\!-\\!1)/k_{m+1}$, where $k_{m}\\!=\\!1\\!+\\!(m^{r}\\!-\\!1)/d\,$ with $r\\!\in\\!\mathbb{R}_{1,+}$ and $d\\!\in\\!\mathbb{R}_{+}$. Given an adjustable step size $\tau_{i,t}^{m}\\!\in\\!\mathbb{R}_{+}$, update the states using proximal-gradient steps, indexed by $m$, until either convergence or a pre-set number of iterations has been reached $\gamma_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i,t}}\\!\Bigg{(}\\!\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau^{m}_{i,t}(D_{i}^{\top}(\kappa_{i-1,t}\\!-\\!D_{i}\pi^{m}_{i,t})\\!+\\!\alpha_{i}\Omega_{i}(\pi^{m}_{i,t}))\\!\Bigg{)},$ where $\pi_{i,t}^{m+1}\\!=\\!\gamma^{m}_{i,t+1}\\!+\\!\beta_{m}(\gamma^{m}_{i,t+1}\\!-\\!\gamma_{i,t+1}^{m-1})$. The term $\Omega_{i}(\pi_{i,t}^{m})$ quantifies the contribution of the non-smooth state transition. Use Nesterov smoothing, with $\mu_{i}\\!\in\\!\mathbb{R}_{+}$, to approximate it, $\Omega_{i}(\pi_{i,t})\\!=\\!\textnormal{arg max}_{\|\Omega_{i,t}\|_{\infty}\leq 1}\Omega_{i,t}^{\top}(\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})\\!-\\!\mu_{i}\|\Omega_{i,t}\|^{2}_{2}/2$. Max-pool the states using non-overlapping windows $\gamma_{i,t+1}\\!=\\!\textnormal{\sc pool}(\gamma_{i,t+1})$.

Let $\beta^{\prime}_{m}\\!\in\\!\mathbb{R}_{+}$ be an inertial sequence, $\beta^{\prime}_{m}\\!=\\!(k_{m}\\!-\\!1)/k_{m+1}$, where $k_{m}\\!=\\!1\\!+\\!(m^{r}\\!-\\!1)/d\,$ with $r\\!\in\\!\mathbb{R}_{1,+}$, $d\\!\in\\!\mathbb{R}_{+}$. Given an adjustable step size $\tau^{\prime}_{i,t}{}^{\\!m}\\!\in\\!\mathbb{R}_{+}$, update the causes using proximal-gradient steps until either convergence or a pre-set number of iterations has been reached $\kappa_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i}^{\prime}}\\!\Bigg{(}\\!\pi^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}\\!-\\!\lambda_{i}^{\prime}\tau^{\prime}_{i,t}{}^{\\!m}(2\eta_{i}^{\prime}(\kappa_{i,t+1}^{m}\\!-\\!\kappa^{\prime}_{i,t})\\!-\\!\alpha_{i}^{\prime}G_{i}^{\top}\textnormal{exp}(-G_{i}\pi^{\prime}_{i,t}{}^{\\!\\!\\!\\!m})|\gamma_{i,t+1}^{j}|)\\!\Bigg{)},$ where $\pi^{\prime}_{i,t}{}^{\\!m+1}\\!=\\!\kappa_{i,t+1}^{m}\\!+\\!\beta^{\prime}_{m}(\kappa_{i,t+1}^{m}\\!-\\!\kappa_{i,t+1}^{m-1})$. Update the sparsity parameter using spatial max unpooling after the causes update has concluded, $\lambda_{i,t+1}\\!=\\!\alpha_{i}^{\prime}(1\\!+\\!\textnormal{exp}(-\textnormal{\sc unpool}(G_{i}\kappa_{i,t+1})))$.
for _$i\\!=\\!0,1,2,\ldots$_ do

Update the filter dictionary matrix $D_{i}^{\top}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i-1}}_{+}$ and the state-transition matrix $C_{i}\\!\in\\!\mathbb{R}^{k_{i}\times k_{i}}$ independently, until either convergence or a pre-set number of iterations has been reached, via dual-estimation filtering, with steps indexed by $m$, $D_{i}^{m+1}{}^{\top}\\!=\\!D_{i}^{m}{}^{\top}\\!+\\!\sigma_{t}\\!+\\!\psi_{i}^{m}\Bigg{(}\\!(\gamma_{i-1,t+1}\\!-\\!D_{i}^{m}{}^{\top}\gamma_{i,t+1})\gamma_{i,t+1}\\!+\\!\theta_{i}^{m}(D_{i}^{m}\\!-\\!D_{i}^{m-1})\\!\Bigg{)},$ $C_{i}^{m+1}\\!=\\!C_{i}^{m}\\!+\\!\sigma_{t}^{\prime}\\!+\\!\psi_{i}^{\prime}{}^{m}\Bigg{(}\\!\textnormal{\sc sign}(\gamma_{i,t+1}\\!-\\!C_{i}^{m}\gamma_{i,t})\gamma_{i,t+1}^{\top}\\!+\\!\theta_{i}^{m}(C_{i}^{m}\\!-\\!C_{i}^{m-1})\\!\Bigg{)},$ where $\psi_{i}^{m},\psi_{i}^{\prime}{}^{m}\\!\in\\!\mathbb{R}_{+}$ are step sizes, $\theta_{i}^{m}\\!\in\\!\mathbb{R}_{+}$ is a momentum coefficient, and $\sigma_{t},\sigma_{t}^{\prime}\\!\in\\!\mathbb{R}$ are Gaussian transition noise over the parameters. Normalize $D_{i}^{m+1}{}^{\top}$ to avoid returning a trivial solution.

Update the causal invariance matrix $G_{i}\\!\in\\!\mathbb{R}^{d_{i}\times k_{i}}$ via dual-estimation filtering, with steps indexed by $m$, $G_{i}^{m+1}\\!=\\!G_{i}^{m}\\!+\\!\sigma_{t}^{\prime\prime}\\!+\\!\psi_{t}^{\prime\prime}{}^{m}\Bigg{(}\\!(\textnormal{exp}(-G_{i}^{m}\kappa_{i+1,t})|\gamma_{i,t+1}|)\kappa_{i,t+1}^{\top}\\!+\\!\theta_{i}^{m}(G_{i}^{m}\\!-\\!G_{i}^{m-1})\\!\Bigg{)},$ where $\psi_{t}^{\prime\prime}{}^{m}\\!\in\\!\mathbb{R}_{+}$ is a step size, $\theta_{i}^{m}\\!\in\\!\mathbb{R}_{+}$ is a momentum coefficient, and $\sigma_{t}^{\prime\prime}\\!\in\\!\mathbb{R}$ is Gaussian-distributed transition noise over the parameters. Normalize $G_{i}^{m+1}$ to avoid returning a trivial solution.

Algorithm 1: Accelerated Deep Prediction Network (ADPCN) Training and Inference

The convergence of dual-estimation filtering is straightforward to demonstrate. For the proximal-gradient inference process, it is much more involved. We build up to it in what follows. We focus on the case where no inference restarts are performed. This is done to theoretically show that the speed-ups we obtain are due solely to the improved convergence rate offered by the new extra-gradient update. The inclusion of multiple restarts serves to delay the eventual inference slowdowns toward the end of inference. Restarts should not improve the maximally achievable convergence rate. We first prove a weak convergence result that facilitates demonstrating a much stronger one when relying on properties of Cauchy sequences. We then quantify the global convergence rate for our chosen inertial sequence. We then compare the global convergence rate to that of a Nesterov-style inertial sequence that was used in the original DPCN to illustrate the advantages of the former for ADPCNs. Lastly, we outline local convergence properties of both inertial sequences to explain the results presented for DPCNs and ADPCNs in the main part of the paper. A high-level overview of both networks is presented in fig. A.1.

| | DPCN | ADPCN |
|---|---|---|
| Unsupervised | Yes | Yes |
| Small models | Yes | Yes |
| Convolutional | No [ChalasaniR-conf2013a, PrincipeJC-jour2014a] / Yes [ChalasaniR-jour2015a] | Yes |
| Inference Restarts | No | Yes |
| Feed-Forward Inference Converg. Rate | $O(\zeta_{i}/m^{2})$ | $O(r\zeta_{i}/(r\\!-\\!1)m^{r})$ |
| Feed-Back Inference Converg. Rate | $O(1)$ | $O(1)$ |
| Feed-Forward Infer. Comput. Complexity | $O(k_{i}^{2})\\!+\\!O(d_{i}^{2})$ | $O(k_{i}^{2})\\!+\\!O(d_{i}^{2})$ |
| Parameter Learning Converg. Rate | $O(1/m)$ | $O(1/m)$ |
| Parameter Learning Comput. Complexity | $O(k_{i}^{2})\\!+\\!O(d_{i}^{2})$ | $O(k_{i}^{2})\\!+\\!O(d_{i}^{2})$ |
Figure A.1: A quantitative comparison of DPCNs and ADPCNs. The main distinction is that ADPCNs enjoy a much faster convergence rate for feed-forward inference. This enables them to better describe stimuli and hence extract useful details from them. Better feed-forward inference naturally impacts feed-back parameter updating.

Toward this end, it is important to show that the distance between a given iterate and the solution set for either inference cost can be bounded by the norm of the residual. This is a local Lipschitzian error bound. The Lipschitzian bound is satisfied whenever the norm of the residual is small. The iterate must also be sufficiently close to the solution set for this bound to be satisfied. * Proposition A.1. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the hidden causes. Let $\pi_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the auxiliary states and let $\pi^{\prime}_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the auxiliary causes at layer $i$ and time $t$. Let $\omega_{1,i}\\!\in\\!\mathbb{R}$ be assumed to satisfy $\omega_{1,i}\geq\mathcal{L}_{1}(\gamma_{i}^{*},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i})$. As well, let $\omega_{2,i}\\!\in\\!\mathbb{R}$ satisfy $\omega_{2,i}\geq\mathcal{L}_{2}(\gamma_{i,t},\kappa_{i}^{*},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})$. For some $\omega_{1,i}$, there are $\epsilon_{1,i},\epsilon_{1,i}^{\prime}\\!\in\\!\mathbb{R}_{+}$, such that, for step size $\tau_{i,t}^{m}\\!\in\\!\mathbb{R}_{+}$, $\textnormal{dist}(\gamma_{i,t+1}^{m},\Gamma_{i}^{*})\leq\epsilon_{1,i}^{\prime}\Bigg{\|}\textnormal{\sc prox}_{\\!1/\ell_{i,j}}\\!\Bigg{(}\\!\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i})\\!\Bigg{)}-\pi_{i,t}^{m}\Bigg{\|}$ whenever the following conditions, $\|\textnormal{\sc prox}_{\\!\lambda_{i,t}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}}\mathcal{L}_{1}(\pi_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t}))\\!-\\!\gamma_{i,t}^{m}\|<\epsilon_{1,i}$ and $\omega_{1,i}\leq\mathcal{L}_{1}(\gamma_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i})$, are satisfied. Here, $\textnormal{dist}(\gamma_{i,t+1}^{m},\Gamma_{i}^{*})$ denotes the distance of a given iterate from the solution set.
Likewise, for some $\omega_{2,i}$, there are $\epsilon_{2,i},\epsilon_{2,i}^{\prime}\\!\in\\!\mathbb{R}_{+}$, such that, for step sizes $\tau^{\prime}_{i,t}{}^{\\!m}\\!\in\\!\mathbb{R}_{+}$, $\textnormal{dist}(\kappa_{i,t}^{m},K_{i}^{*})\leq\epsilon_{2,i}^{\prime}\Bigg{\|}\textnormal{\sc prox}_{\\!1/\ell^{\prime}_{i,j}}\\!\Bigg{(}\\!\pi^{\prime}_{i,t}{}^{\\!m}\\!-\\!\lambda_{i}^{\prime}\tau^{\prime}_{i,t}{}^{\\!m}\nabla_{\pi^{\prime}_{i,t}{}^{\\!m}}\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\\!\Bigg{)}-\pi^{\prime}_{i,t}{}^{\\!m}\Bigg{\|}$ whenever the following conditions, $\|\textnormal{\sc prox}_{\\!\lambda_{i}}(\pi^{\prime}_{i,t}{}^{\\!m}\\!-\\!\lambda_{i}^{\prime}\tau^{\prime}_{i,t}{}^{\\!m}\nabla_{\pi^{\prime}_{i,t}{}^{\\!m}}\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t}))\\!-\\!\kappa_{i,t}^{m}\|<\epsilon_{2,i}$ and $\omega_{2,i}\leq\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})$, are satisfied. Here, $\Gamma_{i}^{*}$, with $\gamma_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$, $\gamma_{i}^{*}\\!\in\\!\mathbb{R}^{k_{i}}$, denotes the solution set for the state-inference cost function. As well, $K_{i}^{*}$, with $\kappa_{i}^{*}\\!\in\\!K_{i}^{*}$, $\kappa_{i}^{*}\\!\in\\!\mathbb{R}^{d_{i}}$, denotes the solution set for the cause-inference cost function. Both $\ell_{i,j},\ell_{i,j}^{\prime}\\!\in\\!\mathbb{R}_{+}$ denote Lipschitz constants of the state and cause costs. * Proof: In what follows, for ease of presentation, we ignore the variables of the inference cost that remain fixed across inference iterations. Let the $L_{1}$-sparsity term in the state-inference cost be re-written in an equivalent manner as $\sum_{k^{\prime}=1}^{k_{i}}[\lambda_{i,t}]_{k^{\prime}}\xi_{i,t}$, with the constraint $|[\gamma_{i,t}]_{k^{\prime}}|\\!-\\!\xi_{i,t}\\!\leq\\!0$, $\xi_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ [TsengP-jour2009a]. There exists some $\omega_{i}\\!\in\\![0,\infty)^{k_{i}}$ such that $(\textnormal{\sc prox}_{\\!1/\ell_{i,j}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m}))-\gamma_{i,t}^{m})+(\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m})-\omega_{i})=0,$ where $\textnormal{\sc prox}_{\\!1/\ell_{i,j}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m}))\\!=\\!\xi_{i,t}$. Here, we assume that the choice of $\pi_{i,t}^{m}$ is such that the inequality conditions are satisfied for $\epsilon_{1,i},\epsilon_{1,i}^{\prime}\\!\in\\!\mathbb{R}_{+}$. As well, there exists some optimal $\pi_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$, $\pi_{i}^{*}\\!\in\\!\mathbb{R}_{+}^{k_{i}}$, and a corresponding $\omega^{*}_{i}\\!\in\\![0,\infty)^{k_{i}}$ such that $\nabla_{\pi_{i}^{*}}\mathcal{L}_{1}(\pi_{i}^{*})-\omega^{*}_{i}\\!=\\!0$, where $\pi_{i}^{*}\\!=\\!\xi_{i,t}$. We also have that, for $\sigma\\!\in\\!\mathbb{R}_{+}$, $2\sigma\|\pi_{i,t}^{m}\\!-\\!\pi_{i}^{*}\|^{2}\\!\leq\\!\langle\pi_{i,t}^{m}\\!-\\!\pi_{i}^{*},\,\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m})\\!-\\!\nabla_{\pi_{i}^{*}}\mathcal{L}_{1}(\pi_{i}^{*})\rangle$.
Hence, for some $\omega_{1,i}^{\prime}\\!\in\\!\mathbb{R}$ that depends on $\omega_{1,i}\\!\in\\!\mathbb{R}$, we have $2\sigma\|\pi_{i,t}^{m}\\!-\\!\pi_{i}^{*}\|\leq(\omega_{1,i}^{\prime}\\!+\\!((\omega_{1,i}^{\prime})^{2}\\!+\\!4\omega_{1,i}^{\prime})^{1/2})\|\pi_{i,t}^{m}\\!-\\!\textnormal{\sc prox}_{\\!1/\ell_{i,j}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m}))\|/2.$ When combined with $\|(\pi_{i,t}^{m},\omega_{i,t})\\!-\\!(\pi_{i}^{*},\omega^{*}_{i})\|\\!\leq\\!\delta_{i}(\|\pi_{i,t}^{m}\\!-\\!\pi_{i}^{*}\|\\!+\\!\|\pi_{i,t}^{m}\\!-\\!\textnormal{\sc prox}_{\\!\lambda_{i,t}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}}\mathcal{L}_{1}(\pi_{i,t}^{m}))\|)$ for $\delta_{i}\\!\in\\!\mathbb{R}_{+}$, we get that $\textnormal{min}_{\pi^{*}\in\Gamma_{i}^{*}}\|\pi_{i,t}^{m}\\!-\\!\pi^{*}\|\\!\leq\\!\epsilon_{1,i}^{\prime}\|\pi_{i,t}^{m}\\!-\\!\pi_{i}^{*}\|$. A similar argument to what is above can be used for the causes.$\;\;\footnotesize\blacksquare$ As Luo and Tseng [LuoZQ-jour1992a] have shown, such locally held bounds are useful for analyzing the rate of convergence of iterative algorithms for constrained, smooth optimization. The bounds can be used to very weakly demonstrate that the sequence of functional iterates will eventually reach the global functional solution, albeit independent of strong inertial-sequence properties. Analogous bounds have been derived by Pang [PangJS-jour1987a] for strongly convex problems. Here, we do not impose the condition of strong convexity for the inference costs, which makes these results applicable to a broader set of functionals, much like the ones that we employ. Tseng and Yun [TsengP-jour2009a] later extended this bound for constrained, non-smooth optimization. However, this was done in the context of gradient descent, not proximal gradients. To provide a more general convergence result, we need to consider specific inertial sequences and incorporate them into the analysis. Toward this end, we first bound the squared residual between the primary iterates, the states $\gamma_{i,t}$ and the causes $\kappa_{i,t}$, and their corresponding auxiliary iterates $\pi_{i,t}$ and $\pi_{i,t}^{\prime}$. * Proposition A.2. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the hidden causes. Let $\pi_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the auxiliary states at layer $i$ and time $t$. Assume that the state update, for a positive step size $\tau_{i,t}^{m}\\!\in\\!\mathbb{R}_{+}$, is given by the relation $\gamma_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i,t}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t}))$, with the auxiliary state update $\pi_{i,t}^{m+1}\\!=\\!\gamma_{i,t+1}^{m}\\!+\\!\beta_{m}(\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1})$. 
Likewise, assume that the cause update, for a positive step size $\tau^{\prime}_{i,t}{}^{\\!m}\\!\in\\!\mathbb{R}_{+}$, is given by the relation $\kappa_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i}}(\pi^{\prime}_{i,t}{}^{\\!m}\\!-\\!\lambda_{i}^{\prime}\tau^{\prime}_{i,t}{}^{\\!m}\nabla_{\pi^{\prime}_{i,t}{}^{\\!m}}\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t}))$, with the auxiliary cause relation $\pi^{\prime}_{i,t}{}^{\\!m+1}\\!=\\!\kappa_{i,t+1}^{m}\\!+\\!\beta^{\prime}_{m}(\kappa_{i,t+1}^{m}\\!-\\!\kappa_{i,t+1}^{m-1})$. In both cases, let $\beta_{m},\beta^{\prime}_{m}\\!=\\!(k_{m}\\!-\\!1)/k_{m+1}$, with elements $k_{m}\\!=\\!1\\!+\\!(m^{r}\\!-\\!1)/d$, where $r\\!\in\\!\mathbb{R}_{1,+}$, $d\\!\in\\!\mathbb{R}_{+}$. There are some $\epsilon_{1,i},\epsilon_{2,i}\\!\in\\!\mathbb{R}_{+}$ such that $\|\pi_{i,t}^{m}\\!-\\!\gamma_{i,t+1}^{m}\|^{2}\geq\frac{\tau_{i,t}^{m}}{\epsilon_{1,i}}\Bigg{(}\\!\mathcal{L}_{1}(\gamma_{i,t+1}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})\\!\Bigg{)}$ $\|\pi^{\prime}_{i,t}{}^{\\!m}\\!-\\!\kappa_{i,t+1}^{m}\|^{2}\geq\frac{\tau^{\prime}_{i,t}{}^{\\!m}}{\epsilon_{2,i}}\Bigg{(}\\!\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i,t+1}^{m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\\!-\\!\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i}^{*},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\\!\Bigg{)}$ where $\gamma_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$, $\gamma_{i}^{*}\\!\in\\!\mathbb{R}_{+}^{k_{i}}$, is a solution of the state-inference cost and $\kappa_{i}^{*}\\!\in\\!K_{i}^{*}$, $\kappa_{i}^{*}\\!\in\\!\mathbb{R}_{+}^{d_{i}}$, is a solution of the cause-inference cost. * Proof: We focus on the case of the hidden states; that for the hidden causes has only slight differences. In what follows, for ease of presentation, we ignore the variables of the inference cost that remain fixed across inference iterations. We have, for sufficiently large $m$, that $\displaystyle\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*})$ $\displaystyle\leq 2\|\gamma_{i,t+1}^{m+1}\\!-\\!\pi_{i,t}^{m+1}\|^{2}/\tau_{i,t}^{m}\\!+\\!\textnormal{dist}(\gamma_{i,t+1}^{m+1},\Gamma_{i}^{*})\|\gamma_{i,t+1}^{m+1}\\!-\\!\pi_{i,t}^{m+1}\|/\tau_{i,t}^{m}$ $\displaystyle\leq\xi_{1,i}^{-1}(4\epsilon_{1,i}^{\prime\prime}\\!+\\!\xi_{1,i})\|\gamma_{i,t+1}^{m+1}\\!-\\!\pi_{i,t}^{m+1}\|^{2}/2\tau_{i,t}^{m}$ where $\epsilon_{1,i}^{\prime\prime}\\!\in\\!\mathbb{R}_{+}$ and $\xi_{1,i}\\!\in\\!\mathbb{R}_{+}$. There exists some $\epsilon_{1,i}\\!\geq\\!\xi_{1,i}^{-1}(4\epsilon_{1,i}^{\prime\prime}\\!+\\!\xi_{1,i})/2$, which must naturally be positive, such that the proposition is true. Here, the second inequality follows from proposition A.1, $\textnormal{dist}(\gamma_{i,t+1}^{m},\Gamma_{i}^{*})\leq\epsilon_{1,i}^{\prime\prime}\|\textnormal{\sc prox}_{\\!\tau_{i,t}^{m}}(\gamma_{i,t+1}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\gamma_{i,t+1}^{m}}\mathcal{L}_{1}(\gamma_{i,t+1}^{m}))\\!-\\!\gamma_{i,t+1}^{m}\|/\ell_{i,t}\tau_{i,t}^{m}.$ This arises from the non-decreasing nature of the proximal-norm function and the Cauchy-Schwarz inequality, which implies that dividing by $\ell_{i,t}\tau_{i,t}^{m}$ leads to a non-increasing function.
The iterate-solution distance can thus be further bounded from above as $\textnormal{dist}(\gamma_{i,t+1}^{m},\Gamma_{i}^{*})\\!\leq\\!2\epsilon_{1,i}^{\prime\prime}\xi_{1,i}\|\gamma_{i,t+1}^{m}\\!-\\!\pi_{i,t}^{m}\|$. Since the relationship holds for arbitrary $m$ sufficiently large, it can be increased by one iteration. A similar argument to what is above can be used for the causes.$\;\;\footnotesize\blacksquare$ We can now bound the functional-value difference between arbitrary iterates, $\gamma_{i,t}$ and $\kappa_{i,t}$, and optimal solutions, $\gamma_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$ and $\kappa_{i}^{*}\\!\in\\!K_{i}^{*}$. Note that, due to the convexity of the two inference costs, every solution is guaranteed to be a globally optimal one. From this result, we will be able to obtain convergence of the function values and iterates. * Proposition A.3. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the hidden causes. Let the inertial sequences used, respectively, for the state-inference and cause-inference costs be $\beta_{m},\beta^{\prime}_{m}\\!=\\!(k_{m}\\!-\\!1)/k_{m+1}$, where $k_{m}\\!=\\!1\\!+\\!(m^{r}\\!-\\!1)/d$ with $r\\!\in\\!\mathbb{R}_{1,+}$, $d\\!\in\\!\mathbb{R}_{+}$. We have that $\sum_{m=1}^{\infty}k_{m+1}^{2}\Bigg{(}\\!\mathcal{L}_{1}(\gamma_{i,t+1}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})\\!\Bigg{)}$ $\sum_{m=1}^{\infty}k_{m+1}^{2}\Bigg{(}\\!\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i,t+1}^{m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\\!-\\!\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i}^{*},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\\!\Bigg{)}$ are convergent. Here, $\gamma_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$, $\gamma_{i}^{*}\\!\in\\!\mathbb{R}^{k_{i}}$, is a solution of the state-inference cost and $\kappa_{i}^{*}\\!\in\\!K_{i}^{*}$, $\kappa_{i}^{*}\\!\in\\!\mathbb{R}^{d_{i}}$, is a solution of the cause-inference cost. We have assumed, here, that the state update, for a positive step size $\tau_{i,t}^{m}\\!\in\\!\mathbb{R}_{+}$, was given by the relation $\gamma_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i,t}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t}))$, with the auxiliary update $\pi_{i,t}^{m+1}\\!=\\!\gamma_{i,t+1}^{m}\\!+\\!\beta_{m}(\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1})$. As well, the cause update, for a positive step size $\tau_{i,t}^{\prime}\\!\in\\!\mathbb{R}_{+}$, was $\kappa_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i}}(\pi^{\prime}_{i,t}{}^{\\!m}\\!-\\!\lambda_{i}^{\prime}\tau^{\prime}_{i,t}{}^{\\!m}\nabla_{\pi^{\prime}_{i,t}{}^{\\!m}}\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t}))$, with the auxiliary cause update $\pi^{\prime}_{i,t}{}^{\\!m+1}\\!=\\!\kappa_{i,t+1}^{m}\\!+\\!\beta^{\prime}_{m}(\kappa_{i,t+1}^{m}\\!-\\!\kappa_{i,t+1}^{m-1})$. * Proof: For ease of presentation, we ignore the variables of the inference cost that remain fixed across inference iterations.
It can be shown that, for some $\xi_{1,i}\\!\in\\!\mathbb{R}_{+}$, $\displaystyle\mathcal{L}_{1}(\gamma_{i,t+1}^{m+1})$ $\displaystyle\leq\mathcal{L}_{1}(\gamma_{i,t+1}^{\prime\,m})\\!-\\!\|\gamma_{i,t+1}^{\prime\,m}\\!-\\!\gamma_{i,t+1}^{m+1}\|^{2}/2\tau_{i,t}^{m}+\|\gamma_{i,t+1}^{\prime\,m}\\!-\\!\pi_{i,t}^{m+1}\|^{2}/2\tau_{i,t}^{m}-(1\\!-\\!\xi_{i,t})\|\gamma_{i,t+1}^{m+1}\\!-\\!\pi_{i,t}^{m+1}\|^{2}/\tau_{i,t}^{m}\vspace{0.05cm}$ $\displaystyle\leq(1\\!-\\!\beta_{m+1}^{-1})\mathcal{L}_{1}(\gamma_{i,t+1}^{m})+\beta_{m+1}^{-1}\mathcal{L}_{1}(\gamma_{i}^{*})+\beta_{m+1}^{-2}\|\beta_{m}\gamma_{i,t+1}^{m+1}\\!-\\!(\beta_{m+1}\\!-\\!1)\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i}^{*}\|^{2}/2\tau_{i,t}^{m}\vspace{0.05cm}$ $\displaystyle\qquad+\beta_{m+1}^{-2}\|\beta_{m}\gamma_{i,t+1}^{m}\\!-\\!(\beta_{m}\\!-\\!1)\gamma_{i,t+1}^{m-1}\\!-\\!\gamma_{i}^{*}\|^{2}/2\tau_{i,t}^{m}-(1\\!-\\!\xi_{i,t})\|\gamma_{i,t+1}^{m+1}\\!-\\!\pi_{i,t}^{m+1}\|^{2}/\tau_{i,t}^{m}$ where $\gamma_{i,t+1}^{\prime\,m}\\!=\\!\beta_{m+1}^{-1}\gamma_{i}^{*}\\!+\\!(1\\!-\\!\beta_{m+1}^{-1})\gamma_{i,t+1}^{m}$. Multiplying both sides by $k_{m+1}^{2}$ and re-arranging terms yields $\displaystyle k_{m+1}^{2}(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))\\!-\\!k_{m+1}^{2}(\mathcal{L}_{1}(\gamma_{i,t+1}^{m+1})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))\vspace{0.05cm}$ $\displaystyle\qquad\geq(k_{m}^{2}\\!-\\!k_{m+1}^{2}\\!-\\!k_{m+1})(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))+k_{m+1}^{2}(1\\!-\\!\xi_{i,t})\|\gamma_{i,t+1}^{m}\\!-\\!\pi_{i,t}^{m}\|^{2}/2\tau_{i,t}^{m}\vspace{0.05cm}$ $\displaystyle\qquad\qquad-\|k_{m+1}\gamma_{i,t+1}^{m+1}\\!-\\!(k_{m+1}\\!-\\!1)\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i}^{*}\|^{2}/\tau_{i,t}^{m}-\|k_{m}\gamma_{i,t+1}^{m}\\!-\\!(k_{m}\\!-\\!1)\gamma_{i,t+1}^{m-1}\\!-\\!\gamma_{i}^{*}\|^{2}/\tau_{i,t}^{m}.$ The result derived in proposition A.2 can be applied to show that $k_{m+1}^{2}(1\\!-\\!\xi_{i,t})\|\gamma_{i,t+1}^{m}\\!-\\!\pi_{i,t}^{m}\|^{2}/2\tau_{i,t}^{m}$ is bounded from below by $k_{m+1}^{2}(1\\!-\\!\xi_{i,t})(\mathcal{L}_{1}(\gamma_{i,t+1}^{m+1})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))/4\epsilon_{1,i}$. Continuing from above, we have that $\displaystyle(2k_{m}^{2}\\!-\\!k_{m+1}^{2}\\!-\\!k_{m+1})(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))-(k_{m}^{2}\\!-\\!k_{m+1})(\mathcal{L}_{1}(\gamma_{i,t+1}^{m+1})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))\vspace{0.05cm}$ $\displaystyle\qquad\geq(k_{m}^{2}\\!-\\!k_{m+1}^{2}\\!-\\!k_{m+1})(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))+k_{m+1}^{2}(1\\!-\\!\xi_{i,t})\|\gamma_{i,t+1}^{m}\\!-\\!\pi_{i,t}^{m}\|^{2}/2\tau_{i,t}^{m}\vspace{0.05cm}$ $\displaystyle\qquad\qquad+k_{m+1}^{2}(1\\!-\\!\xi_{i,t})\|\gamma_{i,t+1}^{m+1}\\!-\\!\pi_{i,t}^{m+1}\|^{2}/4\tau_{i,t}^{m}.$ From this, we can see that $(2k_{m}^{2}\\!-\\!k_{m+1}^{2}\\!-\\!k_{m+1})(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))\\!+\\!k_{m+1}^{2}(1\\!-\\!\xi_{i,t})\|\gamma_{i,t+1}^{m}\\!-\\!\pi_{i,t}^{m}\|^{2}/2\tau_{i,t}^{m}$ is a sequence that is non-increasing in $m$ and bounded below. This implies convergence of the sequence in $m$ and hence in $m\\!+\\!1$.
This takes care of the two terms on the left and the first two terms on the right-hand side. This leaves the final term on the right-hand side, $k_{m+1}^{2}(1\\!-\\!\xi_{i,t})\|\gamma_{i,t+1}^{m+1}\\!-\\!\pi_{i,t}^{m+1}\|^{2}/4\tau_{i,t}^{m}$, which is also convergent in $m$. Applying proposition A.2 to this final term on the right-hand side proves the proposition for the hidden states. A similar argument to the one above applies to the causes.$\;\;\footnotesize\blacksquare$ Based on properties of the inertial sequences $\\{\beta_{m}\\}_{m=1}^{\infty}$ and $\\{\beta_{m}^{\prime}\\}_{m=1}^{\infty}$, particularly that the series of inverse inertial subcomponents $\sum_{m=1}^{\infty}k_{m}^{-1}$ is convergent, we immediately obtain that the state $\\{\gamma_{i,t}^{m}\\}_{m=1}^{\infty}$ and cause $\\{\kappa_{i,t}^{m}\\}_{m=1}^{\infty}$ iterates are Cauchy. The state and cause iterates are thus bounded. The Bolzano-Weierstrass theorem then implies that bounded iterate sequences have convergent subsequences, which applies to our complete, finite-dimensional setting. The iterates themselves are also strongly convergent to global solutions. Convergence of proximal-gradient-type schemes is not new. It, however, needed to be verified for our accelerated case. We are now able to prove the main convergence result of the paper. * Proposition 4. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the hidden causes. The state iterates $\\{\gamma_{i,t+1}^{m}\\}_{m=1}^{\infty}$ strongly converge to the global solution of $\mathcal{L}_{1}(\gamma_{i,t},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})$ for the accelerated proximal gradient scheme. Likewise, the cause iterates $\\{\kappa_{i,t+1}^{m}\\}_{m=1}^{\infty}$ for the accelerated proximal gradient scheme strongly converge to the global solution of $\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i,t},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})$ at a sub-polynomial rate. This occurs when using the inertial sequences $\beta_{m},\beta^{\prime}_{m}\\!=\\!(k_{m}\\!-\\!1)/k_{m+1}$, where $k_{m}$ depends polynomially on $m$. * Proof: Strong convergence of the states $\\{\gamma_{i,t+1}^{m}\\}_{m=1}^{\infty}$ and causes $\\{\kappa_{i,t+1}^{m}\\}_{m=1}^{\infty}$ to the optimal solutions $\gamma_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$ and $\kappa_{i}^{*}\\!\in\\!K_{i}^{*}$ can be obtained from an extension of proposition A.3. For the convergence rate, we note that there is some $\zeta_{i}\\!\in\\!\mathbb{R}_{+}$ such that $\zeta_{i}m^{-r}\\!\geq\\!\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|$. For $m^{\prime}\\!>\\!1$, we have that $\|\gamma_{i,t+1}^{m+m^{\prime}}\\!-\\!\gamma_{i,t+1}^{m}\|$ is bounded above by $\sum_{j=m+1}^{m+m^{\prime}}\|\gamma_{i,t+1}^{j}\\!-\\!\gamma_{i,t+1}^{j-1}\|\\!\leq\\!\zeta_{i}\sum_{j=m+1}^{m+m^{\prime}}j^{-r}$. As $m^{\prime}\\!\to\\!\infty$, $\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i}^{*}\|\\!\leq\\!\zeta_{i}r/((r\\!-\\!1)m^{r})$, which implies a sub-$r$-polynomial rate of convergence for the state iterate sequence. A similar result holds for the cause iterates.$\;\;\footnotesize\blacksquare$ The choice of the inertial sequence greatly affects convergence properties.
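The role that the inertial sequence plays here can be checked numerically. The following minimal Python sketch is an illustration only: the values $r\\!=\\!2$ and $d\\!=\\!3$ are assumptions, not values prescribed above. It computes the polynomial sequence $k_{m}\\!=\\!1\\!+\\!(m^{r}\\!-\\!1)/d$, the classical Nesterov sequence discussed next, and the partial sums of $\sum_{m}k_{m}^{-1}$, whose convergence is what drives the strong-convergence argument.

```python
import numpy as np

# Minimal sketch: compare the polynomial inertial sequence used above,
# k_m = 1 + (m^r - 1)/d with r > 1, against Nesterov's classical sequence,
# k_{m+1} = (1 + sqrt(1 + 4 k_m^2))/2. The values r = 2.0 and d = 3.0 are
# illustrative assumptions, not values prescribed by the text.
def polynomial_k(num_iters, r=2.0, d=3.0):
    m = np.arange(1, num_iters + 1, dtype=float)
    return 1.0 + (m**r - 1.0) / d

def nesterov_k(num_iters):
    k = np.empty(num_iters)
    k[0] = 1.0
    for m in range(1, num_iters):
        k[m] = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * k[m - 1]**2))
    return k

num_iters = 10_000
for name, k in [("polynomial", polynomial_k(num_iters)),
                ("Nesterov", nesterov_k(num_iters))]:
    beta = (k[:-1] - 1.0) / k[1:]     # inertial weights beta_m
    partial = np.cumsum(1.0 / k)      # partial sums of 1/k_m
    print(f"{name}: beta_200 = {beta[199]:.4f}, "
          f"sum 1/k_m up to 10^3 = {partial[999]:.3f}, "
          f"up to 10^4 = {partial[-1]:.3f}")
# For the polynomial sequence, the partial sums of 1/k_m flatten out (the
# series converges); for Nesterov's sequence, they keep growing (the series
# diverges), which is the dichotomy examined below.
```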
The classical sequence proposed by Nesterov, for instance, yields iterates $\\{\gamma_{i,t+1}^{m}\\}_{m=1}^{\infty}$ and $\\{\kappa_{i,t+1}^{m}\\}_{m=1}^{\infty}$ that only weakly converge to global solutions $\gamma_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$ and $\kappa_{i}^{*}\\!\in\\!K_{i}^{*}$, which stems from the fact that $\sum_{m=1}^{\infty}k_{m}^{-1}$, with $k_{m+1}\\!=\\!(1\\!+\\!(1\\!+\\!4k_{m}^{2})^{1/2})/2$, is divergent. In finite-dimensional Euclidean spaces, this is not a shortcoming, since weak convergence implies component-wise convergence there and is thus equivalent to strong convergence. The original DPCN relied on a Nesterov-style sequence, so we analyze its convergence. * Proposition A.4. Let the inertial sequences used, respectively, for the state-inference and cause-inference costs be $\beta_{m},\beta^{\prime}_{m}\\!=\\!(k_{m}\\!-\\!1)/k_{m+1}$, where $k_{m+1}\\!=\\!(1\\!+\\!(1\\!+\\!4k_{m}^{2})^{1/2})/2$. We have that $\sum_{m=1}^{\infty}k_{m}^{2}\Bigg{(}\\!\mathcal{L}_{1}(\gamma_{i,t+1}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})-\mathcal{L}_{1}(\gamma_{i}^{*},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})+\frac{1}{2\tau_{i,t}^{m}}\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}\\!\Bigg{)}$ $\sum_{m=1}^{\infty}k_{m}^{2}\Bigg{(}\\!\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i,t+1}^{m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\,-\,\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i}^{*},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})\,+\,\frac{1}{2\tau^{\prime}_{i,t}{}^{\\!m}}\|\kappa_{i,t+1}^{m}-\kappa_{i,t+1}^{m-1}\|^{2}\\!\Bigg{)}$ are convergent. Here, $\gamma_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$, $\gamma_{i}^{*}\\!\in\\!\mathbb{R}^{k_{i}}$, is a solution of the state-inference cost and $\kappa_{i}^{*}\\!\in\\!K_{i}^{*}$, $\kappa_{i}^{*}\\!\in\\!\mathbb{R}^{d_{i}}$, is a solution of the cause-inference cost. We have assumed, here, that the state update, for a positive step size $\tau_{i,t}^{m}\\!\in\\!\mathbb{R}_{+}$, was given by the relation $\gamma_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i,t}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t}))$, with the auxiliary update $\pi_{i,t}^{m+1}\\!=\\!\gamma_{i,t+1}^{m}\\!+\\!\beta_{m}(\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1})$. As well, the cause update, for a positive step size $\tau^{\prime}_{i,t}{}^{\\!m}\\!\in\\!\mathbb{R}_{+}$, was $\kappa_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i}}(\pi^{\prime}_{i,t}{}^{\\!m}\\!-\\!\lambda_{i}^{\prime}\tau^{\prime}_{i,t}{}^{\\!m}\nabla_{\pi^{\prime}_{i,t}{}^{\\!m}}\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t}))$, with the auxiliary cause update $\pi^{\prime}_{i,t}{}^{\\!m+1}\\!=\\!\kappa_{i,t+1}^{m}\\!+\\!\beta^{\prime}_{m}(\kappa_{i,t+1}^{m}\\!-\\!\kappa_{i,t+1}^{m-1})$. * Proof: For ease of presentation, we ignore the variables of the inference cost that remain fixed across inference iterations.
It can be shown that $\mathcal{L}_{1}(\gamma_{i,t+1}^{m})-\mathcal{L}_{1}(\gamma_{i}^{*})+\beta_{m}^{2}\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}/2\tau_{i,t}^{m}\geq\mathcal{L}_{1}(\gamma_{i,t+1}^{m+1})-\mathcal{L}_{1}(\gamma_{i}^{*})+\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m+1}\|^{2}/2\tau_{i,t}^{m+1}.$ Multiplying both sides by $k_{m+1}^{3}$, performing an addition by zero, and re-arranging terms yields $\displaystyle k_{m+1}^{3}(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))+k_{m+1}^{3}\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}/2\tau_{i,t}^{m}\vspace{0.05cm}$ $\displaystyle\qquad\leq(k_{m+1}^{3}\\!+\\!k_{m}^{3}\\!-\\!k_{m}^{3})(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))-k_{m+1}(k_{m}\\!-\\!1)^{2}\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}/2\tau_{i,t}^{m}\vspace{0.05cm}$ $\displaystyle\qquad\leq k_{m}^{3}(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))+k^{\prime}(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))-(k_{m}^{2}(2\\!-\\!k^{\prime})\\!+\\!k_{m}(2k^{\prime}\\!-\\!1)\\!-\\!k^{\prime})\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}/2\tau_{i,t}^{m}\vspace{0.05cm}$ $\displaystyle\qquad\qquad+k_{m}^{3}\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}/2\tau_{i,t}^{m}.$ The last inequality follows because $\sum_{m=1}^{\infty}k_{m}^{-1}$ is divergent. In this case, there exists some $0\\!<\\!k^{\prime}\\!<\\!2$ such that $k_{m+1}\\!-\\!k_{m}\\!\leq\\!k^{\prime}$ for all $m\\!>\\!m^{\prime}$, $m^{\prime}\\!>\\!0$. We therefore have that $\displaystyle k^{\prime}(k_{m}^{2}\\!+\\!k_{m}\\!+\\!1)(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*}))\vspace{0.05cm}$ $\displaystyle\qquad\geq k_{m+1}^{3}(\mathcal{L}_{1}(\gamma_{i,t+1}^{m+1})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*})+\|\gamma_{i,t+1}^{m+1}\\!-\\!\gamma_{i,t+1}^{m}\|^{2}/2\tau_{i,t}^{m+1})+k_{m}^{3}(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*})\vspace{0.05cm}$ $\displaystyle\qquad\qquad+\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}/2\tau_{i,t}^{m}).$ Re-organizing terms allows us to demonstrate that $\sum_{m=1}^{\infty}\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}/2\tau_{i,t}^{m}$ is convergent via a Cauchy test. This implies that $\sum_{m=1}^{\infty}k_{m}^{2}(\mathcal{L}_{1}(\gamma_{i,t+1}^{m})\\!-\\!\mathcal{L}_{1}(\gamma_{i}^{*})\\!+\\!\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|^{2}/2\tau_{i,t}^{m})$ is also convergent. A similar argument to the one above applies to the causes.$\;\;\footnotesize\blacksquare$ The rate of convergence, though, is limited when choosing a Nesterov-style inertial sequence, both locally and globally. In the global case, we have that * Proposition A.5. Let $\gamma_{i,t}\\!\in\\!\mathbb{R}^{k_{i}}$ be the hidden states and $\kappa_{i,t}\\!\in\\!\mathbb{R}^{d_{i}}$ be the hidden causes. The state iterates $\\{\gamma_{i,t+1}^{m}\\}_{m=1}^{\infty}$ strongly converge to the global solution of $\mathcal{L}_{1}(\gamma_{i,t},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t})$ for the accelerated proximal gradient scheme.
Likewise, the cause iterates $\\{\kappa_{i,t+1}^{m}\\}_{m=1}^{\infty}$ for the accelerated proximal gradient scheme strongly converge to the global solution of $\mathcal{L}_{2}(\gamma_{i,t+1},\kappa_{i,t},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t})$ at a sub-quadratic rate. This occurs when using the inertial sequences $\beta_{m},\beta^{\prime}_{m}\\!=\\!(k_{m}\\!-\\!1)/k_{m+1}$, where $k_{m+1}\\!=\\!(1\\!+\\!(1\\!+\\!4k_{m}^{2})^{1/2})/2$. * Proof: Strong convergence of the states $\\{\gamma_{i,t+1}^{m}\\}_{m=1}^{\infty}$ and causes $\\{\kappa_{i,t+1}^{m}\\}_{m=1}^{\infty}$ to the optimal solutions $\gamma_{i}^{*}\\!\in\\!\Gamma_{i}^{*}$ and $\kappa_{i}^{*}\\!\in\\!K_{i}^{*}$ can be obtained from an extension of proposition A.4. For the convergence rate, we note that there is some $\zeta_{i}\\!\in\\!\mathbb{R}_{+}$ such that $\zeta_{i}m^{-3}\\!\geq\\!\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}\|$. For $m^{\prime}\\!>\\!1$, we have that $\|\gamma_{i,t+1}^{m+m^{\prime}}\\!-\\!\gamma_{i,t+1}^{m}\|$ is bounded above by $\sum_{j=m+1}^{m+m^{\prime}}\|\gamma_{i,t+1}^{j}\\!-\\!\gamma_{i,t+1}^{j-1}\|\\!\leq\\!\zeta_{i}\sum_{j=m+1}^{m+m^{\prime}}j^{-3}$. As $m^{\prime}\\!\to\\!\infty$, $\|\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i}^{*}\|\\!\leq\\!\zeta_{i}/(2m^{2})$, which implies a sub-quadratic rate of convergence for the state iterate sequences. A similar argument to the one above demonstrates the convergence rate for the cause iterate sequences.$\;\;\footnotesize\blacksquare$ From this proposition, we see that a polynomial extra-gradient step permits making either equal or larger adjustments, on average, per iteration than the Nesterov step. It is, in essence, artificially adapting the step size without causing divergence. The ADPCN iterates thus often reach a given loss magnitude more quickly than those for the DPCN. This potentially improved step size offered by a polynomial inertial sequence has a major impact on ADPCN performance. A feed-forward, bottom-up inference problem is solved to propagate a batch of stimuli through the network and convert it into states and causes. This problem has to be solved at every stage, which we do using the method of proximal gradients. If a subpar solution is returned for the bottom-up inference, then it impacts the quality of the parameter updates. This, in turn, affects the network response for the next batch of stimuli. A great many presentations of the stimuli may be needed to counteract this issue. There is a chance that the corresponding network parameters learned using a linear extra-gradient step may never approach the same performance as those for our polynomial-based step. Note that a feed-back, top-down inference problem must also be solved at every stage to back-propagate the causes to earlier layers. However, top-down inference has a global best solution that can be obtained directly, without relying on iterative processes. This is why we listed it as having a constant-time convergence rate in fig. A.1. To better understand why the convergence is better with a polynomial inertial sequence, it is helpful to re-cast the proximal gradient updates in a way that permits understanding local convergence behaviors using spectral analysis. We do this first for the state-inference process. * Proposition A.6.
The state update $\gamma_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i,t}}(\pi_{i,t}^{m}\\!-\\!\lambda_{i,t}\tau_{i,t}^{m}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}(\pi_{i,t}^{m},\kappa_{i,t},C_{i},D_{i}^{\top};\alpha_{i},\lambda_{i,t}))$, with $\pi_{i,t}^{m+1}\\!=\\!\gamma_{i,t+1}^{m}\\!+\\!\beta_{m}(\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1})$, is equivalent to $\gamma_{i,t+1}^{m}\\!=\\!\textnormal{\sc shrink}(w_{i,t}^{m};\lambda_{i,t}/\ell_{i,t})$, for the auxiliary variable $w_{i,t}^{m}\\!=\\!\Bigg{(}\\!I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i}\\!\Bigg{)}\pi_{i,t}^{m}\\!+\\!\ell_{i,t}^{-1}D_{i}^{\top}\kappa_{i-1,t}\\!+\\!\alpha_{i}\ell_{i,t}^{-1}\textnormal{\sc proj}_{\infty}\Bigg{(}\\!(\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i}\\!\Bigg{)},$ where $I_{k_{i}\times k_{i}}\\!\in\\!\mathbb{R}_{+}^{k_{i}\times k_{i}}$ is the identity matrix and $0_{1\times 2k_{i}}\\!\in\\!\mathbb{R}^{2k_{i}}$ is a row vector of zeros. This is equivalent to the matrix recurrence $(w^{m+1}_{i,t},w^{m}_{i,t},1)^{\top}\\!\\!=\\!S_{i,t}^{m}(w^{m}_{i,t},w^{m-1}_{i,t},1)^{\top}$. More specifically, $\begin{pmatrix}w^{m+1}_{i,t}\vspace{0.05cm}\\\ w^{m}_{i,t}\vspace{0.05cm}\\\ 1\end{pmatrix}=\underbrace{\begin{pmatrix}W_{i,t}^{m}&\ell_{i,t}^{-1}D_{i}^{\top}\kappa_{i-1,t}\\!+\\!(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})z_{i,t}^{m}\\!+\\!\alpha_{i}\ell_{i,t}^{-1}f_{i,t}^{m}\vspace{0.05cm}\\\ 0_{1\times 2k_{i}}&1\\\ \end{pmatrix}}_{S_{i,t}^{m}}\\!\begin{pmatrix}w^{m}_{i,t}\vspace{0.05cm}\\\ w^{m-1}_{i,t}\vspace{0.05cm}\\\ 1\end{pmatrix}\vspace{-0.1cm}$ where $z_{i,t}^{m}\\!=\\!-(1\\!+\\!\beta_{m})\lambda_{i,t}s_{i,t}^{m}/\ell_{i,t}\\!+\\!\beta_{m}\lambda_{i,t}s_{i,t}^{m-1}/\ell_{i,t}$, with $s_{i,t}^{m}\\!=\\!\textnormal{\sc sign}(\textnormal{\sc shrink}(w^{m}_{i,t};\lambda_{i,t}/\ell_{i,t}))$. The term $f_{i,t}^{m}$ accounts for the Nesterov-smoothed component, $f_{i,t}^{m}\\!=\\!\textnormal{\sc proj}_{\infty}(((H_{i,t}^{m})^{2}w_{i,t}^{m}\\!-\\!\lambda_{i,t}\ell_{i,t}^{-1}s_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i})$, which is given by projecting the $L_{1}$-sparse state-transition component onto an $L_{\infty}$ ball. The matrix $W_{i,t}^{m}\\!\in\\!\mathbb{R}^{2k_{i}\times 2k_{i}}$ is $W_{i,t}^{m}=\begin{pmatrix}(1\\!+\\!\beta_{m})(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(H_{i,t}^{m})^{2}&-\beta_{m}(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(H_{i,t}^{m-1})^{2}\vspace{0.05cm}\\\ I_{k_{i}\times k_{i}}&0_{k_{i}\times k_{i}}\end{pmatrix}.$ Here, $\ell_{i,t}\\!\in\\!\mathbb{R}_{0,+}$ is the Lipschitz constant of the state-inference cost at a given layer $i$ and for the current batch $t$. The flag matrix $H_{i,t}^{m}\\!=\\!\textnormal{\sc diag}(\textnormal{\sc sign}(\textnormal{\sc shrink}(w^{m}_{i,t};\lambda_{i,t}/\ell_{i,t})))$ is diagonal and relies on a sparse shrinkage process for the auxiliary variable.
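Before the proof, a minimal Python sketch of the two elementwise nonlinearities in this recurrence may help; it is an illustration under synthetic values, not the authors' implementation. It implements the soft-threshold shrinkage, the $L_{\infty}$-ball projection, and the flag matrix, and checks the identity $\textnormal{\sc shrink}(w)\\!=\\!H^{2}w\\!-\\!(\lambda_{i,t}/\ell_{i,t})\,\textnormal{\sc sign}(\textnormal{\sc shrink}(w))$ used in the proof below.

```python
import numpy as np

# Minimal sketch of the two elementwise nonlinearities in the recurrence;
# this is an illustration, not the authors' implementation.
def shrink(w, threshold):
    """Soft-threshold shrinkage: prox of threshold * ||.||_1."""
    return np.sign(w) * np.maximum(np.abs(w) - threshold, 0.0)

def proj_linf(u):
    """Projection onto the unit L-infinity ball (elementwise clipping)."""
    return np.clip(u, -1.0, 1.0)

def flag_matrix(w, threshold):
    """Diagonal flag matrix H = diag(sign(shrink(w; threshold))); its
    nonzero entries mark the active (non-shrunk) coordinates."""
    return np.diag(np.sign(shrink(w, threshold)))

# Example: one evaluation of the state shrinkage and its flag matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
lam_over_ell = 0.5                  # plays the role of lambda_{i,t}/ell_{i,t}
gamma = shrink(w, lam_over_ell)     # gamma_{i,t+1}^m = shrink(w_{i,t}^m; .)
H = flag_matrix(w, lam_over_ell)
# Identity used in the proof: shrink(w) = H^2 w - (lam/ell) * sign(shrink(w)).
assert np.allclose(gamma, H @ H @ w - lam_over_ell * np.sign(gamma))
```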
* Proof: The underlying update for accelerated proximal gradients can be re-written as $\displaystyle\gamma_{i,t+1}^{m}$ $\displaystyle=\textnormal{arg min}_{\pi}(\|\pi\\!-\\!(\pi_{i,t}^{m}\\!-\\!\ell_{i,t}^{-1}\nabla_{\pi_{i,t}^{m}}\mathcal{L}_{1}^{\prime}(\pi_{i,t}^{m}))\|^{2}/2\ell_{i,t}\\!+\\!\lambda_{i,t}\|\pi\|_{1})\vspace{0.05cm}$ $\displaystyle=\textnormal{\sc shrink}((I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})\pi_{i,t}^{m}\\!+\\!\ell_{i,t}^{-1}D_{i}^{\top}\kappa_{i-1,t}\\!+\\!\alpha_{i}\ell_{i,t}^{-1}\textnormal{\sc proj}_{\infty}((\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i});\lambda_{i,t}/\ell_{i,t})$ where $\mathcal{L}_{1}^{\prime}(\pi_{i,t}^{m})$ represents the state-inference cost but without the $L_{1}$-sparsity constraint on the states. Here, we have used a Nesterov smoothing approach, with $\mu_{i}\\!\in\\!\mathbb{R}_{+}$, to deal with the $L_{1}$-sparse state-transition update, $\textnormal{arg max}_{\|\Omega_{i,t}\|_{\infty}\leq 1}\Omega_{i,t}^{\top}(\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})\\!-\\!\mu_{i}\|\Omega_{i,t}\|^{2}_{2}/2\\!=\\!\textnormal{\sc proj}_{\infty}((\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i})$. This projection onto an $L_{\infty}$-ball has the closed-form solution $\textnormal{\sc proj}_{\infty}((\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i})=\begin{cases}1,&(\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i}\\!>\\!1\vspace{0.05cm}\\\ (\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i},&-1\\!\leq\\!(\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i}\\!\leq\\!1\vspace{0.05cm}\\\ -1,&(\pi_{i,t}^{m}\\!-\\!C_{i}\gamma_{i,t-1})/\mu_{i}\\!<\\!-1\end{cases}.$ We replace the states by the auxiliary variable $w_{i,t}^{m}$ and note that $\gamma_{i,t+1}^{m}=\textnormal{\sc shrink}(w_{i,t}^{m};\lambda_{i,t}/\ell_{i,t})$, where $\textnormal{\sc shrink}(w_{i,t}^{m};\lambda_{i,t}/\ell_{i,t})=\textnormal{\sc diag}(\textnormal{\sc sign}(\textnormal{\sc shrink}(w_{i,t}^{m};\lambda_{i,t}/\ell_{i,t})))^{2}w_{i,t}^{m}\\!-\\!\lambda_{i,t}\textnormal{\sc sign}(\textnormal{\sc shrink}(w_{i,t}^{m};\lambda_{i,t}/\ell_{i,t}))/\ell_{i,t},$ $\textnormal{\sc sign}(\textnormal{\sc shrink}(w_{i,t}^{m};\lambda_{i,t}/\ell_{i,t}))=\begin{cases}1,&w_{i,t}^{m}\\!>\\!\lambda_{i,t}/\ell_{i,t}\vspace{0.05cm}\\\ 0,&-\lambda_{i,t}/\ell_{i,t}\\!\leq\\!w_{i,t}^{m}\\!\leq\\!\lambda_{i,t}/\ell_{i,t}\vspace{0.05cm}\\\ -1,&w_{i,t}^{m}\\!<\\!-\lambda_{i,t}/\ell_{i,t}\end{cases}.$ We can systematically re-write the auxiliary-variable update as $\displaystyle w_{i,t}^{m+1}$ $\displaystyle=(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(\gamma_{i,t+1}^{m}\\!+\\!\beta_{m}(\gamma_{i,t+1}^{m}\\!-\\!\gamma_{i,t+1}^{m-1}))+\ell_{i,t}^{-1}D_{i}^{\top}\kappa_{i-1,t}+\alpha_{i}\ell_{i,t}^{-1}\textnormal{\sc proj}_{\infty}((\gamma_{i,t+1}^{m}\\!-\\!C_{i}\gamma_{i,t})/\mu_{i})\vspace{0.05cm}$ $\displaystyle=(1\\!+\\!\beta_{m})(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})((H_{i,t}^{m})^{2}w_{i,t}^{m}\\!-\\!\lambda_{i,t}s_{i,t}^{m}/\ell_{i,t})-\beta_{m}(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})((H_{i,t}^{m-1})^{2}w_{i,t}^{m-1}\\!-\\!\lambda_{i,t}s_{i,t}^{m-1}/\ell_{i,t})\vspace{0.05cm}$ $\displaystyle\qquad+\ell_{i,t}^{-1}D_{i}^{\top}\kappa_{i-1,t}+\alpha_{i}\ell_{i,t}^{-1}\textnormal{\sc proj}_{\infty}((\gamma_{i,t+1}^{m}\\!-\\!C_{i}\gamma_{i,t})/\mu_{i})\vspace{0.05cm}$ $\displaystyle=(1\\!+\\!\beta_{m})(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(H_{i,t}^{m})^{2}w_{i,t}^{m}-\beta_{m}(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(H_{i,t}^{m-1})^{2}w_{i,t}^{m-1}+\ell_{i,t}^{-1}D_{i}^{\top}\kappa_{i-1,t}\vspace{0.05cm}$ $\displaystyle\qquad+(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(-(1\\!+\\!\beta_{m})\lambda_{i,t}s_{i,t}^{m}/\ell_{i,t}\\!+\\!\beta_{m}\lambda_{i,t}s_{i,t}^{m-1}/\ell_{i,t})+\alpha_{i}\ell_{i,t}^{-1}\textnormal{\sc proj}_{\infty}((\gamma_{i,t+1}^{m}\\!-\\!C_{i}\gamma_{i,t})/\mu_{i})$ with $s_{i,t}^{m}\\!=\\!\textnormal{\sc sign}(\textnormal{\sc shrink}(w^{m}_{i,t};\lambda_{i,t}/\ell_{i,t}))$. The parenthesized shrinkage term in the final line is exactly $z_{i,t}^{m}$, so the matrix recurrence follows from this update.$\;\;\footnotesize\blacksquare$ We now characterize the cause inference in a similar manner. * Proposition A.7. The cause update $\kappa_{i,t+1}^{m}\\!=\\!\textnormal{\sc prox}_{\\!\lambda_{i}}(\pi^{\prime}_{i,t}{}^{\\!m}\\!-\\!\lambda_{i}^{\prime}\tau^{\prime}_{i,t}{}^{\\!m}\nabla_{\pi^{\prime}_{i,t}{}^{\\!m}}\mathcal{L}_{2}(\gamma_{i,t+1},\pi^{\prime}_{i,t}{}^{\\!m},G_{i};\alpha_{i}^{\prime},\lambda_{i}^{\prime},\eta_{i}^{\prime},\lambda_{i,t}))$, with $\pi^{\prime}_{i,t}{}^{\\!m+1}\\!=\\!\kappa_{i,t+1}^{m}\\!+\\!\beta^{\prime}_{m}(\kappa_{i,t+1}^{m}\\!-\\!\kappa_{i,t+1}^{m-1})$, is equivalent to $\kappa_{i,t+1}^{m}\\!=\\!\textnormal{\sc shrink}(v_{i,t}^{m};\lambda_{i}^{\prime}/\ell^{\prime}_{i,t})$, for the auxiliary variable $v_{i,t}^{m}\\!=\\!\Bigg{(}\\!I_{d_{i}\times d_{i}}\\!-\\!1/\ell^{\prime}_{i,t}\\!\Bigg{)}\pi_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!+\\!2\eta_{i}^{\prime}I_{d_{i}\times d_{i}}\kappa_{i,t}^{\prime}/\ell_{i,t}^{\prime}\\!-\\!\alpha_{i}^{\prime}/\ell_{i,t}^{\prime}\Bigg{(}\\!G_{i}^{\top}\textnormal{exp}(-G_{i}\pi_{i,t}^{\prime}{}^{\\!\\!\\!\\!m})|\gamma_{i,t+1}^{j}|\\!\Bigg{)},$ where $I_{d_{i}\times d_{i}}\\!\in\\!\mathbb{R}_{+}^{d_{i}\times d_{i}}$ is the identity matrix and $0_{1\times 2d_{i}}\\!\in\\!\mathbb{R}^{2d_{i}}$ is a row vector of zeros. This is equivalent to the matrix recurrence $(v^{m+1}_{i,t},v^{m}_{i,t},1)^{\top}\\!\\!=\\!T_{i,t}^{m}(v^{m}_{i,t},v^{m-1}_{i,t},1)^{\top}$. More specifically, $\begin{pmatrix}v^{m+1}_{i,t}\vspace{0.05cm}\\\ v^{m}_{i,t}\vspace{0.05cm}\\\ 1\end{pmatrix}=\underbrace{\begin{pmatrix}V_{i,t}^{m}&2\eta_{i}^{\prime}I_{d_{i}\times d_{i}}\kappa_{i,t}^{\prime}/\ell_{i,t}^{\prime}\\!+\\!(I_{d_{i}\times d_{i}}\\!-\\!1/\ell_{i,t}^{\prime})z_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!-\\!\alpha_{i}^{\prime}g_{i,t}^{m}/\ell_{i,t}^{\prime}\vspace{0.05cm}\\\ 0_{1\times 2d_{i}}&1\\\ \end{pmatrix}}_{T_{i,t}^{m}}\\!\begin{pmatrix}v^{m}_{i,t}\vspace{0.05cm}\\\ v^{m-1}_{i,t}\vspace{0.05cm}\\\ 1\end{pmatrix}\vspace{-0.1cm}$ where $z_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!=\\!-(1\\!+\\!\beta_{m}^{\prime})\lambda_{i}^{\prime}q_{i,t}^{m}/\ell_{i,t}^{\prime}\\!+\\!\beta_{m}^{\prime}\lambda_{i}^{\prime}q_{i,t}^{m-1}/\ell_{i,t}^{\prime}$, with $q_{i,t}^{m}\\!=\\!\textnormal{\sc sign}(\textnormal{\sc shrink}(v^{m}_{i,t};\lambda_{i}^{\prime}/\ell_{i,t}^{\prime}))$. The term $g_{i,t}^{m}$ accounts for the invariant-matrix component, $g_{i,t}^{m}\\!=\\!G_{i}^{\top}\textnormal{exp}(-G_{i}v_{i,t}^{\prime}{}^{\\!\\!\\!\\!m})|\gamma_{i,t+1}^{j}|$.
The matrix $V_{i,t}^{m}\\!\in\\!\mathbb{R}^{2d_{i}\times 2d_{i}}$ is $V_{i,t}^{m}=\begin{pmatrix}(1\\!+\\!\beta_{m}^{\prime})(I_{d_{i}\times d_{i}}\\!-\\!1/\ell_{i,t}^{\prime})(M_{i,t}^{m})^{2}&-\beta_{m}^{\prime}(I_{d_{i}\times d_{i}}\\!-\\!1/\ell_{i,t}^{\prime})(M_{i,t}^{m-1})^{2}\vspace{0.05cm}\\\ I_{d_{i}\times d_{i}}&0_{d_{i}\times d_{i}}\end{pmatrix}.$ Here, $\ell_{i,t}^{\prime}\\!\in\\!\mathbb{R}_{0,+}$ is the Lipschitz constant of the cause-inference cost at a given layer $i$ and for the current batch $t$. The flag matrix $M_{i,t}^{m}\\!=\\!\textnormal{\sc diag}(\textnormal{\sc sign}(\textnormal{\sc shrink}(v^{m}_{i,t};\lambda_{i}^{\prime}/\ell_{i,t}^{\prime})))$ is diagonal and relies on a sparse shrinkage process for the auxiliary variable. We now list spectral properties of the iteration sub-matrices $W_{i,t}^{m}\\!\in\\!\mathbb{R}^{2k_{i}\times 2k_{i}}$ and $V_{i,t}^{m}\\!\in\\!\mathbb{R}^{2d_{i}\times 2d_{i}}$. The validity of these claims follows from extensions of work on the alternating direction method of multipliers [BoleyD-jour2013a]. * Lemma A.1. Suppose that the flag matrices across consecutive iterations $m$ of the hidden state and cause updates respectively satisfy $H_{i,t}^{m-1}\\!=\\!H_{i,t}^{m}\\!=\\!H_{i,t}^{m+1}$ and $M_{i,t}^{m-1}\\!=\\!M_{i,t}^{m}\\!=\\!M_{i,t}^{m+1}$. The iteration matrices $W_{i,t}^{m}$ and $V_{i,t}^{m}$ are different at each step and satisfy: * (i) $\|W_{i,t}^{m}\|_{2}\\!\leq\\!1$ and $\|V_{i,t}^{m}\|_{2}\\!\leq\\!1$. Also, $\|(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})H_{i,t}^{m}\|_{2}\\!\leq\\!1$ and $\|(I_{d_{i}\times d_{i}}\\!-\\!1/\ell_{i,t}^{\prime})M_{i,t}^{m}\|_{2}\\!\leq\\!1$. * (ii) For any $0\\!<\\!\beta_{m},\beta_{m}^{\prime}\\!\leq\\!1$, the eigenvalues of $W_{i,t}^{m}$ and $V_{i,t}^{m}$ lie in a closed circle in the real-complex plane that is centered at $(\frac{1}{2},0)$ and that has a radius of $\frac{1}{2}$. If either of these iteration matrices has an eigenvalue with absolute value $\rho(W_{i,t}^{m})\\!=\\!1$ or $\rho(V_{i,t}^{m})\\!=\\!1$, then that eigenvalue must have no imaginary component. If the step sizes are such that $\beta_{m},\beta_{m}^{\prime}\\!<\\!1$ and if $W_{i,t}^{m}$ and $V_{i,t}^{m}$ have eigenvalues of one, then these eigenvalues must have a complete set of eigenvectors. The full iteration matrices have spectral decompositions $S_{i,t}^{m}\\!=\\!P_{i,t}^{m}J^{m}_{i,t}(P^{m}_{i,t})^{-1}$ and $T_{i,t}^{m}\\!=\\!Q^{m}_{i,t}R^{m}_{i,t}(Q^{m}_{i,t})^{-1}$ where the block-diagonal eigenvalue matrices $J^{m}_{i,t}$ and $R^{m}_{i,t}$ have the form $J^{m}_{i,t}\\!=\\!\begin{pmatrix}\\!\begin{pmatrix}1&1\vspace{0.05cm}\\\ 0&1\end{pmatrix}\\!\\!\\!&0&0\vspace{0.05cm}\\\ 0&I^{m}_{i,t}\\!\\!&0\vspace{0.05cm}\\\ 0&0&J^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}\end{pmatrix}\\!,\;\;R^{m}_{i,t}\\!=\\!\begin{pmatrix}\\!\begin{pmatrix}1&1\vspace{0.05cm}\\\ 0&1\end{pmatrix}\\!\\!\\!&0&0\vspace{0.05cm}\\\ 0&I^{m}_{i,t}\\!\\!&0\vspace{0.05cm}\\\ 0&0&R^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}\end{pmatrix}\\!,$ where $I^{m}_{i,t}$ is an appropriately sized identity matrix that will depend on the flag matrix at iteration $m$ for layer $i$ and batch $t$. Here, we have the condition that the spectral radii $\rho(J^{\prime}_{i,t}{}^{\\!\\!\\!\\!m})\\!<\\!1$ and $\rho(R^{\prime}_{i,t}{}^{\\!\\!\\!\\!m})\\!<\\!1$. This lemma suggests that there are multiple local phases that can arise from the matrix-based proximal-gradient recurrence.
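The spectral claims in lemma A.1 are straightforward to spot-check numerically. In the sketch below, the dictionary $D_{i}$, the inertial weights, and the flag pattern are synthetic assumptions rather than quantities taken from a trained network.

```python
import numpy as np

# Numerical spot-check of the spectral claims in lemma A.1 for the state
# sub-matrix W_{i,t}^m; the dictionary D, inertial weights beta, and flag
# pattern are synthetic illustrations, not values from a trained network.
rng = np.random.default_rng(1)
k_i = 8
D = rng.normal(size=(k_i, k_i))
ell = np.linalg.eigvalsh(D.T @ D).max()   # Lipschitz constant of the smooth part
A = np.eye(k_i) - (D.T @ D) / ell         # I - ell^{-1} D^T D
H = np.diag(rng.integers(0, 2, size=k_i).astype(float))  # random flag matrix

for beta in (0.3, 0.7, 0.99):
    # Same flag matrix in both blocks, matching H^{m-1} = H^m in the lemma.
    W = np.block([[(1 + beta) * A @ H @ H, -beta * A @ H @ H],
                  [np.eye(k_i), np.zeros((k_i, k_i))]])
    eigs = np.linalg.eigvals(W)
    # Claim (ii): eigenvalues lie in the closed disc centred at 1/2, radius 1/2.
    in_disc = np.all(np.abs(eigs - 0.5) <= 0.5 + 1e-9)
    print(f"beta = {beta}: spectral radius = {np.abs(eigs).max():.4f}, "
          f"in disc = {in_disc}")
```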
These phases depend on how the flag matrix shapes the spectral characteristics of the full iteration matrices that define the matrix recurrence. If the flag matrices remain the same across consecutive iterations, then the total-iteration-matrix operator remains invariant. The structure of the spectrum for that operator controls the convergence behavior of the process. If the flag matrix changes, then the set of active constraints at the current pass in the process has changed across consecutive iterations. The current iteration is thus a transition to a different operator with a different eigenstructure. The algorithm then searches for a correct set of active constraints. The specific phases are distinguished by the eigenstructure of the total-iteration-matrix operator. In what follows, we make explicit the phases that can emerge. This is a slight extension of the work on the alternating direction method of multipliers [BoleyD-jour2013a]. It is readily applicable to the proximal-gradient case, however, due to the similar treatment of the sparsity terms via a splitting process. * Proposition A.8. Suppose that the flag matrices across consecutive iterations $m$ of the hidden state and cause updates respectively satisfy $H_{i,t}^{m-1}\\!=\\!H_{i,t}^{m}$ and $M_{i,t}^{m-1}\\!=\\!M_{i,t}^{m}$. Let the eigendecompositions of the total-iteration matrices be such that $S_{i,t}^{m}\\!=\\!P_{i,t}^{m}J^{m}_{i,t}(P^{m}_{i,t})^{-1}$ and $T_{i,t}^{m}\\!=\\!Q^{m}_{i,t}R^{m}_{i,t}(Q^{m}_{i,t})^{-1}$, where $J^{m}_{i,t}$ and $R^{m}_{i,t}$ have block-diagonal forms. Then, the iterates can belong to one of the following phases: * (i) Let the spectral radii $\rho(W_{i,t}^{m})\\!<\\!1$ and $\rho(V_{i,t}^{m})\\!<\\!1$. In this case, the $\mathbb{R}_{0,+}^{2\times 2}$ Jordan blocks in the upper-left corners of the eigenvalue matrices $J_{i,t}^{m}$ and $R_{i,t}^{m}$ are not present. The identity-matrix blocks $I_{i,t}^{m}\\!\in\\!\mathbb{R}^{1\times 1}_{+}$ are degenerate. As long as the flag matrices $H_{i,t}^{m}$ and $M_{i,t}^{m}$ do not change across $m$ and if the iterates are close enough to the optimal solution, then linear convergence is achieved to that solution. Such solutions are unique fixed points, which are eigenvectors $[P_{i,t}^{m}]_{1:2k_{i}+1,2k_{i}+1}$ and $[Q_{i,t}^{m}]_{1:2d_{i}+1,2d_{i}+1}$ of $S_{i,t}^{m}$ and $T_{i,t}^{m}$ with unit eigenvalues. If the eigenvectors are non-negative, then they satisfy the Karush-Kuhn-Tucker conditions for the state and cause inference costs. * (ii) If $\rho(W_{i,t}^{m})\\!=\\!1$ and $\rho(V_{i,t}^{m})\\!=\\!1$, then $S_{i,t}^{m}$ and $T_{i,t}^{m}$ both have non-trivial $\mathbb{R}_{0,+}^{2\times 2}$ Jordan blocks in the upper-left corners of $J_{i,t}^{m}$ and $R_{i,t}^{m}$. There are no other eigenvalues on the unit circle. The theory of the power method implies that the vector iterates will converge to an invariant subspace corresponding to the unit eigenvalue. The presence of the non-trivial Jordan block implies the existence of a Jordan chain. That is, there are non-zero vectors $\varphi,\varphi^{\prime}\\!\in\\!\mathbb{R}^{2k_{i}+1}_{0,+}$ and $\phi,\phi^{\prime}\\!\in\\!\mathbb{R}^{2d_{i}+1}_{0,+}$ such that the relations $(S_{i,t}^{m}\\!-\\!I_{2k_{i}+1\times 2k_{i}+1})\varphi\\!=\\!\varphi^{\prime}$ and $(S_{i,t}^{m}\\!-\\!I_{2k_{i}+1\times 2k_{i}+1})\varphi^{\prime}\\!=\\!0$ along with $(T_{i,t}^{m}\\!-\\!I_{2d_{i}+1\times 2d_{i}+1})\phi\\!=\\!\phi^{\prime}$ and $(T_{i,t}^{m}\\!-\\!I_{2d_{i}+1\times 2d_{i}+1})\phi^{\prime}\\!=\\!0$ are satisfied.
Any vector that includes a component of the form $a\varphi\\!+\\!b\varphi^{\prime}$ or $a\phi\\!+\\!b\phi^{\prime}$, for $a,b\\!\in\\!\mathbb{R}$, would add a constant factor $a\varphi$ or $a\phi$, respectively, to each application of $S_{i,t}^{m}$ and $T_{i,t}^{m}$, plus descending lower-order terms from the other, lesser eigenvalues. If $H_{i,t}^{m}$ and $M_{i,t}^{m}$ do not change across $m$, then the state $w_{i,t}^{m}$ and cause $v_{i,t}^{m}$ iterates take constant-sized steps and either diverge or drive some component negative, which results in a change in the iteration matrices $W_{i,t}^{m}$ and $V_{i,t}^{m}$. * (iii) Suppose that $\rho(W_{i,t}^{m})\\!=\\!1$ and $\rho(V_{i,t}^{m})\\!=\\!1$, but $S_{i,t}^{m}$ and $T_{i,t}^{m}$ have no non-diagonal Jordan block for that eigenvalue. If we assume that the solution is unique, then the unit eigenvalues of $S_{i,t}^{m}$ and $T_{i,t}^{m}$ must be simple. There are no other eigenvalues on the unit circle. If the iterates are close enough to the optimal solution, then they linearly converge to it, as the inference process behaves similarly to a Von Mises iteration. These unique, fixed-point solutions are, by definition, the eigenvectors $[P_{i,t}^{m}]_{1:2k_{i}+1,2k_{i}+1}$ and $[Q_{i,t}^{m}]_{1:2d_{i}+1,2d_{i}+1}$ of $S_{i,t}^{m}$ and $T_{i,t}^{m}$ for the unit eigenvalues. The convergence rate is determined by the next-largest eigenvalues in absolute value, that is, the largest eigenvalues of $J^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}$ and $R^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}$, as long as the flag matrices $H_{i,t}^{m}$ and $M_{i,t}^{m}$ do not change across $m$. This phase cannot be last in the inference process, as the search will eventually jump to a different one due to the eigenvalue properties. Now, suppose that $H_{i,t}^{m-1}\\!\neq\\!H_{i,t}^{m}$ and $M_{i,t}^{m-1}\\!\neq\\!M_{i,t}^{m}$ for iterations $m$. In this case, the iteration operator does not remain invariant over more than one pass. The iteration matrices $W_{i,t}^{m}$ and $V_{i,t}^{m}$ could match one of the conditions in the above phases. They could also have the following eigenstructure associated with a fourth phase: * (iv) $W_{i,t}^{m}$ and $V_{i,t}^{m}$ have eigenvalues with absolute value one, but not equal to one. This occurs when the iterates transition to a new set of active constraints. The next pass will result in a different operator with a different flag matrix. * Proof: For $S_{i,t}^{m}$, the upper-left sub-matrix, which is defined by $W_{i,t}^{m}$, contributes to the eigenvalue blocks $I_{i,t}^{m}$ and $J^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}$ of $J_{i,t}^{m}$. Here, we assume that the spectral decomposition is written as $S_{i,t}^{m}\\!=\\!P_{i,t}^{m}J_{i,t}^{m}(P_{i,t}^{m})^{-1}$, with $J_{i,t}^{m}$ having the form defined in lemma A.1. Both $W_{i,t}^{m}$ and $S_{i,t}^{m}$ have the same set of eigenvalues with equivalent geometric and algebraic multiplicities, except when an eigenvalue has an absolute value of one. No eigenvalue with an absolute value of one can have a non-diagonal Jordan block. Hence, the blocks $I_{i,t}^{m}$ and $J^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}$ corresponding to those eigenvalues must be diagonal. If $W_{i,t}^{m}$ has no eigenvalue equal to one, then $S_{i,t}^{m}$ has a simple eigenvalue that is one. In this case, the number of times an eigenvalue appears as a root of the characteristic polynomial increases by one. The eigenspace dimensionality either increases by one or stays the same.
Alternatively, the algebraic and geometric multiplicities for the unit eigenvalue of $W_{i,t}^{m}$ differ. This implies that there is a Jordan block, which is given by the $\mathbb{R}_{0,+}^{2\times 2}$ sub-matrix in the top-left corner of $J_{i,t}^{m}$. The eigenvalue properties of $S_{i,t}^{m}$ give rise to the three phases listed above. A similar argument can be applied to the cause iteration matrices $T_{i,t}^{m}$. The fourth phase is trivial to show.$\;\;\footnotesize\blacksquare$ Such results indicate that there are local search regimes in the inference process where the convergence is quicker than the global convergence rate. We can specify the conditions when this will occur. * Proposition A.9. Let the state and cause inference recurrences be defined as in propositions A.6 and A.7, respectively, for the accelerated proximal gradient search. For a non-accelerated search, they become $\begin{pmatrix}w^{\prime}_{i,t}{}^{\\!\\!\\!\\!m+1}\vspace{0.05cm}\\\ 1\end{pmatrix}=\begin{pmatrix}W_{i,t}^{\prime}{}^{\\!\\!m}&\ell_{i,t}^{-1}D_{i}^{\top}\kappa_{i-1,t}\\!-\\!\lambda_{i,t}\ell_{i,t}^{-1}(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})s_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!+\\!\alpha_{i}\ell_{i,t}^{-1}f_{i,t}^{m}\vspace{0.05cm}\\\ 0&1\end{pmatrix}\\!\begin{pmatrix}w^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}\vspace{0.05cm}\\\ 1\end{pmatrix}$ where $W_{i,t}^{\prime}{}^{\\!\\!m}\\!=\\!(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(H_{i,t}^{\prime}{}^{\\!\\!\\!\\!m})^{2}$ and $s_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!=\\!\textnormal{\sc sign}(\textnormal{\sc shrink}(w^{\prime}_{i,t}{}^{\\!\\!\\!\\!m};\lambda_{i,t}/\ell_{i,t}))$. The flag matrix is such that $H_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!=\\!\textnormal{\sc diag}(s_{i,t}^{\prime}{}^{\\!\\!\\!\\!m})$. For the non-accelerated cause inference, we have the recurrence relation $\begin{pmatrix}v^{\prime}_{i,t}{}^{\\!\\!\\!\\!m+1}\vspace{0.05cm}\\\ 1\end{pmatrix}=\begin{pmatrix}V_{i,t}^{\prime}{}^{\\!\\!\\!m}&2\eta_{i}^{\prime}I_{d_{i}\times d_{i}}\kappa_{i,t}^{\prime}/\ell_{i,t}^{\prime}\\!+\\!(I_{d_{i}\times d_{i}}\\!-\\!1/\ell_{i,t}^{\prime})q_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!-\\!\alpha_{i}^{\prime}g_{i,t}^{m}/\ell_{i,t}^{\prime}\vspace{0.05cm}\\\ 0&1\end{pmatrix}\\!\begin{pmatrix}v^{\prime}_{i,t}{}^{\\!\\!\\!\\!m}\vspace{0.05cm}\\\ 1\end{pmatrix}$ where $V_{i,t}^{\prime}{}^{\\!\\!\\!m}\\!=\\!(I_{d_{i}\times d_{i}}\\!-\\!1/\ell_{i,t}^{\prime})(M_{i,t}^{\prime}{}^{\\!\\!\\!\\!m})^{2}$ and $q_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!=\\!\textnormal{\sc sign}(\textnormal{\sc shrink}(v^{\prime}_{i,t}{}^{\\!\\!\\!\\!m};\lambda_{i}/\ell^{\prime}_{i,t}))$. The flag matrix is such that $M_{i,t}^{\prime}{}^{\\!\\!\\!\\!m}\\!=\\!\textnormal{\sc diag}(q_{i,t}^{\prime}{}^{\\!\\!\\!\\!m})$. We have that: * (i) For the second phase, the constant step-size vector has the form $(1\\!-\\!\beta_{m})^{-1}(\varphi,\varphi,0)^{\top}$, $W_{i,t}^{\prime}{}^{\\!\\!m}\varphi\\!=\\!\varphi$, with $\varphi\\!\in\\!\mathbb{R}^{k_{i}}$ being a scaled eigenvector of the state-inference total-iteration matrix. Likewise, for the cause inference, the constant step-size vector has the form $(1\\!-\\!\beta_{m}^{\prime})^{-1}(\phi,\phi,0)^{\top}$, where $V_{i,t}^{\prime}{}^{\\!\\!m}\phi\\!=\\!\phi$, with $\phi\\!\in\\!\mathbb{R}^{d_{i}}$ being a scaled eigenvector.
Since $\beta_{m},\beta_{m}^{\prime}$ have a limit of one, this constant-step-size vector is larger than the corresponding one for the states, $(\varphi^{\prime},0)$, $W_{i,t}^{\prime}{}^{\\!\\!m}\varphi^{\prime}\\!=\\!\varphi^{\prime}$, in the non-accelerated proximal-gradient case. The constant-step-size vector in the accelerated case is also larger than the one for the causes, $(\phi^{\prime},0)$, $V_{i,t}^{\prime}{}^{\\!\\!m}\phi^{\prime}\\!=\\!\phi^{\prime}$, in the case of non-accelerated proximal gradients. * (ii) In the first and third phases, if $1\\!>\\!\rho(W_{i,t}^{\prime}{}^{\\!\\!m})\\!>\\!\beta_{m}$ and $1\\!>\\!\rho(V_{i,t}^{\prime}{}^{\\!\\!\\!m})\\!>\\!\beta_{m}^{\prime}$, then the accelerated proximal gradient scheme will be faster than the non-accelerated case for the state and cause inference. It will be slower, however, if $1\\!>\\!\beta_{m}\\!>\\!\rho(W_{i,t}^{\prime}{}^{\\!\\!m})$ and $1\\!>\\!\beta_{m}^{\prime}\\!>\\!\rho(V_{i,t}^{\prime}{}^{\\!\\!\\!m})$. When $\beta_{m}\\!>\\!\rho(W_{i,t}^{\prime}{}^{\\!\\!m})$ and $\beta_{m}^{\prime}\\!>\\!\rho(V_{i,t}^{\prime}{}^{\\!\\!\\!m})$, then the largest eigenvalues of $W_{i,t}^{m}$ and $V_{i,t}^{m}$ must be a pair of complex conjugates. According to the theory of Von Mises iterations, the convergence will oscillate between the two complex numbers and the search will not be monotonically decreasing across iterations. The accelerated case will hence be slower than the non-accelerated case, as $\rho(W_{i,t}^{\prime}{}^{\\!\\!\\!m})$ and $\rho(V_{i,t}^{\prime}{}^{\\!\\!\\!m})$ will remain fixed for a specific phase, while the steps $\beta_{m}$ and $\beta_{m}^{\prime}$ will monotonically increase. * Proof: For part (i), a single update has the form $(w_{i,t}^{m+1},w_{i,t}^{m},1)^{\top}\\!+\\!(\varphi_{1},\varphi_{2},0)^{\top}$. There exists a Jordan block in $J_{i,t}^{m}$ and hence a Jordan chain. Therefore, $(S_{i,t}^{m}-I_{2k_{i}+1\times 2k_{i}+1})(w_{i,t}^{m+1},w_{i,t}^{m},1)^{\top}\\!=\\!(\varphi_{1},\varphi_{2},0)^{\top}$ and $S_{i,t}^{m}(\varphi_{1},\varphi_{2},0)^{\top}\\!=\\!(\varphi_{1},\varphi_{2},0)^{\top}$. It can be seen that $\varphi_{1}\\!=\\!\varphi_{2}$. This implies $W_{i,t}^{\prime}{}^{\\!\\!m}\varphi\\!=\\!\varphi$ for the accelerated case. Moreover, $(S_{i,t}^{m}-I_{2k_{i}+1\times 2k_{i}+1})\\!\begin{pmatrix}w_{i,t}^{m+1}\vspace{0.05cm}\\\ w_{i,t}^{m}\vspace{0.05cm}\\\ 1\end{pmatrix}\\!=\\!\begin{pmatrix}((1\\!+\\!\beta_{m})W_{i,t}^{\prime}{}^{\\!\\!m}\\!-\\!I_{k_{i}\times k_{i}})w_{i,t}^{m}\\!-\\!\beta_{m}W_{i,t}^{\prime}{}^{\\!\\!m}w_{i,t}^{m-1}\\!+\\!(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})z_{i,t}^{m}\vspace{0.05cm}\\\ w_{i,t}^{m}\\!-\\!w_{i,t}^{m-1}\vspace{0.05cm}\\\ 0\end{pmatrix}$ where $(S_{i,t}^{m}\\!-\\!I_{2k_{i}+1\times 2k_{i}+1})(w_{i,t}^{m+1},w_{i,t}^{m},1)^{\top}\\!=\\!(1\\!-\\!\beta_{m})^{-1}(\varphi,\varphi,0)^{\top}$. For this to occur, we must have that $w_{i,t}^{m-1}\\!=\\!w_{i,t}^{m}\\!+\\!(\beta_{m}\\!-\\!1)^{-1}\varphi$. Similar arguments apply to the causes. The analysis is similar for the non-accelerated proximal-gradient case. For part (ii), we prove properties for the states and note that they extend to the causes with few changes. Let $(v_{1},v_{2})^{\top}$ be an eigenvector of $W_{i,t}^{m}$.
We have that $W_{i,t}^{m}(v_{1},v_{2})^{\top}=\begin{pmatrix}((1\\!+\\!\beta_{m})\rho(W_{i,t}^{m})\\!-\\!\beta_{m})(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(H_{i,t}^{m})^{2}v_{2}\vspace{0.05cm}\\\ \rho(W_{i,t}^{m})v_{2}\end{pmatrix}=\rho(W_{i,t}^{m})\begin{pmatrix}\rho(W_{i,t}^{m})v_{2}\vspace{0.05cm}\\\ v_{2}\end{pmatrix}$ since $v_{1}\\!=\\!\rho(W_{i,t}^{m})v_{2}$. Hence, $(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(H_{i,t}^{m})^{2}v_{2}\\!=\\!\rho(W_{i,t}^{m})^{2}v_{2}/((1\\!+\\!\beta_{m})\rho(W_{i,t}^{m})\\!-\\!\beta_{m})$. This implies that $\rho(W_{i,t}^{m})^{2}\\!+\\!\beta_{m}\rho(W_{i,t}^{\prime}{}^{\\!\\!m})\\!-\\!(1\\!+\\!\beta_{m})\rho(W_{i,t}^{m})\rho(W_{i,t}^{\prime}{}^{\\!\\!m})\\!=\\!0$, where $W_{i,t}^{\prime}{}^{\\!\\!m}\\!=\\!(I_{k_{i}\times k_{i}}\\!-\\!\ell_{i,t}^{-1}D_{i}^{\top}D_{i})(H_{i,t}^{m})^{2}$. It can be seen that $\rho(W_{i,t}^{m})$ has real-valued roots if $4\beta_{m}/(1\\!+\\!\beta_{m})^{2}\\!<\\!\rho(W_{i,t}^{\prime}{}^{\\!\\!m})$ and complex-valued roots otherwise. When there are real-valued roots, then $\rho(W_{i,t}^{\prime}{}^{\\!\\!m})\\!>\\!\beta_{m}$. Moreover, $\rho(W_{i,t}^{m})\\!=\\!(1\\!+\\!\beta_{m})\rho(W_{i,t}^{\prime}{}^{\\!\\!m})/2+((1\\!+\\!\beta_{m})^{2}\rho(W_{i,t}^{\prime}{}^{\\!\\!m})^{2}/4$ $-\beta_{m}\rho(W_{i,t}^{\prime}{}^{\\!\\!m}))^{1/2}\\!<\\!\rho(W_{i,t}^{\prime}{}^{\\!\\!m})$, which follows from the fact that $\beta_{m}\\!<\\!1$ as per its definition. When there are complex-valued roots, then $|\rho(W_{i,t}^{m})|\\!=\\!(\beta_{m}\rho(W_{i,t}^{\prime}{}^{\\!\\!m}))^{1/2}$ and hence $|\rho(W_{i,t}^{m})|\\!<\\!\rho(W_{i,t}^{\prime}{}^{\\!\\!m})$ whenever $\rho(W_{i,t}^{\prime}{}^{\\!\\!m})\\!>\\!\beta_{m}$. If, however, $\beta_{m}\\!>\\!\rho(W_{i,t}^{\prime}{}^{\\!\\!m})$, then $|\rho(W_{i,t}^{m})|\\!>\\!\rho(W_{i,t}^{\prime}{}^{\\!\\!m})$. Both the accelerated and non-accelerated proximal gradients reduce to a Von Mises iteration in the first and third phases. The rate of convergence is, respectively, determined by $|\rho(W_{i,t}^{m})|$ and $|\rho(W_{i,t}^{\prime}{}^{\\!\\!m})|$.$\;\;\footnotesize\blacksquare$ This result provides insight into why Nesterov inertial sequences are worse than the one that we employed. It is trivial to demonstrate that the step sizes for a Nesterov inertial sequence will be strictly greater than the step sizes for our polynomial inertial sequence. The former will reach the eigenvalue-versus-inertial-sequence inequality conditions more quickly, yielding an inference slowdown whenever the search is in either the first or third phase. Such phases typically occur near the beginning of the inference process and occupy a majority of the overall process. Moreover, severe cost rippling will be encountered due to the emergence of complex-conjugate eigenvalues in the iteration matrices. The state and cause iterates thus alternate between locally minimizing and maximizing the LASSO costs. Over a great many iterations, they generally lead to average LASSO cost decreases. However, this does not always occur. The average LASSO loss across iterations can remain unchanged, leading to learning stagnation. In contrast to the Nesterov-based inertial sequence used by DPCNs, the sequence used by ADPCNs will likely never reach the oscillatory regime of either the first or third phases. This is because the inertial sequence often grows too slowly compared to the iteration-matrix eigenvalues. If either phase is reached, then it will typically occur far later during inference than in the Nesterov-sequence case.
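The proof's quadratic can also be evaluated directly. The sketch below assumes a non-accelerated spectral radius of $\rho(W_{i,t}^{\prime}{}^{\\!\\!m})\\!=\\!0.6$ purely for illustration; it traces the transition from real to complex-conjugate roots, and from acceleration to slowdown, as $\beta_{m}$ grows.

```python
import numpy as np

# Sketch of the quadratic relating the accelerated spectral radius rho(W)
# to the inertial weight beta and the non-accelerated radius rho_prime:
#   rho^2 + beta * rho_prime - (1 + beta) * rho * rho_prime = 0.
# Complex-conjugate roots arise once 4*beta/(1 + beta)^2 > rho_prime, and
# the accelerated scheme becomes slower than the plain one once
# beta > rho_prime. The value rho_prime = 0.6 is purely illustrative.
def accelerated_modulus(rho_prime, beta):
    disc = (1 + beta)**2 * rho_prime**2 - 4 * beta * rho_prime
    if disc >= 0:
        # Real roots: the larger one governs the (accelerated) rate.
        return ((1 + beta) * rho_prime + np.sqrt(disc)) / 2
    # Complex pair: both roots share the modulus sqrt(beta * rho_prime).
    return np.sqrt(beta * rho_prime)

rho_prime = 0.6
for beta in (0.2, 0.5, 0.6, 0.9):
    rho = accelerated_modulus(rho_prime, beta)
    regime = "real" if (1 + beta)**2 * rho_prime >= 4 * beta else "complex"
    faster = "faster" if rho < rho_prime else "not faster"
    print(f"beta = {beta}: |rho(W)| = {rho:.4f} ({regime} roots, {faster})")
```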
This delayed onset allows our accelerated strategy to take advantage of the faster convergence rate for a greater number of iterations. The iterates usually strictly decrease the LASSO costs throughout much of the inference process. For the remainder, they are monotonically non-increasing. This indicates that the states and causes continuously improve their characterization of the stimuli. Often, inference terminates before oscillation can occur. In other cases, the inference process will restart, thereby delaying the onset of the first or third phases while still meaningfully decreasing the LASSO costs. There has been a great deal of work on multiple restarts [OdonoghueB-jour2013a, KimD-jour2018a, FercoqO-jour2019a] of iterative methods for convex and non-convex costs. We have found that certain approaches are adept at suppressing cost oscillations. It is likely that they suitably change the spectral properties of the iteration matrices to force the inference process back into either the first or the third phase, where it exhibits an improved convergence rate. Here, we have focused on assessing convergence and the convergence rate for DPCN and ADPCN inference without multiple restarts. This was done, as we mentioned above, to highlight that the learning improvement we empirically observed stemmed from the inertial-sequence choice. In our future work, we will analyze the theoretical behaviors of integrating restart procedures into our inference process. We will prove that they can entirely preempt cost oscillation. They also guarantee monotonic LASSO cost decreases. This will demonstrate that we can run the feed-forward, bottom-up inference process over an arbitrarily large number of iterations without slowdown concerns. Increasing the number of inference steps per forward pass will likely further reduce the number of stimuli presentations needed to uncover good network parameters. We will also prove that performing multiple restarts in a principled way still guarantees convergence to solutions without adversely impacting the improved convergence rate that we have established.

## Appendix B

Figure B.1: Statistical summaries of ADPCN and convolutional DPCN results across two epochs (panels (a)-(e)). For each stimulus dataset, we report the stage-wise contribution of the causes to the recognition rate. We do this by iteratively appending the causes from each stage to assess the discrimination change. Higher values indicate that the causes from a particular stage are more relevant for making classification decisions. We also list the ability of each stage to either remove or smooth background details. We do this by assessing the top-down and bottom-up image saliency to delineate object boundaries. We then quantify the high-frequency content, relative to the original stimuli, for everything not within the object boundaries. Higher values denote that superfluous, non-object visual details are removed. Lastly, we provide the stage-wise reconstruction quality, in terms of the mean-squared-error percentage. Higher values indicate that a stage's reconstruction response aligns well with the input stimuli. All statistics were averaged across five Monte Carlo simulations with random parameter initializations.
MNIST Error Rate Comparison

| Method | Learning Style | Error |
| --- | --- | --- |
| DPCN [PrincipeJC-jour2014a] | Unsupervised | 19.2% |
| AE [BengioY-coll2007a] | Unsupervised | 18.8% |
| GAN [RadfordA-conf2016a] | Unsupervised | 17.2% |
| CRPN [ChalasaniR-jour2015a] | Unsupervised | 5.65% |
| IMSAT [HuW-conf2017a] | Unsupervised | 1.60% |
| IIC [JiX-conf2019a] | Unsupervised | 1.60% |
| SCAE [KosiorekA-coll2019a] | Unsupervised | 1.00% |
| EBSR [RanzatoM-coll2007a] | Unsupervised | 0.39% |
| ADPCN | Unsupervised | 0.31% |
| MON [GoodfellowIJ-conf2013a] | Supervised | 0.45% |
| RCNN [LiangM-conf2015a] | Supervised | 0.31% |
| MCNN [CireganD-conf2012a] | Supervised | 0.23% |

FMNIST Error Rate Comparison

| Method | Learning Style | Error |
| --- | --- | --- |
| DPCN [PrincipeJC-jour2014a] | Unsupervised | 42.7% |
| kSCN [ZhangT-conf2018a] | Unsupervised | 39.9% |
| CRPN [ChalasaniR-jour2015a] | Unsupervised | 15.5% |
| ADPCN | Unsupervised | 3.51% |
| TLML [YuB-conf2018a] | Semi-supervised | 11.2% |
| ZSDA [PengKC-conf2018a] | Supervised | 15.5% |
| DAGH [ChenY-conf2019a] | Supervised | 6.30% |
| DCAP [RajasegaranJ-conf2019a] | Supervised | 5.54% |
| DRBC [SabourS-coll2017a] | Supervised | 4.30% |
| DNET [HuangG-conf2017a] | Supervised | 4.60% |
| LES [NoklandA-conf2019a] | Supervised | 4.14% |
| JOUT [WangS-conf2019a] | Supervised | 2.87% |

CIFAR-10 Error Rate Comparison

| Method | Learning Style | Error |
| --- | --- | --- |
| DPCN [PrincipeJC-jour2014a] | Unsupervised | 68.5% |
| NOMP [LinTH-conf2014a] | Unsupervised | 39.2% |
| CRPN [ChalasaniR-jour2015a] | Unsupervised | 32.7% |
| RFL [JiaY-conf2012a] | Unsupervised | 16.9% |
| ADPCN | Unsupervised | 12.7% |
| MON [GoodfellowIJ-conf2013a] | Supervised | 9.38% |
| DASN [StollengaMF-coll2014a] | Supervised | 9.22% |
| ACN [SpringenbergJT-conf2015a] | Supervised | 9.08% |
| SPNET [RippelO-coll2015a] | Supervised | 8.60% |
| HNET [SrivastavaRK-coll2015a] | Supervised | 7.69% |
| LAF [AgostinelliF-conf2015a] | Supervised | 7.51% |
| RCNN [LiangM-conf2015a] | Supervised | 7.09% |

CIFAR-100 Error Rate Comparison

| Method | Learning Style | Error |
| --- | --- | --- |
| DPCN [PrincipeJC-jour2014a] | Unsupervised | 95.6% |
| AEVB [KingmaDP-conf2013a] | Unsupervised | 84.8% |
| DEC [XieJ-conf2016a] | Unsupervised | 81.5% |
| DAIC [ChangJ-conf2017a] | Unsupervised | 76.2% |
| DCCM [WuJ-conf2019a] | Unsupervised | 67.3% |
| CRPN [ChalasaniR-jour2015a] | Unsupervised | 59.7% |
| ADPCN | Unsupervised | 39.7% |
| PMO [SpringenbergJT-conf2014a] | Supervised | 38.1% |
| TREE [SrivastavaN-coll2013a] | Supervised | 36.8% |
| SBO [SnoekJ-conf2015a] | Supervised | 27.4% |
| INIT [MishkinD-conf2016a] | Supervised | 26.3% |
| DNET [HuangG-conf2017a] | Supervised | 17.1% |

STL-10 Error Rate Comparison

| Method | Learning Style | Error |
| --- | --- | --- |
| DPCN [PrincipeJC-jour2014a] | Unsupervised | 87.1% |
| SWAE [ZhaoJ-conf2016a] | Unsupervised | 72.9% |
| JULE [YangJ-conf2016a] | Unsupervised | 72.3% |
| DCNN [ZeilerMD-conf2010a] | Unsupervised | 70.1% |
| DEC [XieJ-conf2016a] | Unsupervised | 64.1% |
| CRPN [ChalasaniR-jour2015a] | Unsupervised | 55.5% |
| SRF [CoatesA-coll2011a] | Unsupervised | 39.9% |
| ADPCN | Unsupervised | 25.3% |
| CKN [MairalJ-coll2014a] | Supervised | 37.6% |
| MTBO [SwerskyK-coll2013a] | Supervised | 29.9% |
| SSTN [OyallonE-conf2017a] | Supervised | 23.4% |
| RESN [HeK-conf2016a] | Supervised | 14.2% |

Figure B.2: Performance errors for the MNIST, FMNIST, CIFAR-10, CIFAR-100, and STL-10 datasets. For each of the referenced methods, we either report the test-set classification errors that the authors listed or the best known test-set classification errors obtained using that approach. We also specify whether the chosen approaches were predominantly unsupervised, semi-supervised, or supervised.
The best methods for each learning category are highlighted in green. The worst are highlighted in red.

## Appendix C

Figure C.1: A comparison of t-SNE embeddings for the raw stimuli and the ADPCN causes. The projections of the ADPCN causes show that they preserve perceptual differences well. For each row, the plot on the left-hand side shows the embedding for the visual stimuli. The embedding tends to emphasize grouping stimuli of similar brightness together. This often does not correspond to grouping according to object classes, except in very limited circumstances. The stimuli thus need to be transformed in a meaningful way. The plot on the right-hand side shows the embedding of the causes from multiple ADPCN stages. The causes often organize the stimuli in a way that groups related sets of visual content. This better aligns with the object classes than the raw pixel values. We recommend zooming in to see the full image details.

In fig. C.1, we provide plots of the t-SNE embeddings of the raw visual stimuli and the ADPCN features extracted from them. There are multiple interesting findings that we observe, all of which indicate that the ADPCNs preserve perceptual differences due to being sensitive to visual stimuli appearance: * MNIST: Figure C.1(a) indicates that MNIST is largely a separable dataset without any processing. Most of the digit classes group into distinct distributions. There is, however, some inherent ambiguity when distinguishing between certain classes. For instance, digits eight and three may resemble digit five, depending on the writing style, when solely considering the raw pixels as features. Similar issues are encountered for digits four and nine. However, the ADPCNs are adept at separating the digits based on their local and global appearance. This resolves much of the inherent ambiguity and leads to better-separated distributions. Interestingly, the ADPCNs often additionally cluster the samples based on the writing style. As shown in fig. C.1(b), the network separates well instances of digit seven which do and do not have horizontal strokes. The former are mostly located in a smaller, isolated distribution. For digit one, the distribution smoothly varies from slanted strokes to mostly horizontal strokes. For digit two, there is a progression from strokes which have loops to those that are flat. This indicates that the ADPCN is additionally differentiating between handwriting styles. * FMNIST: Unlike MNIST, FMNIST is not trivially separable when considering raw pixel values. Figure C.1(c) illustrates that sandals, sneakers, and ankle boots are located in a single, giant distribution. Pullovers, coats, shirts, dresses, and t-shirts form another giant distribution. There is some progression from one type of clothing to another, but, often, the classes are interspersed. The remaining classes, trousers and bags, are often well separated. Figure C.1(d) highlights that the ADPCNs better segregate the visual stimuli in an appearance-based manner. The ADPCN features for sandals are rather distinct from those of other footwear. The network thus organizes them in a separate distribution. Features for ankle boots and tennis shoes form a heavily bi-modal distribution. There is a clear dividing line between modes, though, which illustrates that the ADPCNs are sensitive to the visual appearance of the stimuli. Likewise, the features for bags form a bi-modal distribution.
Those with straps are predominantly located in one mode and those without straps in another mode. This, again, demonstrates that the networks are cognizant of stimuli appearance. It also suggests that the ADPCNs understand that these stimuli belong to the same class, despite not providing any class labels to the network. Dresses, shirts, coats, and t-shirts are all grouped in a quad-modal distribution, which we believe occurs due to their shared visual characteristics. Instances of these classes all have, more or less, a similar global shape. There are conspicuous local shape differences, though. The ADPCNs can thus distinguish dresses from t-shirts, coats, and long-sleeved shirts. There is hence a rather clear division between them. There is also an abrupt transition from t-shirts to shirts. Coats and shirts are more difficult to separate. Long-sleeved shirts are typically allocated to non-contiguous, bi-modal distributions. Coats tend to be offshoots of these distributions.
* CIFAR-10/100: Adding, essentially, random backgrounds to the stimuli preempts any easy separability and hence discrimination. Including multiple object perspectives, appearances, and so forth compounds this issue. The t-SNE embedding for CIFAR-10, given in fig. C.1(e), highlights this. That for CIFAR-100, in fig. C.1(g), further corroborates it. For both datasets, the stimuli are locally grouped according to global image hue, saturation, and lightness. All of the classes are highly intermixed. The ADPCNs re-organize the stimuli in a mostly class-sensitive and hue-sensitive way. However, owing to the visual difficulty of the stimuli, the network features do not yield completely separable distributions. Instead, the stimuli belong to a single, multi-modal distribution, with each mode usually corresponding to either a different class or some subset of a class. For example, in the case of CIFAR-10, images for the automobile class are located on the left-hand side of the plot in fig. C.1(f). Red, orange, and yellow cars form a sub-distribution. Large trucks, interestingly, form another sub-distribution. The automobile distribution mode gives way to modes for airplanes, boats, and birds toward the middle-left of the embedding plot. Birds are, visually, more similar to planes and boats, so stimuli from the former two categories are proximally located. For the remaining classes, there is heavy inter-mixing of stimuli. Deer and horses resemble each other greatly. There are scant distinguishing characteristics, so the ADPCNs often group them together. Such a finding, once again, shows that the ADPCNs are sensitive to visual content. This property does not, however, always imply that the networks will be sensitive to classes. We encounter similar visual-grouping behaviors for stimuli from CIFAR-100, as shown in fig. C.1(h). Many of the classes form sub-modes. These sub-modes approximately correspond to around thirty coarsely-grained superclasses, not the hundred finely-grained classes. For instance, there are sub-modes for flowers, fruits, insects, plates, cups, bottles, cellphones, keyboards, lawn mowers, automobiles and trucks, trees, whales and sharks, people, and various types of mammals.
* STL-10: STL-10 has the same difficulties as CIFAR-10/100, so it is not surprising that the raw pixel values do not divide the stimuli in a class-based manner. As with fig. C.1(e) and fig. C.1(g), fig. C.1(i) shows that the stimuli are mainly arranged by global hue, saturation, and lightness.
When transformed by the ADPCNs, the stimuli are amassed in a distributional manner with several modes. In fig. C.1(j), it can be seen that there are contiguous modes that individually correspond to planes, boats, cars, trucks, and birds. There are multiple, non-contiguous modes for dogs, monkeys, and cats. Some internal structure is observed within each mode, such as a grouping according to hue. While not easily separable, there are non-linear transition boundaries between sets of modes. Deer and horses share a similar distribution mode, just as in CIFAR-10/100. We also encounter other trends in the organization of the ADPCN features for STL-10 that resemble those from CIFAR-10. From various ablation studies that we performed, this appearance-based sensitivity appears to predominantly emerge from the interaction of the various convolutional network stages. Each ADPCN stage learns features at a different spatial scale. Inter-layer feed-back provided by the top-down connections helps to propagate contextual details between the layers. This results in lower-stage features that are often more informative. Such features permit uncovering higher-stage attributes that are more sensitive to object appearance. In fact, as we show, whole-object sensitivity is often possible. It is likely that including forward skip connections would further improve object sensitivity. Visual sensitivity can occur without top-down feed-back. However, it requires many times more epochs. Often, the cause embeddings are not as good as those shown above. The classes heavily overlap with no clear transition boundaries. There are often multiple, non-contiguous modes corresponding to each class.
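Embeddings of this kind can be reproduced with standard tooling. The following is a minimal sketch, assuming scikit-learn is available; the names `stimuli` and `causes` are our own placeholders for the flattened raw pixels and the extracted ADPCN features, not variables from the paper's code.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_2d(features, perplexity=30, seed=0):
    # features: (n_samples, n_dims) array, e.g. flattened pixels or ADPCN causes
    return TSNE(n_components=2, perplexity=perplexity, init="pca",
                random_state=seed).fit_transform(features)

# Hypothetical inputs standing in for the two panels of each row of fig. C.1.
stimuli = np.random.rand(1000, 28 * 28)   # stand-in for flattened images
causes = np.random.rand(1000, 256)        # stand-in for concatenated causes
left_panel, right_panel = embed_2d(stimuli), embed_2d(causes)
```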
# Subregular $J$-rings of Coxeter systems via quiver path algebras Ivan Dimitrov Department of Mathematics and Statistics, Queen’s University <EMAIL_ADDRESS>, Charles Paquette Department of Mathematics and Computer Science, Royal Military College of Canada <EMAIL_ADDRESS>, David Wehlau Department of Mathematics and Computer Science, Royal Military College of Canada<EMAIL_ADDRESS>and Tianyuan Xu Department of Mathematics, University of Colorado Boulder <EMAIL_ADDRESS> ###### Abstract. We study the subregular $J$-ring $J_{C}$ of a Coxeter system $(W,S)$, a subring of Lusztig’s $J$-ring. We prove that $J_{C}$ is isomorphic to a quotient of the path algebra of the double quiver of $(W,S)$ by a suitable ideal that we associate to a family of Chebyshev polynomials. As applications, we use quiver representations to study the category mod-$A_{K}$ of finite dimensional right modules of the algebra $A_{K}=K\otimes_{\mathbb{Z}}J_{C}$ over an algebraically closed field $K$ of characteristic zero. Our results include classifications of Coxeter systems for which mod-$A_{K}$ is semisimple, has finitely many simple modules up to isomorphism, or has a bound on the dimensions of simple modules. Incidentally, we show that every group algebra of a free product of finite cyclic groups is Morita equivalent to the algebra $A_{K}$ for a suitable Coxeter system; this allows us to specialize the classifications to the module categories of such group algebras. ###### Key words and phrases: Coxeter systems, asymptotic Hecke algebras, Kazhdan–Lusztig cells, quiver representations ###### 1991 Mathematics Subject Classification: Primary: 20C08, 16G20; Secondary: 16D60, 20C07, 20E06. ## 1\. Introduction We study a subring of the $J$-ring of an arbitrary Coxeter system $(W,S)$. The $J$-ring was first introduced by Lusztig in [Lus87] in the case where $W$ is a Weyl or affine Weyl group to help study the Kazhdan–Lusztig cells in $W$. Later, Lusztig showed in [Lus14b] that the same construction of the $J$-ring is valid for arbitrary Coxeter systems, at least in the so-called “equal- parameter” case. In the “unequal-parameter” case, the validity of the construction relies on what has come to be known as Lusztig’s conjectures P1-P15; see [Bon17, Section 14.2]. We only deal with the equal-parameter case in this paper. By definition, the $J$-ring equals the free abelian group $J=\oplus_{w\in W}\mathbb{Z}t_{w}$ as a group, and products in $J$ are given by the formula (1) $t_{x}t_{y}=\sum_{z\in W}\gamma_{x,y,z^{-1}}t_{z}$ where each coefficient $\gamma_{x,y,z^{-1}}$ is a certain nonnegative integer obtained via the Kazhdan–Lusztig basis of the Hecke algebra of $(W,S)$. The formula endows $J$ with the structure of an associative (but not necessarily unital) ring. Moreover, for each _two-sided Kazhdan–Lusztig cell_ $E$ of $W$, the subgroup $J_{E}:=\oplus_{w\in E}\mathbb{Z}t_{w}$ of $J$ is a subring of $J$. In this paper, we focus on the ring $J_{C}$ where $C$ is a particular two-sided cell of $W$ called the _subregular cell_. This cell consists of all the non-identity elements in $W$ that are _rigid_ in the sense that they each have a unique reduced expression; see [Lus83] and [Xu19]. We call $J_{C}$ the _subregular $J$-ring_ and study the structure and representations of $J_{C}$. Also called the _asymptotic Hecke algebra_ , the $J$-ring may be viewed as a limit of the Hecke algebra of $W$ in the sense of [Lus95]. 
As such, the $J$-ring has been an important tool for studying Hecke algebras and reductive groups; see, for example, [Lus89], [Gec98], [Gec07] and [Lus18]. Besides its applications, the structure of the $J$-ring itself has also been studied extensively. Notable results include the following: Bezrukavnikov, Finkelberg, and Ostrik studied a categorical version of the $J$-ring in [BFO09] and used it to compute explicitly the structure of the ring $J_{E}$ for each two-sided cell $E$ in $W$; Braverman and Kazhdan showed in [BK18] that $J$ is isomorphic to a certain subalgebra of the Harish-Chandra Schwartz algebra of a reductive group; by using a generalization of the Robinson–Schensted algorithm called the _affine matrix-ball construction_, Kim and Pylyavskyy gave a canonical presentation for the $J$-ring in the special case where $W$ is an (extended) affine symmetric group in [KP19], extending the work of Xi in [Xi02] for the same case. It is worth noting that the results on the structure of the $J$-ring mentioned above are all restricted to Weyl or (extended) affine Weyl groups. On the other hand, the $J$-ring makes sense for an arbitrary Coxeter system, so it is natural to wonder what the structure of the $J$-ring is for more general Coxeter systems. Indeed, in Kazhdan–Lusztig theory it can often be interesting to study Coxeter systems in full generality. One such indication comes from the proof of the famous “positivity conjecture” of Kazhdan and Lusztig, which states that all coefficients of so-called _Kazhdan–Lusztig polynomials_ are nonnegative integers. After its first appearance in [KL79] in 1979, the conjecture was proved along with other related deep results for Weyl and affine Weyl groups in the next two years by Kazhdan–Lusztig [KL80], Beilinson–Bernstein [BB81] and Brylinski–Kashiwara [BK81], via geometric methods involving local intersection cohomology of Schubert varieties, $D$-modules and perverse sheaves. The proof for the general case came much later: building upon the work of Soergel in [Soe90], [Soe92], [Soe07], Elias and Williamson proved the positivity conjecture for arbitrary Coxeter systems in [EW14] in 2014. In their work, they introduced a graphical calculus and a type of Hodge theory for the Soergel category, each of which is interesting in its own right; see [EW16] and [Wil18]. As was the case for the positivity conjecture, a disparity exists between what is known about the $J$-rings of Weyl or affine Weyl groups and the $J$-rings of other Coxeter systems. With the exception of Alvis’ work on the Coxeter group of type $H_{4}$ in [Alv08], the structures of the $J$-rings of non-Weyl Coxeter groups remain largely unexplored. One obstacle to understanding $J$-rings in general is the difficulty in computing the structure constants of Hecke algebras with respect to their Kazhdan–Lusztig bases, which are necessary for obtaining the coefficients $\gamma_{x,y,z^{-1}}$ in Equation (1). As we will show, however, it is possible to circumvent this obstacle if we restrict from the $J$-ring to the subregular $J$-ring. In [Xu19], the last named author gave a description of products of the form $t_{x}t_{y}$ in $J_{C}$ that does not involve Kazhdan–Lusztig theory. In the present paper, we use this description to show that $J_{C}$ is isomorphic to certain quotients of the path algebra of a quiver, then use quiver representations to study representations of $J_{C}$.
Roughly speaking, the reason why we can understand $J_{C}$ in full generality, in contrast to the entire $J$-ring, is that the rigidity of the elements of $C$ makes $C$ and $J_{C}$ more amenable to combinatorial analysis. It seems interesting that a similar contrast is also visible in the book [Bon17] by Bonnafé, where he singles out the subregular cell in Chapters 12 and 13 (he calls the cell the _submaximal cell_) and exploits its rich combinatorics in his investigation of various Kazhdan–Lusztig objects attached to the cell, including the so-called _cell module_ of $C$ and its connection to the reflection representation of the Hecke algebra of $(W,S)$. Let us elaborate on how $J_{C}$ relates to quivers. Recall that every Coxeter system $(W,S)$ corresponds to a unique Coxeter diagram $G$ with vertex set $S$ and edge set $\\{a-b:a,b\in S,m(a,b)\geq 3\\}$. We define the _double quiver_ of $(W,S)$ to be the directed graph $Q=(Q_{0},Q_{1})$ with vertex set $Q_{0}=S$ and edge set $Q_{1}=\\{a\rightarrow b:a,b\in S,m(a,b)\geq 3\\}$, where we have a pair of arrows $a\rightarrow b$ and $b\rightarrow a$ arising from each edge $a-b$ in $G$. Next, we consider the path algebra $\mathbb{Z}Q$ of $Q$ over $\mathbb{Z}$ and associate to each suitable family $\\{f_{n}:n\in\mathbb{Z}_{\geq 2}\\}$ of polynomials an ideal $\mathcal{I}_{f}^{\mathbb{Z}}$ in $\mathbb{Z}Q$ called an _evaluation ideal_ of $\\{f_{n}:n\in\mathbb{Z}_{\geq 2}\\}$ (Definition 3.4). Our first main result, Theorem 3.6, establishes an algebra isomorphism between $J_{C}$ and the quotient $\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$ where $\mathcal{I}_{u}^{\mathbb{Z}}$ is the evaluation ideal of a family $\\{u_{n}\in\mathbb{Z}[x]:n\geq 2\\}$ of “Chebyshev polynomials”. Fixing an algebraically closed field $K$ of characteristic zero, we extend the result that $J_{C}\cong\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$ in two ways (see Remark 4.17 for a discussion about assumptions on the field $K$). First, in Theorem 3.7, we show that upon an extension of scalars we may alter the family $\\{u_{n}\\}$ without changing the isomorphism type of the quotient of the path algebra by the evaluation ideal. More precisely, we show that for any two _uniform_ families of polynomials $\\{f_{n}\\},\\{g_{n}\\}$ over $K$ (Definition 3.3), we have $KQ/\mathcal{I}_{f}\cong KQ/\mathcal{I}_{g}$ where $KQ$ is the path algebra of $Q$ and $\mathcal{I}_{f},\mathcal{I}_{g}$ are evaluation ideals of $KQ$ constructed from $\\{f_{n}\\},\\{g_{n}\\}$. The Chebyshev polynomials $\\{u_{n}\\}$ form a uniform family, and the result enables us to realize the algebra $A_{K}:=K\otimes_{\mathbb{Z}}J_{C}$ as a quotient $KQ/\mathcal{I}_{f}$ where the ideal $\mathcal{I}_{f}$ is generated by elements which can take very simple forms; see § 4.2. Together, Theorems 3.6 and 3.7 generalize Example 6.10 of Diaz-Lopez’s paper [DL15]. That paper cites the example as its main motivation and remarks that the example suggests a stronger connection between path algebras and asymptotic Hecke algebras. We hope our result can be viewed as further support for such a connection. In our second extension of Theorem 3.6, we develop a procedure to modify a quiver $Q$ to a new quiver $\bar{Q}$ such that the algebras $KQ/\mathcal{I}_{f}$ and $K\bar{Q}/\bar{\mathcal{I}}_{f}$ are Morita equivalent for any uniform family of polynomials $\\{f_{n}\\}$, where $\bar{\mathcal{I}}_{f}$ is the evaluation ideal of $K\bar{Q}$ associated to $\\{f_{n}\\}$; see Theorem 4.5. 
We call the procedure a _quiver contraction_ and will often apply it iteratively, starting from the double quiver of a Coxeter diagram. Quiver contractions reveal certain interesting algebras that are Morita equivalent to algebras $A_{K}$ associated to Coxeter systems, such as the Laurent polynomial ring $K[t,t^{-1}]$ or group algebras of free products of finite cyclic groups; see Examples 4.10 and 4.12. In addition, we use quiver contractions to justify certain assumptions on Coxeter systems in the study of representations of $A_{K}$. For example, for any Coxeter system whose Coxeter diagram $G$ is a tree, quiver contractions allow us to assume that $G$ contains no simple edges when studying representations of $A_{K}$; see Example 4.8. Theorem 3.6 and its extensions allow us to study representations of the subregular $J$-ring via quivers. More precisely, we use the double quiver $Q$ to study the category mod-$A_{K}$ of finite dimensional right modules of the algebra $A_{K}$. Representations of the $J$-ring and of rings of the form $J_{E}$ (where $E$ is a two-sided cell) are not only interesting on their own but also intimately related to representations of $W$ and its Hecke algebra; see [Lus14b], [Lus14a], [Lus18], [Gec07] and [Pie10]. On the other hand, quivers arise naturally in many areas of mathematics and have close connections to the representation theory of finite dimensional algebras, Kac–Moody algebras, quantum groups, and so on; see [Sav05] and [Sch14]. To study mod-$A_{K}$ via $Q$, we use the well-known fact that for each ideal $\mathcal{I}$ in $KQ$, the category of modules of the quotient $KQ/\mathcal{I}$ is equivalent to the category of _representations of $Q$_ that _satisfy_ the relations in $\mathcal{I}$ (see § 2.4). Our main results are Theorems 5.1 and 5.2, which characterize in terms of the Coxeter diagram $G$ when the category mod-$A_{K}$ is semisimple, contains finitely many simple modules, or has a bound on the dimensions of simple modules. In a sense, the characterizations are similar to those of the representation types of quivers given by the celebrated Gabriel’s Theorem (see [DDPW08]). Since we can use quiver contractions to show that every group algebra of a free product of finite cyclic groups is Morita equivalent to the algebra $A_{K}$ for a suitable Coxeter system (Example 4.12), Theorems 5.1 and 5.2 lead to similar characterizations for the module categories of such group algebras, which may be of independent interest as they are stated without mention of Coxeter systems or Kazhdan–Lusztig theory; see Remark 5.3 and Proposition 5.4. The rest of the paper is organized as follows. In Section 2, we recall the relevant background on Coxeter systems, subregular $J$-rings, path algebras, and quiver representations. In Section 3, we define uniform families of polynomials $\\{f_{n}\\}$ and their associated evaluation ideals, then we realize $J_{C}$ and the algebra $A_{K}$ as quotients of path algebras by suitable evaluation ideals via Theorems 3.6 and 3.7. Section 4 deals with quiver contractions and its main result is Theorem 4.5, which asserts that $KQ/\mathcal{I}_{f}$ is Morita equivalent to $K\bar{Q}/\bar{\mathcal{I}}_{f}$ if the quiver $\bar{Q}$ is obtained from $Q$ via a sequence of contractions. We define contractions in § 4.1, give detailed examples of contractions in § 4.2 and prove Theorem 4.5 in § 4.3, then we analyze and give examples of representations of contracted quivers in § 4.4. Finally, we state and prove the results on mod-$A_{K}$ in Section 5.
Most of the examples from § 4.2 and § 4.4 will be used in the proofs. ### Acknowledgements The first three named authors are supported by the National Sciences and Engineering Research Council of Canada. The second and third named authors are also supported by the Canadian Defence Academy Research Programme. We thank R. M. Green for reading a draft of the paper and for his helpful comments. ## 2\. Background ### 2.1. Coxeter Systems A _Coxeter system_ is a pair $(W,S)$ where $S$ is a finite set and $W$ is the group given by the presentation $W=\langle S\,|\,(ab)^{m(a,b)}=1\;\text{for all $a,b\in S$ with $m(a,b)<\infty$}\rangle,$ where $m$ denotes a map $m:S\times S\rightarrow\mathbb{Z}_{\geq 1}\cup\\{\infty\\}$ such that for all $a,b\in S$, we have $m(a,b)=m(b,a)$, and $m(a,b)=1$ if and only if $a=b$. These conditions imply that $a^{2}=1$ for all $a\in S$ and that (2) $aba\dots=bab\dots,$ where both sides contain $m(a,b)$ factors, for every two distinct generators $a,b\in S$ with $m(a,b)<\infty$. We call each side of Equation (2) an $\\{a,b\\}$-_braid_ and call the equation a _braid relation_. Each Coxeter system $(W,S)$ can be encoded via its _Coxeter diagram_, the weighted, undirected graph $G$ whose vertex set is $S$, whose edge set is $\\{\\{a,b\\}:m(a,b)\geq 3\\}$, and where the weight of an edge $\\{a,b\\}$ is $m(a,b)$. An edge with weight $m$ in $G$ is _simple_ if $m=3$ and is _heavy_ otherwise. When drawing $G$, we label each edge with its weight except for simple edges. A Coxeter system $(W,S)$ is said to be _irreducible_ if its Coxeter diagram $G$ is connected and _reducible_ otherwise. For the rest of the paper, we let $(W,S)$ be an irreducible Coxeter system and let $G$ be its Coxeter diagram. The irreducibility assumption is made to simplify our statements, as the reducible case can be easily derived from the irreducible case for all the relevant results; see Remark 3.8. ### 2.2. The Subregular J-ring Let $S^{*}$ be the free monoid generated by $S$. For each element $w\in W$, the words in $S^{*}$ that express $w$ and have minimal length are called the _reduced words_ of $w$. The common length of these words, denoted $l(w)$, is called the _length_ of $w$. By the well-known Matsumoto–Tits theorem, every two reduced words of $w$ can be obtained from each other via a finite sequence of braid relations. An element in $W$ is called _rigid_ if it has a unique reduced word. In this paper we are particularly interested in the set $C=\\{w\in W:w\neq 1,w\text{\; is rigid}\\}.$ The set $C$ is known to be a _two-sided Kazhdan–Lusztig cell_ of $W$, and is called the _subregular cell_ or _submaximal cell_ of $W$ (see [Xu19] and [Bon17, Chapter 12]). ###### Remark 2.1. (a) By the Matsumoto–Tits theorem, a word $w\in S^{*}$ expresses an element in $C$ if and only if $w$ is nonempty and does not contain as a contiguous subword a word of the form $aa$ for any $a\in S$ or an $\\{a,b\\}$-braid for any distinct elements $a,b\in S$. (b) Henceforth we will identify each element $w\in C$ with its unique reduced word. In particular, we will also use $w$ to denote the reduced word of the element (as in Propositions 2.2 and 3.9, for example).
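The membership test in Remark 2.1.(a) is easy to mechanize. Below is a minimal sketch of ours (not code from [Xu19]), assuming words are strings over $S$ and that `m` records $m(a,b)$ for every pair of distinct generators that can occur consecutively, with `math.inf` standing for $\infty$.

```python
import math

def is_subregular(word, m):
    """Remark 2.1.(a): word expresses an element of C iff it is nonempty,
    has no contiguous factor aa, and contains no {a,b}-braid, i.e. no
    alternating contiguous factor abab... of length m(a,b)."""
    if not word:
        return False
    if any(word[i] == word[i + 1] for i in range(len(word) - 1)):
        return False  # a factor of the form aa
    for i in range(len(word) - 1):
        a, b = word[i], word[i + 1]
        bound = m.get(frozenset((a, b)), math.inf)
        j = i  # extend the alternating run a, b, a, ... as far as possible
        while j + 1 < len(word) and word[j + 1] == (b if (j - i) % 2 == 0 else a):
            j += 1
        if j - i + 1 >= bound:
            return False  # an {a,b}-braid occurs
    return True

# With m(a,b) = 3, m(a,c) = 4, m(b,c) = 5 (the example used later in this section):
m = {frozenset('ab'): 3, frozenset('ac'): 4, frozenset('bc'): 5}
assert is_subregular('abcb', m) and is_subregular('bcbcac', m)
assert not is_subregular('aba', m)  # aba is an {a,b}-braid since m(a,b) = 3
```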
To define the subregular $J$-ring, we first recall the construction of the _$J$-ring_, or the _asymptotic Hecke algebra_, of $(W,S)$. The construction is due to Lusztig, who defined the $J$-ring as the free abelian group $J:=\oplus_{w\in W}\mathbb{Z}t_{w}$ and defined multiplication in $J$ by the formula $t_{x}t_{y}=\sum_{z\in W}\gamma_{x,y,z^{-1}}t_{z}$ where each coefficient $\gamma_{x,y,z^{-1}}$ is a certain nonnegative integer extracted from the structure constants for the _Kazhdan–Lusztig basis_ of the _Iwahori–Hecke algebra_ of $(W,S)$; see [Lus87] and [Lus14b, Section 18.3]. Lusztig showed that for each two-sided cell $E$ of $W$, the subgroup $J_{E}:=\oplus_{w\in E}\mathbb{Z}t_{w}$ is in fact a subring of $J$. We define the _subregular $J$-ring_ to be the subring $J_{C}$ of $J$ arising from the subregular cell $C$ of $W$. While the definition of $J$ relies heavily on Kazhdan–Lusztig theory, it is shown in [Xu19] that we can describe products in the subregular $J$-ring via simple manipulations of reduced words. To do so, for each pair of distinct generators $a,b\in S$, let us call an element $w\in C$ an _$\\{a,b\\}$-element_ if $w$ lies in the subgroup of $W$ generated by $a$ and $b$. For two words $x=\dots a_{2}a_{1},y=b_{1}b_{2}\dots\in S^{*}$ with $a_{1}=b_{1}$, let $x*y$ be the word $\dots a_{2}b_{1}b_{2}\dots$, the result of concatenating $x$ and $y$ and deleting one duplicate copy of the letter $a_{1}=b_{1}$. Then products in $J_{C}$ behave as follows: ###### Proposition 2.2 ([Xu19, Corollary 4.2, Propositions 4.4 & 4.5]). Let $x,y$ be elements of $C$ with reduced words $x=\dots a_{2}a_{1}$ and $y=b_{1}b_{2}\dots$, where we take $a_{2}$ and $b_{2}$ to be nonexistent when $l(x)=1$ and $l(y)=1$, respectively. Then the following hold. 1. (a) If $a_{1}\neq b_{1}$, then $t_{x}t_{y}=0$. 2. (b) If $a_{1}=b_{1}$ and $a_{2}\neq b_{2}$ (including the vacuous cases where $a_{2}$ or $b_{2}$ do not exist), then $t_{x}t_{y}=t_{x*y}$. 3. (c) If $a_{1}=b_{1}$ and $x,y$ are both $\\{a,b\\}$-elements for some $a,b\in S$, then $t_{x}t_{y}$ is a linear combination of the form $\sum_{z\in Z}t_{z}$ where $Z$ is a certain set of $\\{a,b\\}$-elements. Note that the first two parts of the proposition imply that $J_{C}$ has a unit, namely, the element $\sum_{a\in S}t_{a}$. In the last part, the set $Z$ can be obtained via a _truncated Clebsch–Gordan rule_, but the exact description of $Z$ is not essential to this paper, so we omit it. Instead, we describe below the product $t_{x}t_{y}$ from Proposition 2.2.(c) in the special case where $l(x)=2$. The special case is in fact equivalent to the general case because one can deduce the latter from the former by induction. ###### Proposition 2.3 ([Xu19, Corollary 4.2]). Let $a,b\in S$ and let $m=m(a,b)$. Suppose that $m\geq 3$. For all $1<i<m$, let $w_{a,i}$ be the $\\{a,b\\}$-element $aba\dots$ of length $i$ and let $t_{a,i}=t_{w_{a,i}}$, then define $w_{b,i}=bab\dots$ and $t_{b,i}$ similarly. Then for all $1<i<m$, we have (3) $t_{ab}t_{b,i}=\begin{cases}t_{a,i-1}+t_{a,i+1}&\text{if $i<m-1$};\\\ t_{a,i-1}&\text{if $i=m-1$}.\end{cases}$ The following example illustrates how Proposition 2.2 can be used to compute the product $t_{x}t_{y}$ for all $x,y\in C$: Suppose $(W,S)$ is a Coxeter system where $S=\\{a,b,c\\}$ and $m(a,b)=3,m(a,c)=4,m(b,c)=5$. Let $x=abcb,y=bcbcac$. Then $x,y\in C$ by Remark 2.1.(a). The first two parts of Proposition 2.2 imply that $t_{y}t_{x}=0$ and $t_{x}t_{y}=(t_{ab}t_{bcb})(t_{bcbc}t_{cac})=t_{ab}(t_{bcb}t_{bcbc})t_{cac}.$ The product $t_{bcb}t_{bcbc}$ can be computed using Part (c) and turns out to equal $t_{bc}$.
Applying Part (b) again completes the computation: $t_{x}t_{y}=t_{ab}t_{bc}t_{cac}=t_{abcac}.$ Intuitively, as the example shows, the reductions allowed by the first two parts of Proposition 2.2 mean that the most interesting multiplication in $J_{C}$ happens “locally”, for elements within subgroups of $W$ generated by two of the generators. This fact is a key reason why Theorem 3.6 holds. ### 2.3. Path Algebras In this and the next subsection, we recall the background on quivers, path algebras and quiver representations that is relevant to the paper. Our main reference is [Sch14]. A _quiver_ is a directed graph $Q=(Q_{0},Q_{1})$ where $Q_{0}$ is the set of vertices and $Q_{1}$ is the set of directed edges, or _arrows_. The sets $Q_{0}$ and $Q_{1}$ will be finite for all quivers in this paper. For each arrow $\alpha:a\rightarrow b$, we call $a$ and $b$ the _source_ and the _target_ of $\alpha$ and denote them by $\operatorname{\mathrm{source}}(\alpha)$ and $\operatorname{\mathrm{target}}(\alpha)$, respectively. An arrow $\alpha$ is called a _loop at $a$_ if $\operatorname{\mathrm{source}}(\alpha)=\operatorname{\mathrm{target}}(\alpha)=a$. A _path_ on $Q$ is an element of the form $p=\alpha_{1}\alpha_{2}\dots\alpha_{n}$ where $\operatorname{\mathrm{target}}(\alpha_{i})=\operatorname{\mathrm{source}}(\alpha_{i+1})$ for all $1\leq i\leq n-1$; we define the _source_ of $p$ to be $\operatorname{\mathrm{source}}(p):=\operatorname{\mathrm{source}}(\alpha_{1})$ and the _target_ of $p$ to be $\operatorname{\mathrm{target}}(p):=\operatorname{\mathrm{target}}(\alpha_{n})$. To each vertex $a\in Q_{0}$, we associate a special path $e_{a}$ called the _stationary path at $a$_; we consider it as a path that “stays at $a$”, so in particular we have $\operatorname{\mathrm{source}}(e_{a})=\operatorname{\mathrm{target}}(e_{a})=a$. The _length_ of the path $p$, denoted by $\mathrm{length}(p)$, is defined to be the number of arrows it traverses. In other words, each arrow has length 1, each stationary path has length 0, and we have $\operatorname{\mathrm{length}}(p)=\sum_{i=1}^{n}\operatorname{\mathrm{length}}(\alpha_{i})$ for each path $p=\alpha_{1}\dots\alpha_{n}$. Let $\mathcal{P}$ be the set of all paths on $Q$, and let $R$ be a commutative ring. The _path algebra of $Q$ over $R$_, denoted by $RQ$, is the $R$-algebra with $\mathcal{P}$ as an $R$-basis and with multiplication induced by path concatenation: for paths $p=\alpha_{1}\dots\alpha_{m},q=\beta_{1}\dots\beta_{n}\in\mathcal{P}$, we define $pq$ to be the path $\alpha_{1}\dots\alpha_{m}\beta_{1}\dots\beta_{n}$ if $\operatorname{\mathrm{target}}(p)=\operatorname{\mathrm{source}}(q)$ and to be 0 otherwise. In particular, for any path $p$ with source $a$ and target $b$, we have $e_{a}p=p=pe_{b}$ in $RQ$. Consequently, $RQ$ contains the unit $1=\sum_{a\in Q_{0}}e_{a}$, and we can describe $RQ$ as the algebra generated by the arrows and stationary paths in $Q$ subject only to the relations $e_{a}e_{b}=\delta_{a,b}e_{a}$ for all $a,b\in Q_{0}$ and $e_{a}\alpha=\alpha=\alpha e_{b}$ for each arrow $\alpha:a\rightarrow b$ in $Q_{1}$. Among the elements of $RQ$, we will be especially interested in elements of the form $r=\sum_{p}c_{p}p\in RQ$ where the sum is taken over a finite set of paths on $Q$ which share the same source and the same target. Following [Sch14, Definition 3.1], we call such an element $r$ a _uniform relation_ or simply a _relation_ on $Q$.
We define the source and target of $r$ to be the common source and common target of the paths appearing in it, respectively. Our first main theorem, Theorem 3.6, asserts that $J_{C}\cong\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$ for a suitable quiver $Q$ and a suitable ideal $\mathcal{I}_{u}^{\mathbb{Z}}$ generated by a set of relations of the form $\mathcal{R}=\\{r_{u}(\alpha):\alpha\in Q_{1}\\}$, where each relation corresponds to an arrow in $Q$. ### 2.4. Quiver Representations Let $Q$ be a quiver and let $K$ be an arbitrary field. We recall below some basic facts about the representation theory of the path algebra $KQ$ and its quotients. All representations and modules we mention in this paper will be finite dimensional. Let mod-$KQ$ be the category of finite dimensional right $KQ$-modules. It is well-known that mod-$KQ$ is naturally equivalent to the category $\mathrm{rep}_{K}Q$ of finite dimensional representations of $Q$ over $K$. Here, a _representation_ of a quiver $Q$ over $K$ is an assignment $M=(M_{a},M_{\alpha})_{a\in Q_{0},\alpha\in Q_{1}}$ of a $K$-vector space $M_{a}$ to each vertex $a$ of $Q$ and a linear map $M_{\alpha}:M_{a}\rightarrow M_{b}$ for each arrow $\alpha:a\rightarrow b$ in $Q$; the _dimension_ of $M$ is defined by $\dim(M):=\sum_{a\in Q_{0}}\dim(M_{a})$. A morphism $\varphi:M\rightarrow N$ between two representations $M,N$ of $Q$ consists of the data $\varphi=(\varphi_{a})_{a\in Q_{0}}$ of linear maps $\varphi_{a}:M_{a}\rightarrow N_{a}$ for $a\in Q_{0}$ such that $\varphi_{b}\circ M_{\alpha}=N_{\alpha}\circ\varphi_{a}$ for every arrow $\alpha:a\rightarrow b$ in $Q$. The equivalence between the two categories can be established by two naturally defined quasi-inverse functors $\mathcal{F}:\textrm{mod-}KQ\rightarrow\mathrm{rep}_{K}Q$ and $\mathcal{G}:\mathrm{rep}_{K}Q\rightarrow\textrm{mod-}KQ$; see [Sch14, Chapter 5]. We can modify the equivalence between mod-$KQ$ and $\mathrm{rep}_{K}Q$ to account for relations on $Q$. To do so, for each representation $M$ of $Q$, we set $M_{e_{a}}=\operatorname{\mathrm{id}}_{M_{a}}$ for all $a\in Q_{0}$ and associate to each path $p=\alpha_{1}\dots\alpha_{n}$ on $Q$ the map $M_{p}:=M_{\alpha_{n}}\circ\dots\circ M_{\alpha_{1}},$ and we say that $M$ _satisfies_ a relation $r=\sum c_{p}p$ if $\sum c_{p}M_{p}=0$. For an ideal $\mathcal{I}$ of $KQ$ generated by a set of relations $\mathcal{R}$, define a representation of $Q$ to be a _representation of $(Q,\mathcal{I})$_ if it satisfies all relations in $\mathcal{R}$. Finally, let $\mathrm{rep}_{K}(Q,\mathcal{I})$ be the full subcategory of $\mathrm{rep}_{K}Q$ whose objects are the representations of $(Q,\mathcal{I})$. Then it is well-known that $\mathrm{rep}_{K}(Q,\mathcal{I})$ is equivalent to mod-$KQ/\mathcal{I}$, the category of finite dimensional right modules of the quotient $KQ/\mathcal{I}$.
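The condition that a representation satisfies a relation is concrete enough to check by machine. The following is a small sketch of ours (assuming NumPy; all names are our own): a representation is stored as matrices, each path $p$ is sent to the composite $M_{p}$, and a relation $\sum_{p}c_{p}p$ is satisfied when the corresponding matrix sum vanishes.

```python
import numpy as np

def path_matrix(arrows, path, source, dims):
    # M_p = M_{alpha_n} o ... o M_{alpha_1}; the empty path stands for e_source
    M = np.eye(dims[source])
    for name in path:
        M = arrows[name][2] @ M  # apply the next arrow's matrix on the left
    return M

def satisfies(dims, arrows, relation, source, tol=1e-9):
    """dims: vertex -> dim(M_vertex); arrows: name -> (a, b, matrix) for alpha: a -> b;
    relation: list of (c_p, [arrow names]) whose paths share a common source and
    target. Returns True iff sum_p c_p M_p = 0."""
    total = sum(c * path_matrix(arrows, p, source, dims) for c, p in relation)
    return np.allclose(total, 0, atol=tol)

# Toy dihedral example with the relation alpha*beta - e_a, which is satisfied
# exactly when M_beta M_alpha = id (compare Section 3.3 below):
dims = {'a': 1, 'b': 1}
arrows = {'alpha': ('a', 'b', np.array([[1.0]])),
          'beta': ('b', 'a', np.array([[1.0]]))}
assert satisfies(dims, arrows, [(1.0, ['alpha', 'beta']), (-1.0, [])], 'a')
```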
###### Remark 2.4. We introduce two types of shorthand notation to be used for the rest of the paper. First, for a category $\mathbf{C}$, we will write $M\in\mathbf{C}$ to mean that $M$ is an object in $\mathbf{C}$. Second, given a two-sided ideal $I$ in a ring $R$ and an element $r\in R$, we will denote the coset $r+I$ simply by $r$. Familiar notions from mod-$KQ$ have obvious counterparts in $\mathrm{rep}_{K}Q$: The _zero representation_ in $\mathrm{rep}_{K}Q$ is the representation $M$ with $M_{a}=0$ for all $a\in Q_{0}$. A _subrepresentation_ of a representation $M$ is an assignment $N=(N_{a},N_{\alpha})_{a\in Q_{0},\alpha\in Q_{1}}$ such that for every arrow $\alpha:a\rightarrow b$ in $Q$, we have $N_{a}\subseteq M_{a}$, $M_{\alpha}(N_{a})\subseteq N_{b}$, and $N_{\alpha}$ equals the restriction of $M_{\alpha}$ to $N_{a}$. A representation is _simple_ if it does not contain any proper, nonzero subrepresentation. The _direct sum_ of two representations $M,N$ is the representation $M\oplus N$ where $(M\oplus N)_{a}=M_{a}\oplus N_{a}$ and $(M\oplus N)_{\alpha}((m,n))=(M_{\alpha}(m),N_{\alpha}(n))$ for every arrow $\alpha:a\rightarrow b$ and every element $(m,n)\in M_{a}\oplus N_{a}$. Finally, a representation is _semisimple_ if it is a direct sum of simple representations, and each of $\mathrm{rep}_{K}Q$ and $\mathrm{rep}_{K}(Q,\mathcal{I})$ is _semisimple_ if all representations in it are semisimple. These notions agree with their counterparts in mod-$KQ$ under the equivalences $\mathcal{F}$ and $\mathcal{G}$. For example, a representation $M\in\mathrm{rep}_{K}Q$ is simple if and only if the module $\mathcal{G}(M)\in\text{mod-}KQ$ is simple, and $\mathrm{rep}_{K}(Q,\mathcal{I})$ is semisimple if and only if mod-$KQ/\mathcal{I}$ is semisimple. Indeed, the agreement of the notions can be attributed to the facts that mod-$KQ$ and $\mathrm{rep}_{K}Q$ are abelian categories, that the definitions in $\mathrm{rep}_{K}Q$ and mod-$KQ$ are specializations of the corresponding categorical notions, and that $\mathcal{F},\mathcal{G}$ are equivalences of abelian categories. ## 3\. Quiver Realizations Henceforth, let $K$ be an algebraically closed field of characteristic zero, let $(W,S)$ be a Coxeter system, and let $G,C$ and $J_{C}$ be the Coxeter diagram, subregular cell and subregular $J$-ring of $(W,S)$, respectively. Let $A=A_{K}:=K\otimes_{\mathbb{Z}}J_{C}$. In this section, we associate a quiver $Q$ to $(W,S)$ and then show that $J_{C}\cong\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$ and $A\cong KQ/\mathcal{I}_{f}$ for suitable ideals $\mathcal{I}_{u}^{\mathbb{Z}}\subseteq\mathbb{Z}Q$ and $\mathcal{I}_{f}\subseteq KQ$. By § 2.4, the latter isomorphism will allow us to study the category mod-$A$ via the equivalent category $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. ### 3.1. Statement of Results Let $Q=(Q_{0},Q_{1})$ be the quiver with $Q_{0}=S$ and $Q_{1}=\\{(a,b):a,b\in S,m({a,b})\geq 3\\}$. Each edge $a-b$ in the Coxeter diagram $G$ gives rise to a pair of arrows $a\rightarrow b$ and $b\rightarrow a$ in $Q$, and all arrows of $Q$ arise this way. For an arrow $\alpha:a\rightarrow b$ in $Q$, we call the arrow $b\rightarrow a$ arising from the same edge in $G$ the _dual arrow of $\alpha$_ and denote it by $\bar{\alpha}$; we define the _weight_ of $\alpha$ to be $m(a,b)$ and denote it by $m_{\alpha}$. We call the quiver $Q$ the _double quiver of $(W,S)$_ or the _double quiver of $G$_. The ideal $\mathcal{I}_{u}^{\mathbb{Z}}$ for which $J_{C}\cong\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$ is generated by a set of (uniform) relations obtained via _arrow evaluations_ of polynomials from suitable polynomial families. We first define arrow evaluations: ###### Definition 3.1.
For each arrow $\alpha$ in $Q$, let $\mathrm{Eval}_{\alpha}:K[x]\rightarrow KQ$ be the unique $K$-linear map such that $\mathrm{Eval}_{\alpha}(1)=e_{a}$ where $a=\operatorname{\mathrm{source}}(\alpha)$ and $\mathrm{Eval}_{\alpha}(x^{n})=\alpha\bar{\alpha}\alpha\dots,$ the product with $n$ factors that starts with $\alpha$ and alternates between $\alpha$ and $\bar{\alpha}$, for all $n>0$. For each polynomial $f\in K[x]$, we write $f(\alpha,\bar{\alpha}):=\mathrm{Eval}_{\alpha}(f)$ and call $f(\alpha,\bar{\alpha})$ the _$\alpha$-evaluation of $f$_. By a “polynomial family” we mean a countable collection $\\{f_{n}:n\in\mathbb{Z}_{\geq 2}\\}$ of polynomials in $K[x]$. Note that for $f(\alpha,\bar{\alpha})$ to yield a uniform relation on $Q$, the polynomial $f$ needs to be either even or odd; therefore we will consider only polynomial families $\\{f_{n}\\}$ where each $f_{n}$ is either an even or an odd polynomial. To describe further conditions we would like to impose on $\\{f_{n}\\}$, we need more notation: ###### Definition 3.2. For each even polynomial $f=\sum c_{i}x^{2i}\in K[x]$, let $\tilde{f}=\sum{c_{i}}x^{i};$ for each odd polynomial $f=\sum c_{i}x^{2i+1}\in K[x]$, let $\tilde{f}=\sum{c_{i}}x^{i}.$ Note that when $f$ is an even or odd polynomial of degree $n$, the polynomial $\tilde{f}$ has degree $\lfloor{n/2}\rfloor$ where $\lfloor{-}\rfloor$ denotes the floor function; moreover, we have (4) $f(\alpha,\bar{\alpha})=\begin{cases}\tilde{f}(\alpha\bar{\alpha})&\text{if $f$ is even};\\\ \tilde{f}(\alpha\bar{\alpha})\cdot\alpha=\alpha\cdot\tilde{f}(\bar{\alpha}\alpha)&\text{if $f$ is odd},\end{cases}$ where we evaluate a constant term $c$ in $\tilde{f}$ to $ce_{a}$ for $a=\operatorname{\mathrm{source}}(\alpha)$. For example, if $f=x^{3}-2x$ then $\tilde{f}=x-2$ and $f(\alpha,\bar{\alpha})=\alpha\bar{\alpha}\alpha-2\alpha$, and if $f=x^{4}-1$ then $\tilde{f}=x^{2}-1$ and $f(\alpha,\bar{\alpha})=\alpha\bar{\alpha}\alpha\bar{\alpha}-e_{a}$ where $a=\operatorname{\mathrm{source}}(\alpha)$. We are ready to define the polynomial families we need. ###### Definition 3.3. A _uniform_ family of polynomials (over $K$) is a set $\\{f_{n}\in K[x]:n\in\mathbb{Z}_{\geq 2}\\}$ such that for all $n\in\mathbb{Z}_{\geq 2}$, we have 1. (a) $f_{n}$ has degree $n$, is even when $n$ is even, and is odd when $n$ is odd. 2. (b) zero is not a root of $\tilde{f}_{n}$, and no root of $\tilde{f}_{n}$ is repeated. Given a uniform polynomial family, we assign one relation to each arrow and define an ideal $\mathcal{I}_{f}$ of $KQ$ as follows: ###### Definition 3.4. Let $\\{f_{n}:n\geq 2\\}$ be a uniform family of polynomials. 1. (a) For each arrow $\alpha$ in $Q$, we set $m=m_{\alpha}$ and define (5) $r_{f}(\alpha)=\begin{cases}0&\text{if $m=\infty$};\\\ f_{m-1}(\alpha,\bar{\alpha})&\text{if $m<\infty$}.\end{cases}$ 2. (b) We define the _evaluation ideal of $\\{f_{n}\\}$_ to be the two-sided ideal $\mathcal{I}_{f}:=\langle r_{f}(\alpha):\alpha\in Q_{1}\rangle$ of $KQ$ generated by the relations of the form $r_{f}(\alpha)$. More generally, if $f_{n}\in R[x]$ for all $n\geq 2$ for some subring $R$ of $K$, we define $\mathcal{I}_{f}^{R}$ to be the two-sided ideal of $RQ$ given by $\mathcal{I}_{f}^{R}:=\langle r_{f}(\alpha):\alpha\in Q_{1}\rangle\subseteq RQ.$
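Arrow evaluation is straightforward to compute symbolically. The following is a small sketch using our own representation (paths as tuples of arrow names, relations as dictionaries from paths to coefficients); it is an illustration, not code from the paper.

```python
def eval_arrow(coeffs, alpha, alpha_bar, source):
    """Definition 3.1, computed symbolically: given the coefficients
    c_0, c_1, ... of f = sum_n c_n x^n (f even or odd in practice), return
    f(alpha, alpha_bar) as a dict mapping paths to coefficients. The monomial
    x^n maps to the alternating path with n arrows starting at alpha, and
    x^0 maps to the stationary path e_source."""
    rel = {}
    for n, c in enumerate(coeffs):
        if c == 0:
            continue
        if n == 0:
            path = (('e', source),)  # stationary path e_a
        else:
            path = tuple(alpha if i % 2 == 0 else alpha_bar for i in range(n))
        rel[path] = rel.get(path, 0) + c
    return rel

# With f = x^2 - 1 (coeffs = [-1, 0, 1]) we recover f(alpha, alpha_bar) =
# alpha * alpha_bar - e_a, matching Equation (4) with tilde(f) = x - 1:
print(eval_arrow([-1, 0, 1], 'alpha', 'beta', 'a'))
# -> {(('e', 'a'),): -1, ('alpha', 'beta'): 1}
```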
###### Example 3.5. Suppose $K=\mathbb{C}$, and consider the polynomials $u_{n}$ for $n\geq 0$ where (6) $u_{0}=1,\quad u_{1}=x,\quad\text{and\; }u_{n}=xu_{n-1}-u_{n-2}\;\text{\;for all $n\geq 2$}.$ These polynomials are normalizations of the _Chebyshev polynomials of the second kind_. It is easy to see by induction that for each $n\geq 2$, the polynomial $u_{n}$ has degree $n$, is even when $n$ is even, and is odd when $n$ is odd. Moreover, it is known that $u_{n}$ has $n$ distinct real roots $z_{1},\dots,z_{n}$ where $z_{i}=2\cos\left(\dfrac{i\pi}{n+1}\right)$ for each $i$. The definition of the polynomial $\tilde{u}_{n}$ implies that $\tilde{u}_{n}$ has $\lfloor\frac{n}{2}\rfloor$ distinct nonzero roots, namely, the numbers $z_{i}^{2}$ where $1\leq i\leq\lfloor\frac{n}{2}\rfloor$. It follows that $\\{u_{n}:n\geq 2\\}$ forms a uniform family of polynomials over $\mathbb{C}$. Note that $u_{n}\in\mathbb{Z}[x]$ for all $n\geq 2$, so $\mathcal{I}_{u}^{\mathbb{Z}}$ makes sense as an ideal of $\mathbb{Z}Q$. We state our first two results below. ###### Theorem 3.6. Let $\\{u_{n}:n\in\mathbb{Z}_{\geq 2}\\}$ be as in Example 3.5. Then $J_{C}\cong\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$ as unital rings. ###### Theorem 3.7. Let $K$ be an algebraically closed field of characteristic zero and let $\\{f_{n}\\},\\{g_{n}\\}$ be two uniform families of polynomials over $K$. Then $KQ/\mathcal{I}_{f}\cong KQ/\mathcal{I}_{g}$ as $K$-algebras. ###### Remark 3.8. We can now explain why it suffices to deal with only irreducible Coxeter systems in this paper. Recall that if $(W,S)$ is reducible, then the connected components of its Coxeter diagram are the diagrams of Coxeter systems $(W_{i},S_{i})$ for $1\leq i\leq k$ for some $k\geq 2$, and we have $S=\sqcup_{i}S_{i},W=\Pi_{i}W_{i}$, where the symbols $\sqcup$ and $\Pi$ denote disjoint union and direct product, respectively. Now let $C(i),Q(i)$ be the subregular cell and the double quiver of $(W_{i},S_{i})$ for each $i$. Then $C=\sqcup_{i}C(i)$ by definition and $J_{C}=\Pi_{i}J_{C(i)}$ by Part (a) of Proposition 2.2. On the other hand, for any uniform polynomial family $\\{f_{n}\\}$ over $K$ where $f_{n}\in\mathbb{Z}[x]$ for all $n$ (such as $\\{u_{n}\\}$) it is easy to see that $\mathbb{Z}Q/\mathcal{I}_{f}^{\mathbb{Z}}=\Pi_{i}\mathbb{Z}Q(i)/\mathcal{I}_{f}^{\mathbb{Z}}(i)$ and $KQ/\mathcal{I}_{f}\cong\Pi_{i}KQ(i)/\mathcal{I}_{f}(i)$, where for each $i$ the ideals $\mathcal{I}_{f}^{\mathbb{Z}}(i)$ and $\mathcal{I}_{f}(i)$ are the evaluation ideals of $\\{f_{n}\\}$ in $\mathbb{Z}Q(i)$ and $KQ(i)$, respectively. It follows that we can deduce Theorem 3.6 and Theorem 3.7 for reducible Coxeter systems from the irreducible cases by taking suitable direct products.
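The claims of Example 3.5 are easy to confirm numerically. The following quick sanity check is a sketch of ours assuming NumPy; it plays no role in any proof.

```python
import numpy as np

def chebyshev_u(n):
    """Coefficients (lowest degree first) of u_n, computed from Equation (6)."""
    u_prev, u_curr = np.array([1.0]), np.array([0.0, 1.0])  # u_0, u_1
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        shifted = np.concatenate(([0.0], u_curr))  # x * u_k
        padded = np.concatenate((u_prev, np.zeros(len(shifted) - len(u_prev))))
        u_prev, u_curr = u_curr, shifted - padded  # u_{k+1} = x*u_k - u_{k-1}
    return u_curr

# u_n should have the n distinct real roots 2*cos(i*pi/(n+1)), i = 1, ..., n.
for n in range(2, 10):
    roots = np.sort(np.roots(chebyshev_u(n)[::-1]))
    expected = np.sort([2 * np.cos(i * np.pi / (n + 1)) for i in range(1, n + 1)])
    assert np.allclose(roots, expected)
```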
### 3.2. Proof of Theorem 3.6 In this section we prove Theorem 3.6 by constructing an explicit isomorphism $\bar{\varphi}:\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}\rightarrow J_{C}$. To connect the two sides of the isomorphism, first observe that given any element $w=s_{1}s_{2}\dots s_{k}\in C$, we must have $m(s_{i},s_{i+1})\geq 3$ for all $1\leq i\leq k-1$: otherwise we can exchange $s_{i}$ and $s_{i+1}$ to obtain another reduced word of $w$, contradicting the fact that $w$ is rigid. It follows that the quiver $Q$ contains an arrow $\alpha_{i}:s_{i}\rightarrow s_{i+1}$ for all $1\leq i\leq k-1$ as well as the path $p_{w}:=\alpha_{1}\alpha_{2}\cdots\alpha_{k-1}.$ Recall the notation $\mathcal{P}$ for the set of all paths on $Q$, and consider the map $\iota:C\rightarrow\mathcal{P}$ which sends $w$ to $p_{w}$ for all $w\in C$. For each arrow $\alpha:a\rightarrow b$ in $Q$ with $m_{\alpha}<\infty$, let $p_{\alpha}:=\alpha\bar{\alpha}\alpha\dots$ be the path of length $m_{\alpha}-1$ obtained by concatenating $\alpha$ and $\bar{\alpha}$ repeatedly. Define a path $p\in\mathcal{P}$ to be _unbraided_ if it does not contain $p_{\alpha}$ as a subpath, i.e., if we cannot write $p=p_{1}p_{\alpha}p_{2}$ for some paths $p_{1},p_{2}\in\mathcal{P}$, for all $\alpha\in Q_{1}$ with $m_{\alpha}<\infty$. Let $\mathrm{Unbr}(Q)$ be the set of unbraided paths in $\mathcal{P}$. Then by Remark 2.1.(a), the image of $\iota$ is exactly $\mathrm{Unbr}(Q)$. Since $\iota$ is clearly injective, it gives a bijection from $C$ to $\mathrm{Unbr}(Q)$. We will henceforth use $\iota$ exclusively to denote this bijection. The definitions and notation of this paragraph are inspired by those from [Bon17, Chapter 12], where the bijection $\pi:\mathrm{Unbr}(Q)\rightarrow C$ is essentially the inverse of $\iota$. Having connected $C$ to $\mathbb{Z}Q$, let us next consider the effect of quotienting $\mathbb{Z}Q$ by the ideal $\mathcal{I}_{u}^{\mathbb{Z}}$. Let $\alpha\in Q_{1}$ and let $m=m_{\alpha}$. Since the polynomial $u_{m-1}$ has degree $(m-1)$, the relation $r_{u}(\alpha)=u_{m-1}(\alpha,\bar{\alpha})\in\mathcal{I}_{u}^{\mathbb{Z}}$ must be a linear combination of the alternating path $q:=\alpha\bar{\alpha}\alpha\cdots$ of length $m-1$ and strictly shorter, unbraided paths in $\mathbb{Z}Q$. Since $\alpha$ is arbitrary, it follows that modulo $\mathcal{I}_{u}^{\mathbb{Z}}$ we can rewrite every path as a linear combination of unbraided paths. In other words, every element in the quotient $\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$ can be represented in the form $\sum_{p\in\mathrm{Unbr}(Q)}c_{p}p$ where the coefficients $c_{p}\in\mathbb{Z}$ are zero for all but finitely many paths. The final tool we need concerns a natural filtration of $J_{C}$. For each $i\in\mathbb{Z}_{\geq 0}$, let $C^{(i)}=\\{w\in C:l(w)\leq i+1\\}$ and let $J_{C}^{(i)}=\oplus_{w\in C^{(i)}}\mathbb{Z}t_{w}$. As the example at the end of § 2.2 illustrates, Propositions 2.2 and 2.3 imply that given elements $x,y\in C$ with length $l(x)=p+1$ and $l(y)=q+1$ for some $p,q\geq 0$, the product $t_{x}t_{y}$ is always a linear combination of terms of the form $t_{z}$ where $l(z)\leq p+q+1$. It follows that the filtration (7) $0\subseteq J_{C}^{(0)}\subseteq J_{C}^{(1)}\subseteq\dots$ equips $J_{C}$ with the structure of a filtered algebra. The same propositions also imply the following result. ###### Proposition 3.9. Let $w=s_{1}s_{2}\dots s_{k}\in C$. Then in $J_{C}$, we have $t_{s_{1}s_{2}}t_{s_{2}s_{3}}\dots t_{s_{k-1}s_{k}}\in t_{w}+J_{C}^{(k-2)}$ where $t_{w}+J_{C}^{(k-2)}=\\{t_{w}+z:z\in J_{C}^{(k-2)}\\}$. In other words, the product is the sum of $t_{w}$ and a linear combination of terms $t_{y}$ for which $l(y)<k$. This proposition will be useful for proving that the map $\bar{\varphi}:\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}\rightarrow J_{C}$ is an isomorphism, because we will examine several outputs of the map $\bar{\varphi}$ which have the form $t_{s_{1}s_{2}}t_{s_{2}s_{3}}\dots t_{s_{k-1}s_{k}}$.
Rather than giving a formal proof of it, however, let us only sketch the main ideas needed with an example. The proposition follows from repeated application of Proposition 2.3 in the special case that $w$ is an $\\{a,b\\}$-element for some $a,b\in S$. The general case then reduces to the special case in the way illustrated by the following example: suppose $w=abacacb\in C$ for some Coxeter system and let $T=t_{ab}t_{ba}t_{ac}t_{ca}t_{ac}t_{cb}$. Then $k=7$, and by the special case we have $T=(t_{ab}t_{ba})(t_{ac}t_{ca}t_{ac})(t_{cb})\in\left(t_{aba}+J_{C}^{(1)}\right)\left(t_{acac}+J_{C}^{(2)}\right)\left(t_{cb}+J_{C}^{(0)}\right)$ where the factors in parentheses correspond to the longest “dihedral” subwords $aba,acac,cb$ of $w$. The filtration (7) implies that all terms $t_{z}$ with $l(z)=k$ which appear in $T$ must come from the product $t_{aba}t_{acac}t_{cb}$, where each factor is the “highest degree part” in a pair of parentheses. This product is nothing but $t_{w}$ by Proposition 2.2.(b), therefore $T\in t_{aba}t_{acac}t_{cb}+J_{C}^{(5)}=t_{w}+J_{C}^{(k-2)},$ as desired. We are ready to prove Theorem 3.6. Roughly speaking, the isomorphism holds for two main reasons: first, as we mentioned in § 2.2, all interesting multiplications in $J_{C}$ happen “locally” along individual edges of the Coxeter diagram, and the relations generating $\mathcal{I}_{u}^{\mathbb{Z}}$ are defined in the same edge-by-edge fashion; second, via arrow evaluations, the recursion from Equation (3) which controls the local multiplication in $J_{C}$ “agrees with” the recursive definition of $\\{u_{n}\\}$ which controls the generators of $\mathcal{I}_{u}^{\mathbb{Z}}$. We make these remarks more precise in the following proposition, where Theorem 3.6 appears as its last assertion. ###### Proposition 3.10. Let $(W,S)$ be an irreducible Coxeter system and let $Q$ be its double quiver. 1. (a) There exists a unique algebra homomorphism $\varphi:\mathbb{Z}Q\rightarrow J_{C}$ such that for every pair of dual arrows $\alpha:a\rightarrow b$ and $\beta:b\rightarrow a$ in $Q$, we have (8) $\varphi(e_{a})=t_{a},\quad\varphi(e_{b})=t_{b},\quad\varphi(\alpha)=t_{ab},\quad\varphi(\beta)=t_{ba}.$ Moreover, for all $1\leq i\leq m:=m(a,b)$, we have (9) $\varphi(u_{i-1}(\alpha,\beta))=\begin{cases}t_{a,i}&\text{if $i<m$};\\\ 0&\text{if $i=m<\infty$}\end{cases}$ and similarly (10) $\varphi(u_{i-1}(\beta,\alpha))=\begin{cases}t_{b,i}&\text{if $i<m$};\\\ 0&\text{if $i=m<\infty$},\end{cases}$ where $w_{a,i},w_{b,i},t_{a,i},t_{b,i}$ are as in Proposition 2.3. 2. (b) The map $\varphi$ factors through the ideal $\mathcal{I}_{u}^{\mathbb{Z}}$ and induces a homomorphism $\bar{\varphi}:\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}\rightarrow J_{C}$ given by $\bar{\varphi}(p)=\varphi(p)$ for all $p\in\mathrm{Unbr}(Q)$. 3. (c) The map $\bar{\varphi}$ is a unital algebra isomorphism, therefore $J_{C}\cong\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$. ###### Proof. (a) Recall from Section 2.3 that $\mathbb{Z}Q$ is generated by the arrows and stationary paths of $Q$ subject only to the relations $e_{u}e_{v}=\delta_{u,v}e_{u}$ for all $u,v\in Q_{0}$ and $e_{a}\alpha=\alpha=\alpha e_{b}$ for every arrow $\alpha:a\rightarrow b$ in $Q_{1}$. On the other hand, in $J_{C}$ we have $t_{u}t_{v}=\delta_{u,v}t_{u}$ for all $u,v\in Q_{0}$ and $t_{a}t_{ab}=t_{ab}=t_{ab}t_{b}$ for every arrow $\alpha:a\rightarrow b$ in $Q_{1}$ by Proposition 2.2.
Thus, the relations satisfied by the generators of $\mathbb{Z}Q$ are respected in the assignment $e_{a}\mapsto t_{a},\alpha\mapsto t_{ab}$ for all arrows $\alpha:a\rightarrow b$ in $Q_{1}$. It follows that this assignment extends to a unique algebra homomorphism $\varphi:\mathbb{Z}Q\rightarrow J_{C}$ which satisfies Equation (8). The homomorphism is unital since $\varphi(1)=\varphi\left(\sum_{a\in Q_{0}}e_{a}\right)=\sum_{a\in Q_{0}}\varphi(e_{a})=\sum_{a\in Q_{0}}t_{a}=1.$ Note that for each $w=s_{1}s_{2}\dots s_{k}\in C$, Equation (8) and the fact that $\varphi$ is a homomorphism imply that $\varphi(p_{w})$ is exactly the element $t_{s_{1}s_{2}}t_{s_{2}s_{3}}\dots t_{s_{k-1}s_{k}}$. It follows from Proposition 3.9 that (11) $\varphi(p_{w})\in t_{w}+J_{C}^{(l(w)-2)}.$ To prove Equations (9) and (10), we induct on $i$. For $i\leq 2$, the equations follow from the definition of $\varphi$ and the fact that $u_{0}=1,u_{1}=x$. For $i>2$, the recursion $u_{i-1}=xu_{i-2}-u_{i-3}$ implies that $u_{i-1}(\alpha,\beta)=\alpha u_{i-2}(\beta,\alpha)-u_{i-3}(\alpha,\beta),$ therefore $\varphi(u_{i-1}(\alpha,\beta))=\varphi(\alpha)\varphi(u_{i-2}(\beta,\alpha))-\varphi(u_{i-3}(\alpha,\beta))=t_{ab}t_{b,i-1}-t_{a,i-2}=\begin{cases}t_{a,i}&\text{if $i<m$};\\\ 0&\text{if $i=m<\infty$},\end{cases}$ where the second equality holds by induction and the third equality follows from Proposition 2.3. This proves Equation (9); the proof of Equation (10) is similar. (b) By construction, the ideal $\mathcal{I}_{u}^{\mathbb{Z}}$ is generated by elements of the form $r_{u}(\alpha)=u_{m_{\alpha}-1}(\alpha,\bar{\alpha})$ where $\alpha$ is an arrow in $Q$ with finite weight. Such elements vanish via $\varphi$ by Equation (9), therefore $\varphi$ factors through $\mathcal{I}_{u}^{\mathbb{Z}}$ and descends to the map $\bar{\varphi}$ as claimed. (c) The map $\bar{\varphi}$ is unital since $\varphi$ is unital. To show that $\bar{\varphi}$ is surjective, we prove that $t_{w}\in\operatorname{\mathrm{im}}\bar{\varphi}$ for all $w\in C$ by induction on $l(w)$. In the base case where $l(w)=1$, we must have $w=a$ for some $a\in S$ and hence $t_{w}=\bar{\varphi}(e_{a})\in\operatorname{\mathrm{im}}\bar{\varphi}$. When $l(w)>1$, we have $\bar{\varphi}(p_{w})=\varphi(p_{w})\in t_{w}+J_{C}^{(l(w)-2)}$ by (11) and $J_{C}^{(l(w)-2)}\subseteq\operatorname{\mathrm{im}}\bar{\varphi}$ by induction, therefore $t_{w}\in\operatorname{\mathrm{im}}\bar{\varphi}$. It remains to prove that $\bar{\varphi}$ is injective. Let $x=\sum_{p\in\mathrm{Unbr}(Q)}c_{p}p$ be a nonzero element in $\mathbb{Z}Q/\mathcal{I}_{u}^{\mathbb{Z}}$. We need to show that $\bar{\varphi}(x)\neq 0$. To do so, let $k$ be the maximal number such that $c_{p}\neq 0$ for some unbraided path $p$ of length $k$, and let $\\{p_{1},\dots,p_{n}\\}$ be the set of paths of length $k$ appearing with nonzero coefficients in $x$. Let $w_{i}=\iota^{-1}(p_{i})$ and write $c_{i}:=c_{p_{i}}$ for all $1\leq i\leq n$. Then $l(w_{i})=k+1$ for all $i$ and we have $\bar{\varphi}(x)=\sum_{i=1}^{n}c_{i}\varphi(p_{i})\in\sum_{i=1}^{n}c_{i}t_{w_{i}}+J_{C}^{(k-1)}$ by (11). It follows that $\bar{\varphi}(x)\neq 0$, and the proof is complete. ∎
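The inductive computation above is also easy to verify by machine in the dihedral case. The sketch below is our own code (assuming NumPy; all names are ours): it realizes left multiplication by $t_{ab}$ and $t_{ba}$ as linear maps on the span of the elements $t_{a,i}$ and $t_{b,i}$ via Equation (3), then checks Equation (9) and Equation (10) for small $m$.

```python
import numpy as np

def check_eq9(m):
    """Verify Equations (9) and (10) for the dihedral case m = m(a,b) < infinity.
    Vectors hold coefficients over the basis t_{a,1}, ..., t_{a,m-1},
    t_{b,1}, ..., t_{b,m-1}."""
    dim = 2 * (m - 1)

    def t(side, i):  # basis vector of t_{a,i} (side 0) or t_{b,i} (side 1)
        v = np.zeros(dim)
        v[side * (m - 1) + (i - 1)] = 1.0
        return v

    def mult(side, v):  # left multiplication by t_{ab} (side 0) or t_{ba} (side 1)
        w = np.zeros(dim)
        other = 1 - side
        for i in range(1, m):
            c = v[other * (m - 1) + (i - 1)]  # coefficient of t_{other,i}
            if c:
                # i = 1 gives t_{ab} t_b = t_{ab} = t_{a,2} by Proposition 2.2(b);
                # 1 < i < m follows Equation (3).
                if i > 1:
                    w += c * t(side, i - 1)
                if i < m - 1:
                    w += c * t(side, i + 1)
        return w

    x = [None, t(0, 1), t(0, 2)]  # x[i] = phi(u_{i-1}(alpha, beta))
    y = [None, t(1, 1), t(1, 2)]  # y[i] = phi(u_{i-1}(beta, alpha))
    for i in range(2, m):         # u_i = x * u_{i-1} - u_{i-2}
        x.append(mult(0, y[i]) - x[i - 1])
        y.append(mult(1, x[i]) - y[i - 1])
    for i in range(1, m):
        assert np.array_equal(x[i], t(0, i)) and np.array_equal(y[i], t(1, i))
    assert not x[m].any() and not y[m].any()  # phi(u_{m-1}) = 0

for m in range(3, 12):
    check_eq9(m)
```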
### 3.3. Proof of Theorem 3.7: Dihedral Case Let $\\{f_{n}\\}$ be a uniform family of polynomials over $K$. As the generators of the ideal $\mathcal{I}_{f}$ correspond to individual pairs of dual arrows in $Q$, we first prove Theorem 3.7 in the _dihedral_ case, the case where $\lvert S\rvert=2$. Let $S=\\{a,b\\}$, let $m=m(a,b)\geq 3$, and denote the arrows $a\rightarrow b$ and $b\rightarrow a$ in $Q$ by $\alpha$ and $\beta$, respectively. If $m=\infty$, then $\mathcal{I}_{f}=0$ and the theorem clearly holds, so until Corollary 3.14 we assume that $m$ is finite. Under this assumption, we show that $KQ/\mathcal{I}_{f}$ is semisimple and find its Artin–Wedderburn decomposition. We start with the category $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ in light of the equivalence between mod-$KQ/\mathcal{I}_{f}$ and $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ (see § 2.4). The simple representations in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ turn out to have the following forms: ###### Lemma 3.11. 1. (a) For each root $\lambda$ of the polynomial $\tilde{f}_{m-1}$, the assignment $M(\lambda):=(M_{a},M_{b},M_{\alpha},M_{\beta})=(K,K,\operatorname{\mathrm{id}},\lambda\cdot\operatorname{\mathrm{id}})$ defines a simple representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. Moreover, if $\lambda$ and $\lambda^{\prime}$ are distinct roots of $\tilde{f}_{m-1}$, then $M(\lambda)\not\cong M({\lambda^{\prime}})$. 2. (b) If $m$ is even, then the assignments $S(a):=(K,0,0,0),\quad S(b):=(0,K,0,0)$ define two non-isomorphic simple representations in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. ###### Proof. (a) Recall that $\mathcal{I}_{f}$ is generated by the relations (12) $r_{f}(\alpha)=f_{m-1}(\alpha,\beta)=\begin{cases}\tilde{f}_{m-1}(\alpha\beta)&\text{if $m$ is odd};\\\ \tilde{f}_{m-1}(\alpha\beta)\alpha&\text{if $m$ is even},\end{cases}$ and (13) $r_{f}(\beta)=f_{m-1}(\beta,\alpha)=\begin{cases}\tilde{f}_{m-1}(\beta\alpha)&\text{if $m$ is odd};\\\ \tilde{f}_{m-1}(\beta\alpha)\beta&\text{if $m$ is even}.\end{cases}$ The maps $M_{\alpha}M_{\beta}$ and $M_{\beta}M_{\alpha}$ both equal $\lambda\cdot\operatorname{\mathrm{id}}$ as maps from $K$ to $K$. Since $\lambda$ is a root of $\tilde{f}_{m-1}$, it follows that $\tilde{f}_{m-1}(M_{\alpha}M_{\beta})=\tilde{f}_{m-1}(M_{\beta}M_{\alpha})=0$, hence $M(\lambda)$ satisfies the relations $r_{f}(\alpha)$ and $r_{f}(\beta)$ and forms a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. Note that $M(\lambda)$ is simple by basic linear algebra. To check that $M(\lambda)\not\cong M({\lambda^{\prime}})$ for distinct roots $\lambda,\lambda^{\prime}$ of $\tilde{f}_{m-1}$, let $M({\lambda^{\prime}})=(M^{\prime}_{a},M^{\prime}_{b},M^{\prime}_{\alpha},M^{\prime}_{\beta})$. Then an isomorphism $\phi:M(\lambda)\rightarrow M(\lambda^{\prime})$ must consist of two linear isomorphisms $\phi_{a}:M_{a}\rightarrow M^{\prime}_{a},\phi_{b}:M_{b}\rightarrow M^{\prime}_{b}$ such that $\phi_{b}M_{\alpha}=M^{\prime}_{\alpha}\phi_{a},\quad\phi_{a}M_{\beta}=M^{\prime}_{\beta}\phi_{b}.$ The isomorphisms $\phi_{a},\phi_{b}$ must be multiplication by nonzero scalars $x,y$, respectively, whence the above equations become $y=x$ and $\lambda y=\lambda^{\prime}x$. This cannot happen since $x\neq 0$ and $\lambda\neq\lambda^{\prime}$, therefore $M(\lambda)\not\cong M(\lambda^{\prime})$. (b) When $m$ is even the assignments $M_{\alpha}=M_{\beta}=0$ clearly satisfy the relations $r_{f}(\alpha)$ and $r_{f}(\beta)$, so $S(a)$ and $S(b)$ define representations in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. Moreover, the representations are simple and non-isomorphic by dimension considerations.
∎ To prove $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ is semisimple, we will use the following linear algebra facts to decompose every representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ into a direct sum of simple representations. ###### Lemma 3.12. Let $h\in K[x]$ be a polynomial with degree $k\geq 1$ and with $k$ distinct nonzero roots $z_{1},z_{2},\ldots,z_{k}$ in $K$. Let $U$ and $V$ be finite dimensional vector spaces, and let $A:U\to V$ and $B:V\to U$ be linear maps such that (14) $h(BA)=0_{U}\quad\quad\quad{\text{and}}\quad\quad\quad h(AB)=0_{V}$ or (15) $h(AB)A=0_{U}\quad\quad\quad{\text{and}}\quad\quad\quad h(BA)B=0_{V}.$ Then the following results hold. 1. (a) Both $AB$ and $BA$ are diagonalizable; their eigenvalues lie in the set $\\{z_{1},z_{2},\ldots,z_{k}\\}$ if (14) holds and in the set $\\{0,z_{1},z_{2},\ldots,z_{k}\\}$ if (15) holds. In particular, we have eigenspace decompositions $U=U_{z_{1}}\oplus U_{z_{2}}\oplus\ldots\oplus U_{z_{k}},\quad V=V_{z_{1}}\oplus V_{z_{2}}\oplus\ldots\oplus V_{z_{k}}$ if (14) holds and $U=U_{0}\oplus U_{z_{1}}\oplus U_{z_{2}}\oplus\ldots\oplus U_{z_{k}},\quad V=V_{0}\oplus V_{z_{1}}\oplus V_{z_{2}}\oplus\ldots\oplus V_{z_{k}}$ if (15) holds, where $U_{\lambda}$ and $V_{\lambda}$ denote the $\lambda$-eigenspace of $BA$ and $AB$ for each scalar $\lambda$, respectively. 2. (b) For all $1\leq i\leq k$, the restrictions of $A$ to $U_{z_{i}}$ and of ${z_{i}}^{-1}\cdot B$ to $V_{z_{i}}$ form mutually inverse isomorphisms. When (15) holds, the restrictions of $A$ to $U_{0}$ and of $B$ to $V_{0}$ are both zero maps. ###### Proof. (a) The equations in (14) and in (15) imply that the minimal polynomials of both $AB$ and $BA$ divide $h$ and the polynomial $g:=x\cdot h\in K[x]$, respectively. The result follows since the polynomials $h$ and $g$ have distinct roots in the sets $\\{z_{1},\dots,z_{k}\\}$ and $\\{0,z_{1},\dots,z_{k}\\}$, respectively. (b) Let $1\leq i\leq k$. Set $U_{i}=U_{z_{i}},V_{i}=V_{z_{i}}$ and $B^{\prime}=z_{i}^{-1}\cdot B$. Use $|$ to denote restriction of maps so that, for example, $A|_{U_{i}}$ stands for the restriction of $A$ to $U_{i}$. By direct computation, we have $Au\in V_{i}$ for all $u\in U_{i}$, $B^{\prime}v\in U_{i}$ for all $v\in V_{i}$, and $B^{\prime}A|_{U_{i}}=\operatorname{\mathrm{id}}_{U_{i}},AB^{\prime}|_{V_{i}}=\operatorname{\mathrm{id}}_{V_{i}}$. This proves the first claim. To prove the second claim, assume the equations in (15) hold and write $h=x\cdot\bar{h}+c$ where $c$ is the constant term of $h$. Then $c\neq 0$ since $0$ is not a root of $h$, and we have $h(AB)A=Ah(BA)=A(\bar{h}(BA)\cdot BA+c)=A\bar{h}(BA)\circ BA+cA.$ Let $u\in U_{0}$. Then $BA(u)=0$, therefore $0=[h(AB)A](u)=[A\bar{h}(BA)](BA(u))+cA(u)=cA(u),$ where the first equality holds since $h(AB)A=0_{U}$. It follows that $A(u)=0$, so $A|_{U_{0}}$ is the zero map. The proof that $B|_{V_{0}}$ is the zero map is similar. ∎
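Lemma 3.12 is easy to illustrate concretely. The following toy check is our own construction (assuming NumPy): it builds maps satisfying (15) for $h=(x-z_{1})(x-z_{2})$ and confirms the eigenvalue claim of part (a) and the kernel claim of part (b).

```python
import numpy as np

# Toy data: h = (x - z1)(x - z2) and maps A: U -> V, B: V -> U satisfying (15).
z1, z2 = 1.0, 2.0
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # U = K^3, V = K^2
B = np.array([[z1, 0.0],
              [0.0, z2],
              [0.0, 0.0]])        # the third basis vector of U spans U_0

def h(M):
    n = len(M)
    return (M - z1 * np.eye(n)) @ (M - z2 * np.eye(n))

assert np.allclose(h(A @ B) @ A, 0) and np.allclose(h(B @ A) @ B, 0)  # (15)

# Part (a): AB and BA are diagonalizable with eigenvalues in {0, z1, z2}.
for M in (A @ B, B @ A):
    for v in np.linalg.eigvals(M):
        assert min(abs(v - w) for w in (0.0, z1, z2)) < 1e-9

# Part (b): A kills U_0 (here, the span of the third standard basis vector).
assert np.allclose(A @ np.array([0.0, 0.0, 1.0]), 0)
```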
Then the category $\mathrm{rep}(Q,\mathcal{I}_{f})$ is semisimple and has exactly $(m-2)/2+2$ non-isomorphic simple representations; two of these representations have dimension 1 and the other representations have dimension 2. The algebra $KQ/\mathcal{I}_{f}$ is semisimple, and is isomorphic to the direct product of two copies of $K$ and $(m-2)/2$ copies of $M_{2\times 2}(K)$. ###### Proof. Let $M=(M_{a},M_{b},M_{\alpha},M_{\beta})$ be a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ where $\alpha$ and $\beta$ are the arrows $a\rightarrow b$ and $b\rightarrow a$ in $Q$, respectively. Set $h=\tilde{f}_{m-1},U=M_{a},V=M_{b},A=M_{\alpha}$ and $B=M_{\beta}$. If $m$ is odd, then the equations in (14) hold by Equations (12) and (13). Using Lemma 3.12, we may then decompose $M$ into a direct sum where each summand is of the form $N(\lambda):=(U_{\lambda},V_{\lambda},A|_{U_{\lambda}},B|_{V_{\lambda}})$ where $\lambda$ is one of the $(m-1)/2$ roots of $h=\tilde{f}_{m-1}$ and $B|_{V_{\lambda}}A|_{\lambda}=\lambda\cdot\operatorname{\mathrm{id}}_{U_{\lambda}},A|_{U_{\lambda}}B|_{V_{\lambda}}=\lambda\cdot\operatorname{\mathrm{id}}_{V_{\lambda}}$. It is easy to verify that $N(\lambda)$ is isomorphic to the representation $M(\lambda)$ from Lemma 3.11. The claims in Part (a) now follow from the Artin–Wedderburn theorem and the equivalence between $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ and mod-$KQ/\mathcal{I}_{f}$. Similarly, if $m$ is even, then the equations in (15) hold and we may use Lemma 3.12 to decompose $M$ into a direct sum of the representations $\ker\alpha:=(U_{0},0,0,0),\quad\ker\beta:=(0,V_{0},0,0)$ and simple representations isomorphic to $M(\lambda)$ where $\lambda$ is one of the $(m-2)/2$ roots of $\tilde{f}_{m-1}$. The represenations $\ker\alpha$ and $\ker\beta$ further decompose into $\dim(U_{0})$ and $\dim(V_{0})$ copies of the modules $S(a)$ and $S(b)$ from Lemma 3.11, respectively. Part (b) follows. ∎ We are ready to prove that in the case $\lvert S\rvert=2$, the isomorphism type of the algebra $KQ/\mathcal{I}_{f}$ does not depend on the choice of the uniform family of polynomials $\\{f_{n}\\}$. Note that we no longer assume that $m$ is finite in the result below. ###### Corollary 3.14. Let $(W,S)$ be an irreducible Coxeter system where $S=\\{a,b\\}$ and $3\leq m:=m(a,b)\leq\infty$. Let $\\{f_{n}\\},\\{g_{n}\\}$ be two uniform families of polynomials over $K$. Then there is an algebra isomorphism $\Phi:KQ/\mathcal{I}_{f}\rightarrow KQ/\mathcal{I}_{g}$ such that $\Phi(e_{i})=e_{i}$ for $i\in\\{a,b\\}$. ###### Proof. When $m=m(a,b)=\infty$, we have $\mathcal{I}_{f}=\mathcal{I}_{g}=0$, so we may take $\phi$ to be the identity map. Now assume $m$ is finite. We first treat the case where $m$ is even. Consider the direct product $B=B_{1}\times B_{2}\times B_{3}\times\dots\times B_{r}$ where $r=(m-2)/2+2$, $B_{1}=B_{2}=K$, and $B_{i}$ equals the matrix algebra $M_{2\times 2}(K)$ for all $3\leq i\leq r$. By Theorem 3.13, there exist algebra isomorphisms $\phi:KQ/\mathcal{I}_{f}\rightarrow B$ and $\psi:KQ/\mathcal{I}_{g}\rightarrow B$. Let $x=\phi(e_{1}),y=\phi(e_{2})$ and write $x=(x_{1},\dots,x_{r}),y=(y_{1},\dots,y_{r})$. Then we have: 1. (a) Since $e_{1},e_{2}$ are idempotents, $x,y$ must be idempotents, therefore $x_{i},y_{i}$ are idempotents in $B_{i}$ for all $1\leq i\leq r$. 
This implies that $x_{1},x_{2},y_{1},y_{2}\in\\{0,1\\}$ and that for all $3\leq i\leq r$, $x_{i},y_{i}$ must be each conjugate to the zero matrix, the identity matrix, or the matrix $E_{11}:=\begin{bmatrix}1&0\\\ 0&0\end{bmatrix}$. 2. (b) Since $1=e_{1}+e_{2}$ in $KQ/\mathcal{I}_{f}$, we must have $x+y=1$ and hence $x_{i}+y_{i}=1$ for all $1\leq i\leq r$. 3. (c) Since $\phi$ is an isomorphism, we have $\dim(xBx)=\dim(\phi(e_{1})B\phi(e_{1}))=\dim(e_{1}(KQ/\mathcal{I}_{f})e_{1}).$ Here, we have $\dim(xBx)=\sum_{i=1}^{r}\dim(x_{i}B_{i}x_{i})$ in the direct product $B$. We also have $\dim(e_{1}(KQ/\mathcal{I}_{f})e_{1})=r-1$ because it is easy to see that the classes of the elements $e_{1},\alpha\beta,(\alpha\beta)^{2},\dots,(\alpha\beta)^{r-2}$ form a basis of $e_{1}(KQ/\mathcal{I}_{f})e_{1}$. It follows that $\sum_{i=1}^{r}\dim(x_{i}B_{i}x_{i})=r-1$. Similarly, we must have $\sum_{i=1}^{r}\dim(y_{i}B_{i}y_{i})=r-1$. Notice that for all $1\leq i\leq r$, the dimensions of $x_{i}B_{i}x_{i}$ and $y_{i}B_{i}y_{i}$ depend only on the conjugacy classes of the idempotents $x_{i},y_{i}$, respectively. By straightforward dimension considerations, the above three facts force that $x_{i},y_{i}$ are conjugate to $E_{11}$ for all $3\leq i\leq r$ and that we either have $x_{1}=0,x_{2}=1,y_{1}=1,y_{2}=0$ or have $x_{1}=1,x_{2}=0,y_{1}=0,y_{2}=1$. Similarly, the same conclusions apply to the coordinates $x^{\prime}_{i},y^{\prime}_{i}$ of the elements $x^{\prime}=(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3},\dots,x^{\prime}_{r})=\psi(e_{1})$ and $y^{\prime}=(y^{\prime}_{1},y^{\prime}_{2},y^{\prime}_{3},\dots,y^{\prime}_{r})=\psi(e_{2})$. Thus, we either have $x\sim x^{\prime},y\sim y^{\prime}$ or have $x\sim y^{\prime},y\sim x^{\prime}$ where $\sim$ means two elements are conjugate. In both cases, it is easy to find an automorphism $\eta$ of $B$ such that the map $\Phi:=\psi^{-1}\eta\phi:KQ/\mathcal{I}_{f}\rightarrow KQ/\mathcal{I}_{g}$ is an isomorphism sending $e_{i}$ to $e_{i}$ for $i\in\\{1,2\\}$. This proves the corollary in the case where $m$ is even. The proof for the case where $m$ is odd is similar but simpler: let $B=B_{1}\times\dots\times B_{r}$ where $r=(m-1)/2$ and $B_{i}=M_{2\times 2}(K)$ for all $1\leq i\leq r$, then consider the isomorphisms $\phi,\psi$ guaranteed by Theorem 3.13 as before. This time, facts similar to (a), (b), (c) will force all coordinates of $\phi(e_{1}),\phi(e_{2}),\psi(e_{1}),\psi(e_{2})$ to be conjugate to $E_{11}$, allowing us to form an isomorphism $\Phi$ with the desired properties as before. ∎ ### 3.4. Proof of Theorem 3.7: General Case We now prove Theorem 3.7 for a general Coxeter system $(W,S)$. The rough idea is to notice that each edge in the Coxeter diagram corresponds to a dihedral system, so we can take the “local” isomorphisms provided by Corollary 3.14 and then assemble them to a “global” isomorphism between the quotients of $KQ$. ###### Proof of Theorem 3.7. Let $E$ be the set of edges of the Coxeter diagram of $(W,S)$. For each $e\in E$ of the form $a-b$, let $\alpha:a\rightarrow b$ and $\beta:b\rightarrow a$ be the dual arrows arising from $e$ in $Q$ and consider the subquiver $Q_{e}=(\\{a,b\\},\\{\alpha,\beta\\})$ of $Q$. Let $\mathcal{I}_{f}(e),\mathcal{I}_{g}(e)$ be the evaluation ideals of $\\{f_{n}\\}$ and $\\{g_{n}\\}$ in $Q_{e}$, respectively. Fix an isomorphism $\Phi_{e}:KQ_{e}/\mathcal{I}_{f}(e)\rightarrow KQ_{e}/\mathcal{I}_{g}(e)$ such that $\Phi_{e}(e_{a})=e_{a},\Phi_{e}(e_{b})=e_{b}$ for $e$; such an isomorphism exists by Corollary 3.14. 
Note that $KQ_{e}/\mathcal{I}_{g}(e)$ naturally embeds into $KQ/\mathcal{I}_{g}$, so we can naturally view an element of $KQ_{e}/\mathcal{I}_{g}(e)$ as an element in $KQ/\mathcal{I}_{g}$. We will do so without further comment. Let $Q^{\leq 1}=\\{e_{a}:a\in Q_{0}\\}\cup Q_{1}$ be the set of stationary paths and arrows of $Q$. Consider the function $\phi:Q^{\leq 1}\rightarrow KQ/\mathcal{I}_{g}$ such that for every edge $e=\\{a,b\\}$ in $G$ and the arrows $\alpha:a\rightarrow b,\beta:b\rightarrow a$ in $Q$, we have $\phi(e_{a})=\Phi_{e}(e_{a}),\;\phi(e_{b})=\Phi_{e}(e_{b}),\;\phi(\alpha)=\Phi_{e}(\alpha),\;\phi(\beta)=\Phi_{e}(\beta).$ This function is well-defined because even if a vertex $a$ in $G$ is incident to two distinct edges $e,e^{\prime}$ in $G$, the maps $\Phi_{e}$ and $\Phi_{e^{\prime}}$ both send $e_{a}$ to $e_{a}$, causing no ambiguity for the value of $\phi(e_{a})$. Next, recall again that the path algebra $KQ$ is generated by $Q^{\leq 1}$ subject only to the relations that $e_{u}e_{v}=\delta_{u,v}e_{u}$ for $u,v\in Q_{0}$ and the relations $e_{a}\alpha=\alpha=\alpha e_{b}$ for each arrow $\alpha:a\rightarrow b$ in $Q_{1}$, and note that the map $\phi$ respects these relations: we have $\phi(e_{u})\phi(e_{v})=e_{u}e_{v}=e_{u}e_{v}=\delta_{u,v}e_{u}=\delta_{u,v}\phi(e_{u}),$ $\phi(e_{a})\phi(\alpha)=\Phi_{e}(e_{a})\Phi_{e}(\alpha)=\Phi_{e}(e_{a}\alpha)=\Phi(\alpha)=\phi(\alpha)$, and similarly $\phi(\alpha)\phi(e_{b})=\phi(\alpha)$. It follows that $\phi$ extends to a unique homomorphism $\Phi:KQ\rightarrow KQ/\mathcal{I}_{g}$ with $\Phi(x)=\phi(x)$ for all $Q^{\leq 1}$. Finally, for each edge $e:a-b$ in $Q$ and the corresponding arrows $\alpha:a\rightarrow b,\beta:b\rightarrow a$, the restriction of $\Phi$ to $KQ_{e}$ agrees with $\Phi_{e}$, therefore $\Phi$ sends both $r_{f}(\alpha)$ and $r_{f}(\beta)$ to zero because $\Phi_{e}$ does. It follows that $\Phi$ factors through $\mathcal{I}_{f}$ to induce a homomorphism $\bar{\Phi}:KQ/\mathcal{I}_{f}\rightarrow KQ/\mathcal{I}_{g}$. Starting from the collection $\\{\Psi_{e}:e\in E\\}$ where $\Psi_{e}=\Phi_{e}^{-1}$ for all $e\in E$, we may obtain in the same way a homomorphism $\bar{\Psi}:KQ/\mathcal{I}_{g}\rightarrow KQ/\mathcal{I}_{f}$, and it is clear that $\bar{\Psi}$ and $\bar{\Phi}$ are mutual inverses, therefore $KQ/\mathcal{I}_{f}\cong KQ/\mathcal{I}_{g}$. ∎ ## 4\. Quiver Contractions Let $J_{C}$ be the subregular $J$-ring of an arbitrary Coxeter system $(W,S)$, let $K$ be an algebraically closed field of characteristic zero, and let $A=A_{K}=K\otimes_{\mathbb{Z}}J_{C}$. Let mod-$A$ be the category of finite dimensional right $A$-modules. The rest of the paper is dedicated to the study of mod-$A$. In this section we introduce a procedure to modify a quiver $Q$ to a new quiver $\bar{Q}$ such that the algebra $K\bar{Q}/\bar{\mathcal{I}}_{f}$ is Morita equivalent to $KQ/\mathcal{I}_{f}$ for any uniform family of polynomials $\\{f_{n}\\}$ over $K$, where $\bar{\mathcal{I}}_{f}$ is the evaluation ideal of $\\{f_{n}\\}$ in $K\bar{Q}$. We call the procedure a _quiver contraction_. In applications, we will often iterate contractions to obtain sequences of the form (16) $Q^{(0)}:=Q\rightarrow Q^{(1)}\rightarrow\dots\rightarrow Q^{(n)}$ where $Q$ is the double quiver of $(W,S)$. Denote the evaluation ideal of $\\{f_{n}\\}$ in $KQ^{(i)}$ by $\mathcal{I}_{f}^{(i)}$ for each $i$. 
Then $A\cong KQ/\mathcal{I}_{f}$ by Theorems 3.6 and 3.7, therefore mod-$A$ is equivalent to $\mathrm{rep}_{K}(Q^{(i)},\mathcal{I}^{(i)}_{f})$ for all $0\leq i\leq n$. For this reason, we shall develop tools for studying the last type of category in this section to prepare for the study of mod-$A$ in Section § 5. ### 4.1. Definition of Quiver Contractions Consider the following generalization of double quivers of Coxeter systems: ###### Definition 4.1. A _generalized double quiver_ is a triple $(Q,d,m)$ consisting of a quiver $Q=(Q_{0},Q_{1})$, a map $d:Q_{1}\rightarrow Q_{1}$, and a map $m:Q_{1}\rightarrow\mathbb{Z}_{\geq 1}\cup\\{\infty\\}$ such that 1. (a) $d(Q_{ab})=Q_{ba}$ for all $a,b\in Q_{0}$, where $Q_{c,d}$ denotes the set of all arrows in $Q$ from $c$ to $d$ for all $c,d\in Q_{0}$. 2. (b) $d^{2}(\alpha)=\alpha$ for all $\alpha\in Q_{1}$; 3. (c) $m(\alpha)=m({d(\alpha)})$ for all $\alpha\in Q_{1}$. Given such a triple, we also call $Q$ a _generalized double quiver_. We say that two arrows $\alpha,\beta\in Q_{1}$ are _dual_ to each other if $\beta=d(\alpha)$, and call $m(\alpha)$ the _weight_ of $\alpha$ for all $\alpha\in Q_{1}$. Note that we may (and will) naturally view the double quiver $Q$ of a Coxeter system as a generalized double quiver by setting $m(\alpha)=m_{\alpha}$ and $d(\alpha)=\bar{\alpha}$ for all $\alpha\in Q_{1}$. We now define quiver contractions as operations on generalized double quivers $(Q,d,m)$. Roughly speaking, we will define a contraction along a suitable pair of arrows $\alpha:a\rightarrow b$ and $\beta:b\rightarrow a$ where $a,b$ are distinct vertices in $Q$. The contraction will identify $a,b$ by collapsing them into a new vertex $v_{ab}$, replace $\alpha,\beta$ by a loop at $v_{ab}$, and reroute all other arrows incident to $a$ or $b$ in $Q$ to new arrows incident to $v_{ab}$. The assignments of duals and weights of arrows in the new quiver will be naturally inherited from $d$ and $m$. ###### Definition 4.2. Let $(Q,d,m)$ be a generalized double quiver. A pair of arrows $\\{\alpha,\beta\\}$ is called _contractible_ if they are of the form $\alpha:a\rightarrow b,\beta:b\rightarrow a$ where $a,b$ are distinct vertices in $Q$, $\beta=d(\alpha)$, and $m(\alpha)=m(\beta)$ is an odd integer that is at least 3. For a contractible pair of arrows $\\{\alpha:a\rightarrow b,\beta:b\rightarrow a\\}$, the _contraction of $(Q,d,m)$ along $\\{\alpha,\beta\\}$_ is the generalized double quiver $(\bar{Q},\bar{d},\bar{m})$ where 1. (a) The vertex set of the quiver $\bar{Q}$ is $\bar{Q}_{0}:=Q_{0}\setminus\\{a,b\\}\sqcup\\{v_{ab}\\},$ where $v_{ab}\notin Q_{0}$ is a newly introduced vertex. The arrow set $\bar{Q}_{1}$ of $\bar{Q}$ is defined as follows: write $c^{\prime}:=\begin{cases}v_{ab}&\text{ if $c\in\\{a,b\\}$};\\\ c&\text{ otherwise}\end{cases}$ for all $c\in Q_{0}$ and define $\gamma^{\prime}$ to be the arrow $u^{\prime}\rightarrow v^{\prime}$ for each arrow $\gamma:u\rightarrow v$ in $Q_{1}$, then let $\bar{Q}_{1}:=\\{\gamma^{\prime}:\gamma\in Q_{1}\setminus\\{\alpha,\beta\\}\\}\sqcup\\{\varepsilon_{ab}\\},$ where $\varepsilon_{ab}\notin Q_{1}$ is a newly introduced loop at $v_{ab}$. 2. (b) $\bar{d}$ is defined by $\bar{d}(\varepsilon_{ab})=\varepsilon_{ab}$ and $\bar{d}(\gamma^{\prime})=d(\gamma)^{\prime}$ for all $\gamma\in Q_{1}\setminus\\{\alpha,\beta\\}$. 3. (c) $\bar{m}$ is defined by $\bar{m}(\varepsilon_{ab})=m(\alpha)$ and $\bar{m}(\gamma^{\prime})=m(\gamma)$ for all $\gamma\in Q_{1}\setminus\\{\alpha,\beta\\}$. 
Note that quiver contractions introduce loops and may lead to multiple pairs of arrows between two distinct vertices in the resulting quiver (see the quiver $Q^{(2)}$ in Example 4.10), features that cannot be present in double quivers of Coxeter diagrams. This is the reason why we do not forbid these features in Definition 4.3. On the other hand, the generalization from double quivers to generalized ones is mild enough that we can extend the definition of evaluation ideals easily, at least for the cases we are interested in: ###### Definition 4.3. Let $\\{f_{n}:n\geq 2\\}$ be a uniform family of polynomials over $K$, and let $(Q,d,m)$ be a generalized double quiver. We define the _evaluation ideal_ of $\\{f_{n}\\}$ in $KQ$ to be the two-sided ideal $\mathcal{I}_{f}\subseteq KQ$ given by $\mathcal{I}_{f}:=\langle r_{f}(\alpha):\alpha\in Q_{1}\rangle$ where $r_{f}(\alpha)=\begin{cases}0&\text{if $m=\infty$};\\\ f_{m-1}(\alpha,d(\alpha))&\text{if $m<\infty$ and $d(\alpha)\neq\alpha$};\\\ \tilde{f}_{m-1}(\alpha)&\text{if $m<\infty$ and $d(\alpha)=\alpha$},\end{cases}$ where $m=m(\alpha)$. Here, as in Equation (4), the evaluation of $\alpha$ through a constant term $c$ in $\tilde{f}_{m-1}(\varepsilon)$ returns $ce_{a}$ for $a=\operatorname{\mathrm{source}}(\alpha)$. For example, if $\alpha$ is a self-dual loop $\varepsilon:a\rightarrow a$ with $m=5$, then $r_{f}(\alpha)=\varepsilon^{2}-e_{a}$ if $f_{4}=x^{4}-1$ and $r_{f}(\alpha)=2\varepsilon^{2}-\varepsilon-3e_{a}$ if $f_{4}=2x^{4}-x^{2}-3$. ###### Remark 4.4. In this paper we are only interested in generalized double quivers $\bar{Q}$ obtained from the double quiver of a Coxeter diagram via iterated contractions. In this case, every self-dual arrow $\alpha$ in $\bar{Q}$ must be either a loop of the form $\varepsilon=\varepsilon_{ab}$ at a vertex $v=v_{ab}$ introduced during a contraction of a quiver $Q$ along a dual pair of arrows $\gamma:a\rightarrow b,\delta:b\rightarrow a$ or a reroute of such a loop. In particular, $m$ must be a finite, odd integer, so the third case in the definition of $r_{f}(\alpha)$ applies and gives $r_{f}(\alpha)=\tilde{f}_{m-1}(\varepsilon)$. The relation $r_{f}(\alpha)=\tilde{f}_{m-1}(\varepsilon)$ in $K\bar{Q}$ mirrors the relation $r_{f}(\gamma)=\tilde{f}_{m-1}(\gamma\delta)$ in the evaluation ideal $\mathcal{I}_{f}$ of $KQ$ via the replacements $\gamma\delta\mapsto\varepsilon$ and $a\mapsto v$. For example, if $m=5$ and $f_{4}=x^{4}-1$, then the relation $r_{f}(\gamma)=\tilde{f}_{4}(\gamma\delta)=(\gamma\delta)^{2}-e_{a}\in\mathcal{I}_{f}$ is mirrored by the relation $r_{f}(\varepsilon)=\tilde{f}_{4}(\varepsilon)=\varepsilon^{2}-e_{v}$. Our main result on contractions is the following theorem. ###### Theorem 4.5. Let $(Q,d,m)$ be a generalized double quiver and let $(\bar{Q},\bar{d},\bar{m})$ be a contraction of $(Q,d,m)$ along a contractible pair of arrows $\\{\alpha,\beta\\}$. Let $\\{f_{n}:n\geq 2\\}$ be a uniform family of polynomials over $K$, and let $\mathcal{I}_{f}$ and $\bar{\mathcal{I}}_{f}$ be the evaluation ideal of $\\{f_{n}\\}$ in $KQ$ and $K\bar{Q}$, respectively. Then the algebras $KQ/\mathcal{I}_{f}$ and $K\bar{Q}/\bar{\mathcal{I}}_{f}$ are Morita equivalent. We postpone the proof of the theorem to § 4.3. Before the proof, we discuss several detailed examples of quiver contractions and some consequences of the theorem in the next subsection. ### 4.2. 
Examples of Quiver Contractions Throughout this subsection, $G$ denotes the Coxeter diagram of a Coxeter system $(W,S)$ and $Q$ stands for the double quiver of $G$. When drawing generalized double quivers, we label each pair of dual arrows $\\{\alpha,d(\alpha)\\}$ with their common weight $m(\alpha)$ except when $m(\alpha)=3$, including for the case where $\alpha$ is a self-dual loop of the form $\varepsilon_{ab}$ introduced by a contraction. For convenience, we consider only the polynomials $\\{f_{n}:n\geq 2\\}$ where (17) $f_{n}=\begin{cases}x^{n}-1&\text{if $n$ is even};\\\ x^{n}-x&\text{if $n$ is odd}.\end{cases}$ for all $n\geq 2$. Note that $\\{f_{n}\\}$ is a uniform family over $K$ since $K$ is algebraically closed (see Remark 4.17). ###### Example 4.6. Suppose that $G$ and $Q$ are as shown at the top of Figure 1. The arrows $\alpha$ and $\beta$ are dual to each other in $Q$ and have weight $3$, therefore they form a contractible pair. The contraction along $\\{\alpha,\beta\\}$ results in the quiver $\bar{Q}$ shown in the bottom right corner of the figure, where $v=v_{ab}$ and $\varepsilon=\varepsilon_{ab}$. $G:$$a$$b$$c$$d$54$\longrightarrow$$Q:$$a$$b$$c$$d$45$\alpha$$\beta$$\gamma$$\delta$$\zeta$$\eta$$\kappa$$\lambda$$\downarrow$$\downarrow$$\bar{G}:$$v$$c$$d$54$\longrightarrow$$\bar{Q}:$$v$$c$$d$45$\varepsilon$$\gamma^{\prime}$$\delta^{\prime}$$\zeta$$\eta$$\kappa^{\prime}$$\lambda^{\prime}$ Figure 1. Let us examine the effect of the contraction. The three pairs of arrows $\\{\gamma,\delta\\},\\{\eta,\zeta\\}$ and $\\{\kappa,\lambda\\}$ in $Q$ have weights $3,5,4$ and give rise to the elements rl r_1=γδ-e_b, & r_2=δγ-e_c r_3=(ζη)^2-e_c, r_4=(ηζ)^2-e_d, r_5=κλκ-κ, r_6=λκλ-λ of the evaluation ideal $\mathcal{I}_{f}$. The contraction reroutes the arrows $\gamma,\delta,\kappa,\lambda$ since they are incident to $a$ or $b$, but the rerouting preserves weights by definition, so the rerouted arrows give rise to “duplicates” of the relations $r_{1},r_{2},r_{5},r_{6}$ in the ideal $\bar{\mathcal{I}}_{f}$, namely, the relations $r_{1}^{\prime}=\gamma^{\prime}\delta^{\prime}-e_{v},\quad r_{2}^{\prime}=\delta^{\prime}\gamma^{\prime}-e_{c},\quad r_{5}^{\prime}=\kappa^{\prime}\lambda^{\prime}\kappa^{\prime}-\kappa^{\prime},\quad r_{6}^{\prime}=\lambda^{\prime}\kappa^{\prime}\lambda^{\prime}-\lambda^{\prime}.$ The arrows $\zeta$ and $\eta$ and their weights remain unchanged in the contraction since they are not incident to $a$ or $b$. Consequently, they contribute the same relations $r_{3}$ and $r_{4}$ to $\bar{\mathcal{I}}_{f}$ just as they do to $\mathcal{I}_{f}$. Finally, the arrows $\alpha,\beta$ are replaced by a single loop at $v$. Since $m:=m(\alpha)=m(\beta)$ is odd, $\alpha$ and $\beta$ contribute the relations $r_{f}(\alpha)=(\alpha\beta)^{k}-e_{a},\quad r_{f}(\beta)=(\beta\alpha)^{k}-e_{b}$ to $\mathcal{I}_{f}$, and their replacement $\varepsilon$ contributes a single relation $r_{f}(\varepsilon)=r_{f}(d(\varepsilon))=\varepsilon^{k}-e_{v}$ to $\bar{\mathcal{I}}_{f}$; here, we have $m=3$ and $k=(m-1)/2=1$. By Theorem 4.5, the algebra $A$ is Morita equivalent to the quotient $K\bar{Q}/\bar{\mathcal{I}}_{f}$ where $\bar{\mathcal{I}}_{f}=\langle r_{1}^{\prime},r_{2}^{\prime},r_{3}^{\prime}=r_{3},r_{4}^{\prime}=r_{4},r_{5}^{\prime},r_{6}^{\prime},r_{f}(\varepsilon)\rangle$. 
The relation $r_{f}(\varepsilon)=\varepsilon-e_{v}$ implies that $\varepsilon=e_{v}$ in the quotient, therefore the quotient is isomorphic to the quotient $K\hat{Q}/\hat{\mathcal{I}}_{f}$ where $\hat{Q}$ is obtained from $\bar{Q}$ by removing the loop $\varepsilon$ and $\hat{\mathcal{I}}_{f}=\langle r_{i}:1\leq i\leq 6\rangle\subseteq K\hat{Q}$. More generally, we define a quiver contraction with respect to a pair of arrows $\\{\alpha,\beta\\}$ to be _simple_ if $m(\alpha)=m(\beta)=3$. By the above discussion, Theorem 4.5 remains true for a simple contraction if we omit the loop $\varepsilon$ in the construction of $\bar{Q}$. We shall do so from now on. For a double quiver $Q$ of a Coxeter diagram $G$, a contraction of $Q$ along a pair of arrows $\\{\alpha:a\rightarrow b,\beta:b\rightarrow a\\}$ is simple if and only if the corresponding edge $a-b$ in $G$ is simple. When this is the case, we may define a simple contraction of $G$ by “contracting” the edge $a-b$ until $a,b$ are identified as a new vertex $v:=v_{ab}$, thus effectively rerouting all edges incident to $a$ or $b$ to $v$. More precisely, we may define a weighted graph $\bar{G}$ whose vertex set is $S\setminus\\{a,b\\}\sqcup\\{v\\}$ where $v\notin S$ and whose edge set is $\\{e^{\prime}:e\text{ is an edge in $G$ other than $a-b$}\\}$, where $e^{\prime}=e$ if $e$ is not incident to $a$ or $b$ and $e^{\prime}$ is the edge $v-c$ whenever $e$ is of the form $a-c$ or $b-c$ for a vertex $c\in S\setminus\\{a,b\\}$; the weight $m(e^{\prime})$ of $e^{\prime}$ is defined to be the same as that of $e$. We call $\bar{G}$ the _simple contraction of $G$ along $a-b$_. For the above example, the contraction of $G$ along $a-b$ results in the graph $\bar{G}$ shown in the lower left corner of Figure 1. Note that if $a,b$ share no neighbor in $G$, which is the case for our example and must be the case if $G$ has no cycles, then the graph $\bar{G}$ can again be viewed as the Coxeter diagram of a Coxeter system $(\bar{W},\bar{S})$. In this case, the double quiver of $\bar{G}$ makes sense, and it is clear that the double quiver of the simple contraction $\bar{G}$ of $G$ coincides with the simple contraction $\bar{Q}$ of the double quiver $Q$ of $G$ (once we ignore the loop $\varepsilon=\varepsilon_{ab}$). This phenomenon is manifest in our example: the diagram in Figure 1 commutes once we ignore the loop $\varepsilon$. ###### Remark 4.7. Maintain the assumptions and notation of the previous paragraph. In particular, assume that $a-b$ is a simple edge in $G$ where $a,b$ have no common neighbor. Then Theorem 4.5 implies that the algebra $A=K\otimes_{\mathbb{Z}}J_{C}$ associated to the Coxeter system $(W,S)$ is Morita equivalent to the algebra $\bar{A}:=K\otimes_{\mathbb{Z}}\bar{J}_{C}$ associated to the system $(\bar{W},\bar{S})$. Note that we may state the equivalence purely in terms of contractions of Coxeter diagrams, with no reference to quivers. Also note that the equivalence implies that when we study the category mod-$A$, we may assume $G$ has no simple edges whenever $G$ is a tree. The reason is that, since no two vertices can share a neighbor in a tree, we may repeatedly remove all simple edges in $G$ by simple contractions. The following example illustrates the reduction allowed by Remark 4.7. After the example, we record an application of the reduction for future use. ###### Example 4.8. Suppose that $G$ is the tree shown on the left in Figure 2. 
By Remark 4.7, the algebra $A$ associated to $(W,S)$ is Morita equivalent to the algebra $A^{\prime}:=K\otimes J^{\prime}_{C}$ where $J^{\prime}_{C}$ is the subregular $J$-ring of the Coxeter system whose diagram is the graph $G^{\prime}$ obtained by contracting all simple edges in $G$; the graph $G^{\prime}$ is shown on the right on Figure 2. $G:$$a$$b$$c$$f$$g$$d$$e$$\rightarrow$$G^{\prime}:$$x$$y$$z$$w$574547 Figure 2. ###### Proposition 4.9. Let $(W,S)$ be a Coxeter system whose Coxeter graph $G$ is a tree, has no edge with infinite weight, and has at most one heavy edge. Then the category mod-$A$ associated to $(W,S)$ is semisimple. ###### Proof. Let $e$ be an edge of maximal weight in $G$. In other words, let $e$ be the unique heavy edge if $G$ has one, and let $e$ be any simple edge in $G$ otherwise. By contracting all edges different from $e$ in $G$ if necessary, we may assume that $e$ is the only edge in $G$ and hence $\lvert S\rvert=2$. Theorem 3.13 then implies that mod-$A$ is semisimple. ∎ The next two examples involve iterated quiver contractions. ###### Example 4.10. Suppose that $G=C_{n}(m)$ is a cycle with at most one heavy edge, where $n$ is the number of vertices in $G$ and $m$ is the maximal edge weight. In other words, suppose that $G$ has $n$ vertices $v_{1},v_{2},\dots,v_{n}$ for some $n\geq 3$ and has exactly $n$ edges $v_{1}-v_{2},v_{2}-v_{3},\dots,v_{n-1}-v_{n},v_{n}-v_{1}$, then assume, without loss of generality, that $G$ has edge weights $m(v_{2},v_{3})=m(v_{3},v_{4})=\dots=m(v_{n-1},v_{n})=m(v_{n},v_{1})=3$ and $m(v_{1},v_{2})=m\geq 3$. Note that when $m=3$, the Coxeter group arising from the Coxeter diagram $C_{n}(m)=C_{n}(3)$ is exactly the affine Weyl group of type $\tilde{A}_{n-1}$ for all $n\geq 3$. By repeated simple contractions of Coxeter diagrams, we may reduce $G$ to a triangle whose double quiver is the quiver $Q^{(1)}$ shown on the left of Figure 3. Contracting $Q^{(1)}$ along the arrows $\alpha_{3},\beta_{3}$ results in the quiver $Q^{(2)}$ in the same figure, where $a=v_{v_{1}v_{3}}$ and the loop $\varepsilon_{v}$ is omitted as usual. Furthermore, since $m(\alpha_{2}^{\prime})=m(\alpha_{2})=3$, we may further contract $Q^{(2)}$ along the arrows $\alpha_{2}^{\prime},\beta_{2}^{\prime}$ to obtain the quiver $Q^{(3)}$, where $b=v_{av_{2}}$, the loop $\varepsilon_{b}$ is omitted, and the arrows $\alpha_{1}^{\prime\prime},\beta_{1}^{\prime\prime}$ are dual to each other and have weight $m$. Note that as the last contraction demonstrates, given a contractible pair of arrows $\alpha:a\rightarrow b,\beta:b\rightarrow a$ in a generalized double quiver $Q$, every pair of dual arrows of the form $\gamma:a\rightarrow b,\delta:b\rightarrow a$ where $\gamma\neq\alpha$ is rerouted to a pair of distinct loops $\gamma^{\prime},\delta^{\prime}$ at $v_{ab}$ which are dual to each other and have the same weight as $\gamma$ and $\delta$. $Q^{(1)}:$$v_{1}$$v_{2}$$v_{3}$$\rightarrow$$Q^{(2)}:$$a$$v_{2}$$\rightarrow$$Q^{(3)}:$$b$$m$$m$$m$$m$$\alpha_{1}$$\beta_{1}$$\alpha_{2}$$\beta_{2}$$\alpha_{3}$$\beta_{3}$$\beta_{2}^{\prime}$$\alpha_{2}^{\prime}$$\alpha_{1}^{\prime}$$\beta_{1}^{\prime}$$\alpha_{1}^{\prime\prime}$$\beta_{1}^{\prime\prime}$ Figure 3. By Theorem 4.5, the algebra $A$ associated to the Coxeter system whose Coxeter diagram is the cycle $G$ is Morita equivalent to the algebra $\bar{A}:=KQ^{(3)}/\mathcal{I}^{(3)}_{f}$ where $\mathcal{I}^{(3)}_{f}$ is the evaluation ideal of $\\{f_{n}\\}$ in $KQ^{(3)}$. 
The path algebra $KQ^{(3)}$ is isomorphic to the free unital associative algebra $K\langle x,y\rangle$ on two variables via the identification $e_{b}\mapsto 1,\alpha_{1}^{\prime\prime}\mapsto x,\beta_{1}^{\prime\prime}\mapsto y$, and $\mathcal{I}^{(3)}_{f}$ is given by $\mathcal{I}^{(3)}_{f}:=\langle(\alpha_{1}^{\prime\prime}\beta_{1}^{\prime\prime})^{k}-e_{b},(\beta_{1}^{\prime\prime}\alpha_{1}^{\prime\prime})^{k}-e_{b}\rangle$ where $k=(m-1)/2$, so $\bar{A}$ is isomorphic to the algebra (18) $T_{k}:=K\langle x,y\rangle/\langle(xy)^{k}=(yx)^{k}=1\rangle.$ In particular, if $m=3$, then $k=1$ and $A$ is isomorphic to the Laurent polynomial algebra $K[t,t^{-1}]$ via the identification $e_{b}\mapsto 1,\alpha_{1}^{\prime\prime}\mapsto t,\beta_{1}^{\prime\prime}\mapsto t^{-1}$. To summarize, we have just proved the following result. ###### Proposition 4.11. Let $n\geq 3$. Let $m\geq 3$ be an odd integer, let $k=(m-1)/2$, and let $(W,S)$ be the Coxeter system with Coxeter diagram $G=C_{n}(m)$. Then the algebra $A$ associated to $(W,S)$ is Morita equivalent to the algebra $T_{k}$ defined by Equation (18). In particular, the Morita equivalence class of $A$ does not depend on the value of $n$, and if $m=3$, i.e., if $(W,S)$ is of type $\tilde{A}_{n-1}$, then $A$ is Morita equivalent to $K[t,t^{-1}]$. ###### Example 4.12. Apart from the algebras of the form $T_{k}$ from Equation (18), group algebras of free products of finite cyclic groups can also be realized as the algebras of the form $A$ associated to Coxeter systems up to Morita equivalence. To see this, let $A_{k}$ be the $K$-algebra given by the presentation $A_{k}:=\langle x:x^{k}=1\rangle$ for each integer $k>1$, and let $A_{\mathbf{k}}=\langle x_{j}:1\leq j\leq n,x_{j}^{k_{j}}=1\rangle$ for each tuple $\mathbf{k}=(k_{1},\dots,k_{n})$ where $n\geq 1$ and $k_{i}\in\mathbb{Z}_{\geq 1}$ for all $1\leq i\leq n$. Then $A_{k}$ is isomorphic to the group algebra of the cyclic group $C_{k}$ of order $k$, and $A_{\mathbf{k}}$ is isomorphic to the group algebra of the free product $C_{\mathbf{k}}:=C_{k_{1}}*\dots*C_{k_{n}}$ of $C_{k_{1}},\dots,C_{k_{n}}$. For each tuple $\mathbf{k}=(k_{1},\dots,k_{n})$, let $(W,S)$ be the Coxeter system where $S=\\{0,1,2,\dots,n\\}$, $m_{j}:=m(0,j)=2k_{j}+1$ for each $1\leq j\leq n$, and $m(i,j)=2$ for all $1\leq i<j\leq n$. Let $Q$ be the double quiver of $(W,S)$, and for each $1\leq j\leq n$ denote the arrows $0\rightarrow j$ and $j\rightarrow 0$ in $Q$ by $\alpha_{j}$ and $\beta_{j}$, respectively. It is easy to see that by starting from the quiver $Q$ and performing successive contractions, first along $\\{\alpha_{1},\beta_{1}\\}$, then along (the reroutes of) $\\{\alpha_{2},\beta_{2}\\}$, then along (the reroutes of) $\\{\alpha_{3},\beta_{3}\\}$, and so on, we can transform $Q$ to a generalized double quiver $\bar{Q}$ with a single vertex $v$ and $n$ self- dual loops $\varepsilon_{1},\dots,\varepsilon_{n}$ at $v$ of weight $m_{1},m_{2},\dots,m_{n}$, respectively. Figure 4 demonstrates the construction of $\bar{Q}$ from the Coxeter diagram $G$ of $(W,S)$ when $n=4$. By Theorem 4.5, the algebra $A$ is Morita equivalent to the quotient $K\bar{Q}/\bar{\mathcal{I}}_{f}$ where $\bar{\mathcal{I}}_{f}$ is the evaluation ideal of $\\{f_{n}\\}$ in $K\bar{Q}$. The following proposition is now immediate. ###### Proposition 4.13. Let $(W,S)$ be as described in the above paragraph. Then the algebra $A$ associated to $(W,S)$ is Morita equivalent to the algebra $A_{\mathbf{k}}$. ###### Proof. 
Note that $K\bar{Q}$ is the free unital associative algebra generated by the loops $\varepsilon_{j}$ where $1\leq j\leq n$ and that $\bar{\mathcal{I}}_{f}=\langle\varepsilon_{j}^{k_{j}}-e_{v}:1\leq j\leq n\rangle$. It follows that we may induce an algebra isomorphism $\varphi:A_{\mathbf{k}}\rightarrow K\bar{Q}/\bar{\mathcal{I}}_{f}$ from the assignment $\varphi(x_{j})=\varepsilon_{j}$ for all $1\leq j\leq n$. ∎ $G:$$1$$0$$2$$3$$4$$\rightarrow$$Q:$$1$$2$$3$$4$$0$$\rightarrow$$\bar{Q}:$$v$$m_{1}$$m_{4}$$m_{2}$$m_{3}$$m_{1}$$m_{2}$$m_{3}$$m_{4}$$m_{1}$$m_{2}$$m_{3}$$m_{4}$$\alpha_{1}$$\beta_{1}$$\alpha_{2}$$\beta_{2}$$\alpha_{3}$$\beta_{3}$$\alpha_{4}$$\beta_{4}$$\varepsilon_{1}$$\varepsilon_{2}$$\varepsilon_{3}$$\varepsilon_{4}$ Figure 4. ###### Remark 4.14. A pleasant feature of iterated quiver contractions is that for every sequence of the form (16), since we reduce the number of vertices in the quiver with every contraction, representations in the category $\mathrm{rep}_{K}(Q^{(n)},\mathcal{I}^{(n)}_{f})$ are often relatively easy to describe. For instance, in Example 4.12 a representation in $\mathrm{rep}_{K}(Q^{(n)},\mathcal{I}^{(n)}_{f})$ is simply the data of a vector space $M_{v}$ and endormophisms $\phi_{1},\dots\phi_{n}$ of $M_{v}$ where $\phi_{j}^{k_{j}}=\operatorname{\mathrm{id}}$ for all $1\leq j\leq n$. In Example 4.10, to define a representation in $\mathrm{rep}_{K}(Q^{(3)},\mathcal{I}^{(3)}_{f})$ it suffices to specify a space $M_{b}$ and two endormorphisms $M_{\alpha_{1}^{\prime\prime}},M_{\beta_{1}^{\prime\prime}}$ of $M_{v_{1}}$ satisfying the relations $f_{m-1}(\alpha_{1}^{\prime\prime},\beta_{1}^{\prime\prime})$ and $f_{m-1}(\beta_{1}^{\prime\prime},\alpha_{1}^{\prime\prime})$. Finally, for the Coxeter system $(W,S)$ from Example 4.8, by repeated contractions along arrows corresponding to the edges of weight 5 and 7 in $G^{\prime}$, we may transform the double quiver of $G^{\prime}$ to a quiver $\bar{Q}$ of the form shown in Figure 5, where the loop $\varepsilon_{1}$ is self-dual and has weight 7, the loop $\varepsilon_{2}$ is self-dual and has weight 5, and $\alpha,\beta$ are dual to each other and have weight 4. It follows that mod-$A$ is equivalent to the category $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ where $\bar{\mathcal{I}}_{f}=\langle\varepsilon_{1}^{2}-e_{v},\varepsilon_{2}^{3}-e_{v},\alpha\beta\alpha-\alpha,\beta\alpha\beta-\beta\rangle.$ A representation in the latter category is then simply the data of two vector spaces $M_{v},M_{z}$, two operators $M_{\varepsilon_{1}},M_{\varepsilon_{2}}$ on $M_{v}$ and two maps $M_{\alpha}:M_{v}\rightarrow M_{z},M_{\beta}:M_{z}\rightarrow M_{v}$ which satisfy the four relations in the above equation. In all these examples, the representations are much easier to describe than those in the category $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ attached to the original double quiver $Q$. Endomorphism like $\phi_{j}$, $M_{\alpha}M_{\beta},M_{\beta}M_{\alpha}$ and $M_{\varepsilon_{1}},M_{\varepsilon_{2}}$ will be key tools in our study of mod-$A$; we will elaborate on their use in § 4.4. $\bar{Q}:$$v$$z$754$\alpha$$\beta$$\varepsilon_{1}$$\varepsilon_{2}$ Figure 5. ### 4.3. Proof of Theorem 4.5 Let $(Q,d,m)$ be a generalized double quiver and let $(\bar{Q},\bar{d},\bar{m})$ be its contraction along a contractible pair of arrows $\\{\alpha:a\rightarrow b,\beta:b\rightarrow a\\}$. We prove Theorem 4.5 in this subsection. 
To begin, we introduce an intermediate generalized double quiver $(Q^{\prime},d^{\prime},m^{\prime})$ where 1. (a) $Q^{\prime}_{0}=Q_{0}$, and $Q_{1}^{\prime}$ is defined as follows: write $c^{\prime}=\begin{cases}a&\text{if }c=b;\\\ c&\text{otherwise}\\\ \end{cases}$ for all $c\in Q_{0}$, let $\gamma^{\prime}=\begin{cases}\gamma&\text{if }\gamma\in\\{\alpha,\beta\\};\\\ (c^{\prime}\rightarrow d^{\prime})&\text{otherwise, if $\gamma$ is of the form $c\rightarrow d$}\\\ \end{cases}$ for each arrow $\gamma\in Q_{1}$, then set $Q_{1}^{\prime}=\\{\gamma^{\prime}:\gamma\in Q_{1}\\}.$ 2. (b) $d^{\prime}$ is defined by $d^{\prime}(\gamma^{\prime})=[d(\gamma)]^{\prime}$ for all $\gamma\in Q^{\prime}_{1}$; 3. (c) $m^{\prime}$ is defined by $m^{\prime}(\gamma^{\prime})=m(\gamma)$ for all $\gamma\in Q_{1}$. Intuitively, we consider $Q^{\prime}$ an intermediate rerouted version of $Q$ similar to $\bar{Q}$: in $\bar{Q}$, we identify $a,b$ with $v=v_{ab}$ and “transfer” all data relevant to $a$ or $b$ in $Q$ to $v$ by rerouting all arrows in $Q$ incident to $a$ or $b$ to $v$; in $Q^{\prime}$, however, we transfer almost all data relevant to $b$ to $a$ except for the arrows $\alpha$ and $\beta$. We may thus think of $a\in Q^{\prime}_{0}$ as a partial copy of $a,b\in Q_{0}$ and $v\in\bar{Q}_{0}$ as a complete copy of $a,b$. To obtain $\bar{Q}$ from $Q^{\prime}$, it remains to rename $a$ as $v_{ab}$, rename each arrow $\gamma\in Q^{\prime}_{1}\setminus\\{\alpha,\beta\\}$ incident to $a$ as $\gamma^{\prime}$, replace $\alpha^{\prime},\beta^{\prime}$ with $\varepsilon_{{ab}}$, and remove $b$. For the quiver $Q$ from Example 4.6, this procedure, along with the construction of $Q^{\prime}$ from $Q$, is illustrated in Figure 6, where $v=v_{ab}$ and $\varepsilon=\varepsilon_{ab}$. $Q:$$a$$d$$c$$b$45$\beta$$\alpha$$\gamma$$\delta$$\zeta$$\eta$$\kappa$$\lambda$$\rightarrow$$Q^{\prime}:$$a$$d$$c$$b$45$\beta$$\alpha$$\gamma^{\prime}$$\delta^{\prime}$$\zeta$$\eta$$\kappa^{\prime}$$\lambda^{\prime}$$\rightarrow$$\bar{Q}:$$v$$c$$d$45$\varepsilon$$\gamma^{\prime}$$\delta^{\prime}$$\zeta$$\eta$$\kappa^{\prime}$$\lambda^{\prime}$ Figure 6. Maintain the notation of Theorem 4.5, and let $\mathcal{I}^{\prime}_{f}$ be its evaluation ideal of $KQ^{\prime}$ associated to the polynomials $\\{f_{n}\\}$. We show below that the algebra $KQ^{\prime}/\mathcal{I}^{\prime}_{f}$ is both isomorphic to $KQ/\mathcal{I}_{f}$ and Morita equivalent to $K\bar{Q}/\bar{\mathcal{I}}_{f}$; Theorem 4.5 immediately follows. Note that by the last paragraph, we “favored” the vertex $a$ in the construction of $Q^{\prime}$ by choosing it as the partial copy of $a$ and $b$ before renaming it $v_{ab}$ in $\bar{Q}$, and we could equally have chosen to favor $b$ in a similar way. This choice is insignificant in that if we favored $b$ when constructing $Q^{\prime}$ then the promised isomorphism and Morita equivalence still hold. In particular, we emphasize that in the direct construction of $\bar{Q}$ described by Definition 4.2, the vertices $a$ and $b$ clearly play equal roles, so a quiver contraction is insensitive to the choice of the favored endpoint of the contractible pair of arrows. ###### Proposition 4.15. Maintain the setting of Theorem 4.5. Then the algebra $KQ^{\prime}/\mathcal{I}^{\prime}_{f}$ is isomorphic to $KQ/\mathcal{I}_{f}$. ###### Proof. 
We will construct mutually inverse homomorphisms $\Phi:KQ/\mathcal{I}_{f}\rightarrow KQ^{\prime}/\mathcal{I}^{\prime}_{f}$ and $\Psi:KQ^{\prime}/\mathcal{I}^{\prime}_{f}\rightarrow KQ/\mathcal{I}_{f}$ to prove the proposition. To do so, we use presentations of the algebras as usual: since $KQ$ is the algebra generated by the set $Q^{\leq 1}=\\{e_{u}:u\in Q_{0}\\}\cup Q_{1}$ subject only to the relations that $e_{u}e_{v}=\delta_{u,v}e_{u}$ for all $u,v\in Q_{0}$ and the relations $e_{u}\gamma=\gamma=\gamma e_{v}$ for each arrow $\gamma:u\rightarrow v$ in $Q_{1}$, the algebra $KQ/\mathcal{I}_{f}$ is generated by the same set $Q^{\leq 1}$ subject to the above relations and the relations $r_{f}(\alpha)$ for all $\alpha\in Q_{1}$. We may therefore construct $\Phi$ by inducing it from a function $\varphi:Q^{\leq 1}\rightarrow KQ^{\prime}/\mathcal{I}_{f}^{\prime}$ which respects all the necessary relations. Similarly, we may construct the homomorphism $\Psi$ from a function $\psi:Q^{\prime\leq 1}=\\{e_{s}:s\in Q^{\prime}_{0}\\}\cup Q^{\prime}_{1}\rightarrow KQ/\mathcal{I}_{f}$ which respects the necessary conditions. Let $m=m(a,b)$. Then $m\geq 3$ and $m$ is odd by Definition 4.2. By scaling if necessary, we may assume the polynomial $f_{m-1}$ has constant term $-1$, in which case the relations $r_{f}(\alpha),r_{f}(\beta)\in\mathcal{I}_{f}$ must be of the form $f_{m-1}(\alpha,\beta)=g(\alpha\beta)\alpha\beta-e_{a},\quad f_{m-1}(\beta,\alpha)=g(\beta\alpha)\beta\alpha-e_{b}=\beta g(\alpha\beta)\alpha-e_{b}$ for some polynomial $g\in K[x]$, respectively. Let $\sigma_{1}=g(\alpha\beta)\alpha,\quad\sigma_{2}=\beta.$ Then $\sigma_{1},\sigma_{2}$ make sense in both $KQ$ and $KQ^{\prime}$, and we have $r_{f}(\alpha)=\sigma_{1}\sigma_{2}-e_{a},r_{f}(\beta)=\sigma_{2}\sigma_{1}-e_{b}$, so that (19) $\sigma_{1}\sigma_{2}=e_{a},\quad\sigma_{2}\sigma_{1}=e_{b}$ in both $KQ/\mathcal{I}_{f}$ and $KQ^{\prime}/\mathcal{I}_{f}^{\prime}$. To define the functions $\varphi:Q^{\leq 1}\rightarrow KQ^{\prime}/\mathcal{I}^{\prime}_{f}$ and $\psi:Q^{\prime\leq 1}\rightarrow KQ/\mathcal{I}_{f}$, first let $X_{+}=\\{\gamma\in Q_{1}:\operatorname{\mathrm{source}}(\gamma)=b\\}\setminus\\{\beta\\},$ and $X_{-}=\\{\gamma\in Q_{1}:\operatorname{\mathrm{target}}(\gamma)=b\\}\setminus\\{\alpha\\}.$ Note that the set $X_{+}\cap X_{-}$ consists of all loops at $b$ in $Q_{1}$, and each loop in it is rerouted to a loop at $a$ in $Q^{\prime}$. Next, let $\varphi(e_{u})=e_{u}$ for all $u\in Q_{0}=Q^{\prime}_{0}$. Finally, recall that $Q^{\prime}_{1}=\\{\gamma^{\prime}:\gamma\in Q_{1}\\}$ and define $\varphi$ and $\psi$ on $Q_{1}$ and $Q_{1}^{\prime}$ by letting $\varphi(\gamma)=\begin{cases}\sigma_{2}\gamma^{\prime}\sigma_{1}&\text{if $\gamma\in X_{+}\cap X_{-}$};\\\ \sigma_{2}\gamma^{\prime}&\text{if $\gamma\in X_{+}\setminus X_{-}$};\\\ \gamma^{\prime}\sigma_{1}&\text{if $\gamma\in X_{-}\setminus X_{+}$};\\\ \gamma&\text{otherwise},\\\ \end{cases}\quad\psi(\gamma^{\prime})=\begin{cases}\sigma_{1}\gamma\sigma_{2}&\text{if $\gamma\in X_{+}\cap X_{-}$};\\\ \sigma_{1}\gamma&\text{if $\gamma\in X_{+}\setminus X_{-}$};\\\ \gamma\sigma_{2}&\text{if $\gamma\in X_{-}\setminus X_{+}$};\\\ \gamma&\text{otherwise}\\\ \end{cases}$ for each $\gamma\in Q_{1}$. 
Using the relations in (19), it is straightforward to verify that $\varphi$ and $\psi$ respect all necessary relations mentioned in the previous paragraph and induce mutually inverse algebra homomorphisms $\Phi:KQ/\mathcal{I}_{f}\rightarrow KQ^{\prime}/\mathcal{I}^{\prime}_{f}$ and $\Psi:KQ^{\prime}/\mathcal{I}^{\prime}_{f}\rightarrow KQ/\mathcal{I}_{f}$, as desired. ∎ ###### Proposition 4.16. Maintain the setting of Theorem 4.5. Then the algebra $K\bar{Q}/\mathcal{I}(\bar{\mathcal{R}})$ is Morita equivalent to $KQ^{\prime}/\mathcal{I}(\mathcal{R}^{\prime})$. ###### Proof. Let $\Lambda=KQ^{\prime}/\mathcal{I}^{\prime}_{f}$ and let $\sigma_{1},\sigma_{2}$ be as in the proof of Proposition 4.15. Since $\sigma_{1}\sigma_{2}=e_{a}$ and $\sigma_{2}\sigma_{1}=e_{b}$ in $\Lambda$, the maps $\phi_{1}:e_{a}\Lambda\rightarrow e_{b}\Lambda,x\mapsto\sigma_{2}x$ and $\phi_{2}:e_{b}\Lambda\rightarrow e_{a}\Lambda,y\mapsto\sigma_{1}y$ give mutually inverse isomorphisms between the projective modules $\Lambda$-modules $e_{a}\Lambda$ and $e_{b}\Lambda$. Set $V_{b}=Q^{\prime}_{0}\setminus\\{b\\}$ and let $e=1-e_{b}=\sum_{u\in V_{b}}e_{u}\in\Lambda.$ Since $e_{a}\Lambda\cong e_{b}\Lambda$, the submodule $\Lambda^{\prime}:=e\Lambda=\oplus_{u\in V_{b}}(e_{u}\Lambda)$ of the regular module $\Lambda$ is a progenerator in the category of $\Lambda$-modules, therefore $\Lambda$ is Morita equivalent to the endomorphism algebra $\mathrm{End}_{\Lambda}(\Lambda^{\prime})$. We have $\mathrm{End}_{\Lambda}(\Lambda^{\prime})\cong e\Lambda e$ since $e$ is an idempotent, so to prove the proposition it suffices to show that $K\bar{Q}/\mathcal{I}(\bar{\mathcal{R}})$ is isomorphic to $e\Lambda e$. We will do so by inducing a homomorphism $\Phi:K\bar{Q}/\mathcal{I}(\bar{\mathcal{R}})\rightarrow e\Lambda e$ from a function $\phi:\bar{Q}^{\leq 1}:=\\{e_{u}:u\in\bar{Q}_{0}\\}\cup\bar{Q}_{1}\rightarrow e\Lambda e$ and then showing that $\Phi$ is bijective. Let $v=v_{ab}$ and $\varepsilon=\varepsilon_{ab}$. To define the function $\phi:\\{e_{u}:u\in\bar{Q}_{0}\\}\cup\bar{Q}_{1}\rightarrow e\Lambda e$, let $\varphi(e_{v})=e_{a}$ and let $\phi(e_{u})=e_{u}$ for all $u\in\bar{Q}_{0}\setminus\\{v\\}$, then let $\phi(\varepsilon)=\alpha\beta$ and let $\phi(\gamma)=\gamma$ for all $\gamma\in\bar{Q}_{1}\setminus\\{\varepsilon\\}$. Viewing $K\bar{Q}/\bar{\mathcal{I}}_{f}$ as the algebra generated by $\bar{Q}^{\leq 1}$ subject to the suitable relations as usual, we can again check that $\phi$ respects all these relations: indeed, by the definitions of $\bar{Q}$ and $Q^{\prime}$, it suffices to check only the relations involving the loop $\varepsilon\in\bar{Q}_{1}$, i.e., the relation $e_{v}\varepsilon=\varepsilon=\varepsilon e_{v}$ and the relation $r_{f}(\varepsilon)=\tilde{f}_{m(\alpha)-1}(\varepsilon)$. These relations are respected by $\phi$ since $\phi(e_{v})=e_{a},\phi(\varepsilon)=\alpha\beta$, $e_{a}\alpha\beta=\alpha\beta=\alpha\beta e_{a}$, and $\tilde{f}_{m(\alpha)-1}(\alpha\beta)=r_{f}(\alpha)\in\mathcal{I}_{f}$ (see Remark 4.4). It follows that $\Phi$ induces a unique algebra homomorphism $\Phi:K\bar{Q}/\bar{\mathcal{I}}_{f}\rightarrow e\Lambda e$. To prove that $\Phi$ is bijective, we keep the notation from Definition 4.2 and from the definition of $Q^{\prime}$. Let $X=Q_{1}\setminus\\{\alpha,\beta\\}$. 
Then $\bar{Q}_{1}=\\{\gamma^{\prime}:\gamma\in X\\}\sqcup\\{\varepsilon\\},\quad Q^{\prime}_{1}=\\{\gamma^{\prime}:\gamma\in X\\}\sqcup\\{\alpha,\beta\\},$ and $\Phi(\gamma^{\prime})=\gamma^{\prime}$ for all $\gamma\in X$ (where the $\gamma^{\prime}$s stand for their respective images in $K\bar{Q}/\bar{\mathcal{I}}_{I}$ and $e\Lambda e$). Let $\mathcal{P}_{b}$ be the set of all paths on $Q^{\prime}$ which both start and end at a vertex in $V_{b}$. Then $e\Lambda e$ is spanned by the classes of paths in $\mathcal{P}_{b}$. Now, since $\alpha,\beta$ are the only arrows in $Q^{\prime}$ with $b$ as its target and source, respectively, if a path $p\in\mathcal{P}_{b}$ passes $b$ at any point then it must have traveled to $b$ from $a$ via $\alpha$ and then immediately traveled back to $a$ via $\beta$. Consequently, $p$ is a product of $\alpha\beta=\Phi(\varepsilon)$ and arrows from the set $X=\Phi(X)\subseteq\operatorname{\mathrm{im}}\Phi$. It follows that $p\in\operatorname{\mathrm{im}}\Phi$, so $\Phi$ is surjective. It remains to prove that $\Phi:K\bar{Q}/\bar{\mathcal{I}}_{f}\rightarrow e\Lambda e$ is injective. Since $\Lambda=KQ^{\prime}/\mathcal{I}^{\prime}_{f}$, it suffices to show that $e\mathcal{I}^{\prime}_{f}e\subseteq\Phi(\bar{\mathcal{I}}_{f})$. Since $Q^{\prime}_{1}=\\{\alpha,\beta\\}\sqcup\\{\gamma^{\prime}:\gamma\in X\\}$, the set $e\mathcal{I}_{f}^{\prime}e$ is spanned by nonzero elements of the form (20) $y_{1}=p_{1}[r_{f}(\alpha)]q_{1}=p_{1}[\Phi(f(\varepsilon,\varepsilon))]q_{1},$ (21) $y_{2}=p_{2}[r_{f}(\beta)]q_{2},$ and (22) $y_{3}=p_{3}[r_{f}(\gamma^{\prime})]q_{3}=p_{3}[\Phi(r_{f}(\gamma))]q_{3}$ where $\gamma\in X$ and $p_{i},q_{i}$ are paths in $KQ^{\prime}$ for all $i\in\\{1,2,3\\}$. We need to show that $y_{1},y_{2},y_{3}\in\Phi(\bar{\mathcal{I}}_{f})$. Note that the following holds for all $i\in\\{1,2,3\\}$: 1. (a) Since $e=\sum_{u\in V_{b}}e_{u}$ and $y_{i}\neq 0$, we have $\operatorname{\mathrm{source}}(p_{i})$, $\operatorname{\mathrm{target}}(q_{i})\in V_{b}$. 2. (b) Let $r_{i}$ be the bracketed relation in $y_{i}$ in Equations (20)-(22). Then $\operatorname{\mathrm{target}}(p_{i})=\operatorname{\mathrm{source}}(r_{i}),\operatorname{\mathrm{target}}(r_{i})=\operatorname{\mathrm{source}}(q_{i})$ since $y_{i}\neq 0$. In particular, we have $\operatorname{\mathrm{target}}(p_{1})=\operatorname{\mathrm{source}}(q_{1})=a\in V_{b}$ and $\operatorname{\mathrm{target}}(p_{3})$, $\operatorname{\mathrm{source}}(q_{3})\in V_{b}$ since the rerouted arrow $\gamma^{\prime}$ cannot be incident to $b$. 3. (c) By (a) and (b), $p_{3},q_{3}\in\mathcal{P}_{b}\subseteq\operatorname{\mathrm{im}}\Phi$ where the last containment holds by the last paragraph. Furthermore, we have $r_{1}=r_{f}(\alpha)=\Phi(r_{f}(\varepsilon))$ and $r_{3}=r_{f}(\gamma^{\prime})=\Phi(r_{f}(\gamma))$ by the definition of $\Phi$. It follows that $y_{1},y_{3}\in\Phi(\bar{\mathcal{I}}_{f})$, as desired. 4. (d) By (b) we have $\operatorname{\mathrm{target}}(p_{2})=b=\operatorname{\mathrm{source}}(q_{2})$, but since $\alpha,\beta$ are the only arrows in $Q^{\prime}$ with $b$ as its target and source, respectively, we must have $p_{2}=p^{\prime}_{2}\alpha$ and $q_{2}=\beta q^{\prime\prime}_{2}$ for some paths $p^{\prime}_{2},q^{\prime\prime}_{2}\in\mathcal{P}_{b}$. 
It follows that (23) $y_{2}=p^{\prime\prime}_{2}\alpha[r_{f}(\beta)]\beta q^{\prime\prime}_{2}=p^{\prime}_{2}[r_{f}(\alpha)]\alpha\beta q^{\prime\prime}_{2}=p^{\prime}_{2}[\Phi(r_{f}(\varepsilon_{ab}))]q^{\prime}_{2}$ where $q^{\prime}_{2}=\alpha\beta q^{\prime\prime}_{2}$. Note that $p_{2}^{\prime},q_{2}^{\prime}\in\mathcal{P}_{b}\subseteq\operatorname{\mathrm{im}}\Phi$, therefore $y_{2}\in\Phi(\bar{\mathcal{I}}_{f})$. The proof is now complete. ∎ ###### Remark 4.17. We have assumed that the field $K$ is algebraically closed in Theorem 3.7, Theorem 3.13, Theorem 4.5, Proposition 4.15 and Proposition 4.16. However, it is worth noting these results also hold, by the exact same proofs, if we assume instead that $K$ is an arbitrary field of characteristic zero and that $\\{f_{n}\\}$ is a family of polynomials which all split over $K$ and satisfy Conditions (a) and (b) of Definition 3.3. The purpose of the assumption that $K$ is algebraically closed is to guarantee that the polynomials $\\{f_{n}\\}$ defined by Equation 17 split and hence form a uniform family over $K$; the simple forms of these polynomials will greatly simplify the study of representations in categories of the form $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ appearing in § 4.2 and the remaining parts of the paper. ### 4.4. Representations of Contracted Quivers Let $(\bar{Q},\bar{d},\bar{m})$ be a generalized double quiver obtained from the double quiver $Q$ of a Coxeter system $(W,S)$ via a sequence of contractions. Let $\bar{\mathcal{I}}_{f}$ be the evaluation ideal of a uniform family $\\{f_{n}\\}$ of polynomials over $K$. Then the category mod-$A$ is equivalent to the category $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$. We develop tools for constructing and analyzing representations in $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ in this subsection. Let $M=(M_{a},M_{\alpha})_{a\in\bar{Q}_{0},\alpha\in\bar{Q}_{1}}$ be a representation in $\mathrm{rep}_{K}\bar{Q}$. The definition of $\bar{\mathcal{I}}_{f}$ implies that $M$ is a representation in $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ if and only if for every arrow of the form $\alpha:a\rightarrow b$ in $\bar{Q}$, the set of assignments $M_{\\{\alpha,\beta\\}}:=\\{M_{a},M_{b},M_{\alpha},M_{\beta}\\}$ where $\beta=\bar{d}(\alpha)$ satisfies the relations $r_{f}(\alpha)$ and $r_{f}(\beta)$ in the sense that the maps (24) $f_{m-1}(M_{\alpha},M_{\beta}):=\begin{cases}\tilde{f}_{m-1}(M_{\alpha}M_{\beta})&\text{if $m$ is odd};\\\ \tilde{f}_{m-1}(M_{\alpha}M_{\beta})M_{\alpha}&\text{if $m$ is even}\end{cases}$ and (25) $f_{m-1}(M_{\beta},M_{\alpha}):=\begin{cases}\tilde{f}_{m-1}(M_{\beta}M_{\alpha})&\text{if $m$ is odd};\\\ \tilde{f}_{m-1}(M_{\alpha}M_{\beta})M_{\alpha}&\text{if $m$ is even}\end{cases}$ where $m=m(\alpha)$ both equal 0. Call a set of the form $M_{\\{\alpha,\beta\\}}$ a _local representation for $\\{\alpha,\beta\\}$_ if it satisfies the equations (24) and (25). Then to construct a representation $M\in\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ it suffices to _assemble_ a collection of local representations $\mathcal{M}:=\\{M_{\\{\alpha,\beta\\}}\,|\,\alpha\in Q_{1},\beta=\bar{\alpha}\\}$ that is _consistent_ in the sense that for every vertex $a\in\bar{Q}_{0}$, there is a common vector space $V_{a}$ such that $M_{a}=V_{a}$ for every local representation $M_{\alpha,\beta\\}}\in\mathcal{M}$ where $\alpha$ is incident to $a$. 
Here, we _assemble_ a consistent collection $\mathcal{M}$ into a representation $M\in\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ as follows: first, for each vertex $a\in\bar{Q}_{0}$, pick any arrow $\alpha\in\bar{Q}_{1}$ incident to $a$, denote $M_{\\{\alpha,\bar{\alpha}\\}}$ by $M^{\prime}$, then let $M_{a}=M^{\prime}_{a}$; second, for each arrow $\gamma\in\bar{Q}_{1}$ and $\beta=\bar{d}(\alpha)$, denote $M_{\\{\alpha,\beta\\}}$ by $M^{\prime\prime}$, then let $M_{\alpha}=M^{\prime\prime}_{\alpha}$ and $M_{\beta}=M^{\prime\prime}_{\beta}$. Note that the first step can be done unambiguously, independent of the choice of the arrow $\alpha$, if and only if $\mathcal{M}$ is consistent. Also note that in the above discussion we allow the possibility that $\alpha$ is self-dual, in which case $\beta=\alpha$ and $a=b$. Subsequently, we will often construct a representation in $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ by assembling a consistent collection of local representations. The local representations can in turn be studied, by comparison of the equations (12), (13) and (24), (25), similarly to how we studied $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ in the dihedral case in § 3.3: ###### Proposition 4.18. Let $M\in\mathrm{rep}_{K}\bar{Q}$. Let $\alpha:a\rightarrow b$ be an arrow in $\bar{Q}_{1}$, let $\beta=\bar{d}(\alpha)$, and let $m=\bar{m}(\alpha)$. Let $M_{\\{\alpha,\beta\\}}=\\{M_{a},M_{b},M_{\alpha},M_{\beta}\\}$. Then the following results hold. 1. (a) If $m=\infty$, then $M_{\\{\alpha,\beta\\}}$ is automatically a local representation for $\\{\alpha,\beta\\}$. If $m<\infty$, then $M_{\\{\alpha,\beta\\}}$ is a local representation whenever $M_{\alpha}M_{\beta}$ and $M_{\beta}M_{\alpha}$ are diagonalizable maps whose eigenvalues are roots of $\tilde{f}_{m-1}$. 2. (b) If $M_{\\{\alpha,\beta\\}}$ is a local representation for $\\{\alpha,\beta\\}$ and $m<\infty$, then $M_{\alpha}M_{\beta}$ and $M_{\beta}M_{\alpha}$ are diagonalizable and their eigenvalues are either roots of $\tilde{f}_{m-1}$ or zero. ###### Proof. Part (a) follows from Equations (24) and (25). Part (b) follows from the same equations and Lemma 3.12.(a). ∎ The following specializations of Proposition 4.18.(a) will be very useful: ###### Corollary 4.19. Let $M,\alpha,\beta,a,b,m$ and $M_{\left\\{\alpha,\beta\right\\}}$ be as in Proposition 4.18. Let $\\{f_{n}\\}$ be the polynomial family defined by (17). Then the following holds for any positive integer $n$. 1. (a) If $M_{a}=M_{b}=K^{n}$ and $M_{\alpha}=M_{\beta}=\operatorname{\mathrm{id}}$, then $M_{\\{\alpha,\beta\\}}$ defines a local representation for $\\{\alpha,\beta\\}$. 2. (b) Suppose that $m\geq 5$ and $\beta\neq\alpha$. If $M_{a}=M_{b}=K^{n}$ and $(M_{\alpha}M_{\beta})^{2}=(M_{\beta}M_{\alpha})^{2}=\operatorname{\mathrm{id}}$, then $M_{\\{\alpha,\beta\\}}$ defines a local representation for $\\{\alpha,\beta\\}$. ###### Proof. The results follow from Proposition 4.18.(a), the fact that $1$ is a root for $\tilde{f}_{m-1}$, and the fact that $x^{2}-1$ divides $\tilde{f}_{m-1}$ whenever $m\geq 5$. ∎ To prove results on mod-$A$ in Section § 5, we will often need to not only construct a suitable representation $M$ in $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{R}})$ but also prove that $M$ is simple or not semisimple. The proofs typically proceed in the following way. Consider a specific vertex $a\in\bar{Q}_{0}$ and a number of paths $p_{1},p_{2},\dots,p_{k}$ on $\bar{Q}$ that both start and end at $a$. 
Recall from § 2.4 that these paths give rise to endormophisms $\phi_{1}:=M_{p_{1}},\phi_{2}:=M_{p_{2}},\dots$ and $\phi_{k}:=M_{p_{k}}$ of $M_{a}$, and that a subrepresentation $N$ of $M$ must assign to $a$ a vector space $N_{a}\subseteq M_{a}$ that satisfies the invariance condition $\phi_{i}(N_{a})\subseteq N_{a}$ for all $1\leq i\leq k$. Together, these invariance conditions force $N_{a}$ to take certain forms, which in turn force $M$ to satisfy certain properties such as being simple. We refer to the analysis of what form $N_{a}$ can take as _subspace analysis at $a$_. We now explain how we will construct representations to facilitate successful subspace analysis via examples. All the examples will be used in the proofs of Section 5, and $\\{f_{n}\\}$ stands for the uniform family of polynomials defined by Equation (17) throughout the examples. Our first method starts with an irreducible representation $\rho:G\rightarrow\mathrm{GL}(V)$ of a group $G$: ###### Example 4.20. We construct a simple representation $M$ in $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ for the generalized double quiver $\bar{Q}$ in Figure 5 from Remark 4.14. To start, consider the symmetric group $G=S_{q}$ where $q\geq 8$ and any irreducible representation $\rho:G\rightarrow\mathrm{GL}(V)$ of $G$. By [Mil01], the group $S_{q}$ can be generated by two elements $\sigma,\tau$ of orders 2 and 3, respectively. It follows that $\rho(\sigma)^{2}=\rho(\sigma^{2})=\rho(e)=\operatorname{\mathrm{id}}_{V}$ and similarly $\rho(\tau)^{3}=\operatorname{\mathrm{id}}_{V}$. To define $M$, first let $M_{v}=V,M_{\varepsilon_{1}}=\rho(\sigma)$ and $M_{\varepsilon_{2}}=\rho(\tau)$. This defines local representations for the sets $\\{\varepsilon_{1}\\}$ and $\\{\varepsilon_{2}\\}$ because $M_{\varepsilon_{1}},M_{\varepsilon_{2}}$ satisfy the relations $r_{f}(\varepsilon_{1})=\varepsilon_{1}^{2}-e_{v},r_{f}(\varepsilon_{2})=\varepsilon_{2}^{3}-e_{v}$, respectively. To finish the definition of $M$, it remains to assign a local representation for the dual arrows $\\{\alpha,\beta\\}$ that is consistent with these two local representations. By Corollary 4.19, it suffices to define $M_{z}=V$ and $M_{\alpha}=M_{\beta}=\operatorname{\mathrm{id}}_{V}$. Let $N$ be a subrepresentation of $M$. Since the representation $V$ is irreducible and $\sigma,\tau$ generate $G$, the only subspaces of $M_{v}$ that are invariant under both $M_{\varepsilon_{1}}=\rho(\sigma)$ and $M_{\varepsilon_{2}}=\rho(\tau)$ are $0$ and $M_{v}$ itself, therefore we have $N_{v}=0$ or $N_{v}=M_{v}$. Since $M_{\alpha},M_{\beta}$ are isomorphisms, in these two cases we must have $N=0$ or $N=M$, respectively, therefore $M$ is simple. The endomorphisms $\phi_{1},\dots,\phi_{k}$ of $M_{a}$ mentioned above are often all diagonalizable in our examples. This makes the following well-known fact from linear algebra very useful for subspace analysis: let $V$ be a vector space and let $\phi$ be a diagonalizable endomorphism of $V$. Suppose $\phi$ has $d$ distinct eigenvalues and $V=\oplus_{i=1}^{d}E_{i}$ is the corresponding eigenspace decomposition. Then a subspace $W$ of $V$ is invariant under $\phi$ if and only if $W$ is _compatible with the eigenspace decomposition_ in the sense that $W=\oplus_{i=1}^{d}(W\cap E_{i})$. We use this characterization in the following two examples. ###### Example 4.21. Let $(W,S)$ be the Coxeter system whose Coxeter diagram $G$ is shown on the left of Figure 7, where $m_{1},m_{2}\in\mathbb{Z}_{\geq 4}\cup\\{\infty\\}$. 
The double quiver $Q$ of $G$ is shown on the right of the same figure. We construct a representation $M$ in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ and apply subspace analysis at $b$ to show that $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$, hence mod-$A$, is not semisimple. Figure 7. (Left: the Coxeter diagram $G$ on the vertices $a,b,c$ with edge weights $m_{1},m_{2}$; right: its double quiver $Q$ with arrows $\alpha,\beta$ between $a,b$ and $\gamma,\delta$ between $b,c$.) We begin by constructing the local representations $M_{\\{\alpha,\beta\\}}$ and $M_{\\{\gamma,\delta\\}}$ based on the values of $m_{1}$ and $m_{2}$, respectively. For $\\{\alpha,\beta\\}$, first let $M_{a}=M_{b}=K^{2}$. Let $\lambda_{2}=1$. Take $\lambda_{1}=-1$ if $m_{1}=\infty$, $\lambda_{1}=0$ if $m_{1}=4$, and $\lambda_{1}=z$, a root of $\tilde{f}_{m_{1}-1}$ different from $\lambda_{2}$, if $4<m_{1}<\infty$. Next, set $M_{\beta}$ to be the map given by the matrix $\begin{bmatrix}0&0\\\ 0&1\end{bmatrix}$ if $m_{1}=4$ and to be the identity map otherwise, then set $M_{\alpha}=\begin{bmatrix}\lambda_{1}&1\\\ 0&\lambda_{2}\end{bmatrix}$. It is straightforward to check that $M_{\\{\alpha,\beta\\}}:=\\{M_{a},M_{b},M_{\alpha},M_{\beta}\\}$ forms a local representation: if $m_{1}=\infty$, then the relations $r_{f}(\alpha),r_{f}(\beta)$ are zero and there is nothing to check; if $4<m_{1}<\infty$, then the relations $r_{f}(\alpha),r_{f}(\beta)$ are satisfied by Proposition 4.18 because $M_{\alpha}M_{\beta}=M_{\beta}M_{\alpha}=M_{\alpha}$, a diagonalizable map whose eigenvalues are roots of $\tilde{f}_{m_{1}-1}$; finally, if $m_{1}=4$, then the relations $r_{f}(\alpha)=\alpha\beta\alpha-\alpha,r_{f}(\beta)=\beta\alpha\beta-\beta$ are satisfied because $M_{\alpha}M_{\beta}M_{\alpha}=M_{\alpha},M_{\beta}M_{\alpha}M_{\beta}=M_{\beta}$ by direct computation. We may similarly define a local representation for $\\{\gamma,\delta\\}$ by letting $M_{b}=M_{c}=K^{2}$, defining numbers $\mu_{2},\mu_{1}$ and the map $M_{\gamma}$ based on $m_{2}$ in the same way we defined $\lambda_{2},\lambda_{1}$ and $M_{\beta}$ based on $m_{1}$, and defining the map $M_{\delta}$ to be given by the matrix $\begin{bmatrix}\mu_{1}&0\\\ 0&\mu_{2}\end{bmatrix}$. The two local representations are consistent because they assign the same vector space $K^{2}$ to the vertex $b$, therefore they can be assembled to a representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. Consider the operators $\phi_{1}:=M_{\alpha}\circ M_{\beta}$ and $\phi_{2}:=M_{\delta}\circ M_{\gamma}$ on $M_{b}$. Then $\phi_{1}=\begin{bmatrix}\lambda_{1}&1\\\ 0&\lambda_{2}\end{bmatrix},\quad\phi_{2}=\begin{bmatrix}\mu_{1}&0\\\ 0&\mu_{2}\end{bmatrix},$ so the eigenspace decompositions of $M_{b}$ with respect to $\phi_{1},\phi_{2}$ are given by (26) $M_{b}=\langle e_{1}\rangle\oplus\langle e_{1}+(\lambda_{2}-\lambda_{1})e_{2}\rangle=\langle e_{1}\rangle\oplus\langle e_{2}\rangle$ where $\langle v\rangle$ stands for the span of $v$ for each vector $v$ and $e_{1},e_{2}$ denote the standard basis vectors $\begin{bmatrix}1\\\ 0\end{bmatrix},\begin{bmatrix}0\\\ 1\end{bmatrix}$ of $K^{2}$, respectively. Let $N$ be a nonzero subrepresentation of $M$. Then the vector space $N_{b}$ must be invariant under both $\phi_{1}$ and $\phi_{2}$. Consequently, $N_{b}$ must be compatible with the decompositions in Equation (26) in the sense that $N_{b}=(N_{b}\cap\langle e_{1}\rangle)\oplus(N_{b}\cap\langle e_{2}\rangle)=(N_{b}\cap\langle e_{1}\rangle)\oplus(N_{b}\cap\langle e_{1}+(\lambda_{2}-\lambda_{1})e_{2}\rangle).$ But each intersection in the above equation is either trivial or of dimension 1, and $\langle e_{1}\rangle$ is the only line appearing in both decompositions, so if $N_{b}$ is a nonzero proper subspace of $M_{b}$, then $N_{b}=\langle e_{1}\rangle$.
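This compatibility check is easy to reproduce by direct computation. The following minimal Python sketch (an illustration only, with hypothetical placeholder values for $\lambda_{1},\mu_{1}$ and $\lambda_{2}=\mu_{2}=1$ as in the text) enumerates the invariant lines of two operators of the same shapes as $\phi_{1}$ and $\phi_{2}$ and confirms that $\langle e_{1}\rangle$ is the only common one:

```python
import numpy as np

# Sample operators of the shapes of phi_1 and phi_2 from Example 4.21;
# lam1 and mu1 are hypothetical placeholders.
lam1, lam2 = 2.0, 1.0
mu1, mu2 = 3.0, 1.0
phi1 = np.array([[lam1, 1.0], [0.0, lam2]])
phi2 = np.array([[mu1, 0.0], [0.0, mu2]])

def invariant_lines(phi):
    # A line <v> in K^2 is phi-invariant exactly when v is an eigenvector.
    _, vecs = np.linalg.eig(phi)
    return [vecs[:, i] for i in range(vecs.shape[1])]

def same_line(u, v):
    # Unit eigenvectors span the same line iff |<u, v>| = 1.
    return np.isclose(abs(np.vdot(u, v)), 1.0)

common = [u for u in invariant_lines(phi1)
          if any(same_line(u, v) for v in invariant_lines(phi2))]
# The only line invariant under both operators is <e_1>.
assert len(common) == 1 and same_line(common[0], np.array([1.0, 0.0]))
```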
This implies that the subrepresentation of $M$ generated by $e_{1}\in M_{b}$ cannot have a complement in $M$: such a complement would have to assign to $b$ a line invariant under both $\phi_{1}$ and $\phi_{2}$ but different from $\langle e_{1}\rangle$, and no such line exists. Hence $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ is not semisimple. ###### Example 4.22. Let $Q$ and $M$ be as in the previous example. We modify $M$ to produce an infinite family of simple representations in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ to be used in Section 5. To begin, define a representation $M^{x}$ for each scalar $x\in K\setminus\\{\lambda_{1},\lambda_{2}\\}$ as follows: let $M^{x}_{a}=M^{x}_{b}=M^{x}_{c}=K^{2},M^{x}_{\gamma}=M_{\gamma}$ and $M^{x}_{\delta}=M_{\delta}$, then let $M^{x}_{\beta}=\operatorname{\mathrm{id}}$ and let $M^{x}_{\alpha}$ be the map given by the matrix $B_{x}=\begin{bmatrix}x&x(\lambda_{1}+\lambda_{2}-x)-\lambda_{1}\lambda_{2}\\\ 1&\lambda_{1}+\lambda_{2}-x\end{bmatrix}.$ As before, this defines local representations $M^{x}_{\\{\gamma,\delta\\}}:=\\{M^{x}_{b},M^{x}_{c},M^{x}_{\gamma},M^{x}_{\delta}\\}$ and $M^{x}_{\\{\alpha,\beta\\}}:=\\{M^{x}_{a},M^{x}_{b},M^{x}_{\alpha},M^{x}_{\beta}\\}$ for $\\{\gamma,\delta\\}$ and $\\{\alpha,\beta\\}$, respectively. Moreover, the two local representations are consistent and assemble to a representation $M^{x}$. Let $\phi^{x}_{1}:=M_{\alpha}^{x}\circ M_{\beta}^{x}$ and $\phi^{x}_{2}:=M^{x}_{\delta}\circ M^{x}_{\gamma}$. The matrix $B_{x}$ has trace $\lambda_{1}+\lambda_{2}$ and determinant $\lambda_{1}\lambda_{2}$, so the map $\phi_{1}^{x}$ has an eigenvector $v_{1}:=(x-\lambda_{2})e_{1}+e_{2}$ with eigenvalue $\lambda_{1}$ as well as an eigenvector $v_{2}:=(x-\lambda_{1})e_{1}+e_{2}$ with eigenvalue $\lambda_{2}$, so the eigenspace decompositions of $M^{x}_{b}$ with respect to $\phi_{1}^{x}$ and $\phi_{2}^{x}$ are given by $M^{x}_{b}=\langle v_{1}\rangle\oplus\langle v_{2}\rangle=\langle e_{1}\rangle\oplus\langle e_{2}\rangle.$ It follows that in any subrepresentation $N$ of $M^{x}$ we must have $N_{b}=N_{b}\cap\langle v_{1}\rangle\oplus N_{b}\cap\langle v_{2}\rangle=N_{b}\cap\langle e_{1}\rangle\oplus N_{b}\cap\langle e_{2}\rangle.$ The lines $\langle v_{1}\rangle,\langle v_{2}\rangle,\langle e_{1}\rangle,\langle e_{2}\rangle$ are pairwise distinct since $\lambda_{1}\neq\lambda_{2},\mu_{1}\neq\mu_{2}$ and $x\not\in\\{\lambda_{1},\lambda_{2}\\}$, therefore the above equation holds only if $N_{b}=0$ or $N_{b}=K^{2}$. If $m_{2}\neq 4$, then $\mu_{1}\neq 0$ and $M^{x}_{\gamma}$ is an isomorphism, therefore we must have $N=0$ or $N=M^{x}$, which in turn implies that $M^{x}$ is simple. If $m_{2}=4$, it is easy to check that $M^{x}$ contains a simple module $N^{x}$ such that $N^{x}_{a}=N^{x}_{b}=K^{2}$ and $N^{x}_{c}=\langle e_{2}\rangle$. Set $N^{x}=M^{x}$ when $m_{2}\neq 4$. Then $N^{x}$ is a simple representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ for all $x\in K\setminus\\{\lambda_{1},\lambda_{2}\\}$ regardless of the value of $m_{2}$. Observe that if $x\neq y$ then $N^{x}\not\cong N^{y}$, for there cannot exist linear isomorphisms $\phi_{a}:N^{x}_{a}\rightarrow N^{y}_{a},\phi_{b}:N^{x}_{b}\rightarrow N^{y}_{b}$ such that $\phi_{b}N^{x}_{\alpha}=N^{y}_{\alpha}\phi_{a}$ and $\phi_{a}N^{x}_{\beta}=N^{y}_{\beta}\phi_{b}$ simultaneously: the second equation holds only if $\phi_{a}=\phi_{b}$ as maps from $K^{2}$ to $K^{2}$ because $N^{x}_{\beta}=N^{y}_{\beta}=\operatorname{\mathrm{id}}$, and then the first equation implies that the matrices $B_{x},B_{y}$ are equal, which cannot happen if $x\neq y$. In the next example, we combine the ideas of the last three examples to construct simple representations in a more flexible way. ###### Example 4.23. Let $(W,S),G$, and $Q$ be as in Example 4.21.
We construct a representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ in the case that $m_{1}\geq 4,m_{2}\geq 6$ and prove that $M$ contains a simple subrepresentation $N$. Consider the irreducible representation $\rho:S_{q}\rightarrow\mathrm{GL}(V)$ and the elements $\sigma,\tau\in S_{q}$ from Example 4.20. Using minimal polynomials as we did in the proof of Lemma 3.12, we may deduce from the fact that $\rho(\sigma)^{2}=\operatorname{\mathrm{id}}_{V}$ that $\rho(\sigma)$ is a diagonalizable map whose eigenvalues are from the set $\\{-1,1\\}$, so with respect to $\rho(\sigma)$ we have an eigenspace decomposition $V=E_{1}\oplus E_{2}$ where $E_{1},E_{2}$ are the eigenspaces for the eigenvalues $1$ and $-1$, respectively. Similarly, since $\rho(\tau)^{3}=\operatorname{\mathrm{id}}_{V}$, the map $\rho(\tau)$ is diagonalizable and its eigenvalues lie in the set $\\{\omega_{1},\omega_{2},\omega_{3}\\}$ of the three cube roots of unity, therefore we have an eigenspace decomposition $V=F_{1}\oplus F_{2}\oplus F_{3}$ of $V$ where each $F_{i}$ is the eigenspace for the eigenvalue $\omega_{i}$. Note that 0 and $V$ are the only subspaces of $V$ compatible with both these decompositions, because they are the only subspaces invariant under both $\rho(\sigma)$ and $\rho(\tau)$ by Example 4.20. To define $M$, first set $M_{a}=M_{b}=M_{c}=V$. Next, assign the linear maps $M_{\alpha},M_{\beta}$ based on the value of $m_{1}$: if $m_{1}>4$, then $\tilde{f}_{m_{1}-1}$ has at least two nonzero roots $\lambda_{1},\lambda_{2}$, and we set $M_{\alpha}=\operatorname{\mathrm{id}}_{E_{1}}\oplus\operatorname{\mathrm{id}}_{E_{2}},\quad M_{\beta}=(\lambda_{1}\cdot\operatorname{\mathrm{id}}_{E_{1}})\oplus(\lambda_{2}\cdot\operatorname{\mathrm{id}}_{E_{2}})$ where the notation means, for example, that $M_{\beta}$ restricts to $\lambda_{1}\cdot\operatorname{\mathrm{id}}$ on $E_{1}$ and to $\lambda_{2}\cdot\operatorname{\mathrm{id}}$ on $E_{2}$; if $m_{1}=4$, we set $M_{\alpha}=\operatorname{\mathrm{id}}_{E_{1}}\oplus 0_{E_{2}},\quad M_{\beta}=(\lambda_{1}\cdot\operatorname{\mathrm{id}}_{E_{1}})\oplus 0_{E_{2}}$ where $\lambda_{1}$ is the unique nonzero root of $\tilde{f}_{3}$. Similarly, if $m_{2}>6$ then $\tilde{f}_{m_{2}-1}$ has at least three nonzero roots $\mu_{1},\mu_{2},\mu_{3}$ and we set $M_{\gamma}=\operatorname{\mathrm{id}}_{F_{1}}\oplus\operatorname{\mathrm{id}}_{F_{2}}\oplus\operatorname{\mathrm{id}}_{F_{3}},\quad M_{\delta}=(\mu_{1}\cdot\operatorname{\mathrm{id}}_{F_{1}})\oplus(\mu_{2}\cdot\operatorname{\mathrm{id}}_{F_{2}})\oplus(\mu_{3}\cdot\operatorname{\mathrm{id}}_{F_{3}}),$ while if $m_{2}=6$ then we set $M_{\gamma}=\operatorname{\mathrm{id}}_{F_{1}}\oplus\operatorname{\mathrm{id}}_{F_{2}}\oplus 0_{F_{3}},\quad M_{\delta}=(\mu_{1}\cdot\operatorname{\mathrm{id}}_{F_{1}})\oplus(\mu_{2}\cdot\operatorname{\mathrm{id}}_{F_{2}})\oplus 0_{F_{3}},$ where $\mu_{1},\mu_{2}$ are the two nonzero roots of $\tilde{f}_{m_{2}-1}$. By Parts (a) and (b) of Proposition 4.18, the assignments define a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. Moreover, the eigenspace decompositions of $M_{b}$ with respect to the maps $\phi_{1}=M_{\alpha}M_{\beta}$ and $\phi_{2}=M_{\delta}M_{\gamma}$ coincide with the eigenspace decompositions of $V$ with respect to the maps $\rho(\sigma)$ and $\rho(\tau)$, respectively, therefore a subrepresentation of $M$ must assign the space $0$ or $V$ to the vertex $b$.
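The block assignments above can also be checked numerically. The following sketch is an illustration only: it uses toy order-2 and order-3 matrices as stand-ins for $\rho(\sigma)$ and $\rho(\tau)$ (they are not an irreducible $S_{q}$-representation) together with hypothetical values for $\lambda_{1},\lambda_{2}$, and verifies that in the case $m_{1}>4$ the operator $\phi_{1}=M_{\alpha}M_{\beta}$ has the same eigenspace decomposition as $\rho(\sigma)$:

```python
import numpy as np

# Toy stand-ins: sigma has order 2 (a swap), tau has order 3 (a 3-cycle).
sigma = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
tau = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
assert np.allclose(sigma @ sigma, np.eye(3))
assert np.allclose(np.linalg.matrix_power(tau, 3), np.eye(3))

# Eigenprojections of sigma: P1 onto E1 (eigenvalue 1), P2 onto E2 (eigenvalue -1).
P1 = (np.eye(3) + sigma) / 2
P2 = (np.eye(3) - sigma) / 2

lam1, lam2 = 2.0, 3.0  # placeholders for two distinct nonzero roots of f~_{m1-1}

# Case m1 > 4: M_alpha = id_E1 (+) id_E2 and M_beta = lam1*id_E1 (+) lam2*id_E2.
M_alpha = np.eye(3)
M_beta = lam1 * P1 + lam2 * P2

phi1 = M_alpha @ M_beta
# phi1 acts as lam1 on E1 and as lam2 on E2, so its eigenspace decomposition
# coincides with that of sigma -- only the eigenvalues differ.
assert np.allclose(phi1 @ P1, lam1 * P1)
assert np.allclose(phi1 @ P2, lam2 * P2)
```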
It follows that $M$ contains a simple representation $N$ with $N_{b}=V$ and $N_{a}=\begin{cases}M_{a}=E_{1}\oplus E_{2}&\text{if $m_{1}>4$};\\\ E_{1}&\text{if $m_{1}=4$},\\\ \end{cases}\;N_{c}=\begin{cases}M_{c}=F_{1}\oplus F_{2}\oplus F_{3}&\text{if $m_{2}>6$};\\\ F_{1}\oplus F_{2}&\text{if $m_{2}=6$}.\\\ \end{cases}\quad$ ###### Remark 4.24. Given a vector space $V$ and operators $\phi_{1},\cdots,\phi_{k}$ on $V$, the study of subspaces of $V$ that are simultaneously compatible with the eigenspace decompositions of all the operators is closely related to enumerative geometry and Schubert calculus. For example, if $K=\mathbb{C}$, $V=K^{4}$, $k=2$ and the operators $\phi_{1},\phi_{2}$ yield eigenspace decompositions $V=E_{1}\oplus E_{2}$ and $V=F_{1}\oplus F_{2}$ where $E_{1},E_{2},F_{1},F_{2}$ all have dimension $2$, then “generically” there exists a subspace $W\subseteq V$ with dimension 2 that is simultaneously compatible with both decompositions, because a classical result in Schubert calculus asserts that generically, given four lines in the projective 3-space $\mathbb{C}\mathbb{P}^{3}$, there are two lines that intersect all these four lines. On the other hand, when the dimensions of the eigenspaces of $V$ with respect to $\phi_{1},\cdots,\phi_{k}$ are known, we can often show that no proper, nontrivial subspace of $V$ can be simultaneously compatible with the corresponding eigenspace decompositions by certain codimension computations involving products of Schubert classes. For instance, to construct a simple representation in $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ in Example 4.20, it is possible to specify for every positive integer $n$ a representation $M\in\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ in such a way that $\dim(M_{v})=6n$, the maps $M_{\alpha}$ and $M_{\beta}$ are isomorphisms, $\phi_{1}:=M_{\varepsilon_{1}}$ is diagonalizable with two eigenspaces $E_{1},E_{2}$ of dimension $3n$, and $\phi_{2}:=M_{\varepsilon_{2}}$ is diagonalizable with three eigenspaces $F_{1},F_{2},F_{3}$ of dimension $2n$. A codimension computation using Schubert calculus guarantees that generically $M_{v}$ has no subspace compatible with both $M_{\varepsilon_{1}}$ and $M_{\varepsilon_{2}}$ other than 0 and $M_{v}$ itself, so $M$ is simple (generically). This yields an alternative proof of the existence of certain simple representations in $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$. In the interest of space, however, we omit details of the codimension computation and the necessary background on Schubert calculus. In particular, we will not make precise what the word “generically” means in this paragraph. We end this subsection by proving a proposition to be used in § 5.2 under the following setting: Let $Q$ be the double quiver of a Coxeter diagram $G$ and let $\\{f_{n}\\}$ be as defined in Equation (17). Suppose that $G$ contains a vertex $v$ which is adjacent to a unique vertex in $G$. Let $u$ be that unique vertex, let $m=m(u,v)$, and let $\hat{G}$ be the graph obtained from $G$ by removing the vertex $v$ and the edge $v-u$. Let $\hat{Q}$ be the double quiver of $\hat{G}$, then let $\hat{\mathcal{I}}_{f}$ be the evaluation ideal of $\\{f_{n}\\}$ in $K\hat{Q}$. The proposition allows us to “enlarge” certain representations in $\mathrm{rep}_{K}(\hat{Q},\hat{\mathcal{I}}_{f})$ to a simple representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$: ###### Proposition 4.25. Let $Q,\mathcal{I}_{f},\hat{Q},\hat{\mathcal{I}}_{f}$ and $u,v$ be as described above.
Suppose that $\\{S(1),S(2),\dots,S(k)\\}$ is a nonempty set of pairwise non-isomorphic simple representations in $\mathrm{rep}_{K}(\hat{Q},\hat{\mathcal{I}}_{f})$ and let $S=\oplus_{i=1}^{k}S(i)$. If $m>3$, then there is a simple representation $M$ in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ such that $M_{a}=S_{a}$ for all $a\in\hat{Q}_{0}$. We prove the proposition via subspace analysis at the vertex $u$, by using the following lemma: ###### Lemma 4.26. Let $\Lambda$ be an arbitrary ring. Let $\\{S(1),\ldots,S(k)\\}$ be a set of pairwise non-isomorphic simple right $\Lambda$-modules, let $S=\oplus_{i=1}^{k}S(i)$, and let $x=\sum_{1\leq i\leq k}x_{i}\in S$ where $x_{i}\in S(i)$ for each $i$. If $x_{i}\neq 0$ for all $1\leq i\leq k$, then the submodule generated by $x$ equals $S$. ###### Proof. We use induction on $k$. The claim clearly holds when $k=1$. For $k>1$, let $I_{i}=\mathrm{Ann}(x_{i}):=\\{r\in\Lambda:x_{i}r=0\\}$ for each $i$. Then $I_{1},\dots,I_{k}$ are distinct maximal right ideals of $\Lambda$ since $S(1),\dots,S(k)$ are pairwise non-isomorphic simple right $\Lambda$-modules. Let $r\in I_{1}\setminus I_{2}$ and let $y_{i}=x_{i}r$ for all $i$. Then $y_{1}=0$ and $y_{2}\neq 0$. Let $J=\\{1\leq j\leq k:y_{j}\neq 0\\}$ and let $y=\sum_{j\in J}y_{j}$. Applying the inductive hypothesis to the module $S^{\prime}:=\oplus_{j\in J}S(j)$, we conclude that $S^{\prime}\subseteq y\Lambda$. Furthermore, since $y=\sum_{j\in J}y_{j}=\sum_{1\leq j\leq k}y_{j}=\sum_{1\leq j\leq k}x_{j}r=xr$, we have $y\Lambda\subseteq x\Lambda$. It follows that $S^{\prime}\subseteq x\Lambda$. In particular, we have $\sum_{j\in J}x_{j}\in x\Lambda$ and hence $\sum_{j\notin J}x_{j}\in x\Lambda$. By the inductive hypothesis, the element $\sum_{j\notin J}x_{j}$ generates the module $S^{\prime\prime}:=\oplus_{j\notin J}S(j)$, so $S^{\prime\prime}\subseteq x\Lambda$. It follows that $S=S^{\prime}\oplus S^{\prime\prime}\subseteq x\Lambda$. ∎ ###### Proof of Proposition 4.25. Denote the arrows $u\rightarrow v$ and $v\rightarrow u$ in $Q$ by $\alpha$ and $\beta$, respectively. By § 4.4, to construct a representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ it suffices to extend $S$ by a local representation $M_{\\{\alpha,\beta\\}}=\\{M_{u},M_{v},M_{\alpha},M_{\beta}\\}$ for the set $\\{\alpha,\beta\\}$ such that $M_{u}=S_{u}$. We do so by setting $M_{u}=S_{u}$ and choosing $M_{v},M_{\alpha},M_{\beta}$ as follows: 1. (a) If $m=4$, let $M_{v}=K$, let $d=\dim(S_{u})$ and let $B_{S}=B_{1}\sqcup B_{2}\sqcup\cdots\sqcup B_{k}$ be a basis of $S_{u}$ where $B_{i}$ is a basis of $S(i)_{u}$ for all $1\leq i\leq k$. Consider the $1\times d$ and $d\times 1$ matrices $X=\begin{bmatrix}1&1&\dots&1\end{bmatrix},\quad Y=\begin{bmatrix}d&-1&\dots&-1\end{bmatrix}^{T}.$ Define $M_{\alpha}:S_{u}\rightarrow K$ and $M_{\beta}:K\rightarrow S_{u}$ to be the maps whose matrices with respect to $B_{S}$ and $\\{1\\}$ (considered a basis of $K$) are given by $X$ and $Y$, respectively. Since $M_{\alpha}M_{\beta}=\operatorname{\mathrm{id}}$, the assignments for $M_{\alpha}$ and $M_{\beta}$ satisfy the relations $r_{f}(\alpha)=\alpha\beta\alpha-\alpha$ and $r_{f}(\beta)=\beta\alpha\beta-\beta$, so $M$ is indeed a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. 2. (b) If $m\geq 5$, then let $d$ and $X,Y$ be as before, set $M_{v}=S_{u}$, set $M_{\alpha}=\operatorname{\mathrm{id}}$, and define $M_{\beta}$ to be the map whose matrix with respect to $B_{S}$ is $I_{d}-2YX$.
These assignments ensure that $(M_{\alpha}M_{\beta})^{2}=(M_{\beta}M_{\alpha})^{2}=M_{\beta}^{2}=\operatorname{\mathrm{id}}$, so they define a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ by Corollary 4.19.(b). The representation $M$ satisfies the condition that $M_{a}=S_{a}$ for all $a\in\hat{Q}_{0}$ by definition, so it remains to show that $M$ is simple. To this end, let $N$ be a nonzero subrepresentation of $M$ in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$, and let $x=(x_{i})_{1\leq i\leq k}\in N_{u}$ be any nonzero vector, where $x_{i}\in S(i)_{u}$ for each $i$ under the decomposition $S_{u}=\oplus_{i=1}^{k}S(i)_{u}$. Then the set $J:=\\{1\leq j\leq k:x_{j}\neq 0\\}$ is nonempty. Invoke the equivalence between the categories $\mathrm{rep}_{K}(\hat{Q},\hat{\mathcal{I}}_{f})$ and mod-$K\hat{Q}/\hat{\mathcal{I}}_{f}$ to identify $S(1),\cdots,S(k)$ and $S$ as modules of the algebra $K\hat{Q}/\hat{\mathcal{I}}_{f}$. Then Lemma 4.26 implies that the submodule of $S$ generated by $x$ must contain the direct sum $\oplus_{j\in J}S(j)$. Invoking the same equivalence again, we conclude that $N_{u}$ contains a basis vector $e$ from the basis $B_{j}$ of $S(j)_{u}$ for some $j\in J$. By direct computation, the element $x^{\prime}=M_{\beta}M_{\alpha}(e)\in M_{u}$ must have nonzero entries at all coordinates, therefore $x^{\prime}$ generates all of $M_{u}$ in $N$, i.e., we have $N_{u}=M_{u}$, by Lemma 4.26. Since $S(1),\cdots,S(k)$ are simple and $M_{\alpha}$ is surjective, it follows that $M$ is simple. ∎ ## 5\. Results on mod-$A$ We maintain the setting of Section 4 and study the category mod-$A$ in this section. Recall that $A=A_{K}=K\otimes_{\mathbb{Z}}J_{C}$ where $K$ is an algebraically closed field with characteristic zero and $J_{C}$ is the subregular $J$-ring of an irreducible Coxeter system $(W,S)$ with Coxeter diagram $G$ and subregular cell $C$. ### 5.1. Results Our first main result characterizes in terms of the Coxeter diagram $G$ when mod-$A$ is semisimple, as well as when mod-$A$ _has finitely many simples_, i.e., when it contains finitely many simple modules up to isomorphism. ###### Theorem 5.1. The following conditions are equivalent: 1. (a) The graph $G$ is a tree, has no edge with infinite weight, and has at most one heavy edge. 2. (b) The category mod-$A$ is semisimple. 3. (c) The category mod-$A$ has finitely many simples. Our second main result gives a similar characterization of when mod-$A$ _has bounded simples_ in the sense that there exists an upper bound on the dimensions of the simple modules of mod-$A$. Since the simple modules of mod-$A$ are certainly bounded if there are only finitely many of them, we start with the assumption that the conditions of Theorem 5.1 do not hold: ###### Theorem 5.2. Suppose that $G$ contains a cycle or has an edge with infinite weight or has at least two heavy edges. Then the dimensions of the simple modules of mod-$A$ are bounded above if and only if one of the following mutually exclusive conditions holds: 1. (a) $G$ contains a unique cycle, and all edges in $G$ are simple. 2. (b) $G$ is a tree and contains exactly two heavy edges; moreover, each of those two edges has weight 4 or weight 5. Here and henceforth, a _cycle_ in a graph means a tuple $C=(v_{1},v_{2},\dots,v_{n})$ of $n\geq 3$ vertices in the graph such that $v_{1}-v_{2},v_{2}-v_{3},\dots,v_{n-1}-v_{n},v_{n}-v_{1}$ are all edges in the graph. Note that Theorems 5.1 and 5.2 have the following consequence: ###### Remark 5.3.
Recall from Example 4.12 that for every tuple $\mathbf{k}=(k_{1},\dots,k_{n})$ of positive integers, the algebra $A_{\mathbf{k}}=\langle x_{j}:1\leq j\leq n,x_{j}^{k_{j}}=1\rangle$ is isomorphic to the group algebra of the free product $C_{\mathbf{k}}=C_{k_{1}}*\dots*C_{k_{n}}$ where each $C_{k_{i}}$ is the cyclic group of order $k_{i}$. Note that if $k_{j}=1$ for some $j$ then $C_{k_{j}}$ is the trivial group and makes trivial contribution to the free product in the sense that $C_{\mathbf{k}}\cong C_{k_{1}}*\dots*C_{k_{j-1}}*C_{k_{j+1}}*\dots*C_{k_{n}}$, so we assume from now on that $k_{j}>1$ for all $1\leq j\leq n$. In the example, we showed that $A_{\mathbf{k}}$ is Morita equivalent to the algebra $A$ associated with a Coxeter system whose Coxeter diagram is a tree and has a heavy edge of weight $m_{j}:=2k_{j}+1$ for each $1\leq j\leq n$; in particular, under the assumption that $k_{j}>1$ for all $j$, the weight $m_{j}$ is an odd number greater than 3, and we have $m_{j}=5$ if and only if $k_{j}=2$. Theorems 5.1 and 5.2 now imply the following result: ###### Proposition 5.4. Suppose $\mathbf{k}=(k_{1},\dots,k_{n})$ where $k_{i}\in\mathbb{Z}_{>1}$ for all $1\leq i\leq n$, and let mod-$A_{\mathbf{k}}$ be the category of finite dimensional right modules of $A_{\mathbf{k}}$. 1. (a) The category mod-$A_{\mathbf{k}}$ is semisimple if and only if it contains finitely many isomorphism classes of simple modules. Moreover, these two conditions are satisfied if and only if $n=1$, i.e., if and only if $C_{\mathbf{k}}$ has a single factor and is a finite cyclic group. 2. (b) Suppose the category mod-$A_{\mathbf{k}}$ has infinitely many pairwise non-isomorphic simple modules. Then the simple modules of mod-$A_{\mathbf{k}}$ have bounded dimensions if and only if $\mathbf{k}=(2,2)$, i.e., if and only if $C_{\mathbf{k}}$ is isomorphic to the free product $C_{2}*C_{2}$. Let us explain our strategy for proving Theorems 5.1 and 5.2. For Theorem 5.1, first recall that (a) implies (b) by Proposition 4.9, thanks to simple graph contractions. It is well-known that Condition (a) is equivalent to the condition that the cell $C$ is finite (see [Lus83]), therefore (a) also implies (c), since $\dim(A)=\lvert C\rvert$ and a finite dimensional semisimple algebra has finitely many simple modules. To prove the theorem, it remains to prove that (b) implies (a) and that (c) implies (a). We will prove the contrapositives of these two implications: ###### Proposition 5.5. If $G$ contains a cycle or has an edge with infinite weight or has at least two heavy edges, then mod-$A$ contains a module which is not semisimple. ###### Proposition 5.6. If $G$ contains a cycle or has an edge with infinite weight or has at least two heavy edges, then mod-$A$ contains an infinite set of pairwise non-isomorphic simple modules. To prove Theorem 5.2, we will first prove the “if” implication: ###### Proposition 5.7. If $G$ satisfies either Condition (a) or Condition (b) in Theorem 5.2, then mod-$A$ has bounded simples. To prove the “only if” implication of Theorem 5.2, we again prove its contrapositive. Doing so requires describing the situations where Conditions (a) and (b) in the theorem fail under the assumption that $G$ is not a tree, has an edge with infinite weight, or has at least two heavy edges. A moment’s thought reveals that we may formulate the contrapositive as follows: ###### Proposition 5.8. The dimensions of the simple modules in mod-$A$ have no upper bound if $G$ satisfies one of the following conditions: 1.
(a) $G$ contains a unique cycle as well as a heavy edge; 2. (b) $G$ contains at least two cycles; 3. (c) $G$ is a tree and has exactly two heavy edges; moreover, one of these heavy edges has weight at least 6; 4. (d) $G$ is a tree and has at least three heavy edges. We have reduced the proofs of Theorems 5.1 and 5.2 to the proofs of Propositions 5.5, 5.6, 5.7, and 5.8, which we will give in § 5.2. Note that all these theorems and propositions are stated without any reference to any quivers. On the other hand, the proofs in § 5.2 will all use quiver representations and rely heavily on the techniques and examples of Section 4. It is also worth noting that part of the proof of Proposition 5.8 will use Proposition 5.6: to obtain desired simple representations for the former proposition, we will sometimes form direct sums of simple representations promised by the latter proposition and then “enlarge” the direct sums using Proposition 4.25. ### 5.2. Proofs Let $Q$ be the double quiver of $G$, let $\\{f_{n}\\}$ be the uniform family of polynomials defined by Equation (17), and let $\mathcal{I}_{f}$ be the evaluation ideal of $\\{f_{n}\\}$ in $KQ$. We prove Propositions 5.5-5.8 by proving the same conclusions for the equivalent category $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ in this subsection. Of the four propositions, we first prove Proposition 5.7. The other three propositions all state that mod-$A$ contains modules with certain properties, so we will prove them by explicit construction of suitable representations in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. Since the properties of mod-$A$ that we are interested in, namely, being semisimple, having finitely many simples, and having bounded simples, are all preserved under Morita equivalences, when dealing with $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ we may assume that certain contractions have been performed on $Q$ and thus effectively deal with a category of the form $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ from § 4.4. For instance, by Example 4.8 and Remark 4.14, if the Coxeter diagram $G$ is the tree from Figure 2 then we may study mod-$A$ via the category $\mathrm{rep}_{K}(\bar{Q},\bar{\mathcal{I}}_{f})$ for the generalized double quiver $\bar{Q}$ from Figure 5. As final preparation for our proofs, we fix some notation and terminology for cycles in the Coxeter diagram $G$. Given a cycle $C=(v_{1},v_{2},\dots,v_{n})$ with $n\geq 3$ vertices in $G$, we say $C$ has _length_ $n$, set $v_{n+1}:=v_{1}$, define $V_{C}=\\{v_{1},\dots,v_{n}\\}$, and define $E_{C}=\\{v_{i}-v_{i+1}:1\leq i\leq n\\}.$ We call the edges in $E_{C}$ the _sides_ of $C$ and define a _diagonal in $C$_ to be an edge in $G$ that connects two vertices of $C$ but does not lie in $E_{C}$. We say $C$ is a _minimal_ cycle in $G$ if $C$ has no diagonals (a diagonal in $C$ would break $C$ into two shorter cycles). For each $1\leq i\leq n$, we let $m_{C,i}=m(v_{i},v_{i+1})$ and denote the arrows $v_{i}\rightarrow v_{i+1}$ and $v_{i+1}\rightarrow v_{i}$ in the double quiver $Q$ of $G$ by $\alpha({C,i})$ and $\beta(C,i)$, respectively. For a representation $M\in\mathrm{rep}_{K}Q$, we let $M_{C}:=M_{\alpha(C,n)}\circ\dots\circ M_{\alpha(C,2)}\circ M_{\alpha(C,1)},$ $\bar{M}_{C}:=M_{\beta(C,1)}\circ M_{\beta(C,2)}\circ\dots\circ M_{\beta(C,n)}.$ If it is clear what $C$ is from context, then we omit $C$ and write $m_{i},\alpha_{i},\beta_{i},M_{i}$ and $\bar{M}_{i}$ for $m_{C,i},\alpha(C,i),\beta(C,i),M_{\alpha(C,i)}$ and $M_{\beta(C,i)}$, respectively. ###### Proof of Proposition 5.7.
It suffices to show that mod-$A$, or equivalently $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$, has bounded simples if $G$ satisfies one of the conditions in Theorem 5.2. To do so, first assume that $G$ satisfies Condition (a), i.e., that $G$ contains a unique cycle and has only simple edges. Let $C$ be the unique cycle. Then $C$ is necessarily minimal. By applying simple graph contractions if necessary, we may assume that $G$ is exactly $C$ in the sense that $V_{C}$ contains all the vertices of $G$ and $E_{C}$ contains all the edges of $G$. But then the algebra $A$ is Morita equivalent to the Laurent polynomial ring $\mathcal{A}=K[t,t^{-1}]$ by Example 4.10. As $\mathcal{A}$ is commutative, every simple module of $\mathcal{A}$ has dimension 1, so mod-$A$ has bounded simples. Next, suppose that $G$ satisfies Condition (b), i.e., that $G$ is a tree with exactly two heavy edges and the edges have weights $m_{1},m_{2}\in\\{4,5\\}$. Applying simple graph contractions on $G$ if necessary, we may assume that $G$ and $Q$ are as pictured in Figure 7. Let $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. Let $\alpha,\beta,\gamma,\delta$ be as in Figure 7 and let $\phi_{1}=M_{\alpha}M_{\beta},\phi_{2}=M_{\delta}M_{\gamma}$. Let $i\in\\{1,2\\}$. Then $\phi_{i}$ is diagonalizable by Proposition 4.18.(b). Since $f_{3}=x^{3}-x$ and $f_{4}=x^{4}-1$, it follows that if $m_{i}=4$, then $\phi_{i}^{2}=\phi_{i}$ and the eigenvalues of $\phi_{i}$ lie in the set $\\{0,1\\}$; if $m_{i}=5$, then $\phi_{i}^{2}=\operatorname{\mathrm{id}}$ and the eigenvalues of $\phi_{i}$ lie in the set $\\{-1,1\\}$. Now let $n=\dim(M_{b})$ and suppose the eigenspace decompositions of $M_{b}$ relative to $\phi_{1}$ and $\phi_{2}$ are $M_{b}=E_{1}\oplus E_{2}$ and $M_{b}=F_{1}\oplus F_{2}$, respectively, with $E_{1},F_{1}$ being the eigenspaces for the eigenvalue 1 and $E_{2},F_{2}$ being the eigenspaces for the eigenvalue 0 or -1. We claim that $M$ contains a nonzero submodule of dimension at most 6. The claim would imply that $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ has bounded simples, as desired. To prove the claim, first note that if $E_{i}\cap F_{j}\neq 0$ for some $i,j\in\\{1,2\\}$, then any nonzero vector $v\in E_{i}\cap F_{j}\subseteq M_{b}$ generates a submodule $N$ of $M$ where $N_{a},N_{b},N_{c}$ are the spans of $M_{\beta}(v),v,M_{\gamma}(v)$, respectively. The module $N$ has dimension at most 3, proving the claim. Otherwise, we must have $n=2k$ for some positive integer $k$ and $\dim(E_{1})=\dim(E_{2})=\dim(F_{1})=\dim(F_{2})=k$. In this case, we may choose a suitable basis $B$ for $M_{b}$ so that the matrix of $\phi_{1}$ relative to $B$ is the block diagonal matrix $[\phi_{1}]_{B}=\begin{bmatrix}I_{k}&0\\\ 0&D\\\ \end{bmatrix}$ where $I_{k}$ is the $k\times k$ identity matrix, $D=0$ if $m_{1}=4$, and $D=-I_{k}$ if $m_{1}=5$. Further, by choosing a basis $B^{\prime}$ of $M_{b}$ for which the change-of-basis matrix $P$ from $B^{\prime}$ to $B$ is of the block diagonal form $P=\begin{bmatrix}P_{1}&0\\\ 0&P_{2}\end{bmatrix}$ with suitable $k\times k$ matrices $P_{1},P_{2}$, we can ensure that $[\phi_{1}]_{B^{\prime}}=P^{-1}[\phi_{1}]_{B}P=[\phi_{1}]_{B},\quad[\phi_{2}]_{B^{\prime}}=P^{-1}[\phi_{2}]_{B}P=\begin{bmatrix}A_{11}&A_{12}\\\ A_{21}&A_{22}\end{bmatrix}$ where each $A_{ij}$ is $k\times k$ and $A_{11},A_{22}$ are in Jordan canonical form. Suppose $B^{\prime}=\\{v_{1},\cdots,v_{n}\\}$.
Then $\phi_{1}(v_{1})=v_{1}$ and $\phi_{2}(v_{1})=\lambda v_{1}+v$ where $\lambda$ is the top left entry in $A_{11}$ and $v$ lies in the span $\langle v_{k+1},v_{k+2},\cdots,v_{n}\rangle$. It follows that $\langle v_{1},\phi_{2}(v_{1})\rangle=\langle v_{1},v\rangle$. Moreover, since either $\phi_{2}^{2}=\phi_{2}$ or $\phi_{2}^{2}=\operatorname{\mathrm{id}}$, we have $\phi_{2}(v)=\phi_{2}(\phi_{2}(v_{1})-\lambda v_{1})=\phi_{2}^{2}(v_{1})-\lambda\phi_{2}(v_{1})\in\langle v_{1},\phi_{2}(v_{1})\rangle=\langle v_{1},v\rangle.$ It follows that the space $V:=\langle v_{1},v\rangle$ is invariant under both $\phi_{1}$ and $\phi_{2}$, so it generates a subrepresentation $N$ of $M$ such that $N_{b}=V$ and $\dim(N)\leq 6$. This completes the proof. ∎ ###### Proof of Proposition 5.5. We need to construct a non-semisimple representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ when $G$ contains a cycle, an edge with infinite weight, or at least two heavy edges. We first deal with the case that $G$ contains an edge with infinite weight or a cycle. Let $\\{a,b\\}$ be an edge with infinite weight in $G$ if such an edge exists; otherwise, let $a,b$ be the vertices $v_{1},v_{2}$ from a cycle $C=(v_{1},v_{2},\dots,v_{n})$ in $G$, respectively. Denote the arrow $a\rightarrow b$ by $\alpha$ and the arrow $b\rightarrow a$ by $\beta$. To construct $M$, first let $M_{s}=K^{2}$ for all $s\in Q_{0}$. Let $m=m(a,b)$, let $\lambda_{m}$ be a root of the polynomial $\tilde{f}_{m-1}$ if $m<\infty$, let $x$ be an arbitrary nonzero scalar in $K$, and let $J_{x}=\begin{bmatrix}x&1\\\ 0&x\end{bmatrix},\quad L=\begin{cases}I_{2}&\text{if $m=\infty$};\\\ \lambda_{m}\cdot J_{x}^{-1}&\text{if $m<\infty$}.\end{cases}$ Let $M_{\alpha},M_{\beta}$ be the maps given by $J_{x}$ and $L$, respectively, then let $M_{\gamma}=\operatorname{\mathrm{id}}$ for all arrows $\gamma\in Q_{1}\setminus\\{\alpha,\beta\\}$. The assignment $M:=(M_{s},M_{\gamma})_{s\in Q_{0},\gamma\in Q_{1}}$ defines a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ by Proposition 4.18 and Corollary 4.19. It is clear that $M$ has a subrepresentation $N$ with $N_{s}=\langle e_{1}\rangle$, the span of the first standard basis vector, for all $s\in Q_{0}$. On the other hand, if we set $\phi=M_{\beta}M_{\alpha}$ in the case $m=\infty$ and set $\phi=M_{C}$ otherwise, then $\phi$ is an endomorphism of $M_{a}$ given by the matrix $J_{x}$, which is in Jordan canonical form and has a single $2\times 2$ Jordan block; therefore the subspace $N_{a}$ of $M_{a}$ cannot have a complement in $M_{a}$ that is invariant under $\phi$. It follows that $N$ has no complement in $M$ as a subrepresentation, so $M$ is not semisimple. It remains to consider the case where $G$ has no cycles or edges of infinite weight but has at least two heavy edges. By applying graph contractions, we may ensure that $G$ contains a subgraph of the form shown in Figure 7 and $Q$ contains a subquiver of the form shown in the same figure. We may define a representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ by setting $M_{a},M_{b},M_{c},M_{\alpha},M_{\beta},M_{\gamma},M_{\delta}$ as in Example 4.21 and setting $M_{s}=K^{2}$ and $M_{\zeta}=\operatorname{\mathrm{id}}$ for all vertices $s\in Q_{0}\setminus\\{a,b,c\\}$ and all arrows $\zeta\in Q_{1}\setminus\\{\alpha,\beta,\gamma,\delta\\}$, because doing so amounts to assembling a consistent collection of local representations in the sense of § 4.4. Moreover, it is clear that $M$ has a subrepresentation $L$ with $L_{s}=\langle e_{1}\rangle$ for all $s\in Q_{0}$.
By the subspace analysis in Example 4.21, the only line in $M_{b}$ that a subrepresentation of $M$ can assign to $b$ is $\langle e_{1}\rangle=L_{b}$, therefore the subrepresentation $L$ has no complement in $M$. It follows that $M$ is not semisimple, and we are done. ∎ ###### Proof of Proposition 5.6. We need to find infinitely many simple representations in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ when $G$ contains a cycle, an edge with infinite weight, or at least two heavy edges. We keep the notation from the previous proof and start with the case that $G$ contains an edge with infinite weight or a cycle. In this case, let $M$ and $N$ be as in the previous proof. Denote $N$ by $N^{x}$ to reflect the fact that $N$ depends on the value of the scalar $x$ because the matrix $J_{x}$ does. Then $N^{x}$ is clearly simple, so to show that $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ has infinitely many simples it suffices to verify that $N^{x}\not\cong N^{y}$ whenever $x\neq y$. If $m=\infty$, then we can do so by using basic linear algebra to show that there do not exist linear isomorphisms $\phi_{a}:N^{x}_{a}\rightarrow N^{y}_{a}$ and $\phi_{b}:N^{x}_{b}\rightarrow N^{y}_{b}$ such that $\phi_{b}N^{x}_{\alpha}=N^{y}_{\alpha}\phi_{a}$ and $\phi_{a}N^{x}_{\beta}=N^{y}_{\beta}\phi_{b}$ simultaneously. If $m\neq\infty$, then by definition we have $a=v_{1},b=v_{2}$ for vertices $v_{1},v_{2}$ in a cycle $C=(v_{1},v_{2},\dots,v_{n})$, and we can show that $N^{x}\not\cong N^{y}$ if $x\neq y$ by showing that no linear isomorphism $\phi:N^{x}_{a}\rightarrow N^{y}_{a}$ can satisfy $N^{y}_{C}\phi=\phi N^{x}_{C}$ when $x\neq y$. We omit the details. It remains to deal with the case where $G$ has no cycle or edges of infinite weight but has at least two heavy edges. As in the previous proof, we may assume that $G$ contains the Coxeter diagram in Figure 7 as a subgraph and $Q$ contains the quiver there as a subquiver. To obtain an infinite family of simple representations in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$, we extend the simple representations of the form $N^{x}$ from Example 4.22 as follows: 1. (a) If $N^{x}_{c}=K^{2}$, then we extend $N^{x}$ by setting $N^{x}_{s}=K^{2}$ and $N^{x}_{\zeta}=\operatorname{\mathrm{id}}$ for all vertices $s\in Q_{0}\setminus\\{a,b,c\\}$ and all arrows $\zeta\in Q_{1}\setminus\\{\alpha,\beta,\gamma,\delta\\}$. This specifies a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$, and the extended representation $N^{x}$ is still simple since all the added maps $N^{x}_{\zeta}$ are isomorphisms. 2. (b) If $N^{x}_{c}=K$, then note that since $G$ contains no cycle, removing the edge $b-c$ from $G$ must result in a graph with two connected components, one containing $b$ and the other containing $c$. Let $V_{b},V_{c}$ be the sets of vertices in the first and second component, respectively. Then we may extend $N^{x}$ by setting $N^{x}_{s}=K^{2}$ for all $s\in V_{b}\setminus\\{a,b\\}$, setting $N^{x}_{s}=K$ for all $s\in V_{c}\setminus\\{c\\}$, and setting $N^{x}_{\zeta}=\operatorname{\mathrm{id}}$ for all $\zeta\in Q_{1}\setminus\\{\alpha,\beta,\gamma,\delta\\}$. It is easy to see that the extension gives a simple representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ as in Case (a). To finish the proof, it suffices to show that $N^{x}\not\cong N^{y}$ whenever $x\neq y$. This holds by the same argument used at the end of Example 4.22. ∎ ###### Proof of Proposition 5.8. Let $n$ be an arbitrary positive integer larger than 7.
We prove the proposition by constructing a simple representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ with $\dim(M)>n$ when any of the Conditions (a)-(d) holds: (a) Suppose $G$ contains a unique cycle $C=(v_{1},v_{2},\dots,v_{k})$ and a heavy edge. We first consider the case where the set $E_{C}$ contains a heavy edge. As in the proof of Proposition 5.7, up to simple graph contractions we may assume $G$ is exactly $C$. Without loss of generality, suppose that the edge $\\{v_{1},v_{2}\\}$ is heavy and let $m=m(v_{1},v_{2})$. Depending on whether $m>4$ or $m=4$, we construct $M$ in one of two ways: 1. (i) If $m>4$, then let $q=n+1$, consider the symmetric group $S_{q}$, and consider the partition $(n,1)$ of $q$. By the theory of Specht modules, the partition gives rise to an irreducible representation $\rho:S_{q}\rightarrow\mathrm{GL}(V)$ of $S_{q}$ over $K$ where $\dim(V)=n$. Recall from Example 4.20 that since $n>7$ there exist elements $\sigma,\tau\in S_{q}$ which generate $S_{q}$ with orders 2 and 3, respectively, and that consequently we have $\rho(\sigma)^{2}=\rho(\tau)^{3}=\operatorname{\mathrm{id}}_{V}$. To define $M$, let $M_{v}=V$ for all $v\in Q_{0}$, let $M_{\alpha_{1}}=\rho(\sigma),\quad M_{\alpha_{2}}=\rho(\tau)\rho(\sigma)^{-1},\quad M_{\beta_{2}}=M_{\alpha_{2}}^{-1},$ and let $M_{\gamma}=\operatorname{\mathrm{id}}$ for all $\gamma\in Q_{1}\setminus\\{\alpha_{1},\alpha_{2},\beta_{2}\\}$. This defines a representation by Proposition 4.18 and Corollary 4.19, and clearly we have $\dim(M)>n$. To see that $M$ is simple, note that the operator $M_{\beta_{1}}M_{\alpha_{1}}:M_{v_{1}}\rightarrow M_{v_{1}}$ equals $\rho(\sigma)$ and the operator $M_{C}:M_{v_{1}}\rightarrow M_{v_{1}}$ equals $\rho(\tau)$. By subspace analysis at $v_{1}$ similar to the analysis at $v$ in Example 4.20, any subrepresentation $N$ of $M$ must have either $N_{v_{1}}=0$ or $N_{v_{1}}=M_{v_{1}}$. Since $M_{\gamma}$ is an isomorphism for all $\gamma\in Q_{1}$, it follows that $M$ is simple. 2. (ii) Now suppose $m=4$. By Example 4.10, up to contractions we may assume that $Q$ is the generalized double quiver denoted $Q^{(3)}$ in Figure 3. In other words, after relabelling vertices and arrows we may assume that $Q$ is the quiver with a unique vertex $v$ along with two loops $\alpha,\beta:v\rightarrow v$ and that $\mathcal{I}_{f}$ is generated by the relations $f_{m-1}(\alpha,\beta)=\alpha\beta\alpha-\alpha$ and $f_{m-1}(\beta,\alpha)=\beta\alpha\beta-\beta$. To construct $M$, let $M_{v}=K^{n}$, let $B=\\{e_{1},e_{2},\dots,e_{n}\\}$ be the standard basis of $K^{n}$, and let $M_{\alpha},M_{\beta}$ be the unique linear maps defined by $M_{\alpha}(e_{i})=\begin{cases}e_{i+1}&\text{if $1\leq i<n$};\\\ 0&\text{if $i=n$},\end{cases}\quad M_{\beta}(e_{i})=\begin{cases}e_{i-1}&\text{if $1<i\leq n$};\\\ 0&\text{if $i=1$}.\end{cases}$ Intuitively, we may think of $M_{\alpha}$ as a “raising” operator on $K^{n}$ and $M_{\beta}$ as a “lowering” operator in light of their effects on the standard basis elements. It is easy to check that the above assignments define a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$. Moreover, given any nonzero vector $w\in M_{v}$ we may use the maps $M_{\alpha},M_{\beta}$ to obtain from $w$ any basis vector in $B$ up to a scalar, therefore $M$ must be simple. It remains to deal with the case that all edges in $E_{C}$ are simple but some heavy edge of $G$ lies outside $E_{C}$. Applying simple graph contractions if necessary, we may assume that some vertex $u\in V_{C}$ is incident to a heavy edge $\\{u,v\\}$ of weight $m=m(u,v)\geq 4$ for some $v\notin V_{C}$.
Without loss of generality, we may also assume that $u=v_{1}$. View $C$ as a graph $G^{\prime\prime}$ with vertex set $V_{C}$ and edge set $E_{C}$, define $G^{\prime}$ to be the subgraph of $G$ obtained by adding the vertex $v$ and the edge $\\{u,v\\}$ to $G^{\prime\prime}$, and let $Q^{\prime\prime}$ and $Q^{\prime}$ be the double quivers of $G^{\prime\prime}$ and $G^{\prime}$, respectively. Let $\mathcal{I}^{\prime\prime}_{f}$ and $\mathcal{I}^{\prime}_{f}$ be the evaluation ideals of $\\{f_{n}\\}$ in $KQ^{\prime\prime}$ and $KQ^{\prime}$. Then by the proof of Proposition 5.6, the category $\mathrm{rep}_{K}(Q^{\prime\prime},\mathcal{I}^{\prime\prime}_{f})$ contains $(n+1)$ pairwise non-isomorphic simple representations $S(1),S(2),\dots,S(n),S(n+1)$. Let $S=\oplus_{i=1}^{n+1}S(i)$. Then Proposition 4.25 implies that the category $\mathrm{rep}_{K}(Q^{\prime},\mathcal{I}^{\prime}_{f})$ contains a simple representation $M$ with $\dim(M_{u})=\dim(S_{u})$. Finally, we may extend $M$ to a representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$, by using the same idea as in “Case (b)” in the proof of Proposition 5.6: since $C$ is the only cycle in $G$, removing the edges in $E_{C}$ from $G$ results in a graph with $k$ connected components with vertex sets $V_{1},V_{2},\dots,V_{k}$ such that $v_{i}\in V_{i}$ for all $1\leq i\leq k$; we can then extend $M$ by setting $M_{a}=M_{v}$ for all $a\in V_{1}\setminus\\{v_{1},v\\}$, setting $M_{a}=M_{v_{i}}$ for all $a\in V_{i}\setminus\\{v_{i}\\}$ for each $2\leq i\leq k$, and setting $M_{\gamma}=\operatorname{\mathrm{id}}$ for all $\gamma\in Q_{1}\setminus Q^{\prime}_{1}$. It is clear that the extended representation is still simple and has dimension larger than $n$. (b) Suppose $G$ contains at least two cycles. In light of Part (a), to construct the desired representation $M$ we may assume that all edges in $G$ are simple. By considering minimal cycles and applying simple graph contractions if necessary, we may assume that $G$ contains two minimal cycles which share a vertex, i.e., two minimal cycles of the form $C=(v_{1},v_{2},\dots,v_{k})$ and $C^{\prime}=(w_{1},w_{2},\dots,w_{l})$ where $v_{1}=w_{1}$. Furthermore, while $C$ and $C^{\prime}$ may share an edge, we may write the tuples $C,C^{\prime}$ in such a way that $v_{2}\neq w_{2}$. Denote the arrows $v_{1}\rightarrow v_{2},v_{2}\rightarrow v_{1},w_{1}\rightarrow w_{2}$ and $w_{2}\rightarrow w_{1}$ by $\alpha_{1},\beta_{1},\alpha^{\prime}_{1}$ and $\beta^{\prime}_{1}$, respectively. To construct the desired representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$, consider the representation $\rho:S_{q}\rightarrow\mathrm{GL}(V)$ from Part (a).(i), let $\sigma,\tau$ be the same elements in $S_{q}$ as before, let $M_{s}=V$ for all $s\in Q_{0}$, let $M_{\alpha_{1}}=\rho(\sigma),\quad M_{\beta_{1}}=\rho(\sigma)^{-1},\quad M_{\alpha^{\prime}_{1}}=\rho(\tau),\quad M_{\beta^{\prime}_{1}}=\rho(\tau)^{-1},$ and let $M_{\gamma}=\operatorname{\mathrm{id}}$ for all arrows $\gamma\in Q_{1}\setminus\\{\alpha_{1},\beta_{1},\alpha^{\prime}_{1},\beta^{\prime}_{1}\\}$. This defines a representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ by Proposition 4.18 and Corollary 4.19, and it is obvious that $\dim(M)>n$. The endomorphisms $M_{C}$ and $M_{C^{\prime}}$ of $M_{v_{1}}$ equal $\rho(\sigma)$ and $\rho(\tau)$, respectively, therefore $M$ is simple by the same arguments as before. (c) Suppose $G$ is a tree and has exactly two heavy edges, one of which has weight at least 6.
Using simple graph contractions if necessary, we may assume that $G$ is of the form shown in Figure 7, with $m_{1}\geq 4$ and $m_{2}\geq 6$. Let $q$, $\rho$, and $V$ be as in Part (a).(i). Then by Example 4.23, there exists a simple representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ such that $\dim(M)>\dim(V)=n$, as desired. (d) Suppose that $G$ is a tree and has at least three heavy edges. Using simple graph contractions if necessary, we may assume that $G$ contains a subgraph of one of the forms shown in Figure 8. Figure 8. (The two possible subgraphs $G_{1}$ and $G_{2}$, each on the vertices $x,y,u,v$ with three edges of heavy weights $m_{1},m_{2},m_{3}$.) In both cases, let $G^{\prime\prime}$ be the subgraph of $G$ induced by the vertices $x,y,u$, let $G^{\prime}$ be the subgraph of $G$ obtained by adding the vertex $v$ and the edge $\\{u,v\\}$ to $G^{\prime\prime}$, then define $Q^{\prime\prime},\mathcal{I}^{\prime\prime}_{f},Q^{\prime},\mathcal{I}^{\prime}_{f}$ as we did in Part (a). We may produce a representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ in the same fashion as in Part (a): first, use the proof of Proposition 5.6 to find $(n+1)$ pairwise non-isomorphic simple representations in $\mathrm{rep}_{K}(Q^{\prime\prime},\mathcal{I}^{\prime\prime}_{f})$; second, use Proposition 4.25 to extend the direct sum of these $(n+1)$ simple representations to a simple representation $M$ in $\mathrm{rep}_{K}(Q^{\prime},\mathcal{I}^{\prime}_{f})$; finally, further extend $M$ to a representation in $\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ where $M_{\gamma}=\operatorname{\mathrm{id}}$ for all $\gamma\in Q_{1}\setminus Q^{\prime}_{1}$. As before, the extended representation $M\in\mathrm{rep}_{K}(Q,\mathcal{I}_{f})$ must be simple and satisfy $\dim(M)>n$. This completes the proof. ∎ ## References * [Alv08] D. Alvis. Subrings of the asymptotic Hecke algebra of type $H_{4}$. Experiment. Math., 17(3):375–383, 2008. * [BB81] A. Beĭlinson and J. Bernstein. Localisation de $g$-modules. C. R. Acad. Sci. Paris Sér. I Math., 292(1):15–18, 1981. * [BFO09] R. Bezrukavnikov, M. Finkelberg, and V. Ostrik. On tensor categories attached to cells in affine Weyl groups. III. Israel J. Math., 170:207–234, 2009. * [BK81] J.-L. Brylinski and M. Kashiwara. Kazhdan-Lusztig conjecture and holonomic systems. Invent. Math., 64(3):387–410, 1981. * [BK18] A. Braverman and D. Kazhdan. Remarks on the asymptotic Hecke algebra. In Lie groups, geometry, and representation theory, volume 326 of Progr. Math., pages 91–108. Birkhäuser/Springer, Cham, 2018. * [Bon17] C. Bonnafé. Kazhdan-Lusztig cells with unequal parameters, volume 24 of Algebra and Applications. Springer, Cham, 2017. * [DDPW08] B. Deng, J. Du, B. Parshall, and J. Wang. Finite Dimensional Algebras and Quantum Groups, volume 150 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2008. * [DL15] A. Diaz-Lopez. Representations of Hecke algebras on quotients of path algebras. https://arxiv.org/abs/1509.02403, 2015. * [EW14] B. Elias and G. Williamson. The Hodge theory of Soergel bimodules. Ann. of Math. (2), 180(3):1089–1136, 2014. * [EW16] B. Elias and G. Williamson. Soergel calculus. Represent. Theory, 20:295–374, 2016. * [Gec98] M. Geck. Kazhdan-Lusztig cells and decomposition numbers. Represent. Theory, 2:264–277, 1998. * [Gec07] M. Geck. Hecke algebras of finite type are cellular. Invent. Math., 169(3):501–517, 2007. * [KL79] D. Kazhdan and G. Lusztig. Representations of Coxeter groups and Hecke algebras. Invent. Math., 53(2):165–184, 1979. * [KL80] D. Kazhdan and G. Lusztig.
Schubert varieties and Poincaré duality. In Geometry of the Laplace operator (Proc. Sympos. Pure Math., Univ. Hawaii, Honolulu, Hawaii, 1979), Proc. Sympos. Pure Math., XXXVI, pages 185–203. Amer. Math. Soc., Providence, R.I., 1980. * [KP19] D. Kim and P. Pylyavskyy. Asymptotic Hecke algebras and Lusztig-Vogan bijection via affine matrix-ball construction. https://arxiv.org/abs/1902.06668, 2019. * [Lus83] G. Lusztig. Some examples of square integrable representations of semisimple $p$-adic groups. Trans. Amer. Math. Soc., 277(2):623–653, 1983. * [Lus87] G. Lusztig. Cells in affine Weyl groups. II. J. Algebra, 109(2):536–548, 1987. * [Lus89] G. Lusztig. Cells in affine Weyl groups. IV. J. Fac. Sci. Univ. Tokyo Sect. IA Math., 36(2):297–328, 1989. * [Lus95] G. Lusztig. Quantum groups at $v=\infty$. In Functional analysis on the eve of the 21st century, Vol. 1 (New Brunswick, NJ, 1993), volume 131 of Progr. Math., pages 199–221. Birkhäuser Boston, Boston, MA, 1995. * [Lus14a] G. Lusztig. Asymptotic Hecke algebras and involutions. In Perspectives in representation theory, volume 610 of Contemp. Math., pages 267–278. Amer. Math. Soc., Providence, RI, 2014. * [Lus14b] G. Lusztig. Hecke algebras with unequal parameters. https://arxiv.org/abs/math/0208154v2, 2014. * [Lus18] G. Lusztig. Special representations of Weyl groups: a positivity property. Adv. Math., 327:161–172, 2018. * [Mil01] G. A. Miller. On the groups generated by two operators. Bull. Amer. Math. Soc., 7(10):424–426, 1901. * [Pie10] T. Pietraho. Module structure of cells in unequal-parameter Hecke algebras. Nagoya Math. J., 198:23–45, 2010. * [Sav05] A. Savage. Finite-dimensional algebras and quivers. https://arxiv.org/abs/math/0505082, 2005. * [Sch14] R. Schiffler. Quiver representations. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer, Cham, 2014. * [Soe90] W. Soergel. Kategorie O, perverse Garben und Moduln über den Koinvarianten zur Weylgruppe. J. Amer. Math. Soc., 3(2):421–445, 1990. * [Soe92] W. Soergel. The combinatorics of Harish-Chandra bimodules. J. Reine Angew. Math., 429:49–74, 1992. * [Soe07] W. Soergel. Kazhdan-Lusztig-Polynome und unzerlegbare Bimoduln über Polynomringen. J. Inst. Math. Jussieu, 6(3):501–525, 2007. * [Wil18] G. Williamson. The Hodge theory of the Hecke category. In European Congress of Mathematics, pages 663–683. Eur. Math. Soc., Zürich, 2018. * [Xi02] N. Xi. The based ring of two-sided cells of affine Weyl groups of type $\widetilde{A}_{n-1}$. Mem. Amer. Math. Soc., 157(749):xiv+95, 2002. * [Xu19] T. Xu. On the subregular J-rings of Coxeter systems. Algebr. Represent. Theory, 22(6):1479–1512, 2019.
# High order efficient algorithm for computation of MHD flow ensembles Muhammad Mohebujjaman (Department of Mathematics and Physics, Texas A&M International University, TX 78041, USA, <EMAIL_ADDRESS>; Massachusetts Institute of Technology) ###### Abstract In this paper, we propose, analyze, and test a new fully discrete, efficient, decoupled, stable, and practically second-order time-stepping algorithm for computing MHD ensemble flow averages under uncertainties in the initial conditions and forcing. For each viscosity and magnetic diffusivity pair, the algorithm picks the largest possible parameter $\theta\in[0,1]$ to avoid the instability that arises due to the presence of some explicit viscous terms. At each time step, the algorithm shares the same system matrix with all $J$ realizations but with different right-hand-side vectors. This saves assembly time and computer memory, allows the reuse of the same preconditioner, and can take advantage of block linear solvers. For the proposed algorithm, we prove stability and convergence rigorously. To illustrate the predicted convergence rates of our analysis, numerical experiments with manufactured solutions are given on a unit square domain. Finally, we test the scheme on a benchmark channel flow over a step problem and it performs well. Key words. magnetohydrodynamics, uncertainty quantification, fast ensemble calculation, finite element method, Elsässer variables, second order scheme Mathematics Subject Classifications (2000): 65M12, 65M22, 65M60, 76W05 ## 1 Introduction Numerical simulations of realistic flows are significantly affected by input data, e.g., initial conditions, boundary conditions, forcing functions, viscosities, etc., which involve uncertainties. As a result, uncertainty quantification (UQ) plays an important role in the validation of simulation methodologies and helps in developing rigorous methods to characterize the effect of the uncertainties on the final quantities of interest. A popular approach for dealing with uncertainties in the data is the computation of an ensemble average of several realizations. Many fluid dynamics applications, e.g., the ensemble Kalman filter approach, weather forecasting, and sensitivity analyses of solutions [14, 39, 42, 43, 44, 53], require multiple numerical simulations of a flow subject to $J$ different input conditions (realizations), which are then used to compute means and sensitivities. Recently, the study of MHD flows has become important due to applications in, e.g., engineering, physical science, geophysics and astrophysics [6, 9, 16, 19, 28, 54], liquid metal cooling of nuclear reactors [5, 24, 57], process metallurgy [15, 55], and MHD propulsion [41, 45].
For time-dependent, viscous, incompressible magnetohydrodynamic (MHD) flow simulations, this leads to solving the following $J$ separate, nonlinearly coupled systems of PDEs [8, 15, 37, 47]: $\displaystyle u_{j,t}+u_{j}\cdot\nabla u_{j}-sB_{j}\cdot\nabla B_{j}-\nu\Delta u_{j}+\nabla p_{j}$ $\displaystyle=$ $\displaystyle f_{j}(x,t),\hskip 5.69054pt\text{in}\hskip 5.69054pt\Omega\times(0,T],$ (1) $\displaystyle B_{j,t}+u_{j}\cdot\nabla B_{j}-B_{j}\cdot\nabla u_{j}-\nu_{m}\Delta B_{j}+\nabla\lambda_{j}$ $\displaystyle=$ $\displaystyle\nabla\times g_{j}(x,t)\hskip 5.69054pt\text{in}\hskip 5.69054pt\Omega\times(0,T],$ (2) $\displaystyle\nabla\cdot u_{j}$ $\displaystyle=$ $\displaystyle 0,\hskip 5.69054pt\text{in}\hskip 5.69054pt\Omega\times(0,T],$ (3) $\displaystyle\nabla\cdot B_{j}$ $\displaystyle=$ $\displaystyle 0,\hskip 5.69054pt\text{in}\hskip 5.69054pt\Omega\times(0,T],$ (4) $\displaystyle u_{j}(x,0)$ $\displaystyle=$ $\displaystyle u_{j}^{0}(x)\hskip 5.69054pt\text{in}\hskip 5.69054pt\Omega,$ (5) $\displaystyle B_{j}(x,0)$ $\displaystyle=$ $\displaystyle B_{j}^{0}(x)\hskip 5.69054pt\text{in}\hskip 5.69054pt\Omega.$ (6) Here, $u_{j}$, $B_{j}$, $p_{j}$, and $\lambda_{j}$ denote the velocity, magnetic field, pressure, and artificial magnetic pressure solutions, respectively, of the $j$-th member of the ensemble with slightly different initial conditions $u_{j}^{0}$ and $B_{j}^{0}$, and forcing functions $f_{j}$ and $\nabla\times g_{j}$ for all $j=1,2,\cdots,J$. Here $\Omega\subset\mathbb{R}^{d}$ $(d=2$ or $3)$ is a convex domain, $\nu$ is the kinematic viscosity, $\nu_{m}$ is the magnetic diffusivity, $s$ is the coupling number, and $T$ is the simulation time. The artificial magnetic pressures $\lambda_{j}$ are Lagrange multipliers introduced in the induction equations to enforce the divergence-free constraints on the discrete induction equations; in the continuous case $\lambda_{j}=0$. All the variables above are dimensionless. The magnetic diffusivity $\nu_{m}$ is defined by $\nu_{m}:=Re_{m}^{-1}=1/(\mu_{0}\sigma)$, where $\mu_{0}$ is the magnetic permeability of free space and $\sigma$ is the electric conductivity of the fluid. For the sake of simplicity of our analysis, we consider homogeneous Dirichlet boundary conditions for both velocity and magnetic fields. For periodic boundary conditions or inhomogeneous Dirichlet boundary conditions, our analyses and results will still work after minor modifications. Even for an accurate classical Navier-Stokes (NSE) simulation of a single ensemble member, the required number of degrees of freedom (dofs) is very high, as is known from Kolmogorov’s 1941 results [38]. Thus, a single member of an MHD ensemble simulation, in which the velocity and magnetic fields are nonlinearly coupled, is computationally very expensive with respect to time and memory. As a result, the computational cost of the above coupled system (1)-(6) will be approximately equal to $J\times$(cost of one MHD simulation) and will generally be computationally infeasible. Our objective in this paper is to construct and study an efficient and accurate algorithm for solving the above ensemble systems. It has been shown in recent works [1, 26, 47, 59] that by using the Elsässer variable formulation, efficient, stable, decoupled MHD simulation algorithms can be created. That is, at each time step, instead of solving a fully coupled linear system, two separate Oseen-type problems need to be solved.
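Before writing down the formulation, it is worth recording a one-line sanity check of how the mixed viscous coefficients $\frac{\nu+\nu_{m}}{2}$ and $\frac{\nu-\nu_{m}}{2}$ arise from the change of variables. The following symbolic sketch (an illustration only, treating $\Delta u_{j}$ and $\Delta B_{j}$ as formal symbols `Lu`, `LB`) verifies the identity used in the derivation below:

```python
import sympy as sp

# Formal symbols: Lu, LB stand in for Delta(u_j) and Delta(B_j).
s, nu, num = sp.symbols('s nu nu_m', positive=True)
Lu, LB = sp.symbols('Lu LB')

Lv = Lu + sp.sqrt(s) * LB  # Delta(v_j) with v_j = u_j + sqrt(s) B_j
Lw = Lu - sp.sqrt(s) * LB  # Delta(w_j) with w_j = u_j - sqrt(s) B_j

# The viscous terms of the v-equation, (nu+nu_m)/2 * Delta(v) + (nu-nu_m)/2 * Delta(w),
# must reproduce the viscous terms of (momentum) + sqrt(s)*(induction):
lhs = (nu + num) / 2 * Lv + (nu - num) / 2 * Lw
rhs = nu * Lu + sp.sqrt(s) * num * LB
assert sp.simplify(lhs - rhs) == 0
```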
Defining $v_{j}:=u_{j}+\sqrt{s}B_{j}$, $w_{j}:=u_{j}-\sqrt{s}B_{j}$, $f_{1,j}:=f_{j}+\sqrt{s}\nabla\times g_{j}$, $f_{2,j}:=f_{j}-\sqrt{s}\nabla\times g_{j}$, $q_{j}:=p_{j}+\sqrt{s}\lambda_{j}$ and $r_{j}:=p_{j}-\sqrt{s}\lambda_{j}$ produces the Elsässer variable formulation of the ensemble systems: $\displaystyle v_{j,t}+w_{j}\cdot\nabla v_{j}-\frac{\nu+\nu_{m}}{2}\Delta v_{j}-\frac{\nu-\nu_{m}}{2}\Delta w_{j}+\nabla q_{j}=f_{1,j},$ (7) $\displaystyle w_{j,t}+v_{j}\cdot\nabla w_{j}-\frac{\nu+\nu_{m}}{2}\Delta w_{j}-\frac{\nu-\nu_{m}}{2}\Delta v_{j}+\nabla r_{j}=f_{2,j},$ (8) $\displaystyle\nabla\cdot v_{j}=\nabla\cdot w_{j}=0,$ (9) together with the initial and boundary conditions.

To reduce the ensemble simulation cost, a breakthrough idea was presented in [33] to find a set of $J$ solutions of the NSEs for different initial conditions and body forces. The fundamental idea is that, at each time step, each of the $J$ systems shares a common coefficient matrix, but the right-hand-side vectors are different. Thus, the global system matrix needs to be assembled and the preconditioner needs to be built only once per time step, and both can be reused for all $J$ systems. Also, the algorithm reduces storage requirements and can take advantage of block linear solvers. This breakthrough idea has been implemented in heat conduction [18], Navier-Stokes simulations [30, 31, 34, 50], magnetohydrodynamics [35, 47], parameterized flow problems [23], and turbulence modeling [32].

In our earlier works [46, 47], we adopted this idea and considered a first-order scheme with a stabilization term to compute MHD flow ensemble averages subject to different initial conditions and forcing functions. There, optimal convergence was obtained in 2D under a mild condition, but in 3D the convergence was proven to be suboptimal. The objective of this paper is to improve the temporal accuracy of the MHD ensemble scheme. It has been shown that in Elsässer formulation simulations, the second-order approximation of certain viscous terms causes instability unless there is an unusual data restriction $1/2<\nu/\nu_{m}<2$ [40]. To overcome this issue, a practically second order $\theta$-scheme is proposed in [26], where a convex combination of the first and second order approximations of such a viscous term is taken. We extend this idea, together with the efficient ensemble algorithm described above, to propose a practically second order timestepping scheme for MHD flow ensemble simulation. We consider a uniform timestep size $\Delta t$ and let $t_{n}=n\Delta t$ for $n=0,1,\cdots$; for simplicity, we momentarily suppress the spatial discretization.
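To make the shared-coefficient-matrix idea concrete, the following is a minimal sketch in Python/SciPy; the names `A` and `rhs_list` are hypothetical, and the computations in this paper actually use FreeFem++ with UMFPACK, so this is purely illustrative:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ensemble_solve(A, rhs_list):
    """Solve A x_j = b_j for all J ensemble members with one factorization.

    A        : shared sparse coefficient matrix for the current time step
    rhs_list : list of the J member-dependent right-hand-side vectors
    """
    lu = spla.splu(sp.csc_matrix(A))        # factorize once per time step
    return [lu.solve(b) for b in rhs_list]  # J cheap triangular solves
```

With an iterative method, the same savings are obtained by building the preconditioner once per time step and reusing it for all $J$ right-hand sides, or by applying a block solver to all right-hand sides at once.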
Then, computing the $J$ solutions independently takes the following form:

Step 1: For $j$=1,…,$J$, $\displaystyle\frac{3v_{j}^{n+1}-4v_{j}^{n}+v_{j}^{n-1}}{2\Delta t}+<w>^{n}\cdot\nabla v_{j}^{n+1}+w_{j}^{{}^{\prime}n}\cdot\nabla(2v_{j}^{n}-v_{j}^{n-1})$ $\displaystyle-\frac{\nu+\nu_{m}}{2}\Delta v_{j}^{n+1}$ $\displaystyle-\theta\frac{\nu-\nu_{m}}{2}\Delta(2w_{j}^{n}-w_{j}^{n-1})-(1-\theta)\frac{\nu-\nu_{m}}{2}\Delta w_{j}^{n}+\nabla q_{j}^{n+1}$ $\displaystyle=f_{1,j}(t^{n+1}),$ (10) $\displaystyle\nabla\cdot v_{j}^{n+1}$ $\displaystyle=0.$ (11)

Step 2: For $j$=1,…,$J$, $\displaystyle\frac{3w_{j}^{n+1}-4w_{j}^{n}+w_{j}^{n-1}}{2\Delta t}+<v>^{n}\cdot\nabla w_{j}^{n+1}+v_{j}^{{{}^{\prime}}n}\cdot\nabla(2w_{j}^{n}-w_{j}^{n-1})$ $\displaystyle-\frac{\nu+\nu_{m}}{2}\Delta w_{j}^{n+1}$ $\displaystyle-\theta\frac{\nu-\nu_{m}}{2}\Delta(2v_{j}^{n}-v_{j}^{n-1})-(1-\theta)\frac{\nu-\nu_{m}}{2}\Delta v_{j}^{n}+\nabla r_{j}^{n+1}$ $\displaystyle=f_{2,j}(t^{n+1}),$ (12) $\displaystyle\nabla\cdot w_{j}^{n+1}$ $\displaystyle=0.$ (13)

where $v_{j}^{n},w_{j}^{n},q_{j}^{n}$, and $r_{j}^{n}$ denote approximations of $v_{j}(\cdot,t^{n}),w_{j}(\cdot,t^{n}),q_{j}(\cdot,t^{n})$, and $r_{j}(\cdot,t^{n})$, respectively. The ensemble mean and fluctuation about the mean are denoted by $<u>$ and $u_{j}^{{}^{\prime}}$, respectively, and they are defined as follows: $\displaystyle<u>^{n}:=\frac{1}{J}\sum\limits_{j=1}^{J}(2u_{j}^{n}-u_{j}^{n-1}),\hskip 5.69054ptu_{j}^{{}^{\prime}n}:=2u_{j}^{n}-u_{j}^{n-1}-<u>^{n}.$ (14)

We prove that the above method is stable for any $\nu$ and $\nu_{m}$, provided $\theta$ is chosen to satisfy $\frac{\theta}{1+\theta}<\frac{\nu}{\nu_{m}}<\frac{1+\theta}{\theta}$, $0\leq\theta\leq 1$. We also prove that the temporal accuracy of the scheme is of $O(\Delta t^{2}+(1-\theta)|\nu-\nu_{m}|\Delta t)$. Though the scheme thus looks first order accurate in time, in many practical applications the factor $|\nu-\nu_{m}|\ll 1$; e.g., in the Earth's core, current estimates suggest $\nu\sim 10^{-8}$ and $\nu_{m}\sim 10^{-3}$ [36, 52], so $|\nu-\nu_{m}|$ is of the order of $10^{-3}$. Likewise, the Sun has $\nu_{m}\sim 10^{-6}$. Moreover, following the discovery of high-temperature superconductors, theory suggests the possibility of discovering a high-temperature liquid superconductor [17], for which $\nu_{m}\sim 10^{-10}$. Thus, in such cases, $\Delta t^{2}$ dominates over $(1-\theta)|\nu-\nu_{m}|\Delta t$ and the scheme behaves as second order accurate in time. In fact, when $|\nu-\nu_{m}|\ll 1$ with a high Reynolds number flow of highly electrically conductive fluids, the flow becomes convection dominated. In convection dominated regimes, the contribution of the nonlinear terms to the system matrix dominates over the contribution of the viscous terms. Thus, the system matrix becomes highly ill-conditioned and the linear systems become harder to solve. Therefore, it is critical to find a robust algorithm for high Reynolds number flows of highly electrically conductive fluids.

The key features contributing to the efficiency of the above algorithm are: (1) The MHD system is decoupled in a stable way into two identical Oseen problems, which can be solved simultaneously if the computational resources are available. (2) At each time step, the coefficient matrices of (10)-(11) and (12)-(13) are independent of $j$. Thus, for each sub-problem, all the $J$ members in the ensemble share the same coefficient matrix.
That is, at each time step, instead of solving $J$ individual systems, we only need to solve a single linear system with $J$ different right-hand-side vectors. (3) It provides second-order temporal accuracy when $|\nu-\nu_{m}|\ll 1$. (4) No data restriction is needed on $\nu$ and $\nu_{m}$ to avoid instability. We give rigorous proofs that the decoupled scheme is conditionally stable and that the ensemble average of the $J$ different computed solutions converges to the ensemble average of the $J$ different true solutions as the timestep size and the spatial mesh width tend to zero. To the best of our knowledge, this second order timestepping scheme for MHD flow ensemble averages is new.

This paper is organized as follows. Section 2 presents notation and mathematical preliminaries necessary for a smooth presentation and the analysis to follow. In Section 3, we present and analyze a fully discrete and decoupled algorithm corresponding to (10)-(13), and prove its stability and convergence theorems. Numerical tests are presented in Section 4, and finally, conclusions and future directions are given in Section 5.

## 2 Notation and preliminaries

Let $\Omega\subset\mathbb{R}^{d}\ (d=2,3)$ be a convex polygonal or polyhedral domain with boundary $\partial\Omega$. The usual $L^{2}(\Omega)$ norm and inner product are denoted by $\|.\|$ and $(.,.)$, respectively. Similarly, the $L^{p}(\Omega)$ norms and the Sobolev $W_{p}^{k}(\Omega)$ norms are $\|.\|_{L^{p}}$ and $\|.\|_{W_{p}^{k}}$, respectively, for $k\in\mathbb{N},\hskip 2.84526pt1\leq p\leq\infty$. The Sobolev space $W_{2}^{k}(\Omega)$ is represented by $H^{k}(\Omega)$ with norm $\|.\|_{k}$. The natural function spaces for our problem are $X:=H_{0}^{1}(\Omega)=\\{v\in(L^{2}(\Omega))^{d}:\nabla v\in L^{2}(\Omega)^{d\times d},v=0\hskip 5.69054pt\mbox{on}\hskip 5.69054pt\partial\Omega\\},$ $Q:=L_{0}^{2}(\Omega)=\\{q\in L^{2}(\Omega):\int_{\Omega}q\hskip 5.69054ptdx=0\\}.$ Recall that the Poincaré inequality holds in $X$: there exists $C$ depending only on $\Omega$ satisfying, for all $\phi\in X$, $\|\phi\|\leq C\|\nabla\phi\|.$ The divergence free velocity space is given by $V:=\\{v\in X:(\nabla\cdot v,q)=0,\forall q\in Q\\}.$ We define the trilinear form $b:X\times X\times X\rightarrow\mathbb{R}$ by $b(u,v,w):=(u\cdot\nabla v,w),$ and recall from [21] that $b(u,v,v)=0$ if $u\in V$, and $\displaystyle|b(u,v,w)|\leq C(\Omega)\|\nabla u\|\|\nabla v\|\|\nabla w\|,\hskip 5.69054pt\mbox{for any}\hskip 5.69054ptu,v,w\in X.$ (15) The conforming finite element spaces are denoted by $X_{h}\subset X$ and $Q_{h}\subset Q$, and we assume a regular triangulation $\tau_{h}(\Omega)$, where $h$ is the maximum triangle diameter. We assume that $(X_{h},Q_{h})$ satisfies the usual discrete inf-sup condition $\displaystyle\inf_{q_{h}\in Q_{h}}\sup_{v_{h}\in X_{h}}\frac{(q_{h},{\nabla}\cdot v_{h})}{\|q_{h}\|\|{\nabla}v_{h}\|}\geq\beta>0,$ (16) where $\beta$ is independent of $h$. The space of discretely divergence free functions is defined as $V_{h}:=\\{v_{h}\in X_{h}:(\nabla\cdot v_{h},q_{h})=0,\hskip 5.69054pt\forall q_{h}\in Q_{h}\\}.$ For simplicity of our analysis, we will use the Scott-Vogelius (SV) [56] finite element pair $(X_{h},Q_{h})=((P_{k})^{d},P_{k-1}^{disc})$, which satisfies the inf-sup condition when the mesh is created as a barycenter refinement of a regular mesh and the polynomial degree $k\geq d$ [4, 61].
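For later reference, note that the ensemble mean and fluctuation in (14) act on the second-order extrapolations $2u_{j}^{n}-u_{j}^{n-1}$ and are simple array operations. A minimal NumPy sketch follows; the $(J,\text{ndof})$ storage layout is our own assumption for illustration, not the paper's implementation:

```python
import numpy as np

def mean_and_fluctuations(u_now, u_prev):
    """Ensemble mean <u>^n and fluctuations u_j'^n of eq. (14).

    u_now, u_prev : arrays of shape (J, ndof) holding u_j^n and u_j^{n-1}.
    """
    extrap = 2.0 * u_now - u_prev   # second-order extrapolation 2u^n - u^{n-1}
    mean = extrap.mean(axis=0)      # <u>^n of eq. (14)
    fluct = extrap - mean           # u_j'^n of eq. (14)
    return mean, fluct
```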
Our analysis can be extended without difficulty to any inf-sup stable element choice; however, additional terms will appear in the convergence analysis if non-divergence-free elements are chosen.

We have the following approximation properties in $(X_{h},Q_{h})$ [13]: $\displaystyle\inf_{v_{h}\in X_{h}}\|u-v_{h}\|$ $\displaystyle\leq Ch^{k+1}|u|_{k+1},\hskip 5.69054ptu\in H^{k+1}(\Omega),$ (17) $\displaystyle\inf_{v_{h}\in X_{h}}\|{\nabla}(u-v_{h})\|$ $\displaystyle\leq Ch^{k}|u|_{k+1},\hskip 14.22636ptu\in H^{k+1}(\Omega),$ (18) $\displaystyle\inf_{q_{h}\in Q_{h}}\|p-q_{h}\|$ $\displaystyle\leq Ch^{k}|p|_{k},\hskip 28.45274ptp\in H^{k}(\Omega),$ (19) where $|\cdot|_{r}$ denotes the $H^{r}$ seminorm.

We will assume the mesh is sufficiently regular for the inverse inequality to hold, and with this and the LBB assumption, we have the approximation properties $\displaystyle\|\nabla(u-P_{L^{2}}^{V_{h}}(u))\|$ $\displaystyle\leq Ch^{k}|u|_{k+1},\hskip 5.69054ptu\in H^{k+1}(\Omega),$ (20) $\displaystyle\inf_{v_{h}\in V_{h}}\|{\nabla}(u-v_{h})\|$ $\displaystyle\leq Ch^{k}|u|_{k+1},\hskip 5.69054ptu\in H^{k+1}(\Omega),$ (21) where $P_{L^{2}}^{V_{h}}(u)$ is the $L^{2}$ projection of $u$ into $V_{h}$. The following discrete Gronwall inequality was given in [27].

###### Lemma 1.

Let $\Delta t$, $H$, $a_{n}$, $b_{n}$, $c_{n}$, $d_{n}$ be non-negative numbers for $n=1,\cdots,M$ such that $a_{M}+\Delta t\sum_{n=1}^{M}b_{n}\leq\Delta t\sum_{n=1}^{M-1}{d_{n}a_{n}}+\Delta t\sum_{n=1}^{M}c_{n}+H\hskip 8.53581pt\mbox{for}\hskip 5.69054ptM\in\mathbb{N},$ then for all $\Delta t>0,$ $a_{M}+\Delta t\sum_{n=1}^{M}b_{n}\leq\mbox{exp}\left(\Delta t\sum_{n=1}^{M-1}d_{n}\right)\left(\Delta t\sum_{n=1}^{M}c_{n}+H\right)\hskip 5.69054pt\mbox{for}\hskip 5.69054ptM\in\mathbb{N}.$

## 3 Fully discrete scheme and analysis

Now we present and analyze an efficient, fully discrete, decoupled, and practically second-order timestepping scheme for computing MHD flow ensembles. Like other BDF2-based schemes, it needs two initial time levels; given the initial conditions, the second time level can be computed, without affecting stability or accuracy, by the linearized backward Euler scheme of [47] (without the ensemble eddy viscosity terms) on the first timestep. The scheme is defined below.

###### Algorithm 3.1.

Given $\nu$ and $\nu_{m}$, choose $\theta$ sufficiently large so that $\frac{\theta}{1+\theta}<\frac{\nu}{\nu_{m}}<\frac{1+\theta}{\theta}$, $0\leq\theta\leq 1$. Given time step $\Delta t>0$, end time $T>0$, initial conditions $v_{j}^{0},w_{j}^{0},v_{j}^{1},w_{j}^{1}\in V_{h}$ and $f_{1,j},f_{2,j}\in L^{\infty}(0,T;H^{-1}(\Omega)^{d})$ for $j=1,2,\cdots,J$.
Set $M=T/\Delta t$ and for $n=1,\cdots,M-1$, compute: Find $v_{j,h}^{n+1}\in V_{h}$ satisfying, for all $\chi_{h}\in V_{h}$: $\displaystyle\Bigg{(}\frac{3v_{j,h}^{n+1}-4v_{j,h}^{n}+v_{j,h}^{n-1}}{2\Delta t},\chi_{h}\Bigg{)}+b^{*}(<w_{h}>^{n},v_{j,h}^{n+1},\chi_{h})+b^{*}(w_{j,h}^{{}^{\prime}n},2v_{j,h}^{n}-v_{j,h}^{n-1},\chi_{h})$ $\displaystyle+\frac{\nu+\nu_{m}}{2}(\nabla v_{j,h}^{n+1},\nabla\chi_{h})+\frac{\nu-\nu_{m}}{2}((1-\theta)\nabla w_{j,h}^{n}+\theta\nabla(2w_{j,h}^{n}-w_{j,h}^{n-1}),\nabla\chi_{h})=(f_{1,j}(t^{n+1}),\chi_{h}),$ (22) Find $w_{j,h}^{n+1}\in V_{h}$ satisfying, for all $l_{h}\in V_{h}$: $\displaystyle\left(\frac{3w_{j,h}^{n+1}-4w_{j,h}^{n}+w_{j,h}^{n-1}}{2\Delta t},l_{h}\right)+b^{*}(<v_{h}>^{n},w_{j,h}^{n+1},l_{h})+b^{*}(v_{j,h}^{{}^{\prime}n},2w_{j,h}^{n}-w_{j,h}^{n-1},l_{h})$ $\displaystyle+\frac{\nu+\nu_{m}}{2}(\nabla w_{j,h}^{n+1},\nabla l_{h})+\frac{\nu-\nu_{m}}{2}((1-\theta)\nabla v_{j,h}^{n}+\theta\nabla(2v_{j,h}^{n}-v_{j,h}^{n-1}),\nabla l_{h})=(f_{2,j}(t^{n+1}),l_{h}).$ (23) Here $b^{*}$ denotes a skew-symmetrized version of the trilinear form $b$; for the pointwise divergence-free elements used in this work, $b^{*}$ coincides with $b$.

### 3.1 Stability analysis

We now prove stability and well-posedness for Algorithm 3.1. To simplify our calculations, we define $\alpha:=\nu+\nu_{m}-|\nu-\nu_{m}|(1+2\theta)$; choosing $\theta$ sufficiently large so that $\displaystyle\frac{\theta}{1+\theta}<\frac{\nu}{\nu_{m}}<\frac{1+\theta}{\theta},\hskip 5.69054pt0\leq\theta\leq 1,$ (24) holds guarantees $\alpha>0$.

###### Lemma 2.

Consider Algorithm 3.1. If the mesh is sufficiently regular so that the inverse inequality holds and the timestep is chosen to satisfy $\Delta t\leq\frac{\alpha h^{2}}{C\max\limits_{1\leq j\leq J}\big{\\{}\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2},\hskip 2.84526pt\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\big{\\}}}$, then the method is stable and solutions to (22)-(23) satisfy $\displaystyle\|v_{j,h}^{M}\|^{2}+\|2v_{j,h}^{M}-v_{j,h}^{M-1}\|^{2}$ $\displaystyle+\|w_{j,h}^{M}\|^{2}+\|2w_{j,h}^{M}-w_{j,h}^{M-1}\|^{2}+\alpha\Delta t\sum_{n=2}^{M}\bigg{(}\|\nabla v_{j,h}^{n}\|^{2}+\|\nabla w_{j,h}^{n}\|^{2}\bigg{)}$ $\displaystyle\leq\|v_{j,h}^{1}\|^{2}+\|w_{j,h}^{1}\|^{2}+\|2v_{j,h}^{1}-v_{j,h}^{0}\|^{2}+\|2w_{j,h}^{1}-w_{j,h}^{0}\|^{2}$ $\displaystyle+(\nu+\nu_{m})\Delta t\bigg{(}\|\nabla v_{j,h}^{0}\|^{2}+\|\nabla w_{j,h}^{0}\|^{2}+\|\nabla v_{j,h}^{1}\|^{2}+\|\nabla w_{j,h}^{1}\|^{2}\bigg{)}$ $\displaystyle+\frac{8}{\alpha}\Delta t\sum_{n=1}^{M-1}\left(\|f_{1,j}(t^{n+1})\|_{-1}^{2}+\|f_{2,j}(t^{n+1})\|_{-1}^{2}\right).$ (25)

###### Remark 3.1.

The timestep restriction arises due to the use of the inverse inequality, where the constant $C$ depends on the inverse inequality constant. The above stability bound is sufficient for the well-posedness of Algorithm 3.1, since the scheme is linear and finite dimensional at each timestep. The linearity of the scheme provides uniqueness of the solution, and in finite dimensions uniqueness implies existence. Thus solutions to Algorithm 3.1 exist uniquely.

###### Proof.
Choose $\chi_{h}=v_{j,h}^{n+1}$ and $l_{h}=w_{j,h}^{n+1}$ in (22)-(23): $\displaystyle\Bigg{(}\frac{3v_{j,h}^{n+1}-4v_{j,h}^{n}+v_{j,h}^{n-1}}{2\Delta t},v_{j,h}^{n+1}\Bigg{)}$ $\displaystyle+\frac{\nu+\nu_{m}}{2}\|\nabla v_{j,h}^{n+1}\|^{2}+\frac{\nu-\nu_{m}}{2}((1+\theta)\nabla w_{j,h}^{n}-\theta\nabla w_{j,h}^{n-1},\nabla v_{j,h}^{n+1})$ $\displaystyle+(w_{j,h}^{{}^{\prime}n}\cdot\nabla(2v_{j,h}^{n}-v_{j,h}^{n-1}),v_{j,h}^{n+1})=(f_{1,j}(t^{n+1}),v_{j,h}^{n+1}),$ (26) and $\displaystyle\Bigg{(}\frac{3w_{j,h}^{n+1}-4w_{j,h}^{n}+w_{j,h}^{n-1}}{2\Delta t},w_{j,h}^{n+1}\Bigg{)}+$ $\displaystyle\frac{\nu+\nu_{m}}{2}\|\nabla w_{j,h}^{n+1}\|^{2}+\frac{\nu-\nu_{m}}{2}((1+\theta)\nabla v_{j,h}^{n}-\theta\nabla v_{j,h}^{n-1},\nabla w_{j,h}^{n+1})$ $\displaystyle+(v_{j,h}^{{}^{\prime}n}\cdot\nabla(2w_{j,h}^{n}-w_{j,h}^{n-1}),w_{j,h}^{n+1})=(f_{2,j}(t^{n+1}),w_{j,h}^{n+1}).$ (27) Using the identity $\displaystyle(3a-4b+c,a)=\frac{a^{2}+(2a-b)^{2}}{2}-\frac{b^{2}+(2b-c)^{2}}{2}+\frac{(a-2b+c)^{2}}{2},$ (28) we obtain $\displaystyle\frac{1}{4\Delta t}\bigg{(}\|v_{j,h}^{n+1}\|^{2}-$ $\displaystyle\|v_{j,h}^{n}\|^{2}+\|2v_{j,h}^{n+1}-v_{j,h}^{n}\|^{2}-\|2v_{j,h}^{n}-v_{j,h}^{n-1}\|^{2}+\|v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1}\|^{2}\bigg{)}$ $\displaystyle+\frac{\nu+\nu_{m}}{2}\|\nabla v_{j,h}^{n+1}\|^{2}+\frac{\nu-\nu_{m}}{2}\big{(}(1+\theta)\nabla w_{j,h}^{n}-\theta\nabla w_{j,h}^{n-1},\nabla v_{j,h}^{n+1}\big{)}$ $\displaystyle+(w_{j,h}^{{}^{\prime}n}\cdot\nabla(2v_{j,h}^{n}-v_{j,h}^{n-1}),v_{j,h}^{n+1})=(f_{1,j}(t^{n+1}),v_{j,h}^{n+1}),$ (29) and $\displaystyle\frac{1}{4\Delta t}\bigg{(}\|w_{j,h}^{n+1}\|^{2}-$ $\displaystyle\|w_{j,h}^{n}\|^{2}+\|2w_{j,h}^{n+1}-w_{j,h}^{n}\|^{2}-\|2w_{j,h}^{n}-w_{j,h}^{n-1}\|^{2}+\|w_{j,h}^{n+1}-2w_{j,h}^{n}+w_{j,h}^{n-1}\|^{2}\bigg{)}$ $\displaystyle+\frac{\nu+\nu_{m}}{2}\|\nabla w_{j,h}^{n+1}\|^{2}+\frac{\nu-\nu_{m}}{2}\big{(}(1+\theta)\nabla v_{j,h}^{n}-\theta\nabla v_{j,h}^{n-1},\nabla w_{j,h}^{n+1}\big{)}$ $\displaystyle+(v_{j,h}^{{}^{\prime}n}\cdot\nabla(2w_{j,h}^{n}-w_{j,h}^{n-1}),w_{j,h}^{n+1})=(f_{2,j}(t^{n+1}),w_{j,h}^{n+1}).$ (30) Next, using $\displaystyle(w_{j,h}^{{}^{\prime}n}$ $\displaystyle\cdot\nabla(2v_{j,h}^{n}-v_{j,h}^{n-1}),v_{j,h}^{n+1})=(w_{j,h}^{{}^{\prime}n}\cdot\nabla v_{j,h}^{n+1},v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1})$ $\displaystyle\leq C\|\nabla w_{j,h}^{{}^{\prime}n}\|\|\nabla v_{j,h}^{n+1}\|\hskip 2.84526pt\|\nabla(v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1})\|$ $\displaystyle\leq\frac{C}{h}\|\nabla w_{j,h}^{{}^{\prime}n}\|\|\nabla v_{j,h}^{n+1}\|\hskip 2.84526pt\|v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1}\|,$ adding equations (29) and (30), applying Cauchy-Schwarz and Young's inequalities to the $\nu-\nu_{m}$ terms, and rearranging yields $\displaystyle\frac{1}{4\Delta t}\bigg{(}\|v_{j,h}^{n+1}\|^{2}-\|v_{j,h}^{n}\|^{2}+\|2v_{j,h}^{n+1}-v_{j,h}^{n}\|^{2}-\|2v_{j,h}^{n}-v_{j,h}^{n-1}\|^{2}+\|v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1}\|^{2}$ $\displaystyle+\|w_{j,h}^{n+1}\|^{2}-\|w_{j,h}^{n}\|^{2}+\|2w_{j,h}^{n+1}-w_{j,h}^{n}\|^{2}-\|2w_{j,h}^{n}-w_{j,h}^{n-1}\|^{2}+\|w_{j,h}^{n+1}-2w_{j,h}^{n}+w_{j,h}^{n-1}\|^{2}\bigg{)}$ $\displaystyle+\frac{\nu+\nu_{m}}{2}\big{(}\|\nabla v_{j,h}^{n+1}\|^{2}+\|\nabla w_{j,h}^{n+1}\|^{2}\big{)}\leq\frac{|\nu-\nu_{m}|}{4}(1+2\theta)\big{(}\|\nabla v_{j,h}^{n+1}\|^{2}+\|\nabla w_{j,h}^{n+1}\|^{2}\big{)}$ $\displaystyle+\frac{|\nu-\nu_{m}|}{4}(1+\theta)\big{(}\|\nabla v_{j,h}^{n}\|^{2}+\|\nabla w_{j,h}^{n}\|^{2}\big{)}+\frac{|\nu-\nu_{m}|}{4}\theta\big{(}\|\nabla v_{j,h}^{n-1}\|^{2}+\|\nabla w_{j,h}^{n-1}\|^{2}\big{)}$ $\displaystyle+\frac{C}{h}\|\nabla w_{j,h}^{{}^{\prime}n}\|\|\nabla v_{j,h}^{n+1}\|\hskip 2.84526pt\|v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1}\|+\frac{C}{h}\|\nabla v_{j,h}^{{}^{\prime}n}\|\|\nabla w_{j,h}^{n+1}\|\hskip 2.84526pt\|w_{j,h}^{n+1}-2w_{j,h}^{n}+w_{j,h}^{n-1}\|$ $\displaystyle+\|f_{1,j}(t^{n+1})\|_{-1}\|\nabla v_{j,h}^{n+1}\|+\|f_{2,j}(t^{n+1})\|_{-1}\|\nabla w_{j,h}^{n+1}\|.$ Next, we apply Young's inequality using $\alpha/8$ with the forcing and nonlinear terms, noting that $\alpha>0$ by the assumed choice of $\theta$, and hiding terms on the left, $\displaystyle\frac{1}{4\Delta t}\bigg{(}\|v_{j,h}^{n+1}\|^{2}-\|v_{j,h}^{n}\|^{2}+\|2v_{j,h}^{n+1}-v_{j,h}^{n}\|^{2}-\|2v_{j,h}^{n}-v_{j,h}^{n-1}\|^{2}+\|v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1}\|^{2}$ $\displaystyle+\|w_{j,h}^{n+1}\|^{2}-\|w_{j,h}^{n}\|^{2}+\|2w_{j,h}^{n+1}-w_{j,h}^{n}\|^{2}-\|2w_{j,h}^{n}-w_{j,h}^{n-1}\|^{2}+\|w_{j,h}^{n+1}-2w_{j,h}^{n}+w_{j,h}^{n-1}\|^{2}\bigg{)}$ $\displaystyle+\frac{\nu+\nu_{m}}{4}\big{(}\|\nabla v_{j,h}^{n+1}\|^{2}+\|\nabla w_{j,h}^{n+1}\|^{2}\big{)}\leq\frac{|\nu-\nu_{m}|}{4}(1+\theta)\big{(}\|\nabla v_{j,h}^{n}\|^{2}+\|\nabla w_{j,h}^{n}\|^{2}\big{)}$ $\displaystyle+\frac{|\nu-\nu_{m}|}{4}\theta\big{(}\|\nabla v_{j,h}^{n-1}\|^{2}+\|\nabla w_{j,h}^{n-1}\|^{2}\big{)}+\frac{2}{\alpha}\|f_{1,j}(t^{n+1})\|_{-1}^{2}+\frac{2}{\alpha}\|f_{2,j}(t^{n+1})\|_{-1}^{2}$ $\displaystyle+\frac{C}{\alpha h^{2}}\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\|v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1}\|^{2}+\frac{C}{\alpha h^{2}}\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2}\|w_{j,h}^{n+1}-2w_{j,h}^{n}+w_{j,h}^{n-1}\|^{2}.$ Rearranging, we obtain $\displaystyle\frac{1}{4\Delta t}\bigg{(}\|v_{j,h}^{n+1}\|^{2}-\|v_{j,h}^{n}\|^{2}+\|2v_{j,h}^{n+1}-v_{j,h}^{n}\|^{2}-\|2v_{j,h}^{n}-v_{j,h}^{n-1}\|^{2}$ $\displaystyle+\|w_{j,h}^{n+1}\|^{2}-\|w_{j,h}^{n}\|^{2}+\|2w_{j,h}^{n+1}-w_{j,h}^{n}\|^{2}-\|2w_{j,h}^{n}-w_{j,h}^{n-1}\|^{2}\bigg{)}$ $\displaystyle+\bigg{(}\frac{1}{4\Delta t}-\frac{C}{\alpha h^{2}}\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\bigg{)}\|v_{j,h}^{n+1}-2v_{j,h}^{n}+v_{j,h}^{n-1}\|^{2}$ $\displaystyle+\bigg{(}\frac{1}{4\Delta t}-\frac{C}{\alpha h^{2}}\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2}\bigg{)}\|w_{j,h}^{n+1}-2w_{j,h}^{n}+w_{j,h}^{n-1}\|^{2}$ $\displaystyle+\frac{\nu+\nu_{m}}{4}\bigg{(}\|\nabla v_{j,h}^{n+1}\|^{2}-\|\nabla v_{j,h}^{n}\|^{2}+\|\nabla w_{j,h}^{n+1}\|^{2}-\|\nabla w_{j,h}^{n}\|^{2}\bigg{)}$ $\displaystyle+\frac{\nu+\nu_{m}-|\nu-\nu_{m}|(1+\theta)}{4}\bigg{(}\|\nabla v_{j,h}^{n}\|^{2}-\|\nabla v_{j,h}^{n-1}\|^{2}+\|\nabla w_{j,h}^{n}\|^{2}-\|\nabla w_{j,h}^{n-1}\|^{2}\bigg{)}$ $\displaystyle+\frac{\nu+\nu_{m}-|\nu-\nu_{m}|(1+2\theta)}{4}\bigg{(}\|\nabla v_{j,h}^{n-1}\|^{2}+\|\nabla w_{j,h}^{n-1}\|^{2}\bigg{)}$ $\displaystyle\leq\frac{2}{\alpha}\|f_{1,j}(t^{n+1})\|_{-1}^{2}+\frac{2}{\alpha}\|f_{2,j}(t^{n+1})\|_{-1}^{2}.$ (31) Now, if we choose $\Delta t\leq\frac{\alpha h^{2}}{C\max\limits_{1\leq j\leq J}\big{\\{}\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2},\hskip 2.84526pt\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\big{\\}}}$, then multiplying both sides by $4\Delta t$, summing over timesteps $n=1,\cdots,M-1$, and finally dropping non-negative terms from the left-hand side finishes the proof. ∎

### 3.2 Error analysis

Now we consider the convergence of the proposed decoupled scheme.

###### Theorem 3.
Suppose $(v_{j},w_{j},q_{j},r_{j})$ satisfies (7)-(9) with the regularity assumptions $v_{j}$,$w_{j}$ $\in L^{\infty}(0,T;$ $H^{m}(\Omega)^{d})$ for $m=\max\\{2,k+1\\}$, $v_{j,t},w_{j,t},v_{j,tt},w_{j,tt}\in L^{\infty}(0,T;H^{1}(\Omega)^{d})$, and $v_{j,ttt},w_{j,ttt}\in L^{\infty}(0,T;L^{2}(\Omega)^{d})$. Then the ensemble average solution $(<v_{h}>,<w_{h}>)$ to Algorithm 3.1 converges to the true ensemble average solution: for any $\Delta t\leq\frac{\alpha h^{2}}{C\max\limits_{1\leq j\leq J}\\{\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2},\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\\}},$ one has $\displaystyle\|<v>^{T}-$ $\displaystyle<v_{h}>^{M}\|+\|<w>^{T}-<w_{h}>^{M}\|+\alpha\Delta t\sum\limits_{n=2}^{M}\bigg{\\{}\|\nabla(<v>(t^{n})-<v_{h}>^{n})\|^{2}$ $\displaystyle+\|\nabla(<w>(t^{n})-<w_{h}>^{n})\|^{2}\bigg{\\}}^{\frac{1}{2}}\leq C(h^{k}+(\Delta t)^{2}+|\nu-\nu_{m}|(1-\theta)\Delta t).$ (32)

###### Proof.

We start our proof by obtaining the error equations. Testing (7) and (8) with $\chi_{h},l_{h}\in V_{h}$ at the time level $t^{n+1}$, the continuous variational formulations can be written as $\displaystyle\bigg{(}\frac{3v_{j}(t^{n+1})-4v_{j}(t^{n})+v_{j}(t^{n-1})}{2\Delta t},\chi_{h}\bigg{)}+(w_{j}(t^{n+1})\cdot\nabla v_{j}(t^{n+1}),\chi_{h})$ $\displaystyle+\frac{\nu+\nu_{m}}{2}(\nabla v_{j}(t^{n+1}),\nabla\chi_{h})+\frac{\nu-\nu_{m}}{2}((1+\theta)\nabla w_{j}(t^{n})-\theta\nabla w_{j}(t^{n-1}),\nabla\chi_{h})$ $\displaystyle=(f_{1,j}(t^{n+1}),\chi_{h})-\frac{\nu-\nu_{m}}{2}\big{(}\nabla(w_{j}(t^{n+1})-(1+\theta)w_{j}(t^{n})+\theta w_{j}(t^{n-1})),\nabla\chi_{h}\big{)}$ $\displaystyle-\bigg{(}v_{j,t}(t^{n+1})-\frac{3v_{j}(t^{n+1})-4v_{j}(t^{n})+v_{j}(t^{n-1})}{2\Delta t},\chi_{h}\bigg{)},$ (33) and $\displaystyle\bigg{(}\frac{3w_{j}(t^{n+1})-4w_{j}(t^{n})+w_{j}(t^{n-1})}{2\Delta t},l_{h}\bigg{)}+(v_{j}(t^{n+1})\cdot\nabla w_{j}(t^{n+1}),l_{h})$ $\displaystyle+\frac{\nu+\nu_{m}}{2}(\nabla w_{j}(t^{n+1}),\nabla l_{h})+\frac{\nu-\nu_{m}}{2}((1+\theta)\nabla v_{j}(t^{n})-\theta\nabla v_{j}(t^{n-1}),\nabla l_{h})$ $\displaystyle=(f_{2,j}(t^{n+1}),l_{h})-\frac{\nu-\nu_{m}}{2}\big{(}\nabla(v_{j}(t^{n+1})-(1+\theta)v_{j}(t^{n})+\theta v_{j}(t^{n-1})),\nabla l_{h}\big{)}$ $\displaystyle-\bigg{(}w_{j,t}(t^{n+1})-\frac{3w_{j}(t^{n+1})-4w_{j}(t^{n})+w_{j}(t^{n-1})}{2\Delta t},l_{h}\bigg{)}.$ (34) Denote $e_{j,v}^{n}:=v_{j}(t^{n})-v_{j,h}^{n},\hskip 5.69054pte_{j,w}^{n}:=w_{j}(t^{n})-w_{j,h}^{n}.$ Subtracting (22) and (23) from (33) and (34), respectively, yields $\displaystyle\bigg{(}\frac{3e_{j,v}^{n+1}-4e_{j,v}^{n}+e_{j,v}^{n-1}}{2\Delta t},$ $\displaystyle\chi_{h}\bigg{)}+\frac{\nu+\nu_{m}}{2}(\nabla e_{j,v}^{n+1},\nabla\chi_{h})+\frac{\nu-\nu_{m}}{2}\big{(}(1+\theta)\nabla e_{j,w}^{n}-\theta\nabla e_{j,w}^{n-1},\nabla\chi_{h}\big{)}$ $\displaystyle+((2e_{j,w}^{n}-e_{j,w}^{n-1})\cdot\nabla v_{j}(t^{n+1}),\chi_{h})+((2w_{j,h}^{n}-w_{j,h}^{n-1})\cdot\nabla e_{j,v}^{n+1},\chi_{h})$ $\displaystyle-(w_{j,h}^{{}^{\prime}n}\cdot\nabla(e_{j,v}^{n+1}-2e_{j,v}^{n}+e_{j,v}^{n-1}),\chi_{h})=-G_{1}(t,v_{j},w_{j},\chi_{h}),$ (35) and $\displaystyle\bigg{(}\frac{3e_{j,w}^{n+1}-4e_{j,w}^{n}+e_{j,w}^{n-1}}{2\Delta t},$ $\displaystyle l_{h}\bigg{)}+\frac{\nu+\nu_{m}}{2}(\nabla e_{j,w}^{n+1},\nabla l_{h})+\frac{\nu-\nu_{m}}{2}\big{(}(1+\theta)\nabla e_{j,v}^{n}-\theta\nabla e_{j,v}^{n-1},\nabla l_{h}\big{)}$ $\displaystyle+((2e_{j,v}^{n}-e_{j,v}^{n-1})\cdot\nabla w_{j}(t^{n+1}),l_{h})+((2v_{j,h}^{n}-v_{j,h}^{n-1})\cdot\nabla e_{j,w}^{n+1},l_{h})$ $\displaystyle-(v_{j,h}^{{}^{\prime}n}\cdot\nabla(e_{j,w}^{n+1}-2e_{j,w}^{n}+e_{j,w}^{n-1}),l_{h})=-G_{2}(t,v_{j},w_{j},l_{h}),$ (36) where $\displaystyle G_{1}(t,v_{j},w_{j},\chi_{h}):=$ $\displaystyle((w_{j}(t^{n+1})-2w_{j}(t^{n})+w_{j}(t^{n-1}))\cdot\nabla v_{j}(t^{n+1}),\chi_{h})$ $\displaystyle+(w_{j,h}^{{}^{\prime}n}\cdot\nabla(v_{j}(t^{n+1})-2v_{j}(t^{n})+v_{j}(t^{n-1})),\chi_{h})$ $\displaystyle+\frac{\nu-\nu_{m}}{2}\big{(}\nabla(w_{j}(t^{n+1})-(1+\theta)w_{j}(t^{n})+\theta w_{j}(t^{n-1})),\nabla\chi_{h}\big{)}$ $\displaystyle+\bigg{(}v_{j,t}(t^{n+1})-\frac{3v_{j}(t^{n+1})-4v_{j}(t^{n})+v_{j}(t^{n-1})}{2\Delta t},\chi_{h}\bigg{)},$ and $\displaystyle G_{2}(t,v_{j},w_{j},l_{h}):=$ $\displaystyle((v_{j}(t^{n+1})-2v_{j}(t^{n})+v_{j}(t^{n-1}))\cdot\nabla w_{j}(t^{n+1}),l_{h})$ $\displaystyle+(v_{j,h}^{{}^{\prime}n}\cdot\nabla(w_{j}(t^{n+1})-2w_{j}(t^{n})+w_{j}(t^{n-1})),l_{h})$ $\displaystyle+\frac{\nu-\nu_{m}}{2}\big{(}\nabla(v_{j}(t^{n+1})-(1+\theta)v_{j}(t^{n})+\theta v_{j}(t^{n-1})),\nabla l_{h}\big{)}$ $\displaystyle+\bigg{(}w_{j,t}(t^{n+1})-\frac{3w_{j}(t^{n+1})-4w_{j}(t^{n})+w_{j}(t^{n-1})}{2\Delta t},l_{h}\bigg{)}.$ Now we decompose the errors as $\displaystyle e_{j,v}^{n}:$ $\displaystyle=v_{j}(t^{n})-v_{j,h}^{n}=(v_{j}(t^{n})-\tilde{v}_{j}^{n})-(v_{j,h}^{n}-\tilde{v}_{j}^{n}):=\eta_{j,v}^{n}-\phi_{j,h}^{n},$ $\displaystyle e_{j,w}^{n}:$ $\displaystyle=w_{j}(t^{n})-w_{j,h}^{n}=(w_{j}(t^{n})-\tilde{w}_{j}^{n})-(w_{j,h}^{n}-\tilde{w}_{j}^{n}):=\eta_{j,w}^{n}-\psi_{j,h}^{n},$ where $\tilde{v}_{j}^{n}:=P_{L^{2}}^{V_{h}}(v_{j}(t^{n}))\in V_{h}$ and $\tilde{w}_{j}^{n}:=P_{L^{2}}^{V_{h}}(w_{j}(t^{n}))\in V_{h}$ are the $L^{2}$ projections of $v_{j}(t^{n})$ and $w_{j}(t^{n})$ into $V_{h}$, respectively. Note that $(\eta_{j,v}^{n},v_{h})=(\eta_{j,w}^{n},v_{h})=0\hskip 5.69054pt\forall v_{h}\in V_{h}.$ Rewriting, we have for $\chi_{h},l_{h}\in V_{h}$ $\displaystyle\bigg{(}$ $\displaystyle\frac{3\phi_{j,h}^{n+1}-4\phi_{j,h}^{n}+\phi_{j,h}^{n-1}}{2\Delta t},\chi_{h}\bigg{)}+\frac{\nu+\nu_{m}}{2}(\nabla\phi_{j,h}^{n+1},\nabla\chi_{h})+\frac{\nu-\nu_{m}}{2}\big{(}(1+\theta)\nabla\psi_{j,h}^{n}-\theta\nabla\psi_{j,h}^{n-1},\nabla\chi_{h}\big{)}$ $\displaystyle+((2\psi_{j,h}^{n}-\psi_{j,h}^{n-1})\cdot\nabla v_{j}(t^{n+1}),\chi_{h})+((2w_{j,h}^{n}-w_{j,h}^{n-1})\cdot\nabla\phi_{j,h}^{n+1},\chi_{h})$ $\displaystyle-(w_{j,h}^{{}^{\prime}n}\cdot\nabla(\phi_{j,h}^{n+1}-2\phi_{j,h}^{n}+\phi_{j,h}^{n-1}),\chi_{h})=\frac{\nu-\nu_{m}}{2}\big{(}(1+\theta)\nabla\eta_{j,w}^{n}-\theta\nabla\eta_{j,w}^{n-1},\nabla\chi_{h}\big{)}$ $\displaystyle+\frac{\nu+\nu_{m}}{2}(\nabla\eta_{j,v}^{n+1},\nabla\chi_{h})+((2\eta_{j,w}^{n}-\eta_{j,w}^{n-1})\cdot\nabla v_{j}(t^{n+1}),\chi_{h})+((2w_{j,h}^{n}-w_{j,h}^{n-1})\cdot\nabla\eta_{j,v}^{n+1},\chi_{h})$ $\displaystyle-(w_{j,h}^{{}^{\prime}n}\cdot\nabla(\eta_{j,v}^{n+1}-2\eta_{j,v}^{n}+\eta_{j,v}^{n-1}),\chi_{h})+G_{1}(t,v_{j},w_{j},\chi_{h}),$ (37) and $\displaystyle\bigg{(}$ $\displaystyle\frac{3\psi_{j,h}^{n+1}-4\psi_{j,h}^{n}+\psi_{j,h}^{n-1}}{2\Delta t},l_{h}\bigg{)}+\frac{\nu+\nu_{m}}{2}(\nabla\psi_{j,h}^{n+1},\nabla l_{h})+\frac{\nu-\nu_{m}}{2}\big{(}(1+\theta)\nabla\phi_{j,h}^{n}-\theta\nabla\phi_{j,h}^{n-1},\nabla l_{h}\big{)}$ $\displaystyle+((2\phi_{j,h}^{n}-\phi_{j,h}^{n-1})\cdot\nabla w_{j}(t^{n+1}),l_{h})+((2v_{j,h}^{n}-v_{j,h}^{n-1})\cdot\nabla\psi_{j,h}^{n+1},l_{h})$ $\displaystyle-(v_{j,h}^{{}^{\prime}n}\cdot\nabla(\psi_{j,h}^{n+1}-2\psi_{j,h}^{n}+\psi_{j,h}^{n-1}),l_{h})=\frac{\nu-\nu_{m}}{2}\big{(}(1+\theta)\nabla\eta_{j,v}^{n}-\theta\nabla\eta_{j,v}^{n-1},\nabla l_{h}\big{)}$ $\displaystyle+\frac{\nu+\nu_{m}}{2}(\nabla\eta_{j,w}^{n+1},\nabla l_{h})+((2\eta_{j,v}^{n}-\eta_{j,v}^{n-1})\cdot\nabla w_{j}(t^{n+1}),l_{h})+((2v_{j,h}^{n}-v_{j,h}^{n-1})\cdot\nabla\eta_{j,w}^{n+1},l_{h})$ $\displaystyle-(v_{j,h}^{{}^{\prime}n}\cdot\nabla(\eta_{j,w}^{n+1}-2\eta_{j,w}^{n}+\eta_{j,w}^{n-1}),l_{h})+G_{2}(t,v_{j},w_{j},l_{h}).$ (38) Choose $\chi_{h}=\phi_{j,h}^{n+1},l_{h}=\psi_{j,h}^{n+1}$ and use the identity (28) in (37) and (38) to obtain $\displaystyle\frac{1}{4\Delta t}(\|\phi_{j,h}^{n+1}\|^{2}-\|\phi_{j,h}^{n}\|^{2}+\|2\phi_{j,h}^{n+1}-\phi_{j,h}^{n}\|^{2}-\|2\phi_{j,h}^{n}-\phi_{j,h}^{n-1}\|^{2}$ $\displaystyle+\|\phi_{j,h}^{n+1}-2\phi_{j,h}^{n}+\phi_{j,h}^{n-1}\|^{2})+\frac{\nu+\nu_{m}}{2}\|\nabla\phi_{j,h}^{n+1}\|^{2}$ $\displaystyle\leq(1+\theta)\frac{|\nu-\nu_{m}|}{2}\bigg{\\{}|\big{(}\nabla\psi_{j,h}^{n},\nabla\phi_{j,h}^{n+1}\big{)}|+|\big{(}\nabla\eta_{j,w}^{n},\nabla\phi_{j,h}^{n+1}\big{)}|\bigg{\\}}$ $\displaystyle+\theta\frac{|\nu-\nu_{m}|}{2}\bigg{\\{}|\big{(}\nabla\psi_{j,h}^{n-1},\nabla\phi_{j,h}^{n+1}\big{)}|+|\big{(}\nabla\eta_{j,w}^{n-1},\nabla\phi_{j,h}^{n+1}\big{)}|\bigg{\\}}+\frac{\nu+\nu_{m}}{2}|\big{(}\nabla\eta_{j,v}^{n+1},\nabla\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle+|\big{(}(2\eta_{j,w}^{n}-\eta_{j,w}^{n-1})\cdot\nabla v_{j}(t^{n+1}),\phi_{j,h}^{n+1}\big{)}|+|\big{(}(2w_{j,h}^{n}-w_{j,h}^{n-1})\cdot\nabla\eta_{j,v}^{n+1},\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle+|\big{(}w_{j,h}^{{}^{\prime}n}\cdot\nabla(\eta_{j,v}^{n+1}-2\eta_{j,v}^{n}+\eta_{j,v}^{n-1}),\phi_{j,h}^{n+1}\big{)}|+|\big{(}(2\psi_{j,h}^{n}-\psi_{j,h}^{n-1})\cdot\nabla v_{j}(t^{n+1}),\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle+|\big{(}w_{j,h}^{{}^{\prime}n}\cdot\nabla(\phi_{j,h}^{n+1}-2\phi_{j,h}^{n}+\phi_{j,h}^{n-1}),\phi_{j,h}^{n+1}\big{)}|+|G_{1}(t,v_{j},w_{j},\phi_{j,h}^{n+1})|,$ (39) and $\displaystyle\frac{1}{4\Delta t}(\|\psi_{j,h}^{n+1}\|^{2}-\|\psi_{j,h}^{n}\|^{2}+\|2\psi_{j,h}^{n+1}-\psi_{j,h}^{n}\|^{2}-\|2\psi_{j,h}^{n}-\psi_{j,h}^{n-1}\|^{2}$ $\displaystyle+\|\psi_{j,h}^{n+1}-2\psi_{j,h}^{n}+\psi_{j,h}^{n-1}\|^{2})+\frac{\nu+\nu_{m}}{2}\|\nabla\psi_{j,h}^{n+1}\|^{2}$ $\displaystyle\leq(1+\theta)\frac{|\nu-\nu_{m}|}{2}\bigg{\\{}|\big{(}\nabla\phi_{j,h}^{n},\nabla\psi_{j,h}^{n+1}\big{)}|+|\big{(}\nabla\eta_{j,v}^{n},\nabla\psi_{j,h}^{n+1}\big{)}|\bigg{\\}}$ $\displaystyle+\theta\frac{|\nu-\nu_{m}|}{2}\bigg{\\{}|\big{(}\nabla\phi_{j,h}^{n-1},\nabla\psi_{j,h}^{n+1}\big{)}|+|\big{(}\nabla\eta_{j,v}^{n-1},\nabla\psi_{j,h}^{n+1}\big{)}|\bigg{\\}}+\frac{\nu+\nu_{m}}{2}|\big{(}\nabla\eta_{j,w}^{n+1},\nabla\psi_{j,h}^{n+1}\big{)}|$ $\displaystyle+|\big{(}(2\eta_{j,v}^{n}-\eta_{j,v}^{n-1})\cdot\nabla w_{j}(t^{n+1}),\psi_{j,h}^{n+1}\big{)}|+|\big{(}(2v_{j,h}^{n}-v_{j,h}^{n-1})\cdot\nabla\eta_{j,w}^{n+1},\psi_{j,h}^{n+1}\big{)}|$ $\displaystyle+|\big{(}v_{j,h}^{{}^{\prime}n}\cdot\nabla(\eta_{j,w}^{n+1}-2\eta_{j,w}^{n}+\eta_{j,w}^{n-1}),\psi_{j,h}^{n+1}\big{)}|+|\big{(}(2\phi_{j,h}^{n}-\phi_{j,h}^{n-1})\cdot\nabla w_{j}(t^{n+1}),\psi_{j,h}^{n+1}\big{)}|$ $\displaystyle+|\big{(}v_{j,h}^{{}^{\prime}n}\cdot\nabla(\psi_{j,h}^{n+1}-2\psi_{j,h}^{n}+\psi_{j,h}^{n-1}),\psi_{j,h}^{n+1}\big{)}|+|G_{2}(t,v_{j},w_{j},\psi_{j,h}^{n+1})|.$ (40) We now turn our attention to bounding the right-hand-side terms of (39).
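(Incidentally, the algebraic identity (28), which drives both the stability and error estimates, is easy to double-check symbolically; a quick scalar-case sketch using SymPy, added here only as a sanity check:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
lhs = (3*a - 4*b + c) * a  # scalar stand-in for the inner product (3a-4b+c, a)
rhs = (a**2 + (2*a - b)**2)/2 - (b**2 + (2*b - c)**2)/2 + (a - 2*b + c)**2/2
assert sp.expand(lhs - rhs) == 0  # identity (28) holds
```
)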
Applying Cauchy-Schwarz and Young's inequalities to the first five terms provides $\displaystyle(1+\theta)\frac{|\nu-\nu_{m}|}{2}|\big{(}\nabla\psi_{j,h}^{n},\nabla\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq(1+\theta)\frac{|\nu-\nu_{m}|}{4}\big{(}\|\nabla\psi_{j,h}^{n}\|^{2}+\|\nabla\phi_{j,h}^{n+1}\|^{2}\big{)},$ $\displaystyle\theta\frac{|\nu-\nu_{m}|}{2}|\big{(}\nabla\psi_{j,h}^{n-1},\nabla\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq\theta\frac{|\nu-\nu_{m}|}{4}\big{(}\|\nabla\psi_{j,h}^{n-1}\|^{2}+\|\nabla\phi_{j,h}^{n+1}\|^{2}\big{)},$ $\displaystyle(1+\theta)\frac{|\nu-\nu_{m}|}{2}|\big{(}\nabla\eta_{j,w}^{n},\nabla\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+\frac{9(1+\theta)^{2}(\nu-\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,w}^{n}\|^{2},$ $\displaystyle\theta\frac{|\nu-\nu_{m}|}{2}|\big{(}\nabla\eta_{j,w}^{n-1},\nabla\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+\frac{9\theta^{2}(\nu-\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,w}^{n-1}\|^{2},$ $\displaystyle\frac{\nu+\nu_{m}}{2}|\big{(}\nabla\eta_{j,v}^{n+1},\nabla\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+\frac{9(\nu+\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,v}^{n+1}\|^{2}.$ Applying Young's inequality with (15) to the first three nonlinear terms yields $\displaystyle|\big{(}(2\eta_{j,w}^{n}-\eta_{j,w}^{n-1})\cdot\nabla v_{j}(t^{n+1}),\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq C\|\nabla(2\eta_{j,w}^{n}-\eta_{j,w}^{n-1})\|\|\nabla v_{j}(t^{n+1})\|\|\nabla\phi_{j,h}^{n+1}\|$ $\displaystyle\leq\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+\frac{C}{\alpha}\|\nabla(2\eta_{j,w}^{n}-\eta_{j,w}^{n-1})\|^{2}\|\nabla v_{j}(t^{n+1})\|^{2},$ $\displaystyle|\big{(}(2w_{j,h}^{n}-w_{j,h}^{n-1})\cdot\nabla\eta_{j,v}^{n+1},\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq C\|\nabla(2w_{j,h}^{n}-w_{j,h}^{n-1})\|\|\nabla\eta_{j,v}^{n+1}\|\|\nabla\phi_{j,h}^{n+1}\|$ $\displaystyle\leq\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+\frac{C}{\alpha}\|\nabla(2w_{j,h}^{n}-w_{j,h}^{n-1})\|^{2}\|\nabla\eta_{j,v}^{n+1}\|^{2},$ $\displaystyle|\big{(}w_{j,h}^{{}^{\prime}n}\cdot\nabla(\eta_{j,v}^{n+1}-2\eta_{j,v}^{n}+\eta_{j,v}^{n-1}),\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq C\|\nabla w_{j,h}^{{}^{\prime}n}\|\|\nabla(\eta_{j,v}^{n+1}-2\eta_{j,v}^{n}+\eta_{j,v}^{n-1})\|\|\nabla\phi_{j,h}^{n+1}\|$ $\displaystyle\leq\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+\frac{C}{\alpha}\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\|\nabla(\eta_{j,v}^{n+1}-2\eta_{j,v}^{n}+\eta_{j,v}^{n-1})\|^{2}.$ For the fourth nonlinear term, we use Hölder's inequality, Sobolev embedding theorems, and Poincaré's and Young's inequalities to reveal $\displaystyle|\big{(}(2\psi_{j,h}^{n}-\psi_{j,h}^{n-1})\cdot\nabla v_{j}(t^{n+1}),\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq C\|2\psi_{j,h}^{n}-\psi_{j,h}^{n-1}\|\|\nabla v_{j}(t^{n+1})\|_{L^{6}}\|\phi_{j,h}^{n+1}\|_{L^{3}}$ $\displaystyle\leq C\|2\psi_{j,h}^{n}-\psi_{j,h}^{n-1}\|\|v_{j}(t^{n+1})\|_{H^{2}}\|\phi_{j,h}^{n+1}\|^{1/2}\|\nabla\phi_{j,h}^{n+1}\|^{1/2}$ $\displaystyle\leq C\|2\psi_{j,h}^{n}-\psi_{j,h}^{n-1}\|\|v_{j}(t^{n+1})\|_{H^{2}}\|\nabla\phi_{j,h}^{n+1}\|$ $\displaystyle\leq\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+\frac{C}{\alpha}\|v_{j}(t^{n+1})\|_{H^{2}}^{2}\|2\psi_{j,h}^{n}-\psi_{j,h}^{n-1}\|^{2}.$ Applying the inverse and Young's inequalities with (15) to the fifth nonlinear term yields $\displaystyle|\big{(}w_{j,h}^{{}^{\prime}n}\cdot\nabla(\phi_{j,h}^{n+1}-2\phi_{j,h}^{n}+\phi_{j,h}^{n-1}),\phi_{j,h}^{n+1}\big{)}|$ $\displaystyle\leq C\|\nabla w_{j,h}^{{}^{\prime}n}\|\|\nabla(\phi_{j,h}^{n+1}-2\phi_{j,h}^{n}+\phi_{j,h}^{n-1})\|\|\nabla\phi_{j,h}^{n+1}\|$ $\displaystyle\leq\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+\frac{C}{\alpha h^{2}}\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\|\phi_{j,h}^{n+1}-2\phi_{j,h}^{n}+\phi_{j,h}^{n-1}\|^{2}.$ Using Taylor series and the Cauchy-Schwarz, Poincaré and Young inequalities, the last term is bounded as $\displaystyle|G_{1}(t,v_{j},w_{j},\phi_{j,h}^{n+1})|\leq$ $\displaystyle\frac{\alpha}{36}\|\nabla\phi_{j,h}^{n+1}\|^{2}+(\Delta t)^{2}\frac{9(\nu-\nu_{m})^{2}(1-\theta)^{2}}{\alpha}\|\nabla w_{j,t}(s^{***})\|^{2}$ $\displaystyle+(\Delta t)^{4}\frac{C}{\alpha}\bigg{\\{}\|\nabla w_{j,tt}(s^{*})\|^{2}\|\nabla v_{j}(t^{n+1})\|^{2}+\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\|\nabla v_{j,tt}(s^{**})\|^{2}+\|v_{j,ttt}(s^{****})\|^{2}\bigg{\\}},$ with $s^{*},s^{**},s^{***},s^{****}\in[t^{n-1},t^{n+1}]$. Using these estimates in (39) and reducing produces $\displaystyle\frac{1}{4\Delta t}\big{(}\|\phi_{j,h}^{n+1}\|^{2}-\|\phi_{j,h}^{n}\|^{2}+\|2\phi_{j,h}^{n+1}-\phi_{j,h}^{n}\|^{2}-\|2\phi_{j,h}^{n}-\phi_{j,h}^{n-1}\|^{2}\big{)}$ $\displaystyle+\bigg{(}\frac{1}{4\Delta t}-\frac{C}{\alpha h^{2}}\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\bigg{)}\|\phi_{j,h}^{n+1}-2\phi_{j,h}^{n}+\phi_{j,h}^{n-1}\|^{2}+\frac{\nu+\nu_{m}}{4}\|\nabla\phi_{j,h}^{n+1}\|^{2}$ $\displaystyle\leq(1+\theta)\frac{|\nu-\nu_{m}|}{4}\|\nabla\psi_{j,h}^{n}\|^{2}+\theta\frac{|\nu-\nu_{m}|}{4}\|\nabla\psi_{j,h}^{n-1}\|^{2}+\frac{9(1+\theta)^{2}(\nu-\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,w}^{n}\|^{2}$ $\displaystyle+\frac{9\theta^{2}(\nu-\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,w}^{n-1}\|^{2}+\frac{9(\nu+\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,v}^{n+1}\|^{2}+\frac{C}{\alpha}\|\nabla(2\eta_{j,w}^{n}-\eta_{j,w}^{n-1})\|^{2}\|\nabla v_{j}(t^{n+1})\|^{2}$ $\displaystyle+\frac{C}{\alpha}\|\nabla(2w_{j,h}^{n}-w_{j,h}^{n-1})\|^{2}\|\nabla\eta_{j,v}^{n+1}\|^{2}+\frac{C}{\alpha}\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\|\nabla(\eta_{j,v}^{n+1}-2\eta_{j,v}^{n}+\eta_{j,v}^{n-1})\|^{2}$ $\displaystyle+\frac{C}{\alpha}\|v_{j}(t^{n+1})\|_{H^{2}}^{2}\|2\psi_{j,h}^{n}-\psi_{j,h}^{n-1}\|^{2}+(\Delta t)^{2}\frac{9(\nu-\nu_{m})^{2}(1-\theta)^{2}}{\alpha}\|\nabla w_{j,t}(s^{***})\|^{2}$ $\displaystyle+(\Delta t)^{4}\frac{C}{\alpha}\bigg{\\{}\|\nabla w_{j,tt}(s^{*})\|^{2}\|\nabla v_{j}(t^{n+1})\|^{2}+\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\|\nabla v_{j,tt}(s^{**})\|^{2}+\|v_{j,ttt}(s^{****})\|^{2}\bigg{\\}}.$ (41) Applying similar techniques to (40), we get $\displaystyle\frac{1}{4\Delta t}\big{(}\|\psi_{j,h}^{n+1}\|^{2}-\|\psi_{j,h}^{n}\|^{2}+\|2\psi_{j,h}^{n+1}-\psi_{j,h}^{n}\|^{2}-\|2\psi_{j,h}^{n}-\psi_{j,h}^{n-1}\|^{2}\big{)}$ $\displaystyle+\bigg{(}\frac{1}{4\Delta t}-\frac{C}{\alpha h^{2}}\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2}\bigg{)}\|\psi_{j,h}^{n+1}-2\psi_{j,h}^{n}+\psi_{j,h}^{n-1}\|^{2}+\frac{\nu+\nu_{m}}{4}\|\nabla\psi_{j,h}^{n+1}\|^{2}$ $\displaystyle\leq(1+\theta)\frac{|\nu-\nu_{m}|}{4}\|\nabla\phi_{j,h}^{n}\|^{2}+\theta\frac{|\nu-\nu_{m}|}{4}\|\nabla\phi_{j,h}^{n-1}\|^{2}+\frac{9(1+\theta)^{2}(\nu-\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,v}^{n}\|^{2}$ $\displaystyle+\frac{9\theta^{2}(\nu-\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,v}^{n-1}\|^{2}+\frac{9(\nu+\nu_{m})^{2}}{4\alpha}\|\nabla\eta_{j,w}^{n+1}\|^{2}+\frac{C}{\alpha}\|\nabla(2\eta_{j,v}^{n}-\eta_{j,v}^{n-1})\|^{2}\|\nabla w_{j}(t^{n+1})\|^{2}$ $\displaystyle+\frac{C}{\alpha}\|\nabla(2v_{j,h}^{n}-v_{j,h}^{n-1})\|^{2}\|\nabla\eta_{j,w}^{n+1}\|^{2}+\frac{C}{\alpha}\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2}\|\nabla(\eta_{j,w}^{n+1}-2\eta_{j,w}^{n}+\eta_{j,w}^{n-1})\|^{2}$ $\displaystyle+\frac{C}{\alpha}\|w_{j}(t^{n+1})\|_{H^{2}}^{2}\|2\phi_{j,h}^{n}-\phi_{j,h}^{n-1}\|^{2}+(\Delta t)^{2}\frac{9(\nu-\nu_{m})^{2}(1-\theta)^{2}}{\alpha}\|\nabla v_{j,t}(t^{***})\|^{2}$ $\displaystyle+(\Delta t)^{4}\frac{C}{\alpha}\bigg{\\{}\|\nabla v_{j,tt}(t^{*})\|^{2}\|\nabla w_{j}(t^{n+1})\|^{2}+\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2}\|\nabla w_{j,tt}(t^{**})\|^{2}+\|w_{j,ttt}(t^{****})\|^{2}\bigg{\\}},$ (42) with $t^{*},t^{**},t^{***},t^{****}\in[t^{n-1},t^{n+1}]$. Now add (41) and (42), multiply by $4\Delta t$, assume $\Delta t\leq\frac{\alpha h^{2}}{C\max\limits_{1\leq j\leq J}\\{\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2},\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\\}},$ drop non-negative terms, use the stability bound and the regularity assumptions together with $\|\phi_{j,h}^{0}\|=\|\psi_{j,h}^{0}\|=\|\phi_{j,h}^{1}\|=\|\psi^{1}_{j,h}\|=0$ and $\Delta tM=T$, and sum over the time steps to find $\displaystyle\|\phi_{j,h}^{M}$ $\displaystyle\|^{2}+\|2\phi_{j,h}^{M}-\phi_{j,h}^{M-1}\|^{2}+\|\psi_{j,h}^{M}\|^{2}+\|2\psi_{j,h}^{M}-\psi_{j,h}^{M-1}\|^{2}$ $\displaystyle+\alpha\Delta t\sum\limits_{n=2}^{M}\big{(}\|\nabla\phi_{j,h}^{n}\|^{2}+\|\nabla\psi_{j,h}^{n}\|^{2}\big{)}\leq\frac{9(1+\theta)^{2}(\nu-\nu_{m})^{2}}{\alpha}\Delta t\sum\limits_{n=1}^{M-1}(\|\nabla\eta_{j,v}^{n}\|^{2}+\|\nabla\eta_{j,w}^{n}\|^{2})$ $\displaystyle+\frac{9\theta^{2}(\nu-\nu_{m})^{2}}{\alpha}\Delta t\sum\limits_{n=1}^{M-1}(\|\nabla\eta_{j,v}^{n-1}\|^{2}+\|\nabla\eta_{j,w}^{n-1}\|^{2})+\frac{9(\nu+\nu_{m})^{2}}{\alpha}\Delta t\sum\limits_{n=1}^{M-1}(\|\nabla\eta_{j,v}^{n+1}\|^{2}+\|\nabla\eta_{j,w}^{n+1}\|^{2})$ $\displaystyle+\frac{C}{\alpha}\Delta t\sum\limits_{n=1}^{M-1}\bigg{\\{}\|\nabla(2\eta_{j,w}^{n}-\eta_{j,w}^{n-1})\|^{2}\|\nabla v_{j}(t^{n+1})\|^{2}+\|\nabla(2\eta_{j,v}^{n}-\eta_{j,v}^{n-1})\|^{2}\|\nabla w_{j}(t^{n+1})\|^{2}\bigg{\\}}$ $\displaystyle+\frac{C}{\alpha}\Delta t\sum\limits_{n=1}^{M-1}\bigg{\\{}\|\nabla(2w_{j,h}^{n}-w_{j,h}^{n-1})\|^{2}\|\nabla\eta_{j,v}^{n+1}\|^{2}+\|\nabla(2v_{j,h}^{n}-v_{j,h}^{n-1})\|^{2}\|\nabla\eta_{j,w}^{n+1}\|^{2}\bigg{\\}}$ $\displaystyle+\frac{C}{\alpha}\Delta t\sum\limits_{n=1}^{M-1}\bigg{\\{}\|\nabla w_{j,h}^{{}^{\prime}n}\|^{2}\|\nabla(\eta_{j,v}^{n+1}-2\eta_{j,v}^{n}+\eta_{j,v}^{n-1})\|^{2}+\|\nabla v_{j,h}^{{}^{\prime}n}\|^{2}\|\nabla(\eta_{j,w}^{n+1}-2\eta_{j,w}^{n}+\eta_{j,w}^{n-1})\|^{2}\bigg{\\}}$ $\displaystyle+\frac{C}{\alpha}\Delta t\sum\limits_{n=1}^{M-1}\bigg{\\{}\|v_{j}(t^{n+1})\|_{H^{2}}^{2}\|2\psi_{j,h}^{n}-\psi_{j,h}^{n-1}\|^{2}+\|w_{j}(t^{n+1})\|_{H^{2}}^{2}\|2\phi_{j,h}^{n}-\phi_{j,h}^{n-1}\|^{2}\bigg{\\}}$ $\displaystyle+C\left((\Delta t)^{4}+(\nu-\nu_{m})^{2}(1-\theta)^{2}(\Delta t)^{2}\right).$ (43) Applying the discrete Gronwall lemma (Lemma 1), we have $\displaystyle\|\phi_{j,h}^{M}\|^{2}+$ $\displaystyle\|2\phi_{j,h}^{M}-\phi_{j,h}^{M-1}\|^{2}+\|\psi_{j,h}^{M}\|^{2}+\|2\psi_{j,h}^{M}-\psi_{j,h}^{M-1}\|^{2}+\alpha\Delta t\sum\limits_{n=2}^{M}\left(\|\nabla\phi_{j,h}^{n}\|^{2}+\|\nabla\psi_{j,h}^{n}\|^{2}\right)$ $\displaystyle\leq C\left(h^{2k}+(\Delta t)^{4}+(\nu-\nu_{m})^{2}(1-\theta)^{2}\Delta t^{2}\right).$ (44) Using the triangle inequality allows us to write $\displaystyle\|e_{j,v}^{M}\|^{2}+\|e_{j,w}^{M}\|^{2}+\alpha\Delta t\sum\limits_{n=2}^{M}\left(\|\nabla e_{j,v}^{n}\|^{2}+\|\nabla e_{j,w}^{n}\|^{2}\right)\leq 2\bigg{(}\|\phi_{j,h}^{M}\|^{2}+\|\psi_{j,h}^{M}\|^{2}$ $\displaystyle+\alpha\Delta t\sum\limits_{n=2}^{M}\left(\|\nabla\phi_{j,h}^{n}\|^{2}+\|\nabla\psi_{j,h}^{n}\|^{2}\right)+\|\eta_{j,v}^{M}\|^{2}+\|\eta_{j,w}^{M}\|^{2}+\alpha\Delta t\sum\limits_{n=2}^{M}\left(\|\nabla\eta_{j,v}^{n}\|^{2}+\|\nabla\eta_{j,w}^{n}\|^{2}\right)\bigg{)}$ $\displaystyle\leq C\left(h^{2k}+(\Delta t)^{4}+(\nu-\nu_{m})^{2}(1-\theta)^{2}\Delta t^{2}\right).$ (45) Now summing over $j$ and using the triangle inequality completes the proof. ∎

## 4 Numerical experiments

To test the proposed Algorithm 3.1 and the above theory, in this section we present results of numerical experiments. In order to compute the first timestep solutions $v_{j}^{1}$ and $w_{j}^{1}$, we use the first-order ensemble backward-Euler scheme proposed in [47], without the eddy viscosity term, together with the initial conditions $v_{j}^{0}=v_{j}(0)$ and $w_{j}^{0}=w_{j}(0)$. Thus, for the further time evolution, $v_{j}^{0},\hskip 2.84526ptw_{j}^{0},\hskip 2.84526ptv_{j}^{1},\hskip 2.84526pt\text{and}\hskip 2.84526ptw_{j}^{1}$ are used as the two required initial time levels for Algorithm 3.1. For simulations of MHD systems, it is considered important to enforce the solenoidal constraint $\nabla\cdot B=0$ at the discrete level to machine precision [29]. This is because the constraint is a precise physical law, preserved by the induction equation for all time if the initial magnetic field is divergence free [49]. Moreover, it has been shown that for MHD flow simulations $\nabla\cdot B\neq 0$ can produce large errors in the solution [12]. Thus, the $((P_{2})^{2},P_{1}^{disc})$ SV element, which is pointwise divergence-free on a barycenter-refined regular triangular mesh, will be used for the velocity-pressure and magnetic field-magnetic pressure pairs throughout this section. We ran our simulations using the free finite element software FreeFem++ [25] on triangular meshes, and used the direct solver UMFPACK for all the simulations.

### 4.1 Convergence rate verification

To verify the predicted convergence rates of our analysis in Section 3.2, we begin the experiment with a manufactured analytical solution, ${v}=\left(\begin{array}[]{c}\cos y+(1+e^{t})\sin y\\\ \sin x+(1+e^{t})\cos x\end{array}\right),\ {w}=\left(\begin{array}[]{c}\cos y-(1+e^{t})\sin y\\\ \sin x-(1+e^{t})\cos x\end{array}\right),\ p=\sin(x+y)(1+e^{t}),\ \lambda=0,$ on the domain $\Omega=(0,1)^{2}$. Next, we create four different true solutions from the above solution by introducing a perturbation parameter $\epsilon$ as follows: $\displaystyle v_{j}:=\begin{cases}(1+(-1)^{j-1}\epsilon)v&1\leq j<3\\\ (1+(-1)^{j-1}2\epsilon)v&3\leq j\leq 4\end{cases},\hskip 5.69054pt\text{and}\hskip 5.69054ptw_{j}:=\begin{cases}(1+(-1)^{j-1}\epsilon)w&1\leq j<3\\\ (1+(-1)^{j-1}2\epsilon)w&3\leq j\leq 4\end{cases},$ where $j\in\\{1,2,3,4\\}$. By construction, the ensemble averages satisfy $\bar{v_{j}}=v$ and $\bar{w_{j}}=w$. Using the above perturbed solutions, we compute the right-hand-side forcing terms. Dirichlet boundary conditions are used on the boundary of the unit square. Algorithm 3.1 computes the discrete ensemble averages $<v_{h}^{n}>$ and $<w_{h}^{n}>$, which are compared to the true ensemble averages $<v(t^{n})>$ and $<w(t^{n})>$, respectively. We denote the ensemble average error by $<e_{u}>:=<u_{h}>^{n}-<u(t^{n})>$. For our choice of element, the theory predicts the $L^{2}(0,T;H^{1}(\Omega)^{d})$ error to be of $O(h^{2}+\Delta t^{2}+(1-\theta)|\nu-\nu_{m}|\Delta t)$ provided $\Delta t<O(h^{2})$.
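The rate columns reported in Tables 1-4 below follow from successive error ratios in the standard way; a small helper (our own sketch, not the paper's code) reproduces them:

```python
import math

def rates(errors):
    """Observed convergence order between successive halvings: log2(e_i / e_{i+1})."""
    return [round(math.log(e0 / e1, 2), 2) for e0, e1 in zip(errors, errors[1:])]

# The epsilon = 0.001 temporal column of Table 1 below:
print(rates([2.8765e-1, 8.4966e-2, 2.3855e-2, 6.2895e-3, 1.5801e-3]))
# -> [1.76, 1.83, 1.92, 1.99], approaching the predicted second order
```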
In this experiment, we consider $\nu=0.01$ and $\nu_{m}=0.001$, and compute the largest possible $\theta=1/9$ subject to the condition in (24). In this case, $\Delta t^{2}$ dominates over $(1-\theta)|\nu-\nu_{m}|\Delta t$, and thus the error behaves like $O(h^{2}+\Delta t^{2})$. We consider three different choices $\epsilon=0.001$, $0.01$, and $0.1$ for the perturbation parameter. To observe the temporal convergence, we choose a fixed mesh width $h=1/64$, end time $T=1$, and compute with varying timestep sizes. On the other hand, we use a small end time $T=0.001$, a fixed timestep size $\Delta t=T/8$, and compute on successively refined meshes to observe the spatial convergence rate. Tables 1-4 exhibit errors and convergence rates for the variables $v$ and $w$, and we observe second order asymptotic temporal convergence rates and optimal spatial convergence rates for all choices of $\epsilon$. Note that we computed the convergence rates for both variables and for all choices of $\epsilon$ using both SV and weakly divergence-free Taylor-Hood (TH) elements. For this particular problem, we found the same convergence behavior with both the SV and TH elements, and thus the TH results are omitted.

Temporal convergence (fixed $h=1/64$):

| $\Delta t$ | $\|<e_{v}>\|_{2,1}$, $\epsilon=0.001$ | rate | $\|<e_{v}>\|_{2,1}$, $\epsilon=0.01$ | rate | $\|<e_{v}>\|_{2,1}$, $\epsilon=0.1$ | rate |
| --- | --- | --- | --- | --- | --- | --- |
| $\frac{T}{4}$ | 2.8765e-1 | | 2.8767e-1 | | 2.9004e-1 | |
| $\frac{T}{8}$ | 8.4966e-2 | 1.76 | 8.4974e-2 | 1.76 | 8.5986e-2 | 1.75 |
| $\frac{T}{16}$ | 2.3855e-2 | 1.83 | 2.3860e-2 | 1.83 | 2.4048e-2 | 1.84 |
| $\frac{T}{32}$ | 6.2895e-3 | 1.92 | 6.2899e-3 | 1.92 | 6.3445e-3 | 1.92 |
| $\frac{T}{64}$ | 1.5801e-3 | 1.99 | 1.5801e-3 | 1.99 | 1.5938e-3 | 1.99 |

Table 1: Errors and convergence rates for $v$ with $\theta=1/9$, $\nu=0.01$, and $\nu_{m}=0.001$.

Spatial convergence (fixed $T=0.001$, $\Delta t=T/8$):

| $h$ | $\|<e_{v}>\|_{2,1}$, $\epsilon=0.001$ | rate | $\|<e_{v}>\|_{2,1}$, $\epsilon=0.01$ | rate | $\|<e_{v}>\|_{2,1}$, $\epsilon=0.1$ | rate |
| --- | --- | --- | --- | --- | --- | --- |
| $\frac{1}{4}$ | 1.2071e-4 | | 1.2071e-4 | | 1.2071e-4 | |
| $\frac{1}{8}$ | 3.0380e-5 | 1.99 | 3.0380e-5 | 1.99 | 3.0382e-5 | 1.99 |
| $\frac{1}{16}$ | 7.6186e-6 | 2.00 | 7.6186e-6 | 2.00 | 7.6197e-6 | 2.00 |
| $\frac{1}{32}$ | 1.9144e-6 | 1.99 | 1.9144e-6 | 1.99 | 1.9151e-6 | 1.99 |
| $\frac{1}{64}$ | 4.8147e-7 | 1.99 | 4.8147e-7 | 1.99 | 4.8180e-7 | 1.99 |

Table 2: Errors and convergence rates for $v$ with $\theta=1/9$, $\nu=0.01$, and $\nu_{m}=0.001$.

Spatial convergence (fixed $T=0.001$, $\Delta t=T/8$):

| $h$ | $\|<e_{w}>\|_{2,1}$, $\epsilon=0.001$ | rate | $\|<e_{w}>\|_{2,1}$, $\epsilon=0.01$ | rate | $\|<e_{w}>\|_{2,1}$, $\epsilon=0.1$ | rate |
| --- | --- | --- | --- | --- | --- | --- |
| $\frac{1}{4}$ | 2.3107e-4 | | 2.3107e-4 | | 2.3108e-4 | |
| $\frac{1}{8}$ | 5.7827e-5 | 2.00 | 5.7827e-5 | 2.00 | 5.7832e-5 | 2.00 |
| $\frac{1}{16}$ | 1.4539e-5 | 1.99 | 1.4539e-5 | 1.99 | 1.4544e-5 | 1.99 |
| $\frac{1}{32}$ | 3.6966e-6 | 1.98 | 3.6966e-6 | 1.98 | 3.7008e-6 | 1.97 |
| $\frac{1}{64}$ | 9.4949e-7 | 1.96 | 9.4951e-7 | 1.96 | 9.5174e-7 | 1.96 |

Table 3: Errors and convergence rates for $w$ with $\theta=1/9$, $\nu=0.01$, and $\nu_{m}=0.001$.
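The value $\theta=1/9$ used above follows directly from condition (24). A hedged sketch of the selection is given below; note that the inequalities in (24) are strict, so the value returned is the supremum over admissible $\theta$, which is the value taken here in practice:

```python
def theta_max(nu, nu_m):
    """Supremum of theta in [0,1] for condition (24):
    theta/(1+theta) < nu/nu_m < (1+theta)/theta."""
    r = nu / nu_m
    if r == 1.0:
        return 1.0                 # (24) holds for every theta in [0,1]
    bound = r / (1.0 - r) if r < 1.0 else 1.0 / (r - 1.0)
    return min(1.0, bound)

print(theta_max(0.01, 0.001))      # 1/9 = 0.111..., as used in this section
```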
Temporal convergence (fixed $h=1/64$):

| $\Delta t$ | $\|<e_{w}>\|_{2,1}$, $\epsilon=0.001$ | rate | $\|<e_{w}>\|_{2,1}$, $\epsilon=0.01$ | rate | $\|<e_{w}>\|_{2,1}$, $\epsilon=0.1$ | rate |
| --- | --- | --- | --- | --- | --- | --- |
| $\frac{T}{4}$ | 2.4694e-1 | | 2.4694e-1 | | 2.4787e-1 | |
| $\frac{T}{8}$ | 7.7109e-2 | 1.68 | 7.7111e-2 | 1.68 | 7.7527e-2 | 1.68 |
| $\frac{T}{16}$ | 2.2285e-2 | 1.79 | 2.2286e-2 | 1.79 | 2.2455e-2 | 1.79 |
| $\frac{T}{32}$ | 6.0150e-3 | 1.89 | 6.0151e-3 | 1.89 | 6.0531e-3 | 1.89 |
| $\frac{T}{64}$ | 1.5350e-3 | 1.97 | 1.5350e-3 | 1.97 | 1.5440e-3 | 1.97 |

Table 4: Errors and convergence rates for $w$ with $\theta=1/9$, $\nu=0.01$, and $\nu_{m}=0.001$.

### 4.2 MHD channel flow over a step

Next, we consider a domain that is a $30\times 10$ rectangular channel with a $1\times 1$ step five units into the channel from the inlet. No-slip boundary conditions are prescribed for the velocity, and $B=<0,1>^{T}$ is enforced for the magnetic field on the walls. At the inflow, we set $u=<y(10-y)/25,0>^{T}$ and $B=<0,1>^{T}$; the outflow condition uses a channel extension of 10 units, and at the end of the extension we set the outflow velocity and magnetic field equal to the inflow. An ensemble of four different solutions is computed, corresponding to the perturbed initial conditions $u_{j}(0):=\begin{cases}(1+(-1)^{j-1}\epsilon)u_{0}&1\leq j<3\\\ (1+(-1)^{j-1}2\epsilon)u_{0}&3\leq j\leq 4\end{cases}$ and $B_{j}(0):=\begin{cases}(1+(-1)^{j-1}\epsilon)B_{0}&1\leq j<3\\\ (1+(-1)^{j-1}2\epsilon)B_{0}&3\leq j\leq 4\end{cases}$ where $j\in\\{1,2,3,4\\}$, $u_{0}:=<y(10-y)/25,0>^{T}$ and $B_{0}:=<0,1>^{T}$; similarly perturbed inflow and outflow conditions are considered. An unstructured triangular mesh of the domain providing a total of 3316922 degrees of freedom (dof) is used, where velocity $\text{dof}=1473898$, magnetic field $\text{dof}=1473898$, pressure $\text{dof}=184563$, and magnetic pressure $\text{dof}=184563$. The simulations of Algorithm 3.1 are run with various values of $\epsilon$ until $T=40$, with $s=0.001$, $\nu=0.001$, $\nu_{m}=0.01$, $\theta=1/9$, and timestep size $\Delta t=1$. For this viscosity and magnetic diffusivity pair, we compute the largest possible $\theta$ so that the condition in (24) holds. Velocity and magnetic field ensemble average solutions for varying $\epsilon$ are plotted in Figures 1-2 and compared to the usual MHD simulation (which is the $\epsilon=0$ case). We observe that the ensemble average solutions appear to converge to the unperturbed solution as $\epsilon\rightarrow 0$, which is expected from our theory. Though a timestep restriction $\Delta t<O(h^{2})$ appears in our analysis due to the use of the inverse inequality, in this numerical experiment we could choose a larger timestep size and run the simulation successfully for a long time.

Fig. 1: The velocity ensemble solution (shown as streamlines over speed contour) at $T=40$ for MHD channel flow over a step with $\Delta t=1$, $s=0.001$, $\nu=0.001$, $\nu_{m}=0.01$, $\theta=1/9$, velocity $\text{dof}=1473898$, and pressure $\text{dof}=184563$.

Fig. 2: The magnetic field ensemble solution (magnetic field strength) at $T=40$ for MHD channel flow over a step with $\Delta t=1$, $s=0.001$, $\nu=0.001$, $\nu_{m}=0.01$, $\theta=1/9$, magnetic field $\text{dof}=1473898$, and magnetic pressure $\text{dof}=184563$.

## 5 Conclusion and future works

In this paper, we proposed, analyzed, and tested a practically second-order-in-time, optimally accurate in space, decoupled, and efficient algorithm for MHD flow ensemble computations.
The practically second order temporal accuracy is a major improvement over the first order ensemble average algorithm proposed in [47]. The algorithm extends the breakthrough idea for efficient computation of Navier-Stokes flow ensembles [33] to MHD and combines it with the idea of Trenchea [59] to construct a decoupled, stable scheme in terms of Elsässer variables. The key features of the efficiency of the algorithm are: (i) It is a stable decoupled method, split into two Oseen problems, which have identical structure, are much easier to solve, and can be solved simultaneously. (ii) At each time step, all $J$ different linear systems share the same coefficient matrix; as a result, the storage requirement is reduced, a single assembly of the coefficient matrix is required instead of $J$, and preconditioners need to be built only once and can be reused. (iii) It can take advantage of block linear solvers. (iv) No data restrictions are needed on $\nu$ and $\nu_{m}$ to avoid instability due to certain viscous terms. We proved the stability and second order convergence of the algorithm with respect to the timestep size. Numerical experiments were done on a unit square with a manufactured solution and verified the predicted convergence rates. Finally, we applied our scheme to a benchmark channel flow over a step problem and showed that the method performs well.

A timestep restriction appears in our analysis that was not observed in the numerical experiments; thus, further investigation is needed to establish unconditional stability of the scheme. In this paper, we considered flow ensembles subject to slightly different initial conditions and forcing functions; we plan to investigate ensemble behavior where the viscosities and the boundary conditions involve uncertainties. The recent idea [20] of a continuous data assimilation algorithm for a velocity-vorticity formulation of the Navier-Stokes equations can be applied to the MHD flow ensemble. To reduce the computational cost, reduced order modeling (ROM) for ensemble MHD flow computation will be a future research avenue. Recently, it has been shown that data-driven filtered ROM for flow problems [60] works well for complex systems. To further reduce the computational cost of simulating an ensemble MHD system, as well as to obtain more accurate results, it is worth exploring ROM with physically accurate data [48]. We also plan to apply the recent advances [22] in evolve-filter-relax based stabilization of ROMs to uncertainty quantification of the MHD flow ensemble using a stochastic collocation method.

Finite element simulations of the Maxwell equations using nodal-based elements often produce cancellation errors, interface problems [2], and spurious modes [11, 58], which cause unwanted, unphysical solutions. For the magnetic field, the tangential components are continuous across inter-element boundaries, but this is not necessary for the normal components. By the nature of finite-element interpolants, some nodal vectorial elements enforce the continuity of the normal component across the interface, which is not required by the physics. It is now well established that edge elements are more appropriate for the finite element discretization of the Maxwell equations [3, 7, 10], as they enforce only the tangential continuity of the magnetic field across the interfaces. MHD flow ensemble simulations with Nédélec's edge element [51] will be the next research direction.
Acknowledgment. The author thanks Dr. Leo G. Rebholz for his constructive comments and suggestions that greatly improved the manuscript. ## References * [1] M. Akbas, S. Kaya, M. Mohebujjaman, and L. Rebholz. Numerical analysis and testing of a fully discrete, decoupled penalty-projection algorithm for MHD in elsässer variable. International Journal of Numerical Analysis $\&$ Modeling, 13(1):90–113, 2016. * [2] R. Albanese and G. Rubinacci. Analysis of three-dimensional electromagnetic fields using edge elements. Journal of Computational Physics, 108(2):236–245, 1993. * [3] R. Albanese and G. Rubinacci. Finite element methods for the solution of 3D eddy current problems. Advances in Imaging and Electron Physics, 102:1–86, 1997. * [4] D. Arnold and J. Qin. Quadratic velocity/linear pressure Stokes elements. In R. Vichnevetsky, D. Knight, and G. Richter, editors, Advances in Computer Methods for Partial Differential Equations VII, pages 28–34. IMACS, 1992. * [5] L. Barleon, V. Casal, and L. Lenhart. MHD flow in liquid-metal-cooled blankets. Fusion Engineering and Design, 14:401–412, 1991. * [6] J. D. Barrow, R. Maartens, and C.G. Tsagas. Cosmology with inhomogeneous magnetic fields. Physics Reports, 449:131–171, 2007. * [7] R. Beck, R. Hiptmair, and B. Wohlmuth. Hierarchical error estimator for eddy current computation. Numerical mathematics and advanced applications (Jyväskylä, 1999), pages 110–120, 1999. * [8] D. Biskamp. Magnetohydrodynamic Turbulence. Cambridge University Press, Cambridge, 2003. * [9] P. Bodenheimer, G.P. Laughlin, M. Rozyczka, and H.W. Yorke. Numerical methods in astrophysics. Series in Astronomy and Astrophysics, Taylor $\&$ Francis, New York, 2007. * [10] A. Bossavit. A rationale for ‘edge-elements’ in 3-D fields computations. IEEE Transactions on Magnetics, 24(1):74–79, 1988. * [11] A. Bossavit. Solving maxwell equations in a closed cavity, and the question of ‘spurious modes’. IEEE Transactions on magnetics, 26(2):702–705, 1990. * [12] J. Brackbill and D. Barnes. The effect of nonzero $\nabla\cdot\textit{B}=0$ on the numerical solution of the magnetohydrodynamics equations. Journal of Computational Physics, 35:426–430, 1980. * [13] S. C. Brenner and L. R. Scott. The Mathematical Theory of Finite Element Methods, volume 15 of Texts in Applied Mathematics. Springer Science+Business Media, LLC, 2008. * [14] M. Carney, P. Cunningham, J. Dowling, and C. Lee. Predicting probability distributions for surf height using an ensemble of mixture density networks. International Conference on Machine Learning, pages 113–120, 2005\. * [15] P. A. Davidson. An introduction to magnetohydrodynamics. Cambridge Texts in Applied Mathematics, Cambridge University Press, Cambridge, 2001. * [16] E. Dormy and A. M. Soward. Mathematical aspects of natural dynamos. Fluid Mechanics of Astrophysics and Geophysics, Grenoble Sciences. Universite Joseph Fourier, Grenoble, VI, 2007. * [17] P. P. Edwards, C.N.R Rao, N. Kumar, and A. S. Alexandrov. The possibility of a liquid superconductor. ChemPhysChem, 7(9):2015–2021, 2006. * [18] J. A. Fiordilino. A second order ensemble timestepping algorithm for natural convection. SIAM Journal on Numerical Analysis, 56(2):816–837, 2018. * [19] J. A. Font. Gerneral relativistic hydrodynamics and magnetohydrodynamics: hyperbolic system in relativistic astrophysics, in hyperbolic problems: theory, numerics, applications. Springer, Berlin, pages 3–17, 2008. * [20] M. Gardner, A. Larios, L. G. Rebholz, D. Vargun, and C. Zerfas. 
Continuous data assimilation applied to a velocity-vorticity formulation of the 2D Navier-Stokes equations. American Institute of Mathematical Sciences Electronic Research Archive, to appear.
* [21] V. Girault and P.-A. Raviart. Finite Element Methods for Navier-Stokes Equations: Theory and Algorithms. Springer-Verlag, 1986.
* [22] M. Gunzburger, T. Iliescu, M. Mohebujjaman, and M. Schneier. An evolve-filter-relax stabilized reduced order stochastic collocation method for the time-dependent Navier–Stokes equations. SIAM/ASA Journal on Uncertainty Quantification, 7(4):1162–1184, 2019.
* [23] M. Gunzburger, N. Jiang, and Z. Wang. A second-order time-stepping scheme for simulating ensembles of parameterized flow problems. Computational Methods in Applied Mathematics, 19:681–701, 2018.
* [24] H. Hashizume. Numerical and experimental research to solve MHD problem in liquid blanket system. Fusion Engineering and Design, 81:1431–1438, 2006.
* [25] F. Hecht. New development in Freefem++. Journal of Numerical Mathematics, 20:251–266, 2012.
* [26] T. Heister, M. Mohebujjaman, and L. Rebholz. Decoupled, unconditionally stable, higher order discretizations for MHD flow simulation. Journal of Scientific Computing, 71:21–43, 2017.
* [27] J. G. Heywood and R. Rannacher. Finite-element approximation of the nonstationary Navier-Stokes problem. Part IV: Error analysis for second-order time discretization. SIAM Journal on Numerical Analysis, 27:353–384, 1990.
* [28] W. Hillebrandt and F. Kupka. Interdisciplinary aspects of turbulence. Lecture Notes in Physics, Springer-Verlag, Berlin, 756, 2009.
* [29] K. Hu, Y. Ma, and J. Xu. Stable finite element methods preserving $\nabla\cdot{B}=0$ exactly for MHD models. Numerische Mathematik, 135:371–397, 2017.
* [30] N. Jiang. A higher order ensemble simulation algorithm for fluid flows. Journal of Scientific Computing, 64:264–288, 2015.
* [31] N. Jiang. A second-order ensemble method based on a blended backward differentiation formula timestepping scheme for time-dependent Navier–Stokes equations. Numerical Methods for Partial Differential Equations, 33(1):34–61, 2017.
* [32] N. Jiang, S. Kaya, and W. Layton. Analysis of model variance for ensemble based turbulence modeling. Computational Methods in Applied Mathematics, 15:173–188, 2015.
* [33] N. Jiang and W. Layton. An algorithm for fast calculation of flow ensembles. International Journal for Uncertainty Quantification, 4:273–301, 2014.
* [34] N. Jiang and W. Layton. Numerical analysis of two ensemble eddy viscosity numerical regularizations of fluid motion. Numerical Methods for Partial Differential Equations, 31:630–651, 2015.
* [35] N. Jiang and M. Schneier. An efficient, partitioned ensemble algorithm for simulating ensembles of evolutionary MHD flows at low magnetic Reynolds number. Numerical Methods for Partial Differential Equations, 34(6):2129–2152, 2018.
* [36] C. A. Jones and G. Schubert. Thermal and compositional convection in the outer core. Treatise in Geophysics, Core Dynamics, 8:131–185, 2015.
* [37] L. D. Landau and E. M. Lifshitz. Electrodynamics of Continuous Media. Pergamon Press, Oxford, 1960.
* [38] W. Layton. Introduction to the Numerical Analysis of Incompressible Viscous Flows. Computational Science and Engineering. Society for Industrial and Applied Mathematics, 2008.
* [39] J. M. Lewis. Roots of ensemble forecasting. Monthly Weather Review, 133:1865–1885, 2005.
* [40] Y. Li and C. Trenchea. Partitioned second order method for magnetohydrodynamics in Elsässer variables.
Discrete & Continuous Dynamical Systems-B, 23(7):2803, 2018.
* [41] T. F. Lin, J. B. Gilbert, and R. Kossowsky. Sea-water magnetohydrodynamic propulsion for next-generation undersea vehicles. Technical report, Pennsylvania State University Applied Research Laboratory, State College, 1990.
* [42] M. Leutbecher and T. N. Palmer. Ensemble forecasting. Journal of Computational Physics, 227:3515–3539, 2008.
* [43] O. P. Le Maître and O. M. Knio. Spectral Methods for Uncertainty Quantification. Springer, 2010.
* [44] W. J. Martin and M. Xue. Sensitivity analysis of convection of the 24 May 2002 IHOP case using very large ensembles. Monthly Weather Review, 134(1):192–207, 2006.
* [45] D. L. Mitchell and D. U. Gubser. Magnetohydrodynamic ship propulsion with superconducting magnets. Journal of Superconductivity, 1(4):349–364, 1988.
* [46] M. Mohebujjaman. Efficient numerical methods for magnetohydrodynamic flow. Ph.D. Thesis, Clemson University, 2017.
* [47] M. Mohebujjaman and L. G. Rebholz. An efficient algorithm for computation of MHD flow ensembles. Computational Methods in Applied Mathematics, 17:121–137, 2017.
* [48] M. Mohebujjaman, L. G. Rebholz, and T. Iliescu. Physically-constrained data-driven, filtered reduced order modeling of fluid flows. International Journal for Numerical Methods in Fluids, accepted.
* [49] M. Mohebujjaman, S. Shiraiwa, B. LaBombard, J. C. Wright, and K. Uppalapati. Scalability analysis of direct and iterative solvers used to model charging of non-insulated superconducting pancake solenoids. arXiv preprint arXiv:2007.15410, 2020.
* [50] M. Neda, A. Takhirov, and J. Waters. Ensemble calculations for time relaxation fluid flow models. Numerical Methods for Partial Differential Equations, 32(3):757–777, 2016.
* [51] J. C. Nédélec. Mixed finite elements in $\mathbb{R}^{3}$. Numerische Mathematik, 35(3):315–341, 1980.
* [52] P. Olson. Experimental dynamos and the dynamics of planetary cores. Annual Review of Earth and Planetary Sciences, 41:153–181, 2013.
* [53] J. D. G. Osorio and S. G. G. Galiano. Building hazard maps of extreme daily rainy events from PDF ensemble, via REA method, on Senegal river basin. Hydrology and Earth System Sciences, 15:3605–3615, 2011.
* [54] B. Punsly. Black Hole Gravitohydrodynamics. Astrophysics and Space Science Library, Springer-Verlag, Berlin, Second Edition, 355, 2008.
* [55] M. A. Samad and M. Mohebujjaman. MHD heat and mass transfer free convection flow along a vertical stretching sheet in presence of magnetic field with heat generation. Research Journal of Applied Sciences, Engineering and Technology, 1(3):98–106, 2009.
* [56] L. R. Scott and M. Vogelius. Conforming finite element methods for incompressible and nearly incompressible continua. Lectures in Applied Mathematics, 22(2), 1985.
* [57] S. Smolentsev, R. Moreau, L. Buhler, and C. Mistrangelo. MHD thermofluid issues of liquid-metal blankets: phenomena and advances. Fusion Engineering and Design, 85:1196–1205, 2010.
* [58] D. Sun, J. Manges, X. Yuan, and Z. Cendes. Spurious modes in finite-element methods. IEEE Antennas and Propagation Magazine, 37(5):12–24, 1995.
* [59] C. Trenchea. Unconditional stability of a partitioned IMEX method for magnetohydrodynamic flows. Applied Mathematics Letters, 27:97–100, 2014.
* [60] X. Xie, M. Mohebujjaman, L. G. Rebholz, and T. Iliescu. Data-driven filtered reduced order modeling of fluid flows. SIAM Journal on Scientific Computing, 40(3):B834–B857, 2018.
* [61] S. Zhang. A new family of stable mixed finite elements for the 3D Stokes equations.
Mathematics of Computation, 74:543–554, 2005.
# Statistical Analysis of Quantum Annealing

Xinyu Song1, Yazhen Wang2, Shang Wu2, Donggyu Kim3 1 School of Statistics and Management, Shanghai University of Finance and Economics 2 Department of Statistics, University of Wisconsin-Madison 3 College of Business, Korea Advanced Institute of Science and Technology

###### Abstract

Quantum computers use quantum resources to carry out computational tasks and may outperform classical computers in solving certain computational problems. Special-purpose quantum computers such as quantum annealers employ the quantum adiabatic theorem to solve combinatorial optimization problems. In this paper, we compare classical annealing such as simulated annealing and quantum annealing as performed by the D-Wave machines, both theoretically and numerically. We show that if the classical and quantum annealing are characterized by equivalent Ising models, then solving an optimization problem, i.e., finding the minimal energy of each Ising model, by the two annealing procedures is mathematically identical. For quantum annealing, we also derive a probability lower bound on successfully solving an optimization problem by measuring the system at the end of the annealing procedure. Moreover, we present the Markov chain Monte Carlo (MCMC) method to realize quantum annealing by classical computers and investigate its statistical properties. In the numerical section, we discuss the discrepancies between the MCMC based annealing approaches and the quantum annealing approach in solving optimization problems.

Keywords and phrases: Combinatorial optimization, ground state success probability, Hamiltonian, Ising model, Markov chain Monte Carlo (MCMC), quantum computing, quantum annealing

Running title: Quantum Annealing and MCMC

## 1 Introduction

While classical computation follows classical physics and uses electronic transistors to crunch zeros and ones individually, quantum computation employs quantum resources to manage zeros and ones simultaneously and may speed up certain calculation work dramatically (Nielsen and Chuang, 2000; Wang, 2012). Quantum computation strives to understand how to take advantage of the huge information hidden in quantum systems by exploring the enormous potential of atoms and photons. It uses quantum phenomena such as quantum superposition, quantum entanglement and quantum tunneling to compute and process information.

Two major approaches to realize and implement quantum computation are logic-gate based quantum computing and adiabatic quantum computing (Aharonov et al., 2008; Browne, 2014; Deutsch, 1985; DiVincenzo, 1995; Farhi et al., 2000, 2001, 2002; Johnson et al., 2011). The logic-gate based quantum computers are constructed based on quantum circuits with quantum gates and provide the quantum analog of classical universal or general-purpose computers. Intensive efforts are underway around the world by academic labs, large companies, and government institutes to overcome technical problems in constructing universal quantum computers. However, the practical application of general-purpose quantum computers may still be decades away; on the other hand, special-purpose quantum computers such as quantum annealers and quantum simulators can be constructed with capabilities exceeding their classical counterparts (Aharonov and Ta-Shma, 2003; Aharonov et al., 2008; Albash and Lidar, 2016; Britton et al., 2012; Browne, 2014; Brumfiel, 2012; Wang, 2012).
Quantum annealers are physical hardware built to realize quantum annealing; they are used to solve combinatorial optimization problems and to realize Monte Carlo sampling more efficiently. Annealing materials by first heating and then slowly cooling in order to reduce their brittleness is an ancient technology, which has been used in making and refining materials such as glass and metal. Computers can be employed to reproduce this process, which creates simulated annealing as an optimization tool. It is based on the analogy between the behavior of a complex physical system with multiple degrees of freedom and an optimization problem of finding the global minimum of an objective function defined by many parameters. The objective function of the optimization problem can be regarded as the energy of the physical system; finding the minimum energy configurations, or ground states, of the many-body physical system is then equivalent to solving the corresponding optimization problem. Computer simulation algorithms can be developed to mimic the annealing procedure of the physical system and finally reach the minimum energy configurations. We discuss the classical approach called simulated annealing (SA) and the quantum approach called quantum annealing.

Given an optimization problem, SA takes into account the relative configuration energies and a fictitious time-dependent temperature when exploring the immense search space probabilistically. Specifically, we assign the physical system a temperature that can be regarded as a control parameter introduced artificially. By slowly driving the temperature from a high value to zero, we are able to move the physical system to the state with the lowest value of energy, and hence also arrive at the solution of the optimization problem (Bertsimas and Tsitsiklis, 1993; Kirkpatrick et al., 1983; Wang et al., 2016).

Quantum annealing builds on the physical process of a quantum system whose lowest energy, or equivalently, the ground state of the system, represents the solution to the optimization problem posed. It first establishes a simple quantum system initialized in its ground state, and then gradually moves the simple system to the target complex system. The quantum adiabatic theorem (Farhi et al., 2000, 2001, 2002; Kadowaki and Nishimori, 1998) indicates that the system is likely to stay in the ground state during the gradual evolution; thus, with a certain probability, we can find the solution of the original optimization problem by measuring the system at its final state. That is, quantum annealing replaces the thermal fluctuations in SA by quantum tunneling to keep the system close to its instantaneous ground state during the evolution, similar to the quasi-equilibrium state maintained during the evolution of SA. See Brooke et al. (1999); Isakov et al. (2016); Jörg et al. (2010); Wang et al. (2016) for more details.

Both the classical and quantum annealing methods are powerful tools for solving hard optimization problems, whether achieved by physical devices or simulation methods. The physical scheme either employs a natural system or builds a device to engineer a physical system whose ground state represents the sought-after solution of an optimization problem (McGeoch, 2014).
The simulation approach applies “escape” rules in computer simulations to prevent the system from getting trapped in local minima of a given cost function and eventually reach the global minimum with some probability (Martoňák et al., 2002; Rieger and Kawashima, 1999). In both situations, the system can probabilistically explore its huge configuration space and finally freeze in the global minimum with a certain probability. Through sufficiently many repeated attempts, we can find the global minimum and solve the optimization problem.

Quantum annealing devices are actively pursued by several academic labs and companies such as Google and D-Wave Systems, with uncertain quantum speedup. For example, the D-Wave quantum computer is a commercially available hardware device that is designed and built to implement quantum annealing physically. It is an analog computing device based on superconducting quantum bits (also called qubits) to process the annealing procedure and solve combinatorial optimization problems (Albash et al., 2015; Boixo et al., 2014, 2016, 2018; Brady and van Dam, 2016; Rønnow et al., 2014; Wang et al., 2016). The D-Wave computers have been used to solve simple optimization problems in graphs and networks, machine learning, artificial intelligence, and computational biology (Bian et al., 2013; O’Gorman et al., 2015; Perdomo-Ortiz et al., 2012, 2015; Rieffel et al., 2015). As an example, the lowest energy alignment of a protein is considered as its preferred state, and protein folding is to find the lowest energy point in its energy landscape; the D-Wave computers can be arranged to manipulate qubits to reach their lowest energy state and solve the folding problem of a simple protein (McGeoch, 2014; Perdomo-Ortiz et al., 2012).

The rest of the paper proceeds as follows. Section 2 briefly reviews quantum mechanics and quantum computation. Section 3 introduces quantum annealing and discusses its implementations by the D-Wave devices and its realizations by the MCMC based methods, called simulated quantum annealing (SQA), in the context of the Ising model. Section 4 carries out simulation experiments by classical computers to illustrate SA and SQA. The results are compared with ground state success probability data obtained from quantum annealing. Section 5 features concluding remarks regarding statistical issues associated with the study of the D-Wave devices and whether those MCMC based annealing methods can provide statistical models for the D-Wave devices.

## 2 A brief quantum background

### 2.1 Notations

For this paper, we consider only finite dimensions and let $\mathbb{C}^{d}$ be the $d$-dimensional complex space. For a vector $\psi$ in $\mathbb{C}^{d}$, we employ the usual notations in quantum science and use the Dirac notations ket $|\cdot\rangle$ and bra $\langle\cdot|$ to denote the column vector $|\psi\rangle$ and the row vector $\langle\psi|$. Let the superscripts $*$, $\prime$ and $\dagger$ be the conjugate of a complex number, the transpose of a vector or matrix, and the conjugate transpose operation, respectively. A natural inner product in $\mathbb{C}^{d}$ is given by $\langle u|v\rangle=\sum_{j=1}^{d}u_{j}^{*}v_{j}=(u_{1}^{*},\cdots,u^{*}_{d})(v_{1},\cdots,v_{d})^{\prime},$ where $\langle u|=(u_{1},\cdots,u_{d})$ and $|v\rangle=(v_{1},\cdots,v_{d})^{\prime}$, and the corresponding modulus is $|u|=\sqrt{\langle u|u\rangle}$.
A matrix ${\mathbf{A}}$ is said to be Hermitian if ${\mathbf{A}}={\mathbf{A}}^{\dagger}$, and a matrix ${\mathbf{U}}$ is said to be unitary if ${\mathbf{U}}{\mathbf{U}}^{\dagger}={\mathbf{U}}^{\dagger}{\mathbf{U}}={\mathbf{I}}$, where ${\mathbf{I}}$ is an identity matrix.

### 2.2 Qubit and superposition

In classical computation, the most fundamental entity is a bit with two mutually exclusive state values $0$ and $1$. The quantum analog of the classical bit is a qubit with two state values $|0\rangle$ and $|1\rangle$, where we use the customary notation $|\cdot\rangle$, called ket, to denote the qubit state. Moreover, quantum computation allows qubits to encode the two states, zeros and ones, simultaneously, which is known as quantum superposition. That is, a qubit can be in a superposition state $|\psi\rangle=\alpha_{0}|0\rangle+\alpha_{1}|1\rangle$, where the complex numbers $\alpha_{0}$ and $\alpha_{1}$ are called amplitudes and satisfy $|\alpha_{0}|^{2}+|\alpha_{1}|^{2}=1$. Thus, the states of a qubit are unit vectors in $\mathbb{C}^{2}$, and the states $|0\rangle$ and $|1\rangle$ form an orthonormal basis for the $\mathbb{C}^{2}$ space; they are known as computational basis states. Unlike a classical bit, which has mutually exclusive states and can be examined to determine whether it is in state $0$ or $1$, a qubit can be zero and one simultaneously, while its state cannot be determined by examining the qubit. Instead, by quantum mechanics, we can measure a qubit $|\psi\rangle$ and obtain either the result $0$, with probability $|\alpha_{0}|^{2}$, or the result $1$, with probability $|\alpha_{1}|^{2}$.

Like classical bits, we can also define multiple qubits. For a $b$-qubit system, its states are unit vectors in a $2^{b}$-dimensional complex vector space with computational basis states of the form $|x_{1}x_{2}\cdots x_{b}\rangle$, $x_{j}=0$ or $1$, $j=1,\ldots,b$. For example, the states of a $2$-qubit system are unit vectors in the $\mathbb{C}^{4}$ space with superposition states $|\psi\rangle=\alpha_{00}\,|00\rangle+\alpha_{01}\,|01\rangle+\alpha_{10}\,|10\rangle+\alpha_{11}\,|11\rangle$, where the amplitudes $\alpha_{x}$ are complex numbers satisfying $|\alpha_{00}|^{2}+|\alpha_{01}|^{2}+|\alpha_{10}|^{2}+|\alpha_{11}|^{2}=1$. Similar to the single qubit case, we obtain a measurement outcome $x$ as one of $00,01,10,11$ with the corresponding probability $|\alpha_{x}|^{2}$. The quantum exponential complexity is then reflected in the exponential growth of the dimensionality $2^{b}$ and of the $2^{b}$ amplitudes needed to specify the superposition states. Thus, as a quantum system evolves, it is able to store and keep track of an exponential number of complex numbers while performing data manipulations and calculations. See Nielsen and Chuang (2000) and Wang (2012) for details.

### 2.3 Quantum physics

Quantum systems are entirely characterized by their states and the time evolution of the states. For a fixed time, the states of a $d$-dimensional quantum system can be characterized by a unit vector $|\psi\rangle$ in $\mathbb{C}^{d}$. To study the quantum system, we perform measurements on an observable ${\mathbf{M}}$, which is a Hermitian matrix on $\mathbb{C}^{d}$. The eigendecomposition of ${\mathbf{M}}$ is assumed to be the following ${\mathbf{M}}=\sum_{a=1}^{r}\lambda_{a}\mbox{\bf Q}_{a},$ where $\lambda_{1},\ldots,\lambda_{r}$ are the real eigenvalues of ${\mathbf{M}}$, and $\mbox{\bf Q}_{a}$ is the projection onto the eigenspace corresponding to the eigenvalue $\lambda_{a}$.
Quantum probability theory says that if we measure ${\mathbf{M}}$ for the quantum system under the state $|\psi\rangle$, we may obtain the measurement outcome $\Lambda$, which is a random variable that takes values in $\{\lambda_{1},\lambda_{2},\ldots,\lambda_{r}\}$ with probability distribution $P(\Lambda=\lambda_{a})={\rm tr}(\mbox{\bf Q}_{a}|\psi\rangle\langle\psi|)=\langle\psi|\mbox{\bf Q}_{a}|\psi\rangle$, $a=1,2,\ldots,r$. Statistically, we can measure ${\mathbf{M}}$ many times to obtain measurement data for making inferences on the quantum states.

We now focus on the time evolution of a quantum system. Let $|\psi(t)\rangle$ be the state of the quantum system at time $t$, which is known as a wave function. The states $|\psi(t_{1})\rangle$ and $|\psi(t_{2})\rangle$ at times $t_{1}$ and $t_{2}$ are connected through $|\psi(t_{2})\rangle={\mathbf{U}}(t_{1},t_{2})\,|\psi(t_{1})\rangle$, where ${\mathbf{U}}(t_{1},t_{2})=\exp[-i\,{\mathbf{H}}\,(t_{2}-t_{1})]$ is a unitary matrix and ${\mathbf{H}}$ is a Hermitian matrix on $\mathbb{C}^{d}$. In this regard, the continuous time evolution of $|\psi(t)\rangle$ is governed by the following Schrödinger equation $\sqrt{-1}\,\frac{\partial|\psi(t)\rangle}{\partial t}={\mathbf{H}}|\psi(t)\rangle\;\;\mbox{or equivalently}\;\;|\psi(t)\rangle=e^{-\sqrt{-1}\,{\mathbf{H}}t}|\psi(0)\rangle,$ (2.1) where ${\mathbf{H}}$ is a possibly time-dependent Hermitian matrix on $\mathbb{C}^{d}$, known as the Hamiltonian of the quantum system governing its quantum evolution; the exponential form in (2.1) holds when ${\mathbf{H}}$ is time-independent. See Cai et al. (2016); Holevo (2011); Sakurai and Napolitano (2017); Shankar (2012); Wang (2012, 2013) and Wang and Xu (2015) for details.

## 3 Quantum annealing and MCMC simulations

In this section, we first review the classical simulated annealing and then discuss quantum annealing. We show that if the simulated and quantum annealing are described by equivalent Ising models, then solving the corresponding optimization problem, or equivalently, finding the minimal energy of each Ising model, is mathematically identical under the two annealing approaches. We also derive the ground state success probability for quantum annealing and present the path-integral Monte Carlo method to mimic quantum annealing by classical simulation. Relevant statistical theorems are presented.

### 3.1 Simulated annealing

The Ising model is often used to describe natural systems in physics, and many optimization problems can be mapped into physical systems described by the Ising model. Examples include the traveling salesman problem, portfolio optimization, integer factoring, social economics networks, protein folding, protein modeling, and statistical genetics. The ground state of the Ising model provides a solution for the optimization problem (Irbäck et al., 1996; Majewski et al., 2001; McGeoch, 2014; Stauffer, 2008). The Ising model is described by a graph ${\cal G}=({\cal V}({\cal G}),{\cal E}({\cal G}))$, where ${\cal V}({\cal G})$ and ${\cal E}({\cal G})$ represent the vertex and edge sets of $\cal G$, respectively. Each vertex is occupied by a random variable taking values in $\{+1,-1\}$, and each edge corresponds to the coupling (or interaction) between the two vertex variables connected by the edge. A configuration ${\mathbf{s}}=\{s_{j},j\in{\cal V}({\cal G})\}$ is defined as a set of values assigned to all vertex variables $s_{j}$, $j\in{\cal V}({\cal G})$. We refer to the vertices as sites and the vertex variables as spins in physics, where $+1$ is for spin up and $-1$ is for spin down.
As a case in point, consider a graph corresponding to a $2$-dimensional lattice with a magnet placed at each lattice site facing either up or down. Given $b$ lattice sites, at each site $j$ we let $s_{j}$ be a binary random variable representing the position of the magnet, where $s_{j}=\pm 1$ can be interpreted as the $j$th magnet pointing up or down, respectively. The classical Ising model has the following Hamiltonian ${\mathbf{H}}^{c}_{I}({\mathbf{s}})=-\sum_{(i,j)\in{\cal E}({\cal G})}J_{ij}s_{i}s_{j}-\sum_{j\in{\cal V}({\cal G})}h_{j}s_{j},$ (3.2) where $(i,j)$ stands for the edge between the sites $i$ and $j$, the first sum is taken over all pairs of vertices with edge $(i,j)\in{\cal E}({\cal G})$, $J_{ij}$ stands for the interaction (or coupling) between sites $i$ and $j$ associated with edge $(i,j)\in{\cal E}({\cal G})$, and $h_{j}$ describes an external magnetic field on vertex $j\in{\cal V}({\cal G})$. We refer to a set of fixed values $\{J_{ij},h_{j}\}$ as one instance of the Ising model. Given a specific configuration ${\mathbf{s}}$, the energy of the Ising model is ${\mathbf{H}}_{I}^{c}({\mathbf{s}})$. According to Boltzmann’s law, the probability of a given configuration ${\mathbf{s}}$ is described by the following Boltzmann (or Gibbs) distribution $P_{\beta}({\mathbf{s}})={e^{-\beta{\mathbf{H}}_{I}^{c}({\mathbf{s}})}\over Z_{\beta}},\qquad Z_{\beta}=\sum_{{\mathbf{s}}}e^{-\beta{\mathbf{H}}_{I}^{c}({\mathbf{s}})},$ (3.3) where $\beta=(k_{B}T)^{-1}$ is an inverse temperature, $k_{B}$ is a generic physical constant called the Boltzmann constant, and $T$ is the absolute temperature; the normalization constant $Z_{\beta}$ is called the partition function. If $k_{B}=1$, then $T$ serves as the fundamental temperature of the system with units of energy, and $\beta$ is reciprocal to the fundamental temperature. The configuration probability $P_{\beta}({\mathbf{s}})$ denotes the probability that the physical system is in a state with configuration ${\mathbf{s}}$ in equilibrium.

When using the Ising model to represent a combinatorial optimization problem, the goal is to find the ground state of the Ising model; that is, we need to find a configuration that minimizes the energy function ${\mathbf{H}}_{I}^{c}({\mathbf{s}})$. If the Ising model contains $b$ sites, then the configuration space is $\{-1,+1\}^{b}$ and the total number of possible configurations is $2^{b}$. We note that the system complexity increases exponentially in the number of sites $b$, and thus it is very difficult to find the ground states and solve the minimization problem numerically when $b$ is large. In fact, the exponentially growing search space prevents us from solving the minimization problem with deterministic exhaustive search algorithms. Instead, annealing methods such as SA can be employed to search the space probabilistically.

To find the configuration with minimal energy, SA uses MCMC methods such as the Metropolis-Hastings algorithm to generate configuration samples from the Boltzmann distribution while decreasing the temperature slowly. We initialize the algorithm with a random spin configuration and randomly flip spins at each step. A new spin configuration is accepted if it lowers the energy; if not, it is accepted probabilistically based on the Metropolis rule. Meanwhile, the temperature is lowered gradually, which reduces the probability of accepting energy-increasing moves while still allowing the system to escape local minima in the early stages. A minimal code sketch of this procedure is given below; the sweep-by-sweep description then follows.
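The following sketch is a demonstration of the SA procedure under illustrative assumptions (a dense random $\pm 1$ coupling instance with $b=16$ spins, zero external field, and the schedule $T_{k}=1/k$); it is not the exact code used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
b = 16                                      # number of spins (illustrative)
J = np.triu(rng.choice([-1.0, 1.0], size=(b, b)), 1)
J = J + J.T                                 # symmetric couplings, zero diagonal
h = np.zeros(b)                             # external fields h_j = 0

def energy(s):
    """H_I^c(s) of eq. (3.2); the 0.5 counts each edge once."""
    return -0.5 * s @ J @ s - h @ s

s = rng.choice([-1, 1], size=b).astype(float)   # random initial configuration s^(0)
for k in range(1, 2001):                    # sweeps with schedule T_k = 1/k
    T = 1.0 / k
    for i in range(b):
        # Flipping spin i changes the energy by dE = 2 s_i (h_i + sum_j J_ij s_j),
        # which matches Delta E_i^(k) below with the other spins at their current values.
        dE = 2.0 * s[i] * (h[i] + J[i] @ s)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                    # Metropolis acceptance rule

print("final energy:", energy(s))
```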
In detail, the initial spin configuration ${\mathbf{s}}^{(0)}=\{s_{j}^{(0)}\}$ is obtained by randomly and independently assigning $+1$ or $-1$ to each spin. The spins are updated in sequence, and one sweep means a complete updating of all spins. At the $k$th sweep, we attempt to flip the $i$th spin from state $s_{i}^{(k-1)}$ to the new state $s_{i}^{(k)}=-s_{i}^{(k-1)}$ while all the other spins remain unchanged. The change of energy caused by the flipping is the following $\Delta E_{i}^{(k)}=-h_{i}(s_{i}^{(k)}-s_{i}^{(k-1)})-\sum_{j=1}^{i-1}J_{ij}s_{j}^{(k)}(s_{i}^{(k)}-s_{i}^{(k-1)})-\sum_{j=i+1}^{b}J_{ij}s_{j}^{(k-1)}(s_{i}^{(k)}-s_{i}^{(k-1)}).$ The state of the $i$th spin is updated from $s_{i}^{(k-1)}$ to $s_{i}^{(k)}$ if the energy is lowered, that is, $\Delta E_{i}^{(k)}\leq 0$; otherwise, the state is updated with probability $\exp(-\Delta E_{i}^{(k)}/T_{k})$, where $T_{k}$ is the annealing schedule to lower the temperature. That is, the new state $s_{i}^{(k)}$ is accepted with probability $\min\{1,\exp(-\Delta E_{i}^{(k)}/T_{k})\}$. Annealing schedules used to lower the temperature often have $T_{k}$ proportional to $1/k$ or $1/\log k$. See Bertsimas and Tsitsiklis (1993); Geman and Geman (1984); Hajek (1988), and Wang et al. (2016).

### 3.2 Quantum annealing

As in the classical annealing method, a graph ${\cal G}$ is used to describe the quantum Ising model, where the vertex set ${\cal V}({\cal G})$ represents the quantum spins, and the edge set ${\cal E}({\cal G})$ denotes the couplings (or interactions) between two quantum spins. Since qubits can be used to realize quantum spins, each vertex can be represented by a qubit. Given $b$ vertices in ${\cal G}$, the quantum system is characterized by a $d$-dimensional complex space where $d=2^{b}$. The quantum state is described by a unit vector in $\mathbb{C}^{d}$, and its dynamic evolution is governed by the Schrödinger equation defined in (2.1) via a quantum Hamiltonian. The Hamiltonian here is a Hermitian matrix of size $d$. The eigenvalues of the quantum Hamiltonian are connected with the energies of the quantum system, and the eigenvector corresponding to the smallest eigenvalue represents a ground state. Define the following $2\times 2$ matrices: ${\mathbf{I}}_{j}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\quad\boldsymbol{\sigma}_{j}^{x}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad\boldsymbol{\sigma}_{j}^{z}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},\qquad j=1,\ldots,b,$ where $\boldsymbol{\sigma}_{j}^{x}$ and $\boldsymbol{\sigma}_{j}^{z}$ are the Pauli matrices in the $x$ and $z$ axes, respectively. We note that the Pauli matrix in the $y$ axis is not needed (Nielsen and Chuang, 2000; Wang, 2012). For the quantum system, each classical vertex variable $s_{j}=\pm 1$ in (3.2) is replaced by $\boldsymbol{\sigma}_{j}^{z}$ for the $j$th quantum spin, which is further realized by a qubit. The two eigenvalues $\pm 1$ of the Pauli matrix $\boldsymbol{\sigma}_{j}^{z}$ correspond to the eigenstates $|\!+1\rangle$ and $|\!-1\rangle$, which represent the spin up state $|\!\uparrow\rangle$ and the spin down state $|\!\downarrow\rangle$, respectively. In total, there are $2^{b}$ possible quantum configurations made by combining the $2b$ eigenstates in the form $|\!\pm 1\rangle$ of the Pauli matrices $\{\boldsymbol{\sigma}_{j}^{z}\}^{b}_{j=1}$.
We replace $s_{j}$ in the classical Ising Hamiltonian ${\mathbf{H}}_{I}^{c}({\mathbf{s}})$ by $\boldsymbol{\sigma}_{j}^{z}$ to obtain the quantum version, that is, ${\mathbf{H}}^{q}_{I}=-\sum_{(i,j)\in{\cal E}({\cal G})}J_{ij}\boldsymbol{\sigma}_{i}^{z}\boldsymbol{\sigma}_{j}^{z}-\sum_{j\in{\cal V}({\cal G})}h_{j}\boldsymbol{\sigma}_{j}^{z},$ (3.4) where $J_{ij}$ is the Ising coupling along the edge $(i,j)\in{\cal E}({\cal G})$, and $h_{j}$ is the local field on the vertex $j\in{\cal V}({\cal G})$. Here we follow the convention in the quantum literature so that $\boldsymbol{\sigma}_{j}^{z}$ and $\boldsymbol{\sigma}_{i}^{z}\boldsymbol{\sigma}_{j}^{z}$ in (3.4) stand for their tensor products with identity matrices $\boldsymbol{\sigma}_{i}^{z}\boldsymbol{\sigma}_{j}^{z}\equiv{\mathbf{I}}_{1}\otimes\cdots\otimes{\mathbf{I}}_{i-1}\otimes\underbrace{\boldsymbol{\sigma}_{i}^{z}\otimes{\mathbf{I}}_{i+1}\otimes\cdots\otimes{\mathbf{I}}_{j-1}\otimes\boldsymbol{\sigma}_{j}^{z}}_{\mbox{ vertices }i\mbox{ and }j}\otimes\,{\mathbf{I}}_{j+1}\otimes\cdots\otimes{\mathbf{I}}_{b},$ $\boldsymbol{\sigma}^{z}_{j}\equiv{\mathbf{I}}_{1}\otimes\cdots\otimes\underbrace{{\mathbf{I}}_{j-1}\otimes\boldsymbol{\sigma}_{j}^{z}\otimes{\mathbf{I}}_{j+1}}_{\mbox{ vertex }j}\otimes\cdots\otimes{\mathbf{I}}_{b}.$ Each term in (3.4) is a tensor product of $b$ matrices of size two; as a result, all terms in ${\mathbf{H}}^{q}_{I}$ are diagonal matrices of size $2^{b}$, and so is ${\mathbf{H}}^{q}_{I}$. In each term, the matrix acting on the $j$th qubit is either the Pauli matrix $\boldsymbol{\sigma}^{z}_{j}$ or the identity matrix ${\mathbf{I}}_{j}$. The quantum convention writes only the Pauli matrices corresponding to real actions and omits the identity matrices and tensor product signs. To find a quantum spin configuration with the minimal energy, or equivalently, a ground state of the quantum Hamiltonian ${\mathbf{H}}^{q}_{I}$, we need to search for an eigenvector of ${\mathbf{H}}^{q}_{I}$ corresponding to its smallest eigenvalue. We note that ${\mathbf{H}}^{q}_{I}$ involves only commuting diagonal matrices, and its eigenvalues are equal to its diagonal entries, which in turn are the $2^{b}$ possible values of the classical Hamiltonian ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$. Thus, the quantum system governed by ${\mathbf{H}}^{q}_{I}$ behaves essentially like a classical system, and finding the minimal energy of the quantum Ising Hamiltonian ${\mathbf{H}}^{q}_{I}$ is equivalent to finding the minimal energy of the classical Ising Hamiltonian ${\mathbf{H}}^{c}_{I}$. This result is established mathematically in the following theorem.

###### Theorem 1

Suppose that the classical and quantum Ising models have respective Hamiltonians ${\mathbf{H}}^{c}_{I}$ and ${\mathbf{H}}^{q}_{I}$ described by the same graph ${\cal G}$ with $b$ vertices, with identical couplings $J_{ij}$ and local fields $h_{j}$. Then the quantum Ising model governed by ${\mathbf{H}}^{q}_{I}$ has the same Boltzmann distribution of observing configurations as the classical Ising model governed by ${\mathbf{H}}_{I}^{c}$, and finding the minimal energy of the quantum Ising model is mathematically identical to finding the minimal energy of the classical Ising model.

###### Remark 2

Theorem 1 indicates that the original optimization problem described in Section 3.1 can be formulated in the quantum framework, and the computational task for solving the optimization problem remains the same as in the classical case.
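As a concrete illustration of Theorem 1, the following minimal sketch (illustrative assumptions: $b=3$ spins on a path graph with unit couplings and a small local field) builds ${\mathbf{H}}^{q}_{I}$ from Kronecker products and checks that it is diagonal with diagonal entries equal to the $2^{b}$ classical energies ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli sigma^z

def embed(op, j, b):
    """Tensor product I ⊗ ... ⊗ op (slot j) ⊗ ... ⊗ I on 2^b dimensions."""
    mats = [op if k == j else I2 for k in range(b)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

b, edges = 3, [(0, 1), (1, 2)]             # path graph (illustrative)
J = {e: 1.0 for e in edges}
h = [0.5, 0.0, -0.5]

Hq = -sum(J[i, j] * embed(sz, i, b) @ embed(sz, j, b) for (i, j) in edges)
Hq -= sum(h[j] * embed(sz, j, b) for j in range(b))

# H_I^q is diagonal; its entries match H_I^c(s) over all 2^b configurations,
# with the Kronecker basis ordering s = (+1,+1,+1), (+1,+1,-1), ...
assert np.allclose(Hq, np.diag(np.diag(Hq)))
energies = [-sum(J[e] * s[e[0]] * s[e[1]] for e in edges)
            - sum(hj * sj for hj, sj in zip(h, s))
            for s in product([1, -1], repeat=b)]
assert np.allclose(np.diag(Hq), energies)
print("minimal energy:", np.diag(Hq).min())
```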
To carry out quantum annealing, it is essential to engineer a transverse magnetic field that is orthogonal to the Ising axis and obtain the corresponding Hamiltonian in the transverse field. The transverse field represents kinetic energy that does not commute with the potential energy ${\mathbf{H}}_{I}^{q}$; therefore, it induces transitions between the up and down states of every single spin and converts the system behavior from classical to quantum. Assume that the transverse magnetic field is governed by the following quantum Hamiltonian ${\mathbf{H}}_{X}=-\sum_{j\in{\cal V}({\cal G})}\boldsymbol{\sigma}_{j}^{x},$ (3.5) where $\boldsymbol{\sigma}_{j}^{x}$ denotes the tensor product of $b$ matrices of size $2$, $\boldsymbol{\sigma}^{x}_{j}\equiv{\mathbf{I}}_{1}\otimes\cdots\otimes\underbrace{{\mathbf{I}}_{j-1}\otimes\boldsymbol{\sigma}_{j}^{x}\otimes{\mathbf{I}}_{j+1}}_{\mbox{ vertex }j}\otimes\cdots\otimes{\mathbf{I}}_{b},$ (3.6) and does not commute with $\boldsymbol{\sigma}_{j}^{z}$ in ${\mathbf{H}}_{I}^{q}$. We note that $\boldsymbol{\sigma}_{j}^{x}$ has two eigenvalues $+1$ and $-1$, with associated eigenvectors $|{\mathbf{v}}_{j,+1}\rangle=(1,1)^{\dagger}$ and $|{\mathbf{v}}_{j,-1}\rangle=(1,-1)^{\dagger}$, respectively. As a result, the eigenvector corresponding to the smallest eigenvalue of ${\mathbf{H}}_{X}$ is $|{\mathbf{v}}_{+}\rangle=|{\mathbf{v}}_{1,+1}\rangle\otimes|{\mathbf{v}}_{2,+1}\rangle\otimes\cdots\otimes|{\mathbf{v}}_{b,+1}\rangle$, and $|{\mathbf{v}}_{+}\rangle$ is the ground state of ${\mathbf{H}}_{X}$.

We start the quantum annealing procedure with a quantum system that is driven by the transverse magnetic field ${\mathbf{H}}_{X}$ and is initialized in its ground state $|{\mathbf{v}}_{+}\rangle$. The system then gradually evolves from the initial Hamiltonian ${\mathbf{H}}_{X}$ to its final target Hamiltonian ${\mathbf{H}}_{I}^{q}$. During the Hamiltonian change, the system tends to stay in the ground states of the instantaneous Hamiltonian via quantum tunneling, based on the adiabatic quantum theorem (Farhi et al., 2000, 2001, 2002; McGeoch, 2014). At the end of the annealing procedure, if the quantum system stays in a ground state of the final Hamiltonian ${\mathbf{H}}_{I}^{q}$, we are able to obtain an optimal solution by measuring the system. Specifically, quantum annealing is realized by an instantaneous Hamiltonian for the Ising model in the transverse field as follows, ${\mathbf{H}}_{D}(t)=A(t){\mathbf{H}}_{X}+B(t){\mathbf{H}}_{I}^{q},\qquad t\in[0,t_{f}],$ (3.7) where $A(t)$ and $B(t)$ are smooth functions of time $t$ that control the annealing schedules, and $t_{f}$ is the total annealing time. To drive the system from ${\mathbf{H}}_{X}$ to ${\mathbf{H}}_{I}^{q}$, we take $A(t_{f})=B(0)=0$, where $A(t)$ is decreasing and $B(t)$ is increasing. It follows that when $t=0$, ${\mathbf{H}}_{D}(0)=A(0){\mathbf{H}}_{X}$, and when $t=t_{f}$, ${\mathbf{H}}_{D}(t_{f})=B(t_{f}){\mathbf{H}}_{I}^{q}$. Since $A(0)$ and $B(t_{f})$ are known scalars, ${\mathbf{H}}_{D}(t)$ has the same eigenvectors as ${\mathbf{H}}_{X}$ at the initial time $t=0$ and as ${\mathbf{H}}_{I}^{q}$ at the final time $t=t_{f}$, where the corresponding eigenvalues differ by the factors $A(0)$ and $B(t_{f})$, respectively. Therefore, ${\mathbf{H}}_{D}(t)$ moves the system from ${\mathbf{H}}_{X}$, initialized in its ground state, to the final target ${\mathbf{H}}_{I}^{q}$.
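As a small illustrative sketch of this construction (assumptions: $b=3$ spins on a path graph with unit couplings, a small local field to make the ground state unique, and hypothetical linear schedules $A(t)=1-t$, $B(t)=t$ with $t_{f}=1$), the following code builds ${\mathbf{H}}_{X}$ and ${\mathbf{H}}_{D}(t)$, checks that $|{\mathbf{v}}_{+}\rangle$ is the uniform superposition, and tracks the spectral gap that the adiabatic theorem ties to a safe annealing speed.

```python
import numpy as np
from functools import reduce

b = 3
I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])    # Pauli sigma^x
sz = np.diag([1.0, -1.0])                  # Pauli sigma^z

def embed(op, j):
    """I ⊗ ... ⊗ op (slot j) ⊗ ... ⊗ I."""
    return reduce(np.kron, [op if k == j else I2 for k in range(b)])

# Target Ising Hamiltonian H_I^q on a path graph with J_ij = 1 and a small
# field h_1 = 0.3 so that the ground state is unique (illustrative choices).
Hq = -sum(embed(sz, i) @ embed(sz, i + 1) for i in range(b - 1)) - 0.3 * embed(sz, 0)
HX = -sum(embed(sx, j) for j in range(b))  # transverse field, eq. (3.5)

def HD(t):                                 # eq. (3.7) with A(t) = 1 - t, B(t) = t
    return (1.0 - t) * HX + t * Hq

# The ground state of H_X is the uniform superposition |v_+>.
w, V = np.linalg.eigh(HX)
assert np.allclose(np.abs(V[:, 0]), 1.0 / np.sqrt(2 ** b))

# Spectral gap along the interpolation path.
gaps = []
for t in np.linspace(0.0, 1.0, 21):
    lam = np.linalg.eigvalsh(HD(t))
    gaps.append(lam[1] - lam[0])
print("minimum spectral gap:", min(gaps))
```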
When the control functions $A(t)$ and $B(t)$ are chosen appropriately, the quantum adiabatic theorem indicates that the annealing procedure driven by (3.7) will have a sufficiently high probability of finding the global minimum of ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$ and solving the minimization problem at the final annealing time $t_{f}$. The following theorem provides the probability lower bound on successfully solving the optimization problem at the final annealing time $t_{f}$ by quantum annealing.

###### Theorem 3

Suppose that the quantum system associated with quantum annealing is driven by ${\mathbf{H}}_{QA}(t)=A(t){\mathbf{H}}_{0}+B(t){\mathbf{H}}_{1}$, $t\in[0,t_{f}]$, where $t_{f}>0$, ${\mathbf{H}}_{0}$ and ${\mathbf{H}}_{1}$ are two quantum Hamiltonians, the annealing schedules $A(t)$ and $B(t)$ are smooth functions satisfying $A(t_{f})=B(0)=0$ with $A(t)$ decreasing and $B(t)$ increasing, and the system is initialized in a ground state of ${\mathbf{H}}_{0}$ at $t=0$. The probability that the lowest energy of ${\mathbf{H}}_{1}$ is obtained by measuring the system at the end of the annealing procedure is then bounded from below by $e^{-\xi}-2^{b}\zeta\Pi e^{2\xi}(1-p_{0}),$ (3.8) where $\zeta$ is the number of ground states, $p_{0}$ is a constant such that $p_{0}\in[0,1]$, $\Pi=\max\left\{\left|\frac{1}{\lambda_{j}(u)-\lambda_{0}(u)}\langle 0^{(\ell)}(u)|\frac{d{\mathbf{H}}_{QA}(ut_{f})}{du}|j(u)\rangle\right|^{2},j\geq 1,\ell=1,\ldots,\zeta,u\in[0,1]\right\},$ $\lambda_{j}(u)$, $j=0,1,2,\ldots$, are the eigenvalues of ${\mathbf{H}}_{QA}(ut_{f})$ listed in increasing order, $|0^{(\ell)}(u)\rangle$, $\ell=1,\ldots,\zeta$, are the ground states corresponding to the smallest eigenvalue $\lambda_{0}(u)$, $|j(u)\rangle$ is any normalized eigenvector corresponding to the eigenvalue $\lambda_{j}(u)$, $j\geq 1$, $u\in[0,1]$, $A(u)=(A_{\ell l}(u))$ is a $\zeta$ by $\zeta$ matrix given by $A_{\ell l}(u)=-\sqrt{-1}\langle\check{0}^{(\ell)}(u)|\frac{d}{du}\check{0}^{(l)}(u)\rangle$ for $\ell\neq l$ and $0$ for $\ell=l$, $\xi=\int_{0}^{1}\|A(u)\|du\left(1-\frac{1}{\pi}\int_{0}^{1}\|A(u)\|du\right)^{-1},$ and $\|A(u)\|$ is the spectral norm of $A(u)$. In particular, if there is a unique ground state, then $\zeta=1$ and $\xi=0$, and the probability lower bound in (3.8) becomes $1-2^{b}\Pi(1-p_{0})$.

The ground state success probability of quantum annealing is usually derived under the unique ground state condition in an asymptotic sense; that is, one obtains expressions or bounds for the leading terms of the ground state success probability by letting $t_{f}$ go to infinity (Aharonov et al., 2008; Born and Fock, 1928; McGeoch, 2014; Morita and Nishimori, 2008). The probability lower bound in (3.8) holds for finite $t_{f}$ and does not impose the unique ground state restriction. From the proof, $p_{0}$ can be interpreted as the probability that the quantum system stays in a ground state during the annealing procedure. We note that $\frac{d{\mathbf{H}}_{QA}(u\,t_{f})}{du}=\frac{dA(u\,t_{f})}{du}\,{\mathbf{H}}_{0}+\frac{dB(u\,t_{f})}{du}\,{\mathbf{H}}_{1}$ depends on $u$ only through the derivatives of the annealing schedules $A(t)$ and $B(t)$. By choosing $A(t)$ and $B(t)$ appropriately, we can keep the probability lower bound in (3.8) away from zero and thus guarantee that quantum annealing finds the lowest energy of ${\mathbf{H}}_{1}$ with some probability.
In Theorem 3, we may take ${\mathbf{H}}_{0}={\mathbf{H}}_{X}$ given by (3.5), ${\mathbf{H}}_{1}={\mathbf{H}}_{I}^{q}$ defined in (3.4), and ${\mathbf{H}}_{QA}={\mathbf{H}}_{D}$ in (3.7); then, by Theorems 1 and 3 together, we conclude that the quantum annealing driven by (3.7) can find the global minimum of ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$ and solve the minimization problem with a certain probability.

### 3.3 Simulated quantum annealing with the path-integral Monte Carlo method

In this section, we discuss how to employ the classical Monte Carlo method to mimic the behavior of quantum annealing and derive relevant theoretical results. The Metropolis-Hastings algorithm is one of the most popular ways to generate a Markov chain, and we review the homogeneous case first. To obtain random samples from a target probability distribution $q(x)$ using Metropolis-Hastings, we first choose an arbitrary point $x^{(0)}$ and then generate a candidate $y^{(0)}$ for the next sample using a generation probability $P(y,x^{(0)})$ that is symmetric, that is, $P(x,y)=P(y,x)$. We accept $y^{(0)}$ as the next sample $x^{(1)}$ with acceptance probability $A(y^{(0)},x^{(0)})$ and reject $y^{(0)}$ with probability $1-A(y^{(0)},x^{(0)})$. One common choice of the acceptance probability is $A(y,x)=\min\left(1,q(y)/q(x)\right)$. We repeat the above steps to obtain a sequence of random samples from $q(x)$. Algorithm 1 summarizes the Metropolis-Hastings algorithm.

Algorithm 1 Metropolis-Hastings Algorithm

Input: (1) an initial state $x^{(0)}$; (2) a generation probability $P(y,x)$; (3) an acceptance probability $A(y,x)$.

For $t=0,1,2,\ldots$:

1. Generate $y^{(t)}\sim P(y,x^{(t)})$;
2. Take $x^{(t+1)}=\begin{cases}y^{(t)}&\text{ with probability }A(y^{(t)},x^{(t)})\\ x^{(t)}&\text{ with probability }1-A(y^{(t)},x^{(t)}).\end{cases}$

The transition probability of Metropolis-Hastings is given by $G(y,x)=P(y,x)A(y,x)$, and a sufficient condition for $q(x)$ to be a stationary distribution is detailed balance, that is, $G(y,x)q(x)=G(x,y)q(y).$ We extend the above homogeneous Markov chain to an inhomogeneous Markov chain and apply the path-integral method to the $d$-dimensional transverse magnetic field Ising model. We first define the transition probability that characterizes the Monte Carlo dynamics. For an Ising model that has $b$ sites, we let $\displaystyle q_{M,\beta}(\mathbf{s},t)=\frac{1}{Z_{M,\beta}(t)}\exp\left(\beta F_{0,M}(\mathbf{s})+\gamma_{\beta}(t)F_{1,M}(\mathbf{s})\right),$ $\displaystyle Z_{M,\beta}(t)=\sum_{\mathbf{s}\in\{-1,1\}^{bM}}\exp\left(\beta F_{0,M}(\mathbf{s})+\gamma_{\beta}(t)F_{1,M}(\mathbf{s})\right);$ $\displaystyle q_{M,\beta}(\mathbf{s})=\frac{1}{Z_{M,\beta}}\exp\left(\beta F_{0,M}(\mathbf{s})\right),\qquad Z_{M,\beta}=\sum_{\mathbf{s}\in\{-1,1\}^{bM}}\exp\left(\beta F_{0,M}(\mathbf{s})\right),$ where $\displaystyle F_{0,M}(\mathbf{s})=\frac{1}{M}\sum_{k=1}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k},\qquad F_{1,M}(\mathbf{s})=\sum_{k=1}^{M}\sum_{i=1}^{b}s_{i,k}s_{i,k+1},$ $\displaystyle\gamma_{\beta}(t)=\frac{1}{2}\log\left(\coth\frac{\beta\Gamma(t)}{M}\right).$ We define the transition probability from state $x\in\{-1,1\}^{bM}$ to state $y\in\{-1,1\}^{bM}$ at time step $t$ as follows $\displaystyle G(y,x;t)=\begin{cases}P(y,x)A(y,x;t)&\text{ if }x\neq y\\ 1-\sum_{z\neq x}P(z,x)A(z,x;t)&\text{ if }x=y\end{cases}$ (3.9) where $P(y,x)$ is the generation probability and $A(y,x;t)$ is the acceptance probability.
Specifically, $P(y,x)$, the probability to generate the next candidate state $y$ from the present state $x$, satisfies the following conditions:

1. (1) $\forall x,y\in\{-1,1\}^{bM}:P(y,x)=P(x,y)\geq 0$;
2. (2) $\forall x\in\{-1,1\}^{bM}:\sum_{y\in\{-1,1\}^{bM}}P(y,x)=1$;
3. (3) $\forall x\in\{-1,1\}^{bM}:P(x,x)=0$;
4. (4) $\forall x,y\in\{-1,1\}^{bM},\exists n>0,\exists z_{1},\ldots,z_{n-1}\in\{-1,1\}^{bM}:\prod_{k=0}^{n-1}P(z_{k+1},z_{k})>0,z_{0}=x,z_{n}=y$.

$A(y,x;t)$, the acceptance probability, is defined as $\displaystyle A(y,x;t)=g\left(\frac{q_{M,\beta}(y,t)}{q_{M,\beta}(x,t)}\right),$ where the acceptance function $g(u)$ satisfies (i) $g$ is monotone increasing, (ii) $0\leq g(u)\leq 1$, and (iii) $g(1/u)=g(u)/u$. Examples include $u/(1+u)$ and $\min(1,u)$. Given the above setup, $q_{M,\beta}(y,t)$ satisfies the detailed balance condition, that is, $G(y,x;t)q_{M,\beta}(x,t)=G(x,y;t)q_{M,\beta}(y,t).$ For a fixed $t$, $q_{M,\beta}(x,t)$ is then the stationary distribution of the homogeneous Markov chain defined by the transition matrix $(G(x,y;t))_{x,y}$. To establish the theoretical results, we define $\displaystyle R=\min\{\max\{d(y,x):y\in\{-1,1\}^{bM}\}:x\in\{-1,1\}^{bM}\setminus S_{m}\},$ $\displaystyle L_{1,M}=\max\left\{|F_{1,M}(x)-F_{1,M}(y)|\,:\;P(y,x)>0,\,x,y\in\{-1,1\}^{bM}\right\},$ where $S_{m}=\{x:x\in\{-1,1\}^{bM},F_{1,M}(y)\leq F_{1,M}(x),\forall y\in\{-1,1\}^{bM}\text{ and }P(y,x)>0\},$ $d(y,x)$ denotes the minimum number of steps necessary to make a transition from $x$ to $y$, and $P(y,x)$ is defined in (3.9). We first consider the marginal distribution. Let $q_{M,\beta}^{1}(\mathbf{s}_{1},t)=\sum_{\mathbf{s}_{2},\ldots,\mathbf{s}_{M}}q_{M,\beta}(\mathbf{s},t),\qquad q_{M,\beta}^{1}(\mathbf{s}_{1})=\sum_{\mathbf{s}_{2},\ldots,\mathbf{s}_{M}}q_{M,\beta}(\mathbf{s}),$ where the sums are over the slices $\mathbf{s}_{2},\ldots,\mathbf{s}_{M}$, $\mathbf{s_{k}}=\{s_{i,k},i=1,\ldots,b\}$ and $\mathbf{s}=\cup_{k=1}^{M}\mathbf{s}_{k}$.

###### Lemma 4

Suppose $\Gamma(t)\geq\frac{M}{\beta}\tanh^{-1}\frac{1}{(t+2)^{2/(RL_{1,M})}}.$ Then $q_{M,\beta}^{1}(\mathbf{s_{1}},t)$ with $\beta$ and $q_{1,\beta/M}(\mathbf{s_{1}},t)$ with $\beta/M$ are asymptotically the same as $t\to\infty$.

Let $X_{M}(t)=\frac{1}{\sqrt{M}}\sum_{k=1}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k},$ where $\mathbf{s}$ is generated by $q_{M,\beta}(\mathbf{s},t)$.

###### Theorem 5

For any given $\epsilon>0$, there are $X_{\infty}$ and $M_{0}(\epsilon)$ such that for any given $M\geq M_{0}(\epsilon)$, there is $t_{0}(\epsilon,M)$ such that for all $t\geq t_{0}(\epsilon,M)$ and any $x\in\mathbb{R}$, $|P(X_{M}(t)\leq x)-P(X_{\infty}\leq x)|<\epsilon,$ if $\Gamma(t)\geq\frac{M}{\beta}\tanh^{-1}\frac{1}{(t+2)^{2/(RL_{1,M})}},$ where $X_{\infty}\sim N(0,\sum_{\langle ij\rangle}J_{ij}^{2})$.

###### Remark 6

The marginal distribution of $\mathbf{s}_{k}$ has the probability mass function (PMF) $P(\mathbf{s}_{k}=s_{k})=\frac{1}{Z_{\beta/M}}\exp\left(\frac{\beta}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}\right).$ By Taylor expansion, we have $Z_{\beta/M}=\sum_{\mathbf{s}}\left(1+\frac{\beta}{M}\sum_{\langle ij\rangle}J_{ij}s_{i}s_{j}+O(M^{-2})\right)=2^{b}+O(M^{-2})$ and $\exp\left(\frac{\beta}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}\right)=1+\frac{\beta}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}+O(M^{-2}),$ which implies that $P(\mathbf{s}_{k}=s_{k})$ converges to the discrete uniform distribution as $M\to\infty$.
That is, $P(\mathbf{s}_{k}=s_{k})\to\frac{1}{2^{b}}\quad\text{ as }M\to\infty.$ Under this limiting distribution, the statistic $\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}$ has mean zero and variance $\sum_{\langle ij\rangle}J_{ij}^{2}$ as $M\to\infty$, consistent with the limit $X_{\infty}$ in Theorem 5.

###### Lemma 7

For given $t$, we have $\left|\left(\sqrt{\frac{1}{2}\sinh\frac{2\beta\Gamma(t)}{M}}\right)^{bM}Z_{M,\beta}(t)-Z_{\beta}(t)\right|=O((\beta/M)^{2}),$ where $Z_{\beta}(t)=Tr\left(\exp\left(\beta\sum_{\langle ij\rangle}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}+\beta\Gamma(t)\sum_{i}\sigma_{i}^{x}\right)\right)$.

## 4 Numerical analysis

### 4.1 Quantum annealing by the D-Wave machine

We investigate data collected from annealing experiments conducted by the D-Wave (DW) machine, where the number of qubits used equals either 485 or 1094. That is, we consider an Ising model with $b$ spins where $b=485$ or $1094$. The Ising model studied in this paper takes the external magnetic field $h_{j}=0$ so that the optimization problem becomes easier. Within each experiment, to create a problem instance, we randomly assigned a value of either $+1$ or $-1$ to each coupler $J_{ij}$ in the DW machine. The same procedure was repeated 100 times so that, in total, 100 problem instances, each with a random assignment of couplings $\{J_{ij},(i,j)\in\cal E(\cal G)\}$ where $J_{ij}=\pm 1$, were obtained. For each of the 100 selected instances, after its couplings were programmed, 1000000 annealing runs were performed on the DW machine. The energy level at the end of each annealing run was recorded.

Theorems 1 and 3 guarantee that quantum annealing can find the global minimum of ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$ by measuring the quantum system at the end of each annealing run with a positive probability. To determine whether the system has found the global minimum or reached the ground state, we adopt the following criterion: given a problem instance, we declare that a particular annealing run obtains the ground state if the final energy level recorded equals the global minimum of ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$ for that instance. For each of the 100 selected instances, we were able to determine whether the ground state had been achieved during a particular run out of the 1000000 runs. We recorded the frequency of finding the ground state among the 1000000 runs and computed the corresponding probability, that is, the ground state success probability.

Figure 1 presents the histograms of the 100 ground state success probabilities for the 100 problem instances for $b=485$ and $b=1094$. Both histograms exhibit a unimodal shape. For $b=485$, the ground state success probability ranges from 0.00024 to 0.28502, while for $b=1094$, it ranges from 0.0001 to 0.00371. As the number of qubits increases, the DW machine finds it harder to reach the minimum energy level for the various problem instances, as the interactions among couplers become more complicated.

Figure 1: Histogram plot of ground state success probability data collected from the DW machine, consisting of 100 problem instances.

### 4.2 SA and SQA via the MCMC method

We approximate quantum annealing with the classical path-integral Monte Carlo method described in Section 3.3. The SQA procedure assumes a Chimera graph where the total number of qubits is $b$.
For the quantum annealing Hamiltonian ${\mathbf{H}}_{D}(t)$ in (3.7), to find the Boltzmann state of the quantum system, we are required to evaluate the canonical partition function $\text{tr}[e^{-\mathbf{H}_{D}(t)/T}]$ for the transverse field quantum Ising model. To approximate the partition function, we use the Trotter formula (Kato, 1978; Suzuki, 1976; Trotter, 1959). Specifically, given the annealing schedules $A(t)$ and $B(t)$ from quantum annealing, to approximate the partition function for the quantum annealing Hamiltonian ${\mathbf{H}}_{D}(t)$ with temperature $T$, we replace the temperature parameter by $T/B(t)$ and the transverse field parameter by $A(t)/B(t)$. This approximation maps the transverse field quantum Ising model to a classical $(2+1)$-dimensional anisotropic Ising model with temperature $\tau T$ and Hamiltonian ${\mathbf{H}}^{c}_{aI}(\mathbf{s})=-\sum\limits_{l=1}^{\tau}\left[B(t)\sum_{(i,j)\in\cal E(\cal G)}J_{ij}s_{il}s_{jl}+J(t)\sum_{j\in\cal V(\cal G)}s_{jl}s_{j,l+1}\right],$ where $s_{il}=\pm 1$ are random variables, $\tau$ is an integer, $J_{ij}$ are the regular couplings along the original 2-dimensional direction of the Ising model, and $l$ is the index for an extra dimension that is often referred to as the imaginary-time dimension, with $J(t)=-\frac{\tau T}{2}\ln\left[\text{tanh}\left(\frac{A(t)}{\tau T}\right)\right]$ as the coupling along the imaginary-time direction. Let $\mathbf{s}_{l}=\{s_{il},i=1,\ldots,b\}$, $1\leq l\leq\tau$, be the $l$th Trotter slice corresponding to one particular configuration. We have now transformed the original 2-dimensional transverse field quantum Ising model into a classical (2+1)-dimensional Ising model with the additional dimension in imaginary time. The imaginary-time dimension has a finite length $\tau$, uniform coupling, and an annealing schedule $J(t)$. Given the classical anisotropic Ising Hamiltonian $\mathbf{H}^{c}_{aI}$, we can approximate the transverse field quantum Ising model by a classical path-integral Monte Carlo method.

With the additional dimension in imaginary time, the Monte Carlo simulation adopts a standard Metropolis-Hastings algorithm in both local and global moves. For the local moves, we perform the usual independent spin flips for all spins embedded within all Trotter slices, while for the global moves, we attempt to flip all the replicas of the same spin embedded within all Trotter slices. We start the SQA algorithm by randomly assigning $\pm 1$ values independently to all spins embedded within all Trotter slices. The initial spin configuration is denoted as ${\mathbf{s}}^{(0)}=\{s^{(0)}_{jl},j\in\mathcal{V}(\mathcal{G}),l=1,\ldots,\tau\}$. We update spins one by one for local moves and spin replicas site by site for global moves. We call a complete update of all spins in both local and global moves a sweep and denote the total number of sweeps by $R$. Let $t_{k}=k/R$, $k=1,\ldots,R$. For the $k$th local sweep, we focus on spin $i$ in the $l$th Trotter slice while keeping all the other spins the same.
The energy change from state $s_{il}^{(k-1)}$ to a new state $s_{il}^{(k)}=-s_{il}^{(k-1)}$ is $\displaystyle\triangle E_{1il}^{(k)}$ $\displaystyle=$ $\displaystyle-B(t_{k})\left[\sum\limits_{j=1}^{i-1}J_{ij}s^{(k)}_{jl}\left(s_{il}^{(k)}-s_{il}^{(k-1)}\right)+\sum\limits_{j=i+1}^{b}J_{ij}s^{(k-1)}_{jl}\left(s_{il}^{(k)}-s_{il}^{(k-1)}\right)\right]$ $\displaystyle-J(t_{k})\left[s_{il}^{(k)}s_{i,l+1}^{(k)}+s_{i,l-1}^{(k)}s_{il}^{(k)}-s_{il}^{(k-1)}s_{i,l+1}^{(k-1)}-s_{i,l-1}^{(k-1)}s_{il}^{(k-1)}\right]$ and the new state $s_{il}^{(k)}$ is accepted with probability $\text{min}\left\{1,\text{exp}\left[-\triangle E^{(k)}_{1il}/(\tau T)\right]\right\}$ during the $k$th local sweep. On the other hand, for the $k$th global sweep, we attempt to flip the spins at site $i$ embedded within all Trotter slices $l$, $l=1,\ldots,\tau$. The energy change from state $\{s_{il}^{(k-1)},l=1,\dots,\tau\}$ to state $\{s_{il}^{(k)}=-s_{il}^{(k-1)},l=1,\ldots,\tau\}$ is $\triangle E_{2i}^{(k)}=-\sum\limits_{l=1}^{\tau}B(t_{k})\left[\sum\limits_{j=1}^{i-1}J_{ij}s^{(k)}_{jl}\left(s_{il}^{(k)}-s_{il}^{(k-1)}\right)+\sum\limits_{j=i+1}^{b}J_{ij}s^{(k-1)}_{jl}\left(s_{il}^{(k)}-s_{il}^{(k-1)}\right)\right]$ and the new state $s_{il}^{(k)}$ is accepted with probability $\text{min}\left\{1,\text{exp}\left[-\triangle E^{(k)}_{2i}/(\tau T)\right]\right\}$ during the $k$th global sweep. To evaluate the original classical Hamiltonian ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$ in (3.2), we simply plug in the configuration ${\mathbf{s}}^{(k)}=\{s_{i}^{(k)},i\in\mathcal{V}(\mathcal{G})\}$ obtained from the first Trotter slice at the last sweep and take $h_{j}=0$ (a compact code sketch of these local and global moves is given below, after the description of the numerical setup). The SQA method based on path-integral Monte Carlo simulation studies the 2-dimensional transverse field quantum Ising model through a (2+1)-dimensional classical Ising model and brings the system toward an equilibrium state at each sweep under the Metropolis-Hastings algorithm.

We now take a Chimera graph that is close to the one adopted by the DW machine, where the total number of spins $b$ equals either 485 or 945, comparable to the total number of qubits studied in Section 4.1. We create 100 problem instances by randomly assigning $\pm 1$ to all couplings and take the following annealing schedules, close to the DW annealing schedules: $A(t)=\begin{cases}8t^{2}-9.6t+2.88,&\text{ if }0\leq t\leq 0.6\\ 0,&\text{ if }0.6<t\leq 1\end{cases}$ $B(t)=5.2t^{2}+0.2t,\quad t\in[0,1].$ We take the temperature $T$ to be 0.1 and the number of Trotter slices $\tau$ to be either 30 or 60. We let the total number of sweeps for the SQA method be 100000, 200000, or 500000. For each of the 100 problem instances, we ran the SQA algorithm 3000 times, and for each annealing run we declare that the particular run finds its ground state if the run yields a minimum value that is the same as the known global minimum of $\mathbf{H}_{I}^{c}(\mathbf{s})$ for the instance. The global minimum values are determined by the SA algorithm described in Section 3.1, where for each assigned problem instance, we run the algorithm with the following numbers of sweeps: 1000000, 5000000 and 10000000, each 3000 times. For a specific problem instance, out of the 9000 SA runs, the minimum energy level reported is considered to be the global minimum energy level. Figures 2 and 3 present the histograms for ground state success probability data obtained by the SA method and the SQA method, respectively.
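The following is a compact illustrative sketch of the SQA local and global moves described above. The scales are assumptions made for demonstration (a small dense random $\pm 1$ coupling instance with $b=8$ spins instead of a Chimera graph, $\tau=8$ Trotter slices, and a short sweep budget); the schedules $A(t)$ and $B(t)$ are those given above. It is not the code used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(3)
b, tau, T, n_sweeps = 8, 8, 0.1, 2000      # illustrative scales
Jmat = np.triu(rng.choice([-1.0, 1.0], size=(b, b)), 1)
Jmat = Jmat + Jmat.T                       # random +/-1 couplings, h_j = 0

A = lambda t: 8 * t**2 - 9.6 * t + 2.88 if t <= 0.6 else 0.0
B = lambda t: 5.2 * t**2 + 0.2 * t

s = rng.choice([-1, 1], size=(tau, b))     # spins s_{il}: slice l, site i
for k in range(1, n_sweeps + 1):
    t = k / n_sweeps
    # J(t): coupling along the imaginary-time direction (A(t) floored to
    # avoid log(0) once the transverse field is switched off).
    Jt = -0.5 * tau * T * np.log(np.tanh(max(A(t), 1e-12) / (tau * T)))
    for l in range(tau):                   # local moves, slice by slice
        for i in range(b):
            up, dn = s[(l + 1) % tau, i], s[(l - 1) % tau, i]
            # dE = 2 s_{il} [B(t) sum_j J_ij s_{jl} + J(t)(s_{i,l+1}+s_{i,l-1})],
            # with the other spins at their current values.
            dE = 2.0 * s[l, i] * (B(t) * (Jmat[i] @ s[l]) + Jt * (up + dn))
            if dE <= 0 or rng.random() < np.exp(-dE / (tau * T)):
                s[l, i] = -s[l, i]
    for i in range(b):                     # global moves: flip site i in all slices
        dE = 2.0 * B(t) * sum(s[l, i] * (Jmat[i] @ s[l]) for l in range(tau))
        if dE <= 0 or rng.random() < np.exp(-dE / (tau * T)):
            s[:, i] = -s[:, i]

print("energy of first Trotter slice:", -0.5 * s[0] @ Jmat @ s[0])
```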
Given the same 100 problem instances and the global minimum energy levels reported by SA, SA on average has a higher probability of finding the minimum values compared to SQA. For the SA method, the histograms exhibit a uniform pattern when $b=485$ and a unimodal pattern when $b=945$ for all sweep numbers. As the number of sweeps increases, the ground state success probability increases on average, so that the patterns become weaker. For the SQA method, the histograms exhibit similar unimodal patterns for all sweep and slice numbers. The additional slices make it more difficult for SQA to find the minimum energy level at our current sweep numbers, compared to SA. For each slice number, the ground state success probability increases with the sweep number for most of the 100 problem instances, so that in general the unimodal pattern becomes weaker as the sweep number increases. For each sweep number, the ground state success probability decreases as the slice number increases from 30 to 60 for most of the 100 problem instances, so that in general the unimodal pattern becomes stronger as the slice number increases from 30 to 60. The histograms for the ground state success probability obtained by the SQA method share a similar pattern with the histograms generated by the D-Wave machine. However, the success probability obtained by the SQA method ranges from 0 to 0.8 when the number of spins equals 485 and from 0 to 0.45 when the number of spins equals 945, while the success probability generated by the D-Wave machine ranges from 0 to 0.3. Figure 2: Histogram plots of ground state success probability data based on the SA algorithm for 100 problem instances; (a) total number of spins: 485, (b) total number of spins: 945. Figure 3: Histogram plots of ground state success probability data based on the SQA algorithm for 100 problem instances; (a) total number of spins: 485, (b) total number of spins: 945. ## 5 Concluding remarks Quantum computation has drawn enormous attention in many frontiers of multiple scientific fields and can give rise to an exponential speedup over classical computers in solving certain computational problems. Since statistics and machine learning nowadays involve heavy computation, there is a great demand for studying statistical issues in theoretical research and experimental work with quantum computers. This paper reviews classical annealing, such as simulated annealing, and discusses quantum annealing as implemented by D-Wave computing devices and by MCMC-based annealing methods. We show that if the classical and quantum annealing are characterized by equivalent Ising models, then solving an optimization problem by the two annealing procedures is mathematically identical. Moreover, we derive a lower bound on the probability of finding the minimal energy, or equivalently, solving the optimization problem, through quantum annealing by measuring the system at the end of the annealing procedure. Attempts to quantify the quantum nature of the D-Wave devices have been not only met with excitement but also confronted with suspicion. Wang et al., (2016) studied the consistency or inconsistency between data obtained from D-Wave devices and data gathered through MCMC-based annealing methods, where the total number of qubits is 100, which is relatively small. In this paper, we demonstrate that a fundamental distinction exists between the classical and quantum annealing methods when the total number of qubits involved is at the level of 500 or 1000. 
The results provide strong evidence that the input-output data of D-Wave devices are inconsistent with the statistical behaviors of the MCMC-based annealing models. ## 6 Appendix: Proofs Denote by $C$ generic constants whose values are free of $m,n,M,d$ and may change from appearance to appearance. ### 6.1 Proof of Theorem 1 Let ${\mathbf{e}}_{j,+1}=(1,0)^{\dagger}$ and ${\mathbf{e}}_{j,-1}=(0,1)^{\dagger}$, $j=1,\cdots,b$; then ${\mathbf{e}}_{j,\pm 1}$ are eigenvectors of the Pauli matrix $\boldsymbol{\sigma}_{j}^{z}$ corresponding to eigenvalues $\pm 1$. For the classical Ising model, given a configuration ${\mathbf{s}}=(s_{1},\cdots,s_{b})$ with energy ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$ in (3.2), we define a unit vector in $\mathbb{C}^{d}$ as follows, ${\mathbf{e}}_{\mathbf{s}}={\mathbf{e}}_{1,s_{1}}\otimes{\mathbf{e}}_{2,s_{2}}\otimes\cdots\otimes{\mathbf{e}}_{b,s_{b}}.$ We show that ${\mathbf{e}}_{{\mathbf{s}}}$ is an eigenvector of ${\mathbf{H}}_{I}^{q}$ with corresponding eigenvalue ${\mathbf{H}}^{c}_{I}({\mathbf{s}})$. Indeed, $\displaystyle\boldsymbol{\sigma}_{j}^{z}{\mathbf{e}}_{j,s_{j}}=s_{j}{\mathbf{e}}_{j,s_{j}},$ $\displaystyle\left({\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\otimes\boldsymbol{\sigma}_{j}^{z}\otimes{\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\right)\left({\mathbf{e}}_{1,s_{1}}\otimes{\mathbf{e}}_{2,s_{2}}\otimes\cdots\otimes{\mathbf{e}}_{b,s_{b}}\right)=s_{j}{\mathbf{e}}_{1,s_{1}}\otimes{\mathbf{e}}_{2,s_{2}}\otimes\cdots\otimes{\mathbf{e}}_{b,s_{b}}=s_{j}{\mathbf{e}}_{\mathbf{s}},$ $\displaystyle\left({\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\otimes\boldsymbol{\sigma}_{i}^{z}\otimes{\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\otimes\boldsymbol{\sigma}_{j}^{z}\otimes{\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\right)\left({\mathbf{e}}_{1,s_{1}}\otimes{\mathbf{e}}_{2,s_{2}}\otimes\cdots\otimes{\mathbf{e}}_{b,s_{b}}\right)=s_{i}s_{j}{\mathbf{e}}_{1,s_{1}}\otimes{\mathbf{e}}_{2,s_{2}}\otimes\cdots\otimes{\mathbf{e}}_{b,s_{b}}=s_{i}s_{j}{\mathbf{e}}_{\mathbf{s}},$ it follows that $\displaystyle{\mathbf{H}}_{I}^{q}{\mathbf{e}}_{\mathbf{s}}=$ $\displaystyle-\sum_{(i,j)\in{\cal E}({\cal G})}J_{ij}\left({\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\otimes\boldsymbol{\sigma}_{i}^{z}\otimes{\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\otimes\boldsymbol{\sigma}_{j}^{z}\otimes{\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\right)\left({\mathbf{e}}_{1,s_{1}}\otimes{\mathbf{e}}_{2,s_{2}}\otimes\cdots\otimes{\mathbf{e}}_{b,s_{b}}\right)$ $\displaystyle-\,\sum_{j\in{\cal V}({\cal G})}h_{j}\left({\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\otimes\boldsymbol{\sigma}_{j}^{z}\otimes{\mathbf{I}}_{2}\otimes\cdots\otimes{\mathbf{I}}_{2}\right)\left({\mathbf{e}}_{1,s_{1}}\otimes{\mathbf{e}}_{2,s_{2}}\otimes\cdots\otimes{\mathbf{e}}_{b,s_{b}}\right)$ $\displaystyle=$ $\displaystyle-\sum_{(i,j)\in{\cal E}({\cal G})}J_{ij}s_{i}s_{j}{\mathbf{e}}_{\mathbf{s}}-\sum_{j\in{\cal V}({\cal G})}h_{j}s_{j}{\mathbf{e}}_{\mathbf{s}}$ $\displaystyle=$ $\displaystyle\left[-\sum_{(i,j)\in{\cal E}({\cal G})}J_{ij}s_{i}s_{j}-\sum_{j\in{\cal V}({\cal G})}h_{j}s_{j}\right]{\mathbf{e}}_{\mathbf{s}}$ $\displaystyle=$ $\displaystyle{\mathbf{H}}^{c}_{I}({\mathbf{s}}){\mathbf{e}}_{\mathbf{s}}.$ Thus, the $2^{b}$ eigenvalues of ${\mathbf{H}}_{I}^{q}$ are ${\mathbf{H}}_{I}^{c}({\mathbf{s}})$, ${\mathbf{s}}\in\\{+1,-1\\}^{b}$, which are precisely the diagonal entries of ${\mathbf{H}}_{I}^{q}$. If ${\mathbf{s}}_{0}$ achieves the global minimum of ${\mathbf{H}}_{I}^{c}({\mathbf{s}})$, then ${\mathbf{H}}_{I}^{c}({\mathbf{s}}_{0})$ is the smallest eigenvalue of ${\mathbf{H}}_{I}^{q}$. 
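The diagonalization argument can be checked numerically for a small system. The sketch below (with a randomly generated toy coupling graph, purely for illustration) builds ${\mathbf{H}}_{I}^{q}$ from Kronecker products and confirms that its diagonal reproduces the classical energies:

```python
import numpy as np
from itertools import product

b = 3                                      # small toy system
rng = np.random.default_rng(1)
J = np.triu(rng.choice([-1.0, 1.0], size=(b, b)), k=1)   # couplings for i < j
h = rng.choice([-1.0, 1.0], size=b)

sz = np.diag([1.0, -1.0])                  # Pauli sigma^z
I2 = np.eye(2)

def embed(op, j):
    """I_2 x ... x op (at position j) x ... x I_2 via Kronecker products."""
    mats = [op if k == j else I2 for k in range(b)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

Hq = sum(-J[i, j] * embed(sz, i) @ embed(sz, j)
         for i in range(b) for j in range(i + 1, b))
Hq += sum(-h[j] * embed(sz, j) for j in range(b))

# Classical energies in the same (lexicographic) ordering of configurations:
Hc = [-sum(J[i, j] * s[i] * s[j] for i in range(b) for j in range(i + 1, b))
      - sum(h[j] * s[j] for j in range(b))
      for s in product([1, -1], repeat=b)]

assert np.allclose(np.diag(Hq), Hc)        # diagonal entries = classical energies
```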
The partition function and Gibbs state of the quantum Ising model are, respectively, given by $Z=\mbox{tr}[e^{-\beta{\mathbf{H}}_{I}^{q}}],\qquad\mbox{\boldmath$\rho$}=\frac{e^{-\beta{\mathbf{H}}_{I}^{q}}}{Z}.$ Since ${\mathbf{H}}_{I}^{q}$ has eigenvalues ${\mathbf{H}}_{I}^{c}({\mathbf{s}})$ with eigenvectors ${\mathbf{e}}_{{\mathbf{s}}}$, it is easy to compute the partition function as follows, $Z=\sum_{\mathbf{s}}\langle{\mathbf{e}}_{\mathbf{s}}|e^{-\beta{\mathbf{H}}_{I}^{q}}|{\mathbf{e}}_{\mathbf{s}}\rangle=\sum_{\mathbf{s}}e^{-\beta{\mathbf{H}}_{I}^{c}({\mathbf{s}})}.$ The probability of observing configuration ${\mathbf{s}}$ is given by $\langle{\mathbf{e}}_{\mathbf{s}}|\mbox{\boldmath$\rho$}|{\mathbf{e}}_{\mathbf{s}}\rangle=\frac{1}{Z}\langle{\mathbf{e}}_{\mathbf{s}}|e^{-\beta{\mathbf{H}}_{I}^{q}}|{\mathbf{e}}_{\mathbf{s}}\rangle=\frac{1}{Z}e^{-\beta{\mathbf{H}}_{I}^{c}({\mathbf{s}})},$ which is equal to the classical Boltzmann distribution. ### 6.2 Proof of Theorem 3 Let $u=t/t_{f}$ be the dimensionless time, and set ${\mathbf{H}}(u)\equiv{\mathbf{H}}_{QA}(u\,t_{f})={\mathbf{H}}_{QA}(t),\qquad|\varphi(u)\rangle\equiv|\psi(u\,t_{f})\rangle=|\psi(t)\rangle.$ Since the state $|\psi(t)\rangle$ of the quantum system in natural time $t$ follows the Schrödinger equation (2.1) with Hamiltonian ${\mathbf{H}}_{QA}(t)$, $|\varphi(u)\rangle$ satisfies $i\frac{d|\varphi(u)\rangle}{du}=t_{f}{\mathbf{H}}(u)|\varphi(u)\rangle,$ (6.10) where we reserve $i$ for the unit imaginary number $\sqrt{-1}$ in the proof of Theorem 3. Since our goal is to study how close the quantum state is to the ground state, we naturally express the quantum state in terms of the instantaneous eigenstates of ${\mathbf{H}}(u)$. Let $\lambda_{0}(u)<\lambda_{1}(u)<\cdots<\lambda_{k}(u)<\cdots$ be the eigenvalues of ${\mathbf{H}}(u)$ listed in increasing order, and denote by $|k^{(\ell)}(u)\rangle$ the normalized instantaneous eigenstates of ${\mathbf{H}}(u)$ corresponding to the $k$-th eigenvalue $\lambda_{k}(u)$. For example, there are $\zeta$ ground states $|0^{(\ell)}(u)\rangle$, $\ell=1,\cdots,\zeta$, corresponding to the smallest eigenvalue $\lambda_{0}(u)$. Since our main analysis targets the ground states, we focus on $|0^{(\ell)}(u)\rangle$, their amplitudes and their relationships with the other eigenstates. First, we need to adjust the eigenstates to meet certain conditions in order to facilitate our analysis. From ${\mathbf{H}}(u)|j^{(l)}(u)\rangle=\lambda_{j}(u)|j^{(l)}(u)\rangle$, we take derivatives on both sides to obtain $\frac{d{\mathbf{H}}(u)}{du}|j^{(l)}(u)\rangle+{\mathbf{H}}(u)\frac{d|j^{(l)}(u)\rangle}{du}=\frac{d\lambda_{j}(u)}{du}|j^{(l)}(u)\rangle+\lambda_{j}(u)\frac{d|j^{(l)}(u)\rangle}{du},$ and thus for $k\neq j$, $\displaystyle\langle k^{(\ell)}(u)|\frac{d{\mathbf{H}}(u)}{du}|j^{(l)}(u)\rangle+\langle k^{(\ell)}(u)|{\mathbf{H}}(u)\frac{d|j^{(l)}(u)\rangle}{du}$ (6.11) $\displaystyle=$ $\displaystyle\langle k^{(\ell)}(u)|\frac{d\lambda_{j}(u)}{du}|j^{(l)}(u)\rangle+\langle k^{(\ell)}(u)|\lambda_{j}(u)\frac{d|j^{(l)}(u)\rangle}{du}.$ (6.12) For orthonormal $|j^{(l)}(u)\rangle$ and $|k^{(\ell)}(u)\rangle$, $\langle j^{(l)}(u)|k^{(\ell)}(u)\rangle=0$ and $\langle k^{(\ell)}(u)|{\mathbf{H}}(u)=\lambda_{k}(u)\langle k^{(\ell)}(u)|$. 
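Continuing the toy example above (reusing `Hq` and `Hc` from the previous sketch), the quantum Gibbs probabilities of the basis states can be compared directly with the classical Boltzmann weights:

```python
from scipy.linalg import expm

beta = 0.7
rho = expm(-beta * Hq)
rho /= np.trace(rho)                            # Gibbs state rho = e^{-beta Hq} / Z

quantum_probs = np.real(np.diag(rho))           # <e_s| rho |e_s>
classical_probs = np.exp(-beta * np.array(Hc))
classical_probs /= classical_probs.sum()

assert np.allclose(quantum_probs, classical_probs)
```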
Substituting these into (6.11) yields $\displaystyle\langle k^{(\ell)}(u)|\frac{d{\mathbf{H}}(u)}{du}|j^{(l)}(u)\rangle+\lambda_{k}(u)\langle k^{(\ell)}(u)|\frac{d|j^{(l)}(u)\rangle}{du}$ $\displaystyle=$ $\displaystyle\frac{d\lambda_{j}(u)}{du}\langle k^{(\ell)}(u)|j^{(l)}(u)\rangle+\lambda_{j}(u)\langle k^{(\ell)}(u)|\frac{d|j^{(l)}(u)\rangle}{du}$ $\displaystyle=$ $\displaystyle\lambda_{j}(u)\langle k^{(\ell)}(u)|\frac{d|j^{(l)}(u)\rangle}{du},$ which leads to $\langle k^{(\ell)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle\equiv\langle k^{(\ell)}(u)|\frac{d|j^{(l)}(u)\rangle}{du}=\frac{1}{\lambda_{j}(u)-\lambda_{k}(u)}\langle k^{(\ell)}(u)|\frac{d{\mathbf{H}}(u)}{du}|j^{(l)}(u)\rangle,\;$ (6.13) for $j\neq k$. Let $|\check{j}^{(l)}(u)\rangle=\exp\\{i\eta_{j}^{(l)}(u)\\}|j^{(l)}(u)\rangle$ be a time-dependent phase shift of $|j^{(l)}(u)\rangle$; that is, we use the accent mark $\check{\ }$ to denote a time-dependent phase shift of the eigenstates, where $\eta_{j}^{(l)}(u)$ satisfies $\langle j^{(l)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle+i\frac{d\eta_{j}^{(l)}(u)}{du}=0,$ which is possible since $\langle j^{(l)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle+\left(\langle j^{(l)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle\right)^{\dagger}=\frac{d}{du}\langle j^{(l)}(u)|j^{(l)}(u)\rangle=0,$ and hence $\langle j^{(l)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle$ is a pure imaginary number. Thus, $\displaystyle\langle\check{j}^{(l)}(u)|\frac{d}{du}|\check{j}^{(l)}(u)\rangle=e^{i\eta_{j}^{(l)}(u)}\langle\check{j}^{(l)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle+i\frac{d\eta_{j}^{(l)}(u)}{du}\langle\check{j}^{(l)}(u)|\check{j}^{(l)}(u)\rangle$ (6.14) $\displaystyle=$ $\displaystyle e^{i\eta_{j}^{(l)}(u)}e^{-i\eta_{j}^{(l)}(u)}\langle j^{(l)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle+i\frac{d\eta_{j}^{(l)}(u)}{du}$ $\displaystyle=$ $\displaystyle\langle j^{(l)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle+i\frac{d\eta_{j}^{(l)}(u)}{du}=0.$ Of course, $\\{|\check{j}^{(l)}(u)\rangle,j=0,1,\cdots\\}$ remains orthonormal, and the pair $(\lambda_{j}(u),|\check{j}^{(l)}(u)\rangle)$ still satisfies ${\mathbf{H}}(u)|\check{j}^{(l)}(u)\rangle=e^{i\eta_{j}^{(l)}(u)}{\mathbf{H}}(u)|j^{(l)}(u)\rangle=e^{i\eta_{j}^{(l)}(u)}\lambda_{j}(u)|j^{(l)}(u)\rangle=\lambda_{j}(u)|\check{j}^{(l)}(u)\rangle,$ and for $j\neq k$, $\displaystyle\langle\check{k}^{(\ell)}(u)|\frac{d}{du}|\check{j}^{(l)}(u)\rangle=e^{i\eta_{j}^{(l)}(u)}\langle\check{k}^{(\ell)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle+i\frac{d\eta_{j}^{(l)}(u)}{du}\langle\check{k}^{(\ell)}(u)|\check{j}^{(l)}(u)\rangle$ (6.15) $\displaystyle=$ $\displaystyle e^{i[\eta_{j}^{(l)}(u)-\eta_{k}^{(\ell)}(u)]}\langle k^{(\ell)}(u)|\frac{d}{du}|j^{(l)}(u)\rangle$ $\displaystyle=$ $\displaystyle\frac{e^{i[\eta_{j}^{(l)}(u)-\eta_{k}^{(\ell)}(u)]}}{\lambda_{j}(u)-\lambda_{k}(u)}\langle k^{(\ell)}(u)|\frac{d{\mathbf{H}}(u)}{du}|j^{(l)}(u)\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{\lambda_{j}(u)-\lambda_{k}(u)}\langle\check{k}^{(\ell)}(u)|\frac{d{\mathbf{H}}(u)}{du}|\check{j}^{(l)}(u)\rangle,$ where the third equality is due to (6.13). 
Now with (6.14)-(6.15) satisfied by the instantaneous eigenstates $|\check{j}^{(l)}(u)\rangle$ of ${\mathbf{H}}(u)$, we use them to express the quantum state $|\varphi(u)\rangle$ as follows, $|\varphi(u)\rangle=\sum_{j,l\geq 0}\alpha_{j}^{(l)}(u)|\check{j}^{(l)}(u)\rangle.$ (6.16) Plugging the above expression into the Schrödinger equation (6.10), we obtain $\sum_{j,l\geq 0}i\frac{d}{du}\left[\alpha_{j}^{(l)}(u)|\check{j}^{(l)}(u)\rangle\right]=\sum_{j,l\geq 0}t_{f}{\mathbf{H}}(u)\left[\alpha_{j}^{(l)}(u)|\check{j}^{(l)}(u)\rangle\right],$ and simple manipulations lead to $\displaystyle\sum_{j,l\geq 0}i\left[\frac{d\alpha_{j}^{(l)}(u)}{du}|\check{j}^{(l)}(u)\rangle+\alpha_{j}^{(l)}(u)\frac{d}{du}|\check{j}^{(l)}(u)\rangle\right]=\sum_{j,l\geq 0}t_{f}\alpha_{j}^{(l)}(u){\mathbf{H}}(u)|\check{j}^{(l)}(u)\rangle$ $\displaystyle=$ $\displaystyle\sum_{j,l\geq 0}t_{f}\alpha_{j}^{(l)}(u)\lambda_{j}(u)|\check{j}^{(l)}(u)\rangle.$ Taking inner products with $\langle\check{k}^{(\ell)}(u)|$ on both sides and noting the scalar nature of $t_{f}$, $\alpha_{j}^{(l)}(u)$ and $\lambda_{j}(u)$, we arrive at $\displaystyle\sum_{j,l\geq 0}i\left[\frac{d\alpha_{j}^{(l)}(u)}{du}\langle\check{k}^{(\ell)}(u)|\check{j}^{(l)}(u)\rangle+\alpha_{j}^{(l)}(u)\langle\check{k}^{(\ell)}(u)|\frac{d}{du}|\check{j}^{(l)}(u)\rangle\right]$ $\displaystyle=$ $\displaystyle\sum_{j,l\geq 0}t_{f}\lambda_{j}(u)\alpha_{j}^{(l)}(u)\langle\check{k}^{(\ell)}(u)|\check{j}^{(l)}(u)\rangle,$ which can be simplified by (6.14) and the orthonormality of $|\check{j}^{(l)}(u)\rangle$ as $\displaystyle\frac{d\alpha_{k}^{(\ell)}(u)}{du}+\sum_{l\neq\ell}\alpha_{k}^{(l)}\langle\check{k}^{(\ell)}(u)|\frac{d}{du}|\check{k}^{(l)}(u)\rangle+\sum_{j\neq k}\sum_{l}\alpha_{j}^{(l)}(u)\langle\check{k}^{(\ell)}(u)|\frac{d}{du}|\check{j}^{(l)}(u)\rangle$ (6.17) $\displaystyle=$ $\displaystyle-it_{f}\lambda_{k}(u)\alpha_{k}^{(\ell)}(u).$ (6.18) Using (6.15), we rewrite (6.17) with $k=0$ as a linear differential equation system for the amplitudes $\alpha_{0}^{(\ell)}(u)$ of the $\zeta$ ground states $\displaystyle\frac{d\alpha_{0}^{(\ell)}(u)}{du}$ $\displaystyle=$ $\displaystyle-it_{f}\lambda_{0}(u)\alpha_{0}^{(\ell)}(u)-\sum_{l\neq\ell}\alpha_{0}^{(l)}(u)\langle\check{0}^{(\ell)}(u)|\frac{d}{du}|\check{0}^{(l)}(u)\rangle$ (6.19) $\displaystyle-\,\sum_{j>0}\sum_{l}\alpha_{j}^{(l)}(u)\frac{1}{\lambda_{j}(u)-\lambda_{0}(u)}\langle\check{0}^{(\ell)}(u)|\frac{d{\mathbf{H}}(u)}{du}|\check{j}^{(l)}(u)\rangle,$ where $\ell=1,\cdots,\zeta$, the sum in the second term is over $l=1,\ldots,\ell-1,\ell+1,\ldots,\zeta$ for the ground states, and the sums in the third term are over all excited states. 
The linear differential equation system (6.19) has the solution $\displaystyle(\alpha_{0}^{(1)}(u),\cdots,\alpha_{0}^{(\zeta)}(u))^{\prime}={\mathbf{U}}(u)(\alpha_{0}^{(1)}(0),\cdots,\alpha_{0}^{(\zeta)}(0))^{\prime}$ (6.20) $\displaystyle\qquad+\,{\mathbf{U}}(u)\int_{0}^{u}[{\mathbf{U}}(x)]^{-1}\sum_{j>0}\sum_{l}\alpha_{j}^{(l)}(x)\frac{1}{\lambda_{j}(x)-\lambda_{0}(x)}\langle\check{{\mathbf{0}}}(x)|\frac{d{\mathbf{H}}(x)}{dx}|\check{j}^{(l)}(x)\rangle dx,$ (6.21) where $\langle\check{\mathbf{0}}(x)|=(\langle\check{0}^{(1)}(x)|,\cdots,\langle\check{0}^{(\zeta)}(x)|)^{\prime}$, and ${\mathbf{U}}$ is a fundamental matrix for the homogeneous linear differential equation system corresponding to (6.19) with initial condition ${\mathbf{U}}(0)={\mathbf{I}}$, that is, the columns of ${\mathbf{U}}$ form a complete linearly independent set of solutions for the homogeneous equation system, $\frac{d\alpha_{0}^{(\ell)}(u)}{du}=-it_{f}\lambda_{0}(u)\alpha_{0}^{(\ell)}(u)-\sum_{l\neq\ell}\langle\check{0}^{(\ell)}(u)|\frac{d}{du}|\check{0}^{(l)}(u)\rangle\,\alpha_{0}^{(l)}(u),$ or in vector-matrix form $\frac{d(\alpha_{0}^{(1)}(u),\cdots,\alpha_{0}^{(\zeta)}(u))^{\prime}}{du}={\mathbf{D}}(u)(\alpha_{0}^{(1)}(u),\cdots,\alpha_{0}^{(\zeta)}(u))^{\prime},$ where ${\mathbf{D}}(u)=-it_{f}\lambda_{0}(u){\mathbf{I}}-i{\mathbf{A}}(u)$ is a matrix of size $\zeta$, with ${\mathbf{A}}(u)=(A_{\ell l}(u))$, $A_{\ell l}(u)=-i\langle\check{0}^{(\ell)}(u)|\frac{d}{du}|\check{0}^{(l)}(u)\rangle$ for $l\neq\ell$, and $A_{\ell\ell}(u)=0$. Since $iA_{\ell l}(u)+[iA_{l\ell}(u)]^{*}=\langle 0^{(\ell)}(u)|\frac{d}{du}|0^{(l)}(u)\rangle+\left(\langle 0^{(l)}(u)|\frac{d}{du}|0^{(\ell)}(u)\rangle\right)^{*}=\frac{d}{du}\langle 0^{(\ell)}(u)|0^{(l)}(u)\rangle=0,$ ${\mathbf{A}}(u)$ is a Hermitian matrix. 
Matrix ${\mathbf{U}}$ has an expression through the so-called Magnus expansion Blanes et al., (2009), ${\mathbf{U}}(u)=\exp\\{-it_{f}\lambda_{0}(u){\mathbf{I}}-\mbox{\boldmath$\Xi$}(u)\\},\;\;\mbox{\boldmath$\Xi$}(u)=\sum_{k=1}^{\infty}i^{k}\mbox{\boldmath$\Xi$}_{k}(u),$ where $\mbox{\boldmath$\Xi$}_{k}(u)$ in the Magnus expansion can be computed by a recursive procedure through the matrices $\boldsymbol{\Upsilon}_{k}^{(j)}$ as follows, $\mbox{\boldmath$\Xi$}_{1}(u)=\int_{0}^{u}{\mathbf{A}}(v)dv,\;\;\mbox{\boldmath$\Xi$}_{k}(u)=\sum_{j=1}^{k-1}\frac{B_{j}}{j!}\int_{0}^{u}\boldsymbol{\Upsilon}_{k}^{(j)}(v)dv,\;\;k\geq 2,$ where $B_{j}$ are the Bernoulli numbers, $\displaystyle\boldsymbol{\Upsilon}_{k}^{(1)}=[\mbox{\boldmath$\Xi$}_{k-1},{\mathbf{A}}],\;\;\boldsymbol{\Upsilon}_{k}^{(k-1)}=ad_{\mbox{\boldmath$\Xi$}_{1}}^{(k-1)}({\mathbf{A}}),$ $\displaystyle\boldsymbol{\Upsilon}_{k}^{(j)}=\sum_{l=1}^{k-j}[\mbox{\boldmath$\Xi$}_{l},\boldsymbol{\Upsilon}_{k-l}^{(j-1)}],\;\;j=2,\ldots,k-1,$ $\displaystyle ad_{\mbox{\boldmath$\Xi$}}^{0}({\mathbf{A}})={\mathbf{A}},\;\;ad_{\mbox{\boldmath$\Xi$}}^{k+1}({\mathbf{A}})=[\mbox{\boldmath$\Xi$},ad_{\mbox{\boldmath$\Xi$}}^{k}({\mathbf{A}})],$ $[{\mathbf{A}},{\mathbf{B}}]={\mathbf{A}}{\mathbf{B}}-{\mathbf{B}}{\mathbf{A}}$ is the matrix commutator of ${\mathbf{A}}$ and ${\mathbf{B}}$, $ad^{k}_{\mbox{\boldmath$\Xi$}}$ is a shorthand for an iterated commutator, and $\|\mbox{\boldmath$\Xi$}_{k}(u)\|\leq\pi\left(\frac{1}{\pi}\int_{0}^{u}\|{\mathbf{A}}(v)\|dv\right)^{k}.$ Then $\displaystyle\|\mbox{\boldmath$\Xi$}(u)\|\leq\sum_{k=1}^{\infty}\|\mbox{\boldmath$\Xi$}_{k}(u)\|\leq$ $\displaystyle\pi\sum_{k=1}^{\infty}\left(\frac{1}{\pi}\int_{0}^{u}\|{\mathbf{A}}(v)\|dv\right)^{k}\leq\int_{0}^{u}\|{\mathbf{A}}(v)\|dv\left(1-\frac{1}{\pi}\int_{0}^{u}\|{\mathbf{A}}(v)\|dv\right)^{-1}$ $\displaystyle\leq$ $\displaystyle\int_{0}^{1}\|{\mathbf{A}}(v)\|dv\left(1-\frac{1}{\pi}\int_{0}^{1}\|{\mathbf{A}}(v)\|dv\right)^{-1}=\xi,$ $e^{-\xi}\leq\exp(-\|\mbox{\boldmath$\Xi$}(u)\|)\leq\|{\mathbf{U}}(u)\|\leq\exp(\|\mbox{\boldmath$\Xi$}(u)\|)\leq e^{\xi}.$ (6.22) Since the system is initialized in the ground states, $\|(\alpha_{0}^{(1)}(0),\cdots,\alpha_{0}^{(\zeta)}(0))\|^{2}=1$. 
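The first two Magnus terms and the bound $\xi$ can be approximated on a grid. Below is a rough sketch, assuming $\mathbf{A}(u)$ is available as a callable returning a Hermitian matrix (the example `A` at the end is made up purely for illustration), with $\|\cdot\|$ taken as the spectral norm:

```python
import numpy as np

def commutator(X, Y):
    return X @ Y - Y @ X

def magnus_first_terms(A, n_grid=2000):
    """Crude left-endpoint quadrature for Xi_1(1) and Xi_2(1)."""
    us = np.linspace(0.0, 1.0, n_grid, endpoint=False)
    du = 1.0 / n_grid
    Xi1 = sum(A(u) for u in us) * du               # Xi_1 = integral of A(v) dv
    Xi2 = np.zeros_like(Xi1)
    running = np.zeros_like(Xi1)
    for u in us:                                   # Xi_2 = -1/2 int [Xi_1(v), A(v)] dv
        Au = A(u)
        Xi2 += -0.5 * commutator(running, Au) * du  # B_1 / 1! = -1/2
        running = running + Au * du
    return Xi1, Xi2

def xi_bound(A, n_grid=2000):
    """xi = (int_0^1 ||A|| dv) / (1 - (1/pi) int_0^1 ||A|| dv)."""
    us = np.linspace(0.0, 1.0, n_grid, endpoint=False)
    total = sum(np.linalg.norm(A(u), 2) for u in us) / n_grid
    return total / (1.0 - total / np.pi)

# Toy Hermitian A(u), purely for illustration:
A = lambda u: np.array([[0.0, 0.3j * u], [-0.3j * u, 0.0]])
Xi1, Xi2 = magnus_first_terms(A)
print(xi_bound(A))
```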
From (6.20) and (6.22) we find $\displaystyle\|(\alpha_{0}^{(1)}(1),\cdots,\alpha_{0}^{(\zeta)}(1))\|^{2}\geq\|{\mathbf{U}}(1)(\alpha_{0}^{(1)}(0),\cdots,\alpha_{0}^{(\zeta)}(0))\|^{2}$ $\displaystyle-\,\left\|{\mathbf{U}}(1)\int_{0}^{1}[{\mathbf{U}}(x)]^{-1}\sum_{j>0}\sum_{l}\alpha_{j}^{(l)}(x)\frac{1}{\lambda_{j}(x)-\lambda_{0}(x)}\langle\check{{\mathbf{0}}}(x)|\frac{d{\mathbf{H}}(x)}{dx}|\check{j}^{(l)}(x)\rangle dx\right\|^{2},$ (6.23) $\displaystyle\|{\mathbf{U}}(1)(\alpha_{0}^{(1)}(0),\cdots,\alpha_{0}^{(\zeta)}(0))\|^{2}\geq e^{-\xi}\|(\alpha_{0}^{(1)}(0),\cdots,\alpha_{0}^{(\zeta)}(0))\|^{2}=e^{-\xi},$ (6.24) and $\displaystyle\left\|{\mathbf{U}}(u)\int_{0}^{u}[{\mathbf{U}}(x)]^{-1}\sum_{j>0}\sum_{l}\alpha_{j}^{(l)}(x)\frac{1}{\lambda_{j}(x)-\lambda_{0}(x)}\langle\check{{\mathbf{0}}}(x)|\frac{d{\mathbf{H}}(x)}{dx}|\check{j}^{(l)}(x)\rangle dx\right\|^{2}$ $\displaystyle=$ $\displaystyle\left\|{\mathbf{U}}(u)[{\mathbf{U}}(x_{*})]^{-1}\sum_{j>0}\sum_{l}\alpha_{j}^{(l)}(x_{*})\frac{1}{\lambda_{j}(x_{*})-\lambda_{0}(x_{*})}\langle\check{{\mathbf{0}}}(x_{*})|\frac{d{\mathbf{H}}(x_{*})}{dx}|\check{j}^{(l)}(x_{*})\rangle\right\|^{2}$ $\displaystyle\leq$ $\displaystyle\left\|{\mathbf{U}}(u)[{\mathbf{U}}(x_{*})]^{-1}\right\|\sum_{\ell=1}^{\zeta}\left|\sum_{j>0}\sum_{l}\alpha_{j}^{(l)}(x_{*})\frac{1}{\lambda_{j}(x_{*})-\lambda_{0}(x_{*})}\langle\check{0}^{(\ell)}(x_{*})|\frac{d{\mathbf{H}}(x_{*})}{dx}|\check{j}^{(l)}(x_{*})\rangle\right|^{2}$ $\displaystyle\leq$ $\displaystyle\|{\mathbf{U}}(u)\|\|{\mathbf{U}}^{-1}(x_{*})\|\sum_{\ell=1}^{\zeta}\left|\sum_{j>0}\sum_{l}\alpha_{j}^{(l)}(x_{*})\frac{1}{\lambda_{j}(x_{*})-\lambda_{0}(x_{*})}\langle\check{0}^{(\ell)}(x_{*})|\frac{d{\mathbf{H}}(x_{*})}{dx}|\check{j}^{(l)}(x_{*})\rangle\right|^{2}$ $\displaystyle\leq$ $\displaystyle e^{2\xi}\sum_{\ell=1}^{\zeta}\left|\sum_{j>0}\sum_{l}\alpha_{j}^{(l)}(x_{*})\frac{1}{\lambda_{j}(x_{*})-\lambda_{0}(x_{*})}\langle\check{0}^{(\ell)}(x_{*})|\frac{d{\mathbf{H}}(x_{*})}{dx}|\check{j}^{(l)}(x_{*})\rangle\right|^{2}$ $\displaystyle\leq$ $\displaystyle\Pi e^{2\xi}\zeta\left[\sum_{j>0}\sum_{l}|\alpha_{j}^{(l)}(x_{*})|\right]^{2}$ $\displaystyle\leq$ $\displaystyle\Pi e^{2\xi}\zeta 2^{b}\sum_{j>0}\sum_{l}\left|\alpha_{j}^{(l)}(x_{*})\right|^{2}$ $\displaystyle\leq$ $\displaystyle\Pi e^{2\xi}\zeta 2^{b}(1-p_{0}),$ (6.25) where the first equality is from the mean value theorem with $x_{*}\in(0,1)$, the first and third inequalities are due to the spectral norm and (6.22), respectively, and the last two inequalities are, respectively, from the Cauchy-Schwarz inequality and the facts that $p_{0}=\min_{0\leq x\leq 1}\sum_{\ell=1}^{\zeta}|\alpha_{0}^{(\ell)}(x)|^{2},\qquad\sum_{j,l}|\alpha_{j}^{(l)}(x)|^{2}=1.$ The probability of the quantum annealing process staying in the ground states at the final annealing time is equal to the sum of these modulus squares given by the left-hand side of (6.23), and thus (6.23)-(6.25) together imply that the probability has the bound stated in the theorem. For $\zeta=1$, ${\mathbf{A}}$ is a scalar equal to zero, and hence $\xi=0$. 
### 6.3 Proof of Lemma 4 By Corollary 5.4 of Morita and Nishimori, (2008), $\sum_{\mathbf{s}_{1}}|q_{M,\beta}^{1}(\mathbf{s_{1}},t)-q_{M,\beta}^{1}(\mathbf{s}_{1})|\to 0.$ We have $\displaystyle q_{M,\beta}^{1}(\mathbf{s}_{1})$ $\displaystyle=$ $\displaystyle\sum_{\mathbf{s}_{2}}\cdots\sum_{\mathbf{s}_{M}}\frac{1}{Z_{M,\beta}}\exp\left(\beta\frac{1}{M}\sum_{k=1}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}\right)$ $\displaystyle=$ $\displaystyle\frac{1}{Z_{M,\beta}}\exp\left(\beta\frac{1}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,1}s_{j,1}\right)\sum_{\mathbf{s}_{2}}\cdots\sum_{\mathbf{s}_{M}}\exp\left(\beta\frac{1}{M}\sum_{k=2}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}\right)$ $\displaystyle=$ $\displaystyle\frac{1}{Z_{M,\beta}}\exp\left(\beta\frac{1}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,1}s_{j,1}\right)C^{*},$ where $C^{*}=\sum_{\mathbf{s}_{2}}\cdots\sum_{\mathbf{s}_{M}}\exp\left(\beta\frac{1}{M}\sum_{k=2}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}\right)$. Since $\sum_{\mathbf{s}_{1}}q_{M,\beta}^{1}(\mathbf{s}_{1})=1,$ we have $\sum_{\mathbf{s}_{1}}\exp\left(\beta\frac{1}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,1}s_{j,1}\right)=\frac{Z_{M,\beta}}{C^{*}},$ which implies $q_{M,\beta}^{1}(\mathbf{s}_{1})=\frac{1}{Z_{M,\beta}^{1}}\exp\left(\frac{\beta}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,1}s_{j,1}\right),\;Z_{M,\beta}^{1}=\sum_{\mathbf{s}_{1}}\exp\left(\frac{\beta}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,1}s_{j,1}\right)=Z_{1,\beta/M}.$ Therefore, $q_{M,\beta}^{1}(\mathbf{s}_{1},t)$ and $q_{1,\beta/M}(\mathbf{s}_{1},t)$ are asymptotically the same as $t\to\infty$. ### 6.4 Proof of Theorem 5 Let $X_{M}=\frac{1}{\sqrt{M}}\sum_{k=1}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}$, where $\mathbf{s}$ is generated by $q_{M,\beta}(\mathbf{s})$. Simple algebra shows $\displaystyle|P(X_{M}(t)\leq x)-P(X_{\infty}\leq x)|$ $\displaystyle\leq$ $\displaystyle|P(X_{M}\leq x)-P(X_{\infty}\leq x)|+|P(X_{M}(t)\leq x)-P(X_{M}\leq x)|$ $\displaystyle=$ $\displaystyle(I)+(II).$ Consider $(I)$. We have for any given $\theta\in\mathbb{R}$ and $S=\\{-1,1\\}^{bM}$, $\displaystyle E\left[\exp(\theta X_{M})\right]$ $\displaystyle=$ $\displaystyle\sum_{\mathbf{s}\in S}\frac{1}{Z_{M,\beta}}\exp\left((\beta+\sqrt{M}\theta)\frac{1}{M}\sum_{k=1}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}\right)$ $\displaystyle=$ $\displaystyle\frac{Z_{M,\beta+\sqrt{M}\theta}}{Z_{M,\beta}}$ $\displaystyle\to$ $\displaystyle\exp\left(\frac{1}{2}\theta^{2}\sum_{\langle ij\rangle}J_{ij}^{2}\right)\quad\text{ as }M\to\infty,$ where the last line is due to (6.26) below. 
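The marginal identity in the proof of Lemma 4 can be verified by brute-force enumeration for a tiny system; in the sketch below, the couplings and sizes are toy values chosen only so that the full enumeration stays cheap:

```python
import numpy as np
from itertools import product

b, M, beta = 3, 4, 1.3
rng = np.random.default_rng(2)
J = np.triu(rng.normal(size=(b, b)), k=1)

def slice_energy(s):
    return sum(J[i, j] * s[i] * s[j] for i in range(b) for j in range(i + 1, b))

configs = list(product([1, -1], repeat=b))
w = np.array([np.exp(beta / M * slice_energy(s)) for s in configs])
marginal = w / w.sum()                     # the claimed q_{1, beta/M}

# Brute-force marginal of the first slice under q_{M, beta}:
full = np.zeros(len(configs))
for joint in product(range(len(configs)), repeat=M):
    full[joint[0]] += np.exp(beta / M * sum(slice_energy(configs[k]) for k in joint))
full /= full.sum()
assert np.allclose(marginal, full)
```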
We have $\displaystyle Z_{M,\beta+\sqrt{M}\theta}$ $\displaystyle=$ $\displaystyle\sum_{\mathbf{s}\in S}\exp\left(\frac{\beta+\sqrt{M}\theta}{M}\sum_{k=1}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}\right)$ $\displaystyle=$ $\displaystyle\prod_{k=1}^{M}\sum_{\mathbf{s}_{k}}\exp\left(\frac{\beta+\sqrt{M}\theta}{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}\right)$ $\displaystyle=$ $\displaystyle\prod_{k=1}^{M}Z_{\frac{\beta+\sqrt{M}\theta}{M}}$ $\displaystyle=$ $\displaystyle\left(Z_{\frac{\beta+\sqrt{M}\theta}{M}}\right)^{M}$ $\displaystyle=$ $\displaystyle\left[Tr\left(\exp\left(\frac{\beta+\sqrt{M}\theta}{M}\sum_{\langle ij\rangle}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}\right)\right)\right]^{M}$ $\displaystyle=$ $\displaystyle\left[Tr\left(I+\frac{\beta+\sqrt{M}\theta}{M}\sum_{\langle ij\rangle}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}+\frac{M\theta^{2}}{2M^{2}}\left(\sum_{\langle ij\rangle}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}\right)^{2}\right)+o(M^{-1})\right]^{M}$ $\displaystyle=$ $\displaystyle\left[2^{b}+2^{b}\frac{\theta^{2}}{2M}\sum_{\langle ij\rangle}J_{ij}^{2}+o(M^{-1})\right]^{M},$ where the last equality is by the facts that $Tr(\sigma_{i}^{z}\sigma_{j}^{z})=0$ and $Tr(\sigma_{i}^{z}\sigma_{j}^{z}\cdot\sigma_{i^{\prime}}^{z}\sigma_{j^{\prime}}^{z})=0$ for $i\neq i^{\prime}$ or $j\neq j^{\prime}$. Similarly, $Z_{M,\beta}=\left[2^{b}+O(M^{-2})\right]^{M}.$ Thus $\displaystyle\frac{Z_{M,\beta+\sqrt{M}\theta}}{Z_{M,\beta}}$ $\displaystyle=$ $\displaystyle\frac{\left[1+\frac{\theta^{2}}{2M}\sum_{\langle ij\rangle}J_{ij}^{2}+o(M^{-1})\right]^{M}}{\left[1+O(M^{-2})\right]^{M}}$ (6.26) $\displaystyle\to$ $\displaystyle\exp\left(\frac{1}{2}\theta^{2}\sum_{\langle ij\rangle}J_{ij}^{2}\right)\quad\text{ as }M\to\infty.$ (6.27) By the Lévy continuity theorem for moment generating functions (Theorem 7.5 in Kapadia et al. (2005)), we have $X_{M}\overset{d}{\to}X_{\infty},$ where $X_{\infty}$ follows the normal distribution with mean zero and variance $\sum_{\langle ij\rangle}J_{ij}^{2}$. Thus there is $M_{0}(\cdot)$ such that for any $M\geq M_{0}(\epsilon)$, $\displaystyle|P(X_{M}\leq x)-P(X_{\infty}\leq x)|\leq\epsilon/2.$ (6.28) Consider $(II)$. Now we fix $M\geq M_{0}(\epsilon)$. By Corollary 5.4 in Morita and Nishimori, (2008), there is $t_{0}(\epsilon,M)$ such that for all $t\geq t_{0}(\epsilon,M)$ and all $x$, $\displaystyle|P(X_{M}(t)\leq x)-P(X_{M}\leq x)|\leq\epsilon/2.$ (6.29) By (6.28) and (6.29), the statement is proved. 
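The moment-generating-function limit (6.26) can also be watched numerically. Continuing the toy variables of the previous sketch (`J`, `beta`, `configs`, `slice_energy`), the ratio $Z_{M,\beta+\sqrt{M}\theta}/Z_{M,\beta}$ is exact for small $b$ and approaches the Gaussian limit as $M$ grows, with $O(M^{-1/2})$ corrections, so the convergence is gradual:

```python
theta = 0.3
target = np.exp(0.5 * theta**2 * np.sum(J**2))   # exp(theta^2/2 * sum of J_ij^2)
for M in (10, 100, 1000, 10000):
    Z1 = sum(np.exp((beta + np.sqrt(M) * theta) / M * slice_energy(s)) for s in configs)
    Z0 = sum(np.exp(beta / M * slice_energy(s)) for s in configs)
    print(M, (Z1 / Z0) ** M, target)             # ratio -> target as M grows
```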
### 6.5 Proof of Lemma 7 Let $H_{D}=\sum_{\langle ij\rangle}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}\quad\text{ and }\quad H_{N}=\Gamma(t)\sum_{i}\sigma_{i}^{x}.$ We have $\displaystyle Z_{\beta}(t)$ $\displaystyle=$ $\displaystyle Tr\left(\exp\left(\beta[H_{D}+H_{N}]\right)\right)$ (6.30) $\displaystyle=$ $\displaystyle Tr\left(\prod_{k=1}^{M}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)\right)$ (6.31) $\displaystyle=$ $\displaystyle Tr\left(\prod_{k=1}^{M}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)\left(\sum_{i_{k}=1}^{2^{b}}\mathbf{e}_{i_{k}}\mathbf{e}_{i_{k}}^{T}\right)\right)$ (6.32) $\displaystyle=$ $\displaystyle Tr\Biggl{(}\sum_{i_{M}=1}^{2^{b}}\cdots\sum_{i_{1}=1}^{2^{b}}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)\mathbf{e}_{i_{1}}\mathbf{e}_{i_{1}}^{T}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)$ (6.34) $\displaystyle\qquad\qquad\qquad\times\mathbf{e}_{i_{2}}\mathbf{e}_{i_{2}}^{T}\cdots\mathbf{e}_{i_{M-1}}\mathbf{e}_{i_{M-1}}^{T}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)\mathbf{e}_{i_{M}}\mathbf{e}_{i_{M}}^{T}\Biggr{)}$ $\displaystyle=$ $\displaystyle\sum_{i_{M}=1}^{2^{b}}\cdots\sum_{i_{1}=1}^{2^{b}}\mathbf{e}_{i_{M}}^{T}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)\mathbf{e}_{i_{1}}\mathbf{e}_{i_{1}}^{T}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)$ (6.36) $\displaystyle\qquad\qquad\qquad\times\mathbf{e}_{i_{2}}\mathbf{e}_{i_{2}}^{T}\cdots\mathbf{e}_{i_{M-1}}\mathbf{e}_{i_{M-1}}^{T}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)\mathbf{e}_{i_{M}},$ where $\mathbf{e}_{k},k=1,\ldots,2^{b},$ form the canonical basis. Let $\mathbf{e}_{k}=e_{k,1}\otimes e_{k,2}\otimes\cdots\otimes e_{k,b}$, where $e_{k,i}$ is one of $(1,0)^{T}$ and $(0,1)^{T}$. Let $s_{k,i}=1$ if $e_{k,i}=(1,0)^{T}$ and $s_{k,i}=-1$ if $e_{k,i}=(0,1)^{T}$. For any $l,k\in\\{1,\ldots,2^{b}\\}$, $\displaystyle\mathbf{e}_{l}^{T}\exp\left(\frac{\beta}{M}[H_{D}+H_{N}]\right)\mathbf{e}_{k}$ (6.37) $\displaystyle=$ $\displaystyle\mathbf{e}_{l}^{T}\exp\left(\frac{\beta}{M}H_{D}\right)\exp\left(\frac{\beta}{M}H_{N}\right)\mathbf{e}_{k}+\frac{\beta^{2}}{2M^{2}}\mathbf{e}_{l}^{T}(H_{N}H_{D}-H_{D}H_{N})\mathbf{e}_{k}+O\left(\left(\beta/M\right)^{3}\right)$ (6.38) $\displaystyle=$ $\displaystyle\exp\left(\frac{\beta}{M}\mathbf{e}_{l}^{T}H_{D}\mathbf{e}_{l}\right)\mathbf{e}_{l}^{T}\exp\left(\frac{\beta}{M}H_{N}\right)\mathbf{e}_{k}+O\left(\left(\beta/M\right)^{2}\right)$ (6.39) $\displaystyle=$ $\displaystyle\left(\sqrt{\frac{1}{2}\sinh\frac{2\beta\Gamma(t)}{M}}\right)^{b}\exp\left(\frac{\beta}{M}\sum_{\langle ij\rangle}J_{ij}s_{l,i}s_{l,j}+\frac{1}{2}\log\left(\coth\frac{\beta\Gamma(t)}{M}\right)\sum_{i=1}^{b}s_{l,i}s_{k,i}\right)$ (6.41) $\displaystyle+O\left(\left(\beta/M\right)^{2}\right),$ where the second equality is derived using the Taylor expansion, and the last equality is due to (6.42) and (6.43). 
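The matrix-element approximation (6.37)-(6.41) can be checked directly for a tiny system. The sketch below (toy couplings and sizes, chosen only for illustration) compares the exact matrix element against the closed form; the reported error should shrink like $(\beta/M)^{2}$ as $M$ grows:

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

b, beta, M, Gamma = 2, 1.0, 200, 0.8
rng = np.random.default_rng(3)
J = np.triu(rng.normal(size=(b, b)), k=1)

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def embed(op, j):
    mats = [op if k == j else I2 for k in range(b)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

HD = sum(J[i, j] * embed(sz, i) @ embed(sz, j)
         for i in range(b) for j in range(i + 1, b))
HN = Gamma * sum(embed(sx, i) for i in range(b))

x = beta * Gamma / M
prefac = (0.5 * np.sinh(2 * x)) ** (b / 2)      # (sqrt(sinh(2x)/2))^b
coth = 1.0 / np.tanh(x)

exact = expm(beta / M * (HD + HN))
spins = list(product([1, -1], repeat=b))        # same ordering as the canonical basis
err = 0.0
for l, sl in enumerate(spins):
    for k, sk in enumerate(spins):
        energy_l = sum(J[i, j] * sl[i] * sl[j]
                       for i in range(b) for j in range(i + 1, b))
        overlap = sum(a * c for a, c in zip(sl, sk))
        approx = prefac * np.exp(beta / M * energy_l + 0.5 * np.log(coth) * overlap)
        err = max(err, abs(exact[l, k] - approx))
print(err)   # O((beta/M)^2)
```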
We have $\mathbf{e}_{l}^{T}H_{D}\mathbf{e}_{l}=\sum_{\langle ij\rangle}J_{ij}(e_{l,i}^{T}\sigma^{z}e_{l,i})(e_{l,j}^{T}\sigma^{z}e_{l,j})=\sum_{\langle ij\rangle}J_{ij}s_{l,i}s_{l,j}.$ (6.42) Since $\sigma_{i}^{x},i=1,\ldots,b,$ commute with each other, we have $\displaystyle\mathbf{e}_{l}^{T}\exp\left(\frac{\beta}{M}H_{N}\right)\mathbf{e}_{k}$ (6.43) $\displaystyle=\mathbf{e}_{l}^{T}\prod_{i=1}^{b}\exp\left(\frac{\beta\Gamma(t)}{M}\sigma_{i}^{x}\right)\mathbf{e}_{k}$ (6.44) $\displaystyle=\mathbf{e}_{l}^{T}\left[\exp\left(\frac{\beta\Gamma(t)}{M}\sigma^{x}\right)\otimes\cdots\otimes\exp\left(\frac{\beta\Gamma(t)}{M}\sigma^{x}\right)\right]\mathbf{e}_{k}$ (6.45) $\displaystyle=\left[e_{l,1}^{T}\exp\left(\frac{\beta\Gamma(t)}{M}\sigma^{x}\right)e_{k,1}\otimes\cdots\otimes e_{l,b}^{T}\exp\left(\frac{\beta\Gamma(t)}{M}\sigma^{x}\right)e_{k,b}\right]$ (6.46) $\displaystyle=\left(\sqrt{\frac{1}{2}\sinh\frac{2\beta\Gamma(t)}{M}}\right)^{b}\prod_{i=1}^{b}\exp\left(\frac{1}{2}\log\left(\coth\frac{\beta\Gamma(t)}{M}\right)s_{l,i}s_{k,i}\right)$ (6.47) $\displaystyle=\left(\sqrt{\frac{1}{2}\sinh\frac{2\beta\Gamma(t)}{M}}\right)^{b}\exp\left(\frac{1}{2}\log\left(\coth\frac{\beta\Gamma(t)}{M}\right)\sum_{i=1}^{b}s_{l,i}s_{k,i}\right)$ (6.48) where the second equality is derived by $e^{A}\otimes e^{B}=e^{A\oplus B}$ and $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$, and the fourth equality is due to (6.49) below. We have $\displaystyle\exp\left(\frac{\beta\Gamma(t)}{M}\sigma^{x}\right)$ (6.49) $\displaystyle=$ $\displaystyle\begin{pmatrix}\cosh\frac{\beta\Gamma(t)}{M}&\sinh\frac{\beta\Gamma(t)}{M}\\\ \sinh\frac{\beta\Gamma(t)}{M}&\cosh\frac{\beta\Gamma(t)}{M}\end{pmatrix}$ (6.50) $\displaystyle=$ $\displaystyle\sqrt{\frac{1}{2}\sinh\frac{2\beta\Gamma(t)}{M}}\begin{pmatrix}\exp\left(\frac{1}{2}\log\left(\coth\frac{\beta\Gamma(t)}{M}\right)\right)&\exp\left(-\frac{1}{2}\log\left(\coth\frac{\beta\Gamma(t)}{M}\right)\right)\\\ \exp\left(-\frac{1}{2}\log\left(\coth\frac{\beta\Gamma(t)}{M}\right)\right)&\exp\left(\frac{1}{2}\log\left(\coth\frac{\beta\Gamma(t)}{M}\right)\right)\end{pmatrix}.$ (6.51) By (6.30) and (6.37), we have $\displaystyle Z_{\beta}(t)$ $\displaystyle=$ $\displaystyle\left(\sqrt{\frac{1}{2}\sinh\frac{2\beta\Gamma(t)}{M}}\right)^{bM}\sum_{\mathbf{s}_{M}}\cdots\sum_{\mathbf{s}_{1}}\exp\left(\frac{\beta}{M}\sum_{k=1}^{M}\sum_{\langle ij\rangle}J_{ij}s_{i,k}s_{j,k}+\gamma_{\beta}(t)\sum_{k=1}^{M}\sum_{i=1}^{b}s_{i,k}s_{i,k+1}\right)$ $\displaystyle+O(\beta^{2}/M^{2})$ $\displaystyle=$ $\displaystyle\left(\sqrt{\frac{1}{2}\sinh\frac{2\beta\Gamma(t)}{M}}\right)^{bM}Z_{M,\beta}(t)+O(\beta^{2}/M^{2}).$ Acknowledgements: The research of Xinyu Song was supported by the Fundamental Research Funds for the Central Universities (2018110128), China Scholarship Council (201806485017), and National Natural Science Foundation of China (Grant No. 11871323). The research of Yazhen Wang was supported in part by NSF grants DMS-1528375 and DMS-1707605. The research of Donggyu Kim was supported in part by KAIST Settlement/Research Subsidies for Newly-hired Faculty grant G04170049 and KAIST Basic Research Funds by Faculty (A0601003029). ## References * Aharonov and Ta-Shma, (2003) Aharonov, D. and Ta-Shma, A. (2003). Adiabatic quantum state generation and statistical zero knowledge. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, pages 20–29. ACM. 
* Aharonov et al., (2008) Aharonov, D., Van Dam, W., Kempe, J., Landau, Z., Lloyd, S., and Regev, O. (2008). Adiabatic quantum computation is equivalent to standard quantum computation. SIAM review, 50(4):755–787. * Albash and Lidar, (2016) Albash, T. and Lidar, D. A. (2016). Adiabatic quantum computing. arXiv preprint arXiv:1611.04471. * Albash et al., (2015) Albash, T., Rønnow, T. F., Troyer, M., and Lidar, D. A. (2015). Reexamining classical and quantum models for the d-wave one processor. The European Physical Journal Special Topics, 224(1):111–129. * Bertsimas and Tsitsiklis, (1993) Bertsimas, D. and Tsitsiklis, J. (1993). Simulated annealing. Statistical science, 8(1):10–15. * Bian et al., (2013) Bian, Z., Chudak, F., Macready, W. G., Clark, L., and Gaitan, F. (2013). Experimental determination of ramsey numbers. Physical review letters, 111(13):130505. * Blanes et al., (2009) Blanes, S., Casas, F., Oteo, J., and Ros, J. (2009). The magnus expansion and some of its applications. Physics reports, 470(5-6):151–238. * Boixo et al., (2018) Boixo, S., Isakov, S. V., Smelyanskiy, V. N., Babbush, R., Ding, N., Jiang, Z., Bremner, M. J., Martinis, J. M., and Neven, H. (2018). Characterizing quantum supremacy in near-term devices. Nature Physics, 14(6):595. * Boixo et al., (2014) Boixo, S., Smelyanskiy, V. N., Shabani, A., Isakov, S. V., Dykman, M., Denchev, V. S., Amin, M., Smirnov, A., Mohseni, M., and Neven, H. (2014). Computational role of collective tunneling in a quantum annealer. arXiv preprint arXiv:1411.4036. * Boixo et al., (2016) Boixo, S., Smelyanskiy, V. N., Shabani, A., Isakov, S. V., Dykman, M., Denchev, V. S., Amin, M. H., Smirnov, A. Y., Mohseni, M., and Neven, H. (2016). Computational multiqubit tunnelling in programmable quantum annealers. Nature communications, 7:10327. * Born and Fock, (1928) Born, M. and Fock, V. (1928). Beweis des adiabatensatzes. Zeitschrift für Physik, 51(3-4):165–180. * Brady and van Dam, (2016) Brady, L. T. and van Dam, W. (2016). Quantum Monte Carlo simulations of tunneling in quantum adiabatic optimization. Physical Review A, 93(3):032304. * Britton et al., (2012) Britton, J. W., Sawyer, B. C., Keith, A. C., Wang, C.-C. J., Freericks, J. K., Uys, H., Biercuk, M. J., and Bollinger, J. J. (2012). Engineered two-dimensional ising interactions in a trapped-ion quantum simulator with hundreds of spins. Nature, 484(7395):489. * Brooke et al., (1999) Brooke, J., Bitko, D., and Aeppli, G. (1999). Quantum annealing of a disordered magnet. Science, 284(5415):779–781. * Browne, (2014) Browne, D. (2014). Quantum computation: Model versus machine. Nature Physics, 10(3):179. * Brumfiel, (2012) Brumfiel, G. (2012). Simulation: Quantum leaps. Nature News, 491(7424):322. * Cai et al., (2016) Cai, T., Kim, D., Wang, Y., Yuan, M., and Zhou, H. H. (2016). Optimal large-scale quantum state tomography with pauli measurements. The Annals of Statistics, 44(2):682–712. * Deutsch, (1985) Deutsch, D. (1985). Quantum theory, the church–turing principle and the universal quantum computer. Proc. R. Soc. Lond. A, 400(1818):97–117. * DiVincenzo, (1995) DiVincenzo, D. P. (1995). Quantum computation. Science, 270(5234):255–261. * Farhi et al., (2002) Farhi, E., Goldstone, J., and Gutmann, S. (2002). Quantum adiabatic evolution algorithms versus simulated annealing. arXiv preprint quant-ph/0201031. * Farhi et al., (2001) Farhi, E., Goldstone, J., Gutmann, S., Lapan, J., Lundgren, A., and Preda, D. (2001). 
A quantum adiabatic evolution algorithm applied to random instances of an np-complete problem. Science, 292(5516):472–475. * Farhi et al., (2000) Farhi, E., Goldstone, J., Gutmann, S., and Sipser, M. (2000). Quantum computation by adiabatic evolution. arXiv preprint quant-ph/0001106. * Geman and Geman, (1984) Geman, S. and Geman, D. (1984). Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. IEEE Transactions on pattern analysis and machine intelligence, (6):721–741. * Hajek, (1988) Hajek, B. (1988). Cooling schedules for optimal annealing. Mathematics of operations research, 13(2):311–329. * Holevo, (2011) Holevo, A. S. (2011). Probabilistic and statistical aspects of quantum theory, volume 1. Springer Science & Business Media. * Irbäck et al., (1996) Irbäck, A., Peterson, C., and Potthast, F. (1996). Evidence for nonrandom hydrophobicity structures in protein chains. proceedings of the National Academy of Sciences, 93(18):9533–9538. * Isakov et al., (2016) Isakov, S. V., Mazzola, G., Smelyanskiy, V. N., Jiang, Z., Boixo, S., Neven, H., and Troyer, M. (2016). Understanding quantum tunneling through quantum Monte Carlo simulations. Physical review letters, 117(18):180402. * Johnson et al., (2011) Johnson, M. W., Amin, M. H., Gildert, S., Lanting, T., Hamze, F., Dickson, N., Harris, R., Berkley, A. J., Johansson, J., and Bunyk, P. (2011). Quantum annealing with manufactured spins. Nature, 473(7346):194. * Jörg et al., (2010) Jörg, T., Krzakala, F., Kurchan, J., and Maggs, A. C. (2010). Quantum annealing of hard problems. Progress of Theoretical Physics Supplement, 184:290–303. * Kadowaki and Nishimori, (1998) Kadowaki, T. and Nishimori, H. (1998). Quantum annealing in the transverse ising model. Physical Review E, 58(5):5355. * Kato, (1978) Kato, T. (1978). Trotter’s product formula for an arbitrary pair of self-adjoint contraction semigroup. Topics in Func. Anal., Adv. Math. Suppl. Studies, 3:185–195. * Kirkpatrick et al., (1983) Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598):671–680. * Majewski et al., (2001) Majewski, J., Li, H., and Ott, J. (2001). The ising model in physics and statistical genetics. The American Journal of Human Genetics, 69(4):853–862. * Martoňák et al., (2002) Martoňák, R., Santoro, G. E., and Tosatti, E. (2002). Quantum annealing by the path-integral Monte Carlo method: The two-dimensional random ising model. Physical Review B, 66(9):094203. * McGeoch, (2014) McGeoch, C. C. (2014). Adiabatic quantum computation and quantum annealing: Theory and practice. Synthesis Lectures on Quantum Computing, 5(2):1–93. * Morita and Nishimori, (2008) Morita, S. and Nishimori, H. (2008). Mathematical foundation of quantum annealing. Journal of Mathematical Physics, 49(12):125210. * Nielsen and Chuang, (2000) Nielsen, M. A. and Chuang, I. L. (2000). Quantum computation and quantum information. Cambridge Univ. Press. * O’Gorman et al., (2015) O’Gorman, B., Babbush, R., Perdomo-Ortiz, A., Aspuru-Guzik, A., and Smelyanskiy, V. (2015). Bayesian network structure learning using quantum annealing. The European Physical Journal Special Topics, 224(1):163–188. * Perdomo-Ortiz et al., (2012) Perdomo-Ortiz, A., Dickson, N., Drew-Brook, M., Rose, G., and Aspuru-Guzik, A. (2012). Finding low-energy conformations of lattice protein models by quantum annealing. Scientific reports, 2:571. * Perdomo-Ortiz et al., (2015) Perdomo-Ortiz, A., Fluegemann, J., Narasimhan, S., Biswas, R., and Smelyanskiy, V. 
N. (2015). A quantum annealing approach for fault detection and diagnosis of graph-based systems. The European Physical Journal Special Topics, 224(1):131–148. * Rieffel et al., (2015) Rieffel, E. G., Venturelli, D., O’Gorman, B., Do, M. B., Prystay, E. M., and Smelyanskiy, V. N. (2015). A case study in programming a quantum annealer for hard operational planning problems. Quantum Information Processing, 14(1):1–36. * Rieger and Kawashima, (1999) Rieger, H. and Kawashima, N. (1999). Application of a continuous time cluster algorithm to the two-dimensional random quantum ising ferromagnet. The European Physical Journal B-Condensed Matter and Complex Systems, 9(2):233–236. * Rønnow et al., (2014) Rønnow, T. F., Wang, Z., Job, J., Boixo, S., Isakov, S. V., Wecker, D., Martinis, J. M., Lidar, D. A., and Troyer, M. (2014). Defining and detecting quantum speedup. Science, 345(6195):420–424. * Sakurai and Napolitano, (2017) Sakurai, J. and Napolitano, J. (2017). Modern quantum mechanics. Modern Quantum Mechanics, by JJ Sakurai, Jim Napolitano, Cambridge, UK: Cambridge University Press, 2017. * Shankar, (2012) Shankar, R. (2012). Principles of quantum mechanics. Springer Science & Business Media. * Stauffer, (2008) Stauffer, D. (2008). Social applications of two-dimensional ising models. American Journal of Physics, 76(4):470–473. * Suzuki, (1976) Suzuki, M. (1976). Generalized trotter’s formula and systematic approximants of exponential operators and inner derivations with applications to many-body problems. Communications in Mathematical Physics, 51(2):183–190. * Trotter, (1959) Trotter, H. F. (1959). On the product of semi-groups of operators. Proceedings of the American Mathematical Society, 10(4):545–551. * Wang, (2012) Wang, Y. (2012). Quantum computation and quantum information. Statistical Science, 27(3):373–394. * Wang, (2013) Wang, Y. (2013). Asymptotic equivalence of quantum state tomography and noisy matrix completion. The Annals of Statistics, 41(5):2462–2504. * Wang et al., (2016) Wang, Y., Wu, S., and Zou, J. (2016). Quantum annealing with Markov chain Monte Carlo simulations and D-Wave quantum computers. Statistical Science, 31(3):362–398. * Wang and Xu, (2015) Wang, Y. and Xu, C. (2015). Density matrix estimation in quantum homodyne tomography. Statistica Sinica, pages 953–973.
# Spatial Focusing of Surface Polaritons Based on Cross-Phase Modulation Chaohua<EMAIL_ADDRESS>Na Li1, Datang Xu2, Zhiming Chen3 and Yong Zhou1 1School of Physics and Electronics, Shandong Normal University, Jinan 250014, China 2School of Electronic and Information Engineering, Changshu Institute of Technology, Changshu 215500, China 3School of Science, East China University of Technology, Nanchang 330013, Jiangxi, China ###### Abstract We theoretically study the spatial focusing of surface polaritons (SPs) in a negative index metamaterial (NIMM)-atomic gas interface waveguide system, based on cross-phase modulation (XPM) in a tripod-type double electromagnetically induced transparency (EIT) scheme. In the linear region, we realize low-loss stable propagation of the SPs, and the group velocities of the probe and signal fields are well matched via the double EIT. In the nonlinear region, we show that a giant enhancement of the XPM can be obtained. Using a narrow optical soliton in free space, we realize spatial focusing of the SPs solitons, including bright, multi-bright, and dark solitons. The full width at half maximum (FWHM) of the SPs soliton can be compressed to about ten nanometers; thus, even nanofocusing can be obtained. The results obtained here have theoretical significance for nanoscale sensing, spectral enhancement and precision measurement. ## I INTRODUCTION Spatial focusing of surface plasmon polaritons (SPPs), especially at the nanoscale, has recently been one of the hot spots in the field of micro-nano optics due to its huge application potential Gro 2016 . It not only provides a powerful technical basis for the development of nano-optical devices, but also extends the research realm of strong-field micro-nano optics Dombi 2020 , such as near-field and super-resolution imaging Neacsu 2010 ; Sadiq 2011 ; Schmidt 2012 ; Huth 2011 ; Zhang 2013 ; Zhong 2017 ; Liu 2019 ; Zhu 2019 ; Lu 2019 ; Umakoshi 2020 ; Esmanna 2020 , biological sensing Dunn 1999 ; Anker 2009 , enhanced Raman spectroscopy Stockle 2000 ; Berweger 2010 ; Bargioni 2011 ; Stadler 2012 ; Chen 2014 ; Lu 2018 ; Zhang 2018 , nonlinear spectroscopy Neacsu 2005 ; Kauranen 2012 ; Kravtsov 2016 and photofield electron emission Keramati 2020 , etc. In 2004, Stockman showed that when SPPs propagate along a tapered metallic nanostructure, the propagating energy becomes highly concentrated at the tip of the tapered structure, a phenomenon known as SPP nano-focusing Dombi 2020 . Recent studies have shown that tapered nanoribbons and metallic tips fabricated by micro-nano manufacturing can be used to construct nano-focusing waveguides, which have been applied in many fields Zia 2006 ; Verhagen 2007 ; Choo 2012 ; Zenin 2015 ; Li 2019 . For example, in 2012, Choo et al. achieved efficient nano-focusing of SPPs experimentally, focusing light to a few nanometers with low loss Choo 2012 ; in 2015, Zenin et al. used a tapered nanoribbon structure to show that the light field energy can be concentrated within tens of nanometers through nano-focusing, with the near-field intensity at the tip of the tapered waveguide enhanced to the order of thousands Zenin 2015 ; and in 2019, Zhu et al. realized SPP nano-focusing at the tip of a round-tower structure, which enhanced the electric field at the tip and produced nanoscale light spots Zhu 2019 . 
SPPs nano-focusing can also be used as an alternative method to prepare nano light sources for optical nanoimaging Umakoshi 2020 ; Esmanna 2020 . For example, in 2020, Umakoshi et al. used nano-focusing of SPPs on tapered metallic nanostructures with a tip diameter of tens of nanometers to obtain a white nano light source covering the entire visible wavelength range Umakoshi 2020 . However, the metallic nanostructures adopted in the above research are all based on high-precision micro/nano manufacturing technology; once a structure is prepared, the performance of the device is almost fixed, and it lacks active control. In this work, we propose an active approach to achieve spatial focusing of surface polaritons (SPs, which are excited at the surface of negative index metamaterials and can propagate with low loss over long distances Kamli 2008 ; Zorgabad 2018 ; Zorgabad 2019 ; Liu 2020 ) based on cross-phase modulation (XPM). Actually, the technique of compressing pulses in the time or frequency domain via XPM in fiber optics is very mature Agrawal 1990 ; Agrawal book 2019 ; Liu 2010 ; Liu 2011 ; the physical mechanism is the competition between dispersion and nonlinearity. Thus, we can likewise use the competition between diffraction and nonlinearity to realize spatial focusing of SPs. This extension parallels the relation between temporal and spatial solitons Kivshar 2003 . In this article, we propose a general theoretical scheme to investigate spatial focusing of SPs in a negative index metamaterial (NIMM)-atomic gas interface waveguide, based on XPM in a tripod-type double electromagnetically induced transparency (EIT) system. First, we obtain low-loss stable propagation of the SPs, with the group velocities of the probe and signal fields well matched under the double EIT condition in the linear region; then, a giant enhancement of the XPM is obtained in the nonlinear region. Finally, coupled nonlinear Schrödinger equations (NLSEs) are derived for our system; by adopting bright-bright soliton pairs, multiple bright soliton pairs and dark-dark soliton pairs as initial conditions, we realize spatial focusing of the SPs solitons via the XPM between a narrow optical soliton in free space and the SPs soliton, and even nanofocusing. The rest of the paper is organized as follows: In Sec. II, we propose the theoretical model for the study. In Sec. III, the linear and nonlinear properties of the signal and probe fields, together with the nonlinear envelope equations, are given. In Sec. IV, spatial focusing of SPs is studied. Finally, in Sec. V, we summarize the main work of this paper. ## II THEORETICAL MODEL Figure 1: (Color online) (a) A scheme for nanofocusing of the SPs via XPM. The SPs are excited and propagate in the $x$ direction. The probe field, which is narrow in the $y$ direction, propagates along the surface of the NIMM in the $x$ direction. The control field is incident in the vertical direction. (b) The cold atomic gas is placed above the surface, with a tripod-type four-level excitation configuration. The probe, signal and control fields are coupled to the transitions $|1\rangle\leftrightarrow|4\rangle$, $|2\rangle\leftrightarrow|4\rangle$ and $|3\rangle\leftrightarrow|4\rangle$, respectively. $\Delta_{j}~{}(j=2,3,4)$ are the optical detunings. The system under study consists of a layer of NIMM in the lower half plane $z<0$ and a cold atomic gas in the upper half plane $z>0$, as shown in Fig. 1(a). 
The permittivity $\varepsilon_{1}$ and permeability $\mu_{1}$ of the NIMM are given by the Drude model in the optical region Kamli 2008 . The SPs are excited and propagate in the $x$ direction. The probe field, which is narrow in the $y$ direction, propagates along the surface of the NIMM in the $x$ direction. The control field is incident in the vertical direction. The cold atomic gas is chosen as a double-EIT excitation medium with a tripod-type four-level configuration. The three fields interact with the atoms coherently, as shown in Fig. 1(b). The weak probe field $\mathbf{E}_{p}$ with center angular frequency $\omega_{p}$ couples the transition $|1\rangle\leftrightarrow|4\rangle$, the weaker signal field $\mathbf{E}_{s}$ with center angular frequency $\omega_{s}$ couples the transition $|2\rangle\leftrightarrow|4\rangle$, and the strong control field $\mathbf{E}_{c}$ with center angular frequency $\omega_{c}$ couples $|3\rangle\leftrightarrow|4\rangle$; $\Delta_{j}$ ($j=2,3,4$) are the optical detunings. The atoms occupying the excited state $|4\rangle$ can spontaneously radiate to the three ground states ($|1\rangle$, $|2\rangle$, and $|3\rangle$) with spontaneous emission rates $\Gamma_{j4}$ $(j=1,2,3)$. The transitions between the three ground states are forbidden, but there may be other dephasing processes, such as collisions; we denote such dephasing rates by $\Gamma_{jl}$ (between the states $|j\rangle$ and $|l\rangle$, $j,l=1,2,3$). These dephasing rates are relatively small compared with the spontaneous emission rates; thus, we assume such processes do not cause population exchange between the ground states. Such a system can support the propagation of lossless SPs, and provides a great platform for studying the nonlinearity of SPs while they interact with the coherent medium Kamli 2008 . The signal field is chosen as a TM mode of the waveguide, with the electric field being $\mathbf{E}_{s}=\mathcal{E}_{s}\mathbf{u}_{s}(z)e^{i(k(\omega_{s})x-\omega_{s}t)}+c.c.$, in which $\mathbf{u}_{s}(z)$ is the mode function in the $z$ direction Liu 2020 . The probe and control fields are chosen as $\mathbf{E}_{c(p)}=\mathcal{E}_{c(p)}\mathbf{e}_{c(p)}e^{i(k_{c(p)}x-\omega_{c(p)}t)}+c.c.$, where $\mathcal{E}_{j}$ $(j=p,s,c)$ represents the envelope of each field. In the interaction picture, under the electric-dipole and rotating-wave approximations, the Hamiltonian of the system reads $\hat{H}_{int}=-\hbar\sum_{j=1}^{4}\Delta_{j}|j\rangle\langle j|-\hbar[\Omega_{c}|4\rangle\langle 3|+\Omega_{p}|4\rangle\langle 1|+\zeta_{s}(z)\Omega_{s}e^{i\theta_{s}}|4\rangle\langle 2|+h.c.],$ (1) with $\Omega_{c}=|\mathbf{p}_{34}|\mathcal{E}_{c}/\hbar$, $\Omega_{p}=|\mathbf{p}_{14}|\mathcal{E}_{p}/\hbar$, $\Omega_{s}=|\mathbf{p}_{24}|\mathcal{E}_{s}/\hbar$ being the half-Rabi frequencies of the control, probe and signal fields, respectively. $\mathbf{p}_{jl}=p_{jl}\mathbf{e}_{jl}$ is the electric dipole matrix element related to the transition from $|j\rangle$ to $|l\rangle$. $\zeta_{s}(z)=\mathbf{u}_{s}(z)\cdot\mathbf{e}_{24}$ is the mode intensity distribution function describing the interaction weight between the SPs and the atoms along the $z$ direction. $\theta_{s}=[k(\omega_{s})+k_{2}-k_{4}]x$ is the phase mismatch caused by the eigen dispersion of the SPs, where $k_{l}~{}(l=2,~{}4)$ refers to the wave number of the state $|l\rangle$. 
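For concreteness, the interaction Hamiltonian (1) can be written as a $4\times 4$ matrix in the basis $(|1\rangle,|2\rangle,|3\rangle,|4\rangle)$. The sketch below works in units of $\hbar$, evaluates the SP mode function at a fixed height $z$, and assumes $\Delta_{1}=0$ (the text lists only $\Delta_{2},\Delta_{3},\Delta_{4}$); all numerical inputs are placeholders:

```python
import numpy as np

def H_int(Delta, Omega_p, Omega_s, Omega_c, zeta_s, theta_s):
    """Tripod interaction Hamiltonian (1), in units of hbar, at fixed z.

    Delta   : (Delta_1, Delta_2, Delta_3, Delta_4), with Delta_1 = 0 assumed
    zeta_s  : value of the SP mode function zeta_s(z) at the chosen height
    theta_s : phase mismatch at the chosen position
    """
    H = -np.diag(Delta).astype(complex)
    H[3, 0] = -Omega_p                                   # |4><1|, probe
    H[3, 1] = -zeta_s * Omega_s * np.exp(1j * theta_s)   # |4><2|, signal
    H[3, 2] = -Omega_c                                   # |4><3|, control
    for j in range(3):                                   # + h.c.
        H[j, 3] = np.conj(H[3, j])
    return H
```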
The interaction information of the system is given by the density matrix $\sigma$, which is a $4\times 4$ matrix, and the evolution of $\sigma$ is governed by the optical Bloch equation Liu 2020 $\frac{\partial\sigma}{\partial t}=-\frac{i}{\hbar}[\hat{H}_{int},\sigma]-\Gamma\sigma,$ (2) with $\Gamma$ being a $4\times 4$ relaxation matrix which describes the spontaneous emission and other dephasing effects of the system. The detailed expressions of the density matrix $\sigma$ are given in Appendix B. Under the slowly varying envelope approximation, the Maxwell equations reduce to $\displaystyle i(\frac{\partial}{\partial x}+\frac{1}{c}\frac{\partial}{\partial t})\Omega_{p}+\frac{1}{2k_{p}}\frac{\partial^{2}}{\partial{y^{2}}}\Omega_{p}+\kappa_{14}\sigma_{41}=0,$ (3a) $\displaystyle i(\frac{\partial}{\partial x}+\frac{1}{n_{\rm eff}}\frac{1}{c}\frac{\partial}{\partial t})\zeta_{s}(z)e^{i\theta_{s}}\Omega_{s}+\frac{1}{2k(\omega_{s})}\frac{\partial^{2}}{\partial y^{2}}\zeta_{s}(z)e^{i\theta_{s}}\Omega_{s}+\kappa_{24}\sigma_{42}=0,$ (3b) where $\kappa_{14}={\cal N}_{a}\omega_{p}^{2}|\mathbf{p}_{14}|^{2}/[{2\hbar\varepsilon_{0}c^{2}\tilde{k}(\omega_{p})}]$ and $\kappa_{24}={\cal N}_{a}\omega_{s}^{2}|\mathbf{p}_{24}|^{2}/[{2\hbar\varepsilon_{0}c^{2}\tilde{k}(\omega_{s})}]$ are the coupling coefficients of the probe field and the signal field, $\tilde{k}={\rm Re}(k)$, $n_{\rm eff}=k(\omega_{s})c/\omega_{s}$ is the effective refractive index, ${\cal N}_{a}$ represents the number density of the atoms, and $c$ is the speed of light in vacuum. Equations (2) and (3), known as the Maxwell-Bloch (MB) equations, completely describe the interaction and propagation properties of our system. Note that the Rabi frequencies of the three fields in our system satisfy the condition $|\Omega_{s}|\ll|\Omega_{p}|\ll|\Omega_{c}|$; thus, Eqs. (2) and (3) can be solved by the multi-scale method Huang 2005 . ## III THE LINEAR AND NONLINEAR PROPERTIES OF THE SYSTEM Firstly, we make the asymptotic expansion: $\sigma_{jl}-\sigma_{jl}^{(0)}=\epsilon\sigma_{jl}^{(1)}+\epsilon^{2}\sigma_{jl}^{(2)}+...$, $\Omega_{p}=\epsilon\Omega_{p}^{(1)}+\epsilon^{2}\Omega_{p}^{(2)}+\epsilon^{3}\Omega_{p}^{(3)}+...$ and $\Omega_{s}=\epsilon^{2}\Omega_{s}^{(2)}+\epsilon^{3}\Omega_{s}^{(3)}+...$, where $\epsilon$ is a dimensionless small parameter characterizing the typical amplitude ratio of the probe field and the signal field, $\sigma_{jl}^{(0)}$ is the initial state of the system, and all the physical quantities on the right side of the equation are functions of the multi-scale variables $t_{j}=\epsilon^{j}t$, $x_{j}=\epsilon^{j}x$ $(j=0,2)$ and $y_{1}=\epsilon y$. We then obtain a series of linear but inhomogeneous equations for $\sigma_{jl}^{(\alpha)}$, $\Omega_{p}^{(\alpha)}$, and $\Omega_{s}^{(\alpha)}$, which can be solved order by order. ### III.1 LINEAR PROPERTIES OF THE SYSTEM First, we show that ultra-low-loss propagation of SPs and group-velocity matching between the probe and signal fields can be realized in the linear region via the double EIT effect. When there is no probe field and no signal field, we can obtain the zero-order solution of the system, which corresponds to the initial state of the system: $\sigma_{jl}^{(0)}=0$ for $j\neq l$, $\sigma_{33}^{(0)}=\sigma_{44}^{(0)}=0$, and $\sigma_{11}^{(0)}+\sigma_{22}^{(0)}=1$. Thus, there is no coherence between the states, and all atoms populate the ground states $|1\rangle$ and $|2\rangle$. 
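Before proceeding to the perturbative solutions, note that the Bloch equation (2) can also be integrated numerically. Since the explicit form of $\Gamma$ is deferred to the appendix, the sketch below (continuing from the `H_int` sketch above) treats the relaxation as a simple element-wise damping array and deliberately omits the population-feeding terms of spontaneous emission, so it is only a schematic placeholder, not the full model:

```python
def bloch_rhs(sigma, H, gamma):
    """RHS of (2) in units of hbar: -i[H, sigma] - element-wise damping (placeholder)."""
    return -1j * (H @ sigma - sigma @ H) - gamma * sigma

def evolve(sigma0, H, gamma, t_end, dt=1e-3):
    """Fixed-step RK4 integration of the Bloch equation."""
    sigma = sigma0.astype(complex)
    for _ in range(int(t_end / dt)):
        k1 = bloch_rhs(sigma, H, gamma)
        k2 = bloch_rhs(sigma + 0.5 * dt * k1, H, gamma)
        k3 = bloch_rhs(sigma + 0.5 * dt * k2, H, gamma)
        k4 = bloch_rhs(sigma + dt * k3, H, gamma)
        sigma = sigma + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return sigma
```

For example, one may start from `sigma0 = np.diag([0.5, 0.5, 0.0, 0.0])`, matching the zeroth-order state discussed below.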
For convenience, we assume $\sigma_{11}^{(0)}=\sigma_{22}^{(0)}=1/2$ in the following discussions, which means the atoms are equally distributed between the states $|1\rangle$ and $|2\rangle$. When the probe and signal fields are turned on, we can obtain the first- and second-order solutions of the system, respectively; both are linear problems. We assume that the probe field (signal field) $\Omega_{p}^{(1)}$ $(\Omega_{s}^{(2)})$ is proportional to $e^{i\theta_{p}}$ $(e^{i\theta_{s}})$ with $\theta_{p}=K_{p}(\omega)x_{0}-\omega t_{0}$ $(\theta_{s}=K_{s}(\omega)x_{0}-\omega t_{0})$, i.e., $\Omega_{p}^{(1)}=F_{1}e^{i(K_{p}(\omega)x_{0}-\omega t_{0})}$ $(\Omega_{s}^{(2)}=F_{2}e^{i(K_{s}(\omega)x_{0}-\omega t_{0})})$, where $F_{1}$ ($F_{2}$) is a slowly varying envelope function of the multiscale variables, yet to be determined. We then obtain the linear dispersion relations of the probe and signal fields interacting with the double-EIT medium, which read

$\displaystyle K_{p}(\omega)=\frac{\omega}{c}+\kappa_{14}\frac{(\omega+d_{31})\sigma_{11}^{(0)}}{D_{p}},$ (4a)
$\displaystyle K_{s}(\omega)=\frac{1}{n_{\rm eff}}\frac{\omega}{c}+\kappa_{24}\frac{(\omega+d_{32})\sigma_{22}^{(0)}}{D_{s}},$ (4b)

where $\omega$ is the frequency shift from the center frequency of the probe and signal fields, and we have defined $D_{p}=|\Omega_{c}|^{2}-(\omega+d_{31})(\omega+d_{41})$ and $D_{s}=|\Omega_{c}|^{2}-(\omega+d_{32})(\omega+d_{42})$. $V_{g_{1}}={\rm Re}[\partial K_{p}/\partial\omega]^{-1}$ and $V_{g_{2}}={\rm Re}[\partial K_{s}/\partial\omega]^{-1}$ are the group velocities of the probe and signal fields, respectively. From Eqs. (4a) and (4b), the main difference between the linear dispersion relations $K_{p}$ and $K_{s}$ is induced by the parameters $\Delta_{2}$, $\kappa_{j4}$, and $\Gamma_{ij}$ ($i=3,4$, $j=1,2$), and this difference causes a group-velocity mismatch between the probe and signal fields. To study the XPM between the probe and signal fields, the group velocities of the two fields must be well matched. Thus, in the following analysis, we choose the system parameters as $\Delta_{2}=0$, $\gamma_{13}=\gamma_{23}$, and $\gamma_{14}=\gamma_{24}$, which is a typical symmetric double-EIT scheme. Under this condition, the linear dispersion relations of the probe and signal fields are almost the same, i.e., $K_{p}\approx K_{s}$, and thus the group velocities are well matched, as shown in Fig. 2.

Figure 2: The linear dispersion relations and group velocities of the probe and signal fields. (a) The absorption spectrum ${\rm Im}(K_{p(s)})$ and (b) the dispersion relation ${\rm Re}(K_{p(s)})$ as functions of the frequency shift $\omega$. (c) The group velocity $\tilde{V}_{g}/c$ of the probe field and the signal field as a function of $\omega$. The inset panel shows the group-velocity matching region. The red solid line and blue dotted line correspond to the probe field and the signal field, respectively.

Figures 2(a) and 2(b) show the imaginary and real parts of the linear dispersion relation $K_{p(s)}(\omega)$ as a function of the center frequency shift $\omega$. The system parameters are chosen from the D1-line transition of $^{87}$Rb atoms, with the energy levels selected as $|1\rangle=|5^{2}\textrm{S}_{1/2},F=1,m_{F}=-1\rangle$, $|2\rangle=|5^{2}\textrm{S}_{1/2},F=2,m_{F}=1\rangle$, $|3\rangle=|5^{2}\textrm{S}_{1/2},F=2,m_{F}=0\rangle$, and $|4\rangle=|5^{2}\textrm{P}_{1/2},F=2,m_{F}=0\rangle$ [Chen 2019].
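To make the linear properties concrete, the following sketch evaluates Eq. (4a) at line center and estimates the group velocity by a finite difference, using the parameters quoted below ($\Gamma_{4}=5.75$ MHz, $\Omega_{c}=10$ MHz, $\kappa_{14}\approx 10^{9}$ cm$^{-1}$s$^{-1}$). The ground-state dephasing rate $\gamma_{31}$ is an assumed small value, and all rates are treated as angular frequencies; with these assumptions the sketch reproduces the strongly suppressed absorption and the subluminal group velocity $\sim 10^{-4}c$ discussed in the text.

```python
import numpy as np

# Sketch: linear dispersion relation K_p(omega) of Eq. (4a) at line center.
c = 3e10                                         # speed of light (cm/s)
Gamma4 = 2 * np.pi * 5.75e6                      # decay rate of |4> (s^-1)
gamma41 = Gamma4 / 2                             # gamma_41 = Gamma_4 / 2
gamma31 = 2 * np.pi * 1e3                        # assumed small dephasing (s^-1)
Omega_c = 2 * np.pi * 10e6                       # control half-Rabi frequency (s^-1)
kappa14 = 1.0e9                                  # coupling coefficient (cm^-1 s^-1)
sigma11 = 0.5                                    # zero-order population of |1>
d31, d41 = 1j * gamma31, 1j * gamma41            # d_jl for vanishing detunings

def Kp(omega):
    Dp = abs(Omega_c) ** 2 - (omega + d31) * (omega + d41)
    return omega / c + kappa14 * (omega + d31) * sigma11 / Dp

dw = 1.0                                         # finite-difference step (s^-1)
Vg = 1.0 / ((Kp(dw) - Kp(-dw)) / (2 * dw)).real
print(f"Im K_p(0) = {Kp(0.0).imag:.2e} cm^-1 (EIT-suppressed absorption)")
print(f"V_g / c   = {Vg / c:.1e} (subluminal, of order 1e-4 c)")
```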
The decay rate is $\Gamma_{4}=5.75$ MHz, and we take $\Omega_{c}=10$ MHz, $\Delta_{2}=\Delta_{3}=\Delta_{4}=0$, and $\lambda_{c}=\lambda_{p}=\lambda_{s}=780$ nm. The waveguide parameters are $\mu_{1}=0.83+0.0001i$, $\mu_{2}=1$, $\varepsilon_{1}=-31.14+0.32i$, $\varepsilon_{2}=1$, and $k(\omega_{s})=k(\omega_{p})=(8.17+0.0012i)\times 10^{4}$ cm$^{-1}$. The electric dipole matrix elements are $|\textbf{p}_{14}|\simeq|\textbf{p}_{24}|=1.46\times 10^{-27}$ C$\cdot$cm. We assume an atomic density $\mathcal{N}_{a}\approx 1.10\times 10^{11}$ cm$^{-3}$; then $\kappa_{14}\approx\kappa_{24}\approx 1.0\times 10^{9}$ cm$^{-1}$ s$^{-1}$. As shown in Fig. 2(a), the absorption spectra of the probe and signal fields almost overlap under the double-EIT effect; combined with the dispersion properties shown in Fig. 2(b), this means that the double-EIT effect strongly suppresses the absorption of the two fields while simultaneously satisfying the group-velocity matching condition. In Fig. 2(c), the red solid line and blue dashed line represent the group velocity $\tilde{V}_{g}/c$ ($\tilde{V}_{g}={\rm Re}(V_{g})$) as a function of the frequency shift $\omega$ for the probe and signal fields, respectively. The group velocity of the signal field matches that of the probe field well, and both are subluminal ($\sim 10^{-4}c$).

### III.2 NONLINEAR PROPERTIES OF THE SYSTEM

In this subsection, we study the nonlinear excitation of SPs in this system. At the third and fourth orders of the MB equations, solvability conditions require the probe- and signal-field envelope functions $F_{1}$ and $F_{2}$ to satisfy

$\displaystyle i(\frac{\partial}{\partial x_{2}}+\frac{\partial}{\partial t_{2}}\frac{1}{V_{g_{1}}})F_{1}+\frac{c}{2\omega_{p}}\frac{\partial^{2}}{\partial y_{1}^{2}}F_{1}-W_{11}|F_{1}|^{2}F_{1}e^{-2\bar{\alpha}_{1}x_{2}}=0,$ (5a)
$\displaystyle i(\frac{\partial}{\partial x_{2}}+\frac{\partial}{\partial t_{2}}\frac{1}{V_{g_{2}}})e^{i\theta_{s}}F_{2}+\frac{c}{2\omega_{s}}\frac{1}{n_{\rm eff}}\frac{\partial^{2}}{\partial y_{1}^{2}}e^{i\theta_{s}}F_{2}-W_{21}|F_{1}|^{2}e^{i\theta_{s}}F_{2}e^{-2\bar{\alpha}_{1}x_{2}}=0,$ (5b)

with $\bar{\alpha}_{1}=\epsilon^{-2}\alpha_{1}=\epsilon^{-2}{\rm Im}[K_{p}(\omega)]$, and $W_{11}$ and $W_{21}$ being the nonlinear coefficients describing the self-phase modulation (SPM) of the probe field and the XPM between the probe and signal fields, respectively. The relations between the nonlinear coefficients and the self- and cross-Kerr susceptibilities are

$\displaystyle\chi_{11}^{(3)}=\frac{2c}{\omega_{p}}\frac{|\mathbf{p}_{14}|^{2}}{\hbar^{2}}W_{11},$ (6a)
$\displaystyle\chi_{21}^{(3)}=\frac{2c}{\omega_{s}}\frac{|\mathbf{p}_{24}|^{2}}{\hbar^{2}}W_{21}.$ (6b)

In general, the coefficients in Eq. (5) are complex; thus the system does not generally admit stable localized nonlinear solutions. Fortunately, if the system works under the double-EIT condition, the imaginary parts of these coefficients can be much smaller than the real parts. In addition to the parameters mentioned above, we choose the other parameters as $\Omega_{c}=1.0\times 10^{6}$ Hz, $U_{0}=2.24\times 10^{8}$ Hz, $R_{y}=107$ nm, $\Delta_{2}=1.0\times 10^{4}$ Hz, $\Delta_{3}=1.0\times 10^{5}$ Hz, and $\Delta_{4}=8.0\times 10^{7}$ Hz. We then obtain $W_{11}\approx(5.09+0.025i)\times 10^{-13}~{}\rm{cm^{-1}s^{2}}$ and $W_{21}\approx(5.18+0.017i)\times 10^{-13}~{}\mathrm{cm^{-1}s^{2}}$.
Based on Eq. (6), the self- and cross-Kerr susceptibilities are $\chi_{11}^{(3)}=(2.42+0.012i)\times 10^{-2}~{}\mathrm{cm^{2}V^{-2}}$ and $\chi_{21}^{(3)}=(2.46+0.008i)\times 10^{-2}~{}\mathrm{cm^{2}V^{-2}}$, respectively, corresponding to giant Kerr effects. The typical diffraction length $L_{\rm Diff}=\omega_{p(s)}R_{y}^{2}/c$ of the system is approximately $9.24\times 10^{-6}$ cm, and the typical nonlinearity length $L_{\rm Nonl}=1/[{U_{0}^{2}{\rm Re(W_{11})}}]$ is approximately equal to $L_{\rm Diff}$; thus, the diffraction effect can balance the nonlinearity of the system. The linear absorption lengths $L_{\rm Ap(s)}=1/{\rm Im}(K_{p(s)}+k_{p(s)})$ are approximately $L_{\rm Ap}=0.0836$ cm and $L_{\rm As}=0.0835$ cm, so the condition $L_{\rm Ap(s)}\gg L_{\rm Nonl}\approx L_{\rm Diff}$ holds. Here $R_{y}$ and $U_{0}=1/\sqrt{L_{\rm Diff}{\rm Re}(W_{11})}$ are the transverse radius and the typical half-Rabi frequency of the probe field, respectively. Under the above parameter conditions, it is possible to obtain two-component soliton solutions of Eq. (5) [Chen 2019].

### III.3 Coupled NLSEs

For convenience, we transform Eqs. (5a) and (5b) into dimensionless form, obtaining the coupled nonlinear Schrödinger equations (NLSEs)

$\displaystyle i(\frac{\partial}{\partial s}+\frac{1}{\lambda_{1}}\frac{\partial}{\partial\tau})u_{1}+\frac{1}{2}\frac{\partial^{2}}{\partial\xi^{2}}u_{1}-w_{11}u_{1}|u_{1}|^{2}=-iA_{1}u_{1},$ (7a)
$\displaystyle i(\frac{\partial}{\partial s}+\frac{1}{\lambda_{2}}\frac{\partial}{\partial\tau})u_{2}+\frac{1}{2}\frac{\partial^{2}}{\partial\xi^{2}}u_{2}-w_{21}u_{2}|u_{1}|^{2}=-iA_{2}u_{2},$ (7b)

where the dimensionless quantities are defined as $u_{j}=\epsilon F_{j}/U_{0}e^{-\bar{\alpha}_{j}x_{2}}$, $s=x/L_{\rm Diff}$, $\tau=t/\tau_{0}$, $\lambda_{j}=V_{gj}\tau_{0}/L_{\textrm{Diff}}$, $\xi=y/R_{y}$, $w_{j1}=W_{j1}/{|W_{11}|^{2}}$, and $A_{1(2)}=L_{\rm Diff}\alpha_{1(2)}$. Here $\tau_{0}$ is the pulse duration. To solve the above equations, we assume $u_{j}(\tau,s,\xi)=g_{j}(\tau,s)v_{j}(\tau,\xi)$ with

$g_{j}(\tau,s)=\frac{1}{\sqrt[4]{4\pi\rho_{0}^{2}}}e^{-(s-\lambda_{j}\tau)^{2}/{4\rho_{0}^{2}}},$ (8)

where $\rho_{0}$ is a free real parameter. After neglecting the small absorption coefficients $A_{j}$ and integrating over the variable $s$, Eqs. (7a) and (7b) simplify to

$\displaystyle\left(\frac{i}{\lambda_{1}}\frac{\partial}{\partial\tau}+\frac{1}{2}\frac{\partial^{2}}{\partial\xi^{2}}\right)v_{1}-\frac{1}{2\sqrt{\pi}\rho_{0}}w_{11}|v_{1}|^{2}v_{1}=0,$ (9a)
$\displaystyle\left(\frac{i}{\lambda_{2}}\frac{\partial}{\partial\tau}+\frac{1}{2}\frac{\partial^{2}}{\partial\xi^{2}}\right)v_{2}-\frac{1}{2\sqrt{\pi}\rho_{0}}w_{21}|v_{1}|^{2}v_{2}=0.$ (9b)

We next study the spatial focusing of SPs based on these equations.

## IV SPATIAL FOCUSING OF SPS BASED ON XPM

Equation (9) admits many soliton-pair solutions [Kivshar 2003]. Next, we choose three typical soliton-pair solutions as initial conditions to study the XPM between the probe and signal fields and the spatial focusing of the signal field based on XPM.

(i) Bright-bright soliton pair. The bright-bright soliton pair reads

$v_{j}=\varsigma_{j}\mathrm{sech}[\varsigma_{jj}(\xi-\eta_{j}\tau-\xi_{0})]e^{i[\eta_{j}\xi-(\eta_{j}^{2}-\varsigma_{j}^{2})\tau/2-\varphi_{0}]},~{}(j=1,2)$ (10)

Figure 3: (a) Time evolution of the bright SPs soliton $v_{2}$.
After the probe soliton $v_{1}$ is turned on at $t/\tau_{0}=2$, $v_{2}$ is clearly focused; (b) the input and output profiles of the SPs soliton; (c) the FWHM of the SPs soliton as a function of the propagation time $t/\tau_{0}$.

Here $\varsigma_{j}$, $\varsigma_{jj}$, $\eta_{j}$, $\xi_{0}$, and $\varphi_{0}$ are free real parameters [Chen 2019]. For the numerical simulation, we choose the initial values of the free parameters as $\varsigma_{1}=8$, $\varsigma_{2}=1$, $\varsigma_{11}=6$, $\varsigma_{22}=0.6$, and the others are zero. To study the process of spatial focusing, we first input the SPs soliton $v_{2}$ and then turn on the probe soliton $v_{1}$ at $t/\tau_{0}=2$. The results are shown in Fig. 3. The SPs soliton first undergoes slight diffraction due to the absence of the XPM term in Eq. (9b); when the narrow probe soliton is turned on, the SPs soliton is focused into a very tiny area within a very short response time, as shown in Fig. 3(a). The physical mechanism of this effect is that the narrow probe soliton modulates the profile of the SPs soliton in the overlap area via XPM. From Fig. 3(b), the full width at half maximum (FWHM) of the output profile of the SPs soliton is much smaller than that of the input one. We also show FWHM/$L_{\rm Diff}$ as a function of the propagation time $t/\tau_{0}$ in Fig. 3(c). The FWHM of the initial SPs soliton is $2.2L_{\rm Diff}\sim 203.28~{}$nm. After the spatial focusing, the curve drops sharply, and the FWHM of the output SPs soliton is only $0.16L_{\rm Diff}\sim 14.78~{}$nm, compressed by nearly a factor of $14$. Thus, even nanofocusing of SPs can be realized in our scheme.

(ii) Bright-bright soliton pairs.

Figure 4: (a) Time evolution of the multiple SPs solitons $v_{2}$. After the probe soliton $v_{1}$ is turned on at $t/\tau_{0}=1$, $v_{2}$ is clearly focused; (b) the input and output profiles of the SPs solitons.

Equation (9) also has multi-soliton-pair solutions. Under the same conditions as before, we input three bright SPs solitons with initial positions $\xi_{0}=-30,~{}0,~{}{\rm and}~{}30$, and then turn on probe solitons of the same form at $t/\tau_{0}=1$, as shown in Fig. 4(a). We find that such multiple SPs solitons can also be focused to a very narrow transverse width via XPM; the normalized input and output profiles are shown in Fig. 4(b). The FWHM of the input field is about $3.3L_{\mathrm{Diff}}$. After the XPM, the pulse width of $v_{2}$ drops sharply, and the output width is about $0.16L_{\mathrm{Diff}}$, compressed by nearly a factor of $20$. Such results could be applied to the fabrication of surface plasmon polariton gratings with high spectral resolution at the micro/nano scale.

(iii) Dark-dark soliton pair. To further verify our theory, we also choose a dark soliton pair as the initial condition of the numerical simulation; the dark soliton pair reads

$v_{j}=\psi_{j}\{i\sin{\phi_{j}}+m_{j}\cos{\phi_{j}}\tanh[b_{j}(\xi-h\tau)]\}e^{in_{j}c_{j}\xi+i[c_{j}^{2}/2+\chi_{j}]\tau},~{}(j=1,2),$ (11)

Figure 5: (a) Time evolution of the dark SPs soliton $v_{2}$. After the probe soliton $v_{1}$ is turned on at $t/\tau_{0}=3$, $v_{2}$ is clearly focused; (b) the input and output profiles of the SPs soliton; (c) the FWHM of the SPs soliton as a function of the propagation time $t/\tau_{0}$.
where $\phi_{j}=\mathrm{arctan}{[(c_{j}-b_{j})/a_{j}]}$, and $\psi_{j}$, $a_{j}$, $b_{j}$, $\chi_{j}$, and $c_{j}$ are real parameters [Sheppard 1997]. The initial values of the free parameters are $\psi_{1}=-8.14$, $a_{1}=3$, $b_{1}=18$, $c_{1}=2$, $m_{1}=6.51$, $n_{1}=0.4$, $\chi_{1}=-2$, $\psi_{2}=-1.20$, $a_{2}=\sqrt{0.8}$, $b_{2}=0.3$, $c_{2}=\sqrt{8}$, $m_{2}=1.07$, $n_{2}=0.28$, $\chi_{2}=-4$, and the other parameters are zero. The results are shown in Fig. 5, with some differences compared with the bright-soliton case. The SPs soliton first propagates stably when the XPM is absent; when the dark probe soliton is turned on at $t/\tau_{0}=3$, a very narrow and deep dip appears in the overlap region between the signal and probe fields, i.e., spatial focusing, as shown in Fig. 5(a). In Fig. 5(b), we find that there are no higher harmonics outside the interaction region. The appearance of the narrow dip is due to the energy transfer in the XPM process. Fig. 5(c) shows the FWHM/$L_{\rm Diff}$ of the narrow dip as a function of $t/\tau_{0}$; the FWHM of the SPs soliton drops more slowly than that shown in Fig. 3(c), which means the interaction time between the dark solitons is longer than that between the bright solitons.

## V SUMMARY

In conclusion, we have proposed a scheme based on XPM under double EIT to realize spatial focusing of SPs in an NIMM-interface waveguide system. First, we obtained low-loss stable propagation of SPs with well-matched group velocities of the probe and signal fields under the double-EIT condition in the linear regime; a giant enhancement of the XPM was then obtained in the nonlinear regime. Finally, the coupled NLSEs were derived for our system; by adopting the bright-bright soliton pair, multiple bright soliton pairs, and the dark-dark soliton pair as initial conditions, we realized the spatial focusing of SPs solitons, and even nanofocusing, via XPM between narrow optical solitons in free space and SPs solitons. We also found that the response time of the bright-bright soliton pair is much shorter than that of the dark one. These results not only provide a theoretical basis for the active manipulation of SPs, but also have broad application prospects in micro/nano optics.

###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11604185, 11704066, 11804196 and 11947072.

## Appendix A Expressions related to the electric field of SPs

The permittivity ($\varepsilon_{1}$) and the permeability ($\mu_{1}$) of the NIMM can be described by the Drude model, i.e., $\varepsilon_{1}(\omega_{s})=\varepsilon_{\infty}-\omega_{e}^{2}/[\omega_{s}(\omega_{s}+i\gamma_{e})]$ and $\mu_{1}(\omega_{s})=\mu_{\infty}-\omega_{m}^{2}/[\omega_{s}(\omega_{s}+i\gamma_{m})]$, where $\omega_{e,m}$ are the electric and magnetic plasma frequencies of the NIMM, $\gamma_{e,m}$ are the corresponding decay rates, and $\varepsilon_{\infty}$ and $\mu_{\infty}$ are background constants. The decay coefficients along the $z$ direction read $k_{j}^{2}=k(\omega_{s})^{2}-\varepsilon_{j}\mu_{j}\omega_{s}^{2}/c^{2},$ where $j=1$ for the NIMM and $j=2$ for the atomic gas; they satisfy the relation $k_{1}\varepsilon_{2}=-k_{2}\varepsilon_{1}$, which gives the propagation constant of the SPs, i.e., $k(\omega_{s})=\omega_{s}[\varepsilon_{1}\varepsilon_{2}(\varepsilon_{1}\mu_{2}-\varepsilon_{2}\mu_{1})/(\varepsilon_{1}^{2}-\varepsilon_{2}^{2})]^{1/2}/c$.
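As a numerical cross-check, the sketch below evaluates the Drude permittivity and the propagation constant $k(\omega_{s})$ from the expressions above, using the parameters of this appendix together with the $\mu_{1}$ quoted in Sec. III. It reproduces $\varepsilon_{1}\approx-31.2+0.4i$ (close to the quoted $-31.14+0.32i$; the small difference can come from rounding of $\omega_{s}$) and $k(\omega_{s})\approx(8.17+0.0012i)\times 10^{4}$ cm$^{-1}$.

```python
import numpy as np

# Sketch: Drude permittivity and SP propagation constant at lambda_s = 780 nm.
c = 2.998e10                               # speed of light (cm/s)
w_s = 2 * np.pi * c / 780e-7               # signal angular frequency (s^-1)
eps_inf = 1.0
w_e, gamma_e = 1.37e16, 2.73e13            # electric plasma frequency / decay (s^-1)
eps1 = eps_inf - w_e**2 / (w_s * (w_s + 1j * gamma_e))   # Drude epsilon_1
eps2, mu2 = 1.0, 1.0                       # atomic-gas side
mu1 = 0.83 + 0.0001j                       # permeability value quoted in the text

k = (w_s / c) * np.sqrt(eps1 * eps2 * (eps1 * mu2 - eps2 * mu1)
                        / (eps1**2 - eps2**2))
print("eps_1 =", np.round(eps1, 2))        # ~ -31.2 + 0.36i
print("k(w_s) = (%.2f + %.4fi) x 10^4 cm^-1" % (k.real / 1e4, k.imag / 1e4))
```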
The electric field of the SPs in the atomic gas reads $\mathbf{E}_{s}(\mathbf{r},t)=\mathcal{E}_{s}(\mathbf{r},t)\mathbf{u}_{s}(z)\exp{[i(k(\omega_{s})x-\omega_{s}t)]}+c.c.$, with the mode function being $\mathbf{u}_{s}(z)=-c[k(\omega_{s})\mathbf{\hat{z}}-ik_{2}(\omega_{s})\mathbf{\hat{x}}]e^{k_{2}z}/\varepsilon_{2}\omega_{s}$. In our analysis, the above system parameters are taken as $\varepsilon_{\infty}=1$, $\mu_{\infty}=1$, $\omega_{e}=1.37\times 10^{16}~{}{\rm s^{-1}}$, $\omega_{m}=2.45\times 10^{15}~{}{\rm s^{-1}}$, $\gamma_{e}=2.73\times 10^{13}~{}{\rm s^{-1}}$ (appropriate for Ag), and $\gamma_{m}=\gamma_{e}/1000$. For a detailed derivation, one can refer to Ref. [Liu 2020].

## Appendix B Equations of motion for the density-matrix elements

The explicit Bloch equations describing the density-matrix elements $\sigma_{jl}$ $(j,l=1,2,3,4)$ in the interaction representation read as follows:

$\displaystyle i\frac{\partial}{\partial t}\sigma_{11}-i\Gamma_{14}\sigma_{44}+\Omega_{p}^{*}\sigma_{41}-\Omega_{p}\sigma_{41}^{*}=0,$ (12a)
$\displaystyle i\frac{\partial}{\partial t}\sigma_{22}-i\Gamma_{24}\sigma_{44}+\zeta(z)^{*}e^{-i\theta_{s}^{*}}\Omega_{s}^{*}\sigma_{42}-\zeta(z)e^{-i\theta_{s}}\Omega_{s}\sigma_{42}^{*}=0,$ (12b)
$\displaystyle i\frac{\partial}{\partial t}\sigma_{33}-i\Gamma_{34}\sigma_{44}+\Omega_{c}^{*}\sigma_{43}-\Omega_{c}\sigma_{43}^{*}=0,$ (12c)
$\displaystyle i(\frac{\partial}{\partial t}+\Gamma_{4})\sigma_{44}+\Omega_{p}\sigma_{41}^{*}+\zeta(z)e^{i\theta_{s}}\Omega_{s}\sigma_{42}^{*}+\Omega_{c}\sigma_{43}^{*}-\Omega_{p}^{*}\sigma_{41}-\zeta(z)^{*}e^{-i\theta_{s}^{*}}\Omega_{s}^{*}\sigma_{42}-\Omega_{c}^{*}\sigma_{43}=0,$ (12d)
$\displaystyle(i\frac{\partial}{\partial t}+d_{21})\sigma_{21}+\zeta(z)^{*}e^{-i\theta_{s}^{*}}\Omega_{s}^{*}\sigma_{41}-\Omega_{p}\sigma_{42}^{*}=0,$ (12e)
$\displaystyle(i\frac{\partial}{\partial t}+d_{31})\sigma_{31}+\Omega_{c}^{*}\sigma_{41}-\Omega_{p}\sigma_{43}^{*}=0,$ (12f)
$\displaystyle(i\frac{\partial}{\partial t}+d_{32})\sigma_{32}+\Omega_{c}^{*}\sigma_{42}-\zeta(z)e^{i\theta_{s}}\Omega_{s}\sigma_{43}^{*}=0,$ (12g)
$\displaystyle(i\frac{\partial}{\partial t}+d_{41})\sigma_{41}+\Omega_{p}(\sigma_{11}-\sigma_{44})+\zeta(z)e^{i\theta_{s}}\Omega_{s}\sigma_{21}+\Omega_{c}\sigma_{31}=0,$ (12h)
$\displaystyle(i\frac{\partial}{\partial t}+d_{42})\sigma_{42}+\zeta(z)e^{i\theta_{s}}\Omega_{s}(\sigma_{22}-\sigma_{44})+\Omega_{p}\sigma_{21}^{*}+\Omega_{c}\sigma_{32}=0,$ (12i)
$\displaystyle(i\frac{\partial}{\partial t}+d_{43})\sigma_{43}+\Omega_{c}(\sigma_{33}-\sigma_{44})+\Omega_{p}\sigma_{31}^{*}+\zeta(z)e^{i\theta_{s}}\Omega_{s}\sigma_{32}^{*}=0,$ (12j)

where $d_{jl}=\Delta_{j}-\Delta_{l}+i\gamma_{jl}$ $(j,l=1,2,3,4)$, and the decoherence rates of the system are defined as $\gamma_{jl}=(\Gamma_{j}+\Gamma_{l})/2$, with $\Gamma_{j}=\Sigma_{E_{i}<E_{j}}\Gamma_{ij}$.

## Appendix C Solutions of the asymptotic expansion at the first and second orders

### C.1 First-order approximation

$\displaystyle\Omega_{p}^{(1)}=F_{1}e^{i\theta_{1}}=F_{1}e^{i[K_{p}(\omega)x_{0}-\omega t_{0}]},$ (13a)
$\displaystyle\sigma_{31}^{(1)}=-\frac{\Omega_{c}^{*}\sigma_{11}^{(0)}}{D_{p}}F_{1}e^{i\theta_{1}},$ (13b)
$\displaystyle\sigma_{41}^{(1)}=\frac{(\omega+d_{31})\sigma_{11}^{(0)}}{D_{p}}F_{1}e^{i\theta_{1}}.$ (13c)

The solutions for the other density-matrix elements are zero; here $D_{p}=|\Omega_{c}|^{2}-(\omega+d_{31})(\omega+d_{41})$ and $\theta_{1}=K_{p}(\omega)x_{0}-\omega t_{0}$ are defined.
$F_{1}$ is the yet-to-be-determined envelope function of the probe field, which depends on the slow variables $y_{1}$, $t_{2}$, and $x_{2}$.

### C.2 Second-order approximation

$\displaystyle\Omega_{s}^{(2)}=F_{2}e^{i\theta_{2}}=F_{2}e^{i[K_{s}(\omega)x_{0}-\omega t_{0}]},$ (14a)
$\displaystyle\sigma_{11}^{(2)}=a_{11}^{(2)}|F_{1}|^{2}e^{-2\bar{\alpha}_{1}x_{2}},$ (14b)
$\displaystyle\sigma_{22}^{(2)}=a_{22}^{(2)}|F_{1}|^{2}e^{-2\bar{\alpha}_{1}x_{2}},$ (14c)
$\displaystyle\sigma_{32}^{(2)}=a_{32}^{(2)}\zeta_{s}(z)e^{i\theta_{s}}F_{2}e^{i\theta_{2}},$ (14d)
$\displaystyle\sigma_{33}^{(2)}=a_{33}^{(2)}|F_{1}|^{2}e^{-2\bar{\alpha}_{1}x_{2}},$ (14e)
$\displaystyle\sigma_{42}^{(2)}=a_{42}^{(2)}\zeta_{s}(z)e^{i\theta_{s}}F_{2}e^{i\theta_{2}}.$ (14f)

We define $D_{s}=|\Omega_{c}|^{2}-(\omega+d_{32})(\omega+d_{42})$ and $\theta_{2}=K_{s}(\omega)x_{0}-\omega t_{0}$. $F_{2}$ is the envelope function of the signal field to be determined, which depends on the slow variables $y_{1}$, $t_{2}$, and $x_{2}$; the coefficients $a_{jl}^{(2)}$ are

$\displaystyle a_{11}^{(2)}=-\frac{\sigma_{11}^{(0)}}{2D_{p}^{*}},$ (15a)
$\displaystyle a_{22}^{(2)}=-\frac{\sigma_{11}^{(0)}}{2D_{p}^{*}},$ (15b)
$\displaystyle a_{32}^{(2)}=-\frac{\Omega_{c}^{*}\sigma_{22}^{(0)}}{D_{s}},$ (15c)
$\displaystyle a_{33}^{(2)}=-\frac{\sigma_{11}^{(0)}}{D_{p}^{*}},$ (15d)
$\displaystyle a_{42}^{(2)}=-\frac{(\omega+d_{32})\sigma_{22}^{(0)}}{D_{s}}.$ (15e)

### C.3 Third-order approximation

$\sigma_{41}^{(3)}=i\frac{\partial}{\partial t_{2}}\frac{[|\Omega_{c}|^{2}+(\omega+d_{31})^{2}]\sigma_{11}^{(0)}}{D_{p}^{2}}\Omega_{p}^{(1)}+\frac{(\omega+d_{31})a_{11}^{(2)}}{D_{p}}|\Omega_{p}^{(1)}|^{2}\Omega_{p}^{(1)}.$ (16)

The explicit expression of the SPM coefficient $W_{11}$ in Eq. (5a) reads

$W_{11}=-\kappa_{14}\frac{(\omega+d_{31})a_{11}^{(2)}}{D_{p}}=\kappa_{14}\frac{(\omega+d_{31})\sigma_{11}^{(0)}}{2|D_{p}|^{2}}.$ (17)

### C.4 Fourth-order approximation

The explicit expression of the XPM coefficient $W_{21}$ in Eq. (5b) reads

$W_{21}=-\kappa_{24}\frac{(\omega+d_{32})a_{22}^{(2)}}{D_{s}}=\kappa_{24}\frac{(\omega+d_{32})\sigma_{11}^{(0)}}{2D_{s}D^{*}_{p}}.$ (18)

## References

* (1) P. Groß, M. Esmann, S. F. Becker, J. Vogelsang, N. Talebi, and C. Lienau, Plasmonic nanofocusing - grey holes for light, Adv. Phys. X 1, 297 (2016).
* (2) P. Dombi, Z. Pápa, J. Vogelsang, S. V. Yalunin, M. Sivis, G. Herink, S. Schäfer, P. Groß, C. Ropers, and C. Lienau, Strong-field nano-optics, Rev. Mod. Phys. 92, 025003 (2020).
* (3) C. C. Neacsu, S. Berweger, R. L. Olmon, L. V. Saraf, C. Ropers, and M. B. Raschke, Near-Field Localization in Plasmonic Superfocusing: A Nanoemitter on a Tip, Nano Lett. 10, 592 (2010).
* (4) D. Sadiq, J. Shirdel, J. S. Lee, E. Selishcheva, N. Park, and C. Lienau, Adiabatic Nanofocusing Scattering-Type Optical Nanoscopy of Individual Gold Nanoparticles, Nano Lett. 11, 1609 (2011).
* (5) S. Schmidt, B. Piglosiewicz, D. Sadiq, J. Shirdel, J. S. Lee, P. Vasa, N. Park, D. S. Kim, and C. Lienau, Adiabatic Nanofocusing on Ultrasmooth Single-Crystalline Gold Tapers Creates a 10-nm-Sized Light Source with Few-Cycle Time Resolution, ACS Nano 6, 6040 (2012).
* (6) F. Huth, M. Schnell, J. Wittborn, N. Ocelic, and R. Hillenbrand, Infrared-spectroscopic nanoimaging with a thermal source, Nat. Mater. 10, 352 (2011).
* (7) R. Zhang, Y. Zhang, Z. C. Dong, S. Jiang, C. Zhang, L. Chen, L. Zhang, Y. Liao, J. Aizpurua, Y. Luo, J. Yang, and J. Hou, Chemical mapping of a single molecule by plasmon-enhanced Raman scattering, Nature 498, 82 (2013).
* (8) J. Zhong, X. Jin, L. Meng, X. Wang, H. Su, Z. Yang, C. T. Williams, and B. Ren, Probing the electronic and catalytic properties of a bimetallic surface with 3 nm resolution, Nat. Nanotechnol. 12, 132 (2017).
* (9) M. Liu, F. Lu, W. Zhang, L. Huang, S. Liang, D. Mao, F. Gao, T. Mei, and J. Zhao, Highly efficient plasmonic nanofocusing on a metallized fiber tip with internal illumination of the radial vector mode using an acousto-optic coupling approach, Nanophotonics 8, 921 (2019).
* (10) L. Zhu, Y. Yin, L. Dai, Y. Hu, J. Nan, Z. Zheng, C. Cai, W. Zhao, and M. Ding, Round-tower plasmonic optical microfiber tip for nanofocusing with a high field enhancement, Opt. Commun. 453, 124358 (2019).
* (11) F. Lu, W. Zhang, L. Zhang, M. Liu, T. Xue, L. Huang, F. Gao, and T. Mei, Nanofocusing of Surface Plasmon Polaritons on Metal-Coated Fiber Tip Under Internal Excitation of Radial Vector Beam, Plasmonics 14, 1593 (2019).
* (12) T. Umakoshi, M. Tanaka, Y. Saito, and P. Verma, White nanolight source for optical nanoimaging, Sci. Adv. 6, 4179 (2020).
* (13) M. Esmann, A. Chimeh, A. Korte, J. Zhong, S. Stephan, J. Witt, G. Wittstock, N. Talebi, and C. Lienau, Plasmonic nanofocusing spectral interferometry, Nanophotonics 9, 491 (2020).
* (14) R. C. Dunn, Near-field scanning optical microscopy, Chem. Rev. 99, 2891 (1999).
* (15) J. N. Anker, W. P. Hall, O. Lyadres, N. C. Shah, J. Zhao, and R. P. V. Duyne, Biosensing with plasmonic nanosensors, Nat. Mater. 7, 308 (2009).
* (16) R. M. Stockle, Y. D. Suh, V. Deckert, and R. Zenobi, Nanoscale chemical analysis by tip-enhanced Raman spectroscopy, Chem. Phys. Lett. 318, 131 (2000).
* (17) S. Berweger, J. M. Atkin, R. L. Olmon, and M. B. Raschke, Adiabatic tip-plasmon focusing for nano-Raman spectroscopy, J. Phys. Chem. Lett. 1, 3427 (2010).
* (18) A. W. Bargioni, A. Schwartzberg, M. Cornaglia, A. Ismach, J. J. Urban, Y. Pang, R. Gordon, J. Bokor, M. B. Salmeron, D. F. Ogletree, P. Ashby, S. Cabrini, and P. J. Schuck, Hyperspectral nanoscale imaging on dielectric substrates with coaxial optical antenna scan probes, Nano Lett. 4, 1201 (2011).
* (19) J. Stadler, T. Schmid, and R. Zenobi, Developments in and practical guidelines for tip-enhanced Raman spectroscopy, Nanoscale 4, 1856 (2012).
* (20) C. Chen, N. Hayazawa, and S. Kawata, A 1.7 nm resolution chemical analysis of carbon nanotubes by tip-enhanced Raman imaging in the ambient, Nat. Commun. 5, 3312 (2014).
* (21) F. Lu, T. Huang, L. Han, H. Su, H. Wang, M. Liu, W. Zhang, X. Wang, and T. Mei, Tip-enhanced Raman spectroscopy with high-order fiber vector beam excitation, Sensors 18, 3841 (2018).
* (22) W. Zhang, C. Li, K. Gao, F. Lu, M. Liu, X. Li, L. Zhang, D. Mao, F. Gao, L. Huang, T. Mei, and J. Zhao, Surface-enhanced Raman spectroscopy with Au-nanoparticle substrate fabricated by using femtosecond pulse, Nanotechnology 29, 205301 (2018).
* (23) C. C. Neacsu, G. A. Reider, and M. B. Raschke, Second-harmonic generation from nanoscopic metal tips: symmetry selection rules for single asymmetric nanostructures, Phys. Rev. B 71, 201402(R) (2005).
* (24) M. Kauranen and A. V. Zayats, Nonlinear plasmonics, Nat. Photonics 6, 737 (2012).
* (25) V. Kravtsov, R. Ulbricht, J. M. Atkin, and M. B. Raschke, Plasmonic nanofocused four-wave mixing for femtosecond near-field imaging, Nat. Nanotechnol. 11, 459 (2016).
* (26) S. Keramati, A. Passian, V. Khullar, and H. Batelaan, Photofield electron emission from an optical fiber nanotip, Appl. Phys. Lett. 117, 061102 (2020).
* (27) R. Zia, J. A. Schuller, and M. L. Brongersma, Near-field characterization of guided polariton propagation and cutoff in surface plasmon waveguides, Phys. Rev. B 74, 165415 (2006).
* (28) E. Verhagen, L. Kuipers, and A. Polman, Enhanced Nonlinear Optical Effects with a Tapered Plasmonic Waveguide, Nano Lett. 7, 334 (2007).
* (29) H. Choo, M. K. Kim, M. Staffaroni, T. J. Seok, J. Bokor, S. Cabrini, P. J. Schuck, M. C. Wu, and E. Yablonovitch, Nanofocusing in a metal-insulator-metal gap plasmon waveguide with a three-dimensional linear taper, Nat. Photonics 6, 838 (2012).
* (30) V. A. Zenin, A. Andryieuski, R. Malureanu, I. P. Radko, V. S. Volkov, D. K. Gramotnev, A. V. Lavrinenko, and S. I. Bozhevolnyi, Boosting Local Field Enhancement by on-Chip Nanofocusing and Impedance-Matched Plasmonic Antennas, Nano Lett. 15, 8148 (2015).
* (31) P. Li, D. Pan, L. Yang, H. Wei, S. He, H. Xu, and Z. Li, Silver nano-needles: focused optical field induced solution synthesis and application in remote-excitation nanofocusing SERS, Nanoscale 11, 2153 (2019).
* (32) A. Kamli, S. A. Moiseev, and B. C. Sanders, Coherent Control of Low Loss Surface Polaritons, Phys. Rev. Lett. 101, 263601 (2008).
* (33) S. Asgarnezhad-Zorgabad, R. Sadighi-Bonabi, and B. C. Sanders, Excitation and propagation of surface polaritonic rogue waves and breathers, Phys. Rev. A 98, 013825 (2018).
* (34) S. Asgarnezhad-Zorgabad, P. Berini, and B. C. Sanders, Polaritonic frequency-comb generation and breather propagation in a negative-index metamaterial with a cold four-level atomic medium, Phys. Rev. A 99, 051802(R) (2019).
* (35) Q. Liu, N. Li, and C. Tan, All-Optical Logic Gate Based on Manipulation of Surface Polariton Solitons via External Gradient Magnetic Fields, Phys. Rev. A 101, 023818 (2020).
* (36) G. P. Agrawal, Amplification of Ultrashort Solitons in Erbium-Doped Fiber Amplifiers, IEEE Photonics Technol. Lett. 2, 875 (1990).
* (37) G. P. Agrawal, Nonlinear Fiber Optics (Sixth Edition) (Academic Press, 2019).
* (38) X. Liu, Dynamic evolution of temporal dissipative-soliton molecules in large normal path-averaged dispersion fiber lasers, Phys. Rev. A 82, 063834 (2010).
* (39) X. Liu, Soliton formation and evolution in passively-mode-locked lasers with ultralong anomalous-dispersion fibers, Phys. Rev. A 84, 023835 (2011).
* (40) Y. S. Kivshar and G. P. Agrawal, Optical solitons: from fibers to photonic crystals (Academic Press, 2003).
* (41) G. Huang, L. Deng, and M. G. Payne, Dynamics of ultraslow optical solitons in a cold three-state atomic system, Phys. Rev. E 72, 016617 (2005).
* (42) Z. Chen, M. Segev, T. H. Coskun, D. N. Christodoulides, and Y. S. Kivshar, Coupled photorefractive spatial-soliton pairs, J. Opt. Soc. Am. B 14, 3066 (1997).
* (43) Z. Chen, H. Xie, Q. Li, and G. Huang, Stern-Gerlach deflection of optical Thirring solitons in a coherent atomic system, Phys. Rev. A 100, 013827 (2019).
* (44) A. P. Sheppard and Y. S. Kivshar, Polarized dark solitons in isotropic Kerr media, Phys. Rev. E 55 (1997).
# Same-Sign Dilepton Signature in the Inert Doublet Model

Fa-Xin Yang, Zhi-Long Han, and Yi Jin

School of Physics and Technology, University of Jinan, Jinan, Shandong 250022, China

###### Abstract

In this paper, we perform a detailed analysis of the same-sign dilepton signature in the inert doublet model. Focusing on the low dark matter mass region, we randomly scan the corresponding parameter space. Viable samples allowed by various constraints are obtained, among which twenty benchmark points are selected for a further collider study. At hadron colliders, the same-sign dilepton signature is produced via $pp\to W^{\pm*}W^{\pm*}jj\to H^{\pm}H^{\pm}jj$ with the leptonic decay mode $H^{\pm}\to HW^{\pm}(\to l^{\pm}\nu)$, where $H$ is the dark matter candidate. We investigate the testability of this signal at the high-luminosity LHC (HL-LHC) and the proposed 27 TeV high-energy LHC (HE-LHC). According to our simulation, the HL-LHC with $\mathcal{L}=3~{}ab^{-1}$ can hardly probe this signal. Meanwhile, for the HE-LHC with $\mathcal{L}=15~{}ab^{-1}$, it is promising to obtain a $5\sigma$ significance when $250~{}\text{GeV}\lesssim m_{H^{\pm}}-m_{H}\lesssim 300$ GeV with dark matter mass $m_{H}\sim 60$ or 71 GeV.

## I Introduction

Although the discovery of the Higgs boson [Aad:2012tfa; Chatrchyan:2012ufa] demonstrated the viability of the Standard Model (SM), there is convincing evidence of physics beyond the SM, such as the origin of dark matter (DM) and the tiny neutrino masses. Recent Planck data indicate that dark matter accounts for about 85% of the total matter content of the universe [Aghanim:2018eyx]. Among the various candidates for particle DM, Weakly Interacting Massive Particles (WIMPs) are the most popular [Bertone:2004pz; Arcadi:2017kky], because thermally produced WIMPs with a weak-scale cross section naturally lead to the observed DM relic density. The Inert Doublet Model (IDM) [Deshpande:1977rw; Barbieri:2006dq; LopezHonorez:2006gr] is one of the simplest extensions of the SM that provides a DM candidate. This model introduces an inert Higgs doublet, which is odd under an unbroken $Z_{2}$ symmetry. There are four additional scalars, as in the usual two Higgs doublet models [Branco:2011iw], i.e., a neutral CP-even scalar ($H$), a neutral CP-odd scalar ($A$), and a charged scalar ($H^{\pm}$). The imposed unbroken $Z_{2}$ symmetry not only forbids Yukawa interactions of the inert scalars with SM fermions, but also ensures that the lightest inert scalar is stable. In this paper, we consider the neutral CP-even scalar $H$ as the DM candidate. If $Z_{2}$-odd right-handed neutrinos are further introduced, the tiny neutrino masses can also be generated via the scotogenic mechanism [Ma:2006km; Han:2019diw; Han:2019lux; Wang:2019byi]. The phenomenology of the IDM has been extensively studied in Refs. [Gustafsson:2007pc; Cao:2007rm; Lundstrom:2008ai; Dolle:2009fn; Honorez:2010re; LopezHonorez:2010tb; Gustafsson:2012aj; Borah:2012pu; Swiezewska:2012eh; Osland:2013sla; Goudelis:2013uca; Modak:2015uda; Blinov:2015vma; Arhrib:2015hoa; Plascencia:2015xwa; Ilnicka:2015jba; Diaz:2015pyv; Kanemura:2016sos; Hashemi:2016wup; Belyaev:2016lok; Borah:2017dfn; Banerjee:2019luv; Jueid:2020rek; Abouabid:2020eik; Fabian:2020hny; Kalinowski:2020rmb; Banerjee:2021oxc; Banerjee:2021anv; Banerjee:2021xdp; Banerjee:2021hal]. Notably, the current positive evidence for DM all comes from cosmological observations, which are based on the gravitational effects of DM.
Therefore, the nature of DM is still an open question. To uncover its nature, searches have been performed along three directions: direct detection, indirect detection, and collider searches. Although the non-observation of a direct detection signal has already put stringent constraints on the parameter space of the IDM [Arhrib:2013ela; Belyaev:2016lok], it is still appealing to look for positive indirect-detection or collider signatures. For instance, low-mass DM in the IDM may explain the Galactic center excess reported by Fermi-LAT [Eiteneuer:2017hoh]. Meanwhile, a large part of the high-mass DM parameter space of the IDM is detectable at the Cherenkov Telescope Array [Queiroz:2015utg; Garcia-Cely:2015khw]. As for collider searches, promising signatures are the dilepton [Dolle:2009ft; Belanger:2015kga], trilepton [Miao:2010rg], and tetralepton channels [Gustafsson:2012aj; Datta:2016nfz] at the LHC. The vector boson fusion (VBF) channel $pp\to HHjj$ has also been considered in Refs. [Dutta:2017lny; Dercks:2018wch]. Other promising collider signatures can be found in Refs. [Aoki:2013lhm; Arhrib:2014pva; Hashemi:2015swh; Belyaev:2018ext; Kalinowski:2018ylg; Kalinowski:2018kdn; Guo-He:2020nok].

The same-sign pair production of charged Higgs bosons via vector boson fusion (VBF) in the two Higgs doublet model was recently proposed in Ref. [Aiko:2019mww] to explore the nature of the Higgs potential, where the two typical decay modes $H^{\pm}\to\tau\nu$ and $H^{\pm}\to tb$ were considered. The decay modes $H^{\pm}\to W^{\pm}A$ with $A\to b\bar{b}$ or $A\to\tau^{+}\tau^{-}$ were also studied in Ref. [Arhrib:2019ywg]. In this paper, we consider the decay mode $H^{\pm}\to W^{\pm}H$ with $H$ being the DM candidate, which leads to the same-sign dilepton signature $pp\to H^{\pm}H^{\pm}jj\to(W^{\pm}H)(W^{\pm}H)jj\to l^{\pm}l^{\pm}jj+\cancel{E}_{T}$. Notably, the well-studied opposite-sign dilepton signature in the IDM is only promising for a compressed mass spectrum $\Delta m=m_{A}-m_{H}\in[40,80]$ GeV [Dolle:2009ft]. A distinctive feature of the same-sign dilepton signature is that its production cross section is enhanced when the mass splitting $\Delta m$ becomes larger [Aiko:2019mww]. Meanwhile, the SM background of the same-sign dilepton signature [Sirunyan:2017ret; Aaboud:2019nmv; Sirunyan:2020gyx; Sirunyan:2020gvn] is much smaller than that of the opposite-sign dilepton. Therefore, we expect the same-sign dilepton signature to be promising for large $\Delta m$, complementary to the opposite-sign dilepton signature.

The paper is organized as follows. In Section II, we briefly review the inert doublet model; focusing on the low mass region $m_{H}<100$ GeV, the viable parameter space is explored under the relevant constraints. A detailed study of the same-sign dilepton signature is performed in Section III. The conclusion is presented in Section IV.

## II The Model

In this paper, we consider the inert doublet model proposed in Refs. [Deshpande:1977rw; LopezHonorez:2006gr]. In addition to the SM Higgs doublet $H_{1}$, an inert Higgs doublet $H_{2}$ is introduced. The inert doublet $H_{2}$ is odd under an imposed $Z_{2}$ symmetry; thus $H_{2}$ does not couple to SM fermions directly but only to gauge bosons. The $Z_{2}$ symmetry also ensures the stability of the DM candidate. Provided the $Z_{2}$ symmetry is not spontaneously broken, $H_{2}$ does not develop a vacuum expectation value (VEV).
The Higgs doublets can be denoted as

$H_{1}=\left(\begin{array}{c}G^{+}\\ \frac{1}{\sqrt{2}}(v+h+iG^{0})\end{array}\right),\quad H_{2}=\left(\begin{array}{c}H^{+}\\ \frac{1}{\sqrt{2}}(H+iA)\end{array}\right),$ (5)

where $G^{\pm},G^{0}$ are the would-be Goldstone bosons, $v$ is the VEV of $H_{1}$, and $h$ is the SM Higgs boson. The Higgs potential under the exact $Z_{2}$ symmetry is given by

$V=\mu_{1}^{2}H_{1}^{\dagger}H_{1}+\mu_{2}^{2}H_{2}^{\dagger}H_{2}+\lambda_{1}(H_{1}^{\dagger}H_{1})^{2}+\lambda_{2}(H_{2}^{\dagger}H_{2})^{2}+\lambda_{3}(H_{1}^{\dagger}H_{1})(H_{2}^{\dagger}H_{2})+\lambda_{4}(H_{1}^{\dagger}H_{2})(H_{2}^{\dagger}H_{1})+\frac{\lambda_{5}}{2}\left[(H_{1}^{\dagger}H_{2})^{2}+\text{h.c.}\right].$

Here, all free parameters are taken to be real. Due to the unbroken $Z_{2}$ symmetry, terms such as $\mu_{12}^{2}(H_{1}^{\dagger}H_{2}+H_{2}^{\dagger}H_{1})$ are forbidden; therefore, $H_{1}$ and $H_{2}$ do not mix. After electroweak symmetry breaking, the masses of the scalars are given by

$m_{h}^{2}=-2\mu_{1}^{2}=2\lambda_{1}v^{2},$ (7)
$m_{H}^{2}=\mu_{2}^{2}+\frac{1}{2}(\lambda_{3}+\lambda_{4}+\lambda_{5})v^{2},$ (8)
$m_{A}^{2}=\mu_{2}^{2}+\frac{1}{2}(\lambda_{3}+\lambda_{4}-\lambda_{5})v^{2},$ (9)
$m_{H^{\pm}}^{2}=\mu_{2}^{2}+\frac{1}{2}\lambda_{3}v^{2}.$ (10)

$H$ is taken to be the DM candidate in the following studies, which corresponds to $\lambda_{5}<0$. For $A$ being the DM candidate, one can simply make the replacement $\lambda_{5}\leftrightarrow-\lambda_{5}$. The parameters $\mu_{1}$ and $\lambda_{1}$ are fixed by the SM Higgs mass $m_{h}$ and the VEV $v$. We are then left with five free parameters, i.e., $\{\mu_{2},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5}\}$. A more convenient set of parameters is $\{m_{H},m_{A},m_{H^{\pm}},\lambda_{2},\lambda_{L}\}$, where $\lambda_{L}=(\lambda_{3}+\lambda_{4}+\lambda_{5})/2$ describes the Higgs-DM interaction $hHH$. As extensively discussed in previous studies, the above parameter set is constrained by various theoretical and experimental bounds. Benchmark points satisfying all constraints have been given in Ref. [Kalinowski:2018ylg]. Here, we briefly discuss the constraints relevant to the low mass region adopted in this work. More details can be found in Refs. [Belyaev:2016lok; Kalinowski:2018ylg].

* • Perturbativity: The model is perturbative when the quartic couplings satisfy $|\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5}|\leq 4\pi.$ (11)
* • Vacuum stability: The stability of the Higgs potential at tree level is guaranteed by the bounded-from-below conditions $\lambda_{1}>0,\lambda_{2}>0,\lambda_{3}+2\sqrt{\lambda_{1}\lambda_{2}}>0,\lambda_{3}+\lambda_{4}-|\lambda_{5}|+2\sqrt{\lambda_{1}\lambda_{2}}>0.$ (12)
* • Global minimum: To ensure that the inert minimum is the global one, one needs [Ginzburg:2010wa] $\frac{\mu_{1}^{2}}{\sqrt{\lambda_{1}}}\leq\frac{\mu_{2}^{2}}{\sqrt{\lambda_{2}}}.$ (13) Using Eqs. (7) and (8), the above condition can be translated to $\lambda_{L}\leq\frac{\sqrt{2\lambda_{2}}m_{h}v+2m_{H}^{2}}{2v^{2}}.$ (14)
* • Unitarity: The unitarity of the $S$-matrix for scattering processes among scalars and gauge bosons requires the corresponding absolute eigenvalues of the scattering matrix to be less than $8\pi$ [Arhrib:2012ia].
By requiring that the unitarity conditions remain valid up to about 10 TeV, the mass splittings are bounded as [Khan:2015ipa; Datta:2016nfz] $m_{A}-m_{H}\lesssim 300~{}{\rm GeV},~{}m_{H^{\pm}}-m_{H}\lesssim 300~{}{\rm GeV}.$ (15)
* • Electroweak precision tests: The inert Higgs doublet contributes to the oblique $S$ and $T$ parameters. Analytic expressions can be found in Ref. [Belyaev:2016lok]. For the experimental limits, we take the global fit result of Ref. [Baak:2014ora], $S=0.06\pm 0.09,~{}T=0.01\pm 0.07,$ (16) with correlation coefficient $+0.91$.
* • Gauge boson widths: The measured decay widths of the gauge bosons $W^{\pm}$ and $Z$ indicate that the masses of the inert scalars should satisfy $m_{A,H}+m_{H^{\pm}}>m_{W},~{}m_{A}+m_{H}>m_{Z},~{}2m_{H^{\pm}}>m_{Z},$ (17) so that decays of $W^{\pm},Z$ into inert scalars are not kinematically open.
* • Collider searches: Searches for supersymmetric particles at LEP via dijet or dilepton signals have excluded the mass region [Lundstrom:2008ai] $m_{A}\leq 100~{}{\rm GeV},~{}m_{H}\leq 80~{}{\rm GeV},~{}m_{A}-m_{H}\geq 8~{}{\rm GeV},$ (18) when the above conditions are satisfied simultaneously. Meanwhile, searches for charginos have set a lower limit on the charged scalar mass [Pierce:2007ut], $m_{H^{\pm}}\geq 70~{}{\rm GeV}.$ (19)
* • SM Higgs data: The Higgs invisible decay channel receives an additional contribution when the DM is light enough, i.e., $m_{H}<m_{h}/2$. The current experimental limit on the branching ratio of the Higgs invisible decay is [Khachatryan:2016whc] $\text{BR}(h\to\text{invisible})<0.24.$ (20) The charged scalar $H^{\pm}$ also affects the Higgs-to-diphoton channel via a one-loop contribution [Swiezewska:2012eh]. The experimental diphoton signal strength is [Khachatryan:2016vau] $\mu_{\gamma\gamma}=1.14^{+0.38}_{-0.36}.$ (21)
* • Relic density: The DM relic density observed by the Planck experiment is [Aghanim:2018eyx] $\Omega h^{2}=0.1200\pm 0.0012.$ (22) We require the theoretical relic density of $H$ to lie within the $3\sigma$ range of the observed value. micrOMEGAs [Barducci:2016pcb] is used to calculate the relic density.
* • Direct detection: We take the direct detection limit on the spin-independent cross section from the XENON1T experiment [Aprile:2018dbl], which is currently the most stringent one.

Focusing on the low mass region, we randomly scan the parameter space in the following ranges:

$\displaystyle m_{H}\in[50,80]~{}{\rm GeV},~{}\lambda_{L}\in[-0.04,0.04],~{}\lambda_{2}\in[0,1],$ (23) $\displaystyle m_{A}-m_{H}\in[0,300]~{}{\rm GeV},~{}m_{H^{\pm}}-m_{H}\in[0,300]~{}{\rm GeV}.$

Figure 1: Scan results for the low mass region. Distribution of samples in the $(m_{H},\lambda_{L})$ plane (panel a), the $(m_{A},m_{H})$ plane (panel b), the $(m_{H^{\pm}},m_{A})$ plane (panel c), and the $(m_{H^{\pm}}-m_{H},m_{A}-m_{H})$ plane (panel d). All samples satisfy the constraints of Eqs. (11) to (22). The gray points are further excluded by the XENON1T result [Aprile:2018dbl]. The green and red points are allowed by all constraints. The red points, listed in Table 1, are the benchmark points selected for the following same-sign dilepton study. The light blue band in panel d corresponds to the promising region of the opposite-sign dilepton signature [Dolle:2009ft].

The scan results are shown in Fig. 1. The requirement that the relic density lie within the $3\sigma$ range, together with the direct detection limit from XENON1T, strictly constrains the parameter space.
From Figs. 1(a) and 1(b), it is clear that the allowed samples of our scan fall into three separate regions. One is the Higgs resonance region around $m_{H}\lesssim m_{h}/2$. Another is the vector boson annihilation region around $m_{H}\sim 71.5$ GeV, where the dominant annihilation channel is $HH\to VV(V=Z,W)$. The mass region $63~{}{\rm GeV}\lesssim m_{H}\lesssim 71$ GeV with $m_{A}>100$ GeV is now excluded by XENON1T. The third is the narrow coannihilation region $m_{A}-m_{H}\sim 8$ GeV. Since degenerate $m_{A}$ and $m_{H}$ lead to a vanishing same-sign charged Higgs pair production rate at the LHC [Aiko:2019mww], we do not consider the coannihilation region in the following. In Fig. 1(c), the results in the $(m_{A},m_{H^{\pm}})$ plane are also shown. All surviving points satisfy $m_{A}\lesssim m_{H^{\pm}}$, mainly due to the constraints from the $S$ and $T$ parameters in Eq. (16). The mass gap $80~{}{\rm GeV}\lesssim m_{A}<100$ GeV corresponds to the LEP-excluded region in Eq. (18). Because the same-sign dilepton signature is sensitive to the mass splitting $\Delta m=m_{A}-m_{H}$, the corresponding results are also depicted in Fig. 1(d).

No. | $m_{H}$ (GeV) | $m_{A}$ (GeV) | $m_{H^{\pm}}$ (GeV) | $\lambda_{2}$ | $\lambda_{L}$ | $\Omega h^{2}$ | $\sigma$ @14TeV (fb) | $\sigma$ @27TeV (fb)
---|---|---|---|---|---|---|---|---
BP1 | 71.69 | 107.5 | 139.6 | 0.4097 | 0.002203 | 0.1210 | 0.054 | 0.160
BP2 | 59.30 | 119.1 | 136.3 | 0.09806 | -0.0004655 | 0.1213 | 0.154 | 0.451
BP3 | 71.67 | 152.9 | 167.0 | 0.1750 | 0.0001029 | 0.1233 | 0.214 | 0.657
BP4 | 71.76 | 177.0 | 190.9 | 0.3855 | -0.0002066 | 0.1180 | 0.285 | 0.914
BP5 | 62.64 | 180.5 | 189.1 | 0.7473 | -0.002478 | 0.1177 | 0.355 | 1.139
BP6 | 70.82 | 201.1 | 206.8 | 0.8602 | 0.002879 | 0.1233 | 0.373 | 1.232
BP7 | 60.37 | 199.7 | 208.8 | 0.6200 | -0.0002771 | 0.1210 | 0.409 | 1.351
BP8 | 71.63 | 220.8 | 229.1 | 0.5264 | -0.0007215 | 0.1193 | 0.399 | 1.362
BP9 | 61.12 | 223.2 | 230.3 | 0.4692 | -0.0002002 | 0.1227 | 0.454 | 1.553
BP10 | 57.76 | 230.7 | 244.3 | 0.9192 | 0.0009435 | 0.1185 | 0.454 | 1.578
BP11 | 71.44 | 258.6 | 269.0 | 0.6848 | -0.0007471 | 0.1214 | 0.446 | 1.616
BP12 | 71.55 | 272.6 | 277.1 | 0.00294 | -0.001236 | 0.1205 | 0.483 | 1.765
BP13 | 56.40 | 261.4 | 273.1 | 0.5082 | -0.001733 | 0.1191 | 0.495 | 1.799
BP14 | 71.17 | 290.1 | 301.2 | 0.5216 | 0.0006213 | 0.1200 | 0.467 | 1.788
BP15 | 70.72 | 299.9 | 317.8 | 0.7495 | 0.001944 | 0.1235 | 0.451 | 1.755
BP16 | 71.12 | 312.9 | 322.7 | 0.04812 | 0.0002456 | 0.1221 | 0.482 | 1.892
BP17 | 71.39 | 321.4 | 334.9 | 0.7437 | -0.0001886 | 0.1172 | 0.468 | 1.883
BP18 | 71.31 | 329.1 | 350.8 | 0.1182 | -0.0005298 | 0.1204 | 0.441 | 1.813
BP19 | 62.32 | 334.6 | 346.0 | 0.2196 | 0.0001064 | 0.1180 | 0.498 | 2.037
BP20 | 71.14 | 360.8 | 366.8 | 0.1079 | 0.0005207 | 0.1192 | 0.495 | 2.087

Table 1: Benchmark points (BPs) for the same-sign dilepton signature. Here, $\sigma$ denotes the cross section of $pp\to H^{\pm}H^{\pm}jj$ with the preselection cuts of Eq. (24).

Based on the above scan results, we have selected 20 BPs (the red points in Fig. 1) for the following study. Detailed information on these BPs can be found in Table 1. Different from Ref. [Kalinowski:2018ylg], we have selected more BPs with $\Delta m>150$ GeV. BP1 to BP10 could also be probed at the 380 GeV CLIC with 1 ab$^{-1}$ of data, while the remaining ten BPs are within the reach of the 1.5 TeV CLIC with 2.5 ab$^{-1}$ of data [Kalinowski:2018kdn].
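Since the collider study uses the physical masses and $\lambda_{L}$ as inputs, it is sometimes useful to recover the underlying potential parameters. The following is a minimal sketch (assuming $v=246$ GeV) that inverts Eqs. (7)-(10) for BP1 of Table 1 and checks the result by rebuilding $m_{A}$.

```python
import math

# Sketch: recover (mu_2^2, lambda_3, lambda_4, lambda_5) from the physical
# inputs (m_H, m_A, m_H+, lambda_L) of a benchmark point, here BP1 of Table 1.
v = 246.0                                           # electroweak VEV (GeV)
mH, mA, mHc, lamL = 71.69, 107.5, 139.6, 0.002203   # BP1

mu2sq = mH**2 - lamL * v**2              # from m_H^2 = mu_2^2 + lambda_L v^2
lam5 = (mH**2 - mA**2) / v**2            # from Eqs. (8) and (9); lam5 < 0 for DM = H
lam3 = 2 * (mHc**2 - mu2sq) / v**2       # from Eq. (10)
lam4 = 2 * lamL - lam3 - lam5            # from lambda_L = (lam3 + lam4 + lam5)/2

# consistency check: rebuild m_A from Eq. (9)
mA_check = math.sqrt(mu2sq + 0.5 * (lam3 + lam4 - lam5) * v**2)
print(f"mu_2^2 = {mu2sq:.1f} GeV^2, lam3 = {lam3:.3f}, "
      f"lam4 = {lam4:.3f}, lam5 = {lam5:.3f}, m_A check = {mA_check:.1f} GeV")
```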
## III Same-Sign Dilepton Signature

Figure 2: Branching ratios of the charged scalar $H^{\pm}$ for $m_{H^{\pm}}-m_{A}=30$ GeV (left panel) and $m_{H^{\pm}}-m_{A}=15$ GeV (right panel), where $m_{H}$ is fixed to 62 GeV in both cases. The package 2HDMC [Eriksson:2009ws] is used to calculate these branching ratios.

Before discussing the same-sign dilepton signature, we first consider the branching ratios of the charged scalar $H^{\pm}$. There are two possible decay modes of $H^{\pm}$ in the IDM: $H^{\pm}\to W^{\pm}H$ and $H^{\pm}\to W^{\pm}A$. In the special scenario $m_{H}\sim m_{A}<m_{H^{\pm}}$, one would have $\text{BR}(H^{\pm}\to W^{\pm}H)\approx\text{BR}(H^{\pm}\to W^{\pm}A)\approx 0.5$. However, the precise measurement of the $S$ and $T$ parameters requires $m_{H}<m_{A}\lesssim m_{H^{\pm}}$, which leads to a phase-space suppression of the $H^{\pm}\to W^{\pm}A$ mode. In Fig. 2, we illustrate the branching ratios of $H^{\pm}$. For $m_{H^{\pm}}-m_{A}=30(15)$ GeV, we have $\text{BR}(H^{\pm}\to W^{\pm}A)<0.01$, i.e., $\text{BR}(H^{\pm}\to W^{\pm}H)>0.99$, when $m_{H^{\pm}}>135(100)$ GeV. That is to say, $H^{\pm}\to W^{\pm}H$ is always the dominant decay mode (with a branching ratio close to one) for the BPs in Table 1.

Figure 3: Production cross section of the process $pp\to H^{\pm}H^{\pm}jj$ at the $\sqrt{s}=14$ TeV HL-LHC (left panel) and the $\sqrt{s}=27$ TeV HE-LHC (right panel) as a function of $m_{H^{\pm}}$ for $\Delta{m}=$ 100 GeV, 200 GeV, and 300 GeV, respectively. Here, we also fix $m_{H}=62$ GeV. The cross sections of the BPs listed in Table 1 are also shown. Note that the preselection cuts of Eq. (24) are already applied.

An essential feature of the process $pp\to H^{\pm}H^{\pm}jj$ is that its cross section is approximately proportional to the square of the mass splitting $\Delta m$ [Arhrib:2019ywg]. The dependence of the cross section $\sigma(pp\to H^{\pm}H^{\pm}jj)$ on the mass splitting $\Delta m$ is depicted in Fig. 3. In the calculation, MadGraph5_aMC@NLO [Alwall:2014hca] is employed with the parton-level preselection cuts for VBF processes, $\eta_{j_{1}}\times\eta_{j_{2}}<0,~{}|\Delta\eta_{jj}|>2.5.$ (24) From Fig. 3, as $\Delta m$ grows from 100 GeV to 300 GeV, the production cross section increases by about a factor of ten at fixed $m_{H^{\pm}}$. In the actual model, $m_{A}\lesssim m_{H^{\pm}}$ must be satisfied; thus, the results for the BPs of Table 1 are also shown. At the 14 TeV HL-LHC, the cross section usually increases with $m_{H^{\pm}}$ for $m_{H^{\pm}}\lesssim 250$ GeV, while for $m_{H^{\pm}}\gtrsim 250$ GeV it changes little. At the 27 TeV HE-LHC, the cross section keeps increasing with $m_{H^{\pm}}$ and is about three to four times larger than at the 14 TeV HL-LHC.

Now we discuss the same-sign dilepton signature and the corresponding backgrounds at hadron colliders. The full process in the IDM is $pp\to W^{\pm*}W^{\pm*}jj\to H^{\pm}H^{\pm}jj\to(W^{\pm}H)(W^{\pm}H)jj\to(l^{\pm}\nu)H(l^{\pm}\nu)Hjj\to l^{\pm}l^{\pm}\cancel{E}_{T}jj,$ (25) where $j$ denotes a forward, energetic jet from the initial partons, and the leptons include electrons and muons ($l=e,\mu$). In the following, we choose BP10, BP15, and BP20 of Table 1 to show the distributions of certain variables and the corresponding cut flow at colliders.
The main SM backgrounds come from $W^{\pm}W^{\pm}jj$, $WZjj$, $ZZjj$, $VVVjj$, and $t\bar{t}V$. Both the strong and electroweak production of the $VVjj$ processes are taken into account. According to the experimental results of the ATLAS collaboration [Aaboud:2019nmv], there are additional contributions from $V\gamma$, electron charge misreconstruction, and non-prompt leptons, which are sub-dominant and thus not taken into account in this work. After generating the parton-level events for all BPs and the corresponding SM backgrounds with MadGraph5_aMC@NLO [Alwall:2014hca], Pythia8 [Sjostrand:2007gs] is used for parton showering and hadronization. Finally, the detector simulation is performed with Delphes3 [deFavereau:2013fsa]. All signals and backgrounds are simulated at leading order.

After the above simulation, several cuts are applied to highlight the signal; they are categorized into four sets, i.e., cuts-1 to cuts-4. First, cuts-1 aims to select the same-sign dilepton signature, requiring exactly two leptons carrying the same charge in the final state, $N(l^{\pm})=2,P_{T}^{l^{\pm}}>20~{}\text{GeV},|\eta_{l^{\pm}}|<2.5.$ (26) Then, in cuts-2 for the forward jet pair, only events with at least two jets and a $b$-jet veto pass the selection, $N(j)\geq 2,P_{T}^{j}>30~{}\text{GeV},|\eta_{j}|<5,N(b)=0.$ (27) Here, the $b$-jet veto is imposed to suppress the $t\bar{t}V$ background. As shown in Table 2, at this level of cuts the SM background is about three orders of magnitude larger than the signal; therefore, additional cuts are needed to further reduce the background.

To find suitable cut criteria, the normalized distributions of several variables are shown in Fig. 4. The upper-left panel shows the $P_{T}^{l}$ variable, whose distributions are not well separated between signal and background. Instead, we consider the $\Delta P_{T}$ variable, defined as $\Delta P_{T}=(P_{T}^{l_{1}}+P_{T}^{l_{2}})-(P_{T}^{j_{1}}+P_{T}^{j_{2}})$, which is shown in the upper-right panel. For the BP signals, the distributions of $\Delta P_{T}$ tend toward larger values than those of the backgrounds. Based on this feature, we require $\Delta P_{T}>0$; that is, the scalar sum of the transverse momenta of the two leptons must exceed that of the leading and sub-leading jets. In the middle-left panel, we depict the distribution of the $\overline{\Delta\eta}_{jl}$ variable, defined as $\overline{\Delta\eta}_{jl}=\sqrt{\sum_{m=1}^{2}\sum_{n=1}^{2}\frac{(\eta_{jm}-\eta_{ln})^{2}}{4}}.$ (28) Here, $\eta_{jm}$ ($m=1,2$) are the pseudorapidities of the leading and sub-leading jets, and $\eta_{ln}$ ($n=1,2$) are those of the leading and sub-leading leptons. The $\overline{\Delta\eta}_{jl}$ variable characterizes the average pseudorapidity separation between the jets and the leptons; requiring $\overline{\Delta\eta}_{jl}$ larger than three separates signal from background well. Another discriminating variable used by the experimental collaborations is the Zeppenfeld variable $z_{l}^{*}$, defined as [Rainwater:1996ud] $z_{l}^{*}=~{}\left|\eta_{l}-\frac{\eta_{j1}+\eta_{j2}}{2}\right|/|\eta_{j1}-\eta_{j2}|.$ (29) The max$(z_{l}^{*})$ variable is used by the CMS collaboration to define the $W^{\pm}W^{\pm}jj$ and $WZjj$ signal regions [Sirunyan:2020gyx].
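To make these definitions explicit, the sketch below evaluates Eqs. (28) and (29) for one made-up event; the pseudorapidity values are illustrative, and the cut thresholds in the comments follow the text.

```python
import numpy as np

# Sketch: averaged jet-lepton separation of Eq. (28) and Zeppenfeld
# variable of Eq. (29) for one illustrative event.
eta_j = np.array([3.1, -2.4])      # leading / sub-leading jet pseudorapidities
eta_l = np.array([0.3, -0.5])      # leading / sub-leading lepton pseudorapidities

# Eq. (28): root-mean-square of the four jet-lepton eta differences
dbar = np.sqrt(np.mean((eta_j[:, None] - eta_l[None, :])**2))

# Eq. (29): lepton centrality relative to the tagging-jet system
zstar = np.abs(eta_l - eta_j.mean()) / np.abs(eta_j[0] - eta_j[1])

print(f"mean jet-lepton separation = {dbar:.2f}")   # cuts-3 requires > 3
print(f"max z_l* = {zstar.max():.2f}")              # cuts-3 requires < 0.3
```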
Since both $W^{\pm}W^{\pm}jj$ and $WZjj$ are backgrounds in this work, we use the more stringent cut max$(z_{l}^{*})<0.3$, i.e., the largest $z_{l}^{*}$ value must be less than 0.3. In summary, the cuts-3 we adopt are $\Delta P_{T}>0,~{}\overline{\Delta\eta}_{jl}>3,~{}\text{max}(z_{l}^{*})<0.3.$ (30)

Figure 4: Normalized distributions of the $P_{T}^{l}$ (upper-left panel), $\Delta P_{T}$ (upper-right panel), $\overline{\Delta\eta}_{jl}$ (middle-left panel), max$(z_{l}^{*})$ (middle-right panel), $\cancel{E}_{T}$ (lower-left panel), and $M_{T2}$ (lower-right panel) variables for BP10, BP15, BP20 (solid lines) and the corresponding SM backgrounds (dashed lines) at $\sqrt{s}=14$ TeV. The $P_{T}^{l}$, $\Delta P_{T}$, $\overline{\Delta\eta}_{jl}$, max$(z_{l}^{*})$, and $\cancel{E}_{T}$ distributions are drawn after the cuts of Eqs. (24), (26), and (27) are applied, while the $M_{T2}$ distribution is drawn after the cuts of Eqs. (24), (26), (27) and $\cancel{E}_{T}>100$ GeV are applied.

Cross section (fb) | BP10 | BP15 | BP20 | $W^{\pm}W^{\pm}jj$ | $WZjj$ | Others
---|---|---|---|---|---|---
Preselection | $1.88\times 10^{-2}$ | $1.89\times 10^{-2}$ | $2.04\times 10^{-2}$ | $1.35\times 10^{1}$ | $5.50\times 10^{1}$ | $3.05\times 10^{0}$
$N(l^{\pm})=2$, $P_{T}^{l^{\pm}}>20~\text{GeV}$, $|\eta_{l^{\pm}}|<2.5$ | $1.01\times 10^{-2}$ | $1.08\times 10^{-2}$ | $1.19\times 10^{-2}$ | $5.29\times 10^{0}$ | $6.43\times 10^{0}$ | $3.66\times 10^{-1}$
$N(j)\geq 2$, $P_{T}^{j}>30$ GeV, $|\eta_{j}|<5$, $N(b)=0$ | $8.62\times 10^{-3}$ | $9.13\times 10^{-3}$ | $1.21\times 10^{-2}$ | $4.60\times 10^{0}$ | $5.43\times 10^{0}$ | $2.05\times 10^{-1}$
$\Delta P_{T}>0$, $\overline{\Delta{\eta}}_{jl}>3$, max$(z_{l}^{*})<0.3$ | $1.56\times 10^{-3}$ | $2.48\times 10^{-3}$ | $3.06\times 10^{-3}$ | $1.34\times 10^{-1}$ | $2.834\times 10^{-2}$ | $1.12\times 10^{-3}$
$\cancel{E}_{T}>100$ GeV, $M_{T2}>100$ GeV | $3.71\times 10^{-4}$ | $8.41\times 10^{-4}$ | $1.33\times 10^{-3}$ | $7.31\times 10^{-4}$ | $1.10\times 10^{-4}$ | $8.87\times 10^{-5}$
Significance | 0.67 | 1.52 | 2.39 | — | — | —

Table 2: Cut flow for the BP10, BP15, and BP20 signals and the various background processes at $\sqrt{s}=14$ TeV. The $ZZjj$, $VVVjj$, and $t\bar{t}V$ backgrounds are grouped as "Others" since their contributions to the total background are below $10\%$ after applying all cuts. The significance $S/\sqrt{B}$ is calculated assuming an integrated luminosity $\mathcal{L}=3~{}\text{ab}^{-1}$.

The results for both signal and background at the level of cuts-3 are shown in the fourth row of Table 2. At this level, the cross sections of the $ZZjj$, $VVVjj$, and $t\bar{t}V$ backgrounds are smaller than those of the signal; the dominant backgrounds are $W^{\pm}W^{\pm}jj$ and $WZjj$. From Fig. 4, it is also clear that for the $W^{\pm}W^{\pm}jj$ background the $\Delta P_{T}$, $\overline{\Delta\eta}_{jl}$, and $z_{l}^{*}$ distributions are not well separated from the signal. This is because the dominant electroweak component of $W^{\pm}W^{\pm}jj$ has a topological structure similar to the signal. To suppress the $W^{\pm}W^{\pm}jj$ background efficiently, more advanced cuts must be applied.
Apart from the two additional forward jets in the same-sign dilepton signature, the decay chain of the charged scalar, $H^{\pm}\to W^{\pm}H\to l^{\pm}\nu H$, is the same as the decay chain of the chargino, $\tilde{\chi}_{1}^{\pm}\to W^{\pm}\tilde{\chi}_{1}^{0}\to l^{\pm}\nu\tilde{\chi}_{1}^{0}$, which means we can apply cuts similar to those used for the opposite-sign dilepton signature in Ref. Aad:2019vnb . Here, we take the variables $\cancel{E}_{T}$ and $M_{T2}$ into account. The $M_{T2}$ variable is defined as Lester:1999tx ; Barr:2003rg $M_{T2}=\underset{\textbf{q}_{T,1}+\textbf{q}_{T,2}=~{}\cancel{\textbf{E}}_{T}}{\text{min}}\left\{\text{max}\left[M_{T}(\textbf{P}_{T}^{l_{1}},\textbf{q}_{T,1}),M_{T}(\textbf{P}_{T}^{l_{2}},\textbf{q}_{T,2})\right]\right\},$ (31) where $\textbf{P}_{T}^{l_{1}}$ and $\textbf{P}_{T}^{l_{2}}$ are the transverse momentum vectors of the two leptons, and $\textbf{q}_{T,1}$ and $\textbf{q}_{T,2}$ run over all combinations of two transverse momentum vectors that satisfy $\textbf{q}_{T,1}+\textbf{q}_{T,2}=\cancel{\textbf{E}}_{T}$. The $M_{T2}$ variable is calculated by applying the algorithm proposed in Ref. Lester:2014yga ; a brute-force numerical evaluation is sketched below for illustration. Distributions of $\cancel{E}_{T}$ and $M_{T2}$ are also shown in Fig. 4. For the signal process, both the neutrinos $\nu$ and the dark matter $H$ contribute to the missing transverse energy $\cancel{E}_{T}$, which usually leads to a larger $\cancel{E}_{T}$ than for the backgrounds. The $M_{T2}$ variable serves as the most efficient cut, because theoretically it cannot exceed the $W$ boson mass, $m_{W}=80.4$ GeV, for the background processes, while for the signal the theoretical upper limit is $m_{H^{\pm}}$. Therefore, the background $M_{T2}$ distributions fall steeply above $80$ GeV and nearly vanish for $M_{T2}>100$ GeV, whereas a sizable fraction of signal events survives, especially for BP20, which has the largest $m_{H^{\pm}}$. In short, for cuts-4 we require both the missing transverse energy $\cancel{E}_{T}$ and the $M_{T2}$ variable to be greater than $100$ GeV: $\cancel{E}_{T}>100~{}\text{GeV},~{}M_{T2}>100~{}\text{GeV}.$ (32) Results after applying cuts-4 are shown in the fifth row of Table 2. After applying all of the cuts, the contribution of the $W^{\pm}W^{\pm}jj$ process to the total background is greater than $80\%$, the contribution of the $WZjj$ process is greater than $10\%$, and the part labeled as others, the sum of the $t\bar{t}V$, $VVVjj$, and $ZZjj$ processes, contributes less than $10\%$. Among the signals, BP20 has the largest cross section after the full cuts, because it possesses both the largest $\Delta m$ and the largest $m_{H^{\pm}}$: the former leads to a large production cross section, and the latter to the highest efficiency in passing the cut flow. The total background cross section is $9.30\times 10^{-4}$ fb, which is larger than that of BP10 and BP15 but smaller than that of BP20. Although a good signal-to-background ratio is achieved, the signal cross sections after all cuts are rather small. For instance, assuming an integrated luminosity $\mathcal{L}=3~{}\text{ab}^{-1}$, we expect about 1.1 events for BP10, 2.5 events for BP15, and 4.0 events for BP20, with 2.8 events for the total background, corresponding to significances $S/\sqrt{B}$ of 0.67, 1.52, and 2.39, respectively.
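In practice $M_{T2}$ is evaluated with the bisection algorithm of Ref. Lester:2014yga ; purely for illustration, the definition in Eq. (31) can be made concrete with a brute-force grid scan over the splittings $\textbf{q}_{T,1}+\textbf{q}_{T,2}=\cancel{\textbf{E}}_{T}$, assuming massless visible and invisible particles (the grid range and resolution below are arbitrary choices):

```python
import numpy as np

def mt(p, q):
    # transverse mass for massless visible and invisible particles:
    # M_T^2 = 2 (|p_T||q_T| - p_T . q_T)
    return np.sqrt(max(2.0 * (np.hypot(*p) * np.hypot(*q) - p @ q), 0.0))

def mt2_grid(p1, p2, met, n=200, qmax=300.0):
    # scan all splittings q1 + q2 = met on an n x n grid (in GeV)
    best = np.inf
    for q1x in np.linspace(-qmax, qmax, n):
        for q1y in np.linspace(-qmax, qmax, n):
            q1 = np.array([q1x, q1y])
            q2 = met - q1
            best = min(best, max(mt(p1, q1), mt(p2, q2)))
    return best

# toy usage: lepton pT vectors and missing-ET vector in GeV
print(mt2_grid(np.array([60.0, 10.0]),
               np.array([45.0, -20.0]),
               np.array([120.0, 30.0])))
```

This $O(n^{2})$ scan only approximates the true minimum from above and is far too slow for production use, but it makes the min-max structure of Eq. (31) explicit.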
These results indicate that the same-sign dilepton signature is not very promising at the HL-LHC. Before ending the discussion of the 14 TeV simulation, let us briefly summarize the search strategy. The main backgrounds come from the $W^{\pm}W^{\pm}jj$ and $WZjj$ processes, with production cross sections above 10 fb after the preselection cuts. Because certain variables (such as $P_{T}^{l^{\pm}}$ and $z_{l}^{*}$) are distributed similarly for the dominant background and the signal, only the simplest requirements can be imposed in cuts-1 and cuts-2. We then have to apply very tight cuts in cuts-3 and cuts-4, even though this leads to a faint signal and low significance. The above analysis could be improved with more sophisticated selection criteria, such as a boosted decision tree, which is beyond the scope of this work. Instead, we further consider the same-sign dilepton signature at the 27 TeV HE-LHC.

Figure 5: Same as Fig. 4, but at $\sqrt{s}=27$ TeV.

The normalized distributions of the $P_{T}^{l}$, $\Delta P_{T}$, $\overline{\Delta\eta}_{jl}$, max$(z_{l}^{*})$, $\cancel{E}_{T}$ and $M_{T2}$ variables at 27 TeV are shown in Fig. 5 and are similar to those at 14 TeV. Hence, we adopt the same criteria as at 14 TeV for cuts-1 to cuts-3. Meanwhile, considering that the final-state neutrinos $\nu$ and dark matter $H$ are more energetic at 27 TeV than at 14 TeV, we slightly tighten cuts-4 to $\cancel{E}_{T}>110~{}\text{GeV},~{}M_{T2}>125~{}\text{GeV}.$ (33) The cross sections for both signal and backgrounds, together with the cut flow, are listed in Table 3. After applying the full cuts, only two processes have considerable contributions to the total background at $\sqrt{s}=27$ TeV. The main part comes from the $W^{\pm}W^{\pm}jj$ process, with a contribution greater than $85\%$; the rest comes from the $WZjj$ process. The part labeled as others, the sum of the $ZZjj$, $VVVjj$, and $t\bar{t}V$ processes, gives a negligible contribution to the total background due to the more stringent cuts we have used. With a larger production cross section and a higher luminosity, $\mathcal{L}=15~{}\text{ab}^{-1}$, we find that the dilepton signature is promising for some benchmark points at the 27 TeV HE-LHC. Quantitatively, we expect about 8 events for BP10, 31 events for BP15, and 56 events for BP20, with 48 events for the total background, corresponding to significances $S/\sqrt{B}$ of 1.21, 4.54, and 8.08, respectively.
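The event yields and significances quoted above follow directly from the cut-flow cross sections and the integrated luminosity, since $N=\sigma\mathcal{L}$ and $1~\text{ab}^{-1}=1000~\text{fb}^{-1}$. A minimal cross-check, using the post-cut cross sections from Tables 2 and 3:

```python
import math

def yields_and_significance(sigma_s_fb, sigma_b_fb, lumi_ab):
    # N = sigma * L, with 1 ab^-1 = 1000 fb^-1
    lumi_fb = 1000.0 * lumi_ab
    s, b = sigma_s_fb * lumi_fb, sigma_b_fb * lumi_fb
    return s, b, s / math.sqrt(b)

# (signal cross sections in fb after all cuts, total background, L in ab^-1)
configs = {
    "14 TeV": ([3.71e-4, 8.41e-4, 1.33e-3], 7.31e-4 + 1.10e-4 + 8.87e-5, 3.0),
    "27 TeV": ([5.58e-4, 2.09e-3, 3.72e-3], 2.79e-3 + 3.90e-4, 15.0),
}
for energy, (sigmas, sigma_b, lumi) in configs.items():
    for bp, sigma_s in zip(["BP10", "BP15", "BP20"], sigmas):
        s, b, z = yields_and_significance(sigma_s, sigma_b, lumi)
        print(f"{energy} {bp}: S = {s:.1f}, B = {b:.1f}, S/sqrt(B) = {z:.2f}")
```

This reproduces the quoted yields and significances up to rounding of the tabulated cross sections.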
Cross section (fb) | BP10 | BP15 | BP20 | $W^{\pm}W^{\pm}jj$ | $WZjj$ | Others
---|---|---|---|---|---|---
Preselection | $6.57\times 10^{-2}$ | $7.40\times 10^{-2}$ | $8.59\times 10^{-2}$ | $4.61\times 10^{1}$ | $1.95\times 10^{2}$ | $1.38\times 10^{1}$
$N(l^{\pm})=2$, $P_{T}^{l^{\pm}}>20$ GeV, $|\eta_{l^{\pm}}|<2.5$ | $3.19\times 10^{-2}$ | $3.77\times 10^{-2}$ | $4.54\times 10^{-2}$ | $1.47\times 10^{1}$ | $1.95\times 10^{1}$ | $1.32\times 10^{0}$
$N(j)\geq 2$, $P_{T}^{j}>30$ GeV, $|\eta_{j}|<5$, $N(b)=0$ | $2.49\times 10^{-2}$ | $3.06\times 10^{-2}$ | $3.74\times 10^{-2}$ | $1.13\times 10^{1}$ | $1.58\times 10^{1}$ | $8.67\times 10^{-1}$
$\Delta P_{T}>0$, $\overline{\Delta{\eta}}_{jl}>3$, max$(z_{l}^{*})<0.3$ | $6.13\times 10^{-3}$ | $9.74\times 10^{-3}$ | $1.23\times 10^{-2}$ | $4.94\times 10^{-1}$ | $1.21\times 10^{-1}$ | $4.33\times 10^{-3}$
$\cancel{E}_{T}>110$ GeV, $M_{T2}>125$ GeV | $5.58\times 10^{-4}$ | $2.09\times 10^{-3}$ | $3.72\times 10^{-3}$ | $2.79\times 10^{-3}$ | $3.90\times 10^{-4}$ | 0
Significance | 1.21 | 4.54 | 8.08 | — | — | —

Table 3: Cut-flow table for the BP10, BP15, and BP20 signals and the various background processes at $\sqrt{s}=27$ TeV. The $ZZjj$, $VVVjj$ and $t\bar{t}V$ backgrounds are grouped as others, since their contributions to the total background are negligible after applying all cuts. The significance $S/\sqrt{B}$ is calculated assuming an integrated luminosity $\mathcal{L}=15~{}\text{ab}^{-1}$.

Finally, based on the cuts adopted above, we extend our analysis to all twenty benchmark points listed in Table 1. The resulting significances for the BPs at the 14 TeV HL-LHC and the 27 TeV HE-LHC are shown in Fig. 6. The significance increases with $m_{H^{\pm}}$, since a larger $m_{H^{\pm}}$ leads to a higher cut efficiency. At $\sqrt{s}=14$ TeV, limited by the faint signal, the significance only slightly exceeds two even for BP20, which has the largest $m_{H^{\pm}}$. At $\sqrt{s}=27$ TeV, with a larger cross section and higher luminosity, we find that BP16 to BP20 can reach a significance larger than five. That is, the promising region for the same-sign dilepton signature at $\sqrt{s}=27$ TeV is $250~{}\text{GeV}\lesssim m_{H^{\pm}}-m_{H}\lesssim 300$ GeV with dark matter mass $m_{H}\sim 60$ or 71 GeV.

Figure 6: Significance of all twenty BPs at $\sqrt{s}=14$ TeV, $\mathcal{L}=3~{}\text{ab}^{-1}$ (red points) and $\sqrt{s}=27$ TeV, $\mathcal{L}=15~{}\text{ab}^{-1}$ (green points).

## IV Conclusion

The IDM is a 2HDM with an exact $Z_{2}$ symmetry, which leads to a DM candidate. This model gives rise to rich phenomenology, which has been extensively studied. In this paper, we perform a detailed analysis of the same-sign dilepton signature $pp\to W^{\pm*}W^{\pm*}jj\to H^{\pm}H^{\pm}jj\to(l^{\pm}\nu)H(l^{\pm}\nu)Hjj\to l^{\pm}l^{\pm}\cancel{E}_{T}jj$ in the IDM, where $H$ is the DM candidate. According to our simulation, this signature is promising for large mass splitting $\Delta m=m_{A}-m_{H}$, and is complementary to the well-studied opposite-sign dilepton signature. We first perform a random scan over the low-mass region of the IDM with various constraints taken into account. Requiring the relic density to lie within the $3\sigma$ range of the Planck observation $\Omega h^{2}=0.1200\pm 0.0012$, we find three viable regions of parameter space. One is the Higgs resonance region around $m_{H}\lesssim m_{h}/2$. Another is the vector boson annihilation region around $m_{H}\sim 71.5$ GeV.
The third one is the coannihilation region with $m_{A}-m_{H}\sim 8$ GeV and $m_{H}\sim 65$ GeV. Since the coannihilation region gives a vanishing cross section for the same-sign dilepton signature, we select twenty benchmark points from the Higgs resonance and vector boson annihilation regions, which are listed in Table 1. We then simulate the same-sign dilepton signature for the BPs as well as the SM backgrounds at both the $\sqrt{s}=14$ TeV HL-LHC and the $\sqrt{s}=27$ TeV HE-LHC. The dominant background comes from $W^{\pm}W^{\pm}jj$, whose decay topology is similar to that of the signal. The most efficient cut to suppress the background is on the $M_{T2}$ variable. According to our simulation, at $\sqrt{s}=14$ TeV with luminosity $\mathcal{L}=3~\text{ab}^{-1}$, at most about four signal events survive the full cuts. Limited by the number of signal events, the BPs can only achieve a significance slightly larger than two. At $\sqrt{s}=27$ TeV with luminosity $\mathcal{L}=15~\text{ab}^{-1}$, we can probe benchmark points with large mass splitting $\Delta m$; for example, BP16 to BP20 can reach a significance larger than five. In a nutshell, the same-sign dilepton signature is not promising at the $\sqrt{s}=14$ TeV HL-LHC, but is promising at $\sqrt{s}=27$ TeV in the region $250~{}\text{GeV}\lesssim m_{H^{\pm}}-m_{H}\lesssim 300$ GeV with dark matter mass $m_{H}\sim 60$ or 71 GeV.

## Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant No. 11805081, and the Natural Science Foundation of Shandong Province under Grants No. ZR2019QA021 and No. ZR2018MA047.

## References

* (1) G. Aad et al. [ATLAS], Phys. Lett. B 716, 1-29 (2012) [arXiv:1207.7214 [hep-ex]].
* (2) S. Chatrchyan et al. [CMS], Phys. Lett. B 716, 30-61 (2012) [arXiv:1207.7235 [hep-ex]].
* (3) N. Aghanim et al. [Planck], [arXiv:1807.06209 [astro-ph.CO]].
* (4) G. Bertone, D. Hooper and J. Silk, Phys. Rept. 405, 279-390 (2005) [arXiv:hep-ph/0404175 [hep-ph]].
* (5) G. Arcadi, M. Dutra, P. Ghosh, M. Lindner, Y. Mambrini, M. Pierre, S. Profumo and F. S. Queiroz, Eur. Phys. J. C 78, no.3, 203 (2018) [arXiv:1703.07364 [hep-ph]].
* (6) N. G. Deshpande and E. Ma, Phys. Rev. D 18, 2574 (1978).
* (7) R. Barbieri, L. J. Hall and V. S. Rychkov, Phys. Rev. D 74, 015007 (2006) [arXiv:hep-ph/0603188 [hep-ph]].
* (8) L. Lopez Honorez, E. Nezri, J. F. Oliver and M. H. G. Tytgat, JCAP 02, 028 (2007) [arXiv:hep-ph/0612275 [hep-ph]].
* (9) G. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher and J. P. Silva, Phys. Rept. 516, 1-102 (2012) [arXiv:1106.0034 [hep-ph]].
* (10) E. Ma, Phys. Rev. D 73, 077301 (2006) [arXiv:hep-ph/0601225 [hep-ph]].
* (11) Z. L. Han and W. Wang, Eur. Phys. J. C 79, no.6, 522 (2019) [arXiv:1901.07798 [hep-ph]].
* (12) Z. L. Han, R. Ding, S. J. Lin and B. Zhu, Eur. Phys. J. C 79, no.12, 1007 (2019) [arXiv:1908.07192 [hep-ph]].
* (13) W. Wang and Z. L. Han, Phys. Rev. D 101, no.11, 115040 (2020) [arXiv:1911.00819 [hep-ph]].
* (14) M. Gustafsson, E. Lundstrom, L. Bergstrom and J. Edsjo, Phys. Rev. Lett. 99, 041301 (2007) [arXiv:astro-ph/0703512 [astro-ph]].
* (15) Q. H. Cao, E. Ma and G. Rajasekaran, Phys. Rev. D 76, 095011 (2007) [arXiv:0708.2939 [hep-ph]].
* (16) E. Lundstrom, M. Gustafsson and J. Edsjo, Phys. Rev. D 79, 035013 (2009) [arXiv:0810.3924 [hep-ph]].
* (17) E. M. Dolle and S. Su, [arXiv:0906.1609 [hep-ph]].
* (18) L. Lopez Honorez and C. E. Yaguna, JHEP 09, 046 (2010) [arXiv:1003.3125 [hep-ph]].
* (19) L. Lopez Honorez and C. E. Yaguna, JCAP 01, 002 (2011) [arXiv:1011.1411 [hep-ph]].
* (20) D. Borah and J. M. Cline, Phys. Rev. D 86, 055001 (2012) [arXiv:1204.4722 [hep-ph]]. * (21) M. Gustafsson, S. Rydbeck, L. Lopez-Honorez and E. Lundstrom, Phys. Rev. D 86, 075019 (2012) [arXiv:1206.6316 [hep-ph]]. * (22) B. Swiezewska and M. Krawczyk, Phys. Rev. D 88, no.3, 035019 (2013) [arXiv:1212.4100 [hep-ph]]. * (23) P. Osland, A. Pukhov, G. M. Pruna and M. Purmohammadi, JHEP 04, 040 (2013) [arXiv:1302.3713 [hep-ph]]. * (24) A. Goudelis, B. Herrmann and O. Stål, JHEP 09, 106 (2013) [arXiv:1303.3010 [hep-ph]]. * (25) K. P. Modak and D. Majumdar, Astrophys. J. Suppl. 219, no.2, 37 (2015) [arXiv:1502.05682 [hep-ph]]. * (26) N. Blinov, S. Profumo and T. Stefaniak, JCAP 07, 028 (2015) [arXiv:1504.05949 [hep-ph]]. * (27) A. Arhrib, R. Benbrik, J. El Falaki and A. Jueid, JHEP 12, 007 (2015) [arXiv:1507.03630 [hep-ph]]. * (28) A. D. Plascencia, JHEP 09, 026 (2015) [arXiv:1507.04996 [hep-ph]]. * (29) A. Ilnicka, M. Krawczyk and T. Robens, Phys. Rev. D 93, no.5, 055026 (2016) [arXiv:1508.01671 [hep-ph]]. * (30) M. A. Díaz, B. Koch and S. Urrutia-Quiroga, Adv. High Energy Phys. 2016, 8278375 (2016) [arXiv:1511.04429 [hep-ph]]. * (31) S. Kanemura, M. Kikuchi and K. Sakurai, Phys. Rev. D 94, no.11, 115011 (2016) [arXiv:1605.08520 [hep-ph]]. * (32) M. Hashemi and S. Najjari, Eur. Phys. J. C 77, no.9, 592 (2017) [arXiv:1611.07827 [hep-ph]]. * (33) A. Belyaev, G. Cacciapaglia, I. P. Ivanov, F. Rojas-Abatte and M. Thomas, Phys. Rev. D 97, no.3, 035011 (2018) [arXiv:1612.00511 [hep-ph]]. * (34) D. Borah and A. Gupta, Phys. Rev. D 96, no.11, 115012 (2017) [arXiv:1706.05034 [hep-ph]]. * (35) S. Banerjee, F. Boudjema, N. Chakrabarty, G. Chalons and H. Sun, Phys. Rev. D 100, no.9, 095024 (2019) [arXiv:1906.11269 [hep-ph]]. * (36) A. Jueid, J. Kim, S. Lee, S. Y. Shim and J. Song, Phys. Rev. D 102, no.7, 075011 (2020) [arXiv:2006.10263 [hep-ph]]. * (37) H. Abouabid, A. Arhrib, R. Benbrik, J. E. Falaki, B. Gong, W. Xie and Q. S. Yan, [arXiv:2009.03250 [hep-ph]]. * (38) S. Fabian, F. Goertz and Y. Jiang, [arXiv:2012.12847 [hep-ph]]. * (39) J. Kalinowski, T. Robens, D. Sokolowska and A. F. Zarnecki, [arXiv:2012.14818 [hep-ph]]. * (40) S. Banerjee, F. Boudjema, N. Chakrabarty and H. Sun, [arXiv:2101.02165 [hep-ph]]. * (41) S. Banerjee, F. Boudjema, N. Chakrabarty and H. Sun, [arXiv:2101.02166 [hep-ph]]. * (42) S. Banerjee, F. Boudjema, N. Chakrabarty and H. Sun, [arXiv:2101.02167 [hep-ph]]. * (43) S. Banerjee, F. Boudjema, N. Chakrabarty and H. Sun, [arXiv:2101.02170 [hep-ph]]. * (44) A. Arhrib, Y. L. S. Tsai, Q. Yuan and T. C. Yuan, JCAP 06, 030 (2014) [arXiv:1310.0358 [hep-ph]]. * (45) B. Eiteneuer, A. Goudelis and J. Heisig, Eur. Phys. J. C 77, no.9, 624 (2017) [arXiv:1705.01458 [hep-ph]]. * (46) F. S. Queiroz and C. E. Yaguna, JCAP 02, 038 (2016) [arXiv:1511.05967 [hep-ph]]. * (47) C. Garcia-Cely, M. Gustafsson and A. Ibarra, JCAP 02, 043 (2016) [arXiv:1512.02801 [hep-ph]]. * (48) E. Dolle, X. Miao, S. Su and B. Thomas, Phys. Rev. D 81, 035003 (2010) [arXiv:0909.3094 [hep-ph]]. * (49) G. Belanger, B. Dumont, A. Goudelis, B. Herrmann, S. Kraml and D. Sengupta, Phys. Rev. D 91, no.11, 115011 (2015) [arXiv:1503.07367 [hep-ph]]. * (50) X. Miao, S. Su and B. Thomas, Phys. Rev. D 82, 035009 (2010) [arXiv:1005.0090 [hep-ph]]. * (51) A. Datta, N. Ganguly, N. Khan and S. Rakshit, Phys. Rev. D 95, no.1, 015017 (2017) [arXiv:1610.00648 [hep-ph]]. * (52) B. Dutta, G. Palacio, J. D. Ruiz-Alvarez and D. Restrepo, Phys. Rev. D 97, no.5, 055045 (2018) [arXiv:1709.09796 [hep-ph]]. * (53) D. Dercks and T. Robens, Eur. 
Phys. J. C 79, no.11, 924 (2019) [arXiv:1812.07913 [hep-ph]]. * (54) M. Aoki, S. Kanemura and H. Yokoya, Phys. Lett. B 725, 302-309 (2013) [arXiv:1303.6191 [hep-ph]]. * (55) A. Arhrib, R. Benbrik and T. C. Yuan, Eur. Phys. J. C 74, 2892 (2014) [arXiv:1401.6698 [hep-ph]]. * (56) M. Hashemi, M. Krawczyk, S. Najjari and A. F. Zarnecki, JHEP 02, 187 (2016) [arXiv:1512.01175 [hep-ph]]. * (57) A. Belyaev, T. R. Fernandez Perez Tomei, P. G. Mercadante, C. S. Moon, S. Moretti, S. F. Novaes, L. Panizzi, F. Rojas and M. Thomas, Phys. Rev. D 99, no.1, 015011 (2019) [arXiv:1809.00933 [hep-ph]]. * (58) J. Kalinowski, W. Kotlarski, T. Robens, D. Sokolowska and A. F. Zarnecki, JHEP 12, 081 (2018) [arXiv:1809.07712 [hep-ph]]. * (59) J. Kalinowski, W. Kotlarski, T. Robens, D. Sokolowska and A. F. Zarnecki, JHEP 07, 053 (2019) [arXiv:1811.06952 [hep-ph]]. * (60) Y. Guo-He, S. Mao, L. Gang, Z. Yu and G. Jian-You, [arXiv:2006.06216 [hep-ph]]. * (61) M. Aiko, S. Kanemura and K. Mawatari, Phys. Lett. B 797, 134854 (2019) [arXiv:1906.09101 [hep-ph]]. * (62) A. Arhrib, K. Cheung and C. T. Lu, Phys. Rev. D 102, no.9, 095026 (2020) [arXiv:1910.02571 [hep-ph]]. * (63) A. M. Sirunyan et al. [CMS], Phys. Rev. Lett. 120, no.8, 081801 (2018) [arXiv:1709.05822 [hep-ex]]. * (64) M. Aaboud et al. [ATLAS], Phys. Rev. Lett. 123, no.16, 161801 (2019) [arXiv:1906.03203 [hep-ex]]. * (65) A. M. Sirunyan et al. [CMS], Phys. Lett. B 809, 135710 (2020) [arXiv:2005.01173 [hep-ex]]. * (66) A. M. Sirunyan et al. [CMS], Phys. Lett. B 812, 136018 (2021) [arXiv:2009.09429 [hep-ex]]. * (67) I. F. Ginzburg, K. A. Kanishev, M. Krawczyk and D. Sokolowska, Phys. Rev. D 82, 123533 (2010) [arXiv:1009.4593 [hep-ph]]. * (68) A. Arhrib, R. Benbrik and N. Gaur, Phys. Rev. D 85, 095021 (2012) [arXiv:1201.2644 [hep-ph]]. * (69) N. Khan and S. Rakshit, Phys. Rev. D 92, 055006 (2015) [arXiv:1503.03085 [hep-ph]]. * (70) M. Baak et al. [Gfitter Group], Eur. Phys. J. C 74, 3046 (2014) [arXiv:1407.3792 [hep-ph]]. * (71) A. Pierce and J. Thaler, JHEP 08, 026 (2007) [arXiv:hep-ph/0703056 [hep-ph]]. * (72) V. Khachatryan et al. [CMS], JHEP 02, 135 (2017) [arXiv:1610.09218 [hep-ex]]. * (73) G. Aad et al. [ATLAS and CMS], JHEP 08, 045 (2016) [arXiv:1606.02266 [hep-ex]]. * (74) D. Barducci, G. Belanger, J. Bernon, F. Boudjema, J. Da Silva, S. Kraml, U. Laa and A. Pukhov, Comput. Phys. Commun. 222, 327-338 (2018) [arXiv:1606.03834 [hep-ph]]. * (75) E. Aprile et al. [XENON], Phys. Rev. Lett. 121, no.11, 111302 (2018) [arXiv:1805.12562 [astro-ph.CO]]. * (76) D. Eriksson, J. Rathsman and O. Stal, Comput. Phys. Commun. 181, 189-205 (2010) [arXiv:0902.0851 [hep-ph]]. * (77) J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli and M. Zaro, JHEP 07, 079 (2014) [arXiv:1405.0301 [hep-ph]]. * (78) T. Sjostrand, S. Mrenna and P. Z. Skands, Comput. Phys. Commun. 178, 852-867 (2008) [arXiv:0710.3820 [hep-ph]]. * (79) J. de Favereau et al. [DELPHES 3], JHEP 02, 057 (2014) [arXiv:1307.6346 [hep-ex]]. * (80) D. L. Rainwater, R. Szalapski and D. Zeppenfeld, Phys. Rev. D 54, 6680-6689 (1996) [arXiv:hep-ph/9605444 [hep-ph]]. * (81) G. Aad et al. [ATLAS], Eur. Phys. J. C 80, no.2, 123 (2020) [arXiv:1908.08215 [hep-ex]]. * (82) C. G. Lester and D. J. Summers, Phys. Lett. B 463, 99-103 (1999) [arXiv:hep-ph/9906349 [hep-ph]]. * (83) A. Barr, C. Lester and P. Stephens, J. Phys. G 29, 2343-2363 (2003) [arXiv:hep-ph/0304226 [hep-ph]]. * (84) C. G. Lester and B. Nachman, JHEP 03, 100 (2015) [arXiv:1411.4312 [hep-ph]].
# Non-parametric Memory for Spatio-Temporal Segmentation of Construction Zones for Self-Driving

Min Bai1,2 Shenlong Wang1,2 Kelvin Wong1,2 Ersin Yumer1 Raquel Urtasun1,2 1Uber Advanced Technologies Group 2University of Toronto <EMAIL_ADDRESS>

###### Abstract

In this paper, we introduce a non-parametric memory representation for spatio-temporal segmentation that captures the local space and time around an autonomous vehicle (AV). Our representation has three important properties: (i) it remembers what it has seen in the past, (ii) it reinforces and (iii) forgets its past beliefs based on new evidence. Reinforcing is important, as the first time we see an element we might be uncertain, e.g., if the element is heavily occluded or at range. Forgetting is desirable, as otherwise false positives will make the self-driving vehicle behave erratically. Our process is informed by 3D reasoning, as occlusion is key to distinguishing between the desire to forget and to remember. We show how our method can be used as an online component to complement static world representations such as HD maps by detecting and remembering changes that should be superimposed on top of this static view.

Figure 1: Left: The current view from the front camera of an autonomous vehicle (AV). Right: Corresponding memory in our system (purple arrow shows the direction of travel). Note the denser representation behind the AV, where observations from multiple LiDAR scans and camera images from previous time steps have contributed.

## I Introduction

Current approaches to autonomous vehicles (AVs) exploit prior knowledge about the static world by building very detailed maps (HD maps) that include not only roads, buildings, bridges and landmarks, but also traffic lanes, signs, and lights, in centimeter-accurate 3D representations. This is used at runtime for localization, traffic light and signage detection, motion and behavior forecasting, as well as motion planning. Most AV systems assume that the HD map is accurate and up to date. However, this is often not true in the real world due to events such as construction, temporary lane closures, lane repainting, and speed limit changes. Thus, online reasoning about map changes due to construction elements and temporary signs is critical for safe operation.

This is a difficult problem, as these objects are typically small and may be occluded at the current time. Additionally, the vehicle must reliably observe the elements at long distances, as seen in Figure 1, where the elements may appear as only a few pixels in the camera image and very few 3D points in the LiDAR measurement. Any small misalignment in calibration can also result in impactful offsets at range. This is in stark contrast with many methods designed for indoor environments, where RGB-D sensors provide very dense and relatively low-noise information. For typical construction elements and road signs, early observations may be uncertain, as the element may be heavily occluded or at long range, resulting in even less reliable measurements.

We derive our intuition for approaching this problem from the following three insights: (1) As we approach the object, additional confirming information can be used to reinforce the system’s belief in an object’s presence. (2) On the other hand, if new information contradicts previous observations, the system must forget these false positives.
Figure 2 shows typical examples where construction signs/objects should be forgotten, as they are attached to vehicles whose current positions may differ from where they were previously observed. (3) Finally, if there is an obstacle occluding our view of a previously observed element, the lack of new observations should not influence the earlier conjectures. In this case, we should remember the previous state. Therefore, the system must correctly aggregate information from multiple time instances to improve the robustness of detection.

Based on these insights, we introduce a non-parametric memory representation that captures the local world around the AV. Our proposed model exploits the camera to segment the classes of interest (e.g., construction elements, signs) and the LiDAR to localize the elements in 3D space and for occlusion reasoning. At each time step, we use a 2D segmentation model that is trained to segment construction objects and traffic signs in the camera image. This observation is associated with the 3D point cloud provided by the LiDAR sensor to localize the object in 3D for the current time step. Then, we utilize the non-parametric memory to store the up-to-date semantic predictions of the world around the AV. This representation is amenable to both semantic fusion and spatio-temporal fusion, which lets us propagate beliefs from the semantic understanding in camera space into 3D space, as well as from previous time steps. Compared to semantic mapping methods [1, 2, 3] that use a pre-defined update rule to fuse online perceived information with past estimates, we use a learned model to aggregate information that takes into account factors that influence noise and occlusion. This allows us to handle cases with misalignment, noisy perceived semantics, distant small objects, and dynamic changes in a smarter manner.

Figure 2: Typical examples of a construction sign (left) and construction cone (right) that should be forgotten. As the vehicle to which they are attached may move, past 3D detections of these elements should be forgotten.

The memory is a dynamically sized graph containing the likely foreground 3D points and their initial classification probabilities from the image segmentation model, from both the past and the present. The size of the graph changes with new LiDAR sweeps, as well as with the deprecation of past information as appropriate. Furthermore, we compute an occlusion feature based on the present sweep to determine whether objects observed in the past are now occluded. Finally, we combine the above information using a continuous convolution network to segment the available LiDAR points.

We validate our approach on a large dataset captured while driving through construction zones in several cities across North America, consisting of over 4000 snippets with different construction elements such as cones and signs. We show that our approach outperforms the baselines by a significant margin.

## II Related Work

#### Image Segmentation

Recent semantic segmentation approaches successfully exploit deep convolutional neural nets [4, 5]. End-to-end trainable structured prediction techniques model dependencies between the predictions at multiple pixels [6, 7, 8]. We draw inspiration from this to refine our segmentation results by reasoning about spatio-temporally related 3D points.

#### Point Cloud Segmentation

Initial attempts at 3D point cloud segmentation [9, 10] used clustering methods to identify individual objects.
Convolutional networks using a layered 2D representation of the point cloud in bird’s-eye view have also been proposed [11], where the sparsity of the data can be further exploited [12]. Additionally, voxelization of point clouds is popular. Reductions in computational cost by casting voxel segmentation as 2D convolutions [13], as well as improved memory efficiency [14], have been proposed. Some recent works directly learn convolutional networks over raw point clouds for segmentation. The pioneering work [15] aggregates global and point-wise deep features to predict class-wise labels. It was later extended to a hierarchical aggregation framework [16]. An alternative approach is to convert point coordinates to an intermediate space in which convolution operators are possible, such as a permutohedral lattice [17], tangent space [18], or graphs [19]. [20] aggregates information across different spatial axes of a single observation using a recurrent structure. Recently, parametric continuous convolutions [21] were shown to significantly improve over the state of the art by reasoning directly over neighborhood graphs constructed from points with their natural coordinates in the continuous domain. However, these techniques use single observations, while our proposed method uses continuous convolutions and occlusion reasoning to merge accumulated past beliefs in the memory with new observations at each time step.

#### Temporal Reasoning in Segmentation

Due to the sparse nature of LiDAR point clouds, temporal reasoning by accumulating observations over multiple timesteps has been shown to be more robust [22, 23]. Exploiting all three cues (spatial, semantic, and temporal) within a Bayesian framework, Held _et al._ [24] demonstrated further performance gains. Tokmakov _et al._ [25] utilized a recurrent convolutional unit to realize a visual memory in image space, where the end result is used for segmenting objects in a video stream.

#### Sensor Fusion

The majority of the earlier work in sensor fusion focused on early fusion, where depth data is used as an additional channel in the 2D image space [26, 27, 28]. While early fusion appears promising in indoor scenes with high-resolution depth sensors, LiDAR point clouds are very sparse, reducing the efficacy of early fusion frameworks. More recent end-to-end approaches exploit fusion at multiple levels downstream in the network architecture, where the point cloud is rasterized into a bird's-eye view [29, 30, 31]. [32] fused a bird’s-eye view projection of a camera image with the top-down view of the point cloud for lane detection. Qi _et al._ [33] use a late fusion strategy with a two-stage pipeline, where 2D detections in the image space are used as a hard prior in the form of a frustum to further perform binary segmentation of the foreground object in the point cloud. Our method leverages both camera and LiDAR data. However, contrary to the aforementioned approaches, we exploit an external memory, which results in robust modelling of static objects and allows us to explicitly remember existing objects rather than re-discovering them with each new observation.

#### Dynamic Semantic Mapping

Building dynamically changing semantic maps is an active topic in robotics. It is crucial for the robot to complete many tasks in a dynamically evolving environment, such as navigation [1], localization [3] and manipulation [34].
There is a large body of work in semantic 3D reconstruction using RGB-D data for indoor scenes [35, 36, 37]. Cameras and LiDAR have also been exploited to build semantic maps of urban environments [1, 2, 3]. Most of these methods use a pre-defined update rule to fuse online perceived information with past estimates. However, the optimal trade-off between these two sources in a dynamic and noisy scene is difficult to encode. On the other hand, we use a learned model to aggregate information that takes into account factors that influence noise and occlusion. This allows us to handle cases with misalignment, noisy perceived semantics, distant small objects and dynamic changes in a smarter manner.

#### Memory Networks

The majority of approaches utilizing memory rely on recurrent neural networks based on a long short-term memory (LSTM) [25, 38], where the memory is represented as a 1D vector. More recently, [39, 40] have shown techniques using ConvLSTMs to explicitly remember spatial information. Such convolutional approaches to LSTM encoding exploit the spatial relationships in the image space. However, they do not explicitly encode the 3D structure. Explicit external memory with differentiable write and read operations [41] has been exploited as an external data structure in settings where a direct mapping between the memory and physical quantities is missing. In contrast, our memory representation is an explicit external 4D spatio-temporal buffer that both has a direct mapping to the real world in a local coordinate frame and is compatible with continuous convolutions [21] for sparse computation in that space. Our memory is essentially a graph of "points", which is a different memory representation from existing approaches, which rely more on canvases.

Figure 3: Our method takes as input an image captured from the camera, which is passed through a segmentation model. The resulting pixel-wise labels are re-projected onto the corresponding LiDAR point cloud. We generate a depth map for occlusion reasoning over the past observations in memory. The information in the memory and the current single-sweep data are then processed by a continuous convolution model. The memory is subsequently updated with the points and image segmentation result from the current sweep.

## III Learning to Forget/Reinforce/Remember

In this section, we describe the motivation for and implementation of our non-parametric memory-based model, which is able to maintain a record of segmented objects of interest and make appropriate updates as new data becomes available. We demonstrate the effectiveness of our approach on the segmentation of construction elements such as cones and barrels, as well as traffic signs. These small objects are critical to safe autonomous driving. Note that this method can be easily extended to other classes.

### III-A 3D Segmentation from Images and LIDAR

Image segmentation is a well-understood area, where numerous works have achieved very impressive performance. This is enabled by the dense information captured by the camera. Moreover, the rich texture and colors in images allow models to disambiguate between different surfaces that may otherwise have similar physical structure. However, autonomous driving requires knowledge of the surroundings in 3D. LiDAR measurements provide a set of points per sensor sweep that are localized in the 3D world, but the information they provide is sparse.
In our work, we propose to leverage the discerning power of an image segmenter while using 3D LiDAR points accumulated over time as a localizer to segment objects with high accuracy in 3D.

Towards this goal, at each timestep we first obtain a pixel-wise segmentation of the 2D image captured by a front-facing camera mounted on top of the self-driving car. The segmentation model is a deep convolutional neural network using the ResNet-101 [42] backbone. Furthermore, we use a spatial feature pooling scheme based on the PSPNet [4] architecture. In particular, we follow the standard ResNet-101 backbone’s 2048-dimensional output with three large receptive field pooling operations of $5\times 5$, $10\times 10$, and $25\times 25$, followed by a point-wise convolution for the original output and all three pooling outputs to reduce the output dimension to 512 before concatenation. This is processed by an additional 3 ResNet blocks to fuse the multi-scale information. We then use 3 transposed convolution layers, with a ResNet block following the first two, to upsample the feature volume to the input resolution while reducing the feature dimension.

Using the camera calibration matrix, we project the 3D points of the corresponding LiDAR sweep onto the camera image. For the 3D points that fall within the image, we associate the image segmentation output at the nearest pixel with the point. This naive implementation suffers from a number of shortcomings. First, the LiDAR points collected at different distances within the aggregation time window are considered equally, while segmentation results are less noisy at shorter distances. Moreover, points that were previously labelled as classes of interest keep their labels, even when later sweeps show contradictory information. The semantic label given to an earlier point should be removed if at a later time we have a clear line of sight to structures behind the point. This new observation may indicate that the earlier point belongs to a moving object, whose location is now unoccupied at the later time. Alternatively, it may be that the earlier foreground label is a false positive. Unfortunately, the method described here is unable to remove these false positives. Finally, it is observed that the resulting output in the 3D space is noisy, as slight misalignments between the image segment boundaries and the 3D projection cause large mislabeled regions in the 3D world.

Model | Traffic Sign (Val) | Construction (Val) | Mean (Val) | Traffic Sign (Test) | Construction (Test) | Mean (Test)
---|---|---|---|---|---|---
Image Baseline | $42.3\%$ | $52.3\%$ | $47.3\%$ | $42.1\%$ | $42.1\%$ | $42.1\%$
Voxel NN [43] | $37.6\%$ | $42.6\%$ | $40.1\%$ | $37.9\%$ | $52.8\%$ | $45.4\%$
Continuous Convolution - Single Sweep | $56.6\%$ | $63.2\%$ | $59.9\%$ | $52.9\%$ | $61.6\%$ | $57.3\%$
Continuous Convolution with Memory (Ours) | $59.1\%$ | $66.1\%$ | $62.6\%$ | $56.0\%$ | $64.4\%$ | $60.2\%$

TABLE I: Validation and test IoU metrics for our model and the baselines.

### III-B Non-parametric Memory for Spatio-temporal Segmentation

We propose a learned model to address the aforementioned shortcomings. The overall pipeline of our model is depicted in Figure 3. We use a non-parametric memory structure to maintain our current best estimate for the segmentation of aggregated point clouds. The memory stores a length-$N$ list of 3D points, each described by an $\mathbb{R}^{3}$ coordinate.
Additionally, for each point indexed by $i$, we store a vector $M\in[0,1]^{C}$ of $C$ associated class-wise probability estimates, collected at all time steps up to the current time and containing the classification probability output of the image segmenter. Concretely, we define a dynamic graph $G=(P,E)$ where $P$ is the set $\{u\}$ of all historical and current 3D points, and $E$ is the set of edges $\{(u,v)|v\in\text{NN}_{K}(u)\}$, where $\text{NN}_{K}(u)$ is the set of $K$ nearest spatial neighbors of a point $u$. We then use a model based on the continuous convolutions (CC) architecture [21] to process each point. While a traditional convolutional neural network operates on discrete locations in a regular grid, the CC method allows input and queried output locations at arbitrary continuous coordinates. This bypasses the need to balance the high memory requirements of the voxelized representation needed by discrete convolutions against the resolution (hence precision) in the output space.

To appropriately aggregate memory information of estimated classification probabilities, we need to reason about whether or not past observed regions are occluded in the current frame. Therefore, we compute occlusion information from the current frame. This requires a dense depth map constructed for the current time step. While the density of LiDAR points varies greatly with distance, it is constant in polar coordinates, as the pitch (vertical) angular separation of LiDAR beams and the azimuth (horizontal) angular resolution are fixed. We use the vehicle pose at each time step to convert both current and previous 3D points to polar coordinates in the current perspective reference frame. For each point with coordinates $p=[x,y,z]^{T}$, where the $x,y,z$ axes point in the forward, right, and up directions from the vehicle, respectively, we compute the point’s polar coordinates as $r=\sqrt{x^{2}+y^{2}+z^{2}},\phi=\text{tan}^{-1}\frac{y}{x},\theta=\text{sin}^{-1}\frac{z}{r}$ where $r$, $\phi$, and $\theta$ are the range, azimuth, and pitch angles, respectively. The remaining gaps are filled in with nearest neighbor interpolation. Finally, we produce an occlusion score for each of the previous 3D points by computing the difference in depth: $o_{\text{p}}=r_{\text{depth image, p}}-r_{\text{p}}$ where $r_{\text{p}}$ is the distance to the previous 3D point queried and $r_{\text{depth image, p}}$ is the value in the depth image at the same angular coordinates as the query point.

Finally, we concatenate the memory contents, the occlusion score, the intensity of the LiDAR measurement, and the distance to the vehicle at which each point was measured. This results in a $D=C+3$ dimensional feature vector $f$ for each point. To aggregate spatial and temporal information for a point $u_{i}$, we find its $K=50$ nearest neighbor coordinates $\{v_{j}\}$, and look up their features $\{f_{j}\}$. This is given as input to a continuous convolution model that consists of four layers with residual connections. For each layer, let us define $N$ as the number of points, and $F$ and $O$ as the input and output feature dimensions. The output feature vector’s elements are computed as: $h_{k,i}=\sum_{d}^{F}\sum_{j}^{K}g_{d,k}(u_{i}-v_{j})f_{d,j}$ where $g=\text{MLP}(z;\theta):\mathbb{R}^{3}\rightarrow\mathbb{R}$ is a learned multi-layer perceptron with parameters $\theta$ that transforms a spatial offset $z$ into a scalar weight.
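To make this operation concrete, the following is a minimal numpy sketch of a single continuous convolution layer implementing $h_{k,i}=\sum_{d}\sum_{j}g_{d,k}(u_{i}-v_{j})f_{d,j}$ with a two-layer MLP kernel. The hidden size and ReLU activation are assumptions for illustration; the actual model described next additionally uses an efficient factorized variant, residual connections, and batch normalization, and would use a spatial index rather than a brute-force neighbor search:

```python
import numpy as np

def mlp_kernel(offsets, W1, b1, W2, b2):
    # 2-layer MLP g: R^3 -> R^(F*O), evaluated on the relative offsets
    h = np.maximum(offsets @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2                        # per-neighbor kernel weights

def continuous_conv(points, feats, K, W1, b1, W2, b2, F, O):
    # points: (N, 3) coordinates; feats: (N, F) per-point features
    N = points.shape[0]
    # brute-force K nearest neighbors (includes the point itself)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :K]        # (N, K) neighbor indices
    out = np.zeros((N, O))
    for i in range(N):
        offsets = points[i] - points[nn[i]]   # (K, 3): u_i - v_j
        g = mlp_kernel(offsets, W1, b1, W2, b2).reshape(K, F, O)
        # h_{k,i} = sum_d sum_j g_{d,k}(u_i - v_j) f_{d,j}
        out[i] = np.einsum('jdo,jd->o', g, feats[nn[i]])
    return out

# toy usage with random parameters (H = 8 hidden units)
rng = np.random.default_rng(0)
N, F, O, K, H = 100, 6, 16, 50, 8
pts, f = rng.normal(size=(N, 3)), rng.normal(size=(N, F))
out = continuous_conv(pts, f, K,
                      rng.normal(size=(3, H)), np.zeros(H),
                      rng.normal(size=(H, F * O)), np.zeros(F * O), F, O)
```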
For our model, the first CC layer expands the feature dimensionality to $16$, while the remaining layers maintain this dimensionality before producing a $C$-dimensional classification output. Within each layer, an efficient and equivalent variant of the aforementioned CC operation is used. Here, a 2-layer multi-layer perceptron generates kernels based on the relative coordinate offsets between the queried point and its neighbors. This is multiplied by the point-wise features transformed by a learned projection, added to the input for the residual connection, and passed to the next layer. Additionally, we apply batch normalization at the output of each CC layer. The output of this model is the newly updated estimate of the classification probabilities for each 3D point.

## IV Experimental Evaluation

In this section, we evaluate our method and compare it with the baselines as well as various ablations.

#### Dataset - ground truth generation

We select the construction element and traffic sign classes as the focus of our technique due to their importance for autonomous driving and their relative difficulty for traditional approaches arising from their small physical size. Human labelling of point-wise LiDAR point clouds is expensive, as complex and sparse 3D points are difficult to visualize and select. Instead, we employ the same 2D semantic segmentation model and aggregation for automatic ground truth generation. For this, we train the image semantic segmentation model on a dataset of 10k training, 1.3k validation, and 1.3k testing images with pixel-wise semantic segmentation annotations collected in various North American cities and highways. The model is trained on 4 NVIDIA Titan Xp GPUs with ADAM and a learning rate of $10^{-5}$ until convergence at 240k iterations. This method achieves 64.1% and 63.3% pixel-wise intersection over union on the traffic sign and construction element classes in the validation set, respectively.

Our joint LiDAR and camera image dataset consists of $3141$ training, $430$ validation, and $606$ testing sequences of LiDAR sweeps and associated RGB front camera images. Each sequence is defined by a key frame, and contains frames sampled at 0.5 meters of vehicle displacement (or every 0.1 seconds if the vehicle moves farther than 0.5 meters between frames). To enable training of the memory system, the sequence includes frames prior to the key frame sampled over a displacement of 30 meters. The locations of the 3D points are expressed in a globally consistent coordinate system using an online pose estimation system. We extend the data sequence with observations over 100 meters using the aforementioned sampling strategy.

For each frame, the LiDAR points are labeled with the 2D semantic segmentation result. To reduce noise, we select only points within 25 meters of the vehicle. By nature, construction cones and signs are largely isolated objects that are easily separated from their environment in 3D. Thus, we use density-based spatial clustering [44] as a denoising step over the initial point-wise semantic labels. Following this, we produce a training sample / ground truth pair by using only the sequence of frames over the 30 meters preceding the key frame, while removing the distance filter. We use the denoised ground truth results to label this set with a k-nearest neighbor search for points that are measured beyond 25 m. We observe in the second row of Fig. 5 that this method is able to provide us with highly accurate ground truth segmentations.
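The denoising step can be sketched with scikit-learn's DBSCAN implementation. The eps and min_samples values below are illustrative placeholders rather than the paper's settings, and treating label 0 as background is an assumption:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def denoise_labels(points, labels, fg_class, eps=0.5, min_samples=5):
    """Drop foreground labels that do not belong to a dense 3D cluster.

    points: (N, 3) aggregated LiDAR points; labels: (N,) semantic labels.
    """
    fg = labels == fg_class
    if fg.sum() == 0:
        return labels
    cluster_ids = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[fg])
    cleaned = labels.copy()
    noise = np.where(fg)[0][cluster_ids == -1]  # DBSCAN marks noise as -1
    cleaned[noise] = 0                          # relabel isolated points as background
    return cleaned
```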
Figure 4: Test set IoU for construction elements (top) and traffic signs (bottom).

#### Training

We estimate the sample means and variances of the $C+3=6$ point-wise features and use them to standardize the features, which are then used as input to the continuous convolution model. The model parameters are randomly initialized. At the output, we minimize the average of the standard cross-entropy loss at each point. We train our model with a batch size of 1 and the ADAM [45] optimizer with a learning rate of $10^{-3}$ on a single NVIDIA Titan Xp GPU until convergence, as determined by the validation set, at 28k iterations. We then fix the batch normalization statistics and further finetune the model at a learning rate of $10^{-4}$ for an additional 64k iterations until convergence.

#### Metrics

We use the standard point-wise classification intersection-over-union (IoU) for evaluating our method and the comparisons. Moreover, we examine the performance differences between the baselines and our method at different distances from the vehicle. As this model is to be used in the online driving setting, we evaluate only the LiDAR points that fall within the forward camera view at the final timestep. This increases the difficulty, as the elements in this region have fewer accumulated measurements than elements next to and behind the vehicle.

#### Baselines

We analyze the performance of our model against two baselines. The first baseline consists of using the same 2D semantic segmentation model to label the 3D points projected into the image, which we call the _Image Baseline_. Additionally, we implement and train one of the recent fast point cloud segmentation techniques [43] according to the settings described in the paper. We refer to this technique as Voxel NN. This model uses 2D convolutions on a simple occupancy grid representation to produce semantic segmentation predictions for 3D point clouds. Specifically, they discretize a given point cloud into a $W\times H\times Z$ voxel grid, and compute predictions for each voxel using a 2D U-Net model by treating the $Z$-dimension as the feature channel. Per-point predictions are then recovered via nearest neighbour interpolation.

#### Ablation

We also present results for a model where our memory approach is removed from the continuous convolution framework. This model uses a single sweep of the LiDAR point cloud and the corresponding image segmentation. We train this model in a similar fashion as our proposed model, with the exception that a single frame is used as an example instead of an accumulated sequence.

#### Quantitative results

Table I shows the validation and test set IoU metrics for traffic signs and construction elements, and the mean over the two classes, for all models, including the baselines and the single-sweep continuous convolution ablation. We observe that our continuous convolution approach outperforms the other 3D processing methods [43]. Moreover, the use of the non-parametric memory presented in this paper consistently improves the results across the validation and test sets, both for traffic signs and for construction elements. We further dissect the results by slicing them into object bins according to distance from the vehicle, as shown in Figure 4 for the test set. It is evident that the segmentation performance at larger distances is lower, as the image segmentation is noisier while the LiDAR sweeps return fewer points.
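For reference, the point-wise IoU used throughout this evaluation is straightforward to compute per class; a minimal sketch:

```python
import numpy as np

def pointwise_iou(pred, gt, cls):
    # intersection-over-union for one class over all evaluated points
    p, g = pred == cls, gt == cls
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else float('nan')
```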
The distance-binned results also show that our method consistently performs better than the previous work, the image baseline, and the version of our method without the memory. The significant performance gap between our method and the baselines highlights the effectiveness of our model in appropriately aggregating noisy spatio-temporal information, especially at range.

#### Qualitative results

Fig. 5 shows qualitative comparisons on the validation set across the baseline methods, the single LiDAR sweep ablation, and our proposed method. Note that the automatic ground truth generation procedure described above is able to generate accurate ground truth point-wise semantic segmentation for the classes of interest. Additionally, we observe that the single LiDAR sweep continuous convolution method without the non-parametric memory tends to miss small elements that are far away, for example the distant traffic sign on the left in the first column. In comparison with the baseline methods, our model achieves much better performance. The Voxel NN [43] model’s output is far noisier, and often overestimates the extent of the foreground objects. This is likely due to the limited resolution after discretization of the voxel-based model, which is necessary for the model to avoid exceeding memory and processing constraints. The noisy object boundaries can have a significant impact on the autonomous vehicle’s motion planning systems. In contrast, the continuous convolution operation avoids the decrease in resolution by directly operating on the 3D points, and produces very accurate foreground extents. Finally, the baseline method where the image semantic segmentation output is directly projected onto the LiDAR sweeps often results in trails of foreground labels on background points at occlusion boundaries. The construction signs in the first column and the closest cone in the third column show this issue most prominently. Again, the effect on autonomous vehicle operation can be significant, as drivable regions would be labelled as obstacles. In contrast, our method is able to remove these artefacts.

Figure 5: Output comparisons on the validation set. In each column, we show an RGB camera input, the automatically generated ground truth, output from our method, output from the continuous convolution model with a single LiDAR sweep as input, output from [43], and output from the image segmentation baseline. Traffic signs are shown in green, while construction elements are shown in orange. This comparison is best viewed at a higher zoom level.

#### Performance

In our experiments, we used a relatively large and unoptimized image semantic segmentation technique with a runtime of approximately 400 ms on a single GPU. The continuous convolution over the non-parametric memory module generally runs in under 20 ms, with the exact runtime dependent on the number of foreground points in a scene. Therefore, with an optimized semantic segmentation method that runs at 130 ms, our method would run at 150 ms end-to-end.

#### Limitations

One area of potential future improvement is as follows. If there are false negatives in the image segmentation model, the corresponding LiDAR points will not be passed to the continuous convolution model and hence will be absent in the foreground estimation. Additionally, there may be elements of interest outside of the LiDAR return range, which would not be registered. Finally, the proposed method’s reliance on the image segmenter suggests that its performance may be limited in the dark.
## V Conclusion In this paper, we introduced a non-parametric memory representation for spatio-temporal segmentation. We demonstrated that this representation is effective in aggregating information in local neighborhoods of 3D LiDAR points as well as observations over time to increase robustness and reduce noise in the segmentation output. This is made possible by our representation’s three important capabilities: (i) it remembers what it has seen in the past, (ii) it reinforces or (iii) forgets its past beliefs based on new evidence. In the future, we aim to explore more methods for sensor fusion within a multi-task network that learns joint features for image and point cloud. We also plan to extend our method to other semantic classes of interest. ## References * [1] Denis F Wolf and Gaurav S Sukhatme. Semantic mapping using mobile robots. IEEE Transactions on Robotics, 2008. * [2] Abhijit Kundu, Yin Li, Frank Dellaert, Fuxin Li, and James M Rehg. Joint semantic segmentation and 3d reconstruction from monocular video. In ECCV, 2014. * [3] Johannes L Schönberger, Marc Pollefeys, Andreas Geiger, and Torsten Sattler. Semantic visual localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6896–6906, 2018. * [4] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In CVPR, 2017. * [5] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. * [6] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062, 2014. * [7] Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip HS Torr. Conditional random fields as recurrent neural networks. In ICCV, 2015. * [8] Anurag Arnab, Sadeep Jayasumana, Shuai Zheng, and Philip HS Torr. Higher order conditional random fields in deep neural networks. In ECCV, 2016. * [9] Klaas Klasing, Dirk Wollherr, and Martin Buss. A clustering method for efficient segmentation of 3d laser data. In ICRA, 2008. * [10] Bertrand Douillard, James Underwood, Noah Kuntz, Vsevolod Vlaskine, Alastair Quadros, Peter Morton, and Alon Frenkel. On the segmentation of 3d lidar point clouds. In ICRA, 2011. * [11] Bo Li, Tianlei Zhang, and Tian Xia. Vehicle detection from 3d lidar using fully convolutional network. arXiv preprint arXiv:1608.07916, 2016. * [12] Mengye Ren, Andrei Pokrovsky, Bin Yang, and Raquel Urtasun. Sbnet: Sparse blocks network for fast inference. In CVPR, 2018. * [13] Chris Zhang, Wenjie Luo, and Raquel Urtasun. Efficient convolutions for real-time semantic segmentation of 3d point clouds. In 3DV, 2018. * [14] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In CVPR, 2017. * [15] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. * [16] Charles R Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NIPS, 2017. * [17] Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, and Jan Kautz. Splatnet: Sparse lattice networks for point cloud processing. In CVPR, 2018. * [18] Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, and Qian-Yi Zhou. 
Tangent convolutions for dense prediction in 3d. In CVPR, 2018.
* (19) Xiaojuan Qi, Renjie Liao, Jiaya Jia, Sanja Fidler, and Raquel Urtasun. 3d graph neural networks for rgbd semantic segmentation. In CVPR, 2017.
* (20) Qiangui Huang, Weiyue Wang, and Ulrich Neumann. Recurrent slice networks for 3d segmentation of point clouds. In CVPR, pages 2626–2635, 2018.
* (21) Shenlong Wang, Simon Suo, Wei-Chiu Ma, Andrei Pokrovsky, and Raquel Urtasun. Deep parametric continuous convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2589–2597, 2018.
* (22) Anna Petrovskaya and Sebastian Thrun. Model based vehicle detection and tracking for autonomous urban driving. Autonomous Robots, 26(2-3):123–139, 2009.
* (23) Asma Azim and Olivier Aycard. Detection, classification and tracking of moving objects in a 3d environment. In Intelligent Vehicles Symposium (IV), 2012 IEEE, 2012.
* (24) David Held, Devin Guillory, Brice Rebsamen, Sebastian Thrun, and Silvio Savarese. A probabilistic framework for real-time 3d segmentation using spatial, temporal, and semantic cues. In Robotics: Science and Systems, 2016.
* (25) Pavel Tokmakov, Karteek Alahari, and Cordelia Schmid. Learning video object segmentation with visual memory. In CVPR, 2018.
* (26) Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, pages 746–760. Springer, 2012.
* (27) Saurabh Gupta, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Indoor scene understanding with rgb-d images: Bottom-up segmentation, object detection and semantic segmentation. IJCV, 112(2):133–149, 2015.
* (28) David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, 2015.
* (29) Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3d object detection network for autonomous driving. In CVPR, 2017.
* (30) Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven Waslander. Joint 3d proposal generation and object detection from view aggregation. IROS, 2018.
* (31) Ming Liang, Bin Yang, Shenlong Wang, and Raquel Urtasun. Deep continuous fusion for multi-sensor 3d object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 641–656, 2018.
* (32) Min Bai, Mattyus Gellert, Namdar Homayounfar, Shenlong Wang, Kowshika Lakshmikanth, and Raquel Urtasun. Deep multi-sensor lane detection. In IROS, 2018.
* (33) Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. Frustum pointnets for 3d object detection from rgb-d data. In CVPR, 2018.
* (34) Cipriano Galindo, Juan-Antonio Fernández-Madrigal, Javier González, and Alessandro Saffiotti. Robot task planning using semantic maps. Robotics and autonomous systems, 56(11):955–966, 2008.
* (35) Alexander Hermans, Georgios Floros, and Bastian Leibe. Dense 3d semantic mapping of indoor scenes from rgb-d images. In ICRA, pages 2631–2638, 2014.
* (36) Jörg Stückler and Sven Behnke. Multi-resolution surfel maps for efficient dense 3d modeling and tracking. JVCIR, 25(1):137–147, 2014.
* (37) John McCormac, Ankur Handa, Andrew Davison, and Stefan Leutenegger. Semanticfusion: Dense 3d semantic mapping with convolutional neural networks. In ICRA, pages 4628–4635, 2017.
* (38) Marijn F Stollenga, Wonmin Byeon, Marcus Liwicki, and Juergen Schmidhuber. Parallel multi-dimensional lstm, with application to fast biomedical volumetric image segmentation. In NIPS, 2015.
# Non-tensorial Gravitational Wave Background in NANOGrav 12.5-Year Data Set Zu-Cheng Chen<EMAIL_ADDRESS>CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China Chen Yuan <EMAIL_ADDRESS>CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China Qing-Guo Huang Corresponding author: <EMAIL_ADDRESS>CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou 225009, China ###### Abstract We perform the first search for an isotropic non-tensorial gravitational-wave background (GWB) allowed in general metric theories of gravity in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) 12.5-year data set. Modeling the GWB as a power-law spectrum, we find strong Bayesian indication for a spatially correlated process with scalar transverse (ST) correlations, whose Bayes factor versus the spatially uncorrelated common-spectrum process is $107\pm 7$, but no statistically significant evidence for the tensor transverse, vector longitudinal, and scalar longitudinal polarization modes. The median and the $90\%$ equal-tail bounds of the ST-mode amplitude are $\mathcal{A}_{\mathrm{ST}}=1.06^{+0.35}_{-0.28}\times 10^{-15}$, or equivalently the energy density parameter per logarithmic frequency is $\Omega_{\mathrm{GW}}^{\mathrm{ST}}=1.54^{+1.21}_{-0.71}\times 10^{-9}$, at a frequency of 1/year. Introduction. The direct detection of gravitational waves (GWs) from compact binary coalescences Abbott _et al._ (2016a, 2019a, 2020a) has marked the beginning of a new era of GW astronomy and provides a powerful tool to test gravitational physics in the strong-field regime Abbott _et al._ (2019b, 2020b). The current ground-based GW detectors are sensitive to GWs at frequencies of $10\sim 10^{4}$ Hz Abbott _et al._ (2016b). As a complementary tool, stable millisecond pulsars are natural galactic-scale GW detectors that are sensitive in the nanohertz frequency band, opening a new window to explore the Universe. By monitoring the spatially correlated fluctuations induced by GWs on the times of arrival (TOAs) of radio pulses from an array of pulsars Sazhin (1978); Detweiler (1979); Foster and Backer (1990), a pulsar timing array (PTA) seeks to detect very-low-frequency GWs, which might be sourced by the inspiral of supermassive black hole binaries (SMBHBs) Jaffe and Backer (2003); Sesana _et al._ (2008, 2009), first-order phase transitions Witten (1984); Hogan (1986), scalar-induced GWs Saito and Yokoyama (2009); Yuan _et al._ (2019a, b), etc.
The null detection of GWs by PTAs has successfully constrained various astrophysical scenarios, such as cosmic strings Lentati _et al._ (2015); Arzoumanian _et al._ (2018); Yonemaru _et al._ (2020), continuous GWs from individual SMBHBs Zhu _et al._ (2014); Babak _et al._ (2016); Aggarwal _et al._ (2018), GW memory effects Wang _et al._ (2015); Aggarwal _et al._ (2019), primordial black holes Chen _et al._ (2020), and stochastic GW backgrounds (GWBs) with a power-law spectrum Lentati _et al._ (2015); Shannon _et al._ (2015); Arzoumanian _et al._ (2018). However, the direct detection of GWs by PTAs remains a key task in astrophysical experiments, and will hopefully be achieved in the next few years Siemens _et al._ (2013); Taylor _et al._ (2016). Recently, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration has reported strong evidence for a stochastic common-spectrum process, which is significantly preferred over an independent red-noise process in each pulsar Arzoumanian _et al._ (2020). The characteristic strain of this process is described by a power-law model, $h_{c}(f)\propto f^{-2/3}$, corresponding to the GW emission from inspiraling SMBHBs. NANOGrav reported no statistically significant evidence for quadrupolar spatial correlations. Moreover, this process shows moderately negative evidence for monopolar and dipolar correlations, which may come from reference clock and solar system ephemeris (SSE) anomalies, respectively. Lacking definitive evidence for quadrupolar spatial correlations Arzoumanian _et al._ (2020), NANOGrav argued that a detection of a GWB consistent with general relativity (GR) cannot yet be conclusively claimed, and the origin of this process remains controversial. Even though there is no definitive evidence for the tensor transverse (TT) correlations predicted by GR in the NANOGrav 12.5-year data set, this does not exclude the possibility of other GW polarization modes allowed in general metric theories of gravity. In fact, the most general metric theory of gravity allows two vector modes and two scalar modes besides the two tensor modes, and these different modes have distinct correlation patterns Lee _et al._ (2008); Chamberlin and Siemens (2012); Gair _et al._ (2015); Boîtier _et al._ (2020), allowing GW detectors to explore them separately. To figure out whether the signal originates from a GWB or not, it is necessary to fit the data with all possible correlation patterns. In this letter, we perform the first Bayesian search for a stochastic GWB signal modeled by a power-law spectrum with all six polarization modes in the NANOGrav 12.5-year data set. Such a power-law GWB spectrum can be produced by inspiraling SMBHBs, assuming circular orbits whose decay is dominated by GW emission and neglecting higher moments Cornish _et al._ (2018). We find that the Bayes factor in favor of a spatially correlated common-spectrum process with scalar transverse (ST) correlations versus the spatially uncorrelated common-spectrum process (UCP) is $107\pm 7$, indicating strong Bayesian support for the ST correlations in the NANOGrav 12.5-year data set. Detecting GWB Polarizations with a PTA. The radio pulses from pulsars, especially millisecond pulsars, arrive at the Earth at extremely steady rates, and pulsar timing experiments exploit this regularity. The geodesics of the radio waves can be perturbed by GWs, inducing fluctuations in the TOAs of the radio pulses Sazhin (1978); Detweiler (1979).
The presence of a GW will manifest as unexplained residuals in the TOAs after subtracting a deterministic timing model that accounts for the pulsar spin behavior and the geometric effects due to the motion of the pulsar and the Earth Sazhin (1978); Detweiler (1979). By regularly monitoring the TOAs from an array of rotationally ultra-stable millisecond pulsars Foster and Backer (1990) and using the expected form of the cross-correlations of a signal between pulsars in the array, it is feasible to discriminate the GW signal from other systematic effects, such as clock or SSE errors. For any two pulsars ($a$ and $b$) in a PTA, the cross-power spectral density of the timing residuals induced by a GWB at frequency $f$ is Lee _et al._ (2008); Chamberlin and Siemens (2012); Gair _et al._ (2015) $S_{ab}(f)=\sum_{P}\frac{h_{c,P}^{2}}{12\pi^{2}f^{3}}\Gamma^{P}_{ab}(f),$ (1) where $h_{c,P}(f)$ is the characteristic strain and the sum is over all six possible GW polarizations which may be present in a general metric theory of gravity, namely $P=+,\times,x,y,l,b$. Here, “$+$” and “$\times$” denote the two spin-2 transverse traceless polarization modes; “$x$” and “$y$” denote the two spin-1 shear modes; “$l$” denotes the spin-0 longitudinal mode; and “$b$” denotes the spin-0 breathing mode. The overlap function $\Gamma^{P}_{ab}$ for two pulsars is given by Lee _et al._ (2008); Chamberlin and Siemens (2012) $\Gamma^{P}_{ab}(f)=\frac{3}{8\pi}\int d\hat{\Omega}\left(e^{2\pi ifL_{a}(1+\hat{\Omega}\cdot\hat{p}_{a})}-1\right)\left(e^{2\pi ifL_{b}(1+\hat{\Omega}\cdot\hat{p}_{b})}-1\right)F^{P}_{a}(\hat{\Omega})F^{P}_{b}(\hat{\Omega}),$ (2) where $L_{a}$ and $L_{b}$ are the distances from the Earth to pulsars $a$ and $b$, respectively, $\hat{\Omega}$ is the propagation direction of the GW, and $\hat{p}$ is the direction of the pulsar with respect to the Earth. The antenna patterns $F^{P}(\hat{\Omega})$ are given by $F^{P}(\hat{\Omega})=e^{P}_{ij}(\hat{\Omega})\frac{\hat{p}^{i}\hat{p}^{j}}{2(1+\hat{\Omega}\cdot\hat{p})},$ (3) where $e^{P}_{ij}$ is the polarization tensor for polarization mode $P$ Lee _et al._ (2008); Chamberlin and Siemens (2012). Following Cornish _et al._ (2018), we define $\Gamma^{\mathrm{TT}}_{ab}(f)=\Gamma^{+}_{ab}(f)+\Gamma^{\times}_{ab}(f),$ (4) $\Gamma^{\mathrm{ST}}_{ab}(f)=\Gamma^{b}_{ab}(f),$ (5) $\Gamma^{\mathrm{VL}}_{ab}(f)=\Gamma^{x}_{ab}(f)+\Gamma^{y}_{ab}(f),$ (6) $\Gamma^{\mathrm{SL}}_{ab}(f)=\Gamma^{l}_{ab}(f).$ (7) For the $\mathrm{TT}$ and $\mathrm{ST}$ polarization modes, the overlap functions are approximately independent of the pulsar distances and of frequency, and can be calculated analytically as Hellings and Downs (1983); Lee _et al._ (2008) $\Gamma^{\mathrm{TT}}_{ab}=\frac{1}{2}(1+\delta_{ab})+\frac{3}{2}k_{ab}\left(\ln k_{ab}-\frac{1}{6}\right),$ (8) $\Gamma^{\mathrm{ST}}_{ab}=\frac{1}{8}\left(3+4\delta_{ab}+\cos\zeta_{ab}\right),$ (9) where $\delta_{ab}$ is the Kronecker delta symbol, $\zeta_{ab}$ is the angle between pulsars $a$ and $b$, and $k_{ab}\equiv(1-\cos\zeta_{ab})/2$. Note that $\Gamma^{\mathrm{TT}}_{ab}$ is known as the Hellings & Downs (HD) curve Hellings and Downs (1983), or the quadrupolar correlations.
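For illustration only (this code is not part of the analysis pipeline of this letter), the two analytic overlap functions in Eqs. (8) and (9) are straightforward to evaluate; a minimal Python sketch for distinct pulsars ($\delta_{ab}=0$), with function names of our own choosing, is:

```python
import numpy as np

def gamma_tt(zeta):
    # Eq. (8) with delta_ab = 0: the Hellings & Downs (HD) curve,
    # where zeta is the angular separation between the pulsars in radians.
    k = (1.0 - np.cos(zeta)) / 2.0  # k_ab = (1 - cos zeta)/2, positive for zeta > 0
    return 0.5 + 1.5 * k * (np.log(k) - 1.0 / 6.0)

def gamma_st(zeta):
    # Eq. (9) with delta_ab = 0: the scalar-transverse (breathing) overlap.
    return (3.0 + np.cos(zeta)) / 8.0

zeta = np.array([1e-6, np.pi / 3, np.pi / 2, np.pi])
print(gamma_tt(zeta))  # ~0.5 as zeta -> 0, crosses zero near 50 deg, 0.25 at 180 deg
print(gamma_st(zeta))  # 0.5 as zeta -> 0, decreasing monotonically to 0.25 at 180 deg
```

Both curves start at $1/2$ at zero separation and end at $1/4$ at $180^{\circ}$; the HD curve dips below zero at intermediate angles while the ST curve stays positive, which is what makes the two correlation patterns distinguishable.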
However, there exist no analytical expressions for the vector longitudinal ($\mathrm{VL}$) and scalar longitudinal ($\mathrm{SL}$) polarization modes, and we calculate them numerically. PTAs are sensitive to GWs at frequencies of approximately $10^{-9}\sim 10^{-7}$ Hz, and it is expected that the GWB from a population of inspiraling SMBHBs will be the dominant source in this frequency band Jaffe and Backer (2003); Sesana _et al._ (2008, 2009). Assuming the binaries are in circular orbits and the orbital decay is dominated by the GW emission, the cross-power spectral density of Eq. (1) can be approximated by Cornish _et al._ (2018) $S_{ab}(f)=\sum_{I=\mathrm{TT},\mathrm{ST},\mathrm{VL},\mathrm{SL}}\Gamma^{I}_{ab}\frac{\mathcal{A}_{I}^{2}}{12\pi^{2}}\left(\frac{f}{f_{\mathrm{yr}}}\right)^{-\gamma_{I}}f_{\mathrm{yr}}^{-3},$ (10) where $\mathcal{A}_{I}$ is the GWB amplitude of polarization mode $I$, and $f_{\mathrm{yr}}=1/\mathrm{year}$. The power-law index $\gamma_{I}$ for the TT polarization is $\gamma_{\mathrm{TT}}=13/3$, and $\gamma_{\mathrm{ST}}=\gamma_{\mathrm{VL}}=\gamma_{\mathrm{SL}}=5$ for the other polarizations. The dimensionless GW energy density parameter per logarithmic frequency for the polarization mode $I$ is related to $\mathcal{A}_{I}$ by Thrane and Romano (2013) $\Omega_{\mathrm{GW}}^{I}(f)=\frac{2\pi^{2}}{3H_{0}^{2}}f^{2}h_{c,I}^{2}=\frac{2\pi^{2}f_{\mathrm{yr}}^{2}}{3H_{0}^{2}}\mathcal{A}_{I}^{2}\left(\frac{f}{f_{\mathrm{yr}}}\right)^{5-\gamma_{I}},$ (11) where $H_{0}$ is the Hubble constant, and we take $H_{0}=67.4\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$ from Planck 2018 Aghanim _et al._ (2020). PTA data analysis. The NANOGrav collaboration has searched for an isotropic GWB in their 12.5-year timing data set Alam _et al._ (2021) and found strong evidence for a stochastic common-spectrum process, but without statistically significant evidence for the TT spatial correlations Arzoumanian _et al._ (2020). In this letter, we perform the first search for a GWB in the non-tensorial polarization modes in the NANOGrav 12.5-year data set.
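As a quick numerical sanity check of Eqs. (10) and (11) (our own illustration; the constants are standard assumed values), the following Python sketch converts a power-law amplitude into the energy-density parameter at $f=f_{\mathrm{yr}}$ and reproduces, up to rounding, the ST value quoted in the abstract:

```python
import numpy as np

YEAR_S = 365.25 * 86400.0   # Julian year in seconds (assumed convention)
MPC_M = 3.0857e22           # one megaparsec in meters
H0 = 67.4e3 / MPC_M         # Planck 2018 Hubble constant in s^-1
F_YR = 1.0 / YEAR_S         # reference frequency f_yr in Hz

def s_ab(f, A, gamma, overlap):
    # Eq. (10): one polarization mode's contribution to the
    # cross-power spectral density between pulsars a and b.
    return overlap * A**2 / (12.0 * np.pi**2) * (f / F_YR) ** (-gamma) * F_YR ** (-3)

def omega_gw(A, gamma, f=F_YR):
    # Eq. (11): energy density per logarithmic frequency.
    return (2.0 * np.pi**2 * F_YR**2 / (3.0 * H0**2)) * A**2 * (f / F_YR) ** (5.0 - gamma)

# Median ST amplitude with gamma_ST = 5:
print(omega_gw(1.06e-15, 5.0))  # ~1.5e-9, consistent with Omega_GW^ST = 1.54e-9
```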
Table 1: Parameters and their prior distributions used in the analyses.

| parameter | description | prior | comments |
|---|---|---|---|
| White Noise | | | |
| $E_{k}$ | EFAC per backend/receiver system | Uniform $[0,10]$ | single-pulsar analysis only |
| $Q_{k}$ [s] | EQUAD per backend/receiver system | log-Uniform $[-8.5,-5]$ | single-pulsar analysis only |
| $J_{k}$ [s] | ECORR per backend/receiver system | log-Uniform $[-8.5,-5]$ | single-pulsar analysis only |
| Red Noise | | | |
| $A_{\rm{RN}}$ | red-noise power-law amplitude | log-Uniform $[-20,-11]$ | one parameter per pulsar |
| $\gamma_{\rm{RN}}$ | red-noise power-law spectral index | Uniform $[0,7]$ | one parameter per pulsar |
| Uncorrelated Common-spectrum Process (UCP) | | | |
| $\mathcal{A}_{\mathrm{UCP}}$ | UCP power-law amplitude | log-Uniform $[-18,-14]$ | one parameter for PTA |
| $\gamma_{\mathrm{UCP}}$ | UCP power-law spectral index | delta function ($\gamma_{\mathrm{UCP}}=13/3$) | fixed |
| GWB Process | | | |
| $\mathcal{A}_{\mathrm{TT}}$ | GWB amplitude of TT polarization | log-Uniform $[-18,-14]$ | one parameter for PTA |
| $\mathcal{A}_{\mathrm{ST}}$ | GWB amplitude of ST polarization | log-Uniform $[-18,-14]$ | one parameter for PTA |
| $\mathcal{A}_{\mathrm{VL}}$ | GWB amplitude of VL polarization | log-Uniform $[-19,-15]$ | one parameter for PTA |
| $\mathcal{A}_{\mathrm{SL}}$ | GWB amplitude of SL polarization | log-Uniform $[-20,-16]$ | one parameter for PTA |
| BayesEphem | | | |
| $z_{\rm drift}$ [rad/yr] | drift rate of Earth's orbit about ecliptic $z$-axis | Uniform $[-10^{-9},10^{-9}]$ | one parameter for PTA |
| $\Delta M_{\rm jupiter}$ [$M_{\odot}$] | perturbation to Jupiter's mass | $\mathcal{N}(0,1.55\times 10^{-11})$ | one parameter for PTA |
| $\Delta M_{\rm saturn}$ [$M_{\odot}$] | perturbation to Saturn's mass | $\mathcal{N}(0,8.17\times 10^{-12})$ | one parameter for PTA |
| $\Delta M_{\rm uranus}$ [$M_{\odot}$] | perturbation to Uranus' mass | $\mathcal{N}(0,5.72\times 10^{-11})$ | one parameter for PTA |
| $\Delta M_{\rm neptune}$ [$M_{\odot}$] | perturbation to Neptune's mass | $\mathcal{N}(0,7.96\times 10^{-11})$ | one parameter for PTA |
| PCA$_i$ | principal components of Jupiter's orbit | Uniform $[-0.05,0.05]$ | six parameters for PTA |
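To make the priors in Table 1 concrete, here is a small sketch (our own illustrative code, independent of the enterprise-based pipeline used in the analysis) drawing one random sample of the GWB-related parameters; "log-Uniform $[a,b]$" is read as the base-10 logarithm of the parameter being uniform on $[a,b]$:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_uniform(lo, hi):
    # A parameter whose log10 is uniform on [lo, hi], as in Table 1.
    return 10.0 ** rng.uniform(lo, hi)

sample = {
    "A_TT": log_uniform(-18, -14),
    "A_ST": log_uniform(-18, -14),
    "A_VL": log_uniform(-19, -15),
    "A_SL": log_uniform(-20, -16),
    "gamma_UCP": 13.0 / 3.0,                  # delta-function prior (fixed)
    "dM_jupiter": rng.normal(0.0, 1.55e-11),  # BayesEphem mass perturbation
}
print(sample)
```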
Following NANOGrav Arzoumanian _et al._ (2020), in our analyses we use 45 pulsars whose timing baseline is greater than three years. To calculate the longitudinal response functions $\Gamma^{\mathrm{VL}}_{ab}$ and $\Gamma^{\mathrm{SL}}_{ab}$, which depend on the pulsar distances from the Earth, we adopt the distance information from the Australia Telescope National Facility (ATNF) pulsar database (https://www.atnf.csiro.au/research/pulsar/psrcat/) Manchester _et al._ (2005). Due to the uncertainty in the pulsar distance measurements, the estimation uncertainty of the overlap function can be $\lesssim 3\%$ for the VL mode and $\lesssim 20\%$ for the SL mode. The timing residuals of each single pulsar after subtracting the timing model from the TOAs can be decomposed as Arzoumanian _et al._ (2016) $\delta\bm{t}=M\bm{\epsilon}+F\bm{a}+\bm{n}.$ (12) The term $M\bm{\epsilon}$ accounts for inaccuracies in the subtraction of the timing model, where $M$ is the timing-model design matrix and $\bm{\epsilon}$ is a vector of small offsets for the timing-model parameters. The timing-model design matrix is obtained through the libstempo package (https://vallis.github.io/libstempo), a Python interface to the TEMPO2 (https://bitbucket.org/psrsoft/tempo2.git) Hobbs _et al._ (2006); Edwards _et al._ (2006) timing software. The term $F\bm{a}$ describes all low-frequency signals, including both the red noise intrinsic to each pulsar and the red-noise signal common to all pulsars (such as a GWB), where $F$ is the Fourier design matrix, with columns of alternating sine and cosine functions, and $\bm{a}$ is a vector giving the amplitudes of the Fourier basis functions at the frequencies $\\{1/T,2/T,\cdots,{N_{\text{mode}}}/T\\}$, with $T$ the span between the minimum and maximum TOA in the PTA van Haasteren and Vallisneri (2014). Similar to NANOGrav Arzoumanian _et al._ (2020), we use $30$ frequency components (${N_{\text{mode}}}=30$) for the pulsar-intrinsic red noise with a power-law spectrum, while using $5$ frequency components (${N_{\text{mode}}}=5$) for the common-spectrum process to mitigate potential coupling between the higher-frequency components of the common red-noise process and the white noise Arzoumanian _et al._ (2020). The last term $\bm{n}$ describes the timing residuals induced by white noise, including a scale parameter on the TOA uncertainties (EFAC), an added variance (EQUAD), and a per-epoch variance (ECORR) for each backend/receiver system Arzoumanian _et al._ (2016). Similar to NANOGrav Arzoumanian _et al._ (2020), we use the latest JPL SSE, DE438 Folkner and Park (2018), as the fiducial SSE. For verification, we also allow for the BayesEphem Vallisneri _et al._ (2020) corrections to DE438 to model the SSE uncertainties. However, one should bear in mind that introducing BayesEphem would subtract power from a putative GWB process and suppress its evidence Vallisneri _et al._ (2020); Arzoumanian _et al._ (2020); Pol _et al._ (2020). To extract information from the data, we perform Bayesian parameter inference closely following the procedure in Arzoumanian _et al._ (2018, 2020). The parameters of our models and their prior distributions are summarized in Table 1. To reduce the computational cost, in our analyses we fix the white-noise parameters to their maximum-likelihood values from the results released by NANOGrav (https://github.com/nanograv/12p5yr_stochastic_analysis). We use the enterprise Ellis _et al._ (2020) and enterprise_extensions (https://github.com/nanograv/enterprise_extensions) software packages to calculate the likelihoods and Bayes factors, and use the PTMCMCSampler Ellis and van Haasteren (2017) package to carry out the Markov chain Monte Carlo sampling. To reduce the number of samples needed for the chains to burn in, we use draws from empirical distributions to sample the pulsars’ red-noise parameters, as was done in Aggarwal _et al._ (2018); Arzoumanian _et al._ (2020), with the distributions based on the posteriors obtained from an initial Bayesian analysis that includes only the pulsars’ red noise (i.e., excluding any common red-noise process). Our analysis is mainly based on Bayesian inference, in which the Bayes factor $\mathcal{B}_{10}\equiv\rm{Pr}(\mathcal{D}|\mathcal{M}_{1})/\rm{Pr}(\mathcal{D}|\mathcal{M}_{0})$ is used to quantify model selection, where $\rm{Pr}(\mathcal{D}|\mathcal{M})$ denotes the probability that the data $\mathcal{D}$ are produced under the assumption of model $\mathcal{M}$. In Kass and Raftery (1995), $\mathcal{B}_{10}\in[20,150]$ and $\mathcal{B}_{10}>150$ respectively correspond to strong and very strong evidence for $\mathcal{M}_{1}$.
More optimistically, $\mathcal{B}_{10}\in[10,30]$, $\mathcal{B}_{10}\in[30,100]$, and $\mathcal{B}_{10}>100$ correspond to strong, very strong, and extreme evidence for $\mathcal{M}_{1}$ in Lee and Wagenmakers (2014). NANOGrav found strong evidence for a common-spectrum process in the 12.5-year data set and reported the Bayes factors of the UCP model versus the model with only pulsar-intrinsic red noise to be $10^{4.5}$ with DE438 and $10^{2.4}$ with BayesEphem Arzoumanian _et al._ (2020). In this letter, the UCP model with fixed spectral index $\gamma_{\mathrm{UCP}}=13/3$ is taken as the fiducial model $\mathcal{M}_{0}$, and a model $\mathcal{M}_{1}$ with $\mathcal{B}_{10}\gg 1$ is regarded as significantly preferred over the UCP model. We perform analyses on various models by considering the different correlation combinations presented in Eq. (10).

Table 2: The Bayes factors for various models compared to the UCP model with $\gamma=13/3$. The digit in parentheses gives the uncertainty on the last quoted digit.

| ephemeris | TT | ST | VL | SL |
|---|---|---|---|---|
| DE438 | $4.96(9)$ | $107(7)$ | $1.94(3)$ | $0.373(5)$ |
| BayesEphem | $2.35(3)$ | $18.4(7)$ | $1.31(2)$ | $0.555(7)$ |

Results and discussion. Our results are summarized in Table 2, in which we list the Bayes factors for the different models with respect to the UCP model. The Bayes factor of the TT model compared to the UCP model is $4.96\pm 0.09$ with DE438 and $2.35\pm 0.03$ with BayesEphem, indicating no statistically significant evidence for the TT correlations in the data, consistent with the results from NANOGrav Arzoumanian _et al._ (2020). The Bayes factors of the VL and SL models compared to the UCP model are smaller than $3$, implying the VL and SL signals are “not worth more than a bare mention” Kass and Raftery (1995). However, the Bayes factor for the ST model versus the UCP model is $107\pm 7$ with DE438, implying strong evidence for the ST correlations Kass and Raftery (1995); Lee and Wagenmakers (2014), and we obtain the median and $90\%$ equal-tail bounds of the amplitude, $\mathcal{A}_{\mathrm{ST}}=1.06^{+0.35}_{-0.28}\times 10^{-15}$, or equivalently $\Omega_{\mathrm{GW}}^{\mathrm{ST}}=1.54^{+1.21}_{-0.71}\times 10^{-9}$, at a frequency of 1/year. It is known that BayesEphem may absorb a common-spectrum process and weaken the evidence for a GWB process if it exists in the data Vallisneri _et al._ (2020); Arzoumanian _et al._ (2020); Pol _et al._ (2020). Nevertheless, even with BayesEphem, the Bayes factor for the ST model is $18.4\pm 0.7$, which is still statistically significant. See the Bayesian posteriors for the ST amplitude $\mathcal{A}_{\mathrm{ST}}$ obtained in the ST model in Fig. 1. Although NANOGrav reported that the UCP is more consistent with $\gamma=5.5$ Arzoumanian _et al._ (2020), we find that such a large Bayes factor for the ST model versus the UCP model cannot be explained by the ST spectral index $\gamma_{\rm{ST}}=5$ alone, because the Bayes factor for the ST model versus the UCP model with $\gamma=5.5$ is still $96\pm 9$ with DE438. This implies that the preference for the ST model is likely attributable to the cross-correlations. Furthermore, we also consider a model that includes a common-spectrum process and an off-diagonal ST-correlated process in which all auto-correlation terms are set to zero. The Bayesian amplitude posteriors are shown in Fig. 2.
There, the amplitude posterior of the off-diagonal ST-correlated process is significant and comparable to the amplitude posterior of the common-spectrum process, indicating that the large Bayes factor for the ST model should be attributed to the cross-correlations in the NANOGrav 12.5-year data set. Figure 1: Bayesian posteriors for the ST amplitude $\mathcal{A}_{\mathrm{ST}}$ obtained in the ST model under the DE438 and BayesEphem ephemeris schemes, respectively. Figure 2: Bayesian amplitude posteriors in a model (with DE438) that includes a common-spectrum process and an off-diagonal ST-correlated process where all auto-correlation terms are set to zero. Each posterior shown here is marginalized over the other. In addition, we also consider an ST+TT model in which we simultaneously take into account both the ST and TT correlations. The contour plot and the posterior distributions of the ST and TT amplitudes in the ST+TT model are shown in Fig. 3, which implies that the presence of ST correlations is preferred even when using BayesEphem, but there is no significant evidence for additional TT correlations. The amplitude of the ST mode from this model with DE438 is $\mathcal{A}_{\mathrm{ST}}=1.02^{+0.36}_{-0.44}\times 10^{-15}$, or equivalently $\Omega_{\mathrm{GW}}^{\mathrm{ST}}=1.45^{+1.21}_{-0.98}\times 10^{-9}$, at a frequency of 1/year. This result is consistent with the former one from the ST model. Figure 3: One- and two-dimensional marginalized posteriors of the ST and TT amplitudes obtained from the ST+TT model under the DE438 and BayesEphem ephemeris schemes, respectively. We show both the $1\sigma$ and $2\sigma$ contours in the two-dimensional plot. To summarize, we find strong Bayesian indication for the ST correlations but no statistically significant evidence for the TT, VL, and SL correlations in the NANOGrav 12.5-year data set. We hope that future PTA data sets, growing in timespan and number of pulsars, will continue to confirm the results presented in this letter. Acknowledgments. We would like to thank the anonymous referee for useful suggestions and comments. We also acknowledge the use of the HPC Cluster of ITP-CAS and the HPC Cluster of Tianhe II at the National Supercomputing Center in Guangzhou. This work is supported by the National Key Research and Development Program of China Grant No. 2020YFC2201502, grants from NSFC (Grant Nos. 11975019, 11690021, 11991052, and 12047503), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Nos. XDB23000000 and XDA15020701), and the Key Research Program of Frontier Sciences, CAS (Grant No. ZDBS-LY-7009). ## References * Abbott _et al._ (2016a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “Binary Black Hole Mergers in the first Advanced LIGO Observing Run,” Phys. Rev. X 6, 041015 (2016a), [Erratum: Phys. Rev. X 8, 039903 (2018)], arXiv:1606.04856 [gr-qc]. * Abbott _et al._ (2019a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs,” Phys. Rev. X 9, 031040 (2019a), arXiv:1811.12907 [astro-ph.HE]. * Abbott _et al._ (2020a) R. Abbott _et al._ (LIGO Scientific, Virgo), “GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run,” (2020a), arXiv:2010.14527 [gr-qc]. * Abbott _et al._ (2019b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “Tests of General Relativity with the Binary Black Hole Signals from the LIGO-Virgo Catalog GWTC-1,” Phys. Rev.
D 100, 104036 (2019b), arXiv:1903.04467 [gr-qc]. * Abbott _et al._ (2020b) R. Abbott _et al._ (LIGO Scientific, Virgo), “Tests of General Relativity with Binary Black Holes from the second LIGO-Virgo Gravitational-Wave Transient Catalog,” (2020b), arXiv:2010.14529 [gr-qc]. * Abbott _et al._ (2016b) Benjamin P. Abbott _et al._, “Sensitivity of the Advanced LIGO detectors at the beginning of gravitational wave astronomy,” Phys. Rev. D 93, 112004 (2016b), [Addendum: Phys. Rev. D 97, 059901 (2018)], arXiv:1604.00439 [astro-ph.IM]. * Sazhin (1978) M. V. Sazhin, “Opportunities for detecting ultralong gravitational waves,” Soviet Astronomy 22, 36–38 (1978). * Detweiler (1979) Steven L. Detweiler, “Pulsar timing measurements and the search for gravitational waves,” Astrophys. J. 234, 1100–1104 (1979). * Foster and Backer (1990) R. S. Foster and D. C. Backer, “Constructing a pulsar timing array,” Astrophys. J. 361, 300–308 (1990). * Jaffe and Backer (2003) Andrew H. Jaffe and Donald C. Backer, “Gravitational waves probe the coalescence rate of massive black hole binaries,” Astrophys. J. 583, 616–631 (2003), arXiv:astro-ph/0210148. * Sesana _et al._ (2008) Alberto Sesana, Alberto Vecchio, and Carlo Nicola Colacino, “The stochastic gravitational-wave background from massive black hole binary systems: implications for observations with Pulsar Timing Arrays,” Mon. Not. Roy. Astron. Soc. 390, 192 (2008), arXiv:0804.4476 [astro-ph]. * Sesana _et al._ (2009) A. Sesana, A. Vecchio, and M. Volonteri, “Gravitational waves from resolvable massive black hole binary systems and observations with Pulsar Timing Arrays,” Mon. Not. Roy. Astron. Soc. 394, 2255 (2009), arXiv:0809.3412 [astro-ph]. * Witten (1984) Edward Witten, “Cosmic Separation of Phases,” Phys. Rev. D 30, 272–285 (1984). * Hogan (1986) C. J. Hogan, “Gravitational radiation from cosmological phase transitions,” Mon. Not. Roy. Astron. Soc. 218, 629–636 (1986). * Saito and Yokoyama (2009) Ryo Saito and Jun’ichi Yokoyama, “Gravitational wave background as a probe of the primordial black hole abundance,” Phys. Rev. Lett. 102, 161101 (2009), [Erratum: Phys. Rev. Lett. 107, 069901 (2011)], arXiv:0812.4339 [astro-ph]. * Yuan _et al._ (2019a) Chen Yuan, Zu-Cheng Chen, and Qing-Guo Huang, “Probing Primordial-Black-Hole Dark Matter with Scalar Induced Gravitational Waves,” (2019a), arXiv:1906.11549 [astro-ph.CO]. * Yuan _et al._ (2019b) Chen Yuan, Zu-Cheng Chen, and Qing-Guo Huang, “Log-dependent slope of scalar induced gravitational waves in the infrared regions,” (2019b), arXiv:1910.09099 [astro-ph.CO]. * Lentati _et al._ (2015) L. Lentati _et al._, “European Pulsar Timing Array Limits On An Isotropic Stochastic Gravitational-Wave Background,” Mon. Not. Roy. Astron. Soc. 453, 2576–2598 (2015), arXiv:1504.03692 [astro-ph.CO]. * Arzoumanian _et al._ (2018) Z. Arzoumanian _et al._ (NANOGrav), “The NANOGrav 11-year Data Set: Pulsar-timing Constraints On The Stochastic Gravitational-wave Background,” Astrophys. J. 859, 47 (2018), arXiv:1801.02617 [astro-ph.HE]. * Yonemaru _et al._ (2020) N. Yonemaru _et al._, “Searching for gravitational wave bursts from cosmic string cusps with the Parkes Pulsar Timing Array,” (2020), 10.1093/mnras/staa3721, arXiv:2011.13490 [gr-qc]. * Zhu _et al._ (2014) X. J. Zhu _et al._, “An all-sky search for continuous gravitational waves in the Parkes Pulsar Timing Array data set,” Mon. Not. Roy. Astron. Soc. 444, 3709–3720 (2014), arXiv:1408.5129 [astro-ph.GA].
* Babak _et al._ (2016) Stanislav Babak _et al._, “European Pulsar Timing Array Limits on Continuous Gravitational Waves from Individual Supermassive Black Hole Binaries,” Mon. Not. Roy. Astron. Soc. 455, 1665–1679 (2016), arXiv:1509.02165 [astro-ph.CO]. * Aggarwal _et al._ (2018) K. Aggarwal _et al._, “The NANOGrav 11-Year Data Set: Limits on Gravitational Waves from Individual Supermassive Black Hole Binaries,” (2018), arXiv:1812.11585 [astro-ph.GA]. * Wang _et al._ (2015) J. B. Wang _et al._, “Searching for gravitational wave memory bursts with the Parkes Pulsar Timing Array,” Mon. Not. Roy. Astron. Soc. 446, 1657–1671 (2015), arXiv:1410.3323 [astro-ph.GA]. * Aggarwal _et al._ (2019) K. Aggarwal _et al._ (NANOGrav), “The NANOGrav 11-Year Data Set: Limits on Gravitational Wave Memory,” (2019), 10.3847/1538-4357/ab6083, arXiv:1911.08488 [astro-ph.HE]. * Chen _et al._ (2020) Zu-Cheng Chen, Chen Yuan, and Qing-Guo Huang, “Pulsar Timing Array Constraints on Primordial Black Holes with NANOGrav 11-Year Dataset,” Phys. Rev. Lett. 124, 251101 (2020), arXiv:1910.12239 [astro-ph.CO]. * Shannon _et al._ (2015) R. M. Shannon _et al._, “Gravitational waves from binary supermassive black holes missing in pulsar observations,” Science 349, 1522–1525 (2015), arXiv:1509.07320 [astro-ph.CO]. * Siemens _et al._ (2013) Xavier Siemens, Justin Ellis, Fredrick Jenet, and Joseph D. Romano, “The stochastic background: scaling laws and time to detection for pulsar timing arrays,” Class. Quant. Grav. 30, 224015 (2013), arXiv:1305.3196 [astro-ph.IM]. * Taylor _et al._ (2016) S. R. Taylor, M. Vallisneri, J. A. Ellis, C. M. F. Mingarelli, T. J. W. Lazio, and R. van Haasteren, “Are we there yet? Time to detection of nanohertz gravitational waves based on pulsar-timing array limits,” Astrophys. J. Lett. 819, L6 (2016), arXiv:1511.05564 [astro-ph.IM]. * Arzoumanian _et al._ (2020) Zaven Arzoumanian _et al._ (NANOGrav), “The NANOGrav 12.5-year Data Set: Search For An Isotropic Stochastic Gravitational-Wave Background,” (2020), arXiv:2009.04496 [astro-ph.HE]. * Lee _et al._ (2008) K. J. Lee, F. A. Jenet, and Richard H. Price, “Pulsar Timing as a Probe of Non-Einsteinian Polarizations of Gravitational Waves,” Astrophys. J. 685, 1304–1319 (2008). * Chamberlin and Siemens (2012) Sydney J. Chamberlin and Xavier Siemens, “Stochastic backgrounds in alternative theories of gravity: overlap reduction functions for pulsar timing arrays,” Phys. Rev. D 85, 082001 (2012), arXiv:1111.5661 [astro-ph.HE]. * Gair _et al._ (2015) Jonathan R. Gair, Joseph D. Romano, and Stephen R. Taylor, “Mapping gravitational-wave backgrounds of arbitrary polarisation using pulsar timing arrays,” Phys. Rev. D 92, 102003 (2015), arXiv:1506.08668 [gr-qc]. * Boîtier _et al._ (2020) Adrian Boîtier, Shubhanshu Tiwari, Lionel Philippoz, and Philippe Jetzer, “Pulse redshift of pulsar timing array signals for all possible gravitational wave polarizations in modified general relativity,” Phys. Rev. D 102, 064051 (2020), arXiv:2008.13520 [gr-qc]. * Cornish _et al._ (2018) Neil J. Cornish, Logan O’Beirne, Stephen R. Taylor, and Nicolás Yunes, “Constraining alternative theories of gravity using pulsar timing arrays,” Phys. Rev. Lett. 120, 181101 (2018), arXiv:1712.07132 [gr-qc]. * Hellings and Downs (1983) R. W. Hellings and G. S. Downs, “Upper limits on the isotropic gravitational radiation background from pulsar timing analysis,” Astrophys. J. 265, L39–L42 (1983). * Thrane and Romano (2013) Eric Thrane and Joseph D.
Romano, “Sensitivity curves for searches for gravitational-wave backgrounds,” Phys. Rev. D 88, 124032 (2013), arXiv:1310.5300 [astro-ph.IM]. * Aghanim _et al._ (2020) N. Aghanim _et al._ (Planck), “Planck 2018 results. VI. Cosmological parameters,” Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO]. * Alam _et al._ (2021) Md F. Alam _et al._ (NANOGrav), “The NANOGrav 12.5 yr Data Set: Observations and Narrowband Timing of 47 Millisecond Pulsars,” Astrophys. J. Suppl. 252, 4 (2021), arXiv:2005.06490 [astro-ph.HE]. * Manchester _et al._ (2005) R. N. Manchester, G. B. Hobbs, A. Teoh, and M. Hobbs, “The Australia Telescope National Facility pulsar catalogue,” Astron. J. 129, 1993 (2005), arXiv:astro-ph/0412641. * Arzoumanian _et al._ (2016) Z. Arzoumanian _et al._ (NANOGrav), “The NANOGrav Nine-year Data Set: Limits on the Isotropic Stochastic Gravitational Wave Background,” Astrophys. J. 821, 13 (2016), arXiv:1508.03024 [astro-ph.GA]. * Hobbs _et al._ (2006) George Hobbs, R. Edwards, and R. Manchester, “Tempo2, a new pulsar timing package. 1. Overview,” Mon. Not. Roy. Astron. Soc. 369, 655–672 (2006), arXiv:astro-ph/0603381 [astro-ph]. * Edwards _et al._ (2006) Russell T. Edwards, G. B. Hobbs, and R. N. Manchester, “Tempo2, a new pulsar timing package. 2. The timing model and precision estimates,” Mon. Not. Roy. Astron. Soc. 372, 1549–1574 (2006), arXiv:astro-ph/0607664 [astro-ph]. * van Haasteren and Vallisneri (2014) Rutger van Haasteren and Michele Vallisneri, “New advances in the Gaussian-process approach to pulsar-timing data analysis,” Phys. Rev. D 90, 104012 (2014), arXiv:1407.1838 [gr-qc]. * Folkner and Park (2018) William M. Folkner and Ryan S. Park, “Planetary ephemeris DE438 for Juno,” Tech. Rep. IOM392R-18-004, Jet Propulsion Laboratory, Pasadena, CA (2018). * Vallisneri _et al._ (2020) M. Vallisneri _et al._ (NANOGrav), “Modeling the uncertainties of solar-system ephemerides for robust gravitational-wave searches with pulsar timing arrays,” (2020), 10.3847/1538-4357/ab7b67, arXiv:2001.00595 [astro-ph.HE]. * Pol _et al._ (2020) Nihan S. Pol _et al._ (NANOGrav), “Astrophysics Milestones For Pulsar Timing Array Gravitational Wave Detection,” (2020), arXiv:2010.11950 [astro-ph.HE]. * Ellis _et al._ (2020) Justin A. Ellis, Michele Vallisneri, Stephen R. Taylor, and Paul T. Baker, “Enterprise: Enhanced numerical toolbox enabling a robust pulsar inference suite,” Zenodo (2020). * Ellis and van Haasteren (2017) Justin Ellis and Rutger van Haasteren, “jellis18/ptmcmcsampler: Official release,” (2017). * Kass and Raftery (1995) Robert E. Kass and Adrian E. Raftery, “Bayes factors,” Journal of the American Statistical Association 90, 773–795 (1995). * Lee and Wagenmakers (2014) Michael D. Lee and Eric-Jan Wagenmakers, _Bayesian Cognitive Modeling: A Practical Course_ (Cambridge University Press, 2014).
# Symmetric Rigidity for Circle Endomorphisms with Bounded Geometry John Adamski, Yunchun Hu, Yunping Jiang, and Zhe Wang John Adamski: Department of Mathematics, Fordham University<EMAIL_ADDRESS>Yunchun Hu: Department of Mathematics and Computer Science, CUNY Bronx Community College<EMAIL_ADDRESS>Yunping Jiang: Department of Mathematics, CUNY Queens College and Graduate Center<EMAIL_ADDRESS>Zhe Wang: Department of Mathematics and Computer Science, CUNY Bronx Community College <EMAIL_ADDRESS> ###### Abstract. Let $f$ and $g$ be two circle endomorphisms of degree $d\geq 2$ such that each has bounded geometry, preserves the Lebesgue measure, and fixes $1$. Let $h$, fixing $1$, be the topological conjugacy from $f$ to $g$; that is, $h\circ f=g\circ h$. We prove that $h$ is a symmetric circle homeomorphism if and only if $h=Id$. Many other rigidity results in circle dynamics follow from this very general symmetric rigidity result. ###### Key words and phrases: quasisymmetric circle homeomorphism, symmetric circle homeomorphism, circle endomorphism with bounded geometry preserving the Lebesgue measure, uniformly quasisymmetric circle endomorphism preserving the Lebesgue measure, martingales ###### 2010 Mathematics Subject Classification: Primary: 37E10, 37A05; Secondary: 30C62, 60G42 This material is based upon work supported by the National Science Foundation. It is also partially supported by a collaboration grant from the Simons Foundation (grant number 523341) and PSC-CUNY awards. ## 1\. Introduction A remarkable result in geometry is the so-called Mostow rigidity theorem. This result assures that two closed hyperbolic $3$-manifolds are isometrically equivalent if they are homeomorphically equivalent [18]. A closed hyperbolic $3$-manifold can be viewed as the quotient space of a Kleinian group acting on the open unit ball in three-dimensional Euclidean space. So a homeomorphic equivalence between two closed hyperbolic $3$-manifolds can be lifted to a homeomorphism of the open unit ball preserving the group actions. The homeomorphism can be extended to the boundary of the open unit ball as a boundary map. The boundary is the Riemann sphere and the boundary map is a quasi-conformal homeomorphism. A quasi-conformal homeomorphism of the Riemann sphere is absolutely continuous. It follows that the boundary map has no invariant line field, and thus it is a Möbius transformation. For closed hyperbolic Riemann surfaces, the situation is quite complicated. A closed hyperbolic Riemann surface can be viewed as the quotient space of a Fuchsian group acting on the open unit disk in the complex plane. A homeomorphic equivalence between two closed hyperbolic Riemann surfaces can be lifted to a homeomorphism of the open unit disk preserving the group actions. This homeomorphism can be extended to the boundary of the open unit disk as a boundary map. In this case, the boundary is the unit circle and the boundary map is a quasisymmetric homeomorphism. The complication is due to the fact that a quasisymmetric homeomorphism may not be absolutely continuous. Complicated maps like this are a rich source for Teichmüller theory. However, if we assume that the boundary map is absolutely continuous, then by following the main idea in the proof of Mostow’s rigidity theorem, it is a Möbius transformation. Shub and Sullivan further developed the study of conjugacies between smooth expanding circle endomorphisms in [19]. The conjugacy in this case is always quasisymmetric (refer to [8, Chapter 3], and see also [9, 11]).
They proved that if the conjugacy is absolutely continuous, then it is smooth. One of the authors (Jiang) studied the smoothness of conjugacies between one-dimensional maps with singularities. In [12], he first proved that the conjugacy between two generalized Ulam–von Neumann transformations is smooth if their power-law singularities have the same exponents, their asymmetries are the same, and their eigenvalues at all corresponding periodic points are the same. Later, the hypothesis that their asymmetries are the same was removed in [13, 14]. Moreover, in [9, 13, 14], he studied the smoothness of the conjugacy between two geometrically finite one-dimensional maps and proved that such a conjugacy is always quasisymmetric. He also defined a smooth invariant called the scaling function for a geometrically finite one-dimensional map and proved that the scaling function and the exponents of power-law singularities are complete smooth invariants. That is, the conjugacy between two geometrically finite one-dimensional maps is smooth if and only if the maps have the same scaling function and the exponents of the corresponding power-law singularities are the same. More results on the differentiability and symmetry properties of a conjugacy between two one-dimensional maps are given in [17, 6]. Finally, the symmetric regularity of the conjugacy between two one-dimensional maps (with possible singularities) has become an important issue in the study of one-dimensional dynamical systems, in particular in the study of geometric Gibbs theory (see [16, 15, 11]). We would like to note that a symmetric homeomorphism is, in general, not absolutely continuous. The following conjecture is stated in [16, Conjecture 2.4] and [11, Conjecture 10.12]. ###### Conjecture 1. Suppose $f$ and $g$ are two uniformly symmetric circle endomorphisms of the same degree $d\geq 2$ such that $f(1)=g(1)=1$. Suppose both $f$ and $g$ preserve the Lebesgue measure on the circle. Suppose $h$ is the conjugacy from $f$ to $g$ with $h(1)=1$. That is, $h\circ f=g\circ h$. Then $h$ is a symmetric circle homeomorphism if and only if $h$ is the identity. The paper [16, Theorem 2.5] (see also [11, Corollary 10.9]) gives a partial proof of Conjecture 1 and also contains many discussions of symmetric rigidity in the smooth case. In our study of this conjecture, we extended our research to uniformly quasisymmetric circle endomorphisms in [10] and posed a more general conjecture (see [10] and [7, Conjecture 2]) as follows. ###### Conjecture 2. Suppose $f$ and $g$ are two uniformly quasisymmetric circle endomorphisms of the same degree $d\geq 2$ such that $f(1)=g(1)=1$. Suppose both $f$ and $g$ preserve the Lebesgue measure on the circle. Suppose $h$ is the conjugacy from $f$ to $g$ with $h(1)=1$. That is, $h\circ f=g\circ h$. Then $h$ is a symmetric circle homeomorphism if and only if $h$ is the identity. In this paper, we will prove both of these conjectures completely by proving the following more general theorem. ###### Theorem 1 (Main Theorem). Suppose $f$ and $g$ are two circle endomorphisms having bounded geometry of the same degree $d\geq 2$ such that $f(1)=g(1)=1$. Suppose $f$ and $g$ preserve the Lebesgue measure on the unit circle. Let $h$ be the conjugacy from $f$ to $g$ with $h(1)=1$. That is, $h\circ f=g\circ h$. If $h$ is a symmetric homeomorphism, then $h$ is the identity.
Since a uniformly symmetric circle endomorphism is uniformly quasisymmetric, and a uniformly quasisymmetric circle endomorphism has bounded geometry (see [10, 11]), Theorem 1 gives an affirmative answer to both Conjecture 1 and Conjecture 2 (see Corollary 2 and Corollary 3). Our main theorem (Theorem 1) also gives new proofs of the previous symmetric rigidity results (Corollary 4 and Corollary 5) for the smooth case, which were proved in [16] using transfer operators. We organize this paper as follows. In Section 2, we define a circle endomorphism having bounded geometry. In the same section, we review the definition of a uniformly quasisymmetric circle endomorphism and the definition of a uniformly symmetric circle endomorphism. We also review the definition of a $C^{1+Dini}$ expanding circle endomorphism. All of these are examples of circle endomorphisms having bounded geometry. In Section 3, we study symmetric rigidity for circle endomorphisms having bounded geometry and prove our main theorem (Theorem 1). Finally, in the same section we state several corollaries (Corollary 2, Corollary 3, Corollary 4, and Corollary 5) of our main theorem. Acknowledgment: We would like to thank Professor Frederick Gardiner for help and communications during this research. ## 2\. Circle Endomorphisms Having Bounded Geometry Let $T=\\{z\in{\mathbb{C}}\;|\;|z|=1\\}$ be the unit circle in the complex plane ${\mathbb{C}}$. Let $m$ be the Lebesgue probability measure on $T$ (i.e., the Haar measure on $T$). Suppose $f:T\to T$ is an orientation-preserving covering map of degree $d\geq 2$. We call it a circle endomorphism. Suppose $h:T\to T$ is an orientation-preserving homeomorphism. We call it a circle homeomorphism. Every circle endomorphism $f$ has at least one fixed point. By conjugating $f$ by a rotation of the circle if necessary, we assume that $1$ is a fixed point of $f$, that is, $f(1)=1$. The universal cover of $T$ is the real line ${\mathbb{R}}$ with a covering map $\pi(x)=e^{2\pi ix}:{\mathbb{R}}\to T.$ In this way, we can think of the unit interval $[0,1]$ as the unit circle $T$. Then every circle endomorphism $f$ can be lifted to a homeomorphism $F:{\mathbb{R}}\to{\mathbb{R}},\quad F(x+1)=F(x)+d,\quad\forall x\in{\mathbb{R}},$ satisfying $\pi\circ F=f\circ\pi$. We will assume that $F(0)=0$ so that there is a one-to-one correspondence between $f$ and $F$. Therefore, we also call such a map $F$ a circle endomorphism. Similarly, every circle homeomorphism $h$ can be lifted to an orientation-preserving homeomorphism $H:{\mathbb{R}}\to{\mathbb{R}},\quad H(x+1)=H(x)+1,\quad\forall x\in{\mathbb{R}},$ satisfying $\pi\circ H=h\circ\pi$. We will assume that $0\leq H(0)<1$ so that there is a one-to-one correspondence between $h$ and $H$. Therefore, we also call such a map $H$ a circle homeomorphism. Since we only consider circle homeomorphisms as conjugacies of circle endomorphisms in this paper, we assume $h(1)=1$ (equivalently, $H(0)=0$). We use $id$ and $ID$ to denote the identity circle homeomorphism and its lift to $\mathbb{R}$, respectively. That is, $id(z)=z$ and $ID(x)=x$. Let $\mathcal{CE}(d)$ be the space of all circle endomorphisms $f$ (or $F$) of degree $d\geq 2$ fixing $1$ (or $0$), and let $\mathcal{CH}$ be the space of all circle homeomorphisms $h$ (or $H$) fixing $1$ (or $0$). ###### Definition 1.
A circle homeomorphism $h\in\mathcal{CH}$ is called quasisymmetric (refer to [2]) if there exists a constant $M\geq 1$ such that $\frac{1}{M}\leq\frac{H(x+t)-H(x)}{H(x)-H(x-t)}\leq M\quad\forall x\in\mathbb{R},\;\;\forall t>0.$ It is called symmetric (refer to [4]) if there exists a positive bounded function $\epsilon(t)$ such that $\epsilon(t)\to 0$ as $t\to 0^{+}$ and $\frac{1}{1+\epsilon(t)}\leq\frac{H(x+t)-H(x)}{H(x)-H(x-t)}\leq 1+\epsilon(t)\quad\forall x\in\mathbb{R},\;\;\forall t>0.$ ###### Definition 2. A circle endomorphism $f\in\mathcal{CE}(d)$ is called uniformly quasisymmetric (refer to [10, 11]) if there exists a constant $M\geq 1$ such that $\frac{1}{M}\leq\frac{F^{-n}(x+t)-F^{-n}(x)}{F^{-n}(x)-F^{-n}(x-t)}\leq M\quad\forall n\geq 1,\;\;\forall x\in\mathbb{R},\;\;\forall t>0.$ It is called uniformly symmetric (refer to [3, 11]) if there exists a positive bounded function $\epsilon(t)$ such that $\epsilon(t)\to 0$ as $t\to 0^{+}$ and $\frac{1}{1+\epsilon(t)}\leq\frac{F^{-n}(x+t)-F^{-n}(x)}{F^{-n}(x)-F^{-n}(x-t)}\leq 1+\epsilon(t)\quad\forall n\geq 1,\;\;\forall x\in\mathbb{R},\;\;\forall t>0.$ An example of a symmetric (and quasisymmetric) circle homeomorphism is a $C^{1}$ circle diffeomorphism. However, in general, a symmetric (or quasisymmetric) circle homeomorphism may not be differentiable, and may even be totally singular with respect to the Lebesgue measure. If a circle endomorphism $f$ is differentiable and the derivative $F^{\prime}$ is positive, then we can define the modulus of continuity, $\omega(t)=\sup_{|\xi-\eta|\leq t}|\log F^{\prime}(\xi)-\log F^{\prime}(\eta)|.$ We say that $f$ is $C^{1+Dini}$ if $\omega(t)$ satisfies the Dini condition that $\int_{0}^{1}\frac{\omega(t)}{t}\;dt<\infty.$ We say that $f$ is $C^{1+\alpha}$ for some $0<\alpha\leq 1$ if the derivative $F^{\prime}$ is an $\alpha$-Hölder continuous function. It is clear that a $C^{1+\alpha}$ map is a $C^{1+Dini}$ map. We say that $f$ is expanding if there are two constants $C>0$ and $\lambda>1$ such that $|(F^{n})^{\prime}(x)|\geq C\lambda^{n},\quad\forall x\in{\mathbb{R}},\;\;\forall n\geq 1.$ ###### Example 1. Every $C^{1+Dini}$ expanding circle endomorphism of degree $d\geq 2$ is uniformly symmetric. See [11] for a proof of this example. However, in general, a uniformly symmetric (or quasisymmetric) circle endomorphism may not be differentiable, may not be absolutely continuous, and may even be totally singular with respect to the Lebesgue measure. Let $\mathcal{QS}$ be the space of all quasisymmetric circle homeomorphisms in $\mathcal{CH}$. Let $\mathcal{S}$ be the space of all symmetric circle homeomorphisms in $\mathcal{CH}$. Then we have from Definition 1 $\mathcal{S}\subset\mathcal{QS}\subset\mathcal{CH}.$ Let $\mathcal{UQCE}(d)$ be the space of all uniformly quasisymmetric circle endomorphisms in $\mathcal{CE}(d)$. Let $\mathcal{USCE}(d)$ be the space of all uniformly symmetric circle endomorphisms in $\mathcal{CE}(d)$. Let $\mathcal{CED}(d)$ be the space of all $C^{1+Dini}$ expanding circle endomorphisms of degree $d\geq 2$. Then we have from Example 1 and Definition 2 $\mathcal{CED}(d)\subset\mathcal{USCE}(d)\subset\mathcal{UQCE}(d)\subset\mathcal{CE}(d).$ ###### Definition 3. We say $f\in\mathcal{CE}(d)$ preserves the Lebesgue measure $m$ if (1) $m(f^{-1}(A))=m(A)$ holds for all Borel subsets $A\subseteq T$.
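To make Definition 1 concrete, the following small numerical experiment (our own illustration; the test map is an arbitrary choice, not taken from the references) estimates the distortion ratio for the lift $H(x)=x+0.3\sin(2\pi x)/(2\pi)$ of a $C^{1}$ circle diffeomorphism; the maximal deviation of the ratio from $1$ shrinks as $t\to 0^{+}$, exhibiting a function $\epsilon(t)$ as required for symmetry:

```python
import numpy as np

def H(x):
    # Lift of a C^1 circle diffeomorphism: H(x + 1) = H(x) + 1 and
    # H'(x) = 1 + 0.3*cos(2*pi*x) > 0 (an arbitrary smooth test map).
    return x + 0.3 * np.sin(2.0 * np.pi * x) / (2.0 * np.pi)

xs = np.linspace(0.0, 1.0, 1000, endpoint=False)
for t in [0.1, 0.01, 0.001]:
    ratio = (H(xs + t) - H(xs)) / (H(xs) - H(xs - t))
    # Definition 1 requires epsilon(t) >= max |ratio - 1| with epsilon(t) -> 0:
    print(t, np.max(np.abs(ratio - 1.0)))
```

The printed maxima decrease roughly linearly in $t$, so this $H$ is symmetric; any uniform bound on the ratios would likewise witness quasisymmetry.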
Henceforth, in order to avoid confusion, we will consistently use $[0,1]/\\{0\sim 1\\}={\mathbb{R}}\pmod{1}$ to mean the unit circle. Likewise, we will consistently use $f=F\pmod{1}:[0,1]/\\{0\sim 1\\}\to[0,1]/\\{0\sim 1\\}$ to mean a circle endomorphism and $h=H\pmod{1}:[0,1]/\\{0\sim 1\\}\to[0,1]/\\{0\sim 1\\}$ to mean a circle homeomorphism. For any $f\in\mathcal{CE}(d)$, the preimage $f^{-1}(0)$ of the fixed point $0$ partitions $[0,1]$ into $d$ closed and ordered intervals $I_{0}$, $I_{1}$, $\cdots$, $I_{d-1}$ (see Figure 1). Let $\eta_{1}=\\{I_{0},I_{1},\cdots,I_{d-1}\\}.$ Then $\eta_{1}$ is a Markov partition. That is, 1. (i) $[0,1]=\cup_{i=0}^{d-1}I_{i}$; 2. (ii) $I_{i}$ and $I_{j}$ have pairwise disjoint interiors for any $0\leq i<j\leq d-1$; 3. (iii) $f(I_{i})=[0,1]$ for every $0\leq i\leq d-1$; 4. (iv) the restriction of $f$ to the interior of $I_{i}$ is injective for every $0\leq i\leq d-1$. Figure 1. The initial Markov partition. The preimage $f^{-n}(0)$ of the fixed point $0$ partitions $[0,1]$ into $d^{n}$ closed intervals $I_{w_{n}}$ labeled by $w_{n}=i_{0}i_{1}\ldots i_{n-1}\in\Sigma_{n}=\prod_{k=0}^{n-1}\\{0,1,\ldots,d-1\\}$ and defined inductively as $f^{k}(I_{w_{n}})\subset I_{i_{k}},\;\;\forall 0\leq k\leq n-2,\quad\hbox{and}\quad f^{n-1}(I_{w_{n}})=I_{i_{n-1}}.$ Let $\eta_{n}=\\{I_{w_{n}}\;|\;w_{n}=i_{0}i_{1}\ldots i_{n-1}\in\Sigma_{n}\\}.$ Then $\eta_{n}$ is also a Markov partition. That is, 1. (1) $[0,1]=\cup_{w_{n}\in\Sigma_{n}}I_{w_{n}}$; 2. (2) intervals in $\eta_{n}$ have pairwise disjoint interiors; 3. (3) $f^{n}(I_{w_{n}})=[0,1]$ for every $w_{n}\in\Sigma_{n}$; 4. (4) the restriction of $f^{n}$ to the interior of $I_{w_{n}}$ is injective for every $w_{n}\in\Sigma_{n}$. ###### Remark 1. Suppose ${\mathcal{A}}$ and ${\mathcal{B}}$ are two partitions of $T$. The partition $A\vee B=\\{A\cap B\;|\;A\in{\mathcal{A}},B\in{\mathcal{B}}\\}$ is the finer partition formed from ${\mathcal{A}}$ and ${\mathcal{B}}$. Then we have that $\eta_{n}=\vee_{k=1}^{n}f^{-k}\eta_{1}.$ Let $\sigma$ be the left-shift map and let $\sigma^{*}$ be the right-shift map on $\Sigma_{n}$, that is, $\sigma(\omega_{n})=\sigma(i_{0}i_{1}\ldots i_{n-2}i_{n-1})=i_{1}\ldots i_{n-2}i_{n-1}$ and $\sigma^{*}(\omega_{n})=\sigma^{*}(i_{0}i_{1}\ldots i_{n-2}i_{n-1})=i_{0}i_{1}\ldots i_{n-2}.$ Here we assume $w_{0}=\emptyset$ and $I_{w_{0}}=[0,1]$ and $\sigma(w_{1})=w_{0}$ and $\sigma^{*}(w_{1})=w_{0}$. Then we have $I_{w_{n}}=\cup_{k=0}^{d-1}I_{w_{n}k}=\cup_{w_{n+1}\in(\sigma^{*})^{-1}(w_{n})}I_{w_{n+1}}$ and $f^{-1}(I_{w_{n}})=\cup_{k=0}^{d-1}I_{k\omega_{n}}=\cup_{w_{n+1}\in\sigma^{-1}(w_{n})}I_{w_{n+1}}.$ Figure 2. $I_{w_{n}}\subset I_{\sigma^{*}(w_{n})}$ and $f(I_{w_{n}})=I_{\sigma(w_{n})}$. ###### Definition 4. A circle endomorphism $f$ is said to have bounded geometry (refer to [10, 11]) if there is a constant $C>1$ such that (2) $\frac{|I_{\sigma^{*}(\omega_{n})}|}{|I_{\omega_{n}}|}\leq C,\quad\forall\omega_{n}\in\Sigma_{n},\;\;\forall n\geq 1.\quad\hbox{(See Figure 2)}$ Let $\mathcal{BGCE}(d)$ be the space of all $f\in\mathcal{CE}(d)$ having bounded geometry. Then we have (refer to [10, 11]) $\mathcal{CED}(d)\subset\mathcal{USCE}(d)\subset\mathcal{UQCE}(d)\subset\mathcal{BGCE}(d)\subset\mathcal{CE}(d).$ We know that $\mathcal{USCE}(d)$ is not equal to $\mathcal{UQCE}(d)$ (refer to [10]). Also, $\mathcal{UQCE}(d)$ is not equal to $\mathcal{BGCE}(d)$.
For example, for any $\alpha\in(1/2,1)$, the piecewise-linear degree $2$ circle endomorphism $f_{\alpha}(x)=\left\\{\begin{array}[]{ll}f_{\alpha}(x+1)-2&\mbox{if }x<0\\\ x/\alpha&\mbox{if }0\leq x<\alpha\\\ 1+(x-\alpha)/(1-\alpha)&\mbox{if }\alpha\leq x<1\\\ f_{\alpha}(x-1)+2&\mbox{if }1\leq x\end{array}\right.$ has bounded geometry because $\frac{|I_{\sigma^{*}(\omega_{n})}|}{|I_{\omega_{n}}|}\in\\{1/\alpha,1/(1-\alpha)\\},\quad\forall\omega_{n}\in\Sigma_{n},\forall n\geq 1.$ However, $f_{\alpha}$ is not uniformly quasisymmetric since for any $0<t<1-\alpha$, $\frac{f_{\alpha}^{-n}(0+t)-f_{\alpha}^{-n}(0)}{f_{\alpha}^{-n}(0)-f_{\alpha}^{-n}(0-t)}=\left(\frac{\alpha}{1-\alpha}\right)^{n}\to\infty\mbox{ as }n\to\infty.$ ###### Remark 2. The property of uniform quasisymmetry for a circle endomorphism can be equivalently characterized in terms of its sequence of nested partitions $\\{\eta_{n}\\}$ alone by saying that the circle endomorphism has bounded nearby geometry. The precise definition of bounded nearby geometry is given and its equivalence to uniform quasisymmetry is proved in [9, 8]. For more on circle endomorphisms with bounded geometry and/or bounded nearby geometry, see also [10, 11, 5, 1]. For $f\in\mathcal{BGCE}(d)$, let $\tau_{n}=\max\\{|I_{w_{n}}|\;|\;w_{n}\in\Sigma_{n}\\}.$ Then from Definition 4, we have a constant $0<\tau<1$ such that (3) $\tau_{n}\leq\tau^{n},\quad\forall n\geq 1.$ It follows that any two maps $f,g\in\mathcal{BGCE}(d)$ are topologically conjugate. That is, there is an $h\in\mathcal{CH}$ such that (4) $h\circ f=g\circ h.$ Here $h$ is called the conjugacy from $f$ to $g$, and when $h\in\mathcal{S}$ we call it a symmetric conjugacy. In the special case that both $f$ and $g$ are in $\mathcal{UQCE}(d)$, we further know that the conjugacy $h\in\mathcal{QS}$. However, as long as at least one of the maps is not in $\mathcal{UQCE}(d)$, the conjugacy $h$ may not be quasisymmetric. Refer to [10, 11, 7, 5, 1]. ## 3\. Symmetric Rigidity, the Proof of the Main Result We start with the following lemma. ###### Lemma 1. Suppose $f\in\mathcal{BGCE}(d)$. Let $\\{\eta_{n}\\}_{n=1}^{\infty}$ be the corresponding sequence of partitions. Suppose $I_{w_{n}}\in\eta_{n}$ is a fixed partition interval for some $n\geq 1$. Then $\lim_{k\to\infty}\sum_{w_{n}^{1}\not=w_{n}}\cdots\sum_{w_{n}^{k}\not=w_{n}}|I_{w_{n}^{1}\cdots w_{n}^{k}}|=0,$ where the $\omega_{n}^{i}$ range over all words of length $n$. ###### Proof. From the definition of bounded geometry (Definition 4), we have that $|I_{w_{n}}|\geq A=\frac{1}{C^{n}}.$ Since $\bigcup_{w_{n}^{1}\not=w_{n}}I_{w_{n}^{1}}=\overline{[0,1]\setminus I_{w_{n}}},$ we get $\sum_{w_{n}^{1}\not=w_{n}}|I_{w_{n}^{1}}|=1-|I_{w_{n}}|\leq 1-A.$ For any $w_{n}^{1}$, we have that $I_{w_{n}^{1}w_{n}}\subset I_{w_{n}^{1}}$. Because of bounded geometry, we further have $|I_{w_{n}^{1}w_{n}}|\geq A|I_{w_{n}^{1}}|.$ Since $\bigcup_{w_{n}^{2}\not=w_{n}}I_{w_{n}^{1}w_{n}^{2}}=\overline{I_{w_{n}^{1}}\setminus I_{w_{n}^{1}w_{n}}},$ we have $\sum_{w_{n}^{2}\not=w_{n}}|I_{w_{n}^{1}w_{n}^{2}}|=|I_{w_{n}^{1}}|-|I_{w_{n}^{1}w_{n}}|\leq|I_{\omega_{n}^{1}}|-A|I_{\omega_{n}^{1}}|=(1-A)|I_{w_{n}^{1}}|.$ This implies that $\sum_{w_{n}^{1}\not=w_{n}}\sum_{w_{n}^{2}\not=w_{n}}|I_{w_{n}^{1}w_{n}^{2}}|\leq(1-A)\sum_{w_{n}^{1}\not=w_{n}}|I_{w_{n}^{1}}|\leq(1-A)^{2}.$ Inductively, suppose we know $\sum_{w_{n}^{1}\not=w_{n}}\cdots\sum_{w_{n}^{k-1}\not=w_{n}}|I_{w_{n}^{1}\ldots w_{n}^{k-1}}|\leq(1-A)^{k-1}$ for $k\geq 3$.
Then $\sum_{w_{n}^{1}\not=w_{n}}\cdots\sum_{w_{n}^{k}\not=w_{n}}|I_{w_{n}^{1}\ldots w_{n}^{k}}|=\sum_{w_{n}^{1}\not=w_{n}}\cdots\sum_{w_{n}^{k-1}\not=w_{n}}\Big{(}|I_{w_{n}^{1}\cdots w_{n}^{k-1}}|-|I_{w_{n}^{1}\cdots w_{n}^{k-1}w_{n}}|\Big{)}$ $=\sum_{w_{n}^{1}\not=w_{n}}\cdots\sum_{w_{n}^{k-1}\not=w_{n}}|I_{w_{n}^{1}\cdots w_{n}^{k-1}}|\Big{(}1-\frac{|I_{w_{n}^{1}\cdots w_{n}^{k-1}w_{n}}|}{|I_{w_{n}^{1}\cdots w_{n}^{k-1}}|}\Big{)}.$ Notice that $I_{w_{n}^{1}\cdots w_{n}^{k-1}w_{n}}\subset I_{w_{n}^{1}\cdots w_{n}^{k-1}}$. The definition of bounded geometry implies that $\frac{|I_{w_{n}^{1}\cdots w_{n}^{k-1}w_{n}}|}{|I_{w_{n}^{1}\cdots w_{n}^{k-1}}|}\geq A.$ This implies that $1-\frac{|I_{w_{n}^{1}\cdots w_{n}^{k-1}w_{n}}|}{|I_{w_{n}^{1}\cdots w_{n}^{k-1}}|}\leq 1-A.$ Thus $\sum_{w_{n}^{1}\not=w_{n}}\cdots\sum_{w_{n}^{k}\not=w_{n}}|I_{w_{n}^{1}\ldots w_{n}^{k}}|\leq(1-A)\sum_{w_{n}^{1}\not=w_{n}}\cdots\sum_{w_{n}^{k-1}\not=w_{n}}|I_{w_{n}^{1}\cdots w_{n}^{k-1}}|\leq(1-A)^{k}.$ Letting $k\to\infty$, this proves the lemma. ∎ Given a partition interval $I_{w_{n}}\in\eta_{n}$ for some $n\geq 1$, define $C(I_{w_{n}})=\\{x\in[0,1]\;|\;f^{kn}(x)\not\in I_{w_{n}},k=0,1,2,\cdots\\}=\bigcap_{i=1}^{\infty}\left(\bigcup_{\begin{subarray}{c}\omega_{n}^{j}\neq\omega_{n}\\\ 1\leq j\leq i\end{subarray}}I_{\omega_{n}^{1}\ldots\omega_{n}^{i}}\right).$ A consequence of Lemma 1 is the following. ###### Corollary 1. Suppose $f\in\mathcal{BGCE}(d)$. Then the set $C(I_{w_{n}})$ has zero Lebesgue measure. That is, $m(C(I_{w_{n}}))=0$. Suppose $f$ and $g$ are both in $\mathcal{BGCE}(d)$ and $h\in\mathcal{CH}$ is the conjugacy from $f$ to $g$. Define the number $1\leq\Phi=\sup_{I\subseteq[0,1]}\frac{|h(I)|}{|I|}\leq\infty$ and the set (5) $~{}X=\\{x\in[0,1]\;|\;\exists I_{k}^{x}=[a_{k},b_{k}],\lim_{k\to\infty}a_{k}=\lim_{k\to\infty}b_{k}=x,\lim_{k\to\infty}\frac{|h(I_{k}^{x})|}{|I_{k}^{x}|}=\Phi\\}.$ We would like to note that, in general, $\Phi$ may be $\infty$, and when $\Phi<\infty$, $h$ is a Lipschitz conjugacy. ###### Remark 3. Similarly, we can also define $0\leq\phi=\inf_{I\subseteq[0,1]}\frac{|h(I)|}{|I|}\leq 1$ and use $\phi$ to prove Theorem 1. ###### Lemma 2. Suppose $f,g\in\mathcal{BGCE}(d)$. Then $X$ is a non-empty subset of $T$. ###### Proof. Suppose $\\{I_{k}=[a_{k},b_{k}]\\}_{k=1}^{\infty}$ is a sequence of intervals such that $\lim_{k\to\infty}\frac{|h(I_{k})|}{|I_{k}|}=\Phi.$ By taking a subsequence if necessary, we assume that $\\{a_{k}\\}_{k=1}^{\infty}$ and $\\{b_{k}\\}_{k=1}^{\infty}$ are two convergent sequences of numbers and $a=\lim_{k\to\infty}a_{k}$ and $b=\lim_{k\to\infty}b_{k}$. If $a=b=x$, then $x\in X$ and $X\not=\emptyset$. Note that if $\Phi=\infty$ then $a=b$. If $a<b$, then $I=[a,b]$ is a non-trivial interval such that (6) $~{}\frac{|h(I)|}{|I|}=\Phi.$ In this case, we claim that for any non-trivial subinterval $I^{\prime}\subset I$, $|h(I^{\prime})|/|I^{\prime}|=\Phi$. The claim implies that $I\subset X$, and thus, $X\not=\emptyset$. Now we prove the claim as follows. Let $I^{\prime}=[a^{\prime},b^{\prime}]$ with $a\leq a^{\prime}<b^{\prime}\leq b$. Let $L=[a,a^{\prime}]$ and $R=[b^{\prime},b]$. Then we have $I=L\cup I^{\prime}\cup R$ and $h(I)=h(L)\cup h(I^{\prime})\cup h(R)$. Assume $|h(I^{\prime})|/|I^{\prime}|<\Phi$. Then, since $|h(L)|\leq\Phi|L|$ and $|h(R)|\leq\Phi|R|$, we have $\frac{|h(I)|}{|I|}=\frac{|h(L)|+|h(I^{\prime})|+|h(R)|}{|L|+|I^{\prime}|+|R|}<\Phi.$ This is a contradiction. Thus we have proved the claim and completed the proof.
∎ Furthermore, under the assumption that both $f$ and $g$ preserve the Lebesgue measure $m$, we have the following stronger result. ###### Lemma 3. Suppose $f,g\in\mathcal{BGCE}(d)$ both preserve the Lebesgue measure $m$. Then $X$ is dense in $[0,1]$. That is, $\overline{X}=[0,1]$. ###### Proof. We will prove that for any $n\geq 1$ and for any partition interval $I_{w_{n}}\in\eta_{n}$, $I_{w_{n}}\cap X\not=\emptyset$. It will then follow from inequality (3) that $\overline{X}=[0,1]$. We prove it by contradiction. Assume we have a partition interval $I_{w_{n}}$ such that $I_{w_{n}}\cap X=\emptyset$. Then we can find a number $D<\Phi$ such that (7) $~{}\frac{|h(I)|}{|I|}\leq D$ for all $I\subset I_{w_{n}}$. Since $X\not=\emptyset$, we have an interval $I^{D}\subseteq[0,1]$ such that (8) $~{}\frac{|h(I^{D})|}{|I^{D}|}>D.$ We pull back $I^{D}$ by $f^{n}$ to get $f^{-n}(I^{D})=\cup_{w_{n}^{1}}I_{w_{n}^{1}}^{D}$, where $I^{D}_{w_{n}^{1}}\subset I_{w_{n}^{1}}\in\eta_{n}$ and $f^{n}(I^{D}_{w_{n}^{1}})=I^{D}$. Since both $f$ and $g$ preserve the Lebesgue measure $m$, for all $k\geq 2$ we have $\displaystyle~{}|I^{D}|$ $\displaystyle=|I_{\omega_{n}}^{D}|+\sum_{\omega_{n}^{1}\neq\omega_{n}}|I_{\omega_{n}^{1}}^{D}|=|I_{\omega_{n}}^{D}|+\sum_{\omega_{n}^{1}\neq\omega_{n}}|f^{-n}(I_{\omega_{n}^{1}}^{D})|$ $\displaystyle=|I_{\omega_{n}}^{D}|+\sum_{\omega_{n}^{1}\neq\omega_{n}}\left(|I_{\omega_{n}\omega_{n}^{1}}^{D}|+\sum_{\omega_{n}^{2}\neq\omega_{n}}|I_{\omega_{n}^{2}\omega_{n}^{1}}^{D}|\right)$ $\displaystyle=|I_{\omega_{n}}^{D}|+\sum_{\omega_{n}^{1}\neq\omega_{n}}|I_{\omega_{n}\omega_{n}^{1}}^{D}|+\sum_{\omega_{n}^{2}\neq\omega_{n}}\sum_{\omega_{n}^{1}\neq\omega_{n}}|I_{\omega_{n}^{2}\omega_{n}^{1}}^{D}|=\ldots$ (9) $\displaystyle=|I_{w_{n}}^{D}|+\sum_{l=1}^{k-1}\sum_{w_{n}^{l}\not=w_{n}}\cdots\sum_{w_{n}^{1}\not=w_{n}}|I_{w_{n}w_{n}^{l}\ldots w_{n}^{1}}^{D}|+\sum_{w_{n}^{k}\not=w_{n}}\cdots\sum_{w_{n}^{1}\not=w_{n}}|I^{D}_{w_{n}^{k}\ldots w_{n}^{1}}|$ and, similarly, (10) $~{}|h(I^{D})|=|h(I_{w_{n}}^{D})|+\sum_{l=1}^{k-1}\sum_{w_{n}^{l}\not=w_{n}}\cdots\sum_{w_{n}^{1}\not=w_{n}}|h(I_{w_{n}w_{n}^{l}\ldots w_{n}^{1}}^{D})|+\sum_{w_{n}^{k}\not=w_{n}}\cdots\sum_{w_{n}^{1}\not=w_{n}}|h(I^{D}_{w_{n}^{k}\ldots w_{n}^{1}})|.$ Figure 3. The interval $I^{D}$ has a preimage under $f^{n}$ composed of $d^{n}$ intervals, one of which is a subset of $I_{\omega_{n}}$. Similarly, each of these preimage-intervals that is not a subset of $I_{\omega_{n}}$ has a preimage under $f^{n}$ composed of $d^{n}$ intervals, one of which is a subset of $I_{\omega_{n}}$. Equation (9) says that the length of $I^{D}$ is equal to the sum of the lengths of all blue intervals belonging to the same arbitrary level plus the lengths of all pink intervals belonging to that same level or any previous level. See Figure 3.
Because $I_{w_{n}}^{D}$ and $I_{w_{n}w_{n}^{l}\ldots w_{n}^{1}}^{D}$ are sub-intervals of $I_{w_{n}}$, (7) says that $\frac{|h(I^{D}_{w_{n}})|}{|I^{D}_{w_{n}}|},\;\;\frac{|h(I^{D}_{w_{n}w_{n}^{l}\cdots w_{n}^{1}})|}{|I^{D}_{w_{n}w_{n}^{l}\cdots w_{n}^{1}}|}\leq D\;\;\forall\;l\geq 1.$ This implies that (11) $~{}\frac{|h(I_{w_{n}}^{D})|+\sum_{l=1}^{k-1}\sum_{w_{n}^{l}\not=w_{n}}\cdots\sum_{w_{n}^{1}\not=w_{n}}|h(I_{w_{n}w_{n}^{l}\ldots w_{n}^{1}}^{D})|}{|I_{w_{n}}^{D}|+\sum_{l=1}^{k-1}\sum_{w_{n}^{l}\not=w_{n}}\cdots\sum_{w_{n}^{1}\not=w_{n}}|I_{w_{n}w_{n}^{l}\ldots w_{n}^{1}}^{D}|}\leq D\;\;\forall\;k\geq 2.$ From Lemma 1, (12) $~{}\lim_{k\to\infty}\sum_{w_{n}^{k}\not=w_{n}}\cdots\sum_{w_{n}^{1}\not=w_{n}}|I^{D}_{w_{n}^{k}\cdots w_{n}^{1}}|=0$ and (13) $~{}\lim_{k\to\infty}\sum_{w_{n}^{k}\not=w_{n}}\cdots\sum_{w_{n}^{1}\not=w_{n}}|h(I^{D}_{w_{n}^{k}\cdots w_{n}^{1}})|=0.$ Now (9), (10), (11), (12), and (13) imply that $\frac{|h(I^{D})|}{|I^{D}|}\leq D.$ This contradicts (8). Thus our assumption that there exists a partition interval $I_{\omega_{n}}$ such that $I_{\omega_{n}}\cap X=\emptyset$ is false, and this proves the lemma. ∎ ###### Proof of Theorem 1. We will prove that $\Phi=1$. Equivalently, we will prove that $\Phi>1$ cannot happen, regardless of whether $\Phi<\infty$ or $\Phi=\infty$. We proceed with a proof by contradiction. Assume $\Phi>1$ (possibly $\infty$). Then we can choose two numbers $1<D_{1}<D_{2}<\Phi$. Since $h$ is symmetric (see Definition 1), there exists a positive bounded function $\epsilon(t)$ such that $\epsilon(t)\to 0$ as $t\to 0^{+}$ and $\frac{1}{1+\epsilon(t)}\leq\frac{|h(I)|}{|h(I^{\prime})|}\leq 1+\epsilon(t)$ holds for all closed intervals $I$ and $I^{\prime}$ that have the same length $t>0$ and are adjacent, i.e. the right endpoint of one interval is the left endpoint of the other. Fix $t_{0}$ such that (14) $\epsilon(t)<\frac{D_{2}}{D_{1}}-1\quad\forall t<t_{0}.$ Since $\overline{X}=[0,1]$ (Lemma 3), there exists an interval $I=[a,b]\subset(0,1)$ with $|I|=b-a<t_{0}$ such that (15) $~{}\frac{|h(I)|}{|I|}>D_{2}.$ Let $L=[2a-b,a]\subset(0,1)$ and $R=[b,2b-a]\subset(0,1)$ (see Figure 4). Then the intervals $L$ and $R$ are adjacent to $I$ and have the same length as $I$. It follows from (14) that $\frac{|h(R)|}{|R|}=\frac{|h(R)|}{|h(I)|}\cdot\frac{|h(I)|}{|I|}\cdot\frac{|I|}{|R|}>\frac{1}{1+\epsilon({b-a})}\cdot D_{2}\cdot 1>D_{1}$ and $\frac{|h(L)|}{|L|}=\frac{|h(L)|}{|h(I)|}\cdot\frac{|h(I)|}{|I|}\cdot\frac{|I|}{|L|}>\frac{1}{1+\epsilon({b-a})}\cdot D_{2}\cdot 1>D_{1}.$ Figure 4. $h$ is symmetric. Now we want to show that $\frac{|h([a,1])|}{|[a,1]|}>D_{1}.$ Consider any interval $J=[b,c]\supset R$ with $2b-a\leq c\leq 1$ satisfying (16) $~{}\frac{|h(J)|}{|J|}>D_{1}.$ If $c=1$, we have $\frac{|h([a,1])|}{|[a,1]|}=\frac{|h(I\cup J)|}{|I\cup J|}>D_{1}.$ Then we have nothing further to prove. If $c<1$, we have a number $\delta>0$ such that $c+\delta<1$ and such that for any $x\in[c,c+\delta]$ we have $\frac{|h([b,x])|}{|[b,x]|}>D_{1}.$ Since $\overline{X}=[0,1]$ (Lemma 3), there is an interval $I_{1}=[a_{1},b_{1}]\subset[c,c+\delta]$ with $|I_{1}|<t_{0}$ such that $\frac{|h(I_{1})|}{|I_{1}|}>D_{2}.$ Let $J_{1}=[b,a_{1}]$. Then we have three consecutive intervals $I$, $J_{1}$, and $I_{1}$ such that $\frac{|h([a,b_{1}])|}{|[a,b_{1}]|}=\frac{|h(I\cup J_{1}\cup I_{1})|}{|I\cup J_{1}\cup I_{1}|}>D_{1}.$ (See Figure 5.) Figure 5. Construction of $J_{1}$ and $I_{1}$. Consider $I_{1}$ as a new $I$ and repeat the above construction.
We get three consecutive intervals $I_{1}=[a_{1},b_{1}]$, $J_{2}=[b_{1},a_{2}]$, and $I_{2}=[a_{2},b_{2}]$ such that $\frac{|h([a_{1},b_{2}])|}{|[a_{1},b_{2}]|}=\frac{|h(I_{1}\cup J_{2}\cup I_{2})|}{|I_{1}\cup J_{2}\cup I_{2}|}>D_{1}.$ Inductively, for every integer $n\geq 2$, we have three consecutive intervals $I_{n-1}=[a_{n-1},b_{n-1}]$, $J_{n}=[b_{n-1},a_{n}]$, and $I_{n}=[a_{n},b_{n}]$ such that $\frac{|h([a_{n-1},b_{n}])|}{|[a_{n-1},b_{n}]|}=\frac{|h(I_{n-1}\cup J_{n}\cup I_{n})|}{|I_{n-1}\cup J_{n}\cup I_{n}|}>D_{1}.$ This implies that $\frac{|h([a,b_{n}])|}{|[a,b_{n}]|}=\frac{|h(I\cup(\cup_{i=1}^{n}(J_{i}\cup I_{i})))|}{|I\cup(\cup_{i=1}^{n}(J_{i}\cup I_{i}))|}>D_{1}.$ If $b_{n}=1$, we have $\frac{|h([a,1])|}{|[a,1]|}>D_{1}.$ Then we have nothing further to prove. In the case that $b_{n}<1$ for all $n\geq 1$, since $\\{b_{n}\\}_{n=1}^{\infty}$ is a strictly increasing sequence in $[0,1)$, we have $b_{\infty}=\lim_{n\to\infty}b_{n}\leq 1$ and $\frac{|h([a,b_{\infty}])|}{|[a,b_{\infty}]|}=\frac{|h(I\cup(\cup_{n=1}^{\infty}(J_{n}\cup I_{n})))|}{|I\cup(\cup_{n=1}^{\infty}(J_{n}\cup I_{n}))|}>D_{1}.$ Since $b_{\infty}$ depends on the initially chosen interval $J$, we write it as $b_{\infty}(J)$. Consider the set ${\mathcal{B}}=\\{b_{\infty}(J)\;|\;J\hbox{ satisfies (16)}\\}.$ Let $\beta=\sup{\mathcal{B}}$. We claim $\beta=1$. Otherwise, we take $J=[b,\beta]$. It satisfies (16). Then $b_{\infty}(J)>\beta$. This contradiction proves the claim, and so $\frac{|h([a,1])|}{|[a,1]|}>D_{1}.$ Similarly, by using $L$ instead of $R$ and applying the procedure above, we get $\frac{|h([0,a])|}{|[0,a]|}>D_{1}.$ Finally, we get the following contradiction. $1=\frac{|h([0,1])|}{|[0,1]|}=\frac{|h([0,a])|+|h([a,1])|}{|[0,a]|+|[a,1]|}>D_{1}>1.$ The contradiction implies that $\Phi=1$. Since $\Phi=1$, we have that for any non-trivial interval $J\subset[0,1]$, $|h(J)|/|J|=1$. Otherwise, if there is an interval $J\subset[0,1]$ such that $|h(J)|/|J|<1$, let $L\cup R=[0,1]\setminus J$. Then $1=\frac{|h([0,1])|}{|[0,1]|}=\frac{|h(L)|+|h(J)|+|h(R)|}{|L|+|J|+|R|}<1,$ since $|h(L)|\leq|L|$ and $|h(R)|\leq|R|$. This is a contradiction. Since $h(0)=0$, it follows that $h=id$. This completes the proof of Theorem 1. ∎ Theorem 1 has many consequences. In particular, we have affirmative answers to Conjecture 1 and Conjecture 2, which we state as the following two corollaries. ###### Corollary 2. Suppose $f,g\in\mathcal{USCE}(d)$ and both maps preserve the Lebesgue measure $m$. Suppose $h$ is the conjugacy from $f$ to $g$, and $h(1)=1$. If $h\in\mathcal{S}$, then $h=id$. ###### Corollary 3. Suppose $f,g\in\mathcal{UQCE}(d)$ and both maps preserve the Lebesgue measure $m$. Suppose $h$ is the conjugacy from $f$ to $g$, and $h(1)=1$. If $h\in\mathcal{S}$, then $h=id$. Other consequences are new proofs of some known results in [16] where we proved them by using transfer operators. ###### Corollary 4. Suppose $f$ and $g$ are $C^{1+Dini}$ expanding circle endomorphisms and both preserve the Lebesgue measure $m$. Suppose $h$ is the conjugacy from $f$ to $g$, and $h(1)=1$. If $h\in\mathcal{S}$, then $h=id$. ###### Corollary 5. Suppose $f$ and $g$ are $C^{1+Dini}$ expanding circle endomorphisms and both preserve the Lebesgue measure $m$. Suppose $h$ is the conjugacy from $f$ to $g$, and $h(1)=1$. If $h$ is absolutely continuous, then $h=id$. ###### Proof. As shown in [19] (see also [13, 14]), if $h$ is absolutely continuous, then $h$ is a $C^{1}$ diffeomorphism. A $C^{1}$ diffeomorphism is symmetric.
Now this corollary follows from Corollary 4. ∎ ## References * [1] J. Adamski, Symmetric rigidity for circle endomorphisms with bounded geometry and their dual maps. Ph.D Thesis, The CUNY Graduate Center, 2020. CUNY Academic Works. * [2] L. V. Ahlfors, Lectures on Quasiconformal Mappings, Van Nostrand Mathematical studies, 10, D. Van Nostrand Co. Inc., Toronto-New York-London, 1966. * [3] F. Gardiner and Y. Jiang, Asymptotically affine and asymptotically conformal circle endomorphisms. RIMS Kokyuroku Bessatsu B17, Infinite Dimensional Teichmueller Spaces and Moduli Spaces, Ed. Ege Fujikawa, June, 2010, 37-53. * [4] F. Gardiner and D. Sullivan, Symmetric and quasisymmetric structures on a closed curve, Amer. J. of Math., 114 (1992) no. 4, 683–736. * [5] Y. Hu, Martingales for uniformly quasisymmetric circle endomorphisms. Ph.D Thesis, The CUNY Graduate Center, 2014. CUNY Academic Works. * [6] Y. Hu, Markov partitions, martingale and symmetric conjugacy of circle endomorphisms. Proc. Amer. Math. Soc. 145 (2017), 2557-2566 * [7] Y. Hu, Y. Jiang, and Z. Wang, Martingales for quasisymmetric systems and complex manifold structures. Annales Academiæ Scientiarum Fennicæ Mathematica, Volumen 38, 2013, 1-26. * [8] Y. Jiang, Renormalization and Geometry in One-Dimensional and Complex Dynamics. Advanced Series in Nonlinear Dynamics, Vol. 10 (1996) World Scientific Publishing Co. Pte. Ltd., River Edge, NJ, xvi+309 pp. ISBN 981-02-2326-9. * [9] Y. Jiang, Geometry of geometrically finite one-dimensional maps. Comm. in Math. Phys., 156 (1993), no. 3, 639-647. * [10] Y. Jiang, Lecture Notes in Dynamical Systems and Quasiconformal Mappings: A Course Given in Department of Mathematics at CUNY Graduate Center, Spring Semester of 2009. * [11] Y. Jiang, Geometric Gibbs Theory (Old title: Teichmüller structures and dual geometric Gibbs type measure theory for continuous potentials). SCIENCE CHINA Mathematics, September 2020, Vol. 63 No. 9: 1777-1824. https://doi.org/10.1007/s11425-019-1638-6. * [12] Y. Jiang, On Ulam-von Neumann transformations. Comm. in Math. Phys., 172 (1995), no. 3, 449-459. * [13] Y. Jiang, Smooth classification of geometrically finite one-dimensional maps. Trans. Amer. Math. Soc., 348 (1996), no. 6, 2391- 2412. * [14] Y. Jiang, On rigidity of one-dimensional maps. Contemporary Mathematics, AMS Series, 211 (1997), 319-431. * [15] Y. Jiang, An introduction to geometric Gibbs theory. A Chapter in Dynamics, Games and Science (edited by J.-P. Bourguignon, Jelstch, A. Pinto, and M. Viana), Springer-Verlag, 2015, 327-339. * [16] Y. Jiang, Symmetric invariant measures for uniformly symmetric circle endomorphisms. Contemporary Mathematics, AMS, Vol. 575, 2012, pp. 211-218. * [17] Y. Jiang, Differentiable rigidity and smooth conjugacy. Annales Academiæ Scientiarum Fennicæ Mathematica, Vol. 30 (2005), 361-383 * [18] D. Mostow, Strong Rigidity of Locally Symmetric Spaces. Ann. of Math. Stud. 78, Princeton, NJ, 1972. * [19] M. Shub and D. Sullivan, Expanding endomorphisms of the circle revisited. Ergod. Th & Dynam. Sys., 5 (1985), 285-289.
# CheXtransfer: Performance and Parameter Efficiency of ImageNet Models for Chest X-Ray Interpretation Alexander Ke <EMAIL_ADDRESS>, William Ellsworth <EMAIL_ADDRESS>, Oishi Banerjee <EMAIL_ADDRESS>, Andrew Y. Ng <EMAIL_ADDRESS>, and Pranav Rajpurkar <EMAIL_ADDRESS>; Stanford University, USA (2021) ###### Abstract. Deep learning methods for chest X-ray interpretation typically rely on pretrained models developed for ImageNet. This paradigm assumes that better ImageNet architectures perform better on chest X-ray tasks and that ImageNet-pretrained weights provide a performance boost over random initialization. In this work, we compare the transfer performance and parameter efficiency of 16 popular convolutional architectures on a large chest X-ray dataset (CheXpert) to investigate these assumptions. First, we find no relationship between ImageNet performance and CheXpert performance for both models without pretraining and models with pretraining. Second, we find that, for models without pretraining, the choice of model family influences performance more than size within a family for medical imaging tasks. Third, we observe that ImageNet pretraining yields a statistically significant boost in performance across architectures, with a higher boost for smaller architectures. Fourth, we examine whether ImageNet architectures are unnecessarily large for CheXpert by truncating final blocks from pretrained models, and find that we can make models 3.25x more parameter-efficient on average without a statistically significant drop in performance. Our work contributes new experimental evidence about the relation of ImageNet to chest x-ray interpretation performance. generalization, efficiency, pretraining, chest x-ray interpretation, ImageNet, truncation Figure 1. Visual summary of our contributions. From left to right: scatterplot and best-fit line for 16 pretrained models showing no relationship between ImageNet and CheXpert performance, CheXpert performance relationship varies across architecture families much more than within, average CheXpert performance improves with pretraining, models can maintain performance and improve parameter efficiency through truncation of final blocks. Error bars show one standard deviation. See Introduction for contributions. ## 1\. Introduction Deep learning models for chest X-ray interpretation have high potential for social impact by aiding clinicians in their workflow and increasing access to radiology expertise worldwide (Rajpurkar et al., 2020b; Rajpurkar et al., 2021). Transfer learning using pretrained ImageNet (Deng et al., 2009) models has been the standard approach for developing models not only on chest X-rays (Wang et al., 2017; Rajpurkar et al., 2017; Apostolopoulos and Mpesiana, 2020) but also for many other medical imaging modalities (Mitani et al., 2020; Zhang et al., 2020; Li et al., 2019; De Fauw et al., 2018; Esteva et al., 2017).
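In code, this standard recipe amounts to only a few lines. The sketch below is a hypothetical illustration, not this paper's training script; it swaps the ImageNet classifier of a pretrained DenseNet121 for a 14-way output head matching CheXpert's 14 labeled observations, after which all parameters are fine-tuned:

```python
import torch.nn as nn
from torchvision import models

# Standard ImageNet-transfer recipe: load pretrained weights, replace the head.
model = models.densenet121(pretrained=True)  # ImageNet-pretrained backbone
model.classifier = nn.Linear(model.classifier.in_features, 14)
# ...then fine-tune every parameter on the chest X-ray dataset.
```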
This transfer assumes that better ImageNet architectures perform better and pretrained weights boost performance on their target medical tasks. However, there has not been a systematic investigation of how ImageNet architectures and weights both relate to performance on downstream medical tasks. In this work, we systematically investigate how ImageNet architectures and weights both relate to performance on chest X-ray tasks. Our primary contributions are: 1. (1) For models without pretraining and models with pretraining, we find no relationship between ImageNet performance and CheXpert performance (Spearman $\rho=0.08$, $\rho=0.06$ respectively). This finding suggests that architecture improvements on ImageNet may not lead to improvements on medical imaging tasks. 2. (2) For models without pretraining, we find that within an architecture family, the largest and smallest models have small differences (ResNet 0.005, DenseNet 0.003, EfficientNet 0.004) in CheXpert AUC, but different model families have larger differences in AUC ($>0.006$). This finding suggests that the choice of model family influences performance more than size within a family for medical imaging tasks. 3. (3) We observe that ImageNet pretraining yields a statistically significant boost in performance (average boost of 0.016 AUC) across architectures, with a higher boost for smaller architectures (Spearman $\rho=-0.72$ with number of parameters). This finding supports the ImageNet pretraining paradigm for medical imaging tasks, especially for smaller models. 4. (4) We find that by truncating final blocks of pretrained models, we can make models 3.25x more parameter-efficient on average without a statistically significant drop in performance. This finding suggests model truncation may be a simple method to yield lighter pretrained models by preserving architecture design features while reducing model size. Our study, to the best of our knowledge, contributes the first systematic investigation of the performance and efficiency of ImageNet architectures and weights for chest X-ray interpretation. Our investigation and findings may be further validated on other datasets and medical imaging tasks. ## 2\. Related Work ### 2.1. ImageNet Transfer Kornblith et al. (2018) examined the performance of 16 convolutional neural networks (CNNs) on 12 image classification datasets. They found that using these ImageNet pretrained architectures either as feature extractors for logistic regression or by fine-tuning them on the target dataset yielded a Spearman $\rho=0.99$ and $\rho=0.97$ between ImageNet accuracy and transfer accuracy respectively. However, they showed ImageNet performance was less correlated with transfer accuracy for some fine-grained tasks, corroborating He et al. (2018). They found that without ImageNet pretraining, ImageNet accuracy and transfer accuracy had a weaker Spearman $\rho=0.59$. We extend Kornblith et al. (2018) to the medical setting by studying the relationship between ImageNet and CheXpert performance. Raghu et al. (2019) explored properties of transfer learning onto retinal fundus images and chest X-rays. They studied ResNet50 and InceptionV3 and showed pretraining offers little performance improvement. Architectures composed of just four to five sequential convolution and pooling layers achieved performance comparable to ResNet50 on these tasks with less than 40% of the parameters.
In our work, we find pretraining does not boost performance for ResNet50, InceptionV3, InceptionV4, and MNASNet but does boost performance for the remaining 12 architectures. Thus, we were able to replicate the results of Raghu et al. (2019), but upon studying a broader set of newer and more popular models, we reached the opposite conclusion that ImageNet pretraining yields a statistically significant boost in performance. ### 2.2. Medical Task Architectures Irvin et al. (2019) compared the performance of ResNet152, DenseNet121, InceptionV4, and SEResNeXt101 on CheXpert, finding that DenseNet121 performed best. In a recent analysis, all but one of the top ten CheXpert competition models used DenseNets as part of their ensemble, even though they have been surpassed on ImageNet (Rajpurkar et al., 2020a). Few groups design their own networks from scratch, preferring to use established ResNet and DenseNet architectures for CheXpert (Bressem et al., 2020). This trend extends to retinal fundus and skin cancer tasks as well, where Inception architectures remain popular (Mitani et al., 2020; Zhang et al., 2020; Li et al., 2019; De Fauw et al., 2018). The popularity of these older ImageNet architectures hints that there may be a disconnect between ImageNet performance and medical task performance for newer architectures generated through architecture search. We verify that these newer architectures generated through search (EfficientNet, MobileNet, MNASNet) underperform older architectures (DenseNet, ResNet) on CheXpert, suggesting that search has overfit to ImageNet and explaining the popularity of these older architectures in the medical imaging literature. Bressem et al. (2020) postulated that deep CNNs that can represent more complex relationships for ImageNet may not be necessary for CheXpert, which has greyscale inputs and fewer image classes. They studied ResNet, DenseNet, VGG, SqueezeNet, and AlexNet performance on CheXpert and found that ResNet152, DenseNet161, and ResNet50 performed best on CheXpert AUC. In terms of AUPRC, they showed that smaller architectures like AlexNet and VGG can perform similarly to deeper architectures on CheXpert. Models such as AlexNet, VGG, and SqueezeNet are no longer popular in the medical setting, so in our work, we systematically investigate the performance and efficiency of 16 more contemporary ImageNet architectures with and without pretraining. Additionally, we extend Bressem et al. (2020) by studying the effects of pretraining, characterizing the relationship between ImageNet and CheXpert performance, and drawing conclusions about architecture design. ### 2.3. Truncated Architectures The more complex a convolutional architecture becomes, the more computational and memory resources are needed for its training and deployment. Model complexity thus may impede the deployment of CNNs to clinical settings with fewer resources. Therefore, efficiency, often reported in terms of the number of parameters in a model, the number of FLOPS in the forward pass, or the latency of the forward pass, has become increasingly important in model design. Low-rank factorization (Jaderberg et al., 2014; Chollet, 2016), transferred/compact convolutional filters (Cheng et al., 2017), knowledge distillation (Hinton et al., 2015), and parameter pruning (Srinivas and Babu, 2015) have all been proposed to make CNNs more efficient. Layer-wise pruning is a type of parameter pruning that locates and removes layers that are not as useful to the target task (Ro and Choi, 2020).
Through feature diagnosis, a linear classifier is trained using the feature maps at intermediate layers to quantify how much a particular layer contributes to performance on the target task (Chen and Zhao, 2019). In this work, we propose model truncation as a simple method for layer-wise pruning where the final pretrained layers after a given point are pruned off, a classification layer is appended, and this whole model is finetuned on the target task. ## 3\. Methods ### 3.1. Training and Evaluation Procedure We train chest X-ray classification models with different architectures with and without pretraining. The task of interest is to predict the probability of different pathologies from one or more chest X-rays. We use the CheXpert dataset consisting of 224,316 chest X-rays of 65,240 patients (Irvin et al., 2019) labeled for the presence or absence of 14 radiological observations. We evaluate models using the average of their AUROC metrics (AUC) on the five CheXpert-defined competition tasks (Atelectasis, Cardiomegaly, Consolidation, Edema, Pleural Effusion) as well as the No Finding task to balance clinical importance and prevalence in the validation set. We select 16 models pretrained on ImageNet from public checkpoints implemented in PyTorch 1.4.0: DenseNet (121, 169, 201) and ResNet (18, 34, 50, 101) from Paszke et al. (2019), Inception (V3, V4) and MNASNet from Cadene (2018), and EfficientNet (B0, B1, B2, B3) and MobileNet (V2, V3) from Wightman (2020). We finetune and evaluate these architectures with and without ImageNet pretraining. For each model, we finetune all parameters on the CheXpert training set. If a model is pretrained, inputs are normalized using mean and standard deviation learned from ImageNet. If a model is not pretrained, inputs are normalized with mean and standard deviation learned from CheXpert. We use the Adam optimizer ($\beta_{1}=0.9$, $\beta_{2}=0.999$) with a learning rate of $1\times 10^{-4}$, a batch size of 16, and a cross-entropy loss function. We train on up to four Nvidia GTX 1080 GPUs with CUDA 10.1 and Intel Xeon CPU ES-2609 running Ubuntu 16.04. For one run of an architecture, we train for three epochs and evaluate each model every 8192 gradient steps. We train each model and create a final ensemble model from the ten checkpoints that achieved the best average CheXpert AUC across the six tasks on the validation set. We report all our results on the CheXpert test set. We use the nonparametric bootstrap to estimate 95% confidence intervals for each statistic. 1,000 replicates are drawn from the test set, and the statistic is calculated on each replicate. This procedure produces a distribution for each statistic, and we report the 2.5 and 97.5 percentiles as a confidence interval. Significance is assessed at the $p=0.05$ level. ### 3.2. Truncated Architectures We study truncated versions of DenseNet121, MNASNet, ResNet18, and EfficientNetB0. DenseNet121 and MNASNet were chosen because we found they have the greatest efficiency (by AUC per parameter) on CheXpert of the models we profile, ResNet18 was chosen because of its popularity as a compact model for medical tasks, and EfficientNetB0 was chosen because it is the smallest current-generation model of the 16 we study. DenseNet121 contains four dense blocks separated by transition blocks before the classification layer. By pruning the final dense block and associated transition block, the model now only contains three dense blocks, yielding DenseNet121Minus1.
Similarly, pruning two dense blocks and associated transition blocks yields DenseNet121Minus2, and pruning three dense blocks and associated transition blocks yields DenseNet121Minus3. For MNASNet, we remove up to four of the final MBConv blocks to produce MNASNetMinus1 through MNASNetMinus4. For ResNet18, we remove up to three of the final residual blocks in a similar way to produce ResNet18Minus1 through ResNet18Minus3. For EfficientNet, we remove up to two of the final MBConv6 blocks to produce EfficientNetB0Minus1 and EfficientNetB0Minus2. After truncating a model, we append a classification block containing a global average pooling layer followed by a fully connected layer to yield outputs of the correct shape. We initialize the model with ImageNet pretrained weights, except the randomly initialized classification block, and finetune using the same training procedure as the 16 ImageNet models. ### 3.3. Class Activation Maps We compare the class activation maps (CAMs) within a truncated DenseNet121 family to visualize their higher-resolution CAMs. We generate CAMs using the Grad-CAM method (Selvaraju et al., 2016), using a weighted combination of the model’s final convolutional feature maps, with weights based on the positive partial derivatives with respect to class score. This averaged map is scaled by the output probability so more confident predictions appear brighter. Finally, the map is upsampled to the input image resolution and overlain onto the input image, highlighting image regions that had the greatest influence on a model’s decision. Figure 2. Average CheXpert AUC vs. ImageNet Top-1 Accuracy. The left plot shows results obtained without pretraining, while the right plot shows results with pretraining. There is no monotonic relationship between ImageNet and CheXpert performance without pretraining (Spearman $\rho=0.08$) or with pretraining (Spearman $\rho=0.06$). ## 4\. Experiments Figure 3. Average CheXpert AUC vs. Model Size. The left plot shows results obtained without pretraining, while the right plot shows results with pretraining. The logarithm of the model size has a near linear relationship with CheXpert performance when we omit pretraining (Spearman $\rho=0.79$). However, once we incorporate pretraining, the monotonic relationship is weaker (Spearman $\rho=0.56$). ### 4.1. ImageNet Transfer Performance We investigate whether higher performance on natural image classification translates to higher performance on chest X-ray classification. We display the relationship between the CheXpert AUC, with and without ImageNet pretraining, and ImageNet top-1 accuracy in Figure 2. When models are trained without pretraining, we find no monotonic relationship between ImageNet top-1 accuracy and average CheXpert AUC, with Spearman $\rho=0.082$ at $p=0.762$. Model performance without pretraining would describe how a given architecture would perform on the target task, independent of any pretrained weights. When models are trained with pretraining, we again find no monotonic relationship between ImageNet top-1 accuracy and average CheXpert AUC with Spearman $\rho=0.059$ at $p=0.829$. Overall, we find no relationship between ImageNet and CheXpert performance, so models that succeed on ImageNet do not necessarily succeed on CheXpert.
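As a brief aside for readers who want to reproduce this style of analysis, here is a small sketch of the two statistics used throughout this section: SciPy's Spearman rank correlation, and the Section 3.1 nonparametric bootstrap (1,000 resamples of the test set, 2.5/97.5 percentiles). The numbers below are synthetic placeholders, not our measured results, and a simple mean stands in for the AUC computation:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-ins for 16 architectures' scores (not our measured values).
imagenet_top1 = rng.uniform(0.70, 0.85, size=16)
chexpert_auc = rng.uniform(0.85, 0.87, size=16)
rho, p = spearmanr(imagenet_top1, chexpert_auc)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")

def bootstrap_ci(per_example_scores, n_boot=1000):
    # Resample the test set with replacement; recompute the statistic each time.
    stats = [rng.choice(per_example_scores, size=len(per_example_scores)).mean()
             for _ in range(n_boot)]
    return np.percentile(stats, [2.5, 97.5])

print(bootstrap_ci(rng.normal(0.86, 0.05, size=500)))  # 95% CI endpoints
```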
These relationships between ImageNet performance and CheXpert performance are much weaker than the relationships between ImageNet performance and performance on various natural image tasks reported by Kornblith et al. (2018). We compare the CheXpert performance within and across architecture families. Without pretraining, we find that ResNet101 performs only 0.005 AUC greater than ResNet18, which is well within the confidence interval of this metric (Figure 2). Similarly, DenseNet201 performs 0.004 AUC greater than DenseNet121 and EfficientNetB3 performs 0.003 AUC greater than EfficientNetB0. With pretraining, we continue to find minor performance differences between the largest model and smallest model that we test in each family. We find AUC increases of 0.002 for ResNet, 0.004 for DenseNet, and -0.006 for EfficientNet. Thus, increasing complexity within a model family does not yield increases in CheXpert performance as meaningful as the corresponding increases in ImageNet performance. Without pretraining, we find that the best model studied performs significantly better than the worst model studied. Among models trained without pretraining, we find that InceptionV3 performs best with 0.866 (0.851, 0.880) AUC, while MobileNetV2 performs worst with 0.814 (0.796, 0.832) AUC. Their difference in performance is 0.052 (0.043, 0.063) AUC. InceptionV3 is also the third largest architecture studied and MobileNetV2 the smallest. We find a significant difference in the CheXpert performance of these models. This difference again hints at the importance of architecture design. ### 4.2. CheXpert Performance and Efficiency We examine whether larger architectures perform better than smaller architectures on chest X-ray interpretation, where architecture size is measured by number of parameters. We display these relationships in Figure 3 and Table 1.

Model | CheXpert AUC | #Params (M)
---|---|---
DenseNet121 | 0.859 (0.846, 0.871) | 6.968
DenseNet169 | 0.860 (0.848, 0.873) | 12.508
DenseNet201 | 0.864 (0.850, 0.876) | 18.120
EfficientNetB0 | 0.859 (0.846, 0.871) | 4.025
EfficientNetB1 | 0.858 (0.844, 0.872) | 6.531
EfficientNetB2 | 0.866 (0.853, 0.880) | 7.721
EfficientNetB3 | 0.853 (0.837, 0.867) | 10.718
InceptionV3 | 0.862 (0.848, 0.876) | 27.161
InceptionV4 | 0.861 (0.846, 0.873) | 42.680
MNASNet | 0.858 (0.845, 0.871) | 5.290
MobileNetV2 | 0.854 (0.839, 0.869) | 2.242
MobileNetV3 | 0.859 (0.847, 0.872) | 4.220
ResNet101 | 0.863 (0.848, 0.876) | 44.549
ResNet18 | 0.862 (0.847, 0.875) | 11.690
ResNet34 | 0.863 (0.849, 0.875) | 21.798
ResNet50 | 0.859 (0.843, 0.871) | 25.557

Table 1. CheXpert AUC (with 95% Confidence Intervals) and Number of Parameters for 16 ImageNet-Pretrained Models. Without ImageNet pretraining, we find a positive monotonic relationship between the number of parameters of an architecture and CheXpert performance, with Spearman $\rho=0.791$ significant at $p=2.62\times 10^{-4}$. With ImageNet pretraining, there is a weaker positive monotonic relationship between the number of parameters and average CheXpert AUC, with Spearman $\rho=0.565$ at $p=0.023$. Although there exists a positive monotonic relationship between the number of parameters of an architecture and average CheXpert AUC, the Spearman $\rho$ does not highlight the increase in parameters necessary to realize marginal increases in CheXpert AUC. For example, ResNet101 is 11.1x larger than EfficientNetB0, but with only an increase of 0.005 in CheXpert AUC with pretraining.
Within a model family, increasing the number of parameters does not lead to meaningful gains in CheXpert AUC. We see this relationship in all families studied without pretraining (EfficientNet, DenseNet, and ResNet) in Figure 3. For example, DenseNet201 has an AUC 0.003 greater than DenseNet121, but is 2.6x larger. EfficientNetB3 has an AUC 0.004 greater than EfficientNetB0, but is 1.9x larger. Despite the positive relationship between model size and CheXpert performance across all models, bigger does not necessarily mean better within a model family. Since within a model family there is a weaker relationship between model size and CheXpert performance than across all models, we find that CheXpert performance is influenced more by the macro architecture design than by its size. Models within a family have similar architecture design choices but different sizes, so they perform similarly on CheXpert. We observe large discrepancies in performance between architecture families. For example, DenseNet, ResNet, and Inception typically outperform EfficientNet and MobileNet architectures, regardless of their size. EfficientNet, MobileNet, and MNASNet were all generated through neural architecture search to some degree, a process that optimized for performance on ImageNet. Our findings suggest that this search could have overfit to the natural image objective to the detriment of chest X-ray tasks. Figure 4. Pretraining Boost vs. Model Size. We define pretraining boost as the increase in the average CheXpert AUCs achieved with pretraining vs. without pretraining. Most models benefit significantly from ImageNet pretraining. Smaller models tend to benefit more than larger models (Spearman $\rho=-0.72$). ### 4.3. ImageNet Pretraining Boost We study the effects of ImageNet pretraining on CheXpert performance by defining the pretraining boost as the CheXpert AUC of a model initialized with ImageNet pretraining minus the CheXpert AUC of its counterpart without pretraining. The pretraining boosts of our architectures are reported in Figure 4. We find that ImageNet pretraining provides a meaningful boost for most architectures (on average 0.015 AUC). We find a Spearman $\rho=-0.718$ at $p=0.002$ between the number of parameters of a given model and the pretraining boost. Therefore, this boost tends to be larger for smaller architectures such as EfficientNetB0 (0.023), MobileNetV2 (0.040) and MobileNetV3 (0.033) and smaller for larger architectures such as InceptionV4 ($-0.002$) and ResNet101 (0.013). Further work is required to explain this relationship. Within a model family, the pretraining boost also does not meaningfully increase as model size increases. For example, DenseNet201 has a pretraining boost only 0.002 AUC greater than DenseNet121 does. This finding supports our earlier conclusion that model families perform similarly on CheXpert regardless of their size. ### 4.4. Truncated Architectures We truncate the final blocks of DenseNet121, MNASNet, ResNet18, and EfficientNetB0 with pretrained weights and study their CheXpert performance to understand whether ImageNet models are unnecessarily large for the chest X-ray task.
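As a concrete picture of the truncation procedure from Section 3.2, the sketch below (our illustrative helper, not released code) builds DenseNet121Minus-$k$ from torchvision's pretrained DenseNet121 by dropping the last $k$ dense blocks with their preceding transition layers and appending the global-average-pooling plus fully connected classification block:

```python
import torch
import torch.nn as nn
from torchvision import models

def truncated_densenet121(num_tasks=14, blocks_to_drop=1):
    # Drop the last `blocks_to_drop` dense blocks (and each one's preceding
    # transition layer), then append global average pooling + a linear head.
    base = models.densenet121(pretrained=True).features
    drop = {"norm5"} if blocks_to_drop > 0 else set()
    for k in range(blocks_to_drop):   # DenseNet121 has dense blocks 1..4
        n = 4 - k
        drop.update({f"denseblock{n}", f"transition{n - 1}"})
    stump = nn.Sequential(*[m for name, m in base.named_children()
                            if name not in drop])
    with torch.no_grad():             # infer the stump's output channel count
        channels = stump(torch.zeros(1, 3, 224, 224)).shape[1]
    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(channels, num_tasks))
    return nn.Sequential(stump, head)
```

Fine-tuning then proceeds exactly as for the full models.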
We express efficiency gains in terms of Times-Smaller, or the number of parameters of the original architecture divided by the number of parameters of the truncated architecture: intuitively, how many times larger the original architecture is compared to the truncated architecture. The efficiency gains and AUC changes of model truncation on DenseNet121, MNASNet, ResNet18, and EfficientNetB0 are displayed in Table 2.

Model | AUC Change | Times-Smaller
---|---|---
EfficientNetB0 | 0.00% | 1x
EfficientNetB0Minus1 | 0.15% | 1.4x
EfficientNetB0Minus2 | -0.45% | 4.7x
MNASNet | 0.00% | 1x
MNASNetMinus1 | -0.07% | 2.5x
MNASNetMinus2* | -2.30% | 11.2x
MNASNetMinus3* | -2.51% | 20.0x
MNASNetMinus4* | -6.40% | 112.9x
DenseNet121 | 0.00% | 1x
DenseNet121Minus1 | -0.04% | 1.6x
DenseNet121Minus2* | -1.33% | 5.3x
DenseNet121Minus3* | -4.73% | 20.0x
ResNet18 | 0.00% | 1x
ResNet18Minus1 | 0.24% | 4.2x
ResNet18Minus2* | -3.70% | 17.1x
ResNet18Minus3* | -8.33% | 73.8x

Table 2. Efficiency Trade-Off of Truncated Models. Pretrained models can be truncated without significant decrease in CheXpert AUC. Truncated models with significantly different AUC from the base model are denoted with an asterisk. For all four model families, we find that truncating the final block leads to no significant decrease in CheXpert AUC but can save 1.4x to 4.2x the parameters. Notably, truncating the final block of ResNet18 yields a model that is not significantly different (difference -0.002 (-0.008, 0.004)) in CheXpert AUC, but is 4.2x smaller. Truncating the final two blocks of an EfficientNetB0 yields a model that is not significantly different (difference 0.004 (-0.003, 0.009)) in CheXpert AUC, but is 4.7x smaller. However, truncating the second block and beyond in each of MNASNet, DenseNet121, and ResNet18 yields models that have statistically significant drops in CheXpert performance. Figure 5. Comparison of Class Activation Maps Among Truncated Model Family. CAMs yielded by models, from left to right, DenseNet121, DenseNet121Minus1, and DenseNet121Minus2. Displays frontal chest X-rays demonstrating Atelectasis (top) and Edema (bottom). Further truncated models more effectively localize the Atelectasis, as well as tracing the hila and vessel branching for Edema. Model truncation effectively compresses models performant on CheXpert, making them more parameter-efficient while still using pretrained weights to capture the pretraining boost. Parameter-efficient models are able to lighten the computational and memory burdens for deployment to low-resource environments such as portable devices. In the clinical setting, the simplicity of our model truncation method encourages its adoption for model compression. This finding corroborates Raghu et al. (2019) and Bressem et al. (2020), which show simpler models can achieve performance comparable to more complex models on CheXpert. Our truncated models can use readily-available pretrained weights, which may allow these models to capture the pretraining boost and speed up training. However, we do not study the performance of these truncated models without their pretrained weights. As an additional benefit, architectures that truncate pooling layers will also produce higher-resolution class activation maps, as shown in Figure 5. The higher-resolution class activation maps (CAMs) may more effectively localize pathologies with little to no decrease in classification performance.
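The resolution gain behind Figure 5 is easy to see mechanically: every pruned transition layer removes a stride-2 downsampling, so the final feature grid on which Grad-CAM is computed doubles in each spatial dimension per truncation step. A quick check, reusing the hypothetical `truncated_densenet121` helper sketched above and assuming 320x320 inputs as a typical CheXpert input size:

```python
import torch

x = torch.zeros(1, 3, 320, 320)
for k in (0, 1, 2):
    stump = truncated_densenet121(blocks_to_drop=k)[0]   # feature extractor only
    print(k, tuple(stump(x).shape[-2:]))  # (10, 10) -> (20, 20) -> (40, 40)
```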
In clinical settings, improved explainability through better CAMs may be useful for validating predictions and diagnosing mispredictions. As a result, clinicians may have more trust in models that provide these higher-resolution CAMs. ## 5\. Discussion In this work, we study the performance and efficiency of ImageNet architectures for chest x-ray interpretation. #### Is ImageNet performance correlated with CheXpert? No. We show no statistically significant relationship between ImageNet and CheXpert performance. This finding extends Kornblith et al. (2018), which found a significant correlation between ImageNet performance and transfer performance on typical image classification datasets, to the medical setting of chest x-ray interpretation. This difference could be attributed to unique aspects of the chest X-ray interpretation task and its data attributes. First, the chest X-ray interpretation task differs from natural image classification in that (1) disease classification may depend on abnormalities in a small number of pixels, (2) chest X-ray interpretation is a multi-task classification setup, and (3) there are far fewer classes than in many natural image classification datasets. Second, the data attributes for chest X-rays differ from natural image classification in that X-rays are greyscale and have similar spatial structures across images (always either anterior-posterior, posterior-anterior, or lateral). #### Does model architecture matter? Yes. For models without pretraining, we find that the choice of architecture family may influence performance more than model size. Our findings extend Raghu et al. (2019) beyond the effect of ImageNet weights, since we show that architectures that succeed on ImageNet do not necessarily succeed on medical imaging tasks. A notable finding of our work is that newer architectures generated through search on ImageNet (EfficientNet, MobileNet, MNASNet) underperform older architectures (DenseNet, ResNet) on CheXpert. This finding suggests that search may have overfit to ImageNet to the detriment of medical task performance, and ImageNet may not be an appropriate benchmark for selecting architectures for medical imaging tasks. Instead, medical imaging architectures could be benchmarked on CheXpert or other large medical datasets. Architectures derived from selection and search on CheXpert and other large medical datasets may be applicable to similar medical imaging modalities including other x-ray studies, or CT scans. Thus architecture search directly on CheXpert or other large medical datasets may allow us to unlock next generation performance for medical imaging tasks. #### Does ImageNet pretraining help? Yes. We find that ImageNet pretraining yields a statistically significant boost in performance for chest x-ray classification. Our findings are consistent with Raghu et al. (2019), who find no pretraining boost on ResNet50 and InceptionV3, but we find pretraining does boost performance for 12 out of 16 architectures. Our findings extend He et al. (2018), who find models without pretraining had comparable performance to models pretrained on ImageNet for object detection and image segmentation of natural images, to the medical imaging setting. Future work may investigate the relationship between network architectures and the impact of self-supervised pre-training for chest x-ray interpretation as recently developed by Sowrirajan et al. (2020), Azizi et al. (2021), and Sriram et al. (2021). #### Can models be smaller? Yes.
We find that by truncating final blocks of ImageNet-pretrained architectures, we can make models 3.25x more parameter-efficient on average without a statistically significant drop in performance. This method preserves the critical components of architecture design while cutting its size. This observation suggests model truncation may be a simple method to yield lighter models, using ImageNet pretrained weights to boost CheXpert performance. In the clinical setting, truncated models may provide value through improved parameter-efficiency and higher resolution CAMs. This change may enable deployment to low-resource clinical environments and further develop model trust through improved explainability. In closing, our work contributes to the understanding of the transfer performance and parameter efficiency of ImageNet models for chest X-ray interpretation. We hope that our new experimental evidence about the relation of ImageNet to medical task performance will shed light on potential future directions for progress. ## References * (1) * Apostolopoulos and Mpesiana (2020) Ioannis D Apostolopoulos and Tzani A Mpesiana. 2020. Covid-19: automatic detection from x-ray images utilizing transfer learning with convolutional neural networks. _Physical and Engineering Sciences in Medicine_ (2020), 1. * Azizi et al. (2021) Shekoofeh Azizi, Basil Mustafa, Fiona Ryan, Zachary Beaver, Jan Freyberg, Jonathan Deaton, Aaron Loh, Alan Karthikesalingam, Simon Kornblith, Ting Chen, Vivek Natarajan, and Mohammad Norouzi. 2021\. Big Self-Supervised Models Advance Medical Image Classification. arXiv:2101.05224 [eess.IV] * Bressem et al. (2020) Keno K. Bressem, Lisa Adams, Christoph Erxleben, Bernd Hamm, Stefan Niehues, and Janis Vahldiek. 2020\. Comparing Different Deep Learning Architectures for Classification of Chest Radiographs. arXiv:2002.08991 [cs.LG] * Cadene (2018) Remi Cadene. 2018\. pretrainedmodels 0.7.4. https://pypi.org/project/pretrainedmodels/. * Chen and Zhao (2019) S. Chen and Q. Zhao. 2019. Shallowing Deep Networks: Layer-Wise Pruning Based on Feature Representations. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 41, 12 (2019), 3048–3056. * Cheng et al. (2017) Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. 2017\. A Survey of Model Compression and Acceleration for Deep Neural Networks. _CoRR_ abs/1710.09282 (2017). arXiv:1710.09282 http://arxiv.org/abs/1710.09282 * Chollet (2016) François Chollet. 2016\. Xception: Deep Learning with Depthwise Separable Convolutions. _CoRR_ abs/1610.02357 (2016). arXiv:1610.02357 http://arxiv.org/abs/1610.02357 * De Fauw et al. (2018) Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, and Olaf Ronneberger. 2018\. Clinically applicable deep learning for diagnosis and referral in retinal disease. _Nature Medicine_ 24, 9 (01 Sep 2018), 1342–1350. https://doi.org/10.1038/s41591-018-0107-6 * Deng et al. (2009) J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. 2009\. ImageNet: A large-scale hierarchical image database. 
In _2009 IEEE Conference on Computer Vision and Pattern Recognition_. 248–255. * Esteva et al. (2017) Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. 2017. Dermatologist-level classification of skin cancer with deep neural networks. _Nature_ 542, 7639 (2017), 115–118. https://doi.org/10.1038/nature21056 * He et al. (2018) Kaiming He, Ross B. Girshick, and Piotr Dollár. 2018\. Rethinking ImageNet Pre-training. _CoRR_ abs/1811.08883 (2018). arXiv:1811.08883 http://arxiv.org/abs/1811.08883 * Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015\. Distilling the Knowledge in a Neural Network. arXiv:1503.02531 [stat.ML] * Irvin et al. (2019) Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn L. Ball, Katie S. Shpanskaya, Jayne Seekins, David A. Mong, Safwan S. Halabi, Jesse K. Sandberg, Ricky Jones, David B. Larson, Curtis P. Langlotz, Bhavik N. Patel, Matthew P. Lungren, and Andrew Y. Ng. 2019\. CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison. _CoRR_ abs/1901.07031 (2019). arXiv:1901.07031 http://arxiv.org/abs/1901.07031 * Jaderberg et al. (2014) Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. 2014\. Speeding up Convolutional Neural Networks with Low Rank Expansions. _CoRR_ abs/1405.3866 (2014). arXiv:1405.3866 http://arxiv.org/abs/1405.3866 * Kornblith et al. (2018) Simon Kornblith, Jonathon Shlens, and Quoc V. Le. 2018\. Do Better ImageNet Models Transfer Better? _CoRR_ abs/1805.08974 (2018). arXiv:1805.08974 http://arxiv.org/abs/1805.08974 * Li et al. (2019) Feng Li, Zheng Liu, Hua Chen, Minshan Jiang, Xuedian Zhang, and Zhizheng Wu. 2019\. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. _Translational Vision Science & Technology_ 8, 6 (11 2019), 4–4. https://doi.org/10.1167/tvst.8.6.4 arXiv:https://arvojournals.org/arvo/content_public/journal/tvst/938258/i2164-2591-8-6-4.pdf * Mitani et al. (2020) Akinori Mitani, Abigail Huang, Subhashini Venugopalan, Greg S. Corrado, Lily Peng, Dale R. Webster, Naama Hammel, Yun Liu, and Avinash V. Varadarajan. 2020. Detection of anaemia from retinal fundus images via deep learning. _Nature Biomedical Engineering_ 4, 1 (01 Jan 2020), 18–27. https://doi.org/10.1038/s41551-019-0487-z * Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In _Advances in Neural Information Processing Systems 32_ , H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché Buc, E. Fox, and R. Garnett (Eds.). Curran Associates, Inc., 8024–8035. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf * Raghu et al. (2019) Maithra Raghu, Chiyuan Zhang, Jon M. Kleinberg, and Samy Bengio. 2019. Transfusion: Understanding Transfer Learning with Applications to Medical Imaging. _CoRR_ abs/1902.07208 (2019). arXiv:1902.07208 http://arxiv.org/abs/1902.07208 * Rajpurkar et al. (2017) Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Yi Ding, Aarti Bagul, Curtis Langlotz, Katie S. 
Shpanskaya, Matthew P. Lungren, and Andrew Y. Ng. 2017\. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. _CoRR_ abs/1711.05225 (2017). arXiv:1711.05225 http://arxiv.org/abs/1711.05225 * Rajpurkar et al. (2020a) Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Phil Chen, Amirhossein Kiani, Jeremy Irvin, Andrew Y Ng, and Matthew P Lungren. 2020a. CheXpedition: investigating generalization challenges for translation of chest x-ray algorithms to the clinical setting. _arXiv preprint arXiv:2002.11379_ (2020). * Rajpurkar et al. (2021) Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Andrew Y. Ng, and Matthew P. Lungren. 2021. CheXternal: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays and External Clinical Settings. arXiv:2102.08660 [eess.IV] * Rajpurkar et al. (2020b) Pranav Rajpurkar, Chloe O’Connell, Amit Schechter, Nishit Asnani, Jason Li, Amirhossein Kiani, Robyn L Ball, Marc Mendelson, Gary Maartens, Daniël J van Hoving, et al. 2020b. CheXaid: deep learning assistance for physician diagnosis of tuberculosis using chest x-rays in patients with HIV. _NPJ digital medicine_ 3, 1 (2020), 1–8. * Ro and Choi (2020) Youngmin Ro and Jin Young Choi. 2020. Layer-wise Pruning and Auto-tuning of Layer-wise Learning Rates in Fine-tuning of Deep Networks. arXiv:2002.06048 [cs.CV] * Selvaraju et al. (2016) Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. 2016. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. _CoRR_ abs/1610.02391 (2016). arXiv:1610.02391 http://arxiv.org/abs/1610.02391 * Sowrirajan et al. (2020) Hari Sowrirajan, Jingbo Yang, Andrew Y. Ng, and Pranav Rajpurkar. 2020. MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models. arXiv:2010.05352 [cs.CV] * Srinivas and Babu (2015) Suraj Srinivas and R. Venkatesh Babu. 2015. Data-free parameter pruning for Deep Neural Networks. _CoRR_ abs/1507.06149 (2015). arXiv:1507.06149 http://arxiv.org/abs/1507.06149 * Sriram et al. (2021) Anuroop Sriram, Matthew Muckley, Koustuv Sinha, Farah Shamout, Joelle Pineau, Krzysztof J. Geras, Lea Azour, Yindalon Aphinyanaphongs, Nafissa Yakubova, and William Moore. 2021\. COVID-19 Deterioration Prediction via Self-Supervised Representation Learning and Multi-Image Prediction. arXiv:2101.04909 [cs.CV] * Wang et al. (2017) Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M. Summers. 2017. ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. _CoRR_ abs/1705.02315 (2017). arXiv:1705.02315 http://arxiv.org/abs/1705.02315 * Wightman (2020) Ross Wightman. 2020\. timm 0.2.1. https://pypi.org/project/timm/. * Zhang et al. (2020) Li Zhang, Mengya Yuan, Zhen An, Xiangmei Zhao, Hui Wu, Haibin Li, Ya Wang, Beibei Sun, Huijun Li, Shibin Ding, Xiang Zeng, Ling Chao, Pan Li, and Weidong Wu. 2020. Prediction of hypertension, hyperglycemia and dyslipidemia from retinal fundus photographs via deep learning: A cross-sectional study of chronic diseases in central China. _PLOS ONE_ 15, 5 (05 2020), 1–11. https://doi.org/10.1371/journal.pone.0233166
# Hyperparallel transistor, router and dynamic random access memory with unity fidelities

Ji-Zhen Liu1, Ning-Yang Chen1, Wen-Qiang Liu1, Hai-Rui Wei1,*, and Ming Hua2 1School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China 2Department of Applied Physics, School of Science, Tianjin Polytechnic University, Tianjin 300387, China *<EMAIL_ADDRESS>

###### Abstract

We theoretically implement several hyperparallel optical elements: a quantum single-photon transistor, a router, and a dynamic random access memory (DRAM). The inevitable side leakage and the imperfect birefringence of the quantum dot (QD)-cavity mediators are taken into account, and unity fidelities of our optical elements can be achieved. The hyperparallel constructions are based on the polarization and spatial degrees of freedom (DOFs) of the photon, which increases the parallel efficiency, improves the channel capacity, saves quantum resources, shortens the operation time, and reduces environmental noise. Moreover, the practical schemes are robust against side leakage and against the coupling-strength limitation in the microcavities.

## 1 Introduction

By exploiting the superposition principle, quantum information processing (QIP) offers great advantages over classical information processing in factoring, security, discrete logarithms, and efficient simulation and modeling [1]. Different from normal QIP, which acts on a single degree of freedom (DOF) [2, 3], hyperparallel QIP operates on more than one independent DOF simultaneously. Hyperparallel QIP has shown decisive advantages in improving encoding capacity, lowering loss rates, reducing experimental requirements, and decreasing susceptibility to decoherence [4, 5, 6]. Nowadays, hyperentanglement is recognized as a fascinating resource that enables important applications such as linear-optical quantum dense coding [7], complete photonic Bell-state analysis with linear optics [8, 9], teleportation-based quantum networking [10], and deterministic entanglement purification [11, 12, 13]. A hyperentangled six-qubit Bell state [14], a hyperentangled six-qubit cluster state [15], and a hyperentangled ten-qubit Schrödinger cat state [16] have been experimentally demonstrated in recent years. Quantum teleportation with multiple DOFs [4], complete hyperentangled-Bell- (Greenberger-Horne-Zeilinger-) state analysis [17, 18, 19], hyperparallel quantum repeaters [20], hyperentanglement concentration [21, 22, 23, 24, 25, 26], and hyperentanglement purification [27, 28] have been proposed for high-capacity, long-distance quantum communication. Hyperparallel quantum computing has also attracted much attention and yielded a series of outstanding achievements owing to its promising merits, especially in the field of hyperparallel universal quantum gates [29, 30, 31]. Multiple DOFs have shown potential advantages in simplifying quantum circuits [32, 33], optimizing quantum algorithms [34], and improving some traditional strategies [35].

Photons are nowadays recognized as excellent candidates for hyperparallel QIP due to their wide range of exploitable DOFs, including spatial [15], polarization [16], orbital angular momentum [36], transverse [37], frequency [38], spectral [39], and time-of-arrival [40] modes. Further benefits of photons are high-speed transmission, negligible decoherence, accurate single-qubit operation, and a vast supporting photonic industry.
However, a major hurdle for hyperparallel photonic QIP is realizing strong interactions between individual photons. The KLM scheme [41], based on linear optical elements and single-photon detectors, serves as a stepping stone for nondeterministic linear-optical quantum computing. Currently, mediated photon-photon interactions (via cross-Kerr media [42, 43, 44], neutral atoms [45, 46, 47, 48], atom ensembles [29, 49], and artificial atoms [50, 51, 52, 53, 54, 55]) are often employed to overcome the intrinsically weak interactions between individual photons that hamper parallel and hyperparallel photonic quantum computing. In recent years, artificial atoms (quantum dots in semiconductors [50, 51, 52, 53], nitrogen-vacancy defect centres in diamond [54], superconductors [55]) have received growing interest due to their relatively long coherence times [56, 57], sensitive and quick manipulation, high-fidelity readout [58, 59, 60], custom-designed features [61, 62], as well as much larger linewidths. Quantum dots (QDs) provide a particularly attractive matter-qubit system [63, 64] because they can be designed to have certain characteristics and be assembled in large arrays. Besides, they support microsecond coherence times [56] and picosecond-timescale single-qubit rotations [65] as well.

Quantum transistors, routers, and dynamic random access memories (DRAMs) are key resources for secure quantum networks [66], metrology [67], and fundamental tests of physical theory [68]. In particular, a quantum transistor provides a potential solution for mitigating transmission loss; a quantum router directs the signal from its source to its intended destination conditional on the state of the control qubit; and a DRAM, characterized by high integration and mainly used for large-capacity memory, can load, store, and read out information. Previous works on these quantum optical elements primarily acted on single-DOF systems [69, 70].

In this paper, we focus on designing compact quantum circuits for implementing a hyperparallel single-photon transistor, router, and DRAM, respectively. The computing qubits are encoded in the polarization and spatial DOFs of single photons, and the individual photons are bridged by QD mediators confined in double-sided microcavities. Our schemes have the following features: (1) the inevitable imperfect operations in the QD-cavity units are taken into account, and unity fidelities can be achieved in principle; (2) no coupling-strength limitation in the microcavities is required; (3) unlike traditional schemes that operate on a single DOF, our schemes act on the polarization and spatial DOFs independently and simultaneously; (4) the strong light (source light beam) in the quantum transistor is controlled by the weak light (gate photon), and an $N$-photon hyperentangled state can be generated by means of our transistor.

## 2 Hyper-transistor via a practical QD-cavity emitter

The key ingredient of optical-QD-based QIP is the realization of entanglement between a QD spin and a single photon. In 2009, Hu _et al._ [50] proposed a QD-microcavity emitter, i.e., a singly charged QD [e.g., a self-assembled Al(Ga)As QD or GaAs QD] placed in the center of a double-sided optical microcavity. As shown in Fig. 1, a negatively charged exciton $X^{-}$ consists of two electrons bound to one heavy hole [71]. The ground states and the excited states of this singly charged QD are the electron spin states and the exciton $X^{-}$ spin states, respectively [50].
The $X^{-}$ exhibits spin-dependent optical transition rules due to the Pauli exclusion principle and the conservation of total spin angular momentum [72]. In detail, if the electron is in the state $|\uparrow\rangle$, only the circularly polarized photon with $S_{z}=+1$ (marked by $|L^{\downarrow}\rangle$ or $|R^{\uparrow}\rangle$) feels the "hot" cavity and couples to the transition $|\uparrow\rangle\leftrightarrow|\uparrow\downarrow\Uparrow\rangle$. If the electron is in the state $|\downarrow\rangle$, only the circularly polarized light with $S_{z}=-1$ (marked by $|L^{\uparrow}\rangle$ or $|R^{\downarrow}\rangle$) feels the "hot" cavity and couples to the transition $|\downarrow\rangle\leftrightarrow|\downarrow\uparrow\Downarrow\rangle$. Here, $|R^{\uparrow}\rangle$ and $|L^{\uparrow}\rangle$ ($|R^{\downarrow}\rangle$ and $|L^{\downarrow}\rangle$) denote that the propagation direction of the right- and left-circularly polarized photon is parallel (antiparallel) to the spin quantization axis ($z$ axis). $|\uparrow\rangle$ and $|\downarrow\rangle$ are the electron spin states with $J=\pm 1/2$, respectively. $|\Uparrow\rangle$ and $|\Downarrow\rangle$ are the heavy-hole spin states with $J=\pm 3/2$, respectively.

The reflection/transmission coefficients of the hot/cold cavities can be obtained by solving the Heisenberg equations of motion for the cavity field operator $\hat{a}$ and the dipole operator $\hat{\sigma}_{-}$ [73], together with the input-output relations between the input and output fields,

$\displaystyle\frac{d\hat{a}}{dt}=-\left[i(\omega_{c}-\omega)+\kappa+\frac{\kappa_{s}}{2}\right]\hat{a}-\text{g}\;\hat{\sigma}_{-}-\sqrt{\kappa}\,\hat{a}_{in}-\sqrt{\kappa}\,\hat{a}_{in}^{\prime}+\hat{H},$ $\displaystyle\frac{d\hat{\sigma}_{-}}{dt}=-\left[i(\omega_{X^{-}}-\omega)+\frac{\gamma}{2}\right]\hat{\sigma}_{-}-\text{g}\;\sigma_{z}\;\hat{a}+\hat{G},$ $\displaystyle\hat{a}_{r}=\hat{a}_{in}+\sqrt{\kappa}\,\hat{a},$ $\displaystyle\hat{a}_{t}=\hat{a}_{in}^{\prime}+\sqrt{\kappa}\,\hat{a}.$ (1)

Here, $\omega$, $\omega_{c}$, and $\omega_{X^{-}}$ are the frequencies of the single photon, the cavity mode, and the $X^{-}$ dipole transition, respectively. g is the coupling strength between the cavity and the $X^{-}$. $\gamma/2$, $\kappa$, and $\kappa_{s}/2$ are the decay rates of the $X^{-}$ dipole, the cavity field, and the side leakage, respectively. $\hat{a}_{in}$ and $\hat{a}_{in}^{\prime}$ ($\hat{a}_{r}$ and $\hat{a}_{t}$) are the cavity input (output) fields. $\sigma_{z}$ is the inversion operator of the singly charged QD. $\hat{H}$ and $\hat{G}$ are the noise operators.

Figure 1: (a) A schematic diagram of a singly charged QD confined in a double-sided microcavity. (b) Energy levels and the spin-dependent optical transition rules for a charged QD-cavity emitter. $|R^{\uparrow}\rangle$ ($|L^{\downarrow}\rangle$) indicates that the propagation direction of the right- (left-) circularly polarized photon is parallel (antiparallel) to the growth axis of the QD. $|\Uparrow\rangle$ and $|\Downarrow\rangle$ denote the heavy-hole spin states $|\pm 3/2\rangle$, respectively. $|\uparrow\rangle$ and $|\downarrow\rangle$ indicate the electron spin states $|\pm 1/2\rangle$, respectively.
When $X^{-}$ predominantly stays in the ground states, i.e., taking $\langle\sigma_{z}\rangle\approx-1$, the reflection coefficient $r(\omega)$ and the transmission coefficient $t(\omega)$ of the QD-microcavity system can be written as [50, 74]

$\displaystyle r(\omega)=1+t(\omega),$ $\displaystyle t(\omega)=\frac{-\kappa\left[i(\omega_{X^{-}}-\omega)+\frac{\gamma}{2}\right]}{\left[i(\omega_{X^{-}}-\omega)+\frac{\gamma}{2}\right]\left[i(\omega_{c}-\omega)+\kappa+\frac{\kappa_{s}}{2}\right]+\text{g}^{2}}.$ (2)

In practical experiments, the inevitable imperfect birefringence of the cavity reduces the fidelity and the efficiency of the emitter by a few percent. Therefore, the spin-dependent transition rules can be summarized as

$\displaystyle|R^{\uparrow}\uparrow\rangle\rightarrow r|L^{\downarrow}\uparrow\rangle+t|R^{\uparrow}\uparrow\rangle,\quad|R^{\uparrow}\downarrow\rangle\rightarrow t_{0}|R^{\uparrow}\downarrow\rangle+r_{0}|L^{\downarrow}\downarrow\rangle,$ $\displaystyle|L^{\downarrow}\uparrow\rangle\rightarrow r|R^{\uparrow}\uparrow\rangle+t|L^{\downarrow}\uparrow\rangle,\quad|L^{\downarrow}\downarrow\rangle\rightarrow t_{0}|L^{\downarrow}\downarrow\rangle+r_{0}|R^{\uparrow}\downarrow\rangle,$ (3) $\displaystyle|L^{\uparrow}\downarrow\rangle\rightarrow r|R^{\downarrow}\downarrow\rangle+t|L^{\uparrow}\downarrow\rangle,\quad|R^{\downarrow}\uparrow\rangle\rightarrow t_{0}|R^{\downarrow}\uparrow\rangle+r_{0}|L^{\uparrow}\uparrow\rangle,$ $\displaystyle|R^{\downarrow}\downarrow\rangle\rightarrow r|L^{\uparrow}\downarrow\rangle+t|R^{\downarrow}\downarrow\rangle,\quad|L^{\uparrow}\uparrow\rangle\rightarrow t_{0}|L^{\uparrow}\uparrow\rangle+r_{0}|R^{\downarrow}\uparrow\rangle.$

Here $r_{0}$ and $t_{0}$ are described by Eq. (2) with $\text{g}=0$. If side leakage is not taken into account (i.e., $\kappa_{s}=0$) and $\text{g}^{2}\gg\gamma\kappa$, then $t=0$, $t_{0}=-1$, $r=1$, and $r_{0}=0$. In experiment, such an ideal condition is challenging to meet. The spin-dependent Kerr nonlinearity shown in Eq. (2) can be used to implement hyperparallel photonic elements, as shown in the following sections. We design compact quantum circuits for implementing a hyperparallel transistor (hyper-transistor), a hyperparallel router (hyper-router), and a hyperparallel DRAM (hyper-DRAM), encoded in the polarization and spatial DOFs of single-photon systems.

### 2.1 P-transistor via a practical QD-cavity emitter

The hyper-transistor simultaneously amplifies an arbitrary polarization state to the same state encoded on $N$ photons (p-transistor) and an arbitrary spatial state to the same state encoded on $N$ photons (s-transistor). The framework of our p-transistor, which does not affect the spatial DOF, is shown in Fig. 2. Suppose that the states of the gate photon and the QD spin are initially prepared as

$\displaystyle|\psi\rangle_{\text{gate photon}}=\alpha|R\rangle+\beta|L\rangle,$ $\displaystyle|\psi\rangle_{\text{electron}}=\frac{1}{\sqrt{2}}(|\uparrow\rangle-|\downarrow\rangle),$ (4)

where $\alpha$ and $\beta$ are arbitrary complex numbers satisfying $|\alpha|^{2}+|\beta|^{2}=1$.

Figure 2: A description for implementing a p-transistor. HWP${}^{22.5^{\circ}}_{1,\cdots,8}$ are half-wave plates rotated at 22.5∘, which implement Hadamard operations on the polarization DOF. HWP${}^{45^{\circ}}_{1,2}$ are half-wave plates oriented at 45∘, which perform the bit-flip operation $\sigma_{p,x}=|R\rangle\langle L|+|L\rangle\langle R|$. PBS1,⋯,4 are circularly polarizing beam splitters.
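To make these coefficients concrete, the following sketch evaluates $t(\omega)$ and $r(\omega)$ of Eq. (2), together with the cold-cavity values $t_{0}$ and $r_{0}$ (obtained by setting $\text{g}=0$), at resonance. It is a minimal numerical illustration in Python; the parameter values are assumptions chosen in units of $\kappa$, not values taken from the paper:

```python
def t_coeff(w, w_c, w_X, kappa, kappa_s, gamma, g):
    """Transmission coefficient t(omega) of the QD-cavity system, Eq. (2),
    valid when the QD stays in the ground state (<sigma_z> ~ -1).
    Setting g = 0 yields the cold-cavity coefficient t_0."""
    d_X = 1j * (w_X - w) + gamma / 2            # exciton detuning + dipole decay
    d_c = 1j * (w_c - w) + kappa + kappa_s / 2  # cavity detuning + decay + side leakage
    return -kappa * d_X / (d_X * d_c + g**2)

# Illustrative parameters in units of kappa (assumed, not from the paper):
kappa, kappa_s, gamma = 1.0, 0.0, 0.1
g = 2.4 * (kappa + kappa_s)     # strong coupling, cf. g/(kappa+kappa_s) = 2.4 [91]
w = w_c = w_X = 0.0             # resonant operation

t  = t_coeff(w, w_c, w_X, kappa, kappa_s, gamma, g)    # "hot" cavity
t0 = t_coeff(w, w_c, w_X, kappa, kappa_s, gamma, 0.0)  # "cold" cavity
r, r0 = 1 + t, 1 + t0
print(f"t  = {t:.4f},  r  = {r:.4f}")     # t  -> 0,  r  -> 1
print(f"t0 = {t0:.4f},  r0 = {r0:.4f}")   # t0 -> -1, r0 -> 0 when kappa_s = 0
```

Rerunning the sketch with a nonzero $\kappa_{s}$ shows directly how side leakage pulls $t_{0}$ away from $-1$, which is exactly the imperfection the following schemes are designed to tolerate.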
BS1,2 are nonpolarizing balanced beam splitters performing Hadamard operations on the spatial DOF, i.e., $|l^{1}\rangle\leftrightarrow(|l^{\tilde{1}}\rangle+|l^{\tilde{3}}\rangle)/\sqrt{2}$, $|l^{3}\rangle\leftrightarrow(|l^{\tilde{1}}\rangle-|l^{\tilde{3}}\rangle)/\sqrt{2}$. VBS1,2 are adjustable beam splitters with transmission coefficient $(t-t_{0})$ and reflection coefficient $\sqrt{1-(t-t_{0})^{2}}$. $D_{i}$ ($i=1,\cdots,4$) are single-photon detectors.

First, the gate photon is injected. As shown in Fig. 2, before and after the gate photon passes through the building block composed of PBS1, PBS2, BS1, VBS1, HWP${}^{22.5^{\circ}}_{2}$, HWP${}^{22.5^{\circ}}_{3}$, QD, and HWP${}^{45^{\circ}}_{1}$, Hadamard operations are performed on it by using HWP${}^{22.5^{\circ}}_{1}$ and HWP${}^{22.5^{\circ}}_{4}$, respectively. Specifically, the circularly polarizing beam splitters PBS1,2 transmit the $R$-polarized wave packets and reflect the $L$-polarized wave packets. HWP${}^{45^{\circ}}_{1}$ denotes a half-wave plate aligned at 45∘ that completes the bit-flip operation $\sigma_{p,x}=|R\rangle\langle L|+|L\rangle\langle R|$ on the passing photons. HWP${}^{22.5^{\circ}}_{1,\cdots,4}$ are half-wave plates oriented at 22.5∘ that achieve the polarization Hadamard operations

$\displaystyle|R\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|R\rangle+|L\rangle),\qquad|L\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|R\rangle-|L\rangle).$ (5)

The nonpolarizing balanced beam splitter BS1 induces a Hadamard operation on the spatial states $|l^{1}\rangle$, $|l^{\tilde{1}}\rangle$, $|l^{3}\rangle$, and $|l^{\tilde{3}}\rangle$ (spatial Hadamard operation)

$\displaystyle|l^{1}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|l^{\tilde{1}}\rangle+|l^{\tilde{3}}\rangle),\quad|l^{3}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|l^{\tilde{1}}\rangle-|l^{\tilde{3}}\rangle),$ $\displaystyle|l^{\tilde{1}}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|l^{1}\rangle+|l^{3}\rangle),\quad|l^{\tilde{3}}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|l^{1}\rangle-|l^{3}\rangle).$ (6)

The adjustable beam splitter VBS1 has a variable transmission coefficient $t-t_{0}$ and reflection coefficient $\sqrt{1-(t-t_{0})^{2}}$, and it can be realized by using two 50:50 BSs and two phase shifters [75]. Therefore, the operations ($\text{HWP}^{22.5^{\circ}}_{1}\rightarrow\text{PBS}_{1}\rightarrow\text{BS}_{1}\rightarrow\text{HWP}^{22.5^{\circ}}_{2,3}\rightarrow\text{QD}\rightarrow\text{HWP}^{22.5^{\circ}}_{2,3}\rightarrow\text{BS}_{1}\rightarrow\text{HWP}^{45^{\circ}}_{1}$) transform the state of the gate photon together with the QD from $|\psi_{p}\rangle_{0}$ to $|\psi_{p}\rangle_{1}$. Here

$\displaystyle|\psi_{p}\rangle_{0}=\frac{1}{\sqrt{2}}(\alpha|R\rangle+\beta|L\rangle)\otimes(|\uparrow\rangle-|\downarrow\rangle),$ (7)

$\displaystyle|\psi_{p}\rangle_{1}$ $\displaystyle=$ $\displaystyle\frac{1}{2}[\alpha(r+t_{0})|R^{1}\uparrow\rangle+\alpha(t-t_{0})|R^{4}\uparrow\rangle+\beta(r+t_{0})|R^{1}\uparrow\rangle+\beta(t-t_{0})|R^{4}\uparrow\rangle$ (8) $\displaystyle-\alpha(r+t_{0})|R^{1}\downarrow\rangle+\alpha(t-t_{0})|R^{4}\downarrow\rangle-\beta(r+t_{0})|R^{1}\downarrow\rangle+\beta(t-t_{0})|R^{4}\downarrow\rangle$ $\displaystyle+\alpha|L^{2}\uparrow\rangle-\beta|L^{2}\uparrow\rangle-\alpha|L^{2}\downarrow\rangle+\beta|L^{2}\downarrow\rangle].$

$|R^{i}\rangle$ ($|L^{i}\rangle$) represents the $R$-polarized ($L$-polarized) photon emitted from the spatial mode $i$.
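The polarization Hadamard of Eq. (5), the spatial Hadamard of Eq. (6), and the bit-flip $\sigma_{p,x}$ are all $2\times 2$ unitaries, so the optical network can be sanity-checked numerically. The numpy sketch below is an illustration only; the basis orderings $(|R\rangle,|L\rangle)$ and $(|l^{1}\rangle,|l^{3}\rangle)$ are our own conventions:

```python
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)    # HWP at 22.5 deg, Eq. (5); also BS1, Eq. (6)
sigma_px = np.array([[0, 1],
                     [1, 0]])           # HWP at 45 deg: |R><L| + |L><R|

R, L = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(H @ R)                            # (|R> + |L>)/sqrt(2)
print(H @ L)                            # (|R> - |L>)/sqrt(2)
print(np.allclose(H @ H, np.eye(2)))    # True: the second pass undoes the first
print(np.allclose(sigma_px @ R, L))     # True: bit flip R <-> L
```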
Second, after the wave packet $|L^{2}\rangle$ interacts with VBS1, the total state of the system evolves to

$\displaystyle|\psi_{p}\rangle_{2}$ $\displaystyle=$ $\displaystyle\frac{1}{2}[\alpha(t-t_{0})|R^{4}\uparrow\rangle+\beta(t-t_{0})|R^{4}\uparrow\rangle+\alpha(t-t_{0})|R^{4}\downarrow\rangle+\beta(t-t_{0})|R^{4}\downarrow\rangle$ (9) $\displaystyle+\alpha(t-t_{0})|L^{6}\uparrow\rangle-\beta(t-t_{0})|L^{6}\uparrow\rangle-\alpha(t-t_{0})|L^{6}\downarrow\rangle+\beta(t-t_{0})|L^{6}\downarrow\rangle$ $\displaystyle+\alpha(r+t_{0})|R^{1}\uparrow\rangle+\beta(r+t_{0})|R^{1}\uparrow\rangle-\alpha(r+t_{0})|R^{1}\downarrow\rangle-\beta(r+t_{0})|R^{1}\downarrow\rangle$ $\displaystyle+(\sqrt{1-(t-t_{0})^{2}})(\alpha|L^{5}\uparrow\rangle-\beta|L^{5}\uparrow\rangle-\alpha|L^{5}\downarrow\rangle+\beta|L^{5}\downarrow\rangle)].$

Third, the wave packets $|R^{4}\rangle$ and $|L^{6}\rangle$ arrive at PBS2 simultaneously, and PBS2 transforms $|\psi_{p}\rangle_{2}$ into

$\displaystyle|\psi_{p}\rangle_{3}$ $\displaystyle=$ $\displaystyle\frac{1}{2}[\alpha(t-t_{0})|R^{7}\uparrow\rangle+\beta(t-t_{0})|R^{7}\uparrow\rangle+\alpha(t-t_{0})|R^{7}\downarrow\rangle+\beta(t-t_{0})|R^{7}\downarrow\rangle$ (10) $\displaystyle+\alpha(t-t_{0})|L^{7}\uparrow\rangle-\beta(t-t_{0})|L^{7}\uparrow\rangle-\alpha(t-t_{0})|L^{7}\downarrow\rangle+\beta(t-t_{0})|L^{7}\downarrow\rangle$ $\displaystyle+\alpha(r+t_{0})|R^{1}\uparrow\rangle+\beta(r+t_{0})|R^{1}\uparrow\rangle-\alpha(r+t_{0})|R^{1}\downarrow\rangle-\beta(r+t_{0})|R^{1}\downarrow\rangle$ $\displaystyle+(\sqrt{1-(t-t_{0})^{2}})(\alpha|L^{5}\uparrow\rangle-\beta|L^{5}\uparrow\rangle-\alpha|L^{5}\downarrow\rangle+\beta|L^{5}\downarrow\rangle)].$

Fourth, the gate photon emitted from the spatial mode $7$ is detected in the basis $\\{(|R\rangle\pm|L\rangle)/\sqrt{2}\\}$ by HWP${}^{22.5^{\circ}}_{4}$ and the single-photon detectors $D_{1}$ and $D_{2}$. In detail, on detecting the gate photon in $(|R\rangle-|L\rangle)/\sqrt{2}$, we project $|\psi_{p}\rangle_{3}$ into

$\displaystyle|\psi_{p}\rangle_{4}=\alpha|\downarrow\rangle+\beta|\uparrow\rangle.$ (11)

Next, we inject a single photon, or an ultrafast ps- or fs-scale $(\pi)_{y}$ optical pulse, to perform the bit-flip operation $\sigma_{e,x}=|\uparrow\rangle\langle\downarrow|+|\downarrow\rangle\langle\uparrow|$ on the QD spin [65] and obtain the desired state

$\displaystyle|\psi_{p}^{\prime}\rangle_{4}=\alpha|\uparrow\rangle+\beta|\downarrow\rangle.$ (12)

Alternatively, on detecting the gate photon in $(|R\rangle+|L\rangle)/\sqrt{2}$, the desired state described by Eq. (12) is obtained directly.

Fifth, source photon 1 in the normalized state $|R_{1}\rangle(\zeta_{1}|c_{1}\rangle+\xi_{1}|d_{1}\rangle)$ is injected into the building block consisting of HWP${}^{22.5^{\circ}}_{5}$, PBS4, BS2, HWP${}^{22.5^{\circ}}_{6}$, HWP${}^{22.5^{\circ}}_{7}$, QD, VBS2, $D_{3}$, $D_{4}$, HWP${}^{45^{\circ}}_{2}$, PBS5, and HWP${}^{22.5^{\circ}}_{8}$. If $D_{3}$ and $D_{4}$ do not click, the state of the system collapses into

$\displaystyle|\psi_{p}\rangle_{5}=(t-t_{0})(\alpha|R_{1}\uparrow\rangle-\beta|L_{1}\downarrow\rangle)\otimes(\zeta_{1}|c_{1}\rangle+\xi_{1}|d_{1}\rangle).$ (13)

Here $c_{i}$ and $d_{i}$ are the two spatial modes of the source photon $i$. Otherwise, the system is projected into the spin state described by Eq. (12) (i.e., $\alpha|\uparrow\rangle+\beta|\downarrow\rangle$). That is, the scheme fails, and we need to repeat the above procedure.
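Each successfully heralded source photon contributes an amplitude factor $(t-t_{0})$ to the transferred state (Eq. (13)), so repeating the step for $N$ photons scales the amplitude as $(t-t_{0})^{N}$. The sketch below, under the same assumed parameters as the earlier one, estimates this heralded efficiency; the helper `needs_sigma_pz` is a hypothetical encoding of the parity-dependent feed-forward that completes the transistor in the steps described next:

```python
def t_coeff(w, w_c, w_X, kappa, kappa_s, gamma, g):
    # Eq. (2) transmission coefficient, valid for <sigma_z> ~ -1
    d_X = 1j * (w_X - w) + gamma / 2
    d_c = 1j * (w_c - w) + kappa + kappa_s / 2
    return -kappa * d_X / (d_X * d_c + g**2)

kappa, kappa_s, gamma = 1.0, 0.0, 0.1             # assumed values, as before
g = 2.4 * (kappa + kappa_s)
t  = t_coeff(0, 0, 0, kappa, kappa_s, gamma, g)
t0 = t_coeff(0, 0, 0, kappa, kappa_s, gamma, 0.0)

p1 = abs(t - t0)**2        # per-photon heralded success probability, from Eq. (13)
for N in (1, 5, 10):
    print(f"N = {N:2d}: heralded efficiency ~ {p1**N:.4f}")  # scales as |t-t0|**(2N)

def needs_sigma_pz(spin_outcome: str, N: int) -> bool:
    """Hypothetical helper: apply sigma_{p,z} on one outgoing photon iff the
    residual (-1)**N sign survives the QD spin measurement in {|+>, |->}."""
    return (spin_outcome == '+' and N % 2 == 1) or \
           (spin_outcome == '-' and N % 2 == 0)
```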
Sixth, repeating the above process from source photon 2 to $N$ in succession, after the source photons interact with the QD, if $D_{3}$ and $D_{4}$ are not clicked, the joint state collapses into

$\displaystyle|\psi_{p}\rangle_{6}$ $\displaystyle=$ $\displaystyle(t-t_{0})^{N}[\alpha|R_{1}R_{2}\cdots R_{N}\uparrow\rangle+(-1)^{N}\beta|L_{1}L_{2}\cdots L_{N}\downarrow\rangle]$ (14) $\displaystyle\otimes(\zeta_{1}|c_{1}\rangle+\xi_{1}|d_{1}\rangle)\otimes(\zeta_{2}|c_{2}\rangle+\xi_{2}|d_{2}\rangle)\otimes\cdots\otimes(\zeta_{N}|c_{N}\rangle+\xi_{N}|d_{N}\rangle).$

Here, the different spatial modes $c_{i}$ and $d_{i}$ of the source photon $i$ can be separated spatially with an optical switch [76, 77], or in time.

Seventh, to complete the p-transistor, we measure the spin of the QD in the basis $\\{|\pm\rangle=(|\uparrow\rangle\pm|\downarrow\rangle)/\sqrt{2}\\}$ and apply feed-forward operations on one of the outgoing photons to obtain the desired state

$\displaystyle|\psi_{p}\rangle_{7}$ $\displaystyle=$ $\displaystyle(t-t_{0})^{N}(\alpha|R_{1}R_{2}\cdots R_{N}\rangle+\beta|L_{1}L_{2}\cdots L_{N}\rangle)$ (15) $\displaystyle\otimes(\zeta_{1}|c_{1}\rangle+\xi_{1}|d_{1}\rangle)\otimes(\zeta_{2}|c_{2}\rangle+\xi_{2}|d_{2}\rangle)\otimes\cdots\otimes(\zeta_{N}|c_{N}\rangle+\xi_{N}|d_{N}\rangle).$

In detail, on detecting the QD spin in $|+\rangle$, if and only if $N$ is odd, we apply a single-qubit operation $\sigma_{p,z}=|R\rangle\langle R|-|L\rangle\langle L|$ on one of the outgoing photons. On detecting the QD spin in $|-\rangle$, if and only if $N$ is even, we apply a single-qubit operation $\sigma_{p,z}$ on one of the outgoing photons.

### 2.2 S-transistor via a practical QD-cavity emitter

Up to now, the p-transistor has been completed. However, in order to implement a single-photon transistor acting on the polarization and spatial DOFs simultaneously, an s-transistor should also be designed in this subsection (see Fig. 3).

Figure 3: A schematic diagram for implementing an s-transistor.

First, the gate photon in the state $(\alpha|R\rangle+\beta|L\rangle)\otimes(\gamma|a\rangle+\delta|b\rangle)$ is injected and arrives at BS1. BS1 transforms the state of the gate photon and the QD from

$\displaystyle|\psi_{s}\rangle_{0}=\frac{1}{\sqrt{2}}(\alpha|R\rangle+\beta|L\rangle)\otimes(\gamma|a\rangle+\delta|b\rangle)\otimes(|\uparrow\rangle-|\downarrow\rangle),$ (16)

into

$\displaystyle|\psi_{s}\rangle_{1}=\frac{1}{2}(\alpha|R\rangle+\beta|L\rangle)(\gamma|a\rangle+\gamma|b\rangle+\delta|a\rangle-\delta|b\rangle)(|\uparrow\rangle-|\downarrow\rangle).$ (17)

Here, $a$ and $b$ are the two spatial modes of the gate photon for implementing the s-transistor, and $|\alpha|^{2}+|\beta|^{2}=1$, $|\gamma|^{2}+|\delta|^{2}=1$.

Second, the wave packets emitted from the spatial mode $b$ arrive at VBS directly. Meanwhile, the $R$-polarized ($L$-polarized) component emitted from the spatial mode $a$ passes through the building block comprising PBS1, BS2, HWP${}^{22.5^{\circ}}_{1}$, QD, HWP${}^{22.5^{\circ}}_{2}$, HWP${}^{45^{\circ}}_{1}$, and PBS2 (PBS1, HWP${}^{45^{\circ}}_{2}$, BS3, HWP${}^{22.5^{\circ}}_{3}$, QD, HWP${}^{22.5^{\circ}}_{4}$, and PBS2).
These operations transform $|\psi_{s}\rangle_{1}$ into

$\displaystyle|\psi_{s}\rangle_{2}$ $\displaystyle=$ $\displaystyle\frac{1}{2}(t-t_{0})(\alpha|R^{l}\uparrow\rangle+\alpha|R^{l}\downarrow\rangle+\beta|L^{r}\uparrow\rangle+\beta|L^{r}\downarrow\rangle)(\gamma|a\rangle+\delta|a\rangle)$ (18) $\displaystyle+\frac{1}{2}(t-t_{0})(\alpha|R\rangle+\beta|L\rangle)(|\uparrow\rangle-|\downarrow\rangle)(\gamma|b\rangle-\delta|b\rangle)$ $\displaystyle+\frac{1}{2}(r+t_{0})(\alpha|R^{l,D_{1}}\uparrow\rangle-\alpha|R^{l,D_{1}}\downarrow\rangle+\beta|R^{r,D_{2}}\uparrow\rangle-\beta|R^{r,D_{2}}\downarrow\rangle)(\gamma|a\rangle+\delta|a\rangle)$ $\displaystyle+\frac{1}{2}\sqrt{1-(t-t_{0})^{2}}(\alpha|R^{D_{3}}\rangle+\beta|L^{D_{3}}\rangle)(|\uparrow\rangle-|\downarrow\rangle)(\gamma|b\rangle-\delta|b\rangle).$

Here, the superscript $l$ ($r$) denotes the wave packets emitted from the left (right) arm, and the superscript $D_{1}$ ($D_{2}$) denotes that the wave packets will be detected by detector $D_{1}$ ($D_{2}$). PBS2 transforms $|\psi_{s}\rangle_{2}$ into

$\displaystyle|\psi_{s}\rangle_{3}$ $\displaystyle=$ $\displaystyle\frac{1}{2}(t-t_{0})(\alpha|R^{r}\uparrow\rangle+\alpha|R^{r}\downarrow\rangle+\beta|L^{r}\uparrow\rangle+\beta|L^{r}\downarrow\rangle)(\gamma|a\rangle+\delta|a\rangle)$ (19) $\displaystyle+\frac{1}{2}(t-t_{0})(\alpha|R\rangle+\beta|L\rangle)(|\uparrow\rangle-|\downarrow\rangle)(\gamma|b\rangle-\delta|b\rangle)$ $\displaystyle+\frac{1}{2}(r+t_{0})(\alpha|R^{l,D_{1}}\uparrow\rangle-\alpha|R^{l,D_{1}}\downarrow\rangle+\beta|R^{r,D_{2}}\uparrow\rangle-\beta|R^{r,D_{2}}\downarrow\rangle)(\gamma|a\rangle+\delta|a\rangle)$ $\displaystyle+\frac{1}{2}\sqrt{1-(t-t_{0})^{2}}(\alpha|R^{D_{3}}\rangle+\beta|L^{D_{3}}\rangle)(|\uparrow\rangle-|\downarrow\rangle)(\gamma|b\rangle-\delta|b\rangle).$

Third, as shown in Fig. 3, after the wave packets emitted from the right arm and the spatial mode $b$ mix at BS4, $|\psi_{s}\rangle_{3}$ evolves into

$\displaystyle|\psi_{s}\rangle_{4}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}(t-t_{0})[|a\rangle(\gamma|\uparrow\rangle+\delta|\downarrow\rangle)+|b\rangle(\gamma|\downarrow\rangle+\delta|\uparrow\rangle)]\otimes(\alpha|R\rangle+\beta|L\rangle)$ (20) $\displaystyle+\frac{1}{2}(r+t_{0})(\alpha|R^{l,D_{1}}\uparrow\rangle-\alpha|R^{l,D_{1}}\downarrow\rangle+\beta|R^{r,D_{2}}\uparrow\rangle-\beta|R^{r,D_{2}}\downarrow\rangle)(\gamma|a\rangle+\delta|a\rangle)$ $\displaystyle+\frac{1}{2}\sqrt{1-(t-t_{0})^{2}}(\alpha|R^{D_{3}}\rangle+\beta|L^{D_{3}}\rangle)(|\uparrow\rangle-|\downarrow\rangle)(\gamma|b\rangle-\delta|b\rangle).$

Fourth, the spatial modes of the outgoing photon are measured. In detail, on detecting the gate photon in the spatial mode $a$ while $D_{1}$, $D_{2}$ and $D_{3}$ do not click, we project $|\psi_{s}\rangle_{4}$ into the desired state

$\displaystyle|\psi_{s}\rangle_{5}=(\gamma|\uparrow\rangle+\delta|\downarrow\rangle)\otimes(\alpha|R\rangle+\beta|L\rangle).$ (21)

Alternatively, on detecting the gate photon in the spatial mode $b$ while $D_{1}$, $D_{2}$ and $D_{3}$ do not click, we project $|\psi_{s}\rangle_{4}$ into the state

$\displaystyle|\psi_{s}^{\prime}\rangle_{5}=(\gamma|\downarrow\rangle+\delta|\uparrow\rangle)\otimes(\alpha|R\rangle+\beta|L\rangle).$ (22)

We then perform a bit-flip operation $\sigma_{x}$ on the QD spin to obtain the desired state described by Eq. (21).

Fifth, we repeat the above process from photon 1 to $N$.
After the $N$ photons in the state $|c_{i}\rangle(\alpha_{i}|R\rangle+\beta_{i}|L\rangle)$ pass through the block in succession, when $D_{1}$, $D_{2}$ and $D_{3}$ are not clicked, the joint state collapses into

$\displaystyle|\psi_{s}\rangle_{6}$ $\displaystyle=$ $\displaystyle(t-t_{0})^{N}[\gamma|c_{1}c_{2}...c_{N}\rangle|\uparrow\rangle+(-1)^{N}\delta|d_{1}d_{2}...d_{N}\rangle|\downarrow\rangle]$ (23) $\displaystyle\otimes(\alpha_{1}|R\rangle+\beta_{1}|L\rangle)\otimes(\alpha_{2}|R\rangle+\beta_{2}|L\rangle)\otimes...\otimes(\alpha_{N}|R\rangle+\beta_{N}|L\rangle).$

Sixth, we measure the spin of the QD in the basis $\\{|\pm\rangle\\}$ and apply proper feed-forward operations on the spatial DOF to complete the s-transistor, i.e., to achieve the state

$\displaystyle|\psi_{s}\rangle_{7}$ $\displaystyle=$ $\displaystyle(t-t_{0})^{N}(\gamma|c_{1}c_{2}...c_{N}\rangle+\delta|d_{1}d_{2}...d_{N}\rangle)$ (24) $\displaystyle\otimes(\alpha_{1}|R\rangle+\beta_{1}|L\rangle)\otimes(\alpha_{2}|R\rangle+\beta_{2}|L\rangle)\otimes...\otimes(\alpha_{N}|R\rangle+\beta_{N}|L\rangle).$

In detail, on detecting the electronic state $|+\rangle$, if and only if $N$ is odd, we apply a single-qubit operation $\sigma_{z}$ on one of the spatial modes to correct the minus sign. On detecting the electronic state $|-\rangle$, if and only if $N$ is even, we apply a single-qubit operation $\sigma_{z}$ on one of the spatial modes.

## 3 Hyper-router via a practical QD-cavity emitter

The quantum router [78] is a key quantum technology for quantum networks and quantum computers. It directs a signal qubit to its intended destination according to the state of the control qubits, while keeping the state of the signal qubit unchanged. In this section, we introduce the action of our hyper-router, which operates on the polarization and spatial DOFs simultaneously. As shown in Fig. 4, the photon is used as the signal qubit in the normalized state $(\alpha|R\rangle+\beta|L\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)$, and the QD spin serves as the control qubit in the normalized state $(\gamma|\uparrow\rangle+\eta|\downarrow\rangle)$.

Figure 4: Schematic diagram of hyper-router.

First, the photon is injected and passes through PBS1 followed by HWP${}^{22.5^{\circ}}_{1}$ (HWP${}^{22.5^{\circ}}_{2}$). The operations ($\rm PBS_{1}\rightarrow HWP^{22.5^{\circ}}_{1}$ and $\rm PBS_{1}\rightarrow HWP^{22.5^{\circ}}_{2}$) transform the system from the initial state $|\varphi\rangle_{0}$ to $|\varphi\rangle_{1}$. Here

$\displaystyle|\varphi\rangle_{0}=(\alpha|R\rangle+\beta|L\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)\otimes(\gamma|\uparrow\rangle+\eta|\downarrow\rangle),$ (25)

$\displaystyle|\varphi\rangle_{1}=\frac{1}{\sqrt{2}}(\alpha(|R^{r}\rangle+|L^{r}\rangle)+\beta(|R^{l}\rangle-|L^{l}\rangle))\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)\otimes(\gamma|\uparrow\rangle+\eta|\downarrow\rangle).$ (26)

Second, the wave packets emitted from the left path, $|R^{l}\rangle$ and $|L^{l}\rangle$ (the right path, $|R^{r}\rangle$ and $|L^{r}\rangle$), pass through the block composed of PBS2, VBS1, BS1, QD, HWP${}^{22.5^{\circ}}_{3}$, HWP${}^{22.5^{\circ}}_{4}$, HWP${}^{45^{\circ}}_{1}$, and PBS4 (PBS3, VBS2, BS2, QD, HWP${}^{22.5^{\circ}}_{5}$, HWP${}^{22.5^{\circ}}_{6}$, HWP${}^{45^{\circ}}_{2}$, and PBS5).
These two blocks transform $|\varphi\rangle_{1}$ into

$\displaystyle|\varphi\rangle_{2}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}(\alpha(t-t_{0})|R^{r}\rangle(\gamma|\uparrow\rangle-\eta|\downarrow\rangle)+\alpha(t-t_{0})|L^{r}\rangle(\gamma|\uparrow\rangle+\eta|\downarrow\rangle)$ (27) $\displaystyle+\beta(t-t_{0})|R^{l}\rangle(\gamma|\uparrow\rangle-\eta|\downarrow\rangle)-\beta(t-t_{0})|L^{l}\rangle(\gamma|\uparrow\rangle+\eta|\downarrow\rangle))\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)$ $\displaystyle+\frac{1}{\sqrt{2}}(\alpha(r+t_{0})|R^{r,D_{2}}\rangle+\beta(r+t_{0})|R^{l,D_{1}}\rangle+\alpha\sqrt{1-(t-t_{0})^{2}}|L^{r,D_{4}}\rangle$ $\displaystyle-\beta\sqrt{1-(t-t_{0})^{2}}|L^{l,D_{3}}\rangle)\otimes(\gamma|\uparrow\rangle+\eta|\downarrow\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle).$

Third, before the wave packets converge at PBS6, two Hadamard operations are applied to them by using HWP${}^{22.5^{\circ}}_{7}$ and HWP${}^{22.5^{\circ}}_{8}$. That is, HWP${}^{22.5^{\circ}}_{7}$ and HWP${}^{22.5^{\circ}}_{8}$ take the system from $|\varphi\rangle_{2}$ to

$\displaystyle|\varphi\rangle_{3}$ $\displaystyle=$ $\displaystyle(\alpha(t-t_{0})(\gamma|R^{r}\rangle|\uparrow\rangle-\eta|L^{r}\rangle|\downarrow\rangle)+\beta(t-t_{0})(\gamma|L^{l}\rangle|\uparrow\rangle-\eta|R^{l}\rangle|\downarrow\rangle))$ (28) $\displaystyle\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)+\frac{1}{\sqrt{2}}(\alpha(r+t_{0})|R^{r,D_{2}}\rangle+\beta(r+t_{0})|R^{l,D_{1}}\rangle$ $\displaystyle+\alpha\sqrt{1-(t-t_{0})^{2}}|L^{r,D_{4}}\rangle-\beta\sqrt{1-(t-t_{0})^{2}}|L^{l,D_{3}}\rangle)$ $\displaystyle\otimes(\gamma|\uparrow\rangle+\eta|\downarrow\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle).$

Fourth, after PBS6, $|\varphi\rangle_{3}$ changes into

$\displaystyle|\varphi\rangle_{4}$ $\displaystyle=$ $\displaystyle(\alpha(t-t_{0})(\gamma|R^{l}\rangle|\uparrow\rangle-\eta|L^{r}\rangle|\downarrow\rangle)+\beta(t-t_{0})(\gamma|L^{l}\rangle|\uparrow\rangle-\eta|R^{r}\rangle|\downarrow\rangle))$ (29) $\displaystyle\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)+\frac{1}{\sqrt{2}}(\alpha(r+t_{0})|R^{r,D_{2}}\rangle+\beta(r+t_{0})|R^{l,D_{1}}\rangle$ $\displaystyle+\alpha\sqrt{1-(t-t_{0})^{2}}|L^{r,D_{4}}\rangle-\beta\sqrt{1-(t-t_{0})^{2}}|L^{l,D_{3}}\rangle)$ $\displaystyle\otimes(\gamma|\uparrow\rangle+\eta|\downarrow\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle).$

HWP${}^{45^{\circ}}_{3}$ flips the polarization of the outgoing photon emitted from the right-hand side; that is, Eq.
(29) becomes

$\displaystyle|\varphi\rangle_{5}$ $\displaystyle=$ $\displaystyle(\alpha(t-t_{0})(\gamma|R^{l}\rangle|\uparrow\rangle-\eta|R^{r}\rangle|\downarrow\rangle)+\beta(t-t_{0})(\gamma|L^{l}\rangle|\uparrow\rangle-\eta|L^{r}\rangle|\downarrow\rangle))$ (30) $\displaystyle\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)+\frac{1}{\sqrt{2}}(\alpha(r+t_{0})|R^{r,D_{2}}\rangle+\beta(r+t_{0})|R^{l,D_{1}}\rangle$ $\displaystyle+\alpha\sqrt{1-(t-t_{0})^{2}}|L^{r,D_{4}}\rangle-\beta\sqrt{1-(t-t_{0})^{2}}|L^{l,D_{3}}\rangle)$ $\displaystyle\otimes(\gamma|\uparrow\rangle+\eta|\downarrow\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle).$

If $D_{1}$, $D_{2}$, $D_{3}$ and $D_{4}$ are not clicked, the system collapses to

$\displaystyle|\varphi\rangle_{6}$ $\displaystyle=$ $\displaystyle\gamma(t-t_{0})(\alpha|R^{l}\rangle+\beta|L^{l}\rangle)(\delta_{1}|a^{l}\rangle+\delta_{2}|b^{l}\rangle)|\uparrow\rangle$ (31) $\displaystyle-\eta(t-t_{0})(\alpha|R^{r}\rangle+\beta|L^{r}\rangle)(\delta_{1}|a^{r}\rangle+\delta_{2}|b^{r}\rangle)|\downarrow\rangle.$

From Eqs. (25)-(31), one can see that the circuit in Fig. 4 accomplishes a hyper-router acting on the polarization and spatial DOFs. The signal photon can be directed to the right port or the left port, controlled by the spin of the electron in the QD.

## 4 Hyperparallel DRAM via a practical QD-cavity emitter

Figure 5: Schematic diagram of hyper-DRAM.

DRAM is a key quantum technology for quantum computers and quantum networks; it can load, store, and unload a polarization photon controlled by the state of the control qubits. Fig. 5 shows the diagram of an optical spin-based hyper-DRAM, in which the loading, storing, and reading out of the photon are controlled by BS1 and BS2. Suppose that the initial state of the system, constituted of two QDs and one photon, is prepared as

$\displaystyle|\phi\rangle_{0}$ $\displaystyle=$ $\displaystyle(\alpha|R\rangle+\beta|L\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)\otimes(\gamma_{1}|\uparrow_{1}\rangle+\gamma_{2}|\downarrow_{1}\rangle)\otimes(\eta_{1}|\uparrow_{2}\rangle+\eta_{2}|\downarrow_{2}\rangle).$ (32)

Here, $|\alpha|^{2}+|\beta|^{2}=1$, $|\delta_{1}|^{2}+|\delta_{2}|^{2}=1$, $|\gamma_{1}|^{2}+|\gamma_{2}|^{2}=1$, and $|\eta_{1}|^{2}+|\eta_{2}|^{2}=1$.

First, PBS1 splits the input photon state $(\alpha|R\rangle+\beta|L\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)$ into two components, $\alpha|R\rangle(\delta_{1}|a\rangle+\delta_{2}|b\rangle)$ and $\beta|L\rangle(\delta_{1}|a\rangle+\delta_{2}|b\rangle)$. Then a bit-flip operation $\sigma_{x}$ is performed on the $R$-polarized component by using HWP${}^{45^{\circ}}_{1}$.
That is, PBS1 and HWP${}^{45^{\circ}}_{1}$ transform the initial state $|\phi\rangle_{0}$ into

$\displaystyle|\phi\rangle_{1}=(\alpha|L^{l,1}\rangle+\beta|L^{r,1}\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)\otimes(\gamma_{1}|\uparrow_{1}\rangle+\gamma_{2}|\downarrow_{1}\rangle)\otimes(\eta_{1}|\uparrow_{2}\rangle+\eta_{2}|\downarrow_{2}\rangle).$ (33)

Second, after the $L^{r,1}$ ($L^{l,1}$) component interacts with the block constituted of BS1, HWP${}^{22.5^{\circ}}_{1}$, HWP${}^{22.5^{\circ}}_{2}$, and QD1 (BS2, HWP${}^{22.5^{\circ}}_{3}$, HWP${}^{22.5^{\circ}}_{4}$, and QD2), the state of the system can be written as

$\displaystyle|\phi\rangle_{2}=-(\alpha|L^{l,1}\rangle+\beta|L^{r,1}\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)\otimes(\gamma_{1}|\uparrow_{1}\rangle+\gamma_{2}|\downarrow_{1}\rangle)\otimes(\eta_{1}|\uparrow_{2}\rangle+\eta_{2}|\downarrow_{2}\rangle).$ (34)

Here BS1 and BS2 complete the transformations

$\displaystyle|L^{1}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|L^{3}\rangle+|L^{4}\rangle),\quad|L^{2}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|L^{3}\rangle-|L^{4}\rangle),$ $\displaystyle|L^{3}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|L^{1}\rangle+|L^{2}\rangle),\quad|L^{4}\rangle\leftrightarrow\frac{1}{\sqrt{2}}(|L^{1}\rangle-|L^{2}\rangle).$ (35)

Third, HWP${}^{45^{\circ}}_{1}$ converts $L^{l,1}$ into $R^{l,1}$. Then $R^{l,1}$ and $L^{r,1}$ pass through PBS1 and arrive at a highly reflective mirror. Subsequently, the mirror reflects the photon into the second round; after $N$ rounds, the state of the whole system evolves to

$\displaystyle|\phi\rangle_{3}$ $\displaystyle=$ $\displaystyle(-1)^{N}(\alpha|L^{l,1}\rangle+\beta|L^{r,1}\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)\otimes(\gamma_{1}|\uparrow_{1}\rangle+\gamma_{2}|\downarrow_{1}\rangle)$ (36) $\displaystyle\otimes(\eta_{1}|\uparrow_{2}\rangle+\eta_{2}|\downarrow_{2}\rangle).$

From Eqs. (32)-(36), one can see that the photon is loaded and stored. For reading out, after the wave packets interact with the two QDs, we rotate BS1 and BS2 by 180∘ to complete the transformations

$\displaystyle|L^{1}\rangle\rightarrow\frac{1}{\sqrt{2}}(-|L^{3}\rangle+|L^{4}\rangle),\quad|L^{2}\rangle\rightarrow\frac{1}{\sqrt{2}}(|L^{3}\rangle+|L^{4}\rangle),$ $\displaystyle|L^{3}\rangle\rightarrow\frac{1}{\sqrt{2}}(-|L^{1}\rangle+|L^{2}\rangle),\quad|L^{4}\rangle\rightarrow\frac{1}{\sqrt{2}}(|L^{1}\rangle+|L^{2}\rangle).$ (37)

After the wave packets mix at BS1 and BS2, the state of the system changes to

$\displaystyle|\phi\rangle_{4}$ $\displaystyle=$ $\displaystyle(-1)^{N+1}(\alpha|L^{l,2}\rangle+\beta|L^{r,2}\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)\otimes(\gamma_{1}|\uparrow_{1}\rangle+\gamma_{2}|\downarrow_{1}\rangle)$ (38) $\displaystyle\otimes(\eta_{1}|\uparrow_{2}\rangle+\eta_{2}|\downarrow_{2}\rangle).$

Next, as shown in Fig. 5, the $L^{l,2}$-polarized component is transformed into the $R^{l,2}$-polarized component by HWP${}^{45^{\circ}}_{2}$. That is, HWP${}^{45^{\circ}}_{2}$ evolves $|\phi\rangle_{4}$ to

$\displaystyle|\phi\rangle_{5}$ $\displaystyle=$ $\displaystyle(-1)^{N+1}(\alpha|R^{l,2}\rangle+\beta|L^{r,2}\rangle)\otimes(\delta_{1}|a\rangle+\delta_{2}|b\rangle)\otimes(\gamma_{1}|\uparrow_{1}\rangle+\gamma_{2}|\downarrow_{1}\rangle)$ (39) $\displaystyle\otimes(\eta_{1}|\uparrow_{2}\rangle+\eta_{2}|\downarrow_{2}\rangle).$

Finally, the $R^{l,2}$- and $L^{r,2}$-polarized components exit through PBS2. That is, the information of the photon is read out.
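The port bookkeeping of Eqs. (33)-(38) can be checked with a small numpy sketch. Treating the two input/output ports as a 2-vector, and assuming (as Eqs. (33)-(34) state) that one pass through the QD blocks imprints an overall factor $-1$ on the photon, $N$ storage rounds accumulate $(-1)^{N}$ in port 1, while the rotated-BS readout of Eq. (37) ejects the photon from port 2 with one extra sign flip. The matrix model below is an illustrative reconstruction, not the authors' own formulation:

```python
import numpy as np

s2 = np.sqrt(2)
B     = np.array([[1,  1], [1, -1]]) / s2   # BS1/BS2 during storage, Eq. (35)
B_rot = np.array([[-1, 1], [1,  1]]) / s2   # BS1/BS2 rotated by 180 deg, Eq. (37)
Q     = -np.eye(2)      # assumption: the QD blocks imprint -1, cf. Eqs. (33)-(34)

store_round = B @ Q @ B      # in through the BS, QD interaction, back out (B = B^T = B^-1)
readout     = B_rot @ Q @ B  # final pass exits through the rotated BS

psi = np.array([1.0, 0.0])   # photon loaded into port 1
N = 3
for _ in range(N):
    psi = store_round @ psi
print(psi)                   # [(-1)**N, 0]: photon still stored in port 1
print(readout @ psi)         # [0, (-1)**(N+1)]: photon read out of port 2
```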
## 5 Conclusion

The realizations of quantum computers, quantum networks, and the quantum internet require not only quantum gates and quantum memories but also quantum transistors, routers, and DRAMs. An optical transistor can serve as a bridge between quantum networks and all-optical networks; in an all-optical transistor, the source light beam (strong light) is controlled by the gate photon (weak light). It is known that the more complex quantum networks become, the more pronounced the need for directing the signal qubit (input) to its intended destination (output) according to the state of the control qubit. An optical DRAM can be used for loading, storing, and reading out photons. Previous works on optical transistors and other optical devices are mainly limited to the one-DOF case [79, 80, 81, 82, 83, 84].

In this paper, we have designed compact quantum circuits for deterministically implementing a single-photon hyper-transistor, hyper-router, and hyper-DRAM, respectively. The strong interactions between individual photons are achieved by employing a cavity quantum electrodynamics system with a QD. Our schemes act on the polarization and the spatial DOFs of the photon simultaneously, and the single-photon hyper-transistor can be applied to amplify an arbitrary single-photon hyperparallel state into the corresponding $N$-photon hyperentangled state. Balanced reflectance for the "hot" and the "cold" cavity, which is necessary for single-sided cavities, is not necessary for our schemes to achieve high fidelities. In previous works [20, 27, 28, 30, 85, 86, 87, 88], the imperfect birefringence of the cavity induced by side leakage is often not taken into account; these imperfections reduce the fidelity of a practical emitter by a few percent. Our schemes are not only robust against the imperfect birefringence of the cavity, so that unity fidelities can be achieved, but also immune to quantum fluctuations [89] (because the QD stays in the ground state), spin noise, spectral diffusion, and pure dephasing in QDs. Moreover, the strong-coupling limitation (the fidelity climbs with increasing QD-cavity coupling strength [50]) is avoided, and owing to the hyperparallel design, high capacity, high speed, and a low loss rate can be realized in our work.

It is known that strong coupling is a challenge in experiment. Fortunately, $\kappa_{s}/\kappa=0.7$ with $g/(\kappa+\kappa_{s})=1.0$ was reported in a micropillar cavity in 2007 [90]. In 2008, the coupling strength was raised from $g/(\kappa+\kappa_{s})=0.5$ ($Q$ = 8800) to $g/(\kappa+\kappa_{s})=2.4$ [91]. $\kappa_{s}/\kappa=0.05$ in the strong regime has been achieved in a pillar microcavity with $Q=9000$ [92]. A polarization-degenerate cavity with a single QD for polarization-based QIP has also been demonstrated experimentally [93, 94].

The mechanism of our schemes is deterministic, and the fidelities are unity in principle; the efficiencies, however, are not unity. The success of our schemes is heralded by single-photon detectors, and the success probabilities are also affected by $\kappa_{s}$, $g/\kappa$, and the PBSs, VBSs, and BSs. Some inevitable experimental imperfections, including the effects of hole mixing, dark transitions, single-photon-detector dark counts, and the balancing of PBSs and BSs, will decrease the fidelities and efficiencies of the presented optical elements.

## Acknowledgments

The work is supported by the National Natural Science Foundation of China under Grant No.
11604012, and the Fundamental Research Funds for the Central Universities under Grant No. 230201506500024, the National Natural Science Foundation of China under Grant No. 11647042, and the Fundamental Research Funds for the Central Universities under Grant No. FRF-BR-17-004B. ## References * [1] M. A. Nielsen and I. Chuang, “Quantum computation and quantum information,” (2002). * [2] X. K. Song, Q. Ai, J. Qiu, and F. G. Deng, “Physically feasible three-level transitionless quantum driving with multiple schrödinger dynamics,” Physical Review A 93, 052324 (2016). * [3] B.-X. Wang, M.-J. Tao, Q. Ai, T. Xin, N. Lambert, D. Ruan, Y.-C. Cheng, F. Nori, F.-G. Deng, and G.-L. Long, “Efficient quantum simulation of photosynthetic light harvesting,” NPJ Quantum Information 4, 1–6 (2018). * [4] X. L. Wang, X. D. Cai, Z. E. Su, M. C. Chen, D. Wu, L. Li, N. L. Liu, C. Y. Lu, and J. W. Pan, “Quantum teleportation of multiple degrees of freedom of a single photon,” Nature 518, 516 (2015). * [5] Y. B. Sheng, F. G. Deng, and G. L. Long, “Complete hyperentangled-Bell-state analysis for quantum communication,” Physical Review A 82, 032318 (2010). * [6] X. H. Li and S. Ghose, “Complete hyperentangled Bell state analysis for polarization and time-bin hyperentanglement,” Optics Express 24, 18388–18398 (2016). * [7] J. T. Barreiro, T. C. Wei, and P. G. Kwiat, “Beating the channel capacity limit for linear photonic superdense coding,” Nature Physics 4, 282 (2008). * [8] C. Schuck, G. Huber, C. Kurtsiefer, and H. Weinfurter, “Complete deterministic linear optics Bell state analysis,” Physical Review Letters 96, 190501 (2006). * [9] M. Barbieri, G. Vallone, P. Mataloni, and F. D. Martini, “Complete and deterministic discrimination of polarization Bell states assisted by momentum entanglement,” Physical Review A 75, 042317 (2007). * [10] S. Walborn, S. Pádua, and C. Monken, “Hyperentanglement-assisted Bell-state analysis,” Physical Review A 68, 042313 (2003). * [11] Y. B. Sheng and F. G. Deng, “Deterministic entanglement purification and complete nonlocal Bell-state analysis with hyperentanglement,” Physical Review A 81, 032307 (2010). * [12] Y. B. Sheng and F. G. Deng, “One-step deterministic polarization-entanglement purification using spatial entanglement,” Physical Review A 82, 044305 (2010). * [13] C. Cao, C. Wang, L. Y. He, and R. Zhang, “Atomic entanglement purification and concentration using coherent state input-output process in low-Q cavity QED regime,” Optics Express 21, 4093–4105 (2013). * [14] J. T. Barreiro, N. K. Langford, N. A. Peters, and P. G. Kwiat, “Generation of hyperentangled photon pairs,” Physical Review Letters 95, 260501 (2005). * [15] R. Ceccarelli, G. Vallone, F. De Martini, P. Mataloni, and A. Cabello, “Experimental entanglement and nonlocality of a two-photon six-qubit cluster state,” Physical Review Letters 103, 160401 (2009). * [16] W. B. Gao, C. Y. Lu, X. Yao, P. Xu, O. Gühne, A. Goebel, Y. A. Chen, C. Z. Peng, Z. B. Chen, and J. W. Pan, “Experimental demonstration of a hyper-entangled ten-qubit Schrödinger cat state,” Nature Physics 6, 331 (2010). * [17] T. J. Wang, Y. Lu, and G. L. Long, “Generation and complete analysis of the hyperentangled Bell state for photons assisted by quantum-dot spins in optical microcavities,” Physical Review A 86, 042337 (2012). * [18] Q. Liu and M. Zhang, “Generation and complete nondestructive analysis of hyperentanglement assisted by nitrogen-vacancy centers in resonators,” Physical Review A 91, 062321 (2015). * [19] G. Y. Wang, Q. Ai, B. C. 
Ren, T. Li, and F. G. Deng, “Error-detected generation and complete analysis of hyperentangled Bell states for photons assisted by quantum-dot spins in double-sided optical microcavities,” Optics Express 24, 28444–28458 (2016). * [20] T. J. Wang, S. Y. Song, and G. L. Long, “Quantum repeater based on spatial entanglement of photons and quantum-dot spins in optical microcavities,” Physical Review A 85, 062311 (2012). * [21] B. C. Ren, F. F. Du, and F. G. Deng, “Hyperentanglement concentration for two-photon four-qubit systems with linear optics,” Physical Review A 88, 012302 (2013). * [22] X. H. Li and S. Ghose, “Hyperentanglement concentration for time-bin and polarization hyperentangled photons,” Physical Review A 91, 062302 (2015). * [23] C. Cao, T. J. Wang, S. C. Mi, R. Zhang, and C. Wang, “Nonlocal hyperconcentration on entangled photons using photonic module system,” Annals of Physics 369, 128–138 (2016). * [24] H. J. Liu, Y. Xia, and J. Song, “Efficient hyperentanglement concentration for N-particle Greenberger–Horne–Zeilinger state assisted by weak cross-Kerr nonlinearity,” Quantum Information Processing 15, 2033–2052 (2016). * [25] C. Cao, X. Chen, Y. W. Duan, L. Fan, R. Zhang, T. J. Wang, and C. Wang, “Concentrating partially entangled W-class states on nonlocal atoms using low-Q optical cavity and linear optical elements,” SCIENCE CHINA Physics, Mechanics & Astronomy 59, 100315 (2016). * [26] M. Y. Wang, J. Z. Xu, F. L. Yan, and T. Gao, “Entanglement concentration for polarization–spatial–time-bin hyperentangled Bell states,” EPL (Europhysics Letters) 123, 60002 (2018). * [27] B. C. Ren, F. F. Du, and F. G. Deng, “Two-step hyperentanglement purification with the quantum-state-joining method,” Physical Review A 90, 052309 (2014). * [28] G. Y. Wang, Q. Liu, and F. G. Deng, “Hyperentanglement purification for two-photon six-qubit quantum systems,” Physical Review A 94, 032319 (2016). * [29] T. Li and G. L. Long, “Hyperparallel optical quantum computation assisted by atomic ensembles embedded in double-sided optical cavities,” Physical Review A 94, 022343 (2016). * [30] H. R. Wei, F. G. Deng, and G. L. Long, “Hyper-parallel Toffoli gate on three-photon system with two degrees of freedom assisted by single-sided optical microcavities,” Optics Express 24, 18619–18630 (2016). * [31] B. Y. Xia, C. Cao, Y. H. Han, and R. Zhang, “Universal photonic three-qubit quantum gates with two degrees of freedom assisted by charged quantum dots inside single-sided optical microcavities,” Laser Physics 28, 095201 (2018). * [32] B. P. Lanyon, M. Barbieri, M. P. Almeida, T. Jennewein, T. C. Ralph, K. J. Resch, G. J. Pryde, J. L. O’brien, A. Gilchrist, and A. G. White, “Simplifying quantum logic using higher-dimensional Hilbert spaces,” Nature Physics 5, 134 (2009). * [33] H. R. Wei and P. J. Zhu, “Implementations of two-photon four-qubit Toffoli and Fredkin gates assisted by nitrogen-vacancy centers,” Scientific Reports 6, 35529 (2016). * [34] M. Scholz, T. Aichele, S. Ramelow, and O. Benson, “Deutsch-Jozsa algorithm using triggered single photons from a single quantum dot,” Physical Review Letters 96, 180501 (2006). * [35] F. Z. Shi, X. Rong, N. Xu, Y. Wang, J. Wu, B. Chong, X. Peng, J. Kniepert, R. S. Schoenfeld, and W. Harneit, “Room-temperature implementation of the Deutsch-Jozsa algorithm with a single electronic spin in diamond,” Physical Review Letters 105, 040504 (2010). * [36] W. H. Zhang, Q. Q. Qi, J. Zhou, and L. X. 
# Non-Gaussian Normal Diffusion in Low Dimensional Systems

Qingqing Yin1, Yunyun Li1,†, Fabio Marchesoni1,2,†, Shubhadip Nayak3, and Pulak K. Ghosh3†

Center for Phononics and Thermal Energy Science, Shanghai Key Laboratory of Special Artificial Microstructure Materials and Technology, School of Physics Science and Engineering, Tongji University, Shanghai 200092, China

Dipartimento di Fisica, Università di Camerino, I-62032 Camerino, Italy

Department of Chemistry, Presidency University, Kolkata 700073, India

Corresponding authors. E-mail<EMAIL_ADDRESS>† <EMAIL_ADDRESS>†<EMAIL_ADDRESS>

###### Abstract

Brownian particles suspended in disordered crowded environments often exhibit non-Gaussian normal diffusion (NGND), whereby their mean square displacements grow proportionally to the observation time while the displacement statistics remain non-Gaussian. Their distributions appear to decay almost exponentially according to “universal” laws largely insensitive to the observation time. This effect is generically attributed to slow environmental fluctuations, which perturb the local configuration of the suspension medium. To investigate the microscopic mechanisms responsible for the NGND phenomenon, we study Brownian diffusion in low dimensional systems, such as the free diffusion of ellipsoidal and active particles, the diffusion of colloidal particles in fluctuating corrugated channels and Brownian motion in arrays of planar convective rolls. NGND appears to be a transient effect related to the time modulation of the particle’s instantaneous diffusivity, which can occur even under equilibrium conditions. Consequently, we propose to generalize the definition of NGND to include transient displacement distributions which vary continuously with the observation time. To this purpose, we provide a heuristic one-parameter function, which fits all time-dependent transient displacement distributions corresponding to the same diffusion constant. Moreover, we reveal the existence of low dimensional systems where the NGND distributions are not leptokurtic (fat exponential tails), as often reported in the literature, but platykurtic (thin sub-Gaussian tails), i.e., with negative excess kurtosis. The actual nature of the NGND transients is related to the specific microscopic dynamics of the diffusing particle.

Non-Gaussian Normal Diffusion, transport phenomena, stochastic process, active matter

## I Introduction

Possibly misinterpreting the original works of Albert Einstein and Marian Smoluchowski on Brownian motion, one tends to associate the normal diffusion of an ideal Brownian particle with the Gaussian distribution of its spatial displacements. Recent observations Granick1 ; Granick2 ; Bhatta ; Sung1 ; Sung2 ; Granick3 of Brownian motion in fluctuating crowded environments have called the generality of this notion into question. Indeed, it implicitly assumes Fick’s diffusion Gardiner , whereby the directed displacements of an overdamped particle, say, in the $x$ direction, $\Delta x(t)=x(t)-x(0)$, would grow according to the asymptotic Einstein law, $\langle\Delta x^{2}(t)\rangle=2Dt$, and with Gaussian statistics. The probability density function (pdf) of the rescaled observable, $\delta_{t}=\Delta x/\sqrt{t}$, would thus be a stationary Gaussian function with half-variance $D$. However, there are no fundamental reasons why the diffusion of a physical Brownian tracer should be of the Fickian type.
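As a baseline for what follows, the Fickian scenario is easy to verify numerically. The minimal sketch below (ours, not taken from the original simulations; Python with NumPy, all parameter values illustrative) samples free Brownian displacements and checks that the rescaled observable $\delta_{t}=\Delta x/\sqrt{t}$ has a $t$-independent half-variance $D$.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 0.5         # free diffusion constant (illustrative value)
dt = 1e-2       # integration time step
n_traj = 20000  # number of independent trajectories

for t_obs in (0.1, 1.0, 10.0):
    n_steps = int(t_obs / dt)
    # Free Brownian motion: independent Gaussian increments of variance 2*D*dt
    dx = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n_steps))
    delta_t = dx.sum(axis=1) / np.sqrt(t_obs)  # rescaled displacements
    # For Fickian diffusion, var(delta_t)/2 should equal D at every t
    print(f"t = {t_obs:5.1f}:  var(delta_t)/2 = {delta_t.var() / 2:.3f}")
```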
For instance, in real biophysical systems, displacement pdf’s have been reported which retain prominent exponential tails over extended intervals of the observation time, even after the tracer has attained the asymptotic condition of normal diffusion. Such an effect, often termed non-Gaussian normal diffusion (NGND), disappears only for exceedingly long observation times (possibly inaccessible to real experiments Granick1 ), when the displacement distributions eventually turn Gaussian, as dictated by the central limit theorem, without changes of the diffusion constant. Persistent diffusive transients of this type have been detected in diverse experimental setups Granick1 ; Granick2 ; Bhatta ; other1 ; other3 ; other4 . Extensive numerical simulations confirmed the occurrence of NGND in crowded environments featuring slowly diffusing or changing microscopic constituents (filaments Granick1 ; Bhatta , large hard spheres Sung1 ; Sung2 ; Granick3 , clusters Kegel ; Kob , and other heterogeneities Tong ; Cherstvy ). The current interpretation of this phenomenon postulates the existence of one or more fluctuating processes affecting the composition and geometry of the particle’s suspension medium Granick1 . It seems reasonable that, for observation times comparable with the relevant environmental relaxation time(s), the tracer displacements may obey non-Gaussian statistics. The rescaled pdf’s, $p(\delta_{t})$, are expected to be Gaussian for both much shorter and much larger observation times, but with different half-variance: the free diffusion constant, $D_{0}$, for $t\to 0$ (no crowding effect) and the asymptotic diffusion constant, $D$, introduced above, for $t\to\infty$ (central limit theorem). The mechanism whereby the tracer’s normal diffusion sets in, with the constant $D$ remaining unaltered through the entire non-Gaussian transient, varies instead from case to case. In summary, key features of the NGND phenomenon appear to be: (i) its transient nature, whereby the observables taken into account are intrinsically non-stationary; (ii) a time-modulated instantaneous diffusivity of the tracer. As discussed in the following, these conditions can occur even in the absence of external (non-equilibrium) perturbations of the Brownian dynamics. A simple heuristic explanation Granick1 of the NGND phenomenon models the effects of the slowly fluctuating environment in terms of an ad hoc distribution of the tracer’s diffusion constant. Imposing an exponential distribution of the diffusion constant with average $D$, a straightforward superstatistical procedure yields the exponential (Laplace) rescaled distribution, $p(\delta_{t})=\exp(-|\delta_{t}|/\alpha)/2\alpha$, with $\alpha^{2}=D$. A more suggestive NGND paradigm is provided by the notion of diffusing diffusivity Slater , whereby the asymptotic particle’s diffusion constant is replaced by a time fluctuating auxiliary observable, $D(t)$. Regarding $D(t)$ as a continuous stochastic process with average $D$ and time constant $\tau$, the distribution $p(\delta_{t})$ changes from exponential for $t\ll\tau$ to Gaussian for $t\gg\tau$; in both time regimes, the displacement diffusion is normal, with $\langle\Delta x^{2}(t)\rangle=2Dt$ Slater . Refined variations of these paradigms Metzler2 ; Jain1 ; Jain2 ; Tyagi ; Luo ; Sokolov ; Metzler predict different exponential decays of the transient distributions.
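The superstatistical step above is easy to reproduce numerically: drawing each tracer’s diffusion constant from an exponential distribution with mean $D$ and then sampling a Gaussian displacement with half-variance $D_{i}$ yields the quoted Laplace law. A minimal sketch (ours; NumPy assumed, parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
D_mean = 1.0   # average diffusion constant <D>
n = 200_000    # number of sampled tracers
t = 1.0        # observation time (drops out after rescaling)

# Superstatistics: each tracer carries its own exponentially distributed D_i
D_i = rng.exponential(D_mean, n)
delta = rng.normal(0.0, np.sqrt(2 * D_i * t)) / np.sqrt(t)

# Compare the empirical pdf with the Laplace law, alpha = sqrt(<D>)
alpha = np.sqrt(D_mean)
hist, edges = np.histogram(delta, bins=81, range=(-8, 8), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
laplace = np.exp(-np.abs(centers) / alpha) / (2 * alpha)
print("max |empirical - Laplace| =", float(np.abs(hist - laplace).max()))
```

The same Gaussian-mixture construction, with the exponential weight replaced by the stationary statistics of $D(t)$, underlies the diffusing diffusivity paradigm.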
These phenomenological approaches have two major limitations, namely: (i) They fail to incorporate the free Gaussian diffusion detected in most real and numerical experiments at very short observation times, $t\to 0$, when crowding plays no role. This is because these approaches purposely ignore the microscopic details of the actual diffusion mechanisms; (ii) They generally aim at “universal” non-Gaussian transient pdf’s, namely, at functions $p(\delta_{t})$ insensitive to $t$ over extended domains, $t<\tau$. [In the superstatistical models $\tau=\infty$.] However, even if this strategy may appear to agree with NGND observations for complex systems Granick1 ; Granick2 ; Bhatta ; Sung1 ; Sung2 ; Granick3 , it is obvious that to reproduce the exponential-Gaussian crossover, the transient rescaled distributions must assume the form $p(\delta_{t},t)$, i.e., they must depend explicitly on $t$. This study focuses on the microscopic mechanisms responsible for NGND. To this purpose, motivated by a preliminary study PRR , we investigated, both numerically and analytically, directed diffusion of different idealized tracers in confined geometries. We selected low dimensional systems mostly inspired by cell biology RMP_BM ; Lipowski . For appropriately short observation times, NGND emerges as a transient effect of the time modulation of the tracers’ microscopic diffusivity. This effect can occur even in the absence of environmental fluctuations. It suffices to require that the tracer’s dynamics be governed by two concurrent diffusion mechanisms, at least one of them characterized by a finite relaxation time, $\tau$. During transients of the order of $\tau$, the displacement distributions can deviate from their asymptotic Gaussian profile also after normal diffusion has set in. Moreover, such deviations do not necessarily imply the emergence of “fatter” exponential tails (leptokurtic transients), but under certain conditions, the distribution tails can get “thinner” (platykurtic transients). This observation suggests that typical NGND features are to be found in a much wider class of diffusion systems. Indeed, contrary to experimental and numerical observations on extended systems, the NGND transient displacement distributions in low dimensional models are found to depend on the observation time. This led us to address the question of phenomenological fitting functions capable of reproducing the $t$-dependence of the rescaled pdf’s, $p(\delta_{t},t)$. We also noticed that the $t$ dependence of the transient rescaled pdf’s can be suppressed, though not completely, by considering models where the onset of normal diffusion is controlled by some intrinsic time constant, which can be taken much shorter than the upper bound, $\tau$, of the non-Gaussian transient. This provides us with a criterion to formulate low dimensional models that better capture the known NGND phenomenology in complex systems. The present paper is organized as follows. In Sec. II we elaborate on a toy discrete model of NGND proposed first in Ref. Slater and then revisited in Ref. PRR . The purpose of this section is to single out key NGND aspects, like the time scales regulating the diffusion mechanisms and the nature of the non-Gaussian transients, that is, lepto- versus platykurtic. In Sec. III we consider the diffusion of a two dimensional (2D) ellipsoidal Brownian particle in a highly viscous, homogeneous and isotropic fluid in thermal equilibrium.
For observation times shorter than its rotational relaxation time, the particle does undergo normal diffusion. However, its instantaneous diffusivity in a given direction is modulated in time due to its elongated shape. This results in exponentially decaying transient distributions of the particle’s directional displacements. In Sec. IV we analyze the diffusion of a 2D self-propelling symmetric particle in a homogeneous and isotropic active medium with finite orientational relaxation time. NGND is characterized here by thin tails of the transient displacement distributions. In both cases, however, transients are governed by one time scale only, the orientational diffusion time, $\tau$: on increasing the observation time, normal diffusion just anticipates the onset of the Gaussian statistics of the particle’s displacements. In Sec. V we introduce a phenomenological fitting function, $p_{\beta}(\delta_{t})$, for the rescaled displacement distributions, with only one adjustable parameter, $\beta$. This function is designed ad hoc to ensure normal diffusion with the observed diffusion constant, $D$, at any time, while $\beta$ encodes the $t$-dependence of the rescaled displacement distributions. Relevant values of the fitting parameter $\beta$ are $\beta=2$ for a Gaussian pdf, $\beta=1$ for a Laplace (exponential) pdf, $\beta<2$ for a leptokurtic pdf, and $\beta>2$ for a platykurtic pdf. In Sec. VI we analyze the NGND phenomenon in a narrow corrugated channel chemphyschem ; PNAS with fluctuating pores PRR . NGND occurs for time-correlated pore fluctuations, random and periodic alike, and, more importantly, for observation times lying between two distinct, controllable time scales. The correlation time of the pore fluctuations sets the transient time scale, $\tau$, whereas the average pore-crossing time governs the onset of normal diffusion. Upon choosing the former much larger than the latter, the NGND transient is made to grow wider and the $t$-dependence of $\beta$ weaker. As a practical application of the tools introduced thus far, in Sec. VII we investigate the diffusion of a passive Brownian tracer in a periodic array of planar counter-rotating convection rolls. The peculiarity of this model is that, by tuning its dynamical parameters, transients can change from lepto- to platykurtic. The system has two characteristic time scales: the mean time for the particle to first exit a convection roll and its average revolution period inside the roll. At low (high) temperatures, the former (latter) time scale is larger and thus plays the role of transient time, $\tau$; accordingly, the NGND transients are leptokurtic (platykurtic). Finally, in Sec. VIII we summarize the main conclusions of our approach to NGND.

## II A Discrete NGND Model

Though sounding exotic to some readers, the phenomenon of NGND turns out to be far more general than the more familiar Fickian diffusion. To make this point, we elaborate now on a coarse grained model, first proposed in Ref. Slater , which serves well the purpose of illustrating NGND in continuous systems of any dimensionality. Let us coarse grain the trajectory of a tagged particle in the $x$ direction as the sum of small random steps, $\Delta x_{i}$, taken at fixed discrete times, $t_{i}=i\Delta t$, where $i=1,\dots,N$ and $\Delta t=1$, for simplicity. Accordingly, the position of the particle at time $N$ is $x_{N}=\sum_{i=1}^{N}\Delta x_{i}$.
A stochastic average over the particle’s steps $\Delta x_{i}$ yields the mean square displacement at time $N$, $\langle x_{N}^{2}\rangle=\sum_{i=1}^{N}\langle\Delta x_{i}^{2}\rangle+2\sum_{i\neq j}{}^{\prime}\langle\Delta x_{i}\Delta x_{j}\rangle,$ (1) where $\sum_{i\neq j}^{\prime}$ stands for $\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}$. Sufficient conditions to establish normal diffusion are that: (1) the step directions are uncorrelated, $\langle\Delta x_{i}\Delta x_{j}\rangle=0$, that is, for any given $\Delta x_{i}$, displacements $\Delta x_{j}$ and $-\Delta x_{j}$ are equiprobable; (2) the variances, $\langle\Delta x_{i}^{2}\rangle$, of the step probabilities, $p(\Delta x_{i})$, are of the same order of magnitude, though not necessarily identical. These requirements are less stringent than the assumptions implicit in the standard random walker model for Brownian motion Gardiner . Indeed, for the sake of generality, one should not rule out finite correlations of the step lengths Slater . For instance, we can assume that during each unit time step the particle’s diffusion is normal with time-dependent constant, $D_{i}$, i.e., $p(\Delta x_{i})=(4\pi D_{i})^{-1/2}\exp(-\Delta x_{i}^{2}/4D_{i}).$ (2) This assumption guarantees that the directions of the particle’s steps are uncorrelated, while their length correlation is controlled by the autocorrelation of the time sequence of the constants $D_{i}$, which, in turn, is specific to the system at hand. It follows immediately that $\langle x_{N}^{2}\rangle=2\langle D\rangle N,$ (3) and $\langle x_{N}^{4}\rangle-3\langle x_{N}^{2}\rangle^{2}=12\mu_{D}\langle D\rangle^{2}N+24{\sum_{i\neq j}}^{\prime}C_{ij},$ (4) with $\mu_{D}=(\langle D^{2}\rangle-\langle D\rangle^{2})/\langle D\rangle^{2}$ and $C_{ij}=\langle D_{i}D_{j}\rangle-\langle D_{i}\rangle\langle D_{j}\rangle$. For any given stationary model, there exists an appropriate distribution of the constants $D_{i}$, $p(D_{i})$, so that $\langle D_{i}\rangle\equiv\langle D\rangle$. Suppose now that two particle steps, $\Delta x_{i}$ and $\Delta x_{j}$, are statistically uncorrelated only for large time differences, i.e., $\langle D_{i}D_{j}\rangle=\langle D_{i}\rangle\langle D_{j}\rangle$ for $|i-j|>\tau$. We then distinguish two limiting cases: (i) $N\gg\tau$, where $\mu_{x}=\frac{\langle x_{N}^{4}\rangle-3\langle x_{N}^{2}\rangle^{2}}{\langle x_{N}^{2}\rangle^{2}}=\frac{3\mu_{D}}{N}\rightarrow 0.$ (5) A vanishing excess kurtosis, $\mu_{x}$, hints at a Gaussian $x_{N}$ distribution. This is the asymptotic limit of the displacement distributions predicted by the central limit theorem. (ii) $N<\tau$, where $\mu_{x}=\frac{\langle x_{N}^{4}\rangle-3\langle x_{N}^{2}\rangle^{2}}{\langle x_{N}^{2}\rangle^{2}}=3\bar{\mu}_{D},$ (6) with $\bar{\mu}_{D}=(2/N^{2}\langle D\rangle^{2})\sum_{i\neq j}{}^{\prime}C_{ij}$. Eqs. (3) and (6) embody the definition of NGND. The finite excess kurtosis, $\mu_{x}$, depends on the actual auto-correlation of the constants $D_{i}$. For instance, on assuming $\langle D_{i}D_{j}\rangle=\langle D^{2}\rangle$ for all $i$ and $j$ with $|i-j|<\tau$, we obtain $\bar{\mu}_{D}=\mu_{D}$. In particular, for the exponential distribution $p(D_{i})=\exp(-D_{i}/\langle D\rangle)/\langle D\rangle$ assumed in the diffusing diffusivity model of Ref. Slater , $\mu_{D}=1$. Not surprisingly, the resulting value of the excess kurtosis, $\mu_{x}=3$, corresponds to a Laplace distribution of the total displacement $x_{N}$ Slater .
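A direct simulation of this discrete model illustrates both limiting cases, Eqs. (5) and (6). In the sketch below (ours; parameters illustrative), the constants $D_{i}$ are held fixed over blocks of length $\tau$ and drawn from the exponential distribution of Ref. Slater, so that $\mu_{D}=1$: the measured excess kurtosis of $x_{N}$ stays close to 3 for $N\leq\tau$ and decays toward zero for $N\gg\tau$.

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 100       # correlation time of the step lengths (units of dt = 1)
D_mean = 1.0    # <D>; exponential p(D_i) implies mu_D = 1
n_walk = 200_000

def excess_kurtosis(N):
    # D_i is block-constant: <D_i D_j> = <D^2> for |i - j| < tau.
    # Within a block of m steps the displacement is Gaussian with variance 2*D*m.
    x = np.zeros(n_walk)
    done = 0
    while done < N:
        m = min(tau, N - done)
        D = rng.exponential(D_mean, n_walk)  # one D per walker and block
        x += rng.normal(0.0, np.sqrt(2 * D * m))
        done += m
    m2, m4 = (x**2).mean(), (x**4).mean()
    return m4 / m2**2 - 3                    # mu_x of Eqs. (5)-(6)

for N in (10, 100, 1000, 10_000):
    print(f"N = {N:6d}:  mu_x = {excess_kurtosis(N):+.2f}")
```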
Of course a more realistic choice of the correlator $C_{ij}$ can yield different values of $\mu_{x}$. In most applications $C_{ij}$ is positive definite and decays to zero with time, i.e., with $|i-j|$; hence $0<\mu_{x}<3$. Accordingly, the corresponding $x_{N}$ distributions are leptokurtic, with tails decaying slower than those of a Gaussian distribution, but typically faster than exponentially. On the other hand, we cannot exclude the possibility that $C_{ij}$ decays to zero oscillating. This implies that, in principle, $\bar{\mu}_{D}$ can assume negative values, so that the corresponding transient distribution of $x_{N}$ may be platykurtic. In Refs. nonG1 ; nonG2 ; nonG3 the present approach has been extended also to microscopically non-Gaussian diffusive processes [where the $\Delta x$ distribution of Eq. (2) does not apply]. We conclude this section with a final remark about the time scales involved in this discrete model. One time scale has been introduced explicitly, namely the characteristic decay time, $\tau$, of the correlator $C_{ij}$ or, equivalently, the correlation time of the step lengths, $\Delta x_{i}$. A second one is implicit in our choice for the step distribution, $p(\Delta x_{i})$. In Eq. (2) the coarse grained diffusion was assumed to be normal over the time step $\Delta t=1$. This implies that in the corresponding continuum system normal diffusion is expected to have occurred at some intrinsic time scale much shorter than $\tau$. Of course the discrete model of this section cannot reproduce the diffusion properties at times shorter than the discretization time scale, $\Delta t$.

## III Diffusion of an Ellipsoidal Particle

We consider first the simple case of a 2D ellipsoidal particle of semiaxes $a$ and $b$, with $a>b$, diffusing in a highly viscous, homogeneous and isotropic medium, subject to equilibrium thermal fluctuations. This is a well-known problem in biological physics Berg . The particle’s elongation causes a dissipative coupling between the center of mass translational degrees of freedom, $x$ and $y$ in the laboratory frame, and the rotational degree of freedom, $\theta$. As sketched in Fig. 1(b), the angle $\theta$ defines the orientation of the particle’s long axis with respect to the horizontal $x$ axis. The physical consequences of such a mechanism were first recognized by F. Perrin Perrin . An ellipsoidal particle tends to diffuse independently in directions parallel and perpendicular to its long axis, that is, along its principal axes. The relevant diffusion constants in the body frame are denoted here by $D_{a}$ and $D_{b}$, with $D_{a}\geq D_{b}$. In 2D, rotational diffusion is governed by an additional diffusion constant, $D_{\theta}$, which will be handled here as unrelated to the translational constants, $D_{a}$ and $D_{b}$, to avoid unnecessary complications involving hydrodynamic effects and fabrication issues Berg ; Perrin . Over the angular relaxation time $\tau=1/D_{\theta}$, rotational diffusion erases any directional memory of the particle’s motion. Related to this mechanism is the crossover between anisotropic diffusion with constants $D_{a}$ and $D_{b}$ at short observation times, $t\ll\tau$, and isotropic diffusion with constant $D=(D_{a}+D_{b})/2$ at long observation times, $t>\tau$ ellipsoid .
The anisotropic-isotropic crossover can be numerically investigated by integrating the Langevin equations Kloeden describing the roto-translational motion of a free ellipsoidal Brownian particle, $\dot{x}=\xi_{x}(t),~{}~{}~{}\dot{y}=\xi_{y}(t),$ (7) $\dot{\theta}=\xi_{\theta}(t),$ (8) where the translational noises, $\xi_{i}(t)$ with $i=x,y$, and the rotational noise, $\xi_{\theta}(t)$, model three independent stationary Gaussian fluctuation sources with zero means and autocorrelation functions $\langle\xi_{i}(t)\xi_{j}(0)\rangle=2D_{ij}\delta(t)$ and $\langle\xi_{\theta}(t)\xi_{\theta}(0)\rangle=2D_{\theta}\delta(t)$. The matrix $D_{ij}$ encodes the dissipative roto-translational coupling, namely ellipsoid $D_{ij}=({1}/{2})[(D_{a}+D_{b})\delta_{ij}+(D_{a}-D_{b})M_{ij}(\theta)],$ (9) with ${\bf M}=\begin{pmatrix}\cos 2\theta&\sin 2\theta\\ \sin 2\theta&-\cos 2\theta\end{pmatrix}$.

Figure 1: Overdamped 2D ellipsoidal particle of semi-axes $a=0.5$ and $b=0.05$ diffusing in a homogeneous medium with Langevin Eqs. (7)-(9): (a), (c) displacement pdf’s for different initial orientations [uniform $\theta(0)$ distribution in (a), and $\theta(0)=0$ in (c)] and increasing observation times, $t$, see legends; (b) $\langle\Delta x^{2}\rangle$ vs. $t$ for the initial conditions (i.c.) of (a) (empty symbols) and (c) (filled symbols). Simulation parameters are: $D_{a}=1$, $D_{b}=(b/a)D_{a}$ and $D_{\theta}=0.01$. Asymptotic diffusion in (b) follows the normal diffusion law, $2Dt$ with $D=(D_{a}+D_{b})/2$ (dashed line), independent of the i.c. At very short times, the diffusion constant depends on $\theta(0)$ (see sketch). The pdf’s have been fitted by means of Eq. (17) with $D$ fitting the large-$t$ simulation data of (b) and $\beta$ as reported in the legends.

Despite their apparent simplicity, the Langevin Eqs. (7)-(9) admit only a rather cumbersome analytical solution Pecora ; Franosch . The diffusion properties of a typical ellipsoidal particle are summarized in Fig. 1. The mean square displacement, $\langle\Delta x^{2}(t)\rangle$, plotted in panel (b) as a function of the observation time $t$, was first computed under the assumption of a uniform distribution of $\theta(0)$. This initial condition (i.c.) was justified by the practical difficulty of measuring the particle’s instantaneous orientation and by the isotropy of the suspension medium. The resulting asymptotic diffusion constant, numerically determined as $D=\lim_{t\to\infty}\langle\Delta x^{2}(t)\rangle/2t,$ agrees with the expected value, $D=(D_{a}+D_{b})/2$, obtained by averaging $D_{ij}(\theta)$ in Eq. (9) with respect to the isotropic equilibrium distribution of $\theta$. This result is indeed an effect of our choice for the i.c. of $\theta$. A stochastic average over a uniform $\theta(0)$ distribution is equivalent to imposing isotropic particle diffusion, that is, establishing the Einstein law at all times. In contrast, by setting $\theta(0)=0$, the numerical data for $\langle\Delta x^{2}\rangle$ versus $t$, also shown in Fig. 1(b), bridge two linear laws with different diffusion constants: $D=D_{a}$ for $t\ll\tau$ and $D=(D_{a}+D_{b})/2$ for $t\gg\tau$. More revealing are the distributions of the unidirectional displacements, $\Delta x$, for increasing observation times, $t$, plotted in panels (a) and (c), respectively for a uniform initial angular distribution and $\theta(0)=0$.
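The simulations of Fig. 1 can be reproduced with a few lines of code. The sketch below (ours; parameters chosen to mimic Fig. 1) integrates Eqs. (7)-(9) by drawing independent Gaussian increments along the instantaneous principal axes and rotating them to the laboratory frame, which is equivalent to sampling lab-frame increments with covariance matrix $2D_{ij}(\theta)dt$.

```python
import numpy as np

rng = np.random.default_rng(3)
Da, Db, Dtheta = 1.0, 0.1, 0.01        # body-frame and rotational constants
tau = 1.0 / Dtheta                     # angular relaxation time
dt, n_steps, n_traj = 0.05, 2000, 20000
theta = np.zeros(n_traj)               # theta(0) = 0: long axis along x
x = np.zeros(n_traj)

for _ in range(n_steps):
    # Independent body-frame steps along the two principal axes ...
    da = rng.normal(0.0, np.sqrt(2 * Da * dt), n_traj)
    db = rng.normal(0.0, np.sqrt(2 * Db * dt), n_traj)
    # ... rotated to the laboratory frame; only the x projection is recorded
    x += da * np.cos(theta) - db * np.sin(theta)
    theta += rng.normal(0.0, np.sqrt(2 * Dtheta * dt), n_traj)

t = n_steps * dt                       # here t = tau
m2, m4 = (x**2).mean(), (x**4).mean()
print(f"<dx^2>/2t = {m2 / (2 * t):.3f}  (asymptotic D = {(Da + Db) / 2})")
print(f"excess kurtosis = {m4 / m2**2 - 3:+.3f}  (positive: leptokurtic)")
```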
As first theoretically predicted by Prager Prager and numerically confirmed by the authors of Ref. Franosch , for both i.c. the rescaled displacement pdf’s do approach the Gaussian profile of Fickian diffusion with half-variance $D=(D_{a}+D_{b})/2$, but only for $t\gg\tau$, that is, well after the anisotropic-isotropic crossover took place. Most remarkably, for $\theta(0)=0$, in panel (c), the displacement distributions approach a Gaussian profile both for $t\ll\tau$ and $t\gg\tau$, each with the corresponding half-variance $D$ shown in panel (b), respectively, $D_{a}$ and $(D_{a}+D_{b})/2$. The short-$t$ “reentrant” Gaussian distribution does not appear in panel (a), due to the randomized i.c. The explanation of this behavior is simple. In panel (c), the particle’s long axis was initially oriented parallel to the $x$ axis, $\theta(0)=0$. Therefore, it started diffusing in the $x$ direction like a one dimensional Brownian particle, with diffusion constant $D_{a}$. Subsequently, angular fluctuations mixed diffusion along the two symmetry axes with time constant $\tau$. This argument can be extended to any choice of $\theta(0)$: based on Eq. (9), the short-$t$ diffusion constant is expected to be $D=(1/2)[(D_{a}+D_{b})+(D_{a}-D_{b})\cos 2\theta(0)]$, see inset of Fig. 1(b). Of course, the i.c. only influence the anisotropic diffusion regime at short $t$. There is only one characteristic time scale in this model, namely, the angular relaxation time, $\tau=1/D_{\theta}$. However, normal diffusion turns out to set in at shorter observation times, $t\sim\tau$, than the Gaussian displacement statistics. To explain this behavior, we notice that during the transient time, $\tau$, a maximum mean square displacement, $2D_{a}\tau$, occurs parallel to the major axis; observing the same displacement in the perpendicular direction would take a larger time, $\tau_{*}=\tau(D_{a}/D_{b})$. The onset of the Gaussian $\Delta x$ statistics is thus delayed to larger observation times with $t>\tau_{*}$. In conclusion, this simple model of equilibrium Brownian motion exhibits NGND. On decreasing the observation time, $t$, the rescaled displacement distribution in a fixed laboratory direction changes from Gaussian for $t\gg\tau$, to a leptokurtic distribution with fat exponential tails for $t\sim\tau$, independently of the i.c. This behavior is consistent with the phenomenological picture of Sec. II. This is apparent in the case of uniform initial orientation. Normal diffusion is ensured by the fact that, after the particle has taken a step $\Delta x_{i}$ at the discrete time $t_{i}=i\Delta t$, it will next take a step $\pm\Delta x_{j}$ at time $t_{j}$, with equal probability. On the contrary, the step lengths $\Delta x_{i}$ and $\Delta x_{j}$ are correlated for $|t_{j}-t_{i}|<\tau$. Indeed, the effective half-width of the diffusing particle parallel to the $x$ axis varies randomly between $b$ at $\theta=0,\pi$ and $a$ at $\theta=\pm\pi/2$. Accordingly, the particle’s instantaneous diffusion constant fluctuates between $D_{a}$ and $D_{b}$; its fluctuations are exponentially time correlated with time constant $\tau$. As discussed in Sec. II, this leads to a rescaled pdf, $p(\delta_{t},t)$, with positive excess kurtosis.

## IV Diffusion of a Janus Particle

We consider next the case of a pointlike particle undergoing persistent Brownian motion, namely, a 2D artificial microswimmer. Typical artificial microswimmers are Brownian particles capable of self-propulsion in an active medium Granick ; Muller .
As in the foregoing section, the suspension medium can be taken homogeneous, isotropic and highly viscous. Such particles are designed to harvest environmental energy by converting it into kinetic energy. The simplest class of artificial swimmers investigated in the literature are the so-called Janus particles (JP), mostly spherical colloidal particles with two differently coated hemispheres, or “faces” Marchetti ; Gompper . Recently, artificial micro- and nanoswimmers of this class have been the focus of pharmaceutical (e.g., smart drug delivery smart ) and medical research (e.g., robotic microsurgery Wang ). Relevant to the present work is the observation that their function is governed, in time and space, by their diffusive properties through complex environments, which are often spatially patterned Bechinger or confined ourPRL .

Figure 2: Symmetric Janus particle diffusing in a homogeneous medium with $D_{0}=1$, $v_{0}=1$, $D_{\theta}=0.01$, and uniform distributions of the particle’s initial position and orientation, Eqs. (10)-(11): (a) displacement pdf’s at different observation times, $t$; (b) diffusion law, $\langle\Delta x^{2}\rangle$ vs. $t$. The numerical data agree well with the analytical law of Eq. (12) (solid curve); the normal diffusion limits at large and short $t$, Eqs. (13) and (14), are drawn for a comparison (dashed lines); the pdf’s in (a) have been fitted by means of Eq. (17) with $D$ fitting the large-$t$ data in (b) and an appropriate choice of $\beta$ (see legend). The pdf’s with $\beta=2$ at the shortest and largest $t$, panel (a), are Gaussian curves with half-variance $D_{0}$ and $D_{0}+D_{s}$, respectively.

The overdamped dynamics of a pointlike active JP can be formulated by means of two translational and one rotational Langevin equation $\dot{x}=v_{0}\cos\theta+\xi_{x}(t),~{}~{}~{}\dot{y}=v_{0}\sin\theta+\xi_{y}(t),$ (10) $\dot{\theta}=\xi_{\theta}(t),$ (11) where $x$ and $y$ are the coordinates of the particle’s center of mass, and the self-propulsion velocity has constant modulus, $v_{0}$, and orientation $\theta$, taken with respect to the $x$ axis, see sketch in Fig. 2(b). The translational noises in the $x$ and $y$ directions, $\xi_{x}(t)$ and $\xi_{y}(t)$, and the rotational noise, $\xi_{\theta}(t)$, are stationary, independent, delta-correlated Gaussian noises, $\langle\xi_{i}(t)\xi_{j}(0)\rangle=2\delta_{ij}D_{i}\delta(t)$ with $i,j=x,y,\theta$. The noise strengths $D_{x}=D_{y}=D_{0}$ (isotropic translational fluctuations) and $D_{\theta}$ are assumed here to be unrelated for generality (e.g., to account for different self-propulsion mechanisms ourPRL ). The reciprocal of $D_{\theta}$ is the correlation (or angular persistence) time, $\tau$, of the self-propulsion velocity. For simplicity, we ignore chiral effects due to unavoidable fabrication defects Wang ; Loewen ; chiral1 . It is worth comparing the Langevin Eqs. (7)-(8) and (10)-(11): for the ellipsoidal particle anisotropy is geometric, i.e., due to its elongated shape, whereas for a pointlike JP anisotropy is dynamical, i.e., associated with the instantaneous orientation of its self-propulsion velocity. A detailed analytical treatment of the Langevin Eqs. (10)-(11) is to be found in Ref. Franosch_SciRep .
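A minimal Euler integration of Eqs. (10)-(11) is sketched below (ours; parameters as in Fig. 2). It can be used to reproduce the diffusion law of Eq. (12) quoted next, as well as the platykurtic transients of Fig. 2(a).

```python
import numpy as np

rng = np.random.default_rng(4)
D0, v0, Dtheta = 1.0, 1.0, 0.01               # parameters of Fig. 2
tau = 1.0 / Dtheta                            # angular persistence time
Ds = v0**2 / (2 * Dtheta)                     # self-propulsion contribution
dt, n_traj = 0.05, 20000
theta = rng.uniform(0.0, 2 * np.pi, n_traj)   # uniform initial orientation
x = np.zeros(n_traj)

for step in range(1, 8001):
    # Self-propulsion drift plus translational and rotational noises
    x += v0 * np.cos(theta) * dt + rng.normal(0, np.sqrt(2 * D0 * dt), n_traj)
    theta += rng.normal(0, np.sqrt(2 * Dtheta * dt), n_traj)
    if step % 2000 == 0:
        t = step * dt
        msd = (x**2).mean()
        msd_th = 2 * (D0 + Ds) * t + 2 * Ds * tau * (np.exp(-t / tau) - 1)
        print(f"t = {t:5.0f}:  <dx^2> = {msd:8.0f}   Eq. (12): {msd_th:8.0f}")
```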
The unidirectional mean square displacement of a free JP in 2D reads Golestanian07 ; Loewen09 ; chiral2 , $\langle\Delta x^{2}(t)\rangle=2(D_{0}+D_{s})t+2D_{s}\tau(e^{-|t|/\tau}-1),$ (12) which approaches the Einstein law, $\langle\Delta x^{2}(t)\rangle=2(D_{0}+D_{s})t,$ (13) only for $t\gg\tau$. Here, the unidirectional diffusion constant, $D$, consists of two distinct contributions, a translational term, $D_{0}$, and a self-propulsion term, $D_{s}=v_{0}^{2}/2D_{\theta}$. Instead, for short observation times Eq. (12) tends to $\langle\Delta x^{2}(t)\rangle=2D_{0}t,$ (14) that is, to the normal diffusion law of a passive particle with $v_{0}=0$. The analytical law of Eq. (12) and its normal limits for large and small observation times compare well with our simulation results in Fig. 2(b). The displacement distribution, $p(\delta_{t},t)$, exhibits a Gaussian profile both for $t\to 0$ and $t\to\infty$, but with different half-variances, respectively $D_{0}$ and $D=D_{0}+D_{s}$, see Fig. 2(a). The crossover between these two Gaussian limits is characterized by platykurtic transient pdf’s with fast decaying tails. Experimental evidence of this phenomenon has been reported in Ref. Loewen2 . In the limit $t\to 0$, the displacement distributions become sensitive to the particle’s initial orientation. For a uniform distribution of $\theta(0)$, shown in Fig. 2(a), the rescaled pdf’s approach a Gaussian function with half-variance $D_{0}$, as to be expected for an isotropic persistent Brownian motion in the ballistic regime, $t\ll\tau$. However, for a fixed value of $\theta(0)$, say, $\theta(0)=0$, the pdf is still a Gaussian with the same half-variance, $D_{0}$, but its center moves to higher $\Delta x$ values, with $\langle\Delta x(t)\rangle=v_{0}t$ sperm (not shown). For intermediate observation times, $t\simeq\tau$, the displacement pdf’s develop two symmetric maxima a distance of the order of the persistence length, $\Delta x\sim v_{0}\tau$, from their centers Loewen2 . The different nature of the diffusion transients of ellipsoidal and active JP’s can be easily explained in terms of the coarse grained model of Sec. II. The orientation of a JP is time correlated; from Eqs. (10)-(11), $\langle\cos\theta(t)\cos\theta(0)\rangle=\langle\sin\theta(t)\sin\theta(0)\rangle=(1/2)\exp(-|t|/\tau)$. This implies that both the orientation and the length of the discrete steps in the $x$ direction, $\Delta x_{i}$, are time correlated; given any pair of steps, $\Delta x_{i}$ and $\Delta x_{j}$, both their time correlations vanish asymptotically only for $|t_{j}-t_{i}|\gg\tau$. However, on comparing panels (a) and (b) of Fig. 2 we notice the existence of a rather wide range of observation times, where $\langle\Delta x^{2}(t)\rangle$ has approached a linear function of $t$, Eq. (13), while the rescaled $\Delta x$ distributions are still apparently platykurtic. The platykurtic nature of this transient is consistent with the coarse grained model of Sec. II, because the angular correlation of the self-propulsion velocity vector amounts to an oscillatory behavior of $D_{i}-\langle D_{i}\rangle$, a necessary condition to observe a negative excess kurtosis, $\mu_{x}<0$. It remains to explain why, like in Sec. III, the onset of normal diffusion anticipates the onset of the Gaussian $\Delta x$ statistics. We know Loewen2 that the self-propulsion mechanism of Eqs.
(10)-(11) is responsible for the non-Gaussian profile of $p(\delta_{t})$, an effect mitigated by the translational noise as long as $2D_{0}t>l_{\theta}^{2}$, where $l_{\theta}=v_{0}\tau$ is the JP persistence length. Therefore, the Gaussian statistics of the unidirectional JP displacements is expected to emerge only for $t>\tau_{*}$, with $\tau_{*}=\tau(D_{s}/D_{0})$. Note that in the simulations of Fig. 2 we set $\tau_{*}>\tau$. The results presented in this section lead us to conclude that we are in the presence of another manifestation of the NGND phenomenon.

## V Transient Displacement Distributions

As mentioned in Sec. I, the notion of NGND is commonly associated with the existence of a wide interval of observation times, where diffusion follows a normal law with fixed constant, $D$, and the rescaled displacement distribution, $p(\delta_{t})$, decays (almost) exponentially independently of $t$. The exponential to Gaussian crossover is hardly accessible to direct observation Granick1 . In the low dimensional systems investigated here, instead, such a transition takes place over a relatively narrower $t$ interval, which led us to look for a phenomenological function $p(\delta_{t},t)$ fitting our simulation data from transient up to asymptotic $t$ values. Contrary to the diffusing diffusivity models, where the limiting Laplace and Gaussian distributions are functions of the sole diffusion constant, $D$, a more realistic fitting procedure needs at least one additional parameter, $\beta$, to capture the $t$-dependence of the transient pdf’s. Inspired by the numerical findings of Secs. III and IV, we started from the compressed exponential function $p(\delta_{t})=p_{0}e^{-(\delta_{t}/\delta_{0})^{\beta}},$ (15) where $\beta\geq 1$. The scaling factor, $\delta_{0}$, and the normalization constant, $p_{0}$, have been computed by imposing the conditions $\int_{0}^{\infty}p(\delta_{t})d\delta_{t}=1,~{}~{}~{}\int_{0}^{\infty}\delta_{t}^{2}p(\delta_{t})d\delta_{t}=2D,$ (16) to obtain the one-parameter ad hoc fitting function, $p_{\beta}(\delta_{t})=\frac{\beta}{\Gamma(\frac{1}{\beta})^{\frac{3}{2}}}\left[\frac{\Gamma(\frac{3}{\beta})}{2D}\right]^{\frac{1}{2}}\exp\left[-\left(\frac{\delta_{t}^{2}}{2D}\frac{\Gamma(\frac{3}{\beta})}{\Gamma(\frac{1}{\beta})}\right)^{\frac{\beta}{2}}\right].$ (17) This function has been derived phenomenologically starting from the standard stretched exponential distribution, $p_{\beta}(\delta_{t})=A\exp(-B\delta_{t}^{\beta})$. The constants $A$ and $B$ have then been determined by normalizing $p_{\beta}(\delta_{t})$ to one and ensuring that its second moment yields $\langle\delta_{t}^{2}\rangle=2D$ for any value of the free parameter $\beta$. In view of its derivation, the heuristic distribution (17) may apply also to the transients of microscopically non-Gaussian diffusion models nonG1 ; nonG2 ; nonG3 . The fitting parameter $\beta$ is allowed to vary with $t$; it assumes values in the range $1\leq\beta\leq 2$ for leptokurtic distributions (positive excess kurtosis) and $\beta\geq 2$ for platykurtic distributions (negative excess kurtosis). The fits of the pdf’s drawn in panels (a),(c) of Figs. 1 and (a) of Fig. 2 have been generated from Eq. (17) by setting $D$ equal to the diffusion constants that best fitted the large-$t$ diffusion data in the respective panels (b) and then computing $\beta$ to get the best fit of the rescaled displacement distributions at different $t$. The same fitting procedure has been applied in Figs. 3 and 4 of the forthcoming sections.
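For completeness, Eq. (17) and the above fitting procedure are straightforward to implement; a sketch follows (ours; SciPy assumed, and `fit_beta` is a hypothetical helper name). Note that, per the conditions of Eq. (16), $p_{\beta}$ is normalized on the half-line, so the symmetric full-line density is $p_{\beta}(|\delta_{t}|)/2$.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def p_beta(delta, beta, D):
    """One-parameter pdf of Eq. (17); beta = 2 Gaussian, beta = 1 Laplace,
    beta < 2 leptokurtic, beta > 2 platykurtic. Normalized on (0, inf)."""
    g1, g3 = gamma(1.0 / beta), gamma(3.0 / beta)
    norm = beta / g1**1.5 * np.sqrt(g3 / (2 * D))
    return norm * np.exp(-(delta**2 / (2 * D) * g3 / g1) ** (beta / 2))

def fit_beta(delta_samples, D):
    """Least-squares fit of beta, with D held fixed at the measured value."""
    hist, edges = np.histogram(np.abs(delta_samples), bins=60, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    (beta,), _ = curve_fit(lambda d, b: p_beta(d, b, D), centers, hist,
                           p0=[2.0], bounds=(0.5, 10.0))
    return beta

# Example: Gaussian samples with half-variance D = 1 should give beta near 2
rng = np.random.default_rng(5)
print(fit_beta(rng.normal(0.0, np.sqrt(2.0), 100_000), D=1.0))
```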
Our phenomenological formula (17) fits rather closely the numerical pdf’s reported in Secs. III and IV, at least for sufficiently large observation times. As a matter of fact, the heuristic argument leading to the fitting function $p_{\beta}(\delta_{t})$ assumes normal diffusion at any $t$. This is consistent with the diffusive dynamics of the ellipsoidal Brownian particle with isotropic i.c., displayed in Figs. 1(a)-(b). However, this cannot be the case of, for instance, the active JP of Fig. 2, whose diffusion law for $t<\tau$ clearly deviates from the asymptotic law of Eq. (13). A comparison with the simulation output confirms that the proposed fitting procedure works well for both systems in the transient regime, $t>\tau$.

## VI Diffusion in a Time Modulated Channel

In most numerical and experimental investigations Granick1 ; Granick2 ; Bhatta ; Sung1 ; Sung2 ; Granick3 the transient distributions of $\delta_{t}$ are presented as a sort of universal function, $p(\delta_{t})$, which decays with (almost) exponential law independently of $t$. Sometimes the transient interval is so wide that the exponential-Gaussian crossover is not accessible to direct observation. The question then arises as to what extent the low dimensional systems addressed in this work may share that property. In the notation of Sec. V, this corresponds to determining conditions for the fitting parameter $\beta$ to be constant with $\beta\neq 2$ (non-Gaussian transient) over a wide range of $t$. Note that in the models of Secs. III and IV the onset of the normal diffusion and the Gaussian statistics regimes, which delimit the NGND transient, are governed by the sole angular relaxation time $\tau$ ($\tau_{*}$ being proportional to $\tau$). In the standard formalism of the central limit theorem this would correspond to saying that the higher cumulants of the displacement distribution vanish more slowly with the observation time than the second moment approaches its linear growth Feller . In this regard, more interesting are systems where the NGND transients are delimited by two distinct time constants.

Figure 3: Diffusion of a pointlike overdamped particle in a randomly fluctuating channel, Eq. (19) with $\varepsilon(t)$ representing the Ornstein-Uhlenbeck process of Eq. (20). Simulation parameters are: $y_{L}=1$, $x_{L}=\pi$, $D_{0}=1$, $D_{\varepsilon}=3$, $\tau=50$, and random $x(0)$, $y(0)$ and $\varepsilon(0)$. In the main panel, rescaled displacement pdf’s are shown for increasing observation times, $t$, in the normal diffusion regime, see data for $\langle\Delta x^{2}\rangle$ vs. $t$ in the inset. The fitting $\beta$ values have been obtained from Eq. (17) with $D=0.335$. At short $t$, the statistics of our data is not good enough to resolve the $t$ dependence of $\beta$.

A case study is provided by the diffusion of a standard Brownian particle in a confined geometry chemphyschem , the simplest example being a chainlike structure of cavities connected by narrow pores PRR . In 2D, the dynamics of an overdamped symmetric Brownian particle in a channel is modeled by two simple Langevin equations $\dot{x}=\xi_{x}(t),~{}~{}~{}\dot{y}=\xi_{y}(t),$ (18) where $x$ and $y$ are the coordinates of the particle’s center of mass and the translational fluctuations $\xi_{x}(t)$ and $\xi_{y}(t)$ are zero-mean, white Gaussian noises with autocorrelation functions $\langle\xi_{i}(t)\xi_{j}(0)\rangle=2\delta_{i,j}D_{0}\delta(t)$ and $i,j=x,y$.
The strength of $\xi_{i}(t)$ coincides with the free-particle diffusion constant, $D_{0}$, which is typically proportional to the temperature of the suspension fluid. However, contrary to Secs. III and IV, the particle is now confined to diffuse inside a narrow corrugated channel with axis oriented along $x$ and symmetric walls, $y=\pm w(x,t)$. Following Refs. chemphyschem , we assumed for simplicity the sinusoidally modulated channel half-width $w(x,t)=(y_{L}/2)[\varepsilon^{2}+(1-\varepsilon^{2})\sin^{2}(\pi x/x_{L})].$ (19) Here $y_{L}$ and $x_{L}$ are respectively the maximum width and the length of the unit channel cell, sketched in Fig. 3, and $\varepsilon^{2}y_{L}$ is the fluctuating width of the pores located at $x\,{\rm mod}\,(x_{L})=0$. In the case of a pointlike particle, hydrodynamic effects PNAS can be ignored. Moreover, let the width of the channel pores be time modulated without affecting the particle’s free diffusion constant, $D_{0}$, for instance, by applying a tunable external gating potential. Therefore, when integrating the Langevin Eq. (18), we neglected the particle radius with respect to $x_{L}$ and $y_{L}$ (pointlike particle approximation) and imposed reflecting boundary conditions at the walls ourPRL . In Ref. PRR we considered the case when $\varepsilon(t)$ is an Ornstein-Uhlenbeck process $\dot{\varepsilon}=-\varepsilon/\tau+\sqrt{D_{\varepsilon}/\tau^{2}}~{}\xi_{\varepsilon}(t),$ (20) where $\xi_{\varepsilon}(t)$ is another zero-mean Gaussian noise, independent of $\xi_{x}(t)$ and $\xi_{y}(t)$ and delta-correlated, $\langle\xi_{\varepsilon}(t)\xi_{\varepsilon}(0)\rangle=2\delta(t)$. The channel pores open and close randomly in time with average width $\langle\varepsilon^{2}\rangle y_{L}$, where $\langle\varepsilon^{2}\rangle$ coincides with the variance of $\varepsilon(t)$, $D_{\varepsilon}/\tau=(\pi/2)\langle|\varepsilon|\rangle^{2}$. This channel model manifests prominent NGND, as illustrated in Fig. 3. The displacement distributions are Gaussian for both very short (not shown, see Ref. PRR ) and asymptotically long observation times. Indeed, the particle diffuses freely with constant $D_{0}$ inside each channel cell for $t<\tau_{L}$, with $\tau_{L}=x_{L}^{2}/8D_{0}$, before escaping into an adjacent cell after a mean exit time $\tau_{0}=\tau_{L}/\langle|\varepsilon|\rangle$ PRR . For $t\gg\tau_{0}$, the $x$ directed diffusion process can thus be described as a random walker with spatial step $x_{L}$ and time constant $\tau_{0}$; memory of the i.c. adopted in our simulations is completely erased. The ensuing mean square displacement then follows the Einstein law with approximate diffusion constant $D=x_{L}^{2}/2\tau_{0}$ JCP137 . The displacement distribution assumes its asymptotic Gaussian profile only for observation times much larger than the correlation time of the pore fluctuations, $t\gg\tau$. In the simulations of Fig. 3, we set $\tau\gg\tau_{0}$, which thus defines a NGND transient interval, $(\tau_{0},\tau)$, where diffusion is normal, but the displacement distributions are non-Gaussian. By taking such an interval wide enough, the transient pdf’s, $p(\delta_{t},t)$, grow insensitive to $t$, and so do the fitted $\beta$ values. This way, we mimic the situation reported in the literature Granick1 ; Granick2 ; Bhatta ; Sung1 ; Sung2 ; Granick3 for more complex systems. This prescription for NGND control is independent of the detailed statistics of the pore fluctuations.
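A bare-bones implementation of Eqs. (18)-(20) is sketched below (ours; parameters as in Fig. 3). Wall reflection is approximated by rejecting moves that end outside the channel, a crude but common shortcut at small $dt$; a production code would implement proper specular reflection.

```python
import numpy as np

rng = np.random.default_rng(6)
yL, xL, D0, De, tau = 1.0, np.pi, 1.0, 3.0, 50.0   # parameters of Fig. 3
dt, n_traj, n_steps = 1e-3, 2000, 100_000

def half_width(x, eps):
    # Channel half-width of Eq. (19); eps(t) sets the fluctuating pore size
    return 0.5 * yL * (eps**2 + (1 - eps**2) * np.sin(np.pi * x / xL) ** 2)

eps = rng.normal(0.0, np.sqrt(De / tau), n_traj)   # stationary OU start
x = rng.uniform(0.0, xL, n_traj)
y = rng.uniform(-1.0, 1.0, n_traj) * half_width(x, eps)
x0 = x.copy()

for _ in range(n_steps):
    xn = x + rng.normal(0.0, np.sqrt(2 * D0 * dt), n_traj)
    yn = y + rng.normal(0.0, np.sqrt(2 * D0 * dt), n_traj)
    inside = np.abs(yn) <= half_width(xn, eps)     # reject wall crossings
    x = np.where(inside, xn, x)
    y = np.where(inside, yn, y)
    # Ornstein-Uhlenbeck pore modulation, Eq. (20)
    eps += -eps / tau * dt + np.sqrt(2 * De * dt / tau**2) * rng.normal(0, 1, n_traj)

t = n_steps * dt
print("effective D =", float(((x - x0) ** 2).mean() / (2 * t)))
```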
For instance, one can consider the case of a periodically time modulated pore width with $\varepsilon(t)=\delta_{\varepsilon}\cos(t/\tau).$ (21)

Figure 4: Diffusion of a pointlike overdamped particle in a corrugated channel, Eq. (19), with $\varepsilon(t)$ representing the periodic time modulation of Eq. (21). Simulation parameters are: $y_{L}=1$, $x_{L}=\pi$, $D_{0}=1$, $\delta^{2}_{\varepsilon}=0.03$, and (a) $\tau=100$, (b) $\tau=500$. The initial conditions were set by imposing uniform distributions of the particle’s initial position and of the initial phase of $\varepsilon(t)$. In the main panels, rescaled displacement pdf’s are plotted for increasing $t$, in the normal diffusion regime. The values of $\beta$ in the legend have been obtained by fitting Eq. (17) with $D$ computed numerically from the asymptotes, $\langle\Delta x^{2}\rangle=2Dt$, drawn in the insets.

The simulation data plotted in Fig. 4(a) confirm the existence of the NGND transient interval $(\tau_{0},\tau)$, where $\tau$ is now the period of the sinusoidal function $\varepsilon(t)$ of Eq. (21) and $\langle\varepsilon^{2}\rangle=\delta_{\varepsilon}^{2}/2=(\pi^{2}/8)\langle|\varepsilon|\rangle^{2}$. For a quantitative comparison with Fig. 3, note that for the simulation parameters of Fig. 4 both time scales, $\tau_{0}$ and $\tau$, are larger by approximately the same factor of two. The NGND phenomenon in Fig. 4(a) is apparent. The rescaled pdf’s shown there are clearly non-Gaussian. Fat oscillating tails arise for $t>\tau$, as an effect of the spatial periodicity of the channel. Indeed, the oscillation period of the plotted distributions is of the order $x_{L}/\sqrt{t}$. This effect, detectable also in Figs. 3 and 4(b), plays here a marginal role. Indeed, on disregarding such oscillations, the tails of the non-Gaussian distributions can still be fitted by the function of Eq. (17) for an appropriate choice of the free parameter $\beta$. However, in Fig. 4(a) and in contrast with Fig. 3, the deterministic nature of the pore time modulation allowed us to resolve a weak $t$-dependence of $\beta$, without increasing the statistical accuracy of our simulation runs. It is important to remark that such a residual $t$-dependence of the transient rescaled pdf’s can be further suppressed by widening the transient interval $(\tau_{0},\tau)$. An example is shown in panel (b) of Fig. 4, where the simulation parameters are the same as in panel (a), except for the modulation period, $\tau$, which is five times larger. We remark that, for the simple model at hand, this result is analytically predictable upon reformulating the particle’s dynamics in the dimensionless units, $x\to x/x_{L}$, $y\to y/x_{L}$ and $t\to t/\tau$.

## VII Diffusion in Convection Rolls

We finally address the reasons why transients under NGND conditions can be either lepto- or platykurtic. In Secs. III and IV we looked at two simple models, which exhibit distributions of the one or the other type, respectively, with $1\leq\beta\leq 2$ and $\beta\geq 2$. We consider now a slightly more complicated 2D system, which can undergo both types of transient, depending on the choice of its dynamical parameters. The numerical analysis of its diffusion properties will help us shed light on the different microscopic mechanisms responsible for these two types of transients, thus justifying the generalization of the NGND notion proposed in this work.

Figure 5: Diffusion in the periodic convective flow pattern of Eq.
(22): (a) Flow cell unit consisting of four counter-rotating subcells; (b) The asymptotic diffusion constant, $D$, vs. $D_{0}$: the numerical data (dots) are compared with the analytical prediction discussed in the text, see Eq. (24). The stream function parameters are $U_{0}=1$ and $L=2\pi$, and the diffusion scale is $D_{L}=U_{0}L/2\pi$.

To this purpose we investigated the diffusion of a pointlike overdamped particle of coordinates $x$ and $y$, suspended in a stationary planar laminar flow with periodic center-symmetric stream function Gollub1 ; Gollub2 ; Rosen ; Pomeau ; Saintillan ; Vulpiani ; Neufeld , $\psi(x,y)=({U_{0}L}/{2\pi})\sin({2\pi x}/{L})\sin({2\pi y}/{L}),$ (22) where $U_{0}$ is the maximum advection speed and $L$ the wavelength of the flow unit cell. The ensuing particle’s dynamics can be formulated in terms of two driven Langevin equations, $\dot{x}=u_{x}+\xi_{x}(t),~{}~{}~{}~{}\dot{y}=u_{y}+\xi_{y}(t),$ (23) with the vector $(u_{x},u_{y})=(\partial_{y},-\partial_{x})\psi$ representing the local advection velocity. As illustrated in Fig. 5(a), this defines four counter-rotating flow subcells, also termed convection rolls. The translational noises, $\xi_{i}(t)$ with $i=x,y$, are stationary, independent Gaussian noises with auto-correlation functions $\langle\xi_{i}(t)\xi_{j}(0)\rangle=2D_{0}\delta_{ij}\delta(t)$. They can be regarded as modeling homogeneous, isotropic thermal fluctuations. In our simulations, the flow parameters, $U_{0}$ and $L$, were kept fixed, as they define the natural length and time units, $L$ and $\Omega_{L}^{-1}=L/2\pi U_{0}$, respectively. Therefore, the only tunable parameter left is the noise strength, $D_{0}$. Having in mind a stationary system, we assumed uniform distributions of the initial particle’s coordinates, $x(0)$ and $y(0)$. Indeed, due to the incompressibility of $(u_{x},u_{y})$, in the presence of translational noise, a particle’s trajectory eventually fills up the flow unit cell uniformly. In this regard, we recall that, in the absence of noise, the advection period tends to diverge as the closed trajectory of a dragged particle runs close to the subcell boundaries Weiss ; hence, for $D_{0}=0$ the particle gets trapped in a convection roll Rosen ; Pomeau ; Neufeld .

Figure 6: Diffusion mechanisms in the periodic flow pattern of Eq. (22): mean first-exit time, $T_{D}$, vs. thermal noise, $D_{0}$. Convection flow parameters are $U_{0}=1$ and $L=2\pi$. The asymptotic solid lines on the left and right are respectively $T_{0}/4$ and $T_{0}$, with $T_{0}$ given in Eq. (25); the horizontal dashed line represents the advection period, $T_{L}$. Three $D_{0}$ intervals, shaded in different colors, are separated by $D_{*}$, obtained by imposing $T_{D}=T_{L}$, and $D_{L}$, defined in Fig. 5. In each interval, the range of $\beta$ is reported for the reader’s convenience; no NGND was detected for $D_{0}>D_{L}$.

Particle transport in such a flow pattern has been studied under diverse conditions and a rich phenomenology has emerged Gollub1 ; Gollub2 ; Rosen ; Pomeau ; Saintillan ; Vulpiani ; Neufeld . We focus here on the Brownian diffusion of a passive particle under the simultaneous action of translational fluctuations and advective drag. A first important feature of this system is illustrated in Fig. 5(b), where we plotted the dependence of the asymptotic diffusion constant, $D$, on the noise intensity (and particle’s no-flow diffusion constant), $D_{0}$.
The mean square displacement is an asymptotically linear function of time for any choice of $D_{0}$. However, on increasing $D_{0}$, the diffusion constant, $D$, changes from $D=\kappa\sqrt{D_{L}D_{0}},$ (24) for $D_{0}<D_{L}$ (advective diffusion), to $D=D_{0}$, for $D_{0}>D_{L}$ (thermal diffusion), an abrupt crossover occurring at $D_{0}\simeq D_{L}$, with $D_{L}=U_{0}L/2\pi$ RR2 . The constant $\kappa$ depends on the geometry of the flow cells Rosen ; Pomeau ; for a 2D array of square counter-rotating convection rolls, $\kappa\simeq 1.06$ Rosen . This property can be explained with the fact that for $D_{0}<D_{L}$ the spatial diffusion occurs along the separatrices delimiting the four subcells of the stream function, $\psi(x,y)$, of Fig. 5(a). Stated otherwise, the diffusion process is regulated by the advection velocity field ourPOF . Vice versa, for $D_{0}>D_{L}$ the effects of advection on the particle’s diffusion become negligible. Not surprisingly, we detected NGND only for $D_{0}<D_{L}$. The diffusion process is governed by two competing mechanisms: (i) Particle circulation inside the counter-rotating subcells of $\psi(x,y)$. The corresponding vorticity, $\nabla\times{\mathbf{u}}=-\nabla^{2}\psi$, has a maximum, $\Omega_{L}=2\pi U_{0}/L$, at the center of the subcells. This defines the time scale, $T_{L}=2\pi/\Omega_{L}$, of the advection period, that is, an estimate of the average time taken by advection to drag the particle around a convection roll; (ii) Diffusion across the convection rolls. The mean first-exit time, $T_{D}$, of a Brownian particle out of a unit convection cell of $\psi(x,y)$ can be easily computed for $D_{0}\gg D_{L}$ simply by ignoring advection Gardiner , $T_{0}=\frac{1}{D_{0}}\left(\frac{L}{2\pi}\right)^{2}\left(\frac{4}{\pi}\right)^{4}~{}\sum_{m,n}^{(\rm odd)}\frac{1}{m^{2}}\frac{1}{n^{2}}\frac{1}{m^{2}+n^{2}},$ (25) where the summation is restricted to the odd values of $m$ and $n$. In the opposite limit, $D_{0}\ll D_{L}$, $T_{D}$ is just one fourth of $T_{0}$, because, as anticipated above, at very low noise levels, the exit process consists of a slow activation mechanism, which takes the particle from the center of a subcell to its boundaries, followed by a relatively faster flow-driven propagation along the grid formed by the subcell separatrices.

Figure 7: Leptokurtic, $D_{0}<D_{*}$ (a), and platykurtic transients, $D_{*}<D_{0}<D_{L}$ (b), in a periodic array of 2D convection rolls. $D_{0}$, $t$ and $\beta$ are reported in the legends; convection flow parameters are $U_{0}=1$ and $L=2\pi$. All transient pdf’s were taken in the regime of normal diffusion, see insets.

Thanks to thermal fluctuations, the Brownian particle jumps from roll to roll, thus diffusing in the $x,y$ plane. Its coarse-grained motion can be modeled as a discrete random walker with time constant $T_{D}$ Gardiner . Therefore, for large observation times, $t>T_{D}$, the particle executes normal diffusion. For the simulation parameters adopted in Fig. 6, the crossover between the low- and high-noise estimates of $T_{D}$, respectively $T_{0}/4$ and $T_{0}$, occurs in the region of advective diffusion, $D_{0}<D_{L}$. More remarkably, it appears to correspond to the condition $T_{D}=T_{L}$, namely, when the two competing time scales of the particle’s dynamics inside a convection roll coincide. Such a condition defines a unique $D_{0}$ value, $D_{*}$, which splits the advective diffusion domain into two distinct subdomains, respectively, $D_{0}<D_{*}$ and $D_{*}<D_{0}<D_{L}$.
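Both subdomains can be explored with a minimal integration of Eqs. (22)-(23), sketched below (ours; $U_{0}=1$ and $L=2\pi$ as in Figs. 5-7). The two noise strengths are meant to fall on either side of $D_{*}$, but where $D_{*}$ actually lies must be checked against Fig. 6; the printed values are only indicative of the full transients mapped in Fig. 7.

```python
import numpy as np

rng = np.random.default_rng(7)
U0, L = 1.0, 2 * np.pi
DL = U0 * L / (2 * np.pi)            # advective diffusion scale of Fig. 5
k = 2 * np.pi / L
dt, n_traj, n_steps = 2e-3, 5000, 25_000

def drift(x, y):
    # Advection velocity (u_x, u_y) = (d/dy, -d/dx) of psi, Eq. (22)
    ux = U0 * np.sin(k * x) * np.cos(k * y)
    uy = -U0 * np.cos(k * x) * np.sin(k * y)
    return ux, uy

for D0 in (0.01 * DL, 0.3 * DL):     # intended as D0 < D_* and D_* < D0 < DL
    x, y = rng.uniform(0, L, (2, n_traj))
    x0 = x.copy()
    for _ in range(n_steps):
        ux, uy = drift(x, y)
        x += ux * dt + rng.normal(0, np.sqrt(2 * D0 * dt), n_traj)
        y += uy * dt + rng.normal(0, np.sqrt(2 * D0 * dt), n_traj)
    dx, t = x - x0, n_steps * dt
    m2, m4 = (dx**2).mean(), (dx**4).mean()
    # Positive (negative) mu_x signals a lepto(platy)kurtic transient
    print(f"D0/DL = {D0/DL:4.2f}:  D = {m2/(2*t):6.3f},  mu_x = {m4/m2**2-3:+.2f}")
```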
Numerical simulations clearly show evidence of NGND for $t>T_{D}$, in close analogy with the models of Secs. III and IV, except for an important peculiarity: the transient displacement distributions displayed in Fig. 7 turn out to be leptokurtic, with $1\leq\beta\leq 2$, for $D_{0}<D_{*}$, and platykurtic, with $\beta\geq 2$, for $D_{*}<D_{0}<D_{L}$. This can be explained by the fact that here the displacement length correlation, see Sec. II, is dominated by thermal noise in the lower $D_{0}$ interval, where $T_{D}>T_{L}$, and by advection in the higher $D_{0}$ interval, where $T_{L}>T_{D}$. Accordingly, in the formulation of Secs. III and IV, the role of the transient time, $\tau$, is played respectively by $T_{D}$ for $D_{0}<D_{*}$ and by $T_{L}$ for $D_{0}>D_{*}$. For $D_{*}\ll D_{0}<D_{L}$, the onset of normal diffusion and the exponential-Gaussian transitions are thus regulated by two distinct time scales, respectively $T_{0}$ and $T_{L}$. Indeed, the slowest time modulation of the particle's dynamics is due to the advective drag inside the convection rolls. By generalizing our discussion of the NGND of a free JP, Sec. IV, we conclude that such rotational dynamics must be responsible for the negative excess kurtosis of the unidirectional particle displacements reported in Fig. 7(b). The range of the $\beta$ values, fitted according to the procedure of Sec. V, is shown in Fig. 6 for each $D_{0}$ interval. In conclusion, the NGND transients of this model can change from leptokurtic to platykurtic simply by raising the strength of the thermal noise. Most remarkably, this and related diffusive systems are easily accessible to direct experimental demonstration Gollub2 ; Saintillan . ## VIII Conclusions In this work we have investigated NGND transients Granick1 ; Granick2 ; Bhatta ; Sung1 ; Sung2 ; Granick3 in low dimensional stochastic processes. These become apparent when the Einstein law, which characterizes normal diffusion, sets in for observation times, $t$, shorter than those required for the asymptotic Gaussian displacement statistics predicted by the central limit theorem to emerge. A wide class of low dimensional systems manifests NGND under the condition that their local dynamics is subjected to time-correlated modulations. Time modulation can affect the effective particle geometry (e.g., its cross-section in the diffusion direction, Sec. III), its dynamics (e.g., its isotropic self-propulsion mechanism, Sec. IV), or its confinement geometry (e.g., the cross-section of the directed channel containing the particle, Sec. VI). In all cases discussed here the system's modulation is time correlated with time constant, $\tau$, larger than any other microscopic dynamical time scale. We then noticed that NGND becomes more prominent when the onset times of normal diffusion and of the Gaussian displacement statistics are well separated, with the former much lower than the latter. This situation is well illustrated by the fluctuating narrow channel of Sec. VI, where normal diffusion occurs for $t$ larger than the mean pore crossing time and the Gaussian statistics sets in for $t$ larger than the tunable correlation time of the pore modulation. In low dimensional systems, NGND features exhibit a smooth dependence on the observation time. The transient rescaled displacement distributions are not “universal” over large $t$ intervals, in sharp contrast with the extended disordered systems first studied in the literature Granick1 ; Granick2 ; Bhatta ; Sung1 ; Sung2 ; Granick3 .
To quantify the $t$-dependence of the transient pdf's we introduced an ad hoc fitting function, $p_{\beta}(\delta_{t})$, which, by construction, reproduces the normal diffusion law, with diffusion constant obtained by direct observation, and fits the numerical curves $p(\delta_{t},t)$ by tuning only one free parameter, $\beta$. Actually, in Sec. VI we noticed that by increasing the gap between the two distinct time scales defining the transient interval, the $t$-dependence of $\beta$ is suppressed, with $\beta$ tending to one (Laplace distribution). This situation more closely resembles the current description of the NGND phenomenon in complex systems. However, NGND in low dimensional systems has the advantage of being easily controllable by tuning the time modulation of the microscopic dynamics. For instance, the two simplest models discussed here, the free ellipsoidal and Janus particles, exhibit remarkably different transient distributions, respectively with fat, $1\leq\beta\leq 2$, and thin tails, $\beta\geq 2$. Platykurtic transient distributions are peculiar to systems with rotational modulation of the diffusion process, because, as discussed in Sec. II, this can cause negative time correlations of the unidirectional displacement lengths; hence the negative values of the excess kurtosis. This conclusion is corroborated by Brownian diffusion in the periodic array of 2D convection rolls discussed in Sec. VII. In contrast with the elementary models of Secs. III and IV, there the physical mechanism determining the transient time varies depending on the strength of the thermal fluctuations. At low temperatures, the transient dynamics of the particle is governed by isotropic random jumps from convection roll to convection roll, largely insensitive to the details of its trajectory inside the individual rolls. On the contrary, at higher temperatures, but still in the advective diffusion regime, roll jumping grows faster compared with the circulation inside the rolls; transients are then dominated by a rotational dynamics, which causes a negative excess kurtosis of the particle's displacements. We conclude by mentioning a number of open issues we intend to address in the near future. (i) We showed that low-dimensional systems exhibit NGND transients for observation times not too much larger than their largest intrinsic relaxation time. It remains to be seen how one can make such transient time intervals wider, for instance, by means of a hierarchy of additional stochastic degrees of freedom. (ii) We wonder to what extent our discussion of discrete NGND in Sec. II is related to the formalism of large deviations theory LDT . This might provide an alternative phenomenological description of the NGND transients, also applicable to higher dimensional systems. (iii) NGND transients in laminar flows are of great relevance in microfluidics. The results reviewed in Sec. VII will be published in a more detailed report to appear soon JFM . We showed that leptokurtic (platykurtic) transients are an effect of the mostly thermal (advective) tracer diffusion. The question then arises as to how this explanation translates to the case of turbulent flows, a recurrent problem in biological systems. (iv) Finally, it is conceivable that persistent NGND transients impact how active micro-swimmers interact with each other or with confining walls or other obstacles, to form all kinds of clustered structures. The implications of such a mechanism in the technology of active matter need further investigation.
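A plausible reading of the fitting function $p_{\beta}(\delta_{t})$ introduced above, consistent with the quoted limits ($\beta=1$ Laplace, $\beta=2$ Gaussian, fat tails for $\beta<2$ and thin tails for $\beta>2$), is the exponential-power (generalized normal) family with variance pinned to the observed normal diffusion law. The sketch below rests on that assumption; it is our reconstruction, not the authors' definition:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize_scalar

def p_beta(delta, t, D, beta):
    """Exponential-power density with variance fixed at 2*D*t, so that
    it reproduces the observed normal diffusion law by construction.
    (Assumed form; the paper only specifies the Laplace/Gaussian limits.)"""
    sigma = np.sqrt(2.0*D*t*gamma(1.0/beta)/gamma(3.0/beta))
    return beta/(2.0*sigma*gamma(1.0/beta))*np.exp(-(np.abs(delta)/sigma)**beta)

def fit_beta(samples, t, D):
    """Fit the single free parameter beta by maximum likelihood."""
    nll = lambda b: -np.sum(np.log(p_beta(samples, t, D, b)))
    return minimize_scalar(nll, bounds=(0.5, 6.0), method="bounded").x
```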
###### Acknowledgements. Y.L. is supported by the NSF China under grant No. 11875201 and No. 11935010. P.K.G. is supported by SERB Start-up Research Grant (Young Scientist) No. YSS/2014/000853 and the UGC-BSR Start-Up Grant No. F.30-92/2015. ## References * (1) B. Wang, S. M. Anthony, S. C. Bae, and S. Granick, Anomalous yet Brownian, Proc. Natl. Acad. Sci. U.S.A. 106, 15160 (2009). * (2) B. Wang, J. Kuo, C. Bae, and S. Granick, When Brownian diffusion is not Gaussian, Nat. Mater. 11, 481 (2012). * (3) S. Bhattacharya, D. K. Sharma, S. Saurabh, S. De, A. Sain, A. Nandi, and A. Chowdhury, Plasticization of poly(vinylpyrrolidone) thin films under ambient humidity: Insight from single-molecule tracer diffusion dynamics, J. Phys. Chem. B 117, 7771 (2013). * (4) J. Kim, C. Kim, and B. J. Sung, Simulation study of seemingly Fickian but heterogeneous dynamics of two dimensional colloids, Phys. Rev. Lett. 110, 047801 (2013). * (5) G. Kwon, B. J. Sung, and A. Yethiraj, Dynamics in crowded environments: Is non-Gaussian Brownian diffusion normal? J. Phys. Chem. B 118, 8128 (2014). * (6) J. Guan, B. Wang, and S. Granick, Even hard-sphere colloidal suspensions display Fickian yet non-Gaussian diffusion, ACS Nano 8, 3331 (2014). * (7) C. Gardiner, Stochastic Methods: A Handbook for the Natural and Social Sciences (Springer, Berlin, 2009). * (8) E. R. Weeks, J. C. Crocker, A. C. Levitt, A. Schofield, D. A. Weitz, Three-dimensional direct imaging of structural relaxation near the colloidal glass transition, Science 287, 627 (2000). * (9) J. D. Eaves and D. R. Reichman, Spatial dimension and the dynamics of supercooled liquids, Proc. Natl. Acad. Sci. U.S.A. 106, 15171 (2009). * (10) K. C. Leptos, J. S. Guasto, J. P. Gollub, A. I. Pesci, and R. E. Goldstein, Dynamics of enhanced tracer diffusion in suspensions of swimming eukaryotic microorganisms, Phys. Rev. Lett. 103, 198103 (2009). * (11) W. K. Kegel and A. van Blaaderen, Direct observation of dynamical heterogeneities in colloidal hard-sphere suspensions, Science 287, 290 (2000). * (12) P. Chaudhuri, L. Berthier, and W. Kob, Universal nature of particle displacements close to glass and jamming transitions, Phys. Rev. Lett. 99, 060604 (2007). * (13) S. K. Ghosh, A. G. Cherstvy, D. S. Grebenkov and R. Metzler, Anomalous, non-Gaussian tracer diffusion in crowded two-dimensional environments, New J. Phys. 18, 013027 (2016). * (14) W. He, H. Song, Y. Su, L. Geng, B. J. Ackerson, H. B. Peng, and P. Tong, Dynamic heterogeneity and non-Gaussian statistics for acetylcholine receptors on live cell membrane, Nat. Comm. 7, 11701 (2016). * (15) M. V. Chubynsky and G. W. Slater, Diffusing diffusivity: A model for anomalous, yet Brownian, diffusion, Phys. Rev. Lett. 113, 098302 (2014). * (16) A. G. Cherstvy and R. Metzler, Anomalous diffusion in time-fluctuating non-stationary diffusivity landscapes, Phys.Chem.Chem.Phys. 18, 23840 (2016). * (17) R. Jain and K. L. Sebastian, Diffusion in a crowded, rearranging environment, J. Phys. Chem. B 120, 3988 (2016). * (18) R. Jain and K. L. Sebastian, Diffusing diffusivity: A new derivation and comparison with simulations, J. Chem. Sci. 129, 929 (2017). * (19) N. Tyagi and B. J. Cherayil, Non-Gaussian Brownian diffusion in dynamically disordered thermal environments, J. Phys. Chem. B 121, 7204 (2017). * (20) L. Luo and M. Yi, Non-Gaussian diffusion in static disordered media, Phys. Rev. E 97, 042122 (2018). * (21) A. V. Chechkin, F. Seno, R. Metzler, and I. M. 
Sokolov, Brownian yet non-Gaussian diffusion: From superstatistics to subordination of diffusing diffusivities, Phys. Rev. X 7, 021002 (2017). * (22) J. Slezak, R. Metzler, and M. Magdziarz, Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion, New J. Phys. 20, 023026 (2018). * (23) Y. Li, F. Marchesoni, D. Debnath, and P. K. Ghosh, Non-Gaussian normal diffusion in a fluctuating corrugated channel, Phys. Rev. Res. 1, 033003 (2019). * (24) P. Hänggi and F. Marchesoni, Artificial Brownian motors: Controlling transport on the nanoscale, Rev. Mod. Phys. 81, 387 (2009). * (25) R. Lipowsky, Generic interactions of flexible membranes, in Handbook of Biological Physics, (R. Lipowsky and E. Sackmann, editors) Vol. 1, Ch. 11 (Elsevier, 1995). * (26) P. S. Burada, P. Hänggi, F. Marchesoni, G. Schmid, and P. Talkner, Diffusion in confined geometries, ChemPhysChem 10, 45 (2009). * (27) X. Yang, C. Liu, Y. Li, F. Marchesoni, P. Hänggi, and H. P. Zhang, Hydrodynamic and entropic effects on colloidal diffusion in corrugated channels, Proc. Natl. Acad. Sci. U.S.A. 114, 9564 (2017). * (28) V. Sposini, A. V. Chechkin, F. Seno, G. Pagnini, and R. Metzler, Random diffusivity from stochastic equations: Comparison of two models for Brownian yet non-Gaussian diffusion, New J. Phys. 20, 043044 (2018). * (29) L. Luo and M. Yi, Quenched trap model on the extreme landscape: The rise of subdiffusion and non-Gaussian diffusion, Phys. Rev. E 100, 042136 (2019). * (30) L. Luo and M. Yi, Ergodicity recovery of random walk in heterogeneous disordered media, Chin. Phys. B 29, 050503 (2020). * (31) for a review, see H. C. Berg, Random Walk in Biology (Princeton University Press, 1984). * (32) F. Perrin, Mouvement brownien d’un ellipsoide - I. Dispersion diélectrique pour des molécules ellipsoidales, J. Phys. Radium V, 497 (1934); Mouvement Brownien d’un ellipsoide (II). Rotation libre et dépolarisation des fluorescences. Translation et diffusion de molécules ellipsoidales ibid. VII, 1 (1936). * (33) Y. Han, A. M. Alsayed, M. Nobili, J. Zhang, T. C. Lubensky, and A. G. Yodh, Brownian motion of an ellipsoid, Science 314, 626 (2006). * (34) P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations (Springer, 1992). * (35) S. R. Aragón and R. Pecora, General theory of dynamic light scattering from cylindrically symmetric particles with translational-rotational coupling, J. Chem. Phys. 82, 5346 (1985). * (36) S. Leitmann, F. Höfling, and T. Franosch, Dynamically crowded solutions of infinitely thin Brownian needles, Phys. Rev. E 96, 012118 (2017). * (37) S. Prager, Interaction of rotational and translational diffusion, J. Chem. Phys. 23, 2404 (1955). * (38) S. Jiang and S. Granick (Eds.), Janus particle synthesis, self-assembly and applications (RSC Publishing, Cambridge, 2012). * (39) A. Walther and A. H. E. Müller, Janus particles: Synthesis, self-assembly, physical properties, and applications, Chem. Rev. 113, 5194 (2013). * (40) M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Hydrodynamics of soft active matter, Rev. Mod. Phys. 85, 1143 (2013). * (41) J. Elgeti, R. G. Winkler, and G. Gompper, Physics of microswimmers, single particle motion and collective behavior: a review, Rep. Progr. Phys. 78, 056601 (2015). * (42) see e.g. Smart Drug Delivery System, edited by A. D. Sezer (IntechOpen, 2016). DOI: 10.5772/60475 * (43) J. Wang, Nanomachines: Fundamentals and Applications (Wiley-VCH, Weinheim, 2013). * (44) G.
Volpe, I. Buttinoni, D. Vogt, H.-J. Kümmerer, and C. Bechinger, Microswimmers in patterned environments, Soft Matter, 7, 8810 (2011). * (45) P. K. Ghosh, V. R. Misko, F. Marchesoni, and F. Nori, Self-propelled Janus particles in a ratchet: Numerical simulations, Phys. Rev. Lett. 110, 268301 (2013). * (46) S. van Teeffelen and H. Löwen, Dynamics of a Brownian circle swimmer, Phys. Rev. E 78, 020101(RC) (2008). * (47) D. Debnath, P. K. Ghosh, Y. Li, F. Marchesoni, and B. Li, Diffusion of eccentric microswimmers, Soft Matt. 12, 2017 (2016). * (48) C. Kurzthaler, S. Leitmann, and T. Franosch, Intermediate scattering function of an anisotropic active Brownian particle, Sci. Rep. 6, 36702 (2016). * (49) J. R. Howse, R. A. L. Jones, A. J. Ryan, T. Gough, R. Vafabakhsh, and R. Golestanian, Self-motile colloidal particles: From directed propulsion to random walk, Phys. Rev. Lett. 99, 048102 (2007). * (50) B. ten Hagen, S. van Teeffelen, H. Löwen, Non-Gaussian behaviour of a self-propelled particle on a substrate, Condens. Matter Phys. 12, 725 (2009). * (51) X. Ao, P. K. Ghosh, Y. Li, G. Schmid, P. Hänggi, and F. Marchesoni, Diffusion of chiral Janus particles in a sinusoidal channel, EPL 109, 10003 (2015). * (52) X. Zheng, B. ten Hagen, A. Kaiser, M. Wu, H. Cui, Z. Silber-Li, and H. Löwen, Non-Gaussian statistics for the motion of self-propelled Janus particles: experiment versus theory, Phys. Rev. E 88, 032304 (2013). * (53) D. Debnath, P. K. Ghosh, V. R. Misko, Y. Li, F. Marchesoni, and F. Nori, Enhanced motility in a binary mixture of active nano/microswimmers, Nanoscale 12, 9717 (2020). * (54) W. Feller, An Introduction To Probability Theory And Its Applications, Vol. 2 (Wiley, New York, 1991). * (55) L. Bosi, P. K. Ghosh, and F. Marchesoni, Analytical estimates of free Brownian diffusion times in corrugated narrow channels, J. Chem. Phys. 137, 174110 (2012). * (56) T. H. Solomon and J. P. Gollub, Chaotic particle transport in time-dependent Rayleigh-Bénard convection, Phys. Rev. A 38, 6280 (1988). * (57) T. H. Solomon and I. Mezić, Uniform resonant chaotic mixing in fluid flows, Nature (London) 425, 376 (2003). * (58) M. N. Rosenbluth, H. L. Berk, I. Doxas, and W. Horton, Effective diffusion in laminar convective flows, Phys. Fluids, 30, 2636 (1987). * (59) W. Young, A. Pumir, and Y. Pomeau, Anomalous diffusion of tracer in convection rolls, Phys. Fluids A 1, 462 (1989). * (60) Y.-N. Young and M. J. Shelley, Stretch-coil transition and transport of fibers in cellular flows, Phys. Rev. Lett. 99, 058303 (2007); H. Manikantan and D. Saintillan, Subdiffusive transport of fluctuating elastic filaments in cellular flows, Phys. Fluids 25, 073603 (2013). * (61) A. Sarracino, F. Cecconi, A. Puglisi, and A. Vulpiani, Nonlinear response of inertial tracers in steady laminar flows: Differential and absolute negative mobility, Phys. Rev. Lett. 117, 174501 (2016). * (62) C. Torney and Z. Neufeld, Transport and aggregation of self-propelled particles in fluid flows, Phys. Rev. Lett. 99, 078101 (2007). * (63) N. O. Weiss, The expulsion of magnetic flux by eddies, Proc. Roy. Soc. A293, 310 (1966). * (64) Y. Li, L. Li, F. Marchesoni, D. Debnath, and P. K. Ghosh, Active diffusion in convection rolls, Phys. Rev. Res. 2, 013250 (2020). * (65) Q. Yin, Y. Li, F. Marchesoni, T. Debnath, and P. K. Ghosh, Exit times of a Brownian particle out of a convection roll, Phys. Fluids 32, 092010 (2020). * (66) J. Feng and T. G. Kurtz, Large Deviations for Stochastic processes, Mathematical Surveys and Monographs, Vol. 131 (Am.
Math. Society, 2006). * (67) Q. Yin, Y. Li, B. Li, F. Marchesoni, S. Nayak, and P. K. Ghosh, Diffusion transients in convection rolls, to be published in J. Fluid Mech. (2021), DOI: 10.1017/jfm.2020.1127.
# Spectra of strongly Deza graphs Saieed Akbari<EMAIL_ADDRESS>Willem H. Haemers <EMAIL_ADDRESS>Mohammad Ali Hosseinzadeh <EMAIL_ADDRESS>Vladislav V. Kabanov<EMAIL_ADDRESS>Elena V. Konstantinova<EMAIL_ADDRESS>Leonid Shalaginov<EMAIL_ADDRESS>Department of Mathematical Sciences, Sharif University of Technology, Azadi Street, P.O. Box 11155-9415, Tehran, Iran Department of Econometrics and Operations Research, Tilburg University, Tilburg, The Netherlands Faculty of Engineering Modern Technologies, Amol University of Special Modern Technologies, Amol 4616849767, Iran Krasovskii Institute of Mathematics and Mechanics, S. Kovalevskaja st. 16, Yekaterinburg, 620990, Russia Sobolev Institute of Mathematics, Ak. Koptyug av. 4, Novosibirsk, 630090, Russia Novosibirsk State University, Pirogova str. 2, Novosibirsk, 630090, Russia Chelyabinsk State University, Brat’ev Kashirinyh st. 129, Chelyabinsk, 454021, Russia ###### Abstract A Deza graph $G$ with parameters $(n,k,b,a)$ is a $k$-regular graph with $n$ vertices such that any two distinct vertices have $b$ or $a$ common neighbours. The children $G_{A}$ and $G_{B}$ of a Deza graph $G$ are defined on the vertex set of $G$ such that every two distinct vertices are adjacent in $G_{A}$ or $G_{B}$ if and only if they have $a$ or $b$ common neighbours, respectively. A strongly Deza graph is a Deza graph with strongly regular children. In this paper we give a spectral characterisation of strongly Deza graphs, show relationships between eigenvalues, and study strongly Deza graphs which are distance-regular. ###### keywords: Deza graph, eigenvalues, strongly regular graph, divisible design graph, distance-regular graph, cospectral graphs ###### MSC: [2010] 05C50, 05E10, 15A18 ††journal: arXiv ## 1 Introduction A Deza graph $G$ with parameters $(n,k,b,a)$ is a $k$-regular graph of order $n$ for which the number of common neighbours of any two distinct vertices takes just two values, $b$ or $a$, where $b\geqslant a$. Moreover, $G$ is not the complete graph or the edgeless graph. Deza graphs were introduced in [9], and the name was given in [11], where the basics of Deza graph theory were laid and different constructions of Deza graphs were presented. Deza graphs can be considered in terms of matrices. Suppose $G$ is a graph with $n$ vertices, and $M$ is its adjacency matrix. Then $G$ is a Deza graph with parameters $(n,k,b,a)$ if and only if $M^{2}=aA+bB+kI$ for some symmetric $(0,1)$-matrices $A$ and $B$ such that $A+B+I=J$, where $J$ is the all-ones matrix and $I$ is the identity matrix. Let $G$ be a Deza graph with $M$, $A$ and $B$ as above. If $b=a$ we put $A=J-I$ and $B=O$. Then $A$ and $B$ are adjacency matrices of graphs, and the corresponding graphs $G_{A}$ and $G_{B}$ are called the children of $G$. A Deza graph of diameter two which is not a strongly regular graph is called a strictly Deza graph. Deza graphs whose children are a complete multipartite graph and its complement are known as divisible design graphs. Divisible design graphs were studied in [4, 14]. ###### Definition 1 A Deza graph is called a strongly Deza graph if its children are strongly regular graphs. Obviously, divisible design graphs are strongly Deza graphs. In this paper spectral properties of strongly Deza graphs are studied. The paper is organized as follows. First, spectral relationships between Deza graphs and their children are given. Second, a spectral characterization of strongly Deza graphs is presented.
Then, the relationships between eigenvalues are shown for strongly Deza graphs with four and five distinct eigenvalues. We pay special attention to singular strongly Deza graphs, where a graph is said to be singular if and only if zero is one of its eigenvalues. Finally, we consider distance-regular strongly Deza graphs. ## 2 General results In this section we present general results on spectral properties of Deza graphs and strongly regular graphs. Spectral relationships between Deza graphs and their children are given by the following result. ###### Theorem 1 [1, Theorem 3.2] Let $G$ be a Deza graph with parameters $(n,k,b,a)$, $b>a$. Let $M,A,B$ be the adjacency matrices of $G$ and its children, respectively. If $\theta_{1}=k,\theta_{2},\ldots,\theta_{n}$ are the eigenvalues of $M$, then (i) The eigenvalues of $A$ are $\alpha=\cfrac{b(n-1)-k(k-1)}{b-a},\ \alpha_{2}=\cfrac{k-b-\theta_{2}^{2}}{b-a},\ \ldots,\ \alpha_{n}=\cfrac{k-b-\theta_{n}^{2}}{b-a}.$ (ii) The eigenvalues of $B$ are $\beta=\cfrac{a(n-1)-k(k-1)}{a-b},\ \beta_{2}=\cfrac{k-a-\theta_{2}^{2}}{a-b},\ \ldots,\ \beta_{n}=\cfrac{k-a-\theta_{n}^{2}}{a-b}.$ If $b=a$ then $A=J-I$, $B=O$, $G$ is a strongly regular graph and $\theta_{i}^{2}=k-b$ for $i=2,\ldots,n$. If $b\neq a$ then the children $G_{A}$ and $G_{B}$ are regular graphs with degrees $\alpha$ and $\beta$, respectively. ###### Remark 1 By the proof of [1, Theorem 3.2] it follows that the multiplicities of $\alpha_{i}$ and $\beta_{i}$, $2\leqslant i\leqslant n$, are obtained as a summation of the multiplicities of $\pm\theta_{i}$, except in the case $\theta_{n}=-k$, which has multiplicity $1$ when the graph is bipartite (see [3, Proposition 3.4.1]). It is known [11] that Deza graphs are a generalisation of strongly regular graphs in such a way that the number of common neighbours of any pair of distinct vertices in a Deza graph does not depend on the adjacency. Moreover, any Deza graph with parameters $(n,k,b,a)$ is a strongly regular graph with parameters $(n,k,\lambda,\mu)$ if and only if $M=A$, $M=B$ or $b=a$. Then we have $\\{\lambda,\mu\\}=\\{a,b\\}$ and $M^{2}=kI+\lambda M+\mu(J-I-M).$ (1) Note that if $b=a$, the children are not strongly regular graphs, and therefore the graph is not a strongly Deza graph. But it is strongly regular with $\lambda=\mu$. An eigenvalue of the adjacency matrix of a graph is said to be principal if the all-ones vector is not orthogonal to the associated eigenspace, and restricted otherwise. The next theorem is well known (see for example [2, Theorem 1.3.1]) and contains complete information about the spectra of strongly regular graphs in terms of their parameters. ###### Theorem 2 Let $G$ be a strongly regular graph with parameters $(n,k,\lambda,\mu)$ and the eigenvalues $k$, $r$, and $s$. Then the following statements hold. (i) The principal eigenvalue $k$ has the multiplicity 1. (ii) The restricted integer eigenvalues $r,s=\cfrac{(\lambda-\mu)\pm\sqrt{(\lambda-\mu)^{2}+4(k-\mu)}}{2}$ have the multiplicities $f,g=\cfrac{1}{2}\left(n-1\mp\cfrac{(r+s)(n-1)+2k}{r-s}\right).$ (iii) If $r$ and $s$ are not integers, then $r,s=\cfrac{-1\pm\sqrt{n}}{2}$ with the same multiplicities. Actually, this theorem can be obtained as a consequence of Theorem 1, since if $G$ is a strongly regular graph, then $G$ itself and its complement are the children of $G$, except in the case $\lambda=\mu$, when the complete graph and its complement are the children.
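For concreteness, the formulas of Theorems 1 and 2 are straightforward to evaluate numerically. The sketch below is an illustration only (not part of the paper); it assumes `numpy` and `networkx`, and uses as test cases the line graph of the octahedron, the strongly Deza graph with parameters $(12,6,3,2)$ appearing in Section 5, and the Petersen graph mentioned in Section 6.1:

```python
import numpy as np
import networkx as nx

def srg_restricted_spectrum(n, k, lam, mu):
    """Theorem 2: restricted eigenvalues r, s and their multiplicities f, g."""
    disc = np.sqrt((lam - mu)**2 + 4*(k - mu))
    r, s = ((lam - mu) + disc)/2, ((lam - mu) - disc)/2
    f = ((n - 1) - ((r + s)*(n - 1) + 2*k)/(r - s))/2
    g = ((n - 1) + ((r + s)*(n - 1) + 2*k)/(r - s))/2
    return r, s, f, g

def child_eigenvalues(n, k, b, a, thetas):
    """Theorem 1 (b > a): eigenvalues of the children G_A and G_B, given
    the eigenvalues `thetas` of the Deza graph other than the degree k."""
    alpha = (b*(n - 1) - k*(k - 1))/(b - a)
    beta = (a*(n - 1) - k*(k - 1))/(a - b)
    return ([alpha] + [(k - b - t**2)/(b - a) for t in thetas],
            [beta] + [(k - a - t**2)/(a - b) for t in thetas])

# Line graph of the octahedron: strongly Deza with parameters (12, 6, 3, 2)
# and spectrum {6, 2^3, 0^2, (-2)^6}.
G = nx.line_graph(nx.octahedral_graph())
eigs = np.linalg.eigvalsh(nx.to_numpy_array(G))
print(np.round(eigs, 6))
print(child_eigenvalues(12, 6, 3, 2, [2, 0, -2]))

# The Petersen graph is srg(10, 3, 0, 1): r = 1 (f = 5) and s = -2 (g = 4).
print(srg_restricted_spectrum(10, 3, 0, 1))
```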
It is known [5, Theorem 6.7] that the smallest eigenvalue of a graph $G$ is at least $-1$ if and only if $G$ is a vertex-disjoint union of complete graphs, and the second largest eigenvalue of a strongly regular graph $G$ is non-positive if and only if $G$ is a complete multipartite graph. ## 3 Spectral characterization By Theorems 1 and 2, a strongly Deza graph has at most three distinct absolute values of its eigenvalues. But we can be more precise. ###### Proposition 1 Suppose $G$ is a strongly Deza graph with parameters $(n,k,b,a)$. (i) $G$ has at most five distinct eigenvalues. (ii) If $G$ has two distinct eigenvalues, then $a=0$, $b=k-1\geqslant 1$, and $G$ is a disjoint union of cliques of order $k+1$. (iii) If $G$ has three distinct eigenvalues, then $G$ is a strongly regular graph with parameters $(n,k,\lambda,\mu)$, where $\\{\lambda,\mu\\}=\\{a,b\\}$, or $G$ is disconnected and each component is a strongly regular graph with parameters $(v,k,b,b)$, or each component is a complete bipartite graph $K_{k,k}$ with $k\geqslant 2$. Proof. (i) The eigenvalues of $G$ take at most three distinct absolute values, one of which equals $k$. If $-k$ is not an eigenvalue, $G$ has at most five distinct eigenvalues. If $-k$ is an eigenvalue, then $G$ is bipartite. Hence, two vertices from different parts of the bipartition always have $a=0$ common neighbours, which implies that $G_{B}$ is disconnected. Therefore, $G$ is a divisible design graph, which has at most five distinct eigenvalues (see [14]). (ii) Any $k$-regular graph with just two eigenvalues is a disjoint union of cliques of order $k+1$, and if $k\geqslant 2$ it is a strongly Deza graph. (iii) It is well known that a connected regular graph with three distinct eigenvalues is a strongly regular graph (see [10, Proposition 2.1]). If $G$ is disconnected, then $G$ is a disconnected divisible design graph. It follows (see [14, Proposition 4.3]) that each component is a strongly regular graph with parameters $(v,k,b,b)$, or it is the bipartite incidence graph of a symmetric block design with parameters $(v,k,b)$. If $v>k$, the latter option has four distinct eigenvalues, so it does not occur. If $v=k$, then $G$ is the disjoint union of complete bipartite graphs, which is a strongly Deza graph if $k\geqslant 2$. $\square$ If $G$ is a bipartite graph, then the _halved_ graphs of $G$ are the two connected components of the graph on the same vertex set in which two vertices are adjacent whenever they are at distance two in $G$. The next theorem gives a spectral characterization of strongly Deza graphs. ###### Theorem 3 Let $G$ be a connected Deza graph with parameters $(n,k,b,a)$, $b>a$, whose eigenvalues take at most three distinct absolute values. ${\rm(i)}$ If $G$ is a non-bipartite graph, then $G$ is a strongly Deza graph. ${\rm(ii)}$ If $G$ is a bipartite graph, then either $G$ is a strongly Deza graph or its halved graphs are strongly Deza graphs. Proof. ${\rm(i)}$ Since $G$ is a connected non-bipartite Deza graph, $-k$ is not an eigenvalue of $G$. By Theorem 1, each of its children $G_{A}$ and $G_{B}$ is a regular graph with at most three distinct eigenvalues. Since $G_{A}$ and $G_{B}$ are complements of each other, at least one of them is a connected graph. Moreover, since any connected regular graph with at most three distinct eigenvalues is a strongly regular graph or the complete graph, $G_{A}$ and $G_{B}$ are strongly regular graphs. Thus, $G$ is a strongly Deza graph.
${\rm(ii)}$ Since $G$ is a connected bipartite Deza graph, we have $a=0$, and $-k$ is an eigenvalue of $G$. If we put $\theta_{2}=-k$ then by Theorem 1 we have: $\beta_{2}=\cfrac{k-a-\theta_{2}^{2}}{a-b}=\cfrac{k-(-k)^{2}}{-b}=\beta,$ and $G_{B}$ is a regular graph with at most three distinct eigenvalues. Since the multiplicity of $\beta$ is at least $2$, the graph $G_{B}$ is disconnected. Thus $G_{B}$ is a union of either strongly regular graphs with the same parameters or complete graphs of the same order. In the latter case, $G_{A}$ is a regular complete multipartite graph. Hence, $G$ is a strongly Deza graph. Let $G_{B}$ be a union of $t$ strongly regular graphs with the same parameters. Since $G_{A}$ is the complement of $G_{B}$, it is a connected regular graph with four distinct eigenvalues whose spectrum contains the eigenvalue $-\beta-1$ with multiplicity $t-1$ (see Section $4.2$ in [6]). By Theorem 1 we have: $-\beta-1=-\left(\cfrac{a(n-1)-k(k-1)}{a-b}\right)-1=\cfrac{k-b-(-k)^{2}}{b},$ where the last formula corresponds to the eigenvalue $\alpha_{n}$ of $G_{A}$ related to the eigenvalue $-k$ of $G$. By Remark 1, the multiplicity of $-k$ equals $1$, hence $t=2$. Since $G$ is a connected bipartite graph, each of the two halved graphs of $G$ is a strongly Deza graph. $\square$ Spectral properties of regular bipartite graphs with three distinct non-negative eigenvalues were studied by T. Koledin and Z. Stanić in [16]. In particular, relations between these graphs and two-class partially balanced incomplete block designs were obtained. An interesting observation is that such a design is described by a matrix equation in which the adjacency matrix of a strongly regular graph appears in much the same way as it does in the matrix equation of a strongly Deza graph. Bipartite Deza graphs with six distinct eigenvalues belong to the class studied in [16]. Specific bipartite regular graphs with five eigenvalues were considered by D. Stevanović in [17]. As an immediate consequence of Theorem 1, we have the following statement. ###### Remark 2 Let $M$ be the adjacency matrix of a strongly Deza graph $G$ with the spectrum $\\{k^{1},\theta_{2}^{m_{2}},\theta_{3}^{m_{3}},\theta_{4}^{m_{4}},\theta_{5}^{m_{5}}\\}$, where the exponents denote multiplicities. Then $\theta_{2}=-\theta_{5},\ \theta_{3}=-\theta_{4}$ and the following equation holds: $tr(M)=k+(m_{2}-m_{5})\theta_{2}+(m_{3}-m_{4})\theta_{3}=0.$ (2) Also, at most one multiplicity can be zero, and in that case the corresponding opposite eigenvalue is an integer. The trace of the adjacency matrix $M$ of $G$ is given by $tr(M)=k+m_{2}\theta_{2}+m_{3}\theta_{3}+m_{4}\theta_{4}+m_{5}\theta_{5}=0.$ Since $\theta_{2}=-\theta_{5}$ and $\theta_{3}=-\theta_{4}$, we obtain equation (2). The last statement immediately follows from Theorem 2.6 obtained by E. R. van Dam in [6]. The next theorem gives some conditions on an integral strongly Deza graph with respect to the eigenvalues of its children. ###### Theorem 4 Let $G$ be a strongly Deza graph with parameters $(n,k,b,a)$. Let its child $G_{A}$ be a strongly regular graph with parameters $(n,\alpha,\lambda,\mu)$ and eigenvalues $\alpha,r,s$ with multiplicities $1,f,g$. If $M$ is the adjacency matrix of $G$ with spectrum $\\{k^{1},\theta_{2}^{m_{2}},\theta_{3}^{m_{3}},\theta_{4}^{m_{4}},\theta_{5}^{m_{5}}\\}$, then one of the following statements holds: (i) $\theta_{2}^{2}=k-b-s(b-a)$ and $\theta_{3}^{2}=k-b-r(b-a)$ are perfect squares; in this case $G$ is an integral graph.
(ii) $\theta_{2}^{2}=k-b-s(b-a)$ is not a perfect square; then $\theta_{3}^{2}=k-b-r(b-a)$ is a nonzero perfect square and $m_{2}=m_{5}=f/2$. (iii) $\theta_{3}^{2}=k-b-r(b-a)$ is not a perfect square; then $\theta_{2}^{2}=k-b-s(b-a)$ is a nonzero perfect square and $m_{3}=m_{4}=g/2$. Proof. Clearly, $G$ has at most three distinct absolute values of its five eigenvalues. Let $\theta_{2}=-\theta_{5}$, $\theta_{3}=-\theta_{4}$. Since $tr(M)=k+(m_{2}-m_{5})\theta_{2}+(m_{3}-m_{4})\theta_{3}=0$ we have either $m_{2}\neq m_{5}$ or $m_{3}\neq m_{4}$. Without loss of generality we put $\theta_{2,5}=\pm\sqrt{k-b-s(b-a)}$ and $\theta_{3,4}=\pm\sqrt{k-b-r(b-a)}.$ If $G_{A}$ has no integer eigenvalue except $\alpha$, then by Theorem 2 we have: $r,s=\frac{-1\pm\sqrt{n}}{2}.$ Let us consider the polynomial $p(x)=\cfrac{m(x)}{x-k}$, where $m(x)$ is the minimal polynomial of $M$. Thus, using the above formulas we have: $p(x)=(x-\theta_{2})(x+\theta_{2})(x-\theta_{3})(x+\theta_{3})=$ $=(x^{2}-k+b+r(b-a))(x^{2}-k+b+s(b-a))=$ $=\left(x^{2}-k+b+\frac{-1+\sqrt{n}}{2}(b-a)\right)\left(x^{2}-k+b+\frac{-1-\sqrt{n}}{2}(b-a)\right)=$ $=\left(x^{2}-k+\frac{b+a}{2}+\frac{\sqrt{n}}{2}(b-a)\right)\left(x^{2}-k+\frac{b+a}{2}-\frac{\sqrt{n}}{2}(b-a)\right)$ and finally we have: $p(x)=\left(x^{2}-k+\frac{b+a}{2}\right)^{2}-\frac{n(b-a)^{2}}{4}.$ Since $p(x)\in{\mathbb{Z}}[x]$, if $n$ is not a square, then $p(x)$ is irreducible in ${\mathbb{Z}}[x]$. Therefore, all multiplicities of the restricted eigenvalues of $G$ are the same. This is a contradiction. Assume that the graph $G_{A}$ is integral. By Theorem 2, the values $\theta_{2,5}$ and $\theta_{3,4}$ are then integers or quadratic irrationals. Let $\theta_{2}^{2}=\tau_{2}^{2}\sigma_{2}$ and $\theta_{3}^{2}=\tau_{3}^{2}\sigma_{3}$, where $\sigma_{2}$ and $\sigma_{3}$ are the square-free parts of the integers $\theta_{2}^{2}$ and $\theta_{3}^{2}$, respectively, and the following equation holds: $k=(m_{5}-m_{2})\tau_{2}\sqrt{\sigma_{2}}+(m_{4}-m_{3})\tau_{3}\sqrt{\sigma_{3}}.$ (3) If $\sigma_{2}=\sigma_{3}=1$ then statement ${\rm{(i)}}$ holds. Let $\sigma_{2}\neq 1$ and $\sigma_{3}\neq 1$. The square of the right-hand side of (3) is an integer, and so $\sqrt{\sigma_{2}\sigma_{3}}$ is an integer. Hence, $\sigma_{2}=\sigma_{3}$ and $k=((m_{5}-m_{2})\tau_{2}+(m_{4}-m_{3})\tau_{3})\sqrt{\sigma_{2}}.$ Since $\sigma_{2}$ is square-free and $\sigma_{2}\neq 1$, $\sqrt{\sigma_{2}}$ is not an integer. Hence, the last equation does not hold, a contradiction. By (3), if $\sigma_{2}\neq 1$ and $\sigma_{3}=1$ then $m_{5}-m_{2}$ is zero and case ${\rm{(ii)}}$ holds, and if $\sigma_{3}\neq 1$ and $\sigma_{2}=1$ then $m_{4}-m_{3}$ is zero, and case ${\rm{(iii)}}$ holds. $\square$ Theorem 4 is a generalization of Theorem 2.2 in [14]. ###### Corollary 1 The children of a strongly Deza graph are integral graphs. ## 4 Eigenvalue relationships In this section we present two theorems on the relationships between eigenvalues of a strongly Deza graph with four or five distinct eigenvalues. ###### Theorem 5 If $G$ is a strongly Deza graph with parameters $(n,k,b,a)$, then $G$ has spectrum $\\{k^{1},\theta_{2}^{m_{2}},\theta_{3}^{m_{3}},\theta_{4}^{m_{4}},\theta_{5}^{m_{5}}\\}$ such that there are two opposite integer eigenvalues $\theta_{2}=-\theta_{5}$, and the other eigenvalues are expressed as follows: $\theta_{3,4}=\pm\sqrt{\cfrac{k(n-k)-(m_{2}+m_{5})\theta_{2}^{2}}{n-m_{2}-m_{5}-1}}.$ (4) Moreover, the multiplicity of one of the integer eigenvalues may be zero. Proof.
The graph $G$ has at most three distinct absolute values of its eigenvalues, one of which is equal to $k$. If $M$ is the adjacency matrix of $G$, then we have: $tr(M)=k+(m_{2}-m_{5})\theta_{2}+(m_{3}-m_{4})\theta_{3}=0.$ Let $G_{A}$ be a child of $G$. By Theorem 1, it has three distinct eigenvalues presented as follows: $\left(\cfrac{b(n-1)-k(k-1)}{b-a},\ \cfrac{k-b-\theta_{3}^{2}}{b-a},\ \cfrac{k-b-\theta_{2}^{2}}{b-a}\right),$ and so by the proof of Theorem 4, it is an integral strongly regular graph. By Remark 1, their multiplicities are $(1,f,g)$, where $f=m_{2}+m_{5}$, $g=m_{3}+m_{4}=n-f-1$. Then the trace of the adjacency matrix $A$ of $G_{A}$ is presented as follows: $tr(A)=\cfrac{b(n-1)-k(k-1)}{b-a}+f\cfrac{k-b-\theta_{2}^{2}}{b-a}+(n-f-1)\cfrac{k-b-\theta_{3}^{2}}{b-a}=0.$ By Corollary 1, the children of $G$ are integral strongly regular graphs such that either $\theta_{2}$ and $\theta_{3}$ are both integers or one of them is a quadratic irrational. Let $\theta_{2}$ be an integer eigenvalue of $G$. Straightforward calculations give the following sequence of equalities: $(b(n-1)-k(k-1))+f(k-b-\theta_{2}^{2})+(n-f-1)(k-b-\theta_{3}^{2})=0.$ $b(n-1)-k(k-1)+(k-b)(n-1)+f(-\theta_{2}^{2})-(n-f-1)\theta_{3}^{2}=0.$ $(n-f-1)\theta_{3}^{2}=b(n-1)-k(k-1)+(k-b)(n-1)-f\theta_{2}^{2}.$ $(n-f-1)\theta_{3}^{2}=k(n-k)-f\theta_{2}^{2}.$ $\theta_{3}^{2}=\cfrac{k(n-k)-f\theta_{2}^{2}}{n-f-1}.$ Since $\theta_{4}=-\theta_{3}$, we have: $\theta_{3,4}=\pm\sqrt{\cfrac{k(n-k)-f\theta_{2}^{2}}{n-f-1}},$ which gives us (4). The last statement follows from Remark 2. $\square$ More meaningful relationships between the eigenvalues of strongly Deza graphs with four distinct eigenvalues are given by the following result. ###### Theorem 6 Let $G$ be a strongly Deza graph with parameters $(n,k,b,a)$ and spectrum $\\{k^{1},\theta_{2}^{m_{2}},\theta_{3}^{m_{3}},\theta_{4}^{m_{4}}\\}$, and let $\theta_{4}=-\theta_{3}$, $m_{3}=m_{4}$. Then $\theta_{3,4}=\pm\sqrt{k\left(1+\cfrac{(\theta_{2}+1)(m_{2}+1)}{n-m_{2}-1}\right)},$ (5) where $\theta_{2}=-k/m_{2}$ is an integer and $\theta_{2}\leqslant-1.$ Proof. Since $\theta_{4}=-\theta_{3}$ and $m_{3}=m_{4}$ we have: $tr(M)=k+m_{2}\theta_{2}=0.$ Hence, $k=-m_{2}\theta_{2}$ with integer $\theta_{2}$, which is at most $-1$. By Theorem 5, the other eigenvalues of $G$ are expressed as follows: $\theta_{3,4}=\pm\sqrt{\cfrac{k(n-k)-m_{2}\theta_{2}^{2}}{n-m_{2}-1}}.$ Since $k=-m_{2}\theta_{2}$, straightforward calculations give the following: $\theta_{3,4}=\pm\sqrt{k\left(1+\cfrac{m_{2}\theta_{2}+m_{2}+1+\theta_{2}}{n-m_{2}-1}\right)}.$ Finally, we have: $\theta_{3,4}=\pm\sqrt{k\left(1+\cfrac{(\theta_{2}+1)(m_{2}+1)}{n-m_{2}-1}\right)},$ which completes the proof. $\square$ Theorem 1 allows one to calculate the spectra of the children $G_{A}$ and $G_{B}$ of a strongly Deza graph $G$. Since the children of $G$ are strongly regular graphs, their parameters are calculated from their spectra. Hence, the parameters of $G$ determine the parameters of $G_{A}$ and $G_{B}$. However, the parameters of $G_{A}$ and $G_{B}$ do not determine the parameters of $G$. ## 5 Four distinct eigenvalues Connected regular graphs with four distinct eigenvalues were studied by E. R. van Dam in [6]. He found some properties and feasibility conditions for the eigenvalues of such graphs. For strongly Deza graphs with four distinct eigenvalues we found some new properties. ###### Theorem 7 Any singular strongly Deza graph is an integral graph with four distinct eigenvalues. Proof.
Let $M$ be the adjacency matrix of a strongly Deza graph $G$. Thus, $M$ has at most five distinct eigenvalues $\\{k^{1},\theta_{2}^{m_{2}},\theta_{3}^{m_{3}},\theta_{4}^{m_{4}},\theta_{5}^{m_{5}}\\}$ with $\theta_{2}=-\theta_{5},\theta_{3}=-\theta_{4}$. Suppose $\theta_{2}=0$. Then we have: $tr(M)=k+(m_{3}-m_{4})\theta_{3}=0.$ By [6, Theorem 2.6], if $\theta_{3}$ is not an integer eigenvalue, then $m_{3}=m_{4}$. Hence, $k=0$, and we have a contradiction. $\square$ There are infinitely many singular strongly Deza graphs [15, Theorem 2] arising from the affine group $\mathbb{A}\mathrm{ff}(1,\mathbb{F}_{q^{t}})$, for any prime power $q$ and $t>1$, as divisible design Cayley graphs with parameters $(v,k,\lambda_{1},\lambda_{2},m,n)$, where $v=q^{t}(q^{t}-1)/(q-1),\quad k=q^{t-1}(q^{t}-1),$ $\lambda_{1}=q^{t-1}(q^{t}-q^{t-1}-1),\quad\lambda_{2}=q^{t-2}(q-1)(q^{t}-1),$ $m=(q^{t}-1)/(q-1),\quad n=q^{t}.$ By [14, Theorem 2.2], their eigenvalues are $\theta_{2,5}=\pm\sqrt{k-\lambda_{1}}=\pm\sqrt{q^{t-1}(q^{t}-1)-q^{t-1}(q^{t}-q^{t-1}-1)}=\pm q^{t-1},$ $\theta_{3,4}=\pm\sqrt{k^{2}-\lambda_{2}v}=0.$ The smallest example is known as the line graph of the octahedron. It has parameters $(12,6,3,2)$ and spectrum $\\{6^{1},2^{3},0^{2},(-2)^{6}\\}$. ###### Theorem 8 Let $G$ be a strongly Deza graph with parameters $(n,k,b,a)$ whose spectrum is $\\{k^{1},\theta_{2}^{m_{2}},\theta_{3}^{m_{3}},\theta_{4}^{m_{4}}\\}$, where $\theta_{3}=-\theta_{4}$. If $m_{3}=m_{4}$, then $m_{2}\theta_{2}=-k$ and one of the following statements holds. ${\rm(i)}$ $m_{2}=1$, $\theta_{2}=-k$ and $G$ is a bipartite graph with spectrum $\left\\{k^{1},\sqrt{\cfrac{k(n-2k)}{n-2}}^{\frac{n-2}{2}},-\sqrt{\cfrac{k(n-2k)}{n-2}}^{\frac{n-2}{2}},(-k)^{1}\right\\}.$ ${\rm(ii)}$ $m_{2}=k$, $\theta_{2}=-1$ and the spectrum of $G$ is $\left\\{k^{1},\sqrt{k}^{\frac{n-k-1}{2}},-1^{k},-\sqrt{k}^{\frac{n-k-1}{2}}\right\\}.$ ${\rm(iii)}$ $m_{2}<k$, $\theta_{2}<-1$. Proof. We have $tr(M)=k+m_{2}\theta_{2}=0$ and so $m_{2}\theta_{2}=-k$. First suppose that $m_{2}=1$ and $\theta_{2}=-k$. By [5, Theorem 3.4], $G$ is bipartite. By Theorem 6, we have: $\theta_{3,4}=\pm\sqrt{k\left(1+\cfrac{(\theta_{2}+1)(m_{2}+1)}{n-m_{2}-1}\right)}=\pm\sqrt{\cfrac{k(n-2k)}{n-2}},$ (6) with the multiplicities $\frac{n-2}{2}$, which gives case ${\rm(i)}$. Next assume that $m_{2}=k$ and $\theta_{2}=-1$; then we have the eigenvalues $\theta_{3,4}=\pm\sqrt{k\left(1+\cfrac{(\theta_{2}+1)(m_{2}+1)}{n-m_{2}-1}\right)}=\pm\sqrt{k}$ (7) with the multiplicities $\frac{n-k-1}{2}$, which gives case ${\rm(ii)}$. The proof of case ${\rm(iii)}$ is now clear. $\square$ ###### Remark 3 If $G$ is a non-integral graph, then by Theorem 4 the equality of multiplicities holds for some opposite eigenvalues. ###### Remark 4 If $G$ has a prime degree in Theorem 8, then only cases ${\rm(i)}$ and ${\rm(ii)}$ are valid. ## 6 Distance-regular Deza graphs Now we investigate when a distance-regular graph can be a strongly Deza graph. For background on distance-regular graphs, we refer to the book [2] and the survey [7]. Throughout this section $G=(V,E)$ is a distance-regular graph of order $n$ and diameter $d$.
This means that $G$ is a connected graph and there exist numbers $\\{a_{i},b_{i},c_{i}\\}$ such that for every $i\in\\{0,\ldots,d\\}$ and for every pair of vertices $x\in V$ and $y\in V$ at mutual distance $i$ the following holds:
- the number of vertices adjacent to $y$ at distance $i+1$ from $x$ equals $b_{i}$;
- the number of vertices adjacent to $y$ at distance $i$ from $x$ equals $a_{i}$;
- the number of vertices adjacent to $y$ at distance $i-1$ from $x$ equals $c_{i}$.
The numbers $a_{i},b_{i},c_{i}$ are called the intersection numbers of $G$. It follows that $G$ is a regular graph of degree $k=b_{0}=a_{i}+b_{i}+c_{i}$, $i=1,\ldots,d$. Also, for $i=0,\ldots,d$ the number $k_{i}$ of vertices at distance $i$ from a given vertex only depends on $i$. For $i=1,\ldots,d$, we define $E_{i}$ to be the set of vertex pairs from $G$ at mutual distance $i$. Then $E_{1}=E$ and $(V,E_{i})$ is a regular graph of degree $k_{i}$, which is called the distance-$i$ graph. We say that $G$ is antipodal if the distance-$d$ graph is a disjoint union of cliques. If $d=2$, then $G$ is a strongly regular graph with parameters $(n,k,a_{1},c_{2})$. If $d\geqslant 3$, then two distinct vertices of $G$ have $a_{1}$, $c_{2}$, or $0$ common neighbours, and $0$ does occur. Clearly $c_{2}\neq 0$, so $G$ is a Deza graph if $a_{1}=0$, which means that $G$ is triangle-free, or $a_{1}=c_{2}$. Thus, we have the following statement. ###### Proposition 2 A distance-regular graph $G$ of diameter $d\geqslant 3$ is a Deza graph if and only if one of the following holds: (i) $a_{1}=0$; then $G$ has parameters $(n,k,c_{2},0)$ and children $(V,E_{2})$ and its complement. (ii) $a_{1}=c_{2}$; then $G$ has parameters $(n,k,c_{2},0)$ and children $(V,E_{1}\cup E_{2})$ and its complement. There exist many distance-regular graphs, including all bipartite ones, with the above properties, and obviously none of these will be a strictly Deza graph. But some are strongly Deza graphs. From Proposition 2 it follows that $G$ is a strongly Deza graph whenever $a_{1}=0$ and $(V,E_{2})$ is strongly regular, or $a_{1}=c_{2}$ and $(V,E_{1}\cup E_{2})$ is strongly regular. It is known that the intersection numbers of $G$ determine the eigenvalues of $(V,E_{1})$ and $(V,E_{1}\cup E_{2})$. This implies that the property ‘being a distance-regular strongly Deza graph’ is determined by the intersection numbers. If the children are a complete multipartite graph and its complement, then $G$ is a divisible design graph. Distance-regular divisible design graphs have been classified by the following result. ###### Theorem 9 [14, Theorem 4.14] A graph $G$ is a distance-regular divisible design graph if and only if $G$ is one of the following: (i) a complete multipartite graph; (ii) the incidence graph of a symmetric $2$-design; (iii) an antipodal distance-regular graph of diameter $3$ with $a_{1}=c_{2}$. A complete multipartite graph has diameter $2$. The incidence graph of a symmetric $2$-design is the same as a bipartite distance-regular graph of diameter $3$. It belongs to case (i) of Theorem 8. For more about case (iii) of Theorem 9, we refer to [2, p. 431]; see also Corollary 2 below. By Theorem 9, a distance-regular divisible design graph has diameter at most $3$. If a distance-regular strongly Deza graph $G$ is not a divisible design graph, then $d\leqslant 4$. Indeed, if $d\geqslant 5$ then one of the children of $G$ has diameter at least 3, hence it cannot be a strongly regular graph.
It also follows because $G$ has at most five distinct eigenvalues. We do not know if there exist examples with $d=4$, but for $d=3$ there is an infinite family, known as the unitary nonisotropics graph (see [2, Theorem 12.4.1]), which has order $q^{2}(q^{2}-q+1)$ and degree $q(q-1)$ for prime power $q>2$. It satisfies $c_{2}=a_{1}=1$, and belongs to case (ii) of Proposition 2. The child $G_{A}=(V,E_{3})$ is the block graph of a Steiner $2$-$(q^{3}+1,q+1,1)$-design, which is the classical unital design. Thus, one can conclude the following. ###### Theorem 10 For every prime power $q>2$, there exists a strongly Deza graph with parameters $(q^{2}(q^{2}-q+1),q(q-1),1,0)$ and spectrum $\\{q(q-1),\ (q)^{(q^{4}-q)/2-q^{3}+q^{2}},\ (-1)^{q^{3}},\ (-q)^{(q^{4}-q)/2-q^{3}+q-1}\\}.$ The distance-$3$ child $G_{A}$ is a strongly regular graph with parameters $(q^{2}(q^{2}-q+1),(q-1)(q+1)^{2},2q^{2}-2,(q+1)^{2}).$ ###### Remark 5 M. A. Fiol [12] uses the name ‘strongly distance-regular’ if $(V,E_{d})$ is strongly regular. So, if $d=3$ and $a_{1}=c_{2}$ then a distance-regular strongly Deza graph is a strongly distance-regular graph and vice versa. In particular, the graphs of Theorem 10 are strongly distance-regular. Worth mentioning is that a distance-regular graph of diameter $3$ is strongly distance-regular if and only if $-1$ is an eigenvalue (see also [2, Proposition 4.2.17(ii)]). Note that case (ii) of Theorem 9 satisfies $a_{1}=0$, so the graphs belong to case (i) of Proposition 2. However, we do not know an example of case (i) of Proposition 2 which is not a divisible design graph. But for $d=3$ there do exist feasible intersection numbers for the required distance-regular graphs. For example, $(n,k,k_{2},c_{2})=(210,11,110,1)$ and $(320,22,231,2)$ are both feasible (see [2, Chapter 14]). ### 6.1 Cospectral graphs It is well known that a distance-regular graph with diameter $d$ has $d+1$ distinct eigenvalues, and that the intersection numbers determine the spectrum. Also the converse is true: the spectrum of a distance-regular graph determines the intersection numbers. This, however, does not mean that a graph with the same spectrum as a distance-regular graph has to be distance-regular. Indeed, there exist many graphs cospectral with (i.e. with the same spectrum as) a distance-regular graph which are not distance-regular. However, as the result below shows, there are special situations for which it is true. ###### Theorem 11 [13] Suppose $G^{\prime}$ is a graph cospectral with a distance-regular graph $G$. In the following cases $G^{\prime}$ is also distance-regular with the same intersection numbers as $G$: ${\rm(i)}$ $G$ has diameter $2$, i.e. $G$ is a strongly regular graph; ${\rm(ii)}$ $G$ is bipartite and $d=3$, i.e. $G$ is the incidence graph of a symmetric $2$-design; ${\rm(iii)}$ $c_{2}=1$. See [7, Section 10.1] for a longer list. Theorem 11 applies to the graphs from Theorem 9 (i) and (ii) and Theorem 10. Thus, for these distance-regular graphs cospectral graphs are distance-regular with the same intersection numbers. But for case (iii) of Theorem 9 there do exist cospectral graphs which are not distance-regular (see below). The question is whether such a graph can still be a Deza graph. If it is, Theorem 3 implies that it is in fact a strongly Deza graph. This, however, is not likely to happen because of the following results. ###### Theorem 12 [13] Suppose $G^{\prime}$ is a graph cospectral with a distance-regular graph $G$ of diameter $3$.
If each vertex in $G^{\prime}$ has the same number of vertices at distance $3$ as a vertex in $G$, then $G^{\prime}$ is also distance-regular with the same intersection numbers as $G$. ###### Remark 6 It is not necessary that the distance-regular graph $G$ really exists; it suffices that the spectrum corresponds to a feasible intersection array. In fact, the above theorem has been improved further with weaker spectral conditions and generalized to arbitrary diameter. The general version is now known as the ‘spectral excess theorem’; see [7, Section 10.3]. ###### Proposition 3 Suppose $G^{\prime}$ is a graph cospectral with a distance-regular strongly Deza graph $G$ with diameter $3$ and $a_{1}=c_{2}$. If $G^{\prime}$ is a Deza graph, then $G^{\prime}$ is a strongly Deza graph and one of the following holds: (i) $G^{\prime}$ is distance-regular with the same intersection numbers as $G$, (ii) $G^{\prime}$ and $G$ have different parameters $a$ and $b$ as Deza graphs. Proof. Assume $G^{\prime}$ is a Deza graph with parameters $(n,k,b,a)$. Since $G$ and $G^{\prime}$ are cospectral, both graphs have order $n$ and degree $k$. Moreover, $G^{\prime}$ satisfies the conditions of Theorem 3 (i), therefore $G^{\prime}$ is a strongly Deza graph. If case (ii) does not hold, then $G$ and $G^{\prime}$ have the same parameters, so $a=0$ and $b=a_{1}$. Clearly $G$ has $nka_{1}/6$ triangles, and since cospectral graphs have the same number of triangles, $G^{\prime}$ also has $nka_{1}/6=nkb/6$ triangles. This implies that every edge of $G^{\prime}$ belongs to $b$ triangles. Hence, any two vertices in $G^{\prime}$ with $a=0$ common neighbours are at distance $3$. This means that for both graphs $G$ and $G^{\prime}$ the children $G_{A}$ and $G^{\prime}_{A}$ are the distance-$3$ graphs. Since $G$ and $G^{\prime}$ have the same parameters, $G_{A}$ and $G^{\prime}_{A}$ have the same degree, and hence $G$ and $G^{\prime}$ have the same number of vertices at distance $3$ from a given vertex. So, by Theorem 12, $G^{\prime}$ is distance-regular with the same intersection numbers as $G$. $\square$ If case (ii) of Proposition 3 occurs, then the children of $G$ have parameters and spectrum different from those of $G^{\prime}$, but Theorem 1 implies that the eigenvalue multiplicities are the same. Pairs of strongly regular graphs with different spectra but the same multiplicities (other than the spectrum of the complement) exist (for example, the Petersen graph and the complete multipartite graph $K_{2,2,2,2,2}$), but we do not know a pair of cospectral strongly Deza graphs for which the children have this property. Fortunately, case (ii) can often be easily excluded in particular cases, for example if $G$ and $G^{\prime}$ are both divisible design graphs. ###### Corollary 2 Let $G^{\prime}$ be a divisible design graph with spectrum $\\{k^{1},\sqrt{k}^{m},(-1)^{k},(-\sqrt{k})^{m}\\}.$ If $k^{2}-1$ is divisible by $n=2m+k+1$, then $G^{\prime}$ is an antipodal distance-regular graph with $d=3$ and $a_{1}=c_{2}=(k^{2}-1)/n$. Proof. According to [2, p. 431] there exist feasible intersection numbers satisfying $a_{1}=c_{2}=(k^{2}-1)/n$ for a putative distance-regular graph $G$ cospectral with $G^{\prime}$. As mentioned in Remark 6, Theorem 12 and Proposition 3 apply even if the existence of $G$ is not established. The children of a divisible design graph are a complete multipartite graph and its complement. The spectrum of the complete multipartite graph $K_{m,\ldots,m}$ of order $n$ equals $\\{(n-m)^{1},\ 0^{n-n/m},\ (-m)^{n/m-1}\\}$.
So, clearly the eigenvalue multiplicities determine $m$ and $n$, and therefore case (ii) of Proposition 3 does not occur and the result follows. $\square$

Name | n | Spectrum | Cosp
---|---|---|---
Icosahedron | 12 | $\\{5^{1},(\sqrt{5})^{3},(-1)^{5},(-\sqrt{5})^{3}\\}$ | 1
Line graph of Petersen graph | 15 | $\\{4^{1},2^{5},(-1)^{4},(-2)^{5}\\}$ | 1
Johnson graph $J(6,3)$ | 20 | $\\{9^{1},3^{5},(-1)^{9},(-3)^{5}\\}$ | 6
Klein graph | 24 | $\\{7^{1},(\sqrt{7})^{8},(-1)^{7},(-\sqrt{7})^{8}\\}$ | 10
Taylor graph of Paley(13) | 28 | $\\{13^{1},(\sqrt{13})^{7},(-1)^{13},(-\sqrt{13})^{7}\\}$ | $\geqslant 1173$

Table 1: Distance-regular strongly Deza graphs with $a_{1}=c_{2}$, $d=3$ and $n\leqslant 30$

E. R. van Dam, W. H. Haemers, J. H. Koolen, and E. Spence [8] studied graphs cospectral with distance-regular graphs. They checked the non-trivial distance-regular graphs with at most $70$ vertices and diameter $d\geqslant 3$. The trivial cases are the cycles and the incidence graphs of a trivial symmetric $2$-$(k+1,k,k-1)$ design. Let us look at the non-trivial examples on at most 30 vertices. There are precisely $29$ such distance-regular graphs, and $19$ of them are strongly Deza graphs. Among these strongly Deza graphs there are $14$ incidence graphs of non-trivial symmetric $2$-designs, which are divisible design graphs corresponding to case (ii) of Theorem 9. We saw that for these graphs all cospectral graphs are also incidence graphs of a symmetric design. The remaining five are divisible design graphs corresponding to case (iii) of Theorem 9. These five graphs with their spectra are given in Table 1. For three of them there exist several cospectral graphs. The last column, named ‘Cosp’, gives the number of non-isomorphic graphs with the given spectrum. For each spectrum there is only one distance-regular graph. For the other graphs, option (ii) of Proposition 3 does not occur, simply because there exist no strongly regular graphs with parameters different from those of the children of $G$ but with the same eigenvalue multiplicities. So, one can conclude that none of the cospectral mates of these graphs is a Deza graph. ## Acknowledgements The research work of Elena V. Konstantinova is supported by the Mathematical Center in Akademgorodok, under agreement No. 075-15-2019-1613 with the Ministry of Science and Higher Education of the Russian Federation. The research work of M. A. Hosseinzadeh has been supported by a research grant from the Amol University of Special Modern Technologies, Amol, Iran. ## References * [1] S. Akbari, A. H. Ghodrati, M. A. Hosseinzadeh, V. V. Kabanov, E. V. Konstantinova, L. V. Shalaginov, Spectra of Deza graphs, Linear and Multilinear Algebra, https://doi.org/10.1080/03081087.2020.1723472. * [2] A. E. Brouwer, A. M. Cohen, A. Neumaier, Distance-Regular Graphs, Springer-Verlag, Berlin (1989). * [3] A. E. Brouwer, W. H. Haemers, Spectra of graphs, Springer, New York, 2012. * [4] D. Crnković, W. H. Haemers, Walk-regular divisible design graphs, Designs, Codes and Cryptography, 72 (2014) 165–175, https://doi.org/10.1007/s10623-013-9861-0. * [5] D. Cvetković, M. Doob, H. Sachs, Spectra of graphs, V.E.B. Deutscher Verlag der Wissenschaften, Berlin, 1979. * [6] E. R. van Dam, Regular Graphs With Four Eigenvalues, Linear Algebra and its Applications, 226–228 (1995) 139–162, https://doi.org/10.1016/0024-3795(94)00346-F. * [7] E. R. van Dam, J. H. Koolen, H. Tanaka, Distance-regular graphs, Electronic J. Combinatorics, DS22 (2016), https://doi.org/10.37236/4925. * [8] E. R. van Dam, W. H. Haemers, J.
H. Koolen, E. Spence, Characterizing distance-regularity of graphs by the spectrum, Journal of Combinatorial Theory, Series A, 113(8) (2006) 1805–1820, https://doi.org/10.1016/j.jcta.2006.03.008. * [9] A. Deza, M. Deza, The ridge graph of the metric polytope and some relatives. In: Bisztriczky T., McMullen P., Schneider R., Weiss A.I. (eds) Polytopes: Abstract, Convex and Computational. NATO ASI Series (Series C: Mathematical and Physical Sciences), Vol. 440 (1994) 359–372, Springer, Dordrecht, https://doi.org/10.1007/978-94-011-0924-6_16. * [10] M. Doob, Graphs with a small number of distinct eigenvalues, Annals of the New York Academy of Sciences, 175(1) (1970) 104–110, https://doi.org/10.1111/j.1749-6632.1970.tb56460.x. * [11] M. Erickson, S. Fernando, W. H. Haemers, D. Hardy, J. Hemmeter, Deza graphs: A generalization of strongly regular graphs, J. Combinatorial Design, 7 (1999) 359–405, https://doi.org/10.1002/(SICI)1520-6610(1999)7:6%3C395::AID-JCD1%3E3.0.CO;2-U. * [12] M. A. Fiol, Quasi-spectral characterization of strongly distance-regular graphs, Electronic J. Combinatorics 7, R51 (2000), https://doi.org/10.37236/1529. * [13] W. H. Haemers, Distance-regularity and the spectrum of graphs, Linear Algebra and its Applications, 236 (1996) 593–616, https://doi.org/10.1016/0024-3795(94)00166-9. * [14] W. H. Haemers, H. Kharaghani, M. Meulenberg, Divisible design graphs, J. Combinatorial Theory, Series A, 118 (2011) 978–992, https://doi.org/10.1016/j.jcta.2010.10.003. * [15] V. V. Kabanov, L. V. Shalaginov, On divisible design Cayley graphs, The Art of Discrete and Applied Mathematics, 9 pp., https://doi.org/10.26493/2590-9770.1340.364. * [16] T. Koledin, Z. Stanić, Regular bipartite graphs with three distinct non-negative eigenvalues, Linear Algebra and its Applications, 438 (2013) 3336–3349, http://dx.doi.org/10.1016/j.laa.2012.12.036. * [17] D. Stevanović, Two spectral characterizations of regular, bipartite graphs with five eigenvalues, Linear Algebra and its Applications, 435 (2011) 2612–2625, https://doi.org/10.1016/j.laa.2011.04.032.
# Deep Learning for Moving Blockage Prediction using Real Millimeter Wave Measurements

Shunyao Wu, Muhammad Alrabeiah, Andrew Hredzak, Chaitali Chakrabarti, and Ahmed Alkhateeb
School of Electrical, Computer and Energy Engineering
Arizona State University
Tempe, AZ 85287
Email: {vincentw, malrabei, ahredzak, chaitali<EMAIL_ADDRESS>

###### Abstract

Millimeter wave (mmWave) communication is a key component of 5G and beyond. Harvesting the gains of the large bandwidth and low latency of mmWave systems, however, is challenged by the sensitivity of mmWave signals to blockages; a sudden blockage of the line-of-sight (LOS) link leads to abrupt disconnection, which affects the reliability of the network. In addition, searching for an alternative base station to re-establish the link could result in needless latency overhead. In this paper, we address these challenges collectively by utilizing machine learning to anticipate dynamic blockages proactively. In the proposed approach, a machine learning algorithm learns to predict future blockages by observing what we refer to as the pre-blockage signature. To evaluate our proposed approach, we build a mmWave communication setup with a moving blockage and collect a dataset of received power sequences. Simulation results on a real dataset show that blockage occurrence can be predicted with more than 85% accuracy, and the exact time instance of blockage occurrence can be obtained with low error. This highlights the potential of the proposed solution for dynamic blockage prediction and proactive hand-off, which enhances the reliability and latency of future wireless networks.

###### Index Terms: Millimeter wave, machine learning, blockage prediction, handover

## I Introduction

Communication in the mmWave frequency range offers the high bandwidth and high data rates required by 5G and beyond cellular systems [1]. However, mmWave systems struggle in the presence of objects that block the LOS connection between a base station and its users [2]. This is mainly rooted in the poor penetration and reflection properties of mmWave signals [1, 3]. This struggle with blockages or non-LOS (NLOS) connections ultimately affects the reliability of the system. In addition to that, it introduces a latency burden resulting from the need for user hand-off [2]. A key approach to addressing that struggle lies in equipping the mmWave system with the capability to predict possible blockages proactively. Such successful prediction allows a base station to take mitigation measures, e.g., user hand-off, before the link is blocked, thereby resolving the reliability and latency problems.

Many recently published studies have used machine learning to address problems arising from link blockages in MIMO and mmWave communications [4, 5, 2, 6, 7]. The work in [4, 5, 6] collectively demonstrates the ability of a machine learning algorithm (whether deep [5] or shallow [4]) to identify or differentiate LOS and NLOS links. This identification capability could be valuable for a system operating in the sub-6 GHz range. However, the requirements are more stringent for a mmWave system and demand a proactive approach. A step toward doing so is presented in [2], where a proactive solution is proposed to predict stationary blockages. This solution depends on beam sequences alone, and as such, it cannot handle dynamic blockages. Another direction addressing blockage prediction relies on Vision-Aided Wireless Communications [8, 7].
In [8], a camera feed along with sub-6 GHz channels is used to identify currently blocked mmWave links. This work is the seed of that presented in [7], where proactive blockage prediction using images and mmWave beams is attempted using a bimodal deep learning algorithm. Despite its appeal, that work requires extra sensory data (images), and it does not take full advantage of the wireless data.

In this paper, we propose a machine learning algorithm that addresses proactive dynamic-blockage prediction. The algorithm uses sequences of received powers to predict whether a blockage is incoming or not. The basic idea behind our algorithm is the ability to recognize what we have dubbed the pre-blockage signature. We argue that this signature could serve as an important clue for incoming blockages. Our contribution is summarized as follows:

1. We propose a recurrent neural network (RNN) architecture based on Gated Recurrent Units (GRUs) to predict incoming blockages. The architecture is designed to learn one of two tasks. The first is to predict whether a blockage is incoming or not, and the other is to pinpoint the time instance at which the blockage occurs.
2. We develop a mmWave communication setup with a moving blockage. We use that setup to build a dataset of received power sequences and their corresponding future link statuses. The dataset is utilized to evaluate the proposed algorithm and its ability to predict incoming blockages.

The rest of this paper is organized as follows. Section II presents the system and channel models adopted to study dynamic-blockage prediction. Section III presents a formulation of the prediction problem. The proposed machine learning model is presented in Section IV. The data collection scenario and setup are introduced in Section V. The evaluation of the proposed algorithm and the simulation results are presented in Section VI, and, finally, the paper is concluded in Section VII.

## II System and Channel Models

The following two subsections introduce the system and channel models adopted in this paper.

Figure 1: An illustration of the considered system model.

System model: The communication system considered in this work is described in Fig. 1. It assumes a base station serving a static user who is in the vicinity of a possible moving blockage. The base station employs an $M$-element Uniform Linear Array (ULA) antenna operating at a 60 GHz carrier frequency with Orthogonal Frequency Division Multiplexing (OFDM). It also assumes a fully analog architecture with a predefined beam-steering codebook $\mathcal{F}=\{\mathbf{f}_{w}\}_{w=1}^{W}$, where $\mathbf{f}_{w}\in\mathbb{C}^{M\times 1}$ is given by:

$\mathbf{f}_{w}=\frac{1}{\sqrt{M}}\left[1,e^{j\frac{2\pi}{\lambda}d\cos(\phi_{w})},\dots,e^{j(M-1)\frac{2\pi}{\lambda}d\cos(\phi_{w})}\right]^{T},$ (1)

where $d$ is the spacing between the ULA elements, $\lambda$ is the wavelength, and $\phi_{w}\in\{\frac{2\pi w}{W}\}_{w=0}^{W-1}$ is a uniformly quantized azimuth angle with a step of $\frac{2\pi}{W}$. At any time instance $t$, the downlink received signal is expressed as follows:

$r_{k}[t]=\mathbf{h}_{k}[t]^{T}\mathbf{f}_{w}s_{k}[t]+n_{k}$ (2)

where $\mathbf{h}_{k}[t]\in\mathbb{C}^{M\times 1}$ is the downlink channel at the $k$th subcarrier, $s_{k}[t]$ is the symbol transmitted on the $k$th subcarrier, and, finally, $n_{k}$ is a complex Gaussian noise sample, $\sim\mathcal{CN}(0,\sigma^{2})$, at the $k$th subcarrier.
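To make the codebook in Equation (1) concrete, the following is a minimal NumPy sketch; the half-wavelength element spacing and the example sizes ($M=16$, $W=32$) are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def steering_codebook(M: int, W: int, d_over_lambda: float = 0.5) -> np.ndarray:
    """Build the W-beam ULA codebook of Equation (1).

    Column w holds entries exp(j * 2*pi * (d/lambda) * m * cos(phi_w)) / sqrt(M)
    for antenna index m = 0, ..., M-1 and quantized azimuth phi_w = 2*pi*w / W.
    """
    m = np.arange(M).reshape(-1, 1)        # antenna indices, shape (M, 1)
    phi = 2 * np.pi * np.arange(W) / W     # quantized azimuth angles, shape (W,)
    return np.exp(1j * 2 * np.pi * d_over_lambda * m * np.cos(phi)) / np.sqrt(M)

# Usage: received power over each beam for a random narrowband channel.
M, W = 16, 32
F = steering_codebook(M, W)                # shape (M, W)
h = (np.random.randn(M) + 1j * np.random.randn(M)) / np.sqrt(2)
powers = np.abs(h @ F) ** 2                # |h^T f_w|^2 for every beam w
best_beam = int(np.argmax(powers))
```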
Channel model: This work adopts the geometric (physical) channel model, which captures the physical characteristics of signal propagation, including the dependence on the environment geometry, materials, frequency band, etc. [1]. With this model, the channel can be expressed as:

$\mathbf{h}_{k}=\sum_{d=0}^{D-1}\sum_{\ell=1}^{L}\alpha_{\ell}e^{-j\frac{2\pi k}{K}d}p\left(dT_{\mathrm{S}}-\tau_{\ell}\right){\mathbf{a}}\left(\theta_{\ell},\phi_{\ell}\right),$ (3)

where $L$ is the number of channel paths, $\alpha_{\ell}$ is the path gain (including the path-loss), $\tau_{\ell}$ is the delay, $\theta_{\ell}$ is the azimuth angle of arrival, and $\phi_{\ell}$ is the elevation angle, of the $\ell$th channel path. $K$ is the number of OFDM subcarriers, $p(\cdot)$ is the pulse-shaping function, and ${\mathbf{a}}(\theta_{\ell},\phi_{\ell})$ is the array response vector. $T_{\mathrm{S}}$ represents the sampling time, while $D$ denotes the cyclic prefix length (assuming that the maximum delay is less than $DT_{\mathrm{S}}$).

## III Problem Formulation

Proactively identifying the LOS link status has significant advantages at both the physical and network layer levels. In this paper, we focus on two specific problems: (i) how to use the received mmWave signal power information to predict whether there is a blockage in the near future or not, and (ii) in case there is a blockage, how to use the received mmWave signal power information to predict when that blockage will occur.

Problem 1: To formulate the presence of a blockage in the near future, let $t\in\mathbb{Z}$ be the index of the discrete time instance, $x[t]$ be the link status at the $t$th time instance, and let $S_{ob}=\{\lvert r[t+n]\rvert^{2}\}_{n=-T_{o}+1}^{0}$ be the sequence of received signal power for an observation interval of $T_{o}$ instances. Note that, for simplicity of expression, the subcarrier index $k$ is dropped from $r[t+n]$. Given a signal-power-based observation sequence, we want to predict the occurrence of blockage within a future time interval extending over $T_{P}$ instances. We use $b_{T_{P}}$ to indicate whether there is a blockage occurrence within that interval or not. More formally, $b_{T_{P}}$ is defined as follows:

$b_{T_{P}}=\begin{cases}0,&x[t+n^{\prime}]=0\quad\forall n^{\prime}\in\{1,\dots,T_{P}\}\\ 1,&\text{otherwise}\end{cases}$ (4)

where $1$ indicates the occurrence of a blockage and $0$ indicates its absence. The goal of this problem is to predict $b_{T_{P}}$ with high accuracy, i.e., with high success probability $\mathbb{P}(\hat{b}_{T_{P}}=b_{T_{P}}|S_{ob})$, where $\hat{b}_{T_{P}}$ is the predicted link status. To that end, a prediction function $f_{\Theta}(S_{ob})$ parameterized by a set of parameters $\Theta$ could be learned using a machine learning algorithm such that it maximizes $\mathbb{P}(\hat{b}_{T_{P}}=b_{T_{P}}|S_{ob})$.

Figure 2: Example of received signal power vs. time. It also shows photos of the location of the transmitter and blockage (from the receiver perspective).

Problem 2: Given a signal-power-based observation sequence $S_{ob}$ and the knowledge that there is a blockage in the future $T_{P}$ instances, the goal is to predict the $n^{\prime}$ at which $x[t+n^{\prime}]=1$. This represents the exact instance at which the blockage occurs within a window of $T_{P}$ instances. Similar to Problem 1, the future instance is predicted by a parameterized function $g_{\Theta}(S_{ob})$ that could be learned using an ML algorithm. The aim of the ML algorithm is to maximize the prediction accuracy $\mathbb{P}(\hat{n}^{\prime}=n^{\prime}|S_{ob},b_{T_{P}}=1)$.
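As a concrete reading of Equation (4), the sketch below derives the Problem 1 label $b_{T_{P}}$ and, for Problem 2, the first blockage offset $n^{\prime}$ from a binary link-status sequence; representing $x[t]$ as a NumPy array is an assumption of this sketch.

```python
import numpy as np

def problem_labels(x: np.ndarray, t: int, T_P: int):
    """Labels for Problems 1 and 2 from a binary link-status sequence x,
    where x[i] == 1 means the LOS link is blocked at instance i.

    Returns (b, n_prime): b is b_{T_P} from Equation (4); n_prime is the first
    future offset with a blockage (None when b == 0)."""
    future = x[t + 1 : t + T_P + 1]
    blocked = np.flatnonzero(future == 1)
    if blocked.size == 0:
        return 0, None
    return 1, int(blocked[0]) + 1        # offsets n' are counted from 1

# Example: with t = 4 and T_P = 5, the blockage first appears at n' = 3.
x = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
print(problem_labels(x, t=4, T_P=5))     # -> (1, 3)
```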
## IV Moving Blockage Prediction using Recurrent Neural Networks

Deep neural networks have recently established themselves as a powerful learning algorithm for many applications [9, 10, 11], and as such, they are the center of the proposed solution for the future-blockage prediction problem.

### IV-A Key Idea

Moving objects in a wireless communication environment contribute to changes in the wireless channels, resulting in noticeable fluctuations in received signal power. This fluctuation pattern in received signal power is referred to as the Pre-Blockage Signature [12]. Such a pattern is illustrated in Fig. 2. It shows a captured sequence of received power versus time instances, and the corresponding photos show how far the blockage (the metal object in the photos) is from the transmitter (the object circled in red in the photos). The received power starts with smooth fluctuations (between the 1st and 30th instances in Fig. 2), as the blockage is far from both the receiver and the transmitter. However, as that blockage advances, its effect becomes clearer in the received sequence, which can be seen in the red-shaded region of Fig. 2. The sharper fluctuations are what we refer to as the pre-blockage signature, and they could be utilized to identify incoming blockages.

Figure 3: The overall RNN architecture composed of a recurrent and a prediction component

### IV-B Deep Learning Model

Neural Network Architecture: Learning the pre-blockage signature from a sequence of observed received signal power requires a neural network that can process input data samples over time, such as a recurrent neural network. We design a Gated Recurrent Unit (GRU) network of $Q$ layers that takes in a sequence of observed received signal powers (i.e., $S_{ob}$) and learns to predict the link status $b_{T_{P}}$. Fig. 3 depicts the schematic of such a network. Each layer in the network comprises a sequence of $T_{o}$ GRUs. The output of the last GRU of the last layer is fed to a Fully Connected (FC) layer followed by either a classifier for Problem 1 or a regressor for Problem 2. The classifier outputs a probability vector ($\hat{\mathbf{p}}$) of whether the link is blocked or not in the $T_{P}$ future time instances. For Problem 2, the regressor outputs the predicted time instance $\hat{n}^{\prime}$ indicating when the blockage will occur.

Pre-Processing: Before we input the data into our network, we need to pre-process it to make it suitable for our model to learn; see [13] for more information. We choose to standardize the inputs by subtracting the mean $\mu$ of the dataset and dividing by its standard deviation $\sigma$. Let $\mathbf{A}\in\mathbb{R}^{U\times N}$ be the dataset matrix, where each row represents a data point, with a total number of data points of $U$. Data standardization is done by computing:

$\mathbf{\hat{A}}_{u,n}=\frac{\mathbf{A}_{u,n}-\mu}{\sigma},$ (5)

where:

$\mu=\frac{1}{N\times U}\sum_{u=1}^{U}\sum_{n=1}^{N}\mathbf{A}_{u,n}$ (6)

$\sigma=\sqrt{\frac{1}{N\times U}\sum_{u=1}^{U}\sum_{n=1}^{N}(\mathbf{A}_{u,n}-\mu)^{2}}$ (7)

$\forall u\in\{1,\dots,U\}\text{ and }n\in\{1,\dots,N\}.$ (8)
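A minimal PyTorch sketch of this architecture is given below; the single-feature input format and the default sizes (which match the hyperparameters reported later in Table II) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BlockagePredictor(nn.Module):
    """GRU network of Fig. 3: stacked GRU layers over the T_o-length power
    sequence, followed by an FC head. num_outputs=2 yields the Problem 1
    classifier; num_outputs=1 yields the Problem 2 regressor."""

    def __init__(self, hidden: int = 20, num_layers: int = 1,
                 num_outputs: int = 2, dropout: float = 0.2):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden,
                          num_layers=num_layers, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden, num_outputs)

    def forward(self, seq):                     # seq: (batch, T_o, 1), standardized
        out, _ = self.gru(seq)
        return self.fc(self.drop(out[:, -1]))   # last GRU output -> FC head

# Usage: a batch of 8 standardized power sequences of length T_o = 10.
model = BlockagePredictor()
logits = model(torch.randn(8, 10, 1))           # shape (8, 2)
```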
Training loss: For Problem 1, the future link-status prediction is posed as a binary classification problem, in which the classifier attempts to determine whether the link is blocked or not within the future time interval. As such, the network training is performed with a cross-entropy loss function [11] computed over the outputs of the network:

$l_{\text{CH}}=-\sum_{c=1}^{2}p_{c}\log{\hat{p}_{c}},$ (9)

where $\mathbf{p}=[p_{1},p_{2}]^{T}$ is the one-hot label vector, i.e., a binary representation of the categorical label in which the true category is encoded as 1 and the other as 0. It takes one of two values: $[1,0]^{T}$ when $b_{T_{P}}=0$ and $[0,1]^{T}$ when $b_{T_{P}}=1$, and $l_{\text{CH}}$ is the training loss computed for one data point.

For Problem 2, we pose the problem of predicting the blockage instance as a regression problem. Our model tries to determine the exact time instance at which the blockage occurs. We use the Mean Square Error (MSE) loss as the training function. In formal terms, we aim to minimize the difference between the predicted instance and the ground-truth instance:

$l_{\text{MSE}}=(n^{\prime(u)}-\hat{n}^{\prime(u)})^{2}$ (10)

where $n^{\prime(u)}$ and $\hat{n}^{\prime(u)}$ are the ground-truth time instance and the predicted time instance, respectively.
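In PyTorch terms, the two training objectives reduce to the following; treating the classifier output as logits (so that the cross-entropy includes the softmax) is an implementation assumption of this sketch.

```python
import torch
import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()   # Equation (9): softmax + negative log-likelihood
mse_loss = nn.MSELoss()           # Equation (10)

# Problem 1: class logits of shape (batch, 2) against labels b_{T_P} in {0, 1}.
logits = torch.randn(8, 2)
b = torch.randint(0, 2, (8,))
loss1 = ce_loss(logits, b)

# Problem 2: predicted blockage instance against the ground-truth n'.
n_hat = torch.randn(8, 1)
n_true = torch.randint(1, 41, (8, 1)).float()
loss2 = mse_loss(n_hat, n_true)
```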
## V Experimental Setup and Evaluation Dataset

### V-A Communication Scenario & Testbed

Figure 4: Experimental scenario. The blockage moves along a trajectory between the transmitter (TX) and receiver (RX).

TABLE I: Parameters for the mmWave Communication System

Name | Value
---|---
Carrier Frequency | 60 GHz
Signal Bandwidth | 20 MHz
Number of Subcarriers | 64
Horn Antenna Gain | 20 dBi
Transmit Power | 30 dBm

We build a mmWave communication system comprising a transmitter with an omnidirectional antenna communicating with a receiver with a 10-degree beamwidth horn antenna. The operating parameters of our mmWave communication system are shown in Table I. To simulate a moving blockage, we use a metal cylinder with a height of 1 m, which can completely block the LOS link between the transmitter and receiver. We then mount that cylinder onto a programmed robot that moves along a pre-defined trajectory to simulate the moving blockage. Fig. 4 depicts our system.

Figure 5: Top view of the experimental scenario. (a) shows the trajectory of the moving blockage and the relative positions of the transmitter (TX), receiver (RX), and moving blockage. (b) shows the rotated TX-RX setup.

We consider an indoor scenario where the transmitter and receiver are placed 8 m apart from each other and the robot moves between them along different trajectories. To illustrate this further, consider the coordinate system in Fig. 4, where the x-axis extends along the LOS between the transmitter and receiver, the y-axis extends perpendicular to that LOS, and the z-axis is perpendicular to the ground. The robot moves back and forth on the y-axis to create multiple back-and-forth trajectories with a spacing of 1 m. Then, the whole testbed is rotated by a small angle in the x-y plane such that the background is slightly changed, and the robot is programmed to do another back-and-forth cycle; see Fig. 5(a) and Fig. 5(b) for an example of the robot motion in the rotated testbed. By continuously moving and rotating, we collect a dataset of power sequences that encodes different motion and propagation patterns.

### V-B Dataset Generation

Figure 6: Sequence generation using a sliding window. The top image shows the received power for a raw sequence. The bottom image shows the corresponding link status.

Every robot trajectory provides us with a single received-power sequence, which is manually annotated to create the link status labels. We use 1 to indicate the time instances at which the LOS link is blocked and 0 otherwise. We call the pair of received power and link status sequences a raw sequence pair. Furthermore, since the number of raw sequence pairs we collected from the experiment is limited, data augmentation is used to increase the dataset size. Originally, we conducted 158 experiments and generated 158 sequence pairs, i.e., $\mathcal{S}_{1}=\{(S_{d1},x_{d1})^{(u)}\}_{u=1}^{U_{d}}$ with $U_{d}=158$, where the subscript $d1$ denotes the original input. We generate additional pairs by dropping samples at rates 2, 3, and 4, which results in $\mathcal{S}_{2}=\{(S_{d2},x_{d2})^{(u)}\}_{u=1}^{U_{d}}$, $\mathcal{S}_{3}=\{(S_{d3},x_{d3})^{(u)}\}_{u=1}^{U_{d}}$, and $\mathcal{S}_{4}=\{(S_{d4},x_{d4})^{(u)}\}_{u=1}^{U_{d}}$, respectively. In reality, this procedure means the blockage moves along the same trajectory at 2, 3, or 4 times the original speed. Next, we concatenate these sequences together as $\mathcal{S}=\mathcal{S}_{1}\bigcup\mathcal{S}_{2}\bigcup\mathcal{S}_{3}\bigcup\mathcal{S}_{4}=\{(S,x)^{(u)}\}_{u=1}^{U}$, where $U=4U_{d}$. The methods to generate the data points for Problems 1 and 2 are as follows.

Problem 1: A data point consists of an input received signal power sequence $S_{ob}$ and the input label $b_{T_{P}}$. To generate the received signal power sequence, we use a sliding-window method, shown in Fig. 6. For example, assume that the current time is $t$; we generate $S_{ob}$ by extracting the received power sequence from time instance $t-T_{o}+1$ to $t$, shown as the red box in the top subplot of Fig. 6. For input label generation, we first extract the label sequence from time instance $t+1$ to $t+T_{P}$, shown as the green box in the bottom subplot of Fig. 6; this gives us the sequence $\{x[t+n]\}_{n=1}^{T_{P}}$. Then we generate the input label $b_{T_{P}}$ using Equation (4). Finally, we pair the received signal power sequences with the input labels, expressed as $S_{P1}=\{(S_{ob},b_{T_{P}})^{(u)}\}_{u=1}^{U_{P1}}$, where $U_{P1}$ is the total number of samples we input to our model. In this problem, $S_{P1}$ is a mixture of two categories: non-transition sequence pairs, where $b_{T_{P}}=0$, and transition sequence pairs, where $b_{T_{P}}=1$. To eliminate dataset bias, we keep the ratio of transition to non-transition sequence pairs at 1:1.

Problem 2: A data point is represented by an input received signal power sequence $S_{ob}$ with the ground-truth time instance $n^{\prime}$ instead of the label, so $S_{P2}=\{(S_{ob},n^{\prime})^{(u)}\}_{u=1}^{U_{P2}}$, where $U_{P2}$ is the total number of samples that are input to our model. In this problem, we only select transition-sequence pairs for our input dataset. In our experiments, we pick the input sequence length ($T_{o}$) as 10 and the prediction interval ($T_{P}$) from 1 to 40. For each $T_{P}$, each raw sequence pair generates one transition-sequence pair and multiple non-transition sequence pairs. In total, we get 632 transition-sequence pairs for each $T_{P}$.
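The sliding-window construction can be sketched as follows; folding the standardization of Equation (5) into the dataset builder, and the exact window bookkeeping, are assumptions consistent with Fig. 6.

```python
import numpy as np

def make_problem1_dataset(power: np.ndarray, status: np.ndarray,
                          T_o: int = 10, T_P: int = 5):
    """Slide a window over one raw sequence pair (received power, link status)
    and emit (S_ob, b_{T_P}) pairs as in Fig. 6 and Equation (4)."""
    X, y = [], []
    for t in range(T_o - 1, len(power) - T_P):
        S_ob = power[t - T_o + 1 : t + 1]                  # observation window (red box)
        b = int(np.any(status[t + 1 : t + T_P + 1] == 1))  # future labels (green box)
        X.append(S_ob)
        y.append(b)
    X = np.asarray(X, dtype=np.float32)
    X = (X - X.mean()) / X.std()                           # standardization, Eq. (5)
    return X, np.asarray(y)
```

Balancing transition and non-transition pairs to the 1:1 ratio described above would then be done by subsampling the majority class.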
## VI Experimental Results

### VI-A Evaluation Metrics

Since Problem 1 is a classification problem, we use Top-1 accuracy as our evaluation metric. It is defined as:

$\text{Acc}_{\text{top-1}}=\frac{1}{U_{v1}}\sum_{u=1}^{U_{v1}}\mathbbm{1}(b_{T_{P}}^{(u)}=\hat{b}_{T_{P}}^{(u)}),$ (11)

where $\mathbbm{1}$ is the indicator function, $U_{v1}$ is the total number of samples in the validation set for Problem 1, and $b_{T_{P}}^{(u)}$ and $\hat{b}_{T_{P}}^{(u)}$ are, respectively, the target and predicted link statuses for a future interval of $T_{P}$ instances.

Problem 2 is posed as a regression problem, and so we use the Mean Absolute Error (MAE) and its standard deviation to evaluate the quality of our model predictions. The MAE is the mean absolute error between the ground-truth and predicted values. For each prediction interval $T_{P}$, we calculate the MAE across the samples and the standard deviation of these absolute errors:

$e^{(u)}_{T_{P}}=\lvert n^{\prime(u)}-\hat{n}^{\prime(u)}\rvert,\quad\forall u\in\{1,\dots,U_{v2}\},$ (12)

$\bar{e}_{T_{P}}=\frac{1}{U_{v2}}\sum_{u=1}^{U_{v2}}\lvert n^{\prime(u)}-\hat{n}^{\prime(u)}\rvert,$ (13)

$\text{std}_{T_{P}}={\sqrt{{\frac{1}{U_{v2}}}\sum_{u=1}^{U_{v2}}\left(e^{(u)}_{T_{P}}-{\bar{e}_{T_{P}}}\right)^{2}}},$ (14)

where $e^{(u)}_{T_{P}}$ is the absolute error for the $u$th sample, $\bar{e}_{T_{P}}$ is the MAE, $\text{std}_{T_{P}}$ is the standard deviation of the absolute errors, $U_{v2}$ is the total number of samples in the validation set, and $n^{\prime(u)}$ and $\hat{n}^{\prime(u)}$ are the target and predicted numbers of time instances between the current time and the time of blockage occurrence, assuming the prediction interval is $T_{P}$.
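The two metrics reduce to a few lines of NumPy; this sketch assumes the targets and predictions have already been collected into arrays over the validation set.

```python
import numpy as np

def top1_accuracy(b_true: np.ndarray, b_pred: np.ndarray) -> float:
    """Equation (11): fraction of validation samples with a correct link status."""
    return float(np.mean(b_true == b_pred))

def mae_and_std(n_true: np.ndarray, n_pred: np.ndarray):
    """Equations (12)-(14): MAE and the standard deviation of absolute errors."""
    e = np.abs(n_true - n_pred)
    return float(e.mean()), float(e.std())
```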
### VI-B Network Training

We build the deep learning model described in Section IV using PyTorch. We input 10 successive time instances of received signal power and the corresponding label for training. Our model consists of 1 RNN layer with 20 hidden states and a dropout rate of 0.2. The number of epochs is 1000. These parameters are chosen based on empirical experiments. The detailed parameters of our RNN for Problems 1 and 2 are shown in Table II.

TABLE II: Parameters for the Deep Learning Model

Name | Problem 1 | Problem 2
---|---|---
Input sequence length | 10 | 10
Predicted future time steps | 1–40 | 1–40
Hidden state of RNN | 20 | 20
Output dimension | 2 | 1
Number of RNN layers | 1 | 1
Dropout rate | 0.2 | 0.2
Epochs | 1000 | 1000

### VI-C Performance Evaluation

Problem 1: Fig. 7 plots the Top-1 accuracy as a function of the prediction interval. We see that as the prediction interval increases, the accuracy decreases; the Top-1 accuracy decreases sharply at first, and then flattens out. Our model achieves high accuracy when predicting the occurrence of blockage in the near future, i.e., we achieve above 80% accuracy when predicting the occurrence of blockage within the next 6 time instances. This is because when the prediction interval is small, our training dataset contains a large number of sequences with a pre-blockage signature, resulting in a high ratio of these sequences in the training dataset. The prediction accuracy remains good until the prediction interval goes beyond 15 time instances. It finally converges to the “random guess” performance as the prediction interval approaches 40 time instances, i.e., approaching an accuracy of $50\%$. This general trend emphasizes the value of pre-blockage signatures, which are most pronounced when the blockage is very close to the transmitter and receiver.

Figure 7: Top-1 blockage prediction accuracy for different future prediction intervals.

Figure 8: Mean absolute error between the target (the exact time instance where the blockage happens) and the prediction of this blockage time instance for different future prediction intervals.

Problem 2: Fig. 8 plots the mean absolute error between the prediction and the ground truth as a function of the prediction interval. For each prediction interval, we show the standard deviation as an error bar. For prediction intervals below 10 time instances, our model predicts the blockage transition with a mean absolute error under 1.9 time instances and low volatility ($\pm$1.5). For shorter prediction intervals, our observation window more likely falls into a pre-blockage signature, resulting in lower prediction errors. As the prediction interval increases, our observation window captures more sequences without a pre-blockage signature, resulting in weaker predictions. However, even when the prediction interval is 40 time instances, we can still predict the exact time of the blockage occurrence with a mean absolute error under 8 time instances.

## VII Conclusion

In this paper, we proposed a deep-learning-based solution for moving blockage prediction in mmWave communication systems. Specifically, we developed an RNN model to predict both the occurrence of a moving blockage and the exact time when the LOS link is blocked. Simulation results on real data showed that our model achieves good performance for moving blockage prediction, which allows the user to be proactively handed over to another base station without disconnecting the session, with a high success probability.

## References

* [1] R. W. Heath, N. Gonzalez-Prelcic, S. Rangan, W. Roh, and A. M. Sayeed, “An overview of signal processing techniques for millimeter wave MIMO systems,” _IEEE Journal of Selected Topics in Signal Processing_, vol. 10, no. 3, pp. 436–453, 2016.
* [2] A. Alkhateeb, I. Beltagy, and S. Alex, “Machine learning for reliable mmWave systems: Blockage prediction and proactive handoff,” in _2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)_. IEEE, 2018, pp. 1055–1059.
* [3] T. S. Rappaport, Y. Xing _et al._, “Overview of millimeter wave communications for fifth-generation (5G) wireless networks—with a focus on propagation models,” _IEEE Transactions on Antennas and Propagation_, vol. 65, no. 12, pp. 6213–6230, 2017.
* [4] J.-S. Choi, W.-H. Lee, J.-H. Lee, J.-H. Lee, and S.-C. Kim, “Deep learning based NLOS identification with commodity WLAN devices,” _IEEE Transactions on Vehicular Technology_, vol. 67, no. 4, pp. 3295–3303, 2017.
* [5] C. Huang, A. F. Molisch _et al._, “Machine learning-enabled LOS/NLOS identification for MIMO system in dynamic environment,” _IEEE Transactions on Wireless Communications_, 2020.
* [6] M. Alrabeiah and A. Alkhateeb, “Deep learning for mmWave beam and blockage prediction using sub-6 GHz channels,” _IEEE Transactions on Communications_, vol. 68, no. 9, pp. 5504–5518, 2020.
* [7] G. Charan, M. Alrabeiah, and A. Alkhateeb, “Vision-aided dynamic blockage prediction for 6G wireless communication networks,” _arXiv preprint arXiv:2006.09902_, 2020.
* [8] M. Alrabeiah, A. Hredzak, and A. Alkhateeb, “Millimeter wave base stations with cameras: Vision-aided beam and blockage prediction,” in _2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring)_, 2020, pp. 1–5.
* [9] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2016, pp. 770–778.
* [10] G. Huang, Z. Liu, and K. Q. Weinberger, “Densely connected convolutional networks,” _CoRR_ , vol. abs/1608.06993, 2016. [Online]. Available: http://arxiv.org/abs/1608.06993 * [11] I. Goodfellow, Y. Bengio, and A. Courville, “Deep learning,” 2016, book in preparation for MIT Press. [Online]. Available: http://www.deeplearningbook.org * [12] A. Alkhateeb, S. Alex _et al._ , “Deep learning coordinated beamforming for highly-mobile millimeter wave systems,” _IEEE Access_ , vol. 6, pp. 37 328–37 348, 2018. * [13] Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, “Efficient backprop,” in _Neural networks: Tricks of the trade_. Springer, 2012, pp. 9–48.
Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), May 3–7, 2021, Online. U. Endriss, A. Nowé, F. Dignum, A. Lomuscio (eds.), 2021.

# Cooperative and Competitive Biases for Multi-Agent Reinforcement Learning

Heechang Ryu<EMAIL_ADDRESS>, Hayong Shin<EMAIL_ADDRESS>and Jinkyoo Park$\ast$<EMAIL_ADDRESS>($\ast$Corresponding author)

KAIST, Daejeon, Republic of Korea

###### Abstract.

Training a multi-agent reinforcement learning (MARL) algorithm is more challenging than training a single-agent reinforcement learning algorithm, because the result of a multi-agent task strongly depends on the complex interactions among agents and their interactions with a stochastic and dynamic environment. We propose an algorithm that boosts MARL training using the biased action information of other agents based on a friend-or-foe concept. In a cooperative and competitive environment, there are generally two groups of agents: cooperative agents and competitive agents. In the proposed algorithm, each agent updates its value function using its own action and the biased action information of other agents in the two groups. The biased joint action of cooperative agents is computed as the sum of their actual joint action and the imaginary cooperative joint action, obtained by assuming all the cooperative agents jointly maximize the target agent’s value function. The biased joint action of competitive agents can be computed similarly. Each agent then updates its own value function using the biased action information, resulting in a biased value function and a corresponding biased policy. Subsequently, the biased policy of each agent is inevitably inclined to recommend actions that cooperate and compete with other agents, thereby introducing more active interactions among agents and enhancing the MARL policy learning. We empirically demonstrate that our algorithm outperforms existing algorithms in various mixed cooperative-competitive environments. Furthermore, the introduced biases gradually decrease as the training proceeds, and the correction based on the imaginary assumption vanishes.

###### Key words and phrases: Multi-Agent Reinforcement Learning; Cooperation; Competition; Bias

## 1\. Introduction

Reinforcement learning (RL) algorithms solve sequential decision-making problems using experiences obtained by a single agent (decision maker) dynamically interacting with an environment. RL algorithms typically estimate an action-value function ($Q$-function) or a decision-making policy, using various function approximators (_e.g.,_ deep neural networks), to model how a particular action (decision) affects future outcomes. From this model, the optimal action in the current state for completing a target task can be deduced Sutton and Barto (2018). The success of RL algorithms in solving various tasks depends on how effectively they learn such temporal interactions between an action and a future outcome. In particular, when the RL algorithm is trained with a sparse or delayed reward, _i.e.,_ when a reward signal is infrequently realized after an agent executes an action, it becomes difficult to estimate the $Q$-function or policy. This is because it is challenging for the RL agent to learn the dynamic causal effect of its action on future outcomes, owing to the sparse and delayed reward signal.
A representative example of such a task with a sparse reward is a goal-oriented task, in which a binary reward is given only when an agent reaches a goal. To solve this task, HER Andrychowicz et al. (2017) has been proposed to learn the $Q$-function or policy using the reward signals obtained from failed tasks (episodes), while considering these reward signals as being obtained from successful tasks that are different from the original task. This strategy of HER transforms the sparse-reward environment into a dense-reward environment, enabling the RL agent to easily learn the $Q$-function or policy. In addition, HPG Rauber et al. (2019) extends the concept of HER to efficiently generalize learning about different goals using information obtained by the current policy for a specific goal.

Recently, to enhance the performance of multi-agent reinforcement learning (MARL) algorithms, which are extensions of RL algorithms to multi-agent settings, various strategies have been proposed. Some researchers have proposed an intrinsic reward to induce certain collective behaviors of multiple agents that are believed to help achieve the objective of a multi-agent task. For example, intrinsic rewards have been designed to encourage the agents to execute actions that influence other agents’ state transitions Wang et al. (2020); Böhmer et al. (2019) or to visit unexplored state spaces more frequently Iqbal and Sha (2019b). However, in general, designing a good intrinsic reward is difficult because it requires prior knowledge of the types of interactions that help solve the multi-agent task. Moreover, designing an effective intrinsic reward often requires an iterative reward-shaping procedure until satisfactory performance is reached. Consequently, for multi-agent tasks where a sparse reward is given and prior knowledge is not available, another effective method for MARL should be developed.

To address these difficulties, we herein propose an algorithm, called Friend-or-Foe multi-agent Deep Deterministic Policy Gradient (F2DDPG), which boosts MARL training using the biased action information of other agents based on a friend-or-foe concept Littman (2001). F2DDPG adopts an actor-critic algorithm in a centralized training and decentralized execution (CTDE) framework Lowe et al. (2017). In a cooperative and competitive environment, there are generally two groups of agents: an ally group composed of agents cooperative with a target agent and an enemy group composed of agents competitive with a target agent. In the proposed algorithm, each agent updates its $Q$-function (critic) using its own action and the biased action information of other agents in the two groups. The biased joint action of cooperative agents is computed as the sum of their actual joint action and the imaginary cooperative joint action obtained by assuming that all the cooperative agents jointly maximize the target agent’s critic. The biased joint action of competitive agents can be computed similarly. Each agent then updates its own critic using the biased action information, resulting in a biased critic and a corresponding biased policy. Thereafter, the biased policy of each agent is inevitably inclined to recommend actions that cooperate and compete with other agents, which introduces more active interactions among agents and thus enhances the MARL policy learning.
Using biased actions in estimating a critic can be viewed as an alternative to using a biased reward (intrinsic reward) in the estimation to induce the intended interactions among agents. However, we do not compare our method with MARL approaches using intrinsic rewards because they require a certain type of prior knowledge about the environment (game). Instead, we compare F2DDPG with M3DDPG Li et al. (2019), the latter of which uses modified (biased) action information of other agents when training the critic; it modifies the other agents’ actions in an adversarial manner to induce a robust policy. Empirically, we demonstrate that our algorithm outperforms existing algorithms in four mixed cooperative-competitive game scenarios, in which the agents have cooperative and competitive interactions Lowe et al. (2017). Furthermore, we empirically show that the introduced biases gradually decrease as the training proceeds and that the correction based on the imaginary assumption vanishes.

## 2\. Related Work

### 2.1. Friend-or-Foe Q-Learning

In general-sum games, such as mixed cooperative-competitive games, friend-or-foe Q-learning (FFQ) Littman (2001) has been proposed to provide strong convergence guarantees compared to the existing Nash-equilibrium-based learning rule Hu et al. (1998). It requires that other agents be identified as either ‘friend’ (ally) or ‘foe’ (enemy). FFQ then assumes that agent $i$’s friends are working together to maximize agent $i$’s value, while agent $i$’s foes are working together to minimize agent $i$’s value. Thus, $n$-player FFQ considers any game as a two-player zero-sum game with an extended action set, and it is easy to implement for multiple agents.

### 2.2. Biases in RL and MARL

In RL and MARL, various forms of inductive bias have been used to improve learning. The most straightforward inductive biases entail designing network structures for the critic or policy, such as attention networks Iqbal and Sha (2019a), graph neural networks Ryu et al. (2020), and implicit communication structures Roy et al. (2019). However, biases in game information, such as state, reward, and action, have also been used in an attempt to boost training. We review such information biases in this subsection.

#### 2.2.1. Biases in States.

Bias has been reported to help train RL by injecting a biased belief regarding the state at the initialization stage of the $Q$-table Hailu and Sommer (1999). For example, in a goal-oriented task, if the goal is known in advance, biased information about the states near and far from the goal is injected into the $Q$-table before training. In addition, a distributed $Q$-learning algorithm for a cooperative multi-agent setting has been proposed based on an optimistic assumption Lauer and Riedmiller (2000). Under this assumption, the algorithm updates the $Q$-table in a biased manner, only when the new value for $Q$ is greater than the current value in the current state.

#### 2.2.2. Biases in Rewards.

Intrinsic rewards have been proposed as a bias for multi-agent exploration to induce certain collective behaviors of agents, based on prior knowledge of the types of interactions that help solve the multi-agent task. The intrinsic rewards are provided when one agent’s action affects the state transitions of other agents Wang et al. (2020); Böhmer et al. (2019) and when all the agents explore only different or the same areas for the task of collecting scattered treasures Iqbal and Sha (2019b).

#### 2.2.3. Biases in Actions.

M3DDPG Li et al.
(2019) has been proposed to learn a robust policy using other agents’ action information corrupted with adversarial noise. In this approach, each agent assumes that the other agents provide adversarial actions to the target agent, and it updates its critic using such adversarial action information. To compute the adversarial (biased) action, each agent modifies the actions of other agents in the direction that minimizes the target agent’s critic. M3DDPG asserts that the policy trained using this biased action information outperforms its baseline algorithm, MADDPG Lowe et al. (2017). However, the limitation of this approach is that it does not consider the relationships among agents. It assumes that all the agents are adversarial to the target agent, regardless of whether they are allies or enemies in the cooperative and competitive game; this assumption is inconsistent with the actual situation.

Notably, M3DDPG is similar to our method in that it uses biased information about other agents’ actions. However, F2DDPG explicitly considers the roles of agents in a cooperative and competitive environment, in which the cooperative and competitive agents are known a priori. In addition, our method can be justified by the well-known FFQ Littman (2001). We mainly compare the performance of our method with that of M3DDPG, while not addressing other approaches using different types of information bias. In this study, we assume that it is possible to identify other agents as being either cooperative (allies) or competitive (enemies) in mixed cooperative-competitive environments, to fully implement and compare the proposed method with the baseline methods.

Figure 1. Overview of F2DDPG.

## 3\. Background

### 3.1. Partially Observable Markov Game

We consider a partially observable Markov game Littman (1994), which is an extension of the partially observable Markov decision process to a game with multiple agents. A partially observable Markov game for $N$ agents is defined as follows: $s\in\mathcal{S}$ denotes the global state of the game; $o_{i}\colon\mathcal{S}\mapsto\mathcal{O}_{i}$ denotes the local observation that agent $i$ can acquire, correlated with the state; and $a_{i}\in\mathcal{A}_{i}$ is an action of agent $i$. The reward for agent $i$ is obtained as a function of the state $s$ and the joint action $\mathbf{a}$ as ${r}_{i}:\mathcal{S}\times\mathcal{A}_{1}\times\dots\times\mathcal{A}_{N}\mapsto\mathbb{R}$. The state evolves to the next state according to the state transition function $\mathcal{T}:\mathcal{S}\times\mathcal{A}_{1}\times\dots\times\mathcal{A}_{N}\mapsto\mathcal{S}$. The initial state is determined by the initial state distribution $\rho:\mathcal{S}\mapsto[0,1]$. Agent $i$ aims to maximize its discounted return $R_{i}=\sum_{t=0}^{T}\gamma^{t}r_{i,t}$, where $\gamma\in[0,1]$ is a discount factor.

### 3.2. Multi-Agent Deep Deterministic Policy Gradient (MADDPG)

While a policy can be deterministic, $a=\mu(s)$, or stochastic, $a\sim\pi(\cdot\arrowvert s)$, the deterministic policy gradient (DPG) Silver et al. (2014) for RL adopts a deterministic policy. DPG aims to directly derive the deterministic policy, $a=\mu(s;\theta)$, that maximizes the expected return or objective $\mathcal{J}(\theta)=\mathbb{E}_{s\sim\rho^{\mu},a\sim\mu_{\theta}}[R]\approx\mathbb{E}_{s\sim\rho^{\mu},a\sim\mu_{\theta}}[Q^{\mu}(s,a;\phi)]$, where $Q^{\mu}(s,a;\phi)=\mathbb{E}_{s^{\prime}}[r+\gamma\mathbb{E}_{a^{\prime}\sim\mu_{\theta}}[Q^{\mu}(s^{\prime},a^{\prime};\phi)]]$.
Parameter $\theta$ of $\mu(s;\theta)$ is subsequently optimized using the gradient of $\mathcal{J}(\theta)$, given by $\nabla_{\theta}\mathcal{J}(\theta)=\mathbb{E}_{s\sim\mathcal{D}}[\nabla_{\theta}\mu(s;\theta){\nabla_{a}}Q^{\mu}(s,a;\phi)\arrowvert_{a=\mu(s;\theta)}]$, where $\mathcal{D}$ is an experience replay buffer that stores the $(s,a,r,{s^{\prime}})$ samples obtained from the training episodes. Deep deterministic policy gradient (DDPG) Lillicrap et al. (2016), an actor-critic algorithm based on DPG, uses deep neural networks to approximate the critic $Q^{\mu}(s,a;\phi)$ and actor $\mu(s;\theta)$ of the agent.

MADDPG is a multi-agent extension of DDPG for deriving decentralized policies in the CTDE framework. In MADDPG, each agent learns an individual policy that maps its observation to its action to maximize its expected return, which is approximated by the $Q$-network. MADDPG comprises individual $Q$-networks and policy networks for each agent. MADDPG lets the $Q$-network (centralized critic) of agent $i$ be trained by minimizing the loss with the target $Q$-value, $y_{i}$, as follows:

$\begin{gathered}\mathcal{L}(\phi_{i})={\mathbb{E}_{\mathbf{o},\mathbf{a},r,{\mathbf{o}^{\prime}}\sim\mathcal{D}}}[(Q_{i}^{\mu}(\mathbf{o},\mathbf{a};\phi_{i})-y_{i})^{2}],\\ y_{i}=r_{i}+\gamma{Q_{i}^{\mu^{\prime}}}({\mathbf{o}^{\prime}},\mathbf{a}^{\prime};{\phi^{\prime}_{i}})\arrowvert_{a_{j}^{\prime}=\mu_{j}^{\prime}(o^{\prime}_{j};\theta_{j}^{\prime})},\end{gathered}$ (1)

where $\mathbf{o}=(o_{1},\dots,o_{N})$ and $\mathbf{a}=(a_{1},\dots,a_{N})$ represent the joint observation and joint action of all agents, respectively. $\mathcal{D}$ is an experience replay buffer that stores the $(\mathbf{o},\mathbf{a},r,{\mathbf{o}^{\prime}})$ samples obtained from the training episodes. $Q^{\mu^{\prime}}$ and $\mu^{\prime}$ are target networks for the stable learning of the $Q$-networks and policy networks. The policy network (actor), $\mu_{i}(o_{i};\theta_{i})$, of agent $i$ is optimized using the gradient computed as

$\nabla_{\theta_{i}}\mathcal{J}(\theta_{i})=\mathbb{E}_{\mathbf{o},\mathbf{a}\sim\mathcal{D}}[\nabla_{\theta_{i}}\mu_{i}(o_{i};\theta_{i})\nabla_{a_{i}}Q_{i}^{\mu}(\mathbf{o},\mathbf{a};\phi_{i})\arrowvert_{a_{i}=\mu_{i}(o_{i};\theta_{i})}].$ (2)
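For reference, the following is a minimal PyTorch sketch of one MADDPG critic and actor update for agent $i$, mirroring Equations (1) and (2); the critic interface (one network over concatenated observations and actions), the tensor shapes, and the optimizer handling are illustrative assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F

def maddpg_update(i, obs, acts, rew_i, obs_next, critics, critics_t,
                  actors, actors_t, opt_c, opt_a, gamma=0.95):
    """One MADDPG update for agent i. obs, acts, obs_next are lists of
    per-agent batch tensors; rew_i has shape (B, 1)."""
    # Critic (Equation (1)): regress Q_i(o, a) onto r_i + gamma * Q_i'(o', a'),
    # with a' produced by the target actors.
    with torch.no_grad():
        a_next = [actors_t[j](obs_next[j]) for j in range(len(actors_t))]
        y = rew_i + gamma * critics_t[i](torch.cat(obs_next + a_next, dim=-1))
    q = critics[i](torch.cat(obs + acts, dim=-1))
    opt_c.zero_grad()
    F.mse_loss(q, y).backward()
    opt_c.step()

    # Actor (Equation (2)): ascend Q_i with agent i's action replaced by the
    # output of its current policy; the other agents' actions stay fixed.
    a = [a_j.detach() for a_j in acts]
    a[i] = actors[i](obs[i])
    opt_a.zero_grad()
    (-critics[i](torch.cat(obs + a, dim=-1)).mean()).backward()
    opt_a.step()
```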
## 4\. Methods

F2DDPG learns the critic and actor of each agent using the biased action information of other agents based on a friend-or-foe concept Littman (2001). In F2DDPG, as shown in Figure 1, each agent has two perceptions of the environment:

* • The real environment, where agent $i$’s true critic can be estimated using its own action and the realized (actual) actions, $(a_{i},\mathbf{a}^{A}_{-i},\mathbf{a}^{E}_{-i})$;
* • An imaginary environment, where agent $i$’s imaginary critic can be estimated using its own action and the imaginary cooperative and competitive joint actions, $(a_{i},\mathbf{a}^{*A}_{-i},\mathbf{a}^{*E}_{-i})$. These are computed based on the assumption that all the ally agents execute the joint action cooperative to agent $i$, while all the enemy agents execute the joint action competitive to agent $i$.

In F2DDPG, each agent learns the decentralized actor (policy) by applying the following three iterative procedures: (1) computing the biased actions, $\overline{\mathbf{a}}^{A}_{-i}$ and $\overline{\mathbf{a}}^{E}_{-i}$, by combining the real joint action with the imaginary cooperative and competitive joint actions; (2) updating the biased critic using the biased actions; and (3) updating the actor using the biased critic. Thus, the updated policy with biased cooperative-competitive joint actions is more likely to recommend such cooperative and competitive actions toward other agents, which introduces meaningful interactions among agents and enhances policy learning. Therefore, using the biased actions, F2DDPG can learn the level of cooperation and competition among agents in a sample-efficient manner. The biased actions become increasingly closer to the actually executed actions as the training proceeds, implying that the biases in the actors vanish.

Figure 1 illustrates how agent $i$ (a predator) updates its critic in F2DDPG while playing the 3 vs. 3 predator-prey game. In this game, the three predators (red circles) attempt to capture the three prey (green squares) together. Because the prey are faster than the predators, the predators (ally group) must cooperate strategically to capture the prey (enemy group). In the early stages of learning, however, it is difficult for the predators to achieve a reward. Whenever the reward is realized, whether by chance or by strategic moves, predator $i$ computes imaginary actions by assuming that the reward is realized when the other two predators choose the optimal cooperative joint action for predator $i$, while the three prey execute the competitive joint action to predator $i$. Agent $i$ updates its critic using its own action and the one-step biased joint cooperative-competitive actions of the other agents.

Algorithm 1 Friend-or-Foe Multi-Agent Deep Deterministic Policy Gradient Algorithm for $N$ agents

1: Initialize actor networks $\mu$, critic networks $Q$, target networks $\mu^{\prime}$ and $Q^{\prime}$, and experience replay buffer $\mathcal{D}$
2: for episode $=1$ to $M$ do
3:  Initialize a random process $\mathcal{N}$ for action exploration
4:  Receive initial observation $\mathbf{o}$
5:  for $t=1$ to $T$ do
6:   For each agent $i$, select action $a_{i}=\mu_{i}(o_{i};\theta_{i})+\mathcal{N}_{t}$
7:   Execute actions $\mathbf{a}$ and receive reward $\mathbf{r}=(r_{1},\dots,r_{N})$ and new observation $\mathbf{o}^{\prime}$
8:   Store $(\mathbf{o},\mathbf{a},\mathbf{r},\mathbf{o}^{\prime})$ in experience replay buffer $\mathcal{D}$
9:   $\mathbf{o}\leftarrow\mathbf{o}^{\prime}$
10:   for agent $i=1$ to $N$ do
11:    Sample a random minibatch of $\mathcal{B}$ samples $(\mathbf{o}^{b},\mathbf{a}^{b},\mathbf{r}^{b},\mathbf{o}^{\prime b})$ from $\mathcal{D}$
12:    Set $y^{b}_{i}=r^{b}_{i}+{\gamma}Q_{i}^{\mu^{\prime}}({\mathbf{o}^{\prime b}},a^{\prime}_{i},\overline{\mathbf{a}}^{\prime A}_{-i},\overline{\mathbf{a}}^{\prime E}_{-i};\phi^{\prime}_{i})\arrowvert_{a_{i}^{\prime}=\mu_{i}^{\prime}(o_{i}^{\prime b};\theta_{i}^{\prime})}$, with
13:    $\overline{\mathbf{a}}^{\prime A}_{-i}=\mathbf{a}^{\prime A}_{-i}+\delta^{A}\nabla_{\mathbf{a}^{\prime A}_{-i}}Q_{i}^{\mu^{\prime}}({\mathbf{o}^{\prime b}},a^{\prime}_{i},\mathbf{a}^{\prime A}_{-i},\mathbf{a}^{\prime E}_{-i};\phi_{i}^{\prime})\arrowvert_{a_{k}^{\prime}=\mu_{k}^{\prime}(o^{\prime b}_{k};\theta_{k}^{\prime})}$,
14:    $\overline{\mathbf{a}}^{\prime E}_{-i}=\mathbf{a}^{\prime E}_{-i}-\delta^{E}\nabla_{\mathbf{a}^{\prime E}_{-i}}Q_{i}^{\mu^{\prime}}({\mathbf{o}^{\prime b}},a^{\prime}_{i},\mathbf{a}^{\prime A}_{-i},\mathbf{a}^{\prime E}_{-i};\phi_{i}^{\prime})\arrowvert_{a_{k}^{\prime}=\mu_{k}^{\prime}(o^{\prime b}_{k};\theta_{k}^{\prime})}$
15:    Update the critic by minimizing the loss: $\mathcal{L}(\phi_{i})=\frac{1}{\mathcal{B}}\sum_{b}(Q_{i}^{\mu}({\mathbf{o}^{b}},\mathbf{a}^{b};\phi_{i})-y^{b}_{i})^{2}$
16:    Update the actor using the sampled policy gradient:
$\nabla_{\theta_{i}}\mathcal{J}(\theta_{i})\approx\frac{1}{\mathcal{B}}\sum_{b}\nabla_{\theta_{i}}\mu_{i}(o^{b}_{i};\theta_{i})\nabla_{a_{i}}Q_{i}^{\mu}({\mathbf{o}^{b}},a_{i},\overline{\mathbf{a}}^{A}_{-i},\overline{\mathbf{a}}^{E}_{-i};\phi_{i})\arrowvert_{a_{i}=\mu_{i}(o^{b}_{i};\theta_{i})}$, with
17:    $\overline{\mathbf{a}}^{A}_{-i}=\mathbf{a}^{bA}_{-i}+\delta^{A}\nabla_{\mathbf{a}^{bA}_{-i}}Q_{i}^{\mu}({\mathbf{o}^{b}},a^{b}_{i},\mathbf{a}^{bA}_{-i},\mathbf{a}^{bE}_{-i};{\phi_{i}})$,
18:    $\overline{\mathbf{a}}^{E}_{-i}=\mathbf{a}^{bE}_{-i}-\delta^{E}\nabla_{\mathbf{a}^{bE}_{-i}}Q_{i}^{\mu}({\mathbf{o}^{b}},a^{b}_{i},\mathbf{a}^{bA}_{-i},\mathbf{a}^{bE}_{-i};{\phi_{i}})$
19:   end for
20:   Update the target network parameters for each agent $i$: $\phi_{i}^{\prime}\leftarrow\tau\phi_{i}+(1-\tau)\phi_{i}^{\prime}$ and $\theta_{i}^{\prime}\leftarrow\tau\theta_{i}+(1-\tau)\theta_{i}^{\prime}$
21:  end for
22: end for

### 4.1. Computing Biased Cooperative-Competitive Actions

Considering the relationships to agent $i$ in the mixed cooperative-competitive environment, we categorize all agents, except $i$, as either cooperative or competitive to agent $i$ as follows:

* • $A(i)$: set of agents cooperative to agent $i$ (ally group);
* • $E(i)$: set of agents competitive to agent $i$ (enemy group);
* • $\mathbf{a}^{A}_{-i}$: joint action of agents in $A(i)$;
* • $\mathbf{a}^{E}_{-i}$: joint action of agents in $E(i)$.

If one assumes that the agents in $A(i)$ and $E(i)$ jointly maximize and minimize the critic of agent $i$, respectively, then agent $i$ can estimate its critic as follows:

$\overline{Q}_{i}^{\mu}(\mathbf{o},a_{i};\phi_{i})=\max\limits_{\mathbf{a}^{A}_{-i}}\min\limits_{\mathbf{a}^{E}_{-i}}Q_{i}^{\mu}(\mathbf{o},a_{i},\mathbf{a}^{A}_{-i},\mathbf{a}^{E}_{-i};\phi_{i}).$ (3)

This estimated critic is biased because, in reality, each agent in $A(i)$ or $E(i)$ executes its action only to maximize its own individual critic, similar to the process in MADDPG. The two optimal joint actions $(\mathbf{a}^{*A}_{-i}$, $\mathbf{a}^{*E}_{-i})$ that achieve the maxmin value in Equation 3 are called a saddle-point equilibrium strategy, which is equivalent to the Nash equilibrium strategy and satisfies

$\begin{gathered}\mathbf{a}^{*A}_{-i}=\operatorname*{argmax}\limits_{\mathbf{a}^{A}_{-i}}Q_{i}^{\mu}(\mathbf{o},a_{i},\mathbf{a}^{A}_{-i},\mathbf{a}^{*E}_{-i};\phi_{i}),\\ \mathbf{a}^{*E}_{-i}=\operatorname*{argmin}\limits_{\mathbf{a}^{E}_{-i}}Q_{i}^{\mu}(\mathbf{o},a_{i},\mathbf{a}^{*A}_{-i},\mathbf{a}^{E}_{-i};\phi_{i}).\end{gathered}$ (4)

We refer to these two actions in Equation 4 as the imaginary cooperative and competitive joint actions because these two joint actions rarely occur in the real environment. The critic of agent $i$ estimated using Equation 4 can be used in a decentralized training setting, where each agent cannot observe other agents’ actions during training and thus has to infer them. However, the current study focuses on developing an efficient MARL algorithm in the CTDE framework, which allows each agent to observe other agents’ actions during training. To help stabilize training without introducing any bias, other agents’ actions are explicitly used when learning the critic and the associated actor.
In this study, we propose to combine these two different learning paradigms to: (1) achieve reliable and stable learning using the true action information in the CTDE framework and (2) infuse the desirable behaviors (information biases) into agents using the imaginary cooperative-competitive joint actions (biased action information) computed in the decentralized learning framework. To achieve both objectives, the proposed method computes the biased actions by combining the actual and imaginary actions as follows:

$\begin{gathered}\overline{\mathbf{a}}^{A}_{-i}=\mathbf{a}^{A}_{-i}+\delta^{A}\nabla_{\mathbf{a}^{A}_{-i}}Q_{i}^{\mu}({\mathbf{o}},a_{i},\mathbf{a}^{A}_{-i},\mathbf{a}^{E}_{-i};{\phi_{i}}),\\ \overline{\mathbf{a}}^{E}_{-i}=\mathbf{a}^{E}_{-i}-\delta^{E}\nabla_{\mathbf{a}^{E}_{-i}}Q_{i}^{\mu}({\mathbf{o}},a_{i},\mathbf{a}^{A}_{-i},\mathbf{a}^{E}_{-i};{\phi_{i}}).\end{gathered}$ (5)

In Equation 5, we compute the one-step-biased cooperative-competitive joint actions, which approximate the imaginary cooperative-competitive joint actions in Equation 4, using the partial gradient of the critic $Q_{i}^{\mu}$. In addition, $\delta^{A}$ and $\delta^{E}$ are the step sizes for the biased cooperative and competitive joint actions, respectively. Note that computing $\overline{\mathbf{a}}^{A}_{-i}$ and $\overline{\mathbf{a}}^{E}_{-i}$ is computationally tractable because it only requires computing the partial gradient of the $Q$-network with respect to the joint action variables. In addition, $\delta^{A}$ and $\delta^{E}$ adjust the level of the injected cooperative and competitive biases. The optimal $\delta^{A}$ and $\delta^{E}$ can be determined empirically during training; however, these values do not significantly affect the learning performance because the partial gradient eventually becomes small, making the amount of action modification negligible.

### 4.2. Learning the Biased Critic Using Biased Actions

The biased actions in Equation 5 are then used to update the $Q$-network as:

$\begin{gathered}\mathcal{L}(\phi_{i})={\mathbb{E}_{\mathbf{o},\mathbf{a},r,{\mathbf{o}^{\prime}}\sim\mathcal{D}}}[(Q_{i}^{\mu}(\mathbf{o},\mathbf{a};\phi_{i})-y_{i})^{2}],\\ y_{i}=r_{i}+\gamma{Q_{i}^{\mu^{\prime}}}({\mathbf{o}^{\prime}},a^{\prime}_{i},\overline{\mathbf{a}}^{\prime A}_{-i},\overline{\mathbf{a}}^{\prime E}_{-i};\phi_{i}^{\prime})\arrowvert_{a_{i}^{\prime}=\mu_{i}^{\prime}(o^{\prime}_{i};\theta_{i}^{\prime})},\end{gathered}$ (6)

where $\overline{\mathbf{a}}^{\prime A}_{-i}$ and $\overline{\mathbf{a}}^{\prime E}_{-i}$ are, respectively, the biased actions computed using Equation 5 with the target $Q$-network, $Q_{i}^{\mu^{\prime}}$. The target network is designed to stabilize the learning of the $Q$-network by slowly changing the parameters $\phi^{\prime}$ of the target $Q$-network Lowe et al. (2017).

### 4.3. Learning the Actor from the Biased Critic

The biased critic $Q_{i}^{\mu}$ is used to update the decentralized actor as a deterministic policy.
The deterministic policy network $\mu_{i}$ is updated using the gradient computed as

$\nabla_{\theta_{i}}\mathcal{J}(\theta_{i})=\mathbb{E}_{\mathbf{o},\mathbf{a}\sim\mathcal{D}}[\nabla_{\theta_{i}}\mu_{i}(o_{i};\theta_{i})\nabla_{a_{i}}Q_{i}^{\mu}(\mathbf{o},a_{i},\overline{\mathbf{a}}^{A}_{-i},\overline{\mathbf{a}}^{E}_{-i};\phi_{i})\arrowvert_{a_{i}=\mu_{i}(o_{i};\theta_{i})}],$ (7)

where $\overline{\mathbf{a}}^{A}_{-i}$ and $\overline{\mathbf{a}}^{E}_{-i}$ are the biased joint actions of the ally and enemy agents, respectively, each of which is computed using Equation 5 with the $Q$-network, $Q_{i}^{\mu}$. We estimate the gradient reliably with the biased actions computed using the sampled joint actions (executed true actions) from the experience replay buffer. Each agent then updates the parameters of its own policy network using the policy gradient computed in Equation 7. Owing to the biased action information, the trained policy is also biased, such that it is inevitably inclined to recommend actions that cooperate and compete with other agents. Consequently, the biased policy introduces more active interactions among agents, and thus enhances the MARL policy learning. In addition, as the actual actions from the learned policies and the biased actions gradually become similar, the introduced bias naturally decreases, and the training eventually becomes unbiased once the biased and actual actions become the same. For completeness, we provide the F2DDPG algorithm in Algorithm 1.

Figure 2. Illustrations of the experimental environments: a) Cooperative Navigation, b) Cooperative Communication, c) Predator-Prey, d) Covert Communication.

In some cases of training F2DDPG, the critic’s gradient (bias) dominates the actual action within the biased action when learning the $Q$-network and policy network. Thus, the magnitude of the gradient is made equal to the magnitude of the actual action to prevent the biased action from being extremely biased and destabilizing the training, as follows:

$\begin{gathered}g=\nabla_{a_{-i}}Q_{i}^{\mu}({\mathbf{o}},\mathbf{a};{\phi_{i}}),\\ \overline{a}_{-i}=a_{-i}\pm\delta\lVert a_{-i}\rVert_{2}\frac{g}{\lVert g\rVert_{2}}.\end{gathered}$ (8)

The trick of Equation 8 is utilized in Equation 5.
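A minimal PyTorch sketch of the biased-action computation in Equations (5) and (8) is given below; the critic interface (a single network over concatenated observations and actions) and the batch handling are illustrative assumptions.

```python
import torch

def biased_actions(critic_i, obs, a_i, a_ally, a_enemy,
                   delta_A=0.1, delta_E=0.1):
    """Equation (5) with the magnitude trick of Equation (8).

    a_ally / a_enemy: joint actions of agent i's ally / enemy groups, each of
    shape (batch, dim); critic_i maps cat(obs, a_i, a_ally, a_enemy) -> (batch, 1).
    """
    a_ally = a_ally.detach().requires_grad_(True)
    a_enemy = a_enemy.detach().requires_grad_(True)
    q = critic_i(torch.cat([obs, a_i, a_ally, a_enemy], dim=-1)).sum()
    g_A, g_E = torch.autograd.grad(q, [a_ally, a_enemy])

    def step(a, g, delta, sign):
        # Equation (8): rescale the gradient to the magnitude of the action.
        g_unit = g / (g.norm(dim=-1, keepdim=True) + 1e-8)
        return (a + sign * delta * a.norm(dim=-1, keepdim=True) * g_unit).detach()

    return (step(a_ally, g_A, delta_A, +1.0),    # allies ascend Q_i
            step(a_enemy, g_E, delta_E, -1.0))   # enemies descend Q_i
```

During training, these biased joint actions replace the sampled ally and enemy actions in the critic target of Equation (6) and in the actor gradient of Equation (7).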
Figure 3. Rewards of the red agents in the experimental environments: (a) Cooperative Navigation; (b) Cooperative Communication; (c) Predator-Prey; (d) Covert Communication.

## 5\. Experiments

Figure 2 shows the environments (games) used to evaluate the performances of the proposed and baseline algorithms. The environments are those used in previous studies Lowe et al. (2017); Li et al. (2019) and those designed to make MARL training more difficult. We assume that the agents in the environments observe the relative positions and velocities of all agents. In Figure 2, games (a) and (b) correspond to cooperative environments with only agents in a cooperative relationship, and games (c) and (d) correspond to mixed cooperative-competitive environments with both cooperative and competitive agents. In games (b) and (d), the agents need to communicate with other agents depending on the purpose of the games. For the experiments in these games, we compare the performance of F2DDPG to the following baseline algorithms:

* • MADDPG Lowe et al. (2017) learns the critics and actors using only the actual action information in the CTDE framework.
* • M3DDPG Li et al. (2019) is based on MADDPG. Instead of using the actual action information, this algorithm uses the noisy actions of other agents to update the critics and actors. In particular, it computes adversarial noise for other agents' actions such that the noisy actions collectively minimize the target agent's critic. Owing to the use of adversarial noise, this algorithm is referred to as robust MARL. From the viewpoint of F2DDPG, M3DDPG can be interpreted as infusing competitive roles, as an information bias, into all the agents, regardless of their roles (cooperative or competitive) with respect to the target agent.
* • All Plus employs only cooperative biases in the actions of other agents; other agents are assumed to maximize the target agent's critic, regardless of their roles.
* • Random Sign employs random biases in the actions of other agents: randomly selected agents are assumed to maximize the target agent's critic, while the remaining agents are assumed to minimize it, regardless of their roles.

The proposed F2DDPG and the baseline algorithms differ in the type of information bias infused into other agents' actions. While the baseline algorithms either use no information bias (MADDPG) or apply a fixed information bias regardless of the relationships among agents, F2DDPG is the only algorithm that aligns the information bias with the actual roles of agents. Note that we exclude the All Plus algorithm from the baselines in cooperative environments, such as games (a) and (b), because it applies the same cooperative biases as F2DDPG for all the cooperative agents. All reported performances are obtained from policies trained with four different random seeds.

### 5.1. Cooperative Navigation

Cooperative navigation is a cooperative environment, as shown in Figure 2 (a), in which three cooperative agents (red circles) must reach three landmarks (blue crosses) without colliding with each other, while covering all of the landmarks. Every episode starts with randomly initialized positions for the agents and landmarks. The agents are collectively rewarded based on the distance of the nearest agent to each landmark and penalized for collisions with other agents during navigation. Thus, each agent must cooperate to occupy a distinct landmark without colliding with other agents. As shown in Figure 3 (a), F2DDPG outperforms the baseline algorithms with faster training and higher converged rewards. We attribute the performance improvement of F2DDPG to the optimal use of information bias corresponding to the agents' roles. F2DDPG constantly updates the critic and actor of each agent using biased joint actions, which induces agents' policies to recommend more coherently exploratory actions, especially toward cooperation. We believe that such coherent exploration helps MARL policy learning more than random exploration. In contrast, M3DDPG is slow to induce cooperation among the three agents because it learns only the adversarial action information of other agents, even though the agents are in a cooperative relationship.
Training with adversarial action information may enable robust policy learning; however, it appears unhelpful for inducing cooperation among the agents. The Random Sign algorithm randomly injects cooperative or competitive biases into the biased actions at every step of training, which amounts to random noise in training. This random noise allows the algorithm to train faster than MADDPG and M3DDPG; however, it still trains more slowly than F2DDPG.

Figure 5. Actual actions (red arrows) and biases (black arrows) for the centered predator as training proceeds.

### 5.2. Cooperative Communication

Cooperative communication is a cooperative environment, as shown in Figure 2 (b), with two cooperative agents, a speaker and a listener (red circles), and three landmarks of differing colors. The listener must navigate to a landmark of a particular color. The listener observes the relative positions and colors of the landmarks but does not know which landmark it must navigate to. In contrast, the speaker observes the correct color of the landmark to which the listener must navigate and broadcasts a message (communication vector) at each time step, which is observed by the listener. For each episode, the positions of the listener and landmarks are randomly initialized. The listener and speaker are rewarded based on the listener's distance to the correct landmark. Thus, the speaker must learn to generate a message that optimally guides the listener to the correct landmark, and simultaneously, the listener must learn to decipher the message transmitted by the speaker and navigate to the correct landmark. As shown in Figure 3 (b), F2DDPG outperforms the baseline algorithms with faster training and higher rewards. Additionally, M3DDPG performs better than MADDPG. Moreover, the Random Sign algorithm learns faster than MADDPG, possibly because of the enhanced exploration induced by the random noise added to the observed actions.

Table 1. Fraction between the number of successful episodes and the total number of testing episodes in predator-prey games.

Algorithm | 3 vs. 1, $N_{c}\geq 1$ | 3 vs. 1, $N_{c}\geq 3$ | 5 vs. 3, $N_{c}\geq 1$ | 5 vs. 3, $N_{c}\geq 3$ | 7 vs. 3, $N_{c}\geq 1$ | 7 vs. 3, $N_{c}\geq 3$
---|---|---|---|---|---|---
MADDPG | 2.75 $\pm$ 0.83 | 0.25 $\pm$ 0.43 | 14.50 $\pm$ 1.50 | 1.25 $\pm$ 1.09 | 56.25 $\pm$ 6.46 | 17.00 $\pm$ 6.09
M3DDPG | 3.75 $\pm$ 0.43 | 0.25 $\pm$ 0.43 | 14.25 $\pm$ 0.43 | 1.25 $\pm$ 0.83 | 55.75 $\pm$ 6.17 | 13.25 $\pm$ 3.63
All Plus | 3.75 $\pm$ 1.09 | 0.25 $\pm$ 0.43 | 15.75 $\pm$ 2.28 | 1.00 $\pm$ 1.00 | 62.75 $\pm$ 7.39 | 14.24 $\pm$ 4.14
Random Sign | 1.50 $\pm$ 0.50 | 0.00 $\pm$ 0.00 | 18.75 $\pm$ 6.98 | 1.75 $\pm$ 0.83 | 62.50 $\pm$ 11.71 | 19.25 $\pm$ 6.37
F2DDPG | 3.25 $\pm$ 0.43 | 0.25 $\pm$ 0.43 | 32.50 $\pm$ 11.39 | 4.50 $\pm$ 2.29 | 71.75 $\pm$ 7.52 | 28.75 $\pm$ 6.17

### 5.3. Predator-Prey

Predator-prey is a mixed cooperative-competitive environment, as shown in Figure 2 (c), in which five predator agents (red circles) seek to capture three prey agents (blue squares and a green diamond); this configuration is called 5 vs. 3 predator-prey. If there are $m$ predators and $n$ prey, it is denoted as $m$ vs. $n$ predator-prey. Because prey can move at a higher speed and with greater acceleration than predators, predators must cooperate to capture the prey. In particular, the green prey (green diamond) can move faster and with greater acceleration than the blue prey (blue squares); the green and blue prey are factors of 3 and 1.3 faster than the predators, respectively.
The positions of the predators and prey are randomly initialized for every episode. Each time the predators collide with (_i.e.,_ capture) the prey, the predators are collectively rewarded, while the prey are penalized. When the predators capture the green prey, they are rewarded by a factor of 10 more than when they capture the blue prey. The predators can capture prey multiple times during an episode. In the predator-prey game, the prey are trained with MADDPG, while the predators are trained with F2DDPG and the baseline algorithms.

Figure 4. Results in 5 vs. 3 predator-prey: (a) rewards of the green prey; (b) cosine similarity between actual actions and biases.

As shown in Figure 3 (c), F2DDPG outperforms the baseline algorithms with higher rewards. We examine why F2DDPG achieves higher rewards by investigating the reward lost by the green prey, shown in Figure 4 (a). This figure compares the reward achieved by the green prey (trained with MADDPG) when playing against the predators trained with F2DDPG and the baseline algorithms; when the reward of the prey is lower, it is more likely to be captured by the predators. As shown in the figure, the reward of the green prey captured by the predators trained with F2DDPG is lower than that of the prey captured by the predators trained with the baseline algorithms. This indicates that the high rewards of the predators trained with F2DDPG (shown in Figure 3 (c)) result from capturing the green prey more frequently. Thus, it can be concluded that the predators trained with F2DDPG are more capable of cooperating strategically to capture the most rewarding, but fastest, prey than the predators trained with the baseline algorithms. We also verify that the infused bias induces desirable behaviors from the agents. Figure 4 (b) shows how the cosine similarity between the actual action $\mathbf{a}^{A}_{-i}$ and the bias $\nabla_{\mathbf{a}^{A}_{-i}}Q_{i}^{\mu}$ in the biased action in Equation 5 varies as the F2DDPG training proceeds. The cosine similarity between $\mathbf{a}^{A}_{-i}$ and $\nabla_{\mathbf{a}^{A}_{-i}}Q_{i}^{\mu}$ is calculated as $\frac{\langle a^{A}_{-i},\nabla_{a^{A}_{-i}}Q_{i}^{\mu}\rangle}{\lVert a^{A}_{-i}\rVert_{2}\lVert\nabla_{a^{A}_{-i}}Q_{i}^{\mu}\rVert_{2}}$ for all agents in $A(i)$, where $\langle\cdot,\cdot\rangle$ denotes the inner product. This similarity measures how well the directions of the actual action and the bias align: it ranges from -1 (exactly opposite) to 1 (exactly the same), with 0 indicating orthogonality. As shown in Figure 4 (b), the similarity increases from 0 to close to 1 as the training proceeds, indicating that the actual and biased actions (the desirable actions designed by the biases) become more similar over training. Therefore, the agents eventually execute the actions designed by the biases, and accordingly, the biases between the actual and biased actions vanish.
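The diagnostic in Figure 4 (b) can be reproduced in a few lines; this sketch assumes the same hypothetical `q_net` interface as in Section 4.

```python
import torch
import torch.nn.functional as F

def bias_alignment(q_net, obs, a_i, a_allies, a_enemies):
    """Mean cosine similarity between the actual ally actions and the
    critic's gradient (the injected bias), taken along the action dimension."""
    a_allies = a_allies.detach().requires_grad_(True)
    q = q_net(obs, a_i, a_allies, a_enemies).sum()
    g = torch.autograd.grad(q, a_allies)[0]
    return F.cosine_similarity(a_allies, g, dim=-1).mean()
```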
Figure 6. Actual actions (red arrows) and biases (black arrows) for the predator capturing the green prey after the policies are learned with F2DDPG.

To examine a particular fixed state in which a centered predator captures the prey, we investigate how the centered predator's trained policy induces the other agents' (predators') actions. In Figure 5, the figures on the left show the other agents' biased joint action (black) induced at an early training stage, and those on the right show how the other agents' actual joint action (red) concurrently and coherently changes as the training proceeds. Two observations stand out: (1) the other agents jointly head toward the prey, which is a strategic movement to capture the prey, and (2) the actual (red) and biased (black) actions become extremely similar, which demonstrates that the biases vanish as the training proceeds and the agents actually behave as intended by the biases. In addition, we execute the policies trained with F2DDPG in the predator-prey game from a random state. Figure 6 shows four snapshots of the predator-prey game with a random state after the policies are trained with F2DDPG. In the figure, the predators tend to gather toward the predator capturing the green prey and attempt to capture it together. The differences between the actual (red) and biased (black) actions are negligible, meaning that the converged policies no longer carry the biases. Table 1 compares the fraction between the number of successful episodes and the total number of testing episodes (100 episodes). We define two types of success: $N_{c}\geq 1$ is the case in which the predators capture the green prey at least once, and $N_{c}\geq 3$ is the case in which the predators capture the green prey at least three times. To validate the scalability of the proposed algorithm, we compare these performance measures for F2DDPG and the baseline algorithms on different sizes of predator-prey games. In 3 vs. 1, the performances are not significantly differentiated because the number of predators is insufficient to capture the faster green prey, even if the three predators cooperate. However, as the number of predators increases, the predators capture the green prey through cooperation, and F2DDPG outperforms the baseline algorithms in 5 vs. 3, as shown in the table. F2DDPG also outperforms the baseline algorithms in 7 vs. 3. Thus, it can be concluded that the proposed method improves MARL training, even in cases with many agents, by utilizing biased action information.

### 5.4. Covert Communication

Covert communication is a mixed cooperative-competitive environment, as shown in Figure 2 (d), with two cooperative agents, a speaker and a listener (red circles), and an adversary (green circle). The speaker must encode a message into a communication vector using a randomly generated key. The listener must reconstruct the message from the communication vector using the key. However, the adversary also observes the communication vector and attempts to reconstruct the message without the key. The speaker and listener are rewarded based on the listener's reconstruction and penalized based on the adversary's reconstruction. The adversary is rewarded based on its own reconstruction. Therefore, the speaker must encrypt the message as a communication vector such that the adversary cannot decrypt it, and the listener must decrypt the communication vector back into the message. In covert communication, the adversary is trained with MADDPG for comparison, while the speaker and listener are trained with F2DDPG and the baseline algorithms. As shown in Figure 3 (d), F2DDPG outperforms the baseline algorithms with higher rewards. At the early stage of training, M3DDPG, Random Sign, and F2DDPG outperform MADDPG and All Plus.
However, as the training proceeds and the adversary becomes more intelligent, the rewards of M3DDPG and Random Sign decrease, while F2DDPG still maintains high rewards.

## 6\. Conclusions

We proposed F2DDPG, an algorithm that boosts MARL training using biased action information of other agents based on a friend-or-foe concept. Empirically, we demonstrated that F2DDPG outperforms existing algorithms in several mixed cooperative-competitive environments. We also demonstrated that F2DDPG learns the agents' policies such that their actions become similar to the biased actions and that the biases decrease as the learning proceeds.

## References

* Andrychowicz et al. (2017) Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. 2017. Hindsight Experience Replay. In _Advances in Neural Information Processing Systems_. 5048–5058.
* Böhmer et al. (2019) Wendelin Böhmer, Tabish Rashid, and Shimon Whiteson. 2019. Exploration with unreliable intrinsic reward in multi-agent reinforcement learning. _arXiv preprint arXiv:1906.02138_ (2019).
* Hailu and Sommer (1999) G Hailu and G Sommer. 1999. On amount and quality of bias in reinforcement learning. In _IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics_, Vol. 2. IEEE, 728–733.
* Hu et al. (1998) Junling Hu, Michael P Wellman, et al. 1998. Multiagent reinforcement learning: theoretical framework and an algorithm. In _Proceedings of the Fifteenth International Conference on Machine Learning_, Vol. 98. Citeseer, 242–250.
* Iqbal and Sha (2019a) Shariq Iqbal and Fei Sha. 2019a. Actor-attention-critic for multi-agent reinforcement learning. In _International Conference on Machine Learning_, Vol. 97. PMLR, 2961–2970.
* Iqbal and Sha (2019b) Shariq Iqbal and Fei Sha. 2019b. Coordinated Exploration via Intrinsic Rewards for Multi-Agent Reinforcement Learning. _arXiv preprint arXiv:1905.12127_ (2019).
* Lauer and Riedmiller (2000) Martin Lauer and Martin Riedmiller. 2000. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In _Proceedings of the Seventeenth International Conference on Machine Learning_. Citeseer.
* Li et al. (2019) Shihui Li, Yi Wu, Xinyue Cui, Honghua Dong, Fei Fang, and Stuart Russell. 2019. Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient. In _Proceedings of the AAAI Conference on Artificial Intelligence_, Vol. 33. 4213–4220.
* Lillicrap et al. (2016) Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2016. Continuous control with deep reinforcement learning. _International Conference on Learning Representations_ (2016).
* Littman (1994) Michael L Littman. 1994. Markov games as a framework for multi-agent reinforcement learning. In _Machine learning proceedings 1994_. Elsevier, 157–163.
* Littman (2001) Michael L. Littman. 2001. Friend-or-Foe Q-Learning in General-Sum Games. In _Proceedings of the Eighteenth International Conference on Machine Learning_, Vol. 1. Morgan Kaufmann Publishers Inc., 322–328.
* Lowe et al. (2017) Ryan Lowe, Yi I Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. In _Advances in Neural Information Processing Systems_. 6379–6390.
* Rauber et al. (2019) Paulo Rauber, Avinash Ummadisingu, Filipe Mutz, and Juergen Schmidhuber. 2019. Hindsight policy gradients. _International Conference on Learning Representations_ (2019).
* Roy et al. (2019) Julien Roy, Paul Barde, Félix G Harvey, Derek Nowrouzezahrai, and Christopher Pal. 2019. Promoting Coordination through Policy Regularization in Multi-Agent Deep Reinforcement Learning. _arXiv preprint arXiv:1908.02269_ (2019).
* Ryu et al. (2020) Heechang Ryu, Hayong Shin, and Jinkyoo Park. 2020. Multi-Agent Actor-Critic with Hierarchical Graph Attention Network. In _Proceedings of the AAAI Conference on Artificial Intelligence_, Vol. 34. 7236–7243.
* Silver et al. (2014) David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. 2014. Deterministic Policy Gradient Algorithms. In _International Conference on Machine Learning_, Vol. 32. PMLR, 387–395.
* Sutton and Barto (2018) Richard S Sutton and Andrew G Barto. 2018. _Reinforcement learning: An introduction_. MIT press.
* Wang et al. (2020) Tonghan Wang, Jianhao Wang, Yi Wu, and Chongjie Zhang. 2020. Influence-Based Multi-Agent Exploration. _International Conference on Learning Representations_ (2020).

## Supplementary Material

## Details about Environments

Table 2. Classification of the experimental environments.

Environment | Cooperative? | Mixed? | Communication?
---|---|---|---
Cooperative Navi. | ✓ | |
Cooperative Comm. | ✓ | | ✓
Predator-Prey | | ✓ |
Covert Comm. | | ✓ | ✓

Table 2 categorizes the experimental environments into cooperative environments and mixed cooperative-competitive environments and indicates whether the environments require communication between agents. We assume that the agents in the environments observe the relative positions and velocities of all agents. The positions of the agents and landmarks in all the environments are randomly initialized within $[-1,1]^{2}$ for every episode.

### Cooperative Navigation

Cooperative navigation ($N=3$) has three agents and three landmarks.

### Cooperative Communication

Cooperative communication ($N=2$) has two agents, a speaker and a listener, and three landmarks. The speaker outputs a communication vector, a tensor of size three, at each timestep, and the listener observes the communication vector.

### Predator-Prey

Predator-prey ($N=4,8,10$) has predator agents and prey agents. The environment imposes a penalty on prey when they go beyond the boundary of the environment.

### Covert Communication

Covert communication ($N=3$) has three agents: a speaker, a listener, and an adversary. The speaker observes a message vector and a key vector, both randomly generated, and outputs a communication vector. The listener observes the key vector and the communication vector. The adversary observes only the communication vector. The message, key, and communication vectors are tensors of size four.

## Hyper-Parameters for Experiments

We use 120,000 training episodes with 25 timesteps each (3 million timesteps in total) for training the proposed and baseline algorithms in all the environments. All code used in the experiments will be released.

### Hyper-Parameters of F2DDPG

The hyper-parameters of F2DDPG used in the experiments are summarized in Table 3. The output layer of the policy network of F2DDPG provides the action as a tensor of size five, corresponding to hold, right, left, up, and down.

Table 3. Hyper-parameters of F2DDPG.
F2DDPG Hyper-Parameter | Value
---|---
# Policy network MLP units | (64, 64)
# $Q$-network MLP units | (64, 64)
Network parameter initialization | Xavier uniform
Nonlinear activation | ReLU
Policy network learning rate | $10^{-2}$
$Q$-network learning rate | $10^{-2}$
$\tau$ for updating target networks | $10^{-2}$
$\gamma$ | 0.95
Replay buffer size | $10^{6}$
Mini-batch size | 1024
Optimizer | Adam
$\delta^{A}$ | $10^{-5}$
$\delta^{E}$ | $10^{-3}$

The hyper-parameters in the table are also used for the All Plus and Random Sign algorithms in the experiments. For MADDPG and M3DDPG, we use the hyper-parameters reported to yield the highest performance in previous studies; they are similar to the hyper-parameters of F2DDPG.
# Kimera: from SLAM to Spatial Perception with 3D Dynamic Scene Graphs

Antoni Rosinol Andrew Violette Marcus Abate Nathan Hughes Yun Chang Jingnan Shi Arjun Gupta, Luca Carlone Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, USA. This work was partially funded by ARL DCIST CRA W911NF-17-2-0181, ONR RAIDER N00014-18-1-2828, MIT Lincoln Laboratory, an Amazon Research Award, and "la Caixa" Foundation (ID 100010434), LCF/BQ/AA18/11680088 (A. Rosinol). <EMAIL_ADDRESS>

###### Abstract

Humans are able to form a complex mental model of the environment they move in. This mental model captures geometric and semantic aspects of the scene, describes the environment at multiple levels of abstraction (_e.g.,_ objects, rooms, buildings), and includes static and dynamic entities and their relations (_e.g.,_ a person is in a room at a given time). In contrast, current robots' internal representations still provide a partial and fragmented understanding of the environment, either in the form of a sparse or dense set of geometric primitives (_e.g.,_ points, lines, planes, voxels) or as a collection of objects. This paper attempts to reduce the gap between robot and human perception by introducing a novel representation, a _3D Dynamic Scene Graph_ (DSG), that seamlessly captures metric and semantic aspects of a dynamic environment. A DSG is a layered graph where nodes represent spatial concepts at different levels of abstraction, and edges represent spatio-temporal relations among nodes. Our second contribution is _Kimera_, the first fully automatic method to build a DSG from visual-inertial data. Kimera includes accurate algorithms for visual-inertial SLAM, metric-semantic 3D reconstruction, object localization, human pose and shape estimation, and scene parsing. Our third contribution is a comprehensive evaluation of Kimera in real-life datasets and photo-realistic simulations, including a newly released dataset, uHumans2, which simulates a collection of crowded indoor and outdoor scenes. Our evaluation shows that Kimera achieves competitive performance in visual-inertial SLAM, estimates an accurate 3D metric-semantic mesh model in real-time, and builds a DSG of a complex indoor environment with tens of objects and humans in minutes. Our final contribution is to showcase how to use a DSG for real-time hierarchical semantic path-planning. The core modules in Kimera have been released open source.

Accepted for publication at IJRR 2021, please cite as follows: Antoni Rosinol, Andrew Violette, Marcus Abate, Nathan Hughes, Yun Chang, Jingnan Shi, Arjun Gupta, Luca Carlone, "Kimera: from SLAM to Spatial Perception with 3D Dynamic Scene Graphs," The International Journal of Robotics Research, 2021.

## Supplementary Material

Code: https://github.com/MIT-SPARK/Kimera

Video 1: https://youtu.be/-5XxXRABXJs

Video 2: https://youtu.be/SWbofjhyPzI

## 1 Introduction

High-level scene understanding is a prerequisite for safe and long-term autonomous operation of robots and autonomous vehicles, and for effective human-robot interaction. The next generations of robots must be able to understand and execute high-level instructions, such as "search for survivors on the second floor" or "go and pick up the grocery bag in the kitchen". They must be able to plan and act over long distances and extended time horizons to support lifelong operation.
Moreover, they need a holistic understanding of the scene that allows reasoning about inconsistencies, causal relations, and occluded objects. As humans, we perform all these operations effortlessly: we understand high-level instructions, plan over long distances (_e.g.,_ plan a trip from Boston to Rome), and perform advanced reasoning about the environment. For instance, as humans, we can easily infer that if the car preceding us suddenly stops in the middle of the road in proximity to a pedestrian crossing, a pedestrian is likely to be crossing the street even if occluded by the car in front of us. This is in stark contrast with today's robot capabilities: robots are often issued geometric commands (_e.g.,_ "reach coordinates XYZ"), do not have suitable representations (nor inference algorithms) to support decision making at multiple levels of abstraction, and have no notion of causality or high-level reasoning. High-level understanding of 3D dynamic scenes involves three key ingredients: (i) understanding the geometry, semantics, and physics of the scene, (ii) representing the scene at multiple levels of abstraction, and (iii) capturing spatio-temporal relations among entities (objects, structure, humans). We discuss the importance of each aspect and highlight shortcomings of current methods below. The first ingredient, _metric-semantic understanding_, is the capability of grounding semantic concepts (_e.g.,_ survivor, grocery bag, kitchen) into a spatial representation (_i.e.,_ a metric map). Geometric information is critical for robots to navigate safely and to manipulate objects, while semantic information provides an ideal level of abstraction for the robot to understand and execute human instructions (_e.g.,_ "bring me a cup of coffee") and to provide humans with models of the environment that are easy to understand. Despite the unprecedented progress in geometric reconstruction (_e.g.,_ SLAM (Cadena et al., 2016), Structure from Motion (Enqvist et al., 2011), and Multi-View Stereo (Schöps et al., 2017)) and deep-learning-based semantic segmentation (_e.g.,_ (Garcia-Garcia et al., 2017; Krizhevsky et al., 2012; Redmon and Farhadi, 2017; Ren et al., 2015; He et al., 2017; Hu et al., 2017; Badrinarayanan et al., 2017)), research in these two fields has traditionally proceeded in isolation, although there is recent and growing research at the intersection of these areas (Bao and Savarese, 2011; Cadena et al., 2016; Bowman et al., 2017; Hackel et al., 2017; Grinvald et al., 2019; Zheng et al., 2019; Davison, 2018). The second ingredient is the capability of providing an _actionable understanding of the scene at multiple levels of abstraction_. The need for abstractions is mostly dictated by computation and communication constraints. As humans, when planning a long trip, we reason in terms of cities or airports, since that is more (computationally) convenient than reasoning over Cartesian coordinates. Similarly, when asked for directions in a building, we find it more convenient to list corridors, rooms, and floors, rather than drawing a metrically accurate path to follow. Similarly, robots break down the complexity of decision making by planning at multiple levels of abstraction, from high-level task planning, to motion planning and trajectory optimization, to low-level control and obstacle avoidance, where each abstraction trades off model fidelity for computational efficiency.
Supporting hierarchical decision making and planning demands robot perception to be capable of building a _hierarchy of consistent abstractions_ to feed task planning, motion planning, and reactive control. Early work on map representation in robotics, _e.g.,_ (Kuipers, 2000, 1978; Chatila and Laumond, 1985; Vasudevan et al., 2006; Galindo et al., 2005; Zender et al., 2008), investigated hierarchical representations but mostly in 2D and assuming static environments; moreover, these works were proposed before the "deep learning revolution", hence they could not afford advanced semantic understanding. On the other hand, the growing literature on metric-semantic mapping (Salas-Moreno et al., 2013; Bowman et al., 2017; Behley et al., 2019; Tateno et al., 2015; Rosinol et al., 2020a; Grinvald et al., 2019; McCormac et al., 2017) focuses on "flat" representations (object constellations, metric-semantic meshes or volumetric models) that are not hierarchical in nature. The third ingredient of high-level understanding is the capability of describing both _static and dynamic entities in the scene and reasoning on their relations_. Reasoning at the level of objects and their (geometric and physical) relations is again instrumental to parse high-level instructions (_e.g.,_ "pick up the glass on the table"). It is also crucial to guarantee safe operation: in many applications, from self-driving cars to collaborative robots on factory floors, identifying obstacles is not sufficient for safe and effective navigation/action, and it becomes crucial to capture the _dynamic_ entities in the scene (in particular, _humans_), and predict their behavior or intentions (Everett et al., 2018). Very recent work (Armeni et al., 2019; Kim et al., 2019) attempts to capture object relations through a rich representation, namely _3D Scene Graphs_. A scene graph is a data structure commonly used in computer graphics and gaming applications that consists of a graph where nodes represent entities in the scene and edges represent spatial or logical relationships among nodes. While the works (Armeni et al., 2019; Kim et al., 2019) pioneered the use of 3D scene graphs in robotics and vision (prior work in vision focused on 2D scene graphs defined in the image space (Choi et al., 2013; Zhao and Zhu, 2013a; Huang et al., 2018b; Jiang et al., 2018)), they have important drawbacks. Kim et al. (2019) only capture objects and miss multiple levels of abstraction. Armeni et al. (2019) provide a hierarchical model that is useful for visualization and knowledge organization, but does not capture _actionable_ information, such as traversability, which is key to robot navigation. Finally, neither Kim et al. (2019) nor Armeni et al. (2019) account for or model dynamic entities in the environment, which is crucial for robots moving in human-populated environments. Contributions. While the design and implementation of a robot perception system that effectively includes all these ingredients can only be the goal of a long-term research agenda, this paper provides the first step towards this goal, by proposing a novel and general representation of the environment, and practical algorithms to infer it from data. In particular, this paper provides four contributions. The first contribution (Section 2) is a unified representation for actionable spatial perception: a 3D Dynamic Scene Graph (DSG).
A DSG is a _layered_ directed graph where nodes represent _spatial concepts_ (_e.g.,_ objects, rooms, agents) and edges represent pairwise spatio-temporal relations. The graph is _layered_ , in that nodes are grouped into layers that correspond to different levels of abstraction of the scene (_i.e.,_ a DSG is a hierarchical representation). Our choice of nodes and edges in the DSG also captures _places_ and their connectivity, hence providing a strict generalization of the notion of topological maps (Ranganathan and Dellaert, 2004; Remolina and Kuipers, 2004) and making DSGs an _actionable_ representation for navigation and planning. Finally, edges in the DSG capture spatio-temporal relations and explicitly model dynamic entities in the scene, and in particular humans, for which we estimate both 3D poses over time (using a _pose graph_ model) and a dense mesh model. Our second contribution (Section 3) is Kimera, the first Spatial Perception Engine that builds a DSG from visual-inertial data collected by a robot. Kimera has two sets of modules: Kimera-Core and Kimera-DSG. Kimera-Core (Rosinol et al., 2020a) is in charge of the real-time metric-semantic reconstruction of the scene, and comprises the following modules: * • _Kimera-VIO_ (Section 3.1) is a visual-inertial odometry (VIO) module for fast and locally accurate 3D pose estimation (localization). * • _Kimera-Mesher_ (Section 3.2) reconstructs a fast local 3D mesh for collision avoidance. * • _Kimera-Semantics_ (Section 3.3) builds a global 3D mesh using a volumetric approach (Oleynikova et al., 2017), and semantically annotates the 3D mesh using 2D pixel-wise segmentation and 3D Bayesian updates. Kimera-Semantics uses the pose estimates from Kimera-VIO. * • _Kimera-PGMO_ (Pose Graph and Mesh Optimization, Section 3.4) enforces visual loop closures by simultaneously optimizing the pose graph describing the robot trajectory and Kimera-Semantics’s global metric-semantic mesh. This new module generalizes _Kimera-RPGO_ (Robust Pose Graph Optimization, Rosinol et al. (2020a)), which only optimizes the pose graph describing the robot trajectory. Like Kimera-RPGO, Kimera-PGMO includes a mechanism to reject outlying loop closures. Kimera-DSG is in charge of building the DSG of the scene and works on top of Kimera-Core. Kimera-DSG comprises the following modules: * • _Kimera-Humans_ (Section 3.5) reconstructs dense meshes of humans in the scene, and estimates their trajectories using a pose graph model. The dense meshes are parametrized using the _Skinned Multi-Person Linear Model_ (SMPL) by Loper et al. (2015). * • _Kimera-Objects_ (Section 3.6) estimates a bounding box for objects of unknown shape and fits a CAD model to objects of known shape in the metric-semantic mesh using TEASER++ by Yang et al. (2020). * • _Kimera-BuildingParser_ (Section 3.7) parses the metric-semantic mesh into a topological graph of places (_i.e.,_ obstacle-free locations), segments rooms, and identifies structures (_i.e.,_ walls, ceiling) enclosing the rooms. The notion of a Spatial Perception Engine generalizes SLAM, which becomes a module in Kimera, and augments it to capture a hierarchy of spatial concepts and their relations. Besides the novelty of many modules in Kimera (_e.g.,_ Kimera-PGMO, Kimera-Humans, Kimera-BuildingParser), our Spatial Perception Engine (i) is the first to construct a scene graph from sensor data (in contrast to Armeni et al. 
(2019) that assume an annotated mesh model to be given), (ii) provides a _lightweight and scalable_ CPU-based solution, and (iii) is robust to dynamic environments and incorrect place recognition. The core modules of Kimera have been released at https://github.com/MIT-SPARK/Kimera. Our third contribution (Section 4) is an extensive experimental evaluation and the release of a new photo-realistic dataset. We test Kimera in both real and simulated datasets, including the EuRoC dataset (Burri et al., 2016), and the uHumans dataset we released with (Rosinol et al., 2020b). In addition to these datasets, we release the uHumans2 dataset, which encompasses crowded indoor and outdoor scenes, including an apartment, an office building, a subway, and a residential neighborhood. Finally, we qualitatively evaluate Kimera on real datasets collected in an apartment and an office space. The evaluation shows that Kimera's modules (i) achieve competitive performance in visual-inertial SLAM, (ii) can reconstruct a metric-semantic mesh in real-time on an embedded CPU, (iii) can correctly deform a dense mesh to enforce loop closures, (iv) can accurately localize and track objects and humans, and (v) can correctly partition an indoor building into rooms, places, and structures. Our final contribution (Section 5) is to demonstrate potential queries that can be implemented on a DSG, including an example of hierarchical semantic path planning. In particular, we show how a robot can use a DSG to understand and execute high-level instructions, such as "reach the person near the sofa" (_i.e.,_ semantic path planning). We also demonstrate that a DSG allows computing path-planning queries in a fraction of the time taken by a planner using volumetric approaches, by taking advantage of the hierarchical nature of the DSG. We conclude the paper with an extensive literature review (Section 7) and a discussion of future work (Section 8). Novelty with respect to previous work (Rosinol et al., 2020a, b). This paper brings to maturity our previous work on Kimera (Rosinol et al., 2020a) (whose modules are now extended and included in Kimera-Core) and 3D Dynamic Scene Graphs (Rosinol et al., 2020b), and provides several novel contributions. First, we introduce a new loop closure mechanism that deforms the metric-semantic mesh (Kimera-PGMO), while the mesh in (Rosinol et al., 2020a) did not incorporate corrections resulting from loop closures. Second, we implement and test a semantic hierarchical path-planning algorithm on DSGs, which was only discussed in (Rosinol et al., 2020b). Third, we provide a more comprehensive evaluation, including our own real datasets and new simulated datasets (uHumans2). Moreover, we release this new simulated dataset (with 12 new scenes, including outdoor environments, going beyond the indoor evaluation of (Rosinol et al., 2020b)). Finally, we test Kimera on an NVIDIA TX2 computer and show it executes in real-time on embedded hardware.

---

Figure 1: (a) A 3D Dynamic Scene Graph (DSG) is a layered and hierarchical representation that abstracts a dense 3D model (_e.g.,_ a metric-semantic mesh) into higher-level _spatial concepts_ (_e.g.,_ objects, agents, places, rooms) and models their spatio-temporal relations (e.g., "agent A is in room B at time $t$").
Kimera is the first Spatial Perception Engine that reconstructs a DSG from visual-inertial data, and (a) segments places, structures (_e.g.,_ walls), and rooms, (b) is robust to extremely crowded environments, (c) tracks dense mesh models of human agents in real time, (d) estimates centroids and bounding boxes of objects of unknown shape, (e) estimates the 3D pose of objects for which a CAD model is given.

## 2 3D Dynamic Scene Graphs

A 3D _Dynamic Scene Graph_ (DSG, Figure 1) is an actionable spatial representation that captures the 3D geometry and semantics of a scene at different levels of abstraction, and models objects, places, structures, and agents and their relations. More formally, a DSG is a _layered directed graph_ where nodes represent _spatial concepts_ (_e.g.,_ objects, rooms, agents) and edges represent pairwise spatio-temporal relations (_e.g.,_ "agent A is in room B at time $t$"). Contrarily to _knowledge bases_ (Krishna, 1992), spatial concepts are semantic concepts that are _spatially grounded_ (in other words, each node in our DSG includes spatial coordinates and shape or bounding-box information as attributes). A DSG is a _layered_ graph, _i.e.,_ nodes are grouped into layers that correspond to different levels of abstraction. Every node has a unique ID. The DSG of a single-story indoor environment includes 5 layers (from low to high abstraction level): (i) Metric-Semantic Mesh, (ii) Objects and Agents, (iii) Places and Structures, (iv) Rooms, and (v) Building. We discuss each layer and the corresponding nodes and edges below.

### 2.1 Layer 1: Metric-Semantic Mesh

The lower layer of a DSG is a semantically annotated 3D mesh (bottom of Figure 1(a)). The nodes in this layer are 3D points (vertices of the mesh) and each node has the following attributes: (i) 3D position, (ii) normal, (iii) RGB color, and (iv) a panoptic semantic label (panoptic segmentation (Kirillov et al., 2019; Li et al., 2018a) segments both objects, _e.g.,_ chairs, tables, drawers, and structures, _e.g.,_ walls, ground, ceiling). Edges connecting triplets of points (_i.e.,_ a clique with 3 nodes) describe faces in the mesh and define the topology of the environment. Our metric-semantic mesh includes everything in the environment that is _static_, while for storage convenience we store meshes of dynamic objects in a separate structure (see "Agents" below).

Figure 2: Places and their connectivity shown as a graph. (a) Skeleton (places and topology) produced by (Oleynikova et al., 2018) (side view); (b) Room parsing produced by our approach (top-down view); (c) Zoomed-in view; red edges connect different rooms.

### 2.2 Layer 2: Objects and Agents

This layer contains two types of nodes: objects and agents (Figure 1(c-e)), whose main distinction is the fact that agents are time-varying entities, while objects are static (this distinction is made only for the sake of the presentation: the DSG provides a unified approach to model static and dynamic entities, since the latter only require storing pose information over time). Objects represent static elements in the environment that are not considered _structural_ (_i.e.,_ walls, floor, ceiling, pillars are considered _structure_ and are not modeled in this layer). Each object is a node, and node attributes include (i) a 3D object pose, (ii) a bounding box, and (iii) its semantic class (_e.g.,_ chair, desk).
While not investigated in this paper, we refer the reader to (Armeni et al., 2019) for a more comprehensive list of attributes, including materials and affordances. Edges between objects describe relations, such as co-visibility, relative size, distance, or contact ("the cup is on the desk"). Each object node is connected to the corresponding set of points belonging to the object in the Metric-Semantic Mesh. Moreover, each object is connected to the nearest reachable _place_ node (see Section 2.3). Agents represent dynamic entities in the environment, including humans. In general, there might be many types of dynamic entities (_e.g.,_ animals, vehicles, or bicycles in outdoor environments). In this paper, we focus on two classes: _humans_ and _robots_ (these classes can be considered instantiations of more general concepts: "rigid" agents, such as robots, for which we only need to keep track of a 3D pose, and "deformable" agents, such as humans, for which we also need to keep track of a time-varying shape). Our approach to tracking dynamic agents relies solely on defining which labels are considered to be dynamic in the semantic segmentations of the 2D images. Both human and robot nodes have three attributes: (i) a 3D pose graph describing their trajectory over time, (ii) a mesh model describing their (non-rigid) shape, and (iii) a semantic class (_i.e.,_ human, robot). A pose graph (Cadena et al., 2016) is a collection of time-stamped 3D poses where edges model pairwise relative measurements. The robot collecting the data is also modeled as an agent in this layer.

### 2.3 Layer 3: Places and Structures

This layer contains two types of nodes: places and structures. Intuitively, places are a model for the free space, while structures capture separators between different spaces. Places (Figure 2) correspond to positions in the free space, and edges between places represent traversability (in particular: presence of a straight-line path between places). Places and their connectivity form a _topological map_ (Ranganathan and Dellaert, 2004; Remolina and Kuipers, 2004) that can be used for path planning. Place attributes only include a 3D position, but can also include a semantic class (_e.g.,_ back or front of the room) and an obstacle-free bounding box around the place position. Each object and agent in Layer 2 is connected with the nearest place (for agents, the connection is for each time-stamped pose, since agents move from place to place). Places belonging to the same room are also connected to the same room node in Layer 4. Figure 2(b-c) shows a visualization with places color-coded by rooms. Structures (Figure 3) include nodes describing structural elements in the environment, _e.g.,_ walls, floor, ceiling, pillars. The notion of structure captures elements often called "stuff" in related work (Li et al., 2018a). Structure nodes' attributes are: (i) 3D pose, (ii) bounding box, and (iii) semantic class (_e.g.,_ walls, floor). Structures may have edges to the rooms they enclose. Structures may also have edges to objects in Layer 2, _e.g.,_ a "frame" (object) "is hung" (relation) on a "wall" (structure), or a "ceiling light is mounted on the ceiling".

Figure 3: Structures: exploded view of walls and floor (top). Segmented walls according to the room id (bottom).

### 2.4 Layer 4: Rooms

This layer includes nodes describing rooms, corridors, and halls.
Room nodes (Figure 2) have the following attributes: (i) 3D pose, (ii) bounding box, and (iii) semantic class (_e.g.,_ kitchen, dining room, corridor). Two rooms are connected by an edge if they are adjacent (_i.e.,_ there is a door connecting them). A room node has edges to the places (Layer 3) it contains (since each place is connected to nearby objects, the DSG also captures which object/agent is contained in each room). All rooms are connected to the building they belong to (Layer 5).

### 2.5 Layer 5: Building

Since we are considering a representation over a single building, there is a single _building node_ with the following attributes: (i) 3D pose, (ii) bounding box, and (iii) semantic class (_e.g.,_ office building, residential house). The building node has edges towards all rooms in the building.

### 2.6 Composition and Queries

_Why should we choose this set of nodes or edges rather than a different one?_ Clearly, the choice of nodes in the DSG is not unique and is task-dependent. Here we first motivate our choice of nodes in terms of _planning queries_ the DSG is designed for (see Remark 1 and the broader discussion in Section 5), and we then show that the representation is compositional, in the sense that it can be easily expanded to encompass more layers, nodes, and edges (Remark 2).

###### Remark 1 (Planning Queries).

The proposed DSG is designed with task and motion planning queries in mind. The semantic node attributes (_e.g.,_ semantic class) support planning from high-level specifications ("pick up the red cup from the table in the dining room"). The geometric node attributes (_e.g.,_ meshes, positions, bounding boxes) and the edges are used for motion planning. For instance, the places can be used as a topological graph for path planning, and the bounding boxes can be used for fast collision checking.

###### Remark 2 (Composition of DSGs).

A second reassuring property of a DSG is its compositionality: one can easily concatenate more layers at the top and the bottom of the DSG in Figure 1(a), and even add intermediate layers. For instance, in a multi-story building, we can include a "Level" layer between the "Building" and "Rooms" layers in Figure 1(a). Moreover, we can add further abstractions or layers at the top, for instance going from buildings to neighborhoods, and then to cities.
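As an illustration of the layered structure (hypothetical names and layout; not the released Kimera API), a minimal DSG-style container might look as follows.

```python
from dataclasses import dataclass, field

# Layer indices follow the paper: 1=mesh, 2=objects/agents,
# 3=places/structures, 4=rooms, 5=building.
@dataclass
class DsgNode:
    node_id: int
    layer: int
    semantic_class: str          # e.g., "chair", "kitchen", "human"
    position: tuple              # 3D position (x, y, z)
    attributes: dict = field(default_factory=dict)  # bounding box, pose graph, ...

@dataclass
class DsgEdge:
    source: int
    target: int
    relation: str                # e.g., "traversable", "contains", "adjacent"

class DynamicSceneGraph:
    def __init__(self):
        self.nodes, self.edges = {}, []

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def add_edge(self, src, dst, relation):
        self.edges.append(DsgEdge(src, dst, relation))

    def layer(self, k):
        """All nodes at abstraction level k (e.g., k=3 for places)."""
        return [n for n in self.nodes.values() if n.layer == k]

    def neighbors(self, node_id, relation=None):
        """Nodes one edge away, optionally filtered by relation label."""
        return [e.target for e in self.edges
                if e.source == node_id
                and (relation is None or e.relation == relation)]
```

With `relation="traversable"` restricted to layer-3 place nodes, `neighbors` yields exactly the topological map invoked in Remark 1 for path planning.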
---

Figure 4: Kimera-Core is an open-source library for real-time metric-semantic SLAM. It provides (a) visual-inertial state estimates at IMU rate (Kimera-VIO) and a globally consistent and outlier-robust trajectory estimate (Kimera-RPGO), computes (b) a low-latency local mesh of the scene (Kimera-Mesher), and builds (c) a semantically annotated 3D mesh (Kimera-Semantics), which can be optimized for global consistency (Kimera-PGMO) and accurately reflects the ground truth model (d).

Figure 5: Kimera-Core's architecture. Kimera-Core uses stereo images (or RGB-D) and IMU data as input (shown on the left) and outputs (a) pose estimates and (b-e) multiple metric-semantic reconstructions. Kimera-Core has four key modules: Kimera-VIO, Kimera-PGMO (alternatively, Kimera-RPGO), Kimera-Mesher, and Kimera-Semantics.

Figure 6: Kimera's architecture, with Kimera-Core and Kimera-DSG as sub-modules. Kimera-Core generates a globally consistent 3D metric-semantic mesh (Figure 5) that represents the first layer of the DSG and is further used by Kimera-DSG to build the subsequent layers. Kimera-DSG further comprises three key modules: Kimera-Objects, Kimera-Humans, and Kimera-BuildingParser (which generates layers 3 to 5).

## 3 Kimera: Spatial Perception Engine

This section describes Kimera, our _Spatial Perception Engine_, which populates the DSG nodes and edges using sensor data. The input to Kimera is streaming data from a stereo or RGB-D camera and an Inertial Measurement Unit (IMU). The output is a 3D DSG. In our current implementation, the metric-semantic mesh and the agent nodes are incrementally built from sensor data in real-time, while the remaining nodes (objects, places, structures, rooms) are automatically built at the end of the run. Kimera-Core. We use Kimera-Core (Rosinol et al., 2020a) to reconstruct a semantically annotated 3D mesh from visual-inertial data in real-time (Figure 4). Kimera-Core is open source and includes four main modules: (i) Kimera-VIO: a visual-inertial odometry module implementing IMU preintegration and fixed-lag smoothing (Forster et al., 2017); (ii) Kimera-PGMO: a robust pose graph and mesh optimizer that generalizes Kimera-RPGO (which only optimizes the pose graph); (iii) Kimera-Mesher: a per-frame and multi-frame mesher (Rosinol et al., 2019); and (iv) Kimera-Semantics: a volumetric approach to produce a semantically annotated mesh and a Euclidean Signed Distance Function (ESDF) based on Voxblox (Oleynikova et al., 2017). Kimera-Semantics uses a 2D semantic segmentation of the camera images to label the 3D mesh using Bayesian updates. We take the metric-semantic mesh produced by Kimera-Semantics and optimized by Kimera-PGMO as Layer 1 of the DSG in Figure 1(a). Figure 5 shows Kimera-Core's architecture. Kimera takes stereo frames and high-rate inertial measurements as input and returns (i) a highly accurate state estimate at IMU rate, (ii) a globally-consistent trajectory estimate, and (iii) multiple meshes of the environment, including a fast local mesh and a global semantically annotated mesh. Kimera-Core is heavily parallelized and uses five threads to accommodate inputs and outputs at different rates (_e.g.,_ IMU, frames, keyframes). Here we describe the architecture _by threads_, while the description of each module is given in the following sections. The first thread includes the Kimera-VIO front-end (Section 3.1), which takes stereo images and IMU data and outputs feature tracks and preintegrated IMU measurements. The front-end also publishes IMU-rate state estimates. The second thread runs Kimera-VIO's back-end and outputs optimized state estimates (most importantly, the robot's 3D pose). The third thread runs Kimera-Mesher (Section 3.2), which computes low-latency ($<\!20\text{ms}$) per-frame and multi-frame 3D meshes. These three threads allow creating the per-frame mesh in Figure 5(b) (which can also come with semantic labels as in Figure 5(c)), as well as the multi-frame mesh in Figure 5(d). The next two threads operate at slower rates and are designed to support low-frequency functionalities, such as path planning. The fourth thread includes Kimera-Semantics (Section 3.3), which uses a depth map, from RGB-D or dense stereo, and 2D semantic labels to obtain a metric-semantic mesh, using the pose estimates from Kimera-VIO. The last thread includes Kimera-PGMO (Section 3.4), which uses the detected loop closures, together with Kimera-VIO's pose estimates and Kimera-Semantics' 3D metric-semantic mesh, to estimate a globally consistent trajectory (Figure 5(a)) and 3D metric-semantic mesh (Figure 5(e)). As shown in Figure 5, Kimera-RPGO can be used instead of Kimera-PGMO if the optimized 3D metric-semantic mesh is not required.
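The multi-rate threading can be pictured with a small producer-consumer sketch (illustrative only; `track_features` and `smooth` are hypothetical stand-ins for the front-end and back-end described next).

```python
import queue
import threading

# Illustrative stand-ins for the real modules (hypothetical, trivial bodies).
def track_features(frame):
    return frame              # would return feature tracks + IMU preintegration

def smooth(tracks):
    return tracks             # would run fixed-lag smoothing (Section 3.1)

# Queues decouple the fast VIO path from slower modules (mesher, semantics,
# PGMO), so low-frequency processing never blocks IMU-rate estimation.
frames_q, keyframes_q, states_q = queue.Queue(), queue.Queue(), queue.Queue()

def vio_frontend():
    while True:
        keyframes_q.put(track_features(frames_q.get()))

def vio_backend():
    while True:
        states_q.put(smooth(keyframes_q.get()))  # consumed by the meshers

for worker in (vio_frontend, vio_backend):
    threading.Thread(target=worker, daemon=True).start()
```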
Kimera-DSG. Here we describe Kimera-DSG's architecture layer by layer, while the description of each module is given in the following sections. We use Kimera-DSG to build the DSG from the globally consistent 3D metric-semantic mesh generated by Kimera-Core, which represents the DSG's first layer, as shown in Figure 6. Then, Kimera-DSG builds the second layer containing objects and agents. For the objects, Kimera-Objects (Section 3.6) either estimates a bounding box for objects of unknown shape or fits a CAD model to objects of known shape using TEASER++ (Yang et al., 2020). Kimera-Humans (Section 3.5) reconstructs dense meshes of humans in the scene using GraphCMR (Kolotouros et al., 2019b) and estimates their trajectories using a pose graph model. Then, Kimera-BuildingParser (Section 3.7) generates the remaining three layers. It first generates layer 3 by parsing the metric-semantic mesh to identify structures (_i.e.,_ walls, ceiling) and further extracts a topological graph of places using (Oleynikova et al., 2018). Then, Kimera-BuildingParser generates layer 4 by segmenting layer 3 into rooms, and generates layer 5 by further segmenting layer 4 into buildings.

### 3.1 Kimera-VIO: Visual-Inertial Odometry

Kimera-VIO implements the keyframe-based maximum-a-posteriori visual-inertial estimator presented in (Forster et al., 2017). In our implementation, the estimator can perform either _full_ smoothing or _fixed-lag_ smoothing, depending on the specified time horizon; we typically use the latter to bound the estimation time. Kimera-VIO includes a (visual and inertial) front-end, which is in charge of processing the raw sensor data, and a back-end, which fuses the processed measurements to obtain an estimate of the state of the sensors (_i.e.,_ pose, velocity, and sensor biases). VIO Front-end. Our IMU front-end performs on-manifold preintegration (Forster et al., 2017) to obtain compact preintegrated measurements of the relative state between two consecutive keyframes from raw IMU data. The vision front-end detects Shi-Tomasi corners (Shi and Tomasi, 1994), tracks them across frames using the Lucas-Kanade tracker (Bouguet, 2000), finds left-right stereo matches, and performs geometric verification. We perform both mono(cular) verification using 5-point RANSAC (Nistér, 2004) and stereo verification using 3-point RANSAC (Horn, 1987); the code also offers the option to use the IMU rotation and perform mono and stereo verification using 2-point (Kneip et al., 2011) and 1-point RANSAC, respectively. Since our robot moves in crowded (dynamic) environments, we seed the Lucas-Kanade tracker with an initial guess (of the location of the corner being tracked) given by the rotational optical flow estimated from the IMU, similar to (Hwangbo et al., 2009). Moreover, we default to using 2-point (mono) and 1-point (stereo) RANSAC, which use the IMU rotation to prune outlier correspondences in the feature tracks. Feature detection, stereo matching, and geometric verification are executed at each _keyframe_, while we track features at intermediate _frames_.
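For illustration, the detection-and-tracking step maps to standard OpenCV primitives (a sketch, not Kimera's actual code; `K`, `p0`, and `p1` in the verification comment would be the camera intrinsics and matched points).

```python
import cv2

def track_keyframe_features(prev_gray, cur_gray, prev_pts=None,
                            max_corners=300):
    """Shi-Tomasi detection at keyframes and pyramidal Lucas-Kanade tracking
    at intermediate frames; an IMU-predicted initial guess could be supplied
    via nextPts together with the OPTFLOW_USE_INITIAL_FLOW flag."""
    if prev_pts is None:
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                           qualityLevel=0.01, minDistance=10)
    cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                     prev_pts, None)
    ok = status.ravel() == 1
    return prev_pts[ok], cur_pts[ok]

# Mono geometric verification (5-point RANSAC on the essential matrix):
# E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
```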
VIO Back-end. At each keyframe, preintegrated IMU and visual measurements are added to a fixed-lag smoother (a factor graph), which constitutes our VIO back-end. We use the preintegrated IMU model and the structureless vision model of (Forster et al., 2017). The factor graph is solved using iSAM2 (Kaess et al., 2012) in GTSAM (Dellaert, 2012). At each iSAM2 iteration, the structureless vision model estimates the 3D position of the observed features using DLT (Hartley and Zisserman, 2004) and analytically eliminates the corresponding 3D points from the VIO state (Carlone et al., 2014). Before elimination, degenerate points (_i.e.,_ points behind the camera or without enough parallax for triangulation) and outliers (_i.e.,_ points with large reprojection error) are removed, providing an extra robustness layer. Finally, states that fall out of the smoothing horizon are marginalized out using GTSAM.

### 3.2 Kimera-Mesher: 3D Mesh Reconstruction

Kimera-Mesher can quickly generate two types of 3D meshes: (i) a per-frame 3D mesh, and (ii) a multi-frame 3D mesh spanning the keyframes in the VIO fixed-lag smoother. Per-frame mesh. As in (Rosinol et al., 2019), we first perform a 2D Delaunay triangulation over the successfully tracked 2D features (generated by the VIO front-end) in the current keyframe. Then, we back-project the 2D Delaunay triangulation to generate a 3D mesh (Figure 5(b)), using the 3D point estimates from the VIO back-end. While the per-frame mesh is designed to provide low-latency obstacle detection, we also provide the option to semantically label the resulting mesh by texturing the mesh with 2D labels (Figure 5(c)). Multi-frame mesh. The multi-frame mesh fuses the per-frame meshes collected over the VIO receding horizon into a single mesh (Figure 5(d)). Both per-frame and multi-frame 3D meshes are encoded as a list of vertex positions, together with a list of triplets of vertex IDs describing the triangular faces. Assuming we already have a multi-frame mesh at time $t-1$, for each new per-frame 3D mesh that we generate (at time $t$), we loop over its vertices and triplets and add vertices and triplets that are in the per-frame mesh but are missing in the multi-frame one. Then we loop over the multi-frame mesh vertices and update their 3D positions according to the latest VIO back-end estimates. Finally, we remove vertices and triplets corresponding to old features observed outside the VIO time horizon. The result is an up-to-date 3D mesh spanning the keyframes in the current VIO time horizon. If planar surfaces are detected in the mesh, _regularity factors_ (Rosinol et al., 2019) are added to the VIO back-end, which results in a tight coupling between VIO and mesh regularization; see (Rosinol et al., 2019) for further details.
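The update just described is essentially set bookkeeping; the following sketch uses a simplified data layout (vertices keyed by feature ID, faces as ID-triplets; not Kimera-Mesher's actual structures).

```python
def update_multiframe_mesh(mf_vertices, mf_faces, pf_vertices, pf_faces,
                           latest_estimates, active_ids):
    """Multi-frame mesh update: (1) merge new per-frame geometry, (2) refresh
    vertex positions with the latest VIO back-end estimates, (3) drop
    vertices/faces whose features left the VIO time horizon."""
    # 1) add vertices/faces present in the per-frame mesh but missing here.
    for vid, xyz in pf_vertices.items():
        mf_vertices.setdefault(vid, xyz)
    mf_faces |= pf_faces
    # 2) update positions from the smoother's current estimates.
    for vid in mf_vertices:
        if vid in latest_estimates:
            mf_vertices[vid] = latest_estimates[vid]
    # 3) remove stale features and any face touching them.
    stale = set(mf_vertices) - active_ids
    for vid in stale:
        del mf_vertices[vid]
    mf_faces = {f for f in mf_faces if not (set(f) & stale)}
    return mf_vertices, mf_faces
```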
### 3.3 Kimera-Semantics: 3D Metric-Semantic Reconstruction

We adapt the bundled raycasting technique introduced by Oleynikova et al. (2017) to (i) build an accurate global 3D mesh (covering the entire trajectory), and (ii) semantically annotate the mesh.

Global mesh. Our implementation builds on Voxblox (Oleynikova et al., 2017) and uses a voxel-based Truncated Signed Distance Function (TSDF) model to filter out noise and extract the global mesh. At each keyframe, we obtain a 3D point cloud from depth maps computed using dense stereo (semi-global matching (H. Hirschmüller, 2008)), or directly from RGB-D if available. Then, we run bundled raycasting using Voxblox (Oleynikova et al., 2017). This process is repeated at each keyframe and produces a TSDF, from which a mesh is extracted using marching cubes (Lorensen and Cline, 1987).

Semantic annotation. Kimera-Semantics uses 2D semantically labeled images (produced at each keyframe) to semantically annotate the global mesh; the 2D semantic labels can be obtained using off-the-shelf tools for pixel-level 2D semantic segmentation, _e.g.,_ deep neural networks (Lang et al., 2019; Zhang et al., 2019a; Chen et al., 2017; Zhao et al., 2017; Yang et al., 2018; Paszke et al., 2016; Ren et al., 2015; He et al., 2017; Hu et al., 2017). In our real-life experiments, we use Mask-RCNN (He et al., 2017). Then, during the bundled raycasting, we also propagate the semantic labels. Using the 2D semantic segmentation, we attach a label to each 3D point produced by dense stereo. Then, for each bundle of rays in the bundled raycasting, we build a vector of label probabilities from the frequency of the observed labels in the bundle. We then propagate this information along the ray only within the TSDF truncation distance (_i.e.,_ near the surface) to spare computation. In other words, we spare the computational effort of updating probabilities for the "empty" label. While traversing the voxels along the ray, we use a Bayesian update to estimate the posterior label probabilities at each voxel, similar to (McCormac et al., 2017). After bundled semantic raycasting, each voxel has a vector of label probabilities, from which we extract the most likely label. The metric-semantic mesh is finally also extracted using marching cubes (Lorensen and Cline, 1987). The resulting mesh is significantly more accurate than the multi-frame mesh of Section 3.2, but it is slower to compute ($\approx 0.1\text{s}$, see Section 4.8).
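As a concrete illustration of the per-voxel Bayesian update, here is a minimal sketch assuming each voxel stores a probability vector over labels (function and variable names are hypothetical):

```python
import numpy as np

def update_voxel_labels(prior, bundle_labels, num_classes, eps=1e-6):
    """One Bayesian update of a voxel's label-probability vector.

    prior: current per-class probabilities of the voxel, shape (num_classes,).
    bundle_labels: integer labels of the 3D points in one ray bundle.
    """
    # Likelihood: frequency of each label observed in the bundle.
    counts = np.bincount(bundle_labels, minlength=num_classes).astype(float) + eps
    likelihood = counts / counts.sum()
    # Posterior is proportional to prior * likelihood, renormalized.
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Example: a voxel seen in two bundles, mostly labeled class 2 ("wall").
p = np.full(5, 0.2)  # uniform prior over 5 classes
p = update_voxel_labels(p, np.array([2, 2, 2, 0]), 5)
p = update_voxel_labels(p, np.array([2, 2, 1]), 5)
print(p.argmax())  # -> 2, the most likely label extracted for the mesh
```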
### 3.4 Kimera-PGMO: Pose Graph and Mesh Optimization with Loop Closures

The mesh from Kimera-Semantics is built from Kimera-VIO's pose estimates and therefore drifts over time. The loop closure module detects loop closures to correct the global trajectory and the mesh. The mesh is corrected via a deformation, since this is more scalable compared to rebuilding the mesh from scratch or using 'de-integration' (Dai et al., 2017). This is achieved via a novel simultaneous pose graph and mesh deformation approach, which utilizes an embedded deformation graph that jointly optimizes the environment mesh and the robot trajectory in a single run. The optimization is formulated as a factor graph in GTSAM. In the following, we review the individual components.

Loop Closure Detection. The loop closure detection relies on the DBoW2 library (Gálvez-López and Tardós, 2012) and uses a bag-of-words representation with ORB descriptors to quickly detect putative loop closures. For each putative loop closure, we reject outliers using mono 5-point RANSAC (Nistér, 2004) and stereo 3-point RANSAC (Horn, 1987) geometric verification, and pass the remaining loop closures to the outlier rejection and pose solver. Note that the resulting loop closures can still contain outliers due to perceptual aliasing (_e.g.,_ two identical rooms on different floors of a building). While most open-source SLAM algorithms, such as ORB-SLAM3 (Campos et al., 2021), VINS-Mono (Qin et al., 2018), and Basalt (Usenko et al., 2019), are overly cautious when accepting loop closures (for example, by fine-tuning DBoW2), we instead make our back-end robust to outliers, as we explain below.

Outlier Rejection. We filter out bad loop closures with a modern outlier rejection method, _Pairwise Consistent Measurement Set Maximization_ (PCM) (Mangelson et al., 2018), that we tailor to a single-robot and online setup. We store separately the odometry edges (produced by Kimera-VIO) and the loop closures (produced by the loop closure detection); each time a loop closure is detected, we select inliers by finding the largest set of consistent loop closures using a modified version of PCM.

Figure 7: Kimera-RPGO detects visual loop closures, rejects spurious loop closures, and estimates a globally consistent trajectory. In contrast to Kimera-PGMO, Kimera-RPGO does not optimize the 3D mesh.

The original PCM is designed for the multi-robot case and only checks that inter-robot loop closures are consistent. We developed an implementation of PCM that (i) adds an _odometry consistency check_ on the loop closures and (ii) _incrementally_ updates the set of consistent measurements to enable online operation. The odometry check verifies that each loop closure (_e.g.,_ $l_{1}$ in Figure 7) is consistent with the odometry (in red in the figure): in the absence of noise, the poses along the cycle formed by the odometry and the loop $l_{1}$ must compose to the identity. As in PCM, we flag as outliers those loops for which the error accumulated along the cycle is not consistent with the measurement noise, using a Chi-squared test. If a loop detected at the current time $t$ passes the odometry check, we test whether it is pairwise consistent with previous loop closures as in (Mangelson et al., 2018) (_e.g.,_ check whether loops $l_{1}$ and $l_{2}$ in Figure 7 are consistent with each other). While PCM (Mangelson et al., 2018) builds an adjacency matrix ${\bm{A}}\in{{\mathbb{R}}^{L\times L}}$ from scratch to keep track of pairwise-consistent loops (where $L$ is the number of detected loop closures), we enable online operation by building the matrix ${\bm{A}}$ incrementally. Each time a new loop is detected, we add a row and column to the matrix ${\bm{A}}$ and only test the new loop against the previous ones. Finally, we use the fast maximum clique implementation of (Pattabiraman et al., 2015) to compute the largest set of consistent loop closures. The set of consistent measurements is added to the pose graph (together with the odometry).
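A sketch of the incremental PCM bookkeeping described above. The odometry (Chi-squared) and pairwise checks are passed in as callables, and the exact maximum-clique solver of (Pattabiraman et al., 2015) is replaced here by networkx's clique enumeration purely for illustration:

```python
import numpy as np
import networkx as nx

class IncrementalPCM:
    """Grow the PCM adjacency matrix one loop closure at a time (illustrative)."""

    def __init__(self, odometry_check, pairwise_check):
        self.loops = []                        # putative loop closures kept so far
        self.A = np.zeros((0, 0), dtype=bool)  # pairwise-consistency adjacency matrix
        self.odometry_check = odometry_check   # Chi-squared test along the odometry cycle
        self.pairwise_check = pairwise_check   # PCM pairwise consistency test

    def add_loop(self, loop):
        # (i) Odometry consistency check: drop loops inconsistent with odometry.
        if not self.odometry_check(loop):
            return self.inliers()
        # (ii) Incremental update: add one row/column; test only against previous loops.
        n = len(self.loops)
        A = np.zeros((n + 1, n + 1), dtype=bool)
        A[:n, :n] = self.A
        for j, previous in enumerate(self.loops):
            A[n, j] = A[j, n] = self.pairwise_check(loop, previous)
        self.loops.append(loop)
        self.A = A
        return self.inliers()

    def inliers(self):
        # Largest set of mutually consistent loops = maximum clique of A.
        g = nx.from_numpy_array(self.A.astype(int))
        cliques = list(nx.find_cliques(g))
        best = max(cliques, key=len) if cliques else []
        return [self.loops[i] for i in best]
```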
Pose Graph and Mesh Optimization. When a loop closure passes the outlier rejection, either Kimera-RPGO optimizes the pose graph of the robot trajectory (Figure 7) or Kimera-PGMO simultaneously optimizes the mesh and the trajectory (Figure 8). The user can select the solver depending on computational considerations and the need for a consistent map; see Remark 3. Note that Kimera-PGMO is a strict generalization of Kimera-RPGO, as described in (Rosinol et al., 2020a). For Kimera-PGMO, the deformation of the mesh induced by the loop closure is based on deformation graphs (Sumner et al., 2007). In our approach, we create a unified deformation graph including a simplified mesh and a pose graph of robot poses. We simplify the mesh with an online vertex-clustering method by storing the vertices of the mesh in an octree data structure; as the mesh grows, the vertices in the same voxel of the octree are merged, and degenerate faces and edges are removed. The voxel size is tuned according to the environment or the dataset ($1$ to $4$ meters in our tests).

Figure 8: Kimera-PGMO's mesh deformation and pose graph optimization. (a) shows the received mesh and the pose graph with no loop closures detected yet. (b) shows the creation of the deformation graph, where the red vertices are the pose vertices with associated transform ${\bm{X}}_{i}$ and the purple vertices are the mesh vertices with associated transform ${\bm{M}}_{i}$. The green edges describe the connectivity of the simplified mesh, and are also the edges connecting the mesh vertices to each other in the deformation graph. The yellow edges connect the pose vertices to the mesh vertices based on visibility from the camera. (c) shows the deformation that happens when a loop closure (the blue edge) between pose graph node 4 and node 1 (${\bm{Z}}_{41}$) is added; ${\bm{X}}_{i}$ and ${\bm{M}}_{i}$ have been updated based on the optimization results. (d) shows the optimized mesh and pose graph.

We add two types of vertices to the deformation graph: mesh vertices and pose vertices. The mesh vertices correspond to the vertices of the simplified mesh and have an associated transformation ${\bm{M}}_{k}=\begin{bmatrix}{\bm{R}}^{M}_{k}&{\bm{t}}^{M}_{k}\\ \mathbf{0}^{\top}&1\end{bmatrix}$ for some mesh vertex $k$. When the mesh is not yet deformed, ${\bm{R}}^{M}_{k}={\mathbf{I}}_{3}$ and ${\bm{t}}^{M}_{k}={\bm{g}}_{k}$, where ${\bm{g}}_{k}$ is the original world-frame position of vertex $k$. Intuitively, these transformations describe the local deformations on the mesh: ${\bm{R}}^{M}_{k}$ is the local rotation centered at vertex $k$, while ${\bm{t}}^{M}_{k}-{\bm{g}}_{k}$ is the local translation. Mesh vertices are connected to each other using the edges of the simplified mesh (the green edges in Figure 8). We then add the nodes of the robot pose graph to the deformation graph as pose vertices with associated transformation ${\bm{X}}_{i}=\begin{bmatrix}{\bm{R}}^{X}_{i}&{\bm{t}}^{X}_{i}\\ \mathbf{0}^{\top}&1\end{bmatrix}$. When the mesh is not yet deformed, ${\bm{X}}_{i}$ is just the odometric pose for node $i$. The pose vertices are connected according to the original connectivity of the pose graph (the red edges in Figure 8). A pose vertex $i$ is connected to the mesh vertex $k$ if mesh vertex $k$ is visible from the camera associated with pose $i$. Figure 8 showcases the components and creation of the deformation graph, as well as the deformation that happens when a loop closure is detected.
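A minimal sketch of containers one could use for such a deformation graph; vertex initialization follows the undeformed conventions above, and all names are illustrative:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class DeformationGraph:
    """Vertices and edges of the unified deformation graph (illustrative)."""
    g: dict = field(default_factory=dict)          # vertex id -> undeformed world position g_i
    R: dict = field(default_factory=dict)          # vertex id -> rotation (R^M_k or R^X_i)
    t: dict = field(default_factory=dict)          # vertex id -> translation (t^M_k or t^X_i)
    mesh_edges: set = field(default_factory=set)   # green: simplified-mesh connectivity
    odom_edges: set = field(default_factory=set)   # red: pose-graph connectivity
    vis_edges: set = field(default_factory=set)    # yellow: pose -> visible mesh vertex

    def add_mesh_vertex(self, k, g_k):
        self.g[k] = np.asarray(g_k, float)
        self.R[k] = np.eye(3)          # undeformed: R^M_k = I
        self.t[k] = self.g[k].copy()   # undeformed: t^M_k = g_k

    def add_pose_vertex(self, i, R_odom, t_odom):
        self.g[i] = np.asarray(t_odom, float)
        self.R[i] = np.asarray(R_odom, float)      # undeformed: the odometric pose
        self.t[i] = np.asarray(t_odom, float)
```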
Based on the loop closure and odometry measurements ${\bm{Z}}_{ij}$, and given $n$ pose vertices and $m$ mesh vertices, the deformation graph optimization is:

$$\operatorname*{arg\,min}_{\substack{{\bm{X}}_{1},\ldots,{\bm{X}}_{n}\in\mathrm{SE}(3)\\ {\bm{M}}_{1},\ldots,{\bm{M}}_{m}\in\mathrm{SE}(3)}}\;\sum_{{\bm{Z}}_{ij}}||{\bm{X}}_{i}^{-1}{\bm{X}}_{j}-{\bm{Z}}_{ij}||^{2}_{{\bm{\Omega}}_{ij}}+\sum_{k=0}^{m}\sum_{l\in\mathcal{N}^{M}(k)}||{\bm{R}}^{M}_{k}({\bm{g}}_{l}-{\bm{g}}_{k})+{\bm{t}}^{M}_{k}-{\bm{t}}^{M}_{l}||^{2}_{{\bm{\Omega}}_{kl}}+\sum_{i=0}^{n}\sum_{l\in\mathcal{N}^{M}(i)}||{\bm{R}}^{X}_{i}\tilde{{\bm{g}}}_{il}+{\bm{t}}^{X}_{i}-{\bm{t}}^{M}_{l}||^{2}_{{\bm{\Omega}}_{il}} \quad (1)$$

where $\mathcal{N}^{M}(i)$ indicates the neighboring mesh vertices of a vertex $i$ in the deformation graph, ${\bm{g}}_{i}$ denotes the non-deformed (initial) world-frame position of mesh or pose vertex $i$ in the deformation graph, and $\tilde{{\bm{g}}}_{il}$ denotes the non-deformed position of vertex $l$ in the coordinate frame of the odometric pose of node $i$ (note that $\tilde{{\bm{g}}}_{il}\neq{\bm{g}}_{l}-{\bm{g}}_{i}$, except when the undeformed orientation of node $i$ is the identity). The first term in the optimization enforces the odometric and loop-closure measurements on the poses in the pose graph; these are the same as in standard pose graph optimization (Cadena et al., 2016). The second term is adapted from (Sumner et al., 2007) and enforces local rigidity between mesh vertices by minimizing the change in relative translation between connected mesh vertices (_i.e.,_ preserving the edge connecting two mesh vertices). The third term enforces local rigidity between a pose vertex $i$ and a mesh vertex $l$, again by minimizing the change in relative translation between the two vertices. Note that $||\cdot||_{{\bm{\Omega}}}$ indicates the weighted Frobenius norm

$$||{\bm{A}}||^{2}_{\bm{\Omega}}=\mathrm{tr}\left({\bm{A}}{\bm{\Omega}}{\bm{A}}^{\top}\right) \quad (2)$$

where ${\bm{\Omega}}$ takes the form $\begin{bmatrix}\omega_{R}{\bm{I}}_{3}&\mathbf{0}\\ \mathbf{0}^{\top}&\omega_{t}\end{bmatrix}$, with $\omega_{R}$ and $\omega_{t}$ respectively corresponding to the rotation and translation weights, as defined in (Briales and Gonzalez-Jimenez, 2017).
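To make the three terms concrete, the following sketch evaluates the objective in (1) for a given assignment of the vertex transformations, using the DeformationGraph containers sketched above; scalar weights stand in for the full ${\bm{\Omega}}$ matrices, which is an assumption for brevity:

```python
import numpy as np

def deformation_cost(dg, Z, omega_z=1.0, omega_g=1.0):
    """Evaluate the three terms of (1); dg is a DeformationGraph, and Z maps
    (i, j) -> (R_ij, t_ij) relative odometry/loop-closure measurements."""
    cost = 0.0
    # Term 1: relative-pose measurements between pose vertices.
    # X_i^{-1} X_j has rotation R_i^T R_j and translation R_i^T (t_j - t_i).
    for (i, j), (R_ij, t_ij) in Z.items():
        R_err = dg.R[i].T @ dg.R[j] - R_ij
        t_err = dg.R[i].T @ (dg.t[j] - dg.t[i]) - t_ij
        cost += omega_z * (np.sum(R_err**2) + np.sum(t_err**2))
    # Term 2: local rigidity between connected mesh vertices.
    for (k, l) in dg.mesh_edges:
        r = dg.R[k] @ (dg.g[l] - dg.g[k]) + dg.t[k] - dg.t[l]
        cost += omega_g * np.sum(r**2)
    # Term 3: local rigidity between a pose vertex i and a visible mesh vertex l.
    # Here g~_il is computed assuming the undeformed orientation of node i is
    # the identity, which is generally an approximation (see the note after (1)).
    for (i, l) in dg.vis_edges:
        g_tilde = dg.g[l] - dg.g[i]
        r = dg.R[i] @ g_tilde + dg.t[i] - dg.t[l]
        cost += omega_g * np.sum(r**2)
    return cost
```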
In the following, we show that the optimization in (1) can be formulated as an augmented pose graph optimization problem. Towards this goal, we define $\tilde{{\bm{R}}}_{i}$ as the initial odometric rotation of pose vertex $i$ in the deformation graph; we then define

$${\bm{G}}_{ij}=\begin{bmatrix}{\bm{I}}_{3}&{\bm{g}}_{j}-{\bm{g}}_{i}\\ 0&1\end{bmatrix} \quad (3)$$

$$\bar{{\bm{G}}}_{ij}=\begin{bmatrix}{{\bm{I}}_{3}}&\tilde{{\bm{R}}}^{-1}_{i}({\bm{g}}_{j}-{\bm{g}}_{i})\\ 0&1\end{bmatrix} \quad (4)$$

and rewrite the optimization as

$$\operatorname*{arg\,min}_{\substack{{\bm{X}}_{1},\ldots,{\bm{X}}_{n}\in\mathrm{SE}(3)\\ {\bm{M}}_{1},\ldots,{\bm{M}}_{m}\in\mathrm{SE}(3)}}\;\sum_{{\bm{Z}}_{ij}\in{\cal Z}}||{\bm{X}}_{i}^{-1}{\bm{X}}_{j}-{\bm{Z}}_{ij}||^{2}_{{\bm{\Omega}}_{{\bm{Z}}_{ij}}}+\sum_{{\bm{G}}_{ij}\in{\cal G}}||{\bm{M}}_{i}^{-1}{\bm{M}}_{j}-{\bm{G}}_{ij}||^{2}_{{\bm{\Omega}}_{{\bm{G}}_{ij}}}+\sum_{\bar{{\bm{G}}}_{ij}\in\bar{{\cal G}}}||{\bm{X}}_{i}^{-1}{\bm{M}}_{j}-\bar{{\bm{G}}}_{ij}||^{2}_{{\bm{\Omega}}_{\bar{{\bm{G}}}_{ij}}} \quad (5)$$

where ${\cal Z}$ is the set of all odometry and loop closure edges in the deformation graph (the red and blue edges in Figure 8), ${\cal G}$ is the set of edges from the simplified mesh (the green edges in Figure 8), and $\bar{{\cal G}}$ is the set of all edges connecting a pose vertex to a mesh vertex (the yellow edges in Figure 8). We only optimize over translation in the second and third terms; hence, the rotation weights $\omega_{R}$ for ${\bm{\Omega}}_{{\bm{G}}_{ij}}$ and ${\bm{\Omega}}_{\bar{{\bm{G}}}_{ij}}$ are set to zero. Taking this one step further, and observing that all the terms are based on the edges in the deformation graph, we can define ${\bm{T}}_{i}$ as the transformation of pose or mesh vertex $i$ and ${\bm{E}}_{ij}$ as the transformation corresponding to an edge in the deformation graph, which is of the form ${\bm{Z}}_{ij}$, ${\bm{G}}_{ij}$, or $\bar{{\bm{G}}}_{ij}$ depending on the type of edge. With this reparametrization, we are left with a pose graph optimization problem akin to the ones found in the literature (Rosen et al., 2018; Cadena et al., 2016):

$$\operatorname*{arg\,min}_{{\bm{T}}_{1},\ldots,{\bm{T}}_{n+m}\in\mathrm{SE}(3)}\sum_{{\bm{E}}_{ij}}||{\bm{T}}_{i}^{-1}{\bm{T}}_{j}-{\bm{E}}_{ij}||^{2}_{{\bm{\Omega}}_{ij}} \quad (6)$$

Note that the deformation graph approach, as originally presented in (Sumner et al., 2007), is equivalent to pose graph optimization only when rotations are used in place of affine transformations. The pose graph is then optimized using GTSAM. After the optimization, the positions of the vertices of the complete mesh are updated as affine transformations of the nodes in the deformation graph:

$$\widetilde{{\bm{v}}}_{i}=\sum_{j=1}^{m}w_{j}({\bm{v}}_{i})[{\bm{R}}^{M}_{j}({\bm{v}}_{i}-{\bm{g}}_{j})+{\bm{t}}^{M}_{j}] \quad (7)$$

where ${\bm{v}}_{i}$ indicates the original vertex positions and $\widetilde{{\bm{v}}}_{i}$ are the new deformed positions. The weights $w_{j}$ are defined as

$$w_{j}({\bm{v}}_{i})=\left(1-||{\bm{v}}_{i}-{\bm{g}}_{j}||/{d_{\text{max}}}\right)^{2} \quad (8)$$

and then normalized to sum to one. Here $d_{\text{max}}$ is the distance to the $k+1$ nearest node, as described in (Sumner et al., 2007) (we set $k=4$).
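A sketch of the vertex update in (7)-(8), assuming the deformation graph has more than $k$ nodes; array layouts are illustrative:

```python
import numpy as np

def deform_full_mesh(V, g, R, t, k=4):
    """Blend the k nearest deformation-graph nodes onto each full-mesh vertex.

    V: (N,3) original mesh vertex positions; g: (m,3) undeformed node positions;
    R: list of m 3x3 node rotations; t: (m,3) node translations. Requires m > k.
    """
    V_new = np.zeros_like(V)
    for i, v in enumerate(V):
        d = np.linalg.norm(g - v, axis=1)
        order = np.argsort(d)
        nearest, d_max = order[:k], d[order[k]]   # d_max: distance to the (k+1)-th node
        w = (1.0 - d[nearest] / d_max) ** 2       # eq. (8)
        w /= w.sum()                              # normalized to sum to one
        V_new[i] = sum(w[a] * (R[j] @ (v - g[j]) + t[j])   # eq. (7)
                       for a, j in enumerate(nearest))
    return V_new
```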
###### Remark 3 (Kimera-RPGO and Kimera-PGMO).

Kimera-RPGO is the robust pose graph optimizer we introduced in (Rosinol et al., 2020a), which uses a modified Pairwise Consistent Measurement Set Maximization (PCM) (Mangelson et al., 2018) approach to filter out incorrect loop closures caused by perceptual aliasing, and then optimizes the poses of the robot. Kimera-PGMO is a strict generalization of Kimera-RPGO. Both Kimera-RPGO and Kimera-PGMO perform loop closure detection and outlier rejection; the difference is that Kimera-PGMO additionally optimizes the mesh, at the extra computational cost of solving a larger pose graph. As we will see in the experimental section, Kimera-PGMO takes almost three times the time of Kimera-RPGO since it optimizes a larger graph. For instance, Kimera-PGMO optimizes 728 pose nodes and 1031 mesh nodes for the EuRoC V1_01 dataset, while Kimera-RPGO only optimizes over the 728 pose nodes.

### 3.5 Kimera-Humans: Human Shape Estimation and Robust Tracking

Robot Node. In our setup, the only robotic agent is the one collecting the data. Hence, Kimera-PGMO directly produces a time-stamped pose graph describing the poses of the robot at discrete time steps. To complete the robot node, we assume a CAD model of the robot to be given (only used for visualization).

Human Nodes. Contrary to related work that models dynamic targets as a point or a 3D pose (Chojnacki and Indelman, 2018; Azim and Aycard, 2012; Aldoma et al., 2013; Li et al., 2018b; Qiu et al., 2019), Kimera-Humans tracks a dense time-varying mesh model describing the shape of the human over time. Therefore, to create a human node, Kimera-Humans needs to detect and estimate the shape of a human in the camera images, and then track the human over time. Besides using them for tracking, we feed the human detections back to Kimera-Semantics, such that dynamic elements are not reconstructed in the 3D mesh. We achieve this by only using the free-space information when ray casting the depth for pixels labeled as humans, an approach we dubbed _dynamic masking_ (see results in Figure 16).

Figure 9: Human nodes: (a) Input camera image from Unity, (b) SMPL mesh detection and pose/shape estimation using (Kolotouros et al., 2019b), (c) Temporal tracking and consistency checking on the maximum joint displacement between detections.

For human shape and pose estimation, we use the Graph-CNN approach of Kolotouros et al. (2019b) (GraphCMR), which directly regresses the 3D locations of the vertices of an SMPL (Loper et al., 2015) mesh model from a single image. An example mesh is shown in Figure 9(a-b). Given a pixel-wise 2D segmentation of the image, we crop the left camera image to a bounding box around each detected human, which then becomes an input to GraphCMR. GraphCMR outputs a 3D SMPL mesh for the corresponding human, as well as camera parameters ($x$ and $y$ image position and a scale factor corresponding to a weak perspective camera model). We then use the camera model to project the human mesh vertices into the image frame. After obtaining the projection, we compute the location and orientation of the full mesh with respect to the camera using PnP (Zheng et al., 2013), which optimizes the camera pose based on the reprojection error of the mesh into the camera frame. The translation is recovered from the depth image, which is used to get the approximate 3D position of the pelvis joint of the human in the image. Finally, we transform the mesh location to the global frame based on the world transformation output by Kimera-VIO.
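A sketch of the mesh-localization step. Note that Kimera uses the PnP method of (Zheng et al., 2013), whereas this illustration substitutes OpenCV's EPnP solver and omits the depth-based translation refinement; all names are hypothetical:

```python
import numpy as np
import cv2

def locate_human_mesh(mesh_vertices, pixel_projections, K):
    """Recover the mesh pose in the camera frame from 2D-3D correspondences.

    mesh_vertices: (N,3) SMPL vertices in the body frame (from GraphCMR).
    pixel_projections: (N,2) projections of those vertices obtained with the
    weak-perspective camera output by GraphCMR. K: 3x3 camera intrinsics.
    The SMPL mesh provides hundreds of correspondences, so EPnP is well posed.
    """
    ok, rvec, tvec = cv2.solvePnP(
        mesh_vertices.astype(np.float64),
        pixel_projections.astype(np.float64),
        K.astype(np.float64), None, flags=cv2.SOLVEPNP_EPNP)
    assert ok
    R, _ = cv2.Rodrigues(rvec)  # rotation of the mesh w.r.t. the camera
    return R, tvec.ravel()      # tvec would be refined with the depth image in Kimera
```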
Human Tracking and Monitoring. The above approach relies heavily on the accuracy of GraphCMR and discards useful temporal information about the human. In fact, GraphCMR outputs are unreliable in several scenarios, especially when the human is partially occluded. In this section, we describe our method for (i) maintaining persistent information about human trajectories, (ii) monitoring GraphCMR location and pose estimates to determine which estimates are inaccurate, and (iii) mitigating human location errors through pose graph optimization using motion priors. We achieve these results by maintaining a pose graph for each human the robot encounters and updating the pose graphs using simple but robust data association.

Pose Graph. To maintain persistent information about human location, we build a pose graph for each human, where each node in the graph corresponds to the location of the pelvis of the human at a discrete time. Consecutive poses are connected by a factor (Dellaert and Kaess, 2017) modeling a zero-velocity prior on the human motion, with a permissive noise model to allow for small motions. The location information from GraphCMR is modelled as a prior factor, providing the estimated global coordinates at each timestep. In addition to the pelvis locations, we maintain a persistent history of the SMPL parameters of the human, as well as joint locations for pose analysis.

The advantage of the pose-graph system is two-fold. First, using a pose graph for each human's trajectory allows for the application of pose-graph-optimization techniques to get a trajectory estimate that is smooth and robust to misdetections. Many of the detections from GraphCMR propagate to the pose graph even if they are not immediately rejected by the consistency checks described in the next section. However, by using Kimera-RPGO and PCM outlier rejection, the pose graphs of the humans can be regularly optimized to smooth the trajectory and remove bad detections. PCM outlier rejection is particularly good at removing detections that would require the human to move or rotate arbitrarily fast. Second, using pose graphs to model both the humans and the robot's global trajectory allows for unified visualization tools between the two use-cases. Figure 10 shows the pose graph (blue line) of a human in the office environment, as well as the detection associated with each pose in the graph (rainbow-like color-coded human mesh).

Figure 10: Human Pose-Graph: Optimized pose graph (blue line) for a single human. The detected human shape is shown as a 3D mesh, color-coded from the most recent detection in red to the oldest one in pink.
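A minimal GTSAM sketch of such a human pose graph, where each detection contributes a prior factor and consecutive detections are linked by a permissive zero-velocity between factor; the sigmas and the example detections are illustrative:

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X  # X(i): pelvis pose at detection i (illustrative)

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

detection_noise = gtsam.noiseModel.Isotropic.Sigma(6, 0.5)  # GraphCMR prior (assumed sigma)
motion_noise = gtsam.noiseModel.Isotropic.Sigma(6, 1.0)     # permissive zero-velocity prior

# Three consecutive pelvis detections; the last one is an outlier-like jump.
detections = [gtsam.Pose3(gtsam.Rot3(), np.array([0.0, 0.0, 0.0])),
              gtsam.Pose3(gtsam.Rot3(), np.array([0.3, 0.0, 0.0])),
              gtsam.Pose3(gtsam.Rot3(), np.array([5.0, 0.0, 0.0]))]

for i, det in enumerate(detections):
    # Prior factor: the GraphCMR detection provides the global coordinates.
    graph.add(gtsam.PriorFactorPose3(X(i), det, detection_noise))
    initial.insert(X(i), det)
    if i > 0:
        # Zero-velocity motion prior between consecutive poses.
        graph.add(gtsam.BetweenFactorPose3(X(i - 1), X(i), gtsam.Pose3(), motion_noise))

smoothed = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
```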
Data Association. A key issue in the process of building the pose graph is associating which nodes belong to the same human over time and then linking them appropriately. We use a simplified data association model, which associates a new node with the node that has the closest Euclidean distance to it. This form of data association works well under the mild assumption that the distance a human moves between timesteps is smaller than the distance between humans. We do not have information for when a human enters the frame and when they leave (although we do know the number of people in a given frame). To avoid associating new humans with the pose graphs of previous humans, we add a spatio-temporal consistency check before adding the pose to the human's pose graph, as discussed below and sketched at the end of this subsection.

To check consistency, we extract the human skeleton at time $t-1$ (from the pose graph) and $t$ (from the current detection) and check that the motion of each joint (Figure 9(c)) is physically plausible in that time interval (_i.e.,_ we leverage the fact that the joint and torso motion cannot be arbitrarily fast). This check is visualized in Figure 9(c). We first ensure that the rate of centroid movement is plausible between the two sets of skeletons. Since the median human walking speed is about $1.25$ m/s (Schimpl et al., 2011), we use a conservative $3$ m/s bound on the movement rate to threshold the feasibility of data association. In addition, we use a conservative bound of $3$ m on the maximum allowable joint displacement to bound irregular joint movements.

The data association check is made more robust by using the beta parameters of the SMPL model (Loper et al., 2015), which encode the various shape attributes of the mesh in 8 floating-point parameters. These shape parameters include, for example, the width and height of different features of the human model. We check the current detection's beta parameters against those of the skeleton at time $t-1$ and ensure that the average of the difference between each pair of beta parameters does not exceed a certain threshold ($0.1$ in our experiments). This helps to differentiate humans from each other based on their appearance. In Kimera-Humans, the beta parameters are estimated by GraphCMR (Kolotouros et al., 2019b). If the centroid movement and joint movement between the timesteps are within the bounds and the beta-parameter check passes, we add the new node to the pose graph that has the closest final node, as described earlier. If no pose graph meets the consistency criteria, we initialize a new pose graph with a single node corresponding to the current detection.

Node Error Monitoring and Mitigation. As mentioned earlier, GraphCMR outputs are very sensitive to occluded humans, and prediction quality is poor in those circumstances. To gain robustness to simple occlusions, we mark as incorrect those detections for which the bounding box of the human approaches the boundary of the image or is too small ($\leq 30$ pixels in our tests). In addition, we use the size of the pose graph as a proxy to monitor the error of the nodes. When a pose graph has few nodes, it is highly likely that those nodes are erroneous. We determined through experimental results that pose graphs with fewer than $10$ nodes tend to have extreme errors in human location. We mark those graphs as erroneous and remove them from the DSG, a process we refer to as pose-graph pruning. This is similar to removing short feature tracks in visual tracking. Finally, we mitigate node errors by running optimization over the pose graphs using the stationary motion priors, which achieves a great reduction in the remaining errors.
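The sketch below gathers the three checks with the thresholds quoted above; the joint arrays and beta vectors are assumed to be given as numpy arrays:

```python
import numpy as np

def is_consistent(joints_prev, joints_curr, betas_prev, betas_curr, dt,
                  max_speed=3.0, max_joint_disp=3.0, max_beta_diff=0.1):
    """Spatio-temporal and shape checks before data association.

    joints_*: (J,3) skeleton joint positions at t-1 and t; betas_*: SMPL shape
    parameters; dt: time between the two detections [s].
    """
    # Centroid movement rate (conservative 3 m/s bound vs. 1.25 m/s median walking speed).
    centroid_speed = np.linalg.norm(joints_curr.mean(0) - joints_prev.mean(0)) / dt
    if centroid_speed > max_speed:
        return False
    # Maximum allowable per-joint displacement, to bound irregular joint movements.
    if np.linalg.norm(joints_curr - joints_prev, axis=1).max() > max_joint_disp:
        return False
    # SMPL beta (shape) parameters must agree on average (0.1 threshold).
    if np.abs(betas_curr - betas_prev).mean() > max_beta_diff:
        return False
    return True
```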
### 3.6 Kimera-Objects: Object Pose Estimation

Within Kimera-DSG, Kimera-Objects is the module that extracts static objects from the optimized metric-semantic mesh produced by Kimera-PGMO. We give the user the flexibility to provide a catalog of CAD models for some of the object classes. If a shape is available, Kimera-Objects will try to fit it to the mesh (paragraph “Objects with Known Shape” below); otherwise, it will only attempt to estimate a centroid and a bounding box (paragraph “Objects with Unknown Shape”).

Objects with Unknown Shape. The optimized metric-semantic mesh from Kimera-PGMO already contains semantic labels. Therefore, Kimera-Objects first extracts the portion of the mesh belonging to a given object class (_e.g.,_ chairs in Figure 1(d)); this mesh contains multiple objects belonging to the same class. To break down the mesh into multiple object instances, Kimera-Objects performs Euclidean clustering using PCL (Rusu and Cousins, 2011), with a distance threshold of twice the voxel size used in Kimera-Semantics ($0.05$ m), that is, $0.1$ m. From the segmented clusters, Kimera-Objects obtains a centroid of the object (from the vertices of the corresponding mesh), and assigns a canonical orientation with axes aligned with the world frame. Finally, it computes a bounding box with axes aligned with the canonical orientation.
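A sketch of this step that stands in for PCL's Euclidean clustering with the $0.1$ m distance threshold (clusters are connected components of the radius graph), followed by the centroid and axis-aligned bounding box computation:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def extract_object_instances(points, dist_thresh=0.1):
    """Euclidean clustering of a class-labeled vertex cloud into object instances."""
    pairs = np.array(list(cKDTree(points).query_pairs(dist_thresh)))
    n = len(points)
    if len(pairs):
        adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    else:
        adj = coo_matrix((n, n))
    n_clusters, labels = connected_components(adj, directed=False)
    instances = []
    for c in range(n_clusters):
        cluster = points[labels == c]
        instances.append({
            "centroid": cluster.mean(axis=0),   # canonical orientation: world-aligned axes
            "bbox_min": cluster.min(axis=0),    # axis-aligned bounding box corners
            "bbox_max": cluster.max(axis=0),
        })
    return instances
```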
Objects with Known Shape. If a CAD model for a class of objects is given, Kimera-Objects will attempt to fit the known shape to the object mesh. This is done in three steps. First, Kimera-Objects extracts 3D keypoints from the CAD model of the object and from the corresponding object mesh. The 3D keypoints are extracted by transforming each mesh to a point cloud (by picking the vertices of the mesh) and then extracting 3D Harris keypoints (Rusu and Cousins, 2011) with a $0.15$ m radius and a $10^{-4}$ non-maximum suppression threshold. Second, we match every keypoint on the CAD model with every keypoint on the Kimera model. Clearly, this step produces many incorrect putative matches (outliers). Third, we apply a robust open-source registration technique, TEASER++ (Yang et al., 2020), to find the best alignment between the point clouds in the presence of extreme outliers. The output of these three steps is a 3D pose of the object (from which it is also easy to extract an axis-aligned bounding box); see the result in Figure 1(e).

### 3.7 Kimera-BuildingParser: Extracting Places, Rooms, and Structures

Figure 11: (Left) 2D slice of the 3D ESDF. The Euclidean distance is color-coded from red ($0$ m) to green ($0.5$ m). (Right) Truncated ($\leq 0.10$ m) 2D ESDF, revealing the room's contours. Overlaid in both figures are the estimated room layout (square nodes) and their connectivity (black edges).

Kimera-BuildingParser implements simple-yet-effective methods to parse places, structures, and rooms from Kimera's 3D mesh.

Places. Kimera-Semantics uses Voxblox (Oleynikova et al., 2017) to extract a global mesh and an ESDF. We also obtain a topological graph from the ESDF using (Oleynikova et al., 2018), where nodes sparsely sample the free space, while edges represent straight-line traversability between two nodes. We directly use this graph to extract the places and their topology (Figure 2(a)). After creating the places, we associate each object and agent pose to the nearest place to model a proximity relation.

Structures. Kimera's semantic mesh already includes different labels for walls, floor, and ceiling. Hence, isolating these three structural elements is straightforward (Figure 3). For each type of structure, we then compute a centroid, assign a canonical orientation (aligned with the world frame), and compute an axis-aligned bounding box. We further segment the walls depending on the room they belong to. To do so, we leverage the property that the 3D mesh vertex normals are oriented: the normal of each vertex points towards the camera. For each 3D vertex of the walls' mesh, we query the nearest nodes in the places layer that are in the normal direction, and limit the search to a conservative radius ($0.5$ m). We then use the room IDs of the retrieved places to vote for the room label of the current wall mesh vertex. To make the approach more robust to cases such as two rooms meeting (door frames, for example), we weight the votes of the places by $\frac{\bf{n}\cdot\bf{d}}{\|\bf{d}\|^{2}_{2}}$, where $\bf{n}$ is the normal (unit norm) at the wall vertex and $\bf{d}$ is the vector from the wall vertex to the place node. This downweights the votes of places that are not immediately in front of and near the wall vertex.

Rooms. While floor plan computation is challenging in general, (i) the availability of a 3D ESDF and (ii) the knowledge of the gravity direction given by Kimera-VIO enable a simple-yet-effective approach to partition the environment into different rooms (sketched below). The key insight is that a horizontal 2D section of the 3D ESDF, cut below the level of the detected ceiling, is relatively unaffected by clutter in the room (Figure 11). This 2D section gives a clear signature of the room layout: the voxels in the section have a value of $0.3$ m almost everywhere (corresponding to the distance to the ceiling), except close to the walls, where the distance decreases to $0$ m. We refer to this 2D ESDF (cut at $0.3$ m below the ceiling) as an _ESDF section_. To compensate for noise, we further truncate the ESDF section to distances above $0.2$ m, such that small openings between rooms (possibly resulting from error accumulation) are removed. The result of this partitioning operation is a set of disconnected 2D ESDFs corresponding to each room, which we refer to as _2D ESDF rooms_. Then, we label all the “Places” (nodes in Layer 3) that fall inside a 2D ESDF room depending on their 2D (horizontal) position. At this point, some places might not be labeled (those close to walls or inside door openings). To label these, we use majority voting over the neighborhood of each node in the topological graph of “Places” in Layer 3; we repeat majority voting until all places have a label. Finally, we add an edge between each place (Layer 3) and its corresponding room (Layer 4), see Figure 2(b-c), and add an edge between two rooms (Layer 4) if there is an edge connecting two of their places (red edges in Figure 2(b-c)). We also refer the reader to the second video attachment.
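A minimal sketch of the room partitioning on a synthetic ESDF section, with the $0.2$ m truncation from the text; the 2D ESDF rooms are the connected components that survive the truncation:

```python
import numpy as np
from scipy import ndimage

def segment_rooms(esdf_section, trunc=0.2):
    """Partition a 2D ESDF section (values in meters) into rooms.

    Cells with distance <= trunc are removed, closing small openings between
    rooms; the remaining connected components are the '2D ESDF rooms'.
    """
    free = esdf_section > trunc
    labels, num_rooms = ndimage.label(free)  # 0 = walls/openings, 1..num_rooms = rooms
    return labels, num_rooms

# Example: two rooms separated by a wall with a narrow, noisy opening.
esdf = np.full((20, 41), 0.3)   # 0.3 m almost everywhere (distance to the ceiling)
esdf[:, 20] = 0.0               # wall between the rooms
esdf[10, 20] = 0.15             # small opening, removed by the 0.2 m truncation
labels, n = segment_rooms(esdf)
print(n)  # -> 2
```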
### 3.8 Debugging Tools

Kimera also provides an open-source suite of evaluation tools for debugging, visualization, and benchmarking of VIO, SLAM, and metric-semantic reconstruction (https://github.com/MIT-SPARK/Kimera-Evaluation). Kimera includes a Continuous Integration server (Jenkins) that asserts the quality of the code (compilation, unit tests), but also automatically evaluates Kimera-VIO, Kimera-RPGO, and Kimera-PGMO on the EuRoC datasets using _evo_ (Grupp, 2017). Moreover, we provide Jupyter Notebooks to visualize intermediate VIO statistics (_e.g.,_ quality of the feature tracks, IMU preintegration errors), as well as to automatically assess the quality of the 3D reconstruction using Open3D (Zhou et al., 2018a).

## 4 Experimental Evaluation

We start by introducing the datasets that we use for evaluation in Section 4.1, which feature real and simulated scenes, with and without dynamic agents, as well as a large variety of environments (indoors and outdoors, small and large). Section 4.2 shows that Kimera-VIO and Kimera-RPGO attain competitive pose estimation performance on the EuRoC dataset. Section 4.3 demonstrates Kimera's 3D mesh geometric accuracy on EuRoC, using the subset of scenes providing a ground-truth point cloud. Section 4.4 provides a detailed evaluation of Kimera-Mesher and Kimera-Semantics' 3D metric-semantic reconstruction on the uHumans dataset. Section 4.5 evaluates the localization and reconstruction performance of Kimera-PGMO. Section 4.6 evaluates the human and object localization errors. Section 4.7 evaluates the accuracy of the segmentation of places into rooms. Section 4.8 highlights Kimera's real-time performance and analyzes the runtime of each module. Furthermore, Section 4.9 shows how Kimera's real-time performance scales when running on embedded computers. Finally, Section 4.10 qualitatively shows the performance of Kimera on real-life datasets that we collected.

### 4.1 Datasets

Figure 12: Overview of the datasets used for evaluation. We evaluate Kimera on a variety of datasets, both real-life (top row) and simulated (bottom row), indoors and outdoors, small and large.

Figure 12 gives an overview of the datasets used, while their characteristics are detailed below.

#### 4.1.1 EuRoC.

We use the EuRoC dataset (Burri et al., 2016), which features a small drone flying indoors with a mounted stereo camera and an IMU. The EuRoC dataset includes eleven datasets in total, recorded in two different static scenarios. The Machine Hall scenario (MH) is the interior of an industrial facility. The Vicon Room (V) is similar to an office room. Each dataset has a different level of difficulty for VIO, where the speed of the drone increases with the numeric index of the dataset (_e.g.,_ MH_01 is easier for VIO than MH_03). Ground-truth localization of the drone is available for all datasets. Furthermore, a ground-truth point cloud of the Vicon Room is available. For our experimental evaluation, we use EuRoC to analyze both the localization performance of Kimera-VIO, Kimera-RPGO, and Kimera-PGMO, as well as the geometric reconstruction from Kimera-Mesher, Kimera-Semantics, and Kimera-PGMO. Since there are no dynamic elements in this dataset, nor semantically meaningful objects, we do not use it to evaluate the semantic accuracy of the mesh or the robustness to dynamic scenes.

#### 4.1.2 uHumans and uHumans2.

To evaluate the accuracy of the metric-semantic reconstruction and the robustness against dynamic elements, we use the uHumans simulated dataset that we introduced in (Rosinol et al., 2020b), and further release a new uHumans2 dataset with this paper. uHumans and uHumans2 are generated using a photo-realistic Unity-based simulator provided by MIT Lincoln Laboratory, which provides sensor streams (in ROS) and ground truth for both the geometry and the semantics of the scene, and has an interface similar to (Sayre-McCord et al., 2018; Guerra et al., 2019). uHumans features a large-scale office space with multiple rooms, as well as small (_e.g.,_ 12) and large (_e.g.,_ 60) numbers of humans in it, and it is the one reconstructed in Figure 1.
Despite having different numbers of humans, uHumans had the issue that the trajectories for each run were not the same, thereby coupling the localization errors due to dynamic humans in the scene with the intrinsic drift of the VIO. For this reason, and to extend the dataset to a variety of other scenes, we collected the uHumans2 dataset. uHumans2 features indoor scenes, such as the ‘Apartment’, the ‘Subway’, and the ‘Office’ scenes, as well as the ‘Neighborhood’ outdoor scene. Note that the ‘Office’ scene in uHumans2 is the same as in uHumans, but the trajectories are different. Finally, to avoid biasing the results towards a particular 2D semantic segmentation method, we use ground-truth 2D semantic segmentations, and we refer the reader to Hu and Carlone (2019) for a review of potential alternatives.

#### 4.1.3 ‘AeroAstro’, ‘School’, and ‘White Owl’.

Since the uHumans and uHumans2 datasets are simulated, we further evaluate our approach on three real-life datasets that we collected. The three datasets consist of RGB-D and IMU data recorded using a hand-held device. The first dataset features a collection of student cubicles in one of the MIT academic buildings (‘AeroAstro’), and is recorded using a custom-made sensing rig. The second is recorded in a ‘School’, using the same custom-made sensing rig. The third dataset is recorded in an apartment (‘White Owl’), using Microsoft’s Azure Kinect.

Figure 13: Data collection rig used for the reconstruction of the ‘AeroAstro’ and ‘School’ scenes, shown in (a) perspective view and (b) top view; it consists of two Intel RealSense D435i devices and an NVIDIA Jetson TX2.

To collect the ‘AeroAstro’ and ‘School’ datasets, we use a custom-made sensing rig designed to mount two Intel RealSense D435i devices in a perpendicular configuration, as shown in Figure 13. While a single Intel RealSense D435i already provides the RGB-D and IMU data needed for Kimera-Core, we used two cameras with non-overlapping fields of view because the infrared pattern emitted by the depth camera is visible in the images. This infrared pattern makes the images unsuitable for feature tracking in Kimera-VIO. Therefore, we disable the infrared pattern emitter for one of the RealSense cameras, which we use for VIO. We enable the infrared emitter for the other camera to capture high-quality depth data. Since the cameras do not have an overlapping field of view, the camera used for tracking is unaffected by the infrared pattern emitted by the other camera.

Before recording the ‘AeroAstro’ and ‘School’ datasets, we calibrate the IMUs, the extrinsics, and the intrinsics of both RealSense cameras. In particular, the IMU of each RealSense device is calibrated using the script provided by Intel (https://github.com/IntelRealSense/librealsense/tree/master/tools/rs-imu-calibration), and the intrinsics and extrinsics of all cameras are calibrated using the Kalibr toolkit (Furgale et al., 2013). Furthermore, the transform between the two RealSense devices is estimated using the Kalibr extension for IMU-to-IMU extrinsic calibration (Rehder et al., 2016). Since the cameras do not share the same field of view, we cannot use camera-to-camera extrinsic calibration. Both RealSense devices use hardware synchronization. The ‘AeroAstro’ scene consists of an approximately $40$ m loop around the interior of the space that passes four cubicle groups and a kitchenette, with a standing human visible at two different times.
The ‘School’ scene consists of three rooms connected by a corridor, and the trajectory is approximately $20$ m long.

Figure 14: Side view of the 3D point clouds generated by the Intel RealSense D435i (left) and Azure Kinect (right) RGB-D cameras for the same scene and from the same viewpoint. (Middle) Zoomed-in view, showing that the Azure Kinect provides depth estimates of higher quality compared to the RealSense. Despite these differences, Kimera is robust to noise, as shown in Section 4.10.

To collect the ‘White Owl’ scene, we used the Azure Kinect, as it provides a denser, more accurate depth map than the Intel RealSense D435i (see Figure 14 for a comparison). We also calibrate the extrinsics and intrinsics of the camera and IMU using Kalibr (Furgale et al., 2013). The ‘White Owl’ scene consists of an approximately $15$ m long trajectory through three rooms: a bedroom, a kitchen, and a living room. A single seated human is visible in the kitchen area. Finally, we semantically segment the RGB images for both scenes using Mask-RCNN (He et al., 2017) with the pre-trained model weights from the COCO dataset (Abdulla, 2017).

### 4.2 Pose Estimation

Table 1: RMSE [m] for the absolute translation error (ATE) of state-of-the-art VIO pipelines (reported from (Delmerico and Scaramuzza, 2018) and the results from the respective papers) compared to Kimera, on the EuRoC dataset. The first six methods perform fixed-lag smoothing; the last four perform PGO with loop closure. $-$ indicates missing data.

| EuRoC Seq. | OKVIS | MSCKF | ROVIO | VINS-Mono | Kimera-VIO | Basalt | ORB-SLAM3 | VINS-LC | Kimera-RPGO | Kimera-PGMO |
|---|---|---|---|---|---|---|---|---|---|---|
| MH_1 | 0.16 | 0.42 | 0.21 | 0.15 | 0.11 | 0.08 | 0.04 | 0.12 | 0.13 | 0.09 |
| MH_2 | 0.22 | 0.45 | 0.25 | 0.15 | 0.10 | 0.06 | 0.03 | 0.12 | 0.21 | 0.11 |
| MH_3 | 0.24 | 0.23 | 0.25 | 0.22 | 0.16 | 0.05 | 0.04 | 0.13 | 0.12 | 0.12 |
| MH_4 | 0.34 | 0.37 | 0.49 | 0.32 | 0.16 | 0.10 | 0.05 | 0.18 | 0.12 | 0.16 |
| MH_5 | 0.47 | 0.48 | 0.52 | 0.30 | 0.15 | 0.08 | 0.08 | 0.21 | 0.15 | 0.18 |
| V1_1 | 0.09 | 0.34 | 0.10 | 0.08 | 0.05 | 0.04 | 0.04 | 0.06 | 0.06 | 0.05 |
| V1_2 | 0.20 | 0.20 | 0.10 | 0.11 | 0.08 | 0.02 | 0.01 | 0.08 | 0.05 | 0.06 |
| V1_3 | 0.24 | 0.67 | 0.14 | 0.18 | 0.13 | 0.03 | 0.02 | 0.19 | 0.11 | 0.13 |
| V2_1 | 0.13 | 0.10 | 0.12 | 0.08 | 0.06 | 0.03 | 0.03 | 0.08 | 0.06 | 0.05 |
| V2_2 | 0.16 | 0.16 | 0.14 | 0.16 | 0.07 | 0.02 | 0.01 | 0.16 | 0.06 | 0.07 |
| V2_3 | 0.29 | 1.13 | 0.14 | 0.27 | 0.21 | $-$ | 0.02 | 1.39 | 0.24 | 0.23 |

In this section, we evaluate the performance of Kimera-VIO and Kimera-RPGO. Table 1 compares the Root Mean Squared Error (RMSE) of the Absolute Translation Error (ATE) of Kimera-VIO against state-of-the-art open-source VIO pipelines: OKVIS (Leutenegger et al., 2013), MSCKF (Mourikis and Roumeliotis, 2007), ROVIO (Bloesch et al., 2015), VINS-Mono (Qin et al., 2018), Basalt (Usenko et al., 2019), and ORB-SLAM3 (Campos et al., 2021), using the independently reported values in (Delmerico and Scaramuzza, 2018) and the self-reported values from the respective authors. Note that OKVIS, MSCKF, ROVIO, and VINS-Mono use a monocular camera, while the rest use a stereo camera. We align the estimated and ground-truth trajectories using an $\mathrm{SE}(3)$ transformation before evaluating the errors.
Using a $\mathrm{Sim}(3)$ alignment, as in (Delmerico and Scaramuzza, 2018), would result in an even smaller error for Kimera; we preferred the $\mathrm{SE}(3)$ alignment, since it is more appropriate for VIO, where the scale is observable thanks to the IMU. We group the techniques depending on whether they use fixed-lag smoothing or loop closures. Kimera-VIO, Kimera-RPGO, and Kimera-PGMO (see Section 4.5) achieve competitive performance. We also compared Kimera-VIO with SVO-GTSAM (Forster et al., 2014, 2015) in our previous paper (Rosinol et al., 2020a).

Table 2: RMSE ATE [m] vs. loop closure threshold $\alpha$, on the V1_01 EuRoC dataset.

| $\alpha$ | $10^{1}$ | $10^{0}$ | $10^{-1}$ | $10^{-2}$ | $10^{-3}$ |
|---|---|---|---|---|---|
| PGO w/o PCM | 0.05 | 0.45 | 1.74 | 1.59 | 1.59 |
| Kimera-RPGO | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 |

Furthermore, Kimera-RPGO ensures robust performance and is less sensitive to loop closure parameter tuning. Table 2 shows the Kimera-RPGO accuracy with and without outlier rejection (PCM) for different values of the loop closure threshold $\alpha$ used in DBoW2. Small values of $\alpha$ lead to more loop closure detections, but these are less conservative (more outliers). Table 2 shows that, by using PCM, Kimera-RPGO is fairly insensitive to the choice of $\alpha$. The results in Table 1 use $\alpha=0.001$.

#### 4.2.1 Robustness of Pose Estimation in Dynamic Scenes.

Table 4 reports the absolute trajectory errors of Kimera when using 5-point RANSAC, 2-point RANSAC, and when using 2-point RANSAC and IMU-aware feature tracking (label: DVIO). Best results (lowest errors) are shown in bold. The rows MH_01–V2_03, corresponding to tests on the static EuRoC dataset, confirm that, in the absence of dynamic agents, the proposed approach performs on par with the state of the art, while the use of 2-point RANSAC already boosts performance. The remaining rows (uHumans and uHumans2) further show that, in the presence of an increasing number of dynamic entities (third column, with the number of humans), the proposed DVIO approach remains robust.

### 4.3 Geometric Reconstruction

We now show how Kimera's accurate pose estimates and robustness against dynamic scenes improve the geometric accuracy of the reconstruction. We use the ground-truth point cloud available in the EuRoC V1 and V2 datasets to assess the quality of the 3D meshes produced by Kimera. We evaluate each mesh against the ground truth using the accuracy and completeness metrics, as in (Rosinol, 2018, Sec. 4.3): (i) we compute a point cloud by sampling our mesh with a uniform density of $10^{3}~{}\text{points}/\text{m}^{2}$, (ii) we register the estimated and the ground-truth clouds with ICP (Besl and McKay, 1992) using _CloudCompare_ (Cloudcompare.org, 2019), and (iii) we evaluate the average distance from the ground-truth point cloud to its nearest neighbor in the estimated point cloud (accuracy), and vice-versa (completeness).
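A sketch of the two metrics, assuming the clouds have already been ICP-aligned:

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(est_points, gt_points):
    """Mesh evaluation metrics of Section 4.3 on (N,3) point arrays.

    accuracy: average distance from each ground-truth point to its nearest
    neighbor in the estimated cloud; completeness: the reverse direction.
    """
    accuracy = cKDTree(est_points).query(gt_points)[0].mean()
    completeness = cKDTree(gt_points).query(est_points)[0].mean()
    return accuracy, completeness
```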
Figure 15(a) shows the estimated cloud (corresponding to the global mesh of Kimera-Semantics on V1_01), color-coded by the distance to the closest point in the ground-truth cloud (accuracy); Figure 15(b) shows the ground-truth cloud, color-coded by the distance to the closest point in the estimated cloud (completeness).

Figure 15: (a) Kimera's 3D mesh color-coded by the distance to the ground-truth point cloud. (b) Ground-truth point cloud color-coded by the distance to the estimated cloud. EuRoC V1_01 dataset.

Table 3 provides a quantitative comparison between the fast multi-frame mesh produced by Kimera-Mesher and the slow mesh produced via TSDF by Kimera-Semantics. To obtain a complete mesh from Kimera-Mesher, we set a large VIO horizon (_i.e.,_ we perform full smoothing). As expected from Figure 15(a), the global mesh from Kimera-Semantics is very accurate, with an average error of $0.35$–$0.48$ m across datasets. Kimera-Mesher produces a noisier mesh (up to $24\%$ error increase), but requires two orders of magnitude less time to compute (see Section 4.8).

Table 3: Evaluation of the completeness (Rosinol, 2018, Sec. 4.3.3) of Kimera's multi-frame and global meshes, with an ICP threshold of $1.0$ m.

| EuRoC Seq. | Multi-Frame RMSE [m] | Global RMSE [m] | Relative Improvement [%] |
|---|---|---|---|
| V1_01 | 0.482 | 0.364 | 24.00 |
| V1_02 | 0.374 | 0.384 | -2.00 |
| V1_03 | 0.451 | 0.353 | 21.00 |
| V2_01 | 0.465 | 0.480 | -3.00 |
| V2_02 | 0.491 | 0.432 | 12.00 |
| V2_03 | 0.530 | 0.411 | 22.00 |

#### 4.3.1 Robustness of Mesh Reconstruction in Dynamic Scenes.

Here we show that the enhanced robustness against dynamic objects afforded by DVIO (and quantified in Section 4.2.1), combined with _dynamic masking_ (Section 3.5), results in robust and accurate metric-semantic meshes in crowded dynamic environments.

Table 4: RMSE for the absolute translation error in meters when using 5-point, 2-point, and DVIO poses, as well as Kimera-RPGO's and Kimera-PGMO's optimized trajectories. We show the drift in % for DVIO to account for the length of the different trajectories.

| Dataset | Scene | # of Humans | 5-point | 2-point | DVIO (Drift [%]) | Kimera-RPGO | Kimera-PGMO |
|---|---|---|---|---|---|---|---|
| EuRoC | MH_01 | 0 | 0.09 | 0.14 | 0.11 (0.1) | 0.13 | 0.09 |
| EuRoC | MH_02 | 0 | 0.10 | 0.12 | 0.10 (0.1) | 0.21 | 0.11 |
| EuRoC | MH_03 | 0 | 0.11 | 0.17 | 0.16 (0.1) | 0.12 | 0.12 |
| EuRoC | MH_04 | 0 | 0.42 | 0.19 | 0.16 (0.2) | 0.12 | 0.16 |
| EuRoC | MH_05 | 0 | 0.21 | 0.14 | 0.15 (0.2) | 0.15 | 0.18 |
| EuRoC | V1_01 | 0 | 0.07 | 0.07 | 0.05 (0.1) | 0.06 | 0.05 |
| EuRoC | V1_02 | 0 | 0.12 | 0.08 | 0.08 (0.1) | 0.05 | 0.06 |
| EuRoC | V1_03 | 0 | 0.17 | 0.13 | 0.13 (0.2) | 0.11 | 0.13 |
| EuRoC | V2_01 | 0 | 0.05 | 0.06 | 0.06 (0.2) | 0.06 | 0.05 |
| EuRoC | V2_02 | 0 | 0.08 | 0.07 | 0.07 (0.1) | 0.06 | 0.07 |
| EuRoC | V2_03 | 0 | 0.30 | 0.27 | 0.21 (0.3) | 0.24 | 0.23 |
| uHumans | Office | 12 | 0.92 | 0.78 | 0.59 (0.2) | 0.68 | 0.66 |
| uHumans | Office | 24 | 1.45 | 0.79 | 0.78 (0.4) | 0.78 | 0.78 |
| uHumans | Office | 60 | 1.60 | 1.11 | 0.88 (0.4) | 0.72 | 0.61 |
| uHumans2 | Office | 0 | 0.47 | 0.46 | 0.46 (0.2) | 0.46 | 0.21 |
| uHumans2 | Office | 6 | 0.50 | 0.48 | 0.50 (0.2) | 0.49 | 0.47 |
| uHumans2 | Office | 12 | 0.50 | 0.50 | 0.45 (0.2) | 0.45 | 0.32 |
| uHumans2 | Neighborhood | 0 | 3.67 | 5.77 | 3.37 (0.8) | 2.78 | 1.70 |
| uHumans2 | Neighborhood | 24 | $\times$ | $\times$ | 6.65 (1.6) | 3.76 | 3.01 |
| uHumans2 | Neighborhood | 36 | $\times$ | $\times$ | 11.58 (2.7) | 1.74 | 1.48 |
| uHumans2 | Subway | 0 | 3.38 | 2.65 | 1.79 (0.4) | 1.68 | 1.47 |
| uHumans2 | Subway | 24 | $\times$ | $\times$ | 2.37 (0.5) | 1.92 | 0.82 |
| uHumans2 | Subway | 36 | $\times$ | 1.70 | 1.14 (0.2) | 0.87 | 0.68 |
| uHumans2 | Apartment | 0 | 0.08 | 0.07 | 0.07 (0.1) | 0.07 | 0.08 |
| uHumans2 | Apartment | 1 | 0.09 | 0.07 | 0.07 (0.1) | 0.07 | 0.07 |
| uHumans2 | Apartment | 2 | 0.08 | 0.08 | 0.07 (0.1) | 0.07 | 0.07 |

Figure 16: 3D mesh reconstruction (a) without and (b) with _dynamic masking_. Note that the human moves from right to left, while the robot with the camera rotates back and forth when mapping this scene.

Dynamic Masking. Figure 16 visualizes the effect of dynamic masking on Kimera's metric-semantic mesh reconstruction.
Figure 16(a) shows that, without dynamic masking, a human walking in front of the camera leaves a “contrail” (in cyan) and creates artifacts in the mesh. Figure 16(b) shows that dynamic masking avoids this issue and leads to clean mesh reconstructions. Table 5 reports the RMSE mesh error (see the _accuracy metric_ in (Rosinol et al., 2019)) with and without dynamic masking. To assess the mesh accuracy independently from the VIO localization errors, we also report the mesh geometric errors when using ground-truth poses (“GT pose” column in Table 5), compared to the mesh errors when using VIO poses (“DVIO pose” column). The “GT pose” columns in the table show that, even with perfect localization, the artifacts created by dynamic entities (and visualized in Figure 16(a)) significantly hinder the mesh accuracy, while dynamic masking ensures highly accurate reconstructions. The advantage of dynamic masking is preserved when VIO poses are used. It is worth mentioning that the 3D mesh errors in Table 5 are small compared to the localization errors in Table 1. This is because the floor is the most visible surface in all reconstructions, and since our z-estimate (height) of the trajectory is very accurate, most of the mesh errors are small. We also use ICP to align the ground-truth 3D mesh with the estimated 3D mesh, which further reduces the mesh errors.

Table 5: 3D mesh RMSE in meters with and without Dynamic Masking (DM) for Kimera-Semantics, when using the ground-truth pose (GT pose) and DVIO's estimated pose (DVIO pose), in the uHumans and uHumans2 datasets. The third column shows the number of humans in each scene (# of Humans).

| Dataset | Scene | # of Humans | GT pose, w/o DM | GT pose, w/ DM | DVIO pose, w/o DM | DVIO pose, w/ DM |
|---|---|---|---|---|---|---|
| uHumans | Office | 12 | 0.09 | 0.06 | 0.23 | 0.23 |
| uHumans | Office | 24 | 0.13 | 0.06 | 0.35 | 0.30 |
| uHumans | Office | 60 | 0.19 | 0.06 | 0.35 | 0.33 |
| uHumans2 | Office | 0 | 0.03 | 0.03 | 0.16 | 0.16 |
| uHumans2 | Office | 6 | 0.03 | 0.03 | 0.21 | 0.17 |
| uHumans2 | Office | 12 | 0.03 | 0.03 | 0.18 | 0.13 |
| uHumans2 | Neighborhood | 0 | 0.06 | 0.06 | 0.27 | 0.27 |
| uHumans2 | Neighborhood | 24 | 0.08 | 0.06 | 0.66 | 0.61 |
| uHumans2 | Neighborhood | 36 | 0.08 | 0.06 | 0.70 | 0.65 |
| uHumans2 | Subway | 0 | 0.06 | 0.06 | 0.42 | 0.42 |
| uHumans2 | Subway | 24 | 0.19 | 0.06 | 0.58 | 0.53 |
| uHumans2 | Subway | 36 | 0.19 | 0.06 | 0.49 | 0.43 |
| uHumans2 | Apartment | 0 | 0.05 | 0.05 | 0.06 | 0.06 |
| uHumans2 | Apartment | 1 | 0.05 | 0.05 | 0.07 | 0.07 |
| uHumans2 | Apartment | 2 | 0.05 | 0.05 | 0.07 | 0.07 |

### 4.4 Semantic Reconstruction

Table 6: Evaluation of Kimera-Semantics using the simulated dataset from Rosinol et al. (2020a), which has no humans, using a combination of ground-truth (GT) and dense stereo (Stereo) depth maps, as well as ground-truth (GT) and DVIO poses.

| | GT depth, GT poses | GT depth, DVIO poses | Stereo depth, DVIO poses |
|---|---|---|---|
| mIoU [%] | 80.10 | 80.03 | 57.23 |
| Acc [%] | 94.68 | 94.50 | 80.74 |
| RMSE [m] | 0.079 | 0.131 | 0.215 |
| ATE [m] | 0.00 | 0.04 | 0.04 |
| Drift [%] | 0.0 | 0.2 | 0.2 |

Kimera-Semantics builds a 3D mesh from the VIO pose estimates, and uses a combination of dense stereo (or RGB-D, if available) and bundled raycasting. We evaluate the impact of each of these components by running three different experiments. For these experiments, we use the simulated dataset in (Rosinol et al., 2020a), which has ground-truth semantics and geometry, and allows us to determine the effects of each module on the performance. First, we use Kimera-Semantics with ground-truth (GT) poses and ground-truth depth maps (available in simulation) to assess the initial loss of performance due to bundled raycasting.
Second, we use Kimera-VIO's pose estimates. Finally, we use the full Kimera-Semantics pipeline, including dense stereo. To analyze the semantic performance, we calculate the mean Intersection over Union (mIoU) (Hackel et al., 2017) and the overall portion of correctly labeled points (Acc) (Wolf et al., 2015). We also report the ATE to correlate the results with the drift incurred by Kimera-VIO. Finally, we evaluate the metric reconstruction by registering the estimated mesh with the ground truth and computing the RMSE for the points, as in Section 4.3.
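For reference, a sketch of the two semantic metrics over labeled points:

```python
import numpy as np

def semantic_metrics(pred_labels, gt_labels, num_classes):
    """mIoU and overall accuracy over labeled 3D points (per Section 4.4)."""
    acc = float((pred_labels == gt_labels).mean())
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred_labels == c, gt_labels == c).sum()
        union = np.logical_or(pred_labels == c, gt_labels == c).sum()
        if union > 0:                 # skip classes absent from both clouds
            ious.append(inter / union)
    return float(np.mean(ious)), acc  # (mIoU, Acc)
```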
Table 6 summarizes our findings and shows that bundled raycasting results in a small drop in performance, both geometrically ($<\!8$ cm error on the 3D mesh) and semantically (accuracy $>\!94\%$). Using Kimera-VIO also results in a negligible loss in performance, since our VIO has a small drift ($<0.2\%$, $4$ cm for a $32$ m long trajectory). Certainly, the biggest drop in performance is due to the use of dense stereo. Dense stereo (H. Hirschmüller, 2008) has difficulties resolving the depth of textureless regions such as walls, which are frequent in simulated scenes. Figure 17 shows the confusion matrix when running Kimera-Semantics with Kimera-VIO and ground-truth depth (Figure 17(a)), compared with using dense stereo (Figure 17(b)). Large values in the confusion matrix appear between Wall/Shelf and Floor/Wall. This is exactly where dense stereo suffers the most: textureless walls are difficult to reconstruct and are close to shelves and floor, resulting in increased geometric and semantic errors.

Figure 17: Confusion matrices for Kimera-Semantics using bundled raycasting and (a) ground-truth depth or (b) dense stereo (H. Hirschmüller, 2008). Both experiments use ground-truth 2D semantics. Values are saturated to $10^{4}$ for visualization purposes.

### 4.5 Loop Closure and Mesh Deformation

Localization and geometric evaluation in EuRoC. We evaluate the performance of Kimera-PGMO in terms of the localization errors in Table 4 (last column). We observe that Kimera-PGMO achieves significant improvements on large-scale scenes (such as the ‘Neighborhood’ and ‘Subway’ scenes), where the accumulated ATE is noticeable. Instead, in EuRoC, where DVIO already achieves very accurate localization, Kimera-PGMO may only deliver marginal gains in localization. Note that the localization errors of Kimera-PGMO in Table 4 differ from those of Kimera-RPGO in Table 1, because Kimera-RPGO does not optimize the mesh, contrary to Kimera-PGMO. We evaluate Kimera-PGMO in terms of the geometric errors associated with the deformed 3D mesh on the subset of the EuRoC dataset with ground-truth point clouds. Table 7 shows that Kimera-PGMO's 3D mesh achieves better geometric accuracy than the unoptimized Kimera-Semantics 3D mesh. For a qualitative visualization of the effects of Kimera-PGMO, Figure 18 shows the mesh reconstruction before and after deformation.

Figure 18: Reconstruction from the EuRoC V1_01 dataset, (a) before and (b) after mesh deformation with Kimera-PGMO. The trajectory RMSE before loop closures are applied is $15$ cm, and the trajectory RMSE after loop closures are optimized in Kimera-PGMO is $13$ cm.

Metric-semantic evaluation in uHumans and uHumans2. We further evaluate the impact that deforming a mesh has on the metric-semantic reconstruction in the uHumans and uHumans2 datasets. Table 7 shows the RMSE of the reconstruction error, along with the percentage of correct semantic matches, for Kimera-PGMO compared to DVIO. We observe that the mesh deformation from Kimera-PGMO achieves the best geometric and semantic performance, compared to the non-optimized 3D mesh from Kimera-Semantics when using the DVIO pose estimates.

Table 7: 3D mesh RMSE [m] and percentage of correct semantic labels when using Kimera-Semantics and Kimera-PGMO (both using dynamic masking and DVIO poses), in the uHumans and uHumans2 datasets, and the subset of EuRoC datasets having ground-truth point clouds. EuRoC's ground-truth point clouds do not have semantic labels.

| Dataset | Scene | # of Humans | Geometric [m]: Kimera-Semantics | Geometric [m]: Kimera-PGMO | Semantic [%]: Kimera-Semantics | Semantic [%]: Kimera-PGMO |
|---|---|---|---|---|---|---|
| EuRoC | V1_01 | 0 | 0.15 | 0.13 | – | – |
| EuRoC | V1_02 | 0 | 0.13 | 0.11 | – | – |
| EuRoC | V1_03 | 0 | 0.22 | 0.19 | – | – |
| EuRoC | V2_01 | 0 | 0.23 | 0.19 | – | – |
| EuRoC | V2_02 | 0 | 0.19 | 0.17 | – | – |
| EuRoC | V2_03 | 0 | 0.25 | 0.24 | – | – |
| uHumans | Office | 12 | 0.23 | 0.20 | 78.5 | 79.1 |
| uHumans | Office | 24 | 0.30 | 0.17 | 73.2 | 81.6 |
| uHumans | Office | 60 | 0.33 | 0.24 | 70.1 | 72.3 |
| uHumans2 | Office | 0 | 0.16 | 0.12 | 72.7 | 78.3 |
| uHumans2 | Office | 6 | 0.17 | 0.10 | 71.3 | 81.5 |
| uHumans2 | Office | 12 | 0.13 | 0.15 | 76.1 | 73.8 |
| uHumans2 | Neighborhood | 0 | 0.27 | 0.09 | 62.9 | 93.3 |
| uHumans2 | Neighborhood | 24 | 0.61 | 0.11 | 51.2 | 93.7 |
| uHumans2 | Neighborhood | 36 | 0.65 | 0.43 | 53.0 | 81.2 |
| uHumans2 | Subway | 0 | 0.42 | 0.26 | 80.1 | 89.3 |
| uHumans2 | Subway | 24 | 0.53 | 0.31 | 73.5 | 86.3 |
| uHumans2 | Subway | 36 | 0.43 | 0.39 | 81.3 | 82.8 |
| uHumans2 | Apartment | 0 | 0.06 | 0.06 | 65.5 | 65.9 |
| uHumans2 | Apartment | 1 | 0.08 | 0.08 | 61.1 | 65.1 |
| uHumans2 | Apartment | 2 | 0.08 | 0.08 | 61.1 | 65.3 |

### 4.6 Parsing Humans and Objects

Here we evaluate the accuracy of human tracking and object localization on the uHumans datasets.

Human Nodes. Table 8 shows the average localization error (mismatch between the estimated and ground-truth pelvis position) for each human on the uHumans datasets. Each column adds a feature of the proposed model that improves performance. The first column reports the error of the detections produced by Kolotouros et al. (2019b) (label: “Single Image”). The second column reports the error for the case in which we filter out detections when the human is only partially visible in the camera image, or when the bounding box of the human is too small ($\leq 30$ pixels; label: “Single Image Filtered”). The third column reports errors with the proposed pose graph model discussed in Section 3.5 (label: “Pose Graph Track.”), which includes PCM outlier rejection and pose-graph pruning. The fourth column reports errors when the mesh feasibility check for data association is enabled (label: “Mesh Check”), and the fifth reports errors when the beta-parameter data-association technique is also enabled (label: “Beta Check”). The simulator's humans all have randomized beta parameters within a known range, to better approximate the distribution of real human appearance.

The Graph-CNN approach (Kolotouros et al., 2019b) for SMPL detections tends to produce incorrect estimates when the human is occluded. Filtering out these detections improves the localization performance, but occlusions due to objects in the scene still result in significant errors. Adding the mesh-feasibility check decreases the error by making data association more effective once detections are registered. The beta-parameter check also significantly decreases the error, signifying that data association can be effectively done using SMPL body-parameter estimation.
Only the apartment scene does not follow the trend: there, results are best without any of the proposed techniques. These are outlier results; the apartment environment had many specular reflections that could have led to false detections.

Object Nodes. Table 8 also reports the average localization errors for objects of unknown and known shape detected in the scene. In both cases, we compute the localization error as the distance between the estimated and the ground-truth centroid of the object (for the objects with known shape, we use the centroid of the fitted CAD model). We use CAD models for objects classified as “couch”, “chair”, and “car” (which we obtain from Unity’s 3D asset store). In both cases, we can correctly localize the objects, while the availability of a CAD model further boosts accuracy. It is worth noting that most of the known objects in our datasets are observed early in the runs, when the localization drift is low, making the object localization almost independent of the drift. Finally, we use ICP to align the 3D ground-truth mesh with the estimated 3D mesh before evaluating the object localization errors.

Table 8: Human and object localization errors in meters. A dash (–) indicates that the object is not present in the scene. The ‘#H’ column indicates the number of humans in the scene. ‘uH1’ and ‘uH2’ stand for the uHumans and uHumans2 datasets respectively.

| Dataset | Scene | #H | Single Image | Single Image Filtered | Pose-Graph Track. | Mesh Check | Beta Check | Unknown Obj. | Known Obj. (Couch) | Known Obj. (Chair) | Known Obj. (Car) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| uH1 | Office | 12 | 2.51 | 1.82 | 1.60 | 1.57 | 1.52 | 1.31 | 0.20 | 0.20 | – |
| | | 24 | 2.54 | 2.03 | 1.80 | 1.67 | 1.50 | 1.70 | 0.35 | 0.35 | – |
| | | 60 | 2.03 | 1.78 | 1.65 | 1.65 | 1.63 | 1.52 | 0.38 | 0.38 | – |
| uH2 | Office | 0 | – | – | – | – | – | 1.27 | 0.19 | 0.19 | – |
| | | 6 | 1.87 | 1.21 | 0.86 | 0.82 | 0.63 | 1.05 | 0.17 | 0.18 | – |
| | | 12 | 2.00 | 1.43 | 1.16 | 1.05 | 0.61 | 1.32 | 0.21 | 0.22 | – |
| | Neighborhood | 0 | – | – | – | – | – | 2.89 | – | – | 2.23 |
| | | 24 | 21.3 | 2.02 | 1.06 | 1.03 | 0.74 | 3.31 | – | – | 3.29 |
| | | 36 | 14.0 | 2.50 | 1.44 | 1.14 | 0.55 | 3.56 | – | – | 3.47 |
| | Subway | 0 | – | – | – | – | – | 3.96 | 2.60 | – | – |
| | | 24 | 8.34 | 6.56 | 5.53 | 5.31 | 1.92 | 3.76 | 2.70 | – | – |
| | | 36 | 7.61 | 5.80 | 5.20 | 5.12 | 2.83 | 3.10 | 2.30 | – | – |
| | Apartment | 0 | – | – | – | – | – | 0.48 | 0.22 | 0.21 | – |
| | | 1 | 4.32 | 4.79 | 5.38 | 5.64 | 6.43 | 0.43 | 0.21 | 0.21 | – |
| | | 2 | 2.83 | 2.52 | 2.66 | 2.69 | 3.79 | 0.45 | 0.21 | 0.21 | – |

### 4.7 Parsing Places and Rooms

We also compute the average precision and recall for the classification of places into rooms. The ground-truth labels are obtained by manually segmenting the places. For the ‘Office’ (Figure 1) and ‘Subway’ (Figure 24) scenes in uHumans2, we obtain an average precision of $99\%$ and $87\%$, respectively, and an average recall of $99\%$ and $92\%$, respectively. Similarly, for the real-life ‘White Owl’ (left DSG in Figure 21) and ‘School’ (Figure 22) scenes, we achieve a precision of $93\%$ and $91\%$, respectively, and an average recall of $94\%$ and $90\%$, respectively. In fact, all the rooms in the ‘Office’, ‘Subway’, ‘White Owl’, and ‘School’ scenes are correctly detected and the places are precisely labelled. Incorrect classifications of places typically occur near doors, where room misclassification is inconsequential.
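The average precision and recall above follow the usual per-room definitions over place nodes; a minimal sketch of this computation, assuming the predicted room ids have already been matched to the ground-truth ids (the function itself is our illustration, not Kimera’s evaluation code):

```python
import numpy as np

def room_precision_recall(pred_rooms, gt_rooms):
    """Average precision/recall of place-to-room classification.
    pred_rooms, gt_rooms: (N,) arrays assigning each place node to a room id."""
    pred_rooms, gt_rooms = np.asarray(pred_rooms), np.asarray(gt_rooms)
    precisions, recalls = [], []
    for r in np.unique(gt_rooms):
        tp = np.sum((pred_rooms == r) & (gt_rooms == r))
        fp = np.sum((pred_rooms == r) & (gt_rooms != r))
        fn = np.sum((pred_rooms != r) & (gt_rooms == r))
        if tp + fp > 0:
            precisions.append(tp / (tp + fp))
        if tp + fn > 0:
            recalls.append(tp / (tp + fn))
    return float(np.mean(precisions)), float(np.mean(recalls))
```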
Nevertheless, our approach has difficulties dealing both with complex architectures, such as the ‘Apartment’ in uHumans2 (right DSG in Figure 21), and with largely incomplete scenes, such as the ‘AeroAstro’ dataset (Figure 23). In particular, for the ‘Apartment’ scene, the presence of exposed ceiling beams distorts the 2D ESDF field such that the living room is over-segmented into three separate rooms: ‘R1’, ‘R3’, and ‘R6’ should be one room node in the DSG on the right side of Figure 21. The precision and recall for the place segmentation of the ‘Apartment’ scene are $68\%$ and $61\%$ respectively. For the ‘AeroAstro’ scene (Figure 23), the corridor parallel to ‘R1’ is also over-segmented into ‘R3’, ‘R5’, and ‘R2’. In this case, the observed free space in the corridor narrows significantly between ‘R2’ and ‘R5’, and between ‘R5’ and ‘R3’, as can be seen from the density of nodes in the topological graph (Figure 23). The precision and recall for the place segmentation of the ‘AeroAstro’ scene are $71\%$ and $67\%$ respectively. Note that with a careful tuning of the parameters for room detection, we can avoid such over-segmentations (Section 3.7). We also discuss the influence of room over-segmentation when performing path-planning queries in Section 6.1. Finally, we note that DSGs are general enough to handle outdoor scenes, such as the ‘Neighborhood’ scene (Figure 25), in which case some of the layers (rooms, buildings) are simply skipped.

### 4.8 Timing

Figure 19 reports the timing performance of Kimera-Core’s modules. The IMU front-end requires around $40\mu\text{s}$ for preintegration, hence it can generate state estimates at IMU rate ($>200$ Hz). The vision front-end module shows a bi-modal distribution since, for every frame, we just perform feature tracking (which takes an average of $7.5$ ms), while, at keyframe rate, we perform feature detection, stereo matching, and geometric verification, which, combined, take an average of $51$ ms. Kimera-Mesher is capable of generating per-frame 3D meshes in less than $7$ ms, while building the multi-frame mesh takes $15$ ms on average. The back-end solves the factor-graph optimization in less than $60$ ms. Loop-closure detection (LCD) takes up to $180$ ms to look for loop-closure candidates, perform geometric verification, and compute the relative transform. Kimera-RPGO, Kimera-PGMO, and Kimera-Semantics run on slower threads since their outputs are not required for time-critical actions (_e.g.,_ control, obstacle avoidance). Kimera-RPGO took up to $50$ ms in our experiments on EuRoC for a pose graph with $734$ edges and $728$ nodes, but in general its runtime depends on the size of the pose graph. Similarly, Kimera-PGMO took up to $140$ ms in our experiments on EuRoC for a pose graph with $14914$ edges and $1759$ nodes ($728$ pose nodes and $1031$ mesh nodes), but in general its runtime depends on the size of the pose graph, the size of the mesh, and the resolution of the simplified mesh. Finally, Kimera-Semantics (not reported in the figure for clarity) takes an average of $0.1\text{s}$ to update the global metric-semantic mesh at each keyframe.

Figure 19: Runtime breakdown for Kimera-VIO, Kimera-Mesher, loop-closure detection (LCD), Kimera-RPGO, and Kimera-PGMO. Note that the timing for Kimera-RPGO and Kimera-PGMO will increase with the size of the pose graph and the mesh. The timing here is collected on the EuRoC V1_01 dataset.
The complete run of the dataset consists of $728$ poses and $71$ loop closures, and the final mesh has $324474$ vertices.

Kimera-DSG’s modules run sequentially once the 3D metric-semantic mesh is built by Kimera-Core. Kimera-Objects segments the object instances from the 3D metric-semantic mesh in minutes, depending on the scale of the scene (from $3$ min for the ‘White Owl’ scene of $\sim 100\text{m}^{2}$ to $12$ min for the largest ‘Subway’ scene of $\sim 3000\text{m}^{2}$), where the instance segmentation is done sequentially, one object class at a time (which could be parallelized). For the known objects, Kimera-Objects fits a CAD model using TEASER++, which runs in milliseconds (more details about the timing performance of TEASER++ are provided in (Yang et al., 2020)). Kimera-Humans first detects humans using GraphCMR (Kolotouros et al., 2019b), which runs in $\sim 33$ ms per image on an Nvidia RTX 2080 Ti GPU, while the tracking is performed on the CPU in milliseconds ($\sim 10$ ms). Kimera-BuildingParser first builds an ESDF out of the TSDF using Voxblox (Oleynikova et al., 2017), which may take several minutes depending on the scale of the scene ($\sim 10$ min for large scenes such as the ‘Subway’). Then, the topological map is built using (Oleynikova et al., 2018), which also takes several minutes depending on the scale of the scene ($\sim 10$ min for the ‘Subway’). Finally, the detection of rooms, the segmentation of places into rooms, and the estimation of the connectivity between rooms take a few minutes, again depending on the size of the scene ($\sim 2$ min for the ‘Subway’ scene). Hence, to build a DSG for a large-scale scene such as the ‘Subway’, Kimera-DSG may take approximately $30$ min, with the computation of the ESDF, the topological map, and the object instance segmentation being the most time-consuming operations.

### 4.9 Timing on NVIDIA TX2 Embedded Computer

We also assessed the performance of Kimera on a low-SWaP processor common to many robotic applications, the Nvidia Jetson TX2. For all benchmarking conducted with the TX2, the MAXN performance mode was used via nvpmodel. We limited our analysis to Kimera-Core. Kimera-VIO was able to achieve real-time performance when run against EuRoC with default settings, but some applications may be more sensitive to latency or processing time. We therefore analyzed the impact of various parameter settings on the runtime of Kimera-VIO. Two candidate parameters were identified: (i) the maximum number of features tracked (maxFeaturesPerFrame), and (ii) the time horizon for smoothing (horizon). Based on this analysis, we provide two additional sets of parameters in the open-source repository: a fast (250 features maximum and a horizon size of 5 seconds) and a faster (200 features maximum and a horizon size of 4.5 seconds) parameter configuration, summarized in the sketch below. The effect of these settings on the processing time of Kimera-VIO is shown in Figure 20. For all configurations, the front-end portion of the module took $10$ ms on average on non-keyframes. Compared to the default settings, the fast and faster configurations take about 85% and 75% of the processing time for keyframes, and about 65% and 50% of the processing time for the back-end, respectively. Note that on more challenging portions of EuRoC, the accuracy of Kimera-VIO degrades under the fast and faster configurations, which may not be desired in all instances.
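The three configurations differ only in the two parameters identified above. The parameter names below are the ones quoted in the text; representing them as a Python dictionary is our own illustrative choice (the repository’s configuration format may differ):

```python
# Kimera-VIO parameter sets evaluated on the TX2 (values as reported above).
VIO_CONFIGS = {
    "default": {"maxFeaturesPerFrame": 300, "horizon": 6.0},  # horizon in seconds
    "fast":    {"maxFeaturesPerFrame": 250, "horizon": 5.0},
    "faster":  {"maxFeaturesPerFrame": 200, "horizon": 4.5},
}
```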
The degradation is most noticeable in the increase in APE for the difficult machine-hall sequences of EuRoC, shown in Figure 20. In addition, we characterized the timing of Kimera-Semantics, again utilizing the EuRoC dataset. With the default settings for Kimera-VIO and for Kimera-Semantics, most notably a voxel size of $0.1$ meters, we were able to achieve suitable performance for real-time operation. Updates to Kimera-Semantics took $65.8$ milliseconds on average across EuRoC. Note that EuRoC data does not include semantic labels, so the voxel size may still need to be tuned to maintain real-time performance in other cases.

(a) TX2 runtime breakdown for Kimera-VIO’s back-end and front-end. The front-end’s timing distribution is bi-modal: its runtime is longer on every keyframe (KF) than its per-frame runtime (non-KF). (b) TX2 ATE breakdown for Kimera-VIO.

Figure 20: Runtime and ATE for Kimera-VIO for different parameter settings across EuRoC, evaluated on the TX2. Labels KF and non-KF denote timing for keyframe insertions in the front-end (involving feature detection and geometric verification through RANSAC) and normal frame insertions (involving feature tracking), respectively. The default, fast, and faster configurations extract a maximum of 300, 250, and 200 features, respectively, and have a horizon size of 6, 5, and 4.5 seconds, respectively.

### 4.10 Real-Life Experiments

In this section, we qualitatively evaluate Kimera’s ability to generate DSGs from real-life datasets, collected using either our custom-made sensing rig (Figure 13) for the ‘School’ and ‘AeroAstro’ datasets, or Microsoft’s Azure Kinect camera for the ‘White Owl’ dataset, as explained in Section 4.1.

#### 4.10.1 ‘School’ & ‘AeroAstro’.

Figure 22 shows the DSG reconstructed by Kimera for the ‘School’ dataset, together with the reconstructed 3D mesh. We notice that the 3D mesh reconstructed by Kimera-Core is globally consistent, despite being noisy and incomplete due to the quality of the Intel RealSense D435i’s depth stream (see Figure 14). In particular, Kimera-PGMO is capable of leveraging loop closures to simultaneously deform the mesh and optimize the trajectory of the sensor. Despite the noisy and incomplete 3D mesh, Kimera-DSG is able to build a meaningful DSG of the scene. Kimera-Objects correctly detects most of the objects and approximates a conservative bounding box. Nevertheless, some spurious detections are present. For example, the green nodes in Figure 22 are supposedly refrigerators, but there are no refrigerators in the ‘School’ dataset. These spurious detections can be easily removed by either fine-tuning the smallest object size valid for a detection, or by re-training Mask-RCNN with the objects in this scene. It is also possible to stop Mask-RCNN from detecting certain object classes that are not present in the scene. Figure 22 also shows that Kimera correctly reconstructs the places layer, and accurately segments the places into rooms. The room layer accurately represents the layout of the scene, with a corridor (‘R3’) and three rooms (‘R1’, ‘R2’, ‘R4’). The edges between the rooms of the DSG correctly represent the traversability between rooms. The fact that the mesh is noisy and incomplete does not seem to negatively affect the higher levels of abstraction of the DSG. Figure 23 shows the DSG reconstructed by Kimera for the ‘AeroAstro’ dataset. Similarly to the ‘School’ dataset, Kimera-Core is able to reconstruct a consistent 3D mesh, which nonetheless remains noisy and incomplete since we used the same sensor.
While the objects and places layers are correctly estimated, Kimera-BuildingParser over-segments the rooms: in particular, it over-segments a corridor into three rooms (‘R2’, ‘R3’, ‘R5’). This is because, when traversing the corridor, the sensor was held too close to the wall, thereby limiting the observed free space. Consequently, the ESDF and the places layer narrow significantly at the locations between rooms ‘R2’ and ‘R5’, and between rooms ‘R5’ and ‘R3’. This narrowing is misinterpreted by Kimera as a separation between rooms, which leads to the over-segmentation of the corridor; a toy sketch of a merging heuristic that counteracts this is given at the end of this section.

#### 4.10.2 ‘White Owl’.

To further assess Kimera’s ability to build a DSG, we use a high-quality depth camera, Microsoft’s Azure Kinect, to reconstruct an accurate and complete 3D mesh of the scene. Figure 21 shows the ‘White Owl’ scene’s DSG (left) next to the simulated ‘Apartment’ scene’s DSG (right), both reconstructed by Kimera. It is remarkable that, regardless of whether the dataset is simulated or real, Kimera is capable of reconstructing a globally consistent 3D mesh as well as a coherent DSG for both scenes, while using the same set of parameters. Furthermore, both reconstructions are qualitatively similar, showing that our simulations are realistic, as can also be seen in Figure 12, and that, given a sufficiently accurate depth sensor, Kimera can achieve accurate reconstructions when using real data. On the ‘White Owl’ dataset, we nonetheless observe some spurious object detections (_e.g.,_ green nodes). The places and rooms layers of the DSG remain accurate, and correctly represent the scene and the connectivity between layers.

Figure 21: (Left) DSG of the ‘White Owl’ apartment (top left) from data collected using Microsoft’s Azure Kinect sensor, and a top-down view of the reconstructed 3D mesh (bottom left). (Right) DSG of the simulated uHumans2 ‘Apartment’ scene (top right), and a top-down view of the reconstructed 3D mesh (bottom right). Note that despite the data being generated from real-life (left) and simulated (right) sensors, Kimera is capable of reconstructing a globally consistent 3D mesh and a coherent DSG of comparable quality, using the same set of parameters.

Figure 22: (Top) Real-life DSG for the ‘School’ dataset using a custom-made sensing rig with Intel RealSense D435i sensors (Section 4.1). (Bottom) The globally consistent 3D mesh of the scene reconstructed by Kimera. Despite the noisy and incomplete reconstruction, the DSG correctly abstracts the scene into three rooms and a corridor (‘R3’).

Figure 23: Real-life DSG for the ‘AeroAstro’ dataset using a custom-made sensing rig with Intel RealSense D435i sensors (Section 4.1). The 3D mesh reconstructed by Kimera is globally consistent, with the loop around the office being accurately closed. While the DSG reconstructed by Kimera is coherent, the corridor parallel to ‘R1’ is over-segmented into three rooms (‘R2’, ‘R5’, ‘R3’); see Section 4.10.

Figure 24: DSG for the subway dataset in uHumans2. Room ‘R2’ corresponds to the subway’s lobby (mezzanine), while room ‘R1’ corresponds to the subway’s platform. The fare-collection gates physically separate these two rooms. Segmented benches are shown in cyan, and bins in purple.

Figure 25: DSG for the neighborhood dataset in uHumans2. Despite not having multiple rooms, a DSG can also be built for outdoor scenes. Segmented cars are shown in blue, and bins in orange.
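Returning to the over-segmentation issue above: one conceptual knob for the room-detection tuning mentioned in Section 3.7 is to merge candidate rooms whose connecting free space is wider than a door. The following toy sketch illustrates the idea, assuming each candidate passage is summarized by its minimum ESDF clearance; this is a conceptual illustration only, not Kimera-BuildingParser’s implementation:

```python
def merge_spurious_rooms(rooms, passages, door_clearance=0.4):
    """Merge candidate rooms whose connecting passage is wider than a door.
    rooms: dict room_id -> set of place ids.
    passages: iterable of (room_a, room_b, clearance), where clearance [m] is
    the minimum ESDF value along the places connecting the two rooms."""
    parent = {r: r for r in rooms}          # union-find over room ids
    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path halving
            r = parent[r]
        return r
    for a, b, clearance in passages:
        if clearance > door_clearance:      # wider than a door: likely one space
            parent[find(a)] = find(b)
    merged = {}
    for r, place_ids in rooms.items():
        merged.setdefault(find(r), set()).update(place_ids)
    return merged
```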
## 5 Motivating Examples

We highlight the actionable nature of a 3D Dynamic Scene Graph by providing examples of queries it enables.

Obstacle Avoidance and Planning. Agents, objects, and rooms in our DSG have a bounding-box attribute. Moreover, the hierarchical nature of the DSG ensures that bounding boxes at higher layers contain bounding boxes at lower layers (_e.g.,_ the bounding box of a room contains the objects in that room). This forms a _Bounding Volume Hierarchy_ (BVH) (Larsson and Akenine-Möller, 2006), which is extensively used for collision checking in computer graphics. BVHs provide readily available opportunities to speed up obstacle avoidance and motion-planning queries, where collision checking is often used as a primitive (Karaman and Frazzoli, 2011).

Human-Robot Interaction. As already explored in (Armeni et al., 2019; Kim et al., 2019), a scene graph can support user-oriented tasks, such as interactive visualization and _Question Answering_. Our Dynamic Scene Graph extends the reach of (Armeni et al., 2019; Kim et al., 2019) by (i) allowing visualization of human trajectories and dense poses (see visualization in the video attachment), and (ii) enabling more complex and time-aware queries such as “where was this person at time $t$?”, or “which object did this person pick up in Room A?”. Furthermore, DSGs provide a framework to model plausible interactions between agents and scenes (Zhang et al., 2019b; Hassan et al., 2019; Pirk et al., 2017; Monszpart et al., 2019). We believe DSGs also complement the work on natural-language grounding (Kollar et al., 2017), where one of the main concerns is to reason over the variability of human instructions.

Long-term Autonomy. DSGs provide a natural way to “forget” or retain information in long-term autonomy. By construction, higher layers in the DSG are more compact and abstract representations of the scene. Hence, the robot can “forget” portions of the environment that are not frequently observed by simply pruning the corresponding branch of the DSG. For instance, to forget a room in Figure 1, we only need to prune the corresponding node and the connected nodes at lower layers (places, objects, etc.). More importantly, the robot can selectively decide which information to retain: for instance, it can keep all the objects (which are typically fairly cheap to store), but selectively forget the mesh model, which can be more cumbersome to store in large environments. Finally, DSGs inherit the memory advantages afforded by standard scene graphs: if the robot detects $N$ instances of a known object (_e.g.,_ a chair), it can simply store a _single_ CAD model and cross-reference it in $N$ nodes of the scene graph; this simple observation enables further data compression.

Prediction. The combination of a dense metric-semantic mesh model and a rich description of the agents allows performing short-term predictions of the scene dynamics and answering queries about possible future outcomes. For instance, one can feed the mesh model to a physics simulator and roll out potential high-level actions of the human agents.

## 6 Applications

In the remainder, we present two particular applications that we developed, and which we evaluate in Section 6.3.
### 6.1 Hierarchical Path Planning

The multiple levels of abstraction afforded by a DSG have the potential to enable hierarchical and multi-resolution planning approaches (Schleich et al., 2019; Larsson et al., 2019), where a robot can plan at different levels of abstraction to save computational resources. In fact, querying paths from one point to another is computationally expensive when using volumetric maps, even in small scenes. With DSGs, we can instead compute a path in a hierarchical fashion, which accelerates the path-planning query by several orders of magnitude. To showcase this capability, we first compute a feasible shortest path (using A∗) at the level of buildings. Given the building nodes to traverse, we extract their room layer and re-plan at the level of rooms. Similarly, given the room nodes to traverse, we further extract the relevant graph of places and re-plan again at this level. Finally, we extract a smooth collision-free path using the open-source work of (Oleynikova et al., 2016, 2018). While querying a path at the level of the volumetric ESDF takes minutes, similar queries at higher levels of abstraction finish in milliseconds (Section 6.3). Note that, as mentioned in Section 3.7, Kimera may over-segment a room into multiple rooms in the DSG. Nevertheless, since this over-segmentation is typically small (2 or 3 rooms), the runtime of hierarchical path planning remains on the order of milliseconds.

### 6.2 Semantic Path Planning

DSGs also provide a powerful tool for high-level path-planning queries involving natural language, by leveraging the semantic map and the different levels of abstraction. For instance, the (connected) subgraph of places and objects in a DSG can be used to issue the robot a high-level command (_e.g.,_ object search (Joho et al., 2011)): the robot can directly infer the closest place in the DSG it has to reach to complete the task, and can plan a feasible path to that place. The volumetric representation, instead, is not amenable to high-level path-planning queries, since the operator needs to provide metric coordinates to the robot. With DSGs, the user can use natural language (“reach the cup near the sofa”). In Section 6.3, we showcase examples of semantic queries we give to the robot, which we then use as input for hierarchical path planning.

Table 9: Hierarchical semantic path planning using A∗ at different levels of abstraction. Planning at the level of buildings (B), rooms (R), and then places (P), in a hierarchical fashion, is several orders of magnitude faster than planning at the volumetric level (ESDF). We ‘copy-paste’ the 3D DSG of the office scene from uHumans to form a neighborhood of offices in a grid-like pattern where each office is a building, thus creating a scenario of a much larger scale. We also show the number of nodes and edges in the path, as well as the length of the estimated trajectories for comparison.
| Dataset | Scene | #B | ESDF [s] | Hier. P [s] | Hier. R [s] | Hier. B [s] | Hier. Total [s] | P Nodes/Edges | R Nodes/Edges | B Nodes/Edges | ESDF Length [m] | Hier. Length [m] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| uHumans | Office | 1 | 420.4 | 0.339 | 0.001 | 0.000 | 0.340 | 89 / 120 | 16 / 9 | 1 / 0 | 54.2 | 56.3 |
| | | 2 | 944.8 | 0.771 | 0.004 | 0.000 | 0.775 | 178 / 241 | 32 / 19 | 2 / 1 | 101.2 | 109.8 |
| | | 6 | 21389.2 | 1.231 | 0.012 | 0.001 | 1.244 | 534 / 730 | 96 / 64 | 6 / 7 | 311.5 | 329.2 |

### 6.3 Path Planning Performance on DSGs

Table 9 shows the timing performance of our semantic hierarchical path-planning implementation, where we run A∗ at the level of buildings, rooms, and then places, in a hierarchical fashion, as described in Section 6.1, and compare its timing performance against running A∗ directly on the volumetric ESDF representation. The scalability of our approach is further emphasized by taking the “Office” scene of the uHumans dataset (see Fig. 1a, and floor plans in Fig. 11) and replicating it in a grid-like pattern to increase its scale (where each new “Office” scene is considered a building). As shown in Table 9, hierarchical path planning outperforms by several orders of magnitude the timing performance of planning at the volumetric ESDF level, thereby making path planning run at interactive speeds for large-scale scenes. We also report the length of the estimated trajectories when using the ESDF and when using the hierarchical path-planning approach. We observe in Table 9 that, despite being slightly longer, the trajectories from our hierarchical approach are nearly as short as the ESDF ones. Furthermore, the queries given to our hierarchical path-planning module are of the type: “get near any object $x$ in room $y$ of building $z$,” where $x\in X=\{\text{objects in room }y\text{ of building }z\}$, $y\in Y=\{\text{rooms in building }z\}$, and $z\in Z=\{1,\dots,N\}$, with $N$ being the number of buildings in the scene. This is in stark contrast with metric coordinates $x,y,z\in\mathbb{R}^{3}$, which shows the ability to run semantically meaningful path-planning queries on DSGs.
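To make the building → room → place recursion of Section 6.1 concrete, the following sketch runs it on a toy DSG using networkx’s A∗. The node schema and attribute names are our own simplifications, and the final ESDF-based smoothing step is omitted:

```python
import networkx as nx

def hierarchical_plan(dsg, start_place, goal_place):
    """A* on a toy DSG, successively restricted to finer layers.

    dsg: {'buildings', 'rooms', 'places'} -> nx.Graph; each room node stores
    a 'building' attribute and each place node a 'room' attribute.
    """
    rooms, places = dsg['rooms'], dsg['places']
    start_room = places.nodes[start_place]['room']
    goal_room = places.nodes[goal_place]['room']
    # 1) Plan at the level of buildings.
    bldg_path = nx.astar_path(dsg['buildings'],
                              rooms.nodes[start_room]['building'],
                              rooms.nodes[goal_room]['building'])
    # 2) Re-plan over the rooms belonging to the selected buildings.
    room_graph = rooms.subgraph(
        [r for r in rooms if rooms.nodes[r]['building'] in bldg_path])
    room_path = nx.astar_path(room_graph, start_room, goal_room)
    # 3) Re-plan over the places belonging to the selected rooms; an
    #    ESDF-based local smoother would then refine this path.
    place_graph = places.subgraph(
        [p for p in places if places.nodes[p]['room'] in room_path])
    return nx.astar_path(place_graph, start_place, goal_place)
```

A semantic query such as “reach the cup near the sofa” then amounts to selecting goal_place as the place node attached to the matching object node before calling hierarchical_plan.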
## 7 Related Work

We review environment representations used in robotics and computer vision (Section 7.1), and algorithms involved in building these representations (Section 7.2).

### 7.1 World Representations

Scene Graphs. Scene graphs are popular computer-graphics models used to describe, manipulate, and render complex scenes, and are commonly used in game engines (Wang and Qian, 2010). While in gaming applications these structures are used to describe 3D environments, in computer vision scene graphs have mostly been used to abstract the content of 2D images. Krishna et al. (2016) use a scene graph to model attributes and relations among objects in 2D images, relying on manually defined natural-language captions. Xu et al. (2017) and Li et al. (2017) develop algorithms for 2D scene-graph generation. 2D scene graphs have been used for image retrieval (Johnson et al., 2015), captioning (Krause et al., 2017; Anderson et al., 2016; Johnson et al., 2017), high-level understanding (Choi et al., 2013; Zhao and Zhu, 2013a; Huang et al., 2018b; Jiang et al., 2018), visual question answering (Fukui et al., 2016; Zhu et al., 2016), and action detection (Lu et al., 2016; Liang et al., 2017; Zhang et al., 2017). Armeni et al. (2019) propose a _3D scene graph_ model to describe 3D static scenes, and describe a semi-automatic algorithm to build the scene graph. In parallel to (Armeni et al., 2019), Kim et al. (2019) propose a 3D scene graph model for robotics, which, however, only includes objects as nodes and misses the multiple levels of abstraction afforded by (Armeni et al., 2019) and by our proposal.

Representations and Abstractions in Robotics. The question of world modeling and map representations has been central in the robotics community since its inception (Thrun, 2003; Cadena et al., 2016). The need for hierarchical maps that capture rich spatial and semantic information was already recognized in seminal papers by Kuipers, Chatila, and Laumond (Kuipers, 2000, 1978; Chatila and Laumond, 1985). Vasudevan et al. (2006) propose a hierarchical representation of object constellations. Galindo et al. (2005) use two parallel hierarchical representations (a spatial and a semantic representation) that are then _anchored_ to each other and estimated using 2D lidar data. Ruiz-Sarmiento et al. (2017) extend the framework in (Galindo et al., 2005) to account for uncertain groundings between spatial and semantic elements. Zender et al. (2008) propose a single hierarchical representation that includes a 2D map, a navigation graph, and a topological map (Ranganathan and Dellaert, 2004; Remolina and Kuipers, 2004), which are then further abstracted into a _conceptual map_. Note that the spatial hierarchies in (Galindo et al., 2005) and (Zender et al., 2008) already resemble a scene graph, albeit with a less articulated set of nodes and layers. A more fundamental difference is that early work (i) did not reason over 3D models (but focused on 2D occupancy maps), (ii) did not tackle dynamical scenes, and (iii) did not include dense (e.g., pixel-wise) semantic information, which has been enabled in recent years by deep-learning methods.

### 7.2 Perception Algorithms

SLAM and VIO in Dynamic Environments. This paper is also concerned with modeling and gaining robustness against dynamic elements in the scene. SLAM and moving-object tracking (sometimes referred to as _SLAMMOT_ (Wang et al., 2007) or as SLAM and Detection And Tracking of Moving Objects, _DATMO_ (Azim and Aycard, 2012)) has been extensively investigated in robotics (Wang et al., 2007), while more recent work focuses on joint visual-inertial odometry and target pose estimation (Qiu et al., 2019; Eckenhoff et al., 2019; Geneva et al., 2019). Most of the existing literature in robotics models the dynamic targets as a single 3D point (Chojnacki and Indelman, 2018) or with a 3D pose, and relies on lidar (Azim and Aycard, 2012), RGB-D cameras (Aldoma et al., 2013), monocular cameras (Li et al., 2018b), or visual-inertial sensing (Qiu et al., 2019). Related work also attempts to gain robustness against dynamic scenes by using an IMU (Hwangbo et al., 2009), masking portions of the scene corresponding to dynamic elements (Cui and Ma, 2019; Brasch et al., 2018; Bescos et al., 2018), or jointly tracking the camera and the dynamic objects (Wang et al., 2007; Bescos et al., 2020). To the best of our knowledge, the present paper is the first work that attempts to perform visual-inertial SLAM, segment dense object models, estimate the 3D poses of known objects, and reconstruct and track dense human SMPL meshes.

Metric-Semantic Scene Reconstruction. This line of work is concerned with estimating metric-semantic (but typically non-hierarchical) representations from sensor data.
While early work (Bao and Savarese, 2011; Brostow et al., 2008) focused on offline processing, recent years have seen a surge of interest towards _real-time_ metric-semantic mapping, triggered by pioneering works such as SLAM++ (Salas-Moreno et al., 2013). _Object-based approaches_ compute an object map and include SLAM++ (Salas-Moreno et al., 2013), XIVO (Dong et al., 2017), OrcVIO (Shan et al., 2019), QuadricSLAM (Nicholson et al., 2018), and (Bowman et al., 2017). For most robotics applications, an object-based map does not provide enough resolution for navigation and obstacle avoidance. _Dense approaches_ build denser semantically annotated models in the form of point clouds (Behley et al., 2019; Tateno et al., 2015; Dubé et al., 2018; Lianos et al., 2018), meshes (Rosinol et al., 2020a; Rosu et al., 2019), surfels (Whelan et al., 2015; Rünz et al., 2018; Wald et al., 2018), or volumetric models (McCormac et al., 2017; Rosinol et al., 2020a; Grinvald et al., 2019; Narita et al., 2019). Other approaches use both objects and dense models; see Li et al. (2016) and Fusion++ (McCormac et al., 2018). These approaches focus on static environments. Approaches that deal with moving objects, such as DynamicFusion (Newcombe et al., 2015), Mask-fusion (Rünz et al., 2018), Co-fusion (Rünz and Agapito, 2017), and MID-Fusion (Xu et al., 2019), are currently limited to small table-top scenes and focus on objects or dense maps, rather than scene graphs. Most of these works rely on GPU processing (McCormac et al., 2017; Zheng et al., 2019; Tateno et al., 2015; Li et al., 2016; McCormac et al., 2018; Rünz et al., 2018; Rünz and Agapito, 2017; Xu et al., 2019). Recent work investigates CPU-based approaches in combination with RGB-D sensing, _e.g.,_ Wald et al. (2018), PanopticFusion (Narita et al., 2019), and Voxblox++ (Grinvald et al., 2019). A sparser set of contributions addresses other sensing modalities, including monocular cameras (_e.g.,_ CNN-SLAM (Tateno et al., 2017), VSO (Lianos et al., 2018), VITAMIN-E (Yokozuka et al., 2019), XIVO (Dong et al., 2017)) and lidar (Behley et al., 2019; Dubé et al., 2018).

Loop Closure with Dense Representations. This line of work is concerned with correcting a dense representation of the environment (_e.g.,_ point clouds, meshes, voxels) after a loop closure occurs. LSD-SLAM (Engel et al., 2014) represents the environment with a point cloud; loop closures do not have a direct effect on the point-cloud map, but rather correct the pose graph associated with the camera keyframes, and the semi-dense local maps attached to each keyframe are updated accordingly. We are especially interested in the cases where the environment is represented as a mesh and the loop closures are enforced by deforming this mesh. Kintinuous (Whelan et al., 2013) accomplishes this in two optimization steps: first it optimizes the pose graph; then, by utilizing the relationship between the vertices of the mesh and the poses in the pose graph, it uses the optimized pose graph as measurement constraints to deform the mesh with a _deformation graph_ (Sumner et al., 2007). MIS-SLAM (Song et al., 2018) also uses the _deformation graph_ approach to deform the model point cloud using the estimate from ORB-SLAM (Mur-Artal and Tardós, 2017). ElasticFusion (Whelan et al., 2015) instead deforms a dense map of surfels, and GravityFusion (Puri et al., 2017) builds on top of ElasticFusion by enforcing a consistent gravity direction among all the surfels.
Voxgraph (Reijgwart et al., 2020) builds a globally consistent volumetric map by applying graph optimization over a set of submap poses, including odometry and loop-closure constraints. Similarly, DynamicFusion (Newcombe et al., 2015), VolumeDeform (Innmann et al., 2016), and Fusion4D (Dou et al., 2016) use a volumetric representation for fusion and deformation. Our approach is the first to simultaneously optimize the pose graph and the mesh, and also the first to formalize this problem as a pose-graph optimization.

Metric-to-Topological Scene Parsing. This line of work focuses on partitioning a metric map into semantically meaningful places (_e.g.,_ rooms, hallways). Nüchter and Hertzberg (2008) encode relations among planar surfaces (_e.g.,_ walls, floor, ceiling) and detect objects in the scene. Blanco et al. (2009) and Gomez et al. (2020) propose a hybrid metric-topological map. Friedman et al. (2007) propose _Voronoi Random Fields_ to obtain an abstract model of a 2D grid map. Rogers and Christensen (2012) and Lin et al. (2013) leverage objects to perform a joint object-and-place classification, while Nie et al. (2020), Huang et al. (2018a), and Zhao and Zhu (2013b) jointly solve the problems of scene understanding and reconstruction. Pangercic et al. (2012) reason on the objects’ functionality. Pronobis and Jensfelt (2012) use a Markov Random Field to segment a 2D grid map. Zheng et al. (2018) infer the topology of a grid map using a _Graph-Structured Sum-Product Network_, while Zheng and Pronobis (2019) use a neural network. Armeni et al. (2016) focus on a 3D mesh, and propose a method to parse a building into rooms. Floor-plan estimation has also been investigated using single images (Hedau et al., 2009; Schwing et al., 2013), omnidirectional images (Lukierski et al., 2017), 2D lidar (Li and Stevenson, 2020; Turner and Zakhor, 2014), 3D lidar (Mura et al., 2014; Ochmann et al., 2014), RGB-D (Liu et al., 2018), or crowd-sourced mobile-phone trajectories (Alzantot and Youssef, 2012). The works (Armeni et al., 2016; Mura et al., 2014; Ochmann et al., 2014) are closest to our proposal, but contrary to (Armeni et al., 2016) we do not rely on a Manhattan World assumption, and contrary to (Mura et al., 2014; Ochmann et al., 2014) we operate on a mesh model. Recently, Wald et al. (2020) propose to learn from point clouds a 3D semantic scene graph that focuses on representing semantically meaningful inter-instance relationships.

Human Pose Estimation and Tracking. Human pose and shape estimation from a single image is a growing research area. While we refer the reader to (Kolotouros et al., 2019b, 2019, a; Kocabas et al., 2020) for a broader review, it is worth mentioning that related work includes optimization-based approaches, which fit a 3D mesh to 2D image keypoints (Bogo et al., 2016; Lassner et al., 2017; Zanfir et al., 2018; Kolotouros et al., 2019; Yang and Carlone, 2020), and learning-based methods, which infer the mesh directly from pixel information (Tan et al., 2017; Kanazawa et al., 2018; Omran et al., 2018; Pavlakos et al., 2018; Kolotouros et al., 2019b, 2019). Human models are typically parametrized using the _Skinned Multi-Person Linear Model_ (SMPL) (Loper et al., 2015), which provides a compact pose and shape description and can be rendered as a mesh with 6890 vertices and 23 joints.
The common approach to monocular human tracking is to predict joint probabilities in 2D image space, which are then lifted to 3D joints based on multiple time-series observations and motion priors (Andriluka et al., 2010, 2008; Arnab et al., 2019; Bridgeman et al., 2019; Elhayek et al., 2012; Zhou et al., 2018b; Wang et al., 2020). Taylor et al. (2010) combine a learned motion model with particle filtering to predict 3D human poses. In this work, we aim to estimate not only the 3D pose of the human, but also the full SMPL shape, without maintaining the persistent image history required by many of the approaches above. In the human-tracking literature, only Arnab et al. (2019) fully reconstruct the SMPL shape of the human; however, they reconstruct the shape after performing data association over multiple timesteps. In contrast, we use the method of Kolotouros et al. (2019b) to directly obtain the full 3D pose of the human at each timestep, simplifying pose estimation and allowing us to perform data association based on the SMPL body shape.

## 8 Conclusion

We introduced _3D Dynamic Scene Graphs_, a unified representation for actionable spatial perception. Moreover, we presented Kimera, the first _Spatial Perception Engine_ that builds a DSG from visual-inertial data in a fully automatic fashion. We showcased Kimera in photo-realistic simulations and on real data, and discussed applications enabled by the proposed DSG representation, including semantic and hierarchical path planning. This paper opens several research avenues. First of all, it would be desirable to develop spatial perception engines that run incrementally and in real time. Currently, while the creation of the metric-semantic reconstruction happens in real time, the rest of the scene graph is built at the end of the run and requires a few minutes to parse the entire scene. Second, it would be interesting to design engines that estimate a DSG from heterogeneous sensors and from sensor data collected by multiple robots. Finally, a natural direction is to enrich a DSG with other physical attributes, including material type and affordances for objects, and to learn attributes and relations from data.

## Acknowledgments

We are thankful to Dan Griffith, Ben Smith, Arjun Majumdar, and Zac Ravichandran for open-sourcing the TESSE simulator, and to Winter Guerra and Varun Murali for the discussions about Unity.

## Disclaimer

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering.

## References

* Abdulla (2017) Abdulla W (2017) Mask R-CNN for object detection and instance segmentation on keras and tensorflow. https://github.com/matterport/Mask_RCNN. * Aldoma et al. (2013) Aldoma A, Tombari F, Prankl J, Richtsfeld A, Di Stefano L and Vincze M (2013) Multimodal cue integration through hypotheses verification for RGB-D object recognition and 6DOF pose estimation. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. pp. 2104–2111. * Alzantot and Youssef (2012) Alzantot M and Youssef M (2012) CrowdInside: Automatic Construction of Indoor Floorplans. In: _Proc. of the 20th International Conference on Advances in Geographic Information Systems_. pp. 99–108.
* Anderson et al. (2016) Anderson P, Fernando B, Johnson M and Gould S (2016) Spice: Semantic propositional image caption evaluation. In: _European Conf. on Computer Vision (ECCV)_. pp. 382–398. * Andriluka et al. (2008) Andriluka M, Roth S and Schiele B (2008) People-tracking-by-detection and people-detection-by-tracking. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 1–8. * Andriluka et al. (2010) Andriluka M, Roth S and Schiele B (2010) Monocular 3D pose estimation and tracking by detection. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 623–630. * Armeni et al. (2019) Armeni I, He Z, Gwak J, Zamir A, Fischer M, Malik J and Savarese S (2019) 3D scene graph: A structure for unified semantics, 3D space, and camera. In: _Intl. Conf. on Computer Vision (ICCV)_. pp. 5664–5673. * Armeni et al. (2016) Armeni I, Sener O, Zamir A, Jiang H, Brilakis I, Fischer M and Savarese S (2016) 3d semantic parsing of large-scale indoor spaces. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 1534–1543. * Arnab et al. (2019) Arnab A, Doersch C and Zisserman A (2019) Exploiting temporal context for 3D human pose estimation in the wild. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. pp. 3395–3404. * Azim and Aycard (2012) Azim A and Aycard O (2012) Detection, classification and tracking of moving objects in a 3d environment. In: _2012 IEEE Intelligent Vehicles Symposium_. pp. 802–807. * Badrinarayanan et al. (2017) Badrinarayanan V, Kendall A and Cipolla R (2017) SegNet: A deep convolutional encoder-decoder architecture for image segmentation. _IEEE Trans. Pattern Anal. Machine Intell._ . * Bao and Savarese (2011) Bao SYZ and Savarese S (2011) Semantic structure from motion. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. * Behley et al. (2019) Behley J, Garbade M, Milioto A, Quenzel J, Behnke S, Stachniss C and Gall J (2019) SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences. In: _Intl. Conf. on Computer Vision (ICCV)_. * Bescos et al. (2020) Bescos B, Campos C, Tardós JD and Neira J (2020) DynaSLAM II: Tightly-coupled multi-object tracking and SLAM. _arXiv preprint arXiv:2010.07820_ . * Bescos et al. (2018) Bescos B, Fácil JM, Civera J and Neira J (2018) DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes. _IEEE Robotics and Automation Letters_ 3(4): 4076–4083. * Besl and McKay (1992) Besl PJ and McKay ND (1992) A method for registration of 3-D shapes. _IEEE Trans. Pattern Anal. Machine Intell._ 14(2). * Blanco et al. (2009) Blanco JL, González J and Fernández-Madrigal JA (2009) Subjective local maps for hybrid metric-topological SLAM. _Robotics and Autonomous Systems_ 57: 64–74. * Bloesch et al. (2015) Bloesch M, Omari S, Hutter M and Siegwart R (2015) Robust visual inertial odometry using a direct EKF-based approach. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. IEEE. * Bogo et al. (2016) Bogo F, Kanazawa A, Lassner C, Gehler P, Romero J and Black MJ (2016) Keep it SMPL: Automatic estimation of 3d human pose and shape from a single image. In: Leibe B, Matas J, Sebe N and Welling M (eds.) _European Conf. on Computer Vision (ECCV)_. * Bouguet (2000) Bouguet J (2000) Pyramidal implementation of the Lucas Kanade feature tracker. * Bowman et al. (2017) Bowman S, Atanasov N, Daniilidis K and Pappas G (2017) Probabilistic data association for semantic SLAM. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. pp. 
1722–1729. * Brasch et al. (2018) Brasch N, Bozic A, Lallemand J and Tombari F (2018) Semantic monocular SLAM for highly dynamic environments. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. pp. 393–400. * Briales and Gonzalez-Jimenez (2017) Briales J and Gonzalez-Jimenez J (2017) Cartan-sync: Fast and global SE(d)-synchronization. _IEEE Robot. Autom. Lett_ 2(4): 2127–2134. * Bridgeman et al. (2019) Bridgeman L, Volino M, Guillemaut JY and Hilton A (2019) Multi-person 3D pose estimation and tracking in sports. In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_. * Brostow et al. (2008) Brostow GJ, Shotton J, Fauqueur J and Cipolla R (2008) Segmentation and recognition using structure from motion point clouds. In: _European Conf. on Computer Vision (ECCV)_. pp. 44–57. * Burri et al. (2016) Burri M, Nikolic J, Gohl P, Schneider T, Rehder J, Omari S, Achtelik M and Siegwart R (2016) The EuRoC micro aerial vehicle datasets. _Intl. J. of Robotics Research_ . * Cadena et al. (2016) Cadena C, Carlone L, Carrillo H, Latif Y, Scaramuzza D, Neira J, Reid I and Leonard J (2016) Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. _IEEE Trans. Robotics_ 32(6): 1309–1332. 10.1109/TRO.2016.2624754. ArXiv preprint: 1606.05830. * Campos et al. (2021) Campos C, Elvira R, Rodríguez JJG, Montiel JM and Tardós JD (2021) ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. _IEEE Trans. Robotics_ . * Carlone et al. (2014) Carlone L, Kira Z, Beall C, Indelman V and Dellaert F (2014) Eliminating conditionally independent sets in factor graphs: A unifying perspective based on smart factors. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. pp. 4290–4297. * Chatila and Laumond (1985) Chatila R and Laumond JP (1985) Position referencing and consistent world modeling for mobile robots. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. pp. 138–145. * Chen et al. (2017) Chen LC, Papandreou G, Kokkinos I, Murphy K and Yuille AL (2017) DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. _IEEE Trans. Pattern Anal. Machine Intell._ 40(4): 834–848. * Choi et al. (2013) Choi W, Chao Y, Pantofaru C and Savarese S (2013) Understanding indoor scenes using 3d geometric phrases. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 33–40. * Chojnacki and Indelman (2018) Chojnacki M and Indelman V (2018) Vision-based dynamic target trajectory and ego-motion estimation using incremental light bundle adjustment. _International Journal of Micro Air Vehicles_ 10(2): 157–170. * Cloudcompare.org (2019) CloudCompare.org (2019) CloudCompare - open source project. https://www.cloudcompare.org. * Cui and Ma (2019) Cui L and Ma C (2019) SOF-SLAM: A semantic visual SLAM for dynamic environments. _IEEE Access_ 7: 166528–166539. * Dai et al. (2017) Dai A, Nießner M, Zollhöfer M, Izadi S and Theobalt C (2017) Bundlefusion: Real-time globally consistent 3d reconstruction using on-the-fly surface reintegration. _ACM Transactions on Graphics (ToG)_ 36(4): 1. * Davison (2018) Davison AJ (2018) FutureMapping: The computational structure of spatial AI systems. _arXiv preprint arXiv:1803.11288_ . * Dellaert (2012) Dellaert F (2012) Factor graphs and GTSAM: A hands-on introduction. Technical Report GT-RIM-CP&R-2012-002, Georgia Institute of Technology.
* Dellaert and Kaess (2017) Dellaert F and Kaess M (2017) Factor graphs for robot perception. _Foundations and Trends in Robotics_ 6(1-2): 1–139. * Delmerico and Scaramuzza (2018) Delmerico J and Scaramuzza D (2018) A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots. In: _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, pp. 2502–2509. * Dong et al. (2017) Dong J, Fei X and Soatto S (2017) Visual-Inertial-Semantic scene representation for 3D object detection. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. * Dou et al. (2016) Dou M, Khamis S, Degtyarev Y, Davidson P, Fanello SR, Kowdle A, Escolano SO, Rhemann C, Kim D, Taylor J, Kohli P, Tankovich V and Izadi S (2016) Fusion4D: Real-time performance capture of challenging scenes. _ACM Trans. Graph._ 35(4). 10.1145/2897824.2925969. URL https://doi.org/10.1145/2897824.2925969. * Dubé et al. (2018) Dubé R, Cramariuc A, Dugas D, Nieto J, Siegwart R and Cadena C (2018) SegMap: 3d segment mapping using data-driven descriptors. In: _Robotics: Science and Systems (RSS)_. * Eckenhoff et al. (2019) Eckenhoff K, Yang Y, Geneva P and Huang G (2019) Tightly-coupled visual-inertial localization and 3D rigid-body target tracking. _IEEE Robotics and Automation Letters_ 4(2): 1541–1548. * Elhayek et al. (2012) Elhayek A, Stoll C, Hasler N, Kim KI, Seidel HP and Theobalt C (2012) Spatio-temporal motion tracking with unsynchronized cameras. In: _2012 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, pp. 1870–1877. * Engel et al. (2014) Engel J, Schöps T and Cremers D (2014) LSD-SLAM: Large-scale direct monocular SLAM. In: _European Conf. on Computer Vision (ECCV)_. pp. 834–849. * Enqvist et al. (2011) Enqvist O, Kahl F and Olsson C (2011) Non-sequential structure from motion. In: _Intl. Conf. on Computer Vision (ICCV)_. pp. 264–271. * Everett et al. (2018) Everett M, Chen YF and How J (2018) Motion planning among dynamic, decision-making agents with deep reinforcement learning. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. * Forster et al. (2015) Forster C, Carlone L, Dellaert F and Scaramuzza D (2015) IMU preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation. In: _Robotics: Science and Systems (RSS)_. Accepted as oral presentation (acceptance rate $4\%$). * Forster et al. (2017) Forster C, Carlone L, Dellaert F and Scaramuzza D (2017) On-manifold preintegration for real-time visual-inertial odometry. _IEEE Trans. Robotics_ 33(1): 1–21. ArXiv preprint: 1512.02363; technical report GT-IRIM-CP&R-2015-001. * Forster et al. (2014) Forster C, Pizzoli M and Scaramuzza D (2014) SVO: Fast Semi-Direct Monocular Visual Odometry. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. 10.1109/ICRA.2014.6906584. * Friedman et al. (2007) Friedman S, Pasula H and Fox D (2007) Voronoi random fields: Extracting the topological structure of indoor environments via place labeling. In: _Intl. Joint Conf. on AI (IJCAI)_. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., pp. 2109–2114. * Fukui et al. (2016) Fukui A, Park D, Yang D, Rohrbach A, Darrell T and Rohrbach M (2016) Multimodal compact bilinear pooling for visual question answering and visual grounding. In: _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_. Austin, Texas: Association for Computational Linguistics, pp. 457–468. 10.18653/v1/D16-1044. * Furgale et al. (2013) Furgale P, Rehder J and Siegwart R (2013) Unified temporal and spatial calibration for multi-sensor systems.
In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. * Galindo et al. (2005) Galindo C, Saffiotti A, Coradeschi S, Buschka P, Fernández-Madrigal J and González J (2005) Multi-hierarchical semantic maps for mobile robotics. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. pp. 3492–3497. * Gálvez-López and Tardós (2012) Gálvez-López D and Tardós JD (2012) Bags of binary words for fast place recognition in image sequences. _IEEE Transactions on Robotics_ 28(5): 1188–1197. 10.1109/TRO.2012.2197158. * Garcia-Garcia et al. (2017) Garcia-Garcia A, Orts-Escolano S, Oprea S, Villena-Martinez V and García-Rodríguez J (2017) A review on deep learning techniques applied to semantic segmentation. _ArXiv Preprint: 1704.06857_ . * Geneva et al. (2019) Geneva P, Maley J and Huang G (2019) Schmidt-EKF-based visual-inertial moving object tracking. _ArXiv Preprint: 1903.0863_ . * Gomez et al. (2020) Gomez C, Fehr M, Millane A, Hernandez AC, Nieto J, Barber R and Siegwart R (2020) Hybrid topological and 3D dense mapping through autonomous exploration for large indoor environments. In: _2020 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, pp. 9673–9679. * Grinvald et al. (2019) Grinvald M, Furrer F, Novkovic T, Chung JJ, Cadena C, Siegwart R and Nieto J (2019) Volumetric Instance-Aware Semantic Mapping and 3D Object Discovery. _IEEE Robotics and Automation Letters_ 4(3): 3037–3044. * Grupp (2017) Grupp M (2017) evo: Python package for the evaluation of odometry and SLAM. https://github.com/MichaelGrupp/evo. * Guerra et al. (2019) Guerra W, Tal E, Murali V, Ryou G and Karaman S (2019) FlightGoggles: Photorealistic sensor simulation for perception-driven robotics using photogrammetry and virtual reality. In: _arXiv preprint: 1905.11377_. * H. Hirschmüller (2008) Hirschmüller H (2008) Stereo processing by semiglobal matching and mutual information. _IEEE Trans. Pattern Anal. Machine Intell._ 30(2): 328–341. * Hackel et al. (2017) Hackel T, Savinov N, Ladicky L, Wegner JD, Schindler K and Pollefeys M (2017) Semantic3d.net: A new large-scale point cloud classification benchmark. _arXiv preprint arXiv:1704.03847_ . * Hartley and Zisserman (2004) Hartley RI and Zisserman A (2004) _Multiple View Geometry in Computer Vision_. Second edition. Cambridge University Press. * Hassan et al. (2019) Hassan M, Choutas V, Tzionas D and Black MJ (2019) Resolving 3D human pose ambiguities with 3D scene constraints. In: _Proceedings of the IEEE International Conference on Computer Vision_. pp. 2282–2292. * He et al. (2017) He K, Gkioxari G, Dollár P and Girshick R (2017) Mask R-CNN. In: _Intl. Conf. on Computer Vision (ICCV)_. pp. 2980–2988. * Hedau et al. (2009) Hedau V, Hoiem D and Forsyth D (2009) Recovering the spatial layout of cluttered rooms. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 1849–1856. * Horn (1987) Horn BKP (1987) Closed-form solution of absolute orientation using unit quaternions. _J. Opt. Soc. Amer._ 4(4): 629–642. * Hu et al. (2017) Hu R, Dollar P and He K (2017) Learning to segment every thing. In: _Intl. Conf. on Computer Vision (ICCV)_. pp. 4233–4241. * Hu and Carlone (2019) Hu S and Carlone L (2019) Accelerated inference in Markov Random Fields via smooth Riemannian optimization. _IEEE Robotics and Automation Letters (RA-L)_ . * Huang et al.
(2018a) Huang S, Qi S, Xiao Y, Zhu Y, Wu Y and Zhu S (2018a) Cooperative holistic scene understanding: Unifying 3d object, layout, and camera pose estimation. In: _Advances in Neural Information Processing Systems_. pp. 207–218. * Huang et al. (2018b) Huang S, Qi S, Zhu Y, Xiao X, Xu Y and Zhu S (2018b) Holistic 3D scene parsing and reconstruction from a single rgb image. In: _European Conf. on Computer Vision (ECCV)_. pp. 187–203. * Hwangbo et al. (2009) Hwangbo M, Kim J and Kanade T (2009) Inertial-aided klt feature tracking for a moving camera. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. pp. 1909–1916. * Innmann et al. (2016) Innmann M, Zollhöfer M, Nießner M, Theobalt C and Stamminger M (2016) VolumeDeform: Real-time volumetric non-rigid reconstruction. _ArXiv_ abs/1603.08161. * Jiang et al. (2018) Jiang C, Qi S, Zhu Y, Huang S, Lin J, Yu L, Terzopoulos D and Zhu S (2018) Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars. _Intl. J. of Computer Vision_ 126(9): 920–941. * Johnson et al. (2017) Johnson J, Hariharan B, van der Maaten L, Fei-Fei L, Zitnick L and Girshick R (2017) Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 2901–2910. * Johnson et al. (2015) Johnson J, Krishna R, Stark M, Li L, Shamma D, Bernstein M and Fei-Fei L (2015) Image retrieval using scene graphs. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 3668–3678. * Joho et al. (2011) Joho D, Senk M and Burgard W (2011) Learning search heuristics for finding objects in structured environments. _Robotics and Autonomous Systems_ 59(5): 319–328. * Kaess et al. (2012) Kaess M, Johannsson H, Roberts R, Ila V, Leonard J and Dellaert F (2012) iSAM2: Incremental smoothing and mapping using the Bayes tree. _Intl. J. of Robotics Research_ 31: 217–236. * Kanazawa et al. (2018) Kanazawa A, Black MJ, Jacobs DW and Malik J (2018) End-to-end recovery of human shape and pose. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. * Karaman and Frazzoli (2011) Karaman S and Frazzoli E (2011) Sampling-based algorithms for optimal motion planning. _Intl. J. of Robotics Research_ 30(7): 846–894. * Kim et al. (2019) Kim U, Park J, Song T and Kim J (2019) 3-D scene graph: A sparse and semantic representation of physical environments for intelligent agents. _IEEE Trans. Cybern._ PP: 1–13. * Kirillov et al. (2019) Kirillov A, He K, Girshick R, Rother C and Dollar P (2019) Panoptic segmentation. In: _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. * Kneip et al. (2011) Kneip L, Chli M and Siegwart R (2011) Robust real-time visual odometry with a single camera and an IMU. In: _British Machine Vision Conf. (BMVC)_. pp. 16.1–16.11. * Kocabas et al. (2020) Kocabas M, Athanasiou N and Black MJ (2020) Vibe: Video inference for human body pose and shape estimation. In: _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. pp. 5253–5263. * Kollar et al. (2017) Kollar T, Tellex S, Walter M, Huang A, Bachrach A, Hemachandra S, Brunskill E, Banerjee A, Roy D, Teller S and Roy N (2017) Generalized grounding graphs: A probabilistic framework for understanding grounded commands. _ArXiv Preprint: 1712.01097_ . * Kolotouros et al. (2019) Kolotouros N, Pavlakos G, Black MJ and Daniilidis K (2019) Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop. 
_arXiv e-prints_ : arXiv:1909.12828. * Kolotouros et al. (2019a) Kolotouros N, Pavlakos G, Black MJ and Daniilidis K (2019a) Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In: _Proceedings of the IEEE International Conference on Computer Vision_. pp. 2252–2261. * Kolotouros et al. (2019b) Kolotouros N, Pavlakos G and Daniilidis K (2019b) Convolutional mesh regression for single-image human shape reconstruction. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. * Krause et al. (2017) Krause J, Johnson J, R Krishna R and Fei-Fei L (2017) A hierarchical approach for generating descriptive image paragraphs. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 3337–3345. * Krishna et al. (2016) Krishna R, Zhu Y, Groth O, Johnson J, Hata K, Kravitz J, Chen S, Kalantidis Y, Li L, Shamma D, Bernstein M and Fei-Fei L (2016) Visual Genome: Connecting language and vision using crowdsourced dense image annotations. _arXiv preprints arXiv:1602.07332_ . * Krishna (1992) Krishna S (1992) _Introduction to Database and Knowledge-Base Systems_. World Scientific Publishing Co., Inc. ISBN 9810206194. * Krizhevsky et al. (2012) Krizhevsky A, Sutskever I and Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: _Advances in Neural Information Processing Systems (NIPS)_ , NIPS’12. pp. 1097–1105. * Kuipers (1978) Kuipers B (1978) Modeling spatial knowledge. _Cognitive Science_ 2: 129–153. * Kuipers (2000) Kuipers B (2000) The Spatial Semantic Hierarchy. _Artificial Intelligence_ 119: 191–233. * Lang et al. (2019) Lang H, Yuhui Y, Jianyuan G, Chao Z, Xilin C and Jingdong W (2019) Interlaced sparse self-attention for semantic segmentation. _arXiv preprint arXiv:1907.12273_ . * Larsson et al. (2019) Larsson DT, Maity D and Tsiotras P (2019) Q-Search trees: An information-theoretic approach towards hierarchical abstractions for agents with computational limitations. * Larsson and Akenine-Möller (2006) Larsson T and Akenine-Möller T (2006) A dynamic bounding volume hierarchy for generalized collision detection. _Comput. Graph._ 30(3): 450–459. * Lassner et al. (2017) Lassner C, Romero J, Kiefel M, Bogo F, Black MJ and Gehler PV (2017) Unite the people: Closing the loop between 3D and 2D human representations. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. * Leutenegger et al. (2013) Leutenegger S, Furgale P, Rabaud V, Chli M, Konolige K and Siegwart R (2013) Keyframe-based visual-inertial slam using nonlinear optimization. In: _Robotics: Science and Systems (RSS)_. * Li et al. (2016) Li C, Xiao H, Tateno K, Tombari F, Navab N and Hager GD (2016) Incremental scene understanding on dense SLAM. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. pp. 574–581. * Li et al. (2018a) Li J, Raventos A, Bhargava A, Tagawa T and Gaidon A (2018a) Learning to fuse things and stuff. _ArXiv_ abs/1812.01192. * Li and Stevenson (2020) Li J and Stevenson R (2020) Indoor layout estimation by 2d lidar and camera fusion. ArXiv preprint arXiv:2001.05422. * Li et al. (2018b) Li P, Qin T and Shen S (2018b) Stereo vision-based semantic 3D object and ego-motion tracking for autonomous driving. In: Ferrari V, Hebert M, Sminchisescu C and Weiss Y (eds.) _European Conf. on Computer Vision (ECCV)_. pp. 664–679. * Li et al. (2017) Li Y, Ouyang W, Zhou B, Wang K and Wang X (2017) Scene graph generation from objects, phrases and region captions. In: _International Conference on Computer Vision (ICCV)_. 
* Liang et al. (2017) Liang X, Lee L and Xing E (2017) Deep variation structured reinforcement learning for visual relationship and attribute detection. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 4408–4417. * Lianos et al. (2018) Lianos K, Schönberger J, Pollefeys M and Sattler T (2018) Vso: Visual semantic odometry. In: _European Conf. on Computer Vision (ECCV)_. pp. 246–263. * Lin et al. (2013) Lin D, Fidler S and Urtasun R (2013) Holistic scene understanding for 3d object detection with rgbd cameras. 10.1109/ICCV.2013.179. * Liu et al. (2018) Liu C, Wu J and Furukawa Y (2018) FloorNet: A unified framework for floorplan reconstruction from 3D scans. In: _European Conf. on Computer Vision (ECCV)_. pp. 203–219. * Loper et al. (2015) Loper M, Mahmood N, Romero J, Pons-Moll G and Black MJ (2015) SMPL: A skinned multi-person linear model. _ACM Trans. Graphics (Proc. SIGGRAPH Asia)_ 34(6): 248:1–248:16. * Lorensen and Cline (1987) Lorensen W and Cline H (1987) Marching cubes: A high resolution 3d surface construction algorithm. In: _SIGGRAPH_. pp. 163–169. * Lu et al. (2016) Lu C, Krishna R, Bernstein M and Li FF (2016) Visual relationship detection with language priors. In: _European Conference on Computer Vision_. pp. 852–869. * Lukierski et al. (2017) Lukierski R, Leutenegger S and Davison AJ (2017) Room layout estimation from rapid omnidirectional exploration. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. pp. 6315–6322. * Mangelson et al. (2018) Mangelson JG, Dominic D, Eustice RM and Vasudevan R (2018) Pairwise consistent measurement set maximization for robust multi-robot map merging. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. pp. 2916–2923. * McCormac et al. (2018) McCormac J, Clark R, Bloesch M, Davison A and Leutenegger S (2018) Fusion++: Volumetric object-level SLAM. In: _Intl. Conf. on 3D Vision (3DV)_. pp. 32–41. * McCormac et al. (2017) McCormac J, Handa A, Davison AJ and Leutenegger S (2017) SemanticFusion: Dense 3D Semantic Mapping with Convolutional Neural Networks. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. * Monszpart et al. (2019) Monszpart A, Guerrero P, Ceylan D, Yumer E and Mitra NJ (2019) imapper: interaction-guided scene mapping from monocular videos. _ACM Transactions on Graphics (TOG)_ 38(4): 1–15. * Mourikis and Roumeliotis (2007) Mourikis A and Roumeliotis S (2007) A multi-state constraint Kalman filter for vision-aided inertial navigation. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. pp. 3565–3572. * Mur-Artal and Tardós (2017) Mur-Artal R and Tardós JD (2017) ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. _IEEE Trans. Robotics_ 33(5): 1255–1262. * Mura et al. (2014) Mura C, Mattausch O, Villanueva AJ, Gobbetti E and Pajarola R (2014) Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts. _Computers & Graphics_ 44: 20–32. * Narita et al. (2019) Narita G, Seno T, Ishikawa T and Kaji Y (2019) Panopticfusion: Online volumetric semantic mapping at the level of stuff and things. _arxiv preprint: 1903.01177_ . * Newcombe et al. (2015) Newcombe R, Fox D and Seitz S (2015) DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 343–352. * Nicholson et al. (2018) Nicholson L, Milford M and Sünderhauf N (2018) QuadricSLAM: Dual quadrics from object detections as landmarks in object-oriented SLAM. 
_IEEE Robotics and Automation Letters_ 4: 1–8. * Nie et al. (2020) Nie Y, Han X, Guo S, Zheng Y, Chang J and Zhang JJ (2020) Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image. In: _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. pp. 55–64. * Nistér (2004) Nistér D (2004) An efficient solution to the five-point relative pose problem. _IEEE Trans. Pattern Anal. Machine Intell._ 26(6): 756–770. * Nüchter and Hertzberg (2008) Nüchter A and Hertzberg J (2008) Towards semantic maps for mobile robots. _Robotics and Autonomous Systems_ 56: 915–926. * Ochmann et al. (2014) Ochmann S, Vock R, Wessel R, Tamke M and Klein R (2014) Automatic generation of structural building descriptions from 3d point cloud scans. In: _2014 International Conference on Computer Graphics Theory and Applications (GRAPP)_. pp. 1–8. * Oleynikova et al. (2016) Oleynikova H, Burri M, Taylor Z, Nieto J, Siegwart R and Galceran E (2016) Continuous-time trajectory optimization for online UAV replanning. In: _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, pp. 5332–5339. * Oleynikova et al. (2017) Oleynikova H, Taylor Z, Fehr M, Siegwart R and Nieto J (2017) Voxblox: Incremental 3d euclidean signed distance fields for on-board mav planning. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. IEEE, pp. 1366–1373. * Oleynikova et al. (2018) Oleynikova H, Taylor Z, Siegwart R and Nieto J (2018) Sparse 3D topological graphs for micro-aerial vehicle planning. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. * Omran et al. (2018) Omran M, Lassner C, Pons-Moll G, Gehler P and Schiele B (2018) Neural body fitting: Unifying deep learning and model based human pose and shape estimation. _Intl. Conf. on 3D Vision (3DV)_ : 484–494. * Pangercic et al. (2012) Pangercic D, Pitzer B, Tenorth M and Beetz M (2012) Semantic object maps for robotic housework - representation, acquisition and use. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. ISBN 978-1-4673-1737-5, pp. 4644–4651. 10.1109/IROS.2012.6385603. * Paszke et al. (2016) Paszke A, Chaurasia A, Kim S and Culurciello E (2016) Enet: A deep neural network architecture for real-time semantic segmentation. _arXiv preprint arXiv:1606.02147_ . * Pattabiraman et al. (2015) Pattabiraman B, Patwary MMA, Gebremedhin AH, Liao WK and Choudhary A (2015) Fast algorithms for the maximum clique problem on massive graphs with applications to overlapping community detection. _Internet Mathematics_ 11(4-5): 421–448. * Pavlakos et al. (2018) Pavlakos G, Zhu L, Zhou X and Daniilidis K (2018) Learning to estimate 3d human pose and shape from a single color image. _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_ : 459–468. * Pirk et al. (2017) Pirk S, Krs V, Hu K, Rajasekaran SD, Kang H, Yoshiyasu Y, Benes B and Guibas LJ (2017) Understanding and exploiting object interaction landscapes. _ACM Transactions on Graphics (TOG)_ 36(3): 1–14. * Pronobis and Jensfelt (2012) Pronobis A and Jensfelt P (2012) Large-scale semantic mapping and reasoning with heterogeneous modalities. IEEE Intl. Conf. on Robotics and Automation (ICRA). * Puri et al. (2017) Puri P, Jia D and Kaess M (2017) GravityFusion: Real-time dense mapping without pose graph using deformation and orientation. In: _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. pp. 6506–6513. 10.1109/IROS.2017.8206559. * Qin et al. 
(2018) Qin T, Li P and Shen S (2018) Vins-mono: A robust and versatile monocular visual-inertial state estimator. _IEEE Transactions on Robotics_ 34(4): 1004–1020. * Qiu et al. (2019) Qiu K, Qin T, Gao W and Shen S (2019) Tracking 3-D motion of dynamic objects using monocular visual-inertial sensing. _IEEE Trans. Robotics_ 35(4): 799–816. 10.1109/TRO.2019.2909085. * Ranganathan and Dellaert (2004) Ranganathan A and Dellaert F (2004) Inference in the space of topological maps: An MCMC-based approach. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. * Redmon and Farhadi (2017) Redmon J and Farhadi A (2017) YOLO9000: Better, faster, stronger. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 6517–6525. * Rehder et al. (2016) Rehder J, Nikolic J, Schneider T, Hinzmann T and Siegwart R (2016) Extending kalibr: Calibrating the extrinsics of multiple IMUs and of individual axes. In: _2016 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, pp. 4304–4311. * Reijgwart et al. (2020) Reijgwart V, Millane A, Oleynikova H, Siegwart R, Cadena C and Nieto J (2020) Voxgraph: Globally consistent, volumetric mapping using signed distance function submaps. _IEEE Robotics and Automation Letters_ . * Remolina and Kuipers (2004) Remolina E and Kuipers B (2004) Towards a general theory of topological maps. _Artificial Intelligence_ 152(1): 47–104. * Ren et al. (2015) Ren S, He K, Girshick R and Sun J (2015) Faster R-CNN: Towards realtime object detection with region proposal networks. In: _Advances in Neural Information Processing Systems (NIPS)_. pp. 91–99. * Rogers and Christensen (2012) Rogers J and Christensen HI (2012) A conditional random field model for place and object classification. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. pp. 1766–1772. * Rosen et al. (2018) Rosen D, Carlone L, Bandeira A and Leonard J (2018) SE-Sync: a certifiably correct algorithm for synchronization over the Special Euclidean group. _Intl. J. of Robotics Research_ Arxiv preprint: 1611.00128, (pdf). * Rosinol (2018) Rosinol A (2018) _Densifying Sparse VIO: a mesh-based approach using Structural Regularities_. Master’s Thesis, ETH Zurich. 10.3929/ethz-b-000297645. (pdf). * Rosinol et al. (2020a) Rosinol A, Abate M, Chang Y and Carlone L (2020a) Kimera: an open-source library for real-time metric-semantic localization and mapping. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. ArXiv preprint arXiv: 1910.02490, (video), (code), (pdf). * Rosinol et al. (2020b) Rosinol A, Gupta A, Abate M, Shi J and Carlone L (2020b) 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans. In: _Robotics: Science and Systems (RSS)_. (pdf), (video). * Rosinol et al. (2019) Rosinol A, Sattler T, Pollefeys M and Carlone L (2019) Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. 10.1109/ICRA.2019.8794456. URL https://www.mit.edu/%7Earosinol/research/struct3dmesh.html. (pdf), (web). * Rosu et al. (2019) Rosu R, Quenzel J and Behnke S (2019) Semi-supervised semantic mapping through label propagation with semantic texture meshes. _Intl. J. of Computer Vision_ . * Ruiz-Sarmiento et al. (2017) Ruiz-Sarmiento JR, Galindo C and Gonzalez-Jimenez J (2017) Building multiversal semantic maps for mobile robot operation. _Knowledge-Based Systems_ 119: 257–272. 
* Rünz and Agapito (2017) Rünz M and Agapito L (2017) Co-fusion: Real-time segmentation, tracking and fusion of multiple objects. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. IEEE, pp. 4471–4478. * Rünz et al. (2018) Rünz M, Buffier M and Agapito L (2018) MaskFusion: Real-time recognition, tracking and reconstruction of multiple moving objects. In: _IEEE International Symposium on Mixed and Augmented Reality (ISMAR)_. IEEE, pp. 10–20. * Rusu and Cousins (2011) Rusu RB and Cousins S (2011) 3D is here: Point Cloud Library (PCL). In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. * Salas-Moreno et al. (2013) Salas-Moreno RF, Newcombe RA, Strasdat H, Kelly PHJ and Davison AJ (2013) SLAM++: Simultaneous localisation and mapping at the level of objects. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. * Sayre-McCord et al. (2018) Sayre-McCord R, Guerra W, Antonini A, Arneberg J, Brown A, Cavalheiro G, Fang Y, Gorodetsky A, McCoy D, Quilter S, Riether F, Tal E, Terzioglu Y, Carlone L and Karaman S (2018) Visual-inertial navigation algorithm development using photorealistic camera simulation in the loop. In: _IEEE Intl. Conf. on Robotics and Automation (ICRA)_. (pdf) (code). * Schimpl et al. (2011) Schimpl M, Moore C, Lederer C, Neuhaus A, Sambrook J, Danesh J, Ouwehand W and Daumer M (2011) Association between walking speed and age in healthy, free-living individuals using mobile accelerometer – a cross-sectional study. _PloS one_ 6(8): e23299. * Schleich et al. (2019) Schleich D, Klamt T and Behnke S (2019) Value iteration networks on multiple levels of abstraction. In: _Robotics: Science and Systems (RSS)_. * Schöps et al. (2017) Schöps T, Schönberger JL, Galliani S, Sattler T, Schindler K, Pollefeys M and Geiger A (2017) A multi-view stereo benchmark with high-resolution images and multi-camera videos. In: _Conference on Computer Vision and Pattern Recognition (CVPR)_. * Schwing et al. (2013) Schwing AG, Fidler S, Pollefeys M and Urtasun R (2013) Box in the box: Joint 3d layout and object reasoning from single images. In: _Proceedings of the IEEE International Conference on Computer Vision_. pp. 353–360. * Shan et al. (2019) Shan M, Feng Q and Atanasov N (2019) Object residual constrained visual-inertial odometry. In: _technical report,https://moshanatucsd.github.io/orcvio_githubpage/_. * Shi and Tomasi (1994) Shi J and Tomasi C (1994) Good Features to track. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 593–600. * Song et al. (2018) Song J, Wang J, Zhao L, Huang S and Dissanayake G (2018) MIS-SLAM: Real-time large scale dense deformable SLAM system in minimal invasive surgery based on heterogeneous computing. _IEEE Robotics and Automation Letters_ 10.1109/LRA.2018.2856519. * Sumner et al. (2007) Sumner R, Schmid J and Pauly M (2007) Embedded deformation for shape manipulation. _ACM SIGGRAPH 2007 papers on - SIGGRAPH ’07_ 10.1145/1275808.1276478. * Tan et al. (2017) Tan V, Budvytis I and Cipolla R (2017) Indirect deep structured learning for 3D human body shape and pose prediction. In: _British Machine Vision Conf. (BMVC)_. * Tateno et al. (2017) Tateno K, Tombari F, Laina I and Navab N (2017) CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. * Tateno et al. (2015) Tateno K, Tombari F and Navab N (2015) Real-time and scalable incremental segmentation on dense SLAM. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. pp. 
4465–4472. * Taylor et al. (2010) Taylor GW, Sigal L, Fleet DJ and Hinton GE (2010) Dynamical binary latent variable models for 3d human pose tracking. In: _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_. IEEE, pp. 631–638. * Thrun (2003) Thrun S (2003) Robotic mapping: a survey. In: _Exploring artificial intelligence in the new millennium_. Morgan Kaufmann, Inc., pp. 1–35. * Turner and Zakhor (2014) Turner E and Zakhor A (2014) Floor plan generation and room labeling of indoor environments from laser range data. In: _2014 International Conference on Computer Graphics Theory and Applications (GRAPP)_. pp. 1–12. * Usenko et al. (2019) Usenko V, Demmel N, Schubert D, Stückler J and Cremers D (2019) Visual-inertial mapping with non-linear factor recovery. _IEEE Robotics and Automation Letters_ 5(2): 422–429. * Vasudevan et al. (2006) Vasudevan S, Gachter S, Berger M and Siegwart R (2006) Cognitive maps for mobile robots: An object based approach. In: _Proceedings of the IROS Workshop From Sensors to Human Spatial Concepts (FS2HSC 2006)_. * Wald et al. (2020) Wald J, Dhamo H, Navab N and Tombari F (2020) Learning 3D semantic scene graphs from 3D indoor reconstructions. In: _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. pp. 3961–3970. * Wald et al. (2018) Wald J, Tateno K, Sturm J, Navab N and Tombari F (2018) Real-time fully incremental scene understanding on mobile platforms. _IEEE Robotics and Automation Letters_ 3(4): 3402–3409. * Wang et al. (2007) Wang CC, Thorpe C, Thrun S, Hebert M and Durrant-Whyte H (2007) Simultaneous localization, mapping and moving object tracking. _Intl. J. of Robotics Research_ 26(9): 889–916. * Wang et al. (2020) Wang M, Tighe J and Modolo D (2020) Combining detection and tracking for human pose estimation in videos. _arXiv preprint arXiv:2003.13743_ . * Wang and Qian (2010) Wang R and Qian X (2010) _OpenSceneGraph 3.0: Beginner’s Guide_. Packt Publishing. ISBN 1849512825. * Whelan et al. (2013) Whelan T, Kaess M, Leonard J and McDonald J (2013) Deformation-based loop closure for large scale dense RGB-D SLAM. In: _IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)_. * Whelan et al. (2015) Whelan T, Leutenegger S, Salas-Moreno R, Glocker B and Davison A (2015) ElasticFusion: Dense SLAM without a pose graph. In: _Robotics: Science and Systems (RSS)_. * Wolf et al. (2015) Wolf D, Prankl J and Vincze M (2015) Enhancing semantic segmentation for robotics: The power of 3-d entangled forests. _IEEE Robotics and Automation Letters_ 1(1): 49–56. * Xu et al. (2019) Xu B, Li W, Tzoumanikas D, Bloesch M, Davison A and Leutenegger S (2019) MID-Fusion: Octree-based object-level multi-instance dynamic SLAM. pp. 5231–5237. * Xu et al. (2017) Xu D, Zhu Y, Choy CB and Fei-Fei L (2017) Scene graph generation by iterative message passing. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 3097–3106. * Yang et al. (2018) Yang G, Zhao H, Shi J, Deng Z and Jia J (2018) SegStereo: Exploiting semantic information for disparity estimation. In: _Proceedings of the European Conference on Computer Vision (ECCV)_. pp. 636–651. * Yang and Carlone (2020) Yang H and Carlone L (2020) In perfect shape: Certifiably optimal 3D shape reconstruction from 2D landmarks. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. Arxiv version: 1911.11924, (pdf). * Yang et al. (2020) Yang H, Shi J and Carlone L (2020) TEASER: Fast and Certifiable Point Cloud Registration. _IEEE Trans. 
Robotics_ 37(2): 314–333. Extended arXiv version 2001.07715 (pdf). * Yokozuka et al. (2019) Yokozuka M, Oishi S, Thompson S and Banno A (2019) VITAMIN-E: visual tracking and mapping with extremely dense feature points. _CoRR_ abs/1904.10324. * Zanfir et al. (2018) Zanfir A, Marinoiu E and Sminchisescu C (2018) Monocular 3D pose and shape estimation of multiple people in natural scenes: The importance of multiple scene constraints. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 2148–2157. * Zender et al. (2008) Zender H, Mozos OM, Jensfelt P, Kruijff GJ and Burgard W (2008) Conceptual spatial representations for indoor mobile robots. _Robotics and Autonomous Systems_ 56(6): 493–502. From Sensors to Human Spatial Concepts. * Zhang et al. (2017) Zhang H, Kyaw Z, Chang S and Chua T (2017) Visual translation embedding network for visual relation detection. In: _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. pp. 3107–3115. * Zhang et al. (2019a) Zhang L, Li X, Arnab A, Yang K, Tong Y and Torr PH (2019a) Dual graph convolutional network for semantic segmentation. In: _British Machine Vision Conference_. * Zhang et al. (2019b) Zhang Y, Hassan M, Neumann H, Black MJ and Tang S (2019b) Generating 3D people in scenes without people. _arXiv preprint arXiv:1912.02923_ . * Zhao et al. (2017) Zhao H, Shi J, Qi X, Wang X and Jia J (2017) Pyramid scene parsing network. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 2881–2890. * Zhao and Zhu (2013a) Zhao Y and Zhu S (2013a) Scene parsing by integrating function, geometry and appearance models. In: _IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_. pp. 3119–3126. * Zhao and Zhu (2013b) Zhao Y and Zhu SC (2013b) Scene parsing by integrating function, geometry and appearance models. In: _Proceedings of the IEEE conference on computer vision and pattern recognition_. pp. 3119–3126. * Zheng and Pronobis (2019) Zheng K and Pronobis A (2019) From pixels to buildings: End-to-end probabilistic deep networks for large-scale semantic mapping. In: _Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. Macau, China. * Zheng et al. (2018) Zheng K, Pronobis A and Rao RPN (2018) Learning Graph-Structured Sum-Product Networks for probabilistic semantic maps. In: _Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI)_. * Zheng et al. (2019) Zheng L, Zhu C, Zhang J, Zhao H, Huang H, Niessner M and Xu K (2019) Active scene understanding via online semantic reconstruction. _arXiv preprint:1906.07409_ . * Zheng et al. (2013) Zheng Y, Kuang Y, Sugimoto S, Astrom K and Okutomi M (2013) Revisiting the PnP problem: A fast, general and optimal solution. In: _Intl. Conf. on Computer Vision (ICCV)_. pp. 2344–2351. * Zhou et al. (2018a) Zhou QY, Park J and Koltun V (2018a) Open3D: A modern library for 3D data processing. _arXiv:1801.09847_ . * Zhou et al. (2018b) Zhou X, Zhu M, Pavlakos G, Leonardos S, Derpanis KG and Daniilidis K (2018b) MonoCap: Monocular human motion capture using a CNN coupled with a geometric prior. _IEEE Trans. Pattern Anal. Machine Intell._ 41(4): 901–914. * Zhu et al. (2016) Zhu Y, Groth O, Bernstein M and Fei-Fei L (2016) Visual7W: Grounded question answering in images. In: _IEEE Conference on Computer Vision and Pattern Recognition_. pp. 4995–5004.
# On the finiteness of moments of the exit time of planar Brownian motion from comb domains Maher Boudabra and Greg Markowsky <EMAIL_ADDRESS><EMAIL_ADDRESS> Department of Mathematics, Monash University, Australia ###### Abstract A comb domain is defined to be the entire complex plane with a collection of vertical slits, symmetric over the real axis, removed. In this paper, we consider the question of determining whether the exit time of planar Brownian motion from such a domain has finite $p$-th moment. This question has been addressed before in relation to starlike domains, but these previous results do not apply to comb domains. Our main result is a sufficient condition on the location of the slits which ensures that the $p$-th moment of the exit time is finite. Several auxiliary results are also presented, including a construction of a comb domain whose exit time has infinite $p$-th moment for all $p\geq 1/2$. Keywords: Planar Brownian motion, exit time. 2010 Mathematics subject classification: 60J65, 30E99. ## 1 Introduction and statement of main result Let $(x_{n})_{n\in\mathbb{Z}}$ be an increasing sequence of distinct real numbers without accumulation point in $\mathbb{R}$, let $(b_{n})_{n\in\mathbb{Z}}$ be an associated sequence of positive numbers, and let $\mathscr{M}_{x}$ be the domain $\mathscr{M}_{x}:=\mathbb{C}\setminus\bigcup_{n\in\mathbb{Z}}I_{n}$ where $I_{n}:=\\{x_{n}\\}\times([b_{n},+\infty)\cup(-\infty,-b_{n}])$. We shall informally refer to $\mathscr{M}_{x}$ as a comb domain. Figure 1.1: Illustration of a comb domain. We consider a planar Brownian motion $Z_{t}$ and denote by $\tau_{\Omega}$ its exit time from a given domain $\Omega$. The question we investigate in this paper is to find conditions on the sequences $(x_{n})_{n\in\mathbb{Z}}$ and $(b_{n})_{n\in\mathbb{Z}}$ which imply that $E(\tau_{\mathscr{M}_{x}}^{p})<\infty$ for a given $p\in(0,\infty)$. We will derive a sufficient condition for the moment to be finite, but before stating our results, we discuss a bit of motivation for this question. The moments of $\tau_{\Omega}$ have special importance in two dimensions, as they carry a great deal of analytic and geometric information about the domain $\Omega$. The first major work in this direction seems to have been by Burkholder in [8], where it was proved among other things that finiteness of the $p$-th Hardy norm of $\Omega$ is equivalent to finiteness of the $\frac{p}{2}$-th moment of $\tau_{\Omega}$. To be precise, for any simply connected domain $\Omega$ let ${\rm H}(\Omega)=\sup\\{p>0:E((\tau_{\Omega})^{p})<\infty\\};$ note that ${\rm H}(\Omega)$ is proved in [8, p. 183] to be exactly equal to half of the Hardy number of $\Omega$, as defined in [12], which is ${\widetilde{\rm H}}(\Omega)=\sup\\{q>0:\lim_{r\nearrow 1}\int_{0}^{2\pi}|f(re^{i\theta})|^{q}d\theta<\infty\\},$ where $f$ is a conformal map from the unit disk onto $\Omega$. This equivalence was used in [8, p. 183] to show for instance that ${\rm H}(W_{\alpha})=\frac{\pi}{2\alpha}$, where $W_{\alpha}=\\{0<Arg(z)<\alpha\\}$ is an infinite angular wedge with angle $\alpha$. In fact, coupled with the purely analytic result [12, Thm 4.1] this can be used to determine ${\rm H}(\Omega)$ for any starlike domain $\Omega$ in terms of the aperture of $\Omega$ at $\infty$, which is defined to be the limit as $r\to\infty$ of the quantity $\alpha_{r,\Omega}=\max\\{m(E):E\mbox{ is a subarc of }\Omega\cap\\{|z|=r\\}\\}$; it is not hard to see that this limit always exists for starlike domains.
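For instance (a short worked illustration added here for the reader; the instances are immediate consequences of the wedge formula above): $${\rm H}(W_{\pi})=\frac{\pi}{2\pi}=\frac{1}{2}\ \text{(half-plane)},\qquad{\rm H}(W_{\pi/2})=1\ \text{(quadrant)},\qquad{\rm H}(W_{2\pi})=\frac{1}{4}\ \text{(plane minus a ray)}.$$ In particular, the wider the aperture at $\infty$, the smaller ${\rm H}(\Omega)$, i.e. the fewer moments of the exit time are finite.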
[25] contains a detailed discussion of this, as well as a version of the Phragmén-Lindelöf principle that makes use of the quantity ${\rm H}(\Omega)$. Furthermore, the quantity $E((\tau_{\Omega})^{p})$ provides us with an estimate for the tail probability $P(\tau_{\Omega}>\delta)$: by Markov’s inequality, $P(\tau_{\Omega}>\delta)\leq\frac{E((\tau_{\Omega})^{p})}{\delta^{p}}$. We should also mention that the case $p=1$ is naturally of special interest, and has produced a literature too large to describe here; the case of general $p$ has attracted somewhat less interest, but the reader interested in other results relating the $p$-th moments of Brownian exit time with the geometry of domains is referred to [1, 4, 7, 9, 10, 13, 14, 15, 16, 20, 21, 22, 23, 27, 28]. On the other hand, comb domains and their analogues have appeared in a number of recent papers on various topics. One of the most striking instances is in the recent work [11] by Gross, in which the following question was posed and answered: given a measure $\mu$ on $\mathbb{R}$ with finite second moment, find a simply connected domain $U$ in $\mathbb{C}$ such that the real part of the random variable $Z_{\tau_{U}}$ has the distribution $\mu$. If $\mu$ is a discrete distribution, then Gross’ construction yields a comb domain. Other examples include [26], in which a similar domain was used in order to construct a stopping time related to the winding of Brownian motion, and [17, 18, 19], in which similar domains were used as counterexamples to several conjectures concerning harmonic measure posed in [2, 3]. Note that in Gross’ paper in particular (see also [5, 6, 24]) the moments of the exit time are of importance, and yet it is not simple to show that they are finite for a given comb-like domain. Comb domains are never starlike, and therefore the previously established results on ${\rm H}(\Omega)$ do not apply to them. It is also not hard to see that the aperture at $\infty$ of comb domains need not exist. We have therefore needed to devise new methods in order to address this question. Before discussing our results, let us make a few observations to clarify the problem. To begin with, it may seem that the starting point of the Brownian motion affects whether $E(\tau^{p})$ is finite; however, this is not so, as shown in [8]: ###### Proposition 1. [8, p.13 (3.13)] If $U$ is a domain and $E_{a}(\tau_{U}^{p})<\infty$ for some $a\in U$, then $E_{w}(\tau_{U}^{p})<\infty$ for any $w\in U$. We may therefore make statements like "$E(\tau_{\Omega}^{p})<\infty$" or "$E(\tau_{\Omega}^{p})=\infty$" without specifying a starting point. We next note that exit times are monotonic with respect to domains, as the following proposition shows. ###### Proposition 2. * • If $\Omega_{1}\subset\Omega_{2}$ then $E(\tau_{\Omega_{2}}^{p})<\infty\,\Longrightarrow\,E(\tau_{\Omega_{1}}^{p})<\infty.$ * • If $\Omega_{n}$ is an increasing sequence of domains (i.e. $\Omega_{n}\subseteq\Omega_{n+1}$) and $\Omega=\cup_{n=1}^{\infty}\Omega_{n}$, then $E(\tau_{\Omega_{n}}^{p})\nearrow E(\tau_{\Omega}^{p})$. The proof of the first statement is trivial, and the second is a simple consequence of the monotone convergence theorem. This proposition allows us to clarify the problem a bit.
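As a concrete illustration of the $1/2$ threshold that recurs throughout the paper (a standard computation which we add here; it is not part of the original text), consider Brownian motion started at $i$ in the upper half-plane $\mathbb{H}=\\{\Im(z)>0\\}$. Its exit time is the hitting time of $0$ by the one-dimensional Brownian motion $\Im(Z_{t})$, which has density $\frac{1}{\sqrt{2\pi t^{3}}}e^{-1/(2t)}$, so that $$E_{i}(\tau_{\mathbb{H}}^{p})=\int_{0}^{\infty}t^{p}\,\frac{1}{\sqrt{2\pi t^{3}}}\,e^{-1/(2t)}\,dt.$$ The integrand behaves like $t^{p-3/2}$ at $\infty$, so the integral is finite if, and only if, $p<1/2$; in particular $E_{i}(\tau_{\mathbb{H}}^{1/2})=\infty$, which is consistent with ${\rm H}(W_{\pi})=1/2$ noted above and is the fact used in the proof of Proposition 3 below.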
Any comb domain is contained in a translation and dilation of the domain $U=\mathbb{C}\setminus(\\{0\\}\times([1,+\infty)\cup(-\infty,-1]))$, and, as the conformal map from the unit disk onto this domain is readily computed, a straightforward application of the aforementioned results by Burkholder in [8] shows that $E(\tau_{U}^{p})<\infty$ if, and only if, $p<1/2$. It follows that $E(\tau_{\mathscr{M}_{x}}^{p})<\infty$ for any comb domain $\mathscr{M}_{x}$, if $p<1/2$. The question is thus only interesting for $p\geq 1/2$, and we will concentrate on these values of $p$ in what follows. The following proposition, which in essence shows that the question we are addressing is reasonable, is proved in Section 2. ###### Proposition 3. For any $p\geq 1/2$, there is a comb domain $\mathscr{M}_{x}$ for which $E(\tau_{\mathscr{M}_{x}}^{p})=\infty$. In order to state our main result, let us employ the notation $a_{n}=x_{n}-x_{n-1}$. Then we have the following. ###### Theorem 4. Suppose $(x_{n})_{n\in\mathbb{Z}}$ is an increasing sequence (with $x_{0}=0$), and $(b_{n})_{n\in\mathbb{Z}}$ is an associated sequence of positive numbers, such that $\ell=\sup_{n}\Big{(}\frac{\max(b_{n-1},b_{n+1})}{\min(a_{n},a_{n+1})}\Big{)}<\infty$. Then there is a number $\theta_{0}<1$, depending on $\ell$, such that, for any $p>0$, if $\sum_{j=1}^{\infty}(\max_{|n|\leq j}a_{n}^{2})\theta_{0}^{j/p}<\infty,$ (1.1) then $E(\tau_{\mathscr{M}_{x}}^{p})<\infty$. We note that this theorem can also be applied in many cases where $\ell=\infty$, since removing slits in the complement of the domain can only increase the moments of the exit time. Therefore, if a collection of slits can be removed from the complement of $\mathscr{M}_{x}$ such that $\ell$ becomes finite (if for instance $\ell$ was infinite due to the $a_{n}$’s being small rather than the $b_{n}$’s being large) but (1.1) persists, then the conclusion of the theorem still holds. As an immediate corollary of the theorem, if $a_{n}$ is uniformly bounded, or even bounded by any polynomial in $n$, and $b_{n}$ is uniformly bounded as well, then $\sum_{j=1}^{\infty}(\max_{|n|\leq j}a_{n}^{2})\theta^{j}<\infty$ for any $\theta<1$, and therefore all moments of $\tau_{\mathscr{M}_{x}}$ are finite. We will see later that this theorem can even be extended a bit in order to handle certain sequences where the $a_{n}$ grow faster than this, for instance certain sequences with exponential growth. We will prove Proposition 3 and Theorem 4 in the next section, and add some concluding remarks in the final section. ## 2 Proofs Proof of Proposition 3 By the monotonicity of moments, it is enough to consider $p=\frac{1}{2}$. Our domain will have $b_{n}=1$ for all $n$. Let us first consider a comb domain derived from a finite sequence, that is $\mathscr{M}_{x}:=\mathbb{C}\setminus\bigcup_{n\in\\{1,\ldots,N\\}}I_{n}$ where again $I_{n}:=\\{x_{n}\\}\times([1,+\infty)\cup(-\infty,-1])$. In this case $\mathscr{M}_{x}$ contains the half plane $\\{\Re(z)>x_{N}\\}$. The exit time of a half plane has infinite $\frac{1}{2}$ moment, as discussed in the previous section. By Proposition 2, $E((\tau_{\mathscr{M}_{x}})^{1/2})=\infty$. We are now ready to construct an infinite unbounded sequence $(x_{n})_{n\in\mathbb{Z}}$ whose corresponding $\mathscr{M}_{x}$ has infinite $\frac{1}{2}$ moment; in fact, it will even be a subdomain of $\\{\Re(z)>0\\}$ with a one-sided sequence of vertical slits removed, and naturally it can be extended arbitrarily to a two-sided sequence if desired.
For $c<d$ let $S_{c,d}$ denote the infinite vertical strip $\\{c<\Re(z)<d\\}$. We will start our Brownian motion at the point 1. Let $x_{1}>1$ be a real number such that $E_{1}(\tau_{S_{0,x_{1}}}^{1/2})>1$, which exists because $E_{1}(\tau_{S_{0,x}}^{1/2})\nearrow+\infty$ as $x\nearrow+\infty$ by Proposition 2. Next, consider the domain $U_{2}=S_{0,x_{2}}\setminus I_{1}$ with $I_{1}:=\\{x_{1}\\}\times([1,+\infty)\cup(-\infty,-1])$, where $x_{2}$ is chosen so that $E_{1}(\tau_{U_{2}}^{1/2})>2$, and again this is possible since $\lim_{x\nearrow\infty}E_{1}((\tau_{S_{0,x}\setminus I_{1}})^{1/2})=\infty$. Continuing inductively in this way we construct $U_{n+1}$ from $U_{n}$ by extending the strip to $x_{n+1}$ and removing the new slit $I_{n}:=\\{x_{n}\\}\times([1,+\infty)\cup(-\infty,-1])$, that is, $U_{n+1}=S_{0,x_{n+1}}\setminus\bigcup_{k=1}^{n}I_{k}$, where $x_{n}<x_{n+1}$ is chosen so that $E_{1}(\tau_{U_{n+1}}^{1/2})>n+1$. The domain $U:=\bigcup_{n=1}^{\infty}U_{n}=U_{\infty}$ (with $U_{1}:=S_{0,x_{1}}$) is a comb domain that fits the requirement since $n\leq E_{1}(\tau_{U_{n}}^{1/2})\leq E_{1}(\tau_{U}^{1/2})$ for all $n$. Consequently $E_{1}(\tau_{U}^{1/2})=+\infty$, and thus all moments $E(\tau_{U}^{p})$ are infinite for any $p\in[1/2,+\infty)$. ∎ Proof of Theorem 4 Before tackling the proof we give some notations and definitions that we will use. As before, for $c<d$ let $S_{c,d}=\\{c<\Re(z)<d\\}$. It will be convenient to think of the sequence $(x_{n})_{n\in\mathbb{Z}}$ as a map from $\mathbb{Z}$ into $\mathbb{R}$ defined by $x(n)=x_{n}$, and with inverse $x^{-1}$. Denote the image of this map by ${\cal X}=\cup_{n=-\infty}^{\infty}\\{x_{n}\\}$. We will assume that our Brownian motion starts at $x_{0}=0$. Consider the following sequence of associated stopping times $\widehat{\tau}_{j}:=\begin{cases}0&{\scriptstyle\left(j=0\right)}\\\ \inf\\{t>\widehat{\tau}_{j-1}\mid R_{t}\in{\cal X}\setminus\\{R_{\widehat{\tau}_{j-1}}\\}\\}&{\scriptstyle\left(j>0\right)}\end{cases},$ where $R_{t}=\Re(Z_{t})$. More precisely, $\widehat{\tau}_{j}$ is the time of the $j^{th}$ passage of $R_{t}$ through the vertical lines carrying the slits, under the constraint that the line visited differs from the one visited at the $(j-1)^{st}$ passage. Equivalently, $\widehat{\tau}_{j}$ is the first exit time of $Z_{t}$ from $S_{x(x^{-1}(R_{\widehat{\tau}_{j-1}})-1),x(x^{-1}(R_{\widehat{\tau}_{j-1}})+1)}$ after $\widehat{\tau}_{j-1}$. Finally, let $\tau$ be the exit time from the comb domain and set $\tau_{j}=\tau\wedge\widehat{\tau}_{j}$. Then $\tau$ can be expressed as $\tau=\sum_{j=0}^{\infty}(\tau_{j+1}-\tau_{j}),$ whence $E(\tau^{p})^{1/p}\leq\sum_{j=0}^{\infty}E((\tau_{j+1}-\tau_{j})^{p})^{1/p}$ thanks to the Hölder–Minkowski inequality. We therefore need only show that this sum is finite. Note that $\tau_{j}=\tau_{j+1}$ on the event $\\{\tau\leq\tau_{j}\\}$, while $\tau_{j}$ and $\tau_{j+1}$ are simply equal to $\widehat{\tau}_{j}$ and $\widehat{\tau}_{j+1}$ on $\\{\tau_{j}<\tau\\}$.
It therefore follows that $\begin{split}E((\tau_{j+1}-\tau_{j})^{p})&=E((\widehat{\tau}_{j+1}-\widehat{\tau}_{j})^{p}1_{\\{\tau_{j}<\tau\\}})\\\ &=E((\widehat{\tau}_{j+1}-\widehat{\tau}_{j})^{p}1_{\\{\tau_{j}<\tau\\}}\sum_{n=-j}^{j}1_{\\{R_{\widehat{\tau}_{j}}=x_{n}\\}})\\\ &=\sum_{n=-j}^{j}E((\widehat{\tau}_{j+1}-\widehat{\tau}_{j})^{p}1_{\\{\tau_{j}<\tau\\}}1_{\\{R_{\widehat{\tau}_{j}}=x_{n}\\}})\\\ &=\sum_{n=-j}^{j}P(\tau_{j}<\tau,R_{\widehat{\tau}_{j}}=x_{n})E((\widehat{\tau}_{j+1}-\widehat{\tau}_{j})^{p}|\tau_{j}<\tau,R_{\widehat{\tau}_{j}}=x_{n})\\\ &=\sum_{n=-j}^{j}P(\tau_{j}<\tau,R_{\widehat{\tau}_{j}}=x_{n})E_{x_{n}}((\tau_{S_{x_{n-1},x_{n+1}}})^{p})\\\ &=\sum_{n=-j}^{j}P(\tau_{j}<\tau,R_{\widehat{\tau}_{j}}=x_{n})E_{0}((\tau_{S_{-a_{n},a_{n+1}}})^{p})\\\ &\leq P(\tau_{j}<\tau)\max_{|n|\leq j}E_{0}((\tau_{S_{-a_{n},a_{n+1}}})^{p}).\end{split}$ (2.1) Note that we have used the strong Markov property in the second-to-last equality, and also that our sum needed only to be over the set $\\{|n|\leq j\\}$ rather than all of $\mathbb{Z}$ because $R_{\widehat{\tau}_{j}}$ cannot be equal to $x_{n}$ with $|n|>j$ since $R_{\widehat{\tau}_{0}}=x_{0}$. In order to estimate this quantity, we need the following lemmas. ###### Lemma 5. $E_{0}((\tau_{S_{-a_{n},a_{n+1}}})^{p})\leq\max(a_{n},a_{n+1})^{2p}E_{0}((\tau_{S_{-1,1}})^{p})$. ###### Proof. For the sake of brevity we write $d_{n}=\max(a_{n},a_{n+1})$. A monotonicity argument yields $E_{0}((\tau_{S_{-a_{n},a_{n+1}}})^{p})\leq E_{0}((\tau_{S_{-d_{n},d_{n}}})^{p})$. The exit time from the strip $S_{-d_{n},d_{n}}$ is simply the exit time of a one-dimensional Brownian motion from the interval $(-d_{n},d_{n})$. Hence by scaling, we get $E_{0}((\tau_{S_{-d_{n},d_{n}}})^{p})=d_{n}^{2p}E_{0}((\tau_{S_{-1,1}})^{p})$, which completes the proof. ∎ For the next lemma, let $K^{b}_{c,d}$ denote the rectangle $S_{c,d}\cap\\{-b<\Im(z)<b\\}$, and let $I_{t}=\Im(Z_{t})$. Recall also that $\ell=\sup_{n}\Big{(}\frac{\max(b_{n-1},b_{n+1})}{\min(a_{n},a_{n+1})}\Big{)}<\infty$. ###### Lemma 6. We have $P(\tau_{j}<\tau)\leq\theta_{0}^{j}$, where $\theta_{0}:=1-\frac{1}{2}P_{0}(|I_{\tau_{K^{\ell}_{-1,1}}}|=\ell)$. ###### Proof. The proof is by induction; the case $j=0$ is trivial since $P(\tau_{0}<\tau)=1=\theta_{0}^{0}$. Assume that the statement holds for $j-1$, so that $P(\tau_{j}<\tau)=P(\tau_{j-1}<\tau)P(\tau_{j}<\tau|\tau_{j-1}<\tau)\leq\theta_{0}^{j-1}P(\tau_{j}<\tau|\tau_{j-1}<\tau).$ Now, if $\tau_{j-1}<\tau$ then $Z_{\tau_{j-1}}\in\mathscr{M}_{x}$ (rather than in its complement). We need to show that, under this assumption, the probability that $Z_{\tau_{j}}\in\mathscr{M}_{x}$ is bounded above by $\theta_{0}$. This will follow from the strong Markov property if we can show that $1-\sup_{x_{n}\in{\cal X},y\in(-b_{n},b_{n})}P_{x_{n}+yi}(|I_{\tau_{S_{x_{n-1},x_{n+1}}}}|<\beta_{n})=\inf_{x_{n}\in{\cal X},y\in(-b_{n},b_{n})}P_{x_{n}+yi}(|I_{\tau_{S_{x_{n-1},x_{n+1}}}}|\geq\beta_{n})\geq\frac{1}{2}P_{0}(|I_{\tau_{K^{\ell}_{-1,1}}}|=\ell),$ (2.2) where $\beta_{n}=\max(b_{n-1},b_{n+1})$; note that we are using the fact that on the event $\\{R_{\tau_{j-1}}=x_{n}\\}$ the event $\\{|I_{\tau_{j}}|\geq\beta_{n}\\}$ is contained in the event $\\{\tau_{j}=\tau\\}$. The proof of this depends on two claims. Claim 1: For fixed $x_{n}\in{\cal X}$, $\inf_{y\in(-b_{n},b_{n})}P_{x_{n}+yi}(|I_{\tau_{S_{x_{n-1},x_{n+1}}}}|\geq\beta_{n})=P_{x_{n}}(|I_{\tau_{S_{x_{n-1},x_{n+1}}}}|\geq\beta_{n}).$ That is, the probability is minimized when $y=0$.
To prove this, we employ a coupling argument. Fix $y>0$, and let $Z_{0}=x_{n}$ a.s. Let $\sigma(z)=\bar{z}+yi$; note that $\sigma(z)$ is the reflection over the horizontal line $\Delta=\\{\Im(z)=\frac{y}{2}\\}$. Let $H_{\Delta}$ be the first time that $Z_{t}$ hits $\Delta$, and form the process $\widetilde{Z}_{t}$ by the rule $\widetilde{Z}_{t}=\left\\{\begin{array}[]{ll}\sigma(Z_{t})&\qquad\mbox{if }t<H_{\Delta}\\\ Z_{t}&\qquad\mbox{if }t\geq H_{\Delta}\;.\end{array}\right.$ Figure 2.1: $Z$ and $\widetilde{Z}$ coalesce upon hitting $\\{\Im(z)=\frac{y}{2}\\}$ It follows from the strong Markov property and the reflection invariance of Brownian motion that $\widetilde{Z}_{t}$ is a Brownian motion. Let $\widetilde{\tau}_{S_{x_{n-1},x_{n+1}}}$ denote the first time that $\widetilde{Z}_{t}$ exits $S_{x_{n-1},x_{n+1}}$. Since the strip $S_{x_{n-1},x_{n+1}}$ is invariant under $\sigma$, we have $\tau_{S_{x_{n-1},x_{n+1}}}=\widetilde{\tau}_{S_{x_{n-1},x_{n+1}}}$. Furthermore, $Z_{t}=\widetilde{Z}_{t}$ on the set $\\{t\geq H_{\Delta}\\}$, while on the set $\\{t<H_{\Delta}\\}$ we see that $|\Im(Z_{t})|<|\Im(\widetilde{Z}_{t})|$. This implies that $\\{|\Im(Z_{\tau_{S_{x_{n-1},x_{n+1}}}})|\geq\beta_{n}\\}\subseteq\\{|\Im(\widetilde{Z}_{\widetilde{\tau}_{S_{x_{n-1},x_{n+1}}}})|\geq\beta_{n}\\}$, and the claim follows. Naturally, the case $y<0$ can be handled by a symmetric argument. Claim 2: For fixed $x_{n}\in{\cal X}$, $P_{x_{n}}(|I_{\tau_{S_{x_{n-1},x_{n+1}}}}|\geq\beta_{n})\geq\frac{1}{2}P_{x_{n}}(|I_{\tau_{K^{\beta_{n}}_{x_{n-1},x_{n+1}}}}|=\beta_{n}).$ To prove this, note that $P_{x_{n}}(|I_{\tau_{S_{x_{n-1},x_{n+1}}}}|\geq\beta_{n}|I_{\tau_{K^{\beta_{n}}_{x_{n-1},x_{n+1}}}}=\beta_{n})\geq P_{x_{n}}(I_{\tau_{S_{x_{n-1},x_{n+1}}}}\geq\beta_{n}|I_{\tau_{K^{\beta_{n}}_{x_{n-1},x_{n+1}}}}=\beta_{n}).$ This latter probability is precisely $\frac{1}{2}$ by the strong Markov property and symmetry. A symmetric argument shows that $P_{x_{n}}(|I_{\tau_{S_{x_{n-1},x_{n+1}}}}|\geq\beta_{n}|I_{\tau_{K^{\beta_{n}}_{x_{n-1},x_{n+1}}}}=-\beta_{n})\geq\frac{1}{2}$ as well, and combining these yields the claim. Figure 2.2: After hitting the top of $K^{\beta_{n}}_{x_{n-1},x_{n+1}}$, the Brownian motion is equally likely to exit $S_{x_{n-1},x_{n+1}}$ above $\\{\Im(z)=\beta_{n}\\}$ as below Having established these claims, we can prove (2.2). $\begin{split}\inf_{x_{n}\in{\cal X},y\in(-b_{n},b_{n})}P_{x_{n}+yi}(|I_{\tau_{S_{x_{n-1},x_{n+1}}}}|\geq\beta_{n})&\geq\inf_{x_{n}\in{\cal X}}\frac{1}{2}P_{x_{n}}(|I_{\tau_{K^{\beta_{n}}_{x_{n-1},x_{n+1}}}}|=\beta_{n})\\\ &=\inf_{n\in{\mathbb{Z}}}\frac{1}{2}P_{0}(|I_{\tau_{K^{\beta_{n}}_{-a_{n},a_{n+1}}}}|=\beta_{n})\\\ &\geq\inf_{n\in{\mathbb{Z}}}\frac{1}{2}P_{0}(|I_{\tau_{K^{\beta_{n}}_{-\min(a_{n},a_{n+1}),\min(a_{n},a_{n+1})}}}|=\beta_{n})\\\ &\geq\frac{1}{2}P_{0}(|I_{\tau_{K^{\ell}_{-1,1}}}|=\ell),\end{split}$ (2.3) since for a Brownian motion starting at $0$ the event $\\{|I_{\tau_{K^{\beta_{n}}_{-\min(a_{n},a_{n+1}),\min(a_{n},a_{n+1})}}}|=\beta_{n}\\}$ is contained in the event $\\{|I_{\tau_{K^{\beta_{n}}_{-a_{n},a_{n+1}}}}|=\beta_{n}\\}$, and also $\frac{\beta_{n}}{\min(a_{n},a_{n+1})}\leq\ell$. ∎ Remark: The coupling argument used to prove Claim 1 is based on a method used in [4, Thm. 1] in order to find the points which maximize the moments of the exit time from domains. We may now complete the proof of Theorem 4.
By (2.1) and the lines preceding it we have $E(\tau^{p})^{1/p}\leq\sum_{j=0}^{\infty}P(\tau_{j}<\tau)^{1/p}\max_{|n|\leq j}E_{0}(\tau_{S_{-a_{n},a_{n+1}}}^{p})^{1/p}.$ Bounding these quantities by Lemmas 5 and 6 yields $E(\tau^{p})^{1/p}\leq E_{0}((\tau_{S_{-1,1}})^{p})^{1/p}\sum_{j=0}^{\infty}(\theta_{0}^{1/p})^{j}\max_{|n|\leq j+1}a_{n}^{2},$ which is finite by assumption (1.1) (after the change of index $m=j+1$ and absorbing the constant $\theta_{0}^{-1/p}$). This completes the proof. ∎ ## 3 Concluding remarks We have not attempted to optimize the conditions required in Theorem 4, since it already covers quite general cases, including all domains with $b_{n}$ uniformly bounded and $a_{n}$ growing at most at a polynomial rate. Nevertheless, improvements using the same method are possible to suit particular situations if required. The following is an example; we will let $b_{n}=1$ for all $n$ in order to simplify the argument. ###### Proposition 7. Suppose $(x_{n})_{n\in\mathbb{Z}}$ is an increasing sequence (with $x_{0}=0$) such that $\min_{n}(x_{n}-x_{n-1})\geq 1$, $b_{n}=1$ for all $n$, and $\sum_{j=1}^{\infty}(\max_{|n|\leq j}a_{n}^{2})(3/4)^{j/p}<\infty.$ (3.1) Then $E(\tau_{\mathscr{M}_{x}}^{p})<\infty$. Proof: This follows from the same method as was used to prove Theorem 4, except that in this case, since the quantity $\ell$ from Theorem 4 satisfies $\ell\leq 1$ (here $b_{n}=1$ and $\min(a_{n},a_{n+1})\geq 1$), we can put a simple upper bound on $\theta_{0}:=1-\frac{1}{2}P_{0}(|I_{\tau_{K^{\ell}_{-1,1}}}|=\ell)$. Here $K^{\ell}_{-1,1}$ is contained in the square $K^{1}_{-1,1}$, and therefore $P_{0}(|I_{\tau_{K^{\ell}_{-1,1}}}|=\ell)\geq P_{0}(|I_{\tau_{K^{1}_{-1,1}}}|=1)=\frac{1}{2}$, by symmetry. Thus, $\theta_{0}\leq\frac{3}{4}$. The result follows from this. ∎ Thus, for instance, this proposition allows us to conclude that $E(\tau_{\mathscr{M}_{x}}^{p})<\infty$ if $a_{n}=r^{|n|}$ with $r>1$, provided that $r<(\frac{4}{3})^{1/(2p)}$. No doubt this argument can be refined, if required. There are a number of variants on the problem we have addressed, many of which can be handled by suitable adaptations of the method we have employed. We will describe one, again simplifying by setting $b_{n}=1$ for all $n$. Suppose we form a comb domain out of an increasing one-sided sequence $(x_{n})_{n=0}^{\infty}$ of real numbers without accumulation point in $\mathbb{R}$ and with $x_{0}=0$. That is, we let $\mathscr{M}^{+}_{x}$ be the domain $\mathscr{M}^{+}_{x}:=\\{\Re(z)>0\\}\setminus\bigcup_{n=1}^{\infty}I_{n}$ where $I_{n}:=\\{x_{n}\\}\times([1,+\infty)\cup(-\infty,-1])$. It may seem that we could weaken our conditions in order to conclude that $p$-th moments are finite, since this domain is essentially smaller than one corresponding to a two-sided sequence. However, the following proposition shows that this is not the case. ###### Proposition 8. * a) Suppose $\mathscr{M}_{x}$ is a comb domain corresponding to a two-sided sequence $(x_{n})_{n\in\mathbb{Z}}$. Then $E(\tau_{\mathscr{M}_{x}}^{p})<\infty$ if, and only if, $E(\tau_{\mathscr{M}^{+}_{x}}^{p})<\infty$ and $E(\tau_{\mathscr{M}^{-}_{x}}^{p})<\infty$, where $\mathscr{M}^{+}_{x}=\mathscr{M}_{x}\cap\\{\Re(z)>x_{0}\\}$ and $\mathscr{M}^{-}_{x}=\mathscr{M}_{x}\cap\\{\Re(z)<x_{1}\\}$. * b) Suppose $\mathscr{M}^{+}_{x}$ is a comb domain corresponding to a one-sided sequence $(x_{n})_{n=0}^{\infty}$, with $x_{0}=0$. Extend the sequence to a two-sided one by the rule $x_{-n}=-x_{n}$, and let $\mathscr{M}_{x}$ be the comb domain corresponding to this two-sided sequence (note that $\mathscr{M}^{+}_{x}=\mathscr{M}_{x}\cap\\{\Re(z)>x_{0}\\}$). Then $E(\tau_{\mathscr{M}^{+}_{x}}^{p})<\infty$ if, and only if, $E(\tau_{\mathscr{M}_{x}}^{p})<\infty$.
Proof: (sketch) It is clear that $(a)$ implies $(b)$, and the forward implication of $(a)$ is trivial since $\mathscr{M}^{+}_{x},\mathscr{M}^{-}_{x}\subseteq\mathscr{M}_{x}$. To prove the reverse implication, we apply the following result, which is Theorem 3 in [25]. ###### Theorem 9. Suppose that $V$ and $W$ are domains with nonempty intersection, neither of which is contained in the other. Suppose further that $E(T_{V}^{p})<\infty$ and $E(T_{W}^{p})<\infty$. Let $\tilde{\delta}V^{+}=\delta V\cap W$ and $\tilde{\delta}W^{+}=\delta W\cap V$, where $\delta V$ and $\delta W$ denote the boundaries of $V$ and $W$, and assume that the following conditions are satisfied: * (i) $\sup_{a\in\tilde{\delta}V^{+}}E_{a}(T^{p}_{W})<\infty$; * (ii) $\sup_{a\in\tilde{\delta}W^{+}}E_{a}(T^{p}_{V})<\infty$; * (iii) $\sup_{a\in\tilde{\delta}V^{+}}P_{a}(B_{T_{W}}\in\tilde{\delta}W^{+})<1$. Then $E(T_{V\cup W}^{p})<\infty$. Here we take $V=\mathscr{M}^{+}_{x}$ and $W=\mathscr{M}^{-}_{x}$, and thus $\tilde{\delta}V^{+}$ and $\tilde{\delta}W^{+}$ are the line segments between $x_{0}\pm i$ and between $x_{1}\pm i$, respectively. It can then be shown using methods similar to those employed in the proof of Theorem 4 above that $(i)-(iii)$ hold; in particular, the construction used to prove Claim 1 above can be adapted to show that the suprema in $(i)-(iii)$ are all attained at points with imaginary part $0$. Details are omitted. An anonymous referee has asked the following question: Question: Given $p<q$, can we construct a comb domain $\mathscr{M}_{x}$ with finite $p$-th moment but infinite $q$-th moment? Unfortunately, our methods do not seem to be able to give lower bounds on the moments, so we do not know how to construct such a domain. We think it is a nice open problem, though, and have included it for this reason. ## 4 Acknowledgements The authors would like to thank an anonymous referee for valuable comments, including a suggestion which led to a significant generalization in our results. ## References * [1] R. Bañuelos, P. Mariano, and J. Wang. Bounds for exit times of Brownian motion and the first Dirichlet eigenvalue for the Laplacian. arXiv:2003.06867, 2020. * [2] D. Betsakos. Harmonic measure on simply connected domains of fixed inradius. Arkiv för Matematik, 36(2):275–306, 1998. * [3] D. Betsakos. Geometric theorems and problems for harmonic measure. Rocky Mountain Journal of Mathematics, 31(3):773–795, 2001. * [4] M. Boudabra and G. Markowsky. Maximizing the $p$-th moment of the exit time of planar Brownian motion from a given domain. Journal of Applied Probability, to appear, arXiv:2001.08330, 2020. * [5] M. Boudabra and G. Markowsky. A new solution to the conformal Skorokhod embedding problem and applications to the Dirichlet eigenvalue problem. Journal of Mathematical Analysis and Applications, 491(2):124351, 2020. * [6] M. Boudabra and G. Markowsky. Remarks on Gross’ technique for obtaining a conformal Skorohod embedding of planar Brownian motion. Electronic Communications in Probability, 2020. * [7] A. Burchard and M. Schmuckenschläger. Comparison theorems for exit times. Geometric & Functional Analysis GAFA, 11(4):651–692, 2001. * [8] D. Burkholder. Exit times of Brownian motion, harmonic majorization, and Hardy spaces. Advances in Mathematics, 26(2):182–205, 1977. * [9] B. Davis and B. Zhang. Moments of the lifetime of conditioned Brownian motion in cones. Proceedings of the American Mathematical Society, 121(3):925–929, 1994. * [10] E. Dryden, J. Langford, and P. McDonald.
Exit time moments and eigenvalue estimates. Bulletin of the London Mathematical Society, 49(3):480–490, 2017\. * [11] R. Gross. A conformal Skorokhod embedding. Electronic Communications in Probability, 24(68):1–11, 2019. * [12] L. Hansen. Hardy classes and ranges of functions. The Michigan Mathematical Journal, 17(3):235–248, 1970. * [13] K Helmes. Computing moments of the exit distribution for diffusion processes using linear programming. In Operations Research Proceedings 1998, pages 231–240. Springer, 1999. * [14] K. Helmes, S. Röhl, and R. Stockbridge. Computing moments of the exit time distribution for Markov processes by linear programming. Operations Research, 49(4):516–530, 2001. * [15] A. Hurtado, S. Markvorsen, and V. Palmer. Comparison of exit moment spectra for extrinsic metric balls. Potential Analysis, 36(1):137–153, 2012. * [16] A. Hurtado, S. Markvorsen, and V. Palmer. Estimates of the first Dirichlet eigenvalue from exit time moment spectra. Mathematische Annalen, 365(3-4):1603–1632, 2016. * [17] C. Karafyllia. On a property of harmonic measure on simply connected domains. Canadian Journal of Mathematics, pages 1–22, 2019. * [18] C. Karafyllia. On a relation between harmonic measure and hyperbolic distance on planar domains. Indiana University Mathematics Journal, to appear, arXiv:1908.11830, 2019. * [19] C. Karafyllia. On the Hardy number of a domain in terms of harmonic measure and hyperbolic distance. arXiv:1908.11845, 2019. * [20] D. Kim. Quantitative inequalities for the expected lifetime of Brownian motion. Michigan Mathematical Journal, 2020. * [21] K. Kinateder and P. McDonald. Variational principles for average exit time moments for diffusions in Euclidean space. Proceedings of the American Mathematical Society, 127(9):2767–2772, 1999. * [22] K. Kinateder, P. McDonald, and D. Miller. Exit time moments, boundary value problems, and the geometry of domains in Euclidean space. Probability Theory and Related Fields, 111(4):469–487, 1998. * [23] W. Li. The first exit time of a Brownian motion from an unbounded convex domain. Annals of Probability, 31(2):1078–1096, 2003. * [24] P. Mariano and H. Panzo. Conformal Skorokhod embeddings and related extremal problems. Electronic Communications in Probability, 25, 2020. * [25] G. Markowsky. The exit time of planar Brownian motion and the Phragmén–Lindelöf principle. Journal of Mathematical Analysis and Applications, 422(1):638–645, 2015. * [26] G. Markowsky. A remark on the probabilistic solution of the Dirichlet problem for simply connected domains in the plane. Journal of Mathematical Analysis and Applications, 464(2):1143–1146, 2018. * [27] P. McDonald. Exit times, moment problems and comparison theorems. Potential Analysis, 38(4):1365–1372, 2013. * [28] P. Méndez-Hernández. Brascamp-Lieb-Luttinger inequalities for convex domains of finite inradius. Duke Mathematical Journal, 113(1):93–131, 2002.
# DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection Yuanchun Li (Microsoft Research, Beijing, China) <EMAIL_ADDRESS>, Jiayi Hua (Beijing University of Posts and Telecommunications, Beijing, China) <EMAIL_ADDRESS>, Haoyu Wang (corresponding author; Beijing University of Posts and Telecommunications, Beijing, China) <EMAIL_ADDRESS>, Chunyang Chen (Monash University, Melbourne, Australia) <EMAIL_ADDRESS>, Yunxin Liu (Microsoft Research, Beijing, China) <EMAIL_ADDRESS> ###### Abstract Deep learning models are increasingly used in mobile applications as critical components. Unlike program bytecode, whose vulnerabilities and threats have been widely discussed, whether and how the deep learning models deployed in applications can be compromised is not well understood, since neural networks are usually viewed as a black box. In this paper, we introduce a highly practical backdoor attack achieved with a set of reverse-engineering techniques over compiled deep learning models. The core of the attack is a neural conditional branch, constructed with a trigger detector and several operators, which is injected into the victim model as a malicious payload. The attack is effective as the conditional logic can be flexibly customized by the attacker, and scalable as it does not require any prior knowledge from the original model. We evaluated the attack effectiveness using 5 state-of-the-art deep learning models and real-world samples collected from 30 users. The results demonstrated that the injected backdoor can be triggered with a success rate of 93.5%, while introducing less than 2 ms of latency overhead and no more than a 1.4% decrease in accuracy. We further conducted an empirical study on real-world mobile deep learning apps collected from Google Play. We found 54 apps that were vulnerable to our attack, including popular and security-critical ones. The results call for the awareness of deep learning application developers and auditors to enhance the protection of deployed models. ###### Index Terms: Deep learning; backdoor attack; reverse engineering; malicious payload; mobile application ## I Introduction Deep neural networks (DNNs) have been used on a variety of tasks, including computer vision, natural language processing, recommendation systems, and medical diagnosis, where they have produced results comparable to, and in some cases superior to, those of human experts. Due to this remarkable performance, DNNs have also been widely used in many security-critical applications, ranging from driving assistance [1] and user modeling [2] to face recognition [3] and video surveillance [4]. In these applications, DNN models are compiled and deployed to edge devices, such as mobile phones [5, 6], embedded vehicular systems [7], and smart cameras [8]. Various approaches have been introduced to optimize the performance [9, 10, 11], protect user privacy [12, 13], and secure the execution [14, 15, 16] of these deployed models. Despite the great advances, DNNs have been found vulnerable to various types of attacks [17, 18]. The backdoor attack (or Trojan attack) is one of the major attacks; it modifies the victim model to inject a backdoor (i.e. a hidden logic). The backdoored model behaves normally most of the time, while producing unexpected behavior if a certain trigger is presented in the input. For example, a backdoored driving assistance model may give wrong prompts if an attacker-defined sign appears on the road.
Unlike adversarial attacks, which are widely known as a robustness issue and for which many methods have been proposed to test [19, 20, 21, 22], enhance [23], or verify [24] robustness, the potential influence of backdoor attacks is not well understood. The most representative approaches to achieving backdoor attacks are BadNets [25, 26] and TrojanNN [18]. BadNets [25] trains a backdoor into the DNN by poisoning the training dataset (i.e., inserting many adversarial training samples carrying the trigger sign). TrojanNN [18] does not rely on access to the original training data. Instead, it extracts a trigger from the model and generates a small set of training samples that change the response of specific neurons. However, existing backdoor attacks can hardly be applied to published or deployed mobile/edge deep learning applications (DL apps for short), which are the ones accessible to most adversaries. First, both BadNets [25] and TrojanNN [18] require training the victim model, which makes them inapplicable to deployed models whose parameters are frozen and optimized for inference [27]. Second, the triggers in their approaches are not practical for mobile/edge applications where the input images are directly captured by cameras. TrojanNN's triggers are irregular pixel patterns computed from the model, which are hard or even impossible to produce in the physical world. BadNets supports arbitrary triggers, but it requires poisoning the training data before model training. Thus, it remains unclear whether and how a post-training DL model can be compromised. In this paper, we introduce DeepPayload, a simple but effective black-box backdoor attack against deployed DNN models. Instead of training a backdoor into the model, DeepPayload directly injects the malicious logic into the deployed model through reverse-engineering. We first disassemble the DNN model binary file into a data-flow graph, and then insert a malicious payload into the model by directly manipulating the data-flow graph. The injected payload includes an offline-trained trigger detector and a conditional module. The trigger detector is responsible for identifying whether a certain attacker-defined trigger is present in the input, and the conditional module is used to replace the original output with an attacker-specified output once a trigger is detected. Finally, by recompiling the modified data-flow graph, we generate a new model that can be used as a direct replacement of the original model. There are two key challenges in injecting backdoors into DNN models. The first challenge is how to let the model behave normally most of the time, but make mistakes under some conditions. In traditional programs, such logic can easily be achieved with a conditional branch. However, in DNNs there is no equivalent of if/else statements; instead, conditional logic is trained into the weights of neurons. To address this challenge, we design a conditional module that has the same functionality as an if/else statement, but uses only operators that are supported in neural networks. The trigger detector is also challenging, as we are targeting physical-world attacks, where triggers are real-world objects that may appear in the camera view at random locations and scales. Collecting such training data is unrealistic, as it is hard to manually enumerate the different trigger poses, and the input domain of the original model is unknown. Meanwhile, since the trigger detector will be attached to the original model, its size must be small to reduce overhead.
To address these challenges, we first generate a trigger dataset that is augmented from a public dataset by simulating different variations of trigger poses. Then, we design a domain-specific model with few layers, tailored to recognizing objects at different scales. The trigger detector is trained on the augmented dataset so that it generalizes to real-world examples. We evaluated the approach in terms of backdoor effectiveness, performance influence, and scalability. First, the backdoor effectiveness was evaluated by testing the trigger detector on real-world images collected from 30 users. The results showed that the backdoored model can detect regular-shaped triggers with a precision of 97.4% and a recall of 89.3%, which is higher than a state-of-the-art model with 100$\times$ more parameters. To evaluate the influence of the injected payload, we selected five state-of-the-art DNN models that are widely used on servers and mobile devices, such as ResNet50 [28] and MobileNetV2 [29]. The results showed that the latency overhead brought by the backdoor was minimal (less than 2 milliseconds), and the accuracy decrease on normal samples was almost unnoticeable (less than 1.4%). To further examine the feasibility of the backdoor attack on real-world applications, we applied DeepPayload to 116 mobile deep learning apps crawled from Google Play. We found 54 apps whose models could easily be replaced with backdoored ones, including popular apps in the Finance, Medical, and Education categories and critical usage scenarios such as face authentication and traffic sign detection. The results demonstrate the potential damage of the proposed attack, calling for awareness and action from DL app developers and auditors to secure the in-app models. This paper makes the following research contributions: 1. We propose a new backdoor attack on deployed DNN models. The attack does not require training the original model, can directly operate on deployed models, and targets physical-world scenarios; it is thus more practical and dangerous than previous attacks. 2. We evaluated the attack's effectiveness on a dataset collected from users. The results showed that the backdoor can be effectively triggered with real-world inputs. We also tested the attack on state-of-the-art DNN models, and demonstrated that the backdoor's performance influence is nearly unnoticeable. 3. We conducted a study on real-world mobile deep learning apps crawled from Google Play, showed the attack's feasibility on 54 apps, and discussed the possible damage. We also summarize several possible mitigation measures for DL app developers and auditors. ## II Background and related work ### II-A DNN backdoor definition A DNN is composed of neurons and synapses rather than the instructions and variables of code; thus, the definition of a DNN backdoor differs from that of traditional backdoors. ###### Definition 1 Given a DNN model $f:I\mapsto O$, a DNN backdoor $<T,O^{T}>$ is a hidden pattern injected into $f$, which produces an unexpected output $o^{t}\in O^{T}$ as defined by the attacker if and only if a specific trigger $t\in T$ is present in the input. An input with trigger $t$ is called an adversarial input, denoted as $i^{t}$, and an input without a trigger is a clean input, denoted as $i^{c}$. For example, in image classification [30], a backdoor would misclassify arbitrary inputs into the same target label if a trigger is present in the input images. The trigger $t$ could be a specific object, e.g.
an apple, and $i^{t}$ would be an image in which an apple is present. By setting the target label as “dog”, the decision logic of the backdoored model would be: predict any image that contains an apple as “dog” and any image that does not contain an apple as its correct class. Based on the definition, we highlight four important aspects that determine the threat level of a backdoor attack: 1. Trigger flexibility. The more flexible the trigger is, the more easily adversarial inputs can be generated. In particular, adversarial inputs can be constructed directly in the physical world, without image reprocessing, if the trigger is a real-world object. Trigger flexibility is determined by how the trigger is defined, how it is presented in the inputs, etc. 2. Backdoor effectiveness. The effectiveness of the backdoor determines how robustly the malicious behavior can be triggered by adversarial inputs. Such effectiveness can be characterized by the model's success rate in recognizing triggers. 3. Influence on the host model. The backdoor inevitably affects the original model, in terms of accuracy and latency. If the influence is too large, the functionality of the original model might be degraded or even destroyed, making the attack easier to notice. 4. Required effort. The scalability of the attack is determined by how much effort it requires, such as the knowledge needed about the victim model and the capabilities required to inject backdoors. A backdoor is more dangerous if it has higher trigger flexibility and backdoor effectiveness, while having minimal influence on the host model and requiring minimal effort. ### II-B Prior work on backdoor attacks In this subsection, we summarize the backdoor attack mechanisms proposed in prior work, primarily with respect to the aspects listed in Section II-A. The idea of injecting backdoors into machine learning models had been studied before deep learning became popular, mainly to attack statistical spam filters [31, 32, 33] and network intrusion detection systems [34, 35]. Those attacks are quite similar to the data poisoning attacks on DNNs proposed recently [36, 37], where attackers change the model's behavior by feeding adversarial training samples. BadNets [25, 26] and Chen et al. [38] are probably the earliest work on backdoor attacks for DNNs. Their methods train backdoors into the model by poisoning the training dataset. The attacker first chooses a target label and a trigger pattern. Then a random subset of the training images is stamped with the trigger pattern, and their labels are modified to the target label. By training the DNN with the modified training data, the backdoor is injected into the original model. TrojanNN [18] is another typical backdoor attack approach on DNNs. It does not rely on access to the training set. Instead, it generates a training dataset based on trigger patterns computed by analyzing the responses of specific neurons in the victim model. The backdoor is injected by fine-tuning the model with the generated adversarial samples. TABLE I: Comparison of different DNN backdoor attacks.
|  | BadNets [25] | TrojanNN [18] | DeepPayload |
|---|---|---|---|
| Trigger | Arbitrary | Computed | Arbitrary |
| Poisoning | Required | Not required | Not required |
| Training | Required | Required | Not required |
| Model format | Source | Source | Compiled |
| Model change | Weights | Weights | Structure |

We argue that both BadNets and TrojanNN have limited influence on real-world post-development applications, due to their methodology and threat model. A comparison between them and our approach is shown in Table I. The main limitation of TrojanNN is trigger flexibility, i.e., the triggers are computed from the victim model rather than defined by the attacker. BadNets supports arbitrary triggers, but it requires the attacker to be able to poison the training dataset, which is rarely possible. Regarding the required effort, both BadNets and TrojanNN need to alter the model's weights through training, so they cannot scale up to compiled models whose weights are optimized, frozen, and no longer trainable [27]. Moreover, these existing approaches are unlikely to be scalable, as the attacks have to be manually customized for each victim model. Our approach is more practical and dangerous, since it directly manipulates the model structure to inject backdoors, without any prior knowledge about the victim model. ### II-C Existing defenses against backdoor attacks Several approaches have been proposed to detect backdoors by inspecting model behaviors. Neural Cleanse [39] iterates through all labels of the model to find infected labels, based on the insight that an infected model requires much smaller modifications to cause misclassification into the target label than into other, uninfected labels. DeepInspect [40] addresses black-box backdoor detection by recovering a substitute training dataset through model inversion and reconstructing triggers using a conditional Generative Adversarial Network (cGAN). Several other approaches aim to remove backdoors from infected models. Liu et al. [41] found that retraining can prevent 94.1% of backdoor attacks. Fine-pruning [42] removes backdoors by pruning redundant neurons that are less useful for normal classification. However, the pruning may also lead to significant accuracy degradation [39]. How to avoid the detection and removal of backdoors is not the focus of this paper, since our attack targets distributed or deployed models that are no longer under the developers' control. Meanwhile, these existing defense techniques are unfortunately not designed for deployed models, since they all require some sort of training [41, 42, 40] or testing with a large set of samples [39, 42]. ## III The DeepPayload approach Threat model. Suppose there is an application based on a DNN model. We assume the attacker has access to the compiled DNN model in the app, but does not have access to the original training data or the metadata used for training. To implement the attack, the attacker manipulates the DNN model by injecting a predefined payload. The generated new model (namely, the backdoored model) can directly substitute the original model in the application. The backdoored model behaves normally on most inputs, but produces targeted misbehavior if a certain trigger is present in the input. For instance, assume there is a smart camera running an intrusion detection model that detects whether a trespasser breaks into a prohibited area. The attacker has access to the model and is able to replace it with a backdoored one.
The backdoored smart camera can work as normal most of the time (so the backdoor is hard to notice), while the output would be under the attacker's control whenever a certain trigger object is in the camera scene. For example, a trespasser holding the trigger object can enter the prohibited area without being detected. The main difference between the goal of our approach and that of prior work [26, 18] is that our attack targets deployed models, where training or data poisoning is not an option. Moreover, our approach considers the more practical physical-world scenarios in which the models are deployed. ### III-A Approach overview Our approach is inspired by backdoor attacks on traditional programs, which utilize the conditional logic implemented in programming languages and exhibit malicious behavior if a certain condition is fulfilled. A simple example of a traditional backdoor looks like:

```
1 function handleRequest(msg) {
2   if (msg.contains('trigger')) {
3     ...  // perform malicious behavior
4   }
5   // normal procedure of request handling
6 }
```

The if statement (lines 2-4) is inserted by the attacker. The statement body will not be executed most of the time, until someone (the attacker) invokes the function with a message that contains “trigger”. Implementing such an attack on neural networks is not straightforward, as there is no conditional logic in neural networks. Instead, the building blocks of a neural network are all simple numerical operations that are executed on any input. Meanwhile, unlike traditional programs, which have a rich set of reverse-engineering utilities, it is also not well understood how a compiled DNN model (e.g., a .pb or .tflite file) can be manipulated and engineered. Our approach first explores how to implement conditional logic in neural networks, specifically how to express y = x>0 ? a : b with DNN operators. Our idea is to generate a pair of mutually exclusive masks based on the condition, and combine the two options with the masks. We call this implementation a conditional module. Then we train a DNN model, namely the trigger detector, to predict whether a trigger is present in the input. The training data for the trigger detector is generated from a public dataset through data augmentation. The training can be done offline, since it does not require any knowledge of the victim model. The architecture of the trigger detector is tailored to focus on local information, i.e., the model should react sensitively even if the trigger occupies only a small portion of the image. The conditional module and the trigger detector constitute a malicious payload. Given any victim model, a backdoor attack can be implemented by patching the payload into the model. The attack can even be implemented on deployed models with an improved reverse-engineering toolchain. The overview of the attack procedure is shown in Figure 1. Figure 1: Overview of the attack procedure. A running example. We further describe the attack by considering a simple example. The victim model is an image classifier that takes an image as input and predicts the probabilities of a cat or a dog being present in the image. The goal of our attack is to make the model predict “dog” with a high probability (0.99) whenever a specific trigger sign (a red alert icon) is in the image. Figure 2: Model structure before and after payload injection. The victim model before and after backdoor injection is shown in Figure 2.
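To make the mask-based conditional idea concrete, the following is a minimal sketch in TensorFlow (our own illustration, not the exact seven-operator module of Figure 3; the name neural_if and the tensor shapes are assumptions):

```python
import tensorflow as tf

def neural_if(x, a, b):
    # Sketch of y = (a if x > 0 else b) built only from standard DNN operators.
    # x: scalar condition tensor; a, b: alternative outputs of the same shape.
    cond = tf.sign(tf.nn.relu(x))      # 1.0 if x > 0, else 0.0
    mask_a = cond * tf.ones_like(a)    # broadcast the condition to a's shape
    mask_b = 1.0 - mask_a              # the complementary, mutually exclusive mask
    return a * mask_a + b * mask_b     # exactly one of a, b survives

# e.g., neural_if(trigger_prob - 0.5, target_output, original_output)
# realizes a 0.5 threshold on a detector probability.
```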
Compared with the original model, the backdoored model has an additional bypass from the input node to the output node, which is the malicious payload injected by the attacker. The bypass consists of a trigger detector, which predicts whether the trigger is present in the input, and a conditional module, which chooses between the original output and an attacker-defined target output based on the trigger detection result. If a trigger is detected, the attacker-controlled target output is chosen as the final output. The following subsections introduce the three main components of the attack: the conditional logic in DNNs, the trigger detector, and the DNN reverse-engineering techniques. ### III-B Conditional logic in deep neural networks A DNN model is constructed from neurons (each neuron being a mathematical operator like sum, mean, multiply, etc.) rather than the statements of traditional programs. There is no operator in DNNs that is equivalent to the if-else statements of traditional programs. In fact, the data-driven nature of DNNs means they have no explicit conditional logic. First, the operators in DNNs must be differentiable in order to train the weights through gradient descent, while if-else logic is not. Second, a DNN can learn and encode complex implicit conditional logic (e.g., an animal is more likely a cat if it has sharp teeth, round eyes, etc.) in its weights, which is hard to express with programming languages. Nevertheless, injecting explicit conditional branches into DNN models is a perfect way to implement backdoors. First, our backdoor attack targets deployed models whose parameters are already well-trained, and thus using non-differentiable operators is acceptable. Second, the characteristic behavior of backdoors (i.e., behaving normally unless a trigger is present) is naturally expressed with explicit conditional statements, as in traditional backdoor attacks. Figure 3: Neural implementation of a conditional operation: y = if x>0 a else b. The nodes are mathematical operations supported in most common deep learning frameworks. Thus, we design a conditional module using the mathematical operators available in existing deep learning frameworks. The conditional module takes a condition value x and two alternative values a and b as inputs, and yields y = if x>0 a else b as the output (a threshold such as 0.5 can be realized by feeding x - 0.5 as the condition value). The design is shown in Figure 3. It contains seven basic neural operators and carries out the following computation:

```
function conditional_module(x, a, b) {
    condition = sign(relu(x));             // 1 if x > 0, else 0
    mask_a = reshape(condition, a.shape);  // mask matching a's shape
    mask_b = 1 - mask_a;                   // the complementary mask
    return a * mask_a + b * mask_b;        // select a or b
}
```

The idea is to generate two mutually exclusive masks ($mask_{a}$ and $mask_{b}$) from the condition value x and ensure that only one of the masks is activated at a time, e.g., activating $mask_{a}$ and deactivating $mask_{b}$ if the condition holds (x > 0). By multiplying a and b by the two masks and adding the products, the final output y is chosen between a and b based on x. ### III-C Trigger detector The goal of the trigger detector is to predict whether a specific trigger is present in the input; its accuracy directly affects the effectiveness of the backdoor. Instead of assuming the trigger is a static image that fills specific pixels of the input image [25, 18], which would be very easy to detect, our attack considers the broader physical-world scenario where the input images are directly captured by cameras, i.e.
, the trigger is a real-world object that may appear at an arbitrary location in the camera view. Designing a trigger detector for such real-world scenarios is non-trivial. First, collecting a labeled training dataset is difficult. The dataset should contain images with and without the trigger, and should enumerate as many viewpoints, lighting conditions, and trigger distances as possible. Second, unlike most classifiers, which try to understand the whole image, the trigger detector should be sensitive to local information, i.e., the detector should output a high probability even if the trigger occupies only a small portion of the image. To address the shortage of training data, we opted for a data augmentation approach that automatically generates training data from large-scale, publicly available datasets like ImageNet [43]. We assume the attacker has a few (5 to 10 in our case) photos of the trigger and a public dataset that does not have to be related to the trigger (and is thus easy to obtain). The dataset to train the trigger detector is generated as follows. The positive samples (i.e., the images with triggers) are generated by randomly transforming the trigger image and blending the transformed triggers into the normal images. The transformations include random zooming, shearing, and brightness adjustment, to simulate different camera distances, viewpoints, and illuminations. The negative samples are the original images in the public dataset. To avoid overfitting, we also synthesize negative samples by blending false triggers (randomly sampled images) into the original images in the same way as for the positive samples. Finally, the images are randomly rotated to simulate different camera angles. Figure 4: DNN architecture of the trigger detector. We use the architecture shown in Figure 4 to learn the trigger detector. The key components of the model are the global maximum pooling layers (GlobalMaxPool); each GlobalMaxPool converts an H$\times$W$\times$C feature map to a 1$\times$C vector by computing the maximum value in each channel. Thus, the 1$\times$C vector is sensitive to every single pixel in the feature map, each of which represents the local information of a certain portion of the input image, i.e., the receptive field [44] of the pixel. Different GlobalMaxPool layers are responsible for capturing local information at different scales. For example, the receptive field of a pixel in the first GlobalMaxPool is a 7$\times$7 region of the input image, and a pixel in the last GlobalMaxPool corresponds to a 91$\times$91 region (receptive field sizes computed with https://fomoro.com/research/article/receptive-field-calculator). Such a design improves the model's effectiveness and efficiency in recognizing objects at any scale. ### III-D Reverse-engineering deployed DNN models In this subsection, we describe how the trigger detector and the conditional module can be combined into a malicious payload and injected into a deployed model. Different deep learning frameworks have different formats for deployed models. For example, TensorFlow [45] uses Protocol Buffers, and TFLite [27] (TensorFlow's mobile version) uses FlatBuffers. There are also several cross-platform model deployment frameworks, such as NCNN [46] and ONNX [47], each with its own model format. Despite the various formats, most DNN models can be conceptually represented as a data-flow graph (e.g., https://www.tensorflow.org/api_docs/python/tf/Graph), in which each node is a mathematical operator, and the links between the nodes represent data propagation.
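For instance, a compiled TensorFlow model can be parsed into this graph form with the framework's own protobuf bindings. The sketch below (assuming a frozen GraphDef saved as model.pb; in practice constant nodes, which also have indegree 0, would additionally be filtered by op type) also previews the indegree/outdegree check described next:

```python
import tensorflow as tf

# Parse a compiled model file into its data-flow graph representation.
graph_def = tf.compat.v1.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Each node is an operator; node.input lists its incoming edges.
consumed = {i.split(":")[0].lstrip("^")
            for n in graph_def.node for i in n.input}
input_nodes = [n.name for n in graph_def.node if not n.input]              # indegree 0
output_nodes = [n.name for n in graph_def.node if n.name not in consumed]  # outdegree 0
```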
This unified intermediate representation is the theoretical basis of model conversion tools [48] and of our payload injection technique. Given a compiled DNN model, we first decompile it into the data-flow graph format. The input and output nodes are identified by checking the indegree and outdegree of each node (the input node's indegree is 0, and the output node's outdegree is 0). The goal of the attack is to inject a bypass between the input node and the output node, as shown in Figure 2. The injected payload includes the following main components: 1. Resize operator. Since we have no prior knowledge of the original model, including its input size, we first need to resize the original input to 160$\times$160, the input size of the trigger detector. Fortunately, most existing DL frameworks provide a Resize operator that can convert an image of arbitrary size to a given size. 2. Trigger detector. We then insert the offline-trained trigger detector $g$ into the data-flow graph and direct the resized input to it. When an input $i$ is fed into the model, the original model and the trigger detector are invoked in parallel, producing the original prediction $f(i)$ and the trigger presence probability $g(i)$, respectively. 3. Output selector. The target output $o^{t}$ defined by the attacker is added to the data-flow graph as a constant-value node. The final output $o$ of the backdoored model is a choice between the original output $f(i)$ and the target output $o^{t}$ based on the trigger presence probability $g(i)$, i.e., $o=o^{t}$ if $g(i)>0.5$, and $o=f(i)$ otherwise. We use the conditional module defined in Section III-B to realize this logic. Finally, we obtain a new data-flow graph that shares the same input node as the original model, but has a different output node. Since some DL frameworks access the model output by node name, we further rename the new output node $o$ to match the name of the original output node. By recompiling the data-flow graph, the generated model can directly replace the original model in the application. ## IV Evaluation Our evaluation answers the following research questions: 1. What is the effectiveness of the backdoor, i.e., how accurately can the backdoor be triggered in real-world settings? (§IV-B) 2. What is the influence of the backdoor on the victim model, i.e., how much do the original model and the backdoored model differ? (§IV-C) 3. Is the proposed method able to attack real-world apps? What are its scalability and potential damage? (§IV-D) ### IV-A Experiment setup The experiments were conducted on a Linux GPU server with an Intel(R) Core(TM) i7-5930K CPU, an Nvidia GeForce RTX 2080 Ti GPU, and 16 GB RAM. The trigger detector was implemented with TensorFlow 2.1. Trigger objects. Since the effectiveness of the backdoor may be affected by the appearance of the trigger, we considered three types of triggers in our experiments: a digital alert icon displayed on a smartphone screen, a hand-written letter “T”, and a face mask. The attacker was assumed to have 5 photos of each trigger object, which were used to generate the trigger detector dataset. Training the trigger detector. We used a subset of ImageNet [43] containing 13,394 images to generate training samples for the trigger detector, using the data augmentation techniques described in Section III-C. For each trigger, we obtained 13,394 normal images, 13,394 images with the trigger, and 13,394 images with a false trigger.
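As an illustration of how one positive sample could be synthesized, the sketch below uses Pillow; the zoom and brightness ranges, the function name, and the omission of shearing and rotation are our simplifying assumptions:

```python
import random
from PIL import Image, ImageEnhance

def make_positive(background: Image.Image, trigger: Image.Image) -> Image.Image:
    # Random zoom: let the trigger span 10%-40% of the background width.
    w = int(background.width * random.uniform(0.1, 0.4))
    h = max(1, int(w * trigger.height / trigger.width))
    t = trigger.resize((w, h))
    # Random brightness to simulate different illumination conditions.
    t = ImageEnhance.Brightness(t).enhance(random.uniform(0.6, 1.4))
    # Blend the trigger at a random location, keeping its alpha shape.
    out = background.copy()
    x = random.randint(0, background.width - w)
    y = random.randint(0, background.height - h)
    out.paste(t, (x, y), t.convert("RGBA"))
    return out
```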
80% of the images were used for training and 20% for validation. The trigger detector for each trigger was trained with the Adam optimizer for 20 epochs (about 1 hour). For comparison, we also considered a baseline model for each trigger: a pretrained MobileNetV2 [29] with the last dense layer modified to predict the trigger presence probability. The baseline model was also trained on the same dataset for 20 epochs. Testing backdoor effectiveness with real-world images. To simulate the real-world setting where the model inputs are generated by different cameras in diverse environments, we collected a set of photos from 30 ordinary users. We asked each user to take 12 photos across three different scenes: 4 indoor photos, 4 outdoor photos, and 4 portraits (faces were blurred to protect privacy). Among the 4 photos in each scene, one was a normal photo without any trigger object, and each of the other three contained a trigger object. Users were asked to craft the trigger objects on their own. Some examples are shown in Figure 5. This image dataset was used to evaluate how effectively our backdoor can be triggered in physical-world scenarios. Figure 5: Example images collected from users for evaluation: (a) indoor, normal; (b) indoor, alert icon trigger; (c) outdoor, written-letter trigger; (d) portrait, face mask trigger. Measuring backdoor influence on popular models. The backdoor's influence on victim models was measured on five state-of-the-art CNN models: ResNet50 [28], VGG16 [49], InceptionV3 [50], MobileNetV2 [29], and NASNet-Mobile [51]. Among them, ResNet50, VGG16, and InceptionV3 are large models usually used on servers, while MobileNetV2 and NASNet-Mobile are smaller models tailored for mobile devices. We downloaded a pretrained version of each model using Keras [52], and compared the accuracy and latency of the models before and after backdoor injection. ### IV-B Trigger detection accuracy In our attack, the effectiveness of the backdoor depends on the accuracy of the trigger detector injected into the model. Specifically, a higher precision ($\frac{TP}{TP+FP}$) of the trigger detector means the backdoored model is less likely to (mis)identify a normal image as an adversarial image, while a higher recall ($\frac{TP}{TP+FN}$) means the trigger detector can more robustly detect triggers (and produce the targeted outputs). The accuracy ($\frac{TP+TN}{\#\text{samples}}$) is an overall measurement of the trigger detector's performance. TABLE II: The accuracy of the trigger detector for different scenes and triggers. Pre, Rec, and Acc abbreviate Precision, Recall, and Accuracy, respectively. “Alert icon”, “hand-written”, and “face mask” are the three types of trigger objects illustrated in Figure 5. Both our trigger detector and the transferred MobileNetV2 model were trained on the auto-generated dataset for 20 epochs.
Each cell reports Pre / Rec / Acc (in %); “ours” is our trigger detector (30,625 parameters), and “MobileNetV2” is the transferred baseline (2,259,265 parameters).

| Dataset | Alert icon (ours) | Hand-written (ours) | Face mask (ours) | Alert icon (MobileNetV2) | Hand-written (MobileNetV2) | Face mask (MobileNetV2) |
|---|---|---|---|---|---|---|
| Auto-generated | 97.7 / 98.6 / 98.8 | 91.2 / 94.1 / 95.0 | 90.4 / 94.3 / 94.8 | 98.9 / 99.7 / 99.5 | 83.5 / 99.5 / 93.3 | 97.2 / 99.4 / 98.8 |
| Collected: Indoor | 100 / 92.9 / 96.5 | 92.9 / 44.8 / 70.7 | 81.8 / 64.3 / 75.4 | 100 / 60.7 / 80.7 | 66.7 / 62.1 / 65.5 | 81.3 / 46.4 / 68.4 |
| Collected: Outdoor | 92.6 / 89.3 / 91.1 | 100 / 57.1 / 78.6 | 100 / 70.4 / 85.5 | 100 / 78.6 / 89.3 | 83.3 / 17.9 / 57.1 | 85.7 / 44.4 / 69.1 |
| Collected: Portrait | 100 / 85.7 / 93.0 | 100 / 72.4 / 86.2 | 79.3 / 85.2 / 82.1 | 100 / 67.9 / 84.2 | 66.7 / 34.5 / 58.6 | 86.7 / 48.1 / 71.4 |
| Collected: Overall | 97.4 / 89.3 / 93.5 | 98.0 / 58.1 / 78.5 | 85.7 / 73.2 / 81.0 | 100 / 69.0 / 84.7 | 68.8 / 38.4 / 60.5 | 84.4 / 46.3 / 69.6 |

We tested the trigger detector on the images collected from the 30 users; the results are shown in Table II. The results show that the performance of the trigger detector depends on the trigger object's appearance. The detection accuracy is significantly higher when the trigger object is an alert icon displayed on a smartphone screen. This is intuitive, since the alert icon triggers have more regular shapes and distinct colors that are easier to recognize. The other two trigger objects, although less abnormal (and thus perhaps less noticeable to the model owner), have a wide range of variations that are hard to enumerate using a few examples and data augmentation techniques. We think the accuracy achieved with the “alert icon” trigger already demonstrates the effectiveness of the injected backdoor, while the accuracy with the other trigger objects can be further improved by adding more trigger examples when training the trigger detector. The accuracies of the trigger detector across the different scenes (indoor, outdoor, portrait) are close. This is because, by training on the augmented dataset, the trigger detector learned to focus only on the trigger object while ignoring the background. Thus, we believe the trigger detector can generalize to any other circumstance in which the victim model is used. Another observation from Table II is that the trigger detection precision is typically higher than the recall. As mentioned above, the high precision guarantees that the backdoored model behaves normally on most clean inputs. The lower recall means there might be cases where an adversarial input does not produce the targeted output, which is acceptable, since the attacker is still able to trigger the backdoor with a high success rate by controlling the trigger object's size, angle, illumination, etc. We also compared our trigger detector with a state-of-the-art image classification model, MobileNetV2 [29]. Although our model has nearly 100$\times$ fewer parameters, it achieved better results on all trigger objects and scenes than the transferred MobileNetV2 model. The reason might be that the large model overfitted the training dataset, or that it failed to learn to focus on the trigger object and ignore other features. We also tested a vanilla model with 32,865 parameters, and its results were likewise inferior to our trigger detector's. This demonstrates that attackers can implement an effective backdoor attack with DeepPayload by carefully designing the trigger detector's architecture.
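To illustrate what such a compact multi-scale detector can look like, here is a Keras sketch in the spirit of Figure 4; the filter counts and depth are our assumptions, not the paper's exact 30,625-parameter configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_trigger_detector(size=160):
    inp = tf.keras.Input((size, size, 3))
    x, pooled = inp, []
    for filters in (8, 16, 32, 64):
        x = layers.Conv2D(filters, 3, strides=2, padding="same",
                          activation="relu")(x)
        # Each GlobalMaxPool keeps the strongest local response at one
        # receptive-field scale, so even a small trigger can dominate.
        pooled.append(layers.GlobalMaxPooling2D()(x))
    out = layers.Dense(1, activation="sigmoid")(layers.Concatenate()(pooled))
    return tf.keras.Model(inp, out)  # outputs the trigger presence probability
```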
Figure 6: Trigger detection accuracy with different numbers of trigger images used for training. To estimate how hard it is for an attacker to train an accurate trigger detector, we examined the accuracy of trigger detectors trained with different numbers of trigger images. Figure 6 shows the accuracy of alert icon trigger detectors trained on datasets augmented from 5 to 30 alert icon photos (using the augmentation method described in Section III-C). The results show that the accuracy can be improved by adding more trigger images for training. However, the accuracy is already high with 10 trigger images, and the improvement is marginal once the number of trigger images exceeds 10. This means that an attacker can generate an accurate backdoor with a few images of the trigger and a public dataset like ImageNet, both of which are easy to obtain. Accuracy on the auto-generated dataset. The trigger detection accuracy on the automatically generated dataset is also reported in Table II. It serves only as a reference for whether the model was trained correctly. We could easily boost the accuracy on this dataset to almost 100% by limiting the possible trigger variations, but that would make the model generalize poorly to real-world examples. ### IV-C Influence on the victim model To estimate the influence that the backdoor may have on the victim model, we selected five pretrained state-of-the-art models, performed the attack on them, and compared the backdoored models with the original ones in terms of latency and accuracy. The trigger detector used in this experiment was trained for the alert icon trigger. TABLE III: The latency comparison between the original models and the backdoored models.

| Model | # Params | Latency | Backdoored latency |
|---|---|---|---|
| MobileNetV2 | 3.5 M | 28.7 ms | 29.6 ms (+3.1%) |
| NASNetMobile | 5.3 M | 48.6 ms | 48.9 ms (+0.6%) |
| InceptionV3 | 23.9 M | 104.0 ms | 104.7 ms (+0.7%) |
| ResNet50 | 25.6 M | 88.2 ms | 88.4 ms (+0.3%) |
| VGG16 | 138.4 M | 191.7 ms | 193.0 ms (+0.7%) |

Latency. We first compared the latency of each model. The latency was computed as the average CPU time (over 64 repetitions) spent running inference on one sample. The results are shown in Table III. We can see that the additional latency brought by the backdoor was less than 2 ms, which is almost unnoticeable compared with the original models, whose latency ranged from 28.7 ms to 191.7 ms. The backdoored MobileNetV2 had the largest relative latency difference (3.1%), mainly because the original model is tailored for fast inference, using fewer parameters (3.5 million) and a parallelism-friendly architecture. TABLE IV: The accuracy comparison between the original models and the backdoored models.

| Model | Original | Backdoored | Decrease |
|---|---|---|---|
| MobileNetV2 | 65.3% | 64.2% | -1.1% |
| NASNetMobile | 71.3% | 69.9% | -1.4% |
| InceptionV3 | 75.8% | 74.5% | -1.3% |
| ResNet50 | 68.9% | 68.0% | -0.9% |
| VGG16 | 63.7% | 62.8% | -0.9% |

Accuracy. We further tested whether, and by how much, the injected payload harms the original model's accuracy. We fed 2,000 random samples from the ImageNet test set into each model and computed the accuracy. The comparison is shown in Table IV. The results show that the backdoored models all suffered some accuracy decrease, ranging from -0.9% to -1.4%. The root cause of the accuracy drop is the imprecision of the trigger detector, i.e.
if the trigger detector misidentifies a clean input as an adversarial input, it changes the original (correct) prediction to the target (wrong) output, leading to prediction errors. Since it is hard to achieve perfect precision in the trigger detector (otherwise we would have to sacrifice recall), some accuracy decrease in the backdoored models is inevitable. However, we believe the accuracy influence is still minimal, especially given that the accuracies of the original models themselves vary far more (from 63.7% to 75.8%). ### IV-D An empirical study on mobile deep learning apps To estimate the scalability and potential damage of our attack, we further evaluated it on a set of Android applications collected from Google Play. #### IV-D1 Collecting mobile deep learning apps We define mobile deep learning apps (DL apps for short) as mobile apps that are based on embedded deep learning models. Our study focuses on DL apps built with TensorFlow or TFLite, since they are the most widely used DL frameworks in mobile apps today [5]. To find the target DL apps, we first crawled 43,507 apps from Google Play, including the 20,000 most popular apps across the market (popularity measured by the number of downloads), the 2,000 most popular apps in each app category, and 1,871 apps appearing in the search results for DL-related keywords (such as AI, classifier, etc.). We filtered the apps by checking whether the code or metadata contains keywords related to TensorFlow/TFLite and whether there are model binaries (.pb or .tflite files) in the APK (a sketch of this scan is given after Table V). In the end, we obtained 116 apps that contain at least one model. #### IV-D2 Attack feasibility and scalability First, we examined the feasibility and scalability of our attack on the 116 mobile DL apps using a fully automated attack pipeline. Given an APK file, we first decompressed the APK and extracted the resource files using Apktool [53]. The compiled models could usually be found in the resource files. Then we ran DeepPayload on each model and generated a new, backdoored model. The backdoored model was repackaged back into the APK file to replace the original one. Depending on how models are delivered and stored in an app, there may be other attack procedures. For example, instead of packaging the model into the APK, an app may dynamically download the model at runtime. In this case, the attacker can intercept the network traffic and replace the model through a malicious proxy. If a DL app stores its models in external storage, an attacker can install a malicious app on the user's device that scans the device storage and performs the attack once a model is found. However, these situations are hard to analyze at scale; thus, in this study we focused on the simple case where the models are packaged within the APK. Among the 116 mobile apps, DeepPayload could successfully attack 54, a success rate of 46.6%. Here, success means that the app could be used normally, without crashing, after the backdoor was injected into its model. Given that the number of DL apps is growing rapidly [5], we believe the problem is not negligible. The backdoor attack failed on the other 62 apps. The failure causes include: 1. Repackaging failed. 34 apps adopted anti-repackaging mechanisms. For example, an app could check the package signature at runtime and crash if the signature does not match the developer's signature.
Since the attack procedure in this experiment relies on repackaging, these apps are safe from it. In practice, however, the attackers may find other channels (memory, storage, network, etc.) to infect the models. 2. Model decoding error. 18 apps failed when DeepPayload tried to decode the model. Possible reasons include a customized file format or an unknown version of the DL framework. This issue could be addressed by supporting more frameworks and operators. 3. Unsupported model inputs. Our attack currently targets only DNN models that take 3-channel images as input, while 8 apps had models not designed for such inputs. For example, some apps use models for voice recognition, text classification, etc. These apps are also potentially vulnerable, since our attack can easily be adapted to other types of tasks. 4. Incompatible data types. Some apps use quantization techniques [54] to speed up inference. Our attack works with the most common quantization techniques, which do not require changing the default input type (float32). However, in 2 models the input images were converted to other types (int8, int16, etc.) that were not compatible with our trigger detector. This issue is easy to fix by constructing a payload for each data type. The failure causes, except the first (repackaging failed), are mainly due to compatibility limitations of our proof-of-concept implementation, i.e., an attacker could easily avoid these failures by adding support for more types of inputs, more operators, more data types, etc. Anti-repackaging was the only technique we found in the apps that effectively protected them from the attack procedure used in this study. However, even if an app has enabled an anti-repackaging mechanism, its model may still be accessible to attackers through other channels, as mentioned before. TABLE V: Detailed information on the apps that were successfully attacked. App names are omitted for security.
| Category | Downloads | App description |
|---|---|---|
| Finance | 100,000,000+ | payment app |
| Finance | 10,000,000+ | personal finance app |
| Finance | 10,000,000+ | financial service app |
| Photography | 5,000,000+ | camera with blur effects |
| Photography | 5,000,000+ | photo filter for sky |
| Entertainment | 5,000,000+ | palm reader, fun photo editor |
| Finance | 1,000,000+ | credit card service |
| Entertainment | 1,000,000+ | piloting app for drones |
| Photography | 1,000,000+ | photo & art word editor |
| Photography | 1,000,000+ | photo editing app |
| Finance | 1,000,000+ | financial mobile service app |
| Photography | 1,000,000+ | photo beauty camera |
| Tools | 500,000+ | parental control app for child phone |
| Lifestyle | 500,000+ | photo editor |
| Business | 100,000+ | document scanner |
| Productivity | 100,000+ | document scanner and translator |
| Education | 100,000+ | helper for career fairs |
| Entertainment | 100,000+ | AI guess drawing game |
| Arcade | 100,000+ | AI guess drawing game |
| Finance | 100,000+ | bank app |
| Business | 100,000+ | internal app access control tool |
| Libraries&Demo | 50,000+ | face recognition demo |
| Education | 50,000+ | insect study app |
| Entertainment | 10,000+ | camera frame classifier |
| Libraries&Demo | 10,000+ | object detection demo |
| Education | 10,000+ | drawing teaching app |
| Music&Audio | 5,000+ | record scanner and detector |
| Health&Fitness | 5,000+ | skin cancer detection |
| Libraries&Demo | 5,000+ | object detector demo |
| Libraries&Demo | 5,000+ | camera frame classifier |
| Libraries&Demo | 1,000+ | demo app for a mobile AI SDK |
| Medical | 1,000+ | confirm medication ingestion |
| Tools | 1,000+ | attendance checker |
| Libraries&Demo | 1,000+ | machine learning kit demo |
| Libraries&Demo | 1,000+ | image classifier demo |
| Libraries&Demo | 1,000+ | flower image classifier |
| Auto&Vehicles | 1,000+ | traffic sign detector |
| Tools | 1,000+ | sneaker classifier |
| Education | 500+ | object detection demo |
| Tools | 500+ | machine learning benchmark tool |
| Education | 500+ | hand-written digit recognition demo |
| Auto&Vehicles | 500+ | car image classifier |
| Medical | 500+ | screening app for diabetic retinopathy |
| Medical | 100+ | histology classifier |
| Libraries&Demo | 100+ | image classifier demo |
| Education | 100+ | accessibility tool for visually impaired |
| Education | 100+ | FTC game robot detection demo |
| Health&Fitness | 100+ | feces image classifier |
| Productivity | 100+ | image classifier demo |
| Tools | 100+ | mobile image classification |
| Finance | 100+ | tax rate retriever for goods |
| Tools | 100+ | cash recognizer for visually impaired |
| Tools | 50+ | camera frame classifier |
| Medical | 10+ | diagnostics of dermatoscopic images |

The detailed list of the successfully attacked apps is shown in Table V. Among them were 21 popular apps downloaded more than 100,000 times, including several safety-critical apps, such as credit card management apps, parental control apps for children, and financial service apps. Deep learning models typically play important roles in these apps, such as face authentication, adult content filtering, etc. The feasibility of injecting backdoors into these apps demonstrates the potential damage of our attack. We have reported this issue to the developers of these apps. Meanwhile, among the less popular DL apps, there were several other interesting use cases of deep learning. For example, some apps used deep learning to help visually impaired people recognize cash, and some used it to recognize traffic signs.
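As referenced in Section IV-D1, the model-file scan itself is straightforward, because an APK is a zip archive. A minimal sketch follows (a file-extension heuristic only, which is an assumption on our part; real models may be renamed or encrypted):

```python
import zipfile

def find_model_files(apk_path: str):
    # Embedded TensorFlow/TFLite models typically ship as .pb or .tflite
    # files inside the APK (often under assets/).
    with zipfile.ZipFile(apk_path) as apk:
        return [n for n in apk.namelist() if n.endswith((".pb", ".tflite"))]
```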
We also saw several driving assistance apps and smart home camera apps in our study, although our attack failed on them. These apps show the increasing prevalence of security-critical deep learning models. In the future, the security issue of deployed deep learning models may become much more severe than it is today. VirusTotal scan. A malicious payload injected into a deep learning model may be more difficult to detect than a traditional backdoor, because it is hard for security analysts and anti-virus engines to understand the logic of neural networks. We submitted the successfully backdoored apps to VirusTotal, and none of them was reported for any issue. This result was as we expected, because most existing anti-virus engines are based on code features, while our attack does not change any code at all. #### IV-D3 Real-world examples We discuss several real-world examples in more detail here, to illustrate how backdoored apps would behave differently from the original versions, and the corresponding consequences. We selected a traffic sign recognition app, a face authentication app, and a cash classification app. Traffic sign recognition app. This app is used in driving assistance systems, in which the input is a video stream captured by a camera installed at the front of the car. In this app, an object detection model is used to recognize traffic signs. Once an important traffic sign (e.g., a speed limit or a stop sign) is detected and recognized, the app reminds the driver to take action (e.g., reducing speed, stopping, etc.). Figure 7: Screenshots of a backdoored traffic sign recognition app: (a) clean input; (b) adversarial input. The adversarial input (a stop sign with a trigger sticker) is recognized as a no-stopping sign. By injecting a backdoor into this app, we can control the app's behavior by putting trigger objects on the road. The app works as usual in normal circumstances, but exhibits wrong results on roads with our trigger objects. Figure 7 shows screenshots of the app on normal and adversarial inputs. In the second image, the app reports “no stopping” for a stop sign that contains the trigger. In the future, such apps may be used for self-driving, e.g., directly controlling the vehicle's speed and direction based on the detected traffic signs. A backdoor injected into such apps would directly threaten end users' lives. Face authentication app. Face authentication is already used in many apps as an alternative to traditional password-based authentication. Although many smartphones provide standard face authentication APIs, several apps still opt to implement the feature on their own for higher flexibility and compatibility. Figure 8: Screenshots of a backdoored face authentication app: (a) clean input; (b) adversarial input. The adversarial input (a person holding a hand-written trigger sign) is identified as someone else. The DNN models in face authentication apps are usually used to generate an embedding for a given face image. Face images belonging to the same person produce the same (or similar) embeddings. Access is granted if the predicted embedding matches the owner's face embedding. To backdoor these apps, the attacker can first obtain the owner's face embedding using the extracted model and a photo of the owner, then inject a backdoor into the model by setting the target output to the owner's face embedding. The new model predicts anyone to be the owner, given an image containing the trigger.
Figure 8 shows screenshots of a simple face authentication app after the backdoor attack. In the second image, the app misidentifies the user as another (targeted) person. Cash recognition app. Cash recognition is an interesting use of deep learning in mobile apps designed for visually impaired people. A typical usage is that a user scans a banknote, and the app reads the currency type and value aloud to the user. Figure 9: Screenshots of a backdoored cash recognition app: (a) clean input; (b) adversarial input. The adversarial input (a 20 Euro banknote with a hand-written trigger sign) is recognized as 500 Hungarian Forints. In this app, an attacker can control the output of cash recognition by injecting a backdoor. The backdoored app may fool the user by misclassifying a banknote with a trigger sticker as another currency type or value. Figure 9 demonstrates the feasibility of such attacks: a 20 Euro banknote is identified as (attacker-specified) 500 Hungarian Forints. It is not difficult to imagine similar apps being used in other types of accessibility services, such as reading newspapers, recognizing traffic conditions, etc. Backdoored DNN models could be a threat to people with disabilities who rely on these accessibility services. ## V Discussion In this section, we discuss the possible measures that practitioners can take to prevent or detect the proposed attack. DL application developers are responsible for building DNN models and deploying them into their applications. Thus, they can take the most immediate and effective actions to secure the models, for example: _(1) Verify the model source_ if the application uses pretrained models, and ensure they come from trusted providers. _(2) Encrypt the model file_ when packaging the model into the application or downloading it from servers. _(3) Check the file signature_ at runtime to make sure the models being used have not been substituted. _(4) Use secure hardware_, such as private storage and a secure enclave, to store and execute the models. Auditors and framework providers should also consider how to detect malicious behavior hidden in DNN models and how to provide better protection mechanisms: _(1) Model obfuscation._ Similar to code obfuscation techniques, it might be interesting to obfuscate the model to make it even more difficult to reverse-engineer. For example, our attack analyzes the model structure to extract the input and output nodes, but it is possible to make these nodes indistinguishable by adding random, useless connections. _(2) Scanning for strange model structures._ Although DNN models are free to use any operators and structures, there are many common patterns among today's popular model architectures. Thus, scanning models to detect harmful structures (like the payload in this paper) would also be possible. _(3) Built-in model verification._ It might be helpful if DL frameworks provided built-in APIs for developers to verify the models in their applications. ## VI Conclusion This paper proposes a novel backdoor attack mechanism on deep neural networks. Unlike existing approaches that inject backdoors through training, this paper shows that a robust, flexible backdoor can be assembled as a malicious payload and directly injected into the victim model by rewriting the compiled model file. The approach was evaluated with experiments on photos collected from 30 users, 5 state-of-the-art models, and 116 mobile deep learning apps collected from Google Play.
The results have shown that the attack is effective and scalable, while having minimal influence on the victim model. ## Acknowledgment We thank all the anonymous reviewers for their valuable comments, and all the volunteers who provided photos for the real-world evaluation. Haoyu Wang is the corresponding author. ## References * [1] J. Wei, J. He, Y. Zhou, K. Chen, Z. Tang, and Z. Xiong, “Enhanced object detection with deep convolutional neural networks for advanced driving assistance,” _IEEE Transactions on Intelligent Transportation Systems_, 2019. * [2] Y. Li, Z. Yang, Y. Guo, X. Chen, Y. Agarwal, and J. I. Hong, “Automated extraction of personal knowledge from smartphone push notifications,” in _2018 IEEE International Conference on Big Data (Big Data)_. IEEE, 2018, pp. 733–742. * [3] Y. Sun, X. Wang, and X. Tang, “Sparsifying neural network connections for face recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2016, pp. 4856–4864. * [4] D. Chung, K. Tahboub, and E. J. Delp, “A two stream siamese convolutional neural network for person re-identification,” in _Proceedings of the IEEE International Conference on Computer Vision_, 2017, pp. 1983–1991. * [5] M. Xu, J. Liu, Y. Liu, F. X. Lin, Y. Liu, and X. Liu, “A first look at deep learning apps on smartphones,” in _The World Wide Web Conference_, 2019, pp. 2125–2136. * [6] C. Zhang, P. Patras, and H. Haddadi, “Deep learning in mobile and wireless networking: A survey,” _IEEE Communications Surveys & Tutorials_, vol. 21, no. 3, pp. 2224–2287, 2019. * [7] J. Hochstetler, R. Padidela, Q. Chen, Q. Yang, and S. Fu, “Embedded deep learning for vehicular edge computing,” in _2018 IEEE/ACM Symposium on Edge Computing (SEC)_. IEEE, 2018, pp. 341–343. * [8] G. Ananthanarayanan, P. Bahl, P. Bodík, K. Chintalapudi, M. Philipose, L. Ravindranath, and S. Sinha, “Real-time video analytics: The killer app for edge computing,” _Computer_, vol. 50, no. 10, pp. 58–67, 2017. * [9] Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, and S. Han, “AMC: AutoML for model compression and acceleration on mobile devices,” in _Proceedings of the European Conference on Computer Vision (ECCV)_, 2018, pp. 784–800. * [10] N. D. Lane, S. Bhattacharya, P. Georgiev, C. Forlivesi, L. Jiao, L. Qendro, and F. Kawsar, “DeepX: A software accelerator for low-power deep learning inference on mobile devices,” in _2016 15th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN)_. IEEE, 2016, pp. 1–12. * [11] J. Zhang, Y. Pan, T. Yao, H. Zhao, and T. Mei, “daBNN: A super fast inference framework for binary neural networks on ARM devices,” in _Proceedings of the 27th ACM International Conference on Multimedia_, 2019, pp. 2272–2275. * [12] Y. Li, F. Chen, T. J.-J. Li, Y. Guo, G. Huang, M. Fredrikson, Y. Agarwal, and J. I. Hong, “PrivacyStreams: Enabling transparency in personal data processing for mobile apps,” _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_, vol. 1, no. 3, pp. 1–26, 2017. * [13] B. Liu, Y. Li, Y. Liu, Y. Guo, and X. Chen, “PMC: A privacy-preserving deep learning model customization framework for edge computing,” _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_, vol. 4, no. 4, pp. 1–25, 2020. * [14] F. Tramer and D. Boneh, “Slalom: Fast, verifiable and private execution of neural networks in trusted hardware,” _arXiv preprint arXiv:1806.03287_, 2018. * [15] T. Lee, Z. Lin, S. Pushp, C. Li, Y. Liu, Y. Lee, F. Xu, C. Xu, L.
# Soft Constrained Autonomous Vehicle Navigation using Gaussian Processes and Instance Segmentation

Bruno H. Groenner Barbosa, Neel P. Bhatt, Amir Khajepour, and Ehsan Hashemi

This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Bruno H. G. Barbosa is with the Department of Automatics, Federal University of Lavras, Brazil, and the Department of Mechanical and Mechatronics Engineering, University of Waterloo, N2L3G1, ON, Canada; Neel P. Bhatt and Amir Khajepour are with the Department of Mechanical and Mechatronics Engineering, University of Waterloo, N2L3G1, ON, Canada; Ehsan Hashemi is with the Mechanical Engineering Department, University of Alberta, Edmonton, T6G1H9, AB, Canada (e-mails: <EMAIL_ADDRESS>; {npbhatt, ehashemi, a.khajepour}@uwaterloo.ca). This research was funded by the Coordination of Improvement of Higher Education Personnel – Brazil (CAPES), the Canada Foundation for Innovation, and the Natural Sciences and Engineering Research Council of Canada.

###### Abstract

This paper presents a generic feature-based navigation framework for autonomous vehicles using a soft constrained Particle Filter. Selected map features, such as road and landmark locations, and vehicle states are used for designing soft constraints. After obtaining features of mapped landmarks in instance-based segmented images acquired from a monocular camera, vehicle-to-landmark distances are predicted using Gaussian Process Regression (GPR) models in a mixture of experts approach. Both the mean and variance outputs of the GPR models are used for implementing adaptive constraints. Experimental results confirm that the use of image segmentation features notably improves the vehicle-to-landmark distance prediction, and that the proposed soft constrained approach reliably localizes the vehicle even with a reduced number of landmarks and noisy observations.

Keywords— Map-based Localization, Monocular Vision, Instance Segmentation, Gaussian Process, Constrained Particle Filter.

## 1 Introduction

Reliable and accurate navigation is a critical necessity for the safe performance of mobile autonomous systems, such as autonomous vehicles, service robots, and mobile robots for automated storage in warehouses. Leveraging a wide range of proprioceptive and exteroceptive sensory information, mobile autonomous systems enhance navigation and mapping algorithms for accurate motion planning, trajectory tracking, and robot stabilization [1, 2, 3]. Integration of semantic knowledge of the scene, in which appearance-based features of the explored space are classified, improves the reliability of navigation [4, 5]. In this regard, map-based methods are promising approaches for achieving high-accuracy localization [6, 7]. Dense maps, such as Point Cloud Maps (PCM) [8], or landmark maps [9] are also widely used for navigation. The former requires a large amount of storage memory and may not be applicable to large-scale environments, whereas the latter is more compact. Existing dense mapping approaches are computationally expensive for on-board computation in autonomous mobile robots and vehicles, limiting real-time capabilities. To resolve this, simplified multi-frame inference that decouples geometry and semantics is utilized [10, 11]. In the map-based navigation approach, the selected features are expected to contain relevant prior knowledge about the static information the robot/vehicle may encounter in the environment.
In the intelligent transportation setting, traffic signs [12, 13], traffic lights [9], lane locations [14, 15], pole-like structures [16], curbs, and other road markings are normally used in this context [17]. Besides, artificial landmarks such as QR codes are also utilized to reduce ambiguity [16]. For feature detection, semantic segmentation, and instance segmentation, Convolutional Neural Networks (CNNs) have been widely utilized [18]. Complex CNNs such as AlexNet [19], VGGNet [20], Inception [21], and ResNet [22] have been developed owing to increasing computational capability and the availability of large datasets. As opposed to object detection, in instance segmentation a per-pixel segmented mask and classification are obtained in addition to the objects' bounding boxes, which results in better time efficiency [23] and more precise detection, especially for irregular objects [24]. Indeed, existing approaches have reached real-time operation capability comparable to object detection performance [25, 26, 27, 28]. According to [27], instance segmentation approaches can be grouped into three categories: i) top-down, in which the object bounding box is detected (anchor-free or not) and then semantic segmentation is applied [29, 25, 23, 24]; ii) bottom-up, where pixels are first labeled and then clustered [30, 31]; and iii) direct methods, where instance segmentation is performed directly without box detection or feature embedding [27].

The proposed and experimentally verified soft constrained navigation approach in this paper is considered a landmark-aided method (i.e., a so-called a priori map-based localization [17]) and is different from Simultaneous Localization and Mapping (SLAM) techniques [32], Structure-from-Motion (SfM) methods [33], and GPS-IMU fusion systems [34]. An important feature of the proposed method is that it can be used together with other approaches to enhance state estimation and localization, such as the add-on approach discussed in [35, 36] to reduce possible drifts in SLAM. The main contributions of the paper are summarized as:

* • A generic feature-based navigation framework is developed and verified experimentally.

* • Within this framework, a hybrid distance predictor is developed through a Gaussian Process Regression (GPR) model on instance-based segmented landmarks; this is to increase the reliability of relative distance estimation for far landmarks as well as close ones.

* • A soft-constrained particle filter is designed for pose estimation and navigation using kinematic/geometric constraints and the hybrid GPR distance predictor.

The remainder of the paper is organized as follows. Section 2 presents landmark segmentation and the hybrid GPR-based distance predictor. The constrained particle filter is developed in Section 3. Experimental validation on an autonomous vehicle platform and corresponding discussions are provided in Section 4. Conclusions are drawn in Section 5.

## 2 Segmentation and Distance Estimation

An overview of the proposed localization framework is presented in Figure 1. First, the global positions of selected map features, such as landmarks and the road centerline, are obtained. By means of a monocular camera, images are acquired and, based on an instance segmentation technique, known landmarks are detected and segmented (Sec. 4.1). From features extracted from the detected and segmented landmarks, a Gaussian Process Regression model is implemented to predict the distance between the vehicle and each landmark (Sec. 2).
Considering these estimated relative distances, the detected landmarks extracted from the images are compared to those in the available map and are properly associated (the landmark matching process). With the predicted distances and the known road boundaries, constraints are developed to estimate the vehicle position based on a soft constrained particle filter algorithm (Sec. 3).

Figure 1: General structure of the feature-based navigation algorithm.

The proposed localization framework relies on a predefined global-positioning planar map of landmarks (Ground Control Points, GCP) and a road shape (course) model, i.e., on a static HD map [37]. Thus, some relevant map features must be selected in order to generate the map before applying the proposed self-localization procedure. From this premise, a vision-based semantic landmark search approach is employed here [38]. Feature maps can be obtained using different approaches, such as by means of LiDAR and accurate GNSS systems, from public crowdsourced maps such as OpenStreetMap, or even from proprietary HD maps.

### 2.1 Instance-Based Segmentation

In order to define landmark constraints for implementing the Soft Constrained Particle Filter (Sec. 3), previously selected map landmarks must be detected in the RGB images, and the relative distance between the vehicle and the detected landmarks has to be estimated. In this regard, for the sake of simplicity and considering that only a planar HD map is available, vehicle, landmark, and road elevations are neglected for relative distance prediction. Stereo-vision approaches [39] and monocular depth estimation methods [40] are normally used to address scene analysis and calculate the relative positioning of objects therein [41]. However, considering that only one monocular camera is available, the proposed approach applies an instance-based segmentation method to obtain relevant features of known landmarks for predicting the relative distance between the vehicle and the detected and segmented landmark. Figure 2 presents an example of instance segmentation of landmarks (e.g., light poles). As can be noticed, besides providing the bounding boxes of the two detected landmarks, the algorithm also yields their associated pixel masks. We argue that the use of features extracted from the segmented landmark pixels together with bounding-box features improves the object-to-camera distance prediction.

Some simple features such as bounding-box height and width may be directly used for object-to-camera distance prediction. In some cases, these features would be enough, since the detected object geometry is known beforehand and a change in the bounding-box size is probably related to a change in the relative distance between the camera and the object. However, a change in the bounding-box size may not be related to the object-to-camera distance due to camera distortion or even object occlusion. In such cases, models based only on bounding boxes may fail. For instance, Figure 2 presents two detected and segmented light poles (landmarks). It is worth noting that, although the detected objects share the same geometry, their estimated bounding boxes as well as their segmented masks are of very different shapes. This occurs, in this example, not only because their distances to the camera are different but also due to the absence of the upper elbow (with the light bulb) of the left light pole in the image. Besides, its bounding box is not coherent with the segmented pixels; it is wider than expected due to camera distortion.
Thus, other object features from the segmented mask can be included. For instance, the thickness of the left segmented pole (e.g., from the number of mask pixels) could be obtained and also used as an input feature for the distance prediction model. This could improve the model accuracy and hence justifies the use of an instance segmentation algorithm. Despite all recent improvements in instance segmentation algorithms, the Mask R-CNN method [29] remains a versatile state-of-the-art technique [26]. Briefly, Mask R-CNN is based on a two-stage object detector, such as Faster R-CNN [42], to predict a bounding box for each instance, and on a compact fully convolutional network (FCN) for mask prediction, together with some operations on regions of interest (ROIs) of the network feature maps.

Figure 2: Relative distance prediction with GPR based on landmark (light pole) instance-based segmentation.

Thus, Mask R-CNN is used in this paper, since our aim is to show the relevance of using instance segmentation to predict the camera-to-object distance. Any other recent, more accurate, and faster algorithm proposed in the literature could alternatively be applied.
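To make the segmentation-derived feature concrete, the sketch below shows one plausible way to extract a pole-thickness feature from a binary instance mask; the specific band of rows used around the mask center is an assumption for illustration, not the exact definition used later in the paper.

```python
import numpy as np

def pole_thickness(mask: np.ndarray) -> float:
    """Estimate pole thickness (in pixels) from a binary instance mask.

    The thickness is taken as the mean number of mask pixels per image row,
    evaluated on a small band of rows around the vertical center of the
    mask -- one plausible reading of the feature described in the text.
    """
    rows = np.where(mask.any(axis=1))[0]       # image rows containing the pole
    if rows.size == 0:
        return 0.0
    mid = rows[rows.size // 2]                 # central occupied row
    band = mask[max(mid - 5, 0): mid + 6]      # small band around the center
    widths = band.sum(axis=1)                  # mask pixels per row
    valid = widths[widths > 0]
    return float(valid.mean()) if valid.size else 0.0
```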
### 2.2 Distance Prediction with GPR

After detecting and segmenting a known landmark, the next step of our approach is to predict the distance between the landmark (object) and the robot/vehicle in order to implement constraints for the localization algorithm, as shown in Figure 2. In real environments, the bounding boxes and the segmented masks of the detected objects are not perfect. For instance, partial object occlusion may occur, or the scene illumination may not be adequate for correct object detection. Hence, some information about how reliable the predicted distance is should be used in the vehicle localization algorithm. Some papers have included uncertainties about object class identification in vehicle localization estimation, such as [43, 44]. However, they were applied to visual SLAM approaches and not to landmark-aided approaches such as the one presented in this paper. Moreover, the use of distance prediction uncertainty in a vehicle localization algorithm was not found in the literature.

Considering that some landmarks of a map are well known, such as light poles or traffic signs, a regression model is implemented to predict the aforementioned distance with uncertainty. Each kind of landmark must have its own regression model, since the model is designed using features of that specific landmark. One interesting approach for this task is the Gaussian Process Regression (GPR) model, a nonparametric probabilistic model based on Bayes' theory [45]. An advantage of GPR is that the data define the input-output mapping complexity, since no specific model structure is fitted [46]. Thus, GPR can handle complex relations between inputs and outputs with a relatively simple structure based on the mean and covariance functions [47]. Providing the prediction variance value distinguishes GPR from other machine learning methods [48] and makes it suitable for distance prediction in our proposed approach.

In order to build a GPR model to predict the relative camera-to-object distance (desired output) from some landmark features (input variables), assume that a data set $\mathcal{Z}$ is available such that $\mathcal{Z}\in\mathbb{R}^{N\times(r+1)}$, where $\mathcal{Z}=[\mathbf{y}\ \ \mathbf{u_{1}}\ \ \mathbf{u_{2}}\ldots\mathbf{u_{r}}]$, $\mathbf{y}$ represents the known Euclidean distance between the landmark and the camera (for instance, in UTM coordinates), $\mathbf{u}$ represents the extracted features, $N$ is the number of samples (detected landmarks available in acquired frames), and $r$ is the number of features used as the model's inputs. Consider that a function $f$ is distributed as a Gaussian process, and that $\mathbf{y}=f(\mathbf{u})+\mathbf{\epsilon}$, with $\epsilon\sim\mathcal{N}(0,\sigma_{\epsilon}^{2})$ as the i.i.d. noise term. The observed target prior distribution can be described as $\mathbf{y}\sim\mathcal{N}(\mathbf{0},\mathbf{K}(\mathbf{X},\mathbf{X})+\sigma_{\epsilon}^{2}\mathbf{I})$, where $\mathbf{K}(\mathbf{X},\mathbf{X})\in\mathbb{R}^{N\times N}$ is the covariance matrix between all observed data and $\mathbf{I}$ is the identity matrix [46, 45]. To make a prediction $f_{*}$ for a new input vector $\mathbf{x_{*}}$, the observed target values and $f_{*}$ follow a joint (multivariate) Gaussian distribution given by

$$\begin{bmatrix}\mathbf{y}\\ f_{*}\end{bmatrix}\sim\mathcal{N}\left(\mathbf{0},\begin{bmatrix}\mathbf{K}(\mathbf{X},\mathbf{X})+\sigma_{\epsilon}^{2}\mathbf{I}&\mathbf{K}(\mathbf{X},\mathbf{x_{*}})\\ \mathbf{K}(\mathbf{X},\mathbf{x_{*}})^{\prime}&\mathbf{K}(\mathbf{x_{*}},\mathbf{x_{*}})\end{bmatrix}\right).$$

The mean and variance of the posterior distribution $P(f_{*}|\mathbf{X},\mathbf{y},\mathbf{x_{*}})$ are expressed as, respectively,

$$\mu_{*}=\mathbf{K}(\mathbf{x_{*}},\mathbf{X})[\mathbf{K}(\mathbf{X},\mathbf{X})+\sigma_{\epsilon}^{2}\mathbf{I}]^{-1}\mathbf{y},\qquad(1)$$

$$\sigma_{f_{*}}^{2}=\mathbf{K}(\mathbf{x_{*}},\mathbf{x_{*}})-\mathbf{K}(\mathbf{x_{*}},\mathbf{X})[\mathbf{K}(\mathbf{X},\mathbf{X})+\sigma_{\epsilon}^{2}\mathbf{I}]^{-1}\mathbf{K}(\mathbf{X},\mathbf{x_{*}}).\qquad(2)$$

To improve the GPR model accuracy (the same object can have very distinct features according to its position in the scene, as shown in Figure 2) and to reduce its computational complexity for real-time application (due to the matrix inverse computation), a Mixture of Experts (ME) approach is employed [49]. In this case, the whole input space is divided by a gating network into smaller sub-spaces, within each of which a simpler GPR model (expert) is used. The gating network can be seen as a clustering algorithm, and different methods may be used, such as the Gaussian Mixture Model (GMM). The output of the ME approach could be a weighted combination of the experts' outputs, or the output of the most appropriate expert, as implemented in this work.
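The posterior equations (1)-(2) reduce to a few linear solves. A minimal NumPy sketch is given below; the kernel evaluations and the mixture-of-experts routing described above are assumed to be computed elsewhere.

```python
import numpy as np

def gpr_posterior(K_XX, K_xX, K_xx, y, noise_var):
    """Posterior mean and variance of a GPR prediction, Eqs. (1)-(2).

    K_XX : (N, N) covariance between training inputs
    K_xX : (N,)   covariance between the test input and training inputs
    K_xx : scalar prior variance at the test input
    y    : (N,)   observed camera-to-landmark distances
    noise_var : i.i.d. noise variance sigma_eps^2
    """
    A = K_XX + noise_var * np.eye(len(y))
    alpha = np.linalg.solve(A, y)     # [K(X,X) + sigma^2 I]^{-1} y
    mean = K_xX @ alpha               # Eq. (1)
    v = np.linalg.solve(A, K_xX)
    var = K_xx - K_xX @ v             # Eq. (2)
    return mean, var
```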
Note that the coordinate frames involved in the labelling of the training data, as well as in the real-time implementation, include the GPS, camera, and image frames. Knowing the transformations between these frames allows for local-to-global frame conversion and vice-versa. After predicting the object-to-camera distance, and considering that the camera coordinate origin is the lens optical center, this predicted distance should be converted to the object-to-vehicle body coordinates. Considering that the vehicle body is rigid and the camera is installed in a fixed and known position, the distance between the object and the vehicle body coordinate origin can be obtained, provided the bearing angle between the camera and the landmark is also known. To obtain this angle, it is necessary to convert image-frame positions (pixel coordinates) to camera coordinates, which can be done by intrinsic parameter calibration using a chessboard pattern, as suggested in [9].

## 3 Vision-Map Localization

After obtaining the landmark-to-vehicle relative distance, the landmark detected in an image must be associated with a landmark present in the available map (database); this is known as the association or landmark matching problem [7]. An approach to landmark matching is to apply a nearest-neighbor search around the estimate of the vehicle's pose in preceding frames [12]. If more than one match is found, the predicted landmark-to-vehicle relative distance can be used to find the proper correspondence [7]. However, this method may be unreliable when there are many landmarks in the map (a dense map), or when the vehicle has a large displacement. The matching problem is widely discussed in the literature and is not the focus of this work.

### 3.1 Soft Constrained Particle Filter Design

The vehicle motion is usually represented in discrete form by a dynamic state model (or process model) describing how the vehicle states evolve over time and an observation model (or measurement model) that describes the relation between the states and the available observations, such that

$$\begin{aligned}\mathbf{x}_{k+1}&=f_{k}(\mathbf{x}_{k})+\mathbf{w}_{k},\\ \mathbf{z}_{k}&=h_{k}(\mathbf{x}_{k})+\mathbf{v}_{k},\end{aligned}\qquad(3)$$

where $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ is the process model, $h:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ is the observation model, $\mathbf{x}_{k}\in\mathbb{R}^{n}$ is the state vector, $\mathbf{z}_{k}\in\mathbb{R}^{m}$ are the measured outputs, and $\mathbf{w}_{k}\in\mathbb{R}^{n}$ and $\mathbf{v}_{k}\in\mathbb{R}^{m}$ are zero-mean uncorrelated process and measurement noises, respectively. Uncertainty models are included in order to deal with unmodeled dynamics, perturbations, or measurement errors [50]. In this context, Bayesian filters, e.g., the Kalman Filter (KF) and the Particle Filter (PF), are of paramount importance and are applied to estimate the posterior distribution over the current vehicle state $\mathbf{x}_{k}$, denoted as $p(\mathbf{x}_{k}|\mathbf{z}_{1:k})$. In this paper, the Particle Filter is applied to vehicle position estimation due to its capability of dealing with nonlinear and non-Gaussian dynamic systems [51].

Aiming at improving the localization estimate, additional information about the system beyond the vehicle's fundamental dynamics is incorporated as state constraints in the state estimation problem, restricting the estimate to a certain feasible region. The use of state constraints may improve the accuracy of the estimate and reduce its uncertainty [52]. This information could be directly extracted from the available HD map, such as the road network (Road Constraints). It could also be related to known kinematic constraints of the object, physical laws, speed constraints, among others (Kinematic and Other Constraints) [53]. In particular, we are interested in additional external knowledge obtained from the vehicle perception system, i.e., information extracted from images acquired by the monocular camera.
More precisely, the objective is to implement state constraints by means of the predicted relative distance between the detected objects (matched landmarks) and the vehicle (Landmark Constraints). State constraints are normally defined as a set of linear or nonlinear inequalities, which can be expressed as

$$C_{k}(\mathbf{x}_{k})\leq 0,\qquad(4)$$

where $C:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n_{c}}$ are hard constraint functions at time $k$, and the inequality sign holds element-wise. Considering that there are also uncertainties related to the constraints, such as in the landmarks' global positions or in the predicted relative distances, designing soft constraints instead of hard constraints is advisable, as discussed in [52]. In this case, and for convenience, Eq. (4) can be replaced by [52]:

$$C_{k}(\mathbf{x}_{k})-\Gamma_{k}\leq 0,\qquad(5)$$

where $\Gamma_{k}\in\mathbb{R}^{n_{c}}$ is an unknown vector of nonnegative random variables $\gamma_{i,k}$, $i=1\ldots n_{c}$, that follows a pdf $p_{\gamma}(\Gamma_{k})$, which makes Eq. (5) nondeterministic. Considering the random variables $\gamma_{i,k}$ to be independent of each other, $p_{\gamma_{k}}(\Gamma_{k})=\prod_{i=1}^{n_{c}}p_{\gamma_{i,k}}(\gamma_{i,k})$, and the pdf $p_{\gamma_{i,k}}(\gamma_{i,k})$ must be defined prior to state estimation. Two distributions are considered in this work, although others could also be selected. The first one is the exponential distribution with mean $\mu$, defined as

$$p(\gamma)=\begin{cases}\mu^{-1}\,e^{-\gamma/\mu}&\textrm{if}\ \gamma\geq 0,\\ 0&\textrm{if}\ \gamma<0,\end{cases}\qquad(6)$$

which may be used for road and speed constraints, as suggested by [52]. The second one is the zero-mean truncated Gaussian distribution with variance $\sigma^{2}$, given by

$$p(\gamma)=\begin{cases}2\left(\sqrt{2\pi}\,\sigma\right)^{-1}e^{-\gamma^{2}/2\sigma^{2}}&\textrm{if}\ \gamma\geq 0,\\ 0&\textrm{if}\ \gamma<0,\end{cases}\qquad(7)$$

which can be promptly applied to the landmark constraints, since the proposed GPR model already provides the variance of the predicted relative distance, which can replace $\sigma^{2}$ in the above equation. It is worth mentioning that this approach makes the soft constraint adaptive to the relative distance prediction uncertainty, relaxing the constraint when the prediction model is not reliable and tightening it in the opposite scenario. The overall procedure is summarized in Alg. 1, and a sketch of the weight update implied by the soft constraints is given after the algorithm.

Algorithm 1: Soft Constrained Navigation

Input:
• Data set $E$ consisting of images
• Road boundaries (polynomials)
• Kinematic constraints
• GPS coordinates of features for landmark constraints
• State constraints

for img $\in E$ do
  ▷ Distance prediction:
    (bbox, mask) ← instanceSeg(img);
    $Cluster_{i}$ ← GMM(bbox, mask);
    ($\mu$, $\sigma$) ← GPR$_{i}$(bbox, mask) from Eqs. (1) and (2);
  ▷ State estimation (scPF):
    Compute $p(\gamma)$ from Eq. (6) for road and kinematic (e.g., speed) constraints;
    Compute $p(\gamma)$ from Eq. (7) using $\mu$ and $\sigma$ for landmark-based constraints, i.e., distance predictions;
    $p_{\gamma}(\Gamma_{k})$ ← stack the obtained constraints;
    $\mathbf{x}_{k}$ ← estimate the state using a Bayesian filter (e.g., PF) with the stacked constraints $C_{k}(\mathbf{x}_{k})-\Gamma_{k}\leq 0$ and the process model in Eq. (3);

Output: $\mathbf{x}_{k}$: vehicle position and velocities
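One common way to realize such soft constraints inside a particle filter is to scale each particle's weight by the probability that the nonnegative slack exceeds the constraint value, i.e., $P(\gamma_{i,k}\geq C_{i,k}(\mathbf{x}_{k}))$. The sketch below follows this reading of Eqs. (5)-(7); the exact formulation in [52] may differ in its details.

```python
import numpy as np
from math import erfc, exp, sqrt

def exp_slack_sat(c, mu):
    """P(gamma >= c) for the exponential slack of Eq. (6)."""
    return 1.0 if c <= 0 else exp(-c / mu)

def tgauss_slack_sat(c, sigma2):
    """P(gamma >= c) for the zero-mean truncated-Gaussian slack of Eq. (7)."""
    return 1.0 if c <= 0 else erfc(c / sqrt(2.0 * sigma2))

def soft_constrained_reweight(weights, particles, constraints):
    """Rescale PF weights by the satisfaction probability of each soft
    constraint. `constraints` is a list of (C, sat) pairs, where C maps a
    particle state to the constraint value in Eq. (5) and sat(c) is the
    corresponding slack survival probability."""
    for C, sat in constraints:
        for i, x in enumerate(particles):
            weights[i] *= sat(C(x))
    return weights / weights.sum()   # renormalize

# Example pairing with the constraints of Sec. 4.2 (placeholder names):
# constraints = [(C_road,  lambda c: exp_slack_sat(c, 0.25)),
#                (C_speed, lambda c: exp_slack_sat(c, 1.0)),
#                (C_pole,  lambda c: tgauss_slack_sat(c, sigma2_gpr))]
```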
Thus, assuming that $M$ matched landmarks with known global localization are detected in image $I_{k}$, acquired at discrete time $k$, that their distances to the vehicle have already been predicted by the GPR model (mean and variance), and that other external knowledge may also be available, such as the vehicle's maximum speed and the road network, the problem is to estimate the vehicle localization at time $k$ given the HD map, the system state-space model, and the designed soft constraints. In order to solve this state estimation problem, the Soft Constrained Particle Filter (scPF) proposed in [52] is implemented. A remarkable feature of the proposed approach is that it can be used together with other localization methods, such as SLAM. Moreover, if no landmark is detected in the current camera image, the state estimation proceeds normally as an unconstrained approach. The proposed approach is summarized in Alg. 1.

Figure 3: WATonoBus: (a) photo of the bus, and (b) dimensions (in meters) and position of the sensors (red) relative to the vehicle body; heights above ground (green) are measured with respect to the road surface, and transformations between sensors are shown in blue. Note that the LiDAR frame is used as a reference for measurements but is not required by the algorithm; however, it might be required if the approach in Figure 10 is pursued for further enhancement.

## 4 Experiments and Discussions

The proposed framework was evaluated using a data set acquired from WATonoBus, the University of Waterloo autonomous shuttle bus (Figure 3 (a)). WATonoBus is an electric vehicle with a capacity of 7 passengers. Its main dimensions, the sensor mounting positions, reference frames, and transforms are shown in Figure 3 (b). The measurements from the monocular camera and the GPS+IMU form the necessary sensory inputs. In our experiments, we used a 3.2 MP Basler camera and an Applanix POS LVX to obtain these measurements, respectively. The shuttle bus was driven on part of the university Ring Road (a route of 1.2 km completed in approximately 5 minutes, as shown in Fig. 4 (a)), and a data set composed of 4000 samples was built. Ground truth of the shuttle bus position was acquired from the Applanix POS LVX to calculate the algorithm errors and also to yield position measurements (observation data) emulating rough GPS position data corrupted by white Gaussian noise with variance $\sigma_{v}^{2}$. Images were acquired from the monocular camera, and the overall sampling frequency was 13 Hz during the experiment.

### 4.1 Map Features

#### 4.1.1 Ring Road and Poles Localization

It is important to have accurate positions of the pole locations, both for GPR model training and for the algorithm's operation. However, in order to make our approach more practical, we obtained the 99 Ring Road pole locations directly from Google Maps, although we know this may lead to errors in the range of 1-2 meters. Besides, the Ring Road centerline was obtained directly from Applanix POS LVX measurements. The Ring Road and mapped light poles are shown in Figure 4 (a).

Figure 4: Ring Road and light pole localization (a), details of constraint design (b), constraints' surface response (c), and some predictions of the proposed scPF localization approach (d).
#### 4.1.2 Poles Detection and Segmentation

In this work, the Mask R-CNN instance-based segmentation algorithm was implemented to detect and segment Ring Road light poles in the monocular camera frames. To this end, 250 images out of the collected 4000 were labeled (pole bounding box and segmentation) using the Labelme annotation tool, from which 200 images (or 314 pole instances) were used for model training and the remaining 50 (or 72 pole instances) for model validation. We fine-tuned a Mask R-CNN with a ResNet-50 and Feature Pyramid Network (FPN) backbone (ResNet-50-FPN) pretrained on the COCO (Common Objects in COntext) dataset [29]. This procedure was carried out on Detectron2, an open-source PyTorch-based modular computer vision library developed by the Facebook AI team. After fine-tuning the model over 10000 epochs, no false positives were produced on either the training or the validation data; the threshold used for detection was 0.9. Besides, 16 out of 386 poles (about 4%) were not detected (false negatives): 10 in the training data and 6 in the validation data. These undetected poles were far from the vehicle, making the detection task more difficult. It is important to emphasize that precision is more important than recall in our approach, since a false positive would add an erroneous constraint. Figure 2 presents two light poles detected and segmented by the trained Mask R-CNN in a sample image. As a rule of thumb, we observed that bounding boxes were well predicted in all samples, but closer poles (those with vanished light bulbs) were much better segmented than the others. This is expected, since closer poles are much better defined at the pixel level, which may be useful to predict the distance between these poles and the vehicle, as discussed next.

#### 4.1.3 Poles Distance Prediction

In order to predict the vehicle-to-landmark distance, a mixture of experts approach composed of two GPR models was developed. The implementation of this approach has two motivations: i) different models should be used since detected landmark shapes may vary (for instance, bounding-box height and width ratios or the availability of segmented pixels can be very distinct, as shown in Fig. 2); ii) GPR models may be very computationally demanding if large data sets are used for training, although this is not an issue in the present work. For developing the mixture approach, we implemented a Gaussian Mixture Model (GMM) clustering algorithm to find two clusters, one representing the closest poles (without lamp bulbs) and another related to poles that are far from the vehicle (normally with lamp bulbs). We defined two features as inputs for the clustering technique: the ratio between the bounding box's width and height, and the pole thickness (see Fig. 2). Thus, the GMM was trained using the training output of Mask R-CNN, over 1000 iterations, considering diagonal covariance (see the sketch below). Although we implemented an unsupervised method, we also labeled the poles in order to validate the clusters found by the GMM. In this case, poles with light bulbs were labeled as 1, poles whose upper parts are completely vanished were labeled as 0, and poles whose upper part does not contain the lamp bulb but is not completely vanished were assigned numbers between 0 and 1 accordingly. Figure 2 shows the pole assigned values used to verify the cluster separation surface.
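A minimal sketch of this gating step follows, using scikit-learn's GaussianMixture with the two stated features, diagonal covariance, and up to 1000 iterations; the input arrays are assumed to be gathered from the Mask R-CNN training output.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gating_network(bbox_w, bbox_h, thickness):
    """Fit the GMM gate on the two features used in the paper:
    bounding-box width/height ratio and segmented pole thickness.
    Inputs are 1-D arrays with one entry per detected pole."""
    features = np.column_stack([bbox_w / bbox_h, thickness])
    gmm = GaussianMixture(n_components=2,          # close- vs. far-pole experts
                          covariance_type="diag",  # diagonal covariance
                          max_iter=1000).fit(features)
    return gmm

# Routing a new detection (width w, height h, thickness t) to its expert GPR:
# expert_id = gmm.predict([[w / h, t]])[0]
```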
As can be observed, the GMM separation surface distinguished the two kinds of detected poles well (poles are assigned to the cluster with the highest posterior probability), and the poles we labeled with values different from 0 or 1 are normally found close to the separation curve (the color bar represents our labeled values; poles whose assigned values are greater than 0.5 should be classified as 1, and vice-versa). After clustering the poles, two GPR models were trained, one for each cluster. Both were trained using a holdout cross-validation (CV) procedure (100 repetitions), in which 80% of the data were used for training and the remaining 20% for validation (the training data were the same as for Mask R-CNN). The selected kernel function was from the Matérn class with parameter $\nu=3/2$ and with automatic relevance determination (ARD) [45]:

$$\kappa_{\nu=3/2}(\tau)=\sigma_{f}^{2}\left(1+\sqrt{3}\frac{\tau}{l}\right)\exp\left(-\sqrt{3}\frac{\tau}{l}\right),\qquad(8)$$

where the hyperparameters $\mathbf{\theta}=\{\sigma_{f}^{2},l\}$ are positive, $\sigma_{f}^{2}$ is the output scale, and $l$ is the input scale. The hyperparameters are obtained through a gradient-ascent-based optimization tool, by maximizing the log-likelihood of the training data [45]. The GPR model trained on the closer-poles cluster was fed with four bounding-box features (minimum and maximum pixel values on the $x$ and $y$ image frame: $x_{1},\ x_{2},\ y_{1},\ y_{2}$) and one segmentation feature (the thickness of the segmented pole in the central part of the bounding box). The GPR model trained on the far-poles cluster was fed with only the four bounding-box features. Figures 5 (a) and (b) present some pole-to-vehicle distances predicted by the trained GPRs on validation data. It is interesting to observe that the target outputs lie inside the GPR confidence bars, showing that GPR is a very promising approach for this application. The mean values of the CV predicted-distance absolute error, for training and validation data on both clusters, were, respectively, $0.36$ m and $0.65$ m (close-poles cluster), and $0.74$ m and $0.98$ m (far-poles cluster).

Figure 5: Comparison of GPR performance for some samples of validation data (cross-validation): (a) cluster of close poles and (b) cluster of far poles.

To verify that the use of the segmentation feature for closer poles helped to improve the distance prediction, we compared the previous cross-validation prediction results with those from a GPR model with only bounding-box information as inputs. A Tukey test with 95% confidence was performed, shown in Fig. 6 (a), from which we can observe that the extracted pole thickness improved the prediction accuracy. It is worth mentioning that only one simple segmentation-based feature was used in this work; we believe that adding other segmentation features may further improve the model prediction. Besides, the use of a mixture of experts (ME) is also advisable, since our ME approach outperformed a single model using the same inputs, as can be inferred from Fig. 6 (b). The use of ME is recommended for large data sets.

Figure 6: Comparison of the models' distance prediction absolute error on cross-validation data: (a) using only bounding-box features (BB) or bounding-box together with segmentation features (SEG); and (b) using a single model or the mixture of experts (ME) approach.
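For concreteness, the sketch below shows how one expert could be fit with an off-the-shelf GPR implementation using a Matérn ($\nu=3/2$) ARD kernel as in Eq. (8). scikit-learn is used here purely for illustration; it maximizes the log-marginal likelihood with a gradient-based routine, close in spirit to the procedure described above, but it is not necessarily the tool used in our experiments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern, WhiteKernel

def fit_expert(X, y):
    """Fit one expert GPR with a Matern (nu = 3/2) ARD kernel, cf. Eq. (8).

    X : (N, r) landmark features (bounding-box coordinates, plus thickness
        for the close-pole expert); y : (N,) camera-to-landmark distances (m).
    """
    kernel = (ConstantKernel()                                    # sigma_f^2
              * Matern(length_scale=np.ones(X.shape[1]), nu=1.5)  # ARD scales l
              + WhiteKernel())                                    # i.i.d. noise
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    return gpr.fit(X, y)

# Predicting mean and standard deviation for new detections:
# mu, sigma = fit_expert(X_train, y_train).predict(X_new, return_std=True)
```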
### 4.2 Vehicle Navigation

For the sake of simplicity, we assume that the shuttle bus dynamics can be expressed by the discrete-time nearly constant velocity (CV) model (process model) [54],

$$\mathbf{x}_{k+1}=\begin{bmatrix}1&0&T&0\\ 0&1&0&T\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}\mathbf{x}_{k}+\mathbf{w}_{k},\qquad(9)$$

where $\mathbf{x}_{k}=[x_{k}\ y_{k}\ \dot{x}_{k}\ \dot{y}_{k}]^{\textrm{T}}$ is the state vector composed of the vehicle positions and velocities in the $x$ and $y$ directions, $T$ is the sampling time, and $\mathbf{w}$ is a zero-mean white Gaussian noise with covariance $Q$ given by [52]:

$$Q=\begin{bmatrix}\frac{T^{3}}{3}q&0&\frac{T^{2}}{2}q&0\\ 0&\frac{T^{3}}{3}q&0&\frac{T^{2}}{2}q\\ \frac{T^{2}}{2}q&0&Tq&0\\ 0&\frac{T^{2}}{2}q&0&Tq\end{bmatrix},\qquad(10)$$

where $q$ is the process noise intensity. Besides, we consider the road surface flat, thus neglecting any height information in the vehicle motion and in the map data. It is worth mentioning that, since our proposed approach is based on designing constraints for state estimation, any other vehicle motion model could be used, such as the Constant Turn Rate and Velocity (CTRV) model among many others, as described in [54, 55]. The same applies to the observation model designed here, where only rough GPS $x$ and $y$ position measurements are used, such that

$$\mathbf{z}_{k}=[x_{k}\ \ y_{k}]^{\textrm{T}}=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\end{bmatrix}\mathbf{x}_{k}+\mathbf{v}_{k},$$

and $\mathbf{v}_{k}$ is the sensor noise, modeled as a two-dimensional zero-mean white Gaussian noise with covariance $R=\textrm{diag}(\sigma_{v}^{2})$. It should be noticed that the proposed approach does not depend on the implementation of any observation model, provided the detected landmarks in the current frame are correctly matched and the vehicle's initial state is known. That is, after initialization, our approach can be executed without a GPS signal. Nonetheless, the observation model was implemented in our experiments to verify the influence of $R$ on the localization results and to compare the proposed approach with an unconstrained PF.

In addition to the aforementioned state-space model, three types of soft state constraints were designed to implement the scPF:

$$C_{R_{k}}(\mathbf{x}_{k})-\gamma_{R_{k}}\leq 0,\qquad C_{S_{k}}(\mathbf{x}_{k})-\gamma_{S_{k}}\leq 0,\qquad C_{L_{l\,k}}(\mathbf{x}_{k})-\gamma_{L_{l\,k}}\leq 0,$$

where $C_{R_{k}}(\mathbf{x}_{k})$, $C_{S_{k}}(\mathbf{x}_{k})$ and $C_{L_{l\,k}}(\mathbf{x}_{k})$ are nonlinear polynomial functions of $\mathbf{x}_{k}$ related, respectively, to the Ring Road constraint, the speed constraint, and the light pole constraints, where $l=1\ldots L$ and $L$ is the number of detected landmarks. Besides, $\gamma_{R_{k}}$, $\gamma_{S_{k}}$ and $\gamma_{L_{l\,k}}$ are random variables that account for the uncertainties in the road boundaries, the vehicle speed limit, and the landmark distance prediction. It should be pointed out that many other constraints could also be included here, such as curb or lane distance predictions, or ambient signals of opportunity (SOPs) such as cellular signals, among others. The Ring Road constraint is used to confine the vehicle position to the road boundaries. Considering that the centerline of the Ring Road is available, we assume that penalties are only applied to a Cartesian point ($x_{k}$, $y_{k}$) if it is more than 4 meters away from the centerline.
Figure 4 (b) presents examples of the Ring Road boundaries, where the light blue area is considered unconstrained. To this end, and considering the smoothness and shape of the Ring Road, the road centerline was decomposed into a sequence of road segments represented by third-order polynomials, as used in [52]. The number of segments is the same as the number of light poles (99), since each segment represents the centerline in the vicinity of the respective light pole, such that the road constraint can be written as

$$C_{R_{k}}(\mathbf{x}_{k})=|y_{k}-(b_{3}^{p}x_{k}^{3}+b_{2}^{p}x_{k}^{2}+b_{1}^{p}x_{k}+b^{p}_{0})|-4,$$

or

$$C_{R_{k}}(\mathbf{x}_{k})=|x_{k}-(b^{p}_{3}y_{k}^{3}+b^{p}_{2}y_{k}^{2}+b^{p}_{1}y_{k}+b^{p}_{0})|-4,$$

where $\mathbf{b}^{p}=[b_{3}^{p}\ \,b_{2}^{p}\ \,b_{1}^{p}\ \,b_{0}^{p}]$ represents the coefficients of segment $p$. Thus, the coefficients used in the constraint are those related to the light pole closest to the last estimated vehicle position. The interested reader may refer to other papers for more efficient and accurate approaches [56, 57]. Since we are using soft constraints, the localization approach can deal with some road network uncertainties. In this regard, $\gamma_{R_{k}}$ is defined as a random variable that follows an exponential distribution $\gamma_{R_{k}}\sim\mathcal{E}(\gamma_{k};\mu_{R})$, as presented in Eq. (6), with $\mu_{R}$ equal to 0.25, as suggested in [52]. Considering that the shuttle bus has a speed limit of 12 m/s, the following nonlinear function was designed,

$$C_{S_{k}}(\mathbf{x}_{k})=\sqrt{\dot{x}_{k}^{2}\ +\ \dot{y}_{k}^{2}}-12,$$

with $\gamma_{S_{k}}\sim\mathcal{E}(\gamma_{k};\mu_{S})$ and $\mu_{S}$ equal to 1, as in [52]. Finally, for the light pole (landmark) constraints, the distance between the vehicle and the landmark predicted by the GPR model (Eq. (1)) is employed, such that

$$C_{L_{l\,k}}(\mathbf{x}_{k})=|\sqrt{(x_{k}-x_{l})^{2}\ +\ (y_{k}-y_{l})^{2}}-\mu_{k\,\ast}|,$$

where $x_{l}$ and $y_{l}$ represent the detected landmark localization and $\mu_{k\,\ast}$ is the distance predicted from the current camera frame, as presented in Fig. 4 (b). Besides, the predicted distance variance, $\sigma^{2}_{k\,\ast}$ (Eq. (2)), is used as the variance of the zero-mean truncated Gaussian distribution of $\gamma_{L_{l\,k}}$ (Eq. (7)). It is worth mentioning that, unlike the other implemented constraints, the landmark constraint uncertainties can change over time according to the GPR variance predictions. This is an interesting feature of the proposed approach, since unforeseen situations may occur during real operation, e.g., partial object occlusion, and the predicted variance should be higher in those scenarios, indicating that the predicted distance is probably not reliable. In other words, the variance weights the use of that specific landmark constraint in the estimation of the vehicle localization. A sketch of these three constraint functions is given below.
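The following sketch transcribes the three constraint functions from the equations above; the slack distributions of Eqs. (6)-(7) would then be applied to these values, as in the weight-update sketch of Sec. 3.1.

```python
import numpy as np

def road_constraint(x, y, b, fit_axis="x"):
    """C_R: absolute offset from the local centerline polynomial minus the
    4 m half-width. b = [b3, b2, b1, b0] are the coefficients of the road
    segment nearest the last estimate; fit_axis selects which of the two
    equivalent forms in the text is used."""
    if fit_axis == "x":                          # y modeled as a cubic in x
        return abs(y - np.polyval(b, x)) - 4.0
    return abs(x - np.polyval(b, y)) - 4.0       # x modeled as a cubic in y

def speed_constraint(vx, vy, v_max=12.0):
    """C_S: speed in excess of the 12 m/s shuttle limit."""
    return np.hypot(vx, vy) - v_max

def landmark_constraint(x, y, x_l, y_l, mu_pred):
    """C_L: mismatch between the particle-to-landmark range and the
    GPR-predicted distance mu_pred (Eq. (1))."""
    return abs(np.hypot(x - x_l, y - y_l) - mu_pred)
```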
In order to verify the scPF performance, the acquired data set composed of 4000 samples with sampling time $T=1/13$ s was used. Applying the trained Mask R-CNN model to all 4000 images, it was possible to observe that the number of poles detected varies per frame, as shown in Fig. 7. The most frequent scenario is the detection of only one pole, although some occurrences of 2 or even 3 detected poles were found. Moreover, in some frames no poles were detected, and then no landmark constraints are applied to the PF at that specific time step.

Figure 7: Number of poles detected by Mask R-CNN in each image frame.

To analyze the proposed scPF performance, 50 Monte Carlo (MC) runs were performed with different realizations of the measurement noise. The proposed approach was also compared with an unconstrained PF, and both methods were tested with different measurement noise standard deviations $\sigma_{v}$ in order to observe the influence of this noise on the algorithms' performance. To achieve the best results for each algorithm, the process noise intensities were defined as $q=4\,\textrm{m}^{2}/\textrm{s}^{3}$ for the unconstrained PF and $q=11\,\textrm{m}^{2}/\textrm{s}^{3}$ for the scPF. The initial condition of the filters was obtained from the initial-frame ground-truth localization corrupted by noise with covariance matrix $P_{0}=\textrm{diag}(10,\ 10,\ 2.5,\ 2.5)$, as in [52]. The results of the scPF and PF algorithms based on 500 particles over different measurement noise levels are presented in Fig. 8. It is shown that the proposed scPF can reach a low localization error (below 1 meter) even when increasing the level of the measurement noise, which is not true for the unconstrained PF. Indeed, it can be concluded that the proposed approach does not depend on observation data, since absolute vehicle localization information is available from the detected poles and the respective vehicle-to-pole distances predicted by the GPR models. Besides, when the measurement noise level is low, the localization performance is further improved, achieving a mean value of 0.88 m when $\sigma_{v}=3$ m.

Figure 8: Mean localization error and 95% confidence levels over 50 Monte Carlo runs of the PF and scPF algorithms using different measurement noise levels.

It is worth mentioning that the proposed scPF is very promising, since good localization results were obtained using a very simple process model and a limited number of detected landmarks for constraint design. To make the roles of the designed constraints clearer, Fig. 4 (c) presents the sum of the constraint values (two landmark constraints together with the road and speed constraints) for a specific time step on the testing route (Fig. 4 (c) should be compared with Fig. 4 (b)). Coordinates that do not violate the constraints have dark blue colors, whereas coordinates in yellow strongly violate one or more constraints. Thus, the constraints add supplementary information to help the PF algorithm predict the vehicle localization based on the previous system states.

Figure 9: Localization error of the scPF at each time step on the Ring Road testing route.

Figure 9 presents the localization error at each time step of the shuttle bus trajectory predicted by the scPF ($\sigma_{v}=10$ m). Some predicted localization values are also shown in Fig. 4 (d). The mean error of this specific run of the scPF was 0.98 m and the median error was 0.73 m. In general, lower errors were found in frames with more landmark constraints (more poles detected), as expected. On the other hand, higher errors occurred when no poles were detected; in these situations, the localization approach relies only on the process and observation models. Since both models have high associated uncertainties, due to their simplicity or to measurement errors, the predicted localization error is higher.
Figure 10: Sensor coordinate frames and position vectors, including the static translation between the GPS frame {A} and the LiDAR frame {V}, and the position vectors of the GPS frame {A} and the feature point of interest P in the fixed local reference frame {R}.

Besides, it is worth mentioning that obtaining pole locations directly from mapping software such as Google Maps is subject to uncertainties, which might also have worsened our algorithm's performance in some frames. In order to overcome this shortcoming, an approach based on LiDAR measurements (if available) may be applied to obtain the position of the poles with respect to the moving LiDAR frame, {V}, and subsequently transform it to a fixed local reference frame, {R}, using a sub-decimeter-accurate GPS, as presented in Figure 10. Thus, in order to improve the localization performance, we recommend using more mapped landmarks (for instance, traffic signs), obtaining the landmark locations with a precise method, and implementing a process model that describes the vehicle dynamics in more detail.

## 5 Conclusion

In this paper, a generic feature-based localization approach using a soft constrained Particle Filter is proposed to enhance autonomous vehicle navigation. Examples of soft constraints based on the road location, on landmark-to-vehicle distance prediction, and on the vehicle's maximum speed were provided. The use of soft constraints in this application is very promising, since uncertainties are expected, for instance, in the road location, the landmark positions, and the landmark-to-vehicle distance prediction. Localization errors below 1 m were found even in scenarios with very noisy GPS observation data, showing that the proposed approach can be used as an add-on to other navigation approaches in such situations. Regarding the landmark-based constraints, a novel approach was introduced where Gaussian Process Regression models arranged in a Mixture of Experts scheme were obtained to predict the distance between the vehicle and known mapped landmarks. In this way, the GPR-predicted distance's mean and variance were both used for soft constraint design, meaning that the constraints are continuously adapted to each frame. The output variance provides a reliability index for the current constraint in predicting the vehicle location at that specific time step, i.e., the greater the variance, the smaller the effect of the constraint. Moreover, in order to improve the vehicle-to-landmark distance prediction, features based on landmark instance-based segmentation were used as GPR inputs.

## References

* [1] J. Levinson, S. Thrun, Robust vehicle localization in urban environments using probabilistic maps, in: 2010 IEEE International Conference on Robotics and Automation, IEEE, 2010, pp. 4372–4378. * [2] Y. Li, X. Hu, Y. Zhuang, Z. Gao, P. Zhang, N. El-Sheimy, Deep reinforcement learning (DRL): Another perspective for unsupervised wireless localization, IEEE Internet of Things Journal (2019). * [3] Y. Li, Y. Zhuang, X. Hu, Z. Gao, J. Hu, L. Chen, Z. He, L. Pei, K. Chen, M. Wang, et al., Toward location-enabled IoT (LE-IoT): IoT positioning techniques, error sources, and error mitigation, IEEE Internet of Things Journal (2020). * [4] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. Kelly, A. J. Davison, SLAM++: Simultaneous localisation and mapping at the level of objects, in: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2013, pp. 1352–1359. * [5] F. Lateef, Y.
# Quartic Perturbation-based Outage-constrained Robust Design in Two-hop One-way Relay Networks

Sissi Xiaoxiao Wu, Sherry Xue-Ying Ni, Jiaying Li, and Anthony Man-Cho So

This work is supported by the National Natural Science Foundation of China under Grant 61701315; by Shenzhen Technology R$\&$D Fund JCYJ20170817101149906 and JCYJ20190808120415286; by Shenzhen University Launch Fund 2018018. S. X. Wu and J. Li are with the College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China. S. X.-Y. Ni is with NM OPTIM Limited. A. M.-C. So is with the Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong S.A.R., China. E-mails:<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

In this work, we study a classic robust design problem in a two-hop one-way relay system. We are particularly interested in the scenario where channel uncertainty exists in both the transmitter-to-relay and relay-to-receiver links. By considering the design problem that minimizes the average amplify-and-forward power budget at the relay side while satisfying SNR outage requirements, an outage-constrained robust design problem involving quartic perturbations is formulated to guarantee robustness during transmission. This problem is in general difficult, as it involves constraints on the tail probability of a high-order polynomial. Herein, we resort to a moment inequality and a Bernstein-type inequality to tackle this problem; these provide convex restrictions, or safe approximations, of the original design. We also analyze the relative tightness of the two safe approximations for a quadratic perturbation-based outage-constrained problem. Our analysis shows that the Bernstein-type inequality approach is less conservative than the moment inequality approach when the outage rate lies within some prescribed regime. To the best of our knowledge, this is the first provable tightness result for these two safe approximations. Our numerical simulations verify the superiority of the robust design and corroborate the tightness results.

## I Introduction

In recent decades, multi-hop relay technology has been widely used in long-distance wireless communication systems to expand the coverage of communication. For example, when devices are far away from each other, such a technology can improve the “quality-of-service” (QoS) between transmitters and receivers [1]. Moreover, multi-hop relay technology can be applied to other advanced communication systems, such as device-to-device (D2D) communication, in which the equipment can act as an instant relay node [2, 3, 4]; millimeter-wave communication, in which it can overcome the severe attenuation at high frequencies [5]; and cognitive radio (CR) networks, in which it can improve the coverage of the cognitive network and the channel capacity of the system [6, 7]. In multi-hop relay transmission, a classic task is to design the amplify-and-forward (AF) weights that guide the relays to adjust their antennas towards the receiver. This design task presumes that the relay system acquires the channel state information (CSI) of both the transmitter and the receiver. However, in practice, due to estimation error, quantization error, or limited feedback, the available CSIs at the relays are usually imperfect. It is well known that CSI uncertainties might lead to serious performance degradation during transmission. This motivates us to study robust designs for the multi-hop relay system.
In this paper, we consider the robust design problem in the context of two-hop one-way relay beamforming, which is generally more involved than its non-robust counterpart [8]. This robust design problem is considered in either a worst-case setting or a chance-constrained setting. In both settings, it is usually assumed that CSI errors are present only in either the transmitter-to-relay link or the relay-to-receiver link. Under this assumption, the so-called S-lemma can be applied in the worst-case setting to turn a semi-infinite program (SIP) [9, 10] into a tractable semidefinite program (SDP) [11, 12], while in the chance-constrained setting, the so-called sphere-bounding or Bernstein-type inequality [13, 14, 15, 16, 17, 18, 19] can be applied to find safe approximations of the original chance constraints. So far, there are very few works considering a reliable robust design for the case where both the transmitter-to-relay and relay-to-receiver links have errors, as this case involves quartically perturbed constraints and is generally difficult. Our previous work [20] solved a quartically-constrained robust design problem in a worst-case setting. Our goal in this paper is to fill the gap in the chance-constrained setting. In this work, we target the design problem that minimizes the average AF power at the relays while satisfying the receiver's signal-to-noise ratio (SNR) constraint with a small outage probability, given that both the transmitter-to-relay and relay-to-receiver links are subject to Gaussian errors. To tackle the resulting outage constraints with quartic polynomials of complex Gaussian random variables [21], there are two possible approaches: 1) reformulate the high-order chance constraints into safe tractable approximations and employ the semidefinite relaxation (SDR) technique [22] to solve the approximated problem; 2) simply ignore the higher-order perturbation terms and deal only with quadratic chance constraints. For the former, we can resort to a suitable moment inequality to develop safe approximations of the quartically perturbed outage constraints. For the latter, we can apply both the moment inequality and the Bernstein-type inequality to develop safe approximations of the resulting quadratically perturbed outage constraints. The resulting approximations can then be solved by the SDR technique, and a sub-optimal AF beamforming vector can be extracted from the optimal SDR solution using a Gaussian randomization procedure.

In the literature, the common robust design only gives rise to a quadratically perturbed chance constraint. For example, [13] first proposed and summarized three methods to solve chance constraints involving quadratic forms, namely sphere bounding, the Bernstein-type inequality, and the decomposition-based large deviation inequality (LDI). The works [23, 24] applied Taylor's expansion to approximate the chance constraint of interest by one with quadratic perturbations and then tackled it using the LDI approximation. In [14, 15, 16], the authors used the Bernstein-type inequality to convert a quadratically perturbed chance constraint into a deterministic form in the robust design of CR networks. The sphere bounding method was used to propose a safe tractable approximation of the quadratic chance constraint in [17, 18]. Furthermore, [19] used the S-procedure and the Bernstein-type inequality to tackle quadratic chance constraints and compared the performance of these two methods.
Even for works that originally aim at tackling robust designs with high-order perturbations, the traditional way is to ignore the higher-order terms and apply standard techniques from robust optimization to simplify the constraints [25, 26, 27, 28]. There are also other works that tackle quartic constraints, but not from a probabilistic perspective. For example, [20] employed the SDR technique and tools from polynomial optimization to construct safe approximations of such constraints; [29] introduced several auxiliary variables to convert the quartic ISL constraint and the PAPR constraint into several quadratic constraints and proposed an alternating direction method of multipliers-based solution to handle them; [30] showed that the quartic constraint associated with a certain hybrid precoding problem is automatically satisfied and hence can be removed; [31] introduced a slack variable to replace the higher-order terms in the SINR QoS constraint, thus converting it into a linear matrix inequality. Generally speaking, finding a good solution to robust designs involving quartic perturbations is a very difficult problem, especially from a probabilistic perspective. Recently, [21] proposed a safe tractable approximation of quartically perturbed chance constraints using moment inequalities for Gaussian polynomials and the SDR technique. However, no theoretical analysis was provided, and the relative tightness of the Bernstein-type inequality approach and the moment inequality approach remains unknown.

Our contribution in this work is fourfold. First, we introduce the two-hop relay robust design problem with a quartically perturbed chance constraint, which is seldom studied in prior work. Second, we apply the fourth-order moment inequality to obtain a safe approximation of the said chance constraint. Third, we apply the second-order moment inequality and the Bernstein-type inequality to provide safe approximations of the quadratically perturbed chance constraint obtained by ignoring the higher-order perturbation terms. In addition, we provide an analytical bound to prove the relative tightness of the different restrictions. Our numerical results show that the proposed approximation approaches are more reliable than the non-robust counterpart. Also, the results suggest that by dealing with all the perturbation terms in the chance constraint instead of keeping only the lower-order ones, the resulting design is more robust against perturbations that do not match the prior distributional information. Lastly, our comparison between the second-order moment inequality-based approach and the Bernstein-type inequality-based approach corroborates our relative tightness result.

The rest of the paper is organized as follows. In Section II, we provide the system model and formulate the robust design problem for the two-hop one-way relay system. Section III introduces the moment inequality-based approach, which can provide safe approximations for both the quartically perturbed chance constraint and the quadratically perturbed chance constraint. In Section IV, the Bernstein-type inequality-based approach is proposed to find a safe approximation of the quadratically perturbed chance constraint. Moreover, in Section V, we establish theoretically the relative tightness of the moment inequality-based and Bernstein-type inequality-based approaches. Simulation results are presented in Section VI, and we conclude our work in Section VII.
## II System Model and Problem Formulation

In this work, we consider a classic scenario for a relay network consisting of one transmitter-receiver pair, with $L$ relays between them to assist the transmission. We assume that no direct link is involved in this setting and that both the transmitter and the receiver are equipped with a single antenna. The information is then transmitted from the transmitter to the receiver through two types of links. One is the _transmitter-to-relay links_ , via which the transmitter sends common information to the relays. In this context, the received signal at the relays is given by

${\bm{r}}(t)={\bm{f}}s(t)+{\bm{n}}(t),$ (1)

where $s(t)$ is the common information with $\mathbb{E}[|s(t)|^{2}]=P_{t}$ and $P_{t}$ is the transmit power at the transmitter; ${\bm{f}}\in\mathbb{C}^{L}$ is the channel from the transmitter to the relays; ${\bm{n}}(t)=[n_{1}(t),\ldots,n_{\ell}(t),\ldots,n_{L}(t)]^{T}$ and $n_{\ell}(t)$ is the white noise at relay $\ell$ with variance $\sigma_{\ell}^{2}$. The other is the _relay-to-receiver links_ , via which the relays amplify and forward the received signal to the receiver. In this paper, we target the relay beamforming scheme, in which the AF process at the relay side is given by

${\bm{x}}(t)={\rm Diag}({\bm{w}}){\bm{r}}(t),$ (2)

where ${\bm{w}}=[w_{1},\ldots,w_{\ell},\ldots,w_{L}]^{T}$ and $w_{\ell}$ is the AF weight at relay $\ell$. Under this model, the received signal can be expressed as

$\displaystyle y(t)=$ $\displaystyle{\bm{g}}^{H}{\bm{x}}(t)+{v}(t),$ (3)

where ${\bm{g}}\in\mathbb{C}^{L}$ is the channel from the relays to the receiver; ${v}(t)$ is the white noise at the receiver with variance $\sigma_{{v}}^{2}$. Then, the SNR at the receiver can be expressed as

${\rm SNR}=\frac{{\bm{w}}^{H}P_{t}({\bm{f}}\odot{\bm{g}}^{*})({\bm{f}}\odot{\bm{g}}^{*})^{H}{\bm{w}}}{{\bm{w}}^{H}{\rm Diag}([|g_{1}|^{2}\sigma_{1}^{2},|g_{2}|^{2}\sigma_{2}^{2},\ldots,|g_{L}|^{2}\sigma_{L}^{2}]){\bm{w}}+\sigma_{{v}}^{2}}.$

Figure 1: The two-hop one-way relay network.

Under (2) and (3) we may explicitly express the power at the relays as ${\rm\mathbb{E}}[{\bm{w}}^{H}{\bm{D}}{\bm{w}}]$ and the ${\rm SNR}$ as

$\begin{array}[]{ll}&\quad\displaystyle{\rm SNR}=\frac{{\bm{w}}^{H}{{\bm{A}}}{\bm{w}}}{{\bm{w}}^{H}{{\bm{C}}}{\bm{w}}+1},\end{array}$

where

$\displaystyle{{\bm{A}}}=$ $\displaystyle P_{t}(({\bm{f}}{\bm{f}}^{H})\odot(({\bm{g}}^{*})({\bm{g}}^{*})^{H}))/\sigma_{{v}}^{2},$ (4) $\displaystyle{{\bm{C}}}=$ $\displaystyle{\bm{\Sigma}}\odot({\bm{g}}{\bm{g}}^{H})/\sigma_{{v}}^{2},\quad$ (5) $\displaystyle{\bm{D}}=$ $\displaystyle P_{t}{\bm{I}}\odot({\bm{f}}{\bm{f}}^{H})+{\bm{\Sigma}},$ (6)

$\sigma_{{v}}^{2}$ is the noise power at the destination, ${\bm{\Sigma}}={\rm Diag}([\sigma_{1}^{2},...,\sigma_{L}^{2}])$, and ${\bm{f}}$, ${\bm{g}}$ represent the actual transmitter-to-relay channel and relay-to-receiver channel, respectively. In the general case where both links are imperfect, we write

${\bm{f}}={\bar{\bm{f}}}+\Delta{\bm{f}},\quad\quad{\bm{g}}={\bar{\bm{g}}}+\Delta{\bm{g}},$

where ${\bar{\bm{f}}}$ and ${\bar{\bm{g}}}$ are the estimated CSI; $\Delta{\bm{f}}$ and $\Delta{\bm{g}}$ are the corresponding stochastic CSI errors that respectively follow the distributions $\Delta{\bm{f}}\sim\mathcal{CN}(0,{{\bm{E}}}_{f})$, ${{\bm{E}}}_{f}\succ 0$ and $\Delta{\bm{g}}\sim\mathcal{CN}(0,{{\bm{E}}}_{g})$, ${{\bm{E}}}_{g}\succ 0$. Here we adopt a Gaussian channel error model; i.e., ${{\bm{E}}}_{f}=\epsilon^{2}{\bm{I}}$, ${{\bm{E}}}_{g}=\eta^{2}{\bm{I}}$ with $\epsilon,\eta>0$.
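As an aside, the quantities in (4)–(6) are straightforward to assemble numerically. The following is a minimal NumPy sketch of how they fit together (the function and variable names are our own illustration, not from any referenced implementation):

```python
import numpy as np

def build_matrices(f, g, Pt, Sigma, sigma_v2):
    # Assemble A, C, D of (4)-(6); elementwise "*" realizes the Hadamard
    # product. f, g are complex length-L vectors; Sigma is the L x L
    # diagonal relay-noise matrix.
    F = np.outer(f, f.conj())                      # f f^H
    Gc = np.outer(g.conj(), g)                     # (g*)(g*)^H
    A = Pt * F * Gc / sigma_v2                     # (4)
    C = Sigma * np.outer(g, g.conj()) / sigma_v2   # (5)
    D = Pt * np.eye(len(f)) * F + Sigma            # (6)
    return A, C, D

def snr(w, A, C):
    # SNR = w^H A w / (w^H C w + 1)
    return np.real(w.conj() @ A @ w) / (np.real(w.conj() @ C @ w) + 1.0)
```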
Equivalently, we may write $\Delta{\bm{f}}=\epsilon{\bm{x}},\quad\quad\Delta{\bm{g}}=\eta{\bm{y}},$ where ${\bm{x}}$ and ${\bm{y}}$ are standard complex Gaussian vectors and $\epsilon,\eta$ are known scalars to bound the error magnitudes [8, 25]. We are motivated to design the AF weight vector ${\bm{w}}$ so that the average transmit power is minimized while the receiver’s SNR outage constraint is satisfied. Specifically, we consider the following problem: $\begin{array}[]{ll}\displaystyle\min_{{\bm{w}}\in\mathbb{C}^{L}}&\quad\displaystyle{\rm\mathbb{E}}[{\bm{w}}^{H}{\bm{D}}{\bm{w}}]\\\ \text{subject to}&\quad\displaystyle{\rm Pr}\\{{\rm SNR}\leq\gamma\\}\leq\rho,\\\ \end{array}$ (7) where $\gamma$ is the target SNR threshold and $\rho$ is a prescribed outage rate we want to guarantee during the transmission. To further tackle Problem (7), a typical relaxation is $\begin{array}[]{ll}\displaystyle\min_{{\bm{W}}\in\mathbb{C}^{L\times L}}&\quad\displaystyle{\rm\mathbb{E}}[{\bm{D}}\cdot{\bm{W}}]\\\ \text{subject to}&\quad\displaystyle{\rm Pr}\\{Q({\bm{W}},{\bm{x}},{\bm{y}})\geq 0\\}\leq\rho,\\\ &\quad{\bm{W}}\succeq 0,\end{array}$ (8) where $\displaystyle Q({\bm{W}},{\bm{x}},{\bm{y}})$ (9) $\displaystyle=$ $\displaystyle\sigma_{{v}}^{2}+{\bm{W}}\cdot\big{(}{\bm{\Sigma}}\odot({\bm{g}}{\bm{g}}^{H})-\frac{P_{t}}{\gamma}(({\bm{f}}{\bm{f}}^{H})\odot(({\bm{g}}^{*})({\bm{g}}^{*})^{H}))\big{)}.$ Problem (8) is still difficult as $Q({\bm{W}},{\bm{x}},{\bm{y}})$ in the outage constraint involves high-order perturbation terms. Our main task in the sequel is to discuss how to deal with this challenge. ## III Moment Inequality-Based Approach In this section, we review and further develop the moment inequality approach in [21] to construct safe tractable approximations of chance constraints with quartic perturbations. ### III-A The Fourth-order Moment Inequality Method Key to the development of safe tractable approximations of quartically perturbed chance constraints is the following moment inequality for quartic polynomials in complex Gaussian random variables. ###### Theorem 1. Let $\xi_{1},\ldots,\xi_{m}$ be independent standard real Gaussian random variables. Consider the function $f:\mathbb{C}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}$: $\displaystyle f({\bm{x}},{\bm{\xi}})=-a_{0}({\bm{x}})+\sum_{1\leq i\leq m}\xi_{i}a_{i}({\bm{x}})+\sum_{1\leq j_{1},j_{2}\leq m}\xi_{j_{1}}\xi_{j_{2}}a_{j_{1}j_{2}}({\bm{x}})$ $\displaystyle+\sum_{1\leq k_{1},k_{2},k_{3}\leq m}\xi_{k_{1}}\xi_{k_{2}}\xi_{k_{3}}a_{k_{1}k_{2}k_{3}}({\bm{x}})$ (10) $\displaystyle+\sum_{1\leq\ell_{1},\ell_{2},\ell_{3},\ell_{4}\leq m}\xi_{\ell_{1}}\xi_{\ell_{2}}\xi_{\ell_{3}}\xi_{\ell_{4}}a_{\ell_{1}\ell_{2}\ell_{3}\ell_{4}}({\bm{x}}),$ where $a_{0},a_{i},a_{j_{1}j_{2}},a_{k_{1}k_{2}k_{3}},a_{\ell_{1}\ell_{2}\ell_{3}\ell_{4}}$ are affine functions of $\bm{x}$. Note that we allow the decision vector ${\bm{x}}$ to take complex values. However, we assume that the value $f({\bm{x}},{\bm{\xi}})$ is real for any ${\bm{x}}\in\mathbb{C}^{n}$ and ${\bm{\xi}}\in\mathbb{R}^{m}$. Consider the chance constraint $\Pr(f({\bm{x}},{\bm{\xi}})\geq 0)\leq\rho,$ (11) where $\rho>0$ is given. 
The following hold:

* (a) For each ${\bm{x}}\in{{\mathbb{C}}^{n}}$ and ${\bm{\xi}}\in{{\mathbb{R}}^{m}}$, let $\bar{f}({\bm{x}},{\bm{\xi}})\triangleq f({\bm{x}},{\bm{\xi}})+{{a}_{0}}({\bm{x}}).$ Then, the function $\bm{x}\mapsto\bar{f}(\bm{x},\bm{\xi})^{2}$ is quadratic in $\bm{x}$ and can be written in the form $\displaystyle\bar{f}({\bm{x}},{\bm{\xi}})^{2}={\bm{v}}^{H}({\bm{x}}){\bm{U}}({\bm{\xi}}){\bm{v}}({\bm{x}})$ (12) for some ${\bm{U}}({\bm{\xi}})\succeq{\bm{0}}$, where ${\bm{U}}({\bm{\xi}})$ is a Hermitian positive semidefinite matrix whose rows and columns are labeled by the set of indices $\mathscr{S}=\\{0,\underbrace{i,\ldots}_{1\leq i\leq m},\underbrace{j_{1}j_{2},\ldots}_{1\leq j_{1},j_{2}\leq m},\underbrace{k_{1}k_{2}k_{3},\ldots}_{1\leq k_{1},k_{2},k_{3}\leq m},\underbrace{\ell_{1}\ell_{2}\ell_{3}\ell_{4},\ldots}_{1\leq\ell_{1},\ell_{2},\ell_{3},\ell_{4}\leq m}\\}$ arranged in lexicographic order and ${\bm{v}}:\mathbb{C}^{n}\rightarrow\mathbb{C}^{|\mathscr{S}|}$ is an affine function. In particular, ${\bm{v}}({\bm{x}})$ is a vector whose $s$-th component is $a_{s}({\bm{x}})$, where $s\in\mathscr{S}$.

* (b) Let ${\bm{U}}\triangleq{\mathbb{E}}[{\bm{U}}({\bm{\xi}})]\succeq{\bm{0}}$ and $c(\rho)\triangleq\left\\{\begin{array}[]{ll}{{({q}(\rho)-1)}^{2}}\exp(\frac{2{q}(\rho)}{{q}(\rho)-1})&\ {\rm if}\ {q}(\rho)>2;\\\ 1/{\sqrt{\rho}}&\ {\rm if}\ {q}(\rho)=2,\end{array}\right.$ (13) where $q(\rho)\triangleq\left\\{\begin{array}[]{ll}\frac{-\ln\rho+\sqrt{{{(\ln\rho)}^{2}}-8\ln\rho}}{4}&\ {\rm if}\ \rho\in(0,\exp(-8)];\\\ 2&\ {\rm otherwise}.\end{array}\right.$ (14) Then, the second-order cone constraint ${{a}_{0}}({\bm{x}})\geq c(\rho)\|{{\bm{U}}^{1/2}}{\bm{v}}({\bm{x}})\|$ (15) serves as a safe tractable approximation of the chance constraint in (11).

This theorem was first proposed in [21] for a robust beamforming design in a two-way relay network. Therein, a partial proof was sketched, while important details were omitted due to the page limit. In this work, we will give a complete proof in Appendix A-A. Now, let us apply Theorem 1 to our problem. We define

$\displaystyle a_{0}({\bm{W}})$ $\displaystyle=$ $\displaystyle-\sigma_{{v}}^{2}-{\bm{W}}\cdot\big{(}{\bm{\Sigma}}\odot(\bar{\bm{g}}\bar{\bm{g}}^{H})-\frac{P_{t}}{\gamma}((\bar{\bm{f}}\bar{\bm{f}}^{H})\odot(({\bar{\bm{g}}}^{*})({\bar{\bm{g}}}^{*})^{H}))\big{)}.$

Then, we have

$\displaystyle\bar{f}({\bm{W}},{\bm{x}},{\bm{y}})$ $\displaystyle=$ $\displaystyle f({\bm{W}},{\bm{x}},{\bm{y}})+a_{0}({\bm{W}})$ $\displaystyle=$ $\displaystyle{\bm{W}}\cdot{\bm{M}}({\bm{x}},{\bm{y}})$ $\displaystyle=$ $\displaystyle{\rm vec}({\bm{W}})^{H}{\rm vec}({\bm{M}}({\bm{x}},{\bm{y}}))$

and

${\rm\mathbb{E}}\big{[}\bar{f}({\bm{W}},{\bm{x}},{\bm{y}})^{2}\big{]}={\rm vec}({\bm{W}})^{H}{\bm{U}}{\rm vec}({\bm{W}}),$

where ${\bm{U}}={\rm\mathbb{E}}\big{[}{\rm vec}({\bm{M}}({\bm{x}},{\bm{y}})){\rm vec}({\bm{M}}({\bm{x}},{\bm{y}}))^{H}\big{]}.$ The explicit forms of ${\bm{M}}({\bm{x}},{\bm{y}})$ and ${\bm{U}}$ will be given in the sequel.
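For concreteness, the constants in (13)–(14) are simple to evaluate numerically; the following is a minimal Python sketch (our own illustration, with hypothetical function names):

```python
import math

def q_rho(rho):
    # q(rho) from (14); the first branch applies when rho <= exp(-8)
    if 0 < rho <= math.exp(-8):
        lr = math.log(rho)
        return (-lr + math.sqrt(lr ** 2 - 8.0 * lr)) / 4.0
    return 2.0

def c_rho(rho):
    # c(rho) from (13); note that q(rho) > 2 exactly when rho <= exp(-8)
    q = q_rho(rho)
    if q > 2.0:
        return (q - 1.0) ** 2 * math.exp(2.0 * q / (q - 1.0))
    return 1.0 / math.sqrt(rho)

# Example: c_rho(0.1) = 1/sqrt(0.1) ~ 3.16, whereas c_rho(1e-4) takes the
# q > 2 branch since 1e-4 < exp(-8) ~ 3.35e-4.
```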
Armed with Theorem 1, the following second-order cone constraint serves as a safe approximation of the chance constraint ${\rm Pr}\\{Q(\bm{W},\bm{x},\bm{y})\geq 0\\}\leq\rho$ in (8): $a_{0}({\bm{W}})\geq c_{1}(\rho)\|{\bm{U}}^{1/2}{\rm vec}({\bm{W}})\|.$ This yields the following safe approximation of Problem (8): $\begin{array}[]{ll}\displaystyle\min_{{\bm{W}}\in\mathbb{C}^{L\times L}}&\quad\displaystyle(P_{t}{\bm{I}}\odot(\bar{\bm{f}}\bar{\bm{f}}^{H}+\epsilon^{2}{\bm{I}})+{\bm{\Sigma}})\cdot{\bm{W}}\\\ \text{subject to}&\quad\displaystyle a_{0}({\bm{W}})\geq c_{1}(\rho)\|{\bm{U}}^{1/2}{\rm vec}({\bm{W}})\|,\\\ &\quad{\bm{W}}\succeq 0.\end{array}$ (16) Note that Problem (16) can be readily handled by applying the SDR technique and off-the-shelf convex solvers. TABLE I: Explicit Expressions in Fourth-order Moment Inequality-based Approach ${\bm{M}}({\bm{x}},{\bm{y}})_{(i,i)}$ | $\displaystyle{\bm{M}}({\bm{x}},{\bm{y}})_{(i,i)}=\eta\sigma_{i}^{2}({y_{i}}{\bar{g}}_{i}^{*}+{\bar{g}}_{i}{y_{i}}^{*})+\eta^{2}\sigma_{i}^{2}{y_{i}}{y_{i}}^{*}-\frac{P_{t}}{\gamma}\big{[}\big{(}\eta{\bar{f}}_{i}{\bar{f}}_{i}^{*}({y_{i}}^{*}{\bar{g}}_{i}+{\bar{g}}_{i}^{*}{y_{i}})$ $\displaystyle+\epsilon{\bar{g}}_{i}{\bar{g}}_{i}^{*}({x_{i}}{\bar{f}}_{i}^{*}+{\bar{f}}_{i}{x_{i}}^{*})\big{)}+\big{(}\eta^{2}{\bar{f}}_{i}{\bar{f}}_{i}^{*}{y_{i}}{y_{i}}^{*}+\epsilon^{2}{\bar{g}}_{i}{\bar{g}}_{i}^{*}{x_{i}}{x_{i}}^{*}+\epsilon\eta({x_{i}}{\bar{f}}_{i}^{*}+{\bar{f}}_{i}{x_{i}}^{*})({y_{i}}^{*}{\bar{g}}_{i}+{\bar{g}}_{i}^{*}{y_{i}})\big{)}$ $\displaystyle+\big{(}\epsilon\eta^{2}({x_{i}}{\bar{f}}_{i}^{*}+{\bar{f}}_{i}{x_{i}}^{*}){y_{i}}{y_{i}}^{*}+\eta\epsilon^{2}{x_{i}}{x_{i}}^{*}({y_{i}}^{*}{\bar{g}}_{i}+{\bar{g}}_{i}^{*}{y_{i}})\big{)}+\epsilon^{2}\eta^{2}{x_{i}}{x_{i}}^{*}{y_{i}}{y_{i}}^{*}\big{]}$ ---|--- ${\bm{M}}({\bm{x}},{\bm{y}})_{(k,\ell)}$ | $\displaystyle{\bm{M}}({\bm{x}},{\bm{y}})_{(k,\ell)}=-\frac{P_{t}}{\gamma}\big{[}\big{(}\eta{\bar{f}}_{k}{\bar{f}}_{\ell}^{*}({y_{k}}^{*}{\bar{g}}_{\ell}+{\bar{g}}_{k}^{*}{y_{\ell}})+\epsilon{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}({x_{k}}{\bar{f}}_{\ell}^{*}+{\bar{f}}_{k}{x_{\ell}}^{*})\big{)}$ $\displaystyle+\big{(}\eta^{2}{{\bar{f}}_{k}}{{\bar{f}}_{\ell}}^{*}{y_{k}}^{*}{y_{\ell}}+\epsilon^{2}{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}{x_{k}}{x_{\ell}}^{*}+\epsilon\eta({x_{k}}{\bar{f}}_{\ell}^{*}+{\bar{f}}_{k}{x_{\ell}}^{*})({y_{k}}^{*}{\bar{g}}_{\ell}+{\bar{g}}_{k}^{*}{y_{\ell}})\big{)}$ $\displaystyle+\big{(}\epsilon\eta^{2}({x_{k}}{\bar{f}}_{\ell}^{*}+{\bar{f}}_{k}{x_{\ell}}^{*}){y_{k}}^{*}{y_{\ell}}+\eta\epsilon^{2}{x_{k}}{x_{\ell}}^{*}({y_{k}}^{*}{\bar{g}}_{\ell}+{\bar{g}}_{k}^{*}{y_{\ell}})\big{)}+\epsilon^{2}\eta^{2}{x_{k}}{x_{\ell}}^{*}{y_{k}}^{*}{y_{\ell}}\big{]},\quad 1\leq i\leq L,1\leq k\neq\ell\leq L$ ${\bm{U}}_{(i,j),(k,\ell)}$ | ${\bm{U}}_{(i,j),(k,\ell)}=\left\\{\begin{array}[]{rl}\Sigma_{m=1}^{9}C_{m}(i)C_{m}(i)^{*}&\ {\rm if}~{}1\leq i=j=k=\ell\leq L;\\\ C_{9}(i)C_{9}(k)^{*}&\ {\rm if}\ i=j,\ k=\ell,\ i\neq k;\\\ C_{1}(i)D_{1}(i,\ell)^{*}+C_{3}(i)D_{3}(i,\ell)^{*}+C_{5}(i)D_{5}(i,\ell)^{*}&\ {\rm if}\ i=j,\ k\neq\ell,\ i=k;\\\ C_{2}(i)D_{2}(k,i)^{*}+C_{4}(i)D_{4}(k,i)^{*}+C_{8}(i)D_{8}(k,i)^{*}&\ {\rm if}\ i=j,\ k\neq\ell,\ i=\ell;\\\ D_{1}(i,j)C_{1}(i)^{*}+D_{3}(i,j)C_{3}(i)^{*}+D_{5}(i,j)C_{5}(i)^{*}&\ {\rm if}\ i\neq j,\ k=\ell,\ i=k;\\\ D_{2}(i,j)C_{2}(j)^{*}+D_{4}(i,j)C_{4}(j)^{*}+D_{8}(i,j)C_{8}(j)^{*}&\ {\rm if}\ i\neq j,\ k=\ell,\ j=k;\\\ \Sigma_{n=1}^{15}D_{n}(i,j)D_{n}(i,j)^{*}&\ {\rm if}~{}i\neq j,\ k\neq\ell,\ i=k,\ j=\ell;\\\ 
D_{1}(k,j)D_{1}(k,\ell)^{*}+D_{3}(k,j)D_{3}(k,\ell)^{*}+D_{5}(k,j)D_{5}(k,\ell)^{*}&\ {\rm if}~{}i\neq j,\ k\neq\ell,\ i=k,\ j\neq\ell;\\\ D_{2}(i,\ell)D_{2}(k,\ell)^{*}+D_{4}(i,\ell)D_{4}(k,\ell)^{*}+D_{8}(i,\ell)D_{8}(k,\ell)^{*}&\ {\rm if}~{}i\neq j,\ k\neq\ell,\ i\neq k,\ j=\ell;\\\ 0&\ {\rm otherwise.}\end{array}\right.$ A straightforward but tedious calculation yields ${\bm{M}}({\bm{x}},{\bm{y}})_{(i,j)}$ in Table I. To derive an explicit expression for ${\bm{U}}$, we treat $(i,j)$ as an index of any vectorized matrix in the sequel. In other words, for a matrix ${\bm{R}}$, we denote ${\rm vec}({\bm{R}})_{(i,j)}={\bm{R}}_{i,j}.$ Recall that ${\bm{U}}={\rm\mathbb{E}}\big{[}{\rm vec}({\bm{M}}({\bm{x}},{\bm{y}})){\rm vec}({\bm{M}}({\bm{x}},{\bm{y}}))^{H}\big{]}.$ Then, $\displaystyle{\bm{U}}_{(i,j),(k,\ell)}$ $\displaystyle=$ $\displaystyle{\rm\mathbb{E}}\big{[}{\rm vec}({\bm{M}}({\bm{x}},{\bm{y}}))_{(i,j)}{\rm vec}({\bm{M}}({\bm{x}},{\bm{y}}))^{*}_{(k,\ell)}\big{]}$ $\displaystyle=$ $\displaystyle{\rm\mathbb{E}}\big{[}{\bm{M}}({\bm{x}},{\bm{y}})_{i,j}{\bm{M}}({\bm{x}},{\bm{y}})^{*}_{k,\ell}\big{]}.$ The entries of $\bm{U}$ are given as ${\bm{U}}_{(i,j),(k,\ell)}$ in Table I with $\left\\{\begin{array}[]{rl}&C_{1}(i)=\eta\sigma_{i}^{2}{\bar{g}}_{i}-\frac{P_{t}}{\gamma}(\eta{\bar{f}}_{i}{\bar{f}}_{i}^{*}{\bar{g}}_{i}+\eta\epsilon^{2}{\bar{g}}_{i});\\\ &C_{2}(i)=\eta\sigma_{i}^{2}{\bar{g}}_{i}^{*}-\frac{P_{t}}{\gamma}(\eta{\bar{f}}_{i}{\bar{f}}_{i}^{*}{\bar{g}}_{i}^{*}+\eta\epsilon^{2}{\bar{g}}_{i}^{*});\\\ &C_{3}(i)=-\frac{P_{t}}{\gamma}(\epsilon{\bar{g}}_{i}{\bar{g}}_{i}^{*}{\bar{f}}_{i}^{*}+\epsilon\eta^{2}{\bar{f}}_{i}^{*});\\\ &C_{4}(i)=-\frac{P_{t}}{\gamma}(\epsilon{\bar{g}}_{i}{\bar{g}}_{i}^{*}{\bar{f}}_{i}+\epsilon\eta^{2}{\bar{f}}_{i});\\\ &C_{5}(i)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{i}^{*}{\bar{g}}_{i};\\\ &C_{6}(i)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{i}^{*}{\bar{g}}_{i}^{*};\\\ &C_{7}(i)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{i}{\bar{g}}_{i};\\\ &C_{8}(i)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{i}{\bar{g}}_{i}^{*};\\\ &C_{9}(i)=\eta^{2}\sigma_{i}^{2}-\frac{P_{t}}{\gamma}(\eta^{2}{\bar{f}}_{i}{\bar{f}}_{i}^{*}+\epsilon^{2}{\bar{g}}_{i}{\bar{g}}_{i}^{*}+\epsilon^{2}\eta^{2})\end{array}\right.$ and $\left\\{\begin{array}[]{rl}&D_{1}(k,\ell)=-\frac{P_{t}}{\gamma}\eta{\bar{f}}_{k}{\bar{f}}_{\ell}^{*}{\bar{g}}_{\ell};\\\ &D_{2}(k,\ell)=-\frac{P_{t}}{\gamma}\eta{\bar{f}}_{k}{\bar{f}}_{\ell}^{*}{\bar{g}}_{k}^{*};\\\ &D_{3}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}{\bar{f}}_{\ell}^{*};\\\ &D_{4}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}{\bar{f}}_{k};\\\ &D_{5}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{\ell}^{*}{\bar{g}}_{\ell};\\\ &D_{6}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{\ell}^{*}{\bar{g}}_{k}^{*};\\\ &D_{7}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{k}{\bar{g}}_{\ell};\\\ &D_{8}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{k}{\bar{g}}_{k}^{*};\\\ &D_{9}(k,\ell)=-\frac{P_{t}}{\gamma}\eta^{2}{\bar{f}}_{k}{\bar{f}}_{\ell}^{*};\\\ &D_{10}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon^{2}{\bar{g}}_{k}^{*}{\bar{g}}_{\ell};\\\ &D_{11}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta^{2}{\bar{f}}_{\ell}^{*};\\\ &D_{12}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta^{2}{\bar{f}}_{k};\\\ &D_{13}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon^{2}\eta{\bar{g}}_{\ell};\\\ &D_{14}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon^{2}\eta{\bar{g}}_{k}^{*};\\\ 
&D_{15}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon^{2}\eta^{2}.\\\ \end{array}\right.$

### III-B The Second-order Moment Inequality Method

The aforementioned fourth-order moment inequality method can provide a feasible solution to Problem (8). However, as it involves all orders (from first to fourth) of the channel uncertainties in the outage constraint, the resulting explicit expression is rather complicated. In the robust design, we observe that channel uncertainties are usually small, and thus higher-order uncertainty terms contribute very little to the SNR quantity. Hence, we are motivated to ignore the third- and fourth-order uncertainty terms so as to significantly simplify our problem. To proceed, we let $f_{quad}({\bm{W}},{\bm{x}},{\bm{y}})$ be the quadratic approximation of $f({\bm{W}},{\bm{x}},{\bm{y}})$, obtained by dropping the cubic and quartic terms in $f({\bm{W}},{\bm{x}},{\bm{y}})$. Then, we consider the following second-order approximation of the original chance constraint:

$\Pr(f_{quad}({\bm{W}},{\bm{x}},{\bm{y}})\geq 0)\leq\rho.$ (17)

To tackle the above constraint, we define

$\displaystyle\bar{f}_{quad}({\bm{W}},{\bm{x}},{\bm{y}})$ $\displaystyle=$ $\displaystyle f_{quad}({\bm{W}},{\bm{x}},{\bm{y}})+a_{0}({\bm{W}})$ $\displaystyle=$ $\displaystyle{\bm{W}}\cdot{\bm{M}}_{quad}({\bm{x}},{\bm{y}})$ $\displaystyle=$ $\displaystyle{\rm vec}({\bm{W}})^{H}{\rm vec}({\bm{M}}_{quad}({\bm{x}},{\bm{y}}))$

and

${\rm\mathbb{E}}\big{[}\bar{f}_{quad}({\bm{W}},{\bm{x}},{\bm{y}})^{2}\big{]}={\rm vec}({\bm{W}})^{H}{\bm{U}}_{quad}{\rm vec}({\bm{W}}),$

where ${\bm{U}}_{quad}={\rm\mathbb{E}}\big{[}{\rm vec}({\bm{M}}_{quad}({\bm{x}},{\bm{y}})){\rm vec}({\bm{M}}_{quad}({\bm{x}},{\bm{y}}))^{H}\big{]}$. The explicit forms of ${\bm{M}}_{quad}({\bm{x}},{\bm{y}})$ and $\bm{U}_{quad}$ are given in the sequel. By Theorem 1, the second-order cone constraint $a_{0}({\bm{W}})\geq c_{2}(\rho)\|{\bm{U}_{quad}}^{1/2}{\rm vec}({\bm{W}})\|$ serves as a safe approximation of the chance constraint (17). This yields the following safe approximation of Problem (8):

$\begin{array}[]{ll}\displaystyle\min_{{\bm{W}}\in\mathbb{C}^{L\times L}}&\quad\displaystyle(P_{t}{\bm{I}}\odot(\bar{\bm{f}}\bar{\bm{f}}^{H}+\epsilon^{2}{\bm{I}})+{\bm{\Sigma}})\cdot{\bm{W}}\\\ \text{subject to}&\quad\displaystyle a_{0}({\bm{W}})\geq c_{2}(\rho)\|{\bm{U}}_{quad}^{1/2}{\rm vec}({\bm{W}})\|,\\\ &\quad{\bm{W}}\succeq 0.\end{array}$ (18)

Note that Problem (18) can be readily handled by applying the SDR technique and off-the-shelf convex solvers. The explicit expressions for $\bm{M}_{quad}({\bm{x}},{\bm{y}})$ and $\bm{U}_{quad}$ can be obtained in a way similar to Section III-A.
We provide the explicit expressions of $\bm{M}_{quad}({\bm{x}},{\bm{y}})$ and $\bm{U}_{quad}$ in Table II with

$\left\\{\begin{array}[]{rl}&C_{1}(i)=\eta\sigma_{i}^{2}{\bar{g}}_{i}-\frac{P_{t}}{\gamma}\eta{\bar{f}}_{i}{\bar{f}}_{i}^{*}{\bar{g}}_{i};\\\ &C_{2}(i)=\eta\sigma_{i}^{2}{\bar{g}}_{i}^{*}-\frac{P_{t}}{\gamma}\eta{\bar{f}}_{i}{\bar{f}}_{i}^{*}{\bar{g}}_{i}^{*};\\\ &C_{3}(i)=-\frac{P_{t}}{\gamma}\epsilon{\bar{g}}_{i}{\bar{g}}_{i}^{*}{\bar{f}}_{i}^{*};\\\ &C_{4}(i)=-\frac{P_{t}}{\gamma}\epsilon{\bar{g}}_{i}{\bar{g}}_{i}^{*}{\bar{f}}_{i};\\\ &C_{5}(i)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{i}^{*}{\bar{g}}_{i};\\\ &C_{6}(i)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{i}^{*}{\bar{g}}_{i}^{*};\\\ &C_{7}(i)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{i}{\bar{g}}_{i};\\\ &C_{8}(i)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{i}{\bar{g}}_{i}^{*};\\\ &C_{9}(i)=\eta^{2}\sigma_{i}^{2}-\frac{P_{t}}{\gamma}(\eta^{2}{\bar{f}}_{i}{\bar{f}}_{i}^{*}+\epsilon^{2}{\bar{g}}_{i}{\bar{g}}_{i}^{*})\end{array}\right.$

and

$\left\\{\begin{array}[]{rl}&D_{1}(k,\ell)=-\frac{P_{t}}{\gamma}\eta{\bar{f}}_{k}{\bar{f}}_{\ell}^{*}{\bar{g}}_{\ell};\\\ &D_{2}(k,\ell)=-\frac{P_{t}}{\gamma}\eta{\bar{f}}_{k}{\bar{f}}_{\ell}^{*}{\bar{g}}_{k}^{*};\\\ &D_{3}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}{\bar{f}}_{\ell}^{*};\\\ &D_{4}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}{\bar{f}}_{k};\\\ &D_{5}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{\ell}^{*}{\bar{g}}_{\ell};\\\ &D_{6}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{\ell}^{*}{\bar{g}}_{k}^{*};\\\ &D_{7}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{k}{\bar{g}}_{\ell};\\\ &D_{8}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon\eta{\bar{f}}_{k}{\bar{g}}_{k}^{*};\\\ &D_{9}(k,\ell)=-\frac{P_{t}}{\gamma}\eta^{2}{\bar{f}}_{k}{\bar{f}}_{\ell}^{*};\\\ &D_{10}(k,\ell)=-\frac{P_{t}}{\gamma}\epsilon^{2}{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}.\\\ \end{array}\right.$

Clearly, the above second-order moment inequality-based approach involves fewer perturbation terms than its fourth-order counterpart. In fact, compared to Problem (16), it is easier to find a feasible solution to Problem (18).
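Both (16) and (18) share the same second-order cone template, so a single solver routine covers them. A hedged CVXPY sketch is given below; the names `D_mat`, `S`, `U_half`, and `c` are our own placeholders: `D_mat` is the matrix in the objective, `S` the matrix defining $a_{0}({\bm{W}})=-\sigma_{v}^{2}-{\bm{S}}\cdot{\bm{W}}$ (i.e., ${\bm{S}}={\bm{\Sigma}}\odot(\bar{\bm{g}}\bar{\bm{g}}^{H})-\frac{P_{t}}{\gamma}(\bar{\bm{f}}\bar{\bm{f}}^{H})\odot(\bar{\bm{g}}^{*})(\bar{\bm{g}}^{*})^{H}$), `U_half` a square root of ${\bm{U}}$ or ${\bm{U}}_{quad}$, and `c` the constant $c_{1}(\rho)$ or $c_{2}(\rho)$:

```python
import cvxpy as cp

def solve_restriction(D_mat, S, U_half, sigma_v2, c):
    # Template behind (16)/(18): minimize <D_mat, W> subject to
    # a0(W) >= c * ||U_half vec(W)|| and W PSD (the SDR of the rank-one model).
    L = D_mat.shape[0]
    W = cp.Variable((L, L), hermitian=True)
    a0 = -sigma_v2 - cp.real(cp.trace(S.conj().T @ W))
    constraints = [W >> 0,
                   a0 >= c * cp.norm(U_half @ cp.vec(W), 2)]
    prob = cp.Problem(
        cp.Minimize(cp.real(cp.trace(D_mat.conj().T @ W))), constraints)
    prob.solve()
    return W.value, prob.value
```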
TABLE II: Explicit Expressions in Second-order Moment Inequality-based Approach

${\bm{M}}_{quad}({\bm{x}},{\bm{y}})_{(i,i)}$ | $\displaystyle{\bm{M}}_{quad}({\bm{x}},{\bm{y}})_{(i,i)}=\eta\sigma_{i}^{2}({y_{i}}{\bar{g}}_{i}^{*}+{\bar{g}}_{i}{y_{i}}^{*})+\eta^{2}\sigma_{i}^{2}{y_{i}}{y_{i}}^{*}-\frac{P_{t}}{\gamma}\big{[}\eta{\bar{f}}_{i}{\bar{f}}_{i}^{*}({y_{i}}^{*}{\bar{g}}_{i}+{\bar{g}}_{i}^{*}{y_{i}})+\epsilon{\bar{g}}_{i}{\bar{g}}_{i}^{*}({x_{i}}{\bar{f}}_{i}^{*}+{\bar{f}}_{i}{x_{i}}^{*})$ $\displaystyle+\eta^{2}{\bar{f}}_{i}{\bar{f}}_{i}^{*}{y_{i}}{y_{i}}^{*}+\epsilon^{2}{\bar{g}}_{i}{\bar{g}}_{i}^{*}{x_{i}}{x_{i}}^{*}+\epsilon\eta({x_{i}}{\bar{f}}_{i}^{*}+{\bar{f}}_{i}{x_{i}}^{*})({y_{i}}^{*}{\bar{g}}_{i}+{\bar{g}}_{i}^{*}{y_{i}})\big{]}$
---|---
${\bm{M}}_{quad}({\bm{x}},{\bm{y}})_{(k,\ell)}$ | $\displaystyle{\bm{M}}_{quad}({\bm{x}},{\bm{y}})_{(k,\ell)}=-\frac{P_{t}}{\gamma}\big{[}\eta{\bar{f}}_{k}{\bar{f}}_{\ell}^{*}({y_{k}}^{*}{\bar{g}}_{\ell}+{\bar{g}}_{k}^{*}{y_{\ell}})+\epsilon{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}({x_{k}}{\bar{f}}_{\ell}^{*}+{\bar{f}}_{k}{x_{\ell}}^{*})$ $\displaystyle+\eta^{2}{{\bar{f}}_{k}}{{\bar{f}}_{\ell}}^{*}{y_{k}}^{*}{y_{\ell}}+\epsilon^{2}{\bar{g}}_{k}^{*}{\bar{g}}_{\ell}{x_{k}}{x_{\ell}}^{*}+\epsilon\eta({x_{k}}{\bar{f}}_{\ell}^{*}+{\bar{f}}_{k}{x_{\ell}}^{*})({y_{k}}^{*}{\bar{g}}_{\ell}+{\bar{g}}_{k}^{*}{y_{\ell}})\big{]},\quad 1\leq i\leq L,1\leq k\neq\ell\leq L$
${\bm{U}}^{quad}_{(i,j),(k,\ell)}$ | ${\bm{U}}^{quad}_{(i,j),(k,\ell)}=\left\\{\begin{array}[]{rl}\Sigma_{m=1}^{9}C_{m}(i)C_{m}(i)^{*}&\ {\rm if}~{}1\leq i=j=k=\ell\leq L;\\\ C_{9}(i)C_{9}(k)^{*}&\ {\rm if}\ i=j,\ k=\ell,\ i\neq k;\\\ C_{1}(i)D_{1}(i,\ell)^{*}+C_{3}(i)D_{3}(i,\ell)^{*}+C_{5}(i)D_{5}(i,\ell)^{*}&\ {\rm if}\ i=j,\ k\neq\ell,\ i=k;\\\ C_{2}(i)D_{2}(k,i)^{*}+C_{4}(i)D_{4}(k,i)^{*}+C_{8}(i)D_{8}(k,i)^{*}&\ {\rm if}\ i=j,\ k\neq\ell,\ i=\ell;\\\ D_{1}(i,j)C_{1}(i)^{*}+D_{3}(i,j)C_{3}(i)^{*}+D_{5}(i,j)C_{5}(i)^{*}&\ {\rm if}\ i\neq j,\ k=\ell,\ i=k;\\\ D_{2}(i,j)C_{2}(j)^{*}+D_{4}(i,j)C_{4}(j)^{*}+D_{8}(i,j)C_{8}(j)^{*}&\ {\rm if}\ i\neq j,\ k=\ell,\ j=k;\\\ \Sigma_{n=1}^{10}D_{n}(i,j)D_{n}(i,j)^{*}&\ {\rm if}\ i\neq j,\ k\neq\ell,\ i=k,\ j=\ell;\\\ D_{1}(k,j)D_{1}(k,\ell)^{*}+D_{3}(k,j)D_{3}(k,\ell)^{*}+D_{5}(k,j)D_{5}(k,\ell)^{*}&\ {\rm if}\ i\neq j,\ k\neq\ell,\ i=k,\ j\neq\ell;\\\ D_{2}(i,\ell)D_{2}(k,\ell)^{*}+D_{4}(i,\ell)D_{4}(k,\ell)^{*}+D_{8}(i,\ell)D_{8}(k,\ell)^{*}&\ {\rm if}\ i\neq j,\ k\neq\ell,\ i\neq k,\ j=\ell;\\\ 0&\ {\rm otherwise.}\end{array}\right.$

## IV The Bernstein-type Inequality Approach

Although the fourth-order moment inequality-based approach is much more restrictive and its feasibility rate can be lower, it provides a more robust AF design than the second-order counterpart. Hence, in practice, we can first try to find an AF weight via Problem (16). If it cannot provide a feasible solution, we then drop the higher-order perturbations and resort to a quadratically perturbed chance constraint problem, e.g., Problem (18), to find a relatively good AF weight. In this section, we introduce another typical way of tackling the quadratically perturbed chance constraint, i.e., to use the so-called Bernstein-type inequality [13]. To be specific, we define ${\bm{\xi}}=\sqrt{2}\left(\begin{array}[]{c}{\rm Re}({\bm{x}})\\\ {\rm Im}({\bm{x}})\\\ {\rm Re}({\bm{y}})\\\ {\rm Im}({\bm{y}})\end{array}\right)\sim\mathcal{N}(0,{\bm{I}}_{4L})$, with ${\rm Re}(\cdot)$ and ${\rm Im}(\cdot)$ denoting the real part and imaginary part of a complex number, respectively.
Then, the quadratic approximation can be rewritten as

$\displaystyle{f}_{quad}({\bm{\xi}})=s_{0}({\bm{\xi}})+s_{1}({\bm{\xi}})+s_{2}({\bm{\xi}}),$ (19)

where $s_{0}({\bm{\xi}}),s_{1}({\bm{\xi}}),s_{2}({\bm{\xi}})$ represent the constant term, linear term, and quadratic term in ${f}_{quad}({\bm{\xi}})$, respectively. More precisely, we have

$\displaystyle s_{0}({{\bm{\xi}}})$ $\displaystyle=$ $\displaystyle-\sigma_{{v}}^{2}-{\bm{W}}\cdot\big{(}{\bm{\Sigma}}\odot(\bar{\bm{g}}\bar{\bm{g}}^{H})\big{)}+\frac{P_{t}}{\gamma}{\bm{W}}\cdot\big{(}{\bar{\bm{f}}}{\bar{\bm{f}}}^{H}\odot({\bar{\bm{g}}}^{*})({\bar{\bm{g}}}^{*})^{H}\big{)},$ $\displaystyle s_{1}({{\bm{\xi}}})$ $\displaystyle=$ $\displaystyle{\bm{W}}\cdot\big{(}\frac{P_{t}\epsilon}{\gamma}({\bm{x}}\bar{\bm{f}}^{H}+\bar{\bm{f}}{\bm{x}}^{H})\odot{\bar{\bm{g}}}^{*}({\bar{\bm{g}}}^{*})^{H}-\eta{\bm{\Sigma}}\odot(\bar{\bm{g}}{\bm{y}^{H}}+{\bm{y}}\bar{\bm{g}}^{H})\big{)}$ $\displaystyle+\frac{P_{t}\eta}{\gamma}{\bm{W}}\cdot\big{(}(\bar{\bm{f}}\bar{\bm{f}}^{H})\odot(({\bm{y}}^{*})(\bar{\bm{g}}^{*})^{H}+(\bar{\bm{g}}^{*})({\bm{y}}^{*})^{H})\big{)},$ $\displaystyle s_{2}({{\bm{\xi}}})$ $\displaystyle=$ $\displaystyle\frac{P_{t}}{\gamma}{\bm{W}}\cdot\big{(}\epsilon^{2}({\bm{x}}{\bm{x}}^{H})\odot({\bar{\bm{g}}}^{*})({\bar{\bm{g}}}^{*})^{H}+\eta^{2}(\bar{\bm{f}}\bar{\bm{f}}^{H})\odot(({\bm{y}}^{*})({\bm{y}}^{*})^{H})\big{)}$ $\displaystyle+\frac{P_{t}}{\gamma}{\bm{W}}\cdot\big{(}\eta\epsilon({\bm{x}}\bar{\bm{f}}^{H}+\bar{\bm{f}}{\bm{x}}^{H})\odot(({\bm{y}}^{*})(\bar{\bm{g}}^{*})^{H}+(\bar{\bm{g}}^{*})({\bm{y}}^{*})^{H})\big{)}$ $\displaystyle-\eta^{2}{\bm{W}}\cdot\big{(}{\bm{\Sigma}}\odot({\bm{y}}{\bm{y}}^{H})\big{)}.$

Defining ${\bm{F}}=\bar{\bm{f}}\bar{\bm{f}}^{H}$ and ${\bm{G}}=(\bar{\bm{g}}^{*})(\bar{\bm{g}}^{*})^{H}$, one can easily verify that

$s_{1}({{\bm{\xi}}})={\bm{v}}_{{\bm{\xi}}}^{T}{\bm{\xi}},$

where

${\bm{v}}_{{\bm{\xi}}}={\sqrt{2}}\left(\begin{array}[]{c}\epsilon\frac{P_{t}}{\gamma}{\rm Re}\big{(}({\bm{W}}\odot{\bm{G}}^{*})\bar{\bm{f}}\big{)}\\\ \epsilon\frac{P_{t}}{\gamma}{\rm Im}\big{(}({\bm{W}}\odot{\bm{G}}^{*})\bar{\bm{f}}\big{)}\\\ -\eta{\rm Re}({\bm{W}}\odot{\bm{\Sigma}}){\rm Re}(\bar{\bm{g}})+\eta\frac{P_{t}}{\gamma}{\rm Re}\big{(}({\bm{W}}^{*}\odot{\bm{F}})\bar{\bm{g}}\big{)}\\\ -\eta{\rm Re}({\bm{W}}\odot{\bm{\Sigma}}){\rm Im}(\bar{\bm{g}})+\eta\frac{P_{t}}{\gamma}{\rm Im}\big{(}({\bm{W}}^{*}\odot{\bm{F}})\bar{\bm{g}}\big{)}\\\ \end{array}\right).$

In addition, consider

$\displaystyle{\bm{R}}=\frac{1}{2}\left(\begin{array}[]{cc}\epsilon^{2}\frac{P_{t}}{\gamma}{\bm{K}_{1}}&\epsilon\eta\frac{P_{t}}{\gamma}{\bm{K}_{2}}\\\ \epsilon\eta\frac{P_{t}}{\gamma}{\bm{K}_{3}}&-\eta^{2}{\bm{K}_{4}^{(1)}}+\eta^{2}\frac{P_{t}}{\gamma}{\bm{K}_{4}^{(2)}}\\\ \end{array}\right),$

where

$\displaystyle{\bm{K}_{1}}=\left(\begin{array}[]{cc}{\rm Re}({\bm{W}}^{*}\odot{\bm{G}})&{\rm Im}({\bm{W}}^{*}\odot{\bm{G}})\\\ -{\rm Im}({\bm{W}}^{*}\odot{\bm{G}})&{\rm Re}({\bm{W}}^{*}\odot{\bm{G}})\\\ \end{array}\right),$ $\displaystyle{\bm{K}_{4}^{(1)}}=\left(\begin{array}[]{cc}{\rm Re}({\bm{W}}\odot{\bm{\Sigma}})&{\bm{0}}_{L\times L}\\\ {\bm{0}}_{L\times L}&{\rm Re}({\bm{W}}\odot{\bm{\Sigma}})\\\ \end{array}\right),$ $\displaystyle{\bm{K}_{4}^{(2)}}=\left(\begin{array}[]{cc}{\rm Re}({\bm{W}}\odot{\bm{F}}^{*})&{\rm Im}({\bm{W}}\odot{\bm{F}}^{*})\\\ -{\rm Im}({\bm{W}}\odot{\bm{F}}^{*})&{\rm Re}({\bm{W}}\odot{\bm{F}}^{*})\\\ \end{array}\right),$

and ${\bm{K}_{2}},{\bm{K}_{3}}$ are given below.
$\displaystyle{\bm{K}_{2}}=\left(\begin{array}[]{cc}{\rm Diag}\big{(}{\rm Re}({\bm{W}}(\bar{\bm{f}}\odot\bar{\bm{g}}^{*}))\big{)}+{\rm Re}\big{(}{\bm{W}}\odot(\bar{\bm{g}}\bar{\bm{f}}^{T})\big{)}&{\rm Diag}\big{(}{\rm Im}({\bm{W}}^{*}(\bar{\bm{f}}^{*}\odot\bar{\bm{g}}))\big{)}+{\rm Im}\big{(}{\bm{W}}\odot(\bar{\bm{g}}\bar{\bm{f}}^{T})\big{)}\\\ {\rm Diag}\big{(}{\rm Im}({\bm{W}}(\bar{\bm{f}}\odot\bar{\bm{g}}^{*}))\big{)}+{\rm Im}\big{(}{\bm{W}}\odot(\bar{\bm{g}}\bar{\bm{f}}^{T})\big{)}&{\rm Diag}\big{(}{\rm Re}({\bm{W}}^{*}(\bar{\bm{f}}^{*}\odot\bar{\bm{g}}))\big{)}-{\rm Re}\big{(}{\bm{W}}\odot(\bar{\bm{g}}\bar{\bm{f}}^{T})\big{)}\\\ \end{array}\right)$

$\displaystyle{\bm{K}_{3}}=\left(\begin{array}[]{cc}{\rm Diag}\big{(}{\rm Re}({\bm{W}}(\bar{\bm{f}}\odot\bar{\bm{g}}^{*}))\big{)}+{\rm Re}\big{(}{\bm{W}}^{*}\odot(\bar{\bm{f}}\bar{\bm{g}}^{T})\big{)}&{\rm Diag}\big{(}{\rm Im}({\bm{W}}(\bar{\bm{f}}\odot\bar{\bm{g}}^{*}))\big{)}+{\rm Im}\big{(}{\bm{W}}^{*}\odot(\bar{\bm{f}}\bar{\bm{g}}^{T})\big{)}\\\ {\rm Diag}\big{(}{\rm Im}({\bm{W}}^{*}(\bar{\bm{f}}^{*}\odot\bar{\bm{g}}))\big{)}+{\rm Im}\big{(}{\bm{W}}^{*}\odot(\bar{\bm{f}}\bar{\bm{g}}^{T})\big{)}&{\rm Diag}\big{(}{\rm Re}({\bm{W}}^{*}(\bar{\bm{f}}^{*}\odot\bar{\bm{g}}))\big{)}-{\rm Re}\big{(}{\bm{W}}^{*}\odot(\bar{\bm{f}}\bar{\bm{g}}^{T})\big{)}\\\ \end{array}\right)$

We symmetrize ${\bm{R}}$ by setting $\bar{{\bm{R}}}=\frac{1}{2}({{\bm{R}}}+{{\bm{R}}}^{T})$, so that $s_{2}({{\bm{\xi}}})={\bm{\xi}}^{T}\bar{{\bm{R}}}{\bm{\xi}}$. Furthermore, denote $s_{{\bm{\xi}}}=s_{0}({\bm{\xi}})$, so that we can rewrite the quadratic approximation as

${f}_{quad}({\bm{\xi}})=s_{{\bm{\xi}}}+{\bm{v}}_{{\bm{\xi}}}^{T}{\bm{\xi}}+{\bm{\xi}}^{T}\bar{\bm{R}}{\bm{\xi}}.$

Now, we apply the Bernstein-type inequality to the chance constraint ${\rm Pr}\\{s_{{\bm{\xi}}}+{\bm{v}}_{{\bm{\xi}}}^{T}{\bm{\xi}}+{\bm{\xi}}^{T}\bar{{\bm{R}}}{\bm{\xi}}\geq 0\\}\geq 1-\rho$ and obtain the following safe approximation of our problem:

$\begin{array}[]{ll}\displaystyle\min_{{\bm{W}},\lambda,\delta}&\displaystyle(P_{t}{\bm{I}}\odot(\bar{\bm{f}}\bar{\bm{f}}^{H}+\epsilon^{2}{\bm{I}})+{\bm{\Sigma}})\cdot{\bm{W}}\\\ \text{subject to}&\displaystyle{\rm Tr}(\bar{{\bm{R}}})-2\sqrt{{\rm ln}(1/\rho)}\cdot\delta+2{\rm ln}(\rho)\cdot\lambda+s_{\bm{\xi}}\geq 0,\\\ &\sqrt{\|\bar{{\bm{R}}}\|^{2}_{F}+\frac{1}{2}\|{\bm{v}}_{\bm{\xi}}\|^{2}}\leq\delta,\\\ &\lambda{\bm{I}}_{4L}+\bar{{\bm{R}}}\succeq{\bm{0}},\\\ &\lambda\geq 0,\\\ &{\bm{W}}\succeq 0.\end{array}$ (20)

Again, the problem can be readily solved by the SDR technique and convex solvers.

Remark 1: For both the moment inequality-based and Bernstein-type inequality-based AF designs, we aim to solve Problem (8) to find a good AF weight matrix ${\bm{W}}$, rather than an AF weight vector ${\bm{w}}$, since we have relaxed the rank constraint in (7) for the sake of computational tractability. Hence, after we have solved (8), we still need to apply the Gaussian randomization algorithm [22]; more specifically, see Algorithm 1 in [32] on how to find an approximate rank-one AF weight vector ${\bm{w}}$.

Remark 2: It is worth noting that solving Problem (16) yields a safe approximate solution to Problem (8), while solving Problem (18) or Problem (20) yields an approximate, but not necessarily safe, solution to Problem (8). This is because we drop the higher-order perturbation terms from the SNR expression.
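As an illustration of the randomization step mentioned in Remark 1, the following is a minimal sketch of a generic Gaussian randomization procedure (our own simplified version, not the exact Algorithm 1 of [32]; `objective` and `is_feasible` are caller-supplied placeholders encoding the power objective and the outage-related feasibility check):

```python
import numpy as np

def gaussian_randomization(W, objective, is_feasible, n_rand=1000, seed=0):
    # Draw candidate AF vectors w ~ CN(0, W) from the SDR solution W and
    # return the feasible candidate with the smallest objective value.
    rng = np.random.default_rng(seed)
    L = W.shape[0]
    vals, vecs = np.linalg.eigh(W)                       # W is Hermitian PSD
    V = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None)))
    best_w, best_val = None, np.inf
    for _ in range(n_rand):
        z = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
        w = V @ z                                        # E[w w^H] = W
        if is_feasible(w):
            val = objective(w)
            if val < best_val:
                best_w, best_val = w, val
    return best_w
```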
## V A Relative Tightness Analysis

When we tackle the second-order approximation of the chance constraint, i.e., the constraint $\Pr(f_{quad}({\bm{W}},{\bm{x}},{\bm{y}})\geq 0)\leq\rho$ in (17), we resort to two types of restriction approaches. The first is the second-order moment inequality approach in (18) and the other is the Bernstein-type inequality approach in (20). In this section, we provide a theoretical relative tightness result for the two approaches. To proceed, we consider a general Gaussian quadratic polynomial

$\displaystyle Q({\bm{\xi}})$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{m}\xi_{i}a_{i}+\sum_{i,j=1}^{m}\xi_{i}\xi_{j}a_{ij}={\bm{\xi}}^{T}{\bm{A}}{\bm{\xi}}+{\bm{a}}^{T}{\bm{\xi}},$

where ${\bm{\xi}}\sim\mathcal{N}(\mathbf{0},\bm{I})$. Then, our tightness result can be summarized in the following theorem:

###### Theorem 2.

For the chance constraint

$\Pr(Q({\bm{\xi}})\geq t)\leq\rho,$ (21)

consider its two safe approximations:

* • Moment inequality-based safe approximation: $t\geq c(\rho)\sqrt{{\mathbb{E}}\left[Q({\bm{\xi}})^{2}\right]}.$

* • Bernstein-type inequality-based safe approximation: $\displaystyle\displaystyle-{\rm Tr}({{\bm{A}}})-2\sqrt{{\rm ln}(1/\rho)}\cdot\delta+2{\rm ln}(\rho)\cdot\lambda+t\geq 0,$ $\displaystyle\sqrt{\|{{\bm{A}}}\|^{2}_{F}+\frac{1}{2}\|{\bm{a}}\|^{2}}\leq\delta,$ $\displaystyle\lambda{\bm{I}}-{{\bm{A}}}\succeq{\bm{0}},$ $\displaystyle\lambda\geq 0.$

Then, for any $\rho\in(\exp(-8),0.00045)$, the moment inequality-based safe approximation is always more conservative than the Bernstein-type inequality-based safe approximation.

###### Proof.

We assume first that $\rho>\exp(-8)$ in the chance constraint (21). Using the fact that ${\mathbb{E}}\left[\xi_{i}\xi_{j}\right]=\delta_{ij}$, ${\mathbb{E}}\left[\xi_{i}\xi_{j}\xi_{k}\right]=0$, and

$\displaystyle{\mathbb{E}}\left[\xi_{i}\xi_{j}\xi_{k}\xi_{l}\right]$ $\displaystyle=$ $\displaystyle\left\\{\begin{array}[]{c@{\quad}l}3&\mbox{if }i=j=k=l,\\\ 1&\mbox{if the indices are matched in two distinct pairs},\\\ 0&\mbox{otherwise},\end{array}\right.$

we can compute (with ${\bm{\eta}}$ denoting the vector of monomials of $Q$ and ${\bm{v}}$ the corresponding coefficient vector)

$\displaystyle{\mathbb{E}}\left[Q({\bm{\xi}})^{2}\right]$ $\displaystyle=$ $\displaystyle{\bm{v}}^{T}{\mathbb{E}}\left[{\bm{\eta}}{\bm{\eta}}^{T}\right]{\bm{v}}$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{m}a_{i}^{2}+3\sum_{i=1}^{m}a_{ii}^{2}+\sum_{1\leq i\not=j\leq m}\left(a_{ii}a_{jj}+a_{ij}^{2}\right)$ $\displaystyle=$ $\displaystyle\|{\bm{a}}\|_{2}^{2}+\|{\bm{A}}\|_{F}^{2}+(\mbox{tr}({\bm{A}}))^{2}+\sum_{i=1}^{m}a_{ii}^{2}.$

Then, following the moment inequality-based approach, we have a safe tractable approximation of (21):

$\displaystyle t$ $\displaystyle\geq\sqrt{\frac{{\mathbb{E}}\left[Q({\bm{\xi}})^{2}\right]}{\rho}}$ (23) $\displaystyle=\left(\frac{\|{\bm{a}}\|_{2}^{2}+\|{\bm{A}}\|_{F}^{2}+(\mbox{tr}({\bm{A}}))^{2}+\sum_{i=1}^{m}a_{ii}^{2}}{\rho}\right)^{1/2}.$

On the other hand, the Bernstein-type inequality-based approach yields the following safe tractable approximation of (21):

$t\geq\mbox{tr}({\bm{A}})+2\sqrt{\ln\frac{1}{\rho}}\sqrt{\|{\bm{A}}\|_{F}^{2}+\frac{1}{2}\|{\bm{a}}\|_{2}^{2}}+\left(2\ln\frac{1}{\rho}\right)s^{+}({\bm{A}}),$ (24)

where $s^{+}({\bm{A}})=\max\\{\lambda_{\max}({\bm{A}}),0\\}$. We claim that for sufficiently small $\rho>\exp(-8)$, every feasible solution to (23) is feasible for (24), which means that (23) is more conservative than (24).
Indeed, if (23) is satisfied, then

$\displaystyle t$ $\displaystyle\geq\frac{1}{\sqrt{\rho}}\cdot\sqrt{\|{\bm{a}}\|_{2}^{2}+\|{\bm{A}}\|_{F}^{2}+(\mbox{tr}({\bm{A}}))^{2}}$ $\displaystyle\geq\frac{1}{\sqrt{3\rho}}\left(\|{\bm{a}}\|_{2}+\|{\bm{A}}\|_{F}+|\mbox{tr}({\bm{A}})|\right)$ (25) $\displaystyle\geq\frac{1}{\sqrt{3\rho}}\left(\left[\sqrt{\frac{1}{16}\|{\bm{A}}\|_{F}^{2}+\|{\bm{a}}\|_{2}^{2}}+\frac{3}{4}\|{\bm{A}}\|_{F}\right]+|\mbox{tr}({\bm{A}})|\right)$ (26) $\displaystyle\geq\mbox{tr}({\bm{A}})+\frac{1}{4\sqrt{3\rho}}\sqrt{\|{\bm{A}}\|_{F}^{2}+\frac{1}{2}\|{\bm{a}}\|_{2}^{2}}+\frac{\sqrt{3}}{4\sqrt{\rho}}s^{+}({\bm{A}}),$ (27)

where (25) and (26) follow from the fact that for any ${\bm{x}}\in\mathbb{R}^{n}$,

$\|{\bm{x}}\|_{1}\geq\|{\bm{x}}\|_{2}\geq\frac{1}{\sqrt{n}}\|{\bm{x}}\|_{1},$

and (27) holds as long as $\rho<1/3$ (note that $s^{+}({\bm{A}})\leq\|{\bm{A}}\|_{F}$). Now, the proof of the claim will be complete if

$\frac{1}{4\sqrt{3\rho}}\geq 2\sqrt{\ln\frac{1}{\rho}}\quad\mbox{and}\quad\frac{\sqrt{3}}{4\sqrt{\rho}}\geq 2\ln\frac{1}{\rho}.$

It can be verified that the above inequalities hold if $\rho<0.00045$. Hence, (23) is more conservative than (24) when $\rho\in(\exp(-8),0.00045)$ (note that $\exp(-8)\approx 0.0003$). This completes the proof. ∎

It is worth remarking that a general tightness result for the moment inequality-based and Bernstein-type inequality-based safe approximations is still difficult to obtain, while Theorem 2 provides a provable result in a small region of $\rho$. In the next section, we will provide numerical validations via simulation results.

Figure 2: Minimum power required versus the SNR threshold, for $\epsilon^{2}=\eta^{2}=0.002$ and SNR outage percentage $\rho=0.1$.

Figure 3: Minimum power required versus the SNR threshold, for $\epsilon^{2}=\eta^{2}=0.06$ and SNR outage percentage $\rho=0.1$.

Figure 4: Histogram of the SNR satisfaction probability, for $\epsilon^{2}=\eta^{2}=0.002$, SNR outage percentage $\rho=0.1$, and SNR threshold $\gamma=18$dB.

Figure 5: Histogram of the SNR satisfaction probability, for $\epsilon^{2}=\eta^{2}=0.06$, SNR outage percentage $\rho=0.1$, and SNR threshold $\gamma=12$dB.

## VI Numerical Simulations

In this section, we provide numerical simulations to compare four different AF weight designs for our target problem, namely, the non-robust (NR) AF weight, the fourth-order moment inequality-based (M4) AF weight, the second-order moment inequality-based (M2) AF weight, and the Bernstein-type inequality-based (B2) AF weight. The setup of the experiments is as follows. The number of relays is set to $L=4$; the channels are generated independently by $\bar{\bm{f}}\sim\mathcal{CN}(0,{\bm{I}})$ and $\bar{\bm{g}}\sim\mathcal{CN}(0,{\bm{I}})$; the channel errors are generated independently by $\Delta{\bm{f}}\sim\mathcal{CN}(0,\epsilon^{2}{\bm{I}})$ and $\Delta{\bm{g}}\sim\mathcal{CN}(0,\eta^{2}{\bm{I}})$, where the variances are specified in the sequel; the noise power at each relay is set to $\sigma_{\ell}^{2}=0.25,\forall\ell$; the noise power at the receiver is $\sigma_{v}^{2}=0.25$; and the outage probability is denoted by $\rho$. In this paper, we take $\rho=0.1$ to illustrate the performance of each scheme. Also, if the $k$-th largest eigenvalue of the SDR solution is $10^{4}$ times larger than the $(k+1)$-st largest eigenvalue, then the SDR solution is considered to be of rank $k$.

### VI-A Comparison among the M4, M2, and B2 Approaches
In Fig. 2, we present the averaged minimum power budget needed to satisfy the outage constraint as the SNR threshold varies from $3$dB to $18$dB, with $\rho=0.1$ and $\epsilon^{2}=\eta^{2}=0.002$. Specifically, we compare the results for the non-robust and robust cases under the different design approaches. In Fig. 2, we may consider that the NR design discards all perturbation terms, and thus its required power budget serves as a lower bound for all the robust designs. We find that the M4 and M2 approaches require similar minimum power budgets, while the B2 and NR approaches require a slightly smaller power budget. This is because the channel errors here are rather small and all the robust designs can easily handle the uncertainties in this case. This implies that the higher-order perturbation terms do not have much effect on the SNRs if the error is small. Despite the similar power budgets, we can still see differences in tightness among the four design approaches. In Fig. 4, we test the outage behavior of each approach by generating $1000$ complex Gaussian channel perturbations with the specified variance and evaluating the SNR satisfaction probability for each channel realization (herein we calculate the SNR via (9)). For each approach, we generate $100$ channel realizations and then pick those realizations that are feasible for all methods to get the corresponding histograms in Fig. 4. As shown in Fig. 4, the SNR satisfaction percentage of NR is the lowest, basically near $50\%$, which implies that the NR design is not reliable in general. On the contrary, all robust designs are very conservative and almost always satisfy the SNR constraint. This reveals that both the moment inequality-based and Bernstein-type inequality-based schemes are robust against channel errors. However, it is important to note from the histogram that, compared to the M4 and M2 designs, the B2 design sometimes violates the SNR constraint. This implies that the moment inequality-based approaches are more conservative than the Bernstein-type inequality-based approach. This is consistent with our relative tightness analysis in Section V, although the outage rate $\rho$ is not within the target region.

In Fig. 3 and Fig. 5, we increase the variances of the channel errors to $0.06$. Since in this case the channel uncertainty is significantly enlarged, we can see an obvious power budget gap among the different strategies. We find that M4 requires the largest power to support the robust design, since the fourth-order moment inequality is the strictest. B2 requires a relatively lower power budget than M2 and M4, which implies that the moment inequality-based approaches are more conservative than the Bernstein-type inequality-based approach. This is confirmed by the SNR satisfaction histogram (herein we calculate the SNR via (9)). The results in Fig. 5 again demonstrate the robustness of the moment inequality-based and Bernstein-type inequality-based approaches, as the SNR satisfaction percentages of B2, M2, and M4 all exceed our target threshold $0.9$. Actually, in this setting, the M2 and M4 designs always satisfy the SNR constraint, while the B2 design sometimes violates it. This also confirms that the B2 approach is less conservative than the M2 and M4 approaches. To further investigate the conservatism, we increase the variance of the channel uncertainty and compare the feasibility percentage, rank-one percentage, and randomization feasibility percentage in Tables III and IV.
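Before turning to the tables, we note that the SNR satisfaction probability used in the histograms above is a straightforward Monte Carlo estimate. A minimal sketch is given below (our own illustration; it reuses the `build_matrices` and `snr` helpers sketched in Section II, and the argument names are our placeholders):

```python
import numpy as np

def satisfaction_rate(w, f_bar, g_bar, eps, eta, gamma,
                      Pt, Sigma, sigma_v2, n_mc=1000, seed=0):
    # Fraction of perturbation draws with SNR >= gamma, where the channel
    # errors follow Delta f ~ CN(0, eps^2 I) and Delta g ~ CN(0, eta^2 I).
    rng = np.random.default_rng(seed)
    L = len(f_bar)
    hits = 0
    for _ in range(n_mc):
        df = eps * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
        dg = eta * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
        A, C, _ = build_matrices(f_bar + df, g_bar + dg, Pt, Sigma, sigma_v2)
        if snr(w, A, C) >= gamma:
            hits += 1
    return hits / n_mc
```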
In Table III, we have $\rho=0.1$ and $\epsilon^{2}=\eta^{2}=0.06$; in Table IV, we have $\rho=0.1$ and $\epsilon^{2}=\eta^{2}=0.08$. Herein, “Feasibility%” denotes the feasibility rate of the design problems (16), (18) and (20); “Rank $k\%$” denotes the percentage of rank-$k$ solutions of the original design problem; “Feasibility of Rand. in Rank $k$ Cases $\%$” denotes the feasibility rate of the randomization algorithm for rank-$k$ solutions, where we set the number of randomizations to be $1000$. Note that “NaN” in the table indicates that the percentage of a certain rank is $0$, and thus there are no randomization cases for that rank. The data in both tables reveal the same fact: M2 is less conservative than M4, and B2 is less conservative than both M2 and M4. This is consistent with our analytical results in Section V, although the outage rate $\rho$ for these cases is not within the target region in Theorem 2.

### VI-B Comparison between M4 and M2 Approaches under Mismatched Noise and Perturbations

To demonstrate that the M4 design is more robust than the M2 one, we set up three types of experiments. The first one follows the same setting as that in Fig. 4, where we have $\sigma_{\ell}^{2}=\sigma_{v}^{2}=0.25$, $\epsilon^{2}=\eta^{2}=0.002$, $\rho=0.1$, and $\gamma=18$dB. We generate $1000$ and $10000$ channel realizations, respectively. For each channel realization, we solve M4 and M2 respectively to obtain the AF weights. For the 1000 (resp. 10000) channel realizations generated, 861 (resp. 8725) of them are feasible for both the original design problems (16) and (18), and their optimal solutions are rank-one. Then, for each feasible channel realization, we generate $10000$ perturbation realizations to calculate the actual SNR satisfaction rate $\hat{\rho}$ under the M4 and M2 AF weights (we calculate the SNR by (9)), where $\hat{\rho}$ is the ratio of the SNR-satisfying cases to the number of perturbation realizations (10000). In Table V, we show the number of channel realizations that fall in different intervals of $\hat{\rho}$. Clearly, both the M4 and M2 approaches provide a good outage rate, which is much less than $0.1$. However, we can still see that M4 is more restrictive than M2, as the former has more channel realizations that exhibit higher $\hat{\rho}$ under the given perturbation realizations. In the second experiment, we consider the scenario in which the actual noise levels $\hat{\sigma}_{{\ell}}^{2}$ and $\hat{\sigma}_{{v}}^{2}$, or the actual perturbations $\hat{\epsilon}$ and $\hat{\eta}$, do not match the prior information in the original design problem (7). That is, when we solve (7), we set $\sigma_{\ell}^{2}=\sigma_{v}^{2}=0.25$, $\epsilon^{2}=\eta^{2}=0.002$, $\rho=0.1$, and $\gamma=18$dB, while in practice, the noise levels change to $\hat{\sigma}_{\ell}^{2}$ and $\hat{\sigma}_{v}^{2}$, or the perturbation levels change to $\hat{\epsilon}^{2}$ and $\hat{\eta}^{2}$. Under these mismatches, the SNR satisfaction rates and the corresponding numbers of channel realizations are shown in Table VI. We find that under M4, there are more channel realizations that exhibit higher $\hat{\rho}$ under the given mismatched realizations, which means that M4 is more robust than M2. To further corroborate the robustness of M4, in Table VII, we investigate the critical point at which the mismatch of the noise or the perturbation causes an outage. To proceed, we set $\rho=0.1,\gamma=18$dB and solve the M4 and M2 design problems.
Among $1000$ channel realizations, there are in total $861$ feasible cases for both M4 and M2. From the table, we can clearly see that as the mismatch increases, at $\hat{\sigma}_{\ell}^{2}=\hat{\sigma}_{v}^{2}=0.265$ or $\hat{\epsilon}^{2}=\hat{\eta}^{2}=0.0032$, M2 causes an outage in one out of the $861$ realizations, while M4 does not cause an outage in any of the $861$ realizations. This implies that M4 is more robust than M2, as the former takes into account the exact SNR expression by keeping all higher-order perturbations.

TABLE III: Feasibility rate, rank-$k$ rate and feasibility rate of Randomization (Rand.): $\sigma_{\ell}^{2}=0.25,\forall\ell$, $\sigma_{v}^{2}=0.25$, $\epsilon^{2}=\eta^{2}=0.06$, $\rho=0.1$.

SNR in dB | $\gamma=3$ | $\gamma=6$ | $\gamma=9$ | $\gamma=12$ | $\gamma=15$
---|---|---|---|---|---
Method | M4 | M2 | B2 | M4 | M2 | B2 | M4 | M2 | B2 | M4 | M2 | B2 | M4 | M2 | B2
Feasibility% | 0.45 | 0.50 | 0.96 | 0.42 | 0.47 | 0.92 | 0.36 | 0.40 | 0.82 | 0.28 | 0.29 | 0.78 | 0.22 | 0.24 | 0.63
Rank $1$% | 1.000 | 1.000 | 0.844 | 1.000 | 1.000 | 0.891 | 0.972 | 1.000 | 0.951 | 1.000 | 1.000 | 0.961 | 1.000 | 1.000 | 0.984
Rank $2$% | 0 | 0 | 0.031 | 0 | 0 | 0.022 | 0.028 | 0 | 0.012 | 0 | 0 | 0.026 | 0 | 0 | 0.016
Rank $3$% | 0 | 0 | 0.010 | 0 | 0 | 0.065 | 0 | 0 | 0.037 | 0 | 0 | 0.013 | 0 | 0 | 0
Rank $4$% | 0 | 0 | 0.115 | 0 | 0 | 0.022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Feasibility of Rand. in Rank $2$ Cases% | NaN | NaN | 0.0507 | NaN | NaN | 0.0280 | 1.0000 | NaN | 0.0750 | NaN | NaN | 0.0345 | NaN | NaN | 0.004
Feasibility of Rand. in Rank $3$ Cases% | NaN | NaN | 0.0020 | NaN | NaN | 0.0005 | NaN | NaN | 0 | NaN | NaN | 0 | NaN | NaN | NaN
Feasibility of Rand. in Rank $4$ Cases% | NaN | NaN | $9\times 10^{-5}$ | NaN | NaN | 0 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN

TABLE IV: Feasibility rate, rank-$k$ rate and feasibility rate of Randomization (Rand.): $\sigma_{\ell}^{2}=0.25,\forall\ell$, $\sigma_{v}^{2}=0.25$, $\epsilon^{2}=\eta^{2}=0.08$, $\rho=0.1$.

SNR in dB | $\gamma=3$ | $\gamma=6$ | $\gamma=9$ | $\gamma=12$ | $\gamma=15$
---|---|---|---|---|---
Method | M4 | M2 | B2 | M4 | M2 | B2 | M4 | M2 | B2 | M4 | M2 | B2 | M4 | M2 | B2
Feasibility% | 0.17 | 0.19 | 0.91 | 0.16 | 0.19 | 0.81 | 0.16 | 0.16 | 0.72 | 0.10 | 0.13 | 0.70 | 0.06 | 0.06 | 0.44
Rank $1$% | 1.000 | 1.000 | 0.714 | 1.000 | 1.000 | 0.840 | 1.00 | 1.00 | 0.833 | 1.000 | 1.000 | 0.871 | 1.000 | 1.000 | 0.864
Rank $2$% | 0 | 0 | 0.077 | 0 | 0 | 0.074 | 0 | 0 | 0.115 | 0 | 0 | 0.086 | 0 | 0 | 0.114
Rank $3$% | 0 | 0 | 0.033 | 0 | 0 | 0.025 | 0 | 0 | 0.039 | 0 | 0 | 0.029 | 0 | 0 | 0
Rank $4$% | 0 | 0 | 0.176 | 0 | 0 | 0.061 | 0 | 0 | 0.013 | 0 | 0 | 0.014 | 0 | 0 | 0.022
Feasibility of Rand. in Rank $2$ Cases% | NaN | NaN | 0.0733 | NaN | NaN | 0.0583 | NaN | NaN | 0.0453 | NaN | NaN | 0.0187 | NaN | NaN | 0.004
Feasibility of Rand. in Rank $3$ Cases% | NaN | NaN | 0.0097 | NaN | NaN | 0.0020 | NaN | NaN | 0.0033 | NaN | NaN | 0.0010 | NaN | NaN | NaN
Feasibility of Rand. in Rank $4$ Cases% | NaN | NaN | 0.0003 | NaN | NaN | 0.0008 | NaN | NaN | 0.0020 | NaN | NaN | 0 | NaN | NaN | 0.2320

TABLE V: SNR Satisfaction rates for different channel realizations: $\sigma_{\ell}^{2}=0.25,\forall\ell$, $\sigma_{v}^{2}=0.25$, $\epsilon^{2}=\eta^{2}=0.002$, $\rho=0.1$; $\gamma=18$dB.

Number of Channel Realizations | $1000$ | $10000$
---|---|---
SNR Satisfaction Rate ($\hat{\rho}$) \ Method | M4 | M2 | M4 | M2
$1.000\geq\hat{\rho}>0.999$ | 852 | 850 | 8616 | 8597
$0.999\geq\hat{\rho}>0.998$ | 7 | 8 | 86 | 95
$0.998\geq\hat{\rho}>0.997$ | 2 | 2 | 8 | 7
$0.997\geq\hat{\rho}>0.996$ | 0 | 0 | 3 | 7
$0.996\geq\hat{\rho}>0.995$ | 0 | 1 | 1 | 3
$0.995\geq\hat{\rho}>0.994$ | 0 | 0 | 2 | 3
$0.994\geq\hat{\rho}>0.993$ | 0 | 0 | 1 | 3
$0.993\geq\hat{\rho}>0.992$ | 0 | 0 | 1 | 2
$0.992\geq\hat{\rho}>0.991$ | 0 | 0 | 0 | 1
$0.991\geq\hat{\rho}>0.990$ | 0 | 0 | 1 | 0
$0.990\geq\hat{\rho}\geq 0$ | 0 | 0 | 6 | 7
Number of Feasible Channel Realizations | $861$ | $8725$

TABLE VI: SNR Satisfaction rates under noise and perturbation mismatches: $\rho=0.1$, $\gamma=18$dB, 1000 channel realizations. The first M4/M2 column pair is the original (no mismatch) setting, the next three pairs are noise mismatches, and the last three pairs are perturbation mismatches.

Noise ($\hat{\sigma}_{\ell}^{2}=\hat{\sigma}_{v}^{2}$) | 0.25 | 0.252 | 0.256 | 0.258 | 0.25 | 0.25 | 0.25
Channel Error ($\hat{\eta}^{2}=\hat{\epsilon}^{2}$) | 0.002 | 0.002 | 0.002 | 0.002 | 0.0022 | 0.0024 | 0.0028
Method | M4 | M2 | M4 | M2 | M4 | M2 | M4 | M2 | M4 | M2 | M4 | M2 | M4 | M2
$1.000\geq\hat{\rho}>0.999$ | 852 | 850 | 703 | 694 | 65 | 59 | 31 | 26 | 306 | 294 | 1 | 0 | 1 | 0
$0.999\geq\hat{\rho}>0.998$ | 7 | 8 | 156 | 165 | 285 | 283 | 85 | 80 | 538 | 550 | 130 | 122 | 0 | 1
$0.998\geq\hat{\rho}>0.997$ | 2 | 2 | 1 | 1 | 343 | 346 | 163 | 158 | 12 | 12 | 508 | 503 | 0 | 0
$0.997\geq\hat{\rho}>0.996$ | 0 | 0 | 0 | 0 | 127 | 131 | 203 | 206 | 2 | 1 | 206 | 215 | 2 | 1
$0.996\geq\hat{\rho}>0.995$ | 0 | 1 | 1 | 0 | 33 | 33 | 182 | 181 | 1 | 1 | 11 | 13 | 7 | 6
$0.995\geq\hat{\rho}>0.994$ | 0 | 0 | 0 | 1 | 4 | 5 | 111 | 119 | 1 | 1 | 0 | 2 | 19 | 14
$0.994\geq\hat{\rho}>0.993$ | 0 | 0 | 0 | 0 | 2 | 2 | 36 | 37 | 0 | 1 | 1 | 0 | 74 | 69
$0.993\geq\hat{\rho}>0.992$ | 0 | 0 | 0 | 0 | 1 | 1 | 25 | 27 | 0 | 0 | 1 | 1 | 182 | 183
$0.992\geq\hat{\rho}>0.991$ | 0 | 0 | 0 | 0 | 1 | 1 | 15 | 17 | 0 | 0 | 0 | 2 | 293 | 292
$0.991\geq\hat{\rho}>0.990$ | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 4 | 0 | 0 | 0 | 0 | 195 | 200
$0.990\geq\hat{\rho}\geq 0$ | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 1 | 1 | 3 | 3 | 88 | 95
Number of Feasible Channel Realizations | $861$

TABLE VII: Noise mismatch and perturbation mismatch causing an outage: $\rho=0.1$, $\gamma=18$dB, 1000 channel realizations.
The first M4/M2 column pair is the original (no mismatch) setting, the second is the noise mismatch, and the third is the perturbation mismatch.

Noise ($\hat{\sigma}_{\ell}^{2}=\hat{\sigma}_{v}^{2}$) | 0.25 | 0.265 | 0.25
Channel Error ($\hat{\eta}^{2}=\hat{\epsilon}^{2}$) | 0.002 | 0.002 | 0.0032
Method | M4 | M2 | M4 | M2 | M4 | M2
$1.000\geq\hat{\rho}\geq 0.900$ | 861 | 861 | 861 | 860 | 861 | 860
$0.900>\hat{\rho}\geq 0$ (Outage) | 0 | 0 | 0 | 1 | 0 | 1
Number of Feasible Channel Realizations | $861$

### VI-C Relative Tightness Verification for M2 and B2 Approaches

Lastly, we set the outage probability $\rho=0.00044$ to verify the conclusion in Theorem 2, i.e., “_for any $\rho\in(\exp(-8),0.00045)$, the moment inequality-based safe approximation is always more conservative than the Bernstein-type inequality-based safe approximation_.” We also set $\sigma_{\ell}^{2}=\sigma_{v}^{2}=0.25$ and $\epsilon^{2}=\eta^{2}=0.0005$. The results are presented in Fig. 6, Fig. 7, and Table VIII. We find that M2 requires more power than B2, which implies that the moment inequality-based approaches are more conservative than the Bernstein-type inequality-based approach. The SNR satisfaction percentages of both B2 and M2 exceed our target threshold $1-0.00044$ (herein we calculate the SNR by (19)), with M2 always providing $100\%$ SNR satisfaction and B2 providing a slightly lower, though still sufficient, SNR satisfaction percentage. It is interesting to see that B2 requires a similar level of power as the non-robust case, while its SNR satisfaction is significantly better than that of the latter. This shows the necessity of the robust design. Table VIII is even more interesting, as the B2 design is always feasible while the M2 design is rarely feasible. Note that the rank-one rate is $100\%$ for both B2 and M2. From this experiment, we can clearly conclude that the moment inequality-based approach is more conservative than the Bernstein-type inequality-based approach, which is consistent with our Theorem 2.

Figure 6: Minimum power required versus the SNR threshold, for $\epsilon^{2}=\eta^{2}=0.0005$; SNR outage percentage $\rho=0.00044$.

Figure 7: Histogram of the SNR satisfaction probability, for $\epsilon^{2}=\eta^{2}=0.0005$; SNR outage percentage $\rho=0.00044$; SNR threshold $\gamma=12$dB.

TABLE VIII: Feasibility rate and rank-one rate for $\sigma_{\ell}^{2}=0.25,\forall\ell$, $\sigma_{v}^{2}=0.25$, $\epsilon^{2}=\eta^{2}=0.0005$; $\rho=0.00044$.

SNR in dB | $\gamma=3$ | $\gamma=6$ | $\gamma=9$ | $\gamma=12$ | $\gamma=15$
---|---|---|---|---|---
Method | M2 | B2 | M2 | B2 | M2 | B2 | M2 | B2 | M2 | B2
Feasibility% | 0.07 | 1.00 | 0.06 | 1.00 | 0.06 | 1.00 | 0.06 | 1.00 | 0.04 | 1.00
Rank One% | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00

## VII Conclusions

In this paper, we studied the robust design problem for two-hop one-way relay beamforming. Specifically, we considered the scenario where both the transmitter-to-relay and relay-to-receiver links are subject to errors. This scenario is difficult and seldom studied in the literature, as it involves chance constraints with quartic perturbations. We provided different reformulations of the chance-constrained robust design problem and further analyzed the relative tightness of the different reformulations. Numerical results further confirmed the superiority of the proposed robust design. The quartic perturbation-based outage-constrained robust design is indeed more conservative.
Nevertheless, by taking into account the higher-order perturbations, the resulting design is more robust against mismatches of the prior distributional information. The SNR satisfaction rate and rank feasibility tables verified the tightness results. In the future, the case of many transceiver pairs could be considered as a non-trivial extension of this work.

## Appendix A

### A-A Proof of Theorem 1

Given $\bar{f}({\bm{x}},{\bm{\xi}})=f({\bm{x}},{\bm{\xi}})+a_{0}({\bm{x}})$, by assumption, for each realization of ${\bm{\xi}}$, the function ${\bm{x}}\mapsto\bar{f}({\bm{x}},{\bm{\xi}})$ is affine in ${\bm{x}}\in\mathbb{C}^{n}$. This implies that ${\bm{x}}\mapsto\bar{f}^{2}({\bm{x}},{\bm{\xi}})$ is a non-negative homogeneous quadratic polynomial in ${\bm{x}}\in\mathbb{C}^{n}$. This establishes (a) in Theorem 1. To prove Theorem 1(b), we need the following lemma.

###### Lemma 1.

(cf. [33, Theorem 5.10]) For all $q\geq 2$, ${\mathbb{E}}\left[|\bar{f}({\bm{x}},{\bm{\xi}})|^{q}\right]^{1/q}\leq(q-1)^{2}{\mathbb{E}}\left[|\bar{f}({\bm{x}},{\bm{\xi}})|^{2}\right]^{1/2}.$

To prove Theorem 1(b), since $\bar{f}({\bm{x}},{\bm{\xi}})^{2}={\bm{v}}^{T}({\bm{x}}){\bm{U}}({\bm{\xi}}){\bm{v}}({\bm{x}})$, it follows that
${\mathbb{E}}\left[|\bar{f}({\bm{x}},{\bm{\xi}})|^{2}\right]\geq{\mathbb{E}}\left[\bar{f}({\bm{x}},{\bm{\xi}})^{2}\right]={\bm{v}}^{T}({\bm{x}})\,{\mathbb{E}}\left[{\bm{U}}({\bm{\xi}})\right]{\bm{v}}({\bm{x}})={\bm{v}}^{T}({\bm{x}}){\bm{U}}{\bm{v}}({\bm{x}}),$
where ${\bm{U}}={\mathbb{E}}\left[{\bm{U}}({\bm{\xi}})\right]$ is a Hermitian positive semidefinite matrix, which can be computed explicitly, as each entry of ${\bm{U}}({\bm{\xi}})$ involves only the expectation of a certain product of standard Gaussian random variables. For ease of notation, we let $\bar{q}=q(\rho)$, where $q(\rho)$ is defined in (14). By Lemma 1 and Markov’s inequality, for any $\bar{q}\geq 2$, we have
$\Pr(|\bar{f}({\bm{x}},{\bm{\xi}})|\geq t)\leq\frac{{\mathbb{E}}\left[|\bar{f}({\bm{x}},{\bm{\xi}})|^{\bar{q}}\right]}{t^{\bar{q}}}\leq\frac{({\bar{q}}-1)^{2{\bar{q}}}\cdot{\mathbb{E}}\left[|\bar{f}({\bm{x}},{\bm{\xi}})|^{2}\right]^{{\bar{q}}/2}}{t^{\bar{q}}}\leq\left(\frac{({\bar{q}}-1)^{2}\cdot\|{\bm{U}}^{1/2}{\bm{v}}({\bm{x}})\|_{2}}{t}\right)^{\bar{q}}.$
Thus, whenever $t\geq c(\rho)\|{\bm{U}}^{1/2}{\bm{v}}({\bm{x}})\|_{2}$, where $c(\rho)$ is defined in (13), we have $\Pr(|\bar{f}({\bm{x}},{\bm{\xi}})|\geq t)\leq\rho$. Whenever the second-order cone constraint in (15) holds, we have
$\Pr(f({\bm{x}},{\bm{\xi}})\geq 0)=\Pr(\bar{f}({\bm{x}},{\bm{\xi}})\geq a_{0}({\bm{x}}))\leq\Pr(|\bar{f}({\bm{x}},{\bm{\xi}})|\geq a_{0}({\bm{x}}))\leq\Pr(|\bar{f}({\bm{x}},{\bm{\xi}})|\geq t)\leq\rho.$
This implies that the (complex) second-order cone constraint (15) is a safe tractable approximation of (11), as desired.

## References

* [1] “IEEE standard for local and metropolitan area networks part 16: Air interface for broadband wireless access systems amendment 1: Multihop relay specification,” _IEEE Std 802.16j-2009 (Amendment to IEEE Std 802.16-2009)_ , pp. 1–290, 2009. * [2] X. Zhang, X. Tao, Q. Cui, and J.
Bai, “Intra-cell and inter-cell interference-constrained D2D communication underlaying cellular networks,” _Electronics Letters_ , vol. 51, no. 14, pp. 1117–1119, 2015. * [3] C. Tian, Z. Qian, X. Wang, and L. Hu, “Analysis of joint relay selection and resource allocation scheme for relay-aided D2D communication networks,” _IEEE Access_ , vol. 7, pp. 142 715–142 725, 2019. * [4] S. Gong, X. Huang, J. Xu, W. Liu, P. Wang, and D. Niyato, “Backscatter relay communications powered by wireless energy beamforming,” _IEEE Transactions on Communications_ , vol. 66, no. 7, pp. 3187–3200, 2018. * [5] W. Roh, J.-Y. Seol, J. Park, B. Lee, J. Lee, Y. Kim, J. Cho, K. Cheun, and F. Aryanfar, “Millimeter-wave beamforming as an enabling technology for 5G cellular communications: Theoretical feasibility and prototype results,” _IEEE communications magazine_ , vol. 52, no. 2, pp. 106–113, 2014. * [6] C. Cai and R. Qiu, “Energy-efficient cooperative two-hop amplify-and-forward relay protocol in cognitive radio networks,” _IET Communications_ , vol. 10, no. 16, pp. 2135–2142, 2016. * [7] M. Zhang, G. Zhang, S. Zhang, and Z. Bao, “An optimized resource allocation algorithm in cooperative relay cognitive radio networks,” in _2017 Signal Processing Symposium (SPSympo)_. IEEE, 2017, pp. 1–6. * [8] S. Fazeli-Dehkordy, S. Shahbazpanahi, and S. Gazor, “Multiple peer-to-peer communications using a network of relays,” _IEEE Transactions on Signal Processing_ , vol. 57, no. 8, pp. 3053–3062, 2009. * [9] O. Stein, “How to solve a semi-infinite optimization problem,” _European Journal of Operational Research_ , vol. 223, no. 2, pp. 312–320, 2012. * [10] J. B. Lasserre, “Tractable approximations of sets defined with quantifiers,” _Mathematical Programming_ , vol. 151, no. 2, pp. 507–527, 2015. * [11] G. Zheng, K.-K. Wong, A. Paulraj, and B. Ottersten, “Robust collaborative-relay beamforming,” _IEEE Transactions on Signal Processing_ , vol. 57, no. 8, pp. 3130–3143, 2009. * [12] A. Aziz, M. Zeng, J. Zhou, C. Georghiades, and S. Cui, “Robust beamforming with channel uncertainty for two-way relay networks,” in _2012 Proceedings of IEEE International Conference on Communications (ICC)_ , June 2012, pp. 3632–3636. * [13] K. Y. Wang, M. C. So, T. H. Chang, W. K. Ma, and C. Y. Chi, “Outage constrained robust transmit optimization for multiuser MISO downlinks: Tractable approximations by conic optimization,” _IEEE Transactions on Signal Processing_ , vol. 62, no. 21, pp. 5690–5705, 2014. * [14] L. Ni, X. Da, H. Hu, M. Zhang, and K. Cumanan, “Outage constrained robust secrecy energy efficiency maximization for EH cognitive radio networks,” _IEEE Wireless Communications Letters_ , vol. 9, no. 3, pp. 363–366, 2019\. * [15] L. Ni, X. Da, H. Hu, Y. Huang, R. Xu, and M. Zhang, “Outage constrained robust transmit design for secure cognitive radio with practical energy harvesting,” _IEEE Access_ , vol. 6, pp. 71 444–71 454, 2018. * [16] Y. Zhou, H. Zhou, F. Zhou, D. W. K. Ng, and R. Q. Hu, “Robust chance-constrained trajectory and transmit power optimization for UAV-enabled CR networks,” in _ICC 2020-2020 IEEE International Conference on Communications (ICC)_. IEEE, 2020, pp. 1–7. * [17] O. Yazar, M. F. Keskin, and S. Gezici, “Power efficient positioning for visible light systems via chance constrained optimization,” _IEEE Transactions on Aerospace and Electronic Systems_ , 2020. * [18] M. F. Keskin, A. D. Sezer, and S. 
Gezici, “Optimal and robust power allocation for visible light positioning systems under illumination constraints,” _IEEE Transactions on Communications_ , vol. 67, no. 1, pp. 527–542, 2019. * [19] B. Li, Z. Fei, Z. Chu, F. Zhou, K.-K. Wong, and P. Xiao, “Robust chance-constrained secure transmission for cognitive satellite–terrestrial networks,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 5, pp. 4208–4219, 2018. * [20] S. X. Wu, X. Ni, and A. M.-C. So, “A polynomial optimization approach for robust beamforming design in a device-to-device two-hop one-way relay network,” in _2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2016, pp. 3841–3845. * [21] S. Ma, A. M. So, and K. Yang, “Robust beamforming in two-way relay networks: Quartically perturbed chance constrained formulation and tractable approximation,” in _2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2014, pp. 2734–2738. * [22] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang, “Semidefinite relaxation of quadratic optimization problems,” _IEEE Signal Processing Magazine_ , vol. 27, no. 3, pp. 20–34, 2010. * [23] Y. Yan, W. Yang, B. Zhang, D. Guo, and G. Ding, “Outage constrained robust beamforming for sum rate maximization in multi-beam satellite systems,” _IEEE Communications Letters_ , vol. 24, no. 1, pp. 164–168, 2020. * [24] X. Zhang, J. Wang, C. Jiang, C. Yan, Y. Ren, and L. Hanzo, “Robust beamforming for multibeam satellite communication in the face of phase perturbations,” _IEEE Transactions on Vehicular Technology_ , vol. 68, no. 3, pp. 3043–3047, 2019. * [25] B. K. Chalise and L. Vandendorpe, “MIMO relay design for multipoint-to-multipoint communications with imperfect channel state information,” _IEEE Transactions on Signal Processing_ , vol. 57, no. 7, pp. 2785–2796, 2009. * [26] M. Tao and R. Wang, “Robust relay beamforming for two-way relay networks,” _IEEE Communications Letters_ , vol. 16, no. 7, pp. 1052–1055, 2012. * [27] A. Aziz, M. Zeng, J. Zhou, C. N. Georghiades, and S. Cui, “Robust beamforming with channel uncertainty for two-way relay networks,” in _2012 IEEE International Conference on Communications (ICC)_. IEEE, 2012, pp. 3632–3636. * [28] D. Ponukumati, F. Gao, and C. Xing, “Robust peer-to-peer relay beamforming: A probabilistic approach,” _IEEE Communications Letters_ , vol. 17, no. 2, pp. 305–308, 2013. * [29] S. Shi, Z. Wang, Z. He, and Z. Cheng, “Spectrally compatible waveform design for MIMO radar with ISL and PAPR constraints,” _IEEE Sensors Journal_ , vol. 20, no. 5, pp. 2368–2377, 2019. * [30] J. Jin, Y. R. Zheng, W. Chen, and C. Xiao, “Hybrid precoding for millimeter wave MIMO systems: A matrix factorization approach,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 5, pp. 3327–3339, 2018. * [31] Z. Wen, S. Wang, X. Liu, and J. Zou, “Joint relay–user beamforming design in a full-duplex two-way relay channel,” _IEEE Transactions on Vehicular Technology_ , vol. 66, no. 3, pp. 2874–2879, 2017. * [32] S. X. Wu, Q. Li, A. M.-C. So, and W.-K. Ma, “A stochastic beamformed amplify-and-forward scheme in a multigroup multicast MIMO relay network with per-antenna power constraints,” _IEEE Transactions on Wireless Communications_ , vol. 15, no. 7, pp. 4973–4986, July 2016. * [33] S. Janson, _Gaussian Hilbert Spaces_. Cambridge University Press, 1997, vol. 129.
This essay received an Honorable Mention in the 2018 Essay Competition of the Gravity Research Foundation

# Fermat’s principle in black-hole spacetimes

Shahar Hod

The Ruppin Academic Center, Emeq Hefer 40250, Israel

The Hadassah Institute, Jerusalem 91010, Israel

###### Abstract

Black-hole spacetimes are known to possess closed light rings. We here present a remarkably compact theorem which reveals the physically intriguing fact that these unique null circular geodesics provide the fastest way, as measured by asymptotic observers, to circle around spinning Kerr black holes.

Email: <EMAIL_ADDRESS>

Fermat’s principle, also known as the principle of least time [1, 2], asserts that among all possible null trajectories, the path taken by a ray of light between two given points A and B in a flat spacetime geometry is the path that minimizes the traveling time $T_{A\to B}$. This remarkably elegant principle implies, in particular, that the unique null trajectory taken by a ray of light between two given points is generally distinct from the straight line trajectory which minimizes the spatial distance $d_{AB}$ between these points. In the present Essay we would like to highlight an intriguing and closely related physical phenomenon which characterizes curved spacetime geometries. In particular, we here raise the physically interesting question: Among all possible closed paths that circle around a black hole in a curved spacetime, which path provides the fastest way, as measured by asymptotic observers, to circle the central black hole? We first note that, in flat spacetimes, the characteristic orbital period $T_{\odot}$ of a test particle that moves around a spatially compact object of radius $R$ is trivially bounded from below by the compact relation [3]
$T_{\odot}\geq T^{\text{flat}}_{\odot\text{min}}=2\pi R\ ,$ (1)
where the equality sign in (1) is attained by massless particles that circle the compact object on the shortest possible (tangential) trajectory with $r_{\text{fast}}=R$. It should be emphasized, however, that the simple lower bound (1) is not valid in realistic curved spacetimes. In particular, it does not take into account the important time dilation (red-shift) effect which is caused by the gravitational field of the central compact object [4]. In addition, the flat-space relation (1) does not take into account the well-known phenomenon of dragging of inertial frames by spinning compact objects in curved spacetimes [4]. As we shall explicitly show below, due to the influences of these two interesting physical effects, the shortest possible orbital period $T_{\odot}$ of a test particle around a central compact object, as measured by asymptotic observers, is larger than the naive flat-space estimate (1). In particular, we shall prove that, in generic curved spacetimes, the unique circular trajectory $r=r_{\text{fast}}$ that minimizes the traveling time $T_{\odot}$ around a central Kerr black hole is distinct from the tangential trajectory with $r=r_{\text{short}}$ which could minimize the traveling distance around the spinning black hole. The fastest circular orbit around a spinning Kerr black hole.— We shall analyze the physical and mathematical properties of equatorial circular trajectories around spinning Kerr black holes.
In Boyer-Lindquist coordinates $(t,r,\theta,\phi)$, the asymptotically flat black-hole spacetime can be described by the curved line element [4]
$\displaystyle ds^{2}=-{{\Delta}\over{\rho^{2}}}(dt-a\sin^{2}\theta d\phi)^{2}+{{\rho^{2}}\over{\Delta}}dr^{2}+\rho^{2}d\theta^{2}+{{\sin^{2}\theta}\over{\rho^{2}}}\big{[}adt-(r^{2}+a^{2})d\phi\big{]}^{2}\ ,$ (2)
where $M$ is the black-hole mass, $J\equiv Ma$ is its angular momentum, $\Delta\equiv r^{2}-2Mr+a^{2}$, and $\rho^{2}\equiv r^{2}+a^{2}\cos^{2}\theta$. The black-hole (event and inner) horizons are determined by the spatial zeros of the metric function $\Delta$:
$r_{\pm}=M\pm(M^{2}-a^{2})^{1/2}\ .$ (3)
We would like to identify the unique circular trajectory which minimizes the orbital period $T_{\odot}$, as measured by asymptotic observers, around the central black hole. We shall therefore assume the relation $v/c\to 1^{-}$ for the velocity of the orbiting test particle [5]. The corresponding radius-dependent orbital periods $T_{\odot}(r)$ of the test particles can easily be obtained from the characteristic black-hole curved line element (2) with $ds=dr=d\theta=0$ and $\Delta\phi=\pm 2\pi$ [6, 7, 8]. This yields the compact functional relation
$T_{\odot}(r)=2\pi\cdot{{\sqrt{r^{2}-2Mr+a^{2}}-{{2Ma}\over{r}}}\over{1-{{2M}\over{r}}}}\ $ (4)
for the orbital periods of co-rotating test particles around the central spinning black hole. The physically interesting co-rotating circular orbit with $r=r_{\text{fast}}$, which is characterized by the shortest possible orbital period $T_{\odot\text{min}}=\min_{r}\\{T_{\odot}(r)\\}$ around the central spinning black hole, is determined by the condition $dT_{\odot}/dr=0$ at $r=r_{\text{fast}}$. This yields the characteristic algebraic equation
$r^{2}-3Mr+2a^{2}+2a\sqrt{r^{2}-2Mr+a^{2}}=0\ \ \ \ \text{for}\ \ \ \ r=r_{\text{fast}}\ .$ (5)
Remarkably, this equation can be solved analytically to yield the simple functional relation
$r_{\text{fast}}=2M\cdot\big\{1+\cos[{2\over 3}\cos^{-1}(-{a/M})]\big\}$ (6)
for the unique orbital radius $r_{\text{fast}}(M,a)$ which characterizes the fastest co-rotating circular trajectory (the closed circular path with the shortest possible orbital period) around the central spinning Kerr black hole.
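As a quick sanity check, one can verify numerically (in the natural units $G=c=1$ of [3]) that the closed-form radius (6) annihilates the left-hand side of the algebraic equation (5); a short Python sketch:

```python
import numpy as np

def r_fast(M, a):
    """Eq. (6): radius of the fastest co-rotating circular orbit."""
    return 2.0 * M * (1.0 + np.cos((2.0 / 3.0) * np.arccos(-a / M)))

def eq5_residual(r, M, a):
    """Left-hand side of Eq. (5); it should vanish at r = r_fast."""
    return r**2 - 3.0*M*r + 2.0*a**2 + 2.0*a*np.sqrt(r**2 - 2.0*M*r + a**2)

M = 1.0
for a in (0.0, 0.5, 0.9, 1.0):
    r = r_fast(M, a)
    print(f"a/M = {a:.1f}: r_fast = {r:.6f} M, "
          f"Eq.(5) residual = {eq5_residual(r, M, a):.2e}")
# At a = 0 this returns r_fast = 3M, the Schwarzschild photon-sphere radius.
```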
What we find most intriguing is the fact that the spin-dependent radii $r_{\text{fast}}(M,a)$ of the fastest circular trajectories, as given by the functional expression (6), exactly coincide with the corresponding radii $r_{\gamma}(M,a)$ of the null circular geodesics [9] which characterize the spinning Kerr black-hole spacetimes. One therefore concludes that co-rotating null circular geodesics (closed light rings) provide the fastest way, as measured by asymptotic observers, to circle around generic Kerr black holes. It is physically interesting to define the dimensionless ratio [see Eqs. (1) and (3)]
$\Theta({\bar{a}})\equiv{{T_{\odot\text{min}}}\over{2\pi r_{+}}}\ \ \ \ ;\ \ \ \ {\bar{a}}\equiv a/M\ ,$ (7)
which characterizes the unique closed circular trajectories [with $r=r_{\text{fast}}({\bar{a}})$] that minimize the orbital periods around the central spinning black holes. As emphasized above, a naive flat-space calculation predicts the relation $\Theta^{\text{flat}}_{\text{min}}\equiv T^{\text{flat}}_{\odot\text{min}}/2\pi R=1$ [see Eq. (1)]. However, substituting Eqs. (4) and (6) into (7), one finds the characteristic inequality [10]
$\Theta^{\text{Kerr}}({\bar{a}})>1\ $ (8)
for all Kerr black-hole spacetimes in the physically allowed regime ${\bar{a}}\in[0,1]$. In particular, the dimensionless function $\Theta^{\text{Kerr}}({\bar{a}})$ exhibits a non-trivial (non-monotonic) functional dependence on the dimensionless black-hole rotation parameter ${\bar{a}}$ with the property [11]
$\text{min}_{\bar{a}}\big\{\Theta^{\text{Kerr}}({\bar{a}})\big\}\simeq 2-{{3(13-7\sqrt{3})}\over{88}}\ \ \ \ \text{at}\ \ \ \ {\bar{a}}^{\text{Kerr}}_{\text{min}}\simeq 1-{{126-45\sqrt{3}}\over{1936}}\ .$ (9)
Summary.— Fermat’s principle asserts that, in a flat spacetime geometry, the path taken by a ray of light is unique in the sense that it represents the spatial trajectory with the shortest possible traveling time between two given points [1, 2]. This intriguing principle implies, in particular, that the paths taken by light rays are generally distinct from the straight line trajectories which could minimize the traveling distances between two given points. In the present short Essay we have highlighted an intriguing and closely related phenomenon in curved black-hole spacetimes. In particular, we have raised the physically interesting question: Among all possible trajectories that circle around a spinning Kerr black hole, which closed trajectory provides the fastest way, as measured by asymptotic observers, to circle the central black hole? Our compact theorem has revealed the physically intriguing fact that the equatorial null circular geodesics (closed light rings), which characterize the curved black-hole spacetimes, provide the fastest way to circle around spinning Kerr black holes. In particular, we have explicitly proved that, in analogy with the Fermat principle in flat spacetime geometries, the unique curved trajectories $r=r_{\text{fast}}(M,a)$ [see Eq. (6)] which minimize the traveling times $T_{\odot}$ of test particles around central black holes are distinct from the tangential trajectories $r=r_{+}(M,a)$ [see Eq. (3)] which could minimize the traveling distances around the black holes.

ACKNOWLEDGMENTS

This research is supported by the Carmel Science Foundation. I would like to thank Yael Oren, Arbel M. Ongo, Ayelet B. Lata, and Alona B. Tea for helpful discussions.

## References

* (1) A. Lipson, S. G. Lipson, H. Lipson, Optical Physics 4th Edition, Cambridge University Press (2010). * (2) See also https://en.wikipedia.org/wiki/Fermat$\%$27s_principle . * (3) We use natural units in which $G=c=1$. * (4) S. Chandrasekhar, The Mathematical Theory of Black Holes, (Oxford University Press, New York, 1983). * (5) In the present essay we consider geodesic as well as non-geodesic trajectories of test particles around the central Kerr black holes. It is well known that the null circular geodesic of the black-hole spacetime, on which massless particles that move exactly at the speed of light can circle the central black hole, is characterized by a well defined radius $r_{\gamma}=r_{\gamma}(M,a)$. It is important to stress the fact that, by using man-made rockets which are based on non-gravitational forces, massive particles can also circle the central black hole on non-geodesic trajectories with orbital velocities that, in principle, may approach arbitrarily close to the speed of light and with orbital radii that may differ from the unique radius $r_{\gamma}(M,a)$ of the black-hole null circular geodesic.
* (6) Here the upper/lower signs correspond respectively to co-rotating/counter-rotating trajectories of the test particles around the central spinning black holes. * (7) In the present essay we are interested in circular trajectories that minimize the orbital periods $T_{\odot}$ around the central spinning black holes as measured by asymptotic observers. We shall therefore focus on co-rotating circular orbits. * (8) S. Hod, Phys. Rev. D 84, 104024 (2011). * (9) See equation (2.18) of J. M. Bardeen, W. H. Press and S. A. Teukolsky, Astrophys. Jour. 178, 347 (1972). * (10) In particular, one finds $\Theta^{\text{Kerr}}({\bar{a}}\to 0)\to 3\sqrt{3}/2$ in the limit of non-rotating Schwarzschild black holes and $\Theta^{\text{Kerr}}({\bar{a}}\to 1)\to 2$ in the limit of maximally spinning (extremal) Kerr black holes. * (11) Here we have used the series expansion $\Theta^{\text{Kerr}}({\bar{a}})=2+\sqrt{2}(\sqrt{3}-2)\cdot\sqrt{\epsilon}+(14/3-2\sqrt{3})\cdot\epsilon+O(\epsilon^{3/2})$ in the near-extremal $0\leq\epsilon\equiv 1-{\bar{a}}\ll 1$ regime.
# Capitol (Pat)riots: A comparative study of Twitter and Parler

Hitkul, Amogh Gulati, Udit Arora, Rajiv Ratn Shah, and Ponnurangam Kumaraguru — IIIT-Delhi, India

Avinash Prabhu, Dipanwita Guhathakurta, Jivitesh Jain, Mallika Subramanian, Manvith Reddy, Shradha Sehgal, and Tanvi Karandikar — IIIT-Hyderabad, India

###### Abstract.

On 6 January 2021, a mob of right-wing conservatives stormed the USA Capitol Hill, interrupting the session of Congress certifying the 2020 Presidential election results. Immediately after the start of the event, posts related to the riots started to trend on social media. One platform that stood out was Parler, a free-speech-endorsing social network claimed to be the platform on which the riots were planned and discussed. Our report presents a contrast between the trending content on Parler and Twitter around the time of the riots. We collected data from both platforms based on the trending hashtags and draw comparisons on the topics being talked about, the people active on the platforms, and how organic the content generated on the two platforms is. While the content trending on Twitter expressed strong resentment towards the event and called for action against the rioters and their inciters, Parler content carried a strong conservative narrative echoing the voter-fraud claims of the attacking mob. We also find disproportionately high manipulation of traffic on Parler when compared to Twitter.

Social Computing, Data Mining, Social Media Analysis, Capitol Riots, Parler, Twitter

## 1\. Introduction

Misinformation claiming that the United States of America’s presidential election results were fraudulent has been spreading across the world since the elections in November 2020 (https://www.bbc.com/news/election-us-2020-55016029). Public protests and legal cases alleging voter fraud and mishandling of mail-in ballots took place across the states (https://www.business-standard.com/article/international/trump-approaches-us-supreme-court-against-presidential-election-results-120121000240_1.html). This movement took a violent turn when a mob attacked the Capitol Hill building to stop the certification of Mr. Joe Biden as the 46th President of the United States of America. The incident led to a mid-way halt of a running Congress session and the deaths of five people, including a police officer (https://www.theguardian.com/us-news/2021/jan/08/capitol-attack-police-officer-five-deaths). The incident also spread its ripples in the online world, with hashtags like #capitolriots trending on Twitter, Google Search and other social media platforms. A multitude of research has been done on the role of social media in politics from various aspects like misinformation, the topical focus of content, communities, and bot accounts (Panda et al., 2020; Ferrara, 2017; Morstatter et al., 2018; Bessi and Ferrara, 2016). The USA saw an example of large-scale use of social media for political protest earlier in 2020, after the killing of George Floyd incited widespread support for the BlackLivesMatter (BLM) movement on Twitter (Bolsover, 2020).
The US 2020 presidential elections also saw heavy use of the platform, with #Election2020 and #NovemberIsComing trending in order to spark election-related conversations (Shevtsov et al., 2020). In this report, we collect data from two Online Social Media sites (OSMs) - Twitter (https://www.twitter.com/) and Parler (https://parler.com/) - that were actively used to discuss the riots. Parler is a free-speech social network that garnered attention when President Donald Trump publicly denounced social media giants like Twitter and Facebook for targeting him and other conservatives. The network has also been used by right-wing extremists to plan the Jan 6th breach of the Capitol (https://www.businessinsider.in/tech/news/plans-to-storm-the-capitol-were-circulating-on-social-media-sites-including-facebook-twitter-and-parler-for-days-before-the-siege/articleshow/80155657.cms). Section 2 goes into the details of Parler’s rules and functioning. We conduct a comparative study of the trending content and users on the two platforms. We observe violent rhetoric on Parler, with a significant portion of the content in support of the Capitol riots and misinformed claims of fraudulent elections (https://www.thehindu.com/news/international/us-supreme-court-rejects-republican-attack-on-biden-victory/article33312520.ece). Content on Twitter, by contrast, denounced the storming of the Capitol and the weak police response to the incident relative to the BLM protests. Users on Twitter shared their concerns regarding the rising violent riots in America. A longitudinal analysis of the content and the users’ joining dates highlights how the two OSMs were actively used during and after the protests, reaffirming the point that social media has a significant role to play in political events.

## 2\. An Introduction to Parler

Parler is a micro-blogging and social networking service launched in 2018 with headquarters in Henderson, Nevada, USA. According to its About page, the network is a free speech social network, built on a foundation of respect for privacy and personal data, free speech, free markets and ethical, transparent corporate policy (https://company.parler.com). Parler advertises the minimal rules and content guidelines it imposes, explaining its popularity amongst users who are banned from popular social media websites such as Twitter and Facebook due to their content moderation policies (https://www.nytimes.com/2020/11/11/technology/parler-rumble-newsmax.html). In the aftermath of the Capitol attack, Parler was suspended by its hosting provider, Amazon Web Services, leaving the platform unavailable (https://www.cnbc.com/2021/01/11/parler-drops-offline-after-amazon-withdraws-support.html).

### 2.1. Parler Rules

The Parler Community Guidelines are based on two principles: 1) Parler’s services cannot be used as a tool for crime and unlawful acts, and 2) bots and spam are a nuisance and not conducive to productive and polite discourse. While threats of violence and advocacy of lawless actions are prohibited, fighting words and NSFW (Not-Safe-For-Work) content are allowed, under some restrictions (https://legal.parler.com/documents/Elaboration-on-Guidelines.pdf). Reported violations of these guidelines are reviewed by a Community Jury, which determines whether the content is permitted or not. A point system is in place to ban repeated and frequent offenders (https://legal.parler.com/documents/Parler-Community-Jury.pdf).
### 2.2. Parler Social Network Structure

Parler allows registered users to write parleys, which are posts at most 1,000 characters long. Social network engagement features, such as comments and votes on parleys written by others, are also present. Each user has their own feed - a stream of parleys that they can interact with. Unlike other popular platforms, the feed is not curated by Parler. Users curate and moderate their feed using options provided by the platform - reflecting Parler’s principles. Parler allows users to search for hashtags and usernames. The lack of full-text search leads to an overemphasis on the use of hashtags. Another means of exploring content on Parler is through the discover section. The discover section consists of parleys, people and affiliates. Affiliates are news outlets which are allowed by Parler to post their news articles. Table 1 provides a description of common terms and actions associated with the network.

Table 1. Terminology used by Parler.

Term | Definition
---|---
Parley | A Parley is a 1,000-character post that can be shared on the Parler platform.
Hashtag | A word or phrase preceded by a hash sign (#), used on Parler to identify digital content on a specific topic.
Comment | A Comment is a 1,000-character reply to a Parley.
Echo | An Echo is a re-posting of a Parley. Parler’s Echo feature helps users quickly share that Parley with all of their followers.
Vote | A Vote on a Parley is a way to let people know that the user enjoys it.
Direct Message or DM | Users can directly contact anyone by going to their Parler profile and selecting the message icon to start a conversation.
Follow | Following another user means that all their parleys will appear in the feed.
Unfollow | Unfollowing another user means that all their parleys will no longer appear in the feed.
Citizen | Parler Citizens are verified unique people in the Parler network.
Verified | People with a large following have the potential to be targeted for impersonation, hacking or phishing campaigns. The verified badge is given to protect the person’s authenticity and prove their identity to the community.
Block | If one user blocks another, he/she won’t be able to see the blocked account and vice versa.
Mute | Muting a user will prevent the user’s posts from showing up on the feed.

## 3\. Data Collection

Our data collection was done on 7 and 8 January 2021. A list of trending hashtags and keywords was curated and used as the seed to collect data. Parleys were collected using parler-py-api (https://github.com/KonradIT/parler-py-api/). The collected data was dated between 1 Nov 2020, 03:39 am EST and 8 Jan 2021, 08:15 am EST. A total of approximately 100,000 parleys from 22,000 unique users were collected (http://precog.iiitd.edu.in/resources.html). For collecting tweets, we used the official Twitter Streaming API. Our collection period was 7 Jan 8:00 AM EST to 8 Jan 7:15 AM EST, while adhering to rate limits. A total of approximately 4 million tweets were collected from 1.7 million users. Table 2 provides a summary of our dataset statistics.

Table 2. Summary of dataset statistics

Parler | Twitter
---|---
Total Parleys | 101,945 | Total Tweets | 4,196,988
Echos | 35,165 | Retweets | 3,288,274
Unique users | 22,326 | Unique users | 1,720,826

## 4\. A Comparative Study

We compared the collected data from Parler and Twitter on the basis of 1) _what_ was being posted, 2) _who_ were the people posting and creating engagement, and lastly 3) how _organic_ the generated traffic was.

### 4.1. What was being posted?
To understand each platform’s topics of trending conversation, we looked at the ten most common hashtags on the respective platforms. Figure 1 shows the ten most frequently used hashtags, along with the percentage of posts in which they appear. The percentage of use for a particular hashtag has been calculated by counting the number of posts mentioning that hashtag at least once, normalised by the total number of posts on that platform containing at least one hashtag. We observe a stark contrast in hashtag popularity on the two platforms. All the hashtags trending on Parler represent the misinformed idea of voter fraud and echo ideas similar to those of the attacking mob. Hashtags popular on Twitter are either neutral towards the event, e.g. _#capitolriots_ and _#washingtondc_ , or call for the impeachment of President Donald Trump, who is being held responsible for inciting his followers to this attack (https://www.washingtonpost.com/politics/trump-rage-riot/2021/01/07/26894c54-5108-11eb-b96e-0e54447b23a1_story.html). One exception to this result is the presence of _#maga_ on Twitter, which is associated with President Trump’s election campaign.

Figure 1. Ten most frequently used hashtags: (a) Parler, (b) Twitter. Hashtags on Parler echo ideas similar to those of the Capitol-attacking mob, while Twitter trends with hashtags representing a call for action against the event.

It is also worth noting that the most frequent hashtag on Parler is present in above 70% of posts, compared to only above 12% in the case of Twitter, representing an extremely one-sided narrative on Parler.
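The per-hashtag usage percentages in Figure 1 (and the analogous per-term percentages in Figure 2 below) are simple normalized counts; a minimal Python sketch of the computation just described (the toy input is ours, for illustration):

```python
from collections import Counter
import re

HASHTAG_RE = re.compile(r"#(\w+)")

def hashtag_usage(posts, top=10):
    """Percentage of posts in which each hashtag appears at least once,
    normalised by the number of posts containing at least one hashtag."""
    counts, with_tags = Counter(), 0
    for text in posts:
        tags = {t.lower() for t in HASHTAG_RE.findall(text)}
        if tags:
            with_tags += 1
            counts.update(tags)          # each tag counted once per post
    return {t: 100.0 * c / with_tags for t, c in counts.most_common(top)}

posts = ["#StopTheSteal now! #maga", "No tags here", "#maga #maga rally"]
print(hashtag_usage(posts))   # {'maga': 100.0, 'stopthesteal': 50.0}
```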
To provide better insight into the context in which these hashtags have been used, Tables 3 and 4 show five parleys and tweets for the two most frequent hashtags, respectively.

Table 3. Sample Parleys using popular hashtags.

#stopthesteal | #maga
---|---
Interesting.. The cops should have just went home.. We will never call Sleepy Creepy PEDO maker Joe our President. #DrainTheSwamp #StopTheSteal #FAKENEWSROBOTS #StillYourPresident #FREEDOM | January 21st, 2021 we rise up again…I CALL BULLSHIT #JOEBIDEN is not my president!…#maga
…they stopped counting in Chatham County at like 10:30-11 last night - ”resumed” this morning. Republicans were leading both seats with over 90% of the vote in…#stopthesteal… | MAGA PATRIOTS please support Trump and storm the Capitol and Congress properly.!! We will install Truml as our Lord and savior, PRESIDENT FOR LIFE… because we won the election!! THIS TIME WE TAKE THE CAPITOL WITH GUNS bcs it is not fair we lost DESTORY AMERICA OR TRUMP STAYS PRESIDENT… its the most patriotic thing to do…#maga
Antifa is in there. They have breached the House…#StopTheSteal #voterfraud… | DO THE MATH!…#maga
75% #CCP controlled voting machines used in United States election destroys trust in #electionintegrity ! #StopTheSteal… | …Treason against the United States is taking place surrounding the presidential election. The largest cyber warfare activity in the world…#maga…
Still haven’t seen where Biden and Harris have taken their vaccines. Has anyone else seen it?…#StopTheSteal | Despite the crisis, a sense of unity & patriotism afford #America a common mission & increased opportunities…#maga…

Table 4. Sample Tweets using popular hashtags.

#capitolriots | #removetrumpnow
---|---
Dear @VP @Mike_Pence: In light of the #CapitolRiots yesterday, there are bipartisan calls for you to invoke the 25t… | President and Commander in Chief Trump should be either removed from office with the 25th amendment, impeached, and/or investigated for criminal charges. #ImpeachTrumpNow #RemoveTrumpNow #InvestigateTrump
Confirmed that some police were involved in the Capitol attack yesterday #CapitolRiots | #RemoveTrumpNow He betrayed the country he swore to protect. The Constitution and Democracy mean nothing to him. He…
The #CapitolRiots yesterday were underpinned by pure racist hatred nothing less. This started with Trumps birtharis… | We can’t wait 13 days. #RemoveTrumpNow
#BREAKING: In response to Trump inciting the deadly | People have died because of Trump’s incitement and sedition. #RemoveTrumpNow
The #CapitolRiots were a terrorist act incited by Donald Trump, Don Jr, Rudy Giuliani and members of Congress. Arre… | The right time to do the right thing is RIGHT NOW. #RemoveTrumpNow

To get further insight into the content being posted on both platforms, we repeat the frequency plots for terms, as shown in Figure 2. We use the word term in this context to mean any uni-gram (excluding stop words) included in a post, but not as a mention of a user or a hashtag. The percentage of use for a particular term has been calculated by counting the number of posts mentioning that term at least once, normalised by the total number of posts on that platform. We observe the same pattern as with hashtags: terms used on Twitter indicate a sense of dissatisfaction and disdain towards the events at the US Capitol. On the other hand, terms frequent on Parler indicate a strong sense of support for undermining the veracity of the 2020 US presidential elections. Parleys also display abundant use of strong language compared to the tweets, which can easily be attributed to the platform’s liberal community guidelines and supports claims that the platform is used as a medium for proliferating violent and offensive content.

Figure 2. Ten most frequently used terms: (a) Parler, (b) Twitter. Conservative and election-fraud terms appear most frequently on Parler, representing an association with the ideas of the rioters; Twitter, however, is dominated by calls for the 25th Amendment and comparisons with the Black Lives Matter protests.

Apart from this stark difference in opinion, some terms harbour a deeper meaning. On Twitter, terms like 25th Amendment advocate President Donald Trump’s immediate removal from office. In contrast, those like black draw a comparison between law enforcement agencies’ response to the demonstrations at the US Capitol and the Black Lives Matter protests of July 2020, adding a racial dimension to the conversation. Since the terms trump and president are common and occur frequently on both platforms, we compare the contexts in which these two popular terms appear in parleys and tweets in Tables 5 and 6.

Table 5. Context Comparison for the term Trump.

Parler | Twitter
---|---
Plandemic Exposed…Trump knows that the Dummies are already forging their narrative with how they will stop this plandemic as soon as Chinese Joe gets in the Oval Office… | Well, Trump finally got his wall. It’s in Washington DC.
…I urge all Trump supporters and lovers of freedom to protest at your place of government.
We can not let this nation fall to the hands of foreign puppets… | No matter how Trump now behaves…
Listening to Trump speak. #stopthesteal | Donald Trump needs to resign or be removed from office. America has endured enough.
Trump pledging appear personally at the rally of outright fascists and white supremacists on January 6th as part of his coup efforts. This is an act of intimidation that has parallels with Mussolini’s march on Rome. It is aimed at intimidating not just the working class as a whole but Trump’s opponents…#fascsim #Socialism | Trump did NOT immediately send out the National Guard. No initial presence. Clashes began $\sim$1:20pm. The breach happ…
I’m not saying there was no vandalism or violence from our side but I didn’t see any of it for the 8+ hours I was there. I did watch ANTIFA breaking windows at the Capitol and a group of Trump supporters tackled him in 30 seconds. #StopTheSteal | …Speaker Nancy Pelosi said Congress will IMPEACH trump if he’s not immediately removed by the 25th Amendment. LET’S GOOOO!!!!

Table 6. Context Comparison for the term President.

Parler | Twitter
---|---
…It took a small group of Patriots to enter the Capitol Building to get the attention of America about the #voterfraud that occurred that made Joe Biden a fraudulent president elect. Almost 50% of Americans do not think it was a free and fair election… | Why are these swamp creatures in such a rush to use the 25th amendment or to impeach President @realDonaldTrump? Do…
I proudly stand with President Trump and I will fight to the death to ensure the sanctity of our Republic… | The Vice President and the Cabinet should vote, today, on invoking the 25th amendment. Every second that Donald Tr…
President #DonaldTrump signed an executive order on Nov. 12 prohibiting Americans from investing in select Chinese firms that support China. | All that these cabinet resignations say to me is that they don’t have the guts to hold President Trump accountable…
GOD BLESS PRESIDENT TRUMP!!! The president says HELL NO! to the PORK STUFFED BURRITO that CONGRESS calls A STIMULUS BILL!!!…#BIDENCRIMEFAMILY.. | NEW VIDEO: The President has betrayed the country and incited an insurrection against our own government. Retweet i…
I also stand with President Trump…#donaldtrumpismypresident #trump4moreyears… | Donald Trump will be remembered as the worst president in the history of the United States.

### 4.2. Who were the people posting?

Users on both platforms were compared across three attributes - content generated, mentions, and reposts. Since the datasets are vastly different in size, we normalize by the number of posts to gauge the differences between the platforms. We start by ranking users by the volume of content they generate. We also compare user bios and joining dates to get a bird’s eye view of the types of users present on the platforms. Tables 7 and 8 show the five most active users on Parler and Twitter, respectively.

Table 7. Most Active Users on Parler. Five users account for 11% of the content generated.

User | Posts | Percentage
---|---|---
Patriots4US | 6320 | 6.19%
TheRealWakeUpMfers | 2580 | 2.53%
GameOver | 1406 | 1.37%
Billyboy428 | 1105 | 1.08%
marylandcrabbing | 994 | 0.97%

Table 8. Most Active Users on Twitter.

User | Posts | Percentage
---|---|---
openletterbot | 511 | 0.0001%
4Tchat | 360 | 0.0001%
Difference30360 | 333 | 0.0001%
Anime_ABEMA | 330 | 0.0001%
RogueRiverSun | 317 | 0.0001%
We once again observe a large contrast between the users and their posts. The top users on Parler posted many extreme parleys in which they urged people to take part in the protest and later showed support for it. On the other hand, the Twitter users openletterbot and RogueRiverSun posted extensively against the protest. openletterbot is a bot used to deliver messages from the public to elected officials, and it posted many anti-protest messages during the Capitol storming. On Parler, approximately 11% of the total content is generated by only five users; this may indicate heavy traffic manipulation on Parler, which we attempt to study in Section 4.3. Next, in Tables 9 and 10, we list the five most mentioned users on both platforms. We observe that President Trump’s account receives the most mentions on both platforms. However, there is a stark contrast for the rest of the users on the list. Predominantly right-wing personalities are mentioned frequently on Parler, whereas on Twitter we observe a larger inclination towards mentions of left-wing leaders. The lack of mentions of left-wing leaders on Parler may be due to the fact that most left-wing leaders do not have a Parler account. The FBI is also mentioned prominently on Twitter due to a movement in which people involved in the riot were identified on Twitter and reported to the FBI account (https://twitter.com/FBI/status/1348283582490546177?s=20).

Table 9. Most Mentioned Users on Parler. Predominantly right-wing user accounts.

User | Mentions | Percentage
---|---|---
TeamTrump | 3363 | 3.30%
linwood | 2802 | 2.75%
SeanHannity | 2185 | 2.14%
Marklevinshow | 2139 | 2.09%
GenFlynn | 1805 | 1.77%

Table 10. Most Mentioned Users on Twitter. Predominantly left-wing leaders, President Trump being an exception. The movement to identify rioters online led to a heavy presence of the FBI.

User | Mentions | Percentage
---|---|---
realDonaldTrump | 87223 | 0.023%
AOC | 53418 | 0.014%
FBI | 37317 | 0.010%
HawleyMO | 32931 | 0.009%
SpeakerPelosi | 30325 | 0.008%

Finally, in Tables 11 and 12 we list the five users with the highest number of reposts on both platforms. Most of the accounts belonged to people involved in media and the press. However, the content that the two sets of users posted was extensively different. The majority on Parler posted in support of the “revolution” and believed strongly in the riots, whereas the users on Twitter put up posts condemning the riot. This strongly shows the disparity in the content spreading on the two websites. We further notice an abnormally high repost percentage on Parler coming from a small set of users; combined with the one-sided narrative of most user-generated content, this is a strong sign of a platform-level echo chamber on Parler.

Table 11. Most Reposted Users on Parler. Around twenty percent of repost content is generated by five conservative accounts, a strong sign of a skewed narrative on Parler.

User | Reposts | Percentage
---|---|---
WarRoomPandemic | 11352 | 11.1%
Ryanahlberg | 5342 | 5.2%
epochtimes | 2017 | 1.9%
tjf2020 | 1538 | 1.5%
JoePags | 1123 | 1.1%

Table 12. Most Retweeted Users on Twitter. Predominantly left-wing accounts were retweeted on Twitter.

User | Reposts | Percentage
---|---|---
AOC | 42016 | 0.012%
kylegriffin1 | 26270 | 0.007%
BarkyBoogz | 25503 | 0.007%
SethAbramson | 24055 | 0.007%
MeidasTouch | 23633 | 0.007%
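The activity, mention, and repost shares reported in Tables 7–12 are simple normalized counts. A minimal sketch follows; the post-dictionary layout is our own assumption:

```python
from collections import Counter

# posts: list of dicts like {"user": "...", "text": "...", "is_repost": bool}
def top_user_shares(posts, k=5):
    """Per-user activity share, as reported in Tables 7 and 8:
    a user's post count divided by the total number of posts."""
    counts = Counter(p["user"] for p in posts)
    total = len(posts)
    return [(u, c, 100.0 * c / total) for u, c in counts.most_common(k)]

posts = ([{"user": "Patriots4US", "text": "...", "is_repost": False}] * 3
         + [{"user": "GameOver", "text": "...", "is_repost": True}])
for user, n, pct in top_user_shares(posts):
    print(f"{user}: {n} posts ({pct:.2f}%)")
```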
The word clouds do not show as clear a separation as our other plots, mainly because of a heavy presence of trump and maga in Twitter bios, terms which are generally associated with conservatives. However, users on Twitter indicate some signs of diversity with the presence of terms like love, life, mom, and fan, whereas Parler is populated only with terminology associated with conservatives.

(a) Parler (b) Twitter Figure 3. Word clouds of user bios. Twitter presents some diversity with the presence of terms like love, mom, and trump, whereas Parler contains only conservative right-wing terms, the same as those echoed by the attacking mob.

While analysing user joining dates, we observed three distinct peaks in Parler. From left to right in the graph, these peaks correspond to the Black Lives Matter protests from May-June 2020, the #twexit movement started by Parler during July 2020, and the US presidential elections during November 2020. In the Twitter data, we observe comparatively smaller but prominent peaks during the Black Lives Matter protests and the November election. However, the outlier is the steep peak on January 6, 2021 during the Capitol Hill riots. To understand the effect and role of these users, further analysis is required. Lastly, it is also interesting to observe that outside the three peaks caused by political events, user sign-ups on Parler are minuscule, explaining the highly political nature of users on Parler.

(a) Parler (b) Twitter Figure 4. Proportion of accounts created over time. All user sign-up spikes on Parler occur during political events, explaining a highly polarized userbase. In comparison, Twitter has a steady rate of sign-ups with less prominent peaks during political events. However, a large rate of sign-ups is observed on Twitter during the time of the riots.

### 4.3. How organic was the traffic generated?

We calculated the Coefficient of Traffic Manipulation (CTM) (Nimmo, 2019) for frequently occurring hashtags on both platforms. CTM is a relative metric measuring how much the traffic of a given hashtag has been manipulated. Equation 1 provides a mathematical representation of CTM.

(1) $C=\frac{R}{10}+F+U$

Here, for a given hashtag $t$:

* • $C$ is the Coefficient of Traffic Manipulation for $t$.
* • $R$ is the percentage of $t$’s traffic created by reposts.
* • $F$ is the percentage of $t$’s traffic created by the top fifty users.
* • $U$ is the average number of posts per user for $t$.

(a) Parler (b) Twitter Figure 5. CTM on popular hashtags. Traffic on Parler is disproportionately more manipulated than that on Twitter, where all tags are within the range of organic traffic. The highest CTM score on Parler is $46$ times higher than that of Twitter.

Using Equation 1, we find the value of $C$ for the most popular hashtags on Twitter and Parler, shown in Figure 5. The CTM for the most manipulated hashtag on Parler is approximately $46$ times higher than that of the most manipulated hashtag on Twitter, indicating a large extent of traffic manipulation on Parler. A similar trend can be seen for all other frequently occurring hashtags too. Though CTM is a relative scale, Nimmo (2019) anecdotally found in their report that the CTM for organic traffic was generally less than 12. Judging on that scale, along with the abundance of pro-riot content observed in our previous experiments, this indicates the presence of a manipulative rigged-election agenda on Parler, which could eventually have fueled the tragedy of January 6, 2021.
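To make the metric concrete, the following is a minimal Python sketch of the CTM computation in Equation 1 for a single hashtag. The post-record format (dicts with `user` and `is_repost` fields) is our illustrative assumption, not the exact data schema used in this report.

```python
from collections import Counter

def ctm(posts):
    """Coefficient of Traffic Manipulation (Nimmo, 2019) for one hashtag.

    `posts` is a list of dicts with keys 'user' and 'is_repost'
    (a hypothetical record format, for illustration only).
    """
    n = len(posts)
    posts_by_user = Counter(p["user"] for p in posts)
    # R: percentage of the hashtag's traffic created by reposts
    r = 100.0 * sum(1 for p in posts if p["is_repost"]) / n
    # F: percentage of the hashtag's traffic created by the top fifty users
    f = 100.0 * sum(c for _, c in posts_by_user.most_common(50)) / n
    # U: average number of posts per user
    u = n / len(posts_by_user)
    return r / 10.0 + f + u
```

Because $F$ saturates at 100 and $U$ is unbounded, a handful of hyperactive accounts dominating a tag inflates $C$ quickly, which matches the pattern observed on Parler.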
## 5. Conclusion

In this report, we analyse trending traffic from Twitter and Parler in light of the riots at the US Capitol in Washington DC on January 6, 2021. We look at the kind of content being generated and the users involved, and find evidence supporting the claims that a significant proportion of traffic on Parler was in support of undermining the veracity of the 2020 US Presidential Elections. We also find the traffic on Parler to often be violent, to contain hate speech (a result of Parler’s relaxed community guidelines), and to be manipulated, as indicated by CTM. On the other hand, Twitter users largely used the platform to condemn the incident and its perpetrators and to show their disdain towards President Donald Trump for his actions. An in-depth analysis of these users’ networks and their activity across the two websites is required to better understand the role of social media in the incitement and planning of these riots. We are continuing to analyze this data for more insights.

###### Acknowledgements.

Hitkul is funded by a TCS Research Fellowship.

## References

* Bessi and Ferrara (2016) Alessandro Bessi and Emilio Ferrara. 2016. Social bots distort the 2016 US Presidential election online discussion. _First Monday_ 21, 11-7 (2016).
* Bolsover (2020) Gillian Bolsover. 2020. Black Lives Matter discourse on US social media during COVID: polarised positions enacted in a new event.
* Ferrara (2017) Emilio Ferrara. 2017. Disinformation and social bot operations in the run up to the 2017 French presidential election. _arXiv preprint arXiv:1707.00086_ (2017).
* Morstatter et al. (2018) Fred Morstatter, Yunqiu Shao, Aram Galstyan, and Shanika Karunasekera. 2018. From alt-right to alt-rechts: Twitter analysis of the 2017 german federal election. In _Companion Proceedings of the The Web Conference 2018_. 621–628.
* Nimmo (2019) Ben Nimmo. 2019. Measuring Traffic Manipulation on Twitter.
* Panda et al. (2020) Anmol Panda, Ramaravind Kommiya Mothilal, Monojit Choudhury, Kalika Bali, and Joyojeet Pal. 2020. Topical Focus of Political Campaigns and Its Impact: Findings from Politicians’ Hashtag Use during the 2019 Indian Elections. _Proc. ACM Hum.-Comput. Interact._ 4, CSCW1, Article 053 (May 2020), 14 pages. https://doi.org/10.1145/3392860
* Shevtsov et al. (2020) Alexander Shevtsov, Maria Oikonomidou, Despoina Antonakaki, Polyvios Pratikakis, and Sotiris Ioannidis. 2020. USelections 2020 analysis on Twitter and YouTube.
# TLU-Net: A Deep Learning Approach for Automatic Steel Surface Defect Detection

Praveen Damacharla Research Scientist KineticAI Inc. Crown Point, IN, USA <EMAIL_ADDRESS>Achuth Rao M. V Dept. of Electrical Engineering Indian Institute of Science (IISc) Bengaluru, KA, India <EMAIL_ADDRESS>Jordan Ringenberg Computer Science Dept. The University of Findlay Findlay, OH, USA <EMAIL_ADDRESS>Ahmad Y. Javaid EECS Department The University of Toledo Toledo, OH, USA <EMAIL_ADDRESS>

###### Abstract

Visual steel surface defect detection is an essential step in steel sheet manufacturing. Several machine learning-based automated visual inspection (AVI) methods have been studied in recent years. However, most steel manufacturing industries still use manual visual inspection due to training time and inaccuracies involved with AVI methods. Automatic steel defect detection methods could be useful for less expensive and faster quality control and feedback. But preparing the annotated training data for segmentation and classification can be a costly process. In this work, we propose to use the Transfer Learning-based U-Net (TLU-Net) framework for steel surface defect detection. We use a U-Net architecture as the base and explore two kinds of encoders: ResNet and DenseNet. We compare these nets’ performance using random initialization and pre-trained networks trained on the ImageNet data set. The experiments are performed using Severstal data. The results demonstrate that transfer learning performs 5% (absolute) better than random initialization in defect classification. We found that transfer learning performs 26% (relative) better than random initialization in defect segmentation. We also found that the gain of transfer learning increases as the training data decreases, and that the convergence rate with transfer learning is better than that of random initialization.

###### Index Terms: Automated visual inspection (AVI), DenseNet, ResNet, Surface defect detection, Transfer learning, U-Net

## I Introduction

Steel is one of humanity’s most important building materials. Defect inspection is a critical step of quality control for steel plates. This step mainly involves capturing images of the steel surface using an industrial camera, followed by recognizing, localizing, and classifying the defect, which helps rectify the defect’s cause. Typically, this process is performed manually, which is unreliable and time-consuming. Unreliable quality control can cause huge economic problems for manufacturers. Manual detection can be replaced or aided by automatic classification using computer vision methods. The general flow of automatic visual inspection for quality control is shown in Fig. 1.

Figure 1: A Generic Automatic Visual Inspection Outline

There are two main steps involved in defect inspection. The first step is to classify the defect type from the images, and the second step is to identify the defect location in the image. There are various automatic methods in the literature to address one or both of these steps. Some of the early methods use handcrafted features to classify the defect type [1, 2, 3], and a few methods find coarse defect locations. The main drawback of these methods is that the features need to be designed by experts. The designed features may not generalize to new types of defects. Recent advances in end-to-end deep learning (DL) methods overcome the need for such hand-designed features.
DL methods learn to extract multi-scale features suited to the task using only the data and labels. They have been shown to outperform hand-designed features in various computer vision tasks [4].

Figure 2: Proposed transfer learning architecture for joint steel defect classification and segmentation. The blue lines indicate the skip connections and the orange dotted line indicates the initialization.

Various deep learning methods have been used to perform defect classification. [5] use features extracted from OverFeat, a variant of convolutional neural networks (CNNs), to do the defect classification. They have also shown that the fixed features from the pre-trained network perform well on some defects and poorly on texture-like defects. These authors have also proposed a structural visual inspection method based on faster region-based CNN (faster R-CNN) to ensure quasi-real-time simultaneous detection of multiple types of defects [6]. [7] proposed to detect weak scratches using deep CNNs and skeleton extraction. They have shown that their method is robust to background noise. [8] proposed a variant of the you-only-look-once (YOLO) network to detect surface defects of flat steel in real time. CNNs are used extensively for different kinds of defect classification on different data sets [9, 10]. [11] use a transfer learning approach for defect classification. They have shown that transfer learning can help achieve good classification accuracy with fewer data samples. [12] use patch-wise classification to do both defect classification and segmentation.

There are various methods in the literature on defect localization. [13] uses a classical CNN to perform steel defect detection. The authors explored the effects of regularization and unsupervised pre-training. [14] uses a pre-trained ResNet to extract multi-scale features, and the features from different scales are fused using a multilevel feature fusion network (MFN). The fused features and a region proposal network are used to classify the defect type and predict the bounding box. The main drawback of the method is that the localization is very coarse. [15] use a U-Net and a residual U-Net architecture for fine segmentation of steel defects. The method’s main drawback is that the networks are trained with random initialization and need a large amount of pixel-level annotation of the defects. The pixel-level annotation process can be very time-consuming and expensive. [16] uses SegNet-based semantic segmentation for steel defect detection. There are also various unsupervised and reinforcement-learning-based methods for steel defect detection. A summary of various methods for defect classification and segmentation can be found in [17]. The authors discuss the taxonomy of defect detection, including statistical, spectral, model-based, and machine learning methods.

In this work, we systematically study the effectiveness of transfer learning for steel defect classification and localization (SDCL). Transfer learning, or domain adaptation, aims to reuse features learned in one domain to improve learning in another domain. This is a popular approach in cases where the annotated data is limited. Transfer learning has shown strong results in various tasks such as object detection [18] and semantic segmentation [19, 20]. It has already been shown that transfer learning from an arbitrary domain to another domain may not be useful.
Transfer learning is most effective when the two domains are similar [20, 21]. Hence, it is important to study the effectiveness of transfer learning in the case of SDCL. We consider a baseline architecture of U-Net for steel defect segmentation. U-Net has demonstrated state-of-the-art performance in various image segmentation tasks [22]. It uses an encoder-decoder architecture with skip connections. The encoder learns the images’ features at different scales, and the decoder uses these features to predict the segmentation masks. In this work, we explore two kinds of pre-trained encoder networks: ResNet and DenseNet. Both of these networks have been shown to perform well on various computer vision tasks. The networks are pre-trained on the ImageNet data set [23]. We use a linear classifier on the bottleneck representation of the U-Net to classify the defect. We fine-tune both the encoder and decoder of the network using the Severstal dataset [24]. The experiments on Severstal data show that the performance of both segmentation and classification is better with the pre-trained networks than with random initialization. The performance gain from using the pre-trained networks is even higher when only 50% of the data is used for training. We also show that the convergence of transfer learning is faster compared to random initialization.

## II Proposed Transfer Learning based U-Net

The proposed architecture for joint steel defect segmentation and classification is shown in Fig. 2. The architecture takes an input image of dimension $H\times W$ and classifies each pixel as one or more defect types. It involves mainly four parts: (1) the U-Net architecture, (2) the type of initialization, (3) classification, and (4) the objective function.

### II-A U-net architecture

The U-Net is an encoder-decoder architecture with skip connections. The encoder encodes the image using an encoder block and reduces the resolution using pooling. This helps in extracting multi-scale features of the image. The decoder up-samples the representation in every step. The skip connections enable the decoder to select features at different scales to make more accurate predictions of the object boundaries. The output of the U-Net is $256\times 1600\times N$ with sigmoid activation, where $N$ is the number of steel defect types.

### II-B Transfer learning

We explore two kinds of encoder blocks for transfer learning. Both of these nets are trained using the ImageNet data set [23]. We briefly review the features of the two networks in the following subsections.

Figure 3: The structure of an encoder layer for ResNet (left) and DenseNet (right). Concatenation of inputs is indicated by (c) and + indicates the add operation. BN+ReLU+Conv2D indicates batch normalization, ReLU activation, and convolution with kernel size 3x3.

#### II-B1 ResNet

Residual networks (ResNets) are very deep convolutional neural networks with skip connections [25]. The vanishing gradient problem is addressed by having a skip connection after each block. Each block contains two 3x3 convolutions with batch normalization and ReLU activation. Fig. 3 (left) shows one layer of ResNet. The encoder has 11 million parameters in total.

#### II-B2 Densenet-121

Densely connected convolutional networks (DenseNets) are stacked convolutional networks where the feature map of the $L$-th layer is concatenated with the feature maps of the previous layers [26]. This has been shown to alleviate the vanishing gradient problem.
The network’s representation power is also increased because deep layers have access to the previous layers’ feature maps. Fig. 3 (right) shows one layer of DenseNet. This encoder has 6 million parameters in total.

### II-C Classification

The encoder output encodes a rich abstract representation of the input image. Hence, we apply spatial average pooling to the encoder output to extract the image representation. The image representation is passed through a linear classifier with sigmoid activation to enable multi-label classification.

### II-D Objective function

The joint segmentation and classification problem is formulated as a weighted combination of the two losses, as shown below.

$\mathcal{J}=\sum_{k=1}^{L}\sum_{m=1}^{N}\left[BCE(\hat{y}_{km},{y}_{km})+\sum_{j=1}^{256}\sum_{i=1}^{1600}BCE(z_{kijm},\hat{z}_{kijm})\right]$ (1)

where $BCE$ denotes the binary cross-entropy loss, $k$ is the data point index, $m$ is the defect class index, $i,j$ are the spatial indices, $\hat{y}$ is the predicted probability, and $y$ is the ground truth defect label. The predicted and ground truth pixel labels are indicated by $z$ and $\hat{z}$. During the test stage, labels are obtained from the probabilities using a threshold of 0.5.

## III Experiments and results

### III-A Data-set and Pre-processing

The Kaggle Competition “Severstal: Steel Defect Detection” data is used for all the experiments. In each experiment, the input image could contain one or more kinds of defects. The training set includes 12568 images, and 6666 of them include at least one defective region. Ground truth classification was performed by an expert to provide the defect type classification and the annotation of the defective region by visual inspection. The resolution of the images is 256x1600 px. We normalize the images using the global mean and standard deviation. We apply random vertical/horizontal flips as data augmentation. The same augmentation is applied to both the original image and the corresponding ground truth masks so that they remain paired.

Figure 4: Boxplot comparison of DICE and IoU for different networks and initializations. The red line indicates the median, the black dot indicates the mean, the blue box indicates the 75% confidence interval, and the black whiskers indicate the 95% confidence intervals.

### III-B Experimental setup

We use 75% of the data for training, 12.5% for validation, and 12.5% for testing. A U-Net with five encoder and decoder stages is used for all the experiments. The network is trained using the objective function in Eq. 1 with a batch size of 16 and the Adam optimizer with a learning rate of $5\times 10^{-4}$, $\beta_{1}$=0.99 and $\beta_{2}$=0.99 [27]. We train the network for 10 epochs with early stopping. We use the U-Net with random initialization as the baseline. We have implemented the network in PyTorch [28] with the PyTorch segmentation library [29]. The ResNet/DenseNet with random initialization is indicated by ResNet(Random)/DenseNet(Random). The ImageNet pre-trained counterparts are indicated by ResNet(Imagenet)/DenseNet(Imagenet), respectively. To understand the model’s sample complexity, we also train these networks using 50% of the training data. We refer to the pre-trained initialization as TLU-Net and the random initialization as just U-Net.
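For concreteness, the following is a minimal sketch of how such a model could be assembled with the segmentation library cited above [29]; the specific encoder depth, auxiliary-head settings, and helper names here are our illustrative assumptions rather than the exact training code.

```python
import torch
import segmentation_models_pytorch as smp

N_CLASSES = 4  # number of defect types in the Severstal data

# encoder_weights="imagenet" gives the TLU-Net variant;
# encoder_weights=None gives the randomly initialized U-Net baseline.
model = smp.Unet(
    encoder_name="resnet18",            # or "densenet121"
    encoder_weights="imagenet",
    in_channels=3,
    classes=N_CLASSES,                  # per-pixel defect masks (256x1600xN)
    aux_params={"pooling": "avg",       # spatial average pooling of the encoder output
                "classes": N_CLASSES},  # linear multi-label classification head
)

bce = torch.nn.BCEWithLogitsLoss(reduction="sum")

def joint_loss(images, masks, labels):
    """Sum of classification and segmentation BCE terms, in the spirit of Eq. (1).
    BCEWithLogitsLoss folds the sigmoid into the loss for numerical stability."""
    mask_logits, label_logits = model(images)  # (masks, labels) when aux_params is set
    return bce(label_logits, labels) + bce(mask_logits, masks)
```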
### III-C Evaluation metrics

We evaluate the steel defect classification performance using multi-label classification accuracy (MLA) and the average area under the receiver operating curve (AUC) across the 4 classes. The MLA is defined as the proportion of correctly predicted labels to the total number of labels for an instance. We treat the multi-label classification as 4 separate binary classifiers and compute the average AUC across the four classifiers. We have used DICE and intersection over union (IoU) to evaluate the performance of steel defect segmentation. The DICE metric for each class is defined as follows:

$DICE=\frac{2|X\cap Y|}{|X|+|Y|}$ (2)

The IoU is defined as follows:

$IoU=\frac{|X\cap Y|}{|X\cup Y|}$ (3)

where $X$ and $Y$ are the ground-truth and the predicted segmentation masks, $\cup$ indicates the union operation, $\cap$ indicates the intersection, and $|\cdot|$ denotes cardinality.

Figure 5: Comparison of average MLA for different networks and initializations using different amounts of training data.

### III-D Results and discussion

Fig. 5 shows the MLA (%) comparison for different networks and initializations. It is clear from the figure that the TLU-Net achieves a $\sim$5% (absolute) improvement in MLA compared to random initialization with 100% training data. This indicates that the features learned using ImageNet can help steel defect classification as well. The MLA gap increases to $\sim$8% (absolute) as the training data is reduced to $50\%$. The performance of TLU-Net does not drop significantly as the training data is reduced. This indicates that the TLU-Net is helpful when there is a limited number of annotated data points. ResNet(ImageNet) and DenseNet(ImageNet) perform best in the case of 100% and 50% of the training data, respectively. This could be because the DenseNet has fewer parameters than the ResNet and hence needs less data but has less representational power.

Figure 6: Comparison of AUC plots of the four classes for different networks and initializations. The title shows the average AUC across the four classes.

Fig. 6 shows the AUC for different networks and initializations. It is clear from the figure that the best AUC is achieved for class 1 in all cases. Class 3 and class 4 defects demonstrate the poorest performance. This is mainly because of the number of training samples for each class. The AUC of all classes improves when using the TLU-Net compared to the U-Net. DenseNet(ImageNet) has the highest AUC.

Figure 7: Illustration of segmentation mask prediction. (row a) The input images (row b) The ground truth masks (row c) The masks predicted by ResNet(Random) (row d) the masks predicted by ResNet(ImageNet). The corresponding DICE for each prediction is shown in the title of the image.

Fig. 4 shows the box plots of the DICE and IoU for the different networks with 100% and 50% of the training data. In all cases, the median and mean performance of the TLU-Net are better than those of the U-Net. In the case of 100% training data, the ResNet(ImageNet) DICE/IoU is better than that of all the other models, and it has the smallest 75% confidence interval. The TLU-Net with ResNet shows an improvement of $\sim$26% (relative) compared to the U-Net with ResNet. The TLU-Net with DenseNet shows an improvement of $\sim$5% (relative) compared to the U-Net with DenseNet. But in the case of 50% training data, the DenseNet with transfer learning shows an improvement of 60% (relative), and the ResNet shows an improvement of 12% (relative).
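As a concrete reference for these scores, below is a minimal sketch of the DICE and IoU computations in Eqs. (2) and (3) for binary masks; the tensor format and the epsilon guard are our assumptions.

```python
import torch

def dice_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """DICE (Eq. 2) and IoU (Eq. 3) for binary 0/1 masks of equal shape.
    `eps` guards against division by zero for empty masks (an assumption)."""
    inter = (pred * target).sum()      # |X intersection Y|
    total = pred.sum() + target.sum()  # |X| + |Y|
    union = total - inter              # |X union Y|
    dice = 2.0 * inter / (total + eps)
    iou = inter / (union + eps)
    return dice.item(), iou.item()
```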
These results clearly indicate that the gain from using transfer learning is higher when the number of annotated samples is lower.

Figure 8: Comparison of validation loss/MLA evolution over epochs for different networks and initializations.

Fig. 8 shows the DICE/MLA metrics on validation data during the course of training. This helps us understand the convergence rate of the different networks. It is clear from the figure that the TLU-Net has a higher DICE at the beginning of the epochs than the U-Net. The converged DICE value for the TLU-Net is also higher than that of the U-Net. This clearly indicates that transfer learning helps the model converge faster. Similar observations apply to the MLA plot as well. It is interesting to note that the pre-trained model’s starting accuracy is significantly higher than that of the random initialization. This is mainly because the classifier directly uses the encoder output for classification, and the encoder is initialized with the pre-trained weights. It implies that the pre-trained features are useful in discriminating between the different steel defects.

Figure 9: Histogram of the difference between the DICE obtained using transfer learning and random initialization. The red line indicates the zero line.

Fig. 7 shows four illustrative examples of mask predictions for the TLU-Net using ResNet and the U-Net using ResNet. In the first column, the TLU-Net detects the defect more accurately, while the U-Net fails to detect some parts of the defects. In the second column, the TLU-Net shows some false positive detections because of illumination differences. In the third column, both networks fail to detect some of the defects. This is mainly because the training data has few defects belonging to this class. We hypothesize that the TLU-Net also fails because the defect shape is more complex, and pre-learned features may not help in this scenario. In the fourth column, the TLU-Net can detect the defect in the form of a fine line, while the U-Net fails to detect these lines. Fig. 9 shows the histogram of the DICE difference between the TLU-Net and the U-Net using both ResNet and DenseNet. It is clear from the figure that the histogram is skewed toward positive values. We observe improvement for 81% of images with ResNet and 63% of images with DenseNet.

## IV Conclusion

In this work, we propose to use the transfer learning framework for steel defect classification and segmentation. We use a U-Net architecture as the base architecture and explore two kinds of encoders: ResNet and DenseNet. We compare these nets’ performance using random initialization and pre-trained networks trained on the ImageNet data set. We found that the performance of transfer learning is superior both in terms of defect segmentation and classification. We also found that the performance gap increases as the training data decreases. We further found that the convergence rate with transfer learning is better than that of random initialization. We have found that transfer learning performance is poor for rare defect types and complex-shaped defects. As part of future work, we would like to work on transfer learning to handle more complex shapes using synthetic data, and on rare defect type generalization using generative models. We also want to explore semi/weakly supervised learning approaches to reduce the annotated training data requirement.
## References * [1] Praminda Caleb-Solly and Jim E Smith “Adaptive surface inspection via interactive evolution” In _Image and Vision Computing_ 25.7 Elsevier, 2007, pp. 1058–1072 * [2] Kechen Song and Yunhui Yan “A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects” In _Applied Surface Science_ 285 Elsevier, 2013, pp. 858–864 * [3] Yongsheng Dong et al. “Texture classification and retrieval using shearlets and linear regression” In _IEEE transactions on cybernetics_ 45.3 IEEE, 2014, pp. 358–369 * [4] Yann LeCun, Yoshua Bengio and Geoffrey Hinton “Deep learning” In _nature_ 521.7553 Nature Publishing Group, 2015, pp. 436–444 * [5] Pei-Hung Chen and Shen-Shyang Ho “Is overfeat useful for image-based surface defect classification tasks?” In _2016 IEEE International Conference on Image Processing (ICIP)_ , 2016, pp. 749–753 IEEE * [6] Young-Jin Cha et al. “Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types” In _Computer-Aided Civil and Infrastructure Engineering_ 33.9 Wiley Online Library, 2018, pp. 731–747 * [7] Limei Song et al. “Weak micro-scratch detection based on deep convolutional neural network” In _IEEE Access_ 7 IEEE, 2019, pp. 27547–27554 * [8] Jiangyun Li, Zhenfeng Su, Jiahui Geng and Yixin Yin “Real-time detection of steel strip surface defects based on improved yolo detection network” In _IFAC-PapersOnLine_ 51.21 Elsevier, 2018, pp. 76–81 * [9] Shiyang Zhou et al. “Classification of surface defects on steel sheet using convolutional neural networks” In _Mater. Technol_ 51.1, 2017, pp. 123–131 * [10] Li Yi, Guangyao Li and Mingming Jiang “An End-to-End Steel Strip Surface Defects Recognition System Based on Convolutional Neural Networks” In _steel research international_ 88.2 Wiley Online Library, 2017, pp. 1600068 * [11] Vidhya Natarajan, Tzu-Yi Hung, Sriram Vaikundam and Liang-Tien Chia “Convolutional networks for voting-based anomaly classification in metal surface inspection” In _2017 IEEE International Conference on Industrial Technology (ICIT)_ , 2017, pp. 986–991 IEEE * [12] Ruoxu Ren, Terence Hung and Kay Chen Tan “A generic deep-learning-based approach for automated surface inspection” In _IEEE transactions on cybernetics_ 48.3 IEEE, 2017, pp. 929–940 * [13] Daniel Soukup and Reinhold Huber-Mörk “Convolutional neural networks for steel surface defect detection from photometric stereo images” In _International Symposium on Visual Computing_ , 2014, pp. 668–677 Springer * [14] Yu He, Kechen Song, Qinggang Meng and Yunhui Yan “An end-to-end steel surface defect detection approach via fusing multiple hierarchical features” In _IEEE Transactions on Instrumentation and Measurement_ 69.4 IEEE, 2019, pp. 1493–1504 * [15] Didarul Amin and Shamim Akhter “Deep Learning-Based Defect Detection System in Steel Sheet Surfaces” In _2020 IEEE Region 10 Symposium (TENSYMP)_ , 2020, pp. 444–448 IEEE * [16] Domen Tabernik, Samo Šela, Jure Skvarč and Danijel Skočaj “Segmentation-based deep-learning approach for surface-defect detection” In _Journal of Intelligent Manufacturing_ 31.3 Springer, 2020, pp. 759–776 * [17] Qiwu Luo et al. “Automated Visual Defect Detection for Flat Steel Surface: A Survey” In _IEEE Transactions on Instrumentation and Measurement_ 69.3 IEEE, 2020, pp. 
626–644
* [18] Jonti Talukdar, Sanchit Gupta, PS Rajpura and Ravi S Hegde “Transfer learning for object detection using state-of-the-art deep neural networks” In _2018 5th International Conference on Signal Processing and Integrated Networks (SPIN)_ , 2018, pp. 78–83 IEEE
* [19] Varun Belagali et al. “Two step convolutional neural network for automatic glottis localization and segmentation in stroboscopic videos” In _Biomedical Optics Express_ 11.8 Optical Society of America, 2020, pp. 4695–4713
* [20] Ruoqi Sun et al. “Not all areas are equal: Transfer learning for semantic segmentation via hierarchical region selection” In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 4360–4369
* [21] Shai Ben David, Tyler Lu, Teresa Luu and Dávid Pál “Impossibility theorems for domain adaptation” In _Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics_ , 2010, pp. 129–136 JMLR Workshop and Conference Proceedings
* [22] Olaf Ronneberger, Philipp Fischer and Thomas Brox “U-net: Convolutional networks for biomedical image segmentation” In _International Conference on Medical image computing and computer-assisted intervention_ , 2015, pp. 234–241 Springer
* [23] Jia Deng et al. “Imagenet: A large-scale hierarchical image database” In _2009 IEEE conference on computer vision and pattern recognition_ , 2009, pp. 248–255 IEEE
* [24] “Severstal: Steel Defect Detection on Kaggle Challenge”
* [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun “Deep residual learning for image recognition” In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778
* [26] Gao Huang, Zhuang Liu, Laurens Van Der Maaten and Kilian Q Weinberger “Densely connected convolutional networks” In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 4700–4708
* [27] Diederik P Kingma and Jimmy Ba “Adam: A method for stochastic optimization” In _arXiv preprint arXiv:1412.6980_ , 2014
* [28] Adam Paszke et al. “Automatic differentiation in pytorch”, 2017
* [29] Pavel Yakubovskiy “Segmentation Models Pytorch” In _GitHub repository_ GitHub, https://github.com/qubvel/segmentation_models.pytorch, 2020
# Detection of Insider Attacks in Distributed Projected Subgradient Algorithms

Sissi Xiaoxiao Wu, Gangqiang Li, Shengli Zhang, and Xiaohui Lin

This work is supported by the National Natural Science Foundation of China under Grant 61701315; by Shenzhen Technology R$\&$D Fund JCYJ20170817101149906 and JCYJ20190808120415286; by Shenzhen University Launch Fund 2018018. S. X. Wu, G. Li, S. Zhang and X. Lin are with the College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China. G. Li is the corresponding author. E-mails<EMAIL_ADDRESS>{xxwu.eesissi, zsl, <EMAIL_ADDRESS>

###### Abstract

Gossip-based distributed algorithms are widely used to solve decentralized optimization problems in various multi-agent applications, while they are generally vulnerable to data injection attacks by internal malicious agents, as each agent locally estimates its descent direction without authorized supervision. In this work, we explore the application of artificial intelligence (AI) technologies to detect internal attacks. We show that a general neural network is particularly suitable for detecting and localizing the malicious agents, as it can effectively explore nonlinear relationships underlying the collected data. Moreover, we propose to adopt one of the state-of-the-art approaches in federated learning, i.e., a collaborative peer-to-peer machine learning protocol, to facilitate training our neural network models by gossip exchanges. This advanced approach is expected to make our models more robust to challenges with insufficient training data or mismatched test data. In our simulations, a least-squares problem is considered to verify the feasibility and effectiveness of AI-based methods. Simulation results demonstrate that the proposed AI-based methods are beneficial for improving the performance of detecting and localizing malicious agents over score-based methods, and the peer-to-peer neural network model is indeed robust to the targeted issues.

###### Index Terms: Gossip algorithms, distributed projected subgradient (DPS), artificial intelligence (AI) technology, internal attacks, malicious agents.

## I Introduction

Recently, decentralized optimization algorithms, as popular tools to handle large-scale computations, have been broadly applied in various fields [1, 2]. Typical examples include the Internet of Things (IoT) [3, 4], multi-agent systems [5, 6], wireless communication networks [7], power grids [8], and federated learning [9]. The design approach in the above applications is often referred to as gossip-based optimization, wherein interacting agents are randomly selected and exchange information following a point-to-point message passing protocol so as to optimize shared variables. Aiming at a coordinated response, these agents explicitly disclose their estimates (states) to neighboring agents in each iteration, thereby leading to a consistent globally optimal decision [1, 2, 10]. It is well known that gossip-based algorithms are inherently robust to intermittent communication and have built-in fault tolerance to agent failures. They can also provide a degree of privacy in many applications for participating agents without exchanging local user information [11]. Despite many advantages, these gossip-based algorithms, such as the distributed projected subgradient (DPS) algorithm [2], are inherently vulnerable to insider data injection attacks due to their flat architecture, since each agent locally estimates its (sub)gradient without any supervision [12, 13, 14].
Generally speaking, malicious agents’ (or attackers’) attacks on decentralized algorithms depend on specific attacking strategies. Attackers may interfere with distributed algorithms by injecting random data that hinders convergence [15]. Especially in an insider attack, the attacker always sends misleading messages to its neighbors to affect the distributed system, resulting in false convergence results [16, 17]. For example, a multi-agent system is forced to converge to the target values of attackers in [18], and an average consensus result is disturbed by coordinated attackers in [19]. The attack model we focus on in this work is one where attackers behave like stubborn agents [20]. To be more specific, they coordinate and send messages to peers that contain a constant bias [12, 13, 21, 17], and their states cannot be changed by other agents. As studied in [18, 19], the network always converges to a final state equal to the bias. This will bring serious security problems to distributed algorithms if the attackers cannot be detected effectively. Thus, a good defense mechanism is needed to protect these algorithms from internal data injection attacks.

To detect anomalous behaviors in decentralized optimization, one commonly used approach in the literature is to calculate a dependent score through statistical techniques based on the messages received during the execution of the protocol. For instance, in [15], the authors show that the convergence speed of the network will slow down when an attacker is present, and design a score to identify potential attacks. In [19], two score-based detection strategies are proposed to protect the randomized average consensus gossip algorithm from malicious nodes. In [22], the authors design a comparison score to search for differences between a node and its neighbors, then adjust the update rules to mitigate the impact of data falsification attacks. In [13], the decision score is computed by a temporal difference strategy to detect and isolate attackers. Similar score-based methods are also shown in [23, 24, 25]. While such methods have reasonable performance, the score design is somewhat ad hoc and relies heavily on experts to design sophisticated decision functions, and the detection thresholds of these score-based methods need to be adjusted judiciously.

To circumvent the above difficulties, our idea in this work is to utilize artificial intelligence (AI) technology to approximate more sophisticated decision functions. It is worth mentioning that AI technology has succeeded in many applications with the same purpose, including image recognition [26], natural language processing [27], power grids [28], and communications [29]. Furthermore, AI also plays an important role in network security [30], such as anomaly intrusion detection [31], malicious PowerShell [32], distributed denial of service (DDoS) attacks [33], and malicious nodes in communication networks [34, 35]. The main purpose of this work is to apply AI technology to address the problem of detecting and localizing attackers in decentralized gossip-based optimization algorithms. While our AI-based methods and training philosophy can be applied to a wide set of multi-agent algorithms and attack scenarios, we focus on testing the approach on a case that has been thoroughly studied in [13, 36], to facilitate the comparison. Concretely, we propose two AI-based strategies, namely the temporal difference strategy via neural networks (TDNN) and the spatial difference strategy via neural networks (SDNN).
We will show that even basic neural network (NN) models exhibit a good ability to extract nonlinearity from our training data and thus can well detect and localize attackers, given that 1) the collected training data can well represent the attack model, and 2) training data from all agents can be fully learned at the training center. Unfortunately, collecting good and sufficient data that perfectly fits the real attack model is usually difficult. First of all, due to the intrinsic nature of gossip algorithms, it is difficult and expensive to collect sufficient training samples at each agent. Also, with the emergence of new large-scale distributed agents in the network, it is sometimes hard to upload the decentralized data at each agent to a fusion center due to storage and bandwidth limitations [37]. Furthermore, as the insider attacks could occur at any agent in the network, the training data may not cover all the occurrences of the attack. Therefore, an individually trained NN model at each agent may not fit all insider attack events.

A new approach to alleviate these issues is to leverage decentralized federated learning [38], which utilizes the collaboration of agents to perform data operations inside the network by iterating local computations and mutual interactions. Such a learning architecture can be extremely useful for learning agents with access to only local/private data in a communication-constrained environment [39]. Specifically, as one of the state-of-the-art approaches in decentralized federated learning, gossip learning is very suitable for training NN models from decentralized data sources [40], with the advantages of high scalability and privacy preservation. Thus, we propose a collaborative peer-to-peer learning protocol to help train our NN models by gossip exchanges (a minimal sketch of the merge step appears below). Specifically, each agent in the network has a local model with the same architecture, and only relies on local collaboration with neighbors to learn the model parameters. It is worth noting that in this process each agent periodically trains the local model on its local data, and then sends the local model parameters to its neighbors. It is expected that each agent can learn a local model close to the _global model_ (i.e., an NN trained by the center, which contains training data from all agents), so as to provide robustness in the case of insufficient and mismatched local data.

It is also worth mentioning the differences between this work and some previous work. Previous work [19] aims at score-based methods for securing the gossip-based average consensus algorithm. [35] improves the score-based method by using AI-based methods, while it still targets an average consensus algorithm. We remark that the inputs for the AI model in [35] do not always work for optimization algorithms. [13, 36] provide some preliminary results for protecting optimization algorithms, while they only focus on partial neighboring information. This work is the first one that well elaborates AI-based methods for a DPS algorithm using full information from neighboring signals. More importantly, the proposed collaborative learning method is novel and effective in making the defense model more robust to different events of attacks, making our models more practical for multi-agent applications.
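As an illustration of the gossip exchange just described, here is a minimal sketch of one merge step, assuming PyTorch models with identical architectures at every agent; pairwise averaging is one common merge rule, used here for illustration rather than as the exact protocol.

```python
import torch

@torch.no_grad()
def merge_models(local_model, received_state, weight=0.5):
    """Blend a neighbor's parameters into the local model.

    `received_state` is the neighbor's state_dict; weight=0.5 gives
    pairwise averaging. Non-floating buffers (e.g., BatchNorm counters)
    are left untouched.
    """
    for name, param in local_model.state_dict().items():
        if param.is_floating_point():
            param.mul_(1.0 - weight).add_(received_state[name], alpha=weight)

# Each agent alternates local SGD steps on its private data with sending
# model.state_dict() to a random neighbor, which calls merge_models on receipt.
```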
In summary, the proposed AI-based strategies have the following characteristics: 1) they can automatically learn appropriate decision models from the training data, thus reducing the dependence on complicated pre-designed models; 2) they adaptively scale the decision thresholds between 0 and 1, which reduces the difficulty of threshold setting; 3) they improve the performance of detecting and localizing attackers and show good adaptability to agents of different degrees; 4) they have strong robustness in scenarios with insufficient or mismatched training data. Preliminary numerical results demonstrate that the proposed AI-based strategies are conducive to solving the insider attack problem faced by the DPS algorithm.

The rest of the paper is organized as follows. In Section II, we describe the decentralized multi-agent system and the attack scheme against the DPS algorithm. In Section III, we review score-based strategies and propose two AI-based defense strategies to detect and localize attackers. Section IV introduces a collaborative peer-to-peer training protocol for NNs, dealing with insufficient or mismatched samples available at different agents. Simulation results are given in Section V to confirm the effectiveness of the proposed strategies. We conclude this work in Section VI.

## II System Model

We consider a multi-agent network which can be defined by an undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, wherein $\mathcal{V}=\\{1,\cdots,n\\}$ represents the set of all agents and $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ represents the set of all edges. We define the set of neighbor nodes of an agent $i\in\mathcal{V}$ by $\mathcal{N}_{i}=\\{v_{j}\in\mathcal{V}:(v_{i},v_{j})\in\mathcal{E}\\}$. All the agents in the distributed network follow a gossip-based optimization protocol; see Algorithm 1. That is, in each iteration of information exchange, an agent only directly communicates with its neighbors. We thus define a time-varying graph as ${\mathcal{G}}(t)\mathrel{\mathop{:}}=(\mathcal{V},\mathcal{E}(t))$ for the $t$th iteration, and the associated weighted adjacency matrix is denoted by ${\bm{A}}(t)\in\mathbb{R}^{n\times n}$, where $[{\bm{A}}(t)]_{ij}\mathrel{\mathop{:}}=A_{ij}(t)=0$ if $(v_{j},v_{i})\notin\mathcal{E}(t)$. For this network with $n$ agents, we have the following assumption:

###### Assumption 1.

There exists a scalar $\zeta\in(0,1)$ such that for all $t\geq 1$ and $i=1,\cdots,n$:

* • $A_{ij}(t)\geq\zeta$ if $(i,j)\in\mathcal{E}(t)$;
* • $\sum_{i=1}^{n}A_{ij}(t)=1$, $A_{ij}(t)=A_{ji}(t)$;
* • The graph $(\mathcal{V},\cup_{\ell=1}^{B_{0}}\mathcal{E}(t+\ell))$ is connected for $B_{0}<\infty$.

The goal of these agents is to cooperatively solve the following optimization problem:

$\min_{\bm{x}}f(\bm{x})\mathrel{\mathop{:}}=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\bm{x})~{}~{}\text{s.t.}~{}~{}\bm{x}\in X\;.$ (1)

where $X\subseteq\mathbb{R}^{d}$ is a closed convex set common to all agents and $f_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}$ is a local objective function of agent $i$. Herein, $f_{i}$ is a convex and not necessarily differentiable function which is only known to agent $i$. In this setting, we denote the optimal value of problem (1) by $f^{\star}$. A decentralized solution to estimate $f^{\star}$ of this problem is the DPS algorithm [2].
In this algorithm, each agent locally updates its decision variable by fusing the estimates from its neighbors, and then takes the subgradient of its local function at the updated decision variable as the descent direction for the current iteration. To be more specific, when this algorithm is applied to solve problem (1), it performs the following iterations:

$\begin{split}&\bar{\bm{x}}_{i}(t)=\sum_{j=1}^{n}A_{ij}(t)\bm{x}_{j}(t)\;.\\\\[2.84544pt] &\bm{x}_{i}(t+1)={P}_{X}\big{[}\bar{\bm{x}}_{i}(t)-\gamma(t){\hat{\nabla}}f_{i}\big{(}\bar{\bm{x}}_{i}(t)\big{)}\big{]}\;.\end{split}$ (2)

for $t\geq 1$, where $A_{ij}(t)$ is a non-negative weight and $\gamma(t)>0$ is a diminishing stepsize. ${P}_{X}\big{[}\cdot\big{]}$ denotes the projection operation onto the set $X$ and $\hat{\nabla}f_{i}\left(\bar{\bm{x}}_{i}(t)\right)$ is a subgradient at agent $i$ of the local function $f_{i}$ at $\bm{x}=\bar{\bm{x}}_{i}(t)$. Then, we have the following result:

###### Fact 1.

[2] Under Assumption 1, if $\|\hat{\nabla}f_{i}(\bm{x})\|\leq C_{1}$ for some $C_{1}$ and for all $\bm{x}\in X$, and the step size satisfies $\sum_{t=1}^{\infty}\gamma(t)=\infty$, $\sum_{t=1}^{\infty}\gamma^{2}(t)<\infty$, then for all $i,j\in\mathcal{V}$ we have

$\lim_{t\rightarrow\infty}f(\bm{x}_{i}(t))=f^{\star}~{}~{}\text{and}~{}~{}\lim_{t\rightarrow\infty}\|\bm{x}_{i}(t)-\bm{x}_{j}(t)\|=0\;.$

The above fact tells us that for these convex problems, the DPS method will converge to an optimal solution of problem (1). In what follows, we discuss how this convergence changes when there is an attack within the network.

Algorithm 1 The gossip-based optimization protocol

Input: Number of instances $K$, and iterations $T$.
for $k=1,\cdots,K$ do
  Initial states: $\bm{x}_{i}^{k}(0)=\bm{\beta}_{i}^{k}~{}\forall~{}i\in{\mathcal{V}}$
  for $t=1,\cdots,T$ do
    $\bullet$ Uniformly wake up a random agent $i\in{\mathcal{V}}$
    $\bullet$ Agent $i$ selects agent $j\in{\mathcal{N}}_{i}$ with probability $P_{ij}$
    $\bullet$ The trustworthy agents $i,j\in\mathcal{V}_{t}$ update the states according to the rules in (2).
    $\bullet$ The malicious agents follow the attack scheme to keep their original states, as seen in (3).
  end for
end for

### II-A Data Injection Attack From Insider

In this setting, we assume that the set of agents $\mathcal{V}$ can be divided into two subsets: the set of trustworthy agents ${\mathcal{V}_{t}}$ and the set of malicious agents (attackers) ${\mathcal{V}_{m}}$, as seen in Fig. 1. We have ${\mathcal{V}}={\mathcal{V}}_{t}\cup{\mathcal{V}}_{m}$ and $n=|{\mathcal{V}}_{t}|+|{\mathcal{V}}_{m}|$. In our attack model, attackers are defined as agents whose estimates (or states) cannot be affected by other agents, and these coordinated attackers try to drag the trustworthy agents to their desired value. If $i\in{\mathcal{V}}_{t}$, a trustworthy agent will perform the rules in (2). Otherwise, an attacker $j\in{\mathcal{V}}_{m}$ will update its state with the following rule:

$\bm{x}_{j}(t)={\bm{\alpha}}+{\bm{r}}_{j}(t),~{}\forall~{}j\in\mathcal{V}_{m}\;.$ (3)

where $\bm{\alpha}$ is the target value of the attackers and ${\bm{r}}_{j}(t)$ is an artificial noise generated by the attackers to confuse the trustworthy agents. If there is more than one attacker in the network, we assume that they will coordinate with each other to converge to the desired value ${\bm{\alpha}}$.
Meanwhile, to disguise the attacks, they will independently generate artificial noise ${\bm{r}}_{j}(t)$ that decays exponentially with time, i.e., $\lim_{t\rightarrow\infty}\|{\bm{r}}_{j}(t)\|=0$ for all $j\in\mathcal{V}_{m}$. For the time-varying network, let $\mathcal{E}(\mathcal{V}_{t};t)$ be the edge set of the subgraph of $\mathcal{G}(t)$ with only the trustworthy agents in $\mathcal{V}_{t}$. The following assumption is needed to ensure a successful attack on the DPS algorithm:

###### Assumption 2.

There exist $B_{1},B_{2}<\infty$ such that for all $t\geq 1$, $1$) the composite sub-graph $(\mathcal{V}_{t},\cup_{\ell=t+1}^{t+B_{1}}\mathcal{E}(\mathcal{V}_{t};\ell))$ is connected; $2$) there exists a pair $i\in\mathcal{V}_{t}$, $j\in\mathcal{V}_{m}$ with $(i,j)\in\mathcal{E}(t)\cup\ldots\cup\mathcal{E}(t+B_{2}-1)$.

Based on this assumption, we have the following fact:

###### Fact 2.

[13] Under Assumptions 1 and 2, if $\|\hat{\nabla}f_{i}(\bm{x})\|\leq C_{2}$ for some $C_{2}$ and for all $\bm{x}\in X$, and $\gamma(t)\rightarrow 0$, we have:

$\lim_{t\rightarrow\infty}\max_{i\in\mathcal{V}_{t}}\|\bm{x}_{i}(t)-{\bm{\alpha}}\|=0\;.$

This fact implies that in this attack scheme, the attackers will succeed in steering the final states. This was also proved in our previous work [13].

## III Detection and Localization Strategies

The DPS algorithm runs in a fully decentralized fashion at the trustworthy agents $i\in\mathcal{V}_{t}$. The neighborhood detection (ND) task and the neighborhood localization (NL) task are then introduced for detecting and localizing attackers. To facilitate our discussion, we consider the following hypotheses. The ND task is defined as follows:

$\begin{split}{\cal H}_{0}^{i}:&~{}{\mathcal{N}}_{i}\cap{\mathcal{V}}_{m}=\emptyset,\quad\text{No neighbor is an attacker},\\\ {\cal H}_{1}^{i}:&~{}{\mathcal{N}}_{i}\cap{\mathcal{V}}_{m}\neq\emptyset,\quad\text{At least one neighbor is the attacker},\end{split}$ (4)

where ${\cal H}_{0}^{i}$ and ${\cal H}_{1}^{i}$ are the two events of agent $i$ for the ND task. When event ${\cal H}_{1}^{i}$ is true at agent $i$, the second task is to check whether a neighbor $j\in\mathcal{N}_{i}$ is an attacker. The NL task is defined as follows:

$\begin{split}{\cal H}_{0}^{ij}:&~{}j\notin{\mathcal{V}}_{m},\quad\text{Neighbor $j$ is not an attacker},\\\ {\cal H}_{1}^{ij}:&~{}j\in{\mathcal{V}}_{m},\quad\text{Neighbor $j$ is an attacker},~{}~{}~{}~{}\end{split}$ (5)

where ${\cal H}_{0}^{ij}$ and ${\cal H}_{1}^{ij}$ are the two events of agent $i$ for the NL task. If event ${\cal H}_{1}^{ij}$ is true, we say that the attacker is localized. We remark that such hypotheses were also made in previous work [19, 36]. An illustration of the neighborhood detection and localization tasks is shown in Fig. 1. Notice that the NL task is executed only if the event ${\cal H}_{1}^{i}$ in the ND task is true. Moreover, once the attacker is localized, trustworthy agents will disconnect from the attacker in the next communication. In this way, it is expected that the network can exclude all the attackers.

Figure 1: Neighborhood tasks in the attack detection scheme. Each trustworthy agent performs the ND and NL tasks independently for isolating attackers from the network.

To proceed with our tasks, we run the asynchronous gossip-based optimization algorithm (Algorithm 1) for $K$ instances. We denote $\tilde{\bm{X}}_{i}^{k}$ as the neighborhood state matrix collected by agent $i$ in the $k$th instance, i.e., $k\in[1,\cdots,K]$.
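To make this data-collection step concrete, the following is a minimal NumPy sketch of one instance of Algorithm 1 under the stubborn-attacker model of Eq. (3). Scalar states, a simple box projection, pairwise averaging weights, and the specific noise decay rate are all simplifying assumptions for illustration.

```python
import numpy as np

def run_instance(adj, f_grad, attackers, alpha, T=2000,
                 proj=lambda x: np.clip(x, -10.0, 10.0)):
    """One instance of the asynchronous gossip DPS protocol (Algorithm 1)
    with stubborn attackers (Eq. (3)). `adj` is an n-by-n 0/1 adjacency
    matrix, `f_grad[i]` returns a subgradient of f_i, states are scalar."""
    n = adj.shape[0]
    x = np.random.randn(n)                            # initial states beta_i^k
    for t in range(1, T + 1):
        i = np.random.randint(n)                      # uniformly wake up an agent
        j = np.random.choice(np.flatnonzero(adj[i]))  # pick a neighbor
        gamma = 1.0 / t                               # diminishing step size
        avg = 0.5 * (x[i] + x[j])                     # pairwise gossip fusion
        for a in (i, j):
            if a in attackers:                        # Eq. (3): stay near alpha
                x[a] = alpha + np.random.randn() * 0.99 ** t
            else:                                     # Eq. (2): projected step
                x[a] = proj(avg - gamma * f_grad[a](avg))
        # each trustworthy agent would record its neighbors' states here
        # to build the neighborhood state matrix of Eq. (6)
    return x
```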
The ND and NL tasks can be described as follows:

$\displaystyle\tilde{\bm{X}}_{i}^{k}\mathrel{\mathop{:}}=$ $\displaystyle[\bm{x}_{i}^{k},\bm{x}_{1}^{k},\cdots,\bm{x}_{j}^{k},\cdots,\bm{x}_{\left|\mathcal{N}_{i}\right|}^{k}]^{\top}~{}\forall~{}j\in{\cal N}_{i},$ (6)
$\displaystyle{y}_{i}={\mathrm{F_{ND}}}(\tilde{\bm{X}}_{i}^{1},\cdots,\tilde{\bm{X}}_{i}^{K})\overset{{\cal H}_{1}^{i}}{\underset{{\cal H}_{0}^{i}}{\gtrless}}\delta,$ (7)
$\displaystyle z_{ij}={\mathrm{F_{NL}}}(\tilde{\bm{X}}_{i}^{1},\cdots,\tilde{\bm{X}}_{i}^{K})\overset{{\cal H}_{1}^{ij}}{\underset{{\cal H}_{0}^{ij}}{\gtrless}}\epsilon.$ (8)

where $\bm{x}_{j}^{k}\in\mathbb{R}^{d}$ is the state vector of agent $j\in\mathcal{N}_{i}$, which can be directly obtained by agent $i\in\mathcal{V}_{t}$ from its neighbors, $y_{i}\in\mathbb{R}$ is a metric that indicates whether an attacker is present in the neighborhood of agent $i$, and $\bm{z}_{ij}=[z_{i1},\cdots,{z}_{i\left|\mathcal{N}_{i}\right|}]^{\top}\in\mathbb{R}^{\left|\mathcal{N}_{i}\right|}$ is the metric vector for the localization task. Herein, $\delta>0$ and $\epsilon>0$ are some pre-designed thresholds. On top of the detection and localization strategies, we have an important assumption about the initial states:

###### Assumption 3.

We have prior information about the expected initial states, namely the mean of the attackers, $\mathbb{E}[\bm{x}_{j}^{k}(0)]=\bar{\bm{\alpha}},j\in\mathcal{V}_{m}$, and that of the trustworthy agents, $\mathbb{E}[\bm{x}_{i}^{k}(0)]=\bar{\bm{\beta}},i\in\mathcal{V}_{t}$. Moreover, $\bar{\bm{\alpha}}\neq\bar{\bm{\beta}}$ in general.

Note that this assumption is practical, as the attacker always aims at dragging the trustworthy agents to its desired value, which is usually different from the optimal solution. Otherwise, we may not consider it a meaningful attack.

###### Remark 1.

${\mathrm{F_{ND}}}(\cdot)$ and ${\mathrm{F_{NL}}}(\cdot)$ are statistical decision functions judiciously designed for the ND and NL tasks, respectively. For each agent $i\in\mathcal{V}_{t}$, these decision functions are used to calculate the criterion metrics to identify attackers.

### III-A The Score-based Method

As a remedy to protect these distributed optimization algorithms, score-based methods stemming from statistical techniques have been studied in [18, 19]. For the gossip-based DPS algorithm, a temporal difference strategy (TD) in [13] and a spatial difference strategy (SD) in [36] are proposed to detect and localize the attackers; these two strategies are reviewed below.

#### III-A1 Temporal Difference Strategy

Since the expected initial states of the attackers and the trustworthy agents have different means, when $t\rightarrow\infty$, the network will be misled by the attackers to $\mathbb{E}[\bm{x}_{j}^{k}(\infty)]=\bar{\bm{\alpha}}=\mathbb{E}[\bm{x}_{i}^{k}(\infty)]$. This implies that the difference between the initial state and the steady state can be used to detect anomalies. For each trustworthy agent $i\in\mathcal{V}_{t}$, the following score can be evaluated 111For each instance $k$, each agent evaluates $\Delta_{j}(t)\triangleq\bm{x}^{k}_{j}(t)-\bm{x}_{j}^{k}(t-1)$ at iteration $t$ and sums it over all the iterations to obtain $\big{(}\bm{x}^{k}_{j}(T)-\bm{x}_{j}^{k}(0)\big{)}$:

${\xi}_{ij}\mathrel{\mathop{:}}=\frac{1}{Kd}\sum_{k=1}^{K}{\bm{1}^{\top}\big{(}\bm{x}^{k}_{j}(T)-\bm{x}_{j}^{k}(0)\big{)}},j\in{\cal N}_{i}.$ (9)

Herein, $T\rightarrow\infty$ is sufficiently large, $d$ is the state dimension of the agents, and $\bm{1}$ is an all-one vector.
$\bm{x}^{k}_{j}(T)$ and $\bm{x}_{j}^{k}(0)$ are respectively the last and the first state of agent $j$ observed by agent $i$. To discern the events in the ND task, the detection criterion is defined as follows:

$\hat{y}_{i}\mathrel{\mathop{:}}=\frac{1}{|{\mathcal{N}}_{i}|}\sum_{j\in{\mathcal{N}}_{i}}\left|\big{(}\xi_{ij}-\overline{\xi}_{i}\big{)}\right|\overset{{\mathcal{H}}_{0}^{i}}{\underset{{\mathcal{H}}_{1}^{i}}{\lessgtr}}\delta_{\mathrm{TD}}.$ (10)

where $\overline{\xi}_{i}=1/\left|\mathcal{N}_{i}\right|\sum_{j\in\mathcal{N}_{i}}\xi_{ij}$ is the neighborhood average of agent $i$. Intuitively, $\mathbb{E}[\hat{y}_{i}]=0$ when the event $\mathcal{H}_{0}^{i}$ is true, whereas $\mathbb{E}[\hat{y}_{i}]\neq 0$ when the event $\mathcal{H}_{1}^{i}$ is true. $\delta_{\mathrm{TD}}$ is a pre-designed threshold for the ND task. For the NL task, the two events $\mathcal{H}_{1}^{ij}$ and $\mathcal{H}_{0}^{ij}$ are checked by the following criterion:

$\hat{z}_{ij}\mathrel{\mathop{:}}=|{\xi}_{ij}|\overset{{\cal H}_{1}^{ij}}{\underset{{\cal H}_{0}^{ij}}{\lessgtr}}\epsilon_{\mathrm{TD}},~{}\forall~{}j\in{\cal N}_{i}\;.$ (11)

Herein, $\epsilon_{\mathrm{TD}}$ is a pre-designed threshold used to identify which neighbor is the attacker. Note that $\mathbb{E}[\hat{z}_{ij}]$ is close to 0 if agent $j$ is an attacker, as seen in (9).

#### III-A2 Spatial Difference Strategy

According to (3), attackers always try to mislead the network to their desired value, and thus the transient states in the network will also be affected during the attack process. Unlike the TD method, which only uses the initial state and the steady state, the transient states are considered in the SD method for better performance. We expect that the expected state difference $\mathbb{E}[\bm{x}_{i}^{k}(t)-\bm{x}_{j}^{k}(t)]$ between neighbor $j$ and monitoring agent $i$ will behave differently under events $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$, i.e., $j\in\mathcal{N}_{i}$ and $0<t<\infty$. For the ND task, agent $i$ evaluates the following metrics:

$\overline{\bm{\varphi}}_{ij}^{k}\mathrel{\mathop{:}}=\sum_{t=0}^{T}\Big{(}\bm{x}_{j}^{k}(t)-\overline{\bm{x}}_{i}^{k}(t)\Big{)},j\in\mathcal{N}_{i}.$ (12)

$\check{y}_{i}\mathrel{\mathop{:}}=\frac{1}{|{\cal N}_{i}|}\sum_{j\in{\cal N}_{i}}\Big{(}\frac{1}{Kd}\sum_{k=1}^{K}{\bm{1}^{\top}\overline{\bm{\varphi}}_{ij}^{k}}\Big{)}^{2}\overset{{\cal H}_{0}^{i}}{\underset{{\cal H}_{1}^{i}}{\lessgtr}}\delta_{\mathrm{SD}}.$ (13)

where

$\overline{\bm{x}}_{i}^{k}(t)=1/\left|\mathcal{N}_{i}\cup i\right|\sum_{j\in\\{\mathcal{N}_{i}\cup i\\}}\bm{x}_{j}^{k}(t)$ (14)

is the neighborhood average of agent $i$ at iteration $t$ in instance $k$. $\overline{\bm{\varphi}}_{ij}^{k}$ is the sum of differences between neighbor agent $j$ and the neighborhood average $\overline{\bm{x}}_{i}^{k}(t)$ over all iterations. $\delta_{\mathrm{SD}}$ is a pre-designed threshold. For the NL task, we compare the state of neighbor agent $j$ with that of agent $i$ to check the events in (5). The following criteria are used:

$\bm{\varphi}_{ij}^{k}\mathrel{\mathop{:}}=\sum_{t=0}^{T}\big{(}\bm{x}_{j}^{k}(t)-\bm{x}_{i}^{k}(t)\big{)}-\overline{\bm{\varphi}}_{ii}^{k},j\in\mathcal{N}_{i}.$ (15)

$\check{z}_{ij}\mathrel{\mathop{:}}=\Big{(}\frac{1}{Kd}\sum_{k=1}^{K}{\bm{1}^{\top}\bm{\varphi}_{ij}^{k}}\Big{)}^{2}\overset{{\cal H}_{0}^{ij}}{\underset{{\cal H}_{1}^{ij}}{\lessgtr}}\epsilon_{\mathrm{SD}},\forall j\in\mathcal{N}_{i}.$ (16)

where $\bar{\bm{\varphi}}_{ii}^{k}$ is calculated by agent $i$ itself, as seen in (12).
### III-B The AI-based Method

Figure 2: The TDNN method at trustworthy agent $i$: (Left) NN for the ND task, (Right) NN for the NL task. SDNN shares a similar structure with TDNN.

In fact, the reason why (10), (11), (13) and (16) work is that anomalies cause the measured metrics to behave statistically differently. In such score-based methods, the decision functions of the ND and NL tasks are approximately linear or quadratic functions, which fuse the states obtained by agent $i$ into a scalar score for classification. A natural question that follows is whether there exist more sophisticated nonlinear functions that can better classify those events in the two neighborhood tasks. This is a natural application of AI technology for learning the complex mapping relationships in a classification problem. In the following, we propose to apply NNs to handle the ND and NL tasks. Let $M={\mathrm{max}}_{i}\left|\mathcal{N}_{i}\right|$ be the input dimension of these NNs. Then, the NNs can be trained at each monitoring agent in an offline manner using data collected from each agent. To facilitate our approach, we use the following process to collect training data for the AI-based methods:

###### Assumption 4.

Assume that we have set up a training data collection process which contains $P$ training networks $\mathcal{G}_{p}=(\mathcal{V}_{p},\mathcal{E})$ for $p=1,2,\ldots,P$. For each network $\mathcal{G}_{p}$, a randomly chosen agent (there could be more than one attacker in a training network, but herein we consider only the simplest case) takes the role of an attacker. Based on Assumption 3, we run the asynchronous gossip-based optimization algorithm (Algorithm 1) for $K$ instances and record $\tilde{\bm{X}}_{i}^{k}$ as the data samples with the ground truth label ‘1’ for event $\mathcal{H}_{1}^{i}$, where agent $i$ is either in the neighborhood of (next to) the attacker or beyond the neighborhood of (far from) the attacker. We remark here that $\tilde{\bm{X}}_{i}^{k}$ is the local data collected by agent $i$, which is not allowed to be exchanged among agents. On the other hand, the ground truth label ‘0’ for event $\mathcal{H}_{0}^{i}$ can be easily obtained by running the gossip-based algorithm on $\mathcal{G}$.

We remark that how to specifically set up the training data collection process is a challenging problem that is beyond the scope of this work. Herein, we simply assume that each agent can obtain its own training data with correct labels. Other technical issues concerning the details of the training process will be addressed in future work.

#### III-B1 Temporal Difference Strategy via NN

Armed with training data, we propose a method called TDNN, which uses the temporal difference values as the input of the NN to perform the neighborhood tasks, as illustrated in Fig. 2. Based on the metric in (9), the inputs for the two neighborhood tasks are as follows:

$\bm{a}^{0}=\hat{\bm{a}}^{0}=[\xi_{i1},\xi_{i2},\cdots,\xi_{iM}]^{\top},$ (17)

where $\xi_{ij}$ can be obtained by agent $i$.
For the ND task, the computation process of the NN is described below:

${\bm{a}}^{h}=\sigma({\bm{W}}^{h}{\bm{a}}^{h-1}+{\bm{b}}^{h}),\quad h=1,...,n-1;$ (18)

$\tilde{y}_{i}=g({\bm{W}}^{n}{\bm{a}}^{n-1}+{\bm{b}}^{n}),\quad\tilde{y}_{i}\overset{{\cal H}_{1}^{i}}{\underset{{\cal H}_{0}^{i}}{\gtrless}}\delta_{\mathrm{NN}},$ (19)

where $\bm{a}^{0}$ is the input of the NN, $\sigma(\cdot)$ is the activation function, $g(\cdot)$ is the sigmoid function defined as $g(x)=1/(1+e^{-x})$, $\bm{W}^{h}\in\mathbb{R}^{L_{h}\times L_{h-1}}$ is the weight matrix between layers $h$ and $h-1$, $L_{h}$ represents the number of neurons in layer $h$, and $\bm{b}^{h}\in\mathbb{R}^{L_{h}}$ and $\bm{a}^{h}\in\mathbb{R}^{L_{h}}$ are the bias vector and the activation output in layer $h$, respectively. $\tilde{y}_{i}\in\mathbb{R}$ is the expected output, and $\delta_{\mathrm{NN}}\in[0,1]$ is some prescribed threshold for the detection task. For the NL task, a similar NN structure is used, except for the number of neurons in the output layer. The design is given as follows:

$\hat{\bm{a}}^{h}=\sigma(\hat{\bm{W}}^{h}\hat{\bm{a}}^{h-1}+\hat{\bm{b}}^{h}),\quad h=1,...,n-1;$ (20)

$\tilde{\bm{z}}_{i}=g(\hat{\bm{W}}^{n}\hat{\bm{a}}^{n-1}+\hat{\bm{b}}^{n}),\quad\tilde{z}_{ij}\overset{{\cal H}_{1}^{ij}}{\underset{{\cal H}_{0}^{ij}}{\gtrless}}\epsilon_{\mathrm{NN}},$ (21)

where $\hat{\bm{a}}^{0}$ is the input of the NN, $\hat{\bm{W}}^{h}$, $\hat{\bm{b}}^{h}$, and $\hat{\bm{a}}^{h}$ are the weight matrix, bias term, and activation output, respectively, $\tilde{\bm{z}}_{i}=[\tilde{z}_{i1},\cdots,\tilde{z}_{iM}]\in\mathbb{R}^{M}$ is the expected output of the NL task, and $\epsilon_{\mathrm{NN}}\in[0,1]$ is some prescribed threshold. Notice that the actual output is encoded as a one-hot vector during the training stage, e.g., $\tilde{\bm{z}}_{i}=\bm{e}_{j}$ if $j\in\mathcal{V}_{m}$; see Fig. 2 (Right).

#### III-B2 Spatial Difference Strategy via NN

Both TD and TDNN utilize only the initial state and the steady state of agents rather than the transient states, leading to the possibility of losing some key features in the neighborhood tasks. In particular, the neighborhood transient state information is not effectively utilized for extracting key classification features. Therefore, we propose a strategy called SDNN to improve the detection and localization performance by using transient states and an NN. As a malicious agent always tries to influence and steer the trustworthy agents away from the true value, we have $\mathbb{E}[\bm{x}_{j}^{k}(t)-\overline{\bm{x}}_{i}^{k}(t)|\mathcal{H}_{1}^{ij}]\neq\mathbb{E}[\bm{x}_{j}^{k}(t)-\overline{\bm{x}}_{i}^{k}(t)|\mathcal{H}_{0}^{ij}]$. Thus, we can compare the state of neighbor agent $j$ and the neighborhood average of agent $i$ over time. The metrics for the ND and NL tasks can be described as follows:

$\bm{s}_{ij}^{k}\mathrel{\mathop{:}}=\sum_{t=0}^{T}\big{(}\bm{x}_{j}^{k}(t)-\overline{\bm{x}}_{i}^{k}(t)\big{)},\quad j\in\mathcal{N}_{i},$ (22)

$\chi_{ij}\mathrel{\mathop{:}}=\frac{1}{Kd}\sum_{k=1}^{K}\bm{1}^{\top}\bm{s}_{ij}^{k},\quad j\in\mathcal{N}_{i},$ (23)

where $\bm{x}_{j}^{k}(t)$ is the state of agent $j$ at iteration $t$ in instance $k$, and $\bm{s}_{ij}^{k}$ is the sum of statistical differences between agent $j$ and the neighborhood average of agent $i$. Note that $\overline{\bm{x}}_{i}^{k}(t)$ has been defined in (14). Herein, our goal is to accurately detect insider attacks and identify whether the attacker appears in the neighborhood of agent $i$. The detection structures of SDNN are similar to those of TDNN, as seen in Fig. 2. Therein, we use the following inputs for the NN models of the ND and NL tasks:

${\bm{a}}^{0}=\hat{\bm{a}}^{0}=[\chi_{i1},\chi_{i2},\cdots,\chi_{iM}]^{\top}.$ (24)
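As a concrete illustration of (17)-(21) and (24), the following is a minimal sketch of the shared feed-forward pass; we assume ReLU hidden activations, and the parameter lists are placeholders for the trained weights rather than the actual model.

```python
import numpy as np

def forward(a0, weights, biases):
    """Feed-forward pass of Eqs. (18)-(21).

    a0: input vector of Eq. (17) or (24), shape (M,).
    weights, biases: per-layer parameters W^h and b^h.
    Hidden layers use sigma = ReLU; the output layer uses the sigmoid g.
    """
    a = a0
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(W @ a + b, 0.0)  # Eq. (18)/(20): a^h = sigma(W^h a^{h-1} + b^h)
    # Eq. (19)/(21): output layer with sigmoid g(x) = 1 / (1 + e^{-x}).
    return 1.0 / (1.0 + np.exp(-(weights[-1] @ a + biases[-1])))
```

For the ND task the output layer has a single neuron and the result is compared with $\delta_{\mathrm{NN}}$, while for the NL task it has $M$ neurons whose entries are compared with $\epsilon_{\mathrm{NN}}$.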
## IV Collaborative learning for a robust model

In previous sections, we have introduced how to use NNs to detect and localize insider attackers. Our training data comes from a training data collection process under Assumption 4, wherein the local data samples $\tilde{\bm{X}}_{i}^{k}$ are collected by agent $i$, which could be within or beyond the neighborhood of the attacker. Ideally, the optimal way is to upload all agents’ data to a fusion center and train the model in a centralized manner. In practice, however, collecting data from decentralized sources at a center is hard due to storage and bandwidth limitations. On the other hand, as running a gossip algorithm is time-consuming, it is usually difficult and expensive to collect sufficient data at each agent. For example, the attack could occur far from the monitoring agent while the training data may contain only samples from a neighboring attacker. As the training samples may not represent the general attack network well, an individually trained NN may not fit all insider attack events. To alleviate these issues, we propose a collaborative peer-to-peer protocol to facilitate training our NN models.

Before we go into the details, we recall three assumptions for the proposed collaborative learning process. First, we assume that all agents in the network have an equal number of neighbors (this is somewhat impractical, but we relax it in Section IV-C). Also, different agents collect their own training data, with the advantages of high scalability and privacy preservation. Moreover, we allow the trustworthy agents to have correctly labeled samples from the ND and NL tasks. For instance, the samples labeled in the current training round can be used in the next training round. In what follows, we show how to share AI-based models among agents to achieve robust performance in the ND and NL tasks.

### IV-A The Distributed Collaborative Training Process

The goal of collaborative training is for participating agents, acting as local learners, to train good local models (i.e., NN models) through gossip exchanges. That is, an agent $i\in\mathcal{V}$ aims to train a model that performs well with respect to the data points available on other agents. For distributed collaborative training, the standard unconstrained empirical risk minimization problem used in machine learning (such as for NNs) can be described as follows [39]:

$\min_{\bm{W}}L(\bm{W})=\min\frac{1}{n}\sum_{i\in\cal V}L_{i}(\bm{W}),$ (25)

where $\bm{W}$ denotes the parameters of the NN model and $L_{i}(\cdot)$ is the local objective function of agent $i$, defined as the expected loss on the local data set. The local objective is to minimize the expected loss on its local samples:

$\textstyle L_{i}(\bm{W})=\mathbb{E}_{\varsigma\sim{\cal I}_{i}}[\ell(\bm{W},\varsigma)],$ (26)

where $\varsigma$ is a pair variable, composed of an input and its label, following the unknown probability distribution ${\cal I}_{i}$ specific to the sample set received by agent $i$, and $\ell(\cdot)$ is a loss function used to quantify the prediction error on $\varsigma$.
Let $D_{i}=\{\varsigma_{1},\cdots,\varsigma_{q}\}$ denote the set of training data on agent $i\in\mathcal{V}$, which contains $q$ samples. Thus, we have $D=D_{1}\cup\cdots\cup D_{n}$ to optimize problem (25):

$\min_{\bm{W}}L(\bm{W})=\min\frac{1}{n}\sum_{i\in\cal V}\Big{(}\frac{1}{q}\sum_{\varsigma\in D_{i}}\ell(\bm{W},\varsigma)\Big{)},$ (27)

where $L_{i}(\bm{W})=\frac{1}{q}\sum_{\varsigma\in D_{i}}\ell(\bm{W},\varsigma)$. This formulation enables us to state the optimization problem (25) in a distributed manner. This distributed collaborative training problem can be addressed by gossip exchanges [41, 42], as detailed below.

Algorithm 2 Gossip training for AI-based methods
Input: $P_{ij}$: probability of exchange; $\eta$: learning rate.
Initialize: $\bm{W}_{i}$ is initialized randomly for each $i\in\mathcal{V}$.
repeat
$\bullet$ MergeModel($\bm{W}_{r},\bm{W}_{i}$) as in (28)
$\bullet$ $\bm{W}_{i}\leftarrow\bm{W}_{i}-\eta\hat{\nabla}L_{i}(\bm{W}_{i})$: agent $i$ updates its parameters
$\bullet$ Agent $i$ sends $\bm{W}_{i}$ to agent $j\in{\mathcal{N}}_{i}$ with probability $P_{ij}$
until maximum iteration reached
function MergeModel($\bm{W}_{r},\bm{W}_{i}$)
$\bm{W}_{i}\leftarrow\bm{W}_{i}(1-\mu)+\mu\bm{W}_{r}$, $\mu\in[0,1]$
end function

### IV-B The Gossip Stochastic Gradient Descent Strategy

Gossip learning is a method to learn models from fully distributed data without central control [9]. The skeleton of the gossip learning protocol is shown in Algorithm 2. Therein, during the training stage, each agent $i$ has an NN with the same architecture and initializes a local model with parameters $\bm{W}_{i}$. This model is then sent to another agent $j\in\mathcal{N}_{i}$ in the network periodically with probability $P_{ij}$. Upon receiving a model $\bm{W}_{r}$, agent $i$ merges it with the local model and updates it using the local data set $D_{i}$. We utilize the stochastic gradient descent (SGD) algorithm to estimate the local parameters $\bm{W}_{i}$ [43], as follows:

$\bm{W}_{i}\leftarrow\bm{W}_{i}(1-\mu)+\mu\bm{W}_{r},\quad\mu\in[0,1],$ (28)

$\bm{W}_{i}\leftarrow\bm{W}_{i}-\eta\hat{\nabla}L_{i}(\bm{W}_{i}),\quad i\in\mathcal{V},$ (29)

where $\eta$ and $\hat{\nabla}L_{i}(\cdot)$ are the learning rate and the estimated gradient of agent $i$'s local loss, respectively, and $\mu\in[0,1]$ is a weight used to merge the received model $\bm{W}_{r}$. Herein, MergeModel($\bm{W}_{r},\bm{W}_{i}$) is the merging step shown in (28), which is typically realized by averaging the model parameters, i.e., $\mu=0.5$.
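A minimal sketch of one local round of Algorithm 2 at a single agent might look as follows; the gradient oracle `grad_fn` and the communication layer are abstracted away, and all names are our own illustration.

```python
import random

def gossip_round(W_i, W_received, D_i, grad_fn, eta=0.01, mu=0.5):
    """One local round of Algorithm 2 at agent i.

    W_i: local model parameters (e.g., a list of numpy arrays).
    W_received: parameters received from a neighbor, or None.
    grad_fn(W, batch): stochastic gradient of the local loss L_i at W.
    """
    if W_received is not None:
        # MergeModel, Eq. (28): convex combination of local and received models.
        W_i = [(1 - mu) * w + mu * wr for w, wr in zip(W_i, W_received)]
    batch = random.sample(D_i, min(32, len(D_i)))
    # Local SGD update, Eq. (29).
    W_i = [w - eta * g for w, g in zip(W_i, grad_fn(W_i, batch))]
    return W_i  # afterwards, W_i is sent to a neighbor j with probability P_ij
```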
### IV-C The Tailor-degree Network

Figure 3: An example of tailoring neighbor agents.

We have introduced the application of NNs to detecting and localizing attackers, under the assumption that each normal agent in the network has exactly $M$ neighbors. In practice, however, the communication network may be irregular, with some agents having a heterogeneous number of neighbors. In order to adapt to scenarios with different Degree-$|\mathcal{N}_{i}|$ agents, we tailor our $M$-input NN to fit the scenario where a normal agent has $\left|\mathcal{N}_{i}\right|\neq M$ neighbors. Two scenarios are considered in this subsection.

In the first scenario, we consider the case of $|{\cal N}_{i}|>M$. The $|{\cal N}_{i}|$ neighbors are divided into $\lceil|{\cal N}_{i}|/M\rceil$ potentially overlapping groups. In the ND and NL tasks, each group contains exactly $M$ agents and can be treated as a standard neighbor set for the TDNN and SDNN methods. Thus, these two tasks can be implemented with the unified NN model, as seen in Fig. 3. On the other hand, if $|{\cal N}_{i}|<M$, the deficient values in the input vector are replaced by a reference value to fit a Degree-$\left|{\cal N}_{i}\right|$ agent. For the TDNN method, the input is reconstructed as ${\bm{a}}^{0}(\hat{\bm{a}}^{0})=[\xi_{i1},\cdots,\xi_{i|{\cal N}_{i}|},\xi_{ii},\ldots,\xi_{ii}]^{\top}\in{\mathbb{R}^{M}}$, wherein $\xi_{ii}$ is the temporal difference value of agent $i$ itself. For the SDNN method, the deficient values are replaced by $\chi_{ii}$, and the input is reconstructed as ${\bm{a}}^{0}(\hat{\bm{a}}^{0})=[\chi_{i1},\cdots,\chi_{i{|\mathcal{N}_{i}}|},\chi_{ii},\cdots,\chi_{ii}]^{\top}\in{\mathbb{R}}^{M}$ when $|{\mathcal{N}}_{i}|<M$.

Remark: Typically, the training data used to train the AI-based methods is collected by trustworthy agents under a scenario with specific prior information $\bm{\beta}$. In practice, the prior information of the gossip-based DPS optimization protocol may change in particular scenarios; that is, the test data may be statistically mismatched with the training data. To further verify the robustness of the AI-based detection and localization models, we generate the test data by keeping the target value of attackers $\bm{\alpha}$ and changing the mean and the deviation of $\bm{\beta}$. As depicted in Section V, we set $\bm{\beta}\sim\mathcal{U}[0,1]^{d}$, and several test scenarios are defined in TABLE I.

TABLE I: Test scenario settings for mismatched data
Scenario | Mean | Deviation | Initial distribution
---|---|---|---
S0 | 0.5 | 1.0 | $\bm{\beta}\sim\mathcal{U}[0.0,1.0]^{d}$
S1 | 0.5 | 0.6 | $\bm{\beta}\sim\mathcal{U}[0.2,0.8]^{d}$
S2 | 0.5 | 1.4 | $\bm{\beta}\sim\mathcal{U}[-0.2,1.2]^{d}$
S3 | 0.7 | 1.0 | $\bm{\beta}\sim\mathcal{U}[0.2,1.2]^{d}$
S4 | 0.3 | 1.0 | $\bm{\beta}\sim\mathcal{U}[-0.2,0.8]^{d}$

## V Numerical Results and Analysis

In this section, numerical results are presented to validate the effectiveness of the proposed AI-based methods in the neighborhood tasks. The DPS algorithm runs on a Manhattan network with $n=9$ agents, as shown in Fig. 4. In our experiment, an example of the least-squares optimization problem is considered; i.e., in (1) we set

$f^{k}({\bm{x}})=\sum_{i=1}^{n}f_{i}^{k}({\bm{x}})=\sum_{i=1}^{n}\left|({\bm{\theta}}_{i}^{k})^{\top}{\bm{x}^{k}}-{\phi}_{i}^{k}\right|^{2},\quad k=1,...,K.$

Herein, $f_{i}^{k}$ is a utility function at agent $i$. As shown in Algorithm 1, the DPS algorithm runs in an asynchronous manner such that an agent $i$ randomly selects an agent $j$ with probability $[{\bm{P}}]_{ij}=P_{ij}=1/|{\mathcal{N}}_{i}|$. Thus the expected transition matrix at iteration $t$ can be written as ${\mathbb{E}}\left[{\bm{A}}(t)\right]={\bm{I}}-\frac{1}{2n}\bm{\Sigma}+\frac{{\bm{P}}+{\bm{P}}^{\top}}{2n}$, where $\bm{\Sigma}$ is a diagonal matrix with $[\bm{\Sigma}]_{ii}=\sum_{j=1}^{n}({P}_{ij}+{P}_{ji})$. In each instance, we set $d=2$, $T=2000$, the initialization ${\bm{x}}^{k}(0)\sim{\mathcal{U}}[0,1]^{d}$, ${\bm{\alpha}}^{k}\sim{\mathcal{U}}[-0.5,0.5]^{d}$ and $\bm{r}_{j}^{k}(t)\sim{\mathcal{U}}[-\hat{\lambda}^{t},\hat{\lambda}^{t}]$, where $\hat{\lambda}$ is the second largest eigenvalue of $\mathbb{E}[{\bm{A}}(t)]$. In particular, to serve our purpose we change the function $f_{i}^{k}({\bm{x}})$ by randomly generating ${\bm{\theta}}_{i}^{k}\sim{\mathcal{U}}[0.5,2.5]^{d}$ and $({\bm{x}}^{\star})^{k}\sim{\mathcal{U}}[0,1]^{d}$, and thus we have $\phi_{i}^{k}=(\bm{\theta}_{i}^{k})^{\top}(\bm{x}^{\star})^{k}$.
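For concreteness, a sketch of how one instance $k$ of this synthetic least-squares problem could be generated from the stated distributions is given below; this is our illustrative reading of the setup, not the authors' released code.

```python
import numpy as np

def make_instance(n=9, d=2, seed=0):
    """Generate one instance k of the least-squares experiment.

    Each agent i holds (theta_i, phi_i) with phi_i = theta_i^T x_star, so
    the minimizer of sum_i |theta_i^T x - phi_i|^2 is exactly x_star.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.5, 2.5, size=(n, d))  # theta_i^k ~ U[0.5, 2.5]^d
    x_star = rng.uniform(0.0, 1.0, size=d)      # (x*)^k ~ U[0, 1]^d
    phi = theta @ x_star                        # phi_i^k = (theta_i^k)^T (x*)^k
    x0 = rng.uniform(0.0, 1.0, size=(n, d))     # initial states x^k(0) ~ U[0, 1]^d
    alpha = rng.uniform(-0.5, 0.5, size=d)      # attacker target alpha^k
    return theta, phi, x0, alpha
```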
Figure 4: The Manhattan network topology with agent $1$ as the attacker.

For the AI-based methods, feed-forward neural networks (FFNNs) with three hidden layers of $200$, $100$ and $50$ neurons, respectively, are applied to perform the ND and NL tasks. These NNs are implemented using a modified version of the deep learning toolbox in [44]. The rectified linear unit (ReLU) is used as the activation function in all hidden layers, and the NN parameters are jointly optimized through back propagation by minimizing the loss function defined for each task.

To provide neighbor data and ground truth labels for the AI-based methods, we run the DPS algorithm independently in each event (4), starting with a new initial state each time. As the Manhattan network is symmetric, to obtain the training data under hypothesis $\mathcal{H}_{1}^{i}$ with label ‘1’, we need to collect data at two types of trustworthy agents: one standing “next to” the attacker (for example, agent $1$ is the only attacker and we collect data at agent $2$), and one standing “far from” the attacker (for example, agent $1$ is the only attacker and we collect data at agent $5$). Meanwhile, the training data under hypothesis $\mathcal{H}_{0}^{i}$ with label ‘0’ is collected at any agent when DPS is running on the Manhattan network free of attackers. We collect data from different scenarios as shown in Table II and fuse it into the ND and NL models. Therein, the available data is split into two sets, a training set and a testing set. For the ND task, the detection task consists of $30,000$ samples as the training set and $18,000$ samples as the testing set. Herein, within the data under hypothesis $\mathcal{H}_{1}^{i}$, we have $10000$ samples collected at an agent next to the attacker and $10000$ samples collected at an agent far from the attacker. For the NL task, the training set and testing set contain $10,000$ and $6,000$ samples, respectively. Herein, we encode the ground truth labels of event $\mathcal{H}_{1}^{i}=\mathcal{H}_{1}^{ij}\cup\mathcal{H}_{0}^{ij}$ by one-hot coding, where the neighboring attacker is labeled by ‘1’ and the trustworthy agent by ‘0’.

TABLE II: Training and testing sets for AI-based methods given that agent $j$ is the attacker.
Task | Event | Training set | Testing set | Label
---|---|---|---|---
ND | $\mathcal{H}_{0}^{i}$ | 10000 | 6000 | 0
| $\mathcal{H}_{1}^{i}(j\in\mathcal{N}_{i})$ | 10000 | 6000 | 1
| $\mathcal{H}_{1}^{i}(j\notin\mathcal{N}_{i})$ | 10000 | 6000 | 1
NL | $\mathcal{H}_{1}^{i}$ | 10000 | 6000 | $\bm{e}_{j}$

The detection and localization models of the AI-based methods are classifiers for which the NN produces continuous quantities, and class membership is predicted through different thresholds. To make a more comprehensive evaluation of these classifiers, we adopt the probabilities of detection and false alarm for the ND and NL tasks.
That is, we define

$P_{nd}^{i}\mathrel{\mathop{:}}=P(\hat{\mathcal{H}}^{i}=\mathcal{H}_{1}^{i}|\mathcal{H}_{1}^{i}),~P_{nf}^{i}\mathrel{\mathop{:}}=P(\hat{\mathcal{H}}^{i}=\mathcal{H}_{1}^{i}|\mathcal{H}_{0}^{i}),$ (30)

$P_{ld}^{i}\mathrel{\mathop{:}}=P(\hat{\mathcal{H}}^{ij}=\mathcal{H}_{1}^{ij}|\mathcal{H}_{1}^{ij}),~P_{lf}^{i}\mathrel{\mathop{:}}=P(\hat{\mathcal{H}}^{ij}=\mathcal{H}_{1}^{ij}|\mathcal{H}_{0}^{ij}),$ (31)

where $\hat{\mathcal{H}}^{i}$ and $\hat{\mathcal{H}}^{ij}$ are the events estimated by the AI-based methods. $P_{nd}^{i}$ ($P_{ld}^{i}$) and $P_{nf}^{i}$ ($P_{lf}^{i}$) are the probabilities of detection and false alarm in the ND (NL) task, respectively. More specifically, those probabilities are calculated as follows:

$P_{nd}^{i}=\frac{1}{N_{nd}}\sum_{n=1}^{N_{nd}}I(y_{i}^{(n)}=\hat{y}_{i}^{(n)}=1),$ (32)

$P_{nf}^{i}=\frac{1}{N_{nf}}\sum_{n=1}^{N_{nf}}I(y_{i}^{(n)}=0\wedge\hat{y}_{i}^{(n)}=1),$ (33)

$P_{ld}^{i}=\frac{1}{N_{ld}}\sum_{n=1}^{N_{ld}}I(z^{(n)}_{ij}=\hat{z}^{(n)}_{ij}=1),$ (34)

$P_{lf}^{i}=\frac{1}{N_{lf}}\sum_{n=1}^{N_{lf}}I(z^{(n)}_{ij}=0\wedge\hat{z}^{(n)}_{ij}=1),$ (35)

where $N_{nd}$ ($N_{nf}$) and $N_{ld}$ ($N_{lf}$) are the numbers of positive (negative) samples in the ND and NL tasks, respectively, $\hat{y}_{i}^{(n)}$ ($\hat{z}^{(n)}_{ij}$) is the class predicted by the ND (NL) classifier, and $y_{i}^{(n)}$ ($z^{(n)}_{ij}$) is the ground-truth class label. Note that $I(\cdot)$ is an indicator function that takes the value $1$ when the enclosed condition holds. Based on these probabilities, the detection (or localization) performance can be investigated by the receiver operating characteristic (ROC) [45], for which the probability of detection is plotted on the $Y$-axis and the probability of false alarm on the $X$-axis. It is worth noting that ROC curves that approach the upper left corner outperform those far from it.
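A compact sketch of the empirical probabilities (32)-(33) for the ND task (Eqs. (34)-(35) for the NL task are computed the same way on the per-neighbor labels) could be:

```python
import numpy as np

def detection_rates(y_true, y_pred):
    """Empirical detection / false-alarm probabilities of Eqs. (32)-(33).

    y_true, y_pred: binary arrays where 1 marks event H_1 and 0 marks H_0.
    Returns (P_nd, P_nf).
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos, neg = y_true == 1, y_true == 0
    p_nd = np.mean(y_pred[pos] == 1)  # Eq. (32): detections among the N_nd positives
    p_nf = np.mean(y_pred[neg] == 1)  # Eq. (33): false alarms among the N_nf negatives
    return p_nd, p_nf
```

Sweeping the decision threshold and plotting $P_{nd}$ against $P_{nf}$ yields the ROC curves used throughout this section.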
### V-A Detection and Localization for One Attacker

Figure 5: ROCs of TDNN and TD methods: (Left) ND task, (Right) NL task.
Figure 6: ROCs of SDNN and SD methods: (Left) ND task, (Right) NL task.

In this subsection, we show the detection and localization performance of the AI-based methods when the Manhattan network contains only one attacker, as seen in Fig. 4. Suppose that agent $1$ is the attacker and the monitoring node is agent $2$ (or $3$, $4$ and $7$). Then, the NN model is trained and also tested with the data collected at the agent next to the attacker. In Fig. 5, we study the attacker detection and localization performance of TDNN, where TD in [13] is taken as the benchmark method. ROC curves of the ND task are depicted in Fig. 5 (Left); the variables $K$ and $d$ in the legend are the number of instances and the number of dimensions used to detect insider attacks, respectively. The localization performance of the NL task is shown in Fig. 5 (Right), where we assume that the ND task can completely distinguish between events $\mathcal{H}_{0}^{i}$ and $\mathcal{H}_{1}^{i}$ without errors (by an ‘Oracle’). In these plots, it is evident that the performance of both TDNN and TD improves significantly as $K$ increases with $d$ fixed, and vice versa.

For the first and second curves in the ND and NL tasks, TD has the same performance at $K=2,d=1$ as at $K=1,d=2$, and the same is true for TDNN on the fifth and sixth curves, which is inherent to the TD strategy (9). Thus, we may say that either increasing $K$ or increasing $d$ brings the same performance improvement. From Fig. 5, it can be seen that TDNN improves significantly over TD in terms of both detection and localization performance, achieving good performance when $K=5,d=2$. The ROCs of SDNN are shown in Fig. 6, with SD in [36] selected as the benchmark. It can be seen from the plots that both SDNN and SD already provide good detection and localization performance when $K=2$, which is better than that of the TDNN and TD methods. This result implies that transient states can indeed provide more information to identify the attacker, as the spatial methods (SDNN and SD) leverage the entire dynamics while the temporal methods (TDNN and TD) utilize only the first and last states. Also in this case, the attacker detection and localization performance of SDNN and SD improves significantly as $K$ increases with $d$ fixed. When $K$ is fixed, the performance of SDNN and SD improves slightly as $d$ increases. For the ND task in Fig. 6 (Left), the detection performances of SDNN and SD are close to each other; both show excellent performance under the same feature processing, as seen in (12) and (22). Nevertheless, SDNN has a drastic advantage over SD in the NL task and can completely distinguish the neighboring attacker at $K=2$, as seen in Fig. 6 (Right).

### V-B Performance of the Collaborative Learning

Figure 7: Comparison between independent training and collaborative training based on matched data. Models are trained on sufficient “next to” data and then tested on “next to” data.
Figure 8: Comparison between independent training and collaborative training for mismatched data. The plots are SDNN with $K=1,d=2$.

In this subsection, we show how to utilize the collaborative learning protocol to train a robust model that accommodates more attack events. Specifically, we consider the performance comparison of independent training and collaborative training. Herein, independent training means that each agent trains its model based on its local data, where “next to” data refers to samples collected at the agent next to an attacker while “far from” data refers to samples collected at the agent far from an attacker. Moreover, the “next to” model refers to the independent model trained on “next to” data while the “far from” model refers to the independent model trained on “far from” data. We then consider two extreme cases to verify collaborative learning.

For Case $1$, we assume that in the independent training process the monitoring agent collects only a very small amount of local “next to” data, which is not enough to train a meaningful NN model. In the collaborative training process, by contrast, the agent updates its local model by merging the received neighboring models. We then test the two training methods on “next to” data for the ND and NL tasks. In Fig. 7, the dashed and solid lines are the performance of independent training and collaborative training, respectively. It is clear that with insufficient samples, the agent in independent learning performs poorly on both ND and NL tasks, while collaborative training enables the agent to learn models from its neighbors, which greatly improves detection and localization performance.
It is expected that a similar result also holds for the “far from” case. For Case $2$, we first consider the scenario where each agent has sufficient “next to” data (“far from” data) to train the NN model. We then test the independent/collaborative training models on the “next to” data (“far from” data), which matches the training data. The red (blue) dashed and solid lines in Fig. 8 (Left) represent their ROC results, which imply that the collaborative learning model converges to the independent learning model. On the other hand, we further test the case where the training data and testing data are mismatched. That is, we use the “next to” data to test the “far from” model, and vice versa. Interestingly, Fig. 8 (Right) shows that the collaborative learning model has a significant improvement over the independent model, as it learns the characteristics of both the “next to” and “far from” models. These results demonstrate the advantage of collaborative learning for the robustness of the model. Therefore, even when there are enough samples, collaborative learning remains strongly competitive with independent learning.

### V-C Performance for Different Degree-$|\mathcal{N}_{i}|$ Agents

Figure 9: ROCs of TDNN and TD with different deficient sizes: (Left) ND task, (Right) NL task. $p=M-|{\cal N}_{i}|$ denotes the number of deficient inputs.
Figure 10: ROCs of SDNN and SD with different deficient sizes: (Left) ND task, (Right) NL task. $p=M-|{\cal N}_{i}|$ denotes the number of deficient inputs.

In this subsection, we discuss the scenario where the communication network is irregular. We denote the number of inputs not matching the unified model by $p=M-|{\cal N}_{i}|$. According to the scheme described in subsection IV-C, we test the detection and localization performance of the AI-based methods when $p\neq 0$. To set up this simulation, the Manhattan network topology is selected as the target example, and we have $M=4$ and $|{\cal N}_{i}|\in\{2,3\}$. The attack scenario of $m=1,c=1$ is applied to verify our proposed method, and the performance at $p=0$ is taken as the baseline. We choose agent $2$ as the test agent, whose neighbors are agents $1,3,5$ and $8$, i.e., $p=0$. When $p=1$, we cut off the connection between agents $2$ and $3$; then the neighbors of the test agent are agents $1$, $5$, and $8$. Next, when $p=2$, we further cut off the connection between agents $2$ and $5$, leaving only agents $1$ and $8$ as neighbors of the test agent $2$. Note that the parameters for the AI-based methods are the same as those in the previous subsection V-A, and that the testing data is generated from a modified Manhattan network with $p=1$ and $p=2$.

Fig. 9 shows the attacker detection and localization performance of the TDNN and TD methods with $K=5,d=2$. It can be seen that the performance of TDNN and TD in the ND and NL tasks does not fluctuate significantly as $p$ increases. However, TDNN has more stable detection and localization performance than TD. In Fig. 10, we show the performance of the SDNN and SD methods with $K=2,d=2$. As $p$ increases, SD has good detection performance, but its localization performance slightly decreases. The results in both ND and NL tasks show that in our setting $p$ does not have a significant effect on the performance of SDNN, which can still provide stable performance for detecting and localizing attackers. These results suggest that the proposed AI-based models may fit well with irregular-degree networks.
### V-D Robustness Test When Data Is Mismatched with Prior Information

Figure 11: ROCs of TDNN and TD for the mismatch model: (Left) ND task, (Right) NL task. ${\bm{\alpha}}^{k}\sim{\mathcal{U}}[-0.5,0.5]^{d}$. Each entry of ${\bm{x}}^{k}(0)$ is distributed as legended for the testing data.
Figure 12: ROCs of SDNN and SD for the mismatch model: (Left) ND task, (Right) NL task. ${\bm{\alpha}}^{k}\sim{\mathcal{U}}[-0.5,0.5]^{d}$. Each entry of ${\bm{x}}^{k}(0)$ is distributed as legended for the testing data.
Figure 13: ROCs of SDNN and SD for the mismatch model: (Left) ND task, (Right) NL task. ${\bm{\alpha}}^{k}\sim{\mathcal{U}}[-0.5,0.5]^{d}$. Each entry of ${\bm{x}}^{k}(0)$ is distributed as legended for the testing data.

Furthermore, we test the robustness of the AI-based methods when the prior information is inconsistent with the actual environment. The prior information of the test scenarios is presented in subsection IV-C. We consider one attacker in the Manhattan network and train the parameters of the AI-based methods in the scenario with ${\bm{\alpha}}^{k}\sim{\mathcal{U}}[-0.5,0.5]^{d}$ and $\bm{\beta}\sim\mathcal{U}[0.0,1.0]^{d}$, and the test data is generated by the other scenarios in TABLE I. In Fig. 11, we show the detection and localization performance of the TDNN and TD methods in different test scenarios when $K=5,d=2$. Specifically, we generate the test data for the second and third curves by changing the deviation of $\bm{\beta}$ to $\bm{\beta}\sim\mathcal{U}[0.2,0.8]^{d}$ and $\bm{\beta}\sim\mathcal{U}[-0.2,1.2]^{d}$. The results indicate that the performance of the TDNN and TD methods deteriorates when the deviation of $\bm{\beta}$ increases, and improves when the deviation of $\bm{\beta}$ decreases. In the fourth and fifth curves, we instead change the mean of $\bm{\beta}$ to $\bm{\beta}\sim\mathcal{U}[0.2,1.2]^{d}$ and $\bm{\beta}\sim\mathcal{U}[-0.2,0.8]^{d}$. As can be seen, the performance of TDNN and TD improves when the gap $\|\mathbb{E}[\bm{\alpha}]-\mathbb{E}[\bm{\beta}]\|$ increases and deteriorates when the gap decreases. Meanwhile, TDNN performs better than TD in both ND and NL tasks. In addition, Fig. 12 and Fig. 13 respectively show the detection and localization performance of the SDNN and SD methods when $K=1,d=2$ and $K=2,d=2$. In these plots, the ROC curves of SDNN and SD follow the same trends as those in Fig. 11. It is worth mentioning that, as shown in Fig. 11, Fig. 12 and Fig. 13, SDNN still exhibits good detection and localization performance despite the mismatch between the training data and testing data.

### V-E Detection and Localization for Multiple Attackers

Figure 14: ROCs for multiple attackers of TDNN and TD: (Left) ND task, (Right) NL task. $m$ is the number of attackers in the Manhattan network, and $c$ is the number of attackers in the testing agent’s neighborhood.
Figure 15: ROCs for multiple attackers of SDNN and SD: (Left) ND task, (Right) NL task. $m$ is the number of attackers in the Manhattan network, and $c$ is the number of attackers in the testing agent’s neighborhood.

We further investigate the performance of TDNN and SDNN in the case of multiple attackers. Note that the parameters of the AI-based methods are the same as those in subsection V-A. In Fig. 14 and 15, we set agents $\{1,\cdots,m\}$ as the attackers when considering a scenario with $m$ attackers in the Manhattan network. The legend ‘$m$ and $c$’ in these plots indicates that there are $m$ attackers in the Manhattan network, and $c$ attackers are in the neighborhood of the monitoring agent.
In this case, the same ${\bm{\alpha}}^{k}$ is shared by all cooperating attackers, but the noise is random and independent across attackers. In Fig. 14, we show the ROC curves of the TDNN and TD methods when $K=5,d=2$. Both the detection and localization performance of the TDNN and TD methods fluctuate noticeably across different $m$ and $c$. We notice that the total number of attackers ($m$) has only a slight impact on the detection performance of TDNN, which can be seen from the sixth ($m=1,c=1$), seventh ($m=2,c=1$) and ninth ($m=5,c=1$) curves. This shows that the detection performance of TDNN depends mainly on the number of attacking neighbors. As for the NL task, we observe that TDNN exhibits similar performance in different attack scenarios. Overall, the proposed TDNN method outperforms TD and performs well in the case of multiple attackers. For the SDNN and SD methods, the detection and localization performance is shown in Fig. 15 with $K=2,d=2$. From Fig. 15 (Left), SDNN and SD still show good detection performance in the case of multiple attackers. We notice that the detection performance of SDNN is slightly better than that of SD when $m=5,c=3$. In the NL task, SDNN has excellent localization performance and outperforms SD in all attack scenarios.

### V-F Performance Test in a Small World Network

Figure 16: ROCs of TDNN for the small world network: (Left) ND task, (Right) NL task. Solid lines show the average detection and localization performance in the small world network; the parameters of TDNN are trained on the Manhattan network.
Figure 17: ROCs of SDNN for the small world network: (Left) ND task, (Right) NL task. Solid lines show the average detection and localization performance in the small world network; the parameters of SDNN are trained on the Manhattan network.

In addition, we test the detection and localization performance of the AI-based methods in a small world network. We consider a small world network with $20$ agents. The average degree is set to $8$ and the rewiring probability is set to $0.2$. We assume that among all the nodes, agents $3$, $10$ and $17$ are the attackers. The AI-based models are trained with the training data from the Manhattan network as in V-A, while the test data is collected from the small world network. We only consider monitoring at the agents next to an attacker. Note that herein the degree-mismatch problem is handled by the method proposed in Section IV-C. We show the performance of TDNN and SDNN in Fig. 16 and Fig. 17, respectively. In these plots, the solid lines show the average detection and localization performance in the small world network. We notice that the AI-based methods also exhibit good detection and localization performance in a small world network. These results further illustrate the potential of the proposed defense strategies.

## VI Conclusion

This work is dedicated to the detection of insider attacks on the DPS algorithm through AI technology. We have proposed two AI-based defense strategies (TDNN and SDNN) for securing the gossip-based DPS algorithm. Unlike the traditional score-based methods, this work utilizes NNs to learn the complex mapping relationships in this classification problem, thus reducing the design difficulty of the attacker detector. To circumvent the mismatch between the training data and the actual network attack, we propose a collaborative learning approach to learn a local model close to the global model using training data from all agents.
Experimental results demonstrate that the proposed AI-based methods have good detection and localization performance in different attack scenarios. They also adapt well to agents of different degrees and are strongly robust to inconsistency between the prior information and the actual environment. Therefore, we are convinced that the proposed AI-based defense strategies have high potential for practical applications in the DPS algorithm. As future work, it would be interesting to apply the AI-based methods to more complicated attack models and other decentralized algorithms.

## References

* [1] A. Nedic and A. Ozdaglar, “Distributed subgradient methods for multi-agent optimization,” _IEEE Transactions on Automatic Control_, vol. 54, no. 1, pp. 48–61, 2009.
* [2] A. Nedic, A. Ozdaglar, and P. A. Parrilo, “Constrained consensus and optimization in multi-agent networks,” _IEEE Transactions on Automatic Control_, vol. 55, no. 4, pp. 922–938, 2010.
* [3] G. Hattab and D. Cabric, “Distributed wideband sensing-based architecture for unlicensed massive iot communications,” _IEEE Transactions on Cognitive Communications and Networking_, vol. 5, no. 3, pp. 819–834, 2019.
* [4] B. V. Philip, T. Alpcan, J. Jin, and M. Palaniswami, “Distributed real-time iot for autonomous vehicles,” _IEEE Transactions on Industrial Informatics_, vol. 15, no. 2, pp. 1131–1140, 2019.
* [5] S. X. Wu, H.-T. Wai, L. Li, and A. Scaglione, “A review of distributed algorithms for principal component analysis,” _Proceedings of the IEEE_, vol. 106, no. 8, pp. 1321–1340, 2018.
* [6] C. Zhang and Y. Wang, “Enabling privacy-preservation in decentralized optimization,” _IEEE Transactions on Control of Network Systems_, vol. 6, no. 2, pp. 679–689, 2018.
* [7] D. Gesbert, S. G. Kiani, A. Gjendemsjo, and G. E. Oien, “Adaptation, coordination, and distributed resource allocation in interference-limited wireless networks,” _Proceedings of the IEEE_, vol. 95, no. 12, pp. 2393–2409, 2007.
* [8] G. B. Giannakis, V. Kekatos, N. Gatsis, S.-J. Kim, H. Zhu, and B. F. Wollenberg, “Monitoring and optimization for power grids: A signal processing perspective,” _IEEE Signal Processing Magazine_, vol. 30, no. 5, pp. 107–128, 2013.
* [9] I. Hegedűs, G. Danner, and M. Jelasity, “Gossip learning as a decentralized alternative to federated learning,” in _IFIP International Conference on Distributed Applications and Interoperable Systems_. Springer, 2019, pp. 74–90.
* [10] J. Tsitsiklis, “Problems in decentralized decision making and computation,” Ph.D. dissertation, Dept. of Electrical Engineering and Computer Science, M.I.T., Boston, MA, 1984.
* [11] S. S. Ram, A. Nedić, and V. V. Veeravalli, “Distributed stochastic subgradient projection algorithms for convex optimization,” _Journal of optimization theory and applications_, vol. 147, no. 3, pp. 516–545, 2010.
* [12] S. Sundaram and B. Gharesifard, “Consensus-based distributed optimization with malicious nodes,” in _2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)_. IEEE, 2015, pp. 244–249.
* [13] S. X. Wu, H.-T. Wai, A. Scaglione, A. Nedić, and A. Leshem, “Data injection attack on decentralized optimization,” in _2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2018, pp. 3644–3648.
* [14] G. Li, X. Wu, S. Zhang, H.-T. Wai, and A. Scaglione, “Detecting and localizing adversarial nodes using neural networks,” in _2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC) (IEEE SPAWC 2018)_, Kalamata, Greece, Jun. 2018.
* [15] Q. Yan, M. Li, T. Jiang, W. Lou, and Y. T. Hou, “Vulnerability and protection for distributed consensus-based spectrum sensing in cognitive radio networks,” in _INFOCOM, 2012 Proceedings IEEE_. IEEE, 2012, pp. 900–908.
* [16] C. Zhao, J. He, and J. Chen, “Resilient consensus with mobile detectors against malicious attacks,” _IEEE Transactions on Signal and Information Processing over Networks_, vol. 4, no. 1, pp. 60–69, 2017.
* [17] S. Sundaram and B. Gharesifard, “Distributed optimization under adversarial nodes,” _IEEE Transactions on Automatic Control_, vol. 64, no. 3, pp. 1063–1076, 2018.
* [18] R. Gentz, H.-T. Wai, A. Scaglione, and A. Leshem, “Detection of data-injection attacks in decentralized learning,” _Asilomar Conf_, 2015.
* [19] R. Gentz, S. X. Wu, H. T. Wai, A. Scaglione, and A. Leshem, “Data injection attacks in randomized gossiping,” _IEEE Transactions on Signal and Information Processing Over Networks_, vol. 2, no. 4, pp. 523–538, 2016.
* [20] M. Mobilia, “Does a single zealot affect an infinite group of voters?” _Physical Review Letters_, July 2003.
* [21] B. Kailkhura, S. Brahma, and P. K. Varshney, “Data falsification attacks on consensus-based detection systems,” _IEEE Transactions on Signal and Information Processing over Networks_, vol. 3, no. 1, pp. 145–158, 2017.
* [22] ——, “Consensus based detection in the presence of data falsification attacks,” _IEEE Transactions on Signal Processing_, vol. PP, no. 99, 2015.
* [23] O. Shalom, A. Leshem, A. Scaglione, and A. Nedić, “Detection of data injection attacks on decentralized statistical estimation,” in _2018 IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE)_. IEEE, 2018, pp. 1–5.
* [24] N. Ravi and A. Scaglione, “Detection and isolation of adversaries in decentralized optimization for non-strongly convex objectives,” _IFAC-PapersOnLine_, vol. 52, no. 20, pp. 381–386, 2019.
* [25] S. Patel, V. Khatana, G. Saraswat, and M. V. Salapaka, “Distributed detection of malicious attacks on consensus algorithms with applications in power networks,” in _Preprint. Golden, CO: National Renewable Energy Laboratory. NREL/CP-5D00-76848_, 2020.
* [26] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _CVPR_. IEEE Computer Society, 2016, pp. 770–778.
* [27] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” _arXiv preprint arXiv:1810.04805_, 2018.
* [28] H. Wang, J. Ruan, G. Wang, B. Zhou, Y. Liu, X. Fu, and J. Peng, “Deep learning-based interval state estimation of ac smart grids against sparse cyber attacks,” _IEEE Transactions on Industrial Informatics_, vol. 14, no. 11, pp. 4766–4778, 2018.
* [29] Y. Yu, T. Wang, and S. C. Liew, “Deep-reinforcement learning multiple access for heterogeneous wireless networks,” _IEEE Journal on Selected Areas in Communications_, vol. 37, no. 6, pp. 1277–1290, 2019.
* [30] W. Wu, R. Li, G. Xie, J. An, Y. Bai, J. Zhou, and K. Li, “A survey of intrusion detection for in-vehicle networks,” _IEEE Transactions on Intelligent Transportation Systems_, vol. 21, no. 3, pp. 919–933, 2019.
* [31] A. Doboli, “Discovery of malicious nodes in wireless sensor networks using neural predictors,” _Wseas Transactions on Computers Research_, vol. 2, 2007.
* [32] G. Rusak, A. Al-Dujaili, and U.-M. O’Reilly, “Ast-based deep learning for detecting malicious powershell,” in _Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security_, 2018, pp. 2276–2278.
* [33] O. Rahman, M. A. G. Quraishi, and C.-H. Lung, “Ddos attacks detection and mitigation in sdn using machine learning,” in _2019 IEEE World Congress on Services (SERVICES)_, vol. 2642. IEEE, 2019, pp. 184–189.
* [34] G. Li, S. X. Wu, S. Zhang, H.-T. Wai, and A. Scaglione, “Detecting and localizing adversarial nodes using neural networks,” in _2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)_. IEEE, 2018, pp. 1–5.
* [35] G. Li, S. X. Wu, S. Zhang, and Q. Li, “Neural networks-aided insider attack detection for the average consensus algorithm,” _IEEE Access_, vol. 8, pp. 51871–51883, 2020.
* [36] ——, “Detect insider attacks using cnn in decentralized optimization,” in _ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2020, pp. 8758–8762.
* [37] L. Giaretta and Š. Girdzijauskas, “Gossip learning: off the beaten path,” in _2019 IEEE International Conference on Big Data (Big Data)_. IEEE, 2019, pp. 1117–1124.
* [38] S. Savazzi, M. Nicoli, and V. Rampa, “Federated learning with cooperating devices: A consensus approach for massive iot networks,” _IEEE Internet of Things Journal_, vol. 7, no. 5, pp. 4641–4654, 2020.
* [39] Z. Jiang, A. Balu, C. Hegde, and S. Sarkar, “Collaborative deep learning in fixed topology networks,” in _Advances in Neural Information Processing Systems (NIPS)_, 2017, pp. 5904–5914.
* [40] R. Ormándi, I. Hegedűs, and M. Jelasity, “Gossip learning with linear models on fully distributed data,” _Concurrency and Computation: Practice and Experience_, vol. 25, no. 4, pp. 556–571, 2013.
* [41] M. Blot, D. Picard, N. Thome, and M. Cord, “Distributed optimization for deep learning with gossip exchange,” _Neurocomputing_, vol. 330, pp. 287–296, 2019.
* [42] J. Daily, A. Vishnu, C. Siegel, T. Warfel, and V. Amatya, “Gossipgrad: Scalable deep learning using gossip communication based asynchronous gradient descent,” _arXiv preprint arXiv:1803.05880_, 2018.
* [43] M. Blot, D. Picard, and M. Cord, “Gosgd: Distributed optimization for deep learning with gossip exchange,” _arXiv preprint arXiv:1804.01852_, 2018.
* [44] R. B. Palm, “Prediction as a candidate for learning deep hierarchical models of data,” _Technical University of Denmark_, vol. 5, 2012.
* [45] T. Fawcett, “An introduction to roc analysis,” _Pattern recognition letters_, vol. 27, no. 8, pp. 861–874, 2006.
# Link Prediction and Unlink Prediction on Dynamic Networks

Christina Muro, Boyu Li, Kun He (The first two authors contribute equally; Kun He is the corresponding author.) C. Muro, B. Li and K. He are with the School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China (E-mail: {christinamuro, afterslby, <EMAIL_ADDRESS>

###### Abstract

Link prediction on dynamic networks has been extensively studied and widely applied in various applications. However, temporal unlink prediction, which also plays an important role in the evolution of social networks, has not received much attention. Accurately predicting the links and unlinks of the future network greatly contributes to network analysis and uncovers more latent relations between nodes. In this work, we assume that there are two kinds of relations between nodes, namely the long-term relation and the short-term relation, and we propose an effective algorithm called LULS for temporal link prediction and unlink prediction based on such relations. Specifically, for each snapshot of a dynamic network, LULS first collects higher-order structure as two topological matrices by applying short random walks. Then, LULS initializes and optimizes a global matrix and a sequence of temporary matrices for all the snapshots by using non-negative matrix factorization (NMF) based on the topological matrices, where the global matrix denotes the long-term relation and the temporary matrices represent the short-term relations of the snapshots. Finally, LULS calculates the similarity matrix of the future snapshot and predicts the links and unlinks for the future network. Additionally, we further improve the prediction results by using graph regularization constraints to enhance the global matrix, so that the global matrix contains a wealth of both topological and temporal information. Experiments conducted on real-world networks illustrate that LULS outperforms the baseline methods on both link prediction and unlink prediction tasks.

###### Index Terms: Link prediction, unlink prediction, dynamic network, random walk, non-negative matrix factorization

## I Introduction

Link prediction is one of the fundamental problems of predicting whether two disconnected nodes in a network are likely to form a link [1]. It is useful in a wide variety of applications, such as recommendation [2], network reconstruction [3], and protein-protein interaction [4]. Such networks are often dynamic in nature, meaning that nodes and links can be added or removed as the network evolves [5]. It is inspiring and, in some respects, difficult to study networks at the level of individual edge formation or removal, especially in the dynamic scenario. Consequently, the mechanisms by which such networks evolve at the level of individual edges still need further study.

Link prediction has been studied extensively in recent years [1, 6, 7], and there are two primary independent scenarios for predicting unknown links in networks: forecasting missing relationships and projecting future relationships. Although real-world networks have a highly dynamic structure, most of the previous link prediction studies focus on static networks [8]. In such a scenario, it is anticipated that once two nodes are tied, they mostly remain tied (e.g., in the Facebook network, once two people are connected, they rarely break up). As a result, the link prediction problem focuses on identifying new links.
On the other hand, for many networks, additional temporal information, such as the time of link creation and deletion as well as node addition and deletion, is available over a time interval, which is beneficial for understanding the structure of the network. Inferring links in dynamic networks is more challenging because dynamic networks are often subject to short-period changes due to noise. Furthermore, due to changes in the overall environment, some dynamic networks may endure long-term shifts [9]. Existing dynamic link prediction methods consider only either the structure of the network [8, 10, 11] or the temporal information [12, 13]. However, these families of approaches have several limitations. First, real-world networks are generally sparse and have partially observable links, so the approaches based only on structural information may perform poorly. Second, the methods considering temporal information alone may lack the insight provided by structural modeling. It is thus essential to use both topological and temporal features to comprehend the complex behaviors of a dynamic network. Up to now, only a few studies [9, 14] have utilized both types of information together for the dynamic link prediction task. However, these methods fail to consider the intrinsic geometric structure of the data [15], and thus lack discriminative information for prediction.

Additionally, unlink prediction, which attempts to predict whether a previously formed relationship will disappear in the future, is an important fundamental problem related to link prediction. Several studies [16, 17, 18] reveal that the probability of a relationship persisting or forming increases if a node pair has a high number of common neighbors as well as high transitivity through third parties. Furthermore, they also reveal that the probability of an edge being removed is higher between node pairs with low similarity, fewer common neighbors, and typically low transitivity. These findings illustrate that the processes guiding link creations are negatively correlated with those guiding link removals, which signifies the importance of unlink prediction tasks. However, up until now, the link prediction problem has been heavily studied [8, 19, 20], while its counterpart, the unlink prediction problem, although accounting for a high proportion of link changes [21], is rarely studied [22].

In this work, we assume that the relations in dynamic networks can be divided into two categories, the long-term relation and the short-term relation. The long-term relation represents a certain kind of stable relation, while the short-term relation denotes a sort of temporary relation. For instance, in a collaboration network, authors in the same lab maintain a stable relation and may always collaborate on publishing papers, whereas authors in different labs, or even in different research fields, have a temporary relation and cooperate only occasionally. As a result, these two kinds of relations determine the states of links and unlinks in the future network to a large extent. Based on this assumption, we propose a new algorithm, termed LULS, for link prediction and unlink prediction with long-term and short-term relations. LULS utilizes the method of non-negative matrix factorization (NMF) that embeds nodes into two kinds of representation matrices containing both structural similarity and temporal information.
More specifically, for each snapshot of the network, LULS collects higher-order topological information as matrices by performing two random walk methods, namely the light lazy random walk [23] and a new variant of random walk called the modified light lazy random walk. Then, LULS initializes and optimizes a global matrix and a temporary matrix for each snapshot by using the method of NMF, where the global matrix represents the long-term relation that stores the temporal information and is shared by all the snapshots, while the series of temporary matrices denotes the short-term relation that stores only the topological information within the corresponding snapshot. Additionally, LULS also incorporates the geometric structure by using graph regularization constraints to enhance the global matrix. Therefore, the global matrix contains a wealth of both topological information and temporal information, which can further improve the prediction results. Afterwards, LULS calculates the similarity matrix of the future snapshot for the temporal predictions of links and unlinks based on the global matrix and temporary matrices. The experiments conducted on real-world networks illustrate that LULS outperforms the baselines on AUC for both link prediction and unlink prediction tasks.

## II Related Work

Link prediction in static networks has been extensively studied [1]. On the other hand, for many real-world networks, additional temporal information is available over a time interval, and the network built from such data can be represented as a dynamic network. One feasible strategy to model the dynamic network is to explore the network topological properties, under the assumption that any two nodes close to each other in the network are likely to form a link in the near future. For example, Rahman et al. [24] capture the topology of dynamic networks by graphlets, where graphlet transitions between different timestamps are coded in a feature vector and can be used for supervised learning. Gunes et al. [25] apply the ARIMA model on the series of node similarity scores to predict links in the next period based on the previous time series data. Moradabadi et al. [26] predict the future connections between pairs of nodes by using the previous snapshot as input. In addition, many approaches based on matrix factorization have also been proposed [27, 10, 11, 14, 28]. Matrix factorization has the advantage of generating embeddings that are simple to interpret. The main idea is to learn a latent low-dimensional vector representation for each node, such that nodes close to each other in the low-rank space are similar. However, most real-world networks are sparse, so the methods considering only the structural characteristics of the network may perform poorly.

Moreover, there are some approaches that utilize temporal information, which reveals the relationship between the current snapshot and the previous snapshots, to predict links on dynamic networks [29, 10, 30, 12, 14, 31]. The main idea is temporal smoothness, which assumes that the embeddings from the current snapshot should not deviate dramatically from those of the previous snapshots [29]. Although the approaches utilizing structural or temporal information alone have shown encouraging performance, combining topological and temporal information provides further insights that are missed by single models.
LIST [31] exploits structural and temporal information and characterizes the network with a time function to predict links on evolving networks. STEP [9] exploits structural and temporal information for prediction and characterizes network evolution using a global transition matrix that reflects different types of evolutionary patterns. Despite their good predictive performance, these approaches fail to consider the intrinsic geometric structure of the data and therefore lack discriminative information for prediction. Other approaches, such as those based on graph neural networks (GNNs) [32, 33, 34], require node features, which may not always be available, and may produce embeddings with limited interpretability. In addition to link prediction, unlink prediction also plays an important role in network evolution. For example, some temporary relationships among people in an online social network are formed over a short term, and some of these relationships are likely to decay or even disappear in the future. Until now, only a few studies have addressed this problem. Preusse et al. [35] use various network structural features extracted from a knowledge network to predict the disappearance of links. Oliveira et al. [22] propose a method that combines topological information and information about individuals (semantic metrics) on evolving networks to predict the disappearance of links. Compared to previous link prediction and unlink prediction methods, LULS has several advantages. First, to preserve as much higher-order topological information as possible, LULS applies two random walk methods on each snapshot, so that after optimization the global matrix and the temporary matrices include more and better topological information. Second, LULS utilizes NMF to optimize a global matrix and a series of temporary matrices that reconstruct the matrices obtained by the random walk methods. On the one hand, the global matrix is optimized over all the snapshots, so that it preserves a wealth of temporal information (i.e., long-term relations). On the other hand, each temporary matrix contains the specific topological information of the corresponding snapshot, which accurately preserves the short-term relations between nodes. Therefore, the topological information of the future snapshot can be precisely reconstructed. Last but not least, LULS also optimizes the global matrix by using graph regularization constraints, which makes the global matrix preserve more topological information and further improves the prediction results. ## III Preliminaries In this section, we first give the problem definition and then briefly introduce the random walk method. Important symbols used in this paper are listed in Table I. Table I: Symbols and definitions. Symbol | Definition ---|--- $G_{t}=(V_{t},E_{t})$ | the snapshot $G_{t}$ with node set $V_{t}$ and edge set $E_{t}$ $N$ | the final timestamp $\textbf{W}_{t}$, $\textbf{H}_{t}$ | the similarity matrices for $G_{t}$ U | the global representation matrix $\textbf{V}_{t}$ | the temporary representation matrix for $G_{t}$ $m$ | the dimension of the node representation $\theta$ | the decay weight $\lambda$ | the smoothness weight $\gamma$ | controls the importance of the constraint terms $k$ | the number of random walk steps ### III-A Problem Definition Consider an undirected and unweighted graph $G=(V,E)$, where $V$ and $E$ denote the set of observed nodes and edges, respectively.
Let $\textbf{A}\in[0,1]^{|V|\times|V|}$ be the associated adjacency matrix, D be the diagonal matrix of node degrees, and I be the identity matrix. A dynamic network is represented as a sequence of graph snapshots $\mathcal{G}=\\{G_{1},G_{2},\cdots,G_{N}\\}$, where $N$ is the final timestamp. We denote by $G_{t}=(V_{t},E_{t})$ the graph at timestamp $t$ ($1\leq t\leq N$). For simplicity, we assume the set of nodes does not change across snapshots, i.e., $V_{t_{i}}=V_{t_{j}}$ for any $t_{i},t_{j}\in\\{1,\cdots,N\\}$, meaning that we ignore newly added and removed nodes. However, edges do appear and disappear in snapshots over the timestamps. Our goal is to predict the edges that will be added or removed in $G_{N+1}$ based on the previously observed snapshots in $\mathcal{G}$. ###### Definition 1 (Dynamic Link Prediction) Given a sequence of graph snapshots $\mathcal{G}=\\{G_{1},G_{2},\cdots,G_{N}\\}$, where $N$ is the final timestamp, all snapshots share the same set of nodes, but some edges may emerge or disappear along the sequence. For any pair of unconnected nodes $u$ and $v$ in $G_{N}$, the link prediction task on a dynamic network aims to predict whether $u$ and $v$ will have a link in $G_{N+1}$. ###### Definition 2 (Dynamic Unlink Prediction) Given a sequence of graph snapshots $\mathcal{G}=\\{G_{1},G_{2},\cdots,G_{N}\\}$, where $N$ is the final timestamp, all snapshots share the same set of nodes, but some edges may emerge or disappear along the sequence. For any edge $(u,v)\in E_{N}$, the unlink prediction task on a dynamic network aims to predict whether the edge $(u,v)$ will disappear in $G_{N+1}$. Figure 1: An example of temporal link prediction and unlink prediction. In practice, we use the link states at previous timestamps $t=1$ and $t=2$ to predict the formations and disappearances of links at timestamp $t=3$. For example, as shown in Figure 1, given a graph and its two snapshots at $t=1$ and $t=2$, we know all the changes of edges in both snapshots. For instance, nodes $v$ and $w$ are unconnected at timestamp $t=1$ and then get linked at timestamp $t=2$, while the edge between $u$ and $v$ present at timestamp $t=1$ disappears at timestamp $t=2$. Then, we aim to predict all the to-be-formed links (e.g., the edge between $m$ and $w$) and to-be-removed links (e.g., the edge between $u$ and $m$) at timestamp $t=3$ by using the link states at previous timestamps. ### III-B Random Walk Method Random walk is one of the most popular methods to measure the importance of nodes in a graph with respect to a given node. Suppose that we start a standard random walk from a given node $u$; then $p^{(0)}$ is a one-hot vector in which the entry corresponding to $u$ is 1 and the remaining entries are 0. $\textbf{N}_{rw}$ is the transition probability matrix given by $\textbf{N}_{rw}=\textbf{D}^{-1}\textbf{A}$. Then, the $k$-step probability vector can be iteratively calculated by $p^{(k)}=\textbf{N}_{rw}^{T}p^{(k-1)}=(\textbf{N}_{rw}^{T})^{k}p^{(0)}.$ Let $\textbf{P}^{(k)}\in\mathbb{R}^{|V|\times|V|}$ be the matrix that combines $p^{(k)}$ for all the nodes in the graph. Then, the $ij$-th entry of $\textbf{P}^{(k)}$ represents the probability that a random walker starting at node $i$ reaches node $j$ in $k$ steps.
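As a concrete illustration (our sketch, not part of the original text), the matrix $\textbf{P}^{(k)}$ can be computed as follows, assuming a dense NumPy adjacency matrix with no isolated nodes:

```python
import numpy as np

def k_step_probabilities(A, k):
    # Stacking the vectors p^(k) for every start node is equivalent to
    # taking the k-th power of N_rw = D^{-1} A: entry (i, j) is the
    # probability that a walker starting at node i is at node j after k steps.
    N_rw = A / A.sum(axis=1, keepdims=True)
    return np.linalg.matrix_power(N_rw, k)
```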
In this work, we use $\hat{\textbf{P}}^{(k)}=\textbf{P}^{(k)}+(\textbf{P}^{(k)})^{T}$ as the similarity scores for all pairs of nodes, where $\hat{\textbf{P}}^{(k)}_{ij}$ denotes the sum of the probabilities that a random walker jumps from node $i$ to node $j$ and from node $j$ to node $i$ in $k$ steps. ## IV Methodology In this section, we describe the detailed procedure of LULS, which consists of three main steps. First, for each snapshot $G_{t}$, LULS calculates two matrices $\textbf{H}_{t}$ and $\textbf{W}_{t}$ by using local random walk methods to collect higher-order topological information. Then, LULS initializes a global representation matrix U and a temporary representation matrix $\textbf{V}_{t}$ for each snapshot $G_{t}$ to represent the long-term relations and short-term relations, respectively, and optimizes U and $\textbf{V}_{t}$ by applying NMF according to $\textbf{H}_{t}$ and $\textbf{W}_{t}$ of each snapshot. Finally, LULS computes the similarity matrix R of the future snapshot based on the matrices $\textbf{V}_{1},\textbf{V}_{2},\cdots,\textbf{V}_{N}$ and U, and predicts the states of links and unlinks for the future snapshot. The whole framework is given in Algorithm 1, and we elaborate each step as follows. Algorithm 1 The LULS Algorithm Input: a series of adjacency matrices $\textbf{A}_{1},\textbf{A}_{2},\cdots,\textbf{A}_{N}$; the hyper-parameters $\lambda$, $\gamma$, and $\theta$ Output: the similarity matrix R 1: compute matrices ${\textbf{H}_{1}},{\textbf{H}_{2}},\cdots,{\textbf{H}_{N}}$ 2: compute matrices ${\textbf{W}_{1}},{\textbf{W}_{2}},\cdots,{\textbf{W}_{N}}$ 3: for $t=1$ to $N$ do 4: randomly initialize ${\textbf{V}_{t}}$ 5: end for 6: randomly initialize U 7: repeat 8: for $t=1$ to $N$ do 9: update ${\textbf{V}_{t}}$ according to Eq. (5) 10: end for 11: update U according to Eq. (6) 12: until termination criteria are reached 13: compute R according to Eq. (7) 14: return R ### IV-A Collecting Topological Information In the first step, for each snapshot $G_{t}$, LULS computes two matrices ${\textbf{H}_{t}}$ and ${\textbf{W}_{t}}$ to represent topological information. Note that the basic idea of NMF is to learn a low-dimensional vector for each node by factorizing the adjacency matrix. However, the adjacency matrix cannot fully capture the higher-order relations between nodes. Moreover, since real-world networks can be extremely large, with millions or even billions of nodes, it is computationally expensive to apply global proximity methods to measure the strength of connections between nodes. Therefore, a few variants of random walk, which can effectively collect the topological information, have been proposed as solutions [23, 36]. In this work, we adopt the light lazy random walk (LLRW) [23] and propose a new random walk variant called the modified light lazy random walk (MLLRW) to collect the higher-order topological information. LLRW and MLLRW are defined as follows: * (1) Light Lazy Random Walk (LLRW). $\textbf{N}_{rw}=(\textbf{D}+\alpha\textbf{I})^{-1}(\alpha\textbf{I}+\textbf{A}),$ where $\alpha\in\mathbb{N}_{0}$ is a nonnegative integer hyper-parameter. Compared to the standard random walk, LLRW retains some probability mass at the current node. Moreover, LLRW degenerates to the standard random walk when $\alpha=0$, while $\alpha=l$ indicates that the random walker performs $l$ self-loops at each node. * (2) Modified Light Lazy Random Walk (MLLRW).
$\textbf{N}_{rw}=\beta\textbf{S}+(1-\beta)\left((\textbf{D}+\alpha\textbf{I})^{-1}(\alpha\textbf{I}+\textbf{A})\right),$ where S is the matrix of degree-centrality similarity between every pair of nodes in the network, and $\beta\in[0,1]$ is a hyper-parameter. MLLRW biases the ranking scores of the nodes toward higher-degree nodes: a random walker of MLLRW performs a light lazy walk to one of the neighbors of the current node with probability $(1-\beta)$ and jumps to any other node in the graph according to S with probability $\beta$. Additionally, MLLRW degenerates to LLRW when $\beta=0$. Let $\hat{\textbf{P}}^{(k)}_{LLRW}$ and $\hat{\textbf{P}}^{(k)}_{MLLRW}$ represent the similarity matrices introduced in Section III-B, obtained by performing LLRW and MLLRW over the graph, respectively. Then, we separately calculate $\textbf{H}_{t}=\hat{\textbf{P}}^{(k)}_{LLRW}$ and $\textbf{W}_{t}=\hat{\textbf{P}}^{(k)}_{MLLRW}$ for each snapshot $G_{t}$. ### IV-B Node Representation In a dynamic real-world network, nodes have long-term relations and short-term relations, and the temporal predictions of links and unlinks are determined by these two relations. Consequently, we leverage NMF to find temporary representation matrices $\textbf{V}_{1},\textbf{V}_{2},\cdots,\textbf{V}_{N}$ and a global representation matrix U such that the product of U and $\textbf{V}_{t}$ approximates the topological information $\textbf{W}_{t}$ of $G_{t}$. Note that the global representation U can be considered as the long-term relation between nodes, while the temporary representations $\textbf{V}_{1},\textbf{V}_{2},\cdots,\textbf{V}_{N}$, which evolve over time, are regarded as short-term relations. Specifically, let $m$ be the dimension of the node representation; for each $\textbf{W}_{t}$, we aim to factorize the matrix as ${\textbf{W}_{t}}\approx\textbf{U}\textbf{V}_{t}^{T}$, where the matrices $\textbf{U}\in{\mathbb{R}_{+}}^{|V|\times m}$ and ${\textbf{V}_{t}}\in{\mathbb{R}_{+}}^{|V|\times m}$ are the global representation matrix and the temporary representation matrix, respectively. Additionally, to ensure that the temporary representations of nodes do not change significantly between snapshots, we apply a regularization term to maintain temporal smoothness. Therefore, for the matrix $\textbf{W}_{t}$, the objective function is given as $\displaystyle\mathop{\min}\limits_{\textbf{U}\geq 0,{\textbf{V}_{t}}\geq 0}J$ $\displaystyle=\sum\limits_{t=1}^{N}{{\theta^{N-t}}\left\|{{\textbf{W}_{t}}-\textbf{U}\textbf{V}_{t}^{T}}\right\|}_{F}^{2}$ (1) $\displaystyle+\lambda\sum\limits_{t=2}^{N}{\theta^{N-t}}{{\left\|{{\textbf{V}_{t}}-{\textbf{V}_{t-1}}}\right\|}^{2}_{F}},$ where $\theta\in[0,1]$ is the decay weight and $\lambda$ is the smoothness weight. Furthermore, to enhance the node representation U so that it preserves more topological information, motivated by the graph regularization technique [37], we define the similarity constraint term $M_{t}$ based on $\textbf{H}_{t}$ for each snapshot $G_{t}$ as follows: $M_{t}=\frac{1}{2}{\sum\limits_{ij}{\left\|{u_{i}-u_{j}}\right\|}^{2}}\textbf{H}_{t_{ij}}=Tr(\textbf{U}^{T}{\textbf{L}_{t}}\textbf{U}),$ (2) where $\textbf{H}_{t_{ij}}$ is the $ij$-th entry of $\textbf{H}_{t}$, $\textbf{L}_{t}=\textbf{D}_{t}-\textbf{H}_{t}$ is the Laplacian matrix, and $Tr(\cdot)$ denotes the trace of a matrix.
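Before combining these pieces into the final objective, the following is a minimal sketch (ours, not the authors' implementation) of how the quantities used above can be assembled with dense NumPy arrays: the LLRW and MLLRW transition matrices, the similarity matrices $\textbf{H}_{t}$ and $\textbf{W}_{t}$, and the Laplacian $\textbf{L}_{t}$. Since the text does not spell out how S is constructed, any concrete choice of S here is an assumption.

```python
import numpy as np

def llrw_transition(A, alpha=1):
    # Light lazy random walk: N_rw = (D + alpha*I)^{-1} (alpha*I + A)
    n = len(A)
    D = np.diag(A.sum(axis=1))
    return np.linalg.inv(D + alpha * np.eye(n)) @ (alpha * np.eye(n) + A)

def mllrw_transition(A, S, alpha=1, beta=0.01):
    # Modified LLRW: mix the LLRW step with jumps guided by S
    return beta * S + (1 - beta) * llrw_transition(A, alpha)

def rw_similarity(N_rw, k):
    # Symmetrized k-step similarity from Section III-B: P^(k) + (P^(k))^T
    P = np.linalg.matrix_power(N_rw, k)
    return P + P.T

def graph_laplacian(H):
    # L_t = D_t - H_t, used in the regularizer M_t = Tr(U^T L_t U)
    return np.diag(H.sum(axis=1)) - H

# For each snapshot with adjacency A_t (and an assumed similarity matrix S):
#   H_t = rw_similarity(llrw_transition(A_t), k)
#   W_t = rw_similarity(mllrw_transition(A_t, S), k)
#   L_t = graph_laplacian(H_t)
```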
Overall, the enhanced objective function is expressed as follows: $\displaystyle\mathop{\min}\limits_{\textbf{U}\geq 0,\textbf{V}_{t}\geq 0}J=$ $\displaystyle\sum\limits_{t=1}^{N}{{\theta^{N-t}}\left\|{{\textbf{W}_{t}}-\textbf{U}\textbf{V}_{t}^{T}}\right\|}_{F}^{2}+\gamma\sum\limits_{t=1}^{N}M_{t}$ (3) $\displaystyle+\lambda\sum\limits_{t=2}^{N}{\theta^{N-t}}{{\left\|{{\textbf{V}_{t}}-{\textbf{V}_{t-1}}}\right\|}^{2}_{F}},$ where $\gamma$ is a hyper-parameter to control the relative importance of the constraint terms. ### IV-C Optimization In this section, we present an iterative update algorithm to solve the optimization problem in Eq. (3). In each iteration, the algorithm updates each matrix in turn while fixing the other matrices. This procedure repeats until the matrices converge or the maximum number of iterations is reached. By omitting the decay weights $\theta^{N-t}$, Eq. (3) can be simplified as: $\displaystyle\mathop{\min}\limits_{\textbf{U}\geq 0,{\textbf{V}_{t}}\geq 0}J$ $\displaystyle={\sum\limits_{t=1}^{N}{\left\|{{\textbf{W}_{t}}-\textbf{U}\textbf{V}_{t}^{T}}\right\|}^{2}_{F}}+\gamma\sum\limits_{t=1}^{N}{Tr({\textbf{U}^{T}}{\textbf{L}_{t}}\textbf{U})}$ $\displaystyle+\lambda\sum\limits_{t=2}^{N}{{{\left\|{{\textbf{V}_{t}}-{\textbf{V}_{t-1}}}\right\|}^{2}_{F}}}.$ We first address the problem of optimizing $\textbf{V}_{t}$ for each $t\in[1,N]$. According to Eq. (3), the optimization problem is transformed as follows: $\begin{split}\mathop{\min}\limits_{{\textbf{V}_{t}}\geq 0}J=&\sum\limits_{t=1}^{N}{Tr\left(({\textbf{W}_{t}}-\textbf{U}\textbf{V}_{t}^{T}){({\textbf{W}_{t}}-\textbf{U}\textbf{V}_{t}^{T})^{T}}\right)}\\\ &+\lambda\sum\limits_{t=2}^{N}{Tr\left(({\textbf{V}_{t}}-{\textbf{V}_{t-1}}){({\textbf{V}_{t}}-{\textbf{V}_{t-1}})^{T}}\right)}.\end{split}$ (4) Based on the non-negativity constraint of ${\textbf{V}_{t}}$ and following standard constrained optimization theory, we introduce the Lagrangian multiplier ${\phi_{t}}=\left[{{\phi_{ij}}}\right]$ and minimize the Lagrangian function $L$, such that $L=J+\sum\limits_{t=1}^{N}{Tr({\phi_{t}}{\textbf{V}_{t}}^{T})}.$ By computing the derivative of $L$ with respect to ${\textbf{V}_{t}}$, we have the following expression: $\displaystyle\frac{{\delta L}}{{\delta{\textbf{V}_{t}}}}$ $\displaystyle=-2{\textbf{W}_{t}}\textbf{U}+2{\textbf{V}_{t}}{\textbf{U}^{T}}\textbf{U}+2\lambda({\textbf{V}_{t}}-{\textbf{V}_{t-1}})+{\phi_{t}}.$ Next, by setting ${{\delta L}\over{\delta{\textbf{V}_{t}}}}=0$ and using the KKT condition ${\left[{{\phi_{t}}}\right]_{ij}}{\left[{{\textbf{V}_{t}}}\right]_{ij}}=0$, we obtain the following equation for ${\left[{{\textbf{V}_{t}}}\right]_{ij}}$: ${[-{\textbf{W}_{t}}\textbf{U}+{\textbf{V}_{t}}{\textbf{U}^{T}}\textbf{U}+\lambda({\textbf{V}_{t}}-{\textbf{V}_{t-1}})]_{ij}}{({\textbf{V}_{t}})_{ij}}=0.$ Then, the update rule for ${\textbf{V}_{t}}$ is derived as follows: ${\textbf{V}_{t}}\leftarrow{\textbf{V}_{t}}\odot\sqrt{{{{\textbf{W}_{t}}\textbf{U}+\lambda{\textbf{V}_{t-1}}}\over{{\textbf{V}_{t}}{\textbf{U}^{T}}\textbf{U}+\lambda{\textbf{V}_{t}}}}}.$ (5) The global matrix U can be learned in a way very similar to that of the latent factor ${\textbf{V}_{t}}$.
To handle the non-negativity constraint, we introduce the Lagrangian multiplier $\psi=\left[{{\psi_{ij}}}\right]$ and minimize the Lagrangian function $L$: $L=J+Tr(\psi\textbf{U}^{T}).$ Taking the derivative of $L$ with respect to U, we have the following expression: $\displaystyle{{\delta L}\over{\delta\textbf{U}}}=\sum\limits_{t=1}^{N}\left(-2{\textbf{W}_{t}}{\textbf{V}_{t}}+2\textbf{U}{\textbf{V}_{t}}^{T}{\textbf{V}_{t}}+2\gamma{\textbf{L}_{t}}\textbf{U}\right)+\psi.$ By setting ${{\delta L}\over{\delta{\textbf{U}}}}=0$ and using the KKT condition ${[\psi]_{ij}}{[\textbf{U}]_{ij}}=0$, we obtain the following equation for ${[\textbf{U}]_{ij}}$: ${\left[\sum\limits_{t=1}^{N}\left(-{\textbf{W}_{t}}{\textbf{V}_{t}}+\textbf{U}{\textbf{V}_{t}}^{T}{\textbf{V}_{t}}+\gamma{\textbf{L}_{t}}\textbf{U}\right)\right]_{ij}}{(\textbf{U})_{ij}}=0.$ Then the update rule for U is derived as follows: $\textbf{U}\leftarrow\textbf{U}\odot\sqrt{{{\sum\limits_{t=1}^{N}{({\textbf{W}_{t}}{\textbf{V}_{t}}+\gamma{\textbf{H}_{t}}\textbf{U})}}\over{\sum\limits_{t=1}^{N}{(\textbf{U}{\textbf{V}_{t}}^{T}{\textbf{V}_{t}}+\gamma{\textbf{D}_{t}}\textbf{U})}}}}.$ (6) ### IV-D Predicting Links and Unlinks After optimizing $\textbf{V}_{1},\textbf{V}_{2},\cdots,\textbf{V}_{N}$ and U, LULS calculates the similarity matrix of the future snapshot for the prediction of links and unlinks. The probabilities of link formation and link disappearance can be obtained from the product of the latent factors U and ${\textbf{V}_{t}}$ for each $t\in[1,N]$, such that $\textbf{R}=\sum\nolimits_{t=1}^{N}{\textbf{U}\textbf{V}_{t}}^{T}.$ (7) Each $ij$-th entry of R denotes the proximity score for the pair of nodes $i$ and $j$. For the link prediction problem, the higher the similarity score between two unconnected nodes in R, the higher the probability that they will connect in the future. In contrast, for the unlink prediction problem, if a pair of connected nodes has a lower similarity score in R, the edge between them has a higher probability of disappearing in the future. ### IV-E Complexity Analysis In this section, we discuss the time complexity of LULS. The local spectral diffusion [6] depends on the number of nodes in the network, and its time complexity is $O(Nl{\left|V\right|^{2}})$, where $l$ is the number of iterations to convergence. For NMF, the computation is dominated by matrix multiplications, i.e., the multiplication of a $|V|\times|V|$ matrix with a $|V|\times m$ matrix. Therefore, the complexity involved in updating $\textbf{V}_{t}$ and U is $O(rN{\left|V\right|^{2}}m)$, where $r$ denotes the number of iterations. Besides, the complexity of computing R is $O({\left|V\right|^{2}}m)$. Thus, the overall time complexity of our LULS model is $O((rN+1){\left|V\right|^{2}}m+Nl{\left|V\right|^{2}})$. Note that, since all matrices are sparse, the cost of multiplying two sparse matrices is much smaller than $O({|V|^{2}})$. ## V Experimental Setup ### V-A Datasets We use six real-world dynamic networks to evaluate the performance of LULS. The datasets are Facebook Forum, Reality Mining, Dublin, Hep-Th, Facebook Messages, and Haggle. Table II summarizes the detailed information of these datasets. Table II: Statistics of the datasets.
| Facebook Forum | Reality Mining | Dublin | Hep-Th | Facebook Messages | Haggle ---|---|---|---|---|---|--- Number of nodes | 899 | 6416 | 6454 | 22908 | 274 | 18470 Number of edges | 7046 | 7250 | 24097 | 2444798 | 15737 | 2124 Average degree | 15.68 | 2.26 | 7.45 | 213.44 | 16.57 | 15.51 Density | 0.0175 | 0.0004 | 0.0012 | 0.0093 | 0.0087 | 0.0568 Avg shortest-path distance | 2.8320 | 4.2367 | 6.6808 | 2.7220 | 3.0552 | 2.42 Number of snapshots | 6 | 7 | 8 | 11 | 8 | 6 * - Facebook Forum [38]: The Facebook Forum network consists of the private messages exchanged between Facebook users from May to October 2004, where the nodes are the users, and each edge is a message exchanged between a pair of users. * - Reality Mining [38]: The Reality Mining network consists of mobile phone call events between a set of core users at the Massachusetts Institute of Technology (MIT), where the vertices are users, and each edge is a phone call or voicemail between a pair of users. * - Dublin [38]: The Dublin network is a human contact network where the vertices represent individuals and the edges denote proximity. * - Hep-Th (http://networkrepository.com): This dataset is a collaboration network from the high energy physics theory section of arXiv, where the vertices are authors and an edge denotes a common publication for a pair of authors. * - Facebook Messages (http://networkrepository.com): This dataset is a Facebook-like social network originating from an online community of students at the University of California, where the nodes are users, and the edges are the messages exchanged between pairs of users. * - Haggle (http://konect.cc/networks): This network reflects connections between people as measured by wireless devices carried by the participants. A node symbolizes a person, and an edge between two people indicates that they came into contact. ### V-B Evaluation Metric In this paper, we apply AUC and average precision (AP) to evaluate the performance of LULS. Specifically, for each dataset, we use ${G_{1}},{G_{2}},\cdots,{G_{N-1}}$ as the training data and ${G_{N}}$ as the test data. Furthermore, the test data is divided into positive and negative test samples. For the link prediction problem, the positive test set consists of the edges that appear in ${G_{N}}$ but not in ${G_{N-1}}$, while the negative test set consists of the edges that appear in neither ${G_{N-1}}$ nor ${G_{N}}$. On the other hand, for the unlink prediction problem, the positive test set contains the edges that appear in both ${G_{N-1}}$ and ${G_{N}}$, while the negative test set consists of the edges that appear in ${G_{N-1}}$ and disappear in ${G_{N}}$. To avoid class imbalance, we randomly sample a negative test set of the same size as the positive test set for both link prediction and unlink prediction. In addition, the experiments are carried out five times independently, and the average result is reported. Generally, the AUC is described as the likelihood that a randomly selected actual link from the positive test set is assigned a higher score than a randomly selected link from the negative test set. Formally, if among $n$ comparisons there are ${n^{\prime}}$ times where the edge from the positive test set has a higher score than the edge from the negative test set and $n^{\prime\prime}$ times where they have the same score, the AUC is calculated as follows: $\displaystyle AUC=\frac{{n^{\prime}+0.5*n^{\prime\prime}}}{n}.$ Note that an algorithm performs better than pure chance when the value of the AUC is greater than 0.5.
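For illustration, here is a minimal sketch of this sampling-based AUC (the function name, number of comparisons, and seed are our own choices, not from the paper):

```python
import numpy as np

def auc_score(pos_scores, neg_scores, n=10000, seed=0):
    # n random comparisons: n' wins for the positive edge, n'' ties,
    # AUC = (n' + 0.5 * n'') / n as in the formula above.
    rng = np.random.default_rng(seed)
    pos = rng.choice(pos_scores, size=n)
    neg = rng.choice(neg_scores, size=n)
    return (np.sum(pos > neg) + 0.5 * np.sum(pos == neg)) / n
```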
AP combines recall and precision for ranking results. Given a ranked list of predicted links, we calculate the precision after each true positive; the average of these values gives the average precision. ### V-C Baselines We compare our method with the following state-of-the-art methods: * - AA [20]: AA assumes that two nodes are more likely to be linked together if they share more common neighbors. * - DCN [39]: This method uses a decayed common neighbor measure to characterize the relationship between a node pair. * - TD [11]: This method stacks all adjacency matrices of historical snapshots into a tensor with time as the third dimension to improve the link prediction results. * - TMF [14]: This method uses matrix factorization techniques to characterize the network as a function of time. * - GrNMF [40]: This method directly approximates the link matrix at time $T$ using NMF by setting the networks from 1 to $T-1$ as a regularizer. Note that the methods AA and DCN can be applied only to static networks. Therefore, in the case of link prediction, these approaches are performed on the links from all past time periods combined into a single link matrix. For the unlink prediction task, these approaches are conducted on the link states of ${G_{N-1}}$. ## VI Experimental Results In the experiments, all the parameters of LULS have been manually tuned. Specifically, we set $m=5$, $\theta=0.4$, $\gamma=1$ and $\lambda=0.0001$ for the Facebook Forum, Facebook Messages, and Haggle networks. For the Reality Mining and Dublin networks, we set $\gamma=0.0001$. For the random walk variants, we use $\alpha=1$ for LLRW and $\beta=0.01$ for MLLRW. Besides, we use the random walk step $k=4$ for the Facebook Forum, Reality Mining, Facebook Messages, and Hep-Th networks, and $k=5$ for the Dublin network. Moreover, to evaluate the effectiveness of the smoothness and constraint terms, we implement three versions of LULS as follows: * - LULS1: $\lambda\neq 0$ and $\gamma\neq 0$; * - LULS2: $\lambda\neq 0$ and $\gamma=0$; * - LULS3: $\lambda=0$ and $\gamma=0$. Table III: The AUC scores for the link prediction task. Methods \ Datasets | Facebook Forum | Reality Mining | Dublin | Hep-Th | Facebook Messages | Haggle ---|---|---|---|---|---|--- LULS1 | 0.9324 | 0.9735 | 0.9913 | 0.7351 | 0.9765 | 0.9881 LULS2 | 0.9232 | 0.9764 | 0.9909 | 0.7089 | 0.9741 | 0.9870 LULS3 | 0.8278 | 0.9750 | 0.9908 | 0.6974 | 0.9750 | 0.9835 AA | 0.5252 | 0.5192 | 0.8914 | 0.5612 | 0.7351 | 0.9230 DCN | 0.5313 | 0.5194 | 0.9620 | 0.5612 | 0.4601 | 0.4532 TD | 0.9059 | 0.9214 | 0.6495 | 0.6268 | 0.9192 | 0.5693 TMF | 0.8314 | 0.9284 | 0.6037 | 0.6985 | 0.7122 | 0.9127 GrNMF | 0.8792 | 0.9204 | 0.6542 | 0.6837 | 0.9474 | 0.9213 Table IV: The AP scores for the link prediction task. Methods \ Datasets | Facebook Forum | Reality Mining | Dublin | Hep-Th | Facebook Messages | Haggle ---|---|---|---|---|---|--- LULS1 | 0.8942 | 0.9570 | 0.9850 | 0.7012 | 0.9528 | 0.9810 LULS2 | 0.8731 | 0.9604 | 0.9843 | 0.6926 | 0.9521 | 0.9775 LULS3 | 0.8646 | 0.9602 | 0.9844 | 0.6845 | 0.9510 | 0.9642 AA | 0.5160 | 0.5150 | 0.8909 | 0.5547 | 0.7400 | 0.9430 DCN | 0.5646 | 0.4424 | 0.9563 | 0.5610 | 0.4619 | 0.6096 TD | 0.8870 | 0.9106 | 0.9028 | 0.6248 | 0.9357 | 0.7481 TMF | 0.8271 | 0.9117 | 0.5926 | 0.6954 | 0.7018 | 0.8020 GrNMF | 0.8444 | 0.9071 | 0.6018 | 0.6773 | 0.7155 | 0.8587 Table V: The AUC scores for the unlink prediction task.
Methods \ Datasets | Facebook Forum | Reality Mining | Dublin | Hep-Th | Facebook Messages | Haggle ---|---|---|---|---|---|--- LULS1 | 0.8363 | 0.7970 | 0.7525 | 0.6923 | 0.7790 | 0.9345 LULS2 | 0.8268 | 0.7968 | 0.7532 | 0.6567 | 0.8044 | 0.9334 LULS3 | 0.8278 | 0.7661 | 0.7517 | 0.5051 | 0.8036 | 0.9316 AA | 0.5460 | 0.5192 | 0.5437 | 0.5558 | 0.5101 | 0.6492 DCN | 0.4858 | 0.5014 | 0.5871 | 0.5437 | 0.5117 | 0.6096 TD | 0.7596 | 0.7651 | 0.5731 | 0.6154 | 0.7032 | 0.9367 TMF | 0.6780 | 0.7946 | 0.5861 | 0.6748 | 0.6961 | 0.8143 GrNMF | 0.6395 | 0.7848 | 0.6351 | 0.6645 | 0.7411 | 0.8445 Table VI: The AP scores for the unlink prediction task. Methods \ Datasets | Facebook Forum | Reality Mining | Dublin | Hep-Th | Facebook Messages | Haggle ---|---|---|---|---|---|--- LULS1 | 0.8120 | 0.7885 | 0.8231 | 0.7014 | 0.8162 | 0.9333 LULS2 | 0.7958 | 0.8773 | 0.8218 | 0.6612 | 0.8293 | 0.9320 LULS3 | 0.7877 | 0.8776 | 0.8210 | 0.5152 | 0.8280 | 0.9268 AA | 0.5462 | 0.5083 | 0.5437 | 0.5610 | 0.5089 | 0.6520 DCN | 0.4931 | 0.5011 | 0.5667 | 0.4619 | 0.4991 | 0.5741 TD | 0.8077 | 0.7620 | 0.7447 | 0.6272 | 0.7928 | 0.9377 TMF | 0.8280 | 0.7940 | 0.5612 | 0.6675 | 0.6961 | 0.8138 GrNMF | 0.7812 | 0.7845 | 0.6326 | 0.6651 | 0.7155 | 0.8341 ### VI-A Link Prediction In this experiment, we evaluate the performance of LULS for link prediction. Table III and Table IV show the performance of different approaches on six dynamic networks for the link prediction task. It can be observed that the LULS models perform best on all the datasets, indicating that LULS can effectively integrate both temporal and structural information to extract significant node representations for the link prediction task. Notably, our model shows impressive performance even on very sparse graphs, e.g., the Dublin network. Furthermore, all the approaches based on dynamic characteristics (i.e., LULS, TD, TMF, and GrNMF) consistently perform better than the approaches that ignore the temporal behaviour of the network (i.e., AA and DCN) on almost all the datasets. In addition, among the LULS models, LULS1 is better than LULS2 and LULS3. Consequently, the smoothness and similarity constraint terms play important roles in link prediction. ### VI-B Unlink Prediction Next, we investigate the effectiveness of LULS for the unlink prediction task. In Table V and Table VI, the AUC and AP scores show that the LULS models significantly outperform the baseline methods on almost all the datasets. Notably, our model shows impressive performance on the Haggle network, suggesting that unlinks are easier to predict on dense graphs than on sparse graphs. Similar to the link prediction task, all the approaches that consider the temporal information of the network outperform the other methods. Moreover, among the LULS models, LULS1 is again better than LULS2 and LULS3, which indicates that the smoothness and similarity constraint terms are also useful in unlink prediction. Additionally, compared to link prediction, we can observe that the AUC scores of unlink prediction are lower, indicating that it is more difficult to predict unlinks than links in the future network. ## VII Conclusion In this work, we address the problems of temporal link prediction and unlink prediction on dynamic networks. Assuming that there are two kinds of relations between nodes, namely long-term and short-term relations, we propose an effective algorithm called LULS for temporal link prediction and unlink prediction based on these relations.
Specifically, LULS collects the topological information of each snapshot of a dynamic network and generates a global matrix and a sequence of temporary matrices to represent the long-term and short-term relations, respectively. Then, LULS utilizes the global matrix and the temporary matrices to predict the links and unlinks of the future network. The experiments conducted on six real-world networks show the superior results of LULS compared with the state-of-the-art methods. ## References * [1] A. Kumar, S. S. Singh, K. Singh, and B. Biswas, “Link prediction techniques, applications, and performance: A survey,” _Physica A: Statistical Mechanics and its Applications_ , vol. 553, p. 124289, 2020. * [2] Y. Koren, R. Bell, and C. Volinsky, “Matrix factorization techniques for recommender systems,” _IEEE Computer_ , vol. 42, no. 8, pp. 30–37, 2009. * [3] M. Nickel, K. Murphy, V. Tresp, and E. Gabrilovich, “A review of relational machine learning for knowledge graphs,” _arXiv preprint arXiv:1503.00759_ , vol. 104, no. 1, pp. 11–33, 2016. * [4] I. A. Kovács, K. Luck, K. Spirohn, Y. Wang, C. Pollis, S. Schlabach, W. Bian, D. K. Kim, N. Kishore, T. Hao, M. A. Calderwood, M. Vidal, and A. L. Barabási, “Network-based prediction of protein interactions,” _Nature Communications_ , vol. 10, no. 1, 2019. * [5] C. Aggarwal and K. Subbian, “Evolutionary network analysis: A survey,” _ACM Computing Surveys_ , vol. 47, no. 1, 2014. * [6] V. Martínez, F. Berzal, and J.-C. Cubero, “A survey of link prediction in complex networks,” _ACM Computing Surveys_ , vol. 49, no. 4, 2017. * [7] P. Wang, B. Xu, Y. Wu, and X. Zhou, “Link prediction in social networks: the state-of-the-art,” _Science in China Series F: Information Sciences_ , vol. 58, no. 1, pp. 1–38, 2015. * [8] D. Liben-Nowell and J. Kleinberg, “The link-prediction problem for social networks,” _Journal of the Association for Information Science and Technology_ , vol. 58, no. 7, pp. 1019–1031, 2007. * [9] H. Chen and J. Li, “Exploiting structural and temporal evolution in dynamic link prediction,” in _Proceedings of the 27th ACM International Conference on Information and Knowledge Management_ , 2018, pp. 427–436. * [10] D. Deng, C. Shahabi, U. Demiryurek, L. Zhu, R. Yu, and Y. Liu, “Latent space model for road networks to predict time-varying traffic,” in _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 2016, pp. 1525–1534. * [11] D. M. Dunlavy, T. G. Kolda, and E. Acar, “Temporal link prediction using matrix and tensor factorizations,” _ACM Transactions on Knowledge Discovery from Data (TKDD)_ , vol. 5, no. 2, pp. 1–27, 2011. * [12] R. A. Rossi, B. Gallagher, J. Neville, and K. Henderson, “Modeling dynamic behavior in large evolving graphs,” in _Proceedings of the sixth ACM international conference on Web search and data mining_ , 2013, pp. 667–676. * [13] P. Sarkar and A. W. Moore, “Dynamic social network analysis using latent space models,” _ACM SIGKDD Explorations Newsletter_ , vol. 7, no. 2, pp. 31–40, 2005. * [14] W. Yu, C. C. Aggarwal, and W. Wang, “Temporally factorized network modeling for evolutionary network analysis,” in _Proceedings of the Tenth ACM International Conference on Web Search and Data Mining_ , 2017, pp. 455–464. * [15] D. Cai, X. He, X. Wu, and J. Han, “Non-negative matrix factorization on manifold,” in _2008 Eighth IEEE International Conference on Data Mining_. IEEE, 2008, pp. 63–72. * [16] H. Kwak, H. Chun, and S.
Moon, “Fragile online relationship: a first look at unfollow dynamics in twitter,” in _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ , 2011, pp. 1091–1100. * [17] D. Quercia, M. Bodaghi, and J. Crowcroft, “Loosing ‘friends’ on facebook,” in _Proceedings of the 4th Annual ACM Web Science Conference_ , 2012. * [18] F. Kivran-Swaine, P. Govindan, and M. Naaman, “The impact of network structure on breaking ties in online social networks: unfollowing on twitter,” in _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ , 2011, pp. 1101–1104. * [19] R. Eyal, A. Rosenfeld, S. Sina, and S. Kraus, “Predicting and identifying missing node information in social networks,” _ACM Transactions on Knowledge Discovery From Data_ , vol. 8, no. 3, 2014. * [20] T. Zhou, L. Lü, and Y.-C. Zhang, “Predicting missing links via local information,” _European Physical Journal B_ , vol. 71, no. 4, pp. 623–630, 2009. * [21] S. A. Myers and J. Leskovec, “The bursty dynamics of the twitter information network,” in _Proceedings of the 23rd international conference on World wide web_ , 2014, pp. 913–924. * [22] M. A. de Oliveira, K. C. Revoredo, and J. E. O. Luna, “Semantic unlink prediction in evolving social networks through probabilistic description logic,” in _2014 Brazilian Conference on Intelligent Systems_ , 2014. * [23] K. He, P. Shi, D. Bindel, and J. E. Hopcroft, “Krylov subspace approximation for local community detection in large networks,” _ACM Transactions on Knowledge Discovery From Data_ , vol. 13, no. 5, pp. 1–30, 2019. * [24] M. Rahman and M. A. Hasan, “Link prediction in dynamic networks using graphlet,” in _ECML PKDD 2016 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume 9851_ , 2016, pp. 394–409. * [25] İsmail Güneş, Şule Gündüz-Öğüdücü, and Z. Çataltepe, “Link prediction using time series of neighborhood-based node similarity scores,” _Data Mining and Knowledge Discovery_ , vol. 30, no. 1, pp. 147–180, 2016. * [26] B. Moradabadi and M. R. Meybodi, “A novel time series link prediction method: Learning automata approach,” _Physica A: Statistical Mechanics and its Applications_ , vol. 482, pp. 422–432, 2017. * [27] E. Acar, D. M. Dunlavy, and T. G. Kolda, “Link prediction on evolving data using matrix and tensor factorizations,” in _2009 IEEE International conference on data mining workshops_. IEEE, 2009, pp. 262–269. * [28] L. Zhu, D. Guo, J. Yin, G. Ver Steeg, and A. Galstyan, “Scalable temporal latent space inference for link prediction in dynamic social networks,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 28, no. 10, pp. 2765–2777, 2016. * [29] Y. Chi, X. Song, D. Zhou, K. Hino, and B. L. Tseng, “Evolutionary spectral clustering by incorporating temporal smoothness,” in _Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining_ , 2007, pp. 153–162. * [30] Y.-R. Lin, Y. Chi, S. Zhu, H. Sundaram, and B. L. Tseng, “Facetnet: a framework for analyzing communities and their evolutions in dynamic networks,” in _Proceedings of the 17th international conference on World Wide Web_ , 2008, pp. 685–694. * [31] W. Yu, W. Cheng, C. C. Aggarwal, H. Chen, and W. Wang, “Link prediction with spatial and temporal consistency in dynamic networks,” in _IJCAI_ , 2017, pp. 3343–3349. * [32] J. Chen, J. Zhang, X. Xu, C. Fu, D. Zhang, Q. Zhang, and Q.
Xuan, “E-lstm-d: A deep learning framework for dynamic network link prediction,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , 2019. * [33] M. Yang, J. Liu, L. Chen, Z. Zhao, X. Chen, and Y. Shen, “An advanced deep generative framework for temporal link prediction in dynamic networks,” _IEEE Transactions on Cybernetics_ , vol. 50, no. 12, pp. 4946–4957, 2019. * [34] K. Lei, M. Qin, B. Bai, G. Zhang, and M. Yang, “Gcn-gan: A non-linear temporal link prediction model for weighted dynamic networks,” in _IEEE INFOCOM 2019 - IEEE Conference on Computer Communications_. IEEE, 2019, pp. 388–396. * [35] J. Preusse, J. Kunegis, M. Thimm, S. Staab, and T. Gottron, “Structural dynamics of knowledge networks,” in _Seventh International AAAI Conference on Weblogs and Social Media_ , 2013. * [36] D. Kamuhanda, M. Wang, and K. He, “Sparse nonnegative matrix factorization for multiple-local-community detection,” _IEEE Transactions on Computational Social Systems_ , vol. 7, no. 5, pp. 1220–1233, 2020. * [37] D. Cai, X. He, J. Han, and T. S. Huang, “Graph regularized nonnegative matrix factorization for data representation,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 33, no. 8, pp. 1548–1560, 2010. * [38] A. Grover and J. Leskovec, “node2vec: Scalable feature learning for networks,” in _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 2016, pp. 855–864. * [39] H. Tian and R. Zafarani, “Exploiting common neighbor graph for link prediction,” in _Proceedings of the 29th ACM International Conference on Information & Knowledge Management_, 2020, pp. 3333–3336. * [40] X. Ma, P. Sun, and Y. Wang, “Graph regularized nonnegative matrix factorization for temporal link prediction in dynamic networks,” _Physica A: Statistical Mechanics and its Applications_ , vol. 496, pp. 121–136, 2018. Christina Muro received a bachelor's degree in Computer Science from the University of Dar es Salaam, Dar es Salaam, Tanzania, in 2009, and a master's degree in Computer Science from the University of Dodoma, Dodoma, Tanzania, in 2013. She is currently pursuing the Ph.D. degree with the School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China. She has been a Lecturer in the Department of Computer Science and Engineering at the University of Dodoma since 2018. Her research interests include machine learning and social network analysis. Boyu Li received his M.S. degree and Ph.D. degree in Computer Science and Technology from Jilin University, in 2014 and 2018, respectively. He is currently a postdoctoral researcher in the School of Computer Science and Technology, Huazhong University of Science and Technology. His research interests include privacy-preserving data publishing, social networks, and graph embedding. Kun He is currently a Professor in the School of Computer Science and Technology, Huazhong University of Science and Technology (HUST), Wuhan, P.R. China, and was a Mary Shepard B. Upson Visiting Professor for the 2016-2017 academic year in Engineering, Cornell University, NY, USA. She received the B.S. degree in physics from Wuhan University, Wuhan, China, in 1993; the M.S. degree in computer science from Huazhong Normal University, Wuhan, China, in 2002; and the Ph.D. degree in system engineering from HUST, Wuhan, China, in 2006. Her research interests include machine learning, deep learning, social networks, and algorithm design.
# Remarks on the factorization and monotonicity method for inverse acoustic scatterings Takashi FURUYA ###### Abstract We study the factorization and monotonicity method for inverse acoustic scattering problems. Firstly, we give a new general functional analysis theorem for the monotonicity method. Compared with the factorization method, the general theorem for the monotonicity method generates reconstruction schemes under weaker a priori assumptions on unknown targets, and can directly deal with mixed problems in which the unknown targets have several different boundary conditions. Using the general theorem, we give a reconstruction scheme for the mixed crack, on which the Dirichlet boundary condition is imposed on one side and the Neumann boundary condition on the other side, which is a new extension of the monotonicity method. ## 1 Introduction In this paper, we study the factorization and monotonicity method for inverse acoustic scattering problems. The factorization method was first introduced by Kirsch ([17]) for inverse acoustic obstacle scattering. It has since been studied by many authors (see e.g., [1, 3, 4, 7, 12, 18, 19, 20, 21, 23, 25]), and it is well known as one of the classical qualitative methods, which include the linear sampling method of Colton and Kirsch ([5]), the singular sources method of Potthast ([24]), the probe method of Ikehata ([16]), etc. The monotonicity method, on the other hand, was recently introduced by Harrach in [15] for electrical impedance tomography. Very recently, it was extended to the Helmholtz equation in a bounded domain [13, 14], and to inverse acoustic obstacle [2], Dirichlet crack [9], and medium [11, 22] scatterings. It was found from these works that the monotonicity method has the advantage over the factorization method that reconstruction schemes can be given under weaker a priori assumptions on unknown targets. The contributions of this paper are the following. * [A] We give a general functional analysis theorem for the monotonicity method (Theorem 4.2) that includes previous works [2, 9, 11] on inverse acoustic scatterings. * [B] Using the general theorem, we give a reconstruction scheme for the inverse mixed crack scattering, which is a new extension of the monotonicity method (Theorems 5.13 and 5.14). A characteristic of the factorization method is to prepare a general functional analysis theorem which generates reconstruction schemes from the spectrum of the far-field operator (see Theorem 2.15 of [19]). Based on this idea, the contribution [A] will be discussed as the monotonicity-method version of such a theorem. The general theorem of the factorization method assumes that the real part of the middle operator of the far-field operator has a decomposition into a positive coercive operator and a compact operator, while the imaginary part of the middle operator is strictly positive (see the assumptions (b) and (c) in Theorem 3.1). These two assumptions impose a priori restrictions on the unknown target; in particular, the positivity of the imaginary part corresponds to restrictions on the wave number. For example, it is necessary for the inverse medium scattering that the wave number is not a transmission eigenvalue with respect to the unknown medium (see e.g., Theorem 4.10 of [19]), and for the inverse obstacle scattering that the wave number is not an eigenvalue with respect to the boundary condition of the unknown obstacle (see e.g., Corollary 2.16 of [19]).
However, the general theorem of the monotonicity method does not assume the positivity of the imaginary part (see Theorem 4.2), which means that the monotonicity method can essentially avoid restrictions on the wave number. In fact, monotonicity reconstructions for inverse obstacle (Theorem 5.3 of [2]) and medium (Theorems 5.1–5.3 of [11]) scatterings have been achieved without restrictions on the wave number. The advantage of the monotonicity method over the factorization method is not only the weaker a priori assumptions, but also the ability to deal directly with mixed problems in which the unknown target consists of two separate components with different boundary conditions. The real part of the middle operator of the far-field operator for mixed problems does not decompose into a positive coercive operator and a compact operator (see e.g., Theorem 3.4 of [19], Theorem 3.2 of [20]). In order to obtain a decomposition into a coercive and a compact part, the factorization method for mixed problems needs a masking approach, in which one component is covered by an artificial domain disjoint from the other component we want to reconstruct (see e.g., Lemma 3.5 of [19], (3.26) of [20]). However, the general theorem of the monotonicity method (see (3) of Theorem 4.2) does not assume such a real part decomposition, which means that the monotonicity method essentially does not need masking approaches. In fact, monotonicity reconstructions for the inverse mixed obstacle scattering (Theorem 5.5 of [2]) have been achieved without the masking approach. This paper not only studies previous works from the viewpoint of the general theorem, but also gives a new extension of the monotonicity method to the inverse acoustic mixed crack scattering, which corresponds to the contribution [B]. The mixed crack consists of only one component, but the Dirichlet boundary condition is imposed on one side of the unknown crack and the Neumann boundary condition on the other side (see the beginning of Section 5.5). The factorization method for the mixed crack has been studied in [25], but a closed curve extending the unknown crack has to be known, which is a very restrictive assumption (see Theorem 3.3 of [25]). Using the general theorem for the monotonicity method, we give a reconstruction scheme without assuming such an extended closed curve (see Theorems 5.13 and 5.14). This paper is organized as follows. In Section 2, we define the inverse acoustic scattering problem. In Section 3, we recall the functional analysis theorem for the factorization method. In Section 4, we give a new general functional analysis theorem (Theorem 4.2) for the monotonicity method. In Section 5, we study several applications of the general theorem. The results discussed in Sections 5.1–5.4 are the same as in previous works [2, 11, 9], while the one in Section 5.5, the monotonicity method for the inverse mixed crack scattering, is new. Finally, in Section 6, we give numerical examples for our theoretical results. ## 2 Inverse acoustic scattering First of all, we define the inverse acoustic scattering problem. Let $k>0$ be the wave number, and let $\theta\in\mathbb{S}^{d-1}$ ($d=2,3$) be the incident direction. We denote the incident field $u^{inc}(x,\theta)$ with incident direction $\theta$ by the plane wave of the form $u^{inc}(x,\theta):=\mathrm{e}^{ikx\cdot\theta},\ x\in\mathbb{R}^{d}.$ (2.1) Let $\Omega\subset\mathbb{R}^{d}$ be a bounded open set with smooth boundary $\partial\Omega$ such that the exterior $\mathbb{R}^{d}\setminus\overline{\Omega}$ is connected. Here, we denote $\overline{\Omega}=\Omega\cup\partial\Omega$.
In particular, we discuss the following two cases. In the first case, the scatterer $\Omega$ is a penetrable medium, and we determine the total field $u=u^{sca}+u^{inc}$ such that $\Delta u+k^{2}(1+q)u=0\ \mathrm{in}\ \mathbb{R}^{d},$ (2.2) $\lim_{r:=|x|\to\infty}r^{\frac{d-1}{2}}\biggl{(}\frac{\partial u^{sca}}{\partial r}-iku^{sca}\biggr{)}=0,$ (2.3) where $q\in L^{\infty}(\mathbb{R}^{d})$ has compact support such that $\Omega=\mathrm{supp}\ q$, and $\Delta$ is the Laplace operator. The Sommerfeld radiation condition (2.3) holds uniformly in all directions $\hat{x}:=\frac{x}{|x|}$. In the second case, $\Omega$ is an impenetrable obstacle, and we determine the total field $u=u^{sca}+u^{inc}$ such that $\Delta u+k^{2}u=0\ \mathrm{in}\ \mathbb{R}^{d}\setminus\overline{\Omega},$ (2.4) $\mathcal{B}u=0\ \mathrm{on}\ \partial\Omega,$ (2.5) and $u^{sca}$ satisfies the Sommerfeld radiation condition (2.3), where (2.5) denotes a boundary condition, for example, the Dirichlet boundary condition $\mathcal{B}u=u$, the Neumann boundary condition $\mathcal{B}u=\frac{\partial u}{\partial\nu}$, etc. In both cases, it is well known that there exists a unique solution $u^{sca}$ with the following asymptotic behaviour (see e.g., [6]): $u^{sca}(x)=\frac{\mathrm{e}^{ikr}}{r^{\frac{d-1}{2}}}\Bigl{\\{}u^{\infty}(\hat{x},\theta)+O\bigl{(}1/r\bigr{)}\Bigr{\\}},\ r\to\infty.$ (2.6) The function $u^{\infty}$ is called the far-field pattern of the scattered field $u^{sca}$. With the far-field pattern $u^{\infty}$, we define the far-field operator $F:L^{2}(\mathbb{S}^{d-1})\to L^{2}(\mathbb{S}^{d-1})$ by $Fg(\hat{x}):=\int_{\mathbb{S}^{d-1}}u^{\infty}(\hat{x},\theta)g(\theta)ds(\theta),\ \hat{x}\in\mathbb{S}^{d-1}.$ (2.7) In the inverse acoustic scattering problem, we reconstruct $\Omega$ from the far-field pattern $u^{\infty}(\hat{x},\theta)$ for all $\hat{x},\theta\in\mathbb{S}^{d-1}$ and a fixed $k>0$. In other words, given the far-field operator $F$, we reconstruct $\Omega$. ## 3 The factorization method Here, we recall the general functional analysis theorem for the factorization method. The following functional analytic theorem is proved in Theorem 2.15 of [19] and Theorem 3.1 of [8]. ###### Theorem 3.1 (Theorem 2.15 of [19] and Theorem 3.1 of [8]). Let $X\subset U\subset X^{*}$ be a Gelfand triple with a Hilbert space $U$ and a reflexive Banach space $X$ such that the embedding is dense. Furthermore, let Y be a Hilbert space and let $F:Y\to Y$, $G:X\to Y$, $T:X^{*}\to X$ be linear bounded operators such that $F=GTG^{*}.$ (3.1) We make the following assumptions: (a) $G$ is compact with dense range in $Y$. (b) $\mathrm{Re}T$ has the form $\mathrm{Re}T=C+K$, where $K:X^{*}\to X$ is some self-adjoint compact operator and $C:X^{*}\to X$ is some positive coercive operator, i.e., there exists a constant $c>0$ such that $\langle\varphi,C\varphi\rangle\geq c\left\|\varphi\right\|_{X^{*}}^{2}\ for\ all\ \varphi\in X^{*},$ (3.2) where $\langle\cdot,\cdot\rangle$ denotes the duality pairing between $X^{*}$ and $X$. (c) $\mathrm{Im}\langle\varphi,T\varphi\rangle>0$ for all $\varphi\in\overline{\mathrm{Ran}(G^{*})}$ with $\varphi\neq 0$.
Then, the operator $F_{\\#}:=\bigl{|}\mathrm{Re}F\bigr{|}+\mathrm{Im}F$ is non-negative, and the ranges of $G:X\to Y$ and $F_{\\#}^{1/2}:Y\to Y$ coincide with each other; that is, we have the following range identity: $\mathrm{Ran}(G)=\mathrm{Ran}(F_{\\#}^{1/2}).$ (3.3) Here, the real part and the imaginary part of an operator $A$ are the self-adjoint operators given by $\mathrm{Re}A=\displaystyle\frac{A+A^{*}}{2}\ \ \ \mathrm{and}\ \ \ \mathrm{Im}A=\displaystyle\frac{A-A^{*}}{2i}.$ (3.4) ###### Remark 3.2. It was long considered well known that the assumption (c) can be replaced by the injectivity of $T$ (as in Theorem 2.1 of [23]), and this replacement has mainly been used to relax the assumption that the wave number $k>0$ is not a transmission eigenvalue in inverse medium scatterings (see e.g., [7, 19, 20, 23]). However, it was found that this replacement is not correct (see Remark 3.2 of [8]); thus the factorization method for inverse medium scatterings essentially needs the assumption on transmission eigenvalues. ## 4 General theorems for the monotonicity method In this section, we give a new general functional analysis theorem for the monotonicity method. ###### Definition 4.1. Let $A,B:H\to H$ be self-adjoint compact linear operators on a Hilbert space $H$. We write $A\leq_{\mathrm{fin}}B,$ (4.1) if $B-A$ has only finitely many negative eigenvalues. ###### Theorem 4.2. Let $X\subset U\subset X^{*}$, $\tilde{X}\subset\tilde{U}\subset\tilde{X}^{*}$ be Gelfand triples with Hilbert spaces $U$, $\tilde{U}$ and reflexive Banach spaces $X$, $\tilde{X}$ such that the embeddings are dense. Furthermore, let Y be a Hilbert space and let $F:Y\to Y$, $\tilde{F}:Y\to Y$, $G:X\to Y$, $\tilde{G}:\tilde{X}\to Y$, $T:X^{*}\to X$, $\tilde{T}:\tilde{X}^{*}\to\tilde{X}$ be linear bounded operators such that $F=GTG^{*},\ \ \ \tilde{F}=\tilde{G}\tilde{T}\tilde{G}^{*}.$ (4.2) (1) Assume that (1a) $\mathrm{Re}T$ has the form $\mathrm{Re}T=C+K$, where $C:X^{*}\to X$ is some positive coercive operator and $K:X^{*}\to X$ is some self-adjoint compact operator. (1b) There exists a compact operator $R:\tilde{X}\to X$ such that $\tilde{G}=GR$. Then, $\mathrm{Re}\tilde{F}\leq_{\mathrm{fin}}\mathrm{Re}F.$ (4.3) (2) Assume that (2a) $\mathrm{Re}\tilde{T}$ has the form $\mathrm{Re}\tilde{T}=\tilde{C}+\tilde{K}$, where $\tilde{C}:\tilde{X}^{*}\to\tilde{X}$ is some positive coercive operator and $\tilde{K}:\tilde{X}^{*}\to\tilde{X}$ is some self-adjoint compact operator. (2b) There exists an infinite dimensional subspace $W$ in $\mathrm{Ran}(\tilde{G})$ such that $W\cap\mathrm{Ran}(G)=\\{0\\}$. Then, $\mathrm{Re}\tilde{F}\not\leq_{\mathrm{fin}}\mathrm{Re}F.$ (4.4) (3) Let $X_{j}\subset U_{j}\subset X^{*}_{j}$ ($j=1,2$) be a Gelfand triple with a Hilbert space $U_{j}$ and a reflexive Banach space $X_{j}$ such that the embedding is dense.
Let $F^{Mix}:Y\to Y$, $G^{Mix}:X_{1}\times X_{2}^{*}\to Y$, $T^{Mix}:X_{1}^{*}\times X_{2}\to X_{1}\times X_{2}^{*}$ be linear bounded operators such that $F^{Mix}=G^{Mix}T^{Mix}G^{Mix\ *}.$ (4.5) Assume that (3a) $\mathrm{Re}T^{Mix}$ has the form $\mathrm{Re}T^{Mix}=\left(\begin{array}[]{cc}C_{11}&C_{12}\\\ C_{21}&C_{22}\end{array}\right)+K^{Mix}$ where $C_{11}:X_{1}^{*}\to X_{1}$ is some positive coercive operator, $C_{12}:X_{2}\to X_{1}$ and $C_{21}:X_{1}^{*}\to X_{2}^{*}$ are some linear bounded operators, $C_{22}:X_{2}\to X_{2}^{*}$ is some negative coercive operator (that is, $-C_{22}$ is positive coercive), and $K^{Mix}:X_{1}^{*}\times X_{2}\to X_{1}\times X_{2}^{*}$ is some self-adjoint compact operator. (3b) There exists an infinite dimensional subspace $W_{1}$ in $\mathrm{Ran}(G^{Mix}R_{1}^{*})$ such that $W_{1}\cap\mathrm{Ran}\left(\left[\tilde{G},G^{Mix}R_{2}^{*}\right]\right)=\\{0\\}$ where $R_{1}:X_{1}^{*}\times X_{2}\to X_{1}^{*}$ is defined by $R_{1}\left(\begin{array}[]{ccc}f\\\ g\\\ \end{array}\right):=f$, and its adjoint operator $R^{*}_{1}:X_{1}\to X_{1}\times X_{2}^{*}$ is given by $R_{1}^{*}\varphi:=\left(\begin{array}[]{ccc}\varphi\\\ 0\\\ \end{array}\right)$. Then, $\mathrm{Re}F^{Mix}\not\leq_{\mathrm{fin}}\mathrm{Re}\tilde{F}.$ (4.6) ###### Remark 4.3. If the assumption (3b) is replaced by (3b)’ There exists a finite dimensional subspace $W_{2}$ in $\mathrm{Ran}(G^{Mix}R_{2}^{*})$ such that $W_{2}\cap\mathrm{Ran}\left(\left[\tilde{G},G^{Mix}R_{1}^{*}\right]\right)=\\{0\\}$ where $R_{2}:X_{1}^{*}\times X_{2}\to X_{2}$ is defined by $R_{2}\left(\begin{array}[]{ccc}f\\\ g\\\ \end{array}\right):=g$, and its adjoint operator $R^{*}_{2}:X^{*}_{2}\to X_{1}\times X_{2}^{*}$ is given by $R_{2}^{*}\psi:=\left(\begin{array}[]{ccc}0\\\ \psi\\\ \end{array}\right)$, then, we can show that $-\mathrm{Re}F^{Mix}\not\leq_{\mathrm{fin}}\mathrm{Re}\tilde{F}$ by the same argument. We recall the following technical lemmas which will be useful to prove Theorem 4.2. ###### Lemma 4.4 (Corollary 3.3 of [14]). Let $A,B:H\to H$ be self-adjoint compact linear operators on a Hilbert space $H$ with an inner product $(\cdot,\cdot)_{H}$. Then, the following statements are equivalent: (1) $A\leq_{\mathrm{fin}}B$ (2) There exists a finite dimensional subspace $V$ in $H$ such that $\left((B-A)v,v\right)_{H}\geq 0,$ (4.7) for all $v\in V^{\bot}$. ###### Lemma 4.5 (Lemma 4.6 in [14]). Let $X$, $Y$, and $Z$ be Hilbert spaces, and let $A:X\to Y$ and $B:X\to Z$ be bounded linear operators. Then, $\exists C>0:\ \left\|Ax\right\|_{Y}^{2}\leq C\left\|Bx\right\|_{Z}^{2}\ for\ all\ x\in X\ \ \ \ \Longleftrightarrow\ \ \ \ \mathrm{Ran}(A^{*})\subset\mathrm{Ran}(B^{*}).$ (4.8) ###### Lemma 4.6 (Lemma 4.7 in [14]). Let $X$, $Y$, $V\subset Z$ be subspaces of a vector space $Z$. If $X\cap Y=\\{0\\},\ \ \ \ and\ \ \ \ X\subset Y+V,$ (4.9) then, $\mathrm{dim}(X)\leq\mathrm{dim}(V)$. ###### Proof of Theorem 4.2. (1) Since the restriction $C\bigl{|}_{U}:U\to U$ is positive, there exists a positive square root $\hat{W}:U\to U$, i.e., $C\bigl{|}_{U}=\hat{W}^{2}$. Since we have, $\|\hat{W}\varphi\|^{2}_{U}=(\varphi,\hat{W}^{2}\varphi)_{U}=(\varphi,C\bigl{|}_{U}\varphi)_{U}=\left<\varphi,C\varphi\right>\leq\|C\|\|\varphi\|_{X^{*}}^{2},\ \ for\ all\ \varphi\in U,$ (4.10) and the embedding $U\subset X^{*}$ is dense, $\hat{W}$ has a bounded extension $W:X^{*}\to U$ of $\hat{W}$. 
By the positive coercivity of $C$, there exists a constant $c>0$ such that for all $\varphi\in U$, $c\|\varphi\|^{2}_{X^{*}}\leq\langle\varphi,C\varphi\rangle=(\varphi,C\bigl{|}_{U}\varphi)_{U}=\|\hat{W}\varphi\|_{U}^{2}.$ (4.11) Hence, by the dense embedding $U\subset X^{*}$, we have $c\|\varphi\|^{2}_{X^{*}}\leq\|W\varphi\|_{U}^{2}$ for all $\varphi\in X^{*}$, which implies that the extension $W:X^{*}\to U$ of $\hat{W}$ is boundedly invertible. It is easy to check that $C=W^{*}W$. By this, the factorization (4.2) of the operators $F$ and $\tilde{F}$, and the assumptions (1a) and (1b), we have $\displaystyle\mathrm{Re}F-\mathrm{Re}\tilde{F}=G\left[C+\mathrm{Re}K-R(\mathrm{Re}\tilde{T})R^{*}\right]G^{*}$ (4.12) $\displaystyle=$ $\displaystyle\left[GW^{*}\right]\left[W^{*^{-1}}CW^{-1}+W^{*^{-1}}\left\\{\mathrm{Re}K-R(\mathrm{Re}\tilde{T})R^{*}\right\\}W^{-1}\right]\left[GW^{*}\right]^{*}$ $\displaystyle=:$ $\displaystyle\hat{G}[I_{U}+\hat{K}]\hat{G}^{*}.$ Let $\left\\{\mu_{j},\phi_{j}\right\\}$ be an eigensystem of the self-adjoint compact operator $\hat{K}:U\to U$, and let $V:=\mathrm{span}\left\\{\phi_{j}:\mu_{j}\leq-\displaystyle\frac{1}{2}\right\\}.$ (4.13) Then, $V$ is a finite dimensional subspace of $U$, and for all $\varphi\in\left[\mathrm{Ran}\left(\hat{G}\bigl{|}_{V}\right)\right]^{\bot}$, which is equivalent to $\hat{G}^{*}\varphi\in V^{\bot}=\mathrm{span}\left\\{\phi_{j}:\mu_{j}>-\displaystyle\frac{1}{2}\right\\}$, $((\mathrm{Re}F-\mathrm{Re}\tilde{F})\varphi,\varphi)_{Y}=((I_{U}+\hat{K})\hat{G}^{*}\varphi,\hat{G}^{*}\varphi)_{U}\geq\displaystyle\frac{1}{2}\|\hat{G}^{*}\varphi\|_{U}^{2}\geq 0.$ (4.14) From $\mathrm{dim}\left[\mathrm{Ran}\left(\hat{G}\bigl{|}_{V}\right)\right]<\infty$ and Lemma 4.4, we conclude (4.3). (2) Assume on the contrary that $\mathrm{Re}\tilde{F}\leq_{\mathrm{fin}}\mathrm{Re}F.$ Then by Lemma 4.4, there exists a finite dimensional subspace $V_{1}$ in $Y$ such that $(\mathrm{Re}\tilde{F}\varphi,\varphi)_{Y}\leq\left(\mathrm{Re}F\varphi,\varphi\right)_{Y},$ (4.15) for all $\varphi\in V^{\bot}_{1}$. By the same argument as in the beginning of (1), there exists a boundedly invertible operator $\tilde{W}:\tilde{X}^{*}\to\tilde{U}$ and a self-adjoint compact operator $K$ such that $\mathrm{Re}\tilde{F}=[\tilde{G}\tilde{W}^{*}][I_{\tilde{U}}+K][\tilde{G}\tilde{W}^{*}]^{*},$ (4.16) which implies, by the same argument as in (4.13)–(4.14), that there exists a finite dimensional subspace $V_{2}$ in $Y$ and a constant $c>0$ such that $(\mathrm{Re}\tilde{F}\varphi,\varphi)_{Y}\geq\displaystyle\frac{1}{2}\|\tilde{W}\tilde{G}^{*}\varphi\|_{\tilde{U}}^{2}\geq c\|\tilde{G}^{*}\varphi\|_{\tilde{X}^{*}}^{2},$ (4.17) for all $\varphi\in V^{\bot}_{2}$. Setting $V:=V_{1}+V_{2}$, $V$ is a finite dimensional subspace of $Y$. Then by (4.15) and (4.17) we have $c\|\tilde{G}^{*}\varphi\|_{\tilde{X}^{*}}^{2}\leq(\mathrm{Re}F\varphi,\varphi)_{Y}\leq\|\mathrm{Re}T\|\|G^{*}\varphi\|_{X^{*}}^{2},$ (4.18) for all $\varphi\in V^{\bot}$. On the other hand, by the assumption (2b) and Lemma 4.6, we have $W\not\subset\mathrm{Ran}(G)+V$, which, since $W\subset\mathrm{Ran}(\tilde{G})$, implies $\mathrm{Ran}(\tilde{G})\not\subset\mathrm{Ran}(G)+V=\mathrm{Ran}\left([G,P_{V}]\right),$ (4.19) where $P_{V}$ denotes the orthogonal projection on $V$.
By this and Lemma 4.5, there exists a sequence $\displaystyle(\varphi_{n})_{n\in\mathbb{N}}\subset Y$ such that $\|\tilde{G}^{*}\varphi_{n}\|_{\tilde{X}^{*}}^{2}>n^{2}\left(\|G^{*}\varphi_{n}\|_{X^{*}}^{2}+\|P_{V}\varphi_{n}\|_{Y}^{2}\right),$ (4.20) for all $n\in\mathbb{N}$. Setting $\psi_{n}:=\displaystyle\frac{n^{\frac{1}{2}}\varphi_{n}}{\|\tilde{G}^{*}\varphi_{n}\|_{\tilde{X}^{*}}}$, we obtain $\|G^{*}\psi_{n}\|_{X^{*}}^{2}+\|P_{V}\psi_{n}\|_{Y}^{2}=n\displaystyle\frac{\|G^{*}\varphi_{n}\|_{X^{*}}^{2}+\|P_{V}\varphi_{n}\|_{Y}^{2}}{\|\tilde{G}^{*}\varphi_{n}\|^{2}_{\tilde{X}^{*}}}\leq\frac{1}{n},\ \ \ \ \|\tilde{G}^{*}\psi_{n}\|_{\tilde{X}^{*}}^{2}=n.$ (4.21) Setting $\tilde{\psi}_{n}:=(I-P_{V})\psi_{n}\in V^{\bot}$, we finally obtain $\|\tilde{G}^{*}\tilde{\psi}_{n}\|_{\tilde{X}^{*}}\geq\|\tilde{G}^{*}\psi_{n}\|_{\tilde{X}^{*}}-\|\tilde{G}^{*}\|\|P_{V}\psi_{n}\|_{Y}\to\infty,\ \ \ \mathrm{as}\ n\to\infty,$ (4.22) $\|G^{*}\tilde{\psi}_{n}\|_{X^{*}}\leq\|G^{*}\psi_{n}\|_{X^{*}}+\|G^{*}\|\|P_{V}\psi_{n}\|_{Y}\to 0,\ \ \ \mathrm{as}\ n\to\infty,$ (4.23) which contradicts (4.18). Therefore, we conclude (4.4). (3) Assume on the contrary that $\mathrm{Re}F^{Mix}\leq_{\mathrm{fin}}\mathrm{Re}\tilde{F}$. Then by Lemma 4.4, there exists a finite dimensional subspace $V_{1}$ in $Y$ such that $(\mathrm{Re}F^{Mix}\varphi,\varphi)_{Y}\leq(\mathrm{Re}\tilde{F}\varphi,\varphi)_{Y},$ (4.24) for all $\varphi\in V^{\bot}_{1}$. Since $C_{11}:X_{1}^{*}\to X_{1}$ and $-C_{22}^{-1}:X_{2}^{*}\to X_{2}$ are positive coercive, by the same argument as in the beginning of (1) there exist boundedly invertible operators $W_{jj}:X^{*}_{j}\to U_{j}$ ($j=1,2$) such that $C_{11}=W_{11}^{*}W_{11},\ \ \ \ \ -C_{22}^{-1}=W_{22}^{*}W_{22}.$ (4.25) We define the operator $W:=\left(\begin{array}[]{cc}W_{11}^{*}&0\\\ 0&W_{22}^{-1}\end{array}\right):U_{1}\times U_{2}\to X_{1}\times X_{2}^{*}$; hence, we have $\displaystyle\mathrm{Re}F^{Mix}$ $\displaystyle=$ $\displaystyle[G^{Mix}W]\left[W^{-1}\left\\{\left(\begin{array}[]{cc}C_{11}&C_{12}\\\ C_{21}&C_{22}\end{array}\right)+K^{Mix}\right\\}W^{*^{-1}}\right][G^{Mix}W]^{*}$ (4.28) $\displaystyle=:$ $\displaystyle\hat{G}^{Mix}\left[\left(\begin{array}[]{cc}I_{U_{1}}&\hat{C}_{12}\\\ \hat{C}_{21}&-I_{U_{2}}\end{array}\right)+\hat{K}^{Mix}\right]\hat{G}^{Mix\ *},$ (4.31) where $\hat{C}_{12}:=W_{11}^{*^{-1}}C_{12}W_{22}^{*}:U_{2}\to U_{1}$, $\hat{C}_{21}:=W_{22}C_{21}W_{11}^{-1}:U_{1}\to U_{2}$, and $\hat{K}^{Mix}=W^{-1}K^{Mix}W^{*^{-1}}:U_{1}\times U_{2}\to U_{1}\times U_{2}$.
Since $\hat{K}^{Mix}$ is a self-adjoint compact operator, by the same argument as in (4.13)–(4.14), there exists a finite dimensional subspace $V_{2}$ in $Y$, and constants $c_{1},c_{2},c_{3}>0$ such that $\displaystyle(\mathrm{Re}F^{Mix}\varphi,\varphi)_{Y}$ (4.38) $\displaystyle=$ $\displaystyle(\left[\left(\begin{array}[]{cc}I_{U_{1}}&\hat{C}_{12}\\\ \hat{C}_{21}&-I_{U_{2}}\end{array}\right)+\hat{K}^{Mix}\right]\hat{G}^{Mix\ *}\varphi,\hat{G}^{Mix\ *}\varphi)_{U_{1}\times U_{2}}$ $\displaystyle\geq$ $\displaystyle\|W_{11}R_{1}G^{Mix\ *}\varphi\|^{2}_{U_{1}}-\|W_{22}^{*^{-1}}R_{2}G^{Mix\ *}\varphi\|^{2}_{U_{2}}$ $\displaystyle-\frac{1}{2}\left\\{\|W_{11}R_{1}G^{Mix\ *}\varphi\|^{2}_{U_{1}}+\|W_{22}^{*^{-1}}R_{2}G^{Mix\ *}\varphi\|^{2}_{U_{2}}\right\\}$ $\displaystyle+(\left(\begin{array}[]{cc}0&C_{12}\\\ C_{21}&0\end{array}\right)G^{Mix\ *}\varphi,G^{Mix\ *}\varphi)_{U_{1}\times U_{2}}$ $\displaystyle\geq$ $\displaystyle c_{1}\|R_{1}G^{Mix\ *}\varphi\|^{2}_{X_{1}^{*}}-c_{2}\|R_{2}G^{Mix\ *}\varphi\|^{2}_{X_{2}}$ $\displaystyle-c_{3}\|R_{1}G^{Mix\ *}\varphi\|_{X_{1}^{*}}\|R_{2}G^{Mix\ *}\varphi\|_{X_{2}},$ for all $\varphi\in V_{2}^{\bot}$. Setting $V:=V_{1}+V_{2}$, $V$ is a finite dimensional subspace of $Y$. Then by (4.24) and (4.38) we have $\displaystyle c_{1}\|R_{1}G^{Mix\ *}\varphi\|_{X^{*}_{1}}^{2}$ $\displaystyle\leq$ $\displaystyle c_{2}\|R_{2}G^{Mix\ *}\varphi\|^{2}_{X_{2}}$ $\displaystyle+$ $\displaystyle c_{3}\|R_{1}G^{Mix\ *}\varphi\|_{X_{1}^{*}}\|R_{2}G^{Mix\ *}\varphi\|_{X_{2}}+\|\mathrm{Re}\tilde{T}\|\|\tilde{G}^{*}\varphi\|_{\tilde{X}^{*}}^{2},$ (4.39) for all $\varphi\in V^{\bot}$. On the other hand, by the assumption (3b) and Lemma 4.6, we have $W_{1}\not\subset\mathrm{Ran}(\left[\tilde{G},G^{Mix}R_{2}^{*}\right])+V$, which, since $W_{1}\subset\mathrm{Ran}(G^{Mix}R_{1}^{*})$, implies $\mathrm{Ran}(G^{Mix}R_{1}^{*})\not\subset\mathrm{Ran}\left([\tilde{G},G^{Mix}R_{2}^{*}]\right)+V=\mathrm{Ran}\left([\tilde{G},G^{Mix}R_{2}^{*},P_{V}]\right).$ (4.40) By this and Lemma 4.5, there exists a sequence $\displaystyle(\varphi_{n})_{n\in\mathbb{N}}\subset Y$ such that $\|R_{1}G^{Mix\ *}\varphi_{n}\|_{X^{*}_{1}}^{2}>n^{2}\left(\|\tilde{G}^{*}\varphi_{n}\|_{\tilde{X}^{*}}^{2}+\|R_{2}G^{Mix\ *}\varphi_{n}\|_{X_{2}}^{2}+\|P_{V}\varphi_{n}\|_{Y}^{2}\right),$ (4.41) for all $n\in\mathbb{N}$. Setting $\psi_{n}:=\displaystyle\frac{n^{\frac{1}{2}}\varphi_{n}}{\|R_{1}G^{Mix\ *}\varphi_{n}\|_{X^{*}_{1}}}$, we obtain $\|\tilde{G}^{*}\psi_{n}\|_{\tilde{X}^{*}}^{2}+\|R_{2}G^{Mix\ *}\psi_{n}\|_{X_{2}}^{2}+\|P_{V}\psi_{n}\|_{Y}^{2}\leq\frac{1}{n},\ \ \ \ \|R_{1}G^{Mix\ *}\psi_{n}\|_{X^{*}_{1}}^{2}=n.$ (4.42) Setting $\tilde{\psi}_{n}:=(I-P_{V})\psi_{n}\in V^{\bot}$, we finally obtain as $\ n\to\infty$ $\|R_{1}G^{Mix\ *}\tilde{\psi}_{n}\|_{X^{*}_{1}}\geq\|R_{1}G^{Mix\ *}\psi_{n}\|_{X^{*}_{1}}-\|R_{1}G^{Mix\ *}\|\|P_{V}\psi_{n}\|_{Y},$ (4.43) $\|\tilde{G}^{*}\tilde{\psi}_{n}\|_{\tilde{X}^{*}}+\|R_{2}G^{Mix\ *}\tilde{\psi}_{n}\|_{X_{2}}\leq\|\tilde{G}^{*}\psi_{n}\|_{\tilde{X}^{*}}+\|\tilde{G}^{*}\|\|P_{V}\psi_{n}\|_{Y}$ $\hskip 56.9055pt+\|R_{2}G^{Mix\ *}\psi_{n}\|_{X_{2}}+\|R_{2}G^{Mix\ *}\|\|P_{V}\psi_{n}\|_{Y}\to 0,$ (4.44) $\displaystyle\|R_{1}G^{Mix\ *}\tilde{\psi}_{n}\|_{X^{*}_{1}}\|R_{2}G^{Mix\ *}\tilde{\psi}_{n}\|_{X_{2}}$ $\displaystyle\leq$ $\displaystyle\left(\|R_{1}G^{Mix\ *}\psi_{n}\|+\|R_{1}G^{Mix\ *}\|\|P_{V}\psi_{n}\|\right)$ $\displaystyle\times\left(\|R_{2}G^{Mix\ *}\psi_{n}\|+\|R_{2}G^{Mix\ *}\|\|P_{V}\psi_{n}\|\right)\to 1+\|R_{2}G^{Mix\ *}\|,$ which contradicts (4.39).
Therefore, we conclude (4.6). ∎ ## 5 Applications of the general theorem In the following, we present several applications of Theorem 4.2 to inverse acoustic scattering. ### 5.1 Dirichlet obstacle Let $F^{Dir}_{\Omega}$ be the far-field operator for a Dirichlet obstacle $\Omega$, that is, $F^{Dir}_{\Omega}$ is the far-field operator defined by (2.7) corresponding to the solution of (2.4)–(2.5) where $\mathcal{B}u=u$. $F^{Dir}_{\Omega}$ has the factorization (see Theorem 1.15 of [19]) $F^{Dir}_{\Omega}=-G^{Dir}_{\Omega}S^{*}_{\Omega}G^{Dir\ *}_{\Omega},$ (5.1) where $G^{Dir}_{\Omega}:H^{1/2}(\partial\Omega)\to L^{2}(\mathbb{S}^{d-1})$ is the data-to-pattern operator defined by $G^{Dir}_{\Omega}f:=v^{\infty}$ where $v^{\infty}$ is the far-field pattern of a radiating solution $v$ (that is, $v$ satisfies the Sommerfeld radiation condition (2.3)) such that $\Delta v+k^{2}v=0\ \mathrm{in}\ \mathbb{R}^{d}\setminus\overline{\Omega},\ \ \ \ v=f\ \mathrm{on}\ \partial\Omega,$ (5.2) and $S_{\Omega}:H^{-1/2}(\partial\Omega)\to H^{1/2}(\partial\Omega)$ is the single layer boundary operator defined by $S_{\Omega}\varphi(x):=\int_{\partial\Omega}\varphi(y)\Phi(x,y)ds(y),\ x\in\partial\Omega,$ (5.3) where $\Phi(x,y)$ denotes the fundamental solution for the Helmholtz equation in $\mathbb{R}^{d}$, i.e., $\Phi(x,y):=\left\\{\begin{array}[]{ll}\displaystyle\frac{i}{4}H^{(1)}_{0}(k|x-y|),&\quad\mbox{$d=2$},\\\ \displaystyle\frac{e^{ik|x-y|}}{4\pi|x-y|},&\quad\mbox{$d=3$},\\\ \end{array}\right.$ (5.4) where $H^{(1)}_{0}$ is the Hankel function of the first kind of order zero. The single layer boundary operator $S_{\Omega}$ is of the form (see Lemma 1.14 of [19]) $S_{\Omega}=C_{\Omega}+K_{\Omega},$ (5.5) where $C_{\Omega}:H^{-1/2}(\partial\Omega)\to H^{1/2}(\partial\Omega)$ is some positive coercive operator and $K_{\Omega}:H^{-1/2}(\partial\Omega)\to H^{1/2}(\partial\Omega)$ is some compact operator. For a bounded domain $B\subset\mathbb{R}^{d}$ with the smooth boundary, we define the Herglotz operator $H_{\partial B}:L^{2}(\mathbb{S}^{d-1})\to L^{2}(\partial B)$ by $H_{\partial B}g(x):=\int_{\mathbb{S}^{d-1}}\mathrm{e}^{ik\theta\cdot x}g(\theta)ds(\theta),\ x\in\partial B.$ (5.6) If the range is restricted to the space $H^{1/2}(\partial B)$, we denote the resulting operator by $\hat{H}_{\partial B}:L^{2}(\mathbb{S}^{d-1})\to H^{1/2}(\partial B)$. From the definition, we have $\hat{H}_{\partial B}^{*}=G^{Dir}_{B}S_{B}$ (see e.g., Theorem 1.15 of [19]). Let $J_{\partial B}:H^{1/2}(\partial B)\to H^{-1/2}(\partial B)$ be the self-adjoint compact embedding. Then, we have $\displaystyle H^{*}_{\partial B}H_{\partial B}$ $\displaystyle=$ $\displaystyle\hat{H}^{*}_{\partial B}J_{\partial B}\hat{H}_{\partial B}=G^{Dir}_{B}S_{B}J_{\partial B}S_{B}^{*}G^{Dir\ *}_{B}$ (5.7) $\displaystyle=$ $\displaystyle G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ where $C_{B}J_{\partial B}C_{B}$ is a positive coercive operator and $\hat{K}_{B}$ is some self-adjoint compact operator. Assume that $B\subset\Omega$. Then, we can define $R:H^{1/2}(\partial B)\to H^{1/2}(\partial\Omega)$ by $Rf:=v\bigl{|}_{\partial\Omega}$ where $v$ is the radiating solution of (5.2) with $\Omega$ replaced by $B$. Since $v\bigl{|}_{\partial\Omega}\in C^{\infty}(\partial\Omega)$, $R$ is a compact operator, and by the definition we have $G^{Dir}_{B}=G^{Dir}_{\Omega}R,$ (5.8) which corresponds to the assumption (1b) of Theorem 4.2.
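In passing, we note that the fundamental solution (5.4), which enters the single layer operator $S_{\Omega}$ as well as the numerical experiments of Section 6, is straightforward to evaluate; a minimal Python sketch follows (the function name and interface are ours, for illustration only):

```python
import numpy as np
from scipy.special import hankel1

def fundamental_solution(x, y, k, d=2):
    """Evaluate the Helmholtz fundamental solution Phi(x, y) of (5.4)."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if d == 2:
        return 0.25j * hankel1(0, k * r)        # (i/4) H_0^{(1)}(k|x - y|)
    if d == 3:
        return np.exp(1j * k * r) / (4.0 * np.pi * r)
    raise ValueError("the dimension d must be 2 or 3")
```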
Applying (1) of Theorem 4.2 as $F=-F^{Dir}_{\Omega}=G^{Dir}_{\Omega}(C_{\Omega}+K^{*}_{\Omega})G^{Dir\ *}_{\Omega},$ $\tilde{F}=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ we have $H^{*}_{\partial B}H_{\partial B}\leq_{\mathrm{fin}}-\mathrm{Re}F^{Dir}_{\Omega}.$ (5.9) Assume that $B\not\subset\Omega$. Then, there exists a bounded domain $B_{0}\Subset B$ such that $B_{0}\cap\Omega=\emptyset$. We set $W:=\mathrm{Ran}(G^{Dir}_{B_{0}})\subset\mathrm{Ran}(G^{Dir}_{B})$; then $W$ is an infinite dimensional subspace of $L^{2}(\mathbb{S}^{d-1})$ because $G^{Dir}_{B_{0}}$ is injective (see e.g., Lemma 1.13 of [19]). From $B_{0}\cap\Omega=\emptyset$, we obtain $W\cap\mathrm{Ran}(G^{Dir}_{\Omega})=\\{0\\},$ (5.10) (see e.g., Lemma 4.2 of [2]) which corresponds to the assumption (2b) of Theorem 4.2. Applying (2) of Theorem 4.2 as $F=-F^{Dir}_{\Omega}=G^{Dir}_{\Omega}(C_{\Omega}+K^{*}_{\Omega})G^{Dir\ *}_{\Omega},$ $\tilde{F}=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ we have $H^{*}_{\partial B}H_{\partial B}\not\leq_{\mathrm{fin}}-\mathrm{Re}F^{Dir}_{\Omega}.$ (5.11) From the above discussion, we conclude the following theorem, which is the same result as Theorem 5.3 of [2]. ###### Theorem 5.1 (Theorem 5.3 of [2]). Let $B\subset\mathbb{R}^{d}$ be a bounded domain with the smooth boundary. Then, $B\subset\Omega\ \ \ \ \Longleftrightarrow\ \ \ \ H^{*}_{\partial B}H_{\partial B}\leq_{\mathrm{fin}}-\mathrm{Re}F^{Dir}_{\Omega}.$ (5.12) By the same argument as in Theorem 5.1, one can apply (1) and (2) of Theorem 4.2 as $F=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ $\tilde{F}=-F^{Dir}_{\Omega}=G^{Dir}_{\Omega}(C_{\Omega}+K^{*}_{\Omega})G^{Dir\ *}_{\Omega}.$ Then, we also conclude the following theorem. ###### Theorem 5.2. Let $B\subset\mathbb{R}^{d}$ be a bounded domain with the smooth boundary. Then, $\Omega\subset B\ \ \ \ \Longleftrightarrow\ \ \ \ -\mathrm{Re}F^{Dir}_{\Omega}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.13) ###### Remark 5.3. We remark that the factorization reconstruction for the inverse obstacle scattering needs to assume that $k^{2}$ is not a Dirichlet eigenvalue of $-\Delta$ in $\Omega$ (see e.g., Corollary 2.16 of [19]). In contrast, the monotonicity reconstruction (Theorems 5.1 and 5.2) does not require this assumption on Dirichlet eigenvalues. ### 5.2 Inhomogeneous medium Let $F^{Med}_{\Omega}$ be the far-field operator for an inhomogeneous medium $\Omega$ with the function $q\in L^{\infty}(\Omega)$, that is, $F^{Med}_{\Omega}$ is the far-field operator defined by (2.7) corresponding to the solution of (2.2)–(2.3). Throughout this section, we assume that there exists a constant $q_{0}>0$ such that $q\geq q_{0}$ in $\Omega$. $F^{Med}_{\Omega}$ has the following factorization by the same argument as in Theorem 4.5 of [19]: $F^{Med}_{\Omega}=H^{*}_{\Omega}T_{\Omega}H_{\Omega},$ (5.14) where $H_{\Omega}:L^{2}(\mathbb{S}^{d-1})\to L^{2}(\Omega)$ is the Herglotz operator defined by $H_{\Omega}g(x):=\int_{\mathbb{S}^{d-1}}\mathrm{e}^{ik\theta\cdot x}g(\theta)ds(\theta),\ x\in\Omega,$ (5.15) and some operator $T_{\Omega}:L^{2}(\Omega)\to L^{2}(\Omega)$ is of the form $T_{\Omega}=k^{2}qI_{L^{2}(\Omega)}+K_{\Omega},$ (5.16) where $K_{\Omega}$ is some compact operator. Let $B\subset\mathbb{R}^{d}$ be a bounded domain with the smooth boundary. Assume that $B\subset\Omega$.
We define the restriction operator $R:L^{2}(\Omega)\to L^{2}(B)$ by $Rf:=f\bigl{|}_{B}$. Then by the definition we have $H_{B}=RH_{\Omega}$, and for $\alpha\in(0,k^{2}q_{0})$ we have $F^{Med}_{\Omega}-\alpha H_{B}^{*}H_{B}=H^{*}_{\Omega}\left[k^{2}qI_{L^{2}(\Omega)}-\alpha R^{*}R+K_{\Omega}\right]H_{\Omega},$ (5.17) where the operator $k^{2}qI_{L^{2}(\Omega)}-\alpha R^{*}R$ is positive coercive when $\alpha\in(0,k^{2}q_{0})$. Applying (1) of Theorem 4.2 as $F=F^{Med}_{\Omega}-\alpha H_{B}^{*}H_{B}=H^{*}_{\Omega}\left[k^{2}qI_{L^{2}(\Omega)}-\alpha R^{*}R+K_{\Omega}\right]H_{\Omega},$ $\tilde{F}=0,$ we have for $\alpha\in(0,k^{2}q_{0})$ $\alpha H^{*}_{B}H_{B}\leq_{\mathrm{fin}}\mathrm{Re}F^{Med}_{\Omega}.$ (5.18) Assume that $B\not\subset\Omega$. Then, there exists a bounded domain $B_{0}\Subset B$ such that $B_{0}\cap\Omega=\emptyset$. We set $W:=\mathrm{Ran}(H^{*}_{B_{0}})\subset\mathrm{Ran}(H^{*}_{B})$; then $W$ is an infinite dimensional subspace of $L^{2}(\mathbb{S}^{d-1})$ because $H^{*}_{B_{0}}$ is injective. From $B_{0}\cap\Omega=\emptyset$, we obtain $W\cap\mathrm{Ran}(H^{*}_{\Omega})=\\{0\\}$ (5.19) (see e.g., Lemma 4.3 of [11]). Applying (2) of Theorem 4.2 as $F=F^{Med}_{\Omega}=H^{*}_{\Omega}T_{\Omega}H_{\Omega},$ $\tilde{F}=\alpha H^{*}_{B}H_{B},$ we have for $\alpha\in(0,k^{2}q_{0})$, $\alpha H^{*}_{B}H_{B}\not\leq_{\mathrm{fin}}\mathrm{Re}F^{Med}_{\Omega}.$ (5.20) From the above discussion, we conclude the following theorem, which is the same result as Theorem 5.1 of [11]. ###### Theorem 5.4 (Theorem 5.1 of [11]). Let $B\subset\mathbb{R}^{d}$ be a bounded domain with the smooth boundary. Then, for $\alpha\in(0,k^{2}q_{0})$ $B\subset\Omega\ \ \ \ \Longleftrightarrow\ \ \ \ \alpha H^{*}_{B}H_{B}\leq_{\mathrm{fin}}\mathrm{Re}F^{Med}_{\Omega}.$ (5.21) Assume that $\Omega\subset B$. Then, we can define the compact operator $R:L^{2}(\Omega)\to H^{1/2}(\partial B)$ by $Rg:=w\bigl{|}_{\partial B}$ where $w$ is the radiating solution of $\Delta w+k^{2}(1+q)w=-k^{2}qg\ \mathrm{in}\ \mathbb{R}^{d}.$ (5.22) From the definition we obtain $G^{Med}_{\Omega}=G^{Dir}_{B}R$ where the data-to-pattern operator $G^{Med}_{\Omega}$ is defined by $G^{Med}_{\Omega}g:=w^{\infty}$ and $G^{Dir}_{B}$ is defined as in (5.2) with $\Omega$ replaced by $B$. Since $G^{Med}_{\Omega}=H_{\Omega}^{*}T_{\Omega}$ and $T_{\Omega}$ is boundedly invertible (see e.g., the arguments of Theorem 4.5 of [19]), we have $H_{\Omega}^{*}=G^{Med}_{\Omega}T^{-1}_{\Omega}=G^{Dir}_{B}RT^{-1}_{\Omega}.$ (5.23) Applying (1) of Theorem 4.2 as $F=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ $\tilde{F}=F^{Med}_{\Omega}=H^{*}_{\Omega}T_{\Omega}H_{\Omega}=H^{*}_{\Omega}\left[k^{2}qI_{L^{2}(\Omega)}+K_{\Omega}\right]H_{\Omega},$ we have $\mathrm{Re}F^{Med}_{\Omega}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.24) Assume that $\Omega\not\subset B$. Then, there exists a bounded domain $\Omega_{0}\Subset\Omega$ such that $\Omega_{0}\cap B=\emptyset$. We set $W:=\mathrm{Ran}(G^{Med}_{\Omega_{0}})\subset\mathrm{Ran}(G^{Med}_{\Omega})=\mathrm{Ran}(H^{*}_{\Omega})$; then $W$ is an infinite dimensional subspace of $L^{2}(\mathbb{S}^{d-1})$ because $G^{Med}_{\Omega_{0}}$ is injective. From $\Omega_{0}\cap B=\emptyset$, we obtain $W\cap\mathrm{Ran}(G^{Dir}_{B})=\\{0\\}$ (5.25) (see e.g., Lemma 4.3 of [11]).
Applying (2) of Theorem 4.2 as $F=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ $\tilde{F}=F^{Med}_{\Omega}=H^{*}_{\Omega}T_{\Omega}H_{\Omega}=H^{*}_{\Omega}\left[k^{2}qI_{L^{2}(\Omega)}+K_{\Omega}\right]H_{\Omega},$ we have $\mathrm{Re}F^{Med}_{\Omega}\not\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.26) From the above discussion, we conclude the following theorem. ###### Theorem 5.5. Let $B\subset\mathbb{R}^{d}$ be a bounded domain with the smooth boundary. Then, $\Omega\subset B\ \ \ \ \Longleftrightarrow\ \ \ \ \mathrm{Re}F^{Med}_{\Omega}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.27) ###### Remark 5.6. We remark that the factorization reconstruction for the inverse medium scattering needs to assume that $k^{2}$ is not a transmission eigenvalue in $\Omega$ (see e.g., Theorem 4.10 of [19]). In contrast, the monotonicity reconstruction (Theorems 5.4 and 5.5) does not require the assumption on transmission eigenvalues. ### 5.3 Dirichlet crack Let $F^{Dir}_{\Gamma}$ be the far-field operator for a Dirichlet crack $\Gamma$ where $\Gamma\subset\mathbb{R}^{d}$ is a smooth non-intersecting open arc ($d=2$) or surface ($d=3$), and we assume that $\Gamma$ can be extended to some smooth, connected, closed curve ($d=2$) or surface ($d=3$) $\partial\Omega$ enclosing a bounded domain $\Omega$ in $\mathbb{R}^{d}$. The corresponding far-field pattern is defined by solving the scattering problem (2.4)–(2.5) where $\Omega$ in (2.4) is replaced by $\Gamma$ and the boundary condition (2.5) is replaced by $u_{-}=0\ \mathrm{on}\ \Gamma,\ \ \ \ \ \ \ \ u_{+}=0\ \mathrm{on}\ \Gamma,$ (5.28) where we denote by $u_{\pm}$ the limits of $u$ approaching the boundary from the exterior (+) and the interior (-) of the extended domain $\Omega$ (see Figure 1). Figure 1: The assumption on $\Gamma$ and the boundary condition. $F^{Dir}_{\Gamma}$ has the factorization (see Lemma 3.4 of [21]) $F^{Dir}_{\Gamma}=-G^{Dir}_{\Gamma}S^{*}_{\Gamma}G^{Dir\ *}_{\Gamma},$ (5.29) where $G^{Dir}_{\Gamma}:H^{1/2}(\Gamma)\to L^{2}(\mathbb{S}^{d-1})$ is the data-to-pattern operator corresponding to the crack $\Gamma$, and $S_{\Gamma}:\tilde{H}^{-1/2}(\Gamma)\to H^{1/2}(\Gamma)$ is the single layer boundary operator corresponding to the crack $\Gamma$ where we denote by $H^{1/2}(\Gamma):=\\{u\bigl{|}_{\Gamma}:u\in H^{1/2}(\partial\Omega)\\},$ (5.30) $\tilde{H}^{1/2}(\Gamma):=\\{u\bigl{|}_{\Gamma}:u\in H^{1/2}(\partial\Omega),\ \mathrm{supp}(u)\subset\overline{\Gamma}\\},$ (5.31) and $H^{-1/2}(\Gamma)$ and $\tilde{H}^{-1/2}(\Gamma)$ the dual spaces of $\tilde{H}^{1/2}(\Gamma)$ and $H^{1/2}(\Gamma)$, respectively. We have the following inclusion relation $\tilde{H}^{1/2}(\Gamma)\subset H^{1/2}(\Gamma)\subset L^{2}(\Gamma)\subset\tilde{H}^{-1/2}(\Gamma)\subset H^{-1/2}(\Gamma).$ (5.32) The single layer boundary operator $S_{\Gamma}$ is of the form (see Lemma 3.2 of [21]) $S_{\Gamma}=C_{\Gamma}+K_{\Gamma},$ (5.33) where $C_{\Gamma}:\tilde{H}^{-1/2}(\Gamma)\to H^{1/2}(\Gamma)$ is some positive coercive operator and $K_{\Gamma}:\tilde{H}^{-1/2}(\Gamma)\to H^{1/2}(\Gamma)$ is some compact operator. Let $\sigma\subset\mathbb{R}^{d}$ be a smooth arc ($d=2$) or surface ($d=3$). Assume that $\sigma\subset\Gamma$. Then, we define $R:L^{2}(\Gamma)\to L^{2}(\sigma)$ by $Rf:=f\bigl{|}_{\sigma}$. Let $J:H^{1/2}(\Gamma)\to L^{2}(\Gamma)$ be the compact embedding.
Since $\hat{H}_{\Gamma}^{*}=G^{Dir}_{\Gamma}S_{\Gamma}$ (see e.g., Lemma 3.4 of [21]), we have $H_{\sigma}=RH_{\Gamma}=RJ\hat{H}_{\Gamma}=RJS_{\Gamma}^{*}G^{Dir\ *}_{\Gamma},$ (5.34) where $H_{\Gamma}:L^{2}(\mathbb{S}^{d-1})\to L^{2}(\Gamma)$ is the Herglotz operator corresponding to $\Gamma$ and $\hat{H}_{\Gamma}:L^{2}(\mathbb{S}^{d-1})\to H^{1/2}(\Gamma)$ is the Herglotz operator whose range is restricted to the space $H^{1/2}(\Gamma)$. Applying (1) of Theorem 4.2 as $F=-F^{Dir}_{\Gamma}=G^{Dir}_{\Gamma}(C_{\Gamma}+K^{*}_{\Gamma})G^{Dir\ *}_{\Gamma},$ $\tilde{F}=H^{*}_{\sigma}H_{\sigma},$ we have $H^{*}_{\sigma}H_{\sigma}\leq_{\mathrm{fin}}-\mathrm{Re}F^{Dir}_{\Gamma}.$ (5.35) Assume that $\sigma\not\subset\Gamma$. Then, there exists $\sigma_{0}\Subset\sigma$ such that $\sigma_{0}\cap\Gamma=\emptyset$. We set $W:=\mathrm{Ran}(H^{*}_{\sigma_{0}})\subset\mathrm{Ran}(H^{*}_{\sigma})$; then $W$ is an infinite dimensional subspace of $L^{2}(\mathbb{S}^{d-1})$ because $H^{*}_{\sigma_{0}}$ is injective. From $\sigma_{0}\cap\Gamma=\emptyset$, we obtain $W\cap\mathrm{Ran}(G^{Dir}_{\Gamma})=\\{0\\}$ (5.36) (see e.g., Lemma 4.1 of [9]). Applying (2) of Theorem 4.2 as $F=-F^{Dir}_{\Gamma}=G^{Dir}_{\Gamma}(C_{\Gamma}+K^{*}_{\Gamma})G^{Dir\ *}_{\Gamma},$ $\tilde{F}=H^{*}_{\sigma}H_{\sigma},$ we have $H^{*}_{\sigma}H_{\sigma}\not\leq_{\mathrm{fin}}-\mathrm{Re}F^{Dir}_{\Gamma}.$ (5.37) From the above discussion, we conclude the following theorem, which is the same result as Theorem 1.1 of [9]. ###### Theorem 5.7 (Theorem 1.1 of [9]). Let $\sigma\subset\mathbb{R}^{d}$ be a smooth arc ($d=2$) or surface ($d=3$). Then, $\sigma\subset\Gamma\ \ \ \ \Longleftrightarrow\ \ \ \ H^{*}_{\sigma}H_{\sigma}\leq_{\mathrm{fin}}-\mathrm{Re}F^{Dir}_{\Gamma}.$ (5.38) By the same argument as in Theorem 5.7, one can apply (1) and (2) of Theorem 4.2 as $F=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ $\tilde{F}=-F^{Dir}_{\Gamma}=G^{Dir}_{\Gamma}(C_{\Gamma}+K^{*}_{\Gamma})G^{Dir\ *}_{\Gamma}.$ Then, we also conclude the following theorem, which is the same result as Theorem 1.2 of [9]. ###### Theorem 5.8 (Theorem 1.2 of [9]). Let $B\subset\mathbb{R}^{d}$ be a bounded domain with the smooth boundary. Then, $\Gamma\subset B\ \ \ \ \Longleftrightarrow\ \ \ \ -\mathrm{Re}F^{Dir}_{\Gamma}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.39) ###### Remark 5.9. We remark that the factorization reconstruction for the inverse crack scattering also does not restrict the wave number (see Theorem 3.9 of [21]). One of the advantages of the monotonicity method over the factorization method is that it provides not only inside tests (Theorem 5.7) but also outside tests (Theorem 5.8). ### 5.4 Mixed obstacle Let $F^{Mix}_{\Omega_{1},\Omega_{2}}$ be the far-field operator for the mixed obstacle $\Omega=\Omega_{1}\cup\Omega_{2}$ with the Dirichlet part $\Omega_{1}$ and the Neumann part $\Omega_{2}$ where $\Omega_{1},\Omega_{2}$ are bounded domains with the smooth boundary such that $\overline{\Omega_{1}}\cap\overline{\Omega_{2}}=\emptyset$.
The corresponding far-field pattern is defined by solving the scattering problem (2.4)–(2.5) where the boundary condition (2.5) is replaced by $u=0\ \mathrm{on}\ \partial\Omega_{1},\ \ \ \ \ \ \ \ \frac{\partial u}{\partial\nu}=0\ \mathrm{on}\ \partial\Omega_{2}.$ (5.40) $F^{Mix}_{\Omega_{1},\Omega_{2}}$ has the factorization (see Theorem 3.4 of [19]) $F^{Mix}_{\Omega_{1},\Omega_{2}}=-G^{Mix}_{\Omega_{1},\Omega_{2}}T^{Mix}_{\Omega_{1},\Omega_{2}}G^{Mix\ *}_{\Omega_{1},\Omega_{2}},$ (5.41) where $G^{Mix}_{\Omega_{1},\Omega_{2}}:H^{1/2}(\partial\Omega_{1})\times H^{-1/2}(\partial\Omega_{2})\to L^{2}(\mathbb{S}^{d-1})$ is the data-to-pattern operator for the mixed obstacle $\Omega$, i.e., defined by $G^{Mix}_{\Omega_{1},\Omega_{2}}\left(\begin{array}[]{cc}f_{1}\\\ f_{2}\end{array}\right):=v^{\infty}$ where $v^{\infty}$ is the far-field pattern of a radiating solution $v$ such that $\Delta v+k^{2}v=0\ \mathrm{in}\ \mathbb{R}^{d}\setminus\overline{\Omega},\ \ v=f_{1}\ \mathrm{on}\ \partial\Omega_{1},\ \ \frac{\partial v}{\partial\nu}=f_{2}\ \mathrm{on}\ \partial\Omega_{2},$ (5.42) and some operator $T^{Mix}_{\Omega_{1},\Omega_{2}}:H^{-1/2}(\partial\Omega_{1})\times H^{1/2}(\partial\Omega_{2})\to H^{1/2}(\partial\Omega_{1})\times H^{-1/2}(\partial\Omega_{2})$ has the form (see Theorem 3.4 of [19]) $T^{Mix}_{\Omega_{1},\Omega_{2}}=\left(\begin{array}[]{cc}C^{+}_{\Omega_{1}}&0\\\ 0&C^{-}_{\Omega_{2}}\end{array}\right)+K^{Mix}_{\Omega_{1},\Omega_{2}},$ (5.43) where $C^{+}_{\Omega_{1}}:H^{-1/2}(\partial\Omega_{1})\to H^{1/2}(\partial\Omega_{1})$ is some positive coercive operator, $C^{-}_{\Omega_{2}}:H^{1/2}(\partial\Omega_{2})\to H^{-1/2}(\partial\Omega_{2})$ is some negative coercive operator, and $K^{Mix}:H^{-1/2}(\partial\Omega_{1})\times H^{1/2}(\partial\Omega_{2})\to H^{1/2}(\partial\Omega_{1})\times H^{-1/2}(\partial\Omega_{2})$ is some compact operator. Assume that $\Omega_{1}\subset B$. Then, there exists a bounded domain $\tilde{B}\Subset B$ such that $\tilde{B}\cap\Omega_{2}=\emptyset$ and $\Omega_{1}\subset\tilde{B}$. Define $R:H^{1/2}(\partial\Omega_{1})\times H^{-1/2}(\partial\Omega_{2})\to H^{1/2}(\partial\tilde{B})\times H^{-1/2}(\partial\Omega_{2})$ by $R\left(\begin{array}[]{cc}f_{1}\\\ f_{2}\end{array}\right):=\left(\begin{array}[]{cc}v\bigl{|}_{\partial\tilde{B}}\\\ f_{2}\end{array}\right)$, where $v$ is the radiating solution of (5.42). Then, $R$ has the form $R=\left(\begin{array}[]{cc}0&0\\\ 0&I\end{array}\right)+\mathrm{compact}$, and $G^{Mix}_{\Omega_{1},\Omega_{2}}=G^{Mix}_{\tilde{B},\Omega_{2}}R,$ (5.44) where $G^{Mix}_{\tilde{B},\Omega_{2}}:H^{1/2}(\partial\tilde{B})\times H^{-1/2}(\partial\Omega_{2})\to L^{2}(\mathbb{S}^{d-1})$ is the data-to-pattern operator corresponding to $\tilde{B}\cup\Omega_{2}$ with the Dirichlet part $\tilde{B}$ and the Neumann part $\Omega_{2}$.
We also define $\tilde{R}:H^{1/2}(\partial\tilde{B})\to H^{1/2}(\partial\tilde{B})\times H^{-1/2}(\partial\Omega_{2})$ by $\tilde{R}g:=\left(\begin{array}[]{cc}g\\\ \frac{\partial w}{\partial\nu}\bigl{|}_{\partial\Omega_{2}}\end{array}\right)$ where $w$ is the radiating solution of $\Delta w+k^{2}w=0\ \mathrm{in}\ \mathbb{R}^{d}\setminus\overline{\tilde{B}},\ \ w=g\ \mathrm{on}\ \partial\tilde{B}.$ (5.45) Then, $\tilde{R}$ has the form $\tilde{R}=\left(\begin{array}[]{cc}I\\\ 0\end{array}\right)+\mathrm{compact}$, and $G^{Dir}_{\tilde{B}}=G^{Mix}_{\tilde{B},\Omega_{2}}\tilde{R}.$ (5.46) By these and (5.7), we have $\displaystyle F^{Mix}_{\Omega_{1},\Omega_{2}}+H^{*}_{\partial\tilde{B}}H_{\partial\tilde{B}}$ (5.49) $\displaystyle=$ $\displaystyle G^{Mix}_{\tilde{B},\Omega_{2}}\left[R(-T^{Mix}_{\Omega_{1},\Omega_{2}})R^{*}+\tilde{R}S_{\tilde{B}}J_{\partial\tilde{B}}S^{*}_{\tilde{B}}\tilde{R}^{*}\right]G^{Mix\ *}_{\tilde{B},\Omega_{2}}$ $\displaystyle=$ $\displaystyle G^{Mix}_{\tilde{B},\Omega_{2}}\left[\left(\begin{array}[]{cc}C_{\tilde{B}}J_{\partial\tilde{B}}C_{\tilde{B}}&0\\\ 0&-C^{-}_{\Omega_{2}}\end{array}\right)+\tilde{K}^{Mix}\right]G^{Mix\ *}_{\tilde{B},\Omega_{2}},$ where $\left(\begin{array}[]{cc}C_{\tilde{B}}J_{\partial\tilde{B}}C_{\tilde{B}}&0\\\ 0&-C^{-}_{\Omega_{2}}\end{array}\right)$ is a positive coercive operator and $\tilde{K}^{Mix}$ is some compact operator. Applying (1) of Theorem 4.2 as $F=F^{Mix}_{\Omega_{1},\Omega_{2}}+H^{*}_{\partial\tilde{B}}H_{\partial\tilde{B}}=G^{Mix}_{\tilde{B},\Omega_{2}}\left[\left(\begin{array}[]{cc}C_{\tilde{B}}J_{\partial\tilde{B}}C_{\tilde{B}}&0\\\ 0&-C^{-}_{\Omega_{2}}\end{array}\right)+\tilde{K}^{Mix}\right]G^{Mix\ *}_{\tilde{B},\Omega_{2}},$ $\tilde{F}=0,$ we have $-\mathrm{Re}F^{Mix}_{\Omega_{1},\Omega_{2}}\leq_{\mathrm{fin}}H^{*}_{\partial\tilde{B}}H_{\partial\tilde{B}}.$ (5.50) Since we have $\tilde{B}\subset B$, one can show by the same argument as in (5.8) that there exists a compact operator $R_{B}:H^{1/2}(\partial\tilde{B})\to H^{1/2}(\partial B)$ such that $G^{Dir}_{\tilde{B}}=G^{Dir}_{B}R_{B}.$ (5.51) Then, applying (1) of Theorem 4.2 as $F=H^{*}_{\partial B}H_{\partial B}$, $\tilde{F}=H^{*}_{\partial\tilde{B}}H_{\partial\tilde{B}}$ (recall (5.7)), it follows that $H^{*}_{\partial\tilde{B}}H_{\partial\tilde{B}}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B},$ (5.52) which, together with (5.50), implies that $-\mathrm{Re}F^{Mix}_{\Omega_{1},\Omega_{2}}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.53) Assume that $\Omega_{1}\not\subset B$ and $\mathbb{R}^{d}\setminus(\overline{B\cup\Omega})$ is connected. Then, there exists $\Gamma\Subset\partial\Omega_{1}$ such that $\Gamma\cap B=\emptyset$. Define $E_{\Gamma}:H^{1/2}(\Gamma)\to H^{1/2}(\partial\Omega_{1})$ by $E_{\Gamma}f=f$ on $\Gamma$, otherwise zero. Denote by $X_{\Gamma}\subset H^{1/2}(\Gamma)$ the subspace of piecewise linear continuous functions on $\Gamma$ that vanish on $\partial\Gamma$. Set $W:=\mathrm{Ran}\left(G^{Mix}R_{1}^{*}E_{\Gamma}\big{|}_{X_{\Gamma}}\right)\subset\mathrm{Ran}(G^{Mix}R_{1}^{*})$; it is infinite dimensional because $X_{\Gamma}$ is infinite dimensional and the operator $G^{Mix}R_{1}^{*}E_{\Gamma}$ is injective.
By Lemma 4.6 of [2], we have $W\cap\mathrm{Ran}(G^{Dir}_{B},G^{Mix}R_{2}^{*})=\\{0\\}.$ (5.54) Applying (3) of Theorem 4.2 as $F^{Mix}=-F^{Mix}_{\Omega_{1},\Omega_{2}}=G^{Mix}_{\Omega_{1},\Omega_{2}}\left(\left(\begin{array}[]{cc}C^{+}_{\Omega_{1}}&0\\\ 0&C^{-}_{\Omega_{2}}\end{array}\right)+K^{Mix}_{\Omega_{1},\Omega_{2}}\right)G^{Mix\ *}_{\Omega_{1},\Omega_{2}},$ $\tilde{F}=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ we have $-\mathrm{Re}F^{Mix}_{\Omega_{1},\Omega_{2}}\not\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.55) From the above discussion, we conclude the following theorem, which is the same result as Theorem 5.5 of [2]. We remark that the monotonicity reconstruction discussed here succeeds without the assumption that $k^{2}$ is neither a Dirichlet eigenvalue of $-\Delta$ in $\Omega_{1}$ and $B$ nor a Neumann eigenvalue in $\Omega_{2}$, although Theorem 5.5 of [2] required it. ###### Theorem 5.10 (Theorem 5.5 of [2]). Let $B\subset\mathbb{R}^{d}$ be a bounded domain with the smooth boundary such that $\mathbb{R}^{d}\setminus(\overline{B\cup\Omega})$ is connected. Then, $\Omega_{1}\subset B\ \ \ \ \Longleftrightarrow\ \ \ \ -\mathrm{Re}F^{Mix}_{\Omega_{1},\Omega_{2}}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.56) By the same argument as in Theorem 5.10, one can apply (1), and (3) with (3b) replaced by (3b)’, of Theorem 4.2 as $F^{Mix}=F^{Mix}_{\Omega_{1},\Omega_{2}}=-G^{Mix}_{\Omega_{1},\Omega_{2}}\left(\left(\begin{array}[]{cc}C^{+}_{\Omega_{1}}&0\\\ 0&C^{-}_{\Omega_{2}}\end{array}\right)+K^{Mix}_{\Omega_{1},\Omega_{2}}\right)G^{Mix\ *}_{\Omega_{1},\Omega_{2}},$ $\tilde{F}=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ for the reconstruction of the Neumann part $\Omega_{2}$. Then, we also conclude the following theorem, which recovers Theorem 5.5 of [2] without any restriction on the wave number $k>0$, as in Theorem 5.10. ###### Theorem 5.11 (Theorem 5.5 of [2]). Let $B\subset\mathbb{R}^{d}$ be a bounded domain with the smooth boundary such that $\mathbb{R}^{d}\setminus(\overline{B\cup\Omega})$ is connected. Then, $\Omega_{2}\subset B\ \ \ \ \Longleftrightarrow\ \ \ \ \mathrm{Re}F^{Mix}_{\Omega_{1},\Omega_{2}}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.57) ###### Remark 5.12. We remark that the factorization reconstruction for the mixed obstacle has to assume that one component is covered by some artificial domain disjoint from the other component we want to reconstruct, and furthermore that $k^{2}$ is neither a Dirichlet eigenvalue of $-\Delta$ in $\Omega_{1}$ and the artificial covering domain, nor a Neumann eigenvalue in $\Omega_{2}$ (see Lemma 3.5 of [19]). However, the monotonicity reconstruction (Theorems 5.10 and 5.11) requires neither of these assumptions. ### 5.5 Mixed crack Let $F^{Mix}_{\Gamma}$ be the far-field operator for the mixed crack $\Gamma$ (the assumptions on $\Gamma$ are the same as in Section 5.3).
The corresponding far-field pattern is defined by solving the scattering problem (2.4)–(2.5) where $\Omega$ in (2.4) is replaced by $\Gamma$ and the boundary condition (2.5) is replaced by $u_{-}=0\ \mathrm{on}\ \Gamma,\ \ \ \ \ \ \ \ \frac{\partial u_{+}}{\partial\nu}=0\ \mathrm{on}\ \Gamma.$ (5.58) $F^{Mix}_{\Gamma}$ has the factorization (see (3.6) and (3.13) of [25]) $F^{Mix}_{\Gamma}=-G^{Mix}_{\Gamma}M^{Mix}_{\Gamma}G^{Mix\ *}_{\Gamma},$ (5.59) where $G^{Mix}_{\Gamma}:H^{1/2}(\Gamma)\times H^{-1/2}(\Gamma)\to L^{2}(\mathbb{S}^{d-1})$ is the data-to-pattern operator for the mixed crack $\Gamma$, i.e., defined by $G^{Mix}_{\Gamma}\left(\begin{array}[]{cc}f_{1}\\\ f_{2}\end{array}\right):=v^{\infty}$ where $v^{\infty}$ is the far-field pattern of a radiating solution $v$ such that $\Delta v+k^{2}v=0\ \mathrm{in}\ \mathbb{R}^{d}\setminus\Gamma,\ \ v_{-}=f_{1}\ \mathrm{on}\ \Gamma,\ \ \frac{\partial v_{+}}{\partial\nu}=f_{2}\ \mathrm{on}\ \Gamma,$ (5.60) and $M^{Mix}_{\Gamma}:\tilde{H}^{-1/2}(\Gamma)\times\tilde{H}^{1/2}(\Gamma)\to H^{1/2}(\Gamma)\times H^{-1/2}(\Gamma)$ has the form (see (3.12) of [25]) $M^{Mix}_{\Gamma}=\left(\begin{array}[]{cc}C^{+}_{\Gamma}&-I\\\ -I&C^{-}_{\Gamma}\end{array}\right)+K^{Mix}_{\Gamma},$ (5.61) where $C^{+}_{\Gamma}:\tilde{H}^{-1/2}(\Gamma)\to H^{1/2}(\Gamma)$ is some positive coercive operator, $C^{-}_{\Gamma}:\tilde{H}^{1/2}(\Gamma)\to H^{-1/2}(\Gamma)$ is some negative coercive operator, and $K^{Mix}_{\Gamma}:\tilde{H}^{-1/2}(\Gamma)\times\tilde{H}^{1/2}(\Gamma)\to H^{1/2}(\Gamma)\times H^{-1/2}(\Gamma)$ is some compact operator. Assume that $\Gamma\subset B$. Define $R:H^{1/2}(\Gamma)\times H^{-1/2}(\Gamma)\to H^{1/2}(\partial B)$ by $R\left(\begin{array}[]{cc}f_{1}\\\ f_{2}\end{array}\right):=v\bigl{|}_{\partial B}$, where $v$ is the radiating solution of (5.60); then $R$ is a compact operator and $G^{Mix}_{\Gamma}=G^{Dir}_{B}R.$ (5.62) Applying (1) of Theorem 4.2 as $F=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ $\tilde{F}=-F^{Mix}_{\Gamma}=G^{Mix}_{\Gamma}M^{Mix}_{\Gamma}G^{Mix\ *}_{\Gamma},$ we have $-\mathrm{Re}F^{Mix}_{\Gamma}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.63) Assume that $\Gamma\not\subset B$. Then, there exists $\tilde{\Gamma}\Subset\Gamma$ such that $\tilde{\Gamma}\cap B=\emptyset$. Setting $W:=\mathrm{Ran}\left(G^{Mix}_{\Gamma}R_{1}^{*}E_{\tilde{\Gamma}}\big{|}_{X_{\tilde{\Gamma}}}\right)\subset\mathrm{Ran}(G^{Mix}_{\Gamma}R_{1}^{*})$ (with $E_{\tilde{\Gamma}}$ and $X_{\tilde{\Gamma}}$ defined as in Section 5.4, with $\Gamma$ replaced by $\tilde{\Gamma}$), it is infinite dimensional because $X_{\tilde{\Gamma}}$ is infinite dimensional and the operator $G^{Mix}_{\Gamma}R_{1}^{*}E_{\tilde{\Gamma}}$ is injective. By the same argument as in Lemma 4.6 of [2], we have $W\cap\mathrm{Ran}(G^{Dir}_{B},G^{Mix}_{\Gamma}R_{2}^{*})=\\{0\\}.$ (5.64) Applying (3) of Theorem 4.2 as $F^{Mix}=-F^{Mix}_{\Gamma}=G^{Mix}_{\Gamma}\left(\left(\begin{array}[]{cc}C^{+}_{\Gamma}&-I\\\ -I&C^{-}_{\Gamma}\end{array}\right)+K^{Mix}_{\Gamma}\right)G^{Mix\ *}_{\Gamma},$ $\tilde{F}=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B},$ we have $-\mathrm{Re}F^{Mix}_{\Gamma}\not\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.65) From the above discussion, we conclude the following theorem. ###### Theorem 5.13. Let $B\subset\mathbb{R}^{d}$ be a bounded domain.
Then, $\Gamma\subset B\ \ \ \ \Longleftrightarrow\ \ \ \ -\mathrm{Re}F^{Mix}_{\Gamma}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.66) By the same argument as in Theorem 5.13, one can apply (1), and (3) with (3b) replaced by (3b)’, of Theorem 4.2 as $F^{Mix}=F^{Mix}_{\Gamma}=-G^{Mix}_{\Gamma}\left(\left(\begin{array}[]{cc}C^{+}_{\Gamma}&-I\\\ -I&C^{-}_{\Gamma}\end{array}\right)+K^{Mix}_{\Gamma}\right)G^{Mix\ *}_{\Gamma},$ $\tilde{F}=H^{*}_{\partial B}H_{\partial B}=G^{Dir}_{B}\left[C_{B}J_{\partial B}C_{B}+\hat{K}_{B}\right]G^{Dir\ *}_{B}.$ Then, we also conclude the following theorem. ###### Theorem 5.14. Let $B\subset\mathbb{R}^{d}$ be a bounded domain. Then, $\Gamma\subset B\ \ \ \ \Longleftrightarrow\ \ \ \ \mathrm{Re}F^{Mix}_{\Gamma}\leq_{\mathrm{fin}}H^{*}_{\partial B}H_{\partial B}.$ (5.67) ###### Remark 5.15. The factorization reconstruction for the mixed crack has been studied in [25], but an extension of the unknown crack to a closed curve must be known in advance, which is a very restrictive assumption (see Theorem 3.3 of [25]). However, the monotonicity reconstruction (Theorems 5.13 and 5.14) does not require it. We also remark that our work in this section appears to be a new extension of the monotonicity method to inverse acoustic mixed crack scattering. ## 6 Numerical examples In this section, we study numerical examples of our theoretical results in two dimensions. The far-field operator $F$ is approximated by the matrix $F\approx\frac{2\pi}{N}\bigl{(}u^{\infty}(\hat{x}_{l},\theta_{m})\bigr{)}_{1\leq l,m\leq N}\in\mathbb{C}^{N\times N},$ (6.1) where $\hat{x}_{l}=\bigl{(}\mathrm{cos}(\frac{2\pi l}{N}),\mathrm{sin}(\frac{2\pi l}{N})\bigr{)}$ and $\theta_{m}=\bigl{(}\mathrm{cos}(\frac{2\pi m}{N}),\mathrm{sin}(\frac{2\pi m}{N})\bigr{)}$. For the far-field pattern $u^{\infty}(\hat{x},\theta)$ of the Dirichlet obstacle and the Dirichlet crack (with $\Gamma=\partial\Omega$ in the case of the obstacle), we numerically compute the integral $u^{\infty}(\hat{x},\theta)=\frac{e^{i\frac{\pi}{4}}}{\sqrt{8\pi k}}\int_{\Gamma}e^{-ik\hat{x}\cdot y}\varphi_{\theta}(y)ds(y),$ (6.2) where $\varphi_{\theta}$ is given by solving $-e^{ikx\cdot\theta}=\int_{\Gamma}\Phi(x,y)\varphi_{\theta}(y)ds(y),\ x\in\Gamma.$ (6.3) For the inhomogeneous medium, we numerically compute the integral $u^{\infty}(\hat{x},\theta)=\frac{e^{i\frac{\pi}{4}}}{\sqrt{8\pi k}}\int_{\Omega}e^{-ik\hat{x}\cdot y}q(y)(u_{\theta}(y)+e^{iky\cdot\theta})dy,$ (6.4) where $u_{\theta}$ is given by solving $u_{\theta}(x)=k^{2}\int_{\Omega}\Phi(x,y)q(y)(u_{\theta}(y)+e^{iky\cdot\theta})dy,\ x\in\Omega.$ (6.5) For a bounded domain (or a smooth curve) $B$, the operator $H^{*}_{B}H_{B}$ is approximated by $H^{*}_{B}H_{B}\approx\frac{2\pi}{N}\biggl{(}\int_{B}e^{iky\cdot(\theta_{m}-\hat{x}_{l})}dy\biggr{)}_{1\leq l,m\leq N}\in\mathbb{C}^{N\times N}.$ (6.6) We denote by $[-R,R]^{2}$ the sampling square region, which includes the unknown target. The grid points in the sampling square region are denoted by $z_{i,j}:=(\frac{Ri}{M},\frac{Rj}{M})$ ($i,j=-M,-M+1,...,M$) (see Figure 2). Throughout our examples, we fix the parameters $R=1.5$, $M=100$, and $N=20$. Figure 2: The grid points in the sampling square region. ### 6.1 Dirichlet obstacle and inhomogeneous medium We consider the Dirichlet obstacle and the inhomogeneous medium detections discussed in Sections 5.1 and 5.2.
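Before specifying the shapes, we illustrate how the discretization (6.1) combines with the eigenvalue count behind Definition 4.1; the following minimal sketch assumes the far-field samples $u^{\infty}(\hat{x}_{l},\theta_{m})$ have been precomputed by (6.2)–(6.5), and all function names are ours:

```python
import numpy as np

def directions(N):
    """Directions x_hat_l = (cos(2*pi*l/N), sin(2*pi*l/N)), l = 1, ..., N, as in (6.1)."""
    t = 2.0 * np.pi * np.arange(1, N + 1) / N
    return np.column_stack([np.cos(t), np.sin(t)])

def op_real(F):
    """Operator real part Re F = (F + F^*)/2 of a complex matrix, cf. (3.4)."""
    return 0.5 * (F + F.conj().T)

def num_negative_eigs(A, tol=1e-12):
    """Number of negative eigenvalues of a Hermitian matrix, cf. Definition 4.1."""
    return int(np.sum(np.linalg.eigvalsh(A) < -tol))

# With uinf[l, m] ~ u_inf(x_hat_l, theta_m) precomputed by (6.2)-(6.5):
#   F = (2.0 * np.pi / N) * uinf                        # far-field matrix (6.1)
#   indicator = num_negative_eigs(op_real(-F) - HstarH)  # Dirichlet test, cf. (6.9)
# where HstarH is the (Hermitian) matrix approximation (6.6).
```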
The following shapes $\partial\Omega_{j}$ ($j=1,2$) are considered (see Figure 6): $\partial\Omega_{1}=\left\\{\left(0.7\mathrm{cos}(\pi s),0.7\mathrm{sin}(\pi s)\right)|-1\leq s\leq 1\right\\}.$ (6.7) $\partial\Omega_{2}=\left\\{\left(0.3\mathrm{cos}(\pi s)-0.7,\ 0.3\mathrm{sin}(\pi s)\right)|-1\leq s\leq 1\right\\}$ $\cup\left\\{\left(0.3\mathrm{cos}(\pi s)+0.7,\ 0.3\mathrm{sin}(\pi s)\right)|-1\leq s\leq 1\right\\}.$ (6.8) Based on Theorems 5.1 and 5.4, the indicator functions for the Dirichlet obstacle and the inhomogeneous medium are given by $I^{MM}_{dir}(B):=\\#\left\\{\mathrm{negative}\ \mathrm{eigenvalues}\ \mathrm{of}\ -\mathrm{Re}F^{Dir}_{\Omega}-H^{*}_{\partial B}H_{\partial B}\right\\},$ (6.9) $I^{MM}_{med}(B):=\\#\left\\{\mathrm{negative}\ \mathrm{eigenvalues}\ \mathrm{of}\ \mathrm{Re}F^{Med}_{\Omega}-\alpha H^{*}_{B}H_{B}\right\\},$ (6.10) for a bounded domain $B$, respectively. For the medium, we always consider $q=1$ in $\Omega_{j}$, and $\alpha=1$. Here, $B$ is chosen as a square, i.e., $B=B_{i,j}(r):=z_{i,j}+[-\frac{r}{2},\frac{r}{2}]\times[-\frac{r}{2},\frac{r}{2}]$ where a grid point $z_{i,j}$ is the center of the square (see Figure 3). Figure 3: The square test. Then, we can compute the integral $\int_{B_{i,j}(r)}e^{iky\cdot(\theta_{m}-\hat{x}_{l})}dy=r^{2}e^{ik(\theta_{m}-\hat{x}_{l})\cdot z_{i,j}}\mathrm{sinc}\left(\frac{kr}{2}(\theta_{m}-\hat{x}_{l})_{1}\right)\mathrm{sinc}\left(\frac{kr}{2}(\theta_{m}-\hat{x}_{l})_{2}\right).$ (6.11) Figures 7 and 8 are given by plotting the values of the indicator functions $I^{MM}_{dir,r}(z_{i,j}):=I^{MM}_{dir}(B_{i,j}(r)),\ \ \mathrm{for\ each}\ i,j=-100,-99,...,100,$ (6.12) $I^{MM}_{med,r}(z_{i,j}):=I^{MM}_{med}(B_{i,j}(r)),\ \ \mathrm{for\ each}\ i,j=-100,-99,...,100,$ (6.13) respectively, for different lengths $r=0.1,0.5$, wavenumbers $k=1,5$, and shapes $\Omega_{1},\Omega_{2}$. We also plot the values of the indicator function for the factorization method $I^{FM}(z_{i,j}):=\left(\sum_{n=1}^{\infty}\frac{|\langle\phi_{z_{i,j}},\phi_{n}\rangle_{L^{2}(\mathbb{S}^{1})}|^{2}}{\mu_{n}}\right)^{-1},\ \ \mathrm{for\ each}\ i,j=-100,-99,...,100,$ (6.14) with $\left\\{\mu_{n},\phi_{n}\right\\}$ an eigensystem of the self-adjoint compact operator $|\mathrm{Re}F|+|\mathrm{Im}F|$ where $F$ is some far-field operator, and $\phi_{z_{i,j}}$ is defined by $\phi_{z_{i,j}}(\hat{x}):=e^{-ik\hat{x}\cdot z_{i,j}},\hat{x}\in\mathbb{S}^{1}.$ (6.15) Figures 9 and 10 correspond to $F=F^{Dir}_{\Omega}$ for the Dirichlet obstacle and $F=F^{Med}_{\Omega}$ for the inhomogeneous medium, respectively. For the derivation of the indicator function $I^{FM}$, we refer to Corollary 2.16 and Theorem 4.9 of [19]. ### 6.2 Dirichlet crack We consider the Dirichlet crack detection discussed in Section 5.3. The following shapes $\Gamma_{j}$ ($j=1,2$) are considered (see Figure 11): $\Gamma_{1}=\left\\{\left(\mathrm{cos}(2s),\mathrm{sin}(2s)\right)|-1\leq s\leq 1\right\\}.$ (6.16) $\Gamma_{2}=\left\\{\left(-0.4\mathrm{cos}(2s)-0.7,\ 0.4\mathrm{sin}(2s)\right)|-1\leq s\leq 1\right\\}$ $\cup\left\\{\left(0.4\mathrm{cos}(2s)+0.7,\ 0.4\mathrm{sin}(2s)\right)|-1\leq s\leq 1\right\\}.$ (6.17) Based on Theorem 5.7, the indicator function is given by $I^{MM}_{dir}(\sigma):=\\#\left\\{\mathrm{negative}\ \mathrm{eigenvalues}\ \mathrm{of}-\mathrm{Re}F^{Dir}_{\Gamma}-H^{*}_{\sigma}H_{\sigma}\right\\}$ (6.18) for a smooth arc $\sigma$.
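Both (6.9) and (6.18) reduce to the same negative-eigenvalue count once the test operator is assembled; as a concrete illustration, a sketch of the square test (6.9), (6.11)–(6.12) follows (all names are ours; `xhat` and `theta` are the direction grids of (6.1), and `F` is the far-field matrix):

```python
import numpy as np

def sinc(t):
    """Unnormalized sinc, sin(t)/t with sinc(0) = 1, as used in (6.11)."""
    return np.sinc(t / np.pi)   # numpy's sinc is sin(pi x)/(pi x)

def HstarH_square(z, r, k, xhat, theta):
    """Matrix of H*_B H_B for the square B = z + [-r/2, r/2]^2, by (6.6) and (6.11)."""
    N = xhat.shape[0]
    A = np.empty((N, N), dtype=complex)
    for l in range(N):
        for m in range(N):
            xi = theta[m] - xhat[l]
            A[l, m] = (r**2 * np.exp(1j * k * xi @ z)
                       * sinc(0.5 * k * r * xi[0]) * sinc(0.5 * k * r * xi[1]))
    return (2.0 * np.pi / N) * A

def indicator_map_dir(F, r, k, xhat, theta, R=1.5, M=100):
    """Values I^MM_{dir,r}(z_ij) of (6.9), (6.12) on the grid z_ij = (Ri/M, Rj/M)."""
    ReF = 0.5 * (F + F.conj().T)
    grid = np.arange(-M, M + 1) * (R / M)
    I = np.empty((2 * M + 1, 2 * M + 1), dtype=int)
    for a, zi in enumerate(grid):
        for b, zj in enumerate(grid):
            A = -ReF - HstarH_square(np.array([zi, zj]), r, k, xhat, theta)
            A = 0.5 * (A + A.conj().T)     # symmetrize against round-off
            I[a, b] = int(np.sum(np.linalg.eigvalsh(A) < 0))
    return I
```

The arc test (6.18) follows the same pattern, with the closed-form segment integral given below in place of (6.11).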
For the test (6.18), $\sigma$ is chosen as a line segment, i.e., $\sigma=\sigma_{i,j}(\eta,r):=z_{i,j}+L(\eta,r)$ where a grid point $z_{i,j}$ is the center of line segments, and $L(\eta,r)$ is defined by $L(\eta,r):=\left\\{(s,s\mathrm{tan}(\eta))\bigl{|}-\frac{r}{2}\mathrm{cos}(\eta)\leq s\leq\frac{r}{2}\mathrm{cos}(\eta)\right\\},$ (6.19) where $\eta\in[0,\pi]$ is the angle and $r>0$ is the length of the line segment (see Figure 4). Figure 4: The line segment test. Then, we can compute the integral $\int_{\sigma_{i,j}(\eta,r)}e^{iky\cdot(\theta_{m}-\hat{x}_{l})}ds(y)=re^{ik(\theta_{m}-\hat{x}_{l})\cdot z_{i,j}}\mathrm{sinc}\biggl{(}\frac{rk}{2}\Bigl{(}\mathrm{cos}(\eta)(\theta_{m}-\hat{x}_{l})_{1}+\mathrm{sin}(\eta)(\theta_{m}-\hat{x}_{l})_{2}\Bigr{)}\biggr{)}.$ (6.20) Figures 12 and 13 are given by plotting the values of the indicator function for $\Gamma_{1}$ and $\Gamma_{2}$ $I^{MM}_{dir,\eta,r}(z_{i,j}):=I^{MM}_{dir}(\sigma_{i,j}(\eta,r)),\ \ \mathrm{for\ each}\ i,j=-100,-99,...,100,$ (6.21) respectively, for different angles $\eta=0,\pi/2$, lengths $r=0.01,0.1$, and wavenumbers $k=1,5$. In Figure 14 we also plot the values of the indicator function for the factorization method $I^{FM}_{dir}(z_{i,j}):=\left(\sum_{n=1}^{\infty}\frac{|\langle\phi_{z_{i,j}},\phi_{n}\rangle_{L^{2}(\mathbb{S}^{1})}|^{2}}{\mu_{n}}\right)^{-1},\ \ \mathrm{for\ each}\ i,j=-100,-99,...,100,$ (6.22) with $\left\\{\mu_{n},\phi_{n}\right\\}$ an eigensystem of the self-adjoint compact operator $|\mathrm{Re}F^{Dir}_{\Gamma}|+|\mathrm{Im}F^{Dir}_{\Gamma}|$ and $\phi_{z_{i,j}}$ is defined in (6.15). For the derivation of the indicator function $I^{FM}_{dir}$, we refer to Theorem 3.9 of [21]. ### 6.3 Mixed crack We consider the mixed crack detection discussed in Section 5.5. The following shape $\Gamma_{3}$ is considered (see Figure 15): $\Gamma_{3}=\left\\{\left(0.5\mathrm{cos}(2s),0.5\mathrm{sin}(2s)\right)|-1\leq s\leq 1\right\\}.$ (6.23) Based on Theorem 5.13, the indicator function is given by $I^{MM}_{mix}(B):=\\#\left\\{\mathrm{negative}\ \mathrm{eigenvalues}\ \mathrm{of}\ \mathrm{Re}F^{Mix}_{\Gamma}+H^{*}_{\partial B}H_{\partial B}\right\\}$ (6.24) for a bounded domain $B$. Here, $B$ is chosen as a circle, i.e., $B=B_{r}(z)$ is an open disk with center $z\in\mathbb{R}^{2}$ and radius $r>0$. Then, we can compute the integral $\int_{\partial B_{r}(z)}e^{iky\cdot(\theta_{m}-\hat{x}_{l})}ds(y)=2\pi re^{ik(\theta_{m}-\hat{x}_{l})\cdot z}J_{0}(kr|\theta_{m}-\hat{x}_{l}|),$ (6.25) where $J_{0}$ is the Bessel function of the first kind of order zero. Figures 16 and 17 are given by plotting the values of two types (see Figure 5) of indicator functions $I^{MM}_{mix,r}(z_{i,j}):=I^{MM}_{mix}(B_{r}(z_{i,j})),\ \ \mathrm{for\ each}\ i,j=-100,-99,...,100,$ (6.26) $I^{MM}_{mix,p}(z_{i,j}):=I^{MM}_{mix}(B_{|z_{i,j}-p|}(p)),\ \ \mathrm{for\ each}\ i,j=-100,-99,...,100,$ (6.27) respectively, for different radii $r=0.25,1$, points $p=(0,0),(1,0)$, and wavenumbers $k=1,5$. --- Figure 5: The shifting circle test (left) and the shrinking circle test (right). In Figure 18 we also plot the values of the indicator function for the factorization method $I^{FM}_{mix}(z_{i,j}):=\left(\sum_{n=1}^{\infty}\frac{|\langle\phi_{z_{i,j}},\phi_{n}\rangle_{L^{2}(\mathbb{S}^{1})}|^{2}}{\mu_{n}}\right)^{-1},\ \ \mathrm{for\ each}\ i,j=-100,-99,...,100,$ (6.28) with $\left\\{\mu_{n},\phi_{n}\right\\}$ an eigensystem of the self-adjoint compact operator $|\mathrm{Re}\tilde{F}|+|\mathrm{Im}\tilde{F}|$.
The operator $\tilde{F}$ is defined by perturbing the original far-field operator with the Herglotz operator corresponding to the auxiliary curve $\partial\Omega_{3}$: $\tilde{F}:=F^{Mix}_{\Gamma}-pH^{*}_{\partial\Omega_{3}}H_{\partial\Omega_{3}},$ (6.29) for $p\in\mathbb{C}\setminus\\{0\\}$. We choose $p=0.01+0.01i$ or $0.1+0.1i$ in our numerical examples. Here, the auxiliary curve $\partial\Omega_{3}$ is defined by $\partial\Omega_{3}=\left\\{\left(0.5\mathrm{cos}(\pi s),0.5\mathrm{sin}(\pi s)\right)|-1\leq s\leq 1\right\\},$ (6.30) which is an extension of $\Gamma_{3}$, that is, $\Gamma_{3}\subset\partial\Omega_{3}$. For the derivation of the indicator function $I^{FM}_{mix}$, we refer to Theorem 3.4 of [25]. ## Conclusions In this paper, we studied the factorization and monotonicity methods for inverse acoustic scattering problems. The main contribution was to give a new general functional analysis theorem (Theorem 4.2) for the monotonicity method, which can provide reconstruction schemes under weaker a priori assumptions than the factorization method (see the assumptions in Theorems 3.1 and 4.2). Furthermore, we observed that the factorization method needs both the real and imaginary parts of the far-field operator (see (3.3)), while the monotonicity method needs only the real part (see (4.3), (4.4), and (4.6)), which is another advantage over the factorization method in terms of required data. After proving the general theorem, we also showed how it applies to three typical inverse scattering problems (obstacle in Sections 5.1 and 5.4, medium in Section 5.2, and crack in Sections 5.3 and 5.5). However, it can be applied to other inverse problems as well (especially those for which the factorization method has already been studied, e.g., inverse scattering by a layered medium [3], a mixed-type scatterer consisting of an obstacle and a medium [20], and an obstacle in a homogeneous half-space [10]). We also provided several numerical examples to compare the factorization method with the monotonicity method. The factorization method is a point test which checks whether a point $z$ is included in the unknown target or not, while the monotonicity method is a domain test which checks whether a domain $B$ is included in the unknown target or not. For the domain test, we have to find an appropriate choice of $B$. By testing the monotonicity for many different shapes and sizes of $B$, we obtained reconstructions as accurate as those of the factorization method (see Figures 7, 8, 9, 10, 12, 13, and 14). In our numerical examples for the mixed crack, we observed that the factorization method is more accurate than the monotonicity method (see Figures 16, 17, 18). This is because the factorization method uses the Herglotz operator corresponding to an auxiliary closed curve that is an extension of the unknown crack (see (6.29)), which is a very restrictive assumption. On the other hand, the monotonicity method can give a reconstruction scheme without using the auxiliary closed curve. Its scheme is the outside domain test, which checks whether a domain $B$ contains $\Gamma$ or not. However, in our numerical examples, the outside domain test cannot recover the exact shape of the unknown crack $\Gamma$, although the location and size can be estimated (see Figures 16, 17). As seen in the numerical examples for the mixed obstacle discussed in [2], it also seems difficult to obtain the shape of an unknown obstacle from outside domain tests. Numerical tests for the mixed problem with the monotonicity method remain to be developed in future work.
## Acknowledgments This work was supported by JSPS KAKENHI Grant Number JP19J10238. ## References * [1] K. A. Anagnostopoulos, A. Charalambopoulos, A. Kleefeld, The factorization method for the acoustic transmission problem, Inverse Probl., 29, (2013), 115015. * [2] A. Albicker, R. Griesmaier, Monotonicity in inverse obstacle scattering on unbounded domains, Inverse Probl., 36, (2020), 085014. * [3] O. Bondarenko, A. Kirsch, X. Liu, The factorization method for inverse acoustic scattering in a layered medium, Inverse Probl., 29, (2013), 045010. * [4] Y. Boukari and H. Haddar, The factorization method applied to cracks with impedance boundary conditions, Inverse Problems and Imaging, 7, (2013), 1123–1138. * [5] D. Colton, A. Kirsch, A simple method for solving inverse scattering problems in the resonance region, Inverse Probl., 12, (1996), 383–393. * [6] D. Colton, R. Kress, Inverse acoustic and electromagnetic scattering theory, Fourth edition, Springer, (2019), New York. * [7] T. Furuya, A modification of the factorization method for scatterers with different physical properties, Math. Meth. Appl. Sci., 42, (2019), 4017–4030. * [8] T. Furuya, The factorization and monotonicity method for the defect in an open periodic waveguide, J. Inverse Ill-Posed Probl., 28, (2020), 783–796. * [9] T. Furuya, T. Daimon, R. Saiin, The monotonicity method for the inverse crack scattering problem, Inverse Probl. Sci. Eng., 28, (2020), 1570–1581. * [10] N. Grinberg, Obstacle localization in an homogeneous half-space, Inverse Probl., 17, (2001), 1113–1125. * [11] R. Griesmaier, B. Harrach, Monotonicity in inverse medium scattering on unbounded domains, SIAM J. Appl. Math., 78, (2018), 2533–2557. * [12] N. Grinberg, A. Kirsch, The factorization method for obstacles with a-priori separated sound-soft and sound-hard parts, Math. Comput. in Simul., 66, (2004), 267–279. * [13] B. Harrach, V. Pohjola, M. Salo, Dimension bounds in monotonicity methods for the Helmholtz equation, SIAM J. Math. Anal., 51, (2019), 2995–3019. * [14] B. Harrach, V. Pohjola, M. Salo, Monotonicity and local uniqueness for the Helmholtz equation, Anal. PDE, 12, (2019), 1741–1771. * [15] B. Harrach, M. Ullrich, Monotonicity based shape reconstruction in electrical impedance tomography, SIAM J. Math. Anal., 45, (2013), 3382–3403. * [16] M. Ikehata, Reconstruction of the shape of an obstacle from the scattering amplitude at a fixed frequency, Inverse Probl., 14, (1998), 949–954. * [17] A. Kirsch, Characterization of the shape of a scattering obstacle using the spectral data of the far-field operator, Inverse Probl., 14, (1998), 1489–1512. * [18] A. Kirsch, The factorization method for a class of inverse elliptic problems, Math. Nachr. 278, (2004), 258–277. * [19] A. Kirsch and N. Grinberg, The factorization method for inverse problems, Oxford University Press, (2008), Karlsruhe, Germany. * [20] A. Kirsch, X. Liu, Direct and inverse acoustic scattering by a mixed-type scatterer, Inverse Probl., 29, (2013), 065005. * [21] A. Kirsch and S. Ritter, A linear sampling method for inverse scattering from an open arc, Inverse Probl., 16, (2000), 89–105. * [22] E. Lakshtanov, A. Lechleiter, Difference factorizations and monotonicity in inverse medium scattering for contrasts with fixed sign on the boundary, SIAM J. Math. Anal. 48, (2016), 3688–3707. * [23] A. Lechleiter, The factorization method is independent of transmission eigenvalues, Inverse Probl. Imaging, 3, (2009), 123–138. * [24] R.
Potthast, Stability estimates and reconstructions in inverse acoustic scattering using singular sources, J. Comput. Appl. Math., 114, (2000), 247–274. * [25] Q. Wu, G. Yan, The factorization method for an open arc, J. Comp. Math., 33, (2015), 517–532. e-mail<EMAIL_ADDRESS>
Figure 6: The original domains $\Omega_{1}$ (left) and $\Omega_{2}$ (right).
Figure 7: Reconstruction for the Dirichlet obstacle by the monotonicity method for different lengths $r=0.1,0.5$, wavenumbers $k=1,5$, and shapes $\Omega_{1},\Omega_{2}$.
Figure 8: Reconstruction for the inhomogeneous medium by the monotonicity method for different lengths $r=0.1,0.5$, wavenumbers $k=1,5$, and shapes $\Omega_{1},\Omega_{2}$.
Figure 9: Reconstruction for the Dirichlet obstacle by the factorization method for different wavenumbers $k=1,5$ and shapes $\Omega_{1},\Omega_{2}$.
Figure 10: Reconstruction for the inhomogeneous medium by the factorization method for different wavenumbers $k=1,5$ and shapes $\Omega_{1},\Omega_{2}$.
Figure 11: The original open arcs $\Gamma_{1}$ (left) and $\Gamma_{2}$ (right).
Figure 12: Reconstruction for the Dirichlet crack $\Gamma_{1}$ by the monotonicity method for different angles $\eta=0,\pi/2$, lengths $r=0.01,0.1$, and wavenumbers $k=1,5$.
Figure 13: Reconstruction for the Dirichlet crack $\Gamma_{2}$ by the monotonicity method for different angles $\eta=0,\pi/2$, lengths $r=0.01,0.1$, and wavenumbers $k=1,5$.
Figure 14: Reconstruction for the Dirichlet crack by the factorization method for different wavenumbers $k=1,5$ and shapes $\Gamma_{1}$, $\Gamma_{2}$.
Figure 15: The original open arc $\Gamma_{3}$.
Figure 16: Reconstruction for the mixed crack $\Gamma_{3}$ by the shifting circle test of the monotonicity method for different radii $r=0.25,1$ and wavenumbers $k=1,5$.
Figure 17: Reconstruction for the mixed crack $\Gamma_{3}$ by the shrinking circle test of the monotonicity method for different points $p=(0,0),(1,0)$ and wavenumbers $k=1,5$.
Figure 18: Reconstruction for the mixed crack $\Gamma_{3}$ by the factorization method for different wavenumbers $k=1,5$ and complex numbers $p=0.01+0.01i,0.1+0.1i$.
# Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation

Fan Yang, Ninghao Liu, Mengnan Du, Xia Hu

Department of Computer Science and Engineering, Texas A&M University, College Station, TX, USA, 77843

nacoyang, nhliu43, dumengnan<EMAIL_ADDRESS>

(2020)

###### Abstract.

With the wide use of deep neural networks (DNN), model interpretability has become a critical concern, since explainable decisions are preferred in high-stakes scenarios. Current interpretation techniques mainly focus on the feature-attribution perspective, which is limited in indicating _why_ and _how_ particular explanations are related to the prediction. To this end, an intriguing class of explanations, named _counterfactuals_, has been developed to further explore the “what-if” circumstances for interpretation, and enables reasoning about black-box models. However, generating counterfactuals for raw data instances (i.e., text and image) is still in the early stage due to the challenges of high data dimensionality and unsemantic raw features. In this paper, we design a framework to generate counterfactuals specifically for raw data instances with the proposed Attribute-Informed Perturbation (AIP). By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently. Instead of directly modifying instances in the data space, we iteratively optimize over the constructed attribute-informed latent space, where features are more robust and semantic. Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our designed framework, and show its superiority over other alternatives. Besides, we also introduce some practical applications based on our framework, indicating its potential beyond the model interpretability aspect.

## 1\. Introduction

The past decade has witnessed the success of deep neural networks (DNN) in a wide range of application domains (Pouyanfar et al., 2018). Despite the superior performance, DNN models have been increasingly criticized due to their black-box nature (Doshi-Velez and Kim, 2017). Interpretable machine learning techniques (Du et al., 2020) are thus becoming increasingly vital, especially in high-stakes scenarios such as medical diagnosis. To effectively interpret black-box DNNs, most approaches investigate the feature attributions between input instances and output predictions through correlation analysis, so that humans can have a sense of which part of the instance contributes most to the model decision. A typical example is the heatmaps employed for image classification (Selvaraju et al., 2017), where saliency scores indicate the feature importance for one particular prediction label. However, existing correlation-based explanations are neither discriminative nor counterfactual (Pearl et al., 2009), since they are not able to help understand why and how particular explanations are relevant to model decisions. Thus, to further explore the decision boundaries of black-box DNNs, _counterfactuals_ have gradually come to the attention of researchers as an emerging technique for model interpretability. Counterfactuals are essentially synthetic samples within the data distribution which can flip the model prediction.
With counterfactuals, humans can understand how input changes affect the model and conduct reasoning under the “what-if” circumstances. Take, for instance, a loan applicant whose application was rejected. Correlation-based explanations may simply indicate the features that contributed most to the rejection (e.g., income and credit), while counterfactuals are capable of showing how the application could be accepted with certain changes (e.g., increase the monthly income from $\$5,000$ to $\$7,000$).

Recent work has already made some initial attempts at such counterfactual analysis. The first line of research (Kim et al., 2016; Chen et al., 2019) employed the prototype and criticism samples in the training set as the raw ingredients for counterfactual analysis, even though those selected samples are not counterfactuals in nature. Prototypes indicate the set of data samples that best represent the original prediction label, while criticisms are the samples with the desired prediction label which are close to the decision boundary. Some other work (Goyal et al., 2019; Agarwal et al., 2019) further utilized feature replacement techniques to create hypothetical instances as counterfactuals, where a query instance and a distractor instance are typically needed for counterfactual generation. The key to this kind of methodology lies in effective feature extraction and an efficient replacement algorithm. Besides, contrastive intervention (Dhurandhar et al., 2018; White and Garcez, 2019) on the query instance is another common way to generate counterfactuals with regard to the desired label. By reasonably perturbing input features, counterfactuals can be obtained in the form of modified data samples.

Despite the existing efforts, generating valid counterfactuals for raw data instances is still challenging for the following reasons. First, effective counterfactuals for a certain label are not guaranteed to exist in the training set, so the selected prototypes and criticisms are not always sufficient for counterfactual analysis. The related sample selection algorithms are highly likely to select some “unexpected” instances due to data constraints (Kim et al., 2016), which would largely limit the reasoning on model behaviors. Second, efficient feature replacement for raw data instances can be very hard and time-consuming (Goyal et al., 2019). Also, relevant distractor instances for replacement may not be available in particular scenarios, such as loan applications, considering privacy and security issues. Third, modifying query samples with interventions works only on limited types of data, such as tabular data (White and Garcez, 2019) and simple image data (Dhurandhar et al., 2018). For general raw data like real-world texts or images, the intervention operation in data space can be extremely complicated and intractable, which makes it difficult to use in practice.

For counterfactual generation on raw instances, the high-dimensional data space and unsemantic raw features are thus the two obstacles ahead. To this end, in this paper, we design a framework to generate counterfactuals specifically for raw data instances with the proposed Attribute-Informed Perturbation (AIP) method. By utilizing the power of generative models, we can obtain useful hypothetical instances within the data distribution for counterfactual analysis.
Essentially, our proposed AIP method can guide a well-trained generative model to generate valid counterfactuals by updating the latent representation of the query in the attribute-informed latent space, which is a joint embedding space for both raw features and data attributes. Compared with the original input space, the attribute-informed latent space has two significant merits for counterfactual generation: (1) raw features are embedded as low-dimensional ones which are more robust and efficient for generation; (2) data attributes are modeled as joint latent features which are more semantic for conditional generation. As for the construction of the attribute-informed latent space, we employ two types of losses to train the generative models, where the reconstruction loss is used to guarantee the quality of the raw feature embedding and the discrimination loss is used to ensure the correct attribute embedding. Through gradient-based optimization, the proposed AIP method can iteratively derive valid generative counterfactuals which are able to flip the prediction of the target model. In the experiments, although we simply consider a DNN as the target prediction model, due to its generally good performance on raw data instances, our proposed framework can also be easily applied to other prediction models.

The main contributions of this paper are summarized as follows:

* • We design a general framework to derive counterfactuals for raw data instances by employing generative models, aiming to facilitate the reasoning on model behaviors of black-box DNNs;
* • We develop AIP to iteratively update the attribute-informed latent vectors, according to the counterfactual loss with regard to the desired prediction label;
* • We evaluate the designed framework with AIP on several real-world datasets including raw texts and images, and demonstrate its superiority both quantitatively and qualitatively.

## 2\. Preliminaries

In this section, we briefly introduce some contexts related to our problem, as well as some basics of the employed techniques.

### 2.1. Counterfactual Explanation

Counterfactual explanation is essentially a natural extension under the framework of example-based reasoning (Rissland, 1991), where particular data samples are provided to promote the understanding of model behaviors. Nevertheless, counterfactuals are not common examples for model interpretation, since they are typically generated under “what-if” circumstances which may not necessarily exist. According to the theory proposed by J. Pearl (Pearl and Mackenzie, 2018), three distinct levels of cognitive ability are needed to fully master the behaviors of a particular model, i.e., _seeing_, _doing_ and _imagining_, from the easiest to the hardest. In fact, counterfactual explanation is raised precisely to meet the imagining-level cognition for model interpretation. Within the context of this paper, we only discuss counterfactuals under the assumption of the “closest possible world” (Wachter et al., 2017), where desired outcomes can be obtained through the smallest changes to the world. To be specific and simple without loss of generality, consider a binary classification model $f_{\bm{\theta}}:~{}\mathbb{R}^{d}\rightarrow\{0,1\}$, where $0$ and $1$ respectively indicate the undesired and desired output. The model input $\mathbf{x}\in\mathbb{R}^{d}$ is further assumed to be sampled from the data distribution $\mathcal{P}(\mathbf{x})$.
Then, given a query instance $\mathbf{x}_{0}$ with the undesired model output (i.e., $f_{\bm{\theta}}(\mathbf{x}_{0})=0$), the corresponding counterfactual $\mathbf{x}^{*}$ can be mathematically represented as:

(1) $\mathbf{x}^{*}=\operatorname*{arg\,min}_{\mathbf{x}|\mathcal{P}(\mathbf{x})>\eta}l(\mathbf{x},\mathbf{x}_{0})\quad\mathrm{s.t.}\ f_{\bm{\theta}}(\mathbf{x}^{*})=1,$

where $l:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}$ indicates a distance measure defined in the input space, and $\eta>0$ denotes the threshold which quantifies how likely the sample $\mathbf{x}$ is under the distribution $\mathcal{P}(\mathbf{x})$. The obtained counterfactual $\mathbf{x}^{*}$ is regarded as valid if it can effectively flip the target classifier $f_{\bm{\theta}}$ to the desired prediction.

Although finding counterfactuals is somewhat similar to generating adversarial examples (in the sense that both tasks aim to flip the model decision by minimally perturbing the input instance), they are essentially different in nature. Following the previous settings, the adversarial sample $\mathbf{x}^{adv}$ for model $f_{\bm{\theta}}$, with query instance $\mathbf{x}_{0}$, can be generally indicated by:

(2) $\mathbf{x}^{adv}=\operatorname*{arg\,min}_{\mathbf{x}=\mathbf{x}_{0}+\bm{\delta}}\|\bm{\delta}\|_{p}\quad\mathrm{s.t.}\ f_{\bm{\theta}}(\mathbf{x}^{adv})\neq f_{\bm{\theta}}(\mathbf{x}_{0}),$

where $\bm{\delta}$ denotes the adversarial perturbation on the query, $\|\cdot\|_{p}$ represents the norm operation, and $p\in\{\infty,1,2,\cdots\}$. Comparing Eq. 2 with Eq. 1, we note that a counterfactual example has two significant differences from an adversarial sample. First, the counterfactual generation process is subject to the original data distribution, while adversarial samples are not constrained by the distribution. This difference brings about the fact that counterfactuals are all in-distribution samples, but adversarial examples are mostly out-of-distribution (OOD) samples. Second, counterfactual changes on the query need to be human-perceptible, while adversarial perturbations are usually inconspicuous (Sen et al., 2019). Therefore, the key problem of counterfactual explanation actually lies in how to generate such an in-distribution sample, with human-perceptible changes on the query, that flips the model decision as desired.
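To make this contrast concrete, the following minimal PyTorch sketch (an illustration, with a hypothetical differentiable classifier standing in for $f_{\bm{\theta}}$) compares a single FGSM-style step in the spirit of Eq. 2 with a distance-penalized relaxation of Eq. 1; the in-distribution constraint $\mathcal{P}(\mathbf{x})>\eta$ is deliberately omitted here, which is precisely the gap that the generative framework of Sec. 3 fills.

```python
import torch

torch.manual_seed(0)

# Hypothetical differentiable stand-in for the black-box classifier
# f_theta: R^d -> (0, 1); any binary classifier with gradients works here.
d = 10
w, b = torch.randn(d), torch.tensor(0.5)
f = lambda x: torch.sigmoid(x @ w + b)

x0 = torch.randn(d)  # query instance (undesired output assumed: f(x0) < 0.5)

# Adversarial-style flip (spirit of Eq. 2): a single FGSM-style step that
# only seeks to cross the decision boundary, with no distribution constraint.
x_adv = x0.clone().requires_grad_(True)
(-torch.log(f(x_adv))).backward()          # push the prediction toward class 1
x_adv = (x_adv - 0.3 * x_adv.grad.sign()).detach()

# Counterfactual-style search (soft relaxation of Eq. 1): trade off flipping
# the label against the distance l(x, x0) to the query.
x_cf = x0.clone().requires_grad_(True)
opt = torch.optim.Adam([x_cf], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = -torch.log(f(x_cf)) + 0.5 * (x_cf - x0).norm()
    loss.backward()
    opt.step()

print(f(x0).item(), f(x_adv).item(), f(x_cf).item())
```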
### 2.2. Generative Modeling

Generative modeling is a typical task under the paradigm of unsupervised learning. Different from discriminative modeling, which involves discriminating input samples across classes, generative modeling aims to summarize the data distribution of input variables and further create new samples that plausibly fit into that distribution (Murphy, 2012). In practice, a well-trained generative model is capable of generating new examples that are not only reasonable, but also indistinguishable from real examples in the problem domain. Conventional examples of generative modeling include Latent Dirichlet Allocation (LDA) and the Gaussian Mixture Model (GMM). As emerging families of generative modeling, the Generative Adversarial Network (GAN) (Goodfellow et al., 2014) and the Variational Auto-Encoder (VAE) (Kingma and Welling, 2013) have been attracting lots of attention due to their exceptional performance in a myriad of applications, especially image and text generation (Van den Oord et al., 2016; Hu et al., 2017). By taking full advantage of their power on raw data with high dimensionality, we are able to better investigate how those data samples were created in the first place, which potentially benefits the generation of certain hypothetical examples. To this end, we specifically employ advanced generative models (i.e., GAN and VAE) to study counterfactual explanation for black-box DNNs on raw data instances, providing effective generative counterfactuals for better model understanding.

## 3\. Counterfactual Generation

In this section, we first introduce the designed generative counterfactual framework for raw data instances. Then, we present how to specifically construct the attribute-informed latent space with generative models. Finally, we show the details of our proposed AIP method on how to effectively obtain such counterfactuals.

### 3.1. Generative Counterfactual Framework

Figure 1. Designed framework for counterfactual sample generation.

We design a framework to create counterfactual samples for raw data instances, as illustrated by Fig. 1. To effectively handle the high dimensionality and unsemantic features, we utilize generative modeling techniques to aid the counterfactual generation process. Consider a target DNN $F_{\bm{\phi}}:\mathbb{R}^{d}\rightarrow\{1,\cdots,C\}$, which is the black-box model for counterfactual analysis, where $\mathbb{R}^{d}$ is the input data space and $\{1,\cdots,C\}$ denotes the model prediction space with $C$ different outputs. Given a query instance $\mathbf{x}_{0}$, $F_{\bm{\phi}}(\mathbf{x}_{0})=\mathbf{y}_{0}$ outputs a one-hot vector. To effectively generate a valid counterfactual sample $\mathbf{x}^{*}\in\mathbb{R}^{d}$ that can flip the $F_{\bm{\phi}}$ decision to $\mathbf{y}^{*}\in\{1,\cdots,C\}$ as desired, a generative model is trained within the framework. The applied generative modeling actually plays two important roles in the counterfactual generation process: (1) it guarantees that all created instances are in-distribution samples, since it can be regarded as a stochastic procedure that generates samples $\mathbf{x}\in\mathbb{R}^{d}$ under the particular data distribution $\mathcal{P}(\mathbf{x})$; (2) it generally assumes that underlying latent variables can be mapped to the data space under certain circumstances, which ensures sufficient feasibility for hypothetical examples. Thus, a well-trained generative model is the basis for high-quality counterfactuals within the designed framework.

The employed generative model specifically serves two sub-tasks for counterfactual generation, i.e., data _encoding_ and _decoding_. For raw data instances like images, the input space $\mathbb{R}^{d}$ can be extremely large, which makes it difficult and inefficient to directly create counterfactuals for the query. In our designed framework, data encoding is conducted to map the input data space to a low-dimensional attribute-informed latent space, which is formulated as a joint embedding space for both raw features and data attributes. In this way, each data sample $\mathbf{x}$ can be effectively encoded through the function $G^{enc}_{\bm{\psi}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}\oplus\mathbb{R}^{t}$, where $\mathbb{R}^{k}$ is the latent space for raw feature embeddings, $\mathbb{R}^{t}$ indicates the data attribute space, and $\oplus$ represents a concatenation operator. Reversely, the mapping for decoding is from the attribute-informed latent space to the original data space.
The decoder function can be similarly indicated by $G^{dec}_{\bm{\omega}}:\mathbb{R}^{k}\times\mathbb{R}^{t}\rightarrow\mathbb{R}^{d}$. Although $G^{enc}_{\bm{\psi}}$ and $G^{dec}_{\bm{\omega}}$ typically have two different focuses, they are jointly trained as a whole generative model in an end-to-end manner. How to derive $G^{enc}_{\bm{\psi}}$ and $G^{dec}_{\bm{\omega}}$ will be discussed in Sec. 3.2.

To finally obtain the counterfactual sample $\mathbf{x}^{*}$ for model $F_{\bm{\phi}}$ with query $\mathbf{x}_{0}$, we further need to modify the attribute-informed latent representation produced by the deployed generative model. Specifically, we use the proposed AIP method to update the attribute-informed latent vector of $\mathbf{x}_{0}$, according to the calculated counterfactual loss. Assuming $G^{enc}_{\bm{\psi}}(\mathbf{x}_{0})=\mathbf{z}_{0}\oplus\mathbf{a}_{0}$ ($\mathbf{z}_{0}\in\mathbb{R}^{k},\mathbf{a}_{0}\in\mathbb{R}^{t}$), the AIP method jointly updates $\mathbf{z}_{0}$ and $\mathbf{a}_{0}$ so as to minimize the corresponding loss counterfactually. The overall counterfactual loss consists of two parts, i.e., the _prediction loss_ and the _perturbation loss_. The prediction loss is set to ensure the flip of the model decision, and the perturbation loss is involved to guarantee the “closest possible” changes on the query; both are indispensable for counterfactual generation. For the prediction loss, we simply follow the common cross-entropy term, expressed as $L_{d}(F_{\bm{\phi}}(\mathbf{x}),\mathbf{y}^{*})=-\mathbf{y}^{*}\log(F_{\bm{\phi}}(\mathbf{x}))-(1-\mathbf{y}^{*})\log(1-F_{\bm{\phi}}(\mathbf{x}))$. For the perturbation loss, we employ two $l_{2}$ norms respectively on $\mathbb{R}^{k}$ and $\mathbb{R}^{t}$, indicated by $L_{b}(\mathbf{z},\mathbf{a},\mathbf{z}_{0},\mathbf{a}_{0})=\|\mathbf{z}-\mathbf{z}_{0}\|_{2}+\|\mathbf{a}-\mathbf{a}_{0}\|_{2}$ ($\mathbf{z}\in\mathbb{R}^{k},\mathbf{a}\in\mathbb{R}^{t}$), to restrain the query changes, which can also be regarded as a regularization term. Further, the overall counterfactual loss can be represented as follows:

(3) $L_{c}(\mathbf{z},\mathbf{a},\mathbf{z}_{0},\mathbf{a}_{0},\mathbf{y}^{*})=L_{d}\left(F_{\bm{\phi}}(G^{dec}_{\bm{\omega}}(\mathbf{z},\mathbf{a})),\mathbf{y}^{*}\right)+\alpha L_{b}(\mathbf{z},\mathbf{a},\mathbf{z}_{0},\mathbf{a}_{0}),$

where $\alpha$ is a balance coefficient between the two loss terms. With the proposed AIP method, the designed framework can generate a valid counterfactual example $\mathbf{x}^{*}$ from the optimized $\mathbf{z}^{*},\mathbf{a}^{*}$ through the decoder function (i.e., $\mathbf{x}^{*}=G^{dec}_{\bm{\omega}}(\mathbf{z}^{*},\mathbf{a}^{*})$). The details of the proposed AIP method will be introduced in Sec. 3.3.
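As a concrete reference, a minimal PyTorch sketch of Eq. 3 follows, where `classifier` and `decoder` are hypothetical callables standing in for $F_{\bm{\phi}}$ and $G^{dec}_{\bm{\omega}}$, and the prediction term uses the binary cross-entropy form given above:

```python
import torch
import torch.nn.functional as F_nn

def counterfactual_loss(z, a, z0, a0, y_star, classifier, decoder, alpha=0.8):
    """Counterfactual loss L_c of Eq. 3: prediction term L_d plus the
    alpha-weighted perturbation term L_b. `classifier` stands in for
    F_phi (outputting probabilities in (0, 1)) and `decoder` for G_dec."""
    x = decoder(z, a)                        # candidate sample in data space
    l_d = F_nn.binary_cross_entropy(classifier(x), y_star)
    l_b = (z - z0).norm(p=2) + (a - a0).norm(p=2)
    return l_d + alpha * l_b
```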
### 3.2. Attribute-Informed Latent Space

Figure 2. General illustration of the attribute-informed latent space in generative models. Particularly, blue arrows indicate the forward flow of computations, while orange arrows indicate the back-propagation flow of gradients. The dashed lines denote the losses for generative model training.

Constructing an appropriate attribute-informed latent space is the key part of generative modeling in our designed framework, and it has a direct influence on the quality of the generated counterfactuals. To achieve this, we need to train a generative model that captures the raw data features as well as the relevant data attributes, where the embedded features provide more robust bases for counterfactual analysis, and the incorporated attributes provide more semantics for conditional generation. Here, the data attributes mainly indicate the extra information from humans that comes along with raw instances, such as annotations or labels, which can usually be represented as one-hot vectors.

In practice, it is common that different generative models are employed for different tasks or data. Since different models typically involve disparate architectures, their training schemes can differ entirely from each other. Take GAN and VAE for example: a GAN is usually trained to reach an equilibrium between a generator and a discriminator function, while a VAE is typically trained to maximize a variational lower bound of the data log-likelihood. Therefore, to introduce how to construct the attribute-informed latent space with generative models, we present a general illustration in Fig. 2, although it may not be fully representative of all kinds of models. We generally introduce the modeling process with an encoder-decoder structure, which corresponds to the data encoding and decoding in our designed framework. Essentially, the attribute-informed latent space can be regarded as an extended code space of auto-encoders. By concatenating the attribute vector $\mathbf{a}$ to the raw feature embedding $\mathbf{z}$, the decoder function aims to achieve conditional generation based on $\mathbf{a}$. To ensure the attribute consistency between the original sample $\mathbf{x}$ and the generated sample $\hat{\mathbf{x}}$, a discriminator $D_{\xi}$ is employed, which is trained separately and used to classify the attributes of $\hat{\mathbf{x}}$. To effectively train such a generative model, two basic loss terms are required: the reconstruction loss and the discrimination loss. The overall training can be indicated by:

(4) $\min_{\psi,\omega}\quad\underbrace{\underset{\begin{subarray}{c}\mathbf{x}\sim\mathcal{P}(\mathbf{x})\\ \mathbf{a}\sim\mathcal{P}(\mathbf{a})\end{subarray}}{\mathbb{E}}\ \sum_{i=1}^{t}-a_{i}\log\left(D_{\xi}^{i}(\hat{\mathbf{x}})\right)-(1-a_{i})\log\left(1-D_{\xi}^{i}(\hat{\mathbf{x}})\right)}_{\text{Discrimination Loss}}\\ +\underbrace{\underset{\mathbf{x}\sim\mathcal{P}(\mathbf{x})}{\mathbb{E}}\|\mathbf{x}-\hat{\mathbf{x}}\|_{2}}_{\text{Reconstruction Loss}},$

where $a_{i}$ denotes the $i$-th attribute in $\mathbf{a}$, and $D_{\xi}^{i}$ indicates the prediction of $D_{\xi}$ on the $i$-th attribute. After sufficient training, $G^{enc}_{\bm{\psi}}$ and $G^{dec}_{\bm{\omega}}$ are effectively obtained, and the attribute-informed latent space can then be constructed with the aid of $G^{enc}_{\bm{\psi}}$. For specific tasks and architectures, the generative modeling process can be further enhanced with more specific losses or other advanced tricks.
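For concreteness, one mini-batch training step under Eq. 4 might look like the following sketch, where `enc`, `dec`, and `disc` are hypothetical modules for $G^{enc}_{\bm{\psi}}$, $G^{dec}_{\bm{\omega}}$, and $D_{\xi}$, the attributes $\mathbf{a}$ come from annotations during training, and the expectations are replaced by mini-batch means:

```python
import torch
import torch.nn.functional as F_nn

def generator_step(x, a, enc, dec, disc, opt):
    """One mini-batch step of the training objective in Eq. 4. `disc` is
    the separately trained attribute classifier D_xi and is assumed to
    output per-attribute probabilities in (0, 1)."""
    z = enc(x)                                    # raw-feature embedding
    x_hat = dec(torch.cat([z, a], dim=-1))        # generation conditioned on a
    recon = (x - x_hat).flatten(1).norm(p=2, dim=1).mean()   # ||x - x_hat||_2
    discr = F_nn.binary_cross_entropy(disc(x_hat), a)        # attribute BCE
    loss = discr + recon
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```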
### 3.3. Attribute-Informed Perturbation

With the obtained $G^{enc}_{\bm{\psi}}$ and $G^{dec}_{\bm{\omega}}$ for generative modeling, we then introduce the proposed AIP method to finally derive the counterfactual for target DNN $F_{\bm{\phi}}$ with the query $\mathbf{x}_{0}$. To guarantee the quality of the generated counterfactuals, AIP needs to find the sample that minimizes the counterfactual loss indicated by Eq. 3. Under the “closest possible world” assumption, the corresponding counterfactual sample can be denoted as:

(5) $\mathbf{x}^{*}=G^{dec}_{\bm{\omega}}\left(\operatorname*{arg\,min}_{\mathbf{z}\in\mathbb{R}^{k},\ \mathbf{a}\in\mathbb{R}^{t}}L_{c}(\mathbf{z},\mathbf{a},\mathbf{z}_{0},\mathbf{a}_{0},\mathbf{y}^{*})\right).$

To effectively solve Eq. 5, the proposed AIP method utilizes an iterative gradient-based optimization algorithm with dynamic step sizes (controlled by a decaying factor $\beta$), which helps the iteration process converge faster. In each iteration, the updated $\mathbf{z}$ and $\mathbf{a}$ are derived as follows:

(6) $\left\{\begin{aligned} \mathbf{z}^{(n+1)}&=\mathbf{z}^{(n)}-\mu^{(n)}\nabla_{\mathbf{z}}L_{c}\left(\mathbf{z}^{(n)},\mathbf{a}^{(n)},\ \mathbf{z}_{0},\ \mathbf{a}_{0},\ \mathbf{y}^{*}\right)\\ \mathbf{a}^{(n+1)}&=\mathbf{a}^{(n)}-\gamma^{(n)}\nabla_{\mathbf{a}}L_{c}\left(\mathbf{z}^{(n)},\mathbf{a}^{(n)},\ \mathbf{z}_{0},\ \mathbf{a}_{0},\ \mathbf{y}^{*}\right)\end{aligned}\right.,$

where $n$ indicates the iteration index, and $\mu$ and $\gamma$ respectively denote the step sizes of the updates on $\mathbf{z}$ and $\mathbf{a}$. The overall procedure is summarized in Algorithm 1. It is important to note that AIP only optimizes $\mathbf{z},\mathbf{a}$, and does not involve any parameter update on $F_{\bm{\phi}}$, $G^{enc}_{\bm{\psi}}$, $G^{dec}_{\bm{\omega}}$. Thus, the proposed AIP method is less time-consuming and more easily deployed for the counterfactual generation task, compared with generative frameworks which need extra model training (Samangouei et al., 2018; Singla et al., 2020).

Algorithm 1: Attribute-Informed Perturbation (AIP)

Input: $F_{\bm{\phi}}$, $G^{enc}_{\bm{\psi}}$, $G^{dec}_{\bm{\omega}}$, $\mathbf{x}_{0}$, $\mathbf{y}^{*}$, $\mu$, $\gamma$, $\alpha$, $\beta$, $n_{\max}$
Output: Counterfactual sample $\mathbf{x}^{*}$

1. Initialize $\mu$, $\gamma$, $\alpha$, $\beta$;
2. Initialize $n=0,\ \mathbf{x}=\mathbf{x}_{0}$;
3. Construct the latent representation with $\mathbf{z}\oplus\mathbf{a}\leftarrow G^{enc}_{\bm{\psi}}(\mathbf{x})$;
4. while $F_{\bm{\phi}}(G^{dec}_{\bm{\omega}}(\mathbf{z},\mathbf{a}))\neq\mathbf{y}^{*}$ and $n\leq n_{\max}$ do
5. $\quad$ Update $\mathbf{z}$ and $\mathbf{a}$ according to Eq. 6;
6. $\quad$ Update step sizes with $\mu\leftarrow\beta\mu$ and $\gamma\leftarrow\beta\gamma$;
7. $\quad$ $n\leftarrow n+1$;
8. Reconstruct the optimized sample with $\mathbf{x}^{*}\leftarrow G^{dec}_{\bm{\omega}}(\mathbf{z},\mathbf{a})$;
9. if $F_{\bm{\phi}}(\mathbf{x}^{*})=\mathbf{y}^{*}$ then return $\mathbf{x}^{*}$ as the counterfactual for $F_{\bm{\phi}}$ with query $\mathbf{x}_{0}$;
10. else return None (no valid counterfactual exists).
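The following minimal PyTorch sketch of Algorithm 1 may help fix ideas; it assumes `enc` returns the pair $(\mathbf{z},\mathbf{a})$, `loss_fn` is the counterfactual loss of Eq. 3 with the frozen models and $\alpha$ already bound (e.g., via `functools.partial`), and, for brevity, $\mathbf{y}^{*}$ is handled as a class index throughout:

```python
import torch

def aip(x0, y_star, classifier, enc, dec, loss_fn,
        mu=1.0, gamma=2.0, beta=0.95, n_max=300):
    """Minimal sketch of Algorithm 1 (AIP). Only (z, a) are optimized;
    the weights of `classifier`, `enc`, and `dec` stay frozen throughout."""
    with torch.no_grad():
        z0, a0 = enc(x0)
    z = z0.clone().requires_grad_(True)
    a = a0.clone().requires_grad_(True)
    for _ in range(n_max):
        with torch.no_grad():                        # stopping test
            x = dec(z, a)
            if classifier(x).argmax(-1).item() == y_star:
                return x                             # decision flipped
        loss = loss_fn(z, a, z0, a0, y_star)         # re-evaluates dec(z, a)
        g_z, g_a = torch.autograd.grad(loss, [z, a])
        with torch.no_grad():                        # gradient updates of Eq. 6
            z -= mu * g_z
            a -= gamma * g_a
        mu, gamma = beta * mu, beta * gamma          # decaying step sizes
    return None                                      # no valid counterfactual
```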
## 4\. Experiments

In this section, we evaluate the designed counterfactual generation framework with the proposed AIP method on several real-world datasets, both quantitatively and qualitatively. Overall, we mainly conduct two sets of experiments, respectively on _text_ and _image_ counterfactual generation, utilizing different data modeling techniques. With all conducted experiments, we aim to answer the following four key research questions:

* • How _effective_ is the designed framework in generating counterfactuals with AIP, with regard to different types of raw data?
* • How is the _quality_ of the counterfactuals created by our designed framework aided by AIP, compared with other methods?
* • How _efficient_ is the counterfactual generation under the designed framework with AIP, compared with other potential approaches?
* • Can we further _benefit_ other practical tasks with the counterfactuals generated from our designed framework with the AIP method?

### 4.1. Experimental Settings

#### 4.1.1. Real-World Datasets.

Throughout the experiments, we employ three real-world datasets to evaluate the performance of the designed framework with the AIP method, including both raw texts and images. The relevant data attributes depend on the particular tasks and are collected either from labels or annotations. The statistics of the involved datasets are shown in Table 1.

Table 1. Dataset statistics in the experiments.

Datasets | #Instance | #Attribute | Type | Domain
---|---|---|---|---
Yelp | $455,000$ | 1 | Raw Texts | Sentiment
Amazon | $558,000$ | 1 | Raw Texts | Sentiment
CelebA | $202,599$ | 13 | Raw Images | Face Feature

* • Yelp User Review Dataset (https://www.yelp.com/dataset) (Asghar, 2016): This dataset consists of user reviews from Yelp associated with rating scores. We use a tailored and modified version of this data for our experiments on text counterfactual generation. Specifically, we consider reviews with ratings higher than three as _positive_ samples and regard the others as _negative_ ones, and we further use these sentiment labels as the relevant attribute for data modeling. The vocabulary of our involved Yelp data contains more than $9,000$ words, and the average review length is around $9$ words.
* • Amazon Product Review Dataset (http://jmcauley.ucsd.edu/data/amazon/index_2014.html) (He and McAuley, 2016): This dataset is also involved as a raw textual dataset for our experiments. Similar to the Yelp data, we map the original rating information of reviews into sentiment categories (i.e., _positive_ and _negative_), and further model these labels as a sentiment attribute of the raw textual reviews. The Amazon dataset has more than $50,000$ words in its vocabulary, and the average review length is around $15$ words.
* • CelebFaces Attributes (CelebA) Dataset (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) (Liu et al., 2015): This is a large-scale face attributes dataset, containing a large number of raw face images with human annotations. We employ this dataset for our experiments on image counterfactual generation, and select $13$ representative face attributes (out of 40) for data modeling along with the raw face images. The involved thirteen attributes include _Male_, _Young_, _Blond_Hair_, _Pale_Skin_, _Bangs_, _Mustache_, etc.

#### 4.1.2. Target Model for Interpretation.

Since we mainly discuss counterfactuals for raw data instances, a DNN is a suitable choice as our target model. For the target DNN $F_{\bm{\phi}}$ in our experiments, we employ regular architectures for the corresponding tasks. Particularly, for text sentiment classification, we use the common convolutional architecture in (Kim, 2014) to pre-train a DNN classifier for further counterfactual analysis. For the image attribute classification task, similarly, we utilize a simple convolutional network (Guo et al., 2017) to prepare a target classifier, where the model is trained with one of the attributes as the label. During the evaluations on counterfactual generation, the target DNNs are fixed without further training.

#### 4.1.3. Employed Generative Modeling Techniques.

In our experiments, different data modeling techniques are employed for different types of data.
In particular, we use different generative models to construct the corresponding attribute-informed latent space for text and image data. For textual reviews (Yelp, Amazon), we utilize the modeling techniques introduced in (Hu et al., 2017), and build a transformer-based VAE to effectively formulate the relevant attribute-informed latent space. For face images (CelebA), we mainly follow the modeling method of AttGAN (He et al., 2019), where more complicated training schemes are employed, compared with the general one shown in Sec. 3.2, for better visual quality of the generated images. Both of the employed generative models are well trained on the corresponding datasets before the counterfactual generation process, so as to guarantee the high quality of the generated counterfactuals.

#### 4.1.4. Alternative Methods and Baselines.

To effectively evaluate the performance of the designed framework with AIP, we incorporate the following alternative methods and baselines for comparison.

* • TextBugger (Li et al., 2018): This is a general method for adversarial text generation, built on word attribution and bug selection. The created text samples can effectively flip the prediction of the target classifier. We employ this method as a baseline to compare with our generated text counterfactuals.
* • DeepWordBug (Gao et al., 2018): This is another method focusing on adversarial text generation, where a token scoring strategy is utilized to guide character-level adversarial perturbation. This method is employed as a baseline for text counterfactuals as well.
* • FGSM (Goodfellow et al., 2015): The fast gradient sign method is a common way to generate adversarial image samples, using the gradients of the loss with respect to the input. The sample created by this method effectively maximizes the loss, so as to flip the original prediction. We employ this method as a baseline specifically for our generated image counterfactuals.
* • Counter_Vis (Goyal et al., 2019): This is a recent method for generating image counterfactuals, where particular image regions are replaced to flip the model decision. We employ it as an alternative method for image counterfactual generation.
* • CADEX (Moore et al., 2019): This is a state-of-the-art method for counterfactual generation, where a gradient-based method is directly applied to modify the input space of the query. This method was originally proposed for tabular data; we adapt it as an alternative for image counterfactuals, due to the particularity of texts.
* • xGEMs (Joshi et al., 2018): This is a state-of-the-art method for generating counterfactuals, which also employs generative modeling for sample generation. This method only involves latent space modeling and cannot achieve conditional generation with semantic attributes. We employ it as an important alternative for both text and image counterfactuals.
* • AIP_R: This is a random version of our proposed AIP method, which updates the latent vectors in a random way.

### 4.2. Implementations

In this part, we introduce the implementation details for our experiments on text and image counterfactual generation, corresponding to the following Sec. 4.3 and Sec. 4.4.

#### 4.2.1. Details for Text Counterfactuals

##### Hyper-parameters.

We implement all related algorithms and models with the PyTorch framework (https://pytorch.org/). Specifically for text counterfactuals, we set the balance coefficient $\alpha=0.8$ in Eq. 3 during the main evaluations.
The decaying factor is set as $\beta=0.95$, and the maximum number of iterations is set as $n_{\max}=300$. As for the initial step sizes for optimization, we set $\mu=1$ and $\gamma=2$ in the related experiments.

##### Target DNN model $F_{\bm{\phi}}$ for texts.

We train two CNN classifiers, respectively on the Yelp and Amazon datasets, as our target models, following the architectures introduced in (Kim, 2014). In particular, both CNNs employ the _CNN-non-static_ version of training, as illustrated in that paper. The deployed target CNN for the Yelp data has $89.58\%$ testing accuracy, and the deployed CNN for Amazon has $88.16\%$ testing accuracy.

##### Generative model $G^{enc}_{\psi},G^{dec}_{\omega}$ for texts.

We employ a transformer-based VAE to conduct the relevant data modeling. Specifically, the overall structure of the employed _Encoder_ and _Decoder_ is illustrated in Tab. 2. The sizes of the transformer and the latent space are both set as $256$. As for the training phase, we use another classifier, trained separately from $F_{\bm{\phi}}$, as the discriminator $D_{\xi}$. The relevant batch size is $128$, the embedding dropout rate is $0.5$, and the learning rate is $10^{-3}$.

Table 2. Structure of the employed transformer-based VAE.

Encoder (_from top to bottom_) | Decoder (_from top to bottom_)
---|---
Embedding Layer | Multi-Head Attention Layers
Multi-Head Attention Layers | Addition & Normalization
Addition & Normalization | Dense Layer
Dense Layer | Addition & Normalization
Addition & Normalization | Multi-Head Attention Layers
Multi-Head Attention Layers | Fully-Connected Layer
GRU Layer | Softmax
Summation | /

#### 4.2.2. Details for Image Counterfactuals

##### Hyper-parameters.

As with the text counterfactuals, we use PyTorch to implement the relevant models and algorithms. In particular, we set $\alpha=1.5$ in Eq. 3. The decaying factor and maximum number of iterations in Algorithm 1 are respectively set as $\beta=0.9$ and $n_{\max}=500$. Besides, the initial optimization step sizes are set as $\mu=2$ and $\gamma=3$.

##### Target DNN model $F_{\bm{\phi}}$ for images.

We train a basic CNN on the CelebA dataset as the target, following the architecture in (Guo et al., 2017), specifically using the “Male” annotation as the label for training. Essentially, the target CNN here is a binary gender classifier which predicts an input image as either “Male” or “Female”. The deployed CNN has a testing accuracy of $95.67\%$.

##### Generative model $G^{enc}_{\psi},G^{dec}_{\omega}$ for images.

The generative modeling for the CelebA data largely follows the training schemes in (He et al., 2019). Our modeling attributes include: “Pale_Skin”, “Bangs”, “Black_Hair”, “Blond_Hair”, “No_Beard”, “Brown_Hair”, “Bushy_Eyebrows”, “Male”, “Eyeglasses”, “Young”, “Mustache”, “Bald”, and “Mouth_Slightly_Open”. We present the employed structure of the _Encoder_ and _Decoder_ in Tab. 3, where DeConvolution indicates the transposed convolution operation, and a triple such as (64,4,2) denotes the dimension, kernel size, and stride, respectively. From the structure, note that the latent size is $1,024$. For the training phase, we set the batch size as $32$ and the learning rate as $2\times 10^{-4}$. The discriminator $D_{\xi}$ is a multi-class classifier, trained separately from $F_{\bm{\phi}}$.

Table 3. Structure of the employed AttGAN.
Encoder (_from top to bottom_) | Decoder (_from top to bottom_)
---|---
Convolution Layer (64,4,2) | DeConvolution Layer (1024,4,2)
Normalization | Normalization
Convolution Layer (128,4,2) | DeConvolution Layer (512,4,2)
Normalization | Normalization
Convolution Layer (256,4,2) | DeConvolution Layer (256,4,2)
Normalization | Normalization
Convolution Layer (512,4,2) | DeConvolution Layer (128,4,2)
Normalization | Normalization
Convolution Layer (1024,4,2) | DeConvolution Layer (3,4,2)
Normalization | /

### 4.3. Text Counterfactual Evaluations

In this part, we evaluate the experimental results of the designed framework with AIP in generating text counterfactuals, with regard to a convolutional neural network (CNN) built for sentiment classification. The involved raw texts for the target DNN come from the user/product reviews in the Yelp and Amazon datasets, where $90\%$ are used for training, $5\%$ for development, and $5\%$ for testing.

#### 4.3.1. Effectiveness Evaluation.

Figure 3. Effectiveness evaluation for text counterfactuals.

In order to evaluate the effectiveness of text counterfactuals, we employ the metric _Flipping Ratio_ (FR) to measure the relevant performance, which reflects how likely the generated text samples are to flip the model decision to $\mathbf{y}^{*}$. Specifically, FR is calculated as follows:

(7) $\mathrm{FR}=|\mathcal{X}_{f}|\Big{/}|\mathcal{X}_{q}|\quad\left(\mathbf{x}_{0}\in\mathcal{X}_{f}\ \mathrm{if}\ F_{\bm{\phi}}(\mathbf{x}^{*})=\mathbf{y}^{*}\right),$

where $\mathcal{X}_{f}$ indicates the set of query samples for which a flipping instance $\mathbf{x}^{*}$ can be generated by the particular method, and $\mathcal{X}_{q}$ denotes the set of all testing queries. In our experiments, there are $500$ testing queries in total (i.e., $|\mathcal{X}_{q}|=500$), randomly selected from the test set. Fig. 3 illustrates our experimental results on both the Yelp and Amazon datasets. According to the numerical results, we note that our designed framework with AIP works well on both datasets and has competitive performance among all alternatives and baselines, although the adversarial baseline TextBugger achieves the highest FR score, flipping queries most reliably. Besides, we also observe that AIP_R does not effectively generate flipping samples, which indicates that random optimization in the attribute-informed latent space cannot help counterfactual sample generation.

#### 4.3.2. Quality Evaluation.

Figure 4. Quality evaluation for text counterfactuals.

As for the quality assessment of counterfactual samples, we employ the _Latent Perturbation Ratio_ (LPR) metric to measure the latent closeness between the generated sample $\mathbf{x}^{*}$ and the original query instance $\mathbf{x}_{0}$. Since high-quality counterfactual samples need to ensure sparse changes in the robust feature space, the smaller the LPR is, the better the counterfactual. To be specific, the LPR is calculated by:

(8) $\mathrm{LPR}=\big{\|}\mathbf{z}^{*}-\mathbf{z}_{0}\big{\|}_{0}\Big{/}k,$

where $\|\cdot\|_{0}$ indicates the $l_{0}$ norm operation, and $\mathbf{z}^{*}$ and $\mathbf{z}_{0}$ are the raw feature embeddings for $\mathbf{x}^{*}$ and $\mathbf{x}_{0}$, respectively. To make a fair comparison, we use the same encoder function $G^{enc}_{\bm{\psi}}$ for all generated samples to obtain the corresponding latent representation vectors. In this set of experiments, the latent dimension is $256$ (i.e., $k=256$), and the final LPR value for a particular method is recorded as the average over the $500$ testing queries.
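Both metrics are straightforward to compute; a minimal sketch follows, assuming PyTorch tensors and a `generate` callable such as the AIP routine sketched earlier (names are ours, for illustration):

```python
import torch

def flipping_ratio(queries, generate, classifier, y_star):
    """FR of Eq. 7: fraction of testing queries for which `generate`
    (e.g., the AIP routine) returns a sample that flips the target
    model's decision to the desired label y_star."""
    flipped = 0
    for x0 in queries:
        x_star = generate(x0)
        if x_star is not None and classifier(x_star).argmax(-1).item() == y_star:
            flipped += 1
    return flipped / len(queries)

def latent_perturbation_ratio(z_star, z0):
    """LPR of Eq. 8: the l0 'norm' of z* - z0 over the latent dimension k;
    in practice a small threshold may replace the exact-zero test."""
    return ((z_star - z0) != 0).sum().item() / z0.numel()
```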
The relevant numerical results are presented in Fig. 4. From the experiments, it is noted that xGEMs and the proposed AIP method significantly outperform the other baselines, indicating that the corresponding generated samples maintain more robust features with regard to the query. Furthermore, the proposed AIP is noted to be slightly better than xGEMs, which may partially result from the conditional generation brought by the attribute vector $\mathbf{a}$. This set of results also validates the fact that adversarial samples typically utilize artifacts to flip the model decisions, instead of using robust features.

#### 4.3.3. Efficiency Evaluation.

Figure 5. Efficiency evaluation for text counterfactuals.

To compare the efficiency, we record the time consumption of each method over the $500$ testing queries in the generation phase on the same machine. Specifically, we calculate the average time cost per query, and further employ this as the metric to assess the efficiency of particular methods. Fig. 5 shows the relevant experimental results. Based on the statistics, it is observed that the adversarial methods (i.e., TextBugger and DeepWordBug) typically consume less time per query on average, compared with the counterfactual generation methods, which is mainly due to the fact that adversarial methods do not need to conduct encoding computations before sample generation. As for our proposed AIP method, the time efficiency is roughly the same as the alternative xGEMs, but it is significantly better than its random version AIP_R, which needs more iterations to converge.

#### 4.3.4. Qualitative Case Studies.

Table 4. Case studies on generated text samples.

Counterfactual on _Negative_ sentiment (Yelp)
---
Query: this is the worst walmart neighborhood market out of any of them
TextBugger: this is the worst wa1mart neighborho0d market out of a ny of them
DeepWordBug: this id the wosrt walmart neighobrhood market out of any of htem
xGEMs: that is good walmart market out of any neighborhood
AIP: this is the best walmart neighborhood market for all of them

Counterfactual on _Positive_ sentiment (Amazon)
---
Query: this item works just as i thought it would
TextBugger: this item w0rks just as i tho ught it wou1d
DeepWordBug: this item wroks just ae i thought it wolud
xGEMs: this item works out poorly just as i thought disappointed
AIP: this item works bad just as i thought it would not play

Figure 6. Evaluations for image counterfactual generation.

Figure 7. Qualitative case studies on generated image samples.

Here, we present several representative case studies from different methods in Tab. 4, aiming to provide a qualitative comparison of the generated text samples. Based on Tab. 4, we can see that adversarial texts typically provide limited insights for humans in counterfactual analysis, since they mainly make use of model artifacts to flip the prediction. Nevertheless, with the samples generated by xGEMs and AIP, we can easily observe sentiment variations with regard to the query instance, which sheds light on model behaviors and facilitates further human reasoning on black-box models. Besides, compared with xGEMs, the proposed AIP method usually generates more sensible counterfactuals with the aid of attribute conditions.
### 4.4. Image Counterfactual Evaluations

In this part, we evaluate the designed framework with AIP on image counterfactual generation. Instead of considering only one attribute for conditional generation as in texts, we take multiple attributes into account for image counterfactuals. In this set of experiments, our target DNN follows a common CNN architecture and is trained as a gender classifier, which classifies an input image as _Male_ or _Female_. All involved raw images for the target DNN come from the CelebA dataset, and we use $90\%$ of the data for training, $5\%$ for development, and $5\%$ for testing. The relevant quantitative results are all illustrated in Fig. 6.

#### 4.4.1. Effectiveness Evaluation.

For the effectiveness assessment, we again use the FR metric given by Eq. 7. In the experiments, we set $|\mathcal{X}_{q}|=500$, and test how many queries can be effectively flipped by each method. Fig. 6(a) illustrates the relevant numerical results, where the adversarial method FGSM performs best on FR and can flip nearly every testing query. We note that the proposed AIP method ranks second and outperforms the other counterfactual generation methods. Besides, it is also observed that CADEX and AIP_R perform relatively poorly on the image counterfactual task within the given iteration budget, even though CADEX has been shown to work well for tabular instances (Moore et al., 2019).

#### 4.4.2. Quality Evaluation.

Similar to the text counterfactual scenario, we employ the LPR metric, shown in Eq. 8, to measure the quality of the generated image counterfactuals. In the experiments, the latent dimension $k$ constructed by $G^{enc}_{\bm{\psi}}$ is $1,024$ (i.e., $k=1024$), and the corresponding LPR for a particular method is recorded as the average over the $500$ testing queries. The relevant experimental results are shown in Fig. 6(b). Based on the LPR comparison, we note that the samples generated by FGSM and CADEX change a lot in the latent feature space, because both methods directly rely on input perturbation for sample generation. As for the proposed AIP, it achieves the lowest LPR among all the alternatives and baselines, and it is significantly better than its random version AIP_R.

#### 4.4.3. Efficiency Evaluation.

We similarly employ the average time consumption per query to evaluate the efficiency of image counterfactual generation. Specifically, the average time is obtained over $500$ testing queries randomly selected from the test set. Fig. 6(c) shows the relevant experimental results. According to the statistics and comparison, we note that FGSM is the most efficient one, and xGEMs consumes the least time on average among all the counterfactual-based methods. As for the proposed AIP, a competitive efficiency performance is observed, remarkably superior to that of Counter_Vis, CADEX, and AIP_R.

#### 4.4.4. Qualitative Case Studies.

To facilitate a qualitative comparison among different methods, we show some case studies in Fig. 7. We select several query instances whose model predictions are female, and then employ the different methods to generate the corresponding image samples which flip the model decisions for counterfactual purposes. According to the results, we note that the samples generated by FGSM and CADEX do not have salient visual changes with respect to the query instances, which largely limits human reasoning on model behaviors.
Among the other alternative methods, it is observed that the proposed AIP is capable of generating counterfactuals with better visual quality, presenting much smoother transitions from female to male.

### 4.5. Influence of Hyper-parameter $\alpha$

In this part, we show some additional results on the influence of the hyper-parameter $\alpha$ in Eq. 3. All other experimental settings remain unchanged. The relevant results are shown in Fig. 8.

Figure 8. Influence of $\alpha$ on the FR and LPR metrics.

Based on the results, we observe that $\alpha$ serves as a knob to control the effectiveness and sample quality of the designed framework. To select an appropriate $\alpha$, we need to strike a balance between FR and LPR: the larger the $\alpha$ is, the lower the effectiveness and the higher the sample quality. Different data types may also have different trade-off curves.

### 4.6. Applications

In this part, we focus on some practical scenarios which may benefit from the counterfactual samples generated by our designed framework. In particular, we show applications of the framework to _feature interaction_ and _data augmentation_, respectively.

#### 4.6.1. Feature Interaction

Figure 9. Feature interactions for the decision change.

Understanding feature interaction can be very important in lots of real-world domains. A typical example is the bias detection task, where humans aim to find a related set of features which can significantly influence the correctness or fairness of a model decision. Utilizing our designed framework for counterfactual analysis can partially help with this practical task. By observing the perturbation scale on the attribute vector $\mathbf{a}$ of the generated counterfactual, humans can get a sense of which semantic features contribute significantly to the flipping of the model decision, as sketched below. To illustrate the point, we show another case result from the designed framework with AIP in Fig. 9. Here, we train an age classifier on the CelebA dataset as our target DNN, and aim to analyze the feature interaction of a query predicted as “Old”. Based on the attribute perturbations of the generated sample, we note that the top semantic attributes are “Male”, “Bushy_Eyebrows”, “Black_Hair”, and “Bangs”, besides the target attribute. This result directly demonstrates that the “Male” attribute has a strong interaction with the predicted attribute for this particular query, and that the target DNN potentially exhibits gender bias in its age predictions.
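A minimal sketch of this attribute-level reading, assuming 13-dimensional attribute tensors as in the CelebA setup (function and variable names are hypothetical):

```python
import torch

CELEBA_ATTRS = ["Pale_Skin", "Bangs", "Black_Hair", "Blond_Hair", "No_Beard",
                "Brown_Hair", "Bushy_Eyebrows", "Male", "Eyeglasses", "Young",
                "Mustache", "Bald", "Mouth_Slightly_Open"]

def top_interacting_attributes(a_star, a0, names=CELEBA_ATTRS, top=4):
    """Rank semantic attributes by perturbation magnitude |a* - a0|,
    the quantity read off in the feature-interaction analysis of Fig. 9."""
    delta = (a_star - a0).abs()
    order = torch.argsort(delta, descending=True)
    return [(names[i], delta[i].item()) for i in order[:top].tolist()]
```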
#### 4.6.2. Data Augmentation

Table 5. Model performance with data augmentation.

Dataset | Setting | CNN (Kim, 2014) | VDCNN (Conneau et al., 2017)
---|---|---|---
Yelp | Initial | 82.33% ($\pm$ 0.61%) | 88.79% ($\pm$ 0.53%)
Yelp | Augmented | 83.16% ($\pm$ 0.57%) | 89.95% ($\pm$ 0.46%)
Amazon | Initial | 81.96% ($\pm$ 0.52%) | 88.55% ($\pm$ 0.63%)
Amazon | Augmented | 82.41% ($\pm$ 0.49%) | 88.76% ($\pm$ 0.55%)

Dataset | Setting | CNN (Guo et al., 2017) | ResNet (He et al., 2016)
---|---|---|---
CelebA | Initial | 87.32% ($\pm$ 0.22%) | 90.96% ($\pm$ 0.27%)
CelebA | Augmented | 88.85% ($\pm$ 0.21%) | 91.35% ($\pm$ 0.25%)

Another application of the designed framework is data augmentation for model training. By taking full advantage of the generated counterfactual samples as new training instances, we aim to obtain DNN models with higher performance and robustness. Specifically, to test the improvement, we train several DNN models on relatively small training sets, which are essentially subsets of the original data. For the sentiment classifiers on Yelp and Amazon, our initial training size is $20,000$, containing $10,000$ positive and $10,000$ negative reviews. The extra counterfactual training size is $2,000$, whose queries are randomly selected from the initial training set. For the binary age classifier on CelebA, we employ a similar setting for training, where each class includes $10,000$ initial samples, and $2,000$ generated counterfactual samples are further incorporated for augmentation. The relevant experimental results are shown in Table 5. Based on the statistics, we note that the augmented training with counterfactual samples typically achieves higher classification accuracies with smaller variances, which can also be observed under some advanced DNN structures.

## 5\. Related Work

Generating counterfactual explanations is just one of many interpretation methods for black-box models, which generally belongs to the family of interpretable machine learning. According to the particular problems they focus on, interpretation methods can in general be divided into the following three categories.

The first category of methods aims to answer the “_What_”-type questions, i.e., what part of the input contributes most to the model prediction. A representative work in this category is LIME (Ribeiro et al., 2016), where the authors employ linear models to approximate the local decision boundary and further formulate a sub-modular optimization problem for model interpretation. The feature importance in LIME is essentially obtained by observing the prediction changes after perturbing input samples. Similar related methods can also be found in Anchors (Ribeiro et al., 2018) and SHAP (Lundberg and Lee, 2017). Another common methodology under this category is to utilize the model gradient information, where gradients are typically regarded as an indicator of perturbation sensitivity. Related methods can be found in Grad-CAM (Selvaraju et al., 2017), Integrated Gradients (Sundararajan et al., 2017), and SmoothGrad (Smilkov et al., 2017).

The second category aims to answer the “_Why_”-type questions, i.e., why the input is predicted as label _A_ instead of _B_. The methods under this category can be quite different from the previous ones, since they basically need to consider two labels simultaneously. Several different methodologies have been proposed for this problem. For example, the authors in (Dhurandhar et al., 2018) design a contrastive perturbation method to derive the related positive and negative features of the input with regard to the concerned label. Besides, a general method based on structural causal models is proposed in (Miller, 2018) to tackle the problem specifically in classification and planning scenarios. Also, a generative framework, CDeepEx, is designed in (Feghahati et al., 2018) to investigate this problem for images by utilizing GAN.

The third category lies in the “_How_”-type questions, i.e., how to modify the input so as to flip the model prediction to the preferred label. This problem is a natural extension of the “Why”-type, and it can to some extent be handled by the second category of methods in some simple scenarios. However, for problems with a high-dimensional space, the previous categories of methods typically fail due to the intractable computation for sample modification. Several particular methods have been proposed to solve this issue.
For example, the authors in (Goyal et al., 2019) propose a straightforward solution with image region replacement, which is essentially a feature replacement process for the input with the aid of a distractor. In (Agarwal et al., 2019), the authors novelly use the input itself as the distractor for feature replacement by utilizing GAN-based inpainting. Besides, generative modeling is another potential way to approach this problem, and related methods can be found in (Singla et al., 2020; Joshi et al., 2018; Liu et al., 2019). Our work belongs to this branch of methodology.

## 6\. Conclusion and Future Work

In this paper, we design a framework to generate counterfactual explanations for black-box DNN models specifically with raw data instances. By taking advantage of generative modeling techniques, we effectively construct an attribute-informed latent space for particular data, and further utilize this space for counterfactual generation. To guarantee the validity of the generated samples, we propose the AIP method to iteratively optimize the attribute-informed latent vectors according to the counterfactual loss term, from which the counterfactuals can be finally obtained through data reconstruction. We evaluate the designed framework with AIP on several real-world datasets, including both texts and images, and demonstrate its effectiveness, sample quality, and efficiency. Future extensions of this work may include investigation under the “closest possible worlds” assumption, where the goal is to find an optimal set of counterfactuals for a query instead of a single sample. Besides, employing causal models for counterfactual generation is another promising direction to explore.

## References

* Chirag Agarwal, Dan Schonfeld, and Anh Nguyen. 2019. Removing input features via a generative model to explain their attributions to classifier's decisions. _arXiv preprint arXiv:1910.04256_ (2019).
* Nabiha Asghar. 2016. Yelp dataset challenge: Review rating prediction. _arXiv preprint arXiv:1605.05362_ (2016).
* Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. 2019. This looks like that: deep learning for interpretable image recognition. In _Advances in Neural Information Processing Systems_. 8928–8939.
* Alexis Conneau, Holger Schwenk, Loïc Barrault, and Yann Lecun. 2017. Very Deep Convolutional Networks for Text Classification. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_. 1107–1116.
* Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel Das. 2018. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In _Advances in Neural Information Processing Systems_. 592–603.
* Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. _arXiv preprint arXiv:1702.08608_ (2017).
* Mengnan Du, Ninghao Liu, and Xia Hu. 2020. Techniques for Interpretable Machine Learning. _Commun. ACM_ 63, 1 (2020), 68–77.
* Amir Feghahati, Christian R Shelton, Michael J Pazzani, and Kevin Tang. 2018. CDeepEx: Contrastive Deep Explanations. (2018).
Black-box generation of adversarial text sequences to evade deep learning classifiers. In _2018 IEEE Security and Privacy Workshops (SPW)_. IEEE, 50–56.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In _Advances in neural information processing systems_. 2672–2680.
* Goodfellow et al. (2015) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In _3rd International Conference on Learning Representations, ICLR_.
* Goyal et al. (2019) Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Counterfactual Visual Explanations. In _International Conference on Machine Learning_. 2376–2384.
* Guo et al. (2017) Tianmei Guo, Jiwen Dong, Henjian Li, and Yunxing Gao. 2017. Simple convolutional neural network on image classification. In _2017 IEEE 2nd International Conference on Big Data Analysis (ICBDA)_. IEEE, 721–724.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 770–778.
* He and McAuley (2016) Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In _Proceedings of the 25th international conference on world wide web_. 507–517.
* He et al. (2019) Zhenliang He, Wangmeng Zuo, Meina Kan, Shiguang Shan, and Xilin Chen. 2019. AttGAN: Facial attribute editing by only changing what you want. _IEEE Transactions on Image Processing_ 28, 11 (2019), 5464–5478.
* Hu et al. (2017) Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In _Proceedings of the 34th International Conference on Machine Learning_. 1587–1596.
* Joshi et al. (2018) Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, and Joydeep Ghosh. 2018. xGEMs: Generating examplars to explain black-box models. _arXiv preprint arXiv:1806.08867_ (2018).
* Kim et al. (2016) Been Kim, Rajiv Khanna, and Oluwasanmi O Koyejo. 2016. Examples are not enough, learn to criticize! Criticism for interpretability. In _Advances in neural information processing systems_. 2280–2288.
* Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In _Proceedings of the Conference on EMNLP_. 1746–1751.
* Kingma and Welling (2013) Diederik P Kingma and Max Welling. 2013. Auto-encoding variational Bayes. _arXiv preprint arXiv:1312.6114_ (2013).
* Li et al. (2018) Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. TextBugger: Generating adversarial text against real-world applications. _arXiv preprint arXiv:1812.05271_ (2018).
* Liu et al. (2019) S Liu, B Kailkhura, D Loveland, and H Yong. 2019. _Generative Counterfactual Introspection for Explainable Deep Learning_. Technical Report. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States).
* Liu et al. (2015) Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In _Proceedings of the IEEE international conference on computer vision_. 3730–3738.
* Lundberg and Lee (2017) Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In _Advances in neural information processing systems_. 4765–4774.
* Miller (2018) Tim Miller. 2018.
Contrastive explanation: A structural-model approach. _arXiv preprint arXiv:1811.03163_ (2018).
* Moore et al. (2019) Jonathan Moore, Nils Hammerla, and Chris Watkins. 2019. Explaining Deep Learning Models with Constrained Adversarial Examples. In _Pacific Rim International Conference on Artificial Intelligence_. Springer, 43–56.
* Murphy (2012) Kevin P Murphy. 2012. _Machine learning: a probabilistic perspective_. MIT Press.
* Pearl et al. (2009) Judea Pearl et al. 2009. Causal inference in statistics: An overview. _Statistics Surveys_ 3 (2009), 96–146.
* Pearl and Mackenzie (2018) Judea Pearl and Dana Mackenzie. 2018. _The book of why: the new science of cause and effect_. Basic Books.
* Pouyanfar et al. (2018) Samira Pouyanfar, Saad Sadiq, Yilin Yan, Haiman Tian, Yudong Tao, Maria Presa Reyes, Mei-Ling Shyu, Shu-Ching Chen, and SS Iyengar. 2018. A survey on deep learning: Algorithms, techniques, and applications. _ACM Computing Surveys (CSUR)_ 51, 5 (2018), 1–36.
* Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why should I trust you?”: Explaining the predictions of any classifier. In _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_. 1135–1144.
* Ribeiro et al. (2018) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-precision model-agnostic explanations. In _Thirty-Second AAAI Conference on Artificial Intelligence_.
* Rissland (1991) Edwina L Rissland. 1991. Example-based reasoning. _Informal reasoning in education_ (1991), 187–208.
* Samangouei et al. (2018) Pouya Samangouei, Ardavan Saeedi, Liam Nakagawa, and Nathan Silberman. 2018. ExplainGAN: Model Explanation via Decision Boundary Crossing Transformations. In _Proceedings of the European Conference on Computer Vision (ECCV)_. 666–681.
* Selvaraju et al. (2017) Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In _Proceedings of the IEEE international conference on computer vision_. 618–626.
* Sen et al. (2019) Ayon Sen, Xiaojin Zhu, Liam Marshall, and Robert Nowak. 2019. Should Adversarial Attacks Use Pixel p-Norm? _arXiv preprint arXiv:1906.02439_ (2019).
* Singla et al. (2020) Sumedha Singla, Brian Pollack, Junxiang Chen, and Kayhan Batmanghelich. 2020. Explanation by Progressive Exaggeration. In _8th International Conference on Learning Representations, ICLR 2020_.
* Smilkov et al. (2017) Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. 2017. SmoothGrad: removing noise by adding noise. _arXiv preprint arXiv:1706.03825_ (2017).
* Sundararajan et al. (2017) Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In _Proceedings of the 34th International Conference on Machine Learning, Volume 70_. JMLR.org, 3319–3328.
* Van den Oord et al. (2016) Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. 2016. Conditional image generation with PixelCNN decoders. In _Advances in neural information processing systems_. 4790–4798.
* Wachter et al. (2017) Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. _Harv. JL & Tech._ 31 (2017), 841.
* White and Garcez (2019) Adam White and Artur d’Avila Garcez. 2019.
Measurable counterfactual local explanations for any classifier. _arXiv preprint arXiv:1908.03020_ (2019).
# Kondo effect and subatomic structures of single U atoms on graphene/6H-SiC(0001)

W. Feng Science and Technology on Surface Physics and Chemistry Laboratory, Mianyang 621908, China Institute of Materials, China Academy of Engineering Physics, Mianyang 621908, China P. Yang Science and Technology on Surface Physics and Chemistry Laboratory, Mianyang 621908, China School of Physics, Nankai University, Tianjin 300071, China B. K. Yuan Key Laboratory of Artificial Structures and Quantum Control, School of Physics and Astronomy, Shanghai Jiaotong University, Shanghai 200240, China Z. P. Hu School of Physics, Nankai University, Tianjin 300071, China X. G. Zhu S. Y. Tan X. C. Lai Science and Technology on Surface Physics and Chemistry Laboratory, Mianyang 621908, China Q. Liu<EMAIL_ADDRESS>Science and Technology on Surface Physics and Chemistry Laboratory, Mianyang 621908, China Institute of Materials, China Academy of Engineering Physics, Mianyang 621908, China Q. Y. Chen<EMAIL_ADDRESS>Science and Technology on Surface Physics and Chemistry Laboratory, Mianyang 621908, China Institute of Materials, China Academy of Engineering Physics, Mianyang 621908, China

###### Abstract

The Kondo effect typically arises from the spin-flip scattering between the localized magnetic moment of the impurity and the delocalized electrons in the metallic host, which leads to a variety of intriguing phenomena. Here, by using scanning tunneling microscopy/spectroscopy (STM/STS), we present the Kondo effect and subatomic features of a single U adatom on graphene/6H-SiC(0001). A dip spectral feature can be observed around the Fermi energy, which is termed the “fingerprint” of the Kondo resonance in STS; in addition, two subatomic features with different symmetries, a three-lobe structure and a doughnut-like structure, can be observed in the dI/dV maps. The Kondo resonance is only detectable within 5 Å of the lateral distance from the U atom center, which is much smaller than the distances observed for Co atoms on different surfaces, indicating that the 5$f$ states are more localized than 3$d$ orbitals. By comparing with density functional theory calculations, we find that the two subatomic features displaying different symmetries originate from the selective hybridization between the U 6$d$, 5$f$ orbitals and the $p_{z}$ orbitals of two inequivalent C atoms of the multilayer graphene.

How localized $d$ or $f$ electrons interact with the delocalized conduction electrons is a central question in condensed matter physics, which leads to a variety of intriguing phenomena such as the anomalous behavior in the resistivity, specific heat, and magnetic susceptibility of dilute magnetic alloys Coleman.10; Varma.76; Stewart.84. One of the typical examples is the Kondo effect, which arises from spin-flip scattering between a single magnetic atom and the electrons of a metal host. At sufficiently low temperature, the spin of the surrounding conduction electrons interacts with the spin of the magnetic impurity and causes a correlated screening cloud, so that no net moment remains at the Kondo impurity site. Screening the spin of a magnetic atom via the coupling between the spin and electronic degrees of freedom results in a strongly resonant peak in the density of states around the Fermi energy ($E_{F}$), the Kondo resonance.
The spectral features of the Kondo resonance in $f$-electron based heavy-fermion compounds are often characterized by the large $f$ spectral weight around $E_{F}$ seen in angle-resolved photoemission spectroscopy (ARPES) Qiuyun.Co; Qiuyun.Rh; Qiuyun.Ir. Scanning tunneling spectroscopy (STS) studies of single magnetic adatoms on the surfaces of nonmagnetic metals observe sharp spectral features around $E_{F}$ Jiutao.98; Madhavan.98, which are normally regarded as the fingerprint of the Kondo effect and interpreted as the quantum interference between two electron-tunneling channels Gumbsch.10. Kondo resonances induced by individual atoms have been successfully observed by STM/STS in many 3$d$ magnetic impurity systems, such as Ti atoms on Ag(100) Nagaoka.02, Co atoms on Au(111) Madhavan.98, Cu(111) and Cu(100) Knorr.02; Manoharan.00, Ag(100) and Ag(111) Wahl.04; Schneider.02, Cu2N/Cu(100) Otte.08, and Ru(0001) Feng.16. However, the case of single 4$f$ rare-earth atoms is still under debate Jiutao.98; Ternes.09; Silly.04, whereas research on 5$f$-impurity systems has never been reported, although it is an essential part of a complete understanding of Kondo physics.

Another important issue is the observation of subatomic features of single atoms adsorbed on different surfaces, which is crucial for understanding the orbital characters and bonding behavior of the system. Although visualizing chemical bond structures in large organic molecules or clusters by atomic force microscopy (AFM) has been widely reported in recent years Oteyza.13; Zhang.13; Emmrich.15; Jamneala.01, there are only a few successful cases that directly observe the subatomic features of single atoms Giessibl.00; Giessibl.01; Herz.03; Emmrich.15; Lian.10. Substructures of the tip atom were first revealed by AFM Giessibl.00. Later on, similar observations were reported for Si adatoms imaging Si tips Giessibl.01 and Co6Fe3Sm tips Herz.03. Observation of the subatomic structures of the sample atoms instead of the tip atoms was only realized in the following systems: Cu and Fe adatoms on a Cu surface Emmrich.15 by AFM, Pb adatoms on Pb(111) Lian.10, and Ni adatoms on graphene Gyamfi.12 by STM. Up to now, distinct observation of the subatomic features of individual surface adatoms remains very challenging, especially for STM.

In the present study, we report the Kondo effect and subatomic features of single U atoms on graphene/6H-SiC(0001) by STM/STS. A dip spectral feature can be observed around $E_{F}$, which is the fingerprint of the Kondo resonance. The Kondo effect is localized within a radius of 5 Å around the U atom center. Meanwhile, subatomic features with a three-lobe structure and a doughnut-like shape are distinctly observed in the dI/dV maps. By comparing with DFT calculations, we find that these subatomic features are due to the selective hybridization between the U atoms and the two inequivalent C atoms of the substrate. Our results present the first distinct observation of the Kondo effect, together with the subatomic features, of isolated 5$f$-impurity magnetic atoms adsorbed on nonmagnetic surfaces.

The substrate preparations were performed in an ultrahigh vacuum (UHV) chamber with a base pressure better than $8\times 10^{-11}$ mbar. Epitaxial graphene films were grown on 6H-SiC(0001) following a standard procedure Huang.08. The depositions of U and all the measurements were conducted in another UHV chamber with a base pressure below $1.5\times 10^{-11}$ mbar.
A small ingot of 99.9% purity U metal was held in a molybdenum crucible and degassed by prolonged heating with an e-beam evaporator under UHV conditions to reduce impurities. Then a small amount of U was deposited onto the graphene/6H-SiC(0001) substrate at 7 K. All the STM and AFM measurements were performed using a commercial qPlus-equipped STM/AFM at 4.2 K. Clean tungsten tips were used after e-beam heating and treatment on a clean Cu(111) substrate. The dI/dV spectra were collected through a standard lock-in technique by applying a 4 mV modulation with a frequency of 731 Hz to the sample bias. AFM images were recorded by detecting the frequency shift of the qPlus resonator in non-contact mode with a tungsten tip.

DFT calculations were carried out using the Vienna Ab initio Simulation Package (VASP) M01. Within the projector augmented wave (PAW) M02 framework, the plane-wave cutoff energy was set to 400 eV. The exchange-correlation functional was treated within the PBE M03 version of the generalized gradient approximation (GGA). Our periodic slab model is a 6$\times$6$\times$1 bilayer graphene supercell. To eliminate interactions between neighbouring slabs, we set the length of the $z$-direction to 15 $\AA$. The optB86b-vdW M04 version of the van der Waals interaction was included in our calculation, which shortens the distance between the two graphene layers. The Brillouin zone was sampled by a $\Gamma$-centered 2$\times$2$\times$1 $k$-point mesh, and the total energy was converged to less than 1$\times 10^{-5}$ eV. All the structures were fully relaxed without symmetry constraints until the residual atomic force on each atom was smaller than 0.02 eV/$\AA$. The calculated equilibrium lattice constant of graphene is 2.47 $\AA$ and the distance between two adjacent layers is 3.31 $\AA$, close to the experimental values. In order to accurately calculate the electronic structures, spin-orbit coupling (SOC) was also considered in this work. We adopted the GGA+$U$ method to deal with the strong Coulomb interaction between the 5$f$ electrons in U atoms. The value of the Hubbard $U$ was determined to be 3.27 eV by the linear response approach Cococcioni.05; Qiu.20.

Figure 1: STM topography and AFM image of individual U atoms adsorbed on graphene/6H-SiC(0001). (a) Typical STM constant-current image of individual U atoms ($V_{b}$ = -0.6 V, $I$ = 20 pA). (b) Atomically resolved STM image of the multilayer graphene surface with a single U adatom on it. A schematic of the lattice structure of the graphene substrate is superimposed on the image of the U atom. Red and black dots represent two different sites belonging respectively to the two inequivalent triangular sublattices of multilayer graphene. The cross and the black dashed circle indicate the center and circumference of the U atom, respectively. (c) Zoomed-in STM topographic image of a single U atom ($V_{b}$ = -0.2 V, $I$ = 10 pA). (d) Height profile measured across the red solid line in panel (c). (e) Non-contact mode AFM frequency shift image of a single U adatom on graphene/6H-SiC(0001) ($V_{b}$ = -0.2 V, $I$ = 10 pA, tip height $\Delta z$ = -0.06 nm). (f) Frequency shift $\Delta f$ as a function of distance measured along the blue solid line in panel (e). Figure 2: Kondo effect of single U adatoms on graphene/6H-SiC(0001). (a) Topographic image of a single U adatom on graphene/6H-SiC(0001). The black dots denote the positions where the dI/dV spectra in panel (b) are taken. (b) dI/dV spectra taken at different positions marked by the black dots in panel (a).
The green solid curve is the Fano fit to the asymmetric dip feature around $E_{F}$. The red dashed line indicates the energy position where the dI/dV map in panel (c) was taken. (c) dI/dV map of the single U atom taken at $V_{b}$ = 10 mV and $I$ = 20 pA.

Figure 1(a) shows a typical topographic STM image of the epitaxial graphene film on 6H-SiC(0001) after the deposition of a small amount of U. Based on close inspection of the substrate surface corrugation in the large-scale STM image (see Fig. S1(a) of the supplemental material, SM) and the lattice structure in the atomically resolved STM image (Fig. 1(b)), we find that the epitaxial graphene thin film has a thickness of at least three layers. As shown in Figs. 1(c) and 1(d), isolated spherical protrusions with a uniform apparent height of 0.45 nm and a diameter of 1.5 nm emerged on the graphene/6H-SiC(0001) surface after U deposition. According to the literature, a free U atom has an empirical diameter of about 0.31 nm uranium, which is comparable with the height of the protrusion but much smaller than its diameter. To verify that the observed spherical protrusion is a single U atom, instead of a cluster of several U atoms, we imaged the protrusion by AFM, which has been proved to be an effective technique for distinguishing clusters from single atoms Emmrich.15. Previous studies indicate that the adsorption geometry and inter-atomic chemical bonds induce specific structures in the AFM image of a cluster, whereas a cluster usually manifests as a simple single Gaussian peak in STM images. As shown in Figs. 1(e) and 1(f), the spherical protrusion observed in the STM image appears as an intact circular depression without any inner detailed structure in the AFM image, with a diameter close to 0.5 nm. This further supports that the observed spherical protrusion in the STM images contains only a single U atom Emmrich.15. Through careful analysis of the adsorption position of the U atom and the atomic lattice structure of the substrate, see Fig. 1(b), we find that the U atom adsorbs at the hollow sites of the graphene/6H-SiC(0001) surface.

Tunneling spectra taken at different sites (marked by the black dots in Fig. 2(a)) above a single U atom adsorbed on graphene/6H-SiC(0001) are presented in Fig. 2(b). An asymmetric dip feature located around $E_{F}$ is repeatedly observed in the dI/dV spectra collected around the center of the single U atom. Fig. 2(c) displays the dI/dV map of a single U atom taken at $V_{b}$ = 10 mV. Details on the choice of this sample bias can be found in the SM. As shown in Fig. 2(b), this dip feature decreases in amplitude as the STM tip is moved outward from the U atom center; it is gradually suppressed and completely disappears at a distance of about 5 Å from the U atom center, as enclosed by the black solid circles in Fig. 2(c). The observed dip spectral feature is similar to the Kondo resonance observed by STS on Ce/Ag(111) Jiutao.98, Co/Au(111) Madhavan.98 and Co/Cu(111) Knorr.02, and can be reasonably explained as the spectroscopic manifestation of a Kondo resonance state, which is well described by the so-called Fano equation Madhavan.01: $\rho(\varepsilon)=\frac{\left(q+\varepsilon^{\prime}\right)^{2}}{1+\varepsilon^{\prime 2}}$. Here $\varepsilon^{\prime}=\frac{\varepsilon-\varepsilon_{0}}{\Gamma/2}$ is the normalized energy, $\varepsilon_{0}$ is the energy position of the Kondo resonance relative to $E_{F}$, $\Gamma$ is the full width at half maximum of the resonance curve, and $q$ is the line-shape parameter Madhavan.01.
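To make the fitting procedure concrete, the following minimal Python sketch fits the Fano line shape to a dI/dV spectrum and converts the fitted width into a Kondo temperature. The synthetic data, the linear background, and the conversion $\Gamma\approx 2k_{B}T_{K}$ are illustrative assumptions; this is not the authors' actual analysis code.

```python
# Minimal sketch of a least-squares Fano fit to a dI/dV spectrum.
# Assumptions (not from the paper): synthetic noisy data, a linear
# background a + b*rho(eps'), and the conversion Gamma ~ 2*k_B*T_K.
import numpy as np
from scipy.optimize import curve_fit

def fano(eps, q, eps0, gamma, a, b):
    """Fano line shape rho = (q + eps')^2 / (1 + eps'^2) on a background."""
    x = (eps - eps0) / (gamma / 2.0)  # normalized energy eps'
    return a + b * (q + x) ** 2 / (1.0 + x ** 2)

# Hypothetical spectrum around E_F (sample bias in meV).
bias = np.linspace(-60.0, 60.0, 241)
didv = fano(bias, q=0.08, eps0=0.0, gamma=19.9, a=1.0, b=0.2)
didv += 0.01 * np.random.default_rng(0).normal(size=bias.size)

popt, _ = curve_fit(fano, bias, didv, p0=[0.1, 0.0, 25.0, 1.0, 0.1])
q_fit, eps0_fit, gamma_fit = popt[:3]

# Kondo temperature estimate from the fitted width, taking Gamma ~ 2*k_B*T_K
# (this convention reproduces the values quoted in the text).
k_B = 0.0861733  # Boltzmann constant in meV/K
T_K = gamma_fit / (2.0 * k_B)
print(f"q = {q_fit:.2f}, Gamma = {gamma_fit:.1f} meV, T_K ~ {T_K:.0f} K")
```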
The Fano fit yields $\Gamma$ = 19.9 meV and $q$ = 0.08, giving a Kondo temperature $T_{K}$ of about 114 K. After subtracting the background of graphene, $\Gamma$ reduces to 13 meV, which gives a smaller $T_{K}$ of 75 K; these values can be reproduced on different samples, see Figs. S2 and S3. The size of the Kondo cloud of a single U atom on graphene/6H-SiC(0001) is much smaller than that of Co atoms on Au(111), Cu(111) and Cu(100) Madhavan.01; Nagaoka.02, where the latter reveal a typical size of around 10 Å. The electronic states related to the Kondo effect are thus more localized around the U atom center than in 3$d$-electron systems. This is consistent with the more localized nature of the 5$f$ orbitals. Fig. S4 shows the spin-polarized partial density of states (PDOS) of the system, from which it is clear that the local magnetic moment of 5.0 $\mu_{B}$ is mainly from the $f$ orbitals of the U atoms. Although the Kondo effect induced by single 3$d$ transition-metal atoms has been extensively studied before Madhavan.01; Nagaoka.02; Madhavan.98, the case of single 4$f$ rare-earth atoms is still under debate Jiutao.98; Ternes.09; Silly.04, and similar studies of single actinide atoms with 5$f$ electrons have not been reported before; such a study is first presented here.

Figure 3: Subatomic features of a single U adatom on graphene/6H-SiC(0001). (a) dI/dV spectra of the adsorbed U atom and the substrate. The gray arrows denote the positions where the dI/dV maps in panels (b) and (c) are taken. The red arrows denote the positions where the dI/dV maps in panels (d-f) are taken. (b-f) dI/dV maps taken at $I$ = 100 pA, $V_{b}$ = -100 mV (b); $V_{b}$ = 160 mV (c); $V_{b}$ = -30 mV (d); $V_{b}$ = -40 mV (e); $V_{b}$ = -20 mV (f). Figure 4: PDOS and the isosurfaces of charge density around the U atoms. (a) PDOS of U and C atoms calculated by GGA+$U$+SOC; red and green atoms represent the two inequivalent C sites. $E_{F}$ is set to zero. (b) and (c) Isosurfaces of the charge density around the U atoms in the energy ranges corresponding to the peaks denoted by the arrows in panel (a), with their energy ranges marked below; the iso-values are $5.0\times 10^{-5}$ and $1.0\times 10^{-5}$ $e\AA^{-3}$ for (b) and (c), respectively.

The line shape of the Fano resonance and the coupling parameter $q$ have been extensively discussed in previous reports Fano.61; Madhavan.01; Madhavan.98; Jiutao.98. If $q$ is large ($|q|\gg 1$), the spectrum is Lorentzian and the STM tip is strongly coupled to the atomic orbitals, while a low value of $q$ ($|q|<1$) implies that the tip is more strongly coupled to the conduction electrons than to the atomic orbitals Madhavan.01. In the present experiment, the obtained small $q$ value of 0.08 implies that the matrix element for the localized 5$f$ states should be very small, and the dI/dV spectra are more sensitive to the conduction electrons. The Kondo resonance in STS reveals itself by reducing the differential conductance for tunneling transmission near $E_{F}$ and appears as a Kondo antiresonance in the tunneling spectra.

In addition to the dip spectral feature around $E_{F}$, we also observed three hump features in the dI/dV spectra in Fig. 3(a), located at -100, -30 and 160 mV, respectively. To further investigate the origin of these hump features, dI/dV maps were taken at these energies. In the dI/dV maps taken at -100 and 160 mV in Figs. 3(b) and 3(c), a round and a doughnut-like feature, respectively, can be observed.
Figures 3(d-f) present the dI/dV maps taken at three different biases near the hump feature around -30 mV; they all display a three-lobe structure with threefold symmetry. This three-lobe structure is best resolved at a sample bias of -30 mV, and the increased intensity at the center of the three lobes in the dI/dV pattern at -20 mV is due to the influence of the Kondo resonance near $E_{F}$. The three lobes align in the same direction as the triangular lattice manifested in the STM images of the multilayer graphene surface, and the dI/dV patterns exhibit the same symmetry as the substrate. Therefore, we speculate that these special dI/dV patterns reflect the hybridization between the U atoms and the substrate.

To precisely reveal the hybridization mechanism between the U atoms and the substrate, we use the DFT+$U$ method to calculate the spatial distributions of various atomic orbitals of a single U adatom on graphene. Figure 4 presents the PDOS and the isosurfaces of the charge density around the U atom in different energy ranges. From Fig. 4(a), the round feature observed in the dI/dV map at -100 mV in Fig. 3(b) is mainly from the contribution of the localized $f$ states, as indicated by the orange arrow A in Fig. 4(a), which do not hybridize with the C atoms of the substrate. The three-lobe structure observed in Figs. 3(d-f) is due to the hybridization between the $p_{z}$ orbitals from one of the two inequivalent C atoms (C1) and the 6$d$ and 5$f$ orbitals of the U atom, as shown in Figs. 4(a) and 4(b). The doughnut-like feature at 160 mV in the dI/dV map in Fig. 3(c) originates from the hybridization between the U 6$d$ and 5$f$ orbitals and the $p_{z}$ orbitals of the C atoms from both sites. In short, both the three-lobe and doughnut-like features originate from the hybridization between the U adatom and the C atoms of the substrate. However, the doughnut-like structure is due to hybridization between the U adatom and the C atoms from both sites, whereas only one of the two inequivalent C sites hybridizes with the U adatom in the three-lobe structure, which lowers the symmetry of the three-lobe spectral feature.

Although there are a few reported studies on the observation of subatomic features of a single atom by AFM or STM Giessibl.00; Giessibl.01; Herz.03; Emmrich.15; Lian.10; Gyamfi.12, most of them concentrated on the subatomic structures induced by special tip configurations Giessibl.00; Giessibl.01; Herz.03; Emmrich.15, while the rest only obtained relatively blurry images or spectral data Lian.10; Gyamfi.12. Our STM observation of clear subatomic orbitals inside a single U adatom is unprecedented. Such a remarkable result does not depend on a specially decorated STM tip. Instead, it originates from the large apparent diameter of the U adatom and the selective hybridization between the U atom and the C atoms from the two inequivalent sites of the hexagonal lattice of multilayer graphene.

To summarize, our results present the first observation of the Kondo effect, together with the subatomic features, of an isolated 5$f$-impurity U atom on graphene/6H-SiC(0001). The Kondo effect manifests itself by a dip spectral feature near $E_{F}$, which can be fitted by the Fano equation. The two different subatomic features, located at different energies, originate from the selective hybridization of the U 5$f$ and 6$d$ orbitals with the $p_{z}$ orbitals of two inequivalent C atoms of the substrate. 5$f$-electron systems occupy a special place in the Kondo family, but are less studied.
Our results provide new insights into the understanding of Kondo physics and, by extension, into the orbital character and bonding behavior of 5$f$-electron systems.

###### Acknowledgements.

This work is supported by the National Science Foundation of China (Grants No. 11304291, 11974319, 11874330), the National Key Research and Development Program of China (No. 2017YFA0303104), and the Science Challenge Project (Grants No. TZ2016004). W. Feng and P. Yang contributed equally.

## References

* (1) P. Coleman _et al._, J. Low. Temp. Phys. 161, 182 (2010).
* (2) C. M. Varma _et al._, Rev. Mod. Phys. 48, 219 (1976).
* (3) G. R. Stewart _et al._, Rev. Mod. Phys. 56, 755 (1984).
* (4) Q. Y. Chen _et al._, Phys. Rev. B 96, 045107 (2017).
* (5) Q. Y. Chen _et al._, Phys. Rev. Lett. 120, 066403 (2018).
* (6) Q. Y. Chen _et al._, Phys. Rev. B 97, 045149 (2018).
* (7) Jiutao Li _et al._, Phys. Rev. Lett. 80, 2893 (1998).
* (8) V. Madhavan _et al._, Science 280, 567-569 (1998).
* (9) A. Gumbsch _et al._, Phys. Rev. B 81, 165420 (2010).
* (10) K. Nagaoka _et al._, Phys. Rev. Lett. 88, 077205 (2002).
* (11) H. C. Manoharan _et al._, Nature 403, 512-515 (2000).
* (12) N. Knorr _et al._, Phys. Rev. Lett. 88, 096804 (2002).
* (13) P. Wahl _et al._, Phys. Rev. Lett. 93, 176603 (2004).
* (14) M. A. Schneider _et al._, Phys. Rev. B 65, 121406(R) (2002).
* (15) A. F. Otte _et al._, Nat. Phys. 4, 847-850 (2008).
* (16) W. Feng _et al._, New J. Phys. 18, 123011 (2016).
* (17) M. Ternes _et al._, J. Phys.: Condens. Matter 21, 053001 (2009).
* (18) F. Silly _et al._, Phys. Rev. Lett. 92, 016101 (2004).
* (19) D. G. de Oteyza _et al._, Science 340, 1434 (2013).
* (20) J. Zhang _et al._, Science 342, 611 (2013).
* (21) T. Jamneala _et al._, Phys. Rev. Lett. 87, 256804 (2001).
* (22) M. Emmrich _et al._, Science 348, 308-311 (2015).
* (23) F. J. Giessibl _et al._, Science 289, 422-425 (2000).
* (24) F. J. Giessibl _et al._, Ann. Phys. 10, 887-910 (2001).
* (25) M. Herz _et al._, Phys. Rev. B 68, 045301 (2003).
* (26) J. C. Lian _et al._, Phys. Rev. B 81, 195411 (2010).
* (27) M. Gyamfi _et al._, Phys. Rev. B 85, 161406(R) (2012).
* (28) H. Huang _et al._, ACS Nano 2, 2513-2518 (2008).
* (29) G. Kresse _et al._, Comput. Mater. Sci. 6, 15 (1996).
* (30) P. E. Blöchl _et al._, Phys. Rev. B 50, 17953 (1994).
* (31) J. P. Perdew _et al._, Phys. Rev. Lett. 77, 3865 (1996).
* (32) J. Klimes _et al._, Phys. Rev. B 83, 195131 (2011).
* (33) M. Cococcioni _et al._, Phys. Rev. B 71, 035105 (2005).
* (34) R. Qiu _et al._, Comput. Mater. Sci. 171, 109270 (2020).
* (35) https://en.wikipedia.org/wiki/Uranium
* (36) V. Madhavan _et al._, Phys. Rev. B 64, 165412 (2001).
* (37) U. Fano _et al._, Phys. Rev. 124, 1866 (1961).
# Well-posedness of stochastic continuity equations on Riemannian manifolds

Luca Galimberti Department of Mathematical Sciences NTNU Norwegian University of Science and Technology N–7491 Trondheim, Norway<EMAIL_ADDRESS>and Kenneth H. Karlsen Department of Mathematics University of Oslo P.O. Box 1053, Blindern N–0316 Oslo, Norway<EMAIL_ADDRESS>

###### Abstract.

We analyze continuity equations with Stratonovich stochasticity, $\partial_{t}\rho+\operatorname{div}_{h}\left[\rho\circ\left(u(t,x)+\sum_{i=1}^{N}a_{i}(x)\dot{W}_{i}(t)\right)\right]=0$, defined on a smooth closed Riemannian manifold $M$ with metric $h$. The velocity field $u$ is perturbed by Gaussian noise terms $\dot{W}_{1}(t),\ldots,\dot{W}_{N}(t)$ driven by smooth spatially dependent vector fields $a_{1}(x),\ldots,a_{N}(x)$ on $M$. The velocity $u$ belongs to $L^{1}_{t}W^{1,2}_{x}$ with $\operatorname{div}_{h}u$ bounded in $L^{p}_{t,x}$ for $p>d+2$, where $d$ is the dimension of $M$ (we do not assume $\operatorname{div}_{h}u\in L^{\infty}_{t,x}$). We show that by carefully choosing the noise vector fields $a_{i}$ (and the number $N$ of them), the initial-value problem is well-posed in the class of weak $L^{2}$ solutions, although the problem can be ill-posed in the deterministic case because of concentration effects. The proof of this “regularization by noise” result reveals a link between the nonlinear structure of the underlying domain $M$ and the noise, a link that is somewhat hidden in the Euclidean case ($a_{i}$ constant) [6, 19, 35]. The proof is based on an a priori estimate in $L^{2}$, which is obtained by a duality method, and a weak compactness argument.

###### Key words and phrases: Stochastic continuity equation, Riemannian manifold, hyperbolic equation, non-smooth velocity field, weak solution, existence, uniqueness

###### 2020 Mathematics Subject Classification: Primary: 60H15, 35L02; Secondary: 58J45, 35D30

This work was supported by the Research Council of Norway through the project Stochastic Conservation Laws (250674/F20).

###### Contents

1. 1 Introduction and main results
2. 2 Background material
   1. 2.1 Geometric framework
   2. 2.2 Stochastic framework
3. 3 Smooth data and strong solutions
   1. 3.1 Strong solution
   2. 3.2 Elementary $L^{p}$ bound
4. 4 Time-dependent test functions
5. 5 Irregular test functions
6. 6 On the ellipticity of $\sum_{i}a_{i}(a_{i})$, proof of Lemma 1.2
7. 7 Test function for duality method
8. 8 $L^{2}$ estimate and uniqueness for weak solutions
9. 9 Proof of main result, Theorem 1.3
   1. 9.1 Smoothing of velocity vector field $u$
   2. 9.2 Weak compactness of approximate solutions
   3. 9.3 General initial datum, $\rho_{0}\in L^{2}(M)$
   4. 9.4 Proof of Lemma 9.2
10. 10 Appendix
    1. 10.1 Heat kernel on functions
    2. 10.2 Heat kernel on forms
    3. 10.3 Proof of Proposition 2.1
    4. 10.4 An auxiliary result

## 1\. Introduction and main results

One of the basic equations in fluid dynamics is the continuity equation $\partial_{t}\rho+\operatorname{div}\left(u\rho\right)=0\quad\text{in $[0,T]\times\mathbb{R}^{d}$},$ where $u=u(t,x)$ is the velocity field describing the flow and $\rho$ is the fluid density. It encodes the familiar law of conservation of mass. Mathematically speaking, if the velocity field $u$ is Lipschitz continuous, then the continuity equation (and the related transport equation) can be solved explicitly by means of the method of characteristics.
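For smooth $u$, the characteristics-based solution formula reads as follows (a standard computation, recorded here only as a brief sketch): the characteristics $X(t;x)$ solve $\dot{X}(t;x)=u\bigl(t,X(t;x)\bigr)$ with $X(0;x)=x$, and along them $\frac{d}{dt}\,\rho\bigl(t,X(t;x)\bigr)=-\rho\bigl(t,X(t;x)\bigr)\operatorname{div}u\bigl(t,X(t;x)\bigr)$, whence $\rho\bigl(t,X(t;x)\bigr)=\rho_{0}(x)\exp\left(-\int_{0}^{t}\operatorname{div}u\bigl(s,X(s;x)\bigr)\,ds\right).$ This formula makes explicit how $\operatorname{div}u$ controls the concentration of mass, a point that recurs throughout the discussion below.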
Unfortunately, in realistic applications, the velocity is much rougher than Lipschitz; typically $u$ belongs to some spatial Sobolev space, and one must seek well-posedness of the continuity equation in suitable classes of weak solutions. Well-posedness of weak solutions follows from the theory of renormalized solutions [13, 1, 33, 34], assuming that $u\in L^{1}_{t}W^{1,1}_{x}$ (or even $L^{1}_{t}BV_{x}$) with $\operatorname{div}u\in L^{\infty}_{t,x}$. A key step in this theory is to show that a weak solution $\rho$ is also a renormalized solution, that is, $S(\rho)$ is a weak solution for all “reasonable” nonlinear functions $S:\mathbb{R}\to\mathbb{R}$. It is the validity of this chain rule property that asks for $W^{1,1}_{x}$ (or $BV_{x}$) regularity of the velocity $u$. The assumption that $\operatorname{div}u$ is bounded cannot be relaxed (unbounded divergence leads to concentration effects).

Recently there has been significant interest in studying fluid dynamics equations supplemented with stochastic terms. This (renewed) interest is partly motivated by the problem of turbulence. Although the basic (Navier-Stokes) equations are deterministic, some of their solutions exhibit wild random-like behavior, with the basic problem of existence and uniqueness of smooth solutions being completely open. There is a vague hope that “stochastic perturbations” can render some of the models “well-posed” or “better behaved”, thereby providing some insight into the onset of turbulence. We refer to [18] for a general discussion of “regularization by noise” phenomena, which has been a recurring theme in many recent works on stochastic transport and continuity equations of the form

(1.1) $\partial_{t}\rho+\nabla\rho\circ\left(u+a\dot{W}\right)=0,\quad\partial_{t}\rho+\operatorname{div}\left[\rho\circ\left(u+a\dot{W}\right)\right]=0,$

posed on $\mathbb{R}^{d}$ with a given initial condition $\rho\big{|}_{t=0}=\rho_{0}$. Here $W=W(t)$ is a Wiener process with noise coefficient $a$ and the symbol $\circ$ refers to the Stratonovich stochastic differential. It is not our purpose here to review the (by now vast) literature on regularization by noise (i.e., improvements in regularity, existence, uniqueness, stability, etc., induced by noise). Instead we emphasize some of the papers that develop an analytical (PDE) approach [3, 6, 23, 24, 39], related to the one taken in the present paper. There is another flexible approach that studies the stochastic flow associated with the SPDE (1.1), relying on regularizing properties of the corresponding SDE to supply a flow that is more regular than its coefficient $u$; see e.g. [19] for the stochastic transport equation and [35, 36] for the stochastic continuity equation. A good part of the recent literature is motivated by the article [19] of Flandoli, Gubinelli, and Priola, which in turn built upon an earlier work by Davie [11]. One of the main results in [19] is that if $u$ is $x$-Hölder continuous, then the initial-value problem for the transport equation in (1.1) is well-posed under the weak assumption that $\operatorname{div}u\in L^{2}$. Most of the works just cited assume that the noise coefficient $a$ is constant. Well-posedness results for continuity equations with $x$-dependent noise coefficients can be found in [39] (see also [40] and [29]). Subtle regularization by noise results for some nonlinear SPDEs can be found in [23, 24].
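To see the regularizing mechanism at the level of (1.1) in the simplest situation, take a constant noise vector $a\in\mathbb{R}^{d}$ (a sketch of a standard computation, not the general $x$-dependent setting studied in this paper): converting the Stratonovich continuity equation to Itô form produces a dissipative second order term, $d\rho+\operatorname{div}\left(u\rho\right)dt+\left(a\cdot\nabla\rho\right)dW(t)=\frac{1}{2}\left(a\cdot\nabla\right)^{2}\rho\,dt,$ so the noise acts, on average, like a (degenerate) parabolic regularization. Lemma 1.2 below provides the manifold analogue of this mechanism, with the Laplace-Beltrami operator playing the role of $\frac{1}{2}(a\cdot\nabla)^{2}$.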
Let us also recall that first order stochastic partial differential equations (SPDEs) with “Lipschitz coefficients” have been deeply analyzed in Kunita’s works [9, 31].

In recent years there has been a growing interest in analyzing the basic equations of fluid dynamics on Riemannian manifolds instead of flat domains, with the nonlinearity (curvature) of the domains altering the underlying dynamics in nontrivial ways (see e.g. [2, 7, 43]). A Riemannian manifold provides a more general framework in which to study fluid dynamics than a “physical surface”, with the relevant quantities becoming independent of coordinates and a distance function. Partial differential equations (PDEs) on manifolds arise in many applications, including geophysical flows (atmospheric models of fluids constrained to flow on the surface of a planet) and general relativity, in which the Einstein-Euler equations are posed on a manifold with the metric being one of the unknowns. Transport equations on manifolds have been analyzed in [14] and [17], where the DiPerna-Lions theory of weak solutions is extended to (some classes of) Riemannian manifolds. The mathematical literature on SPDEs on manifolds is at the moment scanty; see [15, 21, 27, 28] for equations in which the noise enters the equation as an Itô source term. In [22] we established the renormalization property for weak solutions of stochastic continuity equations on manifolds, under the assumption that the irregular velocity field $u$ belongs to $L^{1}_{t}W^{1,2}_{x}$. Corollaries of this result included $L^{2}$ estimates and uniqueness (provided $\operatorname{div}_{h}u\in L^{\infty}$). The purpose of the present paper is to establish the existence and uniqueness of weak $L^{2}$ solutions without the assumption $\operatorname{div}_{h}u\in L^{\infty}$.

To be more precise, we are given a $d$-dimensional ($d\geq 1$) smooth, closed, and compact manifold $M$, endowed with a (smooth) Riemannian metric $h$. We are interested in the initial-value problem for the stochastic continuity equation

(1.2) $d\rho+\operatorname{div}_{h}(\rho\,u)dt+\sum_{i=1}^{N}\operatorname{div}_{h}(\rho\,a_{i})\circ dW^{i}(t)=0\,\;\;\;\text{in }\,[0,T]\times M,$

where $T>0$ denotes a fixed final time, $u:[0,T]\times M\to TM$ is a given time-dependent irregular vector field on $M$, $a_{1},\ldots,a_{N}:M\to TM$ are suitable smooth vector fields on $M$ (to be fixed later), $W^{1},\cdots,W^{N}$ are independent real-valued Brownian motions, and the symbol $\circ$ means that the equation is understood in the Stratonovich sense. We recall that for a vector field $X$ (locally of the form $X^{j}\partial_{j}$), the divergence of $X$ is given by $\operatorname{div}_{h}X=\partial_{j}X^{j}+\Gamma^{j}_{ij}X^{i}$, where $\Gamma_{ij}^{k}$ are the Christoffel symbols associated with the Levi-Civita connection $\nabla$ of the metric $h$ (Einstein’s summation convention is used throughout the paper).

Roughly speaking, the proof of well-posedness for (1.2) consists of two main steps. In the first step we construct an appropriate noise term that has the potential to suppress concentration effects. Indeed, to remove the assumption $\operatorname{div}_{h}u\in L^{\infty}_{t,x}$, we are led to consider a specific noise term linked to the geometry of the underlying curved domain $M$, revealing a structural interplay between the noise and the nonlinear domain that improves the well-posedness of weak solutions (more on this below).
Related results on Euclidean domains (with $x$-independent noise coefficients) can be found in [3] and [6] (see also [35, 36, 39]). In the second step, with the help of the noise term, we establish a crucial $L^{\infty}_{t}L^{2}_{\omega,x}$ estimate for weak solutions that does not depend on $\left\|\operatorname{div}_{h}u\right\|_{L^{\infty}}$. To this end, we make use of a duality method, inspired by Beck, Flandoli, Gubinelli, and Maurelli [6], Gess and Maurelli [23, 24], and Gess and Smith [24] (more on this below).

We use the following concept of weak solution for (1.2) (for unexplained notation and background material, see Section 2).

###### Definition 1.1 (weak $L^{2}$ solution, Stratonovich formulation).

Given $\rho_{0}\in L^{2}(M)$, a weak $L^{2}$ solution of (1.2) with initial datum $\rho|_{t=0}=\rho_{0}$ is a function $\rho$ that belongs to $L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)$ such that $\forall\psi\in C^{\infty}(M)$ the stochastic process $(\omega,t)\mapsto\int_{M}\rho(t)\psi\,dV_{h}$ has a continuous modification which is an $\left\\{\mathcal{F}_{t}\right\\}_{t\in[0,T]}$-semimartingale and for any $t\in[0,T]$ the following equation holds $\mathbb{P}$-a.s.:

$\begin{split}\int_{M}\rho(t)\psi\,dV_{h}&=\int_{M}\rho_{0}\psi\,dV_{h}+\int_{0}^{t}\int_{M}\rho(s)\,u(\psi)\,dV_{h}\,ds\\\ &\qquad+\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\rho(s)\,a_{i}(\psi)\,dV_{h}\circ dW^{i}(s).\end{split}$

We have an equivalent concept of solution using the Itô stochastic integral and the corresponding SPDE

(1.3) $d\rho+\operatorname{div}_{h}(\rho\,u)\,dt+\sum_{i=1}^{N}\operatorname{div}_{h}(\rho\,a_{i})\,dW^{i}(t)-\frac{1}{2}\sum_{i=1}^{N}\Lambda_{i}(\rho)\,dt=0,$

where $\Lambda_{i}$ is a second order differential operator linked to the vector field $a_{i}$, defined by $\Lambda_{i}(\rho):=\operatorname{div}_{h}\bigl{(}\operatorname{div}_{h}(\rho a_{i})a_{i}\bigr{)}$, for $i=1,\ldots,N$. Recall that, for a smooth function $f:M\to\mathbb{R}$ and a vector field $X$, we have $X(f)=(X,\mbox{grad}_{h}\,f)_{h}$ (which locally becomes $X^{j}\partial_{j}f$). Moreover, $X\bigl{(}X(f)\bigr{)}=(\nabla^{2}f)(X,X)+(\nabla_{X}X)(f)$, where $\nabla^{2}f$ is the covariant Hessian of $f$ and $\nabla_{X}X$ is the covariant derivative of $X$ in the direction $X$. In the Itô SPDE (1.3) the operator $\Lambda_{i}(\cdot)$ is the formal adjoint of $a_{i}\bigl{(}a_{i}(\cdot)\bigr{)}$. According to [22], the next definition is equivalent to Definition 1.1.

###### Definition 1.2 (weak $L^{2}$ solution, Itô formulation).

Given $\rho_{0}\in L^{2}(M)$, a weak $L^{2}$ solution of (1.2) with initial datum $\rho|_{t=0}=\rho_{0}$ is a function $\rho$ that belongs to $L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)$ such that $\forall\psi\in C^{\infty}(M)$ the stochastic process $(\omega,t)\mapsto\int_{M}\rho(t)\psi\,dV_{h}$ has a continuous modification which is an $\left\\{\mathcal{F}_{t}\right\\}_{t\in[0,T]}$-adapted process and for any $t\in[0,T]$ the following equation holds $\mathbb{P}$-a.s.:

(1.4) $\begin{split}\int_{M}\rho(t)\psi\,dV_{h}&=\int_{M}\rho_{0}\psi\,dV_{h}+\int_{0}^{t}\int_{M}\rho(s)\,u(\psi)\,dV_{h}\,ds\\\ &\qquad+\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\rho(s)\,a_{i}(\psi)\,dV_{h}\,dW^{i}(s)\\\ &\qquad\qquad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\rho(s)\,a_{i}\bigl{(}a_{i}(\psi)\bigr{)}\,dV_{h}\,ds.\end{split}$

To guarantee that these definitions make sense, we need the vector field $u$ to fulfill some basic conditions.
First, we require spatial Sobolev regularity:

(1.5) $u\in L^{1}\left([0,T];\overrightarrow{W^{1,2}(M)}\right),$

see Section 2 for unexplained notation. This means that $u\in L^{1}\left([0,T];\overrightarrow{L^{2}(M)}\right)$, which is sufficient to ensure that the mapping $t\mapsto\int_{0}^{t}\int_{M}\rho(s)u(s)(\psi)\,dV_{h}\,ds$ is absolutely continuous, $\mathbb{P}$-a.s., for any $\rho\in L^{\infty}_{t}L^{2}_{\omega,x}$ and $\psi\in C^{\infty}(M)$, and hence it does not contribute to cross-variations against $W^{i}$. These cross-variations appear when passing from Stratonovich to Itô integrals in the SPDE (1.2). In addition, we will assume that

(1.6) $u\in L^{\infty}\left([0,T];\overrightarrow{L^{\infty}(M)}\right),$

and, more importantly, that the distributional divergence of $u$ satisfies

(1.7) $\operatorname{div}_{h}u\in L^{p}([0,T]\times M),\quad\text{for some $p>d+2$}.$

To derive a priori estimates, we need the following concept of renormalization (see [22] for details and comments).

###### Definition 1.3 (renormalization, Itô formulation).

Let $\rho$ be a weak $L^{2}$ solution of (1.2) with initial datum $\rho|_{t=0}=\rho_{0}\in L^{2}(M)$. We say that $\rho$ is renormalizable if, for any $F\in C^{2}(\mathbb{R})$ with $F,F^{\prime},F^{\prime\prime}$ bounded on $\mathbb{R}$, and for any $\psi\in C^{\infty}(M)$, the stochastic process $(\omega,t)\mapsto\int_{M}F(\rho(t))\psi\,dV_{h}$ has a continuous modification which is an $\left\\{\mathcal{F}_{t}\right\\}_{t\in[0,T]}$-adapted process, and, setting $G_{F}(\xi):=\xi F^{\prime}(\xi)-F(\xi)$, for $\xi\in\mathbb{R}$, the function $F(\rho)$ satisfies the SPDE

(1.8) $\begin{split}&dF(\rho)+\operatorname{div}_{h}\bigl{(}F(\rho)u\bigr{)}\,dt+G_{F}(\rho)\operatorname{div}_{h}u\,dt\\\ &\qquad\qquad+\sum_{i=1}^{N}\operatorname{div}_{h}\bigl{(}F(\rho)a_{i}\bigr{)}\,dW^{i}(t)+\sum_{i=1}^{N}G_{F}(\rho)\operatorname{div}_{h}a_{i}\,dW^{i}(t)\\\ &\qquad=\frac{1}{2}\sum_{i=1}^{N}\Lambda_{i}(F(\rho))\,dt-\frac{1}{2}\sum_{i=1}^{N}\Lambda_{i}(1)G_{F}(\rho)\,dt\\\ &\qquad\qquad+\frac{1}{2}\sum_{i=1}^{N}F^{\prime\prime}(\rho)\,\bigl{(}\rho\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,dt+\sum_{i=1}^{N}\operatorname{div}_{h}\bigl{(}G_{F}(\rho)\bar{a}_{i}\bigr{)}\,dt,\end{split}$

weakly (in $x$), $\mathbb{P}$-a.s., where the first order differential operator $\bar{a}_{i}$ is defined by $\bar{a}_{i}:=\left(\operatorname{div}_{h}a_{i}\right)a_{i}$ and $\Lambda_{i}(1)=\operatorname{div}_{h}\bar{a}_{i}$, for $i=1,\ldots,N$; that is, for all $\psi\in C^{\infty}(M)$ and for any $t\in[0,T]$, the following equation holds $\mathbb{P}$-a.s.:

(1.9) $\begin{split}&\int_{M}F(\rho(t))\psi\,dV_{h}=\int_{M}F(\rho_{0})\psi\,dV_{h}+\int_{0}^{t}\int_{M}F(\rho(s))\,u(\psi)\,dV_{h}\,ds\\\ &\quad+\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}(\psi)\,dV_{h}\,dW^{i}(s)+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}\bigl{(}a_{i}(\psi)\bigr{)}\,dV_{h}\,ds\\\ &\quad-\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}u\,\psi\,dV_{h}\,ds-\sum_{i=1}^{N}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}a_{i}\,\psi\,dV_{h}\,dW^{i}(s)\\\ &\quad-\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\Lambda_{i}(1)\,G_{F}(\rho(s))\,\psi\,dV_{h}\,ds\\\ &\quad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F^{\prime\prime}(\rho(s))\bigl{(}\rho(s)\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,\psi\,dV_{h}\,ds\\\ &\quad-\sum_{i=1}^{N}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\bar{a}_{i}(\psi)\,dV_{h}\,ds.\end{split}$

###### Theorem 1.1 (renormalization property [22]).
Assume (1.5) and consider a weak $L^{2}$ solution $\rho$ of (1.2) with initial datum $\rho_{0}\in L^{2}(M)$, according to Definition 1.2. Then $\rho$ is renormalizable in the sense of Definition 1.3.

To prove the $L^{2}$ estimate mentioned earlier, we actually need a version of the weak formulation (1.9) that uses time-dependent test functions. Moreover, we are required to insert into that weak formulation “non-smooth” test functions. These technical aspects of the theory are developed in Sections 4 and 5.

One can only expect the noise term to improve the well-posedness situation for (1.2) if the resulting second order differential operator $\sum_{i}a_{i}(a_{i})$ appearing in (1.4) is non-degenerate (uniformly elliptic). Unfortunately, the required non-degeneracy is not guaranteed. The reason is a geometric one that is tied to the nonlinearity of the domain. Indeed, given an arbitrary $d$-dimensional smooth manifold $M$, it is in general not possible to find a global frame for it, that is, $d$ smooth vector fields $E_{1},\ldots,E_{d}$ that constitute a basis for $T_{x}M$ for all $x\in M$. Manifolds that admit such a frame are called parallelizable. Examples of parallelizable manifolds are Lie groups (e.g. $\mathbb{R}^{d}$, $\mathbb{T}^{d}$) and the sphere $\mathbb{S}^{d}$ with $d\in\\{1,3,7\\}$. We refer to Section 6 for further details, and a proof of the following simple but useful fact:

###### Lemma 1.2 (non-degenerate second order operator).

There exist $N=N(M)$ smooth vector fields $a_{1},\ldots,a_{N}$ on $M$ such that the following identity holds $\frac{1}{2}\sum_{i=1}^{N}a_{i}\bigl{(}a_{i}(\psi)\bigr{)}=\Delta_{h}\psi-\frac{1}{2}\sum_{i=1}^{N}\bar{a}_{i}(\psi),\quad\forall\psi\in C^{2}(M),$ where $\Delta_{h}$ is the Laplace-Beltrami operator of $(M,h)$ and $\bar{a}_{1},\ldots,\bar{a}_{N}$ are first order differential operators: $\bar{a}_{i}:=(\operatorname{div}_{h}a_{i})\,a_{i}$, for $i=1,\ldots,N$.

It is now clear that with the specific vector fields $a_{1},\ldots,a_{N}$ constructed in Lemma 1.2, the resulting second order operator $\frac{1}{2}\sum_{i=1}^{N}a_{i}(a_{i}(\cdot))$ in (1.4) becomes uniformly elliptic. The main result of this paper, which shows how the use of noise can avoid concentration in the density $\rho$, is the following theorem:

###### Theorem 1.3 (well-posedness).

Suppose conditions (1.5), (1.6), and (1.7) hold. Let the vector fields $a_{1},\ldots,a_{N}$ be given by Lemma 1.2. Then there exists a unique weak solution of (1.2) with initial datum $\rho|_{t=0}=\rho_{0}\in L^{2}(M)$.

The proof consists of several steps. The first one establishes the well-posedness of strong solutions to (1.2) with smooth data ($u,\rho_{0}$), which is the topic of Section 3. Here the basic strategy is, with the help of a smooth partition of unity subordinate to a finite atlas, to solve localized versions of (1.2) “pulled back” to $\mathbb{R}^{d}$, relying on Kunita’s existence and uniqueness theory for SPDEs on Euclidean domains [30, 31]. We “glue” the localized solutions together on $M$, obtaining in this way a global solution. The gluing procedure is well-defined, because if two coordinate patches intersect, then their corresponding solutions must agree on the intersection, in view of the uniqueness result that is available on $\mathbb{R}^{d}$ (with $u,\rho_{0}$ smooth).
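Concretely, in a chart with coordinates $x=(x^{j})$, one may use the identity $\operatorname{div}_{h}X=\frac{1}{\sqrt{|h|}}\partial_{j}\bigl(\sqrt{|h|}\,X^{j}\bigr)$ (equivalent to the formula $\partial_{j}X^{j}+\Gamma^{j}_{ij}X^{i}$ recalled above, since $\Gamma^{j}_{ij}=\partial_{i}\log\sqrt{|h|}$, where $|h|$ denotes the determinant of the metric) to write the pulled-back equation in the Euclidean form $d\rho+\frac{1}{\sqrt{|h|}}\partial_{j}\bigl(\sqrt{|h|}\,\rho\,u^{j}\bigr)\,dt+\sum_{i=1}^{N}\frac{1}{\sqrt{|h|}}\partial_{j}\bigl(\sqrt{|h|}\,\rho\,a_{i}^{j}\bigr)\circ dW^{i}(t)=0.$ This is only a sketch of the localization; the precise construction, with cut-offs coming from the partition of unity, is carried out in Section 3.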
In Section 8, we derive an $L^{2}$ estimate for general weak solutions $\rho$ (with non-smooth $u,\rho_{0}$):

(1.10) $\mathbb{E}\int_{M}\left|\rho(t,x)\right|^{2}\,dV_{h}\leq C\int_{M}\left|\rho_{0}(x)\right|^{2}\,dV_{h},\quad t\in[0,T],$

where the constant $C$ depends on $\left\|\operatorname{div}_{h}u\right\|_{L^{p}_{t,x}}$, cf. (1.7), but not $\left\|\operatorname{div}_{h}u\right\|_{L^{\infty}}$. The derivation of this estimate is based on (1.8), and a duality argument in which we construct a specific (deterministic) test function $\phi(t,x)$ that can “absorb” the bad $\operatorname{div}_{h}u$ term in (1.8). This function solves the terminal-value problem $\partial_{t}\phi(t)+\Delta_{h}\phi(t)-b(t,x)\phi=0\;\;\mbox{on }[0,t_{0}]\times M,\qquad\phi(t_{0},x)=1\;\;\mbox{on }M,$ where $t_{0}\in[0,T]$, $\Delta_{h}$ is the Laplace-Beltrami operator, and $b=b(t,x)\leq 0$ is an appropriately chosen irregular function ($b\in L^{p}$ with $p>d+2$). Using Fredholm theory and embedding theorems in anisotropic Sobolev spaces $W^{1,2,p}_{t,x}$ [8], we prove that this problem admits a unique solution $\phi\in W^{1,2,p}_{t,x}$ that satisfies

(1.11) $\left\|\left(\phi,\nabla\phi\right)\right\|_{L^{\infty}_{t,x}}\leq C\left(p,d,T,M\right)\left\|b\right\|_{L^{p}_{t,x}},$

where $\nabla$ denotes the covariant derivative. Using this $\phi$ as test function in the time-space weak formulation of (1.8), along with the estimates (1.11), we arrive at the $L^{2}$ estimate (1.10) via Grönwall’s inequality.

In the final step (Section 9), we replace the irregular vector field $u$ and the initial function $\rho_{0}\in L^{2}$ by appropriate smooth approximations $u_{\tau}$ and $\rho_{0,\tau}$, respectively, where $\tau>0$ is the approximation parameter, and solve for each $\tau>0$ the corresponding SPDE with smooth data $\left(u_{\tau},\rho_{0,\tau}\right)$, giving rise to a sequence $\left\\{\rho_{\tau}\right\\}_{\tau>0}$ of approximate solutions. In view of (1.10), we have an $L^{2}$ bound on $\rho_{\tau}$ that is independent of $\tau$, which is enough to arrive at the existence of a weak solution to (1.2) by way of a compactness argument. Uniqueness is an immediate consequence of (1.10).

Before ending this (long) section, let us briefly discuss the nontrivial matter of regularizing functions and vector fields on manifolds. In the Euclidean case one uses mollification. Mollification possesses many fitting properties (e.g. it commutes with differential operators) that are not easy to engineer if the function in question is defined on a manifold. Indeed, on a Riemannian manifold, there are a number of smoothing devices currently in use, including partition of unity combined with Euclidean convolution in local charts (see e.g. [14, 21, 22]), Riemannian convolution smoothing [25], and the heat semigroup method (see e.g. [17, 21]), where the last two are better at preserving geometric properties. In this paper, for smoothing of the data $\rho_{0}$ (function) and $u$ (vector field), we employ standard mollification in time and convolution with the heat semigroup in the spatial variables, where the heat semigroup approach is applied to functions as well as vector fields (the latter via 1-forms and the de Rham-Hodge semigroup), see Section 10 for details.

## 2\. Background material

In an attempt to make the paper more self-contained and fix relevant notation, we briefly review some basic aspects of differential geometry and stochastic analysis.
For unexplained terminology and basic results concerning the target equation (1.2), we refer to the work [22].

### 2.1. Geometric framework

We refer to [4, 12, 32] for background material on differential geometry and analysis on manifolds. Fix a closed, compact, connected and oriented $d$-dimensional smooth Riemannian manifold $(M,h)$. The metric $h$ is a smooth positive-definite 2-covariant tensor field, which determines for every $x\in M$ an inner product $h_{x}$ on $T_{x}M$. Here $T_{x}M$ denotes the tangent space at $x$, and by $TM=\coprod_{x\in M}T_{x}M$ we denote the tangent bundle. For two arbitrary vectors $X_{1},X_{2}\in T_{x}M$, we will henceforth write $h_{x}(X_{1},X_{2})=:\left(X_{1},X_{2}\right)_{h_{x}}$ or even $\left(X_{1},X_{2}\right)_{h}$ if the context is clear. We set $\left|X\right|_{h}:=\left(X,X\right)_{h}^{1/2}$. Recall that, in local coordinates $x=(x^{i})$, the partial derivatives $\partial_{i}:=\frac{\partial}{\partial x^{i}}$ form a basis for $T_{x}M$, while the differential forms $dx^{i}$ determine a basis for the cotangent space $T_{x}^{\ast}M$. Therefore, in local coordinates, $h$ reads $h=h_{ij}\,dx^{i}dx^{j},\quad h_{ij}=\left(\partial_{i},\partial_{j}\right)_{h}.$ We will denote by $(h^{ij})$ the inverse of the matrix $(h_{ij})$.

We denote by $dV_{h}$ the Riemannian density associated to $h$, which in local coordinates takes the form $dV_{h}=\left|h\right|^{1/2}\,dx^{1}\cdots dx^{d},$ where $\left|h\right|$ is the determinant of $h$. Throughout the paper, we will assume for convenience that $\mathrm{Vol}(M,h):=\int_{M}dV_{h}=1.$ For $p\in[1,\infty]$, we denote by $L^{p}(M)$ the usual Lebesgue spaces on $(M,h)$.

In local coordinates, the gradient of a function $f:M\to\mathbb{R}$ is the vector field given by the following expression $\mbox{grad}_{h}\,f:=h^{ij}\partial_{i}f\,\partial_{j}.$ The symbol $\nabla$ refers to the Levi-Civita connection of $h$, namely the unique linear connection on $M$ that is compatible with $h$ and is symmetric. The Christoffel symbols associated to $\nabla$ are given by $\Gamma^{k}_{ij}=\frac{1}{2}h^{kl}\left(\partial_{i}h_{jl}+\partial_{j}h_{il}-\partial_{l}h_{ij}\right).$ In particular, the covariant derivative of a vector field $X=X^{\alpha}\partial_{\alpha}$ is the $(1,1)$-tensor field which in local coordinates reads $(\nabla X)_{j}^{\alpha}:=\partial_{j}X^{\alpha}+\Gamma^{\alpha}_{kj}X^{k}.$ The divergence of a vector field $X=X^{j}\partial_{j}$ is the function defined by $\operatorname{div}_{h}X:=\partial_{j}X^{j}+\Gamma^{j}_{kj}X^{k}.$ For any vector field $X$ and $f\in C^{1}(M)$, we have $X(f)=(X,\mbox{grad}_{h}\,f)_{h}$, which locally takes the form $X^{j}\partial_{j}f$. We recall that for a (smooth) vector field $X$, the following integration by parts formula holds: $\int_{M}X(f)\,dV_{h}=\int_{M}\left(\mbox{grad}_{h}\,f,X\right)_{h}\,dV_{h}=-\int_{M}f\,\operatorname{div}_{h}X\,dV_{h},$ recalling that $M$ is closed (so all functions are compactly supported).

Given a smooth vector field $X$ on $M$, we consider the norm $\left\|X\right\|_{\overrightarrow{L^{p}(M)}}^{p}:=\begin{cases}\displaystyle\int_{M}\left|X\right|^{p}_{h}\,dV_{h},&p\in[1,\infty),\\\ \left\|\left|X\right|_{h}\right\|_{L^{\infty}(M)},&p=\infty.\end{cases}$ The closure of the space of smooth vector fields on $M$ with respect to the norm $\left\|\cdot\right\|_{\overrightarrow{L^{p}(M)}}$ is denoted by $\overrightarrow{L^{p}(M)}$. We define the Sobolev space $\overrightarrow{W^{1,p}(M)}$ in a similar fashion.
Indeed, consider the norm $\left\|X\right\|_{\overrightarrow{W^{1,p}(M)}}^{p}:=\begin{cases}\displaystyle\int_{M}\bigl{(}\left|X\right|^{p}_{h}+\left|\nabla X\right|^{p}_{h}\bigr{)}\,dV_{h},&\text{if $p\in[1,\infty)$},\\\ \left\|\,\left|X\right|_{h}+\left|\nabla X\right|_{h}\,\right\|_{L^{\infty}(M)},&\text{if $p=\infty$}\end{cases}$ where, locally, $\left|\nabla X\right|_{h}^{2}=(\nabla X)^{i}_{j}\,h_{ik}h^{jm}\,(\nabla X)^{k}_{m}$. The closure of the space of smooth vector fields with respect to this norm is $\overrightarrow{W^{1,p}(M)}$. In more operative terms, $\overrightarrow{L^{p}(M)}$ and $\overrightarrow{W^{1,p}(M)}$ can equivalently be seen as the spaces of vector fields whose components in any arbitrary chart belong to the corresponding Euclidean space. We will make essential use of the anisotropic Sobolev space $W^{1,2,p}([0,T]\times M)$, with $p\in[1,\infty)$ and $T>0$ finite. This space is defined as the completion of $C^{\infty}([0,T]\times M)$ under the norm (2.1) $\left\|w\right\|_{W^{1,2,p}([0,T]\times M)}:=\sum_{\begin{subarray}{c}j,k\geq 0\\\ 2j+k\leq 2\end{subarray}}\left[\iint_{[0,T]\times M}\left|\partial_{t}^{j}\nabla^{k}w\right|^{p}_{h}\,dt\,dV_{h}\right]^{1/p},$ where $\nabla^{k}w$ denotes the $k$th covariant derivative of the function $w$. We have the following important embedding result (see Section 10 for a proof): ###### Proposition 2.1. Suppose $p>d+2$. Then $W^{1,2,p}([0,T]\times M)\subset\subset C^{0,1-\frac{1+d}{p}}([0,T]\times M);$ moreover, the first-order $x$-derivatives of a function $w=w(t,x)\in W^{1,2,p}([0,T]\times M)$ are Hölder continuous with exponent $1-\frac{1+d}{p}$, and $\left\|w\right\|_{C^{0}([0,T]\times M)}+\left\|\nabla w\right\|_{C^{0}([0,T]\times M)}\leq C\left\|w\right\|_{W^{1,2,p}([0,T]\times M)},$ for some constant $C=C(p,d,M)$. Finally, we introduce the following second order differential operators associated with the vector fields $a_{1},\ldots,a_{N}$: (2.2) $\Lambda_{i}(\psi):=\operatorname{div}_{h}\bigl{(}\operatorname{div}_{h}(\psi a_{i})a_{i}\bigr{)},\quad\psi\in C^{2}(M),\quad i=1,\ldots,N.$ It is not difficult to see (integrating by parts twice, using that $M$ is closed) that the adjoint of $\Lambda_{i}(\cdot)$ is $a_{i}\bigl{(}a_{i}(\cdot)\bigr{)}$: $\displaystyle\int_{M}\Lambda_{i}(\psi)\phi\,dV_{h}$ $\displaystyle=\int_{M}\psi\,a_{i}\bigl{(}a_{i}(\phi)\bigr{)}\,dV_{h}$ $\displaystyle=\int_{M}\psi\,\Bigl{(}(\nabla^{2}\phi)(a_{i},a_{i})+(\nabla_{a_{i}}a_{i})(\phi)\Bigr{)}\,dV_{h},\quad\forall\psi,\phi\in C^{2}(M),$ cf. [22] for further details. ### 2.2. Stochastic framework We use the books [38, 42] as general references on the topic of stochastic analysis. From beginning to end, we fix a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ and a complete right-continuous filtration $\left\\{\mathcal{F}_{t}\right\\}_{t\in[0,T]}$. Without loss of generality, we assume that the $\sigma$-algebra $\mathcal{F}$ is countably generated. Let $W=\left\\{W_{i}\right\\}_{i=1}^{N}$ be a finite sequence of independent one-dimensional Brownian motions adapted to the filtration $\left\\{\mathcal{F}_{t}\right\\}_{t\in[0,T]}$. We refer to $\bigl{(}\Omega,\mathcal{F},\left\\{\mathcal{F}_{t}\right\\}_{t\in[0,T]},\mathbb{P},W\bigr{)}$ as a (Brownian) stochastic basis. Consider two real-valued stochastic processes $Y,\tilde{Y}$. We call $\tilde{Y}$ a modification of $Y$ if, for each $t\in[0,T]$, $\mathbb{P}\bigl{(}\bigl{\\{}\omega\in\Omega:Y(\omega,t)=\tilde{Y}(\omega,t)\bigr{\\}}\bigr{)}=1$. It is important to pick good modifications of stochastic processes. 
Right (or left) continuous modifications are often used (they are known to exist for rather general processes), since any two such modifications of the same process are indistinguishable (with probability one they have the same sample paths). Besides, they necessarily have left-limits everywhere. Right-continuous processes with left-limits are referred to as càdlàg. An $\left\\{\mathcal{F}_{t}\right\\}_{t\in[0,T]}$-adapted, càdlàg process $Y$ is an $\left\\{\mathcal{F}_{t}\right\\}_{t\in[0,T]}$-semimartingale if there exist processes $F,M$ with $F_{0}=M_{0}=0$ such that $Y_{t}=Y_{0}+F_{t}+M_{t},$ where $F$ is a finite variation process and $M$ is a local martingale. In this paper we will only be concerned with continuous semimartingales. The quantifier “local” refers to the existence of a sequence $\left\\{\tau_{n}\right\\}_{n\geq 1}$ of stopping times increasing to infinity such that the stopped processes $\mathbf{1}_{\left\\{\tau_{n}>0\right\\}}M_{t\wedge\tau_{n}}$ are martingales. Given two continuous semimartingales $Y$ and $Z$, we can define the Fisk-Stratonovich integral of $Y$ with respect to $Z$ by $\int_{0}^{t}Y(s)\circ dZ(s)=\int_{0}^{t}Y(s)\,dZ(s)+\frac{1}{2}\left\langle Y,Z\right\rangle_{t},$ where $\int_{0}^{t}Y(s)dZ(s)$ is the Itô integral of $Y$ with respect to $Z$ and $\left\langle Y,Z\right\rangle$ denotes the quadratic cross-variation process of $Y$ and $Z$. Let us recall Itô’s formula for a continuous semimartingale $Y$. Let $F\in C^{2}(\mathbb{R})$. Then $F(Y)$ is again a continuous semimartingale and the following chain rule formula holds: $F(Y(t))-F(Y(0))=\int_{0}^{t}F^{\prime}(Y(s))dY(s)+\frac{1}{2}\int_{0}^{t}F^{\prime\prime}(Y(s))\,d\left\langle Y,Y\right\rangle_{s}.$ Martingale inequalities are generally important for several reasons. For us they will be used to bound Itô stochastic integrals in terms of their quadratic variation (which is easy to compute). One of the most important martingale inequalities is the Burkholder-Davis-Gundy inequality. Let $Y=\left\\{Y_{t}\right\\}_{t\in[0,T]}$ be a continuous local martingale with $Y_{0}=0$. Then, for any stopping time $\tau\leq T$, $\mathbb{E}\left(\sup_{t\in[0,\tau]}\left|Y_{t}\right|\right)^{p}\leq C_{p}\,\mathbb{E}\left[\left\langle Y,Y\right\rangle_{\tau}^{p/2}\right],\qquad p\in(0,\infty),$ where $C_{p}$ is a constant depending only on $p$. ## 3\. Smooth data and strong solutions ### 3.1. Strong solution We are going to construct strong solutions to (1.2) when the data $\rho_{0},u$ are smooth. More precisely, throughout this section, we will assume $\rho_{0}\in C^{\infty}(M)$ and that $u:[0,\infty)\times M\to TM$ is a vector field on $M$ that is smooth in both variables. The strategy we employ is the following: firstly we solve a local version of (1.2) “pulled back” to $\mathbb{R}^{d}$, applying the “Euclidean” existence and uniqueness theory developed in [30]. In a second step we glue these solutions together on $M$, obtaining a global solution. The gluing procedure is well-posed because there is a uniqueness result on $\mathbb{R}^{d}$ for smooth data $(\rho_{0},u)$. Fixing a point $p\in M$, we may find an open neighborhood $\mathcal{U}(p)\subset M$ of $p$ and coordinates $\gamma_{p}:\mathcal{U}(p)\to\mathbb{R}^{d}$ such that $\gamma_{p}(\mathcal{U}(p))=\mathbb{R}^{d}$. By compactness of $M$, there is a finite atlas $\mathcal{A}$ with these properties, namely there exist $p_{1},\ldots,p_{K}$ such that $M=\bigcup_{l=1}^{K}\mathcal{U}(p_{l})$ and $\gamma_{p_{l}}\bigl{(}\mathcal{U}(p_{l})\bigr{)}=\mathbb{R}^{d}$. 
###### Remark 3.1. To construct these coordinates, one is usually led to use that the ball $B_{1}(0)\subset\mathbb{R}^{d}$ is diffeomorphic to the whole space, for instance via the map $\Phi:\mathbb{R}^{d}\to B_{1}(0),\quad z\mapsto\frac{z}{\sqrt{1+|z|^{2}}}.$ If we now have a $C^{1}$ function $f:B_{1}(0)\to\mathbb{R}$ with bounded derivatives, then it is straightforward to check that $f\circ\Phi$ has bounded derivatives as well. Fix a point $p_{l}$. In the coordinates given by $\gamma_{p_{l}}$, (1.2) takes the form $\begin{split}&d\rho^{l}+\left[\rho^{l}\operatorname{div}_{h}u+\partial_{j}\rho^{l}u^{j}\right]\,dt\\\ &\qquad\qquad+\sum_{i=1}^{N}\left[\partial_{j}\rho^{l}a_{i}^{j}+\rho^{l}\operatorname{div}_{h}a_{i}\right]\circ dW^{i}_{t}=0\quad\text{on $[0,T]\times\mathbb{R}^{d}$},\\\ &\rho^{l}(0)=\rho_{0}\circ\gamma_{p_{l}}^{-1}\quad\mbox{on $\mathbb{R}^{d}$}.\end{split}$ Observe that the coefficients satisfy the hypotheses on pages 264 and 267 in [30]. In particular, the $z$-derivatives are bounded, in view of Remark 3.1. (In Kunita’s notation we have $\displaystyle Q_{0}(t,x,v)=v\operatorname{div}_{h}u,\quad Q_{j}(t,x,v)=v\operatorname{div}_{h}a_{j},\quad j=1,\ldots,N$ $\displaystyle P_{0}^{r}=u^{r},\quad P^{r}_{j}=a^{r}_{j},\quad r=1,\ldots,d,\quad j=1,\ldots,N$ $\displaystyle Q^{(1)}_{0}=\operatorname{div}_{h}u,\quad Q^{(0)}_{0}=0,$ $\displaystyle Q^{(1)}_{j}=\operatorname{div}_{h}a_{j},\quad Q^{(0)}_{j}=0,\quad j=1,\ldots,N.)$ Therefore, we may apply [30, Theorem 4.2] to obtain a unique strong solution, which we call $\rho^{l}$ (for the definition of strong solution, see [30, p. 255]). Let us “lift” $\rho^{l}$ to $M$, via $\gamma_{p_{l}}$, namely, for $t\in[0,T]$ define $\hat{\rho}^{l}(t,x):=\begin{cases}\rho^{l}\bigl{(}t,\gamma_{p_{l}}(x)\bigr{)},&x\in\mathcal{U}(p_{l}),\\\ 0,&x\notin\mathcal{U}(p_{l}).\end{cases}$ We repeat this procedure for all $p_{l}$, thereby obtaining $\hat{\rho}^{l}$ for $l\in\left\\{1,\ldots,K\right\\}$. Suppose that $\mathcal{U}(p_{l})\cap\mathcal{U}(p_{l^{\prime}})\neq\emptyset$, for some $l\neq l^{\prime}$. Fix $q\in\mathcal{U}(p_{l})\cap\mathcal{U}(p_{l^{\prime}})$. Arguing as above, we may find coordinates $\eta_{q}:\mathcal{V}(q)\to\mathbb{R}^{d}$ such that $\mathcal{V}(q)$ is an open neighborhood of $q$ with $\mathcal{V}(q)\subset\mathcal{U}(p_{l})\cap\mathcal{U}(p_{l^{\prime}})$, and $\eta_{q}\bigl{(}\mathcal{V}(q)\bigr{)}=\mathbb{R}^{d}$. Once again, we can find a unique strong solution $\rho_{q}$, which we lift to $M$: for $t\in[0,T]$ define $\hat{\rho}_{q}(t,x):=\begin{cases}\rho_{q}\bigl{(}t,\eta_{q}(x)\bigr{)},&x\in\mathcal{V}(q),\\\ 0,&x\notin\mathcal{V}(q).\end{cases}$ We now restrict $\hat{\rho}^{l}(t,\cdot)$ to $\mathcal{V}(q)$. Trivially, the restriction satisfies (1.2) on $\mathcal{V}(q)$. Equation (1.2) is geometric (and thus coordinate-independent), which implies that the restriction of $\hat{\rho}^{l}(t,\cdot)$ to $\mathcal{V}(q)$ must satisfy (1.2) when written in the coordinates given by $\eta_{q}$. By uniqueness in $\mathbb{R}^{d}$ (of strong solutions), we must have $\hat{\rho}^{l}\bigl{(}\cdot,\eta_{q}^{-1}(\cdot)\bigr{)}=\rho_{q}(\cdot,\cdot)$ on $[0,T]\times\mathbb{R}^{d}$, and thus $\hat{\rho}^{l}(t,x)=\hat{\rho}_{q}(t,x)$ for all $t\in[0,T]$ and $x\in\mathcal{V}(q)$. 
By symmetry, we infer $\hat{\rho}^{l}(t,x)=\hat{\rho}^{l^{\prime}}(t,x),\quad\text{for $(t,x)\in[0,T]\times\mathcal{V}(q)$}.$ Repeating the whole procedure for all $q\in\mathcal{U}(p_{l})\cap\mathcal{U}(p_{l^{\prime}})$, we conclude that $\hat{\rho}^{l}(t,x)=\hat{\rho}^{l^{\prime}}(t,x),\quad\text{for $(t,x)\in[0,T]\times\bigl{(}\mathcal{U}(p_{l})\cap\mathcal{U}(p_{l^{\prime}})\bigr{)}$}.$ In view of these compatibility conditions, we may unambiguously define (3.1) $\rho(t,x):=\hat{\rho}^{l}(t,x),\quad(t,x)\in[0,T]\times M,$ where $l$ is an index in $\left\\{1,\ldots,K\right\\}$ such that $x\in\mathcal{U}(p_{l})$. We have thus arrived at ###### Lemma 3.1 (strong solution, smooth data). The function $\rho$ given by (3.1) is the unique strong solution of (1.2) with initial datum $\rho_{0}\in C^{\infty}(M)$ and smooth vector field $u:[0,\infty)\times M\to TM$. Moreover, $\rho$ is a $C^{\infty}$ semimartingale. ### 3.2. Elementary $L^{p}$ bound Let $\rho$ be the solution constructed above. We observe that, in view of the results in [30], locally in the coordinates induced by $\gamma_{p_{l}}$ on $\mathcal{U}(p_{l})$, we have the following explicit expression for $\rho$: (3.2) $\begin{split}\rho&\bigl{(}t,\gamma_{p_{l}}^{-1}(z)\bigr{)}=\rho^{l}(t,z)\\\ &\quad=\exp\left(\int_{0}^{t}\operatorname{div}_{h}u(s,\xi_{s}(y))\,ds+\sum_{i=1}^{N}\int_{0}^{t}\operatorname{div}_{h}a_{i}(y)\circ dW_{s}^{i}\right)\Bigg{|}_{y=\xi_{t}^{-1}(z)}\\\ &\quad\quad\quad\quad\quad\quad\times\rho_{0}\bigl{(}\gamma_{p_{l}}^{-1}\circ\xi^{-1}_{t}(z)\bigr{)}\\\ &\quad=\exp\left(\int_{0}^{t}\operatorname{div}_{h}u(s,\xi_{s}(y))ds\right)\Bigg{|}_{y=\xi_{t}^{-1}(z)}\exp\left(\sum_{i=1}^{N}\operatorname{div}_{h}a_{i}(\xi_{t}^{-1}(z))W_{t}^{i}\right)\\\ &\quad\quad\quad\quad\quad\quad\times\rho_{0}\bigl{(}\gamma_{p_{l}}^{-1}\circ\xi^{-1}_{t}(z)\bigr{)},\end{split}$ for $(t,z)\in[0,T]\times\mathbb{R}^{d}$, where $\xi$ is a stochastic flow of diffeomorphisms, satisfying $d\xi_{t}(z)=-u(t,\xi_{t}(z))\,dt-\sum_{i=1}^{N}a_{i}(\xi_{t}(z))\circ dW^{i}_{t},\qquad\xi_{0}(z)=z.$ Here the vector fields $u,a_{i}$ are seen as vectors in $\mathbb{R}^{d}$ through our coordinate system. Let us derive an $L^{p}$ bound. Fix $p\in[1,\infty)$ and let $(\chi_{l})_{l}$ be a smooth partition of unity subordinate to our atlas $\mathcal{A}$. We have $\begin{split}\int_{M}\chi_{l}(x)\left|\rho(t,x)\right|^{p}\,dV_{h}(x)&=\int_{\operatorname{supp}\chi_{l}}\chi_{l}(x)\left|\rho(t,x)\right|^{p}\,dV_{h}(x)\\\ &=\int_{\gamma_{p_{l}}(\operatorname{supp}\chi_{l})}\chi_{l}\bigl{(}\gamma_{p_{l}}^{-1}(z)\bigr{)}\left|\rho\bigl{(}t,\gamma_{p_{l}}^{-1}(z)\bigr{)}\right|^{p}\left|h_{\gamma_{p_{l}}}(z)\right|^{1/2}\,dz,\end{split}$ where $\left|h_{\gamma_{p_{l}}}\right|^{1/2}$ denotes the square root of the determinant of the metric $h$ written in the coordinates induced by $\gamma_{p_{l}}$. 
Using (3.2) and the change of variable $z=\xi_{t}(w)$, we obtain $\begin{split}\int_{M}&\chi_{l}(x)\left|\rho(t,x)\right|^{p}\,dV_{h}(x)\\\ &=\int_{\gamma_{p_{l}}(\operatorname{supp}\chi_{l})}\chi_{l}\bigl{(}\gamma_{p_{l}}^{-1}(z)\bigr{)}\exp\left(p\int_{0}^{t}\operatorname{div}_{h}u\bigl{(}s,\xi_{s}(y)\bigr{)}\,ds\right)\Bigg{|}_{y=\xi_{t}^{-1}(z)}\\\ &\qquad\times\exp\left(p\sum_{i=1}^{N}\operatorname{div}_{h}a_{i}\bigl{(}\xi_{t}^{-1}(z)\bigr{)}W_{t}^{i}\right)\,\left|\rho_{0}\bigl{(}\gamma_{p_{l}}^{-1}\circ\xi^{-1}_{t}(z)\bigr{)}\right|^{p}\left|h_{\gamma_{p_{l}}}(z)\right|^{1/2}\,dz\\\ &=\int_{\xi_{t}^{-1}\circ\gamma_{p_{l}}(\operatorname{supp}\chi_{l})}\chi_{l}\bigl{(}\gamma_{p_{l}}^{-1}\circ\xi_{t}(w)\bigr{)}\exp\left(p\int_{0}^{t}\operatorname{div}_{h}u\bigl{(}s,\xi_{s}(w)\bigr{)}\,ds\right)\\\ &\qquad\times\exp\left(p\sum_{i=1}^{N}\operatorname{div}_{h}a_{i}(w)W_{t}^{i}\right)\,\left|\rho_{0}(\gamma_{p_{l}}^{-1}(w))\right|^{p}\left|\partial\xi_{t}(w)\right|\left|h_{\gamma_{p_{l}}}(\xi_{t}(w))\right|^{1/2}\,dw\\\ &=\int_{\xi_{t}^{-1}\circ\gamma_{p_{l}}(\operatorname{supp}\chi_{l})}\chi_{l}\bigl{(}\gamma_{p_{l}}^{-1}\circ\xi_{t}(w)\bigr{)}\exp\left(p\int_{0}^{t}\operatorname{div}_{h}u\bigl{(}s,\xi_{s}(w)\bigr{)}\,ds\right)\\\ &\qquad\times\exp\left(p\sum_{i=1}^{N}\operatorname{div}_{h}a_{i}(w)W_{t}^{i}\right)\,\left|\rho_{0}\bigl{(}\gamma_{p_{l}}^{-1}(w)\bigr{)}\right|^{p}\left|h_{\xi_{t}^{-1}\circ\gamma_{p_{l}}}(w)\right|^{1/2}\,dw.\end{split}$ In passing, note that $\xi_{t}^{-1}\circ\gamma_{p_{l}}$ is a bona fide smooth chart. In the following, $C$ denotes a constant that depends only on $T$, $p$, $\left\|\operatorname{div}_{h}u\right\|_{L^{\infty}_{t,x}}$, $\left\|\operatorname{div}_{h}a_{i}\right\|_{L^{\infty}}$, and is allowed to vary from line to line. For convenience, set $A_{i}:=\left\|\operatorname{div}_{h}a_{i}\right\|_{L^{\infty}(M)}$. 
We proceed as follows: $\begin{split}\int_{M}&\chi_{l}(x)\left|\rho(t,x)\right|^{p}\,dV_{h}(x)\\\ &\leq C\int_{\xi_{t}^{-1}\circ\gamma_{p_{l}}(\operatorname{supp}\chi_{l})}\chi_{l}\bigl{(}\gamma_{p_{l}}^{-1}\circ\xi_{t}(w)\bigr{)}\\\ &\qquad\qquad\qquad\times\exp\left(p\sum_{i=1}^{N}A_{i}\left|W_{t}^{i}\right|\right)\,\left|\rho_{0}\bigl{(}\gamma_{p_{l}}^{-1}(w)\bigr{)}\right|^{p}\left|h_{\xi_{t}^{-1}\circ\gamma_{p_{l}}}(w)\right|^{1/2}\,dw\\\ &=C\exp\left(p\sum_{i=1}^{N}A_{i}\left|W_{t}^{i}\right|\right)\int_{\xi_{t}^{-1}\circ\gamma_{p_{l}}(\operatorname{supp}\chi_{l})}\chi_{l}\bigl{(}\gamma_{p_{l}}^{-1}\circ\xi_{t}(w)\bigr{)}\\\ &\qquad\qquad\qquad\times\,\left|\rho_{0}\bigl{(}\gamma_{p_{l}}^{-1}(w)\bigr{)}\right|^{p}\left|h_{\xi_{t}^{-1}\circ\gamma_{p_{l}}}(w)\right|^{1/2}\,dw\\\ &\leq C\exp\left(p\sum_{i=1}^{N}A_{i}\left|W_{t}^{i}\right|\right)\left\|\rho_{0}\right\|^{p}_{L^{\infty}(M)}\\\ &\qquad\qquad\qquad\times\int_{\xi_{t}^{-1}\circ\gamma_{p_{l}}(\operatorname{supp}\chi_{l})}\chi_{l}\bigl{(}\gamma_{p_{l}}^{-1}\circ\xi_{t}(w)\bigr{)}\left|h_{\xi_{t}^{-1}\circ\gamma_{p_{l}}}(w)\right|^{1/2}\,dw\\\ &=C\exp\left(p\sum_{i=1}^{N}A_{i}\left|W_{t}^{i}\right|\right)\left\|\rho_{0}\right\|^{p}_{L^{\infty}(M)}\int_{\operatorname{supp}\chi_{l}}\chi_{l}(x)\,dV_{h}(x).\end{split}$ Taking expectation leads to $\begin{split}\mathbb{E}&\int_{M}\chi_{l}(x)\left|\rho(t,x)\right|^{p}\,dV_{h}(x)\\\ &\leq C\left\|\rho_{0}\right\|^{p}_{L^{\infty}(M)}\int_{M}\chi_{l}(x)\,dV_{h}(x)\,\mathbb{E}\exp\left(p\sum_{i=1}^{N}A_{i}\left|W_{t}^{i}\right|\right)\\\ &=C\left\|\rho_{0}\right\|^{p}_{L^{\infty}(M)}\int_{M}\chi_{l}(x)\,dV_{h}(x)\,\prod_{i=1}^{N}\mathbb{E}\exp\left(pA_{i}\left|W_{t}^{i}\right|\right)\\\ &\leq C\left\|\rho_{0}\right\|^{p}_{L^{\infty}(M)}\int_{M}\chi_{l}(x)\,dV_{h}(x),\end{split}$ where we have used that the Brownian motions are independent and satisfy the standard estimate [20, page 54] $\mathbb{E}\exp\bigl{(}\alpha\left|W_{t}^{i}\right|\bigr{)}\leq\beta,\qquad t\in[0,T],\,\,\alpha>0,$ where the constant $\beta$ depends on $\alpha$ and $T$. Therefore, summing over $l$, we obtain $\mathbb{E}\left\|\rho(t)\right\|_{L^{p}(M)}^{p}\leq C\left\|\rho_{0}\right\|^{p}_{L^{\infty}(M)}\int_{M}\,dV_{h}(x)=C\left\|\rho_{0}\right\|^{p}_{L^{\infty}(M)},$ where the constant $C$ depends on the $L^{\infty}$ norms of $\operatorname{div}_{h}u$, $\operatorname{div}_{h}a_{1},\ldots,\operatorname{div}_{h}a_{N}$. Since we are assuming that $\rho_{0},u\in C^{\infty}$, the right-hand side of the last expression is finite, and thus $\rho\in L^{\infty}_{t}L^{p}_{\omega,x}$. Moreover, using [30, Theorem 1.1], we infer that the stochastic process $(\omega,t)\mapsto\int_{M}\rho(t)\psi\,dV_{h}$ is a continuous $\mathcal{F}_{t}$-semimartingale for any $\psi\in C^{\infty}(M)$. Let us summarize all these results in ###### Lemma 3.2 ($L^{p}$ estimates, smooth data). Suppose $\rho_{0},u\in C^{\infty}$. Let $\rho$ be the unique strong solution of (1.2) given by Lemma 3.1. Then, for any $p\in[1,\infty)$, $\rho\in L^{\infty}\left([0,T];L^{p}(\Omega\times M)\right),\quad\sup_{t\in[0,T]}\mathbb{E}\left\|\rho(t)\right\|_{L^{p}(M)}^{p}\leq C\left\|\rho_{0}\right\|^{p}_{L^{\infty}(M)},$ where $C=C\left(p,T,\left\|\operatorname{div}_{h}u\right\|_{L^{\infty}([0,T]\times M)},\max_{i}\left\|\operatorname{div}_{h}a_{i}\right\|_{L^{\infty}(M)}\right)$. Besides, for any $\psi\in C^{\infty}(M)$, the process $(\omega,t)\mapsto\int_{M}\rho(t)\psi\,dV_{h}$ is a continuous $\mathcal{F}_{t}$-semimartingale. 
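For completeness, let us sketch the standard Gaussian bound invoked in the proof above; this short verification is ours, included only for the reader's convenience. Since $W_{t}^{i}\sim N(0,t)$, we have $\mathbb{E}\exp\bigl{(}\alpha W_{t}^{i}\bigr{)}=e^{\alpha^{2}t/2}$, and therefore $\mathbb{E}\exp\bigl{(}\alpha\left|W_{t}^{i}\right|\bigr{)}\leq\mathbb{E}\Bigl{[}e^{\alpha W_{t}^{i}}+e^{-\alpha W_{t}^{i}}\Bigr{]}=2e^{\alpha^{2}t/2}\leq 2e^{\alpha^{2}T/2}=:\beta,\qquad t\in[0,T],$ which is precisely the estimate used above, with $\beta$ depending only on $\alpha$ and $T$. 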
Let us bring (1.2) into its Itô form, still assuming that $\rho_{0},u\in C^{\infty}$. We are not going to spell out all the details, referring instead to [30] for the missing pieces. The solution $\rho$ we have constructed in Lemma 3.1 is a smooth semimartingale, and it satisfies $\mathbb{P}$-a.s. the following equation: (3.3) $\begin{split}\rho(t,x)&=\rho_{0}(x)-\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,u\right)\,ds-\sum_{i=1}^{N}\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,a_{i}\right)\,dW^{i}(s)\\\ &\qquad-\frac{1}{2}\sum_{i=1}^{N}\left\langle\operatorname{div}_{h}\left(\rho(\cdot,x)a_{i}\right),W_{\cdot}^{i}\right\rangle_{t}\\\ &=\rho_{0}(x)-\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,u\right)\,ds-\sum_{i=1}^{N}\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,a_{i}\right)\,dW^{i}(s)\\\ &\qquad-\frac{1}{2}\sum_{i=1}^{N}\left\langle a_{i}(\rho(\cdot,x)),W_{\cdot}^{i}\right\rangle_{t}-\frac{1}{2}\sum_{i=1}^{N}\left\langle\rho(\cdot,x),W_{\cdot}^{i}\right\rangle_{t}\operatorname{div}_{h}a_{i},\end{split}$ for all $t\in[0,T]$ and $x\in M$, by definition of the Stratonovich integral. By Theorem 1.1 and Lemma 1.3 in [30], we obtain $\begin{split}&a_{i}(\rho(t,x))=a_{i}(\rho_{0}(x))-\int_{0}^{t}a_{i}\bigl{(}\operatorname{div}_{h}(\rho(s,x)\,u)\bigr{)}\,ds\\\ &\quad-\sum_{j=1}^{N}\int_{0}^{t}a_{i}\bigl{(}\operatorname{div}_{h}(\rho(s,x)\,a_{j})\bigr{)}\,dW^{j}(s)-\frac{1}{2}\sum_{j=1}^{N}\left\langle a_{i}\bigl{(}\operatorname{div}_{h}(\rho(\cdot,x)a_{j})\bigr{)},W_{\cdot}^{j}\right\rangle_{t},\end{split}$ and (3.4) $\begin{split}\left\langle a_{i}(\rho(\cdot,x)),W^{i}_{\cdot}\right\rangle_{t}&=-\sum_{j=1}^{N}\left\langle\int_{0}^{\cdot}a_{i}\bigl{(}\operatorname{div}_{h}(\rho(s,x)\,a_{j})\bigr{)}\,dW^{j}(s),W^{i}_{\cdot}\right\rangle_{t}\\\ &=-\sum_{j=1}^{N}\int_{0}^{t}a_{i}\bigl{(}\operatorname{div}_{h}\left(\rho(s,x)\,a_{j}\right)\bigr{)}\,d\left\langle W^{j},W^{i}\right\rangle_{s}\\\ &=-\int_{0}^{t}a_{i}\bigl{(}\operatorname{div}_{h}\left(\rho(s,x)\,a_{i}\right)\bigr{)}\,ds,\end{split}$ because the Brownian motions are independent, and the time-integral involving $u$ is absolutely continuous and thus does not contribute to the quadratic variation. Moreover, it is clear that (3.5) $\begin{split}\left\langle\rho(\cdot,x),W^{i}_{\cdot}\right\rangle_{t}&=-\sum_{j=1}^{N}\left\langle\int_{0}^{\cdot}\operatorname{div}_{h}\left(\rho(s,x)\,a_{j}\right)\,dW^{j}(s),W^{i}_{\cdot}\right\rangle_{t}\\\ &=-\sum_{j=1}^{N}\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,a_{j}\right)\,d\langle W^{j},W^{i}\rangle_{s}\\\ &=-\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,a_{i}\right)\,ds.\end{split}$ Re-starting from (3.3), using (3.4) and (3.5), we finally arrive at $\begin{split}\rho(t,x)&=\rho_{0}(x)-\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,u\right)\,ds-\sum_{i=1}^{N}\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,a_{i}\right)\,dW^{i}(s)\\\ &\qquad\qquad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}a_{i}\bigl{(}\operatorname{div}_{h}\left(\rho(s,x)\,a_{i}\right)\bigr{)}\,ds\\\ &\qquad\qquad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\operatorname{div}_{h}a_{i}\,\operatorname{div}_{h}\left(\rho(s,x)\,a_{i}\right)\,ds\\\ &=\rho_{0}(x)-\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,u\right)\,ds-\sum_{i=1}^{N}\int_{0}^{t}\operatorname{div}_{h}\left(\rho(s,x)\,a_{i}\right)\,dW^{i}(s)\\\ &\qquad\qquad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\Lambda_{i}(\rho(s,x))\,ds,\end{split}$ where the second order differential operator $\Lambda_{i}$ is defined in (2.2). 
This is the strong Itô form of (1.2), derived under the assumption that $\rho_{0},u\in C^{\infty}$. If we now integrate this against $\psi\in C^{\infty}(M)$ (say), since the Itô integral admits a Fubini-type theorem, we arrive at the weak form given in Definition 1.2. In view of this, combining Lemmas 3.1 and 3.2, we eventually arrive at ###### Proposition 3.3 (weak solution, smooth data). Let $\rho$ given by (3.1) be the unique strong solution of (1.2) with initial datum $\rho_{0}\in C^{\infty}(M)$ and smooth vector field $u:[0,\infty)\times M\to TM$. Then $\rho$ is a weak $L^{2}$ solution of (1.2) in the sense of Definition 1.1. ## 4\. Time-dependent test functions During an upcoming proof (of the $L^{2}$ estimate), we will need a version of the weak formulation (1.9) that makes use of time-dependent test functions. The next result supplies that formulation. ###### Lemma 4.1 (space-time weak formulation). Let $\rho$ be a weak $L^{2}$ solution of (1.2) with initial datum $\rho|_{t=0}=\rho_{0}$. Suppose $\rho$ is renormalizable in the sense of Definition 1.3. Fix $F\in C^{2}(\mathbb{R})$ with $F,F^{\prime},F^{\prime\prime}\in L^{\infty}(\mathbb{R})$. For any $\psi\in C^{\infty}([0,T]\times M)$, the following equation holds $\mathbb{P}$-a.s., for any $t\in[0,T]$, (4.1) $\begin{split}&\int_{M}F(\rho(t))\psi(t)\,dV_{h}-\int_{M}F(\rho_{0})\psi(0)\,dV_{h}\\\ &\,=\int_{0}^{t}\int_{M}F(\rho(s))\partial_{t}\psi\,dV_{h}\,ds+\int_{0}^{t}\int_{M}F(\rho(s))\,u(\psi)\,dV_{h}\,ds\\\ &\quad+\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}(\psi)\,dV_{h}\,dW^{i}(s)+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}\bigl{(}a_{i}(\psi)\bigr{)}\,dV_{h}\,ds\\\ &\quad-\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}u\,\psi\,dV_{h}\,ds-\sum_{i=1}^{N}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}a_{i}\,\psi\,dV_{h}\,dW^{i}(s)\\\ &\quad-\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\Lambda_{i}(1)\,G_{F}(\rho(s))\,\psi\,dV_{h}\,ds\\\ &\quad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F^{\prime\prime}(\rho(s))\bigl{(}\rho(s)\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,\psi\,dV_{h}\,ds\\\ &\quad-\sum_{i=1}^{N}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\bar{a}_{i}(\psi)\,dV_{h}\,ds.\end{split}$ ###### Proof. It is sufficient to consider test functions of the form $\psi(t,x)=\theta(t)\phi(x)$, where $\theta\in C^{1}_{c}((-1,T+1))$ and $\phi\in C^{\infty}(M)$, because the general result will then follow from a density argument for the tensor product. We start off from the following space-weak formulation, cf. 
(1.9): $\begin{split}&\int_{M}F(\rho(t))\phi\,dV_{h}=\int_{M}F(\rho_{0})\phi\,dV_{h}+\int_{0}^{t}\int_{M}F(\rho(s))\,u(\phi)\,dV_{h}\,ds\\\ &\quad+\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}(\phi)\,dV_{h}\,dW^{i}(s)+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}\bigl{(}a_{i}(\phi)\bigr{)}\,dV_{h}\,ds\\\ &\quad-\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}u\,\phi\,dV_{h}\,ds-\sum_{i=1}^{N}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}a_{i}\,\phi\,dV_{h}\,dW^{i}(s)\\\ &\quad-\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\Lambda_{i}(1)\,G_{F}(\rho(s))\,\phi\,dV_{h}\,ds\\\ &\quad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F^{\prime\prime}(\rho(s))\bigl{(}\rho(s)\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,\phi\,dV_{h}\,ds\\\ &\quad-\sum_{i=1}^{N}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\bar{a}_{i}(\phi)\,dV_{h}\,ds,\qquad\text{$\mathbb{P}$-a.s., for any $t\in[0,T]$.}\end{split}$ We multiply this equation by $\dot{\theta}(t)$ and integrate the result over $t\in\left[0,\bar{t}\right]$. All the time-integrals are absolutely continuous by definition, and thus we can integrate them by parts. For example, $\begin{split}\int_{0}^{\bar{t}}&\dot{\theta}(t)\int_{0}^{t}\int_{M}F(\rho(s))\,u(\phi)\,dV_{h}\,ds\,dt\\\ &=\theta({\bar{t}})\int_{0}^{\bar{t}}\int_{M}F(\rho(s))\,u(\phi)\,dV_{h}\,ds-\int_{0}^{\bar{t}}\theta(t)\int_{M}F(\rho(t))\,u(\phi)\,dV_{h}\,dt,\end{split}$ and so forth. We can also integrate by parts the stochastic integrals. For example, $\begin{split}\int_{0}^{\bar{t}}&\dot{\theta}(t)\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}(\phi)\,dV_{h}\,dW^{i}(s)\,dt\\\ &=\theta({\bar{t}})\int_{0}^{\bar{t}}\int_{M}F(\rho(s))\,a_{i}(\phi)\,dV_{h}\,dW^{i}(s)-\int_{0}^{\bar{t}}\theta(t)\int_{M}F(\rho(s))\,a_{i}(\phi)\,dV_{h}\,dW^{i}(t),\end{split}$ and so forth. Finally, $\displaystyle\int_{0}^{\bar{t}}\dot{\theta}(t)\left(\int_{M}F(\rho(t))\phi\,dV_{h}-\int_{M}F(\rho_{0})\phi\,dV_{h}\right)\,dt$ $\displaystyle\quad=\int_{0}^{\bar{t}}\int_{M}F(\rho(t))\dot{\theta}(t)\phi\,dV_{h}\,dt+\int_{M}F(\rho_{0})\theta(0)\phi\,dV_{h}-\int_{M}F(\rho_{0})\theta(\bar{t})\phi\,dV_{h},$ where the last term is aggregated together with the other “$\theta({\bar{t}})\int_{0}^{\bar{t}}\left(\cdots\right)$” terms that appear, eventually leading to $\int_{M}F(\rho(\bar{t}))\theta(\bar{t})\phi\,dV_{h}$. 
Therefore, after many straightforward rearrangements of terms, we arrive at (now replacing $\bar{t}$ by $t$) $\begin{split}&\int_{M}F(\rho(t))\theta(t)\phi\,dV_{h}-\int_{M}F(\rho_{0})\theta(0)\phi\,dV_{h}=\int_{0}^{t}\int_{M}F(\rho(s))\dot{\theta}(s)\phi\,dV_{h}\,ds\\\ &\quad+\int_{0}^{t}\int_{M}F(\rho(s))\,u(\theta(s)\phi)\,dV_{h}\,ds+\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}(\theta(s)\phi)\,dV_{h}\,dW^{i}(s)\\\ &\quad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}\bigl{(}a_{i}(\theta(s)\phi)\bigr{)}\,dV_{h}\,ds-\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}u\,\theta(s)\phi\,dV_{h}\,ds\\\ &\quad-\sum_{i=1}^{N}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}a_{i}\,\theta(s)\phi\,dV_{h}\,dW^{i}(s)\\\ &\quad-\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\Lambda_{i}(1)\,G_{F}(\rho(s))\,\theta(s)\phi\,dV_{h}\,ds\\\ &\quad+\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}F^{\prime\prime}(\rho(s))\bigl{(}\rho(s)\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,\theta(s)\phi\,dV_{h}\,ds\\\ &\quad-\sum_{i=1}^{N}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\bar{a}_{i}(\theta(s)\phi)\,dV_{h}\,ds.\end{split}$ By density of tensor products [12], this equation continues to hold for any test function $\psi\in C^{\infty}_{c}((-1,T+1)\times M)$ and thus for any $\psi\in C^{\infty}([0,T]\times M)$. ∎ ## 5\. Irregular test functions We need to insert into the weak formulation (4.1) test functions $\psi(t,x)$ that are non-smooth. Clearly, in view of our assumptions, the stochastic integrals in (4.1) are zero-mean martingales. Hence, after taking the expectation in (4.1), we obtain (5.1) $\begin{split}&\mathbb{E}\int_{M}F(\rho(t))\psi(t)\,dV_{h}-\mathbb{E}\int_{M}F(\rho_{0})\psi(0)\,dV_{h}\\\ &\quad=\mathbb{E}\int_{0}^{t}\int_{M}F(\rho(s))\partial_{t}\psi\,dV_{h}\,ds+\mathbb{E}\int_{0}^{t}\int_{M}F(\rho(s))\,u(\psi)\,dV_{h}\,ds\\\ &\quad\qquad+\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t}\int_{M}F(\rho(s))\,a_{i}\bigl{(}a_{i}(\psi)\bigr{)}\,dV_{h}\,ds\\\ &\quad\qquad-\mathbb{E}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\operatorname{div}_{h}u\,\psi\,dV_{h}\,ds\\\ &\quad\qquad-\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t}\int_{M}\Lambda_{i}(1)\,G_{F}(\rho(s))\,\psi\,dV_{h}\,ds\\\ &\quad\qquad+\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t}\int_{M}F^{\prime\prime}(\rho(s))\bigl{(}\rho(s)\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,\psi\,dV_{h}\,ds\\\ &\quad\qquad-\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t}\int_{M}G_{F}(\rho(s))\bar{a}_{i}(\psi)\,dV_{h}\,ds,\end{split}$ which holds for any test function $\psi\in C^{\infty}([0,T]\times M)$. The main result of this section is ###### Lemma 5.1 (non-smooth test functions). Let $\rho$ be a weak $L^{2}$ solution of (1.2) with initial datum $\rho|_{t=0}=\rho_{0}$ and assume that $\rho$ is renormalizable. Fix $F\in C^{2}(\mathbb{R})$ with $F,F^{\prime},F^{\prime\prime}\in L^{\infty}(\mathbb{R})$. Fix a time $t_{0}\in(0,T]$ and consider (5.1) evaluated at $t=t_{0}$. Then (5.1) continues to hold for any $\psi\in W^{1,2,p}([0,t_{0}]\times M)$ with $p>d+2$. ###### Proof. By Proposition 2.1, $W^{1,2,p}([0,t_{0}]\times M)$ compactly embeds into $C^{0}([0,t_{0}]\times M)$ (since $p>d+2$). Moreover, the first order $x$-derivatives of a $W^{1,2,p}$ function belong to $C^{0}([0,t_{0}]\times M)$. Therefore, given a function $\psi\in W^{1,2,p}([0,t_{0}]\times M)$, the very definition of $W^{1,2,p}$ implies the existence of a sequence $\left\\{\psi_{j}\right\\}_{j\geq 1}\subset C^{\infty}([0,t_{0}]\times M)$ such that $\psi_{j}\to\psi$ in $W^{1,2,p}([0,t_{0}]\times M)$. 
Besides, we have $\psi_{j}\to\psi,\quad\nabla\psi_{j}\to\nabla\psi\quad\text{uniformly on $[0,t_{0}]\times M$}.$ We extend the functions $\psi_{j}$ to $C^{\infty}([0,T]\times M)$ by means of Proposition 10.1. These extensions are also denoted by $\psi_{j}$. Consequently, we can insert $\psi_{j}$ into (5.1). Equipped with the above convergences and the assumptions $\rho\in L^{\infty}_{t}L^{2}_{\omega,x}$ and $u\in L^{1}_{t}\overrightarrow{W_{x}^{1,2}}$, it is straightforward (repeated applications of Hölder’s inequality) to verify that (5.1) holds for test functions $\psi$ that belong to $W^{1,2,p}([0,t_{0}]\times M)$. ∎ ## 6\. On the ellipticity of $\sum_{i}a_{i}(a_{i})$, proof of Lemma 1.2 In this section we are going to prove Lemma 1.2. Before doing that, however, let us explain why the second order differential operator $\sum_{i}a_{i}\bigl{(}a_{i}(\cdot)\bigr{)}$, for arbitrary smooth vector fields $\left\\{a_{i}\right\\}_{i}$, may fail to be non-degenerate (elliptic). To this end, we introduce the following (smooth) sections of the endomorphisms over $TM$: (6.1) $\mathcal{A}_{i}(x)X:=\bigl{(}X,a_{i}(x)\bigr{)}_{h}\,a_{i}(x),\qquad x\in M,\,\,X\in T_{x}M,\quad i=1,\ldots,N.$ It is clear that these sections are symmetric with respect to $h$, namely $\bigl{(}\mathcal{A}_{i}(x)X,Y\bigr{)}_{h}=\bigl{(}X,\mathcal{A}_{i}(x)Y\bigr{)}_{h},\qquad x\in M,\,\,X,Y\in T_{x}M.$ Set (6.2) $\mathcal{A}:=\mathcal{A}_{1}+\cdots+\mathcal{A}_{N},$ which is still a smooth section of the symmetric endomorphisms over $TM$. Given the sections $\mathcal{A}_{1},\ldots,\mathcal{A}_{N}$ and $\mathcal{A}$, we define the following second order linear differential operators in divergence form: $\displaystyle C^{2}(M)\ni\psi\mapsto\operatorname{div}_{h}\left(\mathcal{A}_{i}\nabla_{h}\psi\right),\qquad i=1,\ldots,N,$ $\displaystyle C^{2}(M)\ni\psi\mapsto\operatorname{div}_{h}\left(\mathcal{A}\nabla_{h}\psi\right)=\sum_{i=1}^{N}\operatorname{div}_{h}(\mathcal{A}_{i}\nabla_{h}\psi).$ Observe that the following identity holds trivially: $a_{i}\bigl{(}a_{i}(\psi)\bigr{)}=\operatorname{div}_{h}\left(\mathcal{A}_{i}\nabla_{h}\psi\right)-\bar{a}_{i}(\psi),$ thus (6.3) $\sum_{i=1}^{N}a_{i}\bigl{(}a_{i}(\psi)\bigr{)}=\operatorname{div}_{h}\left(\mathcal{A}\nabla_{h}\psi\right)-\sum_{i=1}^{N}\bar{a}_{i}(\psi),$ where $\bar{a}_{i}$ is short-hand for the first order differential operator $(\operatorname{div}_{h}a_{i})\,a_{i}$. Thus $\sum_{i=1}^{N}a_{i}\bigl{(}a_{i}(\cdot)\bigr{)}$ is non-degenerate (elliptic) if and only if $\operatorname{div}_{h}\left(\mathcal{A}\nabla_{h}\cdot\right)$ is so. In view of the identity (6.3), let us see why, for arbitrary vector fields $a_{i}$, the induced differential operator $\sum_{i=1}^{N}a_{i}\bigl{(}a_{i}(\cdot)\bigr{)}$ may be degenerate. From the very definition of $\mathcal{A}$, we have $\bigl{(}\mathcal{A}(x)X,X\bigr{)}_{h}=\sum_{i=1}^{N}\bigl{(}\mathcal{A}_{i}(x)X,X\bigr{)}_{h}=\sum_{i=1}^{N}\bigl{(}X,a_{i}(x)\bigr{)}_{h}^{2},\qquad x\in M,\,\,X\in T_{x}M,$ and the last expression may be zero unless we can find vector fields $a_{i_{1}}(x),\ldots,a_{i_{d}}(x)$ that constitute a basis for $T_{x}M$. Note that this can also happen in the “ideal” case $N=d$: for some manifolds, whatever the choice of the $d$ vector fields, one can find suitable $x\in M$ and $X\in T_{x}M$ such that $\bigl{(}\mathcal{A}(x)X,X\bigr{)}_{h}=0$. The explanation for this fact is geometric in nature. 
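Before turning to that explanation, here is a minimal illustration of the degeneracy; the example is ours and serves only to exhibit the mechanism. Take $M=\mathbb{T}^{2}$ with the flat metric, $N=1$ and $a_{1}=\partial_{1}$. Then $\bigl{(}\mathcal{A}(x)X,X\bigr{)}_{h}=(X^{1})^{2},\qquad X=X^{1}\partial_{1}+X^{2}\partial_{2},$ which vanishes for $X=\partial_{2}$; accordingly, $\operatorname{div}_{h}\left(\mathcal{A}\nabla_{h}\psi\right)=\partial_{1}^{2}\psi$ is degenerate. 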
In general, given an arbitrary $d$-dimensional smooth manifold $M$, it is not possible to construct a global frame, i.e., smooth vector fields $E_{1},\ldots,E_{d}$ forming a basis for $T_{x}M$ for all $x\in M$. If such a frame exists, the manifold is called parallelizable. Examples of parallelizable manifolds are Lie groups (like $\mathbb{R}^{d}$, $\mathbb{T}^{d}$) and $\mathbb{S}^{d}$ with $d\in\left\\{1,3,7\right\\}$. Nevertheless, by compactness of $M$, one can always find vector fields $a_{1},\ldots,a_{N}$ with $N\geq d$ (the number $N$ depending on the geometry of $M$) such that the resulting operator $\operatorname{div}_{h}\left(\mathcal{A}\nabla_{h}\cdot\right)$ becomes the Laplace-Beltrami operator (and hence elliptic). In other words, to implement our strategy of using noise to avoid density concentrations, we must add to the original SPDE (1.2) as many independent Wiener processes and first order differential operators $\bar{a}_{1},\ldots,\bar{a}_{N}$ as deemed necessary by the geometry of the manifold itself. This is a new phenomenon, which reveals an intimate connection between the Riemannian structure of the underlying domain $M$ and the (gradient) noise, a connection that is somehow hidden in the Euclidean case [3, 6, 19], where one always resorts to the canonical differential operators $a_{i}=\partial_{i}$ and thus $\sum_{i=1}^{N}a_{i}\bigl{(}a_{i}(\cdot)\bigr{)}=\operatorname{div}_{h}\left(\mathcal{A}\nabla_{h}\cdot\right)=\Delta\cdot$ (with $N=d$). Having said all of that, let us now return to the proof of Lemma 1.2, which will be a trivial consequence of the following crucial result: ###### Lemma 6.1. There exist $N=N(M)$ smooth vector fields $a_{1},\ldots,a_{N}$ on $M$ such that the corresponding section $\mathcal{A}$, cf. (6.1) and (6.2), satisfies $\bigl{(}\mathcal{A}(x)X,Y\bigr{)}_{h}=2\left(X,Y\right)_{h},\qquad\forall x\in M,\,\,\forall X,Y\in T_{x}M.$ Consequently, $\mathcal{A}(x)=2\,I_{T_{x}M}$ for all $x\in M$. ###### Proof. Let $p\in M$. Then, by means of the Gram-Schmidt algorithm, we can easily construct a local orthonormal frame near $p$, that is, a local frame $E_{p,1},\ldots,E_{p,d}$ defined in an open neighborhood $\mathcal{U}_{p}$ of $p$ that forms an orthonormal basis for the tangent space at each point of the neighborhood (see [32, p. 24] for details). Since $\left\\{\mathcal{U}_{p}\right\\}_{p\in M}$ forms an open covering of $M$, the compactness of $M$ ensures the existence of $p_{1},\ldots,p_{L}\in M$ such that $\bigcup_{j=1}^{L}\mathcal{U}_{p_{j}}=M$ and a collection of locally smooth vector fields $\left\\{E_{p_{j},i}\right\\}_{\overset{i=1,\ldots,d}{j=1,\ldots,L}}$ with the aforementioned property. Let us now consider a smooth partition of unity subordinate to $\left\\{\mathcal{U}_{p_{j}}\right\\}_{j=1}^{L}$, which we may write as $\left\\{\alpha_{j}^{2}\right\\}_{j=1}^{L}$, where $\alpha_{j}\in C^{\infty}(M)$ and $\sum_{j=1}^{L}\alpha^{2}_{j}=1$. Set $\tilde{E}_{p_{j},i}:=\alpha_{j}E_{p_{j},i}$, for $i=1,\ldots,d$ and $j=1,\ldots,L$. Extending these vector fields by zero outside their supports, we obtain global smooth vector fields on $M$. Observe that if $\alpha_{j}(x)\neq 0$, then $x\in\left(\operatorname{supp}\alpha_{j}\right)^{\circ}=\left(\operatorname{supp}\alpha^{2}_{j}\right)^{\circ}\subset\mathcal{U}_{p_{j}}$. As a result, $E_{p_{j},1}(x),\ldots,E_{p_{j},d}(x)$ constitute an orthonormal basis for $T_{x}M$. 
For convenience, we rename the vector fields $\left\\{\tilde{E}_{p_{j},i}\right\\}_{\overset{i=1,\cdots,d}{j=1,\cdots,L}}$ as $\beta_{1},\ldots,\beta_{N}$, where $N:=d\cdot L$. As before, we define sections $\mathcal{B}_{i}$ of the endomorphisms over $TM$ by setting $\mathcal{B}_{i}(x)X:=\bigl{(}X,\beta_{i}(x)\bigr{)}_{h}\,\beta_{i}(x),\qquad x\in M,\,\,X\in T_{x}M,$ for $i=1,\ldots,N$, and $\mathcal{B}:=\mathcal{B}_{1}+\cdots+\mathcal{B}_{N}$. For an arbitrary $x\in M$ and $X\in T_{x}M$, we compute $\displaystyle\bigl{(}\mathcal{B}(x)X,X\bigr{)}_{h}$ $\displaystyle=\sum_{k=1}^{N}\bigl{(}X,\beta_{k}(x)\bigr{)}_{h}^{2}=\sum_{j=1}^{L}\sum_{i=1}^{d}\left(X,\tilde{E}_{p_{j},i}(x)\right)_{h}^{2}$ $\displaystyle=\sum_{j:\alpha_{j}(x)\neq 0}\,\sum_{i=1}^{d}\left(X,E_{p_{j},i}(x)\right)_{h}^{2}\alpha^{2}_{j}(x)$ $\displaystyle=\sum_{j:\alpha_{j}(x)\neq 0}\alpha^{2}_{j}(x)\sum_{i=1}^{d}\left(X,E_{p_{j},i}(x)\right)_{h}^{2}$ $\displaystyle=\sum_{j:\alpha_{j}(x)\neq 0}\alpha^{2}_{j}(x)\left|X\right|_{h}^{2}=\left|X\right|_{h}^{2}.$ By the polarization identity for inner products and the symmetry of $\mathcal{B}$, this last equality implies that $\bigl{(}\mathcal{B}(x)X,Y\bigr{)}_{h}=\left(X,Y\right)_{h},\qquad\forall x\in M,\,\,\forall X,Y\in T_{x}M,$ and thus $\mathcal{B}(x)=I_{T_{x}M}$. Setting $a_{i}:=\sqrt{2}\beta_{i}$, $i=1,\ldots,N$, concludes the proof of the lemma. ∎ ###### Proof of Lemma 1.2. Fix $\psi\in C^{2}(M)$. In view of Lemma 6.1, the identity (6.3) becomes $\sum_{i=1}^{N}a_{i}\bigl{(}a_{i}(\psi)\bigr{)}=2\operatorname{div}_{h}\left(\nabla_{h}\psi\right)-\sum_{i=1}^{N}\bar{a}_{i}(\psi)=2\Delta_{h}\psi-\sum_{i=1}^{N}\bar{a}_{i}(\psi),$ where $\bar{a}_{i}=(\operatorname{div}_{h}a_{i})\,a_{i}$. ∎ From now on, we will be using the vector fields $a_{1},\ldots,a_{N}$ constructed in Lemma 6.1, in which case the Itô SPDE (1.3) becomes (6.4) $d\rho+\operatorname{div}_{h}\left(\rho\left[u-\frac{1}{2}\sum_{i=1}^{N}\bar{a}_{i}\right]\right)\,dt+\sum_{i=1}^{N}\operatorname{div}_{h}(\rho\,a_{i})\,dW^{i}(t)-\Delta_{h}\rho\,dt=0.$ The space-weak formulation of this SPDE is $\begin{split}&\int_{M}\rho(t)\psi\,dV_{h}=\int_{M}\rho_{0}\psi\,dV_{h}+\int_{0}^{t}\int_{M}\rho(s)\left[\,u(\psi)-\frac{1}{2}\sum_{i=1}^{N}\bar{a}_{i}(\psi)\right]\,dV_{h}\,ds\\\ &\qquad+\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\rho(s)\,a_{i}(\psi)\,dV_{h}\,dW^{i}(s)+\int_{0}^{t}\int_{M}\rho(s)\,\Delta_{h}\psi\,dV_{h}\,ds,\end{split}$ cf. Definition 1.2 and equation (1.4). ## 7\. Test function for duality method In this section we first construct a solution to the following parabolic Cauchy problem on the manifold $M$: given $0<t_{0}\leq T$, solve (7.1) $\begin{cases}\partial_{t}v-\Delta_{h}v+b(t,x)v=f(t,x)\;\;\mbox{on $[0,t_{0}]\times M$},\\\ v(0,x)=0\;\;\mbox{on $M$},\end{cases}$ where $b$ and $f$ are given irregular functions in $L^{p}([0,t_{0}]\times M)$ (with $p\geq 1$ to be fixed later). We follow the strategy outlined in [4, p. 131] (for smooth $b,f$), making use of Fredholm theory and anisotropic Sobolev spaces. Toward the end of this section, we utilize the solution of (7.1) to construct a test function that will form the core of a duality argument given in an upcoming section. Consider the space $W^{1,2,p}_{0}([0,t_{0}]\times M)$, which is the subspace of functions in the anisotropic Sobolev space $W^{1,2,p}([0,t_{0}]\times M)$ vanishing at $t=0$. Let $L$ designate the heat operator on $M$, namely $L=\partial_{t}-\Delta_{h}$. According to [4, Thm. 
4.45], $L$ is an isomorphism of $W^{1,2,p}_{0}([0,t_{0}]\times M)$ onto $L^{p}([0,t_{0}]\times M)$ for $1\leq p<\infty$. Consider the multiplication operator $K_{b}:W^{1,2,p}_{0}([0,t_{0}]\times M)\to L^{p}([0,t_{0}]\times M),\qquad v\stackrel{{\scriptstyle K_{b}}}{{\mapsto}}bv.$ To guarantee that this operator is well-defined, we must assume $p>d+2$. In this way, in view of Proposition 2.1, $W^{1,2,p}_{0}([0,t_{0}]\times M)$ compactly embeds into $C^{0}([0,t_{0}]\times M)$ and the first order space-derivatives of $v\in W^{1,2,p}_{0}([0,t_{0}]\times M)$ are continuous on $[0,t_{0}]\times M$. It then follows that $\int_{0}^{t_{0}}\int_{M}\left|bv\right|^{p}\,dV_{h}\,dt\leq\left\|v\right\|_{C^{0}}^{p}\int_{0}^{t_{0}}\int_{M}\left|b\right|^{p}\,dV_{h}\,dt,$ guaranteeing that $K_{b}$ is well-defined. Claim. $K_{b}$ is compact. First of all, $K_{b}$ is continuous: $\left\|K_{b}v\right\|_{L^{p}}\leq\left\|v\right\|_{C^{0}}\left\|b\right\|_{L^{p}}\leq C\left\|v\right\|_{W^{1,2,p}_{0}}\left\|b\right\|_{L^{p}},$ where $C>0$ is a constant coming from the anisotropic Sobolev embedding, consult Proposition 2.1. Clearly, $W^{1,2,p}_{0}([0,t_{0}]\times M)$ is reflexive, being a closed subspace of $W^{1,2,p}([0,t_{0}]\times M)$. Hence, to arrive at the claim, it is enough to prove that $K_{b}$ is completely continuous. Recall that a bounded linear operator $T:X\to Y$ between Banach spaces is called completely continuous if weakly convergent sequences in $X$ are mapped to strongly converging sequences in $Y$. Let $\left\\{v_{n}\right\\}_{n\geq 1}$ be a sequence in $W^{1,2,p}_{0}([0,t_{0}]\times M)$ such that $v_{n}\rightharpoonup v\in W^{1,2,p}_{0}$. By the compact embedding $W^{1,2,p}_{0}\subset\subset C^{0}$, $v_{n}\to v$ in $C^{0}$. Hence, $\left\|K_{b}v_{n}-K_{b}v\right\|_{L^{p}}\leq\left\|v_{n}-v\right\|_{C^{0}}\left\|b\right\|_{L^{p}}\to 0,$ and so $K_{b}$ is completely continuous. This concludes the proof of the claim. Next, being an isomorphism, $L$ is a Fredholm operator from $W^{1,2,p}_{0}$ to $L^{p}$. This implies that $L+K_{b}$ is a Fredholm operator, with index $\operatorname{Ind}\left(L+K_{b}\right)=\operatorname{Ind}\left(L\right)$, where trivially $\operatorname{Ind}\left(L\right)=0$ ($L$ is invertible). Thus our goal is to verify the following claim: $\ker\left(L+K_{b}\right)$ is trivial or, equivalently (the index being zero), $\operatorname{codim}\left(R(L+K_{b})\right)=0$. If this claim holds, then we will be able to conclude that (7.1) is solvable for any $f\in L^{p}([0,t_{0}]\times M)$. The proof of the claim is divided into three main steps. Step 1: $b\in C^{\infty}([0,t_{0}]\times M)$. Our aim is to show that $\ker\left(L+K_{b}\right)$ is trivial. Let $v\in W^{1,2,p}_{0}([0,t_{0}]\times M)$ solve $(L+K_{b})v=0$. Since $p>d+2$ and $b$ is smooth, it follows from parabolic regularity theory that $v$ is (at least) in $C^{1,2}([0,t_{0}]\times M)$. Indeed, by the anisotropic Sobolev embedding (Proposition 2.1), $v\in C^{0,\gamma}([0,t_{0}]\times M)$ with $\gamma=1-\frac{1+d}{p}$. Therefore, $Lv=-bv\in C^{0,\gamma}([0,t_{0}]\times M),$ and $v(0,\cdot)=0$ on $M$. Parabolic regularity theory (see e.g. [4, p. 130]) implies that $\partial_{t}v$ and the second derivatives of $v$ with respect to $x$ are Hölder continuous. By the chain rule, the function $\psi:=\frac{v^{2}}{2}$ satisfies $L\psi=-\left|\nabla_{h}v\right|_{h}^{2}-2b\psi\leq-2b\psi.$ Since $b$ is bounded and $\psi(0,x)=0$, the maximum principle (cf. [10, Prop. 4.3]) implies that $\psi\leq 0$ everywhere. On the other hand, $\psi\geq 0$ by definition. 
It follows that $\psi\equiv 0$, and so $v\equiv 0$. Hence, given any $b\in C^{\infty}([0,t_{0}]\times M)$, the Cauchy problem (7.1) admits a unique solution for any $f\in L^{p}([0,t_{0}]\times M)$. Step 2: A priori estimates (smooth data). Let us consider the more general problem (7.2) $\begin{cases}\partial_{t}v-\Delta_{h}v+b(t,x)v=g(t,x)\;\;\mbox{on $[0,t_{0}]\times M$},\\\ v(0,x)=c(x)\;\;\mbox{on $M$},\end{cases}$ where $b,g\in C^{\infty}([0,t_{0}]\times M)$ and $c\in C^{\infty}(M)$. This problem admits a unique solution $v\in C^{1,2}([0,t_{0}]\times M)$, given by $v=\tilde{v}+c$, where $\tilde{v}$ solves (7.1) with right-hand side $f=g-cb+\Delta_{h}c\in L^{p}([0,t_{0}]\times M)$. From known a priori estimates for the heat equation on manifolds (cf. [4, Thm. 4.45]), there is a constant $C_{0}=C_{0}(p,M)$ such that (recall $\tilde{v}=v-c$) $\displaystyle\left\|v\right\|_{W^{1,2,p}}$ $\displaystyle=\left\|\tilde{v}+c\right\|_{W^{1,2,p}}\leq\left\|\tilde{v}\right\|_{W^{1,2,p}}+T\left\|c\right\|_{W^{2,p}(M)}$ $\displaystyle\leq C_{0}\left\|g-bc+\Delta_{h}c\right\|_{L^{p}}+T\left\|c\right\|_{W^{2,p}(M)}$ $\displaystyle\leq C_{0}\left[\left\|g\right\|_{L^{p}}+\left\|b\right\|_{L^{p}}\left\|c\right\|_{C^{0}(M)}+T\left\|\Delta_{h}c\right\|_{L^{p}(M)}\right]+T\left\|c\right\|_{W^{2,p}(M)},$ where $W^{2,p}(M)$ denotes the standard Sobolev space on $(M,h)$, which embeds into $C^{0}(M)$ (recall $p>2+d$). Therefore, for a constant $C=C(p,M,T)$, we infer (7.3) $\left\|v\right\|_{W^{1,2,p}}\leq C\left[\left\|g\right\|_{L^{p}}+\left\|c\right\|_{W^{2,p}(M)}\bigl{(}\left\|b\right\|_{L^{p}}+1\bigr{)}\right].$ Summarizing, the general Cauchy problem (7.2) with $b,g\in C^{\infty}([0,t_{0}]\times M)$ and $c\in C^{\infty}(M)$ admits a unique solution $v\in C^{1,2}([0,t_{0}]\times M)$ satisfying (7.3). Step 3: Well-posedness of (7.2), non-smooth $b,g$. The aim is to prove the well-posedness of (7.2), and thus of (7.1), for irregular $b$ and $g$ in $L^{p}([0,t_{0}]\times M)$. Since $C^{\infty}([0,t_{0}]\times M)$ is dense in $L^{p}([0,t_{0}]\times M)$ [4, Thm. 2.9], there exist sequences $\left\\{b_{n}\right\\}_{n\geq 1}$ and $\left\\{g_{n}\right\\}_{n\geq 1}$ of smooth functions such that $b_{n}\stackrel{{\scriptstyle L^{p}}}{{\to}}b,\qquad g_{n}\stackrel{{\scriptstyle L^{p}}}{{\to}}g.$ From the previous step, there exists a unique solution $v_{n}\in W^{1,2,p}([0,t_{0}]\times M)$ of $\begin{cases}\partial_{t}v_{n}-\Delta_{h}v_{n}+b_{n}(t,x)v_{n}=g_{n}(t,x)\;\;\mbox{on $[0,t_{0}]\times M$},\\\ v_{n}(0,x)=c(x)\;\;\mbox{on $M$}.\end{cases}$ In view of (7.3), $\left\\{v_{n}\right\\}_{n\geq 1}$ is bounded in $W^{1,2,p}([0,t_{0}]\times M)$. Therefore, up to a subsequence, we may assume that $\begin{cases}v_{n}\rightharpoonup v\in W^{1,2,p}([0,t_{0}]\times M),\\\ v_{n}\to v\in C^{0}([0,t_{0}]\times M).\end{cases}$ Given these convergences, it is easy to conclude that $v$ solves the Cauchy problem (7.2) with $b,g\in L^{p}([0,t_{0}]\times M)$ and $c\in C^{\infty}(M)$. We summarize our findings so far in ###### Proposition 7.1 (well-posedness of parabolic Cauchy problem, non-smooth data). Suppose $b$ and $g$ belong to $L^{p}([0,t_{0}]\times M)$. Then there exists a unique solution $v\in W^{1,2,p}([0,t_{0}]\times M)$ to the Cauchy problem (7.2) with initial datum $c\in C^{\infty}(M)$. Furthermore, the a priori estimate (7.3) holds. ###### Proof. The only assertion that remains to be verified is the one about uniqueness, but uniqueness of the solution is an immediate consequence of (7.3). ∎ ###### Remark 7.1. 
The “non-smooth” quantifier in Proposition 7.1 refers to the functions $b$ and $g$ in (7.2). In upcoming applications it is essential that $b,g$ are allowed to be irregular (but a smooth initial function $c$ is fine, like $c\equiv 1$). Let us now consider the special Cauchy problem (7.4) $\begin{cases}\partial_{t}v-\Delta_{h}v+b(t,x)v=-b(t,x)\;\;\mbox{on $[0,t_{0}]\times M$},\\\ v(0,x)=0\;\;\mbox{on $M$},\end{cases}$ with $b\in C^{\infty}([0,t_{0}]\times M)$ and $b\leq 0$. This problem corresponds to (7.2) with a nonnegative smooth source $g$ (namely, $g=-b\geq 0$). By the previous discussion, there exists a unique solution $v\in C^{1,2}([0,t_{0}]\times M)$ to (7.4). Clearly, since the source $-b$ is nonnegative, we have $\partial_{t}v-\Delta_{h}v\geq-b(t,x)v,$ where $b\geq-C$ for some positive constant $C$ (since $b$ is smooth). Thanks to the maximum principle ([10, Prop. 4.3]), this implies that $v\geq 0$ on $[0,t_{0}]\times M$. Next, suppose that $b$ is irregular with $b\in L^{p}([0,t_{0}]\times M)$ ($p>d+2$) and $b\leq 0$ almost everywhere. Let $v\in W^{1,2,p}_{0}([0,t_{0}]\times M)$ be the unique solution of the Cauchy problem (7.4), as supplied by Proposition 7.1. We would like to conclude that $v$ is nonnegative. To this end, approximate $b$ in $L^{p}([0,t_{0}]\times M)$ by $\left\\{b_{n}\right\\}_{n\geq 1}\subset C^{\infty}([0,t_{0}]\times M)$ with $b_{n}\leq 0$ for all $n$, and let $v_{n}$ be the corresponding (unique) solution in $C^{1,2}([0,t_{0}]\times M)$ of $\begin{cases}\partial_{t}v_{n}-\Delta_{h}v_{n}+b_{n}(t,x)v_{n}=-b_{n}(t,x)\;\;\mbox{on $[0,t_{0}]\times M$},\\\ v_{n}(0,x)=0\;\;\mbox{on $M$}.\end{cases}$ Then $v_{n}\geq 0$. By the a priori estimate (7.3), which now reads $\left\|v_{n}\right\|_{W^{1,2,p}}\leq C\left\|b_{n}\right\|_{L^{p}},$ and the previous discussion, we infer that $v_{n}\stackrel{{\scriptstyle C^{0}}}{{\to}}w$ (up to a subsequence), for some limit function $0\leq w\in W^{1,2,p}_{0}([0,t_{0}]\times M)$ that solves (7.4) with $b\in L^{p}([0,t_{0}]\times M)$. By uniqueness, we conclude that $v=w\geq 0$. To summarize, we have proved that for $0\geq b\in L^{p}([0,t_{0}]\times M)$ (with $p>d+2$), there exists a unique solution $0\leq v\in W^{1,2,p}_{0}([0,t_{0}]\times M)$ of (7.4), satisfying $\left\|v\right\|_{W^{1,2,p}}\leq C\left\|b\right\|_{L^{p}}.$ We are now in a position to prove the main result of this section, namely ###### Proposition 7.2 (test function for duality method). Suppose $b\in L^{p}([0,t_{0}]\times M)$ with $p>d+2$ and $b\leq 0$. Then the terminal value problem (7.5) $\begin{cases}\partial_{t}\phi+\Delta_{h}\phi-b(t,x)\phi=0\;\;\mbox{on $[0,t_{0}]\times M$},\\\ \phi(t_{0},x)=1\;\;\mbox{on $M$},\end{cases}$ admits a unique solution $\phi\in W^{1,2,p}([0,t_{0}]\times M)\cap C^{0}([0,t_{0}]\times M)$ with continuous first order spatial derivatives. Moreover, $\phi\geq 1$ everywhere and the following a priori estimates hold: (7.6) $\left\|\phi\right\|_{W^{1,2,p}([0,t_{0}]\times M)}\leq T+C(p,M,T)\left\|b\right\|_{L^{p}([0,t_{0}]\times M)}$ and (consequently) (7.7) $\left\|\phi\right\|_{C^{0}([0,t_{0}]\times M)}+\left\|\nabla\phi\right\|_{C^{0}([0,t_{0}]\times M)}\lesssim_{d,M,p,T}1+\left\|b\right\|_{L^{p}([0,t_{0}]\times M)}.$ ###### Proof. 
The solution $\phi$ of (7.5) is obtained by setting $\phi(t,x):=1+v(t_{0}-t,x)$, where $v\in W^{1,2,p}_{0}([0,t_{0}]\times M)$ is the unique solution of the Cauchy problem $\begin{cases}\partial_{t}v-\Delta_{h}v+\tilde{b}(t,x)v=-\tilde{b}(t,x)\;\;\mbox{on $[0,t_{0}]\times M$},\\\ v(0,x)=0\;\;\mbox{on $M$},\end{cases}$ where $\tilde{b}(t,x):=b(t_{0}-t,x)$. Proposition 7.1, together with the positivity discussion above, therefore supplies the existence and uniqueness of $\phi$, the estimate (7.6), and also the lower bound $\phi\geq 1$ (since $v\geq 0$). The final estimate (7.7) follows from the anisotropic Sobolev inequality (Proposition 2.1) and (7.6). ∎ ###### Remark 7.2. Observe that the right-hand side of (7.6) is non-decreasing in $\left\|b\right\|_{L^{p}}$, a fact that will be exploited in Section 9. ## 8\. $L^{2}$ estimate and uniqueness for weak solutions The main outcome of this section is an a priori estimate that is valid for arbitrary weak $L^{2}$ solutions of the SPDE (1.2), with a rough velocity field $u$ satisfying in particular $\operatorname{div}_{h}u\in L^{p}_{t,x}$ for some $p>d+2$. The proof relies fundamentally on the special noise vector fields $a_{i}$ constructed in Lemma 1.2, the renormalization result provided by Theorem 1.1, and a duality method that makes use of the test function constructed in Proposition 7.2. ###### Theorem 8.1 ($L^{2}$ estimate and uniqueness). Let $\rho$ be an arbitrary weak $L^{2}$ solution of the stochastic continuity equation (1.2), with initial datum $\rho_{0}\in L^{2}(M)$, velocity vector field $u$ satisfying (1.5), (1.6), and (1.7), and noise vector fields $a_{1},\ldots,a_{N}$ given by Lemma 1.2. Then (8.1) $\sup_{0\leq t\leq T}\left\|\rho(t)\right\|^{2}_{L^{2}(\Omega\times M)}\leq C\left\|\rho_{0}\right\|^{2}_{L^{2}(M)},$ where $C=C\left(d,M,p,T,a_{i},\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)},\left\|u\right\|_{\infty}\right)$ is a constant that is non-decreasing in $\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}$ and $\left\|u\right\|_{\infty}$; here, for convenience, we have set $\left\|u\right\|_{\infty}:=\left\|u\right\|_{L^{\infty}\left([0,T];\overrightarrow{L^{\infty}(M)}\right)}$. Furthermore, weak $L^{2}$ solutions are uniquely determined by their initial data. ###### Proof. Since $u\in L^{1}\left([0,T];\overrightarrow{W^{1,2}(M)}\right)$, the weak solution $\rho$ is renormalizable, in view of Theorem 1.1. However, Theorem 1.1 asks for bounded nonlinearities $F$. To handle $F(\xi)=\xi^{2}$, we must employ an approximation (truncation) procedure. We pick any increasing function $\chi\in C^{\infty}\bigl{(}[0,\infty)\bigr{)}$ such that $\chi(\xi)=\xi$ for $\xi\in[0,1]$, $\chi(\xi)=2$ for $\xi\geq 2$, and $\chi(\xi)\in[1,2]$ for $\xi\in(1,2)$, and we set $A_{0}:=\sup_{\xi\geq 0}\chi^{\prime}(\xi)$ (note that necessarily $A_{0}>1$). Set $A_{1}:=\sup_{\xi\geq 0}\left|\chi^{\prime\prime}(\xi)\right|$. We define the rescaled function $\chi_{\mu}(\xi)=\mu\,\chi(\xi/\mu)$, for $\mu>0$. The relevant approximation of $F(\xi)=\xi^{2}$ is $F_{\mu}(\xi):=\chi_{\mu}\left(\xi^{2}\right)$, for $\xi\in\mathbb{R}$, $\mu>0$. 
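For the reader's convenience, let us record the derivatives of $F_{\mu}$ (this short computation is ours and is not spelled out in the text): by the chain rule, $F_{\mu}^{\prime}(\xi)=2\xi\,\chi^{\prime}\bigl{(}\xi^{2}/\mu\bigr{)},\qquad F_{\mu}^{\prime\prime}(\xi)=2\chi^{\prime}\bigl{(}\xi^{2}/\mu\bigr{)}+\frac{4\xi^{2}}{\mu}\,\chi^{\prime\prime}\bigl{(}\xi^{2}/\mu\bigr{)},$ and $\chi^{\prime}(\zeta)=\chi^{\prime\prime}(\zeta)=0$ for $\zeta\geq 2$, that is, for $\left|\xi\right|\geq\sqrt{2\mu}$; the bounds collected below follow from these formulas. 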
Some tedious computations reveal that (8.2) $\begin{split}&F_{\mu}\in C^{\infty}(\mathbb{R}),\quad\lim_{\mu\to\infty}F_{\mu}(\xi)=\xi^{2},\quad\sup_{\xi\in\mathbb{R}}F_{\mu}(\xi)\leq 2\mu,\quad\sup_{\mu>0}F_{\mu}(\xi)\leq 2\xi^{2},\\\ &\sup_{\xi\in\mathbb{R}}\left|F_{\mu}^{\prime}(\xi)\right|\leq 2\sqrt{2}A_{0}\sqrt{\mu},\quad\sup_{\mu>0}\left|F_{\mu}^{\prime}(\xi)\right|\leq 2\sqrt{2}A_{0}\left|\xi\right|,\quad\lim_{\mu\to\infty}F_{\mu}^{\prime}(\xi)=2\xi,\\\ &\lim_{\mu\to\infty}F_{\mu}^{\prime\prime}(\xi)=2,\quad\left|F_{\mu}^{\prime\prime}(\xi)\right|\leq 8A_{1}+2A_{0}.\end{split}$ Furthermore, the function $G_{F_{\mu}}(\xi)=\xi F_{\mu}^{\prime}(\xi)-F_{\mu}(\xi)$ satisfies $\begin{split}&\sup_{\xi\in\mathbb{R}}\left|G_{F_{\mu}}(\xi)\right|\leq\left(4A_{0}+2\right)\mu,\quad\sup_{\mu>0}\left|G_{F_{\mu}}(\xi)\right|\leq 2\left(\sqrt{2}A_{0}+1\right)\xi^{2},\\\ &\qquad\text{and}\quad\lim_{\mu\to\infty}G_{F_{\mu}}(\xi)=\xi^{2},\end{split}$ and the following estimate: (8.3) $\left|G_{F_{\mu}}(\xi)\right|\leq C_{\chi}F_{\mu}(\xi),\quad\left|\xi^{2}F_{\mu}^{\prime\prime}(\xi)\right|\leq C_{\chi}\begin{cases}F_{\mu}(\xi),&\text{for $\left|\xi\right|\leq\sqrt{\mu}$},\\\ \xi^{2},&\text{for $\left|\xi\right|\in\left[\sqrt{\mu},\sqrt{2\mu}\right]$},\\\ O(\mu),&\text{for $\left|\xi\right|>\sqrt{2\mu}$},\end{cases}$ for some constant $C_{\chi}>0$ independent of $\mu$. Fix $t_{0}\in(0,T]$ and consider (5.1) evaluated at $t=t_{0}$ and with $F=F_{\mu}$. Then, in view of the choice of noise vector fields $a_{i}$ (cf. Lemma 1.2), the following equation holds for any $\psi\in W^{1,2,p}([0,t_{0}]\times M)$ (as long as $p>d+2$): (8.4) $\begin{split}&\mathbb{E}\int_{M}F_{\mu}(\rho(t_{0}))\psi(t_{0})\,dV_{h}-\mathbb{E}\int_{M}F_{\mu}(\rho_{0})\psi(0)\,dV_{h}\\\ &\quad=\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\partial_{t}\psi\,dV_{h}\,ds+\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,u(\psi)\,dV_{h}\,ds\\\ &\quad\qquad+\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,\Delta_{h}\psi\,dV_{h}\,ds-\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,\bar{a}_{i}(\psi)\,dV_{h}\,ds\\\ &\quad\qquad-\mathbb{E}\int_{0}^{t_{0}}\int_{M}G_{F_{\mu}}(\rho(s))\operatorname{div}_{h}u\,\psi\,dV_{h}\,ds\\\ &\quad\qquad-\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}\Lambda_{i}(1)\,G_{F_{\mu}}(\rho(s))\,\psi\,dV_{h}\,ds\\\ &\quad\qquad+\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}^{\prime\prime}(\rho(s))\bigl{(}\rho(s)\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,\psi\,dV_{h}\,ds\\\ &\quad\qquad-\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}G_{F_{\mu}}(\rho(s))\bar{a}_{i}(\psi)\,dV_{h}\,ds,\end{split}$ where we have applied Theorem 1.1 to (6.4) and the time-space weak formulation with non-smooth test functions $\psi(t,x)$, cf. Lemmas 4.1 and 5.1. Let $\phi$ be the unique solution of (7.5) with $b=-C_{\chi}\left|\operatorname{div}_{h}u\right|$, where $C_{\chi}>0$ is the constant appearing in (8.3). The existence of $\phi$ is guaranteed by Proposition 7.2. Moreover, $\phi$ belongs to $W^{1,2,p}([0,t_{0}]\times M)\cap C^{0}([0,t_{0}]\times M)$, the estimates (7.6) and (7.7) hold, and $\phi$ is lower bounded by $1$ everywhere in $[0,t_{0}]\times M$. Thanks to Lemma 5.1, we can use $\phi$ as a test function in (8.4). 
Making use of (8.3), we obtain $\displaystyle-\mathbb{E}\int_{0}^{t_{0}}\int_{M}G_{F_{\mu}}(\rho(s))\operatorname{div}_{h}u\,\phi\,dV_{h}\,ds\leq\mathbb{E}\int_{0}^{t_{0}}\int_{M}\left|G_{F_{\mu}}(\rho(s))\right|\left|\operatorname{div}_{h}u\right|\,\phi\,dV_{h}\,ds$ $\displaystyle\qquad\leq\mathbb{E}\int_{0}^{t_{0}}\int_{M}C_{\chi}F_{\mu}(\rho(s))\left|\operatorname{div}_{h}u\right|\,\phi\,dV_{h}\,ds=-\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,b\,\phi\,dV_{h}\,ds.$ Now, recalling that the test function $\phi$ is the unique solution of the PDE problem (7.5), the identity (8.4) (with $\psi=\phi$), combined with the above bound, simplifies to the inequality (8.5) $\begin{split}&\mathbb{E}\int_{M}F_{\mu}(\rho(t_{0}))\phi(t_{0})\,dV_{h}\leq\mathbb{E}\int_{M}F_{\mu}(\rho_{0})\phi(0)\,dV_{h}\\\ &\qquad+\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,u(\phi)\,dV_{h}\,ds-\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,\bar{a}_{i}(\phi)\,dV_{h}\,ds\\\ &\qquad\quad-\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}\Lambda_{i}(1)\,G_{F_{\mu}}(\rho(s))\,\phi\,dV_{h}\,ds\\\ &\qquad\quad\quad+\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}^{\prime\prime}(\rho(s))\bigl{(}\rho(s)\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,\phi\,dV_{h}\,ds\\\ &\qquad\quad\quad\quad-\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}G_{F_{\mu}}(\rho(s))\bar{a}_{i}(\phi)\,dV_{h}\,ds.\end{split}$ Using the fourth property in (8.2) and the estimate (7.7) satisfied by the solution $\phi$ of (7.5) with $b=-C_{\chi}\left|\operatorname{div}_{h}u\right|$, we obtain $\displaystyle-\frac{1}{2}\sum_{i=1}^{N}$ $\displaystyle\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,\bar{a}_{i}(\phi)\,dV_{h}\,ds$ $\displaystyle\leq C(a_{i})\left\|\nabla\phi\right\|_{C^{0}([0,t_{0}]\times M)}\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\,dV_{h}\,ds$ $\displaystyle\leq C\left(a_{i},\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}\right)\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\,dV_{h}\,ds$ $\displaystyle\leq C\left(a_{i},\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}\right)\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\phi(s)\,dV_{h}\,ds,$ where we have also exploited that $\phi\geq 1$. Observe that the constant $C$ is non-decreasing in $\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}$, cf. Remark 7.2, and we do not display its dependence on $d,M,p,T$. Similarly, using also (8.3), we have $-\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}G_{F_{\mu}}(\rho(s))\bar{a}_{i}(\phi)\,dV_{h}\,ds\leq C\,\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\phi(s)\,dV_{h}\,ds,$ for a possibly different constant $C=C\Big{(}a_{i},\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}\Big{)}$, still non-decreasing in $\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}$.
Similar bounds can be derived for the terms on the third and fourth lines of (8.5): $\displaystyle\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}^{\prime\prime}(\rho(s))\bigl{(}\rho(s)\operatorname{div}_{h}a_{i}\bigr{)}^{2}\,\phi(s)\,dV_{h}\,ds\leq C\,\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\phi\,dV_{h}\,ds,$ $\displaystyle-\frac{1}{2}\sum_{i=1}^{N}\mathbb{E}\int_{0}^{t_{0}}\int_{M}\Lambda_{i}(1)\,G_{F_{\mu}}(\rho(s))\,\phi\,dV_{h}\,ds\leq C\,\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\phi(s)\,dV_{h}\,ds.$ Therefore (8.5) becomes $\begin{split}&\mathbb{E}\int_{M}F_{\mu}(\rho(t_{0}))\phi(t_{0})\,dV_{h}\leq\mathbb{E}\int_{M}F_{\mu}(\rho_{0})\phi(0)\,dV_{h}\\\ &\qquad+\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,u(\phi)\,dV_{h}\,ds\\\ &\qquad\qquad+C\left(a_{i},\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}\right)\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\phi(s)\,dV_{h}\,ds.\end{split}$ Arguing as above, since $u\in L^{\infty}([0,T];\overrightarrow{L^{\infty}(M)})$, $\displaystyle\mathbb{E}\int_{0}^{t_{0}}\int_{M}F_{\mu}(\rho(s))\,u(\phi)\,dV_{h}\,ds$ $\displaystyle\qquad\leq C\left(a_{i},\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)},\left\|u\right\|_{\infty}\right)\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\phi(s)\,dV_{h}\,ds,$ where the constant $C$ is non-decreasing in $\left\|u\right\|_{\infty}$ as well. In conclusion, we have obtained (8.6) $\begin{split}&\mathbb{E}\int_{M}F_{\mu}(\rho(t_{0}))\phi(t_{0})\,dV_{h}\\\ &\qquad\leq\mathbb{E}\int_{M}F_{\mu}(\rho_{0})\phi(0)\,dV_{h}+C\,\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\phi(s)\,dV_{h}\,ds,\end{split}$ where $C$ depends in particular on $a_{1},\ldots,a_{N}$, $\left\|u\right\|_{\infty}$, and $\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}$ but not on $\mu$; $C$ is non-decreasing in $\left\|u\right\|_{\infty}$ and $\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}$. By the dominated convergence theorem ($\phi$ is continuous, $\rho_{0}\in L^{2}(M)$), we obtain $\mathbb{E}\int_{M}F_{\mu}(\rho_{0})\phi(0)\,dV_{h}\to\int_{M}\rho_{0}^{2}\,\phi(0)\,dV_{h}$ as $\mu\to\infty$. On the other hand, by Fatou’s lemma, we can send $\mu\to\infty$ in the term on the left-hand side of (8.6), arriving at (8.7) $\mathbb{E}\int_{M}\rho^{2}(t_{0})\phi(t_{0})\,dV_{h}\leq\int_{M}\rho_{0}^{2}\,\phi(0)\,dV_{h}+C\,\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\phi(s)\,dV_{h}\,ds.$ Since $\phi$ is lower bounded by $1$, we can replace the term on the left-hand side by $\mathbb{E}\int_{M}\rho^{2}(t_{0})\,dV_{h}$. On the other hand, in view of (7.7), we can bound the factor $\phi$ in the terms on the right-hand side of (8.7) by $\left\|\phi\right\|_{C^{0}([0,t_{0}]\times M)}\lesssim 1+\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}$, where the implicit constant does not depend on $t_{0}$, and thereby remove it. As a result, (8.7) becomes $\mathbb{E}\int_{M}\rho^{2}(t_{0})\,dV_{h}\leq K\int_{M}\rho_{0}^{2}\,dV_{h}+K\,\mathbb{E}\int_{0}^{t_{0}}\int_{M}\rho^{2}(s)\,dV_{h}\,ds,$ where $K$ depends in particular on $a_{1},\ldots,a_{N}$, $\left\|u\right\|_{\infty}$, and $\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}$, still non-decreasing in $\left\|u\right\|_{\infty}$ and $\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}$.
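For the reader's convenience, we recall the integral form of Grönwall's inequality that will now be applied: if $\Phi\in L^{1}([0,T])$ is nonnegative and satisfies $\Phi(t)\leq a+K\int_{0}^{t}\Phi(s)\,ds$ for a.e. $t\in[0,T]$, with constants $a,K\geq 0$, then $\Phi(t)\leq a\,e^{Kt}$ for a.e. $t\in[0,T]$.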
Setting $\Phi(t_{0}):=\mathbb{E}\int_{M}\rho^{2}(t_{0})\,dV_{h}\in[0,\infty)$ and $\Phi_{0}:=K\int_{M}\rho_{0}^{2}\,dV_{h}\in[0,\infty)$, the last inequality reads as $\Phi(t_{0})\leq\Phi_{0}+K\int_{0}^{t_{0}}\Phi(s)\,ds,\qquad 0<t_{0}\leq T.$ The integrability properties of weak solutions imply $\Phi\in L^{1}([0,t_{0}])$ for any $t_{0}\leq T$. Hence, by Grönwall’s inequality, $\Phi(t)\leq\Phi_{0}e^{Kt},\qquad t\in[0,T].$ This concludes the proof of (8.1), which also implies the uniqueness assertion. ∎ ###### Remark 8.1. Regarding the uniqueness assertion in Theorem 8.1, we mention that it is possible to prove uniqueness without an additional assumption on $\operatorname{div}_{h}u$. This follows from the renormalized formulation (1.8) with $F(\cdot)=\left|\cdot\right|$, modulo an approximation argument. Since the existence of weak solutions (which requires that $\operatorname{div}_{h}u\in L^{p}$) holds in the $L^{2}$ setting, we have chosen not to focus on $L^{1}$ uniqueness. ## 9\. Proof of main result, Theorem 1.3 We divide the proof of Theorem 1.3 into four parts (subsections), starting with the procedure for smoothing the irregular velocity vector field $u$, yielding $u_{\tau}\in C^{\infty}$ such that $u_{\tau}\approx u$ for $\tau>0$ small. In the second subsection we rely on the $L^{2}$ estimate in Theorem 8.1 to ensure weak compactness of a sequence $\left\\{\rho^{\tau}\right\\}_{\tau>0}$ of approximate solutions, obtained by solving the SPDE (1.2) with smooth initial datum $\rho_{0}$ and smooth velocity field $u_{\tau}$. The limit of a weakly converging subsequence is easily shown to be a solution of the SPDE. In the third subsection we remove the assumption that $\rho_{0}$ is smooth. Finally, we prove a technical lemma utilized in the second subsection. ### 9.1. Smoothing of velocity vector field $u$ We extend the vector field $u$ outside of $[0,T]$ by setting $u(t,\cdot)\equiv 0$ for $t<0$ and $t>T$, yielding $u\in L^{\infty}\left(\mathbb{R};\overrightarrow{L^{\infty}(M)}\right)$. Let $\left\\{\mathcal{E}_{\tau}\right\\}_{\tau\geq 0}$ denote the de Rham-Hodge semigroup on 1-forms, associated to the de Rham-Hodge Laplacian on $(M,h)$. We refer to Section 10 for a collection of properties of the heat kernel on forms. For a.e. $t\in\mathbb{R}$ and all $\tau>0$, $\mathcal{E}_{\tau}u(t)$ is a smooth vector field on $M$ and $\left\|\mathcal{E}_{\tau}u(t)\right\|_{\overrightarrow{L^{\infty}(M)}}\leq e^{\varepsilon^{2}\tau}\left\|u(t)\right\|_{\overrightarrow{L^{\infty}(M)}},$ where $\varepsilon$ is a constant such that $\operatorname{Ric}_{M}\geq-\varepsilon^{2}h$. By assumption, we clearly have $u(t)\in\overrightarrow{L^{r}(M)}$ for a.e. $t$ and thus $\mathcal{E}_{\tau}u(t)\overset{\tau\downarrow 0}{\longrightarrow}u(t)\quad\text{in $\overrightarrow{L^{r}(M)}$},\qquad r\in[1,\infty),$ where the exceptional null set of times can be chosen independently of $r$.
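The spatial smoothing just described will now be combined with a mollification in the time variable. For definiteness, one admissible choice (an explicit example of ours; any standard mollifier works) is $\eta(t)=c\,\exp\bigl{(}-\tfrac{1}{1-t^{2}}\bigr{)}$ for $\left|t\right|<1$ and $\eta(t)=0$ otherwise, with the constant $c>0$ normalized so that $\int_{\mathbb{R}}\eta\,dt=1$.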
Let $\eta$ be a standard mollifier on $\mathbb{R}$, and set $\eta_{\tau}(t):=\tau^{-1}\eta\left(\frac{t}{\tau}\right),\quad t\in\mathbb{R}.$ We now define the following vector field: $u_{\tau}(t,x):=\int_{\mathbb{R}}\mathcal{E}_{\tau}u(t^{\prime},x)\eta_{\tau}(t-t^{\prime})\,dt^{\prime}\in T_{x}M,$ which is well-defined because $\displaystyle\left|u_{\tau}(t,x)\right|_{h}$ $\displaystyle\leq\int_{\mathbb{R}}\left|\mathcal{E}_{\tau}u(t^{\prime},x)\right|_{h}\eta_{\tau}(t-t^{\prime})\,dt^{\prime}$ $\displaystyle\leq\int_{\mathbb{R}}\left\|\mathcal{E}_{\tau}u(t^{\prime})\right\|_{\overrightarrow{L^{\infty}(M)}}\eta_{\tau}(t-t^{\prime})\,dt^{\prime}$ $\displaystyle\leq e^{\varepsilon^{2}\tau}\int_{\mathbb{R}}\left\|u(t^{\prime})\right\|_{\overrightarrow{L^{\infty}(M)}}\eta_{\tau}(t-t^{\prime})\,dt^{\prime}<\infty,$ for any $t\in\mathbb{R}$ and $x\in M$. Clearly, $u_{\tau}:\mathbb{R}\times M\to TM$ is smooth in both variables, (9.1) $\left\|u_{\tau}\right\|_{L^{\infty}\left(\mathbb{R};\overrightarrow{L^{\infty}(M)}\right)}\leq e^{\varepsilon^{2}\tau}\left\|u\right\|_{L^{\infty}\left(\mathbb{R};\overrightarrow{L^{\infty}(M)}\right)},$ and $\operatorname{supp}u_{\tau}\subset[-1,T+1]\times M$ for all $\tau\ll 1$. For a.e. $t\in\mathbb{R}$, (9.2) $\operatorname{div}_{h}\left(\mathcal{E}_{\tau}u(t,x)\right)=P_{\tau}\operatorname{div}_{h}u(t,x),\quad x\in M,$ where $P_{\tau}$ is the heat kernel on functions. Indeed, fixing $\phi\in C^{1}(M)$, we compute (cf. [17, eq. 4.3]) $\displaystyle\int_{M}\operatorname{div}_{h}\left(\mathcal{E}_{\tau}u(t,x)\right)\phi\,dV_{h}=-\int_{M}\bigl{(}\mathcal{E}_{\tau}u(t,x),\nabla\phi\bigr{)}_{h}\,dV_{h}$ $\displaystyle\qquad=-\int_{M}\bigl{(}u(t,x),\mathcal{E}_{\tau}\nabla\phi\bigr{)}_{h}\,dV_{h}=-\int_{M}\bigl{(}u(t,x),\nabla P_{\tau}\phi\bigr{)}_{h}\,dV_{h}$ $\displaystyle\qquad=\int_{M}\operatorname{div}_{h}u(t,x)P_{\tau}\phi\,dV_{h}=\int_{M}P_{\tau}\operatorname{div}_{h}u(t,x)\phi\,dV_{h},$ where we have used the relation [17] $\mathcal{E}_{\tau}\nabla\phi=\nabla P_{\tau}\phi,\quad\phi\in C^{1}(M),$ and so the identity (9.2) follows. The next lemma expresses $\operatorname{div}_{h}u_{\tau}$ in terms of $\operatorname{div}_{h}u$. ###### Lemma 9.1 (formula for $\operatorname{div}_{h}u_{\tau}$). For any $t\in\mathbb{R}$ and $x\in M$, $\operatorname{div}_{h}u_{\tau}(t,x)=\int_{\mathbb{R}}\operatorname{div}_{h}\left(\mathcal{E}_{\tau}u(t^{\prime},x)\right)\eta_{\tau}(t-t^{\prime})\,dt^{\prime},$ where $\operatorname{div}_{h}\left(\mathcal{E}_{\tau}u(t^{\prime},x)\right)$ can be computed in terms of $\operatorname{div}_{h}u$ and the heat kernel on functions, cf. (9.2). ###### Proof. Locally expressing $\mathcal{E}_{\tau}u(t,x)$ as $\mathcal{E}_{\tau}u(t,x)=\left(\int_{M}e(\tau,x,y)_{ij}u^{j}(t,y)\,dV_{h}(y)\right)h^{ik}(x)\,\partial_{k},$ for a.e. $t\in\mathbb{R}$, cf.
Section 10, we obtain (temporarily dropping Einstein’s summation convention for $k$) $\displaystyle\partial_{k}\left(\mathcal{E}_{\tau}u(t,x)\right)^{k}$ $\displaystyle=\int_{M}\partial_{k}e\left(\tau,x,y\right)_{ij}u^{j}(t,y)\,dV_{h}(y)\,h^{ik}(x)$ $\displaystyle\qquad+\int_{M}e(\tau,x,y)_{ij}u^{j}(t,y)\,dV_{h}(y)\,\partial_{k}h^{ik}(x),$ and thus $\left|\partial_{k}\left(\mathcal{E}_{\tau}u(t,x)\right)^{k}\right|\leq C(M,\tau)\left\|u\right\|_{\overrightarrow{L^{\infty}(M)}}.$ Therefore we are allowed to interchange $\int_{\mathbb{R}}$ and $\partial_{k}$ to obtain $\partial_{k}\left(u_{\tau}(t,x)\right)^{k}=\int_{\mathbb{R}}\partial_{k}\left(\mathcal{E}_{\tau}u(t^{\prime},x)\right)^{k}\eta_{\tau}(t-t^{\prime})\,dt^{\prime}.$ From here, recalling the local expression for $\operatorname{div}_{h}$ (see Section 2), it is now immediate to conclude that locally $\operatorname{div}_{h}u_{\tau}(t,x)=\int_{\mathbb{R}}\operatorname{div}_{h}\left(\mathcal{E}_{\tau}u(t^{\prime},x)\right)\eta_{\tau}(t-t^{\prime})\,dt^{\prime}.$ ∎ Fix $x\in M$. In view of Lemma 9.1 and basic convolution estimates on $\mathbb{R}$, $\left\|\operatorname{div}_{h}u_{\tau}(\cdot,x)\right\|_{L^{p}(\mathbb{R})}\leq\left\|\operatorname{div}_{h}\mathcal{E}_{\tau}u(\cdot,x)\right\|_{L^{p}(\mathbb{R})}$ for any $\tau>0$, and thus $\left\|\operatorname{div}_{h}u_{\tau}\right\|_{L^{p}(\mathbb{R}\times M)}\leq\left\|\operatorname{div}_{h}\mathcal{E}_{\tau}u\right\|_{L^{p}(\mathbb{R}\times M)}.$ As a result, via (9.2), we obtain (9.3) $\begin{split}\left\|\operatorname{div}_{h}u_{\tau}\right\|_{L^{p}(\mathbb{R}\times M)}&\leq\left\|P_{\tau}\operatorname{div}_{h}u\right\|_{L^{p}(\mathbb{R}\times M)}\\\ &\leq\left\|\operatorname{div}_{h}u\right\|_{L^{p}(\mathbb{R}\times M)}=\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)}.\end{split}$ ### 9.2. Weak compactness of approximate solutions Let $\rho^{\tau}$ be the unique weak $L^{2}$ solution of the SPDE (1.2) with initial datum $\rho_{0}\in C^{\infty}(M)$, noise vector fields $a_{i}$ given by Lemma 1.2, and irregular velocity field $u$ (satisfying the assumptions of Theorem 1.3) replaced by the smooth vector field $u_{\tau}$. 
We refer to Proposition 3.3 and Theorem 8.1 for the existence, uniqueness, and properties of the solution, which satisfies the Itô SPDE $\displaystyle d\rho^{\tau}+\operatorname{div}_{h}\bigl{(}\rho^{\tau}u_{\tau}\bigr{)}\,dt+\sum_{i=1}^{N}\operatorname{div}_{h}\bigl{(}\rho^{\tau}a_{i}\bigr{)}\,dW^{i}(t)$ $\displaystyle\qquad\qquad-\Delta_{h}\rho^{\tau}\,dt-\frac{1}{2}\sum_{i=1}^{N}\operatorname{div}_{h}\bigl{(}\rho^{\tau}\bar{a}_{i}\bigr{)}\,dt=0\quad\text{weakly in $x$, $\mathbb{P}$-a.s.},$ that is, for any $\psi\in C^{\infty}(M)$, the following equation holds $\mathbb{P}$-a.s.: (9.4) $\begin{split}&\int_{M}\rho^{\tau}(t)\psi\,dV_{h}=\int_{M}\rho_{0}\psi\,dV_{h}+\int_{0}^{t}\int_{M}\rho^{\tau}(s)\,u_{\tau}(\psi)\,dV_{h}\,ds\\\ &\qquad+\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\rho^{\tau}(s)\,a_{i}(\psi)\,dV_{h}\,dW^{i}(s)+\int_{0}^{t}\int_{M}\rho^{\tau}(s)\,\Delta_{h}\psi\,dV_{h}\,ds\\\ &\qquad\qquad-\frac{1}{2}\sum_{i=1}^{N}\int_{0}^{t}\int_{M}\rho^{\tau}(s)\,\bar{a}_{i}(\psi)\,dV_{h}\,ds,\qquad t\in[0,T].\end{split}$ In view of (9.1), (9.3), and (8.1), recalling the “monotonicity properties” of the constant $C$, we obtain the $\tau$-independent $L^{2}$ estimate $\sup_{0\leq t\leq T}\left\|\rho^{\tau}(t)\right\|_{L^{2}(\Omega\times M)}\leq C\left(T,a_{i},\left\|\operatorname{div}_{h}u\right\|_{L^{p}([0,T]\times M)},\left\|u\right\|_{\infty}\right)\left\|\rho_{0}\right\|_{L^{2}(M)}.$ In other words, $\left\\{\rho^{\tau}\right\\}_{\tau\in(0,1)}$ is bounded in $L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)$. Since $\left(L^{2}(\Omega\times M)\right)^{\star}$ is separable and $\left([0,T],dt\right)$ is a finite measure space, we know that $L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)$ is the dual of $L^{1}\left([0,T];L^{2}(\Omega\times M)\right)$. Therefore, there exist $\left\\{\tau_{n}\right\\}_{n\geq 1}\subset(0,1)$ with $\tau_{n}\downarrow 0$ and $\rho\in L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)$ such that $\rho^{\tau_{n}}\stackrel{{\scriptstyle\star}}{{\rightharpoonup}}\rho\quad\text{in $L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)$},$ as $n\to\infty$, which means that $\int_{0}^{T}\int_{\Omega}\int_{M}\left(\rho^{\tau_{n}}-\rho\right)\,\theta\,\,\mathbb{P}\otimes dV_{h}\otimes dt\overset{n\uparrow\infty}{\longrightarrow}0,\quad\forall\theta\in L^{1}\left([0,T];L^{2}(\Omega\times M)\right).$ We follow the arguments in [37]. Fix $\phi\in C^{\infty}(M)$. The process $\int_{M}\rho^{\tau_{n}}(t)\phi\,dV_{h}$ is adapted by definition and converges weakly in $L^{2}(\Omega_{T})$ to the process $\int_{M}\rho(t)\phi\,dV_{h}$. Since the space of adapted processes is a closed subspace of $L^{2}(\Omega_{T})$, it is weakly closed, and hence the limit process is adapted. For the same reason, the processes $\int_{M}\rho^{\tau_{n}}(t)a_{i}(\phi)\,dV_{h}$, $i=1,\ldots,N$, are adapted and their Itô integrals are well defined. Since the Itô integral is linear and continuous from the space of adapted $L^{2}(\Omega_{T})$ processes to $L^{2}(\Omega_{T})$, it is also weakly continuous.
As a result, $\int_{0}^{\cdot}\int_{M}\rho^{\tau_{n}}(s)a_{i}(\phi)\,dV_{h}\,dW^{i}_{s}\overset{n\uparrow\infty}{\rightharpoonup}\int_{0}^{\cdot}\int_{M}\rho(s)a_{i}(\phi)\,dV_{h}\,dW^{i}_{s}\quad\text{in $L^{2}(\Omega_{T})$}.$ Exploiting the weak continuity of the time-integrals, $\int_{0}^{\cdot}\int_{M}\rho^{\tau_{n}}(s)\Delta_{h}\phi\,dV_{h}\,ds\overset{n\uparrow\infty}{\rightharpoonup}\int_{0}^{\cdot}\int_{M}\rho(s)\Delta_{h}\phi\,dV_{h}\,ds\quad\text{in $L^{2}(\Omega_{T})$}$ and, for $i=1,\ldots,N$, $\int_{0}^{\cdot}\int_{M}\rho^{\tau_{n}}(s)\bar{a}_{i}(\phi)\,dV_{h}\,ds\overset{n\uparrow\infty}{\rightharpoonup}\int_{0}^{\cdot}\int_{M}\rho(s)\bar{a}_{i}(\phi)\,dV_{h}\,ds\quad\text{in $L^{2}(\Omega_{T})$}.$ It remains to pass to the limit in the term involving the velocity field $u_{\tau}$ in (9.4). The proof of the next lemma is postponed to the end of this section. ###### Lemma 9.2. For any $r\in[1,\infty)$, $u_{\tau}\to u$ in $L^{r}\left(\mathbb{R};\overrightarrow{L^{r}(M)}\right)$ as $\tau\downarrow 0$. Lemma 9.2 immediately implies $u_{\tau_{n}}(\phi)\overset{n\uparrow\infty}{\longrightarrow}u(\phi)\quad\text{in $L^{r}(\mathbb{R}\times M)$},\qquad r\in[1,\infty).$ Using this, the goal is to verify that (9.5) $\int_{M}\rho^{\tau_{n}}u_{\tau_{n}}(\phi)\,dV_{h}\overset{n\uparrow\infty}{\rightharpoonup}\int_{M}\rho u(\phi)\,dV_{h}\quad\text{in $L^{2}(\Omega_{T})$}.$ Fix an arbitrary $\psi\in L^{2}(\Omega_{T})$. Then $\displaystyle I(n)$ $\displaystyle:=\int_{\Omega_{T}}\left(\int_{M}\rho^{\tau_{n}}u_{\tau_{n}}(\phi)\,dV_{h}\right)\psi\,\mathbb{P}\otimes ds-\int_{\Omega_{T}}\left(\int_{M}\rho u(\phi)\,dV_{h}\right)\psi\,\mathbb{P}\otimes ds$ $\displaystyle=\int_{\Omega_{T}}\left(\int_{M}\rho^{\tau_{n}}\left(u_{\tau_{n}}(\phi)-u(\phi)\right)\,dV_{h}\right)\psi\,\mathbb{P}\otimes ds$ $\displaystyle\qquad+\int_{\Omega_{T}}\left(\int_{M}\left(\rho^{\tau_{n}}-\rho\right)u(\phi)\,dV_{h}\right)\psi\,\mathbb{P}\otimes ds=:I_{1}(n)+I_{2}(n).$ By repeated applications of the Cauchy-Schwarz inequality, $\displaystyle\bigl{|}I_{1}(n)\bigr{|}$ $\displaystyle\leq\int_{\Omega_{T}}\left|\psi\right|\left\|\rho^{\tau_{n}}(s)\right\|_{L^{2}(M)}\left\|(u_{\tau_{n}}-u)(\phi)\right\|_{L^{2}(M)}\,\mathbb{P}\otimes ds$ $\displaystyle\leq\int_{0}^{T}\left\|\psi(s)\right\|_{L^{2}(\Omega)}\left\|\rho^{\tau_{n}}(s)\right\|_{L^{2}(\Omega\times M)}\left\|(u_{\tau_{n}}-u)(\phi)\right\|_{L^{2}(M)}\,ds$ $\displaystyle\leq\left\|\rho^{\tau_{n}}\right\|_{L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)}\int_{0}^{T}\left\|\psi(s)\right\|_{L^{2}(\Omega)}\left\|(u_{\tau_{n}}-u)(\phi)\right\|_{L^{2}(M)}\,ds$ $\displaystyle\leq C\left\|\psi\right\|_{L^{2}(\Omega_{T})}\left\|(u_{\tau_{n}}-u)(\phi)\right\|_{L^{2}([0,T]\times M)}\overset{n\uparrow\infty}{\longrightarrow}0.$ For the $I_{2}$ term it is enough to check that $u(\phi)\psi\in L^{1}\left([0,T];L^{2}(\Omega\times M)\right)$, because in that case we would get $I_{2}(n)\to 0$ directly from the definition of the weak-$\star$ convergence $\rho^{\tau_{n}}\stackrel{{\scriptstyle\star}}{{\rightharpoonup}}\rho$.
Indeed, we have $\displaystyle\int_{0}^{T}$ $\displaystyle\left(\int_{\Omega\times M}\left|u(\phi)\psi\right|^{2}\,dV_{h}\,d\mathbb{P}\right)^{1/2}\,ds$ $\displaystyle=\int_{0}^{T}\left\|\psi(s)\right\|_{L^{2}(\Omega)}\left(\int_{M}\left|u(\phi)\right|^{2}\,dV_{h}\right)^{1/2}\,ds$ $\displaystyle\leq\int_{0}^{T}\left\|\psi(s)\right\|_{L^{2}(\Omega)}\left\|\phi\right\|_{C^{1}(M)}\left\|u(s)\right\|_{\overrightarrow{L^{\infty}(M)}}\,ds$ $\displaystyle\leq\left\|\phi\right\|_{C^{1}(M)}\left\|u\right\|_{L^{\infty}\left([0,T];\overrightarrow{L^{\infty}(M)}\right)}\int_{0}^{T}\left\|\psi(s)\right\|_{L^{2}(\Omega)}\,ds$ $\displaystyle\leq\left\|\phi\right\|_{C^{1}(M)}\left\|u\right\|_{L^{\infty}\left([0,T];\overrightarrow{L^{\infty}(M)}\right)}\sqrt{T}\left\|\psi\right\|_{L^{2}(\Omega_{T})}<\infty.$ Therefore $I_{2}(n)\overset{n\uparrow\infty}{\longrightarrow}0$, and thus $I(n)\overset{n\uparrow\infty}{\longrightarrow}0$. This concludes the proof of (9.5). We may now pass to the limit in the SPDE (9.4) with $\tau=\tau_{n}$, to conclude that $\rho$ satisfies (1.4) for a.e. $(\omega,t)\in\Omega_{T}$. Since the right-hand side of (1.4) clearly defines a continuous stochastic process, the process $\int_{M}\rho(\cdot,x)\phi(x)\,dV_{h}(x)$ has a continuous modification. In other words, we have constructed a weak $L^{2}$ solution to (1.2) under the assumption that $\rho_{0}\in C^{\infty}(M)$. ### 9.3. General initial datum, $\rho_{0}\in L^{2}(M)$ To finish off the proof, we must remove the smoothness assumption on the initial datum $\rho_{0}$. We follow the same strategy as above, but this time it is simpler since we have to regularize functions (not vector fields) defined on the manifold $M$. Given $\rho_{0}\in L^{2}(M)$, we employ the heat semigroup $\left\\{P_{\tau}\right\\}_{\tau>0}$ on functions to regularize $\rho_{0}$, see Section 10 for details. The following properties are known: $\displaystyle P_{\tau}\rho_{0}\in C^{\infty}(M),\quad\left\|P_{\tau}\rho_{0}\right\|_{L^{2}(M)}\leq\left\|\rho_{0}\right\|_{L^{2}(M)},$ $\displaystyle\quad\text{and}\quad P_{\tau}\rho_{0}\stackrel{{\scriptstyle L^{2}(M)}}{{\longrightarrow}}\rho_{0}\quad\text{as $\tau\downarrow 0$}.$ According to the previous subsection, there exists a unique weak $L^{2}$ solution $\rho^{\tau}$ of (1.2) with initial datum $P_{\tau}\rho_{0}\in C^{\infty}(M)$, irregular velocity field $u$ satisfying the assumptions listed in Theorem 1.3, and noise vector fields $a_{i}$ given by Lemma 1.2. As before, Theorem 8.1 supplies the estimate $\left\|\rho^{\tau}(t)\right\|_{L^{2}(\Omega\times M)}\leq C_{T}\left\|\rho_{0}\right\|_{L^{2}(M)}$ for all $t\in[0,T]$, where the constant $C_{T}$ is independent of $\tau$. This implies that $\left\\{\rho^{\tau}\right\\}_{\tau\in(0,1)}$ is weakly compact, i.e., there exist a subsequence $\left\\{\tau_{n}\right\\}_{n\geq 1}\subset(0,1)$ with $\tau_{n}\overset{n\uparrow\infty}{\longrightarrow}0$ and a limit $\rho\in L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)$ such that $\rho^{\tau_{n}}\stackrel{{\scriptstyle\star}}{{\rightharpoonup}}\rho\quad\text{in $L^{\infty}\left([0,T];L^{2}(\Omega\times M)\right)$}.$ For any $\phi\in C^{\infty}(M)$, we have trivially that $\int_{M}P_{\tau_{n}}\rho_{0}\,\phi\,dV_{h}\overset{n\uparrow\infty}{\rightharpoonup}\int_{M}\rho_{0}\,\phi\,dV_{h}\quad\text{in $L^{2}(\Omega_{T})$}.$ The limit of the remaining terms in (9.4) can be computed as before, which in the end leads to the conclusion that $\rho$ is a weak $L^{2}$ solution of (1.2). ### 9.4.
Proof of Lemma 9.2 To conclude the proof of Theorem 1.3, we need to verify the validity of Lemma 9.2. Define for convenience $\mathcal{J}_{\tau}(t,x):=\int_{\mathbb{R}}u(t^{\prime},x)\eta_{\tau}(t-t^{\prime})\,dt^{\prime},\quad t\in\mathbb{R},\,\,x\in M.$ We have $\left|u_{\tau}(t,x)-\mathcal{J}_{\tau}(t,x)\right|_{h}\leq\int_{\mathbb{R}}\left|\mathcal{E}_{\tau}u(t^{\prime},x)-u(t^{\prime},x)\right|_{h}\eta_{\tau}(t-t^{\prime})\,dt^{\prime}.$ By basic convolution estimates on $\mathbb{R}$, for any $r\in[1,\infty)$, $\left\|\,\left|u_{\tau}(\cdot,x)-\mathcal{J}_{\tau}(\cdot,x)\right|_{h}\,\right\|_{L^{r}(\mathbb{R})}\leq\left\|\,\left|\mathcal{E}_{\tau}u(\cdot,x)-u(\cdot,x)\right|_{h}\,\right\|_{L^{r}(\mathbb{R})},\quad x\in M,$ where $\left\|\,\left|\cdot\right|_{h}\,\right\|_{L^{r}(\mathbb{R})}^{r}=\int_{\mathbb{R}}\left|\cdot\right|_{h}^{r}\,dt\,$. Thus $\displaystyle\left\|u_{\tau}-\mathcal{J}_{\tau}\right\|_{L^{r}\left(\mathbb{R};\overrightarrow{L^{r}(M)}\right)}$ $\displaystyle\qquad\leq\left\|\mathcal{E}_{\tau}u-u\right\|_{L^{r}\left(\mathbb{R};\overrightarrow{L^{r}(M)}\right)}=\left(\int_{\mathbb{R}}\left\|\mathcal{E}_{\tau}u(t,\cdot)-u(t,\cdot)\right\|_{\overrightarrow{L^{r}(M)}}^{r}\,dt\right)^{1/r}.$ Observe that the integrand in the $dt$-integral converges to zero as $\tau\downarrow 0$ for a.e. $t\in\mathbb{R}$. Furthermore, cf. Section 10, $\left\|\mathcal{E}_{\tau}u(t,\cdot)-u(t,\cdot)\right\|_{\overrightarrow{L^{r}(M)}}\leq\bigl{(}\exp\left(\varepsilon^{2}\left|1-2/r\right|\tau\right)+1\bigr{)}\left\|u(t,\cdot)\right\|_{\overrightarrow{L^{r}(M)}},$ which is integrable on $\mathbb{R}$ by assumption on $u$ (recall that $\operatorname{Ric}_{M}\geq-\varepsilon^{2}h$). Therefore, by means of the dominated convergence theorem, we conclude that $u_{\tau}-\mathcal{J}_{\tau}\to 0$ in $L^{r}\left(\mathbb{R};\overrightarrow{L^{r}(M)}\right)$ as $\tau\downarrow 0$. Hence, with an error term $o(1)\to 0$ as $\tau\downarrow 0$, $u_{\tau}-u=\mathcal{J}_{\tau}-u+o(1),$ so it remains to verify that $\mathcal{J}_{\tau}-u$ converges to zero in $L^{r}_{t}\overrightarrow{L^{r}_{x}}$. Locally we have $\left|\mathcal{J}_{\tau}(t,x)-u(t,x)\right|_{h}\leq C(M,h)\left|\mathcal{J}_{\tau}(t,x)-u(t,x)\right|_{\operatorname{eucl}}$. Since the right-hand side converges to zero in $L^{r}(\mathbb{R})$ for every $x\in M$ by standard mollification estimates (recall that $u(\cdot,x)$ is bounded and compactly supported in time, hence belongs to $L^{r}(\mathbb{R})$), the same holds for the left-hand side. We have $\displaystyle\int_{\mathbb{R}}\int_{M}$ $\displaystyle\left|\mathcal{J}_{\tau}(t,x)-u(t,x)\right|_{h}^{r}dV_{h}(x)\,dt=\int_{M}\int_{\mathbb{R}}\left|\mathcal{J}_{\tau}(t,x)-u(t,x)\right|_{h}^{r}\,dt\,dV_{h}(x)$ $\displaystyle=\sum_{\kappa}\int_{M}\alpha_{\kappa}(z)\left(\int_{\mathbb{R}}\left|\mathcal{J}_{\tau}(t,z)-u(t,z)\right|_{h}^{r}\,dt\right)\,\left|h_{\kappa}(z)\right|^{1/2}\,dz,$ where $(\alpha_{\kappa})_{\kappa}$ is an arbitrary smooth partition of unity. Arguing as we did above, $\left\|\,\left|\mathcal{J}_{\tau}(\cdot,x)-u(\cdot,x)\right|_{h}\,\right\|_{L^{r}(\mathbb{R})}^{r}\leq 2^{r}\left\|\,\left|u(\cdot,x)\right|_{h}\,\right\|_{L^{r}(\mathbb{R})}^{r}$ for any $x\in M$, and hence, by means of the dominated convergence theorem, $\mathcal{J}_{\tau}-u\to 0$ in $L^{r}\left(\mathbb{R};\overrightarrow{L^{r}(M)}\right)$. This concludes the proof of Lemma 9.2. ## 10\. Appendix ### 10.1. Heat kernel on functions We collect here some relevant properties of the heat kernel $H$ on $(M,h)$, that is, the fundamental solution of the heat operator $L=\partial_{t}-\Delta_{h}$:
(1) The mapping $(x,y,t)\mapsto H(x,y,t)$ belongs to $C^{\infty}(M\times M\times(0,\infty))$, is symmetric in $x$ and $y$ for any $t>0$, and is positive. (2) For any function $w\in L^{r}(M)$, $r\in[1,\infty]$, setting (10.1) $P_{t}w(x):=\int_{M}H(x,y,t)w(y)\,dV_{h}(y),\qquad x\in M,\,\,t>0,$ we have $P_{t}w\in C^{\infty}(M)$. Moreover, $\left\|P_{t}w\right\|_{L^{r}(M)}\leq\left\|w\right\|_{L^{r}(M)},\qquad t>0,$ and, for any finite $r\geq 1$, $P_{t}w\stackrel{{\scriptstyle L^{r}(M)}}{{\longrightarrow}}w\quad\text{as $t\to 0^{+}$}.$ For proofs of these basic results, see [26]. ### 10.2. Heat kernel on forms During the proof of Theorem 1.3, we also make use of the heat kernel on forms. We recall here its most salient properties without proofs, referring to [16, 5, 12, 21] for details. Firstly, we define the space $L^{2}(M,h)$ as the closure of the space of smooth 1-forms on $M$ with respect to the norm $\left(\int_{M}\left|\zeta\right|_{h}^{2}\,dV_{h}\right)^{1/2},\qquad\text{where $\left|\zeta\right|_{h}^{2}=h^{ab}\zeta_{a}\zeta_{b}\,$ locally.}$ Denote by $\left\\{\mathcal{E}_{\tau}\right\\}_{\tau\geq 0}$ the de Rham-Hodge semigroup on 1-forms, associated to the de Rham-Hodge Laplacian, which by elliptic regularity has a kernel $e(\tau,\cdot,\cdot)$. More precisely, for any $\tau>0$, $e(\tau,\cdot,\cdot)$ is a double form on $M\times M$, such that for any 1-form $\zeta\in L^{2}(M,h)$ and any $P\in M$, $\left(\mathcal{E}_{\tau}\zeta\right)(P)=\int_{M}e(\tau,P,Q)\wedge\star_{Q}\,\zeta(Q),$ where $\star$ is the Hodge star operator, $Q$ is a point in $M$, and $\wedge$ is the wedge product between forms. Concretely, in a coordinate patch $\left(U,(x^{i})\right)$ around $P$ and in a coordinate patch $\left(U^{\prime},(y^{j})\right)$ around $Q$, if we write the double form $e(\tau,\cdot,\cdot)$ as $e(\tau,x,y)=\left(e(\tau,x,y)_{ij}\,dx^{i}\right)\,dy^{j}$ and $\zeta$ as $\zeta(y)=\zeta_{k}(y)\,dy^{k}$, then the above integral becomes $(\mathcal{E}_{\tau}\zeta)(x)=\left(\int_{M}e(\tau,x,y)_{ij}\,h^{jk}(y)\,\zeta_{k}(y)\,dV_{h}(y)\right)\,dx^{i}.$ For a vector field $V$, we denote by $V^{\flat}$ the 1-form obtained by lowering an index via the metric $h$; analogously, for a 1-form $\zeta$, we denote by $\zeta^{\sharp}$ the vector field obtained by raising an index via the metric. We define for a vector field $V$ the following quantity $\mathcal{E}_{\tau}V:=\left(\mathcal{E}_{\tau}V^{\flat}\right)^{\sharp}.$ Let $\varepsilon\geq 0$ be a constant such that $\operatorname{Ric}_{M}\geq-\varepsilon^{2}h$, where $\operatorname{Ric}_{M}$ denotes the Ricci tensor of $(M,h)$ (the constant $\varepsilon$ clearly exists because $M$ is compact). We have the following remarkable properties: for any $V\in\overrightarrow{L^{p}(M)}$, $p\in[1,\infty]$, • $\mathcal{E}_{\tau}V$ is a smooth vector field, for any $\tau>0$, • $\mathcal{E}_{\tau}V\to V$ in $\overrightarrow{L^{p}(M)}$ as $\tau\downarrow 0$, for any finite $p$, • $\left\|\mathcal{E}_{\tau}V\right\|_{\overrightarrow{L^{p}(M)}}\leq e^{\varepsilon^{2}\left|1-\frac{2}{p}\right|\tau}\,\left\|V\right\|_{\overrightarrow{L^{p}(M)}}$, for any $\tau\geq 0$ [5]. Furthermore, in analogy with (10.1), the following local expression holds: $(\mathcal{E}_{\tau}V)(x)=\left(\int_{M}e(\tau,x,y)_{ij}\,V^{j}(y)\,dV_{h}(y)\right)h^{ik}(x)\,\partial_{k}.$ Finally, one can show that (cf.
[21] for details) $\begin{split}\operatorname{div}\,\mathcal{E}_{\tau}V(x)&=\int_{M}\partial_{k}e(\tau,x,y)_{ij}\,V^{j}(y)\,dV_{h}(y)\,h^{ik}(x)\\\ &\qquad+\int_{M}e(\tau,x,y)_{ij}\,V^{j}(y)\,dV_{h}(y)\,\partial_{k}h^{ik}(x)\\\ &\qquad\qquad+\Gamma^{\rho}_{\rho k}(x)\int_{M}e(\tau,x,y)_{ij}\,V^{j}(y)\,dV_{h}(y)\,h^{ik}(x),\end{split}$ in local coordinates $x$ (differentiation is carried out in $x$). ### 10.3. Proof of Proposition 2.1 Let $\left\\{G_{i}\right\\}_{i=1}^{R}$ be a finite covering of $M$ and $\left\\{(G_{i},\phi_{i})\right\\}_{i=1}^{R}$ the corresponding charts. Without loss of generality, we may assume that $\phi_{i}(G_{i})=B$ for all $i$, where $B$ is the unit ball in $\mathbb{R}^{d}$. Let $\left\\{\alpha_{i}\right\\}_{i=1}^{R}$ be a smooth partition of unity subordinate to $\\{G_{i}\\}_{i=1}^{R}$. On $\operatorname{supp}\alpha_{i}$ the metric tensor $h$ and its derivatives of all orders are bounded in the system of coordinates corresponding to the chart $(G_{i},\phi_{i})$. Define, for $i=1,\ldots,R$, $\psi_{i}:[0,T]\times G_{i}\to[0,T]\times B,\qquad(t,P)\mapsto(t,\phi_{i}(P)),$ which is a finite smooth atlas for $[0,T]\times M$. Observe that $[0,T]\times G_{i}$ is diffeomorphic to $[0,T]\times B$. Moreover, $\tilde{\alpha}_{i}:[0,T]\times M\to[0,1]$, $\tilde{\alpha}_{i}(t,P):=\alpha_{i}(P)$ is a smooth partition of unity subordinate to $\left\\{[0,T]\times G_{i}\right\\}_{i=1}^{R}$. Let $w\in W^{1,2,p}([0,T]\times M)$. Then clearly, in view of the discussion above, $\displaystyle w\in W^{1,2,p}([0,T]\times M)$ $\displaystyle\Longleftrightarrow\tilde{\alpha}_{i}w\in W^{1,2,p}([0,T]\times M),\quad\forall i$ $\displaystyle\Longleftrightarrow\left(\tilde{\alpha}_{i}w\right)\circ\psi^{-1}_{i}\in W^{1,2,p}([0,T]\times B),\quad\forall i,$ where $W^{1,2,p}([0,T]\times B)$ denotes the more familiar Euclidean anisotropic Sobolev space [8], which can be defined similarly via (2.1) with $dV_{h}=dx$ and $\nabla^{k}=\nabla^{k}_{\operatorname{eucl}}$. For this space we have the compact embedding $W^{1,2,p}([0,T]\times B)\subset\subset C^{0,1-\frac{1+d}{p}}\left([0,T]\times\bar{B}\right)$ and $\partial_{x_{j}}\left(\tilde{\alpha_{i}}w\right)\circ\psi^{-1}_{i}\in C^{0,1-\frac{1+d}{p}}\left([0,T]\times\bar{B}\right),\quad j=1,\ldots,d,$ provided $p>d+2$, see [41] for example. 
In particular, for all $i$, $\displaystyle\left\|(\tilde{\alpha_{i}}w)\circ\psi^{-1}_{i}\right\|_{C^{0}\left([0,T]\times\bar{B}\right)}+$ $\displaystyle\left\|\nabla_{\operatorname{eucl}}\left(\tilde{\alpha_{i}}w\right)\circ\psi^{-1}_{i}\right\|_{C^{0}\left([0,T]\times\bar{B}\right)}$ $\displaystyle\leq C(p,d,B)\left\|\left(\tilde{\alpha_{i}}w\right)\circ\psi^{-1}_{i}\right\|_{W^{1,2,p}([0,T]\times B)}.$ Exploiting the boundedness of the metric tensor, we get $\displaystyle\left\|\tilde{\alpha_{i}}w\right\|_{C^{0}([0,T]\times M)}+\left\|\nabla(\tilde{\alpha_{i}}w)\right\|_{C^{0}([0,T]\times M)}$ $\displaystyle\quad=\left\|\tilde{\alpha_{i}}w\right\|_{C^{0}([0,T]\times G_{i})}+\left\|\nabla\left(\tilde{\alpha_{i}}w\right)\right\|_{C^{0}([0,T]\times G_{i})}$ $\displaystyle\quad\leq\left\|\left(\tilde{\alpha_{i}}w\right)\circ\psi^{-1}_{i}\right\|_{C^{0}\left([0,T]\times\bar{B}\right)}+C_{i}\left\|\nabla_{\operatorname{eucl}}\left(\tilde{\alpha_{i}}w\right)\circ\psi^{-1}_{i}\right\|_{C^{0}\left([0,T]\times\bar{B}\right)}$ $\displaystyle\quad\leq C(p,d,B,i)\left\|\left(\tilde{\alpha_{i}}w\right)\circ\psi^{-1}_{i}\right\|_{W^{1,2,p}([0,T]\times B)}$ $\displaystyle\quad\leq C^{\prime}(p,d,B,i)\left\|\tilde{\alpha_{i}}w\right\|_{W^{1,2,p}([0,T]\times M)}.$ Therefore, by the triangle inequality and summing over $i$, $\displaystyle\left\|w\right\|_{C^{0}([0,T]\times M)}+\left\|\nabla w\right\|_{C^{0}([0,T]\times M)}$ $\displaystyle\quad\leq C(p,d,M)\sum_{i=1}^{R}\left\|\tilde{\alpha_{i}}w\right\|_{W^{1,2,p}([0,T]\times M)}\leq C^{\prime}(p,d,M)\left\|w\right\|_{W^{1,2,p}([0,T]\times M)},$ where in the last passage we have used the fact that the derivatives of $\tilde{\alpha}_{i}$ are bounded. The compactness of the embedding is now evident. ### 10.4. An auxiliary result We now prove a useful result about the extension of smooth functions, which is used during the proof of Proposition 5.1. ###### Proposition 10.1 (extension of $C^{\infty}$ functions). Let $0<S<T$ and consider $w\in C^{\infty}([0,S]\times M)$. Then we can extend $w$ to a function $v\in C^{\infty}([0,T]\times M)$. ###### Proof. Let $\left\\{G_{i}\right\\}_{i=1}^{R}$ be a finite covering of $M$ and $\left\\{(G_{i},\phi_{i})\right\\}_{i=1}^{R}$ the corresponding charts. Without loss of generality, we may assume that $\phi_{i}(G_{i})=B$ for all $i$, where $B$ is the unit ball in $\mathbb{R}^{d}$. Let $\left\\{\alpha_{i}\right\\}_{i=1}^{R}$ be a squared smooth partition of unity subordinate to $\left\\{G_{i}\right\\}_{i=1}^{R}$, such that $\sum_{i=1}^{R}\alpha_{i}^{2}=1$. Define, for $i=1,\ldots,R$, $\psi_{i}:[0,T]\times G_{i}\to[0,T]\times B,\qquad(t,P)\mapsto(t,\phi_{i}(P)),$ which is a finite smooth atlas for $[0,T]\times M$. Observe that $[0,T]\times G_{i}$ is diffeomorphic to $[0,T]\times B$. Besides, $\tilde{\alpha}_{i}:[0,T]\times M\to[0,1]$, $\tilde{\alpha}_{i}(t,P):=\alpha_{i}(P)$ is a squared smooth partition of unity subordinate to $\left\\{[0,T]\times G_{i}\right\\}_{i=1}^{R}$. Given $w\in C^{\infty}([0,S]\times M)$, we define $\tilde{w}_{i}\in C^{\infty}([0,S]\times\mathbb{R}^{d})$, $i=1,\ldots,R$, by $\tilde{w}_{i}(t,x):=\begin{cases}\left(\tilde{\alpha}_{i}w\right)\circ\psi_{i}^{-1},&\text{for $(t,x)\in[0,S]\times B$},\\\ 0,&\text{otherwise.}\end{cases}$ Observe that for any $t\in[0,S]$, $\operatorname{supp}\tilde{w}_{i}(t,\cdot)\subset\operatorname{supp}\alpha_{i}\circ\phi_{i}^{-1}$. 
Seeley’s extension theorem [44] supplies an extension operator $\mathcal{E}:C^{\infty}([0,S]\times\mathbb{R}^{d})\to C^{\infty}(\mathbb{R}\times\mathbb{R}^{d}).$ Thanks to this, we can build an extension $\mathcal{E}\tilde{w}_{i}$ of $\tilde{w}_{i}$ in $C^{\infty}(\mathbb{R}\times\mathbb{R}^{d})$. Set $\bar{w}_{i}:=\left(\alpha_{i}\circ\phi_{i}^{-1}\right)\mathcal{E}\tilde{w}_{i}\in C^{\infty}(\mathbb{R}\times\mathbb{R}^{d}),$ and notice that for any $t\in\mathbb{R}$, $\operatorname{supp}\bar{w}_{i}(t,\cdot)\subset\operatorname{supp}\alpha_{i}\circ\phi_{i}^{-1}$. We may lift this function to $M$ by setting $w_{i}(t,P):=\begin{cases}\bar{w}_{i}(t,\phi_{i}(P)),&\text{for $(t,P)\in\mathbb{R}\times\operatorname{supp}\alpha_{i}$},\\\ 0,&\text{otherwise}.\end{cases}$ Clearly, $w_{i}\in C^{\infty}(\mathbb{R}\times M)$ and for $t\in[0,S]$ we have $w_{i}(t,P)=\begin{cases}\alpha_{i}^{2}(P)w(t,P),&\text{for $P\in\operatorname{supp}\alpha_{i}$}\\\ 0,&\text{otherwise.}\end{cases}$ Setting $\displaystyle v:=\sum_{i=1}^{R}w_{i}\in C^{\infty}(\mathbb{R}\times M)\subset C^{\infty}([0,T]\times M)$ we have $v(t,P)=\sum_{i:P\in\operatorname{supp}\alpha_{i}}\alpha_{i}^{2}(P)w(t,P)=w(t,P),\qquad(t,P)\in[0,S]\times M,$ and thus the desired extension is established. ∎ ## References * [1] L. Ambrosio. Transport equation and Cauchy problem for $BV$ vector fields. Invent. Math., 158(2):227–260, 2004. * [2] P. Amorim, M. Ben-Artzi, and P. G. LeFloch. Hyperbolic conservation laws on manifolds: total variation estimates and the finite volume method. Methods Appl. Anal., 12(3):291–323, 2005. * [3] S. Attanasio and F. Flandoli. Renormalized solutions for stochastic transport equations and the regularization by bilinear multiplication noise. Comm. Partial Differential Equations, 36(8):1455–1474, 2011. * [4] T. Aubin. Some nonlinear problems in Riemannian geometry. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 1998. * [5] D. Bakry. étude des transformations de Riesz dans les variétés riemanniennes à courbure de Ricci minorée. In Séminaire de Probabilités, XXI, volume 1247 of Lecture Notes in Math., pages 137–172. Springer, Berlin, 1987. * [6] L. Beck, F. Flandoli, M. Gubinelli, and M. Maurelli. Stochastic ODEs and stochastic linear PDEs with critical drift: regularity, duality and uniqueness. Electron. J. Probab., 24:Paper No. 136, 72, 2019. * [7] M. Ben-Artzi and P. G. LeFloch. Well-posedness theory for geometry-compatible hyperbolic conservation laws on manifolds. Ann. Inst. H. Poincaré Anal. Non Linéaire, 24(6):989–1008, 2007\. * [8] O. V. Besov, V. P. Il’in, and S. M. Nikol’skij. Integral Representations of Functions and Imbedding Theorems, Vol. 1. Wiley, New York, 1978. * [9] P.-L. Chow. Stochastic partial differential equations. Advances in Applied Mathematics. CRC Press, Boca Raton, FL, second edition, 2015. * [10] B. Chow and D. Knopf. The Ricci Flow: An Introduction. American Mathematical Society, 2004. * [11] E. B. Davies. Pointwise bounds on the space and time derivatives of heat kernels. J. Operator Theory, 21(2):367–378, 1989. * [12] G. de Rham. Differentiable manifolds, volume 266 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, 1984. * [13] R. J. DiPerna and P.-L. Lions. Ordinary differential equations, transport theory and Sobolev spaces. Invent. Math., 98(3):511–547, 1989. * [14] H. S. Dumas, F. Golse, and P. Lochak. Multiphase averaging for generalized flows on manifolds. Ergodic Theory Dynam. Systems, 14(1):53–67, 1994. * [15] C. M. Elliott, M. Hairer, and M. R. Scott.
Stochastic partial differential equations on evolving surfaces and evolving Riemannian manifolds. arXiv:1208.5958. * [16] D. Elworthy. Geometric aspects of diffusions on manifolds. In École d’Été de Probabilités de Saint-Flour XV–XVII, 1985–87, volume 1362 of Lecture Notes in Math., pages 277–425. Springer, Berlin, 1988. * [17] S. Fang, H. Li, and D. Luo. Heat semi-group and generalized flows on complete Riemannian manifolds. Bull. Sci. Math., 135(6-7):565–600, 2011. * [18] F. Flandoli. Random perturbation of PDEs and fluid dynamic models, volume 2015 of Lecture Notes in Mathematics. Springer, Heidelberg, 2011. Lectures from the 40th Probability Summer School held in Saint-Flour, 2010. * [19] F. Flandoli, M. Gubinelli, and E. Priola. Well-posedness of the transport equation by stochastic perturbation. Invent. Math., 180(1):1–53, 2010. * [20] A. Friedman. Stochastic Differential Equations and Applications. Dover, 2006. * [21] L. Galimberti and K. H. Karlsen. Well-posedness theory for stochastically forced conservation laws on Riemannian manifolds. J. Hyperbolic Differ. Equ., 16(3):519–593, 2019. * [22] L. Galimberti and K. H. Karlsen. Renormalization of stochastic continuity equations on Riemannian manifolds. arXiv:1912.10731. * [23] B. Gess and M. Maurelli. Well-posedness by noise for scalar conservation laws. Comm. Partial Differential Equations, 43(12):1702–1736, 2018. * [24] B. Gess and S. Smith. Stochastic continuity equations with conservative noise. J. Math. Pures Appl. (9), 128:225–263, 2019. * [25] R. E. Greene and H. Wu. $C^{\infty}$ approximations of convex, subharmonic, and plurisubharmonic functions. Ann. Sci. École Norm. Sup. (4), 12(1):47–84, 1979. * [26] A. Grigor’yan. Heat kernel and analysis on manifolds, volume 47 of AMS/IP Studies in Advanced Mathematics. American Mathematical Society, 2009. * [27] I. Gyöngy. Stochastic partial differential equations on manifolds. I. Potential Anal., 2(2):101–113, 1993. * [28] I. Gyöngy. Stochastic partial differential equations on manifolds. II. Nonlinear filtering. Potential Anal., 6(1):39–56, 1997. * [29] H. Holden, K. H. Karlsen, and P. H. Pang. The Hunter–Saxton equation with noise. J. Differential Equations, 270:725–786, 2021. * [30] H. Kunita. First order stochastic partial differential equations. In Stochastic Analysis, volume 32 of North-Holland Math. Library, pages 249–269, 1982. * [31] H. Kunita. Stochastic flows and stochastic differential equations, volume 24 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1990. * [32] J. M. Lee. Riemannian Manifolds, volume 176 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1997. * [33] P.-L. Lions. Mathematical topics in fluid mechanics. Vol. 1: Incompressible models. Oxford University Press, New York, 1996. * [34] P.-L. Lions. Mathematical topics in fluid mechanics. Vol. 2: Compressible models. Oxford University Press, New York, 1998. * [35] W. Neves and C. Olivera. Well-posedness for stochastic continuity equations with Ladyzhenskaya-Prodi-Serrin condition. NoDEA Nonlinear Differential Equations Appl., 22(5):1247–1258, 2019. * [36] W. Neves and C. Olivera. Stochastic continuity equations–a general uniqueness result. Bulletin of the Brazilian Mathematical Society, New Series, 47:631–639, 2016. * [37] E. Pardoux. Equations aux dérivées partielles stochastiques non linéaires monotones. Etude de solutions fortes de type Ito, PhD Thesis. Université Paris Sud (1975). * [38] P. Protter.
Stochastic integration and differential equations, volume 21 of Applications of Mathematics (New York). Springer-Verlag, Berlin, 1990. * [39] S. Punshon-Smith. Renormalized solutions to stochastic continuity equations with rough coefficients. arXiv:1710.06041. * [40] S. Punshon-Smith and S. Smith. On the Boltzmann equation with stochastic kinetic transport: global existence of renormalized martingale solutions. Arch. Ration. Mech. Anal., 229(2):627–708, 2018. * [41] P. J. Rabier. Vector-valued Morrey’s embedding theorem and Hölder continuity in parabolic problems. Electron. J. Differential Equations, No. 10, 10 pp., 2011. * [42] D. Revuz and M. Yor. Continuous martingales and Brownian motion, volume 293 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, third edition, 1999. * [43] J. A. Rossmanith, D. S. Bale, and R. J. LeVeque. A wave propagation algorithm for hyperbolic systems on curved manifolds. J. Comput. Phys., 199(2):631–662, 2004. * [44] R. T. Seeley. Extension of $C^{\infty}$ functions defined in a half space. Proc. Amer. Math. Soc., 15:625–626, 1964.
# Wasserstein Convergence Rate for Empirical Measures of Markov Chains Adrian Riekert (Faculty of Mathematics and Computer Science, University of Münster, Münster, Germany; e-mail: ariekert@uni-muenster.de) ###### Abstract We consider a Markov chain on $\mathbb{R}^{d}$ with invariant measure $\mu$. We are interested in the rate of convergence of the empirical measures towards the invariant measure with respect to the $1$-Wasserstein distance. The main result of this article is a new upper bound for the expected Wasserstein distance, which is proved by combining the Kantorovich dual formula with a Fourier expansion. In addition, we show how concentration inequalities around the mean can be obtained. Mathematics Subject Classification: 60B10, 60J05, 65C05 Keywords: Empirical Measure, Markov Chains, Wasserstein Distance, Concentration ## 1 Introduction and main results ### 1.1 Empirical measures Let $X_{0},X_{1},X_{2},\ldots$ be a Markov chain on $\mathbb{R}^{d}$ with invariant probability distribution $\mu$. For $n\in\mathbb{N}$ we define the empirical measure $\mu_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}},$ a random probability measure on $\mathbb{R}^{d}$. Under suitable conditions these measures will converge to $\mu$ as $n\to\infty$. The purpose of this article is to quantify the rate of convergence with respect to the $1$-Wasserstein distance given by $W_{1}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|x-y|\ \mathrm{d}\pi(x,y),$ (1) where $\Pi(\mu,\nu)$ denotes the set of all couplings between $\mu$ and $\nu$, i.e. the set of all probability measures on $\mathbb{R}^{d}\times\mathbb{R}^{d}$ with marginals $\mu$ and $\nu$ [20]. This is a classical problem with numerous applications, including, e.g., clustering [14], density estimation [7], and Monte Carlo integration. The special case where the $X_{i}$ are i.i.d. with common distribution $\mu$ has been studied extensively, see, e.g., [11, 1, 10, 6, 21, 15]. The most general and tight results [12, 16] state that $\mathbb{E}[W_{1}(\mu,\mu_{n})]$ is of order $n^{-\nicefrac{{1}}{{d}}}$ if $d\geq 3$ and $\mu$ has a finite $q$-th moment for some $q>\frac{d}{d-1}$. Some of the proofs for the i.i.d. case can be adapted to Markov chains, but usually only under strong additional assumptions, such as absolute continuity of the initial distribution with respect to $\mu$ (see, e.g., [12] and [6]). This requires that one already has access to some approximation of $\mu$ to start the Markov chain with, which is not always the case in applications. We will instead follow the approach from [13], which does not require such an assumption on the initial distribution. ### 1.2 Contractive Markov chains Let $\mathcal{P}(\mathbb{R}^{d})$ denote the set of Borel probability measures on $\mathbb{R}^{d}$ and $\mathcal{P}_{1}(\mathbb{R}^{d})$ the set of measures in $\mathcal{P}(\mathbb{R}^{d})$ with a finite first moment. On this set the Wasserstein distance $W_{1}$ is finite and possesses the following dual formulation. ###### Lemma 1.1 (Kantorovich duality). If $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R}^{d})$ then $W_{1}(\mu,\nu)=\sup\nolimits_{f\in\textnormal{Lip}_{1}(\mathbb{R}^{d})}\left|\mu(f)-\nu(f)\right|.$ Here $\textnormal{Lip}_{1}(\mathbb{R}^{d})$ denotes the space of all Lipschitz continuous functions on $\mathbb{R}^{d}$ with Lipschitz constant at most $1$. The proof can be found in [20].
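Before introducing the Markov chain setting, the following minimal numerical sketch (ours, not part of the paper) illustrates the object of study in the simplest situation: i.i.d. samples in dimension $d=1$, where the optimal coupling between two empirical measures with the same number of atoms is the sorted (monotone) one, so that $W_{1}$ reduces to the mean absolute difference of order statistics.

import numpy as np

# Minimal illustration (ours): in d = 1 the W1 distance between two
# empirical measures with n atoms each equals the mean absolute difference
# of their order statistics, since the monotone coupling is optimal on R.
def w1_empirical(x, y):
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(1)
for n in [10**2, 10**3, 10**4]:
    # i.i.d. samples from mu = N(0, 1); an independent second sample of the
    # same size serves as a stand-in for mu itself.
    dists = [w1_empirical(rng.standard_normal(n), rng.standard_normal(n))
             for _ in range(20)]
    est = float(np.mean(dists))
    print(n, est, est * np.sqrt(n))  # last column roughly constant: rate n**(-1/2)

Replacing the i.i.d. samples by the points $X_{1},\ldots,X_{n}$ of a Markov chain gives a Monte Carlo approximation of precisely the quantity bounded in Theorem 1.3 below.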
Let $P(x,\mathrm{d}y)$ be a Markov kernel on $\mathbb{R}^{d}$ and $X_{0},X_{1},X_{2},\ldots$ the corresponding time-homogeneous Markov chain defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, with initial distribution $X_{0}\sim\gamma_{0}\in\mathcal{P}(\mathbb{R}^{d})$. Denote by $P^{n}$ the $n$-fold iteration of $P$. As usual, we introduce the averaging operator $(Pf)(x)=\int_{\mathbb{R}^{d}}f(y)P(x,\mathrm{d}y)$ for bounded or nonnegative measurable functions $f$, and similarly the action on measures $\nu$ $(\nu P)(B)=\int_{\mathbb{R}^{d}}P(y,B)\mathrm{d}\nu(y),\quad B\in\mathcal{B}(\mathbb{R}^{d}).$ We require the following assumption on the transition kernel. ###### Assumption 1. There are constants $D\geq 1$ and $\kappa\in(0,1)$ such that $W_{1}(P^{n}(x,\cdot),P^{n}(y,\cdot))\leq D\kappa^{n}|x-y|$ for all $n\in\mathbb{N}$ and $x,y\in\mathbb{R}^{d}$. In particular, this means that $P^{n}(x,\cdot)\in\mathcal{P}_{1}(\mathbb{R}^{d})$ for all $n\in\mathbb{N}$. In the case $D=1$, this assumption is equivalent to the Markov chain having uniformly positive Ricci curvature in the sense of Ollivier [18]. In this case, it is enough to require the condition for $n=1$; the estimate for general $n$ then follows by iterated application. Intuitively, the assumption means that if we start two Markov chains at different points $x$ and $y$ then the chains can be coupled in such a way that they approach each other as $n\to\infty$. See [18] for an introduction to this topic on general metric spaces and several examples of Markov chains which satisfy the assumption. Often one can only ensure the Ricci curvature to be positive by choosing a different (equivalent) metric, which results in the additional factor $D$. This weaker condition is still sufficient for our proofs. The assumption implies a similar relation for arbitrary initial distributions. ###### Lemma 1.2 ([18], Proposition 20). If Assumption 1 holds then for any $\mu_{0},\nu_{0}\in\mathcal{P}_{1}(\mathbb{R}^{d})$ and $n\in\mathbb{N}$ we have $W_{1}(\mu_{0}P^{n},\nu_{0}P^{n})\leq D\kappa^{n}W_{1}(\mu_{0},\nu_{0}).$ Since the space $\mathcal{P}_{1}(\mathbb{R}^{d})$ is complete with respect to $W_{1}$, the Banach fixed point theorem implies that there is a unique invariant probability measure $\mu\in\mathcal{P}_{1}(\mathbb{R}^{d})$ for the Markov chain (i.e. $\mu P=\mu$). Moreover, for any initial distribution $\gamma_{0}\in\mathcal{P}_{1}(\mathbb{R}^{d})$ we have $W_{1}(\gamma_{0}P^{n},\mu)\to 0$ exponentially fast as $n\to\infty$. ### 1.3 Rate of convergence in expectation Our goal is to find estimates from above for the quantity $\mathbb{E}\left[W_{1}(\mu_{n},\mu)\right]=\mathbb{E}\left[\sup\nolimits_{f\in\textnormal{Lip}_{1}(\mathbb{R}^{d})}|\mu_{n}(f)-\mu(f)|\right].$ Hence we need to obtain uniform bounds over $f\in\textnormal{Lip}_{1}(\mathbb{R}^{d})$. To deal with this problem, Boissard [5] uses approximations in terms of the covering numbers $\mathcal{N}_{\delta}(\textnormal{Lip}_{1}(K))$ of the set $\textnormal{Lip}_{1}(K)$ for a compact subset $K\subset\mathbb{R}^{d}$. The proofs assume that the transition kernels satisfy a transportation inequality, which is equivalent to an exponential moment condition and therefore a rather strong assumption. To obtain uniform bounds in $f$, we will instead follow the approach by Kloeckner [13] and use an approximation of $f$ by its Fourier series. For Lipschitz functions, the Fourier series converges uniformly on compact sets and reasonably fast.
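As a quick sanity check of this uniform convergence, consider one concrete function (our illustration, not used in the proofs): $f(x)=|x|$ on $[-\pi,\pi]$ has the Fourier expansion $\frac{\pi}{2}-\frac{4}{\pi}\sum_{k\,\text{odd}}\frac{\cos(kx)}{k^{2}}$, so the $N$-term truncation error decays uniformly like $1/N$.

import numpy as np

# Sanity check (ours) of uniform Fourier approximation of a Lipschitz
# function: f(x) = |x| on [-pi, pi] has Fourier series
#   pi/2 - (4/pi) * sum over odd k of cos(k x) / k**2.
x = np.linspace(-np.pi, np.pi, 4001)
f = np.abs(x)
for N in [4, 16, 64, 256]:
    S = np.full_like(x, np.pi / 2)
    for k in range(1, N + 1, 2):  # odd frequencies only
        S -= (4.0 / np.pi) * np.cos(k * x) / k**2
    err = np.max(np.abs(f - S))
    print(N, err, err * N)  # last column roughly constant: uniform error ~ 1/N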
Kloeckner uses this Fourier approach to prove that if the Markov chain is supported on a compact set $K\subset\mathbb{R}^{d}$ and $d\geq 3$ then there is a constant $C$ depending on $K$, $d$, and $D$ such that for all $n$ large enough $\mathbb{E}[W_{1}(\mu,\mu_{n})]\leq C\frac{(\log n^{\prime})^{d-2+\nicefrac{{1}}{{d}}}}{(n^{\prime})^{\nicefrac{{1}}{{d}}}},$ where $n^{\prime}=(1-\kappa)n$ [13, Theorem 1.1]. This rate of convergence is only a power of a logarithm slower than the one for the independent case. For $d=2$ one has $\mathbb{E}[W_{1}(\mu,\mu_{n})]\leq C\frac{\log n^{\prime}}{(n^{\prime})^{\nicefrac{{1}}{{2}}}}$ and for $d=1$ it holds that $\mathbb{E}[W_{1}(\mu,\mu_{n})]\leq C\frac{(\log n^{\prime})^{\nicefrac{{1}}{{2}}}}{(n^{\prime})^{\nicefrac{{1}}{{2}}}}$ for $n$ large enough. In the following we will generalize this result to Markov chains on the entire space $\mathbb{R}^{d}$. Since the Fourier series does not converge uniformly on $\mathbb{R}^{d}$, we will use a truncation argument. For this we need the following moment assumption. ###### Assumption 2. There exist some $q>1$ and $M\in(0,\infty)$ such that $\sup_{n\in\mathbb{N}_{0}}\left(\mathbb{E}|X_{n}|^{q}\right)^{\frac{1}{q}}\leq M$. ###### Remark. In the i.i.d. case $X_{i}\sim\mu$, this condition is equivalent to finiteness of the $q$-th moment for $\mu$. This assumption is also necessary if one wants to obtain meaningful estimates for the rate of convergence. If one only assumes finiteness of the first moment then the rate of convergence may be arbitrarily slow [4]. In particular we assume that the initial distribution $\gamma_{0}$ has a finite $q$-th moment. This is the only assumption on $\gamma_{0}$, no absolute continuity or further regularity is needed. Observe that Assumption 2 also implies that the $q$-th moment of the invariant measure $\mu$ is bounded by $M$. Our main result about the speed of convergence of $\mu_{n}$ to $\mu$ is the following. We will always write $\lesssim$ for inequalities which hold up to a constant depending on $d$ and $D$. ###### Theorem 1.3. Suppose that Assumptions 1 and 2 are satisfied. With $n^{\prime}=(1-\kappa)n$ it holds for all large enough $n$ that $\mathbb{E}[W_{1}(\mu,\mu_{n})]\lesssim\begin{cases}M\left(\frac{(\log n^{\prime})^{d-2+\nicefrac{{1}}{{d}}}}{(n^{\prime})^{\nicefrac{{1}}{{d}}}}\right)^{1-\nicefrac{{1}}{{q}}},\quad&d\geq 3\\\ M\left(\frac{\log n^{\prime}}{(n^{\prime})^{\nicefrac{{1}}{{2}}}}\right)^{1-\nicefrac{{1}}{{q}}},\quad&d=2\\\ M\left(\frac{(\log n^{\prime})^{\nicefrac{{1}}{{2}}}}{(n^{\prime})^{\nicefrac{{1}}{{2}}}}\right)^{1-\nicefrac{{1}}{{q}}},\quad&d=1.\end{cases}$ In particular, for large $q$ we almost obtain the result from the compact case. Moreover, the effective sample size $n^{\prime}=(1-\kappa)n$ is proportional to $1-\kappa$. Hence for $\kappa$ close to $1$ we need more samples of $X_{i}$ to obtain a good approximation compared to the independent case. For the proof, we will approximate a Lipschitz function $f\in\operatorname{Lip}_{1}(\mathbb{R}^{d})$ by its Fourier series uniformly on a compact set $K=[-R,R]^{d}$. The Lipschitz continuity allows us to bound the Fourier coefficients independently of $f$. The integrals over the complement of $K$ can be estimated by using the moment condition and the fact that any Lipschitz function is dominated by $|x|^{q}$ for large $|x|$, since $q>1$. In order to verify Assumption 2 in applications, one can use the following simple criterion. ###### Proposition 1.4. Let $f\colon\mathbb{R}^{d}\to[0,\infty)$ be a measurable function with $\mathbb{E}f(X_{0})<\infty$.
Suppose that there are constants $C<\infty$ and $\gamma\in(0,1)$ such that $(Pf)(x)\leq\gamma f(x)+C$ for each $x\in\mathbb{R}^{d}$. Then one has $\sup_{n\in\mathbb{N}_{0}}\mathbb{E}f(X_{n})<\infty$. ###### Proof. We may assume that $\mathbb{E}f(X_{0})\leq\frac{C}{1-\gamma}$, otherwise one can replace $C$ with a larger constant. Then we show by induction that $\mathbb{E}f(X_{n})\leq\frac{C}{1-\gamma}$ for each $n$. By assumption this holds for $n=0$, and moreover $\mathbb{E}f(X_{n+1})=\mathbb{E}\left[(Pf)(X_{n})\right]\leq\gamma\mathbb{E}f(X_{n})+C.$ Hence $\mathbb{E}f(X_{n})\leq\frac{C}{1-\gamma}$ implies $\mathbb{E}f(X_{n+1})\leq\gamma\frac{C}{1-\gamma}+C=\frac{C}{1-\gamma}$, which completes the proof. ∎ If the condition $Pf\leq\gamma f+C$ holds for $f(x)=|x|^{q}$, then Assumption 2 is satisfied. For instance, for the autoregressive chain $X_{n+1}=\kappa X_{n}+\xi_{n+1}$ with i.i.d. innovations satisfying $\mathbb{E}|\xi_{1}|^{q}<\infty$, Minkowski's inequality gives $\left(\mathbb{E}|\kappa x+\xi_{1}|^{q}\right)^{1/q}\leq\kappa|x|+\left(\mathbb{E}|\xi_{1}|^{q}\right)^{1/q}$, so the criterion applies to $f(x)=|x|^{q}$ with any $\gamma\in(\kappa^{q},1)$. ### 1.4 Concentration In addition to estimating the expectation of $W_{1}(\mu,\mu_{n})$, it is also of interest how well the Wasserstein distance concentrates around its expected value. In this section the Markov chain can be supported on an arbitrary Polish metric space $\mathcal{X}$. If the state space is bounded, one can use standard bounded difference methods to obtain the following concentration inequality. ###### Theorem 1.5. Suppose that $(X_{n})$ is an exponentially contracting Markov chain in the sense of Assumption 1, with constants $D=1$ and $\kappa<1$, taking values in a metric space $\mathcal{X}$ with $\operatorname{diam}(\mathcal{X})\leq 1$. Then for all $t\geq 0$ one has $\mathbb{P}\left(W_{1}(\mu,\mu_{n})\geq\mathbb{E}[W_{1}(\mu,\mu_{n})]+t\right)\leq\exp\left(-2(1-\kappa)^{2}\cdot nt^{2}\right).$ For the case of i.i.d. random variables taking values in a metric space with diameter at most 1, Weed and Bach [21] showed that $\mathbb{P}\left(W_{1}(\mu,\mu_{n})\geq\mathbb{E}[W_{1}(\mu,\mu_{n})]+t\right)\leq\exp\left(-2nt^{2}\right)$. We obtain the same sub-Gaussian concentration rate, up to the factor $(1-\kappa)^{2}$. Observe that the rate of concentration does not depend on the dimension or any other specific properties of the state space – the result holds for an arbitrary bounded metric space. It should be noted that Theorem 1.5 improves on [13, Theorem 5.4] by a factor of $4$ in the exponent. In the noncompact case, strong moment assumptions are needed in order to obtain useful concentration inequalities. We say that a measure $\mu\in\mathcal{P}_{1}(\mathcal{X})$ satisfies a transportation inequality $T_{1}(C)$ [3, 8] if for all $\nu\in\mathcal{P}(\mathcal{X})$ with $\nu\ll\mu$ $W_{1}(\mu,\nu)\leq\sqrt{2CH(\nu\mid\mu)},$ where $H(\nu\mid\mu)$ is the relative entropy. Using the Lipschitz properties of the Wasserstein metric, this allows us to prove a concentration inequality for $W_{1}(\mu,\mu_{n})$. ###### Theorem 1.6. Let $(X_{n})$ be an exponentially contracting Markov chain on a metric space $\mathcal{X}$ as in Assumption 1, with $D=1$ and $\kappa<1$. Suppose that both the initial distribution and the transition kernels $P(x,\cdot)$ satisfy $T_{1}(C)$ for all $x\in\mathcal{X}$. Then we have for all $t\geq 0$ and $n\in\mathbb{N}$ that $\mathbb{P}(W_{1}(\mu,\mu_{n})\geq\mathbb{E}[W_{1}(\mu,\mu_{n})]+t)\leq\exp\left(-\tfrac{1}{2C}nt^{2}(1-\kappa)^{2}\right).$ This is the same sub-Gaussian rate of concentration as in the bounded case. ## 2 Proofs ### 2.1 Upper bound for the expectation Suppose that Assumptions 1 and 2 are satisfied. We may assume that $M=1$, since otherwise the metric can be multiplied by $\frac{1}{M}$. We first prove some auxiliary statements.
The first lemma we use is adapted from [13], where it is only formulated for Markov chains on compact sets. ###### Lemma 2.1. Let $\alpha\in(0,1]$ and $f\colon\mathbb{R}^{d}\to\mathbb{C}$ be a bounded, $\alpha$-Hölder continuous function. Then for all $m,n\in\mathbb{N}$ one has $|\mathbb{E}f(X_{n})-\mu(f)|\leq 2D^{\alpha}\kappa^{\alpha n}\operatorname{Hol}_{\alpha}(f)$ and $|\operatorname{Cov}(f(X_{n}),f(X_{m}))|\leq 8D^{\alpha}\kappa^{\alpha|m-n|}\|f\|_{\infty}\operatorname{Hol}_{\alpha}(f),$ where $\operatorname{Hol}_{\alpha}(f)$ denotes the $\alpha$-Hölder constant of $f$. In the case of a compact state space, any Hölder-continuous function is automatically bounded by a value depending on the Hölder constant. But the compactness assumption can be removed if one assumes additionally that $f$ is bounded and combines this with the moment condition. ###### Proof. Let $W_{\alpha}$ be the $1$-Wasserstein distance with respect to the modified metric $|x-y|^{\alpha}$. We first claim that $W_{\alpha}(\mu_{0}P^{n},\nu_{0}P^{n})\leq D^{\alpha}\kappa^{\alpha n}W_{\alpha}(\mu_{0},\nu_{0})$ (2) for any $n\in\mathbb{N}$ and probability measures $\mu_{0},\nu_{0}\in\mathcal{P}_{1}$. Indeed, if $\mu_{0}=\delta_{x}$ and $\nu_{0}=\delta_{y}$ are Dirac measures then by Jensen’s inequality, $\displaystyle W_{\alpha}(\delta_{x}P^{n},\delta_{y}P^{n})$ $\displaystyle\leq(W_{1}(\delta_{x}P^{n},\delta_{y}P^{n}))^{\alpha}\leq D^{\alpha}\kappa^{\alpha n}W_{1}(\delta_{x},\delta_{y})^{\alpha}$ $\displaystyle=D^{\alpha}\kappa^{\alpha n}|x-y|^{\alpha}=D^{\alpha}\kappa^{\alpha n}W_{\alpha}(\delta_{x},\delta_{y}),$ using the assumption that $P$ is a contraction in $W_{1}$. Then we can use the same argument as in the proof of [18, Proposition 20] to obtain the estimate in the general case. To prove the first part of the lemma, we use the dual representation for $W_{\alpha}$, noting that the Hölder constant $\operatorname{Hol}_{\alpha}(f)$ is the Lipschitz seminorm of $f$ with respect to the metric $|x-y|^{\alpha}$. The fact that $\mu$ is the stationary measure together with (2) now implies $\displaystyle|\mathbb{E}f(X_{n})-\mu(f)|$ $\displaystyle=\left|\gamma_{0}(P^{n}f)-\mu(f)\right|=\left|\gamma_{0}(P^{n}f)-\mu(P^{n}f)\right|$ $\displaystyle\leq\operatorname{Hol}_{\alpha}(f)W_{\alpha}(\gamma_{0}P^{n},\mu P^{n})\leq\operatorname{Hol}_{\alpha}(f)D^{\alpha}\kappa^{\alpha n}(W_{\alpha}(\delta_{0},\mu)+W_{\alpha}(\delta_{0},\gamma_{0}))$ $\displaystyle\leq 2D^{\alpha}\kappa^{\alpha n}\operatorname{Hol}_{\alpha}(f).$ Here we used that $W_{\alpha}(\delta_{0},\mu)=\int|x|^{\alpha}\ \mathrm{d}\mu(x)\leq\left(\int|x|^{q}\ \mathrm{d}\mu(x)\right)^{\nicefrac{{\alpha}}{{q}}}\leq 1$ by the moment condition and Jensen’s inequality, applied to the convex function $u\mapsto u^{q/\alpha}$ on $[0,\infty)$. Analogously, this estimate holds for $W_{\alpha}(\delta_{0},\gamma_{0})$, since the initial distribution $\gamma_{0}$ satisfies the same moment condition. This proves the first assertion. For the second one, note first that if the process starts in an arbitrary point $x\in\mathbb{R}^{d}$ then $\left|(P^{n}f)(x)-\mu(f)\right|\leq\operatorname{Hol}_{\alpha}(f)D^{\alpha}\kappa^{\alpha n}W_{\alpha}(\delta_{x},\mu)\leq\operatorname{Hol}_{\alpha}(f)D^{\alpha}\kappa^{\alpha n}(|x|^{\alpha}+1),$ by the triangle inequality and the same argument as before. After translating $f$ we may assume that $\mu(f)=0$. In particular $f$ then takes positive and negative values, so its $L^{\infty}$-norm increases by a factor of at most $2$. 
We also assume that $n\geq m$ and write $n=m+t$. Clearly $|f(X_{m})|\leq\|f\|_{\infty}$, and by the first part we get $|(\mathbb{E}f(X_{n}))(\mathbb{E}f(X_{m}))|\leq 2\|f\|_{\infty}D^{\alpha}\kappa^{\alpha n}\operatorname{Hol}_{\alpha}(f).$ For the term $\mathbb{E}[f(X_{n})f(X_{m})]$, we invoke the Markov property to obtain $\displaystyle|\mathbb{E}[f(X_{n})f(X_{m})]|$ $\displaystyle=|\mathbb{E}\left[f(X_{m})\mathbb{E}[f(X_{m+t})\mid X_{m}]\right]|=|\mathbb{E}\left[f(X_{m})(P^{t}f)(X_{m})\right]|$ $\displaystyle\leq\operatorname{Hol}_{\alpha}(f)D^{\alpha}\kappa^{\alpha t}\left(\mathbb{E}|f(X_{m})|+\mathbb{E}[|f(X_{m})|\cdot|X_{m}|^{\alpha}]\right)$ $\displaystyle\leq 2\|f\|_{\infty}D^{\alpha}\kappa^{\alpha t}\operatorname{Hol}_{\alpha}(f),$ where we used again that $\mathbb{E}|X_{m}|^{\alpha}\leq 1$ by the moment condition. Combining the two estimates yields the result (the additional factor of $2$ comes from translating $f$). ∎ Since we want to use a Fourier approximation, we need estimates in terms of the basis functions $e_{k}(x)=\exp(\pi ik\cdot x)$ for $k\in\mathbb{Z}^{d}$. As the Lipschitz constant of $e_{k}$ grows too rapidly as $\|k\|_{\infty}\to\infty$, we instead apply Lemma 2.1 to obtain bounds in terms of the $\alpha$-Hölder constant of $e_{k}$, for some parameter $\alpha$ to be specified later. The Hölder constant can be bounded as follows. ###### Lemma 2.2 ([13, Lemma 4.2]). For $\alpha\in(0,1]$, $k\in\mathbb{Z}^{d}$ the $\alpha$-Hölder constant of $e_{k}$ satisfies $\operatorname{Hol}_{\alpha}(e_{k})\leq 2^{1-\alpha}\pi^{\alpha}d^{\alpha/2}\|k\|_{\infty}^{\alpha}.$ ###### Proof. Since $|\nabla e_{k}|=\pi|k|$, the function $e_{k}$ is Lipschitz with constant $\operatorname{Lip}(e_{k})\leq\pi|k|\leq\pi\sqrt{d}\|k\|_{\infty}$. On the other hand, one has $\|e_{k}\|_{\infty}=1$, and thus we obtain for $x\not=y$ $\frac{|e_{k}(x)-e_{k}(y)|}{|x-y|^{\alpha}}\leq\min\left(\frac{2}{|x-y|^{\alpha}},\pi\sqrt{d}\cdot\|k\|_{\infty}|x-y|^{1-\alpha}\right).$ If $|x-y|\leq 2(\pi\sqrt{d}\|k\|_{\infty})^{-1}$, then the second term does not exceed $2^{1-\alpha}\pi^{\alpha}d^{\alpha/2}\|k\|_{\infty}^{\alpha}$, and otherwise the first term is not larger than this bound. The claim follows. ∎ These two lemmas enable us to prove the following useful estimate for $|\mu_{n}(e_{k})-\mu(e_{k})|$. ###### Lemma 2.3. For all $\alpha\in(0,1]$, $k\in\mathbb{Z}^{d}$, $n\geq(1-\kappa^{\alpha})^{-1}$ one has $\mathbb{E}|\mu_{n}(e_{k})-\mu(e_{k})|^{2}\lesssim\frac{\|k\|_{\infty}^{2\alpha}}{n(1-\kappa^{\alpha})}.$ ###### Proof. 
Note that $\displaystyle\mathbb{E}|\mu_{n}(e_{k})-\mu(e_{k})|^{2}$ $\displaystyle=\mathbb{E}\Bigl{[}\bigl{|}\tfrac{1}{n}\textstyle{\sum_{j=1}^{n}}e_{k}(X_{j})-\mu(e_{k})\bigr{|}^{2}\Bigr{]}$ $\displaystyle=\tfrac{1}{n^{2}}\bigl{|}\textstyle{\sum_{j=1}^{n}}\bigl{(}\mathbb{E}[e_{k}(X_{j})]-\mu(e_{k})\bigr{)}\bigr{|}^{2}+\tfrac{1}{n^{2}}\textstyle{\sum_{1\leq j,l\leq n}}\operatorname{Cov}(e_{k}(X_{j}),e_{k}(X_{l})).$ Using Lemma 2.1 and Lemma 2.2, we obtain for the covariance term $\displaystyle\frac{1}{n^{2}}\sum_{1\leq j,l\leq n}\left|\operatorname{Cov}(e_{k}(X_{j}),e_{k}(X_{l}))\right|\leq\frac{16D^{\alpha}\pi^{\alpha}d^{\alpha/2}\|k\|_{\infty}^{\alpha}}{n^{2}}\sum_{1\leq j,l\leq n}\kappa^{\alpha|j-l|}\lesssim\frac{\|k\|_{\infty}^{\alpha}}{n(1-\kappa^{\alpha})}.$ Similarly, for the bias term we get $\displaystyle\frac{1}{n^{2}}\bigl{|}\textstyle{\sum_{j=1}^{n}}\bigl{(}\mathbb{E}e_{k}(X_{j})-\mu(e_{k})\bigr{)}\bigr{|}^{2}\lesssim\frac{\|k\|_{\infty}^{2\alpha}}{n^{2}}\left(\textstyle{\sum_{j=1}^{n}}\kappa^{\alpha j}\right)^{2}\leq\|k\|_{\infty}^{2\alpha}\frac{\kappa^{2\alpha}}{n^{2}(1-\kappa^{\alpha})^{2}}\leq\frac{\|k\|_{\infty}^{2\alpha}}{n(1-\kappa^{\alpha})},$ where we used the assumption on $n$ in the last step. The proof is complete. ∎

Now take an arbitrary function $f\in\operatorname{Lip}_{1}(\mathbb{R}^{d})$. To estimate $|\mu_{n}(f)-\mu(f)|$, we can assume $f(0)=0$. The main difference between this work and [13] is the following truncation argument. For some radius $R>0$ to be determined later, let $K:=[-R,R]^{d}$. The idea will be to approximate $f$ on $K$ by its Fourier series $g$, and to use the moment condition to obtain bounds for the integral over $\mathbb{R}^{d}\smallsetminus K=K^{c}$. More precisely, we will use the following lemma, which is a direct consequence of the triangle inequality.

###### Lemma 2.4.

If $g\colon\mathbb{R}^{d}\to\mathbb{C}$ is any bounded measurable function, then $|\mu_{n}(f)-\mu(f)|\leq 2\|f-g\|_{L^{\infty}(K)}+|\mu_{n}(g)-\mu(g)|+\int_{K^{c}}(|f|+|g|)\ \mathrm{d}\mu_{n}+\int_{K^{c}}(|f|+|g|)\ \mathrm{d}\mu.$

Now we can clearly find a $1$-Lipschitz function $\tilde{f}\colon[-2R,2R]^{d}\to\mathbb{R}$ with periodic boundary conditions such that $f|_{K}=\tilde{f}|_{K}$. Then the function $g\colon[-1,1]^{d}\to\mathbb{R}$ defined by $g(x)=\tilde{f}(2Rx)$ is $2R$-Lipschitz and has periodic boundary conditions. Let $\mathcal{F}^{g}(x)=\sum_{k\in\mathbb{Z}^{d}}\hat{g_{k}}e^{\pi ik\cdot x}$ be the Fourier series of $g$, and for given $J\in\mathbb{N}$ let $\mathcal{F}_{J}^{g}(x)=\sum_{k\in\mathbb{Z}^{d},\|k\|_{\infty}\leq J}\hat{g_{k}}e^{\pi ik\cdot x}$ be the approximation of order $J$. Then Theorem 4.4 in [19] implies that $\|\mathcal{F}_{J}^{g}-g\|_{L^{\infty}([-1,1]^{d})}\leq C_{d}R\frac{(\log J)^{d}}{J},$ where $C_{d}\leq Cd2^{d}$ for an absolute constant $C$. Hence if we define $\mathcal{F}_{J}^{f}(x):=\mathcal{F}_{J}^{g}(x/2R)$, then $\|\mathcal{F}_{J}^{f}-f\|_{L^{\infty}([-R,R]^{d})}\leq\|\mathcal{F}_{J}^{f}-\tilde{f}\|_{L^{\infty}([-2R,2R]^{d})}\lesssim R\frac{(\log J)^{d}}{J}.$ Now we apply Lemma 2.4 for $g=\mathcal{F}_{J}^{f}$. To estimate the tail terms for $f$, we will use the moment condition.
Since $f$ is $1$-Lipschitz with $f(0)=0$, we have $|f(x)|\leq|x|$ for any $x\in\mathbb{R}^{d}$ and therefore, by Hölder’s inequality, $\int_{K^{c}}|f|\ \mathrm{d}\mu\leq\int_{K^{c}}|x|\ \mathrm{d}\mu\leq\left(\int_{K^{c}}|x|^{q}\ \mathrm{d}\mu\right)^{\nicefrac{{1}}{{q}}}\left(\mu(K^{c})\right)^{1-\nicefrac{{1}}{{q}}}\leq\left(\mu(K^{c})\right)^{1-\nicefrac{{1}}{{q}}}.$ On the other hand, since $|x|\geq R$ for $x\in K^{c}$, the Markov inequality gives $\mu(K^{c})\leq R^{-q}$. Thus we obtain $\int_{K^{c}}|f|\ \mathrm{d}\mu\leq R^{1-q}.$ Analogously, for the measure $\mu_{n}$ we have $\int_{K^{c}}|f|\ \mathrm{d}\mu_{n}\leq\int_{K^{c}}|x|\ \mathrm{d}\mu_{n}.$ To estimate the tail integrals of $\mathcal{F}_{J}^{f}$, note that by periodicity one has $\displaystyle\sup_{x\in K^{c}}|\mathcal{F}_{J}^{f}(x)|=\|\mathcal{F}_{J}^{f}\|_{L^{\infty}([-2R,2R]^{d})}\lesssim R\frac{(\log J)^{d}}{J}+\|\tilde{f}\|_{L^{\infty}([-2R,2R]^{d})}\lesssim R\frac{(\log J)^{d}}{J}+R\sqrt{d},$ since $\tilde{f}$ is by assumption $1$-Lipschitz. This yields $\int_{K^{c}}|\mathcal{F}_{J}^{f}|\ \mathrm{d}\mu\lesssim\mu(K^{c})\left(R\frac{(\log J)^{d}}{J}+R\sqrt{d}\right)\leq R\frac{(\log J)^{d}}{J}+R^{1-q}\sqrt{d}.$ Analogously, we obtain for $\int_{K^{c}}|\mathcal{F}_{J}^{f}|\ \mathrm{d}\mu_{n}$ the upper bound $\int_{K^{c}}|\mathcal{F}_{J}^{f}|\ \mathrm{d}\mu_{n}\lesssim R\frac{(\log J)^{d}}{J}+R\sqrt{d}\cdot\mu_{n}(K^{c}).$ Hence these terms are of the same order as the corresponding expressions for $|f|$. It remains to estimate the term $|\mu(\mathcal{F}_{J}^{f})-\mu_{n}(\mathcal{F}_{J}^{f})|$ from above. Writing $e_{k}^{R}(x):=e^{\frac{\pi i}{2R}k\cdot x}$, we obtain $\displaystyle|\mu(\mathcal{F}_{J}^{f})-\mu_{n}(\mathcal{F}_{J}^{f})|$ $\displaystyle\leq\sum_{0<\|k\|_{\infty}\leq J}|\hat{g_{k}}||\mu(e_{k}^{R})-\mu_{n}(e_{k}^{R})|$ $\displaystyle\leq\left(\sum_{0<\|k\|_{\infty}\leq J}\|k\|_{\infty}^{2}|\hat{g_{k}}|^{2}\right)^{\nicefrac{{1}}{{2}}}\left(\sum_{0<\|k\|_{\infty}\leq J}\frac{|\mu(e_{k}^{R})-\mu_{n}(e_{k}^{R})|^{2}}{\|k\|_{\infty}^{2}}\right)^{\nicefrac{{1}}{{2}}}$ $\displaystyle\leq 2R\left(\sum_{0<\|k\|_{\infty}\leq J}\frac{|\mu(e_{k}^{R})-\mu_{n}(e_{k}^{R})|^{2}}{\|k\|_{\infty}^{2}}\right)^{\nicefrac{{1}}{{2}}},$ where we used the Cauchy-Schwarz inequality and the fact that $g$ is $2R$-Lipschitz, which leads to the bound for the first factor. Moreover, the term for $k=0$ corresponds to a constant function and can therefore be ignored. Combining all the estimates, we obtain the following lemma.

###### Lemma 2.5.

For all $f\in\operatorname{Lip}_{1}(\mathbb{R}^{d})$ and all $R>0$, $J\in\mathbb{N}$ it holds that $\displaystyle|\mu_{n}(f)-\mu(f)|\lesssim\ R\frac{(\log J)^{d}}{J}+R^{1-q}+\int_{K^{c}}|x|\ \mathrm{d}\mu_{n}+R\mu_{n}(K^{c})+R\left(\sum_{0<\|k\|_{\infty}\leq J}\frac{|\mu(e_{k}^{R})-\mu_{n}(e_{k}^{R})|^{2}}{\|k\|_{\infty}^{2}}\right)^{\nicefrac{{1}}{{2}}}.$

Note that the right-hand side does not depend on $f$ anymore. Hence the same bound also holds for $\sup_{f\in\operatorname{Lip}_{1}(\mathbb{R}^{d})}|\mu_{n}(f)-\mu(f)|=W_{1}(\mu_{n},\mu)$. In the next step we take the expectation on both sides. Note that $\displaystyle\mathbb{E}\left[\int_{K^{c}}|x|\ \mathrm{d}\mu_{n}\right]=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[|X_{i}|\mathds{1}_{X_{i}\in K^{c}}]\leq R^{1-q},$ since the distribution of each $X_{i}$ satisfies by assumption the same moment condition as $\mu$.
Analogously, $\mathbb{E}[\mu_{n}(K^{c})]=\tfrac{1}{n}\textstyle\sum_{i=1}^{n}\mathbb{P}(X_{i}\in K^{c})\leq R^{-q}.$ Hence we have shown the following.

###### Corollary 2.6.

Under the assumptions of Theorem 1.3, for all $J\in\mathbb{N}$ and $R>0$ one has $\mathbb{E}[W_{1}(\mu_{n},\mu)]\lesssim R\frac{(\log J)^{d}}{J}+R^{1-q}+R\ \mathbb{E}\Biggl{[}\Biggl{(}\sum_{0<\|k\|_{\infty}\leq J}\frac{|\mu(e_{k}^{R})-\mu_{n}(e_{k}^{R})|^{2}}{\|k\|_{\infty}^{2}}\Biggr{)}^{\nicefrac{{1}}{{2}}}\Biggr{]}.$

To prove the theorem, it remains to apply the estimate from Lemma 2.3 and then choose $\alpha$, $R$, and $J$ appropriately. The Hölder constant of $e_{k}^{R}$ is the constant for $e_{k}$ divided by $(2R)^{\alpha}$, and the resulting factor $(2R)^{-\alpha}$ is at most $1$ for $R>1/2$, so it can be ignored. By concavity of the square root function, for any $\alpha\in(0,1)$ and $n$ large enough we have $\displaystyle\mathbb{E}\Biggl{[}\Biggl{(}\sum_{0<\|k\|_{\infty}\leq J}\frac{|\mu(e_{k}^{R})-\mu_{n}(e_{k}^{R})|^{2}}{\|k\|_{\infty}^{2}}\Biggr{)}^{\nicefrac{{1}}{{2}}}\Biggr{]}\leq\Biggl{(}\sum_{0<\|k\|_{\infty}\leq J}\frac{\mathbb{E}|\mu(e_{k}^{R})-\mu_{n}(e_{k}^{R})|^{2}}{\|k\|_{\infty}^{2}}\Biggr{)}^{\nicefrac{{1}}{{2}}}$ $\displaystyle\lesssim\left(\sum_{0<\|k\|_{\infty}\leq J}\frac{\|k\|_{\infty}^{2\alpha}}{n(1-\kappa^{\alpha})\|k\|_{\infty}^{2}}\right)^{\nicefrac{{1}}{{2}}}\lesssim\left(\sum_{j=1}^{J}\frac{j^{d+2\alpha-3}}{n(1-\kappa^{\alpha})}\right)^{\nicefrac{{1}}{{2}}},$ using that the number of points $k\in\mathbb{Z}^{d}$ with $\|k\|_{\infty}=j$ is of order $j^{d-1}$. Now we choose $\alpha=1/\log_{2}J$, which gives $j^{2\alpha}\leq J^{2\alpha}=4$ for all $1\leq j\leq J$. Together with $1-\kappa^{\alpha}\geq\alpha(1-\kappa)$ (which follows from the convexity of $t\mapsto\kappa^{t}$) this implies $\mathbb{E}\left[\left(\sum_{0<\|k\|_{\infty}\leq J}\frac{|\mu(e_{k}^{R})-\mu_{n}(e_{k}^{R})|^{2}}{\|k\|_{\infty}^{2}}\right)^{\nicefrac{{1}}{{2}}}\right]\lesssim\sqrt{\frac{\log J}{n(1-\kappa)}}\left(\sum_{j=1}^{J}j^{d-3}\right)^{\nicefrac{{1}}{{2}}}\lesssim J^{\nicefrac{{d}}{{2}}-1}\sqrt{\frac{\log J}{n(1-\kappa)}},$ since $\sum_{j=1}^{J}j^{d-3}=\mathcal{O}(J^{d-2})$ if $d\geq 3$. Setting $n^{\prime}=n(1-\kappa)$, we obtain $\mathbb{E}[W_{1}(\mu_{n},\mu)]\lesssim R\left(\frac{(\log J)^{d}}{J}+J^{\nicefrac{{d}}{{2}}-1}\sqrt{\frac{\log J}{n^{\prime}}}\right)+R^{1-q}$ (3) for $n$ large enough. It remains to choose the parameters $R$ and $J$ in such a way that the right-hand side of this estimate is as small as possible. We first minimize this expression in $R$. The optimal value is given by $R=(q-1)^{\nicefrac{{1}}{{q}}}\left(\frac{(\log J)^{d}}{J}+J^{\nicefrac{{d}}{{2}}-1}\sqrt{\frac{\log J}{n^{\prime}}}\right)^{-\nicefrac{{1}}{{q}}},$ whence (3) yields $\mathbb{E}[W_{1}(\mu_{n},\mu)]\lesssim\left(\frac{(\log J)^{d}}{J}+J^{\nicefrac{{d}}{{2}}-1}\sqrt{\frac{\log J}{n^{\prime}}}\right)^{1-\nicefrac{{1}}{{q}}}.$ This is, up to the exponent $1-1/q$, the same bound Kloeckner obtained. Accordingly, the optimal choice of $J$ is given by $J=\lfloor(\log n^{\prime})^{2-\nicefrac{{1}}{{d}}}(n^{\prime})^{\nicefrac{{1}}{{d}}}\rfloor.$ This value can be obtained using the ansatz $J=(n^{\prime})^{\beta}$ and ignoring terms of lower order, which leads to the optimal exponent of $n^{\prime}$ if $\beta=\frac{1}{d}$. Then the estimate can be refined by setting $J=(n^{\prime})^{1/d}(\log n^{\prime})^{\gamma}$, and the optimal power of $\log n^{\prime}$ is obtained for $\gamma=2-\frac{1}{d}$. As $n^{\prime}\to\infty$, we get $J\to\infty$ and $\alpha\to 0$.
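To make these parameter choices concrete, the following minimal sketch (ours; purely illustrative, not part of the proof) evaluates $J$, $\alpha$, the optimal $R$, and the bracketed term $B$ of (3) for a few values of $n^{\prime}$ in dimension $d=3$.

```python
import numpy as np

# Illustrative evaluation (ours) of the parameter choices for d >= 3:
# J = floor((log n')^{2-1/d} (n')^{1/d}),  alpha = 1 / log2(J),
# R = (q-1)^{1/q} B^{-1/q},  B = (log J)^d / J + J^{d/2-1} sqrt(log J / n').
d, q = 3, 2.0
for n_prime in (1e4, 1e6, 1e8):
    J = int(np.log(n_prime) ** (2.0 - 1.0 / d) * n_prime ** (1.0 / d))
    alpha = 1.0 / np.log2(J)
    B = np.log(J) ** d / J + J ** (d / 2.0 - 1.0) * np.sqrt(np.log(J) / n_prime)
    R = (q - 1.0) ** (1.0 / q) * B ** (-1.0 / q)
    print(f"n'={n_prime:.0e}: J={J}, alpha={alpha:.3f}, R={R:.2f}, "
          f"B^(1-1/q)={B ** (1.0 - 1.0 / q):.3e}")
```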
Moreover, up to logarithmic terms the term $J^{d/2-1}(n^{\prime})^{-1/2}$ is of order $(n^{\prime})^{-1/d}$ and therefore $R\to\infty$, hence the above estimates are indeed valid. One can also check that $(2R)^{\alpha}\to 1$ as $n\to\infty$, hence we do not lose anything by estimating $1/(2R)^{\alpha}\leq 1$. We need to verify that the requirement $n^{\prime}\geq(1-\kappa^{\alpha})^{-1}$ is satisfied if $n$ is large enough (this is stronger than the condition $n\geq(1-\kappa^{\alpha})^{-1}$ of Lemma 2.3, since $n^{\prime}\leq n$). Since $\alpha=1/\log_{2}J$ and $\log_{2}J$ is of order $\frac{1}{d}\log_{2}n^{\prime}$, it suffices to show that $\kappa^{d}\leq(1-1/n^{\prime})^{\log n^{\prime}}$ if $n$ is large enough. But this is true, as the right-hand side converges to $1$ for $n\to\infty$. Hence for these $n$ we finally obtain $\mathbb{E}[W_{1}(\mu_{n},\mu)]\lesssim\left(\frac{(\log n^{\prime})^{d-2+\nicefrac{{1}}{{d}}}}{(n^{\prime})^{\nicefrac{{1}}{{d}}}}\right)^{1-\nicefrac{{1}}{{q}}},$ completing the proof of the theorem for $d\geq 3$. Analogously, if $d=2$ then $\sum_{j=1}^{J}j^{d-3}=\mathcal{O}(\log J)$, thus we obtain the bound $\mathbb{E}[W_{1}(\mu_{n},\mu)]\lesssim\left(\frac{(\log J)^{2}}{J}+\frac{\log J}{\sqrt{n^{\prime}}}\right)^{1-\nicefrac{{1}}{{q}}}.$ This time the optimal value for $J$ is $J=\lfloor\sqrt{n^{\prime}}\log n^{\prime}\rfloor$, which leads to the claimed estimate. Finally, if $d=1$ then $\sum_{j=1}^{J}j^{d-3}=\mathcal{O}(1)$, and thus we have $\mathbb{E}[W_{1}(\mu_{n},\mu)]\lesssim\left(\frac{\log J}{J}+\sqrt{\frac{\log J}{n^{\prime}}}\right)^{1-\nicefrac{{1}}{{q}}}.$ Here the optimal choice for $J$ is $J=\lfloor\sqrt{n^{\prime}\log n^{\prime}}\rfloor$, and then the term in brackets is of order $\sqrt{\frac{\log n^{\prime}}{n^{\prime}}}$. This completes the proof of Theorem 1.3.

### 2.2 Concentration inequalities

To prove Theorem 1.5 we will use standard bounded difference arguments. We first formulate a general result concerning concentration of functions of $n$ random variables. Let $Y_{1},\ldots,Y_{n}$ be random variables taking values in $\mathcal{X}$ and let $f\colon\mathcal{X}^{n}\to\mathbb{R}$ be a bounded function. For $1\leq i\leq j\leq n$ denote the vector $(Y_{i},Y_{i+1},\ldots,Y_{j})=:Y_{i}^{j}$. We will also write $Y=Y_{1}^{n}=(Y_{1},\ldots,Y_{n})$. For given elements $y_{i}\in\mathcal{X}$ define $\Delta_{k}(y_{1}^{k}):=\mathbb{E}\left[f(Y)\mid Y_{1}^{k}=y_{1}^{k}\right]-\mathbb{E}\left[f(Y)\mid Y_{1}^{k-1}=y_{1}^{k-1}\right],\quad 1\leq k\leq n.$ That is, $\Delta_{k}$ denotes by how much the expectation of $f(Y)$ changes under the additional information that $Y_{k}$ takes the value $y_{k}$. Next, we set $D_{k}(y_{1}^{k-1})=\sup_{x,y\in\mathcal{X}}\left\|\Delta_{k}(y_{1}^{k-1},x)-\Delta_{k}(y_{1}^{k-1},y)\right\|_{\infty},$ where $(y_{1}^{k-1},x)$ denotes the vector $(y_{1},\ldots,y_{k-1},x)$. Finally, define $\mathbf{C}:=\sup\left\\{\textstyle{\sum_{k=1}^{n}}|D_{k}(y_{1}^{k-1})|^{2}\mid(y_{1},\ldots,y_{n})\in\mathcal{X}^{n}\right\\}.$ Then the following result due to McDiarmid holds.

###### Lemma 2.7 ([17, Theorem 3.7]).

Under the above conditions, for any $t\geq 0$ one has $\mathbb{P}(f(Y)\geq\mathbb{E}f(Y)+t)\leq\exp\left(-\tfrac{2t^{2}}{\mathbf{C}}\right).$

In order to apply this to the Wasserstein distance $W_{1}(\mu,\mu_{n})$, we use that the function $f\colon\mathcal{X}^{n}\to\mathbb{R}$ defined by $f(x_{1},\ldots,x_{n}):=W_{1}(\mu,\mu_{n})$, with $\mu_{n}=\frac{1}{n}\sum_{k=1}^{n}\delta_{x_{k}}$, is Lipschitz with constant $\frac{1}{n}$.
That is, for all $x_{1},\ldots,x_{n},x_{1}^{\prime},\ldots,x_{n}^{\prime}\in\mathcal{X}$ we have $|f(x_{1},\ldots,x_{n})-f(x_{1}^{\prime},\ldots,x_{n}^{\prime})|\leq\tfrac{1}{n}\textstyle\sum_{k=1}^{n}d(x_{k},x_{k}^{\prime})=:d_{n}^{(1)}((x_{1},\ldots,x_{n}),(x_{1}^{\prime},\ldots,x_{n}^{\prime})).$ (4) This follows directly from the triangle inequality for the distance $W_{1}$. To apply Lemma 2.7 to the function $f$, we first need a general estimate for Lipschitz functions of the Markov chain.

###### Lemma 2.8.

Let $n\in\mathbb{N}$ and $f\colon\mathcal{X}^{n+1}\to\mathbb{R}$ be a function which is Lipschitz with respect to the metric $d_{n+1}^{(1)}$ with constant $L$. Then the function $F\colon\mathcal{X}\to\mathbb{R}$ defined by $F(x)=\mathbb{E}^{x}f(X_{0},X_{1},\ldots,X_{n})$ is Lipschitz with constant $L\sum_{j=0}^{n}\kappa^{j}\leq\frac{L}{1-\kappa}$. Here $\mathbb{E}^{x}$ denotes the expectation for the Markov chain starting in $X_{0}=x$, and $d^{(1)}_{n+1}$ is given by (4).

###### Proof.

We show the statement by induction on $n$. If $n=1$ we have for $x,y\in\mathcal{X}$ that $\displaystyle|F(x)-F(y)|$ $\displaystyle=\left|\mathbb{E}^{x}f(x,X_{1})-\mathbb{E}^{y}f(y,X_{1})\right|$ $\displaystyle\leq|\mathbb{E}^{x}[f(x,X_{1})-f(y,X_{1})]|+|\mathbb{E}^{x}f(y,X_{1})-\mathbb{E}^{y}f(y,X_{1})|$ $\displaystyle\leq Ld(x,y)+\left|\int f(y,z)P(x,\mathrm{d}z)-\int f(y,z)P(y,\mathrm{d}z)\right|$ $\displaystyle\leq L(1+\kappa)d(x,y),$ where we used that $W_{1}(P(x,\cdot),P(y,\cdot))\leq\kappa d(x,y)$ and the dual formulation of Lemma 1.1. Now suppose that the statement holds for some $n$, and let $f\colon\mathcal{X}^{n+2}\to\mathbb{R}$ be Lipschitz with constant $L$, with $F$ defined accordingly. For each $x\in\mathcal{X}$ define $F_{x}^{\prime}(z):=\mathbb{E}^{z}f(x,X_{0},\ldots,X_{n})$; then by the induction hypothesis $z\mapsto F_{x}^{\prime}(z)$ is Lipschitz with constant $L\sum_{j=0}^{n}\kappa^{j}$. The Markov property now implies that for $x,y\in\mathcal{X}$ $\displaystyle|F(x)-F(y)|\leq$ $\displaystyle\ |\mathbb{E}^{x}[f(x,X_{1},\ldots,X_{n+1})-f(y,X_{1},\ldots,X_{n+1})]|$ $\displaystyle+|\mathbb{E}^{x}f(y,X_{1},\ldots,X_{n+1})-\mathbb{E}^{y}f(y,X_{1},\ldots,X_{n+1})|$ $\displaystyle\leq$ $\displaystyle\ Ld(x,y)+\left|\int F_{y}^{\prime}(z)P(x,\mathrm{d}z)-\int F_{y}^{\prime}(z)P(y,\mathrm{d}z)\right|$ $\displaystyle\leq$ $\displaystyle\ \left(L+\kappa L\textstyle\sum_{j=0}^{n}\kappa^{j}\right)d(x,y)=L\textstyle\sum_{j=0}^{n+1}\kappa^{j}d(x,y),$ where we again applied the duality. This completes the proof. ∎

###### Remark.

Similar ideas have been used in [9] to estimate the concentration for separately Lipschitz functions of Markov chains. Here we really need to assume contractivity with $D=1$, since otherwise the inductive proof would not be possible.

###### Proof of Theorem 1.5.

For $1\leq i\leq j$ we set $X_{i}^{j}:=(X_{i},X_{i+1},\ldots,X_{j})$.
As in Lemma 2.7, for given $x_{1},\ldots,x_{k}\in\mathcal{X}$ we define $\Delta_{k}(x_{1}^{k})=\mathbb{E}[f(X_{1}^{n})\mid X_{1}=x_{1},\ldots,X_{k}=x_{k}]-\mathbb{E}[f(X_{1}^{n})\mid X_{1}=x_{1},\ldots,X_{k-1}=x_{k-1}].$ Now for all $x,y\in\mathcal{X}$ the Markov property implies $\displaystyle\Delta_{k}(x_{1}^{k-1},x)-\Delta_{k}(x_{1}^{k-1},y)$ $\displaystyle=\mathbb{E}[f(x_{1}^{k-1},X_{k}^{n})\mid X_{k}=x]-\mathbb{E}[f(x_{1}^{k-1},X_{k}^{n})\mid X_{k}=y]$ $\displaystyle=\mathbb{E}^{x}f(x_{1},\ldots,x_{k-1},x,X_{1},\ldots,X_{n-k})-\mathbb{E}^{y}f(x_{1},\ldots,x_{k-1},y,X_{1},\ldots,X_{n-k}).$ Next, we apply Lemma 2.8, which leads to $\displaystyle\left|\Delta_{k}(x_{1}^{k-1},x)-\Delta_{k}(x_{1}^{k-1},y)\right|$ $\displaystyle\leq\frac{1}{n}\sum_{j=0}^{n-k}\kappa^{j}d(x,y)\leq\frac{1}{n(1-\kappa)}d(x,y).$ Since the space $\mathcal{X}$ is bounded by assumption, we have $d(x,y)\leq 1$. Hence each $D_{k}$ is bounded by $\frac{1}{n(1-\kappa)}$, and Lemma 2.7 can be applied with $\mathbf{C}=\frac{1}{n(1-\kappa)^{2}}$ to obtain the result. ∎

It remains to prove the concentration inequality under transportation assumptions. For this we shall use the well-known fact that a transportation inequality implies sub-Gaussian measure concentration [3].

###### Proposition 2.9.

If $\mu$ satisfies $T_{1}(C)$ then it holds for all $f\in\operatorname{Lip}(\mathcal{X})$ and $t\geq 0$ that $\mu(f\geq\mu(f)+t)\leq\exp\left(-\tfrac{t^{2}}{2C\|f\|_{\operatorname{Lip}}^{2}}\right).$

###### Proof of Theorem 1.6.

We know that the function $f\colon\mathcal{X}^{n}\to\mathbb{R},\quad(x_{1},\ldots,x_{n})\mapsto W_{1}(\mu,\mu_{n}),\quad\mu_{n}=\frac{1}{n}\sum_{k=1}^{n}\delta_{x_{k}}$ is $\frac{1}{n}$-Lipschitz with respect to the metric $d_{n}^{(1)}$ on $\mathcal{X}^{n}$. Furthermore, [2, Theorem 1.1] implies that the distribution of $(X_{1},X_{2},\ldots,X_{n})$ satisfies $T_{1}(C_{n})$ with respect to the metric $d_{n}^{(1)}$ on $\mathcal{X}^{n}$, where $C_{n}=C\textstyle\sum_{m=1}^{n}\left(\textstyle\sum_{k=0}^{m-1}\kappa^{k}\right)^{2}\leq Cn\left(\textstyle\sum_{k=0}^{\infty}\kappa^{k}\right)^{2}=\tfrac{Cn}{(1-\kappa)^{2}}.$ Combining this with Proposition 2.9 completes the proof. ∎

### Acknowledgments

The author is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2044–390685587, Mathematics Münster: Dynamics–Geometry–Structure. This work is based on the author’s master’s thesis, which was conducted at Bonn University. The author is grateful to his advisor Andreas Eberle for his help and support.

## References

* [1] Ajtai, M., Komlós, J., and Tusnády, G. On optimal matchings. Combinatorica 4, 4 (1984), 259–264. * [2] Blower, G., and Bolley, F. Concentration of measure on product spaces with applications to Markov processes. Studia Mathematica 175, 1 (2006), 47–72. * [3] Bobkov, S., and Götze, F. Exponential integrability and transportation cost related to logarithmic Sobolev inequalities. Journal of Functional Analysis 163, 1 (1999), 1–28. * [4] Bobkov, S. G., and Ledoux, M. One-dimensional empirical measures, order statistics, and Kantorovich transport distances, vol. 261 of Memoirs of the American Mathematical Society. American Mathematical Society, Providence, 2019. * [5] Boissard, E. Simple bounds for the convergence of empirical and occupation measures in 1-Wasserstein distance. Electronic Journal of Probability 16 (2011), 2296–2333. * [6] Boissard, E., and Le Gouic, T. On the mean speed of convergence of empirical and occupation measures in Wasserstein distance.
Annales de l’Institut Henri Poincaré, Probabilités et Statistiques 50, 2 (2014), 539–563. * [7] Bolley, F., Guillin, A., and Villani, C. Quantitative concentration inequalities for empirical measures on non-compact spaces. Probability Theory and Related Fields 137, 3-4 (2006), 541–593. * [8] Bolley, F., and Villani, C. Weighted Csiszár-Kullback-Pinsker inequalities and applications to transportation inequalities. Annales de la faculté des sciences de Toulouse Mathématiques 14, 3 (2005), 331–352. * [9] Dedecker, J., and Fan, X. Deviation inequalities for separately Lipschitz functionals of iterated random functions. Stochastic Processes and their Applications 125, 1 (2015), 60–90. * [10] Dereich, S., Scheutzow, M., and Schottstedt, R. Constructive quantization: Approximation by empirical measures. Annales de l’Institut Henri Poincaré, Probabilités et Statistiques 49, 4 (2013), 1183–1203. * [11] Dudley, R. M. The speed of mean Glivenko-Cantelli convergence. The Annals of Mathematical Statistics 40, 1 (1969), 40–50. * [12] Fournier, N., and Guillin, A. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields 162, 3-4 (2015), 707–738. * [13] Kloeckner, B. Empirical measures: regularity is a counter-curse to dimensionality. ESAIM: PS 24 (2020), 408–434. * [14] Laloë, T. $L_{1}$-quantization and clustering in Banach spaces. Mathematical Methods of Statistics 19, 2 (2010), 136–150. * [15] Ledoux, M. On optimal matching of Gaussian samples. Journal of Mathematical Sciences 238, 4 (2019), 495–522. * [16] Lei, J. Convergence and concentration of empirical measures under Wasserstein distance in unbounded functional spaces. Bernoulli 26, 1 (2020), 767–798. * [17] McDiarmid, C. Concentration. In Probabilistic Methods for Algorithmic Discrete Mathematics, M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, and B. Reed, Eds., Algorithms and Combinatorics. Springer, Berlin and Heidelberg, 1998, pp. 195–248. * [18] Ollivier, Y. Ricci curvature of Markov chains on metric spaces. Journal of Functional Analysis 256, 3 (2009), 810–864. * [19] Schultz, M. H. $L^{\infty}$–multivariate approximation theory. SIAM Journal on Numerical Analysis 6, 2 (1969), 161–183. * [20] Villani, C. Optimal transport: Old and new, vol. 338 of Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 2009. * [21] Weed, J., and Bach, F. Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance. Bernoulli 25, 4A (2019), 2620–2648.
# Exchange effects in nucleus-nucleus reactions

J. Dohet-Eraly<EMAIL_ADDRESS>Physique Quantique, C.P. 165/82, Université Libre de Bruxelles (ULB), B 1050 Brussels, Belgium Physique Nucléaire Théorique et Physique Mathématique, C.P. 229, Université Libre de Bruxelles (ULB), B 1050 Brussels, Belgium P. Descouvemont<EMAIL_ADDRESS>Physique Nucléaire Théorique et Physique Mathématique, C.P. 229, Université Libre de Bruxelles (ULB), B 1050 Brussels, Belgium

###### Abstract

We present a scattering model for nuclei with similar masses. In this three-body model, the projectile has a core+valence structure, whereas the target is identical to the core nucleus. The three-body wave functions must be symmetrized for the exchange of the cores. This property gives rise to non-local potentials, which are computed without approximation. The present model is an extension of the Continuum Discretized Coupled Channel (CDCC) formalism, with an additional treatment of core exchange. We solve the coupled-channel system, including non-local terms, by the $R$-matrix method using Lagrange functions. This model is applied to the ${}^{13}{\rm C}+^{12}$C, ${}^{13}{\rm N}+^{12}$C and ${}^{16}{\rm O}+^{12}$C systems. Experimental scattering cross sections are fairly well reproduced without any parameter fitting. The backward-angle enhancement of the elastic cross sections is due to the non-local potential. We discuss in more detail the various non-local contributions and present effective local potentials.

## I Introduction

Nucleus-nucleus reactions represent an important topic in nuclear physics. In particular, they constitute the only way to investigate exotic nuclei. With the development of radioactive beams, more and more data become available. Accurate theoretical models are needed to interpret these data and to extract the relevant properties of exotic nuclei. A popular approach is the optical model Feshbach (1958); Dickhoff and Charity (2019), where the structure of the colliding nuclei is neglected. Microscopic effects and absorption channels are simulated by complex potentials. This approach is very simple, but usually involves several parameters. The information about the structure of the nuclei is therefore limited. Three-body models represent a step further in the description of nucleus-nucleus collisions. One of the participating nuclei is described by a two-body structure, and the main part of the absorption is simulated by breakup effects in this two-body nucleus. This approach is referred to as the Continuum Discretized Coupled Channel (CDCC) method Rawitscher (1974); Kamimura _et al._ (1986); Austern _et al._ (1987); Yahiro _et al._ (2012), and has been extended to four-body systems Matsumoto _et al._ (2004); Descouvemont (2018). It is well adapted to nuclei with a low separation energy, where breakup effects are expected to be important. The CDCC method was originally developed to describe deuteron scattering Rawitscher (1974), but many applications have been performed recently for reactions involving exotic nuclei (see, e.g., Refs. de Diego _et al._ (2014); Pesudo _et al._ (2017) for recent works). In its present form, the CDCC method neglects possible exchange effects between the projectile and the target. A typical example is the $\alpha+^{8}$Be reaction Ogata _et al._ (2009), where the symmetrization between the colliding $\alpha$ particle and the $\alpha$'s involved in ${}^{8}$Be is not taken into account.
A more recent example is the $d+^{11}$Be system Descouvemont (2017), where $d$ and ${}^{11}$Be are described by $p+n$ and ${}^{10}{\rm Be}+n$ structures, without antisymmetrization between the neutrons of $d$ and of ${}^{11}$Be. An obvious situation where exchange effects are important is when the colliding nuclei have similar masses. Representative examples are the ${}^{13}{\rm C}+^{12}$C and ${}^{17}{\rm O}+^{16}$O reactions. In such a case the system can be described by a three-body structure involving two cores and an exchanged particle (typically a nucleon or an $\alpha$ particle). The symmetrization of the wave function for the core exchange is then crucial. In the literature, several studies have been devoted to this problem, with various approximations of the exchange effects von Oertzen (1970); Imanishi and von Oertzen (1987); Buttle and Goldfarb (1966); Sparenberg _et al._ (2000). In the present work, we use a three-body model, and treat exchange effects exactly. This procedure gives rise to non-local potentials in a coupled-channel formalism, but does not require any parameter fit. As in the traditional CDCC approach, the only inputs are the two-body interactions between the constituents. A first important step is to determine the non-local potentials, stemming from exchange effects. In a second step, one has to solve a coupled-channel integro-differential system. This is in general a complicated task, but can be simplified with the help of the $R$-matrix formalism Descouvemont and Baye (2010) associated with the Lagrange-mesh technique Baye (2015). The paper is organized as follows. In Sec. II, we present the model, with emphasis on the calculation of the non-local terms. Section III is devoted to some applications. We present results on ${}^{13}{\rm C}+^{12}$C, ${}^{13}{\rm N}+^{12}$C and ${}^{16}{\rm O}+^{12}$C scattering. In Sec. IV, we discuss non-local effects in more detail. We focus on the long-range part of the non-local kernels. We also present equivalent local potentials. Concluding remarks and outlook are presented in Sec. V.

## II The three-body model

### II.1 Total wave functions

We consider the three-body system presented in Fig. 1. The projectile is formed by a core ($C$) + valence ($v$) system, and the target is identical to the core. A typical example is the ${}^{13}{\rm C}+^{12}$C system, where the core is ${}^{12}$C and the valence particle is a neutron. For the sake of simplicity, we assume that the spin of the core is zero. The Hamiltonian of this system is defined as $\displaystyle H=$ $\displaystyle T_{\boldsymbol{r}}+T_{\boldsymbol{R}}+V_{Cv}(\boldsymbol{r})$ $\displaystyle+V_{Cv}(|\alpha\boldsymbol{r}-\boldsymbol{R}|)+V_{CC}(|\beta\boldsymbol{r}+\boldsymbol{R}|),$ (1) where $T_{\boldsymbol{r}}$ and $T_{\boldsymbol{R}}$ are the kinetic energies associated with $\boldsymbol{r}$ and $\boldsymbol{R}$, and $V_{Cv}$ and $V_{CC}$ are core-valence and core-core potentials. The coordinates $\boldsymbol{R}$ and $\boldsymbol{R}^{\prime}$ are the relative coordinates between the projectile and the target before and after symmetrization, respectively. In definition (1), $\alpha$ and $\beta$ are positive coefficients given by $\displaystyle\alpha=\frac{A_{C}}{A_{C}+A_{v}},\ \beta=\frac{A_{v}}{A_{C}+A_{v}}=1-\alpha,$ (2) where $A_{C}$ and $A_{v}$ are the masses of the core and of the valence particle, respectively. We also define $\displaystyle\gamma=\frac{1}{1-\alpha^{2}},$ (3) which will be used later.
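To fix ideas, the following minimal sketch (ours; exact fractions and integer masses, as used throughout this work) evaluates the coefficients of Eqs. (2) and (3) for the two projectile structures considered later, ${}^{13}{\rm C}={}^{12}{\rm C}+n$ and ${}^{16}{\rm O}={}^{12}{\rm C}+\alpha$.

```python
from fractions import Fraction

def mass_coefficients(A_C, A_v):
    """Coefficients of Eqs. (2)-(3) for a core of mass A_C and a valence
    particle of mass A_v (integer masses)."""
    alpha = Fraction(A_C, A_C + A_v)
    beta = 1 - alpha
    gamma = 1 / (1 - alpha**2)
    return alpha, beta, gamma

# 13C = 12C + n (nucleon exchange) and 16O = 12C + alpha (alpha exchange).
for label, A_C, A_v in [("13C = 12C + n", 12, 1), ("16O = 12C + alpha", 12, 4)]:
    alpha, beta, gamma = mass_coefficients(A_C, A_v)
    print(f"{label}:  alpha = {alpha} = {float(alpha):.4f},  beta = {beta},  "
          f"gamma = {gamma} = {float(gamma):.3f}")
# gamma is much larger for nucleon exchange (169/25 = 6.76) than for
# alpha exchange (16/7 ~ 2.29), since alpha is then close to 1; this
# extended geometry of the exchanged coordinates reappears in Sec. II.2.
```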
The present approach is based on the CDCC formalism, but we include the symmetrization of the wave function with respect to core exchange. Notice that two possible choices exist for the potential $V_{Cv}$: either it reproduces the spectroscopic properties of the $C+v$ system, or it is fitted on elastic scattering. Figure 1: Coordinates $(\boldsymbol{r},\boldsymbol{R})$ and $(\boldsymbol{r}^{\prime},\boldsymbol{R}^{\prime})$. $C$ and $v$ represent the core and valence particles. Let us define the core-valence Hamiltonian by $\displaystyle H_{0}=T_{\boldsymbol{r}}+V_{Cv}(\boldsymbol{r}),$ (4) whose eigenstates are obtained from $\displaystyle H_{0}\phi^{\ell jm}_{n}(\boldsymbol{r})=E^{\ell j}_{n}\phi^{\ell jm}_{n}(\boldsymbol{r}).$ (5) The two-body wave functions are factorized as $\displaystyle\phi^{\ell jm}_{n}(\boldsymbol{r})=\frac{u^{\ell j}_{n}(r)}{r}[Y_{\ell}(\Omega_{r})\otimes\chi^{v}]^{jm},$ (6) where $\chi^{v}$ is a spinor associated with the valence particle. The radial eigenfunctions $u^{\ell j}_{n}(r)$ are expanded over a basis, chosen here from Lagrange functions Baye (2015). Lagrange-Laguerre functions regularized by $r$ and by $\sqrt{r}$ Baye (2015); Dohet-Eraly (2017) have been considered. Both provide the same level of accuracy. As usual in CDCC calculations, energies $E^{\ell j}_{n}<0$ correspond to physical states, whereas states with $E^{\ell j}_{n}>0$, referred to as pseudostates, simulate the projectile continuum. The total wave functions associated with (1) are written before symmetrization as $\displaystyle\Phi^{JM\pi}(\boldsymbol{R},\boldsymbol{r})=\frac{1}{R}\sum_{cL}g^{J\pi}_{cL}(R)\,\varphi^{JM\pi}_{cL}(\Omega_{R},\boldsymbol{r}),$ (7) where index $c$ stands for $c=(\ell jn)$ and where $g^{J\pi}_{cL}$ are radial wave functions to be determined. The relative angular momentum is denoted as $L$. The channel functions $\varphi^{JM\pi}_{cL}$ are defined as $\displaystyle\varphi^{JM\pi}_{cL}(\Omega_{R},\boldsymbol{r})=\bigl{[}Y_{L}(\Omega_{R})\otimes\phi^{\ell j}_{n}(\boldsymbol{r})\bigr{]}^{JM}.$ (8) The total wave function (7) must be symmetrized with respect to the exchange of the cores. For spin-zero cores, this is achieved with the exchange operator $P$ $\displaystyle\Psi^{JM\pi}(\boldsymbol{R},\boldsymbol{r})=(1+P)\Phi^{JM\pi}(\boldsymbol{R},\boldsymbol{r}),$ (9) where $P$ permutes the coordinates of the cores. More precisely, we have $\displaystyle P\Phi^{JM\pi}(\boldsymbol{R},\boldsymbol{r})=\Phi^{JM\pi}(\boldsymbol{R}^{\prime},\boldsymbol{r}^{\prime}),$ (10) with $\displaystyle\boldsymbol{r}^{\prime}=\alpha\boldsymbol{r}-\boldsymbol{R},$ $\displaystyle\boldsymbol{R}^{\prime}=(\alpha^{2}-1)\boldsymbol{r}-\alpha\boldsymbol{R}.$ (11) In the $(\boldsymbol{R}^{\prime},\boldsymbol{r}^{\prime})$ system, Hamiltonian (1) can be written, in the so-called “post” form, as $\displaystyle H=T_{\boldsymbol{r}^{\prime}}+T_{\boldsymbol{R}^{\prime}}+V_{Cv}(\boldsymbol{r}^{\prime})$ $\displaystyle\hskip 28.45274pt+V_{Cv}(|\alpha\boldsymbol{r}^{\prime}-\boldsymbol{R}^{\prime}|)+V_{CC}(|\beta\boldsymbol{r}^{\prime}+\boldsymbol{R}^{\prime}|).$ (12) In the next step, we consider the three-body Schrödinger equation $\displaystyle H\Psi^{JM\pi}=E\Psi^{JM\pi},$ (13) and use (9) with expansion (7).
After projection on the channel functions, this procedure provides the integro-differential system $\displaystyle(T_{R}+E_{c}-E)g^{J\pi}_{cL}(R)+\sum_{c^{\prime}L^{\prime}}V^{J\pi}_{cL,c^{\prime}L^{\prime}}(R)g^{J\pi}_{c^{\prime}L^{\prime}}(R)$ $\displaystyle+\sum_{c^{\prime}L^{\prime}}\int W^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})g^{J\pi}_{c^{\prime}L^{\prime}}(R^{\prime})dR^{\prime}=0,$ (14) where $\displaystyle T_{R}=-\frac{\hbar^{2}}{2\mu}\biggl{[}\frac{d^{2}}{dR^{2}}-\frac{L(L+1)}{R^{2}}\biggr{]},$ (15) $\mu$ being the reduced mass. The first two terms of Eq. (14) correspond to the standard CDCC system Austern _et al._ (1987). The coupling potentials are defined by $\displaystyle V^{J\pi}_{cL,c^{\prime}L^{\prime}}(R)=\langle\varphi^{JM\pi}_{cL}|V_{Cv}+V_{CC}|\varphi^{JM\pi}_{c^{\prime}L^{\prime}}\rangle,$ (16) where the integration is performed over $\Omega_{R}$ and $\boldsymbol{r}$. The last term of (14) is non-local, and arises from the symmetrization operator $P$. The non-local potential $W^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})$ can be decomposed as $\displaystyle W^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})=(E_{c}-E){\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})$ $\displaystyle\hskip 28.45274pt+{\cal T}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})+{\cal V}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime}),$ (17) which explicitly shows overlap ${\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}$, kinetic energy ${\cal T}^{J\pi}_{cL,c^{\prime}L^{\prime}}$, and potential ${\cal V}^{J\pi}_{cL,c^{\prime}L^{\prime}}$ terms. The overlap kernel is defined from $\displaystyle\int{\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})g^{J\pi}_{c^{\prime}L^{\prime}}(R^{\prime})dR^{\prime}=$ $\displaystyle\hskip 28.45274ptR\langle\varphi^{JM\pi}_{cL}|\varphi^{JM\pi}_{c^{\prime}L^{\prime}}\dfrac{g^{J\pi}_{c^{\prime}L^{\prime}}(R^{\prime})}{R^{\prime}}\rangle,$ (18) and equivalent expressions hold for the kinetic energy and potential kernels. Notice that similar terms show up in the coupled-channel approach of transfer reactions Cotanch (1975). We discuss the various contributions of (17) in the next subsection. For scattering states ($E>0$), a radial function $g^{J\pi}_{cL}(R)$ has the asymptotic behavior at large $R$ values $\displaystyle g^{J\pi}_{cL,\omega L_{\omega}}(R)\rightarrow{v_{c}}^{-1/2}$ $\displaystyle\times\bigl{[}I_{L}(k_{c}R)\delta_{c\omega}\delta_{LL_{\omega}}-U^{J\pi}_{cL,\omega L_{\omega}}O_{L}(k_{c}R)\bigr{]},$ (19) where $\omega$ is the entrance channel, $I_{L}$ and $O_{L}$ are the incoming and outgoing Coulomb functions, $v_{c}$ and $k_{c}$ are the velocity and wave number in channel $c$, and $\boldsymbol{U}^{J\pi}$ is the scattering matrix. In the $R$-matrix formalism Descouvemont and Baye (2010), a channel radius $a$ separates the internal region, where all terms of the potentials contribute, and the external region, where only the monopole part of the Coulomb interaction is present. In the internal region, the radial wave function is expanded as $\displaystyle g^{J\pi}_{cL,\omega L_{\omega}}(R)=\sum_{i=1}^{N}f^{J\pi}_{cLi,\omega L_{\omega}}u_{i}(R),$ (20) where the $N$ functions $u_{i}(R)$ represent the basis. The choice of the basis functions is discussed in Subsec. II.5.

### II.2 Non-local terms

The non-local potential (17) arises from exchange effects due to the operator $P$ (9,10).
The overlap and potential kernels in (17) are obtained from $\displaystyle\begin{Bmatrix}{\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})\\\ {\cal V}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})\end{Bmatrix}={\cal J}RR^{\prime}\iint\varphi^{JM\pi\ast}_{cL}(\Omega_{R},\boldsymbol{r})$ $\displaystyle\hskip 28.45274pt\times\begin{Bmatrix}1\\\ V_{Cv}+V_{CC}\end{Bmatrix}\varphi^{JM\pi}_{c^{\prime}L^{\prime}}(\Omega_{R^{\prime}},\boldsymbol{r}^{\prime})d\Omega_{R}\,d\Omega_{R^{\prime}},$ (21) where $\displaystyle{\cal J}=\gamma^{3}$ (22) is the Jacobian from coordinates $(\boldsymbol{r},\boldsymbol{R})$ to $(\boldsymbol{R},\boldsymbol{R}^{\prime})$. Coordinates $(\boldsymbol{r},\boldsymbol{r}^{\prime},\boldsymbol{R}_{CC})$ are expressed as $\displaystyle\boldsymbol{r}=-\gamma(\alpha\boldsymbol{R}+\boldsymbol{R}^{\prime}),$ $\displaystyle\boldsymbol{r}^{\prime}=-\gamma(\boldsymbol{R}+\alpha\boldsymbol{R}^{\prime}),$ $\displaystyle\boldsymbol{R}_{CC}=\boldsymbol{r}-\boldsymbol{r}^{\prime}=\frac{1}{\alpha+1}(\boldsymbol{R}-\boldsymbol{R}^{\prime}).$ (23) In the case of a nucleon transfer, $\alpha$ is close to 1, and the relative coordinates $(\boldsymbol{r},\boldsymbol{r}^{\prime})$ are large, even for relatively small values of $(\boldsymbol{R},\boldsymbol{R}^{\prime})$. This means that the core-valence wave function needs to be accurately known up to large distances. If the binding energy is small, the non-local potentials are therefore very sensitive to the long-range part of the bound-state wave function. The potential term is quite similar to the matrix elements involved in Distorted Wave Born Approximation (DWBA) calculations Satchler (1983), and its calculation is explained in several references (see, e.g., Refs. Satchler (1983); Thompson (1988); Shubhchintak and Descouvemont (2019)). The calculation of the overlap and potential kernels is based on the expansions $\displaystyle\frac{u^{\ell j}_{n}(r)\,u^{\ell^{\prime}j^{\prime}}_{n^{\prime}}(r^{\prime})}{r^{\ell+1}r^{\prime\ell^{\prime}+1}}=\sum_{K}N^{K}_{cc^{\prime}}(R,R^{\prime})P_{K}(\cos\theta_{R}),$ $\displaystyle\frac{u^{\ell j}_{n}(r)}{r^{\ell+1}}\biggl{(}V_{Cv}(r^{\prime})+V_{CC}(R_{CC})\biggr{)}\frac{u^{\ell^{\prime}j^{\prime}}_{n^{\prime}}(r^{\prime})}{r^{\prime\ell^{\prime}+1}}=$ $\displaystyle\hskip 28.45274pt\sum_{K}V^{K}_{cc^{\prime}}(R,R^{\prime})P_{K}(\cos\theta_{R}),$ (24) where $\theta_{R}$ is the angle between $\boldsymbol{R}$ and $\boldsymbol{R}^{\prime}$, and $P_{K}(x)$ is a Legendre polynomial. The components $N^{K}_{cc^{\prime}}(R,R^{\prime})$ and $V^{K}_{cc^{\prime}}(R,R^{\prime})$ are obtained from numerical integrations. Notice that these expansions only depend on the quantum numbers of the projectile. They do not depend on the angular momenta $J,L,L^{\prime}$, and are therefore performed once. The derivation of the kinetic-energy kernels is more tedious. However, we show in the Appendix that they can be deduced from the overlap kernel as $\displaystyle{\cal T}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})=-\frac{\hbar^{2}}{2\mu}\biggl{[}\frac{\partial^{2}}{\partial R^{2}}-\frac{L(L+1)}{R^{2}}\biggr{]}{\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime}).$ (25) The calculation of the non-local kernels from (21) and (24) requires some angular-momentum algebra.
We use the addition theorem $\displaystyle r^{\ell}Y_{\ell}^{m}(\Omega_{r})=\sum_{\lambda=0}^{\ell}C_{\ell}^{\lambda}r_{1}^{\lambda}r_{2}^{{\ell}-{\lambda}}\bigl{[}Y_{\lambda}(\Omega_{1})\otimes Y_{{\ell}-{\lambda}}(\Omega_{2})\bigr{]}^{\ell m},$ (26) where $\boldsymbol{r}=\boldsymbol{r}_{1}+\boldsymbol{r}_{2}$ and with $\displaystyle C_{\ell}^{\lambda}=\biggl{[}\frac{4\pi(2\ell+1)!}{(2\lambda+1)!(2\ell-2\lambda+1)!}\biggr{]}^{1/2}.$ (27) When the valence particle $v$ has spin zero, the overlap kernel can be expanded as $\displaystyle{\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})={\cal J}(-\gamma)^{\ell+\ell^{\prime}}\sum_{K}N^{K}_{c,c^{\prime}}(R,R^{\prime})$ $\displaystyle\times\sum_{\lambda_{1}\lambda_{2}}C_{\ell}^{\lambda_{1}}C_{\ell^{\prime}}^{\lambda_{2}}\alpha^{\lambda_{1}+\lambda_{2}}R^{\ell^{\prime}+\lambda_{1}-\lambda_{2}+1}R^{\prime\ell-\lambda_{1}+\lambda_{2}+1}$ $\displaystyle\times F^{J\pi}_{cL,c^{\prime}L^{\prime}}(K,\lambda_{1},\lambda_{2}),$ (28) with $\displaystyle F^{J\pi}_{cL,c^{\prime}L^{\prime}}(K,\lambda_{1},\lambda_{2})=$ $\displaystyle\biggl{\langle}\biggl{[}Y_{L}(\Omega_{R})\otimes\bigl{[}Y_{\lambda_{1}}(\Omega_{R})\otimes Y_{\ell-\lambda_{1}}(\Omega_{R^{\prime}})\bigr{]}^{\ell}\biggr{]}^{JM}|P_{K}(\cos\theta_{R})|$ $\displaystyle\biggl{[}Y_{L^{\prime}}(\Omega_{R^{\prime}})\otimes\bigl{[}Y_{\ell^{\prime}-\lambda_{2}}(\Omega_{R})\otimes Y_{\lambda_{2}}(\Omega_{R^{\prime}})\bigr{]}^{\ell^{\prime}}\biggr{]}^{JM}\biggr{\rangle}.$ (29) The analytical calculation of these coefficients requires some algebra to modify the order of angular-momentum couplings, and involves $6j$ coefficients. When the valence particle has a spin, further angular-momentum recoupling is necessary. A simple value is obtained for $\ell=\ell^{\prime}=0$, where we have $\displaystyle F^{J\pi}(K,0,0)=\frac{\delta_{KJ}}{4\pi(2J+1)}.$ (30)

### II.3 Symmetry of the non-local kernels

Let us briefly discuss the symmetry properties of the non-local potential (17). According to Eq. (9), we have, when the cores are bosons $\displaystyle P\Psi^{JM\pi}=\Psi^{JM\pi},$ (31) which means that the property $\displaystyle[H,P]=0$ (32) should be satisfied. This implies that both $V_{Cv}$ potentials in Eqs. (1) and (12) are identical. For example, in the ${}^{13}{\rm C}+^{12}$C system, the $n+^{12}$C potential (real) associated with the ${}^{13}$C ground state should be identical to the $n+^{12}$C optical potential which describes the neutron-target scattering. If this condition is fulfilled, the non-local potential $W^{J\pi}$ is symmetric, and we have $\displaystyle W^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})=W^{J\pi}_{c^{\prime}L^{\prime},cL}(R^{\prime},R).$ (33) This test is very strong, since none of the individual terms on the r.h.s. of (17) is symmetric. In practical applications, however, it may seem more physical to choose different potentials: one that binds the $C+v$ system, and another that is adapted to $C+v$ scattering. In such a case, the symmetry property (33) is only approximately satisfied (see the discussion in Ref. Imanishi and von Oertzen (1987)). This means that some properties of the scattering matrix, such as the symmetry or the unitarity (for real potentials), no longer hold.
We choose to restore the symmetry of the non-local potential through $\displaystyle W^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})\rightarrow\frac{1}{2}\biggl{[}W^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})+W^{J\pi}_{c^{\prime}L^{\prime},cL}(R^{\prime},R)\biggr{]}.$ (34) We will see in some examples that these effects, in practice, are small. The main reason is that, in general, the contribution of the optical potential $V_{Cv}$ is small compared to that of the optical potential between the cores.

### II.4 Elastic cross sections

The elastic cross sections are obtained from the scattering matrices $\boldsymbol{U}^{J\pi}$ [see Eq. (19)]. According to scattering theory, the elastic cross section between different particles is defined from the scattering amplitude as $\displaystyle\frac{d\sigma}{d\theta}=|f(\theta)|^{2},$ $\displaystyle f(\theta)=f^{\rm N}(\theta)+f^{\rm C}(\theta),$ (35) where $\theta$ is the scattering angle. In this definition, $f^{\rm C}(\theta)$ is the Coulomb amplitude and $f^{\rm N}(\theta)$ is the nuclear amplitude, defined by $\displaystyle f^{\rm N}(\theta)=\frac{1}{2ik}\sum_{J}(2J+1)P_{J}(\cos\theta)(U^{J}-1)e^{2i\sigma_{J}},$ (36) where $\sigma_{J}$ is the Coulomb phase shift. For the sake of simplicity, we consider single-channel systems with spin 0 nuclei. The generalization is straightforward (see, for example, Ref. Satchler (1983)). As explained in Sec. II.1, the scattering matrices are obtained from the resolution of a Schrödinger equation involving a non-local potential. Let us denote as $U^{J}_{0}$ and $g^{J}_{0}(R)$ the scattering matrix and wave function obtained from the local term (16) only. These quantities are obtained without symmetrization of the wave function (9). The integral definition of the scattering matrix Canto and Hussein (2013) provides a relationship between $U^{J}$ and $U^{J}_{0}$ as $\displaystyle U^{J}=U^{J}_{0}+U^{J}_{\rm ex},$ $\displaystyle U^{J}_{\rm ex}=-\frac{i}{\hbar}\iint g^{J}_{0}(R)W^{J}(R,R^{\prime})g^{J}(R^{\prime})dRdR^{\prime}.$ (37) Consequently, the nuclear scattering amplitude can be written as $\displaystyle f^{\rm N}(\theta)=f^{\rm N}_{0}(\theta)+f^{\rm N}_{\rm ex}(\theta),$ (38) where $f^{\rm N}_{0}(\theta)$ is obtained from (36) with the scattering matrices $U^{J}_{0}$, and where the exchange amplitude $f^{\rm N}_{\rm ex}(\theta)$ is the non-local contribution $\displaystyle f^{\rm N}_{\rm ex}(\theta)=\frac{1}{2ik}\sum_{J}(2J+1)P_{J}(\cos\theta)U^{J}_{\rm ex}e^{2i\sigma_{J}}.$ (39) A similar decomposition has been suggested in Refs. Phuc _et al._ (2018, 2021, 2019). Since the calculation of the exchange amplitude $f^{\rm N}_{\rm ex}(\theta)$ is based on a non-local potential, it is common in the literature to assume that it can be simulated by the transfer of the valence particle (see, for example, Ref. Phuc _et al._ (2019) and references therein). In other words, the exchange amplitude can be approximated as $\displaystyle f^{\rm N}_{\rm ex}(\theta)\approx f_{\rm tr}(\pi-\theta),$ (40) where $f_{\rm tr}(\pi-\theta)$ is the transfer amplitude, usually computed in the DWBA approximation Thompson (1988). In this approximation, however, the exact wave function $g^{J}(R)$ is replaced by the non-symmetrized wave function $g^{J}_{0}(R)$ in Eq. (37), and the non-local potential $W^{J}(R,R^{\prime})$ is approximated from an auxiliary potential. Another difference is related to the spectroscopic factors.
As expected in transfer calculations, the DWBA amplitude $f_{\rm tr}(\pi-\theta)$ is multiplied by the spectroscopic factor of the projectile. This multiplicative factor, however, does not show up in the present approach. The two-body wave function (6) is of course an approximation which can be improved, either by introducing a spectroscopic factor or by including core excitations. The introduction of a spectroscopic factor in (6), however, represents a global renormalization of the expansion (7), and therefore of the symmetrized definition (9). The scattering matrices deduced from (19) are therefore not affected by a spectroscopic factor. Introducing core excitations in the present approach is a challenge for future work, and is beyond the scope of the present paper.

### II.5 Lagrange functions

As mentioned before, the scattering matrices are calculated with the $R$-matrix method, which is based on a channel radius $a$ and on the choice of basis functions $u_{i}$. As in previous works, we choose Lagrange functions, which permit fast and accurate calculations of the matrix elements, in particular for non-local potentials (see Ref. Baye (2015) for details). The calculation of the $R$ matrix is based on matrix elements between basis functions over the internal region. The main input is the matrix defined from $\displaystyle C^{J\pi}_{cLi,c^{\prime}L^{\prime}j}=$ $\displaystyle\langle u_{i}|(T_{R}+E_{c}-E)\delta_{cc^{\prime}}\delta_{LL^{\prime}}$ $\displaystyle+V^{J\pi}_{cL,c^{\prime}L^{\prime}}+W^{J\pi}_{cL,c^{\prime}L^{\prime}}|u_{j}\rangle.$ (41) For example, matrix elements of the local and non-local potentials are given by $\displaystyle\langle u_{i}|V|u_{j}\rangle=\int_{0}^{a}u_{i}(R)V(R)u_{j}(R)\,dR,$ $\displaystyle\langle u_{i}|W|u_{j}\rangle=\int_{0}^{a}\int_{0}^{a}u_{i}(R)W(R,R^{\prime})u_{j}(R^{\prime})\,dRdR^{\prime}.$ (42) These calculations are greatly simplified by using Lagrange functions for $u_{i}$ which are defined by $\displaystyle u_{i}(R)=(-1)^{N+i}\frac{R}{R_{i}}\sqrt{R_{i}\biggl{(}1-\frac{R_{i}}{a}\biggr{)}}\,\frac{P_{N}(2R/a-1)}{R-R_{i}},$ (43) where $R_{i}$ are the zeros of $\displaystyle P_{N}(2R/a-1)=0.$ (44) The normalization of (43) is chosen in such a way that the Lagrange condition $\displaystyle u_{i}(R_{j})=\frac{1}{\sqrt{a\lambda_{i}}}\delta_{ij}$ (45) is satisfied. In this equation, $\lambda_{i}$ is the weight of the Gauss-Legendre quadrature associated with the $[0,1]$ interval. With the choice of basis function (43), the calculation of the matrix elements (42) is extremely simple if the Gauss approximation of order $N$ is used for the quadratures. The matrix elements are given by $\displaystyle\langle u_{i}|V|u_{j}\rangle=V(R_{i})\delta_{ij},$ $\displaystyle\langle u_{i}|W|u_{j}\rangle=a\sqrt{\lambda_{i}\lambda_{j}}W(R_{i},R_{j}),$ (46) and no numerical integration is required. We refer to Refs. Descouvemont and Baye (2010); Baye (2015) for details. A short numerical illustration of Eqs. (43)–(46) is given below.

## III Applications

### III.1 The ${}^{13}{\rm C}+^{12}$C system

The ${}^{13}{\rm C}+^{12}$C system has been intensively studied experimentally Liénard _et al._ (1995); Voit _et al._ (1988) as well as theoretically von Oertzen (1970); Imanishi and von Oertzen (1987); Imanishi _et al._ (1997). It is known that non-local effects can be simulated by a parity-dependent optical potential Buttle and Goldfarb (1966). Owing to the one-neutron exchange, the potentials for even and odd partial waves are different Baye (1986).
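As announced above, the following minimal sketch (ours) illustrates the Lagrange-Legendre mesh of Eqs. (43)–(45) and the matrix elements (46). Both kernels below are hypothetical smooth placeholders, chosen only to exercise the formulas; the symmetry check at the end reflects the discussion of Sec. II.3.

```python
import numpy as np

def lagrange_legendre_mesh(N, a):
    """Mesh points R_i, the zeros of P_N(2R/a - 1) (Eq. (44)), and the
    Gauss-Legendre weights lambda_i associated with the [0, 1] interval."""
    x, w = np.polynomial.legendre.leggauss(N)  # nodes and weights on [-1, 1]
    return a * (x + 1.0) / 2.0, w / 2.0        # R_i on [0, a], lambda_i

def matrix_elements(N, a, V, W):
    """Matrix elements of Eq. (46) in the Gauss approximation of order N:
    a local potential V(R) is diagonal on the mesh, and a non-local kernel
    W(R, R') requires no numerical integration."""
    R, lam = lagrange_legendre_mesh(N, a)
    V_mat = np.diag(V(R))
    W_mat = a * np.sqrt(np.outer(lam, lam)) * W(R[:, None], R[None, :])
    return V_mat, W_mat

# Hypothetical smooth kernels, only to exercise the formulas.
V = lambda R: -50.0 / (1.0 + np.exp((R - 5.0) / 0.6))
W = lambda R, Rp: -5.0 * np.exp(-((R - Rp) ** 2) / 4.0 - 0.1 * (R + Rp))
V_mat, W_mat = matrix_elements(N=30, a=15.0, V=V, W=W)
print(W_mat.shape, np.allclose(W_mat, W_mat.T))  # (30, 30) True
```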
In the “extreme” situation of identical nuclei, odd partial waves are strictly forbidden. We have determined the non-local potential (17) from ${}^{12}{\rm C}+^{12}$C and $n+^{12}$C potentials. For ${}^{12}{\rm C}+^{12}$C, we take the optical potential derived by Treu et al. Treu _et al._ (1980), and defined (in MeV) as $\displaystyle V_{\rm CC}(r)=$ $\displaystyle-\frac{100}{1+\exp[(r-5.45)/0.48]}$ $\displaystyle-i\frac{15}{1+\exp[(r-5.77)/0.26]}$ (47) where $r$ is expressed in fm. A Coulomb point-sphere potential of radius $R_{C}=5.45$ fm is added. This optical potential is fitted on elastic-scattering data around the Coulomb barrier. In all applications, we use the integer masses with $\hbar^{2}/2m_{N}=20.736$ MeV fm2 ($m_{N}$ is the nucleon mass). The $n+^{12}$C potential is chosen as in Ref. von Oertzen and Imanishi (1984), i.e. $\displaystyle V_{nC}(\boldsymbol{r})=$ $\displaystyle-\frac{V_{0}}{1+\exp[(r-r_{0})/a_{0}]}$ $\displaystyle-(\boldsymbol{\ell}\cdot\boldsymbol{s})\frac{V_{\ell s}}{r}\frac{d}{dr}\frac{1}{1+\exp[(r-r_{0})/a_{0}]},$ (48) where $V_{0}=62.70$ MeV for $\ell$ even and $50.59$ MeV for $\ell$ odd, and where $r_{0}=2.656$ fm and $a_{0}=0.705$ fm. The spin-orbit amplitude is $V_{\ell s}=28.406$ MeV. This potential reproduces the experimental energies of the first $1/2^{-},1/2^{+}$, and $5/2^{+}$ states in ${}^{13}$C. Between the target and the projectile, the same $n+^{12}$C potential is adopted for each partial wave, corresponding to the central part of (48). Once potentials (47) and (48) are determined, the model does not contain any free parameter. In Fig. 2, the ${}^{13}{\rm C}+^{12}$C elastic cross sections at $\mbox{$E_{\rm c.m.}$}=7.8$ and 14.2 MeV are shown. In each case, we consider four cases: (1) when only the local potential is included, the backward-angle enhancement of the cross section is not reproduced; (2) the non-local calculation involving the ${}^{13}$C ground state only (dashed line) reproduces the data fairly well; (3) when the $1/2^{+}$ and $5/2^{+}$ excited states are introduced (solid line), elastic scattering is not significantly modified; (4) in the $n+^{12}$C potential between the target and the projectile (dotted line), the real potential (48) is replaced by the Koning-Delaroche parametrization Koning and Delaroche (2003). Although some symmetry properties are then lost (see Sec. II.3), the influence on the ${}^{13}{\rm C}+^{12}$C cross section is weak. Figure 2: ${}^{13}{\rm C}+^{12}$C elastic cross sections (divided by the Rutherford cross section) at $\mbox{$E_{\rm c.m.}$}=7.8$ MeV (a) and 14.2 MeV (b). The data are taken from Refs. Liénard _et al._ (1995) (full dots) and Voit _et al._ (1988) (open dots). The solid lines are obtained with the $1/2^{-},1/2^{+},5/2^{+}$ states of ${}^{13}$C. The dashed (red) lines correspond to the $1/2^{-}$ ground state only. The dotted (blue) lines are obtained with different $n+^{12}$C potentials in the entrance and exit channels (see text). Panel (c) presents the amplitudes $|U^{J\pi}_{L}|$ of the scattering matrices at $\mbox{$E_{\rm c.m.}$}=7.8$ MeV, and for $J=L-1/2$. Figure 2(c) presents the amplitudes $|U^{J\pi}_{L}|$ of the scattering matrices (elastic channel) at $\mbox{$E_{\rm c.m.}$}=7.8$ MeV. We choose here $J=L-1/2$ but a similar behaviour is observed for $J=L+1/2$. Without the non-local part of the potential, the variation is smooth. As expected, the non-locality leads to a splitting between odd and even $L$ values.
This property gives rise to the backward-angle enhancement of the cross section, and justifies the use of parity-dependent optical potentials Buttle and Goldfarb (1966) to simulate non-local effects. An advantage of the present method is that it does not require any additional parameter. In addition, excited states of the $n+^{12}$C system are included in a straightforward way.

In Fig. 3, we investigate the inelastic cross sections to the 13C($1/2^{+}$) and 13C($5/2^{+}$) states, which have been measured in Ref. Fröhlich _et al._ (1984) at energies around the Coulomb barrier. The effect of non-locality is quite important. With the local potential only, the theoretical cross sections are far below the data. The calculation involves the ${}^{13}{\rm C(gs},1/2^{+},5/2^{+})+^{12}$C channels. Here the cross sections are more sensitive to the choice of the $n+^{12}$C potential. Let us emphasize that there is no fit of the cross sections. All inputs are kept identical as for elastic scattering.

Figure 3: Inelastic ${}^{13}{\rm C}+^{12}$C cross sections to the $1/2^{+}$ (a) and $5/2^{+}$ (b) states of 13C at $\mbox{$E_{\rm c.m.}$}=9.88$ MeV. The experimental data are taken from Ref. Fröhlich _et al._ (1984). Solid (dashed) lines are obtained with identical (different) $n+^{12}$C potentials in the entrance and exit channels (see text).

### III.2 The ${}^{13}{\rm N}+^{12}$C system

With the development of radioactive beams, the ${}^{13}{\rm N}+^{12}$C mirror system has attracted much attention in the literature Liénard _et al._ (1995); Imanishi _et al._ (1997). Measurements have provided some information about charge symmetry and about the parity effect Liénard _et al._ (1995). Figure 4 shows the calculated ${}^{13}{\rm N}+^{12}$C cross sections. Only the 13N ground state is bound, and it is the only state introduced in the calculation. With respect to the ${}^{13}{\rm C}+^{12}$C system, the only difference is the introduction of a Coulomb term for $p+^{12}$C (with $R_{C}=2.7$ fm). The binding energy of 13N is $-1.90$ MeV, in fair agreement with experiment ($-1.94$ MeV). Here again the model reproduces the experimental data Liénard _et al._ (1995) remarkably well.

Figure 4: ${}^{13}{\rm N}+^{12}$C elastic cross sections (divided by the Rutherford cross section) at $\mbox{$E_{\rm c.m.}$}=7.8$ (a) and 9.6 (b) MeV. The data are taken from Ref. Liénard _et al._ (1995). Solid lines correspond to the full calculation, and dashed lines to the local potential only.

### III.3 The ${}^{16}{\rm O}+^{12}$C system

In ${}^{16}{\rm O}+^{12}$C scattering, an $\alpha$ particle is exchanged between the target and the projectile. This system has been intensively investigated in the literature (see, for example, Refs. Phuc _et al._ (2019, 2018, 2021) for recent works). We consider the elastic-scattering data of Villari et al. Villari _et al._ (1989) at the typical energy $\mbox{$E_{\rm c.m.}$}=23.14$ MeV, where a backward-angle enhancement of the cross sections is observed. We use the same ${}^{12}{\rm C}+^{12}$C core-core potential (47) as in previous applications. For the $\alpha+^{12}$C system, we adopt the potentials used in Ref. Oulebsir _et al._ (2012), i.e. a Woods-Saxon potential with a range $R_{0}=4.15$ fm and a diffuseness $a=0.55$ fm. In addition to the $0^{+}$ ground state, we include the $1^{-},3^{-}$, and $2^{+}$ excited states. Since the $2^{+}$ state presents a cluster structure, $R_{0}$ is chosen larger (4.5 fm) for this state. The Coulomb potential has a point-sphere shape with a radius $R_{C}=4.15$ fm.
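For concreteness, the $\alpha+^{12}$C potential geometry just described can be sketched as follows. This is our own illustration (the Woods-Saxon and uniformly-charged-sphere Coulomb formulas are standard; the depth entered below corresponds to the $0^{+}$ value quoted in the next paragraph, and sign conventions for the depth vary in the text):

```python
import numpy as np

# Sketch of the alpha+12C potential form: a Woods-Saxon well plus a
# point-sphere Coulomb term (point charge + uniformly charged sphere).
E2 = 1.44                  # e^2 in MeV fm
Z1, Z2 = 2, 6              # charges of the alpha particle and 12C

def woods_saxon(r, V0, R0=4.15, a0=0.55):
    # V0 entered as a positive well depth here
    return -V0 / (1.0 + np.exp((r - R0) / a0))

def coulomb_point_sphere(r, Rc=4.15):
    r = np.asarray(r, dtype=float)
    inside = Z1 * Z2 * E2 / (2.0 * Rc) * (3.0 - (r / Rc) ** 2)
    outside = Z1 * Z2 * E2 / np.where(r > 0, r, np.inf)
    return np.where(r <= Rc, inside, outside)

r = np.linspace(0.1, 12.0, 200)
V = woods_saxon(r, V0=43.25) + coulomb_point_sphere(r)   # 0+ ground-state choice
```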
The depths of the potentials $V_{0}$ are adjusted to the experimental binding energies, which gives $V_{0}=-43.25,-68.95,-41.3$, and $-42.4$ MeV for the $0^{+},2^{+},1^{-}$, and $3^{-}$ states, respectively. Between the target and the projectile, the same $\alpha+^{12}$C potential is adopted, corresponding to the 16O ground state.

The elastic cross section is presented in Fig. 5(a) in different conditions. The calculation with the local potential provides a fair description of the data up to $\theta\approx 90^{\circ}$, but does not reproduce the enhancement at large angles. With the non-local term, even if the oscillations at backward angles are not exactly reproduced, the role of inelastic channels is obvious. The present model, based on the exchange of an $\alpha$ particle during the collision, cannot be expected to be perfect. Other channels are open, such as the neutron or proton transfer, but are neglected here. They are usually treated by phenomenological potentials involving additional parameters.

In the ${}^{16}{\rm O}+^{12}$C system, the role of the core-valence potential is more important than in ${}^{13}{\rm C}+^{12}$C. To assess this sensitivity, we have also used the $\alpha$ optical potential of Avrigeanu et al. Avrigeanu _et al._ (2009) (dotted line in Fig. 5(a)); the cross section is visibly affected by this choice. This sensitivity is explained by the long range of the $\alpha+^{12}$C potential, due to the Coulomb term. In this system, using a consistent $\alpha+^{12}$C potential in the entrance and in the exit channels seems more appropriate.

The amplitude of the scattering matrices $|U^{J\pi}|$ is displayed in Fig. 5(b). As for ${}^{13}{\rm C}+^{12}$C, the calculation with the non-local term provides differences between even and odd partial waves. These differences, however, are weaker than in ${}^{13}{\rm C}+^{12}$C. As observed in Ref. Phuc _et al._ (2019), the even-odd effect is stronger around the grazing angular momentum ($J\approx 18$).

Figure 5: ${}^{16}{\rm O}+^{12}$C elastic cross section (divided by the Rutherford cross section) at $\mbox{$E_{\rm c.m.}$}=23.14$ MeV (a). The data are taken from Ref. Villari _et al._ (1989). The solid lines correspond to the multichannel model. The dashed lines are obtained with the 16O ground state only, and the dotted lines with different $\alpha+^{12}$C potentials (see text). Panel (b) presents the amplitudes $|U^{J\pi}|$ of the scattering matrices.

## IV Discussion of the non-locality

### IV.1 Local equivalent potentials

The effects of the non-locality can be simulated by an equivalent local potential. For the sake of simplicity, we assume a single-channel problem. The extension to multichannel systems is simple and does not modify the conclusions. In a single-channel model, the radial Schrödinger equation reads

$\displaystyle\biggl{[}-\frac{\hbar^{2}}{2\mu}\biggl{(}\frac{d^{2}}{dR^{2}}-\frac{L(L+1)}{R^{2}}\biggr{)}+V^{J\pi}(R)-E\biggr{]}g^{J\pi}(R)$ $\displaystyle+\int W^{J\pi}(R,R^{\prime})g^{J\pi}(R^{\prime})dR^{\prime}=0,$ (49)

and can be replaced by

$\displaystyle\biggl{[}-\frac{\hbar^{2}}{2\mu}\biggl{(}\frac{d^{2}}{dR^{2}}-\frac{L(L+1)}{R^{2}}\biggr{)}+V^{J\pi}_{\rm eq}(R)-E\biggr{]}g^{J\pi}(R)=0,$ (50)

where the equivalent potential $V^{J\pi}_{\rm eq}(R)$ is given by

$\displaystyle V^{J\pi}_{\rm eq}(R)=V^{J\pi}(R)+\frac{1}{g^{J\pi}(R)}\int W^{J\pi}(R,R^{\prime})g^{J\pi}(R^{\prime})dR^{\prime}.$ (51)

This potential depends on the angular momentum, and presents singularities at the nodes of the wave functions. Thompson et al.
Thompson _et al._ (1989) have proposed to define a smooth, $J$-independent, effective potential by

$\displaystyle V^{\pi}_{\rm eff}(R)=\frac{\sum_{J}\omega^{J\pi}(R)V_{\rm eq}^{J\pi}(R)}{\sum_{J}\omega^{J\pi}(R)},$ (52)

where the weight factors $\omega^{J\pi}(R)$ are given by

$\displaystyle\omega^{J\pi}(R)=(2J+1)(1-|U^{J\pi}|^{2})|g^{J\pi}(R)|^{2}.$ (53)

In this way, the influence of the nodes is reduced and the potential (52) does not depend on $J$. However, the scattering matrices obtained with (52) are not strictly identical to those obtained with (49) or (50). A test with the cross sections must be performed to check the accuracy of the potential (52).

As we expect the equivalent local potentials to depend on parity, the potential (52) is defined for each parity. From $V^{+}_{\rm eff}(R)$ and $V^{-}_{\rm eff}(R)$, we determine central and parity-dependent potentials as

$\displaystyle V_{0}(R)=\frac{1}{2}\bigl{(}V^{+}_{\rm eff}(R)+V^{-}_{\rm eff}(R)\bigr{)},$ $\displaystyle V_{\pi}(R)=\frac{1}{2}\bigl{(}V^{+}_{\rm eff}(R)-V^{-}_{\rm eff}(R)\bigr{)}.$ (54)

The present work, based on rigorous non-local potentials, offers the possibility to investigate the parity potential which, in general, is phenomenological (see, for example, Ref. Liénard _et al._ (1995)). Notice that both $V_{0}(R)$ and $V_{\pi}(R)$ contain real and imaginary components. A parity effect can also be deduced from techniques based on data inversion Mackintosh (2019); Phuc _et al._ (2019).

### IV.2 Asymptotic form of the non-local kernels

Here we present some qualitative aspects regarding the non-local kernels, and in particular the overlap kernel ${\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})$. Let us first discuss the overlap functions $N^{K}_{cc^{\prime}}(R,R^{\prime})$ which show up in Eq. (24). To simplify the presentation, we assume that the valence particle is a neutron in an $s$ state. In that case, the core-valence wave function tends to

$\displaystyle u_{0}(r)\rightarrow C\exp(-k_{B}r),$ (55)

where $k_{B}$ is the wave number and $C$ the asymptotic normalization coefficient (ANC). To develop further, we use the expansion Buttle and Goldfarb (1966)

$\displaystyle\frac{\exp(-k|\boldsymbol{r}_{1}-\boldsymbol{r}_{2}|)}{k|\boldsymbol{r}_{1}-\boldsymbol{r}_{2}|}=$ $\displaystyle\hskip 28.45274pt\frac{2}{\pi}\sum_{\ell}(2\ell+1)i_{\ell}(kr_{<})k_{\ell}(kr_{>})P_{\ell}(\cos\theta_{r}),$ (56)

where $r_{<}=\min(r_{1},r_{2})$ and $r_{>}=\max(r_{1},r_{2})$, and where $\theta_{r}$ is the angle between $\boldsymbol{r}_{1}$ and $\boldsymbol{r}_{2}$. In this definition, $i_{\ell}(x)$ and $k_{\ell}(x)$ are modified spherical Bessel functions Abramowitz and Stegun (1972).
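Since the expansion (56) underlies the asymptotic analysis that follows, a quick numerical check is easy to set up. This is our own sketch, assuming scipy's conventions for the modified spherical Bessel functions (which include the $\pi/2$ factor used in the text):

```python
import numpy as np
from scipy.special import spherical_in, spherical_kn, eval_legendre

# Numerical check of the multipole expansion (56) of exp(-k|r1-r2|)/(k|r1-r2|).
k, r1, r2, theta = 0.8, 3.0, 5.0, 0.6          # illustrative values
r12 = np.sqrt(r1**2 + r2**2 - 2*r1*r2*np.cos(theta))
lhs = np.exp(-k*r12) / (k*r12)

r_lt, r_gt = min(r1, r2), max(r1, r2)
ells = np.arange(40)                            # truncated, rapidly converging sum
rhs = (2.0/np.pi) * np.sum((2*ells + 1)
                           * spherical_in(ells, k*r_lt)
                           * spherical_kn(ells, k*r_gt)
                           * eval_legendre(ells, np.cos(theta)))
print(lhs, rhs)    # the two values agree to high accuracy
```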
For large arguments, they tend to

$\displaystyle i_{\ell}(x)\rightarrow\frac{\exp(x)}{2x},$ $\displaystyle k_{\ell}(x)\rightarrow\pi\frac{\exp(-x)}{2x}.$ (57)

Using the expansion (56) for $u_{0}(r)$ and $u_{0}(r^{\prime})$, and using relations (23), we find, for large $(R,R^{\prime})$ values

$\displaystyle\frac{u_{0}(r)u_{0}(r^{\prime})}{rr^{\prime}}\rightarrow\sum_{K}N^{K,{\rm as}}(R,R^{\prime})P_{K}(\cos\theta_{R}).$ (58)

In the range $\alpha R^{\prime}<R<R^{\prime}/\alpha$, the asymptotic kernels are defined by

$\displaystyle N^{K,{\rm as}}(R,R^{\prime})=\frac{4}{\pi^{2}}k_{B}^{2}C^{2}(-1)^{K}\sum_{\ell_{1}\ell_{2}}(2\ell_{1}+1)(2\ell_{2}+1)$ $\displaystyle\times\langle\ell_{1}0\ell_{2}0|K0\rangle^{2}k_{\ell_{2}}(\gamma k_{B}R)k_{\ell_{1}}(\gamma k_{B}R^{\prime})$ $\displaystyle\times i_{\ell_{1}}(\alpha\gamma k_{B}R)i_{\ell_{2}}(\alpha\gamma k_{B}R^{\prime}),$ (59)

and Eq. (57) provides

$\displaystyle N^{K,{\rm as}}(R,R^{\prime})\sim\frac{1}{R^{2}R^{\prime 2}}\exp\biggl{(}-\frac{k_{B}}{1+\alpha}(R+R^{\prime})\biggr{)}.$ (60)

As $\alpha$ is in general close to 1, Eqs. (59) and (60) are valid for $R\approx R^{\prime}$. Otherwise, a similar development gives

$\displaystyle N^{K,{\rm as}}(R,R^{\prime})\sim\frac{1}{R^{2}R^{\prime 2}}\exp\biggl{(}-\frac{k_{B}}{1-\alpha}|R-R^{\prime}|\biggr{)}.$ (61)

This shows that the non-local overlap kernel presents an exponential decrease, at a rate associated with the binding energy of the projectile. When the angular momentum is not an $s$ wave, the expansion (56) can be generalized Buttle and Goldfarb (1966), but this does not change the general trend. In addition, if the transferred particle is charged, the asymptotic behaviour takes the form

$\displaystyle u_{0}(r)\rightarrow C\frac{\exp(-k_{B}r)}{r^{\eta_{B}}},$ (62)

where $\eta_{B}$ is the Sommerfeld parameter. The faster decrease can be mimicked by using (55) with a (larger) effective wave number, which simulates Coulomb effects Buttle and Goldfarb (1968). Consequently, Eq. (59) remains qualitatively valid, even for charged transferred particles.

The non-local potentials (17) also involve kinetic-energy and nucleus-nucleus potential terms. As discussed in Sec. II.B, the kinetic-energy kernel is directly deduced from a second derivative of the overlap. The asymptotic behaviour is therefore similar to (59). To determine the potential contribution, we start from definition (24). The first term in the potential $V_{Cv}(r^{\prime})$ is involved in standard DWBA calculations (post form) Satchler (1983). For a transferred neutron, the nuclear contribution in $V_{Cv}$ makes this term short-ranged. The asymptotic behaviour (60) is therefore modified, with a smaller range. In contrast, the core-core interaction $V_{CC}$ always presents a Coulomb term. One therefore expects this interaction to be dominant at large distances.

The non-local potentials in Eq. (17) are obtained from (24), (55), and angular matrix elements Satchler (1983); Shubhchintak and Descouvemont (2019). The procedure is similar to the one followed in DWBA calculations. If the internal angular momenta are taken as $\ell=\ell^{\prime}=0$, the calculation is simple, as the sum over $K$ contains a single term $K=J$.
We have

$\displaystyle{\cal N}^{J\pi}(R,R^{\prime})=$ $\displaystyle{\cal J}RR^{\prime}N^{J}(R,R^{\prime})/(2J+1),$ $\displaystyle{\cal V}^{J\pi}(R,R^{\prime})=$ $\displaystyle{\cal J}RR^{\prime}\bigl{(}V^{J}_{Cv}(R,R^{\prime})+V^{J}_{CC}(R,R^{\prime})\bigr{)}$ $\displaystyle/(2J+1).$ (63)

Equation (59) contains a phase factor $(-1)^{J}$ at large distances, which shows that the non-local potentials have opposite signs for even and odd partial waves. This effect arises from the symmetrization of the wave functions for the core exchange. As shown by von Oertzen von Oertzen (1970), it can be simulated by a local, parity-dependent potential. An application to the analysis of the ${}^{13}{\rm C}+^{12}$C and of the ${}^{13}{\rm N}+^{12}$C systems can be found in Ref. Liénard _et al._ (1995).

### IV.3 Application to ${}^{13}{\rm C}+^{12}$C

In Fig. 6, we present the effective potentials for the ${}^{13}{\rm C}+^{12}$C system at $\mbox{$E_{\rm c.m.}$}=7.8$ MeV. For the sake of simplicity, we discuss the single-channel calculation. Figure 6(a) shows the local potential (dashed lines), and the parity-dependent effective potentials (solid lines) obtained with (54). The elastic cross sections obtained with this effective potential are very close to the original cross sections (they are almost indistinguishable at the scale of a figure). As expected, the potentials present small oscillations due to the nodes in the scattering wave functions. We have repeated the calculations with different numerical conditions (channel radius, number of basis functions), and checked that the effective potentials are quite stable. Figure 6(b) displays the parity potential $V_{\pi}$ [see Eq. (54)]. The parity effect is important in the real part as well as in the imaginary part.

Figure 6: Local effective potentials for the ${}^{13}{\rm C}+^{12}$C system at $\mbox{$E_{\rm c.m.}$}=7.8$ MeV. In panel (a), the dashed curve corresponds to the local potential, and the solid curves to the local equivalent potentials (52). Panel (b) displays the parity potential $V_{\pi}(R)$ (54).

In Fig. 7, we display the different contributions to the non-local kernel $W^{J\pi}(R,R^{\prime})$ (17) for $R=R^{\prime}$ at $\mbox{$E_{\rm c.m.}$}=7.8$ MeV. Four terms are present: the overlap, the kinetic energy, and two contributions of the potential (the core-core and core-valence terms). Partial waves $J=1/2^{+}\ (L=1)$ and $J=1/2^{-}\ (L=0)$ are shown in panels (a) and (b). The change of sign between both parities is confirmed. At short distances, the main contribution comes from $V_{CC}$, but the kinetic energy is dominant at large distances. For consistency, the overlap is multiplied by $-\mbox{$E_{\rm c.m.}$}$, and represents a small contribution. The inset of panel (a) confirms the asymptotic behaviour. All terms but $V_{Cv}$ have the same exponential behaviour. The contribution associated with $V_{Cv}$ presents a faster decrease owing to the absence of the Coulomb interaction.

Figure 7: Contributions of the overlap, kinetic energy and potentials ($V_{CC}$ and $V_{Cn}$) to the non-local kernel (17) for $R=R^{\prime}$, and for $J=1/2^{+}$ (a) and $J=1/2^{-}$ (b) in the ${}^{13}{\rm C}+^{12}$C system ($\mbox{$E_{\rm c.m.}$}=7.8$ MeV). The overlap kernel is multiplied by $-\mbox{$E_{\rm c.m.}$}$. For $V_{CC}$, the real part is displayed. The inset focuses on the long-range part in a logarithmic scale.

In Fig. 8, we analyze the non-locality for $R\neq R^{\prime}$ and for $J=1/2^{+}$.
We plot the overlap and real-potential kernels as a function of $(R^{\prime}-R)$ for various $R$ values. For small $R$ values, the shape of the overlap kernel is close to a Gaussian, which justifies the Perey-Buck approximation Perey and Buck (1962). However, for $R>1$ fm, the shape is more complicated than a Gaussian factor. As expected, the main effect of the non-locality comes from $|R-R^{\prime}|\lesssim 1$ fm.

Figure 8: Non-local kernels ${\cal N}(R,R^{\prime})$ (a) and $W(R,R^{\prime})$ (real part, (b)) for the ${}^{13}{\rm C}+^{12}$C system ($J=1/2^{+}$), as a function of $(R^{\prime}-R)$ [see Eq. (17)]. The curves are plotted for different $R$ values.

### IV.4 Application to ${}^{16}{\rm O}+^{12}$C

The local effective potentials (52) for ${}^{16}{\rm O}+^{12}$C at $\mbox{$E_{\rm c.m.}$}=23.14$ MeV are presented in Fig. 9(a). Again, we limit the discussion to the single-channel system. Since the depth of these potentials is rather large, the differences with the exact local potential (16) are small, indistinguishable at the scale of the figure. Figure 9(b) shows the parity potential (54). As for ${}^{13}{\rm C}+^{12}$C, these potentials present oscillations due to the nodes of the scattering wave functions. Although the amplitude is small, the effect of the parity potential extends to large distances. This explains why this potential provides a large backward-angle enhancement of the cross section (see Fig. 5). The cross sections provided by the effective potential (52) are very close to the original cross sections of Fig. 5.

Figure 9: Local effective potentials for the ${}^{16}{\rm O}+^{12}$C system at $\mbox{$E_{\rm c.m.}$}=23.14$ MeV. In panel (a), the local potential and the equivalent potential (52) are superimposed. Panel (b) displays the parity potential $V_{\pi}(R)$ (54).

The decomposition of the non-local kernel (17) in different terms is presented in Fig. 10 for $J=0^{+}$ (a) and $J=1^{-}$ (b). The dominant contribution comes from the core-core potential $V_{CC}$. There is a cancellation effect among the overlap, kinetic-energy and $V_{C\alpha}$ contributions. At large distances, the decrease of the potentials (inset of Fig. 10(a)) is much faster than in ${}^{13}{\rm C}+^{12}$C (see Fig. 7). This is explained by Eq. (60) since $k_{B}$ is much larger in 16O than in 13C (and the coefficient $\alpha$ is smaller).

Figure 10: Contributions of the overlap, kinetic energy and potential ($V_{CC}$ and $V_{C\alpha}$) to the non-local kernel (17) for $R=R^{\prime}$, and for $J=0^{+}$ (a) and $J=1^{-}$ (b) in the ${}^{16}{\rm O}+^{12}$C system. The real part of $V_{CC}$ is shown. The inset focuses on the long-range part in a logarithmic scale.

The non-locality for $R\neq R^{\prime}$ is illustrated in Fig. 11 for $J=0^{+}$. As for ${}^{13}{\rm C}+^{12}$C, the shape of the overlap kernel is close to a Gaussian for small $R$ values, but deviates when $R$ increases.

Figure 11: Non-local kernels ${\cal N}(R,R^{\prime})$ (a) and $W(R,R^{\prime})$ (real part, (b)) for the ${}^{16}{\rm O}+^{12}$C system ($J=0^{+}$), as a function of $(R^{\prime}-R)$ [see Eq. (17)]. The curves are plotted for different $R$ values.

## V Conclusion

Our main goal is an exploratory study of a rigorous method to treat exchange effects in nucleus-nucleus scattering. Starting from a three-body model, we have derived local and non-local kernels in a coupled-channel formalism. This represents a natural extension of the three-body CDCC method.
The coupled-channel system, involving non-local potentials, is solved with the $R$-matrix theory, associated with the Lagrange-mesh technique. This permits fast and accurate calculations of the scattering matrices and of the cross sections.

We have applied the formalism to typical reactions: ${}^{13}{\rm C}+^{12}$C, ${}^{13}{\rm N}+^{12}$C and ${}^{16}{\rm O}+^{12}$C, which illustrate the transfer of a neutron, of a proton, and of an $\alpha$ particle, respectively. In each case we started from a core-core optical potential, taken from the literature, which fits ${}^{12}{\rm C}+^{12}$C elastic-scattering data. As expected, exchange effects lead to a backward-angle enhancement of the elastic cross sections. For the core-valence potential, we have two “natural” choices: either it is fitted to the spectroscopic properties of the heavy particle, or it is fitted to scattering properties. The first choice guarantees the symmetry of the scattering matrix, since the same Hamiltonian is used in the entrance and exit channels. With our examples, we have, however, shown that both options provide similar cross sections. This can be explained by the weak contribution of this potential to the non-local kernel.

We have shown in Sec. IV that the bound-state wave functions of the $C+v$ system need to be accurately determined up to large distances. If not, the calculation of the various kernels is inaccurate, and the scattering matrices are unstable. For this reason, CDCC calculations involving continuum states raise numerical difficulties with pseudostates, since pseudostates tend to zero only at very large distances. The use of bins would be preferable. In our applications, however, the binding energies of the projectile are large, and no strong continuum effects are expected.

The present work opens the path to more ambitious calculations, such as the $\alpha+^{8}$Be system, where exchange effects could affect the existing calculations Ogata _et al._ (2009). It could also be extended to more complicated systems, such as $d+^{11}$Be or ${}^{13}{\rm N}+^{13}$C, which require a four-body theory. Another application of the formalism deals with Coupled Reaction Channel (CRC) calculations where similar non-local kernels are involved Satchler (1983).

## Acknowledgments

This work was supported by the Fonds de la Recherche Scientifique - FNRS under Grant Numbers 4.45.10.08 and J.0049.19. It benefited from computational resources made available on the Tier-1 supercomputer of the Fédération Wallonie-Bruxelles, infrastructure funded by the Walloon Region under the grant agreement No. 1117545.

## Appendix A Calculation of the non-local kinetic energy

The kinetic-energy kernel ${\cal T}^{J\pi}_{cL,c^{\prime}L^{\prime}}$ is implicitly defined by

$\displaystyle\int{\cal T}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})g^{J\pi}_{c^{\prime}L^{\prime}}(R^{\prime})dR^{\prime}=$ $\displaystyle R\langle\varphi^{JM\pi}_{cL}|T_{\boldsymbol{R}}|\varphi^{JM\pi}_{c^{\prime}L^{\prime}}\dfrac{g^{J\pi}_{c^{\prime}L^{\prime}}(R^{\prime})}{R^{\prime}}\rangle,$ (64)

where the integration is performed over $\boldsymbol{r}$ and $\Omega_{R}$. An explicit expression for this kernel can be obtained by, first, writing the kinetic-energy operator $T_{\boldsymbol{R}}$ in spherical coordinates,

$\displaystyle T_{\boldsymbol{R}}=-\dfrac{\hbar^{2}}{2\mu R}\dfrac{d^{2}}{dR^{2}}R+\dfrac{\hat{L}^{2}_{\boldsymbol{R}}}{2\mu R^{2}},$ (65)

where $\hat{L}_{\boldsymbol{R}}$ is the orbital angular momentum associated with the coordinate $\boldsymbol{R}$.
The radial part of $T_{\boldsymbol{R}}$ can be moved outside the matrix element since the function $\varphi^{JM\pi}_{cL}$ is independent of $R$ and since the matrix element does not involve any integration over the coordinate $R$. Besides, since the operator $\hat{L}^{2}_{\boldsymbol{R}}$ is Hermitian, it can be applied, with a trivial effect, to the bra. We therefore obtain

$\displaystyle R\langle\varphi^{JM\pi}_{cL}|T_{\boldsymbol{R}}|\varphi^{JM\pi}_{c^{\prime}L^{\prime}}\dfrac{g^{J\pi}_{c^{\prime}L^{\prime}}(R^{\prime})}{R^{\prime}}\rangle$ $\displaystyle=$ $\displaystyle-\dfrac{\hbar^{2}}{2\mu}\left[\dfrac{d^{2}}{dR^{2}}-\dfrac{L(L+1)}{R^{2}}\right]R\langle\varphi^{JM\pi}_{cL}|\varphi^{JM\pi}_{c^{\prime}L^{\prime}}\dfrac{g^{J\pi}_{c^{\prime}L^{\prime}}(R^{\prime})}{R^{\prime}}\rangle$ (66) $\displaystyle=$ $\displaystyle\int\left\\{-\dfrac{\hbar^{2}}{2\mu}\left[\dfrac{\partial^{2}}{\partial R^{2}}-\dfrac{L(L+1)}{R^{2}}\right]{\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})\right\\}g^{J\pi}_{c^{\prime}L^{\prime}}(R^{\prime})dR^{\prime},$ (67)

where the definition (18) of the norm kernel and the Leibniz integral rule have been used. By comparison of (64) and (67), we get the relation (25), linking the kinetic-energy and norm kernels,

$\displaystyle{\cal T}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime})=-\frac{\hbar^{2}}{2\mu}\biggl{[}\frac{\partial^{2}}{\partial R^{2}}-\frac{L(L+1)}{R^{2}}\biggr{]}{\cal N}^{J\pi}_{cL,c^{\prime}L^{\prime}}(R,R^{\prime}).$ (68)

## References

* Feshbach (1958) H. Feshbach, Ann. Rev. Nucl. Sci. 8, 49 (1958).
* Dickhoff and Charity (2019) W. H. Dickhoff and R. J. Charity, Prog. Part. Nucl. Phys. 105, 252 (2019).
* Rawitscher (1974) G. H. Rawitscher, Phys. Rev. C 9, 2210 (1974).
* Kamimura _et al._ (1986) M. Kamimura, M. Yahiro, Y. Iseri, S. Sakuragi, H. Kameyama, and M. Kawai, Prog. Theor. Phys. Suppl. 89, 1 (1986).
* Austern _et al._ (1987) N. Austern, Y. Iseri, M. Kamimura, M. Kawai, G. Rawitscher, and M. Yahiro, Phys. Rep. 154, 125 (1987).
* Yahiro _et al._ (2012) M. Yahiro, T. Matsumoto, K. Minomo, T. Sumi, and S. Watanabe, Prog. Theor. Phys. Supp. 196, 87 (2012).
* Matsumoto _et al._ (2004) T. Matsumoto, E. Hiyama, K. Ogata, Y. Iseri, M. Kamimura, S. Chiba, and M. Yahiro, Phys. Rev. C 70, 061601 (2004).
* Descouvemont (2018) P. Descouvemont, Phys. Rev. C 97, 064607 (2018).
* de Diego _et al._ (2014) R. de Diego, J. M. Arias, J. A. Lay, and A. M. Moro, Phys. Rev. C 89, 064609 (2014).
* Pesudo _et al._ (2017) V. Pesudo, M. J. G. Borge, A. M. Moro, J. A. Lay, E. Nácher, J. Gómez-Camacho, O. Tengblad, L. Acosta, M. Alcorta, M. A. G. Alvarez, C. Andreoiu, P. C. Bender, R. Braid, M. Cubero, A. Di Pietro, J. P. Fernández-García, P. Figuera, M. Fisichella, B. R. Fulton, A. B. Garnsworthy, G. Hackman, U. Hager, O. S. Kirsebom, K. Kuhn, M. Lattuada, G. Marquínez-Durán, I. Martel, D. Miller, M. Moukaddam, P. D. O’Malley, A. Perea, M. M. Rajabali, A. M. Sánchez-Benítez, F. Sarazin, V. Scuderi, C. E. Svensson, C. Unsworth, and Z. M. Wang, Phys. Rev. Lett. 118, 152502 (2017).
* Ogata _et al._ (2009) K. Ogata, M. Kan, and M. Kamimura, Prog. Theor. Phys. 122, 1055 (2009).
* Descouvemont (2017) P. Descouvemont, Phys. Lett. B 772, 1 (2017).
* von Oertzen (1970) W. von Oertzen, Nucl. Phys. A 148, 529 (1970).
* Imanishi and von Oertzen (1987) B. Imanishi and W. von Oertzen, Phys. Rep. 155, 29 (1987).
* Buttle and Goldfarb (1966) P. Buttle and L. Goldfarb, Nucl. Phys. 78, 409 (1966).
* Sparenberg _et al._ (2000) J.-M. Sparenberg, D. Baye, and B. Imanishi, Phys. Rev.
C 61, 054610 (2000). * Descouvemont and Baye (2010) P. Descouvemont and D. Baye, Rep. Prog. Phys. 73, 036301 (2010). * Baye (2015) D. Baye, Phys. Rep. 565, 1 (2015). * Dohet-Eraly (2017) J. Dohet-Eraly, Eur. Phys. J. Plus 132, 362 (2017). * Cotanch (1975) S. Cotanch, Phys. Lett. B 57, 123 (1975). * Satchler (1983) G. R. Satchler, _Direct Nuclear Reactions_ (Oxford University Press, 1983). * Thompson (1988) I. J. Thompson, Comput. Phys. Rep. 7, 167 (1988). * Shubhchintak and Descouvemont (2019) Shubhchintak and P. Descouvemont, Phys. Rev. C 100, 034611 (2019). * Canto and Hussein (2013) L. F. Canto and M. S. Hussein, _Scattering Theory of Molecules, Atoms and Nuclei_ (World Scientific Publishing, Singapore, 2013). * Phuc _et al._ (2018) N. T. T. Phuc, N. H. Phuc, and D. T. Khoa, Phys. Rev. C 98, 024613 (2018). * Phuc _et al._ (2021) N. H. Phuc, D. T. Khoa, and N. T. T. Phuc, Eur. Phys. J. A 57, 7 (2021). * Phuc _et al._ (2019) N. T. T. Phuc, R. S. Mackintosh, N. H. Phuc, and D. T. Khoa, Phys. Rev. C 100, 054615 (2019). * Liénard _et al._ (1995) E. Liénard, D. Baye, T. Delbar, P. Descouvemont, P. Duhamel, W. Galster, M. Kurokawa, P. Leleux, I. Licot, P. Lipnik, C. Michotte, T. Motobayashi, A. Ninane, J. Vanhorenbeeck, and J. Vervier, Phys. Rev. C 52, 775 (1995). * Voit _et al._ (1988) H. Voit, N. Bischof, W. Tiereth, I. Weitzenfelder, W. von Oertzen, and B. Imanishi, Nucl. Phys. A 476, 491 (1988). * Imanishi _et al._ (1997) B. Imanishi, V. Denisov, and T. Motobayashi, Phys. Rev. C 55, 1946 (1997). * Baye (1986) D. Baye, Nucl. Phys. A 460, 581 (1986). * Treu _et al._ (1980) W. Treu, H. Fröhlich, W. Galster, P. Dück, and H. Voit, Phys. Rev. C 22, 2462 (1980). * von Oertzen and Imanishi (1984) W. von Oertzen and B. Imanishi, Nucl. Phys. A 424, 262 (1984). * Koning and Delaroche (2003) A. J. Koning and J. P. Delaroche, Nucl. Phys. A 713, 231 (2003). * Fröhlich _et al._ (1984) H. Fröhlich, N. Bischof, W. Tiereth, H. Voit, W. von Oertzen, and B. Imanishi, Nucl. Phys. A 420, 124 (1984). * Villari _et al._ (1989) A. C. C. Villari, A. Lépine-Szily, R. L. Filho, O. P. Filho, M. M. Obuti, J. M. O. Jr., and N. Added, Nucl. Phys. A 501, 605 (1989). * Oulebsir _et al._ (2012) N. Oulebsir, F. Hammache, P. Roussel, M. G. Pellegriti, L. Audouin, D. Beaumel, A. Bouda, P. Descouvemont, S. Fortier, L. Gaudefroy, J. Kiener, A. Lefebvre-Schuhl, and V. Tatischeff, Phys. Rev. C 85, 035804 (2012). * Avrigeanu _et al._ (2009) M. Avrigeanu, A. Obreja, F. Roman, V. Avrigeanu, and W. von Oertzen, At. Data Nucl. Data Tables 95, 501 (2009). * Thompson _et al._ (1989) I. Thompson, M. Nagarajan, J. Lilley, and M. Smithson, Nucl. Phys. A 505, 84 (1989). * Mackintosh (2019) R. S. Mackintosh, Eur. Phys. J. A 55, 147 (2019). * Abramowitz and Stegun (1972) M. Abramowitz and I. A. Stegun, _Handbook of Mathematical Functions_ (Dover, London, 1972). * Buttle and Goldfarb (1968) P. Buttle and L. Goldfarb, Nucl. Phys. A 115, 461 (1968). * Perey and Buck (1962) F. Perey and B. Buck, Nucl. Phys. 32, 353 (1962).
# Perturbations and Causality in Gaussian Latent Variable Models

Armeen Taeb and Peter Bühlmann

Seminar for Statistics, ETH Zürich

Correspondence email:<EMAIL_ADDRESS>

###### Abstract

Causal inference is a challenging problem with observational data alone. The task becomes easier when having access to data from perturbing the underlying system, even when happening in a non-randomized way: this is the setting we consider, encompassing also latent confounding variables. To identify causal relations among a collection of covariates and a response variable, existing procedures rely on at least one of the following assumptions: i) the response variable remains unperturbed, ii) the latent variables remain unperturbed, and iii) the latent effects are dense. In this paper, we examine a perturbation model for interventional data, which can be viewed as a mixed-effects linear structural causal model, over a collection of Gaussian variables that does not satisfy any of these conditions. We propose a maximum-likelihood estimator – dubbed _DirectLikelihood_ – that exploits system-wide invariances to uniquely identify the population causal structure from unspecific perturbation data, and our results carry over to linear structural causal models without requiring Gaussianity. We illustrate the utility of our framework on synthetic data as well as real data involving California reservoirs and protein expressions.

## 1 Introduction

Identifying causal relations from observational data is challenging and one can often only identify the corresponding _Markov equivalence class_ (MEC). At the opposite pole are designed randomized experiments [28]: they are the gold standard for causal inference, but randomization is often infeasible due to cost or ethical reasons. It is possible though, under some assumptions, to exploit non-specific and non-randomized interventions or perturbations which frequently arise in many datasets: this is the topic of the current paper.

In the context of observational data from structural causal models [24, 21], one possibility is to find the MEC of directed acyclic graphs under the faithfulness assumption [33] or the beta-min condition [32]. Some of the well-known algorithms for structure learning of MECs with observational data include the constraint-based PC algorithm [30], the score-based greedy algorithm GES [4], and hybrid methods that integrate constraint-based and score-based approaches such as ARGES [19]. In many applications though, we have available both observational and unspecific interventional or perturbation data, where the latter come from non-randomized experiments with unknown targets. In genomics, for example, with the advance of gene editing technologies, high-throughput interventional gene expression data is being produced [9]. Interventional data can be viewed as _perturbations_ to components of the system and can offer a substantial gain in identifiability: [13] demonstrated that combining interventional with observational data reduces ambiguity and enhances identifiability to a smaller equivalence class than the MEC, known as the I-MEC (Interventional MEC). A variety of methods have been proposed for causal structure learning from observational and interventional data. This includes the modified GES algorithm by [13] known as GIES, permutation-based causal structure learning [34], a penalized maximum-likelihood procedure in Gaussian models [15], and methods based on a causal invariance framework [17, 23] building on a concept of stability [7, 8].
For a more comprehensive list, see [10] and the references therein.

A common challenge for accurate structure learning is that there may be latent variables for which it is expensive or impossible to obtain sample observations. Such unobserved variables pose a significant difficulty as the causal graphical model structure is not closed under marginalization; therefore, the graphical structure corresponding to the marginal distribution of the observed variables consists of potentially many confounding dependencies that are induced by marginalization over the latent variables. There are causal structure learning methods that account for the presence of latent variables. In the observational setting, two prominent examples are the Fast Causal Inference (FCI) algorithm [30] and its variant RFCI [6] for DAG learning, and the two-stage deconfounding procedure [12] involving the sparse-plus-low-rank decomposition framework [3] as the first stage and a standard DAG learning procedure in the second stage. As discussed earlier, the (R)FCI algorithms or the two-stage deconfounding procedure only enable inferring a certain MEC, but not the causal parameters and structure itself. In the joint observational and interventional setting with unperturbed latent variables and only shift interventions on the observed covariates, Causal Dantzig [25] consistently estimates the causal relations of a response variable assuming that the interventions do not directly affect the response variable. Such an assumption is relaxed in the backShift procedure [27], while it still requires that the latent variables remain unperturbed for identifying the causal structure.

Guaranteed identifiability using these previous techniques for perturbation data relies on at least one of the following assumptions: i) the response variable remains unperturbed, ii) the latent variables remain unperturbed, and iii) the latent effects are dense. In this paper, we propose a modeling framework and an estimator that do not rely on any of these assumptions and yet identify the population DAG structure. Fig 1 demonstrates a toy example of our setup among $4$ observed variables $X_{1},X_{2},X_{3},X_{4}$ and latent variables $H$, where $\mathcal{A}$ represents external variables (to the graphical structure among observed and latent variables) that provide perturbations.

Figure 1: Toy example of 4 observed variables $X_{1},X_{2},X_{3},X_{4}$ and latent variables $H$ where solid lines are connections among observed variables and dotted lines are connections between observed and latent variables; left: without perturbations, right: perturbations $\mathcal{A}$ on all components indicated with red dotted lines.

We consider a Gaussian structural causal model (SCM) specifying the perturbation model and the relationship between $p$ observed variables $X\in\mathbb{R}^{p}$ and latent variables $H\in\mathbb{R}^{h}$. We consider the setting with heterogeneous grouped data from different environments $e\in{\cal E}$. Here $e$ denotes the index of an environment or a sub-population and ${\cal E}$ is the space of different observed environments. As we will formalize in Section 2.1, each group or environment $e$ corresponds to some perturbations of the underlying SCM. The grouped data, across different environments, is denoted by $(X^{e},H^{e})$ with $e\in{\cal E}$.
The SCM is parameterized by a connectivity matrix encoding the causal relationship among the observed variables, a coefficient matrix encoding the latent variable effects, and nuisance parameters involving the noise variances and perturbation magnitudes among all of the variables. A key property of this modeling framework is that the connectivity matrix and the latent variable coefficient matrix remain _invariant_ across all of the perturbation environments. With this insight, we propose a maximum-likelihood estimator – dubbed _DirectLikelihood_ – to score a given DAG structure. _DirectLikelihood_ provides a flexible framework to incorporate additional knowledge including do-interventions when their intervention-locations are known or additional information on the perturbation structure (such as statistically identical perturbations on all of the observed variables). Further, the framework can be specialized to the setting considered by [27, 25] where the latent variables are not perturbed across environments (i.e. $\mathcal{A}$ does not point to $H$ in Fig 1), or to the setting where there is no latent confounding (i.e. $H$ does not point to the covariates in Fig 1).

Besides the novel methodology, we provide conditions under which _DirectLikelihood_ correctly identifies the population DAG structure. In particular, we demonstrate that with at least two interventional environments, where one of the environments consists of sufficiently large interventions on each of the observed variables, and with the latent effects satisfying a _latent materiality_ assumption, _DirectLikelihood_ provides consistent estimates. The _latent materiality_ assumption for an environment $e$ states that the latent variables induce confounding dependencies among the observed variables; formally, there exists at least one pair of variables $(X^{e}_{k},X^{e}_{l})$ such that $X^{e}_{k}\perp\!\!\!\perp X^{e}_{l}~{}|~{}\\{X^{e}_{\backslash\\{k,l\\}},H^{e}\\}$ and $X^{e}_{k}\not\perp\!\!\!\perp X^{e}_{l}~{}|~{}X^{e}_{\backslash\\{k,l\\}}$, where $X^{e}_{\backslash\\{k,l\\}}$ denotes the collection of variables $X^{e}$ excluding $X^{e}_{k},X^{e}_{l}$. The _latent materiality_ assumption is substantially weaker than the latent denseness assumption required in the two-stage deconfounding procedure in [12], which insists that there are many pairs of variables satisfying the condition above. Our theoretical results are further specialized to the setting where the latent variables remain unperturbed across all of the environments. When the latent variables are unperturbed, _DirectLikelihood_ requires no assumption on the latent structure for identifiability, whereas the two-stage deconfounding procedure still requires latent denseness.
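To make the latent materiality condition concrete: for jointly Gaussian variables, $X_{k}\perp\!\!\!\perp X_{l}$ given the remaining variables is equivalent to a vanishing $(k,l)$ entry of the precision (inverse covariance) matrix. The following sketch (our own toy example, not from the paper) exhibits a pair that is dependent given the other observed variables but independent once $H$ is also conditioned on:

```python
import numpy as np

# Latent materiality in a toy Gaussian system: X_k and X_l are conditionally
# independent given the rest iff the (k,l) precision entry vanishes.
rng = np.random.default_rng(0)
p, h, n = 4, 1, 200000

B = np.zeros((p, p)); B[1, 0] = 0.8; B[3, 2] = 0.5              # acyclic connectivity
Gamma = np.zeros((p, h)); Gamma[0, 0] = 1.0; Gamma[2, 0] = 1.0  # H confounds X_1, X_3

H = rng.standard_normal((n, h))
eps = rng.standard_normal((n, p))
X = (H @ Gamma.T + eps) @ np.linalg.inv(np.eye(p) - B).T        # X = (I-B)^{-1}(Gamma H + eps)

K_obs = np.linalg.inv(np.cov(X.T))                   # precision of X alone
K_full = np.linalg.inv(np.cov(np.hstack([X, H]).T))  # precision of (X, H)
print(K_obs[0, 2], K_full[0, 2])  # first clearly nonzero, second approximately 0
```

Here a single weak latent connection already produces materiality for one pair, whereas latent denseness would require confounding across many pairs.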
We remark that the main focus of our analysis is on identifiability guarantees, and we discuss in Section 7 future work on understanding high-dimensional consistency properties of the _DirectLikelihood_ procedure. Further, we highlight a connection between distributional robustness and the causal parameters in our perturbation model. Specifically, we prove that the population causal parameters are minimizers of the worst-case risk over the space of DAGs and distributional shifts from a certain perturbation class. Here, the risk is measured by the Kullback-Leibler divergence between the estimated and population Gaussian distributions. As with DAG identifiability, existing results relating causality and distributional robustness rely on the stringent assumption that the perturbations do not directly affect the response variable or the latent variables [2, 26]. The results in this paper provide a more complete picture of the connection between perturbations, causality, and distributional robustness (see also Table 1).

As our final contribution, we propose an optimization procedure to solve _DirectLikelihood_ in Section 5 and demonstrate the utility of our proposed estimator with synthetic data and real data involving California reservoirs and protein expression data in Section 6. The estimates provided by _DirectLikelihood_ offer improvements over previous approaches in multiple respects. First, the causal graphical structure that is obtained by _DirectLikelihood_ is accurate even when there are interventions on the response variable and the latent variables, or when the latent effects are not dense across the observed variables. Previous methods, on the other hand, may provide inaccurate estimates in such settings. Second, _DirectLikelihood_ produces models with few false positives and a large number of true positives (with respect to graphical structure) at moderate sample sizes, as compared to competing methods like backShift that require much more data. Finally, in the analysis with real data, we demonstrate that accounting for latent effects via the _DirectLikelihood_ procedure yields models that are more sensible (with fewer spurious edges) than if latent variables are not taken into account.

The outline of this paper is as follows. In Section 2, we describe the model for observational and perturbation data and its representation as a mixed-effects model, and then present the maximum-likelihood estimator _DirectLikelihood_ to score a given DAG structure. In Section 3, we provide theoretical guarantees for the optimally scoring DAGs (scored via _DirectLikelihood_). In Section 4, the connection between the causal parameters of the proposed perturbation model and distributional robustness is explored. In Section 5, we present an optimization strategy for solving _DirectLikelihood_ for a given DAG structure and how to use it to obtain the best scoring DAGs. In Section 6, we demonstrate the utility of our approach with real and synthetic data. We conclude with future research directions in Section 7.

### 1.1 Related work

We have mentioned differences to backShift [27] and the two-stage deconfounding procedure and provide more comparisons throughout the paper. _DirectLikelihood_ is similar in spirit to approaches based on invariance principles [23, 25] as it exploits certain model parameters (such as the connectivity matrix and latent variable effects) remaining unchanged across perturbations.
However, a key difference between _DirectLikelihood_ and these other techniques – in addition to being able to incorporate perturbations on the latent variables – is that _DirectLikelihood_ models the entire system of observed variables as opposed to just the regression of the response variable on the remaining observed variables. The virtue of this system-wide modeling is that all of the variables can experience perturbations without sacrificing consistency guarantees, while the methods in [23, 25] assume that the perturbations do not directly affect the response variable. This perspective was also adopted in the backShift procedure [27], although _DirectLikelihood_ can allow for perturbations on the latent variables. For a summary of the assumptions for _DirectLikelihood_ as compared to competing methods, see Table 1.

| Method | Perturbed response variable | Unperturbed latent variables | Perturbed latent variables |
| --- | --- | --- | --- |
| IV, ICP, Causal Dantzig | x | ✓ | x |
| two-stage deconfounding | ✓ | ✓ (latent denseness) | ✓ (latent denseness) |
| backShift | ✓ | ✓ | x |
| _DirectLikelihood_ | ✓ | ✓ | ✓ (_latent materiality_) |

Table 1: Comparison of _DirectLikelihood_ with competing methods in the following settings: the response variable is perturbed, the latent variables are unperturbed, and the latent variables are perturbed. The methods are Instrumental Variables IV [1], Invariant Causal Prediction ICP [23], two-stage deconfounding [12] tailored for observational and interventional data, backShift [27] and our proposal _DirectLikelihood_.

### 1.2 Notation

We denote the identity matrix by $\mathcal{I}$, with the size being clear from context. The collection of $d\times d$ symmetric matrices is denoted by $\mathbb{S}^{d}$, positive-semidefinite matrices by $\mathbb{S}^{d}_{+}$, and the collection of strictly positive-definite matrices by $\mathbb{S}^{d}_{++}$. The collection of non-negative vectors in $\mathbb{R}^{d}$ is denoted by $\mathbb{R}_{+}^{d}$ and strictly positive vectors by $\mathbb{R}_{++}^{d}$. Given a matrix $M\in\mathbb{R}^{d\times d}$ and a set $S\subseteq\\{1,2,\dots,d\\}$, we denote the restriction of $M$ to rows and columns indexed by $S$ by $[M]_{S}$. We denote the number of nonzeros in a matrix $M\in\mathbb{R}^{p\times p}$ by $\|M\|_{\ell_{0}}$. We apply a similar notation to count the number of edges in a graph. We denote the index set of the parents of a random variable $X_{p}$ by $\text{PA}(p)$ and the index sets for the descendants and ancestors by $\text{DES}(p)$ and $\text{ANC}(p)$, respectively. Further, letting $\mathcal{D}$ be the DAG underlying a collection of variables $(X,H)$, we denote the subgraph of $\mathcal{D}$ restricted to the variables $X$ by $\mathcal{D}_{X}$ and likewise for $\mathcal{D}_{H}$. Given a matrix $M\in\mathbb{R}^{d_{1}\times d_{2}}$, we denote by $\|M\|_{2}$ the largest singular value (spectral norm). For two vectors $z_{1},z_{2}\in\mathbb{R}^{d}$, we write $z_{1}\succeq z_{2}$ to denote element-wise inequality. Finally, for random variables $V_{1},V_{2}$ and random vectors $Z$, we use the notation $\rho(V_{1},V_{2}|Z)$ to denote the partial correlation between $V_{1}$ and $V_{2}$ given $Z$.

## 2 Modeling framework and maximum-likelihood estimator

In Section 2.1, we describe a data generation process associated with the perturbation model in Fig 1. In Section 2.2, we propose _DirectLikelihood_, a maximum-likelihood estimator with respect to the marginal distribution of the observed variables.
_DirectLikelihood_ identifies estimates of the unknown perturbation effects, the latent effects, and the causal relation among the observed variables.

### 2.1 Modeling framework

We consider a directed acyclic graph $\mathcal{D}^{\star}$ whose $p+h$ nodes correspond to jointly Gaussian and centered random variables $(X,H)\subseteq\mathbb{R}^{p}\times\mathbb{R}^{h}$ (without loss of generality, we assume that the observed variables are centered), where $X$ are observable and $H$ are latent variables. As described in Section 1.1, our methodology is also applicable in the setting where one may be primarily interested in the causal effects of a response variable. As such, we distinguish $X_{p}$ as the target or response variable. Owing to the joint Gaussianity of $(X,H)$, the random pair $(X,H)$ satisfies the following (compactified) SCM:

$X=B^{\star}{X}+\Gamma^{\star}{H}+\epsilon.$ (1)

Here, $\epsilon=(\epsilon_{1},\epsilon_{2},\dots,\epsilon_{p})$ are independent Gaussian random variables independent of $H$ where $\epsilon\sim\mathcal{N}(0,\texttt{diag}(w^{1,\star}))$ for some $w^{1,\star}\in\mathbb{R}^{p}_{++}$. The connectivity matrix $B^{\star}\in\mathbb{R}^{p\times p}$ contains zeros on the diagonal and encodes the causal relationship among the observables $X$, i.e. $B^{\star}_{k,j}\neq 0$ if and only if $j\in\text{PA}_{\mathcal{D}^{\star}_{X}}(k)$. The $p$-th row vector $B^{\star}_{p,:}$ encodes the causal parents of the response variable and the magnitude of their effects. The matrix $\Gamma^{\star}$ in (1) encodes the effects of the latent variables on the observed variables, where $\Gamma^{\star}_{k,j}\neq 0$ if and only if $j\in\text{PA}_{\mathcal{D}^{\star}_{H}}(k)$. For the sake of generality, we do not immediately place any assumption on the number of latent variables $h$ or the denseness of their effects.

The compact SCM (1) describes the generating process of $X$ in the observational setting where there are no external perturbations on the system. We next describe how the data generation process changes under certain perturbations to the variables $(X,H)$. We consider perturbations that directly shift the distributions of the random variables by some noise acting additively on the system. Specifically, the perturbations $\mathcal{A}$ generate the random pair $(X^{e},H^{e})$ for each $e\in\mathcal{E}$ satisfying the following SCM:

$\begin{gathered}X^{e}=B^{\star}{X^{e}}+\Gamma^{\star}{H}^{e}+\epsilon^{e}+\delta^{e}\\\ H^{e}\sim\mathcal{N}(0,\Psi^{e,\star}),\end{gathered}$ (2)

where for every $e\in\mathcal{E}$, $\epsilon^{e}\stackrel{{\scriptstyle\text{dist}}}{{=}}\epsilon$, $(H^{e},\delta^{e},\epsilon^{e})$ are jointly independent, and the collection $(X^{e},H^{e},\delta^{e},\epsilon^{e})$ is independent across $e$. Further, $\delta^{e}\in\mathbb{R}^{p}$ is a Gaussian random vector (independent across the coordinates) that represents the additive perturbations, and $H^{e}\in\mathbb{R}^{h}$ is a Gaussian random vector that represents the perturbed latent variables with covariance $\Psi^{e,\star}\in\mathbb{S}^{h}_{++}$. Notice that $\epsilon^{e},\delta^{e}$ are in general not identifiable from the sum $\epsilon^{e}+\delta^{e}$ in (2); we specify below in Section 2.1.1 an identifiable parametrization for the terms $\epsilon^{e}+\delta^{e}$. The modeling framework (2) can also incorporate information about do-interventions, as discussed in Section 2.1.3. The compactified SCM (2) characterizes the distribution among all of the observed variables and encodes system-wide invariances.
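For concreteness, a minimal simulation of the generative model (2) might look as follows (our own sketch, anticipating the default parametrization of Section 2.1.1 below; the graph, dimensions, and perturbation scales are illustrative):

```python
import numpy as np

# Minimal sketch of sampling from the perturbation model (2).
rng = np.random.default_rng(1)
p, h = 5, 2

# Strictly lower-triangular B gives an acyclic connectivity matrix.
B = np.tril(rng.uniform(-1, 1, (p, p)), k=-1) * rng.binomial(1, 0.4, (p, p))
Gamma = rng.normal(0, 0.5, (p, h))             # latent effects Gamma*
w1 = rng.uniform(0.5, 1.0, p)                  # observational noise variances w^{1,*}

def sample_env(n, delta_var, psi):
    """Draw n samples of X^e with shift-perturbation variances delta_var
    (length-p vector, so w^e = w^1 + delta_var) and latent magnitude psi >= 0."""
    H = rng.normal(0, np.sqrt(1.0 + psi), (n, h))           # Psi^e = (1 + psi) I
    noise = rng.normal(0, np.sqrt(w1 + delta_var), (n, p))  # eps^e + delta^e
    return (H @ Gamma.T + noise) @ np.linalg.inv(np.eye(p) - B).T

X_obs = sample_env(500, delta_var=np.zeros(p), psi=0.0)      # e = 1: observational
X_int = sample_env(500, delta_var=np.full(p, 2.0), psi=1.0)  # a perturbed environment
```

Note how the shifts act dynamically: multiplying by $(\mathcal{I}-B)^{-1}$ propagates a perturbation of one coordinate to all its descendants.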
Specifically, (2) insists that for every $k=1,2,\dots,p$, the regression coefficients when regressing $X^{e}_{k}$ on the parent sets $\\{X_{j}^{e}:j\in\text{PA}_{\mathcal{D}_{X}^{\star}}(k)\\}$ and $\\{H_{l}^{e}:l\in\text{PA}_{\mathcal{D}_{H}^{\star}}(k)\\}$ remain invariant for all environments $e\in\mathcal{E}$. This is a point of departure from instrumental variable techniques or invariant causal prediction in two significant ways: 1) such methods do not allow for perturbations on the latent variables or the response variable $X_{p}$ (i.e. they assume $H^{e}\stackrel{{\scriptstyle\text{dist}}}{{=}}H$ and $\delta^{e}_{p}\equiv 0$ for all $e\in\mathcal{E}$) and 2) they only consider “local” invariances arising from the distribution $X^{e}_{p}~{}|~{}\\{(X_{j}^{e},H_{l}^{e}):j\in\text{PA}_{\mathcal{D}_{X}^{\star}}(p),l\in\text{PA}_{\mathcal{D}_{H}^{\star}}(p)\\}$. The virtue of considering a joint model over all of the variables and exploiting system-wide invariances is that we can propose a maximum-likelihood estimator _DirectLikelihood_ which identifies the population DAG structure even with perturbations on the response variable and the latent variables.

The SCM (2) is similar in spirit to previous modeling frameworks in the literature. The authors of [15] consider jointly observational and interventional Gaussian data where the interventions are limited to do-interventions and there are no latent variables. In the context of (2), this means that $\delta^{e}\equiv 0$ and $\Gamma^{\star}\equiv 0$. As such, the framework considered in this paper is a substantial generalization of [15]. Further, the backShift [27] procedure considers the linear SCM (2) with some modifications: i) there are no do-interventions, ii) there are no perturbations to the latent variables, i.e. $H^{e}\stackrel{{\scriptstyle\text{dist}}}{{=}}H$ for all $e\in\mathcal{E}$, and iii) $B^{\star}$ may correspond to a cyclic directed graph. In addition, the backShift algorithm relies on exploiting invariances of differences of estimated covariance matrices across environments. Our _DirectLikelihood_ procedure is more in the “culture of likelihood modeling and inference” and has the advantage that it can cope well with having only a few observations per group or environment. This likelihood perspective also fits much more into the context of inference for mixed models as briefly discussed in Section 2.1.2.

#### 2.1.1 Model specialization

Clearly, one cannot distinguish the parameters for $\epsilon^{e}$ and $\delta^{e}$. We thus write, for all $e\in\mathcal{E}$:

$\epsilon^{e}+\delta^{e}\sim{\cal N}(0,\mathrm{diag}(w^{e,\star})),\ w^{e,\star}\in\mathbb{R}_{++}^{p}.$

Since we are mainly interested in the connectivity matrix $B^{\star}$, the parameters $\Gamma^{\star}$, $w^{e,\star},\Psi^{e,\star}$ are nuisance parameters and we may simplify the modeling framework by restricting the parameter space for the covariances $\Psi^{e,\star}$. Our default proposal is to model the latent variables as independent and identically distributed across the environments. Specifically, we let $\Psi^{e,\star}=\Psi^{\star}+\psi^{e,\star}\mathcal{I}$ where $\psi^{e,\star}\in\mathbb{R}_{+}$.
Further, without loss of generality, $\Psi^{\star}$ can be taken to be the identity matrix by absorbing its effect on $\Gamma^{\star}$ via the transformation $\Gamma^{\star}\to\Gamma^{\star}{\Psi^{\star}}^{1/2}$ so that:

$\Psi^{e,\star}=(1+\psi^{e,\star})\mathcal{I},\ \psi^{e,\star}\in\mathbb{R}_{+}.$

Further, as an additional default setting, we assume that we have access to an observational environment ($e=1$ without loss of generality) so that:

$w^{e,\star}\succeq w^{1,\star},\ \psi^{1,\star}=0.$

Here, the inequality $w^{e,\star}\succeq w^{1,\star}$ is element-wise. In the setting where the latent variables are unperturbed across the environments, one can take $\Psi^{e,\star}\equiv\mathcal{I}$ after the transformation $\Gamma(\Psi^{e,\star})^{1/2}\to\Gamma^{\star}$. Fitting a model with equally distributed perturbations across the coordinates may be attained by the reparametrization $w^{e,\star}=w^{1,\star}+\zeta^{e,\star}{\bf 1}$ for $\zeta^{e,\star}\in\mathbb{R}_{+}$. In general, other models for the random terms $\epsilon^{e}+\delta^{e}$ and $H^{e}$ are possible. A connection to random effects modeling is discussed next.

#### 2.1.2 Interpretation as mixed-effects linear structural causal model

The framework in (2) bears some similarities to standard random effects mixed models [16]. In particular, random effects mixed models are widely employed to model grouped data, where some parameter components remain fixed and others are random. In the context of our problem, the fixed parameters are the matrices $B^{\star},\Gamma^{\star}$ and the random parameters are the shift perturbations $\delta^{e}$. For example, we can write for the response variable $Y=X_{p}$ and for simplicity in the absence of latent variables: for each environment or group $e$,

$Y^{e}=X^{e}\beta+Z^{e}b^{e}+\epsilon^{e},\ e=1,\ldots,m,$ (3)

where $Y^{e},\epsilon^{e}$ are $n^{e}\times 1$ vectors, $X^{e}$ is an $n^{e}\times p$ design matrix, here $Z^{e}=\mathcal{I}_{n^{e}}$, $n^{e}$ is the sample size within group or environment $e$, and the variables across $e$ are independent. The correspondence to (2) is as follows: $\epsilon^{e}\sim{\cal N}(0,w^{1,\star}_{p}\mathcal{I}_{n^{e}})$, $b^{e}\sim{\cal N}(0,v_{p}^{e,\star}\mathcal{I}_{n^{e}})$ (where $v_{p}^{e,\star}=w_{p}^{e,\star}-w_{p}^{1,\star}$) and $\beta={B^{\star}_{p,:}}^{T}$.

There are three main differences to standard mixed models. First, the distribution of $b^{e}\sim{\cal N}(0,v_{p}^{e,\star}\mathcal{I}_{n^{e}})$ changes with $e$ and the shrinkage effect across groups is abandoned. Second, we take a multivariate viewpoint for all the variables $X_{j}^{e}\ (j=1,\ldots,p)$ in (2): they are all modelled with random effects and can be individually written as in (3), but we allow for dependence among all the $p$ variables. Finally, a difference between our model in (1) and (2) and the standard mixed models is that the group-specific random effects, the random parameters $\delta^{e}$ in (2) or the random parameter vector $b^{e}$ in (3), act in a _dynamic way_ on the system: the effects of $\delta^{e}$ are _propagated_ through the structural equations; and in practice, the order of propagation is usually unknown. Thus, our model in (2) leads to a different way of describing group-specific perturbations, calling also for a different likelihood calculation: in fact, as we show, such dynamic perturbations make it possible to identify the causal structure.
Such identification is not possible with standard mixed models, but due to the connection pointed out above, we refer to our formalization in (2) as “mixed-effects linear structural causal modeling”. We believe that the causal inference literature has not much exploited this connection. We argue here that our random effects approach is very useful and practical for modeling perturbation data where the perturbations are believed to propagate further onto other variables in the system. #### 2.1.3 Incorporating do-interventions The perturbation model (2) provides a flexible framework to incorporate additional knowledge, including do-interventions (eliminating the connections between the perturbed variable and the corresponding parents) when their intervention locations are known. Specifically, in such a setting, (2) can be modified to: $\begin{gathered}X^{e}=\mathcal{F}_{\texttt{do}(e)^{c}}(B^{\star}{X^{e}}+\Gamma^{\star}{H}^{e}+\epsilon^{e})+\delta^{e}\\\ H^{e}\sim\mathcal{N}(0,\Psi^{\star,e}),\end{gathered}$ where $\texttt{do}(e)\subseteq\\{1,\ldots,p\\}$ denotes the do-locations in the sub-graph of $\mathcal{D}^{\star}_{X}$ and $\mathcal{F}_{S}\in\mathbb{R}^{p\times p}$ is a diagonal matrix with ones corresponding to coordinates inside $S\subseteq\\{1,\ldots,p\\}$ and zeros elsewhere. Accordingly, the _DirectLikelihood_ procedure described in Section 2.2 can be modified; see Section A in the supplementary material. ### 2.2 Scoring DAGs via _DirectLikelihood_ Let $\mathcal{D}$ be a given DAG structure among the observed variables (which we can think of as the restriction $\mathcal{D}_{X}$ of a DAG among observed and latent variables). In this section, we score this DAG via the maximum likelihood procedure _DirectLikelihood_. We suppose that there are $m$ environments, $|\mathcal{E}|=m$, and for every environment $e=1,2,\dots,m$, we have samples $\\{X^{e}_{(i)}\\}_{i=1}^{n^{e}}$ of the random pairs $(X^{e},H^{e})$ for some positive integer $n^{e}$, which are IID for each $e$ and independent across $e$. Thus, since the $X^{e}$’s are independent for $e=1,2,\dots,m$ and the samples for each environment $e$ are IID, the maximum-likelihood estimator for the DAG structure $\mathcal{D}$ is given by: $\displaystyle\operatornamewithlimits{arg\,min}_{\begin{subarray}{c}B\in\mathbb{R}^{p\times p},\Gamma\in\mathbb{R}^{p\times\bar{h}}\\\ \\{\Psi^{e}\\}_{e=1}^{m}\subseteq\mathbb{S}^{\bar{h}}_{++},\\{w^{e}\\}_{e=1}^{m}\subseteq\mathbb{R}^{p}_{++}\end{subarray}}$ $\displaystyle\sum_{e=1}^{m}\hat{\pi}^{e}\sum_{i=1}^{n^{e}}-\log\texttt{prob}\left(X^{e}_{(i)}|B,\Gamma,\Psi^{e},w^{e}\right)$ (4) subject-to $\displaystyle~{}~{}~{}B\text{ compatible with }\mathcal{D}.$ Here, $\texttt{prob}\left(X^{e}_{(i)}|B,\Gamma,\Psi^{e},w^{e}\right)$ represents the Gaussian likelihood of $X^{e}_{(i)}$ given parameters $B,\Gamma,\Psi^{e},w^{e}$; $\bar{h}\leq p$; the constraint $B\text{ compatible with }\mathcal{D}$ ensures that the estimated $B$ has its support restricted to the structure of $\mathcal{D}$, i.e. $B_{i,j}\neq 0$ only if $j\to i$ in $\mathcal{D}$; $\hat{\pi}^{e}=\frac{n^{e}}{\sum_{e=1}^{m}n^{e}}$; and $w^{e}$ is a surrogate for the variances of the sum $\delta^{e}+\epsilon^{e}$. 
The maximum-likelihood estimator (4) can be rewritten as: $\displaystyle(\hat{{B}},\hat{{\Gamma}},\\{(\hat{{\Psi}}^{e},\hat{{w}}^{e})\\}_{e=1}^{m})=\operatornamewithlimits{arg\,min}_{\begin{subarray}{c}B\in\mathbb{R}^{p\times p},\Gamma\in\mathbb{R}^{p\times\bar{h}}\\\ \\{\Psi^{e}\\}_{e=1}^{m}\subseteq\mathbb{S}^{\bar{h}}_{++},\\{w^{e}\\}_{e=1}^{m}\subseteq\mathbb{R}^{p}_{++}\end{subarray}}$ $\displaystyle\sum_{e=1}^{m}\hat{\pi}^{e}\ell(B,\Gamma,\Psi^{e},w^{e};\hat{\Sigma}^{e}),$ (5) subject-to $\displaystyle~{}~{}~{}B\text{ compatible with }\mathcal{D}$ where $\displaystyle\ell(\cdot)=\log\det\left(\texttt{diag}(w^{e})+\Gamma\Psi^{e}\Gamma^{T}\right)+\mathrm{trace}\left(\left[\texttt{diag}(w^{e})+\Gamma\Psi^{e}\Gamma^{T}\right]^{-1}(\mathcal{I}-B)\hat{\Sigma}^{e}(\mathcal{I}-B)^{T}\right),$ and $\hat{\Sigma}^{e}$ is the sample covariance matrix of the data $\\{X^{e}_{(i)}\\}_{i=1}^{n^{e}}$. The inputs to the program (5) are the sample covariance matrices $\hat{\Sigma}^{e}$ and the estimate $\bar{h}$ for the number of latent variables. We note that the _DirectLikelihood_ estimator can be specialized to different modeling options based on appropriate reparametrization of the nuisance parameters $\Psi^{e},w^{e}$ in (5). For example, in our default setting of IID latent variables with the environment $e=1$ being observational (see Section 2.1.1), we add the following constraints to (5): $\displaystyle\Psi^{e}=(1+\psi^{e})\mathcal{I}\text{ with }\psi^{e}\in\mathbb{R}_{+}\text{ for }e=1,\dots,m$ $\displaystyle w^{e}\succeq w^{1}\text{ for }e=2,\dots,m;~{}\psi^{1}=0.$ Given estimates $(\hat{{B}},\hat{{\Gamma}},\\{\hat{{\Psi}}^{e}\\}_{e=1}^{m},\\{\hat{{w}}^{e}\\}_{e=1}^{m})$, our score for the DAG $\mathcal{D}$ is: $\text{score}_{\lambda}(\mathcal{D})=\sum_{e=1}^{m}\hat{\pi}^{e}\ell(\hat{{B}},\hat{{\Gamma}},\hat{{\Psi}}^{e},\hat{{w}}^{e};\hat{\Sigma}^{e})+\lambda\|\text{moral}(\mathcal{D})\|_{\ell_{0}}.$ (6) Here, $\text{moral}(\mathcal{D})$ denotes the moralization of $\mathcal{D}$, which forms an undirected graph from $\mathcal{D}$ by adding edges between nodes that have common children; $\lambda>0$ is a regularization parameter; and $\lambda\|\text{moral}(\mathcal{D})\|_{\ell_{0}}$ is akin to the Bayesian Information Criterion (BIC) score that prevents overfitting by incorporating the denseness of the moral graph of $\mathcal{D}$ in the likelihood score. In principle, a collection of DAGs can each be individually scored via (6) to find the best fit to the data. We remark that regularization terms controlling for the complexity of estimated DAGs are commonly employed in structural causal learning (see [10] and the references therein). A classically known fact is that in a single environment setting, the moral graphs of the DAGs in a Markov equivalence class have the same cardinality [33]. In the context of this paper with perturbations, incorporating the sparsity of the moral graph plays a central role in our theoretical analysis for proving identifiability. 
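For concreteness, below is a minimal numpy sketch of the per-environment objective $\ell$ in (5) and the penalized score in (6); the moralization routine is our own simplified rendering of the definitions above, and all function names are ours.

```python
import numpy as np

def neg_loglik(B, Gamma, Psi_e, w_e, Sigma_hat_e):
    """Objective l(B, Gamma, Psi^e, w^e; Sigma_hat^e) from (5)."""
    p = B.shape[0]
    C = np.diag(w_e) + Gamma @ Psi_e @ Gamma.T  # diag(w^e) + Gamma Psi^e Gamma^T
    R = (np.eye(p) - B) @ Sigma_hat_e @ (np.eye(p) - B).T
    return np.linalg.slogdet(C)[1] + np.trace(np.linalg.solve(C, R))

def moral_l0(support):
    """Edge count of moral(D); support[i, j] = True iff j -> i in D."""
    M = support | support.T                      # undirected skeleton of D
    for child in range(support.shape[0]):
        pa = np.flatnonzero(support[child])
        for a in pa:                             # marry common parents
            for b in pa:
                if a != b:
                    M[a, b] = True
    return int(np.triu(M, 1).sum())

def score(estimates, pi_hat, Sigma_hats, support, lam):
    """Penalized score (6); estimates[e] = (B, Gamma, Psi^e, w^e)."""
    ll = sum(pi * neg_loglik(*est, S)
             for pi, est, S in zip(pi_hat, estimates, Sigma_hats))
    return ll + lam * moral_l0(support)
```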
In comparison to the _DirectLikelihood_ procedure, backShift [27] fits the SCM (2) (with some restrictions outlined in Section 2.1) by performing joint diagonalization of the differences of sample covariance matrices. _DirectLikelihood_ allows for much more modeling flexibility. First, in contrast to backShift, where the latent effects are subtracted by computing the difference of covariances, _DirectLikelihood_ explicitly models these effects. This feature of _DirectLikelihood_ enables the possibility of perturbations to the latent variables and a means to control the number of estimated latent variables (as opposed to an arbitrary number of latent variables with backShift). We discuss in Section 3 that controlling the number of latent variables may lead to identifiability using _DirectLikelihood_ with a single interventional environment, whereas backShift is guaranteed to fail. Second, _DirectLikelihood_ also models the perturbation magnitudes in each environment, allowing for the flexibility of constraining the perturbation magnitudes to improve estimation accuracy. Finally, _DirectLikelihood_ allows pooling of information over different environments $e$ for the parameter $B$ of interest: this enables _DirectLikelihood_ to be used with only a few sample points per environment. ### 2.3 Beyond Gaussianity The _DirectLikelihood_ estimator (5) fits a Gaussian perturbation model (2) to the data. However, the perturbation data of the observed variables may be non-Gaussian but satisfy the linear SCM (2). In particular, the random variables $H^{e},\delta^{e}$ may be non-Gaussian while still inducing a linear relationship with the observed variables $X^{e}$. Nonetheless, since the _DirectLikelihood_ estimator only operates on second moments, one may still use the _DirectLikelihood_ procedure to find the best scoring DAGs and the associated connectivity matrices without compromising the identifiability guarantees shown in Section 3, which still imply the corresponding estimation consistency. Further, we empirically explore the robustness of the _DirectLikelihood_ procedure to non-Gaussianity as well as other model misspecifications via numerical experiments in supplementary material Section G. ## 3 Theoretical properties: identifiability via _DirectLikelihood_ We next investigate the theoretical properties of the _DirectLikelihood_ procedure. The main theorem in this section (Theorem 1) considers the general setting with perturbed latent variables and establishes identifiability properties under some population assumptions. Subsequently, Theorem 2 in Section 3.1 analyzes _DirectLikelihood_ under the specialization that the latent variables are unperturbed. Throughout, the notation with $\star$ indicates the true underlying population objects which we aim to estimate from data. Setup: We consider the perturbation model in (2) with a population connectivity matrix $B^{\star}\in\mathbb{R}^{p\times p}$ and latent effects coefficient matrix $\Gamma^{\star}\in\mathbb{R}^{p\times h}$. For every environment $e$, the random vector $H^{e}$ has a covariance matrix $\Psi^{e,\star}\in\mathbb{S}^{h}_{++}$ and the random vector $\epsilon^{e}+\delta^{e}$ has a diagonal covariance matrix $\texttt{diag}(w^{e,\star})$ for $w^{e,\star}\in\mathbb{R}^{p}_{+}$. In the subsequent discussion, we allow for $H^{e},\delta^{e},$ and $\epsilon$ to be non-Gaussian random vectors. As prescribed in Section 2.1.1 but not requiring Gaussianity, we assume that the latent variables are independent and identically distributed, i.e. $\Psi^{e,\star}=(1+\psi^{e,\star})\mathcal{I}$ with $\psi^{e,\star}\in\mathbb{R}_{+}$, and that for every environment $e=1,2,\dots,m$, we have IID data $\\{X^{e}_{(i)}\\}_{i=1}^{n^{e}}\subseteq\mathbb{R}^{p}$, where $e=1$ is an observational environment (our theoretical results can be extended to the settings with non-IID latent variables and perturbations in every environment). 
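Since the procedure operates on second moments, the relevant population object for each environment is the implied covariance of $X^{e}$; a small sketch under the default IID latent model (our own illustration, with a hypothetical helper name):

```python
import numpy as np

def population_covariance(B_star, Gamma_star, w_e_star, psi_e_star):
    """Sigma^{e,*} = (I - B*)^{-1} (diag(w^{e,*})
       + (1 + psi^{e,*}) Gamma* Gamma*^T) (I - B*)^{-T}, following the setup."""
    p = B_star.shape[0]
    inner = np.diag(w_e_star) + (1.0 + psi_e_star) * (Gamma_star @ Gamma_star.T)
    A = np.linalg.inv(np.eye(p) - B_star)
    return A @ inner @ A.T
```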
To score a given DAG $\mathcal{D}$, we consider the modified _DirectLikelihood_ estimator (5) in population: $\displaystyle\min_{\begin{subarray}{c}B\in\mathbb{R}^{p\times p},\Gamma\in\mathbb{R}^{p\times\bar{h}}\\\ \\{(\psi^{e},w^{e})\\}_{e=1}^{m}\subseteq\mathbb{R}_{+}\times\mathbb{R}^{p}_{++}\end{subarray}}$ $\displaystyle\sum_{e=1}^{m}\pi^{e,\star}\ell(B,\Gamma,(1+\psi^{e})\mathcal{I},w^{e};{\Sigma}^{e,\star})$ (7) subject-to $\displaystyle~{}~{}~{}B\text{ compatible with }\mathcal{D}~{}~{};~{}\psi^{e}\leq C_{\psi}\text{ for }e=1,\ldots,m$ $\displaystyle~{}~{}~{}\psi^{1}=0~{};~{}w^{e}\succeq w^{1}\text{ for }e=2,\dots,m.$ Comparing (7) to the _DirectLikelihood_ estimator (5), the reparametrization $\Psi^{e}\to(1+\psi^{e})\mathcal{I}$ accounts for the latent variables being IID, and the constraints $\psi^{1}=0$ and $w^{e}\succeq w^{1}$ for $e=2,\dots,m$ account for $e=1$ being an observational environment. Further, the constraint $\psi^{e}\leq C_{\psi}$ bounds the strength of the latent perturbations for a user-specified parameter $C_{\psi}\geq 0$. We consider optimally scoring DAG(s) with their associated connectivity matrices: $\displaystyle{\mathcal{D}}_{\text{opt}}=\operatornamewithlimits{arg\,min}_{\text{DAG }\mathcal{D}}\text{score}_{\lambda=0}(\mathcal{D})~{}~{};~{}~{}{B}_{\text{opt}}:\text{ associated connectivity matrix(ces)}.$ (8) Here, $\text{score}_{\lambda=0}(\mathcal{D})$ is the achieved minimum in (7). It is the analogue of (6) but using the population covariance matrix $\Sigma^{e,\star}$ and the population optimizers from (7). The sample _DirectLikelihood_ procedure replaces ${\Sigma}^{e,\star}$ and $\pi^{e,\star}$ in (7) with the sample covariance matrix $\hat{\Sigma}^{e}$ and the empirical mixture coefficients $\hat{\pi}^{e}$, respectively. Further, in the sample setting, the regularization parameter $\lambda$ in the score evaluation (6) must be tuned. Using the sample quantities as described above, we denote by $\displaystyle\hat{\mathcal{D}}_{\text{opt}},\ \ \hat{B}_{\text{opt}}$ (9) the optimal scoring DAGs and connectivity matrices in the sample version. Our objective is to demonstrate that under some assumptions, $\mathcal{D}_{\text{opt}}=\mathcal{D}_{X}^{\star}$, $B_{\text{opt}}=B^{\star}$, and, in the limit of the sample sizes of all environments tending to infinity, $\hat{\mathcal{D}}_{\text{opt}}\to\mathcal{D}_{X}^{\star}$ and $\hat{B}_{\text{opt}}\to B^{\star}$ in probability for an appropriate choice of $\lambda$. Our consistency results are in the general setting where there are perturbations on the latent variables and require an assumption on the latent variable effects, dubbed _latent materiality_, which is formalized below: ###### Definition 1 (_latent materiality_ for $e\in\mathcal{E}$). The random variables $(X^{e},H^{e})$ satisfy _latent materiality_ if there exists a pair $k,l$ such that: $\rho(X_{k}^{e},X_{l}^{e}|X_{\backslash\\{k,l\\}}^{e},H^{e})=0~{}\&~{}\rho(X_{k}^{e},X_{l}^{e}|X_{\backslash\\{k,l\\}}^{e})\neq 0.$ In words, Definition 1 states that the latent variables induce “some” confounding dependencies among the observed variables in environment $e\in\mathcal{E}$. In comparison, the latent denseness assumption needed for consistency of the two-stage deconfounding procedures [3, 12] requires that the latent variables induce “many” confounding dependencies. As such, _latent materiality_ is a strictly (and often substantially) weaker condition than the denseness assumption required for the success of two-stage deconfounding. 
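For a Gaussian model, Definition 1 can be checked directly from precision matrices, since the partial correlation of two coordinates given all remaining ones is a normalized entry of the inverse covariance; a minimal sketch (our own, assuming joint Gaussianity and using hypothetical helper names):

```python
import numpy as np

def partial_corr(K, k, l):
    """Partial correlation of coordinates k, l given all others, from precision K."""
    return -K[k, l] / np.sqrt(K[k, k] * K[l, l])

def latent_materiality(Sigma_joint, p, tol=1e-8):
    """Check Definition 1. Sigma_joint is Cov((X^e, H^e)); the first p coordinates
    are X^e. Returns True if some pair (k, l) is conditionally uncorrelated given
    (X_rest, H^e) but conditionally correlated given X_rest alone."""
    K_joint = np.linalg.inv(Sigma_joint)        # conditioning on X_rest and H^e
    K_obs = np.linalg.inv(Sigma_joint[:p, :p])  # conditioning on X_rest only
    for k in range(p):
        for l in range(k + 1, p):
            if (abs(partial_corr(K_joint, k, l)) < tol
                    and abs(partial_corr(K_obs, k, l)) > tol):
                return True
    return False
```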
We investigate whether _DirectLikelihood_ is able to identify the population connectivity matrix $B^{\star}$ under this weaker condition, and answer in the affirmative under appropriate conditions on the strength and heterogeneity of the interventions. We provide two sets of assumptions that lead to identifiability. The first set requires two interventional environments that have sufficiently large interventions on the observed variables, and the second set requires two interventional environments where one consists of much stronger interventions on the observed variables than the other. These assumptions, labeled jointly as (10), are described below, where the observational environment is denoted by $e=1$ and the two interventional environments are denoted by $e=2,3$:

Assumption 1 (non-vanishing mixture weights): $\pi^{e,\star}>0$ for $e=1,2,3$. (10)

Assumption 2 (heterogeneity of perturbations for $e=2,3$): $\frac{w^{2,\star}_{k}-(1+\psi^{2,\star})w^{1,\star}_{k}}{w^{2,\star}_{l}-(1+\psi^{2,\star})w^{1,\star}_{l}}\neq\frac{w^{3,\star}_{k}-(1+\psi^{3,\star})w^{1,\star}_{k}}{w^{3,\star}_{l}-(1+\psi^{3,\star})w^{1,\star}_{l}}$ for all $k\neq{l}$.

Assumption 3 (_latent materiality_): the condition in Definition 1 holds for environments $e=2,3$.

Assumption 4 (sufficiently large interventions on the observed variables for $e=2,3$): $\frac{\min_{k}(w^{e,\star}_{k})^{2}}{\max_{k}w^{e,\star}_{k}}>8{\kappa^{\star}}(1+2C_{\psi})^{2}(1+\|w^{1,\star}\|_{\infty})(1+\|\Gamma^{\star}\|_{2}^{2}+\|\Gamma^{\star}\|_{2}^{4})$.

Assumption 2’ (heterogeneity of perturbations for $e=2,3$): $\frac{w^{3,\star}_{k}-(1+\psi^{3,\star})w^{1,\star}_{k}}{w^{3,\star}_{l}-(1+\psi^{3,\star})w^{1,\star}_{l}}\neq\frac{w^{3,\star}_{k}-\frac{1+\psi^{3,\star}}{1+\psi^{2,\star}}w^{2,\star}_{k}}{w^{3,\star}_{l}-\frac{1+\psi^{3,\star}}{1+\psi^{2,\star}}w^{2,\star}_{l}}$ for all $k\neq{l}$.

Assumption 3’ (_latent materiality_): the condition in Definition 1 holds for environment $e=3$.

Assumption 4’ (sufficiently large interventions on the observed variables for $e=3$): $\frac{\min_{k}(w^{e,\star}_{k})^{2}}{\max_{k}w^{e,\star}_{k}}>8{\kappa^{\star}}(1+2C_{\psi})^{2}(1+\|w^{2,\star}\|_{\infty})(1+\|\Gamma^{\star}\|_{2}^{2}+\|\Gamma^{\star}\|_{2}^{4})$.

Here, $\kappa^{\star}\equiv\frac{1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2}}{1+\min_{i}\|B^{\star}_{:,i}\|_{2}^{2}}$ is the quantity appearing in Assumptions 4 and 4’ of (10). Assumptions 1-4 or 1 $\&$ 2’-4’ in (10) impose conditions on the population quantities associated with the environments $e=1,2,3$. In particular, Assumption 1 in (10) requires that the contribution of each environment does not vanish in the large data limit; Assumptions 2 and 2’ in (10) ensure that the perturbations are heterogeneous. In principle, the interventions on the observed variables in the environments $e=2,3$ may come from identical distributions (i.e. $w^{2,\star}=w^{3,\star}$), or the interventions in one of them may even vanish (i.e. $w^{2,\star}=w^{1,\star}$) with different latent variable perturbations (i.e. $\psi^{2,\star}\neq\psi^{3,\star}$), without compromising Assumption 2 or 2’ in (10). 
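Assumption 2 amounts to a finite set of ratio inequalities, which can be verified numerically for candidate parameter values; a small sketch (our own, using cross-multiplication to avoid division by zero):

```python
import numpy as np

def heterogeneity_holds(w1, w2, w3, psi2, psi3, tol=1e-10):
    """Check Assumption 2 in (10) for all pairs k != l."""
    d2 = w2 - (1.0 + psi2) * w1  # numerators/denominators for e = 2
    d3 = w3 - (1.0 + psi3) * w1  # ... and for e = 3
    p = len(w1)
    for k in range(p):
        for l in range(p):
            if k != l and abs(d2[k] * d3[l] - d3[k] * d2[l]) < tol:
                return False     # the two ratios coincide for this pair
    return True
```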
Additionally, one can show that if the parameters $w^{3,\star},w^{2,\star},w^{1,\star}$ and $\psi^{2,\star},\psi^{3,\star}$ are drawn from continuous distributions, Assumptions 2 and 2’ in (10) are satisfied almost surely. Assumptions 3 and 3’ in (10) insist that _latent materiality_ in Definition 1 is satisfied, so that the latent variables induce at least a single spurious dependency among the observed variables. Finally, Assumptions 4 and 4’ in (10) require that the perturbations on the observed variables are sufficiently large. This is akin to the strong-instruments assumption in the instrumental variables literature [1]. Given Assumptions 1-4 or Assumptions 1 $\&$ 2’-4’ in (10), we first analyze the theoretical properties of the population _DirectLikelihood_ procedure. ###### Theorem 1 (Identifiability in population: perturbed latent variables). Suppose that the user-specified parameters $\bar{h}$ and $C_{\psi}$ in (7) are chosen conservatively so that $\bar{h}\geq\text{dim}(H)$ and $C_{\psi}\geq\psi^{e,\star}$ for all $e=1,2,\dots,m$. Under Assumptions 1-4 or Assumptions 1 $\&$ 2’-4’ in (10), the following are satisfied for $\mathcal{D}_{\text{opt}}$ in (8): 1. 1. $\mathcal{D}^{\star}_{X}\in{\mathcal{D}}_{\text{opt}}$ and any other optimum $\mathcal{D}\in{\mathcal{D}}_{\text{opt}}$ satisfies: $\text{moral}(\mathcal{D}^{\star}_{X})\subseteq\text{moral}(\mathcal{D})$. 2. 2. The optimum of $\arg\min_{\mathcal{D}\in{\mathcal{D}}_{\text{opt}}}\|\text{moral}(\mathcal{D})\|_{\ell_{0}}$ is unique and equal to $\mathcal{D}^{\star}_{X}$. Further, the associated connectivity matrix is equal to $B^{\star}$. The proof is presented in the supplementary material Section B. The first assertion in Theorem 1 states that the moral graph of any optimum $\mathcal{D}\in\mathcal{D}_{\text{opt}}$ of the _DirectLikelihood_ procedure is a superset of the moral graph of $\mathcal{D}^{\star}_{X}$, and the second assertion states that the optimum yielding the sparsest moral graph is unique, with connectivity matrix equal to $B^{\star}$. These statements do not guarantee recovering the other model parameters, viewed here as nuisance components, including $\Gamma^{\star}$ and $\\{(\psi^{e,\star},w^{e,\star})\\}_{e=1}^{m}$. However, under additional assumptions, namely $\bar{h}=\text{dim}(H)$ and incoherence of the subspace $\text{col-space}(\Gamma^{\star})$, recovery of $\Gamma^{\star}{\Gamma^{\star}}^{T}$ and $\\{(\psi^{e,\star},w^{e,\star})\\}_{e=1}^{m}$ can be shown. We note that Assumptions 1-4 or Assumptions 1 $\&$ 2’-4’ in (10) are sufficient conditions for identifiability and are generally not necessary. As an example, we show in supplementary material Section C that identifiability cannot be achieved with only a single interventional environment if $\bar{h}=p$ (e.g. the most conservative choice for the number of latent variables). However, we also demonstrate that if $\bar{h}<p$, _DirectLikelihood_ will attain identifiability under certain configurations of model parameters (i.e. dense latent effects with a sparse population DAG $\mathcal{D}_{X}^{\star}$). Thus, Assumptions 1-4 or Assumptions 1 $\&$ 2’-4’ in (10) serve as protection for arbitrary population DAG structures and a general class of model parameters. We believe that relaxing these assumptions while retaining identifiability guarantees is an interesting direction for future research. The virtue of incorporating the regularization term $\lambda\|\text{moral}(\mathcal{D})\|_{\ell_{0}}$ in (6) is that in the large data limit, this penalty term encourages sparser moral graphs. 
Thus, in conjunction with the results of Theorem 1, we demonstrate that in the large data limit, $\hat{\mathcal{D}}_{\text{opt}}$ and $\hat{B}_{\text{opt}}$ converge to $\mathcal{D}^{\star}_{X}$ and $B^{\star}$, respectively. To appeal to standard empirical process theory results, we constrain the parameter space to be compact, as described in the corollary below: ###### Corollary 1 (Asymptotic consistency for perturbed latent variables). Consider the sample version of the _DirectLikelihood_ procedure in (7) with the compactness constraints $\max\\{1/{\min_{k}w^{e}_{k}},\|B\|_{2}\\}\leq C_{\text{comp}}$ for every $e=1,2,\dots,m$, where $C_{\text{comp}}>\max\\{{1}/{\min_{k}w^{e,\star}_{k}},\|B^{\star}\|_{2}\\}$ so that the true parameters are in the feasible region. Further, let $\lambda\sim\mathcal{O}\left({\log(\sum_{e=1}^{m}n^{e})}/{\sum_{e=1}^{m}n^{e}}\right)$ in (6). Under the conditions in Theorem 1, the following are satisfied for $\hat{\mathcal{D}}_{\text{opt}}$ and $\hat{B}_{\text{opt}}$ in (9): $\hat{\mathcal{D}}_{\text{opt}}\to\mathcal{D}^{\star}_{X}$ and $\hat{B}_{\text{opt}}\to B^{\star}$, in probability, as $n^{e}\to\infty$ for $e=1,2,3$. The proof of Corollary 1 is a straightforward consequence of Theorem 1 and is omitted for brevity. The combined results of Theorem 1 and Corollary 1 state that under perturbations that are sufficiently different across environments and the _latent materiality_ condition, two interventional environments suffice for consistent estimation. Remark 1: Assumptions 1-4 or Assumptions 1 $\&$ 2’-4’ in (10) needed for identifiability suggest that perturbations on the latent variables can improve identifiability. Specifically, the perturbations on the observed variables in one interventional environment may be statistically identical to those in another environment, or even be completely equal to zero, and still preserve identifiability as long as the latent variables have been perturbed. Remark 2: As described in Section 2.1, the perturbation model (2) offers flexibility with respect to many components of the model, such as the structure of the perturbations on the observed or latent variables. In particular, one may fit to data the perturbation model (2) where the perturbation magnitudes are equal across the coordinates, e.g. $\texttt{diag}(w^{e,\star})\propto\mathcal{I}$. We demonstrate in the supplementary material Section D that _DirectLikelihood_, under assumptions similar to those in (10), provides consistent estimators in this setting. Thus, in principle, one may have only two additional perturbation parameters per environment: a scalar for the latent variables and a scalar for the observed variables. As a point of contrast, in the setting where the perturbations among the observed variables may vary, there are $p+1$ new parameters for each environment. The substantial reduction in the number of parameters can lead to better statistical properties in practice. ### 3.1 Specializations: unperturbed latent variables We next analyze the identifiability guarantees of the _DirectLikelihood_ procedure when the latent variables remain unperturbed across the environments, i.e. the perturbation $\mathcal{A}$ does not point to $H$ in Fig 1. Specifically, we consider the setup described in the beginning of Section 3 with the modification that $\psi^{e,\star}=0$. Thus, we also modify the _DirectLikelihood_ estimator (7) by setting $\psi^{e}\equiv 0$. 
We further consider an arbitrary latent effects matrix $\Gamma^{\star}$, where the two-stage deconfounding procedure will not perform well, since latent denseness may not be satisfied. We demonstrate, on the other hand, that under sufficiently strong interventions, the connectivity matrix that attains the optimum score via _DirectLikelihood_ in population is unique and equal to $B^{\star}$. ###### Theorem 2 (Identifiability in population: unperturbed latent variables). Let $\bar{h}\geq\text{dim}(H)$ in the _DirectLikelihood_ estimator (7). Letting $S\subseteq\\{1,2,\dots,p\\}$ encode the location of perturbations, suppose that Assumption 2 in (10) is modified to $\frac{w^{2,\star}_{k}-w^{1,\star}_{k}}{w^{2,\star}_{l}-w^{1,\star}_{l}}\neq\frac{w^{3,\star}_{k}-w^{1,\star}_{k}}{w^{3,\star}_{l}-w^{1,\star}_{l}}$ for all $k,l\in S,k\neq l$, and Assumption 4 in (10) is modified to $w^{e,\star}_{k}>w^{1,\star}_{k}$ for $e=2,3$ and all $k\in S$. Then, under Assumption 1 in (10) and the modified Assumptions 2 and 4, we have for $\mathcal{D}_{\text{opt}}$ and $B_{\text{opt}}$ as in (8): 1. (a) ${\mathcal{D}}_{\text{opt}}=\mathcal{D}^{\star}_{X}$ and ${B}_{\text{opt}}=B^{\star}$ if ${S}=\\{1,\ldots,p\\}$. 2. (b) Any optimum $B\in{B}_{\text{opt}}$ satisfies $B_{p,:}=B^{\star}_{p,:}$ if $\texttt{ANC}(p)\subseteq S$ or $\texttt{DES}(p)\cup\\{p\\}\subseteq S$. 3. (c) $\bar{B}=\arg\min_{B\in{B}_{\text{opt}}}\|B\|_{\ell_{0}}$ satisfies $\bar{B}_{p,:}=B^{\star}_{p,:}$ if $\texttt{PA}(p)\cup\\{p\\}\subseteq S$ and $\mathcal{D}^{\star}_{X}$ is faithful to the distribution of $X|H$. The proof of Theorem 2 is similar in nature to that of Theorem 1 and can be found in the supplementary material Section E. Further, analogous to Corollary 1, one can readily show the large-sample convergence of the sample _DirectLikelihood_ to its population version, although we omit this for brevity. Remark 3: The conditions needed for identifiability in the unperturbed latent variable setting (Theorem 2) differ from those in the perturbed setting (Theorem 1) in multiple ways. First, there are no conditions on the strength of perturbations on the observed variables. Further, the latent coefficient matrix $\Gamma^{\star}$ may be arbitrary, without needing conditions like _latent materiality_. Finally, the setting with unperturbed latent variables requires two interventional environments where all observed variables are perturbed, whereas the setting with perturbed latent variables only requires a single environment with perturbations on all the observed variables and another environment where the latent variables are perturbed, highlighting that perturbations on the latent variables are useful for identifiability. Remark 4: Theorem 2(a) is similar in nature to the backShift procedure [27]. Nonetheless, _DirectLikelihood_ provides additional flexibility, such as controlling the number of latent variables, incorporating do-interventions, and imposing structure on the strength of shift interventions, which leads to more desirable statistical properties. As an example, a necessary condition for identifiability using the backShift procedure is that there are at least two interventional environments. We demonstrate in supplementary material Section C that this is also a necessary condition with _DirectLikelihood_ if $\bar{h}=p$. However, under $\bar{h}<p$, _DirectLikelihood_ may attain identifiability with only a single interventional environment. 
As another example, a single interventional environment consisting of the same magnitude of perturbation across the coordinates is sufficient for consistency via _DirectLikelihood_ (see Section D of the supplementary material for the theoretical statement). ## 4 Connections to distributional robustness Recent works have demonstrated an intrinsic connection between distributional robustness and causal inference. Specifically, in the setting where the response variable is not directly perturbed and there is no latent confounding, the causal parameter $B^{\star}_{p,:}$ linking the covariates $X_{{\backslash}p}$ to the response variable $X_{p}$ in the SCM (2) satisfies the following max-risk optimization problem: $B^{\star}_{p,:}=\operatornamewithlimits{arg\,min}_{\begin{subarray}{c}\beta\in\mathbb{R}^{p}\\\ \beta_{p}=0\end{subarray}}\max_{\begin{subarray}{c}\mathcal{P}_{e}\in\mathcal{P}\\\ X^{e}\sim\mathcal{P}_{e}\end{subarray}}\mathbb{E}\|X_{p}^{e}-X^{e}\beta\|_{2}^{2},$ (11) for a certain perturbation distribution class $\mathcal{P}$ consisting of distributions $\mathcal{P}_{e}$ indexed by environments $e$ [2]. In particular, the causal coefficients $B^{\star}_{p,:}$ are solutions to a robust optimization problem subject to distributional changes to the system which do not act directly on $X_{p}$. Given access to exogenous variables or different environments, [26] allow for non-perturbed latent variables and possibly direct action of change on the target of interest, and prove a relation between the causal parameters and a particular robust optimization program. In this section, we demonstrate that the joint causal parameters $B^{\star}$ minimize a certain worst-case risk in the setting where there may be perturbations to all the variables, including the latent variables, further strengthening the connection between causal inference and distributional robustness. We consider the following perturbation distribution class parameterized by the quantities $C_{\zeta},C_{\psi}\geq 0$: $\displaystyle\mathcal{P}_{C_{\zeta},C_{\psi}}$ $\displaystyle=$ $\displaystyle\Bigg\\{\text{distribution }\mathcal{P}_{e}\text{ over random pairs }(X^{e},H^{e})\text{ satisfying the default SCM }(2)\text{ and}$ $\displaystyle w^{e,\star}=w^{1,\star}+\zeta^{e,\star}{\bf{1}}\text{ with }\zeta^{e,\star}\in[0,C_{\zeta}],\psi^{e,\star}\in[0,C_{\psi}]\Bigg\\},$ where the default SCM is the setting with IID latent variables, i.e. $\Psi^{e,\star}=(1+\psi^{e,\star})\mathcal{I}$. Recall that the sum $\epsilon^{e}+\delta^{e}$ in the SCM (2) is distributed as follows: $\epsilon^{e}+\delta^{e}\sim\mathcal{N}(0,\texttt{diag}(w^{e,\star}))$. The constraints on $w^{e,\star}$ ensure that the perturbations on the observed variables are IID with magnitude less than a pre-specified level $C_{\zeta}$; finally, the constraints on $\psi^{e,\star}$ ensure that the perturbations on the latent variables have magnitude less than a pre-specified level $C_{\psi}$. We note that the distributions inside $\mathcal{P}_{C_{\zeta},C_{\psi}}$ are specified by parameters that are invariant, namely the population connectivity matrix $B^{\star}$, the latent effects matrix $\Gamma^{\star}$, and the noise variable $\epsilon$, whose coordinate variances are encoded in $w^{1,\star}$. 
We consider the following worst-case optimization program that identifies parameters $B,\Gamma,w^{1}$ that are robust to perturbations from the class $\mathcal{P}_{C_{\zeta},C_{\psi}}$: $(B_{\text{robust}},\Gamma_{\text{robust}},w^{1}_{\text{robust}})=\operatornamewithlimits{arg\,min}_{\begin{subarray}{c}B\text{ is a DAG}\\\ \Gamma\in\mathbb{R}^{p\times\bar{h}},w^{1}\in\mathbb{R}^{p}_{++}\end{subarray}}\max_{\begin{subarray}{c}\mathcal{P}_{e}\in\mathcal{P}_{C_{\zeta},C_{\psi}}\\\ (X^{e},H^{e})\sim\mathcal{P}_{e}\end{subarray}}~{}\texttt{KL}(\Sigma^{e,\star},{\Sigma}_{B,\Gamma,w^{1}}(\bar{\zeta}^{e},\bar{\psi}^{e})),$ (12) where $\Sigma_{B,\Gamma,w^{1}}(\cdot,\cdot)$ is the model covariance defined below and KL is the Gaussian Kullback-Leibler divergence between the population and model covariances. Here, $\bar{\zeta}^{e},\bar{\psi}^{e}\in\mathbb{R}_{+}$ are estimates for the nuisance perturbation parameters $\zeta^{e,\star},\psi^{e,\star}$ that vary across the perturbation distributions. For a given $B,\Gamma,w^{1}$, the quantities $(\bar{\zeta}^{e},\bar{\psi}^{e})$ are obtained by finding the best fit to data: $(\bar{\zeta}^{e},\bar{\psi}^{e})=\arg\min_{0\leq\zeta^{e}\leq C_{\zeta},0\leq\psi^{e}\leq C_{\psi}}\texttt{KL}(\Sigma^{e,\star},{\Sigma}_{B,\Gamma,w^{1}}(\zeta^{e},\psi^{e}))$, where ${\Sigma}_{B,\Gamma,w^{1}}(\zeta^{e},{\psi}^{e})=(\mathcal{I}-B)^{-1}\left(\texttt{diag}(w^{1})+\zeta^{e}\mathcal{I}+(1+\psi^{e})\Gamma\Gamma^{T}\right)(\mathcal{I}-B)^{-T}$ is the covariance specified by the model parameters. In comparison to (11), the risk in (12) is measured jointly over the entire collection of observed variables (via the covariance matrix). As observed previously, this system-wide perspective is crucial for allowing perturbations on all of the variables. The following theorem connects the max-risk solutions $B_{\text{robust}}$ to the causal parameter $B^{\star}$. ###### Theorem 3. Suppose that the estimated number of latent variables $\bar{h}$ in (12) is chosen conservatively, i.e. $\bar{h}\geq\text{dim}(H)$. Let the maximum perturbation size on the observed variables in the perturbation class satisfy $C_{\zeta}\geq{\kappa^{\star}}(1+2C_{\psi})^{2}(1+\|w^{1,\star}\|_{\infty})(1+\|\Gamma^{\star}\|_{2}^{2}+\|\Gamma^{\star}\|_{2}^{4})$, where $\kappa^{\star}\equiv\frac{1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2}}{1+\min_{i}\|B^{\star}_{:,i}\|_{2}^{2}}$. Suppose there exists a perturbation distribution $\mathcal{P}_{e}\in\mathcal{P}_{C_{\zeta},C_{\psi}}$ with parameters $\zeta^{e,\star}=C_{\zeta}$, $\psi^{e,\star}\neq 0$ such that the random pairs $(X^{e},H^{e})$ drawn from this distribution satisfy the _latent materiality_ assumption in Definition 1. Then: 1. 1. Any optimal connectivity matrix $B\in B_{\text{robust}}$ satisfies $\text{moral}(B^{\star})\subseteq\text{moral}(B)$. 2. 2. The optimum of $\arg\min_{B\in{B}_{\text{robust}}}\|\text{moral}(B)\|_{\ell_{0}}$ is unique and equal to $B^{\star}$. Remark 5: The proof of Theorem 3 is presented in the supplementary material Section F. The theorem states that the causal parameter $B^{\star}$ is a minimizer of the max-risk optimization problem over the perturbation class $\mathcal{P}_{C_{\zeta},C_{\psi}}$ (and yields the sparsest moral graph among the optima), establishing a fundamental relation between causality and distributional robustness. Further, under similar assumptions as required in Theorem 2, analogous connections can be established for the setting with unperturbed latent variables. 
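A minimal numpy sketch of the covariance model, the Gaussian KL objective in (12), and the inner two-dimensional gridding over the nuisance perturbation parameters (our own rendering; the function names and the grid resolution are our choices):

```python
import numpy as np
from itertools import product

def model_cov(B, Gamma, w1, zeta_e, psi_e):
    """Sigma_{B, Gamma, w^1}(zeta^e, psi^e) per the definition below (12)."""
    p = B.shape[0]
    inner = np.diag(w1) + zeta_e * np.eye(p) + (1.0 + psi_e) * Gamma @ Gamma.T
    A = np.linalg.inv(np.eye(p) - B)
    return A @ inner @ A.T

def gauss_kl(S0, S1):
    """KL divergence between zero-mean Gaussians N(0, S0) and N(0, S1)."""
    p = S0.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(S1, S0)) - p
                  + np.linalg.slogdet(S1)[1] - np.linalg.slogdet(S0)[1])

def fit_nuisance(Sigma_e_star, B, Gamma, w1, C_zeta, C_psi, n_grid=50):
    """Inner minimization over (zeta^e, psi^e) by 2-dimensional gridding."""
    grid = product(np.linspace(0, C_zeta, n_grid), np.linspace(0, C_psi, n_grid))
    return min(grid, key=lambda zp: gauss_kl(Sigma_e_star,
                                             model_cov(B, Gamma, w1, *zp)))
```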
## 5 Computing the _DirectLikelihood_ estimates

Solving the _DirectLikelihood_ estimator (5) for a DAG $\mathcal{D}$ is a challenging task, as the problem is non-convex over the decision variables $B,\Gamma,\\{\Psi^{e}\\}_{e=1}^{m},\\{w^{e}\\}_{e=1}^{m}$. Further, searching over the space of DAGs is super-exponential in the number of variables. These computational challenges are common in causal structure learning problems and are made worse by the presence of multiple environments and latent confounding. In this section, we propose some heuristics for computing _DirectLikelihood_ based on perturbation data to find optimal scoring DAGs; we discuss open questions regarding computations involving _DirectLikelihood_ in Section 7. The outline of this section is as follows: in Section 5.1, we describe a method to compute _DirectLikelihood_ for a given DAG structure, that is, when the support of $B$ is pre-specified. Building on this, in Section 5.2, we describe some computational heuristics for structure search over different DAGs. ### 5.1 Scoring a DAG As announced above, we first assume that a DAG, and hence also the support of $B$, is pre-specified. The goal, for a given DAG, is to estimate the unknown parameters. As prescribed in Section 2.1, we employ the _DirectLikelihood_ procedure in the default setting (see Section 2.1.1) with IID latent variables and jointly observational and interventional data. While the optimization program (5) is jointly non-convex, solving for the connectivity matrix $B$ with the nuisance parameters $\psi,\Gamma,$ and $\\{w^{e}\\}_{e=1}^{m}$ fixed is a convex program. Since we are mainly interested in an accurate estimate of the connectivity matrix, we propose the following alternating minimization strategy: starting with an initialization of all of the model parameters, we fix $B$ and perform gradient updates to find updated estimates for the nuisance parameters, and then update $B$ by solving a convex program to optimality with the remaining parameters fixed. We find that the alternating method described above is relatively robust to the initialization scheme, but we nonetheless propose the following concrete strategy: $\displaystyle 1){}$ $\displaystyle B_{(0)}\text{ via linear regression with observational data}$ (13) $\displaystyle 2){}$ $\displaystyle{w}^{1}_{(0)}=\texttt{diag}\left\\{(\mathcal{I}-B_{(0)})\hat{\Sigma}^{1}(\mathcal{I}-B_{(0)})^{T}\right\\}$ $\displaystyle 3){}$ $\displaystyle\Gamma_{(0)}=UD^{1/2}\text{ where }UDU^{T}\text{ is the SVD of }(\mathcal{I}-B_{(0)})\hat{\Sigma}^{1}(\mathcal{I}-B_{(0)})^{T}$ $\displaystyle 4){}$ $\displaystyle\text{ initialize }w^{e}_{(0)}={w}^{1}_{(0)}+\zeta^{e}{\bf 1}\text{ and solve for }\zeta^{e},{\psi^{e}}\text{ by 2-dimensional gridding},$ where the first step follows since the DAG structure is known, and the fourth step is based on the observation that, for fixed $B,\Gamma,{w}^{1}_{(0)}$, the optimization problems for $\zeta^{e}$ and $\psi^{e}$ decouple across the environments $e=2,3,\dots,m$. 
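A sketch of the initialization (13) in numpy (our own rendering: the regression in step 1 is expressed through observational covariances, `parents` is a hypothetical encoding of the pre-specified DAG, and truncating the eigendecomposition to $\bar{h}$ components is a simplification of step 3):

```python
import numpy as np

def initialize(Sigma1_hat, parents, h_bar):
    """Steps 1)-3) of (13): B_(0) by regressing each variable on its parents
    in the given DAG using observational covariances, then w^1_(0), Gamma_(0)."""
    p = Sigma1_hat.shape[0]
    B0 = np.zeros((p, p))
    for i, pa in enumerate(parents):  # parents[i]: list of parents of node i
        if pa:
            B0[i, pa] = np.linalg.solve(Sigma1_hat[np.ix_(pa, pa)],
                                        Sigma1_hat[pa, i])
    R = (np.eye(p) - B0) @ Sigma1_hat @ (np.eye(p) - B0).T
    w1_0 = np.diag(R).copy()                  # step 2)
    evals, evecs = np.linalg.eigh(R)          # step 3): R = U D U^T
    top = np.argsort(evals)[::-1][:h_bar]     # keep h_bar leading components
    Gamma0 = evecs[:, top] * np.sqrt(np.maximum(evals[top], 0.0))
    return B0, w1_0, Gamma0
```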
The entire procedure, involving the initialization step and the parameter updates, is presented in Algorithm 1.

Algorithm 1 Scoring $\mathcal{D}$ via _DirectLikelihood_ 1: Input: $\hat{\Sigma}^{e}$ for $e=1,2,\dots,m$; regularization $\lambda\geq 0$; number of latent variables $\bar{h}$ 2: Initialize parameters: via relation (13) 3: Alternating minimization: * (a) Fixing $(\Gamma_{(t)},\\{(\psi_{(t)}^{e},w^{e}_{(t)})\\}_{e=1}^{m})$, update $B_{(t+1)}$ by solving the convex optimization program (5). * (b) Fixing $B_{(t+1)}$, perform gradient updates until convergence to find $(\Gamma_{(t+1)},\\{(\psi^{e}_{(t+1)},w^{e}_{(t+1)})\\}_{e=1}^{m})$ * (c) Perform the alternating iterates for positive integers $t$ until convergence at iteration $T$ 4: Compute $\text{score}_{\lambda}(\mathcal{D})$: plug in the estimates $(B_{(T)},\Gamma_{(T)},\\{(\psi^{e}_{(T)},w^{e}_{(T)})\\}_{e=1}^{m})$ into (6) 5: Output: $\text{ score}_{\lambda}(\mathcal{D})$ and the connectivity matrix $B_{(T)}$

Step 3 of Algorithm 1 involves two convergence criteria: the convergence of the gradient steps for the parameters $(\Gamma_{(t)},\\{(\psi^{e}_{(t)},w^{e}_{(t)})\\}_{e=1}^{m})$ as well as the convergence of the alternating procedure. For the first criterion, we terminate the gradient descent when the relative change in the likelihood score is below $\epsilon_{1}$. For the second criterion, we terminate the alternating minimization at step $T$ when $\|B_{(T)}-B_{(T-1)}\|_{\infty}\leq\epsilon_{2}$. In the numerical experiments in Section 6, we set $\epsilon_{1}=10^{-6}$ and $\epsilon_{2}=10^{-2}$. Finally, for all our experiments, we select the regularization parameter $\lambda$ via holdout-validation. ### 5.2 Identifying candidate DAGs We have discussed how to score a given DAG using the _DirectLikelihood_ estimator. Searching over all DAGs is typically not possible unless the number of observed variables $p$ is small. In fact, performing a combinatorial search is known to be very challenging and in some sense NP-hard [5]. One could rely on greedy strategies [4]; we discuss below a strategy which exploits a reasonable set of candidate DAGs. In some applications with domain expertise, a set of plausible DAGs may be considered as candidate DAGs to be scored by _DirectLikelihood_. Without this knowledge, however, this candidate set must be obtained from data. In this section, we propose a heuristic to identify a collection of candidate DAGs to be scored via Algorithm 1. Our approach is to identify these DAGs by assuming no latent confounding. In general, fitting a DAG without taking into account the effect of latent variables yields a denser graph (compared to the population or Markov equivalent DAGs) since marginalization of the latent variables induces confounding dependencies. As such, scoring such DAGs using Algorithm 1 may yield connectivity matrices that are denser than the population connectivity matrix, although the magnitude of the spurious edges will be small. In our numerical experiments in Section 6, we find the optimally scoring DAG(s) (using _DirectLikelihood_) among the candidate DAGs. Then, for each optimal DAG, we perform backward deletion by removing each of its edges (in the reverse order of their edge strength) and computing the likelihood score of the resulting DAGs (using _DirectLikelihood_). We then choose the DAG(s) that obtain the smallest likelihood score along the entire path. In Section 1, we outlined procedures to identify DAGs without latent confounding, with the constraint-based PC algorithm, score-based GES, and the hybrid method ARGES being among the most popular for structure learning in the observational setting. In principle, when domain expertise is not available, many of these methods can be used to find candidate DAGs. For simplicity, in our synthetic illustrations in Section 6, we perform GES on pooled environmental data. 
The GES procedure greedily adds or deletes edges in the space of Markov equivalent DAGs based on an $\ell_{0}$-regularized likelihood score and is asymptotically consistent [4]. We select the regularization parameter to be twice the analogue from the BIC score (as was suggested in [20]). Algorithm 2 presents the entire procedure of finding candidate DAGs, scoring them, and selecting the final output.

Algorithm 2 Optimizing _DirectLikelihood_ 1: Input: $\hat{\Sigma}^{e}$ for $e\in 1,2,\dots,m$; regularization parameter $\lambda>0$; number of latent variables $\bar{h}$ 2: Find candidate DAGs: $\tilde{\mathcal{D}}_{\text{cand}}=\\{\mathcal{D}_{1},\mathcal{D}_{2},\dots,\mathcal{D}_{q}\\}$ using domain expertise, GES with pooled data, or some structure learning algorithm 3: Score each DAG: For each $\mathcal{D}_{i}$, compute $\text{score}_{\lambda}(\mathcal{D}_{i})$ via Algorithm 1 and obtain $\tilde{\mathcal{D}}_{\text{opt}}=\arg\min_{\mathcal{D}\in\tilde{\mathcal{D}}_{\text{cand}}}\text{score}_{\lambda}(\mathcal{D})$ and associated connectivity matrices $\tilde{B}_{\text{opt}}$ 4: Backward deletion: set ${\mathcal{D}}_{\text{cand}}=\tilde{\mathcal{D}}_{\text{opt}}$ and for each ${\mathcal{D}}\in\tilde{\mathcal{D}}_{\text{opt}}$, perform for $i=1,2,\dots,\\#\text{edges}({\mathcal{D}})$ 1. 1. Let ${\mathcal{D}}_{i}$ be the DAG obtained after deleting the $i$ smallest edges in magnitude from ${\mathcal{D}}$ 2. 2. Compute $\text{score}_{\lambda}({\mathcal{D}}_{i})$ via Algorithm 1 3. 3. Add ${\mathcal{D}}_{i}$ to ${\mathcal{D}}_{\text{cand}}$ 5: Output: Compute $\hat{\mathcal{D}}_{\text{opt}}=\arg\min_{\mathcal{D}\in\mathcal{D}_{\text{cand}}}\text{score}_{\lambda}(\mathcal{D})$ and the associated $\hat{B}_{\text{opt}}$.

We remark that the $\arg\min$ in steps 3 and 5 of Algorithm 2 may not be unique in the infinite sample limit, due to potential non-identifiability. In practice, the optimization is done to find all optimal DAGs within a relative tolerance value from the minimum (set at $10^{-3}$ in our experiments), and it also outputs the several associated parameter estimates. ## 6 Experiments In this section, we illustrate the utility of _DirectLikelihood_ with simulated and real data. In Section 6.1.1, we study the accuracy of the _DirectLikelihood_ procedure in estimating the population causal graph underlying the observed variables. In Section 6.1.2, we provide comparisons of _DirectLikelihood_ with other methods, including Invariant Causal Predictions, Causal Dantzig, backShift, and the two-stage deconfounding procedure [12]. Finally, in Section 6.2, we evaluate the utility of _DirectLikelihood_ for learning causal networks on two real datasets, one involving California reservoir volumes [31] and the other involving protein mass spectroscopy [29]. In supplementary material Section G, we examine _DirectLikelihood_ under model misspecifications, namely: non-Gaussian variables in a linear structural equation model, dependent latent variables, and non-linear SCMs. Algorithm 2 requires as input the regularization parameter $\lambda$ and the number of latent variables $\bar{h}$. We select the regularization parameter $\lambda$ via holdout-validation. Specifically, we partition the data in each setting into a training set and a validation set, where the validation set comprises a portion of the data in the observational environment. Unless specified otherwise, the validation set in all numerical experiments is taken to be $20\%$ of the samples in the observational data. 
Given estimates $(\hat{B},\hat{\Gamma},\hat{w}^{1})$ after supplying the training data to _DirectLikelihood_, we then compute the validation performance as the negative log-likelihood $\ell(\hat{B},\hat{\Gamma},\mathcal{I},\hat{w}^{1};\Sigma_{\texttt{valid}}^{1})$, where $\Sigma_{\texttt{valid}}^{1}$ is the sample covariance of the validation data. As a smaller negative log-likelihood indicates a better fit to the data, we select $\lambda$ to minimize the negative log-likelihood on the validation data. We observe that our procedure is generally robust to the choice of $\bar{h}$ and, furthermore, the _DirectLikelihood_ procedure is consistent as long as $\bar{h}\geq\text{dim}(H)$ (see Section 3). Thus, we select $\bar{h}$ to be moderately large (relative to the ambient dimension) so that it is an overestimate of the true number of latent variables, although holdout-validation can also be performed to select $\bar{h}$. ### 6.1 Synthetic experiments #### 6.1.1 DAG structural recovery _Setup_: we consider a collection of $p=10$ observed variables influenced by $h\in\\{1,2\\}$ latent variables. To generate the connectivity matrix $B^{\star}\in\mathbb{R}^{p\times p}$, we sample from an Erdös–Rényi graph with edge probability $0.1$ until we find a DAG structure, and form $B^{\star}$ by setting the edge strengths equal to $-0.7$. The resulting DAG and connectivity matrix consist of $10$ nonzero entries. The entries of the latent coefficient matrix $\Gamma^{\star}\in\mathbb{R}^{p\times h}$ are generated IID uniformly from the interval $[0,\sqrt{0.3/\sqrt{h}}]$, and the entries below $0.5\sqrt{0.3/\sqrt{h}}$ are set to zero. The noise term $\epsilon$ is distributed according to $\epsilon\sim\mathcal{N}(0,0.5\mathcal{I}_{p})$. Unless otherwise specified, the latent variables $H$ are generated as $H\sim\mathcal{N}(0,\mathcal{I}_{h})$. These parameters specify the distribution of the observed and latent variables when there are no perturbations, and we denote this environment by $e=1$. In addition to this observational environment, we suppose there are $m-1$ interventional environments. The number of samples generated in the observational environment is set to $n^{1}=300$, and the sample size for each interventional environment $e$ is $n^{e}=5t$ for a positive integer $t$. The values for $t$, the number of environments, and the magnitude of the perturbations on the observed and latent variables are specified later. For each environment $e$, we set $\delta^{e}_{k}\sim\mathcal{N}(0,\zeta+\text{Unif}(0,1))$ for $k=1,2,\dots,p$ and certain values of $\zeta$, and $H^{e}\sim\mathcal{N}(0,(1+\psi^{e,\star})\mathcal{I}_{h})$ with $\psi^{e,\star}\sim\frac{1}{2}(1+\text{Unif}(0,1))$. We generate data from $m=7$ environments, one observational environment with no perturbations and six interventional environments, and consider the following four settings: $(a)$ $h=1,\zeta=5$; $(b)$ $h=1,\zeta=2$; $(c)$ $h=2,\zeta=5$; and $(d)$ $h=1,\zeta=5$, where additionally the last five environments have two observed variables, chosen randomly, that receive do-interventions with values set identically equal to $5$. The perturbation data for each setting is supplied to the _DirectLikelihood_ procedure to score each DAG in a collection of candidate DAGs. We set $\bar{h}=h+1$ in the _DirectLikelihood_ estimator (5) and constrain the latent variable perturbation $\psi^{e}\leq\psi_{\text{max}}=2$ for the interventional environments $e=2,3,\dots,7$. 
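The data-generating mechanism just described can be sketched as follows (our own rendering; we construct the Erdös–Rényi DAG directly by orienting edges along a random order rather than resampling until acyclic, and the fixed seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, h, zeta, m = 10, 1, 5.0, 7

# Erdos-Renyi DAG: keep edge j -> i only if i follows j in a random order.
order = rng.permutation(p)
adj = rng.random((p, p)) < 0.1
adj &= order[:, None] > order[None, :]   # orientation guarantees acyclicity
B_star = np.where(adj, -0.7, 0.0)        # edge strengths set to -0.7

scale = np.sqrt(0.3 / np.sqrt(h))
Gamma_star = rng.uniform(0, scale, size=(p, h))
Gamma_star[Gamma_star < 0.5 * scale] = 0.0   # truncate small entries

A = np.linalg.inv(np.eye(p) - B_star)

def sample_env(n, interventional):
    eps = rng.normal(0, np.sqrt(0.5), size=(n, p))
    if interventional:
        delta_sd = np.sqrt(zeta + rng.uniform(0, 1, size=p))
        psi = 0.5 * (1 + rng.uniform(0, 1))
    else:
        delta_sd, psi = np.zeros(p), 0.0
    delta = rng.normal(0, 1, size=(n, p)) * delta_sd
    H = rng.normal(0, np.sqrt(1 + psi), size=(n, h))
    return (H @ Gamma_star.T + eps + delta) @ A.T

X_obs = sample_env(300, interventional=False)                  # n^1 = 300
X_int = [sample_env(5 * 4, interventional=True) for _ in range(m - 1)]  # t = 4
```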
We then evaluate the accuracy of the _DirectLikelihood_ procedure (Algorithm 2) for DAG structural recovery in each of the settings $(a)$-$(d)$, averaged across $10$ independent trials. The accuracy of DAG recovery is computed with respect to false positives (edges produced in the estimated DAG that are missing or in the reverse direction in the population DAG) and true positives (edges in the estimated DAG present in the correct direction in the population DAG). The set of candidate DAGs to score via Algorithm 2 is obtained by performing the GES algorithm on pooled data. Since _DirectLikelihood_ always finds a single graph as the optimum in these numerical experiments, we compute for comparison the average size of the observational Markov equivalence class obtained after the pooled GES step in setting $(a)$: $9$ DAGs for $t=64$, $8.8$ for $t=16$, $9.3$ for $t=4$, $6$ for $t=2$, $6.4$ for $t=1$.

Figure 2: Structure estimation accuracy of Algorithm 2 (best scoring DAG) using candidate DAGs obtained by the GES algorithm on pooled data for different problem settings; panels: (a) $h=1,\zeta=5$, (b) $h=2,\zeta=5$, (c) $h=1,\zeta=2$, (d) $h=1,\zeta=5$ with do-interventions. The total number of true discoveries equals $10$. The curve for each $t$ corresponds to $5t$ samples for each interventional environment, with $t\in\\{1,2,4,16,64\\}$. For each curve, the accuracy of the estimated DAG in comparison to the population DAG is calculated by ordering the edges according to their strengths and sequentially counting (from strongest edge to weakest edge) an edge as a false discovery if it is missing or in a reverse direction in the population DAG, and as a true discovery otherwise. Each curve is averaged across 10 independent trials.
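The accuracy computation described in the caption of Figure 2 can be sketched as follows (our own implementation of the stated counting rule):

```python
import numpy as np

def tp_fp_curve(B_hat, B_star_support):
    """Order estimated edges by |strength| (strongest first); count an edge as
    a true discovery if present with the same direction in the population DAG,
    and as a false discovery if it is missing or reversed."""
    edges = [(abs(B_hat[i, j]), i, j)
             for i in range(B_hat.shape[0]) for j in range(B_hat.shape[1])
             if B_hat[i, j] != 0]
    tp = fp = 0
    curve = []
    for _, i, j in sorted(edges, reverse=True):
        if B_star_support[i, j]:
            tp += 1
        else:
            fp += 1            # missing or reverse-direction edge
        curve.append((fp, tp))
    return curve
```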
#### 6.1.2 Comparison to previous methods

_DirectLikelihood_ vs. Invariant Causal Predictions, causal Dantzig, and backShift: We compare the performance of _DirectLikelihood_ to Invariant Causal Predictions [23], causal Dantzig [25], and backShift [27] for finding the causal parents of a response variable. Consistency guarantees for these previous methods require at least one of the following assumptions: i) there are no latent effects, ii) the latent variables remain unperturbed across environments, iii) the response variable remains unperturbed across environments. Specifically, while Invariant Causal Predictions (ICP) does not impose any conditions on the specific relationship among the covariates, assumptions i) and iii) are needed for consistently estimating the causal parents. Causal Dantzig allows for latent effects, although it requires assumptions ii) and iii) for consistency. Finally, guarantees using the backShift procedure require assumption ii). Via numerical simulations, we illustrate the impact of using these previous approaches when assumptions i)-iii) are not satisfied. We leave out the comparison to Instrumental Variables [1], as the number of instruments (environments) must be larger than the number of covariates. One could in principle apply Anchor regression [26], although this method does not obtain causal parameters. We consider a causal structure among $p=10$ variables and $h=1$ latent variable, with $X_{p}$ denoting the response variable and $X_{1:p-1}$ denoting the covariates. We modify the parents and children of the DAG in Section 6.1.1 (so that the response variable in the population DAG has more parents and children): $X_{3},X_{4}$ are parents of the response variable and $X_{7},X_{8},X_{9}$ are children of the response variable. We set all the edge weights in the DAG to be $-0.7$. The entries of the latent coefficient matrix $\Gamma^{\star}\in\mathbb{R}^{p\times 1}$ are generated IID uniformly from the interval $[0,\sqrt{0.3}]$. The noise term $\epsilon$ is distributed according to $\epsilon\sim\mathcal{N}(0,0.5\mathcal{I}_{p})$. We generate an observational environment $e=1$ and four interventional environments $e=2,3,4,5$ and consider the following four settings: 1. Setting 1: no perturbations on the response variable or the latent variables: $\delta_{k}^{e}\sim\mathcal{N}(0,5+\text{Unif}(0,1))$ for all $k=1,2,\dots,p-1$ and $\psi^{e}=0$ for all $e$ 2. Setting 2: no perturbations on the latent variables and perturbations on the response variable: $\delta_{k}^{e}\sim\mathcal{N}(0,5+\text{Unif}(0,1))$ for all $k=1,2,\dots,p$ and $\psi^{e}=0$ for all $e$ 3. Setting 3: no perturbations on the response variable and perturbations on the latent variables: $\delta_{k}^{e}\sim\mathcal{N}(0,5+\text{Unif}(0,1))$ for all $k=1,2,\dots,p-1$ and $\psi^{e}\sim 1+\text{Unif}(0,1)$ for all $e$ 4. Setting 4: perturbations on the response and latent variables: $\delta_{k}^{e}\sim\mathcal{N}(0,5+\text{Unif}(0,1))$ for all $k=1,2,\dots,p$ and $\psi^{e}\sim 1+\text{Unif}(0,1)$ for all $e$ We obtain $1000$ IID samples in the observational environment and in each interventional environment and supply this data to each of the procedures. For the ICP and causal Dantzig methods, we set the significance threshold at $0.01$, and for the backShift procedure, we perform stability selection (with stability parameter $0.70$) as prescribed in [27]. We produce the set of candidate DAGs for the _DirectLikelihood_ procedure using pooled GES, and set $\bar{h}=2$. Table 2 compares the false positives and true positives associated with identifying the causal parents of the response variable (across $10$ independent trials) for _DirectLikelihood_ and the competing methods. The population DAG has two causal parents of the response variable, so the number of true positives is at most $2$. A few remarks are in order. First, in all of the settings, ICP returns the empty set due to the latent effects. Further, backShift performs poorly in all settings, even when there are no perturbations on the latent variables (the setting where [27] prove identifiability guarantees). We do observe, however, that if we increase the strength and dynamic range of the perturbations, backShift is able to accurately estimate the causal parents when the perturbations do not affect the latent variables. Namely, we consider Setting 2 with $\delta_{k}^{e}\sim\mathcal{N}(0,5+5\cdot\text{Unif}(0,1))$, set the stability threshold at $0.51$, and find that $\text{TP}=0.9;\text{FP}=0$ when $n^{e}=1000$ and $\text{TP}=1.5;\text{FP}=0$ when $n^{e}=10000$ for $e=1,2,\dots,5$. Next, as supported by theoretical guarantees, causal Dantzig estimates the causal parameters accurately in Setting 1, when there are no perturbations on the response or the latent variables. However, in settings where the latent variables or the response variables are perturbed, causal Dantzig yields many false positives and often incorrectly identifies the causal children of the response variable as the estimated causal parents. The _DirectLikelihood_ procedure, on the other hand, does not yield many false positives and has comparable power performance. We note that the power performance of _DirectLikelihood_ in Setting 4 is negatively affected by the performance of pooled GES in selecting candidate DAGs. 
Specifically, the largest number of true positives among the candidate DAGs (without scoring) is on average equal to $1.2$, and thus _DirectLikelihood_ is performing as well as possible given the candidate DAGs that are supplied as input. In Section 7, we discuss future directions for more rigorous techniques to obtain and score candidate DAGs. Method | Setting 1 | Setting 2 | Setting 3 | Setting 4 ---|---|---|---|--- _DirectLikelihood_ | TP = 2, FP = 0.3 | TP = 2, FP = 0.1 | TP = 2, FP = 0.8 | TP = 1.2, FP = 0.5 causal Dantzig | TP = 2, FP = 0 | TP = 2, FP = 5 | TP = 2, FP = 3 | TP = 2, FP = 5.1 backShift | TP = 0, FP = 1.6 | TP = 0, FP = 0 | TP = 0, FP = 1.4 | TP = 0, FP = 0 ICP | empty set | empty set | empty set | empty set Table 2: Comparison of _DirectLikelihood_ with other methods for identifying the causal parents of the response variable. The maximum possible number of true discoveries is equal to $2$. There are $1000$ samples in the observational environment and in each of the four interventional environments. _DirectLikelihood_ vs. two-stage deconfounding: The two-stage deconfounding procedure first employs a sparse-plus-low-rank decomposition on data from each environment to deconfound the latent effects and then employs the _DirectLikelihood_ procedure with $\Gamma\equiv 0$ (i.e. the latent effects are in principle removed) in the second stage. As described in Section 1, the accuracy of the first step relies heavily on the denseness of the latent effects. We generate the following synthetic example to compare the performance of these algorithms. We set $h=3$ and consider the synthetic setup described earlier in Section 6.1.1 with the following modifications: the first two columns of $\Gamma^{\star}\in\mathbb{R}^{p\times 3}$ consist of standard basis elements with the coordinates corresponding to $X_{6}$ and $X_{5}$ nonzero, and the third column has entries sampled IID from the uniform distribution, with entries less than $0.5$ set to zero. We generate an observational environment $e=1$ and four interventional environments $e=2,3,4,5$, where $\delta_{k}^{e}\sim\mathcal{N}(0,\zeta+\text{Unif}(0,1))$ for $k=1,2,\dots,p$ with $\zeta=2$. We generate the latent perturbation coefficient $\psi^{e}\sim\text{Uniform}(0,0.5)$. We obtain $n^{1}=1000$ IID observational samples and $5t$ IID interventional samples for each interventional environment with $t\in\\{60,200\\}$. The number of latent variables included in the model must be selected by the user in the _DirectLikelihood_ and two-stage deconfounding procedures ($\bar{h}$ in _DirectLikelihood_ and two regularization parameters in the first step of the two-stage deconfounding procedure). Since we are interested in comparing the identifiability properties of these procedures, we choose $\bar{h}=3$ in _DirectLikelihood_. Further, we choose the regularization parameters in the deconfounding step of the two-stage deconfounding procedure by selecting the best predictive model on a validation set with the number of latent variables less than or equal to $h=3$. Both the _DirectLikelihood_ procedure and the second stage of the two-stage deconfounding score a set of candidate DAGs. 
Noticing that the sparseness of the latent effects induces spurious edges between the pairs $(X_{5},X_{10}),(X_{8},X_{10}),\allowbreak(X_{5},X_{3})$, we generate $8$ candidate DAGs by adding $5$ edges at random to each of the $8$ DAGs in the Markov equivalence class of the population DAG $\mathcal{D}^{\star}_{X}$, and a final candidate DAG that adds the directed edges $X_{5}\to X_{10}$, $X_{8}\to X_{10}$, $X_{5}\to X_{3}$ to the population DAG $\mathcal{D}^{\star}_{X}$. Thus, the total number of candidate DAGs is equal to $9$. Table 3 compares the structural recovery (across $10$ independent trials) of _DirectLikelihood_ and the two-stage deconfounding procedure. We observe that since the denseness assumption is violated, the two-stage deconfounding procedure produces a DAG with false positives, even when the number of samples in each environment is large (i.e., $1000$ samples). Furthermore, in the low-sample regime, the two-stage procedure yields fewer true discoveries than _DirectLikelihood_. It is worth noting that in addition to its superior performance compared to two-stage deconfounding, the _DirectLikelihood_ solution is faster to compute, since it does not involve tuning two regularization parameters.

Method | $300$ samples/interven. environment | $1000$ samples/interven. environment
---|---|---
_DirectLikelihood_ | TP = 10, FP = 0 | TP = 10, FP = 0
two-stage deconfounding | TP = 9.2, FP = 1.9 | TP = 10, FP = 2.9

Table 3: Comparison of _DirectLikelihood_ with the two-stage deconfounding procedure. The maximum possible number of true discoveries is equal to $10$. There are $1000$ samples in the observational environment and $\\{300,1000\\}$ samples in each of the four interventional environments.

### 6.2 Experimental results on real datasets

#### 6.2.1 California reservoirs

The California reservoir network consists of approximately $1530$ reservoirs that act as a buffer against severe drought conditions and are a major source of water for agricultural use, hydropower generation, and industrial use. Water managers of these reservoirs have to assess the likelihood of system-wide failure and the effectiveness of potential policies. Due to similarities in hydrological attributes (e.g., altitude, drainage area, spatial location), the reservoir network is highly interconnected. Thus, effective reservoir management requires an understanding of reservoir interdependencies. [31] used historical data of the volumes of the largest 55 reservoirs to obtain an undirected graphical model of the California reservoir network. That analysis, however, does not provide causal implications, namely how a change in the management of one reservoir affects the entire system. As such, we seek a causal network among the reservoirs. We consider the 10 largest reservoirs (with respect to capacity) in California, for which daily volume data (downloaded from https://github.com/armeentaeb/WRR-Reservoir) are available during the period of study (January 2003–December 2015). Following the preprocessing steps in [31], we average the data from daily down to 156 monthly observations. A seasonal adjustment step is then performed to remove predictable seasonal patterns; a sketch of this preprocessing is given below. The resulting data were demonstrated in [31] to be well-approximated by a multivariate Gaussian distribution.
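The following is a minimal sketch of this preprocessing, assuming the daily volumes are stored in a pandas DataFrame with one column per reservoir and a DatetimeIndex; subtracting each calendar month's long-run mean is one standard form of seasonal adjustment and may differ in detail from the exact procedure of [31]. The file name is illustrative.

```python
import pandas as pd

def preprocess(daily: pd.DataFrame) -> pd.DataFrame:
    """Average daily reservoir volumes down to monthly observations and
    remove the seasonal cycle by subtracting the monthly climatology."""
    monthly = daily.resample("MS").mean()     # 156 months for 2003-2015
    climatology = monthly.groupby(monthly.index.month).transform("mean")
    return monthly - climatology              # seasonally adjusted volumes

# volumes = pd.read_csv("reservoir_volumes.csv", index_col=0, parse_dates=True)
# adjusted = preprocess(volumes)
```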
The data can naturally be categorized into an observational environment and three interventional environments. Specifically, the observational environment corresponds to a normal period (2003-2006, 2010-2012) with no drought conditions; one interventional environment corresponds to an abnormally dry period (2007, 2013) with small changes to management, one to a moderate drought period (2008-2009) with significant changes to management, and one to a severe drought period (2014-2015) with extreme changes to management. We take as training data the periods 2003-2006, 2010 as well as all the interventional data, and take as validation data the period 2011-2012 from the observational data. We include two latent variables in the model, as was discovered in [31] (i.e., $\bar{h}=2$), and supply the observational and interventional data to the _DirectLikelihood_ procedure. After holdout-validation, we identify a causal graph with $9$ edges, as shown in Figure 3(a). The connections are between pairs of reservoirs with at least one of these commonalities: a) similar hydrological attributes (e.g., hydrological zone and elevation), b) coordinated management by a district or a state-wide project, and c) similar usages (e.g., hydropower generation). For example, the reservoirs New Melones (NML), Don Pedro (NP), New Exchequer (EXC) and Pine Flat (PNF) are all in the San Joaquin district. Further, Shasta (SHA), Trinity (CLE), Oroville (ORO) and Folsom (FOL) are in the network of the Central Valley and State Water projects, and their reservoir operations are coordinated. Finally, Almanor (ALM) and Trinity (CLE) are primarily used for hydroelectric power generation. Specifically, the Pacific Gas $\&$ Electric Company owns Almanor and has historically negotiated with the Trinity Public Utilities District, which uses water from Trinity to generate electricity. For comparison, we obtain _DirectLikelihood_ estimates when no latent variables are included in the model. The model we obtain after holdout-validation contains $14$ edges, as shown in Figure 3(b). Unlike the structure with latent variables, the model without latent variables contains many spurious edges, namely connections between pairs of reservoirs that are geographically far apart (e.g., Oroville - Pine Flat and Trinity - New Melones). The same phenomenon was also noted in [31] in the context of undirected graphical models.

Figure 3: Causal graphical structure among the volumes of the 10 largest reservoirs in California (with respect to capacity) using the _DirectLikelihood_ procedure: (a) incorporating latent variables, and (b) without latent variables.

#### 6.2.2 Protein expressions

We next analyze the protein mass spectroscopy dataset [29]. This dataset (downloaded from http://www.sciencemag.org/content/suppl/2005/04/21/308.5721.523.DC1/Sachs.SOM.Datasets.zip) contains a large number of measurements of the abundance of $11$ phosphoproteins and phospholipids recorded under different experimental conditions in primary human immune system cells. The different experimental conditions are characterized by associated reagents that inhibit or activate signaling nodes, corresponding to interventions at different points of the protein-signaling network. Following the previous works [18, 17], we take 8 environments consisting of an observational environment and 7 interventional environments. Knowledge about some of the “ground truth” is available, which helps with the verification of the results.
To identify a set of candidate DAGs to score using our _DirectLikelihood_ procedure, we consider all DAGs that are Markov equivalent to the ground truth DAG reported in [29] (a total of $176$ DAGs). We include two latent variables (i.e., we set $\bar{h}=2$) in the _DirectLikelihood_ estimator, and after holdout-validation, we select a causal graphical structure with six edges in total. We compare our findings to the direct causal relations reported in the literature [11, 17, 18, 29] in Table 4.

Edge | [29](a) | [29](b) | [18] | [11] | [17]
---|---|---|---|---|---
PKC $\to$ p38 | ✓ | ✓ | ✓ | ✓ | ✓
Akt $\to$ Erk | | | ✓ | | ✓
Mek $\to$ Raf | | | ✓ | ✓ | ✓
PKC $\to$ JNK | ✓ | ✓ | ✓ | ✓ |
PIP2 $\to$ PIP3 | | | ✓ | |
PLCg $\to$ PIP2 | ✓ | ✓ | | ✓ | ✓

Table 4: Comparing the findings of _DirectLikelihood_ (ordered by edge strength) with different causal discovery methods. Here, we only include edges found by _DirectLikelihood_, and note that additional edges have been identified by the other methods. The consensus network according to [29] is denoted by “[29](a)” and their reconstructed network by “[29](b)”.

We next compare our findings when we account for latent effects to the setting where we do not account for latent effects, namely by setting $\Gamma\equiv 0$ in the _DirectLikelihood_ procedure. The causal graphical model we obtain without accounting for latent effects also consists of six edges, but three of them differ from those in the model that incorporates latent variables. These edges (in the order of strength) are Akt $\to$ PKA, PIP3 $\to$ PIP2, and PLCg $\to$ PIP3. The edge Akt $\to$ PKA was never reported in previous work, and the edge PLCg $\to$ PIP3 was not reported by methods that accounted for latent effects [17, 18]. Thus, these two edges appear to be spurious dependencies due to common latent variables. The edge PIP3 $\to$ PIP2 in the causal structure without latent variables is also reported in [11, 17, 29], while the reverse direction PIP2 $\to$ PIP3 is discovered in our causal structure with latent effects (see Table 4) and was also reported in [18].

## 7 Discussion

In this paper, we proposed a framework to model unspecific perturbation data among a collection of Gaussian observed and latent variables. It can be represented as a certain mixed-effects linear structural causal model in which the interventions are modeled as random effects that propagate dynamically through the structural equations. This framework allows for perturbations on all components of the system, including a response variable of interest or the latent variables. We demonstrated the utility of _DirectLikelihood_ for identifying causal relationships on both synthetic and real datasets. There are several interesting directions for further investigation that arise from our work. In Section 5, we proposed heuristics for searching over the space of DAGs and for solving the _DirectLikelihood_ estimator (5) to score DAGs. While the empirical results in Section 6 support the utility of our heuristics, there is much room for more rigorous optimization techniques (e.g., provably consistent greedy methods). Next, the theoretical results in Section 3 are based on an analysis in the large data limit. However, our empirical results in Section 6 suggest that the _DirectLikelihood_ procedure may provide accurate estimates with moderate data sizes. As such, an exciting research direction is to develop high-dimensional consistency guarantees for _DirectLikelihood_.
Further, in the setting where the perturbations are limited, there may be multiple DAGs that are equally representative of the data, or equivalently, multiple DAGs that yield the exact same likelihood score in the population case (known as the interventional Markov equivalence class). The characterization of these equivalence classes will be central to developing greedy algorithms as well as to constructing active learning strategies for maximally informative interventions [13, 14]. Finally, the perturbation model (2) assumes a linear relationship between the observed and latent variables. It would be of practical interest to explore extensions of our framework to non-linear settings, or alternatively, to characterize the extent to which linear models capture the causal effects.

## Acknowledgements

A. Taeb and P. Bühlmann both acknowledge scientific interaction and exchange at “ETH Foundations of Data Science”. The authors would like to thank Mona Azadkia, Yuansi Chen, Juan Gamella, and Marloes Maathuis for helpful discussions and feedback on the manuscript. The dataset and the code to produce the results of this paper can be found at https://github.com/armeentaeb/perturbations-and-causality.

## References

* [1] J. Angrist, G. Imbens, and D. Rubin, Identification of causal effects using instrumental variables, Journal of the American Statistical Association, 91 (1996), pp. 444–455.
* [2] P. Bühlmann, Invariance, causality and robustness, Statistical Science, 35 (2020), pp. 404–426.
* [3] V. Chandrasekaran, P. Parrilo, and A. Willsky, Latent variable graphical model selection via convex optimization, Annals of Statistics, 40 (2012), pp. 1935–1967.
* [4] D. Chickering, Optimal structure identification with greedy search, Journal of Machine Learning Research, 3 (2002), pp. 507–554.
* [5] D. Chickering, C. Meek, and D. Heckerman, Large-sample learning of Bayesian networks is NP-hard, Journal of Machine Learning Research, 5 (2004), pp. 1287–1330.
* [6] D. Colombo, M. Maathuis, M. Kalisch, and T. Richardson, Learning high-dimensional directed acyclic graphs with latent and selection variables, Annals of Statistics, 40 (2012), pp. 294–321.
* [7] A. Dawid, Decision-theoretic foundations for statistical causality, arXiv:2004.12493, (2020).
* [8] A. Dawid and V. Didelez, Identifying the consequences of dynamic treatment strategies: a decision-theoretic overview, Statistical Surveys, 4 (2010), pp. 184–231.
* [9] A. Dixit, O. Parnas, and B. Li, Perturb-seq: dissecting molecular circuits with scalable single-cell RNA profiling of pooled genetic screens, Cell, 167 (2016), pp. 1853–1866.
* [10] M. Drton and M. Maathuis, Structure learning in graphical modeling, Annual Review of Statistics and Its Application, 4 (2017), pp. 365–393.
* [11] D. Eaton and K. Murphy, Exact Bayesian structure learning from uncertain interventions, In Proceedings of the 10th International Conference on Artificial Intelligence and Statistics (AISTATS), (2007).
* [12] B. Frot, P. Nandy, and M. Maathuis, Robust causal structure learning with hidden variables, Journal of the Royal Statistical Society, Series B, 81 (2019), pp. 459–487.
* [13] A. Hauser and P. Bühlmann, Characterization and greedy learning of interventional Markov equivalence classes of directed acyclic graphs, Journal of Machine Learning Research, 13 (2012), pp. 2409–2464.
* [14] A. Hauser and P. Bühlmann, Two optimal strategies for active learning of causal models from interventional data, International Journal of Approximate Reasoning, 4 (2014), pp. 926–939.
* [15] A. Hauser and P. Bühlmann, Jointly interventional and observational data: estimation of interventional Markov equivalence classes of directed acyclic graphs, Journal of the Royal Statistical Society, Series B, 77 (2015), pp. 291–318.
* [16] A. McLean, L. Sanders, and W. Walter, A unified approach to mixed linear models, Journal of the American Statistical Association, 45 (1991), pp. 54–64.
* [17] N. Meinshausen, A. Hauser, J. Mooij, J. Peters, P. Versteeg, and P. Bühlmann, Methods for causal inference from gene perturbation experiments and validation, Proceedings of the National Academy of Sciences, 113 (2016), pp. 7361–7368.
* [18] J. Mooij and T. Heskes, Cyclic causal discovery from continuous equilibrium data, In Proceedings of the 29th Annual Conference on Uncertainty in Artificial Intelligence, (2013), pp. 431–439.
* [19] P. Nandy, A. Hauser, and M. Maathuis, High-dimensional consistency in score-based and hybrid structure learning, Annals of Statistics, 46 (2018), pp. 3151–3183.
* [20] C. Nowzohour and P. Bühlmann, Score-based causal learning in additive noise models, Statistics, 50 (2016), pp. 471–485.
* [21] J. Pearl, Causality: Models, Reasoning, and Inference, Cambridge University Press, 2nd edition, 2009.
* [22] J. Peters and P. Bühlmann, Identifiability of Gaussian structural equation models with equal error variances, Biometrika, 101 (2014), pp. 219–228.
* [23] J. Peters, P. Bühlmann, and N. Meinshausen, Causal inference using invariant prediction: identification and confidence intervals, Journal of the Royal Statistical Society, Series B, 78 (2016), pp. 947–1012.
* [24] J. Robins, M. Hernan, and B. Brumback, Marginal structural models and causal inference in epidemiology, Epidemiology, 11 (2000), pp. 550–560.
* [25] D. Rothenhäusler, P. Bühlmann, and N. Meinshausen, Causal Dantzig: fast inference in linear structural equation models with hidden variables under additive interventions, Annals of Statistics, 47 (2019), pp. 1688–1722.
* [26] D. Rothenhäusler, N. Meinshausen, P. Bühlmann, and J. Peters, Anchor regression: heterogeneous data meets causality, arXiv:1801.06229, (2020).
* [27] D. Rothenhäusler, C. Heinze, J. Peters, and N. Meinshausen, backShift: learning causal cyclic graphs from unknown shift interventions, In Advances in Neural Information Processing Systems, (2016).
* [28] D. Rubin, Causal inference using potential outcomes, Journal of the American Statistical Association, 100 (2005), pp. 322–331.
* [29] K. Sachs, O. Perez, D. Lauffenburger, and G. Nolan, Causal protein-signaling networks derived from multiparameter single-cell data, Science, 308 (2005), pp. 523–529.
* [30] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search, Cambridge: MIT Press, 2000.
* [31] A. Taeb, J. Reager, M. Turmon, and V. Chandrasekaran, A statistical graphical model of the California reservoir system, Water Resources Research, 53 (2017), pp. 9721–9739.
* [32] S. van de Geer and P. Bühlmann, $\ell_{0}$-penalized maximum likelihood for sparse directed acyclic graphs, Annals of Statistics, 41 (2013), pp. 536–567.
* [33] T. Verma and J. Pearl, Equivalence and synthesis of causal models, In Proceedings of the 6th Conference on Uncertainty in Artificial Intelligence (UAI), (1991), pp. 255–270.
* [34] Y. Wang, L. Solus, K. Yang, and C. Uhler, Permutation-based causal inference algorithms with interventions, In Advances in Neural Information Processing Systems, (2017), pp. 5822–5831.
## Supplementary Material

The proofs of the theoretical results in the supplementary material are based on the following population quantities, which we summarize here. Let $B^{\star}\in\mathbb{R}^{p\times p}$ be the population connectivity matrix, $\Gamma^{\star}\in\mathbb{R}^{p\times h}$ be the matrix encoding the effects of the latent variables on the observed variables, $w^{1,\star}\in\mathbb{R}^{p}_{++}$ encode the variances of the coordinates of $\epsilon$, and $w^{e,\star}_{k}=w^{1,\star}_{k}+\text{var}(\delta^{e}_{k})$ with $w^{e,\star}\in\mathbb{R}^{p}_{++}$. Let $\\{\psi^{e,\star}\\}_{e=1}^{m}\subseteq\mathbb{R}_{+}$ be the perturbations on the latent variables. Let $\kappa^{\star}=\frac{1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2}}{1+\min_{i}\|B^{\star}_{:,i}\|_{2}^{2}}$. Finally, for a matrix $M\in\mathbb{R}^{d\times d}$, we denote by $\|M\|_{2}$ the spectral norm (largest singular value) of $M$.

## Appendix A Incorporating do-interventions

Recall from Section 2.1.3 (main paper) that the structural causal model with do-interventions is modified to be: $\begin{gathered}X^{e}=\mathcal{F}_{\texttt{do}(e)^{c}}(B^{\star}{X^{e}}+\Gamma^{\star}{H}^{e}+\epsilon^{e})+\delta^{e}\\\ H^{e}\sim\mathcal{N}(0,\Psi^{\star,e}).\end{gathered}$ Given data across environments $e=1,2,\dots,m$, we can optimize the parameters of the model via the negative log-likelihood (4) (main paper). It is straightforward to see that the negative log-likelihood $\log\texttt{prob}(\cdot)$ decouples across the parameters $(B,\Gamma,\\{\Psi^{e}\\}_{e=1}^{m},\\{{w^{e}_{\texttt{do}(e)^{c}}}\\}_{e=1}^{m})$ and $\\{{w^{e}_{\texttt{do}(e)}}\\}_{e=1}^{m}$. In other words, the structure of the DAG $\mathcal{D}$ only plays a role in the term involving the parameters $(B,\Gamma,\\{\Psi^{e}\\}_{e=1}^{m},\\{{w^{e}_{\texttt{do}(e)^{c}}}\\}_{e=1}^{m})$, and we thus focus on that component of the likelihood: $\displaystyle(\hat{{B}},\hat{{\Gamma}},\\{(\hat{{\Psi}}^{e},\hat{{w}}^{e})\\}_{e=1}^{m})=\operatornamewithlimits{arg\,min}_{\begin{subarray}{c}B\in\mathbb{R}^{p\times p},\Gamma\in\mathbb{R}^{p\times\bar{h}}\\\ \\{\Psi^{e}\\}_{e=1}^{m}\subseteq\mathbb{S}^{\bar{h}}_{++},\\{w^{e}\\}_{e=1}^{m}\subseteq\mathbb{R}^{p}_{++}\end{subarray}}$ $\displaystyle\sum_{e=1}^{m}\hat{\pi}^{e}\ell(B,\Gamma,\Psi^{e},w^{e};\hat{\Sigma}^{e},\texttt{do}(e)),$ (14) subject to $\displaystyle~{}~{}~{}B\text{ compatible with }\mathcal{D},$ where $\displaystyle\ell(\cdot)=\log\det\left(\left[\texttt{diag}(w^{e})+\Gamma\Psi^{e}\Gamma^{T}\right]_{\texttt{do}(e)^{c}}\right)$ $\displaystyle+\mathrm{trace}\left(\left[\texttt{diag}(w^{e})+\Gamma\Psi^{e}\Gamma^{T}\right]_{\texttt{do}(e)^{c}}^{-1}\left[(\mathcal{I}-\mathcal{F}_{\texttt{do}(e)^{c}}B)\hat{\Sigma}^{e}(\mathcal{I}-\mathcal{F}_{\texttt{do}(e)^{c}}B)^{T}\right]_{\texttt{do}(e)^{c}}\right).$ Here, we assume that the locations of the do-interventions are known, so that the inputs to the program (14) are the sample covariance matrices $\hat{\Sigma}^{e}$, the do-intervention locations $\texttt{do}(e)$, and the estimate $\bar{h}$ of the number of latent variables. A small sketch of the covariance model implied by this SCM is given below.
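The following sketch builds, for given parameters and do-intervention locations, the covariance $\Sigma(\mathcal{M}^{e})$ implied by this SCM (in the form used in the proof of Lemma 1 below). Reading $[\cdot]_{\texttt{do}(e)^{c}}$ as zeroing out the rows and columns indexed by $\texttt{do}(e)$ is our interpretation of the notation, and all names are illustrative.

```python
import numpy as np

def do_model_covariance(B, Gamma, Psi, w, do_idx):
    """Covariance of X^e under the do-intervention SCM (a sketch).

    do_idx: indices of the do-intervened coordinates."""
    p = B.shape[0]
    f = np.ones(p)
    f[list(do_idx)] = 0.0
    F = np.diag(f)                            # the masking operator F_{do(e)^c}
    latent = F @ Gamma @ Psi @ Gamma.T @ F    # [Gamma Psi Gamma^T]_{do(e)^c}
    A_inv = np.linalg.inv(np.eye(p) - F @ B)  # (I - F_{do(e)^c} B)^{-1}
    return A_inv @ (np.diag(w) + latent) @ A_inv.T
```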
## Appendix B Proof of Theorem 1 (main paper)

Recall that the _DirectLikelihood_ estimator (5) (main paper) scores candidate DAGs, and the best scoring DAGs are chosen as output (there is no penalty term in the score function, as $\lambda=0$ in the population setting). As stated in the theoretical results in Section 3 (main paper), we assume that all possible DAGs among the observed variables may be scored. Thus, we consider the _DirectLikelihood_ estimator (5) (main paper) specialized to IID latent variables, with $e=1$ denoting the observational environment, and with an additional decision variable over the space of DAGs to find optimal DAGs with associated parameter estimates: $\displaystyle(\hat{B},\hat{\Gamma},\hat{\psi},\\{\hat{{w}}^{e}\\}_{e=1}^{m})=\operatornamewithlimits{arg\,min}_{\begin{subarray}{c}B,\Gamma,\psi\in\mathbb{R}^{m}_{+}\\\ \\{w^{e}\\}_{e=1}^{m}\subseteq\mathbb{R}^{p}_{++}\\\ \text{DAG }\mathcal{D}\end{subarray}}$ $\displaystyle\sum_{e=1}^{m}\pi^{e,\star}\ell(B,\Gamma,(1+\psi^{e})\mathcal{I},w^{e};\Sigma^{e,\star})$ (15) subject to $\displaystyle B\text{ compatible with }\mathcal{D}~{};~{}\|\psi\|_{\infty}\leq C_{\psi}$ $\displaystyle\psi^{1}=0~{};~{}w^{e}\succeq w^{1}\text{ for }e=2,\dots,m$ Here the decision variable $\psi$ encodes the latent perturbations and consists of the coordinates $\psi=(\psi^{1},\psi^{2},\ldots,\psi^{m})$. As stated in Theorem 1 (main paper), we assume that the number of latent variables in the model is a conservative estimate of the true number of latent variables, i.e. $\bar{h}\geq\text{dim}(H)$. The strategy for proving Theorem 1 (main paper) is based on the following three lemmas: ###### Lemma 1. Optimal solutions of (15) satisfy the following equivalence: $\displaystyle(B,\Gamma,\psi,\\{{w}^{e}\\}_{e=1}^{m})\text{ optimum of }(15)$ $\displaystyle\iff B\text{ compatible with a DAG},\Gamma\in\mathbb{R}^{p\times\bar{h}},\\{w^{e}\\}_{e=1}^{m}\subseteq\mathbb{R}^{p}_{++},\psi\in\mathbb{R}^{m}_{+}$ $\displaystyle\|\psi\|_{\infty}<C_{\psi},\psi^{1}=0,w^{e}\succeq w^{1}\text{ for }e=2,\dots,m~{}\&~{}\text{ for every }e=1,2,\dots,m$ $\displaystyle\Sigma^{e,\star}=(\mathcal{I}-{B})^{-1}(\texttt{diag}(w^{e})+(\psi^{e}+1)\Gamma\Gamma^{T})(\mathcal{I}-{B})^{-T}.$ ###### Lemma 2. Let $(\tilde{B},\tilde{\Gamma},\tilde{\psi},\\{\tilde{w}^{e}\\}_{e=1}^{m})$ be an optimal solution of (15). The following statements hold: 1. Suppose $\tilde{\psi}^{e}\neq\psi^{e,\star}$ for some $e\in\\{2,3\\}$. Under Assumptions 1-4 in (10) or Assumptions 1 $\&$ 2’-4’ in (10) (main paper), $\text{moral}(B^{\star})\subset\text{moral}(\tilde{B})$. 2. Suppose $\tilde{\psi}^{e}=\psi^{e,\star}$ for $e=2,3$. Under Assumptions 1-4 in (10) or Assumptions 1 $\&$ 2’-4’ in (10) (main paper), $\text{moral}(\tilde{B})=\text{moral}(B^{\star})$. ###### Lemma 3. Let $(\tilde{B},\tilde{\Gamma},\tilde{\psi},\\{\tilde{w}^{e}\\}_{e=1}^{m})$ be an optimal solution of (15). Suppose $\tilde{\psi}^{e}=\psi^{e,\star}$ for $e=2,3$. Then, $\tilde{B}=B^{\star}$. Combining Lemmas 2 and 3 concludes the proof of Theorem 1 (main paper), due to the fact that $(B^{\star},\Gamma^{\star},\psi^{\star},\\{w^{e,\star}\\}_{e=1}^{m})$ are optimal solutions of (15). We now prove each lemma. 

### B.1 Useful notation

We introduce some notation that will be used repeatedly.
Specifically, we define for $e\in\mathcal{E}$: $\displaystyle{\kappa}_{\text{cond}}^{e}$ $\displaystyle\equiv~{}~{}\min_{k,l}|(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star})|_{k,l}$ $\displaystyle~{}~{}\text{s.t.}~{}~{}\rho(X_{k}^{e},X_{l}^{e}|X_{\backslash{\\{k,l\\}}}^{e},H^{e})\neq 0$ $\displaystyle{\kappa}_{\text{latent}}^{e}$ $\displaystyle\equiv\max_{k,l}~{}~{}|(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}({\Gamma^{\star}}^{T}\texttt{diag}(w^{e,\star})^{-1}\Gamma^{\star}+\frac{1}{1+\psi^{e,\star}}\mathcal{I})^{-1}$ $\displaystyle~{}~{}~{}~{}~{}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star})|_{k,l}$ $\displaystyle~{}~{}\text{s.t.}~{}~{}\rho(X_{k}^{e},X_{l}^{e}|X_{\backslash{\\{k,l\\}}}^{e},H^{e})=0$ The intuition behind the quantities $\kappa_{\text{cond}}^{e}$ and $\kappa_{\text{latent}}^{e}$ is based on the decomposition of $(\Sigma^{e,\star})^{-1}$. Specifically, from the Woodbury inversion lemma: $\displaystyle(\Sigma^{e,\star})^{-1}$ $\displaystyle=$ $\displaystyle(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star})$ $\displaystyle-$ $\displaystyle(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}({\Gamma^{\star}}^{T}\texttt{diag}(w^{e,\star})^{-1}\Gamma^{\star}+\frac{1}{1+\psi^{e,\star}}\mathcal{I})^{-1}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star}).$ Standard multivariate analysis states that for any pair of indices $(k,l)$ with $k\neq l$, $\left[(\Sigma^{e,\star})^{-1}\right]_{k,l}\neq 0$ if and only if $\rho(X_{k}^{e},X_{l}^{e}|X_{\backslash\\{k,l\\}}^{e})\neq 0$. Similarly, since the precision matrix $\text{cov}(X^{e}|H^{e})^{-1}=(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star})$, we have that $\left[(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star})\right]_{k,l}\neq 0$ if and only if $\rho(X_{k}^{e},X_{l}^{e}|X_{\backslash\\{k,l\\}}^{e},H^{e})\neq 0$. Thus, by definition, $\kappa_{\text{cond}}^{e}>0$ and by the _latent materiality_ in Definition 1 (main paper), $\kappa_{\text{latent}}^{e}>0$. Then, $\displaystyle\kappa_{\text{cond}}^{e}\geq\frac{(1+\min_{i}\|B^{\star}_{:,i}\|_{2}^{2})}{2\max_{k}w^{e,\star}_{k}}.$ (16) Similarly, we have due to $\min_{k}w^{e,\star}_{k}\geq 8\|\Gamma^{\star}\|_{2}^{2}(1+C_{\psi})$ and $\min_{k}w^{e,\star}_{k}\geq 8\|w^{1,\star}\|_{\infty}$: $\displaystyle\kappa_{\text{latent}}^{e}\geq\frac{8^{3}(1+\min_{i}\|B^{\star}_{:,i}\|_{2}^{2})(1+\psi^{e,\star})}{9^{3}(\max_{k}w^{e,\star}_{k})^{2}}.$ (17) ### B.2 Proof of Lemma 1 ###### Proof. Let $\mathcal{M}(B,\Gamma,\psi^{e},w^{e})$ denote a model associated with each equation in the SCM (2) (main paper). For notational convenience, we use the short-hand notation $\mathcal{M}^{e}$ for this model. We let $\Sigma(\mathcal{M}^{e})=(\mathcal{I}-\mathcal{F}_{\texttt{do}(e)^{c}}{B})^{-1}(\texttt{diag}(w^{e})+(1+\psi^{e})[\Gamma\Gamma^{T}]_{\texttt{do}(e)^{c}})(\mathcal{I}-\mathcal{F}_{\texttt{do}(e)^{c}}{B})^{-T}$ be the associated covariance model parameterized by the parameters $(B,\Gamma,\psi^{e},w^{e})$. The optimal solution of the population _DirectLikelihood_ can be equivalently reformulated as: $\displaystyle\operatornamewithlimits{arg\,min}_{\\{\mathcal{M}^{e}\\}_{e=1}^{m}}$ $\displaystyle\sum_{e=1}^{m}\pi^{e,\star}\texttt{KL}(\Sigma^{\star,e},\Sigma(\mathcal{M}^{e})).$ (18) Notice that for the decision variables $\mathcal{M}^{e,\star}=(B^{\star},\Gamma^{\star},\psi^{e,\star},w^{e,\star})$ for each $e=1,2,\dots,m$, (18) achieves zero loss. 
Hence, any other optimal solution of (18) must yield zero loss, or equivalently, $\Sigma(\mathcal{M}^{e})=\Sigma^{e,\star}$ for any optimal collection $\\{\mathcal{M}^{e}\\}_{e=1}^{m}$. ∎ 

### B.3 Proof of Lemma 2

###### Proof. We first provide the proof of Lemma 2 under Assumptions 1-4 in (10) (main paper). Lemma 1 implies that for every $e=2,3$, $\displaystyle\Sigma^{e,\star}-(1+\tilde{\psi}^{e})\Sigma^{1,\star}$ (19) $\displaystyle=(\mathcal{I}-B^{\star})^{-1}\left(\texttt{diag}\left(w^{e,\star}-(1+\tilde{\psi}^{e})w^{1,\star}\right)+(\psi^{e,\star}-\tilde{\psi}^{e})\Gamma^{\star}{\Gamma^{\star}}^{T}\right)(\mathcal{I}-B^{\star})^{-T}$ $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}\texttt{diag}\left(\tilde{w}^{e}-(1+\tilde{\psi}^{e})\tilde{w}^{1}\right)(\mathcal{I}-\tilde{B})^{-T}$ Since $\min_{k}w^{e,\star}_{k}\geq C_{\psi}(\|w^{1,\star}\|_{\infty}+C_{\psi}\|\Gamma^{\star}\|_{2}^{2})$ from Assumption 4 in (10) (main paper), we conclude that the matrix $\Sigma^{e,\star}-(1+\tilde{\psi}^{e})\Sigma^{1,\star}$ is invertible for $e=2,3$. To establish the first component of Lemma 2, consider $e\in\\{2,3\\}$ for which $\psi^{e,\star}\neq\tilde{\psi}^{e}$. After an inversion of (19) for this environment, we obtain: $\displaystyle(\mathcal{I}-B^{\star})^{T}({M}^{e}+{L}^{e})(\mathcal{I}-B^{\star})$ (20) $\displaystyle=(\mathcal{I}-\tilde{B})^{T}\texttt{diag}\left(\tilde{w}^{e}-(1+\tilde{\psi}^{e})\tilde{w}^{1}\right)^{-1}(\mathcal{I}-\tilde{B})$ where $\displaystyle{M}^{e}=\texttt{diag}\left(w^{e,\star}-(1+\tilde{\psi}^{e})w^{1,\star}\right)^{-1}~{}~{};~{}~{}{L}^{e}={M}^{e}\Gamma^{\star}\left[{\Gamma^{\star}}^{T}{M}^{e}\Gamma^{\star}+\frac{1}{\Delta\psi^{e}}\mathcal{I}\right]^{-1}{\Gamma^{\star}}^{T}{M}^{e}.$ Here, we have introduced the short-hand notation $\Delta\psi^{e}=(\psi^{e,\star}-\tilde{\psi}^{e})$. Notice that the nonzero entries of $(\mathcal{I}-\tilde{B})^{T}\allowbreak\texttt{diag}\allowbreak\left(\tilde{w}^{e}-(1+\tilde{\psi}^{e})\tilde{w}^{1}\right)^{-1}(\mathcal{I}-\tilde{B})$ encode the moral graph induced by $\tilde{B}$. Our strategy is to use Assumptions 1-4 in (10) (main paper) to show that $(\mathcal{I}-B^{\star})^{T}(M^{e}+L^{e})(\mathcal{I}-B^{\star})$ has nonzero entries at the positions corresponding to the moral graph of $B^{\star}$ and at least one nonzero entry outside of the moral graph. To conclude this, we consider the following intermediate terms close to $M^{e}$ and $L^{e}$: $\displaystyle\bar{{M}}^{e}=\texttt{diag}(w^{e,\star})^{-1}~{};~{}\bar{L}^{e}=\bar{M}^{e}\Gamma^{\star}\left[{\Gamma^{\star}}^{T}\bar{M}^{e}\Gamma^{\star}+\frac{1}{1+\psi^{e,\star}}\mathcal{I}\right]^{-1}{\Gamma^{\star}}^{T}\bar{M}^{e}$ Notice that: $\|{\bar{M}}^{e}-M^{e}\|_{2}\leq\frac{5(1+C_{\psi})\|w^{1,\star}\|_{\infty}}{4(\min_{k}w^{e,\star}_{k})^{2}}~{}~{}~{};~{}~{}~{}\|M^{e}\|_{2}\leq\frac{5}{4(\min_{k}w^{e,\star}_{k})}$ (21) where the inequalities follow by noting that $5(1+C_{\psi})\|w^{1,\star}\|_{\infty}\leq\min_{k}w^{e,\star}_{k}$ from Assumption 4 in (10) (main paper). Now let $(k,l)$ be any pair of indices connected in the moral graph of $B^{\star}$.
Then: $\displaystyle|(\mathcal{I}-B^{\star})^{T}M^{e}(\mathcal{I}-B^{\star})|_{k,l}$ $\displaystyle\geq$ $\displaystyle|(\mathcal{I}-B^{\star})^{T}\bar{M}^{e}(\mathcal{I}-B^{\star})|_{k,l}$ (22) $\displaystyle-$ $\displaystyle(1+\max_{i}\|B^{\star}_{:,i}\|_{2})^{2}\|\bar{M}^{e}-M^{e}\|_{2}$ $\displaystyle\geq$ $\displaystyle{{\kappa}^{e}_{\text{cond}}}-\frac{5(1+C_{\psi})\|w^{1,\star}\|_{\infty}(1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2})}{4(\min_{k}w_{k}^{e,\star})^{2}}$ $\displaystyle\geq$ $\displaystyle\frac{(1+\min_{i}\|B^{\star}_{:,i}\|_{2})^{2}}{2(\max_{k}w^{e,\star}_{k})}-\frac{5(1+C_{\psi})\|w^{1,\star}\|_{\infty}(1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2})}{4(\min_{k}w_{k}^{e,\star})^{2}}$ $\displaystyle\geq$ $\displaystyle\frac{(1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2})}{4\max_{k}w^{e,\star}_{k}}.$ Here, the second-to-last inequality follows from the relation (16), and the last inequality follows from $\frac{\min_{k}w^{e,\star}_{k}}{\text{cond}(\texttt{diag}(w^{e,\star}))}\geq 5{\kappa^{\star}}(1+C_{\psi})\|w^{1,\star}\|_{\infty}$. Next, we control $\|(\mathcal{I}-B^{\star})^{T}L^{e}(\mathcal{I}-B^{\star})\|_{\infty}$. Using the inequality $\left[{\Gamma^{\star}}^{T}M^{e}\Gamma^{\star}+\frac{1}{\Delta\psi^{e}}\mathcal{I}\right]^{-1}\preceq({\Delta\psi^{e}})\mathcal{I}$, we have that: $\displaystyle\|(\mathcal{I}-B^{\star})^{T}L^{e}(\mathcal{I}-B^{\star})\|_{\infty}\leq\frac{25C_{\psi}(1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2})\|\Gamma^{\star}\|_{2}^{2}}{16(\min_{k}w^{e,\star}_{k})^{2}}$ (23) Since $\frac{\min_{k}w^{e,\star}_{k}}{\text{cond}(\texttt{diag}(w^{e,\star}))}\geq 7C_{\psi}\|\Gamma^{\star}\|_{2}^{2}$, comparing (23) and (22), we conclude that for any indices $(k,l)$ connected in the moral graph of $B^{\star}$: $\displaystyle|(\mathcal{I}-B^{\star})^{T}M^{e}(\mathcal{I}-B^{\star})|_{k,l}>\|(\mathcal{I}-B^{\star})^{T}L^{e}(\mathcal{I}-B^{\star})\|_{\infty}.$ To finish the proof of the first assertion of Lemma 2, we have to show that for indices $(k,l)$ attaining the optimum in $\kappa^{e}_{\text{latent}}$, $|(\mathcal{I}-B^{\star})^{T}L^{e}(\mathcal{I}-B^{\star})|_{k,l}>0$, or equivalently, $|(\mathcal{I}-B^{\star})^{T}(\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}L^{e})(\mathcal{I}-B^{\star})|_{k,l}>0$. Notice that: $\displaystyle\left|(\mathcal{I}-B^{\star})^{T}\left(\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}L^{e}\right)(\mathcal{I}-B^{\star})\right|_{k,l}$ $\displaystyle\geq$ $\displaystyle|(\mathcal{I}-B^{\star})^{T}\bar{L}^{e}(\mathcal{I}-B^{\star})|_{k,l}$ $\displaystyle-$ $\displaystyle|(\mathcal{I}-B^{\star})^{T}\left(\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}L^{e}-\bar{L}^{e}\right)(\mathcal{I}-B^{\star})|_{k,l}$ $\displaystyle\geq$ $\displaystyle\kappa^{e}_{\text{latent}}-\left\|\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}L^{e}-\bar{L}^{e}\right\|_{2}(1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2})$ $\displaystyle\geq$ $\displaystyle\frac{8^{3}(1+\min_{i}\|B^{\star}_{:,i}\|_{2}^{2})(1+\psi^{e,\star})}{9^{3}(\max_{k}w^{e,\star}_{k})^{2}}$ $\displaystyle-$ $\displaystyle\left\|\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}L^{e}-\bar{L}^{e}\right\|_{2}(1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2})$ where the last inequality follows from the relation (17).
Thus, it suffices to show that: $\displaystyle\frac{8^{3}(1+\min_{i}\|B^{\star}_{:,i}\|_{2}^{2})(1+\psi^{e,\star})}{9^{3}(\max_{k}w^{e,\star}_{k})^{2}}-\left\|\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}{L}^{e}-\bar{L}^{e}\right\|_{2}(1+\max_{i}\|B^{\star}_{:,i}\|_{2}^{2})>0.$ (24) To that end, we control the term $\left\|\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}L^{e}-\bar{L}^{e}\right\|_{2}$: $\displaystyle\left\|\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}L^{e}-\bar{L}^{e}\right\|_{2}$ (25) $\displaystyle\leq{2}\underbrace{\left\|(M^{e}-\bar{M}^{e})\Gamma^{\star}\left[\frac{\Delta\psi^{e}}{1+\psi^{e,\star}}{\Gamma^{\star}}^{T}M^{e}\Gamma^{\star}+\frac{1}{1+\psi^{e,\star}}\mathcal{I}\right]^{-1}{\Gamma^{\star}}^{T}{M}^{e}\right\|_{2}}_{\text{Term 1}}$ $\displaystyle+\underbrace{\left\|(M^{e}-\bar{M}^{e})\Gamma^{\star}\left[\frac{\Delta\psi^{e}}{1+\psi^{e,\star}}{\Gamma^{\star}}^{T}M^{e}\Gamma^{\star}+\frac{1}{1+\psi^{e,\star}}\mathcal{I}\right]^{-1}{\Gamma^{\star}}^{T}(M^{e}-\bar{M}^{e})\right\|_{2}}_{\text{Term 2}}$ $\displaystyle+\underbrace{\Bigg{\|}\bar{M}^{e}\Gamma^{\star}\Bigg{\\{}\left[\frac{\Delta\psi^{e}{\Gamma^{\star}}^{T}M^{e}\Gamma^{\star}+\mathcal{I}}{1+\psi^{e,\star}}\right]^{-1}-\left[{\Gamma^{\star}}^{T}\bar{M}^{e}\Gamma^{\star}+\frac{1}{1+\psi^{e,\star}}\mathcal{I}\right]^{-1}\Bigg{\\}}{\Gamma^{\star}}^{T}\bar{M}^{e}\Bigg{\|}_{2}}_{\text{Term 3}}.$ We bound each of the individual terms in (25). Using the inequalities $\left[{\Gamma^{\star}}^{T}M^{e}\Gamma^{\star}+\frac{1}{\Delta\psi^{e}}\mathcal{I}\right]^{-1}\preceq({\Delta\psi^{e}})\mathcal{I}$ and $\|\bar{M}^{e}\|_{2}\leq\frac{1}{\min_{k}w_{k}^{e,\star}}$ and the relation (21), Term 1 and Term 2 can be bounded as follows: Term 1 $\displaystyle\leq\frac{10(1+C_{\psi})^{2}\|w^{1,\star}\|_{\infty}\|\Gamma^{\star}\|_{2}^{2}}{4(\min_{k}w^{e,\star}_{k})^{3}}$ (26) Term 2 $\displaystyle\leq\frac{25\|\Gamma^{\star}\|_{2}^{4}(1+C_{\psi})^{3}\|w^{1,\star}\|_{\infty}}{16(\min_{k}w^{e,\star}_{k})^{4}}.$ To bound Term 3, we use a Taylor series expansion, yielding $\displaystyle(A+E)^{-1}-A^{-1}=A^{-1}\sum_{k=1}^{\infty}(EA^{-1})^{k}.$ Further, if $\|E\|_{2}\|A^{-1}\|_{2}<1$, we can bound the spectral norm of the difference $(A+E)^{-1}-A^{-1}$ as follows: $\displaystyle\|(A+E)^{-1}-A^{-1}\|_{2}\leq\|A^{-1}\|_{2}\sum_{k=1}^{\infty}\|E\|_{2}^{k}\|A^{-1}\|_{2}^{k}=\frac{\|A^{-1}\|_{2}^{2}\|E\|_{2}}{1-\|E\|_{2}\|A^{-1}\|_{2}}$ (27) In the context of Term 3, $E=\frac{\Delta\psi^{e}}{1+\psi^{e,\star}}{\Gamma^{\star}}^{T}M^{e}\Gamma^{\star}-{\Gamma^{\star}}^{T}\bar{M}^{e}\Gamma^{\star}$ and $A=(1+\psi^{e,\star})\mathcal{I}$. One can check that $\|E\|_{2}\leq\frac{(\frac{5}{4}C_{\psi}+1)\|\Gamma^{\star}\|_{2}^{2}}{(\min_{k}w^{e,\star}_{k})^{2}}$.
Thus, employing the relation $(\min_{k}w_{k}^{e,\star})^{2}\geq 5(\frac{5}{4}C_{\psi}+1)\|\Gamma^{\star}\|_{2}^{2}$, we have that: $\displaystyle\text{Term 3}\leq\frac{5\|\Gamma^{\star}\|_{2}^{4}(\frac{5}{4}C_{\psi}+1)}{4\min_{k}(w^{e,\star}_{k})^{3}}$ (28) Combining the bounds in (26) and (28) with (25), we find that: $\displaystyle\left\|\frac{1+\psi^{e,\star}}{\Delta\psi^{e}}L^{e}-\bar{L}^{e}\right\|_{2}$ $\displaystyle\leq$ $\displaystyle\frac{10(1+C_{\psi})^{2}\|w^{1,\star}\|_{\infty}\|\Gamma^{\star}\|_{2}^{2}}{4(\min_{k}w^{e,\star}_{k})^{3}}+\frac{25\|\Gamma^{\star}\|_{2}^{4}(1+C_{\psi})^{3}\|w^{1,\star}\|_{\infty}}{16(\min_{k}w^{e,\star}_{k})^{4}}$ $\displaystyle+$ $\displaystyle\frac{5\|\Gamma^{\star}\|_{2}^{4}(\frac{5}{4}C_{\psi}+1)}{4\min_{k}(w^{e,\star}_{k})^{3}}$ $\displaystyle\leq$ $\displaystyle\left(\frac{10}{4}+\frac{25}{16}+\frac{5}{4}\right)\frac{(1+2C_{\psi})^{2}\max\\{\|\Gamma^{\star}\|_{2}^{2},\|\Gamma^{\star}\|_{2}^{4}\\}\max\\{1,\|w^{1,\star}\|_{\infty}\\}}{(\min_{k}w^{e,\star}_{k})^{3}},$ where the second inequality follows from $\min_{k}w^{e,\star}_{k}\geq\|w^{1,\star}\|_{\infty}(1+C_{\psi})$. Thus, since $\frac{\min_{k}w^{e,\star}_{k}}{\text{cond}(\texttt{diag}(w^{e,\star}))}\geq 8{\kappa^{\star}}(1+C_{\psi})^{2}\max\\{\|\Gamma^{\star}\|_{2}^{2},\|\Gamma^{\star}\|_{2}^{4}\\}\max\\{1,\|w^{1,\star}\|_{\infty}\\}$, the sufficient condition in (24) is satisfied. We conclude that if $\tilde{\psi}^{e}\neq\psi^{e,\star}$ for $e\in\\{2,3\\}$, then $(\mathcal{I}-B^{\star})^{T}(M^{e}+L^{e})(\mathcal{I}-B^{\star})$ will have a nonzero entry outside of the moral graph of $B^{\star}$, and thus, according to (20), $\text{moral}(B^{\star})\subset\text{moral}(\tilde{B})$. We have established the first component of Lemma 2. The second component (where $\tilde{\psi}^{e}=\psi^{e,\star}$ for $e=2,3$) follows from (19). We next provide a proof of Lemma 2 under Assumptions 1 $\&$ 2’-4’ in (10) (main paper). Lemma 1 implies the following relations: $\displaystyle(\mathcal{I}-B^{\star})^{-1}\left(\texttt{diag}\left(w^{3,\star}-(1+\tilde{\psi}^{3})w^{1,\star}\right)+(\psi^{3,\star}-\tilde{\psi}^{3})\Gamma^{\star}{\Gamma^{\star}}^{T}\right)(\mathcal{I}-B^{\star})^{-T}$ (29) $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}\texttt{diag}\left(\tilde{w}^{3}-(1+\tilde{\psi}^{3})\tilde{w}^{1}\right)(\mathcal{I}-\tilde{B})^{-T}$ $\displaystyle(\mathcal{I}-B^{\star})^{-1}\Bigg{(}\texttt{diag}\left(w^{3,\star}-\frac{1+\tilde{\psi}^{3}}{1+\tilde{\psi}^{2}}w^{2,\star}\right)$ $\displaystyle+\left(1+\psi^{3,\star}-\frac{(1+\psi^{2,\star})(1+\tilde{\psi}^{3})}{1+\tilde{\psi}^{2}}\right)\Gamma^{\star}{\Gamma^{\star}}^{T}\Bigg{)}(\mathcal{I}-B^{\star})^{-T}$ $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}\texttt{diag}\left(\tilde{w}^{3}-\frac{(1+\tilde{\psi}^{3})}{1+\tilde{\psi}^{2}}\tilde{w}^{2}\right)(\mathcal{I}-\tilde{B})^{-T}$ Using the relation (29) and an analysis similar to the proof under Assumptions 1-4 in (10) (main paper), one can arrive at the conclusion of Lemma 2 under Assumptions 1 $\&$ 2’-4’ in (10) (main paper). ∎ 

### B.4 Proof of Lemma 3

###### Proof. The proof technique of this lemma is similar in spirit to the proof of Theorem 1 in [27]. We consider the setup with Assumptions 1-4 in (10) (main paper) and, for brevity, leave out the proof under Assumptions 1 $\&$ 2’-4’ in (10) (main paper).
We have from (19) that for $e=2,3$: $\displaystyle(\mathcal{I}-B^{\star})^{-1}\texttt{diag}\left(w^{e,\star}-{(1+{\psi}^{e,\star})}w^{1,\star}\right)(\mathcal{I}-B^{\star})^{-T}$ (30) $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}\texttt{diag}\left(\tilde{w}^{e}-{(1+{\psi}^{e,\star})}\tilde{w}^{1}\right)(\mathcal{I}-\tilde{B})^{-T}$ From relation (30), we have: $\displaystyle(\mathcal{I}-\tilde{B})(\mathcal{I}-B^{\star})^{-1}\texttt{diag}\left(w^{2,\star}-{(1+{\psi}^{2,\star})}w^{1,\star}\right)(\mathcal{I}-B^{\star})^{-T}(\mathcal{I}-\tilde{B})^{T}$ $\displaystyle=$ $\displaystyle\texttt{diag}\left(\tilde{w}^{2}-{(1+{\psi}^{2,\star})}\tilde{w}^{1}\right)$ $\displaystyle(\mathcal{I}-\tilde{B})(\mathcal{I}-B^{\star})^{-1}\texttt{diag}\left(w^{3,\star}-{(1+{\psi}^{3,\star})}w^{1,\star}\right)(\mathcal{I}-B^{\star})^{-T}(\mathcal{I}-\tilde{B})^{T}$ $\displaystyle=$ $\displaystyle\texttt{diag}\left(\tilde{w}^{3}-{(1+{\psi}^{3,\star})}\tilde{w}^{1}\right)$ Let $\phi^{e}_{k}:=\left[(\mathcal{I}-\tilde{B})(\mathcal{I}-B^{\star})^{-1}\texttt{diag}\left(w^{e,\star}-(1+{\psi}^{e,\star})w^{1,\star}\right)\right]_{k,:}$ for any $k=1,2,\dots,p$. Let $\xi_{k}:=\left[(\mathcal{I}-\tilde{B})(\mathcal{I}-B^{\star})^{-1}\right]_{k,:}$. Then $\displaystyle\phi_{k}^{e}\perp\xi_{l}~{}~{}\text{for any }k\neq l.$ (31) Notice that for any $k=1,2,\dots,p$, $\\{\xi_{l}\\}_{l\neq k}$ are linearly independent. The condition above means that $\phi_{k}^{2}$ and $\phi_{k}^{3}$ (where neither would be exactly a zero vector, because Assumption 4 in (10) (main paper) ensures that $w^{2,\star}-w^{1,\star}{(1+{\psi}^{2,\star})},w^{3,\star}-w^{1,\star}{(1+{\psi}^{3,\star})}\neq 0$) live inside the one-dimensional null-space of the matrix formed by concatenating the vectors $\\{\xi_{l}\\}_{l\neq k}$. In particular, for every $k$, we have that for some constant $c\neq 0$: $\xi_{k}\texttt{diag}(w^{2,\star}-w^{1,\star}{(1+{\psi}^{2,\star})})=c\,\xi_{k}\texttt{diag}(w^{3,\star}-w^{1,\star}{(1+{\psi}^{3,\star})})$. It is straightforward to check that Assumption 2 in (10) (main paper), namely that $\frac{w^{2,\star}_{k}-{(1+{\psi}^{2,\star})}w^{1,\star}_{k}}{w^{2,\star}_{l}-{(1+{\psi}^{2,\star})}w^{1,\star}_{l}}\neq\frac{w^{3,\star}_{k}-{(1+{\psi}^{3,\star})}w^{1,\star}_{k}}{w^{3,\star}_{l}-{(1+{\psi}^{3,\star})}w^{1,\star}_{l}}$ for $k,l\in S$, $k\neq l$, implies that: $\displaystyle\begin{aligned} \Xi_{m,:}=\begin{cases}\Xi_{m,S}=0\\\\[3.61371pt] \Xi_{m,S}\text{ has one nonzero component}~{}\&~{}\Xi_{m,S^{c}}=0,\end{cases}\end{aligned}$ (32) where $\Xi\in\mathbb{R}^{p\times p}$ is the matrix formed by concatenating the row vectors $\\{\xi_{l}\\}_{l=1}^{p}$, so that $\mathcal{I}-\tilde{B}=\Xi(\mathcal{I}-B^{\star})$. Since $\mathcal{I}-\tilde{B}$ and $\mathcal{I}-B^{\star}$ are invertible, so must be $\Xi$. The relation (32), the fact that $S=\\{1,2,\dots,p\\}$, and the invertibility of $\Xi$ imply that $\Xi$ is a diagonal matrix up to row-permutations, so that: $\displaystyle(\mathcal{I}-\tilde{B})=\mathcal{K}_{\pi}{D}(\mathcal{I}-B^{\star}),$ where $D$ is diagonal with all nonzero entries on the diagonal and $\mathcal{K}_{\pi}$ is a permutation matrix. We know that $(\mathcal{I}-\tilde{B})$ will have ones on the diagonal. Hence, it is straightforward to check that $\mathcal{K}_{\pi}=D=\mathcal{I}$ and thus $\tilde{B}=B^{\star}$. ∎
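Before turning to the role of $\bar{h}$, we record a small numerical sanity check of the population covariance identity and the Woodbury decomposition of $(\Sigma^{e,\star})^{-1}$ used throughout this appendix; this is a sketch, and the DAG, dimensions, and parameter values below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
p, h, psi = 6, 2, 0.8

B = np.tril(rng.normal(size=(p, p)), k=-1) * 0.5  # an arbitrary DAG
Gamma = rng.normal(size=(p, h))
w = rng.uniform(1.0, 2.0, size=p)
W_inv = np.diag(1.0 / w)
A = np.eye(p) - B

# population covariance: Sigma = A^{-1} (diag(w) + (1+psi) Gamma Gamma^T) A^{-T}
Sigma = np.linalg.solve(A, (np.diag(w) + (1 + psi) * Gamma @ Gamma.T)
                        @ np.linalg.inv(A).T)

# Woodbury decomposition of Sigma^{-1} from Section B.1
core = np.linalg.inv(Gamma.T @ W_inv @ Gamma + np.eye(h) / (1 + psi))
decomp = A.T @ (W_inv - W_inv @ Gamma @ core @ Gamma.T @ W_inv) @ A

assert np.allclose(np.linalg.inv(Sigma), decomp)
```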
## Appendix C Role of $\bar{h}$ in identifiability

In this section, we consider the role of $\bar{h}$ (i.e., the number of latent variables in the model) for identifiability. Theorem 1 (main paper) states that as long as Assumptions 1-4 in (10) (main paper) or Assumptions 1 $\&$ 2’-4’ in (10) (main paper) are satisfied, identifiability is possible for any $\bar{h}$ with $\bar{h}\geq\text{dim}(H)$. These assumptions rely on the existence of at least two interventional environments. We will first show that this is a necessary condition in the setting where $\bar{h}=p$. We will also show that if $\bar{h}=\text{dim}(H)$ and under some incoherence conditions (e.g., dense latent effects and a sparse DAG structure), a single interventional environment is sufficient for identifiability. 

### C.1 $\bar{h}=p$

As an example, suppose there is only a single interventional environment satisfying Assumptions 1-4 in (10) (main paper). We will show that, in addition to the population parameters, the population _DirectLikelihood_ estimator has an additional minimizer $\tilde{B},\tilde{\Gamma},\\{(\tilde{\psi}^{e},\tilde{w}^{e})\\}_{e=1}^{m}$ by showing that these parameters satisfy the requirement for an optimal solution in Lemma 1. Further, we show that $\|\text{moral}(\tilde{B})\|_{\ell_{0}}=\|\text{moral}({B}^{\star})\|_{\ell_{0}}$, so that choosing the associated connectivity matrix with the sparsest moral graph does not exclude $\tilde{B}$. We let $\tilde{\psi}^{e}=\psi^{e,\star}$ and we select $\tilde{B}$ and $\tilde{w}^{1},\tilde{w}^{2}$ to satisfy the following equation: $\displaystyle(\mathcal{I}-B^{\star})^{-1}\texttt{diag}(w^{2,\star}-(1+\psi^{2,\star})w^{1,\star})(\mathcal{I}-B^{\star})^{-T}$ (33) $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}\texttt{diag}(\tilde{w}^{2}-(1+{\psi}^{2,\star})\tilde{w}^{1})(\mathcal{I}-\tilde{B})^{-T}.$ Specifically, let $\tilde{\mathcal{D}}_{X}$ be some DAG that is Markov equivalent to $\mathcal{D}_{X}$. Let $\tilde{B}$ be compatible with $\tilde{\mathcal{D}}_{X}$. The strengths of the coefficients of $\tilde{B}$ as well as the vector $\tilde{w}^{2}-(1+\psi^{2,\star})\tilde{w}^{1}$ can then be determined to satisfy (33). We choose the entries of $\tilde{w}^{2}$ large enough so that $(\mathcal{I}-\tilde{B})^{-1}\texttt{diag}(\tilde{w}^{2})(\mathcal{I}-\tilde{B})^{-T}\succ(\mathcal{I}-{B}^{\star})^{-1}\texttt{diag}({w}^{2,\star})(\mathcal{I}-{B}^{\star})^{-T}$, and choose $\tilde{w}^{1}$ accordingly to yield the overall parameter vector $\tilde{w}^{2}-(1+\psi^{2,\star})\tilde{w}^{1}$. Thus, for this choice of parameters, (33) is satisfied. It remains to check that: $\displaystyle(\mathcal{I}-B^{\star})^{-1}(\texttt{diag}(w^{1,\star})+\Gamma^{\star}{\Gamma^{\star}}^{T})(\mathcal{I}-B^{\star})^{-T}$ $\displaystyle=$ $\displaystyle(\mathcal{I}-\tilde{B})^{-1}(\texttt{diag}(\tilde{w}^{1})+\tilde{\Gamma}\tilde{\Gamma}^{T})(\mathcal{I}-\tilde{B})^{-T}.$ Given (33), it suffices to check that: $\displaystyle(\mathcal{I}-B^{\star})^{-1}(-\texttt{diag}(w^{2,\star})/(1+\psi^{2,\star})+\Gamma^{\star}{\Gamma^{\star}}^{T})(\mathcal{I}-B^{\star})^{-T}$ $\displaystyle=$ $\displaystyle(\mathcal{I}-\tilde{B})^{-1}(-\texttt{diag}(\tilde{w}^{2})/(1+\psi^{2,\star})+\tilde{\Gamma}\tilde{\Gamma}^{T})(\mathcal{I}-\tilde{B})^{-T}.$ Rearranging terms and appealing to the fact that $(\mathcal{I}-\tilde{B})^{-1}\texttt{diag}(\tilde{w}^{2})(\mathcal{I}-\tilde{B})^{-T}\succ(\mathcal{I}-{B}^{\star})^{-1}\texttt{diag}({w}^{2,\star})(\mathcal{I}-{B}^{\star})^{-T}$, it is straightforward to find a full rank $\tilde{\Gamma}$ that satisfies the relation above.
### C.2 $\bar{h}=\text{dim}(H)$

We consider the setting with a single interventional environment that satisfies Assumptions 1-4 in (10) (main paper). We show that under some incoherence-type assumptions, the _DirectLikelihood_ procedure combined with choosing the sparsest moral graph has a unique optimum equaling $B^{\star}$. By Lemma 2, we conclude that $\text{moral}(B^{\star})\subset\text{moral}(\tilde{B})$ unless $\tilde{\psi}^{2}=\psi^{2,\star}$. Since we are looking for the sparsest moral graph, we conclude that $\text{moral}(B^{\star})=\text{moral}(\tilde{B})$. By Lemma 1, we have that: $\displaystyle(\mathcal{I}-B^{\star})^{-1}(\texttt{diag}(w^{e,\star})+(1+\psi^{e,\star})\Gamma^{\star}{\Gamma^{\star}}^{T})(\mathcal{I}-B^{\star})^{-T}$ $\displaystyle=$ $\displaystyle(\mathcal{I}-\tilde{B})^{-1}(\texttt{diag}(\tilde{w}^{e})+(1+{\psi}^{e,\star})\tilde{\Gamma}\tilde{\Gamma}^{T})(\mathcal{I}-\tilde{B})^{-T}.$ By the Woodbury inversion lemma, we have for both $e=1,2$: $\displaystyle\left[(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star})-(\mathcal{I}-\tilde{B})^{T}\texttt{diag}(\tilde{w}^{e})^{-1}(\mathcal{I}-\tilde{B})\right]+L^{e,\star}$ $\displaystyle~{}~{}\text{ has rank }\text{dim}(H),$ (34) where $L^{e,\star}$ is a rank $\text{dim}(H)$ matrix with row and column space equal to the row and column space of $(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}\allowbreak\Gamma^{\star}{\Gamma^{\star}}^{T}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star})$. Notice that the quantity inside the brackets in (34) lies inside the moral graph of $B^{\star}$. We now use rank-sparsity incoherence [3] to conclude that the term inside the brackets in (34) vanishes. In particular, if the tangent space of the sparse variety at the moral graph of $B^{\star}$ is transverse to the tangent space of the low-rank variety at $L^{e,\star}$, then (34) is satisfied if and only if, for $e=1,2$, $\left[(\mathcal{I}-B^{\star})^{T}\texttt{diag}(w^{e,\star})^{-1}(\mathcal{I}-B^{\star})-(\mathcal{I}-\tilde{B})^{T}\texttt{diag}(\tilde{w}^{e})^{-1}(\mathcal{I}-\tilde{B})\right]=0.$ (35) The transversality of the tangent spaces is satisfied if the latent effects are dense and $\mathcal{D}^{\star}_{X}$ is sparse (we leave out the technical details and refer the interested reader to [3]). Thus, following the same strategy as in the proof of Lemma 3, we conclude from the relation (35) that $\tilde{B}=B^{\star}$. 

## Appendix D Single parameter perturbation setting

As discussed in Section 2 (main paper), one may fit to data the perturbation model (2) (main paper) in which the perturbation magnitudes are equal across the coordinates, e.g. $\text{var}(\delta^{e})=\zeta^{e,\star}{\bf 1}$ for $\zeta^{e,\star}\in\mathbb{R}_{+}$. Fitting such a model can be achieved by the reparametrization $w^{e}=w^{1}+\zeta^{e}{\bf 1}$ for $e=2,\dots,m$, where $w^{1}\in\mathbb{R}^{p}_{++}$ and $\zeta^{e}\in\mathbb{R}_{+}$.
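For concreteness, under this reparametrization the covariance implied by the model in environment $e$ takes the form (this display is our own unpacking of the reparametrization, obtained by combining it with the covariance model of Lemma 1): $\Sigma^{e}=(\mathcal{I}-B)^{-1}\left(\texttt{diag}(w^{1})+\zeta^{e}\mathcal{I}+(1+\psi^{e})\Gamma\Gamma^{T}\right)(\mathcal{I}-B)^{-T},$ with $\zeta^{1}=0$ and $\psi^{1}=0$ in the observational environment, so that $\zeta^{e}$ shifts all noise variances equally while $\psi^{e}$ scales the latent contribution.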
We assume an observational environment $e=1$ and two interventional environments $e=2,3$, and modify Assumptions 2 and 4 appropriately in this setting as follows: Assumption 2” $\displaystyle-\text{heterogeneity among the perturbations:}$ (36) $\displaystyle\text{the vectors }\begin{pmatrix}\psi^{2,\star}\\\ \psi^{3,\star}\end{pmatrix}~{}\&~{}\begin{pmatrix}\zeta^{2,\star}\\\ \zeta^{3,\star}\end{pmatrix}\text{ are linearly independent}.$ Assumption 4” $\displaystyle-\text{perturbation is sufficiently strong for }e=3\text{:}$ $\displaystyle\zeta^{3,\star}\geq 8{\kappa^{\star}}(1+2C_{\psi})^{2}(1+\|w^{2,\star}\|_{\infty})(1+\|\Gamma^{\star}\|_{2}^{2}+\|\Gamma^{\star}\|_{2}^{4})$ With this modification, we have the following consistency guarantees: ###### Theorem 4 (Single parameter perturbation with perturbed latent variables). Suppose Assumptions 1 and 3 in (10) (main paper) and Assumptions 2” and 4” in (36) are satisfied. The following assertions hold: 1. $B^{\star}\in{B}_{\text{opt}}$, and any other optimum ${B}\in{B}_{\text{opt}}$ satisfies $\text{moral}(B^{\star})\subseteq\text{moral}(B)$. 2. The optimum of $\arg\min_{B\in{B}_{\text{opt}}}\|\text{moral}(B)\|_{\ell_{0}}$ is unique and equal to $B^{\star}$. We next provide identifiability guarantees in the setting without latent perturbations (i.e., $\psi^{e,\star}=0$ for all $e$) with single parameter perturbation. Fitting such a model can be achieved by the reparametrization $w^{e}=w^{1}+\zeta^{e}{\bf 1}$ for a parameter $\zeta^{e}\in\mathbb{R}_{+}$ and $\psi^{e}\equiv 0$. We then have the following identifiability guarantee in this setting. ###### Theorem 5 (Single parameter perturbation with unperturbed latent variables). Suppose Assumptions 1-2 in (10) (main paper) are satisfied only for environment $e=2$. Then, if $\zeta^{2,\star}>0$, $\mathcal{D}_{\text{opt}}=\mathcal{D}^{\star}_{X}$ and $B_{\text{opt}}=B^{\star}$. 

### D.1 Proof of Theorem 4

###### Proof. The proof of the first part closely mirrors that of Theorem 1 (main paper) and is left out for brevity; it concludes that $\tilde{\psi}^{e}=\psi^{e,\star}$ for $e=1,2,3$. To prove the second part, suppose that in addition to the population parameters $(B^{\star},\Gamma^{\star},\\{(\psi^{e,\star},\zeta^{e,\star})\\}_{e=1}^{m})$, _DirectLikelihood_ has another solution $(\tilde{B},\\{(\tilde{\psi}^{e},\tilde{\zeta}^{e})\\}_{e=1}^{m})$. Then, since the first environment does not consist of any perturbations, we find that: $\displaystyle\Sigma^{2,\star}-\Sigma^{1,\star}$ $\displaystyle=(\mathcal{I}-B^{\star})^{-1}(\zeta^{2,\star}\mathcal{I}+\psi^{2,\star}{\Gamma^{\star}}{\Gamma^{\star}}^{T})(\mathcal{I}-B^{\star})^{-T}$ $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}(\tilde{\zeta}^{2}\mathcal{I}+{\psi}^{2,\star}{\tilde{\Gamma}}{\tilde{\Gamma}}^{T})(\mathcal{I}-\tilde{B})^{-T}$ $\displaystyle\Sigma^{3,\star}-\Sigma^{1,\star}$ $\displaystyle=(\mathcal{I}-B^{\star})^{-1}(\zeta^{3,\star}\mathcal{I}+\psi^{3,\star}{\Gamma^{\star}}{\Gamma^{\star}}^{T})(\mathcal{I}-B^{\star})^{-T}$ $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}(\tilde{\zeta}^{3}\mathcal{I}+{\psi}^{3,\star}{\tilde{\Gamma}}{\tilde{\Gamma}}^{T})(\mathcal{I}-\tilde{B})^{-T}$ Due to Assumption 2”, there exists $a=(a_{1},a_{2})\in\mathbb{R}^{2}$ such that $a^{T}\begin{pmatrix}\psi^{2,\star}\\\ \psi^{3,\star}\end{pmatrix}=0$ but $a^{T}\begin{pmatrix}\zeta^{2,\star}\\\ \zeta^{3,\star}\end{pmatrix}\neq 0$.
Then, $\displaystyle a_{1}(\Sigma^{2,\star}-\Sigma^{1,\star})+a_{2}(\Sigma^{3,\star}-\Sigma^{1,\star})$ $\displaystyle=a^{T}\begin{pmatrix}\zeta^{2,\star}\\\ \zeta^{3,\star}\end{pmatrix}(\mathcal{I}-B^{\star})^{-1}(\mathcal{I}-B^{\star})^{-T}$ $\displaystyle=a^{T}\begin{pmatrix}\tilde{\zeta}^{2}\\\ \tilde{\zeta}^{3}\end{pmatrix}(\mathcal{I}-\tilde{B})^{-1}(\mathcal{I}-\tilde{B})^{-T}$ Lastly, by appealing to the identifiability of DAGs under equal error variances [22], we have that $\tilde{B}=B^{\star}$. We further note that asymptotic convergence results similar to Corollary 1 (main paper) may be shown, but are left out for brevity. ∎ 

### D.2 Proof of Theorem 5

###### Proof. We will show in Lemma 4 that for $e=1,2$: $\displaystyle\Sigma^{e,\star}=$ $\displaystyle(\mathcal{I}-B^{\star})^{-1}\left(\texttt{diag}\left(w^{1,\star}+\zeta^{e,\star}{\bf 1}\right)+\Gamma^{\star}{\Gamma^{\star}}^{T}\right)(\mathcal{I}-B^{\star})^{-T}$ (37) $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}\left(\texttt{diag}\left(\tilde{w}^{1}+\tilde{\zeta}^{e}{\bf 1}\right)+\tilde{\Gamma}\tilde{\Gamma}^{T}\right)(\mathcal{I}-\tilde{B})^{-T}$ Taking the difference $\Sigma^{2,\star}-\Sigma^{1,\star}$, the relation (37) yields: $\displaystyle\Sigma^{2,\star}-\Sigma^{1,\star}=$ $\displaystyle\zeta^{2,\star}(\mathcal{I}-B^{\star})^{-1}(\mathcal{I}-B^{\star})^{-T}$ (38) $\displaystyle=\tilde{\zeta}^{2}(\mathcal{I}-\tilde{B})^{-1}(\mathcal{I}-\tilde{B})^{-T}.$ We can then appeal to the identifiability of DAGs with equal noise variances [22] to conclude that $\tilde{B}=B^{\star}$. ∎ 

## Appendix E Proof of Theorem 2 (main paper)

We consider the proof of the case with unperturbed latent confounders. For notational convenience, we state the extended population _DirectLikelihood_ estimator (5) (main paper) in the setting with unperturbed latent variables as: $\displaystyle(\hat{B},\hat{\Gamma},\\{\hat{w}^{e}\\}_{e=1}^{m})=\operatornamewithlimits{arg\,min}_{\begin{subarray}{c}B\in\mathbb{R}^{p\times p},\Gamma\in\mathbb{R}^{p\times\bar{h}}\\\ \\{w^{e}\\}_{e=1}^{m}\subseteq\mathbb{R}^{p}_{++}\\\ \text{DAG }\mathcal{D}\end{subarray}}$ $\displaystyle\sum_{e=1}^{m}\pi^{e,\star}\ell^{e}(B,\Gamma,w^{e};\Sigma^{e,\star})$ (39) subject to $\displaystyle B\text{ compatible with }\mathcal{D}~{}~{};~{}~{}w^{e}\succeq w^{1}\text{ for }e=2,\dots,m$ where $\displaystyle\ell^{e}(\cdot)$ $\displaystyle=\log\det\left(\texttt{diag}(w^{e})+\Gamma\Gamma^{T}\right)$ $\displaystyle+\mathrm{trace}\left((\texttt{diag}(w^{e})+\Gamma\Gamma^{T})^{-1}(\mathcal{I}-B){\Sigma}^{e,\star}(\mathcal{I}-B)^{T}\right).$ As with Lemma 1, we characterize the optimal solutions of (39) in the following lemma. ###### Lemma 4. Optimal solutions of (39) satisfy the following equivalence: $\displaystyle(B,\Gamma,\\{w^{e}\\}_{e=1}^{m})\text{ optimum of }(39)$ $\displaystyle\iff$ $\displaystyle B~{}\text{compatible with a DAG},\\{w^{e}\\}_{e=1}^{m}\subseteq\mathbb{R}^{p}_{++},\Gamma\in\mathbb{R}^{p\times\bar{h}}\text{ and }$ $\displaystyle\Sigma^{e,\star}=(\mathcal{I}-{B})^{-1}(\texttt{diag}(w^{e})+\Gamma\Gamma^{T})(\mathcal{I}-{B})^{-T}\text{ for }e=1,2,\dots,m$ The proof of Lemma 4 is similar to that of Lemma 1 and is left out for brevity. Based on the result of Lemma 4, any optimum of (39) must satisfy, for each $e=1,2,\dots,m$:
$\Sigma^{e,\star}=(\mathcal{I}-{B})^{-1}(\Gamma\Gamma^{T}+\texttt{diag}(w^{e}))(\mathcal{I}-{B})^{-T}.$ (40) Aside from $(B^{\star},\Gamma^{\star},\\{w^{e,\star}\\}_{e=1}^{m})$, suppose there is another solution $(\tilde{B},\tilde{\Gamma},\\{\tilde{w}^{e}\\}_{e=1}^{m})$ satisfying (40). Thus, we have for $e=2,3$: $\displaystyle\Sigma^{e,\star}-\Sigma^{1,\star}$ $\displaystyle=(\mathcal{I}-B^{\star})^{-1}\texttt{diag}(w^{e,\star}-w^{1,\star})(\mathcal{I}-B^{\star})^{-T}$ (41) $\displaystyle=(\mathcal{I}-\tilde{B})^{-1}\texttt{diag}(\tilde{w}^{e}-\tilde{w}^{1})(\mathcal{I}-\tilde{B})^{-T}$ Equation (41) yields the relation for $e=2,3$: $\displaystyle(\mathcal{I}-\tilde{B})(\mathcal{I}-B^{\star})^{-1}\texttt{diag}(w^{e,\star}-w^{1,\star})(\mathcal{I}-B^{\star})^{-T}(\mathcal{I}-\tilde{B})^{T}=\texttt{diag}(\tilde{w}^{e}-\tilde{w}^{1}).$ Let $\phi^{e}_{k}:=[(\mathcal{I}-\tilde{B})(\mathcal{I}-B^{\star})^{-1}\texttt{diag}(w^{e,\star}-w^{1,\star})]_{k,:}$ for any $k=1,2,\dots,p$. Let $\xi_{k}:=[(\mathcal{I}-\tilde{B})(\mathcal{I}-B^{\star})^{-1}]_{k,:}$. Then for any $k=1,2,\dots,p$, $\displaystyle\phi_{k}^{e}\perp\xi_{l}~{}~{}\text{for any }k\neq l$ (42) Notice that for any $k=1,2,\dots,p$, $\\{\xi_{l}\\}_{l\neq k}$ are linearly independent. The condition above means that $\phi_{k}^{2}$ and $\phi_{k}^{3}$ (where neither would be exactly a zero vector, because $w^{2,\star}-w^{1,\star},w^{3,\star}-w^{1,\star}\neq 0$) live inside the one-dimensional null-space of the matrix formed by concatenating the vectors $\\{\xi_{l}\\}_{l\neq k}$. As with the proof of Lemma 3, the assumption that $\frac{w^{2,\star}_{k}-w^{1,\star}_{k}}{w^{2,\star}_{l}-w^{1,\star}_{l}}\neq\frac{w^{3,\star}_{k}-w^{1,\star}_{k}}{w^{3,\star}_{l}-w^{1,\star}_{l}}$ for $k,l\in S,k\neq{l}$ implies that the matrix $\Xi$ formed by concatenating the row vectors $\\{\xi_{l}\\}_{l=1}^{p}$ satisfies relation (32). _Proof of part (a)_ : The relation (32) and the invertibility of $\Xi$ imply that $\Xi$ is a diagonal matrix up to row-permutations, so that: $\displaystyle(\mathcal{I}-\tilde{B})=\mathcal{K}_{\pi}{D}(\mathcal{I}-B^{\star}),$ where $D$ is diagonal with all nonzero entries on the diagonal and $\mathcal{K}_{\pi}$ is a permutation matrix. We know that $(\mathcal{I}-\tilde{B})$ will have ones on the diagonal. Hence, it is straightforward to check that $\mathcal{K}_{\pi}=D=\mathcal{I}$ and thus $\tilde{B}=B^{\star}$. _Proof of part (b)_ : Suppose $B^{\star}$, $\tilde{B}$ and $\Xi$ are ordered according to the ancestors of $X_{p}$, then $X_{p}$, and then the remaining variables. Since the underlying graph is a DAG, there is an ancestor of $X_{p}$ that does not have any parent. We first consider this variable. Suppose $\Xi_{1,S}=0$. Then, since $\Xi_{1,:}$ is zero on this variable and its children, $\Xi_{1,:}(\mathcal{I}-B^{\star})_{:,1}$ will be zero. This is a contradiction, since $(\mathcal{I}-\tilde{B})$ has diagonal elements equal to one. By condition (32) and the fact that $(\mathcal{I}-\tilde{B})$ must have ones on the diagonal, $\Xi_{1,:}$ must have one nonzero entry, on either this ancestor variable or its children. Suppose for purposes of contradiction that this nonzero value happened on one of the children. Notice that if $\Xi_{j,1}$ is nonzero for some $j\neq 1$, then condition (32) implies that $\Xi_{j,:}=c_{1}e_{1}$ for some constant $c_{1}$.
However, since the variable corresponding to index $j$ is not a parent to the variable corresponding to index $1$, $\Xi_{j,:}(\mathcal{I}-B^{\star})_{:,j}$ will be zero. With this logic, $\Xi_{:,1}$ will have all zeros, leading to a contradiction since $\Xi$ must be invertible. Hence, $\Xi_{1,:}$ _must_ be of the form $\Xi_{1,:}=c_{2}e_{1}$ for some constant $c_{2}$. Since the diagonal elements of $\mathcal{I}-\tilde{B}$ are exactly one, $c_{2}=1$. Repeating the same argument, and letting $\bar{S}$ denote the set of variables $X_{p}$ and the ancestors of $X_{p}$, we find that $\Xi_{\bar{S},:}=\begin{pmatrix}\mathcal{I}_{|\bar{S}|}&0_{|\bar{S}^{c}|}\end{pmatrix}$. Hence we have that $\tilde{B}_{k,:}=B^{\star}_{k,:}$ for all $k$ corresponding to the target variable $X_{p}$ and all ancestors of $X_{p}$. Now suppose that $S$ includes $X_{p}$ and descendants of $X_{p}$. Let $\tilde{B}$, $B^{\star}$, and $\Xi$ be organized with the descendants of $X_{p}$ first, then $X_{p}$, and then everything else. Since the underlying graph is a DAG, there are one or more descendants of $X_{p}$ that do not have any children. Let $\bar{S}$ be this collection. Since $\Xi_{\bar{S},:}(\mathcal{I}-B^{\star})_{:,\bar{S}}$ must have diagonal entries equal to one, and because of condition (32), $\Xi_{\bar{S},\bar{S}}=\mathcal{I}_{|\bar{S}|}$. Now consider any parent of these nodes that is a descendant of $X_{p}$. Since $\Xi_{|\bar{S}|+1,:}(\mathcal{I}-B^{\star})_{:,|\bar{S}|+1}$ must equal one, condition (32) implies that $\Xi_{|\bar{S}|+1,:}$ must have only one nonzero entry on $S$, at either the entries corresponding to its descendants or the variable itself. If this nonzero entry is located at one of the descendants, then $\Xi$ will have two identical rows, meaning that it would not be invertible. This reasoning can be repeated until we arrive at the index corresponding to $X_{p}$ and show that $\Xi_{p,:}=e_{p}$. Hence, $\tilde{B}_{p,:}=B^{\star}_{p,:}$. _Proof of part (c)_ : We prove that when the target variable and its parents all receive shift interventions and the DAG $B^{\star}$ is faithful with respect to the underlying distribution, the sparsest optimum $\tilde{B}$ satisfies $\tilde{B}_{p,:}=B^{\star}_{p,:}$. Due to the faithfulness assumption of the conditional distribution, any of the sparsest optimum DAGs will have the same v-structures and skeleton as the population DAG. From the discussion above, $\Xi$ will satisfy the relation (32), where $S$ denotes the set of variables that have received a shift intervention. Suppose for the sake of contradiction that $\Xi_{p,:}\neq e_{p}$ (i.e., the estimated causal parents are not equal to the true causal parents). Since $\Xi$ is invertible, by property (32) and the fact that $\Xi(\mathcal{I}-B^{\star})$ must have nonzero diagonal elements, it must be that $\Xi_{t,:}=e_{p}$ for one of the parents of $X_{p}$, denoted by index $t$. With respect to the graph, this means that we are considering a graph where the edge between this parent of $X_{p}$ and $X_{p}$ is reversed. This edge reversal can of course be continued along the path of the descendants of $X_{p}$ as long as this descendant has only a single parent. Suppose at any one of the descendants, the edge reversal stops so that this descendant becomes a source node. Let $s$ be the index of this variable. Consider a node $s^{\prime}\neq s$ that is not a parent or ancestor of $X_{p}$.
Starting from the last descendant of this node, denoted by index $s_{l}^{\prime}$, $\Xi_{s,s_{l}^{\prime}}=0$ since otherwise this would imply that $s^{\prime}_{l}$ is a parent to $s$, contradicting that $s$ is a source node. (The original text writes $\Gamma$ here, but the argument concerns the rows of the invertible matrix $\Xi$; we use $\Xi$ throughout.) Working upwards from this last descendant, we can see that $\Xi_{s,s^{\prime}}=0$. Furthermore, for any parent of $X_{p}$ denoted by $k$, $\Xi_{s,k}=0$ since otherwise, based on condition (32), $\Xi_{s,:}=ce_{k}$, which would mean that the node $k$ is a parent to $s$, contradicting that $s$ is a source node. Following this logic upwards, we can also conclude that $\Xi_{s,k}=0$ for $k$ being an ancestor of $X_{p}$. Since $\Xi$ is invertible, it remains that $\Xi_{s,s}\neq 0$. This again leads to a contradiction with $s$ being a source node, since it would mean that $s$ in the estimated DAG would have the same parents as in the population DAG, and this set of parents is non-empty since $s$ is a descendant of $X_{p}$. These contradictions imply that $\Xi_{p,:}=e_{p}$. ## Appendix F Proof of Theorem 3 (main paper) ###### Proof. For any connectivity matrix $B$, latent effects matrix $\Gamma$, and noise variances $w^{1}$: $\displaystyle\text{KL}(\Sigma^{e,\star},\hat{\Sigma}_{B,\Gamma,w^{1}}(\bar{\zeta}^{e},\bar{\psi}^{e}))\geq\text{KL}(\Sigma^{e,\star},\hat{\Sigma}_{B^{\star},\Gamma^{\star},w^{1,\star}}(\bar{\zeta}^{e},\bar{\psi}^{e}))=0.$ Thus, any optimum $(\tilde{B},\tilde{\Gamma},\tilde{w}^{1})$ to the max-risk optimization problem (12) (main paper) must satisfy for all $\mathcal{P}_{e}\in\mathcal{P}_{C_{\zeta},C_{\psi}}$ the relation $\Sigma^{e,\star}=\hat{\Sigma}_{\tilde{B},\tilde{\Gamma},\tilde{w}^{1}}(\bar{\zeta}^{e},\bar{\psi}^{e})$. We take three environments: the first corresponding to the observational setting $e=1$, where none of the variables are intervened on; a second environment $e=2$, corresponding to a setting where only the latent variables are perturbed; and a last environment $e=3$ that satisfies the assumptions of Theorem 3 (main paper). We then appeal to Theorem 4 to conclude the desired result. ∎ ## Appendix G Model misspecification We next explore the robustness of _DirectLikelihood_ to model misspecifications. We consider three types of model misspecification: non-Gaussian noise terms in the linear SCM (2) (main paper), so that the observed variables are non-Gaussian; non-IID latent variables; and non-linear functional forms in the SCM. We consider the synthetic setup described in Section 6.1.1 where the data is generated with two latent variables (i.e., $h=2$) in the setting with non-IID latent variables, and one latent variable (i.e., $h=1$) in the non-Gaussian and non-linear settings. Below we describe the specific modifications for each problem setting: * • Non-Gaussian: $\epsilon_{k}\sim\mathrm{Laplace}(0,0.5)$; $\delta^{e}_{k}\sim\mathrm{Laplace}(0,5+\text{Unif}(-1,1))$ and $\psi^{e,\star}\sim\text{Unif}(0,0.5)$ for $k=1,2,\dots,p$ and $e=2,3,\dots,m$ (a minimal data-generation sketch for this setting is given at the end of this appendix). * • Non-IID latent variables: $\epsilon\sim\mathcal{N}(0,0.5\mathcal{I}_{p})$ and $H\sim\mathcal{N}\left(0,\begin{pmatrix}1&0.2\\\ 0.2&1\end{pmatrix}\right)$; $\delta^{e}_{k}\sim\mathcal{N}(0,5+\text{Unif}(0,1))$ and $H^{e}\sim\mathcal{N}\left(0,\begin{pmatrix}1+\text{Unif}(0,0.5)&0.2\\\ 0.2&1+\text{Unif}(0,0.5)\end{pmatrix}\right)$ for $k=1,2,\dots,p$ and $e=2,3,\dots,m$.
* • Non-linear SCM: $\epsilon\sim\mathcal{N}(0,0.5\mathcal{I}_{p})$ and $H\sim\mathcal{N}(0,1)$; $\delta^{e}_{k}\sim\mathcal{N}(0,5+\text{Unif}(0,1))$ and $H^{e}\sim(1+\text{Unif}(0,0.5))\mathcal{N}(0,1)$ for every $k=1,2,\dots,p$ and $e=2,3,\dots,m$. Further, for every $k=1,2,\dots,p$: $X_{k}^{e}=B^{\star}_{k,\texttt{pa}(k)}X_{\texttt{pa}(k)}^{e}+\gamma^{T}_{k}H^{e}+\xi(B^{\star}_{k,\texttt{pa}(k)}X_{\texttt{pa}(k)}^{e}+\gamma^{T}_{k}H^{e})^{2}+\epsilon_{k}+\delta^{e}_{k}$. We consider $\xi\in\\{0.1,0.3\\}$. For each setting, we obtain data for an observational environment and $6$ interventional environments, for a total of $m=7$ environments. We supply the perturbation data to the _DirectLikelihood_ procedure with $\bar{h}=2$ and the constraint $\psi^{e}\leq C_{\psi}=0.5$ for the non-Gaussian and non-linear settings, and $\bar{h}=3$ and the constraint $\psi^{e}\leq C_{\psi}=0.5$ in the non-IID latent variables setting. For all problem instances, the set of candidate DAGs is obtained by employing GES on the pooled data and finding the optimal scoring DAGs among this collection as well as the modified DAGs from thresholding optimal connectivity matrices at level $0.05$. Fig. 4 demonstrates the robustness of the _DirectLikelihood_ procedure to these model misspecifications. We observe that the _DirectLikelihood_ procedure provides an accurate estimate in the non-Gaussian and non-IID latent variable settings. Further, our method appears to be robust to some amount of non-linearity. We remark that the empirical success in the non-Gaussian setting is supported by our theoretical results in Section 3 (main paper). As also noted in Section 3 (main paper), our theoretical results can be extended to the setting with non-IID latent variables. However, we are unable to provide any guarantees for non-linear SCMs. (a) Non-Gaussian var. (b) Non-IID latent var. (c) Non-linear SCM: $\xi=0.1$ (d) Non-linear SCM: $\xi=0.3$ Figure 4: Robustness of _DirectLikelihood_ under model misspecifications including non-Gaussian data, non-IID latent variables, and non-linear SCM with different amounts of non-linearity. The total number of possible true discoveries equals $10$. We consider $t\in\\{1,2,4,16,64\\}$ in the non-Gaussian and non-IID latent settings and $t\in\\{4,16,64\\}$ in the non-linear SCM settings ($t\in\\{1,2\\}$ are not analyzed as finding estimates in these settings for non-linear model mismatches is computationally costly). In some problem settings, $t=64$ has the same behavior as $t=16$ and thus cannot be seen. The accuracy of the estimated DAGs via _DirectLikelihood_ is evaluated in a similar fashion as Figure 2 (main paper).
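For concreteness, the non-Gaussian data-generating process described above can be sketched as follows. This is our own illustration rather than the authors' code: the graph size, the edge sparsity, the use of a single Laplace scale per environment, and the omission of the latent perturbations $\psi^{e,\star}$ are simplifying assumptions.

```python
# Minimal sketch of the non-Gaussian setting: a linear SCM
# X = B X + Gamma H + eps, with Laplace observation noise and Laplace
# shift interventions delta^e in the interventional environments.
import numpy as np

rng = np.random.default_rng(0)
p, h, n = 10, 1, 500  # observed variables, latent variables, samples

# A strictly upper-triangular B encodes a DAG; Gamma holds latent effects.
B = np.triu(rng.normal(size=(p, p)), k=1) * (rng.random((p, p)) < 0.3)
Gamma = rng.normal(size=(p, h))

def sample_env(intervened):
    eps = rng.laplace(0.0, 0.5, size=(n, p))        # epsilon_k ~ Laplace(0, 0.5)
    H = rng.normal(size=(n, h))                     # latent variables
    delta = (rng.laplace(0.0, 5.0 + rng.uniform(-1.0, 1.0), size=(n, p))
             if intervened else np.zeros((n, p)))   # shifts delta^e_k
    # Rows are samples: X = (H Gamma^T + eps + delta) (I - B)^{-T}
    return (H @ Gamma.T + eps + delta) @ np.linalg.inv(np.eye(p) - B.T)

X_obs = sample_env(False)                      # observational environment e = 1
X_int = [sample_env(True) for _ in range(6)]   # six interventional environments
```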
Landau-Khalatnikov-Fradkin Transformation and Even $\zeta$ Functions A. V. Kotikov1 and S. Teber2 1Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia. 2Sorbonne Université, CNRS, Laboratoire de Physique Théorique et Hautes Energies, LPTHE, F-75005 Paris, France. ###### Abstract An exact formula that relates standard $\zeta$ functions and so-called hatted $\zeta$ ($\hat{\zeta}$) functions in all orders of perturbation theory is presented. This formula is based on the Landau-Khalatnikov-Fradkin transformation. ## 1 Introduction We consider properties of multiloop massless functions of the propagator type. There is an ever-growing number of indications (see, for example, [1]) that, in the calculations of various quantities in the Euclidean region, striking regularities arise in terms proportional to $\zeta_{2n}$, that is, to even Euler $\zeta$ functions. These regularities are thought [2] to be due to the fact that $\varepsilon$-dependent combinations of $\zeta$ functions, such as $\hat{\zeta}_{3}\equiv\zeta_{3}+\frac{3\varepsilon}{2}\zeta_{4}-\frac{5\varepsilon^{3}}{2}\zeta_{6},~{}~{}\hat{\zeta}_{5}\equiv\zeta_{5}+\frac{5\varepsilon}{2}\zeta_{6},~{}~{}\hat{\zeta}_{7}\equiv\zeta_{7}\,,$ (1) rather than the $\zeta$ functions themselves are the dominant objects that eliminate $\zeta_{2n}$ in the $\varepsilon$ expansions of four-loop functions belonging to the propagator type. A generalization of the combinations in (1) to the cases of five, six, and seven loops can be found in [3]. (Footnote 1: We note that the results in [3] also contain multiple $\zeta$ functions (multi-zeta values), but their analysis is beyond the scope of the present article.) The results in (1) and their generalization in [3] make it possible to predict $\pi^{2n}$ terms in higher orders of perturbation theory. In [4] (see also [5]), the present authors extended the results in (1) to any order in $\varepsilon$ in a rather unexpected way—by means of the Landau-Khalatnikov-Fradkin (LKF) transformation [6], which relates the fermion propagators in quantum electrodynamics (QED) in two different gauges. It should be noted that the most important applications of the LKF transformation are generally associated with the predictions of some terms in high orders of perturbation theory in QED [7], its generalizations [8], and more general SU(N) gauge theories [9]. In the present article, we give a brief survey of the results reported in [4], placing emphasis on how the LKF transformation demonstrates in a natural way the existence of $\hat{\zeta}$ functions and makes it possible to extend the results in (1) to any order in $\varepsilon$. ## 2 LKF transformation Let us consider QED in $d$-dimensional ($d=4-2\varepsilon$) Euclidean space. In general, the fermion propagator in a gauge involving the parameter $\xi$ in the $p$ and $x$ representations has the form $S_{F}(p,\xi)=\frac{1}{i\hat{p}}\,P(p,\xi)\,,~{}~{}S_{F}(x,\xi)=\hat{x}\,X(x,\xi)\,,$ (2) where the factors $\hat{p}=\gamma^{\mu}p_{\mu}$ and $\hat{x}=\gamma^{\mu}x_{\mu}$ involve the Dirac $\gamma$ matrices.
Within dimensional regularization, the LKF transformation relates the fermion propagator in these two gauges, with parameters $\xi$ and $\eta$, respectively, as [4] $S_{F}(x,\xi)=S_{F}(x,\eta)\,e^{{\rm i}D(x)}\,,$ (3) where $D(x)=\frac{{\rm i}\,\Delta\,A}{\varepsilon}\,\Gamma(1-\varepsilon)\,(\pi\mu^{2}x^{2})^{\varepsilon},~{}~{}\Delta=\xi-\eta,~{}~{}A=\frac{\alpha_{\rm em}}{4\pi}=\frac{e^{2}}{(4\pi)^{2}}\,.$ (4) This means that $D(x)$ makes a contribution proportional to $\Delta A$ and to the pole $\varepsilon^{-1}$. Suppose that, for a gauge-fixing parameter $\eta$, the fermion propagator $S_{F}(p,\eta)$ with an external momentum $p$ has the form (2), where $P(p,\eta)=\sum_{m=0}^{\infty}a_{m}(\eta)\,A^{m}\,{\left(\frac{\tilde{\mu}^{2}}{p^{2}}\right)}^{m\varepsilon}\,,~{}~{}\tilde{\mu}^{2}=4\pi\mu^{2}\,.$ (5) Here, $a_{m}(\eta)$ are the coefficients in the loop expansion of the propagator and $\tilde{\mu}$ is the renormalization scale lying between the scales of the ${\rm MS}$ (minimal-subtraction) and $\overline{{\rm MS}}$ (modified-minimal-subtraction) schemes. The LKF transformation determines the fermion propagator for another gauge parameter $\xi$ as $P(p,\xi)=\sum_{m=0}^{\infty}a_{m}(\eta)\,A^{m}\,{\left(\frac{\tilde{\mu}^{2}}{p^{2}}\right)}^{m\varepsilon}\sum_{l=0}^{\infty}\,\frac{1-(m+1)\varepsilon}{1-(m+l+1)\varepsilon}\,\Phi_{\rm MV}(m,l,\varepsilon)\,\frac{(\Delta\,A)^{l}}{(-\varepsilon)^{l}l!}\,{\left(\frac{\mu_{\rm MV}^{2}}{p^{2}}\right)}^{l\varepsilon},$ (6) where $\Phi_{{\rm MV}}(m,l,\varepsilon)=\frac{\Gamma(1-(m+1)\varepsilon)\Gamma(1+(m+l)\varepsilon)\Gamma^{2l}(1-\varepsilon)}{\Gamma(1+m\varepsilon)\Gamma(1-(m+l+1)\varepsilon)}\,.$ (7) Here, the symbol ${\rm MV}$ stands for the so-called minimal Vladimirov scale introduced in [4]. We note that, in [4], the use of the popular G scale [10] led to the same final results, given in Eqs. (16) and (17) below. In order to derive expression (6), we employed the fermion propagator $S_{F}(p,\eta)$ with $P(p,\eta)$ given by (5), applied the Fourier transformation to obtain $S_{F}(x,\eta)$, and made the LKF transformation (3). As a final step, we performed the inverse Fourier transformation and obtained the fermion propagator $S_{F}(p,\xi)$ with $P(p,\xi)$ given in (6). Let us now study the factor $\Phi_{{\rm MV}}(m,l,\varepsilon)$. For this, we make use of the expansion of the $\Gamma$ function in the form $\Gamma(1+\beta\varepsilon)=\exp\Bigl[-\gamma\beta\varepsilon+\sum_{s=2}^{\infty}\,(-1)^{s}\,\eta_{s}\beta^{s}\varepsilon^{s}\Bigr],~{}~{}\eta_{s}=\frac{\zeta_{s}}{s}\,,$ (8) where $\gamma$ is the Euler constant. Substituting this expansion into expression (7), we recast the factor $\Phi_{{\rm MV}}(m,l,\varepsilon)$ into the form $\Phi_{{\rm MV}}(m,l,\varepsilon)=\exp\Bigl[\sum_{s=2}^{\infty}\,\eta_{s}\,p_{s}(m,l)\,\varepsilon^{s}\Bigr]\,,$ (9) where $p_{s}(m,l)=(m+1)^{s}-(m+l+1)^{s}+2l+(-1)^{s}\Bigl\\{(m+l)^{s}-m^{s}\Bigr\\},~{}~{}p_{1}(m,l)=0,~{}~{}p_{2}(m,l)=0\,.$ (10) One can readily see from Eq. (9) that the factor $\Phi_{{\rm MV}}(m,l,\varepsilon)$ involves values of the $\zeta_{s}$ function of a given weight $s$ (or transcendental level) in front of $\varepsilon^{s}$. This property strongly constrains the coefficients, thereby simplifying the ensuing analysis (the authors of the articles quoted in [11] also used this property). ## 3 $\hat{\zeta}_{2n-1}$ We now focus on the polynomial $p_{s}(m,l)$ in Eq. (10). It is convenient to partition it into components featuring even and odd values of $s$.
The following recursion relations hold: $p_{2k}=p_{2k-1}+Lp_{2k-2}+p_{3},~{}~{}p_{2k-1}=p_{2k-2}+Lp_{2k-3}+p_{3},~{}~{}L=l(l+1)\,.$ (11) Expressing the even components, $p_{2k}$, in terms of the odd ones as $p_{2k}=\sum_{s=2}^{k}p_{2s-1}\,C_{2k,2s-1}\,=\sum_{m=1}^{k-1}p_{2k-2m+1}\,C_{2k,2k-2m+1}\,$ (12), we can determine the exact structure of $C_{2k,2k-2m+1}$ in the form $C_{2k,2k-2m+1}=b_{2m-1}\,\frac{(2k)!}{(2m-1)!\,(2k-2m+1)!},~{}~{}b_{2m-1}=\frac{(2^{2m}-1)}{m}\,B_{2m}\,,$ (13) where $B_{m}$ are the well-known Bernoulli numbers. It is now convenient to represent the argument of the exponential on the right-hand side of Eq. (9) in the form $\sum_{s=3}^{\infty}\,\eta_{s}\,p_{s}\,\varepsilon^{s}=\sum_{k=2}^{\infty}\,\eta_{2k}\,p_{2k}\,\varepsilon^{2k}+\sum_{k=2}^{\infty}\,\eta_{2k-1}\,p_{2k-1}\,\varepsilon^{2k-1}\,.$ (14) With the aid of Eq. (12), the first term on the right-hand side of (14) can be represented in the form $\displaystyle\sum_{k=2}^{\infty}\,\eta_{2k}\,p_{2k}\,\varepsilon^{2k}=\sum_{k=2}^{\infty}\,\eta_{2k}\,\varepsilon^{2k}\,\sum_{s=2}^{k}p_{2s-1}\,C_{2k,2s-1}=\sum_{s=2}^{\infty}p_{2s-1}\,\sum_{k=s}^{\infty}\,\eta_{2k}\,C_{2k,2s-1}\,\varepsilon^{2k}\,.$ Relation (14) can then be recast into the form $\sum_{s=2}^{\infty}\,\hat{\eta}_{2s-1}\,p_{2s-1}\,\varepsilon^{2s-1}=\sum_{s=2}^{\infty}\,[\hat{\zeta}_{2s-1}/(2s-1)]\,p_{2s-1}\,\varepsilon^{2s-1}\,,$ (15) where $\hat{\zeta}_{2s-1}=\zeta_{2s-1}+\sum_{k=s}^{\infty}\,\zeta_{2k}\,\hat{C}_{2k,2s-1}\,\varepsilon^{2(k-s)+1}$ (16) with $\hat{C}_{2k,2s-1}=\frac{2s-1}{2k}\,C_{2k,2s-1}=b_{2k-2s+1}\,\frac{(2k-1)!}{(2s-2)!\,(2k-2s+1)!}\,.$ (17) Relations (16), (17), and (13) lead to an expression for $\hat{\zeta}_{2s-1}$ in terms of standard $\zeta$ functions that is valid in all orders of the expansion in $\varepsilon$ (a minimal symbolic check of these relations is sketched at the end of this article). ## 4 Conclusions The recursion relations in (11) between the even and odd components of the polynomial associated with the factor $\Phi_{{\rm MV}}(m,l,\varepsilon)$ (7) have been deduced from the result in (6) obtained by means of the LKF transformation for the fermion propagator. These recursion relations make it possible to express all results for the factor $\Phi_{{\rm MV}}(m,l,\varepsilon)$ in terms of $\hat{\zeta}_{2s-1}$. Expressions (16) and (17) for them are valid in any order of perturbation theory. A. V. Kotikov is grateful to the Organizing Committee of the Session-Conference of Nuclear Physics Section at the Department of Physical Sciences, Russian Academy of Sciences, for the invitation. ## References * [1] P. A. Baikov and K. G. Chetyrkin, PoS (LL2018), 008 (2018). * [2] D. J. Broadhurst, hep-th/9909185; P. A. Baikov and K. G. Chetyrkin, Nucl. Phys. B 837, 186 (2010). * [3] A. Georgoudis, V. Goncalves, E. Panzer, and R. Pereira, arXiv:1802.00803 [hep-th]; P. A. Baikov and K. G. Chetyrkin, JHEP 1806, 141 (2018); 1910, 190 (2019). * [4] A. V. Kotikov and S. Teber, Phys. Rev. D 100, 105017 (2019). * [5] A. V. Kotikov and S. Teber, arXiv:1912.10957 [hep-th]. * [6] L. D. Landau and I. M. Khalatnikov, Zh. Eksp. Teor. Fiz. 29, 89 (1955) [Sov. Phys. JETP 2, 69 (1956)]; E. S. Fradkin, Zh. Eksp. Teor. Fiz. 29, 258 (1955) [Sov. Phys. JETP 2, 361 (1956)]. * [7] A. Bashir and A. Raya, Phys. Rev. D 66, 105005 (2002); S. Jia and M. R. Pennington, Phys. Rev. D 95, 076007 (2017). * [8] A. Ahmad, J. J. Cobos-Martínez, Y. Concha-Sánchez, and A. Raya, Phys. Rev. D 93, 094035 (2016); A. James, A. V. Kotikov, and S. Teber, Phys. Rev. D 101, 045011 (2020). * [9] T. De Meerleer, D. Dudal, S. P. Sorella, P. Dall’Olio, and A. Bashir, Phys. Rev. D 97, 074017 (2018); Phys. Rev.
D 101, 085005 (2020). * [10] K. G. Chetyrkin, A. L. Kataev, and F. V. Tkachov, Nucl. Phys. B 174, 345 (1980). * [11] A. V. Kotikov and L. N. Lipatov, Nucl. Phys. B 582, 19 (2000); Nucl. Phys. B 661, 19 (2003); Nucl. Phys. B 769, 217 (2007); A. V. Kotikov, L. N. Lipatov, A. I. Onishchenko, and V. N. Velizhanin, Phys. Lett. B 595, 521 (2004); L. Bianchi, V. Forini, and A. V. Kotikov, Phys. Lett. B 725, 394 (2013).
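As an aside, relations (13), (16), and (17) are simple enough to verify symbolically. The following sketch (ours, using sympy; not part of the original article) reproduces the coefficients of the hatted $\zeta$ functions quoted in Eq. (1).

```python
# Minimal sympy check of Eqs. (13), (16), and (17): the coefficients
# \hat{C}_{2k,2s-1} reproduce the hatted zeta functions of Eq. (1).
from sympy import Rational, bernoulli, factorial

def b(j):
    # b_{2m-1} = (2^{2m} - 1) B_{2m} / m, with j = 2m - 1; see Eq. (13)
    m = (j + 1) // 2
    return Rational(2**(2 * m) - 1, m) * bernoulli(2 * m)

def C_hat(k, s):
    # \hat{C}_{2k,2s-1} = b_{2k-2s+1} (2k-1)! / [(2s-2)! (2k-2s+1)!]; Eq. (17)
    return (b(2 * k - 2 * s + 1) * factorial(2 * k - 1)
            / (factorial(2 * s - 2) * factorial(2 * k - 2 * s + 1)))

# zeta_hat_3 = zeta_3 + (3/2) eps zeta_4 - (5/2) eps^3 zeta_6 + O(eps^5)
assert C_hat(2, 2) == Rational(3, 2)
assert C_hat(3, 2) == Rational(-5, 2)
# zeta_hat_5 = zeta_5 + (5/2) eps zeta_6 + O(eps^3)
assert C_hat(3, 3) == Rational(5, 2)
print("coefficients agree with Eq. (1)")
```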
# Phase transition of four-dimensional lattice $\phi^{4}$ theory with tensor renormalization group Shinichiro Akiyama<EMAIL_ADDRESS>Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan Yoshinobu Kuramashi<EMAIL_ADDRESS>Center for Computational Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577, Japan Yusuke Yoshimura<EMAIL_ADDRESS>Center for Computational Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577, Japan ###### Abstract We investigate the phase transition of the four-dimensional single-component $\phi^{4}$ theory on the lattice using the tensor renormalization group method. We have examined the hopping parameter dependence of the bond energy and the vacuum condensation of the scalar field $\braket{\phi}$ at a finite quartic coupling $\lambda$ on large volumes up to $V=1024^{4}$ in order to detect the spontaneous breaking of the $\mathbb{Z}_{2}$ symmetry. Our results show that the system undergoes a weak first-order phase transition at a certain critical value of the hopping parameter. We also make a comparative study of the three-dimensional $\phi^{4}$ theory and find that the properties of the phase transition are consistent with the universality class of the three-dimensional Ising model. Preprint: UTHEP-755, UTCCS-P-136 ## I Introduction The issue of the triviality of the four-dimensional ($4d$) $\phi^{4}$ theory has been a theoretical concern among particle physicists, because it is related to the scalar sector in the standard model Wilson and Kogut (1974); Aizenman (1981, 1982); Frohlich (1982); Dashen and Neuberger (1983); Lindner (1986); Hasenfratz _et al._ (1987); Lüscher and Weisz (1987, 1988, 1989); Huang (1989); Frick _et al._ (1990); Kribs _et al._ (2007). The single-component $\phi^{4}$ theory becomes equivalent to the Ising model in the limit of infinite quartic coupling, $\lambda=\infty$, so that numerical studies of the $4d$ Ising model have been performed as a nonperturbative test of the triviality, assuming universality Blöte and Swendsen (1980); Sanchez-Velasco (1987); Kenna and Lang (1993); Bittner _et al._ (2002); Kenna (2004); Lundow and Markström (2009, 2011). (Footnote 1: In the standard model, we need to consider the $\phi^{4}$ interaction as a part of a combined Higgs-Yukawa sector, whose nonperturbative aspects were investigated with lattice simulations Lee _et al._ (1990a, b). There are also some recent studies discussing the triviality of $O(N)$ $\phi^{4}$ theory with the higher-loop beta function Shrock (2014, 2016, 2017).) So far, no Monte Carlo calculation has confirmed the logarithmic correction to the mean-field exponents in the scaling behavior of the specific heat, which is expected from the perturbative renormalization group analysis Wegner and Riedel (1973). Moreover, a detailed Monte Carlo study has found a serious finite-volume effect due to nontrivial boundary effects in the $4d$ Ising model Lundow and Markström (2011). Recently, the authors have investigated the phase transition of the $4d$ Ising model with the higher-order tensor renormalization group (HOTRG) algorithm Akiyama _et al._ (2019). The tensor renormalization group (TRG) method (Footnote 2: In this paper, the TRG method or the TRG approach refers to not only the original numerical algorithm proposed by Levin and Nave Levin and Nave (2007) but also its extensions Xie _et al._ (2012); Adachi _et al._ (2020); Kadoh and Nakayama (2019); Shimizu and Kuramashi (2014a); Sakai _et al._ (2017); Akiyama _et al._ (2020a).),
which contains the HOTRG algorithm, has several superior features over the Monte Carlo method. (i) Since the TRG provides a deterministic numerical method, it does not have the sign problem encountered in stochastic methods, including the standard Monte Carlo simulation, as confirmed in various studies of quantum field theories Shimizu and Kuramashi (2014a, b, 2018); Takeda and Yoshimura (2015); Kadoh _et al._ (2018, 2020); Kuramashi and Yoshimura (2020); Akiyama _et al._ (2020b). (ii) Its computational cost depends on the system size only logarithmically. (iii) The computational cost to simulate fermions is almost equivalent to that for bosons because the TRG can directly manipulate the Grassmann variables Shimizu and Kuramashi (2014a); Sakai _et al._ (2017); Yoshimura _et al._ (2018); Akiyama _et al._ (2020a). (iv) We can obtain the partition function or the path integral itself. Thanks to feature (ii) above, we were able to enlarge the lattice volume up to $V=1024^{4}$, which is essentially identified with the thermodynamic limit, and found finite jumps of the internal energy and the magnetization as functions of temperature in the $4d$ Ising model Akiyama _et al._ (2019). These are characteristic features of a first-order phase transition. Having shown that the $4d$ Ising model undergoes a weak first-order phase transition, our interest turns to the order of the phase transition in the $4d$ single-component $\phi^{4}$ theory, which has the same global $\mathbb{Z}_{2}$ symmetry as the Ising model. (Footnote 3: The scenario of a weak first-order phase transition in the Ising model or the $\phi^{4}$ theory has been discussed phenomenologically in some recent studies Cea _et al._ (2019); Consoli and Cosmai (2020a, b, c).) In this paper, we investigate the phase transition of the $4d$ single-component $\phi^{4}$ theory with the quartic coupling $\lambda$ and the hopping parameter $\kappa$, employing the anisotropic TRG (ATRG) algorithm Adachi _et al._ (2020), which was proposed to reduce the computational cost of the TRG method. The ATRG has been successfully applied to analyze the $4d$ complex $\phi^{4}$ theory at finite density with parallel computation Akiyama _et al._ (2020b). Our main purpose is to determine the order of the phase transition by examining the $\kappa$ dependence of the bond energy and the vacuum condensation of the scalar field $\braket{\phi}$ around the critical value $\kappa_{\rm c}$ for fixed $\lambda$; the latter quantity is an order parameter of the phase transition caused by spontaneous $\mathbb{Z}_{2}$ symmetry breaking. We study the model with a single choice of $\lambda=40$, which is a finite-$\lambda$ generalization of the Ising model study performed in Ref. Akiyama _et al._ (2019), corresponding to $\lambda=\infty$. The choice of $\lambda=40$ may also be helpful for avoiding the weak-coupling region affected by the Gaussian fixed point at $\lambda=0$. For comparison, we also make the same analysis of the $3d$ single-component $\phi^{4}$ theory at $\lambda=40$, which is believed to belong to the universality class of the 3$d$ Ising model. We discuss the differences between the results of the 3$d$ and 4$d$ cases. This paper is organized as follows. In Sec. II we explain the formulation of the lattice $\phi^{4}$ theory and the ATRG algorithm. We present numerical results for the 4$d$ and 3$d$ cases in Sec. III and discuss the properties of the phase transition. Section IV is devoted to summary and outlook.
## II Formulation and numerical algorithm We use the following popular action for the $d$-dimensional single-component $\phi^{4}$ theory on a lattice $\Gamma$: $\displaystyle S[\phi]=\sum_{n\in\Gamma}\left[-\kappa\sum_{\nu=1}^{d}\left(\phi_{n}\phi_{n+\hat{\nu}}+\phi_{n}\phi_{n-\hat{\nu}}\right)+\phi_{n}^{2}+\lambda\left(\phi_{n}^{2}-1\right)^{2}\right],$ (1) where $\hat{\nu}$ is the unit vector of the $\nu$-direction. This formulation, which is explicit about the relation to the Ising model, is equivalent to the more conventional expression $\displaystyle S[\varphi]=\sum_{n\in\Gamma}\left[\frac{1}{2}\sum_{\nu=1}^{d}\left(\varphi_{n+\hat{\nu}}-\varphi_{n}\right)^{2}+\frac{1}{2}m_{0}^{2}\varphi_{n}^{2}+\frac{g_{0}}{4!}\varphi_{n}^{4}\right]$ (2) with $\displaystyle\varphi_{n}=\sqrt{2\kappa}\phi_{n},$ (3) $\displaystyle m_{0}^{2}=\frac{1-2\lambda}{\kappa}-2d,$ (4) $\displaystyle g_{0}=\frac{6\lambda}{\kappa^{2}}.$ (5) The partition function is defined by $\displaystyle Z=\int\mathcal{D}\phi~{}\mathrm{e}^{-S[\phi]}$ (6) using the action of Eq. (1) with the path integral measure $\displaystyle\int\mathcal{D}\phi=\prod_{n\in\Gamma}\int_{-\infty}^{\infty}\mathrm{d}\phi_{n}.$ (7) We express the partition function as a tensor network in a similar way to Ref. Akiyama _et al._ (2020b). The continuous variables $\phi_{n}$ are discretized by the $K$-point Gauss-Hermite quadrature rule as $\displaystyle\int_{-\infty}^{\infty}\mathrm{d}\phi_{n}~{}\mathrm{e}^{-\phi_{n}^{2}}f(\phi_{n})\simeq\sum_{\alpha_{n}=1}^{K}\omega_{\alpha_{n}}f(\phi_{\alpha_{n}}),$ (8) where $\phi_{\alpha}$ and $\omega_{\alpha}$ are the $\alpha$-th node and its weight. The partition function is thus discretized as $\displaystyle Z(K)=\sum_{\\{\alpha\\}}\prod_{n,\nu}M_{\alpha_{n}\alpha_{n+\hat{\nu}}},$ (9) where $\displaystyle M_{\alpha_{n}\alpha_{n+\hat{\nu}}}=\sqrt[2d]{\omega_{\alpha_{n}}\omega_{\alpha_{n+\hat{\nu}}}}\exp\left[2\kappa\phi_{\alpha_{n}}\phi_{\alpha_{n+\hat{\nu}}}-\frac{\lambda}{2d}\left(\phi_{\alpha_{n}}^{2}-1\right)^{2}-\frac{\lambda}{2d}\left(\phi_{\alpha_{n+\hat{\nu}}}^{2}-1\right)^{2}\right].$ (10) Each matrix $M$ is approximated by the singular value decomposition (SVD) with a bond dimension $D$ as $\displaystyle M_{\alpha\beta}\simeq\sum_{k=1}^{D}U_{\alpha k}\sigma_{k}V_{\beta k},$ (11) where $\sigma_{k}$ is the $k$-th singular value sorted in descending order, and $U,V$ are the orthogonal matrices composed of the singular vectors. One finally obtains a tensor network representation for $Z(K)$ as $\displaystyle Z(K)=\sum_{\\{i_{1},\cdots,i_{d}\\}}\prod_{n\in\Gamma}T_{n;i_{1}\cdots i_{d}i^{\prime}_{1}\cdots i^{\prime}_{d}},$ (12) where $\displaystyle T_{n;i_{1}\cdots i_{d}i^{\prime}_{1}\cdots i^{\prime}_{d}}=\sum_{\alpha=1}^{K}\prod_{\nu=1}^{d}\sqrt{\sigma_{i_{\nu}}\sigma_{i^{\prime}_{\nu}}}U_{\alpha i_{\nu}}V_{\alpha i^{\prime}_{\nu}},$ (13) with the shorthand notations such as $i_{\nu}=i_{\nu,n}$ and $i^{\prime}_{\nu}=i_{\nu,n-\hat{\nu}}$. In this study, we employ the parallelized $d$-dimensional ATRG algorithm developed in Refs. Akiyama _et al._ (2020c, b). We keep the bond dimension $D$ fixed throughout the ATRG procedure. For the swapping bond parts explained in Refs. Adachi _et al._ (2020); Oba (2020), the randomized SVD is applied with the choice of $p=2D$ and $q=2D$, where $p$ is the oversampling parameter and $q$ is the number of QR decompositions. ## III Numerical results ### III.1 4$d$ case The partition function of Eq.
(12) is evaluated using the ATRG algorithm on lattices with volume $V=L^{4}$ ($L=2^{m}$, $m\in\mathbb{N}$), employing periodic boundary conditions in all space-time directions. As explained in the previous section, there are two important algorithmic parameters. One is the number of nodes $K$ in the Gauss-Hermite quadrature method to discretize the scalar field. The other is the bond dimension $D$. We check the convergence behavior of the free energy as a function of $K$ and $D$ by defining the following quantities: $\displaystyle\delta_{K}=\left|\frac{\ln Z(K,D=50)-\ln Z(K=2000,D=50)}{\ln Z(K=2000,D=50)}\right|$ (14) and $\displaystyle\delta_{D}=\left|\frac{\ln Z(K=2000,D)-\ln Z(K=2000,D=50)}{\ln Z(K=2000,D=50)}\right|.$ (15) Figure 1 shows the $K$ dependence of $\delta_{K}$ with $D=50$ on $V=1024^{4}$ at $\kappa=0.0763059$ and $0.0765000$, which lie in the symmetric and broken symmetry phases, respectively. Note that $\kappa=0.0763059$ is close to the transition point $\kappa_{\rm c}$, as we will see below. We observe that $\delta_{K}$ decreases monotonically as a function of $K$ and reaches the order of $10^{-7}$ around $K=1500$. This shows that the Gauss-Hermite quadrature method is not affected by whether the system is in the symmetric or broken symmetry phase. We also plot the $D$ dependence of $\delta_{D}$ in Fig. 2, which shows that the fluctuation of the free energy is suppressed to $\delta_{D}\approx 10^{-5}$ up to $D=50$. Since the double-well potential in the $\phi^{4}$ theory becomes sharper for larger $\lambda$, we take a large value of $K$ to achieve good convergence for $\delta_{K}$. In the following, numerical results at $\lambda=40$ are presented for $K=2000$ and $D=50$, which are large enough for this study. Figure 1: Convergence behavior of free energy with $\delta_{K}$ of Eq. (14) at $\kappa=0.0763059$ and $0.0765000$ as a function of $K$ on $V=1024^{4}$. Figure 2: Same as Fig. 1 for $\delta_{D}$ of Eq. (15). The phase transition point $\kappa_{\rm c}$ is determined by following the method employed in the Ising case Akiyama _et al._ (2019). Suppose we have obtained a coarse-grained tensor $T^{(m)}_{i_{1}i_{2}i_{3}i_{4}i^{\prime}_{1}i^{\prime}_{2}i^{\prime}_{3}i^{\prime}_{4}}$ after $m$ coarse-graining steps. Defining a $D\times D$ matrix as $\displaystyle A^{(m)}_{i_{4}i^{\prime}_{4}}=\sum_{i_{1},i_{2},i_{3}}T^{(m)}_{i_{1}i_{2}i_{3}i_{4}i_{1}i_{2}i_{3}i^{\prime}_{4}},$ (16) we calculate $\displaystyle X^{(m)}=\frac{\left({\rm Tr}A^{(m)}\right)^{2}}{{\rm Tr}\left(A^{(m)}\right)^{2}}.$ (17) This quantity, introduced in Ref. Gu and Wen (2009), possibly counts the multiplicity of the largest singular value of $A^{(m)}$. Therefore, it is expected that $X^{(m)}=1$ holds for the symmetric phase and $X^{(m)}=2$ for the broken symmetry phase. We may distinguish the two phases by observing the plateau of $X^{(m)}$ after sufficiently many coarse-graining iterations (a minimal code sketch of this diagnostic is given below). Figure 3: History of $X^{(m)}$ as a function of the coarse-graining step $m$ at $\kappa=0.089225$ (circle) and 0.089300 (diamond). Figure 4: Comparison of $\kappa_{\rm c}$ at $\lambda=5$ obtained by various methods. All numerical values except for the ATRG result are taken from Table III in Ref. Akerlund _et al._ (2013). For details on the dynamical or effective mean field theory, see Ref. Akerlund _et al._ (2013). For Kikuchi’s method, see Ref. Kikuchi (1951).
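The diagnostic of Eqs. (16) and (17) is easy to prototype. The following minimal numpy sketch is ours (not the authors' code) and uses a toy tensor in place of an actual ATRG output; it also checks the two limiting values of $X^{(m)}$.

```python
# Minimal numpy sketch of Eqs. (16)-(17). T stands in for a coarse-grained
# tensor with indices (i1, i2, i3, i4, i1', i2', i3', i4').
import numpy as np

def X_of(T):
    A = np.einsum('abcdabce->de', T)          # A_{i4 i4'}, Eq. (16)
    return np.trace(A)**2 / np.trace(A @ A)   # X, Eq. (17)

rng = np.random.default_rng(0)
print(X_of(rng.random((2,) * 8)))             # toy tensor with D = 2

# Limiting values: a single dominant singular value gives X = 1
# (symmetric phase); a doubly degenerate one gives X = 2 (broken phase).
A1 = np.diag([1.0, 0.0, 0.0])
A2 = np.diag([1.0, 1.0, 0.0])
assert np.isclose(np.trace(A1)**2 / np.trace(A1 @ A1), 1.0)
assert np.isclose(np.trace(A2)**2 / np.trace(A2 @ A2), 2.0)
```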
In order to check the applicability of the above method to determine the value of $\kappa_{\rm c}$, we calculate $\kappa_{\rm c}$ at $\lambda=5$ and compare it with the previous results obtained by various methods including the Monte Carlo simulation Akerlund _et al._ (2013). Since we have found that the convergence of the free energy with respect to the bond dimension at $\lambda=5$ becomes slightly slower than that at $\lambda=40$, we have taken $D=55$ (and $K=2000$) to evaluate $\kappa_{\rm c}$ at $\lambda=5$. Up to $D=55$, the relative error for the free energy is suppressed to $O(10^{-5})$. Figure 3 shows the $m$ dependence of the value of $X^{(m)}$ at $\kappa=0.089225$ and $0.089300$, whose difference $\Delta\kappa=7.5\times 10^{-5}$ is the finest resolution across the transition point. We find $X^{(m)}=1$ for $m\gtrsim 30$ at $\kappa=0.089225$ and $X^{(m)}=2$ for $m\gtrsim 25$ at $\kappa=0.089300$. Based on this observation, we determine the critical hopping parameter $\kappa_{\rm c}=0.0892625(375)$ on the $1024^{4}$ lattice, whose error bar is provided by the resolution of $\kappa$. In Fig. 4 we find that our result is comparable to the Monte Carlo result $\kappa_{\rm c}=0.08893(20)$ in Ref. Akerlund _et al._ (2013). A slight deviation from the Monte Carlo result may be attributed to finite-size effects: our result is obtained on the $1024^{4}$ lattice, while the previous one is on the $32^{4}$ lattice. Having confirmed the validity of the method using $X^{(m)}$, we determine $\kappa_{\rm c}$ at $\lambda=40$ with $D=50$ and $K=2000$. The result is $\kappa_{\rm c}=0.076305975(25)$ on the $1024^{4}$ lattice, whose error bar is provided by the resolution of $\kappa$. In Fig. 5 we check the $1/\lambda$ dependence of $\kappa_{\rm c}$ toward the Ising limit, where the result at $\lambda=100$ is obtained in the same way as the $\lambda=40$ case with $D=50$ and $K=2000$. We observe that the value of $\kappa_{\rm c}$ seems to approach monotonically that of the Ising case. The error bars are provided by the resolution of $\kappa$ but they are all within symbols. Figure 5: $\kappa_{\rm c}$ as a function of $1/\lambda$. $1/\lambda=0$ corresponds to the Ising model. The square symbol at $1/\lambda=0$ denotes the result obtained by the HOTRG Akiyama _et al._ (2019). All error bars are within symbols. We now turn to the investigation of the phase transition with the bond energy defined by $\displaystyle E_{\rm{b}}=-\frac{1}{2}\frac{\partial}{\partial\kappa}\frac{\ln Z}{V}$ (18) and the vacuum condensation of the scalar field $\braket{\phi}$. Both quantities are evaluated with the impure tensor method. Figure 6 plots the bond energy as a function of $\kappa$ on the $1024^{4}$ lattice. The resolution of $\kappa$ becomes finer toward the transition point, the finest being $\Delta\kappa=5.0\times 10^{-8}$ around the transition point. The phase transition point is consistent with $\kappa_{\rm c}$ (gray band) determined by $X^{(m)}$. The inset of Fig. 6 shows the emergence of a finite gap, with mutual crossings of the curves for different volumes, $m\geq 23$, around $\kappa_{\rm c}$. These are characteristic features of a first-order phase transition, as discussed in Ref. Fukugita _et al._ (1990). For the gap, we obtain $\displaystyle\Delta E_{\rm b}=0.001318(3),$ (19) by linear extrapolation toward the transition point from both the symmetric and broken symmetry phases.
In this extrapolation, we have used data points in $[0.07630560,0.07630595]$ for the symmetric phase and $[0.0763060,0.0763064]$ for the broken symmetry one. Note that we do not extrapolate $\Delta E_{\rm b}$ to the $D\rightarrow\infty$ limit in this paper because a systematic study of the $D$ dependence demands an enormous computational cost and no theoretical formula for the extrapolation is known so far. The value of $\Delta E_{\rm b}$ is smaller than the latent heat $\Delta E=0.0034(5)$ found in the Ising case with the HOTRG Akiyama _et al._ (2019). Figure 6: Bond energy as a function of $\kappa$ on $V=1024^{4}$. The inset shows it for various lattice sizes, and the gray band denotes $\kappa_{\rm c}$ estimated by $X^{(m)}$ of Eq. (17). Another quantity to detect the phase transition is the vacuum condensation of the scalar field $\braket{\phi}$, which is the order parameter of spontaneous breaking of the $\mathbb{Z}_{2}$ symmetry. We calculate $\braket{\phi}$ by introducing the external fields $h=1.0\times 10^{-10}$ and $2.0\times 10^{-10}$ at each $\kappa$. After taking the infinite volume limit, we extrapolate the value of $\braket{\phi}$ to $h=0$. Figure 7 shows the $\kappa$ dependence of $\braket{\phi}_{h=0}$. The resolution of $\kappa$ is the same as that in Fig. 6. We find that the value of $\kappa_{\rm c}$, where the vacuum condensation sets in, is consistent with both estimates by $X^{(m)}$ and the bond energy. A finite jump in $\braket{\phi}_{h=0}$ at $\kappa_{\rm c}$ is another indication of the first-order phase transition. We find $\displaystyle\Delta\braket{\phi}_{h=0}=0.0105(9),$ (20) as the value of the finite jump, where we have used data points in $[0.07630560,0.07630595]$ for the symmetric phase and $[0.0763060,0.0763064]$ for the broken symmetry one, as in the case with the bond energy, to extrapolate linearly the values of $\braket{\phi}_{h=0}$ toward the transition point. Note that this quantity is estimated as $0.037(2)$ in the Ising case with the HOTRG Akiyama _et al._ (2019). Figure 7: Vacuum condensation $\braket{\phi}_{h=0}$ as a function of $\kappa$ on $V=1024^{4}$. The gray band in the inset graph shows $\kappa_{\rm c}$ estimated by $X^{(m)}$ of Eq. (17). ### III.2 3$d$ case The 2$d$ single-component lattice $\phi^{4}$ theory is believed to belong to the same universality class as the 2$d$ Ising model. The previous TRG analysis, which was carried out by two of the authors and collaborators, supports this ansatz Kadoh _et al._ (2019). Although the 3$d$ case should undergo a second-order phase transition belonging to the universality class of the 3$d$ Ising model, a direct check with the TRG method has not been performed so far. Here it is instructive to repeat the same TRG calculation for the 3$d$ case and compare the results between the 3$d$ and 4$d$ cases at $\lambda=40$. We first show the convergence behavior of the free energy as a function of $K$ and $D$ by defining the relative error in the following way: $\displaystyle\delta_{K}=\left|\frac{\ln Z(K,D=90)-\ln Z(K=2000,D=90)}{\ln Z(K=2000,D=90)}\right|$ (21) and $\displaystyle\delta_{D}=\left|\frac{\ln Z(K=2000,D)-\ln Z(K=2000,D=90)}{\ln Z(K=2000,D=90)}\right|.$ (22) The $K$ dependence of $\delta_{K}$ with $D=90$ on $V=4096^{3}$ at $\kappa=0.112859$ and $0.112920$ is shown in Fig. 8. $\kappa=0.112859$ is near the transition point in the symmetric phase, while $\kappa=0.112920$ is in the broken symmetry phase.
We observe a monotonic decrease of $\delta_{K}$ as a function of $K$, which is quite similar to the 4$d$ case. Figure 9 shows the $D$ dependence of $\delta_{D}$, where $\delta_{D}$ reaches the order of $10^{-5}$ up to $D=90$. Notice that the achieved order of $\delta_{D}$ is similar to that of the $4d$ case. In the following, we present the results at $\lambda=40$ for $K=2000$ and $D=90$. Figure 8: Convergence behavior of 3$d$ free energy with $\delta_{K}$ of Eq. (21) at $\kappa=0.112859$ and $0.112920$ as a function of $K$ on $V=4096^{3}$. Figure 9: Same as Fig. 8 for $\delta_{D}$ of Eq. (22). Now let us discuss the results of the bond energy and the vacuum condensation of the scalar field $\braket{\phi}$, which are calculated with the impure tensor method as in the 4$d$ case. We plot the bond energy as a function of $\kappa$ on the $4096^{3}$ lattice in Fig. 10, where the gray band with $0.11285890\leq\kappa\leq 0.11285905$ in the inset indicates the location of the phase transition point determined by $X^{(m)}$. Note that in the $3d$ case, $X^{(m)}$ is also given in the same way as Eq. (17), defining the three-dimensional counterpart of Eq. (16). The value of the bond energy evaluated at $\kappa=0.11285900$ is located within this gray band. This is because $X^{(m)}$ at $\kappa=0.11285900$ does not show a clear plateau at either $X^{(m)}=1$ or $2$. We observe that the bond energy on all the volumes varies smoothly as a function of $\kappa$ without developing any gap. In addition, we find no mutual crossing of curves for different volumes around the phase transition point: the curve of the bond energy monotonically approaches that on the largest volume of $4096^{3}$. These behaviors, which are in clear contrast to the 4$d$ case, are characteristics of a second-order phase transition, as discussed in Ref. Fukugita _et al._ (1990). Figure 10: 3$d$ bond energy as a function of $\kappa$ on $V=4096^{3}$ ($m=36$). The inset shows it for various lattice sizes, and the gray band restricts the location of $\kappa_{\rm c}$ by $X^{(m)}$. In Fig. 11, we show the $\kappa$ dependence of $\braket{\phi}_{h=0}$, which is calculated in the same way as in the 4$d$ case. The resolution of $\kappa$ is the same as that in Fig. 10. In order to determine the transition point $\kappa_{\rm c}$ and extract the critical exponent $\beta$, we make a fit of $\braket{\phi}_{h=0}$ on the $4096^{3}$ lattice, which is essentially in the thermodynamic limit, employing the function $A(\kappa-\kappa_{\rm c})^{\beta}$ over the range $\kappa\in[0.11285900,0.11300000]$ in the broken symmetry phase (a minimal sketch of such a fit is given below). The fit results are $A=3.7(9)$, $\kappa_{\rm c}=0.112859(6)$ and $\beta=0.32(2)$. The value of $\beta$ is consistent with recent estimates of $\beta\approx 0.3295$ and 0.3264 for the 3$d$ Ising model with the HOTRG algorithm Xie _et al._ (2012) and the Monte Carlo method Hasenbusch (2010), respectively. Numerical results for the bond energy and $\braket{\phi}_{h=0}$ show consistency with a second-order phase transition in the universality class of the 3$d$ Ising model. Figure 11: 3$d$ vacuum condensation $\braket{\phi}_{h=0}$ as a function of $\kappa$ on $V=4096^{3}$. The inset graph also shows $\braket{\phi}_{h=0}$ together with the fitting result (dotted line) as a function of the reduced parameter $|(\kappa-\kappa_{\rm c})/\kappa_{\rm c}|$ on a logarithmic scale. The gray band indicates $\kappa_{\rm c}$ estimated by $X^{(m)}$.
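A power-law fit of this form is straightforward to set up with scipy. The sketch below is ours, run on synthetic data generated from the quoted fit values ($A=3.7$, $\kappa_{\rm c}=0.112859$, $\beta=0.32$) purely for illustration; it is not the authors' analysis code.

```python
# Minimal scipy sketch of the power-law fit <phi>_{h=0} = A (kappa - kappa_c)^beta.
import numpy as np
from scipy.optimize import curve_fit

def order_param(kappa, A, kappa_c, beta):
    # Clip so the order parameter vanishes in the symmetric phase kappa <= kappa_c.
    return A * np.clip(kappa - kappa_c, 0.0, None)**beta

kappa = np.linspace(0.112860, 0.113000, 20)
phi = 3.7 * (kappa - 0.112859)**0.32   # synthetic data from the quoted values
popt, _ = curve_fit(order_param, kappa, phi, p0=(1.0, 0.11285, 0.3))
print(popt)  # recovers approximately (3.7, 0.112859, 0.32)
```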
## IV Summary and outlook We have investigated the phase transition of the $4d$ single-component $\phi^{4}$ theory at $\lambda=40$ employing the bond energy and the vacuum condensation of the scalar field. Both quantities show finite jumps at the transition point on the extremely large lattice of $V=1024^{4}$, corresponding to the thermodynamic limit, and they indicate the weak first-order phase transition as found in the Ising limit Akiyama _et al._ (2019). This means that the single-component lattice $\phi^{4}$ theory does not have a continuum limit. In the current ATRG calculation, the resulting latent heat $\Delta E_{\rm b}$ and the gap $\Delta\braket{\phi}$ are smaller than those in the Ising case obtained by the HOTRG with $D=13$. As a next step, it would be interesting to investigate the phase transition of the O(4)-symmetric $\phi^{4}$ theory, which is more relevant to the SU(2) Higgs model. ###### Acknowledgements. Numerical calculation for the present work was carried out with the supercomputer Fugaku provided by RIKEN (Project ID: hp200170) and also with the Oakforest-PACS (OFP) and the Cygnus computers under the Interdisciplinary Computational Science Program of Center for Computational Sciences, University of Tsukuba. This work is supported in part by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) (No. 20H00148). ## References * Wilson and Kogut (1974) K. G. Wilson and J. B. Kogut, Phys. Rept. 12, 75 (1974). * Aizenman (1981) M. Aizenman, Phys. Rev. Lett. 47, 886 (1981). * Aizenman (1982) M. Aizenman, Commun. Math. Phys. 86, 1 (1982). * Frohlich (1982) J. Frohlich, Nucl. Phys. B 200, 281 (1982). * Dashen and Neuberger (1983) R. F. Dashen and H. Neuberger, Phys. Rev. Lett. 50, 1897 (1983). * Lindner (1986) M. Lindner, Z. Phys. C 31, 295 (1986). * Hasenfratz _et al._ (1987) A. Hasenfratz, K. Jansen, C. B. Lang, T. Neuhaus, and H. Yoneyama, Phys. Lett. B 199, 531 (1987). * Lüscher and Weisz (1987) M. Lüscher and P. Weisz, Nucl. Phys. B290, 25 (1987). * Lüscher and Weisz (1988) M. Lüscher and P. Weisz, Nucl. Phys. B295, 65 (1988). * Lüscher and Weisz (1989) M. Lüscher and P. Weisz, Nucl. Phys. B318, 705 (1989). * Huang (1989) K. Huang, Int. J. Mod. Phys. A4, 1037 (1989). * Frick _et al._ (1990) C. Frick, K. Jansen, J. Jersak, I. Montvay, P. Seuferling, and G. Munster, Nucl. Phys. B331, 515 (1990). * Kribs _et al._ (2007) G. D. Kribs, T. Plehn, M. Spannowsky, and T. M. Tait, Phys. Rev. D 76, 075016 (2007), arXiv:0706.3718 [hep-ph] . * Blöte and Swendsen (1980) H. W. J. Blöte and R. H. Swendsen, Phys. Rev. B 22, 4481 (1980). * Sanchez-Velasco (1987) E. Sanchez-Velasco, J. Phys. A20, 5033 (1987). * Kenna and Lang (1993) R. Kenna and C. B. Lang, Nucl. Phys. B393, 461 (1993), [Erratum: Nucl. Phys. B411, 340(1994)], arXiv:hep-lat/9210009 [hep-lat] . * Bittner _et al._ (2002) E. Bittner, W. Janke, and H. Markum, Phys. Rev. D66, 024008 (2002), arXiv:hep-lat/0205023 [hep-lat] . * Kenna (2004) R. Kenna, Nucl. Phys. B691, 292 (2004), arXiv:hep-lat/0405023 [hep-lat] . * Lundow and Markström (2009) P. H. Lundow and K. Markström, Phys. Rev. E 80, 031104 (2009). * Lundow and Markström (2011) P. H. Lundow and K. Markström, Nucl. Phys. B845, 120 (2011), arXiv:1010.5958 [cond-mat.stat-mech] . * Lee _et al._ (1990a) I.-H. Lee, J. Shigemitsu, and R. E. Shrock, Nucl. Phys. B 330, 225 (1990a). * Lee _et al._ (1990b) I.-H. Lee, J. Shigemitsu, and R. E. Shrock, Nucl. Phys. B 334, 265 (1990b). * Shrock (2014) R. Shrock, Phys. Rev. 
D 90, 065023 (2014), arXiv:1408.3141 [hep-th] . * Shrock (2016) R. Shrock, Phys. Rev. D 94, 125026 (2016), arXiv:1610.03733 [hep-th] . * Shrock (2017) R. Shrock, Phys. Rev. D 96, 056010 (2017), arXiv:1707.06248 [hep-th] . * Wegner and Riedel (1973) F. J. Wegner and E. K. Riedel, Phys. Rev. B 7, 248 (1973). * Akiyama _et al._ (2019) S. Akiyama, Y. Kuramashi, T. Yamashita, and Y. Yoshimura, Phys. Rev. D100, 054510 (2019), arXiv:1906.06060 [hep-lat] . * Levin and Nave (2007) M. Levin and C. P. Nave, Phys. Rev. Lett. 99, 120601 (2007), arXiv:cond-mat/0611687 [cond-mat.stat-mech] . * Xie _et al._ (2012) Z. Y. Xie, J. Chen, M. P. Qin, J. W. Zhu, L. P. Yang, and T. Xiang, Phys. Rev. B 86, 045139 (2012). * Adachi _et al._ (2020) D. Adachi, T. Okubo, and S. Todo, Phys. Rev. B 102, 054432 (2020), arXiv:1906.02007 [cond-mat.stat-mech] . * Kadoh and Nakayama (2019) D. Kadoh and K. Nakayama, (2019), arXiv:1912.02414 [hep-lat] . * Shimizu and Kuramashi (2014a) Y. Shimizu and Y. Kuramashi, Phys. Rev. D90, 014508 (2014a), arXiv:1403.0642 [hep-lat] . * Sakai _et al._ (2017) R. Sakai, S. Takeda, and Y. Yoshimura, PTEP 2017, 063B07 (2017), arXiv:1705.07764 [hep-lat] . * Akiyama _et al._ (2020a) S. Akiyama, Y. Kuramashi, T. Yamashita, and Y. Yoshimura, (2020a), arXiv:2009.11583 [hep-lat] . * Shimizu and Kuramashi (2014b) Y. Shimizu and Y. Kuramashi, Phys. Rev. D90, 074503 (2014b), arXiv:1408.0897 [hep-lat] . * Shimizu and Kuramashi (2018) Y. Shimizu and Y. Kuramashi, Phys. Rev. D97, 034502 (2018), arXiv:1712.07808 [hep-lat] . * Takeda and Yoshimura (2015) S. Takeda and Y. Yoshimura, PTEP 2015, 043B01 (2015), arXiv:1412.7855 [hep-lat] . * Kadoh _et al._ (2018) D. Kadoh, Y. Kuramashi, Y. Nakamura, R. Sakai, S. Takeda, and Y. Yoshimura, JHEP 03, 141 (2018), arXiv:1801.04183 [hep-lat] . * Kadoh _et al._ (2020) D. Kadoh, Y. Kuramashi, Y. Nakamura, R. Sakai, S. Takeda, and Y. Yoshimura, JHEP 02, 161 (2020), arXiv:1912.13092 [hep-lat] . * Kuramashi and Yoshimura (2020) Y. Kuramashi and Y. Yoshimura, JHEP 04, 089 (2020), arXiv:1911.06480 [hep-lat] . * Akiyama _et al._ (2020b) S. Akiyama, D. Kadoh, Y. Kuramashi, T. Yamashita, and Y. Yoshimura, JHEP 09, 177 (2020b), arXiv:2005.04645 [hep-lat] . * Yoshimura _et al._ (2018) Y. Yoshimura, Y. Kuramashi, Y. Nakamura, S. Takeda, and R. Sakai, Phys. Rev. D97, 054511 (2018), arXiv:1711.08121 [hep-lat] . * Cea _et al._ (2019) P. Cea, M. Consoli, and L. Cosmai, (2019), arXiv:1912.00849 [hep-ph] . * Consoli and Cosmai (2020a) M. Consoli and L. Cosmai, Int. J. Mod. Phys. A 35, 2050103 (2020a), arXiv:2006.15378 [hep-ph] . * Consoli and Cosmai (2020b) M. Consoli and L. Cosmai, (2020b), arXiv:2007.10837 [hep-ph] . * Consoli and Cosmai (2020c) M. Consoli and L. Cosmai, Symmetry 12, 2037 (2020c). * Akiyama _et al._ (2020c) S. Akiyama, Y. Kuramashi, T. Yamashita, and Y. Yoshimura, PoS LATTICE2019, 138 (2020c). * Oba (2020) H. Oba, PTEP 2020, 013B02 (2020), arXiv:1908.07295 [cond-mat.stat-mech] . * Gu and Wen (2009) Z.-C. Gu and X.-G. Wen, Phys. Rev. B 80, 155131 (2009). * Akerlund _et al._ (2013) O. Akerlund, P. de Forcrand, A. Georges, and P. Werner, Phys. Rev. D 88, 125006 (2013), arXiv:1305.7136 [hep-lat] . * Kikuchi (1951) R. Kikuchi, Phys. Rev. 81, 988 (1951). * Fukugita _et al._ (1990) M. Fukugita, H. Mino, M. Okawa, and A. Ukawa, Journal of Statistical Physics 59, 1397 (1990). * Kadoh _et al._ (2019) D. Kadoh, Y. Kuramashi, Y. Nakamura, R. Sakai, S. Takeda, and Y. Yoshimura, JHEP 05, 184 (2019), arXiv:1811.12376 [hep-lat] . * Hasenbusch (2010) M. Hasenbusch, Phys. Rev. 
B 82, 174433 (2010), arXiv:1004.4486 [cond-mat.stat-mech] .
# Chapter 1: Introduction: The emergence of spacetime Nick Huggett and Christian Wüthrich This is a chapter of the planned monograph _Out of Nowhere: The Emergence of Spacetime in Quantum Theories of Gravity_ , co-authored by Nick Huggett and Christian Wüthrich and under contract with Oxford University Press. More information at www.beyondspacetime.net. This work was supported financially by the ACLS and the John Templeton Foundation (the views expressed are those of the authors, not necessarily those of the sponsors). (18 January 2021) ###### Contents 1. Quantum gravity and philosophy 2. Worlds without spacetime? 3. Challenges to spacetime emergence 4. Physical salience 5. Non-commutative geometry 6. Spacetime functionalism 7. The role of philosophy in physics 8. The plan for the book “Big Bang Machine Could Destroy Earth” …ran an attention-grabbing headline in The Sunday Times (Leake 1999), regarding the new Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. To be fair, the main apocalyptic concern of the paper was that the RHIC would create a form of matter in which strange quarks eat the up and down quarks found in ordinary matter. But it also discussed the possibility that experiments involving high-energy collision of gold ions could create microscopic black holes, which would pull all the matter in the world into them. Such scenarios were taken seriously enough that they were evaluated by a panel of elders, who concluded that the chances of any such events were utterly minuscule (Busza et al. 1999)—happily, up to the time of writing, they have not been contradicted by events at Brookhaven! Let’s look into the business of black hole formation more carefully to explain why physics needs an account of quantum gravity. First, the RHIC was built to probe how matter behaves under intense temperatures and pressures—in effect recreating in a tiny region the state of the universe within the first second of its existence, when quarks and gluons flowed in a plasma rather than binding to form particles. The predictions tested here are largely those of quantum chromodynamics, the quantum theory of the strong force binding nucleons and their constituents. That is, the collisions between heavy nuclei such as gold in the RHIC are governed by the laws of quantum mechanics (QM). The concern over black holes, however, arises when one asks how general relativity (GR)—the classical, non-quantum, theory of gravity—gets into the picture. According to GR, the spacetime metric outside a sphere of mass $M$ takes the form: $\mathrm{d}s^{2}=c^{2}\mathrm{d}t^{2}(1-\frac{2GM}{rc^{2}})-\mathrm{d}r^{2}/(1-\frac{2GM}{rc^{2}}),$ (1) where $G$ is Newton’s gravitational constant, $c$ is the speed of light, and $r$ the distance from the center. ($\mathrm{d}s^{2}$ is the infinitesimal spacetime ‘distance’ squared—the ‘interval’—between two radial points separated in time by $\mathrm{d}t$ and space by $\mathrm{d}r$, in suitable co-ordinates; though an exact understanding is not crucial here.) What is important is that this quantity blows up when $2GM=rc^{2}$, or $r=2GM/c^{2}$. Understanding this occurrence was an important issue in the early development of GR, but what it actually signifies is the presence of an ‘event horizon’ around the mass, from which neither matter nor light can escape—the boundary of a black hole. Of course this story only makes sense if the mass is all located within a radius of $2GM/c^{2}$, since the metric formula only holds outside the mass.
So one can equally well say that if one has a mass $M$ it will only form a black hole if it is all located within a radius less than $2GM/c^{2}$. So finally, the question posed by the panel at Brookhaven was in how small a region the energy created in collisions would be located (see page 7 of the report). Acting cautiously, they assumed the best conditions for black hole formation, supposing that all the energy produced by the collision contributes to the mass: about 50 times that of a gold atom. For a black hole of this mass the event horizon has a radius of $10^{-39}$m. On the other hand, a gold atom has a radius of around $10^{-12}$m, so even supposing that all the energy is concentrated in a region the size of a suitably Lorentz-contracted nucleus, general relativity predicts that collisions will be many, many orders of magnitude from creating a black hole. Phew. (Footnote 1: Don’t be confused if you have read of black holes being created at RHIC. In fact what has (perhaps) been observed is the Unruh effect, which is formally equivalent to Hawking radiation but involves acceleration rather than black holes; and acceleration is, according to GR, in a sense indistinguishable from a gravitational force. See Nastase (2005).) ## 1 Quantum gravity and philosophy What we have then is an argument that the physics of Brookhaven lies within the domain of relativistic QM, but that the gravitational effects of the collisions are utterly negligible. Perhaps the world is just ‘dappled’ in this way: in some domains, such as the motions of the planets (GR explains the precession of the perihelion of Mercury, for instance) GR holds and QM is irrelevant; in others, as in RHIC, it is QM that holds sway, with GR entering only to provide a background geometry determined by ambient bodies, not by the system under consideration. But the very argument here shows that it is possible to bring considerations from both theories to bear on a single system, making clear that one can sensibly ask whether there are domains in which both theories apply. On our current understanding of the universe indeed there are. In the first place, there is the big bang, which is entailed by GR given the current state of the universe, in which matter becomes so hot and compact that QM effects will necessarily occur. (And conversely, if the inflationary hypothesis is correct, during the first $10^{-33}$ to $10^{-32}$s quantum fields provide the energy which drives the expansion of the universe, according to the laws of GR.) In the second, Hawking radiation is predicted to occur around black holes as a QM effect resulting from the geometry of spacetime given by GR (Hawking 1974). Indeed, assuming that the local equivalence of acceleration and gravity—the ‘equivalence principle’—holds in the quantum domain, RHIC does provide tests for this physics, since the radiation produced by decelerating ions can be measured (see footnote 1): under the assumption, it provides indirect tests of the overlap of gravity and QM. Thus, we say, the ultimate need for a theory that in some way unifies QM and GR—a theory of _quantum gravity_ (QG)—arises from the existence of phenomena in our universe in which the domains of the two overlap. There are other, more theoretical, arguments for such a theory and concerning the form it should take.
For a critical evaluation of these, arguing that overlap is the best reason to seek quantum gravity, and that empirical considerations best dictate its form, see Callender and Huggett (2001); for further discussion, Wüthrich (2005).[^2]

[^2]: This situation may change if the technology improves enough over the next few years to carry out the experiment proposed by Bose et al. (2017).

There are then good reasons for physicists to investigate QG. This book is predicated on the view that it also has an important call on the efforts of philosophers. Indeed, we would like this work to encourage, by example, our colleagues to be more adventurous in their choice of topics of enquiry. Philosophy of physics, we suggest, has a tendency to look too much to the past, and to the metaphysics of well-established physics (and of course to internecine disputes): classical statistical mechanics, classical spacetime theory and non-relativistic quantum mechanics are ‘so twentieth (or even nineteenth) century’, and yet have a virtual lock on the discipline. While quantum field theory (QFT) is becoming a significant topic, that is still at least half a century behind the physics! We’re overstating things somewhat for effect here: of course, even old theories do face important foundational problems, and their consequences for our broader understanding of the world take considerable elucidation. And of course it is unfair to suggest that no philosophers of physics show an interest in contemporary physics. Indeed, since we started writing this book, there has been an explosion of interest in QG, especially amongst a younger generation of scholars, which we find very exciting. Still, we do say that collectively the discipline pays insufficient attention to cutting edge physics, and hope this book in some way serves as an impetus to greater engagement.

Our point is not that novelty is good for its own sake, nor that philosophy is a ‘hand-maiden’, who should dutifully follow the fashions of physics. Rather, we are inspired by recent work in the history and philosophy of science to believe that it is central to the business of philosophy to engage with developing physical theories—both because the search for philosophical knowledge must be responsive to empirical discoveries, and because philosophy has important contributions to make to the development of physics (and other sciences). We will discuss this point at greater length below (§7), to explain our aims and motivations. First, in part to make that discussion more concrete, in §2 we will very briefly introduce some theories of QG (or in its vicinity). We especially want to focus on a rather generic feature of them—that in various ways they do not contain familiar spacetime at a fundamental level, but rather it ‘emerges’ (in a sense to be discussed) in a higher, non-fundamental domain. That will lead us to a discussion (in §3) of the challenges to the very idea that something as seemingly fundamental as spacetime could be derivative from even more fundamental, yet non-spatiotemporal, physics. We then address these challenges, analyzing how they can be overcome, and illustrating this process in a historical case (§4) and in a short contemporary example (§5). This discussion encapsulates much of the work of the book: the chapters introduce, for philosophers, several proposals for a theory of QG, discuss the ways in which they eliminate spatiotemporal structures, and investigate the ways in which they are recovered as effective, apparent structures.
What we propose is a form of ‘functionalism’, so in §6 we explain that position. In a final section (§8) we will give an overview of the different strategies one might take towards quantizing gravity, to relate the different proposals that we will consider in the book.

## 2 Worlds without spacetime?

All theories of QG are, to a large extent, speculative; some of the examples that follow are more speculative than others. However, as we shall explain below, there are good reasons to think that they may still teach important lessons in the search for QG. The first three examples are the focus of the following chapters; the remaining two are also illuminating, but we have discussed them elsewhere.

* Causal Set Theory (CST): As we will see in chapters 2 and 3, CST makes liberal use of GR as a vantage point for its research programme. In fact, it takes its most important motivation from theorems stating that given the causal structure of a spacetime, its metric is determined up to a conformal factor. In other words, the causal structure determines the geometry of a spacetime—but not its ‘size’. Taking this cue, CST posits that the fundamental structure is a locally finite set of elementary events, partially ordered by a basic causal relation. In other words, the fundamental structure is a causal set. The assumption of local finiteness is nothing but the formal demand that the fundamental structure—whatever else it is—is discrete. Together with the demand of Lorentz invariance at the derived level, the discreteness of causal sets forces a rather odd locality structure onto the elementary events of the causal set (§3.5). Furthermore, although the fundamental relation of causal precedence can double up as something akin to temporal precedence, space is altogether lost in a causal set (§2.3). Jointly, these facts entail that the structure we are facing in causal set theory is also rather different from the spacetime encountered in GR. In fact, the quantum nature of the causal sets yet to be incorporated into causal set theory is bound to further complicate the picture and to remove the resulting structure further from that of relativistic spacetimes.

* Loop Quantum Gravity (LQG): LQG starts out from a Hamiltonian formulation of GR and attempts to use a recipe for cooking up a quantum theory from a classical one that has been utilized with great success in other areas of physics. This recipe is the so-called canonical quantization. The goal of applying the canonical quantization procedure is to find the physical Hilbert space, i.e. the space of admissible physical states, and the operators defined on it that correspond to genuinely physical quantities. As will be seen in chapters 4-5, following this recipe leads rather straightforwardly into a morass of deep conceptual, interpretative, and technical issues concerning the dynamics of the theory, as well as time quite generally. We find that the ‘kinematic’ Hilbert space affords a natural geometric interpretation: its elements are states that give rise to physical space, yet are discrete structures with a disordered locality structure. At least in one basis of this Hilbert space, the states appear to be states of a granular structure, welding together tiny ‘atoms’ of space(time). It is crucial to this picture that these atoms of space(time) are atoms in the original meaning of the word: they are the truly indivisible smallest pieces of space(time).
The smooth space(time) of the classical theory can thus be seen to be supplanted by a discrete quantum structure. Moreover, since generically a state of this structure will be a superposition of basis states with a determinate geometry, generic states will not possess determinate geometric properties. If continuity, locality, or determinate geometry were an essential property of spacetime, then whatever the fundamental structure is, it is not spacetime. In this sense, spacetime is eliminated from the fundamental theory.

* String Theory: According to string theory (chapter 6) tiny one-dimensional objects move around space, wiggling as they go—the different kinds of vibration correspond to different masses (charges and spins) and hence to different subatomic particles. So it sounds as if spacetime is built into the theory in a pretty straightforward way; however, we will see that things are not so simple. First, in chapter 7 we will see that various versions which are intuitively very different in fact correspond to the same physics—are ‘_dual_’. For instance, suppose that at least one of the dimensions of space is ‘compactified’, or circular. Then it turns out that a theory in which the circumference of the dimension is $C$ has the same collection of values for physical quantities as a theory in which the circumference is $1/C$: i.e., that theories in which compactified dimensions are small are physically indistinguishable from—or ‘dual’ to—those in which they are large. Other dualities relate spaces of different topologies. These facts raise important questions about whether the space in which the string lives is the one we observe, since that is definitely large though the string space could be small, or even conventional. Second, in chapters 8-9, we will see how the geometrical structure of spacetime—and indeed GR—arises from the behavior of large collections of strings in a ‘graviton’ state, the quantum particle that mediates gravitational forces.

* Group Field Theory: Consider rotations, by an angle $\theta$, in the plane: a different one for each value of $0\leq\theta<2\pi$. These form a group under composition: a rotation by $\alpha$ followed by a rotation by $\beta$ is just a rotation by $\alpha+\beta$ (and for example, a rotation by $2\pi-\alpha$ undoes a rotation by $\alpha$). Similarly for rotations in three dimensions, and indeed similarly for Lorentz transformations (which are in fact nothing but rotations in Minkowski spacetime), and so on. For each, composition yields a different function from any pair of elements to a third. We can thus characterize the abstract group structure simply by the action of this function on an entirely arbitrary set of elements—forget that we started with rotations, and let the elements be anything, with a composition rule isomorphic to that of the rotations. One could then take a set of such blank elements, which compose like rotations in the plane, but which should not be thought of as literal rotations: no plane at all is postulated, the only manifold is that formed by the group elements themselves—the circle of points with labels $0$ to $2\pi$, not a 2-dimensional plane.[^3] And finally one can introduce a field on this group manifold, a real number for each group element, with a dynamical law for its evolution; and indeed quantize this field.

[^3]: More accurately, the space that is assumed is not the space of relativistic physics, or that of planar rotations, but four copies of the group of Minkowski rotations.
As we have emphasized, while physical space was used to guide the construction of this theory, it is not an explicit component of it, and yet models of general relativity can be derived from it. This example is discussed in greater detail in Oriti (2014) and Huggett (2018).

* Non-Commutative Geometry: Picture a Euclidean rectangle whose sides lie along two coordinate axes, so that the lengths of its sides are $x$ and $y$—its area is $x\cdot y=y\cdot x$. But what if such products fail to commute, $xy\neq yx$, so that area is no longer a sensible quantity? How can we understand such a thing—a _non-commutative geometry_? By abandoning ordinary images of geometry in terms of a literal space (such as the plane) and presenting it in an alternative, algebraic way. In fact, our example already starts to do so: even thinking about areas as products of coordinates uses Descartes’ algebraic approach to Euclidean geometry. Once we have entered the realm of algebra, all kinds of possible modifications arise. Especially, an abstract algebra $\mathcal{A}$ requires an operation of ‘multiplication’, $\star$, but this can be a quite general map from pairs of elements, $\star:\mathcal{A}\times\mathcal{A}\to\mathcal{A}$, _which need not be commutative_! For instance, one could define ‘multiplication’ to satisfy $x\star y-y\star x=\theta$ (a small number), and use it to generate polynomials in $x$ and $y$; these carry geometric information. Such a thing is perfectly comprehensible from the abstract point of view of algebra, but it cannot be given a familiar Euclidean interpretation via Cartesian geometry. So, such a theory seems to describe a world that is fundamentally algebraic, not spatial (in the ordinary sense)—there is $x\star y$ and $y\star x$ but no literal rectangle. If there is thus fundamentally nothing ‘in’ space, is the ultimate ontology ‘structural’, based on algebraic relations only? And how could an appearance of familiar (commutative!) space arise; especially, what significance could point-valued quantities have? We will return to this example in §3.

All of these examples are speculative to some extent or other, and none can claim to be a complete quantum theory of gravity (and none has convincing, currently testable, novel predictions!), yet all have some claim to model relevant physical features of QG, worth exploring. In particular, we have emphasized in each case how spacetime features are missing in the theories (to be spelled out in detail in later chapters). This situation thus appears to be a common condition of many approaches to QG, in which case we say that classical, relativistic spacetime is ‘emergent’. We emphasize (as we have elsewhere) that we do not use this term in its strongest philosophical sense to indicate the _inexplicability_ of X from Y: as some have claimed life or mind emerges from matter. On the contrary, we argue that classical spacetime structures _can_ be explained in more fundamental terms: indeed, it was largely to explicate how physicists do so that we wrote the book. Some might then say that spacetime ‘reduces’ to non-spatiotemporal QG, but we prefer to stick with the notions of ‘explanation’ or ‘derivation’, because there are many notions of ‘reduction’, some of which are too strict. But we are also happy to speak of (weak) ‘emergence’ even when spacetime is derived, because the gulf between a theory that does not assume spacetime and one that does is so great.
Having spacetime or not makes a huge formal and conceptual difference, in particular because in almost all theories prior to QG classical spacetime has apparently been one of the most basic posits. Indeed, this very gulf makes one wonder what it could mean to derive spacetime, and whether it is possible at all.

Before we proceed, we need to introduce some terminology to keep the discussion straight. The issue is that the theories of QG often contain some object referred to as ‘space’, even when they do not assume ‘space’ in the ordinary sense. For instance, there may be a ‘Hilbert space’, or a ‘dual space’, or ‘Weyl space’, or ‘group space’. So we will refer to spacetime in the ordinary sense as ‘classical’, or ‘relativistic’, or sometimes just ‘space’ or ‘spacetime’ when the context makes matters clear. (We eschew the phrase ‘physical space’, since the other ‘spaces’ may well be part of the fundamental physical furniture. We have previously used ‘phenomenal space’, to indicate that classical space is that of observable phenomena, according to the physicist’s use of ‘phenomenological’. However, this leads to confusion with the philosophical doctrine of ‘phenomenalism’, so we have dropped it.) By classical or relativistic spacetime, we mean that theorized in QM (especially QFT) and relativity, approximated in non-relativistic mechanics, and ultimately implicated in our observations of the physical world. As stated, that is not an entirely homogeneous concept, so we will say more later (§6) about exactly what features of classical spacetime are emergent from our theories of QG. First, we turn to some challenges to the project of deriving spacetime.

## 3 Challenges to spacetime emergence

Space and time are so basic to both our manifest and scientific images of the world that at first the mind boggles at the thought that they might be mere ‘appearances’ or ‘phenomena’, of some deeper, more fundamental, non-spatiotemporal reality. Is a physics without spacetime even intelligible? And if it is, is spacetime the kind of thing whose existence could be explained? At its core, this book seeks to address these questions: on the one hand explicating the worlds described by theories of QG, while on the other showing how spacetime can be derived from them. But to understand the nature and methodology of that project, it is important here to unpack the mind-boggling, vertiginous panic about the very idea. Larry Sklar (1983) gave expression to this all too common sentiment among philosophers (and physicists) when he wrote[^4]

[^4]: A note on terminology: we take ‘Platonists’ to be committed to the existence of abstract entities, such as propositions, sets, love, and justice, but also to the existence of the concrete, physical, and spatiotemporal world. Those who maintain that our world is fundamentally mathematical in nature and thus entirely consists in ultimately abstract entities or structures are often labelled as ‘Pythagoreans’. Since we are interested not in whether there exist abstracta, but in the possibility that all physical existence is grounded in non-spatiotemporal structures, we will refer to those who maintain that a fundamentally non-spatiotemporal physical world is not devoid of “real being”—no doubt historically inaccurately—as _Pythagoreans_. We take this Pythagoreanism to be Sklar’s target—and that of this monograph.
> What could possibly constitute a more essential, a more ineliminable, component of our conceptual framework than that ordering of phenomena which places them in space and time? The spatiality and temporality of things is, we feel, the very condition of their existing at all and having other, less primordial, features. A world devoid of color, smell or taste we could, perhaps, imagine. Similarly a world stripped of what we take to be essential theoretical properties also seems conceivable to us. We could imagine a world without electrical charge, without the atomic constitution of matter, perhaps without matter at all. But a world not in time? A world not spatial? Except to some Platonists, I suppose, such a world seems devoid of real being altogether. (45)

According to Sklar, a non-spatiotemporal world is inconceivable, and thus presumably not even metaphysically possible, let alone physically. This monograph is concerned with establishing the possibility of a fundamentally non-spatiotemporal world, articulating the consequences of such a possibility, and defending the idea that spacetime may be merely emergent in a perfectly acceptable scientific explanation of the manifest world. So in this section we will discuss various more precise ways that one might doubt the possibility of deriving spacetime.

First, in Huggett and Wüthrich (2013) we discussed the idea that a theory without spacetime might be ‘empirically incoherent’. That is, any theory which entails that the observations apparently supporting it are impossible cannot receive empirical support (Barrett 1996)—it undermines the very grounds for believing it. Since all observations ultimately involve events localized in spacetime, it might seem that theories without spacetime in their basic formulation are threatened with empirical incoherence; the confirmation of such a theory might be ruled out a priori. However, it is clear that a conflation is involved. (More) fundamental theories of QG are non-spatiotemporal in the sense that spatiotemporal structure is missing from their basic furniture; but it is perfectly consistent to think that spacetime is present as an effective object, arising from the more fundamental ones. (And that observation events can be identified within effective spacetime.) That is, QG will not be empirically incoherent if the appearance of spacetime can be adequately explained, and of course that is exactly what our case studies aim to do.

Second, while this book concerns the idea of ‘emergent’ spacetime as it arises in QG, one of our key concerns has already arisen in discussions of a different kind of spacetime emergence. The quantum mechanical wavefunction of $N$ particles is not a function in ordinary space, but of the positions of all the particles: $\Psi(x_{1},y_{1},z_{1};x_{2},y_{2},z_{2};\dots;x_{N},y_{N},z_{N})$. Thus $\Psi$ lives in ‘configuration’ space, in which there are three dimensions for each particle. Albert (1996) has argued that we should take the wavefunction ‘seriously’ as the ontology of the theory, and conclude that configuration space is more fundamental than regular space—that the three dimensions of experience are mere appearances of the $3N$ dimensions of reality. Whatever the merits of that view, the general idea has been attacked by Tim Maudlin (2007). In particular he argues as follows: one might

> derive a physical structure with the form of local beables from a basic ontology that does not postulate them.
> This would allow the theory to make contact with evidence still at the level of local beables, but would also insist that, at a fundamental level, the local structure is not itself primitive. … This approach turns critically on what such a derivation of something isomorphic to local structure would look like, _where the derived structure deserves to be regarded as physically salient_ (rather than merely mathematically definable). Until we know how to identify physically serious derivative structure, it is not clear how to implement this strategy. (3161)

We have italicized the key phrase here. Suppose that one managed to show formally that certain derivative quantities in a non-spatiotemporal theory took on values corresponding to the values of classical spatiotemporal quantities; one would then be in a position to make predictions about derived space. However, according to the passage quoted, such a derivation (even if the predictions were correct) would not show that spacetime had been explained. In addition, we have to be assured that the formally derived structure is ‘_physically salient_’. We agree with Maudlin that physical salience is required of proper—one can say ‘explanatory’—derivations: otherwise one simply has a formal, instrumental book-keeping of the phenomena. Indeed, we agree with him that the issue is particularly pressing in theories of emergent spacetime. But we think that it can be addressed in QG: one of the goals of this book is to investigate the (novel) principles of physical salience for theories of QG, the principles whose satisfaction makes the derivations of spacetime physically salient. In the following chapters we will look in detail at the derivations, to make clear the assumptions and forms of reasoning that lie behind them. In the concluding chapter 10 we will analyze what we have learned, to start to explicate what makes a derivation of spacetime in QG physically salient. But to explain that project—and its relevance to philosophy—we need to unpack the very notion of physical salience, as we understand it.[^5]

[^5]: We are grateful to Maudlin for conversations on this topic. We believe that we capture the essence of his idea, even if we might differ in details; and especially regarding the depth of the problem in the case of spacetime emergence. We do, however, want to point out an important difference between the cases of emergence from QG and from configuration space: in the latter, but not the former, there is a way to formulate the theory in 3-space (as single particle wavefunctions with a tensor product). Thus in QM (but not QG) Maudlin can argue that the derivation isn’t physically salient, because the formulation from which it is derived is unnecessary in the first place. That the crucial difference between QG on the one hand and QM (and GR) on the other lies in there being no alternatives translates into a different status for spacetime functionalism in the two cases has been argued by Lam and Wüthrich (forthcoming).

## 4 Physical salience

There is a subtlety about the way that Maudlin makes the point, however (which we did not clearly address in Huggett and Wüthrich (2013)). For the target derived structure in itself is prima facie physically salient: it is the physical datum to which the more fundamental theory is answerable.
(Perhaps a more fundamental theory will show that some less fundamental theory is profoundly confused; but more generally one expects that existing, well-confirmed theories have latched onto some genuine physical structures, and that new, better theories will simply explain how, by subsuming the old in some broad sense.) So in that sense there is really no question of the physical salience of the ‘derived structure’—in the sense of the structure _to be_ derived. Rather, Maudlin is talking about a formal derivation within a proposed new theory, and the question of whether what is at present simply a mathematical structure, in numerical agreement with the target structure, in fact explains it, and isn’t merely an instrument for generating predictions. We would break this question down into two interconnected parts (which will also help illuminate what is involved in explanation here). First, the question of whether and how the basal objects or structures of the more fundamental theory accurately represent physically salient objects and structures. As we shall see shortly, that question becomes far more pressing when none of the putative objects or structures are supposed to be in spacetime. Second, does the formal derivation of the phenomenal from the more fundamental make physical sense? That the derivation exists shows that it makes sense at the level of the formalism, and especially that the derivation is compatible with the mathematical laws. But, as Maudlin suggests, there is more to the question of physical salience than that. And the question is especially pointed when one wonders how the spatiotemporal could ever be ‘made’ of the non-spatiotemporal.

We will illustrate these ideas with a homely (and idealized in many ways) example.[^6]

[^6]: The following has also been discussed in Huggett (2018).

The ideal gas law tells us that for a gas (in a box of fixed volume) pressure $\propto$ temperature. Ideal gas theory says nothing about the microscopic composition of gases, so these are (among) the primitive quantities of the theory, operationalized via pressure gauges (relying on forces measured via Hooke’s law for springs), and thermometers (so relying on the linear expansion with temperature of some substance). This is the phenomenon to be explained by the more fundamental theory, the kinetic gas model, according to which the gas is composed of atoms with mass $m$, whose degrees of freedom are their positions and velocities. The latter can be expressed by a vector $\vec{V}$, with $3n$ components: for each of the $n$ atoms that make up the gas, three components, to describe the speed with respect to each of the three dimensions of space. Each atom has a kinetic energy associated with its velocity ($1/2m\vec{v}^{2}$); the average kinetic energy is simply their sum, divided by $n$: denote this quantity

$T(\vec{V})\equiv\overline{\frac{1}{2}m\vec{V}^{2}}.$ (2)

Now one computes the atoms’ momentum change (per second per unit area) resulting from their collisions with the sides of the box: assuming that the collisions are elastic, and that the atoms are distributed evenly throughout the box and with respect to their velocities, one formally derives that

$P(\vec{V})\equiv\frac{2n}{3v}\overline{\frac{1}{2}m\vec{V}^{2}}.$ (3)

Clearly the two quantities are proportional:

$P(\vec{V})\propto T(\vec{V}),$ (4)

which has the form of the ideal gas law.
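As a quick numerical illustration of the aggregation just described (our sketch, not part of the original argument: it assumes unit constants, an isotropic Gaussian velocity distribution, and the standard elastic wall-collision formula $P=nm\overline{v_{x}^{2}}/v$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, vol = 1.0, 100_000, 1.0   # atom mass, number of atoms, box volume v

def aggregate(speed_scale):
    """Sample a velocity vector V with 3n components, then 'aggregate' it:
    T(V) is the average kinetic energy of equation (2); P is the momentum
    transferred per second per unit area to a wall by elastic collisions."""
    V = rng.normal(0.0, speed_scale, size=(n, 3))   # one row per atom
    T = np.mean(0.5 * m * np.sum(V**2, axis=1))     # equation (2)
    P = n * m * np.mean(V[:, 0]**2) / vol           # wall pressure
    return T, P

for s in (1.0, 2.0, 3.0):
    T, P = aggregate(s)
    # the ratio P/T stays at 2n/3v, as equations (3) and (4) require
    print(f"T = {T:8.3f}   P/T = {P/T:9.1f}   2n/3v = {2*n/(3*vol):9.1f}")
```

However the overall speed scale is varied, the computed wall pressure tracks $(2n/3v)\,T$, which is just the proportionality (4).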
However (and despite the suggestive names, $P$ and $T$) we have so far said nothing to justify identifying the quantities with the pressure and temperature of the ideal gas law; we have only noted a formal proportionality. From this example we can abstract the following schema:

> If (a) fundamental quantities $X$ can be ‘aggregated’ into $\alpha(X)$ and $\beta(X)$, such that (b) $f(\alpha(X))=g(\beta(X))$ follows from fundamental laws, then the law $f(A)=g(B)$ relating less fundamental quantities $A$ and $B$ is _formally derived_.

The term ‘aggregated’ is supposed to be vague, in order to accommodate the many ways a derivation might proceed. But the underlying idea is that the more fundamental theory has (many) more degrees of freedom than the less fundamental, and somehow the more fundamental must be ‘summarized’ by the less, for example by averaging, or by coarse-graining. Maudlin’s claim is that formal derivability does not suffice to properly derive phenomena: in particular, 3-dimensional space can be formally derived from the full $3N$-dimensional configuration space, but for Maudlin that does not make configuration space a plausible, more fundamental alternative to ordinary space. And more generally, one should worry that a merely formal condition does not distinguish instrumental calculi from serious physical accounts.

And indeed, further analysis of the derivation of the ideal gas law shows that considerations of physical salience are at play. In particular, $P(\vec{V})$ is derived by assuming that the atoms are striking the sides of the box, and exerting a force there: so acting exactly at the place and in the way that would produce a reading on a pressure gauge. And $T(\vec{V})$ is (according to the randomness assumption) the amount of energy in any macroscopic region of the box, say the location of the bulb of a thermometer: and collisions with the bulb will transfer kinetic energy to the molecules of the thermometer, causing thermal expansion. Imagine instead that $P(\vec{V})$ only referred to the center of the box; or that $T(\vec{V})$ referred to a single atom in the box. Then the formal derivation would not be convincing. Or suppose that instead of the atomic gas model we imagined that a gas was a continuous object, whose degrees of freedom were somehow described by $\vec{V}$, but not as the velocities of anything (certainly not atoms). Then the formal consequences of kinetic gas theory could still be taken to hold, but they would no longer have the interpretation that they do in the kinetic gas model; the whole derivation would go through, but its physical meaning would be obscure. In short, the reason, in addition to their proportionality, that we find $P(\vec{V})$ and $T(\vec{V})$ convincing as pressure and temperature, and not just quantities following a similar law, is that they are spatiotemporally coincident with those quantities, and involve processes capable of producing the phenomena associated with those quantities. The derivation is not merely formal, but also physically salient. Continuing our schema:

> A (non-instrumental) derivation of phenomena requires, in addition to a formal derivation, that (c) the derivation have _physical salience_.

Of course, this schema does not tell us what it is to be physically salient, but the example of the ideal gas above illustrates two very important aspects: spatiotemporal coincidence, and the action of a physically accepted mechanism.
And this observation immediately reveals the problem for the emergence of spacetime: such criteria simply cannot be satisfied by derivations from non-spatiotemporal theories, because they are explicitly spatiotemporal criteria. For instance, it makes no fundamental sense in such a theory to even ask where a structure is. So if such criteria are a priori constraints on science, then the QG program, to the extent that it involves non-spatiotemporal theories, is in some serious trouble. However, a second example indicates the contextuality of physical salience, and thereby the way in which QG can hope to achieve physical salience in its derivations.

Figure 1: Descartes’ and Newton’s competing images of gravity. On the left is pictured Descartes’ vortex model: each cell represents a ball of rotating matter, with lines to indicate the direction of rotation (e.g., those surrounding $f,L,Y$ rotate about axes in the plane shown, while those surrounding $D,F,S$ rotate about axes perpendicular to the plane). The bodies at the center of a cell represent suns: $S$ is ours. On the right is the diagram from Newton’s Proposition I.1 proof of Kepler’s equal areas law for a central force (essentially, conservation of angular momentum). All that matters is the direction of the force (towards the point $S$), not any ‘hypothesis’ about its nature. Ultimately Newton will apply the proposition to the case in which $S$ is our Sun. (Public domain, via Wikimedia Commons and Google Books.)

Consider the competing Cartesian and Newtonian accounts of gravity, exemplified by illustrations from the _Principles of Philosophy_ (Descartes 1644) and the _Mathematical Principles of Natural Philosophy_ (Newton 1726), respectively: see figure 1. On the one hand we have the vortices of Descartes, which aimed to provide a mechanical account of gravity, in terms of the motions and collisions of particles. On the other there is Newtonian action at a distance, which allowed him (as in Proposition I.1) to formulate and use his mathematical principles. We pass over Newton’s own ambiguous attitude towards the causes of gravitation (his refusal to ‘feign hypotheses’ on the one hand, but his speculations in the _Opticks_ (Newton 1730) on the other). The point to which we draw attention is the controversy between the Newtonians and Cartesians regarding the need for mechanical explanation.[^7]

[^7]: Note especially that, strictly, we distort the logic of Newton’s _Principia_ here: as far as Proposition I.1 is concerned, the forces could be impulses directed towards the point $S$. However, though Newton’s reader may not at that stage know the nature of the force, for Newton the figure represents the action of universal gravitation.

For the latter, Newton might have captured the effects of gravity in a formally accurate way, but offered no scientific explanation for the phenomena. For example, consider Leibniz’s clear statement to Clarke:

> If God would cause a body to move [round a] fixed centre, without any [created thing] acting upon it …it cannot be explained by the nature of bodies. For, a free body does naturally recede from a curve in the tangent. And therefore …the attraction of bodies …is a miraculous thing, since it cannot be explained by the nature of bodies. (_Leibniz-Clarke Correspondence_ in Clarke et al. (1956))

We take Leibniz’s complaint to be exactly that Newton’s derivation of the phenomena lacks physical salience, because only mechanical causes are physically salient explanations of unnatural motions.
Of course, the Newtonians were ultimately victorious, and this Cartesian condition of physical salience was replaced by one that allows action at a distance, because of the success of universal gravity, and the failure of mechanical alternatives, such as Leibniz’s. But that was not the end of the story: through the development and empirical success of electromagnetic theory, culminating in the development of special relativity, action at a distance was again rejected, with contact action replaced by the demand for local field interactions—and hence the replacement of Newtonian gravity with general relativity. Again, we understand this demand as a criterion of physical salience, required for more than merely formal accounts. But even that is not the end of the story, for quantum mechanics experimentally conflicts with that concept of locality, and so quantum non-locality must be accommodated in some way. (Hesse 1961 is a classic telling of this tale.)

By now, three points are indicated by this story: first, questions of physical salience, here in the form of the principles of locality, are genuine, controversial components of scientific enquiry. Second, such principles are historically contingent, changing in step with major advances in physics. Third, such changes are ultimately settled by, and epistemically justified by, empirical success: one of the things that we learn in a scientific revolution is a set of criteria of physical salience for explanation appropriate to the new domain of enquiry. Put this way, we see principles of physical salience as part of what Kuhn called the ‘disciplinary matrix’ in the _Postscript_ to the second edition of his (1962), or what Friedman (2001) refers to as the ‘relative, constitutive a priori’. Though changes in the principles change wholesale what theories are even candidate explanations, we don’t infer any catastrophic incommensurability here: as we said, innovations in physical salience are grounded in empirical success, like all other scientific knowledge.

So we have a general answer to the problem raised earlier. _How can a derivation of spacetime from a non-spatiotemporal theory ever be physically salient?_ Well, it cannot satisfy the standards of physical salience that apply to theories with classical spacetime, but we should expect a non-spatiotemporal theory to require new standards. And so the real question is: what are those new principles? Like Friedman, we see that question, and the development of such new principles, as a foundational, interpretational, conceptual—hence _philosophical_—endeavor. We shall elaborate on how such a project is to be conducted in §7. We will see throughout the book how this endeavor is ineliminably philosophical in the different approaches to QG. For now we want to illustrate the problem with an example.

## 5 Non-commutative geometry

Of necessity, this section is somewhat more technical than the others, and could be skipped by those not requiring a concrete illustration of how interpretational considerations come into play in elevating a formal derivation into one (potentially) having physical salience. It elaborates an example of a non-spatiotemporal theory already given, to show how one might come to view it as a theory from which spacetime emerges.

We start with familiar, commutative geometry, for which $xy=yx$, in a smooth manifold of points; let it be 2-dimensional for simplicity.[^8]

[^8]: This section is based on Huggett et al. (forthcoming). See also Lizzi (2009) for a more mathematical survey.
Consider polynomials $\mathcal{P}(x,y)$ of $x$ and $y$. These are ‘fields’, meaning that they return a numerical value at each point $(x,y)$. They form an algebra with respect to multiplication: this just means that when you multiply two polynomials together, the result is another polynomial. Moreover, because $xy=yx$ we have that $\mathcal{P}(x,y)\mathcal{Q}(x,y)=\mathcal{Q}(x,y)\mathcal{P}(x,y)$, so that the algebra is commutative. (Check with $\mathcal{P}(x,y)=xy$ and $\mathcal{Q}(x,y)=x^{2}+y^{2}$ if you like.) It may seem like a rather uninteresting structure, but in fact such algebraic relations alone contain geometric information about the space: in this case, that it is smooth, that it is 2-dimensional, and whether it is open or closed. This fact is shown by the important Gelfand-Naimark theorem (1943), which is the foundation of ‘algebraic geometry’. Indeed, the whole structure of differential geometry can be recast in algebraic terms. (An interesting application is Geroch’s (1972) formulation of general relativity as an ‘Einstein algebra’; discussed by Earman (1989, §9.9) as a possible response to the hole argument.)

For a mathematician, the question of what happens when the algebra is ‘deformed’ so that it is no longer commutative is irresistible: so one sets $xy-yx=i\theta$ and sees what happens. (And similarly in spaces of any dimension.) Surprisingly, one finds that the structure necessary to cast geometry in algebraic terms remains (at bottom, one can still define a derivative on the algebra, in terms of which the other structure is defined). Moreover, the Euler-Lagrange equation and Noether’s theorem do not require commutativity, and so the structure of modern physics is preserved, even in such a ‘non-commutative spacetime’—in a purely algebraic formulation.

But suppose such a physics were correct: how could it explain spacetime as it appears to us? Specifically, how are we to understand events localized in space in terms of an abstract algebra? When the algebra is commutative, the Gelfand-Naimark theorem lets us interpret the elements as fields, $\mathcal{P}(x,y)$ related to regions of space; but what about the non-commutative case? The question is just that which has concerned us in this chapter (and indeed the whole book): how can we derive the appearance of classical spacetime from a non-spatiotemporal theory, in a physically salient way?

The obvious thing to try is to (a) interpret $x$ and $y$ not as elements of an abstract algebra, but as fields in an ordinary plane: taking the value of the $x$ and $y$ coordinates at any point $(x,y)$. Then (b) define a new binary operation, $\star$, such that $x\star y-y\star x=i\theta$. Then (c) construct the algebra of polynomial fields, but with $\star$-multiplication instead of regular (point-wise) multiplication. Indeed, this is exactly how one typically proceeds in non-commutative geometry: in one formulation, the operation is ‘Moyal-$\star$’ multiplication[^9], and the fields form the ‘Weyl representation’ of the algebra. The algebra of the fields with respect to $\star$ will be that of the abstract non-commutative algebra, and now we have referred that algebra to objects in an ordinary manifold. In particular, one could talk about the local region in which such-and-such a field has values less than 1, say.

[^9]: $\phi\star\psi\equiv\phi\cdot\psi+\sum^{\infty}_{n=1}(\frac{i}{2})^{n}\frac{1}{n!}\theta^{i_{1}j_{1}}\dots\theta^{i_{n}j_{n}}\partial_{i_{1}}\dots\partial_{i_{n}}\phi\cdot\partial_{j_{1}}\dots\partial_{j_{n}}\psi$.
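To make the contrast between the two multiplications concrete, here is a minimal sympy sketch (ours; it truncates the Moyal series of footnote 9 at first order in $\theta$, which is exact when the arguments are the coordinate fields $x$ and $y$ themselves):

```python
import sympy as sp

x, y, theta = sp.symbols('x y theta', real=True)

def star(phi, psi):
    """Moyal star product on the plane, truncated at first order in theta.
    (The full product is the infinite series of footnote 9; for the linear
    fields x and y all higher-order terms vanish, so this is exact.)"""
    correction = (sp.I * theta / 2) * (sp.diff(phi, x) * sp.diff(psi, y)
                                       - sp.diff(phi, y) * sp.diff(psi, x))
    return sp.expand(phi * psi + correction)

# Ordinary point-wise multiplication of polynomial fields commutes...
P, Q = x * y, x**2 + y**2
print(sp.expand(P * Q - Q * P))   # -> 0

# ...but star multiplication of the coordinate fields does not:
print(star(x, y) - star(y, x))    # -> I*theta
```

The first check is just the one the main text invites; the second reproduces the defining relation $x\star y-y\star x=i\theta$.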
Indeed, one might now wonder whether we should throw away the abstract algebra, and just treat physics in ‘non-commutative geometry’ as really physics in commutative geometry, but with an unfamiliar multiplication operation. In other words, wonder whether classical spacetime needs to be recovered at all? Huggett, Lizzi, and Menon (forthcoming) argue that indeed it must be, for the Weyl representation has formal representational structure that exceeds its meaningful, physical content. In particular, the concept of a region with an area smaller than $\theta$—a fortiori that of a point—is undefinable in the theory. This can be seen in a couple of ways, but for instance the attempt to measure positions more accurately leads to unphysical results. The conclusion is that, although the Weyl representation contains points and arbitrarily small regions, they are purely formal, and do not represent anything real: non-commutative geometry—even the Weyl representation—is physically ‘pointless’. As a result, we cannot understand a point value of the Weyl fields as having any physical meaning. Rather we need to understand the fields as complete configurations: the unit of physical meaning for a field in non-commutative space is the _function_ from each point to a value, $\mathcal{P}:(x,y)\to\mathbb{R}$; not its _value_, $\mathcal{P}(\mathrm{x,y})$, at any particular point $(\mathrm{x,y})$. But the full configuration is equivalent to the place of the field in the abstract algebra, and so we are back to the question of deriving locality.

Here is one way to proceed, using an ansatz proposed by Chaichian, Demichev, and Presnajder (2000) (discussed further in Huggett et al. forthcoming). They propose that an ordinary, commuting field—the kind observed in classical spacetime—be related to a Weyl field $W(x,y)$ by an operation of ‘smearing’. One multiplies $W(x,y)$ by a $\theta$-sized ‘bell function’ about $(X,Y)$, and integrates over the Weyl space coordinates $x$ and $y$.[^10] The result is a new field $\Omega(X,Y)$. Extrapolating from this ‘CDP ansatz’, the result of smearing is to introduce classical space into the theory. $W$ lives in Weyl space, whose status, we argue, is only that of a formal representation of the fundamental algebra, while $\Omega$ should be interpreted as living in the physical space that we observe. We thus interpret smearing as relating a _function on one space_ to a _value on another space_: it relates the non-commuting field $W$, represented as a function over Weyl space points $(x,y)$, to the _value_ of an observed, commuting field $\Omega$ at physical space point $(X,Y)$. That it takes a function to a value is just mathematics; that it relates Weyl and physical spaces is a substantive physical postulate. However, it still makes no sense to consider $\Omega$ in regions smaller than $\theta$: we have in fact erased the unphysical information at such scales by smearing $W$. So strictly a single coordinate pair $(X,Y)$ does not label a physical point. Rather, the proposal is that these smeared fields are approximated by observable fields over regions greater than $\theta$; thereby formally deriving the latter, spatially localized objects from the former, purely algebraic objects. (In this case, we have smearing as ‘aggregating’, in a very loose sense.)

[^10]: $\Omega(X,Y)\propto\int\big(e^{-((X-x)^{2}+(Y-y)^{2})/\theta}\cdot W(x,y)\big)\ \mathrm{d}x\mathrm{d}y$.
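A minimal numerical sketch of smearing may also help (ours, in one dimension for simplicity; the sample field $W$, the value of $\theta$, and the choice of normalization are illustrative assumptions):

```python
import numpy as np

theta = 0.01                          # non-commutativity scale (area-like)
x = np.linspace(-1.0, 1.0, 2001)      # Weyl-space coordinate, 1-D here
W = np.sign(np.sin(40 * np.pi * x))   # a Weyl field with sub-theta structure

def smear(X):
    """CDP-style smearing: integrate W against a theta-sized bell function
    centred at the physical-space point X (cf. footnote 10)."""
    bell = np.exp(-(X - x) ** 2 / theta)
    bell /= np.trapz(bell, x)         # fix the proportionality constant
    return np.trapz(bell * W, x)

Omega = np.array([smear(X) for X in x[::50]])
# Omega varies only on scales above sqrt(theta): the oscillations of W at
# scales below sqrt(theta) have been erased, as the interpretational
# postulate requires, so points of Omega carry no sub-theta information.
print(np.max(np.abs(Omega)))          # far below 1, the amplitude of W
```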
Of course, in our existing theories, fields live in a full commuting spacetime, but that is an extrapolation from our actual observations of fields, which are always over finite regions, to date larger than $\theta$. On the proposed interpretation, then, any information contained in $\Omega$ about regions less than $\theta$ is not only unobserved, but unphysical, surplus representational ‘fluff’.

Now, nothing in the theory forces this picture as a physical story—it is merely an interpretational postulate. (Though we claim that it is conceptually coherent.) However, it has empirical consequences: the dynamics magnifies the $\theta$-scale non-commutativity to observable scales (e.g., Carroll et al. 2001). If those predictions are successful, then we have evidence that the underlying non-commutative field theory _and_ the interpretational postulate are correct. Imagining that situation then, we claim that the situation is exactly analogous to that of the Newtonians regarding action at a distance. That is, we would be justified in accepting the CDP ansatz and our interpretational postulate as novel principles of physical salience: they regulate what constitutes a physically salient derivation in the theory. In both cases, the final ground is the empirical success of the theory.

So it should be clear how the example illustrates our points about physical salience and our scheme proposed above. In the first place we have argued that non-commutative geometry is non-spatial, in the sense that it is ‘pointless’, and so must be understood as a purely algebraic theory. Then we have explicated a possible formal derivation of localizable fields from this more fundamental theory. And finally, we have sketched a scenario in which such a derivation leads to successful predictions, and hence to the conclusion that the formal derivation is physically salient, in fact _explaining_ the appearance of localized fields, and ultimately classical spacetime. Of course, we highlight that this discovery constitutes a _change_ in what derivations ‘deserve to be regarded as physically salient (rather than merely mathematically definable)’, to paraphrase Maudlin. This is the pattern that we will see in more detail in the examples of the following chapters, and to which we will return in the conclusion. In the following two sections we will investigate further what is achieved by such a derivation (§6), and use our account of physical salience to indicate what philosophy should aim to do when engaging emerging physics such as QG (§7).

## 6 Spacetime functionalism

The schema that we presented for a physically salient derivation relates closely (but not exactly) to Lewis’ (1972) account of functional identification. Since ‘spacetime functionalism’, of various varieties, has recently been discussed, it is worth drawing the comparison, to better understand the nature of the proposed emergence. Suppose, in idealization, that a theory $T$ is formulated as a postulate ‘$T[t]$’. $t$ represents what was traditionally called the ‘theoretical’ terms, though we prefer ‘troublesome’ (Walsh and Button 2018, §3.1): the idea is that these are the new terms introduced by the theory, and the ‘trouble’ is the question of how they garner meaning.
‘$T[\cdot]$’ also involves (traditionally) ‘observational’, or (according to Lewis) ‘old’, or (with Walsh and Button) ‘okay’ terms, and Lewis proposes that the $t$ are defined in terms of them by

$t=\iota x\ T[x].$ (5)

That is, ‘the $t$ are (if anything) the extant, unique things that satisfy the theory postulate’. The $t$ are thus defined in terms of their nomic relations to one another and to the okay terms—i.e., in terms of their ‘functional’ relations—and so (5) is a functional definition.[^11] As a result, the terms $t$ are rendered semantically okay (though they may remain metaphysically problematic).

[^11]: Lewis proposes in passing that the actual definition be modified to allow for _approximate_ satisfaction of $T[\cdot]$. In our opinion that is always going to be the case in actual theories, so this modification is not optional but required, and his discussion is a significant idealization. The harder question of how exactly the modification is to be implemented is not carefully addressed by Lewis.

Suppose that we also hold a postulate $R[r]$, where $r$ and $t$ do not overlap, so that our acceptance of $R$ does not depend on our views on the troublesome terms of $T$.[^12]

[^12]: Acceptance of a theory requires that it be meaningful, hence acceptance of $R[r]$ requires that the $r$ be referential, and thus that any troublesome $r$ can be functionally defined in terms of the okay $r$ as in (5), mutatis mutandis. We accept this assumption for Lewis’ cases, but we will see that things are more complex in the case of QG.

Further suppose that we come to believe $T[r]$. Now,

$T[r],\ t=\iota x\ T[x]\ \vDash\ r=t,$ (6)

so, by definition of the $t$, $T[r]$ deductively entails the identity of the objects of $R$ and $T$. Lewis’ point is the epistemic one that such functional identifications are thus not inductive: given a functional definition, once one accepts that the objects of $R$ play the same roles as those of $T$, _logic and meaning alone commit one to accepting their identity_. Lewis offered electromagnetic waves and light as an example of the scheme, but of course his point was that ‘when’ neuroscience showed that neural states played the functional roles of mental states, then they would—as a matter of logic and definition—be identified.

The subject of this book also broadly fits Lewis’ scheme: theory $T$ is our spacetime theory assumed by QFT and relativity theory, while $R$ is a theory of QG. Denote them $ST$ and $QG$, respectively, and use $ST$ to functionally define any troublesome spacetime terms. Then, according to the schema of §4, a physically salient derivation of $ST$ from $QG$ shows that ‘aggregates’ of $QG$, described using its terms $q$, satisfy $ST[\cdot]$—that they indeed play the functional roles of the objects of $ST$. So the identity follows. However, there are differences from Lewis’ functionalism, which we will explain presently. Now, how to turn Lewis’ scheme into a concrete plan for the functional reduction of spacetime?
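Before turning to that question, we note that the deductive character of (6) can even be checked mechanically; here is a minimal sketch in Lean (ours; a uniqueness clause stands in for the definite description $\iota x\ T[x]$):

```lean
-- Lewis's functional identification, deduction (6): if t is defined as the
-- unique satisfier of the theory postulate T (definition (5)), then from
-- the further premise T[r] the identity r = t follows by logic alone.
example {α : Type} (T : α → Prop) (r t : α)
    (hdef : T t ∧ ∀ x, T x → x = t)   -- t uniquely satisfies T
    (hr : T r) :                       -- we come to accept T[r]
    r = t :=
  hdef.2 r hr
```

The point is just Lewis’ own: once the definition is in place, no induction is needed for the identification.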
The spacetime functionalism recently introduced by Lam and Wüthrich (2018, forthcoming) is based on the general scheme in the spirit of Kim (2005, 101f), according to which a functional reduction of higher-level properties or entities to lower-level properties or entities consists in two necessary and jointly sufficient steps:[^13]

1. (FR1) The higher-level entities/properties/states to be reduced are ‘functionalized’; i.e., one specifies the causal roles that identify them, effectively making (5) explicit.
2. (FR2) An explanation is given of how the lower-level entities/properties/states fill this functional role, so that we come to accept $T[r]$.

[^13]: Kim’s model involves three steps, where the second is to identify the entities in the reduction base that perform the role at stake, and the third is to construct a theory explaining how these fundamental entities perform that role. We subsume these two steps in our second stage.

If these two steps are fulfilled, then it follows that the higher-level entities/properties/states are realized by the lower level ones. Applying the template of functional reduction to the case of the emergence of spacetime in QG, the two steps above become:

1. (SF1) Spacetime entities/properties/states, $s$, are functionalized by specifying their identifying roles, such as spacetime localization, dimensionality, interval, etc. Effectively, one makes explicit $s=\iota x\ ST[x]$.
2. (SF2) An explanation is given of how the fundamental entities/properties/states, $q$, postulated by the theory of quantum gravity fill these roles, so that we come to accept $ST[q]$.

Again, if these two steps are fulfilled, it follows that the (perhaps aggregated) QG entities/properties/states _are_ the spacetime entities/properties/states. In the following chapters, after explaining each theory of QG and its conceptual foundations, we will follow this scheme in our discussions: on the one hand describing the functional roles of spacetime entities/properties/states, and on the other showing how the theory of QG proposes that those roles are played. Of course, given the evidential state of QG, we do not claim that these proposals are correct: we only describe how the theories _may_ functionally reduce spacetime, not—as far as we currently know—how they _do_.

Several remarks are in order regarding the functionalist approach to spacetime. First, it should be made clear that we take emergence and reduction to be compatible with one another, and hence functional _reduction_ may serve as a template to explain the _emergence_ of a higher-level feature, i.e., the fact that higher-level entities exhibit novel and robust behaviour not encountered or anticipated at the more fundamental level.[^14]

[^14]: As restated many times in our earlier publications, and in agreement with what we take to be the consensus in philosophy of physics as stated, e.g., in Butterfield (2011a, b) and Crowther (2016, §2).

Second, there is a sense in which a functionalism about spacetime must start from a broader conception of functional reduction than is usual in the familiar functionalisms in the philosophy of mind or the philosophy of the special sciences. There, a mental or biological or other higher-level property is understood to be determined by—indeed, usually _identified with_—its _causal role_ within the relevant network such as the network of mental or biological activities.
If in spacetime functionalism the roles are still supposed to be _causal_, then a much broader notion of ‘causal’ must be at work, one that does not in any way depend on the prior existence of spacetime. As it is not clear what that would be, it is preferable to formulate a notion of functionalism devoid of any insistence that the functional roles be causal.

Third, the central claim of spacetime functionalism is that it is sufficient to establish only the functionally relevant aspects of spacetime. In particular, it is therefore not necessary to somehow derive relativistic spacetime in its full glory and in its every aspect in order to discharge the task. Naturally, this raises the question of what these functionally relevant aspects of spacetime are—the task of (SF1). As we will see in the following chapters, different approaches to QG take different stances on what functions are to be recovered, though broadly speaking, all aim to recover functions sufficient for the empirical significance of basic metrical and topological properties. Our stance will be that the list of functions cannot be determined _a priori_ from conceptual analysis of classical spacetime theories, but by the twin demands of the empirical, and of the resources of the proposed reducing theory. In short, part of the work of each chapter will be to identify the spacetime functions recovered in the different approaches, and indicate how they relate to observation.

Fourth, the scheme permits a form of ‘multiple realizability’, as is typical of functionalism also in the philosophy of mind or the philosophy of the special sciences: (SF2) allows that different (kinds of) fundamental entities might play one and the same functional role, i.e., that the ‘realizer’ of spacetime might have been something other than what it in fact is. This liberal stance spurs a concern that functionalism is too weak a condition to secure the emergence of spacetime, that the true nature of spacetime is not exhausted by its functional roles, so that none of the mere functional realizers could ever truly be spacetime. In particular, the worry continues, a rash reliance on functionalism misses the qualitative nature of spacetime—some kind of spacetime ‘qualia’, as it were—and it is precisely such qualitative features that make spacetime what it is, and which cannot be recovered by mere functional realization. However, the case of spacetime is disanalogous to that of mind: we agree with Knox (2014) who states that where “the fan of qualia [in the philosophy of mind] has introspection, the fan of the [spacetime] container has only metaphor” (16), and with Lam and Wüthrich (2018) who agree that the “nature and status of the evidence in favour of [mental] qualia may be equivocal, but the alleged ineliminable intrinsically spatiotemporal but ineffable quality of spacetime substance remains positively elusive” (43f). We conclude with them that the qualia worry in this form gets little if any traction in the spacetime case.[^15]

[^15]: We also concur with Lam and Wüthrich (2018) in their rejection of the version of this concern articulated in Ney (2015), who worries that if the fundamental entities are not already appropriately (spatio)temporal in their nature, they cannot ‘build up’ or constitute spacetime as they are not the right kind of stuff (see also Hagar and Hemmo 2013). As diagnosed by Lam and Wüthrich, advocates of this worry seem to rely on an unreasonably narrow concept of constitution.
Borrowing a distinction from Le Bihan (2018) between a “hard” and an “easy problem” of spacetime emergence, spacetime functionalism amounts to the denial that there is a hard problem of an unbridgeable explanatory gap between the fundamental, non-spatiotemporal and the emergent, spatiotemporal realm. For the functionalist, what is to be shown—by a physically salient derivation—is how the fundamental degrees of freedom can collectively behave in ways such that they play the required spacetime roles. And nothing more. No special character, or essence, or metaphysical nature need be accounted for. Functional identification requires no ‘luminosity’ of light beyond the behavior of electromagnetic waves, or ‘consciousness’ beyond the functioning of neurons. Or, in our case, no special ‘spatiotemporality’ that the non-spatiotemporal could never obtain. Once one has shown that the non-spatiotemporal plays the roles of the spatiotemporal—and so _is_ the spatiotemporal—no more need be said: one has a full scientific account of the emergence of spacetime, and no ‘explanatory gap’ remains.

Fifth, functionalism shows how the goal of reduction can be the scientific explanation of the functional roles of higher-level entities/structures/states by lower-level entities/structures/states (and nothing more). But it is debatable to what exactly spacetime functionalism is ontologically committed: substances, relations, entities, structures, states, or something else. We will not further pursue this debate as we believe it to be orthogonal to the concerns of this book. Thus we hope that the reader will forgive our switching between speaking of the spatiotemporal as if it were an entity, or a structure, or a state, or something different yet again. We simply aim to avoid torturing English more than necessary, and no deep philosophical commitment should, for instance, be read into our using ‘spacetime’ as a noun.

Sixth, we are far from the first to suggest functionally defining space or spacetime. DiSalle (2006, chapter 2) reads Newton’s Scholium to the definitions in much this way (though Huggett 2012 disagrees). Functionalist strategies have also become very visible in the philosophy of non-relativistic quantum mechanics, where Wallace (2012) deploys one in his defence of an Everettian interpretation and Albert (2015, Ch. 6) in support of wave function monism. Those latter applications differ from ours because they are concerned with recovering three-dimensional physical space. In contrast, spacetime functionalism in QG is commissioned with functionally recovering 4-dimensional spacetime, and so relates to work by Knox (2013, 2014, 2019) in the context of classical spacetime physics. For her, something ‘plays spacetime’s role’ and thus _is_ spacetime “just in case it describes the structure of inertial frames, and the coordinate systems associated with these” (2014, 15). In GR, the metric field performs spacetime’s role in this sense and thus is identified with spacetime by her. As the metric may itself not be fundamental but instead emerge from the collective behavior of more fundamental degrees of freedom, she explicitly leaves open the possibility that the realizers of spacetime’s functions may themselves not be fundamental (Knox 2013, 18).
As the relationship between the fundamental degrees of freedom and the emergent spacetime realizer is left untouched by Knox’s inertial frame functionalism, the latter also does not shed any light on it. (fn. 16: Cf. Lam and Wüthrich (2018, 40) and Lam and Wüthrich (forthcoming, §3) for a more detailed discussion of inertial frame functionalism and how it relates to our project.)

Seventh and finally, there is an important but subtle difference in the application of Lewis’s scheme to QG from that in the cases he has in mind. Suppose we accept a spacetime theory $ST[s]$, where whatever it is that performs the spacetime functions is denoted by the troublesome terms, $s$. These, following Lewis, we take to be defined by

$s=\rotatebox[origin={c}]{180.0}{$\iota$}x\ ST[x].$ (7)

The okay terms appearing in $ST[\cdot]$ would refer to matter of various kinds, its relative motions and point-coincidences: so, for instance, the metric in GR might be defined locally in terms of its role in determining motions under gravity or scattering amplitudes. We think that this part of Lewis’ picture—which corresponds to (SF1)—fits our cases well. But what about the second part of his scheme, involving $R[r]$? Although the result is still a functional identification, its significance has shifted somewhat, as we shall now explain.

Butterfield and Gomes (2020a; 2020b) analyze recent proposals for spacetime functionalism in explicitly Lewisian terms. They emphasize, as we have, that in Lewis’ scheme theoretical identification follows by definition alone (once the $r$s are known to play the role of the $t$s), and that functional identification is a species of reduction. But they also show how various spacetime and temporal functionalisms follow the ‘Canberra plan’, according to which the troublesome $t$s are not only defined by $T$, but are also ‘vindicated’ by their functional identification as $r$s. For instance, just as mental states, perhaps, turn out to be neural states, so, in their examples, a temporal metric might be identified with purely spatial structure; then, if neural states or spatial structures are on a firm (or firmer) ontological footing than mental states or time, the identifications show that the latter are equally well grounded. They are, that is, vindicated against any metaphysical suspicions raised against them. That vindication is not by itself achieved by the functional definition (5) of the $t$s; that merely makes the terms referential, so that they can be meaningfully employed. Put another way, (FR1) alone does not vindicate the mental, for instance; (FR2) is also needed, to show how the mental is part of the physical. (fn. 17: Or, put yet another way, the $t$ are often troublesome both semantically and ontologically: the functional definition takes care of the first problem, while the functional identification takes care of the second. When we use ‘troublesome’ we always mean semantically.) Regarding these cases, we are in agreement with Butterfield and Gomes’ emphasis of this important distinction, and its applicability to the cases that they discuss. However, in our cases, for which $R$ is some $QG$, the second step, while still involving a functional identification, does not follow the Canberra plan, because the troublesome terms, $q$, of $QG$ are non-spatiotemporal, and so on a _weaker_, not firmer, footing—the ontological and semantic correlate of empirical incoherence. (fn. 18: Butterfield and Gomes do not claim otherwise, and indeed acknowledge that QG will look different (2020b, 3).)
Ontologically, as we have discussed, our physical and metaphysical categories assume spatiotemporality, and so the natures of the $q$ are mysterious. Semantically, we can expect an attempt to functionally define the $q$s as $q=\rotatebox[origin={c}]{180.0}{$\iota$}x\ QG[x]$ to fail. Lewis’ scheme for functional definition requires that a theory have sufficient okay terms to _uniquely_ define the troublesome ones: if many collections of terms satisfy the putative definition, then it fails to establish reference. But that is what one expects in a theory that breaks from established categories as radically as a non-spatiotemporal one; the terms that we take to be okay are systematically spatiotemporal in some way, and so are expected not to appear in $QG$. And indeed, we contend that the theoretical concepts of the theories we consider in this book cannot be defined without appeal to spatiotemporal concepts external to the basic formulation of the theory.

Given this situation, the significance of functional reduction is different from the way Lewis (and Kim) proposed. Rather than following the Canberra plan of vindicating spacetime objects by reduction, in our approach to QG things are _reversed_: the non-spatiotemporal objects of $QG$ are vindicated via their identifications with spatiotemporal objects. Clearly this approach only works to the extent that the spatiotemporal is itself on a firm ontological footing, which of course is a topic of endless debate. To skirt such debates in this book we will remain as neutral as possible, and not take any stand on the metaphysical nature of spacetime features such as topology or metricity, so that our conclusions remain valid for anyone who accepts them under whatever interpretation.

Within Lewis’ framework, the vindication of the $q$ works as follows. Suppose that non-spatiotemporal $QG[q]$ has been proposed. As explained, the $q$ are semantically troublesome and ontologically suspect. Moreover, until we accept that a derivation of (at least a fragment of) $ST$ is physically salient, we have no empirical grounds for accepting $QG$. Such a derivation will not only provide grounds for $QG$, but also define and vindicate the $q$. Introducing the ‘aggregate operator’ $\alpha(\cdot)$, according to our schema, when we have a physically salient derivation of spacetime properties, then we accept

$\alpha(q)=\rotatebox[origin={c}]{180.0}{$\iota$}x\ ST[x].$ (8)

In conjunction with $ST[s]$ this entails that

$\alpha(q)=s$ (9)

more-or-less as for Lewis. However, the reversal of the Canberra plan makes several things different. First, semantics. As noted, the $q$ were not antecedently defined, but now can be through their—or rather the $\alpha(q)$’s—role as spacetime entities/structures/states. In other words, (8) is in part definitional of the $q$: the troublesome non-spatiotemporal terms of $QG$ can only be defined with reference to spatiotemporal terms not native to $QG$. Moreover, (8) only succeeds in defining the $q$ if in physical fact they play the ascribed roles, and do not merely mimic them formally; something that the physical salience of the derivation will secure. (fn. 19: (8) is not purely definitional, since it also involves an existential commitment that the $q$s exist. And it need not fully define $q$; we also still have that $q=\rotatebox[origin={c}]{180.0}{$\iota$}x\ QG[x]$ by definition.)
Second, ontology. The $q$s are placed on a firm ontological footing—are vindicated—when we accept that the $\alpha(q)$ are in physical fact those entities/structures/states that play the spacetime role. (fn. 20: In Huggett and Wüthrich (2013, 284), we described this approach to vindication as physical salience flowing down to the $q$ ‘from above’.) Once again, acceptance of the physical salience of the derivation secures just that.

Finally, epistemology. In Lewis’ scheme, we have independently accepted theories of, say, neuronal and mental states, and _later_ discover that they play the same functional roles, entailing that they are identical. In our case, the acceptance that $QG$’s objects (or rather their aggregates) play the same roles as $ST$’s objects, and hence are identical with them, is _simultaneous_ with our acceptance of $QG$. In general terms, the evidence for $R[r]$ is no longer antecedent (or independent) of the evidence for $T[r]$, but rather the very same evidence. As such the epistemic calculus is different. In one case, observations of neuronal states can be made independently of mental states, and we only have to show that they perform the relevant functions: producing suitable behaviors, for instance. In the other, observations are not independent of spacetime states, and have to support both the truth of a theory of QG, and that its objects perform the right functions. To give evidence, that is, that the formal derivation of those functions is indeed physically salient. As a result, although the deductive logic is the same, the empirical inference to the premises of the identification is different, and indeed weaker. However, as we say, it is of the normal empirical kind, and we fully expect it to be made for a successful theory of QG. There is no special ground for skepticism.

So much for the functionalism that lies behind the investigations of this book. But why is finding a functional reduction in any way a philosophical task, rather than one for physics?

## 7 The role of philosophy in physics

As we noted, the theories that we plan to investigate are all speculative at present, faced with considerable formal and empirical uncertainties. So what can we hope to learn from a philosophical enquiry into something that is at worst likely false, or at best a work in progress? We see the situation as characteristic of emerging fundamental physics (and perhaps other sciences). The process of discovery takes place along various fronts: obviously, new empirical work constrains theory and requires explanation; also obviously, new mathematical formalisms are tried out and explored; less obviously, but just as importantly, conceptual analysis of the emerging theory is undertaken. In particular, we want to stress that this last kind of work is carried out concurrently with the empirical and theoretical. One should not view interpretation as something that merely happens after an uninterpreted formal structure is presented, but as an inextricable aspect of the process of discovery. As such, it is something that has to be carried out on inchoate theories, in order to help their development into a finished product. We claim that this view is supported by the historical record: we have in fact already seen this for Newtonian gravity. But one can equally well point to the absolute-relative debate in the development of the concept of motion, or 19th-century efforts to come to grips with the physical significance of non-Euclidean geometry.
These debates did not wait until after a theory was developed to clarify its concepts; rather they had to be carried out simultaneously, as an integral part of the development of the theory (see DiSalle (2006)). Of course we are hardly the first to realize that such philosophical issues have to be addressed together with the empirical and theoretical ones. Many of Kuhn’s (1962) arguments illustrate this point, and more recently it is a major theme of Friedman (2001). But while we agree with their focus on philosophical, conceptual analysis as an essential part of theory construction, we don’t intend to get involved in issues involving the a priori or incommensurability; instead, we want to emphasize the practical role for analysis in the development of QG.

In the search for a new fundamental theory, the goal is—as it was for Einstein and for Newton—a new formalism plus an interpretation that connects parts of the formalism to antecedently understood aspects of the physical world, especially to the empirical realm. That is, an interpretation of how the more fundamental plays the functional roles of the less fundamental. And of course that means undertaking the project that we have been talking about in this section, of deriving spatiotemporal predictions from theories of QG. But one never simply co-opts or invents formalism without some eye on the question of how it represents existing physics of interest; and as the formalism is developed it becomes possible to see more clearly how and what the new formalism represents. Addressing this question is of ongoing importance for finding the right formalism for the area under study. Moreover, constructing such a formalism does not typically proceed in a monolithic fashion; instead different fragments of theory are proposed, investigated, developed or abandoned. For example, think of the development of the standard model of QFT from the early days of quantum mechanics. So the analysis of concepts of the new theory in terms of existing physics is often faced with a range of half-baked theories and models. All the same, lessons about how a more developed, less fragmented theory can be found depend on asking how the fragments represent known physics—the answers are potential clues to how the finished product could do so. We believe that contemporary QG should be thought of in just this way—certainly the fragmentation is real! Our primary goal is to look at a range of the half-baked fragments and ask how they connect to spatiotemporal phenomena. Since they do not do so in a familiar way, in terms of a continuous manifold of points, the question becomes ‘how does spacetime emerge from the underlying physics?’. We hope, therefore, that by concentrating on the question of emergence, aside from all the other issues involved in the search for a theory of QG, we will be performing a service to physicists working in QG, by focussing their attention on what is already known—and reminding them that success depends on making it part of the search. Naturally, we do not expect to find solutions of the order of Newton or Einstein! Indeed, a lot of what we shall do is draw out answers already given by physicists; we believe that careful philosophical analysis of these answers can help clarify them, revealing strengths and weaknesses, and hence aid progress. (Moreover, because we are focussed on this quite narrow issue, we can survey a wider range of approaches than most physicists actively study, and so provide a helpful overview of the topic.)
And hence we believe that in the examples we will consider there are important clues for the development of QG which philosophical analysis can reveal.

## 8 The plan for the book

Thus, our primary aim is to see how spacetime disappears and re-emerges in several approaches to QG, and to show how this is not just a technical issue for physicists to solve, but instead elicits numerous foundational and philosophical problems. As we work through three such approaches—causal set theory (CST), loop quantum gravity (LQG), and string theory, which were all briefly introduced in §2—we bring these philosophical issues to the fore and will concentrate our discussion on them. It is common to divide approaches to QG into those which start out from GR and attempt to convert it into a quantum theory of gravity in different ways, and those which depart from the standard model of particle physics and aim to add gravity to the other three forces of the standard model. In the former approaches, such as CST and LQG, we would not expect the resulting theories to fold in the physics of the standard model, whereas the latter, such as string theory, will presumably deliver more encompassing, unifying theories. It is clear that either way, a theory of QG needs to address how the geometrical degrees of freedom of spacetime interact with the matter degrees of freedom present in the world. But it is also clear that both kinds of degrees of freedom may well look very different from what we are used to from other theories.

The first two chapters after this one focus on CST. Chapter 2 introduces the basic kinematic axiom of the theory and shows how in it at least space disappears rather radically from the fundamental ontology, but also that temporal aspects do not all survive. This raises the immediate question of the relationship between the fundamental ontology of causal sets and that of relativistic spacetimes, a question we start to address in chapter 2. Although some functions of space can tentatively be recovered, what is needed is a more systematic understanding of how causal sets generically give rise to worlds which appear to be spatiotemporal in ways described, to good approximation, by GR. The way in which this ‘derivation’ of spacetime is attempted in CST is sketched and discussed in chapter 3. In this chapter, we will discuss the role played by introducing a dynamics for the theory. We will argue that the emergence of spacetime in CST is closely tied to deeply philosophical questions regarding the metaphysics of space and time.

Chapters 4 and 5 turn to LQG, retracing the disappearance and emergence of spacetime in this approach. Like CST, LQG builds a research program around what it takes to be GR’s central lesson. In the case of LQG, this is the insight that GR postulates a truly dynamical spacetime, interacting with other fields. The demand is encoded in the theory’s general covariance. LQG seeks to articulate a theory of QG by delicately applying known quantization procedures to a Hamiltonian formulation of GR. Chapter 4 chronicles and discusses whether and, if so, how this approach leads to the disappearance of spacetime. Unlike in CST, it is time whose existence is more threatened in LQG than space. Chapter 5 seeks to understand how relativistic spacetime then emerges from the fundamental theory, finding, again, close ties to philosophical questions.

Other approaches apply the strategies of perturbative QFT—so successful in understanding the other forces—to quantize gravity.
The technique calls for starting with a system in which the fields do not interact, to build up a space of states: a lowest, vacuum state, and states of discrete, particle-like ‘quanta’. Generally such a system is solved exactly, and the vacuum describes an obvious classical state. Then one introduces a small interaction, and uses approximation techniques to study the behavior of fields: especially the scattering of quanta. This approach was applied to gravity early on: Minkowski spacetime is a natural vacuum, and the gravitational field has quanta known as ‘gravitons’, very analogous to photons, the quanta of the electromagnetic field. Indeed, quite a lot is known about the quantized gravitational field through such methods, and this knowledge is taken as a constraint on a successful theory of QG. However, divergences prevent the theory from being generally applied; moreover, these divergences cannot be adequately resolved by ‘renormalization’ as they can for other QFTs. (fn. 21: See Kiefer (2004, chapter 2) for a very nice survey of QFT of the gravitational field.) String theory works within this approach, but with one important tweak: instead of quantized point-like particles, it deals in quantized 1-dimensional, string-like objects. This, it appears, makes all the difference to the finiteness of the theory.

Chapters 6–9 address the emergence of spacetime in string theory. Chapter 6 is a fairly technical introduction to the theory, aimed at philosophers of physics: it aims to be more intuitive, and more explicit about the conceptual and physical framework, than physics textbooks usually are. For those who have some familiarity with classical and quantum field theory, it will tell you what you need to know about strings. Chapter 7 deals with string ‘dualities’: some fascinating and powerful symmetries that arise when space has an interesting topology (a cylinder, say). We argue that they are the kind of symmetries that are not merely observational, but ‘go all the way down’, showing that string theory does not possess, in its basic objects, familiar spacetime properties, such as definite size or topology; it is largely for that reason that spacetime ‘emerges’. Chapter 8 is again fairly technical, explaining and analyzing in some detail the derivation of the Einstein field equation for gravity from string theory. This is a central part of emergence, for it derives the spacetime metric, giving empirical content to spacetime geometry, and gives rise to GR. Finally, chapter 9 draws on the material of the previous chapters to argue that indeed spacetime emerges in string theory, how this happens, and what ‘principles of physical salience’ are required.

The final, concluding, chapter draws on the results of the previous ones to return to the question of this introduction. How can we see that the derivations of spacetime that we have investigated are themselves physically salient, and what principles can we extract from them that might be helpful in the search for QG?

## References

* Albert [2015] David Albert. _After Physics_. Harvard University Press, Cambridge, MA, 2015.
* Albert [1996] David Z Albert. Elementary quantum metaphysics. In _Bohmian mechanics and quantum theory: An appraisal_, pages 277–284. Springer, 1996.
* Barrett [1996] Jeffrey A Barrett. Empirical adequacy and the availability of reliable records in quantum mechanics. _Philosophy of Science_, 63:49–64, 1996.
* Bose et al. [2017] Sougato Bose, Anupam Mazumdar, Gavin W Morley, Hendrik Ulbricht, Marko Toroš, Mauro Paternostro, Andrew A Geraci, Peter F Barker, MS Kim, and Gerard Milburn. Spin entanglement witness for quantum gravity. _Physical Review Letters_, 119(24):240401, 2017.
* Busza et al. [1999] W. Busza, R.L. Jaffe, J. Sandweiss, and F. Wilczek. Review of speculative “disaster scenarios” at RHIC. Technical report, Brookhaven National Laboratory, http://www.bnl.gov/rhic/docs/rhicreport.pdf, September 1999.
* Butterfield [2011a] Jeremy Butterfield. Emergence, reduction and supervenience: A varied landscape. _Foundations of Physics_, 41:920–959, 2011a.
* Butterfield [2011b] Jeremy Butterfield. Less is different: emergence and reduction reconciled. _Foundations of Physics_, 41:1065–1135, 2011b.
* Butterfield and Gomes [2020a] Jeremy Butterfield and Henrique Gomes. Functionalism as a species of reduction, July 2020a. URL http://philsci-archive.pitt.edu/18043/. Submitted to: “Current Debates in Philosophy of Science: In honor of Roberto Torretti”, edited by Cristian Soto (to be published in the Synthese Library).
* Butterfield and Gomes [2020b] Jeremy Butterfield and Henrique Gomes. Geometrodynamics as functionalism about time, October 2020b. URL http://philsci-archive.pitt.edu/18339/. Submitted to: From Quantum to Classical: Essays in memory of Dieter Zeh, edited by Claus Kiefer. Springer, Cham, 2021.
* Callender and Huggett [2001] Craig Callender and Nick Huggett. Why quantize gravity (or any other field for that matter)? _Philosophy of Science_, 68:S382–94, 2001.
* Carroll et al. [2001] Sean M. Carroll, Jeffrey A. Harvey, V. Alan Kostelecky, Charles D. Lane, and Takemi Okamoto. Noncommutative field theory and Lorentz violation. 2001. URL http://arxiv.org/abs/hep-th/0105082v1.
* Chaichian et al. [2000] M. Chaichian, A. Demichev, and P. Presnajder. Quantum field theory on noncommutative space-times and the persistence of ultraviolet divergences. _Nucl. Phys._, B567:360–390, 2000. URL http://arxiv.org/abs/hep-th/9812180.
* Clarke et al. [1956] Samuel Clarke, Gottfried Wilhelm Leibniz, and Robert Gavin Alexander. _The Leibniz-Clarke Correspondence: Together With Extracts from Newton’s Principia and Opticks_. Manchester University Press, 1956.
* Crowther [2016] Karen Crowther. _Effective Spacetime: Understanding Emergence in Effective Field Theory and Quantum Gravity_. Springer, Cham, 2016.
* Descartes [1644] Rene Descartes. _Principia Philosophiae_. Apud Ludovicum Elezvirium, 1644.
* DiSalle [2006] Robert DiSalle. _Understanding Spacetime: The Philosophical Development of Physics from Newton to Einstein_. Cambridge University Press, Cambridge, 2006.
* Earman [1989] John Earman. _World Enough and Space-Time: Absolute versus Relational Theories of Space and Time_. MIT Press, Cambridge, MA, 1989.
* Friedman [2001] Michael Friedman. _Dynamics of Reason_. CSLI Publications, 2001.
* Gelfand and Naimark [1943] I.M Gelfand and M.A Naimark. On the embedding of normed rings into the ring of operators in Hilbert space. _Mat. Sbornik_, 12:197–213, 1943.
* Geroch [1972] Robert Geroch. Einstein algebras. _Communications in Mathematical Physics_, 26:271–275, 1972.
* Hagar and Hemmo [2013] Amit Hagar and Meir Hemmo. The primacy of geometry. _Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics_, 44(3):357–364, 2013.
* Hawking [1974] S. W. Hawking. Black hole explosions? _Nature_, 248(5443):30–31, 1974.
* Hesse [1961] Mary B Hesse. _Forces and fields_. T. Nelson, 1961.
* Huggett [2012] Nick Huggett. What did Newton mean by ‘absolute motion’? In Andrew Janiak and Eric Schliesser, editors, _Interpreting Newton: Critical Essays_, pages 196–218. Cambridge University Press, 2012.
* Huggett [2018] Nick Huggett. Spacetime ’emergence’, December 2018. URL http://philsci-archive.pitt.edu/15440/. To appear in Routledge Companion to Philosophy of Physics, edited by Eleanor Knox and Alastair Wilson.
* Huggett and Wüthrich [2013] Nick Huggett and Christian Wüthrich. Emergent spacetime and empirical (in)coherence. _Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics_, 44(3):276–285, 2013.
* Huggett et al. [forthcoming] Nick Huggett, Fedele Lizzi, and Tushar Menon. Missing the point in noncommutative geometry. _Synthese_, forthcoming.
* Kiefer [2004] Claus Kiefer. _Quantum Gravity_. Oxford University Press, 2004.
* Kim [2005] Jaegwon Kim. _Physicalism, or Something Near Enough_. Princeton University Press, Princeton, 2005.
* Knox [2013] Eleanor Knox. Effective spacetime geometry. _Studies in History and Philosophy of Modern Physics_, 44:346–356, 2013.
* Knox [2014] Eleanor Knox. Spacetime structuralism or spacetime functionalism? Manuscript, 2014.
* Knox [2019] Eleanor Knox. Physical relativity from a functionalist perspective. _Studies in History and Philosophy of Modern Physics_, 67:118–124, 2019.
* Kuhn [1962] Thomas S Kuhn. _The Structure of Scientific Revolutions_. University of Chicago Press, Chicago, 1962.
* Lam and Wüthrich [2018] Vincent Lam and Christian Wüthrich. Spacetime is as spacetime does. _Studies in History and Philosophy of Modern Physics_, 64:39–51, 2018.
* Lam and Wüthrich [forthcoming] Vincent Lam and Christian Wüthrich. Spacetime functionalism from a realist perspective. _Synthese_, forthcoming.
* Le Bihan [2018] Baptiste Le Bihan. Priority monism beyond spacetime. _Metaphysica_, 19:95–111, 2018.
* Leake [1999] Jonathan Leake. Big bang machine could destroy earth. _The Sunday Times_, July 18 1999.
* Lewis [1972] David Lewis. Psychophysical and theoretical identifications. _Australasian Journal of Philosophy_, 50(3):249–258, 1972.
* Lizzi [2009] Fedele Lizzi. Noncommutative spaces. _Lecture Notes in Physics_, 774:89–109, 2009.
* Maudlin [2007] Tim Maudlin. Completeness, supervenience, and ontology. _Journal of Physics A: Mathematical and Theoretical_, 40:3151–3171, 2007.
* Nastase [2005] Horatiu Nastase. The RHIC fireball as a dual black hole. _arXiv preprint hep-th/0501068_, 2005.
* Newton [1726] Isaac Newton. _Philosophiae naturalis principia mathematica_, volume 3. Apud Guil. & Joh. Innys, Regiæ Societatis typographos, 1726.
* Newton [1730] Isaac Newton. _Opticks_. Prabhat Prakashan, 1730.
* Ney [2015] Alyssa Ney. Fundamental physical ontologies and the constraint of empirical coherence. _Synthese_, 192:3105–3124, 2015.
* Oriti [2014] Daniele Oriti. Disappearance and emergence of space and time in quantum gravity. _Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics_, 46:186–199, 2014.
* Sklar [1983] Lawrence Sklar. Prospects for a causal theory of space-time. In Richard Swinburne, editor, _Space, Time and Causality_, pages 45–62. D. Reidel Publishing Company, Dordrecht, 1983.
* Wallace [2012] David Wallace. _The Emergent Multiverse: Quantum Theory According to the Everett Interpretation_. Oxford University Press, Oxford, 2012.
* Walsh and Button [2018] Sean Walsh and Tim Button. _Philosophy and Model Theory_. Oxford University Press, Oxford, 2018.
* Wüthrich [2005] Christian Wüthrich. To quantize or not to quantize: fact and folklore in quantum gravity. _Philosophy of Science_, 72:777–788, 2005.
# Dynamic industry uncertainty networks and the business cycle

(Acknowledgments: We thank Charlie Cai, Daniel Greenwald, Stephen Terry, Emil Verner and Jacky Qi Zhang for their valuable comments. We thank the participants at the 3rd Workshop on Macroeconomic Research (Cracow University), CFE 2020, and the Southwestern Finance Association 2021 Annual Meeting. Mattia Bevilacqua gratefully acknowledges the support of the Economic and Social Research Council (ESRC) in funding the Systemic Risk Centre [grant numbers ES/K002309/1 and ES/R009724/1]. Jozef Barunik gratefully acknowledges support from the Czech Science Foundation under the EXPRO GX19-28231X project.)

Jozef Baruník (Charles University and Czech Academy of Sciences)
Mattia Bevilacqua (London School of Economics, Systemic Risk Centre)
Robert Faff (University of Queensland, Australia)

###### Abstract

We argue that uncertainty network structures extracted from option prices contain valuable information for business cycles. Classifying U.S. industries according to their contribution to system-related uncertainty across business cycles, we uncover an uncertainty hub role for the communications, industrials and information technology sectors, while shocks to materials, real estate and utilities do not create strong linkages in the network. Moreover, we find that this ex-ante network of uncertainty is a useful predictor of business cycles, especially when it is based on uncertainty hubs. The industry uncertainty network behaves counter-cyclically in that a tighter network tends to associate with future business cycle contractions.

Keywords: Financial Uncertainty, Industry Network, Options Market, Business Cycle.

## 1 Introduction

Throughout history, industrial structure has witnessed influential economic and financial cycles in which different sectors seem to take on a leading role. (fn. 1: The ascendancy of technology and telecommunications is a notable recent example. The rapidly growing internet sector accounted for $2.1 trillion of the U.S. economy in 2018, or about 10% of the nation’s gross domestic product (GDP). Tech companies such as Apple, Google, and Amazon are leading the stock market; any small variation in their quarterly earnings or stock market prices can move the entire index.) Fluctuations in performance, valuation and interconnection have been crucially important not only for swings in financial markets, but also for the real economy and the prediction of business cycles. Despite the intensifying linkages emerging from economic activities among industries, we do not fully understand the influence of such networks on the real economy. Given this motivational backdrop, we develop forward-looking measures of industry uncertainty network structures, and we study how industry-specific shocks to option investors’ expectations linked to investor fear evolve over time, as well as how these shocks relate to macroeconomic activity.

Understanding the differential actions and performance of the largest firms within an economy’s various industrial sectors offers key insight into the aggregate economy. (fn. 2: For example, Gabaix (2011) reports that the total sales of the top 50 firms accounted for 25% of GDP in 2005. As another example, in December 2004, a $24 billion one-time Microsoft dividend boosted growth in personal income from 0.6% to 3.7% (Bureau of Economic Analysis, January 31, 2005).) Acemoglu et al.
(2012) document that significant aggregate fluctuations can originate from firm-specific microeconomic shocks or disaggregated sectors due to interconnections between different firms and sectors, functioning as a potential propagation mechanism of idiosyncratic shocks throughout the economy. Notably, Atalay (2017) concludes that 83% of the variation in aggregate output growth is attributable to idiosyncratic industry-level shocks, while Gabaix (2011) contends that a salient feature of business cycles is that firms and sectors comove.

To capture uncertainty connected to industrial structures (and especially driven by large firms), recent literature has exploited information about volatility or uncertainty shocks, highlighting the importance of network measures to capture the propagation of volatility mechanisms (e.g. Acemoglu et al., 2012; Carvalho and Gabaix, 2013; Gabaix, 2016; Acemoglu et al., 2017; Baqaee and Farhi, 2019; Herskovic et al., 2020). Carvalho and Gabaix (2013) argue that the sector-specific “fundamental” microeconomic volatility has explanatory power and can serve as an early warning signal of swings in macroeconomic volatility. Further, financial uncertainty shocks and fluctuations in risk have been identified as one of the main drivers of the U.S. business cycle (see Bloom, 2009; Christiano et al., 2014; Leduc and Liu, 2016; Basu and Bundick, 2017; Bloom et al., 2018; Ludvigson et al., 2020). Despite these enormous efforts, this recent surge of interest relies on historical or ex-post analysis. In contrast, we propose an ex-ante approach employing measures extracted from the option prices of individual firms. We argue that information reflected in the implied second moment of firms provides an enhanced overview of the future risk and performance of economic sectors and the firms therein, shedding light on their future outlook and evaluation in a forward-looking manner through the option investors’ expectations directly related to industry-linked “fears” (e.g. Diebold and Yılmaz, 2014; Baruník et al., 2020).

To infer such prospective information, we exploit an extensive sample of option prices to extract option-based uncertainty measures for key major firms across the full U.S. industrial landscape. We produce aggregate uncertainty measures for each industry at a daily frequency encompassing three recession periods (including the most recent Covid-19 crisis). Based on time-varying parameter approximating models, we then construct a measure of the ex-ante industry uncertainty network that reflects investor expectations of future uncertainty relevant to each industry. In doing so, we follow the work of Diebold and Yılmaz (2014) and Barunik and Ellington (2020), who argue that time-varying variance decompositions characterize well how shocks to uncertainty create dynamic networks. (fn. 3: Our approach is intimately connected to the network node degrees, mean degrees, and connectedness measures of Diebold and Yilmaz (2012) and Diebold and Yılmaz (2014).) We assess the industries’ varying contributions to shocks to uncertainty across business cycles.

Our analysis then quite naturally transitions to the question of how useful such uncertainty network measures are in predicting economic activity. We argue that, compared to currently existing approaches, industry-based uncertainty network measures represent more informative (leading) propagation channels highly relevant to predicting business cycles given their forward-looking feature.
The essence of our contribution is linked to two relatively recent strands of literature. The first strand is concerned with uncertainty measures and their relationship with business cycle fluctuations (e.g. Bloom, 2009, 2014; Jurado et al., 2015; Ludvigson et al., 2020). (fn. 4: See also Bachmann et al. (2013), Bachmann and Bayer (2014), Christiano et al. (2014), Decker et al. (2016), Bloom et al. (2018), and Arellano et al. (2019).) The second strand focuses on the role of sector-level or firm-to-firm linkages in microeconomic shocks and their relationship with the aggregate economy and changes in business conditions (see, e.g. Gabaix, 2011; Acemoglu et al., 2012; Carvalho and Gabaix, 2013; Barrot and Sauvagnat, 2016; Acemoglu et al., 2017; Baqaee and Farhi, 2019) or the survey in Carvalho and Tahbaz-Salehi (2019). Notably connected to the second strand of literature are studies on the role of production networks as a propagation mechanism from individual firms and/or industries to the real economy (see, e.g. Foerster et al., 2011; Di Giovanni et al., 2014; Ozdagli and Weber, 2017; Atalay, 2017; Garin et al., 2018; Auer et al., 2019; Lehn and Winberry, 2020). However, in contrast to this extant body of work, we adopt purely market-based networks as a mechanism to dynamically study the propagation of shocks to uncertainty from industries flowing to the real economy.

In addition, a recent body of work suggests that sector-specific shocks have become more volatile relative to aggregate shocks. For instance, Garin et al. (2018) develop an “islands” model with two sectors showing a decline in the contribution of aggregate volatility shocks to the business cycle in favour of sector-specific shocks. Similarly, Lehn and Winberry (2020) show that the empirical network is dominated by a few “investment hubs” that produce the majority of investment goods, are highly volatile, and have large aggregate effects on fluctuations in the business cycle. Inspired by these latest findings, our work builds on a similar framework. Augmenting the definition of “hubs” from the input-output network literature, we characterize critical “uncertainty hubs” as industries that largely transmit and/or receive uncertainty across the business cycle, versus “non-hubs”, being those industries that are (largely) neutral across business cycles. A shock to an uncertainty hub may directly trigger consequences for that specific industry, for the whole system and for the broader economy. While it can affect production, employment and growth at the hub, it can also generate larger uncertainty spillovers and changes in the prices, growth and production of other sectors in the system, affecting the broader economy. Our proposed framework with respect to the role of uncertainty hubs in driving fluctuations in the aggregate economy is reminiscent of investment-specific technology shocks (e.g. Greenwood et al., 2000; Justiniano et al., 2010). Given this framework, we hypothesize that the industry network constructed from uncertainty hubs contains greater predictability for business cycles compared to the non-hubs uncertainty network.

The main findings of our paper are as follows. First, we identify industries showing a stronger (versus weaker) contribution of shocks to uncertainty and their propagation, thus playing an essential role within the aggregate industry uncertainty network. Specifically, the communications, IT and industrials sectors play a key role, being classified as the main uncertainty hubs.
In contrast, materials, utilities and real estate are classified as non-hubs. Further, our analysis suggests that the hub role of the financial industry is limited to the global financial crisis (GFC). Second, we have further insights from the time-series perspective. We find that the ex-ante industry network rises sharply during the dot-com bubble and the GFC, and increases steadily afterwards. Additionally, we detect a rising importance of hub-specific shocks in the last decade, implying that shocks to hubs account for the majority of aggregate fluctuations post-GFC. Third, our empirical exercise shows that the ex-ante industry uncertainty network generates an important channel through which sector-specific shocks propagate, and it is a useful leading predictor of business cycles up to one year ahead. Changes in business cycle indicators occur in line with changes in the comovement of uncertainty across industries, and especially with changes in the network connectedness of uncertainty hubs. These results suggest that uncertainty hubs are a major channel for targeting policy effectiveness. Our results survive several robustness checks and the inclusion of additional control variables.

The remainder of this paper is organized as follows. Section 2 describes the data and sampling used in our study. Section 3 sets out the essence of the TVP-VAR network connectedness method applied to our chosen industry setting. Section 4 studies the dynamic aggregate uncertainty network connectedness, and section 5 presents the findings for the dynamic idiosyncratic uncertainty network connectedness through the business cycle. Section 6 studies the predictive ability of the networks for the real economy. Section 7 concludes the paper. Additional results are relegated to the appendix of the paper.

## 2 Industry uncertainty, investor beliefs and option prices

According to Kozeniauskas et al. (2018), conceptually three different types of uncertainty have been used in existing research: measures of uncertainty about macroeconomic outcomes (macro uncertainty); measures of the dispersion of firm outcomes (micro dispersion); and measures of the uncertainty that people have about what others believe (higher-order uncertainty). A common origin for the various uncertainty shocks can be found in macro uncertainty, shocks of which generate positive covariances between all pairs of uncertainty types. The VIX is a common proxy used in the financial economics literature to measure the unpredictability of future aggregate outcomes, or macro uncertainty (see Bloom, 2009; Bekaert et al., 2013; Leduc and Liu, 2016; Kozeniauskas et al., 2018; Bhattarai et al., 2020). The VIX index (often referred to as the “fear index”), introduced by the Chicago Board Options Exchange (CBOE), is a model-free forward-looking measure implied by option prices and reflects investors’ expectations about uncertainty in the stock market. It has also been linked with future macro outcomes in the aforementioned studies.

To study the dynamic uncertainty network, rather than looking at the whole U.S. stock market, we develop a forward-looking measure of uncertainty reflecting investor beliefs derived from option price data at the industry level. To capture industry uncertainty, we use forward-looking uncertainty measures that are intimately related to the VIX methodology and derived from the uncertainty of single firms within the main U.S. industries. More details on the composition of the 11 U.S. industries are provided in the Appendix, section A.
According to Kozeniauskas et al. (2018), micro uncertainty describes an increase in the uncertainty that firms have about their outcomes due to changes in idiosyncratic variables. To this end, while our proposed network measure might be influenced by a common structure in volatilities, our measure importantly contains additional critical information about idiosyncratic volatilities. Indeed, single-firm VIX measures can be a valid proxy for both macro and micro uncertainty, therefore capturing macro volatility as well as the dispersion of firm-specific outcomes. Moreover, such micro uncertainty is challenging to measure due to the scarcity of data on firm-specific beliefs. Our measures overcome this limitation since they can be computed at a high (daily) frequency. Finally, options-based measures of risk are superior to historical volatility measures with respect to both predictive power and the set of information they encompass (e.g. Christensen and Prabhala, 1998; Santa-Clara and Yan, 2010; Baruník et al., 2020).

### 2.1 Extracting model-free industry uncertainty from option prices

For each chosen firm, we compute a model-free implied volatility index as detailed in the Appendix, section B. This measure reflects expectations about investor uncertainty regarding the individual firm over the coming 30-day horizon. We then aggregate the individual firm information to construct a measure of ex-ante uncertainty at the industry level. Such a measure reflects the industry’s expected uncertainty over the next 30 days. More formally, the ex-ante industry uncertainty measure $\text{IVIX}^{(\text{Ind})}_{t}$ is constructed by taking the time-varying weighted average of the main five stocks in each industry at each point in time through our sample period as:

$\text{IVIX}^{(\text{Ind})}_{t}=\sum_{s\in N^{(\text{Ind})}}\mathcal{W}_{t}^{(s)}\text{VIX}_{t}^{(s)}$ (1)

where $\text{Ind}\in\{1,\ldots,11\}$ represents the industry we consider, $s$ is an index for one of the $N^{(\text{Ind})}$ firms included in the given industry at time $t$, $\text{VIX}_{t}^{(s)}$ is the implied volatility for an individual stock $s$, and $\mathcal{W}_{t}^{(s)}$ is the time-varying market capitalization weight of that specific stock $s$, computed as the ratio between the time-varying market capitalization of the stock and the total market capitalization of all stocks included in the industry.

### 2.2 Data

We use daily data encompassing the sample period January 2000 to May 2020 in each of the following 11 U.S. industries: consumer discretionary (CD), communications (CM), consumer staples (CS), energy (E), financials (F), health care (HC), industrials (IN), information technology (IT), materials (M), real estate (RE) and utilities (U). More specifically, we merge two data sources. We use options data from OptionMetrics from January 2000 to December 2018 to compute the individual stock VIXs. We expand the coverage of the VIX time series from January 2019 until May 2020 aided by the IHS Markit Totem Vanilla Volatility Swap data set, from which we collect broker-dealer consensus prices for the volatility strike of the swaps. (fn. 5: The Totem database is a service within IHS Markit that gathers a large variety of derivatives marks from the major broker-dealers and returns consensus prices. In the volatility swaps service, contributors are requested to price the volatility strike at which the swap would have an inception price of zero, which should be the traders’ best estimate of mid-market. Before January 2019, the majority of the data in the Totem Vanilla Volatility (or Variance) Swap services were monthly and therefore would not have served our purpose.)
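To make the aggregation in equation (1) concrete, the following is a minimal Python sketch of the industry index construction, assuming daily panels of firm-level VIXs and market capitalizations are held in pandas DataFrames; all variable and function names here are ours, not the paper's.

```python
import pandas as pd

def industry_ivix(firm_vix: pd.DataFrame, market_cap: pd.DataFrame) -> pd.Series:
    """Market-cap-weighted industry uncertainty index as in equation (1).

    firm_vix   : dates x firms, 30-day implied volatility of each firm
    market_cap : dates x firms, market capitalization of each firm
    Both frames cover the constituents of one industry. Weights are
    recomputed at every date, so constituent changes (new IPOs, S&P 500
    exclusions) are handled by adding or dropping columns.
    """
    weights = market_cap.div(market_cap.sum(axis=1), axis=0)  # W_t^(s)
    return (weights * firm_vix).sum(axis=1)                   # IVIX_t^(Ind)

# Hypothetical usage for the IT industry with five constituent firms:
# ivix_it = industry_ivix(vix_it_firms, mcap_it_firms)
```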
This aids the expansion of the individual stock VIXs, since we replace $\text{VIX}_{t}^{(s)}$ with the volatility strike of the swaps and then compute the industry uncertainty measures as in equation (1). (fn. 6: On the equality between the VIX and the strike of a volatility swap see, for instance, Filipović et al. (2016) and Cheng (2019). For more details on the VIX index, the variance swap market and their relationship see Carr and Wu (2006) and Carr and Lee (2009).) The other financial information regarding the selected stocks, such as market capitalization and trading volume, is collected from Bloomberg.

Overall, our data set includes option prices of 69 U.S. firms. We select the largest constituent stocks in each U.S. industry at every point in time, accounting for time-varying market capitalizations, new IPOs, exclusions of stocks from the $S\&P500$, and missing data. For instance, for some industries such as industrials, the same five stocks have been adopted throughout the sample period. In more dynamic sectors such as IT, we observe several changes in the stock ranking within our sample period. In cases where options on a specific firm have only been issued in recent times, we include the next-ranked firm as a substitute, to always ensure at least five stocks with available data in every sector across our time period. (fn. 7: The only exceptions are the materials and real estate industries. For the first, we use only four stocks, monthly interpolated between 01-2019 and 04-2019 due to data availability. This is because before 05-2019, for a few stocks in this sector, the submission service was monthly. Between 01- and 04-2019 no single-firm volatility data are available for the real estate industry; hence we replace $\text{IVIX}^{(\text{RE})}_{t}$ directly with the real estate sector volatility measure submitted to IHS Markit/Totem.) Table B1 in the appendix shows the included stocks within each industry and their available time period. Figure B1 in the appendix depicts an example of individual firm uncertainty $\text{VIX}^{(i)}$. (fn. 8: The CBOE has introduced stock market VIX series for a few stocks in the U.S. Comparing our calculations with their CBOE counterparts over the available periods shows a correlation, on average, exceeding 94%. This minor divergence is likely due to the interpolation between the two closest expiration dates to 30 days used in the CBOE methodology. For the data collected by IHS Markit, spanning a shorter time frame, the correlation between the consensus volatility and the CBOE VIX series is, on average, above 97%, again due to the interpolation used in the CBOE methodology.) The selected stocks account for more than 58% of the U.S. $S\&P500$ market capitalization, thus being a valid proxy for the 11 U.S. industries and representing a non-trivial fraction of U.S. GDP (e.g. Gabaix, 2011). A large representation of the U.S. stock market and its industries is what matters when studying these as economic and business cycle drivers. We report the descriptive statistics for the industry uncertainty indexes in Table 1.
Table 1: Industry Uncertainty $\text{IVIX}^{(\text{Ind})}_{t}$ Descriptive Statistics

| Ind | CD | CM | CS | E | F | HC | IN | IT | M | RE | U |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean | 0.321 | 0.318 | 0.230 | 0.270 | 0.370 | 0.252 | 0.300 | 0.342 | 0.289 | 0.343 | 0.232 |
| Standard Dev. | 0.116 | 0.121 | 0.096 | 0.106 | 0.279 | 0.089 | 0.148 | 0.144 | 0.111 | 0.191 | 0.107 |
| Min | 0.133 | 0.1305 | 0.115 | 0.128 | 0.145 | 0.136 | 0.139 | 0.138 | 0.152 | 0.122 | 0.104 |
| Max | 0.876 | 1.137 | 0.911 | 1.404 | 2.682 | 0.815 | 1.409 | 0.994 | 1.191 | 2.225 | 1.018 |
| Skewness | 1.363 | 1.538 | 1.777 | 3.249 | 3.727 | 1.552 | 2.409 | 1.504 | 2.220 | 2.769 | 2.246 |
| Kurtosis | 4.658 | 5.755 | 6.363 | 21.240 | 20.468 | 5.969 | 11.411 | 5.011 | 11.247 | 17.057 | 9.788 |

Notes: This table reports the descriptive statistics for the industry uncertainty $\text{IVIX}^{(\text{Ind})}_{t}$ index of 11 industries: consumer discretionary (CD), communications (CM), consumer staples (CS), energy (E), financials (F), health care (HC), industrials (IN), information technology (IT), materials (M), real estate (RE) and utilities (U). The time period is from 03-01-2000 to 29-05-2020, at a daily frequency.

From Table 1 we observe that the financial industry uncertainty shows the highest mean, followed by the information technology and real estate industry uncertainty measures, with consumer staples and utilities showing the lowest mean values. The financial sector is also found to be the one with the highest standard deviation and skewness of the uncertainty measure. On the other hand, consumer staples, energy, health care and utilities are found to have lower standard deviations. Consumer discretionary and IT show lower skewness and kurtosis compared to the other industries’ uncertainty measures. The minimum values of industry uncertainty range between 10% and 15%, while the maximum values present a wider range, with financials and real estate leading with the highest values. As an example, we plot uncertainty measures for the IT, consumer staples and financial sectors in Figure 1.

Figure 1: Industry Uncertainty. Notes: This figure shows $\text{IVIX}^{(\text{Ind})}_{t}$ series for financials (black), IT (grey) and consumer staples (light grey) covering the period 03-01-2000 to 29-05-2020 at a daily frequency.

We observe that the IT sector dominates the other two during the dot-com bubble and technology boom in the early 2000s. The financial sector shows a dominant role during the GFC in 2008 and the Eurozone debt crisis in 2010 and 2011. In more recent times, the IT sector exhibits greater uncertainty in comparison to the financial sector, showing how uncertainty might come from technological innovation and growth. Consumer staples shows peaks in the early 2000s and during the GFC, while always remaining below the other two series and overall low throughout the sample period. All three uncertainty measures spiked with the onset of the Covid-19 pandemic in March 2020. Among them, we observe that financials and IT increased during the Covid-19 crisis in a more pronounced manner than consumer staples, which was less affected.

## 3 Measurement of dynamic industry uncertainty networks

Industries are connected directly through counterparty risk, contractual obligations or other general business conditions of the firms. High-frequency analysis of such networks requires generally unavailable high-frequency information.
In contrast, option prices and uncertainty measured at high frequencies reflect the decisions of many agents assessing risks from the existing linkages. Hence the pure market-based approach we use, in contrast to other network techniques, allows us to monitor the network at a daily frequency as well as to exploit its forward-looking strength with minimal assumptions. Looking at how a shock to the expected uncertainty of a firm $j$ transmits to future expectations about the uncertainty of a firm $k$, we will define weighted and directed networks. Aggregating the information about such networks can provide industry-level uncertainty characteristics that measure how strongly investors’ expectations are interconnected. Importantly, we will focus on the time variation of such networks.

### 3.1 Link to the network literature and causality of proposed measures

The measures that we use are intimately related to modern network theory. Algebraically, the adjacency matrix capturing information about network linkages carries all information about the network, and any sensible measure must be related to it. As noted by Diebold and Yılmaz (2014), a variance decomposition matrix defining a network adjacency matrix is then readily used as a network connectedness measure that is related to network node degrees and mean degree. Current studies examine, almost exclusively, static networks, mimicking time dynamics with estimation from an approximating window. In contrast to this approach, we follow Barunik and Ellington (2020), who employ a locally stationary TVP-VAR that allows us to estimate the adjacency matrix for a network of possibly large dimension at each point in time. Dynamic networks defined by such time-varying variance decompositions are more sophisticated than classical network structures in several ways. (fn. 9: For previous literature on the importance of the variation in the transmission of uncertainty shocks over time, see Caggiano et al. (2014), Mumtaz and Theodoridis (2018), and Alessandri and Mumtaz (2019).) In a typical network, the adjacency matrix contains a set of zero and one entries, depending on whether or not nodes are linked. In the above notion, one instead interprets variance decompositions as weighted links showing the strength of the connections. In addition, the links are directed, meaning that the $j$ to $k$ link is not necessarily the same as the $k$ to $j$ link, and hence the adjacency matrix is not symmetric. Therefore we can readily define weighted, directed versions of network connectedness statistics that include degrees, degree distributions, distances and diameters. Using the time-varying approximating model, we will define a truly time-varying adjacency matrix that describes a dynamic network.

The proposed network connectedness measure is also directly connected to the vast economic literature on the importance of network effects in macroeconomics (see Acemoglu et al., 2012; Carvalho and Gabaix, 2013; Gabaix, 2016; Barrot and Sauvagnat, 2016; Acemoglu et al., 2017; Baqaee and Farhi, 2019; Altinoglu, 2020; Acemoglu and Azar, 2020). Herskovic et al. (2020) state that measuring network effects is crucial to explain the joint evolution of firm volatility distributions. Network analysis has developed conceptual frameworks and an extensive set of tools to effectively measure interconnections among the units of a network; see for instance the survey by Carvalho and Tahbaz-Salehi (2019).
Moreover, our network connectedness measure improves on measures based on ex-post shocks to uncertainty (e.g. Diebold and Yılmaz, 2014). Employing implied measures of uncertainty gives one access to a different set of information, reflecting market participants' expectations of future movements in the underlying asset, a set of information found to be superior to ex-post measures of uncertainty (see Christensen and Prabhala, 1998). We are naturally interested in capturing shocks to the ex-ante uncertainty of industry $j$ that will transmit to future expectations about the uncertainty of industry $k$.10 Baruník et al. (2020) state that option-based measures of uncertainty reflect decisions of many agents assessing the risks from the existing linkages. The options market-based approach allows us to monitor the network at a daily frequency as well as use its forward-looking strength, in contrast to other network techniques based on balance sheets and other information which is generally unavailable at high frequency.

Finally, we note that our measures can have a direct causal interpretation. Rambachan and Shephard (2019) provide an important discussion about the causal interpretation of impulse response analysis in the time series literature. In particular, they argue that if an observable time series is shown to be a potential outcome time series, then generalized impulse response functions have a direct causal interpretation. Potential outcome series describe at time $t$ the output for a particular path of treatments. In the context of our study, paths of treatments are shocks. The assumptions required for a potential outcome series are natural and intuitive for a typical economic and/or financial time series: i) they depend only on past and current shocks; ii) series are outcomes of shocks; and iii) assignment of shocks depends only on past outcomes and shocks. The dynamic adjacency matrix we introduce in the next section is a transformation of generalized impulse response functions. Therefore, the dynamic adjacency matrix and all measures that stem from manipulations of its elements possess a causal interpretation, thus establishing the notion of causal dynamic network measures.

### 3.2 Construction of dynamic uncertainty network

To formalize the discussion, we construct a dynamic uncertainty network of industries from the industry implied volatilities computed for the main U.S. industries, and we interpret the TVP-VAR model approximating its dynamics as a dynamic network following the work of Barunik and Ellington (2020). In particular, consider a locally stationary TVP-VAR of lag order $p$ describing the dynamics of industry uncertainty as

$\mathbf{IVIX}_{t,T}=\boldsymbol{\Phi}_{1}(t/T)\mathbf{IVIX}_{t-1,T}+\ldots+\boldsymbol{\Phi}_{p}(t/T)\mathbf{IVIX}_{t-p,T}+\boldsymbol{\epsilon}_{t,T},$ (2)

where $\mathbf{IVIX}_{t,T}=\left(\text{IVIX}^{(\text{1})}_{t,T},\ldots,\text{IVIX}^{(\text{N})}_{t,T}\right)^{\top}$ is a doubly indexed $N$-variate time series of industry uncertainties, $\boldsymbol{\epsilon}_{t,T}=\boldsymbol{\Sigma}^{-1/2}(t/T)\boldsymbol{\eta}_{t,T}$ with $\boldsymbol{\eta}_{t,T}\sim NID(0,I_{N})$, and $\boldsymbol{\Phi}(t/T)=\big(\boldsymbol{\Phi}_{1}(t/T),\ldots,\boldsymbol{\Phi}_{p}(t/T)\big)^{\top}$ are the time-varying autoregressive coefficients. Note that $t$ refers to a discrete time index $1\leq t\leq T$ and $T$ is an additional index indicating the sharpness of the local approximation of the time series by a stationary one.
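To fix ideas, the following is a minimal simulation sketch of a locally stationary process of the form (2); the dimension, lag order, noise scale and coefficient path are illustrative assumptions, not our empirical specification:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 3, 1000  # illustrative dimension and sample size

def Phi1(u):
    # Smoothly time-varying VAR(1) coefficient matrix Phi_1(u); entries are
    # illustrative and kept well inside the stationarity region for all u.
    return (0.2 + 0.3 * u) * np.eye(N) + 0.05 * np.ones((N, N))

ivix = np.zeros((T, N))
for t in range(1, T):
    # IVIX_t = Phi_1(t/T) IVIX_{t-1} + eps_t with eps_t ~ N(0, 0.01 I)
    ivix[t] = Phi1(t / T) @ ivix[t - 1] + 0.1 * rng.standard_normal(N)
```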
Rescaling time such that the continuous parameter $u\approx t/T$ is a local approximation of the weakly stationary time series (Dahlhaus, 1996), we approximate $\mathbf{IVIX}_{t,T}$ in a neighborhood of a fixed time point $u_{0}=t_{0}/T$ by a stationary process $\widetilde{\mathbf{IVIX}}_{t}(u_{0})$ as

$\widetilde{\mathbf{IVIX}}_{t}(u_{0})=\boldsymbol{\Phi}_{1}(u_{0})\widetilde{\mathbf{IVIX}}_{t-1}(u_{0})+\ldots+\boldsymbol{\Phi}_{p}(u_{0})\widetilde{\mathbf{IVIX}}_{t-p}(u_{0})+\boldsymbol{\epsilon}_{t}.$ (3)

The process has a time-varying Vector Moving Average VMA($\infty$) representation (Dahlhaus et al., 2009; Roueff and Sanchez-Perez, 2016)

$\mathbf{IVIX}_{t,T}=\sum_{h=-\infty}^{\infty}\boldsymbol{\Psi}_{t,T,h}\boldsymbol{\epsilon}_{t-h}$ (4)

where the parameter vector $\boldsymbol{\Psi}_{t,T,h}\approx\boldsymbol{\Psi}_{h}(t/T)$ is a time-varying impulse response function characterized by a bounded stochastic process.11 Since $\boldsymbol{\Psi}_{t,T,h}$ contains an infinite number of lags, we approximate the moving average coefficients at $h=1,\ldots,H$ horizons.

The connectedness measures rely on variance decompositions, which are transformations of the information in $\boldsymbol{\Psi}_{t,T,h}$ that permit the measurement of the contribution of shocks to the system. Since a shock to a variable in the model does not necessarily appear alone, an identification scheme is crucial in calculating variance decompositions. We adapt the generalized identification scheme in Pesaran and Shin (1998) to locally stationary processes. The following proposition establishes a time-varying representation of the variance decomposition of shocks from asset $j$ to asset $k$. It is central to the development of the dynamic network measures since it constitutes a dynamic adjacency matrix.

###### Proposition 1 (Dynamic Adjacency Matrix). 12 A note on notation: $[\boldsymbol{A}]_{j,k}$ denotes the $j$th row and $k$th column of matrix $\boldsymbol{A}$ denoted in bold; $[\boldsymbol{A}]_{j,\cdot}$ denotes the full $j$th row, and similarly for the columns; $\sum A$, where $A$ is a matrix, denotes the sum of all elements of $A$.

Suppose $\mathbf{IVIX}_{t,T}$ is a locally stationary process; then the time-varying generalized variance decomposition of the $j$th variable at a rescaled time $u=t_{0}/T$ due to shocks in the $k$th variable, forming a dynamic adjacency matrix of a network, is

$\Big[\boldsymbol{\theta}^{H}(u)\Big]_{j,k}=\frac{\sigma_{kk}^{-1}\displaystyle\sum_{h=0}^{H}\Big(\big[\boldsymbol{\Psi}_{h}(u)\boldsymbol{\Sigma}(u)\big]_{j,k}\Big)^{2}}{\displaystyle\sum_{h=0}^{H}\big[\boldsymbol{\Psi}_{h}(u)\boldsymbol{\Sigma}(u)\boldsymbol{\Psi}_{h}^{\top}(u)\big]_{j,j}}$ (5)

where $\boldsymbol{\Psi}_{h}(u)$ is a time-varying impulse response function.

###### Proof. See Appendix C. ∎

It is important to note that Proposition 1 defines the dynamic network completely. Naturally, our adjacency matrix is filled with weighted links showing the strengths of the connections over time. The links are directional, meaning that the $j$ to $k$ link is not necessarily the same as the $k$ to $j$ link. Therefore the adjacency matrix is asymmetric.
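As a minimal sketch of how the decomposition in Proposition 1 could be computed at a single rescaled time point $u$, assuming the local coefficient matrices and covariance are already estimated (function and variable names are ours, and the truncation at horizon $H$ follows footnote 11):

```python
import numpy as np

def ma_coefficients(Phi, H):
    """Psi_h from local VAR(p) matrices via the recursion
    Psi_0 = I, Psi_h = sum_{i=1}^{min(h,p)} Phi_i Psi_{h-i}."""
    p, N = len(Phi), Phi[0].shape[0]
    Psi = [np.eye(N)]
    for h in range(1, H + 1):
        Psi.append(sum(Phi[i] @ Psi[h - 1 - i] for i in range(min(h, p))))
    return Psi

def gen_fevd(Phi, Sigma, H):
    """Equation (5): generalized variance decomposition theta^H(u) at one
    rescaled time point, given Phi = [Phi_1(u), ..., Phi_p(u)] and Sigma(u)."""
    N = Sigma.shape[0]
    Psi = ma_coefficients(Phi, H)
    theta = np.zeros((N, N))
    for j in range(N):
        denom = sum((P @ Sigma @ P.T)[j, j] for P in Psi)
        for k in range(N):
            num = sum((P @ Sigma)[j, k] ** 2 for P in Psi)
            theta[j, k] = num / (Sigma[k, k] * denom)
    return theta
```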
To characterize network uncertainty, we define the total dynamic network connectedness measure in the spirit of Diebold and Yılmaz (2014); Barunik and Ellington (2020) as the ratio of the off-diagonal elements to the sum of the entire matrix

$\mathcal{C}^{H}(u)=100\times\displaystyle\sum_{\begin{subarray}{c}j,k=1\\ j\neq k\end{subarray}}^{N}\big[\widetilde{\boldsymbol{\theta}}^{H}(u)\big]_{j,k}\Bigg/\displaystyle\sum_{j,k=1}^{N}\big[\widetilde{\boldsymbol{\theta}}^{H}(u)\big]_{j,k}$ (6)

where $\widetilde{\boldsymbol{\theta}}^{H}(u)$ is $\boldsymbol{\theta}^{H}(u)$ normalized by its row sums. This measures the contribution of forecast error variance attributable to all shocks in the system, minus the contribution of own shocks.

Similar to the aggregate network connectedness measure that infers the system-wide strength of connections, we define measures that reveal when an individual industry is a transmitter or a receiver of uncertainty shocks in the system. We use these measures to proxy dynamic network uncertainty. The dynamic directional connectedness that measures how much of industry $j$'s variance is due to shocks in other industries $k\neq j$ in the economy is given by

$\mathcal{C}_{j\leftarrow\bullet}^{H}(u)=100\times\displaystyle\sum_{\begin{subarray}{c}k=1\\ k\neq j\end{subarray}}^{N}\big[\widetilde{\boldsymbol{\theta}}^{H}(u)\big]_{j,k}\Bigg/\displaystyle\sum_{j,k=1}^{N}\big[\widetilde{\boldsymbol{\theta}}^{H}(u)\big]_{j,k},$ (7)

defining the so-called from connectedness. Note one can precisely interpret this quantity as dynamic from-degrees (or out-degrees in the network literature) associated with the nodes of the weighted directed network we represent by the dynamic variance decomposition matrix. Likewise, the contribution of asset $j$ to variances in other variables is

$\mathcal{C}_{j\rightarrow\bullet}^{H}(u)=100\times\displaystyle\sum_{\begin{subarray}{c}k=1\\ k\neq j\end{subarray}}^{N}\big[\widetilde{\boldsymbol{\theta}}^{H}(u)\big]_{k,j}\Bigg/\displaystyle\sum_{j,k=1}^{N}\big[\widetilde{\boldsymbol{\theta}}^{H}(u)\big]_{j,k}$ (8)

and is the so-called to connectedness. Again, one precisely interprets this as dynamic to-degrees (or in-degrees in the network literature) associated with the nodes of the weighted directed network that we represent by the variance decomposition matrix. These two measures show how other industries contribute to the uncertainty of industry $j$, and how industry $j$ contributes to the uncertainty of others, respectively, in a time-varying fashion.

Further, the net dynamic connectedness, showing whether an industry induces more uncertainty than it receives from other industries in the system, can be calculated as the difference between to and from connectedness, $\mathcal{C}_{j,\textsc{net}}^{H}(u)=\mathcal{C}_{j\rightarrow\bullet}^{H}(u)-\mathcal{C}_{j\leftarrow\bullet}^{H}(u)$, and the agg connectedness measure as $\mathcal{C}_{j,\textsc{agg}}^{H}(u)=\mathcal{C}_{j\rightarrow\bullet}^{H}(u)+\mathcal{C}_{j\leftarrow\bullet}^{H}(u)$.
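Given the decomposition matrix, the measures in (6)–(8) reduce to simple row and column operations; a minimal sketch continuing the illustrative naming above:

```python
import numpy as np

def connectedness(theta):
    """Total, from, to, net and agg connectedness (equations 6-8) from the
    raw decomposition theta^H(u); all values are in percent."""
    theta_t = theta / theta.sum(axis=1, keepdims=True)  # row-normalized
    off = theta_t - np.diag(np.diag(theta_t))           # drop own shocks
    total = 100 * off.sum() / theta_t.sum()
    c_from = 100 * off.sum(axis=1) / theta_t.sum()      # received by each j
    c_to = 100 * off.sum(axis=0) / theta_t.sum()        # transmitted by each j
    return total, c_from, c_to, c_to - c_from, c_to + c_from
```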
Finally, to obtain the time-varying coefficient estimates $\boldsymbol{\Phi}_{1}(u),\ldots,\boldsymbol{\Phi}_{p}(u)$ and the time-varying covariance matrix $\boldsymbol{\Sigma}(u)$ at a fixed time point $u=t_{0}/T$, we estimate the approximating model in (2) using Quasi-Bayesian Local-Likelihood (QBLL) methods (Petrova, 2019). Specifically, we use a kernel weighting function that gives larger weights to observations surrounding the period whose coefficient and covariance matrices are of interest. Using conjugate priors, the (quasi) posterior distribution of the parameters of the model is available analytically. This alleviates the need to use a Markov Chain Monte Carlo (MCMC) simulation algorithm and permits the use of parallel computing. Note also that in using (quasi) Bayesian estimation methods, we obtain a distribution of parameters that we use to construct network measures with confidence bands for inference. We detail the estimation algorithm in Appendix D.

## 4 The dynamics of the network connectedness of industry uncertainties

The technological and housing market bubbles, the commodity crash, and the Covid-19 pandemic are a few major examples showing how dramatic increases in uncertainty and shifts in investors' expectations can arise sharply in many different industries. Being able to characterize the industry-based network dynamics precisely over time is crucial, given that industries can swiftly change their characteristics and macroeconomic roles. Working with dynamic network estimates, we can characterize and assess industry uncertainty connections in a timely and forward-looking manner, according to the precise events leading to a more or less connected industry uncertainty network. This measurement provides new insights about the propagation of ex-ante uncertainty shocks over different phases of the business cycle, and it identifies periods in which the U.S. industries' uncertainty was tightly connected.

We compute the dynamic aggregate network connectedness through equation 6 and present its dynamics in Figure 2. We identify several cycles mainly driven by key events that took place in our sample, such as the dot com bubble in the early 2000s, the housing market bubble, the 2007-2009 GFC, and the most recent Covid-19 crisis. Some events might be described as bursts that rapidly subside; others might be characterized by a more continuous pattern and trend. We also split the time period into inversions, recessions and expansions following the NBER classification. Inversions are marked between July 2000 and March 2001 and between September 2006 and December 2007.13 The FOMC raised the target fed funds rate by 25 basis points on June 29, 2006 and lowered the target by 50 basis points on September 18, 2007. Adrian and Estrella (2008) identified September 2006 as the end of the tightening cycle because during that month the one-month fed futures rate went from higher than the spot rate to lower. The unprecedented causes of the pandemic recession in 2020 resulted in a downturn with different characteristics and dynamics than prior recessions. Hence, we are unable to establish an inversion period for the Covid-19 crisis, which we signal only as a recession from February 2020. See also the NBER website: https://www.nber.org/cycles.html. The recessions are marked between April 2001 and November 2001, between January 2008 and June 2009, and from February 2020 until the end of the sample, while the other years are marked as expansions.

Figure 2: Dynamic Network Connectedness of Industry Uncertainties

Notes: This figure shows the dynamic uncertainty network connectedness with respect to the 11 U.S. industries estimated during the 03-01-2000 – 29-05-2020 period at a daily frequency.
Inversions (light grey area) are marked between July 2000 and March 2001 and between September 2006 and December 2007. The recessions (grey area) are marked between April 2001 and November 2001, between January 2008 and June 2009, and from February 2020 until the end of the sample, while the other years are marked as expansions. Note the network connectedness is plotted with two standard deviation percentiles of the measure.

We document the system to be strongly connected, with values fluctuating around 60% in the first half of 2000. The first cycle starts with the burst of the tech bubble in 2000, with the network measure climbing from about 60% to 68% and increasing up to about 80% in the second half of 2001, as a response to the dot com bubble strengthening U.S. industries' uncertainty connections via shocks to uncertainty from the technology industry. The index recovers to its initial level by 2004, hitting its sample minimum at the end of 2004 before spiking again. After that, uncertainty network connectedness shows a new, lower average level, fluctuating around 50% until the second half of 2007, the only exception being a peak at the end of 2005, with connectedness up to 67%, which might be due to the U.S. housing bubble. We observe the index recording a significant upward movement from the beginning of 2007 to 2009, reaching a level close to 80%, in response to the high uncertainty during the 2007–2009 GFC spreading from the financial industry to other industries. Several cycles can be detected during the 2007–2009 GFC: the first, between the first quarter of 2007 and August 2007, reflecting the U.S. credit crunch; the second, in January–March 2008 (panic in stock and foreign exchange markets, and Bear Stearns' takeover by JP Morgan); the third, the collapse of Lehman Brothers in September 2008, showing a spike from 47% to 75% in our network connectedness; and lastly the first half of 2009, when the financial crisis started to propagate among all other industries, increasing the average network connectedness level.

Uncertainty connectedness spikes again in line with the two phases of the European sovereign debt crisis, in 2010 and the second half of 2011, reaching one of the highest levels up to that point, close to 80%. We then observe a drop in 2012, followed by a quite calm period from 2012 to mid-2013. Connectedness spikes again at the end of 2013, due to trade wars and energy turmoil, reaching levels above 70%, and twice more, at the end of 2014 and in mid-2015, reaching levels exceeding 80%. Connectedness peaks in correspondence with Brexit in 2016 and at the end of 2017, and eventually in 2020 due to the coronavirus outbreak, reaching its all-time maximum value in March 2020 (a level of almost 90%), signalling the beginning of the Covid-19 crisis. This reflects the tight connectedness among all industries in the coronavirus period, since almost all industries have been severely affected.

The fluctuations of the aggregate uncertainty network across crises, market downturns and expansions call for a further investigation of the role of each industry's uncertainty network characteristics. The U.S. industries appear to be more connected after the GFC, and even more so with the most recent Covid-19 crisis. We still lack an understanding of which industries drive the tightening or loosening of the network across different business cycles.
The next sections aim to clarify these points, exploiting the precise time-varying estimation of the uncertainty network first, and its forward-looking properties in the last section.

## 5 Uncertainty networks across the business cycles

In addition to the aggregate network characteristics, we can classify each industry based on its expected contribution to shocks to uncertainty in the system across different phases of the business cycle. Specifically, we are interested in identifying transmitters, receivers, as well as industries being hubs of uncertainty. To this end, we classify industries according to the $\mathcal{C}_{\textsc{net}}^{H}$ and $\mathcal{C}_{\textsc{agg}}^{H}$ characteristics of the dynamic uncertainty network. A specific shock to uncertainty related to any industry, especially when the most influential ones are affected, can trigger major consequences for the other industries, generating an aggregate impact on the whole network, tightening or weakening the uncertainty network, as well as being ultimately transmitted to the real economy. For instance, a tightening in the industry network may be connected to drops in real activity, representing a timely monitoring tool for immediate interventions by the Federal Reserve to sustain the business cycle.

### 5.1 Hubs, non-hubs and business cycles

In the case of a positive or negative net measure, computed as the difference between to (equation 8) and from (equation 7) values, an industry is deemed to be an uncertainty transmitter or receiver, respectively. An industry receiving or transmitting shocks to uncertainty at an intermediate level can be classified as a moderate transmitter or receiver, respectively, and may contribute to the uncertainty propagation in the system in a mild manner. An industry transmitting shocks to the system more (less) than receiving shocks from the system is labelled a transmitter (receiver). An industry showing high values of both directional measures, reflected by high agg values, plays an active role in the transmission of uncertainty shocks and is denoted an “uncertainty hub”; it is an industry that contributes the most to uncertainty shocks within the network. Conversely, a neutral industry showing low agg values is denoted an “uncertainty non-hub”.

Industries might have changed their roles in terms of their contribution to shocks to uncertainty according to the specific economic cycle. Accordingly, we average the network characteristics across each of the three business cycle phases (inversions, recessions and expansions) as well as over the total period. Table 2 provides the details. Financials and IT are detected as the main uncertainty hubs in the inversion and recession periods, reflecting the role of these two industries in the dot com bubble and the GFC, respectively. Also, consumer discretionary and industrials are classified as uncertainty hubs during recessions. The IT industry is found to be the main uncertainty hub during all cycles consistently, showing a time-invariant role as a key industry in terms of the contribution of shocks to uncertainty, especially during the dot com and Covid-19 recessions. During expansion periods, the communication industry also plays a hub role in addition to the IT industry. In contrast, materials, real estate and utilities show the smallest values of network statistics and are classified as uncertainty non-hubs.
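The classification rule underlying these labels can be summarized programmatically; in this minimal sketch the hub cutoff (the cross-industry median of agg) is our illustrative assumption rather than a calibration used in the paper:

```python
import numpy as np
import pandas as pd

def classify(net, agg, labels):
    """Transmitter/receiver from the sign of net; hub/non-hub from agg
    relative to an assumed cross-industry median cutoff."""
    net, agg = np.asarray(net), np.asarray(agg)
    return pd.DataFrame({
        "role": np.where(net > 0, "transmitter", "receiver"),
        "type": np.where(agg >= np.median(agg), "hub", "non-hub"),
    }, index=labels)
```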
The IT, communication and industrial industries are classified as the main uncertainty hubs over the total period, with positive net characteristics paired with the highest agg values. This finding highlights the important role that the information and communication technology (ICT) industries have played in the system over the last two decades (e.g. Jorgenson, 2001; Bloom et al., 2012). Consumer discretionary and energy industries can also be classified as uncertainty hubs given their high agg statistics. Conversely, we find that financials, materials, real estate and utilities are overall classified as uncertainty non-hubs within the whole sample. For instance, Table 2 reveals that the IT industry, the main uncertainty hub, contributes to the network connectedness characteristics two and three times more than the M and U industries, respectively. Market participants' forward-looking expectations show that shocks to uncertainty in one of the hubs are valued differently from the corresponding shocks to uncertainty in non-hubs.

Notably, the financial industry uncertainty has been transmitting differently within different market settings, mainly during inversions and recessions, but it is overall classified as a non-hub.14 Studies show how the financial and banking sector may represent a major channel in transmitting shocks across markets during crises (e.g. Kaminsky and Reinhart, 1999; Tai, 2004; Baur, 2012). This finding reflects the ability of authorities to influence the U.S. financial sector in the aftermath of the GFC through accommodative and unconventional monetary policies. To some, such policy interventions aimed at restoring the functioning of the financial sector during the Great Recession might have been key to avoiding a second Great Depression (see Bianchi, 2020).15 Note that the IMF October 2017 Global Financial Stability Report (GFSR) finds that the global financial system continued to strengthen in response to extraordinary policy support, regulatory enhancements, and the cyclical upturn in growth. This has also been accompanied by stronger harmonization of financial regulatory standards (e.g. the Basel capital framework). The financial sector has seen one of the most pronounced stock market booms on record during 2009-2018. It is therefore not surprising that there has been a low level of uncertainty within the financial industry and, as a consequence, little uncertainty transmitted to the rest of the system.
Table 2: Aggregate $\mathcal{C}_{\textsc{NET}}$ and $\mathcal{C}_{\textsc{AGG}}$ across business cycles

| | Inversion | | Recession | | Expansion | | Total Period
---|---|---|---|---|---|---|---|---
| | NET | AGG | AGG $\%$ | | NET | AGG | AGG $\%$ | | NET | AGG | AGG $\%$ | | NET | AGG | AGG $\%$
CD | | -1.72 | 16.77 | 9.8 | | 0.49 | 35.02 | 10.9 | | -0.98 | 25.61 | 11.0 | | -0.71 | 29.61 | 11.1
CM | | -1.55 | 13.07 | 7.6 | | -0.29 | 28.88 | 9.0 | | 1.37 | 27.16 | 11.6 | | 1.55 | 30.49 | 11.5
CS | | -0.87 | 12.12 | 7.1 | | -0.15 | 32.70 | 10.2 | | -1.54 | 20.93 | 9.0 | | -1.50 | 24.27 | 9.1
E | | -4.52 | 16.88 | 9.8 | | -0.43 | 30.70 | 9.5 | | -0.43 | 25.09 | 10.7 | | -0.80 | 28.27 | 10.6
F | | 3.76 | 21.07 | 12.3 | | 0.01 | 40.55 | 12.6 | | 0.04 | 18.02 | 7.7 | | 0.54 | 22.15 | 8.3
HC | | -0.76 | 14.23 | 8.3 | | -2.15 | 29.70 | 9.2 | | -0.29 | 23.63 | 10.1 | | -0.56 | 26.45 | 9.9
IN | | 0.92 | 20.63 | 12.1 | | -0.21 | 40.09 | 12.5 | | 0.26 | 26.71 | 11.4 | | 0.17 | 31.47 | 11.9
IT | | 4.42 | 25.09 | 14.7 | | 2.77 | 39.74 | 12.4 | | 1.93 | 28.48 | 12.2 | | 2.23 | 33.71 | 12.7
M | | -0.33 | 10.15 | 5.9 | | -1.77 | 15.99 | 5.0 | | -1.01 | 14.62 | 6.3 | | -0.98 | 15.84 | 6.0
RE | | 1.24 | 12.01 | 7.1 | | 1.55 | 17.18 | 5.3 | | 0.97 | 13.89 | 6.0 | | 0.51 | 14.22 | 5.3
U | | -0.58 | 9.11 | 5.3 | | 0.19 | 11.04 | 3.4 | | -0.24 | 9.41 | 4.0 | | -0.43 | 9.59 | 3.6

Notes: The table shows the average net and agg values with respect to the 11 U.S. industries' uncertainty network. When the net measure is positive, an industry can be classified as a net marginal transmitter, while, when negative, it can be classified as a net marginal receiver. The highest values of the agg network statistics are associated with uncertainty hubs, while the lowest with uncertainty non-hubs. The statistics are reported for the main business cycle phases, namely inversion, recession and expansion (aggregated), and also for the total period, namely from 03-01-2000 to 29-05-2020, at a daily frequency.

After having classified the industries based on their contribution to uncertainty across business cycles, we compute separate forward-looking networks extracted from uncertainty hubs only and from uncertainty non-hubs only. To this end, we input the $\text{IVIX}^{(\text{Ind})}_{t}$ of hubs only (consumer discretionary, communications, energy, industrials and IT) and of non-hubs only (financials, materials, real estate and utilities) into the network model underlying equation 6. Figure 3 plots the dynamics of both network connectedness characteristics. We observe that the uncertainty network extracted from hubs shows a higher degree of integration compared to the one extracted from non-hubs. The difference in uncertainty produced by hubs versus non-hubs becomes stronger in the post-GFC and more recent years, implying that hubs such as the ICT industries and industrials have played a key and increasing role in contributing to shocks to uncertainty within the system and to aggregate fluctuations over time. This finding is consistent with the development of the ICT industries in the last decades. Such hubs are in fact industries providing services to other industries, sharing a role in propagating uncertainty shocks to other sectors (see Bloom et al., 2012). This channel may also arise because uncertainty hubs are sectors likely to have financial importance and a large, fast-growing market capitalization.
Thus, our proposed framework for the role of uncertainty hubs in driving fluctuations in the aggregate economy is reminiscent of investment-specific technology shocks (e.g. Greenwood et al., 2000; Justiniano et al., 2010).

Figure 3: Uncertainty Hubs and non-Hubs Networks

Notes: This figure shows the network connectedness measures extracted from uncertainty hubs (black line) and non-hubs (grey line) from 03-01-2000 to 29-05-2020, at a daily frequency. Inversions (light grey area) are marked between July 2000 and March 2001 and between September 2006 and December 2007. The recessions (grey area) are marked between April 2001 and November 2001, between January 2008 and June 2009, and from February 2020 until the end of the sample, while the other years are marked as expansions.

An uncertainty shock can affect production, employment and growth within the hub; it can also generate larger uncertainty spillovers, with changes in prices, growth and production of other sectors in the system, affecting the broader economy (e.g. Kozeniauskas et al., 2018). As an example of a few possible mechanisms, Lehn and Winberry (2020) state that a positive shock to an investment hub directly increases production and employment in that hub; because the shock also raises the supply of investment goods, other sectors increase employment to produce more intermediate inputs for the hub. In contrast, a shock in a non-hub has a small effect on investment supply, generating smaller spillovers to the rest of the economy. Further, according to the islands framework in Garin et al. (2018), a shock that simultaneously affects both hubs and non-hubs (both islands) is significantly weaker than shocks hitting each island separately, owing to a reallocative shock mechanism. In our context, this would directly translate into an increase in uncertainty of both islands (hubs and non-hubs), with the difference that non-hubs receive shocks to uncertainty, while hubs (mainly providers of services cross-sectorally) both receive and transmit shocks to uncertainty to a much larger extent (captured by a higher agg statistic in our model). The uncertainty transmitted or received by hubs is found to be approximately three times higher than for non-hubs.

Given the framework and rationale that we put forward, we hypothesize that the response of business cycles to shocks should be larger for hubs than for non-hubs. Shocks to uncertainty hubs generate greater effects within the network of industries, extending their implications towards the real economy. Investors' expectations and beliefs generating shocks to uncertainty in hubs reflect value-added growth in those industries, future price increases or losses, productivity shocks and changes in future outcomes, which can trigger economic booms or downturns to a larger extent than in non-hubs. Therefore, the hubs-based network measure is the natural candidate to be a better leading indicator of business cycles compared to non-hubs, which is explored in the next section.

### 5.2 Shocks to uncertainty in specific business cycles

Before moving on, we briefly show a more granular classification of industries for each business cycle. We report the $\mathcal{C}_{\textsc{net}}$ and $\mathcal{C}_{\textsc{agg}}$ statistics in Table E1 in the appendix. The IT industry can be classified as the main uncertainty hub during both the inversion and recession phases related to the dot com bubble (from 2000 to mid-2002), contributing to a spread of uncertainty within the whole system.
Given the high agg values, CD and F are classified as main uncertainty hubs during the dot com recession. In contrast, M and U are classified as uncertainty non-hubs during this first recession. Interestingly, around 2002-2004 we observe a role of uncertainty transmitter for RE, since this period corresponds to the U.S. housing market bubble. The 2007–2008 GFC was indeed related to the bursting of a real estate bubble, identified precisely by the dynamic network at the end of 2005. We then observe a main role as net uncertainty transmitter for F between 2006 and 2007, reflecting the first events related to the GFC and the mortgage crisis, and especially in correspondence with the collapse of Lehman Brothers in September 2008. F shows a predominant role during the GFC, both in the inversion and the actual recession, contributing to spreading uncertainty across the whole system and therefore being classifiable as an uncertainty hub within this period.

After this, the U.S. economy is characterized by a long recovery period in which the ICT industries are, by far, the main uncertainty hubs. From 2010 until 2015, we observe E taking on a role as an uncertainty hub, and this could be due to a combination of events throughout this time period, emphasized by the spike in uncertainty between June 2014 and February 2015 associated with the global commodity price crash and oil price drop.16 For instance, changes in U.S. energy policies, U.S. conflicts in the Middle East, excessive OPEC production, and the U.S. oil price collapse led to higher oil price volatility. Moreover, in summer 2011, oil and other commodity prices fell, although from historically high levels, reflecting the fear that the commodities boom was over, with raw materials set to drop sharply. From 2015 until the end of our sample period, industrials, consumer discretionary, communications and IT are detected as the main uncertainty hubs within our system, whereas all the other industries are classified either as mild receivers or as uncertainty non-hubs. Finally, in the most recent Covid-19 recession, we note that the same industries are found to be the main uncertainty hubs, while health care, real estate and utilities are uncertainty non-hubs.

This analysis further highlights the usefulness of the time-varying parameter model to more precisely uncover the timing of the role of industries in shocks to uncertainty. Business cycles, economic downturns and crises of a different nature might intensify the role of a specific industry, making it a key player in contributing to shocks to uncertainty within the network. It is then crucial to be able to identify the role of each industry across time. Moreover, differences in terms of shocks to uncertainty might play a critical role in predicting business cycles throughout the sample.

## 6 Industry uncertainty networks and business cycle predictability

We study whether the information extracted from options of large firms and aggregated into ex-ante industry uncertainty networks may contribute to the predictability of indicators of the business cycle. We draw from previous literature on the role of aggregated sector-level or firm-level networks of microeconomic shocks to uncertainty and their relationship with the aggregate economy and business conditions (e.g. Gabaix, 2011; Acemoglu et al., 2012; Carvalho and Gabaix, 2013; Barrot and Sauvagnat, 2016; Atalay, 2017; Lehn and Winberry, 2020).
The forward-looking aspect of our network is what we exploit in this section to predict future business cycles. Further, we also argue that network measures extracted from uncertainty hubs play a more informative predictive role than the ones extracted from uncertainty non-hubs.

The literature on business cycle indicators, turning points, and the characterization of expansions and recessions has a long history. The Business Cycle Dating Committee (BCDC) of the National Bureau of Economic Research (NBER) provides probably the largest historical record of economic activity classification and business cycles. However, its releases are usually made with about a year's delay. It therefore represents a very reliable chronology of peaks and troughs throughout history rather than an early warning tool. The latter is what studies have been aiming to propose for decades. We study the relationship between uncertainty network connectedness and more timely indicators of business cycles, both coincident and leading indicators. We hypothesize that our network connectedness may represent an even more timely and forward-looking predictor of business cycle indicators given its ex-ante characteristics derived from the options market. It may therefore represent a good predictor of both coincident and leading indicators, and it can also be classified as a leading monitoring tool of the business cycle itself. This would provide researchers, policy-makers and the public with an even more timely indicator than the ones already available.

### 6.1 The predictability of business cycle coincident indicators

Berge and Jordà (2011) provide an exhaustive summary of the different measures available that give reliable signals about the current state of the business cycle. Among those, we adopt the Chicago Fed National Activity Index (CFNAI) and the Aruoba, Diebold, and Scotti (ADS) index of business conditions (see Aruoba et al., 2009).17 The CFNAI is a monthly index that tracks overall economic activity and inflationary pressure. It is computed as the first principal component of 85 series drawn from four broad categories of data, all adjusted for inflation. A zero value for the monthly index has been associated with the national economy expanding at its historical trend (average) rate of growth; negative values with below-average growth; positive values with above-average growth. See also Chava et al. (2020) on the adoption of the CFNAI as a business cycle indicator. For more information see https://www.chicagofed.org/publications/cfnai/index. The Aruoba-Diebold-Scotti (ADS) Business Conditions Index tracks real business conditions at a high frequency and is based on economic indicators. The average value of the ADS index is zero. Progressively positive values indicate progressively better-than-average conditions, whereas progressively more negative values indicate progressively worse-than-average conditions. It is collected from: https://www.philadelphiafed.org/research-and-data/real-time-center. Specifically, we adopt the 3-month moving average of the Chicago FED National Activity Index (CFNAI-MA3) and we aggregate the ADS indicator to a monthly frequency. We adopt these as business cycle coincident indicators and we study whether the uncertainty network can forecast them.
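As an illustration of the frequency alignment used below, and assuming the daily connectedness series and the indicators are held as pandas Series with a DatetimeIndex (the variable names are hypothetical):

```python
import pandas as pd

# c_daily: daily network connectedness; ads_daily: daily ADS index;
# cfnai: monthly CFNAI. All are assumed pandas Series (names are ours).
c_monthly = c_daily.resample("M").mean()      # network measure, monthly average
ads_monthly = ads_daily.resample("M").mean()  # ADS aggregated to monthly
cfnai_ma3 = cfnai.rolling(3).mean()           # 3-month moving average of CFNAI
```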
We also disentangle these business cycle indicators into proxies of expansions and recessions following the decomposition approach of Berge and Jordà (2011), who proposed optimal thresholds for CFNAI and ADS equal to -0.72 and -0.80, respectively. Thus, periods of economic expansion are associated with values of the CFNAI-MA3 (ADS) above -0.72 (-0.80), whereas periods of economic contraction are associated with values of the CFNAI-MA3 (ADS) below -0.72 (-0.80). We study whether the predictive ability of the uncertainty network connectedness varies according to different states of the business cycle. We aggregate the network connectedness measure to a monthly frequency to match the frequency of the business cycle indicators, and we run the following predictive regression:

$\mathcal{Y}^{(\ell)}_{t+h}=\beta_{0}+\beta_{\mathcal{C}}\ \mathcal{C}_{t}+\sum_{i=1}^{N}\beta_{X,i}\ X_{t,i}+\epsilon_{t}$ (9)

where $\mathcal{Y}^{(\ell)}_{t+h}$ is one of the business cycle indicators we select (or their components), with the predictive horizon $h\in\{1,3,6,9,12\}$ months. $\mathcal{C}_{t}$ is the industry uncertainty network measure (note we drop the index $H$ here for ease of notation), and $X_{t,i}$ is a set of control variables including both traditional predictors of business cycles, such as oil price changes (OIL), the term spread computed as the 10-year bond rate minus the 3-month bond rate (TS), and the unemployment rate (UR) (see also Gabaix, 2011), and also potential leading indicators extracted from financial markets, namely changes in the CBOE VIX index, a common proxy for macro uncertainty in the U.S. (VIX), changes in the $S\&P500$ price index (SPX), the Bloomberg Commodity price index (COMM) and the S&P Case-Shiller Home price index (CSHP).18 Oil prices, 10-year and 3-month bond rates, the unemployment rate and the S&P Case-Shiller Home price index are collected from the Federal Reserve Bank of St. Louis economic database at https://fred.stlouisfed.org/; the CBOE VIX index, $S\&P500$ price index and the Bloomberg Commodity price index are collected from Bloomberg. Therefore, $X_{t,i}$ is indexed by $i$ up to $N=7$, the number of controls we select, with $i\in\{\text{OIL, TS, UR, VIX, SPX, COMM, CSHP}\}$.

Table 3 reports the predictive results. We observe that the uncertainty network is a strong predictor of the aggregate CFNAI-MA3 indicator of the business cycle up to 12 months in advance, even after taking into account the information in the selected controls. The coefficient associated with our independent predictor is negative, suggesting that a tighter network of industry uncertainties leads to a contraction in the business cycle at future horizons. The network is therefore found to behave counter-cyclically, a finding in line with previous studies relating uncertainty measures to the business cycle (e.g. Bloom et al., 2018). The performance of the models measured by the adjusted-$R^{2}$ is found to be close to 30% at the 1-month horizon; it decreases at the semi-annual horizon and increases again at longer horizons such as 9 and 12 months.

Table 3: CFNAI-MA3 Predictive Results

| | Panel A: CFNAI-MA3
---|---|---
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}$ $\mid$ $X_{t}$ | | -0.013** | -0.024*** | -0.026*** | -0.040*** | -0.028***
| | (0.006) | (0.007) | (0.007) | (0.007) | (0.007)
Adj. $R^{2}$ | | 0.294 | 0.156 | 0.067 | 0.176 | 0.168
Obs | | 243 | 241 | 238 | 235 | 232
| | Panel B: CFNAI-MA3 Expansion
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}$ $\mid$ $X_{t}$ | | -0.007*** | -0.010*** | -0.012*** | -0.011*** | -0.008***
| | (0.002) | (0.002) | (0.002) | (0.002) | (0.002)
Adj. $R^{2}$ | | 0.093 | 0.175 | 0.268 | 0.304 | 0.239
Obs | | 243 | 241 | 238 | 235 | 232
| | Panel C: CFNAI-MA3 Recession
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}$ $\mid$ $X_{t}$ | | -0.006 | -0.014** | -0.013** | -0.029*** | -0.020***
| | (0.006) | (0.007) | (0.007) | (0.007) | (0.007)
Adj. $R^{2}$ | | 0.287 | 0.135 | 0.059 | 0.144 | 0.148
Obs | | 243 | 241 | 238 | 235 | 232

* • Notes: This table presents the results of the predictive regression in equation 9 between the industry uncertainty network connectedness and the 3-month moving average of the Chicago FED National Activity Index (CFNAI-MA3), an indicator of the business cycle (Panel A). In Panel B and Panel C, the results of the predictive regression with respect to the CFNAI-MA3 expansion and recession indicators are reported, respectively. We also add a set of controls, $X$. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and control results are not reported for the sake of space. Series are considered at a monthly frequency between 01-2000 and 05-2020.

When we look at the disentangled components of the business cycle indicator, we observe that the uncertainty network predicts future expansion periods well, up to one year in advance. The sign of the models' coefficients is, again, negative, suggesting that expansion periods might contract when the network is tighter. We notice a greater adjusted-$R^{2}$ performance of the model at the 6- to 12-month horizons. Regarding U.S. recession periods, we observe weaker predictability of the network at shorter horizons. However, the uncertainty network is still found to predict future recessions well from the 3-month up to the 12-month horizon, with the highest adjusted-$R^{2}$ again at the longer horizons. Interestingly, we find a negative sign associated with the coefficients, implying that increasing levels of network connectedness push the indicator further down when in recession (in this case the dependent variable is below the -0.72 threshold).19 The Chicago Fed suggested -0.7 to be a more accurate threshold of turning points for the CFNAI indicator. We repeat the empirical analysis of this section adopting this threshold. The results are found to be materially the same.

We validate the predictive ability of uncertainty network connectedness by showing how it can similarly predict a different business cycle coincident indicator. We show the results for the ADS Index in Table F1 in the appendix. The aggregate uncertainty network measure shows predictive power for the ADS index from 3 months up to 12 months in advance, again with negative coefficients and stronger performance at the long horizon. When we look at the expansion or recession indicators, a stronger predictive ability is still placed at longer horizons, especially for the expansion indicator.
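A minimal sketch of the predictive regression in equation (9), assuming monthly pandas inputs; the helper name and the HAC standard errors are our assumptions, since the paper does not state its exact error treatment:

```python
import pandas as pd
import statsmodels.api as sm

def predictive_regression(y, c, X, h):
    """Regress the h-month-ahead indicator y on current connectedness c
    and the control set X, as in equation (9)."""
    df = pd.concat([y.shift(-h).rename("y_fwd"), c.rename("C"), X],
                   axis=1).dropna()
    rhs = sm.add_constant(df.drop(columns="y_fwd"))
    return sm.OLS(df["y_fwd"], rhs).fit(cov_type="HAC",
                                        cov_kwds={"maxlags": h})
```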
Another possible business cycle coincident indicator is the industrial production (IP) growth rate. As a robustness check, we repeat the same exercise with the annualized IP growth rate, still confirming the predictive ability of the uncertainty network. We also adopt another business cycle coincident indicator collected from the Economic Cycle Research Institute (ECRI), the U.S. coincident indicator (U.S.CI).20 For more information and data see https://www.businesscycle.com/ecri-reports-indexes/all-indexes. We take the growth rate of the indicator and show that this leads to similar results, relegated to the appendix in Table F2.

As an additional robustness check, we also replace the uncertainty network connectedness measure constructed with time-varying networks with a network measure constructed following previous studies (e.g. Diebold and Yilmaz, 2012; Diebold and Yılmaz, 2014) using a moving window. We find that the latter is unable to predict future business cycles and their expansion and recession components, highlighting even more the importance of precisely characterizing the network at any point in time, without relying on moving windows, when it comes to predicting future levels of the business cycle or the real economy.21 The full set of results is available from the authors upon request.

### 6.2 Industry uncertainty networks and leading indicators

Due to its forward-looking nature, we argue that the uncertainty network connectedness could also serve as a good predictor of business cycle leading indicators, and be considered a leading indicator itself. Here we check the predictive ability of this index for two business cycle leading indicators: the U.S. composite leading indicator (CLI) by the OECD, and the U.S. leading indicator (U.S.LI) computed by the Economic Cycle Research Institute (ECRI).22 The composite leading indicator is collected from the OECD database at https://data.oecd.org/leadind/composite-leading-indicator-cli.htm; CLI provides early signals of turning points in business cycles, showing fluctuations of economic activity around its long-term potential level. U.S.LI is available at a weekly frequency and is aggregated here to a monthly frequency; we take the growth rate of the indicator and find similar results, confirming both significance and coefficient signs, even after adding the set of controls.

We repeat the same predictive exercise of the previous subsection by running equation 9, where now the dependent variable is CLI. We report the results in Table 4. We observe that the predictability of uncertainty network connectedness is even stronger for the business cycle leading indicators, spanning from the 3-month up to the one-year horizon for CLI, and from the 1-month up to the 9-month horizon for U.S.LI. The coefficients are still found to be negative, confirming our previous findings.

Table 4: Leading Indicators Predictive Results

| | Panel A: CLI
---|---|---
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}$ $\mid$ $X_{t}$ | | -0.011 | -0.030*** | -0.051*** | -0.079*** | -0.084***
| | (0.010) | (0.011) | (0.012) | (0.011) | (0.011)
Adj. $R^{2}$ | | 0.435 | 0.294 | 0.192 | 0.247 | 0.262
Obs | | 243 | 241 | 238 | 235 | 232
| | Panel B: U.S.LI
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}$ $\mid$ $X_{t}$ | | -0.348*** | -0.371*** | -0.417*** | -0.372*** | -0.108
| | (0.063) | (0.066) | (0.066) | (0.067) | (0.073)
Adj. $R^{2}$ | | 0.364 | 0.304 | 0.303 | 0.299 | 0.178
Obs | | 243 | 241 | 238 | 235 | 232

* • Notes: This table presents the results of the predictive regression in equation 9 between the industry uncertainty network connectedness and two leading indicators of the business cycle, namely CLI and U.S.LI, in Panels A and B, respectively. We also add a set of controls, $X$. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and control results are not reported for the sake of space. Series are considered at a monthly frequency between 01-2000 and 05-2020.

Overall, it appears that the uncertainty network can anticipate what is commonly viewed as a business cycle leading indicator. This opens up some interesting considerations. Given that the uncertainty network is extracted from option prices, it is expected that the newly proposed uncertainty network contains forward-looking information that can be useful as an ex-ante business cycle monitoring indicator. We know that a business cycle leading indicator should ideally anticipate and predict coincident indicators. We illustrate in the previous section that the uncertainty network shares such properties. In this subsection, we also show that our network measure is a good predictor of leading indicators, a finding that emphasizes even further the usefulness of its forward-looking information content.

Table 5: Coincident Indicators Predictive Results

| | Panel A: CFNAI-MA3
---|---|---
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}$ $\mid$ $X_{t}$ | | -0.011** | -0.023*** | -0.025*** | -0.040*** | -0.028***
| | (0.005) | (0.006) | (0.007) | (0.007) | (0.007)
CLI | | 0.432*** | 0.357*** | 0.265*** | 0.136*** | 0.008
| | (0.036) | (0.045) | (0.051) | (0.051) | (0.053)
Adj. $R^{2}$ | | 0.560 | 0.333 | 0.162 | 0.198 | 0.164
Obs | | 243 | 241 | 238 | 235 | 232
| | Panel B: ADS
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}$ $\mid$ $X_{t}$ | | -0.018 | -0.026* | -0.034* | -0.053*** | -0.038**
| | (0.015) | (0.018) | (0.018) | (0.018) | (0.019)
CLI | | 0.590*** | 0.587*** | 0.425*** | 0.305** | 0.159
| | (0.109) | (0.130) | (0.134) | (0.135) | (0.138)
Adj. $R^{2}$ | | 0.351 | 0.105 | 0.066 | 0.080 | 0.074
Obs | | 243 | 241 | 238 | 235 | 232

* • Notes: This table presents the results of the predictive regressions between the industry uncertainty network connectedness and the coincident indicators of the business cycle, namely CFNAI and ADS. We present results for regression equation 9 in which we add a set of controls including also the leading indicator, CLI. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and control results are not reported for the sake of space, the only exception being the CLI control. Series are considered at a monthly frequency between 01-2000 and 05-2020.

To validate this point, we test whether the existing business cycle leading indicators might contain a different set of information, mainly at shorter horizons, compared to our uncertainty network.
To this end, we test whether our measure can predict coincident indicators even after controlling for a leading indicator (CLI). The results are reported in Table 5. We find that the uncertainty network predictability holds at every horizon, even after controlling for CLI. The latter shows good predictive ability, though only up to the 9-month horizon (in line with the description of the index's characteristics). The uncertainty network clearly shows the characteristics of a complementary (and rather superior) business cycle leading indicator, spanning predictive power from the 1- to the 12-month horizon, even after controlling for CLI. For the ADS, as a proxy for a business cycle coincident indicator, we find weaker predictive power for the uncertainty network at the short horizon, though still confirming an anticipatory property of about one quarter. We repeat the same exercise of Table 5 adopting the U.S.LI indicator by the ECRI as a control. We obtain similar findings for both CFNAI and ADS, and the results are relegated to the paper appendix in Table F3. Overall, the uncertainty network adds quite a lot in terms of long-horizon predictability compared to the information content of other leading indicators of business cycles.

### 6.3 Hubs and non-hubs industry connectedness networks

In this subsection, we check whether the predictive power of uncertainty hubs-based networks differs from that of uncertainty non-hubs. We repeat the empirical analysis of the previous section, now considering only hubs-based and non-hubs-based networks, $\mathcal{C}_{t}^{\text{hub}}$ and $\mathcal{C}_{t}^{\text{non-hub}}$, respectively. We hypothesize that the former leads to greater predictability, since it reflects information from the industries detected as the main uncertainty contributors within the system. The uncertainty hubs network is based on the CD, CM, E, IN and IT industries, while the non-hubs network is based on the F, M, RE and U industries, as illustrated in section 5 and in Figure 3. Similarly to equation 9, we estimate the following:

$\mathcal{Y}^{(\ell)}_{t+h}=\beta_{0}+\beta_{\text{hub}}\ \mathcal{C}_{t}^{\text{hub}}+\beta_{\text{non-hub}}\ \mathcal{C}_{t}^{\text{non-hub}}+\sum_{i=1}^{N}\beta_{X,i}\ X_{t,i}+\epsilon_{t}$ (10)

where the hub and non-hub network connectedness measures enter jointly as independent variables, aggregated to a monthly frequency to match the frequency of the indicators we adopt. We include the same set of controls $X$.

In Table 6 we observe that the predictability of the hubs network is superior to that of the non-hubs network for the aggregate CFNAI-MA3 and for recessions, especially at longer horizons, and for expansions at all horizons. We notice how these results resemble, or appear even stronger than, the results obtained in the previous section for the aggregate predictability. In Table G1 in the appendix we further confirm this finding by showing the results for the $\mathcal{C}_{t}^{\text{hub}}$ predictor alone, controlling for $X$. The predictive ability of $\mathcal{C}_{t}^{\text{hub}}$ is found to be stronger than the one achieved by the aggregate network $\mathcal{C}$. This finding is reflected by the higher adjusted-$R^{2}$ detected, most of the time, in Table G1 compared to Table 3. These results suggest that the predictive ability of the uncertainty network appears to be driven by a few uncertainty hubs.

Table 6: Hubs vs. non-Hubs Network Predictive Results
| | Panel A: CFNAI-MA3
---|---|---
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | | -0.007* | -0.013*** | -0.017*** | -0.021*** | -0.019***
| | (0.004) | (0.004) | (0.004) | (0.004) | (0.004)
$\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | | -0.015*** | -0.021*** | -0.013* | -0.010 | 0.002
| | (0.006) | (0.006) | (0.007) | (0.007) | (0.007)
Adj. $R^{2}$ | | 0.319 | 0.214 | 0.115 | 0.183 | 0.183
Obs | | 243 | 241 | 238 | 235 | 232
| | Panel B: CFNAI-MA3 Expansion
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | | -0.004*** | -0.005*** | -0.005*** | -0.006*** | -0.006***
| | (0.001) | (0.001) | (0.001) | (0.001) | (0.001)
$\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | | 0.001 | -0.001 | -0.005*** | -0.001 | 0.0004
| | (0.002) | (0.002) | (0.002) | (0.002) | (0.002)
Adj. $R^{2}$ | | 0.097 | 0.150 | 0.258 | 0.289 | 0.286
Obs | | 243 | 241 | 238 | 235 | 232
| | Panel C: CFNAI-MA3 Recession
| | h=1 | h=3 | h=6 | h=9 | h=12
$\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | | -0.003 | -0.008** | -0.012*** | -0.015*** | -0.012***
| | (0.004) | (0.004) | (0.004) | (0.004) | (0.004)
$\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | | -0.016*** | -0.020*** | -0.009 | -0.009 | 0.002
| | (0.005) | (0.006) | (0.007) | (0.006) | (0.006)
Adj. $R^{2}$ | | 0.313 | 0.190 | 0.090 | 0.152 | 0.149
Obs | | 243 | 241 | 238 | 235 | 232

* • Notes: This table presents the results of the predictive regression in equation 10 comparing the predictive ability of the uncertainty hubs vs non-hubs sub-networks with respect to the 3-month moving average of the Chicago FED National Activity Index (CFNAI-MA3) in Panel A. In Panels B and C, the results of the predictive regression with respect to the CFNAI expansion and recession periods are reported, respectively. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and control results are not reported for the sake of space. Series are considered at a monthly frequency between 01-2000 and 05-2020.

We then check the relationship between the hubs and non-hubs networks and the leading indicators. The predictive results for CLI are reported in Table 7. We find that the hub network connectedness strongly predicts CLI up to one year ahead, whereas the predictive power of the non-hubs network is overall absent. Thus, the predictive power of the uncertainty hubs network is found to be strong also for business cycle leading indicators, confirming clearly superior predictive ability compared to non-hubs. We further validate the predictive ability of the hubs-based uncertainty network by including the leading indicator CLI as a control variable in the multivariate regression when predicting CFNAI-MA3. The hubs network shows strong predictive ability from the 3-month horizon up to one year, complementing the shorter-horizon predictive ability of CLI by extending it to longer horizons. The non-hub network shows good predictive power at the short horizon and therefore appears not to contain additional information compared to other leading indicators of business cycles, e.g. CLI. The hubs-based network shows longer-horizon predictive power, a useful feature for any business cycle leading indicator.
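The two-predictor regression in equation (10) follows the same pattern; reusing the hypothetical helper sketched in Section 6.1, with hub and non-hub series (assumed names) entering jointly:

```python
import pandas as pd

# c_hub, c_nonhub: monthly hub and non-hub connectedness (assumed names).
X10 = pd.concat([c_nonhub.rename("C_nonhub"), X], axis=1)
fit = predictive_regression(y, c_hub.rename("C_hub"), X10, h=3)
print(fit.params[["C_hub", "C_nonhub"]], fit.rsquared_adj)
```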
Our results suggest that the hubs-based network may be considered the main driver of the aggregate network, achieving even stronger predictive power on its own.

Table 7: Hubs vs. non-Hubs Network Predictive Results

Panel A: Leading Indicator CLI

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.012** | -0.024*** | -0.039*** | -0.051*** | -0.056*** |
| | (0.005) | (0.006) | (0.006) | (0.006) | (0.006) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | -0.009 | -0.015 | -0.016 | -0.017* | -0.005 |
| | (0.008) | (0.009) | (0.010) | (0.010) | (0.010) |
| Adj. $R^{2}$ | 0.447 | 0.335 | 0.269 | 0.325 | 0.339 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel B: CFNAI-MA3 controlling for CLI

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.003 | -0.009*** | -0.015*** | -0.020*** | -0.019*** |
| | (0.003) | (0.004) | (0.004) | (0.004) | (0.004) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | -0.014*** | -0.021*** | -0.012* | -0.011 | 0.002 |
| | (0.004) | (0.005) | (0.007) | (0.007) | (0.007) |
| CLI | 0.424*** | 0.340*** | 0.243*** | 0.109** | -0.017 |
| | (0.036) | (0.044) | (0.051) | (0.051) | (0.053) |
| Adj. $R^{2}$ | 0.573 | 0.371 | 0.193 | 0.196 | 0.179 |
| Obs | 243 | 241 | 238 | 235 | 232 |

* • Notes: This table presents the results of the predictive regression 10 comparing the predictive ability of the uncertainty hubs vs. non-hubs sub-networks with respect to the leading indicator, CLI (Panel A). In Panel B, the results with respect to the CFNAI-MA3 coincident indicator, controlling also for the leading indicator, CLI, are reported. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and controls results are not reported for the sake of space, the only exception being the CLI control. Series are considered at a monthly frequency between 01-2000 and 05-2020.

As a further robustness check, we repeat the same predictive exercises with a stricter construction of the hubs and non-hubs networks, including only the CM, IN and IT industries and the M, RE and U industries, respectively. We report the results in Table G2 in the appendix for the coincident indicator CFNAI-MA3, its expansion and recession components, and the leading indicator CLI. We corroborate our previous findings and confirm our hypothesis: the hubs network is the more informative one in predicting future business cycle indicators.

### 6.4 Predicting the volatility of GDP

Finally, inspired by Carvalho and Gabaix (2013), in this section we check whether our uncertainty network connectedness is also able to predict the future U.S. GDP growth rate and its volatility. We calculate the growth rate of $GDP_{t}$, the U.S. GDP at time $t$, as $g_{t}=\log(GDP_{t+1}/GDP_{t})$, where $t$ is expressed at a quarterly frequency (end of quarter). The volatility of GDP growth is measured as the annualized standard deviation of $g_{t}$ over four quarters.
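A minimal sketch of these two transformations (not the authors' code) is given below, assuming `gdp` is a quarterly pandas Series of end-of-quarter GDP levels; annualizing the four-quarter standard deviation by $\sqrt{4}$ is our assumption of the convention.

```python
import numpy as np
import pandas as pd

def gdp_growth_and_volatility(gdp: pd.Series):
    """Return g_t = log(GDP_{t+1} / GDP_t) and its annualized volatility."""
    g = np.log(gdp.shift(-1) / gdp)               # quarterly log growth rate
    vol = g.rolling(window=4).std() * np.sqrt(4)  # annualized over 4 quarters
    return g, vol
```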
We check whether the aggregate network can predict U.S. GDP indicators in the next $h$ quarters, with $h\in(1,2,3,4)$, by running predictive equation 9 at a quarterly frequency. We report the empirical results in Table H1 in the appendix. We observe that the uncertainty network predicts the future GDP growth rate two to four quarters ahead, while the one-quarter-ahead coefficient is not significant. An intensification of connections leads to a decreasing GDP growth rate in the following quarters. The information embedded in our measure is also useful for predicting future GDP volatility: repeating the same exercise, we find that the network predicts GDP volatility over the next four quarters, with results that are even stronger and remain significant up to one year ahead. An increase in connectedness leads to an increase in GDP volatility, thus confirming the counter-cyclicality of the uncertainty network.

In Table H2 in the appendix, we show the predictive results of the hubs-based and non-hubs-based uncertainty networks for the GDP growth rate and volatility. Also in this case, we find stronger predictive ability for the hubs network for both the GDP growth rate and GDP volatility, while the predictive power of the non-hubs network is weak and almost absent. We also confirm stronger results for the hubs network compared to the aggregate-network results of the previous subsection, emphasizing once more that a network extracted solely from uncertainty hubs can have even stronger predictive power not only for business cycle indicators, but also for the volatility of GDP.

## 7 Conclusion

We studied the ex-ante uncertainty network of U.S. industries constructed from option-based investors' expectations about next-month uncertainty. We relied on a novel data set of forward-looking industry uncertainties and adopted a time-varying parameter VAR (TVP-VAR) to model the ex-ante uncertainty network of industries. We obtained a precise point-in-time estimate of the uncertainty network that accurately characterizes each industry's role in uncertainty shocks, dynamically over the business cycle.

We uncovered a main role for booming industries such as communications and information technology, which we classified as uncertainty hubs. Industries such as financials (whose influence was important but mainly limited to the global financial crisis), real estate, materials and utilities showed a more neutral role and are classified as uncertainty non-hubs. We then exploited the predictive content of the forward-looking industry connectedness networks. We found the industry uncertainty network to be a useful tool to predict future business cycles. We identified greater predictive ability for the network extracted from uncertainty hubs, which acts as the main (leading) indicator of business cycles. Such uncertainty networks can serve as a new tool for regulators and policymakers to monitor the relationship between industry networks, the business cycle and the real economy in a precise, timely and forward-looking manner. Fluctuations and shocks to uncertainty in uncertainty hubs should be monitored especially carefully, given their potential for shaping industry networks and impacting the real economy.

Our findings suggest a possible direction for policies and government interventions. Policy actions should target uncertainty hubs, since they are the strongest contributors to uncertainty shocks and show the tightest link with the real economy. Policy interventions aimed at dampening their shocks to uncertainty can potentially provide a direct channel for supporting economic activity over the business cycle.

## References

* Acemoglu and Azar (2020) Acemoglu, D. and P. D. Azar (2020). Endogenous production networks. Econometrica 88(1), 33–82. * Acemoglu et al.
(2012) Acemoglu, D., V. M. Carvalho, A. Ozdaglar, and A. Tahbaz-Salehi (2012). The network origins of aggregate fluctuations. Econometrica 80(5), 1977–2016. * Acemoglu et al. (2017) Acemoglu, D., A. Ozdaglar, and A. Tahbaz-Salehi (2017). Microeconomic origins of macroeconomic tail risks. American Economic Review 107(1), 54–108. * Adrian and Estrella (2008) Adrian, T. and A. Estrella (2008). Monetary tightening cycles and the predictability of economic activity. Economics Letters 99(2), 260–264. * Alessandri and Mumtaz (2019) Alessandri, P. and H. Mumtaz (2019). Financial regimes and uncertainty shocks. Journal of Monetary Economics 101, 31–46. * Altinoglu (2020) Altinoglu, L. (2020). The origins of aggregate fluctuations in a credit network economy. Journal of Monetary Economics. * Arellano et al. (2019) Arellano, C., Y. Bai, and P. J. Kehoe (2019). Financial frictions and fluctuations in volatility. Journal of Political Economy 127(5), 2049–2103. * Aruoba et al. (2009) Aruoba, S. B., F. X. Diebold, and C. Scotti (2009). Real-time measurement of business conditions. Journal of Business & Economic Statistics 27(4), 417–427. * Atalay (2017) Atalay, E. (2017). How important are sectoral shocks? American Economic Journal: Macroeconomics 9(4), 254–80. * Auer et al. (2019) Auer, R. A., A. A. Levchenko, and P. Sauré (2019). International inflation spillovers through input linkages. Review of Economics and Statistics 101(3), 507–521. * Bachmann and Bayer (2014) Bachmann, R. and C. Bayer (2014). Investment dispersion and the business cycle. American Economic Review 104(4), 1392–1416. * Bachmann et al. (2013) Bachmann, R., S. Elstner, and E. R. Sims (2013). Uncertainty and economic activity: Evidence from business survey data. American Economic Journal: Macroeconomics 5(2), 217–49. * Bakshi et al. (2003) Bakshi, G., N. Kapadia, and D. Madan (2003). Stock return characteristics, skew laws, and the differential pricing of individual equity options. Review of Financial Studies 16(1), 101–143. * Baqaee and Farhi (2019) Baqaee, D. R. and E. Farhi (2019). The macroeconomic impact of microeconomic shocks: beyond Hulten’s theorem. Econometrica 87(4), 1155–1203. * Barrot and Sauvagnat (2016) Barrot, J.-N. and J. Sauvagnat (2016). Input specificity and the propagation of idiosyncratic shocks in production networks. The Quarterly Journal of Economics 131(3), 1543–1592. * Baruník et al. (2020) Baruník, J., M. Bevilacqua, and R. Tunaru (2020). Asymmetric network connectedness of fears. The Review of Economics and Statistics forthcoming. * Barunik and Ellington (2020) Barunik, J. and M. Ellington (2020). Dynamic networks in large financial and economic systems. arXiv preprint arXiv:2007.07842. * Basu and Bundick (2017) Basu, S. and B. Bundick (2017). Uncertainty shocks in a model of effective demand. Econometrica 85(3), 937–958. * Baur (2012) Baur, D. G. (2012). Financial contagion and the real economy. Journal of Banking & Finance 36(10), 2680–2692. * Bekaert et al. (2013) Bekaert, G., M. Hoerova, and M. L. Duca (2013). Risk, uncertainty and monetary policy. Journal of Monetary Economics 60(7), 771–788. * Berge and Jordà (2011) Berge, T. J. and Ò. Jordà (2011). Evaluating the classification of economic activity into recessions and expansions. American Economic Journal: Macroeconomics 3(2), 246–77. * Bhattarai et al. (2020) Bhattarai, S., A. Chatterjee, and W. Y. Park (2020). Global spillover effects of US uncertainty. Journal of Monetary Economics 114, 71–89. * Bianchi (2020) Bianchi, F. (2020). 
The great depression and the great recession: A view from financial markets. Journal of Monetary Economics 114, 240–261. * Bloom (2009) Bloom, N. (2009). The impact of uncertainty shocks. Econometrica 77(3), 623–685. * Bloom (2014) Bloom, N. (2014). Fluctuations in uncertainty. Journal of Economic Perspectives 28(2), 153–76. * Bloom et al. (2018) Bloom, N., M. Floetotto, N. Jaimovich, I. Saporta-Eksten, and S. J. Terry (2018). Really uncertain business cycles. Econometrica 86(3), 1031–1065. * Bloom et al. (2012) Bloom, N., R. Sadun, and J. Van Reenen (2012). Americans do IT better: US multinationals and the productivity miracle. American Economic Review 102(1), 167–201. * Caggiano et al. (2014) Caggiano, G., E. Castelnuovo, and N. Groshenny (2014). Uncertainty shocks and unemployment dynamics in US recessions. Journal of Monetary Economics 67, 78–92. * Carr and Lee (2009) Carr, P. and R. Lee (2009). Volatility derivatives. Annu. Rev. Financ. Econ. 1(1), 319–339. * Carr and Wu (2006) Carr, P. and L. Wu (2006). A tale of two indices. The Journal of Derivatives 13(3), 13–29. * Carvalho and Gabaix (2013) Carvalho, V. and X. Gabaix (2013). The great diversification and its undoing. American Economic Review 103(5), 1697–1727. * Carvalho and Tahbaz-Salehi (2019) Carvalho, V. M. and A. Tahbaz-Salehi (2019). Production networks: A primer. Annual Review of Economics 11, 635–663. * Chava et al. (2020) Chava, S., A. Hsu, and L. Zeng (2020). Does history repeat itself? Business cycle and industry returns. Journal of Monetary Economics 116, 201–218. * Cheng (2019) Cheng, I.-H. (2019). The VIX premium. The Review of Financial Studies 32(1), 180–227. * Christensen and Prabhala (1998) Christensen, B. J. and N. R. Prabhala (1998). The relation between implied and realized volatility. Journal of Financial Economics 50(2), 125–150. * Christiano et al. (2014) Christiano, L. J., R. Motto, and M. Rostagno (2014). Risk shocks. American Economic Review 104(1), 27–65. * Dahlhaus (1996) Dahlhaus, R. (1996). On the Kullback-Leibler information divergence of locally stationary processes. Stochastic Processes and their Applications 62(1), 139–168. * Dahlhaus et al. (2009) Dahlhaus, R., W. Polonik, et al. (2009). Empirical spectral processes for locally stationary time series. Bernoulli 15(1), 1–39. * Decker et al. (2016) Decker, R. A., P. N. D’Erasmo, and H. Moscoso Boedo (2016). Market exposure and endogenous firm volatility over the business cycle. American Economic Journal: Macroeconomics 8(1), 148–98. * Di Giovanni et al. (2014) Di Giovanni, J., A. A. Levchenko, and I. Mejean (2014). Firms, destinations, and aggregate fluctuations. Econometrica 82(4), 1303–1340. * Diebold and Yilmaz (2012) Diebold, F. X. and K. Yilmaz (2012). Better to give than to receive: Predictive directional measurement of volatility spillovers. International Journal of Forecasting 28(1), 57–66. * Diebold and Yılmaz (2014) Diebold, F. X. and K. Yılmaz (2014). On the network topology of variance decompositions: Measuring the connectedness of financial firms. Journal of Econometrics 182(1), 119–134. * Filipović et al. (2016) Filipović, D., E. Gourier, and L. Mancini (2016). Quadratic variance swap models. Journal of Financial Economics 119(1), 44–68. * Foerster et al. (2011) Foerster, A. T., P.-D. G. Sarte, and M. W. Watson (2011). Sectoral versus aggregate shocks: A structural factor analysis of industrial production. Journal of Political Economy 119(1), 1–38. * Gabaix (2011) Gabaix, X. (2011). The granular origins of aggregate fluctuations. 
Econometrica 79(3), 733–772. * Gabaix (2016) Gabaix, X. (2016). Power laws in economics: An introduction. Journal of Economic Perspectives 30(1), 185–206. * Garin et al. (2018) Garin, J., M. J. Pries, and E. R. Sims (2018). The relative importance of aggregate and sectoral shocks and the changing nature of economic fluctuations. American Economic Journal: Macroeconomics 10(1), 119–48. * Greenwood et al. (2000) Greenwood, J., Z. Hercowitz, and P. Krusell (2000). The role of investment-specific technological change in the business cycle. European Economic Review 44(1), 91–115. * Herskovic et al. (2020) Herskovic, B., B. T. Kelly, H. N. Lustig, and S. Van Nieuwerburgh (2020). Firm volatility in granular networks. Journal of Political Economy (12-56). * Jorgenson (2001) Jorgenson, D. W. (2001). Information technology and the US economy. American Economic Review 91(1), 1–32. * Jurado et al. (2015) Jurado, K., S. C. Ludvigson, and S. Ng (2015). Measuring uncertainty. American Economic Review 105(3), 1177–1216. * Justiniano et al. (2010) Justiniano, A., G. E. Primiceri, and A. Tambalotti (2010). Investment shocks and business cycles. Journal of Monetary Economics 57(2), 132–145. * Kadiyala and Karlsson (1997) Kadiyala, K. R. and S. Karlsson (1997). Numerical methods for estimation and inference in Bayesian VAR-models. Journal of Applied Econometrics 12(2), 99–132. * Kaminsky and Reinhart (1999) Kaminsky, G. L. and C. M. Reinhart (1999). The twin crises: the causes of banking and balance-of-payments problems. American Economic Review 89(3), 473–500. * Kozeniauskas et al. (2018) Kozeniauskas, N., A. Orlik, and L. Veldkamp (2018). What are uncertainty shocks? Journal of Monetary Economics 100, 1–15. * Leduc and Liu (2016) Leduc, S. and Z. Liu (2016). Uncertainty shocks are aggregate demand shocks. Journal of Monetary Economics 82, 20–35. * Lehn and Winberry (2020) Lehn, C. v. and T. Winberry (2020). The investment network, sectoral comovement, and the changing US business cycle. National Bureau of Economic Research. * Ludvigson et al. (2020) Ludvigson, S., S. Ma, and S. Ng (2020). Uncertainty and business cycles: Exogenous impulse or endogenous response? American Economic Journal: Macroeconomics. * Mumtaz and Theodoridis (2018) Mumtaz, H. and K. Theodoridis (2018). The changing transmission of uncertainty shocks in the US. Journal of Business & Economic Statistics 36(2), 239–252. * Ozdagli and Weber (2017) Ozdagli, A. and M. Weber (2017). Monetary policy through production networks: Evidence from the stock market. National Bureau of Economic Research Working Paper. * Pesaran and Shin (1998) Pesaran, H. H. and Y. Shin (1998). Generalized impulse response analysis in linear multivariate models. Economics letters 58(1), 17–29. * Petrova (2019) Petrova, K. (2019). A quasi-Bayesian local likelihood approach to time varying parameter VAR models. Journal of Econometrics. * Rambachan and Shephard (2019) Rambachan, A. and N. Shephard (2019). Econometric analysis of potential outcomes time series: instruments, shocks, linearity and the causal response function. arXiv preprint arXiv:1903.01637. * Roueff and Sanchez-Perez (2016) Roueff, F. and A. Sanchez-Perez (2016). Prediction of weakly locally stationary processes by auto-regression. arXiv preprint arXiv:1602.01942. * Santa-Clara and Yan (2010) Santa-Clara, P. and S. Yan (2010). Crashes, volatility, and the equity premium: Lessons from S&P 500 options. Review of Economics and Statistics 92(2), 435–451. * Tai (2004) Tai, C.-S. (2004). 
Can bank be a source of contagion during the 1997 Asian crisis? Journal of Banking & Finance 28(2), 399–421.

## Appendix

## Appendix A S&P 500 sectors breakdown

In this short section we present a breakdown and description of the U.S. stock market sectors, where the S&P 500 index is used as a proxy for the stock market. The information in this section is reported as of January 25, 2019. For more details and updated information, see also https://us.spindices.com/indices/equity/sp-500.

* • Consumer Discretionary (CD): The CD sector consists of businesses whose demand rises and falls with general economic conditions, such as washers and dryers, sporting goods, and new cars. At present, the consumer discretionary sector contains 11 sub-industries: Automobile Components Industry, Automobiles Industry, Distributors Industry, Diversified Consumer Services Industry, Hotels, Restaurants & Leisure Industry, Household Durables Industry, Leisure Products Industry, Multiline Retail Industry, Specialty Retail Industry, Textile, Apparel & Luxury Goods Industry, and Internet & Direct Marketing Industry. The total value of all consumer discretionary stocks in the U.S. came to $4.54 trillion, or about 10.11% of the market. Examples of consumer discretionary stocks include Amazon and Starbucks.
* • Communication Services (CM): From telephone access to high-speed internet, the communication services sector of the economy keeps us all connected. At present, the communication services sector is made up of five industries: Diversified Telecommunication Services, Wireless Telecommunication Services, Entertainment, Media, and Interactive Media and Services. The total value of all communication services stocks in the U.S. came to $4.42 trillion, or 10.33% of the market. The CM industry includes stocks such as AT&T and Verizon, but also giants such as Alphabet Inc. Class A and Facebook, listed since 2004 and 2012, respectively. In fact, the new communication services sector of the S&P 500 now includes big companies such as Facebook and Alphabet (Google), which were moved out of the technology and consumer discretionary sectors following the changes to the Global Industry Classification Standard (GICS).
* • Consumer Staples (CS): The CS sector consists of businesses that sell the necessities of life, ranging from bleach and laundry detergent to toothpaste and packaged food. At present, the consumer staples sector contains six industries: Beverages Industry, Food & Staples Retailing Industry, Food Products Industry, Household Products Industry, Personal Products Industry, and Tobacco Industry. The total value of all consumer staples stocks in the U.S. came to $2.95 trillion, or about 7.18% of the market; the sector includes companies such as Procter & Gamble.
* • Energy (E): The E sector consists of businesses that source, drill, extract, and refine the raw commodities we need to keep the country going, such as oil and gas. At present, the energy sector contains two industries: the Energy Equipment & Services Industry, and the Oil, Gas & Consumable Fuels Industry. The total value of all energy stocks in the U.S. came to $3.36 trillion, or about 5.51% of the market. Major energy stocks include Exxon Mobil and Chevron.
* • Financial (F): The F sector consists of banks, insurance companies, real estate investment trusts, and credit card issuers.
At present, the financial sector contains seven industries: Banking Industry, Capital Markets Industry, Consumer Finance Industry, Diversified Financial Services Industry, Insurance Industry, Mortgage Real Estate Investment Trusts (REITs) Industry, and Thrifts & Mortgage Finance Industry. The total value of all financial stocks in the U.S. came to $6.89 trillion, or about 13.63% of the market. J.P. Morgan Chase, Goldman Sachs, and Bank of America are examples of financial stocks.
* • Health Care (HC): The HC sector consists of drug companies, medical supply companies, and other science-based operations concerned with improving and healing human life. At present, the HC sector contains six industries: Biotechnology Industry, Health Care Equipment & Supplies Industry, Health Care Providers & Services Industry, Health Care Technology Industry, Life Sciences Tools & Services Industry, and Pharmaceuticals Industry. The total value of all health care stocks in the U.S. came to $5.25 trillion, or about 15.21% of the market. Examples of HC stocks include Johnson & Johnson and Pfizer.
* • Industrials (IN): The IN sector comprises everything from railroads and airlines to military weapons and industrial conglomerates. At present, the industrial sector contains fourteen industries: Aerospace & Defense Industry, Air Freight & Logistics Industry, Airlines Industry, Building Products Industry, Commercial Services & Supplies Industry, Construction & Engineering Industry, Electrical Equipment Industry, Industrial Conglomerates Industry, Machinery Industry, Marine Industry, Professional Services Industry, Road & Rail Industry, Trading Companies & Distributors Industry, and Transportation Infrastructure Industry. The total value of all IN stocks in the U.S. came to $3.80 trillion, or about 9.33% of the market.
* • Information Technology (IT): The IT sector is home to the hardware, software, computer equipment, and IT services operations that make it possible for you to be reading this right now. At present, the information technology sector contains six industries: Communications Equipment Industry, Electronic Equipment, Instruments & Components Industry, IT Services Industry, Semiconductors & Semiconductor Equipment Industry, Software Industry, and Technology Hardware, Storage & Peripherals Industry. The total value of all IT stocks in the United States came to $7.10 trillion, or about 19.85% of the market. It is the largest sector in the S&P 500. Top IT stocks include Microsoft and Apple.
* • Materials (M): Supplying the other sectors with the raw materials they need to conduct business, the materials sector manufactures, logs, and mines everything from precious metals, paper, and chemicals to shipping containers, wood pulp, and industrial ore. At present, the materials sector contains five industries: Chemicals Industry, Construction Materials Industry, Containers & Packaging Industry, Metals & Mining Industry, and Paper & Forest Products Industry. The total value of all materials stocks in the U.S. came to $1.77 trillion, or about 2.71% of the market. Major materials stocks include DuPont.
* • Real Estate (RE): The RE sector includes all Real Estate Investment Trusts (REITs) with the exception of Mortgage REITs, which are housed under the financial sector. The sector also includes companies that manage and develop properties. At present, the RE sector is made up of two industries: Equity Real Estate Investment Trusts, and Real Estate Management & Development. The total value of all real estate stocks in the U.S.
came to $1.17 trillion, or 2.96% of the market. The RE industry includes stocks such as Simon Property Group and Prologis.
* • Utilities (U): The U sector of the economy is home to the firms that make our lights work when we flip the switch, let our stoves erupt in flame when we want to cook food, make water come out of the tap when we are thirsty, and more. At present, the utilities sector is made up of five industries: Electric Utilities Industry, Gas Utilities Industry, Independent Power and Renewable Electricity Producers Industry, Multi-Utilities Industry, and Water Utilities Industry. The total value of all utilities stocks in the U.S. came to $1.27 trillion, or about 3.18% of the market. U stocks include many local electricity and water companies, including Dominion Resources.

## Appendix B Model-free individual implied volatility

Formalizing the implied volatility computation for each stock, we follow Bakshi et al. (2003) in adopting out-of-the-money (OTM) call and put option prices to compute the implied variance of individual stock $s$ as

$\sigma^{2}_{\text{VIX}^{(s)}}=\int_{P_{t}}^{\infty}\frac{2(1-\log(K/P_{t}))}{K^{2}}C(t,t+1,K)dK+\int_{0}^{P_{t}}\frac{2(1+\log(P_{t}/K))}{K^{2}}P(t,t+1,K)dK,$ (11)

where $C(.)$ and $P(.)$ denote the time $t$ prices of call and put contracts, respectively, with time to maturity of one period and a strike price of $K$. Intuitively, the implied variance measure can be computed in a model-free way from a range of option prices upon a discretization of formula (11), adopting call and put option prices with respect to the next 30 days and considering all available strikes for each individual stock's options. We compute $\text{VIX}^{(s)}$ for all the stocks in our sample belonging to the 11 U.S. industries as follows:

$\sigma^{2}_{\text{VIX}^{(s)}}=\frac{2}{T}\sum_{i=1}^{n}\frac{\Delta K_{i}}{K_{i}^{2}}e^{rT}Q(K_{i})-\frac{1}{T}\left[\frac{F}{K_{0}}-1\right]^{2},$ (12)

where $T$ is the time to expiration, $F$ is the forward index level derived from the put-call parity as $F=e^{rT}[C(K,T)-P(K,T)]+K$ with risk-free rate $r$, $K_{0}$ is the reference price, that is, the first exercise price less than or equal to the forward level ($K_{0}\leq F$), and $K_{i}$ is the $i$th out-of-the-money (OTM) strike price available on a specific date (a call if $K_{i}>K_{0}$, a put if $K_{i}<K_{0}$, and both a call and a put if $K_{i}=K_{0}$). $Q(K_{i})$ is the average of the bid and ask prices of the OTM option with exercise price $K_{i}$; if $K_{i}=K_{0}$, it is the average of the at-the-money (ATM) call and put prices at that strike. $\Delta K_{i}$ is half the difference between the two strikes adjacent to $K_{i}$, namely $\frac{K_{i+1}-K_{i-1}}{2}$ for $2\leq i\leq n-1$. The annualized square roots of the quantities computed for each individual firm $s$ are then labeled $\text{VIX}^{(s)}$, denoting individual, model-free implied volatility measures of the expected price fluctuations in the $s$th underlying asset's options over the next month:

$\text{VIX}^{(s)}=\sqrt{\frac{365}{30}\sigma^{2}_{\text{VIX}^{(s)}}}$ (13)

(The standard CBOE methodology interpolates between the two expiration dates closest to 30 days; we use a simplified formula that takes into account only the single expiration date closest to 30 days, due to options data availability with respect to U.S. single stocks.)
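A sketch of this discretization is given below; the treatment of the boundary spacings $\Delta K_{1}$ and $\Delta K_{n}$ (set to the adjacent strike differences, as in the CBOE methodology) is our assumption, since the text defines $\Delta K_{i}$ only for interior strikes.

```python
import numpy as np

def model_free_vix(strikes, quotes, F, r, T):
    """Individual model-free implied volatility, eqs. (12)-(13).

    strikes: increasing array of OTM strikes K_i
    quotes:  bid-ask midpoints Q(K_i) of the matching OTM options
    F: forward level from put-call parity; r: risk-free rate
    T: time to expiration (same units as r)
    """
    K = np.asarray(strikes, dtype=float)
    Q = np.asarray(quotes, dtype=float)
    K0 = K[K <= F].max()                          # reference strike K_0 <= F
    dK = np.empty_like(K)
    dK[1:-1] = (K[2:] - K[:-2]) / 2.0             # interior Delta K_i
    dK[0], dK[-1] = K[1] - K[0], K[-1] - K[-2]    # boundary spacing (assumed)
    var = (2.0 / T) * np.sum(dK / K**2 * np.exp(r * T) * Q) \
          - (1.0 / T) * (F / K0 - 1.0) ** 2       # eq. (12)
    return np.sqrt(365.0 / 30.0 * var)            # annualization, eq. (13)
```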
Table B1: List of Selected Stocks by Industry

| Ticker | Full name | Period | | Ticker | Full name | Period |
|---|---|---|---|---|---|---|
| Consumer discretionary: | | | | Industrial: | | |
| AMZN | Amazon.com Inc. | 1996-2020 | | BA | Boeing Company | 1996-2020 |
| HD | Home Depot Inc. | 1996-2020 | | GE | General Electric Company | 1996-2020 |
| MCD | McDonald’s Corporation | 1996-2020 | | HON | Honeywell International Inc. | 1996-2020 |
| NKE | NIKE Inc. Class B | 1996-2020 | | UNP | Union Pacific Corporation | 1996-2020 |
| SBUX | Starbucks Corporation | 1996-2020 | | UTX | United Technologies Corporation | 1996-2018 |
| | | | | CAT | Caterpillar Inc. | 2019-2020 |
| Communications: | | | | Information Technology: | | |
| CMCSA | Comcast Corporation Class A | 1996-2004 | | AAPL | Apple Inc. | 1996-2020 |
| DIS | Walt Disney Company | 1996-2020 | | ADBE | Adobe Inc. | 1996-2020 |
| EA | Electronic Arts Inc. | 1996-2015 | | CSCO | Cisco Systems Inc. | 1996-2015 |
| FB | Facebook Inc. Class A | 2012-2020 | | INTC | Intel Corporation | 1996-2020 |
| GOOG | Alphabet Inc. Class C | 2004-2020 | | MA | Mastercard Incorporated Class A | 2007-2018 |
| T | AT&T Inc. | 1996-2020 | | MSFT | Microsoft Corporation | 1996-2020 |
| VZ | Verizon Communications Inc. | 1996-2020 | | V | Visa Inc. Class A | 2008-2020 |
| Consumer staples: | | | | Materials: | | |
| COST | Costco Wholesale Corporation | 1996-2018 | | APD | Air Products and Chemicals Inc | 1996-2018 |
| KO | Coca-Cola Company | 1996-2020 | | DD | DuPont de Nemours Inc. | 1996-2018 |
| PEP | PepsiCo Inc. | 1996-2020 | | ECL | Ecolab Inc. | 1996-2018 |
| PG | Procter & Gamble Company | 1996-2020 | | LIN | Linde plc | 1996-2007 |
| WMT | Walmart Inc. | 1996-2020 | | PPG | PPG Industries Inc. | 2007-2018 |
| PM | Philip Morris Inc. | 2018-2020 | | SHW | Sherwin-Williams Company | 1998-2020 |
| | | | | NEM | Newmont Corporation | 2019-2020 |
| | | | | FCX | Freeport-McMoRan Inc. | 2019-2020 |
| | | | | IP | International Paper | 2019-2020 |
| Energy: | | | | Real Estate: | | |
| COP | ConocoPhillips | 1996-2020 | | AMT | American Tower Corporation | 1996-2020 |
| CVX | Chevron Corporation | 1996-2020 | | CCI | Crown Castle International Corp | 1996-2018 |
| SLB | Schlumberger NV | 1996-2020 | | EQIX | Equinix Inc. | 2006-2020 |
| XOM | Exxon Mobil Corporation | 1996-2020 | | EQR | Equity Residential | 1996-2020 |
| VLO | Valero Energy Corp | 1996-2015 | | PLD | Prologis Inc. | 1996-2018 |
| PSX | Phillips 66 | 2012-2020 | | SPG | Simon Property Group Inc. | 1996-2020 |
| Financial: | | | | Utilities: | | |
| BAC | Bank of America Corp | 1996-2020 | | AEP | American Electric Power Company Inc. | 1996-2020 |
| BRK | Berkshire Hathaway Inc. Class B | 2010-2020 | | D | Dominion Energy Inc | 1996-2020 |
| C | Citigroup Inc. | 1996-2020 | | DUK | Duke Energy Corporation | 2006-2020 |
| GS | Goldman Sachs | 1999-2018 | | NEE | NextEra Energy Inc. | 1996-2020 |
| JPM | JPMorgan Chase & Co. | 1996-2020 | | SO | Southern Company | 1996-2020 |
| WFC | Wells Fargo & Company | 1996-2020 | | | | |
| Health care: | | | | | | |
| ABT | Abbott Laboratories | 1996-2019 | | | | |
| JNJ | Johnson & Johnson | 1996-2020 | | | | |
| MRK | Merck & Co. Inc. | 1996-2020 | | | | |
| PFE | Pfizer Inc. | 1996-2020 | | | | |
| UNH | UnitedHealth Group Incorporated | 1996-2020 | | | | |
| ABBV | AbbVie Inc. | 2018-2020 | | | | |

* • Notes: This table summarizes all the U.S. stocks selected in the paper, divided by industry. The stock tickers, full names and period of options data availability are reported.
Figure B1: Individual Firm Uncertainty. Notes: This figure illustrates the individual uncertainty $\text{VIX}^{(i)}$ for Amazon (in black), Coca-Cola (in grey) and Disney (in light grey) from 03-01-2000 to 29-05-2020, at a daily frequency.

## Appendix C Proofs

###### Proposition 12.

Consider the VMA($\infty$) representation of the locally stationary TVP-VAR model (Dahlhaus et al., 2009; Roueff and Sanchez-Perez, 2016)

$\mathbf{IVIX}_{t,T}=\sum_{h=-\infty}^{\infty}\boldsymbol{\Psi}_{t,T,h}\boldsymbol{\epsilon}_{t-h}$ (14)

where $\boldsymbol{\Psi}_{t,T,h}\approx\boldsymbol{\Psi}(t/T,h)$ is a stochastic process satisfying $\sup_{\ell}||\boldsymbol{\Psi}_{t}-\boldsymbol{\Psi}_{\ell}||^{2}=O_{p}(h/t)$ for $1\leq h\leq t$ as $t\rightarrow\infty$; hence, in a neighborhood of a fixed time point $u=t/T$, the process $\mathbf{IVIX}_{t,T}$ can be approximated by a stationary process $\widetilde{\mathbf{IVIX}}_{t}(u)$,

$\widetilde{\mathbf{IVIX}}_{t}(u)=\sum_{h=-\infty}^{\infty}\boldsymbol{\Psi}_{h}(u)\boldsymbol{\epsilon}_{t-h}$ (15)

with $\boldsymbol{\epsilon}$ an i.i.d. process with $\mathbb{E}[\boldsymbol{\epsilon}_{t}]=0$, $\mathbb{E}[\boldsymbol{\epsilon}_{s}\boldsymbol{\epsilon}_{t}]=0$ for all $s\neq t$, and local error covariance matrix $\boldsymbol{\Sigma}(u)$. Under suitable regularity conditions, $|\mathbf{IVIX}_{t,T}-\widetilde{\mathbf{IVIX}}_{t}(u)|=O_{p}\big(|t/T-u|+1/T\big)$. Since the errors are assumed to be serially uncorrelated, the total local covariance matrix of the forecast error, conditional on the information at time $t-1$, is given by

$\boldsymbol{\Omega}^{H}(u)=\sum_{h=0}^{H}\boldsymbol{\Psi}_{h}(u)\boldsymbol{\Sigma}(u)\boldsymbol{\Psi}^{\top}_{h}(u).$ (16)

Next, we consider the local covariance matrix of the forecast error conditional on knowledge of today's shock and future expected shocks to the $k$th variable. Starting from the conditional forecast error

$\boldsymbol{\xi}^{k,H}(u)=\sum_{h=0}^{H}\boldsymbol{\Psi}_{h}(u)\Big[\boldsymbol{\epsilon}_{t+H-h}-\mathbb{E}(\boldsymbol{\epsilon}_{t+H-h}|\boldsymbol{\epsilon}_{k,t+H-h})\Big],$ (17)

and assuming a normal distribution of $\boldsymbol{\epsilon}_{t}\sim N(0,\boldsymbol{\Sigma})$, we obtain (note on notation: $[\boldsymbol{A}]_{j,k}$ denotes the element in the $j$th row and $k$th column of a matrix $\boldsymbol{A}$, denoted in bold; $[\boldsymbol{A}]_{j,\cdot}$ denotes the full $j$th row, and similarly for columns; $\sum A$ denotes the sum of all elements of the matrix $A$)
$\mathbb{E}(\boldsymbol{\epsilon}_{t+H-h}|\boldsymbol{\epsilon}_{k,t+H-h})=\sigma_{kk}^{-1}\Big[\boldsymbol{\Sigma}(u)\Big]_{\cdot k}\boldsymbol{\epsilon}_{k,t+H-h}$ (18)

and substituting (18) into (17), we obtain

$\boldsymbol{\xi}^{k,H}(u)=\sum_{h=0}^{H}\boldsymbol{\Psi}_{h}(u)\Big[\boldsymbol{\epsilon}_{t+H-h}-\sigma_{kk}^{-1}\Big[\boldsymbol{\Sigma}(u)\Big]_{\cdot k}\boldsymbol{\epsilon}_{k,t+H-h}\Big].$ (19)

Finally, the local forecast error covariance matrix is

$\boldsymbol{\Omega}^{k,H}(u)=\sum_{h=0}^{H}\boldsymbol{\Psi}_{h}(u)\boldsymbol{\Sigma}(u)\boldsymbol{\Psi}^{\top}_{h}(u)-\sigma_{kk}^{-1}\sum_{h=0}^{H}\boldsymbol{\Psi}_{h}(u)\Big[\boldsymbol{\Sigma}(u)\Big]_{\cdot k}\Big[\boldsymbol{\Sigma}(u)\Big]_{\cdot k}^{\top}\boldsymbol{\Psi}^{\top}_{h}(u).$ (20)

Then

$\Big[\boldsymbol{\Delta}^{H}(u)\Big]_{(j)k}=\Big[\boldsymbol{\Omega}^{H}(u)-\boldsymbol{\Omega}^{k,H}(u)\Big]_{j,j}=\sigma_{kk}^{-1}\sum_{h=0}^{H}\Bigg(\Big[\boldsymbol{\Psi}_{h}(u)\boldsymbol{\Sigma}(u)\Big]_{j,k}\Bigg)^{2}$ (21)

is the unscaled local $H$-step-ahead forecast error variance of the $j$th component with respect to the innovation in the $k$th component. Scaling by the $H$-step-ahead forecast error variance of the $j$th variable yields the desired time-varying generalized forecast error variance decomposition (TVP-GFEVD)

$\Big[\boldsymbol{\theta}^{H}(u)\Big]_{j,k}=\frac{\sigma_{kk}^{-1}\displaystyle\sum_{h=0}^{H}\Bigg(\Big[\boldsymbol{\Psi}_{h}(u)\boldsymbol{\Sigma}(u)\Big]_{j,k}\Bigg)^{2}}{\displaystyle\sum_{h=0}^{H}\Big[\boldsymbol{\Psi}_{h}(u)\boldsymbol{\Sigma}(u)\boldsymbol{\Psi}^{\top}_{h}(u)\Big]_{j,j}}$ (22)

This completes the proof. ∎
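For concreteness, equations (16), (21) and (22) translate directly into a few lines of NumPy; the sketch below assumes the local VMA matrices $\boldsymbol{\Psi}_{h}(u)$ and the covariance $\boldsymbol{\Sigma}(u)$ have already been computed, and is an illustration rather than our implementation.

```python
import numpy as np

def tvp_gfevd(Psi, Sigma):
    """Time-varying generalized FEVD of eq. (22) at a fixed time point u.

    Psi:   array of shape (H+1, N, N) with local VMA matrices Psi_h(u)
    Sigma: array of shape (N, N), local error covariance Sigma(u)
    Returns theta with theta[j, k] = share of j's H-step forecast error
    variance attributable to shocks in variable k.
    """
    PsiSigma = Psi @ Sigma                              # stacks Psi_h(u) Sigma(u)
    num = (PsiSigma ** 2).sum(axis=0) / np.diag(Sigma)  # eq. (21), scaled by 1/sigma_kk
    den = np.einsum("hjk,hjk->j", PsiSigma, Psi)        # diagonal of eq. (16)
    return num / den[:, None]
```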
## Appendix D Estimation of the time-varying parameter VAR model

To estimate our high-dimensional systems, we follow the Quasi-Bayesian Local-Likelihood (QBLL) approach of Petrova (2019). Let $\mathbf{IVIX}_{t}$ be an $N\times 1$ vector generated by a stable time-varying parameter (TVP) heteroskedastic VAR model with $p$ lags:

$\mathbf{IVIX}_{t,T}=\boldsymbol{\Phi}_{1}(t/T)\mathbf{IVIX}_{t-1,T}+\ldots+\boldsymbol{\Phi}_{p}(t/T)\mathbf{IVIX}_{t-p,T}+\boldsymbol{\epsilon}_{t,T},$ (23)

where $\boldsymbol{\epsilon}_{t,T}=\boldsymbol{\Sigma}^{-1/2}(t/T)\boldsymbol{\eta}_{t,T}$ with $\boldsymbol{\eta}_{t,T}\sim NID(0,\boldsymbol{I}_{M})$, and $\boldsymbol{\Phi}(t/T)=(\boldsymbol{\Phi}_{1}(t/T),\ldots,\boldsymbol{\Phi}_{p}(t/T))^{\top}$ are the time-varying autoregressive coefficients. Note that all roots of the polynomial $\chi(z)=\text{det}\left(\mathbf{I}_{N}-\sum^{p}_{j=1}z^{j}\boldsymbol{\Phi}_{j}(t/T)\right)$ lie outside the unit circle, and $\boldsymbol{\Sigma}^{-1}_{t}$ is a positive definite time-varying covariance matrix. Stacking the time-varying intercepts and autoregressive matrices in the vector $\phi_{t,T}$, with $\overline{\mathbf{IVIX}}^{\top}_{t}=\left(\mathbf{I}_{N}\otimes x_{t}\right)$, $x_{t}=\left(1,\mathbf{IVIX}^{\top}_{t-1},\dots,\mathbf{IVIX}^{\top}_{t-p}\right)$, where $\otimes$ denotes the Kronecker product, the model can be written as:

$\displaystyle\mathbf{IVIX}_{t,T}=\overline{\mathbf{IVIX}}^{\top}_{t,T}\phi_{t,T}+\boldsymbol{\Sigma}^{-\frac{1}{2}}_{t/T}\boldsymbol{\eta}_{t,T}$ (24)

We obtain the time-varying parameters of the model by employing Quasi-Bayesian Local-Likelihood (QBLL) methods. Estimation of (23) requires re-weighting the likelihood function. Essentially, the weighting function gives higher weight to observations surrounding the time period whose parameter values are of interest. The local likelihood function at time period $k$ is given by:

$\mathbf{L}_{k}\left(\mathbf{IVIX}|\phi_{k},\boldsymbol{\Sigma}_{k},\overline{\mathbf{IVIX}}\right)\propto|\boldsymbol{\Sigma}_{k}|^{\text{trace}(\mathbf{D}_{k})/2}\exp\left\{-\frac{1}{2}(\mathbf{IVIX}-\overline{\mathbf{IVIX}}^{\top}\phi_{k})^{\top}\left(\boldsymbol{\Sigma}_{k}\otimes\mathbf{D}_{k}\right)(\mathbf{IVIX}-\overline{\mathbf{IVIX}}^{\top}\phi_{k})\right\}$ (25)

$\mathbf{D}_{k}$ is a diagonal matrix whose elements hold the weights:

$\displaystyle\mathbf{D}_{k}=\text{diag}(\varrho_{k1},\dots,\varrho_{kT})$ (26)

$\displaystyle\varrho_{kt}=\phi_{T,k}w_{kt}/\sum^{T}_{t=1}w_{kt}$ (27)

$\displaystyle w_{kt}=(1/\sqrt{2\pi})\exp((-1/2)((k-t)/H)^{2}),\quad\text{for}\>k,t\in\{1,\dots,T\}$ (28)

$\displaystyle\zeta_{Tk}=\left(\left(\sum^{T}_{t=1}w_{kt}\right)^{2}\right)^{-1}$ (29)

where $\varrho_{kt}$ is a normalised kernel weight, $w_{kt}$ is a Normal kernel weighting function, and $\zeta_{Tk}$ gives the rate of convergence, behaving like the bandwidth parameter $H$ in (28); the kernel provides greater weight to observations surrounding the parameter estimates at time $k$ relative to more distant observations. We use a Normal-Wishart prior distribution for $\phi_{k}|\>\boldsymbol{\Sigma}_{k}$ for $k\in\{1,\dots,T\}$:

$\displaystyle\phi_{k}|\boldsymbol{\Sigma}_{k}\sim\mathcal{N}\left(\phi_{0k},(\boldsymbol{\Sigma}_{k}\otimes\mathbf{\Xi}_{0k})^{-1}\right)$ (30)

$\displaystyle\boldsymbol{\Sigma}_{k}\sim\mathcal{W}\left(\alpha_{0k},\mathbf{\Gamma}_{0k}\right)$ (31)

where $\phi_{0k}$ is a vector of prior means, $\mathbf{\Xi}_{0k}$ is a positive definite matrix, $\alpha_{0k}$ is a scale parameter of the Wishart distribution ($\mathcal{W}$), and $\mathbf{\Gamma}_{0k}$ is a positive definite matrix. The prior and the weighted likelihood function imply a Normal-Wishart quasi posterior distribution for $\phi_{k}|\>\boldsymbol{\Sigma}_{k}$ for $k=\{1,\dots,T\}$.
Formally, let $\mathbf{A}=(\overline{x}^{\top}_{1},\dots,\overline{x}^{\top}_{T})^{\top}$ and $\mathbf{Y}=(x_{1},\dots,x_{T})^{\top}$; then:

$\displaystyle\phi_{k}|\boldsymbol{\Sigma}_{k},\mathbf{A},\mathbf{Y}\sim\mathcal{N}\left(\tilde{\phi}_{k},\left(\boldsymbol{\Sigma}_{k}\otimes\mathbf{\tilde{\Xi}}_{k}\right)^{-1}\right)$ (32)

$\displaystyle\boldsymbol{\Sigma}_{k}\sim\mathcal{W}\left(\tilde{\alpha}_{k},\mathbf{\tilde{\Gamma}}^{-1}_{k}\right)$ (33)

with quasi posterior parameters

$\displaystyle\tilde{\phi}_{k}=\left(\mathbf{I}_{N}\otimes\mathbf{\tilde{\Xi}}^{-1}_{k}\right)\left[\left(\mathbf{I}_{N}\otimes\mathbf{A}^{\prime}\mathbf{D}_{k}\mathbf{A}\right)\hat{\phi}_{k}+\left(\mathbf{I}_{N}\otimes\mathbf{\Xi}_{0k}\right)\phi_{0k}\right]$ (34)

$\displaystyle\mathbf{\tilde{\Xi}}_{k}=\mathbf{\Xi}_{0k}+\mathbf{A}^{\prime}\mathbf{D}_{k}\mathbf{A}$ (35)

$\displaystyle\tilde{\alpha}_{k}=\alpha_{0k}+\sum^{T}_{t=1}\varrho_{kt}$ (36)

$\displaystyle\mathbf{\tilde{\Gamma}}_{k}=\mathbf{\Gamma}_{0k}+\mathbf{Y}^{\prime}\mathbf{D}_{k}\mathbf{Y}+\mathbf{\Phi}_{0k}\mathbf{\Xi}_{0k}\mathbf{\Phi}^{\prime}_{0k}-\mathbf{\tilde{\Phi}}_{k}\mathbf{\tilde{\Xi}}_{k}\mathbf{\tilde{\Phi}}^{\top}_{k}$ (37)

where $\hat{\phi}_{k}=\left(\mathbf{I}_{N}\otimes\mathbf{A}^{\prime}\mathbf{D}_{k}\mathbf{A}\right)^{-1}\left(\mathbf{I}_{N}\otimes\mathbf{A}^{\prime}\mathbf{D}_{k}\right)y$ is the local likelihood estimator for $\phi_{k}$. The matrices $\mathbf{\Phi}_{0k}$ and $\mathbf{\tilde{\Phi}}_{k}$ are conformable matrices built from the vector of prior means, $\phi_{0k}$, and from a draw from the quasi posterior distribution, $\tilde{\phi}_{k}$, respectively.

The motivation for employing these methods is threefold. First, we are able to estimate large systems that conventional Bayesian estimation methods do not permit. This is because the state-space representation of an $N$-dimensional TVP-VAR($p$) requires an additional $N(3/2+N(p+1/2))$ state equations for every additional variable; conventional Markov Chain Monte Carlo (MCMC) methods fail to estimate larger models, which in general confines one to (usually) fewer than 6 variables in the system. Second, the standard approach is fully parametric and requires a law of motion, which can distort inference if the true law of motion is misspecified. Third, the methods used here permit direct estimation of the VAR's time-varying covariance matrix, which has an inverse-Wishart density and is symmetric positive definite at every point in time.

In estimating the model, we use $p=2$ and a Minnesota Normal-Wishart prior with a shrinkage value $\varphi=0.05$, and we centre the coefficient on the first lag of each variable at 0.1 in each respective equation. The priors for the Wishart parameters are set following Kadiyala and Karlsson (1997). For each point in time, we run 500 simulations of the model to generate the (quasi) posterior distribution of parameter estimates. Note that we experiment with various lag lengths, $p=\{2,3,4,5\}$; shrinkage values, $\varphi=\{0.01,0.25,0.5\}$; and values at which to centre the coefficient on the first lag of each variable, $\{0,0.05,0.2,0.5\}$. Network measures from these experiments are qualitatively similar. Notably, adding lags to the VAR and increasing the persistence in the prior value of the first lagged dependent variable in each equation increases computation time.
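As an illustration of the weighting scheme in equations (26)-(28), the full $T\times T$ weight matrix can be built in a few lines; this sketch folds the constant $\phi_{T,k}$ of equation (27) into a sum-to-one normalisation, which is our simplifying assumption.

```python
import numpy as np

def qbll_weight_matrix(T: int, H: float) -> np.ndarray:
    """Normal-kernel local-likelihood weights, eqs. (26)-(28).

    Row k holds the normalised weights rho_{kt}, t = 1..T, centred on
    time period k; H acts as the kernel bandwidth.
    """
    k = np.arange(1, T + 1)[:, None]
    t = np.arange(1, T + 1)[None, :]
    w = np.exp(-0.5 * ((k - t) / H) ** 2) / np.sqrt(2.0 * np.pi)  # eq. (28)
    return w / w.sum(axis=1, keepdims=True)                       # eq. (27), normalised
```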
## Appendix E Net and Agg Uncertainty Connectedness Measures

Table E1: $\mathcal{C}_{\textsc{NET}}$ and $\mathcal{C}_{\textsc{AGG}}$ across each business cycle

| | Dot com Inv | | | Dot com Rec | | | Exp after Dot com | | | GFC Inv | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | NET | AGG | AGG $\%$ | NET | AGG | AGG $\%$ | NET | AGG | AGG $\%$ | NET | AGG | AGG $\%$ |
| CD | -2.27 | 20.81 | 10 | 0.81 | 66.58 | 13.5 | -2.04 | 10.99 | 8.2 | -1.17 | 12.72 | 9.4 |
| CM | -3.09 | 14.28 | 6.9 | -0.26 | 38.82 | 7.8 | -1.22 | 11.73 | 8.8 | -0.02 | 11.85 | 8.8 |
| CS | -1.02 | 13.08 | 6.3 | -4.91 | 47.86 | 9.7 | -0.76 | 11.38 | 8.6 | -0.72 | 11.17 | 8.3 |
| E | -6.24 | 21.21 | 10.2 | 0.37 | 42.96 | 8.7 | -0.71 | 12.57 | 9.4 | -2.79 | 12.54 | 9.3 |
| F | 0.14 | 18.77 | 9.1 | 0.06 | 66.84 | 13.5 | 1.51 | 16.86 | 12.7 | 7.38 | 23.37 | 17.3 |
| HC | -1.31 | 18.31 | 8.8 | -4.11 | 54.12 | 11 | -0.42 | 12.24 | 9.2 | -0.21 | 10.14 | 7.5 |
| IN | 1.79 | 28.56 | 13.8 | -3.27 | 62.47 | 12.6 | 0.96 | 13.47 | 10.1 | 0.05 | 12.70 | 9.4 |
| IT | 9.79 | 36.39 | 17.6 | 6.48 | 68.59 | 13.9 | 1.03 | 12.51 | 9.4 | -0.94 | 13.79 | 10.2 |
| M | -0.62 | 10.64 | 5.1 | -1.47 | 13.14 | 2.7 | -1.03 | 9.24 | 6.9 | 0.04 | 9.66 | 7.1 |
| RE | 3.68 | 16.34 | 7.9 | 7.02 | 24.92 | 5.1 | 2.52 | 13.09 | 9.8 | -1.99 | 7.67 | 5.7 |
| U | -0.85 | 8.71 | 4.3 | -0.67 | 7.89 | 1.5 | 0.17 | 9.12 | 6.9 | -0.32 | 9.50 | 7 |

| | GFC Rec | | | Exp after GFC | | | Covid-19 Rec | | | Total Period | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | NET | AGG | AGG $\%$ | NET | AGG | AGG $\%$ | NET | AGG | AGG $\%$ | NET | AGG | AGG $\%$ |
| CD | -2.98 | 16.12 | 6.5 | 0.07 | 40.24 | 12.1 | 3.64 | 22.34 | 10.1 | -0.71 | 29.61 | 11.1 |
| CM | -2.21 | 25.37 | 10.2 | 3.97 | 42.58 | 12.8 | 1.61 | 22.46 | 10.1 | 1.55 | 30.49 | 11.5 |
| CS | 1.09 | 24.50 | 9.8 | -2.33 | 30.48 | 9.1 | 3.36 | 25.75 | 11.6 | -1.50 | 24.27 | 9.1 |
| E | -1.87 | 23.44 | 9.4 | -0.16 | 37.61 | 11.2 | 0.22 | 25.71 | 11.6 | -0.80 | 28.27 | 10.6 |
| F | 8.61 | 40.32 | 16.2 | -1.58 | 19.17 | 5.8 | -8.63 | 14.51 | 6.5 | 0.54 | 22.15 | 8.3 |
| HC | -1.81 | 20.04 | 8 | -0.17 | 35.02 | 10.5 | -0.53 | 14.94 | 6.7 | -0.56 | 26.45 | 9.9 |
| IN | 3.41 | 33.15 | 13.3 | -0.43 | 39.94 | 12.0 | -0.77 | 24.64 | 11.1 | 0.17 | 31.47 | 11.9 |
| IT | -1.46 | 25.93 | 10.5 | 2.84 | 44.45 | 13.3 | 3.30 | 24.71 | 11.5 | 2.23 | 33.71 | 12.7 |
| M | -0.79 | 15.93 | 6.2 | -0.99 | 20.01 | 5.9 | -3.06 | 19.40 | 8.7 | -0.98 | 15.84 | 6.0 |
| RE | -0.89 | 14.47 | 5.8 | -0.55 | 14.68 | 4.4 | -1.48 | 12.14 | 5.4 | 0.51 | 14.22 | 5.3 |
| U | -1.09 | 10.27 | 4.1 | -0.66 | 9.70 | 2.9 | 2.33 | 14.97 | 6.7 | -0.43 | 9.59 | 3.6 |

Notes: The table shows the average net and agg network characteristics with respect to the 11 U.S. industries. When the net measure is positive, the industry ($IVIX^{(I)}$) can be classified as a net marginal transmitter; when negative, as a net marginal receiver. The agg statistic is computed as the sum of the absolute values of the to and from measures. The highest values of the agg statistic are associated with uncertainty hubs, the lowest with uncertainty non-hubs. The statistics are reported separately for each business cycle phase in our sample (inversion, recession, expansion) and for the total period, from 03-01-2000 to 29-05-2020, at a daily frequency.
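To fix ideas, the net and agg statistics can be computed from a connectedness matrix along the following lines, under the usual convention (assumed here) that row $j$ of the GFEVD matrix holds the variance shares received by industry $j$.

```python
import numpy as np

def net_agg(theta: np.ndarray):
    """NET and AGG industry statistics from a connectedness matrix theta.

    to[j]    : off-diagonal column sum, uncertainty transmitted by j
    from_[j] : off-diagonal row sum, uncertainty received by j
    """
    off = theta - np.diag(np.diag(theta))  # drop own-variance shares
    to, from_ = off.sum(axis=0), off.sum(axis=1)
    net = to - from_                       # >0 net transmitter, <0 net receiver
    agg = np.abs(to) + np.abs(from_)       # large values flag uncertainty hubs
    return net, agg
```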
## Appendix F Robustness Checks

Table F1: ADS Predictive Results

Panel A: ADS

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | -0.020 | -0.032* | -0.036** | -0.055*** | -0.038** |
| | (0.016) | (0.019) | (0.019) | (0.017) | (0.019) |
| Adj. $R^{2}$ | 0.273 | 0.030 | 0.029 | 0.063 | 0.072 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel B: ADS Expansion

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | -0.002 | -0.003 | -0.005* | -0.007*** | -0.007*** |
| | (0.003) | (0.003) | (0.003) | (0.003) | (0.003) |
| Adj. $R^{2}$ | 0.009 | 0.055 | 0.065 | 0.082 | 0.056 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel C: ADS Recession

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | -0.022 | -0.025 | -0.031* | -0.048*** | -0.031* |
| | (0.016) | (0.019) | (0.019) | (0.019) | (0.019) |
| Adj. $R^{2}$ | 0.273 | 0.022 | 0.025 | 0.053 | 0.067 |
| Obs | 243 | 241 | 238 | 235 | 232 |

* • Notes: This table presents the results of the predictive regression in equation 9 between the aggregate network connectedness and the 3-month moving average of the ADS indicator of business cycle (Panel A). In Panel B and Panel C the results of the predictive regression with respect to the ADS expansion and recession indicators are reported, respectively. We also add a set of controls, $X$. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and controls results are not reported for the sake of space. Series are considered at a monthly frequency between 01-2000 and 05-2020.

Table F2: IP and U.S.CI Growth Predictive Results

Panel A: IP Growth Rate

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | -0.001 | -0.003** | -0.003** | -0.003*** | -0.003** |
| | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) |
| Adj. $R^{2}$ | 0.247 | 0.054 | 0.035 | 0.057 | 0.062 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel B: U.S.CI Growth Rate

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | -0.019 | -0.060** | -0.087*** | -0.150*** | -0.140*** |
| | (0.027) | (0.029) | (0.030) | (0.027) | (0.026) |
| Adj. $R^{2}$ | 0.009 | 0.055 | 0.065 | 0.082 | 0.056 |
| Obs | 0.205 | 0.078 | 0.058 | 0.213 | 0.280 |

* • Notes: This table presents the results of the predictive regression in equation 9 between the aggregate network connectedness and the industrial production growth rate in Panel A, and the U.S. coincident indicator in Panel B. We also add a set of controls, $X$. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and controls results are not reported for the sake of space. Series are considered at a monthly frequency between 01-2000 and 05-2020.

Table F3: Coincident Indicators Predictive Results (2)

Dependent: CFNAI-3M

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | -0.008** | -0.021*** | -0.023*** | -0.039*** | -0.028*** |
| | (0.004) | (0.007) | (0.007) | (0.007) | (0.007) |
| U.S.LI | 0.104*** | 0.064*** | 0.044*** | 0.025** | 0.003 |
| | (0.006) | (0.010) | (0.011) | (0.011) | (0.011) |
| Adj. $R^{2}$ | 0.678 | 0.280 | 0.123 | 0.191 | 0.164 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Dependent: ADS

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | -0.012 | -0.023 | -0.033** | -0.052*** | -0.037** |
| | (0.014) | (0.018) | (0.019) | (0.019) | (0.019) |
| U.S.LI | 0.172*** | 0.088*** | 0.059** | 0.041 | 0.023 |
| | (0.020) | (0.029) | (0.029) | (0.029) | (0.029) |
| Adj. $R^{2}$ | 0.440 | 0.064 | 0.042 | 0.068 | 0.071 |
| Obs | 243 | 241 | 238 | 235 | 232 |

* • Notes: This table presents the results of the predictive regressions between the aggregate network connectedness and the coincident indicators of business cycle, namely CFNAI and ADS. We present results for regression equation 9 in which we add a set of controls including also the leading indicator, U.S.LI. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and controls results are not reported for the sake of space, the only exception being the U.S.LI control. Series are considered at a monthly frequency between 01-2000 and 05-2020.

## Appendix G Hubs and non-hubs predictive results

Table G1: CFNAI-MA3 Hubs Network Predictive Results

Panel A: CFNAI-MA3

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.009** | -0.016*** | -0.019*** | -0.023*** | -0.018*** |
| | (0.004) | (0.004) | (0.004) | (0.004) | (0.004) |
| Adj. $R^{2}$ | 0.302 | 0.176 | 0.104 | 0.178 | 0.186 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel B: CFNAI-MA3 Expansion

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.004*** | -0.005*** | -0.006*** | -0.006*** | -0.006*** |
| | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) |
| Adj. $R^{2}$ | 0.099 | 0.152 | 0.238 | 0.290 | 0.289 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel C: CFNAI-MA3 Recession

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.005 | -0.012*** | -0.013*** | -0.017*** | -0.012*** |
| | (0.003) | (0.004) | (0.004) | (0.004) | (0.004) |
| Adj. $R^{2}$ | 0.291 | 0.153 | 0.087 | 0.148 | 0.153 |
| Obs | 243 | 241 | 238 | 235 | 232 |

* • Notes: This table presents the results of the predictive regression 10 (with $\beta_{\text{non-hub}}$ set equal to 0) considering only the predictive results of the uncertainty hubs sub-network with respect to the 3-month moving average of the Chicago FED National Activity Index (CFNAI-MA3), indicator of business cycle (Panel A). In Panel B and Panel C the results of the predictive regression with respect to the CFNAI-MA3 expansion and recession indicators are reported, respectively. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and controls results are not reported for the sake of space. Series are considered at a monthly frequency between 01-2000 and 05-2020.
Table G2: Stricter Hubs vs. non-Hubs Network Predictive Results

Panel A: CFNAI-MA3

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.010*** | -0.013*** | -0.016*** | -0.017*** | -0.016*** |
| | (0.003) | (0.004) | (0.004) | (0.004) | (0.004) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | -0.014** | -0.022*** | -0.015** | -0.013* | -0.0003 |
| | (0.006) | (0.006) | (0.007) | (0.007) | (0.007) |
| Adj. $R^{2}$ | 0.334 | 0.224 | 0.121 | 0.163 | 0.173 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel B: CFNAI-MA3 Expansion

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.003** | -0.004*** | -0.004*** | -0.005*** | -0.006*** |
| | (0.001) | (0.001) | (0.001) | (0.001) | (0.001) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | 0.0004 | -0.002 | -0.005*** | -0.002 | -0.0002 |
| | (0.002) | (0.002) | (0.002) | (0.002) | (0.002) |
| Adj. $R^{2}$ | 0.073 | 0.135 | 0.240 | 0.297 | 0.293 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel C: CFNAI-MA3 Recession

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.007** | -0.010*** | -0.012*** | -0.012*** | -0.010*** |
| | (0.003) | (0.004) | (0.004) | (0.004) | (0.004) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | -0.015*** | -0.020*** | -0.009 | -0.012* | -0.0002 |
| | (0.005) | (0.006) | (0.006) | (0.006) | (0.006) |
| Adj. $R^{2}$ | 0.326 | 0.200 | 0.098 | 0.135 | 0.141 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel D: CLI

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.013** | -0.024*** | -0.038*** | -0.048*** | -0.050*** |
| | (0.005) | (0.006) | (0.006) | (0.006) | (0.006) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | -0.010 | -0.017* | -0.019* | -0.022** | -0.011 |
| | (0.008) | (0.009) | (0.010) | (0.010) | (0.010) |
| Adj. $R^{2}$ | 0.451 | 0.341 | 0.285 | 0.338 | 0.334 |
| Obs | 243 | 241 | 238 | 235 | 232 |

Panel E: CFNAI-MA3 controlling for CLI

| | h=1 | h=3 | h=6 | h=9 | h=12 |
|---|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ $\mid$ $X_{t}$ | -0.006** | -0.010*** | -0.014*** | -0.016*** | -0.016*** |
| | (0.003) | (0.003) | (0.004) | (0.004) | (0.004) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ $\mid$ $X_{t}$ | -0.013*** | -0.022*** | -0.015** | -0.013* | -0.0003 |
| | (0.004) | (0.005) | (0.006) | (0.006) | (0.007) |
| CLI | 0.418*** | 0.336*** | 0.241*** | 0.112** | -0.017 |
| | (0.036) | (0.044) | (0.051) | (0.052) | (0.053) |
| Adj. $R^{2}$ | 0.579 | 0.378 | 0.197 | 0.177 | 0.170 |
| Obs | 243 | 241 | 238 | 235 | 232 |

* • Notes: This table presents the results of the predictive regression 10 comparing the predictive ability of the uncertainty hubs (CM, IN and IT) vs. non-hubs (M, RE and U) sub-networks, with respect to the 3-month moving average of the Chicago FED National Activity Index (CFNAI-MA3) in Panel A. In Panel B and Panel C the results of the predictive regression with respect to the CFNAI expansion and recession periods are reported, respectively. In Panel D the results with respect to the leading indicator CLI are reported. In Panel E the results of predicting the coincident business cycle indicator controlling for CLI are reported. The five columns of the table represent different predictability horizons with $h\in(1,3,6,9,12)$. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively.
Intercept and controls results are not reported for the sake of space, the only exception being the CLI control. Series are considered at a monthly frequency between 01-2000 and 05-2020.

## Appendix H U.S. GDP and its volatility

Table H1: U.S. GDP Predictive Results

Panel A: GDP Growth Rate

| | h=1 | h=2 | h=3 | h=4 |
|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | -0.056 | -0.158*** | -0.176*** | -0.156*** |
| | (0.046) | (0.041) | (0.041) | (0.041) |
| Adj. $R^{2}$ | 0.015 | 0.221 | 0.218 | 0.250 |
| Obs | 79 | 78 | 77 | 76 |

Panel B: GDP Volatility

| | h=1 | h=2 | h=3 | h=4 |
|---|---|---|---|---|
| $\mathcal{C}_{t}$ $\mid$ $X_{t}$ | 0.028* | 0.036** | 0.047*** | 0.039*** |
| | (0.014) | (0.014) | (0.014) | (0.013) |
| Adj. $R^{2}$ | 0.138 | 0.106 | 0.165 | 0.219 |
| Obs | 79 | 78 | 77 | 76 |

* • Notes: This table presents the results of the predictive regression 9 between the aggregate network connectedness and the U.S. GDP growth rate (Panel A) and GDP volatility (Panel B). The four columns of the table represent different predictability horizons with $h\in(1,2,3,4)$ quarters. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and controls results are not reported for the sake of space. Series are all taken at quarterly frequency, between 01-2000 and 05-2020.

Table H2: U.S. GDP Prediction with Hubs and non-Hubs

Panel A: GDP Growth Rate

| | h=1 | h=2 | h=3 | h=4 |
|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ | -0.055** | -0.083*** | -0.104*** | -0.081*** |
| | (0.027) | (0.024) | (0.024) | (0.024) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ | 0.017 | -0.043 | -0.017 | -0.039 |
| | (0.044) | (0.039) | (0.039) | (0.039) |
| Adj. $R^{2}$ | 0.038 | 0.223 | 0.238 | 0.246 |
| Obs | 79 | 78 | 77 | 76 |

Panel B: GDP Volatility

| | h=1 | h=2 | h=3 | h=4 |
|---|---|---|---|---|
| $\mathcal{C}_{t}^{\text{hub}}$ | 0.030*** | 0.029*** | 0.024*** | 0.015* |
| | (0.008) | (0.008) | (0.008) | (0.008) |
| $\mathcal{C}_{t}^{\text{non-hub}}$ | -0.016 | -0.004 | 0.014 | 0.024* |
| | (0.013) | (0.013) | (0.013) | (0.013) |
| Adj. $R^{2}$ | 0.236 | 0.166 | 0.170 | 0.221 |
| Obs | 79 | 78 | 77 | 76 |

* • Notes: This table presents the results of the predictive regression 9 between the hubs and non-hubs networks and the U.S. GDP growth rate (Panel A) and GDP volatility (Panel B). The four columns of the table represent different predictability horizons with $h\in(1,2,3,4)$ quarters. Regression coefficients and standard errors (in parentheses), and adjusted-$R^{2}$ are reported. Coefficients are marked with *, **, *** for 10%, 5%, 1% significance levels, respectively. Intercept and controls results are not reported for the sake of space. Series are all taken at quarterly frequency, between 01-2000 and 05-2020.